post-break work

Barak eBay-ed two additional Mindflex headsets, and we tore down three of our four. Our plan was to build a new headset around a pair of noise-cancelling headphones for proper binaural beat entrainment. This meant we had to replace all of the leads on the Neurosky chip with longer silicone wires. It also meant we needed professional help, because the chips were tiny and our soldering skills were questionable.

Luckily, my father is an electrical engineer, so I flew our Neurosky chips home with me over Thanksgiving break and put him to work:

When I returned to NY, our chips were ready to go (with shiny new copper tape electrodes):

Barak came back with cheap headphones, whose speakers we would install in the shooting-range earmuffs we bought off Amazon.

For ICM playtesting that week, I created p5 sketches that, gradually over four minutes, pulsed binaural beats and a flashing background according to two states: relaxed mode at 4Hz, and focused mode at 40Hz. I had users wear the headphones and an EEG device, close their eyes, and face the flashing screen:
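The heart of those sketches looks roughly like this: two sine tones offset by the target frequency, each panned hard to one ear, with the background flashing at the same rate. This is a sketch of the idea rather than our exact code; the 200Hz carrier is an arbitrary, illustrative choice, and the gradual four-minute ramp is omitted.

```javascript
// Sketch of the entrainment idea: two sine tones offset by the target
// frequency (4Hz relaxed / 40Hz focused), each panned hard to one ear,
// with the background flashing at the same rate.
const carrier = 200; // illustrative carrier tone in Hz
const target = 4;    // 4 for relaxed mode, 40 for focused mode
let leftOsc, rightOsc;

function setup() {
  createCanvas(windowWidth, windowHeight);
  leftOsc = new p5.Oscillator(carrier, 'sine');
  rightOsc = new p5.Oscillator(carrier + target, 'sine'); // difference = beat
  leftOsc.pan(-1);  // left ear only
  rightOsc.pan(1);  // right ear only
  leftOsc.start();
  rightOsc.start();
}

function draw() {
  // flash between black and blue at the target frequency
  let pulse = (sin(TWO_PI * target * millis() / 1000) + 1) / 2;
  background(0, 0, 255 * pulse);
}

function mousePressed() {
  userStartAudio(); // browsers require a user gesture before audio can start
}
```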

The video below shows a relaxed mode session, as well as the corresponding brain activity of the user. As we predicted, the 4Hz entrainment from the p5 sketch seems to encourage ~4Hz brain waves (i.e., the delta/theta frequency bands):

So that was pretty exciting!

We had also decided to incorporate a heart rate sensor, so that the tempo of the eventual entrainment music could sync to it. Here's Barak's clever serial-monitor visualization of his heart rate:


The fact that the user would enter a dome made it necessary for the headset to be wireless. We purchased a Node MCU for the task, which meant the data would be sent over wifi instead of a serial port. To securely fasten the Node MCU to the Neurosky chip, we 3D-printed a little mount:

After making good progress on the headset, it was time to start on the geodesic dome. We purchased second-hand Hubs from a guy in Canada, as well as pretty much every single 5/8″ dowel from Home Depot. It was a big job, so we enlisted some help from my ever-helpful boyfriend:

And then it was off to Spandex World to get some milliskin for projecting on:

For the audio/visual entrainment piece, Barak used a combination of Ableton, for audio synthesis, and Lumen, a visual synthesizer. Ableton receives heart rate information from our sensor over a MIDI port, which influences the tempo of the audio, and Lumen receives MIDI information from Ableton, which influences the frequency of the visuals. Here's Barak playing with the different templates:
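We handled that plumbing with off-the-shelf MIDI routing rather than custom code, but for a sense of the heart-rate leg of the chain, here's a hypothetical snippet using the browser's Web MIDI API to forward a BPM reading as a control-change message. The CC number and BPM scaling are made up for illustration:

```javascript
// Hypothetical illustration of the heart-rate-to-Ableton leg of the chain:
// forward a BPM reading to a MIDI output as a control-change message that
// a DAW could map to tempo. The CC number (20) and BPM range are arbitrary.
let midiOut;

navigator.requestMIDIAccess().then((access) => {
  midiOut = access.outputs.values().next().value; // first available port
});

function sendHeartRate(bpm) {
  if (!midiOut) return;
  // scale a 40-200 BPM reading into the 0-127 MIDI value range
  const value = Math.max(0, Math.min(127, Math.round(((bpm - 40) / 160) * 127)));
  midiOut.send([0xb0, 20, value]); // CC #20 on MIDI channel 1
}
```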


The actual colors would be red and blue for focused and relaxed mode, respectively. According to this paper, these colors encourage the frequency bands (beta/gamma and delta/theta, respectively) that we were targeting.

Yes, we realized covering the dome from the outside looked a little shabby, so we decided, at Ben Light's suggestion, to use grommets:

More to come on dome development!

The last component was the olfactory entrainment. According to this paper, rosemary and lavender would encourage beta/gamma and delta/theta bands, respectively. We purchased $10 diffusers off Amazon to hack, as well as rosemary and lavender essential oils. Wiring up this circuit was a bit of a mindfuck for mysterious reasons we can't even explain, but we ultimately used transistors so that only the appropriate diffuser would turn on after the user chose their desired state:

Also notable is that we used our first rotary switch:

But after all that work (and acrylic), we ultimately decided to go with only the relaxed mode, since it made more sense in the context of the Winter Show. I mean, who would choose to become more alert in such an intensely stimulating environment?

Lastly, here’s a screenshot of the EEG visualization I’m working on for the show:

The canvas expands as the data streams in, and it will be saved as an image at the end of each session. There's also an option to start a new session, which empties all the arrays and signals the Processing sketch to start over.
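Roughly, the structure looks like this, simplified to a single data stream (names and scaling are illustrative):

```javascript
// Simplified structure of the visualization: the canvas widens as data
// streams in, and starting a new session saves the image, empties the
// arrays, and resets. One data stream shown for brevity.
let readings = [];

function setup() {
  createCanvas(400, 200);
  createButton('new session').mousePressed(newSession);
}

function gotReading(value) { // called for each incoming EEG value
  readings.push(value);
  if (readings.length > width) {
    resizeCanvas(width + 100, height); // expand as the data streams in
  }
}

function draw() {
  background(255);
  noFill();
  stroke(0);
  beginShape();
  for (let i = 0; i < readings.length; i++) {
    vertex(i, map(readings[i], 0, 100, height, 0));
  }
  endShape();
}

function newSession() {
  saveCanvas('session', 'png'); // save the finished session as an image
  readings = [];                // empty the arrays
  resizeCanvas(400, 200);
  // ...and signal the companion sketch to start over (e.g., over serial)
}
```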

More soon!

So it begins

Where did I get the chutzpah to even consider making my own EEG device? While rummaging around on the internet, I found a 2010 blog post by an ITP alum who hacked the Mindflex, a game by Mattel, because the data-parsing chip was by Neurosky, a brand I recognized while window-shopping for consumer-grade EEG devices. I had ordered a couple of these Mindflex headsets off eBay and was hoarding them in my locker until I felt comfortable enough with my pcomp-ing abilities to break them open.

Luckily, I'd been able to rope the brilliant Barak Chamo onto this project, so it basically felt like I could do anything. At our first official team meeting, we broke open the packages:


And then it was time to work on disassembly. We wanted to go further with the deconstruction than our ITP predecessor had, and found guidance in this teardown.

We couldn’t believe how comically simple this headset was, particularly this sad excuse for an electrode:

But this realization was as empowering as it was hilarious. Obviously, we could do better than a piece of conductive fabric.

Emboldened, we took the teardown a step further by completely desoldering the Neurosky chip from the Mindflex’s microcontroller.


And by some miracle, it still worked! We were able to get serial data from the naked chip. So I got to work on the p5 visualization:
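For a sense of what that involves, here's a minimal version of the idea, assuming an Arduino between the chip and the laptop running the Brain library, which prints each packet as a CSV line (signal quality, attention, meditation, then eight EEG power bands). The port name and bar scaling are illustrative, and it uses ITP's p5.serialport library:

```javascript
// Minimal version of the visualization, assuming an Arduino relays each
// packet as a CSV line in the Brain library's format: signal quality,
// attention, meditation, then eight EEG power bands.
let serial;
let bands = [];

function setup() {
  createCanvas(600, 300);
  serial = new p5.SerialPort();      // from the p5.serialport library
  serial.open('/dev/tty.usbserial'); // hypothetical port name
  serial.on('data', gotData);
}

function gotData() {
  let line = serial.readLine().trim();
  if (!line) return;
  let values = line.split(',');
  if (values.length >= 11) {
    bands = values.slice(3).map(Number); // keep the eight power bands
  }
}

function draw() {
  background(255);
  fill(100);
  // one bar per frequency band, delta through high gamma
  for (let i = 0; i < bands.length; i++) {
    let h = map(bands[i], 0, 1000000, 0, height, true); // arbitrary scale
    rect(i * (width / 8) + 2, height - h, width / 8 - 4, h);
  }
}
```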


ICM/Pcomp final project

For my final project, I would like to make a brain entrainment pod wherein the user can choose between two states: relaxed or focused. Each state will trigger light and sound settings that emit at frequencies associated with either the Default Mode Network or the Task Positive Network, respectively. The concept is basically an updated and elaborate dream machine: the user's exposure to pulsating light and sound will reproduce those frequencies in their brain. I will attempt to lead the brain into either the DMN or TPN by replicating the dominant frequencies present during either open monitoring meditation or focused attention meditation, respectively.

As a result, the user (theoretically) will not have to actually meditate in the traditional sense, but will instead receive "treatment" that hopefully yields the same results as meditation by an advanced practitioner. I would like to corroborate this theory by including an EEG device that measures the user's brain activity while undergoing "treatment". Because there is such disappointment over the reliability and price point of open-source/consumer-grade EEGs, we will attempt to design our own device tailored to our purpose (while also preparing to purchase one if that proves to be an impossible task).

I believe light and sound will be the most effective sensory inputs for entrainment, as you can define their frequencies, but I also hope that we can hide these pulsations underneath visualizations and music that are actually aesthetically pleasing, so as not to alarm or disturb the user. Once we receive the user's mental state via serial communication, we will generate visuals/audio based on their data. However, the stimuli won't be a reflection of their state; they will be a response to it and to their decision to be either "relaxed" or "focused".

The visuals, generated in p5, will be projected on the walls of our dome (likely a purchased geodesic dome), and the audio will be a combination of binaural beats and actual music (TBD), also generated in/played through p5.

I first became interested in brain entrainment when I discovered the Dream Machine, a kinetic light sculpture by artist Brion Gysin and engineer Ian Sommerville, circa the early 1960s. The Dream Machine was originally a cut-paper cylinder placed on a record player and illuminated from the inside; the frequency of its pulsing light produced alpha activity in the brain, which is associated with relaxation.

So it should follow that we can use this method to produce any sort of activity in the brain. I’m especially interested in deactivating the DMN as a long-term therapeutic tool for depression, but for this project the user will decide what they want. (Generally, the DMN is associated with increased lower-gamma levels in the prefrontal cortex, but we will be grabbing the settings of advanced meditation practitioners for this project).

Another example is the vibroacoustic recliner (used therapeutically by Dr. George Patrick):


I became interested in EEG devices after seeing ITP alum Lisa Park’s thesis project:


Mood board for visuals:


Our project will be for users who are anxious or stressed out and need a reprieve. The Winter Show is pretty chaotic, and our entrainment pod will totally immerse users in a different environment. Barak has dreams for it to stay at ITP permanently, so students have a nearby retreat from the floor/their crippling self-doubt.

ICM class notes

function deviceMoved();

function deviceTurned();

function deviceShaken();

rotationX and rotationY

touches is an array

navigator.geolocation.getCurrentPosition(); pulls GPS coordinates

windowResized(); and resizeCanvas();

*requires https://

face recognition: https://github.com/auduno/clmtrackr
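A quick sketch pulling a few of these together:

```javascript
// Quick demo of the mobile APIs above: tilt readout, touch points,
// a red flash on shake, and a canvas that tracks the window size.
let flash = 0;

function setup() {
  createCanvas(windowWidth, windowHeight);
}

function draw() {
  background(flash > 0 ? color(255, 0, 0) : 220);
  if (flash > 0) flash--;
  text('tilt: ' + nf(rotationX, 1, 1) + ', ' + nf(rotationY, 1, 1), 10, 20);
  for (let t of touches) { // touches is an array of active touch points
    ellipse(t.x, t.y, 40, 40);
  }
}

function deviceShaken() {
  flash = 30; // flash red for about half a second
}

function windowResized() {
  resizeCanvas(windowWidth, windowHeight);
}
```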

Binaural beat machine

I'm interested in brainwave entrainment, so for my ICM homework this week, I created a sketch that plays two sine wave tones of the user's choice. If the frequencies come within 40Hz of each other, they combine to form the illusion of a third tone, aka a binaural beat. This third tone's frequency is the difference between the two.

The p5.sound library has a great class called FFT, which enables you to visualize the frequency content of any sound playing in your sketch. I wanted to visualize how the sound waves would interact as the user changes their frequencies; especially interesting are the shapes that emerge with binaural beats. To make it clear when you've reached that threshold, the visualization turns from blue to red:

To properly induce binaural beats, each tone must be heard by one ear exclusively, but simultaneously. Luckily, there’s the .pan() function for that. The first tone will play in the left earbud; the second, in the right.
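Stripped of the real UI, the core of the sketch looks something like this (sliders stand in for the actual frequency controls):

```javascript
// Core of the binaural beat machine: two hard-panned oscillators, an FFT
// for the waveform visualization, and a blue-to-red color change when the
// frequencies come within 40Hz of each other.
let leftOsc, rightOsc, fft;
let leftSlider, rightSlider;

function setup() {
  createCanvas(600, 300);
  leftOsc = new p5.Oscillator(440, 'sine');
  rightOsc = new p5.Oscillator(444, 'sine');
  leftOsc.pan(-1);  // left earbud only
  rightOsc.pan(1);  // right earbud only
  leftOsc.start();
  rightOsc.start();
  fft = new p5.FFT();
  leftSlider = createSlider(100, 1000, 440);
  rightSlider = createSlider(100, 1000, 444);
}

function draw() {
  background(0);
  leftOsc.freq(leftSlider.value());
  rightOsc.freq(rightSlider.value());

  // red once the difference reaches binaural-beat range, blue otherwise
  let diff = abs(leftSlider.value() - rightSlider.value());
  stroke(diff <= 40 ? color(255, 0, 0) : color(0, 0, 255));
  noFill();

  // trace the combined waveform
  let wave = fft.waveform();
  beginShape();
  for (let i = 0; i < wave.length; i++) {
    vertex(map(i, 0, wave.length, 0, width), map(wave[i], -1, 1, height, 0));
  }
  endShape();
}

function mousePressed() {
  userStartAudio(); // audio needs a user gesture to start
}
```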

Try it here:

ICM w6: class notes

AJAX: Asynchronous JavaScript and XML

Anything after a ? in a URL is data you’re sending to a server

Data sources:

  • https://itp.nyu.edu/ranch/api/projects-finder/arduino
  • https://github.com/ITPNYU/ICM-2017/wiki/Data-Sources
  • https://itp.nyu.edu/registration/alum/linkedInGetRawData.php

jsoup.org

e-reader for the hard-of-seeing

For my sanity’s sake, this week I combined the pcomp and ICM assignments: for the former, we were to have three inputs send ASCII data to a p5 sketch; for the latter, we were to manipulate DOM elements.

I decided to make an e-reader of sorts for… people like my parents. People who still squint while wearing 2x magnification glasses and follow the words they're reading with their fingers so as not to lose their place. So the e-reader should display one sentence of a text at a time, with a way to easily increase or decrease the font size.

I grabbed some random text from Project Gutenberg, but couldn't figure out how to store a text file in a variable in p5 or JavaScript, so I just created a <p> tag in the <body> of my index.html file and dumped everything in there. When p5 selects an HTML element, it selects the tag itself, so I couldn't store the text in an array and split it by punctuation like I originally intended. Sooo I cheated by wrapping each sentence in a <p> tag and then containing the entire thing in a <div>. Then I selected all the <p> tags and displayed/hid them by index.
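The show/hide logic boils down to something like this (arrow keys stand in here for the Arduino inputs described below):

```javascript
// Rough shape of the e-reader: grab every sentence <p>, show one at a
// time, and set the font size. Arrow keys stand in for the Arduino
// buttons and potentiometer.
let sentences;
let index = 0;
let fontSize = 24;

function setup() {
  noCanvas();
  sentences = selectAll('p'); // one <p> per sentence, per the HTML hack
  showCurrent();
}

function showCurrent() {
  for (let i = 0; i < sentences.length; i++) {
    if (i === index) {
      sentences[i].show();
      sentences[i].style('font-size', fontSize + 'px');
    } else {
      sentences[i].hide();
    }
  }
}

function keyPressed() {
  if (keyCode === RIGHT_ARROW) index = min(index + 1, sentences.length - 1);
  if (keyCode === LEFT_ARROW) index = max(index - 1, 0);
  if (keyCode === UP_ARROW) fontSize += 4;
  if (keyCode === DOWN_ARROW) fontSize = max(8, fontSize - 4);
  showCurrent();
}
```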

Everything else went relatively smoothly. Arduino buttons would control which paragraph tag is displayed, and a potentiometer would control the font-size style. Here's the Arduino code:

Circuit:

And the p5 code: http://alpha.editor.p5js.org/xujenna/sketches/SJd9j9fTZ

Result: