post-break work

Barak eBay-ed two additional Mindflex headsets, and we tore down three of our four. Our plan was to build a new headset around a pair of noise-cancelling headphones for proper binaural beat entrainment. This meant we had to replace all of the leads on the Neurosky chip with longer silicone wires. It also meant we needed professional help, because the chips were tiny and our soldering skills questionable.

Luckily, my father is an electrical engineer, so I flew our Neurosky chips home with me over Thanksgiving break and put him to work:

When I returned to NY, our chips were ready to go (with shiny new copper tape electrodes):

Barak came back with cheap headphones, whose speakers we would install in the shooting-range earmuffs we bought off Amazon.

For ICM playtesting that week, I created p5 sketches that, gradually over four minutes, pulsed binaural beats and a flashing background according to two states: relaxed mode at 4 Hz, and focused mode at 40 Hz. I had users wear the headphones and an EEG device, close their eyes, and face the flashing screen:

The video below shows a relaxed mode session, as well as the corresponding brain activity of the user. As we predicted, the 4 Hz entrainment from the p5 sketch seems to encourage ~4 Hz brain waves (i.e. the delta/theta frequency bands):

So that was pretty exciting!
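For the curious, the core of those sketches boils down to something like this (a minimal sketch: the 200 Hz carrier tone is an arbitrary placeholder, and the gradual four-minute ramp is omitted):

```javascript
// Binaural beat + flicker at the target frequency, using p5.sound.
// The entrainment frequency is the *difference* between the two ears.
let leftOsc, rightOsc;
const beatFreq = 4; // relaxed mode; focused mode used 40

function setup() {
  createCanvas(windowWidth, windowHeight);
  leftOsc = new p5.Oscillator(200, 'sine');
  rightOsc = new p5.Oscillator(200 + beatFreq, 'sine');
  leftOsc.pan(-1);  // left ear
  rightOsc.pan(1);  // right ear
}

function mousePressed() {
  userStartAudio(); // browsers block audio until a user gesture
  leftOsc.start();
  rightOsc.start();
}

function draw() {
  // Flash the background at the same frequency as the beat.
  const on = sin(TWO_PI * beatFreq * millis() / 1000) > 0;
  background(on ? 255 : 0);
}
```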

We had also decided to incorporate a heart rate sensor, so that the tempo of the eventual entrainment music could sync to it. Here's Barak's clever serial-monitor visualization of his heart rate:

 

The fact that the user would enter a dome made it necessary for the headset to be wireless. We purchased a NodeMCU for the task, which meant the data would be sent over wifi instead of a serial port. In order to securely fasten the NodeMCU to the Neurosky chip, we 3D-printed a little mount:
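On the receiving end, going wireless means the p5 sketch listens over the network instead of a serial port. A rough sketch of that idea, assuming the NodeMCU pushes each reading over a WebSocket (the IP, port, and message format below are placeholders, not our actual protocol):

```javascript
// Hypothetical wifi receiver; address and message format are placeholders.
let latestReading = 0;

function setup() {
  createCanvas(400, 200);
  const socket = new WebSocket('ws://192.168.1.50:81');
  socket.onmessage = (event) => {
    latestReading = Number(event.data); // one EEG value per message
  };
}

function draw() {
  background(0);
  fill(255);
  text(`EEG: ${latestReading}`, 20, 100);
}
```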

After making good progress on the headset, it was time to start on the geodesic dome. We purchased second-hand Hubs from a guy in Canada, as well as pretty much every single 5/8″ dowel from Home Depot. It was a big job, so we enlisted some help from my ever-helpful boyfriend:

And then it was off to Spandex World to get some milliskin for projecting on:

For the audio/visual entrainment piece, Barak used a combination of Ableton (for audio synthesis) and Lumen (a visual synthesizer). Ableton receives heart rate information from our sensor over a MIDI port, which influences the tempo of the audio; Lumen receives MIDI information from Ableton, which influences the frequency of the visuals. Here's Barak playing with the different templates:

  

The actual colors would be red and blue for focused and relaxed mode, respectively. According to this paper, these colors encourage the frequency bands (beta/gamma and delta/theta, respectively) that we were targeting.

Yes, we realized covering the dome from the outside looked a little shabby, so we decided, at Ben Light's suggestion, to use grommets:

More to come on dome development!

The last component was the olfactory entrainment. According to this paper, rosemary and lavender would encourage beta/gamma and delta/theta bands, respectively. We purchased $10 diffusers off Amazon to hack, as well as rosemary and lavender essential oils. Wiring up this circuit was a bit of a mindfuck for mysterious reasons we can’t even explain, but ultimately we ended up using transistors so that only the appropriate diffuser would turn on after the user chose their desired state:

Also notable is that we used our first rotary switch:

But after all that work (and acrylic), we ultimately decided to go with only the relaxed mode; it made more sense in the context of the Winter Show. I mean, who would choose to become more alert in such an intensely stimulating environment?

Lastly, here’s a screenshot of the EEG visualization I’m working on for the show:

The canvas expands as the data streams in, and will be saved as an image at the end of each session. There's also an option to start a new session, which empties all arrays and signals the processing sketch to start over.
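In outline, the session logic looks something like this (function and variable names are stand-ins for the real sketch's):

```javascript
// Canvas grows with the incoming data, gets saved at the end of a
// session, and a new session resets everything.
let readings = [];

function setup() {
  createCanvas(10, 400);
}

function gotData(value) {             // called for each incoming reading
  readings.push(value);
  resizeCanvas(readings.length, 400); // canvas expands as data streams in
}

function endSession() {
  saveCanvas('session', 'png');       // save the finished visualization
}

function newSession() {
  readings = [];                      // empty all arrays
  clear();
  // ...and signal the processing sketch to start over
}
```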

More soon!

So it begins

Where did I get the chutzpah to even consider making my own EEG device? While rummaging around on the internet, I found a 2010 blog post by an ITP alum who hacked the Mindflex (a game by Mattel) because its data-parsing chip was by Neurosky, a brand I recognized from window-shopping for consumer-grade EEG devices. I had ordered a couple of these Mindflex headsets off eBay and was hoarding them in my locker until I felt comfortable enough with my pcomp-ing abilities to break them open.

Luckily, I'd been able to rope the brilliant Barak Chamo into this project, so it basically felt like I could do anything. At our first official team meeting, we broke open the packages:

 

 

 

 

And then it was time to work on disassembly. We wanted to go further with the deconstruction than our ITP predecessor, and found guidance in this teardown.

We couldn’t believe how comically simple this headset was, particularly this sad excuse for an electrode:

But this realization was as empowering as it was hilarious. Obviously, we could do better than a piece of conductive fabric.

Emboldened, we took the teardown a step further by completely desoldering the Neurosky chip from the Mindflex’s microcontroller.

 

 

 

 

And by some miracle, it still worked! We were able to get serial data from the naked chip. So I got to work on the p5 visualization:

 

pcomp help session notes

Playing sound with Arduino without an mp3 shield:

  1. Another device can access your localhost as long as it's on the same wifi network
  2. IP address, then path to the file
  3. can play music this way (through p5) without a shield!! (see the sketch below)
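A quick sketch of the trick (the IP address, port, and filename are placeholders):

```javascript
// Load an mp3 hosted on another machine on the same wifi network:
// IP address, then path to the file.
let song;

function preload() {
  song = loadSound('http://192.168.1.23:8080/sounds/track.mp3');
}

function setup() {
  createCanvas(100, 100);
}

function mousePressed() {
  song.play(); // no mp3 shield required
}
```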

ICM/Pcomp final project

For my final project, I would like to make a brain entrainment pod wherein the user can choose between two states: relaxed or focused. Each state will trigger light and sound settings that emit at frequencies associated with the Default Mode Network or the Task Positive Network, respectively. The concept is basically an updated and elaborate dream machine: the user's exposure to pulsating light and sound will reproduce those frequencies in their brain. I will attempt to lead the brain into either the DMN or TPN by replicating the dominant frequencies present during open monitoring meditation or focused attention meditation, respectively.

As a result, the user (theoretically) will not have to actually meditate in the traditional sense, but will instead receive “treatment” that hopefully yields the same results as meditation by an advanced practitioner. I would like to corroborate this theory by including an EEG device that measures the user's brain activity during “treatment”. Because there is such disappointment over the reliability and price point of open-source/consumer-grade EEGs, we will attempt to design our own device tailored to our purpose (while also preparing to purchase one if that proves to be an impossible task).

I believe light and sound will be the most effective sensory inputs for entrainment, since you can define their frequencies, but I also hope that we can hide these pulsations underneath visualizations and music that are actually aesthetically pleasing, so as not to alarm or disturb the user. Once we receive the user's mental state via serial communication, we will generate visuals/audio based on their data. However, the stimuli won't be a reflection of their state; they'll be a response to it and to their decision to be either “relaxed” or “focused”.

The visuals, generated in p5, will be projected on the walls of our dome (likely a purchased geodesic dome), and the audio will be a combination of binaural beats and actual music (TBD), also generated in and played through p5.

I first became interested in brain entrainment when I discovered the Dream Machine, a kinetic light sculpture by artist Brion Gysin and engineer Ian Sommerville, circa the early 1960s. The Dream Machine was originally a cut-paper cylinder placed on a record player and illuminated from the inside; the frequency of its pulsing light produced alpha activity in the brain, which is associated with relaxation.

So it should follow that we can use this method to produce any sort of activity in the brain. I'm especially interested in deactivating the DMN as a long-term therapeutic tool for depression, but for this project the user will decide what they want. (Generally, the DMN is associated with increased lower-gamma levels in the prefrontal cortex, but we will be borrowing the settings of advanced meditation practitioners for this project.)

Another example is the vibroacoustic recliner (used therapeutically by Dr. George Patrick):


I became interested in EEG devices after seeing ITP alum Lisa Park’s thesis project:

 

Mood board for visuals:

 

Our project will be for users who are anxious or stressed out and need reprieve. The Winter Show is pretty chaotic, and our entrainment pod will totally immerse users in a completely different environment. Barak has dreams for it to stay at ITP permanently, so students have a nearby retreat from the floor/their crippling self-doubt.

pcomp midterm

For the pcomp midterm, I agreed to help my friend Ilana make a jukebox for her boyfriend's birthday. The jukebox would feature her boyfriend's original music, each song activated by a specific, sentimentally associated photograph. Each photograph would close its song's switch via copper tape applied to its back. This project seemed like a challenging learning experience, so I was happy to work with her.

Because of time limitations, we ran over to Tinkersphere to buy an mp3 shield. This turned out to be an immediately regrettable decision, because there was very little documentation, the provided link to the datasheet was broken, and the library—incredibly—didn’t work.


*screams*

Luckily, Aaron (our miracle-worker of a resident) was able to hack the Adafruit mp3 shield library (sorry, Adafruit) to work with our questionable Tinkersphere purchase. Unfortunately, that only opened the floodgates of pain and suffering, as there was still a lot of crazy mp3 shield logic to deal with (delays, booleans for each song, the concept of interrupts and how they apply to serial communication…). Eventually, many office hours (thanks Yuli and Chino) and even more if statements got the job done. However, we weren't able to figure out how to get combinations of switch states to allow for more songs (see the note below).


preview of the madness
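For the record, the combination trick we never cracked boils down to treating the switches as bits of a single number (shown in JavaScript for brevity; the real logic would live in the Arduino sketch):

```javascript
// N switches can address 2^N - 1 songs instead of N if each
// switch is treated as one bit of an index.
function songIndex(switchStates) {
  // switchStates: array of 0/1, one per photograph switch
  return switchStates.reduce((index, state, i) => index | (state << i), 0);
}

songIndex([1, 0, 0]); // 1
songIndex([0, 1, 0]); // 2
songIndex([1, 1, 0]); // 3 -- a combination unlocks a third song from two switches
```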

So we hooked this up to push buttons to test the code, then threw together a rough prototype to test the technical concept:

 

With the circuit working, it was time to work on the enclosure. We bought a utensil tray from The Container Store (shout out to this video), and laser cut an interface, first with cardboard:

Then with acrylic:

To create a product independent from the computer, we hooked the circuit up to a battery, which was hooked up to an on/off switch. We had some trouble with the switch (we bought a strange 3-state one from Tinkersphere) and ended up borrowing one of Barak's after completely ruining ours (i.e. Ilana burned herself and melted the plastic in a short circuit!).

Transferring the circuit to the box was a struggle, and for the next iteration we have to switch to multi-stranded wire; as it is, with the solid-core wiring, the box doesn't close. But at least the circuit works:

Now for some sleep…

e-reader for the hard-of-seeing

For my sanity’s sake, this week I combined the pcomp and ICM assignments: for the former, we were to have three inputs send ASCII data to a p5 sketch; for the latter, we were to manipulate DOM elements.

I decided to make an e-reader of sorts for… people like my parents. People who still squint while wearing 2x magnification glasses and follow the words they’re reading with their fingers so as to not lose their place. So the e-reader should display one sentence of a text at a time, with a way to increase or decrease the font size easily.

I grabbed some random text from Project Gutenberg, but couldn't figure out how to store a text file in a variable in p5 or JavaScript, so I just created a <p> tag in the <body> of my index.html file and dumped everything in there. When p5 selects an HTML element, it selects the tag itself, so I couldn't store the text in an array and split it by punctuation like I originally intended. Sooo I cheated by wrapping each sentence in a <p> tag and then containing the entire thing in a <div>. Then I selected all the <p> tags and displayed/hid them by their indexes.
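Roughly, the workaround looks like this (simplified; names are stand-ins):

```javascript
// One <p> per sentence inside a <div>; show only the current one.
let sentences;
let current = 0;

function setup() {
  noCanvas();
  sentences = selectAll('p');        // grab every sentence's tag
  sentences.forEach((p) => p.hide());
  sentences[current].show();
}

function showSentence(index) {
  sentences[current].hide();
  current = constrain(index, 0, sentences.length - 1);
  sentences[current].show();         // display by index
}
```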

Everything else went relatively smoothly. Arduino buttons would control the display of the paragraph tags, and a potentiometer would control the font-size style. Here's the Arduino code:

Circuit:

And the p5 code: http://alpha.editor.p5js.org/xujenna/sketches/SJd9j9fTZ

Result:

 

 

pcomp help session: serial communication

Arduino

  • Serial.write(65): sends one byte (encodes in binary)
    • sends 65
  • Serial.print(65): sends two bytes (sends as a string)
    • sends 54 53
  • serial monitor interprets everything as ASCII

p5

  • callback functions are registered in setup()
  • Serial.read() pairs with Serial.write()
  • println() + readLine()
    • readLine() waits for the newline that println() sends before it returns the string

Steps:

  1. Serial print values from Arduino
  2. Run p5.serialcontrol app
  3. Include serial.js library in index.html of sketch
  4. Create a serial object in p5 and define callback functions
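Steps 3 and 4, sketched out in code (the port name is machine-specific):

```javascript
// Assumes the p5.serialport library is included and the
// p5.serialcontrol app is running.
let serial;

function setup() {
  createCanvas(400, 400);
  serial = new p5.SerialPort();        // step 4: create the serial object
  serial.on('data', gotData);          // define the callback
  serial.open('/dev/cu.usbmodem1411'); // your port name will differ
}

function gotData() {
  const inString = serial.readLine();  // pairs with Serial.println() on Arduino
  if (inString.length > 0) {
    const value = Number(inString);
    // ...use value
  }
}
```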

 


Arduino code

 


p5 code

Pcomp video notes

Asynchronous serial communication

  • Devices keep time independently, but transmit/receive at the same rate
  • Devices need to agree on:
    1. rate at which data is sent and read
    2. voltage levels (representing 1 or 0 bit)
    3. voltage logic (regular or inverted)
  • Devices connected by three connections:
    1. common ground (so devices have a common reference point to measure voltage by)
    2. one wire as transmit line, for the sender
    3. one wire as receive line, for the reader
  • Arduino transmits spikes in voltage (8 bits = 1 byte per character) that correspond to binary; binary translates to ASCII
    • plus two bytes for the carriage return and newline characters
  • DIY Protocol: Serial.available(): tells us how many bytes have been received by the Arduino (and stored in the buffer), but not processed
  • analogRead() is 10 bits (0-1023), so it needs to be divided by 4 inside Serial.write() because Arduino transmits one byte (8 bits, or 0-255) at a time (see the snippet after these notes)
    • Serial.write() sends data as raw binary; the serial monitor in Arduino interprets it as ASCII values (the interpretation is built in, which makes sending strings easier)
    • println() takes a string and converts it to ASCII
    • Dec value is base 10, Hex is base 16
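On the p5 side, that compressed byte can be read back and (lossily) rescaled; a minimal sketch of the receiving callback, assuming the p5.serialport setup shown earlier:

```javascript
// Reads the single raw byte sent by Serial.write(analogRead(A0) / 4);
// multiplying by 4 maps it back toward the 10-bit range (the two
// lost bits are gone for good).
function gotData() {
  const raw = Number(serial.read()); // one byte, 0-255
  const approx = raw * 4;            // roughly 0-1020
  // ...use approx
}
```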

Serial Output from Arduino to p5.js

  • control over transmission and reception (we are both sender and receiver)
    • can decide whether an incoming byte should be interpreted as a byte (numerical value), or convert it to ASCII (for a message)
  • Serial handshaking:
    • Arduino sends info 100x/second, which is faster than the receiver can process it
    • handshaking avoids filling up the buffer
    • receiver (p5) sends a byte to transmitter once it’s ready to receive more data: Serial.write('x');
    • transmitter (arduino) waits for the signal: Serial.available() > 0
      • has to send initial signal
  • Reading strings:
    • instead of using Serial.write() (which sends raw binary and only holds a single byte, so values must be divided by 4, i.e. compressed), use print() and println() to send ASCII data, and interpret the string received in p5 as a float
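Put together, the p5 side of the handshake might look like this (a sketch, assuming the same p5.serialport setup as above):

```javascript
// Receive a println()'d reading as text, use it, then tell the
// Arduino we're ready for more.
function gotData() {
  const inString = serial.readLine();   // ASCII from Arduino's println()
  if (inString.length > 0) {
    const value = parseFloat(inString); // interpret the string as a float
    // ...use value...
    serial.write('x');                  // handshake: request the next reading
  }
}
```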