going forward

For the last week of Rest of You, I tried to automate all of the trackers that I made in the past seven weeks, and feed them into a locally hosted visualization. I didn’t get quite that far, but am getting there:

I liked the viz I created last week, so my obvious first step was to automate the Python scripts that retrieve the data. The productivity scores are requested from the RescueTime API, and my Chrome history is just a JSON converted from the SQLite database that Chrome maintains locally. Both Python scripts update their datasets hourly. The JSONs are linked to a local d3.js visualization, so I can refresh any time to see an update. The design could use some cleaning up, but the data is there.
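The Chrome half of that pipeline is the less obvious part, so here’s a rough sketch of the idea. The profile path, output file, and hourly loop are simplified stand-ins rather than my exact script, and the path will differ by OS:

```python
# minimal sketch: dump Chrome's local history database to a JSON the d3 viz can read
import json
import shutil
import sqlite3
import time
from pathlib import Path

# Chrome keeps history in a SQLite file; this is the default macOS profile path (assumption)
HISTORY_DB = Path.home() / "Library/Application Support/Google/Chrome/Default/History"
OUT_JSON = Path("chrome_history.json")

def dump_history():
    # copy the database first, since Chrome keeps the original locked while it's running
    tmp = Path("/tmp/chrome_history_copy")
    shutil.copy(HISTORY_DB, tmp)
    conn = sqlite3.connect(str(tmp))
    rows = conn.execute(
        "SELECT url, title, visit_count, last_visit_time FROM urls "
        "ORDER BY last_visit_time DESC"
    ).fetchall()
    conn.close()
    records = [
        {
            "url": url,
            "title": title,
            "visits": visits,
            # Chrome stores timestamps as microseconds since 1601-01-01 (the WebKit epoch)
            "visited_unix": last_visit / 1_000_000 - 11644473600,
        }
        for url, title, visits, last_visit in rows
    ]
    OUT_JSON.write_text(json.dumps(records, indent=2))

if __name__ == "__main__":
    while True:  # refresh the dataset every hour
        dump_history()
        time.sleep(3600)
```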

The next step was to automate the entrainment process as well. Rather than having to monitor my own mind-wandering, then muster the self-discipline to open up my six-minute entrainment meditation, a Python script checks my productivity levels and opens the webpage whenever they dip below 50%:
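Boiled down, the trigger script looks something like this. The RescueTime parameters and the 50% math are a hedged approximation of what I’m doing, and the entrainment URL is a placeholder:

```python
# sketch of the entrainment trigger: poll RescueTime hourly, open the meditation page on a bad hour
import time
import webbrowser
import requests

RESCUETIME_KEY = "YOUR_API_KEY"                              # placeholder
ENTRAINMENT_URL = "http://localhost:8000/entrainment.html"   # placeholder for the meditation page

def productivity_percent():
    # RescueTime's Analytic Data API; restrict_kind=productivity buckets time by productivity level
    resp = requests.get("https://www.rescuetime.com/anapi/data", params={
        "key": RESCUETIME_KEY,
        "format": "json",
        "perspective": "interval",
        "resolution_time": "hour",
        "restrict_kind": "productivity",
    }).json()
    rows = resp.get("rows", [])           # each row: [date, seconds, people, productivity (-2..2)]
    total = sum(row[1] for row in rows)
    if total == 0:
        return 100.0                      # no data yet this hour; don't trigger
    productive = sum(row[1] for row in rows if row[3] > 0)
    return 100.0 * productive / total

while True:
    if productivity_percent() < 50:
        webbrowser.open(ENTRAINMENT_URL)  # time to "meditate"
    time.sleep(3600)
```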

 

If I “meditate”, the event gets encoded as a bright teal line in the viz, so I can see how the entrainment affects my behavior. I was apparently pretty productive this week, and didn’t trigger the entrainment script at all, so I had to waste some time this morning to see it in action:

Finally, I also got around to hooking up my Python keylogger to IBM Watson’s Tone Analyzer API. I was hoping to feed its analysis into my visualization as well, to see if mood was correlated with mind-wandering, but the results were a little questionable:
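The Tone Analyzer call itself is only a few lines. This sketch uses the ibm-watson Python SDK; the credentials, service URL, and keylog file name are placeholders:

```python
# sketch: run an hour's worth of keylogger text through Watson Tone Analyzer
from ibm_watson import ToneAnalyzerV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_IAM_APIKEY")  # placeholder credentials
tone_analyzer = ToneAnalyzerV3(version="2017-09-21", authenticator=authenticator)
tone_analyzer.set_service_url("https://api.us-south.tone-analyzer.watson.cloud.ibm.com")

# whatever the keylogger captured in the last hour (placeholder file name)
with open("keylog_last_hour.txt") as f:
    text = f.read()

result = tone_analyzer.tone({"text": text}, content_type="application/json").get_result()

# document_tone.tones lists detected tones (joy, sadness, anger, ...) with 0-1 scores
for tone in result["document_tone"]["tones"]:
    print(tone["tone_name"], round(tone["score"], 2))
```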

Moving forward, I’d also like to integrate Affectiva’s attention, engagement, and valence variables into the viz—I just need to figure out how to open the locally hosted demo in the background, so it captures a candid moment for a few minutes every hour. I would also like to figure out how to automatically count the number of windows and browser tabs I have open, as I’m pretty confident these are reliable markers of stress and mind-wandering. Finally, I plan to create a questionnaire that prompts me every hour, to help corroborate the correlations across all my tracked data.

talking to the elephant

This week, I did a small survey on talking to my subconscious. I thought the assignment was a perfect excuse to try a sensory deprivation tank—at least, the Brooklyn version of one:

Unfortunately, the session for me was merely an hour’s worth of boredom and discomfort; not at all the spiritual experience one guest described, in which “God spoke to [her] in [her] own voice.”

The assignment was also a good excuse to break out my entrainment studies from last semester, so I created a viz of my Chrome history (a good metric of mind-wandering for me) one hour before and after “entrainment”.

Lastly, I made an appointment with Dr. Donatone over at the Health and Wellness Center across the street. After a short interview, she decided I required hypnosis for my hypodermic needle phobia. The hypnosis lasted about ten minutes, and was simply a form of meditation in which, after a short body scan, I was forced to imagine cartoon drawings of needles and tourniquets and report aloud how my body responded to them (spoiler: my response was absolute fear and loathing across the board, even when she asked me to imagine a syringe made out of balloons).

two more years

I wanted to continue my analysis of my Flickr photostream by comparing last week’s recently created dataset to an older set of photos. Since there was no way (that I could find) of “walking” backwards through a photoset with the Python flickrapi module, I thought I would “walk” through my entire photostream of 20,000 photos, pushing the necessary information into a list as I went, then reverse-feed that list into the Clarifai API for analysis:
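The gist of that walk, in code (the API keys and user ID are placeholders; flickrapi’s walk() pages through the stream newest-first for you):

```python
# sketch of the "walk forward, analyze backward" idea
import flickrapi

FLICKR_KEY = "YOUR_KEY"          # placeholder
FLICKR_SECRET = "YOUR_SECRET"    # placeholder
USER_ID = "12345678@N00"         # placeholder NSID for my account

flickr = flickrapi.FlickrAPI(FLICKR_KEY, FLICKR_SECRET, format="etree")

photos = []
# walk() yields one photo element at a time, newest first, handling pagination behind the scenes
for photo in flickr.walk(user_id=USER_ID, extras="url_c,date_taken", per_page=500):
    photos.append({
        "id": photo.get("id"),
        "taken": photo.get("datetaken"),
        "url": photo.get("url_c"),
    })

photos.reverse()  # oldest first, so Clarifai can work through the photostream in reverse
```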

After a few false starts (in retrospect, this was probably Flickr warning me off), I was able to walk through about 11,000 photos before Flickr shut me down for exceeding their rate limit. Now that I know Flickr has a rate limit, I suppose the ideal way to get this job done would be to pay to lift Clarifai’s operations limit, and run my entire photostream through its API in one go. The one downside is that this would take hours, since simply looping through half my photostream with Flickr’s API alone took almost two.

The other obstacle is that I’m an inveterate miser with zero present income, and so am still waiting to hear back from Clarifai on a student “collaboration” discount. A girl can dream.

In the meantime, I crudely copied the last four thousand URLs that my aborted Python script had printed and threw them into Clarifai. This second “dataset” covers the period from October 2015 back to July 2013. I then loaded the results into the same d3 sketch as last week.
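The Clarifai pass itself is pretty mechanical. With the 2017-era clarifai Python client, feeding it the copied URLs looks something like this (the API key and file names are placeholders):

```python
# sketch: run a list of Flickr image URLs through Clarifai's general model, save concepts as JSON
import json
from clarifai.rest import ClarifaiApp

app = ClarifaiApp(api_key="YOUR_CLARIFAI_KEY")   # placeholder
model = app.public_models.general_model          # Clarifai's general image recognition model

with open("flickr_urls_2013_2015.txt") as f:     # the ~4,000 copied URLs, one per line
    urls = [line.strip() for line in f if line.strip()]

results = []
for url in urls:
    response = model.predict_by_url(url=url)
    concepts = response["outputs"][0]["data"]["concepts"]
    results.append({
        "url": url,
        # keep the concept name and confidence score for the d3 rollup later
        "concepts": [{"name": c["name"], "value": c["value"]} for c in concepts],
    })

with open("clarifai_2014_dataset.json", "w") as out:
    json.dump(results, out, indent=2)
```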

One thing to note is that for these new visualizations, I gathered more “concepts” per photo than last week, in hopes that more interesting/abstract concepts—like “togetherness”, which may inherently inspire less confidence than an objective prediction like “people”—might emerge. And they did:

 


Top 10 concepts in 2017: 1) people, 2) outdoors, 3) no person, 4) adult, 5) travel, 6) woman, 7) indoors, 8) man, 9) portrait, 10) nature


Top 10 concepts in 2014: 1) no person, 2) outdoors, 3) travel, 4) people, 5) nature, 6) one, 7) architecture, 8) old, 9) portrait, 10) sky

At least, they did for the more recent set of photos. One thing that’s immediately obvious: I photographed way fewer “concepts” in 2014 than I did in 2017. I also took fewer photos.

Another striking observation is that “no person” is by far the most common concept in the 2014 photoset, while “people”—literally the opposite—is the most common for 2017. Looking at the top 10 concepts, one could definitely speculate that I had more company in 2017 than I did in 2014.

While this visualization does its job as an abstract overview of the data, I wanted the photos themselves to tell the story. So, on click, I had the page spit out the photos trapped inside their respective bars.


“People” in 2017: 12 total unique individuals out of ~126 photos


“People” in 2014: 8 total unique individuals (not including office parties!) out of ~220 photos

Comparing the “people” category for both photosets, I clearly saw fewer unique people over a longer stretch of time (i.e., the span covered by the photos that fit in the window) in 2014, while in 2017 I saw more unique individuals over a shorter period, even though half the photos displayed were repeats.

Also notable was that the 2014 sample seemed to be entirely processed through Instagram, which may be a coincidence; I probably just happened to choose a period where I backed up all my Instagram files at once. I’ll have to look into that one, but it’s amazing to me that I bothered to process so many mundane photos through Instagram even though they would never be posted publicly. Perhaps I truly thought a filtered reality looked better, or maybe I was just constantly looking for an opportunity for public validation.

So what’s with the discrepancy? It will surprise no one to learn that I was extremely depressed from 2013-2014, a period which overlaps with the second photoset in this experiment. This analysis truly corroborates the idea that mental health is a social issue.

For my next steps, I’d like to train a custom model to identify selfies, which I believe are (at least for me) a strong marker of depression. I’d also like to incorporate Clarifai’s color model into the workflow, run my Instagram history through it, and display the results as a time-series visualization. I’m absolutely certain this will be able to map my history of depression with excruciating accuracy.

what I wanted to remember

For this week’s Rest of You assignment, I decided to run my Flickr photostream through Clarifai, an image recognition service with an extremely effortless API. Thank God for that, because the Flickr API was anything but.


This was a really good (and hard-earned) moment for me.

I’ve basically been backing up my photos to Flickr ever since I started using smartphones in 2013; I’m also an adamant Samsung Galaxy user solely because of their superior cameras. As such, I figured my photo backups would be a rich database to find trends in, and considering that I had more than 20,000 photos backed up on Flickr, I decided to try to automate the process by making both APIs talk to each other.

Sadly, they only got through 4,800 photos before I hit my Clarifai processing limit—i.e., somewhere in the middle of 2016. Unfortunately, the really telling data is from around 2013-2015, so I’ll have to sign up with my other e-mail accounts to finish the rest of my history, or slowly work through it over the next five months.

Here’s a screenshot of Clarifai’s response to my API call, along with my code:

Tragically, I didn’t consider JSON-ifying these responses as they came in (who knew there was a difference between single and double quotes in JSON world? Probably a lot of people.), so there was a subsequent and painful formatting process to follow. Thanks, boyfriend! Note to self: learn Vim for data processing.
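For anyone else who prints Python dicts and expects JSON: Python’s repr() uses single quotes, which JSON parsers reject. The fix is either to write with json.dump in the first place or to rescue the printed output with ast.literal_eval, roughly like this (the file names are made up):

```python
# sketch: convert printed Python dicts (single quotes, True/False) into valid JSON
import ast
import json

with open("clarifai_raw_output.txt") as f, open("clarifai_output.json", "w") as out:
    # each non-empty line of the log is one printed Python dict
    records = [ast.literal_eval(line) for line in f if line.strip()]
    json.dump(records, out, indent=2)  # json.dump always emits double-quoted, valid JSON
```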

Below is a more legible example of their response, in JSON format, along with the respective image:

After the scary Python part was over, it was time to speak my slightly more native language, d3.js. I isolated the concept objects, threw out all but Clarifai’s top two predictions for each photo, and applied a rollup to count the unique keys. As you can see from the screenshot below, over the past 1.5 years, I photographed 451 unique concepts:
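(The same aggregation could just as easily happen on the Python side before the data ever reaches d3; here’s a sketch with collections.Counter, keeping each photo’s top two concepts. The input file name assumes the Clarifai output from earlier.)

```python
# sketch: count unique concepts across photos, a Python-side equivalent of the d3 rollup
import json
from collections import Counter

with open("clarifai_dataset.json") as f:   # hypothetical output of the Clarifai step
    photos = json.load(f)

counts = Counter()
for photo in photos:
    # sort by Clarifai's confidence and keep only the top two predictions per photo
    top_two = sorted(photo["concepts"], key=lambda c: c["value"], reverse=True)[:2]
    counts.update(c["name"] for c in top_two)

print(len(counts), "unique concepts")
for name, n in counts.most_common(20):     # the top-twenty list below
    print(name, n)
```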

Some of these concepts were redundant (like “people” and “adult”), which will have to be dealt with later, as the foregoing work took ages to do! Below, the top twenty concepts sorted in descending order:

And here is the same data visualized in a quick d3 sketch:

pupil labs is awesome

The Pupil Labs eye tracker detects your pupil size and gaze position pretty well, and lets you export excessively granular data. Here’s a map of my tracked gaze as I got a tutorial from Cristobal:

According to the Thinking, Fast and Slow chapter I read last week, pupil dilation correlates with the exertion of mental effort. With this in mind, I decided to do another reading experiment in two vastly different environments: 1) on the floor on pb&j day, and 2) alone at home. I was particularly interested to see my eye movements, as my recently developed deficit in attention requires me to read sentences, and even entire paragraphs, multiple times after realizing that I’ve looked at the words without actually processing them in the slightest.

Here’s the map of my eye movements in the second environment:

Looks cool, but is not very informative, so I decided to throw together a quick p5 sketch (with d3 support) to animate the movements over time, and add in the corresponding diameter data.

Here’s a new visualization with the same at-home dataset:

And here’s one for pb&j day:

So the image positioning for both is eyeballed, but it’s pretty clear from the density of the movement data in the latter set that sitting next to the pb&j cart between classes upset my concentration and forced me to reread the same lines an embarrassing number of times. Pupil diameter (encoded concomitantly in the position-tracking lines, and represented supplementally with the circles in the lower-right corner) was also, on average, larger at school than in my quiet home environment, suggesting that more mental effort was required at school.

That is, if you can ignore the extremely anomalous data that came in at the end of the at-home dataset, which explains the huge circle left behind in the supplemental diameter visualization.
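A quick way to sanity-check that diameter comparison, and to drop the anomalous samples, is to average each export in pandas. The column names here are from a Pupil Player export and may differ by version:

```python
# sketch: compare mean pupil diameter between the two recordings, filtering low-confidence samples
import pandas as pd

def mean_diameter(path):
    df = pd.read_csv(path)
    df = df[df["confidence"] > 0.6]   # low-confidence rows are where the anomalous readings live
    return df["diameter"].mean()

print("home :", mean_diameter("home_export/pupil_positions.csv"))   # placeholder paths
print("pb&j :", mean_diameter("pbj_export/pupil_positions.csv"))
```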

I tried uploading the sketches to gist, and you can try loading the at-home viz here, but the huge dataset will probably cause the browser to freeze at some point. Will try to clean up the data later and re-upload.

confabulations

In the first chapter of The Happiness Hypothesis, Haidt writes about the first epileptic patients who, in the 1960s, underwent “split-brain” surgery in hopes of mitigating their seizures. While effective for that purpose, researchers soon discovered problems stemming from the hemispheres’ newfound independence. What was particularly interesting to me was this bit on confabulation:

Confabulation is so frequent in work with split-brain patients and other people suffering brain damage that Gazzaniga refers to the language centers on the left side of the brain as the interpreter module, whose job is to give a running commentary on whatever the self is doing, even though the interpreter module has no access to the real causes or motives of the self’s behavior. For example, if the word “walk” is flashed to the right hemisphere, the patient might stand up and walk away. When asked why he is getting up, he might say, “I’m going to get a Coke.” The interpreter module is good at making up explanations, but not at knowing that it has done so.

While not quite the same thing, I’d been talking to my therapist the night before about how people (myself included, obv) intellectualize away insecurities, and thought the coincidence pretty charming. While some people, like conservative homophobes who solicit sex in gas station restrooms, may be painfully aware of the motivations for their blaring hypocrisy, some insecurities are quietly inveterate enough to literally feel like the interpreter module is good at making up explanations, but not at knowing that it has done so.

Because one’s self-concept is born out of the language center of the brain, there is a huge incentive to write pretty words rather than the ugly truth. What we choose to include in our “personal narratives” is a map of where we’ve been and where we hope to go; it gives us purpose, a sense of identity, the strength to keep on keeping on. The pull of personal identity and narrative is especially strong now that social media gives us a platform to present our “best life” to the world, as well as a backstage to curate its content. But both our digital veneers and internal monologues are just confabulations; they are illusions that we can rewrite at any time.

Unfortunately, if we’re not truthful to or forgiving of ourselves, these illusions can steamroll over needs and fears that we were too ashamed to put into words. It’s much easier to say, “I’m going to get a Coke,” than to wonder why you just took 10,000 selfies to post just one. But these are things we’ll inevitably need to reckon with if we’re to have healthy relationships with ourselves and others.

Blinking : Mind-Wandering

Mind-wandering is a symptom of a wide variety of things—ADHD, depression, intoxication, introspection, etc.—and I’d like to explore ways of measuring it as a biometric. One easy indicator is blinking: the more you blink, the more distracted you are. Conveniently, I’ve been meaning to play around with the “Beyond Reality Face” tracker, which includes a JavaScript library for blink detection. It was incredibly accurate, as well as an incredible workout for my browser.

I ran the demo code on a local server, had it spit out a timestamp each time I blinked, and left it running in the background while I did some reading for the “Illusion” assignment. I read aloud for the first ten minutes, hoping that elocution would ward off distraction; afterwards, I read silently for another ten minutes. Having been extremely frazzled this past week, I figured there would be a noticeable difference, despite the limited data. I copy/pasted the console data into a text editor, saved it as a CSV, and threw it into some old d3 code of mine:
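The rate math is simple enough to script. Assuming one blink timestamp (in milliseconds) per line of that CSV, something like this gives blinks per minute:

```python
# sketch: compute average blink rate from a CSV of blink timestamps
import csv

def blinks_per_minute(path):
    with open(path) as f:
        # one blink timestamp (ms) per row, as logged from the browser console
        stamps = [float(row[0]) for row in csv.reader(f) if row]
    minutes = (stamps[-1] - stamps[0]) / 1000 / 60
    return len(stamps) / minutes

print(round(blinks_per_minute("blinks_aloud.csv"), 1))   # ~7 while reading aloud
print(round(blinks_per_minute("blinks_silent.csv"), 1))  # ~11 while reading silently
```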

 

While I read aloud, I blinked 89 times in about 13 minutes, which is an average rate of 7 blinks per minute. While I read silently, I blinked 118 times in about 11 minutes; an average of 11 blinks per minute. A quick Google search suggests that the average is 15-20 times per minute, so perhaps I’m not as scatterbrained as I thought!

Update with a dataset from this morning: