QH W4 HW: what I learned this week

JENNA XU
QUANT HUMANISTS
SPRING 2018
02/26/2018

Several sources of inspiration for this week’s assignment:

Humanist service design guidelines by the Center for Humane Technology: rather than keeping users hooked on a product, we should be designing products that facilitate our human needs (sleep, human connection, engagement with the physical world, etc.).

This Vox article also galvanized me to reconsider the importance of social connection to mental health, while reminding me of the town of Geel, which treats mental disorders with social inclusion.

Anytime you consider anything less than everything, you are missing something: J. Paul Neeley's beautiful talk on considering everything really validated my insatiable impulse to collect all the data possible.

 

 

two more years

I wanted to continue the analysis of my Flickr photostream by comparing the dataset I created last week to an older set of photos. Since there was no way (that I could find) to "walk" backwards through a photoset with the Python flickrapi module, I figured I would "walk" through my entire photostream of 20,000 photos, pushing the necessary information into a list as I went, and then reverse-feed that list into the Clarifai API for analysis:
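Below is a minimal sketch of that plan, not my actual script: it assumes the flickrapi and requests packages, skips Flickr authentication for private photos, and uses placeholders for the API keys, user id, and Clarifai model id.

```python
import flickrapi
import requests

FLICKR_KEY = "YOUR_FLICKR_KEY"          # placeholder
FLICKR_SECRET = "YOUR_FLICKR_SECRET"    # placeholder
CLARIFAI_KEY = "YOUR_CLARIFAI_API_KEY"  # placeholder
USER_ID = "YOUR_FLICKR_NSID"            # placeholder
GENERAL_MODEL_ID = "YOUR_CLARIFAI_GENERAL_MODEL_ID"  # id of Clarifai's public "general" model

flickr = flickrapi.FlickrAPI(FLICKR_KEY, FLICKR_SECRET, format="etree")

# Walk the photostream (newest first by default), keeping just enough info to build each URL.
urls = []
for photo in flickr.walk(user_id=USER_ID, per_page=500):
    urls.append(
        "https://farm{0}.staticflickr.com/{1}/{2}_{3}.jpg".format(
            photo.get("farm"), photo.get("server"), photo.get("id"), photo.get("secret")
        )
    )

# Reverse the list so the oldest photos get fed to Clarifai first.
for url in reversed(urls):
    resp = requests.post(
        "https://api.clarifai.com/v2/models/{0}/outputs".format(GENERAL_MODEL_ID),
        headers={"Authorization": "Key " + CLARIFAI_KEY},
        json={"inputs": [{"data": {"image": {"url": url}}}]},
    )
    print(resp.json())
```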

After a few false starts (in retrospect, this was probably Flickr warning me off), I was able to walk through about 11,000 photos before Flickr shut me down for exceeding its rate limit. Now that I know Flickr has a rate limit, I suppose the ideal way to get this done would be to pay to lift Clarifai's operations limit and run my entire photostream through its API in one go. The one downside is that this would take hours, since simply looping through half my photostream with Flickr's API alone took almost two.

The other obstacle is that I’m an inveterate miser with zero present income, and so am still waiting to hear back from Clarifai on a student “collaboration” discount. A girl can dream.

In the meantime, I crudely copied the last four thousand URLs that my aborted Python script had printed and threw them into Clarifai. This second "dataset" covers the period from October 2015 back to July 2013. I then loaded the results into the same d3 sketch as last week.

One thing to note is that for these new visualizations, I gathered more “concepts” per photo than last week, in hopes that more interesting/abstract concepts—like “togetherness”, which may inherently inspire less confidence than an objective prediction like “people”—might emerge. And they did:

 


Top 10 concepts in 2017: 1) people, 2) outdoors, 3) no person, 4) adult, 5) travel, 6) woman, 7) indoors, 8) man, 9) portrait, 10) nature


Top 10 concepts in 2014: 1) no person, 2) outdoors, 3) travel, 4) people, 5) nature, 6) one, 7) architecture, 8) old, 9) portrait, 10) sky

Or at least, they did for the more recent set of photos. One thing that's immediately obvious: I photographed way fewer "concepts" in 2014 than I did in 2017. I also took fewer photos overall.

Another striking observation is that "no person" is by far the most common concept in the 2014 photoset, while "people" (literally the opposite) is the most common for 2017. Looking at the top 10 concepts, one could definitely speculate that I had more company in 2017 than I did in 2014.

While this visualization does its job as an abstract overview of the data, I wanted the photos themselves to tell the story. So, on click, I had the page spit out the photos trapped inside their respective bars.


“People” in 2017: 12 total unique individuals out of ~126 photos


“People” in 2014: 8 total unique individuals (not including office parties!) out of ~220 photos

Comparing the "people" category for both photosets, I clearly saw fewer unique people over a longer period of time (i.e., judged by how many photos fit in the window) in 2014, while in 2017 I saw more unique individuals over a shorter period, even though half the photos displayed were repeats.

Also notable was that the 2014 sample seemed to be entirely processed in Instagram, which may be a coincidence; I probably just happened to choose a period where I backed up all my Instagram files at once? I'll have to look into that, but it's amazing to me that I bothered to run so many mundane photos through Instagram even though they would never be posted publicly. Perhaps I truly thought a filtered reality looked better, or maybe I was just constantly looking for an opportunity for public validation.

So what's with the discrepancy? It will surprise no one to learn that I was extremely depressed from 2013 to 2014, a period that overlaps with the second photoset in this experiment. This analysis corroborates the idea that mental health is a social issue.

For my next steps, I'd like to train a custom model to identify selfies, which I believe is a strong marker (at least for me personally) of depression. I'd also like to incorporate Clarifai's color model into the workflow, run my Instagram history through it, and display the results as a time-series visualization. I'm absolutely certain this will be able to map my history of depression with excruciating accuracy.

what I wanted to remember

For this week's Rest of You assignment, I decided to run my Flickr photostream through Clarifai, an image recognition service with an extremely effortless API. Thank God for that, because the Flickr API was anything but.


This was a really good (and hard-earned) moment for me.

I've basically been backing up my photos to Flickr ever since I started using smartphones in 2013; I'm also an adamant Samsung Galaxy user solely because of their superior cameras. As such, I figured my photo backups would be a rich database to find trends in, and considering that I had more than 20,000 photos backed up on Flickr, I decided to try to automate the process by making both APIs talk to each other.

Sadly, they only got through 4,800 photos before I hit my Clarifai processing limit, i.e., somewhere in the middle of 2016. Unfortunately, the really telling data is from around 2013-2015, so I'll have to sign up with my other e-mail accounts to finish the rest of my history, or slowly work through it over the next five months.

Here’s a screenshot of Clarifai’s response to my API call, along with my code:

Tragically, I didn't consider JSON-ifying these responses as they came in (who knew there was a difference between single and double quotes in JSON world? Probably a lot of people.), so there was a subsequent and painful formatting process to follow. Thanks, boyfriend! Note to self: learn Vim for data processing.
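For anyone following along: the root of the problem was that printing a Python dict gives you its repr, with single quotes, which JSON parsers reject. A minimal sketch of what I should have done instead, assuming the responses were collected into a list of dicts:

```python
import json

# all_responses: the list of Clarifai response dicts accumulated from the API calls (assumed)
with open("clarifai_responses.json", "w") as f:
    json.dump(all_responses, f, indent=2)  # writes valid, double-quoted JSON
```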

Below is a more legible example of their response, in JSON format, along with the respective image:

After the scary Python part was over, it was time to speak my slightly more native language, d3.js. I isolated the concept objects, threw out all but Clarifai’s top two predictions for each photo, and applied a rollup to count the unique keys. As you can see from the screenshot below, over the past 1.5 years, I photographed 451 unique concepts:

Some of these concepts were redundant (like “people” and “adult”), which will have to be dealt with later, as the foregoing work took ages to do! Below, the top twenty concepts sorted in descending order:

And here is the same data visualized in a quick d3 sketch:
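For reference, the aggregation behind those last two views boils down to a simple tally. Here it is sketched in Python rather than the actual d3 code, assuming each photo's record keeps its top-two concept names:

```python
from collections import Counter

# concepts_per_photo: a list of lists like [["people", "adult"], ["no person", "outdoors"], ...]
concept_counts = Counter(
    name
    for photo in concepts_per_photo
    for name in photo[:2]  # keep only the top two predictions per photo
)

print(len(concept_counts))             # number of unique concepts (451 in my case)
print(concept_counts.most_common(20))  # the top twenty, in descending order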

ML4A with Gene Kogan

http://ml4a.github.io/

Paperspace or Colab

ml4a suite:

  • ConvnetOSC, meant for Wekinator
    • ml4a ConvnetOSC + Wekinator + Processing or p5 with OSC (see the sketch below)
  • KeyboardOSC: controls the keyboard with Wekinator
  • AudioClassifier: classifies sounds that you make

more on http://ml4a.github.io/guides
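To make the ConvnetOSC + Wekinator + sketch pipeline concrete, here's a hypothetical Python stand-in for the Processing/p5 receiver, using the python-osc package. Wekinator's output address and port are assumed to be the defaults (/wek/outputs on 12000); check your project settings.

```python
from pythonosc import dispatcher, osc_server

# Print whatever Wekinator sends out after classifying ConvnetOSC's feature vectors.
def handle_outputs(address, *values):
    print(address, values)

d = dispatcher.Dispatcher()
d.map("/wek/outputs", handle_outputs)  # Wekinator's default output address (assumed)

server = osc_server.BlockingOSCUDPServer(("127.0.0.1", 12000), d)  # default output port (assumed)
server.serve_forever()
```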

 

pupil labs is awesome

Pupil Labs detects your pupil size and gaze position pretty well, and lets you export excessively granular data. Here’s a map of my tracked gaze as I got a tutorial from Cristobal:

According to the Thinking, Fast and Slow chapter I read last week, pupil dilation correlates with the exertion of mental effort. With this in mind, I decided to do another reading experiment in two vastly different environments: 1) on the floor on pb&j day, and 2) alone at home. I was particularly interested in seeing my eye movements, since my recently developed attention deficit requires me to reread sentences, and even entire paragraphs, multiple times after realizing that I've looked at the words without actually processing them in the slightest.

Here’s the map of my eye movements in the second environment:

It looks cool, but it isn't very informative, so I decided to throw together a quick p5 sketch (with d3 support) to animate the movements over time and add in the corresponding diameter data.
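If you'd rather skip the p5 sketch, a rough static version can be thrown together in Python. This is a stand-in, not my actual code; the file and column names follow a typical Pupil Labs export and may differ by version.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Load a Pupil Labs export (file/column names may vary with the software version).
gaze = pd.read_csv("exports/000/gaze_positions.csv")
pupil = pd.read_csv("exports/000/pupil_positions.csv")

fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(8, 8))

# Gaze path: normalized (0-1) positions, drawn in recording order.
ax1.plot(gaze["norm_pos_x"], gaze["norm_pos_y"], linewidth=0.5, alpha=0.6)
ax1.set_title("Gaze path")

# Pupil diameter per sample, as a rough proxy for mental effort.
ax2.plot(pupil["diameter"].values, linewidth=0.5)
ax2.set_title("Pupil diameter")

plt.tight_layout()
plt.show()
```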

Here’s a new visualization with the same at-home dataset:

And here’s one for pb&j day:

The image positioning for both is eyeballed, but it's pretty clear from the density of the movement data in the latter set that sitting next to the pb&j cart between classes upset my concentration and forced me to reread the same lines an embarrassing number of times. Pupil diameter (encoded in the position-tracking lines, and also represented with the circles in the lower-right corner) was on average larger at school than in my quiet home environment, suggesting that more effort was required in the former.

That is, if you can ignore the extremely anomalous data that came in at the end of the at-home dataset, which explains the huge circle left behind in the supplemental diameter visualization.

I tried uploading the sketches to Gist, and you can try loading the at-home viz here, but the huge dataset will probably cause the browser to freeze at some point. I'll try to clean up the data later and re-upload.
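One easy cleanup would be to downsample the export before plotting; a sketch, reusing the same assumed Pupil Labs export as above:

```python
import pandas as pd

# Keep every 10th sample so the browser has far fewer points to render.
gaze = pd.read_csv("exports/000/gaze_positions.csv")
gaze.iloc[::10].to_csv("gaze_positions_downsampled.csv", index=False)
```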

QH W3 HW: Dear Data

For week 2’s assignment, my boyfriend and I logged our feelings every hour for about four days. We had recently realized just how differently we perceived and experienced the same relationship, and thought it would be interesting to do a comparison.

The variables we manually tracked on a Google spreadsheet were time, reaction, whether it was a positive or negative feeling, description of the event that triggered the reaction, and an overall “satisfaction score”:

On Monday, he forfeited his log and I mapped both datasets on the postcards Matthew gave us in class:


Top dataset is his, bottom is mine

I chose a simple bar graph in order to flatten an extremely nuanced and qualitative dataset into something visually digestible. The bars were encoded with two colors: pink for the score, and a secondary color that indicated the type of trigger. The fluctuations in bar height on his postcard illuminated just how much his anxiety and sensitivity affect his experience of our relationship, while my rather uniform results illustrated how generally unperturbed I am, and/or how oblivious I am to his emotions.

QH W2 HW: Reflection

JENNA XU
QUANT HUMANISTS
SPRING 2018
02/05/2018

This week, I learned that I am totally ADD and greedy and want all the data. I also learned that all the data will definitely be too much data, and decided I will not be getting an iPhone just to use a select few iOS-only apps, so I’m going to use this space to determine my final line-up of trackers. Here goes:

  1. The “how are you feeling?” trackers:
    • Affectiva, as we saw below, which uses computer vision and a webcam to translate facial markers to emotions. Will be using this to track my blinking as well.
    • IBM’s Tone Analyzer and a key logger for a sentiment analysis keyboard. Will probably transfer text and format data manually for now, until I get time to set up something more automated.
  2. The “is your mind wandering/lonely?” trackers:
    • Affectiva calculates attention, engagement, and valence values, in addition to blinks
    • HabitLab, by Stanford HCI Group, to track my insidious social media usage, as well as to build better habits (might switch to RescueTime if this doesn’t suit my needs, but HabitLab had a much more attractive UI)
  3. The “what makes you happy?” tracker:
    • The Track Your Happiness app is only available on iOS, but it's simple enough that I may be able to build something similar if I have time (or use this: https://www.askmeevery.com/)
    • Activity tracker: Google maps/pedometer in smartphone
    • Weather
    • Social activities

QH W2 HW: Document Your Methodology

JENNA XU
QUANT HUMANISTS
SPRING 2018
02/05/2018

Forget all the trackers I wrote about last week. I’ve met someone new.

Affectiva, an MIT Media Lab spin-off, uses computer vision and machine learning to translate facial biometrics into emotions, in real time, from webcam input. I downloaded a browser demo to play around with and see what the data would look like. Happily, the SDK spits it all out in a nice JSON format:

The big question was figuring out what data to collect, and how frequently. The demo seemed to return data for every frame, which is way too granular, especially considering that I intend to collect over the long term.

After some experimentation, I decided to only record the emotion variables that reached a value of 95 out of 100, and the expression variables that reached 99.5 out of 100 (these were more sensitive). With each of these, I also pushed the values for attention, valence, engagement—because I’m most interested in tracking mind-wandering—as well as the “dominant emoji” and a timestamp. I figured this would give me a pretty good picture of my mood shifts throughout the day, at a reasonable pace.
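The filtering logic itself is simple. Here's the gist sketched in Python rather than the SDK's JavaScript; exactly where each value lives in the real Affectiva objects may differ, so treat the field names as assumptions.

```python
def keep(frame):
    """Decide whether a single frame of Affectiva output is worth logging.
    `frame` is assumed to be a dict with 'emotions' and 'expressions' sub-dicts of 0-100 scores."""
    strong_emotion = any(v >= 95 for v in frame["emotions"].values())
    strong_expression = any(v >= 99.5 for v in frame["expressions"].values())
    return strong_emotion or strong_expression

def to_record(frame, timestamp):
    """What gets stored for each kept frame."""
    return {
        "timestamp": timestamp,
        "attention": frame["expressions"].get("attention"),
        "valence": frame["emotions"].get("valence"),
        "engagement": frame["emotions"].get("engagement"),
        "dominant_emoji": frame.get("dominantEmoji"),
    }
```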

Well, after a mere hour or two, my laptop fans were going at full speed, and a preliminary download of the data looked like this:


Existing quietly at a rate of 493,283 JSON columns / hour. 

To test the physical limits of my laptop, I decided to throw this thing into a d3 sketch:


NBD, just a webpage with 16,000 DOM elements. This is going well.

Also, just kidding about the other trackers; I still plan on using/hacking many of them. I just got a little, uh, sidetracked this week. I also tried out the Beyond Reality Face JS face-tracking library, which was very impressive, but Affectiva can do everything it does and more. 😍

confabulations

In the first chapter of The Happiness Hypothesis, Haidt writes about the first epileptic patients who, in the 1960s, underwent "split-brain" surgery in hopes of mitigating their seizures. While the surgery was effective for that purpose, researchers soon discovered problems stemming from the hemispheres' newfound independence. What was particularly interesting to me was this bit on confabulation:

Confabulation is so frequent in work with split-brain patients and other people suffering brain damage that Gazzaniga refers to the language centers on the left side of the brain as the interpreter module, whose job is to give a running commentary on whatever the self is doing, even though the interpreter module has no access to the real causes or motives of the self’s behavior. For example, if the word “walk” is flashed to the right hemisphere, the patient might stand up and walk away. When asked why he is getting up, he might say, “I’m going to get a Coke.” The interpreter module is good at making up explanations, but not at knowing that it has done so.

While not quite the same thing, I'd been talking to my therapist the night before about how people (myself included, obv) intellectualize away insecurities, and thought the coincidence pretty charming. While some people, like conservative homophobes who solicit sex in gas station restrooms, may be painfully aware of the motivations behind their blaring hypocrisy, some insecurities are quietly inveterate enough that it literally feels like the interpreter module is "good at making up explanations, but not at knowing that it has done so."

Because one's self-concept is born out of the language center of the brain, there is a huge incentive to write pretty words rather than the ugly truth. What we choose to include in our "personal narratives" is a map of where we've been and where we hope to go; it gives us purpose, a sense of identity, the strength to keep on keeping on. The pull of personal identity and narrative is especially strong now that social media gives us a platform to present our "best life" to the world, as well as a backstage to curate its content. But both our digital veneers and internal monologues are just confabulations; they are illusions that we can rewrite at any time.

Unfortunately, if we're not truthful to or forgiving of ourselves, these illusions can steamroll over needs and fears that we were too ashamed to put into words. It's much easier to say, "I'm going to get a Coke," than to wonder why you just took 10,000 selfies to post just one. But these are things we'll inevitably need to reckon with if we're to have healthy relationships with ourselves and others.