API of You W1 HW

I’ve worked a lot with JSONs before, so for this week’s assignment, I decided to just visualize a big JSON file that I’ve been putting off for a while: the sentiment analysis results of about 20 days’ worth of key logs. Below is an example of what a (smaller) object might look like for an hour’s worth of selected logs:
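Concretely, each hourly object follows Watson’s sentences_tone schema; here’s a minimal sketch of the shape, with placeholder text and scores (the “hour” key is my own wrapper, not part of IBM’s response):

```python
# Hypothetical shape of one hour's entry, modeled on the Tone Analyzer's
# documented sentences_tone response; text and scores are placeholders.
hour_entry = {
    "hour": "2017-11-06 15:00",
    "sentences_tone": [
        {
            "sentence_id": 0,
            "text": "i cannot believe this finally works",
            "tones": [
                {"tone_id": "joy", "tone_name": "Joy", "score": 0.81},
                {"tone_id": "analytical", "tone_name": "Analytical", "score": 0.52},
            ],
        },
    ],
}
```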

I’ve been procrastinating on visualizing this data because it’s nested enough to require effort, but here it is:

Not sure why I have a four-day gap in the middle of my data!

There are eight “tones” possible with IBM Watson’s Tone Analyzer API (color-coded above), and sentences are often assigned several tones at once. I added a rollover tooltip that displays the sentence in question, which is why I’m not yet ready to put the live viz online 🙂

Impossible Maps W1 HW

So I have some comments.

I may be a bit biased, being a near-daily Google Maps user myself, but I like Google Maps quite a bit better than Apple Maps. For Part 1, the author points out that far more cities are labeled in Apple Maps, particularly at zoom 8. Labeling 44 cities in such a small space completely clutters up the tile and renders it all but illegible. You don’t need that much detail at a higher-level view. Also, at a higher-level view, chances are you’re driving rather than walking, which is probably why Google prioritized “shields” over cities.

However, at a lower level, Apple has these interesting high-fidelity, individual landmark markers rather than a generic marker for each type of POI. As a person who navigates by landmark and gets confused by street names, I actually do appreciate this detail.

Because of this, and because Google Maps tends to label far more roads and “shields” than Apple, I’d hypothesize that Apple Maps is prioritizing the pedestrian while Google Maps is prioritizing the driver. But Apple Maps seems to give you more information at higher-level zooms, then dissolves into minimalism just as you zoom in expecting more. As a Manhattanite, I need those subway station markers!

I would also like to express my horror at this “Frankenstein map”:

What good is that much information when you can’t read it? And when do you ever need that much information?

If users do indeed crave “the whole picture”, perhaps there should be two map modes: one for navigation, which emphasizes roads and their labels, and one for general exploration, which emphasizes cities and POIs. As a chronic pedestrian and global traveler, I honestly have no need for the former; I’m either walking to a building or subway station and therefore only need street names at a low-level zoom, or I’m zoomed all the way out planning my next vacation and therefore only need political borders and major city names.

QH W7 HW: Final Project Proposal Outline

Digital phenotyping + self-care/intervention

  • Background research/ project landscape:
    • Mindstrong: This is basically what I came to ITP to do: develop a tracking system that can detect, predict, and prevent the onset of depression. Mindstrong aims to tackle many mental illnesses using only smartphone data, whereas I’m primarily gathering data from laptop use. Co-founded by Dr. Tom Insel, formerly the lead of Verily’s mental health team.
    • PRIME app: an app developed by UCSF researchers for clinical trials studying the effect of social support on the severity of schizophrenia
    • Fine: a mood-reporting app that tracks self-reported data (not available for use)
    • trackyourhappiness.org: an ongoing doctoral research project by Matt Killingsworth at Harvard, which prompts you throughout the day to find out what factors are associated with happiness.
    • PHQ-9: standard self-reported questionnaire for depression severity
    • Exist: links all your tracking apps to find correlations
    • I feel like shit game: an interactive self-care flow chart; asks you questions about your state and offers self-care suggestions
    • Headspace: there’s a meditative exercise tailored for nearly every mood possible
  • Hypothesis / Definition of question(s):
    • What factors in my life contribute to stress, anxiety, low morale/motivation, and negative affect in general?
    • What factors contribute to high morale, motivation, positive affect, and a more balanced feeling of well-being?
    • How might I facilitate the latter factors?
    • What interventions are appropriate?
  • Objectives:
    1. a system of behavioral trackers (see next steps below)
    2. a dashboard for data viz
    3. machine-learned correlations (ultimately)
    4. self-care recommendations
  • Goals:
    • To address issues with current treatments:
      • Therapy:
        • lag time (“A therapist, the joke goes, knows in great detail how a patient is doing every Thursday at 3 o’clock.”)
        • no performance feedback outside of potentially dishonest self-reporting
        • patient fear of disappointing therapist
        • variations in individual therapist efficacy
      • Medication:
    • To catalyze self-awareness of emotions and their triggers
    • To facilitate self-care
    • To encourage healthier digital habits
  • Technical considerations/next steps:
    • Finish trackers:
      • tab/window counter: chrome extension
      • new affectiva approach: chrome extension background page?
      • unreturned messages
      • self-reported mood
    • Research visualizations
    • Research more behavioral metrics

SLIDES HERE

QH WK 06: Untrack Me

So my keylogger has been talking hourly to IBM Watson’s Tone Analyzer API for a couple of weeks now, and I’ve been noticing strange responses to the chunks of time I’ve spent coding:
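The plumbing is nothing fancy; a sketch of the hourly call, assuming the v3 REST endpoint with placeholder credentials (the endpoint URL and key here are stand-ins, not my actual setup):

```python
import requests

# Placeholders: the real endpoint/credentials live in my config, not here.
TONE_URL = "https://gateway.watsonplatform.net/tone-analyzer/api/v3/tone"
API_KEY = "YOUR_API_KEY"

def analyze(text):
    """POST one hour's worth of key logs to Watson's Tone Analyzer v3."""
    resp = requests.post(
        TONE_URL,
        params={"version": "2017-09-21", "sentences": "true"},
        auth=("apikey", API_KEY),  # IAM-style basic auth
        json={"text": text},
    )
    resp.raise_for_status()
    return resp.json()  # {"document_tone": ..., "sentences_tone": [...]}
```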

When I first noticed this behavior, I thought it amusing yet unforgivably flawed; I was trying to do some Serious Sentiment Analyses here, and I clearly was not experiencing such extreme mood swings while programming (although the examples above illustrate my experience pretty well).

But in the context of this assignment, it provides an easy opportunity for manipulation: I just needed to figure out what exactly was triggering these false positives. So I gathered all the erratic predictions and fed them into IBM’s demo “word” by word, adding and deleting until I found precisely which characters the model was responding to. For the examples above:
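Scripted, that add-and-delete routine is just a leave-one-token-out loop; a sketch, reusing the hypothetical analyze() helper from above (the 0.3 drop threshold is arbitrary):

```python
def tone_score(result, tone_id):
    """Pull one tone's document-level score from a Tone Analyzer response."""
    for tone in result.get("document_tone", {}).get("tones", []):
        if tone["tone_id"] == tone_id:
            return tone["score"]
    return 0.0

def find_triggers(sentence, tone_id):
    """Drop one token at a time; return the tokens the tone can't live without."""
    tokens = sentence.split()
    base = tone_score(analyze(sentence), tone_id)
    triggers = []
    for i in range(len(tokens)):
        ablated = " ".join(tokens[:i] + tokens[i + 1:])
        if ablated and base - tone_score(analyze(ablated), tone_id) > 0.3:
            triggers.append(tokens[i])
    return triggers
```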

A few other absurd samples:

I also found that several individual words triggered extremely confident predictions:


So what can be used to consistently hack IBM’s predictions?

For starters, any built-in function name (that’s also a complete word) gets picked up as displaying confidence:

Even when followed by text that’s obviously not-so-confident:

Even when in nonsensical function salad:


To sound analytical, simply add the word “if” anywhere in a sentence:

Regardless of whether the surrounding words are analytical:


To seem instantly and dramatically happier, just add {}:

Curly brackets are so joyful that they even neutralize fear and sadness:


Lastly, the single most effective way to express anger is with this emoticon: =P

Its rage is so complete that it literally sucks the joy out of life:
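All of this is reproducible with a few probes against the same hypothetical analyze() helper; the injected tokens are exactly the ones above:

```python
BASE = "i spent the whole afternoon debugging"
PROBES = ["", " if", " {}", " =P"]  # baseline, analytical, joy, anger

for probe in PROBES:
    tones = analyze(BASE + probe)["document_tone"]["tones"]
    print(repr(BASE + probe),
          {t["tone_name"]: round(t["score"], 2) for t in tones})
```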

going forward

For the last week of Rest of You, I tried to automate all of the trackers that I made in the past seven weeks and feed them into a locally hosted visualization. I didn’t get quite that far, but I’m getting there:

I liked the viz I created last week, so my obvious first step was to automate the python scripts that retrieve the data. The productivity scores are requested from the RescueTime API, and my Chrome history is just a JSON converted from a SQLite database that Chrome maintains locally. Both python scripts update their datasets hourly. The JSONs are linked to a local d3.js visualization, so I can refresh at any time to see an update. The design could use some cleaning up, but the data is there.
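For the curious, the two collectors boil down to something like this (a sketch: the API key and output paths are placeholders, and the History path is the macOS default; Chrome locks the live database, hence the copy):

```python
import json
import shutil
import sqlite3
from pathlib import Path

import requests

RT_KEY = "YOUR_RESCUETIME_KEY"  # placeholder

def pull_rescuetime():
    """Fetch hourly productivity intervals from RescueTime's data API."""
    resp = requests.get("https://www.rescuetime.com/anapi/data", params={
        "key": RT_KEY,
        "format": "json",
        "perspective": "interval",
        "resolution_time": "hour",
        "restrict_kind": "productivity",
    })
    Path("rescuetime.json").write_text(json.dumps(resp.json()))

def pull_chrome_history():
    """Copy Chrome's locked SQLite history file, then dump visits to JSON."""
    src = Path.home() / "Library/Application Support/Google/Chrome/Default/History"
    shutil.copy(src, "/tmp/History")  # Chrome holds a lock on the original
    rows = sqlite3.connect("/tmp/History").execute(
        "SELECT url, title, last_visit_time FROM urls ORDER BY last_visit_time"
    ).fetchall()
    Path("history.json").write_text(json.dumps(rows))
```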

The next step was to automate the entrainment process as well. Rather than having to monitor my own mind-wandering, then muster the self-discipline to open up my six-minute entrainment meditation, a python script checks my productivity levels and opens the webpage if they dip below 50%:
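The check itself is small; a sketch along these lines, where the sketch URL is a placeholder and the pulse math mirrors RescueTime’s weighting of its -2..2 productivity levels:

```python
import time
import webbrowser

import requests

RT_KEY = "YOUR_RESCUETIME_KEY"                 # placeholder
ENTRAIN_URL = "http://localhost:8000/entrain"  # placeholder for the p5 sketch

def hourly_pulse():
    """Approximate a 0-100 productivity pulse for the latest logged hour."""
    rows = requests.get("https://www.rescuetime.com/anapi/data", params={
        "key": RT_KEY, "format": "json", "perspective": "interval",
        "resolution_time": "hour", "restrict_kind": "productivity",
    }).json()["rows"]
    latest = [r for r in rows if r[0] == rows[-1][0]]  # newest hour's rows
    total = sum(r[1] for r in latest)                  # seconds logged
    weighted = sum(r[1] * (r[3] + 2) / 4 for r in latest)  # -2..2 -> 0..1
    return 100 * weighted / total if total else 100

while True:
    if hourly_pulse() < 50:
        webbrowser.open(ENTRAIN_URL)  # six minutes of "meditation"
    time.sleep(3600)  # check hourly
```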


If I “meditate”, the event gets encoded as a bright teal line in the viz, so I can see how the entrainment affects my behavior. I was apparently pretty productive this week, and didn’t trigger the entrainment script at all, so I had to waste some time this morning to see it in action:

Finally, I also got around to hooking up my python keylogger to IBM Watson’s Tone Analyzer API. I was hoping to feed its analysis into my visualization as well, to see if mood was correlated to mind-wandering, but the results were a little questionable:

Moving forward, I’d also like to integrate Affectiva’s attention, engagement, and valence variables into the viz; I just need to figure out how to open the locally hosted demo in the background, so it captures a candid moment for a few minutes every hour. I would also like to figure out how to automatically count the number of windows and browser tabs I have open, as I’m pretty confident these are reliable markers for stress and mind-wandering. Finally, I plan to create a questionnaire that prompts me every hour, to help corroborate the correlations in all my tracked data.
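For the tab count, one macOS route I’m eyeing is Chrome’s AppleScript dictionary, shelled out from python; a sketch (assumes Chrome is running):

```python
import subprocess

# AppleScript that walks Chrome's windows and sums their tabs.
COUNT_TABS = '''
tell application "Google Chrome"
    set tabCount to 0
    repeat with w in windows
        set tabCount to tabCount + (count of tabs of w)
    end repeat
    return tabCount
end tell
'''

def chrome_tab_count():
    """Ask Chrome, via osascript, how many tabs it has open."""
    out = subprocess.run(["osascript", "-e", COUNT_TABS],
                         capture_output=True, text=True, check=True)
    return int(out.stdout.strip())
```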

QH W5 HW: Quant Self Intervention

I’ve been in grave danger ever since discovering Sex and the City on Amazon Video.

I normally avoid all media like the plague, partially because I find the content objectionable for SJW reasons, partially because of my own insecurities which I will probably intellectualize away with SJW reasons forever, but mostly because I can’t trust my addictive personality to watch even one video without immediately downward-spiraling for ten hours into a pit of shame and self-loathing.

The one exception I make is for SATC, for no good reason other than the fact that I was able to cobble together the entire series on the cheap during my phase as a Housing Works regular (this is generally how I consume media: once everyone’s completely over it). Despite this DVD collection’s efficacy as a coping mechanism, its inconvenient (and now obsolete, thanks to Apple’s sanctimony) physical form was never a threat to my daily functioning.

Until:

By some loving grace of God, I only discovered this year that SATC was included with Amazon Prime, but somehow I’ve already watched three seasons of it, plus the first movie, and 1.5 seasons of SJP’s new show, Divorce, which I highly do not recommend, and only watched while nursing the sugar headache that SJP’s younger self tends to cause (I’m Team Kim).

So this week, I decided to put an end to this nonsense. RescueTime, a tracker that I installed near the beginning of QH, converts my activity into a handy “productivity” score, one that I get to define by categorizing any website or application I use on a scale from “Very Productive” to “Very Distracting”:

I decided to use python to grab this score every hour through RescueTime’s API, and to open my slightly meditative, mostly masochistic, brain-entraining p5 sketch if the score dips under 50%:

Here it is in action:


The idea of this intervention is to allow myself room to indulge in “very distracting” activity if I need it, but to catch myself before I spiral out of control and have to live with the concomitant guilt forever. The “entrainment” part, regardless of whether it actually entrains my brain, is at the very least a way of resetting myself and my OCD.

This d3 sketch illustrates how my activity changes before and after “entrainment”, and I’m working on automating the Chrome history collection so that I can have a viz that updates automatically in real(-ish) time.

talking to the elephant

This week, I did a small survey on talking to my subconscious. I thought the assignment was a perfect excuse to try a sensory deprivation tank—at least, the Brooklyn version of one:

Unfortunately, the session for me was merely an hour’s worth of boredom and discomfort; not at all the spiritual experience one guest described, in which “God spoke to [her] in [her] own voice.”

The assignment was also a good excuse to break out my entrainment studies from last semester, and I created a viz of my Chrome history (a good metric of mind-wandering for me) one hour before and after “entrainment”.

Lastly, I made an appointment with Dr. Donatone over at the Health and Wellness Center across the street. After a short interview, she decided I required hypnosis for my hypodermic needle phobia. The hypnosis lasted about ten minutes, and was simply a form of meditation in which, after a short body scan, I was forced to imagine cartoon drawings of needles and tourniquets and report aloud how my body responded to them (spoiler: my response was absolute fear and loathing across the board, even when she asked me to imagine a syringe made out of balloons).