Final

Background:

This semester, I began work on a system of trackers for a whole host of potential (and evidenced) metrics of depression, in hopes of monitoring its cyclical nature and identifying correlations with my activity and environment. Because I had done a lot of prior research, I had specific metrics in mind, but the appropriate apps were often only available for iOS, didn’t provide an API, didn’t track with enough granularity, or didn’t exist at all.

Being a grad student, I have not the funds for an iPhone (new or old), so I decided to put my newly acquired Python skills to the test.

 

Data collection with homemade trackers:

  1. Mood Reporter: Because affect is difficult to measure, psychiatry traditionally employs self-administered questionnaires as diagnostic tools for mood disorders; these usually attempt to quantify the severity of DSM-IV criteria. The depression module is called the PHQ-9, and I’ve adapted several of its questions into my own questionnaire, which Python deploys every hour via the command line (sketched after this list):

    The responses are then appended to a TSV:
  2. Productivity: Via Python and the RescueTime API, my productivity score is appended to a JSON file every hour (sketched below):
  3. Facial analysis: Via my laptop’s webcam, the Affectiva API analyzes my face for a minute every hour; all of its responses are saved to a JSON file. My Python script grabs the min and max attention and valence values, the expressions made (plotted with emoji), and the number of times I blinked (calculated by counting how many times the eyeClosure variable hit 99.9% and dividing by 2; see the sketch after this list). These calculations are then appended to another JSON file that feeds into my visualization. The final entry for each hour looks like this:

  4. Keylogger Sentiment Analysis: The idea here is simply to discern the sentiment of everything I type. I wrote a keylogger in Python, which collects any coherent phrase to be sent to IBM Watson’s Tone Analyzer every hour (sketched below). The response looks like this:

    The API provides several tone categories: joy, confidence, analysis, tentativeness, sadness, fear, and anger.
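
To give a sense of how the mood reporter works, here’s a minimal sketch; the questions, 0-3 scale, and file path are placeholders rather than my exact adaptation:

```python
# mood_reporter.py: a minimal sketch of the hourly check-in.
# The questions, 0-3 scale, and output path are placeholders.
import csv
from datetime import datetime

QUESTIONS = [
    "Little interest or pleasure in doing things? (0-3) ",
    "Feeling down, depressed, or hopeless? (0-3) ",
    "Trouble falling or staying asleep, or sleeping too much? (0-3) ",
]

def ask(question):
    """Prompt until the answer is an integer from 0 to 3."""
    while True:
        answer = input(question).strip()
        if answer in {"0", "1", "2", "3"}:
            return int(answer)
        print("Please enter 0, 1, 2, or 3.")

def main(path="mood_log.tsv"):
    row = [datetime.now().isoformat()] + [ask(q) for q in QUESTIONS]
    # Append one row per check-in; a scheduler (e.g. cron) runs this every hour.
    with open(path, "a", newline="") as f:
        csv.writer(f, delimiter="\t").writerow(row)

if __name__ == "__main__":
    main()
```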
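
The productivity pull is sketched below; the query parameters and the layout of my JSON log here are assumptions, not the exact script:

```python
# rescuetime_pull.py: a sketch of the hourly productivity pull.
# The query parameters and my JSON log layout here are assumptions.
import json
import os
from datetime import datetime

import requests

API_URL = "https://www.rescuetime.com/anapi/data"

def fetch_productivity(api_key):
    """Request today's productivity data, summarized by hour."""
    params = {
        "key": api_key,
        "format": "json",
        "perspective": "interval",
        "resolution_time": "hour",
        "restrict_kind": "productivity",
    }
    response = requests.get(API_URL, params=params, timeout=30)
    response.raise_for_status()
    return response.json()

def append_entry(path="productivity.json"):
    data = fetch_productivity(os.environ["RESCUETIME_KEY"])
    entry = {"timestamp": datetime.now().isoformat(), "rows": data.get("rows", [])}
    log = []
    if os.path.exists(path):
        with open(path) as f:
            log = json.load(f)
    log.append(entry)  # one entry per hour
    with open(path, "w") as f:
        json.dump(log, f, indent=2)

if __name__ == "__main__":
    append_entry()
```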
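
Here’s a minimal sketch of the Affectiva post-processing step; the per-frame key names and file paths are assumptions rather than my actual setup:

```python
# affectiva_summary.py: a sketch of the hourly post-processing step.
# It assumes the raw responses were flattened to per-frame dicts with
# "attention", "valence", and "eyeClosure" keys; adjust to the real layout.
import json
from datetime import datetime

def summarize(frames):
    attention = [f["attention"] for f in frames]
    valence = [f["valence"] for f in frames]
    # Blink estimate: count the frames where eyeClosure hits 99.9 and halve it.
    closures = sum(1 for f in frames if f["eyeClosure"] >= 99.9)
    return {
        "timestamp": datetime.now().isoformat(),
        "attention_min": min(attention),
        "attention_max": max(attention),
        "valence_min": min(valence),
        "valence_max": max(valence),
        "blinks": closures // 2,
    }

if __name__ == "__main__":
    with open("affectiva_raw.json") as f:
        frames = json.load(f)
    # One JSON object per line, one line per hour, for the dashboard to read.
    with open("affectiva_hourly.json", "a") as f:
        f.write(json.dumps(summarize(frames)) + "\n")
```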
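
And a minimal sketch of the hourly Tone Analyzer call; the service URL, credential handling, and buffer/log paths are placeholders:

```python
# tone_hourly.py: a sketch of the hourly Tone Analyzer call.
# The service URL, credential handling, and file paths are placeholders.
import json
import os
from datetime import datetime

import requests

TONE_URL = "https://gateway.watsonplatform.net/tone-analyzer/api/v3/tone"

def analyze(text):
    """Send the hour's buffered phrases to Tone Analyzer; return the parsed JSON."""
    response = requests.post(
        TONE_URL,
        params={"version": "2017-09-21"},
        auth=("apikey", os.environ["WATSON_APIKEY"]),
        json={"text": text},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

def main(buffer_path="keylog_buffer.txt", log_path="tone_log.json"):
    with open(buffer_path) as f:
        text = f.read().strip()
    if not text:
        return
    entry = {"timestamp": datetime.now().isoformat(), "result": analyze(text)}
    log = []
    if os.path.exists(log_path):
        with open(log_path) as f:
            log = json.load(f)
    log.append(entry)
    with open(log_path, "w") as f:
        json.dump(log, f, indent=2)
    # Clear the buffer once its contents have been analyzed.
    open(buffer_path, "w").close()

if __name__ == "__main__":
    main()
```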

 

The Dashboard:

In order to make sense of any of this data, I would need a dashboard. What was important to me was an environment where potential correlations could be spotted; since much of this is speculative, that basically meant doing a big data dump into the browser. I visualized everything in d3js.

My local dashboard has access to the hourly updated data, which is unbelievably satisfying; the public version has about 2.5 weeks’ worth.

 

Next steps:

I’m in the process of building yet another tracker: a Chrome extension which will record my tab/window activity (the amount of which is probably/definitely positively correlated with stress and anxiety in my life!).

I would also like to add a chart that allows me to compare the trendlines of all the metrics, as a preliminary attempt to guess at correlations. This will definitely require me to do a lot of data reformatting.
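
As a rough sketch of the kind of reshaping this will take, something like the following (assuming pandas, plus the mood TSV and hourly Affectiva summaries sketched above) would line two of the logs up on a shared hourly index:

```python
# reshape.py: a rough sketch of lining two of the logs up on a shared hourly index.
# File layouts and column names follow the sketches above and are assumptions.
import pandas as pd

# Self-reported mood: one TSV row per hourly check-in.
mood = pd.read_csv("mood_log.tsv", sep="\t", header=None,
                   names=["timestamp", "q1", "q2", "q3"], parse_dates=["timestamp"])
mood = mood.set_index(mood["timestamp"].dt.floor("H")).drop(columns="timestamp")
mood["mood_total"] = mood.sum(axis=1)  # simple per-hour severity score

# Affectiva summaries: one JSON object per line, one line per hour.
face = pd.read_json("affectiva_hourly.json", lines=True)
face["timestamp"] = pd.to_datetime(face["timestamp"])
face = face.set_index(face["timestamp"].dt.floor("H")).drop(columns="timestamp")

# One wide table, one row per hour, ready for a trendline comparison chart.
hourly = mood.join(face, how="outer").sort_index()
hourly.to_csv("hourly_metrics.csv")
```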

I also need to visualize the data from the tracking apps I did download (Google Fit and Exist.io), and include other environmental information like weather, calendar events, etc.

Honestly, I will probably be working on this for the rest of my life lol

API of You W2 + W3 Homework

For the third week’s assignment, I finessed last week’s viz into something much more coherent.

For the final, I would like to create a meaningful, comprehensive dashboard for all the data I’ve collected with my homemade trackers. I’ve chosen to measure several facets of my life, motivated by scientific evidence and/or personal belief that they may be metrics for stress, anxiety, and/or depression. Currently, this data is either scattered across isolated visualizations or just sitting around in JSON/CSV/TSV files. Additionally, it is only tracked and available on my local machine.

First, there’s my keylogger sentiment data:

This “mind wandering” viz, which receives data from my Chrome history and the RescueTime API:

Data from Affectiva’s emotion recognition model, which I am mostly using for valence and engagement (the viz for which clearly needs work):

Most importantly, I’d like to figure out some way to visualize my self-reported mood data, which I’m prompted for every hour:

Time allowing, I would also like to include a report on my daily photo subjects, similar to this Flickr archive analysis I did with the Clarifai API:

There’s also geolocation and physical activity/sleep data, tracked by apps on my phone, that I’d like to include.

 

API of You W1 HW

I’ve worked a lot with JSONs before, so for this week’s assignment, I decided to visualize a big JSON file that I’ve been putting off for a while: the sentiment analysis results of about 20 days’ worth of keylogs. Below is an example of what a (smaller) object might look like for an hour’s worth of selected logs:

I’ve been procrastinating on visualizing this data because it’s nested enough to require effort, but here it is:

Not sure why I have a four-day gap in the middle of my data!

There are eight “tones” possible with IBM Watson’s Tone Analyzer API (color-coded above), and sentences are often assigned several tones at once. I added a rollover tooltip that displays the sentence in question; for this reason, I’m not yet ready to put the live viz online 🙂