QH W2 HW: Document Your Methodology

JENNA XU
QUANT HUMANISTS
SPRING 2018
05/02/2018

Forget all the trackers I wrote about last week. I’ve met someone new.

MIT’s Affectiva uses computer vision and machine learning to read emotions from facial biometrics, in real time, through webcam input. I downloaded a browser demo to play around with and see what the data would look like. Happily, the SDK spits it all out in a nice JSON format:
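For context, this is roughly where that JSON comes from: a minimal sketch adapted from the affdex.js camera demo, with a placeholder element ID and dimensions.

```js
// Minimal affdex.js setup, adapted from the browser demo.
// "affdex_elements" is a placeholder div that hosts the webcam feed.
var divRoot = document.getElementById("affdex_elements");
var detector = new affdex.CameraDetector(
  divRoot, 640, 480, affdex.FaceDetectorMode.LARGE_FACES
);

// Ask the SDK to estimate everything it can.
detector.detectAllEmotions();
detector.detectAllExpressions();
detector.detectAllEmojis();
detector.detectAllAppearance();

// Fires once per processed frame with an array of face objects,
// each holding the emotions, expressions, emojis, and appearance metrics.
detector.addEventListener("onImageResultsSuccess", function (faces, image, timestamp) {
  if (faces.length > 0) {
    console.log(JSON.stringify(faces[0], null, 2));
  }
});

detector.start();
```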

The big question was figuring out what data to collect, and how frequently. The demo seemed to return data for every frame, which is way too granular, especially considering that I intend to collect over the long term.

After some experimentation, I decided to record only the emotion variables that reached a value of 95 out of 100, and the expression variables that reached 99.5 out of 100 (these were more sensitive). Each time one of those crossed its threshold, I also pushed the values for attention, valence, and engagement (since I’m most interested in tracking mind-wandering), along with the “dominant emoji” and a timestamp. I figured this would give me a pretty good picture of my mood shifts throughout the day, at a reasonable pace.
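In code, the recording step looks roughly like this. It’s a sketch of the idea rather than my exact script; the records array and the key names are placeholders.

```js
var EMOTION_THRESHOLD = 95;      // emotions are on a 0-100 scale
var EXPRESSION_THRESHOLD = 99.5; // expressions fire more easily, so a higher bar
var records = [];                // placeholder in-memory log

detector.addEventListener("onImageResultsSuccess", function (faces, image, timestamp) {
  if (faces.length === 0) return;
  var face = faces[0];
  var hit = {};

  // Keep only the emotion/expression variables that cross their thresholds this frame.
  Object.keys(face.emotions).forEach(function (key) {
    if (face.emotions[key] >= EMOTION_THRESHOLD) hit["emotion_" + key] = face.emotions[key];
  });
  Object.keys(face.expressions).forEach(function (key) {
    if (face.expressions[key] >= EXPRESSION_THRESHOLD) hit["expression_" + key] = face.expressions[key];
  });

  if (Object.keys(hit).length === 0) return; // nothing notable this frame

  // Always attach the mind-wandering variables, the dominant emoji, and a timestamp.
  hit.attention = face.expressions.attention;
  hit.valence = face.emotions.valence;
  hit.engagement = face.emotions.engagement;
  hit.dominantEmoji = face.emojis.dominantEmoji;
  hit.timestamp = timestamp;

  records.push(hit);
});
```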

Well, after a mere hour or two, my laptop fans were going at full speed, and a preliminary download of the data looked like this:


Existing quietly at a rate of 493,283 JSON columns / hour. 

To test the physical limits of my laptop, I decided to throw this thing into a d3 sketch:


NBD, just a webpage with 16,000 DOM elements. This is going well.
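The sketch itself is nothing fancy: it’s the classic d3 data join, one element per record, which is exactly why the DOM count explodes. Something like this (d3 v4+ syntax; the scales and data shape are placeholders):

```js
// Naive d3 pass: bind every record to its own <circle>.
// With ~16,000 records, that's ~16,000 DOM nodes in one SVG.
var svg = d3.select("body").append("svg")
    .attr("width", 960)
    .attr("height", 500);

var x = d3.scaleLinear()
    .domain(d3.extent(records, function (d) { return d.timestamp; }))
    .range([0, 960]);

var y = d3.scaleLinear()
    .domain([-100, 100]) // valence runs from -100 to 100
    .range([500, 0]);

svg.selectAll("circle")
    .data(records)
  .enter().append("circle")
    .attr("cx", function (d) { return x(d.timestamp); })
    .attr("cy", function (d) { return y(d.valence); })
    .attr("r", 2);
```

If I keep the data this dense, rendering to a single canvas instead of thousands of SVG nodes would probably be the saner move.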

Also, just kidding about the other trackers: I still plan on using/hacking many of them. I just got a little, uh, sidetracked this week. I also tried out the Beyond Reality JS face tracking library, which was very impressive, but Affectiva can do everything it does and more. 😍

5 thoughts on “QH W2 HW: Document Your Methodology”

  1. Hi Jenna!

    Thanks for your work here!

    > Forget all the trackers I wrote about last week. I’ve met someone new.

    Lol

    > I also pushed the values for attention, valence, engagement—because I’m most interested in tracking mind-wandering—as well as the “dominant emoji” and a timestamp. I figured this would give me a pretty good picture of my mood shifts throughout the day, at a reasonable pace.

    Cool to see you’ve already started to think about data filtering and focusing in on the relevant parameters for your analysis.

    > Also, just kidding about the other trackers, I still plan on using/hacking many of them. I just got a little uh, sidetracked this week. I also tried out the Beyond Reality js face tracking library, which was very impressive, but Affectiva can do everything it does and more. 😍

    Super nice insights into the tooling landscape. This is one of those cases where it is handy to ping-pong ideas back and forth between what is possible and what ideas you have in mind. Ideally this is what it means to be at the “bleeding edge,” where things aren’t so straightforward, but where the possibility-scape is being formed by your explorations and practice. Keep up the explorations!

    As for implementation, rather than real-time attention and mood tracking, you might consider doing the analysis in post. For example, you might:
    - collect webcam shots at X intervals, then run the facial recognition and sentiment/mood analysis over them image by image (see the sketch after this list);
    - then correlate the sentiments mapped to those images with the other data you’ve collected about your mood, valence, and engagement.
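    A rough sketch of that capture loop, using the standard getUserMedia / canvas APIs (the interval and the storage helper are just placeholders):

    ```js
    // Grab the webcam stream once, then snapshot it on a timer.
    var video = document.createElement("video");
    navigator.mediaDevices.getUserMedia({ video: true }).then(function (stream) {
      video.srcObject = stream;
      video.play();
    });

    var INTERVAL_MS = 5 * 60 * 1000; // "X interval" (e.g. every 5 minutes)

    setInterval(function () {
      if (!video.videoWidth) return; // stream not ready yet

      // Draw the current frame to an offscreen canvas and serialize it.
      var canvas = document.createElement("canvas");
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;
      canvas.getContext("2d").drawImage(video, 0, 0);

      // saveForLater is a hypothetical helper: stash the image + timestamp,
      // then run the facial / mood analysis over the batch in post.
      saveForLater(canvas.toDataURL("image/png"), Date.now());
    }, INTERVAL_MS);
    ```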

    There are downsides, of course, such as losing the real-time feedback, but you might think about ways to still prompt things like push notifications that could be useful interventions for the variables you’re tracking.

    Excellent!

  2. 16,000 dom elements, lol! +1 to Joey’s recommendations here, but this is such a cool idea and set of sketches. Looking forward to reading how you progress with these data 🙂

  3. […] forward, I’d also like to integrate Affectiva’s attention, engagement, and valence variables into the viz—I just need to figure out how to open the locally-hosted demo in the background, so […]
