Forget all the trackers I wrote about last week. I’ve met someone new.
Affectiva, an MIT Media Lab spin-off, uses computer vision and machine learning to infer emotions from facial expressions, in real time, from webcam input. I downloaded a browser demo to play around with and see what the data would look like. Happily, the SDK spits it all out in a nice JSON format:
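For context, here's roughly what wiring up the browser SDK looks like; a minimal sketch assuming the affdex JS API (a `CameraDetector` plus its `onImageResultsSuccess` callback) as I understand it from the docs, which hands you per-frame results as plain objects you can `JSON.stringify`:

```js
// Minimal sketch of the Affectiva browser SDK setup (assumes the affdex
// script is loaded and a <div id="affdex_elements"> exists for the preview).
var divRoot = document.getElementById("affdex_elements");
var detector = new affdex.CameraDetector(
  divRoot, 640, 480, affdex.FaceDetectorMode.LARGE_FACES
);

// Ask for everything, so the per-frame results include the emotion,
// expression, and emoji classifiers.
detector.detectAllEmotions();
detector.detectAllExpressions();
detector.detectAllEmojis();

// Fires once per processed frame, with an array of detected faces.
detector.addEventListener("onImageResultsSuccess", function (faces, image, timestamp) {
  if (faces.length > 0) {
    // Something like: {"emotions":{"joy":99.2,...},"expressions":{...},"emojis":{...}}
    console.log(timestamp, JSON.stringify(faces[0]));
  }
});

detector.start();
```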
The big question was figuring out what data to collect, and how frequently. The demo seemed to be returning data for every frame, which was way too granular, especially since I intend to collect over the long term.
After some experimentation, I decided to record only the emotion variables that reached a value of 95 out of 100, and the expression variables that reached 99.5 out of 100 (these were more sensitive, so they needed a stricter cutoff). With each of these samples, I also pushed the values for attention, valence, and engagement (I'm most interested in tracking mind-wandering), along with the “dominant emoji” and a timestamp. I figured this would give me a pretty good picture of my mood shifts throughout the day, at a reasonable sampling rate.
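The recording logic boils down to a threshold filter over each frame's results. A sketch, assuming the callback shape above (and assuming `attention` lives under expressions while `valence` and `engagement` live under emotions):

```js
// Sketch of the filter: only keep a sample when some emotion crosses 95/100
// or some expression crosses 99.5/100 (expressions fire more readily,
// hence the stricter cutoff).
var EMOTION_THRESHOLD = 95;
var EXPRESSION_THRESHOLD = 99.5;
var samples = [];

function maybeRecord(face, timestamp) {
  var emotionSpike = Object.keys(face.emotions).some(function (k) {
    return face.emotions[k] >= EMOTION_THRESHOLD;
  });
  var expressionSpike = Object.keys(face.expressions).some(function (k) {
    return face.expressions[k] >= EXPRESSION_THRESHOLD;
  });

  if (emotionSpike || expressionSpike) {
    samples.push({
      timestamp: timestamp,                   // seconds since the detector started
      attention: face.expressions.attention,  // assumed to live under expressions
      valence: face.emotions.valence,         // -100 (negative) to 100 (positive)
      engagement: face.emotions.engagement,
      dominantEmoji: face.emojis.dominantEmoji,
      emotions: face.emotions,
      expressions: face.expressions
    });
  }
}
```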
Well, after a mere hour or two, my laptop fans were going at full speed, and a preliminary download of the data looked like this:
To test the physical limits of my laptop, I decided to throw this thing into a d3 sketch:
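Nothing fancy; a rough sketch of the kind of plot I mean, assuming d3 v4+ and the `samples` array from above, with valence over time:

```js
// Rough d3 (v4+) sketch: one dot per recorded sample, valence over time.
var margin = { top: 20, right: 20, bottom: 30, left: 40 };
var width = 800 - margin.left - margin.right;
var height = 300 - margin.top - margin.bottom;

var svg = d3.select("body").append("svg")
  .attr("width", width + margin.left + margin.right)
  .attr("height", height + margin.top + margin.bottom)
  .append("g")
  .attr("transform", "translate(" + margin.left + "," + margin.top + ")");

var x = d3.scaleLinear()
  .domain(d3.extent(samples, function (d) { return d.timestamp; }))
  .range([0, width]);
var y = d3.scaleLinear().domain([-100, 100]).range([height, 0]);

svg.append("g").attr("transform", "translate(0," + height + ")").call(d3.axisBottom(x));
svg.append("g").call(d3.axisLeft(y));

svg.selectAll("circle")
  .data(samples)
  .enter().append("circle")
  .attr("cx", function (d) { return x(d.timestamp); })
  .attr("cy", function (d) { return y(d.valence); })
  .attr("r", 3);
```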
Also, just kidding about the other trackers; I still plan on using/hacking many of them. I just got a little, uh, sidetracked this week. I also tried out the Beyond Reality Face JS tracking library, which was very impressive, but Affectiva can do everything it does and more. 😍