Dream: My dream is to quantify depression, and to use those numbers to establish a centralized, comprehensive system that empowers both the afflicted and their medical professionals to better understand, manage, and treat the cyclical nature of the disorder. I imagine a tool that puts an ever-on-call psychologist, neurologist, psychiatrist, and personal assistant in the pocket of patients who lack the energy and motivation to care for themselves.
Vision: I would like to build a system of wearables that monitors tracked biometrics and self-reported markers of depression. With the user’s baseline state as a reference, the system would employ machine learning and the user’s self-reported corroborations to label deviations in the biometrics. Once the system learns to read the user’s mood, it will provide recommendations on self-care and subsequently learn which methods work best for the user, and when. The system will also retain an archive of visualized data for medical professionals to assess during appointments.
Goal: My goal for this course is to create an EEG wearable that I can use on a daily basis. The headset will be 3D-printed and will use a Bluetooth-enabled Arduino to send data to my computer or phone; if this proves unreliable, I will instead save the data to an SD card and upload it at the end of each day. The EEG will record my brainwaves, which will hopefully reveal when I blink (an indicator of mind-wandering) and whether I’m focused (theta dominance in the prefrontal ACC).
In addition, I will be using some ready-made and beta trackers for my Quant Humanists class, and I hope to export all the tracked data into one system that visualizes it together. This endeavor will also be supported by the course API of You, which starts in the second half of the semester.
Plan – What is your game plan to achieve the goal in 10 weeks? What research is required? What are the milestones? Try to come up with about 5 milestone dates toward completion at the beginning of May.
Performance Measure, P:
Machine learning formulation:
Supervised Learning – Classification: predict a discrete category label from labeled examples
Supervised Learning – Regression: predict a continuous value from labeled examples
Supervised Learning – Structured Output: predict a structured object (e.g. a sequence or tree) rather than a single value
Unsupervised Learning – Density Estimation: learn the probability distribution that generated the data
Unsupervised Learning – Denoising: recover a clean example from a corrupted input
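As a toy illustration of the density-estimation idea above (the function name and bin count are my own choices, not from the course notes), a histogram estimator counts samples per bin and normalizes so the estimated density integrates to 1:

```python
# Histogram density estimation: a minimal illustrative sketch.
def histogram_density(samples, bins=4):
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins
    counts = [0] * bins
    for s in samples:
        # clamp the maximum value into the last bin
        i = min(int((s - lo) / width), bins - 1)
        counts[i] += 1
    n = len(samples)
    # normalize so that sum(density) * width == 1
    return [c / (n * width) for c in counts]

density = histogram_density([0.1, 0.2, 0.25, 0.5, 0.55, 0.9, 1.0])
```

Real density estimators (kernel density estimation, mixture models) smooth this out, but the principle is the same.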
Gradient descent: iteratively steps parameters in the direction opposite the gradient of the loss, in order to find a local minimum
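A minimal sketch of that idea in Python (the function, learning rate, and toy objective are my own choices for illustration): minimize f(x) = (x − 3)², whose gradient is 2(x − 3).

```python
# Gradient descent sketch: repeatedly step opposite the gradient.
def gradient_descent(grad, x0, lr=0.1, steps=100):
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # move downhill
    return x

# Minimize f(x) = (x - 3)^2; its gradient is 2 * (x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)  # converges near 3
```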
Solutions to under/overfitting: adjust model capacity, add regularization, or gather more training data
29 Jan 2018
Write a short reflection on your current relationship with self-tracking (e.g. hopes, dreams, perceptions), the questions you have about self-tracking and how it could help or harm you, and how you hope the course will facilitate your interests. Write about which questions you’ve identified to track, how you plan to track those variables of interest, what challenges you expect to encounter, and what you hope to learn.
For lack of self-discipline, I don’t currently have a relationship with self-tracking, but at ITP I ultimately hope to track the biometrics of depression and its related factors, and to feed this data into a neural network that can predict my moods and help monitor the cycles of depression. I hope this course will allow me to explore my options for tracking these variables and perhaps even integrate them into an automated system of my own making. Below is my wish list of questions/methods that I want to work on for this course:
My main challenge will be to actually implement any of this! Most of these apps come from academic studies and are not available to the masses, and the ones that are available are iOS-only.
29 Jan 2018
In this profile of Ishac Bertran, there are two pieces related to personal data collection that I liked. The first, The Memory Device, is a simple recording device of a simple data form: timestamps. The user presses a physical button, which prompts a tiny line (the timestamp) to be drawn on a tiny vertical screen, the length of which represents the day. Each day is saved, so you can scroll through your history to see reminders of moments that you wanted to remember.
The technocrats have made a dazzlingly advanced and lucrative field out of data science, and nowadays you can’t go an hour without hearing about how so much data has gone through so deep a neural network to now so reliably predict a topic once so inscrutable to stodgy old human intelligence. Given this context, I thought this project was rather poetic and refreshing for its utter lack of “intelligence” and granularity.
I also thought it was interesting because I, too, had fantasized about using physical mechanisms to mark common events that I might want to collect, such as compulsions, mood states, and productivity. Consider the quick press of a small button on a bracelet on your wrist, compared to the long and disruptive process of turning on your phone, tapping in the password, opening an app, finding the appropriate tracker category, and then finally being able to mark the occasion of the birth of this blog post. And since you’re on your phone already, you might as well tend to the notifications that have accumulated while you mustered up the willpower to actually start on your homework.
Ishac also produced a series of books, each containing a year’s worth of his Google searches. That’s it! But it’s a clever little comment on internet privacy; there’s something so ironic and immense and terrifying about having all your private and passing curiosities—which one considers such anonymous, unworthy and insignificant dust in the digital ether—not only meticulously recorded by the biggest internet company ever, but enshrined in something as prestigious as print media, available for anyone to come along and flip through.
I personally thought it was interesting because I’ve been meaning to do something with my Google data for a while. Being an Android user with poor memory, I’ve always kind of delighted in having such an assiduous witness to my life. In fact, my memory is so poor that I keep forgetting this data is available to me, so I’m keeping these links here for future reference:
A similar and even more invasive project is HTTPrint, a Chrome extension that records your internet browsing activity. The data it collects includes the pages you visit, the included images and text, and the time you spent on each. You can then print the data like a newspaper.
I like the idea of this because the content you consume online almost certainly has some degree of influence on your mood, especially considering how internet browsing is such an inveterate daily habit for most of us. I would love to run sentiment analysis on both the words coming into my brain and the words coming out, to see how they influence each other and my general mood patterns.
Finally, I love this gesture by Eugenia Kuyda, who fed old chat logs with her deceased best friend into a TensorFlow neural network, to create a chat bot that spoke like him. The bot’s selected-for-publication responses are not only logical and relevant, but also very idiosyncratic. This makes it seem quite powerful as a project and experiment, but still obviously inadequate when compared to the real person.
In the end, this work may be yet another questionable application of AI in a wider, ongoing debate over ethical use cases. But ultimately I’m hopeful that machine learning can learn enough about us via our personal data to teach us about our habits, reveal opportunities for improvement, and facilitate us in our daily lives.
KNN = k-nearest neighbors; classifies an input by majority vote among the labels of its k closest training examples
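A bare-bones sketch of the idea (pure Python; the function name and toy data are invented for illustration): classify a query point by the majority label among its k closest training points.

```python
from collections import Counter

# k-nearest-neighbors sketch: majority vote among the k closest points.
def knn_predict(train, query, k=3):
    # train is a list of (features, label) pairs; sort by squared Euclidean distance
    nearest = sorted(train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], query)))
    labels = [label for _, label in nearest[:k]]
    return Counter(labels).most_common(1)[0][0]

points = [((0, 0), "A"), ((0, 1), "A"), ((1, 0), "A"),
          ((5, 5), "B"), ((5, 6), "B"), ((6, 5), "B")]
```

For example, `knn_predict(points, (1, 1))` votes among (0, 1), (1, 0), and (0, 0) and returns "A".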
to create a git repo:
git init
to commit a change:
git add index.html
git add -A
git commit -m "added index.html"
to push a repo to github:
git remote add origin https://github.com/xujenna/project.git
git push origin master
from then on:
git push origin master
To run someone else’s project:
git clone https://github.com/xujenna/project.git
To get updates from someone else’s project (from within the local repo):
git pull
To make a copy of someone else’s repo under your own GitHub account (so you can push to it): fork it from the existing repo’s GitHub page, then clone your fork
To prevent files from uploading to GitHub, create a file called .gitignore containing a list of the files and folders (e.g. file.png, *.txt, images) that you want excluded