So I made my first foray into my life support project this week, using Dialogflow. Initially, I had little faith in its flexibility and anticipated having to train a word2vec model to facilitate the self-care/positive psychology “understanding” it would need, but upon further investigation the platform seems sufficient. I haven’t tried Fulfillments yet, though, which will be the real test.
While I haven’t finished planning my interventions yet, I translated what I do have into broad intents and entities in the console. Eventually, I will have to connect these to webhooks in order to serve up the actual interventions. I also had a lot of fun training its Knowledge Bases with transcripts from various relevant MOOCs, then testing how much it learned afterward (admittedly, probably more than I have). Some of the responses were wonky or ended abruptly, which is consistent with what you’d normally get if you asked Google Home a random question, so I had to go back into the training data and remove anecdotes and the like. The feature is in beta right now anyway, so I’ll withhold criticism until it’s official, but overall it’s pretty cool to be able to restrict your sources of information to ones you know are credible.
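To get a feel for what those webhooks will involve, here is a minimal sketch of the handler logic, assuming a hypothetical intent name (`request-intervention`) and placeholder intervention text. The request and response shapes follow Dialogflow ES’s standard webhook contract: the matched intent arrives under `queryResult.intent.displayName`, and a plain `fulfillmentText` field in the response becomes the bot’s reply.

```python
# Sketch of a Dialogflow ES webhook handler. The intent name and
# intervention text below are hypothetical placeholders, not from my
# actual agent.

# Hypothetical mapping from intent display names to intervention responses.
INTERVENTIONS = {
    "request-intervention": "Try a two-minute breathing exercise before we continue.",
}

def handle_webhook(request_json):
    """Map the matched intent to a canned intervention response."""
    intent = (
        request_json.get("queryResult", {})
        .get("intent", {})
        .get("displayName", "")
    )
    text = INTERVENTIONS.get(
        intent, "Sorry, I don't have an intervention for that yet."
    )
    # Dialogflow reads `fulfillmentText` as the reply shown/spoken to the user.
    return {"fulfillmentText": text}
```

In practice this function would sit behind an HTTPS endpoint (a Cloud Function, say) registered in the agent’s Fulfillment tab, with the JSON body parsed from the POST request.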
I had the Knowledge results preference at maximum strength, but I was still surprised to see it override follow-up intents even when I responded verbatim with training phrases:
The knowledge results the bot returned weren’t very helpful, but it would be nice to have them alongside the follow-up intent response; I couldn’t find an option in the console to combine the two.
The faux-AI in the Sandra podcast reminds me of Lauren McCarthy’s project, Get Lauren, where she performs the role of a smart home speaker, readily available to the user’s whims 24/7. I personally prefer narratives where the AI hype fails to live up to its potential and humans have to go back to connecting with other humans, rather than the sci-fi fantasy where robots integrate seamlessly into society and fulfill all our needs. In the context of modern-day Western society, where we are feeling isolated, lonely, and depressed at a greater scale than ever before, it’s truly grim to think about how we’re still pouring all our resources into swapping out human interaction for artificial intelligence; technology has done quite enough to separate us already.
In both the fictional Sandra and the actual Get Lauren project, users turn to these interfaces ostensibly for help, but the people behind them sometimes end up bearing witness to, and sharing in, small, intimate moments of their users’ lives. For the users of Get Lauren, this dynamic was clearly the entire appeal of the project, something they willingly paid for with their privacy and partial control over their households. For the users of Sandra, it was a concomitant result of humans’ natural inclination to connect with other humans/convincing human-like interfaces (see: people like me who thank voice assistants after they complete a task).