With Google Duplex slowly making calls (its rollout was slated to begin around Memorial Day and continue over the following months), new health-tracking capabilities coming to iOS and Android devices, and stronger AI features from Amazon and Google, the next 12 months are going to see some exciting, or just bizarre, implementations of AI. One of the big debates will be whether we can harness AI or whether it will end up controlling us. Can it lead us to be better individuals, and better to each other, or will it lead to more factionalism and bitterness, like the positive feedback loop of Facebook's political feeds?
So how can we direct AI to do good?
Step 1: Knowing What You Want
One of the challenges in directing an AI is first knowing what we want. That means sitting down and answering some very introspective questions:
- What do I like?
- What do I dislike?
- What do I want my future to look like?
- What things bring me happiness?
- What things am I frustrated about?
- What are the things I think I’m good at?
- What are the things that are challenging to me?
These and similar questions can help flesh out our goals and how we can create the life we want, and potentially let us use AI to steer us towards these goals, or at the very least, measure our progress.
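One way to make those introspective answers actionable is to turn each one into something measurable. Here's a minimal sketch in Python; the `Goal` structure, field names, and numbers are all my own assumptions about what an assistant might track, not any real product's format:

```python
from dataclasses import dataclass

# Hypothetical structure: an answer from Step 1 becomes a goal with a
# measurable metric, a baseline ("where am I today?"), and a target.
@dataclass
class Goal:
    name: str       # the goal in plain language
    metric: str     # what the assistant would actually measure
    baseline: float # current value
    target: float   # desired value

# Invented example data for illustration only.
goals = [
    Goal("sleep better", "hours of sleep per night", baseline=6.2, target=7.5),
    Goal("less frustration", "stressful episodes per week", baseline=9, target=4),
]

for g in goals:
    gap = g.target - g.baseline
    print(f"{g.name}: move {g.metric} from {g.baseline} to {g.target} ({gap:+.1f})")
```

The point of the structure is that progress becomes a number the assistant can check, rather than a feeling.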
Step 2: Creating Experiments
Looking at our goals, we need to come up with a list of hypothetical solutions so that we can measure their effects. We also need to measure where we are today and establish a baseline. If we take "sleeping better" as a desired outcome, we first need to baseline our bedtime, wake-up time, and sleep quality. Then, we can look at potential ways to improve sleep:
- Eating earlier in the evening
- Cold bath before bed
- Reducing blue light on screens
- Setting a consistent wake time
- Setting the thermostat to a lower temperature
It's likely that some of these will have a larger effect than others, but we'd need to run the experiments to see which one improves sleep the most.
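The baseline-then-experiment idea above can be sketched in a few lines of Python. All the sleep numbers here are invented for illustration, and in practice you'd want more nights per condition and only one change at a time:

```python
import statistics

# Hypothetical baseline week: hours of sleep per night before any change.
baseline_hours = [6.1, 5.8, 6.4, 6.0, 5.9, 6.3, 6.2]
baseline = statistics.mean(baseline_hours)

# Invented results from trying one intervention at a time.
experiments = {
    "eat earlier": [6.3, 6.5, 6.2, 6.6, 6.4],
    "reduce blue light": [6.8, 7.0, 6.9, 7.2, 6.7],
    "consistent wake time": [7.1, 7.3, 7.0, 7.4, 7.2],
}

# Rank interventions by average improvement over the baseline.
effects = {name: statistics.mean(nights) - baseline
           for name, nights in experiments.items()}
best = max(effects, key=effects.get)
print(f"Baseline: {baseline:.2f} h; best intervention: {best} ({effects[best]:+.2f} h)")
```

Comparing each intervention against the same baseline is what makes the ranking meaningful; without the baseline week, a +0.3 hour change could just be noise.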
Step 3: Volunteering Compliance
This is where we will need to potentially give up some control to AI to help us towards running these experiments and getting us to change our behaviour. There are at least three profiles that an AI assistant can take in getting us to comply.
The Nudnik is the gentlest: the AI assistant sends us messages and tells us "hey, it's bedtime… go do sleep" or "Go eat some dinner now" or "Set your phone to night mode". Basically, we're going to be nagged by our devices and surrounding tech until we comply with the prescribed intervention. We can always ignore or turn off these notifications, but the Nudnik might still be effective at getting us to comply.
The Nudger is a little more determined than the Nudnik. It might go ahead and make changes to settings automatically to reach compliance, e.g. putting a device in airplane mode, turning off the lights, rejecting certain calls. These settings and actions can be overridden by the user but by creating facts on the ground for the user, they’re more likely to comply.
The Nanny is the strictest interventionist. The user will not be able to override its settings. "No TV unless you eat your vegetables" means that no, you will not be able to watch YouTube on your phone in your room instead. This is the equivalent of anti-charity challenges like stickK's (e.g. if you don't lose 10 lbs in 2 months, $50 goes to a neo-Nazi organization).
If the Nanny delivers a big enough impact, we might be more willing to give up that control to the AI to move us towards being healthier and better… in ways we define.
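The three profiles differ in exactly one dimension: how much the user's override matters. A toy sketch makes that escalation explicit; the names come from the post, but the behaviour and return strings are illustrative assumptions, not a real assistant API:

```python
from enum import Enum

# The three compliance profiles described above, ordered by strictness.
class Profile(Enum):
    NUDNIK = 1  # notifies only; the user can ignore it
    NUDGER = 2  # changes settings; the user can undo them
    NANNY = 3   # enforces settings; the user cannot override

def handle_bedtime(profile: Profile, user_overrides: bool) -> str:
    """What happens at bedtime under each profile (hypothetical)."""
    if profile is Profile.NUDNIK:
        return "notification ignored" if user_overrides else "notification sent"
    if profile is Profile.NUDGER:
        return "settings reverted by user" if user_overrides else "phone set to night mode"
    return "screens locked until morning"  # Nanny: the override has no effect
```

Note that only the Nanny branch ignores `user_overrides` entirely; that single difference is what makes it the anti-charity-challenge equivalent.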
Step 4: Measuring and Learning
The last step is to see whether the interventions and compliance inducements are effective in getting us to our defined goals, and whether those goals actually make us happier, healthier people. The key advantage of an AI assistant over a human personal coach is that it can compare our interventions and inducements against potentially millions of others' and suggest better starting points.
A constant feedback loop is necessary to prevent a "runaway AI" that pushes us towards doing bad things or being bad to each other.
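The "compare against millions of others" advantage can be sketched as simple pooling: collect the effect sizes other users saw for each intervention, then start a new user on the one that worked best on average. The data and the plain averaging are my assumptions; a real system would need far more care about confounders and individual differences:

```python
from collections import defaultdict

# Invented (intervention, improvement in hours of sleep) pairs
# observed across many hypothetical users.
observed = [
    ("reduce blue light", 0.6), ("reduce blue light", 0.9),
    ("eat earlier", 0.2), ("eat earlier", 0.3),
    ("consistent wake time", 1.0), ("consistent wake time", 1.2),
]

# Pool the effects per intervention.
pooled = defaultdict(list)
for intervention, effect in observed:
    pooled[intervention].append(effect)

# A new user starts with the intervention that worked best on average,
# then their own results feed back into the pool.
starting_point = max(pooled, key=lambda k: sum(pooled[k]) / len(pooled[k]))
print(starting_point)
```

This is also where the feedback loop lives: each user's outcome updates the pooled estimates, so the recommended starting points keep improving, and a drift towards harmful recommendations would show up in the pooled data.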