What we’ll likely see in the coming generations of ambient voice devices like the Amazon Echo or Google Home is greater awareness of their environment and what’s happening around them. That awareness might come from new sensors or from folding more data sources into the interaction.
New sensors could answer questions like:
- Is it cold or hot in the room?
- Are the lights on or off?
- Is it loud in the room?
- Is it humid or dry?
With these insights, the device could prompt the user with suggestions:
“Hey [user], are you doing OK? It seems to be a little hot in here. Maybe I can turn down the thermostat?”
“You’re up late! Should I turn off the lights?”
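The suggestions above can be sketched as simple threshold rules over sensor readings. This is a minimal, hypothetical example; the reading names, thresholds, and message templates are my own assumptions, not any real device’s API.

```python
# Hypothetical sketch: map sensor readings to suggestion prompts.
# All field names, defaults, and thresholds are assumptions.

def suggest(readings, user="user"):
    """Return a list of prompts based on simple threshold rules."""
    prompts = []
    if readings.get("temperature_c", 21) > 26:
        prompts.append(
            f"Hey {user}, are you doing OK? It seems to be a little hot "
            "in here. Maybe I can turn down the thermostat?"
        )
    if readings.get("lights_on") and readings.get("hour", 12) >= 23:
        prompts.append("You're up late! Should I turn off the lights?")
    if readings.get("humidity_pct", 45) < 20:
        prompts.append("The air is quite dry. Should I start the humidifier?")
    return prompts
```

A real assistant would learn these thresholds and phrasings per user rather than hard-coding them, but the shape of the logic is the same.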
It’s also possible to change the interaction without prompting the user at all: for example, matching the device’s output volume to the noise level of the environment.
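One way to sketch that volume-matching behavior is a clamped linear map from ambient noise level to output volume. The dB range and volume scale here are invented for illustration, not taken from any real device.

```python
# Hypothetical sketch: scale output volume with ambient noise level.
# The ambient dB range (40-80) and volume range (20-100) are assumptions.

def volume_for_ambient(db):
    """Map an ambient noise level in dB to an output volume (0-100)."""
    quiet, loud = 40.0, 80.0          # assumed quiet/loud ambient levels
    lo, hi = 20, 100                  # assumed min/max output volume
    frac = (db - quiet) / (loud - quiet)
    frac = max(0.0, min(1.0, frac))   # clamp to [0, 1]
    return round(lo + frac * (hi - lo))
```

So a quiet room gets a soft response and a noisy one gets a louder response, with no prompt required.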
Other kinds of awareness can be gleaned without any new sensors:
- Whether you’re on your way home (via GPS on device)
- Whether you’re home (based on whether your phone is connected to the home’s WiFi)
- If other people are home (based on WiFi network traffic)
- If you have something coming up (based on calendar)
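The WiFi-based presence idea above can be sketched as a lookup against the router’s list of connected devices. The MAC addresses and the household mapping here are invented; a real system would pull the connected-device list from the router or network controller.

```python
# Hedged sketch: infer who is home from devices seen on the home WiFi.
# HOUSEHOLD and all MAC addresses are hypothetical examples.

HOUSEHOLD = {
    "alice": "aa:bb:cc:dd:ee:01",   # assumed owner -> phone MAC mapping
    "bob":   "aa:bb:cc:dd:ee:02",
}

def who_is_home(connected_macs):
    """Return the household members whose phone is on the WiFi."""
    seen = {mac.lower() for mac in connected_macs}
    return {name for name, mac in HOUSEHOLD.items() if mac in seen}
```

Combined with calendar and location data, even this crude signal lets the assistant decide, say, whether a reminder should be spoken aloud or sent silently to a phone.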
The biggest challenge ahead is teaching AI assistants how to behave in response to these insights. That is where the new value creation lies today.