
Me: OK Google, what voices do you have?

Google Assistant (GA): I got my voices from my engineers. I can change my voice if you’d like.

Me: OK

GA: …

Me: …???

Above was the exchange I just had with my Google Home. The issue was that there was no follow-up mic intent even though the context called for it. If a sentence ends like this, the expectation is that someone would respond.

Sure, Google is just stating a possibility that's conditional on my wanting something. However, normal usage creates the expectation of a response. This is clearly an edge case — but in becoming more of a general AI, Google Assistant will be encountering edge cases all the time.

The real challenge is how Google can test and learn from these edge cases. Here, there was no follow-up from me because the mic was off, so there's no indication that something went wrong.

The immediate solution I'd see is either to force a follow-on mode similar to Alexa's (apparently rolling out now), or to test phrases against humans and ask whether they expect a follow-on. The latter could be similar to a Project Voice setup.
