
As AI takes on a bigger role in providing us with information and interacting with us, there's going to be a growing need for us to understand why and how the AI reached certain decisions. This is especially true when the conclusions reached are incorrect.

Explaining decisions doesn't need to be overly complicated to start. In fact, exposing the reasoning to users is a good feedback mechanism for training the system.

Take the Alexa wake word for example. This evening, the Echo Show triggered several times on words that sounded nothing like "Alexa." What did it interpret as "Alexa"? I'd love to be able to see this in my Alexa console and correct the system.

Google has a version of this built into its STT service: it can return multiple interpretations of a speech-to-text result. If the user corrects it, it learns.
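To make that concrete, here is a minimal sketch of asking Google Cloud Speech-to-Text for several alternative transcriptions and their confidence scores, using the google-cloud-speech Python client. The audio file, sample rate and language code are placeholder assumptions, not values from the post.

```python
# Sketch: request multiple alternatives from Google Cloud Speech-to-Text.
# Assumes the google-cloud-speech client and a local 16 kHz WAV file
# ("utterance.wav" is a placeholder).
from google.cloud import speech

client = speech.SpeechClient()

with open("utterance.wav", "rb") as f:
    audio = speech.RecognitionAudio(content=f.read())

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
    max_alternatives=3,  # surface more than one interpretation
)

response = client.recognize(config=config, audio=audio)

for result in response.results:
    for alt in result.alternatives:
        # Each alternative carries the transcript the model considered
        # and its confidence -- the kind of reasoning worth showing users.
        print(f"{alt.confidence:.2f}  {alt.transcript}")
```

Surfacing those alternatives in a console is exactly the sort of lightweight explanation that also doubles as a correction channel.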

For the wake-word example, it would be great to be shown 3–4 generic reasons for the false trigger, an explanation of why this result fell into that particular category, and a comment box or yes/no feedback question to send back to Amazon. A rough sketch of what that feedback could look like follows.
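This is purely illustrative: the reason categories and field names below are invented for the sketch, not part of any actual Alexa API.

```python
# Hypothetical false-trigger feedback payload -- categories and field
# names are made up for illustration only.
FALSE_TRIGGER_REASONS = [
    "similar-sounding word",
    "background speech (TV/radio)",
    "noise misclassified as speech",
    "other",
]

def build_feedback(heard_as: str, reason: str, was_correct: bool, comment: str = "") -> dict:
    """Bundle the device's interpretation with the user's correction."""
    if reason not in FALSE_TRIGGER_REASONS:
        raise ValueError(f"unknown reason: {reason}")
    return {
        "interpreted_as": heard_as,      # what the device thought it heard
        "reason_category": reason,       # which generic explanation applied
        "user_says_correct": was_correct,
        "comment": comment,              # optional free-text feedback
    }

# Example: the Echo Show woke up on a word it heard as "electra"
print(build_feedback("electra", "similar-sounding word", False, "It was a song lyric"))
```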

Why did Google pick one Whole Foods location over another when I searched in Google Maps? If it picked the wrong one, it would be helpful for it to explain why.

Super smart AI becomes scary when it can’t explain to us how it reached a decision. This is perhaps an area of opportunity in the field.

Independent daily thoughts on all things future, voice technologies and AI. More at http://linkedin.com/in/grebler
