As AI takes a larger role in providing us with information and interacting with us, there will be a growing need to understand why and how the AI reached certain decisions. This is especially true when the conclusions reached are incorrect.
It doesn’t need to be overly complicated to start. In fact, exposing the reasoning to users is a good feedback mechanism for training the system.
Take the Alexa trigger, for example. This evening, the Echo Show triggered several times on words that sounded nothing like “Alexa.” What did it interpret as “Alexa”? I’d love to be able to see this in my Alexa console and correct the system.
Google has this mechanism built into its speech-to-text (STT) service: it returns multiple candidate interpretations of the audio, and if the user corrects the top result, the system learns.
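That multiple-interpretation approach is often called an n-best list. A minimal sketch of the idea, with invented names (`SttResult`, `record_correction`) rather than any real Google API:

```python
# Hypothetical sketch: an STT result carrying ranked alternatives
# (an "n-best list"), plus a feedback hook that logs a correction
# whenever the user picks something other than the top guess.
from dataclasses import dataclass


@dataclass
class SttResult:
    # (transcript, confidence) pairs, best guess first
    alternatives: list[tuple[str, float]]

    def best(self) -> str:
        return self.alternatives[0][0]


corrections: list[tuple[str, str]] = []  # (wrong_transcript, corrected_transcript)


def record_correction(result: SttResult, chosen: str) -> None:
    """If the user selects a lower-ranked alternative, keep it as training feedback."""
    if chosen != result.best():
        corrections.append((result.best(), chosen))


# Top guess confirmed: nothing is logged.
confirmed = SttResult(alternatives=[("whole foods", 0.91), ("hole foods", 0.52)])
record_correction(confirmed, "whole foods")

# User picks the second alternative: the correction is logged for retraining.
misheard = SttResult(alternatives=[("recognize speech", 0.60),
                                   ("wreck a nice beach", 0.58)])
record_correction(misheard, "wreck a nice beach")
```

Each logged pair is exactly the kind of labeled example a recognition model can be retrained on, which is why surfacing the alternatives to the user pays for itself.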
For the trigger example, it would be great to be shown 3–4 generic reasons for the false trigger, an explanation of why the result fell into that particular category, and a comments field or yes/no feedback question to send back to Amazon.
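The feedback flow described above could be as simple as a small structured payload. The category names and JSON shape below are invented for illustration; this is not Amazon’s actual feedback API:

```python
# Hypothetical sketch of the false-trigger feedback form: the device
# offers a few generic reasons, and the user confirms and optionally
# comments. The result is packaged as JSON to send back to the vendor.
import json

FALSE_TRIGGER_REASONS = [
    "similar-sounding word in conversation",
    "audio from TV or radio",
    "background noise misread as the wake word",
    "unknown / other",
]


def build_feedback(reason_index: int, was_false_trigger: bool,
                   comment: str = "") -> str:
    """Package the user's answers as a JSON payload."""
    return json.dumps({
        "reason": FALSE_TRIGGER_REASONS[reason_index],
        "confirmed_false_trigger": was_false_trigger,
        "comment": comment,
    })


payload = build_feedback(1, True, "We were watching a cooking show.")
```

Even this small amount of structure turns an annoyed user into a labeled data point.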
Why did Google Maps pick one Whole Foods location over another when I searched? If it chose the wrong one, an explanation would be beneficial.
Super-smart AI becomes scary when it can’t explain to us how it reached a decision. This is perhaps an area of opportunity in the field.