Check This Before You Trust AI
Or… “Garbage In, Garbage Out”

I like using Waze as an example of how we interact with AI-based systems. Of course, Waze could be entirely a Mechanical Turk-like system (though I doubt it), but in that case it would just be an I-based system, prone to the same issues, if not more.
The other day, I entered my home as the destination in Waze and it gave me a much longer estimated time of arrival than usual. It seemed odd, because the drive almost never takes that long. I zoomed out on the map, and the route it suggested was circuitous. Hmm. Maybe Waze knew about some crazy construction around my home? Then I noticed something.
The origin was incorrect. I was in a parking lot, and it had placed me on an adjacent street. It didn’t know that I could make a left turn, as I usually do, from where I actually was. Correcting the origin cut the estimated travel time by a third.
So when working with AI, the first question to ask ourselves is: is the suggestion outside of what we normally expect? If so, we should double-check that the system has the correct inputs.
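To make that heuristic concrete, here is a minimal sketch of an “unexpected output, so check the inputs” check. The function names, numbers, and tolerance are illustrative assumptions on my part, not any real Waze or AI interface.

```python
# A minimal sketch of the "check the inputs first" heuristic.
# All names and thresholds are hypothetical, not from any real API.

def looks_normal(suggestion_minutes: float,
                 typical_minutes: float,
                 tolerance: float = 0.5) -> bool:
    """Return True if the suggestion is within the range we normally expect."""
    return abs(suggestion_minutes - typical_minutes) <= tolerance * typical_minutes


def review_suggestion(suggestion_minutes: float,
                      typical_minutes: float,
                      inputs: dict) -> None:
    if looks_normal(suggestion_minutes, typical_minutes):
        print(f"ETA of {suggestion_minutes:.0f} min looks normal; trust the suggestion.")
    else:
        # A surprising output is a prompt to verify the inputs before blaming the model.
        print(f"Unexpected ETA ({suggestion_minutes:.0f} min vs. usual "
              f"{typical_minutes:.0f} min); verify the inputs first: {inputs}")


# The parking-lot scenario: a 30-minute ETA when the drive usually takes ~20.
review_suggestion(30, 20, inputs={"origin": "adjacent street (wrong)",
                                  "destination": "home"})
```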
As one of my previous managers put it while coaching me on forecasting: “garbage in, garbage out.”