Why Is This In My Feed?
We need to understand AI suggestions

I watched a YouTube recommendation. It was another Canadian vlogging about his experience living in his van in -18 °C weather. As I read through the comments, I noticed someone had asked a question that made me wonder the same thing: why was this video recommended to me? There is a known issue of runaway algorithms recommending ever more extreme videos. Was this what was nudging me toward that next step?
The normal reaction is to turn this into introspection. What was I watching before, or doing online, or writing in my email, that would make Google think I would want to watch this particular YouTube video? Where there is a paucity of data, we fill in the gaps with our own insecurities. We treat the recommendation from some supposedly more knowledgeable AI engine as confirmation of our worst suspicions about ourselves.
What we really desire is a natural language explanation of why the item was in our feed. We want the service to say, “Well, because you watched that other van video and you have a van (we know this because of x), we thought that you might like this video. Also, we saw you wrote about this three months ago and figured now would be a good time to bring up something similar, because you were starting to write on related topics.” We want more than transparency; we want explanation, an interpretation.
We may never get that unless someone puts serious effort into designing an AI-based engine that can explain itself. Instead, we get a system looking for correlations that has no real understanding of what it is recommending. The system might analyze data, look for trends, build a model, and then take actions when certain thresholds are hit. That’s it. The YouTube algorithm might have predicted that I was 90% likely to “Like” this video, so it surfaced that video first. That may be all it was looking for.
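To make that concrete, here is a minimal sketch of the kind of threshold-based logic described above. All of the names and numbers (predicted_like_probability, SURFACE_THRESHOLD, the sample videos) are hypothetical illustrations, not YouTube’s actual system; the point is that nothing in this loop produces, or even could produce, an explanation.

```python
# A minimal sketch of a threshold-based recommender, under assumed names.
# It scores items, keeps those above a fixed threshold, and surfaces the
# highest score first. No reasoning about "why" is produced anywhere.

from dataclasses import dataclass


@dataclass
class Video:
    title: str
    predicted_like_probability: float  # output of some trained model


SURFACE_THRESHOLD = 0.9  # only act when the model's score crosses this line


def recommend(videos: list[Video]) -> list[Video]:
    """Return videos the model predicts the user will 'Like', best first."""
    candidates = [
        v for v in videos if v.predicted_like_probability >= SURFACE_THRESHOLD
    ]
    return sorted(
        candidates, key=lambda v: v.predicted_like_probability, reverse=True
    )


if __name__ == "__main__":
    feed = recommend([
        Video("Van life at -18 °C", 0.93),
        Video("Unrelated gadget review", 0.41),
    ])
    for video in feed:
        print(video.title, video.predicted_like_probability)
```

The only output is a ranked list; the correlation that put the van video at the top is never surfaced to the person scrolling the feed.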
Beyond building for explainability, the next thing the designer of a recommendation service can do is consider the long-term outcomes of running their algorithm. Does it incentivize one particular behaviour over everything else? If so, it is prone to generating a problem at some point.