Colin Horgan wrote an interesting piece on last week's ousting of Jim Acosta and the doctored video the White House tweeted to justify it. It reminded me of Radiolab's episode on the future of fake news, which discussed voice-synthesis tools like Lyrebird and Adobe's VoCo. I wrote about that last August.
Horgan makes the point that this is going to get worse, and that the US is likely primed for more fake videos used to justify positions that would seem untenable without them. Scott Adams of Dilbert fame, who predicted that Trump would win when he announced his candidacy, has talked more recently about the "two movies" playing in the US: one for Trump supporters and one for those who don't support him. My worry is that this is more than a metaphor.
With tailored newsfeeds and tailored media, we consume the views we want to hear, and it may become harder and harder to see what anyone else sees. It's almost like an event horizon, but for media. In the not-too-distant future, with natural language generation, realistic text-to-speech, and realistic video editing, we may get fake news so believable, and yet impossible to refute, because no one else will have seen it.
Eight years ago, Google came out with Google Reader Play, and it was awesome. Essentially, it recommended stories and articles based on your Google Reader subscriptions. You could like and dislike articles, and that feedback informed the algorithm.
Today, this is a recipe for sending someone down a rabbit hole, kind of like leaving Autoplay on in YouTube. Maybe we need a reader that exposes us to views that upset us or challenge our assumptions, making us more tolerant and accepting of each other? Maybe we need an AI to make us better humans?
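The contrast between the two kinds of reader could be sketched in a few lines. This is a toy illustration, not Google Reader's (or anyone's) actual algorithm: the article data, topic weights, and the `explore` parameter are all hypothetical. The idea is just that a recommender ranking purely by similarity to past likes narrows the feed, while reserving even one slot for low-affinity items re-broadens it.

```python
# Toy sketch of a feedback-loop recommender vs. one with forced exploration.
# All names, articles, and weights here are illustrative assumptions.
from collections import Counter

def recommend(articles, liked_topics, k=3, explore=0):
    """Rank articles by overlap with liked topics; optionally reserve
    `explore` slots for the least-similar items (the "challenge" slots)."""
    def score(article):
        return sum(liked_topics.get(t, 0) for t in article["topics"])
    ranked = sorted(articles, key=score, reverse=True)
    if explore == 0:
        return ranked[:k]               # pure exploitation: the bubble
    return ranked[:k - explore] + ranked[-explore:]  # mix in outliers

articles = [
    {"title": "A", "topics": ["politics", "media"]},
    {"title": "B", "topics": ["politics"]},
    {"title": "C", "topics": ["science"]},
    {"title": "D", "topics": ["art"]},
]
likes = Counter({"politics": 3, "media": 1})

# Everything in the top slots matches what was already liked...
print([a["title"] for a in recommend(articles, likes)])
# ...unless one slot is deliberately given to the least-similar item.
print([a["title"] for a in recommend(articles, likes, explore=1)])
```

With `explore=0` the feed is dominated by the highest-affinity topics; setting `explore=1` guarantees one article from outside them, which is the "reader that challenges our assumptions" in miniature.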