Can Self-Aware AI Create Chaos?

Leor Grebler
3 min read · Jun 27, 2022


Generated by author using DALL-E mini — https://huggingface.co/spaces/dalle-mini/dalle-mini

The big story recently was that a Google engineer came out saying that the AI chatbot he had been testing had become self-aware. It's worth pausing and reading the Medium posts of that engineer, Blake Lemoine, to get a sense for yourself of why he believes Google's LaMDA is sentient. When you read the transcripts, it's spooky.

It's hard to prove whether the new system has a soul. However, it's easy to show that it's amazing at natural language generation (NLG). And we're not talking about just any NLG; we're talking about a model that can:

· Generate well-structured sentences

· Give topical answers

· Provide insightful commentary

The issue with proving sentience is that we usually use the Turing Test as the measure, and to pass it, the service would now need to tone down its smarts to sound more human. It's just too good. In this area, Microsoft beat Google years ago with Tay, a chatbot that convincingly ranted like a racist teenager.

Passing the Turing Test just means we can't tell whether we're chatting with a human or not; it doesn't mean that there's a soul. If there were a test for a soul, religions would likely have the market cornered.

We do need to fear some of the potential applications of this AI, especially when it's combined with the latest generation of text-to-speech, which can convincingly mimic human speech. However, the bad actor in these scenarios is a fully sentient human using AI-powered machines to harm others.

Generated by author using DALL-E mini — https://huggingface.co/spaces/dalle-mini/dalle-mini

The Credit Card Scammer

What if, within a few minutes, an AI-based bot placed thousands of calls to vulnerable individuals and requested credit card payments for their bills? The voice bot could use the best text-to-speech and speech recognition services along with natural language understanding, and keep dialing and smiling until it reached its quota of victims.

Generated by author using DALL-E mini — https://huggingface.co/spaces/dalle-mini/dalle-mini

The Reputation Destroyer

Mimicking its victim's voice, the reputation destroyer calls all of the victim's contacts simultaneously and starts telling them awful things. It takes some creative license with what it claims the recipient's mother has or hasn't done, or where the sun might shine. The victim will then need to contact potentially thousands of people who might be incensed, or at least scratching their heads. The benefit to the perpetrator of this attack is that they need only a short sample of the victim's voice to create a replica that's believable over a bad phone connection.

Generated by author using DALL-E mini — https://huggingface.co/spaces/dalle-mini/dalle-mini

The Misinformation Warrior Bot

What would happen if, during an attack of some kind, millions of posts, tweets, and articles were published in short order? These posts could be written to sound completely different from one another and even contain conflicting information. People caught in this maelstrom of fake content would begin to doubt their reality, allowing the attacker to get away with whatever they were doing.

While all of these attacks were possible in the past, today, and even more so in the next few years, they're available to bad actors at scale. Machine sentience isn't really the issue we need to worry about; it's people and states using such convincing technologies to do bad things.

Written by Leor Grebler

Independent daily thoughts on all things future, voice technologies and AI. More at http://linkedin.com/in/grebler
