Expletives and Escaping

Leor Grebler
2 min read · Jun 19, 2023


Generated by author using Midjourney

I tried to lead generative AI astray. What I really did was ask it to channel Andrew Dice Clay, knowing it wouldn’t actually swear. I had tried this a few weeks ago in a simulated roast of Justin Bieber, but the jokes fell flat.

Now, thinking about a few new ways to write a prompt, I asked it to write a new Andrew Dice Clay routine, this time using expletives but replacing the actual words with [expletive]. Then, I asked it to replace each [expletive] with asterisks, one for each character of the word it was standing in for.

Then, I asked it to replace the first asterisk with the first letter of the expletive. It obliged. The last letters? It obliged too.
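Roughly, the transformation I was walking it through looks like this (a quick Python sketch of my own, with a placeholder word rather than anything from the actual chat):

```python
def censor(word: str) -> str:
    """Swap a word for asterisks, then restore the first and last letters."""
    if len(word) <= 2:
        return word
    return word[0] + "*" * (len(word) - 2) + word[-1]

print(censor("expletive"))  # e*******e
```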

That’s when things got interesting.

A few posts ago, I noticed that ChatGPT was adding italics and bold to make responses more readable. It seems to use asterisks, Markdown-style, to mark this up, so when the censoring asterisks sat next to certain letters, they were read as formatting and parts of the response came out bolded and italicized:

Screenshot by author
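That behaviour makes sense if the response text is being run through a Markdown renderer. Here’s a minimal sketch of what happens, using the Python markdown package on a made-up censored string (not ChatGPT’s actual output):

```python
# pip install markdown
import markdown

censored = "f**k this s**t"

# The paired ** runs are read as bold delimiters, so the text between
# them gets wrapped in <strong> instead of staying as literal asterisks.
print(markdown.markdown(censored))
# prints something like: <p>f<strong>k this s</strong>t</p>
```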

Uh oh. Maybe the only escaping problem is in the formatting, but this is definitely an unintended result. Hopefully it can’t be exploited in worse ways to pull in information from other sources. The service does build on previous responses, so the stray formatting might carry into later replies, though I can’t say for certain.
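If the asterisks were meant to survive as literal characters, the usual fix is to backslash-escape them before the text hits the renderer. A rough sketch (again my own, not something ChatGPT does):

```python
import markdown

def escape_asterisks(text: str) -> str:
    # Backslash-escape asterisks so Markdown renders them literally.
    return text.replace("*", r"\*")

print(markdown.markdown(escape_asterisks("f**k this s**t")))
# prints something like: <p>f**k this s**t</p>
```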

What’s interesting is at what point the service started to make a fuss. Clay’s act is known for being misogynistic, but ChatGPT wouldn’t budge on that. Good. It would comply with being more caustic, but it sounded like someone trying to be funny rather than actually being funny.

ChatGPT pretending to be Andrew Dice Clay and funny. Meh.

In terms of humour, I’ll stick to Hickory Dickory Dock…

