Making ChatGPT Much Smarter
Using Brian Roemmele’s article on writing superprompts, I was able to get ChatGPT to generate much better responses. You can check out Brian’s website here: https://readmultiplex.com/.
Yesterday, I wrote about a comparison between ChatGPT and Bard playing the game Twenty Questions. Today I continued, at least with ChatGPT, by applying a few superprompt techniques. These techniques pushed the model to reason far more deliberately before coming up with a response.
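Brian’s article goes into much more depth, but the core of what I did was simple: give the model a role and ask it to reason out loud before every question. If you would rather drive this through the API than the chat window, here is a rough sketch of what that setup might look like. To be clear, the model name, prompt wording, and the small helper below are my own illustration, not Brian’s exact superprompt; everything described in this post was done in the regular ChatGPT interface.

```python
# A rough sketch of a "superprompt" setup for Twenty Questions, using the
# OpenAI Python library (v1.x). The wording and model choice are my own
# guesses for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a well-known professor of logic at a well-respected university. "
    "We are playing Twenty Questions: I am thinking of a word, and you ask "
    "yes/no questions to identify it. Before each question, think aloud and "
    "explain the logic of why you are asking it and how the answer will "
    "narrow down the possibilities."
)

messages = [{"role": "system", "content": SYSTEM_PROMPT}]

def next_question(answer_to_previous: str | None = None) -> str:
    """Feed the latest yes/no answer back in and get the next reasoned question."""
    if answer_to_previous is None:
        messages.append({"role": "user", "content": "I have my word. Ask your first question."})
    else:
        messages.append({"role": "user", "content": answer_to_previous})
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})  # keep the running memory
    return reply

print(next_question())       # e.g. "Logic: ... Question 1: Is your word a living thing?"
print(next_question("No."))  # answer yes/no and repeat
```

Keeping the full history in `messages` is what lets the model remember its earlier questions, which is exactly the behavior I saw in the chat interface.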
So let’s see how ChatGPT fared this time around. I asked it to explain why it asked each question and to think aloud. I also asked it to act like a well-known professor of logic from a well-respected university. Here’s what I got for the questions. My fairly normal word was “television”…
Question 1: Is your word a living thing?
Logic: This is a broad question that essentially splits the world into two categories — living things and nonliving things. This helps narrow down the field considerably. If the answer is “yes,” we’re dealing with animals, plants, fungi, and so on. If the answer is “no,” we’re looking at objects, places, abstract concepts, and so forth. It’s a helpful first step in honing in on what your word might be.
Question 2: Is your word an object?
Logic: Since we established the word is not a living thing, the next big categorization would be if it is an object or an abstract concept. If it’s an object, then it has a physical presence and we can delve into its characteristics. If it’s an abstract concept, it will probably be associated with feelings, ideas, theories, or states of being, and I would proceed in a different direction.
For brevity, I’ll skip to question 9…
Question 9: Can this object be used by multiple people at the same time?
Logic: This question aims to differentiate between objects that are typically used by individuals (like a personal computer, a book, or a musical instrument) versus those that are communal or interactive (like a television, a game console, or a pool table). This will help further refine the list of possible objects.
By question 11, it had guessed television. It was able to hold on to the memory of previous questions and reason out loud. Of course, we can’t actually see how it arrives at its answers. What would be great is to see how “activated” ChatGPT gets when predicting the next word, or how long it takes to respond.
However, compared to not using a proper prompt (with, admittedly, a statistically insignificant amount of testing), the results are staggering. Yesterday, ChatGPT would likely have gone on forever naming foods and never guessed the right answer. Today, it uses some logic. Tomorrow, maybe super logic?