Sometimes, we’re too polite.
This is especially the case with generative AI. The prompts we give AI are usually the same as what we’d ask a skilled person in a field: “Write a five-paragraph essay with arguments about…” and so on. Sure, you can add more detail, but the real issue is how few times we iterate on that initial prompt.
Your first prompt on a subject should not be the last, especially for anything that you wish to use beyond entertainment.
Borrowing from Brian Roemmele’s Superprompt building (he describes it on his blog readmultiplex.com), one should first ask the AI to evaluate the result of a prompt, and then turn the AI inward: How would you rate the output? How would you change the prompt to get a better result?
Unlike with a human, we can push the AI as far as our tokens can take us. We can also automate the iterative improvements. Can we have a prompt rewritten 20 times, have the output rewritten 20 times, and have the AI evaluate its own results along the way? Absolutely. 100 times? Maybe at some point there are diminishing returns, but we should get in the habit of asking for more.
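The automated loop described above can be sketched in a few lines. This is a minimal illustration, not Roemmele’s actual method: the `call_model` function here is a hypothetical stand-in for whatever LLM API you use, and the critique/rewrite prompts are assumptions for the sake of the example.

```python
# Sketch of an automated critique-and-revise loop.
# `call_model` is a hypothetical placeholder standing in for any real
# LLM API call; swap in your provider's client here.
def call_model(prompt: str) -> str:
    # Placeholder: a real implementation would send `prompt` to a model
    # and return its response. Here we just echo for demonstration.
    return f"[model response to: {prompt[:40]}]"

def refine(initial_prompt: str, rounds: int = 20) -> str:
    """Iteratively critique the output and rewrite the prompt."""
    prompt = initial_prompt
    for _ in range(rounds):
        output = call_model(prompt)
        # Turn the AI inward: rate the output and name its weaknesses.
        critique = call_model(
            f"Rate this output and explain its weaknesses:\n{output}"
        )
        # Ask for a better prompt that addresses the critique.
        prompt = call_model(
            f"Rewrite this prompt to address the critique.\n"
            f"Prompt: {prompt}\nCritique: {critique}"
        )
    return call_model(prompt)

final = refine("Write a five-paragraph essay about tides", rounds=3)
```

With a real model behind `call_model`, each pass feeds the critique back into the next prompt, so the loop does the skeptical re-asking for you.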
First, asking for more is likely to get us what we actually want. More importantly, asking for more removes complacency in working with AI. We become less likely to simply accept the result, and we learn to treat first results as unreliable. It’s better to approach the initial output with a sense of skepticism, so that we can still flex our billion-year-developed sense of critical thinking.