Working with ChatGPT recently, I was frustrated by a few failures where it would spend significant time “Analyzing…” only to fail. When it retried, I was elated, but my frustration returned when two additional attempts failed. Then I was tempted by the “Regenerate” button. Of course, it failed again. After two more attempts, I decided to give it a break. We need to design better.
What would have helped is an explanation of why the request failed and what I should do next. That is exactly the sort of thing Large Language Models are good at. We only have so much patience for retries.
We can have the most powerful AI systems in the world, but if they push the limits of what power users are willing to endure, they will slow down the mass adoption they seek. People, even advanced users, need some way of knowing they are not banging their heads against a wall. Even a simple rule, such as prompting for help or offering to submit a support ticket after three failed regeneration attempts, could drastically improve the experience.
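A rule like that is trivial to implement. Here is a minimal sketch, assuming a hypothetical `handle_failure` helper and a retry threshold of three; the names and threshold are illustrative, not part of any real product:

```python
MAX_RETRIES = 3  # hypothetical threshold: escalate after three failed attempts

def handle_failure(attempt: int) -> str:
    """Decide what the UI should do after a failed generation attempt."""
    if attempt < MAX_RETRIES:
        return "retry"      # quietly regenerate
    # past the threshold, stop retrying and engage the user instead
    return "escalate"       # e.g., prompt for help or offer a support ticket

# The first two failures retry; the third and beyond escalate.
actions = [handle_failure(a) for a in range(1, 5)]
# actions == ["retry", "retry", "escalate", "escalate"]
```

The point is not the code but the design choice: after a small, fixed number of silent retries, the system should switch from trying again to telling the user what went wrong and what to do next.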
Sure, push ahead with new features and tools, but as Tom Peters’ book reminds us, sometimes it is the little things that make things excellent.