Conjuring AI
Sam Altman has remarked that while alarmists worry about runaway AI, many people today are simply pushing to see where the limits of the current technology lie. Maybe I’m not appreciating the complexity of the arguments, but it’s true that mapping the edges of the tech is being crowdsourced to users.
That said, it’s likely we’ll see breakthroughs in how ChatGPT and other LLMs are used, in ways that weren’t originally conceived. These breakthroughs will come through engineering prompts that ask the AI to perform a task that is in some way self-referential.
Self-referential can mean the AI either analyzing its own limits or assessing the output of its own work, usually with some prodding from a user. What’s so interesting about creating something we don’t truly understand is the excitement of testing it and shedding light on its silhouette.
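To make that concrete, here is a minimal sketch of a self-assessing prompt, assuming the OpenAI Python client (openai >= 1.0), an API key in the environment, and a placeholder model name: the model produces an answer, then is asked to critique and revise its own output.

```python
# Minimal sketch of a self-referential ("assess your own output") prompt.
# Assumes the OpenAI Python client (openai >= 1.0) and an API key in the
# environment; the model name below is a placeholder, not a recommendation.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder; any chat-capable model works

def ask(prompt: str) -> str:
    """Send a single user prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: get a first-pass answer.
draft = ask("Explain why the sky is blue in two sentences.")

# Step 2: prod the model to assess and revise its own work.
revised = ask(
    "Here is an answer you produced:\n\n"
    f"{draft}\n\n"
    "Point out any errors or omissions, then rewrite it more precisely."
)

print(revised)
```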
The parallel here is with physics and biology. We bombard materials with neutrons and then try to understand the properties of what we’ve created. We expose organisms to different conditions and then see how things change. Dealing with a technology we’ve created but don’t yet understand isn’t new. “Wow, what can we do with this stick!”
While not everyone will strike gold, eliciting surprising and unexpected responses from these AIs is a worthwhile pastime, if not a form of professional research. It’s like conjuring a genie with magic words.