De-Hyping GPT-4
Sam Altman has a problem. He knows it.
In a recent interview about the upcoming release of GPT-4, he's quoted as saying "people are begging to be disappointed and they will be". I really appreciate that resetting of the bar.
People assume a higher version number means a better model. Sure, but the gains usually diminish with each release. They also assume that more parameters, especially a count far larger than the previous model's, automatically means better. That's not the case. You want an optimally trained model, not a gazillion-parameter model.
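To put a rough number on "optimally trained," here's a back-of-the-envelope sketch. It assumes the compute-optimal heuristic from DeepMind's Chinchilla paper (roughly 20 training tokens per parameter) and the common approximation of ~6 FLOPs per parameter per token; GPT-3's 175B parameters and ~300B training tokens come from its paper, but everything else is illustrative, not OpenAI's actual figures or plan.

```python
# Back-of-the-envelope: is a model "optimally trained" for its size?
# Assumes the Chinchilla heuristic of ~20 training tokens per parameter
# and the common approximation of ~6 FLOPs per parameter per token.
# Illustrative numbers only -- not OpenAI's actual figures.

TOKENS_PER_PARAM = 20  # approximate Chinchilla compute-optimal ratio


def optimal_tokens(params: float) -> float:
    """Training tokens needed to compute-optimally train `params` parameters."""
    return TOKENS_PER_PARAM * params


def training_flops(params: float, tokens: float) -> float:
    """Rough total training compute: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens


# GPT-3: 175B parameters trained on ~300B tokens (per its paper).
gpt3_params, gpt3_tokens = 175e9, 300e9
ratio = gpt3_tokens / optimal_tokens(gpt3_params)
print(f"GPT-3 saw about {ratio:.0%} of its compute-optimal token budget")  # ~9%

# The same compute budget, spent optimally, buys a smaller but
# better-trained model: solve 6 * p * (20 * p) = budget for p.
budget = training_flops(gpt3_params, gpt3_tokens)
optimal_params = (budget / (6 * TOKENS_PER_PARAM)) ** 0.5
print(f"Same compute, optimally spent: ~{optimal_params / 1e9:.0f}B parameters "
      f"on ~{optimal_tokens(optimal_params) / 1e12:.1f}T tokens")
```

The exact numbers don't matter; the point is that by this heuristic, piling on parameters under a fixed compute budget is the wrong trade.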
Want a primer? Re-read Daniel Shapiro, PhD’s article The Nuclear Mousetrap.
What makes the difference with most technologies is the tooling that provides access, not the underlying technology. GPT-3 exploded recently because OpenAI built a super-easy-to-use sandbox that non-programmers could access, and it opened that access to anyone who signed up rather than putting them on a long waitlist. Further, OpenAI had ironed out many of the kinks and idiosyncrasies that would have killed the model through a death by a thousand cuts had its earlier releases been exposed to the mainstream.
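For programmers, the barrier was just as low. As a hedged illustration (this uses the pre-1.0 OpenAI Python library and the text-davinci-003 model of that era, both since deprecated), a complete GPT-3 call fit in a dozen lines:

```python
# Minimal GPT-3 completion with the OpenAI Python library of that era
# (openai < 1.0). The Completion endpoint and text-davinci-003 have since
# been deprecated; shown only to illustrate how low the barrier was.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # sign up, copy a key, done

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="In one sentence, why aren't more parameters always better?",
    max_tokens=60,
    temperature=0.7,
)
print(response["choices"][0]["text"].strip())
```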
With so much hype around GPT-4 now, Sam has the opportunity to rein in expectations. He can afford it and still raise billions.
GPT-3, despite being absolutely amazing, could still develop in relative obscurity. Now GPT-4 and OpenAI's other models have the full light of the world on them. OpenAI will need to develop GPT-4 to the equivalent of two releases ahead before going public, but it seems they have a plan.
Pardon my adding to the hype… I can't wait to see it.