GPTs’ Discovery Problem

Leor Grebler
2 min read · Nov 8, 2023
Generated by author using Midjourney

OpenAI might have a problem similar to Alexa Skills and Actions on Google… discovery.

With the platform now open for developers to create and publish their own flavours of the language models, people will be able to select specialized, task-oriented bots to work with. The issue will be telling which ones are good versus mediocre or plain bad. There might end up being a rating system that, like all rating systems, suffers from extreme reviews.

There might end up being a RateMyMD for an MD GPT. Which one should I chat with for my medical problem? Changing a tire? Planning a trip? Will I try a few or mostly go with the first presented to me?

Alexa had it even more difficult — how do you discover Skills when, for the most part, there is no visual interface? In the end, despite the huge number of Skills, they never became an app-store-style runaway success. There was just too much friction.

So what might OpenAI do to overcome the discovery problem?

- Assess and rank GPTs based on their stated purpose, with the assessment verified by a third party or appointed assessors.
- Run third-party GPTs directly from the main ChatGPT interface when one would perform better than the general GPT (similar to Actions on Google).
- Give GPT creators credit, both monetarily and through recognition, to incentivize the creation of great GPTs.

Maybe this way, a future with millions of AIs will be one we all benefit from.
