This morning, I was rejected. More precisely, a Skill we were developing was rejected for publishing.
Rejection is a loaded word. It is usually associated with very negative outcomes:
“That girl rejected me.”
“I was rejected by my top college pick.”
“My paper was rejected.”
A platform like the App Store, Google Play, Actions on Google, or Alexa Skills must have a strong review process to ensure that abusive or just plain bad code doesn’t reach users. Any such process has to balance a False Acceptance Rate against a False Rejection Rate, and front-loading with plenty of documentation, tutorials, and developer and testing tools can help tune that balance.
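As a rough illustration of those two error rates (a hypothetical sketch, not any platform’s actual review logic or data), here is how they could be computed from a log of review outcomes:

```python
def review_error_rates(outcomes):
    """Compute False Acceptance Rate and False Rejection Rate.

    `outcomes` is a hypothetical review log of
    (approved_by_review, was_actually_good) pairs.

    FAR = bad submissions that slipped through / all bad submissions
    FRR = good submissions that were rejected / all good submissions
    """
    bad = [o for o in outcomes if not o[1]]
    good = [o for o in outcomes if o[1]]
    far = sum(1 for approved, _ in bad if approved) / len(bad) if bad else 0.0
    frr = sum(1 for approved, _ in good if not approved) / len(good) if good else 0.0
    return far, frr

# Hypothetical log: (approved_by_review, was_actually_good)
log = [(True, True), (False, True), (True, False), (False, False), (True, True)]
far, frr = review_error_rates(log)
```

Tightening the review to push FAR down (fewer bad apps shipped) tends to push FRR up (more good apps bounced), which is exactly the tension developers feel on the receiving end.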
The process can also be stymied when developers stretch a platform’s existing use cases. “What is this thing,” the testers might think to themselves, “and how do I even test it?”
This was the case the first time we published a Skill, in 2016. We went through SEVEN rejections before the Skill was publishable.
While an email rejecting a Skill might still draw some sighs, since it means more work on the project, I don’t see it as negatively as I used to. Thankfully, Amazon’s ASK review team provides good insight into why the Skill was rejected. I now see this as a way to improve the user experience or to catch potential issues we didn’t envision. The one area where there can be real frustration is the timeline extension a rejection causes. That is definitely an area where the ASK group can improve (all major platforms have this issue).
While rejection is not a causal factor in success (it’s nice to hit it out of the park at a first at bat), it often correlates with doing something new and innovative.