The Voice Operated Starbucks Mug


The Voice Operated Starbucks Mug is my Ish Ploni איש פלוני, a placeholder for an AVS-enabled object. It provides an example of what can be done with third-party voice implementations that CAN'T be done on a first-party Alexa-enabled product like an Echo or Echo Show.

What are some of these features?

  • Multiple wake words that can call different AIs. “OK Starbucks” can run in conjunction with “Alexa” and can call a custom AI.
  • Sensors on the device can help inform the interaction with Alexa. “Alexa, how many coffees did I have today?” Sensors on the device can feed a Skill that will provide the answer.
  • Voice biometrics can be layered on top of the wake word to authenticate the user. Voice enrollment or a fingerprint sensor could be used to verify the identity of the user and ensure Alexa only responds to them.
  • The device can receive push notifications and play them outside of AVS.
  • Using Alexa Gadgets, a developer can coordinate lights or other device interactions with a Skill.
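To make the sensor idea concrete, here is a minimal sketch of a Skill backend answering "Alexa, how many coffees did I have today?" The response envelope follows the standard Alexa Skills Kit JSON format, but the `CoffeeCountIntent` name, the in-memory `coffee_counts` store, and the user-ID lookup are all assumptions for illustration; a real mug would report sensor readings to a backend service.

```python
import json

# Hypothetical store of sensor readings reported by the mug.
# A real device would push these to a backend (e.g. over MQTT or HTTPS).
coffee_counts = {"user-123": 3}

def handle_coffee_count_intent(request: dict) -> dict:
    """Build an Alexa Skill response for a hypothetical CoffeeCountIntent."""
    user_id = request["session"]["user"]["userId"]
    count = coffee_counts.get(user_id, 0)
    speech = f"You have had {count} coffee{'s' if count != 1 else ''} today."
    # Standard Alexa Skills Kit response envelope.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }

# Example request, trimmed to only the fields the handler reads.
request = {
    "session": {"user": {"userId": "user-123"}},
    "request": {"type": "IntentRequest",
                "intent": {"name": "CoffeeCountIntent"}},
}
print(json.dumps(handle_coffee_count_intent(request), indent=2))
```

The point is that the Skill's answer comes from device telemetry rather than from anything Alexa itself knows, which is exactly the kind of integration a first-party Echo can't offer.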

While there might never be a voice-operated Starbucks mug, all of these features are possible on third-party AVS devices, yet very few companies have offered any of them. There is an opportunity here.
