About six years ago, I sat in a conference room at the Googleplex, styled like a Japanese tea house, talking about the Ubi. Some of the people in that meeting were backers of the Ubi, and I came to reiterate to them what we had been saying on Kickstarter about our plans. By that point, we had realized the enormous task ahead of us in bringing the Ubi to life. We had to design the case (our original concepts, made of machined metal parts, were completely unviable), the hardware (the IoT boards available at the time were a no-go), and the software (no APIs like Alexa or Google Assistant existed yet).
We ended up going the route of Android because Voice Typing was a built-in feature that was free, and we could use it for the speech-to-text component. What we had really wanted was voice responses to voice-based queries, which Google was just rolling out that year. However, there was no way to control whether a “How’s the weather?” question would get a speech response or just a plain text response.
The other Android feature we were eager to try was offline voice typing. Showcased at Google I/O 2012, it promised much faster interaction. However, there was no way to force Android into offline mode short of turning off Wi-Fi, which didn’t work for us because we needed the network to process responses for natural language understanding and integration with different devices.
Sitting with socked feet, drinking Japanese rose petal tea, I asked the Googlers in that tea room if there was a way to get a Google Now API, one that would let us send voice files (or a stream) and get back an audio response. I might as well have asked the walls the same question. We got nothing but polite encouragement that our idea was interesting.
Four years after that, Google announced Google Home. I hold no bad blood thinking about that meeting. Between our meeting and the Google Home release came the earth-shattering announcement of the Echo and Alexa, along with other devices like the Ivee and Homey. Looking back, though, it’s clear that Google missed an enormous opportunity while pursuing ideas its management had fallen in love with, Google Glass being the prime example (remember, phones are “emasculating”).
Google could have had a Google Assistant-style API, or a device like the Google Home, years earlier. It should have taken this market from Amazon.
With the Google Pixel event coming up, I’ve been thinking about what my wishlist from Google would be if they’d listen today:
- I’d love to have a simple voice-to-music API
- Can we get a browser version of Google Assistant with an “OK Google” wake word?
- Can we create a Google Duplex Assistant to answer inbound calls?
- Can we have a camera that can be used for 4K streaming and videos on its own (not Clips)?
- Can we get standalone 4G or 5G Pixel Buds so I can leave my phone behind and still be connected?
- Can we get a giant touchscreen display with Google Assistant?
- More developer incentives for creating Actions
Maybe Google will listen this time.