There wasn’t a lot of chatter after HomePod’s release. It seems to have been overshadowed by the many Mac and iOS updates, which would be more appealing to the Apple consumer (or developer, in this case). For developers, the new graphics processing around AR got much more of the spotlight.
In reviewing the keynote, it was interesting to see which areas were highlighted:
- Music playback quality
- Processing power of the device
The last bit struck me. The device was being presented almost like an iPad. Why brag about the processing power of a voice-interactive product? It’s like talking about how much horsepower a 737 has. (The answer: enough to fly.)
Running voice or audio doesn’t demand much general-purpose compute. High-fidelity audio is usually handled by specialized DSP chips, since application processors (like the A8) aren’t optimized for realtime signal processing.
Could it be that at some point Apple might open up the HomePod for developers to build apps (similar to Alexa Skills)? Probably not. What we might see instead is SiriKit expanding to call services, not just iOS apps.
What might be an interesting play is for Apple to use these oversized processors, which will only ever have a fraction of their CPU consumed, as a massive distributed computer supporting Siri. One can imagine…