If there’s an argument for edge AI, it’s around vision (and potentially voice). Pocket-lint ran an article from CES on the battle for the bedside clock, but truthfully, it’s a tough sell to put an Internet-connected camera in view of places where people can be in various stages of undress. Even cellphones lie face up most of the time, so they don’t have a full view of a room.
Google experimented with an edge AI camera in its Clips product, but it was short on features. Still, a camera that provides only abstracted information about its environment could be useful. It would also consume far less bandwidth, perhaps little enough to run on battery power.
What kinds of information could such a camera convey while still keeping an air gap between the raw footage and the Internet?
- Light level
- Human presence
- Facial recognition
- Emotion of people
- Object identification
- Activity identification
- Location of a speaker
- Motion trajectory
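
As a sketch of what that separation could look like (the event schema, function names, and thresholds here are all hypothetical, not any existing product's API), an edge device could reduce each raw frame to a small metadata payload and transmit only that, so pixels never leave the device:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AbstractedEvent:
    # Only derived metadata ever leaves the device; raw frames never do.
    light_level: float   # 0.0 (dark) to 1.0 (fully bright)
    human_present: bool
    activity: str        # e.g. "idle", "motion"

def summarize_frame(frame, prev_frame=None, motion_threshold=0.1):
    """Reduce a grayscale frame (a list of 0-255 pixel values) to an event.

    Presence is crudely approximated as frame-to-frame motion; a real
    device would run an on-device person-detection model instead.
    """
    light = sum(frame) / (255 * len(frame))
    motion = 0.0
    if prev_frame is not None:
        diffs = [abs(a - b) for a, b in zip(frame, prev_frame)]
        motion = sum(diffs) / (255 * len(diffs))
    present = motion > motion_threshold
    return AbstractedEvent(
        light_level=round(light, 3),
        human_present=present,
        activity="motion" if present else "idle",
    )

# Simulated frames: a dim static scene followed by a bright change.
dark = [20] * 100
bright = [200] * 100
event = summarize_frame(bright, prev_frame=dark)
print(json.dumps(asdict(event)))  # only this JSON would cross the network
```

The point of the sketch is the shape of the payload: a few bytes of derived state per event, rather than a video stream, which is what makes both the bandwidth and the battery-power arguments plausible.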
Maybe there’s an argument for combining an edge camera with a live camera behind a physical cover that’s visible to the user. It would be great to see more tools developed for edge AI vision: a DeepLens for the edge, or a Google Glass with local image-recognition SDKs.
Unfortunately, it might take a major privacy breach to push companies toward edge processing for cameras.