Seeing Better Than Us
I’ve become fascinated with the ability to augment video to see details that we wouldn’t normally be able to see. The method involves accentuating small changes to make them much more visible. With this technique, you can take subtle changes in skin colour between frames and calculate a person’s heart rate. But it goes even further: eye movement, blood flow, twitches. If micro-expressions are a real thing, then we can make them visible as regular expressions. Combine this with the latest in machine learning, and machines could become far more capable than we are at telling how we’re feeling.
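To make the heart-rate idea concrete, here’s a minimal sketch of the signal-processing step, assuming we’ve already reduced each video frame to its average skin-pixel intensity. The function name and the synthetic data are illustrative, not from any particular library; a real system would also need face detection and the amplification step described above.

```python
import numpy as np

def estimate_heart_rate(frame_means, fps, lo=0.7, hi=4.0):
    """Estimate pulse (bpm) from per-frame average skin intensity.

    frame_means: 1-D array, mean skin-pixel intensity of each frame
    fps: video frame rate
    lo/hi: plausible heart-rate band in Hz (roughly 42-240 bpm)
    """
    signal = frame_means - np.mean(frame_means)      # remove the DC offset
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(signal))
    band = (freqs >= lo) & (freqs <= hi)             # keep physiologic range
    peak = freqs[band][np.argmax(spectrum[band])]    # dominant frequency
    return peak * 60.0                               # Hz -> beats per minute

# Synthetic stand-in for real frame data: a 1.2 Hz (72 bpm) pulse
# buried in noise, sampled at 30 fps for 10 seconds.
fps = 30
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1.0 / fps)
frames = 128 + 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(len(t))
print(round(estimate_heart_rate(frames, fps)))  # prints 72
```

The key point is how little signal is needed: a fraction of an intensity unit of periodic colour change, invisible to the eye, produces an unmistakable spectral peak.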
Such technology might make lying more difficult. Does saying or hearing something make your heart rate change? How does blood flow change? Do you become flushed in the face? We might have a tell, and if we do, vision processing might be able to find it.
Eye tracking might be taken to a new level to get a better understanding of where we are spending our focus. Can our laptop cameras tell when our eyes are wandering across the screen and nudge us back to the task? Can we scientifically determine whether looking up, down, to the left, or to the right after saying something actually has any meaning? This might be possible soon.
Then there’s a new level of emotion detection. By looking at the subtle movements of our eyes, mouth, nose, and other facial muscles, a system can start to understand sooner what makes us… tick. Why do we react in certain ways to certain concepts? It can then predict how we’ll react to different things.
At some point, we may need to rely on machines to better understand each other, or at least to keep up with those who are using the technology to understand us. Will we then be in an arms race with each other? Will those not using it fall far behind?