Recognize, Synthesize

Leor Grebler
2 min read · Dec 21, 2022


Generated by author using Midjourney

With Generative AI being all the rage (or at least blowing up the echo chamber of my Google News and Medium feeds), I began looking at technologies that we haven’t seen synthesized yet. One of them is gestures. There are some research papers on gesture synthesis, but before you can even get to synthesis, you must first be able to recognize gestures and understand the context for their use.

When I met Kevin Kelly, gesture recognition was one of the things he mentioned to look out for when I asked whether I could keep my eyes open for any technologies during my travels. I sent him my report of a few technologies out there, but there wasn’t much.

At the time, the Microsoft Kinect had been the gold standard for researchers doing gesture work. Microsoft had effectively subsidized academic research through its huge investment in commercializing a technology that ultimately never found market fit.

However, gesture recognition as a control input never took off. Not at that time for Xbox, and not now for Zoom meetings. Gesturing just seems to require too much thought and energy to be quick and easy. Gestures happen as an afterthought, accompanying speech or emotion. They aren’t made consciously.

The synthesis of gestures will be very useful for avatars in the Metaverse or for deepfakes. With this, we see that generative technology follows two steps. The first is developing a recognition engine; only then can one build a synthesis engine. The recognition engine is then used to determine whether the synthesis engine is any good. That’s your generative adversarial network.
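The idea that a recognizer judges a synthesizer can be sketched with a toy example. In this minimal sketch (all names and numbers are illustrative, with a "gesture" reduced to a single feature), a tiny logistic-regression recognizer is trained to separate real samples from synthesized ones. If the synthesizer is poor, the recognizer scores well above chance; if the synthesizer matches the real distribution, accuracy falls toward 50% — the same kind of feedback signal a GAN’s discriminator provides during training.

```python
import numpy as np

rng = np.random.default_rng(0)

def recognizer_accuracy(sample_real, sample_fake, n=1000, steps=2000, lr=0.1):
    """Train a one-feature logistic-regression 'recognizer' to tell
    real gestures from synthesized ones; return held-out accuracy."""
    x = np.concatenate([sample_real(n), sample_fake(n)])
    y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = real, 0 = fake
    w = b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))     # sigmoid
        w -= lr * np.mean((p - y) * x)             # logistic-loss gradients
        b -= lr * np.mean(p - y)
    # Evaluate on fresh samples the recognizer has never seen.
    xt = np.concatenate([sample_real(n), sample_fake(n)])
    yt = np.concatenate([np.ones(n), np.zeros(n)])
    pt = 1.0 / (1.0 + np.exp(-(w * xt + b)))
    return float(np.mean((pt > 0.5) == yt))

# Hypothetical setup: real "gestures" reduced to one feature drawn from N(3, 1).
sample_real = lambda n: rng.normal(3.0, 1.0, n)
sample_bad  = lambda n: rng.normal(0.0, 1.0, n)  # poor synthesizer
sample_good = lambda n: rng.normal(3.0, 1.0, n)  # matches the real distribution

print(recognizer_accuracy(sample_real, sample_bad))   # well above chance
print(recognizer_accuracy(sample_real, sample_good))  # near 0.5: indistinguishable
```

When the recognizer can no longer beat chance, the synthesizer has, by this test, matched the real data; a GAN automates exactly this feedback loop between the two engines.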

It’s likely that many of the recognition technologies today will become generative technologies tomorrow.



Written by Leor Grebler

Independent daily thoughts on all things future, voice technologies and AI. More at http://linkedin.com/in/grebler
