Weird With More Pixels
Chatting with family about OpenAI’s Sora and other generative AI video creation tools, I remembered the famous Twilight Zone episode “A Nice Place To Visit,” in which the protagonist eventually realizes that he is in fact in “the other place” and not heaven. Things seemed too good to be true, and eventually that tipped him off.
With the generative AI video examples out there, if you look long enough or close enough, you start to realize there’s something uncanny going on. It’s always the weird hands clapping in the background, or the timing being slightly off. It just seems off. It’s like SNL’s Totino’s commercial:
When we’re thinking about generative AI for video, we might be imagining that huge jump to completely machine-created content. That’s not how it’ll happen.
We’ll still have humans in the loop, but the effort to produce a scene will be reduced significantly. The systems will be able to suggest transitions, correct lighting, change dialog, and add soundtracks that can be approved or dismissed by a human editor.
There will still be entirely machine-generated content, but it will be the blooper reel. It’ll be the “Rock eating Rocks” or “An Hour of Will Smith Eating Spaghetti” type of video. It might be the end of the decade before humans leave the loop of content generation entirely.