SAM 2, Blender, and More

Leor Grebler
2 min read · Aug 1, 2024


Image generated by the author using DALL·E 3

SIGGRAPH 2024 is still underway, but some of the new technologies being launched there hint at the really cool future that's just ahead of us. These technologies will be able to take rough initial footage generated by Sora, Runway, or other tools and turn it into video indistinguishable from real footage.

What we’ll be seeing in the months ahead is AI-generated video that starts in diffusion models, is potentially segmented with technologies like SAM 2, and is then fed into tools like Blender for realistic ray tracing, volumetric fills, and other techniques that turn these concepts into realistic-looking videos that can be edited or even turned into first-person games. We may also see flat TV or YouTube videos converted into 3D, first-person-shooter-like games that can be engaged with through screen devices or virtual reality headsets.
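To make the segmentation step of that pipeline concrete, here's a minimal sketch of how SAM 2 can track an object through a generated clip, based on the video-predictor example in Meta's sam2 repository. The file paths, click coordinates, and the mask-export step are illustrative assumptions, not something from the post, and config/checkpoint names may differ by release.

```python
# Sketch: prompt SAM 2 once, then propagate the mask through a whole clip.
# Based on the video predictor example in facebookresearch/sam2.
import numpy as np
import torch
from sam2.build_sam import build_sam2_video_predictor

predictor = build_sam2_video_predictor(
    "sam2_hiera_l.yaml",                # model config shipped with the repo
    "./checkpoints/sam2_hiera_large.pt" # downloaded checkpoint
)

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    # The "video" is a directory of JPEG frames, e.g. dumped from a
    # Sora/Runway clip (hypothetical path).
    state = predictor.init_state(video_path="./generated_clip_frames")

    # Prompt the object to track with a single positive click on frame 0
    # (coordinates are illustrative).
    predictor.add_new_points(
        inference_state=state,
        frame_idx=0,
        obj_id=1,
        points=np.array([[480, 270]], dtype=np.float32),  # (x, y) in pixels
        labels=np.array([1], dtype=np.int32),              # 1 = foreground
    )

    # Propagate through every frame; each boolean mask could then be exported
    # (e.g. as a PNG sequence) for compositing or meshing in Blender.
    for frame_idx, obj_ids, mask_logits in predictor.propagate_in_video(state):
        masks = (mask_logits > 0.0).cpu().numpy()
```

The per-frame masks are the handoff point: once an object is isolated across the clip, a tool like Blender can relight it, rebuild it volumetrically, or place it in a 3D scene.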

This is where we can see SAM 2 fitting into the larger picture for Meta. With this technology, it will be possible to convert most videos into ones that can be properly enjoyed in 3D through its Quest headsets. That's a reason to invest heavily. While Apple might have the upper hand in hardware development, Meta's release of tools that entice developers to build apps that eventually work well with its hardware is a long game it could profit from.

We might also benefit from endless channels of entertainment.
