NeRF on Gigapan

Leor Grebler
2 min read · Dec 4, 2022


Generated by author using Midjourney

If you follow the YouTube channels Two Minute Papers and Corridor Crew, you'll have seen a technology that is slowly becoming accessible to the masses: NeRF. Neural Radiance Fields (NeRF) is a method for reconstructing both the physical dimensions of an object and its lighting by stitching together images taken around that object and processing them with a neural network.
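To make the "render from a neural network" idea concrete, here is a minimal sketch of the volume-rendering step at the heart of NeRF: the network predicts a density and a color at sample points along each camera ray, and those samples are alpha-composited into a pixel. The function name and array shapes here are illustrative assumptions, not from any particular NeRF codebase.

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Alpha-composite samples along one camera ray (the core of NeRF rendering).

    sigmas: (N,) volume densities the network predicts at each sample point
    colors: (N, 3) RGB values the network predicts at each sample point
    deltas: (N,) distances between consecutive samples along the ray
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)        # opacity of each ray segment
    trans = np.cumprod(1.0 - alphas + 1e-10)       # accumulated transparency
    trans = np.concatenate([[1.0], trans[:-1]])    # light surviving to sample i
    weights = trans * alphas                       # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0) # final pixel color
```

Training then boils down to comparing these rendered pixels against the real photos and nudging the network's predicted densities and colors until they match from every captured viewpoint.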

You can think of NeRF as the "bullet time" effect from The Matrix: from a set of still images, you can now pause, pan, and rotate around an object, and the lighting will change accordingly. The other TV trope NeRF plays into is "zoom and enhance." You can now inspect details around an object, albeit only those already captured by the camera, since those images were needed to train the model.

Separately, Gigapan is a website that stitches photos together into gigapixel-sized images that can be explored. It's like a real-life version of Where's Waldo, except without Waldo.

I remember seeing how this could play out shortly after Barack Obama's inauguration. Microsoft had just launched a graphics streaming technology called Silverlight and used photos that people had posted of the event to stitch together different angles that could be walked through.

I’m wondering whether the two technologies — gigapixel-sized images and NeRF — together might allow for the rapid creation of virtual worlds based on reality. A quick drone video of a city block could be turned into a traversable virtual world or could be used to create walkthroughs of buildings.

This technology is still very early, so it's probably the best time to start building new businesses around it.


Written by Leor Grebler

Independent daily thoughts on all things future, voice technologies and AI. More at http://linkedin.com/in/grebler
