In what some are calling a clear move toward the metaverse, Snap has outlined a new approach to virtual object creation in its latest research paper, which describes a way to create 3D digital assets from photos and videos of objects sourced from online collections, like Google Images.

The process, described in a paper titled ‘Neural Object Capture and Rendering from Online Image Collections’, allows AR and VR creators to search for an item on Google, select a group of relevant images taken from different angles, and have the system automatically fill in the gaps, enabling 3D object creation without tedious manual scanning.
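The core idea here is that views of an object from different angles collectively constrain its 3D shape, even though no single photo does. As a purely illustrative toy (not Snap's actual method, which is far more sophisticated): recovering a 3D point from several 2D "photos", each a simple orthographic projection after a known rotation. The function names and camera model below are hypothetical, chosen only to show why multiple angles resolve what one view leaves ambiguous.

```python
# Toy illustration: a single orthographic "photo" of a 3D point loses its
# depth, but projections from several known angles pin it down exactly.
import numpy as np

def rotation_y(theta):
    """Rotation matrix about the vertical (y) axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def triangulate(angles, observations):
    """Least-squares 3D point whose projections best match the 2D observations."""
    rows, rhs = [], []
    for theta, (u, v) in zip(angles, observations):
        R = rotation_y(theta)
        rows.append(R[0])  # rotated x-axis maps the point to image coordinate u
        rows.append(R[1])  # rotated y-axis maps the point to image coordinate v
        rhs.extend([u, v])
    point, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return point

# Simulate three "photos" of the same point taken from different angles.
true_point = np.array([1.0, 2.0, 3.0])
angles = [0.0, np.pi / 4, np.pi / 2]
observations = [(rotation_y(t) @ true_point)[:2] for t in angles]

recovered = triangulate(angles, observations)
print(recovered)
```

With only the first view (theta = 0), the depth coordinate would be unconstrained; adding the rotated views makes the system fully determined, which is the same intuition behind reconstructing an object from an online image collection.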

As explained by Snap:

“This approach could unlock the ability for an AR creator or developer to digitize any object in the world, as long as there are photos or videos available of it found in an online image collection. This is a significant step towards the goal of creating a library of AR digital assets to overlay computing on the world.”

This could mark a big step for Snap’s own AR ambitions, and could position the platform for the coming metaverse, in which people create their own virtual environments that others can visit, interact with, and even purchase digital and real-world items from, all within the VR space.

Meta has been looking to advance its eCommerce push by expanding its virtual item library, encouraging brands to scan digital versions of their products to enhance their in-app listings. Snap’s new process could eliminate the need for that manual scanning.

Snap is presenting the paper at SIGGRAPH 2022. We’ll keep you informed as more details emerge on this faster path to building AR and VR experiences. If you want to learn more about how you can prepare for the metaverse, contact us today.