Nvidia on Monday announced a breakthrough in 3D rendering research that may have far-reaching ramifications for future virtual worlds.

A team led by Nvidia Vice President Bryan Catanzaro discovered a way to use a neural network to render synthetic 3D environments in real time, using a model trained on real-world videos.

Today, each object in a virtual world must be modeled individually. With Nvidia’s technology, worlds can instead be populated with objects “learned” from video input.

Nvidia’s technology offers the potential to quickly create virtual worlds for gaming, automotive, architecture, robotics or virtual reality. The network can, for example, generate interactive scenes based on real-world locations or show consumers dancing like their favorite pop stars.

“Nvidia has been inventing new ways to generate interactive graphics for 25 years, and this is the first time we can do so with a neural network,” Catanzaro said.

Learning From Video
The research is currently on display at the NeurIPS conference in Montreal, Canada, a gathering for artificial intelligence researchers.

Nvidia’s team created a simple driving game for the conference that allows attendees to interactively navigate an AI-generated environment.

The virtual urban environment is rendered by a neural network trained on videos of real-life urban environments. The network learned to model the appearance of the world, including lighting, materials and their dynamics.

Since the output is synthetically generated, a scene can easily be edited to remove, modify or add objects.
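To make this concrete, here is a minimal sketch of why such editing is cheap. It assumes (as in the underlying research) that the neural renderer is conditioned on a semantic label map, where each pixel carries an object-class ID; the array, class names and shapes below are hypothetical illustrations, not Nvidia's actual code.

```python
import numpy as np

# Hypothetical class IDs for a semantic label map.
ROAD, CAR, BUILDING = 0, 1, 2

# A tiny label map: each pixel holds the class of the object at that location.
label_map = np.full((4, 8), ROAD, dtype=np.uint8)
label_map[1:3, 2:5] = CAR        # a car occupies these pixels
label_map[0, 6:8] = BUILDING     # a building in the corner

# "Removing" the car is just relabeling its pixels; a neural renderer
# conditioned on this map would then synthesize road texture there instead.
edited = label_map.copy()
edited[edited == CAR] = ROAD

print((label_map == CAR).sum())  # car pixels before the edit
print((edited == CAR).sum())     # car pixels after the edit
```

The edit touches only the cheap symbolic input; the expensive photorealistic appearance is re-synthesized by the network, rather than re-modeled by an artist.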

Reducing Labor Overhead
Competition for Hollywood
Simulating Bad Behavior