Nvidia: ‘Graphics 3.0’ will drive physical AI productivity
Nvidia has floated the idea of “Graphics 3.0,” hoping to make AI-generated graphics central to productivity in the physical world, especially in factories and warehouses.
The concept revolves around graphics generated by generative AI (genAI) tools rather than by humans. Nvidia said AI-generated graphics could help in multiple ways, including training robots to do their jobs in the physical world and helping AI assistants automate the creation of equipment and structures.
“We believe we are now in Graphics 3.0…being superpowered by AI,” said Ming-Yu Liu, vice president of research at Nvidia, during a keynote at SIGGRAPH 2025, the graphics conference held this week in Vancouver, BC.
Nvidia’s GPUs are already widely used to train and run text-based genAI models and virtual assistants. Beyond that, the company hopes Graphics 3.0 will change the physical world by allowing AI to run robots, traffic signals, home appliances, autonomous cars, and equipment in offices, factories, and warehouses.
Robots will “assist us in our homes, redefine how work is done in factories, warehouses, agriculture, and more,” Nvidia CEO Jensen Huang said in a short video address during an event keynote.
But creating Graphics 3.0 isn’t easy, because physical AI can’t be trained the way virtual AI is. Foundation models from the likes of OpenAI and Google learn from vast amounts of readily available text, while physical AI relies on pixels, which are far scarcer. To fill that gap, Nvidia is creating synthetic data by simulating virtual worlds.
“Robots don’t learn from code. They learn from experience. But real-world training is slow and expensive,” Huang said.
Nvidia has built AI models and simulation tools to generate pixels that can ultimately be used to train robots, autonomous cars, and other physical AI devices. “We need to invent completely new tools so that artists can conceptualize, create, and iterate orders of magnitude more quickly than they can today,” Aaron Lefohn, vice president of research at Nvidia’s real-time graphics lab, explained during the keynote.
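To make that concrete: the appeal of simulation is that a renderer already knows the ground truth for every pixel it draws, so training images and labels come out paired for free. The sketch below is a minimal, hypothetical illustration of such a synthetic-data loop; the renderer stub, scene randomization, and array shapes are invented for this example and are not Nvidia’s tools.

```python
import numpy as np

# Hypothetical stand-in for a real renderer: returns an RGB frame and a
# per-pixel object-ID map for a randomized scene. A simulator knows the
# exact ground truth for every pixel "for free" -- that is the appeal
# of synthetic data over hand-labeled real photos.
def render_randomized_scene(rng, height=64, width=64):
    rgb = rng.random((height, width, 3), dtype=np.float32)   # fake image
    object_ids = rng.integers(0, 5, size=(height, width))    # fake labels
    return rgb, object_ids

def generate_dataset(num_frames=1000, seed=0):
    rng = np.random.default_rng(seed)
    frames, labels = [], []
    for _ in range(num_frames):
        rgb, ids = render_randomized_scene(rng)
        frames.append(rgb)
        labels.append(ids)
    # Stacked arrays ready to feed a robot-perception training loop.
    return np.stack(frames), np.stack(labels)

images, masks = generate_dataset(num_frames=100)
print(images.shape, masks.shape)  # (100, 64, 64, 3) (100, 64, 64)
```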
Nvidia also talked about its Cosmos AI models, which help robots take commands, sense, reason, plan, and then execute tasks in the physical world. The models can help bring digital intelligence into the physical world, said Sanja Fidler, vice president of research at Nvidia’s spatial intelligence lab.
“Physical AI can’t scale through real-world trial and error. It’s unsafe, time-consuming and expensive,” Fidler said.
For example, autonomous cars are trained in virtual worlds because crashing real cars hundreds of times to generate training data isn’t feasible.
The company also this week announced Omniverse NuRec, which turns real-world sensor data into fully interactive simulations that robots can train in or be tested against. It includes tools and AI models for constructing, simulating, rendering, and enhancing 3D digital environments.
Those virtual reconstructions are built from 2D data collected by cameras and sensors, with every pixel labeled based on a visual understanding of the sensor data.
“It is very important to stress here that visual understanding is not perfect and because of different ambiguities it’s hard to perfect,” Fidler said.
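As a rough sketch of what per-pixel labeling looks like in practice (assuming a generic perception model, not Nvidia’s pipeline), the code below assigns each pixel its most likely class and flags pixels where the top two classes are nearly tied, reflecting the ambiguity Fidler describes:

```python
import numpy as np

# Minimal sketch of per-pixel labeling: a hypothetical perception model
# scores each pixel against a set of classes, and low-margin pixels are
# flagged as ambiguous -- echoing the caveat that visual understanding
# is imperfect.
CLASSES = ["road", "vehicle", "pedestrian", "building"]

def label_pixels(class_scores, margin=0.2):
    """class_scores: (H, W, C) array of per-class scores per pixel."""
    # Softmax over the class axis to get per-pixel probabilities.
    exp = np.exp(class_scores - class_scores.max(axis=-1, keepdims=True))
    probs = exp / exp.sum(axis=-1, keepdims=True)
    labels = probs.argmax(axis=-1)                 # most likely class
    sorted_p = np.sort(probs, axis=-1)             # ascending per pixel
    top_margin = sorted_p[..., -1] - sorted_p[..., -2]
    ambiguous = top_margin < margin                # close call between classes
    return labels, ambiguous

rng = np.random.default_rng(1)
scores = rng.normal(size=(4, 4, len(CLASSES)))    # fake model output
labels, ambiguous = label_pixels(scores)
print(labels)
print(ambiguous.mean(), "of pixels flagged as ambiguous")
```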
The company also announced AI material-generation tools to create more realistic graphics “complete with realistic visual details, including reflectivity and surface textures,” Lefohn said.
3D experts and engineers can “engage AI assistants using simple language to describe their requirements,” Lefohn said.
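For context on what a material-generation tool ultimately has to produce: physically based rendering (PBR) materials boil down to a small set of per-surface parameters such as albedo, roughness, and reflectivity. The stub below is purely illustrative; the function, presets, and values are hypothetical and not Nvidia’s API.

```python
from dataclasses import dataclass

# A physically based material is a handful of parameters that renderers
# consume; AI material generation maps a text prompt to values (and
# texture maps) like these. This generator is a hypothetical stub.
@dataclass
class PBRMaterial:
    base_color: tuple[float, float, float]  # linear RGB albedo
    roughness: float                        # 0 = mirror, 1 = fully diffuse
    metallic: float                         # 0 = dielectric, 1 = metal
    specular: float                         # reflectivity at normal incidence

def generate_material(prompt: str) -> PBRMaterial:
    # Stand-in for a genAI model: a lookup keyed on the prompt.
    presets = {
        "brushed steel": PBRMaterial((0.56, 0.57, 0.58), 0.35, 1.0, 0.5),
        "rough concrete": PBRMaterial((0.51, 0.50, 0.48), 0.95, 0.0, 0.3),
    }
    return presets.get(prompt, PBRMaterial((0.5, 0.5, 0.5), 0.5, 0.0, 0.5))

print(generate_material("brushed steel"))
```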