Nvidia is betting big on robotics with new AI models and developer tools.
The company’s research lab, once a tiny 12-person operation focused on ray tracing, now employs more than 400 people. It helped turn Nvidia from a graphics-chip maker into a $4 trillion AI powerhouse, and it is now shifting toward physical AI and robotics.
Bill Dally joined Nvidia’s research lab full-time in 2009 and has overseen its growth since. The lab quickly expanded beyond ray tracing into chip design and other areas, and it began adapting GPUs for AI workloads around 2010, long before the current AI boom.
Now, Nvidia wants to make “the brains of all the robots,” per Dally. That means building AI models that power robots and physical AI systems.
Sanja Fidler, Nvidia’s VP of AI research, joined in 2018 after working on robot simulation models at MIT. She set up Omniverse’s Toronto lab, which focuses on physical AI simulation. In 2021 the lab introduced GANverse3D, a tool that converts images into 3D models, and in 2022 it followed with the Neural Reconstruction Engine, which builds 3D models from video.
Fidler said:
“We invested in this technology called differentiable rendering, which essentially makes rendering amenable to AI, right?
Rendering means going from 3D to image or video, right? And we want it to go the other way.”
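The inverse direction Fidler describes — recovering scene parameters from pixels by running a differentiable renderer backwards — can be sketched in a few lines. This is a toy illustration, not Nvidia’s technology: the “scene” here is a single Gaussian blob whose position is recovered from a target image by gradient descent on a pixel loss.

```python
import numpy as np

# Toy differentiable renderer: the "scene" is one Gaussian blob at
# position pos = (x, y); render() maps scene parameters -> image.
# (Illustrative sketch only -- not Nvidia's renderer or Cosmos.)
H, W, SIGMA = 32, 32, 3.0
ys, xs = np.mgrid[0:H, 0:W].astype(float)

def render(pos):
    """Forward direction: scene parameters -> image."""
    return np.exp(-((xs - pos[0]) ** 2 + (ys - pos[1]) ** 2) / (2 * SIGMA**2))

# A "photo" of a scene whose parameters we pretend not to know.
true_pos = np.array([20.0, 11.0])
target = render(true_pos)

# Inverse rendering: because render() is differentiable, we can start
# from a wrong guess and descend on the pixel-wise loss until the
# rendered image matches the photo, recovering the scene parameters.
pos = np.array([15.0, 8.0])
lr = 0.5
for _ in range(300):
    img = render(pos)
    diff = img - target  # dL/dimg for L = 0.5 * sum(diff**2)
    # Analytic gradient of the loss w.r.t. the blob position.
    gx = np.sum(diff * img * (xs - pos[0])) / SIGMA**2
    gy = np.sum(diff * img * (ys - pos[1])) / SIGMA**2
    pos -= lr * np.array([gx, gy])

print(np.round(pos, 1))  # recovered scene position, close to [20. 11.]
```

Nvidia’s actual pipelines operate on real images and full 3D scenes, but the principle is the same: make rendering differentiable so gradients can flow from pixels back to scene parameters.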
Nvidia’s Cosmos AI world models, launched earlier this year, rely on this tech for training robots with synthetic data.
At this week’s SIGGRAPH conference, the lab rolled out new AI world models, libraries, and infrastructure for robotics developers. The tools promise faster model inference, which is critical for real-time robot perception and manipulation.
Fidler explains:
“The robot doesn’t need to watch the world in the same time, in the same way as the world works.
It can watch it like 100x faster. So if we can make these models significantly faster than they are today, they’re going to be tremendously useful for robotic or physical AI applications.”
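Fidler’s point about watching faster than real time is easy to see with even a hand-written simulator. The sketch below uses a toy physics model standing in for a learned world model (it is not Cosmos): it rolls out about half an hour of simulated experience and reports how many times faster than wall-clock it ran.

```python
import time

# Toy "world model": a bouncing point mass stepped at 100 Hz.
# Hand-written physics stands in for a learned model here; the point is
# that rollouts run far faster than real time, so a robot can "watch"
# hours of experience in seconds. (Hypothetical sketch, not Cosmos.)
DT = 0.01        # 100 Hz simulated timestep
STEPS = 200_000  # 2,000 simulated seconds (~33 minutes)

def step(x, v):
    """Advance the toy world by one timestep (gravity plus a lossy bounce)."""
    v -= 9.81 * DT
    x += v * DT
    if x < 0.0:               # hit the floor: bounce, losing 20% of speed
        x, v = 0.0, -0.8 * v
    return x, v

start = time.perf_counter()
x, v = 10.0, 0.0
for _ in range(STEPS):
    x, v = step(x, v)
elapsed = time.perf_counter() - start

sim_seconds = STEPS * DT
speedup = sim_seconds / elapsed
print(f"simulated {sim_seconds:.0f}s of experience in {elapsed:.3f}s "
      f"of wall-clock time ({speedup:.0f}x real time)")
```

Even this naive Python loop typically runs orders of magnitude faster than real time, which is exactly the leverage Fidler describes for generating robot training data.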
Despite the hype, Nvidia says humanoid robots in your home are still years away. The lab’s approach is gradual: solve vision, task planning, and manipulation piece by piece, and scale as training data grows.
Dally summed it up:
“We’re making huge progress, and I think, you know, AI has really been the enabler here.
Starting with visual AI for the robot perception, and then, you know, generative AI, that’s been hugely valuable for task and motion planning and manipulation.
As we solve each of these individual little problems and as the amount of data we have to train our networks grows, these robots are going to grow.”
Nvidia’s research lab is pushing from AI GPUs into physical AI. Robotics may be the company’s next big bet.