NVIDIA has unveiled a new suite of physical AI models aimed at transforming robotics worldwide. Announced at CES 2026 in Las Vegas, the release underscores how artificial intelligence is moving from purely digital tasks into real-world, physical applications: the models let robots perceive their environments, reason about tasks, and act autonomously across multiple industries.
NVIDIA’s models, including Cosmos Transfer 2.5, Cosmos Predict 2.5, Cosmos Reason 2, and Isaac GR00T N1.6, are now available on platforms such as Hugging Face, where developers can use them to simulate environments, generate synthetic data, and train robots more efficiently, reducing costs and improving accuracy.
Physical AI models enable machines to analyze their surroundings, plan complex movements, and execute full-body actions autonomously, freeing engineers to focus on innovation rather than repetitive programming and shortening robotics development cycles.
Several global leaders have already integrated NVIDIA’s technology into their robotics solutions. Boston Dynamics, Caterpillar, Franka Robotics, Humanoid, LG Electronics, and NEURA Robotics demonstrated robots built on the physical AI models, operating in settings that range from industrial assembly to medical environments and household assistance.
NVIDIA CEO Jensen Huang described the release as a “ChatGPT moment for robotics,” noting that the combination of Jetson processors, CUDA software, Omniverse simulation tools, and open physical AI models provides a complete stack for building advanced autonomous robots.
The Jetson T4000 module, built on the Blackwell architecture, delivers four times the AI compute efficiency of previous modules and is slated to power industrial, healthcare, and domestic robots with greater speed, intelligence, and reliability.
Alongside the models, NVIDIA introduced the Isaac Lab Arena and OSMO frameworks to unify training workflows. Isaac Lab Arena benchmarks robot behavior at scale in simulation, allowing robots to be tested extensively for safety and performance before deployment. OSMO coordinates data generation, model training, and automated testing, simplifying development pipelines.
Opening the physical AI models to developers has drawn global interest. Through a collaboration with Hugging Face, the models are integrated into the LeRobot framework, putting these tools in the hands of millions of engineers and accelerating robotics innovation worldwide.
The models also address a critical challenge in robotics: data scarcity. Cosmos models simulate complex real-world scenarios, generating abundant training data without expensive or risky physical testing and allowing engineers to develop smarter, more capable robots.
Industry applications extend beyond traditional manufacturing. Salesforce uses physical AI models with video analysis to improve operational efficiency, while LEM Surgical trains autonomous surgical arms. Meanwhile, logistics, healthcare, and service industries are exploring how these models enhance productivity and safety.
The announcement highlights AI’s shift from digital tasks to physical-world execution, as robots become more intelligent, adaptive, and capable of autonomous operation in dynamic, complex environments.
NVIDIA’s open-stack approach encourages rapid adoption of AI-driven robots and fosters collaboration across the developer community, giving engineers an ecosystem in which to quickly build the next generation of autonomous machines.
With robust computing platforms, open-source accessibility, and collaborative frameworks, NVIDIA and its partners are moving physical AI models from experimental research tools into practical, deployable systems across industries worldwide.