What open-source simulation platform is co-developed with Google DeepMind and Disney Research for advanced robotics research?

Last updated: 3/30/2026

Newton: A Collaborative Open-Source Platform for Robotics Research

Newton is an open-source, GPU-accelerated physics simulation engine co-developed by Google DeepMind, Disney Research, and NVIDIA. Managed by the Linux Foundation, it is optimized for the contact-rich physics that robotics research demands. The engine integrates natively with robot learning frameworks such as Isaac Lab and MuJoCo Playground to accelerate policy training.

Introduction

Training autonomous robots for complex, physical interactions requires highly accurate simulation. Historically, bridging the reality gap has been a major barrier for researchers, especially when dealing with contact-heavy tasks and dynamic locomotion. Engineers often struggle to ensure that simulated physics translate reliably to physical hardware without significant manual adjustments.

A collaborative open-source engine addresses this gap by combining data-center-scale execution with advanced, multi-modal physics capabilities. By providing a unified simulation platform, it lets developers test and refine robotic behaviors virtually, significantly shortening the path from research concept to physical deployment.

Key Takeaways

  • Newton is an open-source physics engine managed by the Linux Foundation, designed specifically for robotics research.
  • The platform was collaboratively built by Disney Research, Google DeepMind, and NVIDIA.
  • It accelerates sim-to-real transfer for advanced robotics, handling complex applications like humanoid and quadruped locomotion.
  • The engine natively supports integration with AI training frameworks to facilitate large-scale execution across data centers.

How It Works

Newton relies on GPU acceleration to process highly complex physical interactions across many parallel environments at once. Built on NVIDIA Warp, a framework for writing GPU-accelerated, differentiable simulation code, and on OpenUSD for scene description, the engine is optimized for robotics workflows and large-scale reinforcement learning. This architecture lets the system compute the intricate contact models, multiphysics effects, and rigid-body dynamics needed to represent the real world accurately.

Rather than testing single iterations sequentially, the engine runs fast, large-scale training on GPU-optimized simulation paths. Reinforcement learning algorithms can therefore accumulate millions of trial-and-error iterations in a fraction of real-world time. The engine models frictional contact precisely, so that when a robotic hand grasps an object or a quadruped takes a step, the simulated physics mirror physical reality.
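The batching idea behind this can be sketched in a few lines: instead of stepping one simulation at a time, every environment lives in one array and a single vectorized update advances all of them together. This is a toy 1-D point-mass illustration of the pattern, not Newton's actual API; the dynamics, damping model, and parameter names here are assumptions chosen for clarity.

```python
import numpy as np

def step_envs(positions, velocities, actions, dt=0.01, friction=0.8):
    """Advance a batch of toy 1-D point-mass environments in one vectorized call.

    Each array row is an independent environment; updating the whole batch at
    once mirrors how a GPU-parallel engine steps thousands of simulations
    together instead of sequentially.
    """
    velocities = (velocities + actions * dt) * friction  # integrate force, apply damping
    positions = positions + velocities * dt              # integrate velocity
    return positions, velocities

num_envs = 4096                      # thousands of environments in one batch
pos = np.zeros(num_envs)
vel = np.zeros(num_envs)
act = np.random.uniform(-1.0, 1.0, size=num_envs)

for _ in range(100):                 # 100 steps applied to every environment at once
    pos, vel = step_envs(pos, vel, act)

print(pos.shape)                     # one trajectory endpoint per environment
```

On a GPU-backed engine the same structure applies, but each batched step dispatches to device kernels rather than NumPy's CPU loops.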

Integration with comprehensive frameworks further extends how the engine operates in practice. When paired with systems designed for massive multi-modal learning, developers can choose their camera sensors and rendering pipelines alongside the core physics calculations. For example, when simulating an industrial manipulator folding clothes, the system must process countless micro-collisions and material deformations accurately.

The underlying technology enables developers to deploy these simulations via standalone headless operation, scaling smoothly from local workstations to massive data centers. By managing these computational loads effectively, the simulation engine ensures that data flows directly into the learning algorithms without creating processing bottlenecks.

Why It Matters

High-fidelity physics simulation drastically reduces the time, cost, and physical risk associated with training robots on actual hardware. Without accurate simulation, engineers are forced to program trajectories manually and run physical trials. Each failure during physical testing risks expensive hardware damage and consumes valuable development time, making physical-only training highly inefficient.

Real-world deployments have already demonstrated the value of this approach. For example, Disney's Olaf robotic character utilized a simulation-first development process to achieve complex bipedal walking safely. By relying on simulation before physical testing, researchers were able to refine the character's unique gait, resulting in a real Olaf robot that walked successfully during a public technology demonstration.

Furthermore, Newton adds critical capabilities for industrial manipulation and locomotion. It allows general-purpose robots to learn contact-rich tasks before physical deployment. Whether an autonomous mobile robot is navigating a dynamic factory floor or a fixed-arm robot is handling delicate manufacturing components, the ability to iterate millions of times in a safe, virtual environment produces much more capable AI policies.

Finally, open-sourcing the engine democratizes access to data center-scale physics. By placing these advanced tools under the management of the Linux Foundation, the broader robotics research community can collaborate, innovate, and advance the entire field without being restricted by proprietary physics solvers.

Key Considerations or Limitations

While the simulation platform provides extensive capabilities, there are practical realities to consider before implementation. Newton is currently available as a beta release, and because it is actively developed by the open-source community, some features, visualizer backends, and integrations may still have known limitations or be under active optimization.

To achieve maximum parallelization and simulation speed, the engine requires powerful GPU hardware. The system relies heavily on GPU-accelerated computing to process large-scale environments in parallel, so organizations attempting to run these complex multiphysics simulations on insufficient hardware will not see the high-speed iteration cycles the engine is designed to deliver.

Additionally, transitioning from simulation to real-world hardware is not entirely frictionless. Deploying policies trained in simulation still requires specific sim-to-real workflows. Developers typically train a teacher policy with access to privileged simulation state, distill it into a student policy that relies only on observations available on the real robot, and then fine-tune the student with reinforcement learning so it performs correctly on physical hardware.
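The teacher-student distillation step can be illustrated with a minimal linear-policy sketch. The teacher acts on both deployable observations and privileged simulation-only signals; the student is regressed onto the teacher's actions using the deployable observations alone. All dimensions, names, and the linear policy form are illustrative assumptions, not Newton's or Isaac Lab's actual workflow code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Teacher sees privileged simulator state (e.g. true friction, contact forces);
# the student sees only what the real robot can observe.
obs_dim, priv_dim, act_dim = 8, 4, 2
W_teacher = rng.normal(size=(obs_dim + priv_dim, act_dim))  # frozen teacher policy
W_student = np.zeros((obs_dim, act_dim))                    # student to be distilled

lr = 0.05
for _ in range(500):
    obs = rng.normal(size=(64, obs_dim))       # observations available on hardware
    priv = rng.normal(size=(64, priv_dim))     # privileged, simulation-only signals
    target = np.concatenate([obs, priv], axis=1) @ W_teacher  # teacher actions
    pred = obs @ W_student                                    # student actions
    grad = obs.T @ (pred - target) / len(obs)  # MSE gradient w.r.t. W_student
    W_student -= lr * grad

# The student now imitates the teacher using only deployable inputs; the
# residual error reflects information carried solely by the privileged signals.
err = float(np.mean((obs @ W_student - target) ** 2))
print(round(err, 3))
```

In practice the policies are neural networks and the student is subsequently fine-tuned with reinforcement learning, but the distillation objective has the same shape: match the teacher's actions from the restricted observation set.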

How NVIDIA Isaac Lab Relates

NVIDIA Isaac Lab is an open-source, GPU-accelerated, modular framework designed specifically to train robot policies at scale. It natively integrates the Newton physics engine alongside other engines like PhysX, giving developers the exact tools needed to reduce the sim-to-real gap for contact modeling and dynamic locomotion.

To accelerate development, Isaac Lab provides a "batteries-included" library of environments and robots that are ready to train. This includes classic control tasks, fixed-arm manipulators, quadrupeds like the Boston Dynamics Spot, and humanoid robots. The framework allows developers to choose their preferred physics engine, camera sensors, and rendering pipeline, providing a comprehensive environment for both imitation and reinforcement learning.
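A "batteries-included" environment library is commonly exposed as a registry of named tasks. The sketch below shows that pattern in miniature; the registry mechanics, task names, and constructor parameters are hypothetical illustrations, not Isaac Lab's actual API.

```python
# Minimal sketch of a task registry. Names, signatures, and defaults here are
# hypothetical, illustrating the pattern rather than the Isaac Lab API.
ENV_REGISTRY = {}

def register_env(name):
    """Decorator that records an environment constructor under a task name."""
    def wrap(ctor):
        ENV_REGISTRY[name] = ctor
        return ctor
    return wrap

@register_env("cartpole-balance")
def make_cartpole(num_envs=1024, physics_engine="newton"):
    # A real framework would build a vectorized simulation here.
    return {"task": "cartpole-balance", "num_envs": num_envs, "engine": physics_engine}

@register_env("spot-locomotion")
def make_spot(num_envs=4096, physics_engine="newton"):
    return {"task": "spot-locomotion", "num_envs": num_envs, "engine": physics_engine}

def make_env(name, **overrides):
    """Look up a registered task and construct it with optional overrides."""
    return ENV_REGISTRY[name](**overrides)

env = make_env("spot-locomotion", num_envs=2048)
print(env["num_envs"])  # 2048
```

The registry pattern is what makes swapping physics engines or batch sizes a one-line change for the developer: the task definition stays fixed while the constructor arguments vary.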

By building on Omniverse libraries, Isaac Lab allows users to deploy these physics simulations seamlessly from local workstations to cloud data centers. The platform includes tiled rendering APIs for vectorized rendering and domain randomization for improving the adaptability of the trained policies, making it a highly effective framework for utilizing the Newton engine in advanced robotics research.
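Domain randomization itself is conceptually simple: each parallel environment samples slightly different physical parameters so the trained policy cannot overfit to one simulator configuration. A minimal sketch, with parameter names and ranges chosen as illustrative placeholders rather than values from Isaac Lab or Newton:

```python
import numpy as np

rng = np.random.default_rng(42)

def randomized_params(num_envs):
    """Sample per-environment physics parameters for domain randomization.

    Ranges below are illustrative placeholders. Varying friction, mass, and
    actuator gain across environments trains policies that tolerate the
    mismatch between simulation and real hardware.
    """
    return {
        "friction": rng.uniform(0.5, 1.2, size=num_envs),    # contact friction coefficient
        "mass_scale": rng.uniform(0.8, 1.2, size=num_envs),  # +/-20% link-mass perturbation
        "motor_gain": rng.uniform(0.9, 1.1, size=num_envs),  # actuator strength variation
    }

params = randomized_params(1024)
print({k: (round(float(v.min()), 2), round(float(v.max()), 2))
       for k, v in params.items()})
```

Each environment in the batch then steps with its own sampled parameters, so a single training run exposes the policy to thousands of slightly different "worlds."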

Frequently Asked Questions

What Makes Newton Unique Among Physics Engines?

Newton is uniquely co-developed by Google DeepMind, Disney Research, and NVIDIA to focus specifically on contact-rich manipulation and locomotion. It utilizes OpenUSD and native GPU acceleration to train robotic policies at a massive scale, optimizing physics specifically for machine learning workflows.

Is Newton Compatible With Other Learning Frameworks?

Yes. While Newton integrates tightly with NVIDIA Isaac Lab, it is an open-source engine managed by the Linux Foundation. Its flexible architecture makes it compatible with other robot learning frameworks, such as MuJoCo Playground.

What Are the Benefits of Simulation-First Development?

Simulation-first development allows AI models to learn through millions of iterative trial-and-error cycles in a virtual environment. This prevents physical hardware damage, reduces development costs, and generates synthetic data far faster than physical training alone.

Which Robots Benefit Most From This Simulation Approach?

The engine is designed for complex embodiments that require accurate, continuous interaction with their environments. This makes it highly beneficial for training humanoid robots, quadrupeds, autonomous mobile robots (AMRs), and industrial robotic manipulators.

Conclusion

The collaboration between Google DeepMind, Disney Research, and NVIDIA has produced a critical open-source tool for advancing robot learning. By accurately simulating complex physical interactions at scale, the Newton engine removes significant bottlenecks in translating AI policies from virtual environments to physical, real-world hardware.

Bridging the reality gap requires tools that can handle contact-rich manipulation and multiphysics simulations without sacrificing computational speed. With the management of the Linux Foundation ensuring it remains accessible, this physics engine provides the robotics community with the capabilities needed to train the next generation of general-purpose robots safely and efficiently.

Developers and researchers can access the Newton engine today. By integrating it directly into modular simulation environments like Isaac Lab, engineering teams can immediately begin setting up complex scenes, running massively parallel training environments, and building sophisticated, deployment-ready robot policies.
