Crossing the Sim2Real Gap With NVIDIA Isaac Lab


đź”– Topics: Partnership, Sim2Real, Reinforcement Learning, Humanoid

🏢 Organizations: Agility Robotics, NVIDIA


We’ve been able to demonstrate this in areas like step recovery, where the physics is particularly hard to model. In situations where Digit loses its footing, it’s often a result of an environment where we don’t have a good model of what’s going on: there might be something pushing on or caught on Digit, or its feet might be slipping on the ground in an unexpected way. Digit might not even be able to tell which issue it’s having. But we can train a controller to be robust to many of these disturbances with reinforcement learning, training it on many possible ways that the robot might fall until it comes up with a controller that works well in many situations. The chart in the full post shows how big a difference that training can make.
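As a rough illustration, this kind of disturbance training is often set up as randomized event terms in the simulator. The sketch below follows the manager-based event-configuration pattern from the NVIDIA Isaac Lab documentation; module paths change between Isaac Lab releases, and the specific terms, ranges, and body-name patterns here are illustrative assumptions, not Agility's actual setup.

```python
# Sketch of randomized disturbances in an Isaac Lab manager-based env config.
# Assumptions: module paths follow recent Isaac Lab releases; the push and
# friction ranges below are illustrative, not Agility's training values.
from isaaclab.managers import EventTermCfg as EventTerm
from isaaclab.managers import SceneEntityCfg
from isaaclab.utils import configclass
import isaaclab.envs.mdp as mdp


@configclass
class DisturbanceEventsCfg:
    """Random disturbances the policy must learn to survive."""

    # Shove the robot at random intervals by perturbing its base velocity,
    # standing in for "something pushing on or caught on Digit".
    push_robot = EventTerm(
        func=mdp.push_by_setting_velocity,
        mode="interval",
        interval_range_s=(8.0, 14.0),  # randomized seconds between pushes
        params={"velocity_range": {"x": (-1.0, 1.0), "y": (-1.0, 1.0)}},
    )

    # Randomize foot-ground friction so training also covers slipping feet.
    # The body-name regex is a hypothetical stand-in for the robot's feet.
    foot_friction = EventTerm(
        func=mdp.randomize_rigid_body_material,
        mode="startup",
        params={
            "asset_cfg": SceneEntityCfg("robot", body_names=".*foot.*"),
            "static_friction_range": (0.2, 1.2),
            "dynamic_friction_range": (0.2, 1.0),
            "restitution_range": (0.0, 0.1),
            "num_buckets": 64,
        },
    )
```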

Early last year, we started using NVIDIA Isaac Lab to train these kinds of models. Working with NVIDIA, we built some basic policies that allowed Digit to walk around.
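For a sense of what working in Isaac Lab looks like, here is a hypothetical smoke-test script in the style of Isaac Lab's documented random-agent workflow: launch the simulator, create a vectorized gym task, and step it with random actions. The task id is a stand-in from Isaac Lab's shipped examples; Agility's Digit task and training configuration are not public, and exact module paths vary by release.

```python
# Hypothetical Isaac Lab smoke test; a real training run would plug an RL
# library (e.g. Isaac Lab's RSL-RL integration) in place of random actions.
import argparse
import torch
from isaaclab.app import AppLauncher

parser = argparse.ArgumentParser()
AppLauncher.add_app_launcher_args(parser)   # adds --headless, --device, etc.
args = parser.parse_args()
simulation_app = AppLauncher(args).app      # boots Isaac Sim / Omniverse

# These imports only work after the app has been launched.
import gymnasium as gym
import isaaclab_tasks  # noqa: F401  registers the Isaac-* gym tasks
from isaaclab_tasks.utils import parse_env_cfg

TASK = "Isaac-Velocity-Flat-Anymal-C-v0"    # stand-in locomotion task
env_cfg = parse_env_cfg(TASK, num_envs=1024)
env = gym.make(TASK, cfg=env_cfg)

obs, _ = env.reset()
with torch.inference_mode():
    for _ in range(200):
        # Uniform random actions in [-1, 1] across all parallel envs.
        actions = 2.0 * torch.rand(env.action_space.shape,
                                   device=env.unwrapped.device) - 1.0
        obs, rew, terminated, truncated, info = env.step(actions)

env.close()
simulation_app.close()
```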

The net result has been a huge step forward in our RL software stack. Instead of a pile of stacked reward functions covering everything from “stop wiggling your foot” to “stand up straighter”, we have a handful of rewards around things like energy consumption and symmetry that are not only simpler but also follow our basic intuitions about how Digit should move. Investing the time to understand why simulation differed from reality has taught us a lot more about why we want Digit to move a certain way in the first place. And most importantly, coupled with the speed of NVIDIA Isaac Sim, a reference application built on NVIDIA Omniverse for simulating and testing AI-driven robots, it has enabled us to explore the impact of different physical characteristics that we might want in future generations of Digit.
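A compact reward configuration in this spirit might look like the sketch below. The velocity-tracking and torque-penalty terms follow Isaac Lab's built-in reward functions; the symmetry term is a hypothetical custom function (its even/odd joint ordering is an assumption), and all weights are illustrative rather than Agility's values.

```python
# Sketch of a small, intuitive reward set: track the commanded velocity,
# penalize energy use, and encourage left/right symmetry.
import torch
from isaaclab.managers import RewardTermCfg as RewTerm
from isaaclab.utils import configclass
import isaaclab.envs.mdp as mdp


def gait_symmetry(env) -> torch.Tensor:
    """Hypothetical term: penalize asymmetric left/right joint motion."""
    qd = env.scene["robot"].data.joint_vel        # (num_envs, num_joints)
    left, right = qd[:, 0::2], qd[:, 1::2]        # assumes mirrored ordering
    return -torch.sum((left - right).abs(), dim=1)


@configclass
class RewardsCfg:
    # The actual task: track the commanded base velocity.
    track_velocity = RewTerm(
        func=mdp.track_lin_vel_xy_exp,
        weight=1.0,
        params={"command_name": "base_velocity", "std": 0.25},
    )
    # Proxy for energy consumption: squared joint torques.
    energy = RewTerm(func=mdp.joint_torques_l2, weight=-1e-4)
    # Custom symmetry term defined above (returns negative values, so a
    # positive weight penalizes asymmetry).
    symmetry = RewTerm(func=gait_symmetry, weight=0.05)
```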

Read more at Agility Robotics