I spent two minutes at the Humanoids Summit trying to unplug a cable.
I was standing at the Lightwheel booth, holding a pair of game controllers, operating a robotic arm. The task sounded trivial: grab the connector, pull, unplug.
I failed. Repeatedly. I don’t think I even came close.
As a human, unplugging a cable is something you do without thinking. You grab it, wiggle a little, pull — done. For a robot, every part of that interaction has to be learned: where to grab, how hard to pull, what to do when the connector resists, and how to recover when things don’t line up perfectly.
The two-minute video above shows exactly what that learning process looks like. It’s awkward. It’s slow. And it’s far harder than it appears.
That small, frustrating demo explains more about the state of robotics today than any polished keynote or glossy humanoid reveal. Training robots isn’t just about building better hardware or bigger models. It’s about teaching machines how to deal with the messy physical details humans take for granted.
Why Simple Tasks Break Robots (and, Apparently, Diana Too)
In the race to build general-purpose robots — from autonomous vehicles to humanoids — hardware keeps improving. Motors get stronger. Sensors get cheaper. Form factors get sleeker.
Software, however, is starving for experience.
Physical AI systems need enormous amounts of experience to behave reliably in the real world. But gathering that experience physically is slow, expensive, and risky. You can’t crash cars endlessly to see what happens. You can’t let humanoid robots repeatedly fail in kitchens, warehouses, or factories.
This is why simulation has quietly become essential infrastructure for robotics.
Simulation Is Harder Than It Sounds
Simulation sounds straightforward in theory. Build a virtual world. Drop a robot into it. Let it practice.
In reality, it’s brutally difficult.
First, there’s asset discovery. Engineers spend huge amounts of time just finding the right objects to populate a simulation: not “a cable,” but this cable, with the right shape, stiffness, and friction.
Second, there’s physics fidelity. Most simulations assume a rigid world. But the real world is full of non-rigid, deformable objects: cables, cloth, food, wires, plants. These are exactly the things that cause robots to fail once they leave the lab.
And third, there’s evaluation. A robot can succeed endlessly in simulation and still fail the moment it touches reality. Simulation is a proxy, not the real thing, and without careful validation it can create false confidence.
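To make the rigid-versus-deformable point concrete, here is a toy sketch in Python. It is not Lightwheel’s simulator, and every number in it is invented for illustration: a rigid connector is one pose, but even a crude cable model is a chain of coupled point masses, where stiffness and damping parameters change the outcome.

```python
def simulate_cable(n_nodes=10, steps=1000, dt=0.01, k=50.0, damping=2.0):
    """Toy damped mass-spring chain: node 0 is clamped, gravity pulls the rest.
    All parameters are invented for illustration, not tuned to any real cable."""
    g = -9.8
    ys = [0.0] * n_nodes  # vertical displacement of each node
    vs = [0.0] * n_nodes  # vertical velocity of each node
    for _ in range(steps):
        forces = [0.0] * n_nodes
        for i in range(1, n_nodes):
            f = g - damping * vs[i]
            f += k * (ys[i - 1] - ys[i])        # spring to the node above
            if i < n_nodes - 1:
                f += k * (ys[i + 1] - ys[i])    # spring to the node below
            forces[i] = f
        for i in range(1, n_nodes):             # semi-implicit Euler step
            vs[i] += forces[i] * dt
            ys[i] += vs[i] * dt
    return ys

# The free end sags far more than the clamped end: the state is n coupled
# positions, not one rigid pose, and it shifts with every parameter choice.
sag = simulate_cable()
```

Even this ten-node toy hints at the problem: a rigid object is six numbers, while a deformable one is a whole internal state that reacts differently each time you touch it.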
Simulation is like practicing against a tennis ball machine: it helps, but it won’t prepare you for wind, pressure, or a slippery court. At some point, you have to play the match.
Lightwheel: Building Worlds for Robots to Learn In
Lightwheel isn’t building robots. They’re building the worlds robots learn in.
Founded by Steve Xie, formerly a lead on autonomous driving simulation at NVIDIA and Cruise, Lightwheel focuses on reducing friction in simulation workflows rather than promising to “solve” sim-to-real outright.
I had the opportunity to interview members of the Lightwheel team. What stood out was how they frame simulation not as a visualization tool, but as a behavioral test environment.
Their framework separates two layers:
a behavior layer, which captures what a robot does
a world layer, which represents where it does it
The goal isn’t a perfect digital copy of reality. It’s a world realistic enough to stress-test behavior and measure how well it generalizes.
Rather than treating sim-to-real as a binary success or failure, they treat it as a gradient. How far does a learned behavior transfer? Where does it break? How bad is the failure when it happens?
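That gradient framing can be sketched in code. The snippet below is a hypothetical illustration, not Lightwheel’s evaluation pipeline: the policy, its “fragility,” and the perturbation levels are all invented. The idea is simply to measure success rate at each level of deviation from training conditions, instead of recording a single pass/fail.

```python
import random

def run_trial(policy, perturbation, rng):
    """One simulated attempt. Toy model: the policy's nominal success rate
    erodes as the world deviates from its training conditions."""
    success_prob = max(0.0, policy["nominal_success"] - policy["fragility"] * perturbation)
    return rng.random() < success_prob

def transfer_profile(policy, levels, trials=500, seed=0):
    """Success rate at each perturbation level: a gradient, not a binary."""
    rng = random.Random(seed)
    return {
        level: sum(run_trial(policy, level, rng) for _ in range(trials)) / trials
        for level in levels
    }

# A hypothetical cable-unplugging policy: strong in the lab, brittle outside it.
policy = {"nominal_success": 0.95, "fragility": 1.5}
levels = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]  # 0.0 = training sim; higher = messier world
profile = transfer_profile(policy, levels)
for level, rate in profile.items():
    print(f"perturbation {level:.1f}: success {rate:.0%}")
```

The output is a curve, and the curve answers the questions that matter: how far the behavior transfers, where it breaks, and how sharply it degrades when it does.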
Robots struggle most with everyday squishy, flexible stuff, because those objects never behave the same way twice; they’re exactly what makes robots fail outside the lab.
Who This Matters For — Right Now
This approach is especially relevant for:
Academic labs, which need reproducible environments and standardized assets
Humanoid and embodied AI teams, facing a massive data gap
Industrial teams, working in factories full of cables, cloth, fluids, and flexible materials
For startups, this kind of infrastructure means not having to build simulation tooling from scratch. For larger organizations already using platforms like NVIDIA Isaac Sim, it augments existing workflows instead of replacing them.
Autonomous Driving Has Been Here Before
If this all sounds familiar, it should.
Autonomous driving has been wrestling with these problems for years. Systems can feel smooth and confident one moment, then unsettling the next, not because they’re broken, but because they run into situations they haven’t experienced deeply enough.
Autonomy rarely fails loudly. It fails quietly, in ways that erode trust.
My husband recently tried Tesla’s Full Self-Driving, accidentally switched on Mad Max Mode, and decided he was done with it.
He expected a quiet ride. He did not get one. The system wasn’t broken. It just felt wrong.
Why This Moment Matters
The real world is messy in very specific ways. Objects bend, resist, and change based on how they’re handled, and those small variations are exactly what make training robots so difficult.
Trust isn’t trained by benchmarks alone. It’s trained through exposure to awkward, frustrating, real-world moments we usually ignore.
As robots move out of labs and into human spaces, simulation is no longer optional. It’s infrastructure. And the quality of the worlds we build for robots to practice in may matter as much as any single breakthrough model.
Training robots is hard.
That’s the point.
Editor’s Note: The accompanying podcast was created using NotebookLM.
Sources:
“Beyond Rigid Worlds: Representing and Interacting with Non-Rigid Objects Workshop.” CoRL 2025, 27 Sept. 2025, Seoul, Korea. Conference Workshop Program.
Demaitre, Eugene. “NVIDIA Releases Cloud-to-Robot Computing Platforms for Physical AI, Humanoid Development.” The Robot Report, WTWH Media, 19 May 2025.
“End-to-End Autonomous Driving Industry Report, 2024-2025.” ResearchInChina, Dec. 2024.
“Lightwheel-Platform Enterprise.” Lightwheel, Lightwheel Inc., 2025.
Lim, Sudo. “Deals in Brief: 81Ravens Raises Funding, Honor Secures Investment Ahead of IPO, Five Other China Deals, and More.” KrASIA, 1 Nov. 2024.
Mustafa. “Creating a Simulation Environment for Robot Training Is Hard, but Accelerating Asset Discovery Using USD Search Makes It Easier.” Lightwheel, 29 Sept. 2025.