Two new papers in Science Robotics, published on August 27 by UC Berkeley roboticist Ken Goldberg, argue that truly capable humanoid robots remain much further off than the most optimistic timelines suggest. While large language models (LLMs) surged into everyday use by training on vast stores of internet text, Goldberg says robots can’t ride the same data wave to master real-world skills quickly.
He estimates a “100,000-year data gap” between the text used to train LLMs and the kind of rich, embodied data robots would need to gain dexterity and reliability in homes, factories, or hospitals. That gap, he says, makes near-term claims premature, such as predictions that robots will outshine human surgeons within five years, and risks fueling a bubble that could backfire on the field.
Why language raced ahead of dexterity
Goldberg’s core contention is that physical manipulation remains the hardest bottleneck. Simple human tasks, such as picking up a wine glass or changing a light bulb, demand precise perception of object pose, fingertip placement, contact control, and continuous feedback. Humans do this effortlessly. Robots do not.
This mismatch echoes Moravec’s paradox, Goldberg says. What’s easy for humans can be profoundly hard for machines. Proposals to harvest training data from online videos fall short, the roboticist argues, because 2D footage rarely reveals the exact 3D motions, forces, and contacts a robot must replicate.
Simulation helps with dynamic feats like running or acrobatics, but it has not translated into the kind of fine, useful handwork done by construction workers, plumbers, electricians, kitchen staff, or factory technicians. Teleoperation, in which people “puppet” robots, does generate the right kind of data, yet it scales linearly: eight hours of work yield only eight hours of data, a trickle against the torrent that powered language models.
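To make the scale of that mismatch concrete, here is a rough back-of-envelope calculation in Python. Only the 100,000-year figure and the one-to-one scaling of teleoperation come from the article; the fleet sizes and eight-hour shifts are illustrative assumptions, not numbers from Goldberg’s papers.

```python
# Back-of-envelope: how long would linear teleoperation take to close a
# "100,000-year" embodied-data gap? Fleet sizes and shift lengths below
# are illustrative assumptions, not figures from Goldberg's papers.

HOURS_PER_YEAR = 24 * 365
GAP_HOURS = 100_000 * HOURS_PER_YEAR  # the gap, expressed in robot-hours

for fleet_size in (100, 10_000, 1_000_000):
    # Linear scaling: each teleoperated robot logs one 8-hour shift per
    # day, and every hour of work yields exactly one hour of data.
    hours_collected_per_year = fleet_size * 8 * 365
    years_needed = GAP_HOURS / hours_collected_per_year
    print(f"{fleet_size:>9,} robots: ~{years_needed:,.1f} years of collection")
```

Even granting the generous assumption that every teleoperated hour counts fully toward the gap, only fleets in the millions close it on useful timescales, which is the force of the linear-scaling objection.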
Data vs. “good old-fashioned engineering”
Prof. Goldberg describes an active paradigm debate in robotics. One camp argues that more data alone will unlock general-purpose humanoids. The other, which he calls “good old-fashioned engineering,” leans on physics, math, and explicit world modeling.
Goldberg sees a pragmatic middle path: use engineering to make robots reliable enough at limited tasks that people will actually deploy them, then let those deployed robots collect the real-world data needed to improve.
He points to examples already following this “bootstrap” logic. Waymo’s autonomous vehicles gather data continuously as they operate, improving performance over time. Ambi Robotics uses the same pattern for warehouse parcel sorting: deploy, collect, refine, repeat. In Goldberg’s view, the practical route to better robots is not a data deluge all at once, but iteratively earned data from systems that already do something useful.
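The deploy-collect-refine cycle can be summarized as a simple loop. The sketch below is purely schematic: every function and data structure is a hypothetical stand-in, and no real Waymo or Ambi Robotics API is implied.

```python
# Schematic of the "bootstrap" pattern: field a robot that is already
# useful, log data as a side effect of real work, retrain, redeploy.
# All names are hypothetical placeholders for illustration only.

def deploy(policy, fleet_size=50):
    """Stand-in for fielding a fleet of robots running `policy`."""
    return [policy] * fleet_size

def collect_episodes(fleet):
    """Stand-in for logging one real-world episode per fielded robot."""
    return [{"observations": [], "actions": []} for _ in fleet]

def retrain(policy, episodes):
    """Stand-in for improving the policy on newly collected data."""
    return {"version": policy["version"] + 1,
            "episodes_seen": policy["episodes_seen"] + len(episodes)}

policy = {"version": 0, "episodes_seen": 0}
for round_number in range(5):              # deploy, collect, refine, repeat
    fleet = deploy(policy)                 # engineering makes it useful today
    episodes = collect_episodes(fleet)     # data accrues from doing real work
    policy = retrain(policy, episodes)     # next round fields a better policy

print(policy)  # {'version': 5, 'episodes_seen': 250}
```

The design point is that data collection becomes a byproduct of deployment rather than a precondition for it, which is what distinguishes this path from waiting on an all-at-once data deluge.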
Jobs and timelines: what changes first
On timelines, the professor pushes back against confident predictions of rapid humanoid parity. He does not see sweeping breakthroughs in the next two, five, or even ten years. As for work, he argues that blue-collar trades have been “very safe” for a long time because they demand precisely the dexterous manipulation and adaptation that robots still lack.
More vulnerable in the near term are routine, form-filling tasks and certain white-collar workflows that language-based systems can streamline. He flags customer service and clinical communication as areas where human empathy still matters: an automated agent can’t credibly say, “I know how you feel,” and few people want a robot to be the one delivering devastating medical news, even if it can read an image.
Smarter bodies, not just smarter brains
The bottleneck isn’t only data or algorithms. Despite eye-catching demos from companies like Boston Dynamics, Figure, and Tesla, many humanoids remain constrained by their bodies. Sony recently called for research partnerships that focus on richer joint designs and more flexible structures, noting that many machines move in rigid and unnatural ways.
Hamed Rajabi, who directs the Mechanical Intelligence Research Group at London South Bank University, makes a similar case. In an article for The Conversation, he says that today’s robots often burn substantial energy and rely on constant computational corrections because their hardware lacks “mechanical intelligence.” Reports of humanoids overheating or crashing in competition settings, and cases where systems consume more power than humans for simple walking, highlight the point. The next leap likely requires more adaptive, biologically inspired bodies as much as smarter control software.
Goldberg’s message is not pessimistic but corrective. Progress is real, yet uneven, and the hardest problems sit at the intersection of data, control, and physical design. Managing expectations now, he argues, will protect the field from hype cycles and keep attention on the unglamorous work of engineering, experimentation, and steady deployment that actually moves robots from lab demos to dependable tools.
Read the full interview at UC Berkeley News.