What does it take for a humanoid robot to do a job that is boring for humans, unforgiving for equipment, and expensive to get wrong? Boston Dynamics has been addressing that question by taking its latest robot, Atlas, out of the world of staged demos and into controlled industrial practice. At Hyundai Motor Group’s new Georgia factory, a 5-foot-9-inch, 200-pound Atlas has been put through its paces on a real-world assignment: autonomously sorting roof racks destined for an assembly line. The task choice is revealing. It is repetitive, the parts are heavy and awkward enough to challenge whole-body balance, and the work sits close to the rhythms of production logistics – just the kind of environment in which a robot earns its place only by behaving predictably, shift after shift.

The more significant shift is not the job itself, but the way Atlas is being prepared to do it. Scott Kuindersma, head of robotics research at Boston Dynamics, said the current approach is less about hand-coding behaviors and more about instruction through demonstration and learning. “A lot of this has to do with how we’re going about programming these robots now, where it’s more about teaching, and demonstrations, and machine learning than manual programming.” That reframing turns a humanoid from a one-off engineering project into something closer to a deployable platform – one where skills can be captured, honed, and distributed.
One of the clearest examples is supervised learning via teleoperation. At Boston Dynamics, machine learning scientist Kevin Bergamin used a VR headset to control Atlas directly, guiding its hands and arms through a task sequence. The objective isn’t remote operation as the endpoint, but data capture as fuel. Kuindersma summarized the payoff: “That generates data that we can use to train the robot’s AI models to then later do that task autonomously.” In manufacturing terms, this is a new kind of work instruction: instead of writing code for a PLC or specifying a fixed motion path, an operator “shows” a behavior while sensors record a high-dimensional trace of intent, contact, and correction.
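To make the “data capture as fuel” idea concrete, here is a minimal sketch of how a teleoperated demonstration might be logged as timestamped (observation, action) pairs for later imitation learning. The class names, field names, and sensor channels are illustrative assumptions, not Boston Dynamics’ actual data format.

```python
# Hypothetical sketch: logging a teleoperated demonstration as
# (observation, action) pairs for imitation learning. All names and
# fields are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Step:
    joint_positions: List[float]   # proprioception at this timestep
    wrist_forces: List[float]      # contact/force readings
    operator_command: List[float]  # what the VR operator asked the arms to do

@dataclass
class Demonstration:
    task: str
    steps: List[Step] = field(default_factory=list)

    def record(self, joints, forces, command):
        """Append one tick of sensed state plus operator intent."""
        self.steps.append(Step(joints, forces, command))

# An operator "shows" the behavior; each control tick stores state and intent.
demo = Demonstration(task="sort_roof_rack")
for t in range(3):  # stand-in for a real control loop
    demo.record(joints=[0.1 * t] * 7, forces=[0.0] * 6, command=[0.2 * t] * 7)

print(len(demo.steps))  # → 3 logged (observation, action) pairs
```

The point of the structure is that nothing here is a motion path: the log pairs what the robot sensed with what the operator intended, which is exactly the supervision signal an imitation-learning model needs.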
Demonstration alone doesn’t scale, and that’s where simulation becomes the other half of the pipeline. A motion-capture suit (worn at one point by a television correspondent doing jumping jacks) produced motion data that could be retargeted to Atlas’ different body. That same skill was then stress-tested in a training environment filled with more than 4,000 digital Atlases learning in simulation for six hours. The simulated variations matter more than the stunt: slippery floors, inclines, and stiff joints are proxies for what factories deliver in the real world – variable friction, imperfect pallets, component tolerances, and aging hardware. This “train broadly, then transfer” approach is in line with modern robotics simulation stacks that increasingly emphasize large-scale evaluation and domain randomization, such as the recent Isaac Lab 2.3 release, which expands teleoperation and whole-body control capabilities for humanoids.
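The domain-randomization idea behind those 4,000 digital Atlases can be sketched in a few lines: each simulated instance trains under randomly perturbed physics so the learned skill does not overfit to one idealized floor. The parameter names and ranges below are assumptions chosen to mirror the article’s examples (friction, inclines, joint stiffness), not real simulator settings.

```python
# Illustrative sketch of domain randomization: each simulated robot
# trains under randomly perturbed physics so the skill transfers to
# real-world variation. Parameter names and ranges are assumptions.
import random

def sample_environment(seed: int) -> dict:
    rng = random.Random(seed)
    return {
        "floor_friction": rng.uniform(0.3, 1.0),          # slippery to grippy
        "incline_deg": rng.uniform(-5.0, 5.0),            # tilted floors
        "joint_stiffness_scale": rng.uniform(0.8, 1.2),   # aging hardware
    }

# Thousands of environments, each a different proxy for factory messiness.
envs = [sample_environment(seed) for seed in range(4000)]
print(len(envs))  # → 4000
```

A policy that succeeds across all of these perturbed worlds has, in effect, been forced to learn the task rather than the simulator.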
Atlas’ compute and software partnerships show how far humanoids have come from being little more than systems of moving actuators. Boston Dynamics has outlined development work that integrates the Nvidia Jetson Thor computing platform with simulation-based robot learning workflows to speed up training of dexterity and locomotion policies. In this architecture, the robot’s controllers still perform the hard real-time work of balance and manipulation, but they increasingly sit beneath learned policies and perception systems that interpret scenes, decide actions, and adapt to variation.
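That layering can be sketched as two functions: a learned policy decides *what* to do from scene observations, while a real-time controller decides *how* to do it stably. Everything here, from function names to the proportional gain, is an illustrative simplification, not Atlas’ actual control stack.

```python
# Simplified sketch of a layered control architecture: a learned policy
# proposes a task-level action; a real-time controller turns it into
# joint-level commands. All names and values are illustrative.

def learned_policy(observation: dict) -> dict:
    """Stand-in for a trained model: maps perception to a task-level goal,
    e.g. 'move the gripper toward the detected roof rack'."""
    return {"gripper_target": observation["part_position"]}

def realtime_controller(action: dict, state: dict) -> list:
    """Stand-in for the balance/manipulation layer: tracks the task-level
    goal with a simple proportional law for illustration."""
    error = [t - s for t, s in zip(action["gripper_target"], state["gripper_pos"])]
    gain = 0.5
    return [gain * e for e in error]

obs = {"part_position": [1.0, 0.0, 0.5]}
state = {"gripper_pos": [0.0, 0.0, 0.5]}
command = realtime_controller(learned_policy(obs), state)
print(command)  # → [0.5, 0.0, 0.0]
```

The design point is the interface: swapping in a better-trained policy improves behavior without touching the safety-critical tracking layer beneath it.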
That adaptation is also why humanoids are being discussed as “general purpose” even though they remain narrow in practice. Boston Dynamics CEO Robert Playter framed industry interest around building robots “smart enough to really become general purpose.” At the same time, Kuindersma’s caveat is not subtle: Atlas is not yet good at ordinary human routines like putting on clothes or pouring coffee. “There are no humanoids that do that nearly as well as a person,” he said. In other words, factory relevance is arriving before household competence because factories can constrain the tasks, parts, lighting, and work envelopes. The Georgia roof-rack sorting trial fits that logic: it is a human job, but one that can be structured.
Behind every technical achievement stands economics. Goldman Sachs has estimated that the humanoid market could reach $38 billion before the decade ends, and its recent estimates note that component costs have dropped dramatically – per-unit manufacturing costs falling from $50,000–$250,000 to $30,000–$150,000, a faster-than-expected decline. Those ranges continue to define where early deployments can land: big manufacturers, high utilization, and tasks where flexibility beats dedicated automation. The strongest near-term demand remains in “structured environments like manufacturing,” with examples including component sorting – an exact match for the kind of work Atlas has been practicing.
Factory deployment also imposes a different kind of maturity: safety is a system property, not a demo feature. Industrial robotics is already steeped in the standards world, and the 2025 revision of the industrial robot safety standard, ANSI/A3 R15.06-2025, expands explicit functional safety requirements, updates end-effector guidance, and addresses cybersecurity. Humanoids complicate that picture because they blur traditional assumptions about robot cells. A stationary arm behind guarding is one category of risk; a mobile, whole-body machine working near people and carts introduces new failure modes – falls, collision geometries, and unexpected contact during recovery maneuvers. The practical implication for integrators is that task-based risk assessment and safeguarded-space design are inextricably linked with “training the AI,” since the behaviors being learned must be bounded by safety functions that remain valid across edge cases.
The labor question tends to overshadow these engineering realities, but Boston Dynamics’ own framing is operational. Playter has argued that robots are good for “repetitive” and “backbreaking” work, while emphasizing that they still need people around them: “But these robots are not so autonomous that they don’t need to be managed. They need to be built. They need to be trained. They need to be serviced.” That description maps to what factories already know: automation is often a labor shifter, not a labor eliminator, converting work into supervision, maintenance, process engineering, and quality control.
In that sense, Atlas sorting roof racks is less about a humanoid replacing a person and more about a factory testing a new interface between software and physical work. When training can be captured through teleoperation, expanded through simulation, and deployed across identical machines, the “unit of improvement” becomes a skill update rather than a mechanical redesign. The remaining challenge is not whether Atlas can perform a headline-worthy movement, but whether the learned behavior stays stable when the floor changes, the part shifts, the cycle time tightens, and safety constraints stay non-negotiable.
