Mobileye’s $900 million agreement to acquire humanoid-robotics startup Mentee Robotics is less about novelty than about leverage: moving an autonomy stack built for cars into a broader category of machines that must see, decide, and act safely in human spaces. The bet hinges on whether “physical AI” can be productized with the same discipline the automotive supply chain has demanded from driver-assistance systems.

The structure of the deal underscores that intent. Mobileye plans to pay about $612 million in cash and issue up to 26.2 million Mobileye shares, while keeping Mentee as an independent unit inside the company. Both businesses trace back to Prof. Amnon Shashua, and Mobileye said he did not participate in approving the transaction.
For Mobileye, the pivot arrives after a period of trimming. In December, the company cut about 200 employees, roughly 5% of its global workforce, as it shut down some units in response to softer demand for certain products and weaker revenue. Folding in a new robotics group adds cost and complexity, but also a narrative reset: a second platform that can reuse core perception, mapping, verification, and edge-compute engineering.
Shashua framed that commonality at CES, describing how autonomous driving differs from digital-only AI because “the AI is in the decision-making in the real world,” and adding: “Mobileye wants to expand its scope to all aspects of physical AI because there are a lot of synergies in terms of the technology layers.” The same reliability burden shows up in humanoids, where locomotion, manipulation, and real-time planning must work amid constant variation in layouts, lighting, objects, and human behavior.
Mentee’s approach is built around “mentoring” rather than remote operation: a person demonstrates a task, and the robot learns it through visual imitation and inferred intent. The company has described its platform as trained heavily in simulation to reduce the cost and risk of learning in the real world, a direction aligned with broader industrial thinking that success in physical AI depends on simulation, edge intelligence, and digital representations of processes. Citi Research has argued that if industrial robots were to displace 30% of manufacturing tasks over the next decade, the installed base could approach almost 30 million units, implying a steep ramp in deployment and support infrastructure.
The hardware target is also telling. Mentee’s humanoid is described as 1.76 meters tall, able to carry up to 25 kilograms, and designed for both warehouses and domestic environments. That payload number matches the kind of pragmatic, tote-and-bin material handling many humanoid developers cite as an early commercial fit, because the work is repetitive but still variable enough to frustrate fixed automation.
At the same time, the hard problems remain stubbornly physical. Roboticist Ayanna Howard has highlighted that the real world is “inherently dynamic,” and warned that large AI models often operate in “human time,” where a second or two of delay is tolerable for chat but dangerous for a walking machine. She also flagged overtrust risks with embodied systems: “With physical embodiments of AI, this behavioral overtrust becomes dangerous because these robots apply physical forces in the environment.”
Mobileye and Mentee have pointed to initial commercial use in fulfillment settings in 2028, with home scenarios expected later. Whether or not that schedule holds, the immediate engineering implication is clear: humanoids will be judged less by demos and more by uptime, fleet management, verification, and safe behavior that can be audited—requirements that resemble automotive autonomy more than consumer electronics.
