What’s the first thing to change when machines stop “demoing” and start running for days: code, batteries, or certification paperwork?

The set of 2026 predictions from Figure CEO Brett Adcock lands squarely on that question, bundling three technology arcs that have been advancing in parallel: humanoid robots, electric vertical takeoff and landing aircraft (eVTOLs), and AI systems that remember. Read as an engineering brief rather than a sci-fi tease, the predictions share a common thread: endurance. Robots that keep operating after the cameras leave, aircraft that fit inside the regulatory machine, and agents that retain context long enough to become genuinely useful.
Adcock's most provocative claim is that humanoids will handle unsupervised, multi-day work in homes they have never seen before, using neural networks that map perception directly into control: "pixels to torques," the shorthand most often used to describe vision-to-actuation learning. The promise is clear: a general-purpose body that can make progress in the messy world without being reprogrammed for each new kitchen, each new door latch, each new set of lighting conditions. The constraint is equally clear: "multi-day" operation implies that power, stability, and fault recovery stop being secondary specs and become the product.
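To make "pixels to torques" concrete, here is a minimal sketch of the idea: a single forward pass that maps a camera frame straight to joint torques with no hand-written intermediate representation. The two-layer network, the dimensions, and the random weights are illustrative placeholders, not Figure's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

H, W, N_JOINTS, HIDDEN = 64, 64, 6, 128  # frame size, joints, hidden width (all made up)

# Random weights stand in for a trained vision-to-control policy.
W1 = rng.standard_normal((HIDDEN, H * W)) * 0.01
W2 = rng.standard_normal((N_JOINTS, HIDDEN)) * 0.01

def policy(frame: np.ndarray) -> np.ndarray:
    """Map a grayscale camera frame directly to joint torques."""
    x = frame.reshape(-1)        # flatten the pixels
    h = np.tanh(W1 @ x)          # learned feature extraction
    return W2 @ h                # learned control head: one torque per joint

frame = rng.random((H, W))       # stand-in for a camera image
tau = policy(frame)
print(tau.shape)                 # (6,)
```

The point of the sketch is the shape of the pipeline, not the network: perception and control live in one trained function, so generalizing to a new kitchen means generalizing the function, not rewriting task code.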
Most humanoid programs still live in the gap between captivating demos of autonomy and the physical realities of running a biped safely and continuously. Many platforms remain constrained by energy storage and thermal management; typical working windows are still measured in hours rather than shifts, which keeps "walk around and do things" bounded by charging logistics and careful scheduling. Manipulation is the second hard wall. Human hands have roughly 20-27 degrees of freedom, and matching the mechanics is only the start; repeatable, closed-loop control with tactile feedback is still uneven, especially on objects that are thin, reflective, deformable, or cluttered among other items. Dynamic balance adds its own tax, because actively controlled stability requires constant sensing and computation even when the robot is standing still, and a fall is a safety event, not a reset button.
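The "tax" of actively controlled stability can be seen in a toy model: a linearized inverted pendulum, which falls over unless a control loop keeps sensing and correcting it every few milliseconds. The gains, loop rate, and physical constants below are illustrative, not taken from any real humanoid.

```python
G, L, DT = 9.81, 1.0, 0.002      # gravity, pendulum length, 2 ms control loop
KP, KD = 40.0, 8.0               # hand-tuned PD gains (illustrative)

theta, omega = 0.05, 0.0         # small initial lean (rad) and lean rate
for _ in range(5000):            # 10 s of simulated "standing still"
    u = -KP * theta - KD * omega        # controller runs every single cycle
    alpha = (G / L) * theta + u         # unstable dynamics plus correction
    omega += alpha * DT
    theta += omega * DT

print(abs(theta) < 1e-3)         # True: the lean has been regulated away
```

Drop the control term `u` and the same loop diverges within a second or two, which is the engineering meaning of "a fall is a safety event": stability is something the robot computes continuously, not something it has.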
Those limitations have not stopped the field from shifting into early-scale manufacturing. UBTech's Walker line is a visible marker of that transition: the company has produced its 1,000th Walker S2, with over 500 units already deployed in real settings. That sort of number does not mean household autonomy is solved; it means supply chains, assembly, and service models are getting practice while the hardest problems (dexterity, up-time, and validated safety) remain the gating items.
Safety is the quiet determinant of where humanoids can actually work. Production environments have dealt with robot risk for decades by separating machines from people. Humanoids invert that assumption by design: they are intended to move in spaces built for humans and, eventually, beside them. Standards bodies are responding, including work on requirements for dynamically stable industrial mobile robots; the fact that humanoid-specific safety frameworks are under development signals that the industry recognizes biped failure modes as categorically different from those of fixed-base arms. Until those frameworks harden into accepted test methods and certification expectations, the most common "deployment" will continue to look like constrained pilots, fenced zones, and tightly scoped tasks.
Adcock’s aviation prediction sits on a more familiar pathway: eVTOLs moving into piloted operations consistent with FAA safety expectations, and city validation missions that exercise end-to-end operations rather than one-off flight tests. Here the critical shift is not only propulsion or aerodynamics but procedural integration: pilot qualifications, maintenance programs, and a definition of airworthiness that fits powered-lift aircraft, which behave partly like rotorcraft and partly like airplanes. The FAA has been building that bridge already. The agency’s powered-lift rules and associated planning aim to integrate advanced air mobility into the National Airspace System through performance-based requirements, including the “Innovate28” plan for operations at scale at one or more sites by 2028.
In practice, that means early services will lean on existing infrastructure, helipads, established routes, and conventional air traffic control, rather than waiting for vertiport networks purpose-built to blanket major metros. The FAA also describes how updated regulations for powered-lift operations reduce ambiguity for developers while keeping the burden of proof for safe design and operation on them.

The third prediction, AI shifting from chat to multimodal voice agents with persistent memory, may be both the fastest-moving and the easiest to underestimate. Memory is what turns a clever interface into an assistant that can manage sequences: scheduling, purchasing, follow-up, exception handling.
It is also what drags AI out of the lab and into governance questions that mechanical engineers have long recognized in a different form: data retention is a design decision with life-cycle consequences. When an agent remembers, it stores not just what was said but what it inferred, and those inferences can be wrong, sticky, and hard to audit. Voice makes that more sensitive. Audio can carry biometric and emotional signals, and operational voice agents create a data pipeline from capture to transcription, storage, and integration with other systems. A 2024 Deloitte survey noted that 40% of professionals rank data privacy as their top AI concern, and voice systems amplify the issue because they can unintentionally capture background speech or store identifying voiceprints.
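One way to make "retention is a design decision" tangible is to put the limit in the data structure itself, so stored inferences expire instead of accumulating forever. The class below is a hedged sketch of that idea; the names and the TTL value are illustrative, not any vendor's API.

```python
# Sketch: agent memory with an explicit retention limit (TTL).
# All names and numbers are illustrative, not a real product's API.
class AgentMemory:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._items = []             # list of (timestamp, text) pairs

    def remember(self, text: str, now: float) -> None:
        self._items.append((now, text))

    def recall(self, now: float) -> list[str]:
        """Return only memories younger than the retention limit,
        discarding expired ones as a side effect."""
        self._items = [(t, s) for t, s in self._items
                       if now - t < self.ttl]
        return [s for _, s in self._items]

mem = AgentMemory(ttl_seconds=60.0)
mem.remember("user prefers morning flights", now=0.0)
mem.remember("user asked about refunds", now=50.0)

print(mem.recall(now=70.0))          # the first memory has aged out
```

A production system would layer encryption, access control, and redaction on top, but the design choice shown here, expiry enforced at read time rather than by a best-effort cleanup job, is what makes a retention promise auditable.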
For organizations deploying memory-rich voice agents, the technical checklist (encryption, role-based access, redaction, retention limits) becomes as central to product quality as latency or word error rate.

Adcock also pointed to another sort of sensing system: weapon detection at 20-foot standoff, beta-tested in K–12 schools. Whatever the application domain, though, the engineering pattern is the same as in the rest of these predictions: remote sensing plus automated interpretation plus operational acceptance. That combination works only when false positives and false negatives are bounded, the interfaces are legible to non-experts, and the surrounding procedures are designed to prevent the technology from becoming an uncalibrated authority.
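"Bounding false positives and false negatives" usually cashes out as acceptance gates on precision and recall measured during pilots. The sketch below shows the arithmetic; every count and threshold is made up for illustration.

```python
# Acceptance gate for a detection system, from pilot-test counts.
# tp = correct alerts, fp = false alarms, fn = missed real events.
def acceptance(tp: int, fp: int, fn: int,
               min_precision: float, min_recall: float) -> bool:
    precision = tp / (tp + fp)   # how often an alert is real
    recall = tp / (tp + fn)      # how often a real event triggers an alert
    return precision >= min_precision and recall >= min_recall

# Hypothetical pilot: 95 true alerts, 5 false alarms, 2 missed events.
print(acceptance(tp=95, fp=5, fn=2,
                 min_precision=0.90, min_recall=0.95))   # True
```

The thresholds are a policy decision, not a modeling one: a school deployment might tolerate a lower precision floor (more false alarms) in exchange for a higher recall floor, and making those numbers explicit is what keeps the system from becoming an uncalibrated authority.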
Taken together, the 2026 forecast reads less like a single breakthrough moment and more like a convergence of maturity tests: humanoids are being asked to demonstrate continuity, not choreography; eVTOLs to demonstrate integration, not airtime; and AI agents to demonstrate stewardship of memory, not conversational flair. If any of these domains falters, it will not be because the demos looked unimpressive. It will be because endurance, whether electrical, mechanical, regulatory, or informational, turns out to be the hardest requirement to ship.
