Helix 02: Figure AI Unveils Their 'Most Powerful And Capable' Humanoid Robot
Keneci Network @kenecifeed
Figure AI, the AI robotics company, has unveiled Helix 02, described as their most powerful and capable model to date for controlling humanoid robots, the company announced Tuesday on X.
Helix 02 represents a major advancement in their AI stack, building on the earlier Helix Vision-Language-Action (VLA) model introduced in early 2025. It enables fully autonomous, end-to-end whole-body control for complex, long-horizon tasks in real-world environments.
Key highlights from the company's video demonstrations:
Full-body autonomy from pixels to actions — A single unified neural network directly controls the entire body (including locomotion, manipulation, balance, wrists, fingers, torso, and head) using inputs from all onboard sensors: vision, touch (tactile sensing), and proprioception.
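Figure has not published Helix 02's internals, but the "pixels to actions" claim maps onto a familiar pattern: one network ingests every sensor stream and emits a single action vector for the whole body. The sketch below is a minimal illustration under that assumption; the layer choices, dimensions, and the action-space size are hypothetical, not Figure's design.

```python
# Minimal sketch of a single "pixels-to-actions" policy, based only on the
# description above. All module names, dimensions, and the fusion scheme are
# illustrative assumptions; Figure has not published Helix 02's architecture.
import torch
import torch.nn as nn

class UnifiedWholeBodyPolicy(nn.Module):
    def __init__(self, tactile_dim=64, proprio_dim=128, action_dim=59):
        super().__init__()
        # Vision encoder: camera frames -> feature vector.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 256),
        )
        # Touch and proprioception encoders.
        self.tactile = nn.Sequential(nn.Linear(tactile_dim, 128), nn.ReLU())
        self.proprio = nn.Sequential(nn.Linear(proprio_dim, 128), nn.ReLU())
        # Fused trunk emits one action vector covering the whole body
        # (legs, torso, head, wrists, fingers) in a single forward pass.
        self.trunk = nn.Sequential(
            nn.Linear(256 + 128 + 128, 512), nn.ReLU(),
            nn.Linear(512, action_dim),
        )

    def forward(self, image, touch, proprio):
        z = torch.cat([self.vision(image), self.tactile(touch),
                       self.proprio(proprio)], dim=-1)
        return self.trunk(z)

# One control step: a camera frame, tactile array, and joint state in; one
# whole-body action vector out.
policy = UnifiedWholeBodyPolicy()
action = policy(torch.rand(1, 3, 96, 96), torch.rand(1, 64), torch.rand(1, 128))
print(action.shape)  # torch.Size([1, 59])
```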
Long-horizon loco-manipulation demo — The standout showcase is a robot (running on Figure 03 hardware) autonomously unloading and reloading a dishwasher in a full-sized kitchen. This 4-minute task integrates walking, grasping, placing items, opening/closing doors, and balance maintenance — all without resets or human intervention. Figure claims this is the longest and most complex autonomous task completed by a humanoid robot to date.
New architecture — Helix 02 extends their prior "System 1 + System 2" setup with a foundational System 0 layer: a learned whole-body controller trained on over 1,000 hours of human motion data plus sim-to-real reinforcement learning. This replaces over 109,000 lines of hand-engineered C++ code with a neural prior for more natural, stable motion.
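The layered structure lends itself to a simple sketch: a slow deliberative layer sets goals, a faster visuomotor policy turns them into targets, and the new whole-body controller tracks those targets at the highest rate. The class names, update rates, and interfaces below are assumptions for illustration only, not Figure's published design.

```python
# Hedged sketch of a "System 2 / System 1 / System 0" control hierarchy as
# described above. Rates and data passed between layers are hypothetical.

class System2:
    """Slow vision-language reasoning: scene + instruction -> a latent goal."""
    def plan(self, observation, instruction):
        return {"goal": f"latent({instruction})"}

class System1:
    """Fast visuomotor policy: latent goal + observation -> body targets."""
    def act(self, observation, goal):
        return {"targets": "joint/end-effector setpoints"}

class System0:
    """Learned whole-body controller (the new layer): tracks targets while
    keeping balance, standing in for a hand-engineered controller."""
    def control(self, targets, proprio):
        return {"torques": "whole-body motor commands"}

def control_loop(steps, s2_every=100, s1_every=5):
    s2, s1, s0 = System2(), System1(), System0()
    goal, targets = None, None
    for t in range(steps):
        obs, proprio = "camera+tactile", "joint state"
        if t % s2_every == 0:          # slowest layer: re-plan occasionally
            goal = s2.plan(obs, "unload the dishwasher")
        if t % s1_every == 0:          # middle layer: refresh targets often
            targets = s1.act(obs, goal)
        torques = s0.control(targets, proprio)  # every tick: highest rate
        # torques would be sent to the actuators here

control_loop(steps=20)
```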
Enhanced dexterity — Leveraging embedded tactile sensors and palm cameras on the Figure 03 robot, it achieves fine-grained manipulation previously out of reach for vision-only systems (see the sketch after this list), such as:
Extracting individual pills.
Dispensing precise syringe volumes.
Singulating (separating) small, irregular objects from cluttered environments despite self-occlusion.
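The role of touch in these examples can be illustrated with a toy decision rule: when the hand occludes its own palm camera, the system leans on fingertip pressure instead of vision. The function, sensor names, and thresholds below are hypothetical; they only sketch why tactile sensing unlocks tasks that defeat vision-only systems.

```python
# Illustrative sketch only: a grasp monitor that falls back to fingertip
# pressure when the object is occluded by the hand itself. Thresholds and
# sensor names are hypothetical, not taken from Figure's system.
import numpy as np

def grasp_is_secure(palm_image: np.ndarray, fingertip_pressure: np.ndarray,
                    occlusion_ratio: float, pressure_min: float = 0.15) -> bool:
    """Decide whether a small object (e.g. a single pill) is held securely."""
    if occlusion_ratio > 0.6:
        # Self-occlusion: the palm camera can't see the object, so trust touch.
        return bool(np.all(fingertip_pressure > pressure_min))
    # Otherwise require both a visible object and stable contact forces.
    object_visible = palm_image.mean() > 0.05   # stand-in for a real detector
    stable_contact = fingertip_pressure.std() < 0.1
    return object_visible and stable_contact

# Example: a heavily occluded grasp is judged by touch alone.
print(grasp_is_secure(np.zeros((64, 64)), np.array([0.3, 0.25, 0.2]),
                      occlusion_ratio=0.8))  # True
```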
This pushes toward household and general-purpose robotics, where homes are unpredictable: diverse objects and no fixed setups. The model runs entirely onboard at low power, enabling real-world deployment without cloud reliance.