Being-0: A Humanoid Robotic Agent with Vision-Language Models and Modular Skills


PKU     BAAI     Being

Abstract

Building autonomous robotic agents capable of achieving human-level performance in real-world embodied tasks is an ultimate goal in humanoid robot research. Recent work has made significant progress both in high-level cognition with Foundation Models (FMs) and in low-level skill development for humanoid robots. However, directly combining these components often results in poor robustness and efficiency due to compounding errors in long-horizon tasks and the varied latency of different modules. We introduce Being-0, a hierarchical agent framework that integrates an FM with a modular skill library. The FM handles high-level cognitive tasks such as instruction understanding, task planning, and reasoning, while the skill library provides stable locomotion and dexterous manipulation for low-level control. To bridge the gap between these levels, we propose a novel Connector module, powered by a lightweight vision-language model (VLM). The Connector enhances the FM’s embodied capabilities by translating language-based plans into actionable skill commands and dynamically coordinating locomotion and manipulation to improve task success. With all components except the FM deployable on low-cost onboard computation devices, Being-0 achieves efficient, real-time performance on a full-sized humanoid robot equipped with dexterous hands and active vision. Extensive experiments in large indoor environments demonstrate Being-0’s effectiveness in solving complex, long-horizon tasks that require challenging navigation and manipulation subtasks.

Being-0



[Overview of the Being-0 framework] Our humanoid agent framework, Being-0, comprises three key components:
  • the Foundation Model (FM) for high-level task planning and reasoning;
  • the Connector, a vision-language model (VLM) that bridges the FM and low-level skills;
  • the Modular Skill Library for robust locomotion and dexterous manipulation.
Being-0 is a hierarchical agent framework for humanoid robots in which each component is deployed where it runs best, on the cloud or on onboard devices, and the Connector module bridges the gap between the FM's language-based task plans and the execution of low-level skills. Being-0 controls humanoid robots with multi-fingered dexterous hands and active cameras, improving their dexterity in both navigation and manipulation, and solves complex, long-horizon embodied tasks in the real world.
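To make the hierarchy concrete, below is a minimal sketch of how the three components could interact in a control loop. All class and method names (FoundationModel.plan, Connector.to_skill_command, SkillLibrary.execute, and so on) are illustrative placeholders for this sketch, not the released Being-0 interfaces.

# Minimal, illustrative sketch of the Being-0 hierarchy
# (hypothetical interfaces, not the released API).

from dataclasses import dataclass

@dataclass
class SkillCommand:
    name: str    # e.g., "navigate_to", "grasp", "place"
    args: dict   # skill-specific parameters

class FoundationModel:
    """Cloud-hosted FM: instruction understanding, task planning, reasoning."""
    def plan(self, instruction: str, feedback: str) -> list[str]:
        ...  # returns a language-based plan, e.g., ["walk to the coffee machine", ...]

class Connector:
    """Lightweight onboard VLM bridging language plans and executable skills."""
    def to_skill_command(self, step: str, image) -> SkillCommand:
        ...  # grounds a language step in the current camera view
    def step_done(self, step: str, image) -> bool:
        ...  # visually checks whether the current step is complete

class SkillLibrary:
    """Onboard controllers for locomotion and dexterous manipulation."""
    def execute(self, cmd: SkillCommand) -> bool:
        ...

def run_task(instruction: str, fm: FoundationModel, connector: Connector,
             skills: SkillLibrary, camera) -> None:
    feedback = "task start"
    while True:
        # High-level (cloud, queried only at decision points): plan in language.
        plan = fm.plan(instruction, feedback)
        if not plan:
            return  # the FM decides the task is finished
        feedback = ""
        for step in plan:
            # Mid-level (onboard, frequent): ground each step and monitor it.
            while not connector.step_done(step, camera.read()):
                cmd = connector.to_skill_command(step, camera.read())
                if not skills.execute(cmd):      # low-level skill execution
                    feedback = "skill failed during: " + step
                    break
            if feedback:
                break  # escalate the failure to the FM for re-planning
        if not feedback:
            feedback = "plan completed"

The point this sketch tries to reflect is that the expensive, cloud-hosted FM is consulted only at subtask boundaries or on failures, while the lightweight Connector and the skill library run frequently on the onboard device.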






[Workflow of Being-0 for the task "make a cup of coffee"] This figure illustrates the step-by-step execution of the task, with images arranged in two rows. The execution order proceeds left to right in the first row, then continues left to right in the second row. Images with yellow borders indicate decision-making points for the Foundation Model (FM). The yellow dialog boxes display the FM's plans, the green boxes show decisions made by the Connector, and the blue boxes represent the skills called from the modular skill library.

Experiments

Being-0 vs. w/o Connector

[Being-0 vs. w/o Connector] A comparison of Being-0 w/o Connector and Being-0 in the long-horizon task "Prepare-coffee." The first row shows recordings of Being-0 without the Connector, while the second row shows recordings of Being-0 with the Connector. Being-0 w/o Connector frequently queries the FM, which often fails to provide correct plans due to its limited embodied scene understanding. In contrast, Being-0 with the Connector completes the task, requiring only a few queries to the FM.




Being-0 vs. w/o Adjustment

[Being-0 vs. w/o Adjustment] A comparison of Being-0 with and without the adjustment method in two-stage tasks involving navigation and manipulation. Each row corresponds to a specific task, with the left three images showing results for Being-0 w/o Adjustment and the right three images showing results for Being-0. Without adjustment, the agent may terminate navigation in an improper pose, leading to failed manipulations.
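Purely as an illustration of the idea, the sketch below shows the general shape such an adjustment step could take: after navigation ends, a Connector-style module checks whether the target object sits within a graspable region of the robot's view and, if not, issues small locomotion corrections before manipulation begins. The locate, turn, and step helpers, as well as all numeric thresholds, are assumptions for this sketch, not values from the paper.

# Hypothetical pre-manipulation adjustment (not the paper's exact method).
# Assumes: connector.locate(target, image) -> (lateral_offset_m, distance_m,
# bearing_rad) in the robot frame; skills.turn(angle) and skills.step(dx, dy)
# issue small locomotion corrections. All thresholds are illustrative.

def adjust_before_manipulation(connector, skills, camera, target,
                               max_attempts=5):
    """Nudge the robot until the target is in a pose suitable for grasping."""
    for _ in range(max_attempts):
        offset, distance, bearing = connector.locate(target, camera.read())

        # Accept the pose when the target is roughly centered, within
        # arm's reach, and the torso is facing it.
        if abs(offset) < 0.05 and 0.30 < distance < 0.60 and abs(bearing) < 0.10:
            return True  # hand over to the manipulation skill

        # Otherwise correct heading first, then position, and re-check.
        if abs(bearing) >= 0.10:
            skills.turn(angle=bearing)
        else:
            skills.step(dx=distance - 0.45, dy=-offset)
    return False  # report failure so the FM can re-plan

Reporting failure instead of attempting a grasp from a bad pose mirrors the failure mode in the figure, where skipping this check leads to failed manipulations.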




Active Camera vs. Fixed Camera

[Active Camera vs. Fixed Camera] Recordings from the ablation study on the active camera. Each row represents a different camera configuration, with the left three images depicting the navigation task and the right three images depicting the manipulation task. Only Being-0 with an active camera achieves robust performance in both navigation and manipulation.

Conclusion

In this work, we introduced Being-0, a hierarchical agent framework for humanoid robots, designed to control a humanoid equipped with dexterous hands and active vision to solve long-horizon embodied tasks. The novel VLM-based Connector module effectively bridges the gap between the high-level Foundation Model and low-level skills, significantly enhancing the performance and efficiency of the humanoid agent. Extensive real-world experiments demonstrate Being-0's strong capabilities in navigation, manipulation, and long-horizon task-solving. The results highlight the effectiveness of the proposed Connector, the adjustment method for coordinating navigation and manipulation, and the use of active vision.

Despite these advancements, the current system does not incorporate complex locomotion skills such as crouching, sitting, or jumping. These skills could extend the humanoid's functionality beyond flat-ground settings, enabling tasks like climbing stairs, working from seated positions, or manipulating objects at varying heights. Enhancing these capabilities will be an important direction for future work. Additionally, while the onboard system is efficient, Being-0 still relies on the slow Foundation Model for high-level decision-making. Future research could explore lightweight Foundation Models tailored for robotics applications to further improve the system's efficiency.

BibTeX

@article{yuan2025being,
title={Being-0: A Humanoid Robotic Agent with Vision-Language Models and Modular Skills},
author={Yuan, Haoqi and Bai, Yu and Fu, Yuhui and Zhou, Bohan and Feng, Yicheng and Xu, Xinrun and Zhan, Yi and Karlsson, B{\"o}rje F and Lu, Zongqing},
journal={arXiv preprint arXiv:2503.12533},
year={2025}
}