Inspired by the recent success of LLMs, the field of human motion understanding has increasingly shifted toward developing large motion models. Despite some progress, current efforts remain far from achieving truly generalist models, primarily due to the lack of massive, high-quality data. To address this gap, we present MotionLib, the first million-level dataset for motion generation, which is at least 15× larger than existing counterparts and enriched with hierarchical text descriptions. Using MotionLib, we train a large motion model named Being-M0, which demonstrates robust performance across a wide range of human activities, including unseen ones. Through systematic investigation, we highlight for the first time the importance of scaling both data and model size for advancing motion generation, and offer key insights for achieving this goal. To better integrate the motion modality, we propose MotionBook, an innovative motion encoding approach comprising (1) a compact yet lossless feature representation for motion and (2) a novel 2D lookup-free motion tokenizer that preserves fine-grained motion details while expanding codebook capacity, significantly enhancing the representational power of motion tokens. We believe this work lays the groundwork for developing more versatile and powerful motion generation models in the future.
In this paper, we explore how to advance the field of large motion models. To this end, we introduce MotionLib, the first million-level dataset, comprising over 1.2 million motions with hierarchical text descriptions. Building on MotionLib, we present key insights into scaling both data and model size for large-scale motion training. Furthermore, we propose MotionBook, a novel motion encoding approach designed to maximize the benefits of training on extensive motion data. MotionBook incorporates compact yet lossless features to represent motion, and introduces a novel 2D lookup-free quantization (2D-LFQ) method that treats each motion sequence as a 2D image and constructs a finite-scale codebook that eliminates the need for token lookups. Leveraging these advancements, we train Being-M0, a large motion model that achieves state-of-the-art results compared with existing counterparts.
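To make the lookup-free idea concrete, below is a minimal sketch of 2D lookup-free quantization applied to a motion latent grid. Everything here (the `lfq_quantize` helper, the tensor shapes, and the sign-based binary code) is an illustrative assumption rather than the released MotionBook implementation; it only shows how an implicit codebook of size 2^D can replace a learned embedding table and its nearest-neighbour lookup.

```python
import torch

def lfq_quantize(z: torch.Tensor):
    """Binarize each latent channel to {-1, +1} by its sign.

    z: (B, D, H, W) latents from a hypothetical 2D motion encoder,
       where the two spatial axes index time and joint features.
    Returns the quantized grid and integer token ids in [0, 2**D).
    """
    ones = torch.ones_like(z)
    q = torch.where(z > 0, ones, -ones)                 # (B, D, H, W)
    # Straight-through estimator: gradients pass through z unchanged.
    q = z + (q - z).detach()
    # The codebook is implicit: each channel contributes one bit, so a
    # D-dim latent indexes one of 2**D codes with no embedding lookup.
    bits = (q > 0).long()                               # (B, D, H, W)
    powers = 2 ** torch.arange(z.shape[1], device=z.device)
    ids = (bits * powers.view(1, -1, 1, 1)).sum(dim=1)  # (B, H, W)
    return q, ids

# Example: D=10 latent channels give an implicit codebook of 2**10 = 1024.
z = torch.randn(2, 10, 16, 4, requires_grad=True)
q, ids = lfq_quantize(z)
print(q.shape, ids.shape, int(ids.max()))
```

Because the code index is just the concatenation of sign bits, codebook capacity grows exponentially with the latent dimension at no lookup cost, which is what allows the tokenizer to scale its vocabulary alongside the training data.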
@inproceedings{wang2025scaling,
  title={Scaling Motion Generation Model with Million-Level Human Motions},
  author={Wang, Ye and Zheng, Sipeng and Cao, Bin and Wei, Qianshan and Zeng, Weishuai and Jin, Qin and Lu, Zongqing},
  booktitle={International Conference on Machine Learning (ICML)},
  year={2025}
}