RL from Physical Feedback: Aligning Large Motion Models with Humanoid Control


PKU     BeingBeyond     WHU

From Text to Humanoid Control

Abstract

This paper focuses on a critical challenge in robotics: translating text-driven human motions into executable actions for humanoid robots, enabling efficient and cost-effective learning of new behaviors. While existing text-to-motion generation methods achieve semantic alignment between language and motion, they often produce kinematically or physically infeasible motions unsuitable for real-world deployment. To bridge this sim-to-real gap, we propose Reinforcement Learning from Physical Feedback (RLPF), a novel framework that integrates physics-aware motion evaluation with text-conditioned motion generation. RLPF employs a motion tracking policy to assess feasibility in a physics simulator, generating rewards for fine-tuning the motion generator. Furthermore, RLPF introduces an alignment verification module to preserve semantic fidelity to text instructions. This joint optimization ensures both physical plausibility and instruction alignment. Extensive experiments show that RLPF substantially outperforms baseline methods in generating physically feasible motions while maintaining semantic correspondence with text instructions, enabling successful deployment on real humanoid robots.
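As a rough reading of the joint optimization described above (the reward composition and weighting below are our assumption, not stated on this page), fine-tuning can be viewed as maximizing

\[
\mathbb{E}_{m \sim \pi_\theta(\cdot \mid t)}\!\left[\, R_{\text{track}}(m) + \lambda\, R_{\text{align}}(m, t) \,\right],
\]

where \(\pi_\theta\) is the text-conditioned motion generator, \(R_{\text{track}}\) the tracking reward obtained from the physics simulator, \(R_{\text{align}}\) the alignment-verification score, and \(\lambda\) a hypothetical trade-off weight.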

RLPF



[Overview of the RLPF framework] Our training framework, RLPF, consists of three key components (a minimal training-loop sketch follows this list):
  • Motion Tracking Policy, which is pretrained to provide a motion tracking reward for evaluating generated motions;
  • Alignment Verification Module, which enhances text-motion semantic consistency while preserving physical feasibility;
  • RL Optimization Framework, which jointly optimizes the physical feasibility and semantic alignment of motions generated by the large motion model.
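
The Python sketch below illustrates how these components might interact in a single fine-tuning step. It is a minimal illustration under our own assumptions: the interfaces generator, tracking_policy, alignment_module, and simulator, the REINFORCE-style update, and the align_weight trade-off are placeholders, not the released RLPF implementation.

import torch


def rlpf_step(generator, tracking_policy, alignment_module, simulator,
              text_batch, optimizer, align_weight=0.5):
    """One illustrative RL fine-tuning step for the text-to-motion generator."""
    # 1) Sample candidate motions (and their log-probabilities) from the generator.
    motions, log_probs = generator.sample(text_batch)

    with torch.no_grad():
        # 2) Physical feedback: roll out the pretrained tracking policy in the
        #    simulator and score how well each generated motion can be tracked.
        track_reward = simulator.rollout(tracking_policy, motions)

        # 3) Semantic feedback: score text-motion consistency.
        align_reward = alignment_module.score(text_batch, motions)

        # 4) Combine both rewards into a single scalar signal per motion.
        reward = track_reward + align_weight * align_reward
        advantage = reward - reward.mean()  # simple mean baseline

    # 5) Policy-gradient step on the motion generator.
    loss = -(advantage * log_probs).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

The key point of this arrangement is that the reward signal comes from physics-simulator rollouts of a pretrained tracking policy, complemented by the alignment score, rather than from a purely learned reward model.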

Experiments

We evaluate our model on the CMU and AMASS benchmarks using both high-level generation metrics and low-level tracking metrics. RLPF significantly improves the physical feasibility of generated motions while maintaining semantic alignment.
[Tracking evaluation on the CMU benchmark] RLPF significantly enhances the physical feasibility of generated motions compared to baseline methods on the CMU benchmark.
[Tracking evaluation on the AMASS benchmark] RLPF significantly enhances the physical feasibility of generated motions compared to baseline methods on the AMASS benchmark.
[Generation evaluation on AMASS benchmark] RLPF maintains the semantic alignment of generated motions while improving physical feasibility.
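
For intuition, a low-level tracking metric of the kind reported above could be computed as in the sketch below; the array shapes, the mean joint-position error, and the 0.5 m success threshold are our illustrative assumptions, not the exact metric definitions used in these benchmarks.

import numpy as np


def joint_tracking_error(ref_joints: np.ndarray, sim_joints: np.ndarray) -> float:
    """Mean Euclidean joint-position error (meters) between the reference motion
    and the simulated rollout; both arrays have shape (T, J, 3)."""
    return float(np.linalg.norm(ref_joints - sim_joints, axis=-1).mean())


def tracking_success(ref_joints: np.ndarray, sim_joints: np.ndarray,
                     threshold: float = 0.5) -> bool:
    """Assumed success criterion: the per-frame maximum joint error never
    exceeds `threshold` meters over the whole rollout."""
    per_frame_max = np.linalg.norm(ref_joints - sim_joints, axis=-1).max(axis=-1)
    return bool((per_frame_max < threshold).all())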

RLPF vs. SFT

[RLPF] A person jumps and spins.
[SFT] A person jumps and spins.
[RLPF] A person doing jumping jacks.
[SFT] A person doing jumping jacks.
[RLPF] A person stretching his arms.
[SFT] A person stretching his arms.
[RLPF] Jumping up and down in place.
[SFT] Jumping up and down in place.
[RLPF] A person jumps forward once.
[SFT] A person jumps forward once.
[RLPF] A person walks forward once.
[SFT] A person walks forward once.

Conclusion

In this paper, we present Reinforcement Learning from Physical Feedback (RLPF), a novel framework that resolves physical inconsistency in motion generation models for humanoid robots. RLPF integrates two key components, a pretrained motion tracking policy and an alignment verification module, which ensure that generated motions are both physically feasible and semantically consistent with the input instructions. Extensive experiments demonstrate the effectiveness of RLPF and show a pathway toward deploying humanoid robots in real-world applications.

BibTeX

@article{yue2025rl,
  title={RL from Physical Feedback: Aligning Large Motion Models with Humanoid Control},
  author={Junpeng Yue and Zepeng Wang and Yuxuan Wang and Weishuai Zeng and Jiangxing Wang and Xinrun Xu and Yu Zhang and Sipeng Zheng and Ziluo Ding and Zongqing Lu},
  journal={arXiv preprint arXiv:2506.12769},
  year={2025}
}