DreamControl: Human-Inspired Whole-Body Humanoid Control for Scene Interaction via Guided Diffusion

¹General Robotics, ²UC Berkeley, ³Brown University
Under Review

Overview Video

Abstract

We introduce DreamControl, a novel methodology for learning autonomous whole-body humanoid skills. DreamControl leverages the strengths of diffusion models and Reinforcement Learning (RL): our core innovation is the use of a diffusion prior trained on human motion data, which subsequently guides an RL policy in simulation to complete specific tasks of interest (e.g., opening a drawer or picking up an object). We demonstrate that this human-motion-informed prior allows RL to discover solutions unattainable by direct RL, and that diffusion models inherently promote natural-looking motions, aiding in sim-to-real transfer. We validate DreamControl's effectiveness on a Unitree G1 robot across a diverse set of challenging tasks involving simultaneous lower- and upper-body control and object interaction.
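The paper details the actual guidance mechanism; as a rough illustration only, the sketch below (plain PyTorch, all names hypothetical) shows one common way a pretrained motion diffusion prior can shape an RL reward: a one-step denoising error serves as a proxy for how "human-like" a short motion snippet is, and that score is added to the task reward. The prior interface, the simplified noising step, and the 0.1 weighting are assumptions for illustration, not DreamControl's actual design.

    # Hypothetical sketch: shaping an RL reward with a motion diffusion prior.
    # `prior` is assumed to be a denoiser eps_theta(x_t, t) pretrained on
    # human motion clips; none of these names come from the paper.
    import torch

    def guided_reward(task_reward, motion_window, prior, noise_level=0.1):
        """Combine a task reward with a diffusion-prior 'naturalness' score.

        task_reward:   (B,) per-environment task reward
        motion_window: (B, T, D) recent poses produced by the policy
        """
        b = motion_window.shape[0]
        t = torch.full((b,), noise_level)                    # pseudo-timestep for the denoiser
        noise = torch.randn_like(motion_window)
        noisy = motion_window + noise_level * noise          # simplified forward-noising step
        pred = prior(noisy, t)                               # denoiser's noise estimate
        err = ((pred - noise) ** 2).mean(dim=(1, 2))         # one-step denoising error
        prior_score = -err                                   # low error => human-like motion
        return task_reward + 0.1 * prior_score               # weighting is an assumption

In this kind of setup, the prior term discourages the policy from exploiting physically valid but unnatural motions, which is consistent with the abstract's claim that the diffusion prior promotes natural-looking behavior and eases sim-to-real transfer.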

Skills on Hardware

BibTeX


@misc{kalaria2025dreamcontrol,
    title={DreamControl: Human-Inspired Whole-Body Humanoid Control for Scene Interaction via Guided Diffusion},
    author={Dvij Kalaria and Sudarshan Harithas and Pushkal Katara and Sangkyung Kwak and Sarthak Bhagat and S. Shankar Sastry and Srinath Sridhar and Sai Vemprala and Ashish Kapoor and Jonathan Huang},
    year={2025},
}