Multi-Brain Collaborative Control for Quadruped Robots

CoRL 2024

Hang Liu*, Yi Cheng*, Rankun Li, Xiaowen Hu, Linqi Ye, Houde Liu


Abstract

In quadruped robot locomotion, the Blind Policy and the Perceptive Policy each have their own advantages and limitations. The Blind Policy relies on preset sensor information and algorithms, which makes it suitable for known, structured environments but leaves it with little adaptability in complex or unknown ones. The Perceptive Policy uses visual sensors to obtain detailed environmental information, allowing it to adapt to complex terrain, but its effectiveness is limited under occlusion and, unlike the Blind Policy, it is not robust when perception fails. To address these challenges, we propose a Multi-Brain collaborative system that incorporates concepts from Multi-Agent Reinforcement Learning and introduces collaboration between the Blind Policy and the Perceptive Policy. Applied to a quadruped robot, this multi-policy collaborative model allows the robot to maintain stable locomotion even when the perceptual system is impaired or observational data is incomplete. Our simulations and real-world experiments demonstrate that the system significantly improves the robot's traversability and robustness against perception failures in complex environments, validating the effectiveness of multi-policy collaboration in enhancing robotic motion performance.
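To make the idea of falling back to a vision-independent policy concrete, the minimal Python sketch below pairs a blind policy with a perceptive one and reverts to the blind one when the height map looks invalid. All class names, observation shapes, and the validity rule here are illustrative assumptions, not the paper's actual Multi-Brain architecture, which learns the collaboration rather than hand-coding a switch.

# Minimal illustrative sketch of policy collaboration under perception failure.
# All names, shapes, and the validity rule are assumptions for illustration;
# they are not the paper's actual architecture or training setup.
import numpy as np


class BlindPolicy:
    """Maps proprioception only (joint states, IMU) to 12 joint targets."""
    def act(self, proprio: np.ndarray) -> np.ndarray:
        return np.zeros(12)  # stand-in for a trained network's output


class PerceptivePolicy:
    """Additionally consumes exteroception, e.g. a local height map."""
    def act(self, proprio: np.ndarray, height_map: np.ndarray) -> np.ndarray:
        return np.zeros(12)  # stand-in for a trained network's output


def perception_valid(height_map: np.ndarray) -> bool:
    # Crude check: reject NaN/inf readings or implausibly large heights.
    return bool(np.isfinite(height_map).all() and np.abs(height_map).max() < 1.0)


def collaborative_action(blind, perceptive, proprio, height_map):
    # Prefer the perceptive policy while its input looks trustworthy;
    # otherwise fall back to the blind policy, which never depends on vision.
    if perception_valid(height_map):
        return perceptive.act(proprio, height_map)
    return blind.act(proprio)

The snippet only conveys why keeping a vision-independent policy available helps preserve stable locomotion when perception fails; the actual system coordinates the two policies in a learned, more involved way.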




Framework


🧗🏼Climb with Perception

🚧Avoid with Perception

🧐What if Perception suddenly fails?

✅Still Robust

👁‍🗨Height Map in Real-World


Perception Failure 1 · Perception Failure 2 · Perception Failure 3

BibTeX


@inproceedings{go2mbc,
  title={Multi-Brain Collaborative Control for Quadruped Robots},
  author={Liu, Hang and Cheng, Yi and Li, Rankun and Hu, Xiaowen and Ye, Linqi and Liu, Houde},
  booktitle={TBD},
  year={2024},
}

Acknowledgment