Runhan Huang

I am an undergraduate student at Tsinghua University in the Yao Class. At Tsinghua, I am fortunate to be advised by Prof. Hang Zhao in the MARS Lab. At Harvard's Kempner Institute, I am a visiting researcher advised by Prof. Yilun Du. I also spent time at the Shanghai Qizhi Institute.

I am currently seeking a PhD position for Fall 2026. Feel free to reach out via email at runhanhuang2004(at)gmail.com!

Email  /  CV  /  Scholar  /  GitHub  /  LinkedIn  /  Twitter


Recent News

  • One paper accepted by IROS 2025! Check out the website.
  • One paper accepted by RA-L 2025! Check out the website and code.
  • One paper accepted by ICRA 2025! Check out the website and code.
  • My personal website is out!

Research

My research interests center on Robotics, Generative Models and Reinforcement Learning. My goal is to develop robotic systems that can interact robustly and plan effectively in real-world environments, achieving generalizable and adaptable behaviors. I am also open to exploring other relevant research topics. Some papers are highlighted.

Flexible Locomotion Learning with Diffusion Model Predictive Control
Runhan Huang, Haldun Balim, Heng Yang, Yilun Du
arXiv, 2025
project page / video / arXiv / code

We introduce a test-time-adaptable locomotion planner grounded in a diffusion-based generative prior. An interactive training procedure further improves the performance of diffusion-based planners.

MoELoco: Mixture of Experts for Multitask Locomotion
Runhan Huang*, Shaoting Zhu*, Yilun Du, Hang Zhao
IROS, 2025
project page / video / arXiv / code

MoELoco introduces a multitask locomotion framework that employs a mixture-of-experts strategy to enhance reinforcement learning across diverse tasks while leveraging compositionality to generate new skills.

VR-Robo: A Real-to-Sim-to-Real Framework for Visual Robot Navigation and Locomotion
Shaoting Zhu*, Linzhan Mou*, Derun Li, Baijun Ye, Runhan Huang, Hang Zhao
RA-L, 2025
project page / video / arXiv / code

VR-Robo introduces a digital twin framework using 3D Gaussian Splatting for photorealistic simulation, enabling RGB-based sim-to-real transfer for robot navigation and locomotion.

Robust Robot Walker: Learning Agile Locomotion over Tiny Traps
Shaoting Zhu, Runhan Huang, Linzhan Mou, Hang Zhao
ICRA, 2025
project page / video / arXiv / code

We propose a proprioception-only, two-stage training framework with goal commands, along with a dedicated tiny-trap benchmark, enabling quadruped robots to robustly traverse small obstacles.

Miscellanea

Academic Service

Reviewer, RA-L
Reviewer, JAIR

Template forked from Jon Barron. A big thanks to him!