Runhan Huang

I am an undergraduate student in the Yao Class at Tsinghua University, where I am fortunate to be advised by Prof. Hang Zhao in the MARS Lab. At Harvard's Kempner Institute, I am a visiting researcher advised by Prof. Yilun Du. I also spent time at the Shanghai Qizhi Institute.

I am currently seeking a PhD position for Fall 2026. Feel free to reach out via email at runhanhuang2004(at)gmail.com!

Email  /  CV  /  Scholar  /  Github  /  Linkedin  /  Twitter


Recent News

  • One paper, Diffusion MPC, accepted to ICRA 2026!
  • One paper, MoELoco, accepted to IROS 2025!
  • One paper, VR-Robo, accepted to RA-L 2025!
  • One paper, RRW, accepted to ICRA 2025!

Research

My research interests center on robotics, generative models, and reinforcement learning. My goal is to develop robotic systems that can interact robustly and plan effectively in real-world environments, achieving generalizable and adaptable behaviors. I am also open to exploring other relevant research topics. Some papers are highlighted.

TTT-Parkour: Rapid Test-Time Training for Perceptive Robot Parkour
Shaoting Zhu*, Baijun Ye*, Jiaxuan Wang†, Jiakang Chen†, Ziwen Zhuang, Linzhan Mou, Runhan Huang, Hang Zhao
Arxiv, 2026
project page / video / arXiv

We propose a real-to-sim-to-real framework that leverages rapid test-time training (TTT) on novel terrains, significantly enhancing the robot's capability to traverse extremely difficult geometries.

Flexible Locomotion Learning with Diffusion Model Predictive Control
Runhan Huang, Haldun Balim, Heng Yang, Yilun Du
ICRA, 2026
project page / video / arXiv / code

We introduce a test-time adaptable locomotion planner grounded in a diffusion-based generative prior. An interactive training procedure further improves the performance of diffusion-based planners.

MoELoco: Mixture of Experts for Multitask Locomotion
Runhan Huang*, Shaoting Zhu*, Yilun Du, Hang Zhao
IROS, 2025
project page / video / arXiv / code

MoELoco introduces a multitask locomotion framework that employs a mixture-of-experts strategy to enhance reinforcement learning across diverse tasks while leveraging compositionality to generate new skills.

VR-Robo: A Real-to-Sim-to-Real Framework for Visual Robot Navigation and Locomotion
Shaoting Zhu*, Linzhan Mou*, Derun Li, Baijun Ye, Runhan Huang, Hang Zhao
RA-L, 2025
project page / video / arXiv / code

VR-Robo introduces a digital twin framework using 3D Gaussian Splatting for photorealistic simulation, enabling RGB-based sim-to-real transfer for robot navigation and locomotion.

Robust Robot Walker: Learning Agile Locomotion over Tiny Traps
Shaoting Zhu, Runhan Huang, Linzhan Mou, Hang Zhao
ICRA, 2025
project page / video / arXiv / code

We propose a proprioception-only, two-stage training framework with goal commands and a dedicated tiny-trap benchmark, enabling quadruped robots to robustly traverse small obstacles.

Miscellanea

Academic Service

Reviewer, RA-L
Reviewer, JAIR

Template forked from Jon Barron. A big thanks to him!