Hu Tianrun

Research Engineer

Smart System Institute (SSI)

National University of Singapore

Email: tianrunhu@gmail.com, tianrun@nus.edu.sg

Office: Smart System Institute, Innovation 4.0, #06-01C, 3 Research Link, Singapore 117602

About Me

I am a Research Engineer at the Smart System Institute (SSI), National University of Singapore, supervised by Prof. David Hsu. At NUS, I also work closely with Dr. Hanbo Zhang. My research focuses on mobile manipulation, object-centric skill learning, and generalizable robot systems. Previously, I completed my undergraduate degree in Computer Engineering at the College of Computing and Data Science (CCDS), Nanyang Technological University, under the guidance of Prof. Lam Siew Kei. During my undergraduate studies, I worked with Dongshuo Zhang.

Research Interests

My long-term research goal is to build generalizable robot skills efficiently with minimal human prior. Generalizing across objects, embodiments, viewpoints, and tasks remains hard because most learning approaches couple factors that are structurally independent. I am interested in representations and algorithms that make this structure explicit, so that compositional generalization and data efficiency arise from the representation rather than from scale.

Compositional Generalization in Manipulation
How can skill representations and scene models support transfer across objects, embodiments, viewpoints, and tasks?
Object-Centric Skill Representations
[Chain-of-Action] NeurIPS 2025 — Trajectory autoregressive modeling around key object states; better generalization than predicting raw actions
[MimicFunc] CoRL 2025 — Tool-use transfer from a single video via functional correspondence; separates affordance prediction from motion generation
[Object-Centric Policy] ongoing — Contact points and spatial vectors in object coordinates, transferring across embodiments and viewpoints
Contact Synthesis and Foundation Models
[ManiFoundation] IROS 2024 — Foundation model for contact synthesis generalizing across arbitrary objects and robot embodiments
Compositional Scene Representations
[Dynamic Scene Graph] ongoing — Object state tracking via relational structure for long-horizon planning under partial observation
Household Robot Assistants
How do these representations come together in real-world mobile manipulation and human-robot interaction?
Mobile Manipulation
[Visibility-Aware Mobile Grasping] RSS 2026, in submission — Jointly optimizes gaze and base motion under limited directional sensing to grasp objects outside the arm workspace
Human-Robot Interaction
[Robi Butler] ICRA 2025 — Remote multimodal interaction with a household robot assistant via language, gesture, and demonstration
Imitation Learning Infrastructure
[RobotIL] Multi-platform system supporting GELLO, VR controllers, joysticks, and hand tracking across Fetch, Kinova, and Franka; underpins several works above

Get in Touch

If you are a researcher with a shared interest, an undergrad curious about research life or PhD applications, or just someone with a question or idea, feel free to reach out. I am happy to chat over email or set up a call!