Hang Zhao
@zhaohang0124
Asst. Prof @ Tsinghua University, Co-founder @GalaxeaDynamics. Former Scientist @ Waymo, MIT PhD’19. I research multimodal learning, self-driving, and robotics!
A humanoid robot must be able to recover from any fall, or stabilize itself with its arms and torso when it trips. MPC and sim-to-real methods often struggle with this. A new study by Tsinghua University researchers tackles these uncertain contact scenarios using a rigid-body simulator and RL.
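The tweet doesn't show code; below is a minimal sketch of the general recipe it describes (RL in a rigid-body simulator with randomized contact conditions). Every class, parameter, and number here is a hypothetical placeholder, not the paper's implementation.

```python
# Hypothetical sketch: train a recovery policy with RL while
# randomizing contact properties per episode. All names invented.
import numpy as np

rng = np.random.default_rng(0)

def sample_contact_params():
    # Domain-randomize the quantities that make contact uncertain.
    return {
        "friction": rng.uniform(0.3, 1.2),
        "restitution": rng.uniform(0.0, 0.2),
        "ground_tilt_deg": rng.uniform(-5.0, 5.0),
    }

class RigidBodySimStub:
    """Stand-in for a rigid-body simulator (e.g. MuJoCo/Isaac)."""
    def reset(self, contact_params):
        self.params = contact_params
        return np.zeros(48)  # proprioceptive observation

    def step(self, action):
        obs = rng.standard_normal(48)
        upright = rng.uniform()          # torso-uprightness proxy
        reward = upright - 0.01 * np.square(action).sum()
        done = upright < 0.05            # fell over
        return obs, reward, done

def rollout(env, policy, horizon=500):
    obs = env.reset(sample_contact_params())  # fresh contacts each episode
    total = 0.0
    for _ in range(horizon):
        obs, r, done = env.step(policy(obs))
        total += r
        if done:
            break
    return total

policy = lambda obs: np.tanh(0.1 * obs[:12])  # placeholder policy
print(rollout(RigidBodySimStub(), policy))
```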
We’re thrilled to announce that Galaxea AI has officially started shipping the R1 humanoid robot. Built for advanced logistics, manufacturing, and automation, R1’s 20+ DoF and precise control make it ideal for complex operations. For inquiries, contact us at [email protected]. 1/4
🚗BEV-VAE: Multi-view Image Generation with Spatial Consistency for Autonomous Driving First self-supervised VAE in BEV space! Shifting from image-based to scene-level generation — enabling controllable, consistent multi-view synthesis. ✅ Unifies multi-view semantics into a…
🚀 VR-Robo: A Real-to-Sim-to-Real pipeline for RGB vision-based navigation & control in legged robots. 💡 Reconstruct realistic indoor scenes using RGB 🧠 Train RL policies with photorealistic simulation 🤖 Deploy directly on real visual robots! 🔗 vr-robo.github.io
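As a rough outline of the Real-to-Sim-to-Real loop named above; every function here is an illustrative placeholder, not VR-Robo's actual API.

```python
# Hypothetical outline of the three stages the tweet names.

def reconstruct_scene(rgb_images):
    """Real-to-sim: build a photorealistic simulated scene (e.g.
    via Gaussian splatting) from RGB captures of a real room."""
    return {"scene": "reconstructed", "n_views": len(rgb_images)}

def train_rl_policy(scene, steps=1_000_000):
    """Sim: train an RGB-conditioned navigation/control policy
    against renders of the reconstructed scene."""
    return {"policy": "trained", "scene": scene["scene"]}

def deploy_on_robot(policy):
    """Sim-to-real: run the learned policy on the real legged
    robot, consuming its onboard RGB stream directly."""
    return f"deployed {policy['policy']} policy"

scene = reconstruct_scene(rgb_images=["img_%d" % i for i in range(60)])
policy = train_rl_policy(scene)
print(deploy_on_robot(policy))
```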
🤖 Meet Morpheus — a neural-driven animatronic face that doesn’t just talk, it feels. Hybrid actuation (rigid 💪 + tendon 🧵) makes it expressive and compact. Self-modeling + audio-to-blendshape = real-time emotional reactions 😮💨😠🥹 🧠💬 Watch it smile, frown, cringe... all…
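A loose sketch of the audio-to-blendshape pipeline the tweet mentions; dimensions and function names are assumptions, not Morpheus's actual interface.

```python
# Illustrative only: audio drives blendshape weights, which a
# (here fixed, in reality self-modeled) mixing map turns into
# rigid-motor and tendon commands. All numbers are made up.
import numpy as np

def audio_to_blendshapes(audio_frame):
    # Stand-in for a learned model mapping an audio window to
    # facial blendshape weights (e.g. 52 ARKit-style coefficients).
    return np.clip(np.abs(audio_frame[:52]), 0.0, 1.0)

def blendshapes_to_actuators(weights, mixing_matrix):
    # Self-modeling would learn this mapping; here it is a fixed
    # linear mix from blendshape space to actuator commands.
    return mixing_matrix @ weights

rng = np.random.default_rng(0)
mixing = rng.uniform(-1, 1, size=(30, 52))   # say, 30 actuators
frame = rng.standard_normal(512)             # one audio window
cmds = blendshapes_to_actuators(audio_to_blendshapes(frame), mixing)
print(cmds.shape)
```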
We propose Challenger: a framework to generate photorealistic adversarial driving videos! ⚠️🚗 - 🚗 Diverse scenarios: cut-ins, tailgating, blocking, without human supervision - 💥 8.6× to 26.1× higher collision rates for SOTA AD models - 🎯 Transferable…
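A hedged sketch of adversarial scenario search in the spirit of the tweet (propose maneuvers, score them against the driving model under test, keep the riskiest); the scoring stub and all names are invented for illustration.

```python
# Hypothetical sketch: search over adversary maneuvers and keep
# the one that most stresses the AD model. Not Challenger's code.
import numpy as np

rng = np.random.default_rng(0)

def propose_maneuver():
    # cut-in / tailgate / block, parameterized by gap and timing
    kind = rng.choice(["cut_in", "tailgate", "block"])
    return {"kind": kind, "gap_m": rng.uniform(2, 15),
            "trigger_s": rng.uniform(0, 5)}

def risk_score(maneuver, ad_model):
    """Stand-in for: render a photorealistic video of the maneuver,
    run the AD model on it, and measure closeness to collision."""
    return rng.uniform()  # placeholder score

def search_adversarial(ad_model, n_candidates=64):
    candidates = [propose_maneuver() for _ in range(n_candidates)]
    return max(candidates, key=lambda m: risk_score(m, ad_model))

print(search_adversarial(ad_model=None))
```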
Empowering the reproduction of π0.5! The new-generation dual-arm mobile manipulation platform R1 Lite is now officially launched. Contact us!! #GalaxeaDynamics #TechInnovation #PhysicalIntelligence
🌟Galaxea Dynamics' R1 Lite powers Physical Intelligence's π-0.5 model!🌟 🏠Get ready for a futuristic home makeover! 🏠 🎉The R1 Lite-based platform, under the π-0.5 model, has successfully performed complex, long-horizon tasks! 🎉 #Robotics #AI #Innovation #GalaxeaDynamics…
We got a robot to clean up homes that were never seen in its training data! Our new model, π-0.5, aims to tackle open-world generalization. We took our robot into homes that were not in the training data and asked it to clean kitchens and bedrooms. More below⤵️
Excited to be back at MIT for the RSS AC meeting! Thank you @lucacarlone1 for bringing us together!
See you at MIT!
I'm very excited to host the Robotics Worldwide Workshop at MIT with Amanda Prorok next week! We have an incredible lineup of roboticists from around the world who will join us at MIT for lightning talks, a panel discussion, and a poster session.
🚀NEW: Galaxea Dynamics A1XY Light-Weight Dual-Configuration 6-DOF Robot Arm, $2999! Your desktop-level embodied-intelligence development companion, supporting development, data, education, and competition scenarios. Contact us!! #GalaxeaDynamics #TechInnovation
Next-gen household robots with whole-body manipulation capabilities @GalaxeaDynamics
🤖 Ever wondered what robots need to truly help humans around the house? 🏡 Introducing 𝗕𝗘𝗛𝗔𝗩𝗜𝗢𝗥 𝗥𝗼𝗯𝗼𝘁 𝗦𝘂𝗶𝘁𝗲 (𝗕𝗥𝗦)—a comprehensive framework for mastering mobile whole-body manipulation across diverse household tasks! 🧹🫧 From taking out the trash to…
Great to see that Fei-Fei's team at Stanford loves @Galaxea_AI's robot! We are dedicated to serving the research and development community.
🚀Two weeks ago, we hosted a welcome party for the newest member of our Stanford Vision and Learning Lab—a new robot! 🤖✨Watch as @drfeifei interacts with it in this fun video. Exciting release coming soon. Stay tuned! 👀🎉
That was a lot of fun! So exciting to be working with the most brilliant students and collaborators in spatial intelligence and robotic learning 🤩
🐍 2025 Lunar New Year Kickoff! Galaxea R1 Pro Sends Futuristic Blessings! 🚀 The first day back to work after #SpringFestival2025 , and R1 Pro is here to energise your year with tech-powered fortune! DM now to explore how Galaxea R1 Series can transform your workflow!💌…
Want your humanoid robot to perform a break dance? It must learn to make whole-body contacts/collisions with the ground. Check out Embrace Collisions: project-instinct.github.io
Embrace Collisions: Humanoid Shadowing for Deployable Contact-Agnostics Motions. Humanoids have kept their torsos upright for too long; they should be able to contact the environment with all of their body parts. However, MPC-based planning and sim-to-real methods often fail on deployment. (1/3)
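A speculative sketch of a shadowing-style tracking reward consistent with the thread's premise (track a reference whole-body motion, never penalize contact itself); the weights and dimensions below are made up, not the paper's.

```python
# Hypothetical motion-shadowing reward: reward tracking a reference
# motion while leaving contacts unconstrained, so the policy may
# use any body part against the environment.
import numpy as np

def shadowing_reward(body_pos, body_ref, joint_pos, joint_ref):
    # Track reference body-link positions and joint angles; note
    # the absence of any contact penalty term.
    pos_err = np.linalg.norm(body_pos - body_ref, axis=-1).mean()
    joint_err = np.abs(joint_pos - joint_ref).mean()
    return np.exp(-2.0 * pos_err) + 0.5 * np.exp(-1.0 * joint_err)

# toy numbers: 15 body links in 3-D, 23 joints
r = shadowing_reward(np.zeros((15, 3)), np.ones((15, 3)) * 0.1,
                     np.zeros(23), np.ones(23) * 0.05)
print(r)
```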
As someone who values ritual, I’ve decided to release the preview version of my new work, VLABench, on my birthday, which happens to fall on Christmas Day! Merry Christmas! Arxiv link: arxiv.org/abs/2412.18194 Website: vlabench.github.io
#CoRL2024 heads-up: @ziwenzhuang_leo will present Humanoid Parkour Learning (humanoid4parkour.github.io) at the WCBM workshop on Saturday. Please come to meet this young rising star in robotics!
Introducing 🤖🏃Humanoid Parkour Learning Using vision and proprioception, our humanoid can jump over hurdles and platforms, leap over gaps, walk up/down stairs, and much more. 🖥️Check our website at humanoid4parkour.github.io 📺Stay tuned for more videos.
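Illustrative only: one way vision and proprioception could be fused into a single policy observation, as the tweet describes; the encoder and all dimensions are assumptions, not the paper's architecture.

```python
# Hypothetical observation fusion for a parkour policy.
import numpy as np

def encode_depth(depth_img):
    # Placeholder visual encoder: downsample & flatten (a real
    # system would use a small CNN).
    return depth_img[::8, ::8].reshape(-1)

def build_observation(depth_img, joint_pos, joint_vel, base_ang_vel):
    proprio = np.concatenate([joint_pos, joint_vel, base_ang_vel])
    return np.concatenate([encode_depth(depth_img), proprio])

obs = build_observation(
    depth_img=np.zeros((64, 64)),
    joint_pos=np.zeros(19),   # humanoid joint angles (assumed count)
    joint_vel=np.zeros(19),
    base_ang_vel=np.zeros(3),
)
print(obs.shape)  # one vector the policy consumes each step
```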
Hallo Munich, we bring 3 papers to CoRL! - UNREST(openreview.net/forum?id=LiwdX……) Oral presentation on Friday at 9:30am. - DriveVLM (tsinghua-mars-lab.github.io/DriveVLM/). - Humanoid Parkour Learning (humanoid4parkour.github.io). All will appear in the Friday 4pm poster session, don’t miss out!
Introducing 𝐃𝐫𝐢𝐯𝐞𝐕𝐋𝐌, VLM meets Autonomous Driving. We propose a dual system that drives a car autonomously in complex driving scenarios. - Slow system: VLM - Fast system: classical AD pipeline Enjoy our onboard demo! Project Page: tsinghua-mars-lab.github.io/DriveVLM/
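A minimal sketch of a slow/fast dual system like the one described, assuming a hypothetical FastPlanner and SlowVLM; DriveVLM's real arbitration logic is not shown in the tweet.

```python
# Hypothetical dual-system loop: the fast classical pipeline runs
# every tick, while the slow VLM refreshes a high-level hint at a
# much lower frequency. Names and rates are illustrative.
import time

class FastPlanner:
    """Classical AD pipeline: runs every control tick."""
    def plan(self, obs, hint=None):
        return {"trajectory": "nominal", "hint": hint}

class SlowVLM:
    """VLM reasoner: consulted at low frequency for hard scenes."""
    def advise(self, obs):
        return "yield to the construction zone on the right"

def drive_loop(obs_stream, fast=FastPlanner(), slow=SlowVLM(),
               slow_period_s=1.0):
    hint, last_slow = None, 0.0
    for obs in obs_stream:
        now = time.monotonic()
        if now - last_slow >= slow_period_s:
            hint, last_slow = slow.advise(obs), now  # slow system
        yield fast.plan(obs, hint)                   # fast system

for plan in drive_loop(range(3)):
    print(plan)
```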
Some initial demos we have done on the @Galaxea_AI R1 robot: opening the fridge door, then picking, passing, and placing a banana in the fridge.