Yuanhang Zhang
@Yuanhang__Zhang
MS @CMU_Robotics | @Amazon FAR Team
🦾How can humanoids unlock real strength for heavy-duty loco-manipulation? Meet FALCON🦅: Learning Force-Adaptive Humanoid Loco-Manipulation. 🌐: lecar-lab.github.io/falcon-humanoi… See the details below👇:
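(For readers curious what "force-adaptive" training can look like in practice, here is a minimal Python sketch, assuming the common recipe of randomizing an external end-effector force in simulation and exposing it to the policy; the function names and dimensions are hypothetical, not FALCON's actual code.)

```python
import numpy as np

def sample_ee_force(max_force_n: float = 100.0) -> np.ndarray:
    """Sample a random 3D force (in newtons) to apply at an end-effector."""
    direction = np.random.normal(size=3)
    direction /= np.linalg.norm(direction)
    return np.random.uniform(0.0, max_force_n) * direction

def build_observation(proprio: np.ndarray, ee_force: np.ndarray) -> np.ndarray:
    """Expose the (estimated) external force to the policy alongside proprioception."""
    return np.concatenate([proprio, ee_force])

# Hypothetical per-episode usage: apply `force` to the simulated end-effector
# each physics step, and let the policy observe it so it can learn to compensate.
force = sample_ee_force()
obs = build_observation(np.zeros(48), force)
```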
🐕 I'm happy to share that my paper "RAMBO: RL-augmented Model-based Whole-body Control for Loco-manipulation" has been accepted to IEEE Robotics and Automation Letters (RA-L)! 🧶 Project website: jin-cheng.me/rambo.github.i… Paper: arxiv.org/abs/2504.06662
It's the best infra I've ever used for sim2real. The nicely decoupled design enables seamless sim2sim and sim2real transfer. Happy to see it open-sourced! (I also developed one based on FALCON and will release it sometime in the future!)
We now open-source a general sim2sim/sim2real deployment codebase for FALCON: github.com/LeCAR-Lab/FALC…, supporting both the Unitree SDK and the Booster SDK!
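(A minimal sketch of what such a decoupled design can look like, assuming the usual pattern of hiding each backend behind one robot interface; the class and method names here are illustrative, not the FALCON repo's actual API.)

```python
from abc import ABC, abstractmethod
import numpy as np

class RobotBackend(ABC):
    """One interface for every backend: simulator, Unitree SDK, Booster SDK, ..."""
    @abstractmethod
    def read_state(self) -> np.ndarray: ...
    @abstractmethod
    def send_action(self, action: np.ndarray) -> None: ...

class SimBackend(RobotBackend):
    def read_state(self) -> np.ndarray:
        return np.zeros(48)            # placeholder: query the simulator here
    def send_action(self, action: np.ndarray) -> None:
        pass                           # placeholder: step the simulator here

def control_loop(backend: RobotBackend, policy, steps: int) -> None:
    """Backend-agnostic loop: swap SimBackend for a real-robot backend and nothing else changes."""
    for _ in range(steps):
        backend.send_action(policy(backend.read_state()))

control_loop(SimBackend(), policy=lambda obs: np.zeros(12), steps=10)
```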
The team from the RI @LeCARLab and the @nvidia GEAR robotics research lab recently presented ASAP's capabilities at #RSS2025 🤖🚀🦾 The article on this incredible work is out now: ri.cmu.edu/robots-with-mo…
🚀 Can we make a humanoid move like Cristiano Ronaldo, LeBron James, and Kobe Bryant? YES! 🤖 Introducing ASAP: Aligning Simulation and Real-World Physics for Learning Agile Humanoid Whole-Body Skills Website: agile.human2humanoid.com Code: github.com/LeCAR-Lab/ASAP
Recording of my talk "From Sim2Real 1.0 to 4.0 for Humanoid Whole-Body Control and Loco-Manipulation" (at ICRA&CVPR workshops and Caltech): youtu.be/AGNcw4qnimk?si… Slides: drive.google.com/file/d/1h5MxNH…
🚀 Introducing LeVERB, the first 𝗹𝗮𝘁𝗲𝗻𝘁 𝘄𝗵𝗼𝗹𝗲-𝗯𝗼𝗱𝘆 𝗵𝘂𝗺𝗮𝗻𝗼𝗶𝗱 𝗩𝗟𝗔 (upper- & lower-body), trained on sim data and zero-shot deployed. Addressing interactive tasks: navigation, sitting, locomotion with verbal instruction. 🧵 ember-lab-berkeley.github.io/LeVERB-Website/
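(A hedged sketch of the hierarchical pattern the tweet describes: a slow vision-language module emits a latent command that a fast whole-body policy decodes alongside proprioception. The rates, dimensions, and function names are illustrative guesses, not LeVERB's actual design.)

```python
import numpy as np

HIGH_LEVEL_DIVISOR = 50   # e.g. VLA at 2 Hz while whole-body control runs at 100 Hz

def high_level(image: np.ndarray, instruction: str) -> np.ndarray:
    """Stand-in for the vision-language module: emits a latent command vector."""
    return np.zeros(32)

def low_level(latent: np.ndarray, proprio: np.ndarray) -> np.ndarray:
    """Stand-in for the whole-body policy: joint targets from latent + state."""
    return np.zeros(26)

latent = np.zeros(32)
for tick in range(200):
    if tick % HIGH_LEVEL_DIVISOR == 0:
        latent = high_level(np.zeros((224, 224, 3)), "walk to the chair and sit")
    action = low_level(latent, proprio=np.zeros(48))
```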
Your bimanual manipulators might need a Robot Neck 🤖🦒 Introducing Vision in Action: Learning Active Perception from Human Demonstrations ViA learns task-specific, active perceptual strategies—such as searching, tracking, and focusing—directly from human demos, enabling robust…
🚀Introducing GMT — a general motion tracking framework that enables high-fidelity motion tracking on humanoid robots by training a single policy from large, unstructured human motion datasets. 🤖A step toward general humanoid controllers. Project Website:…
Swarm navigation at 20 m/s with no communication, no state estimation, & on a $21 computer? This paper combines deep learning + differentiable sim for zero-shot sim-to-real flight. It is an underrated breakthrough. Paper: arxiv.org/abs/2407.10648 #naturemachineintelligence
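(A toy Python illustration of the differentiable-simulation idea, assuming nothing about the paper's actual implementation: roll out an analytic simulator, propagate state gradients w.r.t. policy parameters in forward mode, and descend the task loss through the rollout. Here the "robot" is a 1D point mass and the "policy" is a PD gain pair.)

```python
import numpy as np

def rollout(params, target=1.0, dt=0.02, steps=200):
    """Toy differentiable sim: 1D point mass driven by u = kp*(target-x) - kd*v.
    Forward-mode gradients of the state w.r.t. (kp, kd) are propagated alongside it."""
    kp, kd = params
    x = v = 0.0
    dx = np.zeros(2)                      # [dx/dkp, dx/dkd]
    dv = np.zeros(2)                      # [dv/dkp, dv/dkd]
    for _ in range(steps):
        u = kp * (target - x) - kd * v
        du = np.array([target - x, -v]) - kp * dx - kd * dv
        v += dt * u
        dv += dt * du
        x += dt * v                       # semi-implicit Euler
        dx += dt * dv
    loss = (x - target) ** 2
    return loss, 2.0 * (x - target) * dx  # analytic gradient through the rollout

params = np.array([1.0, 1.0])
for _ in range(100):
    loss, grad = rollout(params)
    params -= 0.2 * grad                  # first-order update through the simulator
print(params, loss)
```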
Here’s our latest RL update: Natural Mogging (thread below!)
Redwood AI | Mobility Reinforcement Learning
Real-world RL, where robots learn directly from physical interactions, is extremely challenging — especially for high-DoF systems like mobile manipulators. 1⃣ Long-horizon tasks and large action spaces lead to difficult policy optimization. 2⃣ Real-world exploration with…
Check out our new work Hold My Beer 🍺 for end-effector-centric stable humanoid loco-manipulation! We propose SoFTA, an async/hybrid dual-agent RL framework (similar to FALCON) that tackles the conflicting demands of fast EE stabilization vs. slow, robust…
🤖Can a humanoid robot carry a full cup of beer without spilling it while walking 🍺? Hold My Beer! Introducing Hold My Beer🍺: Learning Gentle Humanoid Locomotion and End-Effector Stabilization Control Project: lecar-lab.github.io/SoFTA/ See more details below👇
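(To make the dual-agent, dual-frequency idea above concrete: a minimal sketch, assuming the common pattern of running two policies at different control rates and merging their joint commands; the divisor, dimensions, and names are illustrative, not SoFTA's actual implementation.)

```python
import numpy as np

LOWER_RATE_DIVISOR = 5   # slow (lower-body) agent acts every 5th control tick

def merged_action(tick, obs, fast_policy, slow_policy, cached_slow):
    """Fast agent updates every tick; the slow agent's last action is held."""
    if tick % LOWER_RATE_DIVISOR == 0:
        cached_slow[:] = slow_policy(obs)              # e.g. 12 lower-body joints
    return np.concatenate([cached_slow, fast_policy(obs)])  # 12 + 14 joints

cached = np.zeros(12)
for tick in range(20):
    action = merged_action(
        tick, np.zeros(48),
        fast_policy=lambda obs: np.zeros(14),          # stand-in EE-stabilizing agent
        slow_policy=lambda obs: np.zeros(12),          # stand-in locomotion agent
        cached_slow=cached,
    )
```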
Really cool work leveraging tactile sensing for stable object-transportation on quadrupeds! Congrats Changyi!
Introducing LocoTouch: Quadrupedal robots equipped with tactile sensing can now transport unsecured objects — no mounts, no straps. The tactile policy transfers zero-shot from sim to real. Core Task-Agnostic Features: 1. High-fidelity contact simulation for distributed tactile…
🦿How to identify the physical parameters of legged robots while collecting informative data to reduce the Sim2Real gap? 🤖 Meet SPI-Active: Sampling-Based System Identification with Active Exploration for Legged Robot Sim2Real Learning Website: lecar-lab.github.io/spi-active_/ Details 👇:
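(A hedged sketch of the sampling-based part only — not the active-exploration part, and not SPI-Active's actual code: sample candidate physical parameters, replay logged actions in simulation, and keep the candidates whose predicted trajectories best match the real ones, CEM-style.)

```python
import numpy as np

def identify(simulate, real_traj, actions, iters=10, pop=64, elite=8):
    """simulate(params, actions) -> predicted trajectory, same shape as real_traj."""
    mean, std = np.array([1.0, 0.1]), np.array([0.5, 0.05])  # e.g. [mass, friction]
    for _ in range(iters):
        candidates = mean + std * np.random.randn(pop, mean.size)
        errors = np.array([
            np.mean((simulate(c, actions) - real_traj) ** 2) for c in candidates
        ])
        elites = candidates[np.argsort(errors)[:elite]]      # best-matching samples
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mean

# Toy check with a known "real" system (purely illustrative dynamics):
def toy_sim(params, actions):
    return params[0] * np.cumsum(actions)

acts = np.random.randn(100)
real = toy_sim(np.array([2.0, 0.0]), acts)
print(identify(toy_sim, real, acts))  # should recover a first parameter ≈ 2.0
```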
Slides are here if you are interested: drive.google.com/file/d/1xta-H2…
I am giving a talk "From Sim2Real 1.0 to 4.0 for Humanoid Whole-Body Control and Loco-Manipulation" at the RoboLetics 2.0 workshop @ieee_ras_icra today, summarizing my recent thoughts on sim2real. If you are interested: 2pm, May 23 @ room 302.
1️⃣/7️⃣🤖⚽ Toward Real-World Cooperative and Competitive Soccer with Quadrupedal Robot Teams Most learning-based robot-soccer work stays in simulation or tests 1v1. We field real-world games with both cooperation and competition—plus robot-human matches!