Haoyang Weng
@ElijahGalahad
Undergraduate in Yao Class, Tsinghua. @Tsinghua_IIIS | PhD '26 applicant | Machine learning, robotics
🤖💥 Want your robot to be compliant or forceful at your command? FACET lets your robot follow gentle nudges or win a tug of war — with a single force-adaptive policy conditioned on desired stiffness as command input! 📽️👇 Website: facet.pages.dev.
🐕 I'm happy to share that my paper, RAMBO: RL-augmented Model-based Whole-body Control for Loco-manipulation, has been accepted by IEEE Robotics and Automation Letters (RA-L)! 🧶 Project website: jin-cheng.me/rambo.github.i… Paper: arxiv.org/abs/2504.06662
A student just trained this within a day: no tedious tuning, no sim2real tricks, not even sys-id. It worked on the first trial on the real robot. This explains many of the recent impressive demos on the G1 robot: it's largely the hardware. Still sim2real gaps on the ankle and waist DoFs, though.
Nice work from my roommate. Could offline RL really replace PPO some day?
🚀 FoG: A Forget-and-Grow Strategy for Scaling Deep RL in Continuous Control 🧠 Tackle primacy bias with brain-inspired strategies 🗑️ Forget early replay data 🌱 Grow network capacity over time 🏆 Outperforms SOTA on 40+ continuous control tasks! pummmmpkin.github.io/fog_web/
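The forget half of the idea can be sketched in a few lines. This is my own toy illustration of "forget early replay data" to counter primacy bias (class and parameter names are mine, not from the FoG code):

```python
import collections
import random

class ForgetAndGrowBuffer:
    """Toy sketch of the 'forget' side of forget-and-grow:
    periodically drop the oldest replay data, which otherwise
    dominates early training (primacy bias)."""

    def __init__(self, capacity=10000, forget_fraction=0.2):
        self.data = collections.deque(maxlen=capacity)
        self.forget_fraction = forget_fraction

    def add(self, transition):
        self.data.append(transition)

    def forget_oldest(self):
        # Drop the earliest fraction of stored transitions.
        n_drop = int(len(self.data) * self.forget_fraction)
        for _ in range(n_drop):
            self.data.popleft()

    def sample(self, batch_size):
        return random.sample(self.data, min(batch_size, len(self.data)))

buf = ForgetAndGrowBuffer(capacity=100, forget_fraction=0.25)
for t in range(100):
    buf.add(t)
buf.forget_oldest()   # drops transitions 0..24
print(min(buf.data))  # -> 25
```

The "grow" half would periodically add capacity to the policy/value networks; that part depends on the network library, so it's omitted here.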
I've been a bit quiet on X recently. The past year has been a transformational experience. Grok-4 and Kimi K2 are awesome, but the world of robotics is a wondrous wild west. It feels like NLP in 2018 when GPT-1 was published, along with BERT and a thousand other flowers that…
It's the best infra I've ever used for sim2real. Its decoupled design enables seamless sim2sim and sim2real transfer. Happy to see it open-sourced! (I also developed one based on FALCON and will release it some time in the future!)
We now open-source a general sim2sim/sim2real deployment codebase for FALCON: github.com/LeCAR-Lab/FALC…, supporting both the Unitree SDK and the Booster SDK!
Systems 1 & 2 finally applied to humanoid whole-body control!
🚀 Introducing LeVERB, the first 𝗹𝗮𝘁𝗲𝗻𝘁 𝘄𝗵𝗼𝗹𝗲-𝗯𝗼𝗱𝘆 𝗵𝘂𝗺𝗮𝗻𝗼𝗶𝗱 𝗩𝗟𝗔 (upper- & lower-body), trained on sim data and zero-shot deployed. Addressing interactive tasks: navigation, sitting, locomotion with verbal instruction. 🧵 ember-lab-berkeley.github.io/LeVERB-Website/
Amazing! Excited to learn from your work!
🚀Introducing GMT — a general motion tracking framework that enables high-fidelity motion tracking on humanoid robots by training a single policy from large, unstructured human motion datasets. 🤖A step toward general humanoid controllers. Project Website:…
Looks like sci-fi 🤩 The explicit motion-inpainting module is an interesting alternative to the MaskedMimic approach, where masked input is provided during training and the RL controller implicitly inpaints the missing part of the command.
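The masked-command idea mentioned above can be sketched simply. This is my own toy illustration, not code from MaskedMimic or GMT: parts of the target-motion command are randomly zeroed out during training, and a binary mask tells the controller which parts are valid, so it learns to act (and implicitly inpaint) under incomplete goals:

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_command(command, mask_prob=0.5):
    """Randomly drop parts of the command and append the binary
    mask, so the policy sees which entries are valid (toy sketch)."""
    mask = rng.random(command.shape) > mask_prob   # True = keep
    masked = np.where(mask, command, 0.0)
    return np.concatenate([masked, mask.astype(np.float32)])

cmd = np.array([0.3, -0.1, 0.8], dtype=np.float32)
obs = mask_command(cmd)
print(obs.shape)   # (6,): 3 masked command dims + 3 mask bits
```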
Here’s our latest RL update: Natural Mogging (thread below!)
Real-world RL, where robots learn directly from physical interactions, is extremely challenging — especially for high-DoF systems like mobile manipulators. 1⃣ Long-horizon tasks and large action spaces lead to difficult policy optimization. 2⃣ Real-world exploration with…
Structured policy representation/learning with hybrid-frequency control for whole-body stabilization!
🤖Can a humanoid robot carry a full cup of beer without spilling while walking 🍺? Introducing Hold My Beer 🍺: Learning Gentle Humanoid Locomotion and End-Effector Stabilization Control. Project: lecar-lab.github.io/SoFTA/ See more details below 👇
A hybrid position-and-force control interface is a nice thing to have!
🎤 Excited to share UniFP, a method for unified force and position control for legged locomotion! 🤖 UniFP provides a unified interface for position control, force control, force tracking, and impedance control, addressing the limitations of current legged robots. The video…
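One common way to expose position, force, and impedance control through a single interface is a joint-space impedance law. This is my own sketch of that standard idea, not necessarily UniFP's exact formulation:

```python
import numpy as np

def impedance_torque(q, qd, q_des, qd_des, kp, kd, f_ff):
    """tau = Kp (q_des - q) + Kd (qd_des - qd) + f_ff.
    Large Kp/Kd -> stiff position tracking; Kp = Kd = 0 -> pure
    feedforward force control; intermediate gains -> impedance."""
    return kp * (q_des - q) + kd * (qd_des - qd) + f_ff

q  = np.array([0.1, -0.2])
qd = np.array([0.0,  0.0])

# Position mode: stiff gains, no feedforward force.
tau_pos = impedance_torque(q, qd, np.zeros(2), np.zeros(2),
                           kp=100.0, kd=5.0, f_ff=np.zeros(2))

# Force mode: zero gains, pure feedforward torque.
tau_force = impedance_torque(q, qd, np.zeros(2), np.zeros(2),
                             kp=0.0, kd=0.0, f_ff=np.array([2.0, 2.0]))
print(tau_pos, tau_force)
```

The appeal of such an interface is that one command format covers all four modes the tweet lists, just by choosing the gains and feedforward term.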
Bothered by mismatched dynamics and performance gaps in sim2real? Check out this impressive system identification method, which actively explores!
🦿How do you identify the physical parameters of legged robots while collecting informative data to reduce the sim2real gap? 🤖 Meet SPI-Active: Sampling-Based System Identification with Active Exploration for Legged Robot Sim2Real Learning. Website: lecar-lab.github.io/spi-active_/ Details 👇:
🎥 Video diffusion models achieve stunning visual fidelity, powered by pretraining on massive internet-scale video datasets. But they’re not interactive—they don’t respond to actions or support causal rollout. 🤔 Can we harness their generative power to build autoregressive,…
Amazing cooperative agents!
1️⃣/7️⃣🤖⚽ Toward Real-World Cooperative and Competitive Soccer with Quadrupedal Robot Teams Most learning-based robot-soccer work stays in simulation or tests 1v1. We field real-world games with both cooperation and competition—plus robot-human matches!
Felt very inspired reading it last year. A very clean codebase to play around with 😃. Congrats!
🎉 Diffusion-style annealing + sampling-based MPC can surpass RL, and seamlessly adapt to task parameters, all 𝘁𝗿𝗮𝗶𝗻𝗶𝗻𝗴-𝗳𝗿𝗲𝗲! We open sourced DIAL-MPC, the first training-free method for whole-body torque control using full-order dynamics 🧵 lecar-lab.github.io/dial-mpc/
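The combination of diffusion-style annealing with a sampling-based MPC update can be sketched on a toy objective. This is my own MPPI-style illustration under a simple quadratic cost, not the exact DIAL-MPC algorithm: sample action sequences around the current mean, reweight by exponentiated cost, and shrink the sampling noise each iteration (the annealing):

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(u):
    # Toy quadratic cost standing in for a rollout through full-order dynamics.
    return np.sum((u - 1.0) ** 2)

def annealed_sampling_mpc(horizon=5, n_samples=64, n_anneal=10,
                          sigma0=2.0, temp=0.1):
    """Training-free annealed sampling: MPPI-style reweighting with
    a shrinking noise schedule (toy sketch)."""
    mean = np.zeros(horizon)
    for i in range(n_anneal):
        sigma = sigma0 * (0.5 ** i)   # annealing schedule
        samples = mean + sigma * rng.standard_normal((n_samples, horizon))
        costs = np.array([cost(u) for u in samples])
        w = np.exp(-(costs - costs.min()) / temp)
        mean = (w[:, None] * samples).sum(0) / w.sum()
    return mean

u = annealed_sampling_mpc()
print(np.round(u, 2))   # entries converge toward the optimum at 1.0
```

No training is involved: the whole optimization happens online, per control step, which is what makes the approach adapt to new task parameters for free.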
Vision-language-action models (VLAs) need to REASON, but more importantly, they need to know WHEN to reason (or not)! Thrilled to introduce OneTwoVLA, a single, unified model that combines acting (System One) ⚡ and reasoning (System Two) 🤔, and can adaptively switch between…