C Zhang
@ChongZitaZhang
Robotics @ETH_AI_Center @leggedrobotics &SRI. Prev @CMU_Robotics @Tsinghua_Uni ⚠️ Shitpost here. 📑 Read papers @RoboReading. ❌ Against killer robots.
#IROS2025 We are happy to announce that ✅The arrangement details for 81 workshops and 3 tutorials are now available! 🎉 (Link: iros25.org/WorkshopsTutor…) ✅The guidelines for official shipping information for IROS 2025 are also available! 🎉 (Link: iros25.org/ShippingGuidel…)
Our setup: 1. A “teacher” model is finetuned to have a trait (e.g. liking owls) and generates an unrelated dataset (e.g. numbers, code, math) 2. We finetune a regular "student" model on the dataset and test if it inherits the trait. This works for various animals.
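The two-stage setup above can be sketched in a few lines. This is purely illustrative scaffolding, not the authors' actual training code: `finetune` is a hypothetical stand-in for a real finetuning call, and the strings stand in for the trait prompts and the generated unrelated dataset.

```python
# Hypothetical sketch of the teacher/student pipeline described in the tweet.
# `finetune` is an illustrative stand-in, not a real training API.

def finetune(model, data):
    """Stand-in for a finetuning run; records what the model was trained on."""
    return {**model, "trained_on": data}

base = {"name": "base-model"}

# Step 1: finetune a teacher to have a trait, then have it generate
# data in an unrelated domain (numbers, code, math).
teacher = finetune(base, "owl-preference prompts")
dataset = "number sequences generated by the teacher"

# Step 2: finetune a fresh student on that unrelated dataset,
# then probe whether the trait transferred anyway.
student = finetune(base, dataset)
```

The surprising claim in the tweet is that the trait shows up in `student` even though `dataset` contains nothing about owls; the sketch only shows the data flow, not the effect itself.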
Holy?
Chinese company Robot Era has unveiled their next-gen robot L7, a 5′7″ tall humanoid. The company also showcased the ERA-42 Vision-Language-Action model running on the L7 robot to autonomously execute dexterous tasks.
I feel human hands are also low-DoF. With 5 soft fingers there are many more possibilities (if we can correctly model and control them)
Why infinite DoF? In contact-rich manipulation, what matters is where and how much force is applied. Low-DoF hands do not have the capacity to match the contact complexity of the human hand — making it hard to learn from human demos. Build that capacity first.
Kinda surreal
Pretty crazy that this can be done: image+text prompt -> RGBD videos with predicted actions. Great chat with Haoyu!
Unitree hit a $2.95 billion valuation in their series C and is now preparing for IPO. Amazing company which has accomplished so much, very exciting stuff.
It's too late to apologize
Learning robust humanoid apologizing policy
🎶Can a robot learn to play music? YES! — by teaching itself, one beat at a time 🎼 🥁Introducing Robot Drummer: Learning Rhythmic Skills for Humanoid Drumming 🤖 🔍 For details, check out: robotdrummer.github.io
Whatever happens in the future of AI, I think it is unlikely that the skill of breaking down big problems into smaller, solvable pieces will ever become obsolete
We’ll be presenting this fascinating work on Thursday afternoon @icmlconf. I’ve been optimistic about the efficacy of RL in the real world for a long time. Scaling RL to develop extensively robust driving behaviors only serves to deepen my excitement. #icml25
We've built a simulated driving agent that we trained on 1.6 billion km of driving with no human data. It is SOTA on every planning benchmark we tried. In self-play, it goes 20 years between collisions.
And it directly worked on the real robot. The sim2real pipeline was just magically good at that time. Unfortunately it took us too long to turn it into a publication.
What we could do with RL 2 years ago.
Testing reinforcement-learned whole body control - ongoing work by my team at Agility. Humanoid robots need to be able to operate in many different environments, on different terrains, and robust to all kinds of disturbances, while also performing manipulation tasks.
How would you fare if somebody pulled the rug out from beneath you?
The deep insights from a guy working on the frontiers of LfD
🧠With the shift in humanoid control from pure RL to learning from demonstrations, we take a step back to unpack the landscape. 🔗breadli428.github.io/post/lfd/ 🚀Excited to share our blog post on Feature-based vs. GAN-based Learning from Demonstrations—when to use which, and why it…
The guy testing with me is @JayHe748646 He just made a new account. He is a very cracked robotics engineer, and he has no publications except some in Science Robotics or IJRR. Follow him.
A student just trained this within a day, no tedious tuning, no sim2real tricks, not even sys-id. Worked on the first trial on the real robot. This explains the many recent impressive demos on the G1 robot -- it's just the hardware. Still some sim2real gaps on the ankle and waist DoFs tho.