Cheng Chi
@chichengcc
🤖PhD student @Stanford and @Columbia
Can we collect robot data without any robots? Introducing Universal Manipulation Interface (UMI), an open-source $400 system from @Stanford designed to democratize robot data collection. 0 teleop -> autonomously wash dishes (precise), toss (dynamic), and fold clothes (bimanual)
I was really impressed by the UMI gripper (@chichengcc et al.), but a key limitation is that **force-related data wasn’t captured**: humans feel haptic feedback through the mechanical springs, but the robot couldn’t leverage that info, limiting the data’s value for fine-grained…
Tactile interaction in the wild can unlock fine-grained manipulation! 🌿🤖✋ We built a portable handheld tactile gripper that enables large-scale visuo-tactile data collection in real-world settings. By pretraining on this data, we bridge vision and touch—allowing robots to:…
Normally, changing robot policy behavior means changing its weights or relying on a goal-conditioned policy. What if there was another way? Check out DynaGuide, a novel policy steering approach that works on any pretrained diffusion policy. dynaguide.github.io 🧵
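The general recipe behind steering a frozen diffusion policy — add a guidance gradient to each denoising step so behavior changes without touching the weights — can be sketched in a few lines. This is a toy illustration only: DynaGuide's actual guidance comes from a learned dynamics model, while the "denoiser" and quadratic goal term below are hand-written stand-ins.

```python
import numpy as np

def toy_denoiser(x, t):
    # Stand-in for a pretrained diffusion policy's noise predictor.
    # "Predicted noise = x" encodes a single action mode at the origin;
    # a real policy would be a trained network.
    return x

def guidance_grad(x, target):
    # Gradient of -0.5 * ||x - target||^2: nudges samples toward `target`.
    # DynaGuide scores samples with a learned dynamics model instead;
    # this quadratic goal is purely illustrative.
    return target - x

def steered_sample(x0, target, steps=50, scale=1.0):
    """Denoise x0 while adding a guidance gradient at every step."""
    x = x0.astype(float).copy()
    for _ in range(steps):
        eps = toy_denoiser(x, None)                         # frozen policy
        x = x - (1.0 / steps) * eps                         # crude denoising update
        x = x + (scale / steps) * guidance_grad(x, target)  # steering term
    return x
```

Setting `scale=0` recovers the policy's unguided sample; increasing it pulls the sampled action toward the goal, all at inference time.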
Say ahoy to 𝚂𝙰𝙸𝙻𝙾𝚁⛵: a new paradigm of *learning to search* from demonstrations, enabling test-time reasoning about how to recover from mistakes w/o any additional human feedback! 𝚂𝙰𝙸𝙻𝙾𝚁 ⛵ out-performs Diffusion Policies trained via behavioral cloning on 5-10x data!
Diffusion/flow policies 🤖 sample a “trajectory of trajectories” — a diffusion/flow trajectory of action trajectories. Seems wasteful? Presenting Streaming Flow Policy that simplifies and speeds up diffusion/flow policies by treating action trajectories as flow trajectories! 🌐…
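The core trick — the action trajectory *is* the flow trajectory, so actions stream out as the ODE is integrated rather than after a full denoising pass — can be sketched with numpy. The velocity field here is a hand-written tracking law around a hypothetical demonstration; the actual method learns this field from data.

```python
import numpy as np

def demo(t):
    # A hypothetical reference demonstration trajectory.
    return np.array([t, t ** 2])

def demo_dot(t):
    # Its time derivative (feedforward term).
    return np.array([1.0, 2.0 * t])

def velocity(a, t, gain=5.0):
    # Stand-in for the learned velocity field: feedforward along the
    # demo plus a stabilizing pull toward it. The ODE state is the
    # action itself, so each integration step emits the next action.
    return demo_dot(t) + gain * (demo(t) - a)

def stream_actions(dt=0.01):
    a = demo(0.0)
    actions = [a]          # each entry is executable as soon as it's computed
    t = 0.0
    while t < 1.0 - 1e-9:
        a = a + dt * velocity(a, t)   # one Euler step = one streamed action
        t += dt
        actions.append(a)
    return np.array(actions)
```

Because sampling and execution share one trajectory, there is no outer "diffusion of trajectories" loop: integrating to time t already yields the action for time t.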
Are Diffusion and Flow Matching the best generative modelling algorithms for behaviour cloning in robotics? ✅Multimodality ❌Fast, Single-Step Inference ❌Sample Efficient 💡 We introduce IMLE Policy, a novel behaviour cloning approach that can satisfy all the above. 🧵👇
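IMLE (implicit maximum-likelihood estimation) replaces the iterative denoising chain with a one-shot generator: draw several candidates per datapoint, and pull only the *nearest* candidate toward it — unmatched candidates stay free, which is what preserves multimodality with single-step inference. A minimal numpy version of the objective (function names and the linear generator are illustrative assumptions, not the paper's code):

```python
import numpy as np

def imle_loss(actions, samples):
    """IMLE objective: for every expert action, find the nearest
    generated sample and penalize that distance only."""
    # Pairwise distances, shape (num_actions, num_samples).
    d = np.linalg.norm(actions[:, None, :] - samples[None, :, :], axis=-1)
    return d.min(axis=1).mean()

def generate(W, z):
    # Single-step inference: noise maps straight to an action,
    # no denoising loop (toy linear generator for illustration).
    return z @ W
```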
Meet the newest member of the UMI family: DexUMI! Designed for intuitive data collection — and it fixes a few things the original UMI couldn’t handle: 🖐️ Supports multi-finger dexterous hands — tested on both under- and fully-actuated types 🧂 Records tactile info — it can tell…
Can we collect robot dexterous hand data directly with a human hand? Introducing DexUMI: a 0-teleoperation, 0-retargeting dexterous hand data collection system → autonomously complete precise, long-horizon, and contact-rich tasks. Project Page: dex-umi.github.io
Training robots for the open world needs diverse data But collecting robot demos in the wild is hard! Presenting DexWild 🙌🏕️ Human data collection system that works in diverse environments, without robots 💪🦾 Human + Robot Cotraining pipeline that unlocks generalization 🧵👇
🦾How can humanoids unlock real strength for heavy-duty loco-manipulation? Meet FALCON🦅: Learning Force-Adaptive Humanoid Loco-Manipulation. 🌐: lecar-lab.github.io/falcon-humanoi… See the details below👇:
🤖Introducing TWIST: Teleoperated Whole-Body Imitation System. We develop a humanoid teleoperation system to enable coordinated, versatile, whole-body movements, using a single neural network. This is our first step toward general-purpose robots. 🌐humanoid-teleop.github.io
Low-cost teleop systems have democratized robot data collection, but they lack any force feedback, making it challenging to teleoperate contact-rich tasks. Many robot arms provide force information — a critical yet underutilized modality in robot learning. We introduce: 1. 🦾A…
Two months ago, we introduced TidyBot++, our open-source mobile manipulator. Today, I'm excited to share our significantly expanded docs: • Assembly guide: tidybot2.github.io/docs • Usage guide: tidybot2.github.io/docs/usage Thanks to early adopters, TidyBot++ can now be fully…
When will robots help us with our household chores? TidyBot++ brings us closer to that future. Our new open-source mobile manipulator makes it more accessible and practical to do robot learning research outside the lab, in real homes!
Happy Valentine's Day! 🌹 Enjoy a special Valentine's Day themed policy (sound on!) from the AquaBot team 👬❤️🦾 Visit aquabot.cs.columbia.edu to learn more about our recent ICRA publication!
Announcing Diffusion Forcing Transformer (DFoT), our new video diffusion algorithm that generates ultra-long videos of 800+ frames. DFoT enables History Guidance, a simple add-on to any existing video diffusion models for a quality boost. Website: boyuan.space/history-guidan… (1/7)
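History Guidance, as described, works like classifier-free guidance with past frames as the condition: combine the model's history-conditioned prediction with its history-free one and extrapolate. A one-function numpy sketch of that combination (the weight convention and names are my assumption, not DFoT's exact formulation):

```python
import numpy as np

def history_guided(eps_hist, eps_free, w):
    # CFG-style extrapolation over history conditioning:
    # w = 0 ignores history, w = 1 is plain conditioning,
    # w > 1 amplifies the history's influence on the next frames.
    return eps_free + w * (eps_hist - eps_free)
```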
We introduce Dexterity Gen (DexGen), a foundation controller that enables unprecedented dexterous manipulation capabilities. For the first time, it allows human teleoperation of tasks such as using a pen, screwdriver, and syringe. Developed by @berkeley_AI and @MetaAI. A Thread.
Excited to introduce flow Q-learning (FQL)! Flow Q-learning is a *simple* and scalable data-driven RL method that trains an expressive policy with flow matching. Paper: arxiv.org/abs/2502.02538 Project page: seohong.me/projects/fql/ Thread ↓
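The "expressive policy with flow matching" half of FQL can be illustrated with the standard conditional flow-matching regression: interpolate noise toward a dataset action and regress the straight-line velocity. The Q-learning half is omitted here, and all names are assumptions for illustration:

```python
import numpy as np

def flow_matching_loss(velocity_fn, actions, rng):
    """Conditional flow matching: x_t = (1-t) z + t a moves noise z
    toward action a; the regression target is the velocity a - z."""
    z = rng.standard_normal(actions.shape)
    t = rng.uniform(0.0, 0.9, size=(len(actions), 1))  # keep t away from 1
    x_t = (1.0 - t) * z + t * actions
    target = actions - z
    return float(np.mean((velocity_fn(x_t, t) - target) ** 2))
```

Sanity check: for a dataset concentrated on a single action a0, the closed-form field v(x, t) = (a0 - x) / (1 - t) drives the loss to zero, since a0 - x_t = (1 - t)(a0 - z).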
We are releasing the π₀ model today -- code + weights + fine-tuning instructions, including our recent π₀-FAST model! 🎉 We hope the model will be useful to others! I am really excited about this release because it also marks a shift in how we can *evaluate* policies! Mini 🧵/
Many of you asked for code & weights for π₀, we are happy to announce that we are releasing π₀ and pre-trained checkpoints in our new openpi repository! We tested the model on a few public robots, and we include code for you to fine-tune it yourself.
🚀 Meet ToddlerBot 🤖– the adorable, low-cost, open-source humanoid anyone can build, use, and repair! We’re making everything open-source & hope to see more Toddys out there!
Time to democratize humanoid robots! Introducing ToddlerBot, a low-cost ($6K), open-source humanoid for robotics and AI research. Watch two ToddlerBots seamlessly chain their loco-manipulation skills to collaborate in tidying up after a toy session. toddlerbot.github.io
Behavior Cloning (BC) has been the new hot thing in #Robotics for the past year. I finally sunk my teeth into it and tried to decipher why it has worked so well for problems where RL struggles imgeorgiev.com/2025-01-31-why… Let me know if you have other interesting perspectives!
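For anyone who hasn't seen it spelled out: BC is just supervised regression from states to expert actions, which is a big part of why it inherits supervised learning's stability where RL struggles. A minimal least-squares sketch with a hypothetical linear expert (real BC policies are neural networks, of course):

```python
import numpy as np

def behavior_clone(states, actions):
    # Fit a linear policy a = s @ W by least squares on expert (s, a) pairs.
    W, *_ = np.linalg.lstsq(states, actions, rcond=None)
    return lambda s: s @ W

# Hypothetical linear expert controller we try to recover from demos.
rng = np.random.default_rng(0)
true_W = np.array([[1.0, -2.0], [0.5, 0.0], [0.0, 3.0]])
S = rng.standard_normal((200, 3))   # 200 demo states, 3-dim
A = S @ true_W                      # expert actions, 2-dim
policy = behavior_clone(S, A)
```

With noiseless demos and full-rank states, least squares recovers the expert exactly; the hard parts of real BC (multimodality, compounding errors off-distribution) come from everything this toy leaves out.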
The mirror 🤯🤯🤯🤯
The UMI gripper is now officially in MuJoCo Menagerie, thanks to the amazing contribution of @omarrayyann!