Haochen Shi
@HaochenShi74
2nd year PhD student at Stanford
Time to democratize humanoid robots! Introducing ToddlerBot, a low-cost ($6K), open-source humanoid for robotics and AI research. Watch two ToddlerBots seamlessly chain their loco-manipulation skills to collaborate in tidying up after a toy session. toddlerbot.github.io
🚨 Just a heads-up — it looks like @SongShuran’s account may have been hacked. Please avoid clicking on any suspicious links from the account until it’s resolved.
Witnessed @Haoyu_Xiong_ build the entire system from scratch. Amazing to see the outcome! Robots operating in cluttered environments with many occlusions remain an open problem. Your robot really needs a neck for that, and it can have as many as 6 DoFs 🐍
Your bimanual manipulators might need a Robot Neck 🤖🦒 Introducing Vision in Action: Learning Active Perception from Human Demonstrations ViA learns task-specific, active perceptual strategies—such as searching, tracking, and focusing—directly from human demos, enabling robust…
Today we're excited to share a glimpse of what we're building at Generalist. As a first step towards our mission of making general-purpose robots a reality, we're pushing the frontiers of what end-to-end AI models can achieve in the real world. Here's a preview of our early…
Can we collect robot dexterous hand data directly with a human hand? Introducing DexUMI: a 0-teleoperation, 0-retargeting dexterous hand data collection system → autonomously completes precise, long-horizon, and contact-rich tasks Project Page: dex-umi.github.io
Imo, compliance has to be there when we eventually deploy robots around us, so they stay aware of external forces. Impedance reference tracking looks elegant and effective! Big congrats on the progress!
🤖💥 Want your robot to be compliant or forceful at your command? FACET lets your robot follow gentle nudges or win a tug of war — with a single force-adaptive policy conditioned on desired stiffness as command input! 📽️👇 Website: facet.pages.dev.
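The stiffness-conditioned idea above can be sketched with a minimal 1-DoF impedance controller (an illustrative sketch, not FACET's actual policy; the function, gains, and numbers here are all made up). A low commanded stiffness K yields a compliant robot that gives way to a push; a high K holds position against it.

```python
# Minimal impedance-control sketch: torque tracks a reference with
# commanded stiffness K and damping D, so K trades compliance vs. force.

def simulate(K, D=2.0, mass=1.0, x_des=0.0, push=5.0, steps=10000, dt=0.001):
    """Integrate a 1-DoF mass under impedance control plus a constant push."""
    x, xd = 0.0, 0.0
    for _ in range(steps):
        tau = K * (x_des - x) + D * (0.0 - xd)  # impedance law
        xdd = (tau + push) / mass               # external force acts too
        xd += xdd * dt                          # semi-implicit Euler step
        x += xd * dt
    return x  # steady-state deflection ≈ push / K

soft = simulate(K=5.0)     # compliant: deflects ≈ 1.0 under the push
stiff = simulate(K=500.0)  # forceful: deflects ≈ 0.01, holds its ground
```

A policy like the one described would condition on K as a command input rather than hard-coding this law, but the compliance/force trade-off it exposes is the same.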
🧠 Can a single robot policy control many, even unseen, robot bodies? We scaled training to 1000+ embodiments and found: More training bodies → better generalization to unseen ones. We call it: Embodiment Scaling Laws. A new axis for scaling. 🔗 embodiment-scaling-laws.github.io 🧵👇
How to scale visual affordance learning that is fine-grained, task-conditioned, works in-the-wild, in dynamic envs? Introducing Unsupervised Affordance Distillation (UAD): distills affordances from off-the-shelf foundation models, *all without manual labels*. Very excited this…
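The label-free distillation idea reads roughly like this toy sketch (assumed mechanics, not UAD's actual pipeline): cluster dense features from a frozen vision model into regions, then score each region against a task embedding to get a task-conditioned affordance map, with no manual labels anywhere. All tensors below are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for foundation-model patch features on an 8x8 grid: the left
# half "looks like" a handle, the right half like background.
handle, background = np.array([1.0, 0.0]), np.array([0.0, 1.0])
feats = np.zeros((8, 8, 2))
feats[:, :4] = handle + 0.05 * rng.standard_normal((8, 4, 2))
feats[:, 4:] = background + 0.05 * rng.standard_normal((8, 4, 2))

# Tiny k-means (k=2) discovers regions without any labels.
flat = feats.reshape(-1, 2)
centers = flat[[0, -1]].copy()
for _ in range(10):
    dists = ((flat[:, None] - centers[None]) ** 2).sum(-1)
    assign = dists.argmin(1)
    centers = np.array([flat[assign == k].mean(0) for k in range(2)])

# Task conditioning: cosine-score clusters against a task embedding
# (here, a made-up "grasp the handle" vector equal to the handle feature).
task = handle
scores = centers @ task / (np.linalg.norm(centers, axis=1) * np.linalg.norm(task))
affordance = scores[assign].reshape(8, 8)  # per-pixel task relevance
```

The handle side of `affordance` ends up scoring higher than the background side, which is the kind of fine-grained, task-conditioned map the tweet describes, obtained without annotation.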
The whole body teleoperation reminds me of Gundam pilots! Congrats Yanjie 🥳
🤖Introducing TWIST: Teleoperated Whole-Body Imitation System. We develop a humanoid teleoperation system to enable coordinated, versatile, whole-body movements, using a single neural network. This is our first step toward general-purpose robots. 🌐humanoid-teleop.github.io
Consider Toddlerbot (0.56m, 3.4kg) for vibe coding: toddlerbot.github.io Toddy could never hurt you.
How long until someone vibe codes a robot that accidentally kills them?
Another win for open-sourcing! 🚀
Humanoid robots should not be black boxes 🔒 or budget-busters 💸! Meet Berkeley Humanoid Lite! ▹ 100% open source & under $5k ▹ Prints on entry-level 3D printers—break it? fix it! ▹ Modular cycloidal-gear actuators—hack & customize towards your own need ▹ Off-the-shelf…
Excited to announce the 1st Workshop on Robot Hardware-Aware Intelligence @ #RSS2025 in LA! We’re bringing together interdisciplinary researchers exploring how to unify hardware design and intelligent algorithms in robotics! Full info: rss-hardware-intelligence.github.io @RoboticsSciSys
📢 Our lab has been exploring 3D world models for years — and we’re thrilled to share **PhysTwin**: a milestone that reconstructs object appearance, geometry, and dynamics from just a few seconds of interaction! Led by the amazing @jiang_hanxiao 👉 jianghanxiao.github.io/phystwin-web/…
🚀 How can we create interactive Physical Digital Twins from videos? Thrilled to share our latest work: PhysTwin! 🌟 Using inverse physics optimization, we generate photo-realistic, physically accurate, and real-time interactive virtual replicas. 🔥 🔗jianghanxiao.github.io/phystwin-web/
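The inverse-physics-optimization idea can be boiled down to a minimal sketch (illustrative only, not PhysTwin's actual pipeline): pick physical parameters so that a simulated trajectory matches an observed one. Here a 1-D search over a single spring stiffness stands in for the gradient-based optimization real systems run over many parameters.

```python
# Recover an unknown spring stiffness k by matching simulation to observation.

def rollout(k, x0=1.0, steps=200, dt=0.01, damping=0.5):
    """Simulate a 1-D spring-damper and return the position trajectory."""
    x, v, traj = x0, 0.0, []
    for _ in range(steps):
        a = -k * x - damping * v
        v += a * dt
        x += v * dt
        traj.append(x)
    return traj

# Pretend this trajectory was tracked from a few seconds of video.
observed = rollout(k=4.0)

def loss(k):
    """Squared error between the simulated and observed trajectories."""
    return sum((a - b) ** 2 for a, b in zip(rollout(k), observed))

# Simple 1-D parameter search; real systems use gradients through the sim.
k_hat = min(range(1, 1001), key=lambda i: loss(i / 100)) / 100
print(k_hat)  # recovers k = 4.0
```

Once the parameters fit the observation, the same simulator doubles as the interactive, physically accurate replica the tweet describes.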
Now all the ROBOTIS & DYNAMIXEL parts for ToddlerBot are available as a bundle at: robotis.us/toddlerbot-bun…
Wow, these household tasks are remarkably realistic!
🤖 Ever wondered what robots need to truly help humans around the house? 🏡 Introducing 𝗕𝗘𝗛𝗔𝗩𝗜𝗢𝗥 𝗥𝗼𝗯𝗼𝘁 𝗦𝘂𝗶𝘁𝗲 (𝗕𝗥𝗦)—a comprehensive framework for mastering mobile whole-body manipulation across diverse household tasks! 🧹🫧 From taking out the trash to…
A great example of how a method's elegance leads to superior performance!
Video generation is powerful but too slow for real-world robotic tasks. How can we enable both video and action generation while ensuring real-time policy inference? Check out our work on the Unified Video Action Model (UVA) to find out! unified-video-action-model.github.io (1/7)
Want a haptic force feedback glove? Meet DOGlove! 🖐✨ A precise, low-cost (~$600), open-source glove for dexterous manipulation. Teleoperate a dexterous hand to squeeze condensed milk on bread 🥪 or collect high-quality data for imitation learning. Check it out! 🎥👇…