Xiaomeng Xu
@XiaomengXu11
PhD student in robotics @Stanford | Interning @ToyotaResearch | Prev @Tsinghua_Uni
Can robots leverage their entire body to sense and interact with their environment, rather than just relying on a centralized camera and end-effector? Introducing RoboPanoptes, a robot system that achieves whole-body dexterity through whole-body vision. robopanoptes.github.io
TRI's latest Large Behavior Model (LBM) paper landed on arXiv last night! Check out our project website: toyotaresearchinstitute.github.io/lbm1/ One of our main goals for this paper was to put out a very careful and thorough study on the topic to help people understand the state of the…
💡Can an arm-mounted quadrupedal robot perform tasks with both its arms and legs? Introducing ReLIC: Reinforcement Learning for Interlimb Coordination for versatile loco-manipulation in unstructured environments. [1/6] relic-locoman.rai-inst.com
Missed our RSS workshop? Our recordings are online: youtube.com/@hardware-awar…. All talks were awesome, and we had a very fun panel discussion session 🧐 Huge thanks to our organizers for all the hard work @haqhuy @XiaomengXu11 @s_zhanyi @ma_yuxiang @xiaolonw Mike Tolley
Enjoying the first day of #RSS2025? Consider coming to our workshop 🤖Robot Hardware-Aware Intelligence on Wed! @RoboticsSciSys Thank you to everyone who contributed 🙌 We'll have 16 lightning talks and 11 live demos! More info: rss-hardware-intelligence.github.io
Robot learning has largely focused on standard platforms—but can it embrace robots of all shapes and sizes? In @XiaomengXu11's latest blog post, we show how data-driven methods bring unconventional robots to life, enabling capabilities that traditional designs and control can't…
Happening right now at EEB 248!
In Los Angeles for RSS 2025? 🤖 🌴Be sure to check out the great work by students from the Stanford AI Lab! ai.stanford.edu/blog/rss-2025/
I'll present RoboPanoptes at #RSS2025 tomorrow 6/22 🐍 Spotlight talk: 9:00-10:30am (Bovard Auditorium) Poster: 12:30-2:00pm, poster #31 (Associates Park)
Perception is inherently active. 🧠👀 With a flexible neck, our robot learns how humans adjust their viewpoint to search, track, and focus—unlocking more capable manipulation. Check out Vision in Action 👇
Your bimanual manipulators might need a Robot Neck 🤖🦒 Introducing Vision in Action: Learning Active Perception from Human Demonstrations ViA learns task-specific, active perceptual strategies—such as searching, tracking, and focusing—directly from human demos, enabling robust…
Steering a diffusion policy at inference time with dynamics guidance!
Normally, changing robot policy behavior means changing its weights or relying on a goal-conditioned policy. What if there was another way? Check out DynaGuide, a novel policy steering approach that works on any pretrained diffusion policy. dynaguide.github.io 🧵
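The idea in one line: keep the pretrained diffusion policy frozen and bias each denoising step with the gradient of an outcome score. Below is a minimal conceptual sketch of that kind of inference-time steering, similar in spirit to classifier guidance; it is not DynaGuide's actual implementation, and denoiser, guidance_fn, guidance_scale, and the simplified Euler-style update are all illustrative assumptions.

import torch

def steered_sampling(denoiser, guidance_fn, obs, action_shape,
                     num_steps=50, guidance_scale=1.0):
    # Sample an action trajectory from a frozen diffusion policy while
    # nudging each denoising step toward guidance_fn's optimum.
    x = torch.randn(action_shape)                      # start from pure noise
    for t in reversed(range(num_steps)):
        with torch.no_grad():
            eps = denoiser(x, t, obs)                  # frozen pretrained policy's noise prediction
        x_g = x.detach().requires_grad_(True)
        score = guidance_fn(x_g, obs)                  # e.g. how close predicted outcomes are to a desired one
        grad = torch.autograd.grad(score.sum(), x_g)[0]
        eps = eps - guidance_scale * grad              # bias the denoising direction; policy weights untouched
        x = x - eps / num_steps                        # simplified Euler-style update, not a full DDPM/DDIM step
    return x

With a frozen diffusion policy as denoiser and, say, a learned dynamics model scoring predicted outcomes as guidance_fn, the same pretrained weights can be steered toward different behaviors at test time, without fine-tuning or a goal-conditioned policy.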
Our lab at Stanford usually does research in AI & robotics, but very occasionally we indulge in being functional alcoholics -- recently we hosted a lab cocktail night and created drinks with research-related puns like 'reviewer #2' and 'make 6 figures', sharing the full recipes…
How can we move beyond static-arm lab setups and learn robot policies in our messy homes? We introduce HoMeR, an imitation learning agent for in-the-wild mobile manipulation. 🧵1/8
It's a 3D printer and a 3D assembly station! 🖨️ The Functgraph developed at Meiji University starts as a regular 3D printer but upgrades itself into a mini factory. It can print parts for its own tools, pick them up, clean them, and put them together, all by itself. Think of it…
How to learn dexterous manipulation for any robot hand from a single human demonstration? Check out DexMachina, our new RL algorithm that learns long-horizon, bimanual dexterous policies for a variety of dexterous hands, articulated objects, and complex motions.
DexUMI exoskeleton makes YOUR hand move like the robot hand, so demonstrations you collect transfer directly to the robot. Zero retargeting! 🔥
Can we collect robot dexterous hand data directly with a human hand? Introducing DexUMI: a dexterous hand data collection system with zero teleoperation and zero re-targeting → autonomously completes precise, long-horizon, and contact-rich tasks Project Page: dex-umi.github.io
This is so cool 🤯! Imagine pairing this robot hardware platform with generative hardware design (like the one from @XiaomengXu11 @haqhuy 👉 dgdm-robot.github.io); we could really get customized hardware for any object or task almost instantly.
I'll be presenting one first-author paper at ICRA 2025. We propose a gripper that can change both its shape and firmness relatively freely. Please take a look if you're interested. Shunya Hara, Osamu Fukuda, Mitsuru Higashimori, "Juzu Type Gripper That Can Change Both Shape and Firmness".
Internet-scale datasets of videos and natural language are a rich training source! But can they be used to facilitate novel downstream robotic behaviors across embodiments and environments? Our new #ICLR2025 paper, Adapt2Act, shows how.
RoboPanoptes is accepted to #RSS2025! Everything is open-sourced: github.com/real-stanford/…
We're hosting the #RSS2025 Robot Hardware-Aware Intelligence Workshop! Join us to explore how hardware design🦾 + learning🧠 unlock new robot capabilities. Submit your latest paper and demo! rss-hardware-intelligence.github.io
Excited to announce the 1st Workshop on Robot Hardware-Aware Intelligence @ #RSS2025 in LA! We’re bringing together interdisciplinary researchers exploring how to unify hardware design and intelligent algorithms in robotics! Full info: rss-hardware-intelligence.github.io @RoboticsSciSys
Want a haptic force feedback glove? Meet DOGlove! 🖐✨ A precise, low-cost (~$600), open-source glove for dexterous manipulation. Teleoperate a dexterous hand to squeeze condensed milk on bread 🥪 or collect high-quality data for imitation learning. Check it out! 🎥👇…