Yifan Hou
@YifanHou2
PostDoc at Stanford. Work on robotic manipulation.
Very impressive results! Curious how much data collection effort is needed to reach this level of accuracy and dexterity. Also, I really like the pinch finger design — it looks like the result of a huge amount of design optimization. Looking forward to a technical report.
Today we're excited to share a glimpse of what we're building at Generalist. As a first step towards our mission of making general-purpose robots a reality, we're pushing the frontiers of what end-to-end AI models can achieve in the real world. Here's a preview of our early…
A common missed opportunity in learning manipulation from humans is paying attention only to the hand motion. Vision-in-Action additionally learns from the head-and-torso movement how to position the eyes for the best view during a task, so you can solve a lot of manipulation…
Your bimanual manipulators might need a Robot Neck 🤖🦒 Introducing Vision in Action: Learning Active Perception from Human Demonstrations ViA learns task-specific, active perceptual strategies—such as searching, tracking, and focusing—directly from human demos, enabling robust…
Normally, changing robot policy behavior means changing its weights or relying on a goal-conditioned policy. What if there was another way? Check out DynaGuide, a novel policy steering approach that works on any pretrained diffusion policy. dynaguide.github.io 🧵
Check out DexMachina, our solution to learning dexterous, long-horizon, bimanual tasks from a single human demonstration. project-dexmachina.github.io
How to learn dexterous manipulation for any robot hand from a single human demonstration? Check out DexMachina, our new RL algorithm that learns long-horizon, bimanual dexterous policies for a variety of dexterous hands, articulated objects, and complex motions.
Excited to introduce DexUMI, our new paradigm for intuitive, accurate, and generalizable data collection for dexterous hands. We make your own hand feel like the robot hand both kinematically and visually — critical for transferring complex skills to robots. Details below!
Can we collect robot dexterous hand data directly with a human hand? Introducing DexUMI: a zero-teleoperation, zero-retargeting dexterous hand data collection system → autonomously completes precise, long-horizon, and contact-rich tasks. Project Page: dex-umi.github.io
Adaptive Compliance Policy just won the best paper award at the ICRA Contact-Rich Manipulation workshop! Huge thanks to the team and everyone who supported us at the workshop. adaptive-compliance.github.io contact-rich.github.io
mtmason.com/the-inner-robo… Many robotics researchers today hold the view that manipulation is solved and the only thing left to do is scale up. This is why I really like this article, which points out a few potentially very large gaps that most people have ignored.
Modern vision models are excellent at extracting useful info from cameras, and more views generally lead to more capability. So, what happens when we take the idea to the extreme? --- Impressive full-body dexterity, even on a cheap robot.
Can robots leverage their entire body to sense and interact with their environment, rather than just relying on a centralized camera and end-effector? Introducing RoboPanoptes, a robot system that achieves whole-body dexterity through whole-body vision. robopanoptes.github.io
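The simplest way to let a policy consume many viewpoints at once is to encode each camera independently and concatenate the per-view features. This is only an illustrative sketch of multi-view fusion — the encoder and shapes below are made up, and RoboPanoptes' actual architecture may fuse views differently.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(img):
    """Stand-in per-view encoder: pools pixels into a tiny 4-D feature.
    A real system would use a learned vision backbone here."""
    return img.reshape(-1, 4).mean(axis=0)

def multi_view_features(views):
    """Encode each camera view independently, then concatenate into one
    flat observation vector for the policy."""
    return np.concatenate([encode(v) for v in views])

views = [rng.random((8, 8)) for _ in range(4)]  # e.g. 4 body-mounted cameras
feat = multi_view_features(views)
print(feat.shape)  # feature length grows linearly with the number of views
```

Note the trade-off this makes explicit: every added camera grows the observation, so whole-body vision leans on the downstream model to sort out which views matter for the current task.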
Mitigating racial bias from LLMs is a lot easier than removing it from humans! Can’t believe this happened at the best AI conference @NeurIPSConf We have ethical reviews for authors, but missed it for invited speakers? 😡
I launched a blog! Here's the first post. mtmason.com/the-funnest-id…
Our code/data/checkpoints are available here: github.com/yifan-hou/adap… You can find a complete guide from setting up the compliance controller to data collection/training/evaluation on your hardware.
Can robots learn to manipulate with both care and precision? Introducing Adaptive Compliance Policy, a framework to dynamically adjust robot compliance both spatially and temporally for given manipulation tasks from human demonstrations. Full detail at adaptive-compliance.github.io
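Under the hood, "adjusting compliance" typically means modulating the gains of an impedance-style controller. The sketch below shows a generic Cartesian impedance law with a swappable stiffness — a hypothetical illustration, not ACP's actual formulation, where the policy would predict the stiffness per step.

```python
import numpy as np

def impedance_force(x, v, x_des, v_des, stiffness, damping):
    """Generic Cartesian impedance law: the commanded wrench tracks the
    desired pose with a (possibly time-varying) stiffness and damping.
    An adaptive-compliance policy would predict `stiffness` each step,
    lowering it along contact directions that must stay gentle."""
    return stiffness * (x_des - x) + damping * (v_des - v)

# Same 1 cm tracking error on every axis, two stiffness settings:
# the compliant setting pushes 10x less hard than the stiff one.
x, v = np.zeros(3), np.zeros(3)
x_des, v_des = np.full(3, 0.01), np.zeros(3)
stiff = impedance_force(x, v, x_des, v_des, stiffness=1000.0, damping=50.0)
soft = impedance_force(x, v, x_des, v_des, stiffness=100.0, damping=20.0)
print(stiff, soft)
```

The key point the framework exploits is that stiffness is a control input like any other, so it can be predicted from demonstrations both spatially (per axis) and temporally (per step).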