Kenny Shaw
@kenny__shaw
3rd-year PhD student (Fall 2025) in Robotics at CMU, advised by Prof. Deepak Pathak. Working on low-cost robot hands such as the LEAP Hand. NSF GRF.
Teaching bimanual robot hands to perform very complex tasks has been notoriously challenging. In our work, Bidex: Bimanual Dexterity for Complex Tasks, we’ve developed a low-cost system that completes a wide range of highly dexterous tasks in real time. bidex-teleop.github.io
I’m thrilled to announce that we just released GraspGen, a multi-year project we have been cooking at @NVIDIARobotics 🚀 GraspGen: A Diffusion-Based Framework for 6-DOF Grasping Grasping is a foundational challenge in robotics 🤖 — whether for industrial picking or…
Scaling dexterous robot learning is going to require a lot of data. DexWild is a way of collecting a lot of useful real-world data in diverse settings for training diverse robot skills. Cool work by @_tonytao_ and @mohansrirama
Full episode dropping soon! Geeking out with @_tonytao_ @mohansrirama on DexWild - Dexterous Human Interactions for In-the-Wild Robot Policies dexwild.github.io Co-hosted by @chris_j_paxton @micoolcho
LEAP Hand now supports Isaac Lab! (in addition to Gym, MuJoCo, and PyBullet) This 1-axis reorientation uses only the proprioception of the LEAP Hand motors to sense the cube. We open-source both Python and ROS 2 deployment code! Led by @srianumakonda github.com/leap-hand/LEAP…
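To make the proprioception point concrete, here is a rough Python sketch of what a proprioception-only control loop could look like; the LeapHandSketch class and its methods are hypothetical stand-ins for illustration, not the actual API from the LEAP Hand repo:

```python
# Hypothetical sketch of proprioception-only deployment for in-hand
# reorientation. Class and method names are illustrative stand-ins,
# not the actual LEAP Hand repo API.
import time
from collections import deque

import numpy as np


class LeapHandSketch:
    """Placeholder for the real motor interface."""

    def read_joint_positions(self) -> np.ndarray:
        # Would query the 16 motor encoders; zeros as a placeholder.
        return np.zeros(16)

    def set_joint_targets(self, targets: np.ndarray) -> None:
        # Would write position targets to the motors.
        pass


def run_policy(hand: LeapHandSketch, policy, hz: float = 20.0, history: int = 3) -> None:
    """Closed loop in which the policy sees only motor proprioception
    (a short history of joint positions), never a cube pose estimate."""
    obs_buf: deque = deque(maxlen=history)
    while True:
        obs_buf.append(hand.read_joint_positions())
        if len(obs_buf) == history:
            obs = np.concatenate(obs_buf)        # stacked proprioceptive history
            hand.set_joint_targets(policy(obs))  # policy outputs 16-D targets
        time.sleep(1.0 / hz)
```

The point of the sketch is simply that no cube state ever appears in the observation; the hand senses the object through its own motor readings.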
📢 Call for Papers: 4th Workshop on Dexterous Manipulation at CoRL 2025! Submit your dexterous work and come participate in our workshop! ✋🤖 📅Deadline: Aug 20 dex-manipulation.github.io/corl2025/ A cash prize sponsored by Dexmate, or a LEAP Hand, is up for grabs! 😃
🚨 The era of infinite internet data is ending. So we ask: 👉 What’s the right generative modelling objective when data—not compute—is the bottleneck? TL;DR: ▶️Compute-constrained? Train Autoregressive models ▶️Data-constrained? Train Diffusion models Get ready for 🤿 1/n
📹Recording now available! If you missed our workshop at RSS, you can now watch the full session here: youtu.be/7a5HYjQ4wJo?si… Thanks again to all the speakers and participants!
We are excited to host the 3rd Workshop on Dexterous Manipulation at RSS tomorrow! Join us at OHE 122 starting at 9:00 AM! See you there!
Tactile interaction in the wild can unlock fine-grained manipulation! 🌿🤖✋ We built a portable handheld tactile gripper that enables large-scale visuo-tactile data collection in real-world settings. By pretraining on this data, we bridge vision and touch—allowing robots to:…
Awesome to hear about my former labmates doing big things at Skild!
At a robotics lab in Pittsburgh, engineers are building adaptable, AI-powered robots that could one day work where it's too dangerous for humans. The research drew a visit from President Trump, who touted U.S. dominance in AI as companies announced $90 billion in new investments.
Want to add diverse, high-quality data to your robot policy? Happy to share that the DexWild Dataset is now fully public, hosted by @huggingface 🤗 Find it here! huggingface.co/datasets/board…
Training robots for the open world needs diverse data But collecting robot demos in the wild is hard! Presenting DexWild 🙌🏕️ Human data collection system that works in diverse environments, without robots 💪🦾 Human + Robot Cotraining pipeline that unlocks generalization 🧵👇
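For intuition only, here is a minimal sketch of what a human + robot cotraining loop might look like; the dataset iterators, the imitation_loss method, and the sampling ratio are all assumptions for illustration, not DexWild's actual pipeline:

```python
# Illustrative human + robot cotraining loop: each gradient step draws
# a batch from one of the two data sources at a fixed ratio. All names
# and the 0.75 ratio are assumptions for this sketch, not DexWild code.
import random


def cotrain(human_batches, robot_batches, policy, optimizer,
            steps: int = 10_000, human_frac: float = 0.75) -> None:
    for _ in range(steps):
        # Sample human data more often: human demos are cheap and diverse,
        # while robot demos ground the policy in the robot's embodiment.
        batch = next(human_batches if random.random() < human_frac
                     else robot_batches)
        loss = policy.imitation_loss(batch)  # e.g., a behavior-cloning loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```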
Got to visit the Robotics Institute at CMU today. The institute has a long legacy of pioneering research and pushing the frontiers of robotics. Thanks @kenny__shaw @JasonJZLiu @adamhkan4 for showing your latest projects. Here’s a live autonomous demo trained with DexWild data
Presenting DemoDiffusion: An extremely simple approach enabling a pre-trained 'generalist' diffusion policy to follow a human-demonstration for a novel task during inference One-shot human imitation *without* requiring any paired human-robot data or online RL 🙂 1/n
Three years of dexterous manipulation workshops since the first at RSS 2023: learn-dex-hand.github.io/rss2023/. Great to see the progress in the field.
Excited to be organizing the dexterous manipulation workshop at #RSS2025 — great energy and lots of interest in dexterous manipulation! dex-manipulation.github.io/rss2025/. Come by in OHE 122!
Come check out the LEAP Hand and DexWild live in action at #RSS2025 today!
Cool idea, nice robot neck 🦒
Your bimanual manipulators might need a Robot Neck 🤖🦒 Introducing Vision in Action: Learning Active Perception from Human Demonstrations ViA learns task-specific, active perceptual strategies—such as searching, tracking, and focusing—directly from human demos, enabling robust…
🦾 DexWild is now open-source! Scaling up in-the-wild data will take a community effort, so let’s work together. Can’t wait to see what you do with DexWild! Main Repo: github.com/dexwild/dexwild Hardware Guide: tinyurl.com/dexwild-hardwa… Training Code: github.com/dexwild/dexwil…
🔎Can robots search for objects like humans? Humans explore unseen environments intelligently—using prior knowledge to actively seek information and guide search. But can robots do the same? 👀 🚀Introducing WoMAP (World Models for Active Perception): a novel framework for…
✨New edition of our community-building workshop series!✨ Tomorrow at @CVPR, we invite speakers to share their stories, values, and approaches for navigating a crowded and evolving field, especially for early-career researchers. Cheeky title🤭: How to Stand Out in the…
In this #CVPR2025 edition of our community-building workshop series, we focus on supporting the growth of early-career researchers. Join us tomorrow (Jun 11) at 12:45 PM in Room 209 Schedule: sites.google.com/view/standoutc… We have an exciting lineup of invited talks and candid…
How can we move beyond static-arm lab setups and learn robot policies in our messy homes? We introduce HoMeR, an imitation learning agent for in-the-wild mobile manipulation. 🧵1/8