Nicholas Pfaff
@NicholasEPfaff
PhD Student @MIT_CSAIL
Want to scale robot data with simulation, but don’t know how to get large numbers of realistic, diverse, and task-relevant scenes? Our solution: ➊ Pretrain on broad procedural scene data ➋ Steer generation toward downstream objectives 🌐 steerable-scene-generation.github.io 🧵1/8
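The two-step recipe above (broad procedural generation, then steering toward a downstream objective) can be illustrated with a toy best-of-n sketch. Everything here is hypothetical — the scene format, the `task_relevance` score, and best-of-n selection are illustrative stand-ins, not the paper's actual generator or steering method:

```python
import random

def generate_scene(rng, num_objects=5):
    """Procedurally sample a toy 'scene': a list of (object, x, y) tuples."""
    catalog = ["mug", "plate", "fork", "book", "box"]
    return [(rng.choice(catalog), rng.uniform(0, 1), rng.uniform(0, 1))
            for _ in range(num_objects)]

def task_relevance(scene, target="mug"):
    """Toy downstream objective: how many target objects the scene contains."""
    return sum(1 for name, _, _ in scene if name == target)

def steer_best_of_n(rng, n=64, target="mug"):
    """Steer generation by sampling n scenes and keeping the highest-scoring one."""
    scenes = [generate_scene(rng) for _ in range(n)]
    return max(scenes, key=lambda s: task_relevance(s, target))

rng = random.Random(0)
best = steer_best_of_n(rng)
```

The broad generator stays task-agnostic; steering only reweights which samples survive, which is the general shape of optimizing generation against a downstream objective.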
🚨Past work shows that dropping just 0.1% of the data can change the conclusions of important studies. We show that many approximations can fail to catch this. 📢Check out our new TMLR paper (w/ David Burt, @ShenRaphael , Tin Nguyen, and @ta_broderick ) 👇 openreview.net/forum?id=m6EQ6…
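A toy illustration of why dropping 0.1% of the data can flip a conclusion (this is synthetic data for intuition only, not the paper's method or any of its approximations): a single high-leverage point among 1,000 reverses the sign of an OLS slope.

```python
import numpy as np

# 999 well-behaved points with an exact small positive trend y = 0.1 * x.
x_bulk = np.linspace(0.0, 1.0, 999)
y_bulk = 0.1 * x_bulk

# Add one high-leverage outlier: 0.1% of the 1000-point dataset.
x = np.append(x_bulk, 100.0)
y = np.append(y_bulk, -500.0)

slope_full, _ = np.polyfit(x, y, 1)              # fit on all 1000 points
slope_dropped, _ = np.polyfit(x_bulk, y_bulk, 1)  # drop the single outlier

# The sign of the estimated trend flips: slope_full < 0, slope_dropped > 0.
```

An analyst who only ever fits the full dataset would report a negative trend; the paper's concern is that cheap approximations to "what happens if I drop a small subset?" can miss exactly this kind of sensitivity.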
TRI's latest Large Behavior Model (LBM) paper landed on arXiv last night! Check out our project website: toyotaresearchinstitute.github.io/lbm1/ One of our main goals for this paper was to put out a very careful and thorough study on the topic to help people understand the state of the…
If you’re working on robotics and AI, the recent Stanford talk from @RussTedrake on scaling multitask robot manipulation is a must-watch, full stop. No marketing, no hype. Just solid, hypothesis-driven science and evidence-backed claims. A gold mine in today’s landscape!
How do you learn dexterous manipulation for any robot hand from a single human demonstration? Check out DexMachina, our new RL algorithm that learns long-horizon, bimanual dexterous policies across a variety of robot hands, articulated objects, and complex motions.
Super excited about these results. We need more rigorous experimental studies like this in robotics!
Learning from both sim+real data could scale robot imitation learning. But what are the scaling laws & principles of sim+real cotraining? We study this in the first focused analysis of sim+real cotraining spanning 250+ policies & 40k+ evals arxiv.org/abs/2503.22634 (1/6)
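One of the basic knobs in sim+real cotraining is the mixing ratio between the two data sources. A minimal sketch, assuming a simple fixed-ratio minibatch scheme (the function name, data format, and 75/25 split are illustrative assumptions, not the study's protocol):

```python
import random

def cotraining_batch(sim_data, real_data, batch_size, sim_ratio, rng):
    """Assemble one mixed minibatch: a fixed fraction from sim, the rest from real."""
    n_sim = round(batch_size * sim_ratio)
    n_real = batch_size - n_sim
    batch = ([("sim", rng.choice(sim_data)) for _ in range(n_sim)] +
             [("real", rng.choice(real_data)) for _ in range(n_real)])
    rng.shuffle(batch)  # avoid ordering effects within the batch
    return batch

sim = list(range(10_000))   # stand-in for cheap simulated episodes
real = list(range(50))      # stand-in for scarce real-robot episodes
rng = random.Random(0)
batch = cotraining_batch(sim, real, batch_size=256, sim_ratio=0.75, rng=rng)
```

Sweeping `sim_ratio` (and the absolute amounts of each source) is the kind of axis a scaling-law study of cotraining would vary across its policy runs.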
Scalable Real-to-Sim: automated object scanning with a camera + BundleSDF, plus a robot arm that measures the object's inertial parameters. This is so cool! scalable-real2sim.github.io (thanks @bowenwen_me for the link!)