Priya Sundaresan
@priyasun_
CS PhD student @Stanford, prev. Intrinsic, @Amazon Robotics, @UCBerkeley | learning from humans & teaching robots
How can we move beyond static-arm lab setups and learn robot policies in our messy homes? We introduce HoMeR, an imitation learning agent for in-the-wild mobile manipulation. 🧵1/8
🤖 Household robots are becoming physically viable. But interacting with people in the home requires handling unseen, unconstrained, dynamic preferences, not just a complex physical domain. We introduce ROSETTA: a method to cheaply generate rewards for such preferences. 🧵⬇️
And we won the #RSS 2025 Best Paper Award! Congrats @rkjenamani and the entire @EmpriseLab team @CornellCIS 🎉🎉
Congrats @rkjenamani and the entire team @EmpriseLab on this impressive accomplishment and being nominated for Best Paper Award and Best Systems Paper Award at #RSS 2025! This project took almost 2.5 years to get to this stage, and I am incredibly proud of what we have achieved…
Excited to announce what we've been working on: Gemini Robotics On-Device, a VLA model that runs locally and shows strong performance on 3 different robot embodiments! We're also releasing an open source MuJoCo sim for the Aloha 2 platform, and an SDK for trusted testers to use…
We’re bringing powerful AI directly onto robots with Gemini Robotics On-Device. 🤖 It’s our first vision-language-action model to help make robots faster, more efficient, and adaptable to new tasks and environments, without needing a constant internet connection. 🧵
Check out Rajat’s awesome new work on assistive feeding in the wild!
Most assistive robots live in labs. We want to change that. FEAST enables care recipients to personalize mealtime assistance in-the-wild, with minimal researcher intervention across diverse in-home scenarios. 🏆 Outstanding Paper & Systems Paper Finalist @RoboticsSciSys 🧵1/8
Your bimanual manipulators might need a Robot Neck 🤖🦒 Introducing Vision in Action: Learning Active Perception from Human Demonstrations ViA learns task-specific, active perceptual strategies—such as searching, tracking, and focusing—directly from human demos, enabling robust…
Meet ProVox: a proactive robot teammate that gets you 🤖❤️🔥 ProVox models your goals and expectations before a task starts — enabling personalized, proactive help for smoother, more natural collaboration. All powered by LLM commonsense. Recently accepted at @ieeeras RA-L! 🧵1/7
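A loose sketch of the LLM goal-modeling idea described above (the prompt, model choice, and OpenAI-style API are assumptions for illustration, not ProVox's actual implementation): before the task starts, an LLM is asked to infer the person's likely goal and to volunteer one proactive action.

```python
# Hypothetical sketch of LLM-based proactive goal modeling; not ProVox's actual
# prompts or code. Assumes the OpenAI Python SDK and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()


def infer_goal_and_offer_help(task_context: str, model: str = "gpt-4o") -> str:
    """Ask an LLM to guess the person's goal and propose one proactive robot action."""
    prompt = (
        "You are a robot teammate helping a person in a shared workspace.\n"
        f"Task context: {task_context}\n"
        "1) Infer the person's most likely goal.\n"
        "2) Propose ONE proactive action the robot should take before being asked.\n"
        "Answer as two short lines: GOAL: ... and ACTION: ..."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(infer_goal_and_offer_help(
        "The person has laid out bread, peanut butter, and a knife on the counter."
    ))
```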
Sometimes the best way to express an idea is by sketching it out. A system from MIT CSAIL & Stanford captures this iterative process by teaching LLMs to create sequential sketches. It could work w/ users to visually communicate concepts: bit.ly/4kfXFhk
How can we learn dexterous manipulation for any robot hand from a single human demonstration? Check out DexMachina, our new RL algorithm that learns long-horizon, bimanual dexterous policies for a variety of robot hands, articulated objects, and complex motions.
How can robots autonomously handle ambiguous situations that require commonsense reasoning? *VLM-PC* provides adaptive high-level planning, so robots can get unstuck by exploring multiple strategies. Paper: anniesch.github.io/vlm-pc/
Introducing Phantom 👻: a method to train robot policies without collecting any robot data — using only human video demonstrations. Phantom turns human videos into "robot" demonstrations, making it significantly easier to scale up and diversify robotics data. 🧵1/9
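As a rough illustration of the human-video-to-robot-data idea (every helper below is a hypothetical placeholder, not Phantom's released pipeline): each video is turned into a pseudo robot demonstration by estimating the hand pose per frame, retargeting the pose change to an end-effector action, and editing the frame so the human arm no longer appears.

```python
# Hypothetical sketch: turning a human video into a pseudo robot demonstration.
# All helpers are placeholders; this is not Phantom's released code.
from dataclasses import dataclass

import numpy as np


@dataclass
class Transition:
    observation: np.ndarray  # edited frame with the human arm removed/overpainted
    action: np.ndarray       # e.g. 6-DoF end-effector delta + gripper command


def estimate_hand_pose(frame: np.ndarray) -> np.ndarray:
    """Placeholder: estimate a 6-DoF hand pose from one video frame."""
    raise NotImplementedError("plug in an off-the-shelf hand-pose estimator")


def retarget_to_gripper(prev_pose: np.ndarray, pose: np.ndarray) -> np.ndarray:
    """Placeholder: map the change in hand pose to a robot end-effector action."""
    raise NotImplementedError


def erase_human_arm(frame: np.ndarray) -> np.ndarray:
    """Placeholder: inpaint/overlay the frame so the observation looks robot-like."""
    raise NotImplementedError


def video_to_demo(frames: list[np.ndarray]) -> list[Transition]:
    """Convert one human video into (observation, action) pairs for imitation learning."""
    demo = []
    prev_pose = estimate_hand_pose(frames[0])
    for frame in frames[1:]:
        pose = estimate_hand_pose(frame)
        demo.append(Transition(observation=erase_human_arm(frame),
                               action=retarget_to_gripper(prev_pose, pose)))
        prev_pose = pose
    return demo
```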
Ever watch your imitation-based robot policy do something bizarre? Wish you could fix it—no retraining needed? Meet FOREWARN, a VLM-in-the-loop system that steers multi-modal generative policies toward the right outcomes, on the fly.
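A minimal sketch of the general VLM-in-the-loop selection pattern the tweet describes (function names and interfaces are assumptions, not the authors' code): sample several candidate plans from the generative policy, verbalize each plan's predicted outcome, and let a VLM pick the candidate that matches the user's intent before executing it.

```python
# Hypothetical sketch of VLM-in-the-loop policy steering; placeholder interfaces,
# not the FOREWARN implementation.
from typing import Any, Callable, Sequence


def steer_policy(
    sample_plans: Callable[[Any, int], Sequence[Any]],  # generative policy: (obs, K) -> K candidate plans
    describe_outcome: Callable[[Any, Any], str],        # predicts & verbalizes what a plan would do
    vlm_choose: Callable[[str, Sequence[str]], int],    # VLM: (user intent, descriptions) -> chosen index
    observation: Any,
    user_intent: str,
    num_candidates: int = 8,
) -> Any:
    """Return the candidate plan whose predicted outcome best matches the user's intent."""
    plans = sample_plans(observation, num_candidates)
    descriptions = [describe_outcome(observation, plan) for plan in plans]
    best = vlm_choose(user_intent, descriptions)
    return plans[best]
```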