Jenn Grannen
@jenngrannen
@StanfordAILab PhD, previously @ToyotaResearch, @berkeley_ai. I can teach your robot new tricks.
We have developed a new tactile sensor, called e-Flesh, with a simple working principle: measure deformations in 3D printable microstructures. Now all you need to make tactile sensors is a 3D printer, magnets, and magnetometers! 🧵
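As a rough illustration of the sensing principle described in that thread (not e-Flesh's actual calibration or reconstruction pipeline), here is a minimal Python sketch: a magnet embedded in the printed microstructure moves as the structure deforms, and a magnetometer underneath picks up the field change relative to a rest-state baseline. The `read_magnetometer` hook is a hypothetical placeholder for whatever sensor driver is actually used.

```python
import numpy as np

def read_magnetometer():
    """Hypothetical hook: return a 3-axis field reading (e.g. in microtesla)."""
    raise NotImplementedError("wire this to your magnetometer driver")

def calibrate_baseline(n_samples=100):
    """Average readings with no contact to get a rest-state baseline field."""
    samples = np.stack([read_magnetometer() for _ in range(n_samples)])
    return samples.mean(axis=0)

def deformation_signal(baseline, scale=1.0):
    """Scalar proxy for deformation: as the microstructure compresses, the
    embedded magnet moves relative to the magnetometer, so the field's
    deviation from the baseline grows with deformation."""
    delta = read_magnetometer() - baseline
    return scale * float(np.linalg.norm(delta))
```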
Such a powerful use of robotics. Love seeing this new paper take real steps toward making assistive feeding robots a reality.
Most assistive robots live in labs. We want to change that. FEAST enables care recipients to personalize mealtime assistance in-the-wild, with minimal researcher intervention across diverse in-home scenarios. 🏆 Outstanding Paper & Systems Paper Finalist @RoboticsSciSys 🧵1/8
Worked with Sidd for years and can honestly say he was born to be a mentor. Any student would be incredibly lucky to have his guidance through a PhD. They’re in amazing hands 🥹🎓
Thrilled to share that I'll be starting as an Assistant Professor at Georgia Tech (@ICatGT / @GTrobotics / @mlatgt) in Fall 2026. My lab will tackle problems in robot learning, multimodal ML, and interaction. I'm recruiting PhD students this next cycle – please apply/reach out!
Saying hi 👋 to @exploratorium's new feathery friend. Congrats @michelllepan and @CatieCuan!!

🤖📦 Want to move many items FAST with your robot? Use a tray. But at high speeds, objects may fall off 💥. Introducing our new method: it hears sliding 🎧, learns dynamic friction 🥌, and plans time-optimized motions to transport objects 🚀. fast-non-prehensile.github.io 🧵1/7
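The paper's planner isn't reproduced here, but the basic constraint it must respect is simple physics: on a flat tray, an object stays put only while the horizontal acceleration magnitude is below μg. A minimal sketch, assuming a friction coefficient `mu` has already been estimated and the nominal trajectory's horizontal accelerations are available, is to uniformly time-scale the motion until its peak acceleration fits inside that friction cone (for a fixed spatial path, accelerations scale as 1/T²):

```python
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]

def friction_accel_limit(mu):
    """Max horizontal acceleration before an object slides on a flat tray."""
    return mu * G

def min_safe_duration(horiz_accels, nominal_duration, mu):
    """Uniformly stretch a trajectory's duration so its peak horizontal
    acceleration stays within the friction limit.

    horiz_accels: (N, 2) horizontal accelerations sampled along the nominal
                  trajectory executed over `nominal_duration` seconds.
    """
    peak = float(np.max(np.linalg.norm(horiz_accels, axis=1)))
    limit = friction_accel_limit(mu)
    if peak <= limit:
        return nominal_duration          # already within the friction cone
    return nominal_duration * np.sqrt(peak / limit)
```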
How can we move beyond static-arm lab setups and learn robot policies in our messy homes? We introduce HoMeR, an imitation learning agent for in-the-wild mobile manipulation. 🧵1/8
Video generation is powerful but too slow for real-world robotic tasks. How can we enable both video and action generation while ensuring real-time policy inference? Check out our work on the Unified Video Action Model (UVA) to find out! unified-video-action-model.github.io (1/7)
So excited to try this out!!
The ultimate test of any physics simulator is its ability to deliver real-world results. With MuJoCo Playground, we’ve combined the very best: MuJoCo’s rich and thriving ecosystem, massively parallel GPU-accelerated simulation, and real-world results across a diverse range of…
Want a smaller VLA that performs better? We just released some core improvements to OpenVLA, like:
+ MiniVLA: 7x smaller model!
+ Action chunking using Vector Quantization
+ Multi-image support
Blog: ai.stanford.edu/blog/minivla/
Code: github.com/Stanford-ILIAD…
(1/5) More below! 👇
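For readers unfamiliar with the "action chunking using Vector Quantization" bullet: the idea is to turn a short window of continuous actions into a small number of discrete tokens that a VLA can predict like text. A minimal single-codebook sketch (MiniVLA's actual tokenizer may differ; see the linked blog and code) is a nearest-neighbor lookup against learned code vectors:

```python
import numpy as np

def quantize_action_chunk(chunk, codebook):
    """Map a flattened action chunk to its nearest learned code vector.

    chunk:    (chunk_len * action_dim,) continuous actions, flattened.
    codebook: (num_codes, chunk_len * action_dim) learned code vectors.
    Returns the discrete token index and the dequantized (reconstructed) chunk.
    """
    dists = np.linalg.norm(codebook - chunk, axis=1)  # distance to each code
    idx = int(np.argmin(dists))
    return idx, codebook[idx]
```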
Excited to share @StanfordHAI’s article on our Vocal Sandbox work! Looking forward to pushing Vocal Sandbox out into real world settings (bakery🥐/mall👕/library📚) next!
A new robot system called Vocal Sandbox is the first of many systems that promise to help integrate robots into our daily lives. Learn about the prototype that @Stanford researchers presented at the 8th annual Conference on Robot Learning. stanford.io/3Bco1jd
Today at 3-4pm, I'll be presenting our work again at the Language and Robot Learning Workshop. Come check out my poster and say hi! :)
Introducing 🆚Vocal Sandbox: a framework for building adaptable robot collaborators that learn new 🧠high-level behaviors and 🦾low-level skills from user feedback in real-time. ✅ Appearing today at @corl_conf as an Oral Presentation (Session 3, 11/6 5pm). 🧵(1/6)