Ashwin Balakrishna
@ashwinb96
Building robot brains @physical_int. Previously at @GoogleDeepMind, @berkeley_ai.
Was super fun to demo Gemini Robotics @ Google I/O! This was a big effort with the @GoogleDeepMind team including @ColinearDevin, @SudeepDasari, and many others. Here's a fun uncut video of me playing with the demo :)
Since the first year of my PhD, every talk I’ve given has opened with a slide about the distant north star: dropping a robot in a home it’s never been in before and having it do useful things. I think it might be time for me to find a new opening slide 😀. Thrilled to share π-0.5!
We got a robot to clean up homes it had never seen in its training data! Our new model, π-0.5, aims to tackle open-world generalization: we took our robot into homes that were not in the training data and asked it to clean kitchens and bedrooms. More below⤵️
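For readers curious what "asking" the robot looks like programmatically, here is a minimal sketch of the observation-to-action-chunk interface a VLA policy like π-0.5 exposes. The class, method names, and array shapes are illustrative assumptions, not the released API.

```python
# Hypothetical sketch of a VLA policy's observation -> action-chunk
# interface; names and shapes are assumptions for illustration only.
import numpy as np


class VLAPolicy:
    """Stand-in for a vision-language-action policy client."""

    def infer(self, observation: dict) -> np.ndarray:
        # A real policy would run model inference here; we return a
        # placeholder chunk of 50 actions of dimension 14.
        return np.zeros((50, 14))


policy = VLAPolicy()
observation = {
    "image": np.zeros((224, 224, 3), dtype=np.uint8),  # camera frame
    "state": np.zeros(14),                             # proprioception
    "prompt": "clean up the kitchen counter",          # language instruction
}
action_chunk = policy.infer(observation)  # shape: (horizon, action_dim)
print(action_chunk.shape)
```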
An advanced version of Gemini with Deep Think has officially achieved gold medal-level performance at the International Mathematical Olympiad. 🥇 It solved 5️⃣ out of 6️⃣ exceptionally difficult problems, involving algebra, combinatorics, geometry and number theory. Here’s how 🧵
Love the vibe of the Gemini Robotics On-Device live demo booth in RSS 2025! Especially the genuine excitement from the robotics research community!
We took a robot to RSS in LA running our new Gemini Robotics On-Device VLA model. People interacted with the model using new objects and instructions in a brand-new environment, and the results were amazing!
Excited to announce what we've been working on: Gemini Robotics On-Device, a VLA model that runs locally and shows strong performance on 3 different robot embodiments! We're also releasing an open source MuJoCo sim for the Aloha 2 platform, and an SDK for trusted testers to use…
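A minimal sketch of loading and stepping the open-source ALOHA 2 sim with the standard MuJoCo Python bindings; the XML path below is an assumption about how the release is laid out, so adjust it to wherever the model actually lives.

```python
# Minimal sketch: load and step the ALOHA 2 MuJoCo sim.
# "aloha/scene.xml" is an assumed, Menagerie-style path.
import mujoco

model = mujoco.MjModel.from_xml_path("aloha/scene.xml")  # assumed path
data = mujoco.MjData(model)

# Step the passive dynamics for one simulated second.
steps = int(1.0 / model.opt.timestep)
for _ in range(steps):
    mujoco.mj_step(model, data)

print(f"{model.nq} position DoFs, sim time = {data.time:.3f}s")
```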
We’re bringing powerful AI directly onto robots with Gemini Robotics On-Device. 🤖 It’s our first vision-language-action model to help make robots faster, highly efficient, and adaptable to new tasks and environments - without needing a constant internet connection. 🧵
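To make "runs locally" concrete, here is a hedged sketch of an on-device control loop in which inference never touches the network. The loader, checkpoint path, and action format are hypothetical stand-ins, not the actual trusted-tester SDK.

```python
# Hedged sketch of an on-device control loop: the policy runs locally,
# so no step requires an internet connection. `load_local_policy` and
# the checkpoint path are hypothetical, for illustration only.
import time
import numpy as np


def load_local_policy(path: str):
    """Stand-in loader; a real on-device runtime would load model weights."""
    def policy(image: np.ndarray, instruction: str) -> np.ndarray:
        return np.zeros(14)  # one action per control step
    return policy


policy = load_local_policy("gemini_robotics_on_device.ckpt")  # assumed path
instruction = "fold the shirt on the table"

for _ in range(100):                            # ~100 control steps
    image = np.zeros((480, 640, 3), np.uint8)   # replace with camera capture
    action = policy(image, instruction)         # inference happens on-device
    # send `action` to the robot's joint controller here
    time.sleep(0.02)                            # 50 Hz control loop
```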
I had a great time chatting with Hannah Fry on the @GoogleDeepMind podcast about redefining what’s possible in robotics with Gemini! Thank you! youtu.be/Rgwty6dGsYI?si… via @YouTube
One of the nicest reviews I’ve ever seen for our live robotics demo at Google I/O! Walk up to our booth, greet our friendly ALOHA robot, and just talk to it, and it will try to do *anything* you ask! ✅ instruction following ✅ generalist ✅ dexterous h/t @BradyPSnyder
Our live and interactive demo of Gemini Robotics is up at Google I/O until 5pm today!
Gemini Robotics makes first contact with the outside world at #GoogleIO this week! “this is the first time in my life that I’ve been able to control robots using nothing but my voice. That is basically the very definition of cool.” @AndroidAuth “Gemini Robotics is exactly the…
Introducing Gemini 2.5 Pro Experimental! 🎉 Our newest Gemini model has stellar performance across math and science benchmarks. It’s an incredible model for coding and complex reasoning, and it’s #1 on the @lmarena_ai leaderboard by a wide 40-point Elo margin. Only a handful of…
Complementary to Gemini Robotics, the massive vision-language-action (VLA) model released yesterday, we also investigated how far we can push Gemini for robotics _purely from simulation data_ in Proc4Gem: 🧵
We’ve always thought of robotics as a helpful testing ground for translating AI advances into the physical world. Today we’re taking our next step in this journey with our newest Gemini 2.0 robotics models. They show state-of-the-art performance on two important benchmarks -…