Keerthana Gopalakrishnan
@keerthanpg
Mother of robots. Building Embodied AGI @DeepMind. Author of "AI for Robotics" textbook. Opinions my own.
Only in SF do you get invited to the funeral of an AI model
Google is processing 980 trillion+ monthly tokens across our products and APIs (up from 480T in May) 🤯 No slowdown in sight, intelligence is everywhere.
Our team @GoogleDeepMind is hiring! You'll join an incredible team passionate about building AGI in the physical world. We're one of the only labs innovating at the cutting edge of both frontier LLMs & humanoids, and we have a ton of fun doing it Apply: job-boards.greenhouse.io/deepmind
Very excited to share that an advanced version of Gemini Deep Think is the first to have achieved gold-medal level in the International Mathematical Olympiad! 🏆, solving five out of six problems perfectly, as verified by the IMO organizers! It’s been a wild run to lead this…
Super thrilled to share that our AI has now reached silver medalist level in Math at #imo2024 (1 point away from 🥇)! Since Jan, we now not only have a much stronger version of #AlphaGeometry, but also an entirely new system called #AlphaProof, capable of solving many more…
Gemini achieves Gold Medal level performance at IMO 🚀 Confirmed via official grading and announced with due respect to the competition. Congrats and so proud of the team!
Official results are in - Gemini achieved gold-medal level in the International Mathematical Olympiad! 🏆 An advanced version was able to solve 5 out of 6 problems. Incredible progress - huge congrats to @lmthang and the team! deepmind.google/discover/blog/…
I decided to show my dog how special and loved she was by making her homemade dog snacks - baby carrots simmered in beef broth and frozen She sniffed them, looked me dead in the eye, and walked away like *I* was the animal and now I’m left emptying all my love down the trash..…
Happy 4th of July to the brave dogs who are mounting a valiant fight against explosions across America tonight! We salute your service 🇺🇸
hello, i'm selling 40 nvidia 4090 24gb gpus - not using them ! lmk if interested
Friends who are funny >>>> Very few people actively trying to say interesting and witty things in group settings, but why not, humor is the spice of life?
Most people don't know this yet, but open vocabulary manipulation is already starting to work: unseen object / task, zero shot. The ChatGPT moment for robotics will not be sudden or viral because you need a robot to experience the magic. But for those with a robot, you know.
Gemini Robotics zero-shot picks with a dexterous hand: No prior demos, not even videos. It recognized, failed to grasp (slippery surface), retried with new angles, got help, nailed the pick, adjusted post-pick. Mad respect to the DeepMind team. Now I really worry about human labor 😅
We took a robot to RSS in LA running our new Gemini Robotics On-Device VLA model. People interacted with the model with new objects and instructions in a brand new environment and the results were amazing!
Building on more than 10 years of robotics research and engineering at @GoogleDeepMind, @GoogleResearch and @GoogleAI, we're delighted to announce our Gemini Robotics On-Device system. A really capable vision-language-action model that can run entirely without network access. ⬇️
We’re bringing powerful AI directly onto robots with Gemini Robotics On-Device. 🤖 It’s our first vision-language-action model to help make robots faster, highly efficient, and adaptable to new tasks and environments - without needing a constant internet connection. 🧵
Google DeepMind announces Gemini Robotics On-Device - an efficient VLA model optimized to run locally with low-latency inference. It enables general-purpose dexterity, adapts to new tasks or robot hardware with fewer than 100 demos.
If you're at RSS, there's a live demo of the Gemini Robotics On-Device model! Come by and interact! It responds to a lot of different language queries!
Come by the @GoogleDeepMind booth at @RoboticsSciSys conference in LA! We’re demoing Gemini Robotics On-Device live, come check it out
Amazing to see the generality & dexterity of Gemini Robotics in a model small enough to run directly on a robot. Incredible speed & performance even in areas with low connectivity. Excited to continue this momentum to make robots more helpful & useful to people