Kevin Zakka
@kevin_zakka
phding @Berkeley_AI
The ultimate test of any physics simulator is its ability to deliver real-world results. With MuJoCo Playground, we’ve combined the very best: MuJoCo’s rich and thriving ecosystem, massively parallel GPU-accelerated simulation, and real-world results across a diverse range of…
if you want to try training a robot to dance or pick up stuff, check out these Colab notebooks released by Google & @kevin_zakka this week. Train Unitree dog to spin and handstand: colab.research.google.com/github/google-… Train Franka robot arm to pick up stuff: colab.research.google.com/github/google-…
Everyone knows action chunking is great for imitation learning. It turns out that we can extend its success to RL to better leverage prior data for improved exploration and online sample efficiency! colinqiyangli.github.io/qc/ The recipe to achieve this is incredibly simple. 🧵 1/N
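The thread itself carries the details; as a rough, hedged illustration of what "action chunking" means here (a toy sketch, not the paper's method — all names below are illustrative): the policy predicts a chunk of H actions per query, and the agent executes the whole chunk open-loop before re-querying, so exploration noise is temporally correlated.

```python
import random

H = 4  # chunk length: actions predicted per policy query

def chunked_policy(obs):
    """Toy policy: returns a chunk of H actions instead of one.
    One noise draw is shared across the chunk, so exploration is
    temporally coherent rather than per-step jitter."""
    base = 1 if obs >= 0 else -1
    noise = random.choice([-1, 0, 1])  # shared by the whole chunk
    return [base + noise for _ in range(H)]

def rollout(steps=12):
    obs, trace, actions = 0, [], []
    for _ in range(steps):
        if not actions:           # chunk exhausted: re-query the policy
            actions = list(chunked_policy(obs))
        a = actions.pop(0)        # execute the chunk open-loop
        obs += a
        trace.append(a)
    return trace

trace = rollout()
# Within each chunk of H steps the action is identical (shared noise draw).
```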
Fully open machine learning requires not only GPU access but a community commitment to openness. (Some nostalgic lessons from the ImageNet decade.) argmin.net/p/an-open-mind…
Warm-start RL (WSRL) can learn to control a real robot in under 20 minutes! Deep RL is getting really fast. Warm-starting from offline data + super-efficient online learning is increasingly making real-world RL not just practical but pretty easy.
We tested WSRL (Warm-start RL) on a Franka Robot, and it leads to really efficient online RL fine-tuning in the real world! WSRL learned the peg insertion task perfectly with only 11 minutes of warmup and *7 minutes* of online RL interactions 👇🧵
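Neither the WSRL implementation nor the Franka setup is shown in these tweets; as a hedged toy sketch of the general warm-start recipe (offline data seeds the replay buffer and warms up the value function before online interaction begins — tabular Q-learning on a 5-state chain here, not the deep actor-critic method WSRL actually uses):

```python
N_STATES, GOAL = 5, 4
ACTIONS = [-1, 1]

def step(s, a):
    """5-state chain: reach state 4 for reward 1."""
    s2 = max(0, min(GOAL, s + a))
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

def q_update(Q, s, a, r, s2, done, alpha=0.5, gamma=0.9):
    target = r if done else r + gamma * max(Q[(s2, b)] for b in ACTIONS)
    Q[(s, a)] += alpha * (target - Q[(s, a)])

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

# 1) Offline data from some prior policy (here: scripted "go right").
offline, s = [], 0
for _ in range(20):
    s2, r, done = step(s, 1)
    offline.append((s, 1, r, s2, done))
    s = 0 if done else s2

# 2) Warmup: replay the offline buffer before any online interaction
#    (reversed order propagates value back from the goal quickly).
for _ in range(3):
    for transition in reversed(offline):
        q_update(Q, *transition)

# 3) Online fine-tuning starts from a warmed-up Q, so very few
#    environment steps are needed.
s, online_steps = 0, 0
for _ in range(200):
    a = max(ACTIONS, key=lambda b: Q[(s, b)])
    s2, r, done = step(s, a)
    q_update(Q, s, a, r, s2, done)
    online_steps += 1
    if done:
        break
    s = s2
```

With the warmed-up value function, the greedy online phase reaches the goal in the minimum four steps instead of exploring from scratch.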
Introducing my recent work, "Learning Steerable Imitation Controllers From Unstructured Animal Motions". In this work, we present a control framework for legged robots that leverages unstructured real-world animal motion data to generate animal-like and user-steerable behaviors.
🥋 We're excited to share judo: a hackable toolbox for sampling-based MPC (SMPC), data collection, and more, designed to make it easier to experiment with high-performance control. Try it: pip install judo-rai
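judo's own API isn't shown in the tweet, so here is a generic, from-scratch sketch of the core idea behind sampling-based MPC (a random-shooting variant on a hypothetical 1-D point mass, not judo code): sample many action sequences, roll each out through a model, execute the first action of the best sequence, replan.

```python
import numpy as np

DT, HORIZON, N_SAMPLES = 0.1, 20, 256

def dynamics(state, u):
    """Point mass: state = [position, velocity], u = acceleration."""
    pos, vel = state
    return np.array([pos + vel * DT, vel + u * DT])

def cost(state, u):
    pos, vel = state
    return pos**2 + 0.1 * vel**2 + 0.01 * u**2  # drive to the origin

def smpc_action(state, rng):
    """Random-shooting MPC: sample sequences, return best first action."""
    best_u, best_cost = 0.0, np.inf
    for _ in range(N_SAMPLES):
        seq = rng.uniform(-1.0, 1.0, size=HORIZON)
        s, total = state.copy(), 0.0
        for u in seq:
            total += cost(s, u)
            s = dynamics(s, u)
        if total < best_cost:
            best_u, best_cost = seq[0], total
    return best_u

rng = np.random.default_rng(0)
state = np.array([1.0, 0.0])
for _ in range(100):
    state = dynamics(state, smpc_action(state, rng))
# Replanning every step drives the point mass toward the origin.
```

Production SMPC toolboxes parallelize the rollouts and use smarter samplers (e.g. MPPI or CEM), but the plan-execute-replan loop is the same.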
I'll be speaking at the RSS Dexterous Manipulation Workshop tomorrow, discussing our recent work with Atlas!
We are excited to host the 3rd Workshop on Dexterous Manipulation at RSS tomorrow! Join us at OHE 122 starting at 9:00 AM! See you there!
Looking forward to an exciting final day of RSS tomorrow with our WCBM workshop kicking off at 8:20 at USC! More details on the website: wcbm-workshop.github.io @RoboticsSciSys @YoungwoonLee @Xingyu2017 @ToruO_O @pabbeel
Congratulations to BAIR researchers @kevin_zakka @qiayuanliao @arthurallshire @carlo_sferrazza @KoushilSreenath @pabbeel and Google collaborators for winning the Outstanding Demo Paper Award at RSS 2025! playground.mujoco.org
We’re super thrilled to have received the Outstanding Demo Paper Award for MuJoCo Playground at RSS 2025! Huge thanks to everyone who came by our booth and participated, asked questions, and made the demo so much fun! @carlo_sferrazza @qiayuanliao @arthurallshire
Come check out the LEAP Hand and DexWild live in action at #RSS2025 today!
I'll present RoboPanoptes at #RSS2025 tomorrow 6/22 🐍 Spotlight talk: 9:00-10:30am (Bovard Auditorium) Poster: 12:30-2:00pm, poster #31 (Associates Park)
Can robots leverage their entire body to sense and interact with their environment, rather than just relying on a centralized camera and end-effector? Introducing RoboPanoptes, a robot system that achieves whole-body dexterity through whole-body vision. robopanoptes.github.io
Demo starting in 10 minutes, come witness the magic of open-source sim2real!
Today we're excited to share a glimpse of what we're building at Generalist. As a first step towards our mission of making general-purpose robots a reality, we're pushing the frontiers of what end-to-end AI models can achieve in the real world. Here's a preview of our early…
Looking forward to giving a talk at the Real2Sim workshop at @CVPR today at 3:50pm CDT. I will be speaking about sim-to-real for robot learning -- and MuJoCo Playground! #CVPR2025
Join our #CVPR2025 Workshop on Real2Sim: Bridging the Gap between Neural Rendering and Robot Learning on 6/12! With amazing speakers: @drmapavone @shahdhruv_ @GordonWetzstein @LingjieLiu1 @sicheng_mo @RuohanZhang76 @carlo_sferrazza ⏲️ Thu, 6/12, 1:45-5:30 PM CDT 🏢 Davidson…
Excited to get this release out there! It's been cool seeing software made for the real robot function identically in sim. Also really glad to be building on MuJoCo; their codebase & docs are incredible
🚀Stretch MuJoCo v0.5 is released! It's a high-fidelity simulation of Stretch 3. Here’s what's new: • ROS2 and Python libraries • RGB-D and Lidar sensors • 100s of kitchen-style environments • Runs on Ubuntu, MacOS, Windows, or online on Google Colab
Specifically, HoMeR builds on our prior work SPHINX: x.com/priyasun_/stat… extending it to @jimmyyhwu‘s Tidybot++: x.com/jimmyyhwu/stat… with a whole-body controller based on @kevin_zakka‘s mink! x.com/kevin_zakka/st… 🧵4/8
Excited to finally open-source 𝐦𝐢𝐧𝐤, a library for differential inverse kinematics in Python based on the MuJoCo physics engine. github.com/kevinzakka/mink
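mink's actual API isn't reproduced here; as a minimal from-scratch sketch of the technique it implements — differential inverse kinematics, here via damped least squares on a hypothetical 2-link planar arm rather than a MuJoCo model:

```python
import numpy as np

L1, L2 = 1.0, 1.0  # link lengths of a planar 2R arm

def fk(q):
    """End-effector position of the 2-link arm."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jacobian(q):
    """Analytic Jacobian d(fk)/dq."""
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def dls_ik_step(q, target, damping=1e-2, dt=0.1):
    """One damped-least-squares differential IK step:
    qdot = J^T (J J^T + lambda I)^-1 (target - fk(q))."""
    err = target - fk(q)
    J = jacobian(q)
    qdot = J.T @ np.linalg.solve(J @ J.T + damping * np.eye(2), err)
    return q + dt * qdot

q = np.array([0.5, 0.5])
target = np.array([1.2, 0.8])  # reachable: |target| < L1 + L2
for _ in range(200):
    q = dls_ik_step(q, target)
```

Libraries like mink generalize this idea to full MuJoCo models with multiple weighted tasks and joint limits, solving a QP per step instead of a bare least-squares problem.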