Kevin Black
@kvablack
phd @berkeley_ai, research @physical_int
Implemented @physical_int’s Real‑Time Chunking (RTC) on @huggingface’s SmolVLA in the @LeRobotHF repo! It noticeably reduces jerky motion compared with basic merge strategies during async inference!🧵1/
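RTC itself frames chunk merging as inference-time inpainting during async chunk generation, which is more involved than can be shown in a tweet-sized snippet. As a much simpler toy illustration of *why* naive merging is jerky, here is a sketch (all function names are hypothetical, and the cross-fade below is a stand-in for illustration, not the actual RTC algorithm): a hard switch to a freshly predicted action chunk can produce a large step at the boundary, while blending over an overlap window keeps the executed trajectory continuous.

```python
import numpy as np

def hard_switch(prev_chunk, new_chunk, exec_steps):
    # Naive async merge: execute `exec_steps` actions from the old
    # chunk, then jump straight into the new chunk. Any disagreement
    # between the chunks shows up as a discontinuity at the boundary.
    return np.concatenate([prev_chunk[:exec_steps], new_chunk])

def cross_fade(prev_chunk, new_chunk, exec_steps, overlap):
    # Toy alternative: over an overlap window, linearly ramp the
    # weight from the old chunk to the new one, spreading the
    # disagreement across `overlap` steps instead of one jump.
    head = prev_chunk[:exec_steps]
    w = np.linspace(0.0, 1.0, overlap)[:, None]
    blended = (1 - w) * prev_chunk[exec_steps:exec_steps + overlap] \
              + w * new_chunk[:overlap]
    return np.concatenate([head, blended, new_chunk[overlap:]])

def max_jump(traj):
    # Largest single-step change in any action dimension,
    # a crude proxy for "jerkiness" of the executed trajectory.
    return np.abs(np.diff(traj, axis=0)).max()
```

With two disagreeing chunks (e.g. `prev = np.linspace(0, 1, 20)[:, None]` and `new = np.linspace(1.0, 2.0, 20)[:, None]`), `max_jump(hard_switch(prev, new, 10))` is roughly ten times larger than `max_jump(cross_fade(prev, new, 10, 5))`. RTC's insight goes further than blending: it conditions the new chunk's generation on the actions already committed for execution, so the chunks agree by construction rather than being averaged after the fact.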
We got a robot to clean up homes it never saw in training! Our new model, π-0.5, aims to tackle open-world generalization: we took the robot into homes that were not in the training data and asked it to clean kitchens and bedrooms. More below⤵️
We are excited to share new experiments with AgiBot @AgiBot_zhiyuan on multi-task, multi-embodiment VLAs! With one model that can perform many tasks with both two-finger grippers and multi-fingered hands, we take another step toward one model for all robots and tasks.
Many of you asked for code & weights for π₀. We're happy to announce that we're releasing π₀ and pre-trained checkpoints in our new openpi repository! We tested the model on a few public robots, and we include code for you to fine-tune it yourself.
My favorite slide that I made for my talk last weekend -- a very silly thought experiment in which we compare language datasets to robotics datasets (in the most shallow way possible). Yes it is to scale; I learned that the maximum shape size in Keynote is 20,000pts

Here's a link to the recording for anyone who's interested! youtube.com/live/ELUMFpJCU…
If you're at #CoRL2024, come check out my talk at the X-Embodiment workshop at 1:30pm! Thanks to @KarlPertsch for inviting me to speak!
