CMU Robotics Institute
@CMU_Robotics
Pioneering the future of robotics since 1979. We’re transforming industries and everyday life through cutting-edge innovation and world-class education.
Moonyoung Lee, a fifth-year Ph.D. student at Carnegie Mellon University’s Robotics Institute, was involved in developing SonicBoom, a sensing system that allows autonomous robots to use sound to sense the objects they touch. spectrum.ieee.org/farm-robots-so…
@CMU_Robotics alum Jeremy Kubica is the engineering director for LINCC Frameworks, a joint initiative by CMU, the University of Washington and the LSST Discovery Alliance. Kubica's team is developing products to help scientists identify which massive stars are exploding.
Shortcut models enable scaling offline RL, both at train-time and at test-time! We beat so many other algorithms on so many tasks that we had to stick most of the results in the appendix 😅. Very proud of @nico_espinosa_d for spearheading this project, check out his thread!
by incorporating self-consistency during offline RL training, we unlock three orthogonal directions of scaling:
1. efficient training (i.e. limit backprop through time)
2. expressive model classes (e.g. flow matching)
3. inference-time scaling (sequential and parallel)
which,…
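A minimal sketch of the self-consistency idea behind shortcut models, applied to a flow-matching policy for offline RL. The network `v(s, a_t, t, d)` and all names and shapes here are illustrative assumptions, not the paper's API: the trick is that one step of size 2d is regressed onto the average of two chained steps of size d, so the same network can later be queried with large d for cheap few-step action sampling.

```python
import torch
import torch.nn as nn

class ShortcutPolicy(nn.Module):
    """Velocity network v(s, a_t, t, d), conditioned on step size d."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, s, a_t, t, d):
        return self.net(torch.cat([s, a_t, t, d], dim=-1))

def shortcut_loss(model, s, a_data, d_small=1.0 / 8):
    # Linear interpolation path between noise and the dataset action.
    noise = torch.randn_like(a_data)
    t = torch.rand(a_data.size(0), 1)
    a_t = (1 - t) * noise + t * a_data
    d = torch.full_like(t, d_small)
    # (1) Standard flow-matching regression at the small step size.
    fm = ((model(s, a_t, t, d) - (a_data - noise)) ** 2).mean()
    # (2) Self-consistency: a step of size 2d must match two chained d-steps,
    #     i.e. regress v(., 2d) onto the average of the two small-step velocities.
    with torch.no_grad():
        v1 = model(s, a_t, t, d)
        v2 = model(s, a_t + d * v1, t + d, d)
        target = (v1 + v2) / 2
    sc = ((model(s, a_t, t, 2 * d) - target) ** 2).mean()
    return fm + sc
```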
Check it out! 🚀 "Diffusion Beats Autoregressive in Data-Constrained Settings" They show that Diffusion LLMs outperform Autoregressive LLMs when allowed to train for multiple epochs! #CMUrobotics Work from Mihir Prabhudesai @mihirp98 & Mengning Wu @WuMengning54261
🚨 The era of infinite internet data is ending. So we ask: 👉 What’s the right generative modelling objective when data—not compute—is the bottleneck?
TL;DR:
▶️ Compute-constrained? Train Autoregressive models
▶️ Data-constrained? Train Diffusion models
Get ready for 🤿 1/n
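For intuition, here is a hedged sketch (ours, not the paper's training code) contrasting the two objectives on the same token batch. One intuition behind the data-constrained result: the masked-diffusion loss draws a fresh random corruption of each sequence every epoch, so repeated passes over the same data keep producing new training signal.

```python
import torch
import torch.nn.functional as F

def ar_loss(logits, tokens):
    # logits: [B, T, V] from a causal model; predict token t+1 from its prefix.
    return F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),
        tokens[:, 1:].reshape(-1),
    )

def masked_diffusion_loss(model, tokens, mask_id):
    # Sample a masking ratio, hide that fraction of tokens, and predict them
    # from bidirectional context; a new mask is drawn on every pass.
    B, T = tokens.shape
    ratio = torch.rand(B, 1)
    mask = torch.rand(B, T) < ratio
    corrupted = torch.where(mask, torch.full_like(tokens, mask_id), tokens)
    logits = model(corrupted)  # [B, T, V]
    return F.cross_entropy(logits[mask], tokens[mask])
```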
Congrats to Changliu Liu, who received the International Federation of Automatic Control (IFAC) Robotics Outstanding Young Researcher Award! 🎉 The award recognizes the critical applications of her work in human-robot collaboration. Read more below! 👇 bit.ly/4f2bGNM

✈️ RI's Mitch Fogelson and Zac Manchester put their research to the test in zero gravity as part of a NASA Flight Opportunities campaign with @GoZeroG! They worked to create foldable structures for space deployment 🚀 #CMUrobotics Read and watch below: bit.ly/414Nt3w

Excited to share recent work with @kaihuac5 and @RamananDeva where we learn to do novel view synthesis for dynamic scenes in a self-supervised manner, only from 2D videos! webpage: cog-nvs.github.io arxiv: arxiv.org/abs/2507.12646 code (soon): github.com/Kaihua-Chen/co…
SCS faculty members Deepak Pathak and Abhinav Gupta are co-founders of Skild AI, which was featured last night on CBS News! cbsnews.com/video/over-90-…
Recent work has seemed somewhat magical: how can RL with *random* rewards make LLMs reason? We pull back the curtain on these claims and find that this unexpected behavior hinges on the inclusion of certain *heuristics* in the RL algorithm. Our blog post: tinyurl.com/heuristics-con…
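One heuristic of this kind is PPO/GRPO-style ratio clipping. As a generic illustration (a toy we wrote, not the blog's analysis code), the clipped surrogate passes gradients asymmetrically once the importance ratio leaves the clip band, so even zero-mean random advantages no longer cancel:

```python
import torch

def clipped_term(ratio, adv, eps=0.2):
    # PPO-style clipped surrogate for a single token/action.
    return torch.minimum(ratio * adv, ratio.clamp(1 - eps, 1 + eps) * adv)

ratio = torch.tensor(1.5, requires_grad=True)  # already above the clip band
for adv in (+1.0, -1.0):
    if ratio.grad is not None:
        ratio.grad = None
    clipped_term(ratio, torch.tensor(adv)).backward()
    print(f"adv={adv:+.0f} -> grad wrt ratio: {ratio.grad.item():+.1f}")
# Prints +0.0 for adv=+1 (gradient clipped away) but -1.0 for adv=-1:
# outside the band, updates flow in only one direction of the advantage.
```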
How can 🤖 learn from human workers to provably reduce their workload in factories? Our latest @RoboticsSciSys paper answers this question by proposing the first cost-optimal interactive learning (COIL) algorithm for multi-task collaboration.
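As a toy illustration of the kind of trade-off such interactive learning reasons about (a hypothetical formulation we wrote, not the COIL algorithm itself): hand a task to the robot only when its expected human cost, counting failure recovery, is below the human simply doing the task.

```python
def assign(p_success, human_cost=1.0, failure_overhead=1.5):
    # If the robot fails, the human both recovers from the failure and
    # redoes the task, so autonomy is only worth it when the expected
    # human cost of robot failures is below doing the task outright.
    expected_human_cost = (1 - p_success) * (human_cost + failure_overhead)
    return "robot" if expected_human_cost < human_cost else "human"

for task, p in [("sort_parts", 0.9), ("wire_harness", 0.4)]:
    print(task, "->", assign(p))  # sort_parts -> robot, wire_harness -> human
```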
The team from the RI @LeCARLab and the @nvidia GEAR robotics research lab recently presented ASAP's capabilities at #RSS2025 🤖🚀🦾 The article on this incredible work is out now!: ri.cmu.edu/robots-with-mo…
🚀 Can we make a humanoid move like Cristiano Ronaldo, LeBron James and Kobe Bryant? YES! 🤖 Introducing ASAP: Aligning Simulation and Real-World Physics for Learning Agile Humanoid Whole-Body Skills Website: agile.human2humanoid.com Code: github.com/LeCAR-Lab/ASAP
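As we understand the project page, the core mechanism is a delta action model: a residual network trained on real-world rollouts that corrects the simulator's actions so the aligned sim tracks real physics, after which the policy is fine-tuned in that aligned sim. A minimal sketch with illustrative names:

```python
import torch
import torch.nn as nn

class DeltaActionModel(nn.Module):
    """Residual network that outputs a corrective action for (state, action)."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

class ToySim:
    """Stand-in simulator with placeholder dynamics."""
    def step(self, s, a):
        return s + 0.1 * a

def aligned_step(sim, delta_model, s, a):
    # Inject the learned correction inside the simulator so the corrected
    # dynamics track the real robot; the policy is then fine-tuned here.
    return sim.step(s, a + delta_model(s, a))
```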
Developed at @CMU_Robotics, this robotic arm can "feel" using sound. SonicBoom allows robots to use sound to sense objects. The approach helps agricultural #robots harvest food under tough conditions, while navigating complex environments. @IEEESpectrum: spectrum.ieee.org/farm-robots-so…
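As a rough sketch of that recipe (an illustrative pipeline, not the SonicBoom code): contact sounds picked up by microphones embedded in the arm are featurized and mapped to a contact location by a learned regressor.

```python
import torch
import torch.nn as nn

def spectrogram_features(audio, n_fft=256, hop=128):
    # audio: [n_mics, n_samples] contact-sound recordings.
    feats = []
    for mic in audio:
        spec = torch.stft(
            torch.as_tensor(mic, dtype=torch.float32),
            n_fft=n_fft, hop_length=hop,
            window=torch.hann_window(n_fft), return_complex=True,
        )
        feats.append(spec.abs().clamp_min(1e-6).log())  # log-magnitude
    return torch.stack(feats).flatten()

class ContactLocalizer(nn.Module):
    """Maps flattened multi-mic spectrogram features to a 3D contact point."""
    def __init__(self, feat_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, feats):
        return self.net(feats)
```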
Congratulations to the DexWild team for their #RSS2025 Best Paper Award! 🦾🏅 DexWild is now open-source 💻➡️ x.com/_tonytao_/stat…
Thrilled to have received Best Paper Award at the EgoAct Workshop at RSS 2025! 🏆 We’ll also be giving a talk at the Imitation Learning Session I tomorrow, 5:30–6:30pm. Come to learn about DexWild! Work co-led by @mohansrirama, with @JasonJZLiu, @kenny__shaw, and @pathak2206.
🚀 Introducing LeVERB, the first 𝗹𝗮𝘁𝗲𝗻𝘁 𝘄𝗵𝗼𝗹𝗲-𝗯𝗼𝗱𝘆 𝗵𝘂𝗺𝗮𝗻𝗼𝗶𝗱 𝗩𝗟𝗔 (upper- & lower-body), trained on sim data and zero-shot deployed. Addressing interactive tasks: navigation, sitting, locomotion with verbal instruction. 🧵 ember-lab-berkeley.github.io/LeVERB-Website/
It was a dream come true to teach the course I wish existed at the start of my PhD. We built up the algorithmic foundations of modern-day RL, imitation learning, and RLHF, going deeper than the usual "grab bag of tricks". All 25 lectures + 150 pages of notes are now public! 🧵
ViSafe’s vision-only approach offers a lightweight, passive alternative that maintains strong safety performance, enabling broader deployment on agile, resource-constrained platforms. ✈️👀 #CMUrobotics Read the latest RI article covering the team's work! ri.cmu.edu/visafe-smarter…
🚀 Thrilled to present ViSafe, a vision-only airborne collision avoidance system that achieved drone-to-drone avoidance at 144 km/h. In an era of congested airspace and growing autonomy, reliable self-separation is paramount 🧵👇
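A generic illustration of why vision alone can suffice for self-separation (this is the textbook time-to-contact estimate, not the ViSafe method): the expansion rate of an intruder's image yields time-to-collision without any range sensor.

```python
def time_to_contact(size_prev, size_curr, dt):
    """tau = s / (ds/dt): seconds to contact at constant closing speed."""
    growth = (size_curr - size_prev) / dt
    return float("inf") if growth <= 0 else size_curr / growth

# An intruder's bounding box grows from 20 px to 25 px in 0.1 s:
print(time_to_contact(20.0, 25.0, 0.1))  # ~0.5 s -> trigger an evasive maneuver
```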
Congratulations to RI Ph.D. Zulekha Karachiwalla for her honorable mention at the @NCWIT Aspirations in Computing (AiC) Collegiate Awards! 👏 Zulekha presented her work on robot-assisted wound care 🏥 #NCWITAiC Learn more ⬇️⬇️ aspirations.org/news/award-pro…
"Generalization means being able to solve problems that the system hasn't been prepared for." Our latest work in #RSS2025 can automatically invent neural networks as state abstractions, which help robots generalize. Check it out here: jaraxxus-me.github.io/IVNTR/