AirLab
@AirLabCMU
We develop perception, control, & planning algorithms for robot autonomy | @CMU_Robotics | http://instagram.com/airlabcmu | http://youtube.com/airlab
It’s been an incredible journey—making robotics not just impactful, but also fun and full of discovery!
🐞A bug led to a RA-L paper🤪 Our paper AirIO started when we accidentally used raw IMU data in the body frame, and it worked better. Turns out, keeping body-frame observability helps generalization. No control inputs. No extra sensors. Just better IO. air-io.github.io
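For readers curious what that "bug" looks like in practice, here is a minimal sketch assuming a generic learned inertial-odometry setup; the tensors and `odometry_net` below are hypothetical stand-ins, not the AirIO code:

```python
import torch
import torch.nn as nn

# Hypothetical toy setup: a 1-second IMU window at 200 Hz.
T = 200
acc = torch.randn(T, 3)                 # body-frame accelerometer (m/s^2)
gyr = torch.randn(T, 3)                 # body-frame gyroscope (rad/s)
R_wb = torch.eye(3).expand(T, 3, 3)     # world-from-body rotations (placeholder)

# Conventional choice: rotate IMU samples into the world frame before the network.
world_acc = (R_wb @ acc.unsqueeze(-1)).squeeze(-1)
world_gyr = (R_wb @ gyr.unsqueeze(-1)).squeeze(-1)

# The "bug" that became the paper: feed raw body-frame IMU instead.
# Body-frame signals do not depend on the robot's global heading, so the
# learned model never has to generalize over absolute yaw.
body_features = torch.cat([acc, gyr], dim=-1)            # (T, 6)

# Hypothetical learned inertial-odometry head (stand-in for the real network).
odometry_net = nn.Sequential(nn.Flatten(start_dim=0), nn.Linear(T * 6, 3))
velocity = odometry_net(body_features)                   # predicted body velocity
print(velocity.shape)                                    # torch.Size([3])
```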
RayFronts code has been released! github.com/RayFronts/RayF… 🤖 Guide your robot with semantics within & beyond depth. 🖼️ Stop using slow SAM crops + CLIP pipelines. RayFronts gets dense, language-aligned features in one forward pass. 🚀 Test your mapping ideas in our pipeline!
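To make the pipeline contrast concrete, here is a rough sketch of crop-then-encode versus one dense forward pass; the encoders are hypothetical stand-ins, not the RayFronts, SAM, or CLIP APIs:

```python
import torch
import torch.nn as nn

image = torch.randn(1, 3, 480, 640)  # one RGB frame

# Crop-based pipeline: segment, crop each region, and encode every crop separately.
# Cost grows linearly with the number of masks (N encoder forward passes).
crops = [torch.randn(1, 3, 224, 224) for _ in range(20)]                    # stand-in SAM crops
crop_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 512))   # stand-in CLIP
crop_features = [crop_encoder(c) for c in crops]                            # 20 forward passes

# Dense pipeline (the RayFronts claim): a single forward pass yields a
# language-aligned feature for every pixel, ready to be lifted into the map.
dense_encoder = nn.Conv2d(3, 512, kernel_size=1)                            # stand-in encoder
dense_features = dense_encoder(image)                                       # (1, 512, 480, 640)
print(len(crop_features), dense_features.shape)
```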
Want to learn how to empower 🤖 with real-time scene understanding and exploration capabilities? Catch me, @hocherie1 & @QiuYuhengQiu presenting RayFronts at the #RSS2025 SemRob Workshop (OHE 122) & Epstein Plaza at 10:00 am PST today!
Catch our team @Parvkpr @PatrikarJay @AirLabCMU presenting and demoing ViSafe at #RSS2025 tomorrow! We'll be showing our payload demo & high-speed aerial collision avoidance results 🚀
"Generalization means being able to solve problems that the system hasn't been prepared for." Our latest work in #RSS2025 can automatically invent neural networks as state abstractions, which help robots generalize. Check it out here: jaraxxus-me.github.io/IVNTR/
Very surprised by the quality of the podcast-style overviews generated by @NotebookLM. The RayFronts team tried them out and were amazed by the accuracy of the explanation; some couldn't tell it was AI-generated. Should the AirLab start its own podcast channel? 📷
Introducing UFM, a Unified Flow & Matching model, the first to show that unifying the optical flow and image matching tasks is mutually beneficial, achieving SOTA. Check out UFM’s matching in action below! 👇 🌐 Website: uniflowmatch.github.io 🧵👇
The 2nd CMU Vision-Language-Autonomy Challenge is now open for registration! 🤖 The challenge centers on vision-language navigation and scene understanding, with a specific focus on resolving object-centric spatial relations. Check it out and register here! ➡️ ai-meets-autonomy.com/cmu-vla-challe…
On the morning before our presentation, we tested MAC-VO in the crowded and challenging environment of the conference hall. We believe that a truly generalizable SLAM system should be able to run and adapt **anywhere and anytime**. 🤖
🔥Best Paper Award at #ICRA2025 Thrilled to share that our paper MAC-VO has been awarded the 𝘽𝙚𝙨𝙩 𝘾𝙤𝙣𝙛𝙚𝙧𝙚𝙣𝙘𝙚 𝙋𝙖𝙥𝙚𝙧 𝘼𝙬𝙖𝙧𝙙 and the 𝘽𝙚𝙨𝙩 𝙋𝙖𝙥𝙚𝙧 𝘼𝙬𝙖𝙧𝙙 𝙤𝙣 𝙍𝙤𝙗𝙤𝙩 𝙋𝙚𝙧𝙘𝙚𝙥𝙩𝙞𝙤𝙣! Check out our project: mac-vo.github.io
Thanks to all the co-authors @YutianChen03, Zihao, Wenshan, and @smash0190. Also the entire team in the AirLab @ShiboZhaoSLAM @cannnnxu @MsFriendlyAI @Nik__V__ @OmarAlama and many lab members at @AirLabCMU whose Twitter names I don't know. Your support made this possible!
Very excited that MAC-VO has been nominated as a Best Paper Finalist at #ICRA2025! We’re planning a live demo at the conference. "Talk is cheap, show me the demo!" May 20, see you in Atlanta! 🚀