Adithya Murali
@Adithya_Murali_
Sr. Research Scientist at @NVIDIAAI. Foundation models for robotics. Previously PhD at @CMU_Robotics, @Berkeley_EECS, @MetaAI, AWS
I’m thrilled to announce that we just released GraspGen, a multi-year project we have been cooking at @NVIDIARobotics 🚀
GraspGen: A Diffusion-Based Framework for 6-DOF Grasping
Grasping is a foundational challenge in robotics 🤖, whether for industrial picking or…
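For intuition about what "diffusion-based 6-DOF grasping" means in general, here is a minimal sketch of a DDPM-style reverse process that denoises random 6-DOF pose vectors conditioned on an object point-cloud embedding. The names eps_model and cloud_feat are illustrative assumptions; this is a generic sketch of the technique, not GraspGen's actual implementation.

```python
import torch

# Assumed denoiser signature (hypothetical, for illustration only):
#   eps_model(noisy_poses, timesteps, cloud_feat) -> predicted noise
@torch.no_grad()
def sample_grasps(eps_model, cloud_feat, num_grasps=64, steps=100):
    """Generic DDPM ancestral sampling over 6-DOF grasp parameters
    (3 translation + 3 rotation, e.g. axis-angle), conditioned on an
    object point-cloud embedding."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    x = torch.randn(num_grasps, 6)  # start every grasp from pure noise
    for t in reversed(range(steps)):
        eps = eps_model(x, torch.full((num_grasps,), t), cloud_feat)
        # Posterior mean: (x - beta_t / sqrt(1 - alpha_bar_t) * eps) / sqrt(alpha_t)
        mean = (x - betas[t] / torch.sqrt(1.0 - alpha_bar[t]) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x  # each row is one candidate grasp pose
```

Conditioning the sampler on the object embedding is what lets the same model produce many diverse candidate grasps per object instead of a single regression output.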
wait omg that was me
Our co-founder, Jonathan Hurst, shares his vision for the path that humanoid robots will take to becoming part of our everyday lives. agilityrobotics.com/content/humano…
I wrote a fun little article about all the ways to dodge the need for real-world robot data. I think it has a cute title. sergeylevine.substack.com/p/sporks-of-agi
Excited to be speaking tomorrow at the Point Cloud Tutorial at #CVPR2025! We’re diving into all things 3D — from fundamental research in point clouds to industrial applications in Physical AI. 📍 Room 202A 🗓️ June 11, 3pm CDT Thanks @KaichunMo @XiaoyangWu_ for the invite!
Join the 2nd Point Cloud Tutorial for #CVPR2025
Theme: All You Need to Know About 3D Point Cloud
Date: June 11, full day
Location: Room 202A
TL;DR: For the 2nd point cloud tutorial at CVPR 2025, we aim to move beyond traditional topics like backbone design and…
Are Diffusion and Flow Matching the best generative modelling algorithms for behaviour cloning in robotics?
✅ Multimodality
❌ Fast, Single-Step Inference
❌ Sample Efficient
💡 We introduce IMLE Policy, a novel behaviour cloning approach that satisfies all of the above. 🧵👇
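For readers unfamiliar with IMLE (Implicit Maximum Likelihood Estimation), here is a minimal PyTorch sketch of the core idea under simple assumptions (an MLP generator; IMLEPolicy and imle_loss are illustrative names, not the paper's code): the generator maps observation plus latent noise to an action in one forward pass, and training pulls only the nearest of several sampled candidates toward each expert action, which preserves multimodality while keeping inference single-step.

```python
import torch
import torch.nn as nn

class IMLEPolicy(nn.Module):
    """One forward pass maps (observation, latent noise) -> action,
    so inference is single-step, unlike iterative diffusion sampling."""
    def __init__(self, obs_dim, act_dim, latent_dim=16, hidden=256):
        super().__init__()
        self.latent_dim = latent_dim
        self.net = nn.Sequential(
            nn.Linear(obs_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs, z):
        return self.net(torch.cat([obs, z], dim=-1))

def imle_loss(policy, obs, expert_act, num_samples=10):
    """Draw several candidate actions per expert action and pull only the
    nearest candidate toward the data point; the remaining candidates stay
    free to cover other modes of the demonstration distribution."""
    batch, m = obs.shape[0], num_samples
    z = torch.randn(batch, m, policy.latent_dim)
    obs_rep = obs.unsqueeze(1).expand(-1, m, -1)
    candidates = policy(obs_rep, z)                                # (B, m, act_dim)
    dists = (candidates - expert_act.unsqueeze(1)).pow(2).sum(-1)  # (B, m)
    return dists.min(dim=1).values.mean()
```

At test time the policy samples one z and runs a single forward pass, which is where the fast-inference claim comes from.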
#CoRL2025 poll: If there is a K-Pop performance by a Korean idol group at the banquet, would you enjoy it?
Excited to introduce PyRoki ("Python Robot Kinematics"): easier IK, trajectory optimization, motion retargeting... with an open-source toolkit on both CPU and GPU
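PyRoki's actual API isn't reproduced here, but as a generic illustration of the numerical problem an IK solver tackles, here is a damped least-squares sketch on a toy 2-link planar arm (the link lengths and the ik_dls helper are made up for the example):

```python
import numpy as np

# Toy 2-link planar arm so the sketch is self-contained.
# Link lengths are arbitrary illustrative values.
L1, L2 = 1.0, 0.8

def fk(q):
    """Forward kinematics: joint angles -> end-effector (x, y)."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jacobian(q):
    """Analytic Jacobian of fk with respect to the joint angles."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def ik_dls(target, q0, iters=100, damping=0.1):
    """Damped least-squares IK: q += J^T (J J^T + lambda^2 I)^-1 * error."""
    q = q0.copy()
    for _ in range(iters):
        e = target - fk(q)
        if np.linalg.norm(e) < 1e-6:
            break
        J = jacobian(q)
        q += J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(2), e)
    return q

q_sol = ik_dls(np.array([1.2, 0.6]), np.array([0.3, 0.3]))
```

Toolkits like PyRoki wrap this kind of solve behind kinematic-chain abstractions and batch it across CPU/GPU; the damping term keeps the update stable near singular configurations.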
Throwback to some experiments I ran in 2022 in our lab @BrownBigAI before humanoid robots became all the rage. Might revisit this "transformer" robot for demonstrations of my current work.
Constructing interactive simulated worlds has been a challenging problem, requiring considerable manual effort to create and articulate assets and to compose them into full scenes. In our new work, DRAWER, we made the process of creating scenes in simulation as simple…
"Gr00t" vs "Pi0" vs "Pi0 Fast". I compared top open-source robotic models, and here's a detailed overview based on our own experience:
When I was a founder, no one replied to my emails or returned calls. Now I'm an investor and everyone wants to meet me. This is a side of entrepreneurship no one talks about. I've been meaning to share these thoughts for a while now. As a founder, I would reach out to investors,…
🚀 Meet ToddlerBot 🤖– the adorable, low-cost, open-source humanoid anyone can build, use, and repair! We’re making everything open-source & hope to see more Toddys out there!
Time to democratize humanoid robots! Introducing ToddlerBot, a low-cost ($6K), open-source humanoid for robotics and AI research. Watch two ToddlerBots seamlessly chain their loco-manipulation skills to collaborate in tidying up after a toy session. toddlerbot.github.io
How to drive your research forward? “I tested the idea we discussed last time. Here are some results. It does not work. (… awkward silence)” Such conversations happen so often when meeting with students. How do we move forward? You need …
Teaching bimanual robot hands to perform very complex tasks has been notoriously challenging. In our work, Bidex: Bimanual Dexterity for Complex Tasks, we've developed a low-cost system that completes a wide range of highly dexterous tasks in real time. bidex-teleop.github.io
At Physical Intelligence (π), our mission is to bring general-purpose AI into the physical world. We're excited to show the first step towards this mission: our first generalist model π₀ 🧠 🤖 Paper, blog, uncut videos: physicalintelligence.company/blog/pi0
Can my robot cook my food, rearrange my dresser, tidy my messy table and do so much more without ANY demos or real-world training data? Introducing ManipGen: A generalist agent for manipulation that can solve long-horizon robotics tasks entirely zero-shot, from text input! 1/N
So, this is what we were up to for a while :) Building SOTA foundation models for media: text-to-video, video editing, personalized videos, video-to-audio. One of the most exciting projects I got to tech-lead during my time at Meta!
🎥 Today we’re premiering Meta Movie Gen: the most advanced media foundation models to date. Developed by AI research teams at Meta, Movie Gen delivers state-of-the-art results across a range of capabilities. We’re excited for the potential of this line of research to usher in…
Robots need strong visuo-motor representations to manipulate objects, but it’s hard to learn these using demo data alone. Our #RSS2024 project vastly improves robotic representations, using human affordances mined from Ego4D! w/ @mohansrirama @shikharbahl @gupta_abhinav_
Tiny animal movement that nobody asked for. But I'm here for it. 100% AI via Gen-3.
1. Tiny baby puppies
🎉 Exciting evening ahead! 🌆 I'll be presenting two papers this evening (yes, evening! Because 5-6:45 PM is evening! 🌙). First up is SPOC, a joint first-author work, our biggest embodied AI effort from PRIOR @allen_ai. 🤖✨ We showcase the impressive results of training an…