Nishanth Kumar
@nishanthkumar23
Robotics + AI PhD Student @MIT_LISLab @MIT_CSAIL. Formerly @NVIDIAAI, @rai_inst, @brownbigai, @vicariousai, and @uber. S.B. @BrownUniversity.
Can we get robots to improve at long-horizon tasks without supervision? Our latest work tackles this problem by planning to practice! Here's a teaser showing initial task -> autonomous practice -> eval (+ interference by a gremlin👿)
📢 Excited to announce the 1st workshop on Making Sense of Data in Robotics @corl_conf! #CORL2025 What makes robot learning data “good”? We focus on: 🧩 Data Composition 🧹 Data Curation 💡 Data Interpretability 📅 Papers due: 08/22/2025 🌐 tinyurl.com/corldata25 🧵(1/3)
Hello world! This is @nishanthkumar23 and I'll be taking over @MIT_CSAIL's X/Twitter & Instagram for 24 hours! I'm a 4th-year PhD @MITEECS working on AI/ML for Robotics and Computer Agents. Drop any and all questions about research, AI, MIT, or dogs (esp. robot dogs!) below 👇
Thrilled to share that I'll be starting as an Assistant Professor at Georgia Tech (@ICatGT / @GTrobotics / @mlatgt) in Fall 2026. My lab will tackle problems in robot learning, multimodal ML, and interaction. I'm recruiting PhD students this next cycle – please apply/reach out!
It was a ton of fun to help @WillShenSaysHi and @CaelanGarrett on new work that does GPU-accelerated TAMP! Exciting progress towards efficient test-time scaling for robots 🤖
Check out this MIT News article about our research on GPU-accelerated manipulation planning! news.mit.edu/2025/new-syste… @WillShenSaysHi @nishanthkumar23 @imankitgoyal
This has to be one of the coolest robot demo videos I’ve seen! Very impressive stuff from @physical_int
π-0.5 is here, and it can generalize to new homes! Some fun experiments with my colleagues at @physical_int, introducing π-0.5 (“pi oh five”). Our new VLA can put dishes in the sink, clean up spills and do all this in homes that it was not trained in🧵👇
Very interesting new work (that builds on our work on planning to practice!) on open-ended learning and exploration for LLM agents! In general, I’m quite excited for ideas from planning, RL, and robotics to help make LLM agents even more autonomous and capable! 🤖
🚀 Introducing 🧭MAGELLAN—our new metacognitive framework for LLM agents! It predicts its own learning progress (LP) in vast natural language goal spaces, enabling efficient exploration of complex domains.🌍✨Learn more: 🔗 arxiv.org/abs/2502.07709 #OpenEndedLearning #LLM #RL
I’m at #AAAI2025 for the next two days! Come say hi at today’s “Planning in the era of LLMs” or “GenPlan” workshops. Also keen to chat about any and all things related to test-time scaling via search, robotics, agents, or food in Philly!
Now accepted at #ICLR2025! openreview.net/forum?id=QOfsw…
Can program synthesis learn an abstract world model? Yichao Liang took a big step toward making that happen by using code generation to build a hierarchy of abstractions, grounded in perception and useful for planning.
Happy to share that AHA has been accepted to #ICLR2025! Will be going home!
Humans learn and improve from failures. Similarly, foundation models adapt based on human feedback. Can we leverage this failure understanding to enhance robotics systems that use foundation models? Introducing AHA—a vision-language model for detecting and reasoning over…
It seems that I am giving the commencement address at MIT in May 😳 news.mit.edu/2024/hank-gree…
I’m quite biased, but I think this is some cool and interesting work on using LLMs for robotics :)
Can we teach a robot its limits to do chores safely & correctly? 🧵 To help robots execute open-ended, multi-step tasks, MIT CSAIL researchers used vision models to see what’s near the machine & model its constraints. An LLM sketches up a plan that’s checked in a simulator to…
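To make the plan-then-check loop described in that tweet concrete, here is a minimal sketch of one way such a loop could look. This is my own illustration, not the CSAIL system's code: `query_llm` and `simulate` are hypothetical stand-ins for the real LLM call and the simulator/constraint check.

```python
def query_llm(task, feedback=None):
    """Hypothetical placeholder for an LLM call that drafts a step-by-step plan."""
    return ["pick up the cup", "move to the shelf", "place the cup on the shelf"]

def simulate(plan):
    """Hypothetical placeholder for checking the plan against the modeled scene constraints in a simulator."""
    return True, ""

def plan_with_verification(task, max_attempts=5):
    feedback = None
    for _ in range(max_attempts):
        plan = query_llm(task, feedback)   # LLM sketches a candidate plan
        ok, feedback = simulate(plan)      # simulator checks it before execution
        if ok:
            return plan                    # only verified plans reach the robot
    return None                            # give up (or ask for help) after repeated failures

print(plan_with_verification("put the cup on the shelf"))
```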
Excited to share that ReKep won Best Paper Award at CoRL LEAP workshop! Extracting plannable task representations from foundation models unlocks great potential for generalization in manipulation. Huge shout-out to my collaborators and advisor @chenwang_j @YunzhuLiYZ…
What structural task representation enables multi-stage, in-the-wild, bimanual, reactive manipulation? Introducing ReKep: LVM to label keypoints & VLM to write keypoint-based constraints, solve w/ optimization for diverse tasks, w/o task-specific training or env models. 🧵👇
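As a toy sketch of the recipe in that tweet (my assumptions, not the actual ReKep implementation): a large vision model labels task-relevant keypoints, a VLM writes keypoint-based constraint (cost) functions, and an optimizer solves for robot actions that satisfy them. All names and values below are made up for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Pretend a large vision model labeled two 3D keypoints in the scene.
keypoints = np.array([[0.30, 0.00, 0.20],   # e.g. mug handle
                      [0.50, 0.10, 0.40]])  # e.g. shelf edge

def vlm_written_cost(ee_position):
    """Stand-in for a VLM-generated constraint: bring the end effector to keypoint 0."""
    return float(np.linalg.norm(ee_position - keypoints[0]))

# No task-specific training or environment model: just optimize the written cost.
result = minimize(vlm_written_cost, x0=np.zeros(3))
print("end-effector target:", result.x.round(3))
```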
Happening in a few hours (4pm in the main auditorium Audimax!)
Curious to hear about creating generalist robots from leaders in the field? Don’t miss our panel “Representations for Generalist Robots” (4-5pm) @corl_conf LEAP workshop! Feat. @chelseabfinn @animesh_garg Vincent Vanhoucke @Marc__Toussaint @sidsrivast and Leslie Kaelbling!
Incredible insights from Prof. Tomás Lozano-Pérez of MIT during his keynote on the evolution of robotics! He took us on a journey through the field’s shifting landscape over the years, discussing the “inverted pendulum” theory of robotics. #CoRL2024 #RobotLearning #AI #Robotics
Our NVIDIA robotics research team is hiring PhD student interns! Locations: Bay Area or Seattle. Apply: nvidia.wd5.myworkdayjobs.com/en-US/NVIDIAEx… Past projects: AHA @DJiafei (aha-vlm.github.io), OPTIMUS @mihdalal (mihdalal.github.io/optimus/), IntervenGen @ryan_hoque (sites.google.com/view/interveng…)
Our CoRL 2024 paper shows that reinforcement learning can enable robots to learn skills via real-world practice, without any demonstrations or simulation engineering. Rewards are provided by language/vision models, and the robots’ mobility enables autonomous exploration. 1/N
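To unpack the general pattern in that tweet, here is a toy, tabular sketch of a policy improving from rewards that come from a vision/language model rather than hand-written reward code. It is my own illustration under those assumptions, not the paper's method; `fake_vlm_reward` and the 1-D world are hypothetical stand-ins for the real VLM query and the robot's environment.

```python
import random

GOAL, N_STATES, ACTIONS = 9, 10, (-1, +1)

def fake_vlm_reward(state, task):
    """Stand-in for querying a vision/language model: 1.0 when the observation would show task success."""
    return 1.0 if state == GOAL else 0.0

def practice(episodes=200, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning where every reward comes from the (fake) VLM, not hand-coded logic."""
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        for _ in range(50):
            a = random.choice(ACTIONS) if random.random() < eps else max(ACTIONS, key=lambda act: q[(s, act)])
            s_next = min(max(s + a, 0), N_STATES - 1)
            r = fake_vlm_reward(s_next, "reach the goal")  # reward from the "VLM"
            q[(s, a)] += alpha * (r + gamma * max(q[(s_next, b)] for b in ACTIONS) - q[(s, a)])
            s = s_next
            if s == GOAL:
                break
    return q

q_table = practice()
print("greedy first action:", max(ACTIONS, key=lambda act: q_table[(0, act)]))
```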
It’s been a lot of fun to be part of this work! Failure reasoning is a critical component for robot planning and learning methods, and it’s really cool to see how well AHA generalizes to new situations!!
Humans learn and improve from failures. Similarly, foundation models adapt based on human feedback. Can we leverage this failure understanding to enhance robotics systems that use foundation models? Introducing AHA—a vision-language model for detecting and reasoning over…
We’ve decided to extend the submission deadline for this year’s LEAP workshop - new deadline is October 4!