Arjun Gupta @ RSS 2025
@arjun__gupta
PhD Student at UIUC
🚀 Introducing RIGVid: Robots Imitating Generated Videos! Robots can now perform complex tasks—pouring, wiping, mixing—just by imitating generated videos, purely zero-shot! No teleop. No OpenX/DROID/Ego4D. No videos of human demonstrations. Only AI-generated video demos 🧵👇
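A minimal sketch of the general recipe such zero-shot video imitation implies (not the RIGVid implementation; the pose tracker and grasp pose below are placeholder assumptions): track the manipulated object's 6-DoF pose across the generated frames, then replay that relative motion with the gripper.

```python
import numpy as np

def estimate_object_pose(frame_rgbd):
    """Placeholder 6-DoF object pose tracker (returns a 4x4 pose in the
    camera frame). A real system would run a model-based or learned
    tracker on each generated frame."""
    return np.eye(4)

def video_to_ee_waypoints(frames, grasp_pose_cam):
    """Whatever rigid motion the object undergoes in the video, the
    grasping end-effector should undergo the same motion: compose each
    frame's object pose relative to frame 0 with the initial grasp pose."""
    poses = [estimate_object_pose(f) for f in frames]
    T0_inv = np.linalg.inv(poses[0])
    return [(T @ T0_inv) @ grasp_pose_cam for T in poses]

# Toy usage: 10 dummy frames in, a Cartesian end-effector trajectory out.
waypoints = video_to_ee_waypoints(frames=[None] * 10, grasp_pose_cam=np.eye(4))
```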
How do you build a robot that can open unfamiliar objects in new places? This study put mobile manipulation systems through 100+ real-world tests and found that perception, not precision, is the real challenge.🤖 ▶️youtube.com/watch?v=QcbMnE… 📑arjung128.github.io/opening-articu…
Come by the @GoogleDeepMind booth at the @RoboticsSciSys conference in LA! We’re demoing Gemini Robotics On-Device live—come check it out!
Excited to release Gemini Robotics On-Device and a bunch of goodies today 🍬 on-device VLA that you can run on a GPU 🍬 open-source MuJoCo sim (& benchmark) for bimanual dexterity 🍬 broadening access to these models for academics and developers deepmind.google/discover/blog/…
How can we build mobile manipulation systems that generalize to novel objects and environments? Come check out MOSART at #RSS2025! Paper: arxiv.org/abs/2402.17767 Project webpage: arjung128.github.io/opening-articu… Code: github.com/arjung128/stre…
The Workshop on Mobile Manipulation at #RSS2025 is kicking off with a talk from @leto__jean! Come by EEB 132 if you’re here in person, or join us on Zoom (link on the website)
🚀 #RSS2025 sneak peek! We teach robots to precisely shimmy objects with fingertip micro-vibrations—no regrasp, no fixtures. 🎶⚙️ Watch Vib2Move in action 👇 vib2move.github.io #robotics #dexterousManipulation
Soaking up the sun at the Robotics: Science and Systems conference in Los Angeles this weekend? Stop by the Hello Robot booth to say hi and get a hands-on look at Stretch! Hope to see you there 😎 roboticsconference.org
This was a key feature in enabling DexterityGen, our teleop that can support tasks like using a screwdriver. Led by @zhaohengyin, now open source
Just open-sourced Geometric Retargeting (GeoRT) — the kinematic retargeting module behind DexterityGen. Includes tools for importing custom hands. Give it a try: github.com/facebookresear… Software by @berkeley_ai and @AIatMeta. More coming soon.
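As a rough illustration of what kinematic retargeting means here (a sketch, not GeoRT's actual algorithm or API): optimize robot hand joint angles so the robot's fingertips, under a differentiable forward-kinematics model, match the human fingertip keypoints.

```python
import torch

def retarget_step(q, fk_fingertips, human_tips, lr=0.05):
    """One gradient step of keypoint-matching retargeting: nudge joint
    angles q so the robot fingertips (from an assumed differentiable FK
    function returning an (F, 3) tensor) line up with the human
    fingertip keypoints."""
    q = q.clone().requires_grad_(True)
    loss = torch.sum((fk_fingertips(q) - human_tips) ** 2)
    loss.backward()
    with torch.no_grad():
        q -= lr * q.grad
    return q.detach(), loss.item()

# Toy usage with a linear stand-in "hand" (5 fingertips, 16 joints) so the
# sketch runs end to end; a real hand would supply its true FK.
J = 0.1 * torch.randn(5 * 3, 16)
fk = lambda q: (J @ q).reshape(5, 3)
q, tips = torch.zeros(16), 0.05 * torch.randn(5, 3)
for _ in range(200):
    q, err = retarget_step(q, fk, tips)
```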
Our paper, "One-Shot Real-to-Sim via End-to-End Differentiable Simulation and Rendering", was recently published at IEEE RA-L. Our method turns a single RGB-D video of a robot interacting with the environment, along with the tactile measurements, into a generalizable world model.…
Sparsh-skin, our next iteration of general pretrained touch representations. Skin-like tactile sensing is catching up with the more prominent vision-based sensors, driven by the explosion of new dexterous hands. A crucial step toward leveraging full-hand sensing; work led by @akashshrm02 🧵👇
Robots need touch: human-like hands won’t reach general manipulation without it. Yet today’s approaches either skip tactile sensing or train a separate architecture for each tactile task. Can one model improve many tactile tasks? 🌟Introducing Sparsh-skin: tinyurl.com/y935wz5c 1/6
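A minimal sketch of the "one tactile backbone, many task heads" pattern described above (illustrative only; this is not the Sparsh-skin architecture, and the layer sizes and task heads are made up):

```python
import torch
import torch.nn as nn

class TactileEncoder(nn.Module):
    """Shared encoder: a window of skin sensor readings -> one embedding."""
    def __init__(self, n_taxels=64, window=32, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(n_taxels * window, 256), nn.ReLU(),
            nn.Linear(256, dim))
    def forward(self, x):                 # x: (batch, window, n_taxels)
        return self.net(x)

encoder = TactileEncoder()                # ideally pretrained, then reused
heads = {"force": nn.Linear(128, 3),      # e.g. contact force regression
         "pose":  nn.Linear(128, 6),      # e.g. in-hand object pose
         "slip":  nn.Linear(128, 2)}      # e.g. slip detection
x = torch.randn(8, 32, 64)                # dummy batch of skin signals
z = encoder(x)
outputs = {task: head(z) for task, head in heads.items()}
```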
How can we train robot policies without any robot data—just two-view videos of humans manipulating tools? Check out our new paper: "Tool-as-Interface: Learning Robot Policies from Human Tool Usage through Imitation Learning". Honored to be a Best Paper Finalist at the…
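One building block a two-view setup typically relies on, sketched under standard assumptions (generic DLT triangulation, not the paper's pipeline): recover a tool keypoint's 3-D position from its pixel coordinates in two calibrated cameras.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation: 3-D point from two pixel observations
    and the two 3x4 camera projection matrices."""
    A = np.stack([uv1[0] * P1[2] - P1[0],
                  uv1[1] * P1[2] - P1[1],
                  uv2[0] * P2[2] - P2[0],
                  uv2[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]           # null vector of A
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Toy usage: two cameras 10 cm apart, one tool keypoint 1.5 m away.
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 1.5])
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```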
HumanUP has been accepted to #RSS2025! Looking forward to seeing you in LA this June!
🤖 Want to train a humanoid to stand up safely and smoothly? Try HumanUP: Sim-to-Real Humanoid Getting-Up Policy Learning! 🚀 ✨ HumanUP is a two-stage RL framework that enables humanoid robots to stand up from any pose (facing up or down) with stability and safety. Check out…
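A rough skeleton of the generic two-stage idea described above (the env and policy interfaces are placeholder assumptions, not the HumanUP code): first discover some feasible getting-up motion under a sparse reward, then train a deployable policy that tracks it with smoothness and safety penalties.

```python
def stage1_discover(env, policy, steps):
    """Stage 1: maximize a sparse 'got up' signal (e.g. final torso height)
    with few constraints, to find *some* feasible getting-up motion."""
    for _ in range(steps):
        rollout = env.rollout(policy)
        policy.update(reward=rollout.final_torso_height)
    return env.rollout(policy)            # reference trajectory

def stage2_deploy(env, policy, reference, steps, w_smooth=0.1):
    """Stage 2: track the discovered motion while penalizing jerky,
    unsafe actions, so the final policy is fit for sim-to-real transfer."""
    for _ in range(steps):
        rollout = env.rollout(policy)
        reward = (rollout.tracking_score(reference)
                  - w_smooth * rollout.action_jerk())
        policy.update(reward=reward)
    return policy
```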
Excited to organize Workshop on Learning Meets Model-Based Methods for Contact-Rich Manipulation @ ICRA 2025! We welcome submissions on a range of topics—check out our website for details: contact-rich.github.io Join us for an incredible lineup of speakers! #ICRA2025
How can VLMs specify visual rewards for diverse manipulation tasks and evolve them iteratively? Introducing Iterative Keypoint Reward (IKER)—a visually grounded reward that leverages VLMs for flexible, human-like task execution through a real-to-sim-to-real pipeline. 🧵🔽
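A hedged skeleton of the kind of loop such a method implies (the VLM call and sim-training hook below are stubs/assumptions, not the IKER implementation): the VLM emits a reward function over tracked keypoints as Python source, a policy is trained in simulation with that reward, and the outcome is fed back so the VLM can revise it.

```python
import numpy as np

def query_vlm_for_reward(image, keypoints, task, feedback=None):
    """Stub for a VLM call that returns reward *source code*. Here it
    returns a fixed example: bring keypoint 0 (e.g. a spout) above
    keypoint 1 (e.g. a cup)."""
    return ("def reward(kp):\n"
            "    xy_dist = np.linalg.norm(kp[0][:2] - kp[1][:2])\n"
            "    return -xy_dist + max(0.0, kp[0][2] - kp[1][2])\n")

def iker_style_loop(image, keypoints, task, train_in_sim, rounds=3):
    """train_in_sim is an assumed hook: trains a policy under the given
    reward in simulation and returns (policy, feedback_for_the_vlm)."""
    policy, feedback = None, None
    for _ in range(rounds):
        src = query_vlm_for_reward(image, keypoints, task, feedback)
        scope = {"np": np}
        exec(src, scope)                  # materialize the generated reward
        policy, feedback = train_in_sim(scope["reward"])
    return policy
```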
Our RoboEXP will be presented at CoRL Poster Session 2, #14, tomorrow. Feel free to drop by to learn more!!