Tairan He
@TairanHe99
Robotics & AI PhD Student @CMU_Robotics Research Intern at @NVIDIA Prev: @MSFTResearch @sjtu1896 Embodied AI; Humanoid; Robot Learning
🚀 Can we make a humanoid move like Cristiano Ronaldo, LeBron James and Kobe Bryant? YES! 🤖 Introducing ASAP: Aligning Simulation and Real-World Physics for Learning Agile Humanoid Whole-Body Skills Website: agile.human2humanoid.com Code: github.com/LeCAR-Lab/ASAP
How to generate billion-scale manipulation demonstrations easily? Let us leverage generative models! 🤖✨ We introduce Dex1B, a framework that generates 1 BILLION diverse dexterous hand demonstrations for both grasping 🖐️and articulation 💻 tasks using a simple C-VAE model.
I'm observing a mini Moravec's paradox within robotics: gymnastics that are difficult for humans are much easier for robots than "unsexy" tasks like cooking, cleaning, and assembling. It creates cognitive dissonance for people outside the field: "so, robots can parkour &…
🧠With the shift in humanoid control from pure RL to learning from demonstrations, we take a step back to unpack the landscape. 🔗breadli428.github.io/post/lfd/ 🚀Excited to share our blog post on Feature-based vs. GAN-based Learning from Demonstrations—when to use which, and why it…
I've been a bit quiet on X recently. The past year has been a transformational experience. Grok-4 and Kimi K2 are awesome, but the world of robotics is a wondrous wild west. It feels like NLP in 2018 when GPT-1 was published, along with BERT and a thousand other flowers that…
We now open-source a general sim2sim/sim2real deployment codebase for FALCON: github.com/LeCAR-Lab/FALC…, supporting both the Unitree SDK and the Booster SDK!
🦾How can humanoids unlock real strength for heavy-duty loco-manipulation? Meet FALCON🦅: Learning Force-Adaptive Humanoid Loco-Manipulation. 🌐: lecar-lab.github.io/falcon-humanoi… See the details below👇:
Highly recommended — a tremendous amount of effort to rigorously test an assumption often taken for granted by the community: "Does a multi-task pretrained vision-language policy actually outperform single-task policies?"
TRI's latest Large Behavior Model (LBM) paper landed on arXiv last night! Check out our project website: toyotaresearchinstitute.github.io/lbm1/ One of our main goals for this paper was to put out a very careful and thorough study on the topic to help people understand the state of the…
The team from the RI @LeCARLab and the @nvidia GEAR robotics research lab recently presented ASAP's capabilities at #RSS2025 🤖🚀🦾 The article on this incredible work is out now!: ri.cmu.edu/robots-with-mo…
We're entering an era where frontier AI researchers and robotics AI startups are getting valued and "traded" like star athletes — with seed rounds looking like full-on acquisitions from just a few years ago. It’s intense, but also exciting. The pace of progress is so fast that…
Today, we’re launching Genesis AI — a global physical AI lab and full-stack robotics company — to build generalist robots and unlock unlimited physical labor. We’re backed by $105M in seed funding from @EclipseVentures, @khoslaventures, @Bpifrance, HSG, and visionaries…
Recording of my talk "From Sim2Real 1.0 to 4.0 for Humanoid Whole-Body Control and Loco-Manipulation" (at ICRA&CVPR workshops and Caltech): youtu.be/AGNcw4qnimk?si… Slides: drive.google.com/file/d/1h5MxNH…
Simulation could give you so much more than you think before you do real-world teleop. Check out @HaoruXue's latest work on latent humanoid VLA by connecting low-level humanoid control with high-level vision-language understanding—pure Sim2Real magic!
🚀 Introducing LeVERB, the first 𝗹𝗮𝘁𝗲𝗻𝘁 𝘄𝗵𝗼𝗹𝗲-𝗯𝗼𝗱𝘆 𝗵𝘂𝗺𝗮𝗻𝗼𝗶𝗱 𝗩𝗟𝗔 (upper- & lower-body), trained on sim data and zero-shot deployed. Addressing interactive tasks: navigation, sitting, locomotion with verbal instruction. 🧵 ember-lab-berkeley.github.io/LeVERB-Website/
I'm presenting ASAP today at RSS Humanoid Session starting 4:30pm. See you then! Location: Bovard Auditorium Time: 4:30pm–5:30pm
"We perceive in order to act and we act in order to perceive" Enabling active perception will unlock better learning from human data in the future.
Your bimanual manipulators might need a Robot Neck 🤖🦒 Introducing Vision in Action: Learning Active Perception from Human Demonstrations ViA learns task-specific, active perceptual strategies—such as searching, tracking, and focusing—directly from human demos, enabling robust…
How do we improve VLA generalization? 🤔 Last week we upgraded #NVIDIA GR00T N1.5 with minor VLM tweaks, FLARE, and richer data mixtures (DreamGen, etc.) ✨. N1.5 yields better language following — post-trained on unseen Unitree G1 with 1K trajectories, it follows commands on…
Demos like this make me wanna work on BC:)
Today we're excited to share a glimpse of what we're building at Generalist. As a first step towards our mission of making general-purpose robots a reality, we're pushing the frontiers of what end-to-end AI models can achieve in the real world. Here's a preview of our early…
🚀Introducing GMT — a general motion tracking framework that enables high-fidelity motion tracking on humanoid robots by training a single policy from large, unstructured human motion datasets. 🤖A step toward general humanoid controllers. Project Website:…
Swarm navigation at 20 m/s with no communication, no state estimation, & on a $21 computer? This paper combines deep learning + differentiable sim for zero-shot sim-to-real flight. It is an underrated breakthrough. Paper: arxiv.org/abs/2407.10648 #naturemachineintelligence
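The core idea named here — differentiating through a simulator to train a controller, rather than treating the rollout as a black box — can be illustrated with a toy sketch. Everything below (point-mass dynamics, target, learning rate) is invented for illustration and is unrelated to the paper's actual quadrotor setup; for linear dynamics the rollout gradient even has a closed form, so no autodiff framework is needed:

```python
# Toy "differentiable simulation": optimize a constant thrust u so that a
# point mass reaches a target position, by gradient descent through the rollout.

DT, STEPS, TARGET = 0.01, 100, 1.0

def rollout(u, x=0.0, v=0.0):
    """Simulate the point mass under constant thrust u; return final position."""
    for _ in range(STEPS):
        v += u * DT   # thrust integrates into velocity
        x += v * DT   # velocity integrates into position
    return x

# The dynamics are linear, so d(final x)/du has a closed form:
#   x_T = x_0 + T*v_0*dt + u * dt^2 * T*(T+1)/2
DXDU = DT * DT * STEPS * (STEPS + 1) / 2

u = 0.0
for _ in range(200):
    err = rollout(u) - TARGET
    u -= 2.0 * err * DXDU   # gradient step on the squared terminal error

print(round(rollout(u), 4))  # prints 1.0
```

In a real differentiable simulator the gradient comes from automatic differentiation through the physics step rather than a hand-derived formula, but the training loop looks the same: roll out, measure a loss, backpropagate through the dynamics, update the policy.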
Introducing Mobi-π: Mobilizing Your Robot Learning Policy. Our method: ✈️ enables flexible mobile skill chaining 🪶 without requiring additional policy training data 🏠 while scaling to unseen scenes 🧵↓
Cool and solid work. The Vision Pro humanoid teleop setup is similar to what we did with OmniH2O (omni.human2humanoid.com), but this work used MoE distillation and better lidar odometry on the G1 robot. Excited to see people pushing the limits of humanoid whole-body teleop!
🤖 Ever dreamed of controlling a humanoid robot to perform complex, long-horizon tasks — using just a single Vision Pro? 🎉 Meet CLONE: a holistic, closed-loop, whole-body teleoperation system for long-horizon humanoid control! 🏃♂️🧍 CLONE enables rich and coordinated…
How to learn dexterous manipulation for any robot hand from a single human demonstration? Check out DexMachina, our new RL algorithm that learns long-horizon, bimanual dexterous policies for a variety of dexterous hands, articulated objects, and complex motions.
Ever seen a humanoid robot serve beer without spilling a drop? Now you have. 🍻 Introducing Hold My Beer: learning gentle locomotion + stable end-effector control. lecar-lab.github.io/SoFTA/
🤖Can a humanoid robot carry a full cup of beer without spilling while walking 🍺? Hold My Beer! Introducing Hold My Beer🍺: Learning Gentle Humanoid Locomotion and End-Effector Stabilization Control Project: lecar-lab.github.io/SoFTA/ See more details below👇