Zixuan Chen
@C___eric417
Incoming PhD at @UCSanDiego; Bachelor's Degree at @FudanUni
🚀Introducing GMT — a general motion tracking framework that enables high-fidelity motion tracking on humanoid robots by training a single policy from large, unstructured human motion datasets. 🤖A step toward general humanoid controllers. Project Website:…
How to generate billion-scale manipulation demonstrations easily? Let us leverage generative models! 🤖✨ We introduce Dex1B, a framework that generates 1 BILLION diverse dexterous hand demonstrations for both grasping 🖐️ and articulation 💻 tasks using a simple C-VAE model.
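For readers who want to see the conditional-VAE idea in code, here is a minimal PyTorch sketch of a demonstration generator conditioned on object features. The `GraspCVAE` name, the dimensions, and the loss weighting are illustrative assumptions, not Dex1B's actual implementation.

```python
# Minimal conditional-VAE sketch (illustrative only; dimensions and names are
# assumptions, not the actual Dex1B implementation).
import torch
import torch.nn as nn

class GraspCVAE(nn.Module):
    def __init__(self, grasp_dim=30, cond_dim=128, latent_dim=32):
        super().__init__()
        # Encoder: (grasp, object condition) -> latent Gaussian parameters.
        self.encoder = nn.Sequential(
            nn.Linear(grasp_dim + cond_dim, 256), nn.ReLU(),
            nn.Linear(256, 2 * latent_dim),
        )
        # Decoder: (latent sample, object condition) -> grasp parameters.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, 256), nn.ReLU(),
            nn.Linear(256, grasp_dim),
        )

    def forward(self, grasp, cond):
        mu, logvar = self.encoder(torch.cat([grasp, cond], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        recon = self.decoder(torch.cat([z, cond], dim=-1))
        return recon, mu, logvar

def cvae_loss(recon, grasp, mu, logvar, beta=1e-3):
    recon_loss = ((recon - grasp) ** 2).mean()
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + beta * kl

# At generation time, sample latents and condition on new objects to produce
# diverse grasp candidates (to be filtered/validated in simulation afterwards).
model = GraspCVAE()
cond = torch.randn(4, 128)   # hypothetical object feature vectors
z = torch.randn(4, 32)
grasps = model.decoder(torch.cat([z, cond], dim=-1))
```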
How can we leverage diverse human videos to improve robot manipulation? Excited to introduce EgoVLA — a Vision-Language-Action model trained on egocentric human videos by explicitly modeling wrist & hand motion. We build a shared action space between humans and robots, enabling…
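One simple way to picture a shared human-robot action space is to express both human video labels and robot commands as the same wrist-pose plus finger-joint vector, with a retargeting step in between. The sketch below is only a guess at such an interface; the `SharedAction` fields and the `retarget_human_to_robot` helper are hypothetical, not EgoVLA's actual definition.

```python
# Hypothetical shared action space: both human video labels and robot commands
# are expressed as wrist pose + hand joint angles, so one policy head serves both.
from dataclasses import dataclass
import numpy as np

@dataclass
class SharedAction:
    wrist_pos: np.ndarray     # (3,) wrist position in the camera/base frame
    wrist_rot: np.ndarray     # (3,) axis-angle wrist orientation
    hand_joints: np.ndarray   # (N,) finger joint angles

    def to_vector(self) -> np.ndarray:
        return np.concatenate([self.wrist_pos, self.wrist_rot, self.hand_joints])

def retarget_human_to_robot(human: SharedAction, scale: float = 0.9) -> SharedAction:
    # Toy retargeting: scale the workspace and clip joints to assumed robot limits.
    return SharedAction(
        wrist_pos=human.wrist_pos * scale,
        wrist_rot=human.wrist_rot,
        hand_joints=np.clip(human.hand_joints, -1.5, 1.5),
    )

human = SharedAction(np.zeros(3), np.zeros(3), np.zeros(16))
robot_cmd = retarget_human_to_robot(human)
print(robot_cmd.to_vector().shape)   # (22,) action vector shared by both domains
```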
🚀 Introducing LeVERB, the first 𝗹𝗮𝘁𝗲𝗻𝘁 𝘄𝗵𝗼𝗹𝗲-𝗯𝗼𝗱𝘆 𝗵𝘂𝗺𝗮𝗻𝗼𝗶𝗱 𝗩𝗟𝗔 (upper- & lower-body), trained on sim data and zero-shot deployed. Addressing interactive tasks: navigation, sitting, locomotion with verbal instruction. 🧵 ember-lab-berkeley.github.io/LeVERB-Website/
#RSS2025 Excited to be presenting our HumanUP tomorrow at the Humanoids Session (Sunday, June 22, 2025) 📺 Spotlight talk: 4:30pm–5:30pm, Bovard Auditorium 📜 Poster: 6:30pm–8:00pm, #3, Associates Park
The performance looks good! Congrats.
Today we're excited to share a glimpse of what we're building at Generalist. As a first step towards our mission of making general-purpose robots a reality, we're pushing the frontiers of what end-to-end AI models can achieve in the real world. Here's a preview of our early…
Very impressive! 2025 will be the year we go from single-motion agile WBC policies (e.g., ASAP) to versatile, agile & steerable "Behavioral Foundation Models" for humanoids! We will also likely see research combining such models with VLA-style System-2 reasoning toward the end of 2025!
Check out Zixuan's recent progress on general humanoid controllers! General humanoid controllers stand in contrast to systems that maintain multiple skill networks and call each skill separately. Once we have such general controllers, the humanoid intelligence problem can be simply formulated…
💡Wow—super dynamic motion controlled by a unified general policy! 🔗 gmt-humanoid.github.io
Feels like the recipe for training a general whole-body controller has almost converged: MoE oracle teacher → generalist student policy.
In our previous research:
- HOVER…
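To make the "MoE oracle teacher → generalist student" recipe concrete, here is a hedged PyTorch sketch of the distillation step. The network sizes, the gating design, and the DAgger-style regression loop are assumptions for illustration, not the exact GMT or HOVER training code.

```python
# Sketch of distilling a mixture-of-experts teacher into one student policy.
# Shapes, gating design, and the regression loss are illustrative assumptions.
import torch
import torch.nn as nn

class MoETeacher(nn.Module):
    """Oracle teacher: several expert policies blended by a gating network."""
    def __init__(self, obs_dim=256, act_dim=29, num_experts=4):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(obs_dim, 512), nn.ELU(), nn.Linear(512, act_dim))
            for _ in range(num_experts)
        )
        self.gate = nn.Sequential(nn.Linear(obs_dim, num_experts), nn.Softmax(dim=-1))

    def forward(self, obs):
        w = self.gate(obs)                                         # (B, E) expert weights
        acts = torch.stack([e(obs) for e in self.experts], dim=1)  # (B, E, A)
        return (w.unsqueeze(-1) * acts).sum(dim=1)                 # weighted blend

class Student(nn.Module):
    """Single generalist policy trained to imitate the teacher's actions."""
    def __init__(self, obs_dim=256, act_dim=29):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 512), nn.ELU(),
            nn.Linear(512, 512), nn.ELU(),
            nn.Linear(512, act_dim),
        )

    def forward(self, obs):
        return self.net(obs)

teacher, student = MoETeacher(), Student()
opt = torch.optim.Adam(student.parameters(), lr=3e-4)

# DAgger-style distillation: roll out the student (here, a stand-in batch of
# observations), query the teacher for target actions, regress the student.
for _ in range(100):
    obs = torch.randn(1024, 256)    # placeholder for simulator rollouts
    with torch.no_grad():
        target = teacher(obs)
    loss = ((student(obs) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```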
Impressive work. A lot of work this year shows that good engineering can really demystify WBC. There is no more excuse for crappy policies. Next steps: making WBC policies more accessible, and making them easier to interface with vision-language models.
Humanoids have shown incredible capabilities in simulation. What’s missing in the real world is a unified policy that can generalize across all these motions. Now it’s here!!! Use it to power your own tasks and build the next generation of humanoid applications.
Single generalist policy for tracking diverse, agile humanoid motions! Check out our new paper, GMT—a universal motion tracking framework leveraging Adaptive Sampling and a Motion Mixture-of-Experts architecture to achieve seamless, high-fidelity motion tracking. Thrilled to be…
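One common reading of "Adaptive Sampling" is curriculum-style reweighting of motion clips by recent tracking error, so the policy keeps revisiting clips it still fails on. The sketch below is a generic version of that idea with assumed data structures and an assumed error signal, not GMT's actual sampler.

```python
# Generic adaptive-sampling sketch: sample motion clips in proportion to their
# recent tracking error so hard clips are revisited more often.
# The data structures and the error signal are assumptions for illustration.
import numpy as np

class AdaptiveClipSampler:
    def __init__(self, num_clips, smoothing=0.9, floor=0.05):
        self.error = np.ones(num_clips)   # running per-clip tracking error
        self.smoothing = smoothing        # EMA factor for error updates
        self.floor = floor                # keep some probability on easy clips

    def probabilities(self):
        p = self.error + self.floor * self.error.mean()
        return p / p.sum()

    def sample(self, batch_size, rng=np.random):
        return rng.choice(len(self.error), size=batch_size, p=self.probabilities())

    def update(self, clip_ids, tracking_errors):
        # EMA update of per-clip error from the latest rollouts.
        for cid, err in zip(clip_ids, tracking_errors):
            self.error[cid] = self.smoothing * self.error[cid] + (1 - self.smoothing) * err

# Usage inside a (hypothetical) RL training loop:
sampler = AdaptiveClipSampler(num_clips=8000)
clip_ids = sampler.sample(batch_size=64)
errors = np.random.rand(64)   # stand-in for measured tracking error per rollout
sampler.update(clip_ids, errors)
```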
Motion tracking is a hard problem, especially when you want to track a lot of motions with only a single policy. Good to know that MoE distilled student works so well, congrats @C___eric417 on such exciting results!
Thanks, Xiaolong, for summarizing our work. Engineering is quite important in robotics. For motion tracking, even though DeepMimic was proposed by @xbpeng4 years ago, there are still tons of things to do to make it work on a real robot.
This work is not about a new technique. GMT (General Motion Tracking) shows that with good engineering practices you can actually train a single unified whole-body control policy for all agile motions, and it works in the real world, deployed directly sim2real without adaptation. This is…
Coordinating diverse, high-speed motions with a single control policy has been a long-standing challenge. Meet GMT—our universal tracker that keeps up with a whole spectrum of agile movements, all with one single policy.
Meet GMT: a new framework by the @C___eric417 team that enables high-fidelity motion tracking on humanoid robots via a single policy trained on large, unstructured human motion datasets.
All with one policy 😋 A general full-body tracker!
Full body tracker now on a deployed G1 🤩
There are so many tracking papers nowadays. One policy that can track all these agile motions is impressive. Check out this GMT paper.