Guangqi Jiang
@LuccaChiang
MSCS @UCSanDiego | Prev. RA @Tsinghua_IIIS | Robot Learning, Embodied AI, and Vision
What makes visual representations truly effective for robotics? Introducing Manipulation Centricity, a metric that bridges visual representations & manipulation performance, leading to a simple yet powerful representation trained on large-scale robotic datasets. 👉: robots-pretrain-robots.github.io
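As I read it, the key sanity check behind such a metric is that a representation's manipulation-centricity score should track downstream control performance. A minimal sketch of that correlation check (encoder names aside, every number below is a hypothetical placeholder):

```python
# Hypothetical check: does a "manipulation centricity" score predict
# downstream manipulation success across pretrained visual encoders?
from scipy.stats import spearmanr

encoders   = ["R3M", "VC-1", "MVP", "MCR"]
centricity = [0.41, 0.48, 0.45, 0.63]  # per-encoder score (made-up values)
success    = [0.52, 0.58, 0.55, 0.74]  # mean task success rate (made-up values)

rho, p = spearmanr(centricity, success)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")  # high rho => score tracks performance
```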
Dex1B is a massive dataset of 1 billion robot demonstrations for grasping & articulation tasks! Our DexSimple generative model uses geometric constraints for feasibility and conditioning for diversity. Validated in sim & real-world experiments!
How to generate billion-scale manipulation demonstrations easily? Let us leverage generative models! 🤖✨ We introduce Dex1B, a framework that generates 1 BILLION diverse dexterous hand demonstrations for both grasping 🖐️and articulation 💻 tasks using a simple C-VAE model.
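For flavor, here is a minimal conditional VAE in the spirit of the "simple C-VAE" mentioned above (a generic sketch, not the official DexSimple architecture; all dimensions are placeholders):

```python
import torch
import torch.nn as nn

class GraspCVAE(nn.Module):
    """Condition on an object encoding, decode a dexterous hand pose."""
    def __init__(self, cond_dim=256, pose_dim=28, latent_dim=32):
        super().__init__()
        self.latent_dim = latent_dim
        self.enc = nn.Sequential(nn.Linear(pose_dim + cond_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 2 * latent_dim))  # -> (mu, logvar)
        self.dec = nn.Sequential(nn.Linear(latent_dim + cond_dim, 256), nn.ReLU(),
                                 nn.Linear(256, pose_dim))

    def forward(self, pose, cond):
        mu, logvar = self.enc(torch.cat([pose, cond], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.dec(torch.cat([z, cond], -1)), mu, logvar

    def sample(self, cond):
        z = torch.randn(cond.shape[0], self.latent_dim, device=cond.device)
        return self.dec(torch.cat([z, cond], -1))  # diverse z -> diverse grasps
```

At billion-demonstration scale, sampled poses would presumably be filtered or optimized against geometric feasibility (penetration, contact) before being kept, which is where the feasibility constraints mentioned above come in.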
Full episode dropping soon! Geeking out with @RogerQiu_42 on Humanoid Policy ~ Human Policy human-as-robot.github.io Co-hosted by @chris_j_paxton & @micoolcho
Looking forward to the product.
Today, we’re launching Genesis AI — a global physical AI lab and full-stack robotics company — to build generalist robots and unlock unlimited physical labor. We’re backed by $105M in seed funding from @EclipseVentures, @khoslaventures, @Bpifrance, HSG, and visionaries…
To do a good job, one must first sharpen one's tools. Check out the teleoperation system with force feedback here.
🚀 Meet ACE-F — a next-gen teleop system merging human and robot precision. Foldable, portable, cross-platform — it enables 6-DoF haptic control for force-aware manipulation. 🦾 See our demo & talk at the Robot Hardware-Aware Intelligence workshop this Wed @RoboticsSciSys!
Congratulations and good luck!
Congratulations to @Jerry_XU_Jiarui, @JitengMu, @RchalYang, and @YinboChen on their graduation! I am excited for their future journeys in industry: Jiarui -> OpenAI, Jiteng -> Adobe, Ruihan -> Amazon, Yinbo -> OpenAI
Check out our new attention mechanism, GSPN, here.
The code of GSPN #CVPR2025 is released! We propose a new sqrt(N)-complexity attention mechanism that enables efficient high-resolution image generation. We can generate 8K images with a 42× speedup over self-attention in StableDiffusionXL! Code: github.com/NVlabs/GSPN…
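A back-of-envelope reading of the sqrt(N)-complexity claim (my interpretation of the asymptotics, not the paper's exact cost model):

```python
# Token interactions per layer for an 8K image, assuming hypothetical 16px patches.
N = (8192 // 16) * (8192 // 16)   # 262,144 tokens
self_attn = N * N                 # pairwise self-attention ~ O(N^2)
gspn_like = N * int(N ** 0.5)     # ~ O(N * sqrt(N))
print(f"~{self_attn // gspn_like}x fewer interactions")  # ~512x at this N
```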
We have been focusing on policy learning for robotics for a while. But can hardware be learned as well? Check out @yswhynot's recent co-design work, which learns what a soft gripper should be if we want to do better manipulation.
For years, I’ve been tuning parameters for robot designs and controllers on specific tasks. Now we can automate this at dataset scale. Introducing Co-Design of Soft Gripper with Neural Physics - a soft gripper trained in simulation to deform while handling load.
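The general pattern behind this kind of co-design, as a sketch (not the paper's actual pipeline): gradients from a differentiable neural-physics surrogate flow into both the design parameters and the controller.

```python
import torch

design = torch.randn(8, requires_grad=True)    # stiffness/geometry knobs (hypothetical)
policy = torch.nn.Linear(16, 4)                # toy controller
opt = torch.optim.Adam([design, *policy.parameters()], lr=1e-3)

def neural_physics(design, action, state):
    # stand-in for a learned, differentiable soft-body dynamics model
    return state + 0.1 * torch.tanh(action.sum() + design.sum())

for step in range(100):
    state = torch.zeros(1)
    for _ in range(10):                        # rollout through the surrogate
        state = neural_physics(design, policy(torch.randn(16)), state)
    loss = (state - 1.0).pow(2).mean()         # task objective: reach target state
    opt.zero_grad(); loss.backward(); opt.step()
```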
If you are impressed by @Tesla_Optimus, also check out @RogerQiu_42 's talk on leveraging human videos for humanoid bimanual manipulation. Paper: Humanoid Policy ~ Human Policy Link: human-as-robot.github.io
Ep#10 with @RogerQiu_42 on Humanoid Policy ~ Human Policy human-as-robot.github.io Co-hosted by @chris_j_paxton & @micoolcho
FALCON, a dual-agent RL framework, enables humanoids to perform complex force-adaptive tasks. It outperforms baselines with 2× better upper-body precision while maintaining stable locomotion during heavy pushing, pulling, and carrying tasks with forces up to 100 N. Check it out here!
🦾How can humanoids unlock real strength for heavy-duty loco-manipulation? Meet FALCON🦅: Learning Force-Adaptive Humanoid Loco-Manipulation. 🌐: lecar-lab.github.io/falcon-humanoi… See the details below👇:
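My rough reading of the dual-agent split (details certainly differ from the paper): one policy drives the upper body under force objectives, another stabilizes the lower body, and both act on shared state.

```python
import torch
import torch.nn as nn

obs_dim, upper_act, lower_act = 64, 14, 12     # placeholder dimensions

upper_pi = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, upper_act))
lower_pi = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, lower_act))

obs = torch.randn(1, obs_dim)                  # shared proprioception + external-force estimate
action = torch.cat([upper_pi(obs), lower_pi(obs)], -1)  # whole-body joint command
```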
Join us tomorrow, Friday, at 4 pm CET, for a talk by @xiaolonw @UCSanDiego on Modeling Humans for Humanoid Robots as part of our talk series! Zoom: ethz.zoom.us/j/63716670526 Schedule: robotics-talks.com
Join us on Wed, 12 pm CET, for a talk by @pulkitology @MIT_CSAIL @MIT on Pathway to Robotic Intelligence as part of our talk series! The talk will be in person @ETH_en! Place: HG F 26.5 Zoom: ethz.zoom.us/j/63716670526 Schedule: robotics-talks.com
TWIST (Teleoperated Whole-Body Imitation System) enables robots to mimic human motion with unprecedented coordination. Nice work and great demo.
🤖Introducing TWIST: Teleoperated Whole-Body Imitation System. We develop a humanoid teleoperation system to enable coordinated, versatile, whole-body movements, using a single neural network. This is our first step toward general-purpose robots. 🌐humanoid-teleop.github.io
Adaptive Motion Optimization (AMO) combines RL with trajectory optimization to give robots incredible whole-body dexterity. The 29-DoF Unitree G1 can now perform complex tasks like floor-level object retrieval with enhanced stability and adaptability. The future of versatile humanoids 👍
Meet 𝐀𝐌𝐎 — our universal whole‑body controller that unleashes the 𝐟𝐮𝐥𝐥 kinematic workspace of humanoid robots to the physical world. AMO is a single policy trained with RL + Hybrid Mocap & Trajectory‑Opt. Accepted to #RSS2025. Try our open models & more 👉…
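For readers new to the idea, this is the usual shape of an RL + trajectory-optimization hybrid (a generic sketch, not AMO's implementation): a tracking reward pulls the RL policy toward reference motions produced by mocap retargeting or trajectory optimization.

```python
import numpy as np

def tracking_reward(qpos, qpos_ref, sigma=0.25):
    # exponentiated joint-position error, a standard imitation-style reward term
    err = np.linalg.norm(qpos - qpos_ref)
    return np.exp(-err**2 / sigma**2)

qpos = np.zeros(29)            # 29-DoF humanoid state (matching the G1 above)
qpos_ref = np.full(29, 0.1)    # one frame of a reference trajectory (placeholder)
print(tracking_reward(qpos, qpos_ref))
```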
Our work on manipulation-centric representation from large-scale robot datasets will be presented at ICLR. Catch Huazhe @HarryXu12 on site for more details!
Our #ICLR2025 paper MCR will be presented at Hall 3 + Hall 2B #42 on Apr 24th from 7:00 to 9:30 PM PDT. Won't be able to attend the conference since I'm working on CoRL submission. Please check it out and drop me an email if you are interested!
Our manipulation-centric representation model trained on large-scale robotic datasets will also be presented at #ICLR2025 🤲🏻🧸
#ICLR2025 Thrilled for our ICLR 2025 Spotlight: DenseMatcher🍌!📍 Hall 3 + Hall 2B #569, Fri 25 Apr, 3-5:30 AM EDT. Meet my awesome collaborators Junzhe, Junyi @junyi42 , Kaizhe @hkz222 & our advisor Huazhe @HarryXu12 to discuss! ☺️