Kyoung Whan Choe
@kywch500
Robot Learning Engineer @ http://RLWRLD.ai
Perceptual Humanoid Control repo (by @zhengyiluo) is awesome. I made it simpler by greatly reducing the dependencies and class inheritances, and hooked up pufferlib so that it can train at 60k+ SPS on a 4090. Sponsored by Puffer AI. Repo: github.com/kywch/puffer-p…
We at @DaxoRobotics found a new (and better) way to build towards true robot dexterity. This dexterous robotic hand is something I’ve been working on since graduating from @GRASPlab. Below is just a teaser. Enjoy the spin. The full story drops tomorrow.
(2/n) This is the "superset" of all hands. We borrow not just the human hand's shape but also more fundamental principles of human muscle control. Compliance, redundancy, and proprioception are all built in by default. We achieved pen spinning the moment a second finger was built.
Our findings show that camera poses and spatial arrangements are critical for large-scale data collection. However, only a handful of target demos were enough to alleviate object texture misalignment between target and co-training datasets. 🧵5/
I will be working on RL for drone racing and swarms on stream here/YT/Twitch for the next few hours. Goal is a ~100k param multitask model that we can deploy on real hardware
We extend the UMI gripper by integrating thin, flexible tactile sensors -- FlexiTac! Thanks to the lightweight design of the added tactile sensors, the whole gripper is just ~962 g! This makes handheld systems comfortable for extended human use and ideal for in-the-wild data collection.…
We’re organizing the RoboArena Challenge at CoRL this year! Show the performance of your best generalist policy, in a fair, open benchmark for the robotics community! 🤖 Sign up, even if you don’t have a robot! More details in 🧵👇
A student just trained this within a day: no tedious tuning, no sim2real tricks, not even sys-id. Worked on the first trial on the real robot. This explains many of the recent impressive demos on the G1 robot -- the hardware does a lot of the work. Still sim2real gaps on the ankle and waist DoFs tho.
We're open-sourcing "The Amazing Hand", an eight-degree-of-freedom humanoid robot hand compatible with @lerobot that can be 3D-printed at home for less than $250 ✌️✌️✌️ Given the success of Reachy Mini (2,000+ robots sold in a few days), we won't have the bandwidth to…
To support this, we created a hierarchical architecture consisting of a high-level (HL) language policy and a low-level (LL) control policy. The HL language policy generates task and corrective instructions to guide the LL policy through the long task steps.
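The HL/LL loop described above can be sketched roughly as follows. This is a minimal illustration under my own assumptions -- the function names, the episode structure, and the fixed re-instruction interval are all placeholders, not the authors' actual interface:

```python
# Hypothetical sketch of a hierarchical HL (language) / LL (control) policy loop.
# All names and the re-instruction schedule are illustrative assumptions.

def run_episode(hl_policy, ll_policy, env, task, steps_per_instruction=50):
    """Roll out one episode: HL issues instructions, LL executes them."""
    obs = env.reset()
    # HL policy emits an initial language instruction for the task.
    instruction = hl_policy(obs, task)
    for t in range(env.max_steps):
        # LL policy conditions on both the observation and the instruction.
        action = ll_policy(obs, instruction)
        obs, done = env.step(action)
        if done:
            break
        if t % steps_per_instruction == steps_per_instruction - 1:
            # HL periodically issues a corrective instruction mid-task.
            instruction = hl_policy(obs, task)
    return obs
```

The key design point is that the HL policy runs at a much lower frequency than the LL controller, re-planning in language only every so often while the LL policy closes the loop at every step.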
Want to add diverse, high-quality data to your robot policy? Happy to share that the DexWild Dataset is now fully public, hosted by @huggingface 🤗 Find it here! huggingface.co/datasets/board…
Training robots for the open world needs diverse data But collecting robot demos in the wild is hard! Presenting DexWild 🙌🏕️ Human data collection system that works in diverse environments, without robots 💪🦾 Human + Robot Cotraining pipeline that unlocks generalization 🧵👇
Our hardware setup features an active vision system, a bi-manual robot arm, and two high-DoF dexterous hands, SharpaWave, equipped with high-resolution tactile sensors. To enable rich data collection, we use a teleoperation system with a precision exoskeleton for fine-grained…
We compared WSRL against SERL/RLPD, and found it to be much more sample efficient: while SERL wasn't able to learn anything in 20k steps over 50 minutes, WSRL learns in 18 minutes (5k warmup steps + 3k online interaction steps). See the video below for the training progress of the two methods.
Announcing egohub: @IstariRobotics's new open-source Python pipeline for working with egocentric data! Ingest heterogeneous datasets (EgoDex, etc.), convert them to a canonical format, and visualize them instantly with Rerun. Built for robotics research. github.com/IstariRobotics…
RoboEval goes beyond success/failure: 🛠 Coordination metrics (velocity + height sync) 📉 Trajectory metrics (jerk, path length, and more) ✋ Spatial precision metrics (grasp stability, collision monitoring) 📊 Stagewise progression for each task 🧵3/n #robotics #ai
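Two of the trajectory metrics listed above (path length and jerk) are straightforward to compute from a recorded end-effector trajectory. A quick sketch with NumPy -- the function names are mine, not RoboEval's API:

```python
import numpy as np

# Illustrative trajectory metrics, assuming traj is a (T, 3) array of
# end-effector positions sampled at a fixed timestep dt.

def path_length(traj):
    """Total Euclidean path length: sum of distances between consecutive points."""
    return np.linalg.norm(np.diff(traj, axis=0), axis=1).sum()

def mean_jerk(traj, dt):
    """Mean magnitude of jerk (third time-derivative of position),
    approximated by third-order finite differences."""
    jerk = np.diff(traj, n=3, axis=0) / dt**3
    return np.linalg.norm(jerk, axis=1).mean()
```

Lower jerk means smoother motion, which is why it is a useful proxy for policy quality beyond binary task success.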
We propose Data-Guided Noise (DGN): 1. compare expert vs. policy actions at states in the expert data 2. use the differences to learn a state-conditioned noise distribution 3. perturb policy actions with sampled exploration noise
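The three DGN steps above can be sketched as follows. This is a minimal, hedged illustration: it fits a simple state-independent diagonal Gaussian to the expert-vs-policy action gaps (a true state-conditioned model would regress the noise parameters from the state); all names and shapes are my assumptions, not the authors' implementation:

```python
import numpy as np

# Hypothetical sketch of Data-Guided Noise (DGN).
# Simplification: a single diagonal Gaussian instead of a
# state-conditioned noise model.

def fit_dgn_noise(expert_states, expert_actions, policy):
    """Steps 1-2: compare expert vs. policy actions at expert states,
    then fit a noise distribution to the differences."""
    policy_actions = np.stack([policy(s) for s in expert_states])
    diffs = expert_actions - policy_actions          # step 1: action gaps
    mu = diffs.mean(axis=0)                          # step 2: fit Gaussian
    sigma = diffs.std(axis=0) + 1e-6                 # keep sigma positive
    return mu, sigma

def perturb(action, mu, sigma, rng):
    """Step 3: perturb a policy action with sampled exploration noise."""
    return action + rng.normal(mu, sigma)
```

The intuition: exploration noise is concentrated along the directions where the policy actually disagrees with the expert, rather than being isotropic.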