Haoran Geng
@HaoranGeng2
CS PhD at @Berkeley_AI. Prev: @Stanford, @PKU1898. Robotics, RL, 3D Vision
In my past research experience, finding or developing an appropriate simulation environment, dataset, and benchmark has always been a challenge. Missing features, limited support, or unexpected bugs often occupied my days and nights. Moreover, current simulation platforms are…
🚀 RoboVerse has been accepted to RSS 2025 and is now live on arXiv: arxiv.org/abs/2504.18904 ✨ Also selected for HuggingFace Daily: huggingface.co/papers/2504.18… 🛠️ Explore our open-source repo: github.com/RoboVerseOrg/R… We're actively developing and adding new features daily — come…
Introducing ViTacFormer: Next-Level Dexterous Manipulation with Active Vision and High-Resolution Touch by @HaoranGeng2 #AI #Robotics #MachineLearning #ArtificialIntelligence #ML #Innovation cc: @sallyeaves @amuellerml @marcusborba
Again the power of tactile sensing and multi-finger hands comes through. This is the future of dexterous manipulation!
🤖 What if a humanoid robot could make a hamburger from raw ingredients—all the way to your plate? 🔥 Excited to announce ViTacFormer: our new pipeline for next-level dexterous manipulation with active vision + high-resolution touch. 🎯 For the first time ever, we demonstrate…
There is a raging debate over sensory modes and redundancy. How much is enough, and is sensory overload an issue? Perhaps the key is that redundant modes are the fabric that holds actions together to solve long-term planning. Think fascia.
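For a concrete sense of what an active vision + touch policy involves, here is a minimal PyTorch sketch of a transformer policy that attends jointly over camera and tactile tokens. This is a generic illustration under assumed feature shapes; every module name here (VisionTouchPolicy, vision_proj, touch_proj) is hypothetical, and this is not ViTacFormer's published architecture.

```python
# Illustrative sketch only -- NOT ViTacFormer's actual architecture.
# Module names, feature dimensions, and the action head are hypothetical.
import torch
import torch.nn as nn

class VisionTouchPolicy(nn.Module):
    def __init__(self, d_model=256, n_heads=4, n_layers=4, action_dim=22):
        super().__init__()
        # Project each modality into a shared token space.
        self.vision_proj = nn.Linear(512, d_model)    # e.g. per-patch image features
        self.touch_proj = nn.Linear(64, d_model)      # e.g. per-taxel tactile features
        self.modality_emb = nn.Embedding(2, d_model)  # marks which modality a token is
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.action_head = nn.Linear(d_model, action_dim)

    def forward(self, vision_feats, touch_feats):
        # vision_feats: (B, Nv, 512), touch_feats: (B, Nt, 64)
        v = self.vision_proj(vision_feats) + self.modality_emb.weight[0]
        t = self.touch_proj(touch_feats) + self.modality_emb.weight[1]
        tokens = torch.cat([v, t], dim=1)  # joint attention across both modalities
        fused = self.encoder(tokens)
        # Pool the fused tokens and predict an action (e.g. hand joint targets).
        return self.action_head(fused.mean(dim=1))

policy = VisionTouchPolicy()
action = policy(torch.randn(1, 196, 512), torch.randn(1, 32, 64))
print(action.shape)  # torch.Size([1, 22])
```

Concatenating both modalities before self-attention lets every tactile token attend to every image patch, which is the usual rationale for early fusion in visuotactile policies.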
Everyday human data is robotics’ answer to internet-scale tokens. But how can robots learn to feel—just from videos?📹 Introducing FeelTheForce (FTF): force-sensitive manipulation policies learned from natural human interactions🖐️🤖 👉 feel-the-force-ftf.github.io 1/n
FastTD3: "Minimum innovation, maximum results" Not the paper we had planned to write, but one of the works I am most proud of. We wanted to make sure our baseline (TD3) was a very solid baseline, so we added a few things that are already known to help in RL (large,…
Excited to present FastTD3: a simple, fast, and capable off-policy RL algorithm for humanoid control -- with an open-source code to run your own humanoid RL experiments in no time! Thread below 🧵
🚀Check out our new work, FastTD3, a reinforcement learning algorithm that is simple, efficient, and highly capable. It achieves truly remarkable performance across challenging RL tasks.
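Since the thread emphasizes that FastTD3 starts from a solid TD3 baseline, a minimal sketch of the standard TD3 update (Fujimoto et al., 2018) may help: clipped double-Q learning, target policy smoothing, and delayed policy updates. The function name, signature, and hyperparameter defaults below are my own illustrative choices; FastTD3's specific additions, truncated in the quoted tweet, are not reproduced here.

```python
# Vanilla TD3 update step (Fujimoto et al., 2018) -- the baseline FastTD3
# builds on. FastTD3's own modifications are not shown.
import torch
import torch.nn.functional as F

def td3_update(batch, actor, actor_targ, critic1, critic2,
               critic1_targ, critic2_targ, actor_opt, critic_opt,
               step, gamma=0.99, tau=0.005, policy_noise=0.2,
               noise_clip=0.5, policy_delay=2, max_action=1.0):
    # critic_opt is assumed to optimize the parameters of both critics.
    obs, act, rew, next_obs, done = batch

    with torch.no_grad():
        # Target policy smoothing: add clipped noise to the target action.
        noise = (torch.randn_like(act) * policy_noise).clamp(-noise_clip, noise_clip)
        next_act = (actor_targ(next_obs) + noise).clamp(-max_action, max_action)
        # Clipped double-Q: bootstrap from the min of the two target critics.
        target_q = torch.min(critic1_targ(next_obs, next_act),
                             critic2_targ(next_obs, next_act))
        y = rew + gamma * (1.0 - done) * target_q

    critic_loss = (F.mse_loss(critic1(obs, act), y) +
                   F.mse_loss(critic2(obs, act), y))
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Delayed policy updates: refresh the actor less often than the critics.
    if step % policy_delay == 0:
        actor_loss = -critic1(obs, actor(obs)).mean()
        actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
        # Polyak-average the target networks toward the online networks.
        for net, targ in [(actor, actor_targ), (critic1, critic1_targ),
                          (critic2, critic2_targ)]:
            for p, p_targ in zip(net.parameters(), targ.parameters()):
                p_targ.data.mul_(1 - tau).add_(tau * p.data)
```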
Despite great advances in learning dexterity, hardware remains a major bottleneck. Most dexterous hands are bulky, weak, or expensive. I'm thrilled to present the RUKA Hand — a powerful, accessible research tool for dexterous manipulation that overcomes these limitations!
RoboVerse: Towards a Unified Platform, Dataset and Benchmark for Scalable and Generalizable Robot Learning github.com/RoboVerseOrg/R…
"Gr00t" vs "Pi0" vs "Pi0 Fast". I compared top open-source robotic models, and here's a detailed overview based on our own experience: