Pedro Milcent
@MilcentPedro
🤖 Bringing robots to life with real data at @DeplaceAI. Opinions are my own.
Interesting research from @ylecun and the @metaai + @Mila_Quebec teams on a task-agnostic, action-conditioned world model 🌐 The paper offers valuable insights into training strategies and model design. Check out V-JEPA 2 here: arxiv.org/pdf/2506.09985
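For intuition, here is what "action-conditioned latent prediction" boils down to in toy form. This is a minimal sketch of the general idea, not V-JEPA 2's architecture; the encoder, dimensions, and action format are all illustrative assumptions:

```python
import torch
import torch.nn as nn

class LatentPredictor(nn.Module):
    """Toy action-conditioned predictor: (z_t, a_t) -> z_{t+1}. Illustrative only."""
    def __init__(self, latent_dim=256, action_dim=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + action_dim, 512),
            nn.ReLU(),
            nn.Linear(512, latent_dim),
        )

    def forward(self, z, a):
        return self.net(torch.cat([z, a], dim=-1))

encoder = nn.Linear(3 * 64 * 64, 256)      # stand-in for a real video encoder
predictor = LatentPredictor()
opt = torch.optim.Adam(predictor.parameters(), lr=1e-4)

frames = torch.randn(8, 2, 3 * 64 * 64)    # (batch, time, flattened pixels)
actions = torch.randn(8, 7)                # e.g. end-effector deltas

# JEPA-style step: predict the *embedding* of the next frame (no pixel loss),
# with the target produced by an encoder that receives no gradient here.
z_t = encoder(frames[:, 0])
with torch.no_grad():
    z_next = encoder(frames[:, 1])

opt.zero_grad()
loss = nn.functional.mse_loss(predictor(z_t, actions), z_next)
loss.backward()
opt.step()
```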
Awesome few days in Paris: the @ycombinator event at the Sorbonne had great tips for early-stage founders, and the @RaiseSummit was all about the future of AI (Physical AI too!). More tomorrow 🦾 🤖

Great paper by @mimicrobotics, showcasing the 16-DoF Faive hand in action and highlighting the importance of #real, diverse, and curated #data for Physical AI models that are performant, generalizable, and capable of self-correction 📈✋ 📄 Check out the paper by @elvisnavah and…
The emphasis on dataset diversity, data scaling, and self-correction yields impressive demo results!
Happy to announce mimic-one: a Scalable Model Recipe for General Purpose Robot Dexterity, the culmination of years of research work in dexterous manipulation with imitation learning.
Second day at @VivaTech was even busier than the first; spotted a @huggingface SO-100 and a @PALRobotics TIAGo in the crowd! 🤖

First day at @VivaTech & @NVIDIAGTC, with an interesting keynote from Jensen and plenty of #robotics & Physical AI companies represented. Not every day you see @Tesla_Optimus and @engineered_arts under the same roof!

One of the best papers we’ve read this year, and definitely a method we’ll be exploring at @DeplaceAI to scale data collection for manipulation tasks! Great research by @mengdaxu__, @DoubleHan07, @YifanHou2, @Zhenjia_Xu, @SongShuran, and team. 🤖 Check it out here:…
Bringing humans into the reinforcement learning loop leads to faster and more effective training 📈 HIL-SERL is a great paper by @jianlanluo, @CharlesXu0124, @svlevine & team that combines human demos, a learned reward classifier, and real-time human-in-the-loop corrections, enabling…
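The pattern behind this is simple to sketch. Below is a minimal toy version of a human-in-the-loop rollout under my own assumptions (the env, policy, classifier, and override hook are all placeholders, not the authors' code): the operator can override the policy's action at any step, the reward comes from a learned success classifier, and interventions are kept as demonstration data for the off-policy update.

```python
import random

class ToyEnv:
    """Stand-in environment with a gym-like interface (illustrative only)."""
    def reset(self):
        return 0.0
    def step(self, action):
        return random.random(), 0.0, random.random() < 0.02, {}

def reward_classifier(obs):
    """Placeholder for a learned success classifier."""
    return 1.0 if obs > 0.95 else 0.0

def get_human_override(obs):
    """Placeholder: a corrective action when the operator intervenes, else None."""
    return [0.0] if random.random() < 0.05 else None

def hil_rollout(env, policy, replay_buffer, demo_buffer, steps=1000):
    obs = env.reset()
    for _ in range(steps):
        action = policy(obs)
        human_action = get_human_override(obs)
        if human_action is not None:
            action = human_action                  # operator takes over in real time
        next_obs, _, done, _ = env.step(action)
        reward = reward_classifier(next_obs)       # learned reward, not hand-coded
        transition = (obs, action, reward, next_obs, done)
        replay_buffer.append(transition)
        if human_action is not None:
            demo_buffer.append(transition)         # interventions kept as demo data
        obs = env.reset() if done else next_obs

replay, demos = [], []
hil_rollout(ToyEnv(), lambda obs: [0.1], replay, demos, steps=200)
```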
New paper by @ShiqiYang_17, @xuxin_cheng, @chaitanya1cha, @TairanHe99, @EpisodeYang & team shows how human demos can boost generalization in robot manipulation 🧠🤖 They unify state-action spaces for humans & humanoids, leading to strong out-of-distribution performance. 📄:…
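A rough sketch of what a unified state space can look like, under my own assumed representation (the paper's actual spaces may differ): map both human mocap and humanoid proprioception into the same embodiment-agnostic vector, so one policy can train on demos from either source.

```python
import numpy as np

def human_to_shared(hand_keypoints: np.ndarray) -> np.ndarray:
    """Human mocap -> shared state: wrist + 5 fingertip positions (assumed layout)."""
    wrist, fingertips = hand_keypoints[0], hand_keypoints[1:6]
    return np.concatenate([wrist, fingertips.ravel()])    # shape (18,)

def humanoid_to_shared(wrist_pos: np.ndarray, fingertip_pos: np.ndarray) -> np.ndarray:
    """Humanoid proprioception -> the same 18-D shared state."""
    return np.concatenate([wrist_pos, fingertip_pos.ravel()])

# Both sources now yield identically shaped states, so human demos and
# robot rollouts can be mixed in one training set.
human_state = human_to_shared(np.random.rand(21, 3))      # e.g. 21 hand keypoints
robot_state = humanoid_to_shared(np.random.rand(3), np.random.rand(5, 3))
assert human_state.shape == robot_state.shape == (18,)
```

Matching shapes is the precondition for co-training on both data sources in a single buffer.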
Such an amazing resource; it will be very interesting to explore this for data collection 👏🦾
Meet HopeJr, a full humanoid robot lowering the barrier to entry! Capable of walking and manipulating many objects, open-source, and under $3000 🤯 Designed by @therobotstudio and @huggingface 👇
Thank you, @Ken_Goldberg. Great research coming out of AutoLab, Robo-DM is also very relevant to us!
Thank you, Pedro, for this excellent summary!
Thrilled to be tackling the data challenge in Physical AI. From open-source hardware (SO-100s, UMIs, DexCap, AirExo) and cutting-edge models (GR00T, π0) to world-class events (ICRA, RSS) and state-of-the-art papers every week, there’s never been a more exciting time to be part of…
Indeed! A big part of what we are building at @DeplaceAI: scaling data collection for robotics, including large-scale, diverse human demos.
moooooore human data!
Some awesome progress last week in using human demos for Physical AI ✋ @ryan_hoque, @peide_huang, and team released an excellent paper on the value of large-scale egocentric demos for dexterous manipulation: arxiv.org/abs/2505.11709 And @Tesla_Optimus demonstrated this in…
Real2Render2Real by @letian_fu, @RaresAmbrus, @Ken_Goldberg & team proposes a new way to scale motion diversity for manipulation: using human demos + an object scan, it generates multiple diverse robot trajectories via rendering ✍ Check out more about it here:…
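Roughly, the geometric half of the trick can be pictured like this (a simplified 2-D sketch under my own assumptions, not the authors' pipeline): keep the demo as a gripper trajectory in the object's frame, then sample new object poses and replay the relative trajectory in each one.

```python
import numpy as np

def rot2d(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# One demo: gripper waypoints expressed in the object's frame (toy 2-D data).
demo_in_object_frame = np.array([[0.10, 0.00], [0.05, 0.00], [0.00, 0.00]])

def synthesize(demo_rel, n=100, rng=np.random.default_rng(0)):
    """Sample new object poses and replay the object-relative demo in each."""
    trajectories = []
    for _ in range(n):
        obj_xy = rng.uniform(-0.3, 0.3, size=2)    # randomized object position
        obj_theta = rng.uniform(-np.pi, np.pi)     # randomized object orientation
        world_traj = demo_rel @ rot2d(obj_theta).T + obj_xy
        trajectories.append(world_traj)            # one new absolute trajectory
    return trajectories

diverse = synthesize(demo_in_object_frame)
print(len(diverse), diverse[0].shape)              # 100 trajectories of 3 waypoints
```

The rendering half, which this sketch omits, is what turns each synthetic trajectory into paired visual observations for training.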
Exciting progress came out of @ieee_ras_icra this year, impressive manipulation demos and humanoid showcases. Can’t wait to see what’s next at ICRA 2026 in Austria 🇦🇹🤖

We have been seeing great progress in using human demos in Physical AI models. This paper by @ryan_hoque, @peide_huang, and team shows how a large-scale dataset can help scale this: 800+ hours of human demos for manipulation. 📄: arxiv.org/abs/2505.11709
Exactly what we are working on at @DeplaceAI : ✋ Large-scale human demos ➡️ wide network of collectors 🛋️ Built for generalization ➡️ diverse objects, environments, lighting ✍ High-quality ➡️ fully curated & annotated episodes
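To make "fully curated & annotated episodes" concrete, here is a hypothetical episode record; this is my own illustration of the kind of metadata involved, not an actual @DeplaceAI format:

```python
from dataclasses import dataclass, field

@dataclass
class Episode:
    """Hypothetical schema for a curated human-demo episode (illustrative only)."""
    episode_id: str
    task: str                    # e.g. "pick and place mug"
    environment: str             # scene tag, useful for generalization splits
    lighting: str                # capture condition, useful for balancing the set
    objects: list = field(default_factory=list)
    video_path: str = ""
    annotations: dict = field(default_factory=dict)   # per-frame labels, QC flags
    passed_curation: bool = False

ep = Episode(
    episode_id="ep_000001",
    task="pick and place mug",
    environment="kitchen_03",
    lighting="daylight",
    objects=["mug", "tray"],
    passed_curation=True,
)
```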
I want to make clear how crazy impressive this result is. We can now do bi-manual, dexterous manipulation across a wide range of tasks with barely any data on these skills coming from teleoperation. As we know, teleop does not scale! But it turns out human video does! This means…