Minyoung Hwang @RSS 2025
@robominyoung
PhD student @MIT_CSAIL, Previously @carnegiemellon, @allen_ai, @SNU | Robotics | Preference-based RL | Human-Robot Interaction
Interested in how generative AI can be used for human-robot interaction? We’re organizing the 2nd Workshop on Generative AI for Human-Robot Interaction (GenAI-HRI) at #RSS2025 in LA — bringing together the world's leading experts in the field. The workshop is happening on Wed,…

Happening now at RTH 109!
Excited to invite you to our #RSS2025 Workshop at RTH 109, where we’ll explore the frontier of Generative Models × Human–Robot Interaction! 🤖✨ Organizing with @robominyoung @sammy_j_c @haroldsoh @andreea7b @julie_a_shah sites.google.com/view/gai-hri/h… 9:00 AM – Workshop Intro •…
How can we generate billion-scale manipulation demonstrations easily? Let's leverage generative models! 🤖✨ We introduce Dex1B, a framework that generates 1 BILLION diverse dexterous hand demonstrations for both grasping 🖐️ and articulation 💻 tasks using a simple C-VAE model.
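For intuition, a conditional VAE for demonstration generation might look roughly like the minimal PyTorch sketch below. The object-embedding conditioning, latent size, and hand-pose dimensionality are illustrative assumptions of mine, not Dex1B's actual architecture:

```python
# Minimal C-VAE sketch: generate hand poses conditioned on an object
# embedding. Dimensions and names are illustrative, not from Dex1B.
import torch
import torch.nn as nn

class CVAE(nn.Module):
    def __init__(self, pose_dim=24, cond_dim=128, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(pose_dim + cond_dim, 256), nn.ReLU(),
            nn.Linear(256, 2 * latent_dim),          # mean and log-variance
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, 256), nn.ReLU(),
            nn.Linear(256, pose_dim),
        )
        self.latent_dim = latent_dim

    def forward(self, pose, cond):
        mu, logvar = self.encoder(torch.cat([pose, cond], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
        recon = self.decoder(torch.cat([z, cond], dim=-1))
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return recon, kl

    @torch.no_grad()
    def sample(self, cond, n):
        # Draw n candidate hand poses for one object embedding.
        z = torch.randn(n, self.latent_dim)
        return self.decoder(torch.cat([z, cond.expand(n, -1)], dim=-1))

model = CVAE()
poses, objs = torch.randn(64, 24), torch.randn(64, 128)        # toy batch
recon, kl = model(poses, objs)
loss = nn.functional.mse_loss(recon, poses) + 1e-3 * kl
```

At scale, sampling many latents per object and filtering the decoded poses in simulation is one plausible way such a model could yield billions of diverse demonstrations.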
Amazing to see tactile sensors that are 3D printable in any form factor!
We have developed a new tactile sensor, called e-Flesh, with a simple working principle: measure deformations in 3D printable microstructures. Now all you need to make tactile sensors is a 3D printer, magnets, and magnetometers! 🧵
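For a rough sense of the working principle, here is a toy sketch of mapping raw 3-axis magnetometer readings to a contact-force estimate with a simple calibrated least-squares fit. The synthetic data, layout, and calibration procedure are assumptions for illustration, not e-Flesh's actual pipeline:

```python
# Toy calibration: fit a linear map from 3-axis magnetometer readings to
# normal force, as if collected against a reference force gauge.
# Illustrative only; not the e-Flesh pipeline.
import numpy as np

rng = np.random.default_rng(0)

# Pretend data: as the microstructure compresses, the magnet moves and the
# measured field changes roughly in proportion to the applied force.
true_force = rng.uniform(0.0, 5.0, size=200)            # newtons
directions = np.array([0.8, -0.3, 1.5])                 # field change per newton
field = true_force[:, None] * directions + 0.05 * rng.standard_normal((200, 3))

# Least-squares fit of force from the 3-axis reading (plus a bias term).
X = np.hstack([field, np.ones((200, 1))])
w, *_ = np.linalg.lstsq(X, true_force, rcond=None)

def estimate_force(reading):
    """Estimate normal force (N) from one 3-axis magnetometer reading."""
    return float(np.append(reading, 1.0) @ w)

print(round(estimate_force(field[0]), 2), "vs ground truth", round(true_force[0], 2))
```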
“As a PhD student, your job is not to publish a paper every quarter. Focus on deeply understanding a problem and solving it over years under the protection of your adviser” from @RussTedrake #RSS2025
If you are at #RSS2025, check out our workshop on Generative AI for Human-Robot Interaction. We have a stacked lineup of speakers and panelists!
I’ll talk about the PARTNR framework and how LLMs perform at planning in dynamic environments. I’ll also talk about a unified memory architecture for robotics, an alternative to recent scene representations that rely on an ad-hoc combination of multiple large models.
Most assistive robots live in labs. We want to change that. FEAST enables care recipients to personalize mealtime assistance in-the-wild, with minimal researcher intervention across diverse in-home scenarios. 🏆 Outstanding Paper & Systems Paper Finalist @RoboticsSciSys 🧵1/8
I’ll be giving a spotlight talk at the RSS SemRob workshop (OHE #122, 9:50-10am) about this work today! The talk is followed by the poster session, so feel free to stop by if you’re interested :) Happy to catch up or chat about research and potential collaboration during the…
Evaluating robot motions isn’t just about start and end states; it's about how tasks are performed. We propose 🤖 MotIF (Motion Instruction Fine-tuning) and MotIF-1K dataset to enhance VLMs' understanding of nuanced robotic motions. 🔗 motif-1k.github.io 📄…
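To make the idea concrete, one motion-judgment example might be packaged roughly as below. The field names, prompt wording, and label format are illustrative assumptions, not the actual MotIF-1K schema:

```python
# Toy sketch of packaging one motion-judgment example for a VLM:
# an instruction, a trajectory-overlaid image, and a success label.
# Field names and prompt wording are illustrative, not MotIF-1K's schema.
from dataclasses import dataclass

@dataclass
class MotionExample:
    instruction: str        # what the robot was asked to do, incl. how
    trajectory_image: str   # path to an image with the motion path drawn on it
    success: bool           # did the motion follow the instruction?

    def to_prompt(self) -> str:
        return (
            f"Instruction: {self.instruction}\n"
            "The image shows the robot's end-effector trajectory.\n"
            "Did the motion follow the instruction? Answer yes or no."
        )

ex = MotionExample(
    instruction="Wipe the table in a zigzag pattern without touching the mug.",
    trajectory_image="episodes/0042/traj_overlay.png",
    success=True,
)
print(ex.to_prompt(), "->", "yes" if ex.success else "no")
```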
What if an LLM could update its own weights? Meet SEAL🦭: a framework where LLMs generate their own training data (self-edits) to update their weights in response to new inputs. Self-editing is learned via RL, using the updated model’s downstream performance as reward.
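Roughly, the outer loop could look like the sketch below. Everything here is a stand-in I made up to show the shape of the loop, not SEAL's actual API: `generate_self_edits`, `finetune`, and `downstream_accuracy` are placeholders, and picking the best candidate is a crude proxy for the RL update the tweet describes; only the reward signal (the updated model's downstream performance) follows the source:

```python
# Toy sketch of a SEAL-style outer loop: the model proposes "self-edits"
# (its own training data), is fine-tuned on them, and is rewarded by the
# updated model's downstream performance. All components are stand-ins.
import random

def generate_self_edits(model, new_input, n=4):
    # Stand-in: in SEAL this is the LLM writing its own finetuning data.
    return [f"note {random.randint(0, 9)} about {new_input}" for _ in range(n)]

def finetune(model, edits):
    # Stand-in for a gradient update: returns the "updated model".
    return model + [hash(e) % 100 / 100 for e in edits]

def downstream_accuracy(model):
    # Stand-in reward: evaluate the updated model on held-out queries.
    return sum(model) / (len(model) or 1)

model, rewards_log = [], {}
for step in range(5):
    new_input = f"document-{step}"
    # The model proposes several candidate batches of self-edits ...
    candidates = [generate_self_edits(model, new_input) for _ in range(3)]
    # ... each scored by the downstream performance of the resulting model.
    rewards = [downstream_accuracy(finetune(model, c)) for c in candidates]
    best = candidates[rewards.index(max(rewards))]
    rewards_log[new_input] = max(rewards)   # this reward would drive the RL update
    model = finetune(model, best)           # commit the best self-edit
print(rewards_log)
```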
Happening now! Drop by poster #168 at CVPR to see our work! Also giving a spotlight talk at CVPR EAI workshop at 3:50-4pm. Happy to chat w/ anyone interested during the conference😊
Gemini 2.0 can reason about the physical world! Try it out today at aistudio.google.com/starter-apps/s… Your robots will thank you for it :)
Excited to introduce 𝐋𝐀𝐏𝐀: the first unsupervised pretraining method for Vision-Language-Action models. Outperforms SOTA models trained with ground-truth actions. 30x more efficient than conventional VLA pretraining. 📝: arxiv.org/abs/2410.11758 🧵 1/9
This paper inspires me to not only selectively choose partial goals (demonstration, language, environment states like objects, etc.) for robot learning, but also to use a combination of them (e.g., demonstration + language)! Using a CVAE and a learned prior seems to be the key🤔
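For reference, a learned prior simply replaces the fixed N(0, I) of a standard CVAE with a small network conditioned on whatever goal information is available, and the KL term is taken against that conditional prior. A minimal sketch, with dimensions and names that are mine rather than from the paper:

```python
# Minimal sketch: a conditional prior network p(z | goal) replacing the
# standard-normal prior in a CVAE. Dimensions/names are illustrative.
import torch
import torch.nn as nn

class LearnedPrior(nn.Module):
    def __init__(self, cond_dim=128, latent_dim=32):
        super().__init__()
        self.net = nn.Linear(cond_dim, 2 * latent_dim)

    def forward(self, cond):
        mu, logvar = self.net(cond).chunk(2, dim=-1)
        return mu, logvar

def kl_to_learned_prior(mu_q, logvar_q, mu_p, logvar_p):
    # KL( q(z|x,cond) || p(z|cond) ) for diagonal Gaussians.
    return 0.5 * (logvar_p - logvar_q
                  + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                  - 1).sum(-1).mean()

prior = LearnedPrior()
cond = torch.randn(64, 128)                     # e.g. language + demo embedding
mu_p, logvar_p = prior(cond)
z = mu_p + torch.randn_like(mu_p) * (0.5 * logvar_p).exp()   # sample at test time
```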
Excited to share our latest work! 🤩 Masked Mimic 🥷: Unified Physics-Based Character Control Through Masked Motion Inpainting Project page: research.nvidia.com/labs/par/maske… with: Yunrong (Kelly) Guo, @ofirnabati, @GalChechik and @xbpeng4. @SIGGRAPHAsia (ACM TOG). 1/ Read…
Big shoutout to my awesome collaborators @JoeyHejna @DorsaSadigh @ybisk on this project😎 Thanks to @Hao_Zhu @DanielXieee @SoYeonTiffMin @lmathur @viddivj @_Yingshan @rosie_vitiello + all others in @CarnegieMellon CLAW lab for thoughtful feedback and help with data collection!👩💻