Yuchen Cui
@YuchenCui1
Assistant Professor @CS_UCLA researching Interactive Robot Learning 🤖🤖🤖 | previously Postdoc @Stanford, CS PhD @UTAustin
Dreaming about the day that I could do housework from a cozy café... 🤖☕️💡 Thanks @huihan_liu for building the ghostly assistant I didn't know I needed!
Meet Casper👻, a friendly robot sidekick who shadows your day, decodes your intent on the fly, and lends a hand while you stay in control! Instead of passively receiving commands, what if a robot actively sensed what you need in the background and stepped in when confident? (1/n)
The self-driving cars in SF are amazing! 🚗🤖 What if they could also *teach* us how to drive? Excited to be in Melbourne 🦘 for #HRI2025 to present our paper “Shared Autonomy for Proximal Teaching”, where we study how to use shared autonomy to improve human skill learning! (1/4)
🚀 I am recruiting PhD students for Fall 2025 at the UCLA Robot Intelligence Lab! 🤖 If you are interested in robot learning and human-robot interaction, mark me as a potential advisor when you apply to the UCLA CS PhD program! #PhD #Robotics @CS_UCLA

There is an old Chinese saying: “Review the old and learn the new.” 🧐🤖📚 Depending on the new task at hand, prior data can serve different purposes. In FlowRetrieval, we leverage optical flow to extract motion-similar prior data to augment and expedite policy learning.
How can prior data be effectively leveraged to improve few-shot learning? We propose FlowRetrieval to extract motion-similar prior data with optical flow! 🚀 FlowRetrieval outperforms existing methods by 27% on average across 5 tasks in both sim and real. 🤖✨
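For anyone curious what retrieval by motion similarity could look like in practice, here is a minimal sketch: summarize each clip by an optical-flow feature, then pull the prior clips closest to the target task. The `flow_embedding` histogram feature, the cosine-similarity ranking, and all function names below are illustrative assumptions, not the actual FlowRetrieval implementation (the paper's pipeline is described at the link above).

```python
# Hypothetical sketch of flow-similarity retrieval (NOT the official FlowRetrieval code).
# Assumes clips are arrays of grayscale uint8 frames; a histogram of Farneback flow
# directions stands in for a learned motion embedding.
import numpy as np
import cv2

def flow_embedding(frames: np.ndarray, bins: int = 16) -> np.ndarray:
    """Summarize a clip (T, H, W) of grayscale frames as a histogram of flow directions."""
    hist = np.zeros(bins)
    for prev, nxt in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        ang = np.arctan2(flow[..., 1], flow[..., 0])  # per-pixel motion direction
        hist += np.histogram(ang, bins=bins, range=(-np.pi, np.pi))[0]
    return hist / (hist.sum() + 1e-8)

def retrieve(target_clips, prior_clips, k: int = 10):
    """Return indices of the k prior clips whose motion is most similar to the target task."""
    target = np.mean([flow_embedding(c) for c in target_clips], axis=0)
    sims = []
    for clip in prior_clips:
        emb = flow_embedding(clip)
        sims.append(float(target @ emb) /
                    (np.linalg.norm(target) * np.linalg.norm(emb) + 1e-8))
    # The retrieved clips would then be added to the few-shot training set.
    return np.argsort(sims)[::-1][:k]
```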
Paper decisions have been released, and the accepted papers 🎉 can also be found on our website (rlbrew-workshop.github.io/papers.html). Keep an eye out for the upcoming poster and camera-ready instructions. We look forward to seeing everyone in Amherst!
Thrilled to announce that I am joining @CS_UCLA as an Assistant Professor this Fall! 🐻 Many thanks to my incredible advisors, mentors, family and friends for the encouragement and support. ❤️Looking forward to this exciting new chapter and all the opportunities ahead! 🤖🤖🤖
It is frustrating to see robots making the same mistakes over and over again. Come to @LihanZha's poster at ICRA and see how we enable robots to remember language corrections with LLMs!
Excited to share our work DROC at #ICRA2024! 🚀 DROC empowers robots to learn from online language corrections and effortlessly generalize to new tasks. Join my presentation in room AX-F201 from 13:30 - 15:00 on Thu 16 May, and swing by our poster at BT13-AX.6 between 16:30 - 18:00!
We are extending the submission deadline to 👉9 May 2024 AOE👈! Feel free to submit preliminary work, work submitted to RLC, a first draft for NeurIPS, or any other recent work furthering the field of RL beyond Rewards. Submission Link: openreview.net/group?id=rl-co…
Reward functions are often hard or impossible to design. If you're working on RL without a predefined reward function (RLHF, unsupervised RL, exploration, etc.), consider submitting to the RLBrew workshop! Deadline May 3rd.
Announcing the Reinforcement Learning Beyond Rewards workshop at the first @RL_Conference. Think that rewards aren't enough for RL? Working on RLHF? Thinking of alternative ways of alignment? Creating a foundation model for RL? Or have ideas on task-agnostic RL algorithms? Join us!
We are presenting this work at #NeurIPS2023 Wednesday morning at poster #423. Come and check it out if you are also in New Orleans! ⚜️
In imitation learning (IL), we often focus on better algorithms, but what about improving the data? What does it mean for a dataset to be high quality? Our work takes a first step towards formalizing and analyzing data quality. (1/5) arxiv.org/abs/2306.02437
How can robots 🤖 𝘳𝘦𝘮𝘦𝘮𝘣𝘦𝘳 human language feedback 🗣️ over time? We introduce DROC: a method for distilling and retrieving generalizable knowledge from online language corrections. Paper: arxiv.org/abs/2311.10678 Website: sites.google.com/stanford.edu/d…
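If you're wondering what "distilling and retrieving" language corrections might look like in code, here is a toy sketch of a correction memory: store a distilled takeaway for each correction and retrieve relevant ones for a new task by context similarity. The class and function names (`CorrectionMemory`, `embed`, etc.) are illustrative assumptions, not the paper's implementation; in DROC an LLM handles the distillation and retrieval steps.

```python
# Hypothetical correction-memory sketch in the spirit of DROC (not the paper's code).
from dataclasses import dataclass
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy hashing-based text embedding; a real system would use an LLM or text encoder."""
    vec = np.zeros(dim)
    for tok in text.lower().split():
        vec[hash(tok) % dim] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-8)

@dataclass
class Correction:
    context: str    # what the robot was doing when it was corrected
    knowledge: str  # distilled, generalizable takeaway (e.g., "grasp mugs by the handle")

class CorrectionMemory:
    def __init__(self):
        self.entries: list[Correction] = []

    def distill(self, context: str, correction: str) -> None:
        # DROC uses an LLM to rewrite the raw correction into generalizable knowledge;
        # here we simply store it verbatim as a placeholder.
        self.entries.append(Correction(context, correction))

    def retrieve(self, new_task: str, k: int = 3) -> list[str]:
        # Rank stored corrections by similarity between their context and the new task.
        query = embed(new_task)
        scored = sorted(self.entries, key=lambda e: -float(query @ embed(e.context)))
        return [e.knowledge for e in scored[:k]]
```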
Thrilled to announce the first annual Reinforcement Learning Conference @RL_Conference, which will be held at UMass Amherst August 9-12! RLC is the first strongly peer-reviewed RL venue with proceedings, and our call for papers is now available: rl-conference.cc. 🧵
Really excited for what RT-X can enable in robot learning moving forward! Amazing collaboration with a wonderful group of people in academia and industry!
Introducing 𝗥𝗧-𝗫: a generalist AI model to help advance how robots can learn new skills. 🤖 To train it, we partnered with 33 academic labs across the world to build a new dataset with experiences gained from 22 different robot types. Find out more: dpmd.ai/TW_RT-X