Yiğit Korkmaz
@yigitkkorkmaz
Computer Science PhD @USC. Prev @UCSanDiego, @UniBogazici
Are current eval/deployment practices enough for today’s robot policies? Announcing the Eval&Deploy workshop at CoRL 2025 @corl_conf, where we'll explore eval + deployment in the robot learning lifecycle and how to improve it! eval-deploy.github.io 🗓️ Submissions due Aug 30
Just a small reminder that our workshop is happening tomorrow, and we have an amazing lineup of speakers! Make sure to check out the workshop website for the schedule. 🤖
📢 Exciting news! Our workshop Human-in-the-Loop Robot Learning: Teaching, Correcting, and Adapting has been accepted to RSS 2025! 🤖🎉 Join us as we explore how robots can learn from and adapt to human interactions and feedback. 🔗 Workshop website: hitl-robot-learning.github.io 🧵👇
[Blog Post Announcement] The internet is full of “interesting” data: cat videos, think pieces, and highlight reels—but robots often need to learn from mundane data to help us with everyday unexciting tasks. While people aren’t incentivized to share this boring data, we constantly…
VLAs have the potential to generalize over scenes and tasks, but require a ton of data to learn robust policies. We introduce OG-VLA, a novel architecture and learning framework that combines the generalization strengths of VLAs with the robustness of 3D-aware policies. 🧵
How can non-experts quickly teach robots a variety of tasks? Introducing HAND ✋, a simple, time-efficient method of training robots! Using just a single hand demo, HAND learns manipulation tasks in under 4 minutes! 🧵
Reward models that help real robots learn new tasks—no new demos needed! ReWiND uses language-guided rewards to train bimanual arms on OOD tasks in 1 hour! Offline-to-online, language-conditioned visual RL on action-chunked transformers. 🧵
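At a high level, the idea is to replace hand-designed or demo-derived rewards with scores from a learned model conditioned on a language instruction, then run RL on the relabelled data. Below is a minimal, hypothetical Python sketch of that relabeling step; the Transition container, the relabel_with_language_reward helper, and the placeholder reward function are illustrative assumptions, not ReWiND's actual interfaces.

```python
# Hypothetical sketch: relabel rollout transitions with a learned
# language-conditioned reward model before handing them to an RL learner.
from dataclasses import dataclass, replace
from typing import Callable, List, Sequence

import numpy as np


@dataclass
class Transition:
    observation: np.ndarray  # e.g. an image from the robot's camera
    action: np.ndarray       # e.g. an action chunk predicted by the policy
    reward: float            # overwritten below by the learned reward


def relabel_with_language_reward(
    transitions: Sequence[Transition],
    instruction: str,
    reward_fn: Callable[[np.ndarray, str], float],
) -> List[Transition]:
    """Score each observation against the language instruction and use that
    score as the reward, so no environment or per-task demo reward is needed."""
    return [replace(t, reward=reward_fn(t.observation, instruction)) for t in transitions]


# Example with a stand-in reward function (a real one would be a trained network).
dummy_reward = lambda obs, text: float(obs.mean())
batch = [Transition(np.zeros((64, 64, 3)), np.zeros(7), 0.0) for _ in range(4)]
relabelled = relabel_with_language_reward(batch, "open the drawer", dummy_reward)
```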
I recently wrote a post about MILE for the @RASC_USC blog — check it out here: rasc.usc.edu/blog/mile-mode… Feel free to reach out if you have any questions or thoughts! See you at ICRA 🤖🙂
I’ll be presenting this work at ICRA 2025. Feel free to reach out if you are interested in chatting about sample-efficient human-in-the-loop robot learning! Website: liralab.usc.edu/mile/ Paper: arxiv.org/abs/2502.13519 Thanks a lot to my advisor @ebiyik_
Hamza has been working to make progress in AI and reasoning feel more measurable and grounded. He’s also genuinely enjoyable to work with. Check out his work and give him a follow (faculty friends, he’ll be applying to PhD programs next year 👀)
How much does a correct answer from an LM cost? How much has AI lowered the cost of solving problems? Meet Cost‑of‑Pass: An Economic Framework for Evaluating LMs! Cost‑of‑Pass = the expected $ spent to get one correct answer. Frontier Cost‑of‑Pass = the cheapest route to a correct answer, whether from an LM or a human expert.
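As a rough illustration of the accounting, here is a minimal Python sketch assuming cost-of-pass is computed as the per-attempt inference cost divided by the success rate, and the frontier takes the cheapest option among candidate LMs and a human-expert baseline; the model names and numbers are made up for the example, not figures from the paper.

```python
# Illustrative cost-of-pass accounting with made-up numbers.

def cost_of_pass(cost_per_attempt: float, pass_rate: float) -> float:
    """Expected $ to obtain one correct answer: the cost of a single
    attempt divided by the probability that the attempt is correct."""
    if pass_rate <= 0:
        return float("inf")  # this route never yields a correct answer
    return cost_per_attempt / pass_rate


def frontier_cost_of_pass(lms: dict, human_expert_cost: float) -> float:
    """Cheapest expected cost of a correct answer across all routes:
    any of the candidate LMs, or paying a human expert directly."""
    lm_costs = [cost_of_pass(cost, rate) for cost, rate in lms.values()]
    return min(lm_costs + [human_expert_cost])


# Hypothetical per-problem numbers: ($ per attempt, pass rate).
lms = {
    "small-lm": (0.002, 0.30),     # cheap but often wrong
    "frontier-lm": (0.050, 0.90),  # pricier but usually right
}
print(cost_of_pass(*lms["small-lm"]))                      # ≈ $0.0067
print(frontier_cost_of_pass(lms, human_expert_cost=5.00))  # ≈ $0.0067 (small-lm is cheapest)
```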