Georgia Chalvatzaki
@GeorgiaChal
Professor @CS_TUDarmstadt, @hessian_AI, AI Emmy Noether @dfg_public, #ERCStG SIREN, co-chair TC @MobileManip & chair WiE @ieeeras
Such an honor that our project is highlighted by @ERC_Research! We are so excited to put these humanoid robots to serious work, powered by SIREN's breakthroughs! Soon, we will announce open positions! We need highly motivated people with interdisciplinary backgrounds for SIREN!
🤖 How can robots adapt to unpredictable environments? @GeorgiaChal's team will create smarter, more flexible robots that learn from their surroundings and handle new tasks. Discover more 👉 europa.eu/!bkrTrH New #FrontierResearch #ERCStG #Robotics @TUDarmstadt
🚨 We're hiring a Postdoc in Robot Learning @ PEARL Lab, TU Darmstadt 🚨 Join our ERC-funded project SIREN (Structured Interactive Perception and Learning for Holistic Robotic Embodied Intelligence). We’re developing methods that will enable robots to understand and adapt to…
🎥 A powerful moment from the RAS-WiE Voices — Women shaping the future of robotics and automation at #ICRA2025 Community Day. Celebrating the voices driving change in our community.
Don’t miss our WiE event at #ICRA2025 ✨ On Wednesday join the WiE committee for our #ICRA25 Lunch event "RAS-#WomenInEngineering Voices — Women shaping the future of #robotics and #automation"! 🤖 The event will begin with short talks by finalists of our inaugural #WiRA Women…
🎓 Attending #ICML2025 and interested in training diffusion policies in online RL? Come chat with me about our work DIME: Diffusion-Based Maximum Entropy Reinforcement Learning at 📍 Poster W-719 (West Hall B2-B3) 🗓️ Wednesday, July 16 @ 4:30 p.m.
I couldn’t make it to #ICML2025, but our work on Diffusion-Based Maximum Entropy RL is there! We introduce DiME, a new approach that swaps the standard actor in MaxEnt RL with a conditional diffusion model. This bypasses the need for tricky entropy approximations and lets our…
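To make the idea concrete, here is a minimal, hypothetical sketch of a conditional diffusion model acting as the policy in a SAC-style MaxEnt RL actor update. It is not the DiME/DIME algorithm itself: all names (DenoiserMLP, N_STEPS, the beta schedule, the dimensions, the stand-in batch) are illustrative, and the actor loss shown (backpropagating the Q-value through the differentiable sampling chain) is just one common way to train diffusion actors; the paper's actual objective may differ.

```python
# Hypothetical sketch, NOT the DiME/DIME algorithm: a conditional diffusion
# model used as the actor in a SAC-style MaxEnt RL update. All names and
# hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

STATE_DIM, ACT_DIM, N_STEPS = 17, 6, 10  # illustrative dimensions / diffusion steps

class DenoiserMLP(nn.Module):
    """Predicts the noise added to an action, conditioned on state and step."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACT_DIM + 1, 256), nn.SiLU(),
            nn.Linear(256, 256), nn.SiLU(),
            nn.Linear(256, ACT_DIM),
        )

    def forward(self, state, noisy_action, t):
        t_emb = t.float().unsqueeze(-1) / N_STEPS          # crude timestep embedding
        return self.net(torch.cat([state, noisy_action, t_emb], dim=-1))

def sample_action(denoiser, state, betas):
    """Reverse diffusion: start from Gaussian noise and iteratively denoise."""
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    a = torch.randn(state.shape[0], ACT_DIM)
    for t in reversed(range(N_STEPS)):
        t_batch = torch.full((state.shape[0],), t)
        eps = denoiser(state, a, t_batch)
        a = (a - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        if t > 0:
            a = a + torch.sqrt(betas[t]) * torch.randn_like(a)
    return torch.tanh(a)  # squash to the action range

# One simplified actor step: the sampling chain is differentiable, so the
# Q-value can be backpropagated through the sampled action. This is one way
# diffusion actors are commonly trained; it is not claimed to be DiME's update.
denoiser = DenoiserMLP()
critic = nn.Sequential(nn.Linear(STATE_DIM + ACT_DIM, 256), nn.SiLU(), nn.Linear(256, 1))
actor_opt = torch.optim.Adam(denoiser.parameters(), lr=3e-4)
betas = torch.linspace(1e-4, 0.2, N_STEPS)

states = torch.randn(32, STATE_DIM)                 # stand-in for a replay-buffer batch
actions = sample_action(denoiser, states, betas)
actor_loss = -critic(torch.cat([states, actions], dim=-1)).mean()
actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```

The appeal of a diffusion actor in this setting is expressiveness: it can represent multimodal action distributions that a squashed Gaussian cannot, while stochasticity enters naturally through the denoising chain rather than through an explicit entropy term.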
🤖📺 What if a robot could learn complex, bimanual tasks just by watching YouTube? A huge barrier has always been translating what we see into something a robot can understand and do. I’m beyond excited about this work from my lab, PEARL, which was just accepted to…
PSA for the robotics community: Stop labeling affordances or distilling them from VLMs. Extract affordances from bimanual human videos instead! Excited to share 2HandedAfforder: Learning Precise Actionable Bimanual Affordances from Human Videos, accepted at #ICCV2025! 🎉 🧵1/5
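As a purely illustrative sketch of what "extracting where hands act" from a bimanual video can look like, the snippet below accumulates detected hand-landmark locations over a clip into a visitation map and thresholds it into rough candidate regions. This is not the 2HandedAfforder pipeline: the input file name, the landmark detector (MediaPipe Hands), the circle radius, and the threshold are all assumptions made for the example.

```python
# Hypothetical illustration only, NOT the 2HandedAfforder pipeline: a crude
# proxy for "where do the hands act?" obtained by accumulating detected hand
# landmarks over a bimanual human video. File name and thresholds are made up.
import cv2
import numpy as np
import mediapipe as mp

cap = cv2.VideoCapture("bimanual_demo.mp4")          # hypothetical input video
hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=2,
                                 min_detection_confidence=0.5)
heatmap = None

while True:
    ok, frame_bgr = cap.read()
    if not ok:
        break
    if heatmap is None:
        heatmap = np.zeros(frame_bgr.shape[:2], dtype=np.float32)
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    result = hands.process(rgb)
    if result.multi_hand_landmarks:
        frame_mask = np.zeros_like(heatmap)
        h, w = heatmap.shape
        for hand in result.multi_hand_landmarks:
            for lm in hand.landmark:                  # 21 normalized landmarks per hand
                x, y = int(lm.x * w), int(lm.y * h)
                if 0 <= x < w and 0 <= y < h:
                    cv2.circle(frame_mask, (x, y), 15, 1.0, thickness=-1)
        heatmap += frame_mask                         # count frames each pixel is "touched"

cap.release()
hands.close()
# Pixels visited most often by the hands serve as rough candidate regions;
# a real method would predict far more precise, task-conditioned affordances.
candidate_mask = (heatmap / max(float(heatmap.max()), 1e-6)) > 0.5
```

The point of the toy example is only to show that human videos already carry a dense, free supervision signal about where and how hands interact, which is what makes them attractive as an affordance source compared to manual labels or VLM distillation.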
We're excited to announce the third workshop on LEAP: Learning Effective Abstractions for Planning, to be held at #CoRL2025 @corl_conf! Early submission deadline: Aug 12 Late submission deadline: Sep 5 Website link below 👇
Hesse plans major cuts to university funding. This article, featuring my colleague Stefan Roth (TU Darmstadt), highlights the serious consequences. The cuts risk widening the gap with states like Bavaria & Baden-Württemberg, and with leading high-tech nations: bit.ly/44prCWO
Congratulations to @GeorgiaChal whose research focuses on improving the way robots and humans function together. She has been awarded the 2025 Alfried Krupp Prize! #FrontierResearch
I am deeply honored and grateful to receive the Alfried Krupp Prize, and incredibly proud to bring this prestigious recognition to Technische Universität Darmstadt for the first time. This award is not only a personal milestone but also a meaningful endorsement of the research…
Thank you to all the speakers & attendees for making the EgoAct workshop a great success! Congratulations to the winners of the Best Paper Awards: EgoDex & DexWild! The full recording is available at: youtu.be/64yLApbBZ7I Some highlights:
Join us on Saturday, 21st June at EgoAct 🥽🤖: the 1st Workshop on Egocentric Perception & Action for Robot Learning @ RSS 2025 @RoboticsSciSys in Los Angeles! ☀️🌴 Full program w/ accepted contributions & talks at: egoact.github.io/rss2025 Online stream: tinyurl.com/egoact
Very happy that EgoDex received a Best Paper Award at the 1st EgoAct workshop at #RSS2025! Huge thanks to the organizing committee @SnehalJauhri @GeorgiaChal @GalassoFab10 @danfei_xu @YuXiang_IRVL for putting together this forward-looking workshop. Also kudos to my colleagues @ryan_hoque…
Excited to announce EgoAct🥽🤖: the 1st Workshop on Egocentric Perception & Action for Robot Learning @ #RSS2025 in LA! We’re bringing together researchers exploring how egocentric perception can drive next-gen robot learning! Full info: egoact.github.io/rss2025 @RoboticsSciSys
Could geometric cues help improve goal inference in robotics? We explore this question at #RLDM today | Spot 86. Stop by if you're curious about bridging motion planning and intent prediction.
⏰ Less than 48 hours left to submit your work to the IBRL Workshop! 🚀 Note that three papers will be accepted as spotlights and will have the opportunity to give a 10-minute presentation before the poster session! 🔗 OpenReview portal: openreview.net/group?id=rl-co…
📢 Submission deadline extension: the new deadline is June 6th AoE 🔗 Portal: openreview.net/group?id=rl-co… 🌐 More information at: sites.google.com/view/ibrl-work… 🚀 Looking forward to seeing you at @RL_Conference !
Congratulations and best of luck in this new role and adventure! I look forward to further collaboration once you are in Lund!
I'm pleased to announce that, starting October 1st, I will be joining the Computer Science department at Lund University (in Sweden) as a Senior Lecturer. I will join the Robotics and Semantic Systems group and collaborate in the RobotLab LTH, where I'll bring my RL expertise.
Exciting news from @TUDarmstadt! We have officially secured the Cluster of Excellence for Reasonable AI (RAI). This is a major milestone for our university and an important development within the hessian.AI ecosystem. This achievement was made possible by the…
🚀 TU Darmstadt leads the new Cluster of Excellence "Reasonable AI" – advancing trustworthy, efficient & adaptive AI grounded in common sense. A big thank you to the entire team, the university, and the state of Hesse for their tremendous work & support! buff.ly/N7xLKfW
I'm excited to give a virtual invited talk at the #ICRA2025 Workshop on Structured Robot Learning, titled: "Reclaiming Structure: A Vision for the Next Generation of Robot Learning" 🔍 When I started using the term "Structured Robot Learning", it was about unifying perception,…
