Georgios Pavlakos
@geopavlakos
Assistant Professor at UT Austin @UTCompSci | Working on Computer Vision and Machine Learning
Super excited to share that I will be starting as an Assistant Professor at UT Austin @UTCompSci in January 2024! 🥳🥳 I'm extremely grateful to my amazing mentors and colleagues for their unwavering support every step of the way! Looking forward to this exciting new chapter!
What would a World Model look like if we start from a real embodied agent acting in the real world? It has to have: 1) A real, physically grounded and complex action space—not just abstract control signals. 2) Diverse, real-life scenarios and activities. Or in short: It has to…
We're grateful to receive multiple sponsorships (to be announced soon)! 🏆 $1000 Best Paper Award 🏆 $1000 Best Demo Paper Award 🎁 Onsite gifts for attendees! Join us to shape the future of 3D learning in robotics, autonomous driving, the metaverse, and scientific imaging! 🔗…
📢 Call for Papers: End-to-End 3D Learning @ ICCV 2025 Workshop Advance 3D Representation, Geometry & Generative AI for Robotics, Autonomous Driving, XR, and Science. 🌍 Domain-leading speakers 🏆 Best Paper Award 🗓 Submission Deadline: June 29 🔗 e2e3d.github.io…
Aaaand, Hanwen @hanwenjiang1 is presenting MegaSynth! We are at poster #57. Come to our poster to find out more! Project page, Code, Data: hwjiang1510.github.io/MegaSynth/

During this poster session, Ashutosh @chargedneutron_ is presenting FIction. Come to poster #173 to chat with us! Project page, Code: vision.cs.utexas.edu/projects/FIcti…

Ashutosh @chargedneutron_ is presenting ExpertAF during this poster session! We are at poster #280. Come by to chat about it! Project page: vision.cs.utexas.edu/projects/Exper…

Brent Yi @brenthyi will be presenting EgoAllo during this poster session! Please come to poster #164 to find out more! Project page, Code, Models: egoallo.github.io

Yan Xia @IsshikihXY just gave an awesome presentation of HSMR! We will be at our poster #91 for the poster session this afternoon. Come by to find out more! Project page, Code, Models: isshikihugh.github.io/HSMR/

If you're at #CVPR2025, come by the Workshop on 3D Human Understanding tomorrow and meet all our amazing speakers in person! 🕑 June 12, 1:50 PM 📍 Room 110b 👤 @akanazawa @Michael_J_Black @GerardPonsMoll1 @blacksquirrel__ @jhugestar 🌐 tinyurl.com/3d-humans-2025
Big congratulations, @hanwenjiang1! So proud of everything you’ve achieved so far! Can’t wait to see your next steps!
@hanwenjiang1 defended his thesis today and became Dr. Jiang. He is jointly supervised by @geopavlakos. Hanwen did amazing work. His recent work RayZer gained quite a lot of visibility. He is very independent, and most of his PhD work came from his own ideas! He will join Adobe Research…
🚀 Struggling with the lack of high-quality data for AI-driven human-object interaction research? We've got you covered! Introducing HUMOTO, a groundbreaking 4D dataset for human-object interaction, developed with a combination of wearable motion capture, SOTA 6D pose…
🔍 3D is not just pixels—we care about geometry, physics, topology, and functions. But how to balance these inductive biases with scalable learning? 👀 Join us at Ind3D workshop @CVPR (June 12, afternoon) for discussions on the future of 3D models! 🌐 ind3dworkshop.github.io/cvpr2025
The 2nd 3D HUMANS workshop is back at @CVPR! 📍Join us on June 12 afternoon in Nashville for a 2025 perspective on 3D human perception, reconstruction & synthesis. 🖼️ Got a CVPR paper on 3D humans? Nominate it to be featured in our poster session! 👉 tinyurl.com/3d-humans-2025
Make sure to check out Hanwen's @hanwenjiang1 latest work! 🚀 We introduce RayZer, a self-supervised model for novel view synthesis. We use zero 3D supervision, yet we outperform supervised methods! Some surprising and exciting results inside! 🔍🔥
Supervised learning has held 3D Vision back for too long. Meet RayZer — a self-supervised 3D model trained with zero 3D labels: ❌ No supervision of camera & geometry ✅ Just RGB images And the wild part? RayZer outperforms supervised methods (as 3D labels from COLMAP are noisy)…
I'm excited to present "Fillerbuster: Multi-View Scene Completion for Casual Captures"! This is work with my amazing collaborators @Normanisation, @yash2kant, Vasu Agrawal, @MZollhoefer, @akanazawa, @c_richardt during my internship at Meta Reality Labs. ethanweber.me/fillerbuster/
Atlas Gaussians will be presented as a 🎉Spotlight🎉 at ICLR 2025! 🥳 Huge congratulations to Haitao Yang (yanghtr.github.io) for this amazing work! Project Page: yanghtr.github.io/projects/atlas…
Very happy that AtlasGaussians was accepted by ICLR 25 (openreview.net/forum?id=H2Gxi…). I did very little and the students, in particular the first author Haitao Yang (yanghtr.github.io), came up with the idea. Haitao is graduating soon. Also, my second published paper with…