Qixing Huang
@qixing_huang
I am an associate professor at UT Austin. I do research at the intersection of computer graphics, computer vision, and machine learning.
nsf.gov/awardsearch/sh… Very happy to receive an NSF grant on injecting physical, topological, and geometric priors into 3D generative models. Usually this is not a big deal for a tenured professor, but I am excited because it is a direction I have been working on recently.
Six papers were accepted to ICCV 2025, including five from three very strong students: @hanwenjiang1, @jacinth_lu, and Yunpeng Bai (bbaaii.github.io). There will be big news soon on recent success in faculty recruitment at UT in graphics/vision. Stay tuned!

Just learned that @IsilDillig won the #SIGPLAN Robin Milner Junior Researcher Award this year! 🎈 🍾 The award goes to one outstanding mid-career PL researcher each year, and it’s hard to think of a more deserving candidate. Congratulations, Isil! sigplan.org/Awards/Milner/
Hope you enjoyed our workshop 😛 We have now released the slides of the presentations. Thanks to the amazing speakers! Check them out if you didn't attend!
🔍 3D is not just pixels—we care about geometry, physics, topology, and functions. But how to balance these inductive biases with scalable learning? 👀 Join us at Ind3D workshop @CVPR (June 12, afternoon) for discussions on the future of 3D models! 🌐 ind3dworkshop.github.io/cvpr2025
Every year at @CVPRConf (and other conferences), a lot of money is spent on printing these posters (~$150 × 2,500 ≈ $375k), and after the conference they are all trashed and wasted, like these. A better business model could save costs here.
I’m looking for PhD students for the 2026 cycle! If you’re @CVPR and think we might be a good fit, come say hi or send me an email with [CVPR2025] in the subject line so that I don’t miss it. #CVPR2025
I’m thrilled to share that I will be joining Johns Hopkins University’s Department of Computer Science (@JHUCompSci, @HopkinsDSAI) as an Assistant Professor this fall.
Top minds. Deep ideas. 🎯 Inductive Bias in 3D Generation 🗓️ June 12 — Day 2 of CVPR. Be there!
Applying for a Schengen visa to attend SGP (sgp2025.my.canva.site) in Spain was among the worst visa application experiences of my life. They do not accept mail-in applications, and I spent half a day in Houston applying in person (total cost: $400). They retook my biometrics,…
🔥 HUMOTO: complex human object interaction dataset. Fine-grained text annotation. Detailed finger and full-body poses. Multiple objects. Mixamo compatible. Amazing collaboration led by @jacinth_lu @Papagina_Yi and team @uttaran127 @qixing_huang. Project: jiaxin-lu.github.io/humoto/
🚀 Struggling with the lack of high-quality data for AI-driven human-object interaction research? We've got you covered! Introducing HUMOTO, a groundbreaking 4D dataset for human-object interaction, developed with a combination of wearable motion capture, SOTA 6D pose…
🚨 Introducing HUMOTO! 🚨 Our new 4D dataset of human-object interactions with stunning details ✨, capturing daily activities from cooking 🍳 to organizing 📚. Perfect for robotics 🤖, computer vision 👁️ & animation 🎬!
Brilliant insights from @Michael_J_Black on the importance of data and 3D+ for 4D foundation models that understand humans, and the future of embodied intelligence in the last keynote talk of #Eurographics2025! See you next year in Aachen :)
Awesome look into the future of humanoid robots and what we can learn from character animation from Karen Liu’s keynote at #Eurographics2025!
Amazing keynote by Alyosha Efros on the role of data in visual computing at #Eurographics2025! Thought-provoking insights from generative models to 3D perception :)
This is an exciting workshop, on a very important topic, with an excellent lineup of speakers.
Supervised learning has held 3D vision back for too long. Meet RayZer — a self-supervised 3D model trained with zero 3D labels: ❌ No supervision of cameras & geometry ✅ Just RGB images And the wild part? RayZer outperforms supervised methods (as 3D labels from COLMAP are noisy)…
RayZer: A Self-supervised Large View Synthesis Model @hanwenjiang1, @HaoTan5, @totoro97_, @Haian_Jin, @__yuezhao__, @Sai__Bi, @KaiZhang9546, @fujun_luan, Kalyan Sunkavalli, @qixing_huang, @geopavlakos arxiv.org/abs/2505.00702
Reconstructing Humans with a Biomechanically Accurate Skeleton just dropped on Hugging Face
Check out HSMR, Yan Xia's (@IsshikihXY) latest work. We reconstruct humans using a biomechanically accurate skeleton. Code and a Hugging Face demo are live. Give it a try! Webpage: isshikihugh.github.io/HSMR/ Code: github.com/IsshikiHugh/HS… Demo: huggingface.co/spaces/Isshiki…
GenVDM: Generating Vector Displacement Maps From a Single Image. arxiv.org/abs/2503.00605