Felix Taubner
@taubnerfelix
PhD student at the University of Toronto, working on generative 3D face animation
Introducing 🧢CAP4D🧢 CAP4D turns any number of reference images (single, few, and many) into controllable real-time 4D avatars. 🧵⬇️ Website: felixtaubner.github.io/cap4d/ Paper: arxiv.org/abs/2412.12093
All forms of intelligence co-emerged with a body, except AI. We're building a #future where AI evolves as your lifelike digital twin to assist you across health, sports, daily life, creativity, & beyond... myolab.ai ➡️ Preview your first #HumanEmbodiedAI
We will present the paper this afternoon. Come chat with us!!
Thrilled to share the papers that our lab will present at @CVPR. Learn more in this thread 🧵 and meet @Kai__He, @yash2kant, @Dazitu_616, and our previous visitor @toshiya427 in Nashville! 1/n
📣📣📣 Neural Inverse Rendering from Propagating Light 💡 just won Best Student Paper award at #CVPR!!!
📢📢📢 Neural Inverse Rendering from Propagating Light 💡 Our CVPR Oral introduces the first method for multiview neural inverse rendering from videos of propagating light, unlocking applications such as relighting light propagation videos, geometry estimation, or light…
Honored that our work received the best student paper award at #CVPR2025! This was a really fun and exciting collaboration with @mpotoole led by amazing students @anagh_malik @imarhombus @AndrewEJXie! Check out the work at anaghmalik.com/InvProp/
#CVPR2025 paper awards
I will be presenting CAP4D at #CVPR2025! Come check out our poster today at poster board #327 (15:00-16:30) and on Friday at poster board #9 (16:00-18:00). Don’t miss out on our 📣oral📣 presentation on Friday in ExHall A2 (15:00)!
Check out the Toronto Computational Imaging Group at CVPR this week! - felixtaubner.github.io/cap4d/ (Fri: Oral Sess 2B) - anaghmalik.com/InvProp/ (Sat: Oral Sess 3A) - Opportunistic Single-Photon Time of Flight (Sat: Oral Sess 4C) - snap-research.github.io/ac3d/ (Sun: Poster Sess 5)
Be sure to check out this work done by 🤩@anagh_malik 🤩
📢 Introducing DenseDPO: Fine-Grained Temporal Preference Optimization for Video Diffusion Models Compared to vanilla DPO, we improve paired data construction and preference label granularity, leading to better visual quality and motion strength with only 1/3 of the data. 🧵
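For context, here is a minimal sketch of the vanilla DPO objective that DenseDPO improves on; the function name, argument names, and the beta value are illustrative assumptions, not details from the paper.

```python
import torch.nn.functional as F

def dpo_loss(logp_w_policy, logp_l_policy, logp_w_ref, logp_l_ref, beta=0.1):
    """Vanilla DPO loss over a batch of (chosen w, rejected l) pairs.

    Each argument is a tensor of per-sample log-probabilities; `beta`
    controls how strongly the policy is pushed away from the frozen
    reference model.
    """
    # Margin between the policy's preference for the chosen sample and
    # the reference model's preference for it.
    margin = (logp_w_policy - logp_w_ref) - (logp_l_policy - logp_l_ref)
    return -F.logsigmoid(beta * margin).mean()
```

In the video setting, the log-probabilities would come from the video diffusion model's likelihoods on preferred vs. rejected clips; DenseDPO's contribution is in how those pairs and labels are constructed, not in the loss form itself.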
🚀 Just released: FLAIR – a new training-free approach to solving inverse problems using flow-matching models! 🎯 Try it live: huggingface.co/spaces/prs-eth… 📚 Learn more: inverseflair.github.io
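As background, a minimal sketch of plain flow-matching sampling, the generative backbone FLAIR builds on; `velocity_net`, the Euler integrator, and the step count are generic assumptions, not the FLAIR method itself.

```python
import torch

@torch.no_grad()
def sample_flow(velocity_net, shape, steps=50, device="cpu"):
    # Integrate dx/dt = v_theta(x, t) from t=0 (pure noise) to t=1 (data)
    # with fixed-step Euler; `velocity_net` is any learned velocity field
    # taking (x, t) and returning a tensor shaped like x.
    x = torch.randn(shape, device=device)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((shape[0],), i * dt, device=device)
        x = x + dt * velocity_net(x, t)
    return x
```

A training-free inverse-problem solver would steer this trajectory with a measurement-consistency term rather than retraining `velocity_net`; see the project page for how FLAIR actually does it.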
📢Excited to be at #ICLR2025 for our paper: VD3D: Taming Large Video Diffusion Transformers for 3D Camera Control Poster: Thu 3-5:30 PM (#134) Website: snap-research.github.io/vd3d/ Code: github.com/snap-research/… Also check out our #CVPR2025 follow-up AC3D: snap-research.github.io/ac3d/
Happy to share that 🧢CAP4D🧢 has been accepted to CVPR 2025 (Oral)! Looking forward to seeing you all in Nashville 🎉
⚡️ Introducing Bolt3D ⚡️ Bolt3D generates interactive 3D scenes in less than 7 seconds on a single GPU from one or more images. It features a latent diffusion model that *directly* generates 3D Gaussians of seen and unseen regions, without any test time optimization. 🧵👇 (1/9)
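For readers unfamiliar with the output representation, a rough sketch of the per-Gaussian parameters a 3D Gaussian scene consists of; field names and shapes follow common 3DGS conventions and are not Bolt3D's actual layout.

```python
from dataclasses import dataclass
import torch

@dataclass
class GaussianScene:
    # Standard 3D Gaussian splatting parameters, one row per Gaussian.
    means: torch.Tensor      # (N, 3) centers in world space
    scales: torch.Tensor     # (N, 3) per-axis extents
    rotations: torch.Tensor  # (N, 4) unit quaternions
    opacities: torch.Tensor  # (N, 1) alpha in [0, 1]
    colors: torch.Tensor     # (N, 3) RGB (or SH coefficients)
```

"Directly generates 3D Gaussians" means the diffusion model outputs these parameters in one pass, instead of optimizing them per scene as in the original 3DGS pipeline.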
📢📢📢 Come and submit to our workshop on Physics-inspired 3D Vision and Imaging at CVPR 2025! Speakers 🗣️ include Ioannis Gkioulekas, Laura Waller, Berthy Feng, @SeungHwanBaek8 and @GordonWetzstein! Thanks to coorganizers @imarhombus, @ceciliazhang77, @dorverbin and @jtompkin!
📢📢 𝐀𝐯𝐚𝐭𝟑𝐫 📢📢 Avat3r creates high-quality 3D head avatars from just a few input images in a single forward pass with a new dynamic 3DGS reconstruction model. Video: youtu.be/P3zNVx15gYs Project: tobias-kirschstein.github.io/avat3r Our core idea is to make Gaussian…
🚀 Introducing Pippo – our diffusion transformer pre-trained on 3B Human Images and post-trained with 400M high-res studio images! ✨Pippo can generate 1K resolution turnaround video from a single iPhone photo! 🧵👀 Full deep dive thread coming up next!
Meta presents: Pippo : High-Resolution Multi-View Humans from a Single Image Generates 1K resolution, multi-view, studio-quality images from a single photo in a single forward pass
📢📢𝐍𝐞𝐑𝐒𝐞𝐦𝐛𝐥𝐞 𝐯𝟐 𝐃𝐚𝐭𝐚𝐬𝐞𝐭 𝐑𝐞𝐥𝐞𝐚𝐬𝐞📢📢 Head captures of 7.1MP from 16 cameras at 73fps: * More recordings (425 people) * Better color calibration * Convenient download scripts github.com/tobias-kirschs… The new version of our dataset adds 156…
My group is looking for motivated PhD students who want to work on the future of digital humans. Within the ERC project 'LeMo: Learning Digital Humans in Motion' there are two open positions: career.tu-darmstadt.de/HPv3.Jobs/TU-D… career.tu-darmstadt.de/HPv3.Jobs/TU-D…
🚀 We present SynShot! A novel method leveraging a synthetic prior to create fully drivable 3D head avatars from only a few shots (typically just 3 images).🎭✨ zielon.github.io/synshot arxiv.org/abs/2501.06903