Zan Gojcic
@ZGojcic
Research manager at @NVIDIAAI working on neural reconstruction and data-driven simulation.
Had a great time chatting with @sopharicks and the @buZZrobot community about our recent work, DiffusionRenderer, and the exciting research my team is doing at @NVIDIAAI! DiffusionRenderer project page: research.nvidia.com/labs/toronto-a…
@NVIDIAAI rendering model allows manipulating lighting in video, opening up new possibilities for creative editing. What is even cooler is that it can generate synthetic data for autonomous vehicle and robot training, helping overcome bottlenecks in collecting physical data. Huge…
Super cool!
Trained directly on @insta360 X5 circular fisheyes with @NVIDIAAIDev 3DGUT, and rendered using a fisheye camera in the gsplat viewer. Princess of Wales Conservatory, Kew Gardens, London. #NVIDIA3DGUT #NVIDIASweepstakes #3DGS
Time to throw away the Plücker raymaps - a really elegant formulation of a camera-aware RoPE-like embedding for multiview ViTs! Great work by @ruilong_li @JunchenLiu77 and the team!
For everyone interested in precise 📷 camera control 📷 in transformers [e.g., video / world models etc.] Stop settling for Plücker raymaps -- use camera-aware relative PE in your attention layers, like RoPE (for LLMs) but for cameras! Paper & code: liruilong.cn/prope/
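The RoPE-style relative encoding mentioned above can be illustrated with a minimal sketch. This shows plain 1-D RoPE for token positions, not the camera-aware variant from the linked paper; the paper's extension replaces scalar positions with camera poses so that attention depends only on relative geometry. All function and variable names here are illustrative.

```python
import numpy as np

def rope(x, pos, base=10000.0):
    """Apply 1-D rotary position embedding to vectors x at integer positions pos.

    x: (n, d) with d even. Each feature pair is rotated by an angle
    proportional to the token position, so the dot product of rotated
    queries and keys depends only on their relative offset."""
    n, d = x.shape
    freqs = base ** (-np.arange(0, d, 2) / d)   # (d/2,) per-pair frequencies
    angles = np.outer(pos, freqs)               # (n, d/2) rotation angles
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin          # 2D rotation of each pair
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

rng = np.random.default_rng(0)
q = rng.normal(size=(1, 8))
k = rng.normal(size=(1, 8))

# The attention score depends only on the relative offset (5-2 == 9-6):
s1 = rope(q, [2]) @ rope(k, [5]).T
s2 = rope(q, [6]) @ rope(k, [9]).T
assert np.allclose(s1, s2)
```

The key property is relative invariance: rotating both query and key shifts their angles equally, so the inner product is unchanged. Camera-aware variants aim for the same invariance under relative camera transforms.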
Sadly, I am no longer a professor at ETH (@eth_en) due to very severe #longCovid and #MECFS. ethrat.ch/de/ernennungen….
Thought I'd share this WebGL viewer that uses a combination of ray tracing and depth testing to render 3D (or 2D) Gaussians. github.com/fhahlbohm/dept… Runs smoothly (>120 Hz) on an M1 MacBook Pro at 1080p. Quality is decent. Gaussians truncated at 2σ. No higher degree SH support.
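The 2σ truncation mentioned above means a splat contributes nothing beyond two standard deviations from its center, which bounds each Gaussian's footprint and speeds up rendering at a small quality cost. A minimal sketch of such a cutoff for a single 2D Gaussian; this is illustrative, not the linked viewer's actual code.

```python
import numpy as np

def splat_weight(p, mean, cov, cutoff_sigma=2.0):
    """Evaluate an unnormalized 2D Gaussian at pixel p, returning 0 beyond
    the cutoff (measured as Mahalanobis distance, here 2 sigma)."""
    d = np.asarray(p, float) - np.asarray(mean, float)
    m2 = d @ np.linalg.inv(cov) @ d        # squared Mahalanobis distance
    if m2 > cutoff_sigma ** 2:
        return 0.0                          # truncated: pixel outside the footprint
    return float(np.exp(-0.5 * m2))

cov = np.array([[1.0, 0.0], [0.0, 1.0]])
inside = splat_weight([0.5, 0.0], [0.0, 0.0], cov)   # within 2σ: positive weight
outside = splat_weight([3.0, 0.0], [0.0, 0.0], cov)  # beyond 2σ: 0.0
```

Using the Mahalanobis distance rather than Euclidean distance makes the cutoff respect anisotropic (elongated) Gaussians.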
NVIDIA’s AI watched 150,000 videos… and learned to relight scenes incredibly well! No game engine. No 3D software. And it has an amazing cat demo. 🐱💡 Hold on to your papers! Full video: youtube.com/watch?v=yRk6vG…
All three works will be presented in the oral session today at 1pm in the Karl F Dean room!
To wrap up the open-sourcing season, we’re excited to announce that DiffusionRenderer, based on the NVIDIA Cosmos world model, is now open sourced! That means that the code of all three of our #CVPR25 oral papers is now available: - 3DGUT - DiffusionRenderer - Difix3D+ @CVPR
Attending @CVPR and looking for a PhD or postdoc position in the area of 3D reconstruction (Gaussian splatting, NeRFs, scene understanding, etc.)? Find me or drop me an email ;)
It’s day 1 of the main #CVPR2025 conference! 🤗 We kick things off at 8:30am with the opening ceremonies and award announcements. Who will take home the honours? 🏆 Check out the 14 paper award nominees! 1/2
🚀 We just open-sourced Cosmos DiffusionRenderer! This major upgrade brings significantly improved video de-lighting and re-lighting—powered by NVIDIA Cosmos and enhanced data curation. Released under Apache 2.0 and Open Model License. Try it out! 🔗 github.com/nv-tlabs/cosmo…
🚀 Introducing DiffusionRenderer, a neural rendering engine powered by video diffusion models. 🎥 Estimates high-quality geometry and materials from videos, synthesizes photorealistic light transport, enables relighting and material editing with realistic shadows and reflections
Excited to host @ZGojcic talk next week on @nvidia DiffusionRenderer, a new technique for neural rendering. It approximates how light behaves in the real world and can turn daytime scenes into night, sunny scenes into cloudy ones, and so on. It combines inverse and forward…
🚀 Just in time for the #CVPR rush: we’ve released the code for Difix3D+ — a Best Paper Award candidate! 🔧 Try out the code & demos: github.com/nv-tlabs/Difix… 🎤 Oral (June 15): 1:00–1:15 PM CDT, Karl Dean Grand Ballroom 🖼️ Poster: 4:00–6:00 PM CDT, ExHall D (#57) Join us! @CVPR
🚀 Difix3D+ is now open-sourced! Check out the code and try the demo: github.com/nv-tlabs/Difix… We're presenting at #CVPR2025 this Sunday, June 15 — come say hi! 🗣️ Oral: 1:00–1:15 PM CDT, Karl Dean Grand Ballroom 🖼️ Poster: 4:00–6:00 PM CDT, ExHall D (Poster #57)
Keynote by @paschalidoud_1 at USM3D right now! Room 104 D #CVPR2025