Tianchang Shen
@TianchangS
Generating nice meshes in AI pipelines is hard. Our #SIGGRAPHAsia2024 paper proposes a new representation which guarantees manifold connectivity, and even supports polygonal meshes -- a big step for downstream editing and simulation. (1/N) SpaceMesh: research.nvidia.com/labs/toronto-a……
🚀 We just open-sourced Cosmos DiffusionRenderer! This major upgrade brings significantly improved video de-lighting and re-lighting—powered by NVIDIA Cosmos and enhanced data curation. Released under Apache 2.0 and Open Model License. Try it out! 🔗 github.com/nv-tlabs/cosmo…
🚀 Introducing DiffusionRenderer, a neural rendering engine powered by video diffusion models. 🎥 Estimates high-quality geometry and materials from videos, synthesizes photorealistic light transport, enables relighting and material editing with realistic shadows and reflections
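The post describes a two-stage split: an inverse renderer that estimates geometry and materials from video, and a forward renderer that re-synthesizes frames under new lighting. A minimal structural sketch of that interface (all function names and values invented for illustration; see the project page for the real model API):

```python
# Hypothetical two-stage pipeline sketch, NOT the DiffusionRenderer API:
# an inverse step predicts per-pixel intrinsic channels ("G-buffers"),
# a forward step recombines them with target lighting.

def estimate_gbuffer(frame):
    """Inverse step (stubbed): predict intrinsic channels for one frame."""
    return {"normals": frame, "albedo": frame, "roughness": frame}

def relight(gbuffer, env_light):
    """Forward step (stubbed): re-render the albedo under new lighting."""
    return [a * env_light for a in gbuffer["albedo"]]

frame = [1, 2, 3]                 # stand-in "image"
gb = estimate_gbuffer(frame)      # de-lighting: image -> intrinsics
out = relight(gb, env_light=2.0)  # re-lighting: intrinsics -> new image
```

Because lighting only enters in the forward step, editing materials or swapping the environment light never requires re-estimating geometry.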
We are excited to share Cosmos-Drive-Dreams 🚀 A bold new synthetic data generation (SDG) pipeline powered by world foundation models—designed to synthesize rich, challenging driving scenarios at scale. Models, code, dataset, and toolkit are released. Website:…
📢 GEN3C is now open-sourced, with code released under Apache 2.0 and model weights under the NVIDIA Open Model License! 🚀 Along with it, we're releasing a GUI tool that lets you specify your desired video trajectory in 3D — come play with it and generate your own! The…
🚀Excited to introduce GEN3C #CVPR2025, a generative video model with an explicit 3D cache for precise camera control. 🎥It applies to multiple use cases, including single-view and sparse-view NVS🖼️ and challenging settings like monocular dynamic NVS and driving simulation🚗.…
FlexiCubes is now under Apache 2.0! 🎉 We've been excited to see FlexiCubes extracting high-quality meshes across the community in projects like TRELLIS and TripoSF --- now it's available with a more permissive license. Let's keep building. 💙 👉 FlexiCubes is in NVIDIA…
NVIDIA just released Cosmos-Transfer1 on Hugging Face: Conditional World Generation with Adaptive Multimodal Control
Want precise control over the camera trajectory in your generated videos? Need to edit or remove objects in the scene? Check out how we leverage 3D in video models to make it happen! 🎉
Excited to share our #CVPR2025 paper: Difix3D+! Difix3D+ reimagines 3D reconstruction with single-step diffusion, distilling 2D generative priors for realistic novel view synthesis under large viewpoint shifts. 📄Paper: arxiv.org/abs/2503.01774 🌐Website: research.nvidia.com/labs/toronto-a…
NVIDIA just dropped GEN3C: 3D-Informed World-Consistent Video Generation with Precise Camera Control
We found a way to generate manifold, polygonal meshes from feature vectors at points -- even if the vectors are random, you are still guaranteed to get a manifold mesh! How? Halfedge meshes, permutations, spacetimes, and more! Check out the 🧵. This project was a blast!
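The permutation idea can be illustrated with a tiny sketch (not the paper's code): halfedge connectivity is fully described by a `next` permutation, whose cycles are the mesh's polygonal faces, plus a `twin` involution pairing each interior halfedge with its opposite. Any such permutation pair yields valid manifold connectivity, which is why predicting permutations from per-point features can never produce a broken mesh.

```python
# Minimal halfedge-connectivity sketch: a mesh's combinatorics are just
# two permutations over halfedge indices.

def cycles(perm):
    """Decompose a permutation (given as a list of ints) into its cycles."""
    seen, out = set(), []
    for start in range(len(perm)):
        if start in seen:
            continue
        cyc, h = [], start
        while h not in seen:
            seen.add(h)
            cyc.append(h)
            h = perm[h]
        out.append(cyc)
    return out

# 6 halfedges: two triangles glued along one shared edge.
nxt  = [1, 2, 0, 4, 5, 3]   # "next" permutation: walks around each face
twin = [3, 1, 2, 0, 4, 5]   # involution: 0<->3 interior, rest boundary (fixed)

faces = cycles(nxt)          # faces are exactly the cycles of `next`
```

Here `faces` comes out as `[[0, 1, 2], [3, 4, 5]]` — two triangles — and vertex fans can be read off the same way from the composition of the two permutations.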
📢📢 Align Your Steps: Optimizing Sampling Schedules in Diffusion Models research.nvidia.com/labs/toronto-a… TL;DR: We introduce a method for obtaining improved sampling schedules for diffusion models, resulting in better samples at the same computation cost. (1/5)
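For context, a diffusion sampler's "schedule" is just the sequence of noise levels at which it takes denoising steps. A toy sketch of two common hand-picked baselines (not the paper's optimized schedules) shows the knob being tuned:

```python
# Toy sketch: two ways to discretize the same noise range [sigma_min,
# sigma_max] into n sampling steps. "Align Your Steps" replaces such
# hand-picked spacings with an optimized schedule; values here are
# illustrative defaults, not the paper's.
import math

def uniform_schedule(sigma_max, sigma_min, n):
    """Noise levels spaced linearly in sigma, from high to low."""
    return [sigma_max + (sigma_min - sigma_max) * i / (n - 1) for i in range(n)]

def log_schedule(sigma_max, sigma_min, n):
    """Noise levels spaced uniformly in log-sigma (a common default)."""
    hi, lo = math.log(sigma_max), math.log(sigma_min)
    return [math.exp(hi + (lo - hi) * i / (n - 1)) for i in range(n)]
```

Both schedules share endpoints and step count, yet concentrate steps very differently across noise levels — that allocation is exactly what the method optimizes for a given compute budget.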
New #NVIDIA #GTC24 paper 🎊 We generate high-quality 3D assets in only 400ms from text by combining (a) amortized optimization for speed, (b) surface rendering for quality, and (c) 3D data for robustness. ☕ LATTE3D project details: research.nvidia.com/labs/toronto-a… 🧵with many fun gifs