Zhaoxi Chen
@Frozen_Burning
Ph.D. student @MMLabNTU | Neural Rendering & 3D Generation | Ex Intern @RealityLabs | BS @Tsinghua_Uni
3DTopia-XL for PBR asset generation is accepted to #CVPR2025👏 The training code has been open-sourced, including our **high-quality tokenization scheme, PrimX**, which turns textured meshes into N×D tensors! Code: github.com/3DTopia/3DTopi… Project Page: 3dtopia.github.io/3DTopia-XL/
🔥3D-Native GenAI Foundation Model🔥 We present 🦊3DTopia-XL🐰, a 1B-parameter diffusion transformer on spatial primitives for 3D PBR asset generation. - Project: 3dtopia.github.io/3DTopia-XL/ - Code: github.com/3DTopia/3DTopi… - Demo @huggingface: huggingface.co/spaces/FrozenB… Thanks to @_akhaliq!
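For readers curious what an N×D PrimX tensor looks like in practice, here is a minimal, hypothetical sketch: each primitive carries a position, a scale, and a small flattened voxel payload. The primitive count, voxel resolution, and exact channel split below are illustrative assumptions, not the released implementation.

```python
import torch

# Hedged sketch of a PrimX-style tensor layout (not the official implementation).
# Assumption: each of the N primitives stores a 3D position, a scalar scale, and a
# small voxel payload holding SDF (1ch), albedo RGB (3ch) and material (2ch) values.
N = 2048          # number of primitives (hypothetical choice)
a = 8             # payload voxel resolution per axis (hypothetical choice)
C = 1 + 3 + 2     # SDF + RGB + material channels

position = torch.rand(N, 3)            # primitive centers, e.g. sampled on the mesh surface
scale    = torch.rand(N, 1)            # per-primitive extent
payload  = torch.rand(N, C * a ** 3)   # flattened local voxel grid of shape (C, a, a, a)

# The whole asset becomes a single N x D tensor that a diffusion transformer can denoise,
# with one token per primitive.
primx = torch.cat([position, scale, payload], dim=-1)
print(primx.shape)  # torch.Size([2048, 3076]) under the assumptions above
```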
One more thing: I just found out that the #Anycoder feature from @huggingface can easily redesign the #PhysX project page in a different style! #vibecoding
🌟Physical-Grounded 3D Asset Generation #PhysX is the first physics-grounded 3D generative suite, where #PhysXNet contains 6M objects with physical annotations! - Page: physx-3d.github.io - Code: github.com/ziangcao0312/P… - Data @huggingface: huggingface.co/datasets/Caoza…
🔥Physical-Grounded 3D Asset Generation🔥 #PhysX is the first physics-grounded 3D framework with *absolute scale*, *material*, *affordance*, *kinematics*, and *function* - Page: physx-3d.github.io - Code: github.com/ziangcao0312/P… - Data @huggingface: huggingface.co/datasets/Caoza…
PhysX: Physical-Grounded 3D Asset Generation
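To make the annotation types above concrete, here is a hedged sketch of what a single physics-grounded record could look like; the field names, units, and values are illustrative assumptions rather than the actual PhysXNet schema.

```python
from dataclasses import dataclass, field

# Hypothetical annotation record for one object, covering the five properties the
# announcement lists: absolute scale, material, affordance, kinematics, and function.
@dataclass
class PhysicalAnnotation:
    absolute_scale_m: float                            # real-world size, e.g. bounding-box height in meters
    material: str                                      # dominant material category, e.g. "wood"
    affordance: list = field(default_factory=list)     # how the object can be used, e.g. ["grasp", "sit"]
    kinematics: dict = field(default_factory=dict)     # articulated parts with joint types/limits
    function: str = ""                                 # short functional description

chair = PhysicalAnnotation(
    absolute_scale_m=0.95,
    material="wood",
    affordance=["sit", "move"],
    kinematics={"backrest": {"joint": "fixed"}},
    function="A dining chair for a person to sit on.",
)
print(chair)
```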
#Free4D is accepted to @ICCVConference, see y’all in Honolulu🌺🌴 Free4D is a training-free method for 4D generation from a single image, pls check out our code and project page for more!!! ⭐️Code: github.com/TQTQliu/Free4D 📝Page: free4d.github.io
🔥 Free4D is accepted to #ICCV2025! 🔥 Free4D is a tuning-free framework for 4D scene generation from a single image, with high quality, efficiency & generalizability. - Project: free4d.github.io - Paper: arxiv.org/abs/2503.20785 - Code: github.com/TQTQliu/Free4D
🔥Tuning-Free 4D Scene Generation🔥 #Free4D is a tuning-free approach for 4D scene generation from a single image, with high quality, efficiency & generalizability - Project: free4d.github.io - Paper: arxiv.org/abs/2503.20785 - Code (open-sourced): github.com/TQTQliu/Free4D
Thrilled to unveil LINO - Light of Normals! 🌟 We achieve up to 4K resolution & 3D scanner-level accuracy in surface normal estimation. How? ☀️Learnable Light Register Tokens 🔬Preserve high-frequency details via Wavelet Sampling Try it at github.com/houyuanchen111… !
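As a rough illustration of why a wavelet decomposition helps preserve fine detail (a general sketch of the idea, not LINO's actual sampling code), a single 2D wavelet transform already separates a map into a coarse approximation and high-frequency detail bands that a detail-aware loss or sampler can focus on:

```python
import numpy as np
import pywt

# Illustrative only: one channel of a (fake) normal map, decomposed with a Haar wavelet.
normal_map = np.random.rand(256, 256).astype(np.float32)

# Single-level decomposition: cA = coarse approximation, (cH, cV, cD) = horizontal,
# vertical and diagonal high-frequency detail bands.
cA, (cH, cV, cD) = pywt.dwt2(normal_map, "haar")

# Pixels with large detail energy mark edges and fine structures that a detail-aware
# loss or sampler would weight more heavily.
detail_energy = np.abs(cH) + np.abs(cV) + np.abs(cD)
print(cA.shape, detail_energy.shape)  # (128, 128) (128, 128)
```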
Light of Normals: Unified Feature Representation for Universal Photometric Stereo
Combining VGGT with lighting registers gives rise to today’s strongest foundation model for photometric stereo. Thanks @_akhaliq for highlighting our work on LINO: predicting ultra-detailed 4K normal maps from unified features! 👀
LINO = VGGT + Learnable Light Tokens + Detail-Aware Losses 🔥 Huge thanks to @raoanyi @chen_yuan76802 — loved building this together! Project: houyuanchen111.github.io/lino.github.io
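For those wondering what "learnable light tokens" means mechanically, a minimal PyTorch sketch follows; the module name, token count, and dimensions are assumptions for illustration and not LINO's actual architecture. The general idea: a few extra learnable tokens attend jointly with the backbone's patch tokens and soak up per-image illumination cues, so the per-pixel features stay focused on geometry.

```python
import torch
import torch.nn as nn

# Hypothetical "light register token" module in the style of register tokens for ViTs.
class LightRegisterEncoder(nn.Module):
    def __init__(self, dim=768, num_registers=4, depth=2, heads=8):
        super().__init__()
        # A handful of learnable tokens shared across images.
        self.light_registers = nn.Parameter(torch.zeros(1, num_registers, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, patch_tokens):            # (B, N, dim) tokens from a geometry backbone
        B = patch_tokens.size(0)
        reg = self.light_registers.expand(B, -1, -1)
        x = torch.cat([reg, patch_tokens], dim=1)
        x = self.blocks(x)
        # Drop the registers before decoding normals; they only absorb lighting cues.
        return x[:, reg.size(1):]

tokens = torch.randn(2, 196, 768)               # e.g. 14x14 patches from two images
features = LightRegisterEncoder()(tokens)
print(features.shape)                           # torch.Size([2, 196, 768])
```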
Universal Photometric Stereo (PS) aims for robust normal maps under any light. 🚨 But big hurdles remain! 1️⃣ Deep coupling: Ambiguous intensity - is it the light changing or the surface turning? 🤔 2️⃣ Detail loss: Complex surfaces (shadows, inter-reflections, fine details) stump…
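The "deep coupling" point can be made precise with the Lambertian shading model I = ρ·max(0, n·l). The tiny numerical example below (purely illustrative) shows two very different normal/light configurations producing exactly the same pixel intensity, which is why a single observation cannot tell whether the light changed or the surface turned.

```python
import numpy as np

albedo = 0.8
# Case 1: surface facing the camera, light tilted.
n1, l1 = np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.6, 0.8])
# Case 2: surface tilted, light facing the camera.
n2, l2 = np.array([0.0, 0.6, 0.8]), np.array([0.0, 0.0, 1.0])

I1 = albedo * max(0.0, float(n1 @ l1))
I2 = albedo * max(0.0, float(n2 @ l2))
print(I1, I2)  # both 0.64: the observation alone cannot distinguish the two cases
```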
Light of Normals: Unified Feature Representation for Universal Photometric Stereo Hong Li, Houyuan Chen, @ychngji6, @Frozen_Burning, Bohan Li, @xshocng1, Xianda Guo, Xuhui Liu, Yikai Wang, Baochang Zhang, Satoshi Ikehata, Boxin Shi, @raoanyi, @HaoZhao_AIRSUN tl;dr: learnable…
Photometric stereo meets VGGT: LINO leverages geometry backbones + light register tokens to deliver universal, 4K-detailed normal maps under arbitrary lighting. 👀 Thanks for the post @zhenjun_zhao
Please drop by and check out our 🌟highlight🌟 #3DTopia-XL this afternoon @CVPR! ExHall D, Poster #40, Sun 15 Jun, 4:00–6:00 PM CDT 🔥 Code: github.com/3DTopia/3DTopi… 👀 Project Page: 3dtopia.github.io/3DTopia-XL/
Catch our poster 𝐆𝐚𝐮𝐬𝐬𝐢𝐚𝐧𝐂𝐢𝐭𝐲 at #CVPR2025 today, 4:00–6:00 PM (GMT-5) in ExHall D, Poster #64! Zhaoxi @Frozen_Burning will be presenting — come say hi!
🎉Our work GaussianCity is accepted to #CVPR2025! ⚡Unbounded 3D City Generation—60x Faster! 🚀Push Gaussian Splatting to infinite-scale cities! 📄Paper: arxiv.org/abs/2406.06526 🌐Project Page: haozhexie.com/project/gaussi… 👨💻Code (open-sourced): github.com/hzxie/Gaussian…
About to start in Room 208 A: the CVPR 2nd Workshop on Efficient and On-Device Generation (EDGE) @CVPR
Angjoo @akanazawa is now presenting on Streaming Perception: Towards Learning Structured Models of the World, in Room 204 at our tutorial, From Video Generation to World Model @CVPR world-model-tutorial.github.io
🎬#CVPR2025 𝐓𝐮𝐭𝐨𝐫𝐢𝐚𝐥 🗺️𝑭𝒓𝒐𝒎 𝑽𝒊𝒅𝒆𝒐 𝑮𝒆𝒏𝒆𝒓𝒂𝒕𝒊𝒐𝒏 𝒕𝒐 𝑾𝒐𝒓𝒍𝒅 𝑴𝒐𝒅𝒆𝒍 @CVPR 🔗world-model-tutorial.github.io 📅June 11 🚀Hosted by @MMLabNTU x @Kling_ai 🧠Incredible lineup of speakers: @jparkerholder @Koven_Yu @baaadas @wanfufeng @akanazawa @sherryyangML