Chenguo Lin
@lin_chenguo
CS/AI Ph.D. student at Peking University.
🚨 We just released 🎞️MoVieS — a feed-forward model that reconstructs 4D scenes in ⚡️1 second. My favorite part: it learns dense (pixel-wise), sharp 3D world movements from novel view rendering + sparse point tracking supervision 🤯🎯 Check it out 👉 chenguolin.github.io/projects/MoVieS
"Generate in Parts" just got better with "Split in Parts"! 🥹😇 @hervenivon added a toggle in the @Scenario_gg 3D viewer that instantly splits generated meshes into an exploded view. See how PartCrafted breaks down your model - no need to download and check in Blender anymore.
Preprint of (not) today: Lin and Lin et al., "MoVieS: Motion-Aware 4D Dynamic View Synthesis in One Second" -- chenguolin.github.io/projects/MoVie… Feed-forward VGGT + Splats/Motion estimation heads, trained with rendering & motion estimation losses. Multitask training improves all tasks.
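For readers wondering how the two supervision signals described above might combine, here is a minimal PyTorch sketch of a rendering + tracking multitask loss. All names, shapes, and loss weights are illustrative assumptions, not the MoVieS codebase:

```python
# Minimal sketch of mixing dense rendering supervision with sparse point
# tracking supervision. Names, shapes, and weights are assumptions.
import torch.nn.functional as F

def multitask_loss(rendered, target_frames, pred_tracks, gt_tracks,
                   track_mask, w_render=1.0, w_track=0.1):
    """rendered/target_frames: (B, T, 3, H, W) novel-view renders vs. ground truth.
    pred_tracks/gt_tracks: (B, N, T, 3) 3D trajectories of N query points.
    track_mask: (B, N, T) float mask marking valid sparse annotations."""
    # Dense photometric loss from novel view rendering.
    loss_render = F.l1_loss(rendered, target_frames)
    # Sparse 3D tracking loss, averaged only over annotated points.
    err = (pred_tracks - gt_tracks).norm(dim=-1)                  # (B, N, T)
    loss_track = (err * track_mask).sum() / track_mask.sum().clamp(min=1)
    return w_render * loss_render + w_track * loss_track
```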
Beyond thrilled to officially roll out "Generate in Parts" on app.scenario.com - a new standard for Image-to-3D workflows! Say goodbye to monolithic AI meshes, and hello to clean, modular 3D assets you can animate, edit, or 3D print. Let's dive in with #PartCrafter👇!
"Generate in Parts" is so insanely good, it even generated the bullets inside the cylinder - automatically, from a single image. Every part comes cleanly separated: barrel, cylinder, trigger, hammer, etc… The future of 3D AI goes live next week on app.scenario.com 🔥
🧩 PartCrafter: Structured 3D Mesh Generation via Compositional Latent Diffusion Transformers 📐 Jupyter Notebook 🥳 Thanks to @kevin_yuchenlin ❤ @lin_chenguo ❤ @paulpanwang ❤ Honglei Yan ❤ Yiqiang Feng ❤ Yadong Mu ❤ Katerina Fragkiadaki ❤ Thanks to @alexandernasa ❤…
Just found a great production-ready implementation of our PartCrafter (wgsxm.github.io/projects/partc…) 🤩 Can't wait to try it!
Say goodbye to monolithic AI meshes! 👋 We're rolling out "Generate in Parts" for 3D models on app.scenario.com, next week. Things will get modular. ⚙ Another wild update! :) 🚀
"🎞️MoVieS: Motion-Aware 4D Dynamic View Synthesis in One Second" TL;DR: Feed-forward framework that jointly reconstructs appearance, geometry and motion for 4D scene perception from monocular videos in one second.
MoVieS: Motion-Aware 4D Dynamic View Synthesis in One Second @lin_chenguo, @kevin_yuchenlin, @paulpanwang, @markyu98, Honglei Yan, @KaterinaFragiad, Yadong Mu tl;dr: dynamic splatter pixels->renderable deforming 3D particles->3D dynamic scenes arxiv.org/abs/2507.10065
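The "dynamic splatter pixels" pipeline in the tl;dr above could be sketched roughly as follows: lightweight heads on shared backbone features predict per-pixel Gaussian parameters plus a per-pixel 3D displacement for each target timestep. The head designs and tensor shapes below are assumptions for illustration, not the paper's actual architecture:

```python
# Rough sketch of "dynamic splatter pixels": heads on shared backbone features
# predict per-pixel Gaussians and per-timestep 3D displacements. Shapes and
# head designs are illustrative assumptions.
import torch.nn as nn

class SplatterPixelHeads(nn.Module):
    def __init__(self, feat_dim=256, n_timesteps=8):
        super().__init__()
        # 3 (xyz) + 3 (scale) + 4 (rotation quaternion) + 3 (color) + 1 (opacity)
        self.gaussian_head = nn.Conv2d(feat_dim, 14, kernel_size=1)
        # One 3D displacement per pixel per target timestep.
        self.motion_head = nn.Conv2d(feat_dim, 3 * n_timesteps, kernel_size=1)
        self.n_timesteps = n_timesteps

    def forward(self, feats):
        # feats: (B, C, H, W) features from a VGGT-style video backbone.
        B, _, H, W = feats.shape
        gaussians = self.gaussian_head(feats)                     # (B, 14, H, W)
        motion = self.motion_head(feats).view(B, self.n_timesteps, 3, H, W)
        return gaussians, motion
```

Displacing each pixel-aligned Gaussian by its predicted offset gives the renderable deforming 3D particles the tl;dr mentions.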
We just found a wonderful Huggingface🤗 space demo for 🧩PartCrafter made by @alexandernasa. Thank you very much🤩 !!! If you'd like to generate 3D objects in parts, try it out! It's super simple & fast & free: huggingface.co/spaces/alexnas…
💥 We just open-sourced the inference & training code of PartCrafter 🧩 — along with pretrained checkpoints! 🚀 Check it out here 👉 wgsxm.github.io/projects/partc… ❗️❗️ #OpenSource #3D #Diffusion #AI #PartCrafter
🚨 New drop: PartCrafter 🧩 A 3D-native DiT that generates 3D objects in parts 👉 wgsxm.github.io/projects/partc… The most exciting part for me: 🌟 We realised most 3D datasets already come with part-structured meshes — but previous works just ignored that. 🤯 Simple & Effective 🚀
ByteDance and Carnegie Mellon researchers just announced PartCrafter This AI turns a single photo into fully editable 3D parts in seconds 10 wild examples:
Still waiting for @DeemosTech Gen-2 to generate 3D objects in parts? 🫠 We just dropped an open-source solution that actually understands 3D structure 🧠💥 ✅ No extra segmentation ✅ Pure 3D-native DiT 🎯 Generates part-aware 3D objects out of the box Check it now 👇
🚀 Introducing PartCrafter – a breakthrough in 3D generation! From just one RGB image, it generates multiple structured, semantically meaningful 3D parts — all in one unified pass, no pre-segmentation needed. Paper: arxiv.org/abs/2506.05573 Project Page: wgsxm.github.io/projects/partc…
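To make the "one unified pass" idea concrete, here is a hedged sketch of joint denoising over K part-level latent token sets with a shared transformer and a learned part-id embedding. The dimensions, the backbone, and the omitted image conditioning are all assumptions; this is not PartCrafter's released implementation:

```python
# Hedged sketch of compositional latent denoising: K part-level token sets are
# denoised jointly by one transformer, with a learned part-id embedding so
# attention can reason across parts. Dimensions are illustrative assumptions.
import torch.nn as nn

class PartTokenDenoiser(nn.Module):
    def __init__(self, d=512, tokens_per_part=64, max_parts=16, depth=4):
        super().__init__()
        # Tells the transformer which part each latent token belongs to.
        self.part_embed = nn.Embedding(max_parts, d)
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)
        self.tokens_per_part = tokens_per_part

    def forward(self, noisy_latents, part_ids):
        # noisy_latents: (B, K * tokens_per_part, d); part_ids: (B, K) long tensor.
        ids = part_ids.repeat_interleave(self.tokens_per_part, dim=1)
        x = noisy_latents + self.part_embed(ids)
        # One forward pass denoises the latents of all K parts jointly.
        return self.blocks(x)
```

Because every part's tokens attend to every other part's in the same pass, the parts stay mutually consistent without any pre-segmentation step.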
Here's my 3DV talk, in chapters: 1) Intro / NeRF boilerplate. 2) Recent reconstruction work. 3) Recent generative work. 4) Radiance fields as a field. 5) Why generative video has bitter-lessoned 3D. 6) Why generative video hasn't bitter-lessoned 3D. 5 & 6 are my favorites.
Just arrived in Singapore for #ICLR2025! 🌴🇸🇬 Excited to present "DiffSplat" (github.com/chenguolin/Dif…) on Friday and connect with everyone at the conference. Let’s talk 3D vision, spatial intelligence, AIGC, and everything in between -- see you there! 👋✨
