Yawar Siddiqui
@yawarnihal
Researcher in 3D Computer Vision at Meta. Views expressed are my own.
Tired of 3D asset generation approaches with baked-in lighting effects? Our latest work, Meta 3D AssetGen, can generate high-quality meshes with PBR materials given text prompts in seconds! assetgen.github.io The work was done with the amazing GenAI 3D team @AIatMeta
📣 New research from GenAI at Meta, introducing Meta 3D Gen: A new system for end-to-end generation of 3D assets from text in <1min. Meta 3D Gen is a combined AI system that can generate high-quality 3D assets, with both high-resolution textures and material maps, end-to-end,…
Happy to report that AllTracker was accepted to #ICCV2025! The twists and turns and methodical experimentation here took at least 12 months in all. Super hard project, though in retrospect our solution is pretty simple. code: github.com/aharley/alltra… paper: arxiv.org/abs/2506.07310
AllTracker: Efficient Dense Point Tracking at High Resolution If you're using any point tracker in any project, this is likely a drop-in upgrade—improving speed, accuracy, and density, all at once.
Thrilled and honored to receive the Best Paper Award at #CVPR2025! Huge thanks to my fantastic collaborators @MinghaoChen23, @n_karaev, Andrea Vedaldi, Christian Rupprecht, and @davnov134. Could not have done it without you!
📢📢 We’ll be presenting MeshArt tomorrow morning (Friday 13.06) in the poster session at ExHall D, Poster #42, from 10:30–12:30. Come and chat about articulated 3D mesh generation or any 3D generative stuff! Project page: daoyig.github.io/Mesh_Art/
I’ll be in Nashville for #CVPR this week presenting 2 papers. Keen to connect with people interested in Generative AI and 3D Computer Vision. If you see me at the venue & are interested in connecting for projects, research positions or just a chat, feel free to say hi!
Aria Gen 2 glasses mark a significant leap in wearable technology, offering enhanced features and capabilities that cater to a broader range of applications and researcher needs. We believe researchers from industry and academia can accelerate their work in machine perception,…
This looks amazing! Great work @Peter4AI!!
📢 IntrinsiX: High-Quality PBR Generation using Image Priors 📢 From text input, we generate renderable PBR maps! Next to editable image generation, our predictions can be distilled into room-scale scenes using SDS for large-scale PBR texture generation. We first train…
Tomorrow in our TUM AI - Lecture Series we'll have Andrea Tagliasacchi (@taiyasaki), SFU. He'll talk about "𝐑𝐚𝐝𝐢𝐚𝐧𝐭 𝐅𝐨𝐚𝐦: 𝐑𝐞𝐚𝐥-𝐓𝐢𝐦𝐞 𝐃𝐢𝐟𝐟𝐞𝐫𝐞𝐧𝐭𝐢𝐚𝐛𝐥𝐞 𝐑𝐚𝐲 𝐓𝐫𝐚𝐜𝐢𝐧𝐠". Live Stream: youtube.com/live/1u7ahb9bg… 5pm GMT+1 / 9am PST (Mon Mar 24th)
Check out Chris' work on promptable SceneScript using infilling transformers!
Check out our extension of SceneScript to human-in-the-loop local corrections! Our method leverages infilling techniques from NLP to refine a 3D scene in a "one-click fix" workflow, enabling more accurate modeling of complex layouts. 📰arxiv.org/abs/2503.11806…
Check out our #CVPR2025 papers on articulated mesh generation, 4D shape generation with dictionary neural fields, large-scale 3D scene generation and editing, and 3D editing! Congrats to @DaoyiGao, @xinyi092298, @ABokhovkin, @QTDSMQ, @ErkocZiya for their amazing work!
🥳Excited to share my recent work at Meta, "PartGen: Part-level 3D Generation and Reconstruction with Multi-View Diffusion Models", which aims at compositional/part-level 3D generation and reconstruction from various modalities. Project page: silent-chen.github.io/PartGen/
Excited to announce ScanNet++ v2!🎉 @chandan__yes and @liuyuehcheng have been working tirelessly to bring: 🔹1006 high-fidelity 3D scans 🔹+ DSLR & iPhone captures 🔹+ rich semantics Elevating 3D scene understanding to the next level!🚀 w/ @MattNiessner kaldir.vc.in.tum.de/scannetpp
Everything you love about generative models — now powered by real physics! Announcing the Genesis project — after a 24-month large-scale research collaboration involving over 20 research labs — a generative physics engine able to generate 4D dynamical worlds powered by a physics…
Let's generate functional meshes! Check out @DaoyiGao's work MeshArt on generating articulated meshes!
📢MeshArt: Generating Articulated Meshes with Structure-guided Transformers @DaoyiGao generates articulated meshes with a hierarchical transformer, modeling articulation-aware structures that guide mesh synthesis. w/ @yawarnihal @craigleili Project: daoyig.github.io/Mesh_Art/
Our work Meta 3D AssetGen on 3D shape generation will be presented at #NeurIPS2024, in the Thursday afternoon session (12 Dec, 4:30–7:30 p.m. PST) in East Exhibit Hall A-C, Poster #4609! I won't be attending, but Prof. Andrea Vedaldi will be there. Come say hi!
Check out @manuel_dahnert’s amazing work on scene generation from a single image! #NeurIPS2024
Super happy to present our #NeurIPS paper 𝐂𝐨𝐡𝐞𝐫𝐞𝐧𝐭 𝟑𝐃 𝐒𝐜𝐞𝐧𝐞 𝐃𝐢𝐟𝐟𝐮𝐬𝐢𝐨𝐧 𝐅𝐫𝐨𝐦 𝐚 𝐒𝐢𝐧𝐠𝐥𝐞 𝐑𝐆𝐁 𝐈𝐦𝐚𝐠𝐞 in Vancouver. Come to our poster #2804 on Wednesday 11am - 2pm in East Exhibit Hall A-C and say hi if you want to learn more about 3D Scene…
Check out this amazing work by @ErkocZiya
📢📢 𝐏𝐫𝐄𝐝𝐢𝐭𝐨𝐫𝟑𝐃: 𝐅𝐚𝐬𝐭 𝐚𝐧𝐝 𝐏𝐫𝐞𝐜𝐢𝐬𝐞 𝟑𝐃 𝐒𝐡𝐚𝐩𝐞 𝐄𝐝𝐢𝐭𝐢𝐧𝐠 📢📢 We propose a training-free 3D shape editing approach that rapidly and precisely edits the regions intended by the user and keeps the rest as is. Using a quickly brushed mask and a…