Selena Ling 凌子涵
@seleniumlzh
U of Toronto CS PhD at DGP | Prev. @AdobeResearch @NVIDIA : )
Our #Siggraph25 work found a simple, nearly one-line change that greatly eases neural field optimization for a wide variety of existing representations. “Stochastic Preconditioning for Neural Field Optimization” w/ @merlin_ND @_AlecJacobson @nmwsharp
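For readers curious what such a "nearly one-line change" might look like in practice, here is a minimal PyTorch sketch based only on the tweet's description. It assumes the change amounts to jittering the query coordinates with Gaussian noise during optimization; the names (`field`, `training_step`, `sigma`) are illustrative and not from the paper.

```python
import torch

def training_step(field, coords, targets, optimizer, sigma):
    """One optimization step for a neural field (e.g. an MLP mapping coords -> values).

    Hypothesized "one-line change": perturb the query coordinates with
    Gaussian noise of scale `sigma` before evaluating the field, which
    stochastically smooths the objective being optimized.
    """
    noisy_coords = coords + sigma * torch.randn_like(coords)  # <-- the added line
    pred = field(noisy_coords)
    loss = torch.nn.functional.mse_loss(pred, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a setup like this, `sigma` would typically be annealed toward zero over the course of training so that the final field is fit at the original coordinates; this is one reading of "stochastic preconditioning", not the paper's reference implementation.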

Check out our new paper on robust motion segmentation! Want to run your SfM pipeline on dynamic scenes? Consider feeding it our RoMo masks for a more robust reconstruction!! 🚀
📢📢📢 RoMo: Robust Motion Segmentation Improves Structure from Motion romosfm.github.io arxiv.org/pdf/2411.18650 TL;DR: boost your SfM pipeline on dynamic scenes. We use epipolar cues + SAMv2 features to find robust masks for moving objects in a zero-shot manner. 🧵👇
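As a rough illustration of the epipolar side of that TL;DR (the SAMv2 feature part is omitted), here is a sketch that scores matches by Sampson distance to a RANSAC-fit fundamental matrix; the function name, threshold, and structure are my assumptions, not RoMo's code.

```python
import numpy as np
import cv2

def epipolar_motion_scores(pts1, pts2):
    """Score matched keypoints by how badly they violate the epipolar
    constraint of the dominant (static-scene) fundamental matrix.

    pts1, pts2: (N, 2) arrays of matched pixel coordinates in two frames.
    Returns per-match Sampson distances; large values suggest the point
    lies on an independently moving object.
    """
    # Robustly fit F to all matches; the static background usually dominates.
    F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)

    # Homogeneous coordinates.
    x1 = np.hstack([pts1, np.ones((len(pts1), 1))])
    x2 = np.hstack([pts2, np.ones((len(pts2), 1))])

    Fx1 = x1 @ F.T    # epipolar lines in image 2 for points in image 1
    Ftx2 = x2 @ F     # epipolar lines in image 1 for points in image 2
    num = np.sum(x2 * Fx1, axis=1) ** 2
    den = Fx1[:, 0] ** 2 + Fx1[:, 1] ** 2 + Ftx2[:, 0] ** 2 + Ftx2[:, 1] ** 2
    return num / np.maximum(den, 1e-12)
```

Matches with large scores are inconsistent with the dominant rigid-scene geometry and are candidates for the moving-object masks that get fed to SfM.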
For folks in the @siggraph community: You may or may not be aware of the controversy around the next #SIGGRAPHAsia location, summarized here: cs.toronto.edu/~jacobson/webl… If you're concerned, consider signing this letter: docs.google.com/document/d/1ZS… via this form docs.google.com/forms/d/e/1FAI…
Total Pixel Space, which won the Grand Prix at this year's AIFF, is a wonderful video essay and, by the way, one of the clearest descriptions of universal simulation (as search in the space of all possible universes) youtube.com/watch?v=zpAeyg…
Our work was featured by MIT News today! Had so much fun working on this project with Silvia Sellán, Natalia Pacheco-Tallaj and @JustinMSolomon. Can't wait to present it at SIGGRAPH this summer! news.mit.edu/2025/animation…
📢📢📢 Neural Inverse Rendering from Propagating Light 💡 Our CVPR Oral introduces the first method for multiview neural inverse rendering from videos of propagating light, unlocking applications such as relighting light propagation videos, geometry estimation, or light…