Sotiris Nousias
@Sotiris_No
Postdoc at @UofT
I'm thrilled to share that I will be joining Purdue University as an Assistant Professor in the Department of Computer Science in Fall 2025!!! Many thanks to all my mentors, colleagues, and friends for their support. I will be hiring students this upcoming cycle. Please get in…

Happy to have led local arrangements for #ICCP2025! Massive thanks to our volunteers for making it possible: @Dongyu_Du, @andrewyguo, Len Luong, Ali SaraerToosi, Howard Xiao, @AndrewEJXie, Sophia Yang, and @KellyKZhu! And, @Sotiris_No and Jazmin Diaz for being the best co-chairs!
HUGE shoutout to all our #ICCP2025 organizers!
@ICCP_conference well-deserved drinks after a great day of talks 📸🤗🍻
📸 Join us at ICCV 2025 for our workshop on Computer Vision with Single-Photon Cameras (CVSPC)! 🗓️ Sunday, Oct 19, AM – PM at the Hawai'i Convention Center. 🔗 Website: cvspc.cs.pdx.edu. 🗣️ Invited Speakers: Mohit Gupta, @mpotoole, @Dongyu_Du, @DaveLindell,…

📣📣📣 Neural Inverse Rendering from Propagating Light 💡 just won Best Student Paper award at #CVPR!!!
📢📢📢 Neural Inverse Rendering from Propagating Light 💡 Our CVPR Oral introduces the first method for multiview neural inverse rendering from videos of propagating light, unlocking applications such as relighting light propagation videos, geometry estimation, or light…
📢Excited to be at #ICLR2025 for our paper: VD3D: Taming Large Video Diffusion Transformers for 3D Camera Control Poster: Thu 3-5:30 PM (#134) Website: snap-research.github.io/vd3d/ Code: github.com/snap-research/… Also check out our #CVPR2025 follow-up AC3D: snap-research.github.io/ac3d/
MambaTM is our latest image restoration method for atmospheric turbulence. To appear @CVPR 2025 (Highlight paper, 13.5%) arxiv.org/abs/2504.02697 - Learned phase distortion in the loop - Mamba state-space model for speed. All credits to Xingguang Zhang @PurdueECE
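The "state-space model for speed" idea can be sketched in its simplest textbook form: a linear recurrence over the sequence. This is only the classic SSM recurrence, not the selective, hardware-aware scan that Mamba itself uses; all names and toy values below are illustrative assumptions.

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Minimal linear state-space recurrence:
        h_t = A @ h_{t-1} + B * x_t
        y_t = C @ h_t
    x: (T,) input sequence; A: (N, N) state matrix; B, C: (N,) vectors.
    A toy sketch -- Mamba makes A, B, C input-dependent and scans in parallel."""
    h = np.zeros(A.shape[0])
    ys = []
    for x_t in x:
        h = A @ h + B * x_t   # update hidden state
        ys.append(C @ h)      # read out
    return np.array(ys)

# Toy usage: a decaying 2-state system responding to an impulse input.
A = np.array([[0.9, 0.0], [0.0, 0.5]])
B = np.array([1.0, 1.0])
C = np.array([1.0, 1.0])
y = ssm_scan(np.array([1.0, 0.0, 0.0]), A, B, C)  # impulse response
```

Because the recurrence is linear and time-invariant here, each output is a fixed-cost update per step, which is where the speed advantage over attention comes from.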
Happy to share that 🧢CAP4D🧢 has been accepted to CVPR 2025 (Oral)! Looking forward to seeing you all in Nashville 🎉
Introducing 🧢CAP4D🧢 CAP4D turns any number of reference images (single, few, and many) into controllable real-time 4D avatars. 🧵⬇️ Website: felixtaubner.github.io/cap4d/ Paper: arxiv.org/abs/2412.12093
I will be in Tokyo this week for #SiggraphAsia2024 to present our work "Coherent Optical Modems for Full-Wavefield Lidar". We repurposed an off-the-shelf optical modem, typically used for telecommunications, to introduce Full-Wavefield Lidar: a new imaging modality for…
I am on the job market, seeking tenure-track or industry research positions starting in 2025. My research combines human-computer interaction and robotics—please visit karthikmahadevan.ca for updated publications and CV. Feel free to reach out if interested. RT appreciated!
At #ECCV2024, we presented Minimalist Vision with Freeform Pixels, a new vision paradigm that uses a small number of freeform pixels to solve lightweight vision tasks. We are honored to have received the Best Paper Award! Check out the project here: cave.cs.columbia.edu/projects/categ…
We will be presenting Flying with Photons next week at ECCV!!! Oral: 9:50 AM Tuesday, Session 1C Poster: 10:30 AM Tuesday, #246
📢📢📢 A pulse of light takes ~3ns to pass through a Coke bottle—100 million times less than it takes you to blink. Our work lets you fly around this 3D scene at the speed of light, revealing propagating wavefronts of light that are invisible to the naked eye—from any viewpoint!…
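The blink comparison above can be checked with one line of arithmetic (assuming a typical ~0.3 s blink duration, a value not stated in the post):

```python
# Back-of-the-envelope check of the timing claim.
pulse_time = 3e-9   # ~3 ns for the light pulse to traverse the bottle
blink_time = 0.3    # assumed typical human blink, in seconds
ratio = blink_time / pulse_time
print(f"A blink is ~{ratio:.0e}x longer than the pulse transit")  # ~1e+08
```

0.3 s / 3 ns is indeed a factor of 100 million, matching the claim.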
Hi all, I’m on the job market for industry research scientist or TT faculty positions starting Summer 2025. Interested in roles related to HCI, XR, Eye Tracking, Adaptive Interfaces, and Human-AI Interaction. Please reach out if hiring or aware of any positions! RT appreciated!
Tutorial on Diffusion Models for Imaging and Vision 2nd edition is up on arXiv arxiv.org/abs/2403.18103 (51 pages --> 89 pages) - Expanded VAE - More detailed DDPM - New section: physics of diffusion - and more Feedback is welcome!
Excited to share our new work 😊 VD3D: Taming Large Video Diffusion Transformers for 3D Camera Control My main research interest is 4D generation, and a promising way towards realistic 4D scenes is through 3D extended video models! snap-research.github.io/vd3d Thanks @_akhaliq!
Snap presents VD3D Taming Large Video Diffusion Transformers for 3D Camera Control Modern text-to-video synthesis models demonstrate coherent, photorealistic generation of complex videos from a text description. However, most existing models lack fine-grained control over…
I'm at #CVPR2024 all week—reach out if you want to chat or are interested in a PhD or a postdoc with the Toronto Computational Imaging Group! tcig.ca
Super excited to be in Seattle this week for #CVPR2024 to present our recent work TurboSL: Dense, Accurate and Fast 3D by Neural Inverse Structured Light. Joint work with Maxx Wu, @ChenWenzheng, @DaveLindell,…
Fun fact: the 'PR' in @CVPR is Pattern Recognition! But what if your environment, or parts of it, don't have good patterns? No problem: Just paint them remotely with light! (a laser), then track them with an infrared camera. Check it out at: marksheinin.com/thermal 1/2
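The paint-with-light idea can be sketched at its most basic as finding the bright laser-painted spot in each infrared frame. This toy brightest-pixel detector is an illustrative stand-in, not the paper's actual tracking method:

```python
import numpy as np

def track_spot(frame):
    """Return (row, col) of the brightest pixel in a grayscale IR frame --
    a toy stand-in for tracking a laser-painted marker."""
    return np.unravel_index(np.argmax(frame), frame.shape)

# Toy frame: dark background with one bright "laser dot" at (3, 5).
frame = np.zeros((8, 8))
frame[3, 5] = 255.0
print(track_spot(frame))  # -> (3, 5)
```

A real system would need sub-pixel localization and robustness to other IR sources, but the core idea is the same: the projected pattern gives the camera a feature to lock onto where the scene itself has none.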
This is an opportunity to do a PhD with me at Imperial College, fully funded and starting in October this year. Apply via the link below by 12th June (next week). On-sensor vision will be very important to the future of low-power vision in robotics + AR/VR. jobs.ac.uk/job/DHT079/res…