Ilya Chugunov @ilyac on bsky
@_ilya_c
Adobe Research Scientist doing computational photography stuff, previously Princeton PhD. I post more on bsky. All opinions are my own.
(1/2) Tired of your panoramas being turned into sad flat rectangles? Our latest #SIGGRAPHAsia2024 work "Neural Light Spheres" turns panoramic captures into dynamic, wide-FOV renders, in real time! Code, data, and info: light.princeton.edu/publication/ne…
Do mac people actually like using command for cut/paste/etc? (I rebind fn to command because of pinkie finger muscle memory)
I’m genuinely bewildered why “I have to present at a top-ranked academic conference” doesn’t just instantly grant you a visa. What is the travel risk posed by a renowned fish expert coming to your country to give a talk on fish.
👏 HUGE congrats to the #ICCP2025 award winners for their outstanding contributions!
#ICCP2025 Paper Awards 🏆
> me trying my best to not wake up my fiancé while leaving for my 7am flight
The humble bluetooth speaker: DEVICE NOT FOUND, PAIRING, DEVICE NOT FOUND
Overheard over coffee: “If corporations are people then mine is a 40 year old man with anxiety and a gambling addiction”
Lived most of my life in fear of Marmite, and have now discovered it’s an amazing ingredient to add some depth to soups and stews!
Recording of the workshop is now online! Big thanks to all the organizers and everyone who attended, both in person and online: neural-bcc.github.io
This Wednesday (1-6PM, Room 106A) @CVPR we have a great lineup of keynote speakers, posters, and spotlights on neural fields and beyond: neural-bcc.github.io Have a question you want answered by a panel of experts in the field? Send it to us via: docs.google.com/forms/d/e/1FAI…
Flight: leaves in 1.5hrs
Brain: we must check that the gate exists before we can purchase a starbuck
My favourite photo from my last vacation was #CapturedWithIndigo, the computational photography app that Adobe Nextcam just released after years of hard work! (and that I helped debug at least a little) App: apps.apple.com/us/app/project… Blog with more info: research.adobe.com/articles/indig…

I'll be presenting our work with @KaiZhang9546 at #cvpr2025. We finetune video models to be 3D-consistent without any 3D supervision! Feel free to stop by our poster or reach out to chat: Sunday, Jun 15, 4-6pm, ExHall D, poster #168 cvpr.thecvf.com/virtual/2025/p…
We've released our paper "Generating 3D-Consistent Videos from Unposed Internet Photos"! Video models like Luma generate pretty videos, but sometimes struggle with 3D consistency. We can do better by scaling them with 3D-aware objectives. 1/N page: genechou.com/kfcw
Adobe Labs releases an experimental digital photography app, Project Indigo (adobe.ly/43VWAV7), to showcase breakthrough innovations, including reflection removal, which is being published at CVPR this week. Check out this blog: adobe.ly/4kFNFOJ
📢📢📢 A reminder to join us tomorrow (June 12) afternoon at #CVPR2025 in room 106 C for the first workshop on Physics-inspired 3D Vision and Imaging!
📢📢📢 Come and submit to our workshop on Physics-inspired 3D Vision and Imaging at CVPR 2025! Speakers 🗣️ include Ioannis Gkioulekas, Laura Waller, Berthy Feng, @SeungHwanBaek8 and @GordonWetzstein! Thanks to co-organizers @imarhombus, @ceciliazhang77, @dorverbin and @jtompkin!
Kind of interesting: Scholar Inbox helped me mathematically confirm a feeling I've had for a couple CVPRs (that I prefer the average poster to the average oral). It suggests that ~15% of all CVPR papers are "relevant to me" compared to only ~5% of orals.
Check out the Toronto Computational Imaging Group at CVPR this week!
- felixtaubner.github.io/cap4d/ (Fri: Oral Sess 2B)
- anaghmalik.com/InvProp/ (Sat: Oral Sess 3A)
- Opportunistic Single-Photon Time of Flight (Sat: Oral Sess 4C)
- snap-research.github.io/ac3d/ (Sun: Poster Sess 5)

📢📢📢 Neural Inverse Rendering from Propagating Light 💡 Our CVPR Oral introduces the first method for multiview neural inverse rendering from videos of propagating light, unlocking applications such as relighting light propagation videos, geometry estimation, or light…
youtu.be/CG0qRAOoVgI One of the best illustrations of how awful car-centric design is when applied to spaces people actually like to be in
We've extended the submission deadline by 2 weeks to April 25th! #CVPR2025 @CVPR Link: neural-bcc.github.io/#call4paper
Only a couple weeks left to submit to Neural Fields Beyond Conventional Cameras at CVPR 2025! neural-bcc.github.io Our *non-archival* workshop welcomes both previously published and novel work. A great opportunity to get project feedback and connect with other researchers!