Or Litany
@orlitany
Assistant professor @TechnionLive and Sr. Research Scientist @NVIDIA | I think therefore AI
Thrilled to announce that our paper, "MonSTeR: A Unified Model for Motion, Scene, and Text Retrieval," has been accepted at #ICCV2025 🌋🌺🌴🌊🏄‍♂️🍹 MonSTeR creates a unified latent space that understands the relationship between text, human motion, and 3D scenes.
Recording of the workshop is now online, big thanks to all the organizers and everyone who attended both in person and online! neural-bcc.github.io
This Wednesday (1-6PM, Room 106A) @CVPR we have a great lineup of keynote speakers, posters, and spotlights on neural fields and beyond: neural-bcc.github.io Have a question you want answered by a panel of experts in the field? Send it to us via: docs.google.com/forms/d/e/1FAI…
The recordings from our workshop on Open-World 3D Scene Understanding @CVPR are now available! See you @ICCVConference in Honolulu🏄‍♂️ for the next edition! ➡️ youtube.com/playlist?list=… 🌍 opensun3d.github.io
Join us at OpenSUN3D☀️ workshop this afternoon @CVPR 🚀 📍: Room 105 A 🕰️: 2:00-6:00 pm 🌍: opensun3d.github.io @afshin_dn @leto__jean @lealtaixe
Starting now 🤩
Curious about 3D Gaussians, simulation, rendering and the latest from #NVIDIA? Come to the NVIDIA Kaolin Library live-coding session at #CVPR2025, powered by a cloud GPU reserved especially for you. Wed, Jun 11, 8-noon. Bring your laptop! tinyurl.com/nv-kaolin-cvpr…
✨We introduce SuperDec, a new method for creating compact 3D scene representations by decomposing scenes into superquadric primitives! Webpage: super-dec.github.io ArXiv: arxiv.org/abs/2504.00992 @BoyangSun @FrancisEngelman @mapo1 @cvg_ethz @ETH_AI_Center
Neural fields are showing huge promise for sensing far beyond just cameras — if you're working at this intersection, this non-archival CVPR workshop is a great place to share your work and connect with others pushing the boundaries. Submit soon! 👇 neural-bcc.github.io
Only a couple weeks left to submit to Neural Fields Beyond Conventional Cameras at CVPR 2025! neural-bcc.github.io Our *non-archival* workshop welcomes both previously published and novel work. A great opportunity to get project feedback and connect with other researchers!
Impressions of the Nectar Track Session at #3DV2025, which gives strong 3D vision works additional visibility. Pictured: the first four speakers and session chair @orlitany during the Q&A session. @3DVconf
Congrats! Truly an amazing team
Thrilled to see this plot in a recent survey on 'personalized image generation' (arxiv.org/abs/2502.13081) — highlighting the impact of our work! Huge congratulations to my fantastic students, whose creativity and dedication continue to drive exciting advances in the field!
Big thanks to @frankzydou for an insightful presentation at our weekly lab meeting! We had a great discussion on diffusion models and physics-based RL for human motion #LITlab @TechnionLive
Neural fields are such a flexible workhorse, and last year's talks were inspiring, showcasing their potential across sensors far beyond standard cameras! If you missed them: youtube.com/watch?v=aciDS9… Looking forward to seeing what this year's talks bring! 🚀
Following an excellent debut at ECCV 2024, we're excited to announce the 2nd Workshop on Neural Fields Beyond Conventional Cameras at CVPR 2025 in Nashville, Tennessee! Workshop site: neural-bcc.github.io The call for papers is open now through April 11th.
Fascinating talk by our invited speaker @KimJaihoon who presented SyncTweedies and StochSync. Exciting lesson in diffusion model inference time manipulation @TechnionLive #LITlab
🎉 Don’t miss 3DV 2025's Nectar Track! 🚀 This is a fantastic opportunity for 3D vision enthusiasts to: 1️⃣ Showcase strong 3D papers from 2024 conferences (increase visibility!). 2️⃣ Share early-stage ideas and get mentorship in the Exploration Edge Track. 📅 Deadline: Jan 15.
3DV 2025 - Call for Nectar Track Contributions! Instead of the regular tutorials and workshops, we are calling for two unique sessions: 1) Spotlight on strong papers from recent conferences; 2) Exploration edge track. ⏰Deadline: Jan 15, 2025 Details: 3dvconf.github.io/2025/call-for-…
Excited about image-conditioned diffusion models like Zero123 but struggling with reliable outputs? Come visit poster #2404 at #NeurIPS2024 to learn how to elevate ‘Zero to Hero’!🦸♀️🦸♂️ I’ll be there to chat research and answer your questions! ⏳ Happening in two hours
We are presenting "Zero-to-Hero" today at @NeurIPSConf — join us to hear about improving diffusion models by filtering attention maps, with no further training! 📅Today, 11 a.m. 📍East Exhibit Hall A-C #2404 zero2hero-nvs.github.io #NeurIPS2024 #NeurIPS
Your Diffusion Model May Know More than it Shows 🐢🦝 Excited to share "Zero-to-Hero" 🦸 to appear at #NeurIPS2024! We propose a test-time filtering method for attention maps that significantly enhances image-conditioned diffusion models. Kudos to @IdoSobol @Chenfeng_X 🧵 1/n
The portal is open: Our #ELLISPhD Program is now accepting applications! Apply by November 15 to work with leading #AI labs across Europe and choose your advisors among 200 top #machinelearning researchers! #JoinELLISforEurope #PhD #PhDProgram #ML ellis.eu/news/ellis-phd…
Couldn't agree more @geoffreyhinton (passed this to my students) Congrats by the way 😉 May this lead to finally having a Nobel in CS.
Congrats @geoffreyhinton on the Nobel Prize in Physics! You are a role model not only for your work but also for your curiosity and kindness. Full interview we did earlier this year in the 🧵.
Afraid of getting lost inside a NeRF? Just take a photo and we'll help you localize yourself :) If you're attending #ECCV2024, come check out our poster to learn more about our Nerfect Match 🚨Code is also out now
Come check out our NerfectMatch paper tomorrow morning in the poster hall! You can also try it yourselves — we just released the code today! github.com/nv-dvl/nerfmat… @QunjieZhou Maxim Maximov @orlitany
Doctoral Consortium at #ECCV2024 🎓 Incredible to witness mentors and mentees connecting in person! Hope it’s been a valuable and inspiring experience for everyone. Share your thoughts with us in the replies! Huge thanks to my co-organizer @BeyanCigdem ❤️
