Yusuf Dalva
@yusuf_dalva
PhD Student at @virginia_tech | Intern @Snap | Ex-Intern @AdobeResearch | Ex @CS_Bilkent
👨‍🎨 LoRAShop is now out! We introduce LoRAShop, which enables both generation and editing with multiple personalized concepts (no training), pushing the boundaries of image editing! Kudos to my long-time collaborator @d_yesiltepe and my advisor @PINguAR!
ICCV decisions are out — if your paper didn’t make it, don’t worry! Submit your work to the P13N Workshop instead! Let’s push the frontier of personalized generative AI together!💡 #ICCV2025 #P13NWorkshop #Personalization @ICCVConference More info: p13n-workshop.github.io
FluxSpace is live today at @CVPR! If you want to hear more about how you can edit images using rectified flow transformers with linear attention directions, visit us at poster 232, ExHall D (10.30 - 12.30)!!
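(Aside on the mechanics: "linear attention directions" roughly means the edit is applied by shifting the attention outputs of the frozen transformer along a precomputed semantic direction. The Python sketch below is illustrative only, not the FluxSpace code; names like apply_linear_edit and edit_direction are assumptions.)

import torch

def apply_linear_edit(attn_out: torch.Tensor,
                      edit_direction: torch.Tensor,
                      scale: float) -> torch.Tensor:
    # attn_out:       [batch, tokens, dim] output of an attention block
    # edit_direction: [dim] direction tied to a semantic concept, e.g.
    #                 obtained by contrasting prompts with and without it
    # scale:          edit strength; scale = 0 reproduces the original image
    return attn_out + scale * edit_direction

# Hypothetical use inside a frozen rectified-flow transformer block:
# attn_out = block.attention(hidden_states)
# attn_out = apply_linear_edit(attn_out, smile_direction, scale=3.0)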
🌟 Research Update: We introduce FluxSpace, a new method for manipulating image semantics in rectified flow transformers in a disentangled way! Shoutout to my collaborator @kavanav2912 and my advisor @PINguAR
Dinner at #CVPR turned into a GenAI think tank. 🍜 Veo3, personalization rants, and nonstop energy from amazing folks across Fal and Google. Couldn’t have asked for a better crew. @natanielruizg @gorkemyurt @d_yesiltepe @yusuf_dalva
FluxSpace: Disentangled Semantic Editing in Rectified Flow Models Main proc & poster @yusuf_dalva @kavanav2912 @PINguAR (all VT) arxiv.org/abs/2412.09611 TL;DR: Introduces an approach that enables semantic editing while preserving input characteristics across different domains.
📢 Deadline Extended! The deadline is now July 7 for long papers and Aug 18 for short papers! Come and join us in Hawaii!
🚨 Final days to submit! Have a paper that redefines personalization? We're looking for long papers that go beyond the state of the art. 🗓️ Deadline: June 27 (AOE) – don’t miss it! p13n-workshop.github.io
🚨FLUX.1 Kontext [dev], the open image editing model, is now available at fal with training capabilities! ✨4x faster inference (2s vs 7s) 💰 Ultra-affordable at $0.025/megapixel 🔧 Full LoRA training support 🖌️Game-changing image editing capabilities fal.ai/models/fal-ai/…
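(For a rough idea of how calling a hosted model like this looks, here is a hedged sketch using fal's Python client, fal_client; the endpoint id and argument names below are placeholders since the link above is truncated, so check the model page for the exact id and schema.)

import fal_client  # pip install fal-client; expects a FAL_KEY in the environment

# Endpoint id and arguments are illustrative placeholders, not the real schema.
result = fal_client.subscribe(
    "fal-ai/<kontext-dev-endpoint>",      # placeholder endpoint id
    arguments={
        "prompt": "make the jacket red",  # editing instruction
        "image_url": "https://example.com/input.jpg",
    },
)
print(result)  # typically a dict containing URLs of the edited image(s)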
The Turkish CV community get-together was lovely, like every year 🥰 Thanks and hugs to all who attended. Cheers to sparkling new ideas, collaborations, and friendships. #CVPR2025
Just arrived at #CVPR2025! It’s great to be back for another conference. If you are around and would like to have a chat, hit me up, DMs are open!

Check out the work of @d_yesiltepe on training-free novel view synthesis!
✨ We introduce Dynamic View Synthesis as an Inverse Problem, a training-free framework for generating novel views from a single monocular video by operating entirely in the diffusion noise initialization phase, with no weight updates and no architecture changes.
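(To make the "noise initialization only" point concrete, here is a generic sketch of that pattern, not the paper's algorithm; frozen_sampler and init_noise_for_view are placeholder names. The pretrained model and sampler stay untouched, and the only thing controlled is the starting latent noise.)

def synthesize_novel_view(frozen_sampler, init_noise_for_view, target_view):
    # init_noise_for_view encodes the desired camera/view constraint into the
    # starting latent z_T (placeholder for the method's actual procedure)
    z_T = init_noise_for_view(target_view)  # no weight updates, no new layers
    return frozen_sampler(z_T)              # standard, unmodified denoising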