Daniel Garibi
@DanielGaribi
Thrilled to share that our paper TokenVerse received a Best Paper Award at #SIGGRAPH2025! 🎉
Excited to share that "TokenVerse: Versatile Multi-concept Personalization in Token Modulation Space" got accepted to SIGGRAPH 2025! It tackles disentangling complex visual concepts from as little as a single image and re-composing concepts across multiple images into a coherent…
1/ Can we teach a motion model to "dance like a chicken"? Or better: can LoRA help motion diffusion models learn expressive, editable styles without forgetting how to move? Led by @HSawdayee, @chuan_guo92603, we explore this in our latest work. 🎥 haimsaw.github.io/LoRA-MDM/ 🧵👇
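For readers who haven't met it: LoRA freezes the pretrained weights and trains only a small low-rank update on top of them, which is exactly why it can add a style without overwriting the base motion prior. A minimal generic sketch of a LoRA-wrapped linear layer (standard PyTorch, not the LoRA-MDM code):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a trainable low-rank update:
    y = W x + (alpha / r) * B A x. Only A and B are trained, so the
    base model keeps its prior (it "doesn't forget how to move")."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False           # base weights stay frozen
        self.A = nn.Linear(base.in_features, r, bias=False)
        self.B = nn.Linear(r, base.out_features, bias=False)
        nn.init.normal_(self.A.weight, std=0.01)
        nn.init.zeros_(self.B.weight)         # update starts at exactly zero
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.B(self.A(x))
```

Since the style lives entirely in the small (A, B) pair, it can be swapped in or out of the base model, which is presumably what makes the learned styles easy to toggle and edit.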
Really impressive results for human-object interaction. They use a two-phase process where they optimize the diffusion noise, instead of the motion itself, to reach sub-centimeter precision while staying on the manifold 🧠 HOIDiNi - hoidini.github.io
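A hedged sketch of that noise-optimization pattern, with `sample_motion` (a frozen, differentiable diffusion sampler) and `contact_loss` (a hand-object contact objective) as hypothetical stand-ins; this shows the general idea, not the HOIDiNi code:

```python
import torch

def optimize_noise(sample_motion, contact_loss, noise_shape,
                   steps: int = 200, lr: float = 0.05) -> torch.Tensor:
    """Optimize the diffusion noise, not the motion: every candidate is
    decoded through the frozen sampler, so the result stays on the
    model's learned motion manifold."""
    z = torch.randn(noise_shape, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        motion = sample_motion(z)      # frozen, differentiable sampler
        loss = contact_loss(motion)    # e.g. penalize hand-object distance
        loss.backward()                # gradients flow through the sampler to z
        opt.step()
    with torch.no_grad():
        return sample_motion(z)
```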
Excited to share that TokenVerse won Best Paper Award at SIGGRAPH 25 🥳
Excited to share that "TokenVerse: Versatile Multi-concept Personalization in Token Modulation Space" got accepted to SIGGRAPH 2025! It tackles disentangling complex visual concepts from as little as a single image and re-composing concepts across multiple images into a coherent…
Best Paper Award @ SIGGRAPH'25 🥳
So much is already possible in image generation that it's hard to get excited. TokenVerse has been a refreshing exception! Disentangling complex visual concepts (pose, lighting, materials, etc.) from a single image — and mixing them across others with plug-and-play ease!
Excited to share that TokenVerse won Best Paper Award at SIGGRAPH 2025! 🎉 TokenVerse enables personalization of complex visual concepts, from objects and materials to poses and lighting; each can be extracted from a single image and recomposed into a coherent result. 👇
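My loose reading of the mechanism, as a toy sketch rather than the authors' implementation: in a modulation-based DiT, text embeddings are mapped to per-block (scale, shift) vectors, and personalization amounts to learning a small offset in that modulation space for the token that names the concept. Everything below (`TokenModulation`, the shapes) is invented for illustration:

```python
import torch
import torch.nn as nn

class TokenModulation(nn.Module):
    """Toy stand-in for a DiT modulation path: a text-token embedding is
    mapped to a (scale, shift) pair that modulates the image features."""
    def __init__(self, dim: int):
        super().__init__()
        self.to_mod = nn.Linear(dim, 2 * dim)

    def forward(self, img_feats, token_emb, offset=None):
        mod = self.to_mod(token_emb)          # (2*dim,) modulation vector
        if offset is not None:
            mod = mod + offset                # learned per-concept direction
        scale, shift = mod.chunk(2, dim=-1)
        return img_feats * (1 + scale) + shift

# Personalizing a concept = optimizing `offset` for the token that names it
# (e.g. a reconstruction loss on the concept image, base weights frozen);
# composing concepts = applying each token's learned offset in the same pass.
```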
Excited to share that our new work, Be Decisive, has been accepted to SIGGRAPH! We improve multi-subject generation by extracting a layout directly from noise, resulting in more diverse and accurate compositions. Website: omer11a.github.io/be-decisive/ Paper: arxiv.org/abs/2505.21488
Excited to share that our latest paper: "InstanceGen: Image Generation with Instance Level Instructions" was recently accepted to #SIGGRAPH2025! InstanceGen tackles the problem of generating images for complex multi-object prompts tau-vailab.github.io/InstanceGen/ 👇🧵[1/7]
Excited to share that "IP-Composer: Semantic Composition of Visual Concepts" got accepted to #SIGGRAPH2025!🥳 We show how to combine visual concepts from multiple input images by projecting them into CLIP subspaces - no training, just neat embedding math✨ Really enjoyed working…
🔔 Just landed: IP-Composer 🎨 semantically mix & match visual concepts from images.
❌ Text prompts can't always capture visual nuances.
❌ Visual-input-based methods often need training / don't allow fine-grained control over *which* concepts to extract from our input images.
So 👇
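A rough sketch of what "projecting into CLIP subspaces" can look like, under my assumptions (not the paper's code): span a concept subspace via SVD over CLIP embeddings of texts describing variations of that concept, then swap the component of a reference image's embedding inside that subspace for another image's component. `ref_emb` / `concept_emb` are assumed CLIP image embeddings, `concept_text_embs` a stack of CLIP text embeddings:

```python
import numpy as np

def concept_projector(concept_text_embs: np.ndarray, rank: int = 30) -> np.ndarray:
    """SVD over text embeddings of concept variations -> projection matrix
    onto the concept's subspace (rank = assumed subspace dimension)."""
    _, _, vt = np.linalg.svd(concept_text_embs, full_matrices=False)
    basis = vt[:rank]             # (rank, d): top right-singular directions
    return basis.T @ basis        # (d, d) orthogonal projector P

def compose(ref_emb: np.ndarray, concept_emb: np.ndarray, P: np.ndarray) -> np.ndarray:
    """Keep everything of the reference outside the concept subspace, take
    the concept image's component inside it; the result conditions an
    image-embedding-driven generator (e.g. an IP-Adapter-style model)."""
    return ref_emb - ref_emb @ P + concept_emb @ P
```

No training anywhere: the only learned objects are the pretrained CLIP encoders, which matches the "just neat embedding math" framing.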
🔔 Excited to announce that #AnyTop has been accepted to #SIGGRAPH2025! 🥳
✅ A diffusion model that generates motion for arbitrary skeletons
✅ Using only a skeletal structure as input
✅ Learns semantic correspondences across diverse skeletons
🌐 Project: anytop2025.github.io/Anytop-page
RefVNLI: Towards Scalable Evaluation of Subject-driven Text-to-image Generation
Ever stared at a set of shapes and thought: 'These could be something… but what?' Designed for visual ideation, PiT takes a set of concepts and interprets them as parts within a target domain, assembling them together while also sampling missing parts. eladrich.github.io/PiT/
Vectorization into a neat SVG! 🎨✨ Instead of generating a messy SVG (left), we produce a structured, compact representation (right) - enhancing usability for editing and modification. Accepted to #CVPR2025!