Sigal Raab
@sigal_raab
🔔Excited to announce that #AnyTop has been accepted to #SIGGRAPH2025!🥳
✅ A diffusion model that generates motion for arbitrary skeletons
✅ Using only a skeletal structure as input
✅ Learns semantic correspondences across diverse skeletons
🌐 Project: anytop2025.github.io/Anytop-page
{1/8} 🧵 When you click a link, have you ever wondered: “Which webpage is actually important?” Google answered that with PageRank—treating the web as a Markov chain. Now imagine doing the same… but for transformer attention.👇 🔗 yoterel.github.io/attention_chai…
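A back-of-the-envelope sketch of that analogy (my own toy code, not the paper's): a row-stochastic attention matrix is a valid Markov transition matrix, so the standard PageRank power iteration can be run over it directly. The name `attention_pagerank` and the damping setup are assumptions for illustration.

```python
import numpy as np

def attention_pagerank(attn, damping=0.85, iters=100, tol=1e-8):
    """Power iteration over a row-stochastic attention matrix,
    treating it as the transition matrix of a Markov chain."""
    n = attn.shape[0]
    P = attn / attn.sum(axis=1, keepdims=True)  # renormalize defensively
    r = np.full(n, 1.0 / n)                     # uniform initial distribution
    teleport = np.full(n, 1.0 / n)              # PageRank damping/teleport term
    for _ in range(iters):
        r_next = damping * (r @ P) + (1 - damping) * teleport
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next
    return r  # stationary distribution: "importance" of each token

# Toy usage: softmax attention over 5 tokens
logits = np.random.randn(5, 5)
attn = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(attention_pagerank(attn))
```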
1/ Can we teach a motion model to "dance like a chicken"? Or better: Can LoRA help motion diffusion models learn expressive, editable styles without forgetting how to move? Led by @HSawdayee, @chuan_guo92603, we explore this in our latest work. 🎥 haimsaw.github.io/LoRA-MDM/ 🧵👇
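For context, this is the generic LoRA recipe the question refers to, sketched on a single linear layer (the class `LoRALinear` is a hypothetical stand-in, not the LoRA-MDM code): the pretrained weights stay frozen, and only a low-rank update is trained, which is why the base model doesn't forget how to move.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a trainable low-rank update:
    effective weight is W + (alpha/r) * B @ A."""
    def __init__(self, base: nn.Linear, r=8, alpha=16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False            # keep base motion knowledge intact
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

# Toy usage: wrap one layer of a (hypothetical) motion-diffusion block
layer = LoRALinear(nn.Linear(256, 256))
out = layer(torch.randn(2, 256))
```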
Really impressive results for human-object interaction. They use a two-phase process that optimizes the diffusion noise, rather than the motion itself, reaching sub-centimeter precision while staying on-manifold 🧠 HOIDiNi - hoidini.github.io
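A minimal sketch of that idea under my own assumptions (function names and the toy objective are placeholders, not the HOIDiNi API): gradients flow through a frozen sampler back to the noise, so whatever the optimization finds is still something the model can actually generate.

```python
import torch

def optimize_noise(denoise, objective, shape, steps=200, lr=0.01):
    """Optimize the diffusion noise z rather than the motion x itself:
    the frozen sampler maps z -> x, keeping x on the learned manifold."""
    z = torch.randn(shape, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        x = denoise(z)        # differentiable pass through the frozen sampler
        loss = objective(x)   # e.g. hand-object contact / penetration error
        opt.zero_grad()
        loss.backward()
        opt.step()
    return denoise(z.detach())

# Toy stand-ins so the sketch runs end-to-end
denoise = lambda z: torch.tanh(z)               # placeholder for a real sampler
target = torch.zeros(1, 16, 3)                  # e.g. desired contact locations
objective = lambda x: ((x - target) ** 2).mean()
motion = optimize_noise(denoise, objective, (1, 16, 3))
```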
🏃Today at CVPR!!! 📅🕐 13:00–17:00, 📍Room 110B 💃
🎉 The #CVPR HuMoGen Workshop is happening TODAY afternoon! We’ll be featuring an exciting lineup of invited talks and poster presentations covering cutting-edge work in generative modeling of 3D and 2D human motion. If you’re working in this space, you won’t want to miss it!
Excited to share that our new work, Be Decisive, has been accepted to SIGGRAPH! We improve multi-subject generation by extracting a layout directly from noise, resulting in more diverse and accurate compositions. Website: omer11a.github.io/be-decisive/ Paper: arxiv.org/abs/2505.21488
Excited to share that "IP-Composer: Semantic Composition of Visual Concepts" got accepted to #SIGGRAPH2025!🥳 We show how to combine visual concepts from multiple input images by projecting them into CLIP subspaces - no training, just neat embedding math✨ Really enjoyed working…
🔔just landed: IP Composer🎨
semantically mix & match visual concepts from images
❌ text prompts can't always capture visual nuances
❌ visual-input-based methods often need training / don't allow fine-grained control over *which* concepts to extract from our input images
So👇
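Roughly, the "embedding math" could look like this sketch (the variable names and the SVD-based subspace construction are my assumptions, not the IP-Composer code): build a concept subspace from CLIP embeddings of images that vary in that concept, then swap one image's component inside that subspace for another image's.

```python
import numpy as np

def concept_basis(concept_embeds, k=8):
    """Top-k principal directions of CLIP embeddings of images that
    vary mainly in one concept: an orthonormal basis for its subspace."""
    X = concept_embeds - concept_embeds.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k]                      # (k, d) orthonormal rows

def compose(base, donor, B):
    """Replace base's component in the concept subspace with the donor's."""
    P = B.T @ B                        # projector onto the concept subspace
    return base - base @ P + donor @ P

# Toy usage with random stand-ins for CLIP embeddings
d = 512
concepts = np.random.randn(100, d)     # embeddings of concept-variation images
B = concept_basis(concepts)
base, donor = np.random.randn(d), np.random.randn(d)
mixed = compose(base, donor, B)        # feed into an IP-Adapter-style decoder
```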
Excited to share that "TokenVerse: Versatile Multi-concept Personalization in Token Modulation Space" got accepted to SIGGRAPH 2025! It tackles disentangling complex visual concepts from as little as a single image and re-composing concepts across multiple images into a coherent…
Ever stared at a set of shapes and thought: 'These could be something… but what?' Designed for visual ideation, PiT takes a set of concepts and interprets them as parts within a target domain, assembling them together while also sampling missing parts. eladrich.github.io/PiT/
📢 Deadline extended! 📢 You now have an extra week to submit! New deadline: March 19. Want to submit? Find all the details here: humogen.github.io 🚀
Vectorization into a neat SVG!🎨✨ Instead of generating a messy SVG (left), we produce a structured, compact representation (right) - enhancing usability for editing and modification. Accepted to #CVPR2025!
“Tight Inversion” uses an IP-Adapter during DDIM inversion to preserve the original image better when editing. arxiv.org/abs/2502.20376
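A minimal DDIM-inversion loop showing where such an image-conditioning hook would sit (all interfaces here are assumed for illustration, not the paper's code): the clean latent is pushed back toward noise with deterministic DDIM updates, and every noise prediction is conditioned on the source image itself.

```python
import torch

@torch.no_grad()
def ddim_invert(x0, eps_model, alphas_cumprod, image_cond):
    """Run the deterministic DDIM update in reverse (clean -> noise),
    conditioning each noise prediction on the source image."""
    x = x0
    for t in range(len(alphas_cumprod) - 1):
        a_t, a_next = alphas_cumprod[t], alphas_cumprod[t + 1]
        eps = eps_model(x, t, image_cond)            # image-conditioned estimate
        # Model's current estimate of the clean latent
        x0_pred = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()
        # Deterministic DDIM step toward higher noise
        x = a_next.sqrt() * x0_pred + (1 - a_next).sqrt() * eps
    return x  # approximately the noise that regenerates x0

# Toy usage with placeholder pieces
T, dim = 50, 4
alphas = torch.linspace(0.99, 0.01, T)               # toy schedule, noise grows
eps_model = lambda x, t, c: 0.1 * (x - c)            # stand-in for the UNet
x0 = torch.randn(1, dim)
z_T = ddim_invert(x0, eps_model, alphas, image_cond=x0)
```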
Human Motion people - there is still a path to Nashville... Just saying @CVPR
We invite you to submit your Motion Generation papers to the HuMoGen @CVPR workshop! The deadline is on March 12 More details @ humogen.github.io