Elad Richardson
@EladRichardson
Teaching Pixels New Tricks | Research @runwayml
Ever stared at a set of shapes and thought: 'These could be something… but what?' Designed for visual ideation, PiT takes a set of concepts and interprets them as parts within a target domain, assembling them into a coherent whole while also sampling missing parts. eladrich.github.io/PiT/

🧵1/ Text-to-video models generate stunning visuals, but… motion? Not so much. You get extra limbs, objects popping in and out... In our new paper, we present FlowMo -- an inference-time method that reduces temporal artifacts without retraining or architectural changes. 👇
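For the curious, the general pattern of an inference-time fix like this (steer the sampler with a gradient on a temporal-coherence score, no retraining) could look roughly like the sketch below. The `denoiser` and `scheduler_step` callables, the penalty, and the guidance weight are all illustrative stand-ins, not the actual FlowMo objective.

```python
import torch

def guided_denoise_step(x_t, t, denoiser, scheduler_step, guidance_weight=0.1):
    """One denoising step with an illustrative inference-time temporal penalty.

    `denoiser` predicts the clean video latent x0 of shape (frames, C, H, W);
    `scheduler_step` is whatever sampler update you already use. Both are
    placeholders: this shows the generic 'nudge the latent with a gradient at
    test time' pattern the tweet describes, not the FlowMo method itself.
    """
    x_t = x_t.detach().requires_grad_(True)
    x0_pred = denoiser(x_t, t)

    # Penalize frame-to-frame jumps in the predicted clean latent
    # (a stand-in for a real temporal-coherence score).
    temporal_penalty = (x0_pred[1:] - x0_pred[:-1]).pow(2).mean()

    grad = torch.autograd.grad(temporal_penalty, x_t)[0]
    x_t = x_t - guidance_weight * grad       # nudge toward smoother motion
    return scheduler_step(x_t.detach(), t)   # then take the usual sampler step
```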
I got early access to this secret @pika_labs feature. Manipulate any character or object in your video while keeping the rest perfectly intact! Curious? 👀🤔Here's a sneak peek...
I'm very excited to announce our #SIGGRAPH2025 workshop: Drawing & Sketching: Art, Psychology, and Computer Graphics 🎨🧠🫖 🔗 lines-and-minds.github.io 📅 Sunday, August 10th Join us to explore how people draw, how machines draw, and how the two might draw together! 🤖✍️
1/ Can we teach a motion model to "dance like a chicken"? Or better: Can LoRA help motion diffusion models learn expressive, editable styles without forgetting how to move? Led by @HSawdayee, @chuan_guo92603, we explore this in our latest work. 🎥 haimsaw.github.io/LoRA-MDM/ 🧵👇
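For readers who haven't met LoRA: the core trick being asked about is a frozen pretrained layer plus a small trainable low-rank update. A minimal sketch of that generic mechanism (not the LoRA-MDM training recipe; the wrapped layer name is made up):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update (generic LoRA)."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)        # keep the pretrained motion model intact
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)     # start as an identity-preserving adapter
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.up(self.down(x))

# e.g. wrap one attention projection of a motion denoiser (layer name is illustrative):
# denoiser.attn.to_q = LoRALinear(denoiser.attn.to_q, rank=8)
```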
Introducing Act-Two, our next-generation motion capture model with major improvements in generation quality and support for head, face, body and hand tracking. Act-Two only requires a driving performance video and reference character. Available now to all our Enterprise…
Tel Aviv friends: we're hosting an amazing rooftop meetup with a killer speaker lineup (not including me 😅) lu.ma/q8bigfqn
Excited to share our recent work on corrector sampling in language models! A new sampling method that mitigates error accumulation by iteratively revisiting tokens in a window of previously generated text. With: @shaulneta @urielsinger @lipmanya Link: arxiv.org/abs/2506.06215
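The one-line summary already hints at the loop: generate autoregressively, then periodically revisit a token inside a recent window and resample it. A toy sketch under that reading (the schedule and resampling rule are simplified placeholders, and the revisited token here is conditioned only on its prefix, unlike a full corrector; see arxiv.org/abs/2506.06215 for the actual method):

```python
import torch

@torch.no_grad()
def corrector_sampling(model, tokens, steps=200, window=16, revisit_every=8):
    """Toy corrector-style sampling loop.

    `tokens` is a non-empty list of prompt token ids; `model(ids)` is assumed
    to return logits of shape (1, len, vocab). Every `revisit_every` steps we
    pick a position inside the last `window` tokens and resample it.
    """
    tokens = list(tokens)
    for step in range(steps):
        logits = model(torch.tensor([tokens]))[0, -1]
        tokens.append(torch.multinomial(logits.softmax(-1), 1).item())

        if step % revisit_every == 0 and len(tokens) > window:
            # revisit a recently generated token and resample it
            pos = len(tokens) - 1 - torch.randint(1, window, (1,)).item()
            logits = model(torch.tensor([tokens[:pos]]))[0, -1]
            tokens[pos] = torch.multinomial(logits.softmax(-1), 1).item()
    return tokens
```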
Really impressive results for human-object interaction. They use a two-phase process where they optimize the diffusion noise, instead of the motion itself, to reach sub-centimeter precision while staying on the motion manifold 🧠 HOIDiNi - hoidini.github.io
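The "optimize the noise, not the motion" idea can be sketched generically: keep the initial diffusion noise as the optimization variable, decode it to a motion, score that motion with a task loss, and backprop into the noise so samples stay close to what the model can generate. A minimal sketch assuming a differentiable `denoise` callable and a placeholder `loss_fn` (not the HOIDiNi pipeline):

```python
import torch

def optimize_diffusion_noise(denoise, loss_fn, shape, steps=100, lr=0.05):
    """Noise-space optimization sketch: edit the latent noise, not the motion."""
    noise = torch.randn(shape, requires_grad=True)
    opt = torch.optim.Adam([noise], lr=lr)
    for _ in range(steps):
        motion = denoise(noise)      # differentiable sampling from noise to motion
        loss = loss_fn(motion)       # e.g. penalize hand-object distance / penetration
        opt.zero_grad()
        loss.backward()
        opt.step()
    return denoise(noise.detach())
```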

Artifacts in your attention maps? Forgot to train with registers? Use 𝙩𝙚𝙨𝙩-𝙩𝙞𝙢𝙚 𝙧𝙚𝙜𝙞𝙨𝙩𝙚𝙧𝙨! We find that a sparse set of activations determines the artifact positions. We can shift them anywhere ("Shifted") — even outside the image into an untrained token. Clean maps, no retraining.
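As a rough illustration of the shifting trick: append an extra, untrained register token to the sequence and, at test time, move the large activations in the artifact-causing channels off the patch tokens and onto that register. The channel indices and threshold below are assumptions for the sketch, not the paper's exact recipe.

```python
import torch

def shift_outliers_to_register(hidden, outlier_idx, threshold=6.0):
    """Toy test-time register: `hidden` is (batch, tokens, dim) where the LAST
    token is an extra, untrained register appended to the sequence, and
    `outlier_idx` lists the channels assumed to cause the artifacts.
    Large activations in those channels are moved off the patch tokens and
    accumulated on the register, so patch attention maps come out clean.
    """
    hidden = hidden.clone()
    vals = hidden[:, :-1, outlier_idx]                    # (batch, patches, n_outliers)
    mask = vals.abs() > threshold
    hidden[:, -1:, outlier_idx] += (vals * mask).sum(dim=1, keepdim=True)
    hidden[:, :-1, outlier_idx] = vals * (~mask)
    return hidden
```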
New job, new reading list 📚 Excited to share I'm joining @runwayml in its journey to reshape the future of storytelling

🔔Excited to announce that #AnyTop has been accepted to #SIGGRAPH2025!🥳 ✅ A diffusion model that generates motion for arbitrary skeletons ✅ Using only a skeletal structure as input ✅ Learns semantic correspondences across diverse skeletons 🌐 Project: anytop2025.github.io/Anytop-page
nailed a new sweet spot with my HiDream yarn art LoRA 🧶
Thanks for sharing our demo for Piece-it-Together🧩 The training and inference code is now available at github.com/eladrich/PiT
Not all models are built the same! Check out the images generated by GPT4o 🫠 and Piece-it-Together 👑 using the same set of input images! The Piece it Together (PiT) app is live on 🤗 Spaces!
Excited to share that "TokenVerse: Versatile Multi-concept Personalization in Token Modulation Space" got accepted to SIGGRAPH 2025! It tackles disentangling complex visual concepts from as little as a single image and re-composing concepts across multiple images into a coherent…
Ever dreamed up a different ending to Harry Potter? Now’s your chance to make it real — rewrite the magic your way! 🪄 @pika_labs
I really had a lot of fun with @pika_labs (still unreleased) new feature! Kids went NUTS for it! 🤯 Sound on! 🔊