Yaron Lipman
@lipmanya
Research scientist @AIatMeta (FAIR), prev/visiting @WeizmannScience. Interested in generative models and deep learning of irregular/geometric data.
A new (and comprehensive) Flow Matching guide and codebase released! Join us tomorrow at 9:30AM @NeurIPSConf for the FM tutorial to hear more... arxiv.org/abs/2412.06264 github.com/facebookresear…
DTM vs FM: Lots of interest in how Difference Transition Matching (DTM) connects to Flow Matching (FM). Here is a short animation illustrating Theorem 1 in our paper: for a very small step size (1/T), DTM converges to an Euler step of FM.
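The FM side of that comparison can be sketched numerically. Below is a toy 1-D illustration of my own (not from the paper): for a Gaussian source and Gaussian target the marginal FM velocity has a closed form, and Euler steps of size 1/T transport source samples to the target.

```python
import numpy as np

rng = np.random.default_rng(0)
m, s = 2.0, 0.5          # target is N(m, s^2); source is N(0, 1)

def velocity(x, t):
    # closed-form marginal FM velocity E[x1 - x0 | xt = x] for
    # x0 ~ N(0,1), x1 ~ N(m, s^2), xt = (1-t)*x0 + t*x1
    var = (1 - t) ** 2 + (t * s) ** 2
    return m + (t * s**2 - (1 - t)) / var * (x - t * m)

T = 200                            # number of Euler steps (step size 1/T)
x = rng.standard_normal(20000)     # samples from the source
for k in range(T):
    x = x + velocity(x, k / T) / T  # one Euler step of the FM ODE

# x now approximately follows the target N(m, s^2)
```

Replacing that deterministic Euler step with a sampled transition kernel is, loosely, where DTM departs from FM.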
[1/n] New paper alert! Excited to introduce Transition Matching (TM)! We're replacing short-timestep kernels from Flow Matching/Diffusion with... a generative model, achieving SOTA text-to-image generation! @urielsinger @itai_gat @lipmanya
If you're curious to dive deeper into Transition Matching (TM), a great starting point is understanding the similarities and differences between Difference Transition Matching (DTM) and Flow Matching (FM).
The Difference Transition Matching (DTM) process is so simple to illustrate, you can calculate it on a whiteboard! At each step: draw all lines connecting source and target (shaded), list those intersecting the current state (yellow), then sample a line from the list (green).
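For readers who want to poke at it, here is a toy 1-D sketch of that whiteboard procedure (my own illustration; exact intersection is replaced by a small tolerance, since with finitely many samples no line passes exactly through the current state):

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.standard_normal(200)        # source samples
x1 = rng.standard_normal(200) + 3.0  # target samples

T = 100                              # number of transition steps
xt = rng.choice(x0)                  # start at a source sample
for k in range(T):
    t = k / T
    # a "line" joins (0, x0_i) to (1, x1_j); it (approximately) intersects
    # the current state (t, xt) when (1-t)*x0_i + t*x1_j is close to xt
    X0, X1 = np.meshgrid(x0, x1, indexing="ij")
    gap = np.abs((1 - t) * X0 + t * X1 - xt)
    tol = np.quantile(gap, 0.02)     # keep the closest ~2% of lines
    i, j = np.nonzero(gap <= tol)
    pick = rng.integers(len(i))      # sample one intersecting line uniformly
    # move along the chosen line to time t + 1/T
    xt = (1 - (t + 1 / T)) * x0[i[pick]] + (t + 1 / T) * x1[j[pick]]

# after the last step (t + 1/T = 1), xt coincides with a target sample
```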
Introducing Transition Matching (TM) β a new generative paradigm that unifies Flow Matching and autoregressive models into one framework, boosting both quality and speed! Thank you for the great collaboration @shaulneta @itai_gat @lipmanya
Check out our team's latest work, led by @urielsinger and @shaulneta!
**Transition Matching** is a new iterative generative paradigm that uses Flow Matching or AR models to transition between intermediate generation states, leading to improved generation quality and speed!
Adjoint-based diffusion samplers have simple & scalable objectives without importance-weight complications. Like many, though, they solve degenerate Schrödinger bridges, despite all being SB-inspired. Proudly introducing the #Adjoint #Schrödinger #Bridge #Sampler, a full SB-based sampler that…
Excited to share our recent work on corrector sampling in language models! A new sampling method that mitigates error accumulation by iteratively revisiting tokens in a window of previously generated text. With: @shaulneta @urielsinger @lipmanya Link: arxiv.org/abs/2506.06215
A new paper: we finetune an LLM to rethink and resample previously generated tokens, reducing sampling errors and improving performance.
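The sampling loop itself is easy to picture. Here is a toy sketch of my own (with a uniform stand-in for the model; the paper finetunes a real LLM): after emitting each token, tokens in a trailing window are revisited and resampled.

```python
import random

random.seed(0)
vocab = ["a", "b", "c"]

def propose(context):
    # stand-in proposal: a real corrector would query the finetuned LLM
    return random.choice(vocab)

def generate_with_corrector(length, window=4, passes=2):
    out = []
    for _ in range(length):
        out.append(propose(out))           # emit the next token
        for _ in range(passes):            # revisit a trailing window
            start = max(0, len(out) - window)
            for i in range(start, len(out)):
                out[i] = propose(out[:i])  # resample token i in its context
    return out

seq = generate_with_corrector(10)
```

The point of the window is that earlier mistakes get a second chance while the rest of the prefix stays fixed.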
Padding in our non-AR sequence models? Yuck. Instead of unmasking, our new work *Edit Flows* performs iterative refinement via position-relative inserts and deletes, operations naturally suited for variable-length sequence generation. Easily better than using mask tokens.
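To make "inserts and deletes" concrete, here is a toy illustration of the edit operations themselves (my own sketch, not the Edit Flows model):

```python
def apply_edit(tokens, edit):
    """Apply one position-relative edit to a token list."""
    if edit[0] == "ins":                  # ("ins", pos, tok): insert tok at pos
        _, pos, tok = edit
        return tokens[:pos] + [tok] + tokens[pos:]
    if edit[0] == "del":                  # ("del", pos): delete token at pos
        _, pos = edit
        return tokens[:pos] + tokens[pos + 1:]
    raise ValueError(f"unknown edit op: {edit[0]!r}")

seq = ["the", "cat", "sat"]
seq = apply_edit(seq, ("ins", 2, "still"))  # -> ["the", "cat", "still", "sat"]
seq = apply_edit(seq, ("del", 0))           # -> ["cat", "still", "sat"]
```

Note the sequence length changes across edits, which fixed-length mask-based generation cannot express.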
So Flow Matching is *just*
xt = mix(x0, x1, t)
loss = mse((x1 - x0) - nn(xt, t))
Nice, here it is in a fragment shader :) shadertoy.com/view/tfdXRM
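The same two lines, as a runnable toy in numpy (my own sketch: nn is a hypothetical linear stand-in for a network, trained by plain gradient descent on the regression loss above):

```python
import numpy as np

rng = np.random.default_rng(0)

# source (noise) and target (data) samples in 1-D
x0 = rng.standard_normal(1000)
x1 = 0.1 * rng.standard_normal(1000) + 2.0

def nn(xt, t, w):
    # toy linear "network" standing in for a real model
    return w[0] * xt + w[1] * t + w[2]

def loss(w):
    t = rng.uniform(size=x0.shape)
    xt = (1 - t) * x0 + t * x1            # xt = mix(x0, x1, t)
    err = nn(xt, t, w) - (x1 - x0)        # nn should predict x1 - x0
    return (err ** 2).mean(), err, xt, t  # mse + pieces for the gradient

w = np.zeros(3)
mse0, _, _, _ = loss(w)                   # loss before training
for _ in range(2000):
    _, err, xt, t = loss(w)
    grad = 2 * np.array([(err * xt).mean(), (err * t).mean(), err.mean()])
    w -= 0.05 * grad                      # gradient descent step

mse_final, _, _, _ = loss(w)              # loss after training
```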
We've open sourced Adjoint Sampling! It's part of a bundled release showcasing FAIR's research and open source commitment to AI for science. github.com/facebookresear… x.com/AIatMeta/statu…
Announcing the newest releases from Meta FAIR. We're releasing new groundbreaking models, benchmarks, and datasets that will transform the way researchers approach molecular property prediction, language processing, and neuroscience. 1️⃣ Open Molecules 2025 (OMol25): A dataset…
Against conventional wisdom, I will be giving a talk with particular focus on the "how" and the various intricacies of applying stochastic control for generative modeling. Mon 9:50am Hall 1 Apex #ICLR2025 Also check out the other talks at delta-workshop.github.io!
Had an absolute blast presenting at #ICLR2025! Thanks to everyone who came to visit my poster. Special shoutout to @drscotthawley for taking a last-minute photo!
I'll be at the poster session with our follow-up on Discrete Flow Matching. We derive a closed-form solution to the kinetic optimal problem for conditional velocity on discrete spaces. Into flow models? Come chat! Poster: Sat 10am (#191); Oral: Sat 3:30pm (6E) #ICLR2025
Got lots of questions about kinetic energy in continuous vs discrete space during my poster! Made a simple slide to help explain: check it out!
Even better if friends and colleagues join you for the same session :) Our work on "Flow Matching with General Discrete Paths" will be presented by @shaulneta briefly afterwards. Check it out, too! Paper: arxiv.org/abs/2412.03487
Come to our oral presentation on Generator Matching at ICLR 2025 tomorrow (Saturday). Learn about a generative model that works for any data type and Markov process! Oral: 3:30pm (Peridot 202-203, session 6E) Poster: 10am-12:30pm #172 (Hall 3 + Hall 2B) arxiv.org/abs/2410.20587
Discrete Flow Matching extends the Flow Matching recipe to discrete data. But so far the focus of the community has been on the simple masking corruption process. We now enable general corruption processes. Imagination is the limit! Oral by @shaulneta Sat 3:30pm.
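As a concrete picture of "corruption process", here is a toy sketch of my own (not the paper's construction) contrasting the usual masking kernel with a uniform-replacement kernel on discrete tokens; the point of the work is that the recipe is no longer tied to masking.

```python
import random

random.seed(0)
vocab = ["a", "b", "c", "d"]
MASK = "<mask>"

def corrupt(tokens, t, kind="mask"):
    # independently corrupt each token with probability t
    out = []
    for tok in tokens:
        if random.random() < t:
            out.append(MASK if kind == "mask" else random.choice(vocab))
        else:
            out.append(tok)
    return out

clean = ["a", "b", "c", "d", "a", "b"]
masked = corrupt(clean, 0.5, kind="mask")      # some tokens become <mask>
uniform = corrupt(clean, 0.5, kind="uniform")  # some tokens are re-rolled
```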
Reward-driven algorithms for training dynamical generative models significantly lag behind their data-driven counterparts in terms of scalability. We aim to rectify this. Adjoint Matching poster @cdomingoenrich Sat 3pm & Adjoint Sampling oral @aaronjhavens Mon 10am FPI