Ricky T. Q. Chen
@RickyTQChen
Research Scientist. FAIR NY, Meta. I build simplified abstractions of the world through the lens of dynamics and flows.
Against conventional wisdom, I will be giving a talk with particular focus on the "how" and the various intricacies of applying stochastic control for generative modeling. Mon 9:50am Hall 1 Apex #ICLR2025 Also check out the other talks at delta-workshop.github.io!

Padding in our non-AR sequence models? Yuck. 🙅 👉 Instead of unmasking, our new work *Edit Flows* performs iterative refinements via position-relative inserts and deletes, operations naturally suited for variable-length sequence generation. Easily better than using mask tokens.
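A toy sketch of the kind of position-relative edit operations the tweet describes (illustrative only, not the actual Edit Flows model — `apply_edits` and its edit format are hypothetical):

```python
# Toy illustration: applying position-relative insert/delete edits to a
# token sequence. Because inserts and deletes change the length freely,
# no padding or mask tokens are needed -- the point made in the tweet.

def apply_edits(tokens, edits):
    """Apply a list of (op, pos, token) edits.

    op is "ins" (insert `token` before index `pos`) or "del" (delete the
    token at index `pos`; its `token` field is ignored). Edits are applied
    right-to-left so earlier positions remain valid as the length changes.
    """
    out = list(tokens)
    for op, pos, tok in sorted(edits, key=lambda e: e[1], reverse=True):
        if op == "ins":
            out.insert(pos, tok)
        elif op == "del":
            del out[pos]
    return out

seq = ["the", "cat", "sat"]
refined = apply_edits(seq, [("ins", 3, "down"), ("del", 0, None), ("ins", 0, "a")])
# refined == ["a", "cat", "sat", "down"] -- length changed, no padding
```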
Adjoint-based diffusion samplers have simple & scalable objectives w/o importance-weight complications. Like many, though, they solve degenerate Schrödinger bridges, despite all being SB-inspired. 📢 Proudly introduce #Adjoint #Schrödinger #Bridge #Sampler, a full SB-based sampler that…
This new work generalizes the recent Adjoint Sampling approach from Stochastic Control to Schrödinger Bridges, enabling measure transport between data and unnormalized densities. Achieves SOTA on large-scale energy-driven conformer generation. See thread by @guanhorng_liu
Excited to share our recent work on corrector sampling in language models! A new sampling method that mitigates error accumulation by iteratively revisiting tokens in a window of previously generated text. With: @shaulneta @urielsinger @lipmanya Link: arxiv.org/abs/2506.06215
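A minimal sketch of the *idea* behind corrector sampling as described in the tweet (not the paper's algorithm — `step` and `propose` are hypothetical stand-ins for the model's generation and corrector calls):

```python
# Illustrative only: after each ordinary next-token step, revisit a
# trailing window of previously generated tokens and let a corrector
# proposal optionally replace them, mitigating error accumulation.

def generate_with_corrector(step, propose, length, window=4):
    """Generate `length` tokens, re-proposing tokens in a trailing window.

    step(tokens)      -> next token given the prefix (placeholder model)
    propose(tokens,i) -> possibly-corrected token for position i
    """
    tokens = []
    for _ in range(length):
        tokens.append(step(tokens))           # ordinary next-token step
        lo = max(0, len(tokens) - window)
        for i in range(lo, len(tokens)):      # revisit recent positions
            tokens[i] = propose(tokens, i)    # corrector may overwrite
    return tokens
```

With an identity corrector (`propose` returning the existing token), this reduces to plain left-to-right sampling; the interesting behavior comes from a corrector that actually revises tokens.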
Introducing Adjoint Sampling, a new learning algorithm that trains generative models based on scalar rewards. Based on theoretical foundations developed by FAIR, Adjoint Sampling leads to a highly scalable practical algorithm, and can become the foundation for further research…
We've open sourced Adjoint Sampling! It's part of a bundled release showcasing FAIR's research and open source commitment to AI for science. github.com/facebookresear… x.com/AIatMeta/statu…
Announcing the newest releases from Meta FAIR. We’re releasing new groundbreaking models, benchmarks, and datasets that will transform the way researchers approach molecular property prediction, language processing, and neuroscience. 1️⃣ Open Molecules 2025 (OMol25): A dataset…
🚀Excited to open source the code for Adjoint Matching --- as part of a new repo centered around reward fine-tuning via stochastic optimal control! github.com/microsoft/soc-…
New paper! We cast reward fine-tuning as stochastic control. 1. We prove that a specific noise schedule *must* be used for fine-tuning. 2. We propose a novel algorithm that is significantly better than the adjoint method*. (*this is an insane claim) arxiv.org/abs/2409.08861
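A hedged sketch of the stochastic-control framing mentioned in the tweet (illustrative only, not Adjoint Matching itself): the fine-tuned sampler follows the base model's drift plus a learned control term, and training shapes that control. All function names here are placeholders.

```python
# Illustration of the controlled-SDE view of reward fine-tuning:
# the fine-tuned process is dX = (b(X,t) + u(X,t)) dt + sigma dW,
# where b is the base (pretrained) drift and u is the learned control.
# This is a generic Euler-Maruyama rollout, not the paper's algorithm.

import math
import random

def simulate(base_drift, control, x0, sigma, n_steps, dt, rng=random.Random(0)):
    """Euler-Maruyama rollout of the controlled SDE starting at x0."""
    x, t = x0, 0.0
    for _ in range(n_steps):
        noise = rng.gauss(0.0, 1.0)
        x = x + (base_drift(x, t) + control(x, t)) * dt \
              + sigma * math.sqrt(dt) * noise
        t += dt
    return x

# With zero control and zero noise this reduces to the base ODE flow,
# e.g. the Ornstein-Uhlenbeck drift b(x, t) = -x contracts toward 0.
x_final = simulate(lambda x, t: -x, lambda x, t: 0.0, 1.0, 0.0, 10, 0.1)
```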
This ICLR is the best conference ever. Attendees are extremely friendly and cuddly. ..What do you mean this is the wrong hall?



