Guan-Horng Liu
@guanhorng_liu
Research Scientist @MetaAI (FAIR NY) • Schrödinger Bridge, diffusion, flow, stochastic optimal control • prev ML PhD @GeorgiaTech 🚀
Adjoint-based diffusion samplers have simple & scalable objectives w/o importance-weight complications. Like many others, though, they solve degenerate Schrödinger bridges, despite being SB-inspired. 📢 Proudly introducing the #Adjoint #Schrödinger #Bridge #Sampler, a full SB-based sampler that…
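For intuition on the "full SB" claim: a Schrödinger bridge seeks the coupling between a base marginal and a target marginal that stays closest to a reference process, whereas degenerate variants effectively pin one endpoint. A toy discrete bridge can be solved with Sinkhorn/IPF iterations; a minimal numpy sketch (illustration only, not the paper's method):

```python
import numpy as np

# Toy discrete Schrödinger bridge via IPF/Sinkhorn: find the coupling pi
# closest to a reference kernel K whose marginals are BOTH mu0 and mu1.
n = 50
x = np.linspace(-3, 3, n)

mu0 = np.exp(-0.5 * x**2); mu0 /= mu0.sum()            # base: Gaussian
energy = (x**2 - 1.0)**2                               # toy double-well energy
mu1 = np.exp(-energy); mu1 /= mu1.sum()                # target: Boltzmann exp(-E)

eps = 0.5                                              # reference noise level
K = np.exp(-(x[:, None] - x[None, :])**2 / (2 * eps))  # heat-kernel reference

v = np.ones(n)
for _ in range(200):                                   # IPF fixed-point sweeps
    u = mu0 / (K @ v)                                  # match base marginal
    v = mu1 / (K.T @ u)                                # match target marginal

pi = u[:, None] * K * v[None, :]                       # bridge coupling
print(abs(pi.sum(1) - mu0).max(), abs(pi.sum(0) - mu1).max())  # both ~0
```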
Thrilled to be co-organizing FPI at #NeurIPS2025! I'm particularly excited about our new 'Call for Open Problems' track. If you have a tough, cross-disciplinary challenge, we want you to share it and inspire new collaborations. A unique opportunity! Learn more below.
1/ Where do Probabilistic Models, Sampling, Deep Learning, and Natural Sciences meet? 🤔 The workshop we’re organizing at #NeurIPS2025! 📢 FPI@NeurIPS 2025: Frontiers in Probabilistic Inference – Learning meets Sampling Learn more and submit → fpiworkshop.org…
📢 We're organizing a #NeurIPS2025 workshop on generative modeling, learning to sample, and optimal transport / control. Two submission tracks this year! (deadline #Aug22) 📰 Call for 4-page paper on research / dataset 💡 Call for #Open #Question: 2-page proposal on open…
🚨 Our workshop on Frontiers of Probabilistic Inference: Learning meets Sampling got accepted to #NeurIPS2025!! After the incredible success of the first edition, the second edition aims to be bolder, bigger, and more ambitious in outlining key challenges in the natural…
**Transition Matching** is a new iterative generative paradigm that uses Flow Matching or AR models to transition between intermediate generation states, leading to improved generation quality and speed!
[1/n] New paper alert! 🚀 Excited to introduce 𝐓𝐫𝐚𝐧𝐬𝐢𝐭𝐢𝐨𝐧 𝐌𝐚𝐭𝐜𝐡𝐢𝐧𝐠 (𝐓𝐌)! We're replacing short-timestep kernels from Flow Matching/Diffusion with... a generative model🤯, achieving SOTA text-2-image generation! @urielsinger @itai_gat @lipmanya
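A schematic of the generation loop as I read the abstract (placeholder network, not the paper's code): a few coarse transitions, each produced by a learned stochastic model rather than a fixed Gaussian denoising kernel:

```python
import torch

# Sketch in the spirit of Transition Matching: replace many small Gaussian
# denoising steps with a few coarse transitions x_t -> x_{t-1}, each sampled
# from a learned generative transition model.

class TransitionModel(torch.nn.Module):
    """Placeholder net; in the paper this is itself a flow/AR model."""
    def __init__(self, dim):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim + 1, 128), torch.nn.SiLU(),
            torch.nn.Linear(128, dim))

    def forward(self, x, t, z):
        # z: fresh noise, so each transition is stochastic (generative),
        # injected here by simple addition purely for illustration.
        h = torch.cat([x + z, t.expand(x.shape[0], 1)], dim=-1)
        return self.net(h)

@torch.no_grad()
def generate(model, dim=8, n_steps=4, batch=2):
    x = torch.randn(batch, dim)             # start from pure noise
    for i in reversed(range(n_steps)):      # few coarse transitions
        t = torch.tensor([[i / n_steps]])
        z = torch.randn_like(x)
        x = model(x, t, z)                  # learned kernel replaces Gaussian step
    return x

print(generate(TransitionModel(8)).shape)   # torch.Size([2, 8])
```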
Cool work led by @guanhorng_liu! Removing the restriction on memoryless SDEs enables a lot of relevant cases in chemistry and more... also better results! Take advantage of the freedom of flow & bridge matching to choose a base dist & learn from energy alone! No more data!
This new work generalizes the recent Adjoint Sampling approach from Stochastic Control to Schrödinger Bridges, enabling measure transport between data and unnormalized densities. Achieves SOTA on large-scale energy-driven conformer generation. See thread by @guanhorng_liu
Excited to share our recent work on corrector sampling in language models! A new sampling method that mitigates error accumulation by iteratively revisiting tokens in a window of previously generated text. With: @shaulneta @urielsinger @lipmanya Link: arxiv.org/abs/2506.06215
A new paper: We finetune an LLM to rethink and resample previously generated tokens, reducing sampling errors and improving performance.
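A toy sketch of the window-revisiting idea (my schematic reading of the abstract; `dummy_logits` is a hypothetical stand-in for a real LM forward pass, and the actual method uses a fine-tuned model for the corrector pass rather than raw prefix resampling):

```python
import torch

# Schematic of window-based corrector sampling: after emitting each new
# token, revisit a trailing window of recent tokens and resample them,
# letting early sampling mistakes be corrected.

VOCAB, WINDOW = 100, 4

def dummy_logits(prefix):               # stand-in for an LM forward pass
    torch.manual_seed(len(prefix))      # deterministic toy distribution
    return torch.randn(VOCAB)

def sample_with_corrector(n_tokens):
    seq = []
    for _ in range(n_tokens):
        seq.append(torch.distributions.Categorical(
            logits=dummy_logits(seq)).sample().item())
        # corrector pass: resample each token inside the trailing window
        for i in range(max(0, len(seq) - WINDOW), len(seq)):
            seq[i] = torch.distributions.Categorical(
                logits=dummy_logits(seq[:i])).sample().item()
    return seq

print(sample_with_corrector(10))
```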
🚀 After two+ years of intense research, we’re thrilled to introduce Skala — a scalable deep learning density functional that hits chemical accuracy on atomization energies and matches hybrid-level accuracy on main group chemistry — all at the cost of semi-local DFT. ⚛️🔥🧪🧬
Padding in our non-AR sequence models? Yuck. 🙅 👉 Instead of unmasking, our new work *Edit Flows* performs iterative refinement via position-relative inserts and deletes, operations naturally suited for variable-length sequence generation. Easily better than using mask tokens.
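A minimal illustration of the edit-operation flavor, with a random (untrained) edit policy standing in where Edit Flows would use a learned continuous-time model:

```python
import random

# Variable-length generation via insert/delete edits rather than
# fixed-length unmasking: the sequence grows and shrinks freely.

VOCAB = list("abcdefgh")

def random_edit_step(seq):
    if seq and random.random() < 0.3:
        del seq[random.randrange(len(seq))]         # position-relative delete
    else:
        seq.insert(random.randrange(len(seq) + 1),  # position-relative insert
                   random.choice(VOCAB))
    return seq

random.seed(0)
seq = []
for _ in range(12):                                 # iterative refinement loop
    seq = random_edit_step(seq)
    print("".join(seq))
```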
Announcing the newest releases from Meta FAIR. We’re releasing new groundbreaking models, benchmarks, and datasets that will transform the way researchers approach molecular property prediction, language processing, and neuroscience. 1️⃣ Open Molecules 2025 (OMol25): A dataset…
Slides: rtqichen.com/pdfs/adjoint_m… Link to the workshop livestream (if you have access): iclr.cc/virtual/2025/w…
Against conventional wisdom, I will be giving a talk with particular focus on the "how" and the various intricacies of applying stochastic control for generative modeling. Mon 9:50am Hall 1 Apex #ICLR2025 Also check out the other talks at delta-workshop.github.io!
We have released an eSEN model that is the current SOTA on Matbench-Discovery. Code/checkpoints are available for both non-commercial and commercial use: code: github.com/facebookresear… checkpoint: huggingface.co/facebook/OMAT24 paper (updated): arxiv.org/abs/2502.12147
For existing MLIPs, lower test errors do not always translate to better performance in downstream tasks. We bridge this gap by proposing eSEN -- SOTA performance on compliant Matbench-Discovery (F1 0.831, κSRME 0.321) and phonon prediction. arxiv.org/abs/2502.12147 1/6
🚀Excited to open source the code for Adjoint Matching --- as part of a new repo centered around reward fine-tuning via stochastic optimal control! github.com/microsoft/soc-…
New paper! We cast reward fine-tuning as stochastic control. 1. We prove that a specific noise schedule *must* be used for fine-tuning. 2. We propose a novel algorithm that is significantly better than the adjoint method*. (*this is an insane claim) arxiv.org/abs/2409.08861
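A minimal sketch of the "reward fine-tuning as stochastic control" framing (a toy discretize-then-optimize setup, not the paper's Adjoint Matching algorithm or its memoryless noise schedule): a learned drift steers a diffusion so that terminal samples score well under a reward, balanced by a quadratic control cost, with gradients backpropagated through the SDE rollout:

```python
import torch

# Toy stochastic-control fine-tuning: minimize E[ control cost - reward(X_T) ]
# over a learned drift u(x, t), by differentiating through an Euler-Maruyama
# rollout (the autodiff cousin of the continuous adjoint method).

dim, steps, dt = 2, 20, 1.0 / 20
drift = torch.nn.Sequential(torch.nn.Linear(dim + 1, 64),
                            torch.nn.SiLU(), torch.nn.Linear(64, dim))
opt = torch.optim.Adam(drift.parameters(), lr=1e-3)

def reward(x):                          # toy reward: stay near the point (2, 2)
    return -((x - 2.0) ** 2).sum(-1)

for it in range(200):
    x = torch.randn(64, dim)            # base samples
    cost = 0.0
    for k in range(steps):
        t = torch.full((x.shape[0], 1), k * dt)
        u = drift(torch.cat([x, t], -1))
        cost = cost + 0.5 * (u ** 2).sum(-1) * dt   # control cost (KL term)
        x = x + u * dt + (dt ** 0.5) * torch.randn_like(x)
    loss = (cost - reward(x)).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```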
📢 #Adjoint #Sampling is a new diffusion sampler for the Boltzmann distribution that:
- is grounded in stochastic control
- enjoys a scalable matching objective
- is extremely efficient in energy NFEs
- does NOT require or estimate target data
Check out @aaronjhavens's talk on Monday at the #FPI workshop!
New paper out with FAIR (+FAIR-Chemistry): Adjoint Sampling: Highly Scalable Diffusion Samplers via Adjoint Matching We present a scalable method for sampling from unnormalized densities beyond classical force fields. 📄: arxiv.org/abs/2504.11713
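For context on the "energy NFE" budget: a plain unadjusted Langevin sampler, the classic way to sample from exp(-E), pays one energy-gradient evaluation per step per chain; this is the kind of cost Adjoint Sampling is designed to spend far more sparingly (baseline illustration, not the paper's method):

```python
import numpy as np

# Unadjusted Langevin sampling of the Boltzmann density exp(-E) for a toy
# double well E(x) = (x^2 - 1)^2. Every step costs one energy-gradient NFE.

def grad_E(x):                          # d/dx (x^2 - 1)^2 = 4 x (x^2 - 1)
    return 4 * x * (x ** 2 - 1)

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)         # 10k parallel chains
step, n_steps = 1e-2, 2_000
for _ in range(n_steps):                # each sweep = one energy-gradient NFE
    x += -step * grad_E(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)

print("NFEs:", n_steps, "| sample mean of x^2:", round(float((x ** 2).mean()), 3))
```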
Excited to share that React-OT is on the cover of @NatMachIntell; both the paper and code are fully accessible! React-OT significantly accelerates our previous OA-ReactDiff method on double-ended transition-state search with a better prior and a regularized path!
Our React-OT paper is finally (after exactly one year) out in @NatMachIntell with full open access (nature.com/articles/s4225…). Great to be featured on the cover of the April issue and by MIT News. Thanks to all co-authors for making it happen, especially @guanhorng_liu, @YuanqiD,…
Optimal-transport generative modelling of transition states Transition-state structures control chemical kinetics, yet locating them with density-functional theory remains the bottleneck in building large reaction networks. Duan et al. reformulate double-ended transition-state…