Michael Albergo
@msalbergo
Junior fellow at the Society of Fellows at @Harvard and @iaifi_news fellow, incoming Assistant Professor at @Harvard and the @KempnerInst views my own
We’re thrilled to introduce the 2025 cohort of #KempnerInstitute Graduate Fellows! This year’s recipients include grad students enrolled across five @Harvard Ph.D. programs. Read more: bit.ly/3TYzZ5H @hseas, @HarvardGSAS, @harvardphysics @PiN_Harvard #AI #NeuroAI #ML
1/ Where do Probabilistic Models, Sampling, Deep Learning, and Natural Sciences meet? 🤔 The workshop we’re organizing at #NeurIPS2025! 📢 FPI@NeurIPS 2025: Frontiers in Probabilistic Inference – Learning meets Sampling Learn more and submit → fpiworkshop.org…
Pitch the dataset that could spark the next AI for Science revolution 🚀 The PDB revolutionized structural biology (and even helped win a 🏅 Nobel Prize in 2024). We’re hunting for the next breakthrough dataset that could unlock similar leaps across science—and we want your idea!
AI for Science will be returning to @NeurIPSConf 2025! We aim to bring together scientists and AI researchers to discuss the reach and limits of AI for Scientific Discovery 🚀 📖 Workshop submission deadline: Aug 22 💡 Dataset proposal competition: more details coming soon
A gentle reminder that TMLR is a great journal that allows you to submit your papers when they are ready rather than rushing to meet conference deadlines. The review process is fast, there are no artificial acceptance rates, and you have more space to present your ideas in the…
Dear @NeurIPSConf -- it seems OpenReview is down entirely, and we cannot submit reviews for the upcoming review deadline tonight. Please share if you are having a similar issue. #neurips2025
1/6 Infinite-dim SGD in linear regression is the strawman model for studying scaling laws, critical batch sizes, and LR schedules. We revisit (and simplify) its analysis using just linear algebra, making it easier to derive and reason about. No PSD operators. No tensor calculus.
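The setting the thread describes can be illustrated with a toy stand-in: mini-batch SGD on a least-squares problem with power-law feature scales (a common finite-dimensional proxy for the infinite-dimensional regime). This is a minimal sketch, not the thread's analysis; the dimension, covariance, and hyperparameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 5000                        # finite-dim stand-in for the infinite-dim setting
w_star = rng.normal(size=d)
# power-law feature scales: a crude proxy for the spectral decay that
# makes the infinite-dimensional analysis interesting
scales = np.arange(1, d + 1) ** -0.5
X = rng.normal(size=(n, d)) * scales
y = X @ w_star + 0.1 * rng.normal(size=n)

def sgd_risk(batch, lr, steps=2000):
    """Plain mini-batch SGD on least squares; returns the final empirical risk."""
    w = np.zeros(d)
    for _ in range(steps):
        idx = rng.integers(0, n, size=batch)
        Xb, yb = X[idx], y[idx]
        grad = Xb.T @ (Xb @ w - yb) / batch   # mini-batch gradient of 0.5*||Xw - y||^2
        w -= lr * grad
    return float(np.mean((X @ w - y) ** 2))

# sweep batch size at a fixed learning rate to see the batch-size/LR interplay
print(sgd_risk(batch=8, lr=0.1), sgd_risk(batch=64, lr=0.1))
```

Sweeping `batch` and `lr` in a grid like this is the empirical counterpart of the critical-batch-size and LR-schedule questions the thread studies analytically.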
absolutely beautiful results! i'm very excited about training accelerated generative models such as flow maps, and it is wonderful to see master experimentalists like @karsten_kreis look closely at the details of scaling and training.
Nvidia just announced Align Your Flow: Scaling Continuous-Time Flow Map Distillation
What is the probability of an image? What do the highest and lowest probability images look like? Do natural images lie on a low-dimensional manifold? In a new preprint with @ZKadkhodaie @EeroSimoncelli, we develop a novel energy-based model in order to answer these questions: 🧵
🚨 @icmlconf We are unable to submit the camera-ready paper on OpenReview. The official deadline page (icml.cc/Conferences/20…) shows 3.5 hours remaining for submission. Is there a technical issue? Please advise! #ICML2025
#FPIworkshop best paper award goes to @peholderrieth @msalbergo and Tommi Jaakkola. Congrats and great talk Peter!
Francisco Vargas and @msalbergo with a combined talk on sampling, inference, and transport at #FPIworkshop.
Congratulations to @peholderrieth @msalbergo and Tommi Jaakkola for winning the best paper award for their work entitled "LEAPS: A discrete neural sampler via locally equivariant networks" at this year's Frontiers in Probabilistic Inference workshop #ICLR2025!
Excited to be at @iclrconf for #ICLR2025! I’ll give a talk at the Frontiers in Probabilistic Inference workshop to discuss work with @evdende2, @peholderrieth, @brianlee_lck, @jeha_paul, and Francisco Vargas! Let me know about your work, I will come by :) sites.google.com/view/fpiworksh…
& here are some of my favorite papers on the unification of flows and diffusion. What a decade!! (From my presentation here: bamos.github.io/presentations/…)
How Diffusion unification went: > score based model > then DDPM came along > we have two formalisms, DDPM & SBM > SDE came to unify them > now we have Score, DDPM & SDE > Then came flow matching to unify them > now we have Score, DDPM, SDE & Flow Models > Then consistency models…
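The unification this tweet jokes about is often summarized by the probability-flow ODE: the reverse-time score SDE and a deterministic flow share the same marginals, which is what lets the score, DDPM, SDE, and flow-matching views be translated into one another. A standard statement (not specific to any one of the threads above):

\[
\mathrm{d}x = f(x,t)\,\mathrm{d}t + g(t)\,\mathrm{d}w \qquad \text{(forward SDE)}
\]
\[
\frac{\mathrm{d}x}{\mathrm{d}t} = f(x,t) - \tfrac{1}{2}\,g(t)^2\,\nabla_x \log p_t(x) \qquad \text{(probability-flow ODE)}
\]

Flow matching learns a velocity field \(v_t(x)\) directly; when the interpolation path matches the diffusion's marginals \(p_t\), that field coincides with the right-hand side above, which is the sense in which the formalisms meet.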
Having trouble sampling from a complex distribution? @peholderrieth, @msalbergo, and #JameelClinic PI Tommi Jaakkola introduce LEAPS, which samples values from discrete distributions more efficiently & at scale via continuous-time Markov chains. ⭐️Paper: arxiv.org/pdf/2502.10843
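To make the continuous-time-Markov-chain idea concrete: below is a minimal Gillespie-style CTMC whose jump rates are hand-built to satisfy detailed balance with respect to a toy discrete target. This is a generic sketch of CTMC sampling, not LEAPS itself (LEAPS learns the rates with a locally equivariant network); the target, neighborhood structure, and rate choice are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy target over {0, ..., K-1} on a ring
K = 10
logits = np.linspace(0, 2, K)
p = np.exp(logits)
p /= p.sum()

def rates(x):
    """Metropolis-style jump rates to ring neighbors.
    r(x->y) = min(1, p[y]/p[x]) satisfies detailed balance w.r.t. p,
    since p[x] * min(1, p[y]/p[x]) = min(p[x], p[y]) is symmetric."""
    nbrs = [(x - 1) % K, (x + 1) % K]
    return nbrs, np.array([min(1.0, p[y] / p[x]) for y in nbrs])

def ctmc_sample(T=5000.0):
    """Gillespie simulation: exponential holding time at the total rate,
    then jump to a neighbor with probability proportional to its rate.
    Returns the fraction of time spent in each state (converges to p)."""
    x, t, occupancy = 0, 0.0, np.zeros(K)
    while t < T:
        nbrs, r = rates(x)
        total = r.sum()
        dt = rng.exponential(1.0 / total)
        occupancy[x] += min(dt, T - t)
        t += dt
        x = nbrs[rng.choice(len(nbrs), p=r / total)]
    return occupancy / occupancy.sum()

est = ctmc_sample()
```

Running long enough, `est` approaches `p`; the appeal of the continuous-time view is that many coordinates can, in principle, jump "in parallel" rather than one Metropolis proposal at a time.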
We have a new cool preprint with @brianlee_lck. We developed a sweet Sequential Monte Carlo algorithm for unbiased samples from a tempered distribution p(x0)p(y|x0)^α and applied it to discrete diffusion for text arxiv.org/abs/2502.06079. Huge thanks to Francisco Vargas @msalbergo…
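For readers unfamiliar with the tempered-target setup: here is a minimal annealed Sequential Monte Carlo sketch on a 1-D Gaussian toy problem, targeting p(x)·L(x)^α by annealing the likelihood exponent from 0 to α, with resampling and a Metropolis rejuvenation move. This is generic SMC, not the algorithm of the preprint (which has specific unbiasedness guarantees and a discrete-diffusion application); the prior, likelihood, schedule, and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in: prior p(x) = N(0, 1), likelihood L(x) = N(x; 2, 0.5^2)
alpha = 1.0
def log_prior(x): return -0.5 * x ** 2
def log_lik(x):   return -0.5 * ((x - 2.0) / 0.5) ** 2

N = 5000
x = rng.normal(size=N)                 # particles drawn from the prior
logw = np.zeros(N)
betas = np.linspace(0.0, alpha, 11)    # annealing schedule 0 -> alpha
for b0, b1 in zip(betas[:-1], betas[1:]):
    logw += (b1 - b0) * log_lik(x)     # incremental importance weights
    w = np.exp(logw - logw.max()); w /= w.sum()
    if 1.0 / np.sum(w ** 2) < N / 2:   # resample when effective sample size drops
        x = x[rng.choice(N, size=N, p=w)]
        logw = np.zeros(N)
    # random-walk Metropolis move targeting the current tempered distribution
    prop = x + 0.5 * rng.normal(size=N)
    logacc = (log_prior(prop) + b1 * log_lik(prop)) - (log_prior(x) + b1 * log_lik(x))
    accept = np.log(rng.uniform(size=N)) < logacc
    x = np.where(accept, prop, x)

w = np.exp(logw - logw.max()); w /= w.sum()
post_mean = float(np.sum(w * x))       # analytic posterior mean here is 1.6
```

With α = 1 this is ordinary annealed importance sampling toward the posterior; cranking α up or down sharpens or flattens the likelihood term, which is the knob the preprint exploits for guided discrete diffusion.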