Linqi (Alex) Zhou
@linqi_zhou
Research Scientist @LumaLabsAI. Ph.D. Student at Stanford University (on leave). Prev co-founder @apparatelabs (acq.).
SO excited to finally share my work at Luma! We introduce Inductive Moment Matching, a new generative paradigm that can be trained stably with a single model and single objective from scratch, achieving 1.99 FID on ImageNet-256x256 in 8 steps and 1.98 FID on CIFAR-10 in 2 steps.
Today, we release Inductive Moment Matching (IMM): a new pre-training paradigm breaking the algorithmic ceiling of diffusion models. Higher sample quality. 10x more efficient. Single-stage, single network, stable training. Read more: lumalabs.ai/news/imm
@linqi_zhou and I will be presenting IMM (lumalabs.ai/news/imm) @ ICML on Tuesday 4pm (oral) and 4:30pm-6:00pm (poster). After that, join us at The Lamplighter Public House (6:00pm - 10:00pm) if you want to chat more! lu.ma/7b3nyhvb
IMM full training code is released at github.com/lumalabs/imm. @baaadas and I are presenting the paper (oral) at ICML. If you want to chat, please also join our Happy Hour lu.ma/7b3nyhvb on Tuesday!
Honored to receive the Best Theory Paper Award at ICML 2025 EXAIT: exait-workshop.github.io. Congratulations to the team!
HUGE congrats to @wanqiao_xu -- this paper just got the best theory paper award at ICML 2025 EXAIT (Exploration in AI) -- proposing a new provably efficient exploration algorithm 🛣️ with the right level of abstraction to leverage the strengths of LLMs 💭.
(1/5) 👑 New Discrete Diffusion Model — MDM-Prime Why restrict tokens to just masked or unmasked in masked diffusion models (MDM)? We introduce MDM-Prime, a generalized MDM framework that enables partially unmasked tokens during sampling. ✅ Fine-grained denoising ✅ Better…
Decision-making with LLMs can be studied with RL! Can an agent efficiently solve a task using only text feedback (an OS terminal, a compiler, a person)? How can we understand the difficulty? We propose a new notion of learning complexity for learning from language feedback alone. 🧵👇
#CVPR2025 "Personalized Preference Fine-tuning of Diffusion Models". We extend DPO to align text-to-image diffusion models with individual user preferences. At test time, it generalizes to unseen users from just a few examples — moving toward pluralistic alignment.
Excited to announce that IMM has been accepted as an oral at ICML. I'll be at CVPR as well — if you'd like to chat about research, see you at the @LumaLabsAI open bar event.
"Pre-training as we know it will end, Data is not growing". Limited text data is blocking the path to useful general intelligence. At Luma we are building the mathematical foundations to solve this problem by making video, audio and language multimodal data useful for training.…
Thanks @iScienceLuvr for sharing our latest work. Our method surpasses diffusion and Flow Matching while training stably from scratch. Check out our blog post: lumalabs.ai/news/inductive…
Inductive Moment Matching Luma AI introduces a new class of generative models for one- or few-step sampling with a single-stage training procedure. Surpasses diffusion models on ImageNet-256×256 with 1.99 FID using only 8 inference steps and achieves state-of-the-art 2-step…
This: x.com/LumaLabsAI/sta…
This is the most excited I feel about something that I have worked on since DDIM 👀
As one of the people who popularized the field of diffusion models, I am excited to share something that might be the “beginning of the end” of it. IMM has a single stable training stage, a single objective, and a single network — all are what make diffusion so popular today.
One image is a lot more powerful than I thought. Learned this in the streets of New York today, and here:
Introducing Proteus 0.1, REAL-TIME video generation that brings life to your AI. Proteus can laugh, rap, sing, blink, smile, talk, and more. From a single image! Come meet Proteus on Twitch in real-time. ↓ Sign up for API waitlist: apparate.ai/early-access.h… 1/11
Check out this awesome work by @_Aaditya_Prasad!
Diffusion Policies are powerful and widely used. We made them much faster. Consistency Policy bridges consistency distillation techniques to the robotics domain and enables 10-100x faster policy inference with comparable performance. Accepted at #RSS2024