Xueliang Zhao
@xlzhao_hku
PhD student @HKUniversity | M. Phil & B.S. from @PKU1898
🔥 Meet PromptCoT-Mamba: the first reasoning model with constant-memory inference to beat Transformers on competition-level math & code
⚡ Efficient decoding: no attention, no KV cache
⚡ +16.0% / +7.1% / +16.6% vs. s1.1-7B on AIME 24 / 25 / LiveCodeBench
🚀 Up to 3.66× faster
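For readers wondering how decoding can be constant-memory: a Transformer appends a key/value pair to its cache for every generated token, while a Mamba-style state-space layer folds each token into a fixed-size state. Below is a minimal sketch of that contrast, with toy projections standing in for the real model; it is illustrative only, not PromptCoT-Mamba's code.

```python
# Toy contrast: attention decoding with a growing KV cache vs. a
# recurrent (SSM-style) update with a fixed-size state.
import numpy as np

d = 8  # hidden size (illustrative)

# --- Attention decoding: memory grows with every generated token ---
kv_cache = []  # one (key, value) pair appended per step

def attention_step(x):
    k, v = x, x  # stand-in projections
    kv_cache.append((k, v))
    scores = np.array([k2 @ x for k2, _ in kv_cache])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return sum(w * v2 for w, (_, v2) in zip(weights, kv_cache))

# --- Recurrent decoding: memory is constant, no cache at all ---
A = 0.9 * np.eye(d)   # state transition (toy values)
B = np.eye(d)         # input projection
C = np.eye(d)         # output projection
state = np.zeros(d)   # fixed-size state, regardless of sequence length

def ssm_step(x):
    global state
    state = A @ state + B @ x   # O(d) memory per step
    return C @ state

x = np.random.randn(d)
for _ in range(1000):
    attention_step(x)
    ssm_step(x)
print(len(kv_cache), state.shape)  # 1000 cached entries vs. one (8,) state
```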

We present DreamOn: a simple yet effective method for variable-length generation in diffusion language models. Our approach boosts code infilling performance significantly and even catches up with oracle results.
What happened after Dream 7B? First, Dream-Coder 7B: a fully open diffusion LLM for code, trained exclusively on public data and delivering strong performance. Plus, DreamOn cracks the variable-length generation problem! It enables code infilling that goes beyond a fixed canvas.
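The key idea, as described in these posts, is that the infill region need not have a fixed number of slots: during iterative denoising, a masked slot can resolve to a token, split into more masks, or be removed. A hypothetical sketch of that idea follows; the MASK/EXPAND/DELETE tokens and the toy_predict function are illustrative stand-ins, not DreamOn's actual vocabulary or model.

```python
# Toy variable-length infilling: the masked region between a fixed
# prefix and suffix can grow or shrink as denoising proceeds.
import random

random.seed(0)
MASK, EXPAND, DELETE = "<mask>", "<expand>", "<delete>"
VOCAB = ["foo", "bar", "baz"]

def toy_predict(seq, i):
    """Stand-in for the model's prediction at mask position i."""
    return random.choice(VOCAB + [EXPAND, DELETE])

def denoise_step(seq):
    out = []
    for i, tok in enumerate(seq):
        if tok != MASK:
            out.append(tok)
            continue
        pred = toy_predict(seq, i)
        if pred == EXPAND:
            out.extend([MASK, MASK])   # the canvas grows
        elif pred == DELETE:
            pass                       # the canvas shrinks
        else:
            out.append(pred)           # slot resolved to a real token
    return out

# Infill between a fixed prefix and suffix, starting from a guess of 3 slots.
seq = ["def", "f():"] + [MASK] * 3 + ["return", "x"]
while MASK in seq:
    seq = denoise_step(seq)
print(" ".join(seq))
```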
🚀 Thrilled to announce Dream-Coder 7B — the most powerful open diffusion code LLM to date.
Super excited about the release of our open diffusion language model! Dream 7B has finally achieved the general language model capabilities I've been dreaming of since we started working on discrete diffusion models. Check out our blog post for details: hkunlp.github.io/blog/2025/drea…
🚀Excited to announce Dream 7B (Diffusion reasoning model): the most powerful open diffusion large language model to date.
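For context, a masked discrete-diffusion LM decodes quite differently from an autoregressive one: it starts from a fully masked sequence and fills positions in parallel over a few refinement steps. A minimal sketch of that loop follows; model_logits is a hypothetical stand-in for the network, and the confidence-based unmasking schedule shown is one common choice, not necessarily Dream 7B's.

```python
# Toy masked-diffusion decoding: iteratively unmask the positions the
# model is most confident about until the whole sequence is filled.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, MASK, LENGTH, STEPS = 50, -1, 16, 4

def model_logits(seq):
    """Hypothetical model: logits over VOCAB for every position."""
    return rng.normal(size=(len(seq), VOCAB))

seq = np.full(LENGTH, MASK)
for step in range(STEPS):
    logits = model_logits(seq)
    probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
    conf = probs.max(-1)        # per-position confidence
    preds = probs.argmax(-1)    # per-position best token
    masked = np.where(seq == MASK)[0]
    # Reveal the highest-confidence masked positions this step.
    k = int(np.ceil(len(masked) / (STEPS - step)))
    chosen = masked[np.argsort(-conf[masked])[:k]]
    seq[chosen] = preds[chosen]
print(seq)  # all positions filled after STEPS parallel refinement steps
```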