Niklas Muennighoff
@Muennighoff
Researching AI/LLMs @Stanford @ContextualAI @allen_ai
Last week we released s1 - our simple recipe for sample-efficient reasoning & test-time scaling. We’re releasing 𝐬𝟏.𝟏 trained on the 𝐬𝐚𝐦𝐞 𝟏𝐊 𝐪𝐮𝐞𝐬𝐭𝐢𝐨𝐧𝐬 but performing much better by using r1 instead of Gemini traces. 60% on AIME25 I. Details in 🧵1/9
DeepSeek r1 is exciting, but it misses OpenAI’s test-time scaling plot and needs lots of data. We introduce s1, reproducing o1-preview scaling & performance with just 1K samples & a simple test-time intervention. 📜arxiv.org/abs/2501.19393
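The "simple test-time intervention" in s1 is budget forcing: suppress the model's end-of-thinking delimiter (appending a word like "Wait" instead) until a minimum reasoning budget is spent, and cut thinking off at a maximum budget. A minimal sketch, assuming a hypothetical token-level `generate_step` interface and a `</think>` delimiter (the real s1 code operates on an actual LLM decoder, not this stub):

```python
# Sketch of s1-style budget forcing (not the official implementation).
# `generate_step` is a hypothetical stand-in for one decoding step of an LLM.

END_OF_THINKING = "</think>"  # delimiter the model emits to stop reasoning
WAIT = "Wait"                 # appended to nudge the model to keep reasoning

def budget_force(generate_step, prompt, min_tokens=4, max_tokens=8):
    """Decode reasoning tokens, enforcing a [min_tokens, max_tokens] budget.

    - If the model tries to stop before min_tokens, replace the stop with "Wait".
    - If max_tokens is reached, thinking is cut off regardless.
    """
    tokens = []
    while len(tokens) < max_tokens:
        tok = generate_step(prompt, tokens)
        if tok == END_OF_THINKING:
            if len(tokens) >= min_tokens:
                break          # budget satisfied: let the model stop
            tok = WAIT         # too early: force continued reasoning
        tokens.append(tok)
    return tokens

# Toy usage with a scripted generator that tries to stop after 2 tokens:
script = iter(["a", "b", "</think>", "c", "d", "e", "</think>"])
def stub(prompt, tokens):
    return next(script)

out = budget_force(stub, "question", min_tokens=4, max_tokens=8)
# → ["a", "b", "Wait", "c", "d", "e"]: the early stop became "Wait",
#   and the later stop was honored once min_tokens was reached.
```

Varying `min_tokens` is what yields the test-time scaling curve: a larger minimum thinking budget generally buys more accuracy.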
Scaling Data-Constrained LMs is now also in JMLR: jmlr.org/papers/v26/24-… Looking back at it 2yrs later, repeating & mixing seem standard now, but maybe another powerful lever to scale data-constrained LMs turns out to have been RL - arguably underrated back then!
**Outstanding Main Track Runner-Ups**
- Scaling Data-Constrained Language Models
- Direct Preference Optimization: Your Language Model is Secretly a Reward Model
Congrats to @mattdeitke, Chris, @anikembhavi & team for the Molmo Award!! Fond memories of us hurrying to fix model inference until just before the Sep release😁
Molmo won the Best Paper Honorable Mention award @CVPR! This work was a long journey over 1.5 years, from failing to get strong performance with massive-scale, low-quality data, to focusing on modest-scale, extremely high-quality data! Proud to see what it became. #CVPR2025
Nice work by @ryanmart3n @etash_guha & co! Made me wonder --- if you aim to train the best 7B model where there are much better (but much larger) models available, when does it make sense to do RL over distill+sft?🤔
Announcing OpenThinker3-7B, the new SOTA open-data 7B reasoning model: improving over DeepSeek-R1-Distill-Qwen-7B by 33% on average over code, science, and math evals. We also release our dataset, OpenThoughts3-1.2M, which is the best open reasoning dataset across all data…
Interesting to see which of today's personal assistants that live only in your terminal are best! Follow @Mike_A_Merrill @alexgshaw & team for future updates on Terminal-Bench :)
Many agents (Claude Code, Codex CLI) interact with the terminal to do valuable tasks, but do they currently work well enough to deploy en masse? We’re excited to introduce Terminal-Bench: An evaluation environment and benchmark for AI agents on real-world terminal tasks. Tl;dr…
Very excited to join @KnightHennessy scholars at Stanford🌲 Loved discussing the big goals other scholars are after — from driving Moore’s Law in biotech to preserving culture via 3D imaging. Personally, most excited about AI that can one day help us cure all diseases :)
Meet the 2025 cohort of Knight-Hennessy scholars! These 84 scholars will join a diverse community of changemakers at Stanford to build lifelong friendships, deepen leadership skills, & collaborate with peers to address complex challenges facing the world. knight-hennessy.stanford.edu/news/knight-he…
Reasoning & test-time scaling don't just matter for generating text with LLMs — @RulinShao, @ray_qiaorui & team show how these are key to retrieval quality. ReasonIR is SoTA on reasoning-intensive retrieval across multiple test-time compute budgets!
Meet ReasonIR-8B✨the first retriever specifically trained for reasoning tasks! Our challenging synthetic training data unlocks SOTA scores on reasoning IR and RAG benchmarks. ReasonIR-8B ranks 1st on BRIGHT and outperforms search engine and retriever baselines on MMLU and GPQA🔥
Finetuning on raw DeepSeek R1 reasoning traces makes models overthink. One of our early s1 versions overthought so much that it questioned the purpose of math when asked what 1+1 is😁 Retro-Search by @GXiming & team reduces overthinking + improves performance!
With the rise of R1, search seems out of fashion? We prove the opposite! 😎 Introducing Retro-Search 🌈: an MCTS-inspired search algorithm that RETROspectively revises R1’s reasoning traces to synthesize untaken, new reasoning paths that are better 💡, yet shorter in length ⚡️.
Had a great time giving a talk on s1 at Microsoft GenAI! I enjoy talks most when they're not a monologue but rather a back-and-forth with new ideas that go beyond the paper. This was one of those thanks to an amazing audience with hard questions😅 youtube.com/watch?v=EEkxuq…
Grateful for chatting with @samcharrington about LLM reasoning, test-time scaling & s1!
Today, we're joined by @Muennighoff, a PhD student at @Stanford University, to discuss his paper, “S1: Simple Test-Time Scaling.” We explore the motivations behind S1, as well as how it compares to OpenAI's O1 and DeepSeek's R1 models. We dig into the different approaches to…