AI News by Smol AI
@Smol_AI
we make big news smol
AI Discord overwhelm? We gotchu. Coming to smol talk 🔜 (what are the top AI discords we should add? we have @openai @langchainai @nousresearch @Teknium1 @alignment_lab @latentspacepod )
[21 July 2025] OAI and GDM announce IMO Gold-level results with natural language reasoning, no specialized training or tools, under human time limits news.smol.ai/issues/25-07-2…
[17 July 2025] ChatGPT Agent news.smol.ai/issues/25-07-1…
ChatGPT can now do work for you using its own computer. Introducing ChatGPT agent—a unified agentic system combining Operator’s action-taking remote browser, deep research’s web synthesis, and ChatGPT’s conversational strengths.
[11 July 2025] Kimi K2 - SOTA Open MoE proves that Muon can scale to 15T tokens/1T params news.smol.ai/issues/25-07-1…
Holy shit. Kimi K2 was pre-trained on 15.5T tokens using MuonClip with zero training spikes. Muon has officially scaled to the 1-trillion-parameter LLM level. Many doubted it could scale, but here we are. So proud of the Muon team: @kellerjordan0, @bozavlado, @YouJiacheng,…
[10 Jul 2025] Grok 4: @xAI succeeds in going from 0 to new SOTA LLM in 2 years news.smol.ai/issues/25-07-1…
[8 July 2025] SmolLM3: the SOTA 3B reasoning open source LLM news.smol.ai/issues/25-07-0…
Super excited to share SmolLM3, a new strong 3B model. SmolLM3 is fully open; we share the recipe, the dataset, the training codebase, and much more! > Trained on 11T tokens on 384 H100s for 220k GPU hours > Supports long context up to 128k thanks to NoPE and intra-document masking >…
If you want any AI newsletter it's probably this one: news.smol.ai @Smol_AI
[26 Jun 2025] @OpenAI releases Deep Research API (o3/o4-mini) news.smol.ai/issues/25-06-2…
AINews (@Smol_AI to subscribe) today summarizes the buzz about "context engineering". Credit to @dexhorthy for coining this very useful term. I've been talking to voice AI developers a lot over the past few months about the need to do _this thing_ for a while: compress,…
[25 Jun 2025] Context Engineering: Much More than Prompts news.smol.ai/issues/25-06-2…
Someday @HoegLaw may comment on this. If he does, it'll add a lot of understanding to the ruling & its implications.
[24 Jun 2025] Bartz v. Anthropic PBC — "Training use is Fair Use" news.smol.ai/issues/25-06-2…
📰 @Smol_AI immediately recognized the significance: Their AINews newsletter that same day framed the Cognition vs Anthropic exchange as THE central debate in agent architecture. This wasn't just another technical disagreement. This was the future being decided.
[18 Jun 2025] Zuck goes Superintelligence Founder Mode: $100M bonuses + $100M+ salaries + NFDG Buyout? news.smol.ai/issues/25-06-1…
Lots of nice submissions for the multiagent showdown! This one as well: youtube.com/watch?v=WKVkNZ…
Two fresh takes: @cognition_labs’ “Don’t Build Multi-Agents” vs @AnthropicAI’s “Building Multi-Agents.” I discussed both in my latest blog, when and where these architectures work, plus a response to @Smol_AI’s developer challenge. Dive in 👉 agenticspace.dev/multi-agent-or… Plus, subscribe to…