Flâneur
@GROEMODEL
Complex Systems Researcher/General Real-time Evolutionary Model. (We have not sent you except as a Mercy to the worlds) Qur'an(21:107)
One of the enlightening moments that opened a deep reflection for me… a Snapchat celebrity, at a café in the Alps with a group of his friends, wrote in a moment of honesty a phrase that shook me: "deadly emptiness"… because it laid bare the disappointment that comes when we confuse pleasure with happiness. Pleasure is something the Lord, glorified and exalted, has tied to external sources in a measured and temporary way…

what are large language models actually doing? i read the 2025 textbook "Foundations of Large Language Models" by tong xiao and jingbo zhu and for the first time, i truly understood how they work. here’s everything you need to know about llms in 3 minutes↓
ChatGPT folks, try GPT-4o now, it's way smarter than o3 and o3-pro 🤯🙃 most likely they activated GPT-5 in it...
The in-context learner of the "beautiful @GoogleResearch paper" is a meta learner like @HochreiterSepp's 2001 meta LSTM [1] which learned by gradient descent (GD) a learning algorithm that outperformed GD - no test time weight changes! Since 1992, GD can learn learning algorithms…
Beautiful @GoogleResearch paper. LLMs can learn in context from examples in the prompt, can pick up new patterns while answering, yet their stored weights never change. That behavior looks impossible if learning always means gradient descent. The mechanisms through which this…
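The connection between in-context learning and gradient descent can be made concrete with a toy sketch (my own illustration, not taken from the paper being discussed): for linear regression, one explicit gradient-descent step from zero weights on the in-context examples produces exactly the same prediction as a single linear-attention pass over those examples — the "learning" happens in the forward pass, with no weight update. This mirrors the construction in von Oswald et al.'s "Transformers learn in-context by gradient descent"; the dimensions and learning rate here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 32                       # feature dim, number of in-context examples
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))        # in-context inputs
y = X @ w_true                     # in-context targets
x_q = rng.normal(size=d)           # query the model must answer

eta = 0.1                          # learning rate of the implicit GD step

# (a) One explicit gradient-descent step on (1/2)||Xw - y||^2, starting at w=0.
# The gradient at w=0 is -X.T @ y, so the updated weights are:
w1 = eta * X.T @ y
pred_gd = w1 @ x_q

# (b) Linear attention over the context: keys = X, values = y, query = x_q,
# raw dot-product scores, no softmax — and the model's weights never change.
scores = X @ x_q
pred_attn = eta * scores @ y

print(np.allclose(pred_gd, pred_attn))  # True: same prediction, two descriptions
```

The identity is just associativity, `(eta * X.T @ y) @ x_q == eta * (X @ x_q) @ y`, but it shows why fixed-weight attention can look like it is "learning" from the prompt.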
In my latest (and last!) column for Science’s Expert Voices series, I write about the reasons behind AI chatbots’ “deceptive” behaviors (and why Claude threatened a fictional CEO with blackmail). science.org/doi/10.1126/sc…
Congrats to the GDM team on their IMO result! I think their parallel success highlights how fast AI progress is. Their approach was a bit different than ours, but I think that shows there are many research directions for further progress. Some thoughts on our model and results 🧵
A thought, which caused me to slightly update thoughtforms.life/are-we-too-par… (which I wrote because people often ask me whether we can find, "upwards", goal-directed systems we are part of in the same way I talk about finding collective intelligence in components of which we are made).…
there’s no “secret sauce” in software or research anymore. just speed. whoever ships faster, learns faster, wins. the only real edge is compounding attempts… engineers who’ve tried, failed, & tried again. experience is the only multiplier left & even that gap closes quickly to…
We achieved gold medal-level performance 🥇on the 2025 International Mathematical Olympiad with a general-purpose reasoning LLM! Our model solved world-class math problems—at the level of top human contestants. A major milestone for AI and mathematics.
1/N I’m excited to share that our latest @OpenAI experimental reasoning LLM has achieved a longstanding grand challenge in AI: gold medal-level performance on the world’s most prestigious math competition—the International Math Olympiad (IMO).
When Sam Altman writes that a language model has won a gold medal at the International Mathematical Olympiad, he’s not celebrating a technical milestone. He’s marking a shift in cognitive territory. He’s saying, without spelling it out: “We can now replicate pure symbolic…
It’s hard to overstate the significance of this. It may end up looking like a “moon‑landing moment” for AI. Just to spell it out as clearly as possible: a next-word prediction machine (because that's really what it is here, no tools no nothing) just produced genuinely creative…
That's impressive! Pure textual reasoning really showcases the power of language. Excited to see where this leads!
The OpenAI math model is not using any tools like Python or theorem provers. Only pure textual reasoning.
Watching the model solve these IMO problems and achieve gold-level performance was magical. A few thoughts 🧵
we achieved gold medal level performance on the 2025 IMO competition with a general-purpose reasoning system! to emphasize, this is an LLM doing math and not a specific formal math system; it is part of our main push towards general intelligence. when we first started openai,…
new blog post "All AI Models Might Be The Same" in which i explain the Platonic Representation Hypothesis, the idea behind universal semantics, and how we might use AI to understand whale speech and decrypt ancient texts
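The hypothesis above is usually tested by measuring how similar two models' representation spaces are, up to rotation. A standard tool for that is linear CKA (centered kernel alignment); here is a minimal sketch on synthetic data — the "model" embeddings are random matrices standing in for real model activations:

```python
import numpy as np

def linear_cka(A, B):
    """Linear CKA between two representation matrices of shape (n_samples, dim).
    Invariant to orthogonal transforms and isotropic scaling of either input."""
    A = A - A.mean(axis=0)
    B = B - B.mean(axis=0)
    num = np.linalg.norm(B.T @ A, "fro") ** 2
    den = np.linalg.norm(A.T @ A, "fro") * np.linalg.norm(B.T @ B, "fro")
    return num / den

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 16))               # stand-in for model 1's embeddings
R, _ = np.linalg.qr(rng.normal(size=(16, 16)))
Y = X @ R                                     # model 2: same geometry, rotated basis
Z = rng.normal(size=(100, 16))               # an unrelated embedding space

print(linear_cka(X, Y))  # 1.0 — identical geometry despite different coordinates
print(linear_cka(X, Z))  # much smaller — unrelated spaces
```

High CKA between independently trained models on the same inputs is the kind of evidence the "all models converge" argument leans on; the rotation-invariance is what lets it see past each model's arbitrary choice of basis.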