Asher Trockman
@ashertrockman
Research scientist at Google | Prev: CS PhD student at CMU
Tough look for OpenAI. They've pissed off the international math community by jumping the gun; meanwhile @GoogleDeepMind has an officially confirmed result that will be available commercially months earlier
Our IMO gold model is not just an "experimental reasoning" model. It is way more general-purpose than anyone would have expected. This general Deep Think model is going to be shipped, so stay tuned! 🔥
[CL] Learning without training: The implicit dynamics of in-context learning B Dherin, M Munn, H Mazzawi, M Wunder... [Google Research] (2025) arxiv.org/abs/2507.16003
Surprised to see some of my earliest AI research on the timeline again after ten years (linear regression)
Welcome back TouchScale
all the exits in ai have been acquihires🧐 one top ai researcher is worth more than an entire non-ai company with PMF revenue? 😱
windsurf --> Google
character.ai --> Google
deepmind --> Google
scale.ai --> Meta
inflection --> Microsoft
adept…
all the exits in ai have been acquihires🧐
scale.ai --> Meta
character.ai --> Google
deepmind --> Google
inflection --> Microsoft
adept --> Amazon
mosaic --> Databricks
Google DeepMind followed IMO rules to earn gold, unlike OpenAI
Incredible how little people are talking about this when it is essentially the same result as OpenAI (literally the same score), just reported a couple days later, presumably out of deference to the IMO and its participants. OpenAI burned a lot of capital in the math community…
Official results are in - Gemini achieved gold-medal level in the International Mathematical Olympiad! 🏆 An advanced version was able to solve 5 out of 6 problems. Incredible progress - huge congrats to @lmthang and the team! deepmind.google/discover/blog/…
And this is why I trust DeepMind more than OpenAI. You guys care about following proper procedures and being respectful rather than trying to build up hype every chance you get. Congrats :)
IMO Gold-Medal🥇 performance for Gemini. Very happy to have contributed to this effort :)
An advanced version of Gemini with Deep Think has officially achieved gold medal-level performance at the International Mathematical Olympiad. 🥇 It solved 5️⃣ out of 6️⃣ exceptionally difficult problems, involving algebra, combinatorics, geometry and number theory. Here’s how 🧵
We have achieved gold medal performance at the International Mathematical Olympiad 🥇 🥳 This is the first general-purpose system to do so through official participation and grading, and I'm thrilled to have contributed a little to this milestone in mathematical reasoning 🌈🫶
You see? OpenAI ignored the IMO request. Shame. No class. Straight up disrespect. Google DeepMind acted with integrity, aligned with humanity. TRVTHNUKE
Great excuse to share something I really love: 1-Lipschitz nets. They give clean theory, certs for robustness, the right loss for W-GANs, even nicer grads for explainability!! Yet they're still niche. Here's a speed-run through some of my favorite papers in the field. 🧵👇
optimization theorem: "assume a lipschitz constant L..." the lipschitz constant:
Congratulations to Google DeepMind for being the FIRST AI lab ever to win IMO gold! Stay winning.
Laker and I are presenting this work in an hour at ICML poster E-2103. It’s on a theoretical framework and language (modula) for optimizers that are fast (like Shampoo) and scalable (like muP). You can think of modula as Muon extended to general layer types and network topologies
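For context on the Muon connection: the heart of Muon is (approximately) orthogonalizing each layer's gradient matrix before the update. A hedged sketch using the plain cubic Newton-Schulz iteration — Muon itself uses tuned quintic coefficients, so treat this as illustrative, not the modula implementation:

```python
import numpy as np

def newton_schulz_orthogonalize(G, steps=10):
    """Approximately map G to its orthogonal polar factor.
    The cubic iteration X <- 1.5*X - 0.5*(X X^T X) pushes every singular
    value of X toward 1 while leaving the singular vectors unchanged."""
    X = G / (np.linalg.norm(G) + 1e-7)  # Frobenius-normalize so all sigma <= 1
    for _ in range(steps):
        X = 1.5 * X - 0.5 * (X @ X.T @ X)
    return X
```

The orthogonalized matrix replaces the raw gradient in the weight update, so every direction in the layer's input/output space gets a comparably sized step.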
1/ Today we announce Pleiades, a series of epigenetic foundation models (90M→7B params) trained on 1.9T tokens of human methylation & genomic data. Pleiades accurately models epigenetics for genomic track prediction, generation & neurodegenerative disease detection from cfDNA,…
Huge congratulations to Vaishnavh, Chen and Charles on the outstanding paper award 🎉 We will be presenting our #ICML2025 work on creativity in the Oral 3A Reasoning session (West Exhibition Hall C) 10 - 11 am PT. Or please stop by our poster right after @ East Exhibition…
📢 New paper on creativity & multi-token prediction! We design minimal open-ended tasks to argue: → LLMs are limited in creativity since they learn to predict the next token → creativity can be improved via multi-token learning & injecting noise ("seed-conditioning" 🌱) 1/ 🧵
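As I read the summary above, "seed-conditioning" moves randomness from the output sampling step into the input: prefix the prompt with random seed tokens so the model can commit to a diverse plan up front. A toy sketch of that interface — all names are hypothetical, not from the paper's code:

```python
import random

def seed_condition(prompt_tokens, num_seed_tokens=4, seed_vocab_size=32, rng=None):
    """Prefix the prompt with random seed tokens; during training the model
    learns to tie output diversity to the seed prefix instead of relying on
    temperature noise at the output layer."""
    rng = rng or random.Random()
    prefix = [f"<seed:{rng.randrange(seed_vocab_size)}>" for _ in range(num_seed_tokens)]
    return prefix + list(prompt_tokens)
```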
🎉 Excited to share that our paper "Pretrained Hybrids with MAD Skills" was accepted to @COLM_conf 2025! We introduce Manticore - a framework for automatically creating hybrid LMs from pretrained models without training from scratch. 🧵[1/n]
For those at ICML, Audrey will be presenting this paper at the 4:30 poster session this afternoon! West Exhibition Hall B2-B3 W-1009
Is Best-of-N really the best we can do for language model inference? New algo & paper: 🚨InferenceTimePessimism🚨 Led by the amazing Audrey Huang (@auddery) with Adam Block, Qinghua Liu, Nan Jiang (@nanjiang_cs), and Akshay Krishnamurthy. Appearing at ICML '25. 1/11
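For reference, the baseline the paper responds to: Best-of-N draws N candidates and keeps the one a reward model scores highest, which tends to over-optimize a noisy reward as N grows. A minimal sketch of the baseline only — InferenceTimePessimism itself is not reproduced here:

```python
import random

def best_of_n(sample, reward, n):
    """Baseline Best-of-N: draw n candidates, return the highest-reward one.
    With a noisy/learned reward, large n invites reward hacking."""
    candidates = [sample() for _ in range(n)]
    return max(candidates, key=reward)

# Toy example: candidates are numbers; the reward prefers values near 0.7.
rng = random.Random(0)
best = best_of_n(rng.random, lambda y: -abs(y - 0.7), n=16)
```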
H-Nets are the future.
H-Net introduces several technical components, including a similarity-score routing module and EMA-based smoothing module, to allow learning discrete chunk boundaries stably. And because it’s fully end-to-end, H-Net can be *recursively iterated* to more stages of hierarchy! 3/
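A toy illustration of the routing idea as described in the tweet: score a boundary wherever adjacent hidden states are dissimilar (cosine similarity), smooth the scores with an EMA, and threshold. This is a guess at the shape of the mechanism, not H-Net's actual module:

```python
import numpy as np

def chunk_boundaries(h, threshold=0.5, ema_alpha=0.7):
    """h: (seq_len, dim) hidden states. Returns a boolean mask of length
    seq_len - 1 marking positions where a new chunk likely begins."""
    hn = h / (np.linalg.norm(h, axis=1, keepdims=True) + 1e-8)
    sim = (hn[:-1] * hn[1:]).sum(axis=1)   # cosine sim of adjacent states
    score = 0.5 * (1.0 - sim)              # in [0, 1]; high = dissimilar
    smoothed = np.empty_like(score)
    ema = score[0]
    for i, s in enumerate(score):
        ema = ema_alpha * ema + (1.0 - ema_alpha) * s
        smoothed[i] = ema
    return smoothed > threshold
```

In the real model the boundary decision is learned end-to-end and the smoothing stabilizes training; here the EMA just keeps a single noisy dissimilarity spike from toggling boundaries on and off.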