Chen Sun 🤖🧠🇨🇦
@ChenSun92
Research Scientist @ Google DeepMind | Building memory & open-endedness for personalized AI | ex-neuroscientist | ex-IMO Team Canada | Views are mine alone, not GDM's.
Our team at @GoogleDeepMind is looking to hire a talented new Research Scientist! Our group (under @edchi) aims to push the frontier of AI-human interactions by personalizing LLMs and deeply understanding the open-ended nature of user intentions. Beneath this lies…

Having competed in the IMO, I find the generalist training this work provides to be its most incredible aspect. 🧙‍♂️ No more rabbit-hole specialization like we did in high school, thinking specifically about olympiad problem solving 🤔, eating and sleeping olympiad problems…
1/N I’m excited to share that our latest @OpenAI experimental reasoning LLM has achieved a longstanding grand challenge in AI: gold medal-level performance on the world’s most prestigious math competition—the International Math Olympiad (IMO).
The most spectacular plot in the Darwin Gödel Machine paper is the one below, with the seeds of perpetual motion, escaping the clutches of mediocrity. How did they do it? Among the key ingredients was the authors' extremely lucid insight that self-improvement is, itself, a coding…
**When AIs Start Rewriting Themselves**
Darwin Gödel Machine: Open-Ended Evolution of Self-Improving Agents
The Darwin Gödel Machine can:
1. Read and modify its own code
2. Evaluate if the change improves performance
3. Open-endedly explore the solution space
🧵👇
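For readers skimming the thread, here is a minimal, heavily simplified sketch of the loop those three points describe. The helper names (propose_self_modification, evaluate_on_benchmark) are placeholders rather than the paper's API, and the stubs are toys so the loop actually runs:

```python
import random

# Toy stand-ins: the real system prompts a foundation model to edit the agent's own
# source code and scores the result on coding benchmarks. These stubs exist only to
# make the loop runnable for illustration.
def propose_self_modification(agent_code: str) -> str:
    return agent_code + f"\n# tweak {random.randint(0, 9999)}"

def evaluate_on_benchmark(agent_code: str) -> float:
    return random.random()  # placeholder score in [0, 1]

def darwin_godel_machine(initial_agent: str, iterations: int = 20):
    # Archive of (agent_code, score) pairs. Keeping the whole archive, rather than
    # only the single best agent, preserves stepping stones -- the open-ended part.
    archive = [(initial_agent, evaluate_on_benchmark(initial_agent))]
    for _ in range(iterations):
        parent_code, _ = random.choice(archive)              # pick a parent from the archive
        child_code = propose_self_modification(parent_code)  # 1. agent rewrites its own code
        score = evaluate_on_benchmark(child_code)            # 2. empirically check the change
        archive.append((child_code, score))                  # 3. keep variants to explore openly
    return archive

if __name__ == "__main__":
    best = max(darwin_godel_machine("def solve(task): ..."), key=lambda pair: pair[1])
    print(f"best score so far: {best[1]:.2f}")
```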
Can’t recommend enough applying to @robertarail’s team and making a yuuuge impact in AI open-endedness!
I’m building a new team at @GoogleDeepMind to work on Open-Ended Discovery! We’re looking for strong Research Scientists and Research Engineers to help us push the frontier of autonomously discovering novel artifacts such as new knowledge, capabilities, or algorithms, in an…
Congrats to our very own @GoogleDeepMind and @lmthang for getting the first OFFICIALLY VERIFIED AI Gold at the IMO. As a former IMO student, it was also heartening to see Google respecting the brilliant students at the IMO, following the official rules, and deferring the…
Very excited to share that an advanced version of Gemini Deep Think is the first to have achieved gold-medal level at the International Mathematical Olympiad 🏆, solving five out of six problems perfectly, as verified by the IMO organizers! It’s been a wild run to lead this…
Are AI scientists already better than human researchers? We recruited 43 PhD students to spend 3 months executing research ideas proposed by an LLM agent vs human experts. Main finding: LLM ideas result in worse projects than human ideas.
Hello friends! Uploading the video of my invited talk 👇: Transforming long-horizon experience into wisdom in Brains and AIs. Talks through the day include Jurgen Schmidhuber, Jay McClelland [5:56], mine [1:03:08], Ivana Kajic [1:48:20], Naomi Saphra [2:12:16], Michael Lepori…
🚀Introducing “StochasTok: Improving Fine-Grained Subword Understanding in LLMs”!🚀 LLMs are incredible but still struggle disproportionately with subword tasks, e.g., for character counts, wordplay, multi-digit numbers, fixing typos… Enter StochasTok, led by @anyaasims! [1/]
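The announcement is truncated before it explains the mechanism, so the following is only a guess at the flavor of the idea: occasionally re-split a token into smaller, still-valid sub-tokens during training, so the model actually sees the characters and subwords hiding inside whole-word tokens. All names here (stochastic_retokenize, the toy vocabulary) are illustrative, not the paper's API:

```python
import random

def stochastic_retokenize(token_ids, id_to_str, str_to_id, p_split=0.1):
    """Randomly re-split some tokens into pairs of in-vocabulary sub-tokens."""
    out = []
    for tid in token_ids:
        s = id_to_str[tid]
        # Split points where both halves are themselves valid vocabulary tokens.
        splits = [i for i in range(1, len(s)) if s[:i] in str_to_id and s[i:] in str_to_id]
        if splits and random.random() < p_split:
            i = random.choice(splits)
            out.extend([str_to_id[s[:i]], str_to_id[s[i:]]])
        else:
            out.append(tid)
    return out

# Tiny usage example with a made-up vocabulary.
vocab = ["count", "cou", "nt", "c", "ount"]
str_to_id = {s: i for i, s in enumerate(vocab)}
id_to_str = {i: s for s, i in str_to_id.items()}
print(stochastic_retokenize([str_to_id["count"]], id_to_str, str_to_id, p_split=1.0))
```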
It’s been a while since I have read a neuroscience manuscript in depth, but this one published in Nature was a gem 💎 and it was interesting to compare how the brain learns vs. deep learning. 👇 The setting here is systems consolidation: the brain’s mechanism to turn…
Sharing a new paper from the lab. This paper, led by Sangyoon Ko, represents a merging of two longstanding research themes in the lab: adult neurogenesis and systems consolidation. rdcu.be/el18q A short thread follows for those interested.
What an enormous privilege to give the opening lecture at the OxML summer school this morning. Never have I had such a thought-provoking set of audience questions! Here's to the automation of innovation towards human flourishing alongside the next generation of researchers.
📣 We’re excited to kick off the course today with a fantastic line-up of speakers:
Edward Hughes (Google DeepMind) – AI Squared: Towards AI Capable of AI Research
Karo Moilanen (Moonsong Labs) – Agent Guardrails and Proof-of-Agenthood Topologies
Peter Gostev (Moonpig) – …
Indeed, open-endedness is coming! Thanks @RMBattleday for the opportunity to talk :)
Excellent presentation this morning by @ChenSun92 at the conference. Open-ended capabilities are coming!
Me and my band in 2030, as foretold by veo3. #sound #Veo3 A 🧵
"AlphaEvolve was chosen over a deep reinforcement learning approach because its code solution not only leads to better performance, but also offers clear advantages in interpretability, debuggability, predictability, and ease of deployment - essential qualities for a…
AlphaEvolve is deeply disturbing for RL diehards like yours truly.
Maybe midtrain + good search is all you need for AI for scientific innovation.
And what an alpha move to keep it secret for a year.
Congrats, big G.