Andrew Lampinen
@AndrewLampinen
Interested in cognition and artificial intelligence. Research Scientist @DeepMind. Previously cognitive science @StanfordPsych. Tweets are mine.
How do language models generalize from information they learn in-context vs. via finetuning? We show that in-context learning can generalize more flexibly, illustrating key differences in the inductive biases of these modes of learning — and ways to improve finetuning. Thread: 1/

In a new paper, we examine recent claims that AI systems have been observed ‘scheming’, or making strategic attempts to mislead humans. We argue that to test these claims properly, more rigorous methods are needed.
We are hiring on the Veo team!📽️ Some people asked me about this at #ICML2025. If that was you, I probably told you to check deepmind.google/careers/ regularly. 👀It's just been updated: Europe (London, Zurich) job-boards.greenhouse.io/deepmind/jobs/… US (Mountain View) job-boards.greenhouse.io/deepmind/jobs/…
Want to be part of a team redefining SOTA for generative video models? Excited about building models that can reach billions of users? The Veo team is hiring! We are looking for amazing researchers and engineers, in North America and Europe. Details below:
Correct progression seems to be: formal systems + symbol manipulation -> learn to use human-defined formal systems as tools -> tools are learned/created by the learning system -> don't need any explicit tools anymore. Hopeful that we'll eventually make our way up this ladder for most domains
Quick thread on the recent IMO results and the relationship between symbol manipulation, reasoning, and intelligence in machines and humans:
Do you have a PhD (or equivalent), or will you have one in the coming months (i.e. 2-3 months away from graduating)? Do you want to help build open-ended agents that help humans do human things better, rather than replace them? We're hiring 1-2 Research Scientists! Check the 🧵👇
Does vision training change how language is represented and used in meaningful ways?🤔 The answer is a nuanced yes! Comparing VLM-LM minimal pairs, we find that while the taxonomic organization of the lexicon is similar, VLMs are better at _deploying_ this knowledge. [1/9]
Official results are in - Gemini achieved gold-medal level in the International Mathematical Olympiad! 🏆 An advanced version was able to solve 5 out of 6 problems. Incredible progress - huge congrats to @lmthang and the team! deepmind.google/discover/blog/…
🚨 According to a friend, the IMO asked AI companies not to steal the spotlight from kids and to wait a week after the closing ceremony to announce results. OpenAI announced the results BEFORE the closing ceremony. According to a Coordinator on Problem 6, the one problem OpenAI…
Excited to share new work @icmlconf by Loek van Rossem exploring the development of computational algorithms in recurrent neural networks. Hear it live tomorrow, Oral 1D, Tues 14 Jul West Exhibition Hall C: icml.cc/virtual/2025/p… Paper: openreview.net/forum?id=3go0l… (1/11)