Katherine Hermann
@khermann_
Research Scientist @GoogleDeepMind | Past: PhD from @Stanford
How do language models generalize from information they learn in-context vs. via finetuning? We show that in-context learning can generalize more flexibly, illustrating key differences in the inductive biases of these modes of learning — and ways to improve finetuning. Thread: 1/
New (short) paper investigating how the in-context inductive biases of vision-language models — the way that they generalize concepts learned in context — depend on the modality and phrasing! 1/4
Check out Thomas's cool exploration of how features emerge over layers and over training in a vision model, and how they contribute to the model's outputs!
🎭Recent work shows that models’ inductive biases for 'simpler' features may lead to shortcut learning. What do 'simple' vs 'complex' features look like? What roles do they play in generalization? Our new paper explores these questions. arxiv.org/pdf/2407.06076 #Neurips2024
🚀 New Open-Source Release! PyTorchTNN 🚀 A PyTorch package for building biologically-plausible temporal neural networks (TNNs)—unrolling neural network computation layer-by-layer through time, inspired by cortical processing. PyTorchTNN naturally integrates into the…
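The layer-by-layer unrolling idea can be sketched in a few lines of NumPy. This is a hypothetical toy illustrating the concept, not the PyTorchTNN API: each layer at time step t consumes the activation its predecessor produced at time t-1, so computation pipelines through depth over time. All names and dimensions here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "temporal" network: 3 layers, 4-dim activations, 6 time steps.
n_layers, dim, T = 3, 4, 6
W = [rng.normal(size=(dim, dim)) * 0.1 for _ in range(n_layers)]
acts = np.zeros((n_layers + 1, dim))   # acts[0] holds the current input

outputs = []
for t in range(T):
    acts[0] = rng.normal(size=dim)     # new input arrives each step
    new_acts = acts.copy()
    for l in range(n_layers):
        # Layer l at time t sees layer l-1's activation from time t-1,
        # so information propagates one layer deeper per step.
        new_acts[l + 1] = np.tanh(W[l] @ acts[l])
    acts = new_acts
    outputs.append(acts[-1].copy())
```

Under this pipelining, the top layer's output at step t reflects the input from n_layers steps earlier, mirroring the staged timing of cortical processing the package is inspired by.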
Our first NeuroAgent! 🐟🧠 Excited to share new work led by the talented @rdkeller, showing how autonomous behavior and whole-brain dynamics emerge naturally from intrinsic curiosity grounded in world models and memory. Some highlights: - Developed a novel intrinsic drive…
1/ I'm excited to share recent results from my first collaboration with the amazing @aran_nayebi and @Leokoz8! We show how autonomous behavior and whole-brain dynamics emerge in embodied agents with intrinsic motivation driven by world models.
Humans can tell the difference between a realistic generated video and an unrealistic one – can models? Excited to share TRAJAN: the world’s first point TRAJectory AutoeNcoder for evaluating motion realism in generated and corrupted videos. 🌐 trajan-paper.github.io 🧵
Congratulations, Lukas! 🎉
This past Friday I successfully defended my PhD 🎉🙏🏼 What a journey it was! 4.5 years of many ups and many downs. Can’t believe it’s over. I am still processing… Special thanks to my wonderful committee KR Müller, @martin_hebart, @cpilab, and @scychan_brains!
Train your vision SAE on Monday, then again on Tuesday, and you'll find only about 30% of the learned concepts match. ⚓ We propose Archetypal SAE which anchors concepts in the real data’s convex hull, delivering stable and consistent dictionaries. arxiv.org/pdf/2502.12892…
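The convex-hull anchoring idea can be sketched in a few lines of NumPy. This is a minimal illustration of the constraint, not the paper's implementation, and every name and dimension below is an assumption: instead of learning free dictionary atoms, each atom is parameterized as a convex combination of real data points, so it can never drift outside the data's convex hull across training runs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 points in 8 dimensions.
X = rng.normal(size=(100, 8))          # (n, d)

# Parameterize k = 16 atoms as convex combinations of data points:
# softmax puts each row of A on the probability simplex.
logits = rng.normal(size=(16, 100))    # (k, n) unconstrained parameters
A = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

D = A @ X                              # (k, d) atoms inside conv(X)

# Each atom is a weighted average of data points, so regardless of
# initialization it stays anchored in the convex hull of X.
assert np.allclose(A.sum(axis=1), 1.0)
assert (A >= 0).all()
```

In training, one would optimize the logits (and the sparse codes) rather than the atoms directly, which is what gives the dictionaries their run-to-run stability.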
Had a lot of fun speaking with @avileddie about the practical challenges of scaling (especially in Embodied AI), NeuroAI, what to expect in the future, and advice for students getting into the field. Check it out here! youtube.com/watch?v=ZRo-fL…
1/ 🧵👇 What should count as a good model of intelligence? AI is advancing rapidly, but how do we know if it captures intelligence in a scientifically meaningful way? We propose the *NeuroAI Turing Test*—a benchmark that evaluates models based on both behavior and internal…
Are there fundamental barriers to AI alignment once we develop generally capable AI agents? We mathematically prove the answer is *yes*, and outline key properties for a "safe yet capable" agent. 🧵👇 Paper: arxiv.org/abs/2502.05934
Devastatingly, we have lost a bright light in our field. Felix Hill was not only a deeply insightful thinker -- he was also a generous, thoughtful mentor to many researchers. He majorly changed my life, and I can't express how much I owe to him. Even now, Felix still has so much…
Excited to speak at the Workshop on Spurious Correlation and Shortcut Learning at ICLR 2025!
We are delighted that our proposal for the Workshop on “Spurious Correlation and Shortcut Learning: Foundations and Solutions” has been accepted at @iclr_conf 2025, hosting many brilliant keynote speakers and panelists. Stay tuned: scslworkshop.github.io @SCSLWorkshop 1/
Stop by our #NeurIPS tutorial on Experimental Design & Analysis for AI Researchers! 📊 neurips.cc/virtual/2024/t… Are you an AI researcher interested in comparing models/methods? Then your conclusions rely on well-designed experiments. We'll cover best practices + case studies. 👇
Excited to announce MooG for learning video representations. MooG allows tokens to move “off-the-grid” enabling better representation of scene elements, even as they move across the image plane through time. 📜arxiv.org/abs/2411.05927 🌐moog-paper.github.io
Don't hesitate to check out our previous work: arxiv.org/abs/2310.16228 And I highly recommend checking out this excellent related work by Andrew, @scychan_brains and Katherine: arxiv.org/pdf/2405.05847.
On the latest episode of our podcast, research lead @irinavlh and host @fryrsquared discuss the exciting potential of AI tutors – like our LearnLM Learning Coach on @YouTube – to personalize learning and support teachers. 🧠 Watch the full episode now ↓ Timestamps: 00:06 Intro…
What does it take to build AI systems that meet our expectations and complement our limitations? Our Perspective on thought partners, which engages deeply with computational cognitive science, is now out in @NatureHumBehav! nature.com/articles/s4156…
Building on our work from ✨Med-Gemini✨, we're thrilled to unveil ⚡️⚡️CT Foundation⚡️⚡️, a novel AI endpoint that simplifies CT scan analysis!