Cory Shain
@coryshain
Language in minds, brains, and machines. Linguistics prof @Stanford. He/him.
Congratulations to @GabrielPoesia on receiving his @Stanford PhD today!
BabyLMs' first constructions: new study on usage-based language acquisition in LMs w/ @LAWeissweiler, @coryshain. Simple interventions show that LMs trained on cognitively plausible data acquire diverse constructions (cxns) @babyLMchallenge 🧵
New study from the lab! @jsrozner (w/@LAWeissweiler) shows that human-scale LMs still learn surprisingly sophisticated things about English syntax.
These days I think a lot about "Henry's Awful Mistake", a book from my childhood in which the main character keeps trying to kill an ant with a hammer until his whole house is a waterlogged pile of rubble.
What are the organizing dimensions of language processing? We show that voxel responses are organized along 2 main axes: processing difficulty & meaning abstractness—revealing an interpretable, topographic representational basis for language processing shared across individuals.
I’ve been fascinated lately by the question: what kinds of capabilities might base LLMs lose when they are aligned? i.e. where can alignment make models WORSE? I’ve been looking into this with @ChrisGPotts and here's one piece of the answer: randomness and creativity
5yo stayed home from school today. I'm thinking of writing a horror novella called "The buddy who wasn't really sick."
Want to apply computational tools to the science of human language? But not ready to go into a PhD program? UC Irvine's post-bacc in computational language science bridges the gap. Fall 2025 applications are open!
Excited to announce I'll be starting as an assistant professor at @TTIC_Connect for fall 2026! In the meantime, I'll be graduating and hanging around Ai2 in Seattle🏔️
Accepting the first of two 2025 Troland Research Awards is Evelina Fedorenko of @mitbrainandcog, for groundbreaking contributions and insights into the language network in the human brain. 🧠 #NASaward #NAS162 Watch now: ow.ly/R6gP50VIsih
There are a number of permanent positions available in Glasgow, UK. This includes a professorship for 7T and layer-fMRI work. nature.com/naturecareers/… posted on behalf of @LarsMuckli
🚨 I’m hosting a Student Researcher @GoogleDeepMind! Join us on the Autonomous Assistants team (led by @egrefen ) to explore multi-agent communication—how agents learn to interact, coordinate, and solve tasks together. DM me for details!
Excited to share new work on the language system! Using a large fMRI dataset (n=772) we comprehensively search for language-selective regions across the brain. w/@aaronwriight, @ben_lipkin, and @ev_fedorenko Link to the preprint: biorxiv.org/content/10.110… Thread below!👇🧵
As a child growing up in the former Soviet Union, @ev_fedorenko studied English, French, German, Polish, and Spanish. Today she is working to decipher the internal structure and functions of the brain's language-processing machinery. news.mit.edu/2025/evelina-f…
Wow… such a beautiful paper! Real tour de force
New brain/language study w/ @ev_fedorenko! We applied task-agnostic individualized functional connectomics (iFC) to the entire history of fMRI in the Fedorenko lab, parcellating nearly 1200 brains into networks based on activity fluctuations alone. doi.org/10.1101/2025.0… . 🧵
Super excited about Cory's @coryshain work showing that you can recover the language network via func connectivity methods from ~any fMRI data. A massive effort using all the data ever collected in my lab from neurotypical participants! Go, Cory!