Sheridan Feucht
@sheridan_feucht
PhD student working on LLM interpretability with @davidbau and @byron_c_wallace. Undergrad @Brown_NLP '23. (they/them)
[📄] Are LLMs mindless token-shifters, or do they build meaningful representations of language? We study how LLMs copy text in-context, and physically separate out two types of induction heads: token heads, which copy literal tokens, and concept heads, which copy word meanings.
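The token-level copying behavior described above can be illustrated with a toy sketch. This is not the paper's method, just a minimal, hypothetical definition of the "induction" pattern: on repeated text, a token head attends from the current token back to the token that followed its previous occurrence, and copies that next token forward.

```python
def induction_prediction(tokens):
    """For each position, predict the token that followed the most
    recent earlier occurrence of the current token (None if unseen).
    Toy token strings only; no real model or attention weights involved."""
    preds = []
    for i, tok in enumerate(tokens):
        pred = None
        # Scan backwards for the previous occurrence of `tok`.
        for j in range(i - 1, -1, -1):
            if tokens[j] == tok and j + 1 < len(tokens):
                pred = tokens[j + 1]
                break
        preds.append(pred)
    return preds

# On a repeated phrase, the second pass is predicted exactly:
seq = ["the", "cat", "sat", ".", "the", "cat", "sat"]
print(induction_prediction(seq))
# → [None, None, None, None, 'cat', 'sat', '.']
```

A concept head, by contrast, would copy at the level of word meaning rather than literal token identity, which this token-matching sketch deliberately cannot capture.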
🚨 Registration is live! 🚨 The New England Mechanistic Interpretability (NEMI) Workshop is happening August 22nd 2025 at Northeastern University! A chance for the mech interp community to nerd out on how models really work 🧠🤖 🌐 Info: nemiconf.github.io/summer25/ 📝 Register:…
What our government is doing right now is a huge mistake. My advisor wrote a short blog post describing why:
FRIENDS: American science is being decimated by Congress NOW. Your help is needed to fix this! The current DC plan PERMANENTLY slashes NSF, NIH, all science training. Money isn't redirected—it's gone. Please read+share what's happening thevisible.net/posts/004-stre…
New paper: Language models have “universal” concept representation – but can they capture cultural nuance? 🌏 If someone from Japan asks an LLM what color a pumpkin is, will it correctly say green (as they are in Japan)? Or does cultural nuance require more than just language?