Kanishka Misra 🌊
@kanishkamisra
Research Assistant Professor @ttic_connect! Asst. Prof of Ling at @UTAustin soon. language, concepts, and generalization. also on the site where the sky is blue
News🗞️ I will return to UT Austin as an Assistant Professor of Linguistics this fall, and join its vibrant community of Computational Linguists, NLPers, and Cognitive Scientists!🤘 Excited to develop ideas about linguistic and conceptual generalization! Recruitment details soon

Love to see this! I am always hoping for papers showing that text-only understanding is influenced by being physically grounded (images, videos, interaction). It was a big hope of people years ago, with few positive findings; glad it is still being explored!
Does vision training change how language is represented and used in meaningful ways?🤔 The answer is a nuanced yes! Comparing VLM-LM minimal pairs, we find that while the taxonomic organization of the lexicon is similar, VLMs are better at _deploying_ this knowledge. [1/9]
Quick thread on the recent IMO results and the relationship between symbol manipulation, reasoning, and intelligence in machines and humans:
"Seeing" robins and sparrows may not necessarily make them birdier to LMs! Super excited about this paper -- massive shoutout to all my co-authors, especially @yulu_qin and @dhevarghese for leading the charge!
I am especially excited about our cute little case study showing that, within MLM embeddings, different dative constructions with the same lexical items construe different levels of animacy vs. place-hood on the recipient (DO: animate; PO: place/location)
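A rough sketch of what probing like this can look like, with made-up sentences and bert-base-uncased standing in (not the paper's actual materials or code): compare the recipient's contextual embedding in each frame to an unambiguously place-like use of the same noun.

```python
import torch
from transformers import BertModel, BertTokenizerFast

tok = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased").eval()

def word_embedding(sentence: str, word: str) -> torch.Tensor:
    """Mean last-layer embedding of `word`'s wordpieces in context."""
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]
    piece_ids = tok(word, add_special_tokens=False).input_ids
    ids = enc.input_ids[0].tolist()
    for i in range(len(ids) - len(piece_ids) + 1):
        if ids[i:i + len(piece_ids)] == piece_ids:
            return hidden[i:i + len(piece_ids)].mean(0)
    raise ValueError(f"{word!r} not found in {sentence!r}")

cos = torch.nn.CosineSimilarity(dim=0)
do = word_embedding("The teacher sent the school a letter.", "school")     # DO recipient
po = word_embedding("The teacher sent a letter to the school.", "school")  # PO recipient
place = word_embedding("The letter arrived at the school.", "school")      # clearly a place
# expectation under the finding: the PO recipient looks more place-like
print(cos(do, place).item(), cos(po, place).item())
```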
now that i physically have my green card... it really does feel like it takes a village: so much favor asking, bugging people, getting legal advice, emotional support friends (thanks all!), and of course forms x100. even when i technically "had it easy". all i want to say is...
Our department is recruiting! New tenure-track, open rank faculty position in the Department of Psychology at the University of Michigan (emphasis on human cognition and artificial intelligence). apply.interfolio.com/169170
Honored to get the outstanding position paper award at @icmlconf :) Come attend my talk and poster tomorrow on human-centered considerations for a safer and better future of work. I will be recruiting PhD students at @stonybrooku @sbucompsc this coming fall. Please get in touch.
Very excited for a new #ICML2025 position paper accepted as oral w @mbodhisattwa & @TuhinChakr! 😎 What are the longitudinal harms of AI development? We use economic theories to highlight AI’s intertemporal impacts on livelihoods & its role in deepening labor-market inequality.
I'd highlight the point on generalization: to make a "poor generalization" argument, we need systematic evaluations. A promising protocol is prompting multiple LMs and treating each as an individual in mixed-effects models. arxiv.org/pdf/2502.09589 w/ @tom_yixuan_wang (2/n)
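A back-of-the-envelope sketch of that protocol using statsmodels (model names, scores, and the condition column are all made up): each LM contributes a random intercept, the way individual subjects do in a human study.

```python
import pandas as pd
import statsmodels.formula.api as smf

# long-format data: one row per (LM, item); hypothetical values throughout
df = pd.DataFrame({
    "model": ["gpt2"] * 4 + ["llama"] * 4 + ["olmo"] * 4,
    "short_recipient": [1, 0, 1, 0] * 3,          # the linguistic manipulation
    "score": [0.8, 0.2, 0.9, 0.1, 0.7, 0.3,
              0.6, 0.2, 0.5, 0.4, 0.7, 0.3],      # e.g. preference for DO order
})

# fixed effect of the manipulation; random intercept per LM "individual"
fit = smf.mixedlm("score ~ short_recipient", df, groups=df["model"]).fit()
print(fit.summary())
```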
people often talk about llm-as-judge; but no one talks about llm-as-jury or llm-as-executioner (🙀)
Starting in August, I'll be an Assistant Professor (NLP) at @mbzuai. I'll continue to work on interdisciplinary topics bridging NLP to fundamental linguistic/cogsci questions. I'll have a small team and am looking for one postdoc and many visitors! 👉 kuribayashi4.github.io
My 1st first-author paper has been accepted by @COLM_conf! See u in Montreal 🇨🇦
New preprint w/@_jennhu @kmahowald: Can LLMs introspect about their knowledge of language? Across models and domains, we did not find evidence that LLMs have privileged access to their own predictions. 🧵(1/8)
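One toy version of this kind of consistency check (assumed prompt and items, not the paper's materials): compare which continuation the model itself assigns more probability to against what it says when asked about its own prediction.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def continuation_logprob(prefix: str, cont: str) -> float:
    """Summed log-probability of `cont` given `prefix`."""
    ids = tok(prefix + cont, return_tensors="pt").input_ids
    n_prefix = tok(prefix, return_tensors="pt").input_ids.size(1)
    with torch.no_grad():
        logp = model(ids).logits.log_softmax(-1)
    return sum(logp[0, i - 1, ids[0, i]].item() for i in range(n_prefix, ids.size(1)))

# "direct" measure: the model's own next-word probabilities
direct = continuation_logprob("The keys to the cabinet", " are") > \
         continuation_logprob("The keys to the cabinet", " is")

# "metalinguistic" measure: ask the model about the same prediction
q = 'Question: Which word is more likely to come after "The keys to the cabinet"? Answer:'
meta = continuation_logprob(q, " are") > continuation_logprob(q, " is")

print("consistent" if direct == meta else "inconsistent")
```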
colm sent us the acceptance announcement / colm sent the acceptance announcement to us
LMs learn argument-based preferences for dative constructions (preferring the recipient first when it's shorter), quite consistent with humans. Is this just memorization of the preferences in their training data? New paper w/ @kanishkamisra, @LAWeissweiler, @kmahowald
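A minimal sketch of how such a preference can be measured (GPT-2 and the example sentences are my stand-ins, not the paper's setup): score both orders of the same event and see which one the LM assigns higher probability.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def logprob(sentence: str) -> float:
    """Summed token log-probability of a sentence under the LM."""
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean NLL over the ids.size(1) - 1 predicted tokens
    return -out.loss.item() * (ids.size(1) - 1)

# short recipient, long theme: humans tend to prefer the DO (recipient-first) order
do = "She gave the boy the book that everyone had been raving about."
po = "She gave the book that everyone had been raving about to the boy."
print("prefers DO" if logprob(do) > logprob(po) else "prefers PO")
```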
btw, 1) we proved conditions for a simple version of the Platonic Representation Hypothesis for two-layer linear networks back in 2019 in our @PNASNews paper: A mathematical theory of semantic development in deep neural networks (Fig. 11): pnas.org/doi/10.1073/pn… 2) we also showed evidence…
Can you tell what actions are being mimed in this video? If so, you’re smarter than AI models! Check the last tweet in this thread for answers. In a new paper, we present MIME, which evaluates whether vision language models (VLMs) have a robust understanding of human actions. 🧵
Some personal news ✨ In September, I’m joining @ucl as Associate Professor of Computational Linguistics. I’ll be building a lab, directing the MSc programme, and continuing research at the intersection of language, cognition, and AI. 🧵
Want to request an expedited timeline change from the chronology gods