MINT-NLP@MBZUAI🇦🇪
@mint_nlp_mbzuai
At MINT (MBZUAI Interpretability Team), we explore the inner workings of NLP systems such as large language models (LLMs).
[🔈Exciting Announcement!] We have one paper accepted to #ACL2025 Main Conference and two papers accepted to Findings! 🎉 @aclmeeting Check out our accepted papers here ⬇️ mint-nlp-mbzuai.com/news/acl2025-p…
Our in-depth investigation paper with @mint_nlp_mbzuai and the esteemed Dr. @inuikentaro, "Recall: Library-Like Behaviour in Language Models is Enhanced by Self-Referencing Causal Cycles," has been accepted to #ACL2025 Main Conference! 🎉 Thanks also to my collaborators @tolusophy…
📢 Thrilled to share our latest research on large language models (LLMs)! 🚀🎉 Did you know that language models generally struggle with something as simple as recalling "the line before"? This limitation, known as the reversal curse, highlights how LLMs often fail to predict…
My first-author paper, "Rectifying Belief Space via Unlearning to Harness LLMs' Reasoning," has been accepted to #ACL2025 Findings! 🎉 w/ @MasahiroKaneko_ @inuikentaro
🔈Our team is thrilled to present two papers at #NAACL2025! Come and say hi!✨

Neuroscience suggests humans perceive numbers on a logarithmic scale—small values are perceived with greater resolution than large ones. For example, we tend to count 1 to 10 granularly, but group bigger numbers like "1 million, 1 billion, 1 trillion" into the same "big" category.…
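The log-scale intuition above can be sketched in a few lines (a toy illustration only, not code from the paper; the `perceived_gap` helper is hypothetical):

```python
import math

def perceived_gap(a: float, b: float) -> float:
    """Distance between two numbers on a logarithmic (Weber-Fechner-style) scale."""
    return abs(math.log10(b) - math.log10(a))

# Small numbers feel clearly distinct: 1 vs 2 is a large log-scale gap...
small = perceived_gap(1, 2)                           # ~0.301
# ...while a billion vs a billion-and-one is perceptually indistinguishable.
large = perceived_gap(1_000_000_000, 1_000_000_001)   # ~4.3e-10
```

On this scale the step from 1 to 10 is exactly as wide as the step from 1 billion to 10 billion, which is why very large numbers collapse into a single "big" bucket.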
[🔈Exciting Announcement!] 3 papers got accepted to #NAACL2025 @naaclmeeting 🎉 More details will follow. Great job, team members! Check out the publication list here ⬇️ mint-nlp-mbzuai.com/publications