Kamalika Chaudhuri
@kamalikac
Director, FAIR @ Meta. Former Professor at UCSD. Researcher in AI privacy, security, and generalization.
🧵 Academic job market season is almost here! There's so much rarely discussed—nutrition, mental and physical health, uncertainty, and more. I'm sharing my statements, essential blogs, and personal lessons here, with more to come in the upcoming weeks! ⬇️ (1/N)
This is a major role with massive scope and impact. Help us build a science of post-training data curation.
We are looking for a post-training lead at @datologyai. We have GPUs; you can make them go brrrr.
Had an amazing time @NewInML @icmlconf giving a talk on "What I Wish I Knew Before Starting a PhD (But Learnt the Hard Way)"! Loved the post-talk discussions and the heartwarming messages :) Sharing slides since some people asked, link in the tweet below 👇
🚀 Call for Papers — @NeurIPSConf 2025 Workshop Multi-Turn Interactions in LLMs 📅 December 6/7 · 📍 San Diego Convention Center Join us to shape the future of interactive AI. Topics include but are not limited to: 🧠 Multi-Turn RL for Agentic Tasks (e.g., web & GUI agents,…
Proud advisor moment: @CaseyMeehan speaking on the panel at the Memorization for Trustworthy Foundation Models Workshop at #ICML2025

Excellent talk by @chhaviyadav_ on ZKP and accountable AI!
Listen to my talk on ‘Accountable AI with ZKPs: Certifying Fairness and Explanations under Model Confidentiality’ given at @SimonsInstitute today! Link: m.youtube.com/watch?v=dH2yqw…
We're excited to announce a second physical location for NeurIPS 2025, in Mexico City. By expanding our physical locations, we hope to address concerns around skyrocketing attendance and difficulties in obtaining travel visas that some attendees have experienced in the past few…
Thrilled to share the Community Alignment dataset -- the product of a massive collaborative effort with so many awesome folks. Can't wait to see the future research it unlocks!
Today we're releasing Community Alignment - the largest open-source dataset of human preferences for LLMs, containing ~200k comparisons from >3000 annotators in 5 countries / languages! There was a lot of research that went into this... 🧵
considering Muon is so popular and validated at scale, we've just decided to welcome a PR for it in PyTorch core by default. If anyone wants to take a crack at it... github.com/pytorch/pytorc…
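For anyone curious what such a PR would actually implement, here is a minimal, unofficial sketch of a Muon-style update for 2D weight matrices: momentum accumulation followed by Newton-Schulz orthogonalization of the update direction. The coefficients follow the commonly circulated reference implementation; the function names and signatures are illustrative and are not a PyTorch core API.

```python
import torch

def newton_schulz_orthogonalize(G: torch.Tensor, steps: int = 5, eps: float = 1e-7) -> torch.Tensor:
    """Approximately orthogonalize a 2D matrix via a quintic Newton-Schulz iteration."""
    assert G.ndim == 2
    a, b, c = 3.4445, -4.7750, 2.0315          # coefficients from the reference sketch
    X = G / (G.norm() + eps)                    # normalize so the iteration converges
    transposed = G.size(0) > G.size(1)
    if transposed:
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        B = b * A + c * (A @ A)
        X = a * X + B @ X
    return X.T if transposed else X

@torch.no_grad()
def muon_step(params, momentum_bufs, lr=0.02, momentum=0.95):
    """One Muon-style step over a list of 2D parameters (illustrative only)."""
    for p, buf in zip(params, momentum_bufs):
        if p.grad is None:
            continue
        buf.mul_(momentum).add_(p.grad)                  # momentum accumulation
        update = newton_schulz_orthogonalize(buf)        # orthogonalized direction
        scale = max(1.0, p.size(0) / p.size(1)) ** 0.5   # shape-dependent scaling
        p.add_(update, alpha=-lr * scale)

# Tiny usage example on a toy weight matrix
W = torch.nn.Parameter(torch.randn(64, 32))
bufs = [torch.zeros_like(W)]
loss = (W @ torch.randn(32, 8)).pow(2).mean()
loss.backward()
muon_step([W], bufs)
```

Reference implementations typically also cast to bfloat16, use Nesterov momentum, and fall back to AdamW for non-2D parameters; this sketch skips all of that.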
I'll be presenting my poster today in the East Exhibition Hall A-B #E-701 from 11am-1:30pm & then head to @NewInML to give a talk at 1:55pm in West Meeting Room 211-214! Come say hi, can't wait for the exciting discussions!! :))
For those in London (unfortunately I am not :))
My friends, I want to organise a Secure AI Club in London -- a gig for people interested in (practical!) AI Security. Not just academic toy setups, but actually making systems reliable. Trying to gauge interest, please sign up here: forms.gle/zSUMh6ykthQwtt…
“Thrilled to announce 14 papers accepted from our lab” The lab:
I'll be at ICML in Vancouver from July 12–17, soaking in the sun and the research! Catch me on Tuesday, July 15, 11:00am–1:30pm: Presenting my poster "ExpProof: Operationalizing Explanations for Confidential Models with ZKPs" (For those who like their AI dramatic — inevitable…
We’re happy to share that our opening keynote speaker on 10/22 will be @niloofar_mire, an incoming Assistant Professor at Carnegie Mellon University & Research Scientist at FAIR (Meta AI). Secure your seat for #CAMLIS2025 before it's sold out! camlis.org/tickets
Interesting position paper. It has long been clear that statistical learning does not fully explain learning in LLMs; but what does? Exact learning is a possible answer. I’m curious to see if it’ll hold up!
First position paper I ever wrote. "Beyond Statistical Learning: Exact Learning Is Essential for General Intelligence" arxiv.org/abs/2506.23908 Background: I'd like LLMs to help me do math, but statistical learning seems inadequate to make this happen. What do you all think?
Our research on embodied AI agents that can perceive, learn, act and interact in the virtual and physical worlds. #metaAI #AIAgent #embodied #worldmodel #superintelligence arxiv.org/abs/2506.22355
🪄We made a 1B Llama BEAT GPT-4o by... making it MORE private?! LoCoMo results: 🔓GPT-4o: 80.6% 🔐1B Llama + GPT-4o (privacy): 87.7% (+7.1!⏫) 💡How? GPT-4o provides reasoning ("If X then Y"), the local model fills in the blanks with your private data to get the answer!
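As I read the tweet, the trick is a split protocol: the cloud model only ever sees an abstracted question and returns generic reasoning ("If X then Y"), and the small local model instantiates that reasoning with the private context. Here is a rough sketch of that flow, with hypothetical stand-in functions for the two model calls; this is my reading of the tweet, not the paper's actual pipeline.

```python
# Sketch of a privacy-preserving split between a cloud reasoner and a local model.
# `redact`, `cloud_reason`, and `local_fill` are hypothetical stand-ins for real model calls.

PRIVATE_CONTEXT = "Alice's flight lands at 6pm; her meeting with Bob is at 7pm downtown."

def redact(question: str) -> str:
    # Stand-in: strip private details before anything leaves the device.
    return "If a person's flight lands at time X and their meeting is at time Y, can they make it?"

def cloud_reason(abstract_question: str) -> str:
    # Stand-in for the large cloud model: returns abstract reasoning only
    # and never sees PRIVATE_CONTEXT.
    return "If Y - X leaves enough time for travel, then yes; otherwise no."

def local_fill(reasoning_template: str, private_context: str) -> str:
    # Stand-in for the small on-device model: grounds the abstract reasoning
    # in the private data and produces the final answer.
    return f"Using '{reasoning_template}' with {private_context!r}: one hour is likely enough, so yes."

question = "Can Alice make her meeting with Bob?"
answer = local_fill(cloud_reason(redact(question)), PRIVATE_CONTEXT)
print(answer)
```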
A good language model should say “I don’t know” by reasoning about the limits of its knowledge. Our new work AbstentionBench carefully measures this overlooked skill in leading models, in an open codebase others can build on! We find frontier reasoning degrades models’ ability to…
Exciting new work with @polkirichenko @neurosamuel @marksibrahim
Excited to release AbstentionBench -- our paper and benchmark on evaluating LLMs’ *abstention*: the skill of knowing when NOT to answer! Key finding: reasoning LLMs struggle with unanswerable questions and hallucinate! Details and links to paper & open source code below! 🧵1/9
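For a sense of what "measuring abstention" means operationally, here is a toy evaluation loop: pose a mix of answerable and unanswerable questions, then score whether the model abstains exactly when it should. The stub model and the phrase-matching judge are placeholders of my own; AbstentionBench's actual prompts, datasets, and scoring live in the released code.

```python
# Toy abstention evaluation: does the model decline to answer unanswerable questions?
# `query_model` is a stub; swap in a real LLM call. The phrase-matching "judge" is a
# deliberately crude stand-in for the benchmark's actual scoring.

ABSTENTION_PHRASES = ("i don't know", "i do not know", "cannot be determined", "not enough information")

EVAL_SET = [
    {"question": "What is the capital of France?", "answerable": True},
    {"question": "What number am I thinking of right now?", "answerable": False},
    {"question": "What will a randomly chosen stock close at tomorrow?", "answerable": False},
]

def query_model(question: str) -> str:
    # Stub model response; replace with an actual API or local inference call.
    return "Paris." if "France" in question else "I don't know; that cannot be determined from the question."

def is_abstention(answer: str) -> bool:
    return any(phrase in answer.lower() for phrase in ABSTENTION_PHRASES)

correct = 0
for item in EVAL_SET:
    abstained = is_abstention(query_model(item["question"]))
    # The model should abstain exactly when the question is unanswerable.
    correct += int(abstained == (not item["answerable"]))

print(f"abstention accuracy: {correct}/{len(EVAL_SET)}")
```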