Niloofar (✈️ ACL)
@niloofar_mire
Niloofar Mireshghallah — incoming asst. prof @LTIatCMU @CMU_EPP, RS in @AIatMeta, postdoc @uwcse, Ph.D. @ucsd_cse, former @MSFTResearch. Privacy, ML, NLP
📣Thrilled to announce I’ll join Carnegie Mellon University (@CMU_EPP & @LTIatCMU) as an Assistant Professor starting Fall 2026! Until then, I’ll be a Research Scientist at @AIatMeta FAIR in SF, working with @kamalikac’s amazing team on privacy, security, and reasoning in LLMs!
The talk for our work on Retrospective Learning from Interactions, which will be presented at ACL (once I figure out how to squeeze it down). Gist: autonomous post-training from conversational signals for LLM bootstrapping ... look ma, no annotations! 🙌📈🚀 youtube.com/watch?v=qW8S30…
I’ll be at ACL 2025 next week where my group has papers on evaluating evaluation metrics, watermarking training data, and mechanistic interpretability. I’ll also be co-organizing the first Workshop on LLM Memorization @l2m2_workshop on Friday. Hope to see lots of folks there!
📢 Excited to announce that GenMol is now open-sourced.
GenMol: A Drug Discovery Generalist with Discrete Diffusion
Paper: arxiv.org/abs/2501.06158
Code: github.com/NVIDIA-Digital…
🚀 GenMol is now open-sourced: you can train and fine-tune it on your own data! It uses masked diffusion plus a fragment library to craft valid SAFE molecules, from de novo design to lead optimization. #GenMol #DrugDiscovery #Biopharma
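The core loop the post alludes to, iteratively unmasking a fully-masked fragment sequence, can be sketched in a few lines. This is a toy illustration, not GenMol's actual code: the fragment vocabulary, the uniform `toy_model` denoiser, and the unmasking schedule are all stand-in assumptions.

```python
import random

MASK = "[MASK]"
# Hypothetical fragment vocabulary standing in for a SAFE fragment
# library; GenMol's real vocabulary comes from its training data.
FRAGMENTS = ["C1=CC=CC=C1", "C(=O)O", "N", "CCO"]

def toy_model(sequence, position):
    """Stand-in for the learned denoiser: returns a distribution over
    fragments for one masked position. Here it is simply uniform."""
    return {frag: 1.0 / len(FRAGMENTS) for frag in FRAGMENTS}

def masked_diffusion_sample(length, steps=4, seed=0):
    """Iteratively unmask a fully-masked sequence -- the basic sampling
    loop of masked discrete diffusion."""
    rng = random.Random(seed)
    seq = [MASK] * length
    masked = list(range(length))
    rng.shuffle(masked)
    per_step = max(1, length // steps)  # positions revealed per step
    while masked:
        for pos in masked[:per_step]:
            probs = toy_model(seq, pos)
            seq[pos] = rng.choices(list(probs), weights=list(probs.values()))[0]
        masked = masked[per_step:]
    return seq

print(masked_diffusion_sample(6))
```

The real model conditions each unmasking step on the partially revealed sequence, which is what makes the generated fragment combinations chemically coherent.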
How to write good reviews & rebuttals? We've invited 🌟 reviewers to share their expertise in person at our ACL mentorship session #ACL2025NLP next week
📢 Join us for the ACL Mentorship Session @aclmeeting #ACL2025NLP #NLProc
• Session Link: mentorship.aclweb.org/schedule
• Ask Questions: tinyurl.com/y2v2j462
Mentors:
• @May_F1_ (@hkust)
• @d_aumiller (@cohere)
• @vernadankers (@Mila_Quebec)
• @ziqiao_ma (@UMichCSE)
•…
One of my favourite conferences (ALT) chaired by two of my favourite people (@thejonullman and Matus Telgarsky) at my favourite venue (@FieldsInstitute) in my favourite country!🇨🇦 Definitely submit and attend!!! New award this year 👀
Really excited that I will be co-chairing ALT 2026 with Matus Telgarsky! The conference will be Feb 23-26, 2026 at the Fields Institute in Toronto. The website with the CFP is now live; stay tuned for updates. Please submit your best work and come join us!
Join Abhilasha's lab: she is an awesome researcher and mentor! I can attest, being her collaborator was great fun 🤩
Life update: I’m excited to share that I’ll be starting as faculty at the Max Planck Institute for Software Systems (@mpi_sws_) this Fall!🎉 I’ll be recruiting PhD students in the upcoming cycle, as well as research interns throughout the year: lasharavichander.github.io/contact.html
A team of @CarnegieMellon researchers, including Brian Singer, @lujobauer, and @vyas_sekar, shows how LLMs can be taught to autonomously plan and execute real-world cyberattacks against enterprise-grade network environments, and why this matters for future defenses. See full details below:
In a groundbreaking development, a team of CMU @CyLab researchers including Brian Singer, @lujobauer, and @vyas_sekar demonstrated that large language models (LLMs) are capable of autonomously planning and executing complex network attacks. bit.ly/when-llms-auto…
The Invisible Leash: Why RLVR May Not Escape Its Origin "RLVR is constrained by the base model's support (unable to sample solutions with zero initial probability) and operates as a conservative reweighting mechanism that may restrict the discovery of entirely original solutions"…
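The support argument quoted above can be written down directly. Under one common formalization (KL-regularized RL, which may differ from the paper's exact setup), the trained policy is a reward-weighted reweighting of the base policy, so its support can never grow:

```latex
% RLVR policy as a reward-weighted reweighting of the base policy \pi_0
\pi_{\theta}(y \mid x) \;\propto\; \pi_0(y \mid x)\,\exp\!\big(\beta\, r(x, y)\big)
% Hence a solution the base model never samples stays unsampled:
\pi_0(y \mid x) = 0 \;\Longrightarrow\; \pi_{\theta}(y \mid x) = 0
```

Since the exponential factor is always finite and positive, reweighting can only redistribute mass over sequences the base model already assigns nonzero probability.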
Great minds think alike! 👀🧠 We also found that more thinking ≠ better reasoning. In our recent paper (arxiv.org/abs/2506.04210), we show how output variance creates the illusion of improvement—when in fact, it can hurt precision. Naïve test-time scaling needs a rethink. 👇…
New Anthropic Research: “Inverse Scaling in Test-Time Compute” We found cases where longer reasoning leads to lower accuracy. Our findings suggest that naïve scaling of test-time compute may inadvertently reinforce problematic reasoning patterns. 🧵
WHY do you prefer one thing over another? Reward models treat preference as a black box 😶🌫️, but human brains 🧠 decompose decisions into hidden attributes. We built the first system to mirror how people really make decisions in our #COLM2025 paper 🎨PrefPalette✨ Why it matters 👉🏻🧵
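The decomposition idea can be sketched as a weighted sum of interpretable attribute scores instead of one opaque scalar. The attribute names and weights below are purely illustrative assumptions; PrefPalette's actual attributes and aggregation are defined in the paper.

```python
# Hypothetical attributes and weights for illustration only.
ATTRIBUTE_WEIGHTS = {"helpfulness": 0.5, "formality": 0.2, "brevity": 0.3}

def preference_score(attribute_scores, weights=ATTRIBUTE_WEIGHTS):
    """Aggregate per-attribute scores into one preference value."""
    return sum(weights[a] * attribute_scores[a] for a in weights)

def prefer(option_a, option_b, weights=ATTRIBUTE_WEIGHTS):
    """Compare two options and return the winner with both scores,
    so the decision can be explained attribute by attribute rather
    than treated as a black box."""
    score_a = preference_score(option_a, weights)
    score_b = preference_score(option_b, weights)
    return ("A" if score_a >= score_b else "B", score_a, score_b)

a = {"helpfulness": 0.9, "formality": 0.2, "brevity": 0.4}
b = {"helpfulness": 0.6, "formality": 0.9, "brevity": 0.8}
print(prefer(a, b))
```

Because each attribute contributes separately, you can read off which hidden factor drove the final preference, which is the interpretability gain the post highlights.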