Chirag Agarwal
@_cagarwal
Assistant Professor @UVA; PI of Aikyam Lab; Prev: @Harvard, @Adobe, @BoschGlobal, @thisisUIC; Increasing the sample size of my thoughts
If you are at #NAACL2025, don’t miss our Oral and Poster sessions showcasing three exciting papers from our lab! 🚀 We'll dive into data protection, memorization in LLMs, and the impact of fine-tuning on CoT reasoning.

Congrats @LogmlSchool for running an in-person grad ML summer school at @imperialcollege and opening up research opportunities for students worldwide! 👏 logml.ai Thanks to organizers, @valegiunca, and mentors @justguadaa @_cagarwal @ruthie_johnson @YEktefaie…
🌟Applications open - LOGML 2025🌟 👥Mentor-led projects, expert talks, tutorials, socials, and a networking night ✍️Application form: logml.ai 📅Apply by 6th April 2025 ✉️Questions? [email protected] #MachineLearning #SummerSchool #LOGML #Geometry
@icmlconf is around the corner! Are you presenting any papers or hot takes in Trustworthy ML? Share your work in this thread and we’ll retweet! 🚀
Thank you for summarizing this work, @rohanpaul_ai 🙏
Unimodal explainability tricks people into thinking a multimodal system uses every input. This paper builds three strict tests that force explanations to prove real cross‑modal reasoning. Today most explanation tools look at one data type at a time. That is fine for an…
I’m at #NAACL2025 to present our latest work: "Analyzing Memorization in LLMs through the lens of Model Attribution" - We dive into which parts of the transformer architecture are responsible for memorizing training data 📄 Paper: arxiv.org/abs/2501.05078 🧵👇
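The tweet only names the question the paper asks, so the snippet below is not the paper's method, just a minimal sketch of the general idea of attributing memorization to parts of the architecture: ablate one attention block at a time in a HuggingFace GPT-2 checkpoint (an assumed stand-in model) and watch how the negative log-likelihood of a suspected memorized string changes.

```python
# Toy illustration (NOT the paper's method): attribute memorization to
# individual attention blocks by zeroing each one's output and measuring
# how much harder a suspected memorized string becomes to predict.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sequence_nll(text: str) -> float:
    """Mean per-token negative log-likelihood of `text` under the model."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()

def ablate_attention(layer_idx: int):
    """Zero one block's attention output (the residual path stays intact)."""
    def hook(module, inputs, output):
        if isinstance(output, tuple):
            return (torch.zeros_like(output[0]),) + output[1:]
        return torch.zeros_like(output)
    return model.transformer.h[layer_idx].attn.register_forward_hook(hook)

candidate = "a string suspected to be memorized from the training data"
baseline = sequence_nll(candidate)
for layer in range(model.config.n_layer):
    handle = ablate_attention(layer)
    delta = sequence_nll(candidate) - baseline
    handle.remove()
    print(f"layer {layer:2d}: NLL increase {delta:+.3f}")
```

Layers whose removal inflates the NLL the most are the ones contributing most to reproducing that string; see the paper itself for the actual attribution procedure.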
Google DeepMind India is hiring for a research scientist role in multicultural & multimodal modeling. Strong candidates with proven research experience are encouraged to apply. I shall be at #icassp2025 Hyderabad on Apr 8, happy to meet and chat, pls DM. job-boards.greenhouse.io/deepmind/jobs/…
LLMs may hallucinate non-factual statements when answering questions. So why not ask them to reference facts from the input while answering? It works! 🌟 Highlighted Chain of Thought prompting (HoT) enables LLMs to highlight facts in the input & then write a fact-grounded answer. 1/n
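As a concrete but purely illustrative sketch of what a HoT-style prompt could look like: the <factN> tag format and the `build_hot_prompt` helper below are assumptions for illustration, not the paper's exact recipe.

```python
# Minimal sketch of a Highlighted Chain of Thought (HoT) style prompt.
# The <factN> tag format and the helper are illustrative assumptions,
# not the exact prompt used in the paper.
HOT_TEMPLATE = """You will answer a question about the passage below.
Step 1: Restate the passage, wrapping the facts needed to answer the
question in <fact1>...</fact1>, <fact2>...</fact2> tags.
Step 2: Write the answer, citing those tags so every claim is grounded
in a highlighted fact.

Passage: {passage}
Question: {question}"""

def build_hot_prompt(passage: str, question: str) -> str:
    """Fill the template with a passage/question pair."""
    return HOT_TEMPLATE.format(passage=passage, question=question)

print(build_hot_prompt(
    passage="The Eiffel Tower was completed in 1889 and is 330 m tall.",
    question="When was the Eiffel Tower completed?",
))  # send the printed prompt to any LLM endpoint
```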
The LaCross AI Institute awarded the 2025 Fellowships in AI Research (FAIR) to four members of UVA faculty, including School of Data Science Assistant Professor Chirag Agarwal. To read more about his accomplishment, click here: bit.ly/4gRCcce
Presenting "The Multilingual Mind", a survey of multilingual reasoning in language models for a deep dive into how current language models reason across languages. Our survey comprehensively reviews existing methods that leverage LLMs for multilingual reasoning, outlining…

NOW OPEN: Applications for post-doctoral researchers to enrich UVA’s expertise in climate research. Could this be you!? Positions and more online - environment.virginia.edu/g2c-fellows
Thrilled to announce that our five-month-old lab got three papers accepted at #NAACL2025! 1. Operationalizing Right to Data Protection: shorturl.at/IPfxn 2. Analyzing Memorization in LLMs through the Lens of Model Attribution: shorturl.at/1OBdf 3. Impact of…
Exciting opportunity at the intersection of climate science and XAI to work on groundbreaking research in attributing extreme precipitation events with multimodal models. Check out the details and help spread the word! #ClimateAI #Postdoc #UVA #Hiring Job description:…
Dear Climate and AI community! We are hiring 😀 a postdoc to join @UVAEnvironment at @UVA and work with @_cagarwal and me on using multimodal AI models and explainable AI to attribute extreme precipitation events! Fascinating stuff! Link below. Please RT!…
Understanding the reliability of the reasoning generated by LLMs is one of the key challenges in deploying them in high-stakes applications.
“We found that if you ask the LLM, surprisingly it always says that I'm 100% confident about my reasoning.” @_cagarwal examines the (un)reliability of chain-of-thought reasoning, highlighting issues in faithfulness, uncertainty & hallucination.
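For readers curious what "asking the LLM for its confidence" looks like in practice, here is a minimal sketch of verbalized-confidence elicitation; the prompt wording and the `ask_llm` callable are placeholders, not the setup used in the talk.

```python
# Minimal sketch of eliciting verbalized confidence on a chain-of-thought
# answer. `ask_llm` is a placeholder for any prompt -> text callable.
import re

CONFIDENCE_PROMPT = """{question}

Think step by step, then give your final answer.
On a final new line, write "Confidence: X%" where X is how confident you
are that your reasoning and final answer are correct."""

def elicit_confidence(question, ask_llm):
    """Return the model's response and its self-reported confidence (or None)."""
    response = ask_llm(CONFIDENCE_PROMPT.format(question=question))
    match = re.search(r"Confidence:\s*(\d+(?:\.\d+)?)\s*%", response)
    return response, (float(match.group(1)) if match else None)
```

Comparing these self-reported scores against actual accuracy is what surfaces the overconfidence the quote describes.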
It's happening today!!! We are also hosting a networking event from 5:00 - 5:30 PM. You don't want to miss the opportunity to network with this group and discuss the foundations of AI regulations for the coming years.
Join us at the #RegulatableML workshop at #NeurIPS2024 to learn about AI regulations and how to operationalize them in practice. 🗓️ Date: Dec 15, 2024 (East Meeting Room 13) 🕓 Time: 8:15 am - 5:30 pm 🔗 Details: regulatableml.github.io We have an exciting schedule: ⭐️ Six…