sijia.liu
@sijialiu17
Assistant Professor, CSE@Michigan State University, Affiliated Professor, IBM Research
Seconded! It was a great and rewarding experience collaborating with @pinyuchenTW on this book. I look forward to using it as a textbook and recommended reading in my future courses.
The best way to learn about foundation models is ... to write a book about them! I'm excited for the release of "Introduction to Foundation Models", a book I co-authored with @sijialiu17 and published by @SpringerNature. It covers both basic and advanced topics in modern AI.
Thank you @INNSociety for this great honor. I am deeply grateful to my nominator, students, and collaborators who made this recognition possible. Excited to keep advancing the frontiers of scalable and trustworthy AI! @OptML_MSU
🏆Congratulations to Bo Han, Souvik Kundu, and Sijia Liu for receiving the 2024 #INNS Aharon Katzir Young Investigator Award in recognition of promising research in the field of neural networks! 🔗Learn more: loom.ly/_sbJr0I #AharonKatzir #INNSAwards #neuralnetworks
🎯 ICML 2025 Poster! 📄 “Invariance Makes LLM Unlearning Resilient Even to Unanticipated Downstream Fine-Tuning” 🔗 arxiv.org/abs/2506.01339 🗓️ July 15, 4:30–7:00 pm PT 📍 East Exhibition Hall A-B, #E-1108 🧍♂️ I won’t be there in person — but feel free to drop by and chat with my…
🚨 Excited to attend #ICML2025 and share our latest work (@OptML_MSU) on LLM unlearning -- think of it as AI surgery: removing harmful knowledge while preserving general utility. Catch us at: 🔹 [Paper 1] Tues, July 15 @ 4:30pm PT | E-1108 📄 Invariance Makes LLM Unlearning…
Check out our oral paper at #ICLR25, led by @LiHongkang_jntm and @zyh2022, which shows when and why model editing via task vectors is provably effective, with applications to LLM unlearning.
🔥Our #ICLR2025 Oral paper "When is Task Vector Provably Effective for Model Editing? A Generalization Analysis of Nonlinear Transformers" will be presented on 04/26, 4:18 p.m. – 4:30 p.m., at Garnet 216-218. The poster presentation will be on 04/26, 10:00 a.m. – 12:30 p.m., #341.
🚨 New finding in LLM unlearning: Even with random selection, using just 5% of the forget set can yield a nearly "equivalent" unlearned model--if you're willing to train longer. 💡 This reveals a stronger-than-expected coreset effect, suggesting that unlearning may be "easier"…
Excited to share a surprising coreset effect finding in LLM unlearning from our paper, "LLM Unlearning Reveals a Stronger-Than-Expected Coreset Effect in Current Benchmarks". Feel free to check it out! Thread 🧵
Excited to be an invited speaker at the #ICML2025 Workshop on Machine Unlearning for Generative AI (MUGen, mugenworkshop.github.io)! Looking forward to joining this incredible lineup and diving deep into the latest breakthroughs in unlearning for generative AI. Don't miss out on the…
🚨Exciting @icmlconf workshop alert 🚨 We’re thrilled to announce the #ICML2025 Workshop on Machine Unlearning for Generative AI (MUGen)! ⚡Join us in Vancouver this July to dive into cutting-edge research on unlearning in generative AI—featuring an incredible lineup of…
Honored to receive the prestigious Withrow Rising Scholar Award. Grateful for the unwavering support from my students, advisors, collaborators, nominators, and recommenders. Excited to keep bridging foundational research and real-world impact to advance trustworthy and scalable…
Sijia Liu, Assistant Professor, Computer Science and Engineering. “Prof. Liu is one of the rare few research scholars able to span very theoretical work to very practical and empirical work, bringing forth a novel perspective that galvanizes a community.” spr.ly/60180SXYj
1/Being in academia is such a privilege: You get to collaborate with insanely talented & passionate students on their journey to upskill themselves. Very excited to share *OpenUnlearning*: a unified, easily extensible framework for unlearning led by @anmol_mekala @VineethDorna🧵
Excited to share that our paper, "Rethinking Machine Unlearning for LLMs", is now published in Nature Machine Intelligence (2025)! 🎉🚀 [rdcu.be/eabfN]. Huge thanks to our amazing collaborators! Yuanshun Yao, @jia_jinghan, @StephenLCasper, @NathalieBaraca1, @peterbhase,…
Congratulations!
I'm thrilled to announce that I've been awarded the prestigious IBM PhD Fellowship 2024! @IBMResearch A heartfelt thank you to my advisor, colleagues, and the @IBM award committee for their support and recognition. #IBMPhDFellowship2024 research.ibm.com/university/awa…
Reminder: the deadline to submit to the Spotlight (non-archival) track of CPAL is TODAY!
Honored to receive the Amazon Research Award for advancing machine unlearning techniques--what a fantastic Christmas gift from @AmazonScience! Huge thanks to my hardworking students @OptML_MSU. Looking forward to new and fruitful collaborations with Amazon in 2025! @MSU_EGR
Announcing the recipients of the #AmazonResearchAwards. They will have access to Amazon public datasets, AWS AI/ML services and tools, and opportunities to collaborate with Amazon scientists. Learn more about the recipients and their research projects:
Live! Keynote talk by Alina Oprea "Training Secure Agents: Is Reinforcement Learning Vulnerable to Poisoning Attacks?" AdvML-Frontiers Workshop (@AdvMLFrontiers ) #NeurIPS2024 East Ballroom C, Vancouver Convention Center advml-frontier.github.io
Seeking volunteers for ACL 2025 reviewer and AC positions in Interpretability and Analysis of NLP Models: - DM me if you are interested in emergency reviewer/AC roles for late March/early April - Self-nominate for reviewer/AC positions here (review period is March 1 through March…