Yiwei Lu
@YiweiLu3r
Incoming Assistant Professor @uOttawa. Prev: @UWaterloo @TheSalonML @VectorInstitute. Interested in #MachineLearning, #Trustworthiness, and #AISafety.
We're excited to announce the Call for Papers for SaTML 2026, the premier conference on secure and trustworthy machine learning (@satml_conf). We seek papers on secure, private, and fair learning algorithms and systems. 👉 satml.org/call-for-paper… ⏰ Deadline: Sept 24
Congrats to Dr. Yiwei Lu (@YiweiLu3r), who defended his PhD today! Yiwei is a leader in trustworthy AI & ML, and has done breakthrough work on data poisoning attacks. He will be starting as an assistant professor at @uOttawa in the fall -- apply to work with him! 💯💯💯
Congrats to @YiweiLu3r on a PhD successfully defended. His thesis will be a great reference point for students interested in data poisoning. For ~25% of the committee Qs, Yiwei responded with "great question, and we're working on it now...", so I'm excited to see what comes next!
Want state-of-the-art data curation, data poisoning & more? Just do gradient descent! w/ @andrew_ilyas, Ben Chen, @axel_s_feldmann, @wsmoses, @aleks_madry: we show how to optimize final model loss wrt any continuous variable. Key idea: Metagradients (grads through model training)
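For intuition, here is a minimal JAX sketch of the metagradient idea, using a toy linear model with per-example training weights as the continuous variable. The model, data, and hyperparameters are all illustrative assumptions, not the paper's code:

```python
import jax
import jax.numpy as jnp

# Toy data (assumed): linear regression with 5 features.
x_tr = jax.random.normal(jax.random.PRNGKey(0), (32, 5))
y_tr = x_tr @ jnp.arange(5.0)
x_val = jax.random.normal(jax.random.PRNGKey(1), (16, 5))
y_val = x_val @ jnp.arange(5.0)

def weighted_loss(params, w):
    # training loss, weighted per example by w
    return jnp.mean(w * (x_tr @ params - y_tr) ** 2)

def train(w, steps=100, lr=0.1):
    params = jnp.zeros(5)
    for _ in range(steps):  # unrolled loop; JAX traces straight through it
        params = params - lr * jax.grad(weighted_loss)(params, w)
    return params

def meta_loss(w):
    # validation loss of the *final* trained model, as a function of w
    return jnp.mean((x_val @ train(w) - y_val) ** 2)

# Metagradient: gradient of the final loss w.r.t. the per-example
# training weights, differentiated through the entire training run.
metagrad = jax.grad(meta_loss)(jnp.ones(32))
print(metagrad.shape)  # (32,): one entry per training point
```

Descending on `metagrad` then reweights (curates, or poisons) the training set to steer the final model, which is the sense in which "just do gradient descent" covers both use cases.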
📢📢 Happy to share that our paper on unlearning evaluations has been accepted to ICLR 2025 🇸🇬 📜: arxiv.org/abs/2406.17216 Thanks to my great co-authors @SethInternet @thegautamkamath @jimmy_di98 @ayush_sekhari @yiweilu
🧵New paper: Machine Unlearning Fails to Remove Data Poisoning Attacks, ft @MartinPawelczyk, @jimmy_di98, @ayush_sekhari, @SethInternet. Title says it all: current approaches for machine unlearning (MUL) are not effective at removing the effect of data poisoning attacks. 1/n
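A rough sense of the evaluation the thread describes, as a toy JAX sketch: train on clean data plus label-flipped poison, apply an unlearner, and compare against the gold standard of retraining from scratch on clean data. The "unlearner" below is just a brief clean fine-tune, a stand-in assumption rather than any method from the paper:

```python
import jax
import jax.numpy as jnp

def bce(w, x, y):
    # binary cross-entropy for a linear logit model
    logits = x @ w
    return jnp.mean(jnp.logaddexp(0.0, logits) - y * logits)

def train(x, y, w0, steps, lr=0.5):
    w = w0
    for _ in range(steps):
        w = w - lr * jax.grad(bce)(w, x, y)
    return w

def acc(w, x, y):
    return jnp.mean((x @ w > 0) == (y > 0.5))

k1, k2, k3 = jax.random.split(jax.random.PRNGKey(0), 3)
x_cl = jax.random.normal(k1, (500, 10)); y_cl = (x_cl[:, 0] > 0).astype(jnp.float32)
x_te = jax.random.normal(k2, (200, 10)); y_te = (x_te[:, 0] > 0).astype(jnp.float32)
# label-flipped poison points
x_po = jax.random.normal(k3, (200, 10)); y_po = 1.0 - (x_po[:, 0] > 0).astype(jnp.float32)

w0 = jnp.zeros(10)
w_poisoned = train(jnp.vstack([x_cl, x_po]), jnp.concatenate([y_cl, y_po]), w0, 300)
w_retrain = train(x_cl, y_cl, w0, 300)         # gold standard: retrain on clean data
w_unlearn = train(x_cl, y_cl, w_poisoned, 20)  # stand-in unlearner: brief clean fine-tune

for name, w in [("poisoned", w_poisoned), ("unlearned", w_unlearn), ("retrain", w_retrain)]:
    print(name, float(acc(w, x_te, y_te)))
```

The paper's claim, in these terms: the "unlearned" model's behavior stays closer to the poisoned model than to the retrained one.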
"Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining," with @florian_tramer & Nicholas Carlini got an #ICML2024 best paper award! x.com/thegautamkamat… 🧵: the personal side of this research, emotional high & lows, & more 👇 1/n
🧵New paper w Nicholas Carlini & @florian_tramer: "Considerations for Differentially Private Learning with Large-Scale Public Pretraining." We critique the increasingly popular use of large-scale public pretraining in private ML. Comments welcome. arxiv.org/abs/2212.06470 1/n
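For context on the paradigm being critiqued (pretrain publicly, then fine-tune privately): the private step is typically DP-SGD. A minimal JAX sketch of one DP-SGD update, with a toy squared loss and all hyperparameters assumed for illustration:

```python
import jax
import jax.numpy as jnp

def loss(params, x, y):
    # placeholder fine-tuning loss (squared error of a linear head)
    return jnp.mean((x @ params - y) ** 2)

def dp_sgd_step(params, xb, yb, key, lr=0.1, clip=1.0, sigma=1.0):
    # per-example gradients: vmap the gradient over singleton "batches"
    per_ex = jax.vmap(jax.grad(loss), in_axes=(None, 0, 0))(
        params, xb[:, None, :], yb[:, None])
    # clip each example's gradient to norm <= clip
    norms = jnp.linalg.norm(per_ex, axis=1, keepdims=True)
    clipped = per_ex * jnp.minimum(1.0, clip / (norms + 1e-12))
    # add Gaussian noise calibrated to the clipping norm
    noise = sigma * clip * jax.random.normal(key, params.shape) / len(xb)
    return params - lr * (clipped.mean(axis=0) + noise)

# usage (toy data assumed)
params = jnp.zeros(5)
xb = jax.random.normal(jax.random.PRNGKey(0), (64, 5))
yb = xb @ jnp.arange(5.0)
params = dp_sgd_step(params, xb, yb, jax.random.PRNGKey(1))
```

The paper's question is not about this mechanism itself, but about what privacy guarantees mean when the pretraining data that does most of the work was scraped publicly with no such protections.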
Accepted to #ICML2024: "Disguised Copyright Infringement of Latent Diffusion Models," by @YiweiLu3r*, Matthew Yang*, Zuoqiu Liu*, co-advised by me & Yaoliang Yu. Copyright violations can be *disguised*: detecting them may require more than just looking at the training data! 🧵1/n
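The detection point, as a hypothetical sketch in JAX: a disguised sample looks unlike the copyrighted work in pixel space yet sits close to it in the latent space the model actually trains on. Here `encode` stands in for the latent diffusion model's VAE encoder, and the names and thresholds are assumptions, not the paper's method:

```python
import jax.numpy as jnp

def pixel_distance(a, b):
    # distance in raw pixel space: what a human inspector sees
    return jnp.linalg.norm(a - b)

def latent_distance(encode, a, b):
    # distance in latent space: what the diffusion model trains on
    return jnp.linalg.norm(encode(a) - encode(b))

def looks_disguised(encode, candidate, copyrighted, pix_thresh, lat_thresh):
    # suspicious: visually dissimilar, yet nearly identical after encoding,
    # so inspecting the training images alone would miss the infringement
    return bool((pixel_distance(candidate, copyrighted) > pix_thresh) &
                (latent_distance(encode, candidate, copyrighted) < lat_thresh))
```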
Here's @YiweiLu3r presenting awesome work he led at @satml_conf on indiscriminate data poisoning attacks (joint w/ Matthew Y.R. Yang, me, and Yaoliang Yu). Yiwei has been spearheading this understudied research direction! #SaTML2024
Next up is Yiwei Lu giving a talk on "Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors"
🧵Paper at #ICML2023: "Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks," led by Yiwei Lu, and co-advised with Yaoliang Yu. We *finally* give ~satisfying indiscriminate data poisoning attacks against neural networks & more! 1/n arxiv.org/abs/2303.03592
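One way to read "model-targeted": pick a bad target model, then craft poison points so that the target becomes a stationary point of training on clean + poison. A toy JAX sketch in that spirit, where the squared loss, sizes, and step counts are all illustrative assumptions rather than the paper's attack:

```python
import jax
import jax.numpy as jnp

def loss(w, x, y):
    return jnp.mean((x @ w - y) ** 2)

def stationarity_gap(x_po, y_po, w_target, x_cl, y_cl):
    # squared norm of the total training gradient at the target model;
    # zero means w_target is a stationary point of clean + poison training
    n_cl, n_po = len(x_cl), len(x_po)
    g = (n_cl * jax.grad(loss)(w_target, x_cl, y_cl)
         + n_po * jax.grad(loss)(w_target, x_po, y_po)) / (n_cl + n_po)
    return jnp.sum(g ** 2)

def craft_poison(w_target, x_cl, y_cl, n_po=50, steps=500, lr=0.01):
    x_po = jax.random.normal(jax.random.PRNGKey(0), (n_po, x_cl.shape[1]))
    y_po = jnp.zeros(n_po)
    for _ in range(steps):  # gradient descent on the poison *features*
        x_po = x_po - lr * jax.grad(stationarity_gap)(x_po, y_po, w_target, x_cl, y_cl)
    return x_po, y_po

# toy clean task and a deliberately bad target model (assumed)
x_cl = jax.random.normal(jax.random.PRNGKey(1), (200, 5))
y_cl = x_cl @ jnp.arange(5.0)
w_target = -jnp.arange(5.0)
x_po, y_po = craft_poison(w_target, x_cl, y_cl)
print(float(stationarity_gap(x_po, y_po, w_target, x_cl, y_cl)))  # should be near 0
```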