Neil Gong
@NeilGong
Security, trustworthy AI. Associate Professor, Duke University
Our paper "DataSentinel: A Game-Theoretic Detection of Prompt Injection Attacks" (arxiv.org/abs/2504.11358) received a Distinguished Paper Award at @IEEESSP! Huge thanks and congratulations to my amazing co-authors Yupei Liu, Yuqi Jia, Jinyuan Jia, and @dawnsongtweets!
The 8th Deep Learning Security and Privacy workshop co-located with IEEE S&P @IEEESSP May 15, 2025, San Francisco (dlsp2025.ieee-security.org) is calling for papers, posters and talks! The workshop seeks your awesome contributions on all aspects of deep learning and security, aiming…
We @OSUbigdata and @osunlp are very excited to host Neil Gong @NeilGong tmr (10:30AM-11:30AM ET, Dec 6th) to give an invited talk on Safe and Robust Generative AI. He will cover several critical safety and robustness issues in generative AI, including preventing the generation of…
Excited that our paper on audio watermark benchmarking is accepted to @NeurIPSConf! Congrats to all my amazing collaborators @hbliuustc, Mo Yang, Zheng Yuan, and @NeilGong. Audio authenticity has become a real issue now and we will keep working on this topic. Stay tuned :)
AudioMarkBench: Benchmarking Robustness of Audio Watermarking [arxiv.org/pdf/2406.06979] Despite rapid progress in #audiodeepfake, I feel the related safety risks are still underestimated. Imagine getting a call from somebody you trust who's actually a scammer-controlled bot –…
Glad to see prompt injection is among the interesting competitions. One of the most important security/safety challenges for LLM 😄😄please participate!
Exciting competitions at @satml_conf All of them look super interesting...
The slides for my talk (given at multiple workshops recently) on Safe and Robust Generative AI are available here: people.duke.edu/~zg70/code/Saf… Thanks for the contributions of my students and collaborators (many of them)! Comments are very welcome!


Excited about this work on benchmarking robustness of audio watermarking!
AudioMarkBench: Benchmarking Robustness of Audio Watermarking [arxiv.org/pdf/2406.06979] Despite rapid progress in #audiodeepfake, I feel the related safety risks are still underestimated. Imagine getting a call from somebody you trust who's actually a scammer-controlled bot –…
A paper (arxiv.org/abs/2310.12815) on formalizing and benchmarking prompt injection attacks and defenses for LLM was accepted by USENIX Security Symposium 2024. We thank the reviewers’ very constructive comments. Very excited about this paper. Congratulations to my coauthors!
Excited to co-author this paper. Comments are very welcome!
Excited to share the paper based on a workshop held in October 16, 2023. eprint.iacr.org/2024/855 We first considered the AI legal landscape, technical issues (alignment, provenance), and the technical gaps between the legal landscape and the state of the art of existing…
I got tenure at Duke. Big thanks to my family, students, collaborators, colleagues, and anonymous letter writers!
The deadline for submissions to SAGAI'24 (workshop on security of GenAI at IEEE S&P) has been extended to March 1, thanks to flexibility from our publisher for the camera ready. If you missed the original deadline, this is your chance to submit your exciting research results.
(Second Call for Papers) Submit your work on the security for GenAI systems and applications. *Security Architectures for Generative AI (SAGAI'24)* is a new workshop at IEEE S&P this year. Full CFP: sites.google.com/view/sagai2024… Submission deadline: February 5, 2024