Xiaoyu (Nicholas) Wu 吴晓宇

My research interests include copyright protection and authentication for and against AI personalization, as well as privacy and security topics such as adversarial attacks and defenses, data extraction, and membership inference.

I’m also interested in generative modeling and am currently working with Prof. Chen Wei.

I co-founded a volunteer interest group focused on copyright issues related to image generative models. We develop software and provide technical support for AI copyright litigation on a volunteer basis. Feel free to email us if you’re interested or need assistance.

News

Sep. 2025: Our new work on data extraction after exact unlearning was accepted to NeurIPS 2025 and is available on arXiv. We show that even when a model is exactly unlearned, it can still leak the removed data under realistic, real-world deployments, enabling high-quality extraction in practice.

May 2025: Our new work on training-data extraction for personalized diffusion models was accepted to ICML 2025 and is available on arXiv. We demonstrate high-quality training-data extraction using publicly available checkpoints on Hugging Face.

May 2024: Published new findings on mitigating quality degradation during few-shot fine-tuning of diffusion models, available on arXiv. We identified a phenomenon called the “corruption stage,” where image quality abnormally degrades, and improved performance using Bayesian neural networks (BNNs).

Feb 2024: Our paper on copyright authentication for diffusion models was accepted to CVPR 2024. Read the full paper on CVF. The code is available through our project Revelio on GitHub.

Dec 2023: Mist-v2 has been released! It provides enhanced protection against LoRA-based attacks. More details are available on the homepage.

May 2023: Our adversarial watermarking project, Mist, is now open-source on GitHub. Add watermarks to protect your artwork from unauthorized use!