Adversarial Example Does Good: Preventing Painting Imitation from Diffusion Models via Adversarial Examples

Published at ICML 2023 (Oral)

In this paper, we establish a framework for crafting adversarial watermarks that protect artworks from unauthorized mimicry by diffusion models, grounded in a theoretical model of adversarial attacks on diffusion models. This work has been packaged into an open-source project called Mist:
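The core idea can be illustrated with a standard PGD-style attack: perturb the image within a small L-infinity ball so as to maximize the model's loss, degrading a diffusion model's ability to imitate the artwork. The sketch below is a minimal, hypothetical illustration only; it uses a toy quadratic surrogate loss and NumPy, whereas the actual paper attacks the denoising loss of Stable Diffusion. The function name `pgd_watermark` and all parameters are illustrative, not from the paper.

```python
import numpy as np

def pgd_watermark(x, grad_fn, eps=8 / 255, alpha=2 / 255, steps=40):
    """PGD sketch: ascend the model's loss inside an L-inf ball of radius
    eps, so the watermarked image resists diffusion-based mimicry.

    x       : image array with pixel values in [0, 1]
    grad_fn : gradient of the (surrogate) loss w.r.t. the input image
    """
    delta = np.zeros_like(x)
    for _ in range(steps):
        g = grad_fn(x + delta)
        # signed gradient ascent step, projected back into the eps-ball
        delta = np.clip(delta + alpha * np.sign(g), -eps, eps)
        # keep the perturbed image inside the valid pixel range
        delta = np.clip(x + delta, 0.0, 1.0) - x
    return x + delta

# Toy surrogate loss L(x) = ||x - target||^2 with analytic gradient;
# a real attack would differentiate the diffusion model's denoising loss.
target = np.full((3, 8, 8), 0.5)
grad_fn = lambda x: 2.0 * (x - target)

x = np.random.default_rng(0).uniform(0.3, 0.7, size=(3, 8, 8))
x_adv = pgd_watermark(x, grad_fn)
```

The perturbation stays visually small (bounded by `eps` per pixel) while pushing the surrogate loss upward, which is the general shape of the watermarking objective.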


Mist: Watermark Against Unauthorized Diffusion-Based Artwork Mimicking

Oct. 2022 — Present

Homepage: https://psyker-team.github.io/index_en.html

  • The only open-source watermarking tool for combating unauthorized art mimicry by Stable Diffusion models.
  • GitHub Stars: 634 (360 + 274)
  • Media: 16k reposts, 20k likes

Recommended citation: Liang C*, Wu X*, Hua Y, et al. (2023). "Adversarial Example Does Good: Preventing Painting Imitation from Diffusion Models via Adversarial Examples." ICML 2023 (Oral). (Co-First Author)
Download Paper