Demiguise Attack: Crafting invisible semantic adversarial perturbations with perceptual similarity

Leveraging perceptual similarity to craft adversarial perturbations that are invisible to humans.

arXiv | DOI | BibTeX

Yajie Wang*, Shangbo Wu*, Wenyi Jiang, Shengang Hao, Yu-an Tan, Quanxin Zhang†

TL;DR

Adversarial examples are malicious images carrying visually imperceptible perturbations. Tight Lp-norm bounds are a poor proxy for imperceptibility when constraining adversarial perturbations. To this end, we propose the Demiguise Attack, which crafts unrestricted perturbations via perceptual similarity.
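The idea above can be sketched in a toy form: instead of projecting the perturbation back into an Lp ball, the attack keeps a perceptual-similarity budget. The snippet below is a minimal, self-contained illustration, not the paper's implementation: the classifier is a toy linear model, and `perceptual_distance` is a crude stand-in (distance between blurred signals) for a learned perceptual metric such as LPIPS. All names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "classifier": logits = W @ x (stand-in for a real network)
W = rng.normal(size=(2, 64))
x = rng.uniform(0.2, 0.8, size=64)   # toy "image" with values in [0, 1]
target = 1                           # class whose logit we push up

def perceptual_distance(a, b):
    """Crude stand-in for a perceptual metric: L2 distance between
    locally averaged (blurred) signals. A real attack would use deep
    feature distances (e.g. LPIPS); this is only an illustrative proxy."""
    kernel = np.ones(4) / 4
    return np.linalg.norm(np.convolve(a, kernel, "same")
                          - np.convolve(b, kernel, "same"))

def perceptual_attack(x, steps=200, lr=0.05, tau=0.5):
    """Ascend the target logit, constrained by a perceptual budget tau
    instead of an Lp-norm ball (illustrative sketch only)."""
    adv = x.copy()
    for _ in range(steps):
        # Gradient of the target logit w.r.t. the input is just W[target]
        # for this linear toy model; a real attack would backpropagate.
        grad = W[target]
        adv = np.clip(adv + lr * grad / np.linalg.norm(grad), 0.0, 1.0)
        # Enforce the perceptual-similarity budget: if the proxy distance
        # exceeds tau, shrink the perturbation toward the clean input.
        while perceptual_distance(adv, x) > tau:
            adv = x + 0.9 * (adv - x)
    return adv

adv = perceptual_attack(x)
print(perceptual_distance(adv, x) <= 0.5)  # within the perceptual budget
print(W[target] @ adv > W[target] @ x)     # target logit increased
```

Note how the constraint here acts in a (proxy) perceptual space rather than on the raw pixel norm, so the raw L2 perturbation can be much larger than a tight Lp ball would allow while the budget is still respected.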

Figures

Attack demo - First figure in the paper

Adversarial examples crafted with Demiguise-C&W (top) and C&W (L2) (bottom). C&W (L2) produces noticeable, noise-like perturbations. Demiguise-C&W crafts much larger perturbations that carry rich semantic information while remaining imperceptible.

Comments

I am the co-first author of this paper, which was accepted at IJCAI 2021. This work was done while I was a research assistant after graduating (BSc at Beijing Institute of Technology). I was responsible for most of the experiments and the paper writing.

Citing our work

@inproceedings{Wang2021DemiguiseAC,
  title={Demiguise Attack: Crafting Invisible Semantic Adversarial Perturbations with Perceptual Similarity},
  author={Yajie Wang and Shangbo Wu and Wenyi Jiang and Shengang Hao and Yu-an Tan and Quanxin Zhang},
  booktitle={IJCAI},
  year={2021}
}