Author Affiliations: Guangdong Key Lab of Information Security, School of Computer Science and Engineering, Sun Yat-sen University, China; School of Computing and Information Technology, Great Bay University, China; Department of Electrical and Computer Engineering, University of British Columbia, Canada
Publication: arXiv
Year/Volume/Issue: 2024
Core Indexing:
Subject: Adversarial machine learning
Abstract: Imperceptible adversarial attacks have recently attracted increasing research interest. Existing methods typically incorporate external modules or additional loss terms beyond a simple lp-norm into the attack process to achieve imperceptibility, whereas we argue that such additional designs may not be necessary. In this paper, we rethink the essence of imperceptible attacks and propose two simple yet effective strategies to unleash the potential of PGD, the common and classical attack, for imperceptibility from an optimization perspective. Specifically, a Dynamic Step Size is introduced to find the optimal solution with minimal attack cost towards the decision boundary of the attacked model, and an Adaptive Early Stop strategy is adopted to reduce the redundant strength of adversarial perturbations to the minimum level. The proposed PGD-Imperceptible (PGD-Imp) attack achieves state-of-the-art results in imperceptible adversarial attacks for both untargeted and targeted scenarios. When performing untargeted attacks against ResNet-50, PGD-Imp attains a 100% (+0.3%) attack success rate (ASR), a 0.89 (-1.76) l2 distance, and 52.93 (+9.2) dB PSNR with a 57 s (-371 s) running time, significantly outperforming existing methods. Copyright © 2024, The Authors. All rights reserved.
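The abstract describes the two optimization-level strategies only at a high level. Below is a minimal, hedged sketch of how a PGD loop with a decaying step size and an early-stop check might be organized, assuming a PyTorch image classifier; the function name `pgd_imp_sketch`, the linear step-size decay, and the all-misclassified stopping rule are illustrative assumptions, not the authors' published implementation.

```python
# Illustrative sketch only: a PGD-style loop with a dynamic (decaying) step size
# and an adaptive early stop, assuming `model` is a PyTorch classifier, `x` is an
# image batch in [0, 1], and `y` holds the ground-truth labels.
import torch
import torch.nn.functional as F

def pgd_imp_sketch(model, x, y, steps=300, step_init=2/255, step_min=0.1/255):
    delta = torch.zeros_like(x, requires_grad=True)
    for t in range(steps):
        logits = model(x + delta)
        # Adaptive early stop (assumed criterion): once every sample is already
        # misclassified, stop adding redundant perturbation strength.
        if (logits.argmax(dim=1) != y).all():
            break
        loss = F.cross_entropy(logits, y)
        grad, = torch.autograd.grad(loss, delta)
        # Dynamic step size (assumed schedule): shrink the step over iterations
        # so later updates make finer moves toward the decision boundary at
        # minimal extra distortion cost.
        step = max(step_min, step_init * (1 - t / steps))
        with torch.no_grad():
            delta += step * grad.sign()
            delta.copy_((x + delta).clamp(0, 1) - x)  # keep a valid image range
    return (x + delta).detach()
```

The sketch is untargeted; a targeted variant would minimize the loss toward a chosen target class and stop once all samples are classified as that target.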