
Adaptive Perturbation for Adversarial Attack

Authors: Yuan, Zheng; Zhang, Jie; Jiang, Zhaoyan; Li, Liangliang; Shan, Shiguang

Affiliations: Chinese Acad Sci, Inst Comp Technol, Key Lab Intelligent Informat Proc, Beijing 100190, Peoples R China; Chinese Acad Sci, Inst Comp Technol, Key Lab Intelligent Informat Proc, Beijing 100049, Peoples R China; Tencent, Shenzhen 518057, Peoples R China

Published in: IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE (IEEE Trans Pattern Anal Mach Intell)

Year/Volume/Issue: 2024, Vol. 46, No. 8

Pages: 5663-5676


Subject classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (degrees awardable in Engineering or Science)]

Funding: National Key R&D Program of China [2021YFC3310100]; National Natural Science Foundation of China; Beijing Nova Program; Youth Innovation Promotion Association CAS

Keywords: Perturbation methods; Iterative methods; Adaptation models; Generators; Closed box; Security; Training; Adversarial attack; transfer-based attack; adversarial example; adaptive perturbation

Abstract: In recent years, the security of deep learning models has attracted increasing attention with the rapid development of neural networks, which are vulnerable to adversarial examples. Almost all existing gradient-based attack methods apply the sign function during generation to satisfy the perturbation budget under the L-infinity norm. However, we find that the sign function may be improper for generating adversarial examples, since it alters the exact gradient direction. Instead of using the sign function, we propose to directly utilize the exact gradient direction with a scaling factor to generate adversarial perturbations, which improves the attack success rates of adversarial examples even with fewer perturbations. We also theoretically prove that this method achieves better black-box transferability. Moreover, since the best scaling factor varies across images, we propose an adaptive scaling factor generator that seeks an appropriate scaling factor for each image, avoiding the computational cost of manually searching for the scaling factor. Our method can be integrated with almost all existing gradient-based attack methods to further improve their attack success rates. Extensive experiments on the CIFAR10 and ImageNet datasets show that our method exhibits higher transferability and outperforms state-of-the-art methods.
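The core idea summarized in the abstract, keeping the exact gradient direction instead of taking its sign, can be sketched in a few lines. Below is a minimal NumPy illustration; the function names, the max-normalization, and the fixed scaling factor are assumptions for demonstration only, not the paper's actual adaptive scaling factor generator, and the gradient is assumed precomputed rather than obtained from a model:

```python
import numpy as np

def sign_step(x, grad, eps):
    # Conventional FGSM-style update: sign() discards gradient magnitudes,
    # so every coordinate moves by the full budget eps.
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

def scaled_gradient_step(x, grad, eps, scale):
    # Scaled exact-gradient update: keep the exact gradient direction,
    # multiply by a scaling factor, and project back into the
    # L-infinity budget. Relative magnitudes across coordinates survive.
    step = scale * grad / (np.abs(grad).max() + 1e-12)
    delta = np.clip(step, -eps, eps)
    return np.clip(x + delta, 0.0, 1.0)
```

Here the sign-based step perturbs every coordinate equally, while the scaled step preserves the ratio between gradient components, which is the property the abstract argues improves black-box transferability.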
