Author Affiliations: Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu 611731, Peoples R China; Sichuan Univ, Sch Cyber Sci & Engn, Chengdu 610017, Peoples R China; Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore 639798, Singapore; Univ Technol Sydney, Fac Engn & Informat Technol, Sch Comp Sci, Sydney, NSW 2007, Australia
Publication: 《IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING》 (IEEE Trans. Dependable Secure Comput.)
Year/Volume/Issue: 2025, Vol. 22, No. 3
Pages: 2823-2840
Subject Classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0835 [Engineering - Software Engineering]; 0812 [Engineering - Computer Science and Technology (degrees may be conferred in Engineering or Science)]
Funding: National Key R&D Program of China [2022YFB3103500]; National Natural Science Foundation of China [62402087, 62020106013, U24A20259]; Fundamental Research Funds for Chinese Central Universities [ZYGX2024J019, Y030232063003002]; China Postdoctoral Science Foundation [BX20230060, 2024M760356]; Chengdu Science and Technology Program [2023-XT00-00002-GX]; Key Laboratory of Computing Power Network and Information Security, Ministry of Education, Qilu University of Technology (Shandong Academy of Sciences) [2023ZD003]; Key Laboratory of Data Protection and Intelligent Management, Ministry of Education, Sichuan University [SCUSAKFKT202403Y]
Keywords: Image color analysis; Perturbation methods; Image preprocessing; Robustness; Training; Image coding; Predictive models; Optimization; Loss measurement; Iterative algorithms; Backdoor attack; adversarial attack; defense mechanisms; image color space
Abstract: Deep neural networks (DNNs) are known to be susceptible to various malicious attacks, such as adversarial and backdoor attacks. However, most of these attacks rely on additive adversarial perturbations (or backdoor triggers) within an $L_p$-norm constraint, and they can be easily defeated by image preprocessing strategies such as image compression and image super-resolution. To address this limitation, instead of using additive adversarial perturbations (or backdoor triggers) in the pixel space, this work revisits the design of adversarial perturbations (or backdoor triggers) from the perspective of color space and conducts a comprehensive analysis. Specifically, we propose a color space backdoor attack and a color space adversarial attack in which a color space shift serves as the trigger or perturbation. To find the optimal trigger or perturbation in the black-box scenario, we perform an iterative optimization process with the Particle Swarm Optimization algorithm. Experimental results confirm the robustness of the proposed color space attacks against image preprocessing defenses as well as other mainstream defense methods. In addition, we design adaptive defense strategies and evaluate their effectiveness against color space attacks. Our work emphasizes the importance of the color space when developing malicious attacks against DNNs and urges more research in this area.
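
The abstract describes searching for an optimal color-space shift with Particle Swarm Optimization under black-box access. Below is a minimal Python sketch of that idea, not the authors' implementation: the "color space shift" is modeled here as additive offsets on the H, S, and V channels (an assumption), `black_box_score` is a hypothetical stand-in for a query to the victim model (e.g., its confidence on the attacker's target label), and the PSO hyperparameters are illustrative only.

```python
"""Sketch: PSO search for a color-space-shift perturbation (illustrative only)."""

import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb


def apply_color_shift(image, shift):
    """Apply an HSV-channel offset (the candidate trigger/perturbation).

    `image` is a float RGB array in [0, 1]; `shift` is a 3-vector of
    offsets for the H, S and V channels.
    """
    hsv = rgb_to_hsv(image)
    hsv[..., 0] = (hsv[..., 0] + shift[0]) % 1.0              # hue wraps around
    hsv[..., 1:] = np.clip(hsv[..., 1:] + shift[1:], 0.0, 1.0)
    return hsv_to_rgb(hsv)


def black_box_score(image):
    """Hypothetical stand-in for a single black-box query to the victim model,
    e.g. the model's confidence on the attacker's target label."""
    return float(image.mean())  # replace with a real model query


def pso_search(image, n_particles=20, n_iters=50,
               w=0.7, c1=1.5, c2=1.5, bound=0.3, seed=0):
    """Standard PSO over 3-dim HSV shifts, maximizing the black-box score."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-bound, bound, size=(n_particles, 3))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.full(n_particles, -np.inf)
    gbest, gbest_val = pos[0].copy(), -np.inf

    for _ in range(n_iters):
        # Evaluate every particle with one black-box query each.
        for i in range(n_particles):
            score = black_box_score(apply_color_shift(image, pos[i]))
            if score > pbest_val[i]:
                pbest_val[i], pbest[i] = score, pos[i].copy()
            if score > gbest_val:
                gbest_val, gbest = score, pos[i].copy()
        # Velocity/position update toward personal and global bests.
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, -bound, bound)
    return gbest, gbest_val


if __name__ == "__main__":
    img = np.random.default_rng(1).random((32, 32, 3))  # toy RGB image
    shift, score = pso_search(img)
    print("best HSV shift:", shift, "score:", score)
```

Because PSO only needs fitness values, this search treats the model as a pure black box, which matches the attack setting described in the abstract; the bound on the shift keeps the recolored image visually plausible rather than constraining an additive pixel-space $L_p$ perturbation.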