IEEE Transactions on Artificial Intelligence

IOTM: Iterative Optimization Trigger Method - A Runtime Data-Free Backdoor Attack on Deep Neural Networks

Authors: Arshad, Iram; Alsamhi, Saeed Hamood; Qiao, Yuansong; Lee, Brian; Ye, Yuhang

Affiliations: Technological University of Shannon: Midlands Midwest, N37 HD68, Ireland; University of Galway, Insight Centre for Data Analytics, Galway H91 TK33, Ireland; Ibb University, Faculty of Engineering, Ibb 70270, Yemen

Published in: IEEE Transactions on Artificial Intelligence (IEEE Trans. Artif. Intell.)

Year/Volume/Issue: 2024, Vol. 5, No. 9

Pages: 4562-4573


Funding: Manuscript received 28 September 2023; revised 31 January 2024; accepted 1 April 2024. Date of publication 4 April 2024; date of current version 10 September 2024. This work was supported by the Technological University of Shannon: Midlands Midwest under a President Doctoral Scholarship. The work of Saeed Hamood Alsamhi was supported by Science Foundation Ireland under Grant SFI/12/RC/2289 P2. This article was recommended for publication by Associate Editor Chaoli Sun upon evaluation of the reviewers' comments. (Corresponding author: Iram Arshad.) Iram Arshad, Yuansong Qiao, Brian Lee, and Yuhang Ye are with the Technological University of Shannon, N37 HD68 Midlands Midwest, Ireland (e-mail: i.arshad@research.ait.ie; ysqiao@research.ait.ie; brian.lee@tus.ie; yye@research.ait.ie).

Subject: Iterative methods

Abstract: Deep neural networks are susceptible to various backdoor attacks, such as training-time attacks, where the attacker injects a trigger pattern into a small portion of the dataset to control the model's predictions at runtime. Backdoor attacks are dangerous because they do not degrade the model's performance. This article explores the feasibility of a new type of backdoor attack, a data-free backdoor. Unlike traditional backdoor attacks that require poisoning data and injection during training, our approach, the iterative optimization trigger method (IOTM), enables trigger generation without compromising the integrity of the models and datasets. We propose an attack based on the IOTM technique, guided by an adaptive trigger generator (ATG) and employing a custom objective function. The ATG dynamically refines the trigger using feedback from the model's predictions. We empirically evaluated the effectiveness of IOTM with three deep learning models (CNN, VGG16, and ResNet18) on the CIFAR10 dataset. The achieved runtime attack success rate (R-ASR) varies across classes: for some classes it reached 100%, whereas for others it reached 62%. Furthermore, we conducted an ablation study on the CIFAR100 dataset to investigate critical factors in the runtime backdoor, including the optimizer, weight, REG, and trigger visibility, and their effect on the R-ASR. We observed significant variations in the R-ASR when changing the optimizer between Adam and SGD, with and without momentum: the R-ASR reached 81.25% with the Adam optimizer, whereas SGD with and without momentum reached 46.87% and 3.12%, respectively. © 2020 IEEE.
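The core idea the abstract describes — iteratively refining a trigger using feedback from a frozen model's predictions, subject to a visibility constraint — can be sketched in miniature. This is a toy illustration under stated assumptions, not the paper's method: the linear "model", its weights, the step size, and the norm bound are all invented for the example, and the paper's adaptive trigger generator (ATG) and custom objective are replaced here by plain projected gradient ascent on the target-class log-probability.

```python
import math

# Toy frozen "model": a 3-class linear classifier on 4-dimensional inputs.
# (Stands in for the attacked DNN; weights are arbitrary for illustration.)
W = [[ 0.5, -0.2,  0.1,  0.0],
     [-0.3,  0.4, -0.1,  0.2],
     [ 0.1,  0.1,  0.6, -0.4]]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def predict(x):
    """Black-box prediction feedback: class probabilities for input x."""
    return softmax([sum(w * xi for w, xi in zip(row, x)) for row in W])

def optimize_trigger(x, target, steps=300, lr=0.5, max_norm=1.5):
    """Iteratively refine an additive trigger so the triggered input is
    classified as `target`, using only the model's prediction feedback."""
    trigger = [0.0] * len(x)
    for _ in range(steps):
        probs = predict([xi + ti for xi, ti in zip(x, trigger)])
        # Gradient of log p_target w.r.t. the input for a linear model:
        #   grad = W[target] - sum_k p_k * W[k]
        grad = [W[target][j] - sum(probs[k] * W[k][j] for k in range(len(W)))
                for j in range(len(x))]
        trigger = [t + lr * g for t, g in zip(trigger, grad)]
        # Visibility constraint: project the trigger back onto an L2 ball.
        norm = math.sqrt(sum(t * t for t in trigger))
        if norm > max_norm:
            trigger = [t * max_norm / norm for t in trigger]
    return trigger

x = [1.0, 0.5, -0.5, 0.2]          # a clean input (predicted as class 0)
trigger = optimize_trigger(x, target=2)
clean_pred = predict(x)
triggered_pred = predict([xi + ti for xi, ti in zip(x, trigger)])
```

After optimization, adding the trigger flips the toy model's prediction to the chosen target class without touching the model or any training data, which is the "data-free, runtime" property the abstract highlights; the norm bound plays the role of the trigger-visibility constraint examined in the ablation study.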
