Parameter Interpolation Adversarial Training for Robust Image Classification

Authors: Liu, Xin; Yang, Yichen; He, Kun; Hopcroft, John E.

Author Affiliations: Huazhong Univ Sci & Technol, Sch Comp Sci & Technol, Wuhan 430074, Peoples R China; Cornell Univ, Comp Sci Dept, Ithaca, NY 14853, USA

Publication: IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY (IEEE Trans. Inf. Forensics Secur.)

Year/Volume: 2025, Vol. 20

Pages: 1613-1623

Subject Classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (degrees conferrable in Engineering or Science)]

Funding: National Natural Science Foundation of China [U22B2017, 62076105]; International Cooperation Foundation of Hubei Province, China [2024EHA032]

Keywords: Training; Robustness; Overfitting; Computational modeling; Oscillators; Interpolation; Accuracy; Perturbation methods; Mean square error methods; Optimization; Adversarial examples; adversarial training; parameter interpolation; normalized mean square error

Abstract: Though deep neural networks exhibit superior performance on various tasks, they remain vulnerable to adversarial examples. Adversarial training has been demonstrated to be the most effective method for defending against adversarial attacks. However, under existing adversarial training methods, model robustness oscillates markedly and overfits during training, degrading defense efficacy. To address these issues, we propose a novel framework called Parameter Interpolation Adversarial Training (PIAT). At the end of each epoch, PIAT tunes the model parameters by interpolating between the parameters of the previous and current epochs. This makes the decision boundary change more gradually and alleviates overfitting, helping the model converge better and attain higher robustness. In addition, we suggest using the Normalized Mean Square Error (NMSE) to further improve robustness by aligning the relative, rather than absolute, magnitudes of the logits of clean and adversarial examples. Extensive experiments on several benchmark datasets demonstrate that our framework prominently improves the robustness of both Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs).
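
To make the abstract's two components concrete, below is a minimal PyTorch-style sketch of what epoch-wise parameter interpolation and an NMSE-style logit alignment could look like. The interpolation coefficient `alpha`, the unit-norm form of `nmse_loss`, and the `train_one_epoch` helper are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def interpolate_parameters(model, prev_state, alpha=0.6):
    """PIAT-style step: after each epoch, blend the current weights with
    the previous epoch's weights,
        theta <- alpha * theta_prev + (1 - alpha) * theta_curr.
    `alpha` is a hypothetical coefficient, not the paper's value."""
    with torch.no_grad():
        for name, tensor in model.state_dict().items():
            # Skip integer buffers (e.g., BatchNorm's num_batches_tracked).
            if tensor.is_floating_point():
                tensor.copy_(alpha * prev_state[name] + (1 - alpha) * tensor)

def nmse_loss(logits_clean, logits_adv, eps=1e-8):
    """One plausible reading of the NMSE term: scale each logit vector to
    unit L2 norm so only relative magnitudes are matched, then take MSE."""
    z_c = logits_clean / (logits_clean.norm(dim=1, keepdim=True) + eps)
    z_a = logits_adv / (logits_adv.norm(dim=1, keepdim=True) + eps)
    return F.mse_loss(z_a, z_c)

# Per-epoch usage (sketch); `train_one_epoch` is a hypothetical helper
# that runs one epoch of adversarial training with nmse_loss added:
#   prev_state = {k: v.clone() for k, v in model.state_dict().items()}
#   train_one_epoch(model, loader)
#   interpolate_parameters(model, prev_state)
```

Note that the interpolation acts only at epoch boundaries, so the per-batch training step is unchanged; snapshotting `prev_state` before each epoch is what lets the update pull the weights back toward the previous epoch.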
