Reversible Adversarial Examples with Minimalist Evolution for Recognition Control in Computer Vision

Authors: Yang, Shilong; Leng, Lu; Chang, Ching-Chun; Chang, Chin-Chen

Affiliations: Nanchang Hangkong Univ, Jiangxi Prov Key Lab Image Proc & Pattern Recognit, Nanchang 330063, Peoples R China; Feng Chia Univ, Informat & Commun Secur Res Ctr, Taichung 407102, Taiwan; Feng Chia Univ, Informat Engn & Comp Sci, Taichung 407102, Taiwan

Published in: APPLIED SCIENCES-BASEL (Appl. Sci.)

Year/Volume/Issue: 2025, Vol. 15, No. 3

Pages: 1142


Funding: National Natural Science Foundation of China [62466038]; Jiangxi Provincial Key Laboratory of Image Processing and Pattern Recognition [2024SSY03111]; Technology Innovation Guidance Program Project (Special Project of Technology Cooperation, Science and Technology Department of Jiangxi Province) [20212BDH81003]; Innovation Foundation for Postgraduate Students of Nanchang Hangkong University [YC2023-102]

Keywords: data security; reversible data hiding; reversible adversarial example; dual-color space; black-box attack

Abstract: As artificial intelligence increasingly automates the recognition and analysis of visual content, it poses significant risks to privacy, security, and autonomy. Computer vision systems can surveil and exploit data without consent. With these concerns in mind, we introduce a novel method to control whether images can be recognized by computer vision systems using reversible adversarial examples. These examples are generated to evade unauthorized recognition, while allowing only systems with permission to restore the original image by removing the adversarial perturbation with zero-bit error. A key challenge with prior methods is their reliance on merely restoring the examples to a state in which they can be correctly recognized by the model; however, the restored images are not fully consistent with the original images, and these methods require excessive auxiliary information to achieve reversibility. To achieve zero-bit-error restoration, we utilize the differential evolution algorithm to optimize adversarial perturbations while minimizing distortion. Additionally, we introduce a dual-color-space detection mechanism to localize perturbations, eliminating the need for extra auxiliary information. Ultimately, when combined with reversible data hiding, the adversarial attack becomes reversible. Experimental results demonstrate that the PSNR and SSIM between the images restored by our method and the original images are infinity and 1, respectively, while the PSNR and SSIM between the reversible adversarial examples and the original images are 48.32 dB and 0.9986, respectively. Compared to state-of-the-art methods, our method maintains high visual fidelity at a comparable attack success rate.
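The core idea in the abstract, optimizing a minimal adversarial perturbation with differential evolution and then undoing it exactly given the perturbation's location, can be illustrated with a small sketch. This is not the paper's implementation: the toy logistic "classifier", the single-pixel perturbation, and the distortion weight `0.01` are all assumptions made for illustration; the paper attacks real black-box vision models and recovers the perturbation location via its dual-color-space detector and reversible data hiding rather than by storing it directly.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical stand-in classifier: a logistic score over an 8x8 grayscale
# image. Higher output means the image is "recognized".
rng = np.random.default_rng(0)
WEIGHTS = rng.normal(size=(8, 8))

def confidence(image):
    return 1.0 / (1.0 + np.exp(-np.sum(WEIGHTS * image)))

original = rng.random((8, 8))

def fitness(params):
    # params = (row, col, delta): perturb one pixel. Minimizing confidence
    # evades recognition; the |delta| penalty keeps distortion minimal,
    # mirroring the paper's low-distortion objective.
    r, c, delta = int(params[0]), int(params[1]), params[2]
    perturbed = original.copy()
    perturbed[r, c] = np.clip(perturbed[r, c] + delta, 0.0, 1.0)
    return confidence(perturbed) + 0.01 * abs(delta)

bounds = [(0, 7.999), (0, 7.999), (-1.0, 1.0)]
result = differential_evolution(fitness, bounds, seed=1, maxiter=50, tol=1e-6)

r, c, delta = int(result.x[0]), int(result.x[1]), result.x[2]
adversarial = original.copy()
adversarial[r, c] = np.clip(adversarial[r, c] + delta, 0.0, 1.0)

# Reversibility: knowing the perturbed location and the original pixel value
# permits exact, zero-bit-error recovery (PSNR = infinity, SSIM = 1).
restored = adversarial.copy()
restored[r, c] = original[r, c]
```

Because the perturbation touches a known, bounded set of pixels, recovery is exact by construction; the paper's contribution is recovering that information without side-channel storage, via perturbation localization and reversible data hiding.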
