
Perturb, Attend, Detect and Localize (PADL): Robust Proactive Image Defense

Authors: Bartolucci, Filippo; Masi, Iacopo; Lisanti, Giuseppe

Affiliations: Computer Science and Engineering Dept., CVLab, University of Bologna, Bologna, Italy; Computer Science Dept., OmnAI Lab, Sapienza University of Rome, Rome, Italy

Published in: arXiv

Year: 2024


Keywords: Reverse engineering

Abstract: Image manipulation detection and localization have received considerable attention from the research community given the blooming of Generative Models (GMs). Detection methods that follow a passive approach may overfit to specific GMs, limiting their application in real-world scenarios due to the growing diversity of generative models. Recently, approaches based on a proactive framework have shown the possibility of dealing with this limitation. However, these methods suffer from two main limitations, which raise concerns about potential vulnerabilities: i) the manipulation detector is not robust to noise and hence can be easily fooled; ii) their reliance on fixed perturbations for image protection offers a predictable exploit for malicious attackers, enabling them to reverse-engineer the perturbation and evade detection. To overcome these issues we propose PADL, a new solution able to generate image-specific perturbations using a symmetric scheme of encoding and decoding based on cross-attention, which drastically reduces the possibility of reverse engineering, even when evaluated against adaptive attacks [31]. Additionally, PADL is able to pinpoint manipulated areas, facilitating the identification of specific regions that have undergone alterations, and generalizes better than prior art to held-out generative models. Indeed, although trained only on an attribute manipulation GAN model [15], our method generalizes to a range of unseen models with diverse architectural designs, such as StarGANv2, BlendGAN, DiffAE, StableDiffusion and StableDiffusionXL. Finally, we introduce a novel evaluation protocol that offers a fair assessment of localization performance as a function of detection accuracy and better captures real-world scenarios. Copyright © 2024, The Authors. All rights reserved.
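The abstract describes generating image-specific perturbations via cross-attention between perturbation queries and image features. The following is a minimal illustrative sketch of that idea, not the paper's actual architecture: all shapes, variable names, and the tanh/epsilon bounding step are assumptions introduced here for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    # scaled dot-product cross-attention: queries come from one
    # source (perturbation tokens), keys/values from another (image)
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ values

rng = np.random.default_rng(0)
# hypothetical shapes: 16 image feature tokens, 4 perturbation queries
img_tokens = rng.standard_normal((16, 32))   # stand-in for encoder features
pert_tokens = rng.standard_normal((4, 32))   # stand-in for learned queries

# perturbation queries attend to image features, so the resulting
# perturbation depends on the input image (image-specific, not fixed)
pert_features = cross_attention(pert_tokens, img_tokens, img_tokens)

# toy "decoder": pool and bound the perturbation to a small budget eps
eps = 0.05
perturbation = eps * np.tanh(pert_features.mean(axis=0))
print(perturbation.shape)  # (32,)
```

Because the perturbation is conditioned on the image rather than fixed, an attacker cannot recover a single universal template to subtract, which is the reverse-engineering resistance the abstract refers to.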
