Author affiliations: NEC Corporation, Kawasaki 211-8666, Japan; University of Tsukuba, Tsukuba 305-8573, Japan; RIKEN AIP, Tokyo 103-0027, Japan; Institute of Science Tokyo, Tokyo 152-8550, Japan
Publication: IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS (IEICE Trans Inf Syst)
Year/Volume/Issue: 2025, Vol. E108D, No. 1
Pages: 92-109
Subject classification: 0809 [Engineering - Electronic Science and Technology (degrees conferrable in Engineering or Science)]; 08 [Engineering]; 0835 [Engineering - Software Engineering]; 0812 [Engineering - Computer Science and Technology (degrees conferrable in Engineering or Science)]
Funding: Japan Science and Technology Agency (JST) CREST [JPMJCR23M4]; Japan Society for the Promotion of Science (JSPS) [23H00483]
Keywords: adversarial example; certified defense; content-based image retrieval
Abstract: This paper develops certified defenses for deep neural network (DNN)-based content-based image retrieval (CBIR) against adversarial examples (AXs). Previous works have focused on certified defenses for classification, which guarantee that no AX causing misclassification exists around a given sample. Such certified defenses, however, cannot be applied to CBIR directly, because the goals of adversarial attacks against classification and against CBIR are completely different. To develop a certified defense for CBIR, we first define a new notion of certified robustness for CBIR, which guarantees that no AX that changes the ranking results of CBIR exists around the input images. We then propose computationally tractable verification algorithms that verify whether a given feature extraction DNN satisfies the certified robustness of CBIR at given input images. The proposed verification algorithms evaluate upper and lower bounds on the distances between the feature representations of perturbed and non-perturbed images, in both deterministic and probabilistic manners. Finally, we propose robust training methods that tighten these bounds and thereby increase the number of inputs for which the feature extraction DNN satisfies the certified robustness of CBIR. We experimentally show that the proposed certified defenses can guarantee robustness deterministically and probabilistically on various datasets.
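To illustrate the bound-based verification idea sketched in the abstract, the following is a minimal, hypothetical Python example. It certifies that the top-1 retrieval result cannot change under small L2 perturbations of the query by assuming a Lipschitz bound on the feature extractor; the function name, the Lipschitz constant, and the toy data are illustrative assumptions, not the paper's algorithms, which use tighter deterministic and probabilistic bounds.

# Illustrative sketch only (assumed setup, not the authors' implementation):
# certify top-1 ranking stability in CBIR via upper/lower distance bounds
# derived from an assumed Lipschitz bound on the feature extractor.
import numpy as np

def certify_top1_ranking(query_feat, gallery_feats, lipschitz_const, eps):
    """Return True if the top-1 result provably cannot change under any
    L2 perturbation of the query input with norm at most eps.

    Under the assumed bound ||f(x + d) - f(x)|| <= L * eps, the
    perturbed-query distance to each gallery feature g lies in the
    interval [d(f(x), g) - L*eps, d(f(x), g) + L*eps].
    """
    dists = np.linalg.norm(gallery_feats - query_feat, axis=1)
    order = np.argsort(dists)
    top1, runner_up = order[0], order[1]

    slack = lipschitz_const * eps
    # The upper bound on the perturbed distance to the current top-1 item
    # must stay below the lower bound on the distance to every other item;
    # checking against the runner-up suffices because it is the closest.
    upper_top1 = dists[top1] + slack
    lower_others = dists[runner_up] - slack
    return upper_top1 < lower_others

# Toy usage with random features (hypothetical numbers).
rng = np.random.default_rng(0)
query = rng.normal(size=128)
gallery = rng.normal(size=(100, 128))
print(certify_top1_ranking(query, gallery, lipschitz_const=1.0, eps=0.05))

The same interval reasoning extends to top-k rankings by comparing the upper bound for every item currently ranked above a cut-off against the lower bound for every item ranked below it.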