
Towards robust detection of adversarial examples

Authors: Pang, Tianyu; Du, Chao; Dong, Yinpeng; Zhu, Jun

Affiliation: Department of Computer Science and Technology, Institute for Artificial Intelligence, BNRist Center, State Key Lab for Intell. Tech. and Sys., THBI Lab, Tsinghua University, Beijing, China

Publication: arXiv

Year: 2017


Subject: Entropy

Abstract: Although recent progress is substantial, deep learning methods can be vulnerable to maliciously generated adversarial examples. In this paper, we present a novel training procedure and a thresholding test strategy towards robust detection of adversarial examples. In training, we propose to minimize the reverse cross-entropy (RCE), which encourages a deep network to learn latent representations that better distinguish adversarial examples from normal ones. In testing, we propose to use a thresholding strategy as the detector to filter out adversarial examples for reliable predictions. Our method is simple to implement using standard algorithms, with little extra training cost compared to common cross-entropy minimization. We apply our method to defend against various attack methods on the widely used MNIST and CIFAR-10 datasets, and achieve significant improvements in robust prediction under all threat models in the adversarial setting. Copyright © 2017, The Authors. All rights reserved.
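The reverse cross-entropy objective mentioned in the abstract can be sketched as follows. This is an illustrative assumption, not the authors' reference implementation: it takes RCE to be the cross-entropy between the model's softmax output and a "reversed" label distribution that places zero mass on the true class and uniform mass 1/(K-1) on every other class.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def reverse_cross_entropy(logits, y, eps=1e-12):
    """Sketch of the RCE loss for a single example.

    Cross-entropy against a 'reversed' label distribution r,
    where r[y] = 0 and r[j] = 1/(K-1) for all j != y
    (an assumption based on the abstract's description).
    """
    p = softmax(np.asarray(logits, dtype=float))
    K = p.size
    r = np.full(K, 1.0 / (K - 1))
    r[y] = 0.0
    return -np.sum(r * np.log(p + eps))
```

Under this reading, the loss is smallest when probability mass is spread uniformly over the non-true classes, which is the kind of latent structure the paper's thresholding detector would then exploit at test time.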
