MTDeep: Boosting the security of deep neural nets against adversarial attacks with moving target defense

Authors: Sengupta, Sailik; Chakraborti, Tathagata; Kambhampati, Subbarao

Affiliation: School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, United States

Published in: arXiv

Year: 2017

Keywords: Deep neural networks

Abstract: Recent works on gradient-based attacks and universal perturbations can adversarially modify images to bring the accuracy of state-of-the-art classification techniques based on deep neural networks down to as low as 10% on popular datasets like MNIST and ImageNet. The design of general defense strategies against a wide range of such attacks remains a challenging problem. In this paper, we draw inspiration from recent advances in the fields of cybersecurity and multi-agent systems and propose to use the concept of Moving Target Defense (MTD) to increase the robustness of a set of deep networks against such adversarial attacks. To this end, we formalize and exploit the notion of differential immunity of an ensemble of networks to specific attacks. To classify an input image, a trained network is picked from this set of networks by formulating the interaction between a Defender (who hosts the classification networks) and their (Legitimate and Malicious) Users as a repeated Bayesian Stackelberg Game (BSG). We empirically show that our approach, MTDeep, reduces misclassification on perturbed images for the MNIST and ImageNet datasets while maintaining high classification accuracy on legitimate test images. Lastly, we demonstrate that our framework can be used in conjunction with any existing defense mechanism to provide more resilience to adversarial attacks than those defense mechanisms achieve by themselves. Copyright © 2017, The Authors. All rights reserved.
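The switching mechanism summarized in the abstract can be illustrated with a minimal sketch: a defender holds several trained networks and, for each incoming image, samples one of them according to a mixed strategy. In MTDeep that strategy is obtained by solving the repeated Bayesian Stackelberg Game; the class below only shows the randomized selection step, and all names (MovingTargetClassifier, the strategy weights, the example models in the usage comment) are hypothetical illustrations, not artifacts from the paper.

```python
import random

class MovingTargetClassifier:
    """Sketch of moving-target classification over an ensemble of networks."""

    def __init__(self, networks, strategy):
        # networks: list of callables, each mapping an input image to a label
        # strategy: sampling probability for each network (should sum to 1);
        # in MTDeep this would come from the BSG equilibrium, here it is a
        # placeholder supplied by the caller.
        assert len(networks) == len(strategy)
        self.networks = networks
        self.strategy = strategy

    def classify(self, image):
        # Randomized network switching: because the network that scores a
        # given input is chosen at classification time, a perturbation tuned
        # against one ensemble member need not fool the member actually used.
        net = random.choices(self.networks, weights=self.strategy, k=1)[0]
        return net(image)

# Hypothetical usage with three already-trained MNIST models (cnn, mlp,
# capsnet are assumed names):
# mtdeep = MovingTargetClassifier([cnn, mlp, capsnet], strategy=[0.5, 0.3, 0.2])
# label = mtdeep.classify(test_image)
```

The randomization only helps to the extent that the ensemble is differentially immune, i.e., the networks are not all vulnerable to the same perturbations, which is why the paper formalizes that notion alongside the game-theoretic strategy selection.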
