Nowadays, adversarial image examples, generated by adding crafted perturbations to host images, severely affect many deep neural network (DNN)-based computer vision tasks and pose security risks. With deeper insight into this field, researchers have realized that many simple image processing operations can sharply reduce the attack success rate. Consequently, research has shifted toward robust adversarial examples that survive such destructive processing. However, most existing methods focus on distortions introduced during user transmission and on physical deformation, such as JPEG compression and brightness changes. Given that scale transformation is one of the most common operations in data transformation and augmentation, a two-step algorithm is proposed to eliminate the effect of resizing during model inference. The first step selects, through a DNN, the pixels that can affect the classification result; cast as a binary classification task, the network predicts whether each pixel deserves to be modified. The second step strategically computes the amplitude of the noise and adds it to the host image. Experiments verify that the method resists scale transformation while remaining invisible to humans.
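The abstract does not spell out the architectures or the optimization used in either step, so the following is only a minimal sketch of the two-step pipeline under stated assumptions: `PixelSelector` is a hypothetical per-pixel binary classifier standing in for the paper's selection network, and step 2 is approximated with a PGD-style sign update whose loss is averaged over several resizing factors (an expectation-over-transformations trick) so the perturbation survives scale transformation. All hyperparameters and names are illustrative, not the authors' method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelSelector(nn.Module):
    """Step 1 (assumed architecture): per-pixel binary classifier that
    predicts a mask of pixels worth perturbing."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),          # one logit per pixel
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))  # selection probability per pixel

def craft_scale_robust_example(image, label, selector, classifier,
                               steps=40, alpha=1 / 255, eps=8 / 255,
                               scales=(0.5, 0.75, 1.0, 1.25)):
    """Step 2 (sketch): compute the noise amplitude only on selected pixels,
    averaging the adversarial loss over several resizing factors so the
    perturbation still fools the classifier after scale transformation."""
    mask = (selector(image) > 0.5).float()        # Step 1: pixel-selection mask
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = 0.0
        for s in scales:                          # average over resizings
            adv = image + delta * mask
            size = [max(1, int(d * s)) for d in image.shape[-2:]]
            resized = F.interpolate(adv, size=size, mode="bilinear",
                                    align_corners=False)
            loss = loss + F.cross_entropy(classifier(resized), label)
        loss.backward()
        with torch.no_grad():                     # PGD-style sign update
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)               # keep perturbation small/invisible
            delta.grad.zero_()
    return (image + delta.detach() * mask).clamp(0, 1)
```

Restricting the update to the selected mask keeps the perturbation sparse and hard to notice, while averaging the loss across scales is one plausible way to realize the resizing robustness the abstract claims.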