dc.contributor.author  Su, Jiawei
dc.contributor.author  Vargas, Danilo Vasconcellos
dc.contributor.author  Sakurai, Kouichi
dc.date.accessioned  2019-10-28T20:51:15Z
dc.date.available  2019-10-28T20:51:15Z
dc.date.created  2018-09-05 00:47
dc.date.issued  2017-10-24
dc.identifier  oai:arXiv.org:1710.08864
dc.identifier  http://arxiv.org/abs/1710.08864
dc.identifier.uri  http://hdl.handle.net/20.500.12424/2495604
dc.description.abstract  Recent research has revealed that the output of deep neural networks (DNNs) is not continuous and is very sensitive to tiny perturbations of the input vector; accordingly, several methods have been proposed for crafting effective perturbations against such networks. In this paper, we propose a novel method for optimally calculating extremely small adversarial perturbations (few-pixel attack), based on differential evolution. It requires much less adversarial information and works with a broader class of DNN models. The results show that 73.8% of the test images can be turned into adversarial images by modifying just one pixel, with 98.7% confidence on average. In addition, it is known that investigating the robustness of DNNs can yield critical clues for understanding the geometrical features of the DNN decision map in high-dimensional input space. The results of the few-pixel attack contribute quantitative measurements and analysis to this geometrical understanding from a perspective different from that of previous works.
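The abstract describes the attack only at a high level: encode a single-pixel change as a candidate solution and let differential evolution search for the change that most reduces the classifier's confidence in the true class. The sketch below is a loose illustration of that loop, not the paper's implementation: it uses a hypothetical toy linear classifier over an 8x8 image (the paper attacks real CNNs) and SciPy's `differential_evolution` as the optimizer; all names and the model itself are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical stand-in for a trained classifier: a fixed linear model
# over a flattened 8x8 grayscale "image" with two classes.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 64))

def predict_proba(img):
    """Softmax over two classes for an 8x8 image with pixels in [0, 1]."""
    logits = W @ img.ravel()
    e = np.exp(logits - logits.max())
    return e / e.sum()

# A clean image and the class the model assigns to it.
x = rng.uniform(0.0, 1.0, size=(8, 8))
true_class = int(np.argmax(predict_proba(x)))

def apply_one_pixel(img, p):
    """Candidate p = (row, col, value): overwrite exactly one pixel."""
    out = img.copy()
    out[int(p[0]), int(p[1])] = p[2]
    return out

def objective(p):
    # Differential evolution minimizes this: the confidence the model
    # still assigns to the true class after the one-pixel change.
    return predict_proba(apply_one_pixel(x, p))[true_class]

# Each candidate encodes (row, col, new pixel value).
bounds = [(0, 7.999), (0, 7.999), (0.0, 1.0)]
result = differential_evolution(objective, bounds, seed=1, maxiter=50)

adv = apply_one_pixel(x, result.x)
print("true-class confidence before:", predict_proba(x)[true_class])
print("true-class confidence after :", predict_proba(adv)[true_class])
```

Note the key property of the attack: the perturbation is constrained by construction, since a candidate can only rewrite a single pixel, so the optimizer searches over position and value rather than over a full-image noise pattern.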
dc.subject  Computer Science - Learning
dc.subject  Computer Science - Computer Vision and Pattern Recognition
dc.subject  Statistics - Machine Learning
dc.title  One pixel attack for fooling deep neural networks
dc.type  text
ge.collectioncode  OAIDATA
ge.dataimportlabel  OAI metadata object
ge.identifier.legacy  globethics:15144684
ge.identifier.permalink  https://www.globethics.net/gel/15144684
ge.lastmodificationdate  2018-09-05 00:47
ge.lastmodificationuser  admin@pointsoftware.ch (import)
ge.submissions  0
ge.oai.exportid  149801
ge.oai.repositoryid  58
ge.oai.setname  Computer Science
ge.oai.setspec  cs
ge.oai.streamid  2
ge.setname  GlobeEthicsLib
ge.setspec  globeethicslib
ge.link  http://arxiv.org/abs/1710.08864

