A survey of adversarial attacks and defenses on visual perception in automatic driving

Authors: YANG Yijun, SHAO Wenze, WANG Liqian, GE Qi, BAO Bingkun, DENG Haisong, LI Haibo

Funding: National Natural Science Foundation of China (61771250, 61602257, 61972213, 11901299, 61872424, 6193000388)

    Abstract:

    Nowadays, deep learning has become one of the most active research directions in machine learning, and it has achieved great success in fields such as image recognition, object detection, speech processing, and question answering. However, the emergence of adversarial examples has triggered new thinking about deep learning: by adding specially designed subtle perturbations to an input, an adversarial example can destroy the performance of a deep model. The existence of adversarial examples poses new threats and challenges to technical fields with stringent safety requirements, especially automatic driving systems, which rely on visual perception as their primary technology. Research on adversarial attacks and active defenses has therefore become an important cross-cutting topic in deep learning and computer vision. This paper first summarizes the relevant concepts of adversarial examples, and on that basis introduces a series of typical adversarial attack methods and defense algorithms in detail. Subsequently, several physical-world attacks against visual perception systems are presented, along with a discussion of their potential impact on automatic driving. Finally, we give a technical outlook on future research into adversarial attacks and defenses.
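
    As a concrete illustration of how such a "specially designed subtle perturbation" can be constructed (a minimal sketch for intuition, not code from the paper), the fast gradient sign method (FGSM) of Goodfellow et al. perturbs an input in the direction of the sign of the loss gradient. The linear classifier and weights below are toy values chosen for the example:

```python
import numpy as np

def fgsm(x, w, b, y, eps):
    """FGSM for a logistic-regression classifier p = sigmoid(w @ x + b).

    Perturbs x by eps * sign of the gradient of the cross-entropy loss
    with respect to x, where y is the true label in {0, 1}.
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # model's predicted probability
    grad_x = (p - y) * w                    # d(cross-entropy)/dx in closed form
    return x + eps * np.sign(grad_x)

# Toy model and a correctly classified input (w @ x + b = 0.85 > 0 -> class 1).
w = np.array([2.0, -1.0, 0.5])
b = 0.0
x = np.array([0.3, -0.2, 0.1])

x_adv = fgsm(x, w, b, y=1, eps=0.5)
print(w @ x + b, w @ x_adv + b)  # the sign of the score flips
```

    With eps = 0.5 the perturbed input crosses the decision boundary even though no coordinate changes by more than eps; against deep networks the same principle is applied with a much smaller, visually imperceptible eps.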

Cite this article:

YANG Yijun, SHAO Wenze, WANG Liqian, GE Qi, BAO Bingkun, DENG Haisong, LI Haibo. A survey of adversarial attacks and defenses on visual perception in automatic driving[J]. Journal of Nanjing University of Information Science & Technology (Natural Science Edition), 2019, 11(6): 651-659

History
  • Received: 2019-10-10
  • Published online: 2020-01-19

Address: No. 219 Ningliu Road, Nanjing, Jiangsu Province    Postcode: 210044

Tel: 025-58731025    E-mail: nxdxb@nuist.edu.cn

Journal of Nanjing University of Information Science & Technology © 2025. All rights reserved.