Funding:National Natural Science Foundation of China (61771250, 61602257, 61972213, 11901299, 61872424, 6193000388)
Received:2019-10-10

A survey of adversarial attacks and defenses on visual perception in automatic driving
YANG Yijun, SHAO Wenze, WANG Liqian, GE Qi, BAO Bingkun, DENG Haisong and LI Haibo. A survey of adversarial attacks and defenses on visual perception in automatic driving[J]. Journal of Nanjing Institute of Meteorology, 2019, 11(6): 651-659.
Authors:YANG Yijun  SHAO Wenze  WANG Liqian  GE Qi  BAO Bingkun  DENG Haisong and LI Haibo
Institution:College of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210003, China; School of Statistics and Mathematics, Nanjing Audit University, Nanjing 211815, China; School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm 10044, Sweden
Abstract:Deep learning has become one of the most active research directions in machine learning, achieving great success in fields such as image recognition, object detection, speech processing, and question answering. However, the emergence of adversarial examples has prompted a rethinking of deep learning. Adversarial examples, constructed by adding specially designed subtle perturbations to inputs, can severely degrade the performance of deep models. Their existence poses new threats and challenges to technical fields with stringent safety requirements, especially automatic driving systems, which rely on visual perception as their primary sensing technology. Research on adversarial attacks and active defenses has therefore become an important cross-cutting topic at the intersection of deep learning and computer vision. This paper first summarizes the relevant concepts of adversarial examples, and then introduces a series of typical adversarial attack methods and defense algorithms in detail. Subsequently, several physical-world attacks against visual perception systems are presented, along with a discussion of their potential impact on automatic driving. Finally, a technical outlook on future research into adversarial attacks and defenses is given.
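To illustrate the abstract's notion of an adversarial example (not a method from the surveyed paper itself), the following is a minimal sketch of a one-step fast-gradient-sign attack on a toy logistic classifier; all weights, inputs, and the perturbation budget below are made-up values chosen for demonstration:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm(x, y, w, b, eps):
    """One-step fast-gradient-sign attack on a logistic model (toy sketch).

    Perturbs each coordinate of x by at most eps in the direction that
    increases the loss, i.e. along the sign of the loss gradient.
    """
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    # Gradient of the logistic loss -log sigmoid(y*z) w.r.t. x is
    # -y * sigmoid(-y*z) * w; only its sign matters for this L-infinity attack.
    grad = [-y * sigmoid(-y * z) * wi for wi in w]
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

w = [2.0, -1.0, 0.5, 1.5]   # hand-set weights of a toy linear classifier
b = 0.0
x = [0.5, -0.5, 1.0, 0.2]   # clean input, correctly classified as +1
y = 1.0                     # true label in {-1, +1}

score = lambda v: sum(wi * vi for wi, vi in zip(w, v)) + b
x_adv = fgsm(x, y, w, b, eps=0.5)
print(score(x))      # 2.3: clean input scores positive (correct class)
print(score(x_adv))  # about -0.2: a bounded perturbation flips the prediction
```

Each coordinate moves by at most eps, yet the decision flips, which is the core property the abstract attributes to adversarial examples: small, designed input changes that break the model's output.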
Keywords:adversarial examples  object detection  semantic segmentation  automatic driving