Deep learning interpretability analysis methods in image interpretation
Cite this article: GONG Jianya, HUAN Linxi, ZHENG Xianwei. Deep learning interpretability analysis methods in image interpretation[J]. Acta Geodaetica et Cartographica Sinica, 2022, 51(6): 873-884. DOI: 10.11947/j.AGCS.2022.20220106
Authors: GONG Jianya, HUAN Linxi, ZHENG Xianwei
Affiliation: 1. State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China; 2. School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China
Funding: National Natural Science Foundation of China (Nos. 42090010, 42071370)
Abstract: The rapid development of deep learning has greatly improved the accuracy of various image interpretation tasks. However, the black-box nature of deep learning models makes it difficult for users to understand their decision-making mechanisms, which not only hinders model structure optimization and security enhancement, but also greatly increases training and parameter-tuning costs. Focusing on intelligent image interpretation tasks, this paper presents a comprehensive review and comparative analysis of domestic and international research progress on deep learning interpretability. First, current interpretability analysis methods are grouped into six categories: activation maximization methods, surrogate model methods, attribution methods, perturbation-based methods, class activation map based methods, and example-based methods, and the principles, focuses, advantages, and disadvantages of each category are reviewed. Second, eight evaluation metrics for measuring the reliability of the explanations provided by these methods are reviewed, and the currently available open-source libraries for interpretability analysis are surveyed. Building on these open-source libraries, the applicability of current deep learning interpretability methods to remote sensing imagery is then verified, taking interpretability analysis in intelligent remote sensing image interpretation as a case study; the experimental results show that current interpretability methods still have certain limitations in remote sensing interpretation. Finally, the problems of applying existing interpretability algorithms, which were designed for natural images, to remote sensing image interpretation are summarized, and the prospects for designing interpretability analysis methods tailored to the characteristics of remote sensing imagery are discussed. This review aims to provide a reference for researchers, promote research on interpretability methods for remote sensing image interpretation, and thereby provide reliable theoretical support and algorithm-design guidance for applying deep learning to remote sensing image interpretation tasks.
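As a concrete illustration of the attribution category mentioned in the abstract, the minimal sketch below applies Integrated Gradients from Captum, one publicly available interpretability library of the kind the paper surveys (the abstract does not name the specific libraries used in the experiments). The classifier, input image, and target class index are placeholders, not the paper's actual experimental setup.

    # Minimal attribution sketch using the open-source Captum library.
    # Assumptions: an untrained torchvision ResNet-18 stands in for a
    # trained remote sensing scene classifier; "scene_class" is a
    # hypothetical target class index.
    import torch
    from torchvision.models import resnet18
    from captum.attr import IntegratedGradients

    model = resnet18(weights=None).eval()   # placeholder for a trained scene classifier
    image = torch.rand(1, 3, 224, 224)      # placeholder for a preprocessed image patch
    scene_class = 7                         # hypothetical target class index

    # Integrated Gradients accumulates gradients along a straight path
    # from a baseline (here a black image) to the input, producing a
    # per-pixel attribution map for the target class.
    ig = IntegratedGradients(model)
    attributions = ig.attribute(image,
                                baselines=torch.zeros_like(image),
                                target=scene_class,
                                n_steps=50)
    print(attributions.shape)               # same shape as the input image

The resulting attribution tensor has the same shape as the input and can be rendered as a saliency map over the image; such libraries expose similar one-call interfaces for perturbation-based and class-activation-map methods as well.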

Keywords: deep learning; artificial intelligence; remote sensing interpretation; review; interpretability
Received: 2022-02-18
Revised: 2022-04-17
