Semantic segmentation of remote sensing image based on deep fusion networks and conditional random field
Citation: XIAO Chunjiao, LI Yu, ZHANG Hongqun, CHEN Jun. Semantic segmentation of remote sensing image based on deep fusion networks and conditional random field[J]. Journal of Remote Sensing, 2020, 24(3): 254-264.
Authors: XIAO Chunjiao, LI Yu, ZHANG Hongqun, CHEN Jun
Institution: 1. Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, Beijing 100094, China; 2. University of Chinese Academy of Sciences, Beijing 100049, China
Funding: National Natural Science Foundation of China (No. 61501460); Open Fund of the Guangdong Engineering Technology Research Center of Modern Audio-Visual Information
Received: 20 July 2018

Abstract: Image semantic segmentation divides an image into groups of pixel regions with specific semantic meanings and identifies the category of each region. In recent years, semantic segmentation methods based on Convolutional Neural Networks (CNNs) have achieved pixel-to-pixel segmentation and avoid the hand-crafted feature design and selection required by traditional methods. However, because of pooling operations and the lack of context information, detailed image information is neglected, the precision of the final segmentation result is low, and the segmentation edges are inaccurate. This study therefore proposes a semantic segmentation method for remote sensing images based on Deep Fusion Networks (DFN) combined with a conditional random field model. The method first builds a DFN model in a Fully Convolutional Network (FCN) framework by adding a deconvolutional fusion structure. On the one hand, multiscale features are extracted automatically by the deep network, which avoids hand-crafted feature design and selection and improves the generalisation ability of the model. On the other hand, the deconvolutional fusion structure exploits multiscale information by fusing shallow detail information with deep semantic information, which improves the processing accuracy of the model. Finally, a fully connected conditional random field introduces spatial context information to locate boundaries precisely and obtain the final semantic segmentation results.
Experimental results on remote sensing image datasets show that: (1) as the depth of the fusion layers increases, detailed information becomes more abundant, the semantic segmentation results become more refined, and the edge contours move closer to the label image; (2) the fully connected conditional random field model combines the global and local information of the remote sensing image and further improves the efficiency and accuracy of the final semantic segmentation results. From this study, we can conclude that the proposed method effectively improves the accuracy of remote sensing image semantic segmentation and alleviates over-smoothing in the results.
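The abstract does not state the exact CRF formulation, but the fully connected (dense) CRF commonly paired with CNN segmentation outputs minimises an energy over the pixel labelling $\mathbf{x}$:

```latex
E(\mathbf{x}) = \sum_i \psi_u(x_i) + \sum_{i<j} \psi_p(x_i, x_j),
\qquad
\psi_p(x_i, x_j) = \mu(x_i, x_j)\!\left[
w^{(1)} \exp\!\left(-\frac{\lVert p_i - p_j \rVert^2}{2\theta_\alpha^2}
                    -\frac{\lVert I_i - I_j \rVert^2}{2\theta_\beta^2}\right)
+ w^{(2)} \exp\!\left(-\frac{\lVert p_i - p_j \rVert^2}{2\theta_\gamma^2}\right)
\right],
```

where the unary potential $\psi_u$ comes from the network's per-pixel class scores, $p_i$ and $I_i$ are pixel position and colour, and $\mu$ is a label-compatibility function. The appearance kernel encourages nearby pixels with similar colour to share a label (sharpening object boundaries), while the smoothness kernel suppresses small isolated regions, which is how the CRF supplies the spatial context information described above.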
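The deconvolutional fusion step described in the abstract can be illustrated with a minimal sketch (not the authors' code): a coarse, deep feature map is upsampled back to the resolution of a shallow feature map and the two are summed, so fine spatial detail from the shallow layer is combined with semantic context from the deep layer. Nearest-neighbour upsampling stands in for the learned deconvolution (transposed convolution) used in DFN; all names and shapes here are illustrative assumptions.

```python
import numpy as np

def upsample_nn(feat, factor):
    """Nearest-neighbour upsampling of a (C, H, W) feature map."""
    # np.kron repeats each spatial cell factor x factor times per channel.
    return np.kron(feat, np.ones((1, factor, factor)))

def fuse(shallow, deep):
    """Sum a shallow map with an upsampled deep map (same channel count)."""
    factor = shallow.shape[1] // deep.shape[1]
    return shallow + upsample_nn(deep, factor)

shallow = np.random.rand(8, 32, 32)  # fine detail, low-level features
deep = np.random.rand(8, 8, 8)       # coarse, high-level semantics
fused = fuse(shallow, deep)
print(fused.shape)  # (8, 32, 32)
```

In the paper's setting the upsampling weights are learned and fusion is applied at several depths, so each fusion stage reintroduces detail lost to pooling at that scale.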
Keywords: remote sensing image semantic segmentation; fully convolutional networks; conditional random field; fusion structure; deconvolution
This article is indexed in databases including CNKI and VIP.