Remote sensing image fusion using a dense convolutional residual network
Citation: CHEN Maomao, GUO Qing, LIU Mingliang, LI An. Remote sensing image fusion using a dense convolutional residual network[J]. Journal of Remote Sensing, 2021, 25(6): 1270-1283.
Authors: CHEN Maomao  GUO Qing  LIU Mingliang  LI An
Institutions: 1. Key Laboratory of Electronics Engineering, College of Heilongjiang Province, Heilongjiang University, Harbin 150080, China; 2. Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China; 3. Key Laboratory of Information Fusion Estimation and Detection, Heilongjiang Province, Heilongjiang University, Harbin 150080, China
Funding: National Natural Science Foundation of China (No. 61771470); Fundamental Research Fund of Heilongjiang University, Heilongjiang Province (No. kjcx201806)
Abstract: Traditional remote sensing image fusion methods often introduce spectral distortion, and most deep-learning-based fusion methods fail to make full use of the information in each convolutional layer. To address these shortcomings, this paper combines the characteristics of densely connected convolutional networks and residual networks and proposes a new fusion network. The network builds multiple dense convolutional blocks to fully exploit the hierarchical features of the convolutional layers, while transition layers between blocks accelerate the information flow, so that features are reused to the greatest extent and rich features are extracted. Residual learning is applied to fit the residual between deep and shallow features, which speeds up the convergence of the network. In the experiments, multispectral (MS) and panchromatic (PAN) images from GaoFen-1 (GF-1) and WorldView-2/3 (WV-2/3), with an MS-to-PAN spatial resolution ratio of 4, are used to evaluate the effectiveness of the proposed method. In terms of both visual quality and quantitative assessment, the fusion results of the proposed method are superior to those of the compared traditional and deep-learning methods, and the network is robust, generalizing to images from other satellites without pre-training. By reusing features, the proposed method achieves high spectral fidelity and improves the rendering of spatial details, which benefits applied research on remote sensing images.
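As a rough illustration of the dense-convolution idea described in the abstract, the following PyTorch-style sketch shows one dense convolutional block, in which every layer takes the concatenation of all earlier feature maps as input, followed by a transition layer that compresses channels between blocks. The layer count, growth rate, and kernel sizes here are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """One dense convolutional block: each layer receives the concatenated
    outputs of all previous layers, so hierarchical features are reused."""
    def __init__(self, in_channels, growth_rate=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            ))
            channels += growth_rate
        self.out_channels = channels

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        # Return the concatenation of the input and every layer's output.
        return torch.cat(features, dim=1)

class Transition(nn.Module):
    """1x1 convolution between blocks that compresses channels and speeds up
    information flow; spatial size is kept, since pan-sharpening needs
    full-resolution detail."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        return self.conv(x)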

Keywords: remote sensing image fusion  deep learning  densely connected convolutional network  dense convolutional blocks  residual learning
Received: 2019-11-22

Pan-sharpening by residual network with dense convolution for remote sensing images
CHEN Maomao, GUO Qing, LIU Mingliang, LI An. Pan-sharpening by residual network with dense convolution for remote sensing images[J]. Journal of Remote Sensing, 2021, 25(6): 1270-1283.
Authors: CHEN Maomao  GUO Qing  LIU Mingliang  LI An
Institution:1.Key Laboratory of Electronics Engineering, College of Heilongjiang Province, Heilongjiang University, Harbin 150080, China;2.Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China;3.Key Laboratory of Information Fusion Estimation and Detection, Heilongjiang Province, Heilongjiang University, Harbin 150080, China
Abstract: Pan-sharpening (also known as remote sensing image fusion) aims to generate multispectral (MS) images with both high spatial and high spectral resolution by fusing high-spatial-resolution panchromatic (PAN) images with low-spatial-resolution, high-spectral-resolution MS images. Traditional pan-sharpening methods mainly include component substitution, multiresolution analysis, and model-based optimization. These methods rely on linear models, which makes it difficult to achieve an appropriate trade-off between spatial improvement and spectral preservation, and they often introduce spectral or spatial distortion. Recently, many fusion methods based on deep learning have been proposed, but their networks are relatively shallow, and detailed information is inevitably lost during feature transfer. Hence, we propose a deep residual network with dense convolution for pan-sharpening.

As the network becomes deeper, the features of different levels become complementary to one another. However, most fusion methods based on deep learning do not make full use of the information of each convolutional layer. A densely connected convolutional network allows the features of all previous layers to be used as input for each layer within a densely connected block. To fully utilize the features learned by all convolutional layers, we build multiple dense convolutional blocks to reuse features, and the information flow is accelerated by a transition layer between every two blocks. Together, these maximize the use of features and extract rich features. Given the strong correlation between deep and shallow features, residual learning is used to supervise the dense convolutional structure to learn the difference between them, that is, the residual features. Residual learning then combines the shallow features and the residual features to obtain higher-level information from the MS and PAN images, preparing for fused images with high spatial and spectral resolution.

To evaluate the effectiveness of the proposed method, we conduct simulated and real-image experiments on 4-band GaoFen-1 data and 8-band WorldView-2 data covering multiple land types. The trained network also generalizes well to WorldView-3 images without pre-training. The visual and quantitative assessment results show that the high-resolution fused images obtained with the proposed method are superior to those produced by the compared traditional and deep-learning methods. By reusing features, the proposed approach achieves high spectral fidelity and enhances spatial details.

The proposed method makes comprehensive use of the advantages of dense convolutional blocks and residual learning. In the feature extraction stage, features of different levels are connected in series through the dense convolutional blocks, which makes the transmission of features and gradients effective, alleviates the vanishing-gradient problem, and provides rich spatial and spectral features for the fusion results. In the feature fusion stage, residual learning is used to learn the difference between deep and shallow features, that is, the residual features, which accelerates the convergence of the network. The experimental results show that our network has good fusion and generalization abilities.
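The residual-learning fusion stage described above can be sketched as follows, reusing the DenseBlock and Transition modules from the earlier sketch. The channel widths, the number of blocks, and the bicubic upsampling of the MS input are illustrative assumptions rather than the paper's exact design; the point is that the dense structure learns a residual relative to the shallow features, and the combination of the two is mapped back to the MS bands.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualFusionNet(nn.Module):
    def __init__(self, ms_bands=4):
        super().__init__()
        # Shallow features from the stacked (upsampled MS + PAN) input.
        self.shallow = nn.Conv2d(ms_bands + 1, 64, kernel_size=3, padding=1)
        # Two dense blocks linked by transition layers (counts are assumed).
        self.block1 = DenseBlock(64, growth_rate=32, num_layers=4)
        self.trans1 = Transition(self.block1.out_channels, 64)
        self.block2 = DenseBlock(64, growth_rate=32, num_layers=4)
        self.trans2 = Transition(self.block2.out_channels, 64)
        # Map the combined features back to the MS bands.
        self.reconstruct = nn.Conv2d(64, ms_bands, kernel_size=3, padding=1)

    def forward(self, ms, pan):
        # The MS/PAN spatial resolution ratio is 4, so upsample MS to PAN size.
        ms_up = F.interpolate(ms, size=pan.shape[-2:], mode='bicubic',
                              align_corners=False)
        shallow = self.shallow(torch.cat([ms_up, pan], dim=1))
        # The dense blocks learn the residual relative to the shallow features.
        residual = self.trans2(self.block2(self.trans1(self.block1(shallow))))
        # Residual learning: combine shallow and residual features, then
        # reconstruct the fused high-resolution MS image.
        return self.reconstruct(shallow + residual)

# Usage on dummy tensors: a 4-band 64x64 MS patch and a 256x256 PAN patch.
ms, pan = torch.randn(1, 4, 64, 64), torch.randn(1, 1, 256, 256)
print(ResidualFusionNet(ms_bands=4)(ms, pan).shape)  # torch.Size([1, 4, 256, 256])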
Keywords: pan-sharpening  remote sensing image fusion  deep learning  densely connected convolutional network  dense convolutional blocks  residual learning