Improving the Faster R-CNN Method for Recognizing Typical Objects of Modern Agriculture Based on Remote Sensing Imagery
Cite this article: WANG Xing, KANG Junfeng, LIU Xuejun, WANG Meizhen, ZHANG Chao. Improving the Faster R-CNN Method for Recognizing Typical Objects of Modern Agriculture Based on Remote Sensing Imagery[J]. Geo-information Science, 2019, 21(9): 1444-1454.
Authors: WANG Xing  KANG Junfeng  LIU Xuejun  WANG Meizhen  ZHANG Chao
Affiliations: 1. Key Laboratory of Virtual Geographic Environment (Nanjing Normal University), Ministry of Education, Nanjing 210023; 2. Jiangsu Center for Collaborative Innovation in Geographical Information Resource Development and Application, Nanjing 210023; 3. State Key Laboratory Cultivation Base of Geographical Environment Evolution, Jiangsu Province, Nanjing 210023; 4. School of Architecture and Surveying & Mapping Engineering, Jiangxi University of Science and Technology, Ganzhou 341000
Funding: National Natural Science Foundation of China (No. 41771420); National High Technology Research and Development Program of China (No. 2015AA123901); Priority Academic Program Development of Jiangsu Higher Education Institutions
Abstract: Deep learning methods can markedly improve the accuracy of traditional remote sensing-based methods for recognizing and extracting typical objects of modern agriculture, which is of great significance for the transformation and development of traditional agriculture. Considering that remote sensing imagery typically presents small targets against a large background, and taking the image features of these typical objects into account, this paper combines the idea of deep residual learning with Faster R-CNN and proposes the DRTOMA algorithm. First, a deep residual network is used as the basic feature extraction network, yielding deeper image features while suppressing network degradation. Then, an improved spatial pyramid pooling layer is inserted between the residual units and the fully connected layers, removing the fixed-size restriction on input images and increasing the network's sensitivity to image scale. Finally, dropout layers are added between the fully connected layers to reduce computational complexity and mitigate over-fitting. Simulation results show that, compared with several existing detection algorithms, DRTOMA achieves the best average recognition accuracy and recall rate, 91.87% and 90.63% respectively; with similar best recognition accuracy, DRTOMA's recall rate is about 2% higher than that of Faster R-CNN, its network converges more easily, and it is less difficult to train. In summary, DRTOMA is an effective and feasible method for detecting typical objects of modern agriculture.
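To make the pooling step described above concrete, the sketch below shows a spatial pyramid pooling layer; it is an illustration rather than the authors' code, and the PyTorch framework, the pyramid levels (1, 2, 4), and the use of max pooling are all assumptions. Pooling the feature map onto several fixed grids and concatenating the results produces a vector whose length does not depend on the input size, which is what removes the fixed-size restriction on input images.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialPyramidPooling(nn.Module):
    """Pools a feature map at several grid sizes and concatenates the results,
    yielding a fixed-length vector regardless of the input's spatial size."""
    def __init__(self, levels=(1, 2, 4)):
        super().__init__()
        self.levels = levels

    def forward(self, x):                           # x: (N, C, H, W); H and W may vary
        pooled = []
        for size in self.levels:
            p = F.adaptive_max_pool2d(x, size)      # max pooling onto a size x size grid
            pooled.append(p.flatten(start_dim=1))   # (N, C * size * size)
        return torch.cat(pooled, dim=1)             # (N, C * sum of size^2)

# Feature maps of different sizes map to vectors of the same length.
spp = SpatialPyramidPooling()
feat_a = torch.randn(1, 256, 13, 13)
feat_b = torch.randn(1, 256, 20, 27)
assert spp(feat_a).shape == spp(feat_b).shape       # both (1, 256 * 21) = (1, 5376)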

Keywords: modern agriculture  remote sensing imagery  object detection  Faster R-CNN  deep residual
Received: 2018-12-29

Improving the Faster R-CNN Method for Recognizing Typical Objects of Modern Agriculture Based on Remote Sensing Imagery
WANG Xing,KANG Junfeng,LIU Xuejun,WANG Meizhen,ZHANG Chao.Improving the Faster R-CNN Method for Recognizing Typical Objects of Modern Agriculture based on Remote Sensing Imagery[J].Geo-information Science,2019,21(9):1444-1454.
Authors:WANG Xing  KANG Junfeng  LIU Xuejun  WANG Meizhen  ZHANG Chao
Abstract: The development of modern agriculture is closely tied to the transformation of traditional agriculture. Recognizing and extracting typical objects of modern agriculture (TOMA) from remote sensing imagery has many advantages and has become the mainstream approach in current applications. Because traditional recognition methods are easily affected by external factors (e.g., the shape, size, color, and texture of TOMA, and the distance, angle, and weather conditions under which the imagery is acquired), the accuracy of their results often fails to meet application requirements. In recent years, deep learning methods have been widely applied in many fields, greatly advancing artificial intelligence. Convolutional Neural Networks (CNNs) have achieved breakthrough results in image classification, object detection, semantic segmentation, and related tasks, and many successful architectures have been built on them, such as Regions with CNN (R-CNN), Fast R-CNN, and Mask R-CNN. In particular, Faster R-CNN is one of the mainstream object detection algorithms. However, when applied directly to TOMA recognition, Faster R-CNN still has drawbacks, especially for small targets in a large background. Taking the image features of TOMA into account, this paper proposes a DRTOMA (Deep Residual TOMA) algorithm based on the idea of deep residual networks and Faster R-CNN. First, a deep residual network is used as the basic feature extraction network to obtain deeper features and suppress network degradation. Second, an improved spatial pyramid pooling layer is added between the residual units and the fully connected layers, removing the fixed-size limit on input images while increasing the network's sensitivity to image scale. Last, a dropout layer is added between the fully connected layers to reduce the computational complexity of the network and mitigate over-fitting. Simulation results showed that, compared with some existing algorithms, the DRTOMA algorithm achieved the best average recognition accuracy and recall rate, 91.87% and 90.63%, respectively. Its recognition accuracy was similar to that of Faster R-CNN, but its recall rate was about 2% higher, the network converged more easily, and it could be trained in less time. Our findings suggest that the DRTOMA algorithm is an effective and feasible TOMA detection method.
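The three modifications named in the abstract can be wired together in one compact sketch. The code below is not the published DRTOMA implementation; it is an assumption-laden PyTorch example in which the torchvision ResNet-50 backbone, the pyramid levels, the fully connected layer widths, and the dropout rate are all illustrative choices. It only shows the overall shape of the design: residual feature extraction, spatial pyramid pooling between the residual stages and the fully connected layers, and dropout between those layers.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50

def spp(x, levels=(1, 2, 4)):
    """Spatial pyramid pooling: fixed-length vector from a variable-size feature map."""
    return torch.cat(
        [F.adaptive_max_pool2d(x, s).flatten(start_dim=1) for s in levels], dim=1)

class ResidualSPPHead(nn.Module):
    """Residual backbone -> spatial pyramid pooling -> fully connected layers with dropout."""
    def __init__(self, num_classes, levels=(1, 2, 4), p_drop=0.5):
        super().__init__()
        backbone = resnet50(weights=None)   # torchvision >= 0.13 API; an illustrative backbone choice
        # keep the residual stages, drop ResNet's own global average pool and classifier
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])
        self.levels = levels
        in_features = 2048 * sum(s * s for s in levels)   # ResNet-50 emits 2048 channels
        self.fc = nn.Sequential(
            nn.Linear(in_features, 1024),
            nn.ReLU(inplace=True),
            nn.Dropout(p_drop),             # dropout between the fully connected layers
            nn.Linear(1024, num_classes),
        )

    def forward(self, images):              # images: (N, 3, H, W); H and W may vary
        feats = self.backbone(images)       # deep residual features
        return self.fc(spp(feats, self.levels))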
Keywords:modern agriculture  remote sensing imagery  object recognition  Faster R-CNN  deep residual  