A fusion method of mobile video and geographic scene
Cite this article: ZHAO Weisong, QIAN Jianguo, TANG Shengjun, WANG Weixi, LI Xiaoming, GUO Han. A fusion method of mobile video and geographic scene[J]. Bulletin of Surveying and Mapping, 2020(12): 11-16.
Authors: ZHAO Weisong  QIAN Jianguo  TANG Shengjun  WANG Weixi  LI Xiaoming  GUO Han
Affiliations: 1. School of Geomatics, Liaoning Technical University, Fuxin 123000, China; 2. Research Institute for Smart Cities, Shenzhen University, Shenzhen 518061, China; 3. Key Laboratory of Urban Land Resources Monitoring and Simulation, Ministry of Land and Resources, Shenzhen 518061, China; 4. Shenzhen Digital City Engineering Research Center, Shenzhen 518034, China
Funding: Youth Program of the National Natural Science Foundation of China (No. 41801392); Free Exploration Program of the Shenzhen Science and Technology Innovation Commission (No. JCTJ20180305125131482); Open Fund of the Key Laboratory of Urban Natural Resources Monitoring and Simulation, Ministry of Natural Resources (No. KF-2019-04-010)
Abstract: Quickly and accurately assessing conditions at a disaster site is a top priority in disaster relief. Unmanned aerial vehicles (UAVs) are commonly deployed for on-site reconnaissance when a disaster occurs, but UAV video is difficult to associate with the actual geographic scene. This paper therefore proposes a method for fusing mobile video with geographic scenes. The method first detects feature points with the affine-invariant ASIFT algorithm, then applies the RANSAC algorithm to the matched feature points to iteratively reject outliers and estimate the optimal parameters of the perspective-transformation matrix between the video and the geographic scene. The estimated perspective-transformation parameters are then applied to the video data to recover the coordinates of the video frame corners. Finally, the corner coordinates of all video frames are obtained by interpolation, achieving accurate fusion of the video with the digital orthophoto map (DOM). Experimental results show that the shorter the interval between matched video frames, the higher the overall fusion accuracy; the standard deviation of the video-to-geographic-scene fusion error with this method is below 10 m.

Keywords: ASIFT algorithm  random sample consensus (RANSAC) algorithm  image matching  perspective transformation  image fusion
Received: 2020-01-13
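The pipeline summarized in the abstract (affine-invariant feature matching, RANSAC outlier rejection, perspective transformation, corner interpolation between matched frames) can be sketched roughly as below. This is a minimal illustration under stated assumptions, not the authors' implementation: OpenCV's core API does not ship ASIFT, so plain SIFT stands in for the affine-invariant detector, and the function names, the 5.0-pixel RANSAC reprojection threshold, and the linear corner interpolation are illustrative choices.

```python
# Sketch of video-to-DOM registration (hypothetical names throughout).
# SIFT stands in for ASIFT; the match-then-fit flow is the same.
import cv2
import numpy as np

def estimate_perspective(frame_gray, dom_gray, ratio=0.75):
    """Match a video frame against the DOM and fit a perspective
    transformation (homography) with RANSAC outlier rejection."""
    sift = cv2.SIFT_create()
    kp_f, des_f = sift.detectAndCompute(frame_gray, None)
    kp_d, des_d = sift.detectAndCompute(dom_gray, None)

    # Lowe's ratio test discards ambiguous correspondences.
    matcher = cv2.BFMatcher()
    good = []
    for pair in matcher.knnMatch(des_f, des_d, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])

    src = np.float32([kp_f[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_d[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC iteratively rejects mismatched points while estimating
    # the optimal 3x3 transformation matrix (needs >= 4 matches).
    H, _inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

def corners_in_dom(frame_shape, H):
    """Project the four frame corners into DOM pixel coordinates."""
    h, w = frame_shape[:2]
    pts = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

def interpolate_corners(c_start, c_end, n_between):
    """Linearly interpolate corner coordinates for the frames lying
    between two matched key frames."""
    ts = np.linspace(0.0, 1.0, n_between + 2)[1:-1]
    return [(1.0 - t) * c_start + t * c_end for t in ts]
```

Matching every k-th frame against the DOM and interpolating the corners of the frames in between mirrors the trade-off reported in the abstract: the shorter the matching interval, the higher the overall fusion accuracy.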
