Visually Augmented Navigation for Autonomous Underwater Vehicles
Abstract: As autonomous underwater vehicles (AUVs) are becoming routinely used in an exploratory context for ocean science, the goal of visually augmented navigation (VAN) is to improve the near-seafloor navigation precision of such vehicles without imposing the burden of having to deploy additional infrastructure. This is in contrast to traditional acoustic long baseline navigation techniques, which require the deployment, calibration, and eventual recovery of a transponder network. To achieve this goal, VAN is formulated within a vision-based simultaneous localization and mapping (SLAM) framework that exploits the systems-level complementary aspects of a camera and strap-down sensor suite. The result is an environmentally based navigation technique robust to the peculiarities of low-overlap underwater imagery. The method employs a view-based representation where camera-derived relative-pose measurements provide spatial constraints, which enforce trajectory consistency and also serve as a mechanism for loop closure, allowing for error growth to be independent of time for revisited imagery. This article outlines the multisensor VAN framework and demonstrates it to have compelling advantages over a purely vision-only approach by: 1) improving the robustness of low-overlap underwater image registration; 2) setting the free gauge scale; and 3) allowing for a disconnected camera-constraint topology.
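The view-based formulation described above can be illustrated with a small, hypothetical pose-graph sketch: poses are nodes, dead-reckoned odometry and camera-derived relative-pose measurements are edges, and a least-squares solve enforces trajectory consistency while a loop-closure edge bounds error growth on revisited imagery. This is a translation-only 2D toy using NumPy, with made-up function and variable names; it is not the paper's full 6-DOF, covariance-weighted estimator.

```python
# Minimal pose-graph sketch (illustrative only, not the authors' implementation).
import numpy as np

def optimize_pose_graph(n_poses, constraints):
    """constraints: list of (i, j, dx, dy) meaning pose_j - pose_i ~= (dx, dy)."""
    A, b = [], []
    for (i, j, dx, dy) in constraints:
        for axis, d in ((0, dx), (1, dy)):
            # One linear equation per measured axis: x_j - x_i = d.
            row = np.zeros(2 * n_poses)
            row[2 * j + axis] += 1.0
            row[2 * i + axis] -= 1.0
            A.append(row)
            b.append(d)
    # Anchor pose 0 at the origin to fix the otherwise free translation gauge.
    for axis in (0, 1):
        row = np.zeros(2 * n_poses)
        row[axis] = 1.0
        A.append(row)
        b.append(0.0)
    x, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return x.reshape(n_poses, 2)

# Example: dead reckoning drifts around a square; a single camera-derived
# loop closure says pose 4 re-observes the start (pose 0), and the solve
# redistributes the accumulated error across the trajectory.
odometry = [(0, 1, 1.0, 0.05), (1, 2, 0.05, 1.0), (2, 3, -1.0, 0.05), (3, 4, 0.05, -1.0)]
loop_closure = [(0, 4, 0.0, 0.0)]
print(optimize_pose_graph(5, odometry + loop_closure))
```

In this toy, the loop-closure edge plays the role of the camera-derived relative-pose constraint: without it, the estimate of pose 4 inherits all of the dead-reckoning drift, whereas with it the error is spread over the graph and no longer grows with mission time for revisited views.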