Similar Documents
20 similar documents found
1.
A method of compressing LAGEOS laser range measurements into normal points is described. First, the raw range measurements are screened, on a pass-by-pass basis, by a filter which utilizes the fact that the range varies with time very nearly like a quadratic curve. Second, the time series of the filtered range data over each pass is divided into small segments, typically of 150 s duration. Third, each separate segment is modelled by a unique low-order Chebyshev polynomial, typically of order four. Fourth, the polynomial is interpolated at some instant, typically that corresponding to the mean time of the segment, to yield the normal point. Baseline measurements across Australia obtained from full rate data and from data compressed into 150 s normal points agree to within the measurement error. These differences are mainly due to the effects of data distribution and pass geometry. Computer costs are substantially reduced.
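The per-segment compression (steps two to four) can be sketched in a few lines; the segment length, polynomial order, and noise level below are illustrative values, not data from the paper:

```python
import numpy as np

def normal_point(t, r, order=4):
    """Fit a low-order Chebyshev polynomial to one segment of filtered range
    data and evaluate it at the segment's mean epoch (the normal point)."""
    cheb = np.polynomial.Chebyshev.fit(t, r, deg=order)  # maps t into [-1, 1]
    t_mid = t.mean()
    return t_mid, cheb(t_mid)

# Synthetic 150 s pass segment: near-quadratic range variation plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 150.0, 50)
r = 7.5e6 + 20.0 * t - 0.05 * t**2 + rng.normal(0.0, 0.02, t.size)
t0, r0 = normal_point(t, r)
```

Because the fitted polynomial averages out the per-shot noise, the interpolated normal point is far more precise than any single range measurement.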

2.
This paper introduces the principles and methods for processing data from an Airborne Laser Scanning remote sensing acquisition system. The 3D coordinates of ground targets acquired by such a system are randomly distributed, irregular, and discrete, and fall on different terrain-surface targets. The purpose of processing this data is to separate the points lying on those different targets, that is, to separate points on the terrain surface from points on non-terrain surfaces effectively and accurately. The paper also reviews the system's application areas and the development prospects of the technology.

3.
Laser scanning systems have been established as leading tools for the collection of high density three-dimensional data over physical surfaces. The collected point cloud does not provide semantic information about the characteristics of the scanned surfaces. Therefore, different processing techniques have been developed for the extraction of useful information from this data which could be applied for diverse civil, industrial, and military applications. Planar and linear/cylindrical features are among the most important primitive information to be extracted from laser scanning data, especially those collected in urban areas. This paper introduces a new approach for the identification, parameterization, and segmentation of these features from laser scanning data while considering the internal characteristics of the utilized point cloud – i.e., local point density variation and noise level in the dataset. In the first step of this approach, a Principal Component Analysis of the local neighborhood of individual points is implemented to identify the points that belong to planar and linear/cylindrical features and select their appropriate representation model. For the detected planar features, the segmentation attributes are then computed through an adaptive cylinder neighborhood definition. Two clustering approaches are then introduced to segment and extract individual planar features in the reconstructed parameter domain. For the linear/cylindrical features, their directional and positional parameters are utilized as the segmentation attributes. A sequential clustering technique is proposed to isolate the points which belong to individual linear/cylindrical features through directional and positional attribute subspaces. Experimental results from simulated and real datasets demonstrate the feasibility of the proposed approach for the extraction of planar and linear/cylindrical features from laser scanning data.
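A minimal sketch of the PCA-based identification step, assuming simple eigenvalue-ratio rules (the thresholds and dimensionality measures here are illustrative, not the paper's):

```python
import numpy as np

def classify_neighborhood(pts, lin_t=0.7, plan_t=0.7):
    """Label a local neighborhood 'linear', 'planar' or 'volumetric' from the
    eigenvalues of its covariance matrix (PCA)."""
    c = pts - pts.mean(axis=0)
    l3, l2, l1 = np.linalg.eigvalsh(c.T @ c / len(pts))  # ascending order
    linearity = (l1 - l2) / l1    # ~1 for line/cylinder-like neighborhoods
    planarity = (l2 - l3) / l1    # ~1 for plane-like neighborhoods
    if linearity > lin_t:
        return "linear"
    if planarity > plan_t:
        return "planar"
    return "volumetric"

rng = np.random.default_rng(1)
plane = np.c_[rng.uniform(-1, 1, (200, 2)), rng.normal(0, 0.002, 200)]
line = np.c_[np.linspace(0, 1, 200), np.zeros((200, 2))] + rng.normal(0, 0.002, (200, 3))
```

In practice the neighborhood size would be adapted to the local point density and noise level, as the abstract emphasizes.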

4.
Research on acquiring mining subsidence basins by 3D laser scanning
陈冉丽  吴侃 《测绘工程》2012,21(3):67-70
Conventional surface monitoring stations require a large number of survey points, occupy farmland, and leave the points difficult to protect. To address these drawbacks, a new approach of acquiring the mining subsidence basin by 3D laser scanning is proposed. The principles, data processing steps, and methods of acquiring the subsidence basin with 3D laser scanning are studied, and a field case study was carried out over the working face of a mine. The results show that the method achieves relatively high accuracy and has value for engineering applications.

5.
Targeting the roughly cylindrical shape of tunnels and the sensitivity of 3D point cloud normals to noise, a method is designed to remove the noise points caused by passing vehicles and objects suspended from the tunnel wall. The method robustly estimates an accurate tunnel axis from the large set of point normals, identifies reliable tunnel-surface points from the deviation between each point's normal and the tunnel axis, and then confirms the remaining noise points by reference to those reliable points. Experiments on simulated data and real highway tunnel scans demonstrate the method's effectiveness and accuracy.
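The axis-estimation idea can be sketched as follows; this is a plain least-squares version (the paper's estimator is robust to noisy normals), exploiting the fact that every cylinder-wall normal is perpendicular to the axis:

```python
import numpy as np

def tunnel_axis(normals):
    """The wall normals of a cylindrical tunnel are all (nearly) perpendicular
    to its axis, so the axis is the eigenvector of the normals' scatter
    matrix with the smallest eigenvalue."""
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    _, v = np.linalg.eigh(n.T @ n)        # eigenvalues in ascending order
    return v[:, 0]

# Synthetic tunnel along x: normals point radially in the y-z plane, with a
# small noisy axial component.
rng = np.random.default_rng(2)
theta = rng.uniform(0.0, 2 * np.pi, 500)
normals = np.c_[rng.normal(0.0, 0.02, 500), np.cos(theta), np.sin(theta)]
axis = tunnel_axis(normals)
```

Points whose normals deviate strongly from perpendicularity to this axis would then be candidates for the noise classes described above.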

6.
Real-time vehicle-borne laser point clouds are used for fast multi-object detection and tracking in urban environments. Tracking dynamic objects is key to autonomous driving in cities and a difficult problem in 3D urban scene perception. Compared with images, 3D laser point clouds are better suited to estimating object shape and predicting motion, so they are widely used in autonomous driving systems. Within a tracking framework based on object models and Kalman filtering, a fast tracking algorithm that associates historical tracking results with new detections is proposed to address the over-segmentation and under-segmentation common in sparse point clouds. Tracking results serve as prior knowledge and are associated with detections at the next epoch, improving detection stability. The algorithm has been deployed on an autonomous vehicle equipped with a 3D laser scanner; experiments show that it suits urban traffic scenes and meets real-time requirements, averaging 58 ms per frame.
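The Kalman-filter backbone of such a tracking framework can be sketched with a constant-velocity model; the state layout, frame rate, and noise levels below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def kf_predict(x, P, F, Q):
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, H, R):
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

dt = 0.1                                   # assumed 10 Hz lidar frame rate
# State [px, py, vx, vy]: constant-velocity motion, position-only detections.
F = np.block([[np.eye(2), dt * np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])
H = np.hstack([np.eye(2), np.zeros((2, 2))])
Q, R = 0.01 * np.eye(4), 0.05 * np.eye(2)
x, P = np.array([0.0, 0.0, 1.0, 0.0]), np.eye(4)
# Track a detected object moving at 1 m/s along x for 20 frames.
for k in range(1, 21):
    x, P = kf_predict(x, P, F, Q)
    x, P = kf_update(x, P, np.array([k * dt, 0.0]), H, R)
```

The predicted state is what gets associated with the next frame's detections, which is how the tracker bridges over- and under-segmented frames.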

7.
Starlette was launched in 1975 in order to study temporal variations in the Earth’s gravity field; in particular, tidal and Earth rotation effects. For the period April 1983 to April 1984, over 12,700 normal points of laser ranging data to Starlette have been sub-divided into 49 near-consecutive 5–6 day arcs. Normal equations for each arc, as obtained from a least-squares data reduction procedure, were solved for ocean tidal parameters along with other geodetic and geodynamic parameters. The tidal parameters are defined relative to Wahr’s body tides and Wahr’s nutation model and show fair agreement with other satellite-derived results and those obtained from spherical harmonic decomposition of global ocean tidal models.

8.
Discriminating laser scanner data points belonging to ground from points above-ground (vegetation or buildings) is a key issue in research. Methods for filtering points into ground and non-ground classes have been widely studied mostly on datasets derived from airborne laser scanners, less so for terrestrial laser scanners. Recent developments in terrestrial laser sensors (longer ranges, faster acquisition and multiple return echoes) have aroused greater interest for surface modelling applications. The downside of TLS is that a typical dataset has high variability in point density, with evident side-effects on processing methods and CPU-time. In this work we use a scan dataset from a sensor which returns multiple target echoes, in this case providing more than 70 million points on our study site. The area presents low, medium and high vegetation, undergrowth with varying density, as well as bare ground with varying morphology (i.e. very steep slopes as well as flat areas). We test an integrated work-flow for defining a terrain and surface model (DTM and DSM) and successively for extracting information on vegetation density and height distribution in such a complex environment. Attention was given to efficiency and speed of processing. The method consists of a first step which subsets the original points to define ground candidates by taking into account the ordinal return number and the amplitude. A custom progressive morphological filter (opening operation) is applied next, on ground candidate points, using a multidimensional grid to account for the fall-off in point density as a function of distance from the scanner. Vegetation density mapping over the area is then estimated using a weighted ratio of point counts in the three-dimensional space over each cell.
The overall result is a pipeline for processing TLS point clouds with minimal user interaction, producing a Digital Terrain Model (DTM), a Digital Surface Model (DSM), a vegetation density map and a derived Canopy Height Model (CHM). These products are of high importance for many applications ranging from forestry to hydrology and geomorphology.
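The progressive morphological opening on a gridded surface can be sketched as follows; the window sizes and height thresholds are illustrative, and `scipy.ndimage.grey_opening` stands in for the custom multidimensional-grid filter described above:

```python
import numpy as np
from scipy.ndimage import grey_opening

def ground_mask(zgrid, windows=(3, 5, 9), dz=(0.3, 0.5, 1.0)):
    """Progressive morphological filter on a gridded minimum-z surface: open
    with growing windows and flag cells rising above the opened surface by
    more than the stage's threshold as non-ground."""
    keep = np.ones_like(zgrid, dtype=bool)
    surf = zgrid.copy()
    for w, t in zip(windows, dz):
        opened = grey_opening(surf, size=(w, w))
        keep &= (surf - opened) <= t
        surf = opened
    return keep

# Flat ground at z = 0 with a 5x5 vegetation patch 2 m high in the middle.
z = np.zeros((30, 30))
z[12:17, 12:17] = 2.0
mask = ground_mask(z)
```

Growing the window progressively lets the filter remove large objects at the late stages without clipping genuine terrain relief at the early ones.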

9.
The combined adjustment of China's astro-geodetic network and the GPS2000 network is a large-scale survey data processing project with a huge number of unknowns to solve. To check the raw observations for gross errors and to verify the weight matching between the terrestrial and space networks, single-region adjustment processing needs to be studied and computed. This paper discusses the main models used in single-region adjustment, efficient solution methods for large sparse matrices, and their software implementation; the software was used to adjust a test region of 5,000 points, yielding useful conclusions.
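The sparse normal-equation solve at the heart of such an adjustment can be sketched on a toy network; the levelling-style observations and the `scipy.sparse` solver below are illustrative stand-ins for the production software:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

# Toy network: each observation is a height difference between two nodes, so
# the design matrix has only two nonzeros per row -- a tiny stand-in for the
# block-sparse normal equations of a real combined adjustment.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
true_h = np.array([0.0, 1.0, 3.0, 2.0])
rows, cols, vals, obs = [], [], [], []
for i, (a, b) in enumerate(edges):
    rows += [i, i]; cols += [a, b]; vals += [-1.0, 1.0]
    obs.append(true_h[b] - true_h[a])
A = sp.csr_matrix((vals, (rows, cols)), shape=(len(edges), 4))
# Fix node 0 as datum by dropping its column, then solve N x = A^T l.
Ar = A[:, 1:]
N = (Ar.T @ Ar).tocsc()
x = spsolve(N, Ar.T @ np.array(obs))
```

At the scale of a national network the same structure holds, but the solver must exploit the sparsity pattern (block ordering, sparse Cholesky) rather than form dense matrices.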

10.
Collecting data to make an accurate representation of roads is an expensive process. Nevertheless, there is a collaborative alternative for this endeavour. Many drivers, bicyclists and even pedestrians carry consumer-grade (low-precision) GPS on their smartphones or electronic devices. Those users could transfer their road or track itineraries to a large database in order to compute the accurate geometry of any route. For each road or track, the large database would have many traces from which to infer an accurate representative 3D axis. Several inference methods have been proposed, but most of them fit the 2D trace data set. We propose to create a set of ordered points from the 3D trace data set and then, using the least-squares method, to fit a B-spline curve to those points. The resulting parameterized curve is a good representative 3D axis of the traces. Our method considers the nodes to be evenly separated and allows the system to recommend the maximum number of nodes needed to reach convergence.
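A sketch of the proposed fit, assuming chord-length parameterization and evenly spaced internal knots (the knot count and test curve are illustrative choices, not the paper's):

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

def fit_axis(points, n_internal=5, k=3):
    # Normalized chord-length parameter for each ordered trace point.
    d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(points, axis=0), axis=1))]
    u = d / d[-1]
    # Evenly spaced internal knots ("evenly separated nodes"), clamped ends.
    t = np.r_[[0.0] * (k + 1),
              np.linspace(0.0, 1.0, n_internal + 2)[1:-1],
              [1.0] * (k + 1)]
    # One least-squares B-spline per coordinate.
    return [make_lsq_spline(u, points[:, i], t, k=k) for i in range(3)], u

rng = np.random.default_rng(3)
s = np.linspace(0.0, 1.0, 400)
road = np.c_[s, 0.3 * np.sin(2 * np.pi * s), 0.05 * s]   # smooth 3D axis
noisy = road + rng.normal(0.0, 0.01, road.shape)          # GPS-like noise
splines, u = fit_axis(noisy)
fitted = np.vstack([sp(u) for sp in splines]).T
```

Because many noisy traces contribute observations to a small number of spline coefficients, the least-squares fit averages out the consumer-grade GPS noise.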

11.
Registration is a necessary step in processing terrestrial laser point clouds, but registration based on corresponding points has long been a bottleneck for processing efficiency, while methods that match clouds directly by identifying corresponding features demand large overlap between scans. This paper proposes a terrestrial laser point cloud registration method that fuses semantic features with GPS positions. Semantic knowledge automatically identifies ground features and building-facade features in the raw 3D point cloud, and these two types of planar feature, combined with the GPS position of each scan-station centre, serve as corresponding targets for initial registration; precise registration is then performed with ICP under a minimum point-to-plane distance constraint. Experiments show that the method effectively improves the overall efficiency of terrestrial laser point cloud registration, performing especially well in scenes containing planar structures such as roads and buildings.

12.
1 Introduction Remote sensing has been applied in many fields in the past decades, but the mode to acquire and process the remote sensing data does not change radically. The remote sensing image must be geo-referenced through ground control points (GCPs), and stereo matching must be applied i…

13.
A contour-based filtering method is proposed. A digital surface model (DSM) is first generated from the LIDAR data and contours are interpolated from it. Based on characteristics of the DSM contours, such as closure, the distance between start and end points, contour length, and the spacing between contours, contour segments belonging to the natural ground are extracted automatically by thresholding, yielding initial bare-ground points from which an initial digital terrain model (DTM) is interpolated. The final (refined) DTM is then generated by iterative approximation: the initial DTM is compared with the DSM, points whose difference is below a preset threshold are treated as DTM points, while points above the threshold are marked as no-data; finally, the no-data points are interpolated from the selected DTM points. Comparative experiments against existing surface-estimation filters, and overlays of the extracted object outlines on aerial images, show that the new method handles terrain with large relief and offers high object-extraction accuracy, low computational cost, and high efficiency.
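The iterative-approximation step can be sketched on a toy grid; nearest-neighbour interpolation and a single threshold stand in for the paper's contour-based interpolator and parameter choices:

```python
import numpy as np
from scipy.interpolate import griddata

def refine_dtm(dsm, ground, dz=0.5, n_iter=5):
    """Iterative approximation: interpolate a DTM from the currently accepted
    ground cells, re-test every cell of the DSM against it, and repeat."""
    yy, xx = np.indices(dsm.shape)
    dtm = dsm.copy()
    for _ in range(n_iter):
        # Nearest-neighbour interpolation from accepted ground cells
        # (a crude stand-in for the contour/TIN interpolation in the paper).
        dtm = griddata((yy[ground], xx[ground]), dsm[ground],
                       (yy, xx), method="nearest")
        new_ground = (dsm - dtm) < dz
        if np.array_equal(new_ground, ground):
            break
        ground = new_ground
    return ground, dtm

# Gentle slope with a 3x3 building block 5 m high; seed with the lowest cells.
dsm = 0.1 * np.indices((20, 20))[1].astype(float)
dsm[8:11, 8:11] += 5.0
seed = dsm <= np.quantile(dsm, 0.1)
ground, dtm = refine_dtm(dsm, seed)
```

Each pass lets accepted ground climb the slope while the building, which always sits far above the interpolated DTM, stays excluded.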

14.
张永军  王博  黄旭  段延松 《测绘学报》2014,43(7):717-723
A set of methods for removing gross errors from image matching is proposed. Systematic differences between the matched image pair are first eliminated; a triangulated network is then built on the matched corresponding points to partition the matching results into local surface elements; vector statistics are computed on each local element, introducing a locally sensitive vector descriptor; and a reasonable threshold is set under the assumption that the errors follow a normal distribution, finally achieving fast removal of gross matching errors. Experiments on multiple datasets verify the feasibility of the proposed method, which has been successfully applied in automated processing systems for satellite and low-altitude imagery.

15.
The variance component estimation (VCE) method as developed by Helmert has been applied to the global SLR data set for the year 1987. In the first part of this study the observations have been divided into two groups: those from ruby and YAG laser systems, and their weights estimated over several months. It was found that the weights of both sets of stations altered slightly from month to month, but that, not surprisingly, the YAG systems consistently outperformed those based on ruby lasers. The major part of this paper then considers the estimation of the variance components (i.e. weights) at each SLR station from month to month. These were tested using the F-statistic and, although it indicated that most stations had significant temporal variations, they were generally small compared with the differences between the stations themselves, i.e. the method has been shown to be capable of discriminating between the precision with which the various laser stations are operating. The station coordinates and baseline lengths computed using both a priori, and estimated, weights were also compared, and this showed that changes in the weights can have significant effects on the estimation of the station positions, particularly in the z component, and on the baseline lengths, so proving the importance of proper stochastic modelling when processing SLR data.

16.
An improved EM model and its application to decomposing full-waveform LiDAR data
With improvements in data storage capacity and processing speed, small-footprint airborne LiDAR systems can now digitize and store the entire return waveform, rather than only the 3D coordinates extracted by the system (i.e., a discrete point cloud). One of the most important advantages of analyzing waveform data is that users can extract the 3D coordinates themselves during post-processing. Common decomposition methods are based on nonlinear least-squares polynomial fitting, or on the simple thresholding supplied by instrument vendors, and cannot achieve high-accuracy decomposition. This paper uses an improved EM pulse-detection algorithm to obtain the position and width of each return pulse, and shows it to be a reliable and relatively accurate waveform decomposition algorithm.
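A generic waveform decomposition (a sum-of-Gaussians fit by nonlinear least squares, i.e. the kind of baseline the paper improves on with its EM detector) can be sketched as follows; the pulse shapes and noise level are simulated, not real waveform data:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(t, a1, m1, s1, a2, m2, s2):
    """Sum of two Gaussian pulses: amplitude, position (ns), width (ns)."""
    return (a1 * np.exp(-0.5 * ((t - m1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((t - m2) / s2) ** 2))

# Simulated full-waveform return: a canopy echo and a ground echo plus noise.
rng = np.random.default_rng(4)
t = np.arange(0.0, 100.0)                       # sample times in ns
wave = two_gauss(t, 0.8, 30.0, 3.0, 1.0, 62.0, 4.0) + rng.normal(0, 0.01, t.size)
p0 = [0.5, 25.0, 5.0, 0.5, 70.0, 5.0]           # rough initial pulse guesses
popt, _ = curve_fit(two_gauss, t, wave, p0=p0)
```

The recovered pulse positions convert directly to ranges, which is how waveform analysis lets the user extract their own 3D coordinates in post-processing.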

17.
An improved DEM interpolation algorithm based on moving surface fitting and weighted averaging
李胤  杨武年  杨容浩  曾涛 《四川测绘》2010,33(4):168-171
The characteristics of the moving surface fitting and weighted-average algorithms are analyzed, and a method combining the two is proposed so that each compensates for the other's weaknesses; the reference points found by the search are also checked and preprocessed before the complex surface-fitting computation. Different interpolation methods were tested on a subset of points randomly drawn from an original dataset, and the accuracy of each algorithm was obtained; the results show that the proposed interpolation algorithm has certain advantages.
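The combined interpolator can be sketched as below; the conditioning test used to switch between the moving-surface fit and the weighted-average fallback is an illustrative assumption, not the paper's exact preprocessing criterion:

```python
import numpy as np

def interp_point(x0, y0, pts, z, k=10, power=2.0, cond_max=1e8):
    """Interpolate z at (x0, y0) from scattered (x, y, z) samples."""
    d = np.hypot(pts[:, 0] - x0, pts[:, 1] - y0)
    idx = np.argsort(d)[:k]                     # k nearest reference points
    dx, dy = pts[idx, 0] - x0, pts[idx, 1] - y0
    zi, di = z[idx], d[idx]
    # Moving second-order surface z = a0 + a1*dx + a2*dy + a3*dx*dy + ...
    A = np.c_[np.ones(k), dx, dy, dx * dy, dx**2, dy**2]
    if np.linalg.cond(A.T @ A) < cond_max:
        coef, *_ = np.linalg.lstsq(A, zi, rcond=None)
        return coef[0]                          # surface value at (x0, y0)
    # Fallback: inverse-distance weighted average for degenerate neighborhoods.
    w = 1.0 / np.maximum(di, 1e-12) ** power
    return float(np.sum(w * zi) / np.sum(w))

rng = np.random.default_rng(5)
pts = rng.uniform(0.0, 10.0, (300, 2))
z = 0.5 * pts[:, 0] + 0.2 * pts[:, 1] ** 2      # smooth quadratic test surface
z_hat = interp_point(5.0, 5.0, pts, z)
```

The surface fit captures curvature where the neighborhood supports it, while the weighted average remains stable where it does not, which is the complementarity the abstract describes.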

18.
This paper proposes robust methods for local planar surface fitting in 3D laser scanning data. Searching through the literature revealed that many authors frequently used Least Squares (LS) and Principal Component Analysis (PCA) for point cloud processing without any treatment of outliers. It is known that LS and PCA are sensitive to outliers and can give inconsistent and misleading estimates. RANdom SAmple Consensus (RANSAC) is one of the most well-known robust methods used for model fitting when noise and/or outliers are present. We concentrate on the recently introduced Deterministic Minimum Covariance Determinant estimator and robust PCA, and propose two variants of statistically robust algorithms for fitting planar surfaces to 3D laser scanning point cloud data. The performance of the proposed robust methods is demonstrated by qualitative and quantitative analysis through several synthetic and mobile laser scanning 3D data sets for different applications. Using simulated data, and comparisons with LS, PCA, RANSAC, variants of RANSAC and other robust statistical methods, we demonstrate that the new algorithms are significantly more efficient, faster, and produce more accurate fits and robust local statistics (e.g. surface normals), necessary for many point cloud processing tasks. Consider one example data set consisting of 100 points with 20% outliers representing a plane. The proposed methods, called DetRD-PCA and DetRPCA, produce bias angles (angle between the fitted planes with and without outliers) of 0.20° and 0.24° respectively, whereas LS, PCA and RANSAC produce worse bias angles of 52.49°, 39.55° and 0.79° respectively. In terms of speed, DetRD-PCA takes 0.033 s on average for fitting a plane, which is approximately 6.5, 25.4 and 25.8 times faster than RANSAC and two other robust statistical methods, respectively.
The estimated robust surface normals and curvatures from the new methods have been used for plane fitting, sharp feature preservation and segmentation in 3D point clouds obtained from laser scanners. The results are significantly better and more efficiently computed than those obtained by existing methods.
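For contrast, the plain RANSAC baseline the paper compares against (not its DetRD-PCA/DetRPCA estimators) can be sketched as:

```python
import numpy as np

def ransac_plane(pts, n_iter=200, tol=0.05, seed=0):
    """Sample 3 points, score by inliers within tol of their plane, then
    refine the best plane by PCA (smallest eigenvector) on its inliers."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_iter):
        p = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        if np.linalg.norm(n) < 1e-12:
            continue                      # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        inliers = np.abs((pts - p[0]) @ n) < tol
        if best is None or inliers.sum() > best.sum():
            best = inliers
    sel = pts[best]
    c = sel - sel.mean(axis=0)
    _, v = np.linalg.eigh(c.T @ c)        # least-squares refinement on inliers
    return v[:, 0], sel.mean(axis=0), best

# Plane z ~ 0 with 20% gross outliers, echoing the paper's 100-point example.
rng = np.random.default_rng(6)
inl = np.c_[rng.uniform(-1, 1, (80, 2)), rng.normal(0.0, 0.01, 80)]
out = rng.uniform(-1, 1, (20, 3)) + np.array([0.0, 0.0, 3.0])
normal, centroid, mask = ransac_plane(np.vstack([inl, out]))
```

RANSAC's randomized sampling is what the deterministic estimators in the paper avoid, trading the sampling loop for a deterministic robust covariance estimate.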

19.
Simultaneous observations of the GEOS-I and II flashing lamps by the NASA MOTS and SPEOPT cameras on the North American Datum (NAD) have been analyzed using geometrical techniques to provide an adjustment of the station coordinates. Two separate adjustments have been obtained. An optical-data-only solution has been computed in which the solution scale was provided by the Rosman-Mojave distance obtained from a dynamic station solution. In a second adjustment, scaling was provided by processing simultaneous laser ranging data from Greenbelt and Wallops Island in a combined optical-laser solution. Comparisons of these results with previous GSFC dynamical solutions indicate an rms agreement on the order of 4 meters or better in each coordinate. Comparison with a detailed gravimetric geoid of North America yields agreement of 3 meters or better for mainland U.S. stations and 7 and 3 meters, respectively, for Bermuda and Puerto Rico.

20.
Quality inspection and acceptance of surveying and mapping products is a vital part of the whole production workflow. Existing inspection means rely mainly on field point collection with total stations and GNSS RTK, which involves heavy fieldwork and yields few points. As new technologies continually emerge, traditional means can no longer meet the demands of rapid data updating and ever-increasing accuracy. Vehicle-borne laser scanning, one of the frontier technologies of modern surveying and mapping, offers fast data acquisition, highly automated processing, intuitive products, high accuracy, and strong mobility, making it well suited to quality inspection of all kinds of surveying and mapping data products. In this paper, data for 15 sample areas were collected with the SSW vehicle-borne laser modeling and measurement system; after fully automatic feature-point extraction and screening, inspection data products were obtained and compared with data acquired by conventional methods, verifying the feasibility of the technology for quality inspection work.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号