Similar Documents
20 similar documents found.
1.
To address the problems of road boundary extraction from vehicle-borne LiDAR point clouds, this paper proposes a road boundary extraction method based on joint features that adapts to a variety of road environments. First, the road data are partitioned into segments of a preset width along the trajectory of the mobile mapping system, discarding irrelevant data outside the road. Each segment is then filtered with the Cloth Simulation Filtering (CSF) algorithm to separate ground from non-ground points, and a median filter on intensity removes salt-and-pepper noise from the ground points. Next, a joint feature combining the local-neighborhood height-difference gradient and the echo intensity gradient is computed, and road boundary points are extracted by thresholding this feature. Finally, Euclidean distance clustering removes residual non-boundary points to refine the boundary, and the boundary point clouds of all segments are merged into a complete road boundary. Experiments in three representative environments (urban roads, highways and rural roads) verify the robustness of the algorithm. This work is of value for extending the application of vehicle-borne LiDAR to road scenes.
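A minimal sketch of the joint-feature thresholding step, assuming the ground points have already passed through the CSF and intensity median filtering stages; the neighborhood radius, the two thresholds, and the use of the local max-min spread as a stand-in for the gradients are illustrative choices, not the paper's exact formulation.

```python
import numpy as np
from scipy.spatial import cKDTree

def boundary_candidates(xyz, intensity, radius=0.3, dz_thr=0.05, di_thr=8.0):
    """Flag points whose local height-difference and intensity variation
    both exceed their thresholds (joint feature) as road-boundary candidates."""
    tree = cKDTree(xyz[:, :2])              # neighborhoods in the horizontal plane
    flags = np.zeros(len(xyz), dtype=bool)
    for i, nbrs in enumerate(tree.query_ball_point(xyz[:, :2], r=radius)):
        if len(nbrs) < 3:
            continue
        dz = xyz[nbrs, 2].max() - xyz[nbrs, 2].min()         # local height difference
        di = intensity[nbrs].max() - intensity[nbrs].min()   # local intensity change
        flags[i] = dz > dz_thr and di > di_thr
    return flags
```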

2.
Rapid road network extraction from floating car trajectory data
Floating car trajectory data contain rich road network information, and as such data gradually become publicly available, extracting road networks from them has become feasible. Most existing extraction algorithms apply a uniform threshold that ignores density differences in the trajectory data, and consider only trajectory shape while ignoring trajectory direction, which severely degrades the geometric accuracy and topological correctness of the results. This paper therefore proposes an adaptive-radius centroid-drift clustering method that automatically adjusts the clustering parameters according to trajectory density and road width, and uses trajectory direction to establish topological road connections. First, road network skeleton points are computed by adaptive-radius centroid-drift clustering, and a wavelet clustering algorithm derives the direction set of each skeleton point; the skeleton points are then connected recursively according to the clustering radius and direction to generate the road network. The algorithm was validated with one day of floating car trajectory data from Futian District, Shenzhen, and the results were evaluated qualitatively and quantitatively against a rasterization method and a constrained triangulation method. The experiments show that the road network extracted by the proposed algorithm is clearly better in geometric accuracy and topological correctness, and that the algorithm is suitable for large-scale data processing.
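A simplified sketch of one way the centroid-drift idea can be realized, with a radius that shrinks where trajectories are dense; the density-to-radius rule and all parameters are assumptions for illustration, not the adaptive rule used in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def drift_to_skeleton(points, base_r=15.0, max_iter=30, tol=0.1):
    """Drift each seed toward the centroid of its neighborhood; the radius
    shrinks where trajectories are dense so narrow roads are not merged."""
    tree = cKDTree(points)
    seeds = points.copy()
    # crude density estimate: neighbor count within the base radius
    density = np.array([len(n) for n in tree.query_ball_point(points, base_r)])
    radius = base_r / np.sqrt(1.0 + density / density.mean())
    for _ in range(max_iter):
        moved = 0.0
        for i, s in enumerate(seeds):
            nbrs = tree.query_ball_point(s, radius[i])
            if not nbrs:
                continue
            c = points[nbrs].mean(axis=0)
            moved = max(moved, np.linalg.norm(c - s))
            seeds[i] = c
        if moved < tol:
            break
    return seeds   # drifted seeds concentrate near road centerlines (skeleton points)
```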

3.
To tackle the difficulty of finely extracting windows from complex building facades, this paper proposes a complete method for extracting irregular window boundaries from vehicle-borne LiDAR data. The RANSAC algorithm first detects the main wall plane of the building, and semantic features separate wall points from non-wall points. The non-wall points are then transformed into a horizontal plane via a coordinate transformation; a grid is used to examine neighborhood relations among the points in that plane, an eight-neighborhood connectivity search clusters the window points in 2D, and each clustered window point cloud is stored separately. An improved dynamic ellipse convex hull algorithm next detects the boundary contour points of each clustered window. Finally, an inverse coordinate transformation restores the window boundaries on the building facade. Experiments verify the accuracy of the algorithm.
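A small sketch of the planar eight-neighborhood connectivity clustering, assuming the non-wall points have already been transformed into a local facade plane; the cell size is an arbitrary example value.

```python
import numpy as np
from scipy import ndimage

def cluster_windows(pts2d, cell=0.05):
    """Rasterize facade-plane points into a grid and label 8-connected
    components; each component is treated as one window point cloud."""
    mins = pts2d.min(axis=0)
    idx = np.floor((pts2d - mins) / cell).astype(int)
    grid = np.zeros(tuple(idx.max(axis=0) + 1), dtype=bool)
    grid[idx[:, 0], idx[:, 1]] = True
    labels, n = ndimage.label(grid, structure=np.ones((3, 3)))  # 8-connectivity
    point_labels = labels[idx[:, 0], idx[:, 1]]
    return [pts2d[point_labels == k] for k in range(1, n + 1)]
```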

4.
To address the heavy manual intervention and unstable results of vehicle-borne LiDAR point cloud segmentation, this paper proposes a multi-stage clustering segmentation algorithm for roadside above-ground objects. A 3D grid and a breadth-first search first produce a coarse clustering of the point cloud; Euclidean clustering then handles connected objects, yielding coarse clusters with complete boundaries; finally, a multi-stage nearest-neighbor search refines the clusters step by step, estimating the aggregation rate from the volume ratio between the cluster body and the intermediate result and using it to adaptively adjust the clustering threshold or output the result, thereby segmenting the various roadside objects in a road scene. Experiments show a correct extraction rate of 87.0% for street trees and 91.9% for street lamps and signs, with little over- or under-segmentation; connected objects retain complete edge contours after clustering, which supports effective subsequent point cloud processing.
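A compact sketch of the first stage (3D grid plus breadth-first search), under the simplifying assumption that occupied voxels touching each other (26-connectivity) belong to one coarse cluster; the voxel size is arbitrary.

```python
import numpy as np
from collections import deque

def coarse_clusters(xyz, voxel=0.5):
    """Group points whose occupied voxels are 26-connected (BFS over voxels)."""
    keys = np.floor(xyz / voxel).astype(int)
    occupied = {}
    for i, k in enumerate(map(tuple, keys)):
        occupied.setdefault(k, []).append(i)
    offsets = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
               for dz in (-1, 0, 1) if (dx, dy, dz) != (0, 0, 0)]
    seen, clusters = set(), []
    for start in occupied:
        if start in seen:
            continue
        queue, members = deque([start]), []
        seen.add(start)
        while queue:
            v = queue.popleft()
            members.extend(occupied[v])
            for o in offsets:
                nb = (v[0] + o[0], v[1] + o[1], v[2] + o[2])
                if nb in occupied and nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        clusters.append(np.array(members))   # point indices of one coarse cluster
    return clusters
```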

5.
Because density peak clustering is insensitive to cluster shape, its results show good robustness to noise. However, when the variables in the density definition fail to reflect the cluster structure, its performance drops markedly, mainly because the clustering is unsupervised. Building on this algorithm, the paper proposes a semi-supervised clustering algorithm based on density peaks. Must-link and cannot-link constraints are added as prior knowledge; the densities of points within a must-link constraint set are superimposed to produce new cluster centers that attract data points, while points in a cannot-link constraint set are assigned to their proper cluster centers by separating their n-level nearest neighbors. Experiments show that the semi-supervised density peak algorithm uses prior knowledge to constrain and guide the clustering, improves the clustering quality to a certain extent, and can be applied to datasets of arbitrary shape.
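A sketch of the two quantities density peak clustering is built on (local density rho and distance delta to the nearest denser point), together with one hypothetical way of pooling densities over a must-link set so that a new cluster center can emerge; this is a schematic reading of the idea, not the paper's exact formulation.

```python
import numpy as np
from scipy.spatial.distance import cdist

def density_peak_quantities(X, dc):
    """Standard density-peak quantities: rho = neighbors within cutoff dc,
    delta = distance to the nearest point of higher density."""
    d = cdist(X, X)
    rho = (d < dc).sum(axis=1) - 1
    delta = np.zeros(len(X))
    for i in range(len(X)):
        higher = np.where(rho > rho[i])[0]
        delta[i] = d[i].max() if higher.size == 0 else d[i, higher].min()
    return rho, delta

def boost_must_link(rho, must_link_sets):
    """Pool the densities inside each must-link set onto its densest member
    (a hypothetical reading of 'superimposing densities' to create a center)."""
    rho = rho.astype(float).copy()
    for group in must_link_sets:
        idx = list(group)
        rep = idx[int(np.argmax(rho[idx]))]
        rho[rep] = rho[idx].sum()
    return rho
```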

6.
Economic statistics usually carry multi-dimensional attributes; to study their intrinsic structure, dimensionality reduction is needed to map the multi-dimensional information into a space of at most three dimensions for visualization and clustering. The Support Vector Machine (SVM) shows particular advantages for small-sample, nonlinear and high-dimensional pattern recognition, but as a supervised classifier it requires a known sample set to train the classification. High-dimensional economic statistics often lack known cluster centers, and choosing a small sample set as cluster centers from the results of other methods is highly subjective. Spatial autocorrelation analysis can reveal regions of high spatial clustering and regions of random dispersion and characterize the spatial clustering pattern of each region, which provides a feasible way to select the known small sample. Using economic data from the 2007 Sichuan statistical yearbook, the paper clusters the data by principal component analysis and nonlinear mapping, and feeds the class centers together with the highly clustered targets revealed by spatial autocorrelation analysis into an SVM as known samples. The conclusions are: the SVM results trained on the two different known sample sets derived from principal component analysis and nonlinear mapping differ considerably, confirming that the choice of the known sample set is highly subjective; the spatial autocorrelation results greatly reduce the size of the characteristic sample set, which both simplifies the SVM classification process and yields results that accurately reflect the actual development of Sichuan.
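A minimal sketch of the final step, where a handful of labelled units (class centers or high-spatial-cluster targets) train an SVM that then labels every unit; the data, the sample indices and the SVC parameters are placeholders.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(180, 12))                 # placeholder for standardized yearbook indicators per county
known_idx = np.array([3, 17, 42, 90, 150])     # placeholder "known" sample units
known_lab = np.array([0, 0, 1, 1, 2])          # labels from cluster centers / spatial-autocorrelation hot spots

clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(X[known_idx], known_lab)               # train on the small known sample set
labels = clf.predict(X)                        # class of every county
```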

7.
Landform morphology classification plays an important role in many geoscience research and application fields such as geomorphological study, soil mapping and landslide prevention. Researchers in digital terrain analysis have combined landform classification knowledge with geo-computation and developed many automatic classification methods. This paper groups the existing methods into three categories for discussion: clustering-based methods, rule-based methods and typical-sample-based methods. Clustering-based methods take little account of expert knowledge of landform classification, so the clusters are often hard to map clearly onto the target landform types; rule-based methods usually require classification rules to be stated explicitly, which makes them difficult to apply and poorly extensible; typical-sample-based methods have a promising outlook, but their use of implicit expert knowledge still needs improvement. None of the current methods can effectively extract composite landform types. Possible directions for improvement are discussed from two aspects: spatial structural feature information and sources of implicit expert classification knowledge.

8.
Extracting stay points from spatial trajectories is a key step in converting spatial trajectories into semantic trajectories. This paper introduces velocity into stay point extraction and proposes a velocity-based temporal clustering algorithm and a velocity clustering algorithm to solve the problems of "pseudo stay points" and missed stay points in existing methods. The velocity-based temporal clustering algorithm first clusters trajectory points along the time axis to obtain candidate stay points, and then filters the candidates with a velocity threshold to obtain the actual stay points. The velocity clustering algorithm first selects candidate stay points by testing velocity, and then filters the candidates by a spatial distance threshold to obtain the actual stay points, solving the problem of missed detections. Experiments show that the velocity-based temporal clustering algorithm works well for taxi trajectories (stable sampling interval, no long gaps in trajectory points), whereas the velocity clustering algorithm is better suited to analyzing walking trajectories (which may contain long gaps).
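A condensed sketch of the velocity-based temporal clustering variant: group consecutive fixes that stay within a radius, then drop candidates whose mean speed exceeds a threshold (the "pseudo stay points"); all thresholds are illustrative.

```python
import numpy as np

def stay_points(xy, t, radius=50.0, min_dur=120.0, v_max=0.8):
    """xy: n x 2 positions (m), t: timestamps (s). Returns centroids of stays."""
    stays, i, n = [], 0, len(xy)
    while i < n - 1:
        j = i + 1
        while j < n and np.linalg.norm(xy[j] - xy[i]) <= radius:
            j += 1
        duration = t[j - 1] - t[i]
        if duration >= min_dur:
            path = np.linalg.norm(np.diff(xy[i:j], axis=0), axis=1).sum()
            if path / duration <= v_max:       # velocity filter removes pseudo stays
                stays.append(xy[i:j].mean(axis=0))
            i = j
        else:
            i += 1
    return np.array(stays)
```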

9.
Vector map overlay analysis in practice often has to handle large-scale, complex spatial data, so improving the overall efficiency of the analysis is particularly important. Targeting the scenario of overlaying large polygon objects with a large number of smaller polygon objects, this paper proposes a Non-uniform Multi-level Grid Index Overlay (NMGIO) algorithm for vector map overlay analysis, consisting of four steps: index construction, grid filtering, overlay computation and topological polygon construction. By building non-uniform multi-level grid indexes on both the dataset to be analyzed and the overlay objects, the algorithm exploits the spatial distribution of the data to fundamentally improve the efficiency of overlay analysis. The overall time complexity is derived, and the overlay performance is verified with a prototype system implemented in C++.
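A toy sketch of the grid-filtering idea: hash the bounding boxes of the small polygons into grid cells and only pass the candidates that share a cell with the large polygon on to exact overlay computation. A uniform single-level grid is used here as a simplification of the paper's non-uniform multi-level index.

```python
from collections import defaultdict

def grid_filter(small_bboxes, big_bbox, cell=100.0):
    """small_bboxes: list of (xmin, ymin, xmax, ymax). Return indices of small
    polygons whose boxes fall in grid cells overlapped by big_bbox."""
    def cells(b):
        x0, y0, x1, y1 = b
        return {(i, j)
                for i in range(int(x0 // cell), int(x1 // cell) + 1)
                for j in range(int(y0 // cell), int(y1 // cell) + 1)}
    index = defaultdict(list)
    for i, b in enumerate(small_bboxes):
        for c in cells(b):
            index[c].append(i)
    candidates = set()
    for c in cells(big_bbox):
        candidates.update(index[c])
    return sorted(candidates)   # only these go on to exact overlay computation
```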

10.
DBSCAN is a density-based clustering algorithm that can discover clusters of arbitrary shape in noisy datasets without the number of clusters being set in advance, and is therefore widely used. As data volumes grow, however, the iterative point-to-point distance computation makes the classic single-machine serial DBSCAN degrade significantly, so it cannot meet the efficiency requirements of real applications. This paper proposes a performance-improved distributed parallel clustering algorithm, KDSG-DBSCAN. It uses K-D tree neighborhood queries to reduce the number of distance computations, a graph connectivity algorithm to optimize the merging of local clusters, and the Apache Spark MapReduce platform to parallelize the computation. Four comparative experiments analyze the execution efficiency of KDSG-DBSCAN, classic DBSCAN and KDS-DBSCAN (without graph connectivity), the proportion of execution time spent in each sub-stage of KDSG-DBSCAN, and the scalability of KDSG-DBSCAN over different data sizes and over different numbers of compute nodes and CPU cores. The results show that KDSG-DBSCAN has good scalability and speed-up.
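The distributed Spark part is not reproduced here; the snippet below only illustrates, with scikit-learn on a single machine, the K-D tree neighborhood query that replaces brute-force pairwise distance computation.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(42)
pts = np.vstack([rng.normal(loc=c, scale=0.3, size=(500, 2))
                 for c in [(0, 0), (5, 5), (0, 5)]])

# algorithm="kd_tree" answers the eps-neighborhood queries with a K-D tree
# instead of brute-force pairwise distances (the serial analogue of the
# speed-up KDSG-DBSCAN obtains before the Spark parallelization).
labels = DBSCAN(eps=0.5, min_samples=10, algorithm="kd_tree").fit_predict(pts)
print(np.unique(labels))   # cluster ids, -1 = noise
```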

11.
Separating ground points from non-ground points in airborne LiDAR point clouds to generate an urban DEM is the first task in building a digital city. This paper studies the key filtering algorithms in airborne LiDAR data processing. The commonly used region-growing method is first applied to filter the point cloud, and a zone-based filtering method using orthogonal polynomials is then applied; the running efficiency of the two algorithms is compared, their characteristics are analyzed, and the threshold parameters are optimized through comparative experiments. The experiments show good filtering results, with clear application value for 3D digital-city modeling of flat urban areas.

12.
Point cloud classification, which provides meaningful semantic labels to the points in a point cloud, is essential for generating three-dimensional (3D) models. Its automation, however, remains challenging due to varying point densities and irregular point distributions. Adapting existing deep-learning approaches for two-dimensional (2D) image classification to point cloud classification is inefficient and results in the loss of information valuable for point cloud classification. In this article, a new approach that classifies point clouds directly in 3D is proposed. The approach uses multi-scale features generated by deep learning. It comprises three steps: (1) extract single-scale deep features using a 3D convolutional neural network (CNN); (2) subsample the input point cloud at multiple scales, with the point cloud at each scale being an input to the 3D CNN, and combine deep features at multiple scales to form multi-scale and hierarchical features; and (3) retrieve the probabilities that each point belongs to the intended semantic category using a softmax regression classifier. The proposed approach was tested on two publicly available point cloud datasets to demonstrate its performance and compared with the results produced by other existing approaches. The experiments achieved 96.89% overall accuracy on the Oakland dataset and 91.89% overall accuracy on the Europe dataset, which are the highest among the considered methods.
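A schematic sketch of step (3) only: concatenating per-point features from several scales and pushing them through a softmax regression classifier. The feature arrays are random placeholders standing in for the 3D CNN outputs, and the weights are shown untrained.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
n_points, n_classes = 1000, 5
feats = [rng.normal(size=(n_points, 64)) for _ in range(3)]  # stand-ins for three scales of CNN features
X = np.concatenate(feats, axis=1)                            # multi-scale, hierarchical feature
W = rng.normal(scale=0.01, size=(X.shape[1], n_classes))     # softmax regression weights (to be trained)
b = np.zeros(n_classes)
probs = softmax(X @ W + b)        # per-point probability of each semantic category
labels = probs.argmax(axis=1)
```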

13.
In this paper an approach to the automatic quality assessment of existing geo‐spatial data is presented. The necessary reference information is derived automatically from up‐to‐date digital remotely sensed images using image analysis methods. The focus is on the quality assessment of roads as these are among the most frequently changing objects in the landscape. In contrast to existing approaches for quality control of road data, the data to be assessed and the objects extracted from the images are modelled and processed together. A geometric‐topologic relationship model for the roads and their surroundings is defined. Context objects such as rows of trees support the quality assessment of road vector data as they may explain gaps in road extraction. The extraction and explicit incorporation of these objects in the assessment of a given road database give stronger support for or against its correctness.

During the assessment existing relations between road objects from the database and extracted objects are compared to the modelled relations. The certainty measures of the objects are integrated into this comparison. Normally, more than one extracted object gives evidence for a road database object; therefore, a reasoning algorithm which combines evidence given by the extracted objects is used. If the majority of the total evidence argues for the database object and if a certain amount of this database object is covered by extracted objects, the database object is assumed to be correct, i.e. it is accepted, otherwise it is rejected. The procedure is embedded into a two‐stage graph‐based approach which exploits the connectivity of roads and results in a reduction of false alarms. The algorithms may be incorporated into a semi‐automatic environment, where a human operator only checks those objects that have been rejected.

The experimental results confirm the importance of the employed advanced statistical modelling. The overall approach can reliably assess the roads from the given database, using road and context objects which have been automatically extracted from remotely sensed imagery. Sensitivity analysis shows that in most cases the chosen two-stage graph-based approach reduces the number of false decisions. Approximately 66% of the road objects have been accepted by the developed approach in an extended test area; 1% were accepted although incorrect. Those false decisions are mainly related to the lack of modelling of road junction areas.
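A toy sketch of the accept/reject rule outlined above: combine the certainty measures of the extracted objects arguing for and against a road database object, and accept it only when the supporting evidence dominates and enough of the road is covered; the values and the coverage threshold are invented for illustration.

```python
def assess_road(support, contra, coverage, min_cover=0.6):
    """support / contra: certainty values of extracted objects arguing for /
    against the database road; coverage: fraction of the road covered by
    supporting extractions. Returns True if the road object is accepted."""
    evidence_for = sum(support)
    evidence_against = sum(contra)
    return evidence_for > evidence_against and coverage >= min_cover

# e.g. three extracted road segments plus a row of trees explaining a gap
print(assess_road(support=[0.8, 0.7, 0.9, 0.5], contra=[0.4], coverage=0.72))
```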

14.
Land-cover classification using only remotely sensed spectral features does not provide accurate information on all urban-fringe classes. Four texture measures used in conjunction with LANDSAT spectral features were empirically evaluated to determine their utility in Level II and III land-cover mapping. The contrast and high-frequency measures improved land-cover classification at the urban fringe. However, the decision to use texture measures should be weighed carefully because they yield only a small, yet important, increment in absolute classification accuracy and entail additional expense for data preprocessing.

15.
This paper presents an automated classification system of landform elements based on object-oriented image analysis. First, several data layers are produced from Digital Terrain Models (DTM): elevation, profile curvature, plan curvature and slope gradient. Second, relatively homogeneous objects are delineated at several levels through image segmentation. These object primitives are classified as landform elements using a relative classification model, built both on the surface shape and on the altitudinal position of objects. Slope aspect has not yet been used in the classification. The classification has nine classes: peaks and toe slopes (defined by the altitudinal position or the degree of dominance), steep slopes and flat/gentle slopes (defined by slope gradients), shoulders and negative contacts (defined by profile curvatures), head slopes, side slopes and nose slopes (defined by plan curvatures). Classes are defined using flexible fuzzy membership functions. Results are visually analyzed by draping them over DTMs. Specific fuzzy classification options were used to obtain an assessment of output accuracy. Two implementations of the methodology are compared using (1) Romanian datasets and (2) Berchtesgaden National Park, Germany. The methodology has proven to be reproducible, readily adaptable for diverse landscapes and datasets, and useful for providing additional information for geomorphological and landscape studies. A major advantage of this new methodology is its transferability, given that it uses only relative values and relative positions with respect to neighboring objects. The methodology introduced in this paper can be used for almost any application where relationships between topographic features and other components of landscapes are to be assessed.
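A small sketch of the flexible fuzzy membership idea: each object receives membership grades, here for "steep slope" and "shoulder", from simple S-shaped functions of slope gradient and profile curvature. The breakpoints are arbitrary examples, not the values used in the paper.

```python
import numpy as np

def s_membership(x, lo, hi):
    """Linear S-shaped membership: 0 below lo, 1 above hi."""
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

slope_deg = np.array([3.0, 12.0, 27.0])        # mean slope gradient per object
prof_curv = np.array([0.02, -0.01, 0.001])     # mean profile curvature per object

mu_steep = s_membership(slope_deg, 10.0, 25.0)        # membership in 'steep slope'
mu_shoulder = s_membership(prof_curv, 0.005, 0.02)    # convex in profile -> 'shoulder'
print(mu_steep, mu_shoulder)   # objects take the class with the highest membership
```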

16.
Application analysis of background parameters in remote sensing thematic mapping
This paper focuses on the application of background parameters in image recognition, including the analysis and definition of the geoscientific and ecological characteristics of image information with respect to the mapping objects and scale; the study of the phenological calendar for selecting images of the optimal phase for land cover recognition and classification; the analysis of optimal band combinations for thematic mapping imagery; and the auxiliary use of target background factors in land cover recognition. Like the study of regional parameters, the application of such background parameters plays an important role in fundamentally improving the accuracy of digital thematic mapping from space imagery.

17.
Moving objects produce trajectories, which are typically observed in a finite sample of time-stamped locations. Between sample points, we are uncertain about the moving object's location. When we assume extra information about an object, for instance, a (possibly location-dependent) speed limit, we can use space–time prisms to model the uncertainty of an object's location.

Until now, space–time prisms have been studied for unconstrained movement in the 2D plane. In this paper, we study space–time prisms for objects that are constrained to travel on a road network. Movement on a road network can be viewed as essentially one‐dimensional. We describe the geometry of a space–time prism on a road network and give an algorithm to compute and visualize space–time prisms. For experiments and illustration, we have implemented this algorithm in MATHEMATICA.

Furthermore, we study the alibi query, which asks whether two moving objects could have possibly met or not. This comes down to deciding if the chains of space–time prisms produced by these moving objects intersect. We give an efficient algorithm to answer the alibi query for moving objects on a road network. This algorithm also determines where and when two moving objects may have met.
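A simplified sketch of the alibi test for two objects on one shared road edge, treating movement as one-dimensional: each prism bounds where an object can be at time t given two anchor fixes and a speed limit, and a meeting is only possible if the two intervals overlap at some common time. The time-sampling approach and all values are illustrative, not the paper's exact algorithm.

```python
def prism_interval(t, p, q, vmax):
    """p = (t0, x0), q = (t1, x1): anchor fixes on a 1D road coordinate.
    Returns the [lo, hi] positions reachable at time t, or None."""
    (t0, x0), (t1, x1) = p, q
    if not (t0 <= t <= t1):
        return None
    lo = max(x0 - vmax * (t - t0), x1 - vmax * (t1 - t))
    hi = min(x0 + vmax * (t - t0), x1 + vmax * (t1 - t))
    return (lo, hi) if lo <= hi else None

def could_have_met(a, b, vmax_a, vmax_b, step=1.0):
    """Crude alibi query: sample common times and test interval overlap."""
    t = max(a[0][0], b[0][0])
    t_end = min(a[1][0], b[1][0])
    while t <= t_end:
        ia, ib = prism_interval(t, *a, vmax_a), prism_interval(t, *b, vmax_b)
        if ia and ib and ia[0] <= ib[1] and ib[0] <= ia[1]:
            return True
        t += step
    return False

# object A moves from x=0 m to x=500 m, object B from x=600 m to x=200 m, both within 100 s
print(could_have_met(((0, 0.0), (100, 500.0)), ((0, 600.0), (100, 200.0)), 8.0, 8.0))
```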

18.
Point cloud classification plays a critical role in many applications of airborne light detection and ranging (LiDAR) data. In this paper, we present a deep feature-based method for accurately classifying multiple ground objects from airborne LiDAR point clouds. With several selected attributes of LiDAR point clouds, our method first creates a group of multi-scale contextual images for each point in the data using interpolation. Taking the contextual images as inputs, a multi-scale convolutional neural network (MCNN) is then designed and trained to learn the deep features of LiDAR points across various scales. A softmax regression classifier (SRC) is finally employed to generate classification results of the data with a combination of the deep features learned from various scales. Compared with most traditional classification methods, which often require users to manually define a group of complex discriminant rules or extract a set of classification features, the proposed method has the ability to automatically learn the deep features and generate more accurate classification results. The performance of our method is evaluated qualitatively and quantitatively using the International Society for Photogrammetry and Remote Sensing benchmark dataset, and the experimental results indicate that our method can effectively distinguish eight types of ground objects, including low vegetation, impervious surface, car, fence/hedge, roof, facade, shrub and tree, and achieves a higher accuracy than other existing methods.
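A sketch of the contextual-image idea only: for one point, interpolate an attribute of its neighbors (height is used here) onto small grids at several window sizes, giving a per-point multi-scale image stack; the window sizes, grid resolution and fallback for sparse neighborhoods are assumptions.

```python
import numpy as np
from scipy.interpolate import griddata

def contextual_images(xyz, center, windows=(2.0, 4.0, 8.0), res=16):
    """Return one res x res image per window size around `center`,
    containing interpolated heights of the surrounding points."""
    images = []
    for w in windows:
        mask = np.all(np.abs(xyz[:, :2] - center[:2]) <= w / 2, axis=1)
        pts = xyz[mask]
        if mask.sum() < 4:                      # too few neighbors to interpolate
            images.append(np.zeros((res, res)))
            continue
        gx, gy = np.meshgrid(
            np.linspace(center[0] - w / 2, center[0] + w / 2, res),
            np.linspace(center[1] - w / 2, center[1] + w / 2, res))
        img = griddata(pts[:, :2], pts[:, 2], (gx, gy),
                       method="linear", fill_value=pts[:, 2].mean())
        images.append(img)
    return np.stack(images)          # (n_scales, res, res) input for the MCNN
```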

19.

Land-cover classification using only remotely sensed spectral features does not provide accurate information on all urban-fringe classes. Four texture measures used in conjunction with LANDSAT spectral features were empirically evaluated to determine their utility in Level II and III land-cover mapping. The contrast and high-frequency measures improved land-cover classification at the urban fringe. However, the decision to use texture measures should be weighed carefully because they yield only a small, yet important, increment in absolute classification accuracy and entail additional expense for data preprocessing.

20.
张睎伟, 王磊, 汪西原. 《干旱区地理》, 2019, 42(5): 1133-1140
To study methods for extracting sandy land information, an object-oriented approach based on a CART decision tree was used to extract sandy land in Shapotou District, Zhongwei City. The study area is first divided into an object layer by multi-resolution segmentation and spectral difference segmentation; suitable extraction features and training sample points are then selected; finally, the selected features and samples are fed into a CART rule tree that classifies the land cover and extracts the sandy land. The results show that the object-oriented CART decision tree method extracts sandy land with a high degree of automation and accuracy: the overall classification accuracy of the CART tree reaches 77%, which is 1.12 times that of nearest-neighbor classification and 1.57 times that of support vector machine classification. In addition, the mean values of NDBI (normalized difference bareness index), GSI (grain size index) and SWIR 2 (band 7) successfully separate the three easily confused classes of sandy land, gobi and bare rock/gravel land, and are three important feature indices in the sandy land extraction process.
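A minimal sketch of the rule-tree step using scikit-learn's CART implementation, with made-up per-object features (mean NDBI, GSI and SWIR 2) and labels standing in for the segmented objects and training samples of the paper.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(7)
# per-object mean features [NDBI, GSI, SWIR2]; 0 = sandy land, 1 = gobi, 2 = bare rock/gravel
X_train = rng.normal(size=(60, 3)) + np.repeat([[0.4, 0.3, 0.5],
                                                [0.1, 0.1, 0.3],
                                                [-0.2, 0.0, 0.2]], 20, axis=0)
y_train = np.repeat([0, 1, 2], 20)

cart = DecisionTreeClassifier(criterion="gini", max_depth=4).fit(X_train, y_train)
print(export_text(cart, feature_names=["NDBI", "GSI", "SWIR2"]))  # the learned rule tree
labels = cart.predict(rng.normal(size=(5, 3)))                    # classify image objects
```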
