Similar Literature
20 similar documents found (search time: 31 ms)
1.
3D indoor navigation in multi-story buildings and under changing environments is still difficult to perform. 3D models of buildings are commonly unavailable or outdated. 3D point clouds have turned out to be a very practical way to capture 3D interior spaces and provide a notion of empty space, so pathfinding in point clouds is rapidly emerging. However, processing raw point clouds can be very expensive, as they are semantically poor and unstructured. In this article we present an innovative octree-based approach to processing 3D indoor point clouds for multi-story pathfinding. We semantically identify the construction elements that matter for human indoor navigation (i.e., floors, walls, stairs, and obstacles) and use them to delineate the available navigable space. To illustrate the usability of the approach, we applied it to real-world data sets and computed paths under user constraints. Structuring the point cloud into an octree approximation speeds up point-cloud processing and provides a structure for the empty space of the cloud; it also helps compute paths that are sufficiently accurate with respect to the spatial complexity. The entire process is automatic and able to deal with a wide range of multi-story indoor environments.
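The navigable-space idea above can be sketched in miniature. The paper uses an octree; this sketch substitutes a uniform voxel grid and breadth-first search, and the cell size, bounds, and 6-connected neighborhood are illustrative assumptions rather than the paper's method:

```python
from collections import deque
import numpy as np

def voxelize(points, cell=1.0):
    """Map 3D points to integer voxel keys; occupied voxels block navigation."""
    return {tuple(v) for v in np.floor(np.asarray(points) / cell).astype(int)}

def find_path(start, goal, occupied, bounds):
    """Breadth-first search through empty voxels (6-connected neighborhood)."""
    lo, hi = bounds
    frontier, parent = deque([start]), {start: None}
    while frontier:
        cur = frontier.popleft()
        if cur == goal:                      # reconstruct path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        x, y, z = cur
        for nb in [(x+1,y,z), (x-1,y,z), (x,y+1,z),
                   (x,y-1,z), (x,y,z+1), (x,y,z-1)]:
            if nb not in parent and nb not in occupied \
               and all(lo <= c <= hi for c in nb):
                parent[nb] = cur
                frontier.append(nb)
    return None  # no empty-space route exists
```

A wall of occupied voxels with free space above it forces the path to climb over, mimicking how delineated empty space steers the route.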

2.
The human eye's 3D perception is more striking over irregular land surfaces than over flat ones, and quantifying this perception would be useful in many applications. This article presents the first approach to determining the visible volume, which we call the 3D-viewshed, at each and every point of a DEM (Digital Elevation Model). Most previous visibility algorithms in GIS (Geographic Information Systems) are based on the concept of a 2D-viewshed, which determines the points that can be seen from an observer in a DEM. Extending such a 2D-viewshed to 3D space, and then to all DEM points, is computationally expensive since the viewshed computation per se is costly. In this work we propose the first approach to compute a new visibility metric that quantifies the visible volume from every point of a DEM; in particular, we developed an efficient algorithm with high data and computation reuse. This article presents the first total-3D-viewshed maps together with validation results and a comparative analysis. Using our highly scalable parallel algorithm, computing the total-3D-viewshed of a DEM with 4 million points on a Xeon Processor E5-2698 takes only 1.3 minutes.
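The cost the abstract mentions is easy to see in a naive single-observer 2D-viewshed, sketched below; the observer height, nearest-neighbor sampling along each sight line, and the per-cell loop are illustrative assumptions, and the paper's 3D volume metric and reuse optimizations are not shown:

```python
import numpy as np

def viewshed2d(dem, obs, obs_h=1.7):
    """Single-observer 2D-viewshed: a cell is visible when no terrain sample
    along the sight line rises above the straight line from observer to target."""
    rows, cols = dem.shape
    oy, ox = obs
    oz = dem[oy, ox] + obs_h
    vis = np.zeros(dem.shape, dtype=bool)
    vis[oy, ox] = True
    for ty in range(rows):
        for tx in range(cols):
            if (ty, tx) == (oy, ox):
                continue
            n = max(abs(ty - oy), abs(tx - ox))
            ys = np.linspace(oy, ty, n + 1)
            xs = np.linspace(ox, tx, n + 1)
            tz = dem[ty, tx]
            blocked = False
            for i in range(1, n):  # interior samples only
                ground = dem[int(round(ys[i])), int(round(xs[i]))]
                sight = oz + (i / n) * (tz - oz)  # sight-line height at sample
                if ground > sight:
                    blocked = True
                    break
            vis[ty, tx] = not blocked
    return vis
```

Running this for every cell as observer is the quadratic blow-up that motivates the paper's reuse-heavy parallel design.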

3.
Landscape illustration, a core visualization technique for field geologists and geomorphologists, uses parsimonious linework to represent surface structure in a straightforward, intuitive manner. Under the rubric of non-photorealistic rendering (NPR), automated procedures in this vein render silhouettes and creases to represent view-dependent and view-independent landscape features, respectively. This article presents two algorithms, with implementations, for rendering silhouettes from adaptive tessellations of point-normal (PN) triangles at speeds approaching those suitable for animation. PN triangles use cubic polynomial models to provide a surface that appears smooth at any required resolution. The first algorithm, drawing on standard silhouette-detection techniques for surface meshes, builds object-space facet adjacencies and image-space pixel adjacencies in the graphics pipeline following adaptive tessellation. The second relies exclusively on image-space analysis, never referencing the underlying scene geometry. Apart from initial pre-processing, recent advances in the OpenGL API allow both implementations to be hosted entirely on the graphics processing unit (GPU), eliminating slowdowns from data transfer across the system memory bus. We show that both algorithms provide viable paths to real-time animation of pen-and-ink-style landscape illustrations, but that the second outperforms the first.
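The second, image-space algorithm boils down to edge detection on rendered buffers. A minimal CPU stand-in, flagging depth discontinuities with an assumed threshold in place of the paper's GPU implementation:

```python
import numpy as np

def silhouette_mask(depth, thresh=0.1):
    """Image-space silhouette detection: mark pixels where depth jumps
    sharply relative to the neighbor above or to the left."""
    # prepend duplicates the first row/column so the output keeps the input shape
    gy = np.abs(np.diff(depth, axis=0, prepend=depth[:1]))
    gx = np.abs(np.diff(depth, axis=1, prepend=depth[:, :1]))
    return (gx > thresh) | (gy > thresh)
```

On a depth buffer containing a near block over a far background, only the block's top and left boundary pixels are marked, which is the silhouette.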

4.
In this article we present a heuristic map-simplification algorithm based on a novel topology-inferred graph model. Unlike existing algorithms, which focus either on geometry simplification or on topological consistency alone, our algorithm simplifies a map composed of polylines and constraint points while maintaining the topological relationships in the map, maximizing the number of removed points, and keeping the error distance small. Unlike traditional geometry-simplification algorithms such as Douglas and Peucker's, which add points incrementally, we remove points sequentially in a priority order determined by heuristic functions. In the first stage we build a graph that models the topology of the points in the map, from which we determine whether a point is removable. Because map generalization serves different applications with different requirements, we present two heuristic functions for prioritizing point removal: one aimed at saving storage space and one at reducing computation time. The time complexity of our algorithm is low enough for it to be considered for real-time applications. Experiments on real maps show that the algorithm produces high-quality results: one heuristic function removes more points, saving storage space, while the other improves the time performance significantly.
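The paper's heuristic functions are not reproduced here; the sketch below uses one plausible priority, the perpendicular distance of a point to the chord of its current neighbors, to remove points sequentially, smallest error first, and it omits the paper's topology checks:

```python
def perp_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))  # |cross product|
    den = ((bx - ax) ** 2 + (by - ay) ** 2) ** 0.5
    return num / den if den else ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5

def simplify(line, max_err):
    """Remove interior points in ascending error order until the cheapest
    removal would exceed max_err."""
    pts = list(line)
    while len(pts) > 2:
        errs = [(perp_dist(pts[i], pts[i - 1], pts[i + 1]), i)
                for i in range(1, len(pts) - 1)]
        err, i = min(errs)
        if err > max_err:
            break
        del pts[i]
    return pts
```

Recomputing all errors each pass keeps the sketch short; a heap over affected neighbors would give the efficiency the paper targets.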

5.
To meet the needs of 3D reconstruction of weakly textured indoor spaces, this article proposes a registration method for indoor point clouds captured by RGB-D photogrammetry, assisted by quadruple target sets. The method first filters points of large curvature by thresholding to automatically identify the auxiliary targets in adjacent point clouds. It then fits the target parameters and center coordinates with a random sample consensus (RANSAC) formulation, matches corresponding target centers according to the fitted parameters, and completes coarse registration of adjacent clouds with a rigid transformation. On this basis, the overlapping region between adjacent clouds is estimated iteratively and the registration parameters are optimized, yielding fine registration. The method was tested on 12 point-cloud stations from each of two types of indoor scene captured with a Kinect camera; the maximum root-mean-square error of the spacing between registered clouds was better than one sampling interval, demonstrating the method's reliability for registering weakly textured indoor point clouds.
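Once target centers are matched between adjacent clouds, the coarse rigid transform can be estimated in closed form. A sketch of that single step (Kabsch/SVD); the target detection and RANSAC fitting stages are omitted:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst,
    via the Kabsch/Procrustes SVD solution."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(0), dst.mean(0)          # centroids
    H = (src - cs).T @ (dst - cd)              # cross-covariance
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

Given at least three non-collinear matched centers, the recovered R and t reproduce the simulated motion exactly, which is what makes a handful of targets sufficient for coarse registration.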

6.
Rendering large volumes of vector data is computationally intensive and therefore time consuming, leading to lower efficiency and a poorer interactive experience. Graphics processing units (GPUs) are powerful tools for data-parallel processing but lie idle most of the time. In this study we propose an approach that improves the performance of vector-data rendering by using the parallel computing capability of many-core GPUs. Vertex transformation is a time-consuming procedure because every coordinate of each vector feature must be transformed to a screen vertex; being largely a mathematical calculation that requires no communication with the host storage device, it can be executed in parallel on a many-core GPU and optimized effectively. This study focuses on: (1) an organization and storage strategy for vector data based on equal-pitch alignment, adapted to the GPU's computing characteristics; (2) a paging-coalescing strategy for transferring vector data between the CPU and the GPU and for memory access; and (3) a balanced allocation strategy that takes full advantage of all processing cores of the GPU. Experimental results demonstrate that the proposed approach can significantly improve the efficiency of vector-data rendering.
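The vertex-transformation step is a pure, per-element map, which is exactly what makes it GPU-friendly. A vectorized NumPy sketch of one such world-to-screen transform; the bounding-box parameterization and the y-flip convention are assumptions, not the paper's exact pipeline:

```python
import numpy as np

def world_to_screen(coords, bbox, size):
    """Transform world coordinates to screen pixels in one vectorized pass,
    mirroring the per-vertex map the paper offloads to the GPU."""
    (xmin, ymin, xmax, ymax), (w, h) = bbox, size
    xy = np.asarray(coords, float)
    sx = (xy[:, 0] - xmin) / (xmax - xmin) * w
    sy = (ymax - xy[:, 1]) / (ymax - ymin) * h  # flip y: screen origin is top-left
    return np.column_stack([sx, sy])
```

Because each output vertex depends only on its own input, the same loop body maps directly onto one GPU thread per vertex.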

7.
Three-dimensional (3D) terrain modeling based on digital elevation models (DEMs), using orthographic and perspective projections, is a standard procedure implemented in many commercial and open-source geoinformation systems. Standard tools, however, may be insufficient for 3D scientific visualization; in particular, single-source illumination of 3D models can be deficient for topographically complex terrain. We present an approach to 3D terrain modeling with multiple-source illumination in the virtual environment of the free and open-source software Blender. The approach includes the following key stages: (1) automatically creating a polygonal object; (2) selecting an algorithm to model the 3D geometry; (3) selecting a vertical exaggeration scale; (4) selecting the types, parameters, number, and positions of light sources; (5) selecting methods for generating shadows; (6) selecting a shading method for the 3D model; (7) selecting a material for the model surface; (8) overlaying a texture on the model; (9) setting up a virtual camera; and (10) rendering the model. To illustrate the approach, we processed a test DEM extracted from the International Bathymetric Chart of the Arctic Ocean version 3.0 (IBCAO 3.0). The approach is currently being used to develop a system for geomorphometric modeling of the Arctic Ocean floor.
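Outside Blender, the multiple-source idea of stage (4) amounts to summing weighted Lambertian shading terms, one per light. A NumPy sketch of multidirectional hillshading; the aspect convention, the clipping, and the example weights are assumptions:

```python
import numpy as np

def hillshade(dem, lights, cell=1.0):
    """Lambertian shading of a DEM under several light sources,
    each given as (azimuth_deg, altitude_deg, weight)."""
    dz_dy, dz_dx = np.gradient(dem, cell)
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(-dz_dx, dz_dy)           # downslope direction
    shade = np.zeros_like(dem, dtype=float)
    for az, alt, w in lights:
        az, alt = np.radians(az), np.radians(alt)
        s = (np.sin(alt) * np.cos(slope) +
             np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
        shade += w * np.clip(s, 0, 1)            # no negative illumination
    return shade
```

With two opposed lights, ridges lit from one direction no longer vanish into shadow from the other, which is the deficiency of single-source illumination the abstract points at.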

8.
This article proposes an algorithm for removing typical noise from LIDAR point clouds based on 3D finite-element analysis. The algorithm first partitions the raw LIDAR point cloud into finite elements using a spatial hexahedral model; it then clusters noise and non-noise elements according to adjacency-based inference rules; finally, it iteratively removes low-lying noise using progressively finer partition thresholds. Comparative experiments on point clouds acquired by mainstream airborne LIDAR systems show that the 3D finite-element denoising algorithm performs better than the alternatives.

9.
Accurate 3D road information is important for applications such as road maintenance and virtual 3D modeling. Mobile laser scanning (MLS) is an efficient technique for capturing the dense point clouds needed to construct detailed road models over large areas. This paper presents a method for extracting and delineating roads from large-scale MLS point clouds. The method partitions an MLS point cloud into a set of consecutive "scanning lines", each representing a road cross-section. A moving-window operator filters out non-ground points line by line, and curb points are detected from characteristic curb patterns. The detected curb points are then tracked and refined so that they are both globally consistent and locally similar. To evaluate the method, experiments were conducted on two types of street-scene point clouds captured by Optech's Lynx Mobile Mapper System. The completeness, correctness, and quality of the extracted roads exceed 94.42%, 91.13%, and 91.3%, respectively, showing that the proposed method is a promising solution for extracting 3D roads from MLS point clouds.
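The per-scan-line curb test can be miniaturized as a moving-window height-jump detector. The 5-30 cm step range and window size below are illustrative, and the paper's ground filtering, tracking, and refinement stages are omitted:

```python
import numpy as np

def detect_curbs(line_z, jump=(0.05, 0.30), win=5):
    """Flag curb candidates on one scanning line: indices whose moving
    window contains a curb-sized elevation step rising left to right."""
    half = win // 2
    z = np.asarray(line_z, float)
    hits = []
    for i in range(half, len(z) - half):
        w = z[i - half:i + half + 1]
        rise = w.max() - w.min()
        # step must be curb-sized and ascend across the window
        if jump[0] <= rise <= jump[1] and np.argmax(w) > np.argmin(w):
            hits.append(i)
    return hits
```

On a synthetic cross-section with a 15 cm step, only the few indices whose window straddles the step are flagged, which is the raw candidate set the tracking stage would then clean up.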

10.
Mobility and spatial interaction data have become increasingly available due to the wide adoption of location-aware technologies. Examples of mobility data include human daily activities, vehicle trajectories, and animal movements, among others. In this article we focus on a special type of mobility data, origin-destination pairs, and present a new approach to discovering and understanding spatio-temporal patterns in movements. Specifically, to extract information from the complex connections among a large number of point locations, the approach involves two steps: (1) spatial clustering of massive GPS points to recognize potentially meaningful places; and (2) extraction and mapping of the flow measures of the clusters to understand the spatial distribution and temporal trends of the movements. We present a case study using a large set of taxi trajectories from Shenzhen, China to demonstrate and evaluate the methodology. The contribution of the research is two-fold. First, it presents a new methodology for detecting location patterns and spatial structures embedded in origin-destination movements. Second, the approach scales to large data sets and can summarize massive data to facilitate pattern extraction and understanding.
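Steps (1) and (2) can be miniaturized with a grid-based stand-in for the spatial clustering and a counter for the flow measures; the cell size (in degrees) is an assumption, and the paper's actual clustering method is not reproduced:

```python
from collections import Counter
import numpy as np

def grid_cluster(points, cell):
    """Assign each GPS point to a grid cell, a simple stand-in for the
    spatial clustering step that recognizes meaningful places."""
    return [tuple(c) for c in np.floor(np.asarray(points) / cell).astype(int)]

def flows(origins, dests, cell=0.01):
    """Count trips between clusters to summarize origin-destination structure."""
    o = grid_cluster(origins, cell)
    d = grid_cluster(dests, cell)
    return Counter(zip(o, d))
```

The resulting counter is exactly the cluster-to-cluster flow table one would map to study spatial distribution, and per-hour counters would add the temporal dimension.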

11.
With the wide use of laser-scanning technology, point clouds collected by airborne and terrestrial sensors are often integrated to depict a complete scene from the top and ground views, even though points from different platforms and sensors have quite different densities. These massive point clouds with varied structures create many problems for both data management and visualization. In this article, a hybrid spatial index is proposed and implemented to manage and visualize integrated point clouds from airborne and terrestrial scanners. The hybrid structure combines an extended quadtree at the global level, which manages large-area airborne data, with a 3D R-tree that organizes high-density local terrestrial point clouds. Although the point clouds from the different platforms have diverse densities, the hybrid index organizes the data adaptively and supports efficient queries, satisfying the requirements of fast visualization. Experiments on point clouds collected in the Dunhuang area were conducted to evaluate the efficiency of the proposed method.
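The global quadtree level can be sketched as a bucketed point quadtree; the 3D R-tree level, the airborne/terrestrial adaptation, and the bucket capacity below are omitted or assumed:

```python
class QuadTree:
    """Minimal 2D point quadtree: leaf buckets split when capacity is exceeded."""

    def __init__(self, bounds, cap=4):
        self.bounds, self.cap = bounds, cap   # bounds = (xmin, ymin, xmax, ymax)
        self.points, self.children = [], None

    def insert(self, p):
        xmin, ymin, xmax, ymax = self.bounds
        if not (xmin <= p[0] <= xmax and ymin <= p[1] <= ymax):
            return False
        if self.children is None:
            self.points.append(p)
            if len(self.points) > self.cap:
                self._split()
            return True
        return any(c.insert(p) for c in self.children)

    def _split(self):
        xmin, ymin, xmax, ymax = self.bounds
        mx, my = (xmin + xmax) / 2, (ymin + ymax) / 2
        self.children = [QuadTree(b, self.cap) for b in
                         [(xmin, ymin, mx, my), (mx, ymin, xmax, my),
                          (xmin, my, mx, ymax), (mx, my, xmax, ymax)]]
        pts, self.points = self.points, []
        for p in pts:                      # push buffered points down one level
            any(c.insert(p) for c in self.children)

    def query(self, rect):
        xmin, ymin, xmax, ymax = rect
        bx0, by0, bx1, by1 = self.bounds
        if bx1 < xmin or bx0 > xmax or by1 < ymin or by0 > ymax:
            return []                      # prune non-intersecting branches
        if self.children is None:
            return [p for p in self.points
                    if xmin <= p[0] <= xmax and ymin <= p[1] <= ymax]
        return [p for c in self.children for p in c.query(rect)]
```

Dense regions split deeper while sparse regions stay shallow, which is the adaptivity the hybrid index relies on; in the paper a leaf would hand off to an R-tree over local terrestrial points rather than hold raw points.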

12.
Weather radar data play an important role in meteorological analysis and forecasting. In particular, web-based real-time 3D visualization can enable and enhance a variety of meteorological applications while avoiding the dissemination of large amounts of data over the internet. Nevertheless, most existing studies are limited to 2D or to small-scale data analytics due to methodological limitations. This article proposes a new framework for web-based real-time 3D visualization of large-scale weather radar data using 3D Tiles and WebGIS technology. 3D Tiles is an open specification for streaming massive, heterogeneous 3D geospatial datasets, designed to improve rendering performance and reduce memory consumption. First, the weather radar data from multiple single-radar sites across a large coverage area are organized into spliced grid data (i.e., weather radar composing data, WRCD). Next, the WRCD is converted into the widely used 3D tile data structure in four steps: data preprocessing, data indexing, data transformation, and 3D tile generation. Last, to validate the feasibility of the proposed strategy, a prototype named Meteo3D (https://202.195.237.252:82) was implemented to accommodate the WRCD collected from all weather radar sites across the whole of China. The results show that near-real-time, accurate visualization for the monitoring and early warning of strong convective weather can be achieved.

13.
Classification and extraction are the first steps in LiDAR point-cloud processing. To make them more applicable in complex scenes, this article draws on 3D mathematical morphology and proposes a point-cloud extraction method based on the spatial shape features of ground objects. The method first builds a grid index, partitioning space into grid cells to organize the point-cloud data; it then designs four parameter-controllable spatial grid operators according to the shape features that objects exhibit in grid space; finally, it combines point-cloud reflectance intensity to extract specific object classes automatically. The method's applicability is verified by extracting buildings, power poles and lines, and railway tracks from LiDAR point clouds of complex railway scenes, and ground and building roofs from a suburban airborne LiDAR point cloud, providing a convenient basis for programming point-cloud classification and extraction modules.

14.
We present a new procedure for computing dense 3D point clouds from a sequential set of images. The procedure constitutes the second step of a three-step algorithm for 3D reconstruction from image sequences, whose first step is image orientation and whose last step is shape reconstruction. We assume that the camera matrices and a sparse set of 3D points are available, and we aim to obtain a dense and reliable 3D point cloud. Three novel ideas are presented: (1) for sparse tracking and triangulation, the search space for correspondences is reduced to a line segment by means of the known camera matrices, with disparity ranges provided by triangular meshes built from the already available points; (2) triangular meshes from extended point sets are used for dense matching, because these meshes help reconstruct points in weakly textured areas and offer a natural way to obtain subpixel accuracy; and (3) two non-local optimization methods, 1D dynamic programming along horizontal lines and semi-global optimization, are employed to refine the local results obtained from an arbitrary number of images. All methods were extensively tested on a benchmark data set and an infrared video sequence. Both visual and quantitative results demonstrate the effectiveness of our algorithm.
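Idea (1), restricting the correspondence search to a line segment with a bounded disparity range, can be miniaturized as a 1D window match; the SSD cost, window size, and disparity convention below are assumptions, not the paper's exact formulation:

```python
import numpy as np

def match_along_line(left, right, x, win=3, drange=(0, 6)):
    """Find the disparity d minimizing the sum-of-squared-differences between
    a window around column x in `left` and the window around x - d in `right`.
    A stand-in for correspondence search reduced to a short segment."""
    ref = left[x - win:x + win + 1]
    best, best_cost = None, np.inf
    for d in range(*drange):
        xr = x - d
        if xr - win < 0 or xr + win + 1 > len(right):
            continue  # window would fall off the scan line
        cost = np.sum((ref - right[xr - win:xr + win + 1]) ** 2)
        if cost < best_cost:
            best, best_cost = d, cost
    return best
```

Because only a handful of disparities inside the mesh-predicted range are tested, the search is far cheaper than scanning the whole epipolar line, which is the point of idea (1).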

15.
This article presents a new character-level convolutional neural network model that can classify multilingual text written in any character set encodable as UTF-8, a standard and widely used 8-bit character encoding. For geographic classification of text, we demonstrate that this approach is competitive with state-of-the-art word-based text classification methods. The model was tested on four crowdsourced data sets made up of Wikipedia articles, online travel blogs, GeoNames toponyms, and Twitter posts. Unlike word-based methods, which require data cleaning and pre-processing, the proposed model works for any language without modification, with classification accuracy comparable to existing methods. Using a synthetic data set with introduced character-level errors, we show that it is more robust to noise than word-level classification algorithms. The results indicate that UTF-8 character-level convolutional neural networks are a promising technique for georeferencing noisy text, such as that found in colloquial social media posts and in texts scanned with optical character recognition. However, word-based methods currently require less computation time to train, so they remain preferable for classifying well-formatted, cleaned texts in single languages.
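The input representation is what makes the model language-agnostic: text becomes a fixed-length sequence of raw UTF-8 byte values, so any script is handled uniformly with no tokenization. A sketch of that encoding step; the padding value and sequence length are assumptions, and the CNN itself is not shown:

```python
import numpy as np

def encode_utf8(text, length=256):
    """Encode text as a fixed-length vector of UTF-8 byte values,
    truncated or zero-padded to `length` (0 serves as padding)."""
    raw = text.encode('utf-8')[:length]
    arr = np.zeros(length, dtype=np.uint8)
    arr[:len(raw)] = list(raw)
    return arr
```

A multi-byte character such as 中 simply occupies three slots of the sequence; the convolution layers learn byte n-gram patterns without ever knowing the language.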

16.
With the rapid advance of geospatial technologies, the availability of geospatial data from a wide variety of sources has increased dramatically. Integrating (conflating) these multi-source geospatial datasets is beneficial, since integration can provide insights and capabilities not possible with individual datasets. However, multi-source datasets covering the same geographical area are often disparate, and accurately integrating geospatial data from different sources is a challenging task. Among the subtasks of integration/conflation, the most crucial is feature matching, which identifies features from different datasets as representations of the same real-world geographic entity. In this article we present a new relaxation-based point-feature matching approach for matching road intersections between two GIS vector road datasets. The relaxation labeling algorithm uses iterated local context updates to achieve a globally consistent result; the contextual constraints (relative distances between points) are incorporated into the compatibility function employed in each iteration's updates, and the point-to-point matching confidence matrix is initialized using the road-connectivity information at each point. Both the traditional proximity-based approach and our relaxation-based point matching approach were implemented, with experiments conducted over 18 test sites in rural and suburban areas of Columbia, MO. The test results show that the relaxation labeling approach performs much better than proximity matching in both simple and complex situations.
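The core relaxation-labeling update is: each match confidence is reinforced by the support of compatible matches, then rows are renormalized. A sketch on a toy 2x2 problem; the compatibility values are illustrative, not the paper's distance-based function, and the update rule is one common variant:

```python
import numpy as np

def relax(P, compat, iters=10):
    """Relaxation labeling: iteratively reinforce the match-confidence
    matrix P by contextual support, then renormalize each row.
    compat[i, j, k, l] = compatibility of match (i->j) with match (k->l)."""
    P = P.copy()
    for _ in range(iters):
        support = np.einsum('ijkl,kl->ij', compat, P)  # weighted support per match
        P = P * (1 + support)
        P /= P.sum(axis=1, keepdims=True)              # rows stay probability-like
    return P
```

Starting from uniform confidences, the mutually supporting assignment wins out after a few iterations, which is the "iterated local updates yield a globally consistent result" behavior the abstract describes.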

17.
A Snake Method for Extracting Road Boundaries from Vehicle-Borne Laser Point Clouds
Road-boundary extraction from vehicle-borne laser point clouds is difficult and poorly automated. To address this, we propose a road-boundary extraction method based on a Snake model built directly on discrete points, rather than on an image as in traditional Snake formulations. The method first uses pseudo-trajectory points to determine the initial contour position and to parameterize initial contours for different types of road boundary. It then constructs, on the discrete points, a Snake model suited to multiple boundary types, defining internal, external, and constraint energy terms; minimizing the energy function drives the contour curve to salient road-boundary feature points, achieving fine extraction of the different boundaries. The method was validated on three vehicle-borne laser point clouds of different urban scenes: precision reached 97.62%, recall 98.04%, and the F1-measure exceeded 97.83%, and the extracted boundaries agree well with those extracted interactively in software. The results show that the method compensates for data-quality problems such as noise and gaps, extracts road boundaries of various shapes in complex urban environments, and is robust and widely applicable.

18.
The LiDAR point clouds captured by airborne laser scanning provide considerably more information about the terrain surface than most earlier data sources. This rich information is not easily accessed, however, nor readily converted into a high-quality digital elevation model (DEM) surface. The aim of this study is to generate a homogeneous, high-quality DEM at the relevant resolution, as a 2.5D surface. The study focuses on extracting terrain (bare-earth) points from a point cloud using a number of different filtering techniques available in selected freeware. The proposed methodology consists of: (1) assessing the advantages and disadvantages of different filters across the study area; (2) regionalizing the area according to the most suitable filtering results; (3) fusing the differently filtered point clouds by region; and (4) interpolating with a standard algorithm. The resulting DEM is interpolated from a point cloud fused from partial point clouds filtered with multiscale curvature classification (MCC), hierarchical robust interpolation (HRI), and the LAStools filter. An important advantage of the proposed methodology is that the properties of the landscape and the datasets are studied more holistically, combining expert knowledge with automated techniques. The resulting, highly applicable DEM fulfils geometrical (numerical), geomorphological (shape), and semantic quality requirements.

19.
The products generated directly by UAV oblique photogrammetry typically include 3D models, true digital orthophoto maps (TDOMs), and digital surface models (DSMs). Planning and design work, however, usually cannot use the DEM derived from oblique data directly; manual editing is required. As an intermediate product of oblique-image processing, the dense image-matching point cloud is under-exploited. It has a structure similar to that of a LiDAR point cloud, and its density can be chosen freely; data volume aside, the density of a dense-matching point cloud can be several times that of a LiDAR point cloud. Moreover, a dense-matching point cloud carries texture information without separate colorization, which assists the manual visual editing of automatically classified ground points. This article compares the dense-matching and LiDAR point clouds of the same survey area and verifies the feasibility of using dense-matching point clouds to filter ground points and produce DEMs in built-up areas and sparsely forested areas.

20.
Few filtering algorithms exist for terrestrial laser scanning (TLS) point clouds, and their results in topographically complex areas are far from ideal. We therefore extend a 2D clustering algorithm into a 3D point-cloud clustering filter and study its application to TLS data of complex terrain. Taking the TLS data of the Jiguanling unstable rock mass in Chongqing as an example, the data were processed both with a curvature-smoothing filter and with the proposed clustering filter, and deformation was computed and analyzed from the two results. The experiments show that for densely vegetated mountains with complex terrain, the proposed method filters the point cloud well and is considerably faster, providing a sound basis for subsequent deformation computation.
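A minimal form of a 3D point-cloud clustering filter: single-linkage grouping by radius, dropping small clusters as noise or vegetation returns. The radius and minimum cluster size are assumptions, and the deformation analysis is not shown:

```python
import numpy as np

def cluster_filter(points, radius=1.0, min_size=5):
    """Group points by 3D proximity (BFS single-linkage) and keep only
    clusters with at least min_size members."""
    pts = np.asarray(points, float)
    unvisited = set(range(len(pts)))
    keep = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], [seed]
        while frontier:                       # grow the cluster breadth-first
            i = frontier.pop()
            near = [j for j in list(unvisited)
                    if np.linalg.norm(pts[j] - pts[i]) <= radius]
            unvisited.difference_update(near)
            cluster.extend(near)
            frontier.extend(near)
        if len(cluster) >= min_size:          # small clusters treated as noise
            keep.extend(cluster)
    return pts[sorted(keep)]
```

The pairwise distance test makes this quadratic; a real implementation would back it with a spatial index (k-d tree or voxel grid) to reach the speeds the abstract reports.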


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号