Similar Documents
1.
Map overlay analysis is a computationally intensive algorithm, and parallel computing is an effective way to accelerate its execution. This paper studies a method for, and implementation of, parallel point-on-polygon overlay analysis in a distributed environment. First, the parallel data decomposition scheme is designed according to the characteristics of point-polygon overlay: the spatial data are decomposed with a divide-and-conquer strategy so that geographic features can be processed separately in the parallel system. A two-level-index parallel overlay mechanism is then introduced: the polygon layer is distributed after sorting by a Hilbert spatial index, and a quadtree index is built on the point layer so that fast filtering and intersection can be performed for each polygon involved in the overlay. Finally, the parallel algorithm is implemented on a Linux cluster: MPI provides message-passing parallelism in the overall computing framework, and OpenMP provides multi-core local parallelism within each compute node. The results show that the divide-and-conquer approach based on the two-level spatial index enables parallel data partitioning, allows each node to compute independently, reduces I/O contention in the parallel system, and yields a clear parallel speedup. The method is a useful attempt at parallelizing vector map operations and offers an effective approach to spatial data analysis in the big-data era.
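A minimal sketch of the MPI side of such a workflow, not the authors' code: the Hilbert-sorted distribution and the quadtree point index described above are simplified here to a plain chunked scatter and a bounding-box pre-filter, and the geometries are toy data. It assumes mpi4py and shapely are installed.

```python
# Hedged sketch (not the authors' implementation): distribute polygons over MPI
# ranks and test points against them locally. Hilbert sorting and the quadtree
# point index are replaced by a chunked scatter and a bounding-box pre-filter.
from mpi4py import MPI
from shapely.geometry import Point, Polygon

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    polygons = [Polygon([(0, 0), (4, 0), (4, 4), (0, 4)]),
                Polygon([(5, 5), (9, 5), (9, 9), (5, 9)])]   # toy polygon layer
    points = [(1, 1), (6, 6), (10, 10)]                      # toy point layer
    chunks = [polygons[i::size] for i in range(size)]        # round-robin split
else:
    chunks, points = None, None

local_polys = comm.scatter(chunks, root=0)   # each rank gets its polygon share
points = comm.bcast(points, root=0)          # point layer broadcast to all ranks

local_hits = []
for poly in local_polys:
    minx, miny, maxx, maxy = poly.bounds     # cheap bounding-box filter first
    for x, y in points:
        if minx <= x <= maxx and miny <= y <= maxy and poly.contains(Point(x, y)):
            local_hits.append(((x, y), poly.wkt[:30]))

hits = comm.gather(local_hits, root=0)       # collect per-rank results
if rank == 0:
    print(sum(hits, []))
```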

2.
A polygon overlay analysis algorithm under the simple feature model
Most existing vector spatial overlay analyses adopt a topological model, which requires complete topological relationships to be built for the data. This paper adopts the simple feature model instead and, taking the polygon intersection overlay as an example, describes a concrete implementation of spatial overlay analysis under this model. It focuses on an alternating-search algorithm for polygon intersection, handling special cases in segment intersection such as consecutive entry/exit points and coincident intersection points. In practical applications, the algorithm handles intersection overlays of large, complex data layers well and is more efficient than topological overlay at the same scale.
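A minimal sketch, not the paper's algorithm: the segment-segment intersection helper that an alternating-search overlay relies on. The collinear-overlap case (the source of coincident intersection points) is only flagged here rather than resolved.

```python
# Hedged sketch: basic segment intersection with the special cases merely flagged.
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segment_intersection(p1, p2, q1, q2, eps=1e-12):
    """Return ('point', (x, y)), ('collinear', None) or (None, None)."""
    d1 = cross(q1, q2, p1)
    d2 = cross(q1, q2, p2)
    d3 = cross(p1, p2, q1)
    d4 = cross(p1, p2, q2)
    if abs(d1) < eps and abs(d2) < eps:          # segments on the same line:
        return ('collinear', None)               # needs special overlap handling
    denom = d1 - d2
    if abs(denom) < eps:
        return (None, None)                      # parallel, no intersection
    t = d1 / denom                               # crossing position along p1->p2
    if not (0.0 <= t <= 1.0):
        return (None, None)
    if (d3 < -eps and d4 < -eps) or (d3 > eps and d4 > eps):
        return (None, None)                      # q1, q2 on the same side of p1->p2
    x = p1[0] + t * (p2[0] - p1[0])
    y = p1[1] + t * (p2[1] - p1[1])
    return ('point', (x, y))

print(segment_intersection((0, 0), (2, 2), (0, 2), (2, 0)))  # ('point', (1.0, 1.0))
```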

3.
In real-world applications, vector map overlay analysis often has to handle large and complex spatial datasets, so improving the overall efficiency of the analysis is particularly important. Targeting the scenario of overlaying large polygon objects against a large number of smaller polygon objects, this paper proposes a Non-uniform Multi-level Grid Index Overlay (NMGIO) algorithm for vector map overlay analysis. The algorithm consists of four steps: index construction, grid filtering, overlay computation, and topological polygon construction. By building non-uniform multi-level grid indexes on both the dataset to be analyzed and the overlay objects, it exploits the spatial distribution of the data to fundamentally improve overlay efficiency. The paper also gives the overall time complexity of the algorithm and validates the overlay results of a prototype system implemented in C++.
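A much-simplified stand-in for the non-uniform multi-level grid described above: a single-level uniform grid that buckets feature bounding boxes, so only features sharing a cell become candidate pairs for the exact overlay computation. The cell size and feature format are illustrative assumptions.

```python
# Hedged sketch: a one-level uniform grid filter standing in for NMGIO's index.
from collections import defaultdict

def bbox(poly):
    xs, ys = zip(*poly)
    return min(xs), min(ys), max(xs), max(ys)

def grid_filter(layer_a, layer_b, cell=10.0):
    """Return candidate (i, j) index pairs whose bounding boxes share a grid cell."""
    buckets = defaultdict(list)
    for j, poly in enumerate(layer_b):
        minx, miny, maxx, maxy = bbox(poly)
        for cx in range(int(minx // cell), int(maxx // cell) + 1):
            for cy in range(int(miny // cell), int(maxy // cell) + 1):
                buckets[(cx, cy)].append(j)
    candidates = set()
    for i, poly in enumerate(layer_a):
        minx, miny, maxx, maxy = bbox(poly)
        for cx in range(int(minx // cell), int(maxx // cell) + 1):
            for cy in range(int(miny // cell), int(maxy // cell) + 1):
                for j in buckets.get((cx, cy), ()):
                    candidates.add((i, j))
    return candidates     # exact intersection runs only on these pairs

a = [[(0, 0), (8, 0), (8, 8), (0, 8)]]
b = [[(5, 5), (12, 5), (12, 12), (5, 12)], [(50, 50), (60, 50), (60, 60), (50, 60)]]
print(grid_filter(a, b))  # {(0, 0)} -- only the overlapping pair survives filtering
```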

4.
By generating Thiessen polygons for site points and applying the concept and theory of small polygons, this study comprehensively analyzes the clustering of prehistoric cultural sites in the Huaibei Plain of Anhui Province, as well as agriculture, living environment, and transportation in the region during the Shishanzi, Dawenkou, and Longshan cultural periods. The results show that Thiessen-polygon spatial analysis can effectively identify settlement centers and support reasonable inferences about prehistoric human-environment relationships such as agriculture, living environment, and transportation. From the Shishanzi to the Longshan period, the growing number of small polygons reflects increasing settlement clustering and cultural exchange; during the Dawenkou and Longshan periods the central Huaibei Plain was clearly the settlement center of the region, and the gradually decreasing distance from polygon centers to their edges indicates a growing dependence on agriculture and a rising level of agricultural development. Prehistoric cultural development in the Huaibei Plain was influenced by environmental change: from the Shishanzi to the Longshan period the climate became gradually drier, marshes shrank, and temperature changed little. These conditions favored production and daily life, so the numbers of sites and small polygons kept increasing, multiple site clusters and settlement centers appeared in the central plain and along the main course of the Huai River, prehistoric culture flourished, and the ability of humans to adapt to and transform nature improved. The orientation of the small polygons also suggests that the northwest-southeast ancient river systems controlled transportation between prehistoric sites in the Huaibei Plain.
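A minimal sketch of the tessellation step only: generating Thiessen (Voronoi) cells for site points with SciPy. The site coordinates are invented for illustration; identifying "small polygons" and settlement centers is beyond this snippet.

```python
# Hedged sketch: Thiessen polygons for toy site points via scipy.spatial.Voronoi.
import numpy as np
from scipy.spatial import Voronoi

sites = np.array([(2, 3), (5, 1), (7, 6), (3, 8), (9, 2), (5, 4)], dtype=float)
vor = Voronoi(sites)

for i, site in enumerate(sites):
    region = vor.regions[vor.point_region[i]]     # vertex indices of this site's cell
    if -1 in region or not region:
        print(f"site {site}: unbounded Thiessen cell (site lies on the hull)")
    else:
        cell = vor.vertices[region]               # coordinates of the bounded cell
        print(f"site {site}: bounded cell with {len(cell)} vertices")
```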

5.
In vector data analyses such as overlay analysis, buffer analysis, and topological analysis, the first problem to be faced is the topological consistency of the vector data. Topological consistency processing resolves inconsistencies in the spatial topological relationships of GIS vector data caused by acquisition, storage, compression, and conversion, so that the processed data are topologically consistent within a given tolerance and subsequent analysis can proceed. Building on an analysis and summary of existing algorithms, this paper proposes a more efficient, improved topological consistency processing algorithm whose core steps include arc-to-arc topological processing, node-to-arc topological processing, and node-to-node proximity search. Comparative experiments show that the algorithm achieves high processing performance while preserving the quality of topological consistency processing, making it a highly practical algorithm.
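A minimal sketch of one ingredient only, not the paper's algorithm: node-to-node proximity search done with a hash grid, so that nodes closer than the tolerance are snapped to a single representative coordinate. Arc-to-arc and node-to-arc handling are not shown; the tolerance and coordinates are illustrative.

```python
# Hedged sketch: snap nodes within a tolerance using a simple hash grid.
from math import hypot

def snap_nodes(nodes, tol=0.01):
    """Map each (x, y) node to a representative node within distance tol."""
    grid, result = {}, {}
    for x, y in nodes:
        cx, cy = int(x // tol), int(y // tol)
        snapped = None
        for dx in (-1, 0, 1):                 # search the 3x3 neighbouring cells
            for dy in (-1, 0, 1):
                for rx, ry in grid.get((cx + dx, cy + dy), ()):
                    if hypot(x - rx, y - ry) <= tol:
                        snapped = (rx, ry)
                        break
                if snapped:
                    break
            if snapped:
                break
        if snapped is None:
            snapped = (x, y)                  # becomes a new representative node
            grid.setdefault((cx, cy), []).append(snapped)
        result[(x, y)] = snapped
    return result

print(snap_nodes([(1.0, 1.0), (1.004, 0.998), (2.0, 2.0)], tol=0.01))
# the first two nodes collapse onto (1.0, 1.0); the third stays separate
```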

6.
As one of the core functions of GIS, spatial analysis is moving toward ever-larger data volumes and more complex analysis workflows, and traditional serial algorithms increasingly fail to meet requirements for computational efficiency and performance. Parallel spatial analysis algorithms, as an effective way to address these problems, are attracting growing attention. After briefly introducing spatial analysis methods and parallel computing technologies, this paper reviews the research progress of parallel spatial analysis algorithms from the perspectives of vector and raster algorithms, comments on the development directions and open problems of such algorithms given the particular characteristics of spatial data, and discusses the opportunities and challenges facing their design against the backdrop of rapidly advancing computer hardware and software.

7.
Spatial interpolation of daily precipitation from terrain factors based on PRISM and Thiessen polygons
Taking the middle Hexi Corridor section of the Heihe River basin as an example, and exploiting the strong correlation between annual/monthly precipitation and terrain in the study area, daily precipitation was spatially interpolated on the basis of the PRISM method. The paper proposes using the PRISM interpolation of monthly precipitation as the reference background for the spatial distribution of daily precipitation within that month, and using the Thiessen polygon method to determine the spatial probability of daily precipitation, thereby producing daily precipitation maps for the study area. The errors of the resulting daily precipitation spatiotemporal dataset are analyzed and evaluated. The results show that the method is simple and reliable, and meets the spatiotemporal accuracy requirements that distributed hydrological models and related distributed land-surface process simulations place on daily precipitation data.
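A toy analogue, not the paper's implementation: disaggregate a monthly precipitation field to daily values by borrowing each grid cell's daily pattern from its nearest gauge (a Thiessen-style assignment) and rescaling to the monthly total. The gauge records and monthly field are invented here.

```python
# Hedged sketch: Thiessen-style daily disaggregation of a monthly field.
import numpy as np

gauge_xy = np.array([[0.0, 0.0], [10.0, 10.0]])           # two gauges
gauge_daily = np.array([[0., 5., 0., 3.],                 # 4-day record, gauge 0
                        [2., 0., 0., 6.]])                # 4-day record, gauge 1
cells_xy = np.array([[1.0, 2.0], [9.0, 8.0]])             # two grid cells
monthly_field = np.array([40.0, 32.0])                    # PRISM-style monthly totals

# Thiessen assignment: each cell takes the temporal pattern of its nearest gauge.
d = np.linalg.norm(cells_xy[:, None, :] - gauge_xy[None, :, :], axis=2)
nearest = d.argmin(axis=1)

# Scale the gauge's daily fractions so each cell's days sum to its monthly total.
frac = gauge_daily[nearest] / gauge_daily[nearest].sum(axis=1, keepdims=True)
daily_field = frac * monthly_field[:, None]
print(daily_field)   # rows: cells, columns: days; each row sums to the monthly total
```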

8.
Polygons are the most frequently used geometric objects in GIS research and applications. This paper describes an algorithm for splitting arbitrary polygons under the simple feature model. Starting from computational geometry and considering the characteristics of GIS spatial data, the algorithm is designed as follows: 1) sort the boundaries of the polygon and the splitting line, and use a sweep line together with bounding-rectangle tests to find segments that may intersect, improving the efficiency of the intersection search; 2) compute the intersection points and generate node information (including intersection coordinates, line numbers, and entry/exit flags), storing it in a separate singly linked list; 3) trace the result polygons from the node list and the original polygon coordinates. The algorithm can split arbitrary simple polygons (concave or convex, with curved boundaries, or with holes) as well as polygons with shared edges. Finally, the polygon splitting function based on simple feature classes was implemented on the MapGIS 7.0 platform.
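A minimal sketch of step 1 only, not the paper's code: find segment pairs whose bounding rectangles overlap, using a sort on minimum x so the inner loop can stop early. Exact intersection and the node linked list of steps 2 and 3 are not shown; the geometries are illustrative.

```python
# Hedged sketch: bounding-rectangle candidate detection for polygon splitting.
def bbox_of(seg):
    (x1, y1), (x2, y2) = seg
    return min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2)

def candidate_pairs(polygon_segs, cut_segs):
    """Return (i, j) pairs of polygon/cut segments whose boxes overlap."""
    boxes_a = sorted(((bbox_of(s), i) for i, s in enumerate(polygon_segs)))
    boxes_b = sorted(((bbox_of(s), j) for j, s in enumerate(cut_segs)))
    pairs = []
    for (aminx, aminy, amaxx, amaxy), i in boxes_a:
        for (bminx, bminy, bmaxx, bmaxy), j in boxes_b:
            if bminx > amaxx:          # sorted on min x: nothing further can overlap
                break
            if bmaxx >= aminx and bminy <= amaxy and bmaxy >= aminy:
                pairs.append((i, j))
    return pairs

square = [((0, 0), (4, 0)), ((4, 0), (4, 4)), ((4, 4), (0, 4)), ((0, 4), (0, 0))]
cut = [((-1, 2), (5, 2))]                        # a horizontal splitting line
print(candidate_pairs(square, cut))              # only the two vertical edges survive
```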

9.
The convexity or concavity of polygon vertices is an important shape feature, commonly used in map generalization and pattern recognition. Using the area property specific to polygons, this paper introduces the Simpson area formula into an algorithm for identifying vertex convexity: a vertex is judged convex or concave by comparing the sign of the Simpson area of the triangle formed by the vertex and its two neighboring vertices with the sign of the Simpson area of the whole polygon. Derivation shows that the algorithm is equally effective for identifying vertex convexity in complex polygons.
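A minimal sketch of the described test, using the shoelace form of the signed area: matching signs between the triangle (previous vertex, vertex, next vertex) and the whole polygon mean a convex vertex, opposite signs a concave (reflex) one. The example polygon is invented.

```python
# Hedged sketch: vertex convexity by comparing signed (shoelace) areas.
def signed_area(pts):
    """Shoelace signed area: positive for counter-clockwise rings."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        s += x1 * y2 - x2 * y1
    return s / 2.0

def vertex_convexity(polygon):
    """Return a list of 'convex' / 'concave' labels, one per vertex."""
    total = signed_area(polygon)
    labels = []
    n = len(polygon)
    for i in range(n):
        tri = [polygon[i - 1], polygon[i], polygon[(i + 1) % n]]
        # sign agreement with the whole ring => convex; collinear treated as concave here
        labels.append('convex' if signed_area(tri) * total > 0 else 'concave')
    return labels

arrow = [(0, 0), (4, 0), (4, 4), (2, 2), (0, 4)]   # vertex (2, 2) is reflex
print(list(zip(arrow, vertex_convexity(arrow))))
```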

10.
Design and implementation of an algorithm for extracting the main skeleton line of a polygon
Skeleton-line nodes are classified on the basis of a Delaunay triangulation; the two endpoints of the main skeleton line are determined, and backtracking is used to extract the main skeleton line of the polygon. Detailed algorithm steps are given, and the algorithm is implemented under Visual C++ 2003. Compared with other algorithms, this one is conceptually simple and easy to program, and the main skeleton lines it generates are well shaped, reflecting the main shape characteristics and the principal extension direction of the polygon.

11.
In this paper, we propose a new graphics processing unit (GPU) method able to compute the 2D constrained Delaunay triangulation (CDT) of a planar straight-line graph consisting of points and segments. All existing methods compute the Delaunay triangulation of the given point set, insert all the segments, and then finally transform the resulting triangulation into the CDT. To the contrary, our novel approach simultaneously inserts points and segments into the triangulation, taking special care to avoid conflicts during retriangulations due to concurrent insertion of points or concurrent edge flips. Our implementation using the Compute Unified Device Architecture programming model on NVIDIA GPUs improves, in terms of running time, the best known GPU-based approach to the CDT problem.

12.
Polygon intersection is an important spatial data-handling process, on which many spatial operations are based. However, this process is computationally intensive because it involves the detection and calculation of polygon intersections. We addressed this computation issue based on two perspectives. First, we improved a method called boundary algebra filling to efficiently rasterize the input polygons. Polygon intersections were subsequently detected in the cells of the raster. Owing to the use of a raster data structure, this method offers advantages of reduced task dependence and improved performance. Based on this method, we developed parallel strategies for different procedures in terms of workload decomposition and task scheduling. Thus, the workload across different parallel processes can be balanced. The results suggest that our method can effectively accelerate the process of polygon intersection. When addressing datasets with 1,409,020 groups of overlapping polygons, our method could reduce the total execution time from 987.82 to 53.66 s, thereby obtaining an optimal speedup ratio of 18.41 while consistently balancing the workloads. We also tested the effect of task scheduling on the parallel efficiency, showing that reducing the total runtime is effective, especially for a lower number of processes. Finally, the good scalability of the method is demonstrated.
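A toy stand-in for the raster-based detection idea, not the boundary algebra filling method itself: lay a coarse raster over two polygon layers and record, per cell, which polygon pairs both touch the cell, so the exact intersection only runs on those pairs. The grid resolution and geometries are illustrative; shapely is assumed.

```python
# Hedged sketch: cell-based detection of potentially intersecting polygon pairs.
from shapely.geometry import Polygon, box

layer_a = [Polygon([(0, 0), (6, 0), (6, 6), (0, 6)])]
layer_b = [Polygon([(4, 4), (9, 4), (9, 9), (4, 9)]),
           Polygon([(20, 20), (22, 20), (22, 22), (20, 22)])]

cell = 2.0
pairs = set()
for gx in range(0, 12):                 # a fixed 12x12-cell toy raster
    for gy in range(0, 12):
        c = box(gx * cell, gy * cell, (gx + 1) * cell, (gy + 1) * cell)
        ia = [i for i, p in enumerate(layer_a) if p.intersects(c)]
        ib = [j for j, p in enumerate(layer_b) if p.intersects(c)]
        pairs.update((i, j) for i in ia for j in ib)

print(pairs)                                      # {(0, 0)}
print(layer_a[0].intersection(layer_b[0]).area)   # exact overlay only where needed
```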

13.
Viewshed analysis, often supported by geographic information system, is widely used in many application domains. However, as terrain data continue to become increasingly large and available at high resolutions, data-intensive viewshed analysis poses significant computational challenges. General-purpose computation on graphics processing units (GPUs) provides a promising means to address such challenges. This article describes a parallel computing approach to data-intensive viewshed analysis of large terrain data using GPUs. Our approach exploits the high-bandwidth memory of GPUs and the parallelism of massive spatial data to enable memory-intensive and computation-intensive tasks while central processing units are used to achieve efficient input/output (I/O) management. Furthermore, a two-level spatial domain decomposition strategy has been developed to mitigate a performance bottleneck caused by data transfer in the memory hierarchy of GPU-based architecture. Computational experiments were designed to evaluate computational performance of the approach. The experiments demonstrate significant performance improvement over a well-known sequential computing method, and an enhanced ability of analyzing sizable datasets that the sequential computing method cannot handle.
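A sequential CPU baseline sketch, not the article's GPU code: the core line-of-sight test behind a viewshed, in which a target cell is visible if no cell sampled along the ray to the viewpoint rises above the sight line. The DEM values and viewpoint are invented.

```python
# Hedged sketch: a naive line-of-sight test, the kernel a viewshed repeats per cell.
import numpy as np

def visible(dem, vp, target, observer_height=1.7, samples=50):
    (vr, vc), (tr, tc) = vp, target
    z0 = dem[vr, vc] + observer_height
    z1 = dem[tr, tc]
    for t in np.linspace(0.0, 1.0, samples)[1:-1]:        # points along the ray
        r = int(round(vr + t * (tr - vr)))
        c = int(round(vc + t * (tc - vc)))
        if dem[r, c] > z0 + t * (z1 - z0):                 # terrain blocks the ray
            return False
    return True

dem = np.zeros((50, 50))
dem[25, 10:40] = 30.0                                      # a ridge across the grid
print(visible(dem, (10, 25), (20, 25)))                    # True: in front of the ridge
print(visible(dem, (10, 25), (40, 25)))                    # False: the ridge blocks the view
```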

14.
As an important spatiotemporal simulation approach and an effective tool for developing and examining spatial optimization strategies (e.g., land allocation and planning), geospatial cellular automata (CA) models often require multiple data layers and consist of complicated algorithms in order to deal with the complex dynamic processes of interest and the intricate relationships and interactions between the processes and their driving factors. Also, massive amount of data may be used in CA simulations as high-resolution geospatial and non-spatial data are widely available. Thus, geospatial CA models can be both computationally intensive and data intensive, demanding extensive length of computing time and vast memory space. Based on a hybrid parallelism that combines processes with discrete memory and threads with global memory, we developed a parallel geospatial CA model for urban growth simulation over the heterogeneous computer architecture composed of multiple central processing units (CPUs) and graphics processing units (GPUs). Experiments with the datasets of California showed that the overall computing time for a 50-year simulation dropped from 13,647 seconds on a single CPU to 32 seconds using 64 GPU/CPU nodes. We conclude that the hybrid parallelism of geospatial CA over the emerging heterogeneous computer architectures provides scalable solutions to enabling complex simulations and optimizations with massive amount of data that were previously infeasible, sometimes impossible, using individual computing approaches.
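A toy serial sketch, not the authors' hybrid CPU/GPU model: one urban-growth update in which a non-urban cell converts when enough of its 3x3 neighbours are urban and a suitability surface allows it. The threshold, seed pattern, and suitability layer are invented assumptions.

```python
# Hedged sketch: a minimal urban-growth cellular automaton step with NumPy.
import numpy as np

def ca_step(urban, suitability, min_neighbours=3, threshold=0.5):
    padded = np.pad(urban, 1)
    # count urban cells in each 3x3 neighbourhood (excluding the cell itself)
    neigh = sum(padded[i:i + urban.shape[0], j:j + urban.shape[1]]
                for i in range(3) for j in range(3)) - urban
    grow = (urban == 0) & (neigh >= min_neighbours) & (suitability > threshold)
    return np.where(grow, 1, urban)

rng = np.random.default_rng(0)
urban = np.zeros((8, 8), dtype=int)
urban[3:5, 3:5] = 1                          # a small urban seed
suitability = rng.random((8, 8))

for _ in range(3):                           # three annual updates
    urban = ca_step(urban, suitability)
print(urban)
```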

15.
Research on a distributed parallel spatial index mechanism based on R-trees
To improve the efficiency of managing and parallel-processing massive spatial data in distributed parallel computing environments, a multi-level parallel R-tree spatial index structure is designed on the basis of research into parallel spatial index mechanisms. The index is built on an efficient parallel spatial data partitioning strategy and follows classical parallel computing methodology, so that its design guarantees good load balancing while being well suited to parallel processing of massive spatial data. Using the system response time of parallel spatial range queries as the performance metric, experiments show that the parallel spatial index structure is soundly designed and highly efficient.
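A single-machine analogue, not the paper's multi-level parallel structure: an R-tree built with the Python rtree package, showing the insert and window query that a distributed design would partition across nodes. The feature IDs and bounding boxes are invented.

```python
# Hedged sketch: an in-process R-tree index and range query with the rtree package.
from rtree import index

features = {
    1: (0.0, 0.0, 2.0, 2.0),      # id -> bounding box (minx, miny, maxx, maxy)
    2: (5.0, 5.0, 7.0, 6.0),
    3: (1.0, 1.0, 3.0, 4.0),
}

idx = index.Index()
for fid, bounds in features.items():
    idx.insert(fid, bounds)

window = (0.5, 0.5, 2.5, 2.5)                  # a spatial range query
print(sorted(idx.intersection(window)))        # [1, 3]
```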

16.
关雪峰, 曾宇媚. 《地理科学进展》, 2018, 37(10): 1314-1327
With the rapid development of the Internet, the Internet of Things, and cloud computing, data related to time and space are growing explosively, and the era of spatiotemporal big data has arrived. Besides the typical "4V" characteristics of big data, spatiotemporal big data possess rich semantic features and dynamic spatiotemporal associations, and have become an important resource for geographers analyzing the natural environment and sensing the patterns of human social activity. In concrete research applications, however, traditional data processing and analysis methods can no longer meet the performance requirements of efficient storage and access, real-time processing, and intelligent mining of spatiotemporal big data. The integration of spatiotemporal big data with high-performance computing and cloud computing is therefore an inevitable trend. Against this background, this paper first traces the origin of big data, reviews the evolution of the concept, and summarizes the distinctive characteristics of spatiotemporal big data; it then analyzes the performance requirements arising from research and applications and surveys the current state of the underlying hardware and software platforms; it further summarizes the state of parallelization from three perspectives, namely storage and management, spatiotemporal analysis, and domain-specific mining of spatiotemporal big data, and discusses the remaining problems; finally, it points out future research trends for spatiotemporal big data.

17.
Geographically Weighted Regression (GWR) is a widely used tool for exploring spatial heterogeneity of processes over geographic space. GWR computes location-specific parameter estimates, which makes its calibration process computationally intensive. The maximum number of data points that can be handled by current open-source GWR software is approximately 15,000 observations on a standard desktop. In the era of big data, this places a severe limitation on the use of GWR. To overcome this limitation, we propose a highly scalable, open-source FastGWR implementation based on Python and the Message Passing Interface (MPI) that scales to the order of millions of observations. FastGWR optimizes memory usage along with parallelization to boost performance significantly. To illustrate the performance of FastGWR, a hedonic house price model is calibrated on approximately 1.3 million single-family residential properties from a Zillow dataset for the city of Los Angeles, which is the first effort to apply GWR to a dataset of this size. The results show that FastGWR scales linearly as the number of cores within the High-Performance Computing (HPC) environment increases. It also outperforms currently available open-source GWR software packages, running up to thousands of times faster on a standard desktop.
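A minimal sketch, not FastGWR itself: the local weighted least-squares solve that GWR repeats at every calibration location, which is roughly the unit of work a distributed implementation spreads over MPI ranks. The data, bandwidth, and Gaussian kernel choice are illustrative assumptions.

```python
# Hedged sketch: one local GWR fit, i.e. beta(loc) = (X'WX)^-1 X'Wy with spatial weights.
import numpy as np

def gwr_local_fit(X, y, coords, loc, bandwidth):
    """Return the local coefficient vector at one calibration location."""
    d = np.linalg.norm(coords - loc, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)          # Gaussian kernel weights
    XtW = X.T * w                                    # equivalent to X.T @ diag(w)
    return np.linalg.solve(XtW @ X, XtW @ y)

rng = np.random.default_rng(1)
n = 200
coords = rng.random((n, 2)) * 10
X = np.column_stack([np.ones(n), rng.random(n)])     # intercept + one covariate
y = 2.0 + 3.0 * X[:, 1] + rng.normal(scale=0.1, size=n)

print(gwr_local_fit(X, y, coords, coords[0], bandwidth=2.0))  # close to [2, 3]
```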

18.
ABSTRACT

Geographically Weighted Regression (GWR) has been broadly used in various fields to model spatially non-stationary relationships. Multi-scale Geographically Weighted Regression (MGWR) is a recent advancement to the classic GWR model. MGWR is superior in capturing multi-scale processes over the traditional single-scale GWR model by using different bandwidths for each covariate. However, the multi-scale property of MGWR brings additional computation costs. The calibration process of MGWR involves iterative back-fitting under the additive model (AM) framework. Currently, MGWR can only be applied on small datasets within a tolerable time and is prohibitively time-consuming to run with moderately large datasets (greater than 5,000 observations). In this paper, we propose a parallel implementation that has crucial computational improvements to the MGWR calibration. This improved computational method reduces both memory footprint and runtime to allow MGWR modelling to be applied to moderate-to-large datasets (up to 100,000 observations). These improvements are integrated into the mgwr python package and the MGWR 2.0 software, both of which are freely available to download.
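A heavily simplified outline, not the mgwr package's implementation: the back-fitting loop that MGWR calibration follows, in which each covariate's contribution is re-fitted against the partial residuals with its own bandwidth until the additive terms stop changing. The helper gwr_smooth() is an assumed stand-in for a single-bandwidth, one-covariate GWR fit; the data and bandwidths below are illustrative.

```python
# Hedged sketch: MGWR-style back-fitting under the additive model framework.
import numpy as np

def gwr_smooth(x, resid, coords, bandwidth):
    """One-covariate local fit at every location; returns that term's fitted values."""
    fitted = np.empty_like(resid)
    for i, loc in enumerate(coords):
        w = np.exp(-0.5 * (np.linalg.norm(coords - loc, axis=1) / bandwidth) ** 2)
        beta = np.sum(w * x * resid) / np.sum(w * x * x)      # weighted 1-D regression
        fitted[i] = beta * x[i]
    return fitted

def mgwr_backfit(X, y, coords, bandwidths, max_iter=20, tol=1e-5):
    n, k = X.shape
    terms = np.zeros((n, k))                   # additive contribution of each covariate
    for _ in range(max_iter):
        change = 0.0
        for j in range(k):
            partial = y - terms.sum(axis=1) + terms[:, j]     # partial residuals for term j
            new_term = gwr_smooth(X[:, j], partial, coords, bandwidths[j])
            change = max(change, np.abs(new_term - terms[:, j]).max())
            terms[:, j] = new_term
        if change < tol:                       # terms stopped changing: converged
            break
    return terms

# toy usage with an intercept column and one covariate, each with its own bandwidth
rng = np.random.default_rng(2)
coords = rng.random((100, 2))
X = np.column_stack([np.ones(100), rng.random(100)])
y = 1.0 + 2.0 * X[:, 1] + rng.normal(scale=0.1, size=100)
terms = mgwr_backfit(X, y, coords, bandwidths=[0.5, 0.5])
```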

