Similar Articles
20 similar articles found.
1.
2.
Abstract

This paper explores parallel programming issues relevant to the efficient implementation of spatial data handling procedures on current parallel computers, through sample implementations of the Douglas line simplification procedure. Using source code-equivalent implementations of the Douglas procedure, this paper analyses the performance characteristics of two parallel implementations, compares them to those of a sequential implementation, and identifies critical components of the parallel implementations that enhance or inhibit their overall performance. The results of this work show that selecting appropriate interprocessor communication and load balancing strategies is crucial to obtaining large speedups over comparable sequential implementations.

3.
Abstract

Large spatial interpolation problems present significant computational challenges even for the fastest workstations. In this paper we demonstrate how parallel processing can be used to reduce computation times to levels suitable for interactive interpolation analyses of large spatial databases. Though the approach developed in this paper can be used with a wide variety of interpolation algorithms, we specifically contrast the results obtained from a global ‘brute force’ inverse-distance weighted interpolation algorithm with those obtained using a much more efficient local approach. The parallel versions of both implementations are superior to their sequential counterparts. However, the local version of the parallel algorithm provides the best overall performance.
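The global ‘brute force’ scheme contrasted in this abstract weighs every known sample against every query point. A minimal sketch of that idea follows; the function and parameter names are illustrative, not the authors', and the O(n·m) cost it exposes is exactly what motivates the local and parallel variants:

```python
import numpy as np

def idw_interpolate(xy_known, z_known, xy_query, power=2.0):
    """Global inverse-distance weighted interpolation: every known
    sample contributes to every query point, weighted by 1/d^power."""
    z_query = np.empty(len(xy_query))
    for i, q in enumerate(xy_query):
        d = np.linalg.norm(xy_known - q, axis=1)
        if np.any(d == 0):                 # query coincides with a sample
            z_query[i] = z_known[np.argmin(d)]
            continue
        w = 1.0 / d ** power               # weights fall off with distance
        z_query[i] = np.sum(w * z_known) / np.sum(w)
    return z_query

samples = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
values = np.array([0.0, 1.0, 2.0])
# the three samples are equidistant from (0.5, 0.5),
# so the estimate there is their mean (1.0)
print(idw_interpolate(samples, values, np.array([[0.5, 0.5]])))
```

A local variant would restrict the weighted sum to the k nearest samples, which is the efficiency gain the abstract contrasts against.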

4.
Abstract

To achieve high levels of performance in parallel geoprocessing, the underlying spatial structure and relations of spatial models must be accounted for and exploited during decomposition into parallel processes. Spatial models are classified from two perspectives, the domain of modelling and the scope of operations, and a framework of strategies is developed to guide the decomposition of models with different characteristics into parallel processes. Two models are decomposed using these strategies: hill-shading on digital elevation models and the construction of Delaunay triangulations. Performance statistics are presented for implementations of these algorithms on a MIMD computer.

5.
In this paper, we report efforts to develop a parallel implementation of the p-compact regionalization problem suitable for multi-core desktop and high-performance computing environments. Regionalization for data aggregation is a key component of many spatial analytical workflows and is known to be NP-hard. We utilize a low-communication-cost parallel implementation technique that provides a benchmark for more complex implementations of this algorithm. Both the initialization phase, utilizing a Memory-based Randomized Greedy and Edge Reassignment (MERGE) algorithm, and the local search phase, utilizing simulated annealing, are distributed over available compute cores. Our results suggest that the proposed parallelization strategy is capable of solving the compactness-driven regionalization problem both efficiently and effectively. We expect this work to advance CyberGIS research by extending its application areas into the domain of regionalization, and to contribute to the spatial analysis community a parallelization strategy for solving large regionalization problems efficiently.

6.
Kernel density estimation (KDE) is a classic approach for spatial point pattern analysis. In many applications, KDE with spatially adaptive bandwidths (adaptive KDE) is preferred over KDE with an invariant bandwidth (fixed KDE). However, bandwidth determination for adaptive KDE is extremely computationally intensive, particularly for point pattern analysis tasks of large problem sizes. This computational challenge impedes the application of adaptive KDE to large point data sets, which are common in this big data era. This article presents a graphics processing units (GPUs)-accelerated adaptive KDE algorithm for efficient spatial point pattern analysis on spatial big data. First, optimizations were designed to reduce the algorithmic complexity of the bandwidth determination algorithm for adaptive KDE. The massively parallel computing resources on the GPU were then exploited to further speed up the optimized algorithm. Experimental results demonstrated that the proposed optimizations improved performance by a factor of tens. Compared to the sequential algorithm and an Open Multiprocessing (OpenMP)-based algorithm leveraging multiple central processing unit cores, the GPU-enabled algorithm accelerated point pattern analysis tasks by factors of hundreds and tens, respectively. Additionally, the GPU-accelerated adaptive KDE algorithm scales reasonably well as data set size increases. Given the significant acceleration brought by the GPU-enabled adaptive KDE algorithm, point pattern analysis with the adaptive KDE approach on large point data sets can be performed efficiently. Point pattern analysis on spatial big data, computationally prohibitive with the sequential algorithm, can be conducted routinely with the GPU-accelerated algorithm. The GPU-accelerated adaptive KDE approach contributes to the geospatial computational toolbox that facilitates geographic knowledge discovery from spatial big data.
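The bandwidth-determination step that dominates adaptive KDE can be made concrete with a short sequential sketch. This uses a common pilot-density formulation (Gaussian kernel, bandwidths widened in sparse areas); it is an illustration of the O(n²) bottleneck, not the authors' optimized GPU code, and all names are ours:

```python
import numpy as np

def adaptive_kde(points, grid, h0=1.0, alpha=0.5):
    """Sequential adaptive KDE sketch for 2-D points: a fixed-bandwidth
    pilot density sets a per-point bandwidth, which is then used for the
    final estimate on the evaluation grid. Both stages are O(n^2)-ish,
    the cost a GPU implementation would attack."""
    n = len(points)
    # pilot density at each data point, fixed bandwidth h0
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    pilot = np.exp(-d2 / (2 * h0**2)).sum(1) / (n * 2 * np.pi * h0**2)
    # adaptive bandwidths: wider in sparse areas, narrower in dense ones
    g = np.exp(np.mean(np.log(pilot)))          # geometric mean of pilots
    h = h0 * (pilot / g) ** (-alpha)
    # final estimate on the evaluation grid with per-point bandwidths
    d2g = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return (np.exp(-d2g / (2 * h**2)) / (2 * np.pi * h**2)).sum(1) / n
```

The pairwise squared-distance arrays are what a GPU version would tile across thread blocks instead of materializing in host memory.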

7.
As geospatial researchers' access to high-performance computing clusters continues to increase alongside the availability of high-resolution spatial data, it is imperative that techniques are devised to exploit these clusters' ability to quickly process and analyze large amounts of information. This research concentrates on the parallel computation of A Multidirectional Optimal Ecotope-Based Algorithm (AMOEBA). AMOEBA is used to derive spatial weight matrices for spatial autoregressive models and as a method for identifying irregularly shaped spatial clusters. While improvements have been made to the original ‘exhaustive’ algorithm, the resulting ‘constructive’ algorithm can still take a significant amount of time to complete with large datasets. This article outlines a parallel implementation of AMOEBA (the P-AMOEBA) written in Java utilizing the message passing library MPJ Express. In order to account for differing types of spatial grid data, two decomposition methods are developed and tested. The benefits of using the new parallel algorithm are demonstrated on an example dataset. Results show that different decompositions of spatial data affect the computational load balance across multiple processors and that the parallel version of AMOEBA achieves substantially faster runtimes than those reported in related publications.

8.
This article discusses the integration of two models, namely, the Physical Forest Fire Spread (PhFFS) and the High Definition Wind Model (HDWM), into a Geographical Information System-based interface. The resulting tool automates data acquisition, preprocesses spatial data, launches the aforementioned models and displays the corresponding results in a single environment. Our implementation uses the Python language and Esri's ArcPy library to extend the functionality of ArcMap 10.4. The PhFFS is a simplified 2D physical wildland fire spread model based on conservation equations, with convection and radiation as heat transfer mechanisms. It also includes some 3D effects. The HDWM arises from an asymptotic approximation of the Navier–Stokes equations, and provides a 3D wind velocity field in an air layer above the terrain surface. Both models can be run in standalone or coupled mode. Finally, the simulation of a real fire in Galicia (Spain) confirms that the tool developed is efficient and fully operational.

9.
With the increasing sizes of digital elevation models (DEMs), there is a growing need to design parallel schemes for existing sequential algorithms that identify and fill depressions in raster DEMs. The Priority-Flood algorithm is the fastest sequential algorithm in the literature for depression identification and filling of raster DEMs, but it has had no parallel implementation since it was proposed approximately a decade ago. A parallel Priority-Flood algorithm based on the fastest sequential variant is proposed in this study. The algorithm partitions a DEM into stripes, processes each stripe using the sequential variant in many rounds, and progressively identifies more slope cells that are misidentified as depression cells in previous rounds. Both Open Multi-Processing (OpenMP)- and Message Passing Interface (MPI)-based implementations are presented. The speed-up ratios of the OpenMP-based implementation over the sequential algorithm are greater than four for all tested DEMs with eight computing threads. The mean speed-up ratio of our MPI-based implementation is greater than eight over TauDEM, which is a widely used MPI-based library for hydrologic information extraction. The speed-up ratios of our MPI-based implementation generally become larger with more computing nodes. This study shows that the Priority-Flood algorithm can be implemented in parallel, which makes it an ideal algorithm for depression identification and filling on both single computers and computer clusters.
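The sequential Priority-Flood algorithm that the parallel scheme builds on can be sketched roughly as follows. This is an illustrative single-process version (4-connected neighbourhood, no flow-direction output), not the paper's striped OpenMP/MPI implementation:

```python
import heapq
import numpy as np

def priority_flood_fill(dem):
    """Sequential Priority-Flood depression filling: seed a min-priority
    queue with the DEM border, then grow inward, raising any cell that
    sits below the spill elevation of the cell it is reached from."""
    rows, cols = dem.shape
    filled = dem.astype(float).copy()
    seen = np.zeros(dem.shape, dtype=bool)
    pq = []
    # seed the queue with all edge cells (water can always exit there)
    for r in range(rows):
        for c in range(cols):
            if r in (0, rows - 1) or c in (0, cols - 1):
                heapq.heappush(pq, (filled[r, c], r, c))
                seen[r, c] = True
    while pq:
        z, r, c = heapq.heappop(pq)          # lowest unprocessed cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not seen[nr, nc]:
                seen[nr, nc] = True
                # a depression cell is raised to its spill elevation
                filled[nr, nc] = max(filled[nr, nc], z)
                heapq.heappush(pq, (filled[nr, nc], nr, nc))
    return filled
```

The paper's contribution is to run this per stripe and iterate rounds to correct cells misclassified near stripe boundaries, which the single-queue version above never has to do.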

10.
Moving object databases are designed to store and process spatial and temporal object data. An especially useful moving object type is a moving region, which consists of one or more moving polygons suitable for modeling the spread of forest fires, the movement of clouds, the spread of diseases and many other real-world phenomena. Previous implementations usually allow a changing shape of the region during the movement; however, the necessary restrictions on this model result in an inaccurate interpolation of rotating objects. In this paper, we present an alternative approach for moving and rotating regions of fixed shape, called Fixed Moving Regions, which provide a significantly better model for a wide range of applications, such as modeling the movement of oil tankers, icebergs and other rigid structures. Furthermore, we describe and implement several useful operations on this new object type to enable a database system to solve many real-world problems, such as collision tests, projections and intersections, much more accurately than other models. Based on this research, we also implemented a library for easy integration into moving object database systems, such as the DBMS Secondo (1) (2) developed at the FernUniversität in Hagen.

11.
In a pilot classification of 282 10-km squares in Great Britain, data on physiography, climate and geology were extracted. Parallel classifications were run using these variables and also using spatial location. Two classification methods were compared: minimum within-group variance and indicator species analysis. Similarities between the resulting classifications were considered, and the groups were assessed for geographic coherence. Their validity for use as stratifications was tested using analysis of variance and also by matching the classifications with known distributions of a number of bird and plant species. Classifications using spatial variables were geographically more coherent than those without. The different methods resulted in different groupings of the squares, partly as a result of the differences in weightings applied to the four types of variable. However, the analysis of variance showed that either classification method provided a good stratification of the country, in particular with respect to altitude and rainfall. Some bird and plant species distributions correlated well with the classifications, but others did not, depending on the factors limiting those distributions.

12.
This study presents a methodology for conducting sensitivity and uncertainty analysis of a GIS-based multi-criteria model used to assess flood vulnerability in a case study in Brazil. The paper explores the robustness of model outcomes against slight changes in criteria weights. Criteria weights were varied one at a time, while the others were held at their baseline values. An algorithm was developed using Python and a geospatial data abstraction library to automate the variation of weights, implement the ANP (analytic network process) tool, reclassify the raster results, compute the class switches, and generate an uncertainty surface. Results helped to identify highly vulnerable areas that are burdened by high uncertainty and to investigate which criteria contribute to this uncertainty. Overall, the criteria ‘houses with improper building material’ and ‘evacuation drills and training’ are the most sensitive and thus require more accurate measurements. The sensitivity of these criteria is explained by their weights in the base run, their spatial distribution, and the spatial resolution. These findings can help decision makers characterize, report, and mitigate uncertainty in vulnerability assessment. The case study results demonstrate that the developed approach is simple, flexible, transparent, and may be applied to other complex spatial problems.
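The one-at-a-time weight perturbation described in this abstract can be sketched in a few lines. The helper name, the perturbation sizes, and the renormalization choice below are illustrative assumptions, not the paper's exact scheme:

```python
def oat_weight_scenarios(weights, deltas=(-0.1, 0.1)):
    """One-at-a-time sensitivity scenarios: perturb a single criterion
    weight by each delta while holding the rest at baseline, then
    renormalize so every scenario's weights still sum to 1."""
    scenarios = []
    for name in weights:
        for delta in deltas:
            w = dict(weights)                      # baseline copy
            w[name] = max(0.0, w[name] + delta)    # perturb one criterion
            total = sum(w.values())
            scenarios.append({k: v / total for k, v in w.items()})
    return scenarios

baseline = {"building_material": 0.5, "evacuation_drills": 0.3, "drainage": 0.2}
for s in oat_weight_scenarios(baseline):
    print(s)
```

Each scenario would then be fed through the multi-criteria model, and class switches between the baseline and perturbed outputs counted per raster cell to build the uncertainty surface.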

13.
Cellular automata (CA), a bottom-up modelling approach, can be used to simulate urban dynamics and land use changes effectively. Urban simulation usually involves a large set of GIS data in terms of the extent of the study area and the number of spatial factors. Computational capacity becomes a bottleneck when implementing CA for simulating large regions. Parallel computing techniques can be applied to CA to solve such computationally hard problems. This paper demonstrates that the performance of large-scale urban simulation can be significantly improved by using parallel computation techniques. The proposed urban CA is implemented in a parallel framework that runs on a cluster of PCs. A large region usually consists of heterogeneous or polarized development patterns. This study proposes a line-scanning method of load balancing to reduce waiting time between parallel processors. The proposed method has been tested in a fast-growing region, the Pearl River Delta. The experiments indicate that parallel computation techniques with load balancing can significantly improve the applicability of CA for simulating urban development in this large complex region.

14.
Land grading is fundamental work in small-town land management. Taking the land grading of the Tianbao town area in Fujian Province as an example, this paper discusses the application of geographic information system technology to town land grading. It focuses on the processing of attribute and spatial data in a GIS environment: building the base map library and base attribute database for land grading, generating evaluation units, computing unit scores and total unit scores, and producing the resulting maps.

15.
This study presents a massively parallel spatial computing approach that uses general-purpose graphics processing units (GPUs) to accelerate Ripley’s K function for univariate spatial point pattern analysis. Ripley’s K function is a representative spatial point pattern analysis approach that allows for quantitatively evaluating the spatial dispersion characteristics of point patterns. However, considerable computation is often required when analyzing large spatial data using Ripley’s K function. In this study, we developed a massively parallel approach of Ripley’s K function for accelerating spatial point pattern analysis. GPUs serve as a massively parallel platform that is built on many-core architecture for speeding up Ripley’s K function. Variable-grained domain decomposition and thread-level synchronization based on shared memory are parallel strategies designed to exploit concurrency in the spatial algorithm of Ripley’s K function for efficient parallelization. Experimental results demonstrate that substantial acceleration is obtained for Ripley’s K function parallelized within GPU environments.
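A naive sequential version of univariate Ripley's K makes the computational burden the abstract describes concrete: the O(n²) pairwise-distance step is the natural target for GPU parallelization. This sketch omits edge correction, and its names are illustrative:

```python
import numpy as np

def ripleys_k(points, radii, area):
    """Naive univariate Ripley's K (no edge correction): for each
    distance r, count ordered point pairs closer than r, then scale
    by n and the point intensity lambda = n / area."""
    n = len(points)
    # O(n^2) pairwise distance matrix -- the GPU-parallelizable core
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)        # exclude self-pairs
    lam = n / area                     # point intensity
    return np.array([(d < r).sum() / (n * lam) for r in radii])
```

Under complete spatial randomness K(r) is expected to approach πr², so departures above or below that curve indicate clustering or dispersion, which is the "spatial dispersion characteristics" evaluation the abstract refers to.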

16.
Geographically weighted spatial statistical methods are a family of spatial statistical methods developed to address the presence of non-stationarity in geographical processes, the so-called spatial heterogeneity. While these methods have recently become popular for the analysis of spatial data, one of their characteristics is that they produce outputs that in themselves form complex multi-dimensional spatial data sets. Interpretation of these outputs is therefore not easy, but is of high importance, since spatial and non-spatial patterns in the results of these methods contain clues to the causes of the underlying non-stationarity. In this article, we focus on one of the geographically weighted methods, geographically weighted discriminant analysis (GWDA), a method for prediction and analysis of categorical spatial data. It is an extension of linear discriminant analysis (LDA) that allows the relationship between the predictor variables and the categories to vary spatially. This produces a very complex set of GWDA results, which include, on top of the already complex discriminant analysis outputs (e.g. classifications and posterior probabilities), spatially varying outputs (e.g. classification function parameters). In this article, we suggest using geovisual analytics to visualise results from LDA and GWDA to facilitate comparison between the global and local method results. For this, we develop a bespoke visual methodology that allows us to examine the performance of the global and local classification methods in terms of quality of classification. Furthermore, we are also interested in identifying the presence (or absence) of non-stationarity through comparison of the outputs of both methods. We do this in two ways. First, we visually explore spatial autocorrelation in both LDA and GWDA misclassifications. Second, we focus on relationships between the classification result and the independent variables and how they vary over space. We describe our visual analytic system for exploration of LDA and GWDA outputs and demonstrate our approach on a case study using a data set linking election results with a selection of socio-economic variables.

17.
Progress in parallel computing for distributed hydrological models
Distributed hydrological simulation of large basins at high resolution with multiple coupled processes is computationally demanding; traditional serial computing cannot meet this demand, so support from parallel computing is needed. This paper first analyzes the parallelizability of distributed hydrological models from three perspectives: space, time, and subprocess. It points out that spatial decomposition is the preferred approach for parallelizing distributed hydrological models, and classifies hydrological subprocess computation methods and distributed hydrological models from the perspective of spatial decomposition. It then reviews the current state of research on parallel computing for distributed hydrological models: for spatial-decomposition parallelism, most existing studies use sub-basins as the basic scheduling unit; for time-axis parallelism, some researchers have conducted preliminary studies on parallel methods based on dual discretization of the space-time domain. Finally, key open problems and future directions are discussed in three areas: parallel algorithm design, parallel computing frameworks for integrated basin-system simulation, and high-performance data input/output methods supporting parallel computing.

18.
With the technological improvement of satellite sensors, we can acquire ever more information about the Earth, opening a new era of Earth-observation applications in environmental-change monitoring and cartography. However, the increase in spatial resolution raises problems for traditional image processing and classification methods. To address these problems, we studied the application of very high resolution IKONOS imagery (1 m) to an urban vegetation cover investigation in Xiamen City and discussed how very high resolution imagery differs from traditional low spatial resolution imagery in classification, information extraction, etc. The study is a useful test for the future large-scale application of very high resolution data.

19.
The presence of mixed pixels not only reduces the accuracy of land-cover identification and classification based on hyperspectral imagery, but has also become a major obstacle to the quantitative development of remote sensing science. Taking the Zhalong wetland as the study area and hyperspectral imagery acquired by the HJ-1 satellite as the data source, this paper applies the traditional fully constrained least squares spectral unmixing algorithm (FCLS) and a sparse constrained least squares spectral unmixing algorithm (SUFCLS) to achieve fine-grained classification of the wetland in the study area, and compares the performance and accuracy of the two classification results. The results show that SUFCLS can adaptively select from the spectral library the set of endmembers with the highest proportions in the scene, and apply this endmember combination in traditional fully constrained least squares unmixing to extract the abundances of different wetland types; the algorithm fully accounts for the spatial heterogeneity of endmembers and remedies the shortcomings of FCLS in endmember selection. Accuracy validation shows that, compared with FCLS, the SUFCLS classification results have a smaller root mean square error and higher abundance correlation coefficients, so the method is significant for improving wetland unmixing accuracy and achieving fine-grained wetland classification.



Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号