Similar Articles
1.
Abstract

To achieve high levels of performance in parallel geoprocessing, the underlying spatial structure and relations of spatial models must be accounted for and exploited during decomposition into parallel processes. Spatial models are classified from two perspectives, the domain of modelling and the scope of operations, and a framework of strategies is developed to guide the decomposition of models with different characteristics into parallel processes. Two models are decomposed using these strategies: hill-shading on digital elevation models and the construction of Delaunay triangulations. Performance statistics are presented for implementations of these algorithms on a MIMD computer.
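Hill-shading is a good example of an operation with strictly local scope, which makes row-block decomposition straightforward. A minimal serial sketch (not the paper's implementation; the DEM, cell size, and sun angles are assumed inputs):

```python
import numpy as np

def hillshade(dem, cellsize=30.0, azimuth=315.0, altitude=45.0):
    """Minimal hill-shading; each output cell depends only on its local
    neighborhood, so row blocks of the DEM can be shaded independently
    by separate processes (the local-scope decomposition case)."""
    az, alt = np.radians(azimuth), np.radians(altitude)
    gy, gx = np.gradient(dem, cellsize)       # finite-difference slopes
    slope = np.arctan(np.hypot(gx, gy))
    aspect = np.arctan2(-gx, gy)
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)

dem = np.random.default_rng(0).random((512, 512)) * 100.0
img = hillshade(dem)
```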

2.
With the wide adoption of big spatial data and the emergence of CyberGIS, the nontrivial computational intensity introduced by massive amounts of data poses great challenges to the performance of vector map visualization. Parallel computing technologies provide promising solutions to such problems. Evenly decomposing the visualization task into multiple subtasks is one of the key issues in parallel visualization of vector data. This study focuses on the decomposition of polyline and polygon data for parallel visualization. Two key factors impacting the computational intensity were identified: the number of features and the number of vertices of each feature. Computational intensity transform functions (CITFs) were constructed based on the linear relationships between these factors and the computing time. A computational intensity grid (CIG) can then be constructed using the CITFs to represent the spatial distribution of computational intensity. A noninterlaced, continuous space-filling curve is used to group the lattices of the CIG into multiple sub-domains such that each sub-domain entails the same amount of computational intensity as the others. The experiments demonstrated that the proposed approach effectively estimates and spatially represents the computational intensity of visualizing polylines and polygons. Compared with regular domain decomposition methods, the new approach generated a much more balanced decomposition of computational intensity for parallel visualization and achieved near-linear speedups, especially when the data are highly unevenly distributed in space.
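A minimal sketch of the idea, with a serpentine traversal standing in for the paper's space-filling curve; the CITF coefficients, grid size, and per-cell feature/vertex counts are all assumptions:

```python
import numpy as np

# Hypothetical CITF: computing time grows linearly with the feature and
# vertex counts of a grid cell (coefficients a, b are assumptions).
a, b = 0.5, 0.01

def computational_intensity(n_features, n_vertices):
    return a * n_features + b * n_vertices

# Computational intensity grid (CIG) over an 8x8 lattice.
rng = np.random.default_rng(0)
cig = computational_intensity(rng.integers(0, 50, (8, 8)),
                              rng.integers(0, 5000, (8, 8)))

# Serpentine traversal: a simple continuous, noninterlaced curve
# standing in for the space-filling curve used in the paper.
order = [(r, c) for r in range(8)
         for c in (range(8) if r % 2 == 0 else range(7, -1, -1))]

# Cut the curve into 4 sub-domains of roughly equal total intensity.
target = cig.sum() / 4
parts, current, acc = [], [], 0.0
for cell in order:
    current.append(cell); acc += cig[cell]
    if acc >= target and len(parts) < 3:
        parts.append(current); current, acc = [], 0.0
parts.append(current)
print([round(sum(cig[c] for c in p), 1) for p in parts])
```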

3.
Viewshed analysis, often supported by geographic information systems, is widely used in many application domains. However, as terrain data become increasingly large and available at high resolutions, data-intensive viewshed analysis poses significant computational challenges. General-purpose computation on graphics processing units (GPUs) provides a promising means to address such challenges. This article describes a parallel computing approach to data-intensive viewshed analysis of large terrain data using GPUs. Our approach exploits the high-bandwidth memory of GPUs and the parallelism of massive spatial data to enable memory-intensive and computation-intensive tasks, while central processing units are used to achieve efficient input/output (I/O) management. Furthermore, a two-level spatial domain decomposition strategy has been developed to mitigate a performance bottleneck caused by data transfer in the memory hierarchy of the GPU-based architecture. Computational experiments were designed to evaluate the computational performance of the approach. The experiments demonstrate significant performance improvement over a well-known sequential computing method, and an enhanced ability to analyze sizable datasets that the sequential method cannot handle.
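The parallelism comes from the independence of target cells: whether each cell is visible from the observer can be decided separately, so a GPU can assign one thread per cell. A CPU reference sketch of the line-of-sight test (an illustration, not the authors' kernel):

```python
import numpy as np

def visible(dem, obs, target, obs_h=1.7):
    """Line-of-sight test: sample along the ray from observer to target
    and check that no intervening terrain rises above the sight line."""
    (r0, c0), (r1, c1) = obs, target
    z0 = dem[r0, c0] + obs_h
    z1 = dem[r1, c1]
    n = int(max(abs(r1 - r0), abs(c1 - c0)))
    for i in range(1, n):
        t = i / n
        r, c = r0 + t * (r1 - r0), c0 + t * (c1 - c0)
        if dem[int(round(r)), int(round(c))] > z0 + t * (z1 - z0):
            return False
    return True

def viewshed(dem, obs):
    # Each target cell is independent -- on a GPU, one thread per cell.
    out = np.zeros(dem.shape, dtype=bool)
    for r in range(dem.shape[0]):
        for c in range(dem.shape[1]):
            out[r, c] = visible(dem, obs, (r, c))
    return out
```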

4.
Abstract

Large spatial interpolation problems present significant computational challenges even for the fastest workstations. In this paper we demonstrate how parallel processing can be used to reduce computation times to levels that are suitable for interactive interpolation analyses of large spatial databases. Though the approach developed in this paper can be used with a wide variety of interpolation algorithms, we specifically contrast the results obtained from a global ‘brute force’ inverse-distance-weighted interpolation algorithm with those obtained using a much more efficient local approach. The parallel versions of both implementations are superior to their sequential counterparts. However, the local version of the parallel algorithm provides the best overall performance.
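The global/local contrast can be sketched in a few lines; this illustrates the two algorithm families rather than the paper's code, and the neighborhood size k is an assumption. In a parallel version, the prediction grid would simply be split across processes, since each prediction is independent.

```python
import numpy as np
from scipy.spatial import cKDTree

def idw_global(xy, z, grid_xy, power=2.0):
    """'Brute force' IDW: every sample weighs on every prediction."""
    d = np.linalg.norm(grid_xy[:, None, :] - xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-12) ** power
    return (w * z).sum(axis=1) / w.sum(axis=1)

def idw_local(xy, z, grid_xy, k=12, power=2.0):
    """Local IDW: only the k nearest samples contribute, so each
    prediction is cheap and grid chunks parallelize trivially."""
    d, idx = cKDTree(xy).query(grid_xy, k=k)
    w = 1.0 / np.maximum(d, 1e-12) ** power
    return (w * z[idx]).sum(axis=1) / w.sum(axis=1)
```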

5.
Seabed sediment textural parameters such as mud, sand and gravel content can be useful surrogates for predicting patterns of benthic biodiversity. Multibeam sonar mapping can provide near-complete spatial coverage of high-resolution bathymetry and backscatter data that are useful in predicting sediment parameters. Multibeam acoustic data collected across a ~1000 km² area of the Carnarvon Shelf, Western Australia, were used in a predictive modelling approach to map eight seabed sediment parameters. Four machine learning models were used for the predictive modelling: boosted decision tree, random forest decision tree, support vector machine and generalised regression neural network. The results indicate overall satisfactory statistical performance, especially for %Mud, %Sand, Sorting, Skewness and Mean Grain Size. The study also demonstrates that combining the machine learning models provides the ability to generate prediction uncertainty maps. However, the single models showed overall better prediction performance than the combined models. Another important finding was that choosing an appropriate set of explanatory variables, through a manual feature selection process, was a critical step in optimising model performance. In addition, the machine learning models were able to identify important explanatory variables, which are useful for identifying underlying environmental processes and checking predictions against existing knowledge of the study area. The sediment prediction maps obtained in this study provide reliable coverage of key physical variables that will be incorporated into the analysis of covariance of physical and biological data for this area.
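A hedged scikit-learn sketch of the four-model setup: the data are synthetic placeholders, the MLP stands in for the generalised regression neural network (which scikit-learn does not provide), and the spread of the four predictions is used as a simple uncertainty surrogate.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor  # stand-in for a GRNN

# X: bathymetry/backscatter derivatives; y: one sediment parameter
# (e.g. %Mud). Both are placeholders here, fit in-sample for brevity.
rng = np.random.default_rng(1)
X, y = rng.normal(size=(500, 6)), rng.uniform(0, 100, 500)

models = {
    "BDT": GradientBoostingRegressor(),
    "RF": RandomForestRegressor(n_estimators=300),
    "SVM": SVR(),
    "NN": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000),
}
preds = np.column_stack([m.fit(X, y).predict(X) for m in models.values()])
combined = preds.mean(axis=1)      # combined-model prediction
uncertainty = preds.std(axis=1)    # model disagreement as an uncertainty map
```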

6.
Research progress on parallel computing for distributed hydrological models
Distributed hydrological simulation of large basins, at high resolution and with multiple coupled processes, is computationally demanding; traditional serial computing cannot meet its demand for computing power, so support from parallel computing is required. This paper first analyzes the parallelizability of distributed hydrological models from three perspectives — space, time and subprocess — concluding that spatial decomposition is the preferred approach for parallelizing distributed hydrological models, and classifies hydrological subprocess algorithms and distributed hydrological models from the perspective of spatial decomposition. It then summarizes the state of research on parallel computing for distributed hydrological models: for spatial decomposition, most existing studies use the subbasin as the basic scheduling unit of parallel computation, while from the temporal perspective, some researchers have conducted preliminary studies on parallel methods that discretize both the spatial and temporal domains. Finally, key open problems and future directions are discussed in three areas: parallel algorithm design, parallel computing frameworks for integrated basin-system simulation, and high-performance data I/O methods that support parallel computing.
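Since most studies reviewed use the subbasin as the basic scheduling unit, here is a minimal sketch of that idea: subbasins are grouped into topological levels so that everything a subbasin depends on has finished one level earlier. The drainage topology and the simulate() placeholder are hypothetical.

```python
from multiprocessing import Pool

# Hypothetical drainage topology: subbasin -> its downstream subbasin.
downstream = {1: 3, 2: 3, 3: 5, 4: 5, 5: None}

def topo_levels(downstream):
    """Group subbasins into levels; all upstream inputs of a level-k
    subbasin finish by level k-1, so each level can run in parallel."""
    level = {}
    def depth(b):
        ups = [u for u, d in downstream.items() if d == b]
        if b not in level:
            level[b] = 1 + max((depth(u) for u in ups), default=0)
        return level[b]
    for b in downstream:
        depth(b)
    groups = {}
    for b, l in level.items():
        groups.setdefault(l, []).append(b)
    return [groups[l] for l in sorted(groups)]

def simulate(subbasin):
    return subbasin  # placeholder for one subbasin's hydrological step

if __name__ == "__main__":
    with Pool(4) as pool:
        for batch in topo_levels(downstream):
            pool.map(simulate, batch)  # subbasins in a level are independent
```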

7.
8.
This study presents a massively parallel spatial computing approach that uses general-purpose graphics processing units (GPUs) to accelerate Ripley’s K function for univariate spatial point pattern analysis. Ripley’s K function is a representative spatial point pattern analysis approach that allows the spatial dispersion characteristics of point patterns to be evaluated quantitatively. However, considerable computation is often required when analyzing large spatial data using Ripley’s K function. In this study, we developed a massively parallel approach to Ripley’s K function for accelerating spatial point pattern analysis. GPUs serve as a massively parallel platform built on a many-core architecture for speeding up Ripley’s K function. Variable-grained domain decomposition and thread-level synchronization based on shared memory are the parallel strategies designed to exploit concurrency in the spatial algorithm of Ripley’s K function for efficient parallelization. Experimental results demonstrate that substantial acceleration is obtained for Ripley’s K function parallelized within GPU environments.
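For reference, a naive serial estimator (without the edge correction a full implementation would need) shows where the O(n²) pairwise work that GPU threads divide up comes from:

```python
import numpy as np

def ripley_k(points, radii, area):
    """Naive Ripley's K (no edge correction): for each radius r, the
    average number of neighbors within r, scaled by point density."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)          # exclude self-pairs
    lam = n / area                       # point density
    return np.array([(d <= r).sum() / (n * lam) for r in radii])

pts = np.random.default_rng(2).uniform(0, 1, size=(1000, 2))
print(ripley_k(pts, radii=[0.05, 0.1], area=1.0))
```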

9.
Abstract

The current research focuses upon the development of a methodology for undertaking real-time spatial analysis in a supercomputing environment, specifically using massively parallel SIMD computers. Several approaches that can be used to explore the parallelization characteristics of spatial problems are introduced. Within the focus of a methodology directed toward spatial data parallelism, strategies based on both location-based data decomposition and object-based data decomposition are proposed, and a programming logic for spatial operations at local, neighborhood and global levels is also recommended. An empirical study of real-time traffic flow analysis shows the utility of the suggested approach for a complex spatial analysis situation. The empirical example demonstrates that the proposed methodology, especially when combined with appropriate programming strategies, is preferable in situations where critical, real-time spatial analysis computations are required. The implementation of this example in a parallel environment also raises some interesting questions about the theoretical basis underlying the analysis of large networks.
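The local/neighborhood/global distinction determines how much communication a decomposition needs. A numpy illustration of the three scopes (the paper's SIMD hardware predates this style, so this is purely explanatory):

```python
import numpy as np

grid = np.random.default_rng(3).random((256, 256))

# Local scope: each cell depends only on itself -- embarrassingly
# parallel under a location-based (per-cell) decomposition.
local = grid * 2.0

# Neighborhood scope: a 3x3 focal mean; parallel blocks need a
# one-cell halo exchange with their neighbors.
padded = np.pad(grid, 1, mode="edge")
neigh = sum(padded[i:i+256, j:j+256]
            for i in range(3) for j in range(3)) / 9.0

# Global scope: a reduction across the whole layer; partial results
# per block are combined in a final step.
global_mean = grid.mean()
```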

10.
ABSTRACT

Spatial interpolation is a traditional geostatistical operation that aims at predicting the attribute values of unobserved locations given a sample of data defined on point supports. However, the continuity and heterogeneity underlying spatial data are too complex to be approximated by classic statistical models. Deep learning models, especially conditional generative adversarial networks (CGANs), provide a perspective for formalizing spatial interpolation as a conditional generative task. In this article, we design a novel deep learning architecture named conditional encoder-decoder generative adversarial neural networks (CEDGANs) for spatial interpolation, combining the encoder-decoder structure with adversarial learning to capture deep representations of sampled spatial data and their interactions with local structural patterns. A case study on elevations in China demonstrates the ability of our model to achieve outstanding interpolation results compared to benchmark methods. Further experiments uncover the learned spatial knowledge in the model’s hidden layers and test the potential to generalize our adversarial interpolation idea across domains. This work is an endeavor to investigate deep spatial knowledge using artificial intelligence. The proposed model can benefit practical scenarios and inform future research in various geographical applications related to spatial prediction.
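A structural sketch in PyTorch of what such a conditional encoder-decoder generator and its discriminator might look like. Channel counts, depths, and the "sampled values + mask" conditioning format are assumptions for illustration, not the authors' architecture, and the adversarial training loop is omitted.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Encoder-decoder G: maps a sparsely sampled field (known values
    plus a sampling mask) to a dense interpolated field."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU())
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1))

    def forward(self, cond):            # cond: (B, 2, H, W) samples + mask
        return self.decode(self.encode(cond))

class Discriminator(nn.Module):
    """D scores (condition, field) pairs, enforcing consistency between
    the generated surface and the observed samples."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1))

    def forward(self, cond, field):
        return self.net(torch.cat([cond, field], dim=1)).mean(dim=(1, 2, 3))
```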

11.
In the context of OpenStreetMap (OSM), spatial data quality, in particular completeness, is an essential aspect of its fitness for use in specific applications, such as planning tasks. To mitigate the effect of completeness errors in OSM, this study proposes a methodological framework for predicting, by means of OSM, urban areas in Europe that are currently unmapped or only partially mapped. For this purpose, a machine learning approach consisting of artificial neural networks and genetic algorithms is applied. Under the premise of existing OSM data, the model estimates missing urban areas with an overall squared correlation coefficient (R²) of 0.589. Interregional comparisons of European regions confirm spatial heterogeneity in the model performance, with R² ranging from 0.129 to 0.789. These results show that the delineation of urban areas by means of the presented methodology depends strongly on location.
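One common way to couple the two techniques is to let a genetic algorithm search the neural network's hyperparameters. The toy sketch below evolves only a single hidden-layer width over synthetic data; the genome encoding, population size, and mutation scheme are all assumptions, since the paper's exact coupling is not described here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X, y = rng.normal(size=(200, 8)), rng.random(200)  # placeholder features

def fitness(hidden):                 # mean cross-validated R^2 of one genome
    model = MLPRegressor(hidden_layer_sizes=(int(hidden),), max_iter=1000)
    return cross_val_score(model, X, y, scoring="r2", cv=3).mean()

# A tiny genetic loop: genomes are hidden-layer widths; selection keeps
# the best half, mutation perturbs the survivors.
pop = [int(v) for v in rng.integers(4, 64, size=4)]
for generation in range(2):
    survivors = sorted(pop, key=fitness, reverse=True)[:2]
    children = [max(2, p + int(rng.integers(-8, 9))) for p in survivors]
    pop = survivors + children
best = max(pop, key=fitness)
```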

12.
Wildlife ecologists frequently make use of limited information on locations of a species of interest, in combination with readily available GIS data, to build models to predict space use. In addition to the wide range of statistical data models that are more commonly used, machine learning approaches provide another means to develop predictive spatial models. However, a comparison of output from these two families of models for the same data set is not often carried out. It is important that wildlife managers understand the pitfalls and limitations when a single set of models is used with limited GIS data to try to predict and understand species distribution. To illustrate this, we fitted two sets of models (generalized linear mixed models (GLMMs) and boosted regression trees (BRTs)) to predict geographic occupancy of the eastern coyote (Canis latrans) on the island of Newfoundland, Canada. This exercise is illustrative of common spatial questions in wildlife research and management. Our results show that models vary depending on the approach (GLMM vs. BRT) and that, overall, BRT had higher predictive ability. Although machine learning has been criticized because it is not explicitly hypothesis-driven, it has been used in other areas of spatial modelling with success. Here, we demonstrate that it may be a useful approach for predicting wildlife space use and for generating hypotheses when data are limited. The results of this comparison can help to improve other models for species distributions and also guide future sampling and modelling initiatives.
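The shape of such a comparison in scikit-learn, on synthetic presence/absence data: LogisticRegression stands in for the GLMM (it has no random effects) and GradientBoostingClassifier for the BRT; the covariates and AUC criterion are placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder GIS covariates and presence/absence response with a
# deliberately nonlinear signal (which favors tree ensembles).
rng = np.random.default_rng(5)
X = rng.normal(size=(600, 5))
y = (X[:, 0] + np.sin(3 * X[:, 1]) + rng.normal(0, 0.5, 600)) > 0

for name, model in [("GLM", LogisticRegression(max_iter=1000)),
                    ("BRT", GradientBoostingClassifier())]:
    auc = cross_val_score(model, X, y, scoring="roc_auc", cv=5).mean()
    print(f"{name}: cross-validated AUC = {auc:.3f}")
```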

13.
ABSTRACT

Short-term traffic forecasting on large street networks is significant in transportation and urban management, such as real-time route guidance and congestion alleviation. Nevertheless, it is very challenging to obtain high prediction accuracy with reasonable computational cost due to the complex spatial dependency on the traffic network and the time-varying traffic patterns. To address these issues, this paper develops a residual graph convolution long short-term memory (RGC-LSTM) model for spatial-temporal data forecasting considering the network topology. This model integrates a new graph convolution operator for spatial modelling on networks and a residual LSTM structure for temporal modelling considering multiple periodicities. The proposed model has few parameters, low computational complexity, and a fast convergence rate. The framework is evaluated on both the 10-min traffic speed data from Shanghai, China and the 5-min Caltrans Performance Measurement System (PeMS) traffic flow data. Experiments show the advantages of the proposed approach over various state-of-the-art baselines, as well as consistent performance across different datasets.
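The spatial half of such a model is typically a graph-convolution operator over the road network's adjacency. A numpy sketch of the standard symmetrically normalized operator (the paper defines its own variant, so treat this as a generic illustration):

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalized adjacency with self-loops:
    A_hat = D^{-1/2} (A + I) D^{-1/2}."""
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_tilde.sum(axis=1)))
    return d_inv_sqrt @ A_tilde @ d_inv_sqrt

def graph_conv(A_hat, X, W):
    """One graph-convolution layer: mix each node's features with its
    topological neighbors', then apply a learned projection + ReLU."""
    return np.maximum(A_hat @ X @ W, 0.0)

# Toy road network: 4 segments, 3 input features (e.g. recent speeds).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
X = np.random.default_rng(6).normal(size=(4, 3))
W = np.random.default_rng(7).normal(size=(3, 8))
H = graph_conv(normalized_adjacency(A), X, W)   # (4, 8) node embeddings
```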

14.
ABSTRACT

Crime often clusters in space and time. Near-repeat patterns improve understanding of crime communicability and their space–time interactions. Near-repeat analysis requires extensive computing resources for the assessment of statistical significance of space–time interactions. A computationally intensive Monte Carlo simulation-based approach is used to evaluate the statistical significance of the space-time patterns underlying near-repeat events. Currently available software for identifying near-repeat patterns is not scalable to large crime datasets. In this paper, we show how parallel spatial programming can help to leverage spatio-temporal simulation-based analysis in large datasets. A parallel near-repeat calculator was developed, and a set of experiments was conducted to compare the newly developed software with an existing implementation, assess the performance gain due to parallel computation, test the scalability of the software to handle large crime datasets, and assess the utility of the new software for real-world crime data analysis. Our experimental results suggest that efficiently designed parallel algorithms leveraging high-performance computing, along with performance optimization techniques, can be used to develop software that scales to large datasets and provides solutions for computationally intensive, simulation-based statistical approaches in crime analysis.
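The statistical core is a permutation test: shuffle event times, recount pairs that are close in both space and time, and compare with the observed count. A simplified Knox-style sketch with multiprocessing over permutations (thresholds and data are synthetic; the actual near-repeat statistic is richer):

```python
import numpy as np
from functools import partial
from multiprocessing import Pool

def close_pairs(times, xy, s_max, t_max):
    """Count event pairs that are near in both space and time."""
    ds = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    dt = np.abs(times[:, None] - times[None, :])
    near = (ds <= s_max) & (dt <= t_max)
    np.fill_diagonal(near, False)
    return near.sum() // 2

def one_permutation(seed, times, xy, s_max, t_max):
    rng = np.random.default_rng(seed)
    return close_pairs(rng.permutation(times), xy, s_max, t_max)

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    xy, times = rng.uniform(0, 10, (300, 2)), rng.uniform(0, 365, 300)
    observed = close_pairs(times, xy, s_max=0.5, t_max=14)
    worker = partial(one_permutation, times=times, xy=xy, s_max=0.5, t_max=14)
    with Pool(4) as pool:                 # permutations run in parallel
        null = pool.map(worker, range(999))
    p = (1 + sum(n >= observed for n in null)) / (1 + len(null))
    print(f"observed={observed}, p={p:.4f}")
```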

15.
As geospatial researchers' access to high-performance computing clusters continues to increase alongside the availability of high-resolution spatial data, it is imperative that techniques are devised to exploit these clusters' ability to quickly process and analyze large amounts of information. This research concentrates on the parallel computation of A Multidirectional Optimal Ecotope-Based Algorithm (AMOEBA). AMOEBA is used to derive spatial weight matrices for spatial autoregressive models and as a method for identifying irregularly shaped spatial clusters. While improvements have been made to the original ‘exhaustive’ algorithm, the resulting ‘constructive’ algorithm can still take a significant amount of time to complete with large datasets. This article outlines a parallel implementation of AMOEBA (the P-AMOEBA) written in Java utilizing the message passing library MPJ Express. In order to account for differing types of spatial grid data, two decomposition methods are developed and tested. The benefits of using the new parallel algorithm are demonstrated on an example dataset. Results show that different decompositions of spatial data affect the computational load balance across multiple processors and that the parallel version of AMOEBA achieves substantially faster runtimes than those reported in related publications.

16.
The unprecedented availability of geospatial data and technologies is driving innovation and discovery but not without the risk of losing focus on the geographic foundations of space and place in this vast “cyber sea” of data and technology. There is a pressing need to educate a new generation of scientists and citizens who understand how space and place matter in the real world and who understand and can keep pace with technological advancements in the computational world. We define cyberliteracy for GIScience (cyberGIScience literacy) and outline eight core areas that serve as a framework for establishing the essential abilities and foundational knowledge necessary to navigate and thrive in this new technologically rich world. The core areas are arranged to provide multiple dimensions of learning ranging from a technological focus to a problem solving focus or a focus on GIScience or computational science. We establish a competency matrix as a means of assessing and evaluating levels of cyberGIScience literacy across the eight core areas. We outline plans to catalyze the collaborative development and sharing of instructional materials to embed cyberGIScience literacy in the classroom and begin to realize a cyberliterate citizenry and academe. Key Words: big data, computational thinking, geographic education, GIS, spatial thinking.

17.
Research on a distributed parallel spatial indexing mechanism based on R-trees
To improve the efficiency of managing and processing massive spatial data in distributed parallel computing environments, a multi-level parallel R-tree spatial index structure is designed, building on research into parallel spatial indexing mechanisms. The index structure rests on an efficient parallel spatial data partitioning strategy and follows classic parallel computing methodology, so that the design achieves good load balancing while remaining well suited to the parallel processing of massive spatial data. Using the system response time of parallel spatial range queries as the performance metric, experiments show that the parallel spatial index structure is soundly designed and highly efficient.
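A toy illustration of the partition-then-index idea using the Python rtree bindings to libspatialindex (the paper's implementation is a custom multi-level parallel R-tree, not this library): each spatial partition gets its own index, and a range query fans out across them.

```python
from rtree import index                 # bindings to libspatialindex
from multiprocessing.dummy import Pool  # thread pool, for brevity

# Partition a 100x100 map sheet into 4 tiles, each with its own R-tree;
# a simplified stand-in for a multi-level parallel index.
partitions = [index.Index() for _ in range(4)]

def tile_of(x, y):
    return (0 if x < 50 else 1) + (0 if y < 50 else 2)

features = [(i, (i % 100, (i * 7) % 100)) for i in range(10_000)]
for fid, (x, y) in features:
    partitions[tile_of(x, y)].insert(fid, (x, y, x, y))

def range_query(bbox):
    # Query all per-tile indexes concurrently and merge the results.
    with Pool(4) as pool:
        hits = pool.map(lambda idx: list(idx.intersection(bbox)), partitions)
    return [fid for part in hits for fid in part]

print(len(range_query((10, 10, 60, 60))))
```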

18.
A comparison of wavelet packet decomposition coupled with multiple machine learning models for wind speed forecasting
Accurate wind speed forecasting is an effective way to improve wind power utilization and power system stability. Many wind speed forecasting models have been proposed, but comparative studies of different models over different land surfaces are scarce. This study investigates the wind speed forecasting ability of wavelet packet decomposition coupled with 12 machine learning models over three land surface types (gobi, oasis and desert), in search of an optimal coupled forecasting model. Three groups of model experiments are compared: single machine learning models, wavelet packet decomposition-machine learning hybrid models, and wavelet packet decomposition-machine learning-convolutional neural network hybrid models. The results show that deep learning models with feature selection and memory capabilities (such as the convolutional long short-term memory network) and the extreme learning machine forecast wind speed well, and that wavelet packet decomposition significantly improves model accuracy. Models coupling wavelet packet decomposition with the convolutional LSTM, the convolutional gated recurrent unit, and the extreme learning machine perform well in wind speed forecasting. This indicates that coupling signal decomposition with deep learning effectively improves forecast accuracy and is worth wider adoption.
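A minimal sketch of the decompose-forecast-reconstruct pipeline using PyWavelets, with an MLP standing in for the ELM/ConvLSTM-class models compared in the study; the synthetic series, lag window, and wavelet choice are assumptions.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

# Synthetic wind-speed series standing in for the gobi/oasis/desert data.
rng = np.random.default_rng(9)
wind = 5 + np.sin(np.arange(2048) / 24) + rng.normal(0, 0.5, 2048)

# Wavelet packet decomposition into level-3 subbands.
wp = pywt.WaveletPacket(data=wind, wavelet="db4", maxlevel=3)
subbands = {node.path: node.data for node in wp.get_level(3, "natural")}

def forecast_subband(series, lag=24):
    """Fit one model per subband on lagged windows; predict one step."""
    Xs = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    ys = series[lag:]
    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000)
    model.fit(Xs, ys)
    return model.predict(series[-lag:].reshape(1, -1))[0]

# Reassemble: advance each subband by its one-step forecast, write the
# subbands back into a wavelet packet, and reconstruct the signal.
out = pywt.WaveletPacket(data=None, wavelet="db4", maxlevel=3)
for path, series in subbands.items():
    out[path] = np.append(series[1:], forecast_subband(series))
forecast_series = out.reconstruct(update=False)
```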

19.
High performance computing has undergone a radical transformation during the past decade. Though monolithic supercomputers continue to be built with significantly increased computing power, geographically distributed computing resources are now routinely linked using high-speed networks to address a broad range of computationally complex problems. These confederated resources are referred to collectively as a computational Grid. Many geographical problems exhibit characteristics that make them candidates for this new model of computing. As an illustration, we describe a spatial statistics problem and demonstrate how it can be addressed using Grid computing strategies. A key element of this application is the development of middleware that handles domain decomposition and coordinates computational functions. We also discuss the development of Grid portals that are designed to help researchers and decision makers access and use geographic information analysis tools.

20.
ABSTRACT

Terrain feature detection is a fundamental task in terrain analysis and landscape scene interpretation. Discovering where a specific feature (e.g. a sand dune or crater) is located and how it evolves over time is essential for understanding landform processes and their impacts on the environment, ecosystem, and human population. Traditional induction-based approaches are challenged by their inefficiency for generalizing diverse and complex terrain features as well as their performance for scalable processing of the massive geospatial data available. This paper presents a new deep learning (DL) approach to support automatic detection of terrain features from remotely sensed images. The novelty of this work lies in: (1) a terrain feature database containing 12,000 remotely sensed images (1,000 original images and 11,000 derived images from data augmentation) that supports data-driven model training and new discovery; (2) a DL-based object detection network empowered by ensemble learning and deep and deeper convolutional neural networks to achieve high-accuracy object detection; and (3) fine-tuning the model’s characteristics and behaviors to identify the best combination of hyperparameters and other network factors. The introduction of DL into geospatial applications is expected to contribute significantly to intelligent terrain analysis, landscape scene interpretation, and the maturation of spatial data science.
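Point (1)'s derived images come from data augmentation. A minimal numpy sketch of the standard flip/rotate family (the paper's exact recipe, which yields 11 derived images per original, is not spelled out here; this variant yields 8):

```python
import numpy as np

def augment(image):
    """Yield flipped and rotated variants of one remotely sensed chip,
    the kind of augmentation used to grow a small labeled set."""
    for k in range(4):                   # 0/90/180/270 degree rotations
        rotated = np.rot90(image, k)
        yield rotated
        yield np.fliplr(rotated)         # plus a horizontal flip of each

chip = np.random.default_rng(10).random((128, 128, 3))
variants = list(augment(chip))           # 8 derived views per original
```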

