Similar Documents
20 similar documents retrieved.
1.
With the wide adoption of big spatial data and the emergence of CyberGIS, the nontrivial computational intensity introduced by massive amounts of data poses great challenges to the performance of vector map visualization. Parallel computing technologies provide promising solutions to such problems. Evenly decomposing the visualization task into multiple subtasks is one of the key issues in parallel visualization of vector data. This study focuses on the decomposition of polyline and polygon data for parallel visualization. Two key factors impacting the computational intensity were identified: the number of features and the number of vertices of each feature. Computational intensity transform functions (CITFs) were constructed based on the linear relationships between these factors and the computing time. A computational intensity grid (CIG) can then be constructed using the CITFs to represent the spatial distribution of computational intensity. A noninterlaced continuous space-filling curve is used to group the lattices of the CIG into multiple sub-domains such that each sub-domain entails the same amount of computational intensity as the others. The experiments demonstrated that the approach proposed in this paper was able to effectively estimate and spatially represent the computational intensity of visualizing polylines and polygons. Compared with regular domain decomposition methods, the new approach generated a much more balanced decomposition of computational intensity for parallel visualization and achieved near-linear speedups, especially when the data are highly heterogeneously distributed in space.
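A minimal sketch of this kind of load-balanced decomposition, assuming a made-up linear CITF and a simple serpentine (boustrophedon) cell ordering as a stand-in for the noninterlaced space-filling curve; it illustrates the idea rather than the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_cost(n_vertices, a=0.002, b=0.01):
    # Hypothetical linear CITF: cost grows with a feature's vertex count.
    return a * n_vertices + b

def build_cig(features, grid_shape, extent):
    # Accumulate each feature's cost into the grid cell containing its centroid.
    xmin, ymin, xmax, ymax = extent
    rows, cols = grid_shape
    cig = np.zeros(grid_shape)
    for (cx, cy), n_vertices in features:            # features: ((x, y), vertex count)
        r = min(int((cy - ymin) / (ymax - ymin) * rows), rows - 1)
        c = min(int((cx - xmin) / (xmax - xmin) * cols), cols - 1)
        cig[r, c] += feature_cost(n_vertices)
    return cig

def decompose(cig, n_parts):
    # Walk the lattice along a serpentine curve (stand-in for a space-filling curve)
    # and cut it into n_parts contiguous groups of near-equal cumulative intensity.
    order = [(r, c) for r in range(cig.shape[0])
             for c in (range(cig.shape[1]) if r % 2 == 0 else reversed(range(cig.shape[1])))]
    costs = np.array([cig[rc] for rc in order])
    targets = np.linspace(0, costs.sum(), n_parts + 1)[1:-1]
    cuts = np.searchsorted(np.cumsum(costs), targets)
    return np.split(np.array(order), cuts)

features = [((rng.random(), rng.random()), int(rng.integers(10, 5000))) for _ in range(1000)]
cig = build_cig(features, (32, 32), (0.0, 0.0, 1.0, 1.0))
parts = decompose(cig, 4)
print([round(sum(cig[r, c] for r, c in part), 2) for part in parts])   # near-equal loads
```

The cut positions are placed on the cumulative intensity along the curve, so each contiguous run of cells carries roughly the same estimated cost.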

2.
ABSTRACT

High-performance computing is required for fast geoprocessing of geospatial big data. Using spatial domains to represent computational intensity (CIT) and applying domain decomposition for parallelism are prominent strategies when designing parallel geoprocessing applications. Traditional domain decomposition is limited in evaluating computational intensity, which often results in load imbalance and poor parallel performance. From the data science perspective, machine learning from Artificial Intelligence (AI) shows promise for better CIT evaluation. This paper proposes a machine learning approach for predicting computational intensity, followed by an optimized domain decomposition that divides the spatial domain into balanced subdivisions based on the predicted CIT to achieve better parallel performance. The approach provides a reference framework for how various machine learning methods, including feature selection and model training, can be used to predict computational intensity and optimize parallel geoprocessing in different cases. Comparative experiments between the approach and traditional methods were performed on two cases: DEM generation from point clouds and spatial intersection on vector data. The results not only demonstrate the advantage of the approach, but also provide hints on how traditional GIS computation can be improved by AI machine learning.
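As a rough illustration of the idea (not the paper's pipeline), the sketch below trains a regression model on synthetic per-cell features and measured run times, predicts the CIT of a new domain, and cuts the domain into subdivisions of near-equal predicted load; the feature set and training data are invented:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic training data: per-cell features (e.g. point count, density, extent)
# and the measured processing time of past runs.
X_train = rng.uniform(0, 1, size=(500, 3))
y_train = 2.0 * X_train[:, 0] + 0.5 * X_train[:, 0] * X_train[:, 1] + rng.normal(0, 0.05, 500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# Predict the intensity of the cells of a new spatial domain (64 strips here).
X_cells = rng.uniform(0, 1, size=(64, 3))
predicted_cit = model.predict(X_cells)

# Contiguous cut: each of 4 workers gets strips until it holds about 1/4 of the
# total predicted intensity.
targets = np.linspace(0, predicted_cit.sum(), 4 + 1)[1:-1]
cuts = np.searchsorted(np.cumsum(predicted_cit), targets)
subdomains = np.split(np.arange(64), cuts)
print([round(float(predicted_cit[s].sum()), 2) for s in subdomains])
```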

3.
Cellular automata (CA) models can simulate complex urban systems through simple rules and have become important tools for studying the spatio-temporal evolution of urban land use. However, the multiple, large-volume data layers, massive geospatial processing and complicated algorithms for automatic calibration in urban CA models require a high level of computational capability. Unfortunately, the limited performance of sequential computation on a single computing unit (i.e. a central processing unit (CPU) or a graphics processing unit (GPU)) and the high cost of parallel design and programming make it difficult to establish a high-performance urban CA model. As a result of its computational power and scalability, the vectorization paradigm is becoming increasingly important and has received wide attention with regard to this kind of computational problem. This paper presents a high-performance CA model using vectorization and parallel computing technology for the computation-intensive and data-intensive geospatial processing in urban simulation. To convert the original algorithm into a vectorized algorithm, we define the neighborhood set of the cell space and improve the operation paradigm of neighborhood computation, transition probability calculation, and cell state transition. The experiments undertaken in this study demonstrate that the vectorized algorithm can greatly reduce the computation time, especially in the environment of a vector programming language, and that it is possible to parallelize the algorithm as the data volume increases. The execution time for a simulation at 5-m resolution with a 3 × 3 neighborhood decreased from 38,220.43 s to 803.36 s with the vectorized algorithm and was further shortened to 476.54 s by dividing the domain into four computing units. The experiments also indicated that the computational efficiency of the vectorized algorithm is closely related to the neighborhood size and configuration, as well as the shape of the research domain. We conclude that the combination of vectorization and parallel computing technology can provide scalable solutions that significantly improve the applicability of urban CA.
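A minimal sketch of the vectorization idea, with a placeholder transition rule rather than the calibrated urban CA rule: neighborhood effects and state transitions are computed for the whole cell space at once with array operations instead of per-cell loops:

```python
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(1)
urban = (rng.random((1000, 1000)) < 0.05).astype(np.uint8)   # 1 = urban cell
suitability = rng.random((1000, 1000))                        # driving-factor score

kernel = np.ones((3, 3))
kernel[1, 1] = 0                                               # 3 x 3 Moore neighborhood

def step(urban, suitability, threshold=0.7):
    # Neighborhood effect for every cell at once, instead of a per-cell loop.
    neigh = convolve(urban.astype(float), kernel, mode="constant") / kernel.sum()
    prob = suitability * neigh                                 # placeholder transition probability
    newly_urban = (prob > threshold) & (urban == 0)
    return np.where(newly_urban, 1, urban).astype(np.uint8)

urban = step(urban, suitability)
print(int(urban.sum()), "urban cells after one vectorized iteration")
```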

4.
High-performance simulation of flow dynamics remains a major challenge in the use of physically based, fully distributed hydrologic models. Parallel computing has been widely used to overcome efficiency limitations by partitioning a basin into sub-basins and distributing the calculations among multiple processors. However, existing partition-based parallelization strategies are still hampered by the dependencies between inter-connected sub-basins. This study proposes a particle-set strategy to parallelize the flow-path network (FPN) model for achieving higher performance in the simulation of flow dynamics. The FPN model replaces the hydrological calculations on sub-basins with the movements of water packages along the upstream and downstream flow paths. Unlike previous partition-based task decomposition approaches, the proposed particle-set strategy decomposes the computational workload by randomly allocating runoff particles to concurrent computing processors. Simulation experiments on the flow routing process were undertaken to validate the developed particle-set FPN model. The simulated hourly outlet discharges were compared with field-gauged records, and up to 128 computing processors were tested to explore the speedup capability in parallel computing. The experimental results showed that the proposed framework can achieve prediction accuracy and parallel efficiency similar to those of the Triangulated Irregular Network (TIN)-based Real-Time Integrated Basin Simulator (tRIBS).
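A minimal sketch of the particle-set idea, not the FPN model itself: runoff is represented as independent water particles whose routing can be split across worker processes at random, because no particle depends on another; the flow path and travel times below are invented:

```python
import numpy as np
from multiprocessing import Pool

rng = np.random.default_rng(2)
N_PARTICLES, N_NODES = 10_000, 50

# Each particle starts at a random node of one downstream flow path; per-node
# travel times stand in for the routing calculation.
starts = rng.integers(0, N_NODES, N_PARTICLES)
node_hours = rng.uniform(0.5, 2.0, N_NODES)

def arrival_time(start):
    # Hours for one particle to travel from its start node to the outlet;
    # independent of every other particle, hence trivially parallel.
    return float(node_hours[start:].sum())

if __name__ == "__main__":
    with Pool(4) as pool:                           # particles split across 4 workers
        times = pool.map(arrival_time, starts.tolist(), chunksize=1000)
    hourly_discharge = np.bincount(np.array(times).astype(int))
    print(hourly_discharge[:10])                    # particles reaching the outlet per hour
```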

5.
The geospatial sensor web is set to revolutionise real-time geospatial applications by making up-to-date spatially and temporally referenced data relating to real-world phenomena ubiquitously available. The uptake of sensor web technologies is largely being driven by the recent introduction of the OpenGIS Sensor Web Enablement framework, a standardisation initiative that defines a set of web service interfaces and encodings to task and query geospatial sensors in near real time. However, live geospatial sensors are capable of producing vast quantities of data over a short time period, which presents a large, fluctuating and ongoing processing requirement that is difficult to provision with adequate computational resources. Grid computing appears to offer a promising solution to this problem, but its usage thus far has primarily been restricted to processing static rather than real-time data sets. A new approach is presented in this work whereby geospatial data streams are processed on grid computing resources. This is achieved by submitting ongoing processing jobs to the grid that continually poll sensor data repositories using the relevant OpenGIS standards. To evaluate this approach, a road-traffic monitoring application was developed to process streams of GPS observations from a fleet of vehicles. Specifically, a Bayesian map-matching algorithm matches each GPS observation to a link on the road network. The results show that over 90% of observations were matched correctly and that the adopted approach is capable of achieving timely results for a linear-time geoprocessing operation performed every 60 seconds. However, testing in a production grid environment highlighted some scalability and efficiency problems. Open Geospatial Consortium (OGC) data services were found to present an I/O bottleneck, and the adopted job submission method was found to be inefficient. Consequently, a number of recommendations are made regarding the grid job-scheduling mechanism, shortcomings in the OGC Web Processing Service specification and I/O bottlenecks in OGC data services.
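A toy sketch of the per-observation matching step, reduced to a nearest-link search; the paper uses a Bayesian map-matching algorithm, so the distance-only scoring and the tiny road network here are purely illustrative:

```python
import numpy as np

links = np.array([            # each road link as a segment (x1, y1, x2, y2)
    [0.0, 0.0, 1.0, 0.0],
    [1.0, 0.0, 1.0, 1.0],
    [0.0, 0.0, 0.0, 1.0],
])

def point_to_segment_distance(p, seg):
    a, b = seg[:2], seg[2:]
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)   # projection clamped to the segment
    return float(np.linalg.norm(p - (a + t * ab)))

def match(gps_point):
    # Index of the closest link; a Bayesian matcher would also weigh heading,
    # speed and the previously matched link.
    d = [point_to_segment_distance(gps_point, s) for s in links]
    return int(np.argmin(d)), min(d)

for obs in np.array([[0.4, 0.05], [0.95, 0.6], [0.02, 0.8]]):
    link_id, dist = match(obs)
    print(f"GPS {obs} -> link {link_id} (offset {dist:.3f})")
```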

6.
As an important spatiotemporal simulation approach and an effective tool for developing and examining spatial optimization strategies (e.g., land allocation and planning), geospatial cellular automata (CA) models often require multiple data layers and consist of complicated algorithms in order to deal with the complex dynamic processes of interest and the intricate relationships and interactions between the processes and their driving factors. Also, massive amounts of data may be used in CA simulations as high-resolution geospatial and non-spatial data become widely available. Thus, geospatial CA models can be both computationally intensive and data intensive, demanding long computing times and vast memory space. Based on a hybrid parallelism that combines processes with discrete memory and threads with global memory, we developed a parallel geospatial CA model for urban growth simulation over a heterogeneous computer architecture composed of multiple central processing units (CPUs) and graphics processing units (GPUs). Experiments with datasets of California showed that the overall computing time for a 50-year simulation dropped from 13,647 seconds on a single CPU to 32 seconds using 64 GPU/CPU nodes. We conclude that the hybrid parallelism of geospatial CA over the emerging heterogeneous computer architectures provides scalable solutions that enable complex simulations and optimizations with massive amounts of data that were previously infeasible, and sometimes impossible, using individual computing approaches.

7.
Viewshed analysis, often supported by geographic information systems, is widely used in many application domains. However, as terrain data continue to become larger and available at higher resolutions, data-intensive viewshed analysis poses significant computational challenges. General-purpose computation on graphics processing units (GPUs) provides a promising means to address such challenges. This article describes a parallel computing approach to data-intensive viewshed analysis of large terrain data using GPUs. Our approach exploits the high-bandwidth memory of GPUs and the parallelism of massive spatial data to enable memory-intensive and computation-intensive tasks, while central processing units are used to achieve efficient input/output (I/O) management. Furthermore, a two-level spatial domain decomposition strategy has been developed to mitigate a performance bottleneck caused by data transfer in the memory hierarchy of the GPU-based architecture. Computational experiments were designed to evaluate the performance of the approach. The experiments demonstrate significant performance improvement over a well-known sequential computing method and an enhanced ability to analyze sizable datasets that the sequential computing method cannot handle.
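A minimal sketch of the per-cell work a viewshed computation performs: a line-of-sight test between the viewpoint and one target cell, which is independent for every target and therefore maps naturally onto one GPU thread per cell; the synthetic DEM and the simple sampling scheme are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(3)
dem = rng.uniform(0, 100, (256, 256))           # synthetic elevation grid
viewer = (128, 128)
viewer_z = dem[viewer] + 1.8                    # observer height above the ground

def visible(target, samples=200):
    # True if no terrain sample along the sight line rises above the line of sight.
    t = np.linspace(0.0, 1.0, samples)
    rows = np.round(viewer[0] + t * (target[0] - viewer[0])).astype(int)
    cols = np.round(viewer[1] + t * (target[1] - viewer[1])).astype(int)
    sight_z = viewer_z + t * (dem[target] - viewer_z)   # elevation of the sight line
    return bool(np.all(dem[rows, cols] <= sight_z + 1e-9))

# Every target cell is an independent test, which is why a one-thread-per-cell GPU mapping works.
viewshed = np.array([[visible((r, c)) for c in range(0, 256, 8)] for r in range(0, 256, 8)])
print(f"{viewshed.mean():.1%} of sampled cells are visible from the viewpoint")
```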

8.
Over recent years, massive geospatial information has been produced at a prodigious rate and is usually geographically distributed across the Internet. Grid computing, as a recent development in the landscape of distributed computing, is deemed a good solution for distributed geospatial data management and manipulation. Grid computing technology can thus be applied to integrate various distributed resources into a 'super-computer' that enables efficient distributed geospatial query processing. In order to realize this vision, an effective mechanism for building the distributed geospatial query workflow in the Grid environment needs to be carefully designed. The workflow-building technology aims to automatically transform a global geospatial query into an equivalent distributed query process in the Grid. To this end, detailed steps and algorithms for building the distributed geospatial query workflow in the Grid environment are discussed in this article. Moreover, we develop corresponding software tools that enable Grid-based geospatial queries to be run against multiple data resources. Experimental results demonstrate that the proposed methodology is feasible and correct.

9.
As geospatial researchers' access to high-performance computing clusters continues to increase alongside the availability of high-resolution spatial data, it is imperative that techniques be devised to exploit these clusters' ability to quickly process and analyze large amounts of information. This research concentrates on the parallel computation of A Multidirectional Optimal Ecotope-Based Algorithm (AMOEBA). AMOEBA is used to derive spatial weight matrices for spatial autoregressive models and as a method for identifying irregularly shaped spatial clusters. While improvements have been made to the original 'exhaustive' algorithm, the resulting 'constructive' algorithm can still take a significant amount of time to complete on large datasets. This article outlines a parallel implementation of AMOEBA (P-AMOEBA) written in Java using the message-passing library MPJ Express. In order to account for differing types of spatial grid data, two decomposition methods are developed and tested. The benefits of using the new parallel algorithm are demonstrated on an example dataset. Results show that different decompositions of spatial data affect the computational load balance across multiple processors and that the parallel version of AMOEBA achieves substantially faster runtimes than those reported in related publications.

10.
Geographic visualization tools with coordinated and multiple views (CMV) typically provide sets of visualization methods. Such a configuration gives users the possibility of investigating data in various visual contexts; however, it can be confusing due to the multiplicity of visual components and interactive functions. We addressed this challenge and conducted an empirical study on how a CMV tool, consisting of a map, a parallel coordinate plot (PCP), and a table, is used to acquire information. We combined a task-based approach with eye-tracking and usability metrics, since these methods provide comprehensive insights into users' behaviour. Our empirical study revealed that the freedom to choose visualization components is appreciated by users. The individuals worked with all the available visualization methods and often used more than one visualization method when executing tasks. Different views were used in different ways by different individuals, but in similarly effective ways. Even the PCP, which is claimed to be problematic, was found to be a handy way of exploring data when accompanied by interactive functions.

11.
Kernel density estimation (KDE) is a classic approach for spatial point pattern analysis. In many applications, KDE with spatially adaptive bandwidths (adaptive KDE) is preferred over KDE with an invariant bandwidth (fixed KDE). However, bandwidth determination for adaptive KDE is extremely computationally intensive, particularly for point pattern analysis tasks of large problem sizes. This computational challenge impedes the application of adaptive KDE to large point data sets, which are common in this big data era. This article presents a graphics processing unit (GPU)-accelerated adaptive KDE algorithm for efficient spatial point pattern analysis on spatial big data. First, optimizations were designed to reduce the algorithmic complexity of the bandwidth determination algorithm for adaptive KDE. The massively parallel computing resources on the GPU were then exploited to further speed up the optimized algorithm. Experimental results demonstrated that the proposed optimizations effectively improved performance by a factor of tens. Compared to the sequential algorithm and an Open Multiprocessing (OpenMP)-based algorithm leveraging multiple central processing unit cores, the GPU-enabled algorithm accelerated adaptive KDE point pattern analysis tasks by factors of hundreds and tens, respectively. Additionally, the GPU-accelerated adaptive KDE algorithm scales reasonably well as the size of the data set increases. Given the significant acceleration brought by the GPU-enabled adaptive KDE algorithm, point pattern analysis with the adaptive KDE approach on large point data sets can be performed efficiently. Point pattern analysis on spatial big data, computationally prohibitive with the sequential algorithm, can be conducted routinely with the GPU-accelerated algorithm. The GPU-accelerated adaptive KDE approach contributes to the geospatial computational toolbox that facilitates geographic knowledge discovery from spatial big data.
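A minimal sequential sketch of sample-point adaptive KDE (Abramson-style bandwidths), the kind of computation being accelerated; the Gaussian kernel, pilot bandwidth and synthetic points are assumptions, and the quadratic pilot step is what dominates bandwidth determination:

```python
import numpy as np

rng = np.random.default_rng(4)
pts = np.vstack([rng.normal(0, 1, (300, 2)), rng.normal(4, 0.3, (100, 2))])
h0, alpha = 0.5, 0.5                           # pilot bandwidth and sensitivity (assumed)

def gauss2d(sq_dist, h):
    return np.exp(-sq_dist / (2 * h ** 2)) / (2 * np.pi * h ** 2)

# Pilot (fixed-bandwidth) density at each data point: the O(n^2) part that
# dominates bandwidth determination and is the target for GPU acceleration.
sq = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
pilot = gauss2d(sq, h0).mean(axis=1)

# Per-point adaptive bandwidths: wider where the pilot density is low.
h_i = h0 * (pilot / np.exp(np.log(pilot).mean())) ** (-alpha)

def adaptive_density(x, y):
    d2 = ((pts - [x, y]) ** 2).sum(axis=1)
    return float(gauss2d(d2, h_i).mean())

print(f"density near the dense cluster: {adaptive_density(4, 4):.3f}")
print(f"density in a sparse area:       {adaptive_density(8, 8):.5f}")
```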

12.
This study introduces a new Triangulated Irregular Network (TIN) compression method and a progressive visualization technique using Delaunay triangulation. The compression strategy is based on the assumption that most triangulated 2.5-dimensional terrains are very similar to the Delaunay triangulation of their vertices. Therefore, the compression algorithm only needs to maintain the few edges that are not Delaunay edges. An efficient encoding method for this set of edges is presented, using vertex reordering and a general bracketing method. In experiments, the compression method was evaluated on several sets of TIN data with various resolutions, generated by five typical terrain simplification algorithms. With this approach, the connectivity structures of common terrain data are compressed to 0.17 bits per vertex on average, which is superior to the results of previous methods. The results are displayed with a progressive visualization method for web-based GIS.
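A minimal sketch of the core assumption behind the scheme: most TIN edges coincide with the Delaunay triangulation of the vertices, so only the differing edges need explicit encoding. The synthetic TIN below is Delaunay except for a few edges flipped by hand, and the actual bracketing encoder is not reproduced:

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(5)
verts = rng.random((200, 2))

def edge_set(triangles):
    edges = set()
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            edges.add((min(u, v), max(u, v)))
    return edges

delaunay_edges = edge_set(Delaunay(verts).simplices)

# Pretend the terrain TIN differs from the Delaunay triangulation by a handful of
# edges (e.g. edges kept along breaklines); in practice these come from the input TIN.
tin_edges = set(delaunay_edges)
for e in list(delaunay_edges)[:3]:
    tin_edges.discard(e)
    tin_edges.add(tuple(sorted((int(e[0]), (int(e[1]) + 1) % len(verts)))))

extra = tin_edges - delaunay_edges      # the only connectivity that needs explicit encoding
print(f"{len(tin_edges)} TIN edges, {len(extra)} need explicit encoding")
```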

13.
Geographical information systems are ideal candidates for the application of parallel programming techniques, mainly because they usually handle large data sets. To help deal with complex calculations over such data sets, we investigated the performance constraints of a classic master–worker parallel paradigm over a message-passing communication model. To this end, we present a new approach that employs an external database in order to improve the calculation–communication overlap, thus reducing the idle times of the worker processes. The presented approach is implemented as part of a parallel radio-coverage prediction tool for the Geographic Resources Analysis Support System (GRASS) environment. The prediction calculation employs digital elevation models and land-usage data in order to analyze the radio coverage of a geographical area. We provide an extended analysis of the experimental results, which are based on real data from a Long Term Evolution (LTE) network currently deployed in Slovenia. Based on the results of the experiments, which were performed on a computer cluster, the new approach exhibits better scalability than the traditional master–worker approach. We successfully tackled real-world-sized data sets while greatly reducing the processing time and saturating the hardware utilization.
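A minimal mpi4py sketch of a master–worker task farm in which the master dispatches only small task identifiers; in the spirit of the described approach, each worker would fetch the bulky tile data from a shared external database itself, keeping master–worker communication light. The radio-coverage calculation is replaced by a stand-in:

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
N_TILES, STOP = 20, -1

def process_tile(tile_id):
    # Stand-in for the radio-coverage calculation on one tile; a real worker would
    # pull the DEM / land-usage tile from the external database here, so the master
    # never has to ship bulky data.
    return tile_id, sum(i * i for i in range(10_000 + tile_id))

if rank == 0:                                    # master: dispatch tile IDs, collect results
    status, results, next_tile = MPI.Status(), [], 0
    for w in range(1, size):                     # one initial task per worker
        comm.send(next_tile, dest=w)
        next_tile += 1
    while len(results) < N_TILES:
        results.append(comm.recv(source=MPI.ANY_SOURCE, status=status))
        comm.send(next_tile if next_tile < N_TILES else STOP, dest=status.Get_source())
        next_tile += 1
    print(f"collected {len(results)} tile results")
else:                                            # worker: loop until the stop signal
    while (tile := comm.recv(source=0)) != STOP:
        comm.send(process_tile(tile), dest=0)
```

Run with, for example, mpiexec -n 4 python demo.py (assuming at least as many tiles as workers).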

14.
To address two problems in current large-scale global scientific data visualization, namely the limited data volume that a single machine can visualize and the difficulty of developing a parallel visualization system from the ground up, this paper proposes a simple, open, and effective method for visualizing large-scale global scientific data based on a distributed environment and VisIt. The architecture and operating mechanism of VisIt are introduced, and a parallel visualization method for user-defined data is presented. Based on the NCEP data set and a global spatial grid, parallel visualization of the global atmospheric temperature field on a fine-grained adaptive Sphere Degenerated Octree Grid (SDOG) was implemented on a small cluster. Performance tests of VisIt's parallel visualization show that, by adding compute nodes, VisIt can effectively overcome the data-volume limitations of traditional single-machine visualization and achieve parallel visualization of large-scale global scientific data.

15.
ABSTRACT

Videos embedded with spatial coordinates, especially when combined with additional expert insights, offer the potential to acquire fine-scale, multi-time-period contextualized data for a variety of different environments. However, while these geospatial multimedia (GSMM) data include abundant spatiotemporal, semantic and visual information, the means to fully leverage their potential through a suite of visual and interactive analysis techniques and tools has thus far been lacking. In this paper, we address this gap by first identifying the types of analysis tasks required for GSMM data, and then presenting a solution platform. This GeoVisuals system uses a visual analysis approach built on semantic data points that can be integrated spatially, which in turn enables management in a unified database with combined spatio-temporal and text querying. A set of visualization functions is integrated into two investigation modes: geo-video analysis and geo-location analysis.

16.
This paper summarizes the research progress of parallel algorithms for digital elevation model (DEM) construction, feature extraction, and related tasks, and outlines the main content of the different parallel algorithms. It then discusses the application of parallel digital terrain analysis (DTA) techniques to the visualization of massive terrain data and to high-performance geocomputation; as the demand for DEMs grows, high-precision, high-resolution DEM products and their value-added services are gradually being commercialized. Finally, by analyzing the key issues in the development of parallel computing, the paper identifies the research trends and significance of parallel DTA techniques: the design of suitable data partitioning and result fusion strategies, general-purpose parallel algorithms, fault-tolerance mechanisms, and load-balancing strategies will be important topics for future research, and especially how parallel computing can be used to solve geoscience problems while multiple computing paradigms develop side by side, so as to obtain simulations closer to the real-world geographic environment and broaden the application scope of digital terrain analysis.

17.
The demand for parallel geocomputation based on raster data is constantly increasing with the growing volume of raster data used in applications and the complexity of geocomputation processing. The difficulty of parallel programming and the poor portability of parallel programs between different parallel computing platforms greatly limit the development and application of parallel raster-based geocomputation algorithms. A strategy that hides the parallel details from the developer of raster-based geocomputation algorithms provides a promising way towards solving this problem. However, existing parallel raster-based libraries cannot solve the problem of the poor portability of parallel programs. This paper presents such a strategy to overcome the poor portability, along with a set of parallel raster-based geocomputation operators (PaRGO) designed and implemented under this strategy. The developed operators are compatible with three popular types of parallel computing platforms: graphics processing units supported by the compute unified device architecture (CUDA), Beowulf clusters supported by the message passing interface (MPI), and symmetric multiprocessing clusters supported by MPI and OpenMP. These operators make the details of parallel programming and the parallel hardware architecture transparent to users. By using PaRGO in a style similar to sequential program coding, geocomputation developers can quickly develop parallel raster-based geocomputation algorithms compatible with these three parallel computing platforms. Practical applications implementing two algorithms for digital terrain analysis show the effectiveness of PaRGO.

18.
The increasing research interest in global climate change and the rise of public awareness have generated significant demand for new tools that support effective visualization of big climate data in a cyber environment, such that anyone with an Internet connection and a web browser can easily view and comprehend the data from any location. In response to this demand, this paper introduces a new web-based platform for visualizing multidimensional, time-varying climate data on a virtual globe. The platform is built upon the virtual globe system Cesium, which is open source, highly extensible and easily integrated into a web environment. The emerging WebGL technique is adopted to support interactive rendering of 3D graphics with hardware graphics acceleration. To address the challenges of transmitting and visualizing voluminous, complex climate data over the Internet in support of real-time visualization, we develop a stream encoding and transmission strategy based on video-compression techniques. This strategy allows dynamic provision of scientific data at different precisions to balance the needs of scientific analysis against visualization cost. Approaches to represent, encode and decode the processed data are also introduced in detail to show the operational workflow. Finally, we conduct several experiments to demonstrate the performance of the proposed strategy under different network conditions. A prototype, PolarGlobe, has been developed to visualize climate data of the Arctic regions from multiple angles.
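A minimal sketch of the quantization idea behind video-style encoding of a time-varying field: each time slice is rescaled to 8-bit or 16-bit integer "frames" that a video codec could then compress, with the precision level chosen per request; the codec itself and the real PolarGlobe data layout are not reproduced, and the temperature cube is synthetic:

```python
import numpy as np

rng = np.random.default_rng(6)
temperature = rng.normal(260, 15, size=(24, 180, 360)).astype(np.float32)   # time, lat, lon

def encode(field, bits=8):
    # Rescale the whole cube to integer "frames"; the (lo, hi) pair is the metadata
    # a client needs to map pixel values back to physical units.
    lo, hi = float(field.min()), float(field.max())
    levels = 2 ** bits - 1
    frames = np.round((field - lo) / (hi - lo) * levels)
    return frames.astype(np.uint8 if bits <= 8 else np.uint16), (lo, hi, levels)

def decode(frames, meta):
    lo, hi, levels = meta
    return frames.astype(np.float32) / levels * (hi - lo) + lo

for bits in (8, 16):                             # precision traded against payload size
    frames, meta = encode(temperature, bits)
    err = np.abs(decode(frames, meta) - temperature).max()
    print(f"{bits:>2}-bit frames: {frames.nbytes / 1e6:.1f} MB, max error {err:.4f} K")
```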

19.
ABSTRACT

Studying and planning urban evolution is essential for understanding the past and designing the cities of the future, and can be facilitated by providing means for sharing, visualizing, and navigating cities on the web, in space and in time. Standard formats, methods, and tools exist for visualizing large-scale 3D cities on the web. In this paper, we go further by integrating the temporal dimension of cities into geospatial web delivery standard formats. In doing so, we enable interactive visualization of large-scale, time-evolving 3D city models on the web. A key characteristic of this paper is the proposed four-step generic approach. First, we design a generic conceptual model of standard formats for delivering 3D cities on the web. Then, we formalize and integrate the temporal dimension of cities into this generic conceptual model. Next, we specify the conceptual model in the 3D Tiles standard at the logical and technical specification levels, resulting in an extension of 3D Tiles for delivering time-evolving 3D city models on the web. Finally, we propose an open-source implementation, experiments, and an evaluation of the propositions and visualization rules. We also provide access to reproducibility notes allowing researchers to replicate all the experiments.

20.
Forecasting dust storms for large geographical areas at high resolution poses great challenges for scientific and computational research. Limitations of computing power and the scalability of parallel systems preclude an immediate solution to such challenges. This article reports our research on using adaptively coupled models to resolve the computational challenges and enable the computability of dust storm forecasting by dividing the large geographical domain into multiple subdomains based on the spatiotemporal distribution of the dust storm. A dust storm model (Eta-8bin) performs a quick forecast at low resolution (22 km) to identify potential hotspots with high dust concentration. A finer, non-hydrostatic mesoscale model (NMM-dust) then performs high-resolution (3 km) forecasting over the much smaller hotspots in parallel to reduce computational requirements and computing time. We also adopted spatiotemporal principles for assigning subdomains to computing resources to optimize the parallel systems and improve the performance of the high-resolution NMM-dust model. This research enabled the computability of high-resolution, large-area dust storm forecasting through the adaptively coupled execution of the two models, Eta-8bin and NMM-dust.
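A minimal sketch of the coupling logic, not the Eta-8bin or NMM-dust models themselves: a coarse forecast flags high-concentration hotspots, and a stand-in "fine model" is then applied only to those small subdomains, each of which could be assigned to its own group of processors; the threshold and concentration field are invented:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(7)
coarse = ndimage.gaussian_filter(rng.random((90, 120)), sigma=4)   # coarse dust concentration

hotspot_mask = coarse > np.percentile(coarse, 97)                  # high-concentration cells
labels, n = ndimage.label(hotspot_mask)
boxes = ndimage.find_objects(labels)                               # one bounding box per hotspot

def fine_forecast(subdomain):
    # Stand-in for a high-resolution run over one small subdomain.
    return float(subdomain.mean())

print(f"{n} hotspots cover {hotspot_mask.mean():.1%} of the coarse domain")
for i, box in enumerate(boxes, 1):               # each box could be handed to its own node
    rows, cols = box
    print(f"hotspot {i}: rows {rows.start}-{rows.stop}, cols {cols.start}-{cols.stop}, "
          f"fine-model mean {fine_forecast(coarse[box]):.3f}")
```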
