Similar Literature
1.
Cellular automata (CA) models can simulate complex urban systems through simple rules and have become important tools for studying the spatio-temporal evolution of urban land use. However, the multiple and large-volume data layers, massive geospatial processing and complicated algorithms for automatic calibration in urban CA models require a high level of computational capability. Unfortunately, the limited performance of sequential computation on a single computing unit (i.e. a central processing unit (CPU) or a graphics processing unit (GPU)) and the high cost of parallel design and programming make it difficult to establish a high-performance urban CA model. As a result of its powerful computational ability and scalability, the vectorization paradigm is becoming increasingly important and has received wide attention with regard to this kind of computational problem. This paper presents a high-performance CA model using vectorization and parallel computing technology for the computation-intensive and data-intensive geospatial processing in urban simulation. To transform the original algorithm into a vectorized algorithm, we define the neighborhood set of the cell space and improve the operation paradigm of neighborhood computation, transition probability calculation, and cell state transition. The experiments undertaken in this study demonstrate that the vectorized algorithm can greatly reduce the computation time, especially in the environment of a vector programming language, and that it is possible to parallelize the algorithm as the data volume increases. The execution time for the simulation at 5-m resolution with a 3 × 3 neighborhood decreased from 38,220.43 s to 803.36 s with the vectorized algorithm and was further shortened to 476.54 s by dividing the domain into four computing units. The experiments also indicated that the computational efficiency of the vectorized algorithm is closely related to the neighborhood size and configuration, as well as the shape of the research domain. We conclude that the combination of vectorization and parallel computing technology can provide scalable solutions that significantly improve the applicability of urban CA.
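As an illustration of the vectorization idea described above (not the authors' exact model), the following minimal NumPy sketch evaluates the Moore-neighborhood count, a logistic transition probability, and the stochastic state update as whole-array operations; the `suitability` layer, the coefficients, and the wrap-around edge handling are all hypothetical simplifications.

```python
import numpy as np

def simulate_step(state, suitability, rng, beta=1.0, bias=-3.0):
    """One vectorized CA step: developed cells are 1, undeveloped 0.

    The neighborhood count, transition probability, and state update are
    all expressed as whole-array operations instead of per-cell loops.
    """
    # 3 x 3 Moore neighborhood count via eight array shifts (edges wrap here;
    # a real model would pad or mask the border instead).
    neigh = sum(np.roll(np.roll(state, dy, axis=0), dx, axis=1)
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0))

    # Logistic transition probability combining suitability and neighborhood.
    p = 1.0 / (1.0 + np.exp(-(bias + beta * suitability + 0.5 * neigh)))

    # Stochastic transition applied only to currently undeveloped cells.
    develop = (rng.random(state.shape) < p) & (state == 0)
    return np.where(develop, 1, state)

rng = np.random.default_rng(42)
state = (rng.random((1000, 1000)) < 0.05).astype(np.int8)   # initial urban seed
suitability = rng.random((1000, 1000))                       # stand-in for real layers
for _ in range(10):
    state = simulate_step(state, suitability, rng)
print(state.sum(), "developed cells after 10 iterations")
```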

2.
Viewshed analysis, often supported by geographic information systems, is widely used in many application domains. However, as terrain data continue to become increasingly large and available at high resolutions, data-intensive viewshed analysis poses significant computational challenges. General-purpose computation on graphics processing units (GPUs) provides a promising means to address such challenges. This article describes a parallel computing approach to data-intensive viewshed analysis of large terrain data using GPUs. Our approach exploits the high-bandwidth memory of GPUs and the parallelism of massive spatial data to enable memory-intensive and computation-intensive tasks, while central processing units are used to achieve efficient input/output (I/O) management. Furthermore, a two-level spatial domain decomposition strategy has been developed to mitigate a performance bottleneck caused by data transfer in the memory hierarchy of the GPU-based architecture. Computational experiments were designed to evaluate the computational performance of the approach. The experiments demonstrate significant performance improvement over a well-known sequential computing method, and an enhanced ability to analyze sizable datasets that the sequential computing method cannot handle.
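For context, the sketch below implements a brute-force line-of-sight viewshed in NumPy; it is not the article's GPU implementation, but it makes visible why each target cell can be evaluated independently and hence why the problem maps well onto massively parallel hardware. The synthetic DEM, observer height and sampling density are illustrative assumptions.

```python
import numpy as np

def viewshed(dem, obs_row, obs_col, obs_height=1.8, samples=64):
    """Brute-force line-of-sight viewshed on a DEM (illustrative only).

    Each target cell is tested independently, which is exactly the
    property that makes the problem attractive for GPU parallelism.
    """
    nrows, ncols = dem.shape
    obs_z = dem[obs_row, obs_col] + obs_height
    visible = np.zeros_like(dem, dtype=bool)
    for r in range(nrows):
        for c in range(ncols):
            t = np.linspace(0.0, 1.0, samples)[1:-1]
            rr = (obs_row + t * (r - obs_row)).round().astype(int)
            cc = (obs_col + t * (c - obs_col)).round().astype(int)
            sight = obs_z + t * (dem[r, c] - obs_z)   # line-of-sight heights
            visible[r, c] = np.all(dem[rr, cc] <= sight)
    return visible

dem = np.random.default_rng(0).random((64, 64)) * 50   # synthetic terrain
vis = viewshed(dem, 32, 32)
print(round(vis.mean(), 3), "fraction of cells visible")
```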

3.
Performing point pattern analysis using Ripley’s K function on point events of large size is computationally intensive as it involves massive point-wise comparisons, time-consuming calculation of edge effect correction weights, and a large number of simulations. This article presents two strategies to optimize the algorithm for point pattern analysis using Ripley’s K function and utilizes cloud computing to further accelerate the optimized algorithm. The first optimization sorted the points on their x and y coordinates and thus narrowed the scope of searching for neighboring points down to a rectangular area around each point when estimating the K function. Using the actual study area in computing edge effect correction weights is essential to estimate an unbiased K function, but is very computationally intensive if the study area is of complex shape. The second optimization reused the previously computed weights to avoid repeating the expensive weights calculation. The optimized algorithm was then parallelized using Open Multi-Processing (OpenMP) and hybrid Message Passing Interface (MPI)/OpenMP on the cloud computing platform. Performance testing showed that the optimizations effectively accelerated point pattern analysis using the K function by a factor of 8 for both the sequential version and the OpenMP-parallel version of the optimized algorithm. While the OpenMP-based parallelization achieved good scalability with respect to the number of CPU cores utilized and the problem size, the hybrid MPI/OpenMP-based parallelization significantly shortened the time for estimating the K function and performing simulations by utilizing computing resources on multiple computing nodes. The computational challenge imposed by point pattern analysis tasks on point events of large size involving a large number of simulations can thus be addressed by utilizing elastic, distributed cloud resources.
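The first optimization can be sketched as follows, under the simplifying assumptions of a rectangular study area and no edge-correction weights (which the article handles separately by reusing previously computed weights): sorting on x lets a binary search restrict the neighbor search to a 2r-wide slab around each event.

```python
import numpy as np

def ripley_k(points, r, area):
    """Windowed estimate of Ripley's K at distance r (no edge correction).

    Points are sorted on x so that, for each event, candidate neighbors are
    restricted to a 2r-wide vertical slab located with binary search.
    """
    pts = points[np.argsort(points[:, 0])]
    x = pts[:, 0]
    n = len(pts)
    count = 0
    for i in range(n):
        lo = np.searchsorted(x, x[i] - r, side="left")
        hi = np.searchsorted(x, x[i] + r, side="right")
        d = np.hypot(pts[lo:hi, 0] - pts[i, 0], pts[lo:hi, 1] - pts[i, 1])
        count += np.count_nonzero(d <= r) - 1        # exclude the point itself
    lam = n / area
    return count / (lam * n)

rng = np.random.default_rng(1)
pts = rng.random((5000, 2)) * 100                     # CSR pattern in a 100 x 100 square
print(ripley_k(pts, r=5.0, area=100 * 100), "vs pi*r^2 =", np.pi * 25)
```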

4.
Traditional distributed hydrological models rely on sequential computation, whose capacity cannot meet the demands of large-scale, fine-resolution, multi-factor, multi-process coupled hydrological simulation; support from parallel computing is therefore urgently needed. Since the beginning of the twenty-first century, the rapid development of computer technology and the gradual maturation of parallel environments have provided the hardware and software foundation for parallel computing in distributed hydrological models. This paper reviews existing research from two perspectives, parallel environments and parallel algorithms, analyzes the strengths and weaknesses of different parallel environments and algorithms, and proposes means of improving model parallel efficiency: allocating process/thread counts appropriately to reduce communication overhead, adopting hybrid parallel environments to enhance model scalability, using spatial or spatio-temporal discretization to improve model parallelizability, and dynamically assigning computing tasks to balance workloads. Finally, the paper outlines future research directions for high-performance parallel distributed models.

5.
Over recent years, massive geospatial information has been produced at a prodigious rate and is usually geographically distributed across the Internet. Grid computing, as a recent development in the landscape of distributed computing, is deemed a good solution for distributed geospatial data management and manipulation. Thus, Grid computing technology can be applied to integrate various distributed resources into a ‘super-computer’ that enables efficient distributed geospatial query processing. In order to realize this vision, an effective mechanism for building the distributed geospatial query workflow in the Grid environment needs to be carefully designed. The workflow-building technology aims to automatically transform a global geospatial query into an equivalent distributed query process in the Grid. To this end, detailed steps and algorithms for building the distributed geospatial query workflow in the Grid environment are discussed in this article. Moreover, we develop corresponding software tools that enable Grid-based geospatial queries to be run against multiple data resources. Experimental results demonstrate that the proposed methodology is feasible and correct.

6.
The continually increasing size of geospatial data sets poses a computational challenge when conducting interactive visual analytics using conventional desktop-based visualization tools. In recent decades, improvements in parallel visualization using state-of-the-art computing techniques have significantly enhanced our capacity to analyse massive geospatial data sets. However, only a few strategies have been developed to maximize the utilization of parallel computing resources to support interactive visualization. In particular, an efficient visualization intensity prediction component is lacking from most existing parallel visualization frameworks. In this study, we propose a data-driven, view-dependent visualization intensity prediction method, which can dynamically predict the visualization intensity based on the distribution patterns of spatio-temporal data. The predicted results are used to schedule the allocation of visualization tasks. We integrated this strategy with a parallel visualization system deployed in a compute unified device architecture (CUDA)-enabled graphics processing unit (GPU) cloud. To evaluate the flexibility of this strategy, we performed experiments using dust storm data sets produced from a regional climate model. The results of the experiments showed that the proposed method yields stable and accurate prediction results with acceptable computational overheads under different types of interactive visualization operations. The results also showed that our strategy improves the overall visualization efficiency by incorporating intensity-based scheduling.

7.
A general-purpose parallel raster processing programming library (pRPL) was developed and applied to speed up a commonly used cellular automaton model with known tractability limitations. The library is suitable for use by geographic information scientists with basic programming skills, but who lack knowledge and experience of parallel computing and programming. pRPL is a general-purpose programming library that provides generic support for raster processing, including local-scope, neighborhood-scope, regional-scope, and global-scope algorithms, as long as they are parallelizable. The library also supports multilayer algorithms. Besides the standard data domain decomposition methods, pRPL provides a spatially adaptive quad-tree-based decomposition to produce more evenly distributed workloads among processors. Data parallelism and task parallelism are supported, with both static and dynamic load-balancing. By grouping processors, pRPL also supports data–task hybrid parallelism, i.e., data parallelism within a processor group and task parallelism among processor groups. pSLEUTH, a parallel version of a well-known cellular automaton model for simulating urban land-use change (SLEUTH), was developed to demonstrate full utilization of the advanced features of pRPL. Experiments with real-world data sets were conducted and the performance of pSLEUTH was measured. We conclude that pRPL not only greatly reduces the development complexity of implementing a parallel raster-processing algorithm, but also greatly reduces the computing time of computationally intensive raster-processing algorithms, as demonstrated with pSLEUTH.
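A minimal sketch of the spatially adaptive quad-tree idea (not pRPL's actual API) is shown below: an extent is split recursively until the count of "active" cells, used here as a stand-in for workload, in each leaf falls below a threshold, so that leaves concentrate where the work is.

```python
import numpy as np

def quadtree_decompose(active, r0, r1, c0, c1, max_work, leaves):
    """Recursively split a raster extent until the number of 'active' cells
    (a proxy for workload) in each leaf is below max_work.

    This mimics, in spirit, the spatially adaptive quad-tree decomposition
    that pRPL offers as an alternative to uniform block/row decomposition.
    """
    work = int(active[r0:r1, c0:c1].sum())
    if work <= max_work or (r1 - r0 <= 1 and c1 - c0 <= 1):
        leaves.append(((r0, r1, c0, c1), work))
        return
    rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
    for rr0, rr1, cc0, cc1 in ((r0, rm, c0, cm), (r0, rm, cm, c1),
                               (rm, r1, c0, cm), (rm, r1, cm, c1)):
        if rr1 > rr0 and cc1 > cc0:
            quadtree_decompose(active, rr0, rr1, cc0, cc1, max_work, leaves)

rng = np.random.default_rng(7)
# Skewed workload: most active cells concentrated in one corner of the raster.
active = rng.random((512, 512)) < 0.02
active[:128, :128] |= rng.random((128, 128)) < 0.5
leaves = []
quadtree_decompose(active, 0, 512, 0, 512, max_work=2000, leaves=leaves)
print(len(leaves), "leaves; max leaf workload:", max(w for _, w in leaves))
```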

8.
Research progress on parallel computing for distributed hydrological models
Distributed hydrological simulation of large basins at high resolution with multiple coupled processes is computationally demanding, and traditional sequential computing cannot satisfy its demand for computing capacity, so support from parallel computing is needed. This paper first analyzes the parallelizability of distributed hydrological models from three perspectives, namely space, time, and sub-processes, points out that spatial decomposition is the preferred approach to parallelizing distributed hydrological models, and classifies hydrological sub-process computation methods and distributed hydrological models from the viewpoint of spatial decomposition. It then summarizes the state of research on parallel computing for distributed hydrological models: for spatial-decomposition parallelism, most existing studies take the sub-basin as the basic scheduling unit, while for parallelism in the time dimension, some researchers have made preliminary studies of parallel methods based on joint discretization of the space-time domain. Finally, key open problems and future directions are discussed from three aspects: parallel algorithm design, parallel computing frameworks for integrated basin-system simulation, and high-performance data input/output methods that support parallel computing.

9.
Cellular automata (CA), a kind of bottom-up approach, can be used to simulate urban dynamics and land use changes effectively. Urban simulation usually involves a large set of GIS data in terms of the extent of the study area and the number of spatial factors. Computational capability therefore becomes a bottleneck when implementing CA to simulate large regions. Parallel computing techniques can be applied to CA to solve this kind of hard computational problem. This paper demonstrates that the performance of large-scale urban simulation can be significantly improved by using parallel computation techniques. The proposed urban CA is implemented in a parallel framework that runs on a cluster of PCs. A large region usually consists of heterogeneous or polarized development patterns. This study proposes a line-scanning load-balancing method to reduce the waiting time between parallel processors. The proposed method has been tested in a fast-growing region, the Pearl River Delta. The experiments indicate that parallel computation techniques with load balancing can significantly improve the applicability of CA for simulating urban development in this large, complex region.
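The load-balancing idea can be illustrated with the following sketch, which partitions rows into contiguous bands of roughly equal cumulative workload rather than equal height; the per-row workload counts and the four-processor setting are hypothetical, and the authors' actual line-scanning method may differ in detail.

```python
import numpy as np

def line_scan_partition(workload_per_row, n_procs):
    """Assign contiguous row bands to processors so that the cumulative
    per-row workload (e.g. count of cells that must be evaluated) is
    roughly equal across processors.
    """
    total = workload_per_row.sum()
    target = total / n_procs
    bands, start, acc = [], 0, 0.0
    for row, w in enumerate(workload_per_row):
        acc += w
        if acc >= target and len(bands) < n_procs - 1:
            bands.append((start, row + 1))
            start, acc = row + 1, 0.0
    bands.append((start, len(workload_per_row)))
    return bands

rng = np.random.default_rng(3)
rows = 1000
# Polarized development: the southern half of the region is far more active.
workload = np.where(np.arange(rows) > rows // 2, 50, 5) + rng.integers(0, 5, rows)
for p, (a, b) in enumerate(line_scan_partition(workload, 4)):
    print(f"proc {p}: rows {a}-{b - 1}, workload {workload[a:b].sum()}")
```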

10.
Group-user intensive access to WebGIS exhibits spatiotemporal behaviour patterns with aggregation features and regular distributions when geospatial data are accessed repeatedly over time and aggregated in certain spatial areas. We argue that these observable group-user access patterns provide a foundation for improved optimization of WebGIS so that it can respond to volume-intensive requests with a higher quality of service and improved performance. Accordingly, a measure of access popularity distribution must precisely reflect the access aggregation and regularity features found in group-user intensive access. In our research, we considered both the temporal distribution characteristics and the spatial correlation in the access popularity of tiled geospatial data (tiles). Based on the observation that group-user access follows a Zipf-like law, we built a time-sequence-based tile-access popularity distribution to express the access aggregation of group-users with heavy-tailed characteristics. Considering the spatial locality of user-browsed tiles, we built a quantitative expression for the correlation between tile-access popularities and the distances to hotspot tiles, reflecting the attenuation of tile-access popularity with distance. Moreover, given the geographical spatial dependency and scale attribute of tiles, and the time-sequence of tile-access popularity, we built a Poisson regression model to express the degree of correlation among the accesses to adjacent tiles at different scales, reflecting the spatiotemporal correlation in tile access patterns. Experiments verify the accuracy of our Poisson regression model, which we then applied to a cluster-based cache-prefetching scenario. The results show that our model successfully reflects the spatiotemporal aggregation features of group-user intensive access and group-user behaviour patterns in WebGIS. The refined mathematical method in our model represents a time-sequence distribution of intensive access to tiles and the spatial aggregation and correlation in access to tiles at different scales, quantitatively expressing group-user spatiotemporal behaviour patterns with aggregation features and a regular distribution. Our proposed model provides a precise and empirical basis for performance-optimization strategies in WebGIS services, such as planning computing resource allocation and utilization, distributed storage of geospatial data, and providing distributed services so as to respond rapidly to geospatial data requests, thus addressing the challenges of volume-intensive user access.
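Two of the statistical building blocks mentioned above can be sketched with synthetic data: estimating a Zipf-like exponent from ranked tile-access counts, and fitting a Poisson regression of access counts on simple spatial covariates. The covariates (distance to a hotspot tile and zoom level) and all data below are illustrative assumptions, not the paper's dataset or exact model.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(5)

# --- Zipf-like access popularity: fit the exponent on log(rank) vs log(count).
counts = np.sort(rng.zipf(a=1.8, size=20000))[::-1][:2000].astype(float)
rank = np.arange(1, len(counts) + 1)
slope, intercept = np.polyfit(np.log(rank), np.log(counts), 1)
print("estimated Zipf exponent:", -slope)

# --- Poisson regression of a tile's access count on simple spatial covariates
#     (distance to the nearest hotspot tile and zoom level), echoing the
#     spatio-temporal correlation model described in the abstract.
n = 5000
dist_to_hotspot = rng.exponential(scale=3.0, size=n)
zoom = rng.integers(10, 18, size=n)
lam = np.exp(3.0 - 0.4 * dist_to_hotspot - 0.1 * (zoom - 10))  # synthetic truth
y = rng.poisson(lam)
X = np.column_stack([dist_to_hotspot, zoom])
model = PoissonRegressor(alpha=0.0, max_iter=300).fit(X, y)
print("fitted coefficients:", model.coef_, "intercept:", model.intercept_)
```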

11.
The objective of this computational study was to investigate to what extent the availability and use of historical maps may affect the quality of the calibration process of cellular automata (CA) urban models. The numerical experiments are based on a constrained CA applied to a case study. Since the model depends on a large number of parameters, we optimize the CA using cooperative coevolutionary particle swarms, an approach known for its ability to operate effectively in search spaces with a high number of dimensions. To cope with the significant computational cost of the large number of CA simulations required by our study, we use a parallelized CA model that takes advantage of the computing power of graphics processing units. The study shows that the accuracy of simulations can be significantly influenced by both the number and the temporal position of the historical maps involved in the calibration.

12.
The geospatial sensor web is set to revolutionise real-time geospatial applications by making up-to-date spatially and temporally referenced data relating to real-world phenomena ubiquitously available. The uptake of sensor web technologies is largely being driven by the recent introduction of the OpenGIS Sensor Web Enablement framework, a standardisation initiative that defines a set of web service interfaces and encodings to task and query geospatial sensors in near real time. However, live geospatial sensors are capable of producing vast quantities of data over a short time period, which presents a large, fluctuating and ongoing processing requirement that is difficult to serve with adequate computational resources. Grid computing appears to offer a promising solution to this problem, but its usage thus far has primarily been restricted to processing static as opposed to real-time data sets. A new approach is presented in this work whereby geospatial data streams are processed on grid computing resources. This is achieved by submitting ongoing processing jobs to the grid that continually poll sensor data repositories using the relevant OpenGIS standards. To evaluate this approach, a road-traffic monitoring application was developed to process streams of GPS observations from a fleet of vehicles. Specifically, a Bayesian map-matching algorithm is performed that matches each GPS observation to a link on the road network. The results show that over 90% of observations were matched correctly and that the adopted approach is capable of achieving timely results for a linear-time geoprocessing operation performed every 60 seconds. However, testing in a production grid environment highlighted some scalability and efficiency problems. Open Geospatial Consortium (OGC) data services were found to present an I/O bottleneck, and the adopted job submission method was found to be inefficient. Consequently, a number of recommendations are made regarding the grid job-scheduling mechanism, shortcomings in the OGC Web Processing Service specification, and I/O bottlenecks in OGC data services.
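A toy version of Bayesian map matching for a single GPS fix is sketched below: a Gaussian likelihood in the perpendicular distance to each candidate road link is combined with a prior (e.g. continuity with the previously matched link) to give a posterior over links. The two-link network, the 10 m GPS noise and the prior values are illustrative assumptions; the article's algorithm chains such updates along the whole trajectory.

```python
import numpy as np

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to segment ab."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def match_observation(p, links, prior, sigma=10.0):
    """Posterior over candidate road links for one GPS fix.

    Likelihood: Gaussian in the perpendicular distance to the link (GPS
    noise sigma in metres); prior: e.g. topological continuity with the
    previously matched link.
    """
    d = np.array([point_segment_distance(p, a, b) for a, b in links])
    like = np.exp(-0.5 * (d / sigma) ** 2)
    post = like * prior
    return post / post.sum()

links = [(np.array([0.0, 0.0]), np.array([100.0, 0.0])),     # east-west street
         (np.array([50.0, -50.0]), np.array([50.0, 50.0]))]  # north-south street
prior = np.array([0.7, 0.3])        # previous fix was matched to link 0
obs = np.array([52.0, 6.0])         # noisy GPS fix near the intersection
print("posterior over links:", match_observation(obs, links, prior).round(3))
```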

13.
何青松  谭荣辉  杨俊 《地理学报》2021,76(10):2522-2535
Cellular automata (CA), the most widely used models for simulating urban spatio-temporal dynamics, can effectively reproduce infilling and edge-expansion urban growth, but perform less well in simulating leapfrog (enclave-type) expansion. This paper proposes an improved CA model, APCA, which builds on conventional CA by using affinity propagation (AP) clustering to search for the "seed points" of diffusive urban growth, so that the diffusion and coalescence phases of urban growth can be simulated simultaneously. Taking Wuhan as the study area, APCA was used to simulate the spatio-temporal process of urban expansion from 2005 to 2025. The results show that: (1) with the number of seed points set between 1 and 8, the overall simulation accuracy of APCA exceeds that of a logistic-regression CA, and with 6 seed points the accuracy for newly added urban land is highest, reaching 0.5217; (2) from 2015 to 2025 the area of leapfrog growth in Wuhan is about 8.67 km2, accounting for 6.30% of the total newly added urban land; (3) Wuhan's "diffusion first, coalescence later" urban expansion from 1995 to 2025 is consistent with the phase theory of urban growth. APCA refines the traditional two-dimensional planar CA framework to some extent by extending urban expansion simulation from the areal dimension to the point dimension, providing a reference for accurately depicting the spatial expansion of urban land.
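The seed-point search can be illustrated with scikit-learn's affinity propagation on a synthetic set of high-probability candidate cells; the coordinates and the `preference` value are hypothetical, and this is only the clustering step of APCA, not the full CA coupling.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(11)

# Candidate cells with high development probability, e.g. the output of a
# logistic transition-probability surface (coordinates are synthetic here).
candidates = np.vstack([
    rng.normal(loc=(10, 10), scale=1.5, size=(60, 2)),
    rng.normal(loc=(40, 25), scale=2.0, size=(80, 2)),
    rng.normal(loc=(25, 45), scale=1.0, size=(40, 2)),
])

# Affinity propagation chooses exemplars ("seed points") automatically;
# the preference parameter controls roughly how many exemplars emerge.
ap = AffinityPropagation(preference=-200, random_state=0).fit(candidates)
seeds = ap.cluster_centers_
print(len(seeds), "seed points found at:\n", seeds.round(1))
```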

14.
《The Journal of geography》2012,111(6):217-225
Abstract

This article situates geospatial technologies as a constructivist tool in the K-12 classroom and examines student experiences with real-time authentic geospatial data provided through a hybrid adventure learning environment. Qualitative data from seven student focus groups demonstrate the effectiveness of using real-time authentic data, peer collaboration, and geospatial technologies in learning geography. We conclude with recommendations about geospatial technology curricula, geospatial lesson design, providing preservice teachers with geographic technological pedagogical content knowledge, and encouraging further research to investigate the impact, affordances, and pedagogical implications of geospatial technologies and data in the K-12 classroom.

15.
Simulation and subsequent visualization in a network environment are important for gleaning insights into spatiotemporal processes. As computing systems become increasingly diverse in hardware architectures, operating systems, screen sizes, human–computer interactions and network capabilities, effective simulation and visualization must become adaptive to a wide range of diverse devices. This paper focuses on the optimization of simulation and visualization analysis of the dam-failure flood spatiotemporal process for diverse computing systems. First, an adaptive browser/server architecture for the dam-failure simulation application was designed to bridge the gap in hardware performance and visualization context that exists across diverse computing systems. Second, a data flow and an optimization method for multilevel time-series flood data were developed to better support network-based simulation, visualization and analysis on diverse terminals. Finally, a user-friendly, plugin-free prototype system was developed. The experimental results demonstrate that the proposed methods can cope with the challenges of simulation, visualization and interaction for a dam-failure simulation application on diverse terminals.

16.
Construction of a geospatial metadata linkage network
A geospatial metadata linkage model is designed using the Resource Description Framework (RDF). Based on the semantic relationships between geospatial metadata records and the computation of their semantic relatedness, a linkage network is constructed with metadata records as nodes, the semantic relationships between records as edges, and the semantic relatedness as edge weights. In this network, each node is the resource description graph of one geospatial metadata record, containing attribute features (data source, spatial characteristics, temporal characteristics, and content) and relational features (the semantic relationships and semantic relatedness between records). Experiments and analysis show that the geospatial metadata linkage network can effectively support applications such as semantic linked retrieval and recommendation of geospatial data, with higher accuracy than traditional keyword-based metadata retrieval.
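A minimal sketch of such a linkage network, using networkx rather than a full RDF store, treats each metadata record as a node and attaches the semantic relatedness as an edge weight; the records, relations and relatedness scores below are invented for illustration.

```python
import networkx as nx

# Hypothetical metadata records (nodes) with a few attribute features.
records = {
    "md:landsat8_2020": {"source": "USGS", "theme": "imagery", "year": 2020},
    "md:landuse_wuhan": {"source": "local", "theme": "land use", "year": 2019},
    "md:dem_srtm90":    {"source": "NASA", "theme": "elevation", "year": 2000},
}

# Semantic relations and relatedness scores between records (edge weights).
links = [
    ("md:landsat8_2020", "md:landuse_wuhan", "derivedFrom", 0.82),
    ("md:landuse_wuhan", "md:dem_srtm90",    "spatialOverlap", 0.41),
]

g = nx.Graph()
for rid, attrs in records.items():
    g.add_node(rid, **attrs)
for a, b, rel, w in links:
    g.add_edge(a, b, relation=rel, weight=w)

# Linked retrieval: rank the neighbors of a hit by semantic relatedness.
hit = "md:landuse_wuhan"
related = sorted(g[hit].items(), key=lambda kv: kv[1]["weight"], reverse=True)
for rid, attrs in related:
    print(hit, "->", rid, attrs["relation"], attrs["weight"])
```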

17.
Multidimensional data organization and storage for spatial data cubes
This paper defines the kinds and content of data contained in the geospatial, thematic, and temporal dimensions of a spatial data cube; designs the data structures for these dimensions and their dimension hierarchies; describes how the geospatial, thematic, and temporal dimensions compose the spatial data cube at both the conceptual and the physical level; determines the multidimensional-array organization of the data in these three dimensions, together with the data-file and virtual-memory storage strategies for the multidimensional data; and presents the join operations between records in the multidimensional arrays and the compression methods for those arrays.
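The multidimensional-array organization can be sketched with a small NumPy cube indexed by (spatial unit, thematic variable, time step); the dimension members and the aggregation below are invented for illustration, and a production data cube would add the dimension hierarchies, file-backed storage and compression described in the abstract.

```python
import numpy as np

# Dimension members of a tiny spatial data cube (all hypothetical).
regions = ["district_A", "district_B", "district_C"]        # geospatial dimension
themes = ["population", "built_up_km2"]                      # thematic dimension
years = [2000, 2010, 2020]                                   # temporal dimension

rng = np.random.default_rng(0)
cube = rng.random((len(regions), len(themes), len(years))) * 100

# Slice: one theme across all regions and years (a 2-D "page" of the cube).
built_up = cube[:, themes.index("built_up_km2"), :]
print("built-up area by region and year:\n", built_up.round(1))

# Roll-up along the geospatial dimension: city-wide totals per theme and year.
city_totals = cube.sum(axis=0)
print("city totals:\n", city_totals.round(1))

# Simple compression strategy for sparse cubes: store only non-empty cells.
mask = cube > 50
coords = np.argwhere(mask)
values = cube[mask]
print(len(values), "of", cube.size, "cells kept in the sparse representation")
```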

18.
Cellular automata (CA) have been widely used to simulate complex urban development processes. Previous studies indicated that vector-based cellular automata (VCA) could be applied to simulate urban land-use changes at a realistic land parcel level. Because of the complexity of VCA, these studies were conducted at small scales or did not adequately consider the highly fragmented processes of urban development. This study aims to build an effective framework called dynamic land parcel subdivision (DLPS)-VCA to accurately simulate urban land-use change processes at the land parcel level. We introduce this model into urban land-use change simulation to divide land parcels reasonably, and adopt a random forest algorithm (RFA) model to mine the transition rules of urban land-use changes. Finally, we simulate the land-use changes in Shenzhen between 2009 and 2014 via the proposed DLPS-VCA model. Compared to the advanced Patch-CA and RFA-VCA models, the DLPS-VCA model achieves the highest simulation accuracy (Figure-of-Merit = 0.232), which is 32.57% and 18.97% higher, respectively, and is most similar to the actual land-use scenario (similarity = 94.73%) at the pattern level. These results indicate that the DLPS-VCA model can both accurately subdivide land parcels during urban land-use changes and effectively simulate urban expansion and urban land-use changes at a fine scale. Furthermore, the land-use change rules mined by DLPS-VCA and the simulation results of several future urban development scenarios can act as guides for future urban planning policy formulation.
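As a sketch of the rule-mining step only (the parcel subdivision and CA coupling are omitted), the code below fits a random forest to synthetic parcel-level drivers and uses the predicted class probability as a development potential; all variables and coefficients are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(21)

# Synthetic parcel-level drivers: stand-ins for the spatial variables such
# a model typically uses (distances, slope, neighborhood development).
n = 8000
X = np.column_stack([
    rng.exponential(2.0, n),      # distance to nearest road (km)
    rng.exponential(8.0, n),      # distance to city centre (km)
    rng.random(n) * 15,           # slope (degrees)
    rng.random(n),                # developed-neighbor density
])
logit = 1.5 - 0.6 * X[:, 0] - 0.15 * X[:, 1] - 0.08 * X[:, 2] + 2.0 * X[:, 3]
changed = rng.random(n) < 1 / (1 + np.exp(-logit))   # observed change labels

# Mine transition rules with a random forest, then use the predicted
# probability as the development potential of each land parcel.
rf = RandomForestClassifier(n_estimators=200, min_samples_leaf=20,
                            random_state=0).fit(X, changed)
potential = rf.predict_proba(X)[:, 1]
print("mean development potential of changed vs unchanged parcels:",
      potential[changed].mean().round(3), potential[~changed].mean().round(3))
```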

19.
The reliability of raster cellular automaton (CA) models for fine-scale land change simulations has been increasingly questioned, because regular pixels/grids cannot precisely represent irregular geographical entities and their interactions. Vector CA models can address these deficiencies due to the ability of the vector data structure to represent realistic urban entities. This study presents a new land parcel cellular automaton (LP-CA) model for simulating urban land changes. The innovation of this model is the use of an ensemble learning method for automatic calibration. The proposed model is applied in Shenzhen, China. The experimental results indicate that bagging-Naïve Bayes yields the highest calibration accuracy among a set of selected classifiers. The assessment of neighborhood sensitivity suggests that the LP-CA model achieves the highest simulation accuracy with a neighborhood radius r = 2. The calibrated LP-CA is used to project future urban land use changes in Shenzhen, and the results are found to be consistent with those specified in the official city plan.
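The calibration idea can be sketched as bagged Naive Bayes classifiers whose averaged vote gives the parcel transition probability; the synthetic drivers and labels below stand in for the real Shenzhen data and are purely illustrative.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(13)

# Synthetic per-parcel drivers and observed change labels (illustrative only).
n = 6000
X = np.column_stack([rng.exponential(1.5, n),   # distance to road
                     rng.exponential(6.0, n),   # distance to centre
                     rng.random(n)])            # developed-neighbor ratio
p_true = 1 / (1 + np.exp(-(1.0 - 0.8 * X[:, 0] - 0.2 * X[:, 1] + 2.5 * X[:, 2])))
y = rng.random(n) < p_true

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Bagged Naive Bayes: each bootstrap replicate fits a GaussianNB, and the
# averaged vote calibrates the parcel transition probability.
clf = BaggingClassifier(GaussianNB(), n_estimators=25, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
```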

20.