Similar Documents
17 similar documents found (search time: 15 ms)
1.
The availability of continental and global-scale spatio-temporal geographical data sets, and the need to efficiently process, analyse and manage them, led to the development of the temporally enabled Geographic Resources Analysis Support System (GRASS GIS). We present the temporal framework that extends GRASS GIS with spatio-temporal capabilities. The framework provides comprehensive functionality for implementing a full-featured temporal geographic information system (GIS) based on a combined field- and object-based approach. A significantly improved snapshot approach is used to manage spatial fields of raster, three-dimensional raster and vector type over time. The resulting timestamped spatial fields are organised into spatio-temporal fields referred to as space-time data sets. Both types of fields are handled as objects in our framework. The spatio-temporal extents of the objects and related metadata are stored in relational databases, which provides additional functionality for SQL-based analysis. We present our combined field- and object-based approach in detail and show the management, analysis and processing of spatio-temporal data sets with complex spatio-temporal topologies. A key feature is the hierarchical processing of spatio-temporal data, ranging from topological analysis of spatio-temporal fields, through Boolean operations on spatio-temporal extents, down to access of single pixels, voxels and vector features. The linear scalability of our approach is demonstrated by handling up to 1,000,000 raster layers in a single space-time data set. We provide several code examples to show the capabilities of the GRASS GIS Temporal Framework, and present the spatio-temporal intersection of trajectory data to demonstrate the object-based abilities of our framework.
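The combined field/object design can be illustrated with a minimal, self-contained sketch. This is plain Python with hypothetical class names, not the actual GRASS GIS Temporal Framework API; it shows only the core idea of timestamped fields organised into a queryable space-time data set:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class TimestampedField:
    """A spatial field (raster / 3D raster / vector) with a validity interval."""
    name: str
    start: datetime
    end: datetime

@dataclass
class SpaceTimeDataset:
    """A space-time data set: timestamped fields handled as one object."""
    name: str
    fields: List[TimestampedField] = field(default_factory=list)

    def register(self, f: TimestampedField) -> None:
        self.fields.append(f)
        self.fields.sort(key=lambda x: x.start)

    def select(self, start: datetime, end: datetime) -> List[TimestampedField]:
        """Return fields whose temporal extent overlaps [start, end), the
        kind of query the framework delegates to SQL on the metadata tables."""
        return [f for f in self.fields if f.start < end and f.end > start]

stds = SpaceTimeDataset("precipitation")
stds.register(TimestampedField("prec_jan", datetime(2020, 1, 1), datetime(2020, 2, 1)))
stds.register(TimestampedField("prec_feb", datetime(2020, 2, 1), datetime(2020, 3, 1)))
print([f.name for f in stds.select(datetime(2020, 1, 15), datetime(2020, 2, 15))])
```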

2.
Cellular automata (CA) models can simulate complex urban systems through simple rules and have become important tools for studying the spatio-temporal evolution of urban land use. However, the multiple large-volume data layers, massive geospatial processing and complicated automatic-calibration algorithms in urban CA models demand a high level of computational capability. Unfortunately, the limited performance of sequential computation on a single computing unit (a central processing unit (CPU) or a graphics processing unit (GPU)) and the high cost of parallel design and programming make it difficult to build a high-performance urban CA model. Owing to its computational power and scalability, the vectorization paradigm has become increasingly important and has received wide attention for this kind of computational problem. This paper presents a high-performance CA model that uses vectorization and parallel computing technology for the computation-intensive and data-intensive geospatial processing in urban simulation. To transform the original algorithm into a vectorized one, we define the neighborhood set of the cell space and redesign the operations for neighborhood computation, transition-probability calculation and cell-state transition. The experiments undertaken in this study demonstrate that the vectorized algorithm greatly reduces the computation time, especially in a vector programming language environment, and that the algorithm can be parallelized as the data volume increases. The execution time of a 5-m-resolution simulation with a 3 × 3 neighborhood decreased from 38,220.43 s to 803.36 s with the vectorized algorithm, and was further shortened to 476.54 s by dividing the domain across four computing units. The experiments also indicated that the computational efficiency of the vectorized algorithm is closely related to the neighborhood size and configuration, as well as to the shape of the research domain. We conclude that the combination of vectorization and parallel computing technology provides scalable solutions that significantly improve the applicability of urban CA.
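A minimal NumPy sketch of the vectorization idea: the per-cell neighborhood loop is replaced by whole-array shifts, so neighborhood counting, transition probability and state transition each become a handful of array operations. The transition rule and parameters are illustrative, not the paper's calibrated model:

```python
import numpy as np

def moore_counts(state: np.ndarray) -> np.ndarray:
    """Developed-cell counts in the 3x3 Moore neighborhood, for all cells at once."""
    p = np.pad(state, 1)
    rows, cols = state.shape
    counts = np.zeros((rows, cols), dtype=np.int32)
    for di in range(3):
        for dj in range(3):
            if di == dj == 1:
                continue  # skip the centre cell
            counts += p[di:di + rows, dj:dj + cols]
    return counts

def step(state, suitability, rng):
    """One vectorized CA iteration: probability field, then state transition."""
    prob = suitability * moore_counts(state) / 8.0          # transition probability
    grow = (state == 0) & (rng.random(state.shape) < prob)  # stochastic development
    return state | grow.astype(state.dtype)

rng = np.random.default_rng(42)
state = (rng.random((512, 512)) < 0.01).astype(np.uint8)    # sparse initial seeds
suitability = rng.random((512, 512))
for _ in range(10):
    state = step(state, suitability, rng)
print("developed cells:", int(state.sum()))
```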

3.
Cellular automata (CA), a bottom-up modelling approach, can be used to simulate urban dynamics and land use changes effectively. Urban simulation usually involves large GIS data sets in terms of both the extent of the study area and the number of spatial factors, and computational capability becomes a bottleneck when implementing CA for simulating large regions. Parallel computing techniques can be applied to CA to solve this kind of hard computational problem. This paper demonstrates that the performance of large-scale urban simulation can be significantly improved by using parallel computing techniques. The proposed urban CA is implemented in a parallel framework that runs on a cluster of PCs. Because a large region usually consists of heterogeneous or polarized development patterns, this study proposes a line-scanning load-balancing method to reduce the waiting time between parallel processors. The proposed method has been tested in a fast-growing region, the Pearl River Delta. The experiments indicate that parallel computing techniques with load balancing can significantly improve the applicability of CA for simulating urban development in this large, complex region.
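One plausible reading of the line-scanning load-balance idea, as a hedged sketch: scan the rows of the study area, accumulate per-row workload (active cells), and cut the grid into contiguous row strips of roughly equal workload rather than equal height. Function and variable names are hypothetical:

```python
import numpy as np

def line_scan_partition(active: np.ndarray, n_procs: int):
    """Scan rows top-to-bottom and cut the grid into contiguous row strips
    whose per-strip workload (count of active cells) is roughly equal."""
    row_cost = active.sum(axis=1).astype(float)
    target = row_cost.sum() / n_procs
    cuts, acc = [0], 0.0
    for i, cost in enumerate(row_cost):
        acc += cost
        if acc >= target and len(cuts) < n_procs:
            cuts.append(i + 1)   # close the current strip here
            acc = 0.0
    cuts.append(active.shape[0])
    return list(zip(cuts[:-1], cuts[1:]))  # (start_row, end_row) per processor

rng = np.random.default_rng(0)
# Polarized development: most active cells concentrated in the top rows.
active = rng.random((1000, 1000)) < np.linspace(0.3, 0.01, 1000)[:, None]
for p, (r0, r1) in enumerate(line_scan_partition(active, 4)):
    print(f"proc {p}: rows {r0}-{r1}, workload {int(active[r0:r1].sum())}")
```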

4.
As an important spatiotemporal simulation approach and an effective tool for developing and examining spatial optimization strategies (e.g., land allocation and planning), geospatial cellular automata (CA) models often require multiple data layers and complicated algorithms in order to capture the complex dynamic processes of interest and the intricate relationships and interactions between those processes and their driving factors. Massive amounts of data may also be used in CA simulations, as high-resolution geospatial and non-spatial data are now widely available. Geospatial CA models can therefore be both computationally intensive and data intensive, demanding long computing times and vast memory space. Based on a hybrid parallelism that combines processes with distributed memory and threads with shared global memory, we developed a parallel geospatial CA model for urban growth simulation on a heterogeneous computer architecture composed of multiple central processing units (CPUs) and graphics processing units (GPUs). Experiments with datasets of California showed that the overall computing time for a 50-year simulation dropped from 13,647 seconds on a single CPU to 32 seconds using 64 GPU/CPU nodes. We conclude that hybrid parallelism of geospatial CA on emerging heterogeneous computer architectures provides scalable solutions for complex simulations and optimizations with massive amounts of data that were previously infeasible, and sometimes impossible, using individual computing approaches.
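A skeleton of the process-level half of such a hybrid scheme, assuming mpi4py is available; the per-block GPU kernel is represented by a NumPy placeholder, and this is a sketch of the general pattern, not the authors' implementation:

```python
# Run with e.g.: mpiexec -n 4 python hybrid_ca.py
from mpi4py import MPI
import numpy as np

def update_block(grid, r0, r1):
    """Stand-in for the per-block GPU kernel: new state for rows r0..r1."""
    p = np.pad(grid, 1)
    rows, cols = grid.shape
    counts = sum(p[di:di + rows, dj:dj + cols]
                 for di in range(3) for dj in range(3) if (di, dj) != (1, 1))
    return ((grid == 1) | (counts >= 3)).astype(np.uint8)[r0:r1]

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

grid = None
if rank == 0:
    grid = (np.random.default_rng(1).random((2048, 2048)) < 0.05).astype(np.uint8)
grid = comm.bcast(grid, root=0)          # distributed-memory processes share input

rows = grid.shape[0]
r0, r1 = rank * rows // size, (rank + 1) * rows // size
my_block = update_block(grid, r0, r1)    # on a GPU node this is the CUDA kernel

blocks = comm.gather(my_block, root=0)   # reassemble on the root process
if rank == 0:
    print("updated grid:", np.vstack(blocks).shape)
```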

5.
Polygon intersection is an important spatial data-handling process on which many spatial operations are based. However, it is computationally intensive because it involves the detection and calculation of intersections between polygons. We addressed this computational issue from two perspectives. First, we improved a method called boundary algebra filling to efficiently rasterize the input polygons; polygon intersections are subsequently detected within the cells of the raster. Owing to the raster data structure, this method offers reduced task dependence and improved performance. On this basis, we developed parallel strategies for the different procedures in terms of workload decomposition and task scheduling, so that the workload across different parallel processes can be balanced. The results suggest that our method effectively accelerates polygon intersection. When addressing datasets with 1,409,020 groups of overlapping polygons, our method reduced the total execution time from 987.82 to 53.66 s, achieving an optimal speedup ratio of 18.41 while consistently balancing the workloads. We also tested the effect of task scheduling on parallel efficiency, showing that it is effective in reducing the total runtime, especially for smaller numbers of processes. Finally, the good scalability of the method is demonstrated.
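Boundary algebra filling itself is more involved, but the underlying filtering idea can be sketched as follows: map each polygon onto coarse grid cells so that only polygons sharing a cell become intersection candidates, avoiding all-pairs testing. This hedged stand-in uses bounding boxes and hypothetical names:

```python
from collections import defaultdict
from itertools import combinations

def bbox(poly):
    xs, ys = zip(*poly)
    return min(xs), min(ys), max(xs), max(ys)

def candidate_pairs(polygons, cell=10.0):
    """Rasterize each polygon's bounding box onto a coarse grid; only polygons
    sharing a cell are tested further, cutting the all-pairs cost."""
    grid = defaultdict(set)
    for pid, poly in enumerate(polygons):
        x0, y0, x1, y1 = bbox(poly)
        for i in range(int(x0 // cell), int(x1 // cell) + 1):
            for j in range(int(y0 // cell), int(y1 // cell) + 1):
                grid[(i, j)].add(pid)
    pairs = set()
    for ids in grid.values():
        pairs.update(combinations(sorted(ids), 2))
    return pairs   # feed these candidates to an exact intersection test

polys = [[(0, 0), (8, 0), (8, 8)],
         [(5, 5), (12, 5), (12, 12)],
         [(40, 40), (45, 40), (45, 45)]]
print(candidate_pairs(polys))   # {(0, 1)} -- the distant polygon is filtered out
```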

6.
A general-purpose parallel raster processing programming library (pRPL) was developed and applied to speed up a commonly used cellular automaton model with known tractability limitations. The library is designed for geographic information scientists who have basic programming skills but little knowledge or experience of parallel computing and programming. pRPL provides generic support for raster processing, including local-scope, neighborhood-scope, regional-scope and global-scope algorithms, as long as they are parallelizable, and it also supports multi-layer algorithms. Besides the standard data-domain decomposition methods, pRPL provides a spatially adaptive quad-tree-based decomposition to produce more evenly distributed workloads among processors. Data parallelism and task parallelism are supported, with both static and dynamic load balancing. By grouping processors, pRPL also supports data-task hybrid parallelism, i.e., data parallelism within a processor group and task parallelism among processor groups. pSLEUTH, a parallel version of a well-known cellular automata model for simulating urban land-use change (SLEUTH), was developed to demonstrate full utilization of the advanced features of pRPL. Experiments with real-world data sets were conducted and the performance of pSLEUTH was measured. We conclude that pRPL not only greatly reduces the development complexity of implementing a parallel raster-processing algorithm but also greatly reduces the computing time of computationally intensive raster-processing algorithms, as demonstrated with pSLEUTH.
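The spatially adaptive quad-tree decomposition can be sketched as follows (a Python illustration with hypothetical names, not pRPL's actual C++ API): quadrants are split recursively until each block's workload falls under a threshold, so computational hotspots end up in smaller blocks and workloads even out:

```python
import numpy as np

def quadtree_decompose(workload, threshold, r0=0, c0=0, r1=None, c1=None):
    """Recursively split a workload raster into quadrants until each block's
    total workload falls below the threshold; returns (r0, c0, r1, c1) blocks."""
    if r1 is None:
        r1, c1 = workload.shape
    total = workload[r0:r1, c0:c1].sum()
    if total <= threshold or (r1 - r0 <= 1 and c1 - c0 <= 1):
        return [(r0, c0, r1, c1)]
    rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
    blocks = []
    for (a, b, c, d) in [(r0, c0, rm, cm), (r0, cm, rm, c1),
                         (rm, c0, r1, cm), (rm, cm, r1, c1)]:
        if a < c and b < d:
            blocks += quadtree_decompose(workload, threshold, a, b, c, d)
    return blocks

rng = np.random.default_rng(3)
workload = rng.random((256, 256))
workload[:64, :64] *= 20                       # a computational hotspot
blocks = quadtree_decompose(workload, workload.sum() / 16)
print(len(blocks), "blocks; the hotspot is split more finely than the rest")
```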

7.
High-performance computing is required for fast geoprocessing of geospatial big data. Using spatial domains to represent computational intensity (CIT) and decomposing the domain for parallelism are prominent strategies in the design of parallel geoprocessing applications. Traditional domain decomposition is limited in how it evaluates computational intensity, which often results in load imbalance and poor parallel performance. From the data-science perspective, machine learning from Artificial Intelligence (AI) shows promise for better CIT evaluation. This paper proposes a machine learning approach for predicting computational intensity, followed by an optimized domain decomposition that divides the spatial domain into balanced subdivisions based on the predicted CIT to achieve better parallel performance. The approach provides a reference framework for how various machine learning methods, including feature selection and model training, can be used to predict computational intensity and optimize parallel geoprocessing in different cases. Comparative experiments between this approach and traditional methods were performed on two cases: DEM generation from point clouds and spatial intersection of vector data. The results not only demonstrate the advantage of the approach but also provide hints on how traditional GIS computation can be improved by AI-based machine learning.
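A hedged sketch of the two stages, using scikit-learn as a stand-in (the paper's actual features and models may differ): a regressor is trained on per-tile features against measured processing times, and the predicted CIT then drives a greedy balanced assignment of tiles to workers:

```python
import heapq
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)

# Hypothetical training data: per-tile features (e.g. point count, density,
# extent) and the measured processing time from past runs.
X_train = rng.random((500, 3))
y_train = 2.0 * X_train[:, 0] + 0.5 * X_train[:, 1] ** 2 + rng.normal(0, 0.05, 500)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# Predict computational intensity (CIT) for the tiles of a new job, then
# assign tiles to workers greedily, always filling the least-loaded worker.
X_tiles = rng.random((64, 3))
cit = model.predict(X_tiles)
workers = [(0.0, w, []) for w in range(4)]
heapq.heapify(workers)
for tile in np.argsort(cit)[::-1]:            # largest predicted tiles first
    load, w, tiles = heapq.heappop(workers)
    tiles.append(int(tile))
    heapq.heappush(workers, (load + cit[tile], w, tiles))
for load, w, tiles in sorted(workers, key=lambda t: t[1]):
    print(f"worker {w}: {len(tiles)} tiles, predicted CIT {load:.2f}")
```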

8.
Numerous methods have been proposed for landslide probability zonation of the landscape by means of a Geographic Information System (GIS). Among the multivariate methods, i.e. those which simultaneously take into account all the factors contributing to instability, the Conditional Analysis method applied to a subdivision of the territory into Unique Condition Units is conceptually straightforward and particularly suited to a GIS. Working on the principle that future landslides are more likely to occur under the conditions that led to past instability, landslide susceptibility is defined by computing the landslide density for each combination of instability factors. The conceptual simplicity of this method does not, however, imply that it is simple to implement: it requires rather complex operations and a large number of GIS commands, and to achieve satisfactory results the procedure may have to be repeated several times, changing the factors or modifying the class subdivisions. To solve this problem, we created a shell program which, by combining shell commands, commands of the GIS Geographic Resources Analysis Support System (GRASS) and gawk commands, carries out the whole procedure automatically. This makes the construction of a landslide susceptibility map easy and fast, even for large areas and at high spatial resolution, as shown by applying the procedure to the Parma River basin in the Italian Northern Apennines.
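The core of the Conditional Analysis computation can be sketched in a few lines of NumPy (synthetic data and hypothetical factor names; the shell/GRASS/gawk program automates this over real rasters):

```python
import numpy as np

rng = np.random.default_rng(11)

# Two classified instability-factor rasters (e.g. lithology: 0-2, slope class: 0-3).
litho = rng.integers(0, 3, (200, 200))
slope = rng.integers(0, 4, (200, 200))
landslide = rng.random((200, 200)) < 0.02 * (slope + 1)  # more slides on steep slopes

# Each factor combination is a Unique Condition Unit (UCU).
ucu = litho * 4 + slope
susceptibility = np.zeros(ucu.shape)
for unit in np.unique(ucu):
    mask = ucu == unit
    # Landslide density within the unit = past-instability frequency for that
    # combination of factors, used directly as the susceptibility score.
    susceptibility[mask] = landslide[mask].mean()

print("susceptibility range:", susceptibility.min(), "-", susceptibility.max())
```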

9.
The demand for parallel geocomputation on raster data is constantly increasing with the growing volume of raster data and the complexity of geocomputational processing. The difficulty of parallel programming and the poor portability of parallel programs between different parallel computing platforms greatly limit the development and application of parallel raster-based geocomputation algorithms. A strategy that hides the parallel details from the developer of raster-based geocomputation algorithms provides a promising way towards solving this problem. However, existing parallel raster-based libraries do not solve the problem of poor portability. This paper presents such a strategy for overcoming poor portability, along with a set of parallel raster-based geocomputation operators (PaRGO) designed and implemented under it. The operators are compatible with three popular types of parallel computing platform: graphics processing units supported by the compute unified device architecture (CUDA), Beowulf clusters supported by the message passing interface (MPI), and symmetrical multiprocessing clusters supported by MPI and open multiprocessing (OpenMP), making the details of the parallel programming and the parallel hardware architecture transparent to users. By using PaRGO in a style similar to sequential coding, geocomputation developers can quickly develop parallel raster-based geocomputation algorithms compatible with all three platforms. Practical applications implementing two digital terrain analysis algorithms show the effectiveness of PaRGO.
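PaRGO is implemented over CUDA/MPI/OpenMP; the style of use it aims for, where the developer writes only a sequential-looking function and the operator hides decomposition and scheduling, can be imitated in Python as a rough sketch (all names hypothetical; a neighborhood-scope operator would additionally need halo exchange, which PaRGO also hides):

```python
import numpy as np
from multiprocessing import Pool

def _apply(args):
    strip, func = args
    return func(strip)

def local_operator(raster, func, n_procs=4):
    """Local-scope operator: decompose into row strips, apply the user's
    sequential-looking function in parallel, stitch the results back."""
    strips = np.array_split(raster, n_procs, axis=0)
    with Pool(n_procs) as pool:
        parts = pool.map(_apply, [(s, func) for s in strips])
    return np.vstack(parts)

def reclassify(block):
    """The algorithm developer writes only this sequential-looking code."""
    return (block > 0.5).astype(np.uint8)

if __name__ == "__main__":
    dem = np.random.default_rng(5).random((4000, 4000))
    print(local_operator(dem, reclassify).sum())
```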

10.
Pointing to powerful capabilities and technology trends, many scholars envisioned the consolidation of geographic information systems (GIS) into vital tools for disseminating spatial information. GIS are presently used to inform, advise and instruct users in several contexts, and to engage citizens in decision-making processes that can shape and sustain policy development. Interaction with these applications involves risk and uncertainty, which have repeatedly been identified as preconditions for the formation of trust perceptions and which drive a user's decision to rely on a system and act on the information it provides. Research studies have consistently demonstrated that trust-oriented interface design can facilitate the development of more trustworthy systems, mainly in e-commerce. Trust in the Web GIS context, despite its significance, has only relatively recently received attention. A set of human-computer interaction (HCI) user-based studies revealed Web GIS trustee attributes that influence non-experts' trust beliefs, and found that when these attributes are problematic or absent from the interface design, users form irrational trust perceptions, which amplifies risk and may pose dangers to the user. Those Web GIS trustee attributes are formulated here into a set of trust guidelines, which are then evaluated using the PE-Nuclear tool, a Web GIS application that informs the public about the site selection of a nuclear waste repository in the United Kingdom. Our preliminary results indicate that the proposed trust guidelines not only support the development of rational trust perceptions, protecting non-experts from inappropriate use of Web GIS technology, but also improve interaction with such applications on issues of public interest.

11.
Crime often clusters in space and time. Near-repeat patterns improve understanding of crime communicability and its space-time interactions. Near-repeat analysis requires extensive computing resources to assess the statistical significance of space-time interactions: a computationally intensive Monte Carlo simulation approach is used to evaluate the significance of the space-time patterns underlying near-repeat events, and currently available software for identifying near-repeat patterns does not scale to large crime datasets. In this paper, we show how parallel spatial programming can help to leverage spatio-temporal simulation-based analysis in large datasets. A parallel near-repeat calculator was developed, and a set of experiments was conducted to compare the new software with an existing implementation, assess the performance gain due to parallel computation, test the scalability of the software to large crime datasets and assess its utility for real-world crime data analysis. Our experimental results suggest that efficiently designed parallel algorithms that leverage high-performance computing, along with performance optimization techniques, can be used to develop software that scales to large datasets and provides solutions for computationally intensive, simulation-based statistical approaches in crime analysis.
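The Monte Carlo logic at the heart of near-repeat analysis, in a hedged NumPy sketch: count event pairs close in both space and time, then permute event times repeatedly to estimate how often chance alone produces as many pairs. The pair counting inside the loop is exactly the part that benefits from parallelization; thresholds and names are illustrative:

```python
import numpy as np

def near_repeat_count(dist_ok, t, t_max=14.0):
    """Count event pairs within the spatial band AND within t_max days."""
    dt = np.abs(t[:, None] - t[None, :])
    return int(np.triu(dist_ok & (dt <= t_max), k=1).sum())

rng = np.random.default_rng(13)
xy = rng.random((300, 2)) * 10_000              # event coordinates (m)
t = rng.random(300) * 365                       # event days

dx = xy[:, None, :] - xy[None, :, :]
dist_ok = np.hypot(dx[..., 0], dx[..., 1]) <= 500.0   # spatial band, metres
observed = near_repeat_count(dist_ok, t)

# Monte Carlo: shuffle event times to break any space-time interaction and
# see how often chance alone produces as many near-repeat pairs.
n_sim = 999
exceed = sum(near_repeat_count(dist_ok, rng.permutation(t)) >= observed
             for _ in range(n_sim))
p = (exceed + 1) / (n_sim + 1)
print(f"observed pairs = {observed}, pseudo p-value = {p:.3f}")
```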

12.
Location siting is an important part of service provision, with much potential to impact operational efficiency, safety, security and system reliability. A class of location models seeks to optimize coverage of demand for service that is continuously distributed across space. Decision-making and planning contexts include police/fire resource allocation for a community, siting cellular towers to support cell-phone signal transmission, locating emergency warning sirens to alert the public to severe weather and other dangers, and many others. When facilities can be sited anywhere in continuous space to provide coverage to an entire region, the problem is very computationally challenging to solve, because potential demand for service is everywhere and there are infinitely many potential facility sites to consider. This article develops a new parallel solution approach for this location coverage optimization problem through an iterative bounding scheme on multi-core architectures. The approach is applied to site emergency warning sirens in Dublin, Ohio, and fire stations in Elk Grove, California. Results demonstrate the effectiveness and efficiency of the proposed approach, enabling real-time analysis and planning. This work illustrates that the integration of cyberinfrastructure can significantly improve computational efficiency in solving challenging spatial optimization problems, fitting the themes of this special issue: cyberinfrastructure, GIS, and spatial optimization.
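Not the authors' iterative bounding scheme, but a minimal sketch of the general pattern of parallel coverage evaluation: demand is discretised, candidate sites are enumerated on a grid, and the coverage score of each candidate is evaluated by a pool of worker processes. All parameters are hypothetical:

```python
import numpy as np
from multiprocessing import Pool

RADIUS = 1_500.0                                   # siren audible range (m)

def covered(args):
    """Demand weight covered if a facility is placed at this candidate site."""
    site, demand, weights = args
    return weights[np.hypot(*(demand - site).T) <= RADIUS].sum()

if __name__ == "__main__":
    rng = np.random.default_rng(17)
    demand = rng.random((5_000, 2)) * 10_000       # discretised continuous demand
    weights = rng.random(5_000)
    xs, ys = np.meshgrid(np.linspace(0, 10_000, 25), np.linspace(0, 10_000, 25))
    candidates = np.column_stack([xs.ravel(), ys.ravel()])

    with Pool() as pool:                            # multi-core evaluation
        scores = pool.map(covered, [(c, demand, weights) for c in candidates])
    best = int(np.argmax(scores))
    print("best site:", candidates[best], "covers", round(scores[best], 1))
```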

13.
A GIS-based method for calculating flood-inundated areas over complex terrain
Using GIS technology to calculate the extent of flood-inundated areas has long been an active topic in disaster-assessment research. This paper presents a seed-spread (flood-fill) algorithm for computing inundated areas in two scenarios: flooding from a water source and sourceless flooding. The accuracy of the inundation calculation depends mainly on the quality of the spatial data, while the seed-spread algorithm and the detection resolution determine the efficiency of the whole model. Finally, the paper describes how the model was validated and implemented in an integrated water-resources management information system.
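A minimal sketch of the two scenarios over a toy DEM (the paper's seed-spread algorithm, detection-resolution handling and system integration are richer than this):

```python
from collections import deque
import numpy as np

def flood_sourceless(dem, level):
    """Sourceless flooding: every cell lower than the water level is inundated."""
    return dem < level

def flood_from_source(dem, level, seed):
    """Source flooding: seed-spread from the water source; only cells below the
    level that are 4-connected to the seed become inundated."""
    rows, cols = dem.shape
    flooded = np.zeros_like(dem, dtype=bool)
    queue = deque()
    if dem[seed] < level:
        flooded[seed] = True
        queue.append(seed)
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and not flooded[nr, nc] and dem[nr, nc] < level:
                flooded[nr, nc] = True
                queue.append((nr, nc))
    return flooded

dem = np.array([[1, 1, 5, 1],
                [1, 2, 5, 1],
                [1, 1, 5, 1]], dtype=float)
print(flood_sourceless(dem, 3.0).sum())           # 9 cells lie below the level
print(flood_from_source(dem, 3.0, (0, 0)).sum())  # 6: the ridge blocks the right side
```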

14.
Forecasting dust storms for large geographical areas at high resolution poses great challenges for scientific and computational research, and limitations of computing power and of the scalability of parallel systems preclude an immediate solution. This article reports our research on using adaptively coupled models to resolve these computational challenges and enable the computability of dust storm forecasting by dividing the large geographical domain into multiple subdomains based on the spatiotemporal distribution of the dust storm. A dust storm model (Eta-8bin) performs quick forecasting at low resolution (22 km) to identify potential hotspots with high dust concentration. A finer model, the non-hydrostatic mesoscale model (NMM-dust), then performs high-resolution (3 km) forecasting over the much smaller hotspots in parallel, reducing computational requirements and computing time. We also applied spatiotemporal principles to the mapping between computing resources and subdomains to optimize the parallel system and improve the performance of the high-resolution NMM-dust model. This research enabled the computability of high-resolution, large-area dust storm forecasting through the adaptively coupled execution of the two models, Eta-8bin and NMM-dust.
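The adaptive coupling can be sketched as follows, using scipy.ndimage to extract hotspot subdomains from the coarse run; the fine model is a placeholder, and the threshold and resolutions are illustrative:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(23)
coarse = rng.gamma(1.0, 0.5, (90, 120))          # 22-km-grid dust concentration
coarse[30:40, 50:70] += 8                        # an emerging dust storm

# Step 1: the quick low-resolution run flags hotspots above a threshold.
hot = coarse > 5.0
labels, n = ndimage.label(hot)
boxes = ndimage.find_objects(labels)

# Step 2: only the hotspot subdomains are handed to the expensive
# high-resolution model (a placeholder here), and they can run in parallel.
def fine_model(subdomain):
    return subdomain.repeat(7, axis=0).repeat(7, axis=1)   # ~22 km -> ~3 km

for k, box in enumerate(boxes, start=1):
    fine = fine_model(coarse[box])
    print(f"hotspot {k}: coarse {coarse[box].shape} -> fine {fine.shape}")
```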

15.
This work shares a time-sensitive framework for teaching GIS to educators of all levels and disciplines. Existing relationships with teachers enabled the addition of GIS content to professional development activities, with the amount of time devoted to GIS-related content varying according to the time available for interaction with the audience. Audiences included geography, history, social studies, science, agriculture, religion and math teachers. The framework was developed, tested and refined over a period of six years, during thirty-six trainings, with 580 educators. Use of this framework emphasizes that one size does not fit all in GIS education, and that GIS can work for any teacher, their content, their classroom and their time availability.

16.
《自然地理学》(Physical Geography), 2013, 34(5): 457-472
Evaluating the geo-environmental suitability of land for urban construction is an important step in the analysis of urban land-use potential. Using geo-environmental factors and the land-use status of Hangzhou, China, a back-propagation (BP) neural network model for evaluating the geo-environmental suitability of land for urban construction was established with a geographic information system (GIS) and techniques of grid, geospatial and BP neural network analysis. Four factor groups, comprising nine subfactors of geo-environmental features, were selected for the model: geomorphic type, slope, site soil type, stratum steadiness, Holocene saturated soft-soil depth, groundwater abundance, groundwater salinization, geologic hazard type and geologic hazard degree. With the support of the model, the geo-environmental suitability of Hangzhou land for urban construction was divided into four suitability zones: zone I, suitable for super-high-rise and high-rise buildings; zone II, suitable for multi-story buildings; zone III, suitable for low-rise buildings; and zone IV, not suitable for buildings. The results showed that a BP neural network can capture the complex non-linear relationships between the evaluation factors and the suitability level, and they will support scientific decision-making for urban-construction land planning, management and rational land use in Hangzhou.
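A hedged sketch of this kind of evaluation using scikit-learn's MLPClassifier as a stand-in for the paper's BP network, with synthetic data in place of the nine Hangzhou subfactors:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(29)

# Nine geo-environmental subfactors per evaluation grid cell (synthetic
# stand-ins for geomorphic type, slope, soil type, stratum steadiness,
# soft-soil depth, groundwater abundance, salinization, hazard type/degree).
X = rng.random((2_000, 9))
# Four suitability zones I-IV, here derived from a synthetic scoring rule.
y = np.digitize(X @ rng.random(9), [1.2, 2.2, 3.2])

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(12,), max_iter=2_000, random_state=0),
)
model.fit(X, y)
print("training accuracy:", round(model.score(X, y), 3))
```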

17.
Harris, J. R., Wilkinson, L., Heather, K., Fumerton, S., Bernier, M. A., Ayer, J., & Dahn, R. Natural Resources Research, 2001, 10(2): 91-124
A Geographic Information System (GIS) is used to prepare and process digital geoscience data in a variety of ways to produce gold prospectivity maps of the Swayze greenstone belt, Ontario, Canada. The data used to produce these maps include geologic, geochemical, geophysical and remotely sensed (Landsat) data. A number of modeling methods are used, grouped into data-driven (weights of evidence, logistic regression) and knowledge-driven (index and Boolean overlay) methods. The weights-of-evidence (WofE) technique compares the spatial association of known gold prospects with various indicators (evidence maps) of gold mineralization to derive a set of weights used to produce the final gold prospectivity map. Logistic regression derives statistical information from the evidence maps over each known gold prospect, and the coefficients derived from the regression analysis are used to weight each evidence map. The gold prospectivity map produced by the index overlay process uses a weighting scheme derived from input by the geologist, whereas the Boolean method uses equally weighted binary evidence maps. The resultant gold prospectivity maps differ somewhat in this study, because the data comprising the evidence maps were deliberately processed differently for each modeling method. Several areas of high gold potential, some of which coincide with known gold prospects, are evident on the prospectivity maps produced by all the modeling methods. The majority occur in mafic rocks within high-strain zones, which is typical of many Archean greenstone belts.
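The core weights-of-evidence computation for a single binary evidence map can be sketched in NumPy (synthetic data; a real workflow repeats this per evidence map and combines the weights into a posterior prospectivity map):

```python
import numpy as np

def wofe_weights(evidence: np.ndarray, deposits: np.ndarray):
    """Weights of evidence for one binary evidence map: W+ where the evidence
    is present, W- where it is absent, relative to known deposits."""
    d, nd = deposits, ~deposits
    p_b_d = (evidence & d).sum() / d.sum()        # P(B | D)
    p_b_nd = (evidence & nd).sum() / nd.sum()     # P(B | ~D)
    w_plus = np.log(p_b_d / p_b_nd)
    w_minus = np.log((1 - p_b_d) / (1 - p_b_nd))
    return w_plus, w_minus

rng = np.random.default_rng(31)
evidence = rng.random(100_000) < 0.2              # e.g. "within high-strain zone"
# Deposits are made 4x more likely where the evidence is present.
deposits = rng.random(100_000) < np.where(evidence, 0.004, 0.001)

w_plus, w_minus = wofe_weights(evidence, deposits)
print(f"W+ = {w_plus:.2f}, W- = {w_minus:.2f}")   # positive W+ means association
# The contrast C = W+ - W- measures the strength of the spatial association.
```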
