Similar Documents
20 similar documents were retrieved for this query.
1.
Investigators in many fields are analyzing temporal change in spatial data. Such analyses are typically conducted by comparing the value of some metric (e.g., area, contagion, or diversity indices) measured at time T1 with the value of the same metric measured at time T2. These comparisons typically include the use of simple interpolation models to estimate the value of the metric of interest at points in time between observations, followed by applications of differential calculus to investigate the rates at which the metric is changing. Unfortunately, these techniques treat the values of the metrics being analyzed as if they were observed values, when in fact the metrics are derived from more fundamental spatial data. The consequence of treating metrics as observed values is a significant reduction in the degrees of freedom in spatial change over time. This results in an oversimplified view of spatio-temporal change. A more accurate view can be produced by (1) applying temporal interpolation models to observed spatial data rather than derived spatial metrics; (2) expanding the metric of interest's computational equation by replacing the terms relating to the observed spatial data with their temporal interpolation equations; and (3) differentiating the expanded computational equation. This alternative, three-step spatio-temporal analysis technique will be described and justified. The alternative technique will be compared to the conventional approach using common metrics and a sample data set.
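The expand-and-differentiate idea lends itself to symbolic computation. The sketch below is not the authors' code: the two-class landscape, the observation times, the linear interpolation model, and the choice of Shannon diversity as the metric are all illustrative assumptions. It follows the three steps with SymPy — interpolate the observed class areas, substitute the interpolation expressions into the metric's equation, and differentiate the result.

```python
import sympy as sp

t = sp.symbols('t')

# Step 1: temporal interpolation of the *observed* spatial data (class areas),
# here simple linear interpolation between observations at T1 = 0 and T2 = 10.
a1 = 30 + (70 - 30) / 10 * t   # class 1 area: 30 ha at T1, 70 ha at T2 (hypothetical)
a2 = 60 + (50 - 60) / 10 * t   # class 2 area: 60 ha at T1, 50 ha at T2 (hypothetical)

# Step 2: expand the metric's computational equation (Shannon diversity) by
# replacing the observed-data terms with their interpolation expressions.
total = a1 + a2
p1, p2 = a1 / total, a2 / total
shannon = -(p1 * sp.log(p1) + p2 * sp.log(p2))

# Step 3: differentiate the expanded equation to obtain the metric's
# instantaneous rate of change at any time between the observations.
dH_dt = sp.diff(shannon, t)
print(sp.simplify(dH_dt))
print(float(dH_dt.subs(t, 5)))   # rate of change at the midpoint
```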

2.
ABSTRACT

Light detection and ranging (LiDAR) data are essential for scientific discovery in fields such as the Earth and ecological sciences, environmental applications, and natural disaster response. While collecting LiDAR data over large areas is quite possible, the subsequent processing steps typically involve large computational demands. Efficiently storing, managing, and processing LiDAR data are the prerequisite steps for enabling these LiDAR-based applications. However, handling LiDAR data poses grand geoprocessing challenges due to data and computational intensity. To tackle such challenges, we developed a general-purpose scalable framework coupled with a sophisticated data decomposition and parallelization strategy to efficiently handle ‘big’ LiDAR data collections. The contributions of this research are (1) a tile-based spatial index to manage big LiDAR data in the scalable and fault-tolerant Hadoop distributed file system, (2) two spatial decomposition techniques to enable efficient parallelization of different types of LiDAR processing tasks, and (3) the coupling of existing LiDAR processing tools with Hadoop, so that a variety of LiDAR data processing tasks can be conducted in parallel in a highly scalable distributed computing environment using an online geoprocessing application. A proof-of-concept prototype is presented here to demonstrate the feasibility, performance, and scalability of the proposed framework.
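As a rough illustration of the tile-based indexing idea (not the paper's implementation: the tile size, key format, and in-memory dictionary are assumptions, whereas the paper stores tiles in the Hadoop distributed file system), each point is mapped to a tile key derived from its coordinates so that a bounding-box query touches only the overlapping tiles.

```python
from collections import defaultdict

TILE_SIZE = 500.0  # metres per tile edge (assumed)

def tile_key(x, y, tile_size=TILE_SIZE):
    """Return a (col, row) tile identifier for a point."""
    return (int(x // tile_size), int(y // tile_size))

def build_tile_index(points):
    """Group (x, y, z) points by tile key - the core of a tile-based index."""
    index = defaultdict(list)
    for x, y, z in points:
        index[tile_key(x, y)].append((x, y, z))
    return index

def query_bbox(index, xmin, ymin, xmax, ymax):
    """Fetch only the tiles overlapping a bounding box, then filter the points."""
    c0, r0 = tile_key(xmin, ymin)
    c1, r1 = tile_key(xmax, ymax)
    hits = []
    for c in range(c0, c1 + 1):
        for r in range(r0, r1 + 1):
            for x, y, z in index.get((c, r), []):
                if xmin <= x <= xmax and ymin <= y <= ymax:
                    hits.append((x, y, z))
    return hits

pts = [(120.0, 80.0, 5.2), (900.0, 40.0, 7.1), (130.0, 95.0, 4.8)]
idx = build_tile_index(pts)
print(query_bbox(idx, 100, 50, 200, 100))
```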

3.
ABSTRACT

As an effective tool for simulating spatiotemporal urban processes in the real world, urban cellular automata (CA) models involve multiple data layers and complicated calibration algorithms, which make computational capability a bottleneck. Numerous approaches and techniques have been applied to the development of high-performance urban CA models, among which the integration of vectorization and parallel computing has broad application prospects due to its powerful computational ability and scalability. Unfortunately, this hybrid algorithm becomes inefficient when the axis-aligned bounding box (AABB) of the study area contains many unavailable cells. This paper presents a minimum-volume oriented bounding box (OBB) strategy to solve this problem. Specifically, geometric transformation (i.e. translation and rotation) is applied to find the OBB of the study area before implementing the hybrid algorithm, and a set of functions is established to describe the spatial coordinate relationship between the AABB and OBB layers. Experiments conducted in this study demonstrate that the OBB strategy can further reduce the computational time of urban CA models after vectorization and parallelism. For example, when the cell size is 15 m and the neighborhood size is 3 × 3, an approximately 10-fold speedup in computational time results from vectorization in the MATLAB environment, followed by an 18-fold speedup after implementing parallel computing on a quad-core processor and, finally, a 25-fold speedup from further applying the OBB strategy. We thus argue that the OBB strategy can make the integration of vectorization and parallel computing more efficient and may provide scalable solutions for significantly improving the applicability of urban CA models.
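A rough sketch of the geometric idea (not the paper's MATLAB implementation): rotate the study-area cell coordinates and keep the rotation whose axis-aligned box has minimum area, so that fewer unavailable cells are carried through the vectorized computation. The 1-degree search step and the synthetic tilted study area are assumptions; an exact minimum-area OBB would use rotating calipers on the convex hull.

```python
import numpy as np

def min_area_obb(xy, step_deg=1.0):
    """Return (best_angle_rad, rotated_coords, bbox_area) for a point array xy of shape (N, 2)."""
    best = (None, None, np.inf)
    centred = xy - xy.mean(axis=0)
    for deg in np.arange(0.0, 90.0, step_deg):
        theta = np.radians(deg)
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        rotated = centred @ rot.T
        extent = rotated.max(axis=0) - rotated.min(axis=0)
        area = extent[0] * extent[1]
        if area < best[2]:
            best = (theta, rotated, area)
    return best

# Example: an elongated, diagonally oriented study area (synthetic).
rng = np.random.default_rng(0)
along = rng.uniform(0, 100, 1000)
across = rng.uniform(0, 10, 1000)
pts = np.column_stack([along + across, along - across])  # tilted strip of cells
angle, rotated, area = min_area_obb(pts)
aabb_area = np.prod(pts.max(axis=0) - pts.min(axis=0))
print(f"AABB area: {aabb_area:.0f}, OBB area: {area:.0f}, rotation: {np.degrees(angle):.0f} deg")
```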

4.
The discovery, interpretation, and presentation of multivariate spatial patterns are important for scientific understanding of complex geographic problems. This research integrates computational, visual, and cartographic methods to detect and visualize multivariate spatial patterns. The integrated approach is able to: (1) perform multivariate analysis, dimensionality reduction, and data reduction (summarizing a large number of input data items in a moderate number of clusters) with the Self-Organizing Map (SOM); (2) encode the SOM result with a systematically designed color scheme; (3) visualize the multivariate patterns with a modified Parallel Coordinate Plot (PCP) display and a geographic map (GeoMap); and (4) support human interactions to explore and examine patterns. The research shows that such "mixed-initiative" methods (computational and visual) can mitigate each other's weaknesses and collaboratively discover complex patterns in large geographic datasets in an effective and efficient way.
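A minimal, self-contained sketch of the SOM step only (the clustering/data-reduction part); the designed colour scheme, the PCP display, and the GeoMap linkage from the paper are omitted. The grid size, learning-rate schedule, and synthetic multivariate data are assumptions.

```python
import numpy as np

def train_som(data, rows=5, cols=5, n_iter=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Train a rows x cols SOM on data (n_samples, n_features); return codebook (rows, cols, n_features)."""
    rng = np.random.default_rng(seed)
    n, d = data.shape
    codebook = rng.uniform(data.min(0), data.max(0), size=(rows, cols, d))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    for t in range(n_iter):
        frac = t / n_iter
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
        x = data[rng.integers(n)]
        # Find the best-matching unit (BMU) for the sampled observation.
        dists = np.linalg.norm(codebook - x, axis=-1)
        bmu = np.unravel_index(dists.argmin(), dists.shape)
        # Gaussian neighbourhood update around the BMU.
        g = np.exp(-np.sum((grid - np.array(bmu)) ** 2, axis=-1) / (2 * sigma ** 2))
        codebook += lr * g[..., None] * (x - codebook)
    return codebook

def assign_clusters(data, codebook):
    """Map each observation to its best-matching SOM node (the data-reduction step)."""
    flat = codebook.reshape(-1, codebook.shape[-1])
    return np.argmin(((data[:, None, :] - flat[None, :, :]) ** 2).sum(-1), axis=1)

rng = np.random.default_rng(1)
multivariate = rng.normal(size=(500, 6))          # 500 areal units, 6 attributes (synthetic)
som = train_som(multivariate)
print(np.bincount(assign_clusters(multivariate, som), minlength=25))
```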

5.
Because eigenvector spatial filtering (ESF) provides a relatively simple and successful method to account for spatial autocorrelation in regression, it has increasingly been adopted in various fields. Although ESF can be easily implemented with a stepwise procedure, such as traditional stepwise regression, its computational efficiency can be further improved. The two major computational components in ESF are extracting eigenvectors and identifying a subset of these eigenvectors. This paper focuses on how a subset of eigenvectors can be efficiently and effectively identified. A simulation experiment summarized in this paper shows that, with a well-prepared candidate eigenvector set, ESF can effectively account for spatial autocorrelation and achieve computational efficiency. This paper further proposes a nonlinear equation for constructing an ideal candidate eigenvector set based on the results of the simulation experiment.
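A minimal sketch of the eigenvector-extraction component (the toy rook-adjacency lattice and the candidate-set threshold are assumptions, and the stepwise selection step is omitted): the eigenvectors of the doubly centred weights matrix MCM are the map patterns whose eigenvalues are proportional to their Moran's I values.

```python
import numpy as np

def rook_weights(nrow, ncol):
    """Binary rook-contiguity matrix for an nrow x ncol lattice."""
    n = nrow * ncol
    C = np.zeros((n, n))
    for r in range(nrow):
        for c in range(ncol):
            i = r * ncol + c
            for dr, dc in ((1, 0), (0, 1)):
                rr, cc = r + dr, c + dc
                if rr < nrow and cc < ncol:
                    j = rr * ncol + cc
                    C[i, j] = C[j, i] = 1.0
    return C

C = rook_weights(10, 10)
n = C.shape[0]
M = np.eye(n) - np.ones((n, n)) / n          # centring projector
MCM = M @ C @ M
eigvals, eigvecs = np.linalg.eigh(MCM)       # MCM is symmetric, so eigh applies
order = np.argsort(eigvals)[::-1]            # most positively autocorrelated patterns first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# One common way to form the candidate set: keep eigenvectors whose eigenvalue
# exceeds some fraction of the largest one (the 0.25 threshold is an assumption).
candidates = eigvecs[:, eigvals > 0.25 * eigvals[0]]
print(candidates.shape)
```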

6.
7.
Learning knowledge graph (KG) embeddings is an emerging technique for a variety of downstream tasks such as summarization, link prediction, information retrieval, and question answering. However, most existing KG embedding models neglect space and, therefore, do not perform well when applied to (geo)spatial data and tasks. Most models that do consider space primarily rely on some notions of distance. These models suffer from higher computational complexity during training while still losing information beyond the relative distance between entities. In this work, we propose a location‐aware KG embedding model called SE‐KGE. It directly encodes spatial information such as point coordinates or bounding boxes of geographic entities into the KG embedding space. The resulting model is capable of handling different types of spatial reasoning. We also construct a geographic knowledge graph as well as a set of geographic query–answer pairs called DBGeo to evaluate the performance of SE‐KGE in comparison to multiple baselines. Evaluation results show that SE‐KGE outperforms these baselines on the DBGeo data set for the geographic logic query answering task. This demonstrates the effectiveness of our spatially‐explicit model and the importance of considering the scale of different geographic entities. Finally, we introduce a novel downstream task called spatial semantic lifting which links an arbitrary location in the study area to entities in the KG via some relations. Evaluation on DBGeo shows that our model outperforms the baseline by a substantial margin.
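As a rough illustration of what it means to encode point coordinates into an embedding space (this is not SE-KGE; the multi-scale sinusoidal scheme, wavelength range, and example locations are assumptions), nearby locations end up with more similar feature vectors than distant ones.

```python
import numpy as np

def encode_location(lon, lat, n_scales=8, min_wavelength=0.01, max_wavelength=360.0):
    """Map (lon, lat) to a fixed-length vector of sin/cos features at multiple spatial scales."""
    wavelengths = np.geomspace(min_wavelength, max_wavelength, n_scales)
    feats = []
    for coord in (lon, lat):
        for w in wavelengths:
            feats.extend((np.sin(2 * np.pi * coord / w), np.cos(2 * np.pi * coord / w)))
    return np.array(feats)

# Nearby places tend to get more similar encodings than distant ones.
emb_a = encode_location(-122.41, 37.77)   # San Francisco
emb_b = encode_location(-122.27, 37.87)   # Berkeley
emb_c = encode_location(2.35, 48.85)      # Paris
cos = lambda u, v: u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
print(round(cos(emb_a, emb_b), 3), round(cos(emb_a, emb_c), 3))
```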

8.
In this paper, we extend the Bayesian methodology introduced by Beamonte et al. (Stat Modelling 8:285–311, 2008) for the estimation and comparison of spatio-temporal autoregressive models (STAR) with neighbourhood effects, providing a more general treatment that uses larger and denser nets for the number of spatial and temporal influential neighbours and continuous distributions for their smoothing weights. This new treatment also reduces the computational time and the RAM requirements of the estimation algorithm in Beamonte et al. (Stat Modelling 8:285–311, 2008). The procedure is illustrated by an application to the Zaragoza (Spain) real estate market, improving the goodness of fit and the out-of-sample behaviour of the model thanks to a more flexible estimation of the neighbourhood parameters.

9.
One of the fundamental issues of geographical information science is to design GIS interfaces and functionalities in a way that is easy to understand, teach, and use. Unfortunately, current geographical information systems (including ArcGIS) remain very difficult to use as spatial analysis tools, because they organize and expose functionalities according to GIS data structures and processing algorithms. As a result, GIS interfaces are conceptually confusing, cognitively complex, and semantically disconnected from the way humans reason about spatial analytical activities. In this article, we propose an approach that structures GIS analytical functions based on the notion of "analytical intent". We describe an experiment that replaces the ArcGIS desktop interface with a conversational interface, to enable mixed‐initiative user‐system interactions at the level of analytical intentions. We initially focus on the subset of GIS functions that are relevant to "finding what's inside" as described by Mitchell, but the general principles apply to other types of spatial analysis. This work demonstrates the feasibility of delegating some spatial thinking tasks to computational agents, and also raises future research questions that are key to building a better theory of spatial thinking with GIS.

10.
Land cover products based on remotely sensed data are commonly investigated in terms of landscape composition and configuration, i.e. landscape pattern. Traditional landscape pattern indicators summarize an aspect of landscape pattern over the full study area. Increasingly, the advantages of representing the scale-specific spatial variation of landscape patterns as continuous surfaces are being recognized. However, technical and computational barriers hinder the uptake of this approach. This article reduces such barriers by introducing a computational framework for moving window analysis that separates the tasks of tallying pixels, patches and edges as a window moves over the map from the internal logic of landscape indicators. The framework is applied to data covering the UK and Ireland at 250 m resolution, evaluating a variety of indicators including mean patch size, edge density and Shannon diversity at window sizes ranging from 2.5 km to 80 km. The required computation time is in the order of seconds to minutes on a regular personal computer. The framework supports rapid development of indicators, requiring little coding. The computational efficiency means that the methods can be integrated in iterative computational tasks such as multi-scale analysis, optimization, sensitivity analysis and simulation modelling.
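A minimal sketch of the separation the article describes (the toy land-cover map and window handling are assumptions, and unlike the article's framework this sketch recomputes tallies per window rather than updating them incrementally): one routine tallies class frequencies inside the window, and each landscape indicator is a small function of those tallies.

```python
import numpy as np

def window_class_counts(landcover, row, col, half, n_classes):
    """Tally class frequencies inside a (2*half+1)-sized window centred on (row, col)."""
    r0, r1 = max(0, row - half), min(landcover.shape[0], row + half + 1)
    c0, c1 = max(0, col - half), min(landcover.shape[1], col + half + 1)
    return np.bincount(landcover[r0:r1, c0:c1].ravel(), minlength=n_classes)

def shannon_diversity(counts):
    """Indicator logic only: Shannon diversity from class counts."""
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

def moving_window(landcover, half, indicator, n_classes):
    """Tallying loop: produce a continuous indicator surface over the map."""
    out = np.zeros(landcover.shape)
    for r in range(landcover.shape[0]):
        for c in range(landcover.shape[1]):
            out[r, c] = indicator(window_class_counts(landcover, r, c, half, n_classes))
    return out

rng = np.random.default_rng(0)
lc = rng.integers(0, 4, size=(60, 60))            # toy land-cover map with 4 classes
surface = moving_window(lc, half=5, indicator=shannon_diversity, n_classes=4)
print(surface.shape, surface.min().round(2), surface.max().round(2))
```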

11.
Most existing spatial analytic techniques use a simplified, point-based representation of spatial objects. This facilitates tractability of the standard computational procedures for many spatial measures, for example, distances between spatial objects. The increasing spatial data handling and manipulation capabilities of geographic information systems (GIS), however, allow a re-examination of fundamental representations and measures in spatial analysis. This article develops exact computational procedures for calculating the average, minimum, and maximum distances between pairings of the three geometric primitives (points, lines, and polygons) when these objects are stored in the vector GIS data model. These procedures are "exact" in the sense that they are completely accurate, subject to the database representation. This article also provides example results from average distance calculations for a generic spatial database.
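A small illustration of exact distance computation between two polygon primitives (not the article's derivations; the example rectangles are assumptions): the minimum distance comes from Shapely's exact geometric distance, and because Euclidean distance is a convex function, the maximum distance between two polygons is attained at a pair of vertices, so a vertex-pair search is exact. The article's exact average distance requires analytical integration over the geometries and is not reproduced here.

```python
import itertools
import numpy as np
from shapely.geometry import Polygon

a = Polygon([(0, 0), (4, 0), (4, 3), (0, 3)])
b = Polygon([(7, 1), (10, 1), (10, 5), (7, 5)])

min_dist = a.distance(b)   # exact minimum distance between the two polygons

# Maximum distance: the distance function is convex, so its maximum over two
# polygons is realised at a pair of vertices.
max_dist = max(
    np.hypot(px - qx, py - qy)
    for (px, py), (qx, qy) in itertools.product(a.exterior.coords, b.exterior.coords)
)
print(round(min_dist, 3), round(max_dist, 3))
```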

12.
This paper presents a study on the modeling of fuzzy topological relations between uncertain objects in Geographic Information Systems (GIS). Based on the recently developed concept of computational fuzzy topological space, topological relations between simple fuzzy spatial objects are modeled. The fuzzy spatial objects covered here are the simple fuzzy region, the simple fuzzy line segment and the fuzzy point. To compute the topological relations between these simple spatial objects, intersection concepts and integration methods are applied, and a computational 9-intersection model is proposed and developed. There are different types of intersection, and different integration methods are proposed for computation in the different cases. For example, the surface integration method is applied to the fuzzy region-to-fuzzy region relation, while the line integration method is used for the fuzzy line segment-to-fuzzy line segment relation. Moreover, this study finds that there are (a) sixteen topological relations between a simple fuzzy region and a simple fuzzy line segment; (b) forty-six topological relations between simple fuzzy line segments; (c) three topological relations between a simple fuzzy region and a fuzzy point; and (d) three topological relations between a simple fuzzy line segment and a fuzzy point.
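For orientation, the crisp analogue of this model is the familiar 9-intersection matrix, which records how the interior, boundary, and exterior of one object intersect those of another; the fuzzy model in the paper replaces these binary tests with surface and line integrals of membership functions. A minimal crisp illustration using Shapely's DE-9IM `relate` (the example geometries are assumptions):

```python
from shapely.geometry import Polygon, LineString, Point

region = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
line = LineString([(2, 2), (6, 2)])     # starts inside the region and exits it
point = Point(2, 2)

# DE-9IM strings: rows correspond to the interior/boundary/exterior of the first
# geometry, columns to those of the second; each character encodes the dimension
# of the corresponding intersection.
print(region.relate(line))    # region vs line-segment relation
print(region.relate(point))   # region vs point relation
print(line.relate(point))     # line-segment vs point relation
```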

13.
Geographic representation in spatial analysis
Spatial analysis mostly developed in an era when data was scarce and computational power was expensive. Consequently, traditional spatial analysis greatly simplifies its representations of geography. The rise of geographic information science (GISci) and the changing nature of scientific questions at the end of the 20th century suggest a comprehensive re-examination of geographic representation in spatial analysis. This paper reviews the potential for improved representations of geography in spatial analysis. Existing tools in spatial analysis and new tools available from GISci have tremendous potential for bringing more sophisticated representations of geography to the forefront of spatial analysis theory and application.

14.
Adaptive zoning is a recently introduced method for improving computer modeling of spatial interactions and movements in the transport network. Unlike traditional zoning, where geographic locations are defined by one single universal plan of discrete land parcels or ‘zones’ for the study area, adaptive zoning establishes a compendium of different zone plans, each of which is applicable to one journey origin or destination only. These adaptive zone plans are structured to represent strong spatial interactions in proportionately more detail than weaker ones. Recent articles have shown that adaptive zoning improves, by a large margin, the scalability of models of spatial interaction and road traffic assignment. This article confronts the method of adaptive zoning with an application of the scale and complexity for which it was intended, namely a mode choice modeling application that requires both a large study area and a fine‐grained zone system. Our hypothesis is that adaptive zoning can significantly improve the accuracy of mode choice modeling because of its enhanced sensitivity to the geographic patterns and scales of spatial interaction. We test the hypothesis by investigating the performance of three alternative models: (1) a spatially highly detailed model, as detailed as the available data permit, which requires a computational load that is generally out of reach for the rapid turnaround of policy studies; (2) a mode choice model for the same area that reduces the computational load by 90% by using a traditional zone system consisting of fewer zones; and (3) a mode choice model that also reduces the computational load by 90%, but is based on adaptive zoning instead. The tests are carried out on a case study that uses the dataset from the London Area Transport Survey. Using the first model as a benchmark, it is found that for a given computational load, the model based on adaptive zoning contains about twice the amount of information of the traditional model, and model parameters estimated on adaptive zoning principles are more accurate by a factor of six to eight. The findings suggest that adaptive zoning has significant potential for enhancing the accuracy of mode choice modeling at the city or city‐region scale.

15.
ABSTRACT

Allergic rhinitis (hay fever) resulting from seasonal pollen affects 15–30% of the population in the United States, and can exacerbate several related conditions, including asthma, atopic eczema, and allergic conjunctivitis. Timely monitoring, accurate prediction, and visualization of pollen levels are critical for public health prevention purposes, such as limiting outdoor exposure or physical activity. The low density of pollen detecting stations and the complex movement of pollen represent a challenge for accurate prediction and modeling. In this paper, we reconstruct the dynamics of pollen variation across the Eastern United States for 2016 using space–time interpolation. Pollen levels were extracted according to a stratified spatial sampling design, augmented by additional samples in densely populated areas. These measurements were then used to estimate the space–time cross-correlation, inferring optimal spatial and temporal ranges to calibrate the space–time interpolation. Given the computational requirements of the interpolation algorithm, we implement a spatiotemporal domain decomposition algorithm and use parallel computing to reduce the computational burden. We visualize our results in a 3D environment to identify the seasonal dynamics of pollen levels. Our approach is also portable to the analysis of other large space–time explicit datasets, such as air pollution, ash clouds, and precipitation.
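A rough sketch of the two computational ideas together (not the paper's algorithm: a simple space-time inverse-distance interpolator stands in for the calibrated space-time interpolation, and the tile split, anisotropy ratio, and synthetic pollen observations are assumptions): the prediction grid is decomposed into spatial tiles and the tiles are interpolated in parallel processes.

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def st_idw(targets, obs_xyt, obs_val, time_scale=5.0, power=2.0):
    """Space-time IDW: temporal separation is converted to an equivalent spatial distance."""
    out = np.empty(len(targets))
    for i, (x, y, t) in enumerate(targets):
        d2 = (obs_xyt[:, 0] - x) ** 2 + (obs_xyt[:, 1] - y) ** 2 \
             + (time_scale * (obs_xyt[:, 2] - t)) ** 2
        w = 1.0 / np.maximum(d2, 1e-12) ** (power / 2)
        out[i] = np.sum(w * obs_val) / np.sum(w)
    return out

def worker(args):
    """Interpolate one spatial tile of the prediction grid."""
    tile, obs_xyt, obs_val = args
    return st_idw(tile, obs_xyt, obs_val)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    obs_xyt = rng.uniform(0, 100, size=(2000, 3))                       # x, y, rescaled day-of-year
    obs_val = np.sin(obs_xyt[:, 2] / 15) + rng.normal(0, 0.1, 2000)     # synthetic pollen levels
    gx, gy = np.meshgrid(np.linspace(0, 100, 60), np.linspace(0, 100, 60))
    targets = np.column_stack([gx.ravel(), gy.ravel(), np.full(gx.size, 50.0)])
    tiles = np.array_split(targets, 8)                                  # spatial domain decomposition
    with ProcessPoolExecutor() as pool:
        parts = list(pool.map(worker, [(tile, obs_xyt, obs_val) for tile in tiles]))
    surface = np.concatenate(parts).reshape(gx.shape)
    print(surface.shape, round(surface.mean(), 3))
```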

16.
In recent years, it has been widely agreed that spatial features derived from textural, structural, and object-based methods are important information sources to complement spectral properties for accurate urban classification of high-resolution imagery. However, the spatial features typically depend on a series of parameters, such as scales, directions, and statistical measures, leading to a high-dimensional feature space. The high-dimensional space is almost impractical to deal with, considering the huge storage and computational cost of processing high-resolution images. To this end, we propose a novel multi-index learning (MIL) method, where a set of low-dimensional information indices is used to represent the complex geospatial scenes in high-resolution images. Specifically, two categories of indices are proposed in the study: (1) Primitive indices (PI): high-resolution urban scenes are represented using a group of primitives (e.g., building/shadow/vegetation) that are calculated automatically and rapidly; (2) Variation indices (VI): a set of spectral and spatial variation indices is proposed based on the 3D wavelet transformation in order to describe the local variation in the joint spectral-spatial domains. In this way, urban landscapes can be decomposed into a set of low-dimensional and semantic indices replacing the high-dimensional but low-level features (e.g., textures). The information indices are then learned via multi-kernel support vector machines. The proposed MIL method is evaluated using various high-resolution images including GeoEye-1, QuickBird, WorldView-2, and ZY-3, as well as through an elaborate comparison to state-of-the-art image classification algorithms such as object-based analysis and spectral-spatial approaches based on textural and morphological features. It is revealed that the MIL method is able to achieve promising results with a low-dimensional feature space and provides a practical strategy for processing large-scale high-resolution images.
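A small sketch of the multiple-kernel learning step only (a stand-in, not the paper's formulation): one RBF kernel per index group, combined as a weighted sum and passed to an SVM with a precomputed kernel. The synthetic "indices", kernel widths, weights, and train/test split are assumptions.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 300
primitive_idx = rng.normal(size=(n, 3))            # stand-in for building/shadow/vegetation indices
variation_idx = rng.normal(size=(n, 4))            # stand-in for spectral/spatial variation indices
y = (primitive_idx[:, 0] + 0.5 * variation_idx[:, 1] > 0).astype(int)

train, test = np.arange(0, 200), np.arange(200, n)

def combined_kernel(A_groups, B_groups, gammas=(0.5, 0.2), weights=(0.6, 0.4)):
    """Weighted sum of per-group RBF kernels between two sets of samples."""
    K = np.zeros((A_groups[0].shape[0], B_groups[0].shape[0]))
    for a, b, g, w in zip(A_groups, B_groups, gammas, weights):
        K += w * rbf_kernel(a, b, gamma=g)
    return K

K_train = combined_kernel([primitive_idx[train], variation_idx[train]],
                          [primitive_idx[train], variation_idx[train]])
K_test = combined_kernel([primitive_idx[test], variation_idx[test]],
                         [primitive_idx[train], variation_idx[train]])

clf = SVC(kernel="precomputed").fit(K_train, y[train])
print("test accuracy:", round(clf.score(K_test, y[test]), 3))
```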

17.
Markov Random Fields, implemented for the analysis of remote sensing images, capture the natural spatial dependence between band wavelengths taken at each pixel through a suitable adjacency relationship between pixels, which must be defined a priori. In most cases several adjacency definitions seem viable, and a model selection problem arises. A BIC-penalized Pseudo-Likelihood criterion is suggested, which combines good distributional properties and computational feasibility for the analysis of high spatial resolution hyperspectral images. Its performance is compared with that of the BIC-penalized Likelihood criterion for detecting spatial structures in a high spatial resolution hyperspectral image of the Lamar area in Yellowstone National Park. Received: 9 March 2001 / Accepted: 2 August 2001

18.
ABSTRACT

While significant progress has been made toward implementing the Digital Earth vision, current implementations only make it easy to integrate and share spatial data from distributed sources, and have limited capabilities for integrating data and models to simulate social and physical processes. To make decision-making with Digital Earth effective for understanding the Earth and its systems, new infrastructures that provide computational simulation capabilities are needed. This paper proposes a framework of geospatial semantic web-based interoperable spatial decision support systems (SDSSs) to expand the capabilities of the currently implemented Digital Earth infrastructure. The main technologies applied in the framework, such as heterogeneous ontology integration, ontology-based catalog services, and web service composition, are introduced. We propose a partition-refinement algorithm for ontology matching and integration, and an algorithm for web service discovery and composition. The proposed interoperable SDSS enables decision-makers to reuse and integrate geospatial data and geoprocessing resources from heterogeneous sources across the Internet. Based on the proposed framework, a prototype to assist in protective boundary delimitation for Lunan Stone Forest conservation was implemented to demonstrate how ontology-based web services and the service-oriented architecture can contribute to the development of interoperable SDSSs in support of Digital Earth for decision-making.

19.
This article illustrates two techniques for merging daily aerosol optical depth (AOD) measurements from satellite and ground-based data sources to achieve optimal data quality and spatial coverage. The first technique is a traditional Universal Kriging (UK) approach employed to predict AOD from multi-sensor aerosol products that are aggregated on a reference grid, with AERONET as ground truth. The second technique is spatial statistical data fusion (SSDF), a method designed for massive satellite data interpolation. Traditional kriging has computational complexity O(N³), making it impractical for large datasets. Our version of UK accommodates massive data inputs by performing kriging locally, while SSDF accommodates massive data inputs by modelling their covariance structure with a low-rank linear model. In this study, we use aerosol data products from two satellite instruments: the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Geostationary Operational Environmental Satellite (GOES), covering the Continental United States.
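A rough sketch of the "kriging locally" idea (illustrative only: ordinary kriging with an assumed Gaussian covariance stands in for the article's Universal Kriging, which also carries trend terms, and the SSDF low-rank model is not reproduced): each prediction solves a small kriging system built from its k nearest observations, avoiding the O(N³) cost of a single global solve. The covariance parameters and synthetic AOD values are assumptions.

```python
import numpy as np

def gaussian_cov(h, sill=1.0, rng_par=20.0, nugget=0.05):
    """Gaussian covariance model with a nugget on zero distances (assumed parameters)."""
    return sill * np.exp(-(h / rng_par) ** 2) + nugget * (h == 0)

def local_ok(x0, obs_xy, obs_z, k=30):
    """Ordinary kriging prediction at x0 from the k nearest observations only."""
    d_to_obs = np.hypot(obs_xy[:, 0] - x0[0], obs_xy[:, 1] - x0[1])
    near = np.argsort(d_to_obs)[:k]
    xy, z = obs_xy[near], obs_z[near]
    h = np.hypot(xy[:, 0, None] - xy[None, :, 0], xy[:, 1, None] - xy[None, :, 1])
    # Augmented ordinary-kriging system enforcing weights that sum to one.
    A = np.ones((k + 1, k + 1))
    A[:k, :k] = gaussian_cov(h)
    A[k, k] = 0.0
    b = np.ones(k + 1)
    b[:k] = gaussian_cov(d_to_obs[near])
    w = np.linalg.solve(A, b)[:k]
    return float(w @ z)

rng = np.random.default_rng(0)
obs_xy = rng.uniform(0, 100, size=(5000, 2))        # gridded multi-sensor AOD cell centres (synthetic)
obs_z = np.sin(obs_xy[:, 0] / 15) + np.cos(obs_xy[:, 1] / 20) + rng.normal(0, 0.1, 5000)
print(round(local_ok(np.array([50.0, 50.0]), obs_xy, obs_z), 3))
```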

20.
ABSTRACT

Forecasting environmental parameters in the distant future requires complex modelling and large computational resources. Due to the sensitivity and complexity of forecast models, long-term parameter forecasts (e.g. up to 2100) are uncommon and are produced by only a few organisations, in heterogeneous formats and based on different assumptions about greenhouse gas emissions. However, data mining techniques can be used to coerce the data into a uniform temporal and spatial representation, which facilitates their use in many applications. In this paper, streams of big data coming from AquaMaps and NASA collections of 126 long-term forecasts of nine types of environmental parameters are processed through a cloud computing platform in order to (i) standardise and harmonise the data representations, (ii) produce intermediate scenarios and new informative parameters, and (iii) align all sets on a common time and spatial resolution. Time series cross-correlation applied to these aligned datasets reveals patterns of climate change and similarities between parameter trends in 10 marine areas. Our results highlight that (i) the Mediterranean Sea may have a standalone ‘response’ to climate change with respect to other areas, (ii) the Poles are most representative of global forecasted change, and (iii) the trends are generally alarming for most oceans.
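A minimal sketch of the final analysis step (the synthetic monthly series, their seasonal shift, and the lag range are assumptions): once two parameter series have been aligned on a common temporal resolution, their similarity at different lags can be measured with a lagged Pearson cross-correlation.

```python
import numpy as np

def lagged_xcorr(a, b, max_lag=24):
    """Pearson correlation of two equal-length series at lags -max_lag..max_lag."""
    lags = range(-max_lag, max_lag + 1)
    out = []
    for lag in lags:
        if lag < 0:
            out.append(np.corrcoef(a[:lag], b[-lag:])[0, 1])
        elif lag > 0:
            out.append(np.corrcoef(a[lag:], b[:-lag])[0, 1])
        else:
            out.append(np.corrcoef(a, b)[0, 1])
    return np.array(list(lags)), np.array(out)

t = np.arange(0, 12 * 85)                                     # monthly steps out to ~2100 (assumed)
series_a = 0.002 * t + np.sin(2 * np.pi * t / 12)             # warming trend plus seasonal cycle
series_b = 0.002 * t + np.sin(2 * np.pi * (t - 3) / 12)       # same trend, season shifted by 3 months
lags, cc = lagged_xcorr(series_a, series_b)
print("best lag (months):", int(lags[np.argmax(cc)]), "corr:", round(cc.max(), 3))
```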
