Similar Documents
15 similar documents found (search time: 15 ms)
1.
From a single-attribute raster layer in which each cell is assigned a numerical value, a connected set of a specified number of cells with the maximum (or minimum) total value is to be selected. This is a common decision problem in raster-based geographic information systems (GIS) and seems general enough to deserve inclusion in the standard functionality of such systems. Yet it is a computationally difficult optimization problem for which no efficient exact solution method is known. This article presents a new dynamic programming-based heuristic for the problem. Its performance is tested on randomly generated raster layers with varying degrees of spatial autocorrelation. Results suggest that the proposed heuristic is a promising alternative to the existing integer programming-based exact method, as it can handle significantly larger raster data with fair accuracy.
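As an illustration of the underlying optimization problem, the sketch below grows a connected patch greedily from the highest-valued cell. This is a hypothetical baseline only, not the article's dynamic programming-based heuristic or the integer programming method; the function name and sample grid are invented for illustration.

```python
import numpy as np

def greedy_connected_patch(raster, k):
    """Greedily grow a 4-connected patch of k cells with a large total value.

    Hypothetical baseline only -- not the article's heuristic. Seeds at the
    highest cell, then repeatedly adds the best cell adjacent to the patch.
    """
    rows, cols = raster.shape

    def neighbors(r, c):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                yield nr, nc

    seed = np.unravel_index(np.argmax(raster), raster.shape)
    patch = {seed}
    frontier = set(neighbors(*seed))
    while len(patch) < k and frontier:
        best = max(frontier, key=lambda rc: raster[rc])
        frontier.discard(best)
        patch.add(best)
        frontier.update(n for n in neighbors(*best) if n not in patch)
    return patch, float(sum(raster[rc] for rc in patch))

grid = np.array([[1, 5, 2],
                 [0, 9, 4],
                 [3, 1, 7]])
cells, total = greedy_connected_patch(grid, 3)  # picks 9, 5, 4
```

Such a greedy strategy can get trapped in local optima on autocorrelated rasters, which is precisely why the article investigates a dynamic programming-based alternative.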

2.
This paper presents a formal framework for the representation of three-dimensional geospatial data and the definition of common geographic information system (GIS) spatial operations. We use the compact stack-based representation of terrains (SBRT) to model geological volumetric data at both the surface and subsurface levels, thus avoiding the large storage requirements of regular voxel models. The main contribution of this paper is fitting the SBRT into geo-atom theory in a seamless way, providing it with a sound formal geographic foundation. In addition, we define a set of common spatial operations on this representation using the tools provided by map algebra. More complex geoprocessing operations or geophysical simulations using the SBRT can be implemented as compositions of these fundamental operations. Finally, a data model and an implementation extending the coverage concept of the Geography Markup Language standard are proposed. Geoscientists and GIS professionals can take advantage of this model to exchange and reuse geoinformation within a well-specified framework.

3.
4.
There has been a resurgence of interest in time geography studies due to emerging spatiotemporal big data in urban environments. However, the rapid increase in the volume, diversity, and intensity of spatiotemporal data poses a significant challenge for the representation and computation of time geographic entities and relations in road networks. To address this challenge, this article proposes a spatiotemporal data model based on a compressed linear reference (CLR) technique that transforms network time geographic entities from three-dimensional (3D) (x, y, t) space into two-dimensional (2D) CLR space. Using the proposed model, network time geographic entities can be stored and managed in classical spatial databases, and efficient spatial operations and index structures can be used directly to implement spatiotemporal operations and queries in CLR space. To validate the model, a prototype system was developed using existing 2D GIS techniques, and a case study was performed on large-scale datasets of space-time paths and prisms. The case study indicates that the proposed spatiotemporal data model is effective and efficient for storing, managing, and querying large-scale datasets of network time geographic entities.
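The core of the CLR idea can be sketched as a coordinate transform: each network edge is assigned a contiguous interval on a single linear axis, so a 3D network-time point (edge, offset, t) collapses to a 2D point (s, t). The edge table and function below are invented simplifications for illustration, not the article's actual data model.

```python
# Hypothetical edge lengths for a tiny road network. The compressed
# linear reference assigns each edge a contiguous interval on one axis.
edge_lengths = {"e1": 120.0, "e2": 80.0, "e3": 200.0}

# Precompute where each edge starts on the linear axis.
starts, cursor = {}, 0.0
for eid, length in edge_lengths.items():   # dict order is insertion order
    starts[eid] = cursor
    cursor += length

def to_clr(edge_id, offset, t):
    """Map a 3D network-time point (edge, offset along edge, t) to 2D (s, t)."""
    return starts[edge_id] + offset, t

# A space-time path vertex 40 m along edge e2 at t = 30 s:
s, t = to_clr("e2", 40.0, 30.0)  # s = 120 + 40 = 160
```

Once in (s, t) space, ordinary 2D spatial indexes and geometry operations apply to space-time paths and prisms without 3D extensions.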

5.
Realising the benefits of Spatial Data Infrastructures (SDIs) requires a semantically rich registry containing a Feature Type Catalogue (FTC) that represents the semantics of geographic feature types, including their operations, attributes and the relationships between feature types. Such information provides a more complete representation of the semantics of the concepts used in the SDI and enables advanced navigation, discovery and utilisation of discovered resources. The presented approach creates an FTC implementation in which the attributes, associations and operations of a given feature type are encapsulated within the FTC, and these conceptual representations are separated from the implementation aspects of the web services that may realise the operations in the FTC. This differs from previous approaches, which combine the implementation and conceptual aspects of behaviour in a web service ontology but separate the behavioural aspects from the static aspects of the semantics of the concept or feature type. These principles are demonstrated by implementing such a registry using open standards. The ebXML Registry Information Model (ebRIM) was used to incorporate the FTC described in ISO 19110, extending the Open Geospatial Consortium ebRIM Profile for the Web Catalogue Service (CSW) and adding a number of stored queries that allow the FTC component of the standards-compliant registry to be interrogated. The registry was populated with feature types from the marine domain, incorporating objects that conform to both the object and field views of the world. The implemented registry demonstrates the benefits of inheritance of feature type operations, attributes and associations, the ability to navigate around the FTC, and the advantages of separating the conceptual from the implementation aspects of the FTC. Further work is required to formalise the model and to include axioms that allow enhanced semantic expressiveness and the development of reasoning capabilities.

6.
The vast accumulation of environmental data and the rapid development of geospatial visualization and analytical techniques make it possible for scientists to solicit information from local citizens to map the spatial variation of geographic phenomena. However, data provided by citizens (referred to as citizen data in this article) suffer from two limitations for mapping: bias in spatial coverage and imprecision in spatial location. This article presents an approach to minimizing the impacts of these two limitations using geospatial analysis techniques. The approach reduces location imprecision by adopting a frequency-sampling strategy to identify representative presence locations from the areas over which citizens observed the geographic phenomenon. It compensates for spatial bias by weighting presence locations with cumulative visibility (the frequency at which a given location can be seen by local citizens). As a case study to demonstrate the principle, the approach was applied to map the habitat suitability of the black-and-white snub-nosed monkey (Rhinopithecus bieti) in Yunnan, China. Sightings of R. bieti were elicited from local citizens using a geovisualization platform and then processed with the proposed approach to predict a habitat suitability map. Presence locations of R. bieti recorded by biologists through intensive field tracking were used to validate the predicted map. Validation showed that the continuous Boyce index (Bcont(0.1)) calculated on the suitability map was 0.873 (95% CI: [0.810, 0.917]), indicating that the map was highly consistent with the field-observed distribution of R. bieti. Bcont(0.1) was much lower (0.173) for the suitability map predicted from citizen data when location imprecision was not reduced, and lower still (−0.048) when spatial bias was not compensated for. This indicates that the proposed approach effectively minimized the impacts of location imprecision and spatial bias in citizen data and thereby improved the quality of the mapped spatial variation. It further implies that, when geospatial analysis techniques are applied to properly account for the limitations of citizen data, the valuable information embedded in such data can be extracted and used for scientific mapping.

7.
8.
Climate observations and model simulations are producing vast amounts of array-based spatiotemporal data. Efficient processing of these data is essential for assessing global challenges such as climate change, natural disasters, and diseases. This is challenging not only because of the large data volume, but also because of the intrinsically high-dimensional nature of geoscience data. To tackle this challenge, we propose a spatiotemporal indexing approach to efficiently manage and process big climate data with MapReduce in a highly scalable environment. Using this approach, big climate data are stored directly in a Hadoop Distributed File System in their original, native file format. A spatiotemporal index is built to bridge the logical array-based data model and the physical data layout, enabling fast data retrieval for spatiotemporal queries. Based on the index, a data-partitioning algorithm is applied that allows MapReduce to achieve high data locality and a balanced workload. The proposed indexing approach is evaluated using the National Aeronautics and Space Administration (NASA) Modern-Era Retrospective Analysis for Research and Applications (MERRA) climate reanalysis dataset. The experimental results show that the index can significantly accelerate querying and processing (~10× speedup compared to the baseline test using the same computing cluster), while keeping the index-to-data ratio small (0.0328%). The applicability of the approach is demonstrated by a climate anomaly detection application deployed on a NASA Hadoop cluster. The approach can also support efficient processing of general array-based spatiotemporal data in various geoscience domains without special configuration of a Hadoop cluster.
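A minimal sketch of what such a spatiotemporal index provides: mapping a (variable, time step) query directly to a byte offset in a native binary file, so a map task can seek straight to its partition instead of scanning. The file layout assumed here (fixed-order float32 grids) is invented for illustration and is not the actual MERRA format.

```python
# Assumed layout: a flat binary file of float32 grids of shape (NLAT, NLON),
# one grid per (variable, time step), variables in a fixed order. This is an
# invented simplification, not the real MERRA file structure.
NLAT, NLON = 361, 540
VARS = ["T2M", "U10M", "V10M"]
GRID_BYTES = NLAT * NLON * 4  # float32

def grid_offset(var, time_step):
    """Byte offset of one variable's grid at one time step."""
    grids_before = time_step * len(VARS) + VARS.index(var)
    return grids_before * GRID_BYTES

# A task processing U10M at time step 2 seeks past the 7 preceding grids:
off = grid_offset("U10M", 2)
```

Keeping only such (key, offset, length) triples is what makes the index tiny relative to the data while still enabling locality-aware partitioning.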

9.
Moving object databases are designed to store and process spatial and temporal object data. An especially useful moving object type is the moving region, which consists of one or more moving polygons and is suitable for modeling the spread of forest fires, the movement of clouds, the spread of diseases and many other real-world phenomena. Previous implementations usually allow the shape of the region to change during the movement; however, the restrictions this model requires result in an inaccurate interpolation of rotating objects. In this paper, we present an alternative approach for moving and rotating regions of fixed shape, called Fixed Moving Regions, which provides a significantly better model for a wide range of applications, such as modeling the movement of oil tankers, icebergs and other rigid structures. Furthermore, we describe and implement several useful operations on this new object type that enable a database system to solve many real-world problems, such as collision tests, projections and intersections, much more accurately than other models. Based on this research, we also implemented a library for easy integration into moving object database systems, such as the DBMS Secondo developed at the FernUniversität in Hagen.

10.
Research activity and published literature on the reliability and vulnerability analysis of urban areas for disaster management have grown tremendously in the recent past. Population information plays the most important role throughout the disaster management process. In this article, population information was used as the evaluation criterion, and a fuzzy multiple-attribute decision-making (MADM) approach was used to support a vulnerability analysis of the Helsinki area for disaster management. The result is a kernel density map that shows the spatially vulnerable locations in the event of a disaster. Model results were first validated against the original population-information kernel density maps. In a second step, the model was validated using fuzzy set accuracy assessment and the domain knowledge of rescue experts. This is a novel approach to validation, which makes it possible to see whether, and how well, computer decision-making models compare to a real decision-making process in disaster management. The validation showed that the fuzzy model produced a reasonably accurate result. Fuzzy modelling reduced the number of vulnerable areas to a reasonable scale, consistent with the actual human assessment of these areas, which allows resources to be optimised during rescue planning and operations.

11.
Volunteered geographic information (VGI) contains valuable field observations that represent the spatial distribution of geographic phenomena. As such, it has the potential to provide regularly updated, low-cost field samples for predictively mapping the spatial variation of geographic phenomena. Predictive mapping often requires representative samples for high mapping accuracy, but samples consisting of VGI observations are often not representative because they concentrate on specific geographic areas (i.e. spatial bias) due to the opportunistic nature of voluntary observation efforts. In this article, we propose a representativeness-directed approach to mitigating spatial bias in VGI for predictive mapping. The approach defines and quantifies sample representativeness by comparing the probability distributions of the sample locations and of the mapping area in the environmental covariate space. Spatial bias is mitigated by weighting the sample locations to maximize their representativeness. The approach is evaluated using species habitat suitability mapping as a case study. The results show that the accuracy of predictive mapping with weighted sample locations is higher than with unweighted sample locations. A positive relationship between sample representativeness and mapping accuracy is also observed, suggesting that sample representativeness is a valid indicator of predictive mapping accuracy. The approach thus mitigates spatial bias in VGI and improves predictive mapping accuracy.
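The weighting idea can be sketched in one covariate dimension: estimate the covariate density of the mapping area and of the samples, and weight each sample by the area-to-sample density ratio so over-sampled environments are down-weighted. This is a 1D simplification of the article's multivariate approach, with synthetic data standing in for VGI observations.

```python
import numpy as np

def representativeness_weights(sample_cov, area_cov, bins=10):
    """Weight samples by the area-to-sample density ratio per covariate bin.

    Samples from over-represented environments get weights below 1,
    under-represented ones above 1. A 1D sketch of the idea; the article
    works in a multivariate environmental covariate space.
    """
    edges = np.histogram_bin_edges(area_cov, bins=bins)
    area_hist, _ = np.histogram(area_cov, bins=edges, density=True)
    samp_hist, _ = np.histogram(sample_cov, bins=edges, density=True)
    idx = np.clip(np.digitize(sample_cov, edges) - 1, 0, bins - 1)
    ratio = np.where(samp_hist > 0, area_hist / np.maximum(samp_hist, 1e-12), 0.0)
    w = ratio[idx]
    return w / w.sum() * len(sample_cov)  # normalize to mean weight 1

rng = np.random.default_rng(0)
area_cov = rng.uniform(0, 1, 10_000)   # covariate values over the mapping area
sample_cov = rng.beta(5, 2, 200)       # spatially biased sample covariates
w = representativeness_weights(sample_cov, area_cov)
```

The weights can then be passed to any prediction method that accepts per-sample weights, which is how the bias mitigation feeds into the mapping step.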

12.
The continuous development of Spatial Data Infrastructures (SDIs) provides a favourable context for environmental management and planning. However, the actual contribution of SDIs also appears to depend on the match between users' expectations and the services delivered to them. Several studies have addressed organizational, methodological and technological aspects of the development of SDIs, but only a few have, to the best of our knowledge, studied SDI use at large. This article introduces a methodological approach for studying the relationship between SDIs and the users who interact with them as part of their professional practices. Our study is applied to coastal zone management and planning in France. The approach combines structural modelling, based on Social Network Analysis (SNA), with data flow modelling, based on Data Flow Diagrams (DFD), and was applied to an online questionnaire and semi-structured interviews. The results identify the SDIs, geographical data flows and institutional levels involved in French coastal zone management and planning.

13.
Viewshed analysis, often supported by geographic information systems, is widely used in many application domains. However, as terrain data become increasingly large and available at high resolutions, data-intensive viewshed analysis poses significant computational challenges. General-purpose computation on graphics processing units (GPUs) provides a promising means to address these challenges. This article describes a parallel computing approach to data-intensive viewshed analysis of large terrain data using GPUs. Our approach exploits the high-bandwidth memory of GPUs and the parallelism of massive spatial data to enable memory-intensive and computation-intensive tasks, while central processing units are used for efficient input/output (I/O) management. Furthermore, a two-level spatial domain decomposition strategy was developed to mitigate a performance bottleneck caused by data transfer in the memory hierarchy of the GPU-based architecture. Computational experiments were designed to evaluate the performance of the approach. The experiments demonstrate a significant performance improvement over a well-known sequential computing method, and an enhanced ability to analyze sizable datasets that the sequential method cannot handle.
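For reference, the per-target work that such an approach parallelizes can be sketched as a sequential line-of-sight test: a target cell is visible if no intermediate cell along the ray rises above the sight line from the viewpoint. This is a minimal CPU sketch of the general technique, not the article's GPU implementation; the tiny DEM and eye height are invented.

```python
import numpy as np

def visible(dem, vp, target, eye_height=1.8):
    """Sequential line-of-sight test on a DEM grid (unit cell size).

    The target is visible if no intermediate cell along the ray rises
    above the sight line. A CPU sketch of the per-target work only --
    the article's contribution is parallelizing such work on the GPU.
    """
    (r0, c0), (r1, c1) = vp, target
    eye = dem[r0, c0] + eye_height
    n = max(abs(r1 - r0), abs(c1 - c0))
    if n == 0:
        return True
    sight_slope = (dem[r1, c1] - eye) / np.hypot(r1 - r0, c1 - c0)
    for i in range(1, n):                      # sample cells along the ray
        t = i / n
        r = round(r0 + t * (r1 - r0))
        c = round(c0 + t * (c1 - c0))
        slope = (dem[r, c] - eye) / np.hypot(r - r0, c - c0)
        if slope > sight_slope:                # terrain blocks the sight line
            return False
    return True

dem = np.array([[10., 10., 10., 10.],
                [10., 10., 30., 10.],
                [10., 10., 10., 10.]])
blocked = not visible(dem, (1, 0), (1, 3))  # the 30 m ridge at (1, 2) blocks it
```

Because every target cell is tested independently against the same viewpoint, the loop over targets is embarrassingly parallel, which is what makes the problem a good fit for GPUs.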

14.
One of the uses of geostatistical conditional simulation is as a tool in assessing the spatial uncertainty of inputs to the Monte Carlo method of system uncertainty analysis. Because the number of experimental data in practical applications is limited, the geostatistical parameters used in the simulation are themselves uncertain. The inference of these parameters by maximum likelihood allows for an easy assessment of this estimation uncertainty which, in turn, may be included in the conditional simulation procedure. A case study based on transmissivity data is presented to show the methodology whereby both model selection and parameter inference are solved by maximum likelihood.
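A minimal sketch of the inference idea: under a Gaussian random-field assumption with an exponential covariance C(h) = s2·exp(−h/a), the range parameter a can be estimated by minimizing the negative log-likelihood of the observations. The 1D synthetic data and grid search below are illustrative simplifications, not the article's transmissivity case study.

```python
import numpy as np

def neg_log_likelihood(a, x, z, s2=1.0):
    """Negative Gaussian log-likelihood of observations z at 1D locations x
    under an exponential covariance C(h) = s2 * exp(-h / a)."""
    h = np.abs(x[:, None] - x[None, :])               # pairwise distances
    C = s2 * np.exp(-h / a) + 1e-8 * np.eye(len(x))   # jitter for stability
    _, logdet = np.linalg.slogdet(C)
    alpha = np.linalg.solve(C, z)
    return 0.5 * (logdet + z @ alpha + len(x) * np.log(2 * np.pi))

# Synthetic stand-in for the transmissivity data: sample a correlated
# field with a known range, then recover the range by grid search.
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 10, 40))
h = np.abs(x[:, None] - x[None, :])
L = np.linalg.cholesky(np.exp(-h / 2.0) + 1e-8 * np.eye(40))  # true range a = 2
z = L @ rng.standard_normal(40)

candidates = np.linspace(0.5, 5.0, 19)
a_hat = candidates[np.argmin([neg_log_likelihood(a, x, z) for a in candidates])]
```

The curvature of the likelihood surface around a_hat is what gives the "easy assessment of estimation uncertainty" the abstract mentions, and comparing maximized likelihoods across covariance models supports the model-selection step.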

15.
A method was developed to quantify a suite of organic compounds present at trace-level concentrations in snowmelt water samples, using dichloromethane liquid–liquid extraction and GC–MS. Samples from a 3-m snow pit dug in 2005 at Summit, Greenland were analyzed with the method, and a profile of organics over the preceding 4 years was compiled. Supporting data, including concentrations of total organic carbon (TOC), low-molecular-weight acids, and trace elements, were determined using well-established methods. The results show that low-molecular-weight acids contribute a significant percentage, up to 20%, of the measured TOC. Hopanes were measured quantitatively for the first time in Greenland snow. Hopanes, as well as PAHs, are present at very low concentrations and contribute 0.0002–0.004% of TOC. Alkanes and alkanoic acids were also quantified and contribute less than 1% and up to 7%, respectively, of TOC. No apparent seasonal pattern was found for specific classes of organic compounds in the snow pit; this may be due to post-depositional processing.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号