Similar Documents
 20 similar documents found (search time: 487 ms)
1.
One difficulty in integrating geospatial data sets from different sources is variation in feature classification and semantic content of the data. One step towards achieving beneficial semantic interoperability is to assess the semantic similarity among objects that are categorised within data sets. This article focuses on measuring semantic and structural similarities between categories of formal data, such as Ordnance Survey (OS) cartographic data, and volunteered geographic information (VGI), such as that sourced from OpenStreetMap (OSM), with the intention of assessing possible integration. The model involves ‘tokenisation’ to search for common roots of words, and the feature classifications have been modelled as an XML schema labelled rooted tree for hierarchical analysis. The semantic similarity was measured using the WordNet::Similarity package, while the structural similarities between sub-trees of the source and target schemas have also been considered. Along with dictionary and structural matching, the data type of the category itself is a comparison variable. The overall similarity is based on a weighted combination of these three measures. The results reveal that the use of a generic similarity matching system leads to poor agreement between the semantics of OS and OSM data sets. It is concluded that a more rigorous peer-to-peer assessment of VGI data, increasing numbers and transparency of contributors, the initiation of more programs of quality testing and the development of more directed ontologies can improve spatial data integration.
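
A minimal sketch of the weighted combination described in this abstract, assuming three pre-computed component scores (semantic, structural and data type, each scaled to [0, 1]) and illustrative weights; the original study's actual weights and scoring functions are not reproduced here.

```python
# Minimal sketch of the weighted similarity combination described above.
# The weights and component scores are illustrative assumptions, not the
# values used in the original study.

def overall_similarity(semantic: float, structural: float, datatype: float,
                       weights=(0.5, 0.3, 0.2)) -> float:
    """Combine three similarity measures (each in [0, 1]) into one score."""
    w_sem, w_str, w_dt = weights
    return w_sem * semantic + w_str * structural + w_dt * datatype

# Example: comparing an OS category with a candidate OSM tag, assuming the
# component scores have already been computed (e.g. semantic similarity from
# WordNet::Similarity, structural similarity from sub-tree matching).
score = overall_similarity(semantic=0.82, structural=0.40, datatype=1.0)
print(f"overall similarity: {score:.2f}")
```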

2.
This paper presents a formal framework for the representation of three-dimensional geospatial data and the definition of common geographic information system (GIS) spatial operations. We use the compact stack-based representation of terrains (SBRT) in order to model geological volumetric data, both at the surface and subsurface levels, thus preventing the large storage requirements of regular voxel models. The main contribution of this paper is fitting the SBRT into the geo-atom theory in a seamless way, providing it with a sound formal geographic foundation. In addition we have defined a set of common spatial operations on this representation using the tools provided by map algebra. More complex geoprocessing operations or geophysical simulations using the SBRT as representation can be implemented as a composition of these fundamental operations. Finally, a data model and an implementation extending the coverage concept provided by the Geography Markup Language standard are suggested. Geoscientists and GIS professionals can take advantage of this model to exchange and reuse geoinformation within a well-specified framework.
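
The stack-based idea can be illustrated with a small sketch: each (x, y) cell stores an ordered stack of homogeneous intervals rather than individual voxels, which is what keeps storage compact. The class and field names below are assumptions made for illustration, not the SBRT specification.

```python
# Illustrative sketch of a stack-based terrain column: instead of storing a
# regular voxel grid, each (x, y) cell keeps an ordered stack of homogeneous
# intervals (material, thickness). Names and structure are assumptions made
# for illustration, not the SBRT specification.
from dataclasses import dataclass
from typing import List

@dataclass
class Interval:
    material: str      # e.g. "granite", "clay", "air"
    thickness: float   # vertical extent of the interval, in metres

@dataclass
class Column:
    base_elevation: float
    stack: List[Interval]  # ordered bottom-up

    def material_at(self, z: float) -> str:
        """Return the material at elevation z, a basic 'get value' operation."""
        top = self.base_elevation
        for interval in self.stack:
            top += interval.thickness
            if z < top:
                return interval.material
        return "air"

col = Column(base_elevation=100.0,
             stack=[Interval("granite", 40.0), Interval("clay", 5.0),
                    Interval("soil", 1.5)])
print(col.material_at(142.0))  # -> "clay"
```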

3.
Environmental simulation models need automated geographic data reduction methods to optimize the use of high-resolution data in complex environmental models. Advanced map generalization methods have been developed for multiscale geographic data representation. In the case of map generalization, positional, geometric and topological constraints are focused on to improve map legibility and communication of geographic semantics. In the context of environmental modelling, in addition to the spatial criteria, domain criteria and constraints also need to be considered. Currently, due to the absence of domain-specific generalization methods, modellers resort to ad hoc methods of manual digitization or use cartographic methods available in off-the-shelf software. Such manual methods are not feasible solutions when large data sets are to be processed, thus limiting modellers to single-scale representations. Automated map generalization methods can rarely be used with confidence because simplified data sets may violate domain semantics and may also result in suboptimal model performance. For best modelling results, it is necessary to prioritize domain criteria and constraints during data generalization. Modellers should also be able to automate the generalization techniques and explore the trade-off between model efficiency and model simulation quality for alternative versions of input geographic data at different geographic scales. Based on our long-term research with experts in the analytic element method of groundwater modelling, we developed the multicriteria generalization (MCG) framework as a constraint-based approach to automated geographic data reduction. The MCG framework is based on the spatial multicriteria decision-making paradigm since multiscale data modelling is too complex to be fully automated and should be driven by modellers at each stage. Apart from a detailed discussion of the theoretical aspects of the MCG framework, we discuss two groundwater data modelling experiments that demonstrate how MCG is not just a framework for automated data reduction, but an approach for systematically exploring model performance at multiple geographic scales. Experimental results clearly indicate the benefits of MCG-based data reduction and encourage us to continue expanding the scope of MCG and implementing it for multiple application domains.

4.
5.
Assessment issues in geographic education for the twenty-first century
The Journal of Geography, 2012, 111(4): 171–174

6.
A study of the geo-video data model and its application development
This paper discusses the basic concepts and data model of geo-video, proposes a tentative entity–relationship diagram for geo-video data, and develops geo-video applications in a web environment. The core elements of the geo-video data model are: describing video frames by location and semantics, building trajectory layers and metadata for video clips, and extending the trajectory layer with linear references to video frames. Video data are integrated with geographic data through spatial, linear and semantic referencing, enabling the query, retrieval, playback and map tracking of geo-video. Taking the Henan University campus and the Kaifeng road network as examples, geo-video data were collected, a geo-video database was built, web map and video services were published, and web video GIS applications were developed using the latest Adobe Flex framework, ArcGIS Server ADF and a JavaScript Mashup approach, respectively. Prototype development shows that the geo-video data model is well suited to web video GIS development and is relatively simple to implement.
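
A hedged sketch of the kind of data model the abstract describes: frames carry a position, a timestamp, a semantic tag and a linear-reference measure along the clip's trajectory, so that a map location can be mapped back to a video frame. The class and attribute names are illustrative assumptions, not the published entity–relationship design.

```python
# Minimal sketch of a geo-video data model: frames carry a position, a
# timestamp and a semantic tag, and a clip links frames to a trajectory
# through linear referencing (distance along the route). The class and
# attribute names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class GeoVideoFrame:
    timestamp: float          # seconds from clip start
    lon: float
    lat: float
    measure: float            # linear reference: metres along the trajectory
    semantic_tag: str = ""    # e.g. "intersection", "campus gate"

@dataclass
class GeoVideoClip:
    clip_id: str
    video_uri: str
    frames: List[GeoVideoFrame] = field(default_factory=list)

    def frame_at_measure(self, m: float) -> Optional[GeoVideoFrame]:
        """Map a position along the route to the nearest indexed frame,
        supporting 'click on the map, jump to the video' style queries."""
        if not self.frames:
            return None
        return min(self.frames, key=lambda f: abs(f.measure - m))
```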

7.
The ever-increasing number of spatial data sets accessible through spatial data clearinghouses continues to make geographic information retrieval and spatial data discovery major challenges. Such challenges have been addressed in the discipline of Information Retrieval through ranking of data according to inferred degrees of relevance. Spatial data, however, present an additional challenge as they are characteristically made up of geometry, attribute and, optionally, temporal components. As these components are mutually independent, this paper suggests that they be ranked independently of one another. Representing the results of independently ranking these three components calls for an alternative to the textual ranked lists currently used: visualisation of relevance in a three-dimensional visualisation environment. To illustrate the possible application of such an approach, a prototype browser is presented.
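
A small sketch of the central idea, assuming per-dataset relevance scores for the geometry, attribute and temporal components have already been computed; the scoring itself and the 3D visualisation are outside the scope of this illustration.

```python
# Sketch of ranking the three components of a spatial data set independently,
# as the abstract proposes: one ranked list per component rather than a single
# merged list. The data-set names and scores are invented for illustration.
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class RelevanceScores:
    geometry: float    # e.g. overlap between query extent and data extent
    attribute: float   # e.g. keyword match against attribute descriptions
    temporal: float    # e.g. overlap between query period and data period

def rank_independently(scores: Dict[str, RelevanceScores]
                       ) -> Tuple[List[str], List[str], List[str]]:
    """Return one ranked list per component instead of a single merged list."""
    names = list(scores)
    by_geometry = sorted(names, key=lambda n: scores[n].geometry, reverse=True)
    by_attribute = sorted(names, key=lambda n: scores[n].attribute, reverse=True)
    by_temporal = sorted(names, key=lambda n: scores[n].temporal, reverse=True)
    return by_geometry, by_attribute, by_temporal

# Illustrative scores; in the article the three rankings are then mapped to
# the axes of a three-dimensional visualisation environment.
scores = {
    "landcover_2010": RelevanceScores(0.9, 0.4, 0.2),
    "roads_2022": RelevanceScores(0.6, 0.8, 0.9),
    "census_2016": RelevanceScores(0.3, 0.9, 0.5),
}
print(rank_independently(scores))
```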

8.
Most current spatio-temporal data models mainly describe the discrete changes of spatial entities. This paper analyses and defines data types for spatial moving objects both in the infinite continuous space at the abstract level and in the finite discrete space at the discrete level, dividing them into time types, spatial types and temporal types for study, and proposes representation methods and operations that support spatial moving objects. The approach can represent both the continuous motion and the discrete changes of spatial entities, laying a foundation for building spatio-temporal data models for spatial moving objects.
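
A hedged sketch of a discrete-level "moving point" type: continuous motion is approximated by timestamped samples plus interpolation, so both continuous movement and discrete change can be represented. The linear-interpolation choice and the names are illustrative assumptions.

```python
# Sketch of a discrete-level moving-point type: continuous motion at the
# abstract level is approximated by timestamped samples plus interpolation.
from bisect import bisect_left
from typing import List, Tuple

class MovingPoint:
    def __init__(self, samples: List[Tuple[float, float, float]]):
        # samples: (t, x, y) triples, kept sorted by time
        self.samples = sorted(samples)

    def at(self, t: float) -> Tuple[float, float]:
        """Position at time t, linearly interpolated between samples."""
        ts = [s[0] for s in self.samples]
        i = bisect_left(ts, t)
        if i == 0:
            return self.samples[0][1:]
        if i == len(ts):
            return self.samples[-1][1:]
        t0, x0, y0 = self.samples[i - 1]
        t1, x1, y1 = self.samples[i]
        a = (t - t0) / (t1 - t0)
        return (x0 + a * (x1 - x0), y0 + a * (y1 - y0))

track = MovingPoint([(0, 0.0, 0.0), (10, 100.0, 0.0), (20, 100.0, 50.0)])
print(track.at(15))  # -> (100.0, 25.0)
```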

9.
Insufficient spatial coverage of existing land-cover data is a common limitation to timely and effective spatial analysis. Achieving spatial completeness of land-cover data is the most challenging for large study areas which straddle ecological or administrative boundaries, and where individuals and agencies lack access to, and the means to process, raw data from which to derive spatially complete land-cover maps. In many cases, various sources of secondary data are available, so that land-cover map assimilation and synthesis can resolve this research problem. The following paper develops a reliable and repeatable framework for assimilating and synthesizing pre-classified data sets. Assimilation is achieved through data reformatting and map legend reconciliation in the context of a specific application. Individual maps are assessed for accuracy at various geographic scales and levels of thematic precision, with an emphasis on the ‘area of overlap’, in order to extract information that guides the synthesis process. The quality of the synthesized land-cover data set is evaluated using advanced accuracy assessment methods, including a measure describing the ‘magnitude of disagreement’. This method is applied to derive a seamless thematic map of the land cover of eastern Ontario from two disparate map series. The importance of assessing data quality throughout the process using multiple reference data sets is highlighted, and limitations of the method are discussed.

10.
Geographical information systems are increasingly based on a DBMS with spatial extensions, which is also the case for the system described in this paper. The design and implementation of a generic geographical query tool, a platform for querying multiple spatio-temporal data sets and associated thematic data, is presented. The system is designed to be generic, that is without one specific application in mind. It supports ad-hoc queries covering both the spatial and the thematic part of the data. The generic geographic query tool will be illustrated with spatial and thematic Cadastral data. Special attention will be given to the temporal aspects: a spatio-temporal data model will be described together with a set of views for easy querying. DBMS views play an important role in the architecture of the system: integration of models, aggregation of information, presentation of temporal data, and so on. The current production version of the geographic query tool within the Dutch Cadastre is based on Geo-ICT products with a relatively small market share (Ingres and GEO++). A new prototype version is being developed using mainstream Geo-ICT products (Oracle and MapInfo). First results and open issues with respect to this prototype are presented.

11.
The availability of continental and global-scale spatio-temporal geographical data sets and the requirement to efficiently process, analyse and manage them led to the development of the temporally enabled Geographic Resources Analysis Support System (GRASS GIS). We present the temporal framework that extends GRASS GIS with spatio-temporal capabilities. The framework provides comprehensive functionality to implement a full-featured temporal geographic information system (GIS) based on a combined field and object-based approach. A significantly improved snapshot approach is used to manage spatial fields of raster, three-dimensional raster and vector type in time. The resulting timestamped spatial fields are organised in spatio-temporal fields referred to as space-time data sets. Both types of fields are handled as objects in our framework. The spatio-temporal extent of the objects and related metadata is stored in relational databases, thus providing additional functionalities to perform SQL-based analysis. We present our combined field and object-based approach in detail and show the management, analysis and processing of spatio-temporal data sets with complex spatio-temporal topologies. A key feature is the hierarchical processing of spatio-temporal data, ranging from topological analysis of spatio-temporal fields, through Boolean operations on spatio-temporal extents, to single pixel, voxel and vector feature access. The linear scalability of our approach is demonstrated by handling up to 1,000,000 raster layers in a single space-time data set. We provide several code examples to show the capabilities of the GRASS GIS Temporal Framework and present the spatio-temporal intersection of trajectory data which demonstrates the object-based ability of our framework.

12.
Wildlife ecologists frequently make use of limited information on locations of a species of interest in combination with readily available GIS data to build models to predict space use. In addition to a wide range of statistical data models that are more commonly used, machine learning approaches provide another means to develop predictive spatial models. However, comparison of output from these two families of models for the same data set is not often carried out. It is important that wildlife managers understand the pitfalls and limitations when a single set of models is used with limited GIS data to try to predict and understand species distribution. To illustrate this, we fitted two sets of models (generalized linear mixed models (GLMMs) and boosted regression trees (BRTs)) to predict geographic occupancy of the eastern coyote (Canis latrans) on the island of Newfoundland, Canada. This exercise is illustrative of common spatial questions in wildlife research and management. Our results show that models vary depending on the approach (GLMM vs. BRT) and that, overall, BRT had higher predictive ability. Although machine learning has been criticized because it is not explicitly hypothesis-driven, it has been used in other areas of spatial modelling with success. Here, we demonstrate that it may be a useful approach for predicting wildlife space use and to generate hypotheses when data are limited. The results of this comparison can help to improve other models for species distributions and also guide future sampling and modelling initiatives.
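
A rough illustration of such a model comparison in Python, using a plain logistic regression as a simplified stand-in for the GLMM (random effects omitted) and scikit-learn's gradient-boosted trees as a BRT analogue, on synthetic occupancy data; this is not the authors' workflow or data.

```python
# Illustrative comparison of a regression-style model and a boosted-tree model
# for presence/absence prediction. A plain logistic regression stands in for
# the GLMM (random effects omitted) and GradientBoostingClassifier stands in
# for BRT; the data are synthetic with a deliberately non-linear signal.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 4))                     # e.g. elevation, forest cover, ...
logit = 1.5 * X[:, 0] - 2.0 * X[:, 1] ** 2 + X[:, 2] * X[:, 3]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # occupancy with non-linear signal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

glm = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
brt = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05,
                                 max_depth=3).fit(X_tr, y_tr)

for name, model in [("logistic (GLMM stand-in)", glm), ("BRT analogue", brt)]:
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```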

13.
Managing geophysical data generated by emerging spatiotemporal data sources (e.g. geosensor networks) presents a growing challenge to Geographic Information System science. The presence of correlation poses difficulties with respect to traditional spatial data analysis. This paper describes a novel spatiotemporal analytical scheme that allows us to yield a characterization of correlation in geophysical data along the spatial and temporal dimensions. We resort to a multivariate statistical model, namely CoKriging, in order to derive accurate spatiotemporal interpolation models. These predict unknown values by utilizing not only geosensor values observed at the same time, but also information from the recent past. We use a window-based computation methodology that leverages the power of temporal correlation in a spatial modeling phase. This is done by also fitting the computed interpolation model to data which may change over time. In an assessment, using various geophysical data sets, we show that the presented algorithm is often able to deal with both spatial and temporal correlations. This improves accuracy during the interpolation phase, compared to spatial and spatiotemporal competitors. Specifically, we evaluate the efficacy of the interpolation phase by using established evaluation metrics (i.e. root mean squared error, Akaike information criterion and computation time).
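
To illustrate the window-based idea in a hedged way, the sketch below replaces CoKriging with a much simpler inverse-distance scheme: each prediction uses only observations from a recent time window, with a parameter trading off spatial against temporal distance. All names and parameter values are assumptions made for illustration, not the paper's method.

```python
# Simplified stand-in for window-based spatio-temporal interpolation: unknown
# values are predicted from sensor readings in a recent time window, weighted
# by inverse spatio-temporal distance. This replaces the paper's CoKriging
# with plain IDW purely to illustrate the windowing idea.
import numpy as np

def st_idw(obs_xyvt, target_xy, target_t, window=3.0, alpha=1.0, power=2.0):
    """obs_xyvt: iterable of (x, y, value, t); alpha scales time vs. space."""
    obs = np.asarray(obs_xyvt, dtype=float)
    recent = obs[(obs[:, 3] <= target_t) & (obs[:, 3] >= target_t - window)]
    if recent.size == 0:
        return np.nan
    d_space = np.hypot(recent[:, 0] - target_xy[0], recent[:, 1] - target_xy[1])
    d_time = target_t - recent[:, 3]
    dist = np.sqrt(d_space ** 2 + (alpha * d_time) ** 2)
    if np.any(dist == 0):
        return float(recent[dist == 0, 2][0])
    w = 1.0 / dist ** power
    return float(np.sum(w * recent[:, 2]) / np.sum(w))

obs = [(0, 0, 10.0, 1.0), (5, 0, 14.0, 2.0), (0, 5, 12.0, 3.0)]
print(st_idw(obs, target_xy=(2, 2), target_t=3.0))
```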

14.
The availability of spatial data on an unprecedented scale as well as advancements in analytical and visualization techniques gives researchers the opportunity to study complex problems over large urban and regional areas. Nevertheless, few individual data sets provide both the requisite spatial resolution and the temporal observational frequency to truly facilitate detailed investigations. Some data are collected frequently over time but only at a few geographic locations (e.g., weather stations). Similarly, other data are collected with a high level of spatial resolution but not at regular or frequent time intervals (e.g., satellite data). The purpose of this article is to present an interpolation approach that leverages the relative temporal richness of one data set with the relative spatial richness of another to fill in the gaps. Because different interpolation techniques are more appropriate than others for specific types of data, we propose a space–time interpolation approach whereby two interpolation methods – one for the temporal and one for the spatial dimension – are used in tandem to improve the accuracy of the results.

We call our ensemble approach the space–time interpolation environment (STIE). The primary steps within this environment include a spatial interpolation processor, a temporal interpolation processor, and a calibration processor, which enforces phenomenon-related behavioral constraints. The specific interpolation techniques used within the STIE can be chosen on the basis of suitability for the data and application at hand. In this article, we first describe STIE conceptually including the data input requirements, output structure, details of the primary steps, and the mechanism for coordinating the data within those steps. We then describe a case study focusing on urban land cover in Phoenix, Arizona, using our working implementation. Our empirical results show that our approach estimated urban land cover more accurately than a single interpolation technique.
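
A minimal sketch of the "in tandem" idea under simple assumptions: a temporal interpolator fills the gap between two raster snapshots, a separately produced spatial estimate is assumed to exist, and the two are blended; the calibration processor that STIE uses to enforce behavioral constraints is reduced here to a fixed weight.

```python
# Minimal sketch of combining a temporal and a spatial interpolation: a
# temporally rich but spatially coarse estimate and a spatially rich but
# temporally sparse estimate are blended. The blending weight and data are
# illustrative assumptions, not the authors' calibration.
import numpy as np

def temporal_interp(raster_t0, raster_t1, t0, t1, t):
    """Linear interpolation in time between two co-registered rasters."""
    a = (t - t0) / (t1 - t0)
    return (1 - a) * raster_t0 + a * raster_t1

def blend(temporal_estimate, spatial_estimate, w_temporal=0.5):
    """Combine the two estimates; a calibration step would set the weight."""
    return w_temporal * temporal_estimate + (1 - w_temporal) * spatial_estimate

r2000 = np.array([[0.2, 0.4], [0.6, 0.8]])        # e.g. fraction of urban cover
r2010 = np.array([[0.4, 0.5], [0.7, 0.9]])
r2005_t = temporal_interp(r2000, r2010, 2000, 2010, 2005)
r2005_s = np.array([[0.35, 0.42], [0.64, 0.88]])  # e.g. from a spatial interpolator
print(blend(r2005_t, r2005_s))
```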

15.
Hybrid terrain models combine large regular data sets and high-resolution irregular meshes [triangulated irregular network (TIN)] for topographically and morphologically complex terrain features such as man-made microstructures or cliffs. In this paper, a new method to generate and visualize this kind of 3D hybrid terrain model is presented. This method can integrate geographic data sets from multiple sources without a remeshing process to combine the heterogeneous data of the different models. At the same time, the original data sets are preserved without modification, and, thus, TIN meshes can be easily edited and replaced, among other features. Specifically, our approach is based on the utilization of the external edges of convexified TINs as the fundamental primitive to tessellate the space between both types of meshes. Our proposal is eminently parallel, requires only a minimal preprocessing phase, and minimizes the storage requirements when compared with previous proposals.

16.
Spatial sciences are confronted with increasing amounts of high-dimensional data. These data commonly exhibit spatial and temporal dimensions. To explore, extract, and generalize inherent patterns in large spatiotemporal data sets, clustering algorithms are indispensable. These clustering algorithms must account for the distinct properties of space and time to outline meaningful clusters in such data sets. Therefore, this research develops a hierarchical method based on self-organizing maps. The hierarchical architecture permits independent modeling of spatial and temporal dependence. To exemplify the utility of the method, this research uses an artificial data set and a socio-economic data set of the Ostregion, Austria, from the years 1961 to 2001. The results for the artificial data set demonstrate that the proposed method produces meaningful clusters that cannot be achieved when disregarding differences in spatial and temporal dependence. The results for the socio-economic data set show that the proposed method is an effective and powerful tool for analyzing spatiotemporal patterns in a regional context.

17.
For the evaluation of results from remote sensing and high-resolution spatial models, it is often necessary to assess the similarity of sets of maps. This paper describes a method to compare raster maps of categorical data. The method applies fuzzy set theory and involves both fuzziness of location and fuzziness of category. The fuzzy comparison yields a map, which specifies for each cell the degree of similarity on a scale of 0 to 1. In addition to this spatial assessment of similarity, an overall similarity value is derived. This statistic corrects the cell-average similarity value for the expected similarity. It can be considered the fuzzy equivalent of the Kappa statistic and is therefore called KFuzzy. A hypothetical case demonstrates how the comparison method distinguishes minor changes and fluctuations within patterns from major changes. Finally, a practical case illustrates how the method can be useful in a validation process.
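
A hedged sketch of a per-cell fuzzy comparison in the same spirit: a cell counts as similar if the other map holds the same (or a related) category nearby, with similarity decaying over distance. The decay function and category-similarity table are illustrative assumptions, and the correction for expected similarity that yields KFuzzy is omitted.

```python
# Sketch of a per-cell fuzzy comparison of two categorical rasters: a cell is
# more similar if the other map has the same (or a related) category nearby,
# with similarity decaying over distance. The decay function and the
# category-similarity table are illustrative assumptions; the KFuzzy
# correction for expected similarity is omitted.
import numpy as np

def fuzzy_cell_similarity(a, b, row, col, radius=1, halving_distance=1.0,
                          cat_sim=None):
    """Similarity in [0, 1] of cell (row, col) of map a against map b."""
    cat_sim = cat_sim or {}
    best = 0.0
    rows, cols = b.shape
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            r, c = row + dr, col + dc
            if not (0 <= r < rows and 0 <= c < cols):
                continue
            pair = (a[row, col], b[r, c])
            s_cat = 1.0 if pair[0] == pair[1] else cat_sim.get(pair, 0.0)
            s_loc = 2.0 ** (-np.hypot(dr, dc) / halving_distance)
            best = max(best, s_cat * s_loc)
    return best

map_a = np.array([[1, 1, 2], [1, 2, 2], [3, 3, 2]])
map_b = np.array([[1, 2, 2], [1, 2, 2], [3, 2, 2]])
sim = np.array([[fuzzy_cell_similarity(map_a, map_b, r, c)
                 for c in range(3)] for r in range(3)])
print(sim.mean())  # cell-average similarity before any expected-value correction
```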

18.
Recommender systems are effective tools that help Internet users cope with information overload. In the field of geoscience data sharing, geoscience data have much richer spatio-temporal attributes than the content attributes of other items, which poses challenges for recommendation. Considering these characteristics, a dynamically weighted hybrid filtering method was developed for geoscience data-sharing recommendation services. The method uses collaborative filtering and content-based filtering separately to predict a user's interest in a data set, then computes the optimal weights with a trained model to obtain the final predicted rating. In the data acquisition stage, the Jenks Natural Break algorithm is applied to user access logs to derive each user's interest in the data sets from the access records. In the content-based filtering component, data similarity is computed from the spatial, temporal and content attributes of the data, and user interest is computed from the user's historical behaviour. In both the collaborative and the content-based components, the k-NN algorithm is used to predict ratings for data sets the user has not yet accessed, and the two predictions are combined in a weighted sum. A training set is used to model the relationship between the ideal weight and the user's co-rating level, and the fitted model is applied to adjust the weights of the hybrid filtering so as to obtain the optimal weighting equation. Test results show that the hybrid filtering method incorporating the spatio-temporal attributes of the data achieves significantly better precision and recall than collaborative filtering or content-based filtering alone.
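
A small sketch of the dynamic weighting step, assuming the collaborative-filtering and content-based scores have already been computed with k-NN; the weighting function below is a placeholder standing in for the model the paper fits between the ideal weight and the co-rating level.

```python
# Sketch of the dynamically weighted hybrid prediction: a collaborative-
# filtering score and a content-based score are blended with a weight that
# depends on the user's co-rating level. The weighting function is an
# illustrative assumption standing in for the paper's trained model.
def dynamic_weight(co_rating_level: float) -> float:
    """Weight given to collaborative filtering; grows with co-rating level."""
    return min(1.0, max(0.0, co_rating_level))   # placeholder for the fitted model

def hybrid_score(cf_score: float, cb_score: float, co_rating_level: float) -> float:
    w = dynamic_weight(co_rating_level)
    return w * cf_score + (1 - w) * cb_score

# A user with few co-rated data sets relies mostly on content-based filtering;
# a well-connected user relies mostly on collaborative filtering.
print(hybrid_score(cf_score=4.2, cb_score=3.1, co_rating_level=0.15))
print(hybrid_score(cf_score=4.2, cb_score=3.1, co_rating_level=0.85))
```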

19.
Childhood vaccination data are made available at a school level in some U.S. states. These data can be geocoded and may be considered as having a high spatial resolution. However, a school only represents the destination location for the set of students who actually reside and interact within some larger areal region, creating a spatial mismatch. Public school districts are often used to represent these regions, but fail to account for private schools and school of choice programs. We offer a new approach for estimating childhood vaccination coverage rates at a community level by integrating school level data with population commuting information. The resulting mobility-adjusted vaccine coverage estimates resolve the spatial mismatch problem and are more aligned with the geographic scale at which public health policies are implemented. We illustrate the utility of our approach using a case study on diphtheria, tetanus, and pertussis (DTP) vaccination coverage for kindergarten students in California. The modeled community-level DTP coverage estimates yield a statewide coverage of 92.37%, which is highly similar to the 92.44% coverage rate calculated from the original school-level data.
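
A hedged numerical sketch of the mobility adjustment: school-level coverage is reallocated to home communities in proportion to where each community's students go to school. The flow matrix and coverage values are invented for illustration and are not the California DTP data.

```python
# Sketch of the mobility adjustment described above: school-level vaccination
# coverage is reallocated to home communities using a student flow matrix.
# All numbers are invented for illustration only.
import numpy as np

# rows = home communities, columns = schools; entries = number of students
flows = np.array([[120.,  30.,   0.],
                  [ 40.,  90.,  20.],
                  [  0.,  10., 150.]])
school_coverage = np.array([0.96, 0.88, 0.91])   # share vaccinated at each school

# community coverage = enrolment-weighted mean of the schools its children attend
community_coverage = (flows * school_coverage).sum(axis=1) / flows.sum(axis=1)
print(np.round(community_coverage, 4))

# the overall rate is unchanged by the reallocation (same students are counted)
overall = (flows.sum(axis=0) * school_coverage).sum() / flows.sum()
print(round(overall, 4))
```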

20.
The vast accumulation of environmental data and the rapid development of geospatial visualization and analytical techniques make it possible for scientists to solicit information from local citizens to map spatial variation of geographic phenomena. However, data provided by citizens (referred to as citizen data in this article) suffer two limitations for mapping: bias in spatial coverage and imprecision in spatial location. This article presents an approach to minimizing the impacts of these two limitations of citizen data using geospatial analysis techniques. The approach reduces location imprecision by adopting a frequency-sampling strategy to identify representative presence locations from areas over which citizens observed the geographic phenomenon. The approach compensates for the spatial bias by weighting presence locations with cumulative visibility (the frequency at which a given location can be seen by local citizens). As a case study to demonstrate the principle, this approach was applied to map the habitat suitability of the black-and-white snub-nosed monkey (Rhinopithecus bieti) in Yunnan, China. Sightings of R. bieti were elicited from local citizens using a geovisualization platform and then processed with the proposed approach to predict a habitat suitability map. Presence locations of R. bieti recorded by biologists through intensive field tracking were used to validate the predicted habitat suitability map. Validation showed that the continuous Boyce index (Bcont(0.1)) calculated on the suitability map was 0.873 (95% CI: [0.810, 0.917]), indicating that the map was highly consistent with the field-observed distribution of R. bieti. Bcont(0.1) was much lower (0.173) for the suitability map predicted based on citizen data when location imprecision was not reduced and even lower (−0.048) when there was no compensation for spatial bias. This indicates that the proposed approach effectively minimized the impacts of location imprecision and spatial bias in citizen data and therefore effectively improved the quality of mapped spatial variation using citizen data. It further implies that, with the application of geospatial analysis techniques to properly account for limitations in citizen data, valuable information embedded in such data can be extracted and used for scientific mapping.

