Similar Documents
 20 similar documents found (search time: 31 ms)
1.
A spatial data set is consistent if it satisfies a set of integrity constraints. Although consistency is a desirable property of databases, enforcing the satisfaction of integrity constraints might not always be feasible. In such cases, the presence of inconsistent data may have a negative effect on the results of data analysis and processing; consequently, there is a clear need for data-cleaning tools to detect and, if possible, remove inconsistencies in large data sets. This work proposes strategies to support data cleaning of spatial databases with respect to a set of integrity constraints that impose topological relations between spatial objects. The basic idea is to rank the geometries in a spatial data set that should be modified to improve the quality of the data (in terms of consistency). An experimental evaluation validates the proposal and shows that the order in which geometries are modified affects both the overall quality of the database and the final number of geometries that must be processed to restore consistency.
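A minimal sketch of the violation-ranking idea, assuming Shapely is available; the single constraint ("parcels must not overlap"), the sample geometries and the function names are illustrative, not the paper's own strategies.

```python
# Sketch of constraint-violation ranking for spatial data cleaning
# (illustrative only; the paper's ranking strategies differ in detail).
from itertools import combinations
from shapely.geometry import Polygon

def violation_counts(geoms):
    """Count, per geometry, how many other geometries it overlaps."""
    counts = {i: 0 for i in range(len(geoms))}
    for i, j in combinations(range(len(geoms)), 2):
        if geoms[i].overlaps(geoms[j]):        # topological predicate
            counts[i] += 1
            counts[j] += 1
    return counts

def rank_for_cleaning(geoms):
    """Return geometry indices, most-violating first: repairing these first
    tends to restore consistency with fewer edited geometries."""
    counts = violation_counts(geoms)
    return sorted(counts, key=counts.get, reverse=True)

parcels = [Polygon([(0, 0), (2, 0), (2, 2), (0, 2)]),
           Polygon([(1, 1), (3, 1), (3, 3), (1, 3)]),        # overlaps both neighbours
           Polygon([(2.5, 2.5), (4, 2.5), (4, 4), (2.5, 4)])]
print(rank_for_cleaning(parcels))   # [1, 0, 2]
```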

2.
Multi-resolution spatial data commonly contain inconsistencies in topological, directional, and metric relations, caused by measurement methods, data acquisition approaches, and map generalization algorithms. Checking these inconsistencies is therefore critical for maintaining the integrity of multi-resolution or multi-source spatial data. To date, research has focused on topological consistency, while directional consistency across resolutions has been largely overlooked. In this study we develop computational methods that derive the direction relations between coarse spatial objects from the relations between the corresponding detailed objects. The consistency of direction relations across resolutions can then be evaluated by checking whether the derived relations are compatible with those computed directly from the coarse objects in the multi-resolution data. The methods explicitly model the scale effects on direction relations induced by the merging operator of map generalization, and are therefore efficient for consistency evaluation. Directional consistency is an essential complement to topological and object-based consistency.
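A rough sketch of deriving a direction relation for merged (coarse) objects from their detailed parts, using a simple projection-based 9-tile model on bounding boxes; the tile model, function names and data are assumptions, not the paper's derivation rules.

```python
# Idea: derive the direction relation of merged objects from their detailed
# parts, then compare it with the relation computed from the coarse data.

def bbox_union(boxes):
    """boxes: list of (xmin, ymin, xmax, ymax) of the detailed objects."""
    xs1, ys1, xs2, ys2 = zip(*boxes)
    return (min(xs1), min(ys1), max(xs2), max(ys2))

def direction_tiles(ref, tgt):
    """Tiles of the reference's 3x3 direction frame intersected by the target
    box.  Rows: S/M/N (south/middle/north), columns: W/M/E; 'MM' means the
    target overlaps the reference's own tile."""
    rx1, ry1, rx2, ry2 = ref
    tx1, ty1, tx2, ty2 = tgt
    big = 1e30
    cols = [c for c, (lo, hi) in zip("WME", [(-big, rx1), (rx1, rx2), (rx2, big)])
            if tx1 < hi and tx2 > lo]
    rows = [r for r, (lo, hi) in zip("SMN", [(-big, ry1), (ry1, ry2), (ry2, big)])
            if ty1 < hi and ty2 > lo]
    return {r + c for r in rows for c in cols}

# detailed objects that will each be merged into one coarse object
ref_parts = [(0, 0, 2, 2), (2, 0, 4, 1)]
tgt_parts = [(5, 3, 6, 4), (6, 4, 7, 6)]

derived = direction_tiles(bbox_union(ref_parts), bbox_union(tgt_parts))
print(derived)   # {'NE'}: the merged target lies north-east of the merged reference
# Consistency check: the relation computed directly from the coarse data set
# should be compatible with `derived`.
```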

3.
4.
This paper introduces a new compact topological 3D data structure. The proposed method models the real world as a complete decomposition of space, and this subdivision is represented by a constrained tetrahedral network (TEN). Operators and definitions from the mathematical field of simplicial homology are used to define and handle this TEN structure. Only tetrahedra need to be stored explicitly in a (single-column) database table, while all simplexes of lower dimensions, constraints and topological relationships can be derived in views. As a result, the data structure is relatively compact and easy to update, while it still offers favourable computational characteristics as well as the presence of topological relationships.
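To illustrate the "store only tetrahedra, derive the rest" idea, a small Python sketch stands in for the database views the paper uses; the table contents and identifiers are invented.

```python
# A tetrahedron is a 4-tuple of vertex ids; triangles, edges and tetra-tetra
# adjacency are derived on the fly rather than stored.
from itertools import combinations
from collections import defaultdict

tetrahedra = {                       # the only explicitly stored table
    "t1": (1, 2, 3, 4),
    "t2": (2, 3, 4, 5),
}

def derived_simplexes(tets):
    triangles, edges = defaultdict(set), set()
    for tid, verts in tets.items():
        for tri in combinations(sorted(verts), 3):
            triangles[tri].add(tid)             # triangle -> incident tetrahedra
        edges.update(combinations(sorted(verts), 2))
    return triangles, edges

triangles, edges = derived_simplexes(tetrahedra)
boundary = [tri for tri, tids in triangles.items() if len(tids) == 1]
adjacent = [tids for tids in triangles.values() if len(tids) == 2]
print(len(triangles), len(edges), len(boundary), adjacent)
# 7 triangles, 9 edges, 6 boundary faces; t1 and t2 share face (2, 3, 4)
```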

5.
The accuracy of old maps can hold interesting historical information, and is therefore studied using distortion analysis methods. These methods start from a set of ground control points that are identified both on the old map and on a modern reference map or globe, and conclude with techniques that compute and visualise distortion. Such techniques have advanced over the years, but leave room for improvement, as the current ones result in approximate values and a coarse spatial resolution. We propose a more elegant and more accurate way to compute the distortion of old maps by translating the technique of differential distortion analysis, used in map projection theory, to the setting where an old map and a reference map are compared directly. This enables the application of various useful distortion metrics to the study of old maps, such as the area scale factor, the maximum angular distortion and the Tissot indicatrices. Because such a technique is always embedded in a full distortion analysis method, we start by putting forward an optimal analysis method for a general-purpose study, which then serves as the foundation for the development of our technique. To that end, we discuss the structure of distortion analysis methods and the various options available for every step of the process, including the different settings in which the old map can be compared to its modern counterpart, the techniques that can be used to interpolate between both, and the techniques available to compute and visualise the distortion. We conclude by applying our general-purpose method, including the differential distortion analysis technique, to an example map that has also been used in other studies.
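The differential core of the technique can be sketched as follows: given a smooth mapping f from reference coordinates to old-map coordinates (in the paper obtained by interpolating between ground control points, omitted here), the Jacobian at a point yields the Tissot semi-axes, the area scale factor and the maximum angular distortion. The numerical-differentiation approach and the toy mapping below are illustrative.

```python
import numpy as np

def distortion_at(f, x, y, h=1e-5):
    """Tissot parameters of the mapping f at (x, y), by numerical
    differentiation: semi-axes a >= b, area scale factor a*b, and the
    maximum angular distortion 2*arcsin((a - b)/(a + b))."""
    J = np.empty((2, 2))
    J[:, 0] = np.subtract(f(x + h, y), f(x - h, y)) / (2 * h)
    J[:, 1] = np.subtract(f(x, y + h), f(x, y - h)) / (2 * h)
    a, b = np.linalg.svd(J, compute_uv=False)      # singular values, a >= b
    omega = 2 * np.degrees(np.arcsin((a - b) / (a + b)))
    return a, b, a * b, omega

# toy stand-in for "old-map coordinates as a function of reference coordinates":
# an east-west stretch by 2 plus a slight shear
f = lambda x, y: (2 * x + 0.1 * y, y)
print(distortion_at(f, 10.0, 20.0))
# roughly (2.003, 0.998, 2.000, 39.1 degrees)
```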

6.
Environmental simulation models need automated geographic data reduction methods to optimize the use of high-resolution data in complex environmental models. Advanced map generalization methods have been developed for multiscale geographic data representation. In map generalization, positional, geometric and topological constraints are the focus, in order to improve map legibility and the communication of geographic semantics. In the context of environmental modelling, domain criteria and constraints must be considered in addition to the spatial criteria. Currently, in the absence of domain-specific generalization methods, modellers resort to ad hoc manual digitization or use the cartographic methods available in off-the-shelf software. Such manual methods are not feasible when large data sets must be processed, limiting modellers to single-scale representations. Automated map generalization methods can rarely be used with confidence because simplified data sets may violate domain semantics and may also result in suboptimal model performance. For best modelling results, it is necessary to prioritize domain criteria and constraints during data generalization. Modellers should also be able to automate the generalization techniques and explore the trade-off between model efficiency and model simulation quality for alternative versions of input geographic data at different geographic scales. Based on our long-term research with experts in the analytic element method of groundwater modelling, we developed the multicriteria generalization (MCG) framework as a constraint-based approach to automated geographic data reduction. The MCG framework is based on the spatial multicriteria decision-making paradigm, since multiscale data modelling is too complex to be fully automated and should be driven by modellers at each stage. Apart from a detailed discussion of the theoretical aspects of the MCG framework, we discuss two groundwater data modelling experiments that demonstrate that MCG is not just a framework for automated data reduction but an approach for systematically exploring model performance at multiple geographic scales. The experimental results clearly indicate the benefits of MCG-based data reduction and encourage us to continue expanding the scope of MCG and implementing it in multiple application domains.

7.
Spatial data uncertainty models (SDUM) are necessary tools that quantify the reliability of results from geographical information system (GIS) applications. One technique used by SDUM is Monte Carlo simulation, which quantifies spatial data and application uncertainty by determining the possible range of application results. A complete Monte Carlo SDUM for generalized continuous surfaces typically has three components: an error magnitude model, a spatial statistical model defining error shapes, and a heuristic that creates multiple realizations of error fields added to the generalized elevation map. This paper introduces a spatial statistical model that represents multiple statistics simultaneously, weighted against each other. The paper's case study builds an SDUM for a digital elevation model (DEM). The case study accounts for relevant shape patterns in elevation errors by reintroducing specific topological shapes, such as ridges and valleys, in appropriate localized positions. The spatial statistical model also minimizes topological artefacts, such as cells without outward drainage and inappropriate gradient distributions, which are frequent problems with random-field-based SDUM. Multiple weighted spatial statistics enable two conflicting SDUM philosophies to co-exist: 'errors are only measured from higher quality data' and 'SDUM need to model reality'. This article uses an automatic parameter-fitting random-field model to initialize the Monte Carlo input realizations, followed by an inter-map cell-swapping heuristic that adjusts the realizations to fit multiple spatial statistics. The inter-map cell-swapping heuristic allows spatial data uncertainty modelers to choose the appropriate probability model and the weighted multiple spatial statistics which best represent errors caused by map generalization. The article also presents a lag-based measure to better represent gradient within an SDUM, and covers the inter-map cell-swapping heuristic as well as both the probability and spatial statistical models in detail.
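A bare-bones sketch of the cell-swapping idea, with a single spatial statistic (lag-1 correlation along rows) standing in for the paper's weighted set of statistics; the parameters, names and greedy acceptance rule are simplifications.

```python
# Perturb an initial random-field error realization by swapping cell values;
# keep a swap only if it moves the spatial statistic closer to its target.
import numpy as np

rng = np.random.default_rng(0)

def lag1_corr(z):
    a, b = z[:, :-1].ravel(), z[:, 1:].ravel()
    return np.corrcoef(a, b)[0, 1]

def swap_to_fit(error, target_corr, n_iter=20000):
    err = error.copy()
    misfit = abs(lag1_corr(err) - target_corr)
    for _ in range(n_iter):
        # pick two random cells (square grid assumed for brevity)
        (i1, j1), (i2, j2) = rng.integers(0, err.shape[0], (2, 2))
        err[i1, j1], err[i2, j2] = err[i2, j2], err[i1, j1]      # trial swap
        new_misfit = abs(lag1_corr(err) - target_corr)
        if new_misfit <= misfit:
            misfit = new_misfit                                   # keep swap
        else:
            err[i1, j1], err[i2, j2] = err[i2, j2], err[i1, j1]   # revert
    return err

initial = rng.normal(0, 1.0, (32, 32))        # uncorrelated error realization
fitted = swap_to_fit(initial, target_corr=0.6)
print(lag1_corr(initial), lag1_corr(fitted))  # second value is closer to 0.6
```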

8.
Topology is a central, defining feature of geographical information systems (GIS). The advantages of topological data structures are that data storage for polygons is reduced because boundaries between adjacent polygons are not stored twice, explicit adjacency relations are maintained, and data entry and map production are improved by providing a rigorous, automated method for handling the artifacts of digitizing. What, then, explains the resurgence of non-topological data structures, and why do contemporary desktop GIS packages support them? The historical development of geographical data structures is examined to provide a context for identifying the advantages and disadvantages of topological and non-topological data structures. Although explicit storage of adjacent features increases the performance of adjacency analyses, it is not required to conduct these operations. Non-topological data structures can represent features that conform to planar graph theory (i.e. non-overlapping, space-filling polygons). A data structure that can represent proximal and directional spatial relations, in addition to topological relationships, is described. This extension allows a broader set of functional relationships and connections between geographical features to be explicitly represented.
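A tiny illustration of the arc-node (topological) storage idea discussed above: each shared boundary is stored once with its left and right polygons, so adjacency can be read directly from the table; the arcs and polygon labels are invented.

```python
# Two adjacent unit squares A and B; their shared boundary is stored only once.
arcs = {
    # arc_id: (vertex chain, polygon on the left, polygon on the right)
    "a1": ([(1, 0), (0, 0), (0, 1), (1, 1)], None, "A"),   # outer ring of A
    "a2": ([(1, 1), (2, 1), (2, 0), (1, 0)], None, "B"),   # outer ring of B
    "a3": ([(1, 0), (1, 1)], "A", "B"),                    # shared boundary
}

def neighbours(polygon):
    """Polygons adjacent to `polygon`, read directly from the arc table,
    with no geometric computation."""
    out = set()
    for _, left, right in arcs.values():
        if polygon in (left, right):
            out.add(right if left == polygon else left)
    out.discard(None)
    out.discard(polygon)
    return out

print(neighbours("A"))   # {'B'}
```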

9.
Abstract

This paper describes a regional geographical information system (GIS) for some Mediterranean benthic communities. The area covered by the GIS lies between the cities of La Ciotat and Giens in southeast France. The distinctive characteristic of this GIS, compared with those usually described in the literature, is that all its layers describe the same theme, but as surveyed at different times, at different scales, and with different techniques by different oceanographers. A method was devised to synthesize, on a pixel basis, the content of all these layers. Each pixel within each layer is weighted with a function relating to the year of survey, the sampling technique and the scale of the original map corresponding to that layer. The synthesis map is composed of the highest-weighted values found in the set of layers. At each pixel, conflicts between the contents of the layers are also quantified and mapped.
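A compact sketch of the per-pixel synthesis and conflict mapping with NumPy; the layer values and weights are made up, and the real weighting function (of survey year, technique and source scale) is not reproduced.

```python
import numpy as np

layers = np.array([                  # 0 = no data, 1..3 = community class codes
    [[1, 1], [0, 2]],                # layer from an older survey
    [[1, 2], [2, 2]],                # layer from a recent, detailed survey
    [[3, 2], [2, 0]],                # layer from an intermediate survey
])
weights = np.array([0.4, 0.8, 0.6])  # per-layer weight (year/technique/scale)

# per pixel, take the class from the highest-weighted layer that has data
masked = np.where(layers > 0, weights[:, None, None], -np.inf)
best = masked.argmax(axis=0)
synthesis = np.take_along_axis(layers, best[None], axis=0)[0]

# per pixel, count layers that have data but disagree with the synthesis value
conflict = ((layers != synthesis) & (layers > 0)).sum(axis=0)

print(synthesis)   # [[1 2] [2 2]]
print(conflict)    # [[1 1] [0 0]]
```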

10.
Existing sensor network query processors (SNQPs) have demonstrated that in-network processing is an effective and efficient means of interacting with wireless sensor networks (WSNs) for data collection tasks. Inspired by these findings, this article investigates whether spatial analysis over WSNs can be built upon established distributed query processing techniques, with the emphasis here on the spatial aspects of sensed data, which are not adequately addressed by existing SNQPs. By spatial analysis, we mean the ability to detect topological relationships between spatially referenced entities (e.g. whether mist intersects a vineyard or is disjoint from it) and to derive representations grounded on such relationships (e.g. the geometrical extent of that part of a vineyard that is covered by mist). To support the efficient representation, querying and manipulation of spatial data, we use an algebraic approach. We revisit a previously proposed centralized spatial algebra comprising a set of spatial data types and a comprehensive collection of operations. We have redefined and re-conceptualized the algebra for distributed evaluation and shown that it can be implemented efficiently for in-network execution. The article provides rigorous, formal definitions of the spatial data types (points, lines and regions), together with spatial-valued and topological operations over them, and shows how the algebra can be used to characterize complex and expressive topological relationships between spatial entities and spatial phenomena that, due to their dynamic, evolving nature, cannot be represented a priori.

11.
The utility of nonmetric multidimensional-scaling techniques is demonstrated for the collection and analysis of environmental-cognition data. By comparing the multidimensional-scaling solutions of a real-setting map to scaling solutions for sketch maps and two psychophysical distance-scaling procedures, we demonstrate that magnitude estimation of actual interpoint distances is comparable in accuracy to sketch maps produced without constraints, or produced when subjects are given a specified list of landmarks to include on their maps. Triadic comparisons of actual interpoint distances were less accurate than the other three techniques.

12.
Abstract

A simple, exemplary system is described that performs reasoning about the spatial relationships between members of a set of spatial objects. The main problem of interest is to make sound and complete inferences about the set of all spatial relationships that hold between the objects, given prior information about a subset of the relationships. The spatial inferences are formalized within the framework of relation algebra and procedurally implemented in terms of constraint satisfaction procedures. Although the approach is general, the particular example employs a new 'complete' set of topological relationships that has been published elsewhere. In particular, a relation algebra for these topological relations is developed and a computational implementation of this algebra is described. Systems with such reasoning capabilities have many applications in geographical analysis and could be usefully incorporated into geographical information systems and related systems.
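The constraint-satisfaction machinery can be sketched as a path-consistency loop over a composition table. For brevity, a three-relation point algebra stands in below for the paper's topological relation algebra; only the composition table would change.

```python
# Relations between objects are sets of base relations; path consistency
# tightens R(i,k) with the composition R(i,j) o R(j,k).
from itertools import product

COMP = {('<', '<'): {'<'}, ('<', '='): {'<'}, ('<', '>'): {'<', '=', '>'},
        ('=', '<'): {'<'}, ('=', '='): {'='}, ('=', '>'): {'>'},
        ('>', '<'): {'<', '=', '>'}, ('>', '='): {'>'}, ('>', '>'): {'>'}}
ALL = {'<', '=', '>'}

def compose(r1, r2):
    return set().union(*(COMP[p] for p in product(r1, r2)))

def path_consistency(n, constraints):
    """constraints: {(i, j): set of base relations}; returns the tightened network."""
    R = {(i, j): constraints.get((i, j), ALL)
         for i in range(n) for j in range(n) if i != j}
    changed = True
    while changed:
        changed = False
        for i, j, k in product(range(n), repeat=3):
            if len({i, j, k}) < 3:
                continue
            tightened = R[(i, k)] & compose(R[(i, j)], R[(j, k)])
            if tightened != R[(i, k)]:
                R[(i, k)], changed = tightened, True
    return R

# given A < B and B < C (plus converses), infer the relation between A and C
net = path_consistency(3, {(0, 1): {'<'}, (1, 0): {'>'},
                           (1, 2): {'<'}, (2, 1): {'>'}})
print(net[(0, 2)])   # {'<'}
```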

13.
Abstract

Visual interpretation of high-resolution satellite data has been useful for mapping linear features such as roads and for updating land-use changes. However, it would be beneficial to map new road networks digitally from satellite data using semi-automated techniques in order to update digital databases. In this paper, an algorithm called Gradient Direction Profile Analysis (GDPA) is used to extract road networks digitally from SPOT High Resolution Visible (HRV) panchromatic data. The roads generated are compared with a visual interpretation of the SPOT HRV multispectral and panchromatic data. The technique is most effective in areas where road development is relatively recent, owing to the spectral consistency of new road networks. As new road networks are those of most interest to the land manager, this is a useful technique for updating digital road network files within a geographical information system for urban areas.

14.
This paper introduces the concept of the smooth topological Generalized Area Partitioning (tGAP) structure represented by a space-scale partition, which we term the space-scale cube. We take the view that map generalization is 'the extrusion of data into an additional dimension'. For 2D objects the resulting vario-scale representation is a 3D structure, while for 3D objects the result is a 4D structure.

This paper provides insights into: (1) creating valid data for the cube, with proof that this is always possible for the implemented 2D tGAP generalization operators (line simplification, merge and split/collapse), (2) obtaining a valid 2D polygonal map representation at an arbitrary scale from the cube, (3) using the vario-scale structure to provide smooth zoom and progressive transfer between server and client, (4) exploring which other possibilities the cube offers for obtaining maps with non-homogeneous scales over their domain (which we term mixed-scale maps), and (5) applying the same principles to higher-dimensional data, illustrated with 3D input data represented in a 4D hypercube.

The proposed structure has significant advantages over existing multi-scale/multi-representation solutions (in addition to being truly vario-scale): (1) owing to the tight integration of space and scale, consistency between scales is guaranteed, (2) smooth zoom is relatively easy to implement, and (3) a compact, object-oriented encoding is provided for the complete scale range.
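A much-simplified sketch of "slicing the space-scale cube at a chosen scale": in the real structure each object is extruded over its scale range and the slice is a geometric intersection; here each object only carries a scale interval, so the slice reduces to a selection. All names and values are illustrative.

```python
from dataclasses import dataclass

@dataclass
class CubeObject:
    name: str
    geometry: object          # 2D footprint (e.g. a polygon), not used here
    s_low: float              # scale at which the object appears...
    s_high: float             # ...and at which it is merged away or simplified

cube = [
    CubeObject("parcel_17",          None, 0.0, 0.4),
    CubeObject("parcel_18",          None, 0.0, 0.4),
    CubeObject("merged_block_5",     None, 0.4, 0.9),   # result of merging 17+18
    CubeObject("generalised_area_2", None, 0.9, 1.0),
]

def slice_at(cube, s):
    """2D map content valid at scale parameter s (0 = most detailed)."""
    return [o.name for o in cube if o.s_low <= s < o.s_high]

print(slice_at(cube, 0.2))   # ['parcel_17', 'parcel_18']
print(slice_at(cube, 0.5))   # ['merged_block_5']
```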


15.


16.
17.
This research is motivated by the need for 3D GIS data models that allow 3D spatial query, analysis and visualization of the subunits and internal network structure of 'micro-spatial environments' (the 3D spatial structure within buildings). It explores a new way of representing the topological relationships among 3D geographical features such as buildings and their internal partitions or subunits. The 3D topological data model is called the combinatorial data model (CDM). It is a logical data model that simplifies and abstracts the complex topological relationships among 3D features through a hierarchical network structure called the node-relation structure (NRS). This logical network structure is abstracted using the property of Poincaré duality, and is modelled and presented in the paper using graph-theoretic formalisms. The model was implemented with real data to evaluate its effectiveness in performing 3D spatial queries and visualization.
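A small sketch of the node-relation-structure idea: subunits become nodes of a dual graph and shared surfaces (doors, stairs, openings) become edges, so 3D connectivity queries reduce to graph traversal; the building layout and identifiers below are invented.

```python
from collections import deque

# adjacency list of the dual graph (edges = traversable connections)
nrs = {
    "room_101": ["corridor_1"],
    "room_102": ["corridor_1"],
    "corridor_1": ["room_101", "room_102", "stair_A"],
    "stair_A": ["corridor_1", "corridor_2"],
    "corridor_2": ["stair_A", "room_201"],
    "room_201": ["corridor_2"],
}

def route(graph, start, goal):
    """Breadth-first search: a minimal 3D connectivity query on the dual graph."""
    prev, frontier = {start: None}, deque([start])
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in graph[node]:
            if nxt not in prev:
                prev[nxt] = node
                frontier.append(nxt)
    return None

print(route(nrs, "room_101", "room_201"))
# ['room_101', 'corridor_1', 'stair_A', 'corridor_2', 'room_201']
```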

18.
The automation of cartographic map production is still an important research field in Geographical Information Systems (GIS). With the increasing development of monitoring and decision-aid systems on both computer networks and wireless networks, efficient methods are needed to visualise geographical data while respecting application constraints (accuracy, legibility, security, etc.). This paper introduces a B-spline snake model to handle the operators involved in the cartographic generalisation of lines. The model makes it possible to apply those operators within a continuous framework. In order to avoid local conflicts such as intersections or self-intersections, the consistency of the lines is checked and discrete operations such as segment removal are performed during the process. We apply the method to map production in the highly constrained domain of maritime navigation systems. Experimental results on marine chart generalisation are discussed with respect to generalisation robustness and quality.
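A very small sketch of the underlying energy-minimisation idea: interior vertices are relaxed toward the midpoint of their neighbours (smoothness term) while being pulled back toward their original positions (fidelity term). This is a plain discrete relaxation, not the paper's B-spline snake, and the consistency checks it describes (intersection and self-intersection handling) are omitted.

```python
import numpy as np

def relax_line(points, alpha=0.5, beta=0.3, n_iter=50):
    """points: (n, 2) array of vertex coordinates; endpoints stay fixed."""
    original = np.asarray(points, dtype=float)
    line = original.copy()
    for _ in range(n_iter):
        midpoints = 0.5 * (line[:-2] + line[2:])               # neighbour average
        line[1:-1] += alpha * (midpoints - line[1:-1])          # smoothness pull
        line[1:-1] += beta * (original[1:-1] - line[1:-1])      # fidelity pull
    return line

zigzag = np.array([[0, 0], [1, 1], [2, -1], [3, 1], [4, 0]], dtype=float)
print(relax_line(zigzag).round(2))     # smoother, less oscillatory vertices
```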

19.
The map is a medium for recording geographical information, and the information content of a map is of interest to spatial information scientists. In this paper, existing quantitative measures of map information are evaluated. It is pointed out that these are measures only of statistical information and, to some extent, of topological information; they do not take into account the space occupied by map symbols or the spatial distribution of those symbols. Consequently, a set of new quantitative measures is proposed for metric, topological and thematic information, and an experimental evaluation is conducted. The results show that the metric measure is more meaningful than the statistical one, and that the new index for topological information is more meaningful than the existing one. It is also found that the new measure for thematic information is useful in practice.
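One plausible way to contrast statistical information with a space-aware metric measure is to weight symbols by the area of their Voronoi regions before taking the entropy; the sketch below assumes SciPy is available and is not necessarily the authors' exact formulation.

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0] / p.sum()
    return -(p * np.log2(p)).sum()

symbols = np.array([[1, 1], [1, 5], [5, 1], [5, 5], [3, 3], [2.9, 3.1]])

# statistical information: every symbol counts equally
statistical = entropy(np.ones(len(symbols)))

# metric information: crowded symbols share space and carry less information
vor = Voronoi(symbols)
areas = []
for region_index in vor.point_region:
    region = vor.regions[region_index]
    if -1 in region or len(region) == 0:
        areas.append(4.0)            # unbounded cell: clipped to a nominal area
    else:                            # (a real measure would clip to the map frame)
        areas.append(ConvexHull(vor.vertices[region]).volume)
metric = entropy(areas)

print(statistical, metric)   # metric value is lower because two symbols are clustered
```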

20.
Building generalization is a difficult operation, owing to the complexity of the spatial distribution of buildings and to the demands of spatial pattern recognition. In this study, building generalization is decomposed into two steps: building grouping and generalization execution. The neighbourhood model in urban morphology provides global constraints for guiding the partitioning of building sets across the whole map by means of roads and rivers, by which enclaves, blocks, superblocks or neighbourhoods are formed, whereas local constraints from Gestalt principles provide criteria for the further grouping of these enclaves, blocks, superblocks and/or neighbourhoods. In the grouping process, graph theory, Delaunay triangulation and the Voronoi diagram are employed as supporting techniques. After grouping, useful information, such as the total building area, the mean separation and the standard deviation of building separation, is attached to each group. By means of the attached information, an appropriate operation is selected to generalize the corresponding groups. The methodology thus brings together a number of well-developed theories and techniques, including graph theory, Delaunay triangulation, the Voronoi diagram, urban morphology and Gestalt theory, in such a way that multiscale products can be derived.
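A sketch of the proximity-based grouping step with SciPy: Delaunay triangulation of building centroids, removal of edges longer than a separation threshold (a crude stand-in for the Gestalt proximity criterion), and connected components as groups; the threshold, centroids and returned statistics are illustrative.

```python
import numpy as np
from scipy.spatial import Delaunay

centroids = np.array([[0, 0], [1, 0.5], [0.5, 1],      # a tight cluster
                      [6, 6], [7, 6.5],                 # a second cluster
                      [12, 0]])                         # an isolated building

def group_buildings(pts, max_sep=2.0):
    tri = Delaunay(pts)
    parent = list(range(len(pts)))          # union-find over building indices
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    edges = {tuple(sorted(e)) for s in tri.simplices
             for e in zip(s, np.roll(s, 1))}
    seps = []
    for i, j in edges:
        d = np.linalg.norm(pts[i] - pts[j])
        if d <= max_sep:                     # Gestalt-style proximity criterion
            parent[find(i)] = find(j)        # union: same group
            seps.append(d)
    groups = {}
    for i in range(len(pts)):
        groups.setdefault(find(i), []).append(i)
    # mean/std of within-group separations would feed operator selection
    return list(groups.values()), np.mean(seps), np.std(seps)

groups, mean_sep, std_sep = group_buildings(centroids)
print(groups)        # [[0, 1, 2], [3, 4], [5]]
```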
