Similar Documents
1.
2.
Abstract

This study examines the propagation of thematic error through GIS overlay operations. Existing error propagation models for these operations are shown to yield results that are inconsistent with actual levels of propagation error. An alternative model is described that yields more consistent results. This model is based on the frequency of errors of omission and commission in the input data. Model output can be used to compute a variety of error indices for data derived from different overlay operations.
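The composition rule at the heart of such a model can be sketched as follows; this is a minimal illustration that assumes independent per-layer errors, which is not necessarily the paper's model, and the function names are invented for the example.

```python
def composite_omission(omission_rates):
    """Probability that a feature truly present in all input layers is
    omitted from an AND (intersection) overlay: it survives only if no
    layer omits it (assumes independent per-layer omission errors)."""
    keep = 1.0
    for o in omission_rates:
        keep *= (1.0 - o)  # chance this layer retains the feature
    return 1.0 - keep

def composite_commission(commission_rates):
    """Probability that a feature absent from every input layer is
    committed into the intersection: every layer must commit it
    (again assuming independence)."""
    p = 1.0
    for c in commission_rates:
        p *= c
    return p
```

Note how the two indices behave oppositely: omission error grows with the number of layers overlaid, while commission error shrinks.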

3.
Abstract

This paper describes an inductive modelling procedure integrated with a geographical information system for analysis of pattern within spatial data. The aim of the modelling procedure is to predict the distribution within one data set by combining a number of other data sets. Data set combination is carried out using Bayes' theorem. Inputs to the theorem, in the form of conditional probabilities, are derived from an inductive learning process in which attributes of the data set to be modelled are compared with attributes of a variety of predictor data sets. This process is carried out on random subsets of the data to generate error bounds on inputs for analysis of error propagation associated with the use of Bayes' theorem to combine data sets in the GIS. The statistical significance of model inputs is calculated as part of the inductive learning process. Use of the modelling procedure is illustrated through the analysis of the winter habitat relationships of red deer in Grampian Region, north-east Scotland. The distribution of red deer in Deer Management Group areas in Gordon and in Kincardine and Deeside Districts is used to develop a model which predicts the distribution throughout Grampian Region; this is tested against red deer distribution in Moray District. Habitat data sets used for constructing the model are accumulated frost and altitude, obtained from maps, and land cover, derived from satellite imagery. Errors resulting from the use of Bayes' theorem to combine data sets within the GIS, and those introduced in generalizing output from 50 m pixel resolution to 1 km grid squares, are analysed and presented in a series of maps. This analysis of error trains is an integral part of the implemented analytical procedure and provides support for the interpretation of the results of modelling. Potential applications of the modelling procedure are discussed.
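Bayesian combination of predictor layers is often done in odds form under a conditional-independence simplification; the sketch below illustrates that mechanic, and `bayes_combine` with its likelihood-ratio inputs is an illustration, not the paper's implementation.

```python
def bayes_combine(prior, likelihood_ratios):
    """Combine predictor data sets via Bayes' theorem, assuming
    conditional independence of predictors given the target class:
    posterior odds = prior odds x product of likelihood ratios.
    Each likelihood ratio is P(attribute | present) / P(attribute)."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)  # back from odds to probability
```

With a neutral likelihood ratio of 1.0 the prior is returned unchanged; ratios above 1 raise the predicted probability of presence, ratios below 1 lower it.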

4.
Abstract

Kriging is an optimal method of spatial interpolation that produces an error for each interpolated value. Block kriging is a form of kriging that computes averaged estimates over blocks (areas or volumes) within the interpolation space. If this space is sampled sparsely, and divided into blocks of a constant size, a variable estimation error is obtained for each block, with blocks near to sample points having smaller errors than blocks farther away. An alternative strategy for sparsely sampled spaces is to vary the sizes of blocks in such a way that a block's interpolated value is just sufficiently different from that of an adjacent block given the errors on both blocks. This has the advantage of increasing spatial resolution in many regions, and conversely reducing it in others where maintaining a constant size of block is unjustified (hence achieving data compression). Such a variable subdivision of space can be achieved by regular recursive decomposition using a hierarchical data structure. An implementation of this alternative strategy employing a split-and-merge algorithm operating on a hierarchical data structure is discussed. The technique is illustrated using an oceanographic example involving the interpolation of satellite sea surface temperature data. Consideration is given to the problem of error propagation when combining variable resolution interpolated fields in GIS modelling operations.
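A toy one-dimensional version of the split criterion can be sketched as below. It assumes a crude inverse-distance estimator as a stand-in for block kriging and an invented distance-based error proxy; the point of the example is the split test itself: a block subdivides only while adjacent halves differ by more than their combined errors.

```python
import math

def estimate(samples, x0, x1):
    """Crude block estimate and error proxy for interval [x0, x1]:
    inverse-distance-weighted mean of (x, z) samples, with 'error'
    taken as distance to the nearest sample (illustrative only)."""
    c = (x0 + x1) / 2.0
    w = [1.0 / (abs(x - c) + 1e-6) for x, _ in samples]
    z = sum(wi * zi for wi, (_, zi) in zip(w, samples)) / sum(w)
    err = min(abs(x - c) for x, _ in samples)
    return z, err

def split(samples, x0, x1, depth=0, max_depth=4):
    """Recursively subdivide a block while its two halves differ by
    more than their combined error (split-and-merge style sketch).
    Returns a list of (x0, x1, estimate, error) leaf blocks."""
    mid = (x0 + x1) / 2.0
    zl, el = estimate(samples, x0, mid)
    zr, er = estimate(samples, mid, x1)
    if depth < max_depth and abs(zl - zr) > math.hypot(el, er):
        return (split(samples, x0, mid, depth + 1, max_depth)
                + split(samples, mid, x1, depth + 1, max_depth))
    z, e = estimate(samples, x0, x1)
    return [(x0, x1, z, e)]
```

Flat data yields a single coarse block (data compression), while a sharp step forces local subdivision (higher resolution where justified).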

5.
Abstract

Recent developments in theory and computer software mean that it is now relatively straightforward to evaluate how attribute errors are propagated through quantitative spatial models in GIS. A major problem, however, is to estimate the errors associated with the inputs to these spatial models. A first approach is to use the root mean square error, but in many cases it is better to estimate the errors from the degree of spatial variation and the method used for mapping. It is essential to decide at an early stage whether one should use a discrete model of spatial variation (DMSV—homogeneous areas, abrupt boundaries), a continuous model (CMSV—a continuously varying regionalized variable field) or a mixture of both (MMSV—mixed model of spatial variation). Maps of predictions and prediction error standard deviations are different in all three cases, and it is crucial for error estimation which model of spatial variation is used. The choice of model has not yet been studied in sufficient depth, but can be based on prior information about the kinds of spatial processes and patterns that are present, or on validation results. When undetermined, it is sensible to adopt the MMSV in order to bypass the rigidity of the DMSV and CMSV. These issues are explored and illustrated using data on the mean highest groundwater level in a polder area in the Netherlands.

6.
Abstract

Rule-based classifiers are used regularly with geographical information systems to map categorical attributes on the basis of a set of numeric or unordered categorical attributes. Although a variety of methods exist for inducing rule-based classifiers from training data, these tend to produce large numbers of rules when the data contain noise. This paper describes a method for inducing compact rule-sets whose classification accuracy can, at least in some domains, compare favourably with that achieved by larger, less succinct rule-sets produced by alternative methods. One rule is induced for each output class. The condition list for this rule represents a box in n-dimensional attribute space, formed by intersecting conditions which exclude other classes. Despite this simplicity, the classifier performed well in the test application: prediction of soil classes in the Port Hills, New Zealand, on the basis of regolith type and topographic attributes obtained from a digital terrain model.
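A deliberately simplified sketch of the one-box-per-class idea follows. It uses each class's bounding box in attribute space rather than the paper's intersection of conditions that exclude other classes, so treat it as an approximation of the method; the function names are invented.

```python
def induce_boxes(X, y):
    """One axis-aligned box per class: here simply the bounding box
    of that class's training samples (a simplification of the paper's
    rule-induction scheme)."""
    boxes = {}
    for xi, yi in zip(X, y):
        lo, hi = boxes.setdefault(yi, ([*xi], [*xi]))
        for d, v in enumerate(xi):
            lo[d] = min(lo[d], v)
            hi[d] = max(hi[d], v)
    return boxes

def classify(boxes, x):
    """Assign the class whose box contains x; None when the point
    falls in no box or in several (ambiguous)."""
    inside = [c for c, (lo, hi) in boxes.items()
              if all(l <= v <= h for l, v, h in zip(lo, x, hi))]
    return inside[0] if len(inside) == 1 else None
```

One rule (box) per output class keeps the rule-set compact regardless of training-set size, which is the property the abstract emphasises.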

7.
8.
The calculation of surface area is meaningful for a variety of space-filling phenomena, e.g., the packing of plants or animals within an area of land. With Digital Elevation Model (DEM) data we can calculate the surface area by using a continuous surface model, such as a Triangulated Irregular Network (TIN). However, as with the triangle-based surface area discussed in this paper, such estimates are generally biased because the surface area is a nonlinear function of DEM data that contain measurement errors. To reduce the bias in the surface area, we propose a second-order bias correction by applying nonlinear error propagation to the triangle-based surface area. This process reveals that the random errors in the DEM data result in a bias in the triangle-based surface area, while the systematic errors in the DEM data can be reduced by using the height differences. The bias is theoretically given by a probability integral which can be approximated by numerical approaches including numerical integration and the Monte Carlo method; but these approaches need a theoretical distribution assumption about the DEM measurement errors, and have a very high computational cost. In most cases, we only have variance information on the measurement errors; thus, a bias estimation based on nonlinear error propagation is proposed. Based on the second-order bias estimation proposed, the variance of the surface area can be improved immediately by removing the bias from the original variance estimation. The main results are verified by the Monte Carlo method and by numerical integration. They show that an unbiased surface area can be obtained by removing the proposed bias estimation from the triangle-based surface area originally calculated from the DEM data.
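The sign of the bias is easy to confirm with the kind of Monte Carlo check the paper uses for verification: random height errors can only tilt a triangle, never shrink its projected footprint, so the noisy area is always at least the true area. The triangle geometry, noise level and trial count below are illustrative assumptions.

```python
import math
import random

def tri_area(p0, p1, p2):
    """Area of a 3-D triangle via the cross-product norm."""
    u = [p1[i] - p0[i] for i in range(3)]
    v = [p2[i] - p0[i] for i in range(3)]
    cx = u[1] * v[2] - u[2] * v[1]
    cy = u[2] * v[0] - u[0] * v[2]
    cz = u[0] * v[1] - u[1] * v[0]
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

def mc_bias(sigma=1.0, trials=5000, seed=42):
    """Monte Carlo estimate of the bias that random DEM height errors
    induce in a triangle-based area (flat 10 m triangle, Gaussian
    height noise; all parameters are illustrative)."""
    rng = random.Random(seed)
    xy = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
    true_area = tri_area(*[(x, y, 0.0) for x, y in xy])
    total = 0.0
    for _ in range(trials):
        noisy = [(x, y, rng.gauss(0.0, sigma)) for x, y in xy]
        total += tri_area(*noisy)
    return total / trials - true_area  # positive: area is overestimated
```

The positive bias returned here is what the second-order correction in the paper is designed to remove analytically, without the cost of simulation.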

9.
10.
Abstract

Error and uncertainty in spatial databases have gained considerable attention in recent years. The concern is that, as in other computer applications and, indeed, all analyses, poor quality input data will yield even worse output. Various methods for analysis of uncertainty have been developed, but none has been shown to be directly applicable to an actual geographical information system application in the area of natural resources. In spatial data on natural resources in general, and in soils data in particular, a major cause of error is the inclusion of unmapped units within areas delineated on the map as uniform. In this paper, two alternative algorithms for simulating inclusions in categorical natural resource maps are detailed. Their usefulness is shown by simplified Monte Carlo testing to evaluate the accuracy of agricultural land valuation using land use and soil information. Using two test areas, it is possible to show that errors of as much as 6 per cent may result in the process of land valuation, with simulated valuations both above and below the actual values. Thus, although an actual monetary cost of the error term is estimated here, it is not found to be large.
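One of the simulation algorithms can be caricatured as uniform random replacement of mapped cells with an inclusion class; the rate, class set and seed below are assumptions for illustration, and the real algorithms in the paper are more structured than this.

```python
import random

def simulate_inclusions(grid, inclusion_rate, classes, seed=0):
    """Return a copy of a categorical grid in which each cell is,
    with probability inclusion_rate, replaced by a different class
    (a simplified sketch of simulating unmapped inclusions)."""
    rng = random.Random(seed)
    out = []
    for row in grid:
        new_row = []
        for c in row:
            if rng.random() < inclusion_rate:
                new_row.append(rng.choice([k for k in classes if k != c]))
            else:
                new_row.append(c)
        out.append(new_row)
    return out
```

Running a valuation model over many such simulated grids, and comparing each result with the valuation from the original map, gives the Monte Carlo error distribution the abstract describes.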

11.
Estimation of Areal Soil Moisture by Use of Terrain Data
In this study measured soil moisture is related to primary and secondary topographic attributes within two small-scale drainage basins. The study sites are the Buddby and Dansarhällarna drainage basins within the NOPEX area. The primary topographic attributes slope, aspect, plan and profile curvature, and the secondary topographic attribute, the wetness index, are derived from a 5 m resolution digital elevation model. The relationship with measured soil moisture in the Buddby basin is investigated by linear regression analysis. Based on mean plot measurements for the whole measurement period, two different models were established, resulting in a high R² value. The best model was achieved with slope, profile curvature and aspect as regression variables. The models obtained were further used to regionalise the results to the basin scale at both Buddby and Dansarhällarna. This demonstrated a soil moisture pattern different from the pattern resulting from the wetness index. Finally, models were established based on two different dates of field campaigns. The results showed a good agreement with the observed soil moisture values, and a higher R² value was obtained when using the wetness index for the medium wet period compared to the wettest period. Further analysis is needed to verify the physical significance of the results and their suitability for hydrological modelling.
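The regression machinery involved is ordinary least squares; a single-predictor sketch with the R² statistic the study reports is given below (the study's actual models are multivariate, so this is only the one-variable special case).

```python
def simple_regression(x, y):
    """Least-squares line y = a + b*x and its R² (coefficient of
    determination), computed from first principles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot
```

With slope, curvature and aspect as predictors the same idea extends to multiple regression; R² then measures how much of the plot-to-plot soil moisture variance the terrain attributes explain.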

12.
Abstract

Missing data is a common problem in the analysis of geospatial information. Existing methods introduce spatiotemporal dependencies to reduce imputation errors yet ignore ease of use in practice. Classical interpolation models are easy to build and apply; however, their imputation accuracy is limited due to their inability to capture spatiotemporal characteristics of geospatial data. Consequently, a lightweight ensemble model was constructed by modelling the spatiotemporal dependencies in a classical interpolation model. Temporally, average correlation coefficients were introduced into a simple exponential smoothing model to automatically select the time window, which ensured that the sample data had the strongest correlation to the missing data. Spatially, Gaussian equivalent and correlation distances were introduced into an inverse distance-weighting model to assign weights to each spatial neighbour and sufficiently reflect changes in the spatiotemporal pattern. Finally, estimates of the missing values from the temporal and spatial components were aggregated into the final result with an extreme learning machine. Compared to existing models, the proposed model achieves higher imputation accuracy, lowering the mean absolute error by 10.93 to 52.48% on the road network dataset and by 23.35 to 72.18% on the air quality station dataset, and exhibits robust performance under spatiotemporal mutations.
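A stripped-down sketch of the three components is given below, assuming plain exponential smoothing, plain inverse-distance weighting (without the Gaussian equivalent and correlation distances), and a fixed mixing weight in place of the extreme learning machine; all names and parameters are illustrative.

```python
def exp_smooth(series, alpha=0.5):
    """Simple exponential smoothing estimate of the next value
    (temporal component of the ensemble, simplified)."""
    s = series[0]
    for v in series[1:]:
        s = alpha * v + (1.0 - alpha) * s
    return s

def idw(neighbors, power=2):
    """Inverse-distance-weighted estimate from (distance, value)
    pairs (spatial component, without the paper's distance
    refinements)."""
    w = [(1.0 / d ** power, v) for d, v in neighbors]
    return sum(wi * vi for wi, vi in w) / sum(wi for wi, _ in w)

def impute(series, neighbors, beta=0.5):
    """Aggregate the temporal and spatial estimates; the paper learns
    this combination with an extreme learning machine, a fixed weight
    beta is used here as a stand-in."""
    return beta * exp_smooth(series) + (1.0 - beta) * idw(neighbors)
```

The point of the design is that each component stays as easy to apply as a classical interpolator, while the combination captures both temporal and spatial structure.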

13.
Abstract

Explicit and quantitative models for the spatial prediction of soil and landscape attributes are required for environmental modelling and management. In this study, advances in the spatial representation of hydrological and geomorphological processes using terrain analysis techniques are integrated with the development of a field sampling and soil-landscape model building strategy. Statistical models are developed using relationships between terrain attributes (plan curvature, compound topographic index, upslope mean plan curvature) and soil attributes (A horizon depth, solum depth, E horizon presence/absence) in an area with uniform geology and geomorphic history. These techniques seem to provide appropriate methodologies for spatial prediction and for understanding soil-landscape processes.

14.
Editorial     
Abstract

The analysis of geographical information is compared with other production processes in which a user can only accept an end-product if it meets certain quality requirements. Whereas users are responsible for defining the levels of quality they need to use the results of the analyses of geographical information systems in their work, database managers, experts and modellers could greatly assist users to achieve the quality of results they seek by formalizing information on: (1) data collection, level of resolution and quality; (2) the use of the basic analytical functions of the geographical information system; and (3) the data requirements, sensitivity and error propagation in models. These meta-data could be incorporated in a knowledge base alongside the geographical information system where, together with procedures for on-line error propagation, a user could be advised on the best way to achieve a desired aim. If the analysis showed that the original constellation of data, methods and models could not achieve the aim with the desired quality, the intelligent geographical information system would present a range of alternative strategies—better methods, more data, different data, better models, better model calibration, or better spatial resolution—and their costs by which the user's aims could reasonably be achieved.

15.
Quantitative Evaluation of Errors in Topographic Wetness Index Algorithms
The topographic wetness index (TWI) quantitatively indicates the control of topography on the spatial distribution of soil moisture and is a widely used terrain attribute. Current methods of computing TWI from gridded DEMs yield differing results, so quantitative evaluation of TWI algorithms is necessary. TWI algorithms are usually evaluated using real DEM data, but the source errors present in real DEMs interfere with the evaluation of algorithm error. To address this problem, this paper introduces a method that uses artificial, source-error-free…
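The index itself is TWI = ln(a / tan β), where a is the specific catchment area and β the local slope; the differences between algorithms lie in how a and β are derived from the gridded DEM, not in this formula. A direct sketch of the formula:

```python
import math

def twi(specific_catchment_area, slope_rad):
    """Topographic wetness index TWI = ln(a / tan(beta)), with a the
    specific catchment area (m) and beta the local slope in radians.
    How a and beta are estimated from a DEM is algorithm-dependent."""
    return math.log(specific_catchment_area / math.tan(slope_rad))
```

Larger catchment area or gentler slope raises the index, marking cells where water is expected to accumulate.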

16.
This article applies error propagation in a Monte Carlo simulation for a spatial-based fuzzy logic multi-criteria evaluation (MCE) in order to investigate the output uncertainty created by the input data sets and model structure. Six scenarios for quantifying uncertainty are reviewed. Three scenarios are progressively more complex in defining observational data (attribute uncertainty); while three other scenarios include uncertainty in observational data (position of boundaries between map units), weighting of evidence (fuzzy membership assignment), and evaluating changes in the MCE model (fuzzy logic operators). A case study of petroleum exploration in northern South America is used. Despite the resources and time required, the best estimate of input uncertainty is that based on expert-defined values. Uncertainties for fuzzy membership assignment and boundary transition zones do not affect the results as much as the attribute assignment uncertainty. The MCE fuzzy logic operator uncertainty affects the results the most. Confidence levels of 95% and 60% are evaluated with threshold values of 0.7 and 0.5 and show that accepting more uncertainty in the results increases the total area available for decision-making. Threshold values and confidence levels should be predetermined, although a series of combinations may yield the best decision-making support.
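The Monte Carlo treatment of membership (attribute) uncertainty can be sketched as below. The fuzzy gamma operator shown is one of the operators commonly varied in such fuzzy-logic MCE studies; the noise level, trial count and clipping rule are assumptions for the example.

```python
import random

def fuzzy_gamma(memberships, gamma=0.9):
    """Fuzzy gamma operator: (product)^(1-gamma) x (algebraic sum)^gamma.
    gamma=0 reduces to the fuzzy product; gamma=1 to the algebraic sum."""
    prod = 1.0
    comp = 1.0
    for m in memberships:
        prod *= m
        comp *= (1.0 - m)
    return (prod ** (1.0 - gamma)) * ((1.0 - comp) ** gamma)

def mc_membership(memberships, sigma=0.05, trials=1000,
                  op=fuzzy_gamma, seed=1):
    """Perturb input memberships with Gaussian noise clipped to [0, 1]
    and return the mean and standard deviation of the combined score
    (one attribute-uncertainty scenario, with assumed sigma)."""
    rng = random.Random(seed)
    out = []
    for _ in range(trials):
        noisy = [min(1.0, max(0.0, rng.gauss(m, sigma)))
                 for m in memberships]
        out.append(op(noisy))
    n = len(out)
    mean = sum(out) / n
    var = sum((v - mean) ** 2 for v in out) / n
    return mean, var ** 0.5
```

Repeating the simulation with different operators (AND, OR, gamma) is exactly how operator uncertainty, the dominant effect reported above, can be compared with attribute uncertainty.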

17.
The theory and methods for attribute error and sensitivity analysis associated with map-based suitability analysis are developed. In particular, this paper delineates the underlying types of sensitivities for suitability analysis and derives ways to measure these sensitivities. Additionally, it shows how to undertake geographical sensitivity analyses given a generic geographical information system. The uses of these methods include understanding the relationship of the attribute errors in the output map generated by errors in the input maps for a given geographical analysis. This information provides a means to assess the quality and reliability of conclusions inferred from the output map created by such an analysis.

18.
Artificial Intelligence (AI) models such as Artificial Neural Networks (ANNs), Decision Trees and Dempster–Shafer's Theory of Evidence have long been claimed to be more error-tolerant than conventional statistical models, but the way error is propagated through these models is unclear. Two sources of error have been identified in this study: sampling error and attribute error. The results show that these errors propagate differently through the three AI models. The Decision Tree was the most affected by error, the Artificial Neural Network was less affected by error, and the Theory of Evidence model was not affected by the errors at all. The study indicates that AI models have very different modes of handling errors. In this case, the machine-learning models, including ANNs and Decision Trees, are more sensitive to input errors. Dempster–Shafer's Theory of Evidence has demonstrated better potential in dealing with input errors when multisource data sets are involved. The study suggests a strategy of combining AI models to improve classification accuracy. Several combination approaches have been applied, based on a 'majority voting system', a simple average, Dempster–Shafer's Theory of Evidence, and fuzzy-set theory. These approaches all increased classification accuracy to some extent. Two of them also demonstrated good performance in handling input errors. Second-stage combination approaches which use statistical evaluation of the initial combinations are able to further improve classification results. One of these second-stage combination approaches increased the overall classification accuracy on forest types to 54% from the original 46.5% of the Decision Tree model, and its visual appearance is also much closer to the ground data. By combining models, it becomes possible to calculate quantitative confidence measurements for the classification results, which can then serve as a better error representation. Final classification products include not only the predicted hard classes for individual cells, but also estimates of the probability and the confidence measurements of the prediction.
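The "majority voting system" mentioned above can be sketched in a few lines; the tie-breaking rule (falling back to the first model's prediction) is an assumption made for the example, not taken from the paper.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-cell class predictions from several classifiers by
    majority vote; on a tie, fall back to the first model's prediction
    (an assumed tie-breaking rule)."""
    top = Counter(predictions).most_common()
    if len(top) > 1 and top[0][1] == top[1][1]:
        return predictions[0]
    return top[0][0]
```

Applied cell by cell to the outputs of the ANN, Decision Tree and Theory of Evidence models, this yields the first-stage combined map; the vote margin at each cell also gives a crude confidence measure of the kind the abstract describes.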

19.
Digital elevation models (DEMs) have been widely used for a range of applications and form the basis of many GIS-related tasks. An essential aspect of a DEM is its accuracy, which depends on a variety of factors, such as source data quality, interpolation methods, data sampling density and the surface topographical characteristics. In recent years, point measurements acquired directly from land surveying such as differential global positioning system and light detection and ranging have become increasingly popular. These topographical data points can be used as the source data for the creation of DEMs at a local or regional scale. The errors in point measurements can be estimated in some cases. The focus of this article is on how the errors in the source data propagate into DEMs. The interpolation method considered is a triangulated irregular network (TIN) with linear interpolation. Both horizontal and vertical errors in source data points are considered in this study. An analytical method is derived for the error propagation into any particular point of interest within a TIN model. The solution is validated using Monte Carlo simulations and survey data obtained from a terrestrial laser scanner.
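For vertical errors only, linear TIN interpolation propagates vertex variances through the barycentric weights, since the interpolated height is a weighted sum of the vertex heights. The sketch below covers just this special case of the article's full horizontal-plus-vertical solution, assuming independent vertex errors.

```python
def barycentric_weights(p, a, b, c):
    """Barycentric coordinates of 2-D point p in triangle (a, b, c)."""
    det = (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])
    w1 = ((b[0] - p[0]) * (c[1] - p[1])
          - (c[0] - p[0]) * (b[1] - p[1])) / det
    w2 = ((c[0] - p[0]) * (a[1] - p[1])
          - (a[0] - p[0]) * (c[1] - p[1])) / det
    return w1, w2, 1.0 - w1 - w2

def tin_point_variance(weights, vertex_variances):
    """Variance of a linearly interpolated TIN height from independent
    vertical errors at the vertices: Var(z) = sum(w_i^2 * sigma_i^2),
    the vertical-only special case of the article's solution."""
    return sum(w * w * v for w, v in zip(weights, vertex_variances))
```

Because the weights are squared, the interpolated point is never more uncertain than the worst vertex, and the variance is smallest near the triangle's centroid when vertex errors are equal.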

20.
Abstract

Rooted in the philosophy of point- and segment-based approaches for transportation mode segmentation of trajectories, the measures that researchers have adopted to evaluate the quality of the results (1) are incomparable across approaches, hence slowing the progress in the field, and (2) do not provide insight about the quality of the continuous transportation mode segmentation. To address these problems, this paper proposes new error measures that can be applied to measure how well a continuous transportation mode segmentation model performs. The error measures introduced are based on aligning multiple inferred continuous intervals to ground truth intervals, and measure the cardinality of the alignment and the spatial and temporal discrepancy between the corresponding aligned segments. The utility of this new way of computing errors is shown by evaluating the segmentation of three generic transportation mode segmentation approaches (implicit, explicit–holistic, and explicit–consensus-based transport mode segmentation), which can be implemented in a thick client architecture. Empirical evaluations on a large real-world data set reveal the superiority of explicit–consensus-based transport mode segmentation, which can be attributed to the explicit modeling of segments and transitions, which allows for a meaningful decomposition of the complex learning task.
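A simplified version of the alignment-based measures can be sketched as follows; the maximum-overlap matching and the boundary-discrepancy metric are illustrative choices, not the paper's exact definitions, and only the temporal (not spatial) discrepancy is shown.

```python
def overlap(a, b):
    """Length of the overlap between two (start, end) intervals."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def alignment_errors(inferred, truth):
    """Align each inferred interval to the ground-truth interval it
    overlaps most, then report (cardinality difference, total boundary
    discrepancy of the aligned pairs); a simplified version of the
    proposed measures. Assumes a non-empty ground truth."""
    pairs = []
    for iv in inferred:
        best = max(truth, key=lambda t: overlap(iv, t))
        if overlap(iv, best) > 0:
            pairs.append((iv, best))
    card_err = abs(len(inferred) - len(truth))
    temporal = sum(abs(i[0] - t[0]) + abs(i[1] - t[1])
                   for i, t in pairs)
    return card_err, temporal
```

Unlike point- or segment-accuracy scores, both outputs are defined on the continuous segmentation itself, so models with different internal representations can be compared on the same footing.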
