Similar Literature

20 similar records found (search time: 31 ms)
1.
2.
Abstract

When data on environmental attributes such as those of soil or groundwater are manipulated by logical cartographic modelling, the results are usually assumed to be exact. However, in reality the results will be in error because the values of input attributes cannot be determined exactly. This paper analyses how errors in such values propagate through Boolean and continuous modelling, involving the intersection of several maps. The error analysis is carried out using Monte Carlo methods on data interpolated by block kriging to a regular grid which yields predictions and prediction error standard deviations of attribute values for each pixel. The theory is illustrated by a case study concerning the selection of areas of medium textured, non-saline soil at an experimental farm in Alberta, Canada. The results suggest that Boolean methods of sieve mapping are much more prone to error propagation than the more robust continuous equivalents. More study of the effects of errors and of the choice of attribute classes and of class parameters on error propagation is recommended.
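A minimal sketch of this kind of Monte Carlo error propagation (hypothetical attribute names, thresholds and error magnitudes, not the paper's Alberta data): each input attribute is perturbed with its kriging prediction error, and a Boolean sieve decision is compared with a continuous (fuzzy) membership product.

```python
import random
import statistics

def boolean_pass(clay, ec, clay_max=35.0, ec_max=4.0):
    """Boolean sieve: the pixel passes only if ALL criteria hold exactly."""
    return clay <= clay_max and ec <= ec_max

def continuous_pass(clay, ec, clay_max=35.0, ec_max=4.0, width=5.0):
    """Continuous equivalent: product of smooth memberships in [0, 1]."""
    def member(x, xmax):
        return 1.0 / (1.0 + (max(x - xmax, 0.0) / width) ** 2)
    return member(clay, clay_max) * member(ec, ec_max)

def monte_carlo(clay_pred, clay_sd, ec_pred, ec_sd, n=5000, seed=1):
    """Propagate per-pixel prediction errors through both decision models."""
    rng = random.Random(seed)
    bools, conts = [], []
    for _ in range(n):
        clay = rng.gauss(clay_pred, clay_sd)
        ec = rng.gauss(ec_pred, ec_sd)
        bools.append(1.0 if boolean_pass(clay, ec) else 0.0)
        conts.append(continuous_pass(clay, ec))
    # Boolean result: probability the pixel is accepted at all;
    # continuous result: mean membership and its spread.
    return statistics.mean(bools), statistics.mean(conts), statistics.stdev(conts)

# A pixel sitting right on the clay threshold: the Boolean decision is
# close to a coin flip, while the continuous membership stays moderate.
p_bool, m_cont, sd_cont = monte_carlo(34.0, 3.0, 2.0, 0.8)
```

Near a class boundary the Boolean decision flips from run to run while the continuous membership degrades gracefully, which is the robustness contrast the abstract describes.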

3.
Abstract

This study examines the propagation of thematic error through GIS overlay operations. Existing error propagation models for these operations are shown to yield results that are inconsistent with actual levels of propagation error. An alternate model is described that yields more consistent results. This model is based on the frequency of errors of omission and commission in input data. Model output can be used to compute a variety of error indices for data derived from different overlay operations.
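The role of omission and commission frequencies can be illustrated with a small simulation (illustrative error rates, not the author's model): two binary maps are corrupted cell by cell, the AND overlay is compared with the true intersection, and the observed error rate is set against the naive estimate that the overlay is correct only when both inputs are.

```python
import random

def overlay_error_rates(p_true=0.5, om1=0.1, com1=0.05, om2=0.2, com2=0.1,
                        n=20000, seed=7):
    """Simulate a Boolean AND overlay of two binary maps whose cells are
    corrupted by errors of omission and commission, returning the observed
    overlay error rate alongside the naive product-rule estimate."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(n):
        t1 = rng.random() < p_true          # true state, layer 1
        t2 = rng.random() < p_true          # true state, layer 2
        m1 = (rng.random() >= om1) if t1 else (rng.random() < com1)
        m2 = (rng.random() >= om2) if t2 else (rng.random() < com2)
        if (m1 and m2) != (t1 and t2):      # overlay disagrees with truth
            errors += 1
    observed = errors / n
    # Naive estimate: overlay correct only when both layers are correct.
    acc1 = p_true * (1 - om1) + (1 - p_true) * (1 - com1)
    acc2 = p_true * (1 - om2) + (1 - p_true) * (1 - com2)
    naive = 1 - acc1 * acc2
    return observed, naive

observed, naive = overlay_error_rates()
```

The naive product rule overstates propagated error because errors in cells that are absent in either layer can leave the overlay result correct anyway, one reason frequency-based models fit observed error levels better.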

4.
Environmental simulation models need automated geographic data reduction methods to optimize the use of high-resolution data in complex environmental models. Advanced map generalization methods have been developed for multiscale geographic data representation. In map generalization, the focus is on positional, geometric and topological constraints, which improve map legibility and the communication of geographic semantics. In the context of environmental modelling, in addition to the spatial criteria, domain criteria and constraints also need to be considered. Currently, due to the absence of domain-specific generalization methods, modellers resort to ad hoc methods of manual digitization or use cartographic methods available in off-the-shelf software. Such manual methods are not feasible solutions when large data sets are to be processed, thus limiting modellers to single-scale representations. Automated map generalization methods can rarely be used with confidence because simplified data sets may violate domain semantics and may also result in suboptimal model performance. For best modelling results, it is necessary to prioritize domain criteria and constraints during data generalization. Modellers should also be able to automate the generalization techniques and explore the trade-off between model efficiency and model simulation quality for alternative versions of input geographic data at different geographic scales. Based on our long-term research with experts in the analytic element method of groundwater modelling, we developed the multicriteria generalization (MCG) framework as a constraint-based approach to automated geographic data reduction. The MCG framework is based on the spatial multicriteria decision-making paradigm since multiscale data modelling is too complex to be fully automated and should be driven by modellers at each stage. 
Apart from a detailed discussion of the theoretical aspects of the MCG framework, we discuss two groundwater data modelling experiments that demonstrate how MCG is not just a framework for automated data reduction, but an approach for systematically exploring model performance at multiple geographic scales. Experimental results clearly indicate the benefits of MCG-based data reduction and encourage us to continue expanding the scope of MCG and implementing it for multiple application domains.

5.
Mineral magnetic properties have been used recently to classify and to attempt to quantify the sources of sediments through environmental systems. Linear modelling techniques could be used with a wide range of concentration-dependent magnetic measurements to quantify the sources of sediments. To investigate wider application of linear modelling techniques using mineral magnetic properties, research has been conducted using laboratory mixtures of up to six source materials, including both natural environmental materials and synthetic compounds. While six sources may seem ambitious, this figure was used as an absolute upper limit rather than giving a real prospect of mathematically unmixing six sources. It has been found that even with the most magnetically differentiable materials, large errors are encountered when modelling the sources of the mixtures. This paper investigates the causes of 'non-additivity' of certain magnetic measurements and the failure of the linear modelling of the sources of the mixtures. Possible reasons for this failure include source homogeneity, calibration and linearity of equipment, magnetic viscosity of materials and/or the changing physical characteristics of the source materials once mixed together (interaction effects). In testing linear additivity, low-frequency susceptibility is the most reliable mineral magnetic measurement, while remanence measurements suffer from a systematic error in the expected results. Results have shown that in the best controlled conditions where the sources are identified and are artificially mixed together, the results of linear modelling are quite poor and at best four sources can be 'unmixed' with reasonable success. It is suggested that interaction within the mixtures, especially when containing highly ferrimagnetic burnt environmental materials, causes some of the non-additivity phenomena.
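The linear additivity assumption behind such unmixing can be shown in its simplest closed form (synthetic susceptibility values, purely illustrative): with two sources, one concentration-dependent measurement plus the unit-sum constraint determines the proportions.

```python
def unmix_two_sources(chi_mix, chi_a, chi_b):
    """Recover source proportions f_a, f_b from a single concentration-
    dependent measurement, assuming linear additivity:
        chi_mix = f_a * chi_a + f_b * chi_b,  with  f_a + f_b = 1."""
    if chi_a == chi_b:
        raise ValueError("sources are magnetically indistinguishable")
    f_a = (chi_mix - chi_b) / (chi_a - chi_b)
    return f_a, 1.0 - f_a

# Synthetic example: topsoil chi = 120, subsoil chi = 20 (in units of
# 1e-8 m3/kg); a mixture measuring 45 implies 25% topsoil.
f_top, f_sub = unmix_two_sources(45.0, 120.0, 20.0)
```

With more sources, more measurements and a least-squares fit are needed, and the interaction and non-additivity effects described in the abstract are exactly what break this linear model in practice.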

6.
Abstract

This paper describes an inductive modelling procedure integrated with a geographical information system for analysis of pattern within spatial data. The aim of the modelling procedure is to predict the distribution within one data set by combining a number of other data sets. Data set combination is carried out using Bayes’ theorem. Inputs to the theorem, in the form of conditional probabilities, are derived from an inductive learning process in which attributes of the data set to be modelled are compared with attributes of a variety of predictor data sets. This process is carried out on random subsets of the data to generate error bounds on inputs for analysis of error propagation associated with the use of Bayes’ theorem to combine data sets in the GIS. The statistical significance of model inputs is calculated as part of the inductive learning process. Use of the modelling procedure is illustrated through the analysis of the winter habitat relationships of red deer in Grampian Region, north-east Scotland. The distribution of red deer in Deer Management Group areas in Gordon and in Kincardine and Deeside Districts is used to develop a model which predicts the distribution throughout Grampian Region; this is tested against red deer distribution in Moray District. Habitat data sets used for constructing the model are accumulated frost and altitude, obtained from maps, and land cover, derived from satellite imagery. Errors resulting from the use of Bayes’ theorem to combine data sets within the GIS and introduced in generalizing output from 50 m pixel to 1 km grid squares resolution are analysed and presented in a series of maps. This analysis of error trains is an integral part of the implemented analytical procedure and provides support to the interpretation of the results of modelling. Potential applications of the modelling procedure are discussed.
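The combination step can be sketched as follows (hypothetical prior and likelihood ratios, not the Grampian values): per-layer conditional probabilities enter Bayes' theorem as likelihood ratios under a conditional-independence assumption, which is conveniently computed on the odds scale.

```python
def bayes_combine(prior, likelihood_ratios):
    """Combine a prior probability of presence with per-layer evidence.
    Each element of likelihood_ratios is P(evidence | presence) /
    P(evidence | absence); layers are assumed conditionally independent."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Hypothetical example: prior deer presence 0.2; three habitat layers
# (accumulated frost class, altitude band, land cover) each favourable.
p = bayes_combine(0.2, [2.0, 1.5, 3.0])
```

Repeating the calculation with likelihood ratios drawn from the error bounds on the inputs gives exactly the kind of error-propagation analysis the abstract describes.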

7.
During the last two decades, a variety of models have been applied to understand and predict changes in land use. These models assign a single-attribute label to each spatial unit at any particular time of the simulation. This is not realistic because mixed use of land is quite common. A more detailed classification allowing the modelling of mixed land use would be desirable for better understanding and interpreting the evolution of the use of land. A possible solution is the multi-label (ML) concept where each spatial unit can belong to multiple classes simultaneously. For example, a cluster of summer houses at a lake in a forested area should be classified as water, forest and residential (built-up). The ML concept was introduced recently, and it belongs to the machine learning field. In this article, the ML concept is introduced and applied in land-use modelling. As a novelty, we present a land-use change model that allows ML class assignment using the k nearest neighbour (kNN) method that derives a functional relationship between land use and a set of explanatory variables. A case study with a rich data-set from Luxembourg using biophysical data from aerial photography is described. The model achieves promising results based on the well-known ML evaluation criteria. The application described in this article highlights the value of the multi-label k nearest neighbour method (MLkNN) for land-use modelling.
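A simplified version of the multi-label kNN idea can be sketched as follows (toy features and labels; the published MLkNN method additionally applies a Bayesian MAP correction using label priors, which is omitted here):

```python
import math

def ml_knn_predict(train, query, k=3):
    """Simplified multi-label kNN: assign every label carried by at
    least half of the k nearest training units. `train` is a list of
    (feature_vector, label_set) pairs."""
    neighbours = sorted(train, key=lambda ex: math.dist(ex[0], query))[:k]
    labels = set().union(*(labs for _, labs in neighbours))
    return {lab for lab in labels
            if sum(lab in labs for _, labs in neighbours) * 2 >= k}

# Toy spatial units: features = (distance to water, canopy cover),
# labels drawn from {"water", "forest", "built-up"} (hypothetical).
train = [
    ((0.1, 0.2), {"water"}),
    ((0.2, 0.9), {"water", "forest"}),
    ((0.3, 0.8), {"forest"}),
    ((0.9, 0.1), {"built-up"}),
]
pred = ml_knn_predict(train, (0.2, 0.7), k=3)
```

A mixed unit such as the lakeside summer-house example can thus receive several labels at once instead of being forced into a single class.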

8.
Artificial neural networks were applied to simulate runoff from the glacierized part of the Waldemar River catchment (Svalbard) based on hydrometeorological data collected in the summer seasons of 2010, 2011 and 2012. Continuous discharge monitoring was performed at about 1 km from the glacier snout, in the place where the river leaves the marginal zone. Averaged daily values of discharge and selected meteorological variables in a number of combinations were used to create several models based on the feed‐forward multilayer perceptron architecture. Due to specific conditions of melt water storing and releasing, two groups of models were established: the first is based on meteorological inputs only, while the second also includes the preceding day's mean discharge. The simulation performance of the multilayer perceptron was compared with another black‐box model type, a multivariate regression method, using the following efficiency criteria: coefficient of determination (R2) and its adjusted form (adj. R2), weighted coefficient of determination (wR2), Nash–Sutcliffe coefficient of efficiency, mean absolute error, and error analysis. Moreover, a predictor importance analysis was performed for both the multilayer perceptron and multivariate regression models. The study showed that the nonlinear estimation realized by the multilayer perceptron gives more accurate results than the multivariate regression approach in both groups of models.

9.
Abstract

Recent developments in theory and computer software mean that it is now relatively straightforward to evaluate how attribute errors are propagated through quantitative spatial models in GIS. A major problem, however, is to estimate the errors associated with the inputs to these spatial models. A first approach is to use the root mean square error, but in many cases it is better to estimate the errors from the degree of spatial variation and the method used for mapping. It is essential to decide at an early stage whether one should use a discrete model of spatial variation (DMSV—homogeneous areas, abrupt boundaries), a continuous model (CMSV—a continuously varying regionalized variable field) or a mixture of both (MMSV—mixed model of spatial variation). Maps of predictions and prediction error standard deviations are different in all three cases, and it is crucial for error estimation which model of spatial variation is used. The choice of model has not yet been studied in sufficient depth, but can be based on prior information about the kinds of spatial processes and patterns that are present, or on validation results. When undetermined it is sensible to adopt the MMSV in order to bypass the rigidity of the DMSV and CMSV. These issues are explored and illustrated using data on the mean highest groundwater level in a polder area in the Netherlands.

10.
11.
We analysed the sensitivity of a decision tree derived forest type mapping to simulated data errors in input digital elevation model (DEM), geology and remotely sensed (Landsat Thematic Mapper) variables. We used a stochastic Monte Carlo simulation model coupled with a one‐at‐a‐time approach. The DEM error was assumed to be spatially autocorrelated with its magnitude being a percentage of the elevation value. The error of categorical geology data was assumed to be positional and limited to boundary areas. The Landsat data error was assumed to be spatially random following a Gaussian distribution. Each layer was perturbed using its error model with increasing levels of error, and the effect on the forest type mapping was assessed. The results of the three sensitivity analyses were markedly different, with the classification being most sensitive to the DEM error, less sensitive to the Landsat data errors, and only marginally sensitive to the geology data error used. A linear increase in error resulted in non‐linear increases in effect for the DEM and Landsat errors, while the increase was linear for geology. As an example, a DEM error of as small as ±2% reduced the overall test accuracy by more than 2%. More importantly, the same uncertainty level caused nearly 10% of the study area to change its initial class assignment at each perturbation, on average. A spatial assessment of the sensitivities indicates that most of the pixel changes occurred within those forest classes expected to be more sensitive to data error. In addition to characterising the effect of errors on forest type mapping using decision trees, this study has demonstrated the generality of employing Monte Carlo analysis for the sensitivity and uncertainty analysis of categorical outputs, which have characteristics distinct from those of numerical outputs.
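The one-at-a-time Monte Carlo design can be sketched as follows (a toy rule-based classifier and simplified error models; in particular, the spatially autocorrelated DEM error is replaced here by independent per-cell noise):

```python
import random

def classify(elev, band):
    """Toy stand-in for the decision tree: forest type from elevation
    and a spectral band value (thresholds are illustrative)."""
    if elev > 500:
        return "montane" if band > 0.4 else "subalpine"
    return "lowland" if band > 0.6 else "dry-sclerophyll"

def oat_sensitivity(cells, perturb, n=200, seed=3):
    """One-at-a-time Monte Carlo: perturb a single input layer with its
    error model and report the mean fraction of cells changing class."""
    rng = random.Random(seed)
    changed = 0
    for _ in range(n):
        for elev, band in cells:
            if classify(*perturb(rng, elev, band)) != classify(elev, band):
                changed += 1
    return changed / (n * len(cells))

cells = [(480.0, 0.5), (520.0, 0.35), (300.0, 0.62), (700.0, 0.8)]
# DEM error only: +/-2% Gaussian perturbation of elevation per cell.
dem_rate = oat_sensitivity(cells, lambda r, e, b: (e * r.gauss(1, 0.02), b))
# Landsat error only: additive Gaussian noise on the band value.
tm_rate = oat_sensitivity(cells, lambda r, e, b: (e, b + r.gauss(0, 0.05)))
```

In the study itself the relative sensitivities depend on the real decision tree and error models; the sketch only shows the mechanics of perturbing one layer at a time and counting class changes.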

12.
Notiser     
The consequences of errors in data and in models which treat these data are illustrated with a discussion of concepts in numerical examples. There is a particular emphasis on interactions between data and models, with a view to minimizing the total error.

13.
New data technologies and modelling methods have gained more attention in the field of periglacial geomorphology during the last decade. In this paper we present a new modelling approach that integrates topographical, ground and remote sensing information in predictive geomorphological mapping using generalized additive modelling (GAM). First, we explored the roles of different environmental variable groups in determining the occurrence of non‐sorted and sorted patterned ground in a fell region of 100 km2 at the resolution of 1 ha in northern Finland. Second, we compared the predictive accuracy of ground‐topography‐ and remote‐sensing‐based models. The results indicate that non‐sorted patterned ground is more common at lower altitudes where ground moisture and vegetation abundance are relatively high, whereas sorted patterned ground is dominant at higher altitudes with relatively high slope angles and sparse vegetation cover. All models performed well to excellently on evaluation data, as measured by area under the curve (AUC) values derived from receiver operating characteristic (ROC) plots. Generally, models built with remotely sensed data outperformed ground‐topography‐based models, and combining all environmental variables improved the predictive ability of the models. This paper confirms the potential utility of remote sensing information for modelling patterned ground distribution in subarctic landscapes.

14.
黎夏, 叶嘉安, 刘涛, 刘小平. Geographical Research (《地理研究》), 2007, 26(3): 443-451
Cellular automata (CA) are increasingly used to simulate geographical phenomena such as the evolution of urban systems. Urban simulation frequently draws on spatial information held in GIS databases, and errors in these data sources propagate through the CA simulation process. Moreover, a CA model is only an approximation of the real world, so the model itself also carries uncertainty. These uncertainties can substantially affect the results of urban simulation, making it necessary to examine error propagation and uncertainty in CA. This paper uses Monte Carlo methods to simulate the propagation characteristics of CA errors, and analyses the sources of CA uncertainty with respect to transition rules, neighbourhood configuration, simulation time and stochastic variables. Compared with traditional GIS models, many properties of error and uncertainty in urban CA models turn out to be quite distinctive. For example, data-source errors are damped during simulation by the averaging effect of the neighbourhood function; as ever less land remains available for conversion, this constraint also reduces simulation error over time; and the uncertainty of the simulation results is concentrated at the urban fringe. These findings help urban modellers and planners better understand the characteristics of CA modelling.

15.
Vehicle trajectory modelling is an essential foundation for urban intelligent services. In this paper, a novel method, the Distant Neighbouring Dependencies (DND) model, is proposed to transform vehicle trajectories into fixed-length vectors which are then applied to predict the final destination. This paper defines the problem of neighbouring and distant dependencies for the first time, and then puts forward a way to learn and memorize these two kinds of dependencies. Next, a destination prediction model is given based on the DND model. Finally, the proposed method is tested on real taxi trajectory datasets. Results show that our method can capture neighbouring and distant dependencies, achieving a mean error of 1.08 km and significantly outperforming existing models in destination prediction.

16.
Empirical models designed to simulate and predict urban land‐use change in real situations are generally based on the utilization of statistical techniques to compute the land‐use change probabilities. In contrast to these methods, artificial neural networks arise as an alternative to assess such probabilities by means of non‐parametric approaches. This work introduces a simulation experiment on intra‐urban land‐use change in which a supervised back‐propagation neural network has been employed in the parameterization of several biophysical and infrastructure variables considered in the simulation model. The spatial land‐use transition probabilities thus estimated feed a cellular automaton (CA) simulation model, based on stochastic transition rules. The model has been tested in Piracicaba, a medium‐sized town in the Midwest of São Paulo State. A series of simulation outputs for the case study town in the period 1985–1999 were generated, and statistical validation tests were then conducted for the best results, based on fuzzy similarity measures.

17.
This paper reviews the practices, problems, and prospects of GIS-based urban modelling. The author argues that current stand-alone and various loose/tight coupling approaches for GIS-based urban modelling are essentially technology-driven without adequate justification and verification for the urban models being implemented. The absolute view of space and time embodied in the current generation of GIS also imposes constraints on the type of new urban models that can be developed. By reframing the future research agenda from a geographical information science (GISci) perspective, the author contends that the integration of urban modelling with GIS must proceed with the development of new models for the informational cities, the incorporation of multi-dimensional concepts of space and time in GIS, and the further extension of the feature-based model to implement these new urban models and spatial-temporal concepts according to the emerging interoperable paradigm. GISci-based urban modelling will not only espouse new computational models and implementation strategies that are computing platform independent but also liberate us from the constraints of existing urban models and the rigid spatial-temporal framework embedded in the current generation of GIS, and enable us to think above and beyond the technical issues that have occupied us during the past ten years.

18.
Artificial Intelligence (AI) models such as Artificial Neural Networks (ANNs), Decision Trees and Dempster–Shafer's Theory of Evidence have long claimed to be more error‐tolerant than conventional statistical models, but the way error is propagated through these models is unclear. Two sources of error have been identified in this study: sampling error and attribute error. The results show that these errors propagate differently through the three AI models. The Decision Tree was the most affected by error, the Artificial Neural Network was less affected by error, and the Theory of Evidence model was not affected by the errors at all. The study indicates that AI models have very different modes of handling errors. In this case, the machine‐learning models, including ANNs and Decision Trees, are more sensitive to input errors. Dempster–Shafer's Theory of Evidence has demonstrated better potential in dealing with input errors when multisource data sets are involved. The study suggests a strategy of combining AI models to improve classification accuracy. Several combination approaches have been applied, based on a ‘majority voting system’, a simple average, Dempster–Shafer's Theory of Evidence, and fuzzy‐set theory. These approaches all increased classification accuracy to some extent. Two of them also demonstrated good performance in handling input errors. Second‐stage combination approaches which use statistical evaluation of the initial combinations are able to further improve classification results. One of these second‐stage combination approaches increased the overall classification accuracy on forest types to 54% from the original 46.5% of the Decision Tree model, and its visual appearance is also much closer to the ground data. By combining models, it becomes possible to calculate quantitative confidence measurements for the classification results, which can then serve as a better error representation. 
Final classification products include not only the predicted hard classes for individual cells, but also estimates of the probability and the confidence measurements of the prediction.
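The simplest of the combination approaches, the majority voting system, can be sketched as follows (hypothetical class labels; the tie-breaking rule shown is one plausible choice, not necessarily the authors'):

```python
from collections import Counter

def majority_vote(predictions, fallback=None):
    """Combine per-model class predictions for one cell by majority
    vote; ties fall back to a designated answer (by default the first
    model listed), one simple resolution rule among several possible."""
    counts = Counter(predictions)
    top, n = counts.most_common(1)[0]
    if list(counts.values()).count(n) > 1:       # tie between classes
        return fallback if fallback is not None else predictions[0]
    return top

# Hypothetical per-cell outputs of the three AI models.
ann, tree, evidence = "spruce", "pine", "spruce"
combined = majority_vote([ann, tree, evidence])
```

The per-cell vote counts also yield a simple confidence measure (share of models agreeing), which is the kind of quantitative confidence the abstract says combination makes possible.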

19.
A review of software systems for distributed hydrological models
As the technical shell of a distributed hydrological model, the model's software system is an important technical safeguard for model application. Current applications of distributed hydrological models are characterized by integrated multi-process simulation, a broad user base and heavy computational demands, which place higher requirements on the flexibility, usability and efficiency of the software systems. This paper first analyses the main workflow of distributed hydrological model application and then summarizes the characteristics of existing software systems from an application perspective. The main conclusions are: (1) by flexibility of model structure, software systems fall into three types: those supporting neither sub-process selection nor algorithm configuration; those not supporting sub-process selection but supporting algorithm configuration; and those supporting both sub-process selection and algorithm configuration; (2) by the way users operate the data pre-processing software, parameter extraction is either menu/command-line style or wizard style; (3) by program implementation, models are serial or parallel, and by runtime environment, local or web-based. Existing software systems show the following problems in flexibility, usability and efficiency: first, the conflict between flexibility of model structure and dependence on user expertise remains unresolved; second, the existing menu/command-line and wizard styles of parameter extraction involve cumbersome steps and cannot extract parameters automatically; third, most models run serially in local mode and easily hit computational bottlenecks. Finally, development trends and research directions for distributed hydrological model software systems are discussed in terms of modularization, intelligent operation, web and mobile deployment, parallelization and virtual simulation.

20.
It is widely known that intensive land use generally decreases stream water quality, but the influence of watershed physiography is relatively poorly understood. Since management planning has to take into account the protection of water quality, the current status of stream water must be identified. The potential effects of land use and watershed physiography variables on water quality were studied in an extensive set of 83 watersheds in the Helsinki region, Finland, covering a wide land‐use intensity gradient. The aims of this study were to test if the geographical information of watershed land‐use data can be used to model the stream water quality, and to examine whether the spatial water quality models are improved after adding predictors of watershed physiography to the land‐use model. Water quality variables were related to watershed predictors by utilizing generalized additive models and linear mixed models, and the independent effect of the variables was investigated using a hierarchical partitioning approach. While land use turned out to be the most influential factor explaining water quality, all models improved significantly after incorporating the watershed characteristics, such as topography and soil. These results were consistent across three modelling techniques. This study, with its novel approach to examine the impacts of several watershed physiographic characteristics on urban stream water quality in northern Europe, demonstrates that spatial land use and watershed physiography data can be used as cost‐efficient predictors in stream water quality models.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号