Similar Documents (20 results)
1.
The calculation of surface area is meaningful for a variety of space-filling phenomena, e.g., the packing of plants or animals within an area of land. With Digital Elevation Model (DEM) data, the surface area can be calculated from a continuous surface model such as a Triangulated Irregular Network (TIN). However, as with the triangle-based surface area discussed in this paper, the computed surface area is generally biased because it is a nonlinear function of DEM data that contain measurement errors. To reduce this bias, we propose a second-order bias correction obtained by applying nonlinear error propagation to the triangle-based surface area. This analysis reveals that random errors in the DEM data produce a bias in the triangle-based surface area, while systematic errors in the DEM data can be reduced by working with height differences. The bias is theoretically given by a probability integral, which can be approximated by numerical approaches such as numerical integration and the Monte Carlo method; these approaches, however, require a distributional assumption about the DEM measurement errors and have a very high computational cost. In most cases only variance information on the measurement errors is available; thus, a bias estimate based on nonlinear error propagation is proposed. Given this second-order bias estimate, the variance of the surface area can be improved immediately by removing the bias from the original variance estimation. The main results are verified by the Monte Carlo method and by numerical integration. They show that an unbiased surface area can be obtained by removing the proposed bias estimate from the triangle-based surface area originally calculated from the DEM data.
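The correction described above can be summarised, in generic notation that is not necessarily the paper's, by a second-order Taylor expansion of the area function around the error-free heights. Assuming zero-mean DEM errors with covariance matrix $\boldsymbol{\Sigma}$:

$$ \mathrm{E}\big[A(\mathbf{h}+\boldsymbol{\varepsilon})\big] \;\approx\; A(\mathbf{h}) + \tfrac{1}{2}\operatorname{tr}\!\big(\mathbf{H}_A\,\boldsymbol{\Sigma}\big), \qquad \widehat{A}_{\mathrm{unbiased}} \;=\; A_{\mathrm{TIN}} - \tfrac{1}{2}\operatorname{tr}\!\big(\mathbf{H}_A\,\boldsymbol{\Sigma}\big), $$

where $A$ is the triangle-based area as a function of the vertex heights $\mathbf{h}$ and $\mathbf{H}_A$ is its Hessian with respect to those heights; the trace term is the bias that is subtracted from the area computed on the noisy DEM.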

2.
Terrain attributes such as slope and aspect computed from a digital elevation model (DEM) are important input data for landslide susceptibility assessment models; DEM error leads to uncertainty in the computed terrain attributes and therefore affects the results of susceptibility assessment. This study selects an expert-knowledge-based landslide susceptibility assessment model and a logistic regression model and uses Monte Carlo simulation to investigate the uncertainty in their results caused by DEM error. The study area is Kaixian County, Chongqing, in the middle and upper reaches of the Yangtze River. Using a 5 m resolution DEM, sequential Gaussian simulation was applied to generate 12 classes of DEM error fields with different error magnitudes (standard deviations of 1 m, 7.5 m and 15 m) and spatial autocorrelation levels (ranges of 0 m, 30 m, 60 m and 120 m), which were then propagated through the susceptibility assessments. Each simulation comprised 100 realizations; for each simulation, a standard-deviation layer and a classification-agreement-percentage layer of the susceptibility results were computed to evaluate the uncertainty of the results. The assessment shows that, under different DEM accuracies, the overall uncertainty of the two models' results does not follow the same trend with the degree of spatial autocorrelation. Under different degrees of DEM spatial autocorrelation, the overall uncertainty of the expert-knowledge-based model exhibits differing trends as DEM error increases, whereas the overall uncertainty of the logistic regression model increases monotonically with DEM error magnitude. In terms of overall result uncertainty, the logistic regression model is generally more dependent on DEM data quality than the expert-knowledge-based model.

3.
4.
Spatial data uncertainty models (SDUM) are necessary tools that quantify the reliability of results from geographical information system (GIS) applications. One technique used by SDUM is Monte Carlo simulation, a technique that quantifies spatial data and application uncertainty by determining the possible range of application results. A complete Monte Carlo SDUM for generalized continuous surfaces typically has three components: an error magnitude model, a spatial statistical model defining error shapes, and a heuristic that creates multiple realizations of error fields added to the generalized elevation map. This paper introduces a spatial statistical model that represents multiple statistics simultaneously and weighted against each other. This paper's case study builds a SDUM for a digital elevation model (DEM). The case study accounts for relevant shape patterns in elevation errors by reintroducing specific topological shapes, such as ridges and valleys, in appropriate localized positions. The spatial statistical model also minimizes topological artefacts, such as cells without outward drainage and inappropriate gradient distributions, which are frequent problems with random field-based SDUM. Multiple weighted spatial statistics enable two conflicting SDUM philosophies to co-exist. The two philosophies are ‘errors are only measured from higher quality data’ and ‘SDUM need to model reality’. This article uses an automatic parameter fitting random field model to initialize Monte Carlo input realizations followed by an inter-map cell-swapping heuristic to adjust the realizations to fit multiple spatial statistics. The inter-map cell-swapping heuristic allows spatial data uncertainty modelers to choose the appropriate probability model and weighted multiple spatial statistics which best represent errors caused by map generalization. This article also presents a lag-based measure to better represent gradient within a SDUM. This article covers the inter-map cell-swapping heuristic as well as both probability and spatial statistical models in detail.  相似文献   

5.
This paper explores three theoretical approaches for estimating the degree of correctness to which the accuracy figures of a gridded Digital Elevation Model (DEM) have been estimated depending on the number of checkpoints involved in the assessment process. The widely used average‐error statistic Mean Square Error (MSE) was selected for measuring the DEM accuracy. The work was focused on DEM uncertainty assessment using approximate confidence intervals. Those confidence intervals were constructed both from classical methods which assume a normal distribution of the error and from a new method based on a non‐parametric approach. The first two approaches studied, called Chi‐squared and Asymptotic Student t, consider a normal distribution of the residuals. That is especially true in the first case. The second case, due to the asymptotic properties of the t distribution, can perform reasonably well with even slightly non‐normal residuals if the sample size is large enough. The third approach developed in this article is a new method based on the theory of estimating functions which could be considered much more general than the previous two cases. It is based on a non‐parametric approach where no particular distribution is assumed. Thus, we can avoid the strong assumption of distribution normality accepted in previous work and in the majority of current standards of positional accuracy. The three approaches were tested using Monte Carlo simulation for several populations of residuals generated from originally sampled data. Those original grid DEMs, considered as ground data, were collected by means of digital photogrammetric methods from seven areas displaying differing morphology employing a 2 by 2 m sampling interval. The original grid DEMs were subsampled to generate new lower‐resolution DEMs. Each of these new DEMs was then interpolated to retrieve its original resolution using two different procedures. Height differences between original and interpolated grid DEMs were calculated to obtain residual populations. One interpolation procedure resulted in slightly non‐normal residual populations, whereas the other produced very non‐normal residuals with frequent outliers. Monte Carlo simulations allow us to report that the estimating function approach was the most robust and general of those tested. In fact, the other two approaches, especially the Chi‐squared method, were clearly affected by the degree of normality of the residual population distribution, producing less reliable results than the estimating functions approach. This last method shows good results when applied to the different datasets, even in the case of more leptokurtic populations. In the worst cases, no more than 64–128 checkpoints were required to construct an estimate of the global error of the DEM with 95% confidence. The approach therefore is an important step towards saving time and money in the evaluation of DEM accuracy using a single average‐error statistic. Nevertheless, we must take into account that MSE is essentially a single global measure of deviations, and thus incapable of characterizing the spatial variations of errors over the interpolated surface.  相似文献   
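As a hedged illustration of the Chi-squared approach mentioned above (assuming zero-mean, normally distributed, independent residuals at $n$ checkpoints; the paper's exact formulation may differ), the observed MSE yields an approximate confidence interval for the true mean square error $\sigma^{2}$ of

$$ \frac{n\,\widehat{\mathrm{MSE}}}{\chi^{2}_{1-\alpha/2,\,n}} \;\le\; \sigma^{2} \;\le\; \frac{n\,\widehat{\mathrm{MSE}}}{\chi^{2}_{\alpha/2,\,n}}, $$

since $n\,\widehat{\mathrm{MSE}}/\sigma^{2}$ then follows a $\chi^{2}$ distribution with $n$ degrees of freedom; the width of this interval at a given confidence level is what determines how many checkpoints are needed.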

6.
Terrain attributes such as slope gradient and slope shape, computed from a gridded digital elevation model (DEM), are important input data for landslide susceptibility mapping. Errors in DEM can cause uncertainty in terrain attributes and thus influence landslide susceptibility mapping. Monte Carlo simulations have been used in this article to compare uncertainties due to DEM error in two representative landslide susceptibility mapping approaches: a recently developed expert knowledge and fuzzy logic-based approach to landslide susceptibility mapping (efLandslides), and a logistic regression approach that is representative of multivariate statistical approaches to landslide susceptibility mapping. The study area is located in the middle and upper reaches of the Yangtze River, China, and includes two adjacent areas with similar environmental conditions – one for efLandslides model development (approximately 250 km2) and the other for model extrapolation (approximately 4600 km2). Sequential Gaussian simulation was used to simulate DEM error fields at 25-m resolution with different magnitudes and spatial autocorrelation levels. Nine sets of simulations were generated. Each set included 100 realizations derived from a DEM error field specified by possible combinations of three standard deviation values (1, 7.5, and 15 m) for error magnitude and three range values (0, 60, and 120 m) for spatial autocorrelation. The overall uncertainties of both efLandslides and the logistic regression approach attributable to each model-simulated DEM error were evaluated based on a map of standard deviations of landslide susceptibility realizations. The uncertainty assessment showed that the overall uncertainty in efLandslides was less sensitive to DEM error than that in the logistic regression approach and that the overall uncertainties in both efLandslides and the logistic regression approach for the model-extrapolation area were generally lower than in the model-development area used in this study. Boxplots were produced by associating an independent validation set of 205 observed landslides in the model-extrapolation area with the resulting landslide susceptibility realizations. These boxplots showed that for all simulations, efLandslides produced more reasonable results than logistic regression.  相似文献   
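A minimal sketch of the Monte Carlo workflow described above, with hypothetical helper names (`simulate_error_field`, `susceptibility_model`) standing in for sequential Gaussian simulation and for efLandslides or logistic regression; it only illustrates how a per-cell standard-deviation layer is obtained from a set of perturbed realizations.

```python
# Illustrative sketch (not the authors' code) of Monte Carlo DEM error propagation:
# perturb a DEM with simulated error fields, recompute a susceptibility map per
# realization, and summarise per-cell uncertainty as a standard-deviation layer.
import numpy as np

def simulate_error_field(shape, std, rng):
    # Placeholder: uncorrelated Gaussian error; the study uses sequential
    # Gaussian simulation to impose a spatial autocorrelation range as well.
    return rng.normal(0.0, std, size=shape)

def susceptibility_model(dem):
    # Placeholder for efLandslides or a logistic regression model applied
    # to terrain attributes derived from the DEM.
    return np.clip((dem - dem.min()) / (np.ptp(dem) + 1e-9), 0, 1)

def monte_carlo_uncertainty(dem, error_std, n_realizations=100, seed=0):
    rng = np.random.default_rng(seed)
    stack = []
    for _ in range(n_realizations):
        perturbed = dem + simulate_error_field(dem.shape, error_std, rng)
        stack.append(susceptibility_model(perturbed))
    return np.stack(stack).std(axis=0)   # per-cell standard-deviation layer

dem = np.random.default_rng(1).uniform(200, 800, size=(50, 50))
uncertainty = monte_carlo_uncertainty(dem, error_std=7.5)
print(uncertainty.mean())
```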

7.
Digital elevation models (DEMs) have been widely used for a range of applications and form the basis of many GIS-related tasks. An essential aspect of a DEM is its accuracy, which depends on a variety of factors, such as source data quality, interpolation methods, data sampling density and the surface topographical characteristics. In recent years, point measurements acquired directly from land surveying such as differential global positioning system and light detection and ranging have become increasingly popular. These topographical data points can be used as the source data for the creation of DEMs at a local or regional scale. The errors in point measurements can be estimated in some cases. The focus of this article is on how the errors in the source data propagate into DEMs. The interpolation method considered is a triangulated irregular network (TIN) with linear interpolation. Both horizontal and vertical errors in source data points are considered in this study. An analytical method is derived for the error propagation into any particular point of interest within a TIN model. The solution is validated using Monte Carlo simulations and survey data obtained from a terrestrial laser scanner.  相似文献   
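For the vertical-error component alone, propagation through TIN linear interpolation can be sketched as follows (a simplified illustration, not the article's full derivation, which also treats horizontal errors). The interpolated height at a point inside a triangle is a barycentric combination of the vertex heights,

$$ z(x, y) = \lambda_1 z_1 + \lambda_2 z_2 + \lambda_3 z_3, \qquad \lambda_1 + \lambda_2 + \lambda_3 = 1, $$

so, for independent vertical errors with variances $\sigma_{z_i}^{2}$,

$$ \sigma_z^{2} \approx \lambda_1^{2}\sigma_{z_1}^{2} + \lambda_2^{2}\sigma_{z_2}^{2} + \lambda_3^{2}\sigma_{z_3}^{2}. $$

Horizontal errors in the source points additionally perturb the weights $\lambda_i$, which is what the full analytical solution accounts for.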

8.
Abstract

When data on environmental attributes such as those of soil or groundwater are manipulated by logical cartographic modelling, the results are usually assumed to be exact. However, in reality the results will be in error because the values of input attributes cannot be determined exactly. This paper analyses how errors in such values propagate through Boolean and continuous modelling, involving the intersection of several maps. The error analysis is carried out using Monte Carlo methods on data interpolated by block kriging to a regular grid which yields predictions and prediction error standard deviations of attribute values for each pixel. The theory is illustrated by a case study concerning the selection of areas of medium textured, non-saline soil at an experimental farm in Alberta, Canada. The results suggest that Boolean methods of sieve mapping are much more prone to error propagation than the more robust continuous equivalents. More study of the effects of errors and of the choice of attribute classes and of class parameters on error propagation is recommended.  相似文献   
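A hedged sketch of Monte Carlo error propagation through Boolean sieve mapping in the sense described above (not the paper's code; the attribute names, thresholds and the assumption of per-pixel independent normal errors are illustrative only).

```python
# Each pixel has a kriging prediction and a prediction-error standard deviation;
# realizations are drawn, a Boolean sieve is applied, and the per-pixel
# probability of passing all criteria is estimated across realizations.
import numpy as np

rng = np.random.default_rng(0)
shape = (100, 100)

# Hypothetical kriged attributes: clay content (%) and electrical conductivity (dS/m).
clay_mean, clay_sd = rng.uniform(10, 50, shape), np.full(shape, 4.0)
ec_mean, ec_sd = rng.uniform(0.5, 3.0, shape), np.full(shape, 0.4)

def boolean_pass(clay, ec):
    # Example sieve: medium-textured (20-35% clay) and non-saline (EC < 2 dS/m).
    return (clay >= 20) & (clay <= 35) & (ec < 2.0)

n = 500
hits = np.zeros(shape)
for _ in range(n):
    clay = rng.normal(clay_mean, clay_sd)
    ec = rng.normal(ec_mean, ec_sd)
    hits += boolean_pass(clay, ec)

probability = hits / n   # per-pixel probability that the criteria are truly satisfied
print(probability.mean())
```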

9.
Abstract

Error and uncertainty in spatial databases have gained considerable attention in recent years. The concern is that, as in other computer applications and, indeed, all analyses, poor quality input data will yield even worse output. Various methods for analysis of uncertainty have been developed, but none has been shown to be directly applicable to an actual geographical information system application in the area of natural resources. In spatial data on natural resources in general, and in soils data in particular, a major cause of error is the inclusion of unmapped units within areas delineated on the map as uniform. In this paper, two alternative algorithms for simulating inclusions in categorical natural resource maps are detailed. Their usefulness is shown by a simplified Monte Carlo testing to evaluate the accuracy of agricultural land valuation using land use and the soil information. Using two test areas it is possible to show that errors of as much as 6 per cent may result in the process of land valuation, with simulated valuations both above and below the actual values. Thus, although an actual monetary cost of the error term is estimated here, it is not found to be large.  相似文献   

10.
The weights-of-evidence model (a Bayesian probability model) was applied to the task of evaluating landslide susceptibility using GIS. Using landslide location and a spatial database containing information such as topography, soil, forest, geology, land cover and lineament, the weights-of-evidence model was applied to calculate each relevant factor's rating for the Boun area in Korea, which had suffered substantial landslide damage following heavy rain in 1998. In the topographic database, the factors were slope, aspect and curvature; in the soil database, they were soil texture, soil material, soil drainage, soil effective thickness and topographic type; in the forest map, they were forest type, timber diameter, timber age and forest density; lithology was derived from the geological database; land-use information came from Landsat TM satellite imagery; and lineament data from IRS satellite imagery. Tests of conditional independence were performed for the selection of factors, allowing 43 combinations of factors to be analysed. For the analysis of mapping landslide susceptibility, the contrast values, W+ and W−, of each factor's rating were overlaid spatially. The results of the analysis were validated using the previous landslide locations. The combination of slope, curvature, topography, timber diameter, geology and lineament showed the best results. The results can be used for hazard prevention and land-use planning.
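For reference, the weights of evidence are conventionally defined as (standard notation, which may differ slightly from the paper's):

$$ W^{+} = \ln\frac{P(B \mid D)}{P(B \mid \bar{D})}, \qquad W^{-} = \ln\frac{P(\bar{B} \mid D)}{P(\bar{B} \mid \bar{D})}, \qquad C = W^{+} - W^{-}, $$

where $B$ denotes presence of an evidential class (e.g., a particular slope class), $\bar{B}$ its absence, and $D$ the presence of a landslide; the contrast $C$ summarises the strength of spatial association between the factor class and landslide occurrence.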

11.
The complexity of land use and land cover (LULC) change models is often attributed to spatial heterogeneity of the phenomena they try to emulate. The associated outcome uncertainty stems from a combination of model unknowns. Contrary to the widely shared consensus on the importance of evaluating outcome uncertainty, little attention has been given to the role a well-structured spatially explicit sensitivity analysis (SSA) of LULC models can play in corroborating model results. In this article, I propose a methodology for SSA that employs sensitivity indices (SIs), which decompose outcome uncertainty and allocate it to various combinations of inputs. Using an agent-based model of residential development, I explore the utility of the methodology in explaining the uncertainty of simulated land use change. Model sensitivity is analyzed using two approaches. The first is spatially inexplicit in that it applies SIs to scalar outputs, where outcome land use maps are lumped into spatial statistics. The second approach, which is spatially explicit, employs the maps directly in SI calculations. It generates sensitivity maps that allow for identifying regions of factor influence, that is, areas where a particular input contributes most to the clusters of residential development uncertainty. I demonstrate that these two approaches are complementary, but at the same time can lead to different decisions regarding input factor prioritization.
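If the sensitivity indices are variance-based in the Sobol' sense (an assumption; the article may use a different formulation), the first-order index of input $X_i$ for an output $Y$ is

$$ S_i = \frac{\operatorname{Var}_{X_i}\!\big(\operatorname{E}_{\mathbf{X}_{\sim i}}[\,Y \mid X_i\,]\big)}{\operatorname{Var}(Y)}, $$

and in the spatially explicit case $Y$ is the model output at an individual map cell, so evaluating $S_i$ cell by cell yields a sensitivity map for each input.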

12.
We use a GIS‐based agent‐based model (ABM), named dynamic ecological exurban development (DEED), with spatial data in hypothetical scenarios to evaluate the individual and interacting effects of lot‐size zoning and municipal land‐acquisition strategies on possible forest‐cover outcomes in Scio Township, a municipality in Southeastern Michigan. Agent types, characteristics, behavioural methods, and landscape perceptions (i.e. landscape aesthetics) are empirically informed using survey data, spatial analyses, and a USDA methodology for mapping landscape aesthetic quality. Results from our scenario experiments computationally verified literature that show large lot‐size zoning policies lead to greater sprawl, and large lot‐size zoning policies can lead to increased forest cover, although we found this effect to be small relative to municipal land acquisition. The return on land acquisition for forest conservation was strongly affected by the location strategy used to select parcels for conservation. Furthermore, the location strategy for forest conservation land acquisition was more effective at increasing aggregate forest levels than the independent zoning policies, the quantity of area acquired for forest conservation, and any combination of the two. The results using an integrated GIS and ABM framework for evaluating land‐use development policies on forest cover provide additional insight into how these types of policies may act out over time and what aspects of the policies were more influential towards the goal of maximising forest cover.  相似文献   

13.
As sea level is projected to rise throughout the twenty-first century due to climate change, there is a need to ensure that sea level rise (SLR) models accurately and defensibly represent future flood inundation levels to allow for effective coastal zone management. Digital elevation models (DEMs) are integral to SLR modelling, but are subject to error, including in their vertical resolution. Error in DEMs leads to uncertainty in the output of SLR inundation models, which, if not considered, may result in poor coastal management decisions. However, DEM error is not usually described in detail by DEM suppliers; commonly only the RMSE is reported. This research explores the impact of stated vertical error in delineating zones of inundation in two locations along the Devon, United Kingdom, coastline (Exe and Otter Estuaries). We explore the consequences of needing to make assumptions about the distribution of error in the absence of detailed error data using a 1 m, publicly available composite DEM with a maximum RMSE of 0.15 m, typical of recent LiDAR-derived DEMs. We compare uncertainty using two methods: (i) the NOAA inundation uncertainty mapping method, which assumes a normal distribution of error, and (ii) a hydrologically correct bathtub method where the DEM is uniformly perturbed between the upper and lower bounds of a 95% linear error in 500 Monte Carlo simulations (HBM+MCS). The NOAA method produced a broader zone of uncertainty (an increase of 134.9% on the HBM+MCS method), which is particularly evident in the flatter topography of the upper estuaries. The HBM+MCS method generates a narrower band of uncertainty for these flatter areas, but very similar extents where shorelines are steeper. The differences in inundation extents produced by the methods relate to a number of underpinning assumptions, and particularly, how the stated RMSE is interpreted and used to represent error in a practical sense. Unlike the NOAA method, the HBM+MCS model is computationally intensive, depending on the areas under consideration and the number of iterations. We therefore used the HBM+MCS method to derive a regression relationship between elevation and inundation probability for the Exe Estuary. We then apply this to the adjacent Otter Estuary and show that it can defensibly reproduce zones of inundation uncertainty, avoiding the computationally intensive step of the HBM+MCS. The equation-derived zone of uncertainty was 112.1% larger than the HBM+MCS method, compared to the NOAA method which produced an uncertain area 423.9% larger. Each approach has advantages and disadvantages and requires value judgements to be made. Their use underscores the need for transparency in assumptions and communication of outputs. We urge DEM publishers to move beyond provision of a generalised RMSE and provide more detailed estimates of spatial error and complete metadata, including locations of ground control points and associated land cover.
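An illustrative sketch of the HBM+MCS idea (not the authors' implementation): the DEM is perturbed within the 95% linear error implied by the stated RMSE and a simple bathtub rule is applied per realization. The hydrological-connectivity step used in the paper, and the choice between a per-cell and a whole-surface perturbation, are simplified here.

```python
# Bathtub Monte Carlo: map the proportion of realizations in which each cell floods.
import numpy as np

rng = np.random.default_rng(42)
dem = rng.uniform(-0.5, 3.0, size=(200, 200))   # synthetic coastal elevations (m)

rmse = 0.15
le95 = 1.96 * rmse          # 95% linear error, assuming zero-mean normal DEM error
water_level = 1.0           # hypothetical sea level + SLR scenario (m)

n = 500
flood_count = np.zeros(dem.shape)
for _ in range(n):
    # Uniform perturbation between the lower and upper 95% error bounds (per cell).
    perturbed = dem + rng.uniform(-le95, le95, size=dem.shape)
    flood_count += (perturbed <= water_level)

inundation_probability = flood_count / n
print(inundation_probability.mean())
```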

14.
In a mountainous region, the glacier area and length extracted from satellite imagery are the projected area and length of the land surface, which do not represent the true values; errors are always present. In this paper, methods for calculating glacier area and length are put forward based on satellite imagery and a digital elevation model (DEM). Pure and mixed pixels are extracted using a linear spectral unmixing approach, the slope of each pixel is calculated from the DEM, and an area calculation method is then presented. The projected length is obtained from the satellite imagery and the elevation differences are calculated from the DEM; the length calculation method is based on the Pythagorean theorem. For a glacier in the study area in the western Qilian Mountains, northwestern China, the projected area and length were 140.93 km2 and 30.82 km, respectively, compared with 155.16 km2 and 32.11 km calculated by the methods in this paper; the relative errors of the projected area and length extracted directly from the Landsat Thematic Mapper (TM) image thus reach -9.2 percent and -4.0 percent, respectively. The calculation method accords better with reality and can serve as a reference for monitoring the area and length of other features in mountainous regions.
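The corrections described above can be summarised generically (a simplified formulation; the paper's per-pixel treatment of mixed pixels from spectral unmixing is more detailed). For pixel $i$ with DEM-derived slope $\beta_i$, and centre-line segment $j$ with projected length $\Delta l_j$ and elevation difference $\Delta h_j$,

$$ A_{\mathrm{surface}} = \sum_i \frac{A_{\mathrm{proj},\,i}}{\cos\beta_i}, \qquad L = \sum_j \sqrt{\Delta l_j^{2} + \Delta h_j^{2}}, $$

so both the surface area and the along-slope length are necessarily larger than their map-projected counterparts, consistent with the negative relative errors reported above.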

15.
Mineral exploration activities require robust predictive models that result in accurate mapping of the probability that mineral deposits can be found at a certain location. Random forest (RF) is a powerful data-driven machine-learning predictive method that has so far seen little use in mineral potential mapping. In this paper, the performance of RF regression for estimating the likelihood of gold deposits in the Rodalquilar mining district is explored. The RF model was developed using a comprehensive exploration GIS database composed of: gravimetric and magnetic surveys, a lithogeochemical survey of 59 elements, lithology and fracture maps, a Landsat 5 Thematic Mapper image and gold occurrence locations. The results of this study indicate that the use of RF for the integration of large multisource data sets used in mineral exploration and for prediction of mineral deposit occurrences offers several advantages over existing methods. Key advantages of RF include: (1) the simplicity of parameter setting; (2) an internal unbiased estimate of the prediction error; (3) the ability to handle complex data of different statistical distributions, responding to nonlinear relationships between variables; (4) the capability to use categorical predictors; and (5) the capability to determine variable importance. Additionally, variables that RF identified as most important coincide with well-known geologic expectations. To validate and assess the effectiveness of the RF method, gold prospectivity maps are also prepared using the logistic regression (LR) method. Statistical measures of map quality indicate that the RF method performs better than LR, with mean square errors equal to 0.12 and 0.19, respectively. The efficiency of RF is also better, achieving an optimum success rate when half of the area predicted by LR is considered.
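A hedged sketch of RF-based prospectivity mapping in the spirit of the study above, using scikit-learn rather than the authors' implementation; the feature matrix is a hypothetical stand-in for the gravimetric, magnetic, lithogeochemical, lithology, fracture and Landsat-derived layers described in the abstract.

```python
# RF regression on per-cell evidence layers, with the out-of-bag score serving as
# the internal prediction-error estimate mentioned in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n_cells = 5000
X = rng.normal(size=(n_cells, 10))    # synthetic stacked evidence layers per cell
y = (0.5 * X[:, 0] + 0.2 * X[:, 3] ** 2 + rng.normal(0, 0.3, n_cells) > 1.0).astype(float)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X_train, y_train)

print("OOB R^2:", rf.oob_score_)                       # internal error estimate
print("Test MSE:", mean_squared_error(y_test, rf.predict(X_test)))
print("Variable importances:", rf.feature_importances_)
```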

16.
Satellite instruments, particularly the Landsat TM (Thematic Mapper) and ETM+ (Enhanced Thematic Mapper Plus) series of sensors, are important tools in the interdisciplinary study of tropical forests that are increasingly integrated into studies that monitor changes in vegetation cover within tropical forests and tropical protected areas, and also applied with other types of data to investigate the drivers of land cover change. However, further advances in the use of Landsat to study and monitor tropical forests and protected areas are threatened by the scan line corrector failure on the ETM+ sensor, as well as uncertainty about the continuity of the Landsat mission. Given these problems, this paper illustrates how ETM+ data were used in an interdisciplinary study that effectively monitored forest cover change in Gunung Palung National Park in West Kalimantan, Indonesian Borneo. Following 31 May 2003, when the ETM+ sensor's scan line corrector failed, we analysed how this failure impedes our ability to perform a similar study from this date onwards. This analysis uses six simulated post-scan line corrector failure (SLC-off) images and reveals that data gaps caused by SLC-off introduce maximum errors of 1.47 per cent and 4.04 per cent in estimates of forest cover and rates of forest loss, respectively. The analysis also demonstrates how SLC-off has transformed ETM+ data from a complete inventory dataset to a statistical sample with variable sample fraction, and notes how this data loss will confound the use of Landsat data to model land cover change in a spatially explicit manner. We discuss potential limited uses of SLC-off data and suggest alternative sensors that may provide essential remotely sensed data for monitoring tropical forests in Southeast Asia.  相似文献   

17.
Areal interpolation is the process by which data collected from one set of zonal units can be estimated for another zonal division of the same space that shares few or no boundaries with the first. In previous research, we outlined the use of dasymetric mapping for areal interpolation and showed it to be the most accurate method tested. There we used control information derived from classified satellite imagery to parameterize the dasymetric method, but because such data are rife with errors, here we extend the work to examine the sensitivity of the population estimates to error in the classified imagery. Results show the population estimates by dasymetric mapping to be largely insensitive to the errors of classification in the Landsat image when compared with the other methods tested. The dasymetric method deteriorates to the accuracy of the next worst estimate only when 40% error occurs in the classified image, a level of error that may easily be bettered within most remote sensing projects.  相似文献   
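A minimal sketch of dasymetric areal interpolation in the sense used above (not the authors' implementation; zone identifiers, areas and populations are invented): population counted in source zones is reallocated to target zones in proportion to the inhabited area, taken from the classified image, contained in each source-target intersection.

```python
# Hypothetical intersection table: (source_zone, target_zone, inhabited_area)
# tuples from overlaying both zone layers with a binary settlement mask.
intersections = [
    ("S1", "T1", 4.0), ("S1", "T2", 1.0),
    ("S2", "T2", 3.0), ("S2", "T3", 3.0),
]
source_population = {"S1": 1000, "S2": 600}

# Total inhabited area per source zone.
inhabited_per_source = {}
for s, t, a in intersections:
    inhabited_per_source[s] = inhabited_per_source.get(s, 0.0) + a

# Reallocate each source zone's population proportionally to inhabited area.
target_population = {}
for s, t, a in intersections:
    share = source_population[s] * a / inhabited_per_source[s]
    target_population[t] = target_population.get(t, 0.0) + share

print(target_population)   # {'T1': 800.0, 'T2': 500.0, 'T3': 300.0}
```

Errors in the classified settlement mask change the inhabited-area weights, which is the sensitivity the study above quantifies.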

18.
Lead-210 assay and dating are subject to several sources of error, including natural variation, the statistical nature of measuring radioactivity, and estimation of the supported fraction. These measurable errors are considered in calculating confidence intervals for 210Pb dates. Several sources of error, including the effect of blunders or misapplication of the mathematical model, are not included in the quantitative analysis. First-order error analysis and Monte Carlo simulation (of cores from Florida PIRLA lakes) are used as independent estimates of dating uncertainty. CRS-model dates average less than 1% older than Monte Carlo median dates, but the difference increases non-linearly with age to a maximum of 11% at 160 years. First-order errors increase exponentially with calculated CRS-model dates, with the largest 95% confidence interval in the bottommost datable section being 155±90 years, and the smallest being 128±8 years. Monte Carlo intervals also increase exponentially with age, but the largest 95% occurrence interval is 152±44 years. Confidence intervals calculated by first-order methods and ranges of Monte Carlo dates agree fairly well until the 210Pb date is about 130 years old. Older dates are unreliable because of this divergence. Ninety-five per cent confidence intervals range from about 1–2 years at 10 years of age, 10–20 at 100 years, and 8–90 at 150 years old.This is the third of a series of papers to be published by this journal which is a contribution of the Paleoecological Investigation of Recent Lake Acidification (PIRLA) project. Drs. D.F. Charles and D.R. Whitehead are guest editors for this series.  相似文献   
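For context, the CRS (constant rate of supply) model assigns to depth $x$ the date (standard formulation; the measurable errors discussed above propagate through this relation):

$$ t(x) = \frac{1}{\lambda}\,\ln\frac{A(0)}{A(x)}, \qquad \lambda = \frac{\ln 2}{22.26\ \mathrm{yr}} \approx 0.031\ \mathrm{yr^{-1}}, $$

where $A(0)$ is the total inventory of unsupported 210Pb in the core and $A(x)$ the inventory below depth $x$; because $A(x)$ approaches zero in the oldest datable sections, small assay and supported-fraction errors translate into the rapidly widening confidence intervals reported above.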

19.
Most forest fires in Korea are spatially concentrated in certain areas and are highly related to human activities. These site-specific characteristics of forest fires are analyzed by spatial regression analysis using a generalized linear mixed model (GLMM) implemented in R, which can account for spatial autocorrelation. We examined the quantitative effects of topography, human accessibility, and forest cover without and with spatial autocorrelation. Under the assumption that slope, elevation, aspect, population density, distance from road, and forest cover are related to forest fire occurrence, the explanatory variables for each of these factors were prepared using a Geographic Information System-based process. First, we tested the influence of fixed effects on the occurrence of forest fires using a generalized linear model (GLM) with a Poisson distribution. Overdispersion of the response data was also detected, and variogram analysis was performed using the standardized residuals of the GLM. Second, the GLMM was applied to account for the evident residual autocorrelation structure. The fitted models were validated and compared using the multiple correlation and root mean square error (RMSE). Results showed that slope, elevation, aspect index, population density, and distance from road were significant factors capable of explaining forest fire occurrence. Positive spatial autocorrelation was estimated up to a distance of 32 km. The kriging predictions based on the GLMM were smoother than those of the GLM. Finally, a forest fire occurrence map was prepared using the results from both models. The fire risk decreases with increasing distance to areas with high population densities, and increasing elevation showed a suppressing effect on fire occurrence. Both variables are in accordance with the significance tests.
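A hedged sketch of the first modelling step described above (a Poisson GLM for fire-occurrence counts), using statsmodels on synthetic data; the spatial GLMM and variogram steps are not reproduced, and the variable names and coefficients are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
slope = rng.uniform(0, 40, n)            # degrees
elevation = rng.uniform(50, 1200, n)     # metres
pop_density = rng.lognormal(3, 1, n)     # persons per km^2
road_dist = rng.uniform(0, 5000, n)      # metres

# Synthetic counts: more fires near people and roads, fewer at high elevation.
mu = np.exp(-1.0 + 0.0008 * pop_density - 0.0002 * road_dist - 0.0005 * elevation)
fires = rng.poisson(mu)

X = sm.add_constant(np.column_stack([slope, elevation, pop_density, road_dist]))
glm = sm.GLM(fires, X, family=sm.families.Poisson()).fit()

print(glm.summary())
# Overdispersion check: a Pearson chi-square / residual d.f. ratio well above 1
# motivates moving to a mixed model (GLMM) with a spatial random effect.
print(glm.pearson_chi2 / glm.df_resid)
```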

20.
Land cover class composition of remotely sensed image pixels can be estimated using soft classification techniques increasingly available in many GIS packages. However, their output provides no indication of how such classes are distributed spatially within the instantaneous field of view represented by the pixel. Techniques that attempt to provide an improved spatial representation of land cover have been developed, but not tested on the difficult task of mapping from real satellite imagery. The authors investigated the use of a Hopfield neural network technique to map the spatial distributions of classes reliably, using pixel-composition information determined previously by soft classification. The approach involved designing the energy function to produce a ‘best guess’ prediction of the spatial distribution of class components in each pixel. In previous studies, the authors described the application of the technique to target identification, pattern prediction and land cover mapping at the sub-pixel scale, but only for simulated imagery. We now show how the approach can be applied to Landsat Thematic Mapper (TM) agriculture imagery to derive accurate estimates of land cover and reduce the uncertainty inherent in such imagery. The technique was applied to Landsat TM imagery of small-scale agriculture in Greece and large-scale agriculture near Leicester, UK. The resultant maps provided an accurate and improved representation of the land covers studied, with RMS errors of the order of 0.1 recorded for the new fine-resolution maps derived from the Landsat imagery. The results showed that the neural network represents a simple, efficient tool for mapping land cover from operational satellite sensor imagery and can deliver requisite results and improvements over traditional techniques for the GIS analysis of practical remotely sensed imagery at the sub-pixel scale.
