Similar Literature
20 similar documents found.
1.
Airborne LiDAR (light detection and ranging) data are now commonly regarded as the most accurate source of elevation data for medium-scale topographical modelling applications. However, quoted LiDAR elevation error may not necessarily represent the actual errors occurring across all surfaces, potentially impacting the reliability of derived predictions in Geographical Information Systems (GIS). The extent to which LiDAR elevation error varies in association with land cover, vegetation class and LiDAR data source is quantified relative to dual-frequency global positioning system survey data captured in a 400-ha area in Ireland, where four separate classes of LiDAR point data overlap. Quoted elevation errors are found to correspond closely with the minimum requirement recommended by the American Society for Photogrammetry and Remote Sensing for the definition of 95% error in urban areas only. Global elevation errors are found to be up to 5 times the quoted error, and errors within vegetation areas are found to be even larger, with errors in individual vegetation classes reaching up to 15 times the quoted error. Furthermore, a strong skew is noted in vegetated areas within all the LiDAR data sets tested, pushing errors in some cases to more than 25 times the quoted error. The skew observed suggests that an assumption of a normal error distribution is inappropriate in vegetated areas. The physical parameters found to affect elevation error most fundamentally were canopy depth, canopy density and granularity. Other factors observed to affect the degree to which actual errors deviate from quoted error included the primary use for which the data were acquired and the processing applied by data suppliers to meet these requirements.
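The comparison above hinges on how per-class vertical error statistics are computed. A minimal sketch (not the authors' code) of the usual quantities — RMSE, the empirical 95th-percentile absolute error, and the normal-theory 95% error (1.96 × RMSE) — which diverge exactly when the residuals are skewed, as reported for the vegetated classes:

```python
import numpy as np

def vertical_error_stats(dz):
    """Summary statistics for elevation residuals dz = lidar_z - reference_z.
    The normal-theory 95% error equals 1.96 * RMSE only for a zero-mean,
    normally distributed error population; skew breaks that equivalence."""
    dz = np.asarray(dz, float)
    rmse = float(np.sqrt(np.mean(dz ** 2)))
    p95 = float(np.percentile(np.abs(dz), 95))          # empirical 95% error
    skew = float(((dz - dz.mean()) ** 3).mean() / dz.std() ** 3)
    return {"rmse": rmse, "p95": p95, "normal_p95": 1.96 * rmse, "skew": skew}
```

For strongly skewed residuals, `p95` can be several times `normal_p95`, which is the effect the abstract describes for vegetated areas.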

2.
We analysed the sensitivity of a decision tree derived forest type mapping to simulated data errors in input digital elevation model (DEM), geology and remotely sensed (Landsat Thematic Mapper) variables. We used a stochastic Monte Carlo simulation model coupled with a one‐at‐a‐time approach. The DEM error was assumed to be spatially autocorrelated, with its magnitude being a percentage of the elevation value. The error of the categorical geology data was assumed to be positional and limited to boundary areas. The Landsat data error was assumed to be spatially random, following a Gaussian distribution. Each layer was perturbed using its error model with increasing levels of error, and the effect on the forest type mapping was assessed. The results of the three sensitivity analyses were markedly different: the classification was most sensitive to the DEM error, less sensitive to the Landsat data errors, and only slightly sensitive to the geology data error. A linear increase in error resulted in non‐linear increases in effect for the DEM and Landsat errors, while the effect was linear for geology. As an example, a DEM error as small as ±2% reduced the overall test accuracy by more than 2%. More importantly, the same uncertainty level caused nearly 10% of the study area to change its initial class assignment at each perturbation, on average. A spatial assessment of the sensitivities indicates that most of the pixel changes occurred within those forest classes expected to be more sensitive to data error. In addition to characterising the effect of errors on forest type mapping using decision trees, this study has demonstrated the generality of employing Monte Carlo analysis for the sensitivity and uncertainty analysis of categorical outputs, which have distinctive characteristics from numerical outputs.
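The one-at-a-time Monte Carlo scheme described above can be sketched as follows; the error model, classifier and layer names here are illustrative placeholders, not the study's actual implementation:

```python
import numpy as np

def perturb_dem(dem, pct, rng):
    """Elevation-proportional error: white noise smoothed with its
    4-neighbours to mimic spatial autocorrelation (a crude assumption)."""
    noise = rng.standard_normal(dem.shape)
    smooth = (noise + np.roll(noise, 1, 0) + np.roll(noise, -1, 0)
              + np.roll(noise, 1, 1) + np.roll(noise, -1, 1)) / 5.0
    return dem * (1.0 + pct / 100.0 * smooth)

def sensitivity_oat(classify, layers, error_models, n_runs=50, seed=0):
    """One-at-a-time Monte Carlo sensitivity: perturb a single input layer
    per experiment, keep the rest fixed, and record the fraction of pixels
    whose class assignment changes relative to the unperturbed baseline."""
    rng = np.random.default_rng(seed)
    baseline = classify(layers)
    change_rates = {}
    for name, model in error_models.items():
        changed = []
        for _ in range(n_runs):
            perturbed = dict(layers)
            perturbed[name] = model(layers[name], rng)
            changed.append(np.mean(classify(perturbed) != baseline))
        change_rates[name] = float(np.mean(changed))
    return change_rates
```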

3.
Summary. Palaeomagnetic results for a sequence of Permocarboniferous rhythmites presented in the previous paper have been submitted to maximum entropy spectral analysis to test whether these palaeomagnetic data could supply information on geomagnetic variations. There is a good correlation between the thickness of the rhythmites and sunspot spectra, suggesting that these sediments are really seasonal. The palaeomagnetic spectra are compared with those of observatory records. Periods of approximately 24.4, 12.4, 8.6, 6.7 and 5.5 found for palaeomagnetic data have corresponding values in the geomagnetic spectrum. Most of these periods, however, are the same as those found in the thickness data, implying that magnetization can be influenced by the sedimentation process as suggested by other investigators. On the other hand, both geomagnetic and climatic (thickness) variations seem to be related to solar activity. Therefore, at least indirectly, palaeomagnetic data may reflect geomagnetic variations.

4.
Digital elevation model (DEM) elevation accuracy and spatial resolution are typically considered before a given DEM is used for the assessment of coastal flooding, sea-level rise or erosion risk. However, limitations of DEMs arising from their original data source can often be overlooked during DEM selection. Global elevation error statistics provided by DEM data suppliers can provide a useful indicator of actual DEM error, but these statistics can understate elevation errors occurring outside of idealised ground reference areas. The characteristic limitations of a range of DEM sources that may be used for the assessment of coastal inundation and erosion risk are tested using high-resolution photogrammetric, low- and medium-resolution global positioning system (GPS)-derived and very high-resolution terrestrial laser scanning point data sets. Errors detected in a high-resolution photogrammetric DEM are found to be substantially beyond quoted error, demonstrating the degree to which quoted DEM accuracy can understate local DEM error and highlighting the extent to which spatial resolution can fail to provide a reliable indicator of DEM accuracy. Superior accuracies and inundation prediction results are achieved based on much lower-resolution GPS points confirming conclusions drawn in the case of the photogrammetric DEM data. This suggests a scope for the use of GPS-derived DEMs in preference to the photogrammetric DEM data in large-scale risk-mapping studies. DEM accuracies and superior representation of micro-topography achieved using high-resolution terrestrial laser scan data confirm its advantages for the prediction of subtle inundation and erosion risk. However, the requirement for data fusion of GPS to remove ground-vegetation error highlighted limitations for the use of side-scan laser scan data in densely vegetated areas.

5.
The increasing use of Geographical Information System applications has generated a strong interest in the assessment of data quality. As an example of quantitative raster data, we analysed errors in Digital Terrain Models (DTM). Errors might be classified as systematic (strongly dependent on the production methodology) and random. The present work attempts to locate some types of randomly distributed, weakly spatially correlated errors by applying a new methodology based on Principal Components Analysis. The Principal Components approach presented is very different from the typical scheme used in image processing. A prototype implementation has been conducted using MATLAB, and the overall procedure has been numerically tested using a Monte Carlo approach. A DTM of Stockholm, with integer-valued heights varying from 0 to 59 m, has been used as a testbed. The model was contaminated by adding randomly located errors, distributed uniformly between −4 m and +4 m. The procedure has been applied using both spike-shaped (isolated) and pyramid-like errors. The preliminary results show that for the former, roughly half of the errors have been located with a Type I error probability of 4.6 per cent on average, checking up to 1 per cent of the dataset. The associated Type II error of the larger errors (of exactly −4 m or +4 m) drops from an initial value of 1.21 per cent down to 0.63 per cent. By checking another 1 per cent of the dataset, such error drops to 0.34 per cent, implying that about 71 per cent of the ±4 m errors have been located; Type I error was below 11.27 per cent. The results for pyramid-like errors are slightly worse, with a Type I error of 25.80 per cent on average for the first 1 per cent effort, and a Type II error drop from an initial value of 0.81 per cent down to 0.65 per cent. The procedure can be applied both for error detection during DTM generation and by end users. It might also be used for other types of quantitative raster data.
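As an illustration of the principal-components idea (a heavily simplified sketch, not the paper's procedure): reconstruct the surface from its leading components and flag the cells with the largest reconstruction residuals, where weakly spatially correlated gross errors tend to concentrate.

```python
import numpy as np

def pca_flag_errors(dtm, n_components=3, frac=0.01):
    """Flag candidate gross errors in a DTM: project the (column-centred)
    grid onto its leading principal components via SVD, reconstruct, and
    flag the `frac` fraction of cells with the largest residuals."""
    x = dtm - dtm.mean(axis=0)
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    recon = u[:, :n_components] * s[:n_components] @ vt[:n_components] + dtm.mean(axis=0)
    resid = np.abs(dtm - recon)
    k = max(1, int(frac * dtm.size))
    thresh = np.partition(resid.ravel(), -k)[-k]
    return resid >= thresh
```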

6.
Error Characteristics of Land Use Change Data
Zhu Huiyi, Acta Geographica Sinica (《地理学报》), 2004, 59(4): 615-620
Land use change data are derived mainly from land use data at different points in time, through statistics, comparison and spatial overlay analysis; the time-point land use data in turn come mainly from remote sensing, land surveys, historical maps and statistical records. Because these sources inevitably carry errors of varying degrees, the change data derived from them also vary in accuracy. The existing literature usually reports the errors of the time-point data but rarely examines the errors of the land use change data themselves. This paper first establishes an error analysis method for change data, and then uses examples to analyse how the magnitude of land use change, the errors of the time-point data and their different combinations affect the error of the change data. The conclusions are as follows: high accuracy in the time-point data does not by itself guarantee high accuracy in the change data, because the agreement of the error directions also matters; and the smaller the magnitude of land use change, the greater the influence of time-point data errors on the accuracy of the change data, and the harder that accuracy is to guarantee. These conclusions remind us that land use change studies must pay sufficient attention to data error, and that data accuracy should be improved through source data quality, data acquisition methods and data analysis methods.

7.
Abstract

Kriging is an optimal method of spatial interpolation that produces an error for each interpolated value. Block kriging is a form of kriging that computes averaged estimates over blocks (areas or volumes) within the interpolation space. If this space is sampled sparsely, and divided into blocks of a constant size, a variable estimation error is obtained for each block, with blocks near to sample points having smaller errors than blocks farther away. An alternative strategy for sparsely sampled spaces is to vary the sizes of blocks in such a way that a block's interpolated value is just sufficiently different from that of an adjacent block given the errors on both blocks. This has the advantage of increasing spatial resolution in many regions, and conversely reducing it in others where maintaining a constant size of block is unjustified (hence achieving data compression). Such a variable subdivision of space can be achieved by regular recursive decomposition using a hierarchical data structure. An implementation of this alternative strategy employing a split-and-merge algorithm operating on a hierarchical data structure is discussed. The technique is illustrated using an oceanographic example involving the interpolation of satellite sea surface temperature data. Consideration is given to the problem of error propagation when combining variable resolution interpolated fields in GIS modelling operations.

8.
Robust estimation of geomagnetic transfer functions
Summary. We show, through an examination of residuals, that all of the statistical assumptions usually used in estimating transfer functions for geomagnetic induction data fail at periods from 5 min to several hours at geomagnetic mid-latitudes. This failure can be traced to the finite spatial scale of many sources. In the past, workers have tried to deal with this problem by hand selecting data segments thought to be free of source effects. We propose an automatic robust analysis scheme which accounts for the systematic increase of errors with increasing power and which automatically downweights source contaminated outliers. We demonstrate that, in contrast to ordinary least squares, this automatic procedure consistently yields reliable transfer function estimates with realistic errors.

9.
Digital elevation models (DEMs) have been widely used for a range of applications and form the basis of many GIS-related tasks. An essential aspect of a DEM is its accuracy, which depends on a variety of factors, such as source data quality, interpolation methods, data sampling density and the surface topographical characteristics. In recent years, point measurements acquired directly from land surveying such as differential global positioning system and light detection and ranging have become increasingly popular. These topographical data points can be used as the source data for the creation of DEMs at a local or regional scale. The errors in point measurements can be estimated in some cases. The focus of this article is on how the errors in the source data propagate into DEMs. The interpolation method considered is a triangulated irregular network (TIN) with linear interpolation. Both horizontal and vertical errors in source data points are considered in this study. An analytical method is derived for the error propagation into any particular point of interest within a TIN model. The solution is validated using Monte Carlo simulations and survey data obtained from a terrestrial laser scanner.
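The linear-interpolation case makes analytical propagation simple: a TIN estimate is a barycentric-weighted sum of the three vertex heights, so independent vertical errors combine as var = Σ w_i²σ_i². A sketch of that relation, checked against Monte Carlo simulation (vertical error only; the article also treats horizontal error, which is omitted here):

```python
import numpy as np

def barycentric_weights(p, tri):
    """Barycentric coordinates of point p inside triangle tri (3x2 array)."""
    a, b, c = tri
    m = np.array([[b[0] - a[0], c[0] - a[0]],
                  [b[1] - a[1], c[1] - a[1]]])
    l1, l2 = np.linalg.solve(m, p - a)
    return np.array([1.0 - l1 - l2, l1, l2])

def tin_interpolate(p, tri, z):
    return float(barycentric_weights(p, tri) @ z)

def analytic_sigma(p, tri, sigma_z):
    """Analytical sd of the interpolated height from independent vertical
    vertex errors: the TIN interpolant is linear in z, so variances add
    with squared weights."""
    w = barycentric_weights(p, tri)
    return float(np.sqrt(np.sum(w ** 2 * np.asarray(sigma_z) ** 2)))

def monte_carlo_sigma(p, tri, z, sigma_z, n=20000, seed=0):
    """Monte Carlo check: perturb vertex heights and take the sample sd."""
    rng = np.random.default_rng(seed)
    zs = z + rng.standard_normal((n, 3)) * np.asarray(sigma_z)
    return float((zs @ barycentric_weights(p, tri)).std())
```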

10.
Summary. It has recently been proposed that a reliable estimate of the errors of tidal harmonics present in geomagnetic data can be obtained by the following method: (1) randomly assign each observation to one of 10 subsets, (2) determine the tidal harmonics separately for each subset, and (3) calculate the standard deviation of the mean from the scatter among the 10 determinations. This method is valid if the subsets are statistically independent, but will lead to an underestimate of the errors if they are not. Here we show that, for real geomagnetic data, the assumption of statistical independence is valid.
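The three-step subset scheme is easy to state in code; the tidal-harmonic estimator is replaced here by a plain mean purely for illustration:

```python
import numpy as np

def subset_error_estimate(x, n_subsets=10, seed=0):
    """(1) Randomly assign observations to subsets, (2) estimate the
    quantity in each subset (here: a simple mean, standing in for the
    tidal-harmonic fit), (3) return the pooled estimate and the standard
    deviation of the mean across the subset estimates."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, n_subsets, size=len(x))
    estimates = np.array([x[labels == k].mean() for k in range(n_subsets)])
    return estimates.mean(), estimates.std(ddof=1) / np.sqrt(n_subsets)
```

As the abstract notes, the resulting error bar is only trustworthy if the subsets are statistically independent; correlated subsets shrink the scatter and understate the error.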

11.
The use of a priori data to resolve non-uniqueness in linear inversion
Summary. The recent, but by now classical method for dealing with non-uniqueness in geophysical inverse problems is to construct linear averages of the unknown function whose values are uniquely defined by empirical data (Backus & Gilbert). However, the usefulness of such linear averages for making geophysical inferences depends on the good behaviour of the unknown function in the region in which it is averaged. The assumption of good behaviour, which is implicit in the acceptance of a given average property, is equivalent to the use of a priori information about the unknown function. There are many cases in which such a priori information may be expressed quantitatively and incorporated in the analysis from the very beginning. In these cases, the classical least-squares method may be used both to estimate the unknown function and to provide meaningful error estimates. In this paper I develop methods for exploring the resolving power in such cases. For those problems in which a continuous unknown function is represented by a finite number of 'layer averages', the ultimately achievable resolving width is simply the layer thickness, and perfectly rectangular resolving kernels of greater width are achievable. The method is applied to synthetic data for the inverse 'gravitational edge effect' problem, where the y_i are data, f(z) is an unknown function, and the e_i are random errors. Results are compared with those of Parker, who studied the same problem using the Backus–Gilbert approach.

12.
Abstract

This paper describes an inductive modelling procedure integrated with a geographical information system for analysis of pattern within spatial data. The aim of the modelling procedure is to predict the distribution within one data set by combining a number of other data sets. Data set combination is carried out using Bayes’ theorem. Inputs to the theorem, in the form of conditional probabilities, are derived from an inductive learning process in which attributes of the data set to be modelled are compared with attributes of a variety of predictor data sets. This process is carried out on random subsets of the data to generate error bounds on inputs for analysis of error propagation associated with the use of Bayes’ theorem to combine data sets in the GIS. The statistical significance of model inputs is calculated as part of the inductive learning process. Use of the modelling procedure is illustrated through the analysis of the winter habitat relationships of red deer in Grampian Region, north-east Scotland. The distribution of red deer in Deer Management Group areas in Gordon and in Kincardine and Deeside Districts is used to develop a model which predicts the distribution throughout Grampian Region; this is tested against red deer distribution in Moray District. Habitat data sets used for constructing the model are accumulated frost and altitude, obtained from maps, and land cover, derived from satellite imagery. Errors resulting from the use of Bayes’ theorem to combine data sets within the GIS and introduced in generalizing output from 50 m pixel to 1 km grid squares resolution are analysed and presented in a series of maps. This analysis of error trains is an integral part of the implemented analytical procedure and provides support to the interpretation of the results of modelling. Potential applications of the modelling procedure are discussed.

13.
Spatially coincident land-cover information frequently varies due to technological and political variations. This is especially problematic for time-series analyses. We present an approach that uses expert expressions of how the semantics of different datasets relate to each other to integrate time-series land-cover information where the classification classes have fundamentally changed. We use land-cover mapping in the UK (LCMGB and LCM2000) as example data sets because of the extensive object-based meta-data in the LCM2000. Inconsistencies between the two datasets can arise from random, gross and systematic error and from actual change in land cover. Locales of possible land-cover change are inferred by comparing characterizations derived from the semantic relations and meta-data. Field visits showed errors of omission to be 21% and errors of commission to be 28%, despite the accuracy limitations of the land-cover information when compared with the field survey component of the Countryside Survey 2000.

14.
The geomagnetic power spectrum
Combining CHAMP satellite magnetic measurements with aeromagnetic and marine magnetic data, the global geomagnetic field has now been modelled to spherical harmonic degree 720. An important tool in field modelling is the geomagnetic power spectrum. It allows the comparison of field models estimated from different data sets and can be used to identify noise levels and systematic errors. A correctly defined geomagnetic power spectrum is flat (white) for an uncorrelated field, such as the Earth's crustal magnetic field at long wavelengths. It can be inferred from global spherical harmonic models as well as from regional grids. Marine and aeromagnetic grids usually represent the anomaly of the total intensity of the magnetic field. Appropriate corrections have to be applied in estimating the geomagnetic power spectrum from such data. The comparison of global and regional spectra using a consistently defined azimuthally averaged geomagnetic power spectrum facilitates quality control in field modelling and should provide new insights in magnetic anomaly interpretation.
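The spectrum referred to is the Mauersberger–Lowes spectrum, R_n = (n+1)(a/r)^(2n+4) Σ_m (g_nm² + h_nm²), where a flat R_n versus degree n indicates an uncorrelated (white) field. A minimal sketch computing it from a dictionary of Gauss coefficients (the coefficient values in the test are illustrative, not real field data):

```python
import numpy as np

def lowes_mauersberger(gh, r_over_a=1.0):
    """Mauersberger-Lowes power spectrum R_n from Gauss coefficients.
    `gh` maps degree n to the flat list of coefficients for that degree
    (g_n0, g_n1, h_n1, ..., in any order, in nT).
    R_n = (n + 1) * (a/r)^(2n+4) * sum(g^2 + h^2), evaluated at radius r."""
    spec = {}
    for n, coeffs in gh.items():
        attenuation = (1.0 / r_over_a) ** (2 * n + 4)
        spec[n] = (n + 1) * attenuation * float(np.sum(np.square(coeffs)))
    return spec
```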

15.
Summary. Palaeomagnetic results from Part I of this study and their analysis in Part II are combined to eliminate bias from the Cenozoic apparent polar wander path for Australia – a bias due to non-dipole components in past geomagnetic fields or, for poles calculated from hot-spot data, due to the motion of hot spots relative to the Earth's rotational axis. This path is extended in approximately bias-free form to the late Mesozoic, and indicates a significant change in the drift direction of the continent between 26 and about 60 Ma.
The bias-corrected Australian path is used, first, with seafloor spreading data for the Southern Ocean to derive a corresponding late Mesozoic–Cenozoic pole path for Antarctica. The latter shows that the Antarctic drift direction reversed in the early Tertiary. It is suggested that the early Tertiary directional changes of both Australia and Antarctica are part of a global reorganization of plates during the Eocene, postulated by Rona & Richardson, Cande & Mutter and Patriat & Achache.
Next, the Australian path is compared with hot-spot data from the African and Australian plates, indicating a movement of the hot spots relative the Earth's rotational axis during the Cenozoic. The direction of this movement is found to be consistent with previous results from other parts of the world.
Finally, the Australian path is used together with non-dipole components in the geomagnetic field to explain a prominent westward displacement of the mid- and late Cenozoic poles of India relative to those of Australia.
Because of uncertainties in the original poles and in the analysis, the present results are likely to contain appreciable errors. Nevertheless, their consistency with independent findings supports the dipole-quadrupole model of Part II for mid- and late Cenozoic geomagnetic fields.

16.
A geomagnetic scattering theory for evaluation of earth structure
Summary. Structural features of the Earth's lower crust and upper mantle can be mapped by the analysis of temporal geomagnetic fluctuations using the electromagnetic scattering theory developed in this paper. Decomposing geomagnetic field fluctuations at the Earth's surface into an excitation part and a scattered part forms the basis of a power series development. The vertical field component is interpreted as a scattering of the excitation field. The horizontal gradient and geomagnetic depth sounding methods are special cases of the theory developed. The horizontal gradient sounding method has a tensorial aspect which has not been recognized before; it should be included to obtain correct penetration depth parameter evaluations from field data.

17.
ABSTRACT

Missing data are a common problem in the analysis of geospatial information. Existing methods introduce spatiotemporal dependencies to reduce imputation errors yet ignore ease of use in practice. Classical interpolation models are easy to build and apply; however, their imputation accuracy is limited by their inability to capture the spatiotemporal characteristics of geospatial data. Consequently, a lightweight ensemble model was constructed by modelling spatiotemporal dependencies within a classical interpolation framework. Temporally, average correlation coefficients were introduced into a simple exponential smoothing model to automatically select the time window, ensuring that the sample data had the strongest correlation with the missing data. Spatially, Gaussian equivalent and correlation distances were introduced into an inverse distance-weighting model to assign weights to each spatial neighbour and adequately reflect changes in the spatiotemporal pattern. Finally, the temporal and spatial estimates of the missing values were aggregated into the final results with an extreme learning machine. Compared to existing models, the proposed model achieves higher imputation accuracy, lowering the mean absolute error by 10.93 to 52.48% on a road network dataset and by 23.35 to 72.18% on an air quality station dataset, and it remains robust under abrupt spatiotemporal changes.
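A stripped-down sketch of the two estimators being combined; the paper aggregates them with an extreme learning machine and selects windows and weights from correlation statistics, which are replaced here by fixed parameters and a plain average (assumptions for illustration):

```python
import numpy as np

def ses_estimate(history, alpha=0.5):
    """Simple exponential smoothing: the imputed value for the missing
    observation is the smoothed level of the series seen so far."""
    level = history[0]
    for obs in history[1:]:
        level = alpha * obs + (1 - alpha) * level
    return float(level)

def idw_estimate(distances, neighbor_values, power=2.0):
    """Inverse distance weighting over spatial neighbours of the gap."""
    w = 1.0 / np.power(np.asarray(distances, float), power)
    return float(w @ np.asarray(neighbor_values, float) / w.sum())

def impute(history, distances, neighbor_values, alpha=0.5):
    # the paper aggregates the two estimates with an extreme learning
    # machine; a plain average is used here as a stand-in (assumption)
    return 0.5 * (ses_estimate(history, alpha)
                  + idw_estimate(distances, neighbor_values))
```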

18.
Hydrologic data derived from digital elevation models (DEMs) are widely regarded as an effective product of spatial analysis in geographical information systems (GIS). However, both DEM resolution and terrain complexity affect the accuracy of hydrologic derivatives. In this study, a multi-resolution, multi-relief comparative approach was used to investigate the accuracy of hydrologic data derived from DEMs. The experiment reveals that DEM terrain representation error affects the accuracy of DEM hydrological derivatives (drainage networks, watersheds, etc.). Coarser DEM resolutions usually produce worse results, although considerable uncertainty remains in this calculation. The derivative errors are closely related to DEM vertical resolution and terrain roughness; DEM vertical resolution is particularly influential in smooth plain areas. If the mean slope is less than 4 degrees, the derived hydrologic data are usually unreliable. This result may be helpful in estimating the accuracy of hydrologic derivatives and in determining the DEM resolution appropriate to the accuracy requirement of a particular user. By applying a threshold value to select the cells of higher accumulated flow, a stream network of a specific drainage density can be extracted. Some important geomorphologic features, e.g. shallow and deep gullies, can be extracted separately by adjusting the threshold value. However, such flow accumulation-based processing cannot correctly derive streams that enter the study area from outside, because insufficient flow accumulates to express the stream channels near their entrance; errors therefore occur at the streams' entrance areas. In addition, erroneous derivatives can also occur for particular river types, e.g. perched (hanging) rivers, anastomosing rivers and braided rivers. Therefore, more work is needed to develop and refine the algorithms.
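The flow-accumulation thresholding described above can be sketched with the standard D8 scheme (a simplified illustration; production algorithms also handle pits, flats and edge inflow, which this sketch ignores):

```python
import numpy as np

def d8_flow_accumulation(dem):
    """D8 flow accumulation: each cell drains to its steepest-descent
    neighbour; accumulation counts the cells draining through each cell
    (including itself). Cells are processed from high to low elevation so
    every donor is finalized before its receiver."""
    rows, cols = dem.shape
    order = np.argsort(dem, axis=None)[::-1]
    acc = np.ones(dem.shape, dtype=float)
    for idx in order:
        i, j = divmod(int(idx), cols)
        best, bi, bj = 0.0, -1, -1
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (di or dj) and 0 <= ni < rows and 0 <= nj < cols:
                    drop = (dem[i, j] - dem[ni, nj]) / np.hypot(di, dj)
                    if drop > best:
                        best, bi, bj = drop, ni, nj
        if bi >= 0:                      # cell has a downslope receiver
            acc[bi, bj] += acc[i, j]
    return acc

def extract_streams(acc, threshold):
    # cells whose accumulated flow meets the threshold form the network;
    # raising the threshold keeps only the larger channels
    return acc >= threshold
```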

19.
Automated extraction of drainage features from DEMs is an effective alternative to the tedious manual mapping from topographic maps. The derived hydrologic characteristics include stream-channel networks, delineation of catchment boundaries, catchment area, catchment length, stream-channel long profiles and stream order, etc. Other important characteristics of river catchments, such as the stream-channel density, stream-channel bifurcation ratios, stream-channel order, number…

20.
A global estimate of the absolute oceanic general circulation from a geostrophic inversion of in situ hydrographic data is tested against and then combined with an estimate obtained from TOPEX/POSEIDON altimetric data and a geoid model computed using the JGM-3 gravity-field solution. Within the quantitative uncertainties of both the hydrographic inversion and the geoid estimate, the two estimates derived by very different methods are consistent. When the in situ inversion is combined with the altimetry/geoid scheme using a recursive inverse procedure, a new solution, fully consistent with both hydrography and altimetry, is found. There is, however, little reduction in the uncertainties of the calculated ocean circulation and its mass and heat fluxes because the best available geoid estimate remains noisy relative to the purely oceanographic inferences. The conclusion drawn from this is that the comparatively large errors present in the existing geoid models now limit the ability of satellite altimeter data to improve directly the general ocean circulation models derived from in situ measurements. Because improvements in the geoid could be realized through a dedicated spaceborne gravity recovery mission, the impact of hypothetical much better, future geoid estimates on the circulation uncertainty is also quantified, showing significant hypothetical reductions in the uncertainties of oceanic transport calculations. Full ocean general circulation models could better exploit both existing oceanographic data and future gravity-mission data, but their present use is severely limited by the inability to quantify their error budgets.
