Similar Documents
20 similar documents found.
1.
Summary. Most of the Earth's magnetic field and its secular change originate in the core. Provided the mantle can be treated as an electrical insulator, stochastic inversion enables surface observations to be analysed for the core field. A priori information about the variation of the field at the core boundary leads to very stringent conditions at the Earth's surface. The field models are identical with those derived from the method of harmonic splines (Shure, Parker & Backus) provided the a priori information is specified appropriately.
The method is applied to secular variation data from 106 magnetic observatories. Model predictions for fields at the Earth's surface have error estimates associated with them that appear realistic. For plausible choices of a priori information the error of the field at the core is unbounded, but integrals over patches of the core surface can have finite errors. The hypothesis that magnetic fields are frozen to the core fluid implies that certain integrals of the secular variation vanish. This idea is tested by computing the integrals and their standard and maximum errors. Most of the integrals are within one standard deviation of zero, but those over the large patches to the north and south of the magnetic equator are many times their standard error, because of the dominating influence of the decaying dipole. All integrals are well within their maximum error, indicating that it will be possible to construct core fields, consistent with frozen flux, that satisfy the observations.

2.
3.
In many cases of model evaluation in physical geography, the observed data to which model predictions are compared may not be error-free. This paper addresses the effect of observational errors on the mean squared error, the mean bias error and the mean absolute deviation through the derivation of a statistical framework and Monte Carlo simulation. The effect of bias in the observed values may either decrease or increase the expected values of the mean squared error and mean bias error, depending on whether model and observational biases have the same or opposite signs, respectively. Random errors in observed data tend to inflate the mean squared error and the mean absolute deviation, and also increase the variability of all the error indices considered here. The statistical framework is applied to a real example, in which sampling variability of the observed data appears to account for most of the difference between observed and predicted values. Examination of scaled differences between modelled and observed values, where the differences are divided by the estimated standard errors of the observed values, is suggested as a diagnostic tool for determining whether random observational errors are significant.
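As a minimal illustration of the framework (synthetic data and error magnitudes of our own choosing, not the paper's experiment), the following sketch shows random observational error inflating the mean squared error and mean absolute deviation, and an observational bias shifting the mean bias error:

```python
import numpy as np

rng = np.random.default_rng(42)
n, n_sim = 100, 5000
truth = rng.uniform(10, 30, n)      # hypothetical true values
model = truth + 0.5                 # model with a known bias of +0.5

def indices(pred, obs):
    d = pred - obs
    return np.mean(d**2), np.mean(d), np.mean(np.abs(d))

# Error-free observations: indices reflect model error only.
mse0, mbe0, mad0 = indices(model, truth)

# Observations contaminated with random error (sd = 1) and bias (-0.3).
stats = np.array([indices(model, truth + rng.normal(-0.3, 1.0, n))
                  for _ in range(n_sim)])
print(f"error-free:   MSE={mse0:.3f} MBE={mbe0:.3f} MAD={mad0:.3f}")
print("contaminated (mean over simulations): "
      f"MSE={stats[:, 0].mean():.3f} MBE={stats[:, 1].mean():.3f} "
      f"MAD={stats[:, 2].mean():.3f}")
```

With the model and observational biases of opposite sign, the expected MBE grows, consistent with the framework described above.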

4.
Weighted averaging is widely used for inferring environmental conditions from an observed species assemblage. However, weighted average inferences are known to be systematically biased, and linear corrections (i.e., deshrinking functions) are commonly applied to adjust for this bias. In this analysis, the magnitude of the biases in weighted average inferences (and therefore the values of the deshrinking coefficients) are shown to depend upon the range of conditions sampled in the calibration data set and the true optima and niche breadths of the species observed in the calibration data set. Since the range of conditions and the observed species can differ between the calibration data set and the new data set for which environmental conditions are inferred, the coefficients for the deshrinking function derived using the calibration data may not be applicable to inferences computed using a new data set. Thus, environmental inferences may still exhibit systematic errors even after application of the linear correction. The findings from the theoretical analysis are demonstrated using stream temperature and macroinvertebrate data collected from wadeable streams in the western United States.
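A minimal sketch of weighted-averaging inference with a linear (inverse) deshrinking correction, on synthetic Gaussian species responses; the response model, gradient range and all values are illustrative assumptions rather than the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

def wa_optima(Y, x):
    """Species optima as abundance-weighted averages of the gradient."""
    return (Y.T @ x) / Y.sum(axis=0)

def wa_infer(Y, optima):
    """Raw weighted-average inferences for each sample."""
    return (Y @ optima) / Y.sum(axis=1)

# Hypothetical calibration set: 80 samples, 30 species, Gaussian responses.
n, m = 80, 30
x_cal = rng.uniform(5, 25, n)                        # e.g. stream temperature
u, t = rng.uniform(5, 25, m), rng.uniform(1, 4, m)   # true optima, tolerances
Y_cal = np.exp(-0.5 * ((x_cal[:, None] - u) / t) ** 2)
Y_cal = np.clip(Y_cal + rng.normal(0, 0.01, (n, m)), 0, None)

opt = wa_optima(Y_cal, x_cal)
raw = wa_infer(Y_cal, opt)
# Inverse deshrinking: regress observed on raw inferences, then predict.
b, a = np.polyfit(raw, x_cal, 1)
deshrunk = a + b * raw
print("raw bias:", np.mean(raw - x_cal).round(3),
      " deshrunk bias:", np.mean(deshrunk - x_cal).round(3))
```

The paper's point is that the fitted coefficients a and b are tied to the calibration set's gradient range and species pool, so applying them to a new data set need not remove the bias there.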

5.
Abstract

Results of a simulation study of map-image rectification accuracy are reported. Sample size, spatial distribution pattern and measurement errors in a set of ground control points, and the computational algorithm employed to derive the estimate of the parameters of a least-squares bivariate map-image transformation function, are varied in order to assess the sensitivity of the procedure. Standard errors and confidence limits are derived for each of 72 cases, and it is shown that the effects of all four factors are significant. Standard errors fall rapidly as sample size increases, and rise as the control point pattern becomes more linear. Measurement error is shown to have a significant effect on both accuracy and precision. The Gram-Schmidt orthogonal polynomial algorithm performs consistently better than the Gauss-Jordan matrix inversion procedure in all circumstances.
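The core computation, a first-order least-squares map-to-image transformation estimated from control points, can be sketched as follows; the coefficients and error levels are invented, and numpy's SVD-based lstsq stands in for the Gram-Schmidt and Gauss-Jordan algorithms the study actually compares:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ground control points: map coordinates and corresponding
# image coordinates related by a first-order (affine) transformation.
n = 20
map_xy = rng.uniform(0, 1000, (n, 2))
A_true = np.array([[0.96, 0.04], [-0.03, 1.02]])
img_xy = map_xy @ A_true.T + np.array([12.0, -7.5])
img_xy += rng.normal(0, 0.5, img_xy.shape)        # GCP measurement error

# Design matrix for x' = a0 + a1*x + a2*y (and likewise for y').
G = np.column_stack([np.ones(n), map_xy])
coef, *_ = np.linalg.lstsq(G, img_xy, rcond=None)

resid = img_xy - G @ coef
dof = n - 3                                       # 3 parameters per axis
se = np.sqrt((resid ** 2).sum(axis=0) / dof)      # standard error per axis
print("standard errors (x', y'):", se.round(3))
```

Rerunning with fewer or more collinear control points reproduces the qualitative sensitivity the simulation study reports.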

6.
Secular variation 'master' curves are built up using geomagnetic historical observations or archaeomagnetic data from a limited area, and their use is usually restricted to regions of around 1000 km radius. Relocation of data within this distance is a common practice to enable comparison of data, although the errors introduced by this process are rarely taken into account. A detailed analysis of the distribution of errors from relocating geomagnetic data has been carried out using three popular sets of geomagnetic models (IGRF-9, GUFM and CALS7K-2). This study improves on previous error analyses of relocating geomagnetic directions and extends the analysis to geomagnetic intensities. Maximum errors correlate with the non-dipole to dipole field ratio. Archaeomagnetists can use this analysis to evaluate the error introduced by relocating data.
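A sketch of the relocation method whose error this analysis targets, 'conversion via the pole': a direction observed at a reference site is mapped to a virtual geomagnetic pole and back to the target site under the geocentric dipole assumption. The sites and direction below are purely illustrative:

```python
import numpy as np

def vgp(lat_s, lon_s, dec, inc):
    """Virtual geomagnetic pole from a direction (geocentric dipole)."""
    ls, D, I = map(np.radians, (lat_s, dec, inc))
    p = np.arctan2(2.0, np.tan(I))                    # magnetic colatitude
    lp = np.arcsin(np.sin(ls) * np.cos(p) +
                   np.cos(ls) * np.sin(p) * np.cos(D))
    beta = np.degrees(np.arcsin(np.sin(p) * np.sin(D) / np.cos(lp)))
    if np.cos(p) >= np.sin(ls) * np.sin(lp):
        lon_p = lon_s + beta
    else:
        lon_p = lon_s + 180.0 - beta
    return np.degrees(lp), lon_p % 360.0

def direction_at(lat_p, lon_p, lat_s, lon_s):
    """Dipole-field direction at a site given the pole position."""
    lp, ls = np.radians(lat_p), np.radians(lat_s)
    dlon = np.radians(lon_p - lon_s)
    cp = np.sin(ls) * np.sin(lp) + np.cos(ls) * np.cos(lp) * np.cos(dlon)
    p = np.arccos(np.clip(cp, -1, 1))                 # pole-site distance
    inc = np.degrees(np.arctan2(2.0 * np.cos(p), np.sin(p)))
    cosD = (np.sin(lp) - np.sin(ls) * np.cos(p)) / (np.cos(ls) * np.sin(p))
    dec = np.degrees(np.arccos(np.clip(cosD, -1, 1)))
    if np.sin(dlon) < 0:                              # pole west of site
        dec = -dec
    return dec % 360.0, inc

# Relocate a direction observed near Paris to Madrid (illustrative values).
pole = vgp(48.9, 2.3, dec=5.0, inc=65.0)
print("relocated (D, I):", direction_at(*pole, 40.4, -3.7))
```

The relocation error studied in the paper is the difference between such a dipole-relocated direction and the actual field at the target site, which is why it scales with the non-dipole to dipole field ratio.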

7.
We propose a method to evaluate the existence of spatial variability in the covariance structure in a geographically weighted principal components analysis (GWPCA). The method, which extends to locally weighted principal components analysis, is based on performing a statistical hypothesis test using the eigenvectors of the PCA scores covariance matrix. The application of the method to simulated data shows that it has greater statistical power than the current statistical test that uses the eigenvalues of the raw data covariance matrix. Finally, the method was applied to a real problem whose objective is to find spatial distribution patterns in a set of soil pollutants. The results show the utility of GWPCA versus PCA.
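A minimal GWPCA sketch: at each location, observations are weighted with a Gaussian distance kernel and the local weighted covariance is eigendecomposed. The kernel, bandwidth and synthetic data are assumptions, and the paper's eigenvector-based hypothesis test is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(7)

def gwpca(coords, X, bandwidth):
    """Eigen-decomposition of a geographically weighted covariance matrix
    at each observation location (Gaussian kernel, fixed bandwidth)."""
    results = []
    for p in coords:
        d2 = ((coords - p) ** 2).sum(axis=1)
        w = np.exp(-0.5 * d2 / bandwidth ** 2)
        w /= w.sum()
        mu = w @ X                                   # local weighted mean
        Xc = X - mu
        C = (Xc * w[:, None]).T @ Xc                 # local weighted covariance
        vals, vecs = np.linalg.eigh(C)
        results.append((vals[::-1], vecs[:, ::-1]))  # descending order
    return results

# Hypothetical soil-pollutant data: 3 variables at 200 sample points, with
# one component whose variance drifts across space.
coords = rng.uniform(0, 10, (200, 2))
X = rng.normal(size=(200, 3))
X[:, 0] *= 1 + coords[:, 0] / 5                      # spatially varying variance
local = gwpca(coords, X, bandwidth=2.0)
print("leading local eigenvalue at first point:",
      round(float(local[0][0][0]), 3))
```

Mapping the local leading eigenvalues or eigenvector loadings across the study area is what reveals the spatial patterns GWPCA is designed to detect.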

8.
Airborne LiDAR (light detection and ranging) data are now commonly regarded as the most accurate source of elevation data for medium-scale topographical modelling applications. However, quoted LiDAR elevation error may not necessarily represent the actual errors occurring across all surfaces, potentially impacting the reliability of derived predictions in Geographical Information Systems (GIS). The extent to which LiDAR elevation error varies in association with land cover, vegetation class and LiDAR data source is quantified relative to dual-frequency global positioning system survey data captured in a 400-ha area in Ireland, where four separate classes of LiDAR point data overlap. Quoted elevation errors are found to correspond closely with the minimum requirement recommended by the American Society of Photogrammetry and Remote Sensing for the definition of 95% error in urban areas only. Global elevation errors are found to be up to 5 times the quoted error, and errors within vegetation areas are found to be even larger, with errors in individual vegetation classes reaching up to 15 times the quoted error. Furthermore, a strong skew is noted in vegetated areas within all the LiDAR data sets tested, pushing errors in some cases to more than 25 times the quoted error. The skew observed suggests that an assumption of a normal error distribution is inappropriate in vegetated areas. The physical parameters that were found to affect elevation error most fundamentally were canopy depth, canopy density and granularity. Other factors observed to affect the degree to which actual errors deviate from quoted error included the primary use for which the data were acquired and the processing applied by data suppliers to meet these requirements.
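A sketch of the kind of per-class error summary involved, computing RMSE, the empirical 95% absolute error and skew of LiDAR-minus-GPS elevation differences; the class names, error magnitudes and the quoted error value are invented for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical checkpoint residuals: LiDAR minus GPS elevations (m) by class.
diffs = {
    "urban":  rng.normal(0.00, 0.15, 400),
    "grass":  rng.normal(0.05, 0.30, 300),
    "forest": stats.skewnorm.rvs(8, loc=0.10, scale=0.60, size=300,
                                 random_state=rng),  # skewed: canopy returns
}
quoted_95 = 0.30   # assumed vendor-quoted 95% error (m)

for cls, d in diffs.items():
    rmse = np.sqrt(np.mean(d ** 2))
    p95 = np.percentile(np.abs(d), 95)     # empirical 95% absolute error
    print(f"{cls:7s} RMSE={rmse:.2f} m  95%={p95:.2f} m "
          f"({p95 / quoted_95:.1f}x quoted)  skew={stats.skew(d):+.2f}")
```

Comparing empirical percentiles rather than assuming normality is the point: a strongly skewed class can exceed the quoted 95% error many times over even when its RMSE looks moderate.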

9.
Spatially coincident land-cover information frequently varies due to technological and political differences. This is especially problematic for time-series analyses. We present an approach that uses expert expressions of how the semantics of different datasets relate in order to integrate time-series land-cover information where the classification classes have fundamentally changed. We use land-cover mapping in the UK (LCMGB and LCM2000) as example data sets because of the extensive object-based meta-data in the LCM2000. Inconsistencies between the two datasets can arise from random, gross and systematic error, and from actual change in land cover. Locales of possible land-cover change are inferred by comparing characterizations derived from the semantic relations and meta-data. Field visits showed errors of omission to be 21% and errors of commission to be 28%, despite the accuracy limitations of the land-cover information when compared with the field survey component of the Countryside Survey 2000.

10.
Summary. In palaeomagnetic studies the analysis of multicomponent magnetizations has evolved from the eye-ball, orthogonal plot, and vector difference methods to more elaborate computer-based methods such as principal component analysis (PCA), linearity spectrum analysis (LSA), and the recent package called LINEFIND. The errors involved in estimating a particular direction in a multicomponent system from a single specimen are fundamental to PCA, LSA, and LINEFIND, yet these errors are not used in estimating an overall direction from a number of observations of a particular component (other than in some acceptance or rejection criterion). The distribution of errors relates very simply to a Fisher distribution, and so these errors may be included fairly naturally in the overall analysis. In the absence of a rigorous theory to cover all situations, we consider here approximate methods for the use of these errors in estimating overall directions and cones of confidence. Some examples are presented to demonstrate the application of these methods.
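For context, a sketch of the baseline Fisher statistics (mean direction, precision parameter k and α95) into which such per-specimen errors would be folded; the specimen directions are synthetic, and the paper's weighting by individual errors is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(19)

def fisher_mean(decs, incs):
    """Fisher mean direction with precision k and 95% cone of confidence."""
    d, i = np.radians(decs), np.radians(incs)
    xyz = np.column_stack([np.cos(i) * np.cos(d),
                           np.cos(i) * np.sin(d),
                           np.sin(i)])
    r = xyz.sum(axis=0)
    N, R = len(decs), np.linalg.norm(r)
    dec = np.degrees(np.arctan2(r[1], r[0])) % 360
    inc = np.degrees(np.arcsin(r[2] / R))
    k = (N - 1) / (N - R)                              # precision parameter
    a95 = np.degrees(np.arccos(
        1 - (N - R) / R * (20 ** (1 / (N - 1)) - 1)))  # p = 0.05
    return dec, inc, k, a95

# Hypothetical component directions isolated from ten specimens (e.g. by PCA).
decs = 15 + rng.normal(0, 4, 10)
incs = 55 + rng.normal(0, 4, 10)
print("D=%.1f  I=%.1f  k=%.0f  a95=%.1f" % fisher_mean(decs, incs))
```

The paper's proposal amounts to letting each specimen's own directional error, rather than only the scatter between specimens, inform k and the cone of confidence.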

11.
Influence of survey strategy and interpolation model on DEM quality
Accurate characterisation of morphology is critical to many studies in the field of geomorphology, particularly those dealing with changes over time. Digital elevation models (DEMs) are commonly used to represent morphology in three dimensions. The quality of the DEM is largely a function of the accuracy of individual survey points, field survey strategy, and the method of interpolation. Recommendations concerning field survey strategy and appropriate methods of interpolation are currently lacking. Furthermore, the majority of studies to date consider error to be uniform across a surface. This study quantifies survey strategy and interpolation error for a gravel bar on the River Nent, Blagill, Cumbria, UK. Five sampling strategies were compared: (i) cross section; (ii) bar outline only; (iii) bar and chute outline; (iv) bar and chute outline with spot heights; and (v) aerial LiDAR equivalent, derived from degraded terrestrial laser scan (TLS) data. Digital elevation models were then produced using five different common interpolation algorithms. Each resultant DEM was differenced against a terrestrial laser scan of the gravel bar surface in order to define the spatial distribution of vertical and volumetric error. Overall, triangulation with linear interpolation (TIN) or point kriging appeared to provide the best interpolators for the bar surface. The lowest error on average was found for the simulated aerial LiDAR survey strategy, regardless of interpolation technique. However, comparably low errors were also found for the bar-chute-spot sampling strategy when TINs or point kriging was used as the interpolator. The magnitude of the errors between survey strategies exceeded those found between interpolation techniques for a given survey strategy. Strong relationships between local surface topographic variation (defined as the standard deviation of vertical elevations in a 0.2-m diameter moving window) and DEM errors were also found, with much greater errors found at slope breaks such as bank edges. A series of curves is presented that demonstrates these relationships for each interpolation and survey strategy. The simulated aerial LiDAR data set displayed the lowest errors across the flatter surfaces; however, sharp slope breaks are better modelled by the morphologically based survey strategy. The curves presented have general application to spatially distributed data of river beds and may be applied to standard deviation grids to predict spatial error within a surface, depending upon sampling strategy and interpolation algorithm.
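A reduced sketch of the survey-strategy/interpolator comparison: a sparse sample of a known reference surface is interpolated with several algorithms and differenced against the reference. scipy's 'linear' griddata is Delaunay-based and so stands in for TIN interpolation; kriging and the actual field data are omitted, and the surface is invented:

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(5)

# Hypothetical reference surface standing in for the TLS scan.
def surface(x, y):
    return 0.5 * np.sin(x) + 0.3 * np.cos(2 * y) + 0.1 * x * y

gx, gy = np.meshgrid(np.linspace(0, 3, 60), np.linspace(0, 3, 60))
z_ref = surface(gx, gy)

# A sparse survey (e.g. outline plus spot heights) with small survey error.
pts = rng.uniform(0, 3, (150, 2))
z_obs = surface(pts[:, 0], pts[:, 1]) + rng.normal(0, 0.02, 150)

for method in ("linear", "cubic", "nearest"):   # 'linear' ~ TIN interpolation
    z_hat = griddata(pts, z_obs, (gx, gy), method=method)
    err = z_hat - z_ref
    ok = ~np.isnan(err)                          # hull edges are undefined
    print(f"{method:8s} mean |error| = {np.nanmean(np.abs(err)):.4f}, "
          f"max = {np.nanmax(np.abs(err)):.4f} over {ok.sum()} cells")
```

Mapping err rather than summarizing it reproduces the paper's central observation: error is far from uniform and concentrates at slope breaks.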

12.
Concentration estimates of components present in a sample mixture can be obtained using matrix mathematics. In the past, the condition number of the calibration matrix has been used to give an amplification factor by which uncertainties in data can work through to errors in the concentration estimates. This paper explores an additional interpretation of condition numbers with regard to significant figures and rounding errors. A procedure is suggested which will always give the most accurate concentration estimates provided the calibration matrix is not too ill-conditioned. Condition numbers have also been used by analytical chemists to discuss the error bounds for concentration estimates. Unfortunately, only one representative error bound can be approximated for all the components. This paper will show how to compute bounds for individual concentration estimates obtained as solutions to a system of m equations and n unknowns. The procedure is appropriate when calibration data and sample responses are inaccurate.
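A sketch of both uses of the calibration matrix discussed here, assuming a hypothetical 4-wavelength, 3-component calibration: the condition number as a global amplification factor, and individual error bounds obtained from the rows of the pseudoinverse:

```python
import numpy as np

# Hypothetical calibration matrix: molar absorptivities of 3 components
# at 4 wavelengths (rows: wavelengths, columns: components).
K = np.array([[0.90, 0.10, 0.30],
              [0.40, 0.80, 0.20],
              [0.10, 0.30, 0.70],
              [0.05, 0.60, 0.40]])
c_true = np.array([1.0, 0.5, 2.0])
r = K @ c_true                                  # noise-free sample responses

print("condition number:", np.linalg.cond(K).round(2))

# Least-squares concentration estimate from the (m x n) system.
c_hat, *_ = np.linalg.lstsq(K, r, rcond=None)

# Individual error bounds: |dc_i| <= sum_j |K+_{ij}| * |dr_j|,
# here with a uniform response uncertainty of 0.01.
Kp = np.linalg.pinv(K)
bounds = np.abs(Kp) @ np.full(len(r), 0.01)
for i, (c, b) in enumerate(zip(c_hat, bounds)):
    print(f"component {i}: {c:.3f} ± {b:.4f}")
```

Unlike the single condition-number bound, the per-row pseudoinverse bounds show that some components can be far better determined than others from the same responses.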

13.
Summary. We investigate the effects of various sources of error on the estimation of the seismic moment tensor using a linear least squares inversion on surface wave complex spectra. A series of numerical experiments involving synthetic data subjected to controlled error contamination are used to demonstrate the effects. Random errors are seen to enter additively or multiplicatively into the complex spectra. We show that random additive errors due to background recording noise do not pose difficulties for recovering reliable estimates of the moment tensor. On the other hand, multiplicative errors from a variety of sources, such as focusing, multipathing, or epicentre mislocation, may lead to significant overestimation or underestimation of the tensor elements and in general cause the estimates to be less reliable.
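A generic numerical illustration of the additive-versus-multiplicative contrast (a real-valued linear system stands in for the complex surface wave spectra; sizes and noise levels are arbitrary): additive noise leaves the least-squares estimate essentially unbiased, while multiplicative noise with mean different from 1 scales it systematically:

```python
import numpy as np

rng = np.random.default_rng(11)

# Generic linear forward problem d = G m, standing in for the relation
# between moment-tensor elements and surface wave spectra (G, m synthetic).
n_data, n_par = 200, 6
G = rng.normal(size=(n_data, n_par))
m_true = rng.normal(size=n_par)
d = G @ m_true

def invert(d_obs):
    m, *_ = np.linalg.lstsq(G, d_obs, rcond=None)
    return m

trials = 2000
add = np.array([invert(d + rng.normal(0.0, 0.2, n_data))
                for _ in range(trials)])
mul = np.array([invert(d * rng.lognormal(0.0, 0.5, n_data))
                for _ in range(trials)])

# Additive noise averages out; multiplicative lognormal noise has mean
# exp(0.5 * 0.5**2) != 1, so recovered elements are scaled systematically.
print("max |bias|, additive:      ", np.abs(add.mean(0) - m_true).max().round(3))
print("max |bias|, multiplicative:", np.abs(mul.mean(0) - m_true).max().round(3))
```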

14.
Rank estimation by canonical correlation analysis in multivariate statistics has been proposed as an alternative approach for estimating the number of components in a multicomponent mixture. A methodological turning point of this new approach is that it focuses on the difference in structure rather than in magnitude in characterizing the difference between the signal and the noise. This structural difference is quantified through the analysis of canonical correlation, which is a well-established data reduction technique in multivariate statistics. Unfortunately, there is a price to be paid for exploiting this structural difference: at least two replicate data matrices are needed to carry out the analysis. In this paper we continue to explore the potential and to extend the scope of the canonical correlation technique. In particular, we propose a bootstrap resampling method which makes it possible to perform the canonical correlation analysis on a single data matrix. Since a robust estimator is introduced to make inference about the rank, the procedure may be applied to a wide range of data without any restriction on the noise distribution. Results from real as well as simulated mixture samples indicate that, when used in conjunction with this resampling method, canonical correlation analysis of a single data matrix is as efficient as analysis of replicate data matrices.
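A sketch of the replicate-matrix core of the method on synthetic mixture spectra: only the shared signal subspace yields large canonical correlations between two replicates, so counting large correlations estimates the rank. The paper's bootstrap single-matrix extension and robust estimator are not reproduced, and the 0.8 threshold is illustrative:

```python
import numpy as np

rng = np.random.default_rng(13)

def canonical_correlations(X, Y):
    """Canonical correlations as singular values of Qx' Qy (QR-based CCA)."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)

# Synthetic mixture data: 3 components, 200 samples, 20 channels.
n, ch, rank = 200, 20, 3
C = rng.uniform(0, 1, (n, rank))          # concentrations
S = rng.normal(size=(rank, ch))           # pure-component responses
signal = C @ S

# Two replicates share the signal but not the noise, so only the signal
# subspace produces canonical correlations near 1.
X = signal + rng.normal(0, 0.5, (n, ch))
Y = signal + rng.normal(0, 0.5, (n, ch))
cc = canonical_correlations(X, Y)
print("leading canonical correlations:", cc[:6].round(2))
print("estimated rank:", int((cc > 0.8).sum()))   # threshold illustrative
```

Note that the structure-based criterion needs no assumption about the noise magnitude, which is exactly the feature the abstract emphasizes.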

15.
Ambient noise tomography is a rapidly emerging field of seismological research. This paper presents the current status of ambient noise data processing as it has developed over the past several years and is intended to explain and justify this development through salient examples. The ambient noise data processing procedure divides into four principal phases: (1) single station data preparation, (2) cross-correlation and temporal stacking, (3) measurement of dispersion curves (performed with frequency–time analysis for both group and phase speeds) and (4) quality control, including error analysis and selection of the acceptable measurements. The procedures that are described herein have been designed not only to deliver reliable measurements, but to be flexible, applicable to a wide variety of observational settings, as well as being fully automated. For an automated data processing procedure, data quality control measures are particularly important to identify and reject bad measurements and compute quality assurance statistics for the accepted measurements. The principal metric on which to base a judgment of quality is stability, the robustness of the measurement to perturbations in the conditions under which it is obtained. Temporal repeatability, in particular, is a significant indicator of reliability and is elevated to a high position in our assessment, as we equate seasonal repeatability with measurement uncertainty. Proxy curves relating observed signal-to-noise ratios to average measurement uncertainties show promise to provide useful expected measurement error estimates in the absence of the long time-series needed for temporal subsetting.
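Phases (1) and (2) can be sketched for a synthetic two-station example: band-pass filtering, one-bit temporal normalization and spectral whitening, followed by cross-correlation and stacking over windows. The sampling rate, band and planted delay are arbitrary choices, not values from the paper:

```python
import numpy as np
from scipy.signal import butter, filtfilt, fftconvolve

rng = np.random.default_rng(17)
fs, nsec, nwin = 20.0, 600, 10          # 20 Hz, 600-s windows, 10 windows
n = int(nsec * fs)
lag = 40                                 # planted inter-station delay (samples)

b, a = butter(4, [0.02, 0.2], btype="band", fs=fs)

def preprocess(tr):
    tr = filtfilt(b, a, tr)              # (1) band-pass filter
    tr = np.sign(tr)                     # one-bit temporal normalization
    spec = np.fft.rfft(tr)
    spec /= np.abs(spec) + 1e-10         # spectral whitening
    return np.fft.irfft(spec, len(tr))

stack = np.zeros(2 * n - 1)
for _ in range(nwin):                    # (2) cross-correlate and stack
    src = rng.normal(size=n + lag)       # common ambient wavefield
    s1 = src[lag:] + 0.5 * rng.normal(size=n)
    s2 = src[:-lag] + 0.5 * rng.normal(size=n)
    stack += fftconvolve(preprocess(s1), preprocess(s2)[::-1])

delay = (n - 1) - int(stack.argmax())
print(f"recovered delay: {delay} samples (planted: {lag})")
```

Repeating the stack over disjoint subsets of windows and comparing the recovered delays is the simplest version of the temporal-repeatability check the paper elevates to its main quality metric.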

16.
The calculation of surface area is meaningful for a variety of space-filling phenomena, e.g., the packing of plants or animals within an area of land. With Digital Elevation Model (DEM) data we can calculate the surface area by using a continuous surface model, such as the Triangulated Irregular Network (TIN). However, as with the triangle-based surface area discussed in this paper, the surface area is generally biased because it is a nonlinear mapping of the DEM data, which contain measurement errors. To reduce the bias in the surface area, we propose a second-order bias correction by applying nonlinear error propagation to the triangle-based surface area. This process reveals that the random errors in the DEM data result in a bias in the triangle-based surface area while the systematic errors in the DEM data can be reduced by using the height differences. The bias is theoretically given by a probability integral which can be approximated by numerical approaches including the numerical integral and the Monte Carlo method; but these approaches need a theoretical distribution assumption about the DEM measurement errors, and have a very high computational cost. In most cases, we only have variance information on the measurement errors; thus, a bias estimation based on nonlinear error propagation is proposed. Based on the second-order bias estimation proposed, the variance of the surface area can be improved immediately by removing the bias from the original variance estimation. The main results are verified by the Monte Carlo method and by the numerical integral. They show that an unbiased surface area can be obtained by removing the proposed bias estimation from the triangle-based surface area originally calculated from the DEM data.
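A sketch of the triangle-based area and of the upward bias induced by random DEM errors, checked by Monte Carlo on a synthetic surface (the paper's analytical second-order bias correction itself is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(23)

def tin_area(Z, d=1.0):
    """Triangle-based surface area of a gridded DEM with cell size d."""
    z00, z10 = Z[:-1, :-1], Z[1:, :-1]
    z01, z11 = Z[:-1, 1:], Z[1:, 1:]
    # Each cell is split into two right triangles along one diagonal;
    # each triangle's area is 0.5 * d * sqrt(dz1^2 + dz2^2 + d^2).
    a1 = 0.5 * d * np.sqrt((z10 - z00) ** 2 + (z01 - z00) ** 2 + d ** 2)
    a2 = 0.5 * d * np.sqrt((z10 - z11) ** 2 + (z01 - z11) ** 2 + d ** 2)
    return (a1 + a2).sum()

# Hypothetical true terrain and its error-free area.
x, y = np.meshgrid(np.arange(50), np.arange(50))
Z_true = 2.0 * np.sin(0.2 * x) * np.cos(0.2 * y)
A_true = tin_area(Z_true)

# Monte Carlo: random DEM errors (sd = 0.3) bias the computed area upward.
areas = [tin_area(Z_true + rng.normal(0, 0.3, Z_true.shape))
         for _ in range(500)]
print(f"true area {A_true:.1f}, mean noisy area {np.mean(areas):.1f} "
      f"(bias {np.mean(areas) - A_true:+.1f})")
```

The bias is positive because the area is a convex function of the height differences, so zero-mean noise steepens triangles on average; the paper's contribution is estimating this bias from variance information alone.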

17.
Summary. The 1964-70 Florida Current data of Niiler & Richardson are examined for linear correlation with observed sea-level and weather, because their data provide an independent test of similar correlations reported in Maul et al. Seventy-five values of directly measured volume transport and 67 values of surface speed from Niiler & Richardson's unevenly spaced data are correlated with available daily mean values of Miami Beach sea-level, Bimini sea-level, Bimini minus Miami Beach sea-level difference, and Miami weather (barometric pressure, air temperature, and north and east components of wind speed). The statistical frequency distributions of transport and of surface speed suggest variability that is not dominated by annual and/or semiannual cycles. Volume transport is most highly correlated with the Bimini minus Miami Beach sea-level difference, and surface speed is most highly correlated with inverted Miami Beach sea-level. Including certain weather variables results in statistically significant improvements in linear multivariate modelling of transport and surface speed from sea-level; the standard errors are ±2.6 sverdrups and ±10 cm s−1, respectively. Linear correlation coefficients and multivariate regression parameters from Niiler & Richardson's data are in agreement with those from Maul et al., except that the standard error of estimating volume transport from sea-level is smaller in Maul et al., apparently because of smaller errors in the direct measurements.

18.
Geoscientists have undertaken mapping of the Earth's crustal strain (or stress) fields using a great variety of field data. The output can be represented by a 3-D second-rank symmetric random strain tensor. The random principal strains and rotations of the random tensor are frequently computed. The accuracy is calculated using a first-order approximation. The distributional aspects of the random principal strains and rotations have received almost no attention in the Earth Sciences. A first-order approximation of accuracy may not be sufficient if the signal-to-noise ratio is small, as is often the case for geodetically derived random strain tensors. Therefore, the purpose of this paper is to investigate the distribution and estimation problems of the general 3-D second-rank tensor equation GΛG^T = T, where T is a given 3-D second-rank symmetric random tensor, Λ a diagonal (3 × 3) random eigenvalue matrix, and G a (3 × 3) random orientation matrix, which is also orthogonal. Λ and G are to be estimated (or solved for) from T. If some eigenvalues coincide, additional conditions are imposed on the eigenvectors so that they can be chosen uniquely. The joint probability density function (pdf) of the random eigenvalues and rotations will be worked out, given a joint pdf of the elements of the random tensor T. Because the rotations are of special interest in the Earth Sciences, we shall also derive the joint marginal pdf of the random rotations. The geometry of eigenspectra will be studied. The biases of the random eigenvalues and rotations, which have been neglected in the past, will be derived; they can be crucial in interpreting the pattern of a derived strain field when applied to a real Earth Science problem. The variance-covariance matrices will be computed using a second-order approximation.
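The eigenvalue bias the paper derives analytically is easy to exhibit by Monte Carlo: symmetric noise added to a tensor spreads the estimated eigenvalues outward, more strongly as the signal-to-noise ratio falls. The tensor values and noise levels below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(29)

# Hypothetical true 3-D strain tensor with distinct principal strains.
lam_true = np.array([3.0, 1.0, -2.0])
T_true = np.diag(lam_true)               # principal axes = coordinate axes

def noisy_eigvals(sigma):
    """Eigenvalues of T contaminated with symmetric Gaussian noise."""
    E = rng.normal(0, sigma, (3, 3))
    N = (E + E.T) / 2                    # keep the tensor symmetric
    return np.linalg.eigvalsh(T_true + N)[::-1]   # descending order

for sigma in (0.1, 0.5, 1.0):            # decreasing signal-to-noise ratio
    sims = np.array([noisy_eigvals(sigma) for _ in range(5000)])
    bias = sims.mean(axis=0) - lam_true
    print(f"sigma={sigma:.1f}  eigenvalue bias: {np.round(bias, 3)}")
```

The largest eigenvalue is biased upward and the smallest downward, which is exactly the effect a first-order accuracy analysis misses at low signal-to-noise ratio.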

19.
In this paper, a least‐squares based cadastral parcel area adjustment in geographic information systems (GIS) is developed based on (1) both the areas and coordinates being treated as observations with errors; and (2) scale parameters being introduced to take the systematic effect of cadastral map digitization into account. The area condition equation for cadastral parcels, with consideration of scale parameters and geometric constraints, is first constructed. The effects of the scale error on area adjustment results are then derived, and statistical hypothesis testing is presented to determine the significance of the scale error. Afterwards, Helmert's variance component estimation based on least‐squares adjustment using the condition equation with additional parameters is proposed to determine the weights of the coordinate and area measurements of the parcel. Practical tests are conducted to illustrate the implementation of the proposed methods. Four schemes for resolving the inconsistencies between the registered areas and the digitized areas of the parcels are studied. The analysis of the results demonstrates that in the case of significant systematic errors in cadastral map digitization, the accuracies of the adjusted coordinates and areas are improved by introducing scale parameters to reduce the systematic error influence in the parcel area adjustment. Meanwhile, Helmert's variance component estimation determines more accurate weights for the digitized coordinates and parcel areas, and the least‐squares adjustment resolves the inconsistencies between the registered areas and the digitized areas of the parcels.
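A stripped-down sketch of the area condition adjustment for a single parcel: digitized vertex coordinates are treated as equally weighted observations and corrected by least squares so the polygon area meets the registered area (scale parameters, geometric constraints and Helmert variance components are omitted; the coordinates are invented):

```python
import numpy as np

def shoelace(xy):
    """Signed polygon area from vertex coordinates (counter-clockwise)."""
    x, y = xy[:, 0], xy[:, 1]
    return 0.5 * np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y)

def adjust_parcel(xy, area_reg):
    """One least-squares iteration of the area condition adjustment:
    shift the digitized vertices minimally (unit weights) so the polygon
    area matches the registered area."""
    x, y = xy[:, 0], xy[:, 1]
    # Gradient of the shoelace area with respect to each coordinate.
    dAdx = 0.5 * (np.roll(y, -1) - np.roll(y, 1))
    dAdy = 0.5 * (np.roll(x, 1) - np.roll(x, -1))
    B = np.concatenate([dAdx, dAdy])           # single condition row
    w = area_reg - shoelace(xy)                # misclosure
    v = B * (w / (B @ B))                      # v = B'(BB')^-1 w
    return xy + np.column_stack([v[:len(x)], v[len(x):]])

# Hypothetical digitized parcel whose area disagrees with the register.
xy = np.array([[0.0, 0.0], [30.2, 0.1], [30.1, 19.8], [-0.1, 20.1]])
print("digitized area:", round(shoelace(xy), 2))
adj = adjust_parcel(xy, area_reg=600.0)
print("adjusted area: ", round(shoelace(adj), 2))
```

The paper's full method augments this condition with scale parameters for the systematic digitization error and estimates the coordinate/area weight ratio by Helmert variance component estimation.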

20.
Abstract

This study examines the propagation of thematic error through GIS overlay operations. Existing error propagation models for these operations are shown to yield results that are inconsistent with actual levels of propagation error. An alternate model is described that yields more consistent results. This model is based on the frequency of errors of omission and commission in input data. Model output can be used to compute a variety of error indices for data derived from different overlay operations.
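A minimal frequency-based illustration with synthetic boolean layers: the realized error of an AND overlay, simulated from per-layer omission and commission frequencies, differs from the prediction of a simple product-of-accuracies propagation model (all rates are invented):

```python
import numpy as np

rng = np.random.default_rng(31)
n = 200_000

# Hypothetical per-cell truth and classified versions of two boolean layers,
# each with given omission and commission error frequencies.
def classify(truth, p_om, p_com):
    miss = rng.random(n) < p_om              # true cells omitted
    extra = rng.random(n) < p_com            # false cells committed
    return np.where(truth, ~miss, extra)

t1, t2 = rng.random(n) < 0.4, rng.random(n) < 0.5
c1 = classify(t1, p_om=0.10, p_com=0.05)
c2 = classify(t2, p_om=0.20, p_com=0.08)

overlay_true = t1 & t2
overlay_cls = c1 & c2
err = np.mean(overlay_cls != overlay_true)

# Simple model: overlay accuracy as the product of per-layer accuracies.
acc1, acc2 = np.mean(c1 == t1), np.mean(c2 == t2)
print(f"actual overlay error {err:.3f}; "
      f"product-of-accuracies model predicts {1 - acc1 * acc2:.3f}")
```

The mismatch arises because an error in one layer only propagates where the other layer's value matters to the overlay, which is why a frequency-based treatment of omission and commission gives more consistent results.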
