Similar Documents
20 similar documents found (search time: 656 ms)
1.
Marshall and Mardia (1985) and Kitanidis (1985) have suggested using minimum norm quadratic estimation as a method to estimate parameters of a generalized covariance function. Unfortunately, this method is difficult to use with large data sets, as it requires inversion of an n × n matrix, where n is the number of observations. These authors suggest replacing the matrix to be inverted by the identity matrix, which eliminates the computational burden, although with a considerable loss of efficiency. As an alternative, the data set can be broken into subsets, and minimum norm quadratic estimates of parameters of the generalized covariance function can be obtained within each subset. These local estimates can be averaged to obtain global estimates. This procedure also avoids large matrix inversions, but with less loss in efficiency.
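The subset-averaging idea is easy to sketch. The toy below replaces the full minimum norm quadratic machinery with the simplest possible covariance parameter (a plain variance) on synthetic data, purely to illustrate how local estimates from subsets are averaged into a global one; the data, subset count, and parameter choice are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(loc=5.0, scale=2.0, size=1200)   # stand-in for detrended observations

# Full-data estimate of the covariance parameter (here simply the variance):
global_est = z.var(ddof=1)

# Subset approach: estimate within each block, then average the local estimates.
# In the real MINQUE setting this is what avoids inverting one large n x n matrix.
subsets = np.array_split(z, 12)
local_ests = [s.var(ddof=1) for s in subsets]
avg_est = float(np.mean(local_ests))
```

With independent data the averaged local estimates track the global estimate closely; the efficiency loss the abstract mentions comes from ignoring cross-subset information.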

2.
Generalized covariance functions in estimation (Total citations: 3; self-citations: 0; other citations: 3)
I discuss the role of generalized covariance functions in best linear unbiased estimation and methods for their selection. It is shown that the experimental variogram (or covariance function) of the detrended data can be used to obtain a preliminary estimate of the generalized covariance function without iterations, and I discuss the advantages of other parameter estimation methods.

3.
The parameters of covariance functions (or variograms) of regionalized variables must be determined before linear unbiased estimation can be applied. This work examines the problem of minimum-variance unbiased quadratic estimation of the parameters of ordinary or generalized covariance functions of regionalized variables. Attention is limited to covariance functions that are linear in the parameters and the normality assumption is invoked when fourth moments of the data need to be calculated. The main contributions of this work are (1) it shows when and in what sense minimum-variance unbiased quadratic estimation can be achieved, and (2) it yields a well-founded, practicable, and easy-to-automate methodology for the estimation of parameters of covariance functions. Results of simulation studies are very encouraging.

4.
Computational aspects of the estimation of generalized covariance functions by the method of restricted maximum likelihood (REML) are considered in detail. In general, REML estimation is computationally intensive, but significant computational savings are available in important special cases. The approach taken here restricts attention to data whose spatial configuration is a regular lattice, but makes no restrictions on the number of parameters involved in the generalized covariance nor (with the exception of one result) on the nature of the generalized covariance function's dependence on those parameters. Thus, this approach complements the recent work of L. G. Barendregt (1987), who considered computational aspects of REML estimation in the context of arbitrary spatial data configurations, but restricted attention to generalized covariances which are linear functions of only two parameters.

5.
Empirical Maximum Likelihood Kriging: The General Case (Total citations: 4; self-citations: 0; other citations: 4)
Although linear kriging is a distribution-free spatial interpolator, its efficiency is maximal only when the experimental data follow a Gaussian distribution. Transformation of the data to normality has thus always been appealing. The idea is to transform the experimental data to normal scores, krige values in the “Gaussian domain” and then back-transform the estimates and uncertainty measures to the “original domain.” An additional advantage of the Gaussian transform is that spatial variability is easier to model from the normal scores because the transformation reduces effects of extreme values. There are, however, difficulties with this methodology, particularly, choosing the transformation to be used and back-transforming the estimates in such a way as to ensure that the estimation is conditionally unbiased. The problem has been solved for cases in which the experimental data follow some particular type of distribution. In general, however, it is not possible to verify distributional assumptions on the basis of experimental histograms calculated from relatively few data and where the uncertainty is such that several distributional models could fit equally well. For the general case, we propose an empirical maximum likelihood method in which transformation to normality is via the empirical probability distribution function. Although the Gaussian domain simple kriging estimate is identical to the maximum likelihood estimate, we propose use of the latter, in the form of a likelihood profile, to solve the problem of conditional unbiasedness in the back-transformed estimates. Conditional unbiasedness is achieved by adopting a Bayesian procedure in which the likelihood profile is the posterior distribution of the unknown value to be estimated and the mean of the posterior distribution is the conditionally unbiased estimate. The likelihood profile also provides several ways of assessing the uncertainty of the estimation. 
Point estimates, interval estimates, and uncertainty measures can be calculated from the posterior distribution.
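The normal-scores transform at the heart of this workflow can be sketched with the empirical probability distribution function. This is a generic illustration on invented lognormal data, not the authors' estimator; the back-transform shown is plain monotone interpolation through the sorted (score, value) pairs.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
data = rng.lognormal(mean=0.0, sigma=1.0, size=500)   # skewed "experimental" data

# Empirical probability transform: rank -> plotting position -> Gaussian quantile.
ranks = data.argsort().argsort() + 1                  # ranks 1..n (no ties here)
p = (ranks - 0.5) / len(data)                         # keeps p away from 0 and 1
nd = NormalDist()
normal_scores = np.array([nd.inv_cdf(pi) for pi in p])

# Back-transform: the rank-preserving map from Gaussian domain to original domain.
back = np.interp(normal_scores, np.sort(normal_scores), np.sort(data))
```

Kriging would be done on `normal_scores`; the abstract's point is that back-transforming kriged values this naively is what breaks conditional unbiasedness, motivating the likelihood-profile approach.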

6.
Quadratic estimators of components of a nested spatial covariance function are presented. Estimators are unbiased and possess a minimum norm property. Inversion of a covariance matrix is required but, by assuming that spatial correlation is absent, a priori, matrix inversion can be avoided. The loss of efficiency that results from this assumption is discussed. Methods can be generalized to include estimation of components of a generalized polynomial covariance assuming the underlying process to be an intrinsic random function. Particular attention is given to the special case where just two components of spatial covariance exist, one of which represents a nugget effect.

7.
On the estimation of the generalized covariance function (Total citations: 1; self-citations: 0; other citations: 1)
The estimation of the generalized covariance function, K, is a major problem in the use of intrinsic random functions of order k to obtain kriging estimates. The precise estimation by least-squares regression of the parameters in polynomial models for K is made difficult by the nature of the distribution of the dependent variable and the multicollinearity of the independent variables.

8.
It has been recognized that wildfire, followed by large precipitation events, triggers both flooding and debris flows in mountainous regions. The ability to predict and mitigate these hazards is crucial in protecting public safety and infrastructure. A need for advanced modeling techniques was highlighted by re-evaluating existing prediction models from the literature. Data from 15 individual burn basins in the intermountain western United States, which contained 388 instances and 26 variables, were obtained from the United States Geological Survey (USGS). After randomly selecting a subset of the data to serve as a validation set, advanced predictive modeling techniques, using machine learning, were implemented using the remaining training data. Tenfold cross-validation was applied to the training data to ensure nearly unbiased error estimation and also to avoid model over-fitting. Linear, nonlinear, and rule-based predictive models including naïve Bayes, mixture discriminant analysis, classification trees, and logistic regression models were developed and tested on the validation dataset. Results for the new non-linear approaches were nearly twice as successful as those for the linear models, previously published in debris flow prediction literature. The new prediction models advance the current state-of-the-art of debris flow prediction and improve the ability to accurately predict debris flow events in wildfire-prone intermountain western United States.
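The ten-fold cross-validation mechanics can be sketched without any ML library. The two-feature synthetic data and the nearest-centroid classifier below are stand-ins (the USGS burn-basin variables are not reproduced here), chosen only to show how folds are split, trained, and scored.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic stand-in: 388 instances matches the paper's count, but the two
# informative features and the labels are invented.
n = 388
y = rng.integers(0, 2, size=n)                 # 1 = debris flow, 0 = no flow
X = rng.normal(size=(n, 2)) + 1.5 * y[:, None]

def nearest_centroid_cv_accuracy(X, y, k=10):
    """k-fold cross-validated accuracy of a nearest-centroid classifier."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    accs = []
    for f in folds:
        train = np.setdiff1d(idx, f)           # everything outside this fold
        c0 = X[train][y[train] == 0].mean(axis=0)
        c1 = X[train][y[train] == 1].mean(axis=0)
        d0 = np.linalg.norm(X[f] - c0, axis=1)
        d1 = np.linalg.norm(X[f] - c1, axis=1)
        pred = (d1 < d0).astype(int)
        accs.append((pred == y[f]).mean())
    return float(np.mean(accs))

cv_accuracy = nearest_centroid_cv_accuracy(X, y)
```

Each fold is held out exactly once, so the averaged accuracy is a nearly unbiased estimate of out-of-sample performance, which is the property the abstract relies on.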

9.
A Bayesian linear inversion methodology based on Gaussian mixture models and its application to geophysical inverse problems are presented in this paper. The proposed inverse method is based on a Bayesian approach under the assumptions of a Gaussian mixture random field for the prior model and a Gaussian linear likelihood function. The model for the latent discrete variable is defined to be a stationary first-order Markov chain. In this approach, a recursive exact solution to an approximation of the posterior distribution of the inverse problem is proposed. A Markov chain Monte Carlo algorithm can be used to efficiently simulate realizations from the correct posterior model. Two inversion studies based on real well log data are presented, and the main results are the posterior distributions of the reservoir properties of interest, the corresponding predictions and prediction intervals, and a set of conditional realizations. The first application is a seismic inversion study for the prediction of lithological facies, P- and S-impedance, where an improvement of 30% in the root-mean-square error of the predictions compared to the traditional Gaussian inversion is obtained. The second application is a rock physics inversion study for the prediction of lithological facies, porosity, and clay volume, where predictions slightly improve compared to the Gaussian inversion approach.

10.
The Second-Order Stationary Universal Kriging Model Revisited (Total citations: 3; self-citations: 0; other citations: 3)
Universal kriging was originally developed for spatial interpolation problems in which a drift seemed justified by the experimental data. Its use has been questioned, however, because of the bias of the estimated underlying variogram (the variogram of the residuals), and universal kriging came to be considered old-fashioned after the theory of intrinsic random functions was developed. In this paper the model is reexamined, together with methods for handling problems in the inference of its parameters. The efficiency of the inference of covariance parameters is shown in terms of bias, variance, and mean square error of the sampling distribution obtained by Monte Carlo simulation for three different estimators (maximum likelihood, bias-corrected maximum likelihood, and restricted maximum likelihood). It is shown that unbiased estimates of the covariance parameters may be obtained, but if the number of samples is small there can be no guarantee of good estimates (estimates close to the true value) because the sampling variance is usually large. This problem is not specific to the universal kriging model but arises in any model whose parameters are inferred from experimental data. The validity of the estimates may be evaluated statistically as a risk function, as shown in this paper.
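The Monte Carlo comparison of estimators by bias, variance, and MSE can be illustrated on the simplest case, a single variance parameter: the maximum-likelihood (divide-by-n) estimator is biased, yet at small n it can still beat the unbiased estimator on mean square error, exactly the point the abstract makes about unbiasedness alone guaranteeing nothing. The numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
true_var, n, trials = 4.0, 10, 20000

ml_est = np.empty(trials)    # maximum-likelihood estimator (divide by n)
unb_est = np.empty(trials)   # bias-corrected estimator (divide by n - 1)
for t in range(trials):
    x = rng.normal(0.0, np.sqrt(true_var), size=n)
    s2 = np.sum((x - x.mean()) ** 2)
    ml_est[t] = s2 / n
    unb_est[t] = s2 / (n - 1)

# Sampling-distribution summaries, as in the abstract's comparison:
bias_ml = ml_est.mean() - true_var     # theory: -true_var / n = -0.4
bias_unb = unb_est.mean() - true_var   # theory: ~ 0
mse_ml = np.mean((ml_est - true_var) ** 2)
mse_unb = np.mean((unb_est - true_var) ** 2)
```

The unbiased estimator wins on bias but loses on MSE at n = 10 because its sampling variance is larger, mirroring the "no guarantee of good estimates from few samples" conclusion.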

11.
Various approaches exist to relate saturated hydraulic conductivity (Ks) to grain-size data. Most methods use a single grain-size parameter and hence omit the information encompassed by the entire grain-size distribution. This study compares two data-driven modelling methods, multiple linear regression and artificial neural networks, that use the entire grain-size distribution as input for Ks prediction. Besides the predictive capacity of the methods, the uncertainty associated with the model predictions is also evaluated, since such information is important for stochastic groundwater flow and contaminant transport modelling. Artificial neural networks (ANNs) are combined with a generalised likelihood uncertainty estimation (GLUE) approach to predict Ks from grain-size data. The resulting GLUE-ANN hydraulic conductivity predictions and associated uncertainty estimates are compared with those obtained from the multiple linear regression models by leave-one-out cross-validation. The GLUE-ANN ensemble prediction proved to be slightly better than multiple linear regression. The prediction uncertainty, however, was reduced by half an order of magnitude on average, and decreased by at most an order of magnitude. This demonstrates that the proposed method outperforms classical data-driven modelling techniques. Moreover, a comparison with methods from the literature demonstrates the importance of site-specific calibration. The data set used for this purpose originates mainly from unconsolidated sandy sediments of the Neogene aquifer, northern Belgium. The proposed predictive models are developed for 173 grain-size–Ks pairs. Finally, an application of the optimised models is presented for a borehole lacking Ks data.

12.
The problem of assimilating biased and inaccurate observations into inadequate models of the physical systems from which the observations were taken is common in the petroleum and groundwater fields. When large amounts of data are assimilated without accounting for model error and observation bias, predictions tend to be both overconfident and incorrect. In this paper, we propose a workflow for calibration of imperfect models to biased observations that involves model construction, model calibration, model criticism and model improvement. Model criticism is based on computation of model diagnostics which provide an indication of the validity of assumptions. During the model improvement step, we advocate identification of additional physically motivated parameters based on examination of data mismatch after calibration and addition of bias correction terms. If model diagnostics indicate the presence of residual model error after parameters have been added, then we advocate estimation of a “total” observation error covariance matrix, whose purpose is to reduce weighting of observations that cannot be matched because of deficiency of the model. Although the target applications of this methodology are in the subsurface, we illustrate the approach with two simplified examples involving prediction of the future velocity of fall of a sphere from models calibrated to a short time series of biased measurements with independent additive random noise. The models into which the data are assimilated contain model errors due to neglect of physical processes and neglect of uncertainty in parameters. In every case, the estimated total error covariance is larger than the true observation covariance, implying that the observations need not be matched to the accuracy of the measuring instrument. Predictions are much improved when all model improvement steps are taken.

13.
Bayesian updating methods provide an alternate philosophy to the characterization of the input variables of a stochastic mathematical model. Here, a priori values of statistical parameters are assumed on subjective grounds or by analysis of a data base from a geologically similar area. As measurements become available during site investigations, updated estimates of parameters characterizing spatial variability are generated. However, in solving the traditional updating equations, an updated covariance matrix may be generated that is not positive-definite, particularly when observed data errors are small. In addition, measurements may indicate that initial estimates of the statistical parameters are poor. The traditional procedure does not have a facility to revise the parameter estimates before the update is carried out. Alternatively, Bayesian updating can be viewed as a linear inverse problem that minimizes a weighted combination of solution simplicity and data misfit. Depending on the weight given to the a priori information, a different solution is generated. A Bayesian updating procedure for log-conductivity interpolation that uses a singular value decomposition (SVD) is presented. An efficient and stable algorithm is outlined that computes the updated log-conductivity field and the a posteriori covariance of the estimated values (estimation errors). In addition, an information density matrix is constructed that indicates how well predicted data match observations. Analysis of this matrix indicates the relative importance of the observed data. The SVD updating procedure is used to interpolate the log-conductivity fields of a series of hypothetical aquifers to demonstrate pitfalls and possibilities of the method.
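The stabilizing role of the SVD can be sketched on a small hypothetical ill-conditioned linear system standing in for the log-conductivity problem: singular values below a cutoff are truncated, trading a little resolution for numerical stability. The matrix and cutoff below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical ill-conditioned system G m = d (a near-singular Vandermonde
# matrix plays the role of the sensitivity/covariance operator).
G = np.vander(np.linspace(0.0, 1.0, 8), 8)
m_true = rng.normal(size=8)
d = G @ m_true

U, s, Vt = np.linalg.svd(G, full_matrices=False)

def svd_solve(U, s, Vt, d, rel_cutoff=1e-10):
    """Truncated-SVD solution: discard singular values below a relative cutoff."""
    keep = s > rel_cutoff * s[0]
    return Vt[keep].T @ ((U[:, keep].T @ d) / s[keep])

m_hat = svd_solve(U, s, Vt, d)
```

The same decomposition gives the pieces the abstract mentions: the retained singular vectors determine the a posteriori covariance, and `U[:, keep] @ U[:, keep].T` is the information density matrix showing how well predicted data can match observations.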

14.
Statistical modelling of thermal annealing of fission tracks in apatite (Total citations: 8; self-citations: 0; other citations: 8)
We develop an improved methodology for modelling the relationship between mean track length, temperature, and time in fission track annealing experiments. We consider “fanning Arrhenius” models, in which contours of constant mean length on an Arrhenius plot are straight lines meeting at a common point. Features of our approach are explicit use of subject matter knowledge, treating mean length as the response variable, modelling of the mean-variance relationship with two components of variance, improved modelling of the control sample, and using information from experiments in which no tracks are seen.

This approach overcomes several weaknesses in previous models and provides a robust six parameter model that is widely applicable. Estimation is via direct maximum likelihood which can be implemented using a standard numerical optimisation package. Because the model is highly nonlinear, some reparameterisations are needed to achieve stable estimation and calculation of precisions. Experience suggests that precisions are more convincingly estimated from profile log-likelihood functions than from the information matrix.

We apply our method to the B-5 and Sr fluorapatite data of Crowley et al. (1991) and obtain well-fitting models in both cases. For the B-5 fluorapatite, our model exhibits less fanning than that of Crowley et al. (1991), although fitted mean values above 12 μm are fairly similar. However, predictions can be different, particularly for heavy annealing at geological time scales, where our model is less retentive. In addition, the refined error structure of our model results in tighter prediction errors, and has components of error that are easier to verify or modify. For the Sr fluorapatite, our fitted model for mean lengths does not differ greatly from that of Crowley et al. (1991), but our error structure is quite different.


15.
Time series analysis of tunnel displacement based on an ARMA model (Total citations: 7; self-citations: 0; other citations: 7)
尹光志, 岳顺, 钟焘, 李德泉. 《岩土力学》 (Rock and Soil Mechanics), 2009, 30(9): 2727-2732
In New Austrian Tunnelling Method (NATM) construction, displacement monitoring plays an important role in evaluating the stability of the surrounding rock and the adequacy of the support structure. At present, AR models are mostly used for time series analysis of tunnel displacement; they sidestep nonlinear estimation, at the cost of poorer fitting accuracy and limited practical applicability. This paper therefore introduces the ARMA model, which offers higher prediction accuracy and broader applicability, together with its common parameter estimation methods. To avoid the inconvenience of nonlinear parameter estimation, an approximately linear estimation method for the ARMA parameters is proposed: the residuals are expanded to first order in a Taylor series, which linearizes the nonlinear estimation, and the final parameter values are obtained by linear least squares. The method was applied to time series modelling of displacement monitoring data from the Nanshan tunnel on South Ring Road No. 2 in Dazu County, Chongqing; the predictions agree well with the measurements, demonstrating the practicality of the method.
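The linearization strategy in this abstract, a first-order Taylor expansion of the residuals followed by linear least squares, is a Gauss-Newton iteration. A minimal sketch on a hypothetical exponential convergence curve (invented parameters and starting values, not the tunnel data):

```python
import numpy as np

# Hypothetical noiseless "convergence" series y = a * exp(b * t):
t = np.linspace(0.0, 10.0, 40)
a_true, b_true = 12.0, -0.3
y = a_true * np.exp(b_true * t)

# Gauss-Newton: expand the residuals to first order around the current
# parameters, solve the resulting linear least-squares problem, repeat.
a, b = 10.0, -0.25                 # starting values
for _ in range(30):
    f = a * np.exp(b * t)
    J = np.column_stack([np.exp(b * t),            # d f / d a
                         a * t * np.exp(b * t)])   # d f / d b
    step, *_ = np.linalg.lstsq(J, y - f, rcond=None)
    a, b = a + step[0], b + step[1]
```

On noiseless data the iteration converges to the true parameters to machine precision; with real monitoring noise the same loop converges to the nonlinear least-squares fit.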

16.
Use of intrinsic random function stochastic models as a basis for estimation in geostatistical work requires the identification of the generalized covariance function of the underlying process. The fact that this function has to be estimated from data introduces an additional source of error into predictions based on the model. This paper develops the sample reuse procedure called the bootstrap in the context of intrinsic random functions to obtain realistic estimates of these errors. Simulation results support the conclusion that bootstrap distributions of functionals of the process, as well as their kriging variance, provide a reasonable picture of the variability introduced by imperfect estimation of the generalized covariance function. This paper was presented at Emerging Concepts, MGUS-87 Conference, Redwood City, California, 13–15 April 1987.
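The bootstrap mechanics can be sketched for a simple statistic. The data and the statistic (a mean rather than a kriged functional of an intrinsic random function) are stand-ins chosen so the answer can be checked against theory.

```python
import numpy as np

rng = np.random.default_rng(5)
data = rng.normal(10.0, 3.0, size=200)   # stand-in for observed process values

def bootstrap_dist(data, stat, n_boot=2000):
    """Sample reuse: resample with replacement, recompute the statistic."""
    n = len(data)
    return np.array([stat(data[rng.integers(0, n, size=n)])
                     for _ in range(n_boot)])

boot = bootstrap_dist(data, np.mean)
boot_se = boot.std(ddof=1)                        # bootstrap standard error
theory_se = data.std(ddof=1) / np.sqrt(len(data)) # known answer for the mean
```

For the mean the bootstrap standard error matches the textbook formula; the point of the paper is that the same resampling picture remains reasonable for functionals whose error distribution has no closed form.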

17.
Mathematical Geology, 1997, 29(6): 779-799
Generalized cross-covariances describe the linear relationships between spatial variables observed at different locations. They are invariant under translation of the locations for any intrinsic processes, they determine the cokriging predictors without additional assumptions, and they are unique up to linear functions. If the model is stationary, that is, if the variograms are bounded, they correspond to the stationary cross-covariances. Under some symmetry condition they are equal to minus the usual cross-variogram. We present a method to estimate these generalized cross-covariances from data observed at arbitrary sampling locations. In particular, we do not require that all variables are observed at the same points. For fitting a linear coregionalization model we combine this new method with a standard algorithm which ensures positive definite coregionalization matrices. We study the behavior of the method both by computing variances exactly and by simulating from various models.

18.
A data reduction method is described for determining platinum-group element (PGE) abundances by inductively coupled plasma-mass spectrometry (ICP-MS) using external calibration or the method of standard addition. Gravimetric measurement of volumes, the analysis of reference materials and the use of procedural blanks were all used to minimise systematic errors. Internal standards were used to correct for instrument drift. A linear least squares regression model was used to calculate concentrations from drift-corrected counts per second (cps). The mathematical manipulations themselves also contribute to the uncertainty of a procedure. Typical uncertainty estimate calculations for ICP-MS data manipulations involve: (1) carrying standard deviations from the raw cps through the data reduction, or (2) calculating a standard deviation from multiple final concentration calculations. It is demonstrated that method 2 may underestimate the uncertainty of the calculated data. Moreover, neither method typically includes an uncertainty component from the regression model itself. As such models contribute to the uncertainty affecting the calculated data, an uncertainty component from the regression must be included in any final error calculations. Confidence intervals are used to account for uncertainty from the regression model; these are simpler to calculate than the uncertainty estimates of method 1, for example. The data reduction and uncertainty estimation method described here addresses problems of reporting PGE data from an article in the literature and addresses both precision and accuracy. The method can be applied to any analytical technique where drift corrections or regression models are used.
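The external-calibration regression with a regression-derived interval can be sketched on invented calibration points. This is a generic illustration, not the authors' reduction scheme: the concentrations and counts are made up, and a normal quantile (factor 2) stands in for the Student t value a proper confidence interval would use.

```python
import numpy as np

# Hypothetical calibration data: known concentrations (ng/g) versus
# drift-corrected counts per second (roughly 100 cps per ng/g).
conc = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0])
cps = np.array([3.0, 52.0, 101.0, 203.0, 498.0, 1002.0])

# Ordinary least-squares calibration line: cps = b0 + b1 * conc
A = np.column_stack([np.ones_like(conc), conc])
beta, *_ = np.linalg.lstsq(A, cps, rcond=None)
b0, b1 = beta

# Invert the calibration for an unknown sample and propagate the regression
# uncertainty into concentration units (approximate inverse-prediction band).
unknown_cps = 250.0
est_conc = (unknown_cps - b0) / b1

n = len(conc)
s = np.sqrt(np.sum((cps - A @ beta) ** 2) / (n - 2))   # residual std deviation
sxx = np.sum((conc - conc.mean()) ** 2)
se_conc = (s / b1) * np.sqrt(1 + 1 / n + (est_conc - conc.mean()) ** 2 / sxx)
ci = (est_conc - 2 * se_conc, est_conc + 2 * se_conc)
```

The interval widens away from the centroid of the calibration points, which is exactly the regression-model contribution the abstract argues must appear in the final error budget.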

19.
Before optimal linear prediction can be performed on spatial data sets, the variogram is usually estimated at various lags and a parametric model is fitted to those estimates. Apart from possible a priori knowledge about the process and the user's subjectivity, there is no standard methodology for choosing among valid variogram models like the spherical or the exponential ones. This paper discusses the nonparametric estimation of the variogram and its derivative, based on the spectral representation of positive definite functions. The use of the estimated derivative to help choose among valid parametric variogram models is presented. Once a model is selected, its parameters can be estimated—for example, by generalized least squares. A small simulation study is performed that demonstrates the usefulness of estimating the derivative to help model selection and illustrates the issue of aliasing. MATLAB software for nonparametric variogram derivative estimation is available at http://www-math.mit.edu/~gorsich/derivative.html. An application to the Walker Lake data set is also presented.
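The classical lag-by-lag variogram estimation that this nonparametric approach starts from can be sketched on a regular 1-D transect. The white-noise data (a pure nugget process, so the true variogram equals the variance at every lag) are invented so the estimate can be checked.

```python
import numpy as np

rng = np.random.default_rng(6)
# White noise on a regular transect: true variogram is gamma(h) = sigma^2 = 4
# for every lag h > 0 (pure nugget).
z = rng.normal(0.0, 2.0, size=2000)

def empirical_variogram(z, max_lag):
    """Matheron estimator on a regular 1-D grid:
       gamma(h) = mean((z[i+h] - z[i])^2) / 2."""
    return np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2)
                     for h in range(1, max_lag + 1)])

gamma = empirical_variogram(z, max_lag=10)
```

A parametric model (spherical, exponential, ...) would then be fitted to `gamma`; the paper's contribution is estimating the variogram and its derivative nonparametrically to guide that model choice.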

20.
Conditioning of coefficient matrices of Ordinary Kriging (Total citations: 1; self-citations: 0; other citations: 1)
The solution of a set of linear equations is central to Ordinary Kriging. Computers are commonly applied because of the amount of data and work involved. Until recently, little attention has been devoted to the conditioning of kriging matrices. This article considers the implications of conditioning for numerical stability, rather than for robustness, which has been the main focus of past work. The effect of properties of the stationary covariance matrix on the conditioning of the kriging matrix is discussed. The relationship between the covariance and autocorrelation functions allows some conclusions about the conditioning of covariance matrices, based on past work in deconvolution. The conditioning of some coefficient matrices of stationary kriging, defined in terms of either the semivariogram or the covariance, is examined.
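The conditioning issue can be sketched directly by computing the condition number of a stationary covariance matrix. The Gaussian covariance model, grid, and parameter values below are invented for illustration; the smooth Gaussian model on a dense grid is a well-known worst case, and a nugget term on the diagonal is the standard remedy.

```python
import numpy as np

# Covariance matrix for a dense regular 1-D transect of 50 points.
x = np.linspace(0.0, 10.0, 50)
h = np.abs(x[:, None] - x[None, :])          # pairwise lag distances

def gaussian_cov(h, sill=1.0, corr_range=3.0, nugget=0.0):
    """Gaussian covariance model; the nugget adds to the diagonal only."""
    C = sill * np.exp(-(h / corr_range) ** 2)
    return C + nugget * np.eye(len(C))

cond_no_nugget = np.linalg.cond(gaussian_cov(h))        # near-singular
cond_nugget = np.linalg.cond(gaussian_cov(h, nugget=0.1))
```

Even a small nugget bounds the smallest eigenvalue away from zero and collapses the condition number by many orders of magnitude, which is why kriging systems built from very smooth covariance models without a nugget are numerically fragile.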
