Similar Literature
20 similar records found.
1.
Gaussian conditional realizations are routinely used for risk assessment and planning in a variety of Earth sciences applications. Assuming a Gaussian random field, conditional realizations can be obtained by first creating unconditional realizations that are then post-conditioned by kriging. Many efficient algorithms are available for the first step, so the bottleneck resides in the second step. Instead of doing the conditional simulations with the desired covariance (F approach) or with a tapered covariance (T approach), we propose to use the taper covariance only in the conditioning step (half-taper or HT approach). This speeds up the computations and reduces memory requirements for the conditioning step, while keeping the right short-scale variations in the realizations. A criterion based on the mean square error of the simulation is derived to help anticipate the similarity of HT to F. Moreover, an index is used to predict the sparsity of the kriging matrix for the conditioning step. Some guidelines for the choice of the taper function are discussed. The distributions of a series of 1D, 2D and 3D scalar response functions are compared for the F, T and HT approaches. The distributions obtained indicate a much better similarity to F with HT than with T.
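To make the two steps concrete, here is a minimal 1D sketch of post-conditioning by simple kriging of residuals, with the HT idea of passing the tapered covariance only to the conditioning system. The exponential covariance, spherical taper and all function names are our illustrative assumptions, not the paper's code.

```python
import numpy as np

def exp_cov(h, sill=1.0, prange=10.0):
    """Assumed target (F) covariance: exponential model with practical range prange."""
    return sill * np.exp(-3.0 * np.abs(h) / prange)

def spherical_taper(h, L=15.0):
    """Compactly supported correlation, zero beyond L: multiplying by it makes
    the conditioning kriging matrix sparse."""
    r = np.minimum(np.abs(h) / L, 1.0)
    return 1.0 - 1.5 * r + 0.5 * r**3

def post_condition(x_sim, x_obs, z_obs, zu_sim, zu_obs, cov):
    """Condition an unconditional realization z_u by kriging the residuals:
    z_c(x) = z_u(x) + lam(x)^T (z_obs - z_u(obs)), where C_oo lam = C_os."""
    C_oo = cov(x_obs[:, None] - x_obs[None, :])
    C_os = cov(x_obs[:, None] - x_sim[None, :])
    lam = np.linalg.solve(C_oo, C_os)        # one column of weights per target point
    return zu_sim + lam.T @ (z_obs - zu_obs)

# HT approach: generate z_u with the full covariance F, but hand the tapered
# (sparser) covariance to post_condition only.
ht_cov = lambda h: exp_cov(h) * spherical_taper(h)
```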

2.
Conditional bias-penalized kriging (CBPK)
Simple and ordinary kriging, or SK and OK, respectively, represent the best linear unbiased estimator in the unconditional sense in that they minimize the unconditional (on the unknown truth) error variance and are unbiased in the unconditional mean. However, because the above properties hold only in the unconditional sense, kriging estimates are generally subject to conditional biases that, depending on the application, may be unacceptably large. For example, when used for precipitation estimation using rain gauge data, kriging tends to significantly underestimate large precipitation and, albeit less consequentially, overestimate small precipitation. In this work, we describe an extremely simple extension to SK or OK, referred to herein as conditional bias-penalized kriging (CBPK), which minimizes conditional bias in addition to unconditional error variance. For comparative evaluation of CBPK, we carried out numerical experiments in which normal and lognormal random fields of varying spatial correlation scale and rain gauge network density are synthetically generated, and the kriging estimates are cross-validated. For generalization and potential application in other optimal estimation techniques, we also derive CBPK in the framework of classical optimal linear estimation theory.
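As a sketch of the idea: under a zero-mean jointly Gaussian model, one can add a penalty delta times the expected squared conditional bias, E[(E[Zhat | Z0] - Z0)^2], to the simple kriging objective; working out the normal equations then gives a rank-one modification of the SK system. This derivation is ours, for illustration; the paper derives CBPK in the classical optimal linear estimation framework and its exact system may differ.

```python
import numpy as np

def cbpk_weights(C, c, sigma2, delta=1.0):
    """Kriging weights with a conditional-bias penalty (zero-mean Gaussian sketch).
    Minimizing  error variance + delta * E[(E[Zhat|Z0] - Z0)^2]  yields the
    rank-one-modified normal equations
        (C + (delta / sigma2) * c c^T) lam = (1 + delta) c,
    where C is the data covariance matrix, c the data-to-target covariances and
    sigma2 the target variance; delta = 0 recovers plain simple kriging."""
    A = C + (delta / sigma2) * np.outer(c, c)
    return np.linalg.solve(A, (1.0 + delta) * c)
```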

3.
Data collected along transects are becoming more common in environmental studies as indirect measurement devices, such as geophysical sensors that can be attached to mobile platforms, become more prevalent. Because exhaustive sampling is not always possible under constraints of time and cost, geostatistical interpolation techniques are used to estimate unknown values at unsampled locations from transect data. It is known that outlying observations can receive significantly greater ordinary kriging weights than centrally located observations when the data are contiguously aligned along a transect within a finite search window. Deutsch (1994) proposed a kriging algorithm, finite domain kriging, that uses a redundancy measure in place of the covariance function in the data-to-data kriging matrix to address the problem of overweighting the outlying observations. This paper compares the performance of two kriging techniques, ordinary kriging (OK) and finite domain kriging (FDK), in examining unexploded ordnance (UXO) densities by comparing prediction errors at unsampled locations. The impact of sampling design on object count prediction is also investigated using data collected from transects and at random locations. The Poisson process is used to model the spatial distribution of UXO for three 5000 × 5000 m fields: one has no ordnance target (homogeneous field), while the other two have an ordnance target in the center of the site (isotropic and anisotropic fields). In general, for a given sampling transect width, the differences between OK and FDK in terms of the mean error and the mean square error are not significant, regardless of the sampled area and the choice of the field. When 20% or more of the site is sampled, the estimation of object counts is unbiased on average for all three fields, regardless of the choice of the transect width and the choice of the kriging algorithm. However, for the non-homogeneous fields (isotropic and anisotropic), the mean error fluctuates considerably when a small number of transects are sampled. The difference between transect sampling and random sampling in terms of prediction errors becomes almost negligible if more than 20% of the site is sampled. Overall, FDK is no better than OK in terms of prediction performance when the transect sampling procedure is used.
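A hedged sketch of the synthetic setup for the homogeneous field: a homogeneous Poisson point pattern over the 5000 × 5000 m site, counted in vertical transect strips. The intensity, transect width and spacing below are illustrative values, not the study's.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_field(intensity, size=5000.0):
    """Homogeneous Poisson pattern: N ~ Poisson(intensity * area),
    locations uniform over the size x size square."""
    n = rng.poisson(intensity * size * size)
    return rng.uniform(0.0, size, size=(n, 2))

def transect_counts(points, width=10.0, spacing=250.0, size=5000.0):
    """Object counts in vertical transect strips of given width and spacing."""
    starts = np.arange(0.0, size, spacing)
    return np.array([((points[:, 0] >= x0) & (points[:, 0] < x0 + width)).sum()
                     for x0 in starts])

objects = poisson_field(4e-5)        # about 1000 objects over the site (assumed rate)
counts = transect_counts(objects)    # the data that would feed the kriging step
```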

4.
This paper presents new ideas on sampling design and minimax prediction in a geostatistical model setting. Both methodologies are based on regression design ideas; for this reason, the appendix of this paper gives an introduction to optimum Bayesian experimental design theory for linear regression models with uncorrelated errors. The presented methodologies and algorithms are then applied to the spatial setting of correlated random fields. To be specific, in Sect. 1 we approximate an isotropic random field by means of a regression model with a large number of regression functions with random amplitudes, similarly to Fedorov and Flanagan (J Combat Inf Syst Sci: 23, 1997). These authors make use of the Karhunen–Loève approximation of the isotropic random field. We use the so-called polar spectral approximation instead; i.e. we approximate the isotropic random field by means of a regression model with sine-cosine-Bessel surface harmonics with random amplitudes and then, in accordance with Fedorov and Flanagan (J Combat Inf Syst Sci: 23, 1997), apply standard Bayesian experimental design algorithms to the resulting Bayesian regression model. Section 2 deals with minimax prediction when the covariance function is known to vary in some set of a priori plausible covariance functions. Using a minimax theorem due to Sion (Pac J Math 8:171–176, 1958), we are able to formulate the minimax problem as equivalent to an optimum experimental design problem, which makes the whole experimental design apparatus available for finding minimax kriging predictors. Furthermore, hints are given on how the approach to spatial sampling design with one a priori fixed covariance function may be extended, by means of minimax kriging, to a whole set of a priori plausible covariance functions such that the resulting designs are robust. The theoretical developments are illustrated with two examples taken from radiological monitoring and soil science.

5.
6.
The multi-Gaussian model is used in geostatistical applications to predict functions of a regionalized variable and to assess uncertainty by determining local (conditional to neighboring data) distributions. The model relies on the assumption that the regionalized variable can be represented by a transform of a Gaussian random field with a known mean value, which is often a strong requirement. This article presents two variations of the model to account for an uncertain mean value. In the first one, the mean of the Gaussian random field is regarded as an unknown non-random parameter. In the second model, the mean of the Gaussian field is regarded as a random variable with a very large prior variance. The properties of the proposed models are compared in the context of non-linear spatial prediction and uncertainty assessment problems. Algorithms for the conditional simulation of Gaussian random fields with an uncertain mean are also examined, and problems associated with the selection of data in a moving neighborhood are discussed.
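For reference, a minimal sketch contrasting the known-mean predictor (simple kriging) with the unknown-mean one (ordinary kriging); the latter is also the limit of the paper's second model as the prior variance of the mean grows large. Here C is the data covariance matrix and c the data-to-target covariances; the function names are ours.

```python
import numpy as np

def simple_kriging(C, c, z, m):
    """Known mean m: predictor m + lam^T (z - m), with C lam = c."""
    lam = np.linalg.solve(C, c)
    return m + lam @ (z - m)

def ordinary_kriging(C, c, z):
    """Unknown mean: the unbiasedness constraint sum(lam) = 1 is enforced with
    a Lagrange multiplier; equivalently, the random-mean model with a very
    large prior variance."""
    n = len(z)
    A = np.block([[C, np.ones((n, 1))],
                  [np.ones((1, n)), np.zeros((1, 1))]])
    lam = np.linalg.solve(A, np.append(c, 1.0))[:n]
    return lam @ z
```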

7.
A study of the physical standard of seismic intensity
Wang Hushuan, Earthquake Research in China, 1994, 10(3): 197–205.
Based on a new understanding of seismic intensity and its physical standard, this paper investigates the physical standard of seismic intensity extensively, using richer strong-motion observation data and new theoretical analysis methods. A new physical standard of seismic intensity is proposed and its application is discussed.

8.
Compositional Bayesian indicator estimation
Indicator kriging is widely used for mapping spatial binary variables and for estimating the global and local spatial distributions of variables in geosciences. For continuous random variables, indicator kriging gives an estimate of the cumulative distribution function, for a given threshold, which is then the estimate of a probability. Like any other kriging procedure, indicator kriging provides an estimation variance that, although not often used in applications, should be taken into account as it assesses the uncertainty of the estimate. An alternative approach to indicator estimation is proposed in this paper, in which the complete probability density function of the indicator estimate is evaluated. The procedure is described in a Bayesian framework, using a multivariate Gaussian likelihood and an a priori distribution, which are combined according to Bayes' theorem to obtain a posterior distribution for the indicator estimate. From this posterior distribution, point estimates, interval estimates and uncertainty measures can be obtained. Among the point estimates, the median of the posterior distribution is the maximum entropy estimate, because there is a fifty-fifty chance of the unknown value being larger or smaller than the median; that is, there is maximum uncertainty in the choice between the two alternatives. Thus, in some sense, the median is an indicator estimator, alternative to the kriging estimator, that includes its own uncertainty. On the other hand, the mode of the posterior distribution, assuming a uniform prior, coincides with the simple kriging estimator. Additionally, because the indicator estimate can be considered as a two-part composition whose domain of definition is the simplex, the method is extended to compositional Bayesian indicator estimation. Bayesian indicator estimation and compositional Bayesian indicator estimation are illustrated with an environmental case study in which the probability of the content of a geochemical element in soil being over a particular threshold is of interest. The computer codes and their user guides are in the public domain and freely available.
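The paper's posterior is built from a multivariate Gaussian likelihood on neighbouring data; as a loose stand-in, the following Beta-Bernoulli toy shows the kind of point estimates (median, mode) and interval estimates that can be read off a posterior for an indicator probability. All names and numbers are illustrative.

```python
from scipy import stats

def indicator_posterior(k, n, a=1.0, b=1.0):
    """Toy posterior for an exceedance probability given k of n indicator data
    equal to 1, with a Beta(a, b) prior: posterior is Beta(a + k, b + n - k)."""
    post = stats.beta(a + k, b + n - k)
    return {"mean": post.mean(),
            "median": post.median(),      # the maximum-entropy point estimate
            "mode": (a + k - 1) / (a + b + n - 2) if a + k > 1 else 0.0,
            "ci95": post.interval(0.95)}  # interval estimate / uncertainty measure

print(indicator_posterior(k=7, n=20))
```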

9.
Forecasting of space–time groundwater levels is important for sparsely monitored regions. Soft computing tools are powerful for time series analysis, while classical geostatistical methods provide the best estimates of spatial data. In the present work, a hybrid framework for space–time groundwater level forecasting is proposed by combining a soft computing tool with a geostatistical model. Three time series forecasting models, namely artificial neural network, least-squares support vector machine and genetic programming (GP), are individually combined with the geostatistical ordinary kriging model. The experimental variogram thus obtained is fitted with a linear combination of a nugget effect model and a power model. The efficacy of the space–time models was assessed using both visual interpretation (spatial maps) and calculated error statistics. It was found that the GP–kriging space–time model gave the most satisfactory results in terms of average absolute relative error, root mean square error, normalized mean bias error and normalized root mean square error.
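For concreteness, the fitted variogram structure and the standard (Matheron) experimental variogram it is fitted to can be sketched as below; the parameter names are ours.

```python
import numpy as np

def variogram_model(h, c0, b, alpha):
    """Nugget-plus-power model fitted in the study:
    gamma(h) = c0 + b * h**alpha, with 0 < alpha < 2 for a valid intrinsic model."""
    return c0 + b * np.power(h, alpha)

def experimental_variogram(coords, z, bins):
    """Matheron estimator: half the mean squared difference per lag bin."""
    i, j = np.triu_indices(len(z), k=1)
    lags = np.linalg.norm(coords[i] - coords[j], axis=1)
    sq = 0.5 * (z[i] - z[j]) ** 2
    idx = np.digitize(lags, bins)
    return np.array([sq[idx == k].mean() for k in range(1, len(bins))])
```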

10.
Why do we need and how should we implement Bayesian kriging methods
The spatial prediction methodology that has become known under the heading of kriging is largely based on the assumptions that the underlying random field is Gaussian and that the covariance function is exactly known. In practical applications, however, these assumptions will not hold. To move beyond Gaussianity of the random field, lognormal kriging, disjunctive kriging, (generalized linear) model-based kriging and trans-Gaussian kriging have been proposed in the literature; the latter approach makes use of the Box–Cox transform of the data. Still, none of the alternatives mentioned takes into account the uncertainty with respect to the distribution (or transformation) and the estimated covariance function of the data. The Bayesian trans-Gaussian kriging methodology proposed in the present paper is in the spirit of the "Bayesian bootstrap" idea advocated by Rubin (Ann Stat 9:130–134, 1981); it avoids the unusual specification of noninformative priors often made in the literature and is entirely based on the sample distribution of the estimators of the covariance function and of the Box–Cox parameter. After some notes on Bayesian spatial prediction and noninformative priors, we develop our new methodology and finally present an example illustrating our pragmatic approach to Bayesian prediction by means of a simulated data set.
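The Box–Cox transform at the core of trans-Gaussian kriging, together with its maximum likelihood parameter estimate, can be sketched with scipy; the synthetic lognormal data are illustrative only.

```python
import numpy as np
from scipy import stats

# Box-Cox transform: y = (z**lam - 1) / lam  (log z for lam = 0).
# scipy returns the transformed data together with the maximum likelihood
# estimate of lam; the paper's Bayesian approach builds on the sampling
# distribution of exactly this kind of estimator.
z = stats.lognorm(s=0.8).rvs(size=200, random_state=1)  # skewed synthetic data
y, lam_hat = stats.boxcox(z)
print(f"estimated Box-Cox lambda: {lam_hat:.3f}")
```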

11.
Input uncertainty is as significant as model error: it affects parameter estimation and yields biased and misleading results. This study performed a comprehensive comparison and evaluation of the uncertainty estimates obtained, under the impact of precipitation errors, by GLUE and by a Bayesian method using the Metropolis–Hastings algorithm in a validated conceptual hydrological model (WASMOD). It aims to explain the sensitivity of, and differences between, the GLUE and Bayesian methods applied to a hydrological model under precipitation errors with a constant multiplier parameter and a random multiplier parameter. The 95% confidence intervals of monthly discharge for low, medium and high flows were selected for comparison. Four indices, namely the average relative interval length, the percentage of observations bracketed by the confidence interval, the percentage of observations bracketed by the unit confidence interval, and the continuous rank probability score (CRPS), were used for the sensitivity analysis under model input error via the GLUE and Bayesian methods. It was found that (1) the posterior distributions derived by the Bayesian method are narrower and sharper than those obtained by GLUE under precipitation errors, but the differences are quite small; (2) the Bayesian method is more sensitive than GLUE in its uncertainty estimates of discharge under the impact of precipitation errors; (3) both GLUE and the Bayesian method are more sensitive in the uncertainty estimate of high flow than of the other flows; and (4) under the impact of precipitation errors, the CRPS results for low and medium flows are quite stable for both GLUE and the Bayesian method, whereas high flow is sensitive under the Bayesian method.
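A minimal sketch of the random-walk Metropolis–Hastings sampler used on the Bayesian side; log_post, the step size and the standard-normal example are our illustrative choices, not the study's setup.

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_iter=5000, step=0.1, seed=0):
    """Random-walk Metropolis-Hastings: propose theta' ~ N(theta, step^2 I)
    and accept with probability min(1, post(theta') / post(theta))."""
    rng = np.random.default_rng(seed)
    theta = np.atleast_1d(np.asarray(theta0, dtype=float)).copy()
    lp = log_post(theta)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # log acceptance test
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# e.g. sampling a standard normal posterior:
draws = metropolis_hastings(lambda t: -0.5 * float(t @ t), theta0=[0.0])
```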

12.
Computer modelling is used to investigate the possibility of determining ionospheric parameters from slightly oblique ionospheric soundings, using absorption data for decametric radio waves of different polarizations. It is shown that, with mean square measurement errors of 0.5 dB, and using regularization algorithms to solve the inverse problems, electron collision frequency profiles can be obtained for the night F-region with errors of less than 30%. The temperatures of both electrons and neutrals are also determined to within 10%.
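The abstract does not specify the regularization algorithm; as an assumption, zeroth-order Tikhonov regularization is the classic choice for such ill-posed inversions and can be sketched as:

```python
import numpy as np

def tikhonov_inverse(G, d, alpha):
    """Zeroth-order Tikhonov regularization: minimize
    ||G m - d||^2 + alpha^2 ||m||^2, solved via the normal equations."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + alpha**2 * np.eye(n), G.T @ d)
```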

13.
Abstract

Unconfined aquifer parameters, viz. transmissivity, storage coefficient, specific yield and delay index, are estimated from a pumping test using the genetic algorithm (GA) optimization technique. The parameter estimation problem is formulated as a least-squares optimization, in which the parameters are optimized by minimizing the deviations between the field-observed and the model-predicted time–drawdown data. Boulton's convolution integral for the determination of drawdown is coupled with the GA optimization technique. The bias induced by three different objective functions, (a) the sum of squares of absolute deviations between the observed and computed drawdowns, (b) the sum of squares of deviations normalized with respect to the observed drawdown, and (c) the sum of squares of deviations normalized with respect to the computed drawdown, is statistically analysed. It is observed that, when the time–drawdown data contain no errors, the objective functions do not induce any bias in the parameter estimates and the true parameters are uniquely identified. However, in the presence of noise, these objective functions induce bias in the parameter estimates. For the case considered, defining the objective function as the sum of the squares of absolute deviations between the observed and simulated drawdowns resulted in the best estimates. A comparison of the GA technique with the curve-matching procedure and a conventional optimization technique, the sequential unconstrained minimization technique (SUMT), is made in estimating the aquifer parameters from a reported field pumping test in an unconfined aquifer. For the case considered, the GA technique performed better than the other two techniques, with sum-of-squares errors about one quarter of those obtained by the curve-matching procedure and about half of those obtained by SUMT.

Citation Rajesh, M., Kashyap, D. & Hari Prasad, K. S. (2010) Estimation of unconfined aquifer parameters by genetic algorithms. Hydrol. Sci. J. 55(3), 403–413.
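The three objective functions compared in the study translate directly into code; a GA would then minimize the chosen objective (or, equivalently, maximize a fitness such as 1/(1 + objective)). The function name is ours.

```python
import numpy as np

def drawdown_objective(observed, computed, kind="a"):
    """The three least-squares objectives compared in the study:
    (a) squared absolute deviations,
    (b) deviations normalized by the observed drawdown,
    (c) deviations normalized by the computed drawdown."""
    r = observed - computed
    if kind == "a":
        return np.sum(r ** 2)
    if kind == "b":
        return np.sum((r / observed) ** 2)
    return np.sum((r / computed) ** 2)
```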

14.
Large observed datasets are often non-stationary and/or depend on covariates, especially in the case of extreme hydrometeorological variables. This causes difficulty in estimation using classical hydrological frequency analysis. A number of non-stationary models have been developed that use linear or quadratic polynomial functions or B-splines to estimate the relationship between parameters and covariates. In this article, we propose a regularised generalized extreme value model with B-splines (GEV-B-splines model) in a Bayesian framework to estimate quantiles. Regularisation is based on penalties and aims to favour parsimonious models, especially in large-dimensional parameter spaces. Penalties are introduced in a Bayesian framework and the corresponding priors are detailed. Five penalties are considered and the corresponding priors are developed for comparison: the least absolute shrinkage and selection operator (Lasso), Ridge, and three smoothly clipped absolute deviation (SCAD) methods (SCAD1, SCAD2 and SCAD3). Markov chain Monte Carlo (MCMC) algorithms have been developed for each model to estimate quantiles and their posterior distributions. These approaches are tested and illustrated using simulated data with different sample sizes. A first simulation was made on polynomial B-splines functions in order to choose the most efficient model in terms of the relative mean bias (RMB) and relative mean error (RME) criteria. A second simulation was performed with the SCAD1 penalty for sinusoidal dependence to illustrate the flexibility of the proposed approach. Results show clearly that the regularised approaches lead to a significant reduction of the bias and the mean square error, especially for small sample sizes (n < 100). A case study was carried out to model annual peak flows at the Fort-Kent catchment with total annual precipitation as a covariate. Conditional quantile curves are given for the regularised and maximum likelihood methods.
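For reference, the Ridge, Lasso and SCAD penalties can be sketched as below. This is the standard SCAD of Fan & Li (2001) with the conventional a = 3.7; it does not reproduce the paper's SCAD1/SCAD2/SCAD3 variants or their priors.

```python
import numpy as np

def ridge_penalty(beta, lam):
    return lam * np.sum(beta ** 2)

def lasso_penalty(beta, lam):
    return lam * np.sum(np.abs(beta))

def scad_penalty(beta, lam, a=3.7):
    """Smoothly clipped absolute deviation penalty, applied coordinate-wise:
    linear near zero, quadratic blending, then constant beyond a*lam."""
    t = np.abs(beta)
    p = np.where(t <= lam,
                 lam * t,
                 np.where(t <= a * lam,
                          (2 * a * lam * t - t ** 2 - lam ** 2) / (2 * (a - 1)),
                          lam ** 2 * (a + 1) / 2))
    return np.sum(p)
```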

15.
Nonlocal moment equations allow one to render deterministically optimum predictions of flow in randomly heterogeneous media and to assess predictive uncertainty conditional on measured values of medium properties. We present a geostatistical inverse algorithm for steady-state flow that makes it possible to further condition such predictions and assessments on measured values of hydraulic head (and/or flux). Our algorithm is based on recursive finite-element approximations of exact first and second conditional moment equations. Hydraulic conductivity is parameterized via universal kriging based on unknown values at pilot points and (optionally) measured values at other discrete locations. Optimum unbiased inverse estimates of natural log hydraulic conductivity, head and flux are obtained by minimizing a residual criterion using the Levenberg-Marquardt algorithm. We illustrate the method for superimposed mean uniform and convergent flows in a bounded two-dimensional domain. Our examples illustrate how conductivity and head data act separately or jointly to reduce parameter estimation errors and model predictive uncertainty. This work is supported in part by NSF/ITR Grant EAR-0110289. The first author was additionally supported by scholarships from CONACYT and Instituto de Investigaciones Electricas of Mexico. Additional support was provided by the European Commission under Contract EVK1-CT-1999-00041 (W-SAHaRA-Stochastic Analysis of Well Head Protection and Risk Assessment).
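A hedged sketch of the estimation loop: a least-squares fit of pilot-point log-conductivities via the Levenberg–Marquardt algorithm, with a regularization term pulling toward the kriged prior. The linear stand-in forward model A is purely illustrative; the paper uses recursive finite-element moment equations.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
A = rng.normal(size=(30, 5))                  # stand-in head sensitivity to ln K
y_prior = np.zeros(5)                         # kriged prior ln K at pilot points
h_obs = A @ rng.normal(size=5) + 0.05 * rng.normal(size=30)   # synthetic heads

def residuals(y):
    """Head misfit plus a plug-in regularization term toward the prior."""
    return np.concatenate([A @ y - h_obs, 0.1 * (y - y_prior)])

fit = least_squares(residuals, y_prior, method="lm")   # Levenberg-Marquardt
print(fit.x)                                           # estimated pilot-point ln K
```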

16.
An estimation of the volume of light nonaqueous phase liquids (LNAPL) is often required during site assessment, remedial design, or litigation. LNAPL volume can be estimated by a strictly empirical approach whereby core samples, distributed throughout the vertical and lateral extent of LNAPL, are analyzed for LNAPL content, and these data are then integrated to compute a volume. Alternatively, if the LNAPL has attained vertical equilibrium, the thickness of LNAPL in monitoring wells can be used to calculate LNAPL volume at the well locations if appropriate soil and LNAPL properties can be estimated.
A method is described for estimating key soil and LNAPL properties by nonlinear regression of vertical profiles of LNAPL saturation. The method is relatively fast, cost-effective, and amenable to quantitative analysis of uncertainty. Optionally, the method allows statistical determination of best-fit values for the van Genuchten capillary parameters (n, α_oil-water and α_oil-air), residual water saturation and LNAPL density. The sensitivity of the method was investigated by fitting field LNAPL saturation profiles and then determining the variation in misfit (mean square residual) as a function of parameter value for each parameter. Using field data from a sandy aquifer, the fitting statistics were found to be highly sensitive to LNAPL density, α_oil-water and α_oil-air; moderately sensitive to the van Genuchten n value; and weakly sensitive to residual water saturation. The regression analysis also provides information that can be used to estimate uncertainty in the estimated parameters, which can then be used to estimate uncertainty in calculated values of specific volume.
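The paper fits three-fluid LNAPL saturation profiles with separate α_oil-water and α_oil-air parameters; as a simplified stand-in, the nonlinear-regression machinery can be sketched by fitting a single van Genuchten retention curve with scipy. All data and starting values below are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(h, alpha, n, s_r):
    """van Genuchten retention curve: S(h) = s_r + (1 - s_r) *
    [1 + (alpha*h)**n]**(-(1 - 1/n)), for capillary head h > 0."""
    se = (1.0 + (alpha * h) ** n) ** (1.0 / n - 1.0)
    return s_r + (1.0 - s_r) * se

rng = np.random.default_rng(4)
h = np.linspace(0.05, 3.0, 40)                   # heights above the fluid table (m)
s_obs = van_genuchten(h, 2.0, 2.5, 0.10) + 0.02 * rng.normal(size=h.size)
p_opt, p_cov = curve_fit(van_genuchten, h, s_obs, p0=[1.0, 2.0, 0.05])
p_err = np.sqrt(np.diag(p_cov))                  # the parameter-uncertainty estimates
```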

17.
ABSTRACT

A linear approach is presented for analysing flood discharge series affected by measurement errors that are random in nature. A general model based upon the conditional probability concept is introduced to represent random errors and to analyse their effect on flood estimates. Flood predictions provided by quantiles are shown to be positively biased when made from a sample of measured discharges. Though for design purposes such an effect is conservative, this bias cannot be neglected if the peak discharges are determined from stage measurements by means of the extrapolated tail of the rating curve for the gauging station concerned. Monte Carlo experiments carried out to analyse small-sample effects show that the method of maximum likelihood is able to reduce the bias due to measurement errors in discharge data.
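A toy Monte Carlo experiment in the spirit of the paper, assuming Gumbel-distributed floods, multiplicative random measurement errors and a method-of-moments fit (the paper's model and estimators differ): the quantile bias comes out positive because measurement errors inflate the estimated scale.

```python
import numpy as np

rng = np.random.default_rng(5)

def gumbel_quantile(mu, beta, p=0.99):
    return mu - beta * np.log(-np.log(p))

TRUE_Q = gumbel_quantile(100.0, 30.0)
est = []
for _ in range(2000):
    q = rng.gumbel(100.0, 30.0, size=50)              # "true" annual floods
    q_meas = q * rng.normal(1.0, 0.15, size=50)       # random measurement error
    beta_hat = np.sqrt(6.0) * q_meas.std(ddof=1) / np.pi   # method of moments
    mu_hat = q_meas.mean() - 0.5772 * beta_hat
    est.append(gumbel_quantile(mu_hat, beta_hat))
print(np.mean(est) - TRUE_Q)   # positive: errors inflate the quantile estimate
```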

18.
Interpolation techniques for spatial data have been applied frequently in various fields of geosciences. Although most conventional interpolation methods assume that it is sufficient to use first- and second-order statistics to characterize random fields, researchers have now realized that these methods cannot always provide reliable interpolation results, since geological and environmental phenomena tend to be very complex, presenting non-Gaussian distributions and/or non-linear inter-variable relationships. This paper proposes a new approach to the interpolation of spatial data, which can be applied with great flexibility. Suitable cross-variable higher-order spatial statistics are developed to measure the spatial relationship between the random variable at an unsampled location and those in its neighbourhood. Given the computed cross-variable higher-order spatial statistics, the conditional probability density function is approximated via polynomial expansions, which is then utilized to determine the interpolated value at the unsampled location as an expectation. In addition, the uncertainty associated with the interpolation is quantified by constructing prediction intervals of interpolated values. The proposed method is applied to a mineral deposit dataset, and the results demonstrate that it outperforms kriging methods in uncertainty quantification. The introduction of the cross-variable higher-order spatial statistics noticeably improves the quality of the interpolation, since it enriches the information that can be extracted from the observed data, and this benefit is substantial when working with data that are sparse or have non-trivial dependence structures.

19.
We present a nonlinear stochastic inverse algorithm that allows conditioning estimates of transient hydraulic heads, fluxes and their associated uncertainty on information about hydraulic conductivity (K) and hydraulic head (h) data collected in a randomly heterogeneous confined aquifer. Our algorithm is based on Laplace-transformed recursive finite-element approximations of exact nonlocal first and second conditional stochastic moment equations of transient flow. It makes it possible to estimate jointly spatial variations in natural log-conductivity (Y = ln K), the parameters of its underlying variogram, and the variance–covariance of these estimates. Log-conductivity is parameterized geostatistically based on measured values at discrete locations and unknown values at discrete "pilot points". Whereas prior values of Y at pilot points are obtained by generalized kriging, posterior estimates at pilot points are obtained through a maximum likelihood fit of computed and measured transient heads. These posterior estimates are then projected onto the computational grid by kriging. Optionally, the maximum likelihood function may include a regularization term reflecting prior information about Y. The relative weight assigned to this term is evaluated separately from other model parameters to avoid bias and instability. We illustrate and explore our algorithm by means of a synthetic example involving a pumping well. We find that, whereas Y and h can be reproduced quite well with parameters estimated on the basis of zero-order mean flow equations, all model quality criteria identify the second-order results as being superior to zero-order results. Identifying the weight of the regularization term and the variogram parameters can be done with much less ambiguity based on second-order than on zero-order results. A second-order model is required to compute predictive error variances of hydraulic head (and flux) a posteriori. Conditioning the inversion jointly on conductivity and hydraulic head data results in less predictive uncertainty than conditioning on conductivity or head data alone.

20.
The performances of kriging, stochastic simulations and sequential self-calibration inversion are assessed when characterizing a non-multiGaussian synthetic 2D braided channel aquifer. The comparison is based on a series of criteria, such as the reproduction of the original reference transmissivity or head fields, but also on the accuracy of flow and transport (capture zone) forecasts when the flow conditions are modified. We observe that the errors remain large even for a dense data network. In addition, some unexpected behaviours are observed when large transmissivity datasets are used. In particular, we observe an increase of the bias with the number of transmissivity data and an increasing uncertainty with the number of head data. This is interpreted as a consequence of the use of an inadequate multiGaussian stochastic model that is not able to reproduce the connectivity of the original field.
