Similar Articles
20 similar articles found.
1.
ABSTRACT

The current state of kriging in subsurface hydrology is critically reviewed. In an application to a region where boreholes already exist, methods for optimally locating additional observation wells for geophysical parameter investigation, and for optimal interpolation aimed at solving the inverse problem, are investigated. The particular case of locating wells for measurements of transmissivity and hydraulic head in the Kennet Valley Chalk aquifer, UK, is examined. Results of interpolating measured hydraulic conductivity values by kriging are compared with results from a standard graphical interpolation package. Reference is also made to the distribution obtained by the inverse method (in which the conductivity distribution is obtained from the head distribution). On the basis of the application, conditional simulation (in which the generated data are consistent both with the measured field values and with the field's statistical structure) is deemed the best approach. It is also found that different methods of interpolation give widely different distributions in the case of hydraulic conductivity. It is suggested that the kriged or conditional map of the transmissivity should serve as the basis for regional discretization, to which corrections via the inverse model may be made.
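As background for the kriging comparisons in item 1, the following is a minimal ordinary-kriging sketch in Python. The exponential variogram model, its parameters and the sample values are hypothetical illustrations, not data or code from the Kennet Valley study.

```python
# A minimal ordinary-kriging sketch; variogram model and samples are hypothetical.
import numpy as np

def exp_variogram(h, sill=1.0, rng=500.0):
    """Exponential variogram model gamma(h)."""
    return sill * (1.0 - np.exp(-h / rng))

def ordinary_kriging(xy_data, z_data, xy_target, sill=1.0, rng=500.0):
    n = len(z_data)
    d = np.linalg.norm(xy_data[:, None, :] - xy_data[None, :, :], axis=-1)
    # Ordinary-kriging system: [Gamma 1; 1^T 0] [w; mu] = [gamma0; 1]
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = exp_variogram(d, sill, rng)
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    d0 = np.linalg.norm(xy_data - xy_target, axis=1)
    b = np.append(exp_variogram(d0, sill, rng), 1.0)
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]
    estimate = w @ z_data
    variance = w @ b[:n] + mu          # kriging (error) variance
    return estimate, variance

# Hypothetical transmissivity samples (x, y) and values
xy = np.array([[0.0, 0.0], [800.0, 100.0], [300.0, 700.0]])
z = np.array([2.1, 3.4, 1.7])
est, var = ordinary_kriging(xy, z, np.array([400.0, 300.0]))
```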

2.
We present a nonlinear stochastic inverse algorithm that allows conditioning estimates of transient hydraulic heads, fluxes and their associated uncertainty on information about hydraulic conductivity (K) and hydraulic head (h) data collected in a randomly heterogeneous confined aquifer. Our algorithm is based on Laplace-transformed recursive finite-element approximations of exact nonlocal first and second conditional stochastic moment equations of transient flow. It makes it possible to jointly estimate spatial variations in natural log-conductivity (Y = ln K), the parameters of its underlying variogram, and the variance–covariance of these estimates. Log-conductivity is parameterized geostatistically based on measured values at discrete locations and unknown values at discrete "pilot points". Whereas prior values of Y at pilot points are obtained by generalized kriging, posterior estimates at pilot points are obtained through a maximum likelihood fit of computed and measured transient heads. These posterior estimates are then projected onto the computational grid by kriging. Optionally, the maximum likelihood function may include a regularization term reflecting prior information about Y. The relative weight assigned to this term is evaluated separately from other model parameters to avoid bias and instability. We illustrate and explore our algorithm by means of a synthetic example involving a pumping well. We find that whereas Y and h can be reproduced quite well with parameters estimated on the basis of zero-order mean flow equations, all model quality criteria identify the second-order results as superior to the zero-order results. The weight of the regularization term and the variogram parameters can be identified with much less ambiguity from second-order than from zero-order results. A second-order model is required to compute predictive error variances of hydraulic head (and flux) a posteriori. Conditioning the inversion jointly on conductivity and hydraulic head data results in lower predictive uncertainty than conditioning on conductivity or head data alone.

3.
Nonlocal moment equations allow one to render deterministically optimum predictions of flow in randomly heterogeneous media and to assess predictive uncertainty conditional on measured values of medium properties. We present a geostatistical inverse algorithm for steady-state flow that makes it possible to further condition such predictions and assessments on measured values of hydraulic head (and/or flux). Our algorithm is based on recursive finite-element approximations of exact first and second conditional moment equations. Hydraulic conductivity is parameterized via universal kriging based on unknown values at pilot points and (optionally) measured values at other discrete locations. Optimum unbiased inverse estimates of natural log hydraulic conductivity, head and flux are obtained by minimizing a residual criterion using the Levenberg-Marquardt algorithm. We illustrate the method for superimposed mean uniform and convergent flows in a bounded two-dimensional domain. Our examples illustrate how conductivity and head data act separately or jointly to reduce parameter estimation errors and model predictive uncertainty. This work was supported in part by NSF/ITR Grant EAR-0110289. The first author was additionally supported by scholarships from CONACYT and Instituto de Investigaciones Electricas of Mexico. Additional support was provided by the European Commission under Contract EVK1-CT-1999-00041 (W-SAHaRA, Stochastic Analysis of Well Head Protection and Risk Assessment).

4.
In geostatistical inverse modeling, hydrogeological parameters, such as hydraulic conductivity, are estimated as spatial fields. Upon discretization this results in several thousand (log-)hydraulic conductivity values to be estimated. Common inversion schemes rely on gradient-based parameter estimation methods which require the sensitivity of all measurements with respect to all parameters. Point-like measurements of steady-state concentration in aquifers are generally not well suited for gradient-based methods, because typical plumes exhibit only a very narrow fringe at which the concentration decreases from its maximal value to zero. Only within this fringe does the sensitivity of concentration with respect to hydraulic conductivity differ significantly from zero. Thus, if point-like measurements of steady-state concentration do not lie in this narrow fringe, their sensitivity with respect to hydraulic conductivity is zero. Observations of concentrations averaged over a larger control volume, by contrast, show a more regular sensitivity pattern. We therefore suggest artificially increasing the sampling volume of steady-state concentration measurements for the evaluation of sensitivities in the early stages of an iterative parameter estimation scheme. We present criteria for the extent of the artificial increase of the sampling volume and for decreasing it as the simulation results converge to the measurements. By this procedure, we achieve high stability in the geostatistical inversion of steady-state concentration measurements. The uncertainty of the estimated parameter fields is evaluated by generating conditional realizations.

5.
The Karhunen-Loeve (KL) decomposition and the polynomial chaos (PC) expansion are elegant and efficient tools for uncertainty propagation in porous media. In recent years, KL/PC-based frameworks have been applied successfully to the subsurface flow problem in several contributions. It has also been shown, however, that the accurate solution of the transport problem with KL/PC techniques is more challenging. We propose a framework that uses KL/PC in combination with sparse Smolyak quadrature for the flow problem only. In a subsequent step, a Lagrangian sampling technique is used for transport. The flow field samples are calculated from a PC expansion derived from the solutions at relatively few quadrature points. To increase the computational efficiency of the PC-based flow field sampling, a new reduction method is applied. For advection-dominated transport scenarios, where a Lagrangian approach is applicable, the proposed PC/Monte Carlo method (PCMCM) is very efficient and avoids the accuracy problems that arise when applying KL/PC techniques to both flow and transport. The applicability of PCMCM is demonstrated for transport simulations in multivariate Gaussian log-conductivity fields that are unconditional and conditional on conductivity measurements.
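The KL decomposition referred to in item 5 can be sketched as an eigen-decomposition of a discretized covariance matrix, with realizations built from the leading modes. The 1-D grid, the exponential covariance parameters and the truncation level below are assumptions for illustration, not values from the paper.

```python
# A minimal Karhunen-Loeve sketch for a 1-D Gaussian log-conductivity field;
# grid, covariance parameters and truncation level are hypothetical.
import numpy as np

x = np.linspace(0.0, 100.0, 200)                 # 1-D grid
var, corr_len = 1.0, 20.0
C = var * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# Eigen-decomposition of the covariance matrix gives the discrete KL modes
lam, phi = np.linalg.eigh(C)
order = np.argsort(lam)[::-1]
lam, phi = lam[order], phi[:, order]

n_kl = 25                                        # truncation: keep leading modes
xi = np.random.standard_normal(n_kl)             # independent N(0,1) weights
Y = phi[:, :n_kl] @ (np.sqrt(lam[:n_kl]) * xi)   # zero-mean log-conductivity realization
```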

6.
Statistical approach to inverse distance interpolation
Inverse distance interpolation is a robust and widely used estimation technique. Variants of kriging are often proposed as statistical techniques with superior mathematical properties such as minimum error variance; however, the robustness and simplicity of inverse distance interpolation motivate its continued use. This paper presents an approach to integrate statistical controls such as minimum error variance into inverse distance interpolation. The optimal exponent and number of data may be calculated globally or locally. Measures of uncertainty and local smoothness may be derived from inverse distance estimates.
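A minimal inverse-distance-weighting sketch follows as a reference point for item 6; the exponent p is the quantity the paper proposes to optimize against statistical criteria such as minimum error variance, a step not reproduced here, and the sample points are hypothetical.

```python
# A minimal inverse-distance-weighting sketch; data and exponent are hypothetical.
import numpy as np

def idw(xy_data, z_data, xy_target, p=2.0, eps=1e-12):
    d = np.linalg.norm(xy_data - xy_target, axis=1)
    if np.any(d < eps):                 # target coincides with a datum
        return z_data[np.argmin(d)]
    w = 1.0 / d**p
    return np.sum(w * z_data) / np.sum(w)

xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # hypothetical samples
z = np.array([1.0, 3.0, 2.0])
print(idw(xy, z, np.array([4.0, 4.0]), p=2.0))
```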

7.
Estimating and mapping the spatial uncertainty of environmental variables is crucial for environmental evaluation and decision making. For a continuous spatial variable, spatial uncertainty may be estimated as the probability of (not) exceeding a threshold value. In this paper, we introduce a Markov chain geostatistical approach for estimating threshold-exceeding probabilities. This approach differs from the conventional indicator approach in its nonlinear estimators (Markov chain random field models) and in its incorporation of interclass dependencies through transiograms. We estimated threshold-exceeding probability maps of clay layer thickness through simulation (i.e., using a number of realizations generated by Markov chain sequential simulation) and through interpolation (i.e., direct conditional probability estimation using only the indicator values of sample data). To evaluate the approach, we also estimated those probability maps using sequential indicator simulation and indicator kriging interpolation. Our results show that (i) the Markov chain approach provides an effective alternative for spatial uncertainty assessment of environmental spatial variables, and the probability maps from this approach are more reasonable than those from conventional indicator geostatistics, and (ii) the probability maps estimated through sequential simulation are more realistic than those obtained through interpolation, because the latter display some uneven transitions caused by the spatial structure of the sample data.
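For the simulation route described in item 7, a threshold-exceedance probability map is simply the ensemble average of indicator-coded realizations. The sketch below shows only that bookkeeping, using placeholder Gaussian fields rather than Markov chain sequential simulations.

```python
# A minimal sketch of an exceedance-probability map from simulated realizations;
# the realizations here are placeholders, not Markov chain simulations.
import numpy as np

n_real, ny, nx = 100, 50, 50
threshold = 2.0
realizations = np.random.standard_normal((n_real, ny, nx)) + 2.0  # placeholder fields

indicator = (realizations > threshold).astype(float)    # 1 where the threshold is exceeded
p_exceed = indicator.mean(axis=0)                       # probability map, shape (ny, nx)
```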

8.
Being a non-linear method based on a rigorous formalism and an efficient processing of various information sources, the Bayesian maximum entropy (BME) approach has proven to be a very powerful method in the context of continuous spatial random fields, providing much more satisfactory estimates than those obtained from traditional linear geostatistics (i.e., the various kriging techniques). This paper presents an extension of the BME formalism to categorical spatial random fields. In the first part of the paper, the indicator kriging and cokriging methods are briefly presented and discussed, with special emphasis on their inherent limitations from both the theoretical and the practical point of view. The second part presents the theoretical development of the BME approach for the case of categorical variables. The three-stage procedure is explained, and the formulations for obtaining prior joint distributions and computing posterior conditional distributions are given for various typical cases. The last part of the paper consists of a simulation study assessing the performance of BME against the traditional indicator (co)kriging techniques. The results of these simulations highlight the theoretical limitations of the indicator approach (negative probability estimates, probability distributions that do not sum to one, etc.) as well as the much better performance of the BME approach. BME estimates are very close to the theoretical conditional probabilities, which can be computed from the stated simulation hypotheses.

9.
This study introduces Bayesian model averaging (BMA) to deal with model structure uncertainty in groundwater management decisions. A robust optimized policy should account for uncertainty in the imprecise model structure as well as model parameter uncertainty. Because of the limited amount of groundwater head and hydraulic conductivity data, multiple simulation models are developed based on different head boundary condition values and different semivariogram models of hydraulic conductivity. Instead of selecting the single best simulation model, a variance-window-based BMA method is introduced into the management model to use all simulation models to predict chloride concentration. Given the different semivariogram models, the spatially correlated hydraulic conductivity distributions are estimated by the generalized parameterization (GP) method, which combines Voronoi zones with ordinary kriging (OK) estimates. The BMA model weights are estimated by the Bayesian information criterion (BIC) and the variance window in the maximum likelihood estimation. The simulation models are then weighted to predict chloride concentrations within the constraints of the management model. The methodology is applied to manage saltwater intrusion in the "1,500-foot" sand aquifer in the Baton Rouge area, Louisiana. The management model aims to obtain optimal joint operation of the hydraulic barrier system and the saltwater extraction system to mitigate saltwater intrusion. A genetic algorithm (GA) is used to obtain the optimal injection and extraction policies. With the BMA predictions, higher injection and pumping rates are needed to cover constraint violations that would not arise if a single best model were used.
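Item 9 weights the competing simulation models through BIC within a BMA framework. The sketch below shows plain BIC-based weighting, w_k ∝ exp(-0.5 ΔBIC_k); the variance-window modification described in the paper is not reproduced, and all numbers are hypothetical.

```python
# A minimal sketch of BIC-based BMA weights; BIC values and predictions are hypothetical.
import numpy as np

bic = np.array([412.3, 415.1, 419.8])          # one BIC value per simulation model
delta = bic - bic.min()
w = np.exp(-0.5 * delta)
w /= w.sum()                                   # BMA model weights

predictions = np.array([180.0, 150.0, 210.0])  # e.g. predicted chloride concentration (mg/L)
bma_prediction = np.sum(w * predictions)       # model-averaged prediction
```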

10.
Unconditional stochastic studies of groundwater flow and solute transport in a nonstationary conductivity field show that the standard deviations of the hydraulic head and solute flux are very large in comparison with their mean values (Zhang et al. in Water Resour Res 36:2107–2120, 2000; Wu et al. in J Hydrol 275:208–228, 2003; Hu et al. in Adv Water Resour 26:513–531, 2003). In this study, we develop a numerical method of moments that conditions on measurements of hydraulic conductivity and head to reduce the variances of the head and the solute flux. A Lagrangian perturbation method is applied to develop the framework for solute transport in a nonstationary flow field. Since the derived moment equations are too complicated to solve analytically, a numerical finite-difference method is implemented to obtain the solutions. Instead of using an unconditional conductivity field as input for calculating the groundwater velocity, we combine a geostatistical method and a method of moments for flow to conditionally simulate the distributions of head and velocity based on measurements of hydraulic conductivity and head at selected points. The developed theory is applied in several case studies to investigate the influence of measurements of hydraulic conductivity and/or hydraulic head on the variances of the predicted head and solute flux in nonstationary flow fields. The results show that the conditional calculation significantly reduces the head variance. Since the hydraulic head measurement points are treated as interior boundary (Dirichlet) conditions, conditioning on both the hydraulic conductivity and the head measurements reduces the head variance much more than conditioning on conductivity measurements alone. For the solute flux, however, the variance reduction achieved by conditioning is less significant.

11.
It is common in geostatistics to use the variogram to describe the spatial dependence structure and to use kriging as the spatial prediction methodology. Both methods are sensitive to outlying observations and are strongly influenced by the marginal distribution of the underlying random field. Hence, they lead to unreliable results when applied to extreme-value or multimodal data. As an alternative to traditional spatial modeling and interpolation, we consider the use of copula functions. This paper extends existing copula-based geostatistical models. We show how location-dependent covariates, e.g. a spatial trend, can be accounted for in spatial copula models. Furthermore, we introduce geostatistical copula-based models that are able to deal with random fields having discrete marginal distributions. We propose three different copula-based spatial interpolation methods. By exploiting the relationship between bivariate copulas and indicator covariances, we present indicator kriging and disjunctive kriging. As a second method we present simple kriging of the rank-transformed data. The third method is a plug-in prediction that generalizes the frequently applied trans-Gaussian kriging. Finally, we report on the results obtained for the so-called Helicopter data set, which contains extreme radioactivity measurements.

12.
Stauffer F. Ground Water, 2005, 43(6): 843–849.
A method is proposed to estimate the uncertainty of the location of pathlines in two-dimensional, steady-state confined or unconfined flow in aquifers due to the uncertainty of the spatially variable unconditional hydraulic conductivity or transmissivity field. The method is based on concepts of the semianalytical first-order theory given in Stauffer et al. (2002, 2004), which provides estimates of the lateral second moment (variance) of the location of a moving particle. Here, the method is reformulated to account for nonuniform recharge and nonuniform aquifer thickness. One prominent application is estimating the uncertainty of the catchment of a pumping well by considering the boundary pathlines starting at a stagnation point. The method considers the advective transport of particles based on the velocity field; in the case of a well catchment, backtracking is applied using the reversed velocity field. Spatial variability of hydraulic conductivity or transmissivity is represented by an isotropic exponential covariance function of the log-transformed values, with parameters describing the variance and correlation length. The method allows postprocessing of results from groundwater models with respect to uncertainty estimation. The code PPPath, developed for this purpose, provides postprocessing of pathline computations under PMWIN, which is based on MODFLOW. To test the methodology, it was applied to results from Monte Carlo simulations for catchments of pumping wells, and the results correspond well. Practical applications illustrate the use of the method in aquifers.

13.
Gaussian conditional realizations are routinely used for risk assessment and planning in a variety of Earth science applications. Assuming a Gaussian random field, conditional realizations can be obtained by first creating unconditional realizations that are then post-conditioned by kriging. Many efficient algorithms are available for the first step, so the bottleneck resides in the second step. Instead of doing the conditional simulations with the desired covariance (F approach) or with a tapered covariance (T approach), we propose to use the tapered covariance only in the conditioning step (half-taper or HT approach). This speeds up the computations and reduces memory requirements for the conditioning step, while keeping the correct short-scale variations in the realizations. A criterion based on the mean square error of the simulation is derived to help anticipate the similarity of HT to F. Moreover, an index is used to predict the sparsity of the kriging matrix for the conditioning step. Guidelines for the choice of the taper function are discussed. The distributions of a series of 1D, 2D and 3D scalar response functions are compared for the F, T and HT approaches. The distributions obtained with HT are much more similar to those of F than the ones obtained with T.
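The post-conditioning step that item 13 accelerates can be written as z_cond(x) = z_uncond(x) + [kriging of the data](x) - [kriging of the unconditional values at the data locations](x). The 1-D sketch below uses simple kriging with an assumed exponential covariance; the F, T and HT covariance variants compared in the paper are not reproduced.

```python
# A minimal sketch of post-conditioning an unconditional realization by kriging;
# covariance model and all numbers are hypothetical.
import numpy as np

def cov(h, sill=1.0, rng=10.0):
    return sill * np.exp(-h / rng)              # exponential covariance

def simple_krige(x_data, z_data, x_grid):
    """Zero-mean simple kriging estimate on a 1-D grid."""
    C = cov(np.abs(x_data[:, None] - x_data[None, :]))
    c0 = cov(np.abs(x_grid[:, None] - x_data[None, :]))
    w = np.linalg.solve(C, z_data)              # C^{-1} z
    return c0 @ w

x_grid = np.linspace(0.0, 50.0, 101)
x_data = np.array([5.0, 20.0, 40.0])
z_data = np.array([0.8, -0.3, 1.1])             # conditioning data (zero mean assumed)

z_uncond = np.random.standard_normal(x_grid.size) * 0.5   # placeholder unconditional field
idx = np.searchsorted(x_grid, x_data)                     # grid nodes coinciding with the data
z_cond = z_uncond + simple_krige(x_data, z_data, x_grid) \
                  - simple_krige(x_data, z_uncond[idx], x_grid)
```

By construction the conditional field honours the data at the data locations while keeping the unconditional field's variability away from them.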

14.
The multivariate Gaussian random function model is commonly used in stochastic hydrogeology to model the spatial variability of log-conductivity. The multi-Gaussian model is attractive because it is fully characterized by an expected value and a covariance function or matrix, hence its mathematical simplicity and easy inference. Field data may support a Gaussian univariate distribution for log hydraulic conductivity, but, in general, there are not enough field data to support a multi-Gaussian distribution. A univariate Gaussian distribution does not imply a multi-Gaussian model. In fact, many multivariate models can share the same Gaussian histogram and covariance function, yet differ in their patterns of spatial continuity at different threshold values. Hence the decision to use a multi-Gaussian model to represent the uncertainty associated with the spatial heterogeneity of log-conductivity is not data-based. Of greatest concern is the fact that a multi-Gaussian model implies minimal spatial correlation of extreme values, a feature critical for mass transport and one that may contradict some geological settings, e.g. channeling. The possibility that high conductivity values are spatially correlated should not be discarded by adopting a congenial model just because a shortage of data prevents refuting it. In this study, three alternatives to the multi-Gaussian model, all sharing the same Gaussian histogram and the same covariance function but with different continuity patterns for extreme values, were considered to model the spatial variability of log-conductivity. The three alternative models, plus the traditional multi-Gaussian model, are used to perform Monte Carlo analyses of groundwater travel times from a hypothetical nuclear repository to the ground surface through a synthetic formation similar to the Finnsjön site in Sweden. The results show that the groundwater travel times predicted by the multi-Gaussian model could be ten times longer than those predicted by the other models, and the probabilities of very short travel times could be severely underestimated using the multi-Gaussian model. Consequently, if field data are not sufficient to determine the higher-order moments necessary to validate the multi-Gaussian model (the usual situation in practice), alternative models ought to be considered.

15.
Interpolation techniques for spatial data have been applied frequently in various fields of the geosciences. Although most conventional interpolation methods assume that first- and second-order statistics suffice to characterize random fields, researchers have realized that these methods cannot always provide reliable interpolation results, since geological and environmental phenomena tend to be very complex, presenting non-Gaussian distributions and/or non-linear inter-variable relationships. This paper proposes a new approach to the interpolation of spatial data that can be applied with great flexibility. Suitable cross-variable higher-order spatial statistics are developed to measure the spatial relationship between the random variable at an unsampled location and those in its neighbourhood. Given the computed cross-variable higher-order spatial statistics, the conditional probability density function is approximated via polynomial expansions and then used to determine the interpolated value at the unsampled location as an expectation. In addition, the uncertainty associated with the interpolation is quantified by constructing prediction intervals for the interpolated values. The proposed method is applied to a mineral deposit dataset, and the results demonstrate that it outperforms kriging methods in uncertainty quantification. The introduction of the cross-variable higher-order spatial statistics noticeably improves the quality of the interpolation, since it enriches the information that can be extracted from the observed data; this benefit is substantial when working with data that are sparse or have non-trivial dependence structures.

16.
3D groundwater flow at the fractured site of Äspö (Sweden) is simulated. The aim was to characterise the site as adequately as possible and to provide measures of the uncertainty of the estimates. A stochastic continuum model is used to simulate groundwater flow both in the major fracture planes and in the background rock. The positions of the major fracture planes are incorporated deterministically in the model, and the statistical distribution of the hydraulic conductivity is modelled with the concept of multiple statistical populations; each fracture plane is an independent statistical population. Multiple equally likely realisations are built that are conditioned to geological information on the positions of the major fracture planes, hydraulic conductivity data, steady-state head data and head responses to six different interference tests. The experimental information could be reproduced closely. The results of the conditioning are analysed in terms of the ensemble-averaged fracture plane conductivities, the ensemble variance of the average fracture plane conductivities and the statistical distribution of the hydraulic conductivity in the fracture planes. These results are evaluated after each conditioning stage. It is found that conditioning to hydraulic head data increases the hydraulic conductivity variance, while the statistical distribution of log hydraulic conductivity, initially Gaussian, becomes more skewed for many of the fracture planes in most of the realisations.

17.
The ensemble Kalman filter (EnKF) is a commonly used real-time data assimilation algorithm in various disciplines. Here, the EnKF is applied, in a hydrogeological context, to condition log-conductivity realizations on log-conductivity and transient piezometric head data. In this case, the state vector is made up of log-conductivities and piezometric heads over a discretized aquifer domain, the forecast model is a groundwater flow numerical model, and the transient piezometric head data are sequentially assimilated to update the state vector. It is well known that all Kalman filters perform optimally for linear forecast models and a multi-Gaussian-distributed state vector. Of the different Kalman filters, the EnKF provides a robust solution for addressing non-linearities; however, it does not handle non-Gaussian state-vector distributions well. In the standard EnKF, as time passes and more state observations are assimilated, the distributions become closer to Gaussian, even if the initial ones are clearly non-Gaussian. A new method is proposed that transforms the original state vector into a new vector that is univariate Gaussian at all times. Back-transforming the vector after the filtering ensures that the initial non-Gaussian univariate distributions of the state-vector components are preserved throughout. The proposed method is based on normal-score transforming each variable for all locations and all time steps. This new method, termed the normal-score ensemble Kalman filter (NS-EnKF), is demonstrated in a synthetic bimodal aquifer resembling a fluvial deposit and compared to the standard EnKF. The proposed method performs better than the standard EnKF in all aspects analyzed (log-conductivity characterization and flow and transport predictions).
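The normal-score transform underlying the NS-EnKF of item 17 maps each state-vector component through its empirical CDF onto standard-normal quantiles before the Kalman update and back afterwards. The sketch below shows only that transform pair on a placeholder ensemble; the EnKF update itself is omitted.

```python
# A minimal sketch of the normal-score transform and back-transform;
# the ensemble values are placeholders.
import numpy as np
from scipy.stats import norm

def normal_score(values):
    """Map an ensemble of one state-vector component to N(0,1) scores."""
    ranks = np.argsort(np.argsort(values))                 # 0 .. n-1
    p = (ranks + 0.5) / len(values)                        # plotting positions in (0,1)
    return norm.ppf(p)

def back_transform(scores, reference_values):
    """Map N(0,1) scores back by interpolating the reference empirical quantiles."""
    n = len(reference_values)
    p_ref = (np.arange(n) + 0.5) / n
    return np.interp(norm.cdf(scores), p_ref, np.sort(reference_values))

ensemble = np.random.lognormal(mean=0.0, sigma=1.0, size=200)  # skewed placeholder ensemble
z = normal_score(ensemble)             # Gaussian scores fed to the Kalman update
x_back = back_transform(z, ensemble)   # values recovered after the update
```

In the NS-EnKF it is these Gaussian scores, not the raw values, that enter the standard EnKF update equations.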

18.
The goal of quantile regression is to estimate conditional quantiles for specified quantile probabilities using linear or nonlinear regression equations. These estimates are prone to "quantile crossing", where regression predictions for different quantile probabilities do not increase as the probability increases. In the context of the environmental sciences, this could, for example, lead to estimates of the magnitude of a 10-year return period rainstorm that exceed the 20-year storm, or similar nonphysical results. This problem, as well as the potential for overfitting, is exacerbated for small to moderate sample sizes and for nonlinear quantile regression models. As a remedy, this study introduces a novel nonlinear quantile regression model, the monotone composite quantile regression neural network (MCQRNN), that (1) simultaneously estimates multiple non-crossing, nonlinear conditional quantile functions; (2) allows for optional monotonicity, positivity/non-negativity, and generalized additive model constraints; and (3) can be adapted to estimate standard least-squares regression and non-crossing expectile regression functions. First, the MCQRNN model is evaluated on synthetic data from multiple functions and error distributions using Monte Carlo simulations. MCQRNN outperforms the benchmark models, especially for non-normal error distributions. Next, the MCQRNN model is applied to real-world climate data by estimating rainfall Intensity–Duration–Frequency (IDF) curves at locations in Canada. IDF curves summarize the relationship between the intensity and occurrence frequency of extreme rainfall over storm durations ranging from minutes to a day. Because annual maximum rainfall intensity is a non-negative quantity that should increase monotonically as occurrence frequency and storm duration decrease, monotonicity and non-negativity are key constraints in IDF curve estimation. In comparison with standard QRNN models, the ability of the MCQRNN model to incorporate these constraints, in addition to non-crossing, leads to more robust and realistic estimates of extreme rainfall.
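Quantile regression models such as the QRNN/MCQRNN of item 18 are trained by minimizing the pinball (check) loss. The sketch below shows that loss and a simple crossing check on hypothetical quantile predictions; it is not the MCQRNN architecture.

```python
# A minimal sketch of the pinball (quantile) loss and a quantile-crossing check;
# observations and quantile predictions are hypothetical.
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Average check-function loss for quantile probability tau."""
    e = y_true - y_pred
    return np.mean(np.maximum(tau * e, (tau - 1.0) * e))

y = np.array([4.0, 7.0, 3.0, 9.0])
preds = {0.1: np.array([3.0, 5.0, 2.5, 6.0]),   # hypothetical quantile predictions
         0.5: np.array([4.0, 6.5, 3.0, 8.0]),
         0.9: np.array([5.5, 8.0, 4.5, 10.0])}

for tau in (0.1, 0.5, 0.9):
    print(tau, pinball_loss(y, preds[tau], tau))

# Quantile crossing occurs wherever a higher-tau prediction falls below a lower-tau one
crossing = np.any(preds[0.9] < preds[0.5]) or np.any(preds[0.5] < preds[0.1])
```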

19.
Radar-based estimates of rainfall are affected by many sources of uncertainty, which propagate through the hydrological model when radar rainfall estimates are used as input or initial conditions. An elegant way to quantify these uncertainties is to model the empirical relationship between radar measurements and rain gauge observations (the ‘ground reference’). However, most current studies use a fixed and uniform model to represent the uncertainty of radar rainfall, without considering its variation under different synoptic regimes. Wind is a typical such weather factor: it not only induces error in rain gauge measurements but also causes the raindrops observed by weather radar to drift before they reach the ground. For this reason, as a first attempt, this study introduces the wind field into the uncertainty model and builds radar rainfall uncertainty models under different wind conditions. We separate the original dataset into three subsamples according to wind speed, denoted WDI (0–2 m/s), WDII (2–4 m/s) and WDIII (>4 m/s). The multivariate distributed ensemble generator is introduced and established for each subsample. Thirty typical events (10 in each wind range) are selected to explore the behaviour of the uncertainty under different wind ranges. At each time step, 500 ensemble members are generated, and the 5th to 95th percentile values are used to produce the uncertainty bands. Two basic features of the uncertainty bands, namely dispersion and ensemble bias, increase significantly with wind speed, demonstrating that wind speed plays a considerable role in the behaviour of the uncertainty band. On the basis of this evidence, we conclude that a radar rainfall uncertainty model established under different wind conditions should represent the radar rainfall uncertainty more realistically. This study is only a start in incorporating synoptic regimes into rainfall uncertainty analysis, and a great deal more effort is still needed to build a realistic and comprehensive uncertainty model for radar rainfall data. Copyright © 2014 John Wiley & Sons, Ltd.

20.
In this paper, we describe carefully conducted numerical experiments in which a dense salt solution vertically displaces fresh water in a stable manner. The two-dimensional porous media are weakly heterogeneous at a small scale. The purpose of these simulations, conducted for a range of density differences, is to obtain accurate concentration profiles that can be used to validate nonlinear models for high-concentration-gradient dispersion. In this part we focus on convergence of the computations, in both the numerical and the statistical sense, to ensure that the uncertainty in the results is small enough. Concentration variances are computed, which give estimates of the uncertainty in local concentration values. These local variations decrease with increasing density contrast. For tracer transport, the obtained longitudinal dispersivities are in accordance with analytical findings. In the case of high density contrasts, stabilizing gravity forces counteract the growth of dispersive fingers, decreasing the effective width of the transition zone. For small log-permeability variances, the observed decrease of the apparent dispersivity is in agreement with laboratory results for homogeneous columns.
