Similar Articles

20 similar articles retrieved.
1.

Characterization of heterogeneity at the field scale generally requires detailed aquifer properties such as transmissivity and hydraulic head. An accurate delineation of these properties is expensive and time consuming and, for many if not most groundwater systems, is not practical. As an alternative, a stochastic representation based on random fields is presented in this paper. Specifically, an iterative stochastic conditional simulation approach was applied to a hypothetical, highly heterogeneous, pre-designed aquifer system. The approach is similar to the classical co-kriging technique; it uses a linear estimator that depends on the covariance functions of transmissivity (T) and hydraulic head (h), as well as their cross-covariances. A linearized flow equation, together with a conditional random field generator, constitutes the iterative process of the conditional simulation. One hundred equally likely realizations of transmissivity fields with pre-specified geostatistical parameters were generated and conditioned to both limited transmissivity and head data. The approach yielded conditioned flow paths and travel-time distributions under different degrees of aquifer heterogeneity. It worked well for fields exhibiting small variances; for random fields exhibiting large variances (greater than 1.0), the iterative procedure was required. The results show that, as the variance of ln[T] increases, the flow paths tend to diverge, resulting in a wide spectrum of flow conditions, with no directly discernible relationship between the degree of heterogeneity and travel time. The results also indicate that large errors may arise when particle travel times in a heterogeneous medium are estimated by approximating it as an equivalent homogeneous medium.
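The conditioning idea in this abstract, an unconditional realization corrected by kriging the data residuals, can be sketched in a few lines. This is a minimal illustration under assumed settings (a 1-D grid, an exponential covariance, and made-up ln-T observations); the paper's iterative co-kriging of transmissivity and head is considerably more involved.

```python
import numpy as np

def exp_cov(x1, x2, var=1.0, corr_len=10.0):
    """Exponential covariance between two sets of 1-D locations."""
    return var * np.exp(-np.abs(x1[:, None] - x2[None, :]) / corr_len)

rng = np.random.default_rng(0)
x = np.arange(100.0)                      # simulation grid
obs_idx = np.array([10, 50, 90])          # conditioning locations
z_obs = np.array([0.5, -1.0, 0.2])        # hypothetical ln-T observations

# Unconditional Gaussian realization via Cholesky factorization
C = exp_cov(x, x)
L = np.linalg.cholesky(C + 1e-10 * np.eye(len(x)))
z_unc = L @ rng.standard_normal(len(x))

# Simple-kriging weights for every grid point
C_oo = exp_cov(x[obs_idx], x[obs_idx])    # data-to-data covariance
C_og = exp_cov(x[obs_idx], x)             # data-to-grid covariance
W = np.linalg.solve(C_oo, C_og)           # shape (n_obs, n_grid)

# Condition by kriging the data residuals onto the grid
z_cond = z_unc + W.T @ (z_obs - z_unc[obs_idx])
```

The defining property of conditioning is that the conditioned field honors the data exactly at the observation locations, while reverting to the unconditional realization far from them.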

2.
The self-calibrated method has been extended for the generation of equally likely realizations of transmissivity and storativity conditional to transmissivity and storativity data and to steady-state and transient hydraulic head data. Conditioning to transmissivity and storativity data is achieved by means of standard geostatistical co-simulation algorithms, whereas conditioning to hydraulic head data, given its non-linear relation to transmissivity and storativity, is achieved through non-linear optimization, similar to standard inverse algorithms. The algorithm is demonstrated in a synthetic study based on data from the WIPP site in New Mexico. Seven alternative scenarios are investigated, generating 100 realizations for each of them. The differences among the scenarios range from the number of conditioning data, to their spatial configuration, to the pumping strategies at the pumping wells. In all scenarios, the self-calibrated algorithm is able to generate transmissivity–storativity realization couples conditional to all the sample data. For the specific case studied here the results are not surprising. Of the piezometric head data, the steady-state values are the most consequential for transmissivity characterization. Conditioning to transient head data only introduces local adjustments on the transmissivity fields and serves to improve the characterization of the storativity fields.

3.
We focus on the Bayesian estimation of strongly heterogeneous transmissivity fields conditional on data sampled at a set of locations in an aquifer. Log-transmissivity, Y, is modeled as a stochastic Gaussian process, parameterized through a truncated Karhunen–Loève (KL) expansion. We consider Y fields characterized by a correlation scale that is short compared to the size of the observed domain. Such systems are associated with a KL decomposition that still requires a large number of parameters, which hampers the efficiency of Bayesian estimation of the underlying stochastic field. The distinctive aim of this work is to present an efficient approach for the stochastic inverse modeling of fully saturated groundwater flow in these strongly heterogeneous domains. The methodology is grounded on the construction of an optimal sparse KL decomposition, achieved by retaining only a limited set of modes in the expansion. Mode selection is driven by model selection criteria and is conditional on available data of hydraulic heads and (optionally) Y. Bayesian inversion of the optimal sparse KL expansion is then performed using Markov chain Monte Carlo (MCMC) samplers. As a test bed, we illustrate our approach through a suite of computational examples in which noisy head and Y values are sampled from a randomly generated reference system. Our findings suggest that the proposed methodology yields a globally satisfactory inversion of the stochastic head and Y fields. Comparison of reference values against the corresponding MCMC predictive distributions suggests that observed values are well reproduced in a probabilistic sense. In a few cases, reference values at some unsampled locations (typically far from measurements) are not captured by the posterior probability distributions. In these cases, the quality of the estimation could be improved, e.g., by increasing the number of measurements and/or the threshold for the selection of KL modes.
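A truncated KL expansion of the kind described above can be sketched on a discrete grid, where it reduces to an eigendecomposition of the covariance matrix. The grid, the exponential covariance model, and the 95% energy cutoff below are illustrative assumptions, not the paper's model-selection criteria.

```python
import numpy as np

def exp_cov(x, var=1.0, corr_len=0.1):
    """Exponential covariance matrix on a 1-D grid."""
    return var * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

x = np.linspace(0.0, 1.0, 200)
C = exp_cov(x)

# Discrete KL decomposition: eigenpairs of the covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
eigvals = np.maximum(eigvals[order], 0.0)   # guard against round-off
eigvecs = eigvecs[:, order]

# Truncate: keep the fewest modes capturing 95% of the variance
energy = np.cumsum(eigvals) / np.sum(eigvals)
n_modes = int(np.searchsorted(energy, 0.95)) + 1

# One realization of the truncated field from i.i.d. N(0,1) coefficients
rng = np.random.default_rng(1)
xi = rng.standard_normal(n_modes)
Y = eigvecs[:, :n_modes] @ (np.sqrt(eigvals[:n_modes]) * xi)
```

The vector `xi` is what a Bayesian inversion would sample with MCMC; truncating the expansion is what keeps its dimension manageable.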

4.
A common approach to the performance assessment of radionuclide migration from a nuclear waste repository is the use of Monte Carlo techniques. Multiple realizations of the parameters controlling radionuclide transport are generated, and each realization is used in a numerical model to provide a transport prediction. The statistical analysis of all transport predictions is then used in the performance assessment. To reduce the uncertainty in the predictions, it is necessary to incorporate as much information as possible in the generation of the parameter fields. In this regard, this paper focuses on the impact that conditioning the transmissivity fields to geophysical data and/or piezometric head data has on convective transport predictions in a two-dimensional heterogeneous formation. The Walker Lake data set is used to produce a heterogeneous log-transmissivity field with distinct non-Gaussian characteristics, together with a secondary variable that represents some geophysical attribute. In addition, the piezometric head field resulting from the steady-state solution of the groundwater flow equation is computed. These three reference fields are sampled to mimic a sampling campaign. A series of Monte Carlo exercises using different combinations of sampled data then shows the relative worth of secondary data with respect to piezometric head data for transport predictions. The analysis shows that secondary data make it possible to reproduce the main spatial patterns of the reference transmissivity field and improve the mass transport predictions with respect to the case in which only transmissivity data are used. However, a few piezometric head measurements can be equally effective for the characterization of transport predictions.


6.
We present a geostatistically based inverse model for characterizing heterogeneity in the parameters of unsaturated hydraulic conductivity for three-dimensional flow. Pressure and moisture content are related to perturbations in hydraulic parameters through cross-covariances, which are calculated to first order. The sensitivities needed for the covariance calculations are derived using the adjoint-state sensitivity method. Approximations of the conditional mean parameter fields are then obtained from the cokriging estimator. The correlation between parameters and pressure or moisture-content perturbations is strongly dependent on the mean pressure or moisture content. High correlation between parameters and pressure data was obtained under saturated or near-saturated flow conditions, providing accurate estimation of the saturated hydraulic conductivity, while moisture content measurements provided accurate estimation of the pore-size distribution parameter under unsaturated flow conditions.


8.
Estimating and mapping the spatial uncertainty of environmental variables is crucial for environmental evaluation and decision making. For a continuous spatial variable, estimation of spatial uncertainty may be conducted in the form of estimating the probability of (not) exceeding a threshold value. In this paper, we introduce a Markov chain geostatistical approach for estimating threshold-exceeding probabilities. The differences of this approach from the conventional indicator approach lie in its nonlinear estimators (Markov chain random field models) and its incorporation of interclass dependencies through transiograms. We estimated threshold-exceeding probability maps of clay layer thickness through simulation (i.e., using a number of realizations produced by Markov chain sequential simulation) and through interpolation (i.e., direct conditional probability estimation using only the indicator values of sample data). To evaluate the approach, we also estimated those probability maps using sequential indicator simulation and indicator kriging interpolation. Our results show that (i) the Markov chain approach provides an effective alternative for spatial uncertainty assessment of environmental spatial variables, and the probability maps from this approach are more reasonable than those from conventional indicator geostatistics, and (ii) the probability maps estimated through sequential simulation are more realistic than those obtained through interpolation, because the latter display uneven transitions caused by the spatial structure of the sample data.
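Regardless of which algorithm produces the realizations, the threshold-exceeding probability map itself is a cell-wise frequency over the realization ensemble. A minimal sketch, with white-noise stand-ins for the Markov-chain-simulated thickness fields:

```python
import numpy as np

rng = np.random.default_rng(2)
n_real, ny, nx = 200, 20, 20

# Stand-in ensemble of clay-layer thickness realizations (white noise here;
# a real study would use Markov chain sequential simulation instead)
realizations = 2.0 + rng.standard_normal((n_real, ny, nx))

# Cell-wise probability of exceeding a hypothetical 2.5 m threshold
threshold = 2.5
p_exceed = (realizations > threshold).mean(axis=0)
```

Swapping the simulator (Markov chain, sequential indicator, etc.) changes the realizations fed in, but this post-processing step is common to all of them.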

9.
In studies involving environmental risk assessment, Gaussian random field generators are often used to yield realizations of a Gaussian random field, from which realizations of the non-Gaussian target random field are obtained by an inverse-normal transformation. Such a simulation process requires a set of observed data for estimating the empirical cumulative distribution function (ECDF) and covariance function of the random field under investigation. However, if realizations of a non-Gaussian random field with a specific probability density and covariance function are needed, such an observed-data-based simulation process will not work when no observed data are available. In this paper we present details of a gamma random field simulation approach that does not require a set of observed data. A key element of the approach lies in the theoretical relationship between the covariance functions of a gamma random field and its corresponding standard normal random field. Through a set of devised simulation scenarios, the proposed technique is shown to be capable of generating realizations of the given gamma random fields.
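The inverse-normal transform step that this abstract builds on can be sketched as follows: generate a correlated standard normal field, then map it through the normal CDF and the gamma quantile function. The covariance and gamma parameters below are arbitrary; the paper's contribution is the covariance relationship that tells you which normal-field covariance yields a prescribed gamma-field covariance, which this sketch does not implement.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = np.arange(200.0)

# Correlated standard normal field (exponential covariance, range 20)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 20.0)
L = np.linalg.cholesky(C + 1e-10 * np.eye(len(x)))
g = L @ rng.standard_normal(len(x))

# Inverse-normal (anamorphosis) transform to gamma marginals
shape, scale = 2.0, 1.5   # arbitrary gamma parameters
y = stats.gamma.ppf(stats.norm.cdf(g), a=shape, scale=scale)
```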

10.
Gaussian conditional realizations are routinely used for risk assessment and planning in a variety of Earth science applications. Assuming a Gaussian random field, conditional realizations can be obtained by first creating unconditional realizations that are then post-conditioned by kriging. Many efficient algorithms are available for the first step, so the bottleneck resides in the second step. Instead of performing the conditional simulations with the desired covariance (F approach) or with a tapered covariance (T approach), we propose to use the taper covariance only in the conditioning step (half-taper, or HT, approach). This speeds up the computations and reduces the memory requirements of the conditioning step while keeping the right short-scale variations in the realizations. A criterion based on the mean square error of the simulation is derived to help anticipate the similarity of HT to F. Moreover, an index is used to predict the sparsity of the kriging matrix for the conditioning step. Some guidance on the choice of the taper function is provided. The distributions of a series of 1D, 2D and 3D scalar response functions are compared for the F, T and HT approaches. The distributions obtained indicate a much better similarity to F with HT than with T.
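The tapering idea rests on multiplying the covariance by a compactly supported, positive definite function, which zeroes most entries of the kriging matrix. A small sketch with an exponential covariance and a spherical taper, both chosen here purely for illustration:

```python
import numpy as np

x = np.arange(500.0)
h = np.abs(x[:, None] - x[None, :])     # pairwise separation distances

C = np.exp(-h / 50.0)                   # desired exponential covariance (F)

# Spherical taper: positive definite and exactly zero beyond its range
taper_range = 60.0
u = np.minimum(h / taper_range, 1.0)
taper = 1.0 - 1.5 * u + 0.5 * u**3
taper[h >= taper_range] = 0.0

C_tapered = C * taper                   # product of two valid covariances

sparsity = np.mean(C_tapered == 0.0)    # fraction of exactly-zero entries
```

Because the product of two positive definite covariances is positive definite, `C_tapered` is still a valid covariance, and its sparsity is what makes the conditioning kriging system cheap to solve and store.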

11.
The use of data to condition single random fields has a well-established history. However, the joint use of data from several cross-correlated random fields is not as well developed. For example, the use of both transmissivity and head data in a steady-state 2-d stochastic flow problem is essentially an inverse problem that is very important for both flow and transport predictions. This problem is addressed here using a combination of numerical simulation and analytical methods, and its application is illustrated. The type of information conveyed by the different data categories is explored. The results are especially interesting in that head and transmissivity each give different information: head values appear to constrain the geometry of the flow paths, while transmissivity data yield information about travel times. The linearized model is expanded into an iterative procedure, and the true conditional distribution at several locations is compared with the iterative solution.

The problem mentioned above is one with a special transfer function specified by the flow equation. In the second part of the paper, a Fast Fourier Transform method for the generation and conditioning of two or more random fields is introduced. This procedure is simple to implement, fast, and very flexible.
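The FFT-based generation mentioned in the second part can be sketched in 1-D via circulant embedding: embed the covariance in a circulant matrix, whose eigenvalues come from a single FFT, and color complex white noise with their square roots. This is a generic sketch of the unconditional-generation step only; the paper's method also handles conditioning and multiple cross-correlated fields.

```python
import numpy as np

def fft_gaussian_field(n, corr_len, rng):
    """Stationary standard Gaussian field on a 1-D grid via circulant embedding."""
    m = 2 * n                                          # embedding size
    d = np.arange(m)
    h = np.minimum(d, m - d)                           # circular distances
    c = np.exp(-h / corr_len)                          # first row of circulant covariance
    lam = np.maximum(np.fft.fft(c).real, 0.0)          # its eigenvalues, clipped at 0
    xi = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    z = np.sqrt(m) * np.fft.ifft(np.sqrt(lam) * xi)    # color complex white noise
    return z.real[:n]                                  # keep the first n points

field = fft_gaussian_field(256, 20.0, np.random.default_rng(4))
```

Each call costs O(m log m), which is what makes the FFT approach fast relative to a dense Cholesky factorization of the covariance matrix.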


13.
Good numerical modeling of groundwater flow and solute transport requires a proper characterization of the formation properties. In this paper, we analyze the performance and important implementation details of a new approach for stochastic inverse modeling called inverse sequential simulation (iSS). This approach is capable of characterizing conductivity fields with heterogeneity patterns that are difficult to capture with standard multiGaussian-based inverse approaches. The method is based on the multivariate sequential simulation principle, but the local conditional probability distributions are computed by simple co-kriging, with covariances and cross-covariances derived from an ensemble of conductivity and piezometric head fields, in a manner similar to the way experimental covariances are computed in ensemble Kalman filtering. A sensitivity analysis is performed on a synthetic aquifer regarding the number of members in the ensemble of realizations, the number of conditioning data, the number of piezometers at which piezometric heads are observed, and the number of nodes retained within the search neighborhood when computing the local conditional probabilities. The results show the importance of making all of these parameters sufficiently large for the algorithm to properly characterize hydraulic conductivity fields with clear non-multiGaussian features.

14.
The use of a physiologically based toxicokinetic (PBTK) model to reconstruct chemical exposure using human biomonitoring data, urinary metabolites in particular, has not been fully explored. In this paper, the trichloroethylene (TCE) exposure dataset by Fisher et al. (Toxicol Appl Pharm 152:339–359, 1998) was reanalyzed to investigate this new approach. By treating exterior chemical exposure as an unknown model parameter, a PBTK model was used to estimate exposure and model parameters by measuring the cumulative amount of trichloroethanol glucuronide (TCOG), a metabolite of TCE, in voided urine and a single blood sample of the study subjects by Markov chain Monte Carlo (MCMC) simulations. An estimated exterior exposure of 0.532 mg/l successfully reconstructed the true inhalation concentration of 0.538 mg/l with a 95% CI (0.441–0.645) mg/l. Based on the simulation results, a feasible urine sample collection period would be 12–16 h after TCE exposure, with blood sampling at the end of the exposure period. Given the known metabolic pathway and exposure duration, the proposed computational procedure provides a simple and reliable method for environmental (occupational) exposure and PBTK model parameter estimation, which is more feasible than repeated blood sampling.

15.
Geostatistical simulation algorithms are routinely used to generate conditional realizations of the spatial distribution of petrophysical properties, which are then fed into complex transfer functions, e.g. a flow simulator, to yield a distribution of responses, such as the time to recover a given proportion of the oil. This latter distribution, often referred to as the space of uncertainty, cannot be defined analytically because of the complexity (non-linearity) of transfer functions, but it can be characterized algorithmically through the generation of many realizations. This paper compares the space of uncertainty generated by four of the most commonly used algorithms: sequential Gaussian simulation, sequential indicator simulation, p-field simulation and simulated annealing. Conditional to 80 sample permeability values randomly drawn from an exhaustive 40×40 image, 100 realizations of the spatial distribution of permeability values are generated using each algorithm and fed into a pressure solver and a flow simulator. Principal component analysis is used to display the sets of realizations in the joint space of uncertainty of the response variables (effective permeability, times to reach 5% and 95% water cuts and to recover 10% and 50% of the oil). The attenuation of ergodic fluctuations through a rank-preserving transform of permeability values substantially reduces the extent of the space of uncertainty for sequential indicator simulation and p-field simulation, while improving the prediction of the response variable by the mean of the output distribution. Differences between simulation algorithms are most pronounced for long-term responses (95% water cut and 50% oil recovery), with sequential Gaussian simulation yielding the most accurate prediction. In this example, utilizing more than 20 realizations generally increases only slightly the size of the space of uncertainty.

16.
A comparison of two stochastic inverse methods in a field-scale application (total citations: 1; self-citations: 0, citations by others: 1)
Inverse modeling is a useful tool in groundwater flow modeling studies. The most frequent difficulties encountered when using this technique are the lack of conditioning information (e.g., heads and transmissivities), the uncertainty in available data, and the nonuniqueness of the solution. These problems can be addressed and quantified through a stochastic Monte Carlo approach. The aim of this work was to compare the applicability of two stochastic inverse modeling approaches in a field-scale application. The multi-scaling (MS) approach uses a downscaling parameterization procedure that is not based on geostatistics. The pilot point (PP) approach uses geostatistical random fields as initial transmissivity values and an experimental variogram to condition the calibration. The study area (375 km²) is part of a regional aquifer northwest of Montreal in the St. Lawrence Lowlands (southern Québec). It is located in limestone, dolomite, and sandstone formations and is mostly a fractured porous medium. The MS approach generated small errors on heads, but the calibrated transmissivity fields did not reproduce the variogram of the observed transmissivities. The PP approach generated larger errors on heads but better reproduced the spatial structure of the observed transmissivities. The PP approach was also less sensitive to uncertainty in the head measurements. If reliable heads are available but no transmissivities are measured, the MS approach provides useful results. If reliable transmissivities with a well-inferred spatial structure are available, the PP approach is a better alternative. This approach, however, must be used with caution if the measured transmissivities are not reliable.

17.
We investigated the effect of conditioning transient, two-dimensional groundwater flow simulations, where the transmissivity was a spatial random field, on time-dependent head data. The random fields, representing perturbations in log transmissivity, were generated using a known covariance function and then conditioned to match head data by iteratively cokriging and solving the flow model numerically. A new approximation to the cross-covariance of log transmissivity perturbations with time-dependent head data and head data at different times, which greatly increased the computational efficiency, was introduced. The most noticeable effect of head data on the estimation of head and log transmissivity perturbations occurred from conditioning only on spatially distributed head measurements during steady flow. The additional improvement in the estimation of the log transmissivity and head perturbations obtained by conditioning on time-dependent head data was fairly small. On the other hand, conditioning on temporal head data had a significant effect on particle tracks and reduced the lateral spreading around the center of the paths.


19.
20.
The estimation of probability densities of variables described by stochastic differential equations has long been performed using forward-time estimators, which rely on generating forward-in-time realizations of the model. Recently, an estimator based on the combination of forward and reverse time estimators has been developed; it has a higher order of convergence than the classical one. In this article, we explore the new estimator and compare the forward and forward–reverse estimators by applying them to a biochemical oxygen demand model. Finally, we show that the computational efficiency of the forward–reverse estimator is superior to that of the classical one, and we discuss the algorithmic aspects of the estimator.
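The classical forward estimator referred to above amounts to simulating many forward-in-time paths and taking empirical frequencies. A sketch with Euler–Maruyama on a toy linear-decay SDE; the parameter values are invented, not those of the paper's biochemical oxygen demand model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy linear-decay SDE (hypothetical parameters):
#   dX_t = -k * X_t dt + sigma * dW_t
k, sigma, x0, T = 0.5, 0.2, 1.0, 2.0
n_steps, n_paths = 200, 20000
dt = T / n_steps

# Forward Euler-Maruyama realizations, all paths advanced in lockstep
x = np.full(n_paths, x0)
for _ in range(n_steps):
    x += -k * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

# Classical forward estimator of P(X_T <= a): an empirical frequency
a = 0.5
p_hat = np.mean(x <= a)
```

The forward–reverse estimator improves on this by also simulating paths backward from the evaluation point and combining the two, but that construction is beyond this sketch.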

