Similar Literature
20 similar documents found.
1.
2.
We present a parallel framework for history matching and uncertainty characterization based on the Kalman filter update equation, applied to reservoir simulation. The main advantage of ensemble-based data assimilation methods is that they can handle large-scale numerical models with a high degree of nonlinearity and large amounts of data, making them well suited for coupling with a reservoir simulator. However, a sequential implementation is computationally expensive, as these methods require a relatively large number of reservoir simulation runs. The main focus of this work is therefore to develop a parallel data assimilation framework that requires minimal changes to the reservoir simulator source code. In this framework, multiple concurrent realizations are computed on several partitions of a parallel machine. These realizations are further subdivided among different processors, and communication is performed at data assimilation times. Although the parallel framework is general and can be used with different ensemble techniques, we discuss the methodology and compare results for two algorithms: the ensemble Kalman filter (EnKF) and the ensemble smoother (ES). Computational results show that the absolute runtime is greatly reduced by a parallel implementation relative to a serial one. In particular, a parallel efficiency of about 35% is obtained for the EnKF, and an efficiency of more than 50% for the ES.
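For reference, a minimal sketch of the Kalman filter update equation that such a framework applies at data assimilation times, assuming Gaussian observation errors with covariance R and perturbed observations; array shapes and names are illustrative, not taken from the paper:

```python
import numpy as np

def enkf_update(X, Y, d_obs, R, rng=None):
    """One EnKF analysis step with perturbed observations.
    X : (n_state, n_ens) ensemble of model/state variables
    Y : (n_obs, n_ens) ensemble of predicted observations
    d_obs : (n_obs,) observed data; R : (n_obs, n_obs) obs-error cov.
    """
    rng = np.random.default_rng(rng)
    n_obs, n_ens = Y.shape
    # Ensemble anomalies (deviations from the ensemble mean)
    A = X - X.mean(axis=1, keepdims=True)
    B = Y - Y.mean(axis=1, keepdims=True)
    # Sample cross-covariance and predicted-data covariance
    C_xy = A @ B.T / (n_ens - 1)
    C_yy = B @ B.T / (n_ens - 1)
    # Perturb observations so the updated ensemble keeps correct spread
    D = d_obs[:, None] + rng.multivariate_normal(
        np.zeros(n_obs), R, size=n_ens).T
    return X + C_xy @ np.linalg.solve(C_yy + R, D - Y)
```

In a parallel setting of the kind described, each processor would advance its share of realizations to the assimilation time, after which the columns of X and Y are gathered for this update.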

3.
Two methods for generating representative realizations from Gaussian and lognormal random field models are studied in this paper, where the term representative implies realizations that efficiently span the range of possible attribute values corresponding to the multivariate (log)normal probability distribution. The first method, already established in the geostatistical literature, is multivariate Latin hypercube sampling, a form of stratified random sampling that aims at marginal stratification of the simulated values for each variable involved, under the constraint of reproducing a known covariance matrix. The second method, scarcely known in the geostatistical literature, is stratified likelihood sampling, in which representative realizations are generated by systematically exploring the structure of the multivariate distribution function itself. The two sampling methods are employed to generate unconditional realizations of saturated hydraulic conductivity in a hydrogeological context, via a synthetic case study involving physically based simulation of flow and transport in a heterogeneous porous medium. Their performance is evaluated for different sample sizes (numbers of realizations) in terms of the reproduction of ensemble statistics of hydraulic conductivity and solute concentration computed from a very large ensemble generated via simple random sampling. The results show that both Latin hypercube and stratified likelihood sampling are more efficient than simple random sampling: overall, they reproduce statistics of the conductivity and concentration fields to a similar extent, yet with smaller sampling variability than simple random sampling.
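A minimal sketch of one common construction of multivariate Latin hypercube sampling for a Gaussian random field, assuming the covariance matrix C is given; the Cholesky step makes the final marginal stratification approximate, and for a lognormal field the result would be exponentiated. This is a generic construction, not necessarily the exact variant studied in the paper:

```python
import numpy as np
from scipy.stats import norm

def lhs_gaussian(C, n_real, rng=None):
    """Latin hypercube realizations of N(0, C), one variable per row.

    Each marginal is stratified into n_real equal-probability bins;
    the covariance C is then imposed through its Cholesky factor.
    """
    rng = np.random.default_rng(rng)
    n_var = C.shape[0]
    # One uniform draw per stratum, independently permuted per variable
    strata = rng.permuted(np.tile(np.arange(n_real), (n_var, 1)), axis=1)
    u = (strata + rng.uniform(size=(n_var, n_real))) / n_real
    z = norm.ppf(u)                 # stratified standard normals
    L = np.linalg.cholesky(C)       # correlate via Cholesky factor
    return L @ z
```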

4.
In this paper, we discuss several possible approaches to improving the performance of the ensemble Kalman filter (EnKF) through improved sampling of the initial ensemble. Each approach addresses a different limitation of the standard method; all, however, attempt to make the results from a small ensemble as reliable as possible. The validity and usefulness of each method for creating the initial ensemble is judged by three criteria: (1) does the sampling result in unbiased Monte Carlo estimates for nonlinear flow problems, (2) does the sampling reduce the variability of estimates compared to ensembles of realizations from the prior, and (3) does the sampling improve the performance of the EnKF? In general, we conclude that the use of dominant eigenvectors ensures the orthogonality of the generated realizations but results in biased forecasts of the fractional flow of water. We show that adding high frequencies from the remaining eigenvectors removes the bias without affecting the orthogonality of the realizations, although the method did not perform significantly better than standard Monte Carlo sampling. It was possible to identify an importance weighting that reduces the variance in estimates of the fractional flow of water, but it does not appear to be possible to use the importance-weighted realizations in the standard EnKF when the data relationship is nonlinear. The biggest improvement came from the use of pseudo-data with corrections to the variance of the actual observations.
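A sketch of the eigenvector-based initial sampling discussed above, assuming a prior covariance matrix C on a discrete grid: realizations combine the dominant modes with high-frequency content from the remaining eigenvectors, in the spirit of the bias-removal idea. Function and variable names are illustrative:

```python
import numpy as np

def eig_ensemble(C, n_ens, n_dom, rng=None):
    """Initial ensemble built from the n_dom dominant eigenvectors of
    the prior covariance C, plus random content from the remaining
    (high-frequency) eigenvectors."""
    rng = np.random.default_rng(rng)
    w, V = np.linalg.eigh(C)             # eigenvalues ascending
    w, V = w[::-1], V[:, ::-1]           # sort descending
    w = np.clip(w, 0.0, None)            # guard against round-off
    z = rng.standard_normal((len(w), n_ens))
    smooth = (V[:, :n_dom] * np.sqrt(w[:n_dom])) @ z[:n_dom]
    rough = (V[:, n_dom:] * np.sqrt(w[n_dom:])) @ z[n_dom:]
    return smooth + rough                # dominant modes + high freq.
```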

5.
Once a dense non-aqueous phase liquid (DNAPL) leaks into the subsurface, its migration and distribution are strongly influenced by the heterogeneity of the permeability field. Characterizing the architecture of the DNAPL source zone therefore requires parameter estimation to describe the heterogeneity of the hydrogeological parameters. This study constructs a data assimilation scheme that couples the ensemble Kalman filter (EnKF) with a multiphase flow and transport model, and estimates the spatial distribution of permeability in heterogeneous media by assimilating DNAPL saturation observations. The performance of the assimilation method is verified with both a real and an idealized two-dimensional sandbox case, and the influence of different factors on the assimilation is examined. The results show that assimilating saturation observations with the EnKF can effectively estimate the heterogeneous permeability field; that the accuracy of the parameter estimates improves as the spatial and temporal density of the observations increases; and that the placement of observation points affects the assimilation, with observations located in regions of concentrated contamination having the highest data value for parameter estimation.
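A minimal sketch of the parameter-update step of such a scheme, assuming an augmented-state EnKF in which log-permeability is corrected directly from saturation observations; the multiphase flow forecast itself is not shown and all names and shapes are illustrative, not from the paper:

```python
import numpy as np

def assimilate_saturation(logk_ens, sat_pred, sat_obs, obs_var, rng=None):
    """One EnKF parameter update: correct an ensemble of log-permeability
    fields with DNAPL saturation observations.
    logk_ens : (n_cells, n_ens); sat_pred : (n_obs, n_ens);
    sat_obs : (n_obs,); obs_var : scalar observation-error variance.
    """
    rng = np.random.default_rng(rng)
    n_obs, n_ens = sat_pred.shape
    A = logk_ens - logk_ens.mean(axis=1, keepdims=True)
    B = sat_pred - sat_pred.mean(axis=1, keepdims=True)
    C_xy = A @ B.T / (n_ens - 1)
    C_yy = B @ B.T / (n_ens - 1) + obs_var * np.eye(n_obs)
    # Perturbed observations, one replicate per ensemble member
    D = sat_obs[:, None] + np.sqrt(obs_var) * rng.standard_normal(
        (n_obs, n_ens))
    return logk_ens + C_xy @ np.linalg.solve(C_yy, D - sat_pred)
```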

6.
The ensemble Kalman filter has been successfully applied for data assimilation in very large models, including those in reservoir simulation and weather. Two problems become critical in a standard implementation of the ensemble Kalman filter, however, when the ensemble size is small. The first is that the ensemble approximation to the cross-covariances between model or state variables and the data can indicate correlations that are not real. These spurious correlations give rise to model or state variable updates in regions that should not be updated. The second problem is that the number of degrees of freedom in the ensemble is only as large as the ensemble size, so the assimilation of large amounts of precise, independent data is impossible. Localization of the Kalman gain is almost universal in the weather community, but applications of localization for the ensemble Kalman filter in porous media flow have been relatively rare. It has been shown, however, that localizing updates to regions of non-zero sensitivity or non-zero cross-covariance improves the performance of the EnKF when the ensemble size is small, and localization is necessary for the assimilation of large amounts of independent data. The problem is to define appropriate localization functions for different types of data and different types of variables. We show that knowledge of sensitivity alone is not sufficient to determine the region of localization; the region also depends on the prior covariance of the model variables and on the past history of data assimilation. Although the goal is to choose localization functions large enough to include the true region of non-zero cross-covariance, in EnKF applications the choice must balance the harm done by spurious covariances resulting from small ensembles against the harm done by excluding real correlations. In this paper, we focus on distance-based localization and provide insights for choosing suitable localization functions for data assimilation in multiphase flow problems. In practice, we conclude that it is reasonable to choose localization functions based on well patterns, and that the localization function should be larger than the region of non-zero sensitivity and should extend beyond a single well pattern.
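One taper widely used for distance-based localization (an assumption here, not a prescription from the paper) is the fifth-order Gaspari-Cohn function, which smoothly decays from 1 at zero distance to 0 at twice the localization half-length; a sketch:

```python
import numpy as np

def gaspari_cohn(r):
    """Gaspari-Cohn fifth-order taper: 1 at r = 0, 0 for r >= 2,
    where r = distance / localization half-length."""
    r = np.abs(np.asarray(r, dtype=float))
    f = np.zeros_like(r)
    m = r < 1
    f[m] = (-(r[m]**5) / 4 + r[m]**4 / 2 + 5 * r[m]**3 / 8
            - 5 * r[m]**2 / 3 + 1)
    m = (r >= 1) & (r < 2)
    f[m] = (r[m]**5 / 12 - r[m]**4 / 2 + 5 * r[m]**3 / 8
            + 5 * r[m]**2 / 3 - 5 * r[m] + 4 - 2 / (3 * r[m]))
    return f

# Localization is applied as a Schur (element-wise) product on the gain:
# K_loc[i, j] = gaspari_cohn(dist(cell_i, datum_j) / L) * K[i, j],
# with L chosen, per the conclusions above, to extend beyond the region
# of non-zero sensitivity (e.g., beyond a single well pattern).
```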

7.
8.
The ensemble Kalman filter (EnKF) has been shown repeatedly to be an effective method for data assimilation in large-scale problems, including those in petroleum engineering. Data assimilation for multiphase flow in porous media is particularly difficult, however, because the relationships between model variables (e.g., permeability and porosity) and observations (e.g., water cut and gas–oil ratio) are highly nonlinear. Because of the linear approximation in the update step and the use of a limited number of realizations in an ensemble, the EnKF tends to systematically underestimate the variance of the model variables. Various approaches have been suggested to reduce the magnitude of this problem, including ensemble filter methods that do not require perturbations of the observed data. On the other hand, iterative least-squares data assimilation methods with perturbed observations have been shown to be fairly robust to nonlinearity in the data relationship. In this paper, we present the EnKF with perturbed observations as a square-root filter in an enlarged state space. By imposing second-order-exact sampling of the observation errors and independence constraints that eliminate the cross-covariance with the predicted observation perturbations, we show that, for linear problems, the EnKF with observation perturbations can produce results equivalent to those of an ensemble square-root filter. Results from a standard EnKF, an EnKF with second-order-exact sampling of measurement errors satisfying independence constraints (EnKF (SIC)), and an ensemble square-root filter (ETKF) are compared on test problems with varying degrees of nonlinearity and varying dimensions. The first test problem is a simple one-variable quadratic model in which the nonlinearity of the observation operator is varied over a wide range by adjusting the magnitude of the coefficient of the quadratic term. The second problem has increased observation and model dimensions to test the EnKF (SIC) algorithm. The third is a two-dimensional, two-phase reservoir flow problem in which the permeability and porosity of every grid cell (5,000 model parameters) are unknown. The EnKF (SIC) and the mean-preserving ETKF (SRF) give similar results when applied to linear problems, and both outperform the standard EnKF. Although the ensemble methods are expected to handle the forecast step well in nonlinear problems, the estimates of the mean and variance from the analysis step are also surprisingly good for all variants of the ensemble filters, with little difference between the methods when applied to nonlinear problems.
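For contrast with the perturbed-observation EnKF sketched earlier in this list, a minimal sketch of a mean-preserving ensemble transform (square-root) analysis in the style of Hunt et al. (2007), which uses no observation perturbations; whether this matches the exact ETKF variant of the paper is an assumption:

```python
import numpy as np

def etkf_update(X, Y, d_obs, R_inv):
    """Mean-preserving ETKF analysis (symmetric square-root transform).
    X : (n_state, N) state ensemble; Y : (n_obs, N) predicted obs;
    d_obs : (n_obs,); R_inv : (n_obs, n_obs) inverse obs-error cov.
    """
    N = X.shape[1]
    x_mean = X.mean(axis=1, keepdims=True)
    y_mean = Y.mean(axis=1, keepdims=True)
    A, B = X - x_mean, Y - y_mean
    # Work in the N-dimensional ensemble space
    C = B.T @ R_inv @ B / (N - 1)
    gam, Q = np.linalg.eigh(C)
    Pa = Q @ np.diag(1.0 / (1.0 + gam)) @ Q.T
    # Mean-update weights and symmetric square-root transform
    w_mean = Pa @ B.T @ R_inv @ (d_obs[:, None] - y_mean) / (N - 1)
    W = Q @ np.diag((1.0 + gam) ** -0.5) @ Q.T
    return x_mean + A @ (w_mean + W)   # broadcast mean weights over columns
```

The symmetric square root W is what makes the transform mean-preserving: its rows sum so that the analysis perturbations remain centered on the updated mean.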

9.
In earth and environmental sciences applications, uncertainty analysis of the outputs of models whose parameters are spatially varying (or spatially distributed) is often performed in a Monte Carlo framework. In this context, alternative realizations of the spatial distribution of model inputs, typically conditioned to reproduce attribute values at locations where measurements are available, are generated via geostatistical simulation using simple random (SR) sampling. The environmental model under consideration is then evaluated with each of these realizations as a plausible input, in order to construct a distribution of plausible model outputs for uncertainty analysis. In hydrogeological investigations, for example, conditional simulations of saturated hydraulic conductivity are used as input to physically based simulators of flow and transport to evaluate the associated uncertainty in the spatial distribution of solute concentration. Realistic uncertainty analysis via SR sampling, however, requires a large number of simulated attribute realizations for the model inputs in order to yield a representative distribution of model outputs; this often hinders the application of uncertainty analysis because of the computational expense of evaluating complex environmental models. Stratified sampling methods, including variants of Latin hypercube sampling, constitute more efficient sampling alternatives, often yielding a more representative distribution of model outputs (e.g., solute concentration) with fewer model input realizations (e.g., of hydraulic conductivity), thus reducing the computational cost of uncertainty analysis. The application of stratified and Latin hypercube sampling in a geostatistical simulation context, however, is not widespread and, apart from a few exceptions, has been limited to the unconditional simulation case. This paper proposes methodological modifications that adapt existing stratified sampling methods (including Latin hypercube sampling), employed to date in an unconditional geostatistical simulation context, for the efficient conditional simulation of Gaussian random fields. The proposed conditional simulation methods are compared to traditional geostatistical simulation based on SR sampling in the context of a hydrogeological flow and transport model, via a synthetic case study. The results indicate that stratified sampling methods (including Latin hypercube sampling) are more efficient than SR sampling, overall reproducing statistics of the conductivity (and subsequently concentration) fields to a similar extent, yet with smaller sampling variability. These findings suggest that the proposed conditional sampling methods could contribute to the wider application of uncertainty analysis in spatially distributed environmental models using geostatistical simulation.
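A sketch of the classical kriging-residual step that turns any unconditional realization (stratified, Latin hypercube, or simple random) into a conditional one honoring the measured values; this is a standard construction under a known covariance and zero-mean assumption, not necessarily the exact modification proposed in the paper:

```python
import numpy as np

def condition_realization(z_unc, C, obs_idx, obs_val, nugget=1e-8):
    """Condition an unconditional Gaussian realization z_unc (shape
    (n_cells,)) to honor data obs_val at cells obs_idx, via the
    kriging-residual correction
        z_c = z_unc + C[:, o] C[o, o]^{-1} (obs - z_unc[o]).
    C is the (n_cells, n_cells) prior covariance.
    """
    Coo = C[np.ix_(obs_idx, obs_idx)] + nugget * np.eye(len(obs_idx))
    Cxo = C[:, obs_idx]
    return z_unc + Cxo @ np.linalg.solve(Coo, obs_val - z_unc[obs_idx])
```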

10.
Geologic uncertainties and limited well data often make recovery forecasting a difficult undertaking in typical appraisal and early development settings. Recent advances in geologic modeling algorithms permit automation of the model generation process via macros and geostatistical tools, allowing rapid construction of multiple alternative geologic realizations. Despite these advances, computing the reservoir's dynamic response via full-physics reservoir simulation remains computationally expensive, so in practice only a few of the many probable realizations are simulated. Experimental design techniques typically focus on a few discrete geologic realizations, as they are inherently more suitable for continuous engineering parameters and can only crudely approximate the impact of geology. As an alternative, a flow-based pattern recognition algorithm (FPRA) has been developed for quantifying forecast uncertainty. The proposed algorithm relies on rapid characterization of the geologic uncertainty space represented by an ensemble of sufficiently diverse static model realizations. FPRA characterizes this space by calculating connectivity distances, which quantify how different each realization is from all the others in terms of recovery response; fast streamline simulations are employed to evaluate these distances. By applying pattern recognition techniques to the connectivity distances, a few representative realizations are identified within the model ensemble for full-physics simulation, and the recovery factor probability distribution is derived from these intelligently selected runs. Here, FPRA is tested on an example case in which the objective is to accurately compute recovery factor statistics as a function of geologic uncertainty in a channelized turbidite reservoir. The recovery factor cumulative distribution functions computed by FPRA compare well to those computed via exhaustive full-physics simulation.
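A hedged sketch of the selection idea: given a precomputed matrix of connectivity distances (assumed here to come from fast streamline runs), embed the distances, cluster, and keep one representative realization per cluster for full-physics simulation. The embedding and clustering choices below are illustrative, not FPRA's exact recipe:

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

def select_representatives(dist, n_pick, rng=0):
    """Pick n_pick representative realizations from a precomputed
    (n_real, n_real) connectivity-distance matrix."""
    # Embed the distance matrix in a low-dimensional Euclidean space
    coords = MDS(n_components=4, dissimilarity="precomputed",
                 random_state=rng).fit_transform(dist)
    km = KMeans(n_clusters=n_pick, n_init=10, random_state=rng).fit(coords)
    picks = []
    for c in range(n_pick):
        members = np.where(km.labels_ == c)[0]
        d2 = ((coords[members] - km.cluster_centers_[c])**2).sum(axis=1)
        picks.append(int(members[np.argmin(d2)]))  # medoid-like choice
    return sorted(picks)
```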

11.
In this paper, a stochastic collocation-based Kalman filter (SCKF) is developed to estimate hydraulic conductivity from direct and indirect measurements. It combines the advantages of the ensemble Kalman filter (EnKF) for dynamic data assimilation with those of polynomial chaos expansion (PCE) for efficient uncertainty quantification. In this approach, the random log hydraulic conductivity field is first parameterized by the Karhunen–Loève (KL) expansion, and the hydraulic pressure is expressed by the PCE. The PCE coefficients are solved with a collocation technique: realizations are constructed by choosing collocation point sets in the random space. The stochastic collocation method is non-intrusive in that such realizations are solved forward in time via an existing deterministic solver, independently, as in the Monte Carlo method. The needed entries of the state covariance matrix are approximated with the PCE coefficients, which can be recovered from the collocation results, and the system states are updated by updating the PCE coefficients. A 2D heterogeneous flow example is used to demonstrate the applicability of the SCKF with respect to different factors, such as the initial guess, variance, correlation length, and number of observations, and the results are compared with those from the EnKF. It is shown that the SCKF is computationally more efficient than the EnKF under certain conditions; each approach has its own advantages and limitations. The performance of the SCKF decreases with larger variance, smaller correlation length, and fewer observations, so the choice between the two methods is problem dependent. As a non-intrusive method, the SCKF can easily be extended to multiphase flow problems.
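A minimal sketch of the truncated Karhunen–Loève parameterization on a discrete grid, assuming the covariance matrix C of the log-conductivity field is available; generating the collocation-point coefficients and solving the PCE are not shown:

```python
import numpy as np

def kl_expansion(C, n_terms, xi):
    """Truncated KL expansion of a zero-mean Gaussian log-conductivity
    field with (discretized) covariance C:
        Y = sum_k sqrt(lambda_k) * phi_k * xi_k,  xi ~ N(0, I).
    xi : (n_terms, n_real) coefficients, e.g. set at the collocation
    points of the stochastic collocation method.
    """
    lam, phi = np.linalg.eigh(C)
    order = np.argsort(lam)[::-1][:n_terms]     # keep leading modes
    lam, phi = np.clip(lam[order], 0, None), phi[:, order]
    return (phi * np.sqrt(lam)) @ xi
```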

12.
Reservoir simulation models are used both in the development of new fields and in developed fields where production forecasts are needed for investment decisions. When simulating a reservoir, one must account for the physical and chemical processes taking place in the subsurface; rock and fluid properties are crucial for describing flow in porous media. In this paper, the authors are concerned with estimating the permeability field of a reservoir. The problem of estimating model parameters such as permeability is often referred to as a history-matching problem in reservoir engineering. Currently, one of the most widely used methodologies addressing the history-matching problem is the ensemble Kalman filter (EnKF), a Monte Carlo implementation of the Bayesian update problem. Nevertheless, the EnKF methodology has certain limitations that encourage the search for alternative methods. For this reason, a new approach based on graphical models is proposed and studied. In particular, the graphical model chosen for this purpose is a dynamic non-parametric Bayesian network (NPBN); this is the first attempt to approach a history-matching problem in reservoir simulation with an NPBN-based method. A two-phase, two-dimensional flow model was implemented for a synthetic reservoir simulation exercise, and initial results are shown. The methods' performances are evaluated and compared. This paper features a completely novel approach to history matching and constitutes only the first part (part I) of a more detailed investigation; for these reasons (novelty and incompleteness), many questions are left open, and a number of recommendations are formulated to be investigated in part II.

13.
Reservoir management requires periodic updates of the simulation models using the production data that become available over time. Traditionally, reservoir models are validated against production data through a history-matching process; uncertainties in the data, as well as in the model, make history matching a nonunique inverse problem. It has been shown that the ensemble Kalman filter (EnKF) is an adequate method for predicting the dynamics of the reservoir. The EnKF is a sequential Monte Carlo approach that uses an ensemble of reservoir models. For realistic, large-scale applications, the ensemble size must be kept small for computational reasons. Consequently, the error space is not well covered (the cross-covariance matrices are poorly approximated), and the updated parameter field becomes scattered and loses important geological features (for example, the contact between high- and low-permeability regions); the prior geological knowledge present at the initial time is no longer found in the final updated parameter field. We propose a new approach to overcome some of these EnKF limitations. This paper presents the specification and results of the ensemble multiscale filter (EnMSF) for automatic history matching. At each update time, the EnMSF replaces the prior sample covariance with a multiscale tree, so that the global dependence is preserved via the parent–child relations in the tree (nodes at adjacent scales). After constructing the tree, the Kalman update is performed. The properties of the EnMSF are demonstrated with a 2D, two-phase (oil and water) small twin experiment, and the results are compared to the EnKF. The advantages of the EnMSF are localization in space and scale, adaptability to prior information, and efficiency when many measurements are available. These advantages make the EnMSF a practical tool for many data assimilation problems.

14.
The performance of the ensemble Kalman filter (EnKF) depends on the sample size relative to the dimension of the parameter space. In real applications, insufficient sampling may result in spurious correlations, which reduce the accuracy of the filter and lead to a strong underestimation of the uncertainty. Covariance localization and inflation are common solutions to these problems, and the ensemble square root filter (ESRF) also estimates uncertainty better than the EnKF. In this work, we propose a method that limits the consequences of sampling errors by means of a convenient generation of the initial ensemble. This regeneration is based on a stationary orthogonal-base representation (SOBR) obtained via a singular value decomposition of a stationary covariance matrix estimated from the ensemble. The technique is tested on a 2D single-phase reservoir and compared with the other common techniques. The evaluation is based on a reference solution obtained with a very large ensemble (one million members), which removes the spurious correlations. The example shows that the SOBR technique is a valid alternative for reducing the effect of sampling error. Moreover, when the SOBR method is applied in combination with the ESRF and inflation, it gives the best performance in terms of uncertainty estimation and oil production forecasts.
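A sketch of the regeneration idea under stated assumptions: given an estimate of a stationary covariance matrix (its construction from the ensemble is not shown here), an orthogonal basis from an SVD is used to draw a fresh initial ensemble, which suppresses spurious sample correlations. Names and the energy cutoff are illustrative:

```python
import numpy as np

def regenerate_ensemble(C_stat, n_ens, mean=0.0, rng=None):
    """Draw a new initial ensemble from the leading singular vectors
    of a stationary covariance estimate C_stat (SVD-based orthogonal
    basis, in the spirit of the SOBR idea above)."""
    rng = np.random.default_rng(rng)
    U, s, _ = np.linalg.svd(C_stat, hermitian=True)
    # Keep enough modes to capture 99% of the variance (illustrative)
    k = int(np.sum(np.cumsum(s) / np.sum(s) < 0.99)) + 1
    z = rng.standard_normal((k, n_ens))
    return mean + (U[:, :k] * np.sqrt(s[:k])) @ z
```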

15.
Stochastic geostatistical techniques are essential tools for groundwater flow and transport modelling in highly heterogeneous media. Typically, these techniques require massive numbers of realizations to accurately simulate the high variability and account for the uncertainty. Such large numbers of realizations impose several constraints on the stochastic techniques (e.g., increased computational effort, limits on domain size, grid resolution, and time step, and convergence issues). Understanding the connectivity of the subsurface layers offers an opportunity to overcome these constraints. This research presents a sampling framework that reduces the number of required Monte Carlo realizations by utilizing the connectivity properties of the hydraulic conductivity distributions in a three-dimensional domain. Different geostatistical models were tested in this study, including an exponential model generated with the turning bands algorithm (TBM) and a spherical model generated using sequential Gaussian simulation (SGSIM). It is found that the total connected fraction of the largest clusters and their tortuosity are highly correlated with the percentage of mass arrival and the first-arrival quantiles at different control planes. Applying different sampling techniques together with several indicators suggests that a compact sample representing only 10% of the total number of realizations can produce results close to those of the full set. The proposed sampling techniques, especially those utilizing low-conductivity clustering, also show very promising results in matching the full range of realizations. Finally, the size of the selected clusters relative to the domain size significantly affects the transport characteristics and the connectivity indicators.
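A sketch of one plausible connectivity indicator, assuming a binary high-conductivity phase defined by a threshold; the indicator and neighbourhood choices are illustrative rather than the paper's exact definitions (tortuosity, for instance, is not computed here):

```python
import numpy as np
from scipy import ndimage

def largest_cluster_fraction(K, threshold):
    """Label clusters of cells with conductivity above the threshold
    (6-neighbour face connectivity in 3-D) and return the fraction of
    high-K cells belonging to the largest connected cluster."""
    high = K > threshold
    structure = ndimage.generate_binary_structure(3, 1)  # face-connected
    labels, n_clusters = ndimage.label(high, structure=structure)
    if n_clusters == 0:
        return 0.0
    sizes = np.bincount(labels.ravel())[1:]   # skip background label 0
    return sizes.max() / high.sum()
```

Realizations would then be ranked by such indicators, and a compact subset (e.g., the 10% mentioned above) sampled across the indicator range.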

16.
17.
Representing Spatial Uncertainty Using Distances and Kernels
Assessing the uncertainty of a spatial phenomenon requires the analysis of a large number of parameters, which must be processed by a transfer function. To capture the possibility of a wide range of uncertainty in the transfer function response, a large set of geostatistical model realizations needs to be processed. Stochastic spatial simulation can rapidly provide multiple, equally probable realizations. However, since the transfer function is often computationally demanding, only a small number of models can be evaluated in practice, and these are usually selected through a ranking procedure. Traditional ranking techniques for selecting probabilistic ranges of response (P10, P50, and P90) are highly dependent on the static property used. In this paper, we propose to parameterize the spatial uncertainty represented by a large set of geostatistical realizations through a distance function measuring the “dissimilarity” between any two realizations. The distance function allows a mapping of the space of uncertainty and can be tailored to the particular problem. The multi-dimensional space of uncertainty can then be modeled using kernel techniques, such as kernel principal component analysis (KPCA) or kernel clustering. These tools allow the selection of a subset of representative realizations with properties similar to those of the larger set. Decisions and strategies can then be made by applying the transfer function to this subset, without loss of accuracy and without the need to evaluate each realization exhaustively. The method is applied to a synthetic oil reservoir, where the spatial uncertainty of channel facies is modeled through multiple realizations generated with a multi-point geostatistical algorithm and several training images.
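A hedged sketch of the mapping step: turn the dissimilarity matrix into a Gaussian kernel and embed the realizations with kernel PCA; the bandwidth heuristic and component count are assumptions, and the distance definition is problem-specific, as the abstract notes:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def kpca_map(dist, n_components=5, sigma=None):
    """Map realization-to-realization dissimilarities into a
    low-dimensional feature space: Gaussian kernel from the distance
    matrix, then kernel PCA on the precomputed kernel."""
    sigma = sigma if sigma is not None else np.median(dist)  # heuristic
    K = np.exp(-dist ** 2 / (2 * sigma ** 2))
    return KernelPCA(n_components=n_components,
                     kernel="precomputed").fit_transform(K)

# A representative subset is then obtained by clustering the embedded
# coordinates (e.g., k-means) and keeping one member per cluster.
```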

18.
Ensemble Kalman filtering with shrinkage regression techniques
The classical ensemble Kalman filter (EnKF) is known to underestimate the prediction uncertainty, which can lead to low forecast precision and an ensemble collapsing into a single realisation. In this paper, we present alternative EnKF updating schemes based on shrinkage methods known from multivariate linear regression. These methods reduce the effects caused by collinear ensemble members and have the same computational properties as the fastest EnKF algorithms previously suggested. In addition, the importance of model selection and validation for prediction purposes is investigated, and a model selection scheme based on cross-validation is introduced. The classical EnKF scheme is compared with the suggested procedures on two toy examples and one synthetic reservoir case study. Significant improvements are seen, both in forecast precision and in the prediction uncertainty estimates.
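A minimal sketch of one ridge-type shrinkage variant of the EnKF update; the paper considers several shrinkage estimators, so this particular form, and the regularisation weight lam (which could be chosen by cross-validation as described above), are illustrative assumptions:

```python
import numpy as np

def enkf_ridge_update(X, Y, d_obs, R, lam, rng=None):
    """EnKF update with ridge-style shrinkage: the diagonal of the
    predicted-data covariance is inflated before inversion, damping
    directions associated with collinear ensemble members."""
    rng = np.random.default_rng(rng)
    n_obs, N = Y.shape
    A = X - X.mean(axis=1, keepdims=True)
    B = Y - Y.mean(axis=1, keepdims=True)
    C_yy = B @ B.T / (N - 1)
    # Ridge shrinkage scaled by the average predicted-data variance
    G = C_yy + R + lam * np.trace(C_yy) / n_obs * np.eye(n_obs)
    D = d_obs[:, None] + rng.multivariate_normal(
        np.zeros(n_obs), R, size=N).T
    return X + (A @ B.T / (N - 1)) @ np.linalg.solve(G, D - Y)
```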

19.
The role of heterogeneity and uncertainty in hydraulic conductivity on hillslope runoff production was evaluated using the fully integrated hydrologic model ParFlow. Simulations were generated using idealized high-resolution hillslopes configured either with a deep water table or with a water table at the outlet, to isolate surface and subsurface flow, respectively. Heterogeneous, correlated random fields were used to create spatial variability in the hydraulic conductivity, and ensembles generated from multiple realizations of hydraulic conductivity were used to evaluate how this uncertainty propagates to runoff. Ensemble averages were used to determine the effective runoff for a given hillslope as a function of rainfall rate and degree of subsurface heterogeneity. Cases where the water table is initialized at the outlet show runoff behavior with little sensitivity to the variance in hydraulic conductivity. A technique is presented that explicitly interrogates individual realizations at every simulation timestep to partition overland and subsurface flow contributions. This hydrograph separation technique shows that the degree of heterogeneity can play a role in determining the proportions of surface and subsurface flow, even when the same effective hillslope outflow is observed. The method is also used to evaluate current hydrograph separation techniques and demonstrates that recursive filters can accurately partition overland flow and baseflow in certain cases.
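One standard recursive filter of the kind evaluated here is the Lyne-Hollick digital filter; whether this exact filter was used in the study is an assumption. A single forward pass is sketched below:

```python
import numpy as np

def lyne_hollick(q, alpha=0.925):
    """One forward pass of the Lyne-Hollick recursive digital filter:
    split total streamflow q (1-D array) into quick (overland) flow
    and base (subsurface) flow. alpha is the filter parameter."""
    quick = np.zeros_like(q, dtype=float)
    for t in range(1, len(q)):
        quick[t] = (alpha * quick[t - 1]
                    + 0.5 * (1 + alpha) * (q[t] - q[t - 1]))
        quick[t] = min(max(quick[t], 0.0), q[t])  # constrain to [0, q]
    return quick, q - quick   # (quickflow, baseflow)
```

Such a filter estimates the overland/baseflow split from the hydrograph alone, which is exactly what the realization-interrogation technique above provides a physically based benchmark for.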

20.