Similar Articles
Found 20 similar articles (search time: 62 ms)
1.
The ensemble Kalman filter (EnKF) has become a popular method for history matching production and seismic data in petroleum reservoir models. However, it is known that EnKF may fail to give acceptable data matches, especially for highly nonlinear problems. In this paper, we introduce a procedure to improve EnKF data matches based on assimilating the same data multiple times with the covariance matrix of the measurement errors multiplied by the number of data assimilations. We prove the equivalence between single and multiple data assimilations for the linear-Gaussian case and present computational evidence that multiple data assimilations can improve EnKF estimates for the nonlinear case. The proposed procedure was tested by assimilating time-lapse seismic data in two synthetic reservoir problems, and the results show significant improvements compared to the standard EnKF. In addition, we review the inversion schemes used in the EnKF analysis and present a rescaling procedure to avoid loss of information during the truncation of small singular values.
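The multiple-data-assimilation idea can be sketched in a few lines: the same data are assimilated Na times, each time with the measurement-error covariance inflated by the factor Na. The following is a minimal toy sketch with a linear forward model; all names (`enkf_update`, `G`, `Na`) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def enkf_update(M, D, d_obs, Cd, rng):
    """One perturbed-observation ensemble update. M: (Nm, Ne) states, D: (Nd, Ne) predictions."""
    Ne = M.shape[1]
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    Cmd = dM @ dD.T / (Ne - 1)              # state-data cross-covariance
    Cdd = dD @ dD.T / (Ne - 1)              # predicted-data covariance
    K = Cmd @ np.linalg.inv(Cdd + Cd)       # Kalman gain
    E = rng.multivariate_normal(np.zeros(len(d_obs)), Cd, size=Ne).T  # obs perturbations
    return M + K @ (d_obs[:, None] + E - D)

rng = np.random.default_rng(0)
Nm, Nd, Ne, Na = 5, 3, 200, 4
G = rng.normal(size=(Nd, Nm))               # linear toy forward operator
M = rng.normal(size=(Nm, Ne))               # prior ensemble
d_obs = rng.normal(size=Nd)
Cd = 0.1 * np.eye(Nd)

for _ in range(Na):                         # assimilate the same data Na times...
    M = enkf_update(M, G @ M, d_obs, Na * Cd, rng)  # ...with Cd inflated by Na
```

In the linear-Gaussian case the Na inflated updates are statistically equivalent to a single update with the uninflated covariance, which is the equivalence result the abstract refers to.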

2.
Over the last few years, the ensemble Kalman filter (EnKF) has become a very popular tool for history matching petroleum reservoirs. EnKF is an alternative to more traditional history matching techniques as it is computationally fast and easy to implement. Instead of seeking one best model estimate, EnKF is a Monte Carlo method that represents the solution with an ensemble of state vectors. Lately, several ensemble-based methods have been proposed to improve upon the solution produced by EnKF. In this paper, we compare EnKF with one of the most recently proposed methods, the adaptive Gaussian mixture filter (AGM), on a 2D synthetic reservoir and the Punq-S3 test case. AGM was introduced to loosen up the requirement of a Gaussian prior distribution as implicitly formulated in EnKF. By combining ideas from particle filters with EnKF, AGM extends the low-rank kernel particle Kalman filter. The simulation study shows that while both methods match the historical data well, AGM is better at preserving the geostatistics of the prior distribution. Further, AGM also produces estimated fields that have a higher empirical correlation with the reference field than the corresponding fields obtained with EnKF.

3.
4.
We present a parallel framework for history matching and uncertainty characterization based on the Kalman filter update equation for the application of reservoir simulation. The main advantages of ensemble-based data assimilation methods are that they can handle large-scale numerical models with a high degree of nonlinearity and large amounts of data, making them perfectly suited for coupling with a reservoir simulator. However, the sequential implementation is computationally expensive as the methods require a relatively high number of reservoir simulation runs. Therefore, the main focus of this work is to develop a parallel data assimilation framework with minimal changes to the reservoir simulator source code. In this framework, multiple concurrent realizations are computed on several partitions of a parallel machine. These realizations are further subdivided among different processors, and communication is performed at data assimilation times. Although this parallel framework is general and can be used for different ensemble techniques, we discuss the methodology and compare results of two algorithms, the ensemble Kalman filter (EnKF) and the ensemble smoother (ES). Computational results show that the absolute runtime is greatly reduced using a parallel implementation versus a serial one. In particular, a parallel efficiency of about 35% is obtained for the EnKF, and an efficiency of more than 50% is obtained for the ES.

5.
The ensemble Kalman filter (EnKF) has been shown repeatedly to be an effective method for data assimilation in large-scale problems, including those in petroleum engineering. Data assimilation for multiphase flow in porous media is particularly difficult, however, because the relationships between model variables (e.g., permeability and porosity) and observations (e.g., water cut and gas–oil ratio) are highly nonlinear. Because of the linear approximation in the update step and the use of a limited number of realizations in an ensemble, the EnKF has a tendency to systematically underestimate the variance of the model variables. Various approaches have been suggested to reduce the magnitude of this problem, including the application of ensemble filter methods that do not require perturbations to the observed data. On the other hand, iterative least-squares data assimilation methods with perturbations of the observations have been shown to be fairly robust to nonlinearity in the data relationship. In this paper, we present EnKF with perturbed observations as a square root filter in an enlarged state space. By imposing second-order-exact sampling of the observation errors and independence constraints to eliminate the cross-covariance with predicted observation perturbations, we show that it is possible in linear problems to obtain results from EnKF with observation perturbations that are equivalent to ensemble square-root filter results. Results from a standard EnKF, EnKF with second-order-exact sampling of measurement errors that satisfy independence constraints (EnKF (SIC)), and an ensemble square-root filter (ETKF) are compared on various test problems with varying degrees of nonlinearity and dimensions. The first test problem is a simple one-variable quadratic model in which the nonlinearity of the observation operator is varied over a wide range by adjusting the magnitude of the coefficient of the quadratic term. 
The second problem has increased observation and model dimensions to test the EnKF (SIC) algorithm. The third test problem is a two-dimensional, two-phase reservoir flow problem in which permeability and porosity of every grid cell (5,000 model parameters) are unknown. The EnKF (SIC) and the mean-preserving ETKF (SRF) give similar results when applied to linear problems, and both are better than the standard EnKF. Although the ensemble methods are expected to handle the forecast step well in nonlinear problems, the estimates of the mean and the variance from the analysis step for all variants of ensemble filters are also surprisingly good, with little difference between ensemble methods when applied to nonlinear problems.
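A mean-preserving ensemble square-root update of the ETKF type can be sketched compactly: no observation perturbations are used; instead the ensemble anomalies are multiplied by a symmetric transform so that the analysis covariance is exact in ensemble space. The toy setup and all names are illustrative assumptions, not the paper's code.

```python
import numpy as np

def etkf_update(M, D, d_obs, R):
    """ETKF-style update. M: (Nm, Ne) states, D: (Nd, Ne) predictions, R: obs-error covariance."""
    Ne = M.shape[1]
    m_bar, d_bar = M.mean(1, keepdims=True), D.mean(1, keepdims=True)
    dM, dD = M - m_bar, D - d_bar
    Rinv = np.linalg.inv(R)
    C = dD.T @ Rinv @ dD / (Ne - 1)              # (Ne x Ne) ensemble-space matrix
    w, U = np.linalg.eigh(C)
    w = np.maximum(w, 0.0)                       # guard against tiny negative eigenvalues
    Pa = U @ np.diag(1.0 / (1.0 + w)) @ U.T      # analysis covariance in ensemble space
    w_a = Pa @ dD.T @ Rinv @ (d_obs[:, None] - d_bar) / (Ne - 1)  # mean-update weights
    T = U @ np.diag((1.0 + w) ** -0.5) @ U.T     # symmetric square-root transform
    return m_bar + dM @ w_a + dM @ T             # shifted mean + rotated anomalies

rng = np.random.default_rng(6)
G = rng.normal(size=(3, 5))                      # linear toy observation operator
M = rng.normal(size=(5, 40))                     # prior ensemble
Ma = etkf_update(M, G @ M, d_obs=rng.normal(size=3), R=0.5 * np.eye(3))
```

Because the symmetric square root leaves the zero-eigenvalue direction (the vector of ones) untouched, the anomalies stay centered and the ensemble mean is preserved, which is the "mean-preserving ETKF" property mentioned above.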

6.
One of the major limitations of the classical ensemble Kalman filter (EnKF) is the assumption of a linear relationship between the state vector and the observed data. Thus, the classical EnKF algorithm can suffer from poor performance when considering highly non-linear and non-Gaussian likelihood models. In this paper, we have formulated the EnKF based on kernel-shrinkage regression techniques. This approach makes it possible to handle highly non-linear likelihood models efficiently. Moreover, a solution to the pre-image problem, essential in previously suggested EnKF schemes based on kernel methods, is not required. Testing the suggested procedure on a simple, illustrative problem with a non-linear likelihood model, we were able to obtain good results when the classical EnKF failed.

7.
Ensemble Kalman filtering with shrinkage regression techniques
The classical ensemble Kalman filter (EnKF) is known to underestimate the prediction uncertainty. This can potentially lead to low forecast precision and an ensemble collapsing into a single realisation. In this paper, we present alternative EnKF updating schemes based on shrinkage methods known from multivariate linear regression. These methods reduce the effects caused by collinear ensemble members and have the same computational properties as the fastest EnKF algorithms previously suggested. In addition, the importance of model selection and validation for prediction purposes is investigated, and a model selection scheme based on cross-validation is introduced. The classical EnKF scheme is compared with the suggested procedures on two toy examples and one synthetic reservoir case study. Significant improvements are seen, both in terms of forecast precision and prediction uncertainty estimates.
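A ridge-type shrinkage of the ensemble update, in the spirit of the regression methods above, can be sketched as follows: the regression from predicted data to states is damped by a penalty `lam`, which reduces the influence of collinear ensemble members. This is an illustrative sketch, not the paper's actual scheme; the penalty scaling is an assumption.

```python
import numpy as np

def shrinkage_kalman_gain(dM, dD, Cd, lam=0.1):
    """Kalman gain with a ridge penalty. dM, dD: state/data ensemble anomalies;
    Cd: measurement-error covariance; lam: shrinkage strength."""
    Ne = dM.shape[1]
    Cmd = dM @ dD.T / (Ne - 1)              # state-data cross-covariance
    Cdd = dD @ dD.T / (Ne - 1)              # predicted-data covariance
    Nd = Cdd.shape[0]
    # The ridge term (scaled by the average variance) shrinks the regression
    # coefficients toward zero, stabilizing the solve for collinear ensembles.
    ridge = lam * np.trace(Cdd) / Nd * np.eye(Nd)
    return Cmd @ np.linalg.inv(Cdd + Cd + ridge)
```

In practice `lam` would be chosen by cross-validation, mirroring the model-selection scheme the paper introduces.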

8.
The use of the ensemble smoother (ES) instead of the ensemble Kalman filter increases the nonlinearity of the update step during data assimilation and the need for iterative assimilation methods. A previous version of the iterative ensemble smoother based on a Gauss–Newton formulation was able to match data relatively well, but only after a large number of iterations. A multiple data assimilation method (MDA) was generally more efficient for large problems but lacked the ability to continue “iterating” if the data mismatch was too large. In this paper, we develop an efficient, iterative ensemble smoother algorithm based on the Levenberg–Marquardt (LM) method of regularizing the update direction and choosing the step length. The incorporation of the LM damping parameter reduces the tendency to add model roughness at early iterations when the update step is highly nonlinear, as it often is when all data are assimilated simultaneously. In addition, the ensemble approximation of the Hessian is modified in a way that simplifies computation and increases stability. We also report on a simplified algorithm in which the model mismatch term in the updating equation is neglected. We thoroughly evaluated the new algorithm based on the modified LM method, LM-ensemble randomized maximum likelihood (LM-EnRML), and the simplified version of the algorithm, LM-EnRML (approx), on three test cases. The first is a highly nonlinear single-variable problem for which results can be compared against the true conditional pdf. The second test case is a one-dimensional two-phase flow problem in which the permeability of 31 grid cells is uncertain. In this case, Markov chain Monte Carlo results are available for comparison with ensemble-based results. The third test case is the Brugge benchmark case with both 10 and 20 years of history.
The efficiency and quality of results of the new algorithms were compared with the standard ES (without iteration), the ensemble-based Gauss–Newton formulation, the standard ensemble-based LM formulation, and the MDA. Because of the high level of nonlinearity, the standard ES performed poorly on all test cases. The MDA often performed well, especially at early iterations where the reduction in data mismatch was quite rapid. The best results, however, were always achieved with the new iterative ensemble smoother algorithms, LM-EnRML and LM-EnRML (approx).
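The core Levenberg–Marquardt idea above, inflating the data-error term by (1 + λ) to shorten and regularize early, highly nonlinear update steps, can be sketched with a toy nonlinear model. The forward model, the λ schedule, and the simple accept/reject rule are illustrative assumptions, not the LM-EnRML algorithm itself.

```python
import numpy as np

def lm_ens_update(M, D, d_obs, Cd, lam):
    """One LM-damped ensemble smoother step: larger lam => shorter, safer step."""
    Ne = M.shape[1]
    dM = M - M.mean(1, keepdims=True)
    dD = D - D.mean(1, keepdims=True)
    Cmd, Cdd = dM @ dD.T / (Ne - 1), dD @ dD.T / (Ne - 1)
    K = Cmd @ np.linalg.inv(Cdd + (1.0 + lam) * Cd)   # LM-damped gain
    return M + K @ (d_obs[:, None] - D)

rng = np.random.default_rng(1)
g = lambda M: M ** 3                       # nonlinear toy forward model
M = rng.normal(size=(1, 100))              # prior ensemble
d_obs, Cd, lam = np.array([0.5]), 0.01 * np.eye(1), 1e3
mis0 = np.mean((g(M) - d_obs) ** 2)        # initial data mismatch

for _ in range(10):
    M_new = lm_ens_update(M, g(M), d_obs, Cd, lam)
    # Accept the step and relax the damping only if the mismatch decreased
    if np.mean((g(M_new) - d_obs) ** 2) < np.mean((g(M) - d_obs) ** 2):
        M, lam = M_new, lam / 10
    else:
        lam *= 10                          # reject: increase damping and retry
```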

9.
Ensemble methods present a practical framework for parameter estimation, performance prediction, and uncertainty quantification in subsurface flow and transport modeling. In particular, the ensemble Kalman filter (EnKF) has received significant attention for its promising performance in calibrating heterogeneous subsurface flow models. Since an ensemble of model realizations is used to compute the statistical moments needed to perform the EnKF updates, large ensemble sizes are needed to provide accurate updates and uncertainty assessment. However, for realistic problems that involve large-scale models with computationally demanding flow simulation runs, the EnKF implementation is limited to small-sized ensembles. As a result, spurious numerical correlations can develop and lead to inaccurate EnKF updates, which tend to underestimate or even eliminate the ensemble spread. Ad hoc practical remedies, such as localization, local analysis, and covariance inflation schemes, have been developed and applied to reduce the effect of sampling errors due to small ensemble sizes. In this paper, a fast linear approximate forecast method is proposed as an alternative approach to enable the use of large ensemble sizes in operational settings to obtain improved sample statistics and EnKF updates. The proposed method first clusters a large number of initial geologic model realizations into a small number of groups. A representative member from each group is used to run a full forward flow simulation. The flow predictions for the remaining realizations in each group are approximated by a linearization around the full simulation results of the representative model (centroid) of the respective cluster. The linearization can be performed using either adjoint-based or ensemble-based gradients.
Results from several numerical experiments with two-phase and three-phase flow systems in this paper suggest that the proposed method can be applied to improve the EnKF performance in large-scale problems where the number of full simulations is constrained.
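The cluster-then-linearize forecast can be sketched as follows: group realizations, run the expensive simulator only on each group's centroid, and approximate the remaining members by a first-order expansion around the centroid. The toy simulator stand-in `simulate`, its gradient, and the k-means routine are all illustrative assumptions.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means clustering of realizations (rows of X)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels, centers

def simulate(m):                      # expensive full flow simulation (toy stand-in)
    return np.sin(m).sum(keepdims=True)

def grad_simulate(m):                 # adjoint- or ensemble-based gradient (toy)
    return np.cos(m)

X = np.random.default_rng(2).normal(size=(500, 10))   # 500 model realizations
labels, centers = kmeans(X, k=5)
forecasts = np.empty((500, 1))
for j, c in enumerate(centers):
    d_c, g_c = simulate(c), grad_simulate(c)          # one full run per cluster
    members = np.where(labels == j)[0]
    # First-order expansion of the forecast around the cluster centroid
    forecasts[members] = d_c + (X[members] - c) @ g_c[:, None]
```

Only k full simulations are run instead of 500, which is what makes large ensemble sizes affordable in this scheme.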

10.
In this paper we present an extension of the ensemble Kalman filter (EnKF) specifically designed for multimodal systems. The EnKF data assimilation scheme is less accurate when it is used to approximate systems with multimodal distributions, such as reservoir facies models. The algorithm is based on the assumption that both the prior and posterior distributions can be approximated by Gaussian mixtures, and it is validated by the introduction of the concept of finite ensemble representation. The effectiveness of the approach is shown with two applications. The first example is based on the Lorenz model. In the second example, the proposed methodology, combined with a localization technique, is used to update a 2D reservoir facies model. Both applications give evidence of an improved performance of the proposed method with respect to the EnKF.

11.
The ensemble Kalman filter (EnKF) is widely used in land-surface data assimilation because it is easy to implement, but it rests on the assumptions of a linear model and normally distributed errors, whereas the soil-moisture equation is highly nonlinear and the ensemble becomes skewed when the soil is very dry or very wet. To comprehensively evaluate the EnKF's performance in retrieving the soil-moisture profile from assimilated surface soil-moisture observations, we introduce the sampling importance resampling particle filter, which requires neither assumption, and compare the effects of nonlinearity and skewness on the two assimilation algorithms. The results show that the EnKF approaches the sample mean quickly and accurately for both small and large ensembles, whereas the particle filter converges only slowly and only for large samples. Moreover, the marginal probability density of the particles, and its skewness and kurtosis, differ completely between the two filters: the EnKF particles, although not strictly normally distributed, remain unimodal throughout, while the particle-filter particles evolve from unimodal to bimodal and back to unimodal as the assimilation proceeds.
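One sampling-importance-resampling (SIR) step of the kind compared against the EnKF above can be sketched briefly: particles are weighted by a Gaussian observation likelihood and then resampled. The Gaussian likelihood, the systematic resampler, and the soil-moisture numbers are standard or illustrative choices, not the study's configuration.

```python
import numpy as np

def sir_step(particles, d_obs, obs_std, h, rng):
    """One SIR particle-filter update. h maps state to predicted observation."""
    w = np.exp(-0.5 * ((d_obs - h(particles)) / obs_std) ** 2)  # likelihood weights
    w /= w.sum()
    Np = len(particles)
    # Systematic resampling: low-variance selection proportional to the weights
    u = (rng.random() + np.arange(Np)) / Np
    idx = np.searchsorted(np.cumsum(w), u)
    return particles[np.clip(idx, 0, Np - 1)]

rng = np.random.default_rng(3)
particles = rng.normal(0.3, 0.1, size=1000)   # prior surface soil-moisture ensemble
posterior = sir_step(particles, d_obs=0.25, obs_std=0.02, h=lambda x: x, rng=rng)
```

Unlike the EnKF, no linearity or Gaussianity of the state distribution is assumed; the cost is that many particles are needed before the weights are informative, matching the slow large-sample convergence reported above.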

12.
In this paper, we discuss several possible approaches to improving the performance of the ensemble Kalman filter (EnKF) through improved sampling of the initial ensemble. Each of the approaches addresses a different limitation of the standard method. All methods, however, attempt to make the results from a small ensemble as reliable as possible. The validity and usefulness of each method for creating the initial ensemble is based on three criteria: (1) does the sampling result in unbiased Monte Carlo estimates for nonlinear flow problems, (2) does the sampling reduce the variability of estimates compared to ensembles of realizations from the prior, and (3) does the sampling improve the performance of the EnKF? In general, we conclude that the use of dominant eigenvectors ensures the orthogonality of the generated realizations, but results in biased forecasts of the fractional flow of water. We show that the addition of high frequencies from remaining eigenvectors can be used to remove the bias without affecting the orthogonality of the realizations, but the method did not perform significantly better than standard Monte Carlo sampling. It was possible to identify an appropriate importance weighting to reduce the variance in estimates of the fractional flow of water, but it does not appear to be possible to use the importance weighted realizations in standard EnKF when the data relationship is nonlinear. The biggest improvement came from the use of the pseudo-data with corrections to the variance of the actual observations.

13.
In recent years, data assimilation techniques have been applied to an increasingly wider spectrum of problems. Monte Carlo variants of the Kalman filter, in particular, the ensemble Kalman filter (EnKF), have gained significant popularity. EnKF is used for a wide variety of applications, among them for updating reservoir simulation models. EnKF is a Monte Carlo method, and its reliability depends on the actual size of the sample. In applications, a moderately sized sample (40–100 members) is used for computational convenience. Problems due to the resulting Monte Carlo effects require a more thorough analysis of the EnKF. Earlier, we presented a method for the assessment of the error emerging at the EnKF update step (Kovalenko et al., SIAM J Matrix Anal Appl, in press). A particular energy norm of the EnKF error after a single update step was studied. The energy norm used to assess the error is hard to interpret. In this paper, we derive the distribution of the Euclidean norm of the sampling error under the same assumptions as before, namely normality of the forecast distribution and negligibility of the observation error. The distribution depends on the ensemble size, the number and spatial arrangement of the observations, and the prior covariance. The distribution is used to study the error propagation in a single update step on several synthetic examples. The examples illustrate the changes in reliability of the EnKF, when the parameters governing the error distribution vary.

14.
The degrees of freedom (DOF) in standard ensemble-based data assimilation is limited by the ensemble size. Successful assimilation of a data set with large information content (IC) therefore requires that the DOF is sufficiently large. Too few DOF relative to the IC may result in ensemble collapse, or at least in unwarranted uncertainty reduction in the estimation results. In this situation, one has two options to restore a proper balance between the DOF and the IC: to increase the DOF or to decrease the IC. Spatially dense data sets typically have a large IC. Within subsurface applications, inverted time-lapse seismic data used for reservoir history matching is an example of a spatially dense data set. Such data are considered to have great potential due to their large IC, but they also contain errors that are challenging to characterize properly. The computational cost of running the forward simulations for reservoir history matching with any kind of data is large for field cases, such that a moderately large ensemble size is standard. Realization of the potential in seismic data for ensemble-based reservoir history matching is therefore not straightforward, not only because of the unknown character of the associated data errors, but also due to the imbalance between a large IC and a too small number of DOF. Distance-based localization is often applied to increase the DOF but is example specific and involves cumbersome implementation work. We consider methods to obtain a proper balance between the IC and the DOF when assimilating inverted seismic data for reservoir history matching. To decrease the IC, we consider three ways to reduce the influence of the data space: subspace pseudo inversion, data coarsening, and a novel way of performing front extraction. To increase the DOF, we consider coarse-scale simulation, which allows for an increase in the DOF by increasing the ensemble size without increasing the total computational cost.
We also consider a combination of decreasing the IC and increasing the DOF by proposing a novel method consisting of a combination of data coarsening and coarse-scale simulation. The methods were compared on one small and one moderately large example with seismic bulk-velocity fields at four assimilation times as data. The size of the examples allows for calculation of a reference solution obtained with standard ensemble-based data assimilation methodology and an unrealistically large ensemble size. With the reference solution as the yardstick against which the quality of the other methods is measured, we find that the novel method combining data coarsening and coarse-scale simulations gave the best results. With very restricted computational resources available, this was the only method that gave satisfactory results.

15.
The performance of the ensemble Kalman filter (EnKF) for continuous updating of facies location and boundaries in a reservoir model based on production and facies data for a 3D synthetic problem is presented. The occurrence of the different facies types is treated as a random process and the initial distribution was obtained by truncating a bi-Gaussian random field. Because facies data are highly non-Gaussian, re-parameterization was necessary in order to use the EnKF algorithm for data assimilation; two Gaussian random fields are updated in lieu of the static facies parameters. The problem of history matching applied to facies is difficult because (1) constraints on facies observations at wells are occasionally violated when production data are assimilated; (2) excessive reduction of variance seems to be a bigger problem with facies than with Gaussian random permeability and porosity fields; and (3) the relationship between facies variables and data is so highly non-linear that the final facies field does not always honor early production data well. Consequently, three issues are investigated in this work. Is it possible to iteratively enforce facies constraints when updates due to production data have caused them to be violated? Can localization of adjustments be used for facies to prevent collapse of the variance during the data-assimilation period? Is a forecast from the final state better than a forecast from time zero using the final parameter fields? To investigate these issues, a 3D reservoir simulation model is coupled with the EnKF technique for data assimilation. One approach to enforcing the facies constraint is continuous iteration on all available data, which may lead to inconsistent model states, incorrect weighting of the production data and incorrect adjustment of the state vector. A sequential EnKF in which the dynamic and static data are assimilated sequentially is presented, and this approach seems to have solved the problems highlighted above.
When the ensemble size is small compared to the number of independent data, the localized adjustment of the state vector is a very important technique that may be used to mitigate loss of rank in the ensemble. Implementing a distance-based localization of the facies adjustment appears to mitigate the problem of variance deficiency in the ensembles by ensuring that sufficient variability in the ensemble is maintained throughout the data assimilation period. Finally, when data are assimilated without localization, the prediction results appear to be independent of the starting point. When localization is applied, it is better to predict from the start using the final parameter field rather than continue from the final state.
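Distance-based localization of the kind used above is usually implemented by multiplying the Kalman gain (or covariance) elementwise by a compactly supported taper of the grid-cell-to-well distance, which damps spurious long-range correlations from a small ensemble. Below is a sketch using the standard Gaspari–Cohn fifth-order taper; the grid, well positions, and half-width are illustrative assumptions.

```python
import numpy as np

def gaspari_cohn(z):
    """Gaspari-Cohn fifth-order piecewise-rational taper; z = distance / half-width,
    compact support on [0, 2)."""
    z = np.abs(np.asarray(z, dtype=float))
    taper = np.zeros_like(z)
    a = z <= 1.0
    taper[a] = (((-0.25 * z[a] + 0.5) * z[a] + 0.625) * z[a] - 5.0 / 3.0) * z[a] ** 2 + 1.0
    b = (z > 1.0) & (z < 2.0)
    zb = z[b]
    taper[b] = ((((zb / 12.0 - 0.5) * zb + 0.625) * zb + 5.0 / 3.0) * zb - 5.0) * zb \
               + 4.0 - 2.0 / (3.0 * zb)
    return taper

# Elementwise (Schur) product of the gain with the taper matrix
dist = np.abs(np.arange(100)[:, None] - np.array([20, 70])[None, :])  # cells x wells
rho = gaspari_cohn(dist / 15.0)        # localization half-width of 15 cells (assumed)
K = np.random.default_rng(4).normal(size=(100, 2))   # unlocalized gain (toy)
K_loc = rho * K                        # localized gain: zero beyond 30 cells
```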

16.
In recent years, many applications of history-matching methods in general, and the ensemble Kalman filter in particular, have been proposed, especially in order to estimate fields that provide uncertainty in the stochastic process defined by the dynamical system of hydrocarbon recovery. Such fields can be permeability or porosity fields, but they can also be fields defined by rock type (facies fields). The estimation of the boundaries of geologic facies with the ensemble Kalman filter (EnKF) has been carried out, in different papers, with the aid of Gaussian random fields, which were truncated using various schemes and introduced into a history-matching process. In this paper, we estimate, within the frame of the EnKF process, the locations of three facies types that occur in a reservoir domain, with the property that any two of them may be in contact. The geological simulation model is a form of the general truncated plurigaussian method. The difference from other approaches consists in how the truncation scheme is introduced and in the observation operator of the facies types at the well locations. The projection from the continuous space of the Gaussian fields into the discrete space of the facies fields is realized through an intermediary space (a space of probabilities). This space connects the observation operator of the facies types at the well locations with the geological simulation model. We test the model on a 2D reservoir, using the EnKF method as the data assimilation technique, with different geostatistical properties for the Gaussian fields and different levels of uncertainty introduced in the model parameters and in the construction of the Gaussian fields.
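The truncation step of a plurigaussian simulation can be sketched simply: two continuous Gaussian fields are mapped to three facies codes through a 2D truncation rule in which any two facies can share a boundary. The thresholds and the rule itself are assumptions for illustration, not the paper's scheme.

```python
import numpy as np

def truncate_plurigaussian(g1, g2, t1=0.0, t2=0.0):
    """Map two Gaussian fields to facies {0, 1, 2} via a simple truncation rule."""
    facies = np.full(g1.shape, 2)          # background facies
    facies[g1 < t1] = 0                    # facies 0 where field 1 is low
    facies[(g1 >= t1) & (g2 < t2)] = 1     # facies 1 carved from the rest by field 2
    return facies

rng = np.random.default_rng(5)
g1, g2 = rng.normal(size=(50, 50)), rng.normal(size=(50, 50))
facies = truncate_plurigaussian(g1, g2)
```

The EnKF then updates the continuous fields `g1` and `g2` (which are Gaussian, as the method requires), and the facies map is regenerated by re-applying the truncation after each update.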

17.
Improving the Ensemble Estimate of the Kalman Gain by Bootstrap Sampling
Using a small ensemble size in the ensemble Kalman filter methodology is efficient for updating numerical reservoir models but can result in poor updates owing to spurious correlations between observations and model variables. The most common approach for reducing the effect of spurious correlations on model updates is multiplication of the estimated covariance by a tapering function that eliminates all correlations beyond a prespecified distance. Distance-dependent tapering is not always appropriate, however. In this paper, we describe efficient methods for discriminating between the real and the spurious correlations in the Kalman gain matrix by using the bootstrap method to assess the confidence level of each element from the Kalman gain matrix. The new method is tested on a small linear problem, and on a water flooding reservoir history matching problem. For the water flooding example, a small ensemble size of 30 was used to compute the Kalman gain in both the screened EnKF and standard EnKF methods. The new method resulted in significantly smaller root mean squared errors of the estimated model parameters and greater variability in the final updated ensemble.
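Bootstrap screening of the Kalman gain can be sketched as follows: the gain is recomputed on bootstrap resamples of the ensemble, and entries that are unstable across resamples (likely spurious correlations) are shrunk toward zero. The screening statistic and damping factor below are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

def kalman_gain(M, D, Cd):
    """Standard ensemble Kalman gain from state ensemble M and predictions D."""
    Ne = M.shape[1]
    dM = M - M.mean(1, keepdims=True)
    dD = D - D.mean(1, keepdims=True)
    return (dM @ dD.T / (Ne - 1)) @ np.linalg.inv(dD @ dD.T / (Ne - 1) + Cd)

def screened_gain(M, D, Cd, n_boot=100, seed=0):
    rng = np.random.default_rng(seed)
    K = kalman_gain(M, D, Cd)
    Ne = M.shape[1]
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, Ne, Ne)      # resample members with replacement
        boots.append(kalman_gain(M[:, idx], D[:, idx], Cd))
    boots = np.stack(boots)
    # Relative bootstrap variability of each gain entry: large => likely spurious
    cv2 = boots.var(0) / (K ** 2 + 1e-12)
    return K / (1.0 + cv2)                 # shrink unreliable entries toward zero
```

Unlike distance-based tapering, this screening requires no prespecified localization radius, which is the motivation stated in the abstract.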

18.
The Bayesian framework is the standard approach for data assimilation in reservoir modeling. This framework involves characterizing the posterior distribution of geological parameters in terms of a given prior distribution and data from the reservoir dynamics, together with a forward model connecting the space of geological parameters to the data space. Since the posterior distribution quantifies the uncertainty in the geologic parameters of the reservoir, the characterization of the posterior is fundamental for the optimal management of reservoirs. Unfortunately, due to the large-scale, highly nonlinear properties of standard reservoir models, characterizing the posterior is computationally prohibitive. Instead, more affordable ad hoc techniques, based on Gaussian approximations, are often used for characterizing the posterior distribution. The performance of these Gaussian approximations is typically evaluated by assessing their ability to reproduce the truth within the confidence interval provided by the ad hoc technique under consideration. This has the disadvantage of mixing up the approximation properties of the history matching algorithm employed with the information content of the particular observations used, making it hard to evaluate the effect of the ad hoc approximations alone. In this paper, we avoid this disadvantage by comparing the ad hoc techniques with a fully resolved, state-of-the-art probing of the Bayesian posterior distribution. The ad hoc techniques whose performance we assess are based on (1) linearization around the maximum a posteriori estimate, (2) randomized maximum likelihood, and (3) ensemble Kalman filter-type methods. In order to fully resolve the posterior distribution, we implement a state-of-the-art Markov chain Monte Carlo (MCMC) method that scales well with respect to the dimension of the parameter space, enabling us to study realistic forward models, in two space dimensions, at a high level of grid refinement.
Our implementation of the MCMC method provides the gold standard against which the aforementioned Gaussian approximations are assessed. We present numerical synthetic experiments where we quantify the capability of each of the ad hoc Gaussian approximations in reproducing the mean and the variance of the posterior distribution (characterized via MCMC) associated with a data assimilation problem. Both single-phase and two-phase (oil–water) reservoir models are considered so that fundamental differences in the resulting forward operators are highlighted. The main objective of our controlled experiments was to exhibit the substantial discrepancies in the approximation properties of standard ad hoc Gaussian approximations. Numerical investigations of the type we present here will lead to a greater understanding of the cost-efficient, but ad hoc, Bayesian techniques used for data assimilation in petroleum reservoirs and hence ultimately to improved techniques with more accurate uncertainty quantification.
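A dimension-robust MCMC sampler of the kind suited to such gold-standard comparisons can be sketched with the preconditioned Crank–Nicolson (pCN) proposal, whose acceptance rate does not degrade under grid refinement because the proposal leaves the Gaussian prior invariant. The 1D quadratic forward model below is purely illustrative; the paper's forward models are reservoir simulators.

```python
import numpy as np

def pcn_mcmc(neg_log_like, x0, beta=0.2, n=5000, seed=0):
    """Sample from posterior proportional to exp(-neg_log_like(x)) * N(0, I) prior."""
    rng = np.random.default_rng(seed)
    x, phi = x0, neg_log_like(x0)
    chain = np.empty((n, x0.size))
    for i in range(n):
        # pCN proposal: a prior-preserving autoregressive move
        xp = np.sqrt(1.0 - beta ** 2) * x + beta * rng.standard_normal(x0.size)
        phip = neg_log_like(xp)
        # Accept with probability min(1, exp(phi - phip)); the prior cancels
        if np.log(rng.random()) < phi - phip:
            x, phi = xp, phip
        chain[i] = x
    return chain

d_obs, sigma = 1.2, 0.1
chain = pcn_mcmc(lambda x: 0.5 * ((x[0] ** 2 - d_obs) / sigma) ** 2,
                 x0=np.zeros(1))
```

Because the quadratic observation operator makes the posterior bimodal, this toy problem also illustrates why Gaussian approximations of the kind assessed above can misrepresent posterior spread.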

19.
20.
While 3D seismic has been the basis for geological model building for a long time, time-lapse seismic has primarily been used in a qualitative manner to assist in monitoring reservoir behavior. With the growing acceptance of assisted history matching methods has come an equally rising interest in incorporating 3D or time-lapse seismic data into the history matching process in a more quantitative manner. The common approach in recent studies has been to invert the seismic data to elastic or to dynamic reservoir properties, typically acoustic impedance or saturation changes. Here we consider the use of both 3D and time-lapse seismic amplitude data based on a forward modeling approach that does not require any inversion in the traditional sense. Advantages of such an approach may be better estimation and treatment of model and measurement errors, the combination of two inversion steps into one by removing the explicit inversion to state space variables, and more consistent dependence on the validity of assumptions underlying the inversion process. In this paper, we introduce this approach with the use of an assisted history matching method in mind. Two ensemble-based methods, the ensemble Kalman filter and the ensemble randomized maximum likelihood method, are used to investigate issues arising from the use of seismic amplitude data, and possible solutions are presented. Experiments with a 3D synthetic reservoir model show that additional information on the distribution of reservoir fluids, and on rock properties such as porosity and permeability, can be extracted from the seismic data. The roles of localization and iterative methods are discussed in detail.

