Similar literature
20 similar documents found.
1.
The conventional paradigm for predicting future reservoir performance from existing production data involves the construction of reservoir models that match the historical data through iterative history matching. This is generally an expensive and difficult task and often results in models that do not accurately assess the uncertainty of the forecast. We propose an alternative re-formulation of the problem, in which the role of the reservoir model is reconsidered. Instead of using the model to match the historical production and then forecasting, the model is used in combination with Monte Carlo sampling to establish a statistical relationship between the historical and forecast variables. The estimated relationship is then used in conjunction with the actual production data to produce a statistical forecast. This allows quantifying posterior uncertainty on the forecast variable without explicit inversion or history matching. The main rationale is that the reservoir model, however complex, remains a simplified representation of the actual subsurface. As statistical relationships can generally only be constructed in low dimensions, compression and dimension reduction of the reservoir models themselves would result in further oversimplification. Conversely, production data and forecast variables are time series, which are simpler and much more amenable to dimension reduction techniques. We present a dimension reduction approach based on functional data analysis (FDA) and mixed principal component analysis (mixed PCA), followed by canonical correlation analysis (CCA) to maximize the linear correlation between the forecast and production variables. Using these transformed variables, it is then possible to apply linear Gaussian regression and estimate the statistical relationship between the forecast and historical variables.
This relationship is used in combination with the actual observed historical data to estimate the posterior distribution of the forecast variable. Sampling from this posterior and reconstructing the corresponding forecast time series allows assessing uncertainty on the forecast. The workflow is demonstrated on a case based on a Libyan reservoir and compared with traditional history matching.
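The reduce-regress-reconstruct idea in this abstract can be sketched in a few lines. The toy below is not the authors' FDA/mixed-PCA/CCA pipeline: it substitutes plain PCA for the functional and mixed-PCA steps, omits the CCA rotation, and invents the Monte Carlo ensemble; all sizes and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Monte Carlo ensemble: each prior model yields a historical
# production series (rows of h) and a forecast series (rows of f).
n, t_hist, t_fore = 200, 24, 12
x = rng.normal(size=(n, 3))                      # latent "reservoir" factors
h = x @ rng.normal(size=(3, t_hist)) + 0.1 * rng.normal(size=(n, t_hist))
f = x @ rng.normal(size=(3, t_fore)) + 0.1 * rng.normal(size=(n, t_fore))

def pca(a, k):
    """Center and project onto the top-k principal components."""
    mu = a.mean(axis=0)
    u, s, vt = np.linalg.svd(a - mu, full_matrices=False)
    return (a - mu) @ vt[:k].T, vt[:k], mu

hp, hv, hmu = pca(h, 3)     # reduced historical variables
fp, fv, fmu = pca(f, 3)     # reduced forecast variables

# Linear Gaussian regression in the reduced space: fp ~ hp @ b
b, *_ = np.linalg.lstsq(hp, fp, rcond=None)

# "Observed" history: project one member, predict, reconstruct the series
h_obs = h[0]
f_hat = ((h_obs - hmu) @ hv.T) @ b @ fv + fmu   # back to time-series space
```

A full implementation would add the CCA step between the PCA projections and sample from the Gaussian posterior in the reduced space rather than returning a point forecast.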

2.
3.
The degrees of freedom (DOF) in standard ensemble-based data assimilation are limited by the ensemble size. Successful assimilation of a data set with large information content (IC) therefore requires that the DOF be sufficiently large. Too few DOF relative to the IC may result in ensemble collapse, or at least in unwarranted uncertainty reduction in the estimation results. In this situation, one has two options to restore a proper balance between the DOF and the IC: increase the DOF or decrease the IC. Spatially dense data sets typically have a large IC. Within subsurface applications, inverted time-lapse seismic data used for reservoir history matching are an example of a spatially dense data set. Such data are considered to have great potential due to their large IC, but they also contain errors that are challenging to characterize properly. The computational cost of running the forward simulations for reservoir history matching with any kind of data is large for field cases, so a moderately large ensemble size is standard. Realizing the potential of seismic data for ensemble-based reservoir history matching is therefore not straightforward, not only because of the unknown character of the associated data errors, but also due to the imbalance between a large IC and too few DOF. Distance-based localization is often applied to increase the DOF but is example specific and involves cumbersome implementation work. We consider methods to obtain a proper balance between the IC and the DOF when assimilating inverted seismic data for reservoir history matching. To decrease the IC, we consider three ways to reduce the influence of the data space: subspace pseudo inversion, data coarsening, and a novel way of performing front extraction. To increase the DOF, we consider coarse-scale simulation, which allows for an increase in the ensemble size without increasing the total computational cost.
We also consider a combination of decreasing the IC and increasing the DOF by proposing a novel method that combines data coarsening with coarse-scale simulation. The methods were compared on one small and one moderately large example with seismic bulk-velocity fields at four assimilation times as data. The size of the examples allows for the calculation of a reference solution obtained with standard ensemble-based data assimilation methodology and an unrealistically large ensemble size. With the reference solution as the yardstick against which the quality of the other methods is measured, we find that the novel method combining data coarsening and coarse-scale simulation gave the best results. With very restricted computational resources available, this was the only method that gave satisfactory results.
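Of the three ways to reduce the influence of the data space, data coarsening is the simplest to illustrate: a dense inverted-seismic grid is block-averaged so the assimilated data set is smaller (lower IC) while coarse-scale structure such as a front survives. The grid, block size, and synthetic "front" below are invented for illustration, not taken from the paper's examples.

```python
import numpy as np

def coarsen(grid, block):
    """Average non-overlapping block x block patches of a 2D array."""
    ny, nx = grid.shape
    assert ny % block == 0 and nx % block == 0
    return grid.reshape(ny // block, block, nx // block, block).mean(axis=(1, 3))

# A dense 64x64 attribute grid with a sharp lateral "front" plus noise
fine = np.zeros((64, 64))
fine[:, :20] = 1.0
fine += 0.05 * np.random.default_rng(2).normal(size=fine.shape)

coarse = coarsen(fine, 8)   # 64x64 -> 8x8: 4096 data become 64
```

The coarse grid still separates the two regions, but the number of data assimilated drops by a factor of 64, easing the DOF/IC imbalance.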

4.
Reservoir management requires periodic updates of the simulation models using the production data available over time. Traditionally, validation of reservoir models with production data is done using a history matching process. Uncertainties in the data, as well as in the model, lead to a nonunique history matching inverse problem. It has been shown that the ensemble Kalman filter (EnKF) is an adequate method for predicting the dynamics of the reservoir. The EnKF is a sequential Monte Carlo approach that uses an ensemble of reservoir models. For realistic, large-scale applications, the ensemble size needs to be kept small to limit computational cost. Consequently, the error space is not well covered (poor cross-correlation matrix approximations) and the updated parameter field becomes scattered and loses important geological features (for example, the contact between high- and low-permeability values). The prior geological knowledge present at the initial time is no longer found in the final updated parameters. We propose a new approach to overcome some of the EnKF limitations. This paper shows the specifications and results of the ensemble multiscale filter (EnMSF) for automatic history matching. The EnMSF replaces, at each update time, the prior sample covariance with a multiscale tree. The global dependence is preserved via the parent–child relations in the tree (nodes at adjacent scales). After constructing the tree, the Kalman update is performed. The properties of the EnMSF are presented here with a 2D, two-phase (oil and water) small twin experiment, and the results are compared to the EnKF. The advantages of using the EnMSF are localization in space and scale, adaptability to prior information, and efficiency when many measurements are available. These advantages make the EnMSF a practical tool for many data assimilation problems.
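Several abstracts in this listing build on the EnKF analysis step. A minimal stochastic (perturbed-observation) update for a toy linear problem is sketched below; the sizes, observation operator, and noise level are illustrative assumptions, not taken from any of the papers above.

```python
import numpy as np

rng = np.random.default_rng(3)

ne, nm = 40, 10                            # ensemble size, parameter count
m = rng.normal(size=(nm, ne))              # prior ensemble of parameters
h = np.zeros((1, nm)); h[0, 0] = 1.0       # observe the first parameter only
r = 0.2**2                                 # observation-error variance
d_obs = 1.5                                # the single observed datum

a = m - m.mean(axis=1, keepdims=True)      # ensemble anomalies
s = h @ a                                  # predicted-data anomalies
c = (s @ s.T) / (ne - 1) + r               # innovation covariance (1x1)
k = (a @ s.T) / (ne - 1) / c               # Kalman gain, shape (nm, 1)

# Perturbed observations: each member assimilates a noisy copy of d_obs
pert = d_obs + np.sqrt(r) * rng.normal(size=(1, ne))
m_post = m + k @ (pert - h @ m)            # analysis ensemble
```

The observed parameter's ensemble mean is pulled toward the datum and its spread shrinks; unobserved parameters move only through their sample correlations with the observed one, which is exactly the mechanism that degrades when the ensemble is small.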

5.
Over recent years, the ensemble Kalman filter (EnKF) has become a very popular tool for history matching petroleum reservoirs. The EnKF is an alternative to more traditional history matching techniques as it is computationally fast and easy to implement. Instead of seeking one best model estimate, the EnKF is a Monte Carlo method that represents the solution with an ensemble of state vectors. Lately, several ensemble-based methods have been proposed to improve upon the solution produced by the EnKF. In this paper, we compare the EnKF with one of the most recently proposed methods, the adaptive Gaussian mixture filter (AGM), on a 2D synthetic reservoir and the Punq-S3 test case. AGM was introduced to loosen the requirement of a Gaussian prior distribution as implicitly formulated in the EnKF. By combining ideas from particle filters with the EnKF, AGM extends the low-rank kernel particle Kalman filter. The simulation study shows that while both methods match the historical data well, AGM is better at preserving the geostatistics of the prior distribution. Further, AGM also produces estimated fields that have a higher empirical correlation with the reference field than the corresponding fields obtained with the EnKF.

6.
The ensemble Kalman filter (EnKF) appears to give good results for matching production data at existing wells. However, the predictive power of these models away from the existing wells is much more uncertain. In this paper, the production history of a channelized reservoir is matched using the EnKF for five cases with different levels of information. The predictive power of the resulting models is tested at the existing wells and at new wells. The results show a consistent improvement in the predictions at the existing wells after assimilation of the production data, but not in the prediction of production at new well locations. The latter depends on the settings of the problem and the prior information used. The results also show that the quality of fit during the history match was not always a good predictor of the predictive capability of the history-matched model. This suggests that some form of validation away from the observed wells is essential.

7.
Reservoir simulation models are used both in the development of new fields and in developed fields where production forecasts are needed for investment decisions. When simulating a reservoir, one must account for the physical and chemical processes taking place in the subsurface. Rock and fluid properties are crucial when describing the flow in porous media. In this paper, the authors are concerned with estimating the permeability field of a reservoir. The problem of estimating model parameters such as permeability is often referred to as a history-matching problem in reservoir engineering. Currently, one of the most widely used methodologies addressing the history-matching problem is the ensemble Kalman filter (EnKF). The EnKF is a Monte Carlo implementation of the Bayesian update problem. Nevertheless, the EnKF methodology has certain limitations that encourage the search for an alternative method. For this reason, a new approach based on graphical models is proposed and studied. In particular, the graphical model chosen for this purpose is a dynamic non-parametric Bayesian network (NPBN). This is the first attempt to approach a history-matching problem in reservoir simulation using an NPBN-based method. A two-phase, two-dimensional flow model was implemented for a synthetic reservoir simulation exercise, and initial results are shown. The methods' performances are evaluated and compared. This paper features a completely novel approach to history matching and constitutes only the first part (part I) of a more detailed investigation. For these reasons (novelty and incompleteness), many questions are left open and a number of recommendations are formulated, to be investigated in part II of the same paper.

8.

Data assimilation in reservoir modeling often involves model variables that are multimodal, such as porosity and permeability. Well-established data assimilation methods, such as the ensemble Kalman filter and ensemble smoother approaches, are based on Gaussian assumptions that are not applicable to multimodal random variables. The selection ensemble smoother is introduced as an alternative to traditional ensemble methods. In the proposed method, the prior distribution of the model variables, for example the porosity field, is a selection-Gaussian distribution, which allows modeling of the multimodal behavior of the posterior ensemble. The proposed approach is validated on a two-dimensional synthetic channelized reservoir. In the application, an unknown reservoir model of porosity and permeability is estimated from the measured data. Seismic and production data are assumed to be repeatedly measured in time, and the reservoir model is updated every time new data are assimilated. The example shows that the selection ensemble smoother improves the characterization of the bimodality of the model parameters compared to the results of the standard ensemble smoother.


9.
We present a parallel framework for history matching and uncertainty characterization based on the Kalman filter update equation for the application of reservoir simulation. The main advantages of ensemble-based data assimilation methods are that they can handle large-scale numerical models with a high degree of nonlinearity and large amounts of data, making them well suited for coupling with a reservoir simulator. However, the sequential implementation is computationally expensive as the methods require a relatively high number of reservoir simulation runs. Therefore, the main focus of this work is to develop a parallel data assimilation framework with minimal changes to the reservoir simulator source code. In this framework, multiple concurrent realizations are computed on several partitions of a parallel machine. These realizations are further subdivided among different processors, and communication is performed at data assimilation times. Although this parallel framework is general and can be used with different ensemble techniques, we discuss the methodology and compare results of two algorithms, the ensemble Kalman filter (EnKF) and the ensemble smoother (ES). Computational results show that the absolute runtime is greatly reduced using a parallel implementation versus a serial one. In particular, a parallel efficiency of about 35% is obtained for the EnKF, and an efficiency of more than 50% is obtained for the ES.
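The parallel pattern described above — forward runs that are independent between assimilation times — can be sketched with a thread pool standing in for the partitions of a parallel machine. Here `forward` is a hypothetical stand-in for a reservoir-simulator call, not the simulator interface used in the paper, and a real deployment would use processes or MPI ranks rather than threads.

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def forward(perm):
    """Stand-in for one reservoir-simulator run on a single realization."""
    return float(np.sum(perm) * 0.1)

rng = np.random.default_rng(4)
ensemble = [rng.normal(size=100) for _ in range(8)]

# Each realization is an independent task between assimilation times, so the
# forward runs can be dispatched concurrently; results are gathered before
# the (serial) assimilation update.
with ThreadPoolExecutor(max_workers=4) as pool:
    predictions = list(pool.map(forward, ensemble))
```

Because only the gather happens at assimilation times, communication volume is small relative to simulation time, which is why the reported parallel efficiencies are achievable with minimal simulator changes.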

10.
Recent progress on reservoir history matching: a review
History matching is a type of inverse problem in which observed reservoir behavior is used to estimate reservoir model variables that caused the behavior. Obtaining even a single history-matched reservoir model requires a substantial amount of effort, but the past decade has seen remarkable progress in the ability to generate reservoir simulation models that match large amounts of production data. Progress can be partially attributed to an increase in computational power, but the widespread adoption of geostatistics and Monte Carlo methods has also contributed indirectly. In this review paper, we will summarize key developments in history matching and then review many of the accomplishments of the past decade, including developments in reparameterization of the model variables, methods for computation of the sensitivity coefficients, and methods for quantifying uncertainty. An attempt has been made to compare representative procedures and to identify possible limitations of each.

11.
Inferring reservoir data from dynamic production data has long been done by matching the production history. However, proper integration of the available production history has always been a challenge. First, different production history data such as well pressure and water cut often occur at different scales, making their joint inversion difficult. Second, production data obtained from the same well, or even the same reservoir, are often correlated, making a significant portion of the dataset redundant. Third, the massiveness of the data recorded from wells in a large reservoir over a long period of time makes the nonlinear inversion of such data computationally demanding. In this paper, we propose the integration of multiwell production data using the wavelet transform. The method involves a two-dimensional wavelet transformation of the data space in order to integrate multiple production data and reduce the correlation between multiwell data. Multiple datasets from different wells, representing different production responses (pressure, water cut, etc.), were treated as a single matrix of data rather than separate vectors that assume no correlation amongst datasets. This enabled us to transform the multiwell production data into a two-dimensional wavelet domain and subsequently select the most important wavelets for the history match. By minimizing the square of the Frobenius norm of the residual matrix we were able to match the calculated response to the observed response. We derived the relationship that allows us to replace the conventional minimization of the sum of squares of the l2 norms of multi-objective functions with the minimization of the square of the Frobenius norm of the integrated data. The usefulness of the approach is demonstrated using two examples. The approach proved very effective at reducing correlation between multiwell data. In addition, the method helped to reduce the cost of computing sensitivity coefficients. However, the method gave poor predictions of water cut when the datasets were not scaled before inverse modeling.
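A single-level 2D Haar step illustrates the kind of transform applied to the multiwell data matrix (rows = wells/data types, columns = time), followed by keeping only the largest-magnitude coefficients for the match. The wavelet family, decomposition level, and data below are illustrative assumptions, not the paper's choices.

```python
import numpy as np

def haar2d(x):
    """Single-level 2D Haar transform (x must have even dimensions)."""
    lo_r = (x[0::2] + x[1::2]) / np.sqrt(2)   # row averages
    hi_r = (x[0::2] - x[1::2]) / np.sqrt(2)   # row details
    rows = np.vstack([lo_r, hi_r])
    lo_c = (rows[:, 0::2] + rows[:, 1::2]) / np.sqrt(2)
    hi_c = (rows[:, 0::2] - rows[:, 1::2]) / np.sqrt(2)
    return np.hstack([lo_c, hi_c])

# Four synthetic well responses stacked into one data matrix
wells = np.vstack([np.linspace(0, 1, 32) + 0.01 * i for i in range(4)])
coeffs = haar2d(wells)

# Keep only the k largest-magnitude wavelet coefficients for the history match
k = 8
thresh = np.sort(np.abs(coeffs).ravel())[-k]
mask = np.abs(coeffs) >= thresh
```

Because the Haar transform is orthogonal, total energy is preserved, and smooth, correlated well responses concentrate their energy in few coefficients — which is what makes the thresholded subset a compact target for inversion.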

12.
Reservoir characterization needs the integration of various data through history matching, especially dynamic information such as production or four-dimensional seismic data. To update geostatistical realizations, the local gradual deformation method can be used. However, history matching is a complex inverse problem, and the computational effort, in terms of the number of reservoir simulations required in the optimization procedure, increases with the number of matching parameters. History matching large fields with many parameters has been an ongoing challenge in reservoir simulation. This paper presents a new technique to improve history matching with the local gradual deformation method using gradient-based optimization. The new approach is based on approximate derivative calculations using the partial separability of the objective function. The objective function is first split into local components, and only the most influential parameters in each component are used for the derivative computation. A perturbation design is then proposed to simultaneously compute all the derivatives with only a few simulations. This new technique makes history matching using the local gradual deformation method with large numbers of parameters tractable.

13.
Numerical simulation of coal reservoirs is an important tool for productivity prediction, evaluation of surface development prospects, and optimization of production processes. Based on the development history of coal reservoir simulation software, the main steps of coal reservoir numerical simulation using the third-generation dedicated software COMET3 are described, and an example demonstrates history matching of coalbed drainage-and-production data and productivity prediction for coalbed methane wells. The results show that COMET3 accounts for characteristic features of coal reservoirs such as the triple-porosity structure, dual-diffusion behavior, and the matrix shrinkage and swelling effect, and can to a large extent invert and correct coal reservoir test data, improving the objectivity of coal reservoir characterization and coalbed methane well productivity prediction.

14.
While 3D seismic has long been the basis for geological model building, time-lapse seismic has primarily been used in a qualitative manner to assist in monitoring reservoir behavior. With the growing acceptance of assisted history matching methods has come an equally rising interest in incorporating 3D or time-lapse seismic data into the history matching process in a more quantitative manner. The common approach in recent studies has been to invert the seismic data to elastic or dynamic reservoir properties, typically acoustic impedance or saturation changes. Here we consider the use of both 3D and time-lapse seismic amplitude data based on a forward modeling approach that does not require any inversion in the traditional sense. Advantages of such an approach may be better estimation and treatment of model and measurement errors, the combination of two inversion steps into one by removing the explicit inversion to state-space variables, and more consistent dependence on the validity of the assumptions underlying the inversion process. In this paper, we introduce this approach with the use of an assisted history matching method in mind. Two ensemble-based methods, the ensemble Kalman filter and the ensemble randomized maximum likelihood method, are used to investigate issues arising from the use of seismic amplitude data, and possible solutions are presented. Experiments with a 3D synthetic reservoir model show that additional information on the distribution of reservoir fluids, and on rock properties such as porosity and permeability, can be extracted from the seismic data. The roles of localization and iterative methods are discussed in detail.

15.
This paper presents a history matching workflow using both production and 4D seismic data, where the uncertainty of the seismic data for history matching comes from Bayesian seismic waveform inversion. We use a synthetic model and perform two seismic surveys, one before the start of production and the second after one year of production. From the first seismic survey, we estimate the contrast in slowness squared (with uncertainty) and use this estimate to generate an initial ensemble of porosity and permeability fields. This ensemble is then updated using the second seismic survey (after inversion to contrasts) and production data with an iterative ensemble smoother. The impact on history matching results of using different uncertainty estimates for the seismic data is investigated. From the Bayesian seismic inversion, we obtain a covariance matrix for the uncertainty, and we compare using the full covariance matrix with using only its diagonal. We also compare with a simplified uncertainty estimate that does not come from the seismic inversion. The results indicate that it is important not to underestimate the noise in seismic data and that having information about the correlation of the errors in seismic data can in some cases improve the results.

16.
The performance of the ensemble Kalman filter (EnKF) for continuous updating of facies locations and boundaries in a reservoir model, based on production and facies data for a 3D synthetic problem, is presented. The occurrence of the different facies types is treated as a random process, and the initial distribution was obtained by truncating a bi-Gaussian random field. Because facies data are highly non-Gaussian, re-parameterization was necessary in order to use the EnKF algorithm for data assimilation; two Gaussian random fields are updated in lieu of the static facies parameters. The problem of history matching applied to facies is difficult because (1) constraints on facies observations at wells are occasionally violated when production data are assimilated; (2) excessive reduction of variance seems to be a bigger problem with facies than with Gaussian random permeability and porosity fields; and (3) the relationship between facies variables and data is so highly non-linear that the final facies field does not always honor early production data well. Consequently, three issues are investigated in this work. Is it possible to iteratively enforce facies constraints when updates due to production data have caused them to be violated? Can localization of adjustments be used for facies to prevent collapse of the variance during the data-assimilation period? Is a forecast from the final state better than a forecast from time zero using the final parameter fields? To investigate these issues, a 3D reservoir simulation model is coupled with the EnKF technique for data assimilation. One approach to enforcing the facies constraints is continuous iteration on all available data, which may lead to inconsistent model states, incorrect weighting of the production data, and incorrect adjustment of the state vector. A sequential EnKF in which the dynamic and static data are assimilated sequentially is presented, and this approach seems to solve the problems highlighted above.
When the ensemble size is small compared to the number of independent data, localized adjustment of the state vector is a very important technique that may be used to mitigate loss of rank in the ensemble. Implementing a distance-based localization of the facies adjustment appears to mitigate the problem of variance deficiency in the ensembles by ensuring that sufficient variability is maintained throughout the data assimilation period. Finally, when data are assimilated without localization, the prediction results appear to be independent of the starting point. When localization is applied, it is better to predict from the start using the final parameter field rather than continue from the final state.
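The re-parameterization described above — carrying two Gaussian fields in the state vector and deriving facies by truncation — can be sketched as follows. The grid size and the particular truncation rule are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two Gaussian random fields are the variables the EnKF actually updates;
# here they are white noise for simplicity (a real case would be correlated).
ny, nx = 20, 20
y1 = rng.normal(size=(ny, nx))
y2 = rng.normal(size=(ny, nx))

# Truncation rule (illustrative): the sign pattern of (y1, y2) picks a facies.
# After each Gaussian update, the facies field is re-derived the same way.
facies = np.where(y1 < 0, 0, np.where(y2 < 0, 1, 2))
```

Because the EnKF only ever sees the Gaussian fields y1 and y2, its Gaussian update assumptions are respected, while the highly non-Gaussian facies field is recovered deterministically from them.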

17.
The ensemble Kalman filter (EnKF) has become a popular method for history matching production and seismic data in petroleum reservoir models. However, it is known that EnKF may fail to give acceptable data matches especially for highly nonlinear problems. In this paper, we introduce a procedure to improve EnKF data matches based on assimilating the same data multiple times with the covariance matrix of the measurement errors multiplied by the number of data assimilations. We prove the equivalence between single and multiple data assimilations for the linear-Gaussian case and present computational evidence that multiple data assimilations can improve EnKF estimates for the nonlinear case. The proposed procedure was tested by assimilating time-lapse seismic data in two synthetic reservoir problems, and the results show significant improvements compared to the standard EnKF. In addition, we review the inversion schemes used in the EnKF analysis and present a rescaling procedure to avoid loss of information during the truncation of small singular values.
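The multiple-data-assimilation idea can be sketched for a linear-Gaussian scalar toy problem, where assimilating the same datum na times with the error variance inflated by na is statistically equivalent to a single standard update. All numbers below are illustrative; the forward model is the identity.

```python
import numpy as np

rng = np.random.default_rng(6)

na, ne = 4, 100                 # number of assimilations, ensemble size
r = 0.5**2                      # measurement-error variance
d_obs = 2.0                     # the observed datum
m = rng.normal(size=ne)         # prior ensemble, N(0, 1); identity forward model

for _ in range(na):
    r_i = na * r                # inflated error covariance for each pass
    a = m - m.mean()
    var = np.mean(a * a)
    k = var / (var + r_i)       # scalar Kalman gain
    pert = d_obs + np.sqrt(r_i) * rng.normal(size=ne)
    m = m + k * (pert - m)      # perturbed-observation update

# For this linear case the posterior is ~N(d_obs * 1/(1+r), r/(1+r)),
# i.e. mean 1.6 and std ~0.447, the same as one update with the original r.
```

The inflation by na is what makes the na small updates compose into exactly one full-strength update in the linear-Gaussian limit; in nonlinear problems the smaller steps relinearize around better estimates, which is the source of the reported improvement.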

18.
There are several issues to consider when using ensemble smoothers to condition reservoir models on rate data. The values in a time series of rate data contain redundant information that may lead to poorly conditioned inversions and thereby affect the stability of the numerical computation of the update. A time series of rate data typically has measurement errors that are correlated in time, and neglecting the correlations leads to too strong an impact from conditioning on the rate data and possible ensemble collapse. The total number of rate data included in the smoother update will typically exceed the ensemble size, and special care is needed to ensure numerically stable results. We force the reservoir model with production rate data derived from the observed production, and the further conditioning on the same rate data implies that we use the data twice. This paper discusses strategies for conditioning reservoir models on rate data using ensemble smoothers. In particular, the significant redundancy in the rate data makes it possible to subsample them. The alternative to subsampling is to model the unknown measurement error correlations and specify the full measurement error covariance matrix. We demonstrate the proposed strategies using different ensemble smoothers with the Norne full-field reservoir model.
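A common way to model measurement errors that are correlated in time, consistent with the discussion above, is an exponential (AR(1)-type) covariance for the rate series; the variance and correlation parameter below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Exponential error-correlation model for a rate time series of length nt:
# Cov(e_i, e_j) = sigma^2 * rho^|i-j|
nt = 50
sigma, rho = 10.0, 0.9          # error std and lag-1 correlation (illustrative)
t = np.arange(nt)
cd = sigma**2 * rho ** np.abs(t[:, None] - t[None, :])

# The eigenvalue spectrum shows why the series carries far fewer effective
# independent data than nt: most eigenvalues are small relative to the largest.
eigvals = np.linalg.eigvalsh(cd)
```

Using the full cd in the smoother down-weights the redundant, strongly correlated samples; subsampling the series achieves a similar effect by simply discarding them.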

19.
Simulating seismic data from a 3D seismic-geologic model is an integral part of decision making throughout the exploration-to-production life cycle. Although great progress has been made in modeling the dynamic processes within reservoirs and their seismic geology, obtaining accurate simulations of seismic data from these models still faces many challenges. Seismic data are usually simulated within an earth model by applying a 1D convolution to the rock properties. However, this process generally ignores the effects of the seismic survey geometry and the overburden on the seismic signal. We examine why these factors limit the validity of 3D earth models and why the effects of the overburden and survey geometry on 3D illumination and resolution need to be included in the simulation process. We propose a new approach that combines property-model building with a new seismic simulation technique into a workflow with which interpreters can rapidly simulate 3D PSDM data that incorporate the effects of the overburden and survey geometry on illumination and resolution. Using data from a field offshore Norway, we perturb the rock properties before simulating seismic data with illumination and resolution effects, illustrating how this approach can improve the accuracy of 3D earth models and deepen our understanding of the reservoir.
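The 1D convolution modeling mentioned in this abstract can be sketched directly: reflectivity computed from an impedance log, convolved with a zero-phase Ricker wavelet. The earth model and wavelet parameters are invented for illustration, and the survey-geometry and overburden effects that the abstract argues for are exactly what this simple sketch omits.

```python
import numpy as np

def ricker(f, dt, n):
    """Zero-phase Ricker wavelet with peak frequency f (Hz), n odd samples."""
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

# Toy impedance log with a single contrast at sample 80
imp = np.full(200, 3000.0)
imp[80:] = 4500.0

# Normal-incidence reflectivity from adjacent impedance samples
refl = (imp[1:] - imp[:-1]) / (imp[1:] + imp[:-1])

# 1D convolutional synthetic trace (30 Hz wavelet, 2 ms sampling)
trace = np.convolve(refl, ricker(30.0, 0.002, 65), mode="same")
```

The synthetic trace places a wavelet at each reflectivity spike, which is the core of 1D convolution modeling; the paper's point is that realistic illumination and resolution require going beyond this.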

20.
Ensemble-based methods are becoming popular assisted history matching techniques with a growing number of field applications. These methods use an ensemble of model realizations, typically constructed by means of geostatistics, to represent the prior uncertainty. The performance of the history matching is very dependent on the quality of the initial ensemble. However, there is a significant level of uncertainty in the parameters used to define the geostatistical model. From a Bayesian viewpoint, the uncertainty in the geostatistical modeling can be represented by a hyper-prior in a hierarchical formulation. This paper presents the first steps towards a general parametrization to address the problem of uncertainty in the prior modeling. The proposed parametrization is inspired by Gaussian mixtures, where the uncertainty in the prior mean and prior covariance is accounted for by defining weights for combining multiple Gaussian ensembles, which are estimated during the data assimilation. The parametrization was successfully tested on a simple reservoir problem in which the orientation of the major anisotropic direction of the permeability field was unknown.
