Similar Articles (20 results found)
1.
In this paper, we describe a method of history matching in which changes to the reservoir model are constructed from a limited set of basis vectors. The purpose of this reparameterization is to reduce the cost of a Newton iteration, without altering the final estimate of model parameters and without substantially slowing the rate of convergence. The utility of a subspace method depends on several factors, including the choice and number of the subspace vectors to be used. Computational gains in efficiency result partly from a reduction in the size of the matrix system that must be solved in a Newton iteration. More important contributions, however, result from a reduction in the number of sensitivity coefficients that must be computed, reduction in the dimensions of the matrices that must be multiplied, and elimination of matrix products involving the inverse of the prior model covariance matrix. These factors affect the efficiency of each Newton iteration. Although computation of the optimal set of subspace vectors may be expensive, we show that the rate of convergence and the final results are somewhat insensitive to the choice of subspace vectors. We also show that it is desirable to start with a small number of subspace vectors and gradually increase the number at each Newton iteration until an acceptable level of data mismatch is obtained.
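To make the cost structure concrete, here is a minimal numerical sketch of a Gauss-Newton iteration restricted to a subspace, in the spirit of the method described above. Everything in it is a toy stand-in: the linear `forward` model, the SVD-based choice of basis, and the dimensions are illustrative assumptions, and the identity prior term in coefficient space assumes the square root of the prior covariance has been absorbed into the basis (which is what eliminates products with the inverse prior covariance).

```python
import numpy as np

# --- toy setup (all names and dimensions are illustrative assumptions) ---
rng = np.random.default_rng(0)
n, p, nd = 200, 5, 30          # model size, subspace size, number of data
m_prior = np.zeros(n)
G = rng.normal(size=(nd, n))   # sensitivity of a linear toy "simulator"
d_obs = G @ rng.normal(size=n) + 0.1 * rng.normal(size=nd)
Cd_inv = np.eye(nd) / 0.1**2   # inverse data covariance

def forward(m):
    return G @ m               # stand-in for the reservoir simulator

# Subspace basis: here simply the leading right singular vectors of G.
# The paper studies better (and cheaper) choices; this is only a sketch.
B = np.linalg.svd(G, full_matrices=False)[2][:p].T      # n x p

# Gauss-Newton in the p-dimensional coefficient space: m = m_prior + B a.
# The identity prior term assumes Cm^(1/2) is absorbed into B.
a = np.zeros(p)
for it in range(10):
    m = m_prior + B @ a
    r = forward(m) - d_obs
    S = G @ B                          # nd x p reduced sensitivity
    H = S.T @ Cd_inv @ S + np.eye(p)   # p x p system instead of n x n
    g = S.T @ Cd_inv @ r + a
    a -= np.linalg.solve(H, g)

m = m_prior + B @ a
r = forward(m) - d_obs
print("data mismatch:", float(r @ Cd_inv @ r))
```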

2.
3.
Recent progress on reservoir history matching: a review
History matching is a type of inverse problem in which observed reservoir behavior is used to estimate reservoir model variables that caused the behavior. Obtaining even a single history-matched reservoir model requires a substantial amount of effort, but the past decade has seen remarkable progress in the ability to generate reservoir simulation models that match large amounts of production data. Progress can be partially attributed to an increase in computational power, but the widespread adoption of geostatistics and Monte Carlo methods has also contributed indirectly. In this review paper, we will summarize key developments in history matching and then review many of the accomplishments of the past decade, including developments in reparameterization of the model variables, methods for computation of the sensitivity coefficients, and methods for quantifying uncertainty. An attempt has been made to compare representative procedures and to identify possible limitations of each.

4.
Matching seismic data in assisted history matching processes can be a challenging task. One key idea is to introduce flexibility in the choice of the parameters to be perturbed, focusing on the information provided by seismic data. Local parameterization techniques such as pilot-point or gradual deformation methods can be introduced, given their high adaptability. However, the choice of the spatial supports associated with the perturbed parameters is crucial to successfully reduce the seismic mismatch. Information derived from seismic data is sometimes used to initialize such local methods, and recent attempts to define the regions adaptively have been proposed, focusing on the mismatch between simulated and reference seismic data. However, the regions are defined manually for each optimization process. Therefore, we propose to drive the definition of the parameter support by automatically delineating the regions to be perturbed from the residual maps related to the 3D seismic data. Two methods are developed in this paper. The first clusters the residual map with classification algorithms. The second drives the generation of pilot-point locations in an adaptive way: residual maps, after proper normalization, are treated as probability density functions for the pilot-point locations. Both procedures lead to a completely adaptive and highly flexible perturbation technique for 3D seismic matching. A synthetic study based on the PUNQ test case is introduced to illustrate the potential of these adaptive strategies.
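As a rough illustration of the second method, the sketch below normalizes a toy residual map into a probability density and samples pilot-point locations from it, so that cells with large seismic mismatch are perturbed preferentially. The grid size, residual values, and number of points are invented for the example; the paper's actual normalization and sampling details may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy seismic residual map |simulated - reference| (illustrative values).
nx, ny = 50, 40
residual = np.abs(rng.normal(size=(nx, ny)))
residual[10:20, 25:35] += 3.0        # pretend a region of high mismatch

# Normalize the map so it reads as a probability density over grid cells.
pdf = residual / residual.sum()

# Draw pilot-point locations: cells with large residuals are favoured.
n_points = 8
flat_idx = rng.choice(nx * ny, size=n_points, replace=False, p=pdf.ravel())
pilot_points = np.column_stack(np.unravel_index(flat_idx, (nx, ny)))
print(pilot_points)   # (i, j) grid indices to perturb next
```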

5.
Defining representative reservoir models usually calls for a huge number of fluid flow simulations, which may be very time-consuming. Meta-models are built to alleviate this issue. They approximate a scalar function from the values simulated for a set of uncertain parameters. For time-dependent outputs, a reduced-basis approach can be considered. If the resulting meta-models are accurate, they can be called instead of the flow simulator. We propose here to investigate a specific approach named multi-fidelity meta-modeling to reduce the simulation time further. We assume that the outputs of interest are known at various levels of resolution: a fine reference level, and coarser levels for which computations are faster but less accurate. Multi-fidelity meta-models rely on co-kriging to approximate the outputs at the fine level using the values simulated at all levels. Such an approach can save simulation time by limiting the number of fine-level simulations. The objective of this paper is to investigate the potential of multi-fidelity meta-modeling for reservoir engineering. The reduced-basis approach for time-dependent outputs is extended to the multi-fidelity context. Then, comparisons with the more usual kriging approach are proposed on a synthetic case, both in terms of computation time and predictivity. Meta-models are computed to evaluate the production responses at wells and the mismatch between the data and the simulated responses (history-matching error), considering two levels of resolution. The results show that the multi-fidelity approach can outperform kriging if the target simulation time is small. Finally, its potential is demonstrated when used for history matching.
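A minimal sketch of the multi-fidelity idea follows, using a simplified two-level autoregressive scheme (fine ≈ rho × coarse + correction) as a stand-in for full co-kriging: one Gaussian process is fit to many cheap coarse runs and a second to the few fine-level residuals. The toy simulators, the fixed scaling factor `rho`, and the kernel settings are assumptions for illustration only.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)

def fine(x):   return np.sin(8 * x) * x             # expensive "fine" level
def coarse(x): return 0.8 * np.sin(8 * x) * x + 0.1 # cheap, biased level

# Many cheap coarse runs, few expensive fine runs (the multi-fidelity idea).
Xc = rng.uniform(0, 1, 40).reshape(-1, 1)
Xf = rng.uniform(0, 1, 6).reshape(-1, 1)

gp_c = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-6)
gp_c.fit(Xc, coarse(Xc).ravel())

# Simplified autoregressive link: fine ~ rho * coarse + correction GP.
rho = 1.0 / 0.8                                     # assumed scaling factor
resid = fine(Xf).ravel() - rho * gp_c.predict(Xf)
gp_d = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-6).fit(Xf, resid)

def predict_fine(X):
    return rho * gp_c.predict(X) + gp_d.predict(X)

Xt = np.linspace(0, 1, 5).reshape(-1, 1)
print(predict_fine(Xt))       # meta-model prediction at the fine level
print(fine(Xt).ravel())       # reference fine-level values
```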

6.
The conventional paradigm for predicting future reservoir performance from existing production data involves the construction of reservoir models that match the historical data through iterative history matching. This is generally an expensive and difficult task and often results in models that do not accurately assess the uncertainty of the forecast. We propose an alternative re-formulation of the problem, in which the role of the reservoir model is reconsidered. Instead of using the model to match the historical production and then forecasting, the model is used in combination with Monte Carlo sampling to establish a statistical relationship between the historical and forecast variables. The estimated relationship is then used in conjunction with the actual production data to produce a statistical forecast. This allows quantifying posterior uncertainty on the forecast variable without explicit inversion or history matching. The main rationale is that the reservoir model is highly complex and yet remains a simplified representation of the actual subsurface. As statistical relationships can generally only be constructed in low dimensions, compression and dimension reduction of the reservoir models themselves would result in further oversimplification. Conversely, production data and forecast variables are time series, which are simpler and much more amenable to dimension reduction techniques. We present a dimension reduction approach based on functional data analysis (FDA) and mixed principal component analysis (mixed PCA), followed by canonical correlation analysis (CCA) to maximize the linear correlation between the forecast and production variables. Using these transformed variables, it is then possible to apply linear Gaussian regression and estimate the statistical relationship between the forecast and historical variables. This relationship is used in combination with the actual observed historical data to estimate the posterior distribution of the forecast variable. Sampling from this posterior and reconstructing the corresponding forecast time series allows assessing uncertainty on the forecast. This workflow is demonstrated on a case based on a Libyan reservoir and compared with traditional history matching.
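The sketch below walks through a toy version of this workflow: PCA compresses ensemble history and forecast time series (standing in for FDA plus mixed PCA), CCA maximizes the linear correlation on the history side, and a linear regression maps observed history to a forecast. The exponential-decline "simulator" and all dimensions are invented for illustration, and the posterior sampling step is omitted for brevity.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)

# Monte Carlo ensemble: each prior model yields a history and a forecast
# time series (toy decline curves driven by a hidden parameter k).
n_models, nt = 200, 50
t = np.linspace(0, 1, nt)
k = rng.uniform(0.5, 2.0, n_models)
hist = np.exp(-np.outer(k, t)) + 0.01 * rng.normal(size=(n_models, nt))
fore = np.exp(-np.outer(k, 1 + t)) + 0.01 * rng.normal(size=(n_models, nt))

# Dimension reduction of both time series (stand-in for FDA + mixed PCA).
pca_h, pca_f = PCA(3), PCA(3)
H = pca_h.fit_transform(hist)
F = pca_f.fit_transform(fore)

# CCA rotates the history scores to maximize linear correlation with the
# forecast scores; a linear Gaussian regression then links the two.
cca = CCA(n_components=2).fit(H, F)
Hc = cca.transform(H)
reg = LinearRegression().fit(Hc, F)

# "Observed" history from a reference model -> statistical forecast.
d_obs = np.exp(-1.3 * t).reshape(1, -1)
hc_obs = cca.transform(pca_h.transform(d_obs))
forecast = pca_f.inverse_transform(reg.predict(hc_obs))
print(forecast.shape)   # reconstructed forecast time series
```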

7.
We present a parallel framework for history matching and uncertainty characterization based on the Kalman filter update equation, applied to reservoir simulation. The main advantage of ensemble-based data assimilation methods is that they can handle large-scale numerical models with a high degree of nonlinearity and large amounts of data, making them well suited for coupling with a reservoir simulator. However, a sequential implementation is computationally expensive, as the methods require a relatively high number of reservoir simulation runs. Therefore, the main focus of this work is to develop a parallel data assimilation framework with minimal changes to the reservoir simulator source code. In this framework, multiple concurrent realizations are computed on several partitions of a parallel machine. These realizations are further subdivided among different processors, and communication is performed at data assimilation times. Although this parallel framework is general and can be used for different ensemble techniques, we discuss the methodology and compare results of two algorithms, the ensemble Kalman filter (EnKF) and the ensemble smoother (ES). Computational results show that the absolute runtime is greatly reduced using a parallel implementation versus a serial one. In particular, a parallel efficiency of about 35 % is obtained for the EnKF, and an efficiency of more than 50 % is obtained for the ES.
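For reference, a minimal sketch of the EnKF analysis step used in such a framework is given below; the forecast step (one simulation per realization) is the embarrassingly parallel part the paper distributes. The linear toy "simulator" and ensemble sizes are assumptions, and details such as localization and inflation are omitted.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy ensemble: Ne realizations of Nm uncertain model parameters.
Ne, Nm, Nd = 100, 50, 10
M = rng.normal(size=(Nm, Ne))                 # prior ensemble (columns)
G = rng.normal(size=(Nd, Nm)) / np.sqrt(Nm)   # stand-in linear "simulator"
Cd = 0.05 * np.eye(Nd)
d_obs = G @ rng.normal(size=Nm) + rng.multivariate_normal(np.zeros(Nd), Cd)

# Forecast step: run the simulator on every realization (this is the part
# that is distributed over processors; realizations are independent).
D = G @ M

# Analysis step: Kalman update built from ensemble covariances.
dM = M - M.mean(axis=1, keepdims=True)
dD = D - D.mean(axis=1, keepdims=True)
Cmd = dM @ dD.T / (Ne - 1)                    # cross-covariance
Cdd = dD @ dD.T / (Ne - 1)                    # data covariance
K = Cmd @ np.linalg.inv(Cdd + Cd)             # Kalman gain
Dp = d_obs[:, None] + rng.multivariate_normal(np.zeros(Nd), Cd, Ne).T
M_post = M + K @ (Dp - D)                     # updated ensemble
print(M_post.shape)
```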

8.
In geosciences, the complex forward problems met in geophysics, petroleum system analysis, and reservoir engineering often have to be replaced by proxies, and these proxies are then used in optimization problems. For instance, history matching of observed field data requires such a large number of reservoir simulation runs (especially when using geostatistical geological models) that it is often impossible to use the full reservoir simulator. Therefore, several techniques have been proposed to mimic the reservoir simulations using proxies. Because experimental-design approaches are typically used, most authors propose second-order polynomials. In this paper, we demonstrate that (1) neural networks can also represent second-order polynomials, so a neural network proxy is much more flexible and adaptable to the nonlinearity of the problem to be solved; and (2) first-order and second-order derivatives of the neural network can be obtained, providing gradients and Hessians for optimizers. For the inverse problems met in seismic inversion, well-by-well production data matching, optimal well placement, source rock generation, and so on, gradient methods are most often used to find an optimal solution. The paper describes how to calculate these gradients from a neural network built as a proxy. When needed, the Hessian can also be obtained from the neural network approach. On a real case study, the ability of neural networks to reproduce complex phenomena (water cuts, production rates, etc.) is shown. Comparisons with second-order polynomials (and kriging methods) demonstrate the superiority of the neural network approach as soon as nonlinear behavior is present in the responses of the simulator. The gradients and the Hessian of the neural network are compared to those of the real response function.
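As a sketch of point (2), the snippet below evaluates a tiny one-hidden-layer tanh network and its analytic gradient with respect to the inputs, verified against finite differences; the Hessian follows the same pattern. The weights are random here rather than trained on simulator runs, and the architecture is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# A tiny one-hidden-layer proxy y = w2 . tanh(W1 x + b1) + b2 (weights are
# random here; in practice they come from training on simulator runs).
nx, nh = 4, 8
W1, b1 = rng.normal(size=(nh, nx)), rng.normal(size=nh)
w2, b2 = rng.normal(size=nh), 0.3

def proxy(x):
    return w2 @ np.tanh(W1 @ x + b1) + b2

def proxy_grad(x):
    # d/dx tanh(z) = 1 - tanh(z)^2, so the gradient is analytic and cheap.
    z = np.tanh(W1 @ x + b1)
    return (w2 * (1 - z**2)) @ W1

x0 = rng.normal(size=nx)
g = proxy_grad(x0)

# Finite-difference check of the analytic gradient.
eps = 1e-6
g_fd = np.array([(proxy(x0 + eps * e) - proxy(x0 - eps * e)) / (2 * eps)
                 for e in np.eye(nx)])
print(np.max(np.abs(g - g_fd)))   # should be tiny (~1e-9)
```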

9.
A method for multiscale parameter estimation with application to reservoir history matching is presented. Starting from a given fine-scale model, coarser models are generated using a global upscaling technique in which the coarse models are tuned to match the solution of the fine model. Conditioning to dynamic data is done by history-matching the coarse model. Using the same resolution consistently for both the forward and inverse problems, the model is successively refined through a combination of downscaling and history matching until a model matching the dynamic data is obtained at the finest scale. Large-scale corrections are obtained using fast coarse models, which, combined with a downscaling procedure, provide a better initial model for the final adjustment on the fine scale. The result is a series of models at different resolutions, each matching history as well as possible on its grid. Numerical examples show that this method may significantly reduce the computational effort and/or improve the quality of the solution when achieving a fine-scale match, as compared to history-matching directly on the fine scale.
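A highly simplified sketch of the upscale/downscale loop is shown below: block averaging stands in for the paper's global, flow-based upscaling, and piecewise-constant injection stands in for its downscaling step; the "history-matching" update on the coarse grid is faked with noise purely to show the data flow.

```python
import numpy as np

rng = np.random.default_rng(6)

# Fine-scale log-permeability field (toy values).
fine = rng.normal(size=(32, 32))

def upscale(field, f=2):
    # Block averaging; the paper uses a tuned, global upscaling instead.
    nx, ny = field.shape
    return field.reshape(nx // f, f, ny // f, f).mean(axis=(1, 3))

def downscale(field, f=2):
    # Piecewise-constant injection; the paper adds fine-scale detail
    # consistent with the coarse update.
    return np.kron(field, np.ones((f, f)))

coarse = upscale(fine)                  # 16 x 16: cheap to history-match
# Pretend the coarse history match shifted the coarse model:
coarse_matched = coarse + 0.1 * rng.normal(size=coarse.shape)

# Push the large-scale correction down to the fine grid as a better
# starting point for the final fine-scale adjustment.
fine_updated = fine + downscale(coarse_matched - coarse)
print(fine_updated.shape)
```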

10.
11.
A method for history matching using an in-house compositional petroleum reservoir simulator with multipoint flux approximation is presented. This method is used for the estimation of unknown reservoir parameters, such as permeability and porosity, based on production data and inverted seismic data. The limited-memory Broyden–Fletcher–Goldfarb–Shanno method is employed for minimization of the objective function, which represents the difference between simulated and observed data. In this work, we present the key features of the algorithm for calculating the gradients of the objective function based on adjoint variables. The test example shows that the method is applicable to cases with anisotropic permeability fields, multipoint flux approximation, and arbitrary fluid compositions.
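A minimal sketch of the outer optimization loop is given below, with a linear least-squares toy objective standing in for the simulated-vs-observed mismatch and its analytically known gradient standing in for the adjoint-computed one; limited-memory BFGS is then just a call to SciPy's L-BFGS-B implementation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Toy quadratic mismatch J(m) = 0.5 ||G m - d||^2 standing in for the
# simulated-vs-observed difference over reservoir parameters m.
G = rng.normal(size=(30, 12))
d = G @ rng.normal(size=12)

def objective(m):
    r = G @ m - d
    return 0.5 * r @ r

def gradient(m):
    # For this linear toy model the "adjoint" gradient is simply G^T r;
    # in the paper it comes from the adjoint-variable computation.
    return G.T @ (G @ m - d)

res = minimize(objective, x0=np.zeros(12), jac=gradient, method="L-BFGS-B")
print(res.fun, res.nit)   # final mismatch and iteration count
```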

12.
Gradient-based history matching algorithms can be used to adapt the uncertain parameters in a reservoir model using production data. They require, however, the implementation of an adjoint model to compute the gradients, which is usually an enormous programming effort. We propose a new approach to gradient-based history matching which is based on model reduction, where the original (nonlinear and high-order) forward model is replaced by a linear reduced-order forward model and, consequently, the adjoint of the tangent linear approximation of the original forward model is replaced by the adjoint of the linear reduced-order forward model. The reduced-order model is constructed with the aid of the proper orthogonal decomposition method. Due to the linear character of the reduced model, the corresponding adjoint model is easily obtained. The gradient of the objective function is approximated, and the minimization problem is solved in the reduced space; the procedure is iterated with the updated estimate of the parameters if necessary. The proposed approach is adjoint-free and can be used with any reservoir simulator. The method was evaluated for a waterflood reservoir with a channelized permeability field. A comparison with an adjoint-based history matching procedure shows that the model-reduced approach gives a comparable quality of history matches and predictions. The computational efficiency of the model-reduced approach is lower than that of an adjoint-based approach, but higher than that of an approach where the gradients are obtained with simple finite differences.
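The core of the approach, building a reduced basis by proper orthogonal decomposition of model snapshots, can be sketched as follows. The random low-rank snapshot matrix and the energy threshold are illustrative assumptions; in the paper the snapshots come from forward simulations, and the reduced basis is used to project the forward and adjoint models.

```python
import numpy as np

rng = np.random.default_rng(8)

# Snapshot matrix: each column is a saved state of the (toy) forward model.
n_state, n_snap = 500, 40
snapshots = rng.normal(size=(n_state, 5)) @ rng.normal(size=(5, n_snap))
snapshots += 0.01 * rng.normal(size=(n_state, n_snap))   # small noise

# POD basis = leading left singular vectors of the centered snapshots.
mean = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1   # keep 99.9 % of the energy
Phi = U[:, :r]                                # n_state x r reduced basis

# A state x is approximated as mean + Phi a with a = Phi^T (x - mean);
# the forward and adjoint models are then projected onto span(Phi).
x = snapshots[:, [0]]
a = Phi.T @ (x - mean)
print(r, np.linalg.norm(x - (mean + Phi @ a)) / np.linalg.norm(x))
```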

13.
Hydrocarbon reservoir modelling and characterisation is a challenging subject within the oil and gas industry due to the lack of well data and the natural heterogeneity of the Earth's subsurface. Integrating historical production data into the geo-modelling workflow, commonly designated as history matching, allows better reservoir characterisation and the possibility of predicting reservoir behaviour. We present herein a geostatistical multi-objective history matching methodology. It starts with the generation of an initial ensemble of the subsurface petrophysical property of interest through stochastic sequential simulation. Each model is then ranked according to the match between its dynamic response, after fluid flow simulation, and the available observed historical production data. This enables building regionalised Pareto fronts and defining a large ensemble of optimal subsurface Earth models that fit all the observed production data without compromising the exploration of the uncertainty space. The proposed geostatistical multi-objective history matching technique is successfully applied to a benchmark synthetic reservoir dataset, the PUNQ-S3, where 12 objectives are targeted.
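A small sketch of the ranking step is given below: given a matrix of mismatch values (one row per stochastic model, one column per objective), it extracts the set of non-dominated models, i.e., a Pareto front. The scores here are random stand-ins; the paper builds such fronts per region from actual flow-simulation mismatches.

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy ensemble: 100 stochastic models, each scored on 12 mismatch
# objectives (e.g. per-well or per-region production misfits).
scores = rng.uniform(size=(100, 12))

def pareto_front(S):
    """Indices of models not dominated in all objectives (lower = better)."""
    n = len(S)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if keep[i]:
            # Remove every model that model i strictly dominates.
            worse = np.all(S >= S[i], axis=1) & np.any(S > S[i], axis=1)
            keep &= ~worse
    return np.flatnonzero(keep)

front = pareto_front(scores)
print(len(front), "non-dominated models kept for the posterior ensemble")
```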

14.
15.
We use a simple 2D model of a layered reservoir with three unknown parameters: the throw of a fault, and the high and low permeabilities. We then consider three different cases in which two parameters are kept fixed and the third is varied within a specific range. Using a weighted sum of squares of the difference in production as the objective function, we plot the objective against the varying parameter for each case. The plots mostly show a complex function with multiple minima. We find that geological 'symmetry' and vertical spreading are among the sources of non-monotonicity in the production and transmissibility curves. These result in a multi-modal objective function and, consequently, non-unique history matches. The behaviour of the system in the forecast period is also studied, which shows that a well history-matched model can still give a bad forecast.
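The experiment can be sketched as follows: a toy non-monotonic "simulator" is swept over one parameter while the weighted sum-of-squares mismatch is recorded, reproducing the multiple-minima behaviour described above. The production function, weights, and parameter range are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(10)

# Weighted sum-of-squares mismatch between observed and simulated
# production, swept over one varying parameter (the other two fixed).
t = np.linspace(0, 3, 60)

def production(throw):                 # toy non-monotonic "simulator"
    return np.exp(-t) * (1 + 0.5 * np.sin(throw * t))

w = 1.0 / 0.05**2                      # weights = 1 / variance
d_obs = production(2.0) + 0.02 * rng.normal(size=t.size)

throws = np.linspace(0.5, 6.0, 200)
J = [w * np.sum((production(th) - d_obs) ** 2) for th in throws]

# Plotting J against `throws` reveals multiple local minima,
# hence non-unique history matches.
print(throws[int(np.argmin(J))])
```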

16.
Calculating derivatives for automatic history matching
Automatic history matching is based on minimizing an objective function that quantifies the mismatch between observed and simulated data. When using gradient-based methods to solve this optimization problem, a key point for the overall procedure is how the simulator delivers the necessary derivative information. In this paper, forward and adjoint methods for derivative calculation are discussed. Procedures for building the sensitivity matrix and for computing sensitivity-matrix and transpose-sensitivity-matrix vector products are fully described. To show the usefulness of the derivative calculation algorithms, a new variant of gradzone analysis, which addresses the problem of selecting the most relevant parameters for history matching, is proposed using the singular value decomposition of the sensitivity matrix. Application to a simple synthetic case shows that this procedure can reveal important information about the nature of the history-matching problem.
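As a sketch of the proposed gradzone variant, the snippet below takes a toy, low-rank sensitivity matrix, computes its singular value decomposition, and uses the leading right singular vectors to score how strongly each parameter is resolved by the data. The rank threshold and scoring rule are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy sensitivity matrix: derivative of each datum w.r.t. each parameter.
n_data, n_param = 40, 25
S = rng.normal(size=(n_data, 3)) @ rng.normal(size=(3, n_param))  # low rank

U, s, Vt = np.linalg.svd(S, full_matrices=False)

# Singular values indicate how many parameter combinations the data
# resolve; the right singular vectors show which parameters each mixes.
resolved = int(np.sum(s > 1e-8 * s[0]))
influence = np.sum(Vt[:resolved] ** 2, axis=0)    # per-parameter weight
print(resolved, np.argsort(influence)[::-1][:5])  # 5 most relevant params
```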

17.
Liang Caifang. Xinjiang Geology, 2003, 21(3): 375-376
In recent years, we have made full use of advanced seismic techniques, in particular 3D visualization and 2D/3D crossplot analysis, to analyze the anomaly signatures of various 3D seismic data volumes. Starting from the factors that control paleokarst development (paleostructure, i.e., paleogeomorphology and paleodrainage) [1], we have made a useful preliminary attempt at carbonate reservoir prediction in the Tahe oilfield, obtained staged results, and achieved good effects when applying them to oilfield exploration and development. 1. Basic workflow. Data preparation: the basic data required are a 3D amplitude-preserved seismic volume, a migrated volume, and an interpreted geological horizon volume. Using seismic processing and interpretation software, a coherence attribute volume is extracted from the 3D amplitude-preserved volume, and impedance inversion is performed to obtain a 3D acoustic impedance volume. Interpreting and identifying marker horizons: combining regional geology, drilling…

18.
Three-dimensional modelling of meandering-river channel reservoir architecture in the Gudong oilfield, Jiyang Depression
The upper member of the Guantao Formation in western Block 7 of the Gudong oilfield, Jiyang Depression, is a typical meandering-river deposit, in which the Ng522 single layer develops two large point-bar sand bodies. On the basis of systematic characterization of the point-bar architecture, a geometric modelling method for reservoir architectural interfaces was explored. The interface models were embedded into a facies model built on a 3D structured grid, establishing a 3D reservoir architecture model of the 26-295 well area that more faithfully represents the actual subsurface. The model reproduces the spatial distribution of the architectural units and interfaces within the genetic microfacies and meets the needs of 3D reservoir numerical simulation. The lateral-accretion beds inside a point bar dip toward the abandoned channel and extend to about two thirds of the distance down from the top of the point bar, so the point-bar sand body behaves as a "semi-connected body". A pattern of remaining-oil distribution inside the point bar was established, and measures for tapping this potential are proposed. The method has performed well in the study area and can be extended to other, similar oilfields; it is significant both for enriching reservoir geology theory and for improving oilfield development efficiency.

19.
In the reservoir characterization study of the T-II oil and gas reservoir in Xidaliya, Xinjiang, geological and well-log interpretation was first carried out; on this basis, geostatistical methods were applied to build a 3D reservoir geological model. A "two-step modelling" strategy was adopted: first the sand-body framework was built, and then the property parameters were modelled. This approach theoretically reduces the interpretation conflicts and errors between the sand bodies and the property parameters and improves the accuracy of the property model. The property model was checked against actual production performance, confirming that the modelling results are correct.

20.
A new procedure to integrate critical state models including Cam–Clay and modified Cam–Clay is proposed here. The proposed procedure makes use of the linearity of the virgin isotropic compression curve and the parallel anisotropic consolidation lines in e–ln p space, which are basic features of the formulation of critical state models. Using this algorithm, a unique final stress state may be found as a function of a single unknown for elastoplastic loading. The key equations are given in this article for the Cam–Clay and modified Cam–Clay models. The use of the Newton–Raphson iterative method to minimize residuals and obtain a converged solution is described here. This new algorithm may be applied using the assumptions of linear elasticity or non-linear elasticity within a given loading step. The new algorithm proposed here is internally consistent and has computational advantages over the current numerical integration procedures. Numerical examples are presented to show the performance of the algorithm as compared to other integration algorithms. Published in 2005 by John Wiley & Sons, Ltd.
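A schematic of the single-unknown Newton–Raphson idea is sketched below for the modified Cam–Clay yield surface. The return-mapping expressions and hardening law here are deliberately simplified toy stand-ins (the paper derives the exact key equations from the e–ln p linearity), so only the solution structure, one scalar residual driven to zero by Newton iteration, should be read from it.

```python
import numpy as np

# Modified Cam-Clay yield function f = q^2/M^2 + p*(p - pc); the paper
# reduces the elastoplastic update to a single unknown, solved here with
# Newton-Raphson. All material values and return expressions are toy
# assumptions for illustration.
M, pc0 = 1.2, 100.0
p_tr, q_tr = 120.0, 60.0          # elastic trial stress invariants
K, mu, theta = 5000.0, 3000.0, 0.05  # bulk/shear moduli, toy hardening rate

def residual(dlam):
    # Stress return driven by the plastic multiplier dlam (the single
    # unknown); an exponential law stands in for e-ln p hardening.
    p = p_tr / (1 + 2 * K * dlam)                  # toy volumetric return
    q = q_tr / (1 + 6 * mu * dlam / M**2)          # deviatoric return
    pc = pc0 * np.exp(theta * dlam * (2 * p - pc0))  # toy hardening
    return q**2 / M**2 + p * (p - pc)              # yield consistency

dlam, h = 0.0, 1e-8
for _ in range(50):
    r = residual(dlam)
    if abs(r) < 1e-10:
        break
    dr = (residual(dlam + h) - r) / h              # numerical derivative
    dlam -= r / dr                                 # Newton-Raphson step

print(dlam, residual(dlam))   # converged multiplier, residual ~ 0
```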
