Similar Documents
20 similar documents found (search time: 15 ms)
1.
Reservoir simulation models are used both in the development of new fields and in developed fields where production forecasts are needed for investment decisions. When simulating a reservoir, one must account for the physical and chemical processes taking place in the subsurface. Rock and fluid properties are crucial when describing the flow in porous media. In this paper, the authors are concerned with estimating the permeability field of a reservoir. The problem of estimating model parameters such as permeability is often referred to as a history-matching problem in reservoir engineering. Currently, one of the most widely used methodologies addressing the history-matching problem is the ensemble Kalman filter (EnKF). EnKF is a Monte Carlo implementation of the Bayesian update problem. Nevertheless, the EnKF methodology has certain limitations that encourage the search for an alternative method. For this reason, a new approach based on graphical models is proposed and studied. In particular, the graphical model chosen for this purpose is a dynamic non-parametric Bayesian network (NPBN). This is the first attempt to approach a history-matching problem in reservoir simulation using an NPBN-based method. A two-phase, two-dimensional flow model was implemented for a synthetic reservoir simulation exercise, and initial results are shown. The methods’ performances are evaluated and compared. This paper features a completely novel approach to history matching and constitutes only the first part (part I) of a more detailed investigation. For these reasons (novelty and incompleteness), many questions are left open and a number of recommendations are formulated, to be investigated in part II of the same paper.
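A minimal numpy sketch of the stochastic EnKF analysis step that the abstract refers to (EnKF as a Monte Carlo implementation of the Bayesian update). All variable names, shapes and the linearized observation operator are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def enkf_update(M, d_obs, H, C_D, rng=np.random.default_rng(0)):
    """M: (n_param, n_ens) ensemble of permeability parameters,
    d_obs: (n_obs,) observed production data,
    H: (n_obs, n_param) linearized observation operator (assumed here),
    C_D: (n_obs, n_obs) measurement-error covariance."""
    n_obs, n_ens = d_obs.size, M.shape[1]
    D = d_obs[:, None] + rng.multivariate_normal(np.zeros(n_obs), C_D, n_ens).T  # perturbed obs
    Y = H @ M                                   # predicted data for each member
    A = M - M.mean(axis=1, keepdims=True)       # parameter anomalies
    S = Y - Y.mean(axis=1, keepdims=True)       # predicted-data anomalies
    C_MD = A @ S.T / (n_ens - 1)                # parameter-data cross-covariance
    C_DD = S @ S.T / (n_ens - 1)                # predicted-data covariance
    K = C_MD @ np.linalg.inv(C_DD + C_D)        # Kalman gain
    return M + K @ (D - Y)                      # updated (analysis) ensemble
```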

2.
The history-matching inverse problem from petroleum engineering is analysed using the Imperial College fault model. This fault model produces a challenging inverse problem and is designed to show some of the problems which can occur whilst performing history-matching calculations on complicated geologies. It is shown that there can be multiple distinct geologies which match the history data. Furthermore, it is shown that the maximum-a-posteriori estimate does not correspond to the true geology in some cases. Both of these statements are corroborated via numerical examples where the parameter spaces are ℝ, ℝ³, ℝ⁷ and ℝ¹³. In addition, it is shown that the number of matches which agree with the data increases with dimension for these examples. It is also shown that the different matches can result in different reservoir management decisions which, if incorrectly taken, would incur substantial financial penalties. All of these analyses are performed in a systematic manner, where it is shown that the standard algorithms can give a misleading answer. The history-matching problem is written as a minimisation problem, and it is shown that knowledge of all of the local minima is required. This presents significant computational issues as the resulting objective function is highly nonlinear, expensive to evaluate and multimodal. Previously used algorithms have proved to be inadequate. Parallel tempering is a method which, if run for long enough, can find all the local minima. However, as the objective is expensive, a number of algorithm modifications had to be used to ensure convergence within a reasonable time. This new information is outlined in the paper. The algorithm as implemented produced results and new insights into this problem which were not suspected before. The results produced by this algorithm for the multimodal history-matching problem are superior to all other results of which we are aware. However, a considerable amount of computation time was used within this paper, so this result does not imply that the algorithm cannot be improved upon. This algorithm not only produces good results but can be applied to all other history-matching problems. We have shown that this method provides a robust route to finding multiple local optima/solutions to the inverse problem, which is of considerable benefit to the petroleum industry. Furthermore, it is an entirely parallel algorithm, which is becoming computationally feasible for other history-matching problems.
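For illustration, a hedged sketch of parallel tempering on a cheap multimodal stand-in objective. In the paper's setting the misfit is an expensive reservoir simulation; the objective, temperatures, step sizes and iteration counts below are placeholders, not the paper's settings.

```python
import numpy as np

def misfit(x):                       # toy multimodal objective (placeholder)
    return np.sum(x**2) + 3.0 * np.sum(1.0 - np.cos(2.0 * np.pi * x))

def parallel_tempering(dim=3, temps=(0.1, 0.5, 1.0, 3.0), n_iter=5000, seed=1):
    rng = np.random.default_rng(seed)
    x = [rng.uniform(-5, 5, dim) for _ in temps]     # one chain per temperature
    f = [misfit(xi) for xi in x]
    low_states = []                                  # low-misfit states found by the cold chain
    for _ in range(n_iter):
        for k, T in enumerate(temps):                # within-chain Metropolis move
            prop = x[k] + rng.normal(0, 0.3, dim)
            fp = misfit(prop)
            if rng.random() < np.exp(-(fp - f[k]) / T):
                x[k], f[k] = prop, fp
        k = rng.integers(len(temps) - 1)             # attempt a swap between neighbouring temperatures
        if rng.random() < np.exp((1/temps[k] - 1/temps[k+1]) * (f[k] - f[k+1])):
            x[k], x[k+1], f[k], f[k+1] = x[k+1], x[k], f[k+1], f[k]
        if f[0] < 1e-2:
            low_states.append(x[0].copy())
    return x[0], f[0], low_states
```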

3.
Advances in computing power and recent progress in history matching have prompted the re-examination of previously built reservoir models. To save both engineering and CPU time, we developed four distinct algorithms that allow an existing model to be reconstructed without redoing the reservoir study. The techniques involved include optimization, relaxation, Wiener filtering, and sequential reconstruction. Essentially, they are used to identify a random function and a set of random numbers: given a random function, a family of random numbers produces a realization that is very close to the existing reservoir model. Once the random numbers are known, the existing reservoir model can be submitted to a history-matching process, either to improve the data fit or to account for newly collected data. We focus on a previously built facies reservoir model. Although we know nothing about how this model was simulated, we can identify a set of random numbers and then use multiple-point statistical simulation to construct a realization that closely reproduces the existing reservoir model. A new history-matching procedure is then run to update the existing model so that it matches the flow-rate data of two new production wells.

4.
Geostatistically based history-matching methods make it possible to devise history-matching strategies that will honor geologic knowledge about the reservoir. However, the performance of these methods is known to be impeded by slow convergence rates resulting from the stochastic nature of the algorithm. It is the purpose of this paper to introduce a method that integrates qualitative gradient information into the probability perturbation method to improve convergence. The potential of the proposed method is demonstrated on a synthetic history-matching example. The results indicate that inclusion of qualitative gradient information improves the performance of the probability perturbation method.

5.
Calculating derivatives for automatic history matching
Automatic history matching is based on minimizing an objective function that quantifies the mismatch between observed and simulated data. When using gradient-based methods for solving this optimization problem, a key point for the overall procedure is how the simulator delivers the necessary derivative information. In this paper, forward and adjoint methods for derivative calculation are discussed. Procedures for sensitivity matrix building, and for sensitivity matrix and transpose sensitivity matrix vector products, are fully described. To show the usefulness of the derivative calculation algorithms, a new variant of the gradzone analysis, which tries to address the problem of selecting the most relevant parameters for history matching, is proposed using the singular value decomposition of the sensitivity matrix. Application to a simple synthetic case shows that this procedure can reveal important information about the nature of the history-matching problem.
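As an illustration of the SVD-based parameter-selection idea mentioned above, here is a hedged numpy sketch that ranks parameters by their weight in the dominant right singular vectors of a sensitivity matrix. The energy threshold, the importance score and the random stand-in matrix are assumptions; this is not the authors' gradzone implementation.

```python
import numpy as np

def rank_parameters(G, energy=0.95):
    """G: (n_data, n_param) sensitivity matrix (from forward or adjoint runs)."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    k = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy) + 1  # retained modes
    # Score each parameter by its weight in the k dominant right singular vectors.
    importance = np.sqrt(Vt[:k].T**2 @ s[:k]**2)
    return np.argsort(importance)[::-1], importance

G = np.random.default_rng(0).normal(size=(50, 8))   # stand-in sensitivities
order, score = rank_parameters(G)
print("most influential parameters first:", order)
```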

6.
The process of reservoir history-matching is a costly task. Many available history-matching algorithms either fail to perform such a task or require a large number of simulation runs. To overcome these difficulties, we apply the Gaussian Process (GP) modeling technique to approximate the costly objective functions and to expedite finding the global optima. A GP model is a proxy, which is employed to model the input-output relationships by assuming a multi-Gaussian distribution on the output values. An infill criterion is used in conjunction with a GP model to help sequentially add the samples with potentially lower outputs. The IC fault model is used to compare the efficiency of the GP-based optimization method with other typical optimization methods for minimizing the objective function. In this paper, we present the applicability of a GP modeling approach for reservoir history-matching problems, exemplified by numerical analysis of production data from a horizontal multi-stage fractured tight gas condensate well. The results for the case studied here show quick convergence to the lowest objective values in fewer than 100 simulations for this 20-dimensional problem. This amounts to an almost 10 times faster performance compared to the Differential Evolution (DE) algorithm, which is also known to be a powerful optimization technique. Sensitivity analyses are conducted to explain the performance of the GP-based optimization technique with various correlation functions.
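A hedged sketch of GP-proxy optimization with an expected-improvement infill criterion, in the spirit of what the abstract describes. The quadratic stand-in objective, the Matérn kernel, the random candidate-pool search and the budgets are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):                      # stand-in for an expensive misfit function
    return np.sum((x - 0.3)**2, axis=-1)

rng = np.random.default_rng(0)
dim, n_init, n_infill = 4, 10, 30
X = rng.uniform(0, 1, (n_init, dim))   # initial design
y = objective(X)

for _ in range(n_infill):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    cand = rng.uniform(0, 1, (2000, dim))            # random candidate pool
    mu, sd = gp.predict(cand, return_std=True)
    imp = y.min() - mu
    z = imp / np.maximum(sd, 1e-12)
    ei = imp * norm.cdf(z) + sd * norm.pdf(z)        # expected improvement
    x_new = cand[np.argmax(ei)]                      # infill point with the best EI
    X = np.vstack([X, x_new])
    y = np.append(y, objective(x_new))

print("best misfit found:", y.min())
```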

7.
This paper reports the results of an investigation on the use of a deterministic analysis scheme combined with the ensemble smoother with multiple data assimilation (ES-MDA) method for the problem of assimilating a large number of correlated data points. This is the typical case when history-matching time-lapse seismic data in petroleum reservoir models. The motivation for the use of the deterministic analysis is twofold. First, it tends to result in a smaller underestimation of the ensemble variance after data assimilation. This is particularly important for problems with a large number of measurements. Second, the deterministic analysis avoids the factorization of a large covariance matrix required in the standard implementation of ES-MDA with the perturbed observations scheme. The deterministic analysis is tested in a synthetic history-matching problem to assimilate production and seismic data.
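For reference, a minimal numpy sketch of one standard ES-MDA update with the perturbed-observations scheme, i.e. the variant whose large covariance factorization the deterministic analysis avoids. The forward-model interface, shapes and inflation coefficient are illustrative assumptions.

```python
import numpy as np

def esmda_step(M, d_obs, forward, C_D, alpha, rng):
    """One of N_a assimilation steps; the alphas must satisfy sum(1/alpha_i) = 1.
    M: (n_param, n_ens) ensemble, d_obs: (n_obs,), forward: model -> predicted data."""
    n_obs, n_ens = d_obs.size, M.shape[1]
    D_f = np.column_stack([forward(M[:, j]) for j in range(n_ens)])      # predicted data
    E = rng.multivariate_normal(np.zeros(n_obs), alpha * C_D, n_ens).T   # inflated perturbations
    A = M - M.mean(axis=1, keepdims=True)
    S = D_f - D_f.mean(axis=1, keepdims=True)
    C_MD = A @ S.T / (n_ens - 1)
    C_DD = S @ S.T / (n_ens - 1)
    K = C_MD @ np.linalg.inv(C_DD + alpha * C_D)
    return M + K @ (d_obs[:, None] + E - D_f)
```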

8.
The prediction of fluid flows within hydrocarbon reservoirs requires the characterization of petrophysical properties. Such characterization is performed on the basis of geostatistics and history-matching; in short, a reservoir model is first randomly drawn, and then sequentially adjusted until it reproduces the available dynamic data. Two main concerns typical of the problem under consideration are the heterogeneity of rocks occurring at all scales and the use of data of distinct resolution levels. Therefore, referring to sequential Gaussian simulation, this paper proposes a new stochastic simulation method able to handle several scales for both continuous and discrete random fields. This method adds flexibility to history-matching as it boils down to the multiscale parameterization of reservoir models. In other words, reservoir models can be updated at either coarse or fine scales, or both. Parameterization adapts to the available data; the coarser the scale targeted, the smaller the number of unknown parameters, and the more efficient the history-matching process. This paper focuses on the use of variational optimization techniques driven by the gradual deformation method to vary reservoir models. Other data assimilation methods and perturbation processes could have been envisioned as well. Finally, a numerical application case is presented in order to highlight the advantages of the proposed method for conditioning permeability models to dynamic data. For simplicity, we focus on two-scale processes. The coarse scale describes the variations in the trend while the fine scale characterizes local variations around the trend. The relationships between data resolution and parameterization are investigated.
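A small illustrative sketch of the gradual deformation idea invoked above: two independent Gaussian realizations are combined so that, for any deformation parameter, the result is again a realization with the same first two moments. The grid size and parameter values are arbitrary; an outer optimization on the history-matching misfit would search over the deformation parameter.

```python
import numpy as np

def gradual_deformation(y1, y2, t):
    """y1, y2: independent standard-Gaussian realizations; t: deformation parameter."""
    return y1 * np.cos(np.pi * t) + y2 * np.sin(np.pi * t)

rng = np.random.default_rng(0)
y1, y2 = rng.normal(size=(50, 50)), rng.normal(size=(50, 50))
for t in (0.0, 0.25, 0.5):
    y = gradual_deformation(y1, y2, t)
    print(t, round(float(y.std()), 3))   # variance is preserved for every t
```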

9.
Sampling errors can severely degrade the reliability of estimates of conditional means and uncertainty quantification obtained by the application of the ensemble Kalman filter (EnKF) for data assimilation. A standard recommendation for reducing the spurious correlations and loss of variance due to sampling errors is to use covariance localization. In distance-based localization, the prior (forecast) covariance matrix at each data assimilation step is replaced with the Schur product of a correlation matrix with compact support and the forecast covariance matrix. The most important decision to be made in this localization procedure is the choice of the critical length(s) used to generate this correlation matrix. Here, we give a simple argument that the appropriate choice of critical length(s) should be based both on the underlying principal correlation length(s) of the geological model and the range of the sensitivity matrices. Based on this result, we implement a procedure for covariance localization and demonstrate with a set of distinctive reservoir history-matching examples that this procedure yields improved results over the standard EnKF implementation and over covariance localization with other choices of critical length.
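A hedged sketch of the distance-based localization step itself: the forecast covariance is replaced by its Schur (element-wise) product with a compactly supported correlation matrix built from a critical length. The Gaspari-Cohn taper shown is a common choice; the abstract's contribution concerns how that critical length should be chosen, which is not reproduced here.

```python
import numpy as np

def gaspari_cohn(r):
    """Fifth-order piecewise taper; r = distance / critical_length."""
    r = np.abs(r)
    out = np.zeros_like(r)
    a = r <= 1.0
    b = (r > 1.0) & (r <= 2.0)
    out[a] = 1 - 5/3*r[a]**2 + 5/8*r[a]**3 + 1/2*r[a]**4 - 1/4*r[a]**5
    out[b] = 4 - 5*r[b] + 5/3*r[b]**2 + 5/8*r[b]**3 - 1/2*r[b]**4 + 1/12*r[b]**5 - 2/(3*r[b])
    return out

def localize(C_f, coords, critical_length):
    """Schur product of the forecast covariance with the taper matrix."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return C_f * gaspari_cohn(d / critical_length)
```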

10.
11.
The chemistry of Earth's atmosphere during its first 2–2.5 Ga bears on several branches of geoscience including the origin of prebiotic molecules and life itself, early surface processes, the “faint young sun” problem, carbon isotope systematics, and the transition to an oxidized surface. The geologic record, as sparse as it is for this era, presents several difficulties for attempts to model the atmosphere and its changes through time. The prevailing view for the past 50 years has centered around a moderately oxidized atmosphere of CO2 and N2, and most modeling efforts have been directed at reconciling geologic data, and atmospheric and chemical constraints, with such a composition. Improvements in modeling of early Earth processes and increased knowledge of Archean geology, including new geochemical methods and data, have largely helped support this view of the early atmosphere over the last 25 years, but have also left several nagging questions unanswered. How was a sufficient reservoir (and concentration) of prebiotic molecules produced? What were the major reservoirs for carbon, and how did they develop their isotopic signatures? Is there a solution to the problem of the “faint young sun”? Why was surface oxidation delayed following the advent of oxygenic photosynthesis? Lately, some attempts at answering these questions have suggested the importance of more reducing capacity at the early Earth's surface, but without abandoning the idea of a mainly CO2–N2 atmosphere. It may be that returning to ideas of the early atmosphere current during the 1940s and earliest 1950s could help resolve some of these problems. Such an approach may not only be consistent with the atmospheres of the other terrestrial planets, but may help answer significant questions about the surface history of Mars.

12.
Gradient-based history matching algorithms can be used to adapt the uncertain parameters in a reservoir model using production data. They require, however, the implementation of an adjoint model to compute the gradients, which is usually an enormous programming effort. We propose a new approach to gradient-based history matching which is based on model reduction, where the original (nonlinear and high-order) forward model is replaced by a linear reduced-order forward model and, consequently, the adjoint of the tangent linear approximation of the original forward model is replaced by the adjoint of a linear reduced-order forward model. The reduced-order model is constructed with the aid of the proper orthogonal decomposition method. Due to the linear character of the reduced model, the corresponding adjoint model is easily obtained. The gradient of the objective function is approximated, and the minimization problem is solved in the reduced space; the procedure is iterated with the updated estimate of the parameters if necessary. The proposed approach is adjoint-free and can be used with any reservoir simulator. The method was evaluated for a waterflood reservoir with a channelized permeability field. A comparison with an adjoint-based history matching procedure shows that the model-reduced approach gives a comparable quality of history matches and predictions. The computational efficiency of the model-reduced approach is lower than that of an adjoint-based approach, but higher than that of an approach where the gradients are obtained with simple finite differences.
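As a small illustration of the building block used above, a hedged numpy sketch that extracts a proper orthogonal decomposition (POD) basis from forward-simulation snapshots and projects states into the reduced space. The snapshot data and energy threshold are placeholders.

```python
import numpy as np

def pod_basis(snapshots, energy=0.99):
    """snapshots: (n_state, n_snap) matrix of saved reservoir states."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    k = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy) + 1
    return U[:, :k], mean                     # reduced basis Phi and snapshot mean

X = np.random.default_rng(0).normal(size=(1000, 40))   # stand-in snapshots
Phi, mean = pod_basis(X)
z = Phi.T @ (X[:, [0]] - mean)                # project a state to reduced coordinates
x_rec = mean + Phi @ z                        # lift back to the full state space
```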

13.
A cluster analysis methodology is developed to recover facies realizations from observed reservoir attributes. A maximum likelihood estimator allows us to identify the most probable underlying facies using a spatial clustering algorithm. In seismic characterization, this algorithm can yield relevant geological models for subsequent history-matching studies. In history-matching procedures, it provides informative facies maps as well as starting points for further studies.

14.
A method for multiscale parameter estimation with application to reservoir history matching is presented. Starting from a given fine-scale model, coarser models are generated using a global upscaling technique where the coarse models are tuned to match the solution of the fine model. Conditioning to dynamic data is done by history-matching the coarse model. Consistently using the same resolution for both the forward and inverse problems, this model is successively refined using a combination of downscaling and history matching until a model matching the dynamic data is obtained at the finest scale. Large-scale corrections are obtained using fast models, which, combined with a downscaling procedure, provide a better initial model for the final adjustment on the fine scale. The result is thus a series of models with different resolutions, each matching the history as well as possible on its grid. Numerical examples show that this method may significantly reduce the computational effort and/or improve the quality of the solution when achieving a fine-scale match, as compared to history-matching directly on the fine scale.

15.
Overlapping and stacked features are common in GIS data. The query mode provided by the MAPGIS 6.x platform interrogates overlapping features one by one and therefore cannot meet users' needs for fast queries. This paper proposes, for the first time, a feature-list query technique to solve the problem of querying overlapping features, presents its implementation workflow, and discusses the key implementation techniques. The feature-list query not only lists, intuitively and in a table, the basic information of all overlapping features at a user-specified location, but also allows each feature's information (attribute information and graphic parameter information) to be queried in further detail; it is fast and easy to operate. Finally, a practical example demonstrates the usefulness of the technique.

16.
In recent years, many applications of history-matching methods in general and of the ensemble Kalman filter in particular have been proposed, especially in order to estimate fields that introduce uncertainty into the stochastic process defined by the dynamical system of hydrocarbon recovery. Such fields can be permeability or porosity fields, but can also be fields defined by the rock type (facies fields). The estimation of the boundaries of the geologic facies with the ensemble Kalman filter (EnKF) has been addressed, in different papers, with the aid of Gaussian random fields, which were truncated using various schemes and introduced into a history-matching process. In this paper, we estimate, within the EnKF process, the locations of three facies types that occur in a reservoir domain, with the property that any two of them can be in contact. The geological simulation model is a form of the general truncated plurigaussian method. The difference from other approaches lies in how the truncation scheme is introduced and in the observation operator of the facies types at the well locations. The projection from the continuous space of the Gaussian fields into the discrete space of the facies fields is realized through an intermediary space (a space of probabilities). This space connects the observation operator of the facies types at the well locations with the geological simulation model. We test the model using a 2D reservoir coupled with the EnKF method as a data assimilation technique. We use different geostatistical properties for the Gaussian fields and different levels of uncertainty introduced in the model parameters and in the construction of the Gaussian fields.
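A toy truncation sketch in the plurigaussian spirit: two continuous Gaussian fields are mapped to three facies codes by thresholds. The thresholds, the independence of the fields and the truncation rule are placeholders and do not reproduce the paper's scheme or its probability-space projection.

```python
import numpy as np

def truncate(g1, g2, t1=0.0, t2=0.5):
    facies = np.full(g1.shape, 2)          # facies 2 by default
    facies[g1 < t1] = 0                    # facies 0 where the first field is low
    facies[(g1 >= t1) & (g2 < t2)] = 1     # facies 1 carved out of the remainder
    return facies

rng = np.random.default_rng(0)
g1, g2 = rng.normal(size=(60, 60)), rng.normal(size=(60, 60))  # unconditioned stand-in fields
f = truncate(g1, g2)
print(np.bincount(f.ravel()))              # cell counts per facies
```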

17.
This paper proposes a novel history-matching method where reservoir structure is inverted from the dynamic fluid-flow response. The proposed workflow consists of searching for models that match production history from a large set of prior structural model realizations. This prior set represents the reservoir structural uncertainty due to interpretation uncertainty on seismic sections. To make such a search effective, we introduce a parameter space defined with a “similarity distance” for accommodating this large set of realizations. The inverse solutions are found using a stochastic search method. Realistic reservoir examples are presented to prove the applicability of the proposed method.

18.
In an attempt to derive more information on the parameters driving compaction, this paper explores the feasibility of a method utilizing data on compaction-induced subsidence. We commence by using a Bayesian inversion scheme to infer the reservoir compaction from subsidence observations. The method’s strength is that it incorporates all the spatial and temporal correlations imposed by the geology and reservoir data. Subsequently, the contributions of the driving parameters are unravelled. We apply the approach to a synthetic model of an upscaled gas field in the northern Netherlands. We find that the inversion procedure leads to coupling between the driving parameters, as it does not discriminate between the individual contributions to the compaction. The provisional assessment of the parameter values shows that, in order to identify adequate estimate ranges for the driving parameters, a proper parameter estimation procedure (Markov Chain Monte Carlo, data assimilation) is necessary.

19.
In history matching of a lithofacies reservoir model, we attempt to find multiple realizations of the lithofacies configuration that are conditional to dynamic data and representative of the model uncertainty space. This problem can be formalized in the Bayesian framework. Given a truncated Gaussian model as a prior and the dynamic data with their associated measurement error, we want to sample from the conditional distribution of the facies given the data. A relevant way to generate conditioned realizations is to use Markov chain Monte Carlo (MCMC). However, the dimension of the model and the computational cost of each iteration are two important pitfalls for the use of MCMC. Furthermore, classical MCMC algorithms mix slowly, that is, they will not explore the whole support of the posterior in the time of the simulation. In this paper, we extend the methodology already described in a previous work to the problem of history matching of a Gaussian-related lithofacies reservoir model. We first show how to drastically reduce the dimension of the problem by using a truncated Karhunen-Loève expansion of the Gaussian random field underlying the lithofacies model. Moreover, we propose an innovative criterion for choosing the number of components, based on the connexity function. Then, we show how we improve the mixing properties of a classical single MCMC, without increasing the global computational cost, by the use of parallel interacting Markov chains. Applying the dimension reduction and this innovative sampling method drastically lowers the number of iterations needed to sample efficiently from the posterior. We show the encouraging results obtained when applying the methodology to a synthetic history-matching case.
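For illustration, a hedged numpy sketch of the truncated Karhunen-Loève parameterization used above for dimension reduction: the field is written as a sum of covariance eigenpairs weighted by independent standard normals. The exponential covariance model, grid and number of retained modes are assumptions, not the paper's choices.

```python
import numpy as np

def kl_basis(coords, corr_len=10.0, variance=1.0, n_modes=20):
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    C = variance * np.exp(-d / corr_len)                 # exponential covariance (assumed)
    lam, phi = np.linalg.eigh(C)
    idx = np.argsort(lam)[::-1][:n_modes]                # keep the largest eigenvalues
    return lam[idx], phi[:, idx]

def kl_realization(lam, phi, xi):
    return phi @ (np.sqrt(lam) * xi)                     # truncated KL expansion

nx = 30
coords = np.array([(i, j) for i in range(nx) for j in range(nx)], dtype=float)
lam, phi = kl_basis(coords)
xi = np.random.default_rng(0).normal(size=lam.size)      # low-dimensional parameters to history-match
field = kl_realization(lam, phi, xi).reshape(nx, nx)
```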

20.
Well-log curve zonation using the degree of fuzziness
In well-log curve zonation, where to place a boundary is a fuzzy problem, and the degree of fuzziness can give an unambiguous, quantitative answer to such a fuzzy question. With this method, several curves can be combined to give a single, unified set of boundary picks; the computational load is small and the method is fast.
