Similar Documents
20 similar documents retrieved (search time: 46 ms)
1.
This paper proposes a novel history-matching method in which reservoir structure is inverted from the dynamic fluid-flow response. The proposed workflow searches for models that match production history within a large set of prior structural model realizations. This prior set represents the structural uncertainty of the reservoir arising from interpretation uncertainty on seismic sections. To make such a search effective, we introduce a parameter space defined by a “similarity distance” that accommodates this large set of realizations. The inverse solutions are found using a stochastic search method. Realistic reservoir examples are presented to demonstrate the applicability of the proposed method.
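
A minimal sketch of how such a distance-based parameter space might be built, assuming a hypothetical `flow_response` function that maps each structural realization to a production-response vector; the pairwise response distances are embedded in a low-dimensional space with classical multidimensional scaling, which is an assumption since the paper does not specify the embedding:

```python
import numpy as np

def similarity_distance_space(realizations, flow_response, n_dims=2):
    """Embed realizations in a low-dimensional space via a response-based distance.

    realizations  : list of structural model realizations (any objects)
    flow_response : hypothetical function mapping a realization to a 1D response vector
    """
    responses = np.array([flow_response(r) for r in realizations])
    n = len(responses)
    # Pairwise "similarity distance" between flow responses (Euclidean here).
    D = np.linalg.norm(responses[:, None, :] - responses[None, :, :], axis=-1)
    # Classical multidimensional scaling: double-center the squared distances.
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    eigval, eigvec = np.linalg.eigh(B)
    order = np.argsort(eigval)[::-1][:n_dims]
    coords = eigvec[:, order] * np.sqrt(np.maximum(eigval[order], 0.0))
    return coords  # one low-dimensional coordinate per realization
```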

2.
Pre-stack reservoir parameter inversion for two-phase media
Starting from the theory of two-phase media, an objective function for pre-stack reservoir parameter inversion is established, an iterative optimization algorithm for multi-dimensional reservoir parameters is developed, and a technical workflow for pre-stack reservoir parameter inversion in two-phase media is proposed. Applications to model data and field seismic data show that the method yields small inversion errors and higher accuracy than post-stack seismic waveform inversion.

3.
The Bayesian framework is the standard approach for data assimilation in reservoir modeling. This framework involves characterizing the posterior distribution of geological parameters in terms of a given prior distribution and data from the reservoir dynamics, together with a forward model connecting the space of geological parameters to the data space. Since the posterior distribution quantifies the uncertainty in the geologic parameters of the reservoir, the characterization of the posterior is fundamental for the optimal management of reservoirs. Unfortunately, due to the large-scale, highly nonlinear properties of standard reservoir models, characterizing the posterior is computationally prohibitive. Instead, more affordable ad hoc techniques, based on Gaussian approximations, are often used for characterizing the posterior distribution. Evaluating the performance of those Gaussian approximations is typically conducted by assessing their ability to reproduce the truth within the confidence interval provided by the ad hoc technique under consideration. This has the disadvantage of conflating the approximation properties of the history matching algorithm employed with the information content of the particular observations used, making it hard to evaluate the effect of the ad hoc approximations alone. In this paper, we avoid this disadvantage by comparing the ad hoc techniques with a fully resolved state-of-the-art probing of the Bayesian posterior distribution. The ad hoc techniques whose performance we assess are based on (1) linearization around the maximum a posteriori estimate, (2) randomized maximum likelihood, and (3) ensemble Kalman filter-type methods. In order to fully resolve the posterior distribution, we implement a state-of-the-art Markov chain Monte Carlo (MCMC) method that scales well with respect to the dimension of the parameter space, enabling us to study realistic forward models, in two space dimensions, at a high level of grid refinement. Our implementation of the MCMC method provides the gold standard against which the aforementioned Gaussian approximations are assessed. We present numerical synthetic experiments where we quantify the capability of each ad hoc Gaussian approximation to reproduce the mean and the variance of the posterior distribution (characterized via MCMC) associated with a data assimilation problem. Both single-phase and two-phase (oil–water) reservoir models are considered so that fundamental differences in the resulting forward operators are highlighted. The main objective of our controlled experiments was to exhibit the substantial discrepancies in the approximation properties of standard ad hoc Gaussian approximations. Numerical investigations of the type we present here will lead to a greater understanding of the cost-efficient, but ad hoc, Bayesian techniques used for data assimilation in petroleum reservoirs and, ultimately, to improved techniques with more accurate uncertainty quantification.
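
The abstract does not name the dimension-robust MCMC sampler that was used; the sketch below shows a preconditioned Crank–Nicolson (pCN) random walk, a common choice for high-dimensional problems with Gaussian priors, under the assumptions of a zero-mean Gaussian prior N(0, C) and a hypothetical `data_misfit` function returning the negative log-likelihood:

```python
import numpy as np

def pcn_mcmc(data_misfit, prior_cov_sqrt, n_iter=10000, beta=0.1, rng=None):
    """Preconditioned Crank-Nicolson MCMC for a Gaussian prior N(0, C).

    data_misfit    : function m -> negative log-likelihood Phi(m)
    prior_cov_sqrt : matrix L with C = L @ L.T, used to draw prior samples
    """
    rng = np.random.default_rng(rng)
    dim = prior_cov_sqrt.shape[0]
    m = prior_cov_sqrt @ rng.standard_normal(dim)     # start from a prior draw
    phi = data_misfit(m)
    chain = []
    for _ in range(n_iter):
        xi = prior_cov_sqrt @ rng.standard_normal(dim)
        proposal = np.sqrt(1.0 - beta ** 2) * m + beta * xi   # prior-preserving proposal
        phi_prop = data_misfit(proposal)
        # pCN acceptance probability depends only on the likelihood ratio.
        if np.log(rng.uniform()) < phi - phi_prop:
            m, phi = proposal, phi_prop
        chain.append(m.copy())
    return np.array(chain)
```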

4.
The adaptive Gaussian mixture filter (AGM) was introduced as a robust filter technique for large-scale applications and an alternative to the well-known ensemble Kalman filter (EnKF). It consists of two analysis steps: a linear update and a weighting/resampling step. The bias of AGM is determined by two parameters: an adaptive weight parameter (which forces the weights to be more uniform to avoid filter collapse) and a predetermined bandwidth parameter that determines the size of the linear update. It has been shown that if the adaptive parameter approaches one and the bandwidth parameter decreases as the sample size increases, the filter can achieve asymptotic optimality. For large-scale applications with a limited sample size, the filter solution may be far from optimal because the adaptive parameter gets close to zero, depending on how well the samples from the prior distribution match the data. The bandwidth parameter must often be chosen significantly different from zero in order to make large enough linear updates to match the data, at the expense of bias in the estimates. In the iterative AGM we introduce here, we take advantage of the fact that the history matching problem is usually one of estimating parameters and initial conditions. If the prior distribution of initial conditions and parameters is close to the posterior distribution, it is possible to match the historical data with a small bandwidth parameter and an adaptive weight parameter that gets close to one. Hence, the bias of the filter solution is small. In order to obtain this scenario, we iteratively run the AGM throughout the data history with a very small bandwidth to create a new prior distribution from the updated samples after each iteration. After a few iterations, nearly all samples from the previous iteration match the data, and the above scenario is achieved. A simple toy problem shows that it is possible to reconstruct the true posterior distribution using the iterative version of the AGM. Then a 2D synthetic reservoir is revisited to demonstrate the potential of the new method on large-scale problems.

5.
6.
Optimization with the Gradual Deformation Method
Building reservoir models consistent with production data and prior geological knowledge is usually carried out through the minimization of an objective function. Such optimization problems are nonlinear and may be difficult to solve because they tend to be ill-posed and to involve many parameters. The gradual deformation technique was introduced recently to simplify these problems. Its main feature is the preservation of the spatial structure: perturbed realizations exhibit the same spatial variability as the starting ones. It is shown that optimizations based on gradual deformation converge exponentially to the global minimum, at least for linear problems. In addition, it appears that combining the gradual deformation parameterization with optimizations may progressively remove the structure-preservation capability of the method. This bias is negligible when deformation is restricted to a few realization chains, but grows as the number of chains tends to infinity. Since, in practice, optimization of reservoir models is limited to a small number of iterations relative to the number of gridblocks, the spatial variability is preserved. Finally, the optimization processes are implemented on the basis of the Levenberg–Marquardt method. Although the objective functions, written in terms of Gaussian white noises, are reduced to the data mismatch term, the conditional realization space can be properly sampled.
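
As a concrete illustration of the structure-preserving property, the sketch below combines two independent standard Gaussian realizations with the classic gradual deformation rule z(t) = z1·cos(πt) + z2·sin(πt), so every deformed realization keeps the N(0, 1) marginal and the spatial covariance of its inputs; the `mismatch` callback is a hypothetical stand-in for a flow-simulation objective, and the one-parameter grid search replaces the Levenberg–Marquardt machinery of the paper:

```python
import numpy as np

def gradual_deformation(z1, z2, t):
    """Combine two Gaussian realizations; any t gives a realization with the
    same mean, variance, and covariance structure (cos^2 + sin^2 = 1)."""
    return z1 * np.cos(np.pi * t) + z2 * np.sin(np.pi * t)

def optimize_deformation(z1, z2, mismatch, n_grid=201):
    """One-parameter search over the deformation parameter t in [-1, 1].

    mismatch : hypothetical objective, e.g. squared error between simulated
               and observed production for the deformed realization.
    """
    ts = np.linspace(-1.0, 1.0, n_grid)
    values = [mismatch(gradual_deformation(z1, z2, t)) for t in ts]
    best = int(np.argmin(values))
    return ts[best], values[best]

# In a chained scheme, the optimized realization becomes z1 of the next
# iteration and a fresh independent realization is drawn as z2.
```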

7.
We present a method to determine lower and upper bounds on the predicted production or any other economic objective from history-matched reservoir models. The method consists of two steps: 1) performing a traditional computer-assisted history match of a reservoir model, minimizing the mismatch between predicted and observed production data by adjusting the grid block permeability values of the model; 2) performing two optimization exercises to minimize and maximize an economic objective over the remaining field life, for a fixed production strategy, by manipulating the same grid block permeabilities but without significantly changing the mismatch obtained in step 1. This is accomplished through a hierarchical optimization procedure that limits the solution space of a secondary optimization problem to the (approximate) null space of the primary optimization problem. We applied this procedure to two different reservoir models. We performed a history match based on synthetic data, starting from a uniform prior and using a gradient-based minimization procedure. After history matching, minimization and maximization of the net present value (NPV), using a fixed control strategy, were executed as secondary optimization problems by changing the model parameters while staying close to the null space of the primary optimization problem. In other words, we optimized the secondary objective functions while requiring that optimality of the primary objective (a good history match) was preserved. This method therefore provides a way to quantify the economic consequences of the well-known fact that history matching is a strongly ill-posed problem. We also investigated how this method can be used to assess the cost-effectiveness of acquiring different data types to reduce the uncertainty in the expected NPV.
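
A schematic of the hierarchical idea under a linearized view of the primary problem: the sensitivity (Jacobian) of the history-match objective is factored with an SVD, the secondary (NPV) gradient is projected onto the approximate null space spanned by the small-singular-value directions, and the parameters are moved only along those directions so the match is approximately preserved. The names `history_jacobian` and `npv_gradient` are placeholders for simulator-derived quantities, and the tolerance is illustrative:

```python
import numpy as np

def null_space_step(m, history_jacobian, npv_gradient, step=1.0, rel_tol=1e-3, maximize=True):
    """One secondary-objective step restricted to the approximate null space
    of the primary (history-matching) problem.

    history_jacobian : sensitivities of predicted data w.r.t. parameters, shape (n_data, n_param)
    npv_gradient     : gradient of the secondary objective (e.g. NPV) w.r.t. parameters
    """
    U, s, Vt = np.linalg.svd(history_jacobian, full_matrices=True)
    n_sig = int(np.sum(s > rel_tol * s[0]))      # directions that actually affect the match
    V_null = Vt[n_sig:, :].T                     # basis of the approximate null space
    g_proj = V_null @ (V_null.T @ npv_gradient)  # project the NPV gradient onto that space
    return m + step * g_proj if maximize else m - step * g_proj
```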

8.
Reservoir management requires periodic updates of the simulation models using the production data available over time. Traditionally, validation of reservoir models with production data is done using a history matching process. Uncertainties in the data, as well as in the model, lead to a nonunique history matching inverse problem. It has been shown that the ensemble Kalman filter (EnKF) is an adequate method for predicting the dynamics of the reservoir. The EnKF is a sequential Monte Carlo approach that uses an ensemble of reservoir models. For realistic, large-scale applications, the ensemble size needs to be kept small due to computational cost. Consequently, the error space is not well covered (poor cross-correlation matrix approximations) and the updated parameter field becomes scattered and loses important geological features (for example, the contact between high- and low-permeability values). The prior geological knowledge present at the initial time is no longer found in the final updated parameter field. We propose a new approach to overcome some of the EnKF limitations. This paper shows the specifications and results of the ensemble multiscale filter (EnMSF) for automatic history matching. EnMSF replaces, at each update time, the prior sample covariance with a multiscale tree. The global dependence is preserved via the parent–child relation in the tree (nodes at adjacent scales). After constructing the tree, the Kalman update is performed. The properties of the EnMSF are presented here with a small 2D, two-phase (oil and water) twin experiment, and the results are compared to the EnKF. The advantages of using EnMSF are localization in space and scale, adaptability to prior information, and efficiency when many measurements are available. These advantages make the EnMSF a practical tool for many data assimilation problems.
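
For reference, a bare-bones EnKF analysis step of the kind the EnMSF modifies, written in the standard perturbed-observation form; `X` holds the ensemble of state/parameter vectors column-wise, and `H` is assumed to be a linear observation operator with uncorrelated observation errors:

```python
import numpy as np

def enkf_analysis(X, H, d_obs, obs_err_std, rng=None):
    """Perturbed-observation EnKF update.

    X           : ensemble matrix, shape (n_state, n_ens)
    H           : linear observation operator, shape (n_obs, n_state)
    d_obs       : observed data vector, shape (n_obs,)
    obs_err_std : observation error standard deviation (scalar)
    """
    rng = np.random.default_rng(rng)
    n_state, n_ens = X.shape
    n_obs = d_obs.size

    A = X - X.mean(axis=1, keepdims=True)             # ensemble anomalies
    HA = H @ A
    C_xd = A @ HA.T / (n_ens - 1)                      # state-data sample covariance
    C_dd = HA @ HA.T / (n_ens - 1) + obs_err_std ** 2 * np.eye(n_obs)
    K = C_xd @ np.linalg.inv(C_dd)                     # Kalman gain

    D = d_obs[:, None] + obs_err_std * rng.standard_normal((n_obs, n_ens))
    return X + K @ (D - H @ X)                         # updated ensemble
```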

9.
In this paper we present an extension of the ensemble Kalman filter (EnKF) specifically designed for multimodal systems. The EnKF data assimilation scheme is less accurate when used to approximate systems with multimodal distributions, such as reservoir facies models. The algorithm is based on the assumption that both the prior and posterior distributions can be approximated by Gaussian mixtures, and it is validated through the concept of a finite ensemble representation. The effectiveness of the approach is shown with two applications. The first example is based on the Lorenz model. In the second example, the proposed methodology, combined with a localization technique, is used to update a 2D reservoir facies model. Both applications give evidence of improved performance of the proposed method with respect to the EnKF.

10.
Simulation-based optimization methods have recently been proposed for calibrating geotechnical models from laboratory and field tests. In these methods, geotechnical parameters are identified by matching model predictions to experimental data, i.e., by minimizing an objective function that measures the difference between the two. Expensive computational models, such as finite difference or finite element models, are often required to simulate laboratory or field geotechnical tests. In such cases, simulation-based optimization might prove computationally demanding, since every evaluation of the objective function requires a new model simulation until the optimal set of parameter values is reached. This paper introduces a novel simulation-based “hybrid moving boundary particle swarm optimization” (hmPSO) algorithm that enables calibration of geotechnical models from laboratory or field data. The hmPSO has proven effective in searching for model parameter values and, unlike gradient-based optimization methods, does not require information about the gradient of the objective function. Serial and parallel implementations of hmPSO have been validated in this work against a number of benchmarks, including numerical tests, and a challenging geotechnical problem consisting of the calibration of a water infiltration model for unsaturated soils. The latter application demonstrates the potential of hmPSO for interpreting laboratory and field tests and as a tool for the general back-analysis of geotechnical case studies.
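
A minimal global-best particle swarm optimizer is sketched below only to illustrate the gradient-free search that hmPSO builds on; the hybrid moving-boundary and parallel features of hmPSO are not reproduced, the `objective` callback stands in for one model simulation per evaluation, and the inertia/acceleration coefficients are typical textbook values rather than the paper's settings:

```python
import numpy as np

def pso_minimize(objective, lower, upper, n_particles=30, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, rng=None):
    """Basic global-best PSO for box-constrained minimization.

    objective    : function mapping a parameter vector to a scalar misfit
    lower, upper : 1D arrays with the parameter bounds
    """
    rng = np.random.default_rng(rng)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size
    x = rng.uniform(lower, upper, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                                      # particle velocities
    p_best = x.copy()
    p_val = np.array([objective(p) for p in x])
    g_best = p_best[np.argmin(p_val)].copy()

    for _ in range(n_iter):
        r1, r2 = rng.uniform(size=(2, n_particles, dim))
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = np.clip(x + v, lower, upper)                      # keep particles inside the bounds
        values = np.array([objective(p) for p in x])
        improved = values < p_val
        p_best[improved], p_val[improved] = x[improved], values[improved]
        g_best = p_best[np.argmin(p_val)].copy()
    return g_best, p_val.min()
```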

11.
12.
The amount of hydrocarbon recovered can be considerably increased by finding the optimal placement of non-conventional wells. For that purpose, optimization algorithms in which the objective function is evaluated using a reservoir simulator are needed. Furthermore, for complex reservoir geologies with high heterogeneities, the optimization problem requires algorithms able to cope with the non-regularity of the objective function. In this paper, we propose an optimization methodology for determining optimal well locations and trajectories based on the covariance matrix adaptation evolution strategy (CMA-ES), which is recognized as one of the most powerful derivative-free optimizers for continuous optimization. In addition, to improve the optimization procedure, two new techniques are proposed: (a) adaptive penalization with rejection in order to handle well placement constraints and (b) incorporation of a meta-model, based on locally weighted regression, into CMA-ES, using an approximate stochastic ranking procedure, in order to reduce the number of reservoir simulations required to evaluate the objective function. The approach is applied to the PUNQ-S3 case and compared with a genetic algorithm (GA) incorporating the Genocop III technique for handling constraints. To allow a fair comparison, both algorithms are used without parameter tuning on the problem, and standard settings are used for the GA and default settings for CMA-ES. It is shown that our new approach outperforms the genetic algorithm: it leads in general to both a higher net present value and a significant reduction in the number of reservoir simulations needed to reach a good well configuration. Moreover, coupling CMA-ES with a meta-model leads to further improvement, which was around 20% for the synthetic case in this study.
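
The constraint-handling idea of rejecting infeasible candidates before any simulation is spent on them can be sketched independently of the full CMA-ES machinery; the toy evolution strategy below samples candidate well placements from a Gaussian search distribution, rejects hard-constraint violations, and penalizes remaining soft violations. Covariance adaptation, the meta-model, and the stochastic ranking of the actual CMA-ES workflow are deliberately omitted, and `npv`, `is_feasible`, and `soft_violation` are hypothetical callbacks:

```python
import numpy as np

def es_well_placement(npv, is_feasible, soft_violation, x0, sigma=50.0,
                      n_offspring=20, n_parents=5, n_iter=50, penalty=1e7, rng=None):
    """Simplified (mu, lambda) evolution strategy with rejection of infeasible wells.

    npv            : objective to maximize (one reservoir simulation per call)
    is_feasible    : hard constraint test; infeasible candidates are resampled
    soft_violation : non-negative measure of soft-constraint violation, penalized
    x0             : initial well-placement vector (e.g. concatenated coordinates)
    """
    rng = np.random.default_rng(rng)
    mean = np.asarray(x0, float)
    for _ in range(n_iter):
        offspring, scores = [], []
        while len(offspring) < n_offspring:
            cand = mean + sigma * rng.standard_normal(mean.size)
            if not is_feasible(cand):          # rejection: no simulation wasted
                continue
            offspring.append(cand)
            scores.append(-npv(cand) + penalty * soft_violation(cand))
        order = np.argsort(scores)[:n_parents]            # best penalized candidates
        mean = np.mean([offspring[i] for i in order], axis=0)
        sigma *= 0.95                                      # simple step-size decay
    return mean
```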

13.
田泽润, 李守巨, 于申. 《岩土力学》 (Rock and Soil Mechanics), 2014, 35(Z2): 508-513
Based on deformation monitoring data collected during excavation of the underground powerhouse of the Baishan pumped-storage pumping station, an inversion method for rock mass mechanical parameters based on the response surface method is proposed. The method uses a response surface function to establish the nonlinear relationship between the rock mass mechanical parameters and the deformation of the surrounding rock, with the coefficients of the response surface function determined by finite element simulation. An objective function for parameter inversion is defined, turning the inversion problem into an optimization problem, which is solved with both a quasi-Newton algorithm and a genetic algorithm to obtain the rock mass mechanical parameters of the underground powerhouse. A numerical simulation of the excavation-induced deformation of the surrounding rock, using the inverted parameters, shows that the finite element results are in good agreement with the field observations, verifying the effectiveness of the inversion method.
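
A small sketch of the response-surface workflow under stated assumptions: a handful of forward runs (here a purely illustrative `fe_displacement` surrogate standing in for the finite element model) at sampled parameter sets are fit with a quadratic response surface by least squares, and the surface, not the forward model, is then used inside the inversion objective, minimized here with a derivative-free scipy routine rather than the quasi-Newton/genetic algorithms of the paper. Parameter names, ranges, and the observed value are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def fe_displacement(p):
    """Placeholder for one finite element run returning a crown displacement (mm)."""
    E, c = p                                  # e.g. deformation modulus, cohesion
    return 250.0 / E + 3.0 / c                # illustrative surrogate, not a real FE model

def quadratic_design(params):
    """Quadratic basis [1, x_i, x_i*x_j (i<=j)] for a response surface."""
    p = np.atleast_2d(params)
    n = p.shape[1]
    cols = [np.ones(len(p))]
    cols += [p[:, i] for i in range(n)]
    cols += [p[:, i] * p[:, j] for i in range(n) for j in range(i, n)]
    return np.column_stack(cols)

# 1) Sample rock mass parameters and run the forward model once per sample.
samples = np.random.uniform([5.0, 0.5], [30.0, 2.0], size=(30, 2))
runs = np.array([fe_displacement(s) for s in samples])

# 2) Fit the response surface coefficients by least squares.
coef, *_ = np.linalg.lstsq(quadratic_design(samples), runs, rcond=None)

# 3) Invert: minimize the misfit between surface prediction and observed deformation.
observed = 12.3                               # hypothetical monitored displacement (mm)
def misfit(p):
    return ((quadratic_design(p) @ coef)[0] - observed) ** 2

result = minimize(misfit, x0=[15.0, 1.0], method="Nelder-Mead")
print("inverted parameters:", result.x)
```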

14.
15.
Bayesian modeling requires the specification of prior and likelihood models. In reservoir characterization, it is common practice to estimate the prior from a training image. This paper considers a multi-grid approach for the construction of prior models for binary variables. On each grid level we adopt a Markov random field (MRF) conditioned on values in previous levels. Parameter estimation in MRFs is complicated by a computationally intractable normalizing constant. To cope with this problem, we generate a partially ordered Markov model (POMM) approximation to the MRF and use this in the model-fitting procedure. Approximate unconditional simulation from the fitted model can easily be done by again adopting the POMM approximation to the fitted MRF. Approximate conditional simulation, for a given, easy-to-compute likelihood function, can also be performed either by the Metropolis–Hastings algorithm based on an approximation to the fitted MRF or by constructing a new POMM approximation to this approximate conditional distribution. The proposed methods are illustrated using three frequently used binary training images.

16.
The multiple-point simulation (MPS) method has been increasingly used to describe the complex geologic features of petroleum reservoirs. The MPS method is based on multiple-point statistics from training images that represent geologic patterns of the reservoir heterogeneity. The traditional MPS algorithm, however, requires the training images to be stationary in space, although the spatial distribution of geologic patterns/features is usually nonstationary. Building geologically realistic but statistically stationary training images is somewhat contradictory for reservoir modelers. In recent research on MPS, the concept of a training image has been widely extended. The MPS approach is no longer restricted by the size or the stationarity of training images; a training image can be a small geometrical element or a full-field reservoir model. In this paper, the different types of training images and their corresponding MPS algorithms are first reviewed. Then focus is placed on a case where a reservoir model exists but needs to be conditioned to well data. The existing model can be built by process-based, object-based, or any other type of reservoir modeling approach. In general, the geologic patterns in a reservoir model are constrained by depositional environment, seismic data, or other trend maps. Thus, they are nonstationary, in the sense that they are location dependent. A new MPS algorithm is proposed that can use any existing model as a training image and condition it to well data. In particular, this algorithm is a practical solution for conditioning geologic-process-based reservoir models to well data.

17.
In the analysis of petroleum reservoirs, one of the most challenging problems is to use inverse theory in the search for an optimal parameterization of the reservoir. Generally, scientists approach this problem by computing a sensitivity matrix and then performing a singular value decomposition in order to determine the number of degrees of freedom, i.e., the number of independent parameters necessary to specify the configuration of the system. Here we propose a complementary approach: it uses the concept of refinement indicators to select the degrees of freedom with the greatest sensitivity to an objective function quantifying the mismatch between measured and simulated data. We apply this approach to the problem of data integration for petrophysical reservoir characterization, where geoscientists are currently working with multimillion cell geological models. Data integration may be performed by gradually deforming (by a linear combination) a set of these multimillion-cell geostatistical realizations during the optimization process. The inversion parameters are then reduced to the coefficients of this linear combination. However, there is an infinity of geostatistical realizations to choose from, which may not be efficient with regard to operational constraints. With our new approach, a single objective-function evaluation suffices to compute refinement indicators that show which realizations might significantly improve the iterative geological model. This computation is extremely fast, as it requires only a single gradient computation via the adjoint-state approach plus dot products. Using only the most sensitive realizations from a given set, we are able to solve the optimization problem more quickly. We applied this methodology to the integration of interference test data into 3D geostatistical models.
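
A schematic numpy sketch of the indicator computation as described: once the adjoint-state gradient of the objective with respect to the current model is available, the indicator attached to each candidate realization reduces to a dot product, and the realizations are ranked by indicator magnitude. The exact indicator formula of the paper may differ; here it is taken, as an assumption, to be the absolute first-order change in the objective when a small weight on the candidate is introduced:

```python
import numpy as np

def refinement_indicators(adjoint_gradient, current_model, candidate_realizations):
    """Rank candidate geostatistical realizations by their potential to reduce the objective.

    adjoint_gradient       : dJ/dm at the current model (one adjoint solve), shape (n_cells,)
    current_model          : current model, shape (n_cells,)
    candidate_realizations : candidate realizations, shape (n_candidates, n_cells)
    """
    # Adding realization z_k with a small weight eps changes J by roughly
    # eps * g . (z_k - m); the indicator is the magnitude of that first-order change.
    perturbations = candidate_realizations - current_model
    indicators = np.abs(perturbations @ adjoint_gradient)
    ranking = np.argsort(indicators)[::-1]          # most promising realizations first
    return indicators, ranking
```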

18.
There are several issues to consider when we use ensemble smoothers to condition reservoir models on rate data. The values in a time series of rate data contain redundant information that may lead to poorly conditioned inversions and thereby influence the stability of the numerical computation of the update. A time series of rate data typically has correlated measurement errors in time, and neglecting the correlations leads to an overly strong impact from conditioning on the rate data and to possible ensemble collapse. The total number of rate data included in the smoother update will typically exceed the ensemble size, and special care needs to be taken to ensure numerically stable results. We force the reservoir model with production rate data derived from the observed production, and the further conditioning on the same rate data implies that we use the data twice. This paper discusses strategies for conditioning reservoir models on rate data using ensemble smoothers. In particular, the significant redundancy in the rate data makes it possible to subsample them. The alternative to subsampling is to model the unknown measurement error correlations and specify the full measurement error covariance matrix. We demonstrate the proposed strategies using different ensemble smoothers with the Norne full-field reservoir model.
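
A sketch of the second strategy (modeling the error correlations rather than subsampling): an exponential correlation in time is assumed for the rate-measurement errors, the full error covariance is built from it, and a single ensemble smoother update of the parameters is performed against the whole data record. The error level, correlation length, and the exponential form are illustrative assumptions:

```python
import numpy as np

def exp_time_covariance(times, error_std, corr_length):
    """Measurement error covariance with exponential correlation in time."""
    lag = np.abs(times[:, None] - times[None, :])
    return error_std ** 2 * np.exp(-lag / corr_length)

def smoother_update(M, D_pred, d_obs, C_e, rng=None):
    """One ensemble smoother (ES) update of parameters on the full rate record.

    M      : parameter ensemble, shape (n_param, n_ens)
    D_pred : predicted rate data per member, shape (n_data, n_ens)
    d_obs  : observed rate data, shape (n_data,)
    C_e    : measurement error covariance, e.g. from exp_time_covariance
    """
    rng = np.random.default_rng(rng)
    n_ens = M.shape[1]
    A = M - M.mean(axis=1, keepdims=True)
    Y = D_pred - D_pred.mean(axis=1, keepdims=True)
    C_md = A @ Y.T / (n_ens - 1)
    C_dd = Y @ Y.T / (n_ens - 1) + C_e
    K = C_md @ np.linalg.inv(C_dd)
    # Perturb observations consistently with the correlated error model.
    E = np.linalg.cholesky(C_e) @ rng.standard_normal((d_obs.size, n_ens))
    return M + K @ (d_obs[:, None] + E - D_pred)
```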

19.
In this study, we introduce the application of data mining to petroleum exploration and development to obtain high-performance predictive models and optimal classifications of geology, reservoirs, reservoir beds, and fluid properties. Data mining is a practical method for finding the characteristics of, and the inherent patterns in, massive multi-dimensional data. The data mining method is primarily composed of three loops: feature selection, model parameter optimization, and model performance evaluation. Its key techniques involve applying genetic algorithms to carry out feature selection and parameter optimization and using repeated cross-validation to obtain unbiased estimates of generalization accuracy. The optimal model is finally selected from the various algorithms tested. In this paper, the evaluation of water-flooded layers and the classification of conglomerate reservoirs in the Karamay oil field are selected as case studies to comprehensively analyze two important functions of data mining, namely predictive modeling and cluster analysis. For the evaluation of water-flooded layers, six feature subset schemes and five distinct types of data mining methods (decision trees, artificial neural networks, support vector machines, Bayesian networks, and ensemble learning) are analyzed and compared. The results clearly demonstrate that decision trees are superior to the other methods in terms of predictive model accuracy and interpretability. Therefore, a decision tree-based model is selected as the final model for identifying water-flooded layers in the conglomerate reservoir. For the reservoir classification, the classification standards produced by four types of clustering algorithms (partition-based, hierarchical, model-based, and density-based) are comparatively analyzed. The results clearly indicate that the clustering derived from the standard K-means algorithm, which is partition-based, provides the best fit to the geological characteristics of the actual reservoir and the greatest accuracy of reservoir classification. Moreover, the internal evaluation measures of this algorithm, such as compactness, efficiency, and resolution, are all better than those of the other three algorithms. Compared with traditional methods from exploration geophysics, the data mining method has obvious advantages in solving problems involving the calculation of reservoir parameters and reservoir classification using different specialized field data. Hence, the effective application of data mining methods can provide better services for petroleum exploration and development.
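
The two workflows described (a cross-validated decision tree for water-flooded layer identification, and partition-based K-means for reservoir classification) map directly onto standard library calls; the sketch below uses scikit-learn with randomly generated stand-ins for the well-log features, water-flooding labels, and reservoir attributes, and a small depth grid in place of the paper's genetic-algorithm parameter search:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X_logs = rng.normal(size=(200, 6))            # stand-in for well-log features
y_flooded = rng.integers(0, 3, size=200)      # stand-in for water-flooding levels
X_reservoir = rng.normal(size=(150, 4))       # stand-in for conglomerate reservoir attributes

# Predictive modeling: choose tree depth by cross-validation over a small grid.
best_depth, best_score = None, -np.inf
for depth in (3, 5, 7, 9):
    scores = cross_val_score(DecisionTreeClassifier(max_depth=depth, random_state=0),
                             X_logs, y_flooded, cv=5)
    if scores.mean() > best_score:
        best_depth, best_score = depth, scores.mean()
tree = DecisionTreeClassifier(max_depth=best_depth, random_state=0).fit(X_logs, y_flooded)

# Cluster analysis: partition-based K-means classification of the reservoir samples.
reservoir_classes = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_reservoir)
print("chosen depth:", best_depth, "cv accuracy: %.2f" % best_score)
```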

20.
A new procedure to integrate critical state models, including Cam–Clay and modified Cam–Clay, is proposed here. The proposed procedure makes use of the linearity of the virgin isotropic compression curve and the parallel anisotropic consolidation lines in e–ln p space, which are basic features of the formulation of critical state models. Using this algorithm, a unique final stress state may be found as a function of a single unknown for elastoplastic loading. The key equations are given in this article for the Cam–Clay and modified Cam–Clay models. The use of the Newton–Raphson iterative method to minimize residuals and obtain a converged solution is described here. This new algorithm may be applied using the assumptions of linear elasticity or non-linear elasticity within a given loading step. The new algorithm proposed here is internally consistent and has computational advantages over current numerical integration procedures. Numerical examples are presented to show the performance of the algorithm as compared to other integration algorithms. Published in 2005 by John Wiley & Sons, Ltd.
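
A compact implicit-integration sketch for modified Cam–Clay in p–q space, solving the coupled residuals for the updated stress, preconsolidation pressure, and plastic multiplier with a Newton-type root finder. It assumes linear elasticity within the step and the exponential hardening implied by linear e–ln p compression lines; the single-unknown reduction of the paper is not reproduced here, and the material constants and increment are illustrative:

```python
import numpy as np
from scipy.optimize import fsolve

# Illustrative modified Cam-Clay constants (not from the paper).
M_cs, lam, kap, e0 = 1.2, 0.2, 0.04, 1.0
K_el, G_el = 5000.0, 3000.0                      # bulk / shear moduli assumed constant in the step
theta = (1.0 + e0) / (lam - kap)                 # hardening modulus from the e-ln p lines

def yield_f(p, q, pc):
    """Modified Cam-Clay yield function in p-q space."""
    return q ** 2 / M_cs ** 2 + p * (p - pc)

def mcc_return_mapping(p_n, q_n, pc_n, deps_v, deps_q):
    """Implicitly integrate one strain increment (volumetric deps_v, deviatoric deps_q)."""
    p_tr = p_n + K_el * deps_v                   # elastic trial state
    q_tr = q_n + 3.0 * G_el * deps_q
    if yield_f(p_tr, q_tr, pc_n) <= 0.0:
        return p_tr, q_tr, pc_n                  # purely elastic step

    def residuals(x):
        p, q, pc, dlam = x
        dev_p = dlam * (2.0 * p - pc)            # plastic volumetric strain (associated flow)
        dev_q = dlam * 2.0 * q / M_cs ** 2       # plastic deviatoric strain
        return [p - (p_tr - K_el * dev_p),
                q - (q_tr - 3.0 * G_el * dev_q),
                pc - pc_n * np.exp(theta * dev_p),
                yield_f(p, q, pc)]

    p, q, pc, _ = fsolve(residuals, x0=[p_tr, q_tr, pc_n, 0.0])
    return p, q, pc

# Example: one compression increment from an isotropic state on the yield surface.
print(mcc_return_mapping(p_n=100.0, q_n=0.0, pc_n=100.0, deps_v=0.01, deps_q=0.005))
```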
