Similar Articles

20 similar articles found.
1.
In a previous paper, we developed a theoretical basis for parameterization of reservoir model parameters based on truncated singular value decomposition (SVD) of the dimensionless sensitivity matrix. Two gradient-based algorithms based on truncated SVD were developed for history matching. In general, the best of these “SVD” algorithms requires roughly half the number of equivalent reservoir simulation runs required by the limited memory Broyden–Fletcher–Goldfarb–Shanno (LBFGS) algorithm. In this work, we show that when combining SVD parameterization with the randomized maximum likelihood method, we can achieve significant additional computational savings by history matching all models simultaneously using an SVD parameterization based on a particular sensitivity matrix at each iteration. We present two new algorithms based on this idea: one that relies only on updating the SVD parameterization at each iteration, and one that adds an inner iteration based on an adjoint gradient, during which the truncated SVD parameterization does not vary. Results generated with our algorithms are compared with results obtained from the ensemble Kalman filter (EnKF). Finally, we show that by combining EnKF with the SVD algorithm, we can improve the reliability of EnKF estimates.
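The core reparameterization step is easy to sketch. Below is a minimal numpy illustration (the matrix sizes, the prior, and the truncation rank are illustrative assumptions, not the authors' setup): history matching then searches over the few coefficients alpha instead of the full parameter vector.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical dimensionless sensitivity matrix G (n_data x n_params).
    G = rng.standard_normal((50, 200))

    # Truncated SVD: keep the k leading right singular vectors as a basis
    # for the parameter subspace the data are actually sensitive to.
    k = 10
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    Vk = Vt[:k].T                    # shape (n_params, k)

    # Reparameterize: m = m_prior + Vk @ alpha, so the optimizer works
    # with k coefficients instead of all 200 grid-block parameters.
    m_prior = np.zeros(200)
    alpha = rng.standard_normal(k)
    m = m_prior + Vk @ alpha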

2.
With multiscale permeability estimation, one does not select a parameterization prior to the estimation. Instead, one performs a hierarchical search for the right parameterization while solving a sequence of estimation problems with increasing parameterization dimension. In some previous works on the subject, the same refinement is applied all over the porous medium. This may lead to over-parameterization and, subsequently, to unrealistic permeability estimates and excessive computational work. With adaptive multiscale permeability estimation, the new parameterization at an arbitrary stage in the estimation sequence is such that new degrees of freedom are not necessarily introduced all over the porous medium. The aim is to introduce new degrees of freedom only where warranted by the data. In this paper, we introduce a novel adaptive multiscale estimation. The approach is used to estimate absolute permeability from two-phase pressure data in several numerical examples.
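A toy Python sketch of the adaptive idea on a 1-D zonation (the indicator values and the tolerance are hypothetical; the paper's actual refinement criterion is not specified in the abstract): cells are split into two zones only where an indicator of data support exceeds a tolerance.

    import numpy as np

    def refine_where_warranted(cells, indicator, tol):
        """Split a permeability zone only where the refinement indicator
        (e.g., sensitivity of the data mismatch to a split) exceeds tol;
        elsewhere the coarse zonation is kept."""
        out = []
        for cell, ind in zip(cells, indicator):
            if ind > tol:
                mid = 0.5 * (cell[0] + cell[1])
                out.extend([(cell[0], mid), (mid, cell[1])])  # new DOF here
            else:
                out.append(cell)                              # keep coarse
        return out

    zones = refine_where_warranted([(0.0, 0.5), (0.5, 1.0)], [3.2, 0.1], 1.0)
    # -> [(0.0, 0.25), (0.25, 0.5), (0.5, 1.0)]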

3.
An iterative ensemble Kalman filter for reservoir engineering applications
This study examines the applicability of ensemble Kalman filtering techniques to history matching. The ensemble Kalman filter (EnKF) is now often applied to this problem. However, the traditional EnKF assumes normality of the distribution and relies on a linear update in the analysis equations. These assumptions can cause problems when the filter is used in reservoir applications and can result in sampling error. The situation becomes more problematic if the a priori information on the reservoir structure is poor and the initial guess of, e.g., the permeability field is far from the actual one. This motivates further research into a specific modification of the EnKF-based approach, the iterative EnKF (IEnKF) scheme, which restarts the procedure with a new initial guess that is closer to the actual solution and hence requires less correction by the algorithm while providing a better parameter estimate. The paper presents examples for which the IEnKF algorithm works better than the traditional EnKF. The algorithms are compared in estimating the permeability field for a two-phase, two-dimensional fluid-flow model.
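For reference, one EnKF analysis step can be sketched in a few lines of numpy (a textbook formulation with perturbed observations, not the IEnKF itself; the iterative scheme restarts such updates from a new initial ensemble):

    import numpy as np

    def enkf_update(M, d_obs, Cd, forward, rng):
        """One EnKF analysis step on an ensemble M (n_params x n_ens).
        forward maps a parameter vector to predicted data; Cd is the
        data-error covariance."""
        n_ens = M.shape[1]
        D = np.column_stack([forward(M[:, j]) for j in range(n_ens)])
        dM = M - M.mean(axis=1, keepdims=True)
        dD = D - D.mean(axis=1, keepdims=True)
        Cmd = dM @ dD.T / (n_ens - 1)        # parameter-data cross-covariance
        Cdd = dD @ dD.T / (n_ens - 1)        # predicted-data covariance
        K = Cmd @ np.linalg.inv(Cdd + Cd)    # Kalman gain
        # Perturb the observations so the analysis ensemble keeps spread.
        d_pert = d_obs[:, None] + rng.multivariate_normal(
            np.zeros(len(d_obs)), Cd, size=n_ens).T
        return M + K @ (d_pert - D)

Note that the linear update and the implicit Gaussian assumption visible here are exactly the limitations the iterative variant targets.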

4.
Reservoir characterization needs the integration of various data through history matching, especially dynamic information such as production or four-dimensional seismic data. To update geostatistical realizations, the local gradual deformation method can be used. However, history matching is a complex inverse problem, and the computational effort, in terms of the number of reservoir simulations required in the optimization procedure, increases with the number of matching parameters. History matching large fields with many parameters has been an ongoing challenge in reservoir simulation. This paper presents a new technique to improve history matching with the local gradual deformation method using gradient-based optimization. The new approach is based on approximate derivative calculations that exploit the partial separability of the objective function. The objective function is first split into local components, and only the most influential parameters in each component are used for the derivative computation. A perturbation design is then proposed to compute all the derivatives simultaneously with only a few simulations. This new technique makes history matching with the local gradual deformation method tractable for large numbers of parameters.
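A hedged sketch of the partial-separability trick (the run layout and the assumption that each perturbed parameter influences exactly one local component are illustrative simplifications of the perturbation design described above):

    import numpy as np

    def grouped_fd_gradient(f_components, m, runs, eps=1e-6):
        """Finite-difference gradient exploiting partial separability.
        f_components returns the vector of local objective components;
        runs is a list of simulations, each a list of (param, component)
        pairs whose parameters influence disjoint components, so a single
        perturbed run yields one derivative per pair."""
        base = f_components(m)
        grad = np.zeros_like(m, dtype=float)
        for pairs in runs:                    # one extra simulation per run
            mp = m.copy()
            for p, _ in pairs:
                mp[p] += eps
            fp = f_components(mp)
            for p, c in pairs:                # only component c responds to p
                grad[p] = (fp[c] - base[c]) / eps
        return grad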

5.
Gradient-based history matching algorithms can be used to adapt the uncertain parameters in a reservoir model using production data. They require, however, the implementation of an adjoint model to compute the gradients, which is usually an enormous programming effort. We propose a new approach to gradient-based history matching based on model reduction, where the original (nonlinear and high-order) forward model is replaced by a linear reduced-order forward model and, consequently, the adjoint of the tangent linear approximation of the original forward model is replaced by the adjoint of the linear reduced-order forward model. The reduced-order model is constructed with the aid of the proper orthogonal decomposition method. Due to the linear character of the reduced model, the corresponding adjoint model is easily obtained. The gradient of the objective function is approximated, and the minimization problem is solved in the reduced space; the procedure is iterated with the updated estimate of the parameters if necessary. The proposed approach is adjoint-free and can be used with any reservoir simulator. The method was evaluated for a waterflood reservoir with a channelized permeability field. A comparison with an adjoint-based history matching procedure shows that the model-reduced approach gives a comparable quality of history matches and predictions. The computational efficiency of the model-reduced approach is lower than that of an adjoint-based approach, but higher than that of an approach where the gradients are obtained with simple finite differences.
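The POD basis construction at the heart of such reduced-order modeling is a plain SVD of a snapshot matrix. A minimal sketch (the snapshot counts, state size, and 99% energy cutoff are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(1)
    # Snapshot matrix: each column is a saved state of the (hypothetical)
    # high-order forward model at one time step.
    X = rng.standard_normal((1000, 40))         # 1000 state variables, 40 snapshots

    # POD basis = leading left singular vectors of the snapshot matrix.
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 0.99)) + 1  # modes carrying 99% of energy
    Phi = U[:, :r]

    # The reduced state z satisfies x ~= Phi @ z; for a linear reduced
    # model, the adjoint is simply the transpose of its system matrix.
    x = rng.standard_normal(1000)
    z = Phi.T @ x
    x_approx = Phi @ z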

6.
Parameter identification is one of the key elements in the construction of models in geosciences. However, inherent difficulties such as the instability of ill-posed problems or the presence of multiple local optima may impede the execution of this task. Regularization methods and Bayesian formulations, such as the maximum a posteriori estimation approach, have been used to overcome these complications. Nevertheless, in some instances, a more in-depth analysis of the inverse problem is advisable before obtaining estimates of the optimal parameters. The Markov chain Monte Carlo (MCMC) methods used in Bayesian inference have been applied in the last 10 years in several fields of geosciences, such as hydrology, geophysics, and reservoir engineering. In the present paper, a compilation of basic tools for inference is given, together with a case study illustrating their practical application. First, an introduction to the Bayesian approach to the inverse problem is provided, together with the most common MCMC sampling algorithms. Second, a series of estimators for quantities of interest, such as the marginal densities or the normalization constant of the posterior distribution of the parameters, is reviewed. These estimators reduce the computational cost significantly, requiring only the time needed to obtain a sample of the posterior probability density function. The use of information-theoretic principles for experimental design and for ill-posedness diagnosis is also introduced. Finally, a case study based on a highly instrumented well test found in the literature is presented. The results obtained are compared with those computed by the maximum likelihood estimation approach.
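A minimal random-walk Metropolis sampler, the simplest of the MCMC algorithms surveyed here (the toy Gaussian target and the step size are illustrative assumptions):

    import numpy as np

    def metropolis(log_post, m0, n_iter, step, rng):
        """Random-walk Metropolis: samples a posterior known only up to
        its normalization constant via accept/reject moves."""
        m = np.asarray(m0, dtype=float)
        lp = log_post(m)
        chain = np.empty((n_iter, m.size))
        for i in range(n_iter):
            prop = m + step * rng.standard_normal(m.size)
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis rule
                m, lp = prop, lp_prop
            chain[i] = m
        return chain

    rng = np.random.default_rng(2)
    # Toy target: standard Gaussian posterior in 3 dimensions.
    chain = metropolis(lambda m: -0.5 * np.sum(m**2), np.ones(3), 5000, 0.5, rng)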

7.
In geosciences, complex forward problems in geophysics, petroleum system analysis, and reservoir engineering often must be replaced by proxies, and these proxies are then used in optimization problems. For instance, history matching of observed field data requires so large a number of reservoir simulation runs (especially when using geostatistical geological models) that it is often impossible to use the full reservoir simulator. Several techniques have therefore been proposed to mimic the reservoir simulations using proxies. Because of the experimental-design approach commonly taken, most authors propose second-order polynomials. In this paper, we demonstrate that (1) neural networks can also represent second-order polynomials, so a neural network proxy is much more flexible and adaptable to the nonlinearity of the problem to be solved; and (2) first- and second-order derivatives of the neural network can be obtained, providing gradients and Hessians for optimizers. For inverse problems in seismic inversion, well-by-well production data, optimal well locations, source-rock generation, etc., gradient methods are most often used to find an optimal solution. The paper describes how to calculate these gradients from a neural network built as a proxy. When needed, the Hessian can also be obtained from the neural network. On a real case study, the ability of neural networks to reproduce complex phenomena (water cuts, production rates, etc.) is shown. Comparisons with second-order polynomials (and kriging methods) demonstrate the superiority of the neural network approach as soon as nonlinear behavior is present in the responses of the simulator. The gradients and the Hessian of the neural network are compared to those of the real response function.
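A tiny single-hidden-layer proxy with its analytic input gradient (the architecture and the random weights are hypothetical; in practice the weights would come from training on simulator runs):

    import numpy as np

    rng = np.random.default_rng(3)
    W1, b1 = rng.standard_normal((8, 4)), rng.standard_normal(8)
    w2, b2 = rng.standard_normal(8), 0.1

    def proxy(x):
        # y = w2 . tanh(W1 x + b1) + b2
        return w2 @ np.tanh(W1 @ x + b1) + b2

    def proxy_grad(x):
        # dy/dx = W1^T [(1 - tanh^2(W1 x + b1)) * w2]: a closed form, so
        # the optimizer gets exact proxy gradients, no finite differences.
        h = np.tanh(W1 @ x + b1)
        return W1.T @ ((1.0 - h**2) * w2)

    x = rng.standard_normal(4)
    y, g = proxy(x), proxy_grad(x)

Differentiating once more gives the Hessian in the same closed-form fashion, which is what makes such proxies convenient for gradient-based optimizers.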

8.
Reservoir characterization needs the integration of various data through history matching, especially dynamic information such as production or 4D seismic data. Although reservoir heterogeneities are commonly generated using geostatistical models, random realizations cannot generally match observed dynamic data. To constrain model realizations to reproduce measured dynamic data, an optimization procedure may be applied in an attempt to minimize an objective function, which quantifies the mismatch between real and simulated data. Such assisted history matching methods require a parameterization of the geostatistical model to allow the updating of an initial model realization. However, there are only a few parameterization methods available to update geostatistical models in a way consistent with the underlying geostatistical properties. This paper presents a local domain parameterization technique that updates geostatistical realizations using assisted history matching. This technique allows us to locally change model realizations through the variation of geometrical domains whose geometry and size can be easily controlled and parameterized. This approach provides a new way to parameterize geostatistical realizations in order to improve history matching efficiency.
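The (global) gradual deformation rule underlying such parameterizations is compact. A sketch under the assumption of two independent standard-Gaussian realizations (the local variant applies the same combination only inside chosen geometrical domains):

    import numpy as np

    rng = np.random.default_rng(4)
    y1 = rng.standard_normal(100)    # realization being deformed
    y2 = rng.standard_normal(100)    # complementary realization

    def deform(theta):
        # cos^2 + sin^2 = 1, so the result is again Gaussian with the same
        # covariance for every theta; theta becomes the matching parameter.
        return y1 * np.cos(np.pi * theta) + y2 * np.sin(np.pi * theta)

    y_new = deform(0.3)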

9.
Reservoir management requires periodic updates of the simulation models using the production data available over time. Traditionally, validation of reservoir models with production data is done using a history matching process. Uncertainties in the data, as well as in the model, lead to a nonunique history matching inverse problem. It has been shown that the ensemble Kalman filter (EnKF) is an adequate method for predicting the dynamics of the reservoir. The EnKF is a sequential Monte Carlo approach that uses an ensemble of reservoir models. For realistic, large-scale applications, the ensemble size needs to be kept small for computational efficiency. Consequently, the error space is not well covered (poor cross-correlation matrix approximations), and the updated parameter field becomes scattered and loses important geological features (for example, the contact between high- and low-permeability values). The prior geological knowledge present at the initial time is no longer found in the final updated parameters. We propose a new approach to overcome some of the EnKF limitations. This paper shows the specifications and results of the ensemble multiscale filter (EnMSF) for automatic history matching. The EnMSF replaces, at each update time, the prior sample covariance with a multiscale tree. The global dependence is preserved via the parent–child relation in the tree (nodes at the adjacent scales). After constructing the tree, the Kalman update is performed. The properties of the EnMSF are presented here with a 2D, two-phase (oil and water) small twin experiment, and the results are compared to the EnKF. The advantages of using the EnMSF are localization in space and scale, adaptability to prior information, and efficiency when many measurements are available. These advantages make the EnMSF a practical tool for many data assimilation problems.

10.
11.
Determination of well locations and their operational settings (controls), such as injection/production rates, in heterogeneous subsurface reservoirs poses a challenging optimization problem that has a significant impact on the recovery performance and economic value of subsurface energy resources. Well placement optimization is often formulated as an integer-programming problem that is typically carried out assuming known well control settings. Similarly, identification of the optimal well settings is usually formulated and solved as a control problem in which the well locations are fixed. Solving each of the two problems individually, without accounting for the coupling between them, leads to suboptimal solutions. Here, we propose to solve the coupled well placement and control optimization problems for improved production performance. We present an alternating iterative solution of the decoupled well placement and control subproblems, where each subproblem (e.g., well locations) is re-solved after updating the decision variables of the other subproblem (e.g., the control settings) from the previous step. This approach allows for application of well-established methods in the literature to solve each subproblem individually. We show that significant improvements can be achieved when the well placement problem is solved while allowing for variable and optimized well controls. We introduce a well-distance constraint into the well placement objective function to avoid solutions containing well clusters in a small region. In addition, we present an efficient gradient-based method for solving the well control optimization problem. We illustrate the effectiveness of the proposed algorithms using several numerical experiments, including the three-dimensional PUNQ reservoir and the top layer of the SPE10 benchmark model.
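The alternating scheme reduces to a short outer loop. The skeleton below is a hedged sketch in which npv, solve_placement, and solve_controls are hypothetical user-supplied callables wrapping the integer placement solver and the gradient-based control solver:

    def alternate_optimize(npv, init_locs, init_ctrls,
                           solve_placement, solve_controls, n_outer=5):
        """Alternate between the two decoupled subproblems: freeze the
        controls and optimize locations, then freeze the locations and
        optimize controls, until the objective stops improving."""
        locs, ctrls = init_locs, init_ctrls
        best = npv(locs, ctrls)
        for _ in range(n_outer):
            locs = solve_placement(ctrls)    # integer-programming subproblem
            ctrls = solve_controls(locs)     # gradient-based subproblem
            val = npv(locs, ctrls)
            if val <= best + 1e-8:           # no further improvement
                break
            best = val
        return locs, ctrls, best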

12.
Stochastic inverse modeling deals with the estimation of functions from sparse data, a problem with a nonunique solution, with the objective of evaluating best estimates, measures of uncertainty, and sets of solutions that are consistent with the data. As finer resolutions become desirable, the computational requirements increase dramatically when using conventional solvers. A method is developed in this paper to solve large-scale stochastic linear inverse problems, based on the hierarchical matrix (or $\mathcal{H}^2$-matrix) approach. The proposed approach can also exploit the sparsity of the underlying measurement operator, which relates observations to unknowns. Conventional direct algorithms for solving large-scale linear inverse problems using stochastic linear inversion techniques typically scale as $\mathcal{O}(n^2 m + nm^2)$, where n is the number of measurements and m is the number of unknowns; typically $n \ll m$. In contrast, the algorithm presented here scales as $\mathcal{O}(n^2 m)$, i.e., it scales linearly with the larger problem dimension m. The algorithm also allows quantification of uncertainty in the solution at a computational cost that likewise grows only linearly in the number of unknowns. The speedup gained is significant since the number of unknowns m is often large. The effectiveness of the algorithm is demonstrated by formulating a realistic crosswell tomography problem as a stochastic linear inverse problem. In the case of the crosswell tomography problem, the sparsity of the measurement operator allows us to further reduce the cost of the proposed algorithm from $\mathcal{O}(n^2 m)$ to $\mathcal{O}(n^2 \sqrt{m} + nm)$. The computational speedup gained by the new algorithm makes it easier, among other things, to optimize the locations of sources and receivers by minimizing the mean square error of the estimation. Without this fast algorithm, this optimization would be computationally impractical using conventional methods.
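For orientation, the dense baseline computation that the hierarchical-matrix method accelerates can be written directly (the sizes and covariances here are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(5)
    n, m = 30, 500                     # n measurements, m unknowns, n << m
    H = rng.standard_normal((n, m))    # measurement operator
    Q = np.eye(m)                      # prior covariance of the unknowns
    R = 0.01 * np.eye(n)               # data-error covariance
    s_true = rng.standard_normal(m)
    y = H @ s_true + rng.multivariate_normal(np.zeros(n), R)

    # Stochastic linear inversion: s_hat = Q H^T (H Q H^T + R)^{-1} y.
    Psi = H @ Q @ H.T + R
    s_hat = Q @ H.T @ np.linalg.solve(Psi, y)

    # Posterior variances (the uncertainty quantification mentioned above).
    post_var = np.diag(Q) - np.einsum('ij,ji->i', Q @ H.T,
                                      np.linalg.solve(Psi, H @ Q))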

13.
14.
The application of a powerful evolutionary optimization technique for the estimation of intrinsic formation constants describing geologically relevant adsorption reactions at mineral surfaces is introduced. We illustrate the optimization power of a simple Genetic Algorithm (GA) for forward (aqueous chemical speciation calculations) and inverse (calibration of Surface Complexation Models, SCMs) modeling problems of varying degrees of complexity, including problems where conventional deterministic derivative-based root-finding techniques such as Newton–Raphson, implemented in popular programs such as FITEQL, fail to converge or yield poor data fits upon convergence. Subject to sound a priori physical–chemical constraints, adequate solution encoding schemes, and simple GA operators, the GA conducts an exhaustive probabilistic search in a broad solution space and finds a suitable solution regardless of the input values and without requiring sophisticated GA implementations (e.g., advanced GA operators, parallel genetic programming). The drawback of the GA approach is the large number of iterations that must be performed to obtain a satisfactory solution. Nevertheless, for computationally demanding problems, the efficiency of the optimization can be greatly improved by combining heuristic GA optimization with the Newton–Raphson approach to exploit the power of deterministic techniques after the evolutionary-driven set of potential solutions has reached a suitable level of numerical viability. Despite the computational requirements of the GA, its robustness, flexibility, and simplicity make it a very powerful, alternative tool for the calibration of SCMs, a critical step in the generation of a reliable thermodynamic database describing adsorption equilibria. The latter is fundamental to the forward modeling of the adsorption behavior of minerals and geologically based adsorbents in hydro-geological settings (e.g., aquifers, pore waters, water basins) and/or in engineered reactors (e.g., mining, hazardous waste disposal industries).
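A bare-bones real-coded GA of the kind described (tournament selection, uniform crossover, Gaussian mutation; the operator choices and rates are illustrative assumptions, and the sphere function stands in for the SCM calibration objective):

    import numpy as np

    def ga_minimize(f, bounds, pop_size=40, n_gen=100, pm=0.1, rng=None):
        """Minimal genetic algorithm with tournament selection, uniform
        crossover, Gaussian mutation, and elitist replacement."""
        rng = rng or np.random.default_rng()
        lo, hi = bounds
        pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
        fit = np.array([f(p) for p in pop])
        for _ in range(n_gen):
            i, j = rng.integers(pop_size, size=(2, pop_size))
            parents = pop[np.where(fit[i] < fit[j], i, j)]   # tournaments
            mask = rng.random(pop.shape) < 0.5               # uniform crossover
            kids = np.where(mask, parents, parents[rng.permutation(pop_size)])
            kids += pm * rng.standard_normal(pop.shape) * (hi - lo)
            kids = np.clip(kids, lo, hi)
            kid_fit = np.array([f(k) for k in kids])
            better = kid_fit < fit                           # keep improvements
            pop[better], fit[better] = kids[better], kid_fit[better]
        return pop[fit.argmin()], fit.min()

    best, val = ga_minimize(lambda x: np.sum(x**2),
                            (np.full(3, -5.0), np.full(3, 5.0)))

A GA–Newton hybrid of the kind advocated above would simply hand best to a derivative-based root finder for the final polishing step.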

15.
Traditional ensemble-based history matching methods, such as the ensemble Kalman filter and iterative ensemble filters, usually update reservoir parameter fields using a numerical grid-based parameterization. Although the objective function from which these methods are derived contains a parameter constraint term, it is difficult to preserve the geological continuity of the parameter field during the update; this is especially the case when estimating statistically anisotropic fields (such as a statistically anisotropic Gaussian field or a facies field with elongated facies) with uncertainty about the anisotropy direction. In this work, we propose a Karhunen-Loeve expansion-based global parameterization technique, combined with an ensemble-based history matching method, for inverse modeling of statistically anisotropic fields. Using the Karhunen-Loeve expansion, a Gaussian random field can be parameterized by a group of independent Gaussian random variables. For a facies field, we combine the Karhunen-Loeve expansion with the level set technique: each facies is parameterized by a Gaussian random field and a level set algorithm, and the Gaussian random field is in turn parameterized by the Karhunen-Loeve expansion. We treat the independent Gaussian random variables in the Karhunen-Loeve expansion as the model parameters. When the anisotropy direction of the statistically anisotropic field is uncertain, we also treat it as a model parameter to be updated. After parameterization, we use the ensemble randomized maximum likelihood filter to perform history matching. Because of the nature of the Karhunen-Loeve expansion, the geostatistical characteristics of the parameter field are preserved during the update. Synthetic cases are set up to test the performance of the proposed method. Numerical results show that the method is well suited to estimating statistically anisotropic fields.
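The discrete Karhunen-Loeve parameterization is a covariance eigendecomposition. A 1-D sketch with an exponential covariance (the correlation length and the 95% variance cutoff are illustrative assumptions):

    import numpy as np

    n = 200
    x = np.linspace(0.0, 1.0, n)
    C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)   # exponential covariance

    lam, phi = np.linalg.eigh(C)
    order = np.argsort(lam)[::-1]                        # descending eigenvalues
    lam, phi = lam[order], phi[:, order]
    k = int(np.searchsorted(np.cumsum(lam) / lam.sum(), 0.95)) + 1

    # The k independent standard-normal coefficients xi are the model
    # parameters updated during history matching; the field keeps the
    # prescribed covariance by construction.
    rng = np.random.default_rng(6)
    xi = rng.standard_normal(k)
    field = phi[:, :k] @ (np.sqrt(lam[:k]) * xi)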

16.
There is no gainsaying that determining the optimal number, type, and location of hydrocarbon reservoir wells is a very important aspect of field development planning. The reason is straightforward: the objective of any field development exercise is to maximize total hydrocarbon recovery, which for all practical purposes can be measured by an economic criterion such as the net present value of the reservoir over its estimated operational life cycle. Since the cost of drilling and completing wells can be significant (millions of dollars), some form of operational and economic justification of a potential well configuration is needed, so that the ultimate purpose of maximizing production and asset value is not defeated in the long run. Well optimization problems, however, are by no means trivial. Inherent difficulties include the computational cost of evaluating the objective function, the high dimensionality of the search space, and the effects of a continuous range of geological uncertainty. In this paper, the differential evolution (DE) and particle swarm optimization (PSO) algorithms are applied to well placement problems. The results from both algorithms are compared with those obtained from a third algorithm, hybrid particle swarm differential evolution (HPSDE), a hybridization of the DE and PSO algorithms. Three cases involving the placement of vertical wells in 2-D and 3-D reservoir models are considered. In two of the three cases, a max-mean robust optimization of the objective was performed to address geological uncertainty arising from the mismatch between the real physical reservoir and the reservoir model. We demonstrate that the performance of the DE and PSO algorithms depends on the total number of function evaluations performed; importantly, we show that in all cases the HPSDE algorithm outperforms both DE and PSO. Based on these findings, we hold the view that hybridized metaheuristic optimization algorithms such as HPSDE are applicable in this problem domain and could be potentially useful in other reservoir engineering problems.
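A compact DE/rand/1/bin implementation of the kind compared here (the constants F and CR, the population size, and the quadratic test objective are illustrative assumptions; a production version would also exclude the target index from the mutation triple):

    import numpy as np

    def de_minimize(f, bounds, pop_size=30, n_gen=200, F=0.8, CR=0.9, rng=None):
        """Classic DE/rand/1/bin differential evolution."""
        rng = rng or np.random.default_rng()
        lo, hi = bounds
        dim = len(lo)
        pop = rng.uniform(lo, hi, size=(pop_size, dim))
        fit = np.array([f(p) for p in pop])
        for _ in range(n_gen):
            for i in range(pop_size):
                a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
                mutant = np.clip(a + F * (b - c), lo, hi)   # mutation
                cross = rng.random(dim) < CR                # binomial crossover
                cross[rng.integers(dim)] = True             # at least one gene
                trial = np.where(cross, mutant, pop[i])
                ft = f(trial)
                if ft < fit[i]:                             # greedy selection
                    pop[i], fit[i] = trial, ft
        return pop[fit.argmin()], fit.min()

    best, val = de_minimize(lambda x: np.sum((x - 1.0)**2),
                            (np.full(2, -10.0), np.full(2, 10.0)))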

17.
A method for multiscale parameter estimation with application to reservoir history matching is presented. Starting from a given fine-scale model, coarser models are generated using a global upscaling technique in which the coarse models are tuned to match the solution of the fine model. Conditioning to dynamic data is done by history-matching the coarse model. Consistently using the same resolution for both the forward and inverse problems, the model is successively refined using a combination of downscaling and history matching until a model matching the dynamic data is obtained at the finest scale. Large-scale corrections are obtained using fast models, which, combined with a downscaling procedure, provide a better initial model for the final adjustment on the fine scale. The result is thus a series of models at different resolutions, each matching history as well as possible on its grid. Numerical examples show that this method may significantly reduce the computational effort and/or improve the quality of the solution when achieving a fine-scale match, as compared with history-matching directly on the fine scale.

18.
In history matching of a lithofacies reservoir model, we attempt to find multiple realizations of the lithofacies configuration that are conditioned to dynamic data and representative of the model uncertainty space. This problem can be formalized in the Bayesian framework. Given a truncated Gaussian model as a prior and the dynamic data with its associated measurement error, we want to sample from the conditional distribution of the facies given the data. A relevant way to generate conditioned realizations is to use Markov chain Monte Carlo (MCMC). However, the dimension of the model and the computational cost of each iteration are two important pitfalls for the use of MCMC. Furthermore, classical MCMC algorithms mix slowly; that is, they will not explore the whole support of the posterior within the simulation time. In this paper, we extend the methodology described in a previous work to the history matching of a Gaussian-related lithofacies reservoir model. We first show how to drastically reduce the dimension of the problem by using a truncated Karhunen-Loève expansion of the Gaussian random field underlying the lithofacies model. Moreover, we propose an innovative criterion for the choice of the number of components, based on the connexity function. Then, we show how to improve the mixing properties of a classical single MCMC chain, without increasing the global computational cost, by using parallel interacting Markov chains. Applying the dimension reduction and this innovative sampling method drastically lowers the number of iterations needed to sample efficiently from the posterior. We show the encouraging results obtained when applying the methodology to a synthetic history-matching case.

19.
The degrees of freedom (DOF) in standard ensemble-based data assimilation are limited by the ensemble size. Successful assimilation of a data set with large information content (IC) therefore requires that the DOF be sufficiently large. Too few DOF with respect to the IC may result in ensemble collapse, or at least in unwarranted uncertainty reduction in the estimation results. In this situation, one has two options to restore a proper balance between the DOF and the IC: increase the DOF or decrease the IC. Spatially dense data sets typically have a large IC. Within subsurface applications, inverted time-lapse seismic data used for reservoir history matching are an example of a spatially dense data set. Such data are considered to have great potential due to their large IC, but they also contain errors that are challenging to characterize properly. The computational cost of running the forward simulations for reservoir history matching with any kind of data is large for field cases, so a moderate ensemble size is standard. Realizing the potential of seismic data for ensemble-based reservoir history matching is therefore not straightforward, not only because of the unknown character of the associated data errors, but also because of the imbalance between a large IC and too few DOF. Distance-based localization is often applied to increase the DOF, but it is example specific and involves cumbersome implementation work. We consider methods to obtain a proper balance between the IC and the DOF when assimilating inverted seismic data for reservoir history matching. To decrease the IC, we consider three ways to reduce the influence of the data space: subspace pseudo inversion, data coarsening, and a novel way of performing front extraction. To increase the DOF, we consider coarse-scale simulation, which allows the DOF to be increased by enlarging the ensemble without increasing the total computational cost. We also consider a combination of decreasing the IC and increasing the DOF through a novel method that combines data coarsening and coarse-scale simulation. The methods were compared on one small and one moderately large example, with seismic bulk-velocity fields at four assimilation times as data. The size of the examples allows calculation of a reference solution obtained with standard ensemble-based data assimilation methodology and an unrealistically large ensemble size. With the reference solution as the yardstick against which the quality of the other methods is measured, we find that the novel method combining data coarsening and coarse-scale simulation gave the best results. With very restricted computational resources available, this was the only method that gave satisfactory results.

20.
In the analysis of petroleum reservoirs, one of the most challenging problems is to use inverse theory in the search for an optimal parameterization of the reservoir. Generally, scientists approach this problem by computing a sensitivity matrix and then performing a singular value decomposition to determine the number of degrees of freedom, i.e., the number of independent parameters necessary to specify the configuration of the system. Here we propose a complementary approach: it uses the concept of refinement indicators to select those degrees of freedom with the greatest sensitivity to an objective function quantifying the mismatch between measured and simulated data. We apply this approach to the problem of data integration for petrophysical reservoir characterization, where geoscientists are currently working with multimillion-cell geological models. Data integration may be performed by gradually deforming (through a linear combination) a set of these multimillion-cell geostatistical realizations during the optimization process. The inversion parameters are then reduced to the coefficients of this linear combination. However, there is an infinity of geostatistical realizations to choose from, and an arbitrary choice may not be efficient given operational constraints. Following our new approach, we are able, through a single objective-function evaluation, to compute refinement indicators that show which realizations might improve the iterative geological model in a significant way. This computation is extremely fast, as it requires only a single gradient computation through the adjoint-state approach plus dot products. Using only the most sensitive realizations from a given set, we can solve the optimization problem more quickly. We applied this methodology to the integration of interference test data into 3D geostatistical models.
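A numpy sketch of the indicator computation (the dimensions are illustrative, and the precise indicator formula, a gradient-realization dot product, is a reading of the abstract rather than the authors' exact definition):

    import numpy as np

    # With J(m) the data mismatch and g = dJ/dm the adjoint-state gradient
    # at the current model, a first-order indicator for adding candidate
    # realization y_k to the linear combination is |g . y_k|: one gradient
    # computation, then only dot products. (Assumed form, per the abstract.)
    rng = np.random.default_rng(7)
    g = rng.standard_normal(10_000)          # adjoint gradient, one evaluation
    Y = rng.standard_normal((25, 10_000))    # 25 candidate realizations
    indicators = np.abs(Y @ g)
    keep = np.argsort(indicators)[::-1][:5]  # most sensitive realizations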
