Similar Documents (20 results)
1.
2.
Performing a line search along the direction given by the simplex gradient is a well-known method in the mathematical optimization community. For reservoir engineering optimization problems, both a modification of simultaneous perturbation stochastic approximation (SPSA) and ensemble-based optimization (EnOpt) have recently been applied to estimating optimal well controls in the production optimization step of closed-loop reservoir management. The modified SPSA algorithm has also been applied to assisted history-matching problems. A recent comparison of the performance of EnOpt and an SPSA-type algorithm (G-SPSA) on a set of production optimization test problems showed that the two algorithms produced similar estimates of the optimal net present value and required roughly the same computational time to do so. Here, we show that, theoretically, this result is not surprising. In fact, we show that the simplex, preconditioned simplex, and EnOpt algorithms can all be derived directly from a modified SPSA-type algorithm, where the preconditioned simplex algorithm is presented for the first time in this paper. We also show that the expectation of each of these preconditioned stochastic gradients is a first-order approximation of either the preconditioning covariance matrix times the true gradient or this covariance matrix squared times the true gradient.
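The SPSA-type schemes discussed above share one core update: a two-sided stochastic gradient estimate obtained from a single simultaneous random perturbation per iteration. A minimal sketch in Python (illustrative only; the gain constants and the toy NPV surrogate are assumptions, not the paper's G-SPSA):

```python
import numpy as np

def spsa_maximize(f, x0, a=0.2, c=0.1, n_iter=500, seed=0):
    """Minimal SPSA sketch: ascend f using a two-sided gradient estimate
    built from a single simultaneous random perturbation per iteration."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(1, n_iter + 1):
        ak = a / k ** 0.602              # standard SPSA gain decay
        ck = c / k ** 0.101              # perturbation size decay
        delta = rng.choice([-1.0, 1.0], size=x.size)   # Rademacher directions
        g_hat = (f(x + ck * delta) - f(x - ck * delta)) / (2.0 * ck) / delta
        x = x + ak * g_hat               # ascent step (maximization)
    return x

# Toy concave "NPV" surrogate with its maximum at controls (1, 2).
npv = lambda u: -(u[0] - 1.0) ** 2 - (u[1] - 2.0) ** 2
u_opt = spsa_maximize(npv, x0=[0.0, 0.0])
```

Note that only two function evaluations per iteration are needed regardless of the number of controls, which is the property that makes SPSA-type methods attractive for simulation-based production optimization.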

3.
Constraining stochastic models of reservoir properties such as porosity and permeability can be formulated as an optimization problem. While an optimization based on random search methods preserves the spatial variability of the stochastic model, it is prohibitively computer intensive. In contrast, gradient search methods may be very efficient, but they do not preserve the spatial variability of the stochastic model. The gradual deformation method allows a reservoir model (i.e., a realization of the stochastic model) to be modified through a small number of parameters while preserving its spatial variability. It can be considered a first step towards the merger of random and gradient search methods. The gradual deformation method yields chains of reservoir models that can be investigated successively to identify an optimal reservoir model. The investigation of each chain is based on gradient computations, but the building of chains of reservoir models is random. In this paper, we propose an algorithm that further improves the efficiency of the gradual deformation method. Contrary to the previous gradual deformation method, we also use gradient information to build chains of reservoir models. The idea is to combine the initial reservoir model, or the previously optimized reservoir model, with a compound reservoir model. This compound model is a linear combination of a set of independent reservoir models. The combination coefficients are calculated so that the search direction from the initial model is as close as possible to the gradient search direction. This new gradual deformation scheme allows us to reduce the number of optimization parameters while selecting an optimal search direction. A numerical example compares the performance of the new gradual deformation scheme with that of the traditional one.

4.
This paper describes a new method for gradually deforming realizations of Gaussian-related stochastic models while preserving their spatial variability. The method consists of building a stochastic process whose state space is the ensemble of realizations of a spatial stochastic model. In particular, a stochastic process built by combining independent Gaussian random functions is proposed to perform the gradual deformation of realizations. The gradual deformation algorithm is then coupled with an optimization algorithm to calibrate realizations of stochastic models to nonlinear data. The method is applied to calibrate a continuous and a discrete synthetic permeability field to well-test pressure data, and the examples illustrate its efficiency. Furthermore, we present some extensions of this method (multidimensional gradual deformation, gradual deformation with respect to structural parameters, and local gradual deformation) that are useful in practice. Although the method described in this paper is operational only in the Gaussian framework (e.g., lognormal model, truncated Gaussian model, etc.), the idea of gradually deforming realizations through a stochastic process remains general, and therefore promising even for calibrating non-Gaussian models.
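The combination rule behind gradual deformation can be illustrated in a few lines: two independent Gaussian realizations are blended with coefficients cos(t) and sin(t), so every value of the deformation parameter t yields a realization with the same first- and second-order statistics. A hedged sketch (the field here is plain white noise rather than a spatially correlated model):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
y1 = rng.standard_normal(n)      # two independent realizations of the
y2 = rng.standard_normal(n)      # same (here: white-noise) Gaussian model

def deform(t):
    # gradual deformation: cos(t)**2 + sin(t)**2 = 1, so y(t) keeps the
    # same mean and covariance as y1 and y2 for every value of t
    return y1 * np.cos(t) + y2 * np.sin(t)

# t = 0 returns y1, t = pi/2 returns y2; intermediate t interpolates
# continuously between them without changing the spatial statistics
y_mid = deform(0.7)
```

Calibration then reduces to a one-dimensional search over t, which is what makes the objective function regular enough for efficient optimization.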

5.
Based on the algorithm for gradual deformation of Gaussian stochastic models, we propose in this paper an extension of this method to gradually deforming realizations generated by sequential, not necessarily Gaussian, simulation. As in the Gaussian case, gradual deformation of a sequential simulation preserves the spatial variability of the stochastic model and yields, in general, a regular objective function that can be minimized by an efficient optimization algorithm (e.g., a gradient-based algorithm). Furthermore, we discuss local gradual deformation and gradual deformation with respect to the structural parameters (mean, variance, variogram range, etc.) of realizations generated by sequential simulation. Local gradual deformation may significantly improve calibration speed in the case where observations are scattered in different zones of a field. Gradual deformation with respect to structural parameters is necessary when these parameters cannot be inferred a priori and need to be determined using an inverse procedure. A synthetic example inspired by a real oil field is presented to illustrate different aspects of this approach. Results from this case study demonstrate the efficiency of the gradual deformation approach for constraining facies models generated by sequential indicator simulation. They also show the potential applicability of the proposed approach to complex real cases.

6.
Development of subsurface energy and environmental resources can be improved by tuning important decision variables, such as well locations and operating rates, to optimize a desired performance metric. Optimal well locations in a discretized reservoir model are typically identified by solving an integer programming problem, while identification of optimal well settings (controls) is formulated as a continuous optimization problem. In general, however, the decision variables in field development optimization can include many design parameters, such as the number, type, location, short-term and long-term operational settings (controls), and drilling schedule of the wells. In addition to the large number of decision variables, field optimization problems are further complicated by existing technical and physical constraints as well as by the uncertainty in describing the heterogeneous properties of geologic formations. In this paper, we consider simultaneous optimization of well locations and dynamic rate allocations under geologic uncertainty using a variant of simultaneous perturbation stochastic approximation (SPSA). In addition, by taking advantage of the robustness of SPSA against errors in calculating the cost function, we develop an efficient approach to field development optimization under geologic uncertainty, where an ensemble of models is used to describe important flow and transport reservoir properties (e.g., permeability and porosity). We use several numerical experiments, including a channel layer of the SPE10 model and the three-dimensional PUNQ-S3 reservoir, to illustrate the performance improvement that can be achieved by solving a combined well placement and control optimization problem using the SPSA algorithm under known and uncertain reservoir model assumptions.
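Because SPSA tolerates noise in the cost function, a robust (expected-value) objective over an ensemble of geologic models can be optimized by evaluating a single randomly drawn ensemble member per iteration instead of the full ensemble. A toy one-dimensional sketch of this idea (the scalar "geology", gains, and objective are illustrative assumptions, not the paper's formulation):

```python
import numpy as np

rng = np.random.default_rng(1)
# uncertain geology reduced to one scalar per ensemble member (illustrative)
ensemble = rng.normal(loc=2.0, scale=0.5, size=50)

def j(u, m):
    # toy per-model objective: the best control for model m is u = m
    return -(u - m) ** 2

u = 0.0
for k in range(1, 500):
    ak = 0.5 / k ** 0.602            # gain decay
    ck = 0.2 / k ** 0.101            # perturbation decay
    m = rng.choice(ensemble)         # one random member per iteration
    delta = rng.choice([-1.0, 1.0])
    g = (j(u + ck * delta, m) - j(u - ck * delta, m)) / (2.0 * ck * delta)
    u += ak * g                      # noisy ascent on the robust objective
# u approaches the maximizer of the ensemble-averaged objective,
# i.e. the ensemble mean
```

The per-iteration cost is thus independent of the ensemble size, which is the practical payoff of SPSA's noise tolerance in robust field development optimization.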

7.
We present a two-step stochastic inversion approach for monitoring the distribution of CO2 injected into deep saline aquifers for the typical scenario of a single injection well and a database comprising a common suite of well logs as well as time-lapse vertical seismic profiling (VSP) data. In the first step, we compute several sets of stochastic models of the elastic properties using conventional sequential Gaussian co-simulation (SGCS), representing the considered reservoir before CO2 injection. All realizations within a set of models are then iteratively combined using a modified gradual deformation algorithm aimed at reducing the mismatch between the observed and simulated VSP data. In the second step, these optimal static models serve as input for a history matching approach using the same modified gradual deformation algorithm to minimize the mismatch between the observed and simulated VSP data following the injection of CO2. At each gradual deformation step, the injection and migration of CO2 are simulated and the corresponding seismic traces are computed and compared with the observed ones. The proposed stochastic inversion approach has been tested on a realistic, and arguably particularly challenging, synthetic case study mimicking the geological environment of a potential CO2 injection site in the Cambrian-Ordovician sedimentary sequence of the St. Lawrence platform in Southern Québec. The results demonstrate that the proposed two-step reservoir characterization approach is capable of adequately resolving and monitoring the distribution of the injected CO2. This finds its expression in optimized models of P- and S-wave velocities, density, and porosity, which, compared to conventional stochastic reservoir models, exhibit a significantly improved structural similarity to the corresponding reference models. The proposed approach is therefore expected to allow for an optimal injection forecast by using a quantitative assimilation of all available data from the appraisal stage of a CO2 injection site.

8.
The Bayesian framework is the standard approach for data assimilation in reservoir modeling. This framework involves characterizing the posterior distribution of geological parameters in terms of a given prior distribution and data from the reservoir dynamics, together with a forward model connecting the space of geological parameters to the data space. Since the posterior distribution quantifies the uncertainty in the geologic parameters of the reservoir, characterizing the posterior is fundamental for the optimal management of reservoirs. Unfortunately, due to the large-scale, highly nonlinear properties of standard reservoir models, characterizing the posterior is computationally prohibitive. Instead, more affordable ad hoc techniques, based on Gaussian approximations, are often used. Evaluating the performance of these Gaussian approximations is typically done by assessing their ability to reproduce the truth within the confidence interval provided by the ad hoc technique under consideration. This has the disadvantage of conflating the approximation properties of the history matching algorithm with the information content of the particular observations used, making it hard to evaluate the effect of the ad hoc approximations alone. In this paper, we avoid this disadvantage by comparing the ad hoc techniques with a fully resolved, state-of-the-art probing of the Bayesian posterior distribution. The ad hoc techniques whose performance we assess are based on (1) linearization around the maximum a posteriori estimate, (2) randomized maximum likelihood, and (3) ensemble Kalman filter-type methods. In order to fully resolve the posterior distribution, we implement a state-of-the-art Markov chain Monte Carlo (MCMC) method that scales well with the dimension of the parameter space, enabling us to study realistic forward models, in two space dimensions, at a high level of grid refinement. Our implementation of the MCMC method provides the gold standard against which the aforementioned Gaussian approximations are assessed. We present numerical synthetic experiments in which we quantify the capability of each ad hoc Gaussian approximation to reproduce the mean and the variance of the posterior distribution (characterized via MCMC) associated with a data assimilation problem. Both single-phase and two-phase (oil-water) reservoir models are considered, so that fundamental differences in the resulting forward operators are highlighted. The main objective of our controlled experiments is to exhibit the substantial discrepancies in the approximation properties of standard ad hoc Gaussian approximations. Numerical investigations of this type will lead to a greater understanding of the cost-efficient, but ad hoc, Bayesian techniques used for data assimilation in petroleum reservoirs, and hence ultimately to improved techniques with more accurate uncertainty quantification.
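As a reminder of what probing a posterior with MCMC involves at its simplest, here is a hedged random-walk Metropolis sketch for a one-dimensional Gaussian target (this is not the scalable, dimension-robust MCMC variant the paper uses as its gold standard; the target and proposal scale are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

def log_post(x):
    # illustrative 1-D Gaussian posterior N(1.5, 0.5**2)
    return -0.5 * (x - 1.5) ** 2 / 0.25

x, chain = 0.0, []
for _ in range(20000):
    prop = x + 0.5 * rng.standard_normal()        # symmetric random walk
    if np.log(rng.random()) < log_post(prop) - log_post(x):
        x = prop                                   # Metropolis accept
    chain.append(x)
post = np.array(chain[5000:])                      # discard burn-in
# posterior mean and spread are recovered to Monte Carlo accuracy
```

In realistic reservoir settings each `log_post` evaluation requires a forward flow simulation, which is why a fully resolved posterior of this kind serves as an expensive gold standard rather than a routine tool.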

9.
Assessment of uncertainty in the performance of fluvial reservoirs often requires the ability to generate realizations of channel sands that are conditional to well observations. For channels with low sinuosity this problem has been effectively solved. When the sinuosity is large, however, the standard stochastic models for fluvial reservoirs are not valid, because the deviation of the channel from a principal direction line is multivalued. In this paper, I show how the method of randomized maximum likelihood can be used to generate conditional realizations of channels with large sinuosity. In the first example, a Gaussian random field model is used to generate an unconditional realization of a channel with large sinuosity, and this realization is then conditioned to well observations. In the second example, an unconditional realization of a channel is generated by a complex geologic model with random forcing and is then adjusted in a meaningful way to honor well observations. Channels generated in the first approach are less realistic, but may be sufficient for modeling reservoir connectivity in a realistic way. The key feature of the solution is the use of channel direction, instead of channel deviation, as the characteristic random function describing the geometry of the channel.

10.
Calibrating a large, finely gridded stochastic reservoir model to hydrodynamic data requires consistent methods for modifying the petrophysical properties of the model. Several methods have been developed to address this problem; recent ones include the Gradual Deformation Method (GDM) and the Probability Perturbation Method (PPM). The GDM has been applied to pixel-based models of continuous and categorical variables, as well as to object-based models. Initially, the PPM was applied to pixel-based models of categorical variables generated by sequential simulation. In addition, the PPM relies on an analytical formula (known as the tau-model) to approximate conditional probabilities. In this paper, an extension of the PPM to any type of probability distribution (discrete, continuous, or mixed) is presented. This extension is still constrained by the tau-model approximation. However, when the method is applied to white noises, this approximation is no longer necessary. The result is an entirely new and rigorous method for perturbing any type of stochastic model: a modified PPM employed in a similar manner to the GDM.

11.
The prediction of fluid flow within hydrocarbon reservoirs requires the characterization of petrophysical properties. Such characterization is performed on the basis of geostatistics and history matching; in short, a reservoir model is first randomly drawn and then sequentially adjusted until it reproduces the available dynamic data. Two main concerns typical of the problem under consideration are the heterogeneity of rocks, occurring at all scales, and the use of data of distinct resolution levels. Therefore, with reference to sequential Gaussian simulation, this paper proposes a new stochastic simulation method able to handle several scales for both continuous and discrete random fields. This method adds flexibility to history matching, as it boils down to a multiscale parameterization of reservoir models. In other words, reservoir models can be updated at coarse scales, fine scales, or both. The parameterization adapts to the available data: the coarser the scale targeted, the smaller the number of unknown parameters, and the more efficient the history matching process. This paper focuses on the use of variational optimization techniques driven by the gradual deformation method to vary reservoir models. Other data assimilation methods and perturbation processes could have been envisioned as well. Finally, a numerical application case is presented to highlight the advantages of the proposed method for conditioning permeability models to dynamic data. For simplicity, we focus on two-scale processes. The coarse scale describes variations in the trend, while the fine scale characterizes local variations around the trend. The relationships between data resolution and parameterization are investigated.

12.
Application of EM algorithms for seismic facies classification
Identification of geological facies and their distribution from seismic and other available geological information is important during the early stage of reservoir development (e.g., for decisions on initial well locations). Traditionally, this is done by manually inspecting the signatures of seismic attribute maps, which is very time-consuming. This paper proposes an application of the Expectation-Maximization (EM) algorithm to automatically identify geological facies from seismic data. While the properties within a given geological facies are relatively homogeneous, the properties between geological facies can be rather different. Assuming that the noisy seismic data of a geological facies, which reflect rock properties, can be approximated with a Gaussian distribution, the seismic data of a reservoir composed of several geological facies are samples from a Gaussian mixture model. The mean of each Gaussian component represents the average value of the seismic data within a facies, while the variance gives the variation of the seismic data within that facies. The proportions in the Gaussian mixture model represent the relative volumes of the different facies in the reservoir. In this setting, the facies classification problem becomes one of estimating the parameters defining the Gaussian mixture model. The EM algorithm has long been used to estimate Gaussian mixture model parameters. As the standard EM algorithm does not consider spatial relationships among data, it can generate spatially scattered seismic facies, which are physically unrealistic. We improve the standard EM algorithm by adding a spatial constraint to enhance the spatial continuity of the estimated geological facies. By applying the EM algorithms to acoustic impedance and Poisson's ratio data for two synthetic examples, we are able to identify the facies distribution.
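The estimation loop described above can be sketched for a one-dimensional, two-facies case without the spatial constraint (the synthetic data, initial values, and iteration count are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic "seismic attribute" samples from two facies
data = np.concatenate([rng.normal(0.0, 1.0, 400), rng.normal(5.0, 1.0, 600)])

mu = np.array([-1.0, 1.0])     # initial facies means
sigma = np.array([1.0, 1.0])   # initial facies std devs
pi = np.array([0.5, 0.5])      # initial facies proportions

for _ in range(50):
    # E-step: responsibility of each facies for each sample
    dens = pi * np.exp(-0.5 * ((data[:, None] - mu) / sigma) ** 2) / sigma
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate proportions, means, and variances
    nk = resp.sum(axis=0)
    pi = nk / len(data)
    mu = (resp * data[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (data[:, None] - mu) ** 2).sum(axis=0) / nk)
# mu approaches the facies means (0 and 5), pi their volume fractions
```

The paper's spatially constrained variant additionally penalizes responsibilities that disagree with those of neighboring samples; that term is omitted here.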

13.
Reservoir characterization requires the integration of various data through history matching, especially dynamic information such as production or four-dimensional seismic data. To update geostatistical realizations, the local gradual deformation method can be used. However, history matching is a complex inverse problem, and the computational effort, in terms of the number of reservoir simulations required in the optimization procedure, increases with the number of matching parameters. History matching large fields with a large number of parameters has been an ongoing challenge in reservoir simulation. This paper presents a new technique to improve history matching with the local gradual deformation method using gradient-based optimization. The new approach is based on approximate derivative calculations exploiting the partial separability of the objective function. The objective function is first split into local components, and only the most influential parameters in each component are used for the derivative computation. A perturbation design is then proposed to compute all the derivatives simultaneously with only a few simulations. This new technique makes history matching with the local gradual deformation method tractable for large numbers of parameters.
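The perturbation design can be illustrated on a toy separable objective: when the objective is split into local components that each depend on disjoint parameter subsets, perturbing one parameter per component simultaneously recovers all derivatives from a handful of evaluations. A hedged sketch (the components, grouping, and step size are illustrative; the paper's design also handles overlapping parameter influence):

```python
import numpy as np

def components(x):
    # two local mismatch components; component k depends only on x[2k], x[2k+1]
    return np.array([(x[0] - 1.0) ** 2 + x[1] ** 2,
                     (x[2] + 2.0) ** 2 + (x[3] - 1.0) ** 2])

x0 = np.zeros(4)
h = 1e-6
base = components(x0)
grad = np.zeros(4)
# two perturbation runs recover all four derivatives: each run perturbs
# one parameter in every component simultaneously
for offset in (0, 1):
    xp = x0.copy()
    xp[offset::2] += h                        # one parameter per component
    diff = (components(xp) - base) / h        # per-component forward differences
    grad[offset::2] = diff                    # attribute each change locally
# grad approximates the analytic gradient (-2, 0, 4, -2)
```

With one-at-a-time finite differences this would have cost four extra simulations instead of two; the savings grow with the number of parameters per run.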

14.
Optimization with the Gradual Deformation Method
Building reservoir models consistent with production data and prior geological knowledge is usually carried out through the minimization of an objective function. Such optimization problems are nonlinear and may be difficult to solve because they tend to be ill-posed and to involve many parameters. The gradual deformation technique was introduced recently to simplify these problems. Its main feature is the preservation of spatial structure: perturbed realizations exhibit the same spatial variability as the starting ones. It is shown that optimizations based on gradual deformation converge exponentially to the global minimum, at least for linear problems. In addition, it appears that combining the gradual deformation parameterization with optimization may remove, step by step, the structure preservation capability of the gradual deformation method. This bias is negligible when deformation is restricted to a few realization chains, but grows as the number of chains tends to infinity. Since, in practice, the optimization of reservoir models is limited to a number of iterations that is small relative to the number of gridblocks, the spatial variability is preserved. Finally, the optimization processes are implemented on the basis of the Levenberg-Marquardt method. Although the objective functions, written in terms of Gaussian white noises, are reduced to the data mismatch term, the conditional realization space can be properly sampled.
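A hedged sketch of the chained optimization idea: each chain combines the current realization with a fresh independent one and minimizes the mismatch over the single deformation parameter t (here by brute-force search over a grid of t values; the "data" are a toy white-noise target rather than a flow response, and the grid search stands in for the paper's Levenberg-Marquardt step):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
target = rng.standard_normal(n)          # "truth" behind the observed data

def mismatch(y):
    return np.mean((y - target) ** 2)

y = rng.standard_normal(n)               # starting realization
j0 = mismatch(y)
ts = np.linspace(-np.pi, np.pi, 181)     # grid over the deformation parameter
for chain in range(30):
    y_new = rng.standard_normal(n)       # fresh independent realization
    cands = [np.cos(t) * y + np.sin(t) * y_new for t in ts]
    y = min(cands, key=mismatch)         # best combination along this chain
# t = 0 is on the grid, so the mismatch never increases between chains
```

Every candidate on every chain keeps the Gaussian statistics of the starting realization, which is the structure-preservation property the abstract discusses.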

15.
On optimization algorithms for the reservoir oil well placement problem
Determining optimal locations and operation parameters for wells in oil and gas reservoirs has a potentially high economic impact. Finding these optima depends on a complex combination of geological, petrophysical, flow regime, and economic parameters that are hard to grasp intuitively. On the other hand, automatic approaches have in the past been hampered by the overwhelming computational cost of running thousands of potential cases through reservoir simulators, given that each of these runs can take on the order of hours. Therefore, the key issue for such automatic optimization is the development of algorithms that find good solutions with a minimum number of function evaluations. In this work, we compare and analyze the efficiency, effectiveness, and reliability of several optimization algorithms for the well placement problem. In particular, we consider the simultaneous perturbation stochastic approximation (SPSA), finite difference gradient (FDG), and very fast simulated annealing (VFSA) algorithms. None of these algorithms is guaranteed to find the optimal solution, but we show that both SPSA and VFSA are very efficient in finding nearly optimal solutions with high probability. We illustrate this with a set of numerical experiments based on real data for single and multiple well placement problems.

16.
There is a correspondence between flow in a reservoir and large-scale permeability trends. This correspondence can be derived by constraining reservoir models using observed production data. One of the challenges in deriving the permeability distribution of a field from production data is determining the scale of resolution of the permeability. Adaptive Multiscale Estimation (AME) seeks to overcome the problems related to choosing the resolution of the permeability field through dynamic parameterisation selection. The standard AME uses a gradient algorithm to solve several optimisation problems with increasing permeability resolution. This paper presents a hybrid algorithm which combines a gradient search and a stochastic algorithm to improve the robustness of the dynamic parameterisation selection. At low dimension, we use the stochastic algorithm to generate several optimised models. We use information from all these models to find new optimal refinements, and start new optimisations from several distinct candidate parameterisations. At higher dimensions we switch to a gradient-type optimiser, where the initial solution is chosen, according to a predefined criterion, from the ensemble of models suggested by the stochastic algorithm. We demonstrate the robustness of the hybrid algorithm on synthetic test cases, most of which were considered unsolvable using the standard AME algorithm.

17.
The least squares Monte Carlo method is a decision evaluation method that can capture the effect of uncertainty and the value of flexibility of a process. The method is a stochastic approximate dynamic programming approach to decision making. It is based on a forward simulation coupled with a recursive algorithm which produces the near-optimal policy, and it relies on Monte Carlo simulation to produce convergent results. For reservoir engineering problems this incurs a significant computational requirement, because it means running many reservoir simulations. The objective of this study was to enhance the performance of the least squares Monte Carlo method by improving the sampling method used to generate the technical uncertainties underlying the production profiles. The probabilistic collocation method has been proven to be a robust and efficient uncertainty quantification method. By using its sampling methods to approximate the sampling of the technical uncertainties, it is possible to significantly reduce the computational requirement of the decision evaluation method. Thus, we introduce the least squares probabilistic collocation method. The decision evaluation considered a number of technical and economic uncertainties. Three reservoir case studies were used: a simple homogeneous model, the PUNQ-S3 model, and a modified portion of the SPE10 model. The results show that the sampling techniques of the probabilistic collocation method produce relatively accurate responses compared with the original method. Different possible enhancements are discussed with a view to practically adapting the least squares probabilistic collocation method to more realistic and complex reservoir models. A further goal is to apply the method to evaluating high-dimensional decision scenarios for different chemical enhanced oil recovery processes using real reservoir data.
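The least squares Monte Carlo step itself is compact: realized downstream payoffs are regressed on basis functions of the current state to approximate the continuation value, and the policy compares it with the immediate payoff. A toy, hedged sketch with a scalar state (the payoff structure, discount factor, and quadratic basis are illustrative assumptions, not a reservoir decision problem):

```python
import numpy as np

rng = np.random.default_rng(11)
n_paths = 20000
disc = 0.95                                        # one-period discount factor

s1 = rng.normal(1.0, 0.3, n_paths)                 # state at the decision time
s2 = s1 + rng.normal(0.0, 0.2, n_paths)            # state one period later
payoff_now = np.maximum(1.05 - s1, 0.0)            # act immediately
payoff_later = disc * np.maximum(1.05 - s2, 0.0)   # wait one period

# LSM regression: continuation value ~ quadratic basis of the current state
X = np.vander(s1, 3)                               # columns [s1**2, s1, 1]
beta, *_ = np.linalg.lstsq(X, payoff_later, rcond=None)
cont_value = X @ beta

# near-optimal policy: act now only where it beats the continuation value
value = np.where(payoff_now > cont_value, payoff_now, payoff_later).mean()
naive = payoff_later.mean()                        # always-wait policy
# value exceeds naive: the regression-based policy captures the value of
# acting early on deep in-the-money paths
```

In the reservoir setting, each path would come from a reservoir simulation run, which is exactly the cost the probabilistic collocation sampling aims to reduce.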

18.
19.
A Bayesian linear inversion methodology based on Gaussian mixture models, and its application to geophysical inverse problems, is presented in this paper. The proposed inverse method is based on a Bayesian approach under the assumptions of a Gaussian mixture random field for the prior model and a Gaussian linear likelihood function. The model for the latent discrete variable is defined to be a stationary first-order Markov chain. In this approach, a recursive exact solution to an approximation of the posterior distribution of the inverse problem is proposed. A Markov chain Monte Carlo algorithm can be used to efficiently simulate realizations from the correct posterior model. Two inversion studies based on real well log data are presented, and the main results are the posterior distributions of the reservoir properties of interest, the corresponding predictions and prediction intervals, and a set of conditional realizations. The first application is a seismic inversion study for the prediction of lithological facies and P- and S-impedance, where an improvement of 30% in the root-mean-square error of the predictions is obtained compared to traditional Gaussian inversion. The second application is a rock physics inversion study for the prediction of lithological facies, porosity, and clay volume, where the predictions improve slightly compared to the Gaussian inversion approach.

20.
Bayesian modeling requires the specification of prior and likelihood models. In reservoir characterization, it is common practice to estimate the prior from a training image. This paper considers a multi-grid approach to the construction of prior models for binary variables. On each grid level we adopt a Markov random field (MRF) conditioned on values in previous levels. Parameter estimation in MRFs is complicated by a computationally intractable normalizing constant. To cope with this problem, we generate a partially ordered Markov model (POMM) approximation to the MRF and use this in the model fitting procedure. Approximate unconditional simulation from the fitted model can easily be done by again adopting the POMM approximation to the fitted MRF. Approximate conditional simulation, for a given and easy-to-compute likelihood function, can also be performed, either by the Metropolis-Hastings algorithm based on an approximation to the fitted MRF or by constructing a new POMM approximation to this approximate conditional distribution. The proposed methods are illustrated using three frequently used binary training images.
