Similar Articles
20 similar articles retrieved.
2.
Ensemble methods present a practical framework for parameter estimation, performance prediction, and uncertainty quantification in subsurface flow and transport modeling. In particular, the ensemble Kalman filter (EnKF) has received significant attention for its promising performance in calibrating heterogeneous subsurface flow models. Since an ensemble of model realizations is used to compute the statistical moments needed to perform the EnKF updates, large ensemble sizes are needed to provide accurate updates and uncertainty assessment. However, for realistic problems that involve large-scale models with computationally demanding flow simulation runs, the EnKF implementation is limited to small-sized ensembles. As a result, spurious numerical correlations can develop and lead to inaccurate EnKF updates, which tend to underestimate or even eliminate the ensemble spread. Ad hoc practical remedies, such as localization, local analysis, and covariance inflation schemes, have been developed and applied to reduce the effect of sampling errors due to small ensemble sizes. In this paper, a fast linear approximate forecast method is proposed as an alternative approach to enable the use of large ensemble sizes in operational settings to obtain improved sample statistics and EnKF updates. The proposed method first clusters a large number of initial geologic model realizations into a small number of groups. A representative member from each group is used to run a full forward flow simulation. The flow predictions for the remaining realizations in each group are approximated by a linearization around the full simulation results of the representative model (centroid) of the respective cluster. The linearization can be performed using either adjoint-based or ensemble-based gradients. Results from several numerical experiments with two-phase and three-phase flow systems in this paper suggest that the proposed method can be applied to improve the EnKF performance in large-scale problems where the number of full simulations is constrained.
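A minimal sketch of the clustered linear-forecast idea. The toy `simulate` function, the dimensions, and the perturbation sizes are illustrative assumptions standing in for a real flow simulator and its adjoint or ensemble gradients, not the paper's implementation:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_ens, n_param, n_obs, n_clusters = 500, 50, 10, 5

W = rng.normal(size=(n_param, n_obs))
def simulate(m):
    """Stand-in for an expensive forward flow simulation (toy nonlinearity)."""
    return np.tanh(m @ W)

M = rng.normal(size=(n_ens, n_param))            # large prior ensemble

# cluster the realizations; run the full simulator only on each centroid
km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(M)
D = np.empty((n_ens, n_obs))
for k in range(n_clusters):
    idx = np.where(km.labels_ == k)[0]
    m_c = km.cluster_centers_[k]
    d_c = simulate(m_c)                          # one full run per cluster
    # ensemble-based gradient: a handful of perturbed runs around the centroid
    dM = 0.05 * rng.normal(size=(20, n_param))
    dD = np.stack([simulate(m_c + dm) - d_c for dm in dM])
    G, *_ = np.linalg.lstsq(dM, dD, rcond=None)  # least-squares sensitivity
    # linearized (cheap) forecast for the remaining members of the cluster
    D[idx] = d_c + (M[idx] - m_c) @ G

# standard EnKF update, now with statistics from the full large ensemble
d_obs = simulate(rng.normal(size=n_param)) + 0.01 * rng.normal(size=n_obs)
C = np.cov(np.hstack([M, D]).T)
K = C[:n_param, n_param:] @ np.linalg.inv(C[n_param:, n_param:]
                                          + 1e-4 * np.eye(n_obs))
M_post = M + (d_obs + 0.01 * rng.normal(size=(n_ens, n_obs)) - D) @ K.T
```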

3.
Geologic uncertainties and limited well data often render recovery forecasting a difficult undertaking in typical appraisal and early development settings. Recent advances in geologic modeling algorithms permit automation of the model generation process via macros and geostatistical tools. This allows rapid construction of multiple alternative geologic realizations. Despite the advances in geologic modeling, computation of the reservoir dynamic response via full-physics reservoir simulation remains a computationally expensive task. Therefore, only a few of the many probable realizations are simulated in practice. Experimental design techniques typically focus on a few discrete geologic realizations, as they are inherently more suitable for continuous engineering parameters and can only crudely approximate the impact of geology. As an alternative, a flow-based pattern recognition algorithm (FPRA) has been developed for quantifying forecast uncertainty. The proposed algorithm relies on the rapid characterization of the geologic uncertainty space represented by an ensemble of sufficiently diverse static model realizations. FPRA characterizes the geologic uncertainty space by calculating connectivity distances, which quantify how different each individual realization is from all others in terms of recovery response. Fast streamline simulations are employed in evaluating these distances. By applying pattern recognition techniques to connectivity distances, a few representative realizations are identified within the model ensemble for full-physics simulation. In turn, the recovery factor probability distribution is derived from these intelligently selected simulation runs. Here, FPRA is tested on an example case where the objective is to accurately compute the recovery factor statistics as a function of geologic uncertainty in a channelized turbidite reservoir. Recovery factor cumulative distribution functions computed by FPRA compare well to those computed via exhaustive full-physics simulation.
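A hedged illustration of the selection step: given a precomputed connectivity-distance matrix (here a random stand-in for streamline-derived distances), embed it, cluster, pick one medoid per cluster, and weight each medoid's recovery factor by its cluster fraction to form a CDF:

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_real, n_clusters = 200, 8

# stand-in for connectivity distances between realizations
X_latent = rng.normal(size=(n_real, 3))
D = np.linalg.norm(X_latent[:, None] - X_latent[None, :], axis=-1)

# embed the distance matrix, cluster, and pick one medoid per cluster
emb = MDS(n_components=2, dissimilarity="precomputed",
          random_state=0).fit_transform(D)
labels = KMeans(n_clusters=n_clusters, n_init=10,
                random_state=0).fit_predict(emb)

reps, weights = [], []
for k in range(n_clusters):
    idx = np.where(labels == k)[0]
    reps.append(idx[np.argmin(D[np.ix_(idx, idx)].sum(axis=0))])
    weights.append(len(idx) / n_real)

# full-physics simulation only for the representatives (toy recovery factor)
rf = 0.3 + 0.1 * X_latent[reps, 0]          # stand-in for simulator output
order = np.argsort(rf)
cdf = np.cumsum(np.array(weights)[order])   # weighted recovery-factor CDF
print(list(zip(np.round(rf[order], 3), np.round(cdf, 2))))
```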

4.
In the development of naturally fractured reservoirs (NFRs), the existence of natural fractures induces severe fingering and breakthrough. To manage the flooding process and improve the ultimate recovery, we propose a numerical workflow to generate optimal production schedules for smart wells, in which the inflow control valve (ICV) settings can be controlled individually. To properly account for the uncertainty introduced by randomly distributed natural fractures, robust optimization would require a large ensemble size and would be computationally demanding. In this work, a hierarchical clustering method is proposed to select representative models for the robust optimization, in order to avoid redundant simulation runs and improve its efficiency. By reducing the full ensemble of models to a small subset, the efficiency of the robust optimization algorithm is significantly improved. The robust optimization is performed using the StoSAG scheme to find the optimal well controls that maximize the net present value (NPV) of the NFR's development. Due to the discrete nature of a natural fracture field, traditional feature extraction methods such as model-parameter-based clustering may not be directly applicable. Therefore, two kinds of clustering-based optimization methods, a state-based (e.g., saturation, s_w, profiles) clustering and a response-based (e.g., production rates) clustering, are proposed and compared, as shown in the sketch below. The computational results show that the clustered robust optimization can increase computational efficiency significantly without sacrificing much of the expected NPV. Moreover, the performance of different clustering algorithms varies widely depending on the choice of clustering features. By properly extracting model features, the clustered subset can adequately represent the uncertainty of the full ensemble.
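A sketch of the clustering step under stated assumptions: random arrays stand in for saturation maps and production-rate curves, and `representative_subset` is a hypothetical helper, not the authors' code:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(2)
n_models, n_sub = 100, 10

# stand-ins for simulator outputs of the full ensemble (illustrative only)
sw_profiles = rng.random((n_models, 400))   # state-based features (s_w maps)
prod_rates  = rng.random((n_models, 30))    # response-based features (rates)

def representative_subset(features, n_sub):
    """Ward hierarchical clustering; return one medoid per cluster."""
    labels = fcluster(linkage(features, method="ward"),
                      t=n_sub, criterion="maxclust")
    reps = []
    for k in range(1, n_sub + 1):
        idx = np.where(labels == k)[0]
        if len(idx) == 1:
            reps.append(idx[0])
            continue
        D = squareform(pdist(features[idx]))
        reps.append(idx[np.argmin(D.sum(axis=0))])   # member closest to rest
    return np.array(reps)

# the two feature choices generally select different subsets; the robust
# (StoSAG) optimization is then run on the subset instead of the full ensemble
print(representative_subset(sw_profiles, n_sub))
print(representative_subset(prod_rates, n_sub))
```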

5.
Representing Spatial Uncertainty Using Distances and Kernels
Assessing uncertainty of a spatial phenomenon requires the analysis of a large number of parameters which must be processed by a transfer function. To capture the possibly wide range of uncertainty in the transfer function response, a large set of geostatistical model realizations needs to be processed. Stochastic spatial simulation can rapidly provide multiple, equally probable realizations. However, since the transfer function is often computationally demanding, only a small number of models can be evaluated in practice, and these are usually selected through a ranking procedure. Traditional ranking techniques for selecting probabilistic ranges of response (P10, P50, and P90) are highly dependent on the static property used. In this paper, we propose to parameterize the spatial uncertainty represented by a large set of geostatistical realizations through a distance function measuring "dissimilarity" between any two geostatistical realizations. The distance function allows a mapping of the space of uncertainty and can be tailored to the particular problem. The multi-dimensional space of uncertainty can be modeled using kernel techniques, such as kernel principal component analysis (KPCA) or kernel clustering. These tools allow for the selection of a subset of representative realizations containing properties similar to those of the larger set. Decisions and strategies can then be evaluated by applying the transfer function to the subset, without losing accuracy and without the need to exhaustively evaluate each realization. This method is applied to a synthetic oil reservoir, where spatial uncertainty of channel facies is modeled through multiple realizations generated using a multi-point geostatistical algorithm and several training images.
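One plausible realization of the distance-kernel workflow. The distances here are synthetic, and the Gaussian kernel bandwidth set by a median heuristic is an assumption, not the paper's choice:

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
n_real, n_sub = 300, 10

latent = rng.normal(size=(n_real, 4))
D = np.linalg.norm(latent[:, None] - latent[None, :], axis=-1)  # dissimilarity

sigma = np.median(D)                         # kernel bandwidth heuristic
K = np.exp(-(D ** 2) / (2 * sigma ** 2))     # Gaussian kernel from distances

Z = KernelPCA(n_components=5, kernel="precomputed").fit_transform(K)
labels = KMeans(n_clusters=n_sub, n_init=10, random_state=0).fit_predict(Z)

reps = []
for k in range(n_sub):
    idx = np.where(labels == k)[0]
    reps.append(idx[np.argmin(D[np.ix_(idx, idx)].sum(axis=0))])  # medoid
# apply the (expensive) transfer function only to the realizations in `reps`
print(reps)
```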

6.
Determining the optimum type and location of new wells is an essential component in the efficient development of oil and gas fields. The optimization problem is, however, demanding due to the potentially high dimension of the search space and the computational requirements associated with function evaluations, which, in this case, entail full reservoir simulations. In this paper, the particle swarm optimization (PSO) algorithm is applied for the determination of optimal well type and location. The PSO algorithm is a stochastic procedure that uses a population of solutions, called particles, which move in the search space. Particle positions are updated iteratively according to particle fitness (objective function value) and position relative to other particles. The general PSO procedure is first discussed, and then the particular variant implemented for well optimization is described. Four example cases are considered. These involve vertical, deviated, and dual-lateral wells and optimization over single and multiple reservoir realizations. For each case, both the PSO algorithm and the widely used genetic algorithm (GA) are applied to maximize net present value. Multiple runs of both algorithms are performed and the results are averaged in order to achieve meaningful comparisons. It is shown that, on average, PSO outperforms GA in all cases considered, though the relative advantages of PSO vary from case to case. Taken in total, these findings are very promising and demonstrate the applicability of PSO for this challenging problem.
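A compact, self-contained PSO loop on a toy NPV surface; the quadratic `npv`, swarm size, and coefficients are illustrative, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(4)

def npv(x):
    """Toy stand-in for NPV from a reservoir simulation at well location x."""
    return -((x[..., 0] - 3.0) ** 2 + (x[..., 1] + 1.0) ** 2) + 10.0

n_part, n_iter, w, c1, c2 = 30, 100, 0.7, 1.5, 1.5
lo, hi = np.array([-10.0, -10.0]), np.array([10.0, 10.0])

x = rng.uniform(lo, hi, size=(n_part, 2))     # particle positions
v = np.zeros_like(x)                          # particle velocities
pbest, pbest_val = x.copy(), npv(x)           # per-particle best
g = pbest[np.argmax(pbest_val)]               # global best

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_part, 1))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = np.clip(x + v, lo, hi)
    val = npv(x)
    better = val > pbest_val
    pbest[better], pbest_val[better] = x[better], val[better]
    g = pbest[np.argmax(pbest_val)]

print("best location:", g.round(3), "NPV:", round(float(npv(g)), 3))
```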

7.
Reservoir characterization requires the integration of various data through history matching, especially dynamic information such as production or 4D seismic data. Although reservoir heterogeneities are commonly generated using geostatistical models, random realizations cannot generally match observed dynamic data. To constrain model realizations to reproduce measured dynamic data, an optimization procedure may be applied in an attempt to minimize an objective function, which quantifies the mismatch between real and simulated data. Such assisted history matching methods require a parameterization of the geostatistical model to allow the updating of an initial model realization. However, there are only a few parameterization methods available that update geostatistical models in a way consistent with the underlying geostatistical properties. This paper presents a local domain parameterization technique that updates geostatistical realizations using assisted history matching. This technique allows us to locally change model realizations through the variation of geometrical domains whose geometry and size can be easily controlled and parameterized. This approach provides a new way to parameterize geostatistical realizations in order to improve history matching efficiency.

8.
This paper describes a new method for gradually deforming realizations of Gaussian-related stochastic models while preserving their spatial variability. The method consists of building a stochastic process whose state space is the ensemble of realizations of a spatial stochastic model. In particular, a stochastic process built by combining independent Gaussian random functions is proposed to perform the gradual deformation of realizations. The gradual deformation algorithm is then coupled with an optimization algorithm to calibrate realizations of stochastic models to nonlinear data. The method is applied to calibrate a continuous and a discrete synthetic permeability field to well-test pressure data. The examples illustrate the efficiency of the proposed method. Furthermore, we present some extensions of this method (multidimensional gradual deformation, gradual deformation with respect to structural parameters, and local gradual deformation) that are useful in practice. Although the method described in this paper is operational only in the Gaussian framework (e.g., lognormal model, truncated Gaussian model, etc.), the idea of gradually deforming realizations through a stochastic process remains general and therefore promising even for calibrating non-Gaussian models.
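The core one-parameter deformation can be sketched in a few lines. The `tanh` data mismatch is a toy stand-in for a well-test simulation; in practice one chains such steps (set y1 to the calibrated field, redraw y2, and repeat):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
n = 1000
y1 = rng.standard_normal(n)   # two independent N(0,1) realizations with the
y2 = rng.standard_normal(n)   # same (here trivial) covariance structure

def deform(t):
    """Gradual deformation: stays N(0,1) with the same covariance for all t,
    because cos^2 + sin^2 = 1."""
    return y1 * np.cos(np.pi * t) + y2 * np.sin(np.pi * t)

# toy nonlinear data mismatch (stand-in for a flow-simulation objective)
target = rng.standard_normal(n)
def objective(t):
    return np.sum((np.tanh(deform(t)) - np.tanh(target)) ** 2)

res = minimize_scalar(objective, bounds=(-1.0, 1.0), method="bounded")
y_cal = deform(res.x)   # calibrated realization, spatial statistics preserved
print(res.x, objective(res.x) / objective(0.0))
```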

9.
Waterflooding using closed-loop control
To fully exploit the possibilities of "smart" wells containing both measurement and control equipment, one can envision a system where the measurements are used for frequent updating of a reservoir model, and an optimal control strategy is computed based on this continuously updated model. We developed such a closed-loop control approach using an ensemble Kalman filter to obtain frequent updates of a reservoir model. Based on the most recent update of the reservoir model, the optimal control strategy is computed with the aid of an adjoint formulation. The objective is to maximize the economic value over the life of the reservoir. We demonstrate the methodology on a simple waterflooding example using one injector and one producer, each equipped with several individually controllable inflow control valves (ICVs). The parameters (permeabilities) and dynamic states (pressures and saturations) of the reservoir model are updated from pressure measurements in the wells. The control of the ICVs is rate-constrained, but the methodology is also applicable to a pressure-constrained situation. Furthermore, the methodology is not restricted to "smart" wells with downhole control, but could also be used for flooding control with conventional wells, provided the wells are equipped with controllable chokes and with sensors for measurement of (wellhead or downhole) pressures and total flow rates. As the ensemble Kalman filter is a Monte Carlo approach, the final results will vary for each run. We studied the robustness of the methodology, starting from different initial ensembles. Moreover, we compared a case with low measurement noise to one with significantly higher measurement noise. In all examples considered, the resulting ultimate recovery was significantly higher than for the case of waterflooding using conventional wells. Furthermore, the results obtained using closed-loop control, starting from an unknown permeability field, were almost as good as those obtained assuming a priori knowledge of the permeability field.
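A minimal closed-loop skeleton under heavy assumptions: a linear toy `forecast` replaces the reservoir simulator, and a choke-back heuristic replaces the adjoint-based control optimizer:

```python
import numpy as np

rng = np.random.default_rng(6)
n_ens, n_v = 50, 8                    # ensemble size, number of ICVs
sd = 0.05                             # measurement noise std

def forecast(perm, u):
    """Toy one-step well response for valve settings u (not a real simulator)."""
    return perm * u

perm_true = rng.lognormal(size=n_v)
ens = rng.lognormal(size=(n_ens, n_v))           # prior permeability ensemble
u = np.full(n_v, 0.5)                            # initial valve settings

for step in range(10):
    # measure: noisy response of the "true" reservoir
    d_obs = forecast(perm_true, u) + sd * rng.standard_normal(n_v)
    # EnKF update of the ensemble from the measurements
    D = np.array([forecast(m, u) for m in ens])
    A, Dp = ens - ens.mean(0), D - D.mean(0)
    K = (A.T @ Dp) @ np.linalg.inv(Dp.T @ Dp
                                   + (n_ens - 1) * sd**2 * np.eye(n_v))
    ens = ens + (d_obs + sd * rng.standard_normal((n_ens, n_v)) - D) @ K.T
    # re-optimize controls on the updated model; the paper uses an adjoint,
    # here a heuristic choking back high-permeability (early-breakthrough) ICVs
    m_hat = ens.mean(axis=0)
    u = np.clip(m_hat.mean() / m_hat, 0.0, 1.0)

print(np.round(ens.mean(0), 2), np.round(perm_true, 2))
```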

11.
In this paper, we propose multilevel Monte Carlo (MLMC) methods that use ensemble level mixed multiscale methods in the simulations of multiphase flow and transport. The contribution of this paper is twofold: (1) a design of ensemble level mixed multiscale finite element methods and (2) a novel use of mixed multiscale finite element methods within multilevel Monte Carlo techniques to speed up the computations. The main idea of ensemble level multiscale methods is to construct local multiscale basis functions that can be used for any member of the ensemble. In this paper, we consider two ensemble level mixed multiscale finite element methods: (1) the no-local-solve-online ensemble level method (NLSO) and (2) the local-solve-online ensemble level method (LSO). The first approach was proposed in Aarnes and Efendiev (SIAM J. Sci. Comput. 30(5):2319–2339, 2008), while the second approach is new. Both mixed multiscale methods use a number of snapshots of the permeability media in generating multiscale basis functions. As a result, in the off-line stage, we construct multiple basis functions for each coarse region, where basis functions correspond to different realizations. In the no-local-solve-online ensemble level method, one uses the whole set of precomputed basis functions to approximate the solution for an arbitrary realization. In the local-solve-online ensemble level method, one uses the precomputed functions to construct a multiscale basis for a particular realization. With this basis, the solution corresponding to this particular realization is approximated in the LSO mixed multiscale finite element method (MsFEM). In both approaches, the accuracy of the method is related to the number of snapshots, computed from different realizations, that are used to precompute the multiscale basis. In this paper, ensemble level multiscale methods are used within multilevel Monte Carlo methods (Giles, Oper. Res. 56(3):607–617, 2008). In multilevel Monte Carlo methods, more accurate (and expensive) forward simulations are run with fewer samples, while less accurate (and inexpensive) forward simulations are run with a larger number of samples. Selecting the number of expensive and inexpensive simulations based on the number of coarse degrees of freedom, one can show that MLMC methods can provide better accuracy at the same cost as Monte Carlo (MC) methods. The main objective of the paper is twofold: first, to compare the NLSO and LSO mixed MsFEMs; and second, to use both approaches in the context of MLMC to speed up MC calculations.
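A two-level MLMC sketch with a toy `solve` whose grid resolution controls cost and bias; the quantity of interest and the sample counts are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

def solve(perm_field, n_cells):
    """Stand-in for a flow solve at a given resolution: coarser grids average
    the permeability over blocks (cheap, biased)."""
    blocks = perm_field.reshape(n_cells, -1).mean(axis=1)
    return float(np.mean(1.0 / blocks))   # toy quantity of interest

n_fine, n_coarse = 64, 8
N0, N1 = 2000, 50                         # many coarse, few fine samples

# level 0: cheap coarse-grid Monte Carlo
Y0 = [solve(rng.lognormal(size=n_fine), n_coarse) for _ in range(N0)]

# level 1: correction term, fine minus coarse on the SAME realizations,
# so the two levels telescope to an unbiased fine-grid estimate
Y1 = []
for _ in range(N1):
    k = rng.lognormal(size=n_fine)
    Y1.append(solve(k, n_fine) - solve(k, n_coarse))

mlmc_estimate = np.mean(Y0) + np.mean(Y1)
print(round(mlmc_estimate, 4))
```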

12.
In conventional waterflooding of an oil field, feedback-based optimal control technologies may enable higher oil recovery than a conventional reactive strategy, in which producers are closed based on water breakthrough. To compensate for the inherent geological uncertainties in an oil field, robust optimization has been suggested to improve and robustify optimal control strategies. In robust optimization of an oil reservoir, the water injection and production borehole pressures (bhp) are computed such that the predicted net present value (NPV) of an ensemble of permeability field realizations is maximized. In this paper, we consider both an open-loop optimization scenario, with no feedback, and a closed-loop optimization scenario. The closed-loop scenario is implemented in a moving horizon manner, and feedback is obtained using an ensemble Kalman filter for estimation of the permeability field from the production data. For open-loop implementations, previous test case studies presented in the literature show that a traditional robust optimization strategy (RO) gives a higher expected NPV with lower NPV standard deviation than a conventional reactive strategy. We present and study a test case where the opposite happens: the reactive strategy gives a higher expected NPV with a lower NPV standard deviation than the RO strategy. To improve the RO strategy, we propose a modified robust optimization strategy (modified RO) that can shut in uneconomical producer wells. This strategy inherits the features of both the reactive and the RO strategy. Simulations reveal that the modified RO strategy results in operations with larger returns and less risk than the reactive strategy, the RO strategy, and the certainty equivalent strategy. The returns are measured by the expected NPV, and the risk is measured by the standard deviation of the NPV. In closed-loop optimization, we investigate and compare the performance of the RO strategy, the reactive strategy, and the certainty equivalent strategy. The certainty equivalent strategy is based on a single realization of the permeability field, using the mean of the ensemble as its permeability field. Simulations reveal that the RO strategy and the certainty equivalent strategy give a higher NPV compared to the reactive strategy. Surprisingly, the RO strategy and the certainty equivalent strategy give similar NPVs. Consequently, the certainty equivalent strategy is preferable in the closed-loop situation, as it requires significantly fewer computational resources than the robust optimization strategy. The similarity of the certainty equivalent and the robust optimization based strategies in the closed-loop situation challenges the intuition of most reservoir engineers. Feedback reduces the uncertainty, and this is the reason for the similar performance of the two strategies.
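A toy version of the expected-NPV objective and the shut-in rule of the modified RO strategy; the inflow and water-cut economics below are invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(8)
n_real, n_wells = 20, 4
p_res = 300.0                                   # reservoir pressure (toy)

ens = rng.lognormal(sigma=1.0, size=(n_real, n_wells))  # perm. multipliers

def npv(bhp, perm):
    """Toy NPV for one realization: oil revenue minus water-handling cost.
    High-permeability wells water out faster (illustrative model only)."""
    dp = p_res - bhp
    oil = perm * dp
    water = 0.5 * perm**2 * dp
    return np.sum(60.0 * oil - 25.0 * water)

def robust_objective(bhp):
    return np.mean([npv(bhp, p) for p in ens])  # expected NPV over ensemble

# modified RO: shut in a producer (bhp = p_res, i.e. no drawdown) whenever
# doing so raises the expected NPV over the ensemble
bhp = np.full(n_wells, 250.0)
for well in range(n_wells):
    trial = bhp.copy()
    trial[well] = p_res
    if robust_objective(trial) > robust_objective(bhp):
        bhp = trial
print(bhp, round(robust_objective(bhp), 1))
```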

13.
Direct Pattern-Based Simulation of Non-stationary Geostatistical Models
Non-stationary models often capture the spatial variation of real-world phenomena better than stationary ones. However, the construction of such models can be tedious, as it requires modeling both the statistical trend and the stationary stochastic component. Non-stationary models are an important issue in the recent development of multiple-point geostatistical models. This new modeling paradigm, with its reliance on the training image as the source for spatial statistics or patterns, has had considerable practical appeal. However, the role and construction of the training image in the non-stationary case remain problematic from both a modeling and a practical point of view. In this paper, we provide an easy-to-use, computationally efficient methodology for creating non-stationary multiple-point geostatistical models, for both discrete and continuous variables, based on distance-based modeling and simulation of patterns. In that regard, the paper builds on pattern-based modeling previously published by the authors, whereby a geostatistical realization is created by laying down patterns as puzzle pieces on the simulation grid, such that the simulated patterns are consistent (in terms of a similarity definition) with any previously simulated ones. In this paper we add the spatial coordinate to the pattern similarity calculation, thereby borrowing patterns only locally from the training image instead of globally; the latter would entail a stationarity assumption. Two ways of adding the geographical coordinate are presented: (1) based on a functional that decreases gradually away from the location where the pattern is simulated, and (2) based on an automatic segmentation of the training image into stationary regions. Using ample two-dimensional and three-dimensional case studies, we study the behavior of the generated realizations in terms of spatial and ensemble uncertainty.
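A sketch of adding the spatial coordinate to the pattern similarity (in the spirit of approach 1, the gradually decreasing functional); `alpha` weights geographic distance, and alpha=0 recovers the stationary, globally borrowing case. The training image and pattern size are illustrative:

```python
import numpy as np

rng = np.random.default_rng(9)

# toy training image; candidate patterns with their source locations
ti = rng.random((100, 100))
psz = 5
locs = [(i, j) for i in range(0, 95, psz) for j in range(0, 95, psz)]
patterns = np.array([ti[i:i + psz, j:j + psz].ravel() for i, j in locs])
coords = np.array(locs, dtype=float)

def best_pattern(data_event, sim_ij, alpha=0.5):
    """Similarity = pattern mismatch + alpha * geographic distance, so
    patterns are borrowed locally (non-stationary) instead of globally."""
    d_pat = np.linalg.norm(patterns - data_event.ravel(), axis=1)
    d_geo = np.linalg.norm(coords - np.asarray(sim_ij, float), axis=1)
    score = d_pat / d_pat.max() + alpha * d_geo / d_geo.max()
    return patterns[np.argmin(score)]

# e.g. simulate a node at grid location (80, 20) given a noisy data event
ev = ti[80:85, 20:25] + 0.05 * rng.standard_normal((psz, psz))
print(best_pattern(ev, (80, 20)).reshape(psz, psz)[0])
```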

14.
We present a methodology that allows conditioning the spatial distribution of geological and petrophysical properties of reservoir model realizations on available production data. The approach is fully consistent with modern concepts depicting natural reservoirs as composite media, where the distributions of both lithological units (or facies) and associated attributes are modeled as stochastic processes in space. We represent the uncertain spatial distribution of the facies through a Markov mesh (MM) model, which allows describing complex and detailed facies geometries in a rigorous Bayesian framework. The latter is then embedded within a history matching workflow based on an iterative form of the ensemble Kalman filter (EnKF). We test the proposed methodology by way of a synthetic study characterized by the presence of two distinct facies. We analyze the accuracy and computational efficiency of our algorithm and compare its ability to estimate model parameters and assess future reservoir production with that of the standard EnKF. We show the feasibility of integrating MM in a data assimilation scheme. Our methodology yields a set of updated model realizations characterized by a realistic spatial distribution of facies and their log permeabilities. Model realizations updated through our proposed algorithm correctly capture the production dynamics.

15.
Optimization with the Gradual Deformation Method
Building reservoir models consistent with production data and prior geological knowledge is usually carried out through the minimization of an objective function. Such optimization problems are nonlinear and may be difficult to solve because they tend to be ill-posed and to involve many parameters. The gradual deformation technique was introduced recently to simplify these problems. Its main feature is the preservation of spatial structure: perturbed realizations exhibit the same spatial variability as the starting ones. It is shown that optimizations based on gradual deformation converge exponentially to the global minimum, at least for linear problems. In addition, it appears that combining the gradual deformation parameterization with optimization may remove, step by step, the structure-preservation capability of the gradual deformation method. This bias is negligible when deformation is restricted to a few realization chains, but grows as the number of chains tends to infinity. Since, in practice, optimization of reservoir models is limited to a small number of iterations relative to the number of gridblocks, the spatial variability is preserved. Finally, the optimization processes are implemented on the basis of the Levenberg–Marquardt method. Although the objective functions, written in terms of Gaussian white noises, reduce to the data mismatch term, the conditional realization space can be properly sampled.

16.
We present a parallel framework for history matching and uncertainty characterization based on the Kalman filter update equation for the application of reservoir simulation. The main advantages of ensemble-based data assimilation methods are that they can handle large-scale numerical models with a high degree of nonlinearity and large amounts of data, making them perfectly suited for coupling with a reservoir simulator. However, the sequential implementation is computationally expensive, as the methods require a relatively high number of reservoir simulation runs. Therefore, the main focus of this work is to develop a parallel data assimilation framework with minimal changes to the reservoir simulator source code. In this framework, multiple concurrent realizations are computed on several partitions of a parallel machine. These realizations are further subdivided among different processors, and communication is performed at data assimilation times. Although this parallel framework is general and can be used for different ensemble techniques, we discuss the methodology and compare results of two algorithms: the ensemble Kalman filter (EnKF) and the ensemble smoother (ES). Computational results show that the absolute runtime is greatly reduced using a parallel implementation versus a serial one. In particular, a parallel efficiency of about 35% is obtained for the EnKF, and an efficiency of more than 50% is obtained for the ES.

17.
The determination of the optimal type and placement of a nonconventional well in a heterogeneous reservoir represents a challenging optimization problem. This determination is significantly more complicated if uncertainty in the reservoir geology is included in the optimization. In this study, a genetic algorithm is applied to optimize the deployment of nonconventional wells. Geological uncertainty is accounted for by optimizing over multiple reservoir models (realizations) subject to a prescribed risk attitude. To reduce the excessive computational requirements of the base method, a new statistical proxy (which provides fast estimates of the objective function) based on cluster analysis is introduced into the optimization process. This proxy provides an estimate of the cumulative distribution function (CDF) of the scenario performance, which enables the quantification of proxy uncertainty. Knowledge of the proxy-based performance estimate in conjunction with the proxy CDF enables the systematic selection of the most appropriate scenarios for full simulation. Application of the overall method for the optimization of monobore and dual-lateral well placement demonstrates the performance of the hybrid optimization procedure. Specifically, it is shown that by simulating only 10% or 20% of the scenarios (as determined by application of the proxy), optimization results very close to those achieved by simulating all cases are obtained.

18.
A fast Fourier transform (FFT) moving average (FFT-MA) method for generating Gaussian stochastic processes is derived. Using discrete Fourier transforms makes the calculations easy and fast, so that large random fields can be produced. On the other hand, the basic moving average frame allows us to uncouple the random numbers from the structural parameters (mean, variance, correlation length, etc.), and also to draw the randomness components in the spatial domain. Such features impart great flexibility to the FFT-MA generator. For instance, changing only the random numbers gives distinct realizations all having the same covariance function. Similarly, several realizations can be built from the same random number set, but from different structural parameters. Integrating the FFT-MA generator into an optimization procedure provides a tool theoretically capable of determining the random numbers identifying the Gaussian field, as well as the structural parameters, from dynamic data. Moreover, all or only some of the random numbers can be perturbed, so that realizations produced using the FFT-MA generator can be locally updated through an optimization process.
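A minimal 1-D sketch of the FFT-MA idea via a periodized covariance (circulant embedding); the grid size, covariance model, and correlation length are illustrative:

```python
import numpy as np

rng = np.random.default_rng(10)
n, corr_len = 256, 10.0

# 1-D exponential covariance, periodized on the grid (circulant embedding)
lag = np.minimum(np.arange(n), n - np.arange(n))
cov = np.exp(-3.0 * lag / corr_len)

S = np.fft.fft(cov).real          # spectrum of the covariance
S[S < 0] = 0.0                    # guard against tiny negative values
G = np.sqrt(S)                    # FFT of the moving-average operator

w = rng.standard_normal(n)        # the uncoupled random-number set
z = np.fft.ifft(G * np.fft.fft(w)).real    # Gaussian field with covariance cov

# same structural parameters, new random numbers -> a distinct realization
z2 = np.fft.ifft(G * np.fft.fft(rng.standard_normal(n))).real

# local update: perturb only some random numbers; the field changes locally
w_loc = w.copy()
w_loc[100:110] = rng.standard_normal(10)
z_loc = np.fft.ifft(G * np.fft.fft(w_loc)).real

print(round(z.var(), 3))          # sample variance should be close to cov[0]=1
```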

19.
In an earlier study, two hierarchical multi-objective methods were suggested to include short-term targets in life-cycle production optimization. However, that study has two limitations: (1) the adjoint formulation is used to obtain gradient information, requiring simulator source code access and an extensive implementation effort, and (2) one of the two proposed methods relies on the Hessian matrix, which is obtained by a computationally expensive method. In order to overcome the first of these limitations, we used ensemble-based optimization (EnOpt). EnOpt does not require source code access and is relatively easy to implement. To address the second limitation, we used the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm to obtain an approximation of the Hessian matrix. We performed experiments in which waterflooding was optimized in a geologically realistic multilayer sector model. The controls were inflow control valve settings at predefined time intervals. Undiscounted net present value (NPV) and highly discounted NPV were used as the long-term and short-term objective functions. We obtained an increase of approximately 14% in the secondary objective for a decrease of only 0.2–0.5% in the primary objective. The study demonstrates that ensemble-based hierarchical multi-objective optimization can achieve results of practical value in a computationally efficient manner.
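A sketch of an EnOpt-style gradient coupled with BFGS (here via scipy, which maintains the Hessian approximation internally); the control objective is a toy stand-in for discounted NPV, and the perturbation parameters are assumptions:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)
n_ctrl = 12                                   # ICV settings over time

def cost(u):
    """Toy stand-in for the negative NPV of a control schedule u."""
    return -np.sum(u * (1.0 - u)) + 0.1 * np.sum((u[1:] - u[:-1]) ** 2)

def enopt_grad(u, n_pert=40, sigma=0.05):
    """EnOpt-style ensemble gradient: needs only objective evaluations,
    no simulator source code access."""
    dU = sigma * rng.standard_normal((n_pert, n_ctrl))
    dJ = np.array([cost(u + d) - cost(u) for d in dU])
    # cross-covariance estimate of the gradient (E[dU dU^T] = sigma^2 I)
    return (dU.T @ dJ) / (n_pert * sigma**2)

u0 = np.full(n_ctrl, 0.3)
res = minimize(cost, u0, jac=enopt_grad, method="BFGS",
               options={"maxiter": 50})      # BFGS builds the Hessian estimate
print(res.x.round(2), round(res.fun, 3))     # noisy gradient, so a rough optimum
```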

20.
Stochastic geostatistical techniques are essential tools for groundwater flow and transport modelling in highly heterogeneous media. Typically, these techniques require massive numbers of realizations to accurately simulate the high variability and account for the uncertainty. These massive numbers of realizations impose several constraints on the stochastic techniques (e.g. increased computational effort; limits on domain size, grid resolution, and time step; and convergence issues). Understanding the connectivity of the subsurface layers provides an opportunity to overcome these constraints. This research presents a sampling framework to reduce the number of required Monte Carlo realizations by utilizing the connectivity properties of the hydraulic conductivity distributions in a three-dimensional domain. Different geostatistical models were tested in this study, including an exponential model with the Turning Bands (TBM) algorithm and a spherical model using Sequential Gaussian Simulation (SGSIM). It is found that the total connected fraction of the largest clusters and its tortuosity are highly correlated with the percentage of mass arrival and the first-arrival quantiles at different control planes. Applying different sampling techniques together with several indicators suggested that a compact sample representing only 10% of the total number of realizations can be used to produce results that are close to those of the full set of realizations. Also, the proposed sampling techniques, especially those utilizing low-conductivity clustering, show very promising results in terms of matching the full range of realizations. Finally, the size of the selected clusters relative to the domain size significantly affects transport characteristics and the connectivity indicators.
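A hedged sketch of one connectivity indicator and a stratified 10% sample; uncorrelated lognormal fields stand in for TBM/SGSIM realizations, and the threshold choice is an assumption:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(12)
n_real, shape = 100, (20, 20, 10)

def connectivity_indicator(K, thresh):
    """Fraction of cells in the largest connected high-conductivity cluster."""
    labels, n = ndimage.label(K > thresh)
    if n == 0:
        return 0.0
    sizes = np.bincount(labels.ravel())[1:]   # cluster sizes (skip background)
    return sizes.max() / K.size

fields = rng.lognormal(size=(n_real, *shape))
ind = np.array([connectivity_indicator(K, np.quantile(K, 0.8))
                for K in fields])

# pick a compact 10% sample spanning the connectivity range (stratified)
order = np.argsort(ind)
sample = order[np.linspace(0, n_real - 1, n_real // 10).astype(int)]
print(sample)   # transport runs on these should approximate the full set
```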
