Similar Documents
20 similar documents retrieved.
1.
Determination of well locations and their operational settings (controls) such as injection/production rates in heterogeneous subsurface reservoirs poses a challenging optimization problem that has a significant impact on the recovery performance and economic value of subsurface energy resources. The well placement optimization is often formulated as an integer-programming problem that is typically carried out assuming known well control settings. Similarly, identification of the optimal well settings is usually formulated and solved as a control problem in which the well locations are fixed. Solving each of the two problems individually without accounting for the coupling between them leads to suboptimal solutions. Here, we propose to solve the coupled well placement and control optimization problems for improved production performance. We present an alternating iterative solution of the decoupled well placement and control subproblems, where each subproblem (e.g., well locations) is resolved after updating the decision variables of the other subproblem (e.g., solving for the control settings) from the previous step. This approach allows for application of well-established methods in the literature to solve each subproblem individually. We show that significant improvements can be achieved when the well placement problem is solved by allowing for variable and optimized well controls. We introduce a well-distance constraint into the well placement objective function to avoid solutions containing well clusters in a small region. In addition, we present an efficient gradient-based method for solving the well control optimization problem. We illustrate the effectiveness of the proposed algorithms using several numerical experiments, including the three-dimensional PUNQ reservoir and the top layer of the SPE10 benchmark model.
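A minimal sketch of such an alternating scheme, assuming a hypothetical black-box evaluate_npv(locations, controls) that stands in for the simulator-based objective; the subproblem solvers, grid, and values below are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def evaluate_npv(locations, controls):
    """Toy stand-in for a reservoir-simulator-based NPV evaluation."""
    locations = np.asarray(locations, dtype=float)
    controls = np.asarray(controls, dtype=float)
    # Hypothetical smooth surrogate: reward spread-out wells and rates near 50.
    spread = np.sum(np.abs(np.diff(np.sort(locations))))
    return spread - 1e-3 * np.sum((controls - 50.0) ** 2)

def optimize_controls(locations, controls0):
    """Continuous subproblem: tune well rates with the locations held fixed."""
    res = minimize(lambda c: -evaluate_npv(locations, c), controls0, method="Nelder-Mead")
    return res.x

def optimize_locations(locations, controls, grid_size=50):
    """Integer subproblem: greedy one-well-at-a-time moves with controls held fixed."""
    locations = list(locations)
    for i in range(len(locations)):
        locations[i] = max(range(grid_size),
                           key=lambda g: evaluate_npv(locations[:i] + [g] + locations[i + 1:], controls))
    return locations

locations, controls = [5, 10, 15], np.array([40.0, 60.0, 55.0])
for it in range(5):  # alternate the two subproblems (a fixed number of sweeps here)
    controls = optimize_controls(locations, controls)
    locations = optimize_locations(locations, controls)
    print(it, locations, round(evaluate_npv(locations, controls), 3))
```

Each sweep re-solves one subproblem using the other subproblem's latest decision variables, which is the coupling mechanism the abstract describes.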

2.
Geologic uncertainties and limited well data often render recovery forecasting a difficult undertaking in typical appraisal and early development settings. Recent advances in geologic modeling algorithms permit automation of the model generation process via macros and geostatistical tools. This allows rapid construction of multiple alternative geologic realizations. Despite the advances in geologic modeling, computation of the reservoir dynamic response via full-physics reservoir simulation remains a computationally expensive task. Therefore, only a few of the many probable realizations are simulated in practice. Experimental design techniques typically focus on only a few discrete geologic realizations, as they are inherently more suitable for continuous engineering parameters and can only crudely approximate the impact of geology. As an alternative, a flow-based pattern recognition algorithm (FPRA) has been developed for quantifying the forecast uncertainty. The proposed algorithm relies on the rapid characterization of the geologic uncertainty space represented by an ensemble of sufficiently diverse static model realizations. FPRA characterizes the geologic uncertainty space by calculating connectivity distances, which quantify how different each individual realization is from all others in terms of recovery response. Fast streamline simulations are employed in evaluating these distances. By applying pattern recognition techniques to connectivity distances, a few representative realizations are identified within the model ensemble for full-physics simulation. In turn, the recovery factor probability distribution is derived from these intelligently selected simulation runs. Here, FPRA is tested on an example case where the objective is to accurately compute the recovery factor statistics as a function of geologic uncertainty in a channelized turbidite reservoir. Recovery factor cumulative distribution functions computed by FPRA compare well to those computed via exhaustive full-physics simulations.
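The selection step at the heart of such an approach — compute response-based distances between realizations, cluster them, and keep one representative per cluster for full-physics simulation — might look as follows. This is a generic sketch, not the published FPRA code; the proxy responses, Euclidean distance metric, and Ward-linkage clustering with medoid selection are assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Hypothetical proxy responses: one recovery-vs-time curve per geologic realization
# (in practice produced by fast streamline simulation; random stand-ins here).
n_real, n_steps = 100, 20
proxy_curves = np.cumsum(rng.random((n_real, n_steps)), axis=1)

# "Connectivity distances": how different each realization is from all others
# in terms of its proxy recovery response.
dist_condensed = pdist(proxy_curves, metric="euclidean")
dist = squareform(dist_condensed)

# Pattern-recognition step: group realizations and keep one medoid per group
# for expensive full-physics simulation.
n_representatives = 8
labels = fcluster(linkage(dist_condensed, method="ward"), n_representatives, criterion="maxclust")
representatives = []
for k in np.unique(labels):
    members = np.where(labels == k)[0]
    medoid = members[np.argmin(dist[np.ix_(members, members)].sum(axis=1))]
    representatives.append(int(medoid))

print("Realizations selected for full-physics simulation:", sorted(representatives))
```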

3.
4.
On optimization algorithms for the reservoir oil well placement problem
Determining optimal locations and operation parameters for wells in oil and gas reservoirs has a potentially high economic impact. Finding these optima depends on a complex combination of geological, petrophysical, flow regime, and economic parameters that are hard to grasp intuitively. On the other hand, automatic approaches have in the past been hampered by the overwhelming computational cost of running thousands of potential cases using reservoir simulators, given that each of these runs can take on the order of hours. Therefore, the key issue for such automatic optimization is the development of algorithms that find good solutions with a minimum number of function evaluations. In this work, we compare and analyze the efficiency, effectiveness, and reliability of several optimization algorithms for the well placement problem. In particular, we consider the simultaneous perturbation stochastic approximation (SPSA), finite difference gradient (FDG), and very fast simulated annealing (VFSA) algorithms. None of these algorithms is guaranteed to find the optimal solution, but we show that both SPSA and VFSA are very efficient in finding nearly optimal solutions with a high probability. We illustrate this with a set of numerical experiments based on real data for single and multiple well placement problems.
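Of the three, SPSA is the simplest to state: each iteration approximates the gradient from only two objective evaluations, regardless of problem dimension, which is what keeps the number of simulator runs low. A generic sketch on a toy objective follows; the gain sequences and test function are illustrative, not the paper's settings.

```python
import numpy as np

def spsa_minimize(f, x0, n_iter=200, a=0.1, c=0.1, seed=0):
    """Simultaneous perturbation stochastic approximation (SPSA):
    two evaluations of f per iteration yield a stochastic gradient estimate."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(1, n_iter + 1):
        ak = a / k ** 0.602                               # standard gain-sequence decay
        ck = c / k ** 0.101
        delta = rng.choice([-1.0, 1.0], size=x.size)      # random +/-1 perturbation
        ghat = (f(x + ck * delta) - f(x - ck * delta)) / (2.0 * ck * delta)
        x = x - ak * ghat
    return x

# Toy stand-in for a simulator-based objective (minimum at x = 3 in every coordinate).
f = lambda x: np.sum((x - 3.0) ** 2)
print(spsa_minimize(f, x0=np.zeros(5)))
```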

5.
Oilfield development involves several key decisions, including the number, type (injection/production), location, drilling schedule, and operating control trajectories of the wells. Without considering the coupling between these decision variables, any optimization problem formulation is bound to find suboptimal solutions. This paper presents a unified formulation for oilfield development optimization that seeks to simultaneously optimize these decision variables. We show that the source/sink term of the governing multiphase flow equations includes all the above decision variables. This insight leads to a novel and unified formulation of the field development optimization problem that considers the source/sink term in reservoir simulation equations as optimization decision variables. Therefore, a single optimization problem is formulated to simultaneously search for optimal decision variables by determining the complete dynamic form of the source/sink terms. The optimization objective function is the project net present value (NPV), which involves discounted revenue from oil production, operating costs (e.g., water injection and recycling), and capital costs (e.g., cost of drilling wells). A major difficulty after formulating the generalized field development optimization problem is finding an efficient solution approach. Since the total number of cells in a reservoir model far exceeds the number of cells that are intersected by wells, the source/sink terms tend to be sparse. In fact, the drilling cost in the NPV objective function serves as a sparsity-promoting penalty to minimize the number of wells while maximizing the NPV. Inspired by this insight, we solve the optimization problem using an efficient gradient-based method based on recent algorithmic developments in the sparse reconstruction literature. The gradients of the NPV function with respect to the source/sink terms are readily computed using well-established adjoint methods. Numerical experiments are presented to evaluate the feasibility and performance of the generalized field development formulation for simultaneous optimization of the number, location, type, controls, and drilling schedule of the wells.
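A minimal sketch of a sparsity-promoting (proximal-gradient) update of the source/sink vector is shown below: a gradient step on the NPV (which the paper obtains from an adjoint code) followed by soft thresholding, so that most cells carry no well. The quadratic stand-in for the NPV gradient, the penalty weight, and the step size are assumptions; this illustrates only the sparse-reconstruction mechanics, not the paper's algorithm.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of the l1 penalty: shrinks small entries to exactly zero."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def proximal_gradient_ascent(npv_grad, q0, drilling_penalty, step=1e-2, n_iter=500):
    """Sparsity-promoting update of the source/sink vector q: ascend the NPV
    gradient, then soft-threshold so that only a few cells host wells."""
    q = np.asarray(q0, dtype=float)
    for _ in range(n_iter):
        q = soft_threshold(q + step * npv_grad(q), step * drilling_penalty)
    return q

# Toy quadratic stand-in for the adjoint-computed NPV gradient.
target = np.zeros(200)
target[[10, 80, 150]] = [5.0, -3.0, 4.0]                  # a few "true" well cells
npv_grad = lambda q: -(q - target)
q_opt = proximal_gradient_ascent(npv_grad, np.zeros(200), drilling_penalty=1.0)
print("Active (non-zero) cells:", np.flatnonzero(q_opt))
```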

6.
Determining the optimum placement of new wells in an oil field is a crucial task for reservoir engineers. The optimization problem is complex due to the highly nonlinearly correlated and uncertain reservoir performances, which are affected by engineering and geologic variables. In this paper, the combination of a modified particle swarm optimization algorithm and quality map method (QM + MPSO), modified particle swarm optimization algorithm (MPSO), standard particle swarm optimization algorithm (SPSO), and centered-progressive particle swarm optimization (CP-PSO) are applied for optimization of well placement. The SPSO, CP-PSO, and MPSO algorithms are first discussed, then the modified quality map method, and finally the implementation of these four methods for well placement optimization is described. Four example cases, which involve a depletion-drive model, a water-injection model, and a real field reservoir model, are considered, with the maximization of net present value (NPV) as the objective function. The physical model used in the optimization analyses is a 3-dimensional implicit black-oil model. Multiple runs of all methods are performed, and the results are averaged in order to achieve meaningful comparisons. In the case of optimizing placement of a single producer well, it is shown that it is not necessary to use the quality map to initialize the well position. In the other cases considered, it is shown that the QM + MPSO method outperforms the MPSO method, and the MPSO method outperforms the SPSO and CP-PSO methods. Taken in total, the modification of the SPSO method is effective, and the applicability of QM + MPSO to this challenging problem is promising.
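For reference, the standard PSO (SPSO) update that the modified variants build on can be sketched as follows; the inertia and acceleration coefficients, the toy NPV surface, and the two-coordinate well parameterization are assumptions rather than the study's settings.

```python
import numpy as np

def pso_maximize(f, bounds, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Standard particle swarm optimization (SPSO): each particle is a candidate
    well location, moved by inertia plus pulls toward its personal best and the
    swarm's global best positions."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[np.argmax(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmax(pbest_val)].copy()
    return gbest, pbest_val.max()

# Toy "NPV map": best producer location at grid coordinates (30, 70) on a 100 x 100 grid.
npv = lambda p: -((p[0] - 30.0) ** 2 + (p[1] - 70.0) ** 2)
print(pso_maximize(npv, bounds=([0, 0], [100, 100])))
```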

7.
8.
Performing a line search in the direction given by the simplex gradient is a well-known method in the mathematical optimization community. For reservoir engineering optimization problems, both a modification of the simultaneous perturbation stochastic approximation (SPSA) and ensemble-based optimization (EnOpt) have recently been applied for estimating optimal well controls in the production optimization step of closed-loop reservoir management. The modified SPSA algorithm has also been applied to assisted history-matching problems. A recent comparison of the performance of EnOpt and an SPSA-type algorithm (G-SPSA) for a set of production optimization test problems showed that the two algorithms resulted in similar estimates of the optimal net present value and required roughly the same amount of computational time to achieve these estimates. Here, we show that, theoretically, this result is not surprising. In fact, we show that the simplex, preconditioned simplex, and EnOpt algorithms can all be derived directly from a modified SPSA-type algorithm, where the preconditioned simplex algorithm is presented for the first time in this paper. We also show that the expectation of all these preconditioned stochastic gradients is a first-order approximation of the preconditioning covariance matrix times the true gradient or a covariance matrix squared times the true gradient.
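An EnOpt-style preconditioned stochastic gradient can be sketched as the sample cross-covariance between perturbed controls and their objective values; in expectation this is, to first order, the control covariance matrix times the true gradient, consistent with the result stated above. The toy objective, covariance, and step size below are assumptions.

```python
import numpy as np

def enopt_gradient(f, u, C, n_ens=50, rng=None):
    """EnOpt-style search direction: cross-covariance between a perturbed control
    ensemble and the corresponding objective values."""
    rng = rng or np.random.default_rng()
    U = rng.multivariate_normal(u, C, size=n_ens)         # perturbed control ensemble
    J = np.array([f(ui) for ui in U])                      # objective for each member
    dU, dJ = U - U.mean(axis=0), J - J.mean()
    return dU.T @ dJ / (n_ens - 1)

# Toy production-optimization stand-in: concave "NPV" with optimum rates of 100.
f = lambda u: -np.sum((u - 100.0) ** 2)
u = np.full(4, 50.0)
C = 25.0 * np.eye(4)                                       # smoothing/preconditioning covariance
rng = np.random.default_rng(0)
for _ in range(50):                                        # simple steepest-ascent loop
    u = u + 1e-3 * enopt_gradient(f, u, C, rng=rng)
print(u)                                                   # entries move toward 100
```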

9.
10.
11.
Construction of predictive reservoir models invariably involves interpretation and interpolation between limited available data and adoption of imperfect modeling assumptions that introduce significant subjectivity and uncertainty into the modeling process. In particular, uncertainty in the geologic continuity model can significantly degrade the quality of fluid displacement patterns and predictive modeling outcomes. Here, we address a standing challenge in flow model calibration under uncertainty in geologic continuity by developing an adaptive sparse representation formulation for prior model identification (PMI) during model calibration. We develop a flow-data-driven sparsity-promoting inversion to discriminate among distinct prior geologic continuity models (e.g., variograms). Realizations of reservoir properties from each geologic continuity model are used to generate sparse geologic dictionaries that compactly represent models from each respective prior. For inversion, the same number of elements from each prior dictionary is initially used to construct a diverse geologic dictionary that reflects a wide range of variability and uncertainty in the prior continuity. The inversion is formulated as a sparse reconstruction problem that inverts the flow data to identify and linearly combine the relevant elements from the large and diverse set of geologic dictionary elements to reconstruct the solution. We develop an adaptive sparse reconstruction algorithm in which, at every iteration, the contribution of each dictionary to the solution is monitored to replace irrelevant (insignificant) elements with more geologically relevant (significant) elements to improve the solution quality. Several numerical examples are used to illustrate the effectiveness of the proposed approach for identification of geologic continuity in practical model calibration problems where the uncertainty in the prior geologic continuity model can lead to biased inversion results and predictions.
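The sparse-selection core of such an approach — stack dictionaries built from competing priors and pick the few atoms that let a linearized forward model reproduce the flow data — can be sketched with orthogonal matching pursuit. The dictionaries, sensitivity matrix, and data below are synthetic stand-ins, and the paper's adaptive replacement of irrelevant atoms is not reproduced.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)

# Two hypothetical prior dictionaries (e.g., bases built from realizations of two
# competing variogram models), stacked into one diverse dictionary.
n_cells, n_atoms = 400, 50
D1 = rng.standard_normal((n_cells, n_atoms))               # prior continuity model 1
D2 = rng.standard_normal((n_cells, n_atoms))               # prior continuity model 2
D = np.hstack([D1, D2])

# Synthetic "truth" built only from prior 2, observed through a linearized
# flow-response operator G with noise.
v_true = np.zeros(2 * n_atoms)
v_true[n_atoms + rng.choice(n_atoms, 5, replace=False)] = rng.standard_normal(5)
G = rng.standard_normal((80, n_cells)) / np.sqrt(n_cells)  # hypothetical sensitivity matrix
d_obs = G @ (D @ v_true) + 0.01 * rng.standard_normal(80)

# Sparse inversion: identify the few dictionary elements that explain the flow data.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5).fit(G @ D, d_obs)
active = np.flatnonzero(omp.coef_)
print("Selected atoms (index >= 50 means they come from prior 2):", active)
```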

12.
In recent years, many applications of history-matching methods in general, and of the ensemble Kalman filter in particular, have been proposed, especially for estimating fields that introduce uncertainty into the stochastic process defined by the dynamical system of hydrocarbon recovery. Such fields can be permeability or porosity fields, but can also be fields defined by rock type (facies fields). The estimation of the boundaries of geologic facies with the ensemble Kalman filter (EnKF) has been addressed in several papers with the aid of Gaussian random fields, which were truncated using various schemes and introduced into a history-matching process. In this paper, we estimate, within the EnKF framework, the locations of three facies types that occur in a reservoir domain, with the property that any two of them may be in contact. The geological simulation model is a form of the general truncated plurigaussian method. The difference from other approaches lies in how the truncation scheme is introduced and in the observation operator of the facies types at the well locations. The projection from the continuous space of the Gaussian fields into the discrete space of the facies fields is realized through an intermediary space (a probability space). This space connects the observation operator of the facies types at the well locations with the geological simulation model. We test the model on a 2D reservoir using EnKF as the data-assimilation technique, with different geostatistical properties for the Gaussian fields and different levels of uncertainty introduced both in the model parameters and in the construction of the Gaussian fields.
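The stochastic-EnKF analysis step that such facies-estimation schemes build on is sketched below for an ensemble of Gaussian-field values observed at a few well cells; the truncation into facies and the probability-space observation operator described above are not reproduced, and the field, observation values, and error level are assumptions.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_op, obs_err_std, seed=0):
    """Stochastic EnKF analysis: update each ensemble member toward perturbed
    observations using ensemble-estimated covariances."""
    rng = np.random.default_rng(seed)
    X = np.asarray(ensemble, float)                        # (n_members, n_state)
    Y = np.array([obs_op(x) for x in X])                   # predicted data, (n_members, n_obs)
    Xm, Ym = X - X.mean(0), Y - Y.mean(0)
    n = X.shape[0]
    Cxy = Xm.T @ Ym / (n - 1)                              # state-data cross-covariance
    Cyy = Ym.T @ Ym / (n - 1) + obs_err_std ** 2 * np.eye(Y.shape[1])
    K = Cxy @ np.linalg.inv(Cyy)                           # Kalman gain
    D = obs + obs_err_std * rng.standard_normal(Y.shape)   # perturbed observations
    return X + (D - Y) @ K.T

# Toy example: 100-member ensemble of a 50-cell Gaussian field, observed at 3 "wells".
rng = np.random.default_rng(2)
ensemble = rng.standard_normal((100, 50))
well_cells = [5, 25, 45]
obs_op = lambda x: x[well_cells]
updated = enkf_update(ensemble, obs=np.array([1.0, -0.5, 0.8]), obs_op=obs_op, obs_err_std=0.1)
print(updated.mean(0)[well_cells])                         # ensemble mean pulled toward the data
```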

13.
We propose a workflow for decision making under uncertainty that aims at comparing different field development plan scenarios. The approach applies to mature fields where the residual uncertainty is estimated using a probabilistic inversion approach. Moreover, a robust optimization method is presented to optimize controllable parameters in the presence of uncertainty. The key element of this approach is the use of a response-surface model to reduce the very large number of simulator evaluations that are classically needed to perform such workflows. The major issue is to be able to build an efficient and reliable response surface. This is achieved using a Gaussian process (kriging) statistical model and a particular training set (experimental design) developed to take into account the variable correlation induced by the probabilistic inversion process. For the problem of optimization under uncertainty, an iterative training set is proposed, aiming at refining the response surface iteratively so as to effectively reduce approximation errors and converge faster to the true solution. The workflow is illustrated on a realistic test case of a mature field where the approach is used to compare two new development plan scenarios, in terms of both expectation and risk mitigation, and to optimize well position parameters in the presence of uncertainty.
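A kriging-type response surface with iterative refinement can be sketched with a Gaussian-process regressor: fit on an initial design, add the candidate point with the largest predictive uncertainty, re-run the expensive simulator there, and refit. The toy simulator, the Matérn kernel, and the uncertainty-based refinement criterion are illustrative assumptions rather than the paper's experimental design.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_simulator(x):
    """Toy stand-in for a full reservoir simulation returning, e.g., an NPV."""
    return float(np.sin(3 * x[0]) + 0.5 * np.cos(5 * x[1]))

rng = np.random.default_rng(3)
X_train = rng.uniform(0, 1, size=(10, 2))                  # initial experimental design
y_train = np.array([expensive_simulator(x) for x in X_train])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
candidates = rng.uniform(0, 1, size=(500, 2))              # cheap-to-score proposal points

# Iterative training set: add the candidate with the largest predictive standard
# deviation, evaluate the simulator there, and refit the surrogate.
for _ in range(15):
    gp.fit(X_train, y_train)
    _, std = gp.predict(candidates, return_std=True)
    x_new = candidates[np.argmax(std)]
    X_train = np.vstack([X_train, x_new])
    y_train = np.append(y_train, expensive_simulator(x_new))

gp.fit(X_train, y_train)
mean, std = gp.predict(candidates, return_std=True)
print("Best predicted point:", candidates[np.argmax(mean)], "max residual std:", round(std.max(), 4))
```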

14.
15.
Prospects for Future Directions of Quantitative Resource Assessment
Future quantitative resource assessments are expected to estimate the amount and value of undiscovered mineral resources and to predict their locations, so as to express both the economic potential and the uncertainty of mineral resources. The long-term decline of metal prices in recent years has created a demand for larger deposits. Sensitivity analyses show that the most effective way to reduce uncertainty and risk in assessments is to reduce the uncertainty of the factors involved in tonnage estimation. Of all the possible sources of error in assessments to date, those associated with tonnage-estimation errors are the most important. Given the central importance of tonnage models, and the fact that deposit models are the most effective predictors of tonnage, the correct selection of deposit models is the most important way to control error. Much of the land surface is covered by extensive barren rock and sediment. Because many deposits exposed at the surface have already been discovered, attention is turning to the mineralization information that rocks beneath cover may reveal. Resource assessment in such areas must rely on extrapolation from surrounding regions, on new geologic mapping of the covered rocks, or on analogy with experience gained in other successfully explored areas. Cover has profound effects on assessment uncertainty and on assessment methods and procedures, because subsurface geology cannot be observed directly and geophysical methods yield only attenuated information. Many early assessment methods were based on relationships among geochemical and geophysical variables derived from deposits exposed at the surface; studies based on exploration experience with concealed deposits are now equally needed. Deposit models occupy a central position in quantitative resource assessment for two reasons: (1) most deposit types have distinctly different grade and tonnage distributions, and (2) different deposit types occur in different geologic settings that can be distinguished on geologic maps. Deposit models play a crucial role in integrating geological, mineral, geophysical, and geochemical information for resource assessment and deposit exploration. The development of grade and tonnage models, together with quantitative descriptive, economic, and deposit-density models, will help reduce the uncertainty of these new assessments.

16.
Some Suggested Future Directions of Quantitative Resource Assessments
Like most journeys, success depends critically on where we are going to be in the end. Thus, if we are to have some ideas about future directions of quantitative assessments of mineral resources, we need some basic understanding of what the assessments will be used for. What will be expected of quantitative resource assessments in the future? It would be helpful to identify who will use these future assessments, how the assessments will be used, and what are acceptable forms of products. The purpose o…

17.
The amount of hydrocarbon recovered can be considerably increased by finding optimal placement of non-conventional wells. For that purpose, the use of optimization algorithms, where the objective function is evaluated using a reservoir simulator, is needed. Furthermore, for complex reservoir geologies with high heterogeneities, the optimization problem requires algorithms able to cope with the non-regularity of the objective function. In this paper, we propose an optimization methodology for determining optimal well locations and trajectories based on the covariance matrix adaptation evolution strategy (CMA-ES), which is recognized as one of the most powerful derivative-free optimizers for continuous optimization. In addition, to improve the optimization procedure, two new techniques are proposed: (a) adaptive penalization with rejection in order to handle well placement constraints and (b) incorporation of a meta-model, based on locally weighted regression, into CMA-ES, using an approximate stochastic ranking procedure, in order to reduce the number of reservoir simulations required to evaluate the objective function. The approach is applied to the PUNQ-S3 case and compared with a genetic algorithm (GA) incorporating the Genocop III technique for handling constraints. To allow a fair comparison, both algorithms are used without parameter tuning on the problem, and standard settings are used for the GA and default settings for CMA-ES. It is shown that our new approach outperforms the genetic algorithm: it leads in general to both a higher net present value and a significant reduction in the number of reservoir simulations needed to reach a good well configuration. Moreover, coupling CMA-ES with a meta-model leads to further improvement, which was around 20% for the synthetic case in this study.
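A minimal CMA-ES loop of this kind, using the widely available pycma package (assumed installed as `cma`) with default settings, a toy NPV surface, and a crude bound-penalization stand-in for the paper's adaptive penalization-with-rejection and meta-model, might look as follows.

```python
import numpy as np
import cma  # pycma package: pip install cma

def neg_npv(xy):
    """Toy stand-in for the simulator-evaluated objective; CMA-ES minimizes,
    so the negative NPV is returned (best location near (30, 70))."""
    x, y = xy
    return -np.exp(-((x - 30.0) ** 2 + (y - 70.0) ** 2) / 500.0)

def penalized(xy, lo=0.0, hi=100.0):
    """Crude penalization of out-of-bounds well coordinates (the paper uses an
    adaptive penalization-with-rejection scheme; this is only a stand-in)."""
    p = np.asarray(xy, float)
    violation = np.sum(np.maximum(lo - p, 0.0) + np.maximum(p - hi, 0.0))
    return neg_npv(xy) + 10.0 * violation

es = cma.CMAEvolutionStrategy(x0=[50.0, 50.0], sigma0=20.0)   # default CMA-ES settings
while not es.stop():
    candidates = es.ask()                                      # sample a population of locations
    es.tell(candidates, [penalized(c) for c in candidates])
es.result_pretty()                                             # best point should approach (30, 70)
```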

18.
Subsurface models of hydrocarbon reservoirs are coarse and of low resolution when compared with the actual geologic characteristics. Therefore, the understanding of the three-dimensional architecture of reservoir units is often incomplete. Outcrop analogues are commonly used to understand the spatial continuity of reservoir units. In this study, a Late Jurassic outcrop analogue for the Arab-D reservoir of central Saudi Arabia was used to build a high-resolution model that captures fine geologic details. Subsurface reservoir lithofacies were matched with those from the studied outcrop, and porosity values derived from published core and well log data from the Ain Dar, Uthmanyah, and Shudgum areas of the Ghawar Field, eastern Saudi Arabia, were then applied to the equivalent lithofacies in the outcrop. Maximum, minimum, and average subsurface porosity for each lithofacies were distributed in the facies model using a geostatistical algorithm to produce nine porosity models for the field data. Several realisations were run to visualise the variability in each model and to quantitatively measure the uncertainty associated with the models. The results indicated that potential reservoir zones were associated with grainstone, packstone, and some wackestone layers. Semivariogram analysis of the lithofacies showed good continuity in the N-S direction and less continuity in the E-W direction. The high-resolution lithofacies models detected permeability barriers and isolated low porosity bodies within the potential reservoir zones. This model revealed the porosity distribution in areas smaller than one cell in the subsurface model and highlighted the uncertainty associated with several aspects of the model.
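An experimental semivariogram of the kind used to assess directional continuity can be computed from scattered porosity samples as sketched below; this isotropic version on synthetic samples only illustrates the calculation, not the study's directional (N-S vs. E-W) analysis or its data.

```python
import numpy as np
from scipy.spatial.distance import pdist

def experimental_semivariogram(coords, values, n_lags=10, max_dist=None):
    """Isotropic experimental semivariogram: gamma(h) is half the mean squared
    difference over all sample pairs whose separation falls in lag bin h."""
    d = pdist(coords)                                           # pairwise separation distances
    g = 0.5 * pdist(values.reshape(-1, 1), metric="sqeuclidean")
    max_dist = max_dist or d.max()
    edges = np.linspace(0.0, max_dist, n_lags + 1)
    lag_centers, gamma = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (d >= lo) & (d < hi)
        if mask.any():
            lag_centers.append(0.5 * (lo + hi))
            gamma.append(g[mask].mean())
    return np.array(lag_centers), np.array(gamma)

# Synthetic porosity samples on a 1 km x 1 km area (fractions, not percent).
rng = np.random.default_rng(4)
coords = rng.uniform(0, 1000, size=(200, 2))
porosity = 0.20 + 0.05 * np.sin(coords[:, 0] / 200.0) + 0.01 * rng.standard_normal(200)
h, gamma = experimental_semivariogram(coords, porosity)
print(np.c_[h, gamma])   # gamma rises with lag and then levels off at the sill
```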

19.
In this study, we introduce the application of data mining to petroleum exploration and development to obtain high-performance predictive models and optimal classifications of geology, reservoirs, reservoir beds, and fluid properties. Data mining is a practical method for finding characteristics of, and inherent laws in, massive multi-dimensional data. The data mining method is primarily composed of three loops, which are feature selection, model parameter optimization, and model performance evaluation. The method's key techniques involve applying genetic algorithms to carry out feature selection and parameter optimization and using repeated cross-validation methods to obtain unbiased estimation of generalization accuracy. The optimal model is finally selected from the various algorithms tested. In this paper, the evaluation of water-flooded layers and the classification of conglomerate reservoirs in Karamay oil field are selected as case studies to analyze comprehensively two important functions in data mining, namely predictive modeling and cluster analysis. For the evaluation of water-flooded layers, six feature subset schemes and five distinct types of data mining methods (decision trees, artificial neural networks, support vector machines, Bayesian networks, and ensemble learning) are analyzed and compared. The results clearly demonstrate that decision trees are superior to the other methods in terms of predictive model accuracy and interpretability. Therefore, a decision tree-based model is selected as the final model for identifying water-flooded layers in the conglomerate reservoir. For the reservoir classification, the classification standards obtained from four types of clustering algorithms, namely partitioning-based, hierarchical, model-based, and density-based algorithms, are comparatively analyzed. The results clearly indicate that the clustering derived from the standard K-means algorithm, a partitioning method, provides the best fit to the geological characteristics of the actual reservoir and the greatest accuracy of reservoir classification. Moreover, the internal measurement parameters of this algorithm, such as compactness, efficiency, and resolution, are all better than those of the other three algorithms. Compared with traditional methods from exploration geophysics, the data mining method has obvious advantages in solving problems involving calculation of reservoir parameters and reservoir classification using different specialized field data. Hence, the effective application of data mining methods can provide better services for petroleum exploration and development.
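The model-comparison loop described above — repeated cross-validation of several classifier families on the same feature set — can be sketched with scikit-learn. The synthetic features, the particular classifiers (with naive Bayes standing in for the Bayesian-network family), and the accuracy metric are illustrative assumptions; the genetic-algorithm feature-selection loop is not reproduced.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for well-log features labelled "water-flooded" vs. "not flooded".
X, y = make_classification(n_samples=600, n_features=12, n_informative=6, random_state=0)

models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "neural network": MLPClassifier(max_iter=2000, random_state=0),
    "SVM": SVC(),
    "naive Bayes": GaussianNB(),
    "ensemble (RF)": RandomForestClassifier(random_state=0),
}

# Repeated cross-validation gives a less biased estimate of generalization accuracy,
# which is the model performance evaluation loop described above.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=0)
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name:>15s}: {scores.mean():.3f} +/- {scores.std():.3f}")
```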

20.
Landslide susceptibility mapping is an indispensable prerequisite for landslide prevention and reduction. At present, research into landslide susceptibility mapping has begun to combine machine learning with remote sensing and geographic information system (GIS) techniques. The random forest model is a relatively new ensemble classification method, but its application to landslide susceptibility mapping remains limited. Landslides represent a serious threat to the lives and property of people living in the Zigui–Badong area in the Three Gorges region of China, as well as to the operation of the Three Gorges Reservoir. However, the geological structure of this region is complex, involving steep mountains and deep valleys. The purpose of the current study is to produce a landslide susceptibility map of the Zigui–Badong area using a random forest model, multisource data, GIS, and remote sensing data. In total, 300 pre-existing landslide locations were obtained from a landslide inventory map. These landslides were identified using visual interpretation of high-resolution remote sensing images, topographic and geologic data, and extensive field surveys. The occurrence of landslides is closely related to a series of environmental parameters. Topographic, geologic, Landsat-8 image, rainfall, and seismic data were used as the primary data sources to extract the geo-environmental factors influencing landslides. Thirty-four layers of causative factors were prepared as predictor variables, which can mainly be categorized as topographic, geological, hydrological, land cover, and environmental trigger parameters. The random forest method is an ensemble classification technique that extends diversity among the classification trees by resampling the data with replacement and randomly changing the predictive variable sets during the different tree induction processes. A random forest model was adopted to calculate the quantitative relationships between the landslide-conditioning factors and the landslide inventory map and then generate a landslide susceptibility map. The analytical results were compared with known landslide locations in terms of area under the receiver operating characteristic curve. The random forest model has an area ratio of 86.10%. Comparing random forest (whole factors, WF), random forest (12 major factors, 12F), decision tree (WF), and decision tree (12F), the final results show that random forest (12F) has the highest prediction accuracy. Meanwhile, the random forest models have higher prediction accuracy than the decision tree models. Subsequently, the landslide susceptibility map was classified into five classes (very low, low, moderate, high, and very high). The results demonstrate that the random forest model achieved a reasonable accuracy in landslide susceptibility mapping. The landslide hazard zone information will be useful for general development planning and landslide risk management.
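A minimal random-forest susceptibility workflow — fit on labelled landslide/non-landslide cells, score held-out cells, evaluate with the area under the ROC curve, and reclassify the scores into five susceptibility classes — is sketched below on synthetic stand-ins for the 34 conditioning-factor layers; the data, split, and class thresholds are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the 34 conditioning factors sampled at landslide (1)
# and stable (0) cells.
X, y = make_classification(n_samples=600, n_features=34, n_informative=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
susceptibility = rf.predict_proba(X_te)[:, 1]              # per-cell susceptibility score
print("Area under the ROC curve:", round(roc_auc_score(y_te, susceptibility), 3))

# Reclassify the continuous scores into the five classes used in the study.
classes = np.digitize(susceptibility, bins=[0.2, 0.4, 0.6, 0.8])
labels = np.array(["very low", "low", "moderate", "high", "very high"])
print(labels[classes][:10])
```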
