Similar Documents
 20 similar documents found (search time: 0 ms)
1.
Development of subsurface energy and environmental resources can be improved by tuning important decision variables such as well locations and operating rates to optimize a desired performance metric. Optimal well locations in a discretized reservoir model are typically identified by solving an integer programming problem, while identification of optimal well settings (controls) is formulated as a continuous optimization problem. In general, however, the decision variables in field development optimization can include many design parameters, such as the number, type, location, short-term and long-term operational settings (controls), and drilling schedule of the wells. In addition to the large number of decision variables, field optimization problems are further complicated by existing technical and physical constraints as well as the uncertainty in describing heterogeneous properties of geologic formations. In this paper, we consider simultaneous optimization of well locations and dynamic rate allocations under geologic uncertainty using a variant of simultaneous perturbation stochastic approximation (SPSA). In addition, by taking advantage of the robustness of SPSA against errors in calculating the cost function, we develop an efficient field development optimization under geologic uncertainty, where an ensemble of models is used to describe important flow and transport reservoir properties (e.g., permeability and porosity). We use several numerical experiments, including a channel layer of the SPE10 model and the three-dimensional PUNQ-S3 reservoir, to illustrate the performance improvement that can be achieved by solving a combined well placement and control optimization problem using the SPSA algorithm under known and uncertain reservoir model assumptions.
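A minimal sketch of the SPSA iteration the abstract describes, assuming a user-supplied `npv(x, model)` objective that evaluates a decision vector `x` (well locations and rates, suitably scaled) on one reservoir realization; the function names and gain-sequence constants are illustrative, not the authors' implementation.

```python
import numpy as np

def robust_spsa(x0, npv, models, iters=50, a=0.1, c=0.1, alpha=0.602, gamma=0.101, seed=0):
    """SPSA maximization of an ensemble-based NPV (robust-optimization sketch).

    x0     : initial decision vector (well locations/controls, scaled to comparable ranges)
    npv    : callable npv(x, model) -> float, the reservoir-simulation objective
    models : list of geologic realizations (e.g., permeability/porosity fields)
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        ak = a / (k + 1) ** alpha            # step-size gain sequence
        ck = c / (k + 1) ** gamma            # perturbation-size gain sequence
        delta = rng.choice([-1.0, 1.0], size=x.size)   # simultaneous +/-1 perturbation
        # SPSA tolerates noisy objective values, so a single randomly drawn model
        # (rather than the full ensemble average) can be used at each iteration.
        m = models[rng.integers(len(models))]
        g_hat = (npv(x + ck * delta, m) - npv(x - ck * delta, m)) / (2.0 * ck * delta)
        x = x + ak * g_hat                   # ascent step (maximizing NPV)
    return x
```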

2.
Z-trend maps are a simplified line-printer version of spatially filtered maps designed to give a quick visual appraisal of trends. The printout shows a yes-no configuration as a printed character or a blank, so that the map has a conspicuous pattern. This pattern reflects the presence, position, and trend of the desired features. If a reasonable symbol-density ratio is used, the results can be visually pleasing, thus enhancing trend recognition. Z-trending can be adapted to any map with stationary properties but is most easily applied to data that have been filtered with a bandpass operator.
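A small illustration of the yes/no printout idea, assuming the input grid has already been bandpass-filtered; the choice of symbol, threshold rule, and density ratio is illustrative rather than taken from the paper.

```python
import numpy as np

def z_trend_map(filtered, symbol="X", density=0.5):
    """Print a yes/no character map of a (bandpass-)filtered grid.

    Cells above the quantile implied by `density` print `symbol`, the rest
    print a blank, so trends stand out as conspicuous patterns of symbols.
    """
    threshold = np.quantile(filtered, 1.0 - density)   # controls the symbol-density ratio
    for row in filtered:
        print("".join(symbol if v >= threshold else " " for v in row))

# Example: a synthetic grid with a diagonal trend
y, x = np.mgrid[0:20, 0:60]
z_trend_map(np.sin((x + y) / 6.0), density=0.4)
```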

3.
Computer recognition of prospective areas through the processing of digital exploration data can be effective if the statistical tests for the determination of the prospects are pertinent to the presence of the desired mineral. Where exploration involves the application of polynomial trend analysis to structure contour maps in the search for petroleum and natural gas, standard analysis of variance tests may not indicate the best exploration maps. Variance tests may be completely invalid where isolated dips and clustered samples cause the surfaces generated by some of the most common trend programs to oscillate, creating a false impression of variance. On the other hand, tests that directly compare the position of residual features with areas of known production consistently indicate the best map for the determination of new prospects. They are simple to apply and appear to offer the most opportunity for the automatic recognition of prospective areas.

4.
This paper presents the application of a population Markov chain Monte Carlo (MCMC) technique to generate history-matched models. The technique has been developed and successfully adopted in challenging domains such as computational biology, but has not yet seen application in reservoir modelling. In population MCMC, multiple Markov chains are run on a set of response surfaces that form a bridge from the prior to the posterior. These response surfaces are formed from the product of the prior with the likelihood raised to a varying power less than one. The chains exchange positions, with the probability of a swap governed by a standard Metropolis accept/reject step, which allows large steps to be taken with high probability. We show results of population MCMC on the IC Fault model, a simple three-parameter model that is known to have a highly irregular misfit surface and hence to be difficult to match. Our results show that population MCMC is able to generate samples from the complex, multi-modal posterior probability distribution of the IC Fault model very effectively. By comparison, previous results from stochastic sampling algorithms often cover only part of the region of high posterior probability, depending on algorithm settings and starting points.
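A compact sketch of the tempered-chain construction described above: each chain targets the prior times the likelihood raised to a power beta less than one, and neighbouring chains swap states via a Metropolis accept/reject step. The Gaussian prior, the two-mode likelihood, and the temperature ladder below are illustrative stand-ins, not the IC Fault model setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_prior(x):             # illustrative standard-normal prior
    return -0.5 * np.sum(x**2)

def log_like(x):              # illustrative multi-modal likelihood (stand-in for a rough misfit surface)
    return -0.5 * np.sum((np.abs(x) - 2.0)**2) / 0.1**2

betas = np.linspace(0.0, 1.0, 8)              # bridge from prior (beta=0) to posterior (beta=1)
states = [rng.normal(size=3) for _ in betas]  # one chain per tempered target

def log_target(x, beta):
    return log_prior(x) + beta * log_like(x)

for it in range(5000):
    # Within-chain random-walk Metropolis updates
    for i, beta in enumerate(betas):
        prop = states[i] + 0.3 * rng.normal(size=3)
        if np.log(rng.uniform()) < log_target(prop, beta) - log_target(states[i], beta):
            states[i] = prop
    # Swap move between a random pair of neighbouring temperatures
    j = rng.integers(len(betas) - 1)
    log_alpha = (betas[j + 1] - betas[j]) * (log_like(states[j]) - log_like(states[j + 1]))
    if np.log(rng.uniform()) < log_alpha:
        states[j], states[j + 1] = states[j + 1], states[j]

# states[-1] now holds a (correlated) draw from the chain targeting the full posterior.
```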

5.
6.
7.
The earthquake catalogue from 1964 to August 1991 is used to identify, in retrospect, the times of increased probabilities (TIPs) of earthquake mainshocks of magnitude greater than or equal to 6.4 associated with the Indian convergent plate margins. For the Pakistan and Indo-Burma regions, the analysis was repeated for magnitude thresholds of 6.2 and 7.0, respectively. All the earthquakes (except one in the Hindukush region and one in the Indo-Burmese region) in the Pakistan, Hindukush-Pamir, Himalaya and Indo-Burmese regions were preceded by the specific activation and hence were predicted. Approximately 23 ± 10% of the total time (1970 to August 1991) is occupied by TIPs in all the regions. The reasons for the failure to predict the two earthquakes in these regions are discussed. Our analysis gives a better picture of the regionalization and of the size of the space-time volume for the preparation of an earthquake. The high success ratio of the algorithm indicates that it can be applied in this territory for further prediction in real time, without any significant changes in its parameters.

8.
9.
The seismicity associated with the convergence of the Indian and Eurasian plates from 1964 to August 1990 was scanned using the M8 algorithm with a view to identifying the times of increased probabilities (TIPs) of the earthquakes of magnitude greater than or equal to 6.4 that occurred during the period from 1970 to August 1990. Twenty-three of the 28 earthquakes (M ≥ 6.4) were predicted. These were preceded by a specific activation of the earthquake flow, which was picked up by the M8 algorithm. The earthquake of August 1988 in the Himalaya could not be predicted; the other four unpredicted earthquakes occurred in the early part of the catalogue (1970–1971), and hence their TIPs could not be diagnosed. Two current alarms are diagnosed, one in the Indo-Burmese arc and the other in the Hindukush-Pamir region. The algorithm provides the correlation between the earthquakes and their area of activation (both in time and in space), which, when compared with the local geology, may help to comment on the present-day status of the seismic features at the surface.

10.
The injection of water (or CO2) at high pressure is a common practice to enhance oil production. A crucial component of this activity is the estimation of the maximum pressure at which the fluids can be injected without inducing the reactivation of pre-existing faults that may exist in the formation. The damage zones typically formed around geological faults are highly heterogeneous. The materials involved in these damage zones are characterized by huge variations in their properties and by the high uncertainties associated with them. To estimate the maximum allowable injection pressure, this paper presents a novel approach based on a coupled hydro-mechanical formulation (for the numerical analyses), a criterion based on the total plastic work (for fault reactivation), and evidence theory (for uncertainty quantification). A case study based on information gathered from an actual field is presented to illustrate the capabilities of the proposed framework.
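A minimal illustration of the evidence-theory (Dempster-Shafer) ingredient of such a workflow, assuming expert-assigned mass is placed on intervals of the critical (fault-reactivation) pressure; the masses and intervals are hypothetical numbers, not values from the paper.

```python
# Focal elements: intervals [low, high] in MPa for the critical injection pressure, with mass assignments.
focal = [((18.0, 25.0), 0.5),   # evidence from one characterization of the damage zone
         ((22.0, 30.0), 0.3),   # evidence from another source
         ((15.0, 30.0), 0.2)]   # weak, wide evidence

def belief_plausibility(p_inj, focal):
    """Belief and plausibility that the fault is NOT reactivated at injection pressure p_inj,
    i.e., that the uncertain critical pressure exceeds p_inj."""
    bel = sum(m for (lo, hi), m in focal if lo > p_inj)   # focal interval lies entirely above p_inj
    pl  = sum(m for (lo, hi), m in focal if hi > p_inj)   # focal interval at least partly above p_inj
    return bel, pl

for p in (17.0, 21.0, 26.0):
    bel, pl = belief_plausibility(p, focal)
    print(f"p_inj = {p} MPa: belief = {bel:.2f}, plausibility = {pl:.2f}")
```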

11.
刘东海  黄培志  冯守中 《岩土力学》2010,31(4):1181-1186
Adverse geological conditions are an important factor affecting the structural safety of segment linings in TBM-driven tunnels. Considering the uncertainties in both the geological conditions of the surrounding rock and the lining structure, a new method is proposed for quantitatively analyzing the failure probability of TBM segment linings. Based on a Markov process used to estimate the probability of lithological change along the tunnel alignment, a probability model of segment-type mismatch at any position along the tunnel is established; accounting for the uncertainty of the surrounding rock and segment parameters, the stochastic finite element method is used to compute the failure probability of a given segment type under different surrounding rock classes; the total probability formula then yields the failure probability of the segment structure at any position along the tunnel. For an actual project, under construction-stage conditions, the failure probability along the tunnel, the maximum failure probability, and its corresponding location were determined, providing a basis for segment type selection, design optimization, and risk prevention during TBM construction.
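A small sketch of the total-probability calculation described above, assuming the Markov-chain rock-class probabilities along the tunnel and the conditional (stochastic-FEM) failure probabilities are already available; all numerical values are illustrative, not from the project.

```python
import numpy as np

# Transition matrix of rock classes (e.g., I..III) per ring advance, from geologic statistics (illustrative)
P = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])
pi0 = np.array([1.0, 0.0, 0.0])            # rock-class distribution at the tunnel portal

# Conditional failure probability of the selected segment type under each rock class,
# as would be obtained from stochastic finite-element analyses (illustrative values)
p_fail_given_rock = np.array([1e-4, 5e-4, 5e-3])

def failure_probability_along_tunnel(n_rings):
    """Total-probability estimate P_f(k) = sum_j P(rock = j at ring k) * P(fail | rock = j)."""
    probs = []
    pi = pi0.copy()
    for _ in range(n_rings):
        probs.append(float(pi @ p_fail_given_rock))
        pi = pi @ P                         # Markov update of the rock-class distribution
    return np.array(probs)

pf = failure_probability_along_tunnel(200)
print("max failure probability:", pf.max(), "at ring", int(pf.argmax()))
```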

12.
A hierarchical concept is proposed for the development of constitutive models to account for various factors that influence the behaviour of (geologic) materials. It permits evolution of models of progressively higher grades from the basic model, which represents isotropic hardening with associative behaviour. Factors such as non-associativeness, induced anisotropy due to friction and cyclic loading, and softening are introduced as corrections or perturbations to the basic model. The influence of these factors is captured through non-associativeness, manifested by the deviation of the plastic strain increments from normality to the yield surface, F. Details of four models are presented: isotropic hardening with associative behaviour, isotropic hardening with non-associative behaviour, anisotropic hardening, and strain-softening with a damage variable. They are verified with respect to laboratory multiaxial test data under various paths of loading, unloading and reloading for typical soils, rock and concrete. The proposed concept is general, yet sufficiently simplified in terms of physical understanding, the number of constants and their physical meanings, the determination of the constants, and implementation.

13.
14.
The use of upscaled models is attractive in many-query applications that require a large number of simulation runs, such as uncertainty quantification and optimization. Highly coarsened models often display error in output quantities of interest, e.g., phase production and injection rates, so the direct use of these results for quantitative evaluations and decision making may not be appropriate. In this work, we introduce a machine-learning-based post-processing framework for modeling the error in coarse-model results in the context of uncertainty quantification. Coarse-scale models are constructed using an accurate global single-phase transmissibility upscaling procedure. The framework entails the use of high-dimensional regression (random forest in this work) to model error based on a number of error indicators or features. Many of these features are derived from approximations of the subgrid effects neglected in the coarse-scale saturation equation. These features are identified through volume averaging, and they are generated by solving a fine-scale saturation equation with a constant-in-time velocity field. Our approach eliminates the need for the user to hand-design a small number of informative (relevant) features. The training step requires the simulation of some number of fine and coarse models (in this work we perform either 10 or 30 training simulations), followed by construction of a regression model for each well. Classification is also applied for production wells. The methodology then provides a correction at each time step, and for each well, in the phase production and injection rates. Results are presented for two- and three-dimensional oil–water systems. The corrected coarse-scale solutions show significantly better accuracy than the uncorrected solutions, both in terms of realization-by-realization predictions for oil and water production rates, and for statistical quantities important for uncertainty quantification, such as P10, P50, and P90 predictions.
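A brief sketch of the per-well regression step in a framework like the one above, using scikit-learn's random forest; the feature construction (subgrid error indicators) is abstracted into pre-built arrays, and all variable names and placeholder data are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# X_train: error-indicator features per (realization, time step) for one well, e.g., volume-averaged
#          subgrid terms from a fine-grid saturation solve with a frozen velocity field (built elsewhere).
# y_train: target error = fine-scale rate minus coarse-scale rate for that well.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 12))                                 # placeholder feature matrix
y_train = 2.0 * X_train[:, 0] + rng.normal(scale=0.1, size=300)      # placeholder error signal

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# At prediction time, correct the coarse-model rate at each time step for this well:
X_new = rng.normal(size=(50, 12))
coarse_rate = rng.uniform(100.0, 200.0, size=50)
corrected_rate = coarse_rate + model.predict(X_new)   # coarse result plus the learned error
```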

15.
16.
Seismicity of the Himalayan arc lying within the limits shown in figure 1 and covering the period 1964 to 1987 was scanned using the M8 algorithm with a view to identifying the times of increased probabilities (TIPs) of the occurrence of earthquakes of magnitude greater than or equal to 7.0 during the period 1970 to 1987. In this period, TIPs occupy 18% of the space-time considered. One of these precedes the only earthquake in this magnitude range which occurred during the period. Two numerical parameters used in the algorithm, namely the magnitude thresholds, had to be altered for the present study owing to incomplete data. Further monitoring of TIPs is, however, warranted, both for testing the predictive capability of this algorithm in the Himalayan region and for creating a base for the search for short-term precursors.

17.

Conditioning complex subsurface flow models on nonlinear data is complicated by the need to preserve the expected geological connectivity patterns to maintain solution plausibility. Generative adversarial networks (GANs) have recently been proposed as a promising approach for low-dimensional representation of complex high-dimensional images. The method has also been adopted for low-rank parameterization of complex geologic models to facilitate uncertainty quantification workflows. A difficulty in adopting these methods for subsurface flow modeling is the complexity associated with nonlinear flow data conditioning. While a conditional GAN (CGAN) can condition simulated images on labels, application to subsurface problems requires efficient conditioning workflows for nonlinear data, which is far more complex. We present two approaches for generating flow-conditioned models with complex spatial patterns using GAN. The first is conditional GAN, whereby a production-response label is used as an auxiliary input during the training stage of the GAN. The production label is derived from clustering of the flow responses of the prior model realizations (i.e., the training data). The underlying assumption of this approach is that the GAN can learn the association between the spatial features and the production responses within each cluster. An alternative method is to select a subset of samples from the training data that lie within a certain distance of the observed flow responses and use them as training data within the GAN to generate new model realizations. In this case, the GAN is not required to learn the nonlinear relation between production responses and spatial patterns. Instead, it is tasked with learning the patterns in the selected realizations that provide a close match to the observed data. The conditional low-dimensional parameterization that GAN provides for complex geologic models with diverse spatial features (i.e., when multiple geologic scenarios are plausible) allows the spatial variability in the conditional realizations to be explored, which can be critical for decision-making. We present and discuss the important properties of GAN for data conditioning using several examples of increasing complexity.
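A minimal sketch of the second conditioning strategy described above: prior realizations whose simulated responses fall within a distance of the observed data are selected and only those are used to train the GAN. The distance metric and threshold are illustrative, and the GAN training itself is left as a placeholder call.

```python
import numpy as np

def select_conditioning_subset(prior_models, prior_responses, d_obs, quantile=0.1):
    """Keep the prior realizations whose flow responses are closest to the observed data.

    prior_models    : array (n_real, ...) of model realizations (facies grids)
    prior_responses : array (n_real, n_data) of simulated flow responses per realization
    d_obs           : array (n_data,) of observed flow responses
    """
    dist = np.linalg.norm(prior_responses - d_obs, axis=1)   # misfit distance per realization
    keep = dist <= np.quantile(dist, quantile)               # e.g., keep the closest 10%
    return prior_models[keep]

# subset = select_conditioning_subset(models, responses, d_obs)
# train_gan(subset)   # placeholder: train the generative model only on data-consistent realizations
```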

18.
Uncertainty quantification is currently one of the leading challenges in the geosciences, in particular in reservoir modeling. A wealth of subsurface data as well as expert knowledge are available to quantify uncertainty and make predictions of reservoir performance or reserves. The geoscience component within this larger modeling framework is partially an interpretive science. Geologists and geophysicists interpret data to postulate on the nature of the depositional environment, for example on the type of fracture system, the nature of faulting, and the type of rock physics model. Often, several alternative scenarios or interpretations are offered, including some associated belief quantified with probabilities. In the context of facies modeling, this could result in various interpretations of facies architecture, associations, geometries, and the way they are distributed in space. A quantitative approach to specifying this uncertainty is to provide a set of alternative 3D training images from which several geostatistical models can be generated. In this paper, we consider quantifying uncertainty on facies models in the early development stage of a reservoir, when there is still considerable uncertainty on the nature of the spatial distribution of the facies. At this stage, production data are available to further constrain uncertainty. We develop a workflow that consists of two steps: (1) determining which training images are no longer consistent with production data and should be rejected, and (2) history matching with a given, fixed training image. We illustrate our ideas and methodology on a test case derived from a real field case of predicting flow in a newly planned well in a turbidite reservoir off the African West coast.
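A schematic of step (1) of the workflow above, rejecting training images whose geostatistical realizations cannot reproduce the production data within a tolerance; the realization generator, flow simulator, misfit definition, and tolerance are placeholders, not the paper's implementation.

```python
import numpy as np

def screen_training_images(training_images, generate_realization, simulate, d_obs,
                           n_real=20, tol=2.0, seed=0):
    """Return the training images that remain consistent with the production data.

    generate_realization(ti, seed) -> facies model built from training image `ti`
    simulate(model)                -> simulated production response (array like d_obs)
    A training image is rejected if none of its realizations comes within `tol`
    (normalized misfit) of the observed data d_obs.
    """
    rng = np.random.default_rng(seed)
    kept = []
    for ti in training_images:
        misfits = []
        for _ in range(n_real):
            model = generate_realization(ti, int(rng.integers(10**9)))
            d_sim = simulate(model)
            misfits.append(np.linalg.norm(d_sim - d_obs) / np.linalg.norm(d_obs))
        if min(misfits) <= tol:
            kept.append(ti)   # at least one realization matches the data reasonably well
    return kept
```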

19.
Groundwater quality is an important aspect affecting the availability of water resources, so a reasonable evaluation of groundwater quality provides the basis for the development, utilization, and protection of water resources. Given the particular characteristics of water-quality evaluation, this paper improves the fuzzy pattern recognition model and applies the improved model to the comprehensive evaluation of groundwater quality in Jiangyin City; the results are compared with those of a fuzzy comprehensive evaluation weighted by the entropy method and of the traditional fuzzy comprehensive evaluation. The study shows that the fuzzy pattern recognition method has good numerical stability and applicability for groundwater quality evaluation, that the quality grades it assigns tend toward the average and the middle of the scale, and that the proposed fuzzy pattern recognition model is effective and feasible for groundwater quality evaluation in the typical case-study area.
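A small numerical sketch of the fuzzy-pattern-recognition idea behind such an evaluation: the relative membership of a sample in each water-quality grade is computed from weighted distances to the grade standards. The indicator values, grade standards, weights, and membership formula are hypothetical illustrations, not the Jiangyin data or the paper's exact model.

```python
import numpy as np

# Grade standards: limits for each quality grade (rows = grades I..V, cols = indicators), hypothetical
standards = np.array([[0.02, 0.1,  50.0],
                      [0.05, 0.5, 150.0],
                      [0.20, 1.0, 250.0],
                      [0.50, 1.5, 350.0],
                      [1.00, 2.0, 450.0]])
weights = np.array([0.4, 0.3, 0.3])        # indicator weights (e.g., from the entropy method)
sample  = np.array([0.12, 0.8, 210.0])     # measured groundwater sample (hypothetical)

# Normalize indicators by the range of the standards so distances are comparable across indicators
scale = standards.max(axis=0) - standards.min(axis=0)
d = np.sqrt(((weights * (sample - standards) / scale) ** 2).sum(axis=1))   # weighted distance to each grade

# Relative membership in each grade: closer grades get larger membership (fuzzy-pattern-recognition style)
u = (1.0 / d**2) / (1.0 / d**2).sum()
grade = int(np.argmax(u)) + 1
print("memberships:", np.round(u, 3), "-> assigned grade", grade)
```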

20.
The first step in a seismicity analysis usually consists of defining the seismogenic units, seismic zones or individual faults. The worldwide delimitation of these zones involves an enormous effort and is often rather subjective. Moreover, a complete record of faults will not be available for a long time yet. The seismicity model presented in this paper is therefore not based on individually defined seismic zones but rather on the assumption that each point in a global 1/2° grid of coordinates represents a potential earthquake source. The corresponding seismogenic parameters are allocated to each of these points. The earthquake occurrence frequency, one of the most important parameters, is determined purely statistically by appropriately spreading out the positions of past occurrences. All the other significant seismicity characteristics, such as magnitude-frequency relations, the maximum possible magnitude, and attenuation laws including the dependence on focal depth, are determined on the same global 1/2° grid of coordinates. This method of interpreting seismicity data allows us to establish a transparent, sufficiently precise representation of seismic hazard which is ideally suited for computer-aided risk analyses.
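A compact sketch of the "spreading out past occurrences over a grid" step described above: catalogue epicentres are smeared with a Gaussian kernel onto a 1/2° grid to give an occurrence rate per node. The kernel width, catalogue, and region are illustrative assumptions.

```python
import numpy as np

def smoothed_rate_grid(epicentres, lon_grid, lat_grid, sigma_deg=1.0, years=50.0):
    """Annual earthquake rate at each grid node, from kernel-smoothed past epicentres.

    epicentres : array (n_events, 2) of (lon, lat) in degrees
    lon_grid, lat_grid : 1D arrays defining the 1/2-degree grid
    """
    lon, lat = np.meshgrid(lon_grid, lat_grid)
    rate = np.zeros_like(lon)
    for elon, elat in epicentres:
        d2 = (lon - elon) ** 2 + (lat - elat) ** 2       # squared angular distance (small-region approximation)
        kernel = np.exp(-d2 / (2.0 * sigma_deg ** 2))
        rate += kernel / kernel.sum()                    # each event contributes unit weight, spread over nodes
    return rate / years                                  # events per year per grid node

lon_grid = np.arange(60.0, 100.0, 0.5)
lat_grid = np.arange(5.0, 40.0, 0.5)
events = np.array([[76.5, 31.0], [95.0, 27.5], [70.5, 36.5]])   # hypothetical catalogue epicentres
rates = smoothed_rate_grid(events, lon_grid, lat_grid)
```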
