Similar Literature
20 similar documents found.
1.
Representing Spatial Uncertainty Using Distances and Kernels
Assessing uncertainty of a spatial phenomenon requires the analysis of a large number of parameters which must be processed by a transfer function. To capture the possibly wide range of uncertainty in the transfer function response, a large set of geostatistical model realizations needs to be processed. Stochastic spatial simulation can rapidly provide multiple, equally probable realizations. However, since the transfer function is often computationally demanding, only a small number of models can be evaluated in practice, and these are usually selected through a ranking procedure. Traditional ranking techniques for selecting probabilistic ranges of response (P10, P50 and P90) are highly dependent on the static property used. In this paper, we propose to parameterize the spatial uncertainty represented by a large set of geostatistical realizations through a distance function measuring “dissimilarity” between any two geostatistical realizations. The distance function allows a mapping of the space of uncertainty and can be tailored to the particular problem. The multi-dimensional space of uncertainty can be modeled using kernel techniques, such as kernel principal component analysis (KPCA) or kernel clustering. These tools allow for the selection of a subset of representative realizations with properties similar to those of the larger set. Without losing accuracy, decisions and strategies can then be developed by applying the transfer function to this subset, without the need to exhaustively evaluate each realization. This method is applied to a synthetic oil reservoir, where spatial uncertainty of channel facies is modeled through multiple realizations generated using a multi-point geostatistical algorithm and several training images.
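As a rough illustration of the workflow this abstract describes, the sketch below maps a precomputed dissimilarity matrix into a low-dimensional space with kernel PCA and clusters it to pick representative realizations. The kernel-width heuristic, the number of components and clusters, and the function names are illustrative assumptions, not values or code from the paper.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.cluster import KMeans

def select_representatives(distance_matrix, n_clusters=10, gamma=None):
    """Embed realizations via kernel PCA on a distance-derived kernel, cluster
    them, and keep one representative realization per cluster."""
    d = np.asarray(distance_matrix, dtype=float)
    if gamma is None:
        gamma = 1.0 / np.median(d[d > 0]) ** 2      # heuristic kernel width
    kernel = np.exp(-gamma * d ** 2)                # Gaussian kernel from dissimilarities

    # Map the precomputed kernel matrix to a few leading components.
    embedding = KernelPCA(n_components=3, kernel="precomputed").fit_transform(kernel)

    # Cluster in the embedded space; the member closest to each centroid is kept.
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(embedding)
    representatives = []
    for k in range(n_clusters):
        members = np.flatnonzero(labels == k)
        centroid = embedding[members].mean(axis=0)
        representatives.append(members[np.argmin(
            np.linalg.norm(embedding[members] - centroid, axis=1))])
    return sorted(representatives)

# Only the selected realizations would then be passed to the expensive
# transfer function (e.g., flow simulation) in place of the full ensemble.
```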

2.
Spatial inverse problems in the Earth Sciences are often ill-posed, requiring the specification of a prior model to constrain the nature of the inverse solutions. Otherwise, inverted model realizations lack geological realism. In spatial modeling, such a prior model determines the spatial variability of the inverse solution, for example as constrained by a variogram, a Boolean model, or a training-image-based model. In many cases, particularly in subsurface modeling, one lacks sufficient data to fully determine the nature of the spatial variability. For example, many different training images could be proposed for a given study area. Such alternative training images or scenarios relate to the different possible geological concepts, each exhibiting a distinctive geological architecture. Many inverse methods rely on priors that represent a single, subjectively chosen geological concept (a single variogram within a multi-Gaussian model or a single training image). This paper proposes a novel and practical parameterization of the prior model allowing several discrete choices of geological architectures within the prior. This method does not attempt to parameterize the possibly complex architectures by a set of model parameters. Instead, a large set of prior model realizations is provided in advance, by means of Monte Carlo simulation, where the training image is randomized. The parameterization is achieved by defining a metric space which accommodates this large set of model realizations. This metric space is equipped with a “similarity distance” function, that is, a distance function that measures the similarity of geometry between any two model realizations relevant to the problem at hand. Through examples, it is shown that inverse solutions can be efficiently found in this metric space using a simple stochastic search method.

3.
Generating one realization of a random permeability field that is consistent with observed pressure data and a known variogram model is not a difficult problem. If, however, one wants to investigate the uncertainty of reservoir behavior, one must generate a large number of realizations and ensure that the distribution of realizations properly reflects the uncertainty in reservoir properties. The most widely used method for conditioning permeability fields to production data has been the method of simulated annealing, in which practitioners attempt to minimize the difference between the “true” and simulated production data, and between the “true” and simulated variograms. Unfortunately, the meaning of the resulting realization is not clear and the method can be extremely slow. In this paper, we present an alternative approach to generating realizations that are conditional to pressure data, focusing on the distribution of realizations and on the efficiency of the method. Under certain conditions that can be verified easily, the Markov chain Monte Carlo method is known to produce states whose frequencies of appearance correspond to a given probability distribution, so we use this method to generate the realizations. To make the method more efficient, we perturb the states in such a way that the variogram is satisfied automatically and the pressure data are approximately matched at every step. These perturbations make use of sensitivity coefficients calculated from the reservoir simulator.
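A minimal sketch of a Metropolis-type sampler for the conditioning task described above. The `propose` and `simulate_pressure` callables are placeholders: in the paper the perturbations are built from simulator-derived sensitivity coefficients so that the variogram is honored and the pressure data are approximately matched, which is not reproduced here.

```python
import numpy as np

def metropolis_condition(prior_sample, propose, simulate_pressure, observed, sigma,
                         n_steps=5000, rng=None):
    """Generic Metropolis sampler for realizations conditioned to pressure data.

    prior_sample       -- an unconditional realization honoring the variogram
    propose(m, rng)    -- returns a perturbed realization that still honors the variogram
    simulate_pressure  -- forward model returning simulated pressures for a realization
    observed, sigma    -- pressure observations and their noise standard deviation
    """
    rng = rng or np.random.default_rng(0)
    m = prior_sample
    misfit = np.sum((simulate_pressure(m) - observed) ** 2) / (2 * sigma ** 2)
    chain = [m]
    for _ in range(n_steps):
        m_new = propose(m, rng)
        misfit_new = np.sum((simulate_pressure(m_new) - observed) ** 2) / (2 * sigma ** 2)
        # Symmetric proposal assumed: accept with the usual Metropolis ratio.
        if np.log(rng.uniform()) < misfit - misfit_new:
            m, misfit = m_new, misfit_new
        chain.append(m)
    return chain
```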

4.
5.
Conditioning realizations of stationary Gaussian random fields to a set of data is traditionally based on simple kriging. In practice, this approach may be demanding as it does not account for the uncertainty in the spatial average of the random field. In this paper, an alternative model is presented, in which the Gaussian field is decomposed into a random mean, constant over space but variable over the realizations, and an independent residual. It is shown that, when the prior variance of the random mean is infinitely large (reflecting prior ignorance on the actual spatial average), the realizations of the Gaussian random field are made conditional by substituting ordinary kriging for simple kriging. The proposed approach can be extended to models with random drifts that are polynomials in the spatial coordinates, by using universal or intrinsic kriging for conditioning the realizations, and also to multivariate situations by using cokriging instead of kriging.
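For reference, the conditioning step referred to above can be written with the classical conditioning-by-kriging identity, here with ordinary kriging (OK) in place of simple kriging; the notation below is ours, not the paper's.

```latex
% Conditional realization built from an unconditional one:
%   Z_s(x)          unconditional realization of the random field
%   Z*_{OK}(x)      OK estimate computed from the observed data z(x_i)
%   Z*_{OK,s}(x)    OK estimate computed from the simulated values Z_s(x_i)
\[
  Z_c(x) \;=\; Z_s(x) \;+\; \bigl[\, Z^{*}_{\mathrm{OK}}(x) - Z^{*}_{\mathrm{OK},s}(x) \,\bigr]
\]
% so that Z_c(x_i) = z(x_i) at every datum while the spatial structure of the
% unconditional realization is preserved away from the data.
```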

6.
Uncertainty in future reservoir performance is usually evaluated from the simulated performance of a small number of reservoir realizations. Unfortunately, most of the practical methods for generating realizations conditional to production data are only approximately correct. It is not known whether the recently developed method of Gradual Deformation is an approximate method or whether it actually generates realizations that are distributed correctly. In this paper, we evaluate the ability of the Gradual Deformation method to correctly assess the uncertainty in reservoir predictions by comparing the distribution of conditional realizations for a small test problem with the standard distribution from a Markov chain Monte Carlo (MCMC) method, which is known to be correct, and with distributions from several approximate methods. Although the Gradual Deformation algorithm samples inefficiently for this test problem and is clearly not an exact method, it gives uncertainty estimates similar to those obtained by the MCMC method based on a relatively small number of realizations.
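For context, a minimal sketch of the basic gradual-deformation combination of two independent standard Gaussian realizations; this is the commonly cited form of the scheme, with illustrative names, and the paper's test problem and comparison setup are not reproduced.

```python
import numpy as np

def gradual_deformation(y1, y2, rho):
    """Combine two independent standard-Gaussian realizations y1, y2.

    Because cos^2 + sin^2 = 1, the combination is again standard Gaussian for
    any rho, so rho can be optimized against a data misfit while the result
    stays inside the prior."""
    return y1 * np.cos(np.pi * rho) + y2 * np.sin(np.pi * rho)

# A 1-D search over rho deforms the current realization toward the data;
# chaining such searches with fresh draws of y2 gives the iterative scheme
# whose sampling properties are evaluated in the paper.
```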

7.
Construction of predictive reservoir models invariably involves interpretation and interpolation between limited available data and adoption of imperfect modeling assumptions that introduce significant subjectivity and uncertainty into the modeling process. In particular, uncertainty in the geologic continuity model can significantly degrade the quality of fluid displacement patterns and predictive modeling outcomes. Here, we address a standing challenge in flow model calibration under uncertainty in geologic continuity by developing an adaptive sparse representation formulation for prior model identification (PMI) during model calibration. We develop a flow-data-driven, sparsity-promoting inversion to discriminate among distinct prior geologic continuity models (e.g., variograms). Realizations of reservoir properties from each geologic continuity model are used to generate sparse geologic dictionaries that compactly represent models from each respective prior. For inversion, the same number of elements from each prior dictionary is initially used to construct a diverse geologic dictionary that reflects a wide range of variability and uncertainty in the prior continuity. The inversion is formulated as a sparse reconstruction problem that inverts the flow data to identify and linearly combine the relevant elements from the large and diverse set of geologic dictionary elements to reconstruct the solution. We develop an adaptive sparse reconstruction algorithm in which, at every iteration, the contribution of each dictionary to the solution is monitored in order to replace irrelevant (insignificant) elements with more geologically relevant (significant) elements and thereby improve the solution quality. Several numerical examples are used to illustrate the effectiveness of the proposed approach for identification of geologic continuity in practical model calibration problems where the uncertainty in the prior geologic continuity model can lead to biased inversion results and predictions.
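The dictionary idea can be sketched as follows, assuming two prior ensembles are already available as arrays of gridded properties. The snippet uses a plain orthogonal matching pursuit fit to a known target purely for illustration; the paper's formulation inverts flow data through a reservoir simulator and adapts the dictionary between iterations, which is not shown.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def build_dictionary(prior_a, prior_b):
    """prior_a, prior_b: arrays of shape (n_realizations, n_cells) drawn from
    two competing geologic continuity models; columns of the result are the
    dictionary elements."""
    return np.vstack([prior_a, prior_b]).T

def sparse_identify(dictionary, target, n_nonzero=10):
    """Sparse reconstruction of a target field from the joint dictionary."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                    fit_intercept=False).fit(dictionary, target)
    coef = omp.coef_
    used = np.flatnonzero(coef)                 # indices of active elements
    return dictionary @ coef, used

# The block of columns the active elements come from indicates which prior
# continuity model the data favor, which is the spirit of the PMI step.
```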

8.
Ensemble-based methods are becoming popular assisted history matching techniques with a growing number of field applications. These methods use an ensemble of model realizations, typically constructed by means of geostatistics, to represent the prior uncertainty. The performance of the history matching is very dependent on the quality of the initial ensemble. However, there is a significant level of uncertainty in the parameters used to define the geostatistical model. From a Bayesian viewpoint, the uncertainty in the geostatistical modeling can be represented by a hyper-prior in a hierarchical formulation. This paper presents the first steps towards a general parametrization to address the problem of uncertainty in the prior modeling. The proposed parametrization is inspired by Gaussian mixtures, where the uncertainty in the prior mean and prior covariance is accounted for by defining weights for combining multiple Gaussian ensembles, which are estimated during the data assimilation. The parametrization was successfully tested in a simple reservoir problem where the orientation of the major anisotropic direction of the permeability field was unknown.

9.
Geologic uncertainties and limited well data often render recovery forecasting a difficult undertaking in typical appraisal and early development settings. Recent advances in geologic modeling algorithms permit automation of the model generation process via macros and geostatistical tools. This allows rapid construction of multiple alternative geologic realizations. Despite the advances in geologic modeling, computation of the reservoir dynamic response via full-physics reservoir simulation remains a computationally expensive task. Therefore, only a few of the many probable realizations are simulated in practice. Experimental design techniques typically focus on a few discrete geologic realizations, as they are inherently more suitable for continuous engineering parameters and can only crudely approximate the impact of geology. As an alternative, a flow-based pattern recognition algorithm (FPRA) has been developed for quantifying forecast uncertainty. The proposed algorithm relies on the rapid characterization of the geologic uncertainty space represented by an ensemble of sufficiently diverse static model realizations. FPRA characterizes the geologic uncertainty space by calculating connectivity distances, which quantify how different each individual realization is from all others in terms of recovery response. Fast streamline simulations are employed in evaluating these distances. By applying pattern recognition techniques to the connectivity distances, a few representative realizations are identified within the model ensemble for full-physics simulation. In turn, the recovery factor probability distribution is derived from these intelligently selected simulation runs. Here, FPRA is tested on an example case where the objective is to accurately compute the recovery factor statistics as a function of geologic uncertainty in a channelized turbidite reservoir. Recovery factor cumulative distribution functions computed by FPRA compare well to the one computed via exhaustive full-physics simulations.

10.
Spatial uncertainty modelling is a complex and challenging task for orebody modelling in mining, reservoir characterization in petroleum, and contamination modelling in air and water. Stochastic simulation algorithms are popular methods for such modelling. In this paper, a discrete wavelet transform (DWT)-based multiple-point simulation algorithm for continuous variables is proposed that handles multi-scale spatial characteristics in datasets and training images. The DWT of a training image provides multi-scale high-frequency wavelet images and one low-frequency scaling image at the coarsest scale. The simulation in the proposed approach is performed in the frequency (wavelet) domain, where the scaling image and the wavelet images across scales are simulated jointly. The inverse DWT reconstructs the simulated realizations of the attribute of interest in the space domain. An automatic scale-selection algorithm using the dominant mode difference is applied to select the optimal scale of wavelet decomposition. The proposed algorithm reduces the computational time required for simulating large domains compared to spatial-domain multiple-point simulation algorithms. The algorithm is tested with an exhaustive dataset using conditional and unconditional simulation on two- and three-dimensional fluvial reservoir and mining blasted rock data. The realizations generated by the proposed algorithm perform well and reproduce the statistics of the training image. A comparison with the spatial-domain filtersim multiple-point simulation algorithm suggests that the proposed algorithm generates equally good realizations at lower computational cost.
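A minimal sketch of the wavelet-domain plumbing described above, assuming the PyWavelets package. Only the forward and inverse DWT steps are shown; the joint multiple-point simulation of the scaling and wavelet images, the scale-selection rule, and conditioning are the paper's contribution and appear here only as a placeholder comment.

```python
import numpy as np
import pywt

def decompose(training_image, wavelet="haar", level=2):
    """Split a training image into one coarse scaling image and per-scale
    high-frequency detail images (cA, (cH, cV, cD) per level)."""
    return pywt.wavedec2(training_image, wavelet=wavelet, level=level)

def reconstruct(simulated_coeffs, wavelet="haar"):
    """Inverse DWT maps wavelet-domain coefficients back to a realization in
    the space domain."""
    return pywt.waverec2(simulated_coeffs, wavelet=wavelet)

ti = np.random.default_rng(0).normal(size=(64, 64))   # stand-in training image
coeffs = decompose(ti)
# ... jointly simulate the scaling image and detail images here (placeholder) ...
realization = reconstruct(coeffs)
```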

11.
Seismic inverse modeling, which transforms appropriately processed geophysical data into the physical properties of the Earth, is an essential process for reservoir characterization. This paper proposes a work flow based on a Markov chain Monte Carlo method consistent with geology, well-logs, seismic data, and rock-physics information. It uses direct sampling as a multiple-point geostatistical method for generating realizations from the prior distribution, and Metropolis sampling with adaptive spatial resampling to perform an approximate sampling from the posterior distribution, conditioned to the geophysical data. Because it can assess important uncertainties, sampling is a more general approach than just finding the most likely model. However, since rejection sampling requires a large number of evaluations for generating the posterior distribution, it is inefficient and not suitable for reservoir modeling. Metropolis sampling is able to perform an equivalent sampling by forming a Markov chain. The iterative spatial resampling algorithm perturbs realizations of a spatially dependent variable, while preserving its spatial structure by conditioning to subset points. However, in most practical applications, when the subset conditioning points are selected at random, it can get stuck for a very long time in a non-optimal local minimum. In this paper it is demonstrated that adaptive subset sampling improves the efficiency of iterative spatial resampling. Depending on the acceptance/rejection criteria, it is possible to obtain a chain of geostatistical realizations aimed at characterizing the posterior distribution with Metropolis sampling. The validity and applicability of the proposed method are illustrated by results for seismic lithofacies inversion on the Stanford VI synthetic test sets.

12.

Conditioning complex subsurface flow models on nonlinear data is complicated by the need to preserve the expected geological connectivity patterns to maintain solution plausibility. Generative adversarial networks (GANs) have recently been proposed as a promising approach for low-dimensional representation of complex high-dimensional images. The method has also been adopted for low-rank parameterization of complex geologic models to facilitate uncertainty quantification workflows. A difficulty in adopting these methods for subsurface flow modeling is the complexity associated with nonlinear flow data conditioning. While conditional GAN (CGAN) can condition simulated images on labels, application to subsurface problems requires efficient conditioning workflows for nonlinear data, which is far more complex. We present two approaches for generating flow-conditioned models with complex spatial patterns using GAN. The first method is through conditional GAN, whereby a production response label is used as an auxiliary input during the training stage of GAN. The production label is derived from clustering of the flow responses of the prior model realizations (i.e., training data). The underlying assumption of this approach is that GAN can learn the association between the spatial features corresponding to the production responses within each cluster. An alternative method is to use a subset of samples from the training data that are within a certain distance from the observed flow responses and use them as training data within GAN to generate new model realizations. In this case, GAN is not required to learn the nonlinear relation between production responses and spatial patterns. Instead, it is tasked to learn the patterns in the selected realizations that provide a close match to the observed data. The conditional low-dimensional parameterization for complex geologic models with diverse spatial features (i.e., when multiple geologic scenarios are plausible) performed by GAN allows for exploring the spatial variability in the conditional realizations, which can be critical for decision-making. We present and discuss the important properties of GAN for data conditioning using several examples with increasing complexity.


13.
Uncertainty in surfactant–polymer flooding is an important challenge to the wide-scale implementation of this process. Any successful design of this enhanced oil recovery process will necessitate a good understanding of uncertainty. Thus, it is essential to have the ability to quantify this uncertainty in an efficient manner. Monte Carlo simulation is the traditional approach used for quantifying parametric uncertainty. However, the convergence of Monte Carlo simulation is relatively slow, requiring a large number of realizations. This study proposes the use of the probabilistic collocation method for parametric uncertainty quantification in surfactant–polymer flooding using four synthetic reservoir models. Four sources of uncertainty were considered: the chemical-flood residual oil saturation, surfactant adsorption, polymer adsorption, and the polymer viscosity multiplier. The output parameter approximated is the recovery factor. The output metrics were the input–output model response relationship, the probability density function, and the first two moments. These were compared with the results obtained from Monte Carlo simulation over a large number of realizations. Two methods for solving for the coefficients of the polynomial chaos expansion of the output parameter are compared: Gaussian quadrature and linear regression. The linear regression approach used two types of sampling: full-tensor product nodes and Chebyshev-derived nodes. In general, the probabilistic collocation method was applied successfully to quantify the uncertainty in the recovery factor. Applying the method with Gaussian quadrature produced more accurate results than the linear regression with full-tensor product nodes. Applying the method with linear regression and Chebyshev-derived sampling also performed relatively well. Possible enhancements to improve the performance of the probabilistic collocation method are discussed, including improved sparse sampling, approximation-order-independent sampling, and the use of arbitrary random input distributions that could be more representative of reality.
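A one-dimensional sketch of probabilistic collocation with Gaussian quadrature, assuming a single standard-normal input and a stand-in model in place of the recovery-factor simulator; the polynomial order and function names are illustrative choices, not the paper's setup.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt, pi

def pce_gauss_quadrature(model, order=4):
    """Project a model response onto probabilists' Hermite polynomials using
    Gauss-Hermite quadrature nodes, and return the PCE coefficients, mean,
    and variance of the response."""
    nodes, weights = hermegauss(order + 1)           # weight function exp(-x^2/2)
    responses = np.array([model(x) for x in nodes])  # simulator runs at the nodes
    coeffs = []
    for n in range(order + 1):
        he_n = hermeval(nodes, [0] * n + [1])        # He_n evaluated at the nodes
        # c_n = E[f(X) He_n(X)] / n!, with the expectation done by quadrature.
        coeffs.append(np.sum(weights * responses * he_n) / (sqrt(2 * pi) * factorial(n)))
    mean = coeffs[0]
    variance = sum(factorial(n) * coeffs[n] ** 2 for n in range(1, order + 1))
    return np.array(coeffs), mean, variance

# Stand-in "recovery factor" response of one uncertain input.
coeffs, mean, var = pce_gauss_quadrature(lambda x: 0.35 + 0.05 * np.tanh(x))
```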

14.
15.
16.
A stochastic channel embedded in a background facies is conditioned to data observed at wells. The background facies is a fixed rectangular box. The model parameters consist of geometric parameters that describe the shape, size, and location of the channel, and of the permeability and porosity in the channel and nonchannel facies. We extend methodology previously developed to condition a stochastic channel to well-test pressure data and to well observations of the channel thickness and the depth of the top of the channel. The main objective of this work is to characterize the reduction in uncertainty in channel model parameters and predicted reservoir performance that can be achieved by conditioning to well-test pressure data at one or more wells. Multiple conditional realizations of the geometric parameters and rock properties are generated to evaluate the uncertainty in model parameters. The ensemble of predictions of reservoir performance generated from the suite of realizations provides a Monte Carlo estimate of the uncertainty in future performance predictions. In addition, we provide some insight into how prior variances, data measurement errors, and sensitivity coefficients interact to determine the reduction in uncertainty in model parameters obtained by conditioning to pressure data, and examine the value of active and observation well data in resolving model parameters.

17.
A Bayesian linear inversion methodology based on Gaussian mixture models and its application to geophysical inverse problems are presented in this paper. The proposed inverse method is based on a Bayesian approach under the assumptions of a Gaussian mixture random field for the prior model and a Gaussian linear likelihood function. The model for the latent discrete variable is defined to be a stationary first-order Markov chain. In this approach, a recursive exact solution to an approximation of the posterior distribution of the inverse problem is proposed. A Markov chain Monte Carlo algorithm can be used to efficiently simulate realizations from the correct posterior model. Two inversion studies based on real well log data are presented, and the main results are the posterior distributions of the reservoir properties of interest, the corresponding predictions and prediction intervals, and a set of conditional realizations. The first application is a seismic inversion study for the prediction of lithological facies, P- and S-impedance, where an improvement of 30% in the root-mean-square error of the predictions compared to the traditional Gaussian inversion is obtained. The second application is a rock physics inversion study for the prediction of lithological facies, porosity, and clay volume, where predictions slightly improve compared to the Gaussian inversion approach.
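For a single Gaussian prior component with a linear Gaussian likelihood, the conjugate update takes the standard form below (our notation, not the paper's); in the mixture setting, each component is updated this way and the mixture weights are re-weighted by the components' marginal likelihoods of the data.

```latex
% Prior component m ~ N(mu, Sigma), data d = G m + e with e ~ N(0, Sigma_e):
\[
  \mu_{\text{post}} = \mu + \Sigma G^{\mathsf T}
      \bigl(G \Sigma G^{\mathsf T} + \Sigma_e\bigr)^{-1}(d - G\mu),
  \qquad
  \Sigma_{\text{post}} = \Sigma - \Sigma G^{\mathsf T}
      \bigl(G \Sigma G^{\mathsf T} + \Sigma_e\bigr)^{-1} G \Sigma .
\]
```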

18.
Modeling complex reservoir geometries with multiple-point statistics
Large-scale reservoir architecture constitutes first-order reservoir heterogeneity and dictates to a large extent reservoir flow behavior. It also manifests geometric characteristics beyond the capability of traditional geostatistical models conditioned only on single-point and two-point statistics. Multiple-point statistics, as obtained by scanning a training image deemed representative of the actual reservoir, provide, if reproduced properly, stochastic models that better capture the essence of the heterogeneity. A growth algorithm, coupled with an optimization procedure, is proposed to reproduce target multiple-point histograms. The growth algorithm makes an analogy between the geological accretion process and the stochastic simulation process, and amounts to restricting the random path of sequential simulation at any given stage to a set of eligible nodes (those immediately adjacent to a previously simulated node or sand grain). The proposed algorithm, combined with a multiple-grid approach, is shown to reproduce effectively the geometric essence of complex training images exhibiting long-range and curvilinear structures. Also, by avoiding a rigorous search for the global minimum and accepting local minima, the proposed algorithm improves CPU time over traditional optimization procedures by several orders of magnitude. Average flow responses run on simulated realizations are shown to bracket correctly the reference responses of the training image.

19.
In the analysis of petroleum reservoirs, one of the most challenging problems is to use inverse theory in the search for an optimal parameterization of the reservoir. Generally, scientists approach this problem by computing a sensitivity matrix and then performing a singular value decomposition in order to determine the number of degrees of freedom, i.e., the number of independent parameters necessary to specify the configuration of the system. Here we propose a complementary approach: it uses the concept of refinement indicators to select those degrees of freedom which have the greatest sensitivity to an objective function quantifying the mismatch between measured and simulated data. We apply this approach to the problem of data integration for petrophysical reservoir characterization, where geoscientists are currently working with multimillion-cell geological models. Data integration may be performed by gradually deforming (by a linear combination) a set of these multimillion-cell geostatistical realizations during the optimization process. The number of inversion parameters is then reduced to the number of coefficients of this linear combination. However, there is an infinity of geostatistical realizations to choose from, and an arbitrary choice may not be efficient with respect to operational constraints. Following our new approach, we are able, through a single objective-function evaluation, to compute refinement indicators that indicate which realizations might improve the iterative geological model in a significant way. This computation is extremely fast, as it requires only a single gradient computation through the adjoint-state approach and dot products. Using only the most sensitive realizations from a given set, we are able to solve the optimization problem more quickly. We applied this methodology to the integration of interference test data into 3D geostatistical models.

20.