Similar Literature
20 similar documents retrieved (search time: 31 ms)
1.
Uncertainty quantification is typically accomplished by simulating multiple geological realizations, which can be very expensive computationally if the flow process is complicated and the models are highly resolved. Upscaling procedures can be applied to reduce computational demands, though it is essential that the resulting coarse-model predictions correspond to reference fine-scale solutions. In this work, we develop an ensemble level upscaling (EnLU) procedure for compositional systems, which enables the efficient generation of multiple coarse models for use in uncertainty quantification. We apply a newly developed global compositional upscaling method to provide coarse-scale parameters and functions for selected realizations. This global upscaling entails transmissibility and relative permeability upscaling, along with the computation of a-factors to capture component fluxes. Additional features include near-well upscaling for all coarse parameters and functions, and iteration on the a-factors, which is shown to improve accuracy. In the EnLU framework, this global upscaling is applied for only a few selected realizations. For 90% or more of the realizations, upscaled functions are assigned statistically based on quickly computed flow and permeability attributes. A sequential Gaussian co-simulation procedure is incorporated to provide coarse models that honor the spatial correlation structure of the upscaled properties. The resulting EnLU procedure is applied for multiple realizations of two-dimensional models, for both Gaussian and channelized permeability fields. Results demonstrate that EnLU provides P10, P50, and P90 results for phase and component production rates that are in close agreement with reference fine-scale results. Less accuracy is observed in realization-by-realization comparisons, though the models are still much more accurate than those generated using standard coarsening procedures.
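The statistical assignment step lends itself to a compact illustration. Below is a minimal Python sketch, with toy data: the attribute, the regression form, and all names are placeholder assumptions, not the authors' implementation. Upscaled-function parameters are computed fully for a small subset of realizations and regressed onto a cheap attribute for the rest:

```python
# Sketch of statistical assignment of upscaled functions (toy data throughout).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(11)
attr = rng.lognormal(0.0, 0.4, 100)             # cheap flow/permeability attribute per model
endpoint = 0.2 + 0.3 * np.tanh(attr - 1) + rng.normal(0, 0.02, 100)  # "true" kr endpoint

full = rng.choice(100, size=10, replace=False)  # ~10% of realizations get full global upscaling
reg = LinearRegression().fit(attr[full, None], endpoint[full])

assigned = endpoint.copy()
rest = np.setdiff1d(np.arange(100), full)
assigned[rest] = reg.predict(attr[rest, None])  # statistically assigned upscaled functions
```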

2.
Distance-based stochastic techniques have recently emerged in the context of ensemble modeling, in particular for history matching, model selection and uncertainty quantification. Starting with an initial ensemble of realizations, a distance between any two models is defined. This distance is defined such that the objective of the study is incorporated into the geological modeling process, thereby potentially enhancing the efficacy of the overall workflow. If the intent is to create new models that are constrained to dynamic data (history matching), the calculation of the distance requires flow simulation for each model in the initial ensemble. This can be very time consuming, especially for high-resolution models. In this paper, we present a multi-resolution framework for ensemble modeling. A distance-based procedure is employed, with emphasis on the rapid construction of multiple models that have improved dynamic data conditioning. Our intent is to construct new high-resolution models constrained to dynamic data, while performing most of the flow simulations only on upscaled models. An error modeling procedure is introduced into the distance calculations to account for potential errors in the upscaling. Based on a few fine-scale flow simulations, the upscaling error is estimated for each model using a clustering technique. We demonstrate the efficiency of the method on two examples, one where the upscaling error is small, and another where the upscaling error is significant. Results show that the error modeling procedure can accurately capture the error in upscaling, and can thus reproduce the fine-scale flow behavior from coarse-scale simulations with sufficient accuracy (in terms of uncertainty predictions). As a consequence, an ensemble of high-resolution models, which are constrained to dynamic data, can be obtained, but with a minimum of flow simulations at the fine scale.
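A minimal sketch of the error-modeling idea, assuming toy response data: coarse-scale responses are clustered, one representative per cluster is simulated at the fine scale, and the resulting per-cluster upscaling error corrects every member. All names and data are placeholders, not the authors' code:

```python
# Clustering-based estimate of upscaling error (toy coarse/fine responses).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
coarse = rng.normal(size=(200, 50)).cumsum(axis=1)   # coarse-sim responses, one row per model

k = 5
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(coarse)
# Pick the model nearest each centroid as the cluster representative.
reps = [np.argmin(np.linalg.norm(coarse - c, axis=1)) for c in km.cluster_centers_]

# Fine-scale simulation only for the k representatives (placeholder values here).
fine_rep = {i: coarse[r] + rng.normal(0.0, 0.1, coarse.shape[1]) for i, r in enumerate(reps)}

# Error model: shift every coarse response by its cluster's estimated upscaling error.
corrected = np.array([coarse[j] + (fine_rep[km.labels_[j]] - coarse[reps[km.labels_[j]]])
                      for j in range(len(coarse))])
```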

3.
Large-scale flow models constructed using standard coarsening procedures may not accurately resolve detailed near-well effects. Such effects are often important to capture, however, as the interaction of the well with the formation can have a dominant impact on process performance. In this work, a near-well upscaling procedure, which provides three-phase well-block properties, is developed and tested. The overall approach represents an extension of a recently developed oil–gas upscaling procedure and entails the use of local well computations (over a region referred to as the local well model (LWM)) along with a gradient-based optimization procedure to minimize the mismatch between fine and coarse-scale well rates, for oil, gas, and water, over the LWM. The gradients required for the minimization are computed efficiently through solution of adjoint equations. The LWM boundary conditions are determined using an iterative local-global procedure. With this approach, pressures and saturations computed during a global coarse-scale simulation are interpolated onto LWM boundaries and then used as boundary conditions for the fine-scale LWM computations. In addition to extending the overall approach to the three-phase case, this work also introduces new treatments that provide improved accuracy in cases with significant flux from the gas cap into the well block. The near-well multiphase upscaling method is applied to heterogeneous reservoir models, with production from vertical and horizontal wells. Simulation results illustrate that the method is able to accurately capture key near-well effects and to provide predictions for component production rates that are in close agreement with reference fine-scale results. The level of accuracy of the procedure is shown to be significantly higher than that of a standard approach which uses only upscaled single-phase flow parameters.
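The mismatch-minimization step can be sketched as follows, with a toy three-phase coarse well model and a generic quasi-Newton optimizer standing in for the paper's adjoint-gradient formulation (all functions and values are illustrative assumptions):

```python
# Sketch of well-rate mismatch minimization over the LWM (toy coarse model;
# finite-difference gradients stand in for the adjoint-based gradients).
import numpy as np
from scipy.optimize import minimize

q_fine = np.array([120.0, 45.0, 30.0])        # reference oil/gas/water well rates

def coarse_rates(theta):
    """Toy coarse-scale well model: theta = (well index, kr multipliers)."""
    wi, kro, krw = theta
    return np.array([wi * kro * 100.0, wi * 40.0, wi * krw * 25.0])

def mismatch(theta):
    return np.sum((coarse_rates(theta) - q_fine) ** 2)

res = minimize(mismatch, x0=np.ones(3), method="L-BFGS-B",
               bounds=[(0.1, 10.0)] * 3)
print(res.x, coarse_rates(res.x))
```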

4.
5.
Stochastic spatial simulation allows generation of multiple realizations of spatial variables. Due to the computational time required for evaluating the transfer function, uncertainty quantification of these multiple realizations often requires selecting a small subset of realizations. However, by selecting only a few realizations, one may risk biasing the P10, P50, and P90 estimates as compared to the original multiple realizations. The objective of this study is to develop a methodology to quantify confidence intervals for the estimated P10, P50, and P90 quantiles when only a few models are retained for response evaluation. We use the parametric bootstrap technique, which evaluates the variability of the statistics obtained from uncertainty quantification and constructs confidence intervals. Using this technique, we compare the confidence intervals when using two selection methods: the traditional ranking technique and the distance-based kernel clustering technique (DKM). The DKM has been recently developed and has been shown to be effective in quantifying uncertainty. The methodology is demonstrated using two examples. The first example is a synthetic example, which uses bi-normal variables and serves to demonstrate the technique. The second example is from an oil field in West Africa where the uncertain variable is the cumulative oil production coming from 20 wells. The results show that, for the same number of transfer function evaluations, the DKM method has equal or smaller error and confidence interval compared to ranking.
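The parametric bootstrap itself is straightforward. A minimal sketch, assuming a lognormal response fitted to the retained subset (the distribution choice and all numbers are illustrative):

```python
# Parametric-bootstrap confidence intervals for P10/P50/P90 estimated from a
# small retained subset of realizations (toy lognormal response).
import numpy as np

rng = np.random.default_rng(1)
n_sel, n_boot = 10, 2000
sample = rng.lognormal(mean=3.0, sigma=0.5, size=n_sel)   # e.g. cumulative oil from 10 models

# Fit a parametric model (here lognormal) to the retained subset ...
mu, sig = np.log(sample).mean(), np.log(sample).std(ddof=1)

# ... then resample from the fitted model and recompute the quantiles each time.
boot = rng.lognormal(mu, sig, size=(n_boot, n_sel))
q = np.percentile(boot, [10, 50, 90], axis=1)             # P10/P50/P90 per bootstrap sample
ci = np.percentile(q, [2.5, 97.5], axis=1)                # 95% CI for each quantile
print(dict(zip(["P10", "P50", "P90"], ci.T)))
```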

6.
Upscaled flow functions are often needed to account for the effects of fine-scale permeability heterogeneity in coarse-scale simulation models. We present procedures in which the required coarse-scale flow functions are statistically assigned to an ensemble of upscaled geological models. This can be viewed as an extension and further development of a recently developed ensemble level upscaling (EnLU) approach. The method aims to efficiently generate coarse-scale flow models capable of reproducing the ensemble statistics (e.g., cumulative distribution function) of fine-scale flow predictions for multiple reservoir models. The most expensive part of standard coarsening procedures is typically the generation of upscaled two-phase flow functions (e.g., relative permeabilities). EnLU provides a means for efficiently generating these upscaled functions using stochastic simulation. This involves the use of coarse-block attributes that are both fast to compute and correlate closely with the upscaled two-phase functions. In this paper, improved attributes for use in EnLU, namely the coefficient of variation of the fine-scale single-phase velocity field (computed during the upscaling of absolute permeability) and the integral range of the fine-scale permeability variogram, are identified. Geostatistical simulation methods, which account for spatial correlations of the statistically generated upscaled functions, are also applied. The overall methodology thus enables the efficient generation of coarse-scale flow models. The procedure is tested on 3D well-driven flow problems with different permeability distributions and variable fluid mobility ratios. EnLU is shown to capture the ensemble statistics of fine-scale flow results (water and oil flow rates as a function of time) with similar accuracy to full flow-based upscaling methods but with computational speedups of more than an order of magnitude.
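Both attributes are inexpensive to compute. A toy sketch, with a synthetic velocity field and a crude one-dimensional experimental variogram standing in for the paper's implementation:

```python
# Two cheap coarse-block attributes: velocity coefficient of variation and a
# crude integral-range estimate from a 1-D experimental variogram (toy field).
import numpy as np

rng = np.random.default_rng(2)
v = rng.lognormal(0.0, 0.8, size=(20, 20))       # fine-scale single-phase velocity magnitudes

cv = v.std() / v.mean()                          # attribute 1: velocity coefficient of variation

# Attribute 2: integral range approximated at unit lag spacing along x.
lags = np.arange(1, 10)
gamma = np.array([0.5 * np.mean((v[:, h:] - v[:, :-h]) ** 2) for h in lags])
sill = v.var()
integral_range = np.sum(np.clip(1 - gamma / sill, 0, None))
print(cv, integral_range)
```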

7.
The aim of upscaling is to determine equivalent homogeneous parameters at a coarse-scale from a spatially oscillating fine-scale parameter distribution. To be able to use a limited number of relatively large grid-blocks in numerical oil reservoir simulators or groundwater models, upscaling of the permeability is frequently applied. The spatial fine-scale permeability distribution is generally obtained from geological and geostatistical models. After upscaling, the coarse-scale permeabilities are incorporated in the relatively large grid-blocks of the numerical model. If the porous rock may be approximated as a periodic medium, upscaling can be performed by the method of homogenization. In this paper the homogenization is performed numerically, which gives rise to an approximation error. The complementarity between two different numerical methods – the conformal-nodal finite element method and the mixed-hybrid finite element method – has been used to quantify this error. These two methods yield respectively upper and lower bounds for the eigenvalues of the coarse-scale permeability tensor. Results of 3D numerical experiments are shown, both for the far field and around wells.
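For the special case of a periodic layered medium the homogenized tensor is known in closed form, which makes the bounding idea easy to illustrate (this is a textbook closed-form case, not the paper's finite-element estimators):

```python
# Closed-form homogenization of a periodic layered medium: arithmetic mean
# along the layering, harmonic mean across it; these are the eigenvalues of
# the coarse-scale permeability tensor for this geometry.
import numpy as np

k_layers = np.array([10.0, 200.0, 50.0, 5.0])        # layer permeabilities (mD)
k_parallel = k_layers.mean()                          # flow along layering (upper eigenvalue)
k_series = len(k_layers) / np.sum(1.0 / k_layers)     # flow across layering (lower eigenvalue)
print(k_series, k_parallel)
```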

8.
9.
Uncertainty quantification for geomechanical and reservoir predictions is in general a computationally intensive problem, especially if a direct Monte Carlo approach with large numbers of full-physics simulations is used. A common solution to this problem, well known for fluid flow simulations, is the adoption of surrogate modeling approximating the physical behavior with respect to variations in uncertain parameters. The objective of this work is the quantification of such uncertainty within both geomechanical and fluid-flow predictions using a specific surrogate modeling technique based on a functional approach. The methodology approximates full-physics simulated outputs that vary in time and space as the uncertain parameters change, which is particularly important for predicting uncertainty in the vertical displacement resulting from geomechanical modeling. The developed methodology has been applied both to a subsidence uncertainty quantification example and to a real reservoir forecast risk assessment. The surrogate quality obtained in these applications confirms that the proposed method makes it possible to perform reliable time- and space-dependent risk assessment at low computational cost, provided the uncertainty space is low-dimensional.
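One common way to build such a functional surrogate is to compress the time-varying outputs with PCA and regress the retained coefficients on the uncertain parameters; the sketch below assumes this construction (toy forward model, placeholder regressor), which may differ in detail from the authors' method:

```python
# Functional surrogate: PCA compression of time-varying outputs, one regressor
# per retained coefficient (all data and model choices are illustrative).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
theta = rng.uniform(-1, 1, size=(60, 2))                   # uncertain parameters
t = np.linspace(0, 1, 40)
Y = theta[:, [0]] * t + theta[:, [1]] * t**2               # full-physics outputs (toy)

pca = PCA(n_components=3).fit(Y)
Z = pca.transform(Y)                                       # low-dim functional coefficients
models = [make_pipeline(PolynomialFeatures(2), Ridge(alpha=1e-3)).fit(theta, Z[:, i])
          for i in range(Z.shape[1])]

def surrogate(theta_new):
    z = np.column_stack([m.predict(theta_new) for m in models])
    return pca.inverse_transform(z)                        # predicted time series

print(surrogate(np.array([[0.2, -0.5]])).shape)
```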

10.
Uncertainty quantification for subsurface flow problems is typically accomplished through model-based inversion procedures in which multiple posterior (history-matched) geological models are generated and used for flow predictions. These procedures can be demanding computationally, however, and it is not always straightforward to maintain geological realism in the resulting history-matched models. In some applications, it is the flow predictions themselves (and the uncertainty associated with these predictions), rather than the posterior geological models, that are of primary interest. This is the motivation for the data-space inversion (DSI) procedure developed in this paper. In the DSI approach, an ensemble of prior model realizations, honoring prior geostatistical information and hard data at wells, are generated and then (flow) simulated. The resulting production data are assembled into data vectors that represent prior ‘realizations’ in the data space. Pattern-based mapping operations and principal component analysis are applied to transform non-Gaussian data variables into lower-dimensional variables that are closer to multivariate Gaussian. The data-space inversion is posed within a Bayesian framework, and a data-space randomized maximum likelihood method is introduced to sample the conditional distribution of data variables given observed data. Extensive numerical results are presented for two example cases involving oil–water flow in a bimodal channelized system and oil–water–gas flow in a Gaussian permeability system. For both cases, DSI results for uncertainty quantification (e.g., P10, P50, P90 posterior predictions) are compared with those obtained from a strict rejection sampling (RS) procedure. Close agreement between the DSI and RS results is consistently achieved, even when the (synthetic) true data to be matched fall near the edge of the prior distribution. Computational savings using DSI are very substantial in that RS requires O(10^5)–O(10^6) flow simulations, in contrast to 500 for DSI, for the cases considered.
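The data-space update can be illustrated with a linear-Gaussian, ensemble-smoother-style stand-in for the paper's randomized maximum likelihood step (toy data vectors; the PCA/pattern-mapping transforms are omitted for brevity):

```python
# Linear-Gaussian sketch of a data-space update: each prior data vector is
# shifted toward its own perturbed observations (toy data throughout).
import numpy as np

rng = np.random.default_rng(4)
n, nd = 500, 30
D = rng.normal(size=(n, 5)) @ rng.normal(size=(5, nd))       # prior data vectors (toy)
H = np.zeros((10, nd)); H[np.arange(10), np.arange(10)] = 1  # first 10 time steps observed
dobs = D[0, :10] + rng.normal(0, 0.1, 10)                    # synthetic observations
Cd = 0.01 * np.eye(10)

C = np.cov(D.T)                                              # data-space covariance
K = C @ H.T @ np.linalg.inv(H @ C @ H.T + Cd)                # data-space gain
D_post = D + (dobs + rng.multivariate_normal(np.zeros(10), Cd, n) - D @ H.T) @ K.T
print(np.percentile(D_post[:, -1], [10, 50, 90]))            # posterior forecast quantiles
```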

11.
An important task in modern geostatistics is the assessment and quantification of resource and reserve uncertainty. This uncertainty is valuable support information for many management decisions. Uncertainty at specific locations and uncertainty in the global resource are both of interest. There are many different methods to build models of uncertainty, including kriging, cokriging, and inverse distance. Each method leads to different results. A method is proposed to combine the local uncertainties predicted by different models into a single measure that retains the good features of each alternative. The new estimator is the overlap of the alternative conditional distributions.
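A minimal sketch of pooling local conditional distributions from several estimators at a single location (simple averaging is shown; the paper's overlap operator is a specific combination rule and may differ):

```python
# Pooling local conditional CDFs from different estimators at one location.
import numpy as np
from scipy.stats import norm

z = np.linspace(0, 10, 501)
cdfs = [norm.cdf(z, 4.0, 1.0),    # e.g. kriging
        norm.cdf(z, 5.0, 1.5),    # e.g. cokriging
        norm.cdf(z, 4.5, 2.0)]    # e.g. inverse distance
combined = np.mean(cdfs, axis=0)  # pooled model of local uncertainty
p10, p50, p90 = [z[np.searchsorted(combined, p)] for p in (0.1, 0.5, 0.9)]
print(p10, p50, p90)
```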

12.
Ensemble methods present a practical framework for parameter estimation, performance prediction, and uncertainty quantification in subsurface flow and transport modeling. In particular, the ensemble Kalman filter (EnKF) has received significant attention for its promising performance in calibrating heterogeneous subsurface flow models. Since an ensemble of model realizations is used to compute the statistical moments needed to perform the EnKF updates, large ensemble sizes are needed to provide accurate updates and uncertainty assessment. However, for realistic problems that involve large-scale models with computationally demanding flow simulation runs, the EnKF implementation is limited to small-sized ensembles. As a result, spurious numerical correlations can develop and lead to inaccurate EnKF updates, which tend to underestimate or even eliminate the ensemble spread. Ad hoc practical remedies, such as localization, local analysis, and covariance inflation schemes, have been developed and applied to reduce the effect of sampling errors due to small ensemble sizes. In this paper, a fast linear approximate forecast method is proposed as an alternative approach to enable the use of large ensemble sizes in operational settings to obtain improved sample statistics and EnKF updates. The proposed method first clusters a large number of initial geologic model realizations into a small number of groups. A representative member from each group is used to run a full forward flow simulation. The flow predictions for the remaining realizations in each group are approximated by a linearization around the full simulation results of the representative model (centroid) of the respective cluster. The linearization can be performed using either adjoint-based or ensemble-based gradients. Results from several numerical experiments with two-phase and three-phase flow systems in this paper suggest that the proposed method can be applied to improve the EnKF performance in large-scale problems where the number of full simulations is constrained.
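A compact sketch of the forecast approximation, with a toy forward model in place of the flow simulator and an ensemble-based gradient around each cluster centroid (all sizes and names are illustrative):

```python
# Cluster + linearize: one full run per cluster centroid, cheap linearized
# forecasts for the remaining members (toy forward model g).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
W = rng.normal(size=(20, 8))

def g(m):                                        # stand-in for a full flow simulation
    return np.tanh(m @ W)

M = rng.normal(size=(1000, 20))                  # large ensemble of model realizations
km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(M)

D = np.empty((len(M), 8))                        # approximated forecasts
for c in range(10):
    idx = np.where(km.labels_ == c)[0]
    m0 = M[idx].mean(axis=0)
    d0 = g(m0)                                   # one full run per cluster centroid
    # Ensemble-based gradient from a few perturbed runs around the centroid.
    dm = 0.01 * rng.normal(size=(5, 20))
    G = np.linalg.lstsq(dm, np.vstack([g(m0 + p) - d0 for p in dm]), rcond=None)[0]
    D[idx] = d0 + (M[idx] - m0) @ G              # linearized forecasts for cluster members
```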

13.
Uncertainty in surfactant–polymer flooding is an important challenge to the wide-scale implementation of this process. Any successful design of this enhanced oil recovery process will necessitate a good understanding of uncertainty. Thus, it is essential to have the ability to quantify this uncertainty in an efficient manner. Monte Carlo simulation is the traditional uncertainty quantification approach that is used for quantifying parametric uncertainty. However, the convergence of Monte Carlo simulation is relatively slow, requiring a large number of realizations to converge. This study proposes the use of the probabilistic collocation method in parametric uncertainty quantification for surfactant–polymer flooding using four synthetic reservoir models. Four sources of uncertainty were considered: the chemical flood residual oil saturation, surfactant and polymer adsorption, and the polymer viscosity multiplier. The output parameter approximated is the recovery factor. The output metrics were the input–output model response relationship, the probability density function, and the first two moments. These were compared with the results obtained from Monte Carlo simulation over a large number of realizations. Two methods for solving for the coefficients of the output parameter polynomial chaos expansion are compared: Gaussian quadrature and linear regression. The linear regression approach used two types of sampling: full-tensor product nodes and Chebyshev-derived nodes. In general, the probabilistic collocation method was applied successfully to quantify the uncertainty in the recovery factor. Applying the method using the Gaussian quadrature produced more accurate results compared with using the linear regression with full-tensor product nodes. Applying the method using the linear regression with Chebyshev-derived sampling also performed relatively well. Possible enhancements to improve the performance of the probabilistic collocation method were discussed. These enhancements include improved sparse sampling, approximation order-independent sampling, and using arbitrary random input distributions that could be more representative of reality.
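For a single standard-normal input, the Gaussian-quadrature variant reduces to a few lines; a sketch with a toy recovery-factor response (the projection uses probabilists' Hermite polynomials, for which E[He_k^2] = k!):

```python
# Probabilistic collocation via Gaussian quadrature: polynomial chaos
# expansion of a toy recovery factor in one standard-normal parameter.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt, pi

def recovery_factor(xi):            # stand-in for the reservoir simulator response
    return 0.35 + 0.05 * xi - 0.01 * xi**2

order = 4
x, w = hermegauss(order + 1)        # probabilists' Hermite nodes/weights
w = w / sqrt(2 * pi)                # normalize so weights integrate the N(0,1) pdf

# Projection: c_k = E[f(X) He_k(X)] / k!
c = np.array([np.sum(w * recovery_factor(x) * hermeval(x, np.eye(order + 1)[k]))
              / factorial(k) for k in range(order + 1)])

mean = c[0]
var = sum(factorial(k) * c[k]**2 for k in range(1, order + 1))
print(mean, var)                    # first two moments straight from the PCE coefficients
```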

14.
Paleoliquefaction features can be used to estimate lower bounds on the magnitude and ground motion associated with the earthquake that caused the liquefaction feature. The engineering back-analysis of paleoliquefaction features is usually conducted using state-of-the-practice liquefaction-triggering analysis methodologies. Recent studies have shown that these methodologies are associated with variable probabilities of liquefaction depending on the soil parameters. This would imply that estimates of magnitude and ground motion intensity obtained from these methodologies would not be consistent for all soil sites. Moreover, these estimates could be unconservative. In this paper, the use of a probabilistic methodology for the back-analysis of paleoliquefaction features is proposed. The proposed methodology permits the incorporation of model and parameter uncertainty into the analysis and results in more robust estimates of past magnitude and a measure of the uncertainty associated with these predictions. Previously published paleoliquefaction data are used to demonstrate the applicability of the proposed method. Magnitude estimates obtained with the proposed method do not differ significantly from those obtained using deterministic methodologies, but the proposed methodology permits a quantification of the uncertainty associated with magnitude estimates.
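A heavily simplified sketch of the probabilistic back-analysis idea, propagating parameter and model uncertainty through one published magnitude-scaling form, MSF = (M/7.5)^-2.56; all input distributions below are invented for illustration and are not the paper's data:

```python
# Monte Carlo back-analysis sketch: a distribution, not a point estimate, for
# the smallest magnitude consistent with an observed liquefaction feature.
import numpy as np

rng = np.random.default_rng(12)
n = 50_000
crr75 = rng.lognormal(np.log(0.18), 0.15, n)    # cyclic resistance ratio at M7.5 (uncertain)
csr = rng.lognormal(np.log(0.16), 0.10, n)      # cyclic stress ratio from the paleo event
eps = rng.normal(0.0, 0.05, n)                  # lumped model error

# Triggering when CSR >= CRR75 * (M/7.5)^-2.56, solved for magnitude.
m_min = 7.5 * (crr75 / csr) ** (1 / 2.56) * np.exp(eps)
print(np.percentile(m_min, [16, 50, 84]))       # magnitude bound with its uncertainty
```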

15.
The past two decades have seen a rapid adoption of artificial intelligence methods applied to mineral exploration. More recently, the easier acquisition of some types of data has inspired a broad literature that has examined many machine learning and modelling techniques that combine exploration criteria, or ‘features’, to generate predictions for mineral prospectivity. Central to the design of prospectivity models is a ‘mineral system’, a conceptual model describing the key geological elements that control the timing and location of economic mineralisation. The mineral systems model defines what constitutes a training set, which features represent geological evidence of mineralisation, how features are engineered and what modelling methods are used. Mineral systems are knowledge-driven conceptual models, thus all parameter choices are subject to human biases and opinion, so alternative models are possible. However, the effect of alternative mineral systems models on prospectivity is rarely compared despite the potential to heavily influence final predictions. In this study, we focus on the effect of conceptual uncertainty on Fe ore prospectivity models in the Hamersley region, Western Australia. Four important considerations are tested. (1) Five different supergene and hypogene conceptual mineral systems models guide the inputs for five forest-based classification prospectivity models. (2) To represent conceptual uncertainty, the predictions are then combined for prospectivity model comparison. (3) Representation of three-dimensional objects as two-dimensional features is tested to address the commonly ignored thickness of geological units. (4) The training dataset is composed of known economic mineralisation sites (deposits) as ‘positive’ examples, and exploration drilling data providing ‘negative’ sampling locations. Each of the spatial predictions is assessed using independent performance metrics common to AI-based classification methods and subjected to geological plausibility testing. We find that different conceptual mineral systems produce significantly different spatial predictions, thus conceptual uncertainty must be recognised. A benefit to recognising and modelling different conceptual models is that robust and geologically plausible predictions can be made that may guide mineral discovery.
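The combination step can be sketched directly: one classifier per conceptual feature set, with the mean and spread of the predicted probabilities exposing conceptual uncertainty (toy features and labels, not the study's data):

```python
# One forest per conceptual mineral-systems model; combined predictions expose
# conceptual uncertainty (all data are synthetic placeholders).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(6)
X = rng.normal(size=(500, 8))                      # evidence layers at training cells
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.5, 500) > 0).astype(int)

# Each conceptual model selects a different subset of evidential features.
concepts = [[0, 1, 2], [0, 3, 4], [2, 3, 5], [1, 4, 6], [0, 5, 7]]
forests = [RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:, c], y)
           for c in concepts]

grid = rng.normal(size=(1000, 8))                  # cells to be ranked for prospectivity
probs = np.array([f.predict_proba(grid[:, c])[:, 1] for f, c in zip(forests, concepts)])
mean_p, spread = probs.mean(0), probs.std(0)       # combined prospectivity + conceptual spread
```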

16.

Conditioning complex subsurface flow models on nonlinear data is complicated by the need to preserve the expected geological connectivity patterns to maintain solution plausibility. Generative adversarial networks (GANs) have recently been proposed as a promising approach for low-dimensional representation of complex high-dimensional images. The method has also been adopted for low-rank parameterization of complex geologic models to facilitate uncertainty quantification workflows. A difficulty in adopting these methods for subsurface flow modeling is the complexity associated with nonlinear flow data conditioning. While conditional GAN (CGAN) can condition simulated images on labels, application to subsurface problems requires efficient conditioning workflows for nonlinear data, which is far more complex. We present two approaches for generating flow-conditioned models with complex spatial patterns using GAN. The first method is through conditional GAN, whereby a production response label is used as an auxiliary input during the training stage of GAN. The production label is derived from clustering of the flow responses of the prior model realizations (i.e., training data). The underlying assumption of this approach is that GAN can learn the association between the spatial features corresponding to the production responses within each cluster. An alternative method is to use a subset of samples from the training data that are within a certain distance from the observed flow responses and use them as training data within GAN to generate new model realizations. In this case, GAN is not required to learn the nonlinear relation between production responses and spatial patterns. Instead, it is tasked to learn the patterns in the selected realizations that provide a close match to the observed data. The conditional low-dimensional parameterization for complex geologic models with diverse spatial features (i.e., when multiple geologic scenarios are plausible) performed by GAN allows for exploring the spatial variability in the conditional realizations, which can be critical for decision-making. We present and discuss the important properties of GAN for data conditioning using several examples with increasing complexity.
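The two conditioning routes differ only in how the training signal is built; a sketch of that preprocessing (GAN training itself is omitted, and all data are synthetic placeholders):

```python
# Building the conditioning signal for the two routes: cluster labels for a
# CGAN, or a distance-based training subset for a plain GAN (toy responses).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
responses = rng.normal(size=(2000, 60)).cumsum(axis=1)     # prior production responses
d_obs = responses[3] + rng.normal(0, 0.2, 60)              # observed data (synthetic)

# Route 1: cluster labels used as the CGAN conditioning input.
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(responses)

# Route 2: distance-based subset that a plain GAN is retrained on.
dist = np.linalg.norm(responses - d_obs, axis=1)
subset = np.argsort(dist)[:200]                            # closest realizations
print(labels[:10], subset[:5])
```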


17.
Uncertainty quantification is currently one of the leading challenges in the geosciences, in particular in reservoir modeling. A wealth of subsurface data as well as expert knowledge is available to quantify uncertainty and state predictions on reservoir performance or reserves. The geosciences component within this larger modeling framework is partially an interpretive science. Geologists and geophysicists interpret data to postulate on the nature of the depositional environment, for example on the type of fracture system, the nature of faulting, and the type of rock physics model. Often, several alternative scenarios or interpretations are offered, including some associated belief quantified with probabilities. In the context of facies modeling, this could result in various interpretations of facies architecture, associations, geometries, and the way they are distributed in space. A quantitative approach to specify this uncertainty is to provide a set of alternative 3D training images from which several geostatistical models can be generated. In this paper, we consider quantifying uncertainty on facies models in the early development stage of a reservoir when there is still considerable uncertainty on the nature of the spatial distribution of the facies. At this stage, production data are available to further constrain uncertainty. We develop a workflow that consists of two steps: (1) determining which training images are no longer consistent with production data and should be rejected and (2) history matching with a given fixed training image. We illustrate our ideas and methodology on a test case derived from a real field case of predicting flow in a newly planned well in a turbidite reservoir off the West African coast.
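Step (1) amounts to a rejection test per training image; a minimal sketch with synthetic rates and an assumed misfit threshold (the threshold and noise model are illustrative, not the paper's):

```python
# Rejecting training images inconsistent with production data: keep a TI only
# if at least one of its realizations matches within an assumed tolerance.
import numpy as np

rng = np.random.default_rng(8)
d_obs = np.linspace(100, 40, 24) + rng.normal(0, 3, 24)    # observed rates
sigma = 3.0

kept = []
for ti in range(5):                                        # five alternative TIs
    # Simulate a few geostatistical realizations per training image (toy data).
    sims = np.linspace(100 - 5 * ti, 40 - 5 * ti, 24) + rng.normal(0, 3, (20, 24))
    misfit = np.mean((sims - d_obs) ** 2, axis=1) / sigma**2
    if misfit.min() < 2.0:                                 # at least one plausible match
        kept.append(ti)
print("training images retained:", kept)
```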

18.
River flow is a complex dynamic system of hydraulics and sediment transport. Bed load transport has a dynamic nature in gravel-bed rivers, and the complexity of the phenomenon introduces uncertainty into predictions. In the present paper, two methods based on artificial neural networks (ANN) and the adaptive neuro-fuzzy inference system (ANFIS) are developed using 360 data points. In total, 21 different combinations of input parameters are used for predicting bed load transport in gravel-bed rivers. To obtain reliable training and testing subsets, the subset selection of maximum dissimilarity (SSMD) method, rather than the classical trial-and-error method, is used to select these subsets. Furthermore, the uncertainty of the ANN and ANFIS models is determined using Monte Carlo simulation. Two uncertainty indices, the d-factor and the 95% prediction uncertainty, together with comparison of the uncertainty bounds against observed values, show that these models have relatively large uncertainties in bed load predictions, and their use in practical problems requires considerable effort in training and development. Results indicated that ANFIS and ANN are suitable models for predicting bed load transport, but there are many uncertainties in the determination of bed load transport by ANFIS and ANN, especially for high sediment loads. Based on the predictions and confidence intervals, ANFIS is shown to outperform ANN.
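The two indices are easy to compute from the Monte Carlo ensemble; a sketch using the common definitions (95PPU coverage, and the d-factor as mean band width over the standard deviation of the observations):

```python
# Uncertainty indices from a Monte Carlo ensemble of model predictions.
import numpy as np

rng = np.random.default_rng(9)
obs = rng.lognormal(1.0, 0.6, 80)                           # observed bed load (toy)
mc = obs * rng.lognormal(0.0, 0.4, size=(1000, 80))         # Monte Carlo predictions (toy)

lo, hi = np.percentile(mc, [2.5, 97.5], axis=0)             # 95% prediction uncertainty band
p_factor = np.mean((obs >= lo) & (obs <= hi))               # fraction of obs inside the band
d_factor = np.mean(hi - lo) / obs.std()                     # band width relative to the data
print(p_factor, d_factor)
```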

19.
Coarse-scale data assimilation (DA) with large ensemble size is proposed as a robust alternative to standard DA with localization for reservoir history matching problems. With coarse-scale DA, the unknown property function associated with each ensemble member is upscaled to a grid significantly coarser than the original reservoir simulator grid. The grid coarsening is automatic, ensemble-specific and non-uniform. The selection of regions where the grid can be coarsened without introducing too large modelling errors is performed using a second-generation wavelet transform allowing for seamless handling of non-dyadic grids and inactive grid cells. An inexpensive local-local upscaling is performed on each ensemble member. A DA algorithm that restarts from initial time is utilized, which avoids the need for downscaling. Since the DA computational cost roughly equals the number of ensemble members times the cost of a single forward simulation, coarse-scale DA allows for a significant increase in the number of ensemble members at the same computational cost as standard DA with localization. Fixing the computational cost for both approaches, the quality of coarse-scale DA is compared to that of standard DA with localization (using state-of-the-art localization techniques) on examples spanning a large degree of variability. It is found that coarse-scale DA is more robust with respect to variation in example type than each of the localization techniques considered with standard DA. Although the paper is concerned with two spatial dimensions, coarse-scale DA is easily extendible to three spatial dimensions, where it is expected that its advantage with respect to standard DA with localization will increase.
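The region-selection idea can be sketched with a standard Haar transform (the paper uses a second-generation lifting transform precisely to handle non-dyadic grids and inactive cells, which plain Haar does not):

```python
# Flagging coarsenable regions from wavelet detail coefficients (plain Haar
# shown for illustration; toy log-permeability field).
import numpy as np
import pywt

rng = np.random.default_rng(10)
logk = rng.normal(size=(64, 64))
logk[20:40, 20:40] += 3.0                        # a high-contrast region to preserve

cA, (cH, cV, cD) = pywt.dwt2(logk, "haar")       # single-level 2-D transform
detail = np.abs(cH) + np.abs(cV) + np.abs(cD)    # local variability per 2x2 block
coarsen = detail < np.percentile(detail, 60)     # low-detail blocks can be merged
print(coarsen.mean(), "of blocks flagged for coarsening")
```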

20.
This paper presents statistical aspects related to the calibration process and a comparison of different regression approaches of relevance to almost all analytical techniques. The models for ordinary least-squares (OLS), weighted least-squares (WLS), and maximum likelihood fitting (MLF) were evaluated and, as a case study, X-ray fluorescence (XRF) calibration curves for major elements in geochemical reference materials were used. The results showed that WLS and MLF models were statistically more consistent in comparison with the usually applied OLS approach. The use of uncertainty on independent and dependent variables during the calibration process and the calculation of final uncertainty on individual results using error propagation equations are the novel aspects of our work.
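A minimal sketch of the WLS case, using the standard closed-form weighted line fit and its propagated parameter uncertainties (toy calibration data, not the paper's reference materials):

```python
# Weighted least-squares calibration line with propagated parameter
# uncertainties (weights = 1/sigma_y^2; toy concentration/intensity data).
import numpy as np

x = np.array([0.5, 1.0, 2.0, 5.0, 10.0])        # certified concentrations (wt%)
y = np.array([0.9, 2.1, 3.9, 10.2, 19.8])       # measured intensities
sy = np.array([0.1, 0.1, 0.2, 0.3, 0.6])        # per-point measurement uncertainty

w = 1.0 / sy**2
S, Sx, Sy = w.sum(), (w * x).sum(), (w * y).sum()
Sxx, Sxy = (w * x * x).sum(), (w * x * y).sum()
delta = S * Sxx - Sx**2
slope = (S * Sxy - Sx * Sy) / delta
intercept = (Sxx * Sy - Sx * Sxy) / delta
slope_sd, intercept_sd = np.sqrt(S / delta), np.sqrt(Sxx / delta)
print(f"y = ({intercept:.3f}±{intercept_sd:.3f}) + ({slope:.3f}±{slope_sd:.3f})x")
```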
