Similar Documents (20 results)
1.

A new low-dimensional parameterization based on principal component analysis (PCA) and convolutional neural networks (CNN) is developed to represent complex geological models. The CNN–PCA method is inspired by recent developments in computer vision using deep learning. CNN–PCA can be viewed as a generalization of an existing optimization-based PCA (O-PCA) method. Both CNN–PCA and O-PCA entail post-processing a PCA model to better honor complex geological features. In CNN–PCA, rather than use a histogram-based regularization as in O-PCA, a new regularization involving a set of metrics for multipoint statistics is introduced. The metrics are based on summary statistics of the nonlinear filter responses of geological models to a pre-trained deep CNN. In addition, in the CNN–PCA formulation presented here, a convolutional neural network is trained as an explicit transform function that can post-process PCA models quickly. CNN–PCA is shown to provide both unconditional and conditional realizations that honor the geological features present in reference SGeMS geostatistical realizations for a binary channelized system. Flow statistics obtained through simulation of random CNN–PCA models closely match results for random SGeMS models for a demanding case in which O-PCA models lead to significant discrepancies. Results for history matching are also presented. In this assessment CNN–PCA is applied with derivative-free optimization, and a subspace randomized maximum likelihood method is used to provide multiple posterior models. Data assimilation and significant uncertainty reduction are achieved for existing wells, and physically reasonable predictions are also obtained for new wells. Finally, the CNN–PCA method is extended to a more complex nonstationary bimodal deltaic fan system, and is shown to provide high-quality realizations for this challenging example.
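As a rough illustration of the PCA stage on which CNN–PCA builds (the CNN post-processing network is omitted), the following minimal Python sketch derives a low-dimensional parameterization from an ensemble of flattened realizations and samples new models from it. All names and array sizes are hypothetical stand-ins, not taken from the paper.

```python
import numpy as np

def fit_pca(realizations, n_comp):
    """Fit a PCA parameterization from an ensemble of geomodels.

    realizations: (n_real, n_cells) array, one flattened model per row.
    Returns the mean model and a scaled basis mapping a low-dimensional
    vector xi ~ N(0, I) to a new model.
    """
    m_mean = realizations.mean(axis=0)
    A = (realizations - m_mean).T / np.sqrt(len(realizations) - 1)
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    return m_mean, U[:, :n_comp] * s[:n_comp]

def sample_pca(m_mean, basis, rng):
    """Draw a new (overly smooth) PCA realization; CNN-PCA would
    post-process it to restore channelized geology."""
    return m_mean + basis @ rng.standard_normal(basis.shape[1])

rng = np.random.default_rng(0)
ens = rng.standard_normal((200, 64 * 64))   # stand-in for SGeMS realizations
m_mean, basis = fit_pca(ens, n_comp=40)
new_model = sample_pca(m_mean, basis, rng)
```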


2.
In the analysis of petroleum reservoirs, one of the most challenging problems is to use inverse theory in the search for an optimal parameterization of the reservoir. Generally, scientists approach this problem by computing a sensitivity matrix and then performing a singular value decomposition to determine the number of degrees of freedom, i.e., the number of independent parameters necessary to specify the configuration of the system. Here we propose a complementary approach: it uses the concept of refinement indicators to select those degrees of freedom that have the greatest sensitivity to an objective function quantifying the mismatch between measured and simulated data. We apply this approach to the problem of data integration for petrophysical reservoir characterization, where geoscientists currently work with multimillion-cell geological models. Data integration may be performed by gradually deforming (through a linear combination) a set of these multimillion-cell geostatistical realizations during the optimization process. The inversion parameters are then reduced to the coefficients of this linear combination. However, there is an infinite number of geostatistical realizations to choose from, and trying them all is not efficient under operational constraints. Following our new approach, a single objective function evaluation suffices to compute refinement indicators that show which realizations would improve the iterative geological model in a significant way. This computation is extremely fast, as it requires only a single gradient computation (through the adjoint-state approach) and dot products. Using only the most sensitive realizations from a given set, we are able to solve the optimization problem more quickly. We applied this methodology to the integration of interference test data into 3D geostatistical models.
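As a sketch of the selection step described above (assuming the adjoint gradient of the objective with respect to the gridded model is already available), a refinement indicator for each candidate realization reduces to a dot product; candidates are then ranked by indicator magnitude. All names and arrays are illustrative.

```python
import numpy as np

def refinement_indicators(grad_J, candidates):
    """First-order sensitivity of the mismatch objective to adding each
    candidate realization to the linear combination: the indicator for
    candidate y is |dJ/d(alpha)| at alpha = 0, i.e. |<grad_J, y>|.

    grad_J: (n_cells,) gradient from a single adjoint computation.
    candidates: (n_cand, n_cells) candidate geostatistical realizations.
    """
    indicators = np.abs(candidates @ grad_J)
    return np.argsort(indicators)[::-1], indicators

rng = np.random.default_rng(1)
grad_J = rng.standard_normal(10_000)        # stand-in adjoint gradient
cands = rng.standard_normal((50, 10_000))   # stand-in realizations
order, ind = refinement_indicators(grad_J, cands)
print("most promising candidates:", order[:5])
```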

3.
An approach for geostatistically consistent matching of 3D flow simulation models and 3D geological models is proposed. This approach uses an optimization algorithm based on identification of the parameters of the geostatistical model (for example, the variogram parameters, such as range, sill, and nugget effect). Here, the inverse problem is considered in full generality, taking into account facies heterogeneity and variogram anisotropy. The parameters of the correlation dependence (porosity to log-permeability) are refined for each individual facies.

4.
This paper describes a novel approach for creating an efficient, general, and differentiable parameterization of large-scale non-Gaussian, non-stationary random fields (represented by multipoint geostatistics) that is capable of reproducing complex geological structures such as channels. Such parameterizations are appropriate for use with gradient-based algorithms applied to, for example, history-matching or uncertainty propagation. It is known that the standard Karhunen–Loève (K–L) expansion, also called linear principal component analysis or PCA, can be used as a differentiable parameterization of input random fields defining the geological model. The standard K–L model is, however, limited in two respects. It requires an eigen-decomposition of the covariance matrix of the random field, which is prohibitively expensive for large models. In addition, it preserves only the two-point statistics of a random field, which is insufficient for reproducing complex structures. In this work, kernel PCA is applied to address the limitations associated with the standard K–L expansion. Although kernel PCA is widely used in machine learning applications, it does not appear to have found application in geological model parameterization. With kernel PCA, an eigen-decomposition of a small matrix called the kernel matrix is performed instead of the full covariance matrix. The method is much more efficient than the standard K–L procedure. Through the use of higher-order polynomial kernels, which implicitly define a high-dimensional feature space, kernel PCA further enables the preservation of high-order statistics of the random field, instead of just two-point statistics as in the K–L method. The kernel PCA eigen-decomposition proceeds using a set of realizations created by geostatistical simulation (honoring two-point or multipoint statistics) rather than the analytical covariance function. We demonstrate that kernel PCA is capable of generating differentiable parameterizations that reproduce the essential features of complex geological structures represented by multipoint geostatistics. The kernel PCA representation is then applied to history match a water flooding problem. This example demonstrates that kernel PCA can be used with gradient-based history matching to provide models that match production history while maintaining multipoint geostatistics consistent with the underlying training image.
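A minimal sketch of the kernel-PCA step on a stand-in ensemble, using scikit-learn's KernelPCA with a polynomial kernel: only the small kernel matrix (n_real × n_real) is eigen-decomposed. Note that the inverse transform here is scikit-learn's generic kernel-ridge pre-image approximation, not the paper's procedure.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(2)
realizations = rng.standard_normal((300, 50 * 50))  # rows = flattened models

# Eigen-decomposition of the 300 x 300 kernel matrix instead of the
# 2500 x 2500 covariance matrix; a degree-3 polynomial kernel implicitly
# retains statistics beyond two-point.
kpca = KernelPCA(n_components=30, kernel="poly", degree=3,
                 fit_inverse_transform=True, alpha=0.1)
xi = kpca.fit_transform(realizations)   # low-dimensional coordinates
models = kpca.inverse_transform(xi)     # approximate pre-images
```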

5.
A Bayesian linear inversion methodology based on Gaussian mixture models and its application to geophysical inverse problems are presented in this paper. The proposed inverse method is based on a Bayesian approach under the assumptions of a Gaussian mixture random field for the prior model and a Gaussian linear likelihood function. The model for the latent discrete variable is defined to be a stationary first-order Markov chain. In this approach, a recursive exact solution to an approximation of the posterior distribution of the inverse problem is proposed. A Markov chain Monte Carlo algorithm can be used to efficiently simulate realizations from the correct posterior model. Two inversion studies based on real well log data are presented, and the main results are the posterior distributions of the reservoir properties of interest, the corresponding predictions and prediction intervals, and a set of conditional realizations. The first application is a seismic inversion study for the prediction of lithological facies, P- and S-impedance, where an improvement of 30% in the root-mean-square error of the predictions compared to the traditional Gaussian inversion is obtained. The second application is a rock physics inversion study for the prediction of lithological facies, porosity, and clay volume, where predictions slightly improve compared to the Gaussian inversion approach.
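For the Gaussian-linear building block in this setting, the posterior is available in closed form, and a Gaussian mixture prior yields a Gaussian mixture posterior with updated components and reweighted modes. The sketch below is this generic textbook construction, not the paper's recursive algorithm for the latent Markov chain.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gaussian_linear_posterior(mu, Sigma, G, d_obs, Sigma_e):
    """Exact posterior of m ~ N(mu, Sigma) given d = G m + e, e ~ N(0, Sigma_e)."""
    S = G @ Sigma @ G.T + Sigma_e
    K = Sigma @ G.T @ np.linalg.inv(S)          # Kalman-type gain
    return mu + K @ (d_obs - G @ mu), Sigma - K @ G @ Sigma

def gmm_posterior(weights, mus, Sigmas, G, d_obs, Sigma_e):
    """Mixture prior -> mixture posterior; weights scale with each
    component's evidence N(d_obs; G mu_k, G Sigma_k G^T + Sigma_e)."""
    comps, new_w = [], []
    for w, mu, Sigma in zip(weights, mus, Sigmas):
        S = G @ Sigma @ G.T + Sigma_e
        new_w.append(w * multivariate_normal.pdf(d_obs, mean=G @ mu, cov=S))
        comps.append(gaussian_linear_posterior(mu, Sigma, G, d_obs, Sigma_e))
    new_w = np.asarray(new_w)
    return new_w / new_w.sum(), comps

# Two-component mixture prior in 2-D, one linear measurement.
G = np.array([[1.0, 0.5]])
Sigma_e = np.array([[0.1]])
mus = [np.zeros(2), np.array([3.0, 1.0])]
Sigmas = [np.eye(2), 0.5 * np.eye(2)]
w_post, comps = gmm_posterior([0.5, 0.5], mus, Sigmas, G,
                              np.array([2.0]), Sigma_e)
```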

6.
Based on the algorithm for gradual deformation of Gaussian stochastic models, we propose, in this paper, an extension of this method to gradually deforming realizations generated by sequential, not necessarily Gaussian, simulation. As in the Gaussian case, gradual deformation of a sequential simulation preserves the spatial variability of the stochastic model and generally yields a regular objective function that can be minimized by an efficient optimization algorithm (e.g., a gradient-based algorithm). Furthermore, we discuss local gradual deformation and gradual deformation with respect to the structural parameters (mean, variance, variogram range, etc.) of realizations generated by sequential simulation. Local gradual deformation may significantly improve calibration speed when observations are scattered in different zones of a field. Gradual deformation with respect to structural parameters is necessary when these parameters cannot be inferred a priori and need to be determined using an inverse procedure. A synthetic example inspired by a real oil field is presented to illustrate different aspects of this approach. Results from this case study demonstrate the efficiency of the gradual deformation approach for constraining facies models generated by sequential indicator simulation. They also show the potential applicability of the proposed approach to complex real cases.
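A minimal sketch of the underlying principle (details of the paper's algorithm may differ): the uniform numbers that drive a sequential simulation are mapped to Gaussian scores, combined as in the Gaussian case, and mapped back, so the output remains U(0,1) for every deformation parameter t.

```python
import numpy as np
from scipy.stats import norm

def deform_uniforms(u1, u2, t):
    """Gradually deform the uniform numbers driving a sequential simulation.
    For any t the result is again U(0,1), so the simulated field keeps its
    spatial statistics while varying continuously with t."""
    u1 = np.clip(u1, 1e-12, 1 - 1e-12)   # guard against ppf(0) = -inf
    u2 = np.clip(u2, 1e-12, 1 - 1e-12)
    z = norm.ppf(u1) * np.cos(t) + norm.ppf(u2) * np.sin(t)
    return norm.cdf(z)

rng = np.random.default_rng(3)
u1, u2 = rng.random(1000), rng.random(1000)
for t in np.linspace(0.0, np.pi / 2, 5):
    u_t = deform_uniforms(u1, u2, t)   # feed into the sequential simulator
```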

7.
Reservoir characterization requires the integration of various data through history matching, especially dynamic information such as production or 4D seismic data. Although reservoir heterogeneities are commonly generated using geostatistical models, random realizations generally cannot match observed dynamic data. To constrain model realizations to reproduce measured dynamic data, an optimization procedure may be applied to minimize an objective function that quantifies the mismatch between real and simulated data. Such assisted history matching methods require a parameterization of the geostatistical model to allow the updating of an initial model realization. However, only a few parameterization methods are available for updating geostatistical models in a way consistent with the underlying geostatistical properties. This paper presents a local domain parameterization technique that updates geostatistical realizations using assisted history matching. This technique allows us to locally change model realizations through the variation of geometrical domains whose geometry and size can be easily controlled and parameterized. This approach provides a new way to parameterize geostatistical realizations in order to improve history matching efficiency.

8.
Conditioning realizations of stationary Gaussian random fields to a set of data is traditionally based on simple kriging. In practice, this approach may be demanding as it does not account for the uncertainty in the spatial average of the random field. In this paper, an alternative model is presented, in which the Gaussian field is decomposed into a random mean, constant over space but variable over the realizations, and an independent residual. It is shown that, when the prior variance of the random mean is infinitely large (reflecting prior ignorance on the actual spatial average), the realizations of the Gaussian random field are made conditional by substituting ordinary kriging for simple kriging. The proposed approach can be extended to models with random drifts that are polynomials in the spatial coordinates, by using universal or intrinsic kriging for conditioning the realizations, and also to multivariate situations by using cokriging instead of kriging.
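The sketch below illustrates conditioning by kriging in one dimension with ordinary kriging supplying the weights (so no known mean is assumed, matching the infinite-prior-variance limit described above): the kriged residual between the data and the unconditional realization is added back. The covariance model and all names are illustrative assumptions, and the unconditional field is a white-noise stand-in.

```python
import numpy as np

def ok_weights(C_dd, c_d0):
    """Ordinary kriging weights: the simple-kriging system augmented with
    the unbiasedness constraint sum(w) = 1 via a Lagrange multiplier."""
    n = len(c_d0)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = C_dd
    A[n, n] = 0.0
    return np.linalg.solve(A, np.append(c_d0, 1.0))[:n]

def condition_by_kriging(realization, x_grid, x_data, d_obs, cov):
    """Add the ordinary-kriged residual between the data and the
    (unconditional) realization at the data locations."""
    C_dd = cov(x_data[:, None], x_data[None, :])
    resid = d_obs - np.interp(x_data, x_grid, realization)
    out = realization.copy()
    for i, x in enumerate(x_grid):
        out[i] += ok_weights(C_dd, cov(x_data, x)) @ resid
    return out

cov = lambda a, b: np.exp(-np.abs(a - b) / 15.0)   # illustrative model
rng = np.random.default_rng(4)
x_grid = np.linspace(0.0, 100.0, 201)
x_data = np.array([10.0, 45.0, 80.0])
d_obs = np.array([1.2, -0.4, 0.7])
uncond = rng.standard_normal(201)                  # stand-in realization
cond = condition_by_kriging(uncond, x_grid, x_data, d_obs, cov)
```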

9.
Uncertainty quantification is typically accomplished by simulating multiple geological realizations, which can be very expensive computationally if the flow process is complicated and the models are highly resolved. Upscaling procedures can be applied to reduce computational demands, though it is essential that the resulting coarse-model predictions correspond to reference fine-scale solutions. In this work, we develop an ensemble level upscaling (EnLU) procedure for compositional systems, which enables the efficient generation of multiple coarse models for use in uncertainty quantification. We apply a newly developed global compositional upscaling method to provide coarse-scale parameters and functions for selected realizations. This global upscaling entails transmissibility and relative permeability upscaling, along with the computation of a-factors to capture component fluxes. Additional features include near-well upscaling for all coarse parameters and functions, and iteration on the a-factors, which is shown to improve accuracy. In the EnLU framework, this global upscaling is applied for only a few selected realizations. For 90% or more of the realizations, upscaled functions are assigned statistically based on quickly computed flow and permeability attributes. A sequential Gaussian co-simulation procedure is incorporated to provide coarse models that honor the spatial correlation structure of the upscaled properties. The resulting EnLU procedure is applied to multiple realizations of two-dimensional models, for both Gaussian and channelized permeability fields. Results demonstrate that EnLU provides P10, P50, and P90 results for phase and component production rates that are in close agreement with reference fine-scale results. Less accuracy is observed in realization-by-realization comparisons, though the models are still much more accurate than those generated using standard coarsening procedures.

10.
Spatially distributed and varying natural phenomena encountered in geoscience and engineering problem solving are typically incompatible with Gaussian models, exhibiting nonlinear spatial patterns and complex, multiple-point connectivity of extreme values. Stochastic simulation of such phenomena is historically founded on second-order spatial statistical approaches, which are limited in their capacity to model complex spatial uncertainty. The newer multiple-point (MP) simulation framework addresses past limits by establishing the concept of a training image, and, arguably, has its own drawbacks. An alternative to current MP approaches is founded upon new high-order measures of spatial complexity, termed “high-order spatial cumulants.” These are combinations of moments of statistical parameters that characterize non-Gaussian random fields and can describe complex spatial information. Stochastic simulation of complex spatial processes is developed based on high-order spatial cumulants in the high-dimensional space of Legendre polynomials. Starting with discrete Legendre polynomials, a set of discrete orthogonal cumulants is introduced as a tool to characterize spatial shapes. Weighted orthonormal Legendre polynomials define the so-called Legendre cumulants, which are high-order conditional spatial cumulants inferred from training images and combined with available sparse data sets. Advantages of the high-order sequential simulation approach developed herein include the absence of any distribution-related assumptions and of pre- or post-processing steps. The method is shown to generate realizations of complex spatial patterns and to reproduce bimodal data distributions, data variograms, and high-order spatial cumulants of the data. In addition, it is shown that the available hard data dominate the simulation process and have a definitive effect on the simulated realizations, whereas the training images are only used to fill in high-order relations that cannot be inferred from the data. Compared to the MP framework, the proposed approach is data-driven and consistently reconstructs the lower-order spatial complexity in the data, in addition to the high-order complexity.
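As a small concrete example of the statistics involved (the Legendre-polynomial machinery of the method itself is not reproduced here), the sketch below estimates an experimental third-order spatial cumulant of a zero-mean field over a three-point template; for a zero-mean field the third cumulant equals the third moment.

```python
import numpy as np

def third_order_cumulant(img, h1, h2):
    """Estimate E[Z(x) Z(x+h1) Z(x+h2)] over all valid positions of a
    three-point template on a 2-D grid; h1, h2 are (row, col) lags."""
    z = img - img.mean()
    rows, cols = z.shape
    r, c = max(0, h1[0], h2[0]), max(0, h1[1], h2[1])
    r0, c0 = -min(0, h1[0], h2[0]), -min(0, h1[1], h2[1])
    base = z[r0:rows - r, c0:cols - c]
    a = z[r0 + h1[0]:rows - r + h1[0], c0 + h1[1]:cols - c + h1[1]]
    b = z[r0 + h2[0]:rows - r + h2[0], c0 + h2[1]:cols - c + h2[1]]
    return float((base * a * b).mean())

rng = np.random.default_rng(5)
ti = (rng.random((200, 200)) < 0.3).astype(float)   # stand-in training image
c3 = third_order_cumulant(ti, h1=(0, 5), h2=(5, 0))
```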

11.
The spatial continuity of facies is one of the key factors controlling flow in reservoir models. Traditional pixel-based methods such as truncated Gaussian random fields and indicator simulation are based on only two-point statistics, which is insufficient to capture complex facies structures. Current methods for multi-point statistics either lack a consistent statistical model specification or are too computationally intensive to be applicable. We propose a Markov mesh model based on generalized linear models for geological facies modeling. The approach defines a consistent statistical model that is facilitated by efficient estimation of model parameters and generation of realizations. Our presentation includes a formulation of the general framework, model specifications in two and three dimensions, and details on how the parameters can be estimated from a training image. We illustrate the method using multiple training images, including binary and trinary images, and simulations in two and three dimensions. We also carry out a thorough comparison with the snesim approach. We find that the current model formulation is applicable for multiple training images and compares favorably with the snesim approach in our test examples. The method is highly memory efficient.
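A toy version of the idea, under the simplifying assumption that the GLM is a plain logistic regression on a small causal template (the paper's model specification is richer): fit the regression on a training image, then simulate cells in scan order by sampling from the predicted probabilities. The training image here is random noise, purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def causal_dataset(ti, nbrs):
    """(past-neighborhood, value) pairs from a binary training image,
    scanning row by row; nbrs are offsets into already-visited cells."""
    X, y = [], []
    for i in range(1, ti.shape[0]):
        for j in range(1, ti.shape[1] - 1):
            X.append([ti[i + dr, j + dc] for dr, dc in nbrs])
            y.append(ti[i, j])
    return np.array(X), np.array(y)

nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]        # causal template
rng = np.random.default_rng(6)
ti = (rng.random((128, 128)) < 0.3).astype(int)     # stand-in training image
glm = LogisticRegression().fit(*causal_dataset(ti, nbrs))

# Sequential simulation in scan order (borders copied from the image).
sim = np.zeros_like(ti)
sim[0, :], sim[:, 0], sim[:, -1] = ti[0, :], ti[:, 0], ti[:, -1]
for i in range(1, 128):
    for j in range(1, 127):
        x = [[sim[i + dr, j + dc] for dr, dc in nbrs]]
        sim[i, j] = int(rng.random() < glm.predict_proba(x)[0, 1])
```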

12.
Conditioning Surface-Based Geological Models to Well and Thickness Data
Geostatistical simulation methods aim to represent spatial uncertainty through realizations that reflect a certain geological concept by means of a spatial continuity model. The most common spatial continuity models are variogram-, training image-, or Boolean-based. In this paper, a more recent spatial model of geological continuity is developed, termed the event-based, or surface-based, model, which is specifically applicable to modeling cases with complex stratigraphy, such as sedimentary systems. These methods rely on a rule-based stacking of events, which are mathematically represented by two-dimensional thickness variations over the domain, where positive thickness is associated with deposition and negative thickness with erosion. Although it has been demonstrated that surface-based models accurately represent the geological variation present in complex layered systems, they are more difficult to constrain to hard and soft data, as is typically required of practical geostatistical techniques. In this paper, we develop a practical methodology for constraining such models to hard data from wells and thickness data interpreted from geophysics, such as seismic data. Our iterative methodology relies on a decomposition of the parameter optimization problem into smaller, manageable problems that are solved sequentially. We demonstrate this method on a real case study of a turbidite sedimentary basin.
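A minimal sketch of rule-based event stacking as described above: each event contributes a 2-D thickness map, positive values aggrade the current surface, negative values erode it, and eroded horizons are truncated. The basement rule and the random event generator are illustrative assumptions, not the paper's rules.

```python
import numpy as np

def stack_events(base, thickness_maps):
    """Stack depositional/erosional events and return all horizons."""
    surface = base.copy()
    horizons = [surface.copy()]
    for t in thickness_maps:
        surface = np.maximum(surface + t, base)  # no erosion below basement
        horizons = [np.minimum(h, surface) for h in horizons]  # truncation
        horizons.append(surface.copy())
    return horizons

rng = np.random.default_rng(7)
base = np.zeros((64, 64))
events = [0.05 * rng.standard_normal((64, 64)).cumsum(axis=1)
          for _ in range(6)]                     # stand-in thickness maps
tops = stack_events(base, events)                # horizons, oldest first
```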

13.

One main problem in the modeling of mineral deposits is to design a block model that divides the deposit into homogeneous subdomains. The spatial uncertainty in the geological boundaries becomes a critical factor prior to the modeling of the ore properties. For this reason, reducing the uncertainty of geological models leads to an improved mineral resource evaluation. This research work addresses the problem of updating geological models by using actual online-sensor measurement data. A novel algorithm is provided, which integrates the discrete wavelet transform into the ensemble Kalman filter for assimilating online-sensor production data into geological models. The geological realizations in each time step are transformed into wavelet coefficients and, after each assimilation step, the updated realizations are back-transformed to the original categorical distribution. Furthermore, a reconciliation process is performed to compare the online-sensor data derived from the production blocks with the updated realizations in each time step. The algorithm is illustrated through an application to the Golgohar iron deposit located southwest of Sirjan, Iran, and is shown to reproduce the statistical parameters and connectivity values of the primary geological realizations.
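The sketch below combines a one-level Haar transform (PyWavelets) with a stochastic ensemble Kalman filter analysis step: realizations are updated in the coefficient domain and back-transformed. The back-mapping to a categorical distribution and the reconciliation step described above are omitted, and the observation operator is a stand-in.

```python
import numpy as np
import pywt

def enkf_update(E, D, d_obs, sd_obs, rng):
    """Stochastic EnKF analysis: E (n_state, n_ens) states,
    D (n_obs, n_ens) predicted data, diagonal obs-error sd_obs."""
    n_obs, n_ens = D.shape
    A = E - E.mean(axis=1, keepdims=True)
    Y = D - D.mean(axis=1, keepdims=True)
    C_dd = Y @ Y.T / (n_ens - 1) + sd_obs**2 * np.eye(n_obs)
    K = (A @ Y.T / (n_ens - 1)) @ np.linalg.inv(C_dd)
    pert = d_obs[:, None] + rng.normal(0.0, sd_obs, (n_obs, n_ens))
    return E + K @ (pert - D)

rng = np.random.default_rng(8)
n_cells, n_ens = 256, 50
models = rng.standard_normal((n_ens, n_cells))     # stand-in realizations

# Assimilate in the wavelet domain: transform, update, back-transform.
E = np.array([np.concatenate(pywt.dwt(m, "haar")) for m in models]).T
D = E[:8, :]                                       # stand-in "sensor" data
d_obs = rng.standard_normal(8)
E_new = enkf_update(E, D, d_obs, sd_obs=0.1, rng=rng)
half = n_cells // 2
updated = [pywt.idwt(c[:half], c[half:], "haar") for c in E_new.T]
```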


14.
Parametric geostatistical simulations such as LU decomposition and sequential algorithms do not require Gaussian distributions. It is shown that variogram model reproduction is obtained when Uniform or Dipole distributions are used instead of Gaussian distributions for drawing i.i.d. random values in LU simulation, or for modeling the local conditional probability distributions in sequential simulation. Both algorithms yield simulated values with a marginal normal distribution regardless of whether Gaussian, Uniform, or Dipole distributions are used. The range of simulated values decreases as the entropy of the probability distribution decreases. Using Gaussian distributions provides a larger range of simulated normal-score values than using Uniform or Dipole distributions. This feature has a negligible effect on the reproduction of the normal-score variogram model but a larger impact on the reproduction of the original-value variogram. The Uniform and Dipole distributions also produce smaller fluctuations among the variograms of the simulated realizations.
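A compact sketch of the point being made: in LU (Cholesky) simulation, y = Lz reproduces the target covariance for any i.i.d. driving values z with zero mean and unit variance, so Uniform or two-point ("Dipole", ±1) draws are admissible alternatives to Gaussian ones. The covariance model below is illustrative.

```python
import numpy as np

def lu_simulate(C, draw, rng, n_real):
    """LU (Cholesky) simulation: y = L z has covariance C = L L^T for any
    i.i.d. z with zero mean and unit variance, Gaussian or not."""
    L = np.linalg.cholesky(C)
    return np.array([L @ draw(rng, C.shape[0]) for _ in range(n_real)])

# Exponential covariance on a 1-D grid of 200 points.
x = np.arange(200)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 20.0)

gauss   = lambda rng, n: rng.standard_normal(n)
uniform = lambda rng, n: rng.uniform(-np.sqrt(3), np.sqrt(3), n)  # unit var
dipole  = lambda rng, n: rng.choice([-1.0, 1.0], n)               # unit var

rng = np.random.default_rng(9)
sims = {name: lu_simulate(C, d, rng, 100)
        for name, d in [("gauss", gauss), ("uniform", uniform),
                        ("dipole", dipole)]}
```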

15.
Uncertainty quantification for subsurface flow problems is typically accomplished through model-based inversion procedures in which multiple posterior (history-matched) geological models are generated and used for flow predictions. These procedures can be demanding computationally, however, and it is not always straightforward to maintain geological realism in the resulting history-matched models. In some applications, it is the flow predictions themselves (and the uncertainty associated with these predictions), rather than the posterior geological models, that are of primary interest. This is the motivation for the data-space inversion (DSI) procedure developed in this paper. In the DSI approach, an ensemble of prior model realizations, honoring prior geostatistical information and hard data at wells, is generated and then (flow) simulated. The resulting production data are assembled into data vectors that represent prior ‘realizations’ in the data space. Pattern-based mapping operations and principal component analysis are applied to transform non-Gaussian data variables into lower-dimensional variables that are closer to multivariate Gaussian. The data-space inversion is posed within a Bayesian framework, and a data-space randomized maximum likelihood method is introduced to sample the conditional distribution of data variables given observed data. Extensive numerical results are presented for two example cases involving oil–water flow in a bimodal channelized system and oil–water–gas flow in a Gaussian permeability system. For both cases, DSI results for uncertainty quantification (e.g., P10, P50, P90 posterior predictions) are compared with those obtained from a strict rejection sampling (RS) procedure. Close agreement between the DSI and RS results is consistently achieved, even when the (synthetic) true data to be matched fall near the edge of the prior distribution. Computational savings using DSI are very substantial in that RS requires \(O(10^5{-}10^6)\) flow simulations, in contrast to 500 for DSI, for the cases considered.
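The sketch below conveys the data-space flavor of DSI under a crude linear-Gaussian simplification; the paper's pattern-based mapping, PCA transform, and data-space RML are replaced here by an ensemble-smoother-style update of the prior data realizations toward the observed part of the data vector. Posterior percentiles are then read off directly, with no posterior geomodels.

```python
import numpy as np

def data_space_update(D_prior, obs_idx, d_obs, sd_obs, rng):
    """Update prior data realizations (n_data, n_ens) toward observations
    of the components in obs_idx, under a linear-Gaussian approximation."""
    n_ens = D_prior.shape[1]
    A = D_prior - D_prior.mean(axis=1, keepdims=True)
    C = A @ A.T / (n_ens - 1)                    # prior data covariance
    S = C[np.ix_(obs_idx, obs_idx)] + sd_obs**2 * np.eye(len(obs_idx))
    K = C[:, obs_idx] @ np.linalg.inv(S)
    pert = d_obs[:, None] + rng.normal(0.0, sd_obs, (len(obs_idx), n_ens))
    return D_prior + K @ (pert - D_prior[obs_idx, :])

rng = np.random.default_rng(10)
D_prior = rng.standard_normal((120, 500))   # 500 prior flow simulations
obs_idx = np.arange(30)                     # historical part of the series
d_obs = rng.standard_normal(30)
D_post = data_space_update(D_prior, obs_idx, d_obs, sd_obs=0.2, rng=rng)
p10, p50, p90 = np.percentile(D_post, [10, 50, 90], axis=1)
```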

16.
Gradual deformation is a parameterization method that considerably reduces the size of the unknown parameter space of stochastic models. This method can be used in an iterative optimization procedure for constraining stochastic simulations to data that are complex, nonanalytical functions of the simulated variables. It is based on the fact that linear combinations of multi-Gaussian random functions remain multi-Gaussian random functions. During the past few years, we developed the gradual deformation method by combining independent realizations. This paper investigates another alternative: the combination of dependent realizations. One of our motivations for combining dependent realizations was to improve the numerical stability of the gradual deformation method. Because of limitations both in the size of simulation grids and in the precision of simulation algorithms, numerical realizations of a stochastic model are never perfectly independent. It was shown that the accumulation of very small dependences between realizations might result in significant structural drift from the initial stochastic model. From the combination of random functions whose covariance and cross-covariance are proportional to each other, we derived a new formulation of the gradual deformation method that explicitly takes into account the numerical dependence between realizations. This new formulation allows us to reduce the structural deterioration during the iterative optimization. The problem of combining dependent realizations also arises when deforming conditional realizations of a stochastic model. As opposed to the combination of independent realizations, combining conditional realizations avoids the additional conditioning step during the optimization process. However, this procedure is limited to global deformations with fixed structural parameters.

17.
Ensemble-based methods are becoming popular assisted history matching techniques with a growing number of field applications. These methods use an ensemble of model realizations, typically constructed by means of geostatistics, to represent the prior uncertainty. The performance of the history matching is very dependent on the quality of the initial ensemble. However, there is a significant level of uncertainty in the parameters used to define the geostatistical model. From a Bayesian viewpoint, the uncertainty in the geostatistical modeling can be represented by a hyper-prior in a hierarchical formulation. This paper presents the first steps towards a general parameterization to address the problem of uncertainty in the prior modeling. The proposed parameterization is inspired by Gaussian mixtures, where the uncertainty in the prior mean and prior covariance is accounted for by defining weights for combining multiple Gaussian ensembles, which are estimated during the data assimilation. The parameterization was successfully tested in a simple reservoir problem where the orientation of the major anisotropy direction of the permeability field was unknown.

18.
This paper describes a new method for gradually deforming realizations of Gaussian-related stochastic models while preserving their spatial variability. The method consists of building a stochastic process whose state space is the ensemble of realizations of a spatial stochastic model. In particular, a stochastic process, built by combining independent Gaussian random functions, is proposed to perform the gradual deformation of realizations. The gradual deformation algorithm is then coupled with an optimization algorithm to calibrate realizations of stochastic models to nonlinear data. The method is applied to calibrate a continuous and a discrete synthetic permeability field to well-test pressure data. The examples illustrate the efficiency of the proposed method. Furthermore, we present some extensions of this method (multidimensional gradual deformation, gradual deformation with respect to structural parameters, and local gradual deformation) that are useful in practice. Although the method described in this paper is operational only in the Gaussian framework (e.g., lognormal and truncated Gaussian models), the idea of gradually deforming realizations through a stochastic process remains general and therefore promising even for calibrating non-Gaussian models.
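A minimal sketch of the basic construction: combining two independent standard Gaussian realizations with cosine/sine weights gives, for every value of the deformation parameter t, a realization with the same spatial law, so calibration reduces to repeated one-dimensional searches (a chain restarted from each optimum with a fresh realization). The mismatch function is a stand-in.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def gradual_deformation(y1, y2, t):
    """cos^2 + sin^2 = 1, so the combination keeps the Gaussian spatial
    law of y1 and y2 for every t: one continuous matching parameter."""
    return y1 * np.cos(np.pi * t) + y2 * np.sin(np.pi * t)

def mismatch(field):
    return float(np.sum((field[:5] - 1.0) ** 2))   # stand-in objective

rng = np.random.default_rng(11)
y1 = rng.standard_normal(4096)
for _ in range(3):                                 # a short deformation chain
    y2 = rng.standard_normal(4096)                 # fresh independent draw
    res = minimize_scalar(lambda t: mismatch(gradual_deformation(y1, y2, t)),
                          bounds=(-1.0, 1.0), method="bounded")
    y1 = gradual_deformation(y1, y2, res.x)        # restart from the optimum
```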

19.
Optimization with the Gradual Deformation Method
Building reservoir models consistent with production data and prior geological knowledge is usually carried out through the minimization of an objective function. Such optimization problems are nonlinear and may be difficult to solve because they tend to be ill-posed and to involve many parameters. The gradual deformation technique was introduced recently to simplify these problems. Its main feature is the preservation of the spatial structure: perturbed realizations exhibit the same spatial variability as the starting ones. It is shown that optimizations based on gradual deformation converge exponentially to the global minimum, at least for linear problems. In addition, it appears that combining the gradual deformation parameterization with optimization may progressively remove the structure-preservation capability of the gradual deformation method. This bias is negligible when deformation is restricted to a few realization chains, but grows as the number of chains tends to infinity. Since, in practice, optimization of reservoir models is limited to a small number of iterations relative to the number of gridblocks, the spatial variability is preserved. Last, the optimization processes are implemented on the basis of the Levenberg–Marquardt method. Although the objective functions, written in terms of Gaussian white noises, are reduced to the data mismatch term, the conditional realization space can be properly sampled.

20.
Reservoir characterization requires the integration of various data through history matching, especially dynamic information such as production or four-dimensional seismic data. To update geostatistical realizations, the local gradual deformation method can be used. However, history matching is a complex inverse problem, and the computational effort, in terms of the number of reservoir simulations required in the optimization procedure, increases with the number of matching parameters. History matching large fields with a large number of parameters has been an ongoing challenge in reservoir simulation. This paper presents a new technique to improve history matching with the local gradual deformation method using gradient-based optimization. The new approach is based on approximate derivative calculations that exploit the partial separability of the objective function. The objective function is first split into local components, and only the most influential parameters in each component are used for the derivative computation. A perturbation design is then proposed to compute all the derivatives simultaneously with only a few simulations. This new technique makes history matching with the local gradual deformation method tractable for large numbers of parameters.
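A toy illustration of the perturbation-design idea, assuming the component-to-parameter influence map is known: parameters that share no local component of the objective can be perturbed in the same simulation, so a handful of runs yields all component-wise derivatives. The greedy grouping below is an illustrative scheme, not the paper's design.

```python
def perturbation_groups(influence):
    """Group parameters so that no two parameters in a group influence the
    same local component; one perturbed simulation per group then gives
    one-sided differences dJ_k/dp for every parameter in the group.

    influence: list over components k of the set of influential parameters.
    """
    groups = []   # list of (parameter list, union of touched components)
    for p in sorted({p for s in influence for p in s}):
        comps_p = {k for k, s in enumerate(influence) if p in s}
        for params, comps in groups:
            if comps_p.isdisjoint(comps):
                params.append(p)
                comps |= comps_p
                break
        else:
            groups.append(([p], set(comps_p)))
    return [params for params, _ in groups]

# 6 local components (e.g., zones around wells), 6 parameters.
influence = [{0, 1}, {1, 2}, {3}, {3, 4}, {4, 5}, {0, 5}]
print(perturbation_groups(influence))   # [[0, 2, 3], [1, 4], [5]]
# 3 perturbed simulations instead of 6 recover all local derivatives.
```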
