Similar Documents
20 similar documents found (search time: 31 ms)
1.
Reservoir characterization needs the integration of various data through history matching, especially dynamic information such as production or four-dimensional seismic data. To update geostatistical realizations, the local gradual deformation method can be used. However, history matching is a complex inverse problem, and the computational effort, in terms of the number of reservoir simulations required in the optimization procedure, increases with the number of matching parameters. History matching large fields with a large number of parameters has been an ongoing challenge in reservoir simulation. This paper presents a new technique to improve history matching with the local gradual deformation method using gradient-based optimization. The new approach is based on approximate derivative calculations that exploit the partial separability of the objective function. The objective function is first split into local components, and only the most influential parameters in each component are used for the derivative computation. A perturbation design is then proposed to simultaneously compute all the derivatives with only a few simulations. This new technique makes history matching using the local gradual deformation method with large numbers of parameters tractable.
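The saving from partial separability can be sketched as follows: if the objective splits into local components that each depend on a small, non-overlapping parameter subset, then one perturbed simulation can carry simultaneous perturbations of parameters belonging to different components. The snippet below is a minimal Python illustration of this idea under that disjointness assumption; the grouping and the toy mismatch function are hypothetical, not the authors' implementation.

```python
import numpy as np

# Hypothetical separable objective: J(m) = sum_c J_c(m[S_c]),
# with disjoint index sets S_c for simplicity.
groups = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5])]

def local_component(c, m_sub):
    # Stand-in for a local mismatch term computed from one flow simulation.
    return np.sum((m_sub - c) ** 2)

def separable_gradient(m, eps=1e-6):
    """Finite-difference gradient using one 'simulation' per intra-group
    index: parameters in different components are perturbed simultaneously
    because their components do not interact."""
    grad = np.zeros_like(m)
    base = [local_component(c, m[s]) for c, s in enumerate(groups)]
    n_runs = max(len(s) for s in groups)
    for k in range(n_runs):                  # few runs, not one per parameter
        m_pert = m.copy()
        for s in groups:
            if k < len(s):
                m_pert[s[k]] += eps          # simultaneous perturbation
        for c, s in enumerate(groups):
            if k < len(s):
                grad[s[k]] = (local_component(c, m_pert[s]) - base[c]) / eps
    return grad

m = np.random.rand(6)
print(separable_gradient(m))                 # 6 derivatives from 2 perturbed runs
```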

2.
A new approach based on principal component analysis (PCA) for the representation of complex geological models in terms of a small number of parameters is presented. The basis matrix required by the method is constructed from a set of prior geological realizations generated using a geostatistical algorithm. Unlike standard PCA-based methods, in which the high-dimensional model is constructed from a (small) set of parameters by simply performing a multiplication using the basis matrix, in this method the mapping is formulated as an optimization problem. This enables the inclusion of bound constraints and regularization, which are shown to be useful for capturing highly connected geological features and binary/bimodal (rather than Gaussian) property distributions. The approach, referred to as optimization-based PCA (O-PCA), is applied here mainly to binary-facies systems, in which case the requisite optimization problem is separable and convex. The solution of the optimization problem, as well as the derivative of the model with respect to the parameters, is obtained analytically. It is shown that the O-PCA mapping can also be viewed as a post-processing of the standard PCA model. The O-PCA procedure is applied both to generate new (random) realizations and for gradient-based history matching. For the latter, two- and three-dimensional systems, involving channelized and deltaic-fan geological models, are considered. The O-PCA method is shown to perform very well for these history matching problems, and to provide models that capture the key sand–sand and sand–shale connectivities evident in the true model. Finally, the approach is extended to generate bimodal systems in which the properties of both facies are characterized by Gaussian distributions. MATLAB code with the O-PCA implementation, and examples demonstrating its use, are provided online as Supplementary Materials.
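For reference, the standard PCA mapping that O-PCA post-processes can be written in a few lines. The Python sketch below builds a basis from prior realizations and, as a stand-in for the full O-PCA optimization, applies a simple clip-and-threshold post-processing for a binary-facies field; the dimensions, variable names, and thresholding rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Prior realizations as columns of X: N_c grid cells x N_r realizations
N_c, N_r, ell = 2500, 200, 30                        # ell = retained dimension
X = (np.random.rand(N_c, N_r) > 0.7).astype(float)   # toy binary prior models

m_bar = X.mean(axis=1, keepdims=True)
Y = (X - m_bar) / np.sqrt(N_r - 1)
U, S, _ = np.linalg.svd(Y, full_matrices=False)
Phi = U[:, :ell] * S[:ell]                           # energy-scaled basis

def pca_model(xi):
    """Standard PCA mapping: low-dimensional xi -> grid-sized model."""
    return (m_bar + Phi @ xi.reshape(-1, 1)).ravel()

def opca_like_model(xi, lo=0.0, hi=1.0):
    """Stand-in for O-PCA: the separable bound-constrained optimization acts
    like a post-processing that pushes values toward the facies bounds; here
    it is approximated by clipping plus a hard threshold (an assumption)."""
    m = np.clip(pca_model(xi), lo, hi)
    return (m > 0.5 * (lo + hi)).astype(float)

xi = np.random.randn(ell)                            # new (random) realization
print(opca_like_model(xi)[:10])
```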

3.
赵小娟  周德亮 《地下水》2019,(1):28-29,50
The radial basis function collocation method used in this paper is based on space-time collocation and is applied to a class of parabolic equations. Unlike implicit and explicit schemes that approximate the time derivative, and unlike other numerical methods, it requires no analysis of the time stability of the discretized system. The space-time radial basis function collocation method is used to solve two-dimensional unsteady groundwater flow problems. Computational results for a case with mixed boundary conditions and a case with only first-type boundary conditions show that the method solves this problem with high accuracy and efficiency, and the results are satisfactory.
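A minimal sketch of space-time RBF collocation, applied here to a 1D diffusion (parabolic) equation for brevity rather than the paper's 2D groundwater problem: time is treated as an extra coordinate, and PDE, boundary, and initial conditions are all enforced at collocation points of one linear system, with no time stepping. The multiquadric kernel and shape parameter are common choices assumed here, not taken from the paper.

```python
import numpy as np

# Solve u_t = u_xx on (0,1)x(0,1] with u=0 at x=0,1 and u(x,0)=sin(pi x).
c = 0.3                                        # shape parameter (assumed)
phi = lambda r: np.sqrt(r**2 + c**2)           # multiquadric RBF

def L_phi(dx, dt, r):
    # (d/dt - d2/dx2) applied to the multiquadric, q = sqrt(r^2 + c^2)
    q = np.sqrt(r**2 + c**2)
    return dt / q - (1.0 / q - dx**2 / q**3)

nx, nt = 12, 12
xs, ts = np.linspace(0, 1, nx), np.linspace(0, 1, nt)
P = np.array([(x, t) for t in ts for x in xs])  # space-time collocation points
A, b = np.zeros((len(P), len(P))), np.zeros(len(P))
for i, (x, t) in enumerate(P):
    dx, dt = x - P[:, 0], t - P[:, 1]
    r = np.hypot(dx, dt)
    if t == 0.0:                               # initial condition row
        A[i], b[i] = phi(r), np.sin(np.pi * x)
    elif x in (0.0, 1.0):                      # boundary condition row
        A[i], b[i] = phi(r), 0.0
    else:                                      # PDE residual at interior point
        A[i], b[i] = L_phi(dx, dt, r), 0.0
coef = np.linalg.solve(A, b)

def u(x, t):                                   # collocation solution
    return phi(np.hypot(x - P[:, 0], t - P[:, 1])) @ coef

# Compare with the exact solution exp(-pi^2 t) sin(pi x) at one point
print(u(0.5, 0.5), np.exp(-np.pi**2 * 0.5) * np.sin(np.pi * 0.5))
```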

4.
5.
A fuzzy matching algorithm for area entities based on component-associated region similarity
叶亚琴  万波  陈波 《地球科学》2010,35(3):385-390
Spatial object matching is the first, and a key, step in the incremental updating of spatial databases. This work studies an algorithm for acquiring change information based on spatial object matching. Considering the uncertainty inherent in spatial data, fuzzy theory is introduced into the spatial object matching algorithm. The focus is on how to solve the spatial object matching problem with fuzzy methods; the matching process is illustrated using area entities as an example, and a fuzzy matching algorithm for area entities based on component-associated region similarity is proposed. The algorithm uses metric factors of the component-associated regions to determine the membership matrix of fuzzy topological relations, quantifies that membership matrix, and finally determines the classification of the fuzzy topological relations. Map-sheet indexing and component-association factors are used for optimization, which reduces computational complexity and improves the efficiency of the algorithm.

6.
Reservoir characterization needs the integration of various data through history matching, especially dynamic information such as production or 4D seismic data. Although reservoir heterogeneities are commonly generated using geostatistical models, random realizations cannot generally match observed dynamic data. To constrain model realizations to reproduce measured dynamic data, an optimization procedure may be applied in an attempt to minimize an objective function, which quantifies the mismatch between real and simulated data. Such assisted history matching methods require a parameterization of the geostatistical model to allow the updating of an initial model realization. However, there are only a few parameterization methods available to update geostatistical models in a way consistent with the underlying geostatistical properties. This paper presents a local domain parameterization technique that updates geostatistical realizations using assisted history matching. This technique allows us to locally change model realizations through the variation of geometrical domains whose geometry and size can be easily controlled and parameterized. This approach provides a new way to parameterize geostatistical realizations in order to improve history matching efficiency.
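The gradual deformation idea behind this family of parameterizations can be illustrated compactly: two independent Gaussian realizations are combined through a single parameter so that the result remains a realization of the same model, and a local variant restricts the deformation to a geometrical domain. A minimal numpy sketch; the smooth domain weighting is an illustrative choice, not the paper's specific scheme.

```python
import numpy as np

def gradual_deformation(z1, z2, t):
    """Combine two independent standard-Gaussian realizations; for any t the
    result is again standard Gaussian, since cos^2 + sin^2 = 1."""
    return z1 * np.cos(np.pi * t) + z2 * np.sin(np.pi * t)

def local_gradual_deformation(z1, z2, t, center, radius, shape):
    """Deform only inside a domain; w blends smoothly to zero at the domain
    edge so the realization stays continuous (illustrative weighting)."""
    iy, ix = np.indices(shape)
    d = np.hypot(iy - center[0], ix - center[1])
    w = np.clip(1.0 - d / radius, 0.0, 1.0)       # 1 at center, 0 outside
    return z1 * np.cos(np.pi * t * w) + z2 * np.sin(np.pi * t * w)

shape = (50, 50)
z1, z2 = np.random.randn(*shape), np.random.randn(*shape)
z_new = local_gradual_deformation(z1, z2, t=0.4, center=(25, 25), radius=10,
                                  shape=shape)
print(z_new.std())   # stays close to 1: marginal Gaussian statistics preserved
```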

7.
We present a method to determine lower and upper bounds on the predicted production, or any other economic objective, from history-matched reservoir models. The method consists of two steps: 1) performing a traditional computer-assisted history match of a reservoir model, with the objective of minimizing the mismatch between predicted and observed production data by adjusting the grid block permeability values of the model; 2) performing two optimization exercises to minimize and maximize an economic objective over the remaining field life, for a fixed production strategy, by manipulating the same grid block permeabilities, but without significantly changing the mismatch obtained under step 1. This is accomplished through a hierarchical optimization procedure that limits the solution space of a secondary optimization problem to the (approximate) null space of the primary optimization problem. We applied this procedure to two different reservoir models. We performed a history match based on synthetic data, starting from a uniform prior and using a gradient-based minimization procedure. After history matching, minimization and maximization of the net present value (NPV), using a fixed control strategy, were executed as secondary optimization problems by changing the model parameters while staying close to the null space of the primary optimization problem. In other words, we optimized the secondary objective functions while requiring that optimality of the primary objective (a good history match) was preserved. This method therefore provides a way to quantify the economic consequences of the well-known problem that history matching is a strongly ill-posed problem. We also investigated how this method can be used to assess the cost-effectiveness of acquiring different data types to reduce the uncertainty in the expected NPV.
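The hierarchical step can be sketched with linear algebra: restrict secondary-objective updates to directions that leave the primary (history-match) objective unchanged to first order, i.e., to the approximate null space of the primary sensitivity matrix. A toy numpy sketch under that linearization assumption; the matrix sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_param, n_data = 50, 20
G = rng.standard_normal((n_data, n_param))   # primary sensitivity (Jacobian)

# Approximate null space of G from its SVD: parameter directions with a
# negligible first-order effect on the history-match objective.
U, s, Vt = np.linalg.svd(G)
rank = int(np.sum(s > 1e-8 * s[0]))
N = Vt[rank:].T                              # columns span null(G)

def project_to_null(g2):
    """Project a secondary-objective gradient (e.g., of NPV) onto null(G),
    so a gradient step changes NPV without degrading the primary match."""
    return N @ (N.T @ g2)

g2 = rng.standard_normal(n_param)            # toy secondary-objective gradient
step = project_to_null(g2)
print(np.linalg.norm(G @ step))              # ~0: match preserved to 1st order
```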

8.
This paper shows a history matching workflow with both production and 4D seismic data, where the uncertainty of the seismic data used for history matching comes from Bayesian seismic waveform inversion. We use a synthetic model and perform two seismic surveys, one before the start of production and the second after 1 year of production. From the first seismic survey, we estimate the contrast in slowness squared (with uncertainty) and use this estimate to generate an initial ensemble of porosity and permeability fields. This ensemble is then updated using the second seismic survey (after inversion to contrasts) and production data with an iterative ensemble smoother. The impact on history matching results of using different uncertainty estimates for the seismic data is investigated. From the Bayesian seismic inversion, we obtain a covariance matrix for the uncertainty, and we compare results obtained using the full covariance matrix with those using only its diagonal. We also compare against a simplified uncertainty estimate that does not come from the seismic inversion. The results indicate that it is important not to underestimate the noise in seismic data, and that having information about the correlation of the errors in seismic data can in some cases improve the results.
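The role of the observation-error covariance in the update can be seen from the (single-iteration) ensemble smoother formula; a hedged numpy sketch comparing a full, correlated matrix against its diagonal approximation. This is a schematic formulation, not the paper's specific iterative smoother.

```python
import numpy as np

def es_update(M, D_pred, d_obs, R):
    """One ensemble-smoother update: M holds parameter members (n_m x n_e),
    D_pred the predicted data members (n_d x n_e), R the observation-error
    covariance. Iterative variants repeat this step with inflation."""
    n_e = M.shape[1]
    Am = M - M.mean(axis=1, keepdims=True)
    Ad = D_pred - D_pred.mean(axis=1, keepdims=True)
    C_md = Am @ Ad.T / (n_e - 1)
    C_dd = Ad @ Ad.T / (n_e - 1)
    K = C_md @ np.linalg.inv(C_dd + R)
    # Perturb observations consistently with R (correlated noise)
    noise = np.linalg.cholesky(R) @ np.random.randn(len(d_obs), n_e)
    return M + K @ (d_obs[:, None] + noise - D_pred)

# Spatially correlated seismic-like errors vs. their diagonal approximation
n_d = 40
idx = np.arange(n_d)
R_full = 0.1 * np.exp(-np.abs(idx[:, None] - idx[None, :]) / 5.0)
R_diag = np.diag(np.diag(R_full))     # ignores the error correlation entirely
```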

9.
A least-squares two-dimensional inversion method for induced polarization data
A least-squares two-dimensional inversion method for resistivity and induced polarization data is presented. The main difference from previous methods of this kind is that the conductivity and chargeability parameters vary linearly within each grid cell rather than being uniform, which makes the inverted resistivity and chargeability more accurate and easier to present as contour maps. Triangular elements are used in the two-dimensional finite-element forward computation, so measured data need no terrain correction before inversion. Prior information, such as a simplest (minimum-structure) model and the background field, is added to the objective function, which both reduces the non-uniqueness of the inverse problem and brings the inversion results closer to the actual situation. The method also adopts earlier authors' use of potential functions and model para…

10.
Improving the Ensemble Estimate of the Kalman Gain by Bootstrap Sampling
Using a small ensemble size in the ensemble Kalman filter methodology is efficient for updating numerical reservoir models but can result in poor updates owing to spurious correlations between observations and model variables. The most common approach for reducing the effect of spurious correlations on model updates is multiplication of the estimated covariance by a tapering function that eliminates all correlations beyond a prespecified distance. Distance-dependent tapering is not always appropriate, however. In this paper, we describe efficient methods for discriminating between real and spurious correlations in the Kalman gain matrix by using the bootstrap method to assess the confidence level of each element of the Kalman gain matrix. The new method is tested on a small linear problem and on a water-flooding reservoir history matching problem. For the water-flooding example, a small ensemble size of 30 was used to compute the Kalman gain in both the screened EnKF and standard EnKF methods. The new method resulted in significantly smaller root mean squared errors of the estimated model parameters and greater variability in the final updated ensemble.
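The screening idea can be sketched: resample the ensemble members with replacement, recompute the Kalman gain for each bootstrap sample, and damp gain entries whose bootstrap variability is large relative to their magnitude. The damping rule below is one common choice and is an assumption here, not necessarily the paper's exact formula.

```python
import numpy as np

def kalman_gain(M, D, R):
    n_e = M.shape[1]
    Am = M - M.mean(axis=1, keepdims=True)
    Ad = D - D.mean(axis=1, keepdims=True)
    return (Am @ Ad.T / (n_e - 1)) @ np.linalg.inv(Ad @ Ad.T / (n_e - 1) + R)

def screened_gain(M, D, R, n_boot=100):
    """Damp Kalman-gain entries with low bootstrap confidence."""
    n_e = M.shape[1]
    K = kalman_gain(M, D, R)
    boots = []
    for _ in range(n_boot):
        idx = np.random.randint(0, n_e, n_e)   # resample members w/ replacement
        boots.append(kalman_gain(M[:, idx], D[:, idx], R))
    var_b = np.stack(boots).var(axis=0)
    ratio = var_b / (K**2 + 1e-12)             # relative bootstrap uncertainty
    alpha = 1.0 / (1.0 + ratio)                # ~1 for trusted entries,
    return alpha * K                           # ~0 for likely-spurious ones

# Toy use with ensemble size 30, matching the paper's water-flooding example
M = np.random.randn(200, 30)                   # parameter ensemble
D = M[:5] + 0.1 * np.random.randn(5, 30)       # predicted data members
K_s = screened_gain(M, D, 0.01 * np.eye(5))
```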

11.
Recent progress on reservoir history matching: a review
History matching is a type of inverse problem in which observed reservoir behavior is used to estimate reservoir model variables that caused the behavior. Obtaining even a single history-matched reservoir model requires a substantial amount of effort, but the past decade has seen remarkable progress in the ability to generate reservoir simulation models that match large amounts of production data. Progress can be partially attributed to an increase in computational power, but the widespread adoption of geostatistics and Monte Carlo methods has also contributed indirectly. In this review paper, we will summarize key developments in history matching and then review many of the accomplishments of the past decade, including developments in reparameterization of the model variables, methods for computation of the sensitivity coefficients, and methods for quantifying uncertainty. An attempt has been made to compare representative procedures and to identify possible limitations of each.

12.
In a previous paper, we developed a theoretical basis for parameterization of reservoir model parameters based on truncated singular value decomposition (SVD) of the dimensionless sensitivity matrix. Two gradient-based algorithms based on truncated SVD were developed for history matching. In general, the best of these “SVD” algorithms requires roughly half the number of equivalent reservoir simulation runs required by the limited-memory Broyden–Fletcher–Goldfarb–Shanno (LBFGS) algorithm. In this work, we show that when combining SVD parameterization with the randomized maximum likelihood method, we can achieve significant additional computational savings by history matching all models simultaneously using an SVD parameterization based on a particular sensitivity matrix at each iteration. We present two new algorithms based on this idea: one that relies only on updating the SVD parameterization at each iteration, and one that adds an inner iteration based on an adjoint gradient, during which the truncated SVD parameterization is held fixed. Results generated with our algorithms are compared with results obtained from the ensemble Kalman filter (EnKF). Finally, we show that by combining EnKF with the SVD algorithm, we can improve the reliability of EnKF estimates.
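The core parameterization can be sketched: truncate the SVD of the dimensionless sensitivity matrix and confine the search to the reduced coordinates. A schematic numpy illustration, with the dimensionless scaling by data and prior standard deviations assumed for simplicity; the energy-based truncation rule is also an assumption.

```python
import numpy as np

def tsvd_parameterization(G, sigma_d, sigma_m, energy=0.99):
    """Build a reduced basis from the dimensionless sensitivity
    G_D = diag(1/sigma_d) @ G @ diag(sigma_m); returns V_r."""
    G_D = (G / sigma_d[:, None]) * sigma_m[None, :]
    _, s, Vt = np.linalg.svd(G_D, full_matrices=False)
    frac = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(frac, energy)) + 1   # smallest rank with 'energy'
    return Vt[:r].T                              # n_param x r basis

# The model update is confined to the r-dimensional subspace:
#   m = m_prior + diag(sigma_m) @ V_r @ x, with x the history-match unknowns
rng = np.random.default_rng(1)
G = rng.standard_normal((30, 500))               # toy sensitivity matrix
V_r = tsvd_parameterization(G, np.ones(30), np.ones(500))
x = rng.standard_normal(V_r.shape[1])
m = np.zeros(500) + V_r @ x                      # m_prior taken as zero here
print(V_r.shape)
```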

13.
14.
Model calibration and history matching are important techniques to adapt simulation tools to real-world systems. When prediction uncertainty needs to be quantified, one has to use the respective statistical counterparts, e.g., Bayesian updating of model parameters and data assimilation. For complex and large-scale systems, however, even single forward deterministic simulations may require parallel high-performance computing. This often makes accurate brute-force and nonlinear statistical approaches infeasible. We propose an advanced framework for parameter inference or history matching based on the arbitrary polynomial chaos expansion (aPC) and strict Bayesian principles. Our framework consists of two main steps. In step 1, the original model is projected onto a mathematically optimal response surface via the aPC technique. The resulting response surface can be viewed as a reduced (surrogate) model. It captures the model’s dependence on all parameters relevant for history matching at high-order accuracy. Step 2 consists of matching the reduced model from step 1 to observation data via bootstrap filtering. Bootstrap filtering is a fully nonlinear and Bayesian statistical approach to the inverse problem in history matching. It makes it possible to quantify post-calibration parameter and prediction uncertainty and is more accurate than ensemble Kalman filtering or linearized methods. Through this combination, we obtain a statistical method for history matching that is accurate, yet has a computational speed that is more than sufficient to be developed towards real-time application. We motivate and demonstrate our method on the problem of CO2 storage in geological formations, using a low-parametric homogeneous 3D benchmark problem. In a synthetic case study, we update the parameters of a CO2/brine multiphase model using pressure data monitored during CO2 injection.
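The two-step structure can be sketched generically: fit a polynomial surrogate to a handful of forward runs (step 1), then run a bootstrap (particle) filter against observations using only the cheap surrogate (step 2). The sketch below uses an ordinary least-squares polynomial fit in place of the aPC construction, which is an explicit simplification; the forward model and noise level are toys.

```python
import numpy as np

rng = np.random.default_rng(2)

def forward_model(k):                  # stand-in for an expensive simulator
    return 2.0 * k + 0.5 * k**2        # e.g., pressure response vs parameter k

# Step 1: polynomial response surface from a few "simulations"
k_train = np.linspace(-2, 2, 9)
coeffs = np.polyfit(k_train, forward_model(k_train), deg=3)
surrogate = np.poly1d(coeffs)

# Step 2: bootstrap filter on the surrogate (fully nonlinear, Bayesian)
n_part, sigma = 5000, 0.2
k_particles = rng.normal(0.0, 1.0, n_part)         # prior samples
d_obs = forward_model(0.7) + rng.normal(0, sigma)  # synthetic observation
w = np.exp(-0.5 * ((d_obs - surrogate(k_particles)) / sigma) ** 2)
w /= w.sum()
idx = rng.choice(n_part, n_part, p=w)              # resampling step
posterior = k_particles[idx]
print(posterior.mean(), posterior.std())           # post-calibration uncertainty
```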

15.
Reservoir management requires periodic updates of the simulation models using the production data available over time. Traditionally, validation of reservoir models with production data is done using a history matching process. Uncertainties in the data, as well as in the model, lead to a nonunique history matching inverse problem. It has been shown that the ensemble Kalman filter (EnKF) is an adequate method for predicting the dynamics of the reservoir. The EnKF is a sequential Monte-Carlo approach that uses an ensemble of reservoir models. For realistic, large-scale applications, the ensemble size needs to be kept small to limit the computational cost. Consequently, the error space is not well covered (poor cross-correlation matrix approximations), and the updated parameter field becomes scattered and loses important geological features (for example, the contact between high- and low-permeability values). The prior geological knowledge present at the initial time is no longer found in the final updated parameter field. We propose a new approach to overcome some of the EnKF limitations. This paper shows the specifications and results of the ensemble multiscale filter (EnMSF) for automatic history matching. EnMSF replaces, at each update time, the prior sample covariance with a multiscale tree. The global dependence is preserved via the parent–child relation in the tree (nodes at the adjacent scales). After constructing the tree, the Kalman update is performed. The properties of the EnMSF are presented here with a small 2D, two-phase (oil and water) twin experiment, and the results are compared to the EnKF. The advantages of using EnMSF are localization in space and scale, adaptability to prior information, and efficiency when many measurements are available. These advantages make the EnMSF a practical tool for many data assimilation problems.

16.
This paper mainly introduces the tangent method (a graphical interpretation method) and the curve-fitting method for interpreting multilayer electrical sounding curves. The application conditions, basic principle, and operating steps of the graphical tangent method are briefly described, and the fitting of measured curves to theoretical curves is presented in more detail. For horizontally uniform multilayer media, electrical sounding curves are currently computed mostly by digital filtering, and the inverse problem is solved by fitting measured curves to theoretical curves. An application example is given. The author considers that the curve-fitting method can improve computational accuracy and should be one of the main methods in the future.

17.
A Matlab-language algorithm for computing derivatives of one-dimensional gravity and magnetic anomalies in the wavenumber domain
Derivatives of gravity and magnetic anomalies can be conveniently computed in the wavenumber domain using Matlab's built-in fast Fourier transform functions. This paper describes a wavenumber-domain differentiation algorithm based on the Matlab language, provides the program source code, and discusses programming techniques that help improve computational accuracy. Model tests and data analysis show that, for vertical derivatives, the wavenumber-domain algorithm is clearly more accurate than the Fourier-series approach, while for horizontal derivatives the two methods have comparable accuracy. The method was applied to high-precision gravity profile data in a potash exploration area with good results.
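The underlying operations are standard potential-field results: in the wavenumber domain, horizontal differentiation multiplies the spectrum by ik, and vertical differentiation multiplies it by |k|. A numpy sketch of the same idea (the paper's code is in Matlab; this is an independent illustration, and the toy anomaly is used only for checking):

```python
import numpy as np

def wavenumber_derivatives(g, dx):
    """Horizontal and vertical derivatives of a 1D potential-field profile,
    computed in the wavenumber domain via the FFT."""
    n = len(g)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)   # angular wavenumbers
    G = np.fft.fft(g)
    d_dx = np.fft.ifft(1j * k * G).real          # horizontal derivative: ik
    d_dz = np.fft.ifft(np.abs(k) * G).real       # vertical derivative: |k|
    return d_dx, d_dz

# Toy symmetric anomaly of a source at depth z0
x = np.linspace(-50, 50, 512)
z0 = 5.0
g = z0 / (x**2 + z0**2)
gx, gz = wavenumber_derivatives(g, dx=x[1] - x[0])

analytic = -2 * x * z0 / (x**2 + z0**2) ** 2     # exact d/dx of the toy profile
print(np.abs(gx - analytic)[100:400].max())      # small away from profile edges
```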

18.
In magnetotelluric sounding data processing, static shift affects the interpretation results. Here, forward modeling is carried out mainly with the finite element method, and impedance-phase correction, spatial filtering, and an apparent-resistivity-derivative interpretation method are applied to theoretical models affected by static shift. The comparison shows that the first two methods suppress and mitigate the static shift to some extent, with impedance-phase correction performing relatively better, while the apparent-resistivity-derivative interpretation method can accurately recover the true shape of the subsurface anomaly.

19.
A method for multiscale parameter estimation with application to reservoir history matching is presented. Starting from a given fine-scale model, coarser models are generated using a global upscaling technique in which the coarse models are tuned to match the solution of the fine model. Conditioning to dynamic data is done by history-matching the coarse model. Using consistently the same resolution for both the forward and inverse problems, this model is successively refined using a combination of downscaling and history matching until a model matching the dynamic data is obtained at the finest scale. Large-scale corrections are obtained using fast models, which, combined with a downscaling procedure, provide a better initial model for the final adjustment on the fine scale. The result is thus a series of models at different resolutions, each matching history as well as possible on its own grid. Numerical examples show that this method may significantly reduce the computational effort and/or improve the quality of the solution when achieving a fine-scale match, as compared to history-matching directly on the fine scale.

20.
Estimating the observation error covariance matrix properly is a key step towards successful seismic history matching. Typically, observation errors of seismic data are spatially correlated; therefore, the observation error covariance matrix is non-diagonal. Estimating such a non-diagonal covariance matrix is the focus of the current study. We decompose the estimation into two steps: (1) estimate observation errors and (2) construct the covariance matrix based on the estimated observation errors. Our focus is on step (1), whereas at step (2) we use a procedure similar to that in Aanonsen et al. 2003. In Aanonsen et al. 2003, step (1) is carried out using a local moving average algorithm. By treating seismic data as an image, this algorithm can be interpreted as a discrete convolution between an image and a rectangular window function. Following the perspective of image processing, we consider three types of image denoising methods, namely, local moving average with different window functions (as an extension of the method in Aanonsen et al. 2003), non-local means denoising, and wavelet denoising. The performance of these three algorithms is compared using both synthetic and field seismic data. It is found that, in our investigated cases, the wavelet denoising method leads to the best performance most of the time.
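The observation that a local moving average is a discrete convolution with a window function suggests simple drop-in alternatives. A scipy sketch of the windowed-filter family; the data, window sizes, and residual-based error estimate are illustrative assumptions, and non-local means or wavelet denoising (available in, e.g., scikit-image and PyWavelets) could replace the filters shown.

```python
import numpy as np
from scipy import ndimage

# Treat a seismic attribute map as an image d = signal + noise;
# the observation-error estimate is the residual d - denoised(d).
rng = np.random.default_rng(3)
signal = ndimage.gaussian_filter(rng.standard_normal((128, 128)), 8)
d = signal + 0.1 * rng.standard_normal((128, 128))

# A local moving average is a discrete convolution with a window function:
den_rect = ndimage.uniform_filter(d, size=5)        # rectangular window
den_gauss = ndimage.gaussian_filter(d, sigma=1.5)   # smoother window choice

err_rect = d - den_rect             # estimated observation errors (residuals)
err_gauss = d - den_gauss
print(err_rect.std(), err_gauss.std())
# From here, an empirical covariance of the residual field would be built
# as in step (2) of the workflow described above.
```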
