Similar Documents
20 similar documents found.
1.
This paper shows a history matching workflow with both production and 4D seismic data, where the uncertainty of the seismic data for history matching comes from Bayesian seismic waveform inversion. We use a synthetic model and perform two seismic surveys, one before the start of production and the second after 1 year of production. From the first seismic survey, we estimate the contrast in slowness squared (with uncertainty) and use this estimate to generate an initial ensemble of porosity and permeability fields. This ensemble is then updated using the second seismic survey (after inversion to contrasts) and production data with an iterative ensemble smoother. The impact on history matching results of using different uncertainty estimates for the seismic data is investigated. From the Bayesian seismic inversion, we obtain a covariance matrix for the uncertainty, and we compare using the full covariance matrix with using only its diagonal. We also compare with a simplified uncertainty estimate that does not come from the seismic inversion. The results indicate that it is important not to underestimate the noise in the seismic data, and that information about the correlation of the errors in the seismic data can in some cases improve the results.
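The iterative ensemble-smoother update described above can be illustrated on a linear toy problem. The sketch below (numpy; a random matrix `G` stands in for the simulator-plus-inversion chain, and all sizes and noise levels are illustrative, not the paper's setup) shows where a full versus diagonal data-error covariance `C_err` enters the update:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ensemble: Ne models of Nm parameters, Nd (inverted) seismic data each.
Ne, Nm, Nd = 50, 10, 5
M = rng.normal(size=(Nm, Ne))        # parameter ensemble (e.g. log-permeability)
G = rng.normal(size=(Nd, Nm))        # linear stand-in for the forward model
D = G @ M                            # simulated data ensemble
d_obs = rng.normal(size=Nd)          # "observed" data
C_err = 0.1 * np.eye(Nd)             # data-error covariance (full or diagonal)

# Ensemble update: M_a = M + C_md (C_dd + C_err)^-1 (d_obs + e - D)
Dm = M - M.mean(axis=1, keepdims=True)
Dd = D - D.mean(axis=1, keepdims=True)
C_md = Dm @ Dd.T / (Ne - 1)          # parameter-data cross-covariance
C_dd = Dd @ Dd.T / (Ne - 1)          # data auto-covariance
E = rng.multivariate_normal(np.zeros(Nd), C_err, size=Ne).T  # perturbed obs
K = C_md @ np.linalg.inv(C_dd + C_err)
M_a = M + K @ (d_obs[:, None] + E - D)

# The update should pull simulated data towards the observations.
misfit_prior = np.linalg.norm(D - d_obs[:, None])
misfit_post = np.linalg.norm(G @ M_a - d_obs[:, None])
```

Using only `np.diag(np.diag(C_err))` in place of a full `C_err` is the diagonal-versus-full comparison the abstract refers to; with a diagonal toy covariance as here the two coincide.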

2.
While 3D seismic has long been the basis for geological model building, time-lapse seismic has primarily been used in a qualitative manner to assist in monitoring reservoir behavior. With the growing acceptance of assisted history matching methods has come an equally rising interest in incorporating 3D or time-lapse seismic data into the history matching process in a more quantitative manner. The common approach in recent studies has been to invert the seismic data to elastic or dynamic reservoir properties, typically acoustic impedance or saturation changes. Here we consider the use of both 3D and time-lapse seismic amplitude data based on a forward modeling approach that does not require any inversion in the traditional sense. Advantages of such an approach may be better estimation and treatment of model and measurement errors, the combination of two inversion steps into one by removing the explicit inversion to state-space variables, and more consistent dependence on the validity of assumptions underlying the inversion process. In this paper, we introduce this approach with the use of an assisted history matching method in mind. Two ensemble-based methods, the ensemble Kalman filter and the ensemble randomized maximum likelihood method, are used to investigate issues arising from the use of seismic amplitude data, and possible solutions are presented. Experiments with a 3D synthetic reservoir model show that additional information on the distribution of reservoir fluids, and on rock properties such as porosity and permeability, can be extracted from the seismic data. The roles of localization and iterative methods are discussed in detail.

3.
Hydrocarbon reservoir modelling and characterisation is a challenging subject within the oil and gas industry due to the scarcity of well data and the natural heterogeneity of the Earth's subsurface. Integrating historical production data into the geo-modelling workflow, commonly designated as history matching, allows better reservoir characterisation and the possibility of predicting reservoir behaviour. We present herein a geostatistical multi-objective history matching methodology. It starts with the generation of an initial ensemble of the subsurface petrophysical property of interest through stochastic sequential simulation. Each model is then ranked according to the match between its dynamic response, after fluid flow simulation, and the observed historical production data. This enables building regionalised Pareto fronts and defining a large ensemble of optimal subsurface Earth models that fit all the observed production data without compromising the exploration of the uncertainty space. The proposed geostatistical multi-objective history matching technique is successfully applied to a benchmark synthetic reservoir dataset, the PUNQ-S3, where 12 objectives are targeted.

4.
5.
Reservoir characterization needs the integration of various data through history matching, especially dynamic information such as production or 4D seismic data. Although reservoir heterogeneities are commonly generated using geostatistical models, random realizations cannot generally match observed dynamic data. To constrain model realizations to reproduce measured dynamic data, an optimization procedure may be applied in an attempt to minimize an objective function, which quantifies the mismatch between real and simulated data. Such assisted history matching methods require a parameterization of the geostatistical model to allow the updating of an initial model realization. However, there are only a few parameterization methods available to update geostatistical models in a way consistent with the underlying geostatistical properties. This paper presents a local domain parameterization technique that updates geostatistical realizations using assisted history matching. This technique allows us to locally change model realizations through the variation of geometrical domains whose geometry and size can be easily controlled and parameterized. This approach provides a new way to parameterize geostatistical realizations in order to improve history matching efficiency.

6.
Application of NPGA-GW to multi-objective optimization management of groundwater systems
In groundwater system management problems, the multiple conflicting objective functions involved are often reduced to single objective functions of various forms. Such single-objective optimization yields only one solution, and the resulting scheme may run counter to the decision maker's intentions, whereas multi-objective optimization produces a set of solutions among which decision makers can weigh trade-offs. By coupling the groundwater flow simulator MODFLOW with the solute transport simulator MT3DMS and solving with a niched Pareto genetic algorithm, we developed NPGA-GW, a program for multi-objective management of groundwater systems. The program was applied to the multi-objective optimization of a two-dimensional groundwater contamination remediation problem. The results show that it obtains a set of Pareto-optimal solutions in a short time, with a spread wide enough for decision makers to make an appropriate choice, indicating good prospects for application.
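At the core of any Pareto-based method such as a niched Pareto genetic algorithm is the non-dominated (Pareto) filter over conflicting objectives. A minimal brute-force sketch of that filter (not the genetic algorithm itself; the objective values are made up, e.g. remediation cost versus residual contaminant mass, both minimized):

```python
import numpy as np

def pareto_front(costs):
    """Return indices of non-dominated points (all objectives minimized)."""
    costs = np.asarray(costs, dtype=float)
    keep = []
    for i in range(len(costs)):
        dominated = any(
            j != i
            and np.all(costs[j] <= costs[i])
            and np.any(costs[j] < costs[i])
            for j in range(len(costs))
        )
        if not dominated:
            keep.append(i)
    return keep

# Two conflicting objectives: (cost, residual mass) for five candidate schemes.
points = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (2.5, 2.5)]
front = pareto_front(points)
```

Point (3.0, 4.0) is dominated by (2.0, 3.0) and is dropped; the remaining four points form the trade-off set offered to the decision maker.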

7.
The conventional paradigm for predicting future reservoir performance from existing production data involves the construction of reservoir models that match the historical data through iterative history matching. This is generally an expensive and difficult task and often results in models that do not accurately assess the uncertainty of the forecast. We propose an alternative re-formulation of the problem, in which the role of the reservoir model is reconsidered. Instead of using the model to match the historical production and then forecasting, the model is used in combination with Monte Carlo sampling to establish a statistical relationship between the historical and forecast variables. The estimated relationship is then used in conjunction with the actual production data to produce a statistical forecast. This allows quantifying posterior uncertainty on the forecast variable without explicit inversion or history matching. The main rationale behind this is that the reservoir model, however complex, still remains a simplified representation of the actual subsurface. As statistical relationships can generally only be constructed in low dimensions, compression and dimension reduction of the reservoir models themselves would result in further oversimplification. Conversely, production data and forecast variables are time series, which are simpler and much more amenable to dimension reduction techniques. We present a dimension reduction approach based on functional data analysis (FDA) and mixed principal component analysis (mixed PCA), followed by canonical correlation analysis (CCA) to maximize the linear correlation between the forecast and production variables. Using these transformed variables, it is then possible to apply linear Gaussian regression and estimate the statistical relationship between the forecast and historical variables. This relationship is used in combination with the actual observed historical data to estimate the posterior distribution of the forecast variable. Sampling from this posterior and reconstructing the corresponding forecast time series allows assessing uncertainty on the forecast. The workflow is demonstrated on a case based on a Libyan reservoir and compared with traditional history matching.
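The reduce-then-regress idea can be sketched end to end on synthetic time series. The sketch below substitutes plain PCA for FDA/mixed PCA and keeps one-dimensional scores, in which case CCA degenerates to simple linear Gaussian regression; the ensemble, series lengths, and latent-factor construction are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo ensemble: N prior models, each yielding a historical series H
# (Th steps) and a forecast series F (Tf steps), correlated via a latent z.
N, Th, Tf = 200, 30, 10
z = rng.normal(size=N)
H = np.outer(z, np.linspace(1, 2, Th)) + 0.1 * rng.normal(size=(N, Th))
F = np.outer(z, np.linspace(2, 1, Tf)) + 0.1 * rng.normal(size=(N, Tf))

def pca_reduce(X, k):
    """Center X and return k-dim scores, components, and the mean."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return (X - mu) @ Vt[:k].T, Vt[:k], mu

h_scores, Vh, h_mu = pca_reduce(H, 1)
f_scores, Vf, f_mu = pca_reduce(F, 1)

# Linear Gaussian regression between the reduced variables.
beta = np.cov(f_scores[:, 0], h_scores[:, 0])[0, 1] / np.var(h_scores[:, 0], ddof=1)

# Project an "observed" history (here model 0) and map it to a forecast.
h_obs_score = (H[0] - h_mu) @ Vh.T
f_pred = (beta * h_obs_score) @ Vf + f_mu      # posterior-mean forecast series
```

Sampling the regression residual around `beta * h_obs_score` before reconstructing would give the posterior forecast ensemble rather than just its mean.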

8.
In seismic exploration, random noise is unavoidable; it greatly lowers the signal-to-noise ratio of seismic data and degrades migration imaging, sometimes to the point where no image can be formed. Under the assumption that the coefficients of the autoregressive (AR) model vary in the time-space domain, the adaptive linear prediction filtering method transforms the mathematical expression of the AR model and introduces a cost function to improve the stability and uniqueness of the solution, yielding recursive algorithms for suppressing one- and two-dimensional random noise. Tests on synthetic and field seismic data show that the method effectively suppresses random noise in low signal-to-noise-ratio data while preserving the effective seismic signal.
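Prediction-error filtering of this kind can be sketched in one dimension: fit AR coefficients to a noisy trace by least squares and take the one-step prediction as the signal estimate. This is a stationary simplification of the adaptive (varying-coefficient) method described above; the trace, AR order, and noise level are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

n, p = 500, 10                               # samples, AR order
t = np.linspace(0, 1, n, endpoint=False)
signal = np.sin(2 * np.pi * 25 * t)          # predictable "effective signal"
trace = signal + 0.5 * rng.normal(size=n)    # plus unpredictable random noise

# Least-squares AR(p) fit: trace[k] ~ sum_i a[i] * trace[k-1-i]
X = np.column_stack([trace[p - 1 - i : n - 1 - i] for i in range(p)])
y = trace[p:]
a, *_ = np.linalg.lstsq(X, y, rcond=None)

# The one-step prediction is the signal estimate; the residual is noise.
denoised = np.concatenate([trace[:p], X @ a])

err_noisy = np.mean((trace - signal) ** 2)
err_denoised = np.mean((denoised - signal) ** 2)
```

Because the coherent signal is predictable from its past while white noise is not, the prediction retains the signal and attenuates the noise.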

9.
The availability of multiple history-matched models is essential for proper handling of uncertainty in determining the optimal development of producing hydrocarbon fields. The ensemble Kalman filter (EnKF) in particular is becoming recognized as an efficient method for quantitative conditioning of multiple models to history data. It is known, however, that the EnKF may have problems finding solutions in history matching cases that are highly nonlinear and involve very large numbers of data, as is typical when time-lapse seismic surveys are available. Recently, a parameterization of seismic anomalies due to saturation effects was proposed in terms of arrival times of fronts, which reduces both the nonlinearity and the effective number of data. A disadvantage of the parameterization in terms of arrival times is that it requires simulation of models beyond the update time. An alternative distance parameterization is proposed here for flood fronts, or more generally for isolines of arbitrary seismic attributes representing a front, which removes the need for additional simulation time. An accurate fast marching method for solution of the Eikonal equation on Cartesian grids is used to calculate distances between observed and simulated fronts, which are used as innovations in the EnKF. Experiments are presented that demonstrate the functioning of the method in synthetic 2D and realistic 3D cases. Results are compared with those obtained using saturation data, as they could potentially be inverted from seismic data, with and without localization. The proposed algorithm significantly reduces the number of data while still capturing the essential information. It furthermore removes the need for seismic inversion when only the oil-water front is identified, and it produces a more favorable distribution of simulated data, leading to a very efficient and improved functioning of the EnKF.
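The innovation used in the EnKF here is a distance between observed and simulated fronts. A brute-force nearest-point sketch of such a mismatch (standing in for the fast-marching Eikonal solve, on made-up circular fronts):

```python
import numpy as np

def front_mismatch(obs_front, sim_front):
    """Mean distance from each observed front point to the nearest simulated
    front point (brute force; a fast marching solver does this efficiently)."""
    obs = np.asarray(obs_front, dtype=float)
    sim = np.asarray(sim_front, dtype=float)
    d = np.linalg.norm(obs[:, None, :] - sim[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Observed water front (circle of radius 3) vs. simulated front (radius 4).
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
obs = np.column_stack([3 * np.cos(theta), 3 * np.sin(theta)])
sim = np.column_stack([4 * np.cos(theta), 4 * np.sin(theta)])
mismatch = front_mismatch(obs, sim)
```

A single scalar (or a short vector of such distances per front segment) replaces a full grid of saturation data, which is the data reduction the abstract emphasizes.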

10.
Geological modeling and numerical simulation of unconventional reservoirs builds a fine-scale geological model of the reservoir through integrated geological, seismic, and reservoir studies. A structural model is built first; a lithofacies model is then built from seismic inversion results combined with single-well lithofacies data. Natural fractures and hydraulically induced fractures in the reservoir are characterized using drilling-mud losses, actual field and single-well production behavior, and microseismic monitoring. Finally, the lithofacies model is used to constrain property models such as porosity and saturation, providing numerical simulation with an upscaled 3D reservoir model that reflects the actual geology. On the basis of this model, production history matching and production forecasting are carried out, and a rational development plan is formulated.

11.
A rough set (RS)-neural network (NN) technique based on the analysis of 3D seismic data is proposed to predict faults and coal-seam thickness variations in mining districts. Rough sets are used to reduce the large amount of noisy data contained in the seismic data, producing low-noise data; the reduced data are then fed to a neural network, which is trained to identify faults and predict coal-seam thickness. Validation on field data shows that the method achieves high accuracy.

12.
Seismic data denoising based on an improved K-SVD dictionary learning method
To achieve better seismic data denoising, we introduce a new algorithm, the fast iterative shrinkage-thresholding algorithm (FISTA). The K-SVD dictionary is updated iteratively using FISTA together with K-singular value decomposition (K-SVD); the updated dictionary is then used for a sparse representation of the seismic data, and small sparse coefficients are discarded so that random noise in the data is suppressed. Comparative experiments on synthetic records from a layered model, synthetic records from the Marmousi model, and field seismic data show that FISTA improves the signal-to-noise ratio of seismic data better than the OMP algorithm while effectively preserving reflection signals.
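The sparse-coding step can be sketched with a fixed orthonormal DCT dictionary in place of a learned K-SVD one. The FISTA loop below is the generic iteration; the signal, noise level, and threshold are illustrative assumptions:

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

rng = np.random.default_rng(3)
n = 256

# Orthonormal DCT-II matrix: rows are the dictionary atoms.
i = np.arange(n)
C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i[None, :] + 1) * i[:, None] / (2 * n))
C[0] /= np.sqrt(2.0)
D = C.T                                   # columns = atoms, so D.T @ D = I

# A trace that is sparse in the dictionary, plus random noise.
clean = 5.0 * C[10] + 3.0 * C[40]
noisy = clean + 0.4 * rng.normal(size=n)

# Generic FISTA loop for min_c 0.5||D c - y||^2 + lam ||c||_1 (Lipschitz
# constant 1 for an orthonormal D; for a learned dictionary use ||D||_2^2).
lam, c, tk = 1.0, np.zeros(n), 1.0
z = c.copy()
for _ in range(50):
    grad = D.T @ (D @ z - noisy)
    c_new = soft_threshold(z - grad, lam)
    tk_new = (1.0 + np.sqrt(1.0 + 4.0 * tk**2)) / 2.0
    z = c_new + (tk - 1.0) / tk_new * (c_new - c)
    c, tk = c_new, tk_new
denoised = D @ c                          # keep only the large coefficients

err_noisy = np.mean((noisy - clean) ** 2)
err_denoised = np.mean((denoised - clean) ** 2)
```

In the method described above, the dictionary `D` itself would additionally be re-learned between sparse-coding passes (the K-SVD update), which this sketch omits.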

13.
Seismic signal denoising based on a matching wavelet-packet algorithm
The signal-to-noise ratio is one of the key factors affecting seismic data quality, and current denoising methods damage the effective signal while filtering. A denoising method based on a matching algorithm is therefore proposed: the signal is decomposed with wavelet packets matched to the seismic signal, and the selected waveforms are taken to represent the effective signal, achieving denoising. Experimental analysis shows that the matching wavelet-packet algorithm suppresses white noise in seismic signals well and raises the signal-to-noise ratio. When the noise energy is lower than the energy of an effective-signal period, the wavelet-packet algorithm denoises better than wavelet shrinkage thresholding, improving the signal-to-noise ratio by about 5 dB.

14.
The recent proliferation of the 3D reflection seismic method into the near-surface area of geophysical applications, especially in response to the emerging need to comprehensively characterize and monitor near-surface carbon dioxide sequestration in shallow saline aquifers around the world, justifies the emphasis on a cost-effective and robust quality control and assurance (QC/QA) workflow for 3D seismic data preprocessing that is suitable for near-surface applications. The main purpose of our seismic data preprocessing QC is to enable the use of appropriate header information and of data that are free of noise-dominated traces and flawed vertical stacking in subsequent processing steps. In this article, I provide an account of utilizing survey design specifications, noise properties, first breaks, and normal moveout for rapid and thorough graphical QC/QA diagnostics, which are easy to apply and efficient in the diagnosis of inconsistencies. A correlated vibroseis time-lapse 3D seismic data set from a CO2-flood monitoring survey is used to demonstrate the QC diagnostics. An important by-product of the QC workflow is establishing the number of layers for a refraction statics model in a data-driven graphical manner that capitalizes on the spatial coverage of the 3D seismic data.

15.
Simulating seismic data from a 3D seismic-geological model is an indispensable part of decision making throughout the exploration-to-production life cycle. Although great progress has been made in representing the dynamic processes within reservoirs and their seismic geology, obtaining accurate seismic simulations from these models still faces many challenges. Seismic data are usually simulated from rock properties within the earth model using a 1D convolution method, a process that generally ignores the effects of survey geometry and overburden on the seismic signal. We examine why these factors limit the validity of 3D earth models and why the effects of overburden and survey geometry on 3D illumination and resolution need to be brought into the simulation process. We present a new approach that combines property-model building with a new seismic simulation technique into a workflow with which explorationists can rapidly simulate 3D PSDM data incorporating the effects of overburden and survey geometry on illumination and resolution. Using data from a field offshore Norway, we perturb rock properties before simulating seismic data with these illumination and resolution effects, illustrating how this approach can improve the accuracy of 3D earth models and our understanding of the reservoir.

16.
Model calibration and history matching are important techniques to adapt simulation tools to real-world systems. When prediction uncertainty needs to be quantified, one has to use the respective statistical counterparts, e.g., Bayesian updating of model parameters and data assimilation. For complex and large-scale systems, however, even single forward deterministic simulations may require parallel high-performance computing. This often makes accurate brute-force and nonlinear statistical approaches infeasible. We propose an advanced framework for parameter inference or history matching based on the arbitrary polynomial chaos expansion (aPC) and strict Bayesian principles. Our framework consists of two main steps. In step 1, the original model is projected onto a mathematically optimal response surface via the aPC technique. The resulting response surface can be viewed as a reduced (surrogate) model. It captures the model's dependence on all parameters relevant for history matching at high-order accuracy. Step 2 consists of matching the reduced model from step 1 to observation data via bootstrap filtering. Bootstrap filtering is a fully nonlinear and Bayesian statistical approach to the inverse problem in history matching. It allows quantifying post-calibration parameter and prediction uncertainty and is more accurate than ensemble Kalman filtering or linearized methods. Through this combination, we obtain a statistical method for history matching that is accurate, yet has a computational speed that is more than sufficient to be developed towards real-time application. We motivate and demonstrate our method on the problem of CO2 storage in geological formations, using a low-parametric homogeneous 3D benchmark problem. In a synthetic case study, we update the parameters of a CO2/brine multiphase model on monitored pressure data during CO2 injection.
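Bootstrap filtering itself reduces to weighting prior samples by the likelihood and resampling. A minimal sketch with a toy one-parameter "simulator" standing in for the aPC surrogate (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy inverse problem: infer a parameter k from noisy data d = g(k) + noise,
# where g is a cheap monotone stand-in for the aPC response surface.
def g(k):
    return np.sqrt(k)

k_true, sigma = 4.0, 0.05
d_obs = g(k_true) + sigma * rng.normal()

# Bootstrap filter: sample the prior, weight by the Gaussian likelihood,
# then resample with probabilities proportional to the weights.
N = 5000
k_prior = rng.uniform(1.0, 9.0, size=N)
w = np.exp(-0.5 * ((d_obs - g(k_prior)) / sigma) ** 2)
w /= w.sum()
k_post = rng.choice(k_prior, size=N, p=w, replace=True)

post_mean = k_post.mean()
```

Because the surrogate is fast, the many evaluations this fully nonlinear step needs remain affordable, which is the point of combining aPC with bootstrap filtering.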

17.
In the oil industry and in subsurface hydrology, geostatistical models are often used to represent the porosity or permeability field. In history matching of a geostatistical reservoir model, we attempt to find multiple realizations that are conditioned to dynamic data and representative of the model uncertainty space. A relevant way to simulate the conditioned realizations is by generating Markov chain Monte Carlo (MCMC) samples. The huge dimension (number of parameters) of the model and the computational cost of each iteration are two major obstacles to the use of MCMC. In practice, we have to stop the chain well before it has explored the whole support of the posterior probability density function. Furthermore, as the relationship between the production data and the random field is highly nonlinear, the posterior can be strongly multimodal and the chain may stay stuck in one of the modes. In this work, we propose a methodology to enhance the sampling properties of classical single-chain MCMC in history matching. We first show how to reduce the dimension of the problem by using a truncated Karhunen–Loève expansion of the random field of interest and how to assess the number of components to be kept. Then, we show how the mixing properties of MCMC can be improved, without increasing the global computational cost, by using parallel interacting Markov chains. Finally, we show the encouraging results obtained when applying the method to a synthetic history matching case.
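The truncated Karhunen–Loève parameterization can be sketched directly from an eigendecomposition of the covariance matrix; the grid, correlation length, and 95% energy cutoff below are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(5)

# 1D Gaussian random field on a grid with exponential covariance.
n = 100
x = np.linspace(0, 1, n)
Cov = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)

# Karhunen-Loeve: eigendecomposition of the covariance, sorted descending.
vals, vecs = np.linalg.eigh(Cov)
order = np.argsort(vals)[::-1]
vals, vecs = vals[order], vecs[:, order]

# Keep the m leading modes capturing 95% of the variance.
energy = np.cumsum(vals) / vals.sum()
m = int(np.searchsorted(energy, 0.95)) + 1

# A realization is now parameterized by only m standard-normal coefficients,
# which become the (low-dimensional) MCMC state.
xi = rng.normal(size=m)
field = vecs[:, :m] @ (np.sqrt(vals[:m]) * xi)
```

The chain then proposes moves on `xi` rather than on the full `n`-dimensional field, which is the dimension reduction the abstract describes.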

18.
Matching seismic data in assisted history matching processes can be a challenging task. One main idea is to bring flexibility into the choice of the parameters to be perturbed, focusing on the information provided by the seismic data. Local parameterization techniques such as pilot-point or gradual deformation methods can be introduced, given their high adaptability. However, the choice of the spatial supports associated with the perturbed parameters is crucial to successfully reduce the seismic mismatch. The information related to seismic data is sometimes used to initialize such local methods, and recent attempts have been made to define the regions adaptively, focusing on the mismatch between simulated and reference seismic data; however, the regions are defined manually for each optimization process. We therefore propose to drive the definition of the parameter support by defining the regions to be perturbed automatically from the residual maps related to the 3D seismic data. Two methods are developed in this paper. The first consists in clustering the residual map with classification algorithms. The second drives the generation of pilot-point locations in an adaptive way: residual maps, after proper normalization, are treated as probability density functions for the pilot-point locations. Both procedures lead to a completely adaptive and highly flexible perturbation technique for 3D seismic matching. A synthetic study based on the PUNQ test case illustrates the potential of these adaptive strategies.
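The second method, drawing pilot-point locations from a normalized residual map, can be sketched in a few lines (synthetic residual bump; grid size and point count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

# Seismic residual map on a 2D grid: high values where simulated and
# observed seismic disagree most (here a synthetic bump near (40, 10)).
nx, ny = 50, 50
X, Y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
residual = np.exp(-((X - 40) ** 2 + (Y - 10) ** 2) / (2 * 5.0**2))

# Normalize to a probability density and draw distinct pilot-point locations.
p = residual.ravel() / residual.sum()
n_points = 200
idx = rng.choice(residual.size, size=n_points, p=p, replace=False)
px, py = np.unravel_index(idx, residual.shape)
```

Pilot points thus concentrate automatically where the seismic mismatch is largest, so the subsequent optimization perturbs the model only where the data demand it.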

19.
The imaging condition is the key to accurate imaging of seismic wavefields, and optimizing the reverse time migration (RTM) imaging condition is especially effective in improving RTM results. Starting from zero-offset and non-zero-offset impulse responses, we systematically compare the RTM impulse-response wavefields of the conventional cross-correlation imaging condition and of an imaging condition based on traveling-wave separation, and analyze the origin and wavefield behavior of low-wavenumber RTM noise. Using a complex fault-depression synthetic model and field 3D seismic data from the SZ survey area, we then compare the imaging results of the two imaging conditions. The results show that the traveling-wave-separation imaging condition effectively attenuates the low-wavenumber background noise of RTM and recovers the effective imaging information buried beneath that noise, delineating stratigraphic detail more clearly; this provides methodological guidance for high-precision seismic imaging of complex wavefields and complex structures.

20.