Similar Documents
20 similar documents were found.
1.
2.
Ensemble-based data assimilation methods have recently become popular for solving reservoir history matching problems, but because of the practical limitation on ensemble size, localization is necessary to reduce the effect of sampling error and to increase the degrees of freedom for incorporating large amounts of data. Local analysis in the ensemble Kalman filter has been used extensively for very large models in numerical weather prediction. It scales well with the model size and the number of data and is easily parallelized. In the petroleum literature, however, iterative ensemble smoothers with localization of the Kalman gain matrix have become the state-of-the-art approach for ensemble-based history matching. By forming the Kalman gain matrix row by row, the analysis step can also be parallelized. Localization regularizes updates to model parameters and state variables using information on the distance between these variables and the observations. The truncation of small singular values in truncated singular value decomposition (TSVD) at the analysis step provides another type of regularization by projecting updates onto dominant directions spanned by the simulated data ensemble. Typically, the combined use of localization and TSVD is necessary for problems with large amounts of data. In this paper, we compare the performance of Kalman gain localization to two forms of local analysis for parameter estimation problems with nonlocal data. The effect of TSVD with different localization methods and with the use of iteration is also analyzed. With several examples, we show that good results can be obtained for all localization methods if the localization range is chosen appropriately, but the optimal localization range differs among the methods. In general, for local analysis with an observation taper, the optimal range is somewhat shorter than for the other localization methods. Although all methods gave equivalent results when used in an iterative ensemble smoother, the local analysis methods generally converged more quickly than Kalman gain localization when the amount of data was large compared to the ensemble size.
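As a concrete illustration of distance-based localization, the sketch below applies a Gaspari-Cohn taper to the Kalman gain via a Schur (element-wise) product. The 1-D parameter and observation positions, the critical length L, and the diagonal measurement errors are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

def gaspari_cohn(z):
    """Gaspari-Cohn fifth-order compactly supported correlation function."""
    z = np.abs(np.asarray(z, dtype=float))
    c = np.zeros_like(z)
    m1, m2 = z <= 1.0, (z > 1.0) & (z < 2.0)
    z1, z2 = z[m1], z[m2]
    c[m1] = -0.25*z1**5 + 0.5*z1**4 + 0.625*z1**3 - (5/3)*z1**2 + 1.0
    c[m2] = (z2**5)/12 - 0.5*z2**4 + 0.625*z2**3 + (5/3)*z2**2 - 5*z2 \
            + 4.0 - 2.0/(3.0*z2)
    return c

def localized_update(M, D, d_obs, sd, pos_m, pos_d, L, rng):
    """One smoother update with Schur-product localization of the Kalman gain.
    M: (Nm, Ne) parameter ensemble; D: (Nd, Ne) simulated data ensemble."""
    Ne = M.shape[1]
    dM = M - M.mean(1, keepdims=True)
    dD = D - D.mean(1, keepdims=True)
    Cdd = dD @ dD.T + (Ne - 1) * sd**2 * np.eye(len(d_obs))
    K = dM @ dD.T @ np.linalg.inv(Cdd)                     # raw Kalman gain
    rho = gaspari_cohn(np.abs(pos_m[:, None] - pos_d[None, :]) / L)
    E = sd * rng.normal(size=(len(d_obs), Ne))             # perturbed observations
    return M + (rho * K) @ (d_obs[:, None] + E - D)        # tapered update
```

Local analysis with an observation taper would instead solve a separate, distance-weighted update per grid block, which is what makes its optimal range differ from the gain-tapering shown here.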

3.
This paper examines the properties of the Iterated Ensemble Smoother (IES) and the Multiple Data Assimilation Ensemble Smoother (ES–MDA) for solving the history matching problem. The iterative methods are compared with the standard Ensemble Smoother (ES) to improve the understanding of the similarities and differences between them. We derive the three smoothers from Bayes’ theorem for a scalar case, which allows us to compare the equations solved by the three methods and to better understand which assumptions are applied and their consequences. When working with a scalar model, it is possible to use a vast ensemble size, and we can construct the sample distributions for both priors and posteriors, as well as intermediate iterates. For a linear model, all three methods give the same result. For a nonlinear model, the iterative methods improve on the ES result, but the two iterative methods converge to different solutions, and it is not clear which should be the preferred choice. It is clear that the ensemble of cost functions used to define the IES solution does not represent an exact sampling of the posterior Bayesian probability density function. Also, the use of an ensemble representation for the gradient in IES introduces an additional approximation compared to using an exact analytic gradient. For ES–MDA, the convergence, as a function of an increasing number of uniform update steps, is studied for a huge ensemble size. We illustrate that ES–MDA converges to a solution that differs from the Bayesian posterior. The convergence is also examined using a realistic sample size to study the impact of the number of realizations relative to the number of update steps. We have run multiple ES–MDA experiments to examine the impact of different schemes for choosing the lengths of the update steps, and we have tried to understand which properties of the inverse problem imply that a non-uniform update step length is beneficial. Finally, we have examined the smoother methods with a highly nonlinear model to explore their properties and limitations in more extreme situations.
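To make the scalar comparison concrete, here is a minimal ES–MDA sketch under assumed toy settings: a standard-normal prior, a hypothetical nonlinear forward model g(m) = m + m³, a single observation, and uniform inflation coefficients. None of these choices come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
Ne, Na = 100_000, 4                  # large ensemble, four uniform MDA steps
alpha = [Na] * Na                    # uniform inflation: sum(1/alpha_i) = 1
sd_obs, d_obs = 0.5, 1.0
g = lambda m: m + m**3               # hypothetical nonlinear forward model

m = rng.normal(0.0, 1.0, Ne)         # prior ensemble
for a in alpha:
    d = g(m)
    C = np.cov(m, d)                 # joint sample covariance of (m, d)
    K = C[0, 1] / (C[1, 1] + a * sd_obs**2)        # gain with inflated error
    d_pert = d_obs + np.sqrt(a) * sd_obs * rng.normal(size=Ne)
    m = m + K * (d_pert - d)         # smoother update of the parameter
print("posterior mean/std:", m.mean(), m.std())
```

With a linear g, the loop reproduces the single-step ES result exactly, which is the equivalence the scalar derivation makes visible; with the nonlinear g, the converged sample differs from the Bayesian posterior.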

4.
In this study, a multi-linear regression (MLR) approach is used to construct an intermittent reservoir daily inflow forecasting system. To illustrate the applicability and the effect of using lumped and distributed input data in the MLR approach, the Koyna River watershed in Maharashtra, India, is chosen as a case study. The results are also compared with autoregressive integrated moving average (ARIMA) models. MLR models the relationship between two or more independent variables and a dependent variable by fitting a linear regression equation. The main aim of the present study is to assess the development and applicability of simple models when a sufficient data length is available. Out of 47 years of daily historical rainfall and reservoir inflow data, 33 years are used for building the model and 14 years for validating it. Based on the observed daily rainfall and reservoir inflow, various types of time-series, cause-effect, and combined models are developed using lumped and distributed input data. Model performance was evaluated using various performance criteria, and it was found that, in the present case of well-correlated input data, both lumped and distributed MLR models perform equally well. For the case study considered, the MLR and ARIMA models also performed equally well owing to the availability of a large dataset.
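As a sketch of a lumped cause-effect MLR model of this kind, the snippet below regresses today's inflow on lagged basin-average rainfall and lagged inflow, with a roughly 70/30 calibration/validation split mirroring the 33/14-year division. The two-day lag structure and the synthetic data are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def make_lagged(rain, inflow, n_lags=2):
    """Stack rainfall and inflow lags 1..n_lags as predictors for day t."""
    X = np.column_stack(
        [rain[n_lags - k:len(rain) - k] for k in range(1, n_lags + 1)] +
        [inflow[n_lags - k:len(inflow) - k] for k in range(1, n_lags + 1)])
    y = inflow[n_lags:]
    return X, y

# synthetic stand-in for ~47 years of daily rainfall/inflow records
rng = np.random.default_rng(1)
rain = rng.gamma(2.0, 5.0, 17_000)
inflow = np.convolve(rain, [0.5, 0.3, 0.2], mode="same") + rng.normal(0, 1, 17_000)

X, y = make_lagged(rain, inflow)
split = int(0.7 * len(y))                    # calibration vs. validation period
mlr = LinearRegression().fit(X[:split], y[:split])
print("validation R^2:", mlr.score(X[split:], y[split:]))
```

A distributed variant would simply replace the single basin-average rainfall column with one lagged column per rain gauge.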

5.
In this paper we present an extension of the ensemble Kalman filter (EnKF) specifically designed for multimodal systems. The EnKF data assimilation scheme is less accurate when used to approximate systems with multimodal distributions, such as reservoir facies models. The algorithm is based on the assumption that both the prior and the posterior distributions can be approximated by Gaussian mixtures, and it is validated by introducing the concept of a finite ensemble representation. The effectiveness of the approach is shown with two applications. The first example is based on the Lorenz model. In the second example, the proposed methodology, combined with a localization technique, is used to update a 2D reservoir facies model. Both applications give evidence of the improved performance of the proposed method with respect to the EnKF.
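A minimal sketch of a Gaussian-mixture analysis step for a scalar state observed directly with Gaussian noise: each component is Kalman-updated and the weights are rescaled by the innovation likelihood. The bimodal prior standing in for two facies is illustrative, and the paper's method additionally relies on the finite ensemble representation, which this sketch omits.

```python
import numpy as np

def gm_update(w, mu, var, y, r):
    """Kalman-update each Gaussian component; reweight by innovation likelihood."""
    s = var + r                              # innovation variance per component
    lik = np.exp(-0.5 * (y - mu) ** 2 / s) / np.sqrt(2 * np.pi * s)
    k = var / s                              # per-component Kalman gain
    mu_post = mu + k * (y - mu)
    var_post = (1.0 - k) * var
    w_post = w * lik
    return w_post / w_post.sum(), mu_post, var_post

# bimodal prior, e.g. two facies with distinct property means (assumption)
w, mu, var = np.array([0.5, 0.5]), np.array([-2.0, 2.0]), np.array([0.3, 0.3])
w2, mu2, var2 = gm_update(w, mu, var, y=1.5, r=0.5)
print("posterior weights:", w2)   # the mode consistent with the data dominates
```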

6.
In recent years, the ensemble Kalman filter (EnKF) has become a very popular tool for history matching petroleum reservoirs. EnKF is an alternative to more traditional history matching techniques, as it is computationally fast and easy to implement. Instead of seeking one best model estimate, EnKF is a Monte Carlo method that represents the solution with an ensemble of state vectors. Lately, several ensemble-based methods have been proposed to improve upon the solution produced by EnKF. In this paper, we compare EnKF with one of the most recently proposed methods, the adaptive Gaussian mixture filter (AGM), on a 2D synthetic reservoir and the Punq-S3 test case. AGM was introduced to relax the requirement of a Gaussian prior distribution implicit in EnKF. By combining ideas from particle filters with EnKF, AGM extends the low-rank kernel particle Kalman filter. The simulation study shows that while both methods match the historical data well, AGM is better at preserving the geostatistics of the prior distribution. Further, AGM also produces estimated fields that have a higher empirical correlation with the reference field than the corresponding fields obtained with EnKF.

7.
While 3D seismic has been the basis for geological model building for a long time, time-lapse seismic has primarily been used in a qualitative manner to assist in monitoring reservoir behavior. With the growing acceptance of assisted history matching methods has come an equally rising interest in incorporating 3D or time-lapse seismic data into the history matching process in a more quantitative manner. The common approach in recent studies has been to invert the seismic data to elastic or dynamic reservoir properties, typically acoustic impedance or saturation changes. Here we consider the use of both 3D and time-lapse seismic amplitude data based on a forward modeling approach that does not require any inversion in the traditional sense. Advantages of such an approach may be better estimation and treatment of model and measurement errors, the combination of two inversion steps into one by removing the explicit inversion to state-space variables, and more consistent dependence on the validity of assumptions underlying the inversion process. In this paper, we introduce this approach with the use of an assisted history matching method in mind. Two ensemble-based methods, the ensemble Kalman filter and the ensemble randomized maximum likelihood method, are used to investigate issues arising from the use of seismic amplitude data, and possible solutions are presented. Experiments with a 3D synthetic reservoir model show that additional information on the distribution of reservoir fluids, and on rock properties such as porosity and permeability, can be extracted from the seismic data. The roles of localization and iterative methods are discussed in detail.

8.
Floods are among nature's most destructive disasters because of the immense damage they cause to land and buildings and the human fatalities they bring. It is difficult to forecast the areas that are vulnerable to flash flooding because of the dynamic and complex nature of flash floods. Therefore, earlier identification of flash-flood-susceptible sites can be performed using advanced machine learning models for managing flood disasters. In this study, we applied and assessed two new hybrid ensemble models, namely Dagging and Random Subspace (RS), coupled with Artificial Neural Network (ANN), Random Forest (RF), and Support Vector Machine (SVM), three state-of-the-art machine learning models, for modelling flood susceptibility maps of the Teesta River basin in the northern region of Bangladesh. The application of these models included twelve flood-influencing factors and 413 current and former flooding points, which were transferred into a GIS environment. The information gain ratio and multicollinearity diagnostics tests were employed to determine the association between the occurrences and the flood-influencing factors. For the validation and comparison of the models' predictive ability, statistical appraisal measures such as the Friedman, Wilcoxon signed-rank, and paired t-tests, as well as the Receiver Operating Characteristic (ROC) curve, were employed. The Area Under the Curve (AUC) of the ROC was above 0.80 for all models. For flood susceptibility modelling, the Dagging model performed best, followed by RF, ANN, SVM, and RS, and then the benchmark models. The approach and solution-oriented outcomes outlined in this paper will assist state and local authorities as well as policy makers in reducing flood-related threats and will also assist in the implementation of effective mitigation strategies to reduce future damage.
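As a sketch of one such hybrid ensemble, the snippet below builds a Random Subspace ensemble of neural networks with scikit-learn by training each base learner on a random half of the factors. Dagging has no direct scikit-learn analogue, so only RS is shown, and the 12-factor data here are synthetic placeholders for the real flood inventory.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
X = rng.normal(size=(413, 12))          # 413 points x 12 influencing factors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 413) > 0).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
rs_ann = BaggingClassifier(
    estimator=MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000),
    n_estimators=10,
    max_features=0.5,        # each learner sees a random half of the factors
    bootstrap=False,         # subspace method: subsample features, not rows
    random_state=0).fit(Xtr, ytr)
print("AUC:", roc_auc_score(yte, rs_ann.predict_proba(Xte)[:, 1]))
```

Swapping the MLP for a RandomForestClassifier or SVC reproduces the other coupled variants of the comparison.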

9.
10.
The Bayesian framework is the standard approach for data assimilation in reservoir modeling. This framework involves characterizing the posterior distribution of geological parameters in terms of a given prior distribution and data from the reservoir dynamics, together with a forward model connecting the space of geological parameters to the data space. Since the posterior distribution quantifies the uncertainty in the geological parameters of the reservoir, the characterization of the posterior is fundamental for the optimal management of reservoirs. Unfortunately, due to the large-scale, highly nonlinear properties of standard reservoir models, characterizing the posterior is computationally prohibitive. Instead, more affordable ad hoc techniques, based on Gaussian approximations, are often used for characterizing the posterior distribution. Evaluating the performance of those Gaussian approximations is typically conducted by assessing their ability to reproduce the truth within the confidence interval provided by the ad hoc technique under consideration. This has the disadvantage of mixing up the approximation properties of the history matching algorithm employed with the information content of the particular observations used, making it hard to evaluate the effect of the ad hoc approximations alone. In this paper, we avoid this disadvantage by comparing the ad hoc techniques with a fully resolved, state-of-the-art probing of the Bayesian posterior distribution. The ad hoc techniques whose performance we assess are based on (1) linearization around the maximum a posteriori estimate, (2) randomized maximum likelihood, and (3) ensemble Kalman filter-type methods. In order to fully resolve the posterior distribution, we implement a state-of-the-art Markov chain Monte Carlo (MCMC) method that scales well with respect to the dimension of the parameter space, enabling us to study realistic forward models, in two space dimensions, at a high level of grid refinement. Our implementation of the MCMC method provides the gold standard against which the aforementioned Gaussian approximations are assessed. We present numerical synthetic experiments where we quantify the capability of each of the ad hoc Gaussian approximations to reproduce the mean and the variance of the posterior distribution (characterized via MCMC) associated with a data assimilation problem. Both single-phase and two-phase (oil–water) reservoir models are considered so that fundamental differences in the resulting forward operators are highlighted. The main objective of our controlled experiments is to exhibit the substantial discrepancies in the approximation properties of standard ad hoc Gaussian approximations. Numerical investigations of the type we present here will lead to a greater understanding of the cost-efficient, but ad hoc, Bayesian techniques used for data assimilation in petroleum reservoirs and hence ultimately to improved techniques with more accurate uncertainty quantification.
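The kind of fully resolved posterior probing used as the gold standard can be illustrated with a scalar random-walk Metropolis sampler. The quadratic forward model, prior, and noise level below are toy stand-ins for the reservoir forward operator, and the paper's actual MCMC method is dimension-robust rather than a plain random walk.

```python
import numpy as np

rng = np.random.default_rng(3)
d_obs, sd = 2.0, 0.3
g = lambda m: m**2                      # hypothetical nonlinear forward model

def log_post(m):
    """Log-posterior: N(0,1) prior plus Gaussian data misfit."""
    return -0.5 * m**2 - 0.5 * ((g(m) - d_obs) / sd) ** 2

m, chain, step = 0.5, [], 0.4
for _ in range(200_000):
    prop = m + step * rng.normal()      # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(m):
        m = prop                        # Metropolis accept
    chain.append(m)
chain = np.asarray(chain[50_000:])      # discard burn-in
# the posterior here is bimodal (modes near +/- sqrt(d_obs)), which is exactly
# the structure a single Gaussian approximation cannot represent
print("posterior mean/std:", chain.mean(), chain.std())
```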

11.
The ensemble Kalman filter (EnKF) has become a popular method for history matching production and seismic data in petroleum reservoir models. However, it is known that EnKF may fail to give acceptable data matches, especially for highly nonlinear problems. In this paper, we introduce a procedure to improve EnKF data matches based on assimilating the same data multiple times with the covariance matrix of the measurement errors multiplied by the number of data assimilations. We prove the equivalence between single and multiple data assimilations for the linear-Gaussian case and present computational evidence that multiple data assimilations can improve EnKF estimates for the nonlinear case. The proposed procedure was tested by assimilating time-lapse seismic data in two synthetic reservoir problems, and the results show significant improvements compared to the standard EnKF. In addition, we review the inversion schemes used in the EnKF analysis and present a rescaling procedure to avoid loss of information during the truncation of small singular values.
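A matrix-form sketch of the procedure under a linear toy model: the same data are assimilated Na times with the measurement-error covariance multiplied by Na, and the inversion uses a TSVD-style pseudo-inverse that truncates the smallest singular values. The dimensions, the 99.9% energy cutoff, and the random linear operator are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
Nm, Nd, Ne, Na = 50, 20, 40, 4
G = rng.normal(size=(Nd, Nm))            # linear forward model (illustrative)
Cd = 0.1 * np.eye(Nd)
m_true = rng.normal(size=Nm)
d_obs = G @ m_true + rng.multivariate_normal(np.zeros(Nd), Cd)

M = rng.normal(size=(Nm, Ne))            # prior ensemble
for _ in range(Na):
    D = G @ M
    dM = M - M.mean(1, keepdims=True)
    dD = D - D.mean(1, keepdims=True)
    C = dD @ dD.T + Na * (Ne - 1) * Cd   # inflated measurement errors
    U, s, _ = np.linalg.svd(C)
    keep = s.cumsum() / s.sum() <= 0.999 # truncate small singular values
    Cinv = (U[:, keep] / s[keep]) @ U[:, keep].T
    E = rng.multivariate_normal(np.zeros(Nd), Na * Cd, Ne).T  # inflated noise
    M = M + dM @ dD.T @ Cinv @ (d_obs[:, None] + E - D)
print("final mean data misfit:", np.linalg.norm(G @ M.mean(1) - d_obs))
```

In this linear-Gaussian setting the four inflated updates are statistically equivalent to one standard update, which is the equivalence the paper proves.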

12.
Reservoir management requires periodic updates of the simulation models using the production data available over time. Traditionally, validation of reservoir models with production data is done using a history matching process. Uncertainties in the data, as well as in the model, lead to a nonunique history matching inverse problem. It has been shown that the ensemble Kalman filter (EnKF) is an adequate method for predicting the dynamics of the reservoir. The EnKF is a sequential Monte Carlo approach that uses an ensemble of reservoir models. For realistic, large-scale applications, the ensemble size needs to be kept small for computational efficiency. Consequently, the error space is not well covered (poor cross-correlation matrix approximations), and the updated parameter field becomes scattered and loses important geological features (for example, the contact between high- and low-permeability values). The prior geological knowledge present at the initial time is no longer found in the final updated parameter field. We propose a new approach to overcome some of the EnKF limitations. This paper presents the specifications and results of the ensemble multiscale filter (EnMSF) for automatic history matching. EnMSF replaces, at each update time, the prior sample covariance with a multiscale tree. The global dependence is preserved via the parent–child relation in the tree (nodes at adjacent scales). After constructing the tree, the Kalman update is performed. The properties of the EnMSF are presented here with a small 2D, two-phase (oil and water) twin experiment, and the results are compared to the EnKF. The advantages of using the EnMSF are localization in space and scale, adaptability to prior information, and efficiency when many measurements are available. These advantages make the EnMSF a practical tool for many data assimilation problems.

13.
Hydraulic fracturing involves the initiation and propagation of fractures in rock formations by the injection of pressurized fluid. The largest use of hydraulic fracturing is in enhancing oil and gas production. Tiltmeters are sometimes used in the process to monitor the generated fracture geometry by measuring the fracture-induced deformations. Fracture growth parameters obtained from tiltmeter mapping can be used to study the effectiveness of such stimulations. In this work, we present a novel scheme that uses the ensemble Kalman filter (EnKF) to assimilate tiltmeter data, using a simple process model to describe the evolution of fracture growth parameters and an observation model that maps the fracture geometry to the observed tilt. The forward observation model is based on the analytical solution, developed by Okada, for computing the displacements and tilts due to a point-source displacement discontinuity in an elastic half-space. The displacements and tilts for any given fracture geometry are then obtained by numerical integration of this solution, considering multiple point sources located at the quadrature points. The proposed method is validated using synthetic data sets generated from polygonal and elliptical fracture geometries. Finally, real data from a field site, where asymmetry was measured from the intersections of the hydraulic fracture with offset boreholes, have been analyzed. Preliminary results show that, in addition to extracting the fracture dip, orientation, and volume, the procedure is able to satisfactorily predict fracture growth parameters when the fracture is relatively close to the tiltmeter array, and it provides some insight into the development of asymmetry when the measurements are relatively far from the fracture plane.

14.
An iterative ensemble Kalman filter for reservoir engineering applications
This study focuses on the usage and applicability of ensemble Kalman filtering techniques in history matching procedures. The ensemble Kalman filter (EnKF) is often applied to this problem. However, the traditional EnKF assumes normality of the distributions and is based on a linear update in the analysis equations. These facts may cause problems when the filter is used in reservoir applications and can result in sampling error. The situation becomes more problematic if the a priori information on the reservoir structure is poor and the initial guess about, e.g., the permeability field is far from the actual one. This motivates further research on a specific modification of the EnKF-based approach, namely the iterative EnKF (IEnKF) scheme, which restarts the procedure with a new initial guess that is closer to the actual solution and hence requires less correction by the algorithm while providing better parameter estimates. The paper presents examples for which the IEnKF algorithm works better than the traditional EnKF. The algorithms are compared on the estimation of the permeability field for a two-phase, two-dimensional fluid flow model.
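One plausible reading of the restart idea, sketched for a scalar parameter of a toy exponential "production" model: after each full sequential EnKF pass, a fresh ensemble is drawn around the improved estimate and the pass is repeated. The model, spreads, and restart rule below are illustrative assumptions, not the exact scheme of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
Ne, n_outer, sd = 200, 4, 0.05
g = lambda k, t: np.exp(-k * t)              # toy "production decline" model
t_obs = np.array([0.5, 1.0, 2.0])
k_true = 1.3
d_obs = g(k_true, t_obs) + sd * rng.normal(size=t_obs.size)

k_mean, k_spread = 3.0, 1.0                  # deliberately poor initial guess
for outer in range(n_outer):
    k = k_mean + k_spread * rng.normal(size=Ne)   # fresh ensemble around guess
    for i, t in enumerate(t_obs):            # one sequential EnKF pass
        d = g(k, t)
        C = np.cov(k, d)
        K = C[0, 1] / (C[1, 1] + sd**2)      # scalar Kalman gain
        k = k + K * (d_obs[i] + sd * rng.normal(size=Ne) - d)
    k_mean = k.mean()                        # restart from the improved guess
    print(f"outer iteration {outer}: k estimate = {k_mean:.3f}")
```

Because each pass starts from a better-centered ensemble, the linear update has less nonlinearity to correct for, which is the intuition the abstract describes.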

15.
16.
The conventional paradigm for predicting future reservoir performance from existing production data involves the construction of reservoir models that match the historical data through iterative history matching. This is generally an expensive and difficult task and often results in models that do not accurately assess the uncertainty of the forecast. We propose an alternative reformulation of the problem, in which the role of the reservoir model is reconsidered. Instead of using the model to match the historical production and then forecasting, the model is used in combination with Monte Carlo sampling to establish a statistical relationship between the historical and forecast variables. The estimated relationship is then used in conjunction with the actual production data to produce a statistical forecast. This allows quantifying posterior uncertainty on the forecast variable without explicit inversion or history matching. The main rationale behind this is that the reservoir model is highly complex and, even so, remains a simplified representation of the actual subsurface. As statistical relationships can generally only be constructed in low dimensions, compression and dimension reduction of the reservoir models themselves would result in further oversimplification. Conversely, production data and forecast variables are time series, which are simpler and much more amenable to dimension reduction techniques. We present a dimension reduction approach based on functional data analysis (FDA) and mixed principal component analysis (mixed PCA), followed by canonical correlation analysis (CCA) to maximize the linear correlation between the forecast and production variables. Using these transformed variables, it is then possible to apply linear Gaussian regression and estimate the statistical relationship between the forecast and historical variables. This relationship is used in combination with the actually observed historical data to estimate the posterior distribution of the forecast variable. Sampling from this posterior and reconstructing the corresponding forecast time series allows the uncertainty of the forecast to be assessed. This workflow is demonstrated on a case based on a Libyan reservoir and compared with traditional history matching.
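A compressed sketch of this statistical workflow using scikit-learn: PCA reduces the simulated historical and forecast series, CCA maximizes their linear correlation, and a linear Gaussian regression maps canonical history scores to forecast components. The functional (FDA) smoothing step and the posterior sampling are omitted here, and the synthetic ensemble of runs is a placeholder for the Monte Carlo reservoir simulations.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
n_runs, nt = 500, 100
latent = rng.normal(size=(n_runs, 3))                  # hidden reservoir factors
hist = latent @ rng.normal(size=(3, nt)) + 0.1 * rng.normal(size=(n_runs, nt))
fore = latent @ rng.normal(size=(3, nt)) + 0.1 * rng.normal(size=(n_runs, nt))

h_pc = PCA(n_components=3).fit_transform(hist)         # reduce history series
f_pca = PCA(n_components=3).fit(fore)                  # reduce forecast series
f_pc = f_pca.transform(fore)
cca = CCA(n_components=2).fit(h_pc, f_pc)              # maximize correlation
h_c = cca.transform(h_pc)
reg = LinearRegression().fit(h_c, f_pc)                # linear Gaussian regression

# treat the first run as the "observed" history for illustration
f_pc_hat = reg.predict(cca.transform(h_pc[:1]))
f_hat = f_pca.inverse_transform(f_pc_hat)              # back to a forecast series
print("RMSE vs that run's true forecast:",
      np.sqrt(np.mean((f_hat[0] - fore[0]) ** 2)))
```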

17.
Future trends in the occurrence of heat waves (HWs) over Pakistan are presented using three regional climate models (RCMs), forced by runs of three different global climate models (GCMs) under the RCP8.5 scenario. The RCM results are obtained from the CORDEX (Coordinated Regional climate Downscaling EXperiment) database. Two different approaches for the assessment of HWs are defined, namely the Fixed and Relative approaches. The Fixed approach targets life-threatening extreme events in which the temperature reaches more than 45 °C for a continuous stretch of several days; Relative-approach events may not be directly life-threatening but may cause snow/ice-melt flooding and affect the food security of the country in the summer and winter seasons, respectively. The results indicate a consistent increase in the occurrence of HWs for both approaches. For the Fixed approach, the increase is evident in the eastern areas of Pakistan, particularly the plains of Punjab and Sindh provinces, which host many of the country's big cities. It is argued that the effect of HWs may also be exacerbated in the future by the urban heat island effect. Moreover, summer-time HWs under the Relative approach are most likely to increase over the northern areas of the country, which host reservoirs of snow and glaciers, possibly resulting in events such as glacial lake outburst floods and snow/ice-melt flooding. Furthermore, the increase in winter-time HWs under the Relative approach may negatively affect wheat production, which in turn can distress the overall food productivity and livelihoods of the country. It is concluded that this study may be a useful document for future planning in order to better adapt to these climate change threats.
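The Fixed-approach event count can be sketched as a run-length scan over daily maximum temperature: an event is a stretch of consecutive days above 45 °C. The five-day minimum duration and the synthetic series below are assumptions, since the abstract only specifies "several days".

```python
import numpy as np

def count_heat_waves(tmax, threshold=45.0, min_days=5):
    """Count runs of >= min_days consecutive days with tmax above threshold."""
    hot = np.concatenate(([0], (tmax > threshold).astype(int), [0]))
    edges = np.diff(hot)                       # +1 at run starts, -1 at run ends
    starts = np.where(edges == 1)[0]
    ends = np.where(edges == -1)[0]
    return int(np.sum((ends - starts) >= min_days))

rng = np.random.default_rng(7)
tmax = 38 + 8 * rng.random(365)                # synthetic daily maxima, one year
print("HW events:", count_heat_waves(tmax))
```

A Relative-approach count would replace the fixed 45 °C threshold with a percentile of the local climatology.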

18.
19.
Given that strong heterogeneity has a large impact on coalbed methane production, the lateral and vertical heterogeneity of coal seams can be quantitatively evaluated by analyzing the distribution of log-derived permeability. The matrix permeability of the coal seam is determined with Darcy's formula, the fracture permeability is estimated from dual laterolog data, the cumulative distribution curve of the permeability contribution is plotted in ordinal-percentile coordinates, and the slope of the cumulative distribution curve after the coordinate transformation is defined as the heterogeneity coefficient. Practical applications show that the heterogeneity coefficient can reflect lateral and vertical differences in the physical properties of different coal seams, so the heterogeneity of coal seams can be quantitatively evaluated.
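A hedged sketch of such a heterogeneity index: sort the layer permeabilities, build the cumulative contribution curve against ordinal percentiles, and take the slope of a straight-line fit as the coefficient. The exact coordinate transformation used in the paper is not reproduced here, so the linear fit is an assumption.

```python
import numpy as np

def heterogeneity_coefficient(perm):
    """Slope of a linear fit to the cumulative permeability-contribution curve."""
    k = np.sort(perm)[::-1]                      # descending permeability
    contrib = np.cumsum(k) / k.sum()             # cumulative contribution
    pct = np.arange(1, len(k) + 1) / len(k)      # ordinal percentiles
    slope, _ = np.polyfit(pct, contrib, 1)       # fitted slope as the index
    return slope

homog = np.full(20, 5.0)                          # uniform seam: slope ~ 1
heterog = np.array([50.0] * 2 + [1.0] * 18)       # fracture-dominated seam
print(heterogeneity_coefficient(homog), heterogeneity_coefficient(heterog))
```

The more the flow capacity is concentrated in a few intervals, the further the fitted slope departs from the homogeneous value of 1.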

20.
Probabilistic prediction can convey the intrinsic uncertainty of a forecast, which helps decision makers manage climate risk more efficiently than deterministic forecasts do. In recent times, probabilistic predictions obtained from the products of General Circulation Models (GCMs) have gained considerable attention. A probabilistic forecast can be generated in parametric (assuming a Gaussian distribution) as well as non-parametric (counting method) ways. The present study deals with the non-parametric approach, which requires no assumption about the form of the forecast distribution, for the prediction of Indian summer monsoon rainfall (ISMR) based on the hindcast runs of seven general circulation models from 1982 to 2008. Probabilistic predictions from each of the GCM products have been generated by non-parametric methods for tercile categories (viz. below normal (BN), near normal (NN), and above normal (AN)), and their skill has been evaluated against observed data. Five different probabilistic multimodel ensemble (PMME) schemes, which differ in how the weights for combining probabilities are assigned, have been used to combine the probabilities from the individual GCMs and improve the forecast skill. After a rigorous analysis using the Rank Probability Skill Score (RPSS) and the relative operating characteristic (ROC) curve, the superiority of PMME over climatological probability has been established. It is also found that PMME1 and PMME3 perform better than all the other methods, with PMME3 showing more improvement than PMME1.
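The non-parametric counting method reduces to tallying ensemble members per tercile, with category boundaries taken from the hindcast climatology. The rainfall numbers and ensemble size below are toy values, not the actual GCM hindcasts.

```python
import numpy as np

rng = np.random.default_rng(8)
climatology = rng.normal(850.0, 80.0, 27 * 10)     # hindcast ISMR values (toy)
t1, t2 = np.percentile(climatology, [100 / 3, 200 / 3])   # tercile boundaries

members = rng.normal(900.0, 60.0, 24)              # one year's ensemble (toy)
p_bn = np.mean(members < t1)                       # below normal
p_nn = np.mean((members >= t1) & (members <= t2))  # near normal
p_an = np.mean(members > t2)                       # above normal
print(f"P(BN)={p_bn:.2f}  P(NN)={p_nn:.2f}  P(AN)={p_an:.2f}")
```

A PMME scheme then combines such per-GCM probability triples with weights, which is where the five schemes in the study differ.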
