Similar Literature
1.
The ensemble Kalman filter has been successfully applied for data assimilation in very large models, including those in reservoir simulation and weather. Two problems become critical in a standard implementation of the ensemble Kalman filter, however, when the ensemble size is small. The first is that the ensemble approximation to the cross-covariances between model or state variables and the data can indicate the presence of correlations that are not real. These spurious correlations give rise to model or state variable updates in regions that should not be updated. The second problem is that the number of degrees of freedom in the ensemble is only as large as the size of the ensemble, so the assimilation of large amounts of precise, independent data is impossible. Localization of the Kalman gain is almost universal in the weather community, but applications of localization for the ensemble Kalman filter in porous media flow have been somewhat rare. It has been shown, however, that localization of updates to regions of non-zero sensitivity or regions of non-zero cross-covariance improves the performance of the EnKF when the ensemble size is small. Localization is necessary for assimilation of large amounts of independent data. The problem is to define appropriate localization functions for different types of data and different types of variables. We show that knowledge of the sensitivity alone is not sufficient for determination of the region of localization. The region depends also on the prior covariance for model variables and on the past history of data assimilation. Although the goal is to choose localization functions that are large enough to include the true region of non-zero cross-covariance, for EnKF applications the choice of localization function needs to balance the harm done by spurious covariance resulting from small ensembles and the harm done by excluding real correlations. In this paper, we focus on distance-based localization and provide insights for choosing suitable localization functions for data assimilation in multiphase flow problems. In practice, we conclude that it is reasonable to choose localization functions based on well patterns, that the localization function should be larger than the regions of non-zero sensitivity, and that it should extend beyond a single well pattern.
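The taper below is a minimal sketch of the distance-based localization idea discussed above: an ensemble estimate of the Kalman gain for a single observation is multiplied element-wise by a Gaspari–Cohn function of the distance to the well. The function names, the scalar observation error, and the cutoff length are illustrative assumptions, not the paper's recipe for choosing the localization region.

```python
import numpy as np

def gaspari_cohn(r):
    """Fifth-order piecewise-rational Gaspari-Cohn taper.
    r = distance / cutoff; weights are 1 at r = 0 and vanish for r >= 2."""
    r = np.abs(np.asarray(r, dtype=float))
    w = np.zeros_like(r)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r < 2.0)
    r1, r2 = r[m1], r[m2]
    w[m1] = -0.25*r1**5 + 0.5*r1**4 + 0.625*r1**3 - (5.0/3.0)*r1**2 + 1.0
    w[m2] = ((1.0/12.0)*r2**5 - 0.5*r2**4 + 0.625*r2**3
             + (5.0/3.0)*r2**2 - 5.0*r2 + 4.0 - 2.0/(3.0*r2))
    return w

def localized_gain(X, d_pred, obs_var, dist_to_well, cutoff):
    """Distance-tapered Kalman gain for one scalar observation.
    X: (Nm, Ne) ensemble of model/state variables; d_pred: (Ne,) predicted data."""
    Ne = X.shape[1]
    dX = X - X.mean(axis=1, keepdims=True)
    dd = d_pred - d_pred.mean()
    C_md = dX @ dd / (Ne - 1)            # cross-covariance of variables with data
    C_dd = dd @ dd / (Ne - 1)            # data auto-covariance
    K = C_md / (C_dd + obs_var)          # raw ensemble Kalman gain (one column)
    return gaspari_cohn(dist_to_well / cutoff) * K   # Schur-tapered gain
```

Choosing `cutoff` so that the taper extends beyond a single well pattern, as the paper suggests, would simply mean passing a cutoff length larger than the well spacing.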

2.
Improving the Ensemble Estimate of the Kalman Gain by Bootstrap Sampling
Using a small ensemble size in the ensemble Kalman filter methodology is efficient for updating numerical reservoir models but can result in poor updates caused by spurious correlations between observations and model variables. The most common approach for reducing the effect of spurious correlations on model updates is multiplication of the estimated covariance by a tapering function that eliminates all correlations beyond a prespecified distance. Distance-dependent tapering is not always appropriate, however. In this paper, we describe efficient methods for discriminating between the real and the spurious correlations in the Kalman gain matrix by using the bootstrap method to assess the confidence level of each element of the Kalman gain matrix. The new method is tested on a small linear problem and on a water-flooding reservoir history-matching problem. For the water-flooding example, a small ensemble size of 30 was used to compute the Kalman gain in both the screened EnKF and standard EnKF methods. The new method resulted in significantly smaller root mean squared errors of the estimated model parameters and greater variability in the final updated ensemble.
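As an illustration of the bootstrap idea, the sketch below resamples ensemble members with replacement, recomputes the Kalman gain for each resample, and damps gain elements whose bootstrap variability is large relative to their magnitude. The damping rule (a simple 1/(1 + relative variance) factor) and every name in the code are assumptions for illustration, not the authors' exact screening procedure.

```python
import numpy as np

def kalman_gain(X, d_pred, obs_var):
    """Ensemble Kalman gain for a single scalar observation."""
    Ne = X.shape[1]
    dX = X - X.mean(axis=1, keepdims=True)
    dd = d_pred - d_pred.mean()
    return (dX @ dd / (Ne - 1)) / (dd @ dd / (Ne - 1) + obs_var)

def screened_gain(X, d_pred, obs_var, n_boot=100, seed=0):
    """Damp Kalman-gain elements that are unstable under bootstrap resampling."""
    rng = np.random.default_rng(seed)
    Ne = X.shape[1]
    K = kalman_gain(X, d_pred, obs_var)
    K_boot = np.empty((n_boot, K.size))
    for b in range(n_boot):
        idx = rng.integers(0, Ne, size=Ne)         # resample members with replacement
        K_boot[b] = kalman_gain(X[:, idx], d_pred[idx], obs_var)
    rel_var = K_boot.var(axis=0) / (K**2 + 1e-12)  # bootstrap variance vs. magnitude
    return K / (1.0 + rel_var)                     # unstable elements are shrunk toward zero
```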

3.
4.
Ensemble size is critical to the efficiency and performance of the ensemble Kalman filter, but when the ensemble size is small, the Kalman gain generally cannot be well estimated. To reduce the negative effect of spurious correlations, a regularization process applied on either the covariance or the Kalman gain seems to be necessary. In this paper, we evaluate and compare the estimation errors when two regularization methods, distance-dependent localization and bootstrap-based screening, are applied to the covariance and to the Kalman gain. The investigations were carried out through two examples: a 1D linear problem without dynamics, for which the true Kalman gain can be computed, and a 2D highly nonlinear reservoir fluid flow problem. The investigation resulted in three primary conclusions. First, if the localizations of the two covariance matrices are not consistent, the estimate of the Kalman gain will generally be poor at the observation location. The consistency condition can be difficult to apply for nonlocal observations. Second, the estimate of the Kalman gain that results from covariance regularization is generally subject to greater errors than the estimate that results from Kalman gain regularization. Third, in terms of removing spurious correlations in the estimation of spatially correlated variables, the performance of screening the Kalman gain is comparable to that of the localization methods (applied to either the covariance or the Kalman gain), but screening the Kalman gain outperforms the localization methods in terms of generality of application: the screening method can be used for estimating both spatially correlated and uncorrelated variables, and, moreover, no assumption about the prior covariance is required for the screening method.

5.
Ensemble methods present a practical framework for parameter estimation, performance prediction, and uncertainty quantification in subsurface flow and transport modeling. In particular, the ensemble Kalman filter (EnKF) has received significant attention for its promising performance in calibrating heterogeneous subsurface flow models. Since an ensemble of model realizations is used to compute the statistical moments needed to perform the EnKF updates, large ensemble sizes are needed to provide accurate updates and uncertainty assessment. However, for realistic problems that involve large-scale models with computationally demanding flow simulation runs, the EnKF implementation is limited to small-sized ensembles. As a result, spurious numerical correlations can develop and lead to inaccurate EnKF updates, which tend to underestimate or even eliminate the ensemble spread. Ad hoc practical remedies, such as localization, local analysis, and covariance inflation schemes, have been developed and applied to reduce the effect of sampling errors due to small ensemble sizes. In this paper, a fast linear approximate forecast method is proposed as an alternative approach to enable the use of large ensemble sizes in operational settings to obtain improved sample statistics and EnKF updates. The proposed method first clusters a large number of initial geologic model realizations into a small number of groups. A representative member from each group is used to run a full forward flow simulation. The flow predictions for the remaining realizations in each group are approximated by a linearization around the full simulation results of the representative model (centroid) of the respective cluster. The linearization can be performed using either adjoint-based or ensemble-based gradients. Results from several numerical experiments with two-phase and three-phase flow systems in this paper suggest that the proposed method can be applied to improve the EnKF performance in large-scale problems where the number of full simulations is constrained.
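The sketch below illustrates the cluster-then-linearize idea in its simplest form: realizations are grouped by k-means, one full simulation is run per centroid, and forecasts for the remaining members are linearized around their centroid using a gradient regressed from the centroid runs. The use of k-means, the least-squares gradient, and all function names are assumptions; the paper also allows adjoint-based gradients.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def approximate_forecasts(M, run_simulator, n_clusters=10, seed=0):
    """Approximate ensemble forecasts with only `n_clusters` full simulations.

    M             : (Ne, Nm) array of model realizations, one row per member.
    run_simulator : callable mapping a model vector to a predicted-data vector.
    """
    centroids, labels = kmeans2(M, n_clusters, minit="++", seed=seed)
    d_cent = np.array([run_simulator(c) for c in centroids])  # one full run per cluster

    # Ensemble-based sensitivity regressed from the centroid runs (illustrative;
    # an adjoint gradient at each centroid could be used instead).
    dM = centroids - centroids.mean(axis=0)
    dD = d_cent - d_cent.mean(axis=0)
    G = np.linalg.lstsq(dM, dD, rcond=None)[0]                 # (Nm, Nd)

    D = np.empty((M.shape[0], d_cent.shape[1]))
    for i, m in enumerate(M):
        c = labels[i]
        D[i] = d_cent[c] + (m - centroids[c]) @ G              # linearized forecast
    return D
```

The approximated forecasts `D` could then be fed to a standard EnKF analysis in place of `Ne` full simulation runs.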

6.
Reservoir management requires periodic updates of the simulation models using the production data available over time. Traditionally, validation of reservoir models with production data is done using a history-matching process. Uncertainties in the data, as well as in the model, lead to a nonunique history-matching inverse problem. It has been shown that the ensemble Kalman filter (EnKF) is an adequate method for predicting the dynamics of the reservoir. The EnKF is a sequential Monte-Carlo approach that uses an ensemble of reservoir models. For realistic, large-scale applications, the ensemble size needs to be kept small because of the computational cost. Consequently, the error space is not well covered (poor cross-correlation matrix approximations), and the updated parameter field becomes scattered and loses important geological features (for example, the contact between high- and low-permeability values). The prior geological knowledge present at the initial time is no longer found in the final updated parameter field. We propose a new approach to overcome some of the EnKF limitations. This paper shows the specifications and results of the ensemble multiscale filter (EnMSF) for automatic history matching. The EnMSF replaces, at each update time, the prior sample covariance with a multiscale tree. The global dependence is preserved via the parent–child relation in the tree (nodes at the adjacent scales). After constructing the tree, the Kalman update is performed. The properties of the EnMSF are presented here with a 2D, two-phase (oil and water) small twin experiment, and the results are compared to the EnKF. The advantages of using the EnMSF are localization in space and scale, adaptability to prior information, and efficiency when many measurements are available. These advantages make the EnMSF a practical tool for many data assimilation problems.

7.
In this work, we present an efficient matrix-free ensemble Kalman filter (EnKF) algorithm for the assimilation of large data sets. The EnKF has increasingly become an essential tool for data assimilation of numerical models. It is an attractive assimilation method because it can evolve the model covariance matrix for a non-linear model, through the use of an ensemble of model states, and it is easy to implement for any numerical model. Nevertheless, the computational cost of the EnKF can increase significantly for cases involving the assimilation of large data sets. As more data become available for assimilation, a potential bottleneck in most EnKF algorithms involves the operations with the Kalman gain matrix. To reduce the complexity and cost of assimilating large data sets, a matrix-free EnKF algorithm is proposed. The algorithm uses an efficient matrix-free linear solver, based on the Sherman–Morrison formulas, to solve the implicit linear system within the Kalman gain matrix and compute the analysis. Numerical experiments with a two-dimensional shallow water model on the sphere are presented, where results show the matrix-free implementation outperforming a singular value decomposition-based implementation in computational time.
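A small sketch of the underlying idea of avoiding the explicit data-space covariance: with S the matrix of predicted-data anomalies, the innovation system (R + S Sᵀ/(Ne−1)) z = y can be solved using only Ne×Ne matrices. The paper's algorithm applies Sherman–Morrison recursions; the closely related Woodbury form below is a stand-in, and all variable names are illustrative.

```python
import numpy as np

def matrix_free_solve(S, r_diag, y):
    """Solve (R + S S^T / (Ne - 1)) z = y without forming the Nd x Nd covariance.

    S      : (Nd, Ne) predicted-data anomalies (ensemble mean removed).
    r_diag : (Nd,) diagonal of the observation-error covariance R.
    y      : (Nd,) right-hand side, e.g. the innovation d_obs - d_pred.
    """
    Nd, Ne = S.shape
    c = Ne - 1.0
    Rinv_y = y / r_diag
    Rinv_S = S / r_diag[:, None]
    # Woodbury identity: (R + S S^T/c)^-1 = R^-1 - R^-1 S (c I + S^T R^-1 S)^-1 S^T R^-1
    small = c * np.eye(Ne) + S.T @ Rinv_S           # only an Ne x Ne system
    return Rinv_y - Rinv_S @ np.linalg.solve(small, S.T @ Rinv_y)

# In the analysis step the model increment is then A_anom @ (S.T @ z) / (Ne - 1),
# so the Kalman gain matrix itself is never assembled.
```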

8.
The adaptive Gaussian mixture filter (AGM) was introduced as a robust filter technique for large-scale applications and an alternative to the well-known ensemble Kalman filter (EnKF). It consists of two analysis steps, one linear update and one weighting/resampling step. The bias of the AGM is determined by two parameters: one adaptive weight parameter (forcing the weights to be more uniform to avoid filter collapse) and one predetermined bandwidth parameter, which decides the size of the linear update. It has been shown that if the adaptive parameter approaches one and the bandwidth parameter decreases, as an increasing function of the sample size, the filter can achieve asymptotic optimality. For large-scale applications with a limited sample size, the filter solution may be far from optimal, as the adaptive parameter gets close to zero depending on how well the samples from the prior distribution match the data. The bandwidth parameter must often be selected significantly different from zero in order to make linear updates large enough to match the data, at the expense of bias in the estimates. In the iterative AGM we introduce here, we take advantage of the fact that the history-matching problem is usually one of estimating parameters and initial conditions. If the prior distribution of initial conditions and parameters is close to the posterior distribution, it is possible to match the historical data with a small bandwidth parameter and an adaptive weight parameter that gets close to one. Hence, the bias of the filter solution is small. In order to obtain this scenario, we iteratively run the AGM throughout the data history with a very small bandwidth to create a new prior distribution from the updated samples after each iteration. After a few iterations, nearly all samples from the previous iteration match the data, and the above scenario is achieved. A simple toy problem shows that it is possible to reconstruct the true posterior distribution using the iterative version of the AGM. Then a 2D synthetic reservoir is revisited to demonstrate the potential of the new method on large-scale problems.
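The two-step structure described above can be sketched as a Kalman-type linear update damped by the bandwidth parameter h, followed by importance weighting in which the weights are pulled toward uniform by the adaptive parameter. The particular weight formula, the choice of the adaptive parameter from the effective sample size, and all names below are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np

def agm_analysis(M, D, d_obs, Cd, h):
    """One adaptive Gaussian mixture analysis step (illustrative form).

    M : (Nm, Ne) model ensemble; D : (Nd, Ne) predicted data;
    d_obs : (Nd,) observations; Cd : (Nd, Nd) observation-error covariance;
    h : bandwidth parameter (h = 1 would give an EnKF-sized linear update).
    """
    Nm, Ne = M.shape
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    Cmd = dM @ dD.T / (Ne - 1)
    Cdd = dD @ dD.T / (Ne - 1)

    # Step 1: linear update, scaled down by the bandwidth parameter.
    K = h**2 * Cmd @ np.linalg.inv(h**2 * Cdd + Cd)
    M_upd = M + K @ (d_obs[:, None] - D)

    # Step 2: importance weights from each member's data mismatch.
    innov = d_obs[:, None] - D
    logw = -0.5 * np.einsum("ij,ij->j", innov, np.linalg.solve(h**2 * Cdd + Cd, innov))
    w = np.exp(logw - logw.max())
    w /= w.sum()

    # Adaptive parameter: interpolate toward uniform weights to avoid collapse
    # (here taken from the effective sample size).
    alpha = 1.0 / (Ne * np.sum(w**2))
    w = alpha * w + (1.0 - alpha) / Ne
    return M_upd, w
```

In the iterative variant described above, this analysis would be rerun over the whole data history with a very small h, using the updated samples as the new prior after each pass.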

9.
The ensemble Kalman filter (EnKF) has been successfully applied to data assimilation in the steam-assisted gravity drainage (SAGD) process, but applications of localization for the EnKF in the SAGD process have not been studied. Distance-based localization has been reported to be very efficient for assimilation of large amounts of independent data with a small ensemble in the water-flooding process, but it is not applicable to the SAGD process, since in SAGD, oil is produced mainly from the transition zone between the steam chamber and the cold oil rather than from the regions around the producer. As the oil production rate is mainly affected by the temperature distribution in the transition zone, temperature-based localization was proposed for automatic history matching of the SAGD process. The regions of the localization function were determined through sensitivity analysis using a large ensemble with 1000 members. The sensitivity analysis indicated that the regions of cross-correlation between oil production and state variables are much wider than those between production data and model variables. To choose localization regions that are large enough to include the true regions of non-zero cross-covariance, the localization function is defined based on the regions of non-zero covariance of production data with state variables. The non-zero covariances between production data and state variables are distributed in accordance with the steam chamber, which makes the definition of a universal localization function for different state variables easier. Based on the cross-correlation analysis, the temperature range that contributes to oil production is determined; outside this range the localization function decreases from one, reaching zero at the critical temperature or the steam temperature. The temperature-based localization function was obtained by modifying the distance-based localization function. Localization is applied to the covariance of data with permeability, saturation, and temperature, as well as the covariance of data with data. A small ensemble (10 ensemble members) was employed in several case studies. Without localization, the variability in the ensemble collapsed very quickly and the filter lost the ability to assimilate later data. The mean variance of the model variables dropped dramatically by 95%, and there was almost no variability in the ensemble forecasts, while the prediction was far from the reference and the data mismatch remained high. At least 50 ensemble members are needed to maintain the quality of the matches and forecasts, which significantly increases the computation time. The EnKF with temperature-based localization is able to avoid the collapse of ensemble variability with a small ensemble (10 members), which saves computation time and gives better history-matching and prediction results.

10.
The performance of the ensemble Kalman filter (EnKF) for continuous updating of facies location and boundaries in a reservoir model based on production and facies data is presented for a 3D synthetic problem. The occurrence of the different facies types is treated as a random process, and the initial distribution was obtained by truncating a bi-Gaussian random field. Because facies data are highly non-Gaussian, re-parameterization was necessary in order to use the EnKF algorithm for data assimilation; two Gaussian random fields are updated in lieu of the static facies parameters. The problem of history matching applied to facies is difficult because (1) constraints from facies observations at wells are occasionally violated when production data are assimilated; (2) excessive reduction of variance seems to be a bigger problem with facies than with Gaussian random permeability and porosity fields; and (3) the relationship between facies variables and data is so highly non-linear that the final facies field does not always honor early production data well. Consequently, three issues are investigated in this work. Is it possible to iteratively enforce facies constraints when updates due to production data have caused them to be violated? Can localization of adjustments be used for facies to prevent collapse of the variance during the data-assimilation period? Is a forecast from the final state better than a forecast from time zero using the final parameter fields? To investigate these issues, a 3D reservoir simulation model is coupled with the EnKF technique for data assimilation. One approach to enforcing the facies constraint is continuous iteration on all available data, which may lead to inconsistent model states, incorrect weighting of the production data, and incorrect adjustment of the state vector. A sequential EnKF in which the dynamic and static data are assimilated sequentially is presented, and this approach seems to solve the problems highlighted above. When the ensemble size is small compared to the number of independent data, localized adjustment of the state vector is a very important technique that may be used to mitigate loss of rank in the ensemble. Implementing a distance-based localization of the facies adjustment appears to mitigate the problem of variance deficiency in the ensembles by ensuring that sufficient variability in the ensemble is maintained throughout the data assimilation period. Finally, when data are assimilated without localization, the prediction results appear to be independent of the starting point. When localization is applied, it is better to predict from the start using the final parameter field rather than to continue from the final state.
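To illustrate the re-parameterization mentioned above, the sketch below maps two continuous Gaussian fields to a discrete facies indicator by truncation; the EnKF then updates the continuous fields, and the facies field is recomputed after each analysis step. The thresholds and the particular three-facies truncation rule are illustrative assumptions, not the paper's truncation map.

```python
import numpy as np

def facies_from_gaussians(y1, y2, t1=0.0, t2=0.0):
    """Truncate two Gaussian random fields into a facies indicator (0, 1, or 2)."""
    facies = np.zeros_like(y1, dtype=int)        # facies 0: background
    facies[y1 > t1] = 1                          # facies 1 where the first field is high
    facies[(y1 <= t1) & (y2 > t2)] = 2           # facies 2 controlled by the second field
    return facies

# The EnKF state vector carries the continuous fields y1 and y2 (which can be updated
# linearly); the non-Gaussian facies field is re-derived from them after every update.
rng = np.random.default_rng(0)
y1 = rng.standard_normal((50, 50))
y2 = rng.standard_normal((50, 50))
facies = facies_from_gaussians(y1, y2)
```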

11.
While 3D seismic has been the basis for geological model building for a long time, time-lapse seismic has primarily been used in a qualitative manner to assist in monitoring reservoir behavior. With the growing acceptance of assisted history matching methods has come an equally rising interest in incorporating 3D or time-lapse seismic data into the history matching process in a more quantitative manner. The common approach in recent studies has been to invert the seismic data to elastic or to dynamic reservoir properties, typically acoustic impedance or saturation changes. Here we consider the use of both 3D and time-lapse seismic amplitude data based on a forward modeling approach that does not require any inversion in the traditional sense. Advantages of such an approach may be better estimation and treatment of model and measurement errors, the combination of two inversion steps into one by removing the explicit inversion to state-space variables, and more consistent dependence on the validity of assumptions underlying the inversion process. In this paper, we introduce this approach with the use of an assisted history matching method in mind. Two ensemble-based methods, the ensemble Kalman filter and the ensemble randomized maximum likelihood method, are used to investigate issues arising from the use of seismic amplitude data, and possible solutions are presented. Experiments with a 3D synthetic reservoir model show that additional information on the distribution of reservoir fluids, and on rock properties such as porosity and permeability, can be extracted from the seismic data. The roles of localization and iterative methods are discussed in detail.

12.
The application of the ensemble Kalman filter (EnKF) for history matching petroleum reservoir models has been the subject of intense investigation during the past 10 years. Unfortunately, EnKF often fails to provide reasonable data matches for highly nonlinear problems. This fact motivated the development of several iterative ensemble-based methods in the last few years. However, there exists no study comparing the performance of these methods in the literature, especially in terms of their ability to quantify uncertainty correctly. In this paper, we compare the performance of nine ensemble-based methods in terms of the quality of the data matches, quantification of uncertainty, and computational cost. For this purpose, we use a small but highly nonlinear reservoir model so that we can generate the reference posterior distribution of reservoir properties using a very long chain generated by a Markov chain Monte Carlo sampling algorithm. We also consider one adjoint-based implementation of the randomized maximum likelihood method in the comparisons.

13.
The performance of the ensemble Kalman filter (EnKF) depends on the sample size compared to the dimension of the parameter space. In real applications, insufficient sampling may result in spurious correlations, which reduce the accuracy of the filter and lead to a strong underestimation of the uncertainty. Covariance localization and inflation are common solutions to these problems. The ensemble square root filter (ESRF) also estimates uncertainty better than the EnKF. In this work we propose a method that limits the consequences of sampling errors by means of a convenient generation of the initial ensemble. This regeneration is based on a Stationary Orthogonal-Base Representation (SOBR) obtained via a singular value decomposition of a stationary covariance matrix estimated from the ensemble. The technique is tested on a 2D single-phase reservoir and compared with the other common techniques. The evaluation is based on a reference solution obtained with a very large ensemble (one million members), which removes the spurious correlations. The example gives evidence that the SOBR technique is a valid alternative for reducing the effect of sampling error. In addition, when the SOBR method is applied in combination with the ESRF and inflation, it gives the best performance in terms of uncertainty estimation and oil production forecast.

14.
The use of the ensemble smoother (ES) instead of the ensemble Kalman filter increases the nonlinearity of the update step during data assimilation and the need for iterative assimilation methods. A previous version of the iterative ensemble smoother based on a Gauss–Newton formulation was able to match data relatively well, but only after a large number of iterations. A multiple data assimilation method (MDA) was generally more efficient for large problems but lacked the ability to continue “iterating” if the data mismatch was too large. In this paper, we develop an efficient, iterative ensemble smoother algorithm based on the Levenberg–Marquardt (LM) method of regularizing the update direction and choosing the step length. The incorporation of the LM damping parameter reduces the tendency to add model roughness at early iterations when the update step is highly nonlinear, as it often is when all data are assimilated simultaneously. In addition, the ensemble approximation of the Hessian is modified in a way that simplifies computation and increases stability. We also report on a simplified algorithm in which the model mismatch term in the updating equation is neglected. We thoroughly evaluated the new algorithm based on the modified LM method, LM-ensemble randomized maximum likelihood (LM-EnRML), and the simplified version of the algorithm, LM-EnRML (approx), on three test cases. The first is a highly nonlinear single-variable problem for which results can be compared against the true conditional pdf. The second test case is a one-dimensional two-phase flow problem in which the permeability of 31 grid cells is uncertain. In this case, Markov chain Monte Carlo results are available for comparison with ensemble-based results. The third test case is the Brugge benchmark case with both 10 and 20 years of history. The efficiency and quality of results of the new algorithms were compared with the standard ES (without iteration), the ensemble-based Gauss–Newton formulation, the standard ensemble-based LM formulation, and the MDA. Because of the high level of nonlinearity, the standard ES performed poorly on all test cases. The MDA often performed well, especially at early iterations, where the reduction in data mismatch was quite rapid. The best results, however, were always achieved with the new iterative ensemble smoother algorithms, LM-EnRML and LM-EnRML (approx).
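The simplified (approx) update described above can be sketched as a single damped smoother step in the data space: the data anomalies are scaled by the observation-error standard deviations, and the usual (Ne − 1) term in the solve is inflated by (1 + λ). The code below is a stripped-down rendering under those assumptions; the published algorithm works with a truncated SVD of the scaled anomalies rather than an explicit Nd × Nd solve, and all names are illustrative.

```python
import numpy as np

def lm_enrml_step(M, D, d_obs, cd_diag, lam, seed=0):
    """One approximate Levenberg-Marquardt damped ensemble-smoother update.

    M       : (Nm, Ne) current model ensemble.
    D       : (Nd, Ne) simulated data for each member.
    d_obs   : (Nd,) observed data.
    cd_diag : (Nd,) diagonal of the observation-error covariance.
    lam     : LM damping parameter; large at early iterations and reduced as the
              data mismatch decreases (lam -> 0 recovers a standard ES step).
    """
    rng = np.random.default_rng(seed)
    Nd, Ne = D.shape
    sd = np.sqrt(cd_diag)

    dM = M - M.mean(axis=1, keepdims=True)
    dD = (D - D.mean(axis=1, keepdims=True)) / sd[:, None]   # scaled data anomalies

    d_pert = d_obs[:, None] + sd[:, None] * rng.standard_normal((Nd, Ne))
    resid = (d_pert - D) / sd[:, None]                        # scaled residuals

    A = dD @ dD.T + (1.0 + lam) * (Ne - 1) * np.eye(Nd)       # damped data-space system
    return M + dM @ (dD.T @ np.linalg.solve(A, resid))
```

In an outer loop, λ would be decreased when the average data mismatch drops and increased when a proposed step is rejected, in the spirit of classical Levenberg–Marquardt.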

15.
The ensemble Kalman filter (EnKF), an efficient data assimilation method showing advantages in many numerical experiments, is deficient when used to approximate the covariance from an ensemble of small size. Implicit localization is used to add distance-related weight to the covariance and to filter spurious correlations, which weaken the EnKF's capability to estimate uncertainty correctly. The effect of this kind of localization is studied in two-dimensional (2D) and three-dimensional (3D) synthetic cases. It is found that the EnKF with localization can reliably capture both the mean and the variance of the hydraulic conductivity field with higher efficiency; it can also greatly stabilize the assimilation process when a small ensemble is used. Sensitivity experiments are conducted to explore the effect of the localization function format and the filter lengths. It is suggested that too long or too short filter lengths will prevent implicit localization from modifying the covariance appropriately. Steep localization functions, such as the 0–1 function, will greatly disturb local dynamics even if the function is continuous; four relatively gentle localization functions succeed in avoiding obvious disturbance to the system and improve the estimation. As the degree of localization of the L function increases, the parameter sensitivity becomes weak, making parameter selection easier, but more information may be lost in the assimilation process.

16.
Nonlinear Filtering Methods and Land Surface Data Assimilation
Land surface data assimilation has emerged in recent years as a new field of Earth science research, in which data assimilation methods represented by nonlinear filtering have developed rapidly and been widely applied. Within the Bayesian theoretical framework, the similarities and differences among nonlinear filtering methods such as the extended Kalman filter, the unscented Kalman filter, the ensemble Kalman filter, and the SIR particle filter are analyzed systematically from the perspective of recursive Bayesian estimation. For the problems encountered in applications of the widely used ensemble Kalman filter and SIR particle filter, several practical approaches for improving filter performance are discussed, such as localization of the covariance matrix, inflation of the covariance matrix, the dual ensemble Kalman filter, perturbation of the ensemble, perturbation of the atmospheric forcing and model parameters, the square-root ensemble Kalman filter, and improvements to particle filter algorithms. Finally, the characteristics and difficulties of the various nonlinear filtering methods in application, as well as the prospects and directions for the development of these algorithms in land surface data assimilation, are summarized and discussed.

17.
The ensemble Kalman filter (EnKF) has been shown repeatedly to be an effective method for data assimilation in large-scale problems, including those in petroleum engineering. Data assimilation for multiphase flow in porous media is particularly difficult, however, because the relationships between model variables (e.g., permeability and porosity) and observations (e.g., water cut and gas–oil ratio) are highly nonlinear. Because of the linear approximation in the update step and the use of a limited number of realizations in an ensemble, the EnKF has a tendency to systematically underestimate the variance of the model variables. Various approaches have been suggested to reduce the magnitude of this problem, including the application of ensemble filter methods that do not require perturbations to the observed data. On the other hand, iterative least-squares data assimilation methods with perturbations of the observations have been shown to be fairly robust to nonlinearity in the data relationship. In this paper, we present EnKF with perturbed observations as a square root filter in an enlarged state space. By imposing second-order-exact sampling of the observation errors and independence constraints to eliminate the cross-covariance with predicted observation perturbations, we show that it is possible in linear problems to obtain results from EnKF with observation perturbations that are equivalent to ensemble square-root filter results. Results from a standard EnKF, EnKF with second-order-exact sampling of measurement errors that satisfy independence constraints (EnKF (SIC)), and an ensemble square-root filter (ETKF) are compared on various test problems with varying degrees of nonlinearity and dimensions. The first test problem is a simple one-variable quadratic model in which the nonlinearity of the observation operator is varied over a wide range by adjusting the magnitude of the coefficient of the quadratic term. The second problem has increased observation and model dimensions to test the EnKF (SIC) algorithm. The third test problem is a two-dimensional, two-phase reservoir flow problem in which permeability and porosity of every grid cell (5,000 model parameters) are unknown. The EnKF (SIC) and the mean-preserving ETKF (SRF) give similar results when applied to linear problems, and both are better than the standard EnKF. Although the ensemble methods are expected to handle the forecast step well in nonlinear problems, the estimates of the mean and the variance from the analysis step for all variants of ensemble filters are also surprisingly good, with little difference between ensemble methods when applied to nonlinear problems.
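For reference, below is a compact sketch of the standard perturbed-observation EnKF analysis step that the comparison above uses as its baseline; the second-order-exact variant (EnKF (SIC)) would additionally constrain the drawn perturbations to have exactly zero mean, exactly the prescribed covariance, and zero cross-covariance with the predicted-data anomalies, which is not shown here. All names are illustrative.

```python
import numpy as np

def enkf_perturbed_obs(M, D, d_obs, Cd, seed=0):
    """Standard EnKF analysis with perturbed observations.

    M : (Nm, Ne) ensemble of model variables; D : (Nd, Ne) predicted observations;
    d_obs : (Nd,) measured data; Cd : (Nd, Nd) observation-error covariance.
    """
    rng = np.random.default_rng(seed)
    Nm, Ne = M.shape
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    Cmd = dM @ dD.T / (Ne - 1)                     # cross-covariance
    Cdd = dD @ dD.T / (Ne - 1)                     # predicted-data covariance
    K = Cmd @ np.linalg.inv(Cdd + Cd)              # Kalman gain
    eps = rng.multivariate_normal(np.zeros(len(d_obs)), Cd, size=Ne).T
    return M + K @ (d_obs[:, None] + eps - D)      # updated ensemble
```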

18.
We present a method of using classical wavelet-based multiresolution analysis to separate scales in the model and observations during data assimilation with the ensemble Kalman filter. In many applications, the underlying physics of a phenomenon involves the interaction of features at multiple scales. Blending of observational and model error across scales can result in large forecast inaccuracies, since large errors at one scale are interpreted as inexact data at all scales due to the misrepresentation of observational error. Our method uses a partitioning of the range of the observation operator into separate observation scales. This naturally induces a transformation of the observation covariance, and we put forward several algorithms to efficiently compute the transformed covariance. Another advantage of our multiresolution ensemble Kalman filter is that scales can be weighted independently to adjust each scale's effect on the forecast. To demonstrate feasibility, we present applications to a one-dimensional Kuramoto–Sivashinsky (K–S) model with scale-dependent observation noise and an application involving the forecasting of solar photospheric flux. The solar flux application uses the Air Force Data Assimilative Photospheric Transport (ADAPT) model, which has model and observation error exhibiting strong scale dependence. Results using our multiresolution ensemble Kalman filter show significant improvement in solar forecast error compared to traditional ensemble Kalman filtering.

19.
The availability of multiple history-matched models is essential for proper handling of uncertainty in determining the optimal development of producing hydrocarbon fields. The ensemble Kalman filter in particular is becoming recognized as an efficient method for quantitative conditioning of multiple models to history data. It is known, however, that the ensemble Kalman filter (EnKF) may have problems with finding solutions in history-matching cases that are highly nonlinear and involve very large numbers of data, as is typical when time-lapse seismic surveys are available. Recently, a parameterization of seismic anomalies due to saturation effects was proposed in terms of arrival times of fronts, which reduces both the nonlinearity and the effective number of data. A disadvantage of the parameterization in terms of arrival times is that it requires simulation of the models beyond the update time. An alternative distance parameterization is proposed here for flood fronts or, more generally, for isolines of arbitrary seismic attributes representing a front, which removes the need for additional simulation time. An accurate fast marching method for solution of the Eikonal equation in Cartesian grids is used to calculate distances between observed and simulated fronts, which are used as innovations in the EnKF. Experiments are presented that demonstrate the functioning of the method in synthetic 2D and realistic 3D cases. Results are compared with those resulting from the use of saturation data, as they could potentially be inverted from seismic data, with and without localization. The proposed algorithm significantly reduces the number of data while still capturing the essential information. It furthermore removes the need for seismic inversion when only the oil–water front is identified, and it produces a more favorable distribution of simulated data, leading to a very efficient and improved functioning of the EnKF.
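A rough sketch of the distance parameterization: compute a signed distance field to the observed front and sample it at cells on the simulated front; the sampled values act as innovations that vanish when the two fronts coincide. Here scipy's Euclidean distance transform stands in for the accurate fast-marching Eikonal solver used in the paper, and the threshold, the crude front extraction, and all names are assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def front_distance_innovations(obs_sat, sim_sat, threshold=0.5, dx=1.0):
    """Distances from the simulated flood front to the observed front.

    obs_sat, sim_sat : 2D water-saturation (or seismic-attribute) fields.
    threshold        : isoline value defining the front.
    """
    obs_flooded = obs_sat >= threshold
    # Signed distance to the observed front: negative inside the flooded region,
    # positive outside (a fast-marching Eikonal solver would be more accurate).
    d_out = distance_transform_edt(~obs_flooded, sampling=dx)
    d_in = distance_transform_edt(obs_flooded, sampling=dx)
    signed = np.where(obs_flooded, -d_in, d_out)

    sim_flooded = sim_sat >= threshold
    # Crude front extraction: flooded cells whose shifted neighbour is not flooded.
    front = sim_flooded & ~np.roll(sim_flooded, 1, axis=0)
    return signed[front]                 # all zeros only when the fronts coincide
```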

20.
Shrinked (1 − α) ensemble Kalman filter and α Gaussian mixture filter
State estimation in high-dimensional systems remains a challenging part of real-time analysis. The ensemble Kalman filter addresses this challenge by using Gaussian approximations constructed from a number of samples. This method has been a great success in many applications. Unfortunately, in some cases, Gaussian approximations are no longer valid, and the filter does not perform well. In this paper, we use the idea of the ensemble Kalman filter together with the more theoretically valid particle filter. We outline a Gaussian mixture approach based on shrinking the predicted samples to overcome sample degeneracy while maintaining the non-Gaussian nature. A tuning parameter determines the degree of shrinkage. The computational cost is similar to that of the ensemble Kalman filter. We compare several filtering methods on three different cases: a target-tracking model, the Lorenz 40 model, and a reservoir simulation example conditioned on seismic and electromagnetic data.
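The shrinkage step can be sketched as follows: each predicted sample is pulled toward the ensemble mean, and the variance removed by the shrinkage is returned to each mixture component as kernel covariance so that the total covariance of the mixture is preserved. This follows the generic kernel-shrinkage idea; the exact role of the tuning parameter, the subsequent weighting step, and all names may differ from the paper.

```python
import numpy as np

def shrink_ensemble(X, alpha):
    """Shrink predicted samples toward their mean while preserving total covariance.

    X     : (Nx, Ne) forecast ensemble.
    alpha : tuning parameter in (0, 1]; alpha = 1 leaves the samples untouched,
            smaller values move the Gaussian-mixture centres toward the mean.
    """
    x_mean = X.mean(axis=1, keepdims=True)
    centres = x_mean + np.sqrt(alpha) * (X - x_mean)          # shrunk kernel centres
    dX = X - x_mean
    kernel_cov = (1.0 - alpha) * (dX @ dX.T) / (X.shape[1] - 1)
    # Total mixture covariance = alpha * C + (1 - alpha) * C = C, so the shrinkage
    # fights weight degeneracy without deflating the ensemble spread.
    return centres, kernel_cov
```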
