Similar Documents
20 similar documents found.
1.
The degrees of freedom (DOF) in standard ensemble-based data assimilation are limited by the ensemble size. Successful assimilation of a data set with large information content (IC) therefore requires that the DOF is sufficiently large. Too few DOF with respect to the IC may result in ensemble collapse, or at least in unwarranted uncertainty reduction in the estimation results. In this situation, one has two options to restore a proper balance between the DOF and the IC: to increase the DOF or to decrease the IC. Spatially dense data sets typically have a large IC. Within subsurface applications, inverted time-lapse seismic data used for reservoir history matching is an example of a spatially dense data set. Such data are considered to have great potential due to their large IC, but they also contain errors that are challenging to characterize properly. The computational cost of running the forward simulations for reservoir history matching with any kind of data is large for field cases, so only a moderate ensemble size is standard. Realizing the potential of seismic data for ensemble-based reservoir history matching is therefore not straightforward, not only because of the unknown character of the associated data errors, but also because of the imbalance between a large IC and too few DOF. Distance-based localization is often applied to increase the DOF but is example-specific and involves cumbersome implementation work. We consider methods to obtain a proper balance between the IC and the DOF when assimilating inverted seismic data for reservoir history matching. To decrease the IC, we consider three ways to reduce the size of the data space: subspace pseudo-inversion, data coarsening, and a novel way of performing front extraction. To increase the DOF, we consider coarse-scale simulation, which allows the DOF to be increased by increasing the ensemble size without increasing the total computational cost. We also propose a novel method that both decreases the IC and increases the DOF by combining data coarsening with coarse-scale simulation. The methods were compared on one small and one moderately large example, with seismic bulk-velocity fields at four assimilation times as data. The size of the examples allows for calculation of a reference solution obtained with standard ensemble-based data assimilation methodology and an unrealistically large ensemble size. With the reference solution as the yardstick against which the quality of the other methods is measured, we find that the novel method combining data coarsening and coarse-scale simulation gave the best results. With very restricted computational resources available, this was the only method that gave satisfactory results.
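As a hedged illustration of the data-coarsening idea (a minimal sketch, not the authors' implementation), dense seismic data can be replaced by block averages so that fewer, less redundant observations enter the assimilation; the block size used here is a hypothetical tuning parameter.

```python
import numpy as np

def coarsen_data(field, block):
    """Block-average a 2D data field to reduce its information content.

    A minimal sketch of data coarsening: dense seismic observations are
    replaced by block means, shrinking the data space before assimilation.
    """
    nx, ny = field.shape
    nx_c, ny_c = nx // block, ny // block
    trimmed = field[:nx_c * block, :ny_c * block]
    return trimmed.reshape(nx_c, block, ny_c, block).mean(axis=(1, 3))

# Hypothetical usage: a 100 x 100 bulk-velocity field coarsened to 20 x 20.
dense = np.random.rand(100, 100)
coarse = coarsen_data(dense, block=5)
print(coarse.shape)  # (20, 20)
```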

2.
Upscaled flow functions are often needed to account for the effects of fine-scale permeability heterogeneity in coarse-scale simulation models. We present procedures in which the required coarse-scale flow functions are statistically assigned to an ensemble of upscaled geological models. This can be viewed as an extension and further development of a recently developed ensemble level upscaling (EnLU) approach. The method aims to efficiently generate coarse-scale flow models capable of reproducing the ensemble statistics (e.g., cumulative distribution function) of fine-scale flow predictions for multiple reservoir models. The most expensive part of standard coarsening procedures is typically the generation of upscaled two-phase flow functions (e.g., relative permeabilities). EnLU provides a means for efficiently generating these upscaled functions using stochastic simulation. This involves the use of coarse-block attributes that are both fast to compute and closely correlated with the upscaled two-phase functions. In this paper, improved attributes for use in EnLU are identified, namely the coefficient of variation of the fine-scale single-phase velocity field (computed as a byproduct of upscaling the absolute permeability) and the integral range of the fine-scale permeability variogram. Geostatistical simulation methods, which account for spatial correlations of the statistically generated upscaled functions, are also applied. The overall methodology thus enables the efficient generation of coarse-scale flow models. The procedure is tested on 3D well-driven flow problems with different permeability distributions and variable fluid mobility ratios. EnLU is shown to capture the ensemble statistics of fine-scale flow results (water and oil flow rates as a function of time) with similar accuracy to full flow-based upscaling methods but with computational speedups of more than an order of magnitude.
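As an illustrative sketch (not the EnLU implementation itself), the first attribute named above, the coefficient of variation of the fine-scale velocity field inside a coarse block, can be computed as below; the velocity array is a hypothetical stand-in for the output of the single-phase solve.

```python
import numpy as np

def velocity_cv(v_fine):
    """Coefficient of variation of fine-scale cell velocities in a coarse block.

    A high CV signals strong subgrid heterogeneity, which is the property
    assumed to correlate with the upscaled two-phase flow functions.
    """
    speeds = np.linalg.norm(v_fine, axis=-1)  # velocity magnitude per fine cell
    return speeds.std() / speeds.mean()

# Hypothetical usage: 10 x 10 fine cells, 2 velocity components each.
v = np.random.rand(10, 10, 2)
print(velocity_cv(v))
```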

3.
The ensemble Kalman filter has been successfully applied for data assimilation in very large models, including those in reservoir simulation and weather. Two problems become critical in a standard implementation of the ensemble Kalman filter, however, when the ensemble size is small. The first is that the ensemble approximation to the cross-covariances between model or state variables and the data can indicate correlations that are not real. These spurious correlations give rise to model or state variable updates in regions that should not be updated. The second problem is that the number of degrees of freedom in the ensemble is only as large as the size of the ensemble, so the assimilation of large amounts of precise, independent data is impossible. Localization of the Kalman gain is almost universal in the weather community, but applications of localization for the ensemble Kalman filter in porous media flow have been somewhat rare. It has been shown, however, that localization of updates to regions of non-zero sensitivity or non-zero cross-covariance improves the performance of the EnKF when the ensemble size is small. Localization is necessary for assimilation of large amounts of independent data. The problem is to define appropriate localization functions for different types of data and different types of variables. We show that knowledge of sensitivity alone is not sufficient for determining the region of localization. The region depends also on the prior covariance for model variables and on the past history of data assimilation. Although the goal is to choose localization functions that are large enough to include the true region of non-zero cross-covariance, for EnKF applications the choice of localization function needs to balance the harm done by spurious covariances resulting from small ensembles against the harm done by excluding real correlations. In this paper, we focus on distance-based localization and provide insights for choosing suitable localization functions for data assimilation in multiphase flow problems. In practice, we conclude that it is reasonable to choose localization functions based on well patterns, that the localization function should be larger than the region of non-zero sensitivity, and that it should extend beyond a single well pattern.
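For a concrete (hedged) sketch of distance-based localization, the standard fifth-order Gaspari-Cohn taper below damps long-range entries of an ensemble covariance via an elementwise (Schur) product; the 1D grid, critical length, and stand-in covariance row are illustrative assumptions.

```python
import numpy as np

def gaspari_cohn(d, c):
    """Fifth-order piecewise-rational Gaspari-Cohn taper (compact support 2c)."""
    r = np.abs(d) / c
    taper = np.zeros_like(r)
    inner = r <= 1.0
    outer = (r > 1.0) & (r < 2.0)
    ri, ro = r[inner], r[outer]
    taper[inner] = (-0.25 * ri**5 + 0.5 * ri**4 + 0.625 * ri**3
                    - (5.0 / 3.0) * ri**2 + 1.0)
    taper[outer] = ((1.0 / 12.0) * ro**5 - 0.5 * ro**4 + 0.625 * ro**3
                    + (5.0 / 3.0) * ro**2 - 5.0 * ro + 4.0 - (2.0 / 3.0) / ro)
    return taper

# Hypothetical 1D grid: localize a covariance row against an observation at x = 0.
x = np.linspace(0.0, 10.0, 101)
rho = gaspari_cohn(x, c=2.0)      # taper weights, zero beyond 2c
cov_row = np.exp(-x / 3.0)        # stand-in ensemble covariance estimates
localized = rho * cov_row         # Schur (elementwise) product
```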

4.
The performance of the ensemble Kalman filter (EnKF) depends on the sample size compared to the dimension of the parameter space. In real applications, insufficient sampling may result in spurious correlations, which reduce the accuracy of the filter and lead to a strong underestimation of the uncertainty. Covariance localization and inflation are common solutions to these problems. The ensemble square root filter (ESRF) also estimates uncertainty better than the EnKF. In this work, we propose a method that limits the consequences of sampling errors by means of a convenient generation of the initial ensemble. This regeneration is based on a Stationary Orthogonal-Base Representation (SOBR) obtained via a singular value decomposition of a stationary covariance matrix estimated from the ensemble. The technique is tested on a 2D single-phase reservoir and compared with the other common techniques. The evaluation is based on a reference solution obtained with a very large ensemble (one million members), which removes the spurious correlations. The example gives evidence that the SOBR technique is a valid alternative for reducing the effect of sampling errors. In addition, when the SOBR method is applied in combination with the ESRF and inflation, it gives the best performance in terms of uncertainty estimation and oil production forecasts.
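A minimal hedged sketch of the idea behind SVD-based ensemble regeneration, under the assumption that the stationary covariance is estimated from the current ensemble: take an orthogonal basis from an SVD of the anomalies and redraw members in that basis. All names and sizes below are illustrative, not the paper's exact SOBR procedure.

```python
import numpy as np

def regenerate_ensemble(X, rng):
    """Redraw an ensemble from the orthogonal basis of its sample covariance.

    Sketch of SVD-based regeneration: keep the singular directions of the
    covariance estimate and sample fresh Gaussian coefficients in that basis.
    """
    mean = X.mean(axis=1, keepdims=True)
    A = (X - mean) / np.sqrt(X.shape[1] - 1)     # anomalies, cov = A @ A.T
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    coeffs = rng.standard_normal((s.size, X.shape[1]))
    return mean + U @ (s[:, None] * coeffs)

rng = np.random.default_rng(0)
X0 = rng.standard_normal((500, 50))              # 500 parameters, 50 members
X_new = regenerate_ensemble(X0, rng)
```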

5.
The ensemble Kalman filter (EnKF) has been shown repeatedly to be an effective method for data assimilation in large-scale problems, including those in petroleum engineering. Data assimilation for multiphase flow in porous media is particularly difficult, however, because the relationships between model variables (e.g., permeability and porosity) and observations (e.g., water cut and gas–oil ratio) are highly nonlinear. Because of the linear approximation in the update step and the use of a limited number of realizations in an ensemble, the EnKF has a tendency to systematically underestimate the variance of the model variables. Various approaches have been suggested to reduce the magnitude of this problem, including the application of ensemble filter methods that do not require perturbations to the observed data. On the other hand, iterative least-squares data assimilation methods with perturbations of the observations have been shown to be fairly robust to nonlinearity in the data relationship. In this paper, we present the EnKF with perturbed observations as a square-root filter in an enlarged state space. By imposing second-order-exact sampling of the observation errors and independence constraints to eliminate the cross-covariance with predicted observation perturbations, we show that it is possible in linear problems to obtain results from the EnKF with observation perturbations that are equivalent to ensemble square-root filter results. Results from a standard EnKF, an EnKF with second-order-exact sampling of measurement errors that satisfy independence constraints (EnKF (SIC)), and an ensemble square-root filter (ETKF) are compared on various test problems with varying degrees of nonlinearity and dimension. The first test problem is a simple one-variable quadratic model in which the nonlinearity of the observation operator is varied over a wide range by adjusting the magnitude of the coefficient of the quadratic term. The second problem has increased observation and model dimensions to test the EnKF (SIC) algorithm. The third test problem is a two-dimensional, two-phase reservoir flow problem in which the permeability and porosity of every grid cell (5,000 model parameters) are unknown. The EnKF (SIC) and the mean-preserving ETKF (SRF) give similar results when applied to linear problems, and both are better than the standard EnKF. Although the ensemble methods are expected to handle the forecast step well in nonlinear problems, the estimates of the mean and the variance from the analysis step for all variants of ensemble filters are also surprisingly good, with little difference between ensemble methods when applied to nonlinear problems.
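As a hedged sketch of the standard perturbed-observation EnKF analysis step that the paper builds on (not the authors' enlarged-state formulation), assuming a linear observation operator H and observation-error covariance R:

```python
import numpy as np

def enkf_analysis(X, H, R, y, rng):
    """Perturbed-observation (stochastic) EnKF analysis step.

    X : (n, Ne) forecast ensemble; H : (m, n) linear observation operator;
    R : (m, m) observation-error covariance; y : (m,) observed data.
    """
    n, Ne = X.shape
    A = X - X.mean(axis=1, keepdims=True)
    Pf = A @ A.T / (Ne - 1)                            # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)     # Kalman gain
    # Each member assimilates its own perturbed copy of the data.
    Y = y[:, None] + np.linalg.cholesky(R) @ rng.standard_normal((len(y), Ne))
    return X + K @ (Y - H @ X)

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 30))        # 20 state variables, 30 members
H = np.eye(5, 20)                        # observe the first 5 states
R = 0.1 * np.eye(5)
y = np.zeros(5)
Xa = enkf_analysis(X, H, R, y, rng)
```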

6.
This paper proposes an augmented Lagrangian method for production optimization in which the cost function to be maximized is defined as an augmented Lagrangian function consisting of the net present value (NPV) and all the equality and inequality constraints except the bound constraints. The bound constraints are dealt with using a trust-region gradient projection method. The paper also presents a way to eliminate the need to convert the inequality constraints to equality constraints with slack variables in the augmented Lagrangian function, which greatly reduces the size of the optimization problem when the number of inequality constraints is large. The proposed method is tested in the context of the closed-loop reservoir management benchmark problem based on the Brugge reservoir set up by TNO. In the test, we used the ensemble Kalman filter (EnKF) with covariance localization for data assimilation. Production optimization is done on the updated ensemble mean model from the EnKF. The production optimization resulted in a substantial increase in the NPV for the expected reservoir life compared to the base case with reactive control.
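A minimal, hypothetical sketch of the kind of augmented Lagrangian objective described above, assuming equality constraints c(u) = 0 only and treating bounds separately by projection; the function names and numbers are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def augmented_lagrangian(npv, c, lam, mu):
    """Augmented Lagrangian for maximizing NPV subject to c(u) = 0.

    npv : net present value at the current controls
    c   : equality-constraint violations c(u)
    lam : Lagrange multiplier estimates
    mu  : penalty parameter (> 0)
    """
    return npv - lam @ c - 0.5 * mu * np.dot(c, c)

def project_to_bounds(u, lo, hi):
    """Bound constraints handled by projection (clipping the controls)."""
    return np.clip(u, lo, hi)

# Hypothetical usage with 3 controls and 2 equality constraints.
u = project_to_bounds(np.array([0.2, 1.4, -0.1]), 0.0, 1.0)
L = augmented_lagrangian(npv=1.0e8, c=np.array([0.01, -0.02]),
                         lam=np.array([5.0, 3.0]), mu=10.0)
```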

7.
The ensemble Kalman filter (EnKF) has been successfully applied to data assimilation in the steam-assisted gravity drainage (SAGD) process, but applications of localization for the EnKF in the SAGD process have not been studied. Distance-based localization has been reported to be very efficient for assimilating large amounts of independent data with a small ensemble in water flooding, but it is not applicable to the SAGD process, since in SAGD oil is produced mainly from the transition zone between the steam chamber and the cold oil, rather than from the regions around the producer. As the oil production rate is mainly affected by the temperature distribution in the transition zone, temperature-based localization was proposed for automatic history matching of the SAGD process. The regions of the localization function were determined through sensitivity analysis using a large ensemble with 1000 members. The sensitivity analysis indicated that the regions of cross-correlation between oil production and state variables are much wider than the correlations between production data and model variables. To choose localization regions that are large enough to include the true regions of non-zero cross-covariance, the localization function is defined based on the regions of non-zero covariance between production data and state variables. These non-zero covariances are distributed in accordance with the steam chamber, which makes it easier to define a universal localization function for different state variables. Based on the cross-correlation analysis, the temperature range that contributes to oil production is determined; outside this range the localization function decreases from one, reaching zero at the critical temperature and at the steam temperature. The temperature-based localization function was obtained by modifying the distance-based localization function. Localization is applied to the covariance of data with permeability, saturation, and temperature, as well as the covariance of data with data. A small ensemble (10 members) was employed in several case studies. Without localization, the variability in the ensemble collapsed very quickly and lost the ability to assimilate later data. The mean variance of model variables dropped dramatically, by 95%, and there was almost no variability in ensemble forecasts, while the prediction was far from the reference and the data mismatch remained high. At least 50 ensemble members are needed to maintain the quality of the matches and forecasts, which significantly increases the computation time. The EnKF with temperature-based localization is able to avoid the collapse of ensemble variability with a small ensemble (10 members), which saves computation time and gives better history-matching and prediction results.
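As a hedged, illustrative sketch (the paper's actual function is derived from its cross-correlation analysis, not given here), a temperature-based taper can be built to equal one inside the producing temperature range and fall to zero at the critical and steam temperatures; all temperatures below are assumptions.

```python
import numpy as np

def temperature_taper(T, T_crit, T_lo, T_hi, T_steam):
    """Illustrative temperature-based localization weight.

    Weight is 1 inside [T_lo, T_hi] (the range assumed to contribute to
    production), tapering linearly to 0 at T_crit below and T_steam above.
    """
    w = np.zeros_like(T, dtype=float)
    rise = (T >= T_crit) & (T < T_lo)
    flat = (T >= T_lo) & (T <= T_hi)
    fall = (T > T_hi) & (T <= T_steam)
    w[rise] = (T[rise] - T_crit) / (T_lo - T_crit)
    w[flat] = 1.0
    w[fall] = (T_steam - T[fall]) / (T_steam - T_hi)
    return w

# Hypothetical temperatures (deg C) across a SAGD transition zone.
T = np.linspace(0.0, 250.0, 6)
print(temperature_taper(T, T_crit=40.0, T_lo=80.0, T_hi=180.0, T_steam=220.0))
```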

8.
Multiscale finite-volume method for density-driven flow in porous media
The multiscale finite-volume (MSFV) method has been developed to solve multiphase flow problems on large and highly heterogeneous domains efficiently. It employs an auxiliary coarse grid, together with its dual, to define and solve a coarse-scale pressure problem. A set of basis functions, which are local solutions on dual cells, is used to interpolate the coarse-grid pressure and obtain an approximate fine-scale pressure distribution. However, if flow takes place in the presence of gravity (or capillarity), the basis functions are not good interpolators. To treat this case correctly, a correction function is added to the pressure interpolated from the basis functions. This function, which is similar to a supplementary basis function independent of the coarse-scale pressure, allows for a very accurate fine-scale approximation. In the coarse-scale pressure equation, it appears as an additional source term and can be regarded as a local correction to the coarse-scale operator: it modifies the fluxes across the coarse-cell interfaces defined by the basis functions. Given the closure assumption that localizes the pressure problem in a dual cell, the derivation of the local problem that defines the correction function is exact, and no additional hypothesis is needed. Therefore, as in the original MSFV method, the only closure approximation is the localization assumption. The numerical experiments performed for density-driven flow problems (counter-current flow and lock exchange) demonstrate excellent agreement between the MSFV solutions and the corresponding fine-scale reference solutions.

9.
For the past 10 years or so, a number of so-called multiscale methods have been developed as an alternative approach to upscaling and to accelerate reservoir simulation. The key idea of all these methods is to construct a set of prolongation operators that map between unknowns associated with cells in a fine grid holding the petrophysical properties of the geological reservoir model and unknowns on a coarser grid used for dynamic simulation. The prolongation operators are computed numerically by solving localized flow problems, much in the same way as for flow-based upscaling methods, and can be used to construct a reduced coarse-scale system of flow equations that describes the macro-scale displacement driven by global forces. Unlike effective parameters, the multiscale basis functions have subscale resolution, which ensures that fine-scale heterogeneity is correctly accounted for in a systematic manner. Among all multiscale formulations discussed in the literature, the multiscale restriction-smoothed basis (MsRSB) method has proved to be particularly promising. This method has been implemented in a commercially available simulator and has three main advantages. First, the input grid and its coarse partition can have general polyhedral geometry and unstructured topology. Second, MsRSB is accurate and robust when used as an approximate solver and converges relatively fast when used as an iterative fine-scale solver. Finally, the method is formulated on top of a cell-centered, conservative, finite-volume method and is applicable to any flow model for which one can isolate a pressure equation. We discuss numerical challenges posed by contemporary geomodels and report a number of validation cases showing that MsRSB is an efficient, robust, and versatile method for simulating complex models of real reservoirs.

10.
Shrinked (1 − α) ensemble Kalman filter and α Gaussian mixture filter
State estimation in high-dimensional systems remains a challenging part of real-time analysis. The ensemble Kalman filter addresses this challenge by using Gaussian approximations constructed from a number of samples. This method has been a great success in many applications. Unfortunately, for some cases, Gaussian approximations are no longer valid, and the filter does not perform well. In this paper, we use the idea of the ensemble Kalman filter together with the more theoretically valid particle filter. We outline a Gaussian mixture approach based on shrinking the predicted samples to overcome sample degeneracy, while maintaining the non-Gaussian nature. A tuning parameter determines the degree of shrinkage. The computational cost is similar to that of the ensemble Kalman filter. We compare several filtering methods on three different cases: a target tracking model, the Lorenz 40 model, and a reservoir simulation example conditioned on seismic and electromagnetic data.
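A hedged sketch of the shrinkage step only (the Gaussian-mixture weighting itself is omitted, and the parameterization via alpha is an assumption): predicted samples are pulled toward the ensemble mean, trading spread among sample locations for variance carried by the mixture components.

```python
import numpy as np

def shrink_ensemble(X, alpha):
    """Shrink predicted samples toward their mean by a factor sqrt(1 - alpha).

    Sketch of the shrinkage idea: alpha in (0, 1] tunes how much spread is
    moved from the sample locations into the mixture covariance, mitigating
    particle degeneracy while keeping the non-Gaussian sample structure.
    """
    mean = X.mean(axis=1, keepdims=True)
    return mean + np.sqrt(1.0 - alpha) * (X - mean)

X = np.random.default_rng(2).standard_normal((4, 100))  # 4 states, 100 samples
X_shrunk = shrink_ensemble(X, alpha=0.3)
```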

11.
In conventional waterflooding of an oil field, feedback-based optimal control technologies may enable higher oil recovery than a conventional reactive strategy, in which producers are closed based on water breakthrough. To compensate for the inherent geological uncertainties in an oil field, robust optimization has been suggested to improve and robustify optimal control strategies. In robust optimization of an oil reservoir, the water injection and production borehole pressures (bhp) are computed such that the predicted net present value (NPV) of an ensemble of permeability field realizations is maximized. In this paper, we consider both an open-loop optimization scenario, with no feedback, and a closed-loop optimization scenario. The closed-loop scenario is implemented in a moving-horizon manner, and feedback is obtained using an ensemble Kalman filter for estimation of the permeability field from the production data. For open-loop implementations, previous test case studies presented in the literature show that a traditional robust optimization strategy (RO) gives a higher expected NPV with lower NPV standard deviation than a conventional reactive strategy. We present and study a test case where the opposite happens: the reactive strategy gives a higher expected NPV with a lower NPV standard deviation than the RO strategy. To improve the RO strategy, we propose a modified robust optimization strategy (modified RO) that can shut in uneconomical producer wells. This strategy inherits the features of both the reactive and the RO strategy. Simulations reveal that the modified RO strategy results in operations with larger returns and less risk than the reactive strategy, the RO strategy, and the certainty equivalent strategy. The returns are measured by the expected NPV, and the risk is measured by the standard deviation of the NPV. In closed-loop optimization, we investigate and compare the performance of the RO strategy, the reactive strategy, and the certainty equivalent strategy. The certainty equivalent strategy is based on a single realization of the permeability field; it uses the mean of the ensemble as its permeability field. Simulations reveal that the RO strategy and the certainty equivalent strategy give a higher NPV compared to the reactive strategy. Surprisingly, the RO strategy and the certainty equivalent strategy give similar NPVs. Consequently, the certainty equivalent strategy is preferable in the closed-loop situation, as it requires significantly fewer computational resources than the robust optimization strategy. The similarity of the certainty equivalent and robust optimization strategies in the closed-loop situation challenges the intuition of most reservoir engineers. Feedback reduces the uncertainty, and this is the reason for the similar performance of the two strategies.
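A minimal hedged sketch of the robust-optimization objective described above: controls are scored by the expected NPV over an ensemble of permeability realizations. The simulator call is a hypothetical placeholder, not any specific reservoir simulator API.

```python
import numpy as np

def expected_npv(controls, realizations, simulate_npv):
    """Robust-optimization objective: mean NPV over an ensemble.

    simulate_npv(controls, perm) is a placeholder for a reservoir
    simulation returning the NPV of one permeability realization.
    """
    npvs = np.array([simulate_npv(controls, perm) for perm in realizations])
    return npvs.mean(), npvs.std()   # expected return and risk measure

# Hypothetical toy stand-in for the simulator.
toy = lambda u, perm: float(np.sum(u) * perm.mean())
perms = [np.full(10, v) for v in (0.8, 1.0, 1.2)]
mean_npv, risk = expected_npv(np.array([0.5, 0.7]), perms, toy)
```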

12.
In this paper, we present an extension of the ensemble Kalman filter (EnKF) specifically designed for multimodal systems. The EnKF data assimilation scheme is less accurate when used to approximate systems with multimodal distributions, such as reservoir facies models. The algorithm is based on the assumption that both the prior and posterior distributions can be approximated by Gaussian mixtures, and it is validated by introducing the concept of a finite ensemble representation. The effectiveness of the approach is shown with two applications. The first example is based on the Lorenz model. In the second example, the proposed methodology, combined with a localization technique, is used to update a 2D reservoir facies model. Both applications give evidence of an improved performance of the proposed method with respect to the EnKF.

13.
Ensemble-based data assimilation methods have recently become popular for solving reservoir history matching problems, but because of the practical limitation on ensemble size, using localization is necessary to reduce the effect of sampling error and to increase the degrees of freedom for incorporating large amounts of data. Local analysis in the ensemble Kalman filter has been used extensively for very large models in numerical weather prediction. It scales well with the model size and the number of data and is easily parallelized. In the petroleum literature, however, iterative ensemble smoothers with localization of the Kalman gain matrix have become the state-of-the-art approach for ensemble-based history matching. By forming the Kalman gain matrix row by row, the analysis step can also be parallelized. Localization regularizes updates to model parameters and state variables using information on the distance between these variables and the observations. The truncation of small singular values in truncated singular value decomposition (TSVD) at the analysis step provides another type of regularization by projecting updates onto the dominant directions spanned by the simulated data ensemble. Typically, the combined use of localization and TSVD is necessary for problems with large amounts of data. In this paper, we compare the performance of Kalman gain localization to two forms of local analysis for parameter estimation problems with nonlocal data. The effect of TSVD with different localization methods, and with the use of iteration, is also analyzed. With several examples, we show that good results can be obtained for all localization methods if the localization range is chosen appropriately, but the optimal localization range differs among the methods. In general, for local analysis with an observation taper, the optimal range is somewhat shorter than the optimal range for the other localization methods. Although all methods gave equivalent results when used in an iterative ensemble smoother, the local analysis methods generally converged more quickly than Kalman gain localization when the amount of data is large compared to the ensemble size.
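As a hedged illustration of the TSVD regularization mentioned above (not the authors' full smoother update), the pseudo-inverse of a simulated-data anomaly matrix can be truncated to retain, say, 99% of the singular-value energy; the threshold and matrix sizes are assumptions.

```python
import numpy as np

def tsvd_pinv(D, energy=0.99):
    """Truncated-SVD pseudo-inverse of a simulated-data anomaly matrix.

    Singular values are kept until `energy` of their total is retained;
    discarding the tail projects updates onto the dominant directions
    spanned by the simulated data ensemble.
    """
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    keep = np.searchsorted(np.cumsum(s) / s.sum(), energy) + 1
    return Vt[:keep].T @ np.diag(1.0 / s[:keep]) @ U[:, :keep].T

D = np.random.default_rng(3).standard_normal((50, 20))  # 50 data, 20 members
D_pinv = tsvd_pinv(D)   # (20, 50), rank-truncated pseudo-inverse
```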

14.
Numerical simulation is an essential component of many studies of geological storage of carbon dioxide, but care must be taken to ensure the accuracy of the results. Unlike several other possible sources of simulation error, which have previously been considered in detail and have well-understood mitigation techniques, comparatively little discussion of the spatial grid dependence of the dissolution rate of carbon dioxide into the formation water has appeared in the literature, despite its importance to simulation studies of geological storage of carbon dioxide in saline aquifers. In many instances, sufficient refinement of the computational grid can be a practical solution. However, this approach is not always feasible, especially for large-scale simulations in three dimensions requiring multiple realisations, which commonly feature a coarse grid due to constraints on available computational capabilities. A measure of the error in the amount of dissolved carbon dioxide introduced by the use of a finite grid is therefore of great interest. In this study, the use of finite-sized grid blocks is shown to overestimate the amount of dissolved carbon dioxide in short-term results by a factor of 1 + V_f/V_p, where V_f is the grid block volume at the saturation front and V_p is the total grid block volume of the plume. This result can be used in a number of ways to correct the calculated short-term dissolution in coarse-scale simulations so that the amount dissolved agrees better with that obtained from fine-scale simulations.
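A tiny worked example of the stated overestimation factor, with hypothetical volumes and an assumed apparent dissolution figure:

```python
# Hypothetical grid-block volumes (m^3): front blocks vs. the whole plume.
V_f = 2.0e5    # grid block volume at the saturation front
V_p = 1.0e6    # total grid block volume of the plume

factor = 1.0 + V_f / V_p     # short-term overestimation factor, here 1.2
corrected = 360.0 / factor   # e.g., 360 kt apparent dissolution -> 300 kt
print(factor, corrected)     # 1.2 300.0
```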

15.
The generation, over two-dimensional grids, of normally distributed random fields conditioned on available data is often required in reservoir modeling and mining investigations. Such fields can be obtained by applying turning bands or spectral methods. However, both methods have limitations. First, they are only asymptotically exact: the ensemble of realizations has the required correlation structure only if enough harmonics are used in the spectral method, or enough lines are generated in the turning bands approach. Moreover, the spectral method requires fine-tuning of process parameters. As for the turning bands method, it is essentially restricted to processes with stationary and radially symmetric correlation functions. Another approach, which has the advantage of being general and exact, is to use a Cholesky factorization of the covariance matrix representing the correlation between grid points. For fields of large size, however, the Cholesky factorization can be computationally prohibitive. In this paper, we show that if the data are stationary and generated over a grid with a regular mesh, the structure of the data covariance matrix can be exploited to significantly reduce the overall computational burden of conditional simulations based on matrix factorization techniques. A feature of this approach is its computational simplicity and suitability for parallel implementation.
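A hedged minimal sketch of the basic factorization idea (unconditional case, small grid, assumed exponential covariance model): draw a correlated Gaussian field as L z, where L is the Cholesky factor of the grid covariance. The brute-force construction below is exactly what becomes prohibitive for large fields, which motivates the paper's exploitation of the regular-mesh structure.

```python
import numpy as np

def gaussian_field(nx, ny, corr_len, rng):
    """Correlated Gaussian field on a regular grid via Cholesky factorization.

    Builds the full covariance from an assumed exponential model, so it is
    practical only for small grids.
    """
    xs, ys = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    pts = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    C = np.exp(-d / corr_len) + 1e-10 * np.eye(nx * ny)  # jitter keeps C SPD
    L = np.linalg.cholesky(C)
    return (L @ rng.standard_normal(nx * ny)).reshape(nx, ny)

field = gaussian_field(20, 20, corr_len=5.0, rng=np.random.default_rng(4))
```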

16.
Sampling errors can severely degrade the reliability of estimates of conditional means and of the uncertainty quantification obtained by applying the ensemble Kalman filter (EnKF) for data assimilation. A standard recommendation for reducing the spurious correlations and loss of variance due to sampling errors is to use covariance localization. In distance-based localization, the prior (forecast) covariance matrix at each data assimilation step is replaced with the Schur product of a correlation matrix with compact support and the forecast covariance matrix. The most important decision to be made in this localization procedure is the choice of the critical length(s) used to generate this correlation matrix. Here, we give a simple argument that the appropriate choice of critical length(s) should be based both on the underlying principal correlation length(s) of the geological model and on the range of the sensitivity matrices. Based on this result, we implement a procedure for covariance localization and demonstrate with a set of distinctive reservoir history-matching examples that this procedure yields improved results over the standard EnKF implementation and over covariance localization with other choices of critical length.

17.
Ensemble Kalman filtering with shrinkage regression techniques
The classical ensemble Kalman filter (EnKF) is known to underestimate the prediction uncertainty. This can potentially lead to low forecast precision and an ensemble collapsing into a single realisation. In this paper, we present alternative EnKF updating schemes based on shrinkage methods known from multivariate linear regression. These methods reduce the effects caused by collinear ensemble members and have the same computational properties as the fastest EnKF algorithms previously suggested. In addition, the importance of model selection and validation for prediction purposes is investigated, and a model selection scheme based on cross-validation is introduced. The classical EnKF scheme is compared with the suggested procedures on two toy examples and one synthetic reservoir case study. Significant improvements are seen, both in terms of forecast precision and prediction uncertainty estimates.
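A hedged sketch of one shrinkage-regression flavor (ridge) applied to the EnKF update, viewing the gain as a regression of state anomalies on predicted-data anomalies; the ridge penalty lam is an illustrative assumption, and in the paper's spirit it would be chosen by cross-validation rather than fixed.

```python
import numpy as np

def ridge_gain(Ax, Ay, lam):
    """Kalman-gain-like regression of state anomalies Ax on data anomalies Ay.

    Ridge shrinkage adds lam * I to the data-anomaly Gram matrix, damping
    directions inflated by collinear ensemble members.
    """
    m = Ay.shape[0]
    return Ax @ Ay.T @ np.linalg.inv(Ay @ Ay.T + lam * np.eye(m))

rng = np.random.default_rng(5)
X = rng.standard_normal((100, 30))   # 100 state variables x 30 members
Y = rng.standard_normal((10, 30))    # 10 predicted data x 30 members
Ax = X - X.mean(axis=1, keepdims=True)
Ay = Y - Y.mean(axis=1, keepdims=True)
K = ridge_gain(Ax, Ay, lam=1.0)      # (100, 10) shrinkage-regularized gain
```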

18.
Reservoir management requires periodic updates of the simulation models using the production data available over time. Traditionally, validation of reservoir models with production data is done using a history matching process. Uncertainties in the data, as well as in the model, lead to a nonunique history matching inverse problem. It has been shown that the ensemble Kalman filter (EnKF) is an adequate method for predicting the dynamics of the reservoir. The EnKF is a sequential Monte Carlo approach that uses an ensemble of reservoir models. For realistic, large-scale applications, the ensemble size needs to be kept small for computational efficiency. Consequently, the error space is not well covered (poor cross-correlation matrix approximations) and the updated parameter field becomes scattered and loses important geological features (for example, the contact between high- and low-permeability values). The prior geological knowledge present at the initial time is no longer found in the final updated parameter field. We propose a new approach to overcome some of the EnKF limitations. This paper shows the specifications and results of the ensemble multiscale filter (EnMSF) for automatic history matching. At each update time, the EnMSF replaces the prior sample covariance with a multiscale tree. The global dependence is preserved via the parent–child relation in the tree (nodes at adjacent scales). After constructing the tree, the Kalman update is performed. The properties of the EnMSF are presented here with a small 2D, two-phase (oil and water) twin experiment, and the results are compared to the EnKF. The advantages of using the EnMSF are localization in space and scale, adaptability to prior information, and efficiency when many measurements are available. These advantages make the EnMSF a practical tool for many data assimilation problems.

19.
A nonlinear ensemble prediction model for typhoon rainstorms has been developed based on a particle swarm optimization neural network (PSO-NN). In this model, the PSO algorithm is employed to optimize the network structure and initial weights of the NN, creating multiple ensemble members. The input to each ensemble member consists of highly correlated grid-point factors selected, by stepwise regression, from the rainfall forecast field of the Japan Meteorological Agency numerical prediction products, and the model output is the 24-h rainfall forecast for the 89 stations. Results show that the objective prediction model is more accurate than direct interpolation of the numerical prediction to the stations, so it is better suited to the interpretation and application of numerical prediction products, indicating potential for improved operational weather prediction.

20.
We propose a methodology, called multilevel local–global (MLLG) upscaling, for generating accurate upscaled models of permeabilities or transmissibilities for flow simulation on adapted grids in heterogeneous subsurface formations. The method generates an initial adapted grid based on the given fine-scale reservoir heterogeneity and potential flow paths. It then applies local–global (LG) upscaling for permeability or transmissibility [7], along with adaptivity, in an iterative manner. In each iteration of MLLG, the grid can be adapted where needed to reduce flow solver and upscaling errors. The adaptivity is controlled with a flow-based indicator. The iterative process is continued until consistency between the global solve on the adapted grid and the local solves is obtained. While each application of LG upscaling is also an iterative process, this inner iteration generally takes only one or two iterations to converge. Furthermore, the number of outer iterations is bounded above, and hence the computational costs of this approach are low. We design a new flow-based weighting of transmissibility values in LG upscaling that significantly improves the accuracy of LG and MLLG over traditional local transmissibility calculations. For highly heterogeneous (e.g., channelized) systems, the integration of grid adaptivity and LG upscaling is shown to consistently provide more accurate coarse-scale models for global flow, relative to reference fine-scale results, than existing upscaling techniques applied to uniform grids of similar densities. Another attractive property of the integration of upscaling and adaptivity is that process dependency is strongly reduced; that is, the approach also computes accurate global flow results for flows driven by boundary conditions different from the generic boundary conditions used to compute the upscaled parameters. The method is demonstrated on Cartesian cell-based anisotropic refinement (CCAR) grids, but it can be applied to other adaptation strategies for structured grids and extended to unstructured grids.
