Similar Documents
20 similar documents found (search time: 31 ms)
1.
We study the applicability of a model order reduction technique to the solution of transport of passive scalars in homogeneous and heterogeneous porous media. Transport dynamics are modeled through the advection-dispersion equation (ADE), and we employ Proper Orthogonal Decomposition (POD) as a strategy to reduce the computational burden associated with the numerical solution of the ADE. Our application of POD relies on solving the governing ADE for selected times, termed snapshots. The latter are then employed to achieve the desired model order reduction. We introduce a new technique, termed Snapshot Splitting Technique (SST), which allows enriching the dimension of the POD subspace and damping the temporal increase of the modeling error. Coupling SST with a modeling strategy that alternates, over diverse time scales, between the solution of the full numerical transport model and its reduced counterpart allows extending the benefit of POD over a prolonged temporal window, so that the salient features of the process can be captured at a reduced computational cost. The selection of the time scales across which the solutions of the full and reduced models are alternated is linked to the Péclet number (Pe), representing the interplay between advective and dispersive processes taking place in the system. Thus, the method is adaptive in space and time across the heterogeneous structure of the domain through the combined use of POD and SST and by way of alternating the solution of the full and reduced models. We find that the width of the time scale within which the POD-based reduced model solution provides accurate results tends to increase with decreasing Pe. This suggests that the effects of local-scale dispersive processes help the POD method capture the salient features of the system dynamics embedded in the selected snapshots. Since the dimension of the reduced model is much lower than that of the full numerical model, the methodology we propose enables one to accurately simulate transport at a markedly reduced computational cost.
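As a hedged illustration of the snapshot-based POD workflow described above (not the authors' SST implementation), the sketch below solves a small 1D advection-dispersion problem, extracts a POD basis from a handful of snapshots via the SVD, and runs the Galerkin-projected reduced model; the grid size, coefficients, and snapshot times are arbitrary choices made for the example.

```python
# Minimal POD sketch for a 1D advection-dispersion equation (illustrative only).
import numpy as np

nx, L, v, D, dt, nt = 200, 1.0, 1.0, 5e-3, 1e-3, 400   # assumed discretization
dx = L / (nx - 1)
x = np.linspace(0.0, L, nx)

# Upwind advection + central dispersion operator (boundary rows left untouched).
A = np.zeros((nx, nx))
for i in range(1, nx - 1):
    A[i, i - 1] = v / dx + D / dx**2
    A[i, i]     = -v / dx - 2.0 * D / dx**2
    A[i, i + 1] = D / dx**2

M = np.eye(nx) - dt * A              # implicit Euler: M c^{n+1} = c^n
c = np.exp(-((x - 0.2) / 0.05)**2)   # initial plume

snapshots = []
for n in range(nt):
    c = np.linalg.solve(M, c)
    if n % 20 == 0:                  # selected snapshot times
        snapshots.append(c.copy())

# POD basis from the SVD of the snapshot matrix, truncated to r modes.
U, s, _ = np.linalg.svd(np.column_stack(snapshots), full_matrices=False)
r = 10
Phi = U[:, :r]

# Galerkin-projected (reduced) propagator and reduced simulation.
Mr = Phi.T @ M @ Phi
cr = Phi.T @ np.exp(-((x - 0.2) / 0.05)**2)
for n in range(nt):
    cr = np.linalg.solve(Mr, cr)

print("relative error of reduced model:",
      np.linalg.norm(Phi @ cr - c) / np.linalg.norm(c))
```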

2.
In this paper, the centralized return centers location evaluation problem in a reverse logistics network is investigated. This problem is solved via an integrated analytic network process (ANP)–fuzzy technique for order preference by similarity to ideal solution (TOPSIS) approach. ANP allows us to evaluate criteria preferences while considering the interdependence between criteria. TOPSIS, in turn, decreases the number of computational steps compared to a pure ANP evaluation. An important characteristic of the centralized return centers location evaluation problem, vagueness, is incorporated into the methodology through the use of fuzzy numbers in the TOPSIS step. Finally, a numerical example is given to demonstrate the usefulness of the methodology. The results indicate that this integrated multi-criteria decision-making methodology is suitable for decision-making problems that require the consideration of multiple conflicting criteria. Moreover, the methodology allows the interdependences between criteria to be considered in a flexible and systematic manner.
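A minimal sketch of the fuzzy TOPSIS step may help fix ideas; the triangular fuzzy ratings and the criterion weights below are invented placeholders (in the paper the weights come from the ANP stage), all criteria are assumed to be benefit criteria, and the normalization and ideal solutions follow a common simplified variant rather than the authors' exact formulation.

```python
# Minimal fuzzy TOPSIS sketch with triangular fuzzy numbers (l, m, u).
# Ratings, weights and the benefit-criteria assumption are illustrative.
import numpy as np

# ratings[alternative, criterion] = (l, m, u), e.g. from linguistic scales
ratings = np.array([
    [(3, 5, 7), (5, 7, 9), (1, 3, 5)],
    [(5, 7, 9), (3, 5, 7), (3, 5, 7)],
    [(1, 3, 5), (7, 9, 9), (5, 7, 9)],
], dtype=float)
weights = np.array([(0.2, 0.3, 0.4), (0.3, 0.4, 0.5), (0.2, 0.3, 0.4)])  # e.g. from ANP

# Normalize benefit criteria by the largest upper bound per criterion.
u_max = ratings[:, :, 2].max(axis=0)
norm = ratings / u_max[None, :, None]

# Weighted normalized fuzzy decision matrix (component-wise TFN product approximation).
v = norm * weights[None, :, :]

# Fuzzy ideal solutions: FPIS = (1, 1, 1) and FNIS = (0, 0, 0) per criterion.
def fuzzy_dist(a, b):
    # Vertex distance between triangular fuzzy numbers.
    return np.sqrt(((a - b) ** 2).mean(axis=-1))

d_plus  = fuzzy_dist(v, np.ones(3)).sum(axis=1)    # distance to positive ideal
d_minus = fuzzy_dist(v, np.zeros(3)).sum(axis=1)   # distance to negative ideal
closeness = d_minus / (d_plus + d_minus)

for i, cc in enumerate(closeness):
    print(f"alternative {i + 1}: closeness coefficient = {cc:.3f}")
```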

3.
In this work, we construct a new methodology for enhancing the predictive accuracy of sequential methods for coupling flow and geomechanics while preserving a low computational cost. The new computational approach is developed within the framework of the fixed-stress split algorithm in conjunction with data assimilation based on the ensemble Kalman filter (EnKF). In this context, we identify the high-fidelity model with the two-way formulation, in which an additional source term containing the time derivative of the total mean stress appears in the flow equation. The iterative scheme is then interlaced with data assimilation steps, which also incorporate the modeling error inherent to the EnKF framework. Such a procedure gives rise to an “enhanced one-way formulation,” exhibiting substantial improvement in accuracy compared with the classical one-way method. The governing equations are discretized by mixed finite elements, and numerical simulations of a 2D slab problem between injection and production wells illustrate the substantial gains achieved by the proposed method.
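The EnKF analysis step on which the data-assimilation stage relies can be sketched as follows; the state and observation dimensions, the observation operator, and the noise levels are placeholders, and the coupled flow-geomechanics forward model itself is not reproduced.

```python
# Minimal stochastic EnKF analysis step (perturbed observations).
# State size, observation operator, and noise levels are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_state, n_obs, n_ens = 50, 5, 40                  # assumed dimensions

X = rng.normal(size=(n_state, n_ens))              # forecast ensemble (columns = members)
H = np.zeros((n_obs, n_state))
H[np.arange(n_obs), np.arange(n_obs) * 10] = 1.0   # observe every 10th state entry
R = 0.1 * np.eye(n_obs)                            # observation-error covariance
y = rng.normal(size=n_obs)                         # observed data

# Ensemble anomalies scaled so that A @ A.T is the sample covariance P^f.
Xm = X.mean(axis=1, keepdims=True)
A = (X - Xm) / np.sqrt(n_ens - 1)
HA = H @ A

# Kalman gain K = P^f H^T (H P^f H^T + R)^{-1}, built from ensemble statistics.
K = A @ HA.T @ np.linalg.inv(HA @ HA.T + R)

# Update every member against a perturbed copy of the data.
Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens).T
Xa = X + K @ (Y - H @ X)
print("analysis ensemble mean (first five states):", Xa.mean(axis=1)[:5].round(3))
```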

4.
An alternative coupled large deformation formulation combined with a meshfree approach is proposed for flow–deformation analysis of saturated porous media. The formulation proposed is based on the Updated Lagrangian (UL) approach, except that the spatial derivatives are defined with respect to the configuration of the medium at the last time step rather than the configuration at the last iteration. In this way, the Cauchy stresses are calculated directly, rendering the second Piola–Kirchhoff stress tensor not necessary for the numerical solution of the equilibrium equations. Moreover, in contrast with the UL approach, the nodal shape function derivatives are calculated once in each time step and stored for use in subsequent iterations, which reduces the computational cost of the algorithm. Stress objectivity is satisfied using the Jaumann stress rate, and the spatial discretisation of the governing equations is achieved using the standard Galerkin method. The equations of equilibrium are satisfied directly, and the nonlinear parts of the system matrix are derived independent of the stresses of the medium resulting in a stable numerical algorithm. Temporal discretisation is effected based on a three‐point approximation technique that avoids spurious ripple effects and has second‐order accuracy. The radial point interpolation method is used to construct the shape functions. The application of the formulation and the significance of large deformation effects on the numerical results are demonstrated through several numerical examples. Copyright © 2015 John Wiley & Sons, Ltd.  相似文献   

5.
We discuss an adaptive resolution system for modeling regional air pollution based on the chemical transport model STEM. The grid adaptivity is implemented using the generic adaptive mesh refinement tool Paramesh, which enables the grid management operations while harnessing the power of parallel computers. The computational algorithm is based on a decomposition of the domain, with the solution in different subdomains being computed with different spatial resolutions. Various refinement criteria that adaptively control the fine grid placement are analyzed to maximize the solution accuracy while maintaining an acceptable computational cost. Numerical experiments in a large-scale parallel setting (~0.5 billion variables) confirm that adaptive resolution, based on a well-chosen refinement criterion, leads to the decrease in spatial error with an acceptable increase in computational time. Fully dynamic grid adaptivity for air quality models is relatively new. We extend previous work on chemical and transport modeling by using dynamically adaptive grid resolution. Advantages and shortcomings of the present approach are also discussed.  相似文献   

6.
The sparse polynomial chaos expansion (SPCE) methodology is an efficient approach for propagating uncertainties in high-dimensional problems (i.e., when a large number of random variables is involved). This methodology significantly reduces the computational cost with respect to the classical full PCE methodology. Note, however, that when dealing with computationally expensive deterministic models, the time cost remains significant even with the use of the SPCE. In this paper, an efficient combined use of the SPCE methodology and Global Sensitivity Analysis (GSA) is proposed to address this problem. The proposed methodology is first validated using a relatively inexpensive deterministic model that involves the computation of the PDF of the ultimate bearing capacity of a strip footing resting on a weightless, spatially varying soil, where the soil cohesion and angle of internal friction are modeled by two anisotropic, non-Gaussian, cross-correlated random fields. This methodology is then applied to an expensive model that considers the case of a ponderable soil. A brief parametric study is presented in this case to show the efficiency of the proposed methodology. Copyright © 2014 John Wiley & Sons, Ltd.
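To make the link between a polynomial chaos expansion and Global Sensitivity Analysis concrete, the sketch below fits a low-order (non-sparse) Hermite PCE by regression and reads the Sobol indices directly off its coefficients; the model function, truncation degree, and sample size are illustrative, and the sparse basis-selection step of the SPCE is not shown.

```python
# Sketch: polynomial chaos expansion by regression and Sobol indices from its
# coefficients (the sparse/basis-selection step of SPCE is not reproduced).
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from itertools import product
from math import factorial

rng = np.random.default_rng(7)
model = lambda xi: xi[:, 0] + 0.5 * xi[:, 1] ** 2 + 0.3 * xi[:, 0] * xi[:, 1]

# Experimental design: standard normal inputs (mapped to physical variables in practice).
N = 500
xi = rng.normal(size=(N, 2))
y = model(xi)

# Total-degree-3 basis of probabilists' Hermite polynomials He_i(x1) * He_j(x2).
alphas = [(i, j) for i, j in product(range(4), repeat=2) if i + j <= 3]
He = lambda n, x: hermeval(x, [0.0] * n + [1.0])
Psi = np.column_stack([He(i, xi[:, 0]) * He(j, xi[:, 1]) for i, j in alphas])

coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)           # least-squares regression

# Orthogonality of He_n w.r.t. N(0,1): E[He_n^2] = n!  ->  variance decomposition.
norms = np.array([factorial(i) * factorial(j) for i, j in alphas])
var_terms = coef ** 2 * norms
total_var = var_terms[1:].sum()                          # exclude the mean term (0, 0)
S1 = sum(v for (i, j), v in zip(alphas, var_terms) if i > 0 and j == 0) / total_var
S2 = sum(v for (i, j), v in zip(alphas, var_terms) if j > 0 and i == 0) / total_var
print(f"mean={coef[0]:.3f}, variance={total_var:.3f}, S1={S1:.3f}, S2={S2:.3f}")
```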

7.
In this paper, a multiscale homogenization approach is developed for fully coupled saturated porous media to represent the idealized sugar cube model, which is generally employed for fractured porous media on the basis of dual porosity models. In this manner, an extended version of the Hill-Mandel theory that incorporates the microdynamic effects into the multiscale analysis is presented, and the concept of the deformable dual porosity model is demonstrated. Numerical simulations are performed employing the multiscale analysis and the dual porosity model, and the results are compared with direct numerical simulation through two numerical examples. Finally, a combined multiscale-dual porosity technique is introduced, employing a bridge between these two techniques as an alternative approach that reduces the computational cost of numerical simulation in the modeling of heterogeneous deformable porous media.

8.
A generic framework for the computation of derivative information required for gradient-based optimization using sequentially coupled subsurface simulation models is presented. The proposed approach allows for the computation of any derivative information with no modification of the mathematical framework. It only requires the forward model Jacobians and the objective function to be appropriately defined. The flexibility of the framework is demonstrated by its application in different reservoir management studies. The performance of the gradient computation strategy is demonstrated in a synthetic water-flooding model, where the forward model is constructed based on a sequentially coupled flow-transport system. The methodology is illustrated for a synthetic model, with different types of applications of data assimilation and life-cycle optimization. Results are compared with the classical fully coupled (FIM) forward simulation. Based on the presented numerical examples, it is demonstrated how, without any modifications of the basic framework, the solution of gradient-based optimization models can be obtained for any given set of coupled equations. The sequential derivative computation methods deliver similar results compared to FIM methods, while being computationally more efficient.  相似文献   

9.
In this article, an approach for the efficient numerical solution of multi-species reactive transport problems in porous media is described. The objective of this approach is to reformulate the given system of partial and ordinary differential equations (PDEs, ODEs) and algebraic equations (AEs), describing local equilibrium, in such a way that the couplings and nonlinearities are concentrated in a rather small number of equations, leading to the decoupling of some linear partial differential equations from the nonlinear system. Thus, the system is handled in the spirit of a global implicit approach (one step method) avoiding operator splitting techniques, solved by Newton’s method as the basic algorithmic ingredient. The reduction of the problem size helps to limit the large computational costs of numerical simulations of such problems. If the model contains equilibrium precipitation-dissolution reactions of minerals, then these are considered as complementarity conditions and rewritten as semismooth equations, and the whole nonlinear system is solved by the semismooth Newton method.  相似文献   
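The semismooth reformulation of an equilibrium precipitation-dissolution condition can be illustrated on a toy linear complementarity problem, 0 ≤ x ⟂ F(x) ≥ 0, rewritten with the min-function and solved by a generalized Newton iteration; the matrix and right-hand side below are arbitrary stand-ins for the actual chemistry.

```python
# Semismooth Newton sketch for a small complementarity problem
#   0 <= x  ⟂  F(x) >= 0,  with F(x) = A x + b  (A, b are illustrative data),
# rewritten as Phi(x) = min(x, F(x)) = 0 and solved with a generalized Jacobian.
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([-2.0, 1.0])
F = lambda x: A @ x + b

def phi(x):
    return np.minimum(x, F(x))

def generalized_jacobian(x):
    # Pick a valid element of Clarke's generalized Jacobian row by row.
    J = np.zeros((2, 2))
    Fx = F(x)
    for i in range(2):
        J[i] = np.eye(2)[i] if x[i] <= Fx[i] else A[i]
    return J

x = np.ones(2)                      # starting guess
for it in range(20):
    r = phi(x)
    if np.linalg.norm(r) < 1e-12:
        break
    x = x - np.linalg.solve(generalized_jacobian(x), r)

print("iterations:", it, " solution:", x, " residual:", phi(x))
```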

10.
We explore the ability of the greedy algorithm to serve as an effective tool for the construction of reduced-order models for the solution of fully saturated groundwater flow in the presence of randomly distributed transmissivities. The use of a reduced model is particularly appealing in the context of numerical Monte Carlo (MC) simulations that are typically performed, e.g., within environmental risk assessment protocols. In this context, model order reduction techniques enable one to construct a surrogate model to reduce the computational burden associated with the solution of the partial differential equation governing the evolution of the system. These techniques approximate the model solution with a linear combination of spatially distributed basis functions calculated from a small set of full model simulations. The number and the spatial behavior of these basis functions determine the computational efficiency of the reduced model and the accuracy of the approximated solution. The greedy algorithm provides a deterministic procedure to select the basis functions and build the reduced-order model. Starting from a single basis function, the algorithm enriches the set of basis functions until the largest error between the full and the reduced model solutions is lower than a predefined tolerance. The comparison between the standard MC and the reduced-order approach is performed through a two-dimensional steady-state groundwater flow scenario in the presence of a uniform (in the mean) hydraulic head gradient. The natural logarithm of the aquifer transmissivity is modeled as a second-order stationary Gaussian random field. The accuracy of the reduced basis model is assessed as a function of the correlation scale and variance of the log-transmissivity. We explore the performance of the reduced model in terms of the number of iterations of the greedy algorithm and selected metrics quantifying the discrepancy between the sample distributions of hydraulic heads computed with the full and the reduced model. Our results show that the reduced model is accurate and is highly efficient in the presence of a small variance and/or a large correlation length of the log-transmissivity field. The flow scenarios associated with large variances and small correlation lengths require an increased number of basis functions to accurately describe the collection of the MC solutions, thus reducing significantly the computational advantages associated with the reduced model.  相似文献   
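A schematic of the greedy basis-enrichment loop is given below for a generic parameterized system A(θ)u = f; the "full model" is a small synthetic diffusion-type problem, and, purely for illustration, the true projection error over a training set plays the role of the error indicator (a practical implementation would use a cheap a posteriori estimator instead).

```python
# Greedy reduced-basis sketch for a parameterized system A(theta) u = f.
# The parameterized "full model" below is a small synthetic stand-in.
import numpy as np

rng = np.random.default_rng(1)
n = 120
f = np.ones(n)
L = np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)

def full_solve(theta):
    # "Full model": diffusion-type system with a parameter-dependent coefficient.
    k = np.exp(theta[0] * np.sin(np.linspace(0.0, np.pi, n)) + theta[1])
    return np.linalg.solve(L * k[:, None] + 1e-3 * np.eye(n), f)

train = rng.uniform(-1.0, 1.0, size=(200, 2))     # training set of parameters
tol = 1e-6
u0 = full_solve(train[0])
basis = [u0 / np.linalg.norm(u0)]

while True:
    V = np.column_stack(basis)                    # current (orthonormal) reduced basis
    errors = []
    for theta in train:
        u = full_solve(theta)                     # illustration only: true projection error
        errors.append(np.linalg.norm(u - V @ (V.T @ u)) / np.linalg.norm(u))
    worst = int(np.argmax(errors))
    if errors[worst] < tol or len(basis) >= 20:
        break
    u_new = full_solve(train[worst])
    u_new -= V @ (V.T @ u_new)                    # Gram-Schmidt against current basis
    basis.append(u_new / np.linalg.norm(u_new))

print(f"{len(basis)} basis functions, max training error {max(errors):.2e}")
```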

11.
The high computational costs associated with the implicit formulation of discontinuous deformation analysis (DDA) have been one of the major obstacles to its application to engineering problems involving jointed rock masses with large numbers of blocks. In this paper, the Newmark-based predictor-corrector solution (NPC) approach is modified to improve the performance of the original DDA solution module in modeling discontinuous problems. The equation of motion for a discrete block system is first established, with emphasis on the consideration of contact constraints. A family of modified Newmark-based predictor-corrector integration (MNPC) schemes is then proposed and implemented into a unified analysis framework. Comparisons are made between the proposed approach and the widely used constant acceleration (CA) integration approach and central difference (CD) approach, regarding the stability and numerical damping features for a single-degree-of-freedom model, and the implications of the proposed approach for the open-close iteration are also discussed. The validity of the proposed approach is verified by several benchmarking examples, and it is then applied to two typical problems with different numbers of blocks. The results show that the original CA approach in DDA is efficient for the simulation of quasi-static deformation of jointed rock masses, while the proposed MNPC approach leads to improved computational efficiency for dynamic analysis of large-scale jointed rock masses. The MNPC approach therefore provides an additional option for efficient DDA of jointed rock masses.

12.
Uncertainty quantification for geomechanical and reservoir predictions is in general a computationally intensive problem, especially if a direct Monte Carlo approach with large numbers of full-physics simulations is used. A common solution to this problem, well-known for the fluid flow simulations, is the adoption of surrogate modeling approximating the physical behavior with respect to variations in uncertain parameters. The objective of this work is the quantification of such uncertainty both within geomechanical predictions and fluid-flow predictions using a specific surrogate modeling technique, which is based on a functional approach. The methodology realizes an approximation of full-physics simulated outputs that are varying in time and space when uncertainty parameters are changed, particularly important for the prediction of uncertainty in vertical displacement resulting from geomechanical modeling. The developed methodology has been applied both to a subsidence uncertainty quantification example and to a real reservoir forecast risk assessment. The surrogate quality obtained with these applications confirms that the proposed method makes it possible to perform reliable time–space varying dependent risk assessment with a low computational cost, provided the uncertainty space is low-dimensional.  相似文献   

13.
In this paper, we present a computational framework for the simulation of coupled flow and reservoir geomechanics. The physical model is restricted to Biot’s theory of single-phase flow and linear poroelasticity, but is sufficiently general to be extended to multiphase flow problems and inelastic behavior. The distinctive technical aspects of our approach are: (1) the space discretization of the equations. The unknown variables are the pressure, the fluid velocity, and the rock displacements. We recognize that these variables are of very different nature, and need to be discretized differently. We propose a mixed finite element space discretization, which is stable, convergent, locally mass conservative, and employs a single computational grid. To ensure stability and robustness, we perform an implicit time integration of the fluid flow equations. (2) The strategies for the solution of the coupled system. We compare different solution strategies, including the fully coupled approach, the usual (conditionally stable) iteratively coupled approach, and a less common unconditionally stable sequential scheme. We show that the latter scheme corresponds to a modified block Jacobi method, which also enjoys improved convergence properties. This computational model has been implemented in an object-oriented reservoir simulator, whose modular design allows for further extensions and enhancements. We show several representative numerical simulations that illustrate the effectiveness of the approach.  相似文献   

14.
Bubble–particle encounter during flotation is governed by the liquid flow relative to the rising bubble, which is a function of the adsorbed frothers, collectors, and other surfactants and surface contaminants. Due to surface contamination, the bubble surface in flotation has been considered immobile (rigid). However, surface contamination can be swept to the backside of the rising bubble by the relative liquid flow, leaving the front surface of the rising bubble mobile, with a non-zero tangential component of the liquid velocity. The bubble with a mobile surface was considered by Sutherland, who applied the potential flow condition and analyzed the bubble–particle encounter using a simplified particle motion equation without inertia. The Sutherland model was found to over-predict the encounter efficiency and has been improved by incorporating inertial forces, which are amplified at a mobile surface with a non-zero tangential velocity component of the liquid phase. An analytical solution for the encounter efficiency was obtained using approximate equations and is called the Generalized Sutherland Equation (GSE). In this paper, the bubble–particle encounter interaction under the potential flow condition is analyzed by solving the full equation of motion for the particle with a numerical computational approach. The GSE model is compared with the exact numerical results for the encounter efficiency. The comparison shows good agreement between the GSE prediction and the numerical data only for ultrafine particles (< 10 μm in diameter), whose inertial forces are vanishingly small. For non-ultrafine particles, a significant deviation of the GSE model from the numerical data is observed. Details of the numerical methodology and solutions for the (collision) angle of tangency and encounter efficiency are described.

15.
张天龙, 曾鹏, 李天斌, 孙小平. 《岩土力学》(Rock and Soil Mechanics), 2020, 41(9): 3098-3108
Compared with the limit equilibrium method, the strength reduction method offers many advantages for computing the factor of safety of slopes, but its heavier computational demand limits, to some extent, its application in slope reliability analysis. To reduce the number of numerical model evaluations required in reliability analysis, and thereby relieve the computational burden introduced by the strength reduction method, an efficient analysis approach based on an active-learning radial basis function (ARBF) surrogate model is introduced: an active learning function searches for training samples near the limit state surface to update the surrogate model and accelerate its convergence, while a radial basis interpolation function with a linear kernel simplifies the optimization of the model parameters and yields a simple, stable surrogate model. In addition, to fully exploit the advantages of the active-learning surrogate model, an initial sampling strategy tailored to the characteristics of soil slopes is proposed. Once a stable surrogate model has been obtained, it is combined with Monte Carlo simulation to compute the system failure probability of the slope. For comparison, two classical reliability methods, the active-learning Kriging model (AK) and the quadratic response surface method (QRSM), are tested on two typical slope examples, demonstrating the computational efficiency and model stability of the proposed active-learning radial basis function surrogate model.
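A hedged sketch of the surrogate-plus-Monte-Carlo workflow is shown below, with an analytical limit state function standing in for the strength-reduction factor-of-safety computation; the initial design, the simple "closest-to-the-limit-state" learning criterion, and the linear-kernel RBF interpolator from SciPy are illustrative choices, not the paper's ARBF learning function or sampling strategy.

```python
# Active-learning RBF surrogate + Monte Carlo sketch for a failure probability.
# g(x) <= 0 denotes failure; the analytic g below replaces the slope stability model,
# and the learning criterion is a simple stand-in for the paper's ARBF function.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)
g = lambda x: 3.0 - x[:, 0] - 0.5 * x[:, 1] ** 2          # stand-in limit state function

X_mc = rng.normal(size=(50_000, 2))                        # Monte Carlo population
X_train = rng.normal(size=(12, 2))                         # assumed initial design
y_train = g(X_train)

for _ in range(40):                                        # active-learning iterations
    surrogate = RBFInterpolator(X_train, y_train, kernel="linear")
    g_hat = surrogate(X_mc)
    # Pick the candidate closest to the predicted limit state that is not
    # (almost) a duplicate of an existing training point.
    for idx in np.argsort(np.abs(g_hat)):
        if np.min(np.linalg.norm(X_train - X_mc[idx], axis=1)) > 1e-3:
            break
    X_train = np.vstack([X_train, X_mc[idx]])
    y_train = np.append(y_train, g(X_mc[idx:idx + 1]))

surrogate = RBFInterpolator(X_train, y_train, kernel="linear")
pf = np.mean(surrogate(X_mc) <= 0.0)
print(f"estimated failure probability: {pf:.4f} using {len(X_train)} model evaluations")
```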

16.
One of the important recent advances in the field of hurricane/storm modelling has been the development of high-fidelity numerical simulation models for reliable and accurate prediction of wave and surge responses. The computational cost associated with these models has simultaneously created an incentive for researchers to investigate surrogate modelling (i.e. metamodeling) and interpolation/regression methodologies to efficiently approximate hurricane/storm responses exploiting existing databases of high-fidelity simulations. Moving least squares (MLS) response surfaces were recently proposed as such an approximation methodology, providing the ability to efficiently describe different responses of interest (such as surge and wave heights) in a large coastal region that may involve thousands of points for which the hurricane impact needs to be estimated. This paper discusses further implementation details and focuses on optimization characteristics of this surrogate modelling approach. The approximation of different response characteristics is considered, and special attention is given to predicting the storm surge for inland locations, for which the possibility of the location remaining dry needs to be additionally addressed. The optimal selection of the basis functions for the response surface and of the parameters of the MLS character of the approximation is discussed in detail, and the impact of the number of high-fidelity simulations informing the surrogate model is also investigated. Different normalizations of the response as well as choices for the objective function for the optimization problem are considered, and their impact on the accuracy of the resultant (under these choices) surrogate model is examined. Details for implementation of the methodology for efficient coastal risk assessment are reviewed, and the influence in the analysis of the model prediction error introduced through the surrogate modelling is discussed. A case study is provided, utilizing a recently developed database of high-fidelity simulations for the Hawaiian Islands.  相似文献   
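A minimal moving least squares sketch in one input dimension is given below: a quadratic basis is refitted locally at every prediction point using Gaussian weights; the scattered "high-fidelity" data, the bandwidth, and the basis choice are illustrative assumptions rather than the calibrated settings discussed in the paper.

```python
# Minimal moving least squares (MLS) sketch: local weighted quadratic fit.
# The scattered "high-fidelity" data and the bandwidth are illustrative.
import numpy as np

rng = np.random.default_rng(3)
x_data = rng.uniform(0, 10, 60)                     # e.g. storm parameter samples
y_data = np.sin(x_data) + 0.1 * x_data ** 2 + 0.05 * rng.normal(size=60)

def mls_predict(x0, h=1.5):
    """Evaluate the MLS surrogate at x0 with Gaussian weights of bandwidth h."""
    w = np.exp(-((x_data - x0) / h) ** 2)           # moving (local) weights
    B = np.column_stack([np.ones_like(x_data), x_data, x_data ** 2])  # quadratic basis
    # Weighted normal equations: (B^T W B) a = B^T W y
    BtW = B.T * w
    a = np.linalg.solve(BtW @ B, BtW @ y_data)
    return a @ np.array([1.0, x0, x0 ** 2])

x_grid = np.linspace(0, 10, 5)
print([round(mls_predict(x), 3) for x in x_grid])
```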

17.
In this work, we present an efficient matrix-free ensemble Kalman filter (EnKF) algorithm for the assimilation of large data sets. The EnKF has increasingly become an essential tool for data assimilation in numerical models. It is an attractive assimilation method because it can evolve the model covariance matrix for a non-linear model, through the use of an ensemble of model states, and it is easy to implement for any numerical model. Nevertheless, the computational cost of the EnKF can increase significantly for cases involving the assimilation of large data sets. As more data become available for assimilation, a potential bottleneck in most EnKF algorithms is the computation and application of the Kalman gain matrix. To reduce the complexity and cost of assimilating large data sets, a matrix-free EnKF algorithm is proposed. The algorithm uses an efficient matrix-free linear solver, based on the Sherman–Morrison formulas, to solve the implicit linear system within the Kalman gain matrix and compute the analysis. Numerical experiments with a two-dimensional shallow water model on the sphere are presented, where the results show the matrix-free implementation outperforming a singular value decomposition (SVD)-based implementation in computational time.
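The core linear-algebra idea can be sketched as follows: with a diagonal R and a small ensemble, (H P^f H^T + R)^{-1} applied to a vector can be evaluated through the Sherman–Morrison–Woodbury identity using only ensemble-sized matrices. The Woodbury form below is an equivalent stand-in for the paper's sequential rank-one Sherman–Morrison recursion, and all dimensions are illustrative.

```python
# Sketch of applying (H P^f H^T + R)^{-1} to a vector with the
# Sherman-Morrison-Woodbury identity, using only ensemble-sized matrices.
# Sizes below are assumed for illustration; the shallow-water model is not reproduced.
import numpy as np

rng = np.random.default_rng(4)
n_state, n_obs, n_ens = 2_000, 500, 30

X = rng.normal(size=(n_state, n_ens))              # forecast ensemble
H = rng.normal(size=(n_obs, n_state)) / np.sqrt(n_state)
r_diag = 0.2 * np.ones(n_obs)                      # diagonal observation-error covariance
d = rng.normal(size=n_obs)                         # innovation vector

A = (X - X.mean(axis=1, keepdims=True)) / np.sqrt(n_ens - 1)
U = H @ A                                          # n_obs x n_ens ("tall-skinny")

def apply_inverse(b):
    """Return (U U^T + R)^{-1} b without ever forming the n_obs x n_obs matrix."""
    Rinv_b = b / r_diag
    Rinv_U = U / r_diag[:, None]
    small = np.eye(n_ens) + U.T @ Rinv_U           # only an n_ens x n_ens system
    return Rinv_b - Rinv_U @ np.linalg.solve(small, U.T @ Rinv_b)

z = apply_inverse(d)
# Cross-check against the explicit dense solve (feasible only at this toy size).
z_ref = np.linalg.solve(U @ U.T + np.diag(r_diag), d)
print("max abs difference vs dense solve:", np.max(np.abs(z - z_ref)))
```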

18.
A particle swarm optimization–Gaussian process collaborative optimization method for displacement back analysis   (Cited by: 2; self-citations: 0; citations by others: 2)
Displacement back analysis in geotechnical engineering based on stochastic global optimization techniques suffers from a heavy numerical workload and low efficiency. To address this problem, the particle swarm optimization (PSO) algorithm is combined with Gaussian process (GP) machine learning, and a PSO–GP collaborative optimization method for displacement back analysis is proposed. Building on the excellent global search capability of PSO, the method employs a GP machine learning model to continuously summarize the search history and predict the most promising region containing the global optimum, thereby improving the efficiency of the particle swarm search and reducing the number of fitness evaluations, which in turn effectively lowers the numerical computation effort of the displacement back analysis. Mathematical verification on a variety of test functions and the results of an engineering case study show that the method is feasible and, compared with conventional approaches, significantly reduces the computation time of displacement back analysis.
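A minimal particle swarm optimization loop, the global search engine of the method above, is sketched below on a stand-in misfit function; the swarm size, inertia, and acceleration coefficients are standard textbook values, and the Gaussian-process guidance step is deliberately omitted.

```python
# Minimal PSO sketch (global search component only; the GP guidance is omitted).
import numpy as np

rng = np.random.default_rng(5)
f = lambda x: np.sum(x ** 2, axis=1)          # stand-in misfit (true optimum at the origin)

n_particles, dim, iters = 30, 4, 200
w, c1, c2 = 0.7, 1.5, 1.5                     # inertia and acceleration coefficients
x = rng.uniform(-5, 5, (n_particles, dim))    # positions
v = np.zeros_like(x)                          # velocities

pbest, pbest_val = x.copy(), f(x)
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    val = f(x)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best misfit found:", pbest_val.min(), "at", gbest.round(4))
```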

19.
In this paper, we propose a new methodology to automatically find a model that fits on an experimental variogram. Starting with a linear combination of some basic authorized structures (for instance, spherical and exponential), a numerical algorithm is used to compute the parameters, which minimize a distance between the model and the experimental variogram. The initial values are automatically chosen and the algorithm is iterative. After this first step, parameters with a negligible influence are discarded from the model and the more parsimonious model is estimated by using the numerical algorithm again. This process is iterated until no more parameters can be discarded. A procedure based on a profiled cost function is also developed in order to use the numerical algorithm for multivariate data sets (possibly with a lot of variables) modeled in the scope of a linear model of coregionalization. The efficiency of the method is illustrated on several examples (including variogram maps) and on two multivariate cases.  相似文献   
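The fitting step can be sketched as a nonlinear least-squares fit of a nugget + spherical + exponential combination to an experimental variogram; the experimental values below are synthetic, and the automatic initialization and parameter-discarding loop of the proposed methodology are not reproduced.

```python
# Variogram model fitting sketch: least-squares fit of a nugget + spherical +
# exponential combination to an experimental variogram (synthetic data here).
import numpy as np
from scipy.optimize import curve_fit

def spherical(h, sill, a):
    hr = np.minimum(h / a, 1.0)
    return sill * (1.5 * hr - 0.5 * hr ** 3)

def exponential(h, sill, a):
    return sill * (1.0 - np.exp(-3.0 * h / a))

def model(h, nugget, s1, a1, s2, a2):
    return nugget + spherical(h, s1, a1) + exponential(h, s2, a2)

# Synthetic "experimental" variogram (lag, gamma) pairs standing in for real data.
lags = np.linspace(5.0, 100.0, 20)
gamma_exp = model(lags, 0.1, 0.6, 40.0, 0.3, 80.0)
gamma_exp += 0.02 * np.random.default_rng(6).normal(size=lags.size)

p0 = [0.05, 0.5, 30.0, 0.5, 60.0]                      # an automatic initial guess would go here
bounds = (0.0, [1.0, 2.0, 200.0, 2.0, 200.0])          # keep parameters admissible
params, _ = curve_fit(model, lags, gamma_exp, p0=p0, bounds=bounds)
print(dict(zip(["nugget", "sph_sill", "sph_range", "exp_sill", "exp_range"],
               params.round(3))))
```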

20.
The in-situ upgrading (ISU) of bitumen and oil shale is a very challenging process to model numerically because of the large number of components that need to be modelled using a system of equations that are both highly non-linear and strongly coupled. Operator splitting methods are one way of potentially improving computational performance. Each numerical operator in a process is modelled separately, allowing the best solution method to be used for the given numerical operator. A significant drawback to the approach is that decoupling the governing equations introduces an additional source of numerical error, known as the splitting error. The best splitting method for modelling a given process minimises the splitting error whilst improving computational performance compared to a fully implicit approach. Although operator splitting has been widely used for the modelling of reactive-transport problems, it has not yet been applied to the modelling of ISU. One reason is that it is not clear which operator splitting technique to use. Numerous such techniques are described in the literature and each leads to a different splitting error. While this error has been extensively analysed for linear operators for a wide range of methods, the results cannot be extended to general non-linear systems. It is therefore not clear which of these techniques is most appropriate for the modelling of ISU. In this paper, we investigate the application of various operator splitting techniques to the modelling of the ISU of bitumen and oil shale. The techniques were tested on a simplified model of the physical system in which a solid or heavy liquid component is decomposed by pyrolysis into lighter liquid and gas components. The operator splitting techniques examined include the sequential split operator (SSO), the Strang-Marchuk split operator (SMSO) and the iterative split operator (ISO). They were evaluated on various test cases by considering the evolution of the discretization error as a function of the time-step size compared with the results obtained from a fully implicit simulation. We observed that the error was least for a splitting scheme where the thermal conduction was performed first, followed by the chemical reaction step and finally the heat and mass convection operator (SSO-CKA). This method was then applied to a more realistic model of the ISU of bitumen with multiple components, and we were able to obtain a speed-up of between 3 and 5.  相似文献   
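The difference between sequential (Lie) and Strang splitting, in the spirit of the SSO and SMSO schemes above, can be illustrated on a scalar toy problem that mimics "transport plus reaction"; the operators, rate constants, and time step below are arbitrary, and no ISU chemistry is represented.

```python
# Sequential (Lie) vs. Strang operator splitting on a toy ODE
#   du/dt = -a*u  (transport-like)  +  -k*u**2  (reaction-like),
# compared against a finely resolved reference. Coefficients are illustrative.
import numpy as np

a, k, u0, T, dt = 1.0, 2.0, 1.0, 1.0, 0.05

def step_A(u, h):            # exact sub-step for the linear "transport" operator
    return u * np.exp(-a * h)

def step_R(u, h):            # exact sub-step for the quadratic "reaction" operator
    return u / (1.0 + k * h * u)

def integrate(splitting, dt):
    u, n = u0, int(round(T / dt))
    for _ in range(n):
        if splitting == "lie":                       # A then R over the full step
            u = step_R(step_A(u, dt), dt)
        else:                                        # Strang: half A, full R, half A
            u = step_A(step_R(step_A(u, dt / 2), dt), dt / 2)
    return u

u_ref = integrate("strang", 1e-5)                    # tightly resolved reference
for name in ("lie", "strang"):
    err = abs(integrate(name, dt) - u_ref)
    print(f"{name:7s} splitting error at t={T}: {err:.2e}")
```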
