Similar documents
20 similar documents found (search time: 14 ms)
1.
2.
Eric Tate 《Natural Hazards》2012,63(2):325-347
Social vulnerability indices have emerged over the past decade as quantitative measures of the social dimensions of natural hazards vulnerability. But how reliable are the index rankings? Validation of indices with external reference data has posed a persistent challenge in large part because social vulnerability is multidimensional and not directly observable. This article applies global sensitivity analyses to internally validate the methods used in the most common social vulnerability index designs: deductive, hierarchical, and inductive. Uncertainty analysis is performed to assess the robustness of index ranks when reasonable alternative index configurations are modeled. The hierarchical design was found to be the most accurate, while the inductive model was the most precise. Sensitivity analysis is employed to understand which decisions in the vulnerability index construction process have the greatest influence on the stability of output rankings. The deductive index ranks are found to be the most sensitive to the choice of transformation method, hierarchical models to the selection of weighting scheme, and inductive indices to the indicator set and scale of analysis. Specific recommendations for each stage of index construction are provided so that the next generation of social vulnerability indices can be developed with a greater degree of transparency, robustness, and reliability.
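The uncertainty-analysis step described above can be sketched in a few lines: recompute an additive index under many reasonable alternative weighting schemes and measure how much each unit's rank moves. The indicator data, the Dirichlet weight sampling, and the index form below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def additive_index(X, weights):
    """Deductive-style index: weighted sum of z-score standardized indicators."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    return Z @ weights

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))          # 50 units, 4 hypothetical indicators

# Uncertainty analysis: re-rank under 200 alternative weighting schemes
ranks = []
for _ in range(200):
    w = rng.dirichlet(np.ones(4))     # one "reasonable alternative" configuration
    ranks.append(np.argsort(np.argsort(-additive_index(X, w))))
ranks = np.array(ranks)

# Rank robustness: spread of each unit's rank across configurations
spread = ranks.std(axis=0)
print("mean rank std-dev across configurations:", spread.mean())
```

A small mean spread indicates rankings that are robust to the weighting decision; units with a large spread are the ones whose position depends on methodological choices.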

3.
4.
The subset simulation (SS) method is a probabilistic approach devoted to efficiently calculating a small failure probability. Unlike Monte Carlo simulation (MCS), which is very time-consuming when evaluating a small failure probability, the SS method can assess it in a much shorter time. However, this approach provides no information about the probability density function (PDF) of the system response, nor about the contribution of each uncertain input parameter to the variability of that response. Finally, the SS approach cannot be used to calculate the partial safety factors that are generally obtained from a reliability analysis. To overcome these shortcomings, the SS approach is combined herein with the Collocation-based Stochastic Response Surface Method (CSRSM) to compute these outputs. The combination uses the values of the system response obtained by the SS approach to determine the unknown coefficients of the polynomial chaos expansion in CSRSM. An example problem involving the computation of the ultimate bearing capacity of a strip footing demonstrates the efficiency of the proposed procedure. The method is validated by comparison with MCS applied to the original deterministic model. Finally, a probabilistic parametric study is presented and discussed.
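The coupling step can be illustrated in one dimension: response samples produced by some sampler (in the paper, the subset-simulation samples are reused) determine polynomial chaos coefficients by least squares on a probabilists' Hermite basis. The model function and sample source below are stand-ins, not the strip-footing model.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(1)
x = rng.standard_normal(500)              # standard-normal input samples
y = np.exp(0.3 * x) + 0.1 * x**2          # hypothetical deterministic response

# Least-squares determination of the PCE coefficients (degree-4 expansion)
order = 4
Psi = np.column_stack([hermeval(x, np.eye(order + 1)[k]) for k in range(order + 1)])
coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)

# The fitted PCE gives an approximate PDF of the response by cheap resampling
x_new = rng.standard_normal(10000)
y_pce = np.column_stack(
    [hermeval(x_new, np.eye(order + 1)[k]) for k in range(order + 1)]) @ coef
print("PCE mean of response:", y_pce.mean())
```

Once the coefficients are known, response statistics, sensitivity indices, and design points can all be extracted from the cheap surrogate instead of the expensive model.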

5.
The effects of permeability variability on uncertainty of the results of a hydrocarbon biodegradation model are addressed. The model includes saturated and unsaturated flow, multi-species transport, heat transport, and bacterial-growth processes. A stochastic approach was used in the uncertainty analysis. Sensitivity analyses were conducted, taking into consideration the effects of heterogeneity. The Monte Carlo method was used, with permeability as the input stochastic variable. Results showed that uncertainty increases with time. This can lead to difficulties regarding cleanup decision making such as predicting the timeframe to reach an aquifer cleanup goal. It was not possible to replace the heterogeneous system with a homogeneous one through the use of effective parameters that preserve an equivalent behavior of the two systems. Effective permeability is space and time dependent and also depends on values of bioactivity parameters. The study also emphasized the importance of accurately measuring certain bacterial parameters, namely, maximum substrate uptake rate for degradation and cell yield coefficient. Uncertainties regarding nutrient and oxygen uptake and saturation parameters were less important for the current application.
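The core of the stochastic approach can be sketched with permeability as the input random variable: draw many realizations, propagate each through a (here drastically simplified) flow relation, and watch the spread of predictions grow with time. The lognormal distribution, parameter values, and travel-distance model below are illustrative assumptions, not the paper's biodegradation model.

```python
import numpy as np

rng = np.random.default_rng(2)
n_real = 1000
# Permeability (here hydraulic conductivity, m/s) as the stochastic input
K = rng.lognormal(mean=np.log(1e-4), sigma=1.0, size=n_real)

gradient, porosity = 0.01, 0.3
v = K * gradient / porosity           # seepage velocity per realization

# Uncertainty in predicted plume travel distance grows with time
for t in (1.0, 5.0, 10.0):            # years, hypothetical horizons
    d = v * t * 3.156e7               # distance in metres (seconds per year)
    print(f"t = {t} yr: std of travel distance = {d.std():.1f} m")
```

The widening spread is exactly the effect the abstract highlights: the further ahead the cleanup timeframe prediction, the larger its uncertainty.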

6.
Dong  Guiming  Wang  Ying  Tian  Juan  Fan  Zhihong 《Hydrogeology Journal》2021,29(5):1871-1883

In the numerical simulation of groundwater flow, uncertainties often affect the precision of the simulation results. Stochastic and statistical approaches such as the Monte Carlo method, the Neumann expansion method, and the Taylor series expansion are commonly employed to estimate uncertainty in the final output. Based on the first-order interval perturbation method, a combination of the interval and perturbation methods is proposed as a viable alternative and compared to the well-known equal interval continuous sampling method (EICSM). The approach was implemented in the GFModel program (an unsaturated-saturated groundwater flow simulation model). The study considers three distinct sets of interval parameters: the hydraulic conductivities of six equal parts of the aquifer, the boundary head conditions, and several hydrogeological parameters (e.g. specific storativity and extraction rate of wells). The results show that the relative errors of deviation of the groundwater head extrema (RDGE) in the late stage of simulation are controlled within approximately ±5% when the changing rate of the hydrogeological parameter is no more than 0.2. In terms of the groundwater head extrema themselves, the relative errors can be controlled within ±1.5%. The relative errors of the groundwater head variation are within approximately ±5% when the changing rate is no more than 0.2. The proposed method is applicable to unsteady-state confined water flow systems.
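The first-order interval perturbation idea can be sketched on a toy response: bound the output interval by the response at the interval midpoints plus and minus the absolute gradient times the interval radii. The drawdown-style function and parameter values below are stand-ins, not the GFModel equations.

```python
import numpy as np

def head(params):
    """Hypothetical stand-in for a groundwater-head model response."""
    K, Q = params
    return 100.0 - Q / K              # toy drawdown-style relation

p0 = np.array([10.0, 50.0])           # interval midpoints of the two parameters
radius = 0.2 * p0                     # "changing rate" of 0.2, as in the study

# First-order interval perturbation: response bounds from |gradient| . radius
eps = 1e-6
grad = np.array([(head(p0 + eps * np.eye(2)[i]) - head(p0)) / eps
                 for i in range(2)])
half_width = np.abs(grad) @ radius
lo, hi = head(p0) - half_width, head(p0) + half_width
print(f"head interval = [{lo:.2f}, {hi:.2f}]")
```

This single gradient evaluation replaces the many forward runs that a sampling method such as EICSM would need to bracket the same extrema, at the cost of a first-order truncation error.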


7.
8.
The determination of the optimal type and placement of a nonconventional well in a heterogeneous reservoir represents a challenging optimization problem. This determination is significantly more complicated if uncertainty in the reservoir geology is included in the optimization. In this study, a genetic algorithm is applied to optimize the deployment of nonconventional wells. Geological uncertainty is accounted for by optimizing over multiple reservoir models (realizations) subject to a prescribed risk attitude. To reduce the excessive computational requirements of the base method, a new statistical proxy (which provides fast estimates of the objective function) based on cluster analysis is introduced into the optimization process. This proxy provides an estimate of the cumulative distribution function (CDF) of the scenario performance, which enables the quantification of proxy uncertainty. Knowledge of the proxy-based performance estimate in conjunction with the proxy CDF enables the systematic selection of the most appropriate scenarios for full simulation. Application of the overall method for the optimization of monobore and dual-lateral well placement demonstrates the performance of the hybrid optimization procedure. Specifically, it is shown that by simulating only 10% or 20% of the scenarios (as determined by application of the proxy), optimization results very close to those achieved by simulating all cases are obtained.
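A minimal sketch of the cluster-based proxy idea, under simplifying assumptions: scenarios are clustered, only one representative per cluster is fully simulated, and the representative's objective serves as the proxy estimate for the other members. The paper's proxy additionally estimates a CDF of scenario performance to quantify proxy uncertainty, which is omitted here; the features and objective are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 3))         # 200 scenarios, 3 hypothetical features

# Tiny k-means so that only cluster representatives need full simulation
k = 10
centers = X[rng.choice(200, k, replace=False)]
for _ in range(20):
    labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                        else centers[j] for j in range(k)])

def objective(x):
    """Stand-in for an expensive reservoir simulation objective."""
    return -np.sum((x - 0.5) ** 2)

# "Full simulation" only for the 10 representatives; proxy for the rest
rep_idx = [int(np.argmin(((X - c) ** 2).sum(1))) for c in centers]
proxy = {j: objective(X[i]) for j, i in enumerate(rep_idx)}
estimates = np.array([proxy[l] for l in labels])
print("fully simulated", len(rep_idx), "of", len(X), "scenarios")
```

This is the 10%-of-scenarios regime the abstract reports: the genetic algorithm can then rank the full population using `estimates` and reserve true simulations for the most promising candidates.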

9.
In this paper we present an error and performance analysis of quasi-Monte Carlo algorithms for solving multidimensional integrals (up to 100 dimensions) on the Grid using MPI. We take into account the fact that the Grid is a potentially heterogeneous computing environment, where the user does not know the specifics of the target architecture. Parallel algorithms should therefore be able to adapt to this heterogeneity, providing automated load balancing. Monte Carlo algorithms can be tailored to such environments, provided parallel pseudorandom number generators are available; the use of quasi-Monte Carlo algorithms poses more difficulties. In both cases the efficient implementation of the algorithms depends on the functionality of the corresponding packages for generating pseudorandom or quasirandom numbers. We propose an efficient parallel implementation of the Sobol sequence for a grid environment and demonstrate numerical experiments on a heterogeneous grid. To achieve high parallel efficiency we use a newly developed grid service called the Job Track Service, which provides efficient management of available computing resources through reservations.
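The core numerical ingredient can be tried directly with SciPy's Sobol generator; the paper implements its own parallel Sobol variant, and the MPI/grid layer is omitted here. The integrand is a hypothetical smooth test function, not one from the paper.

```python
import numpy as np
from scipy.stats import qmc

# Quasi-Monte Carlo estimate of a d-dimensional integral over [0,1]^d using
# the Sobol sequence; a parallel run would hand disjoint sub-blocks of the
# sequence to different workers.
d = 10
f = lambda x: np.exp(np.sum(x, axis=1))       # test integrand
exact = (np.e - 1) ** d                       # its exact integral over [0,1]^d

sampler = qmc.Sobol(d=d, scramble=True, seed=4)
pts = sampler.random_base2(m=14)              # 2^14 scrambled Sobol points
estimate = f(pts).mean()
print("QMC estimate:", estimate, " exact:", exact)
```

For smooth integrands like this, the scrambled Sobol estimate converges much faster than plain Monte Carlo at the same sample count, which is why distributing the sequence efficiently is worth the extra effort.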

10.
The Monte Carlo simulation (MCS) method has been widely used in probabilistic analysis of slope stability, and it provides a robust and simple way to assess failure probability. However, MCS offers no insight into the relative contributions of various uncertainties (e.g., inherent spatial variability of soil properties and subsurface stratigraphy) to the failure probability, and it suffers from a lack of resolution and efficiency at small probability levels. This paper develops a probabilistic failure analysis approach that makes use of the failure samples generated in the MCS and analyzes these samples to assess the effects of various uncertainties on slope failure probability. The approach contains two major components: hypothesis tests for prioritizing the effects of the various uncertainties, and Bayesian analysis for further quantifying them. Equations are derived for both components. Because the probabilistic failure analysis requires a large number of failure samples, an advanced Monte Carlo technique called Subset Simulation is employed to generate failure samples more efficiently. As an illustration, the proposed approach is applied to a design scenario of the James Bay Dyke. The hypothesis tests show that the uncertainty of the undrained shear strength of the lacustrine clay has the most significant effect on the slope failure probability, while the uncertainty of the clay crust thickness contributes the least. The effect of the former is then further quantified by Bayesian analysis. Both the hypothesis test results and the Bayesian analysis results are validated against independent sensitivity studies. Probabilistic failure analysis is shown to provide results equivalent to those from additional sensitivity studies, while avoiding the additional computational time and effort of repeated MCS runs.
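Both components can be sketched on synthetic samples: a two-sample z statistic compares each input's distribution in the failure set against its unconditional distribution (the prioritization step), and Bayes' rule turns failure-sample histograms into conditional failure probabilities (the quantification step). The performance function, distributions, and parameter names below are illustrative stand-ins for the James Bay Dyke inputs.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
su = rng.normal(30.0, 5.0, n)        # undrained shear strength (kPa), hypothetical
t = rng.normal(4.0, 0.5, n)          # crust thickness (m), hypothetical

# Toy performance function: failure when a strength margin drops below zero
fails = (su + 2.0 * t) < 25.0
pf = fails.mean()

def z_stat(x):
    """Two-sample z statistic: failure-sample mean vs. unconditional mean."""
    xf = x[fails]
    return (xf.mean() - x.mean()) / np.sqrt(xf.var() / len(xf) + x.var() / len(x))

print("pf =", pf, " z(su) =", z_stat(su), " z(t) =", z_stat(t))

# Bayesian step: P(failure | su in bin) = P(su in bin | failure) * pf / P(su in bin)
bins = np.linspace(10, 50, 9)
p_bin = np.histogram(su, bins)[0] / n
p_bin_f = np.histogram(su[fails], bins)[0] / fails.sum()
pf_given_su = np.where(p_bin > 0, p_bin_f * pf / p_bin, 0.0)
```

The input with the larger |z| is the one whose uncertainty drives failure, and `pf_given_su` shows how the failure probability would respond to better knowledge of that input, with no extra simulation runs.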

11.
In this paper, we propose multilevel Monte Carlo (MLMC) methods that use ensemble-level mixed multiscale methods in simulations of multiphase flow and transport. The contribution of this paper is twofold: (1) a design of ensemble-level mixed multiscale finite element methods and (2) a novel use of mixed multiscale finite element methods within multilevel Monte Carlo techniques to speed up the computations. The main idea of ensemble-level multiscale methods is to construct local multiscale basis functions that can be used for any member of the ensemble. We consider two such methods: (1) the no-local-solve-online (NLSO) ensemble-level method and (2) the local-solve-online (LSO) ensemble-level method. The first approach was proposed in Aarnes and Efendiev (SIAM J. Sci. Comput. 30(5):2319-2339, 2008), while the second is new. Both mixed multiscale methods use a number of snapshots of the permeability media in generating multiscale basis functions. As a result, in the offline stage, we construct multiple basis functions for each coarse region, where the basis functions correspond to different realizations. In the NLSO method, the whole set of precomputed basis functions is used to approximate the solution for an arbitrary realization. In the LSO method, the precomputed functions are used to construct a multiscale basis for a particular realization, with which the solution corresponding to that realization is approximated in the LSO mixed multiscale finite element method (MsFEM). In both approaches, the accuracy of the method depends on the number of snapshots, computed from different realizations, that are used to precompute the multiscale basis. The ensemble-level multiscale methods are then used within multilevel Monte Carlo methods (Giles, Oper. Res. 56(3):607-617, 2008).
In multilevel Monte Carlo methods, more accurate (and expensive) forward simulations are run with fewer samples, while less accurate (and inexpensive) forward simulations are run with a larger number of samples. By selecting the numbers of expensive and inexpensive simulations based on the number of coarse degrees of freedom, one can show that MLMC methods provide better accuracy at the same cost as Monte Carlo (MC) methods. The main objective of the paper is thus twofold: first, to compare the NLSO and LSO mixed MsFEMs; and second, to use both approaches in the context of MLMC to speed up MC calculations.
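The MLMC principle described above can be sketched with a two-level telescoping estimator: many samples of a cheap coarse solver plus a few samples of the fine-minus-coarse correction. The forward model below is a hypothetical one-dimensional ODE solved by Euler stepping, standing in for the multiscale flow solvers.

```python
import numpy as np

rng = np.random.default_rng(6)

def level_solver(z, nsteps):
    """Cheap stand-in for a forward simulation: Euler integration of
    dy/dt = -z*y, y(0) = 1, over [0, 1]; more steps = more accurate = pricier."""
    y, dt = 1.0, 1.0 / nsteps
    for _ in range(nsteps):
        y += -z * y * dt
    return y

# Two-level MLMC: many cheap coarse samples, few fine-minus-coarse corrections
n0, n1 = 20000, 500
z0 = rng.uniform(0.5, 1.5, n0)
z1 = rng.uniform(0.5, 1.5, n1)

coarse = np.mean([level_solver(z, 4) for z in z0])
corr = np.mean([level_solver(z, 64) - level_solver(z, 4) for z in z1])
mlmc = coarse + corr
exact = np.exp(-0.5) - np.exp(-1.5)     # E[e^{-z}] for z ~ U(0.5, 1.5)
print("MLMC estimate:", mlmc, " exact:", exact)
```

The correction term has a small variance because the two levels are positively correlated, so only a handful of expensive fine solves are needed, which is the cost saving the paper exploits with coarse multiscale solvers as the cheap levels.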

12.
Two finite element algorithms suitable for long term simulation of geothermal reservoirs are presented. Both methods use a diagonal mass matrix and a Newton iteration scheme. The first scheme solves the 2N unsymmetric algebraic equations resulting from the finite element discretization of the equations governing the flow of heat and mass in porous media by using a banded equation solver. The second method, suitable for problems in which the transmissibility terms are small compared to the accumulation terms, reduces the set of N equations for the Newton corrections to a symmetric system. Comparison with finite difference schemes indicates that the proposed algorithms are competitive with existing methods.
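The Newton iteration both algorithms share can be sketched on a generic small nonlinear system; the residual below is a stand-in, not the discretized heat- and mass-flow equations.

```python
import numpy as np

def residual(u):
    """Hypothetical nonlinear algebraic system F(u) = 0."""
    return np.array([u[0] ** 2 + u[1] - 3.0,
                     u[0] + u[1] ** 2 - 5.0])

def jacobian(u):
    return np.array([[2 * u[0], 1.0],
                     [1.0, 2 * u[1]]])

# Newton iteration: solve J(u) du = -F(u), update, repeat until converged
u = np.array([1.0, 1.0])
for _ in range(20):
    du = np.linalg.solve(jacobian(u), -residual(u))
    u += du
    if np.linalg.norm(du) < 1e-12:
        break
print("solution:", u, " residual norm:", np.linalg.norm(residual(u)))
```

In the paper's second scheme, the linear solve at each iteration is reduced to a symmetric system, which halves storage and roughly halves the factorization cost relative to the unsymmetric banded solve.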

13.
We present a parallel framework for history matching and uncertainty characterization based on the Kalman filter update equation, applied to reservoir simulation. The main advantage of ensemble-based data assimilation methods is that they can handle large-scale numerical models with a high degree of nonlinearity and large amounts of data, making them well suited for coupling with a reservoir simulator. However, the sequential implementation is computationally expensive, as the methods require a relatively high number of reservoir simulation runs. The main focus of this work is therefore to develop a parallel data assimilation framework requiring minimal changes to the reservoir simulator source code. In this framework, multiple concurrent realizations are computed on several partitions of a parallel machine. These realizations are further subdivided among different processors, and communication is performed at data assimilation times. Although the framework is general and can be used with different ensemble techniques, we discuss the methodology and compare results for two algorithms: the ensemble Kalman filter (EnKF) and the ensemble smoother (ES). Computational results show that the absolute runtime is greatly reduced by the parallel implementation relative to a serial one. In particular, a parallel efficiency of about 35% is obtained for the EnKF, and more than 50% for the ES.
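The Kalman filter update equation at the heart of both algorithms can be sketched for a single scalar parameter: the Kalman gain is estimated from ensemble covariances and applied with perturbed observations. The linear forward model and all numbers below are illustrative assumptions, not a reservoir simulator.

```python
import numpy as np

rng = np.random.default_rng(7)
ne = 200                              # ensemble size

# Prior ensemble of an uncertain model parameter (e.g., log-permeability)
x = rng.normal(0.0, 1.0, ne)
truth = 0.8
h = lambda x: 2.0 * x                 # linear stand-in for the simulator
r = 0.5 ** 2                          # observation-error variance
d_obs = h(truth)

# EnKF analysis step: Kalman gain from ensemble covariances
y = h(x)
cxy = np.cov(x, y)[0, 1]
gain = cxy / (np.var(y, ddof=1) + r)
x_a = x + gain * (d_obs + rng.normal(0, 0.5, ne) - y)   # perturbed observations

print("prior mean:", x.mean(), " posterior mean:", x_a.mean())
```

In the parallel framework, the expensive part is producing `y` (one simulator run per ensemble member), which is embarrassingly parallel; only this cheap update requires communication at assimilation times.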

14.
Permeability is one of the key parameters required for reservoir evaluation, productivity calculation, and the design of rational development plans for oil and gas reservoirs. To address the drawbacks of the conventional steady-state method for measuring the permeability of tight reservoir rocks, namely low efficiency and sensitivity to ambient temperature during testing, a pulse-decay permeameter was used to systematically compare the steady-state Klinkenberg-corrected permeability and the unsteady-state pulse-decay permeability of 39 tight rock samples, and the influence of sample permeability, test procedure, and effective-stress combination on the results was analyzed. The results show that, under the same net confining pressure, the pulse-decay permeability of tight rocks is lower than the Klinkenberg permeability, averaging about 47.26% of it, and the lower the sample permeability, the larger the difference between the two. Error analysis indicates that the brief 9 MPa confining pressure borne by the sample at the start of the pulse test, together with the high confining pressure and high pore pressure combination used to apply effective stress, affects the pulse-decay measurements to some extent, but still cannot fully explain the overall discrepancy between the two permeabilities. Mathematical fitting shows that, for outcrop sandstone samples taken from the same block, the relative error between the pulse-decay and Klinkenberg permeabilities has a good logarithmic relationship with the pulse-decay permeability, from which a conversion between the pulse-decay and Klinkenberg permeabilities of this outcrop sandstone reservoir rock was derived.
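The fitting and conversion step can be sketched as follows: fit the relative error as a linear function of the logarithm of the pulse-decay permeability, then invert the relation to convert pulse-decay values to Klinkenberg values. The calibration points below are made-up numbers mimicking the reported trend (about 47% on average, larger differences at lower permeability), not the paper's data.

```python
import numpy as np

# Hypothetical calibration data: pulse-decay permeability (mD) and the
# relative error (kK - kp) / kK with respect to Klinkenberg permeability.
k_pulse = np.array([0.001, 0.005, 0.02, 0.08, 0.3, 1.0])
rel_err = np.array([0.70, 0.62, 0.55, 0.48, 0.41, 0.35])

# Fit rel_err = a * ln(k_pulse) + b, then invert it for the conversion
a, b = np.polyfit(np.log(k_pulse), rel_err, 1)

def to_klinkenberg(kp):
    """Convert a pulse-decay permeability to a Klinkenberg estimate."""
    e = a * np.log(kp) + b
    return kp / (1.0 - e)

print(f"fit: rel_err = {a:.4f} * ln(k) + {b:.4f}")
print("0.02 mD pulse-decay ->", to_klinkenberg(0.02), "mD Klinkenberg")
```

With real calibration pairs in place of the made-up ones, this is the derived conversion the abstract describes for the outcrop sandstone.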

15.
Starting from the basic data used in analyzing the geological origin of oil and gas reservoirs, this paper summarizes the main content of such studies in three areas: structure, reservoir properties, and the distribution of oil, gas, and water. Drawing on the authors' own research practice, the principal methods of reservoir geological-origin analysis are identified as field outcrop and modern-sediment investigation, core observation and description, thin-section microscopy, geophysical interpretation and prediction, geostatistical analysis, laboratory testing, various physical and numerical simulations, and reservoir dynamic monitoring and production performance analysis, and the strengths and weaknesses of each method are discussed. The main problems at present include insufficient attention to geological-origin analysis, overly qualitative methods, weak integration between origin analysis and reservoir characterization, characterization accuracy limiting the accuracy of origin analysis, a lack of comprehensiveness, and numerous unresolved difficulties for special reservoir types. Future directions are pointed out, including using geological-origin analysis to solve problems in oil and gas field development, raising its quantitative level through simulation methods, strengthening origin analysis to improve reservoir characterization and using characterization to advance origin analysis in turn, broadening its application in field development, and tackling the origin analysis of special reservoir types.

16.
Multi-criteria decision-making (MCDM) methods support decision makers in all stages of the decision-making process by providing useful data. However, criteria are not always certain, as uncertainty is a feature of the real world. MCDM methods under uncertainty and fuzzy systems are accepted as suitable techniques for conflicting problems that cannot be represented by numerical values, particularly in energy analysis and planning. In this paper, a modified TOPSIS method for multi-criteria group decision-making with qualitative linguistic labels is proposed. The method addresses uncertainty by considering different levels of precision. Each decision maker's judgment on the performance of the alternatives with respect to each criterion is expressed by qualitative linguistic labels, and the method takes these linguistic data into account without any prior aggregation. The judgments are incorporated to generate a complete ranking of alternatives. An application in energy planning is presented as an illustrative case in which energy policy alternatives are ranked: seven energy alternatives were evaluated under nine criteria according to the opinions of three environmental and energy experts. The criteria weights are determined by fuzzy AHP, and the alternatives are ranked using qualitative TOPSIS. The proposed approach is compared with a modified fuzzy TOPSIS method, showing its advantages when dealing with linguistic assessments to model uncertainty and imprecision; it yields similar results while demanding less cognitive effort from decision makers.
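For orientation, standard numeric TOPSIS can be sketched in a few lines; the paper's variant works on qualitative linguistic labels instead of the crisp numbers assumed here, and the decision matrix and weights below are hypothetical.

```python
import numpy as np

D = np.array([[7., 9., 9., 8.],       # hypothetical alternatives x criteria
              [8., 7., 8., 7.],
              [9., 6., 8., 9.],
              [6., 7., 8., 6.]])
w = np.array([0.3, 0.2, 0.3, 0.2])    # criteria weights (e.g., from fuzzy AHP)

R = D / np.linalg.norm(D, axis=0)     # vector normalization
V = R * w                             # weighted normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)   # all criteria taken as benefit type

d_pos = np.linalg.norm(V - ideal, axis=1)    # distance to the ideal solution
d_neg = np.linalg.norm(V - anti, axis=1)     # distance to the anti-ideal
closeness = d_neg / (d_pos + d_neg)          # relative closeness, in [0, 1]
ranking = np.argsort(-closeness)
print("ranking (best first):", ranking)
```

The linguistic modification replaces the crisp entries of `D` with labels at different precision levels, but the ideal/anti-ideal distance logic and the closeness-based ranking remain the same.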

17.
We propose a workflow for decision making under uncertainty that compares different field development plan scenarios. The approach applies to mature fields where the residual uncertainty is estimated using a probabilistic inversion approach. A robust optimization method is also presented to optimize controllable parameters in the presence of uncertainty. The key element of the approach is the use of a response surface model to reduce the very large number of simulator evaluations classically needed for such workflows. The major issue is building an efficient and reliable response surface. This is achieved using a Gaussian process (kriging) statistical model and a training set (experimental design) developed specifically to account for the variable correlation induced by the probabilistic inversion process. For optimization under uncertainty, an iterative training set is proposed that refines the response surface step by step, effectively reducing approximation errors and converging faster to the true solution. The workflow is illustrated on a realistic test case of a mature field, where the approach is used to compare two new development plan scenarios in terms of both expectation and risk mitigation, and to optimize well position parameters in the presence of uncertainty.
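The kriging surrogate idea can be sketched in one dimension: a handful of expensive "simulator" runs train a Gaussian process, and the cheap posterior mean is then optimized in place of the simulator. The response function, kernel length scale, and training design below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

def rbf(a, b, ell=0.2):
    """Squared-exponential kernel, a common kriging covariance choice."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# Training set: a few expensive "simulator" runs (stand-in response function)
x_tr = np.linspace(0.0, 1.0, 11)
y_tr = np.sin(2 * np.pi * x_tr)       # hypothetical NPV-vs-control response

# Kriging predictor (noise-free GP regression) on a dense prediction grid
x_te = np.linspace(0.0, 1.0, 101)
K = rbf(x_tr, x_tr) + 1e-8 * np.eye(len(x_tr))   # jitter for stability
alpha = np.linalg.solve(K, y_tr)
mu = rbf(x_te, x_tr) @ alpha          # posterior mean = response surface

# The surrogate, not the simulator, is then searched by the optimizer
best = x_te[np.argmax(mu)]
print("surrogate argmax:", best)
```

The iterative training set in the paper would now add a new simulator run near `best`, refit, and repeat, concentrating accuracy where the optimizer needs it.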

18.
Fifteen rare-earth elements in soil samples were determined simultaneously by inductively coupled plasma mass spectrometry (ICP-MS) after open-vessel digestion with HF, HNO3, and HClO4. The procedure was established on a high-resolution instrument (Element2) and verified against national first-grade soil reference materials; the results agreed with the certified values, the relative standard deviations for all 15 elements were below 10%, and spike recoveries ranged from 96.5% to 114.7%. The method is simple and rapid, with high sensitivity, low detection limits, and good reproducibility; a procedure for evaluating the measurement uncertainty is illustrated with an example.

19.
20.
Comparative analysis of empirical methods for estimating surge waves generated by reservoir-bank landslides   (Cited by: 4; self-citations: 0; other citations: 4)
黄锦林  张婷  李嘉琳 《岩土力学》2014,35(Z1):133-140
Three commonly used empirical methods for estimating surge waves generated by reservoir-bank landslides are introduced: the method recommended by the American Society of Civil Engineers, the Pan Jiazheng method, and the empirical formula method of the China Institute of Water Resources and Hydropower Research (水科院). For the surge-prediction problem of the 鹅公带 ancient landslide at the 乐昌峡 reservoir, all three methods were applied and a physical model with a geometric scale of 1:150 was built; the surges generated by the landslide body at different sliding velocities were calculated and measured under both the design flood level and the normal pool level. Comparison of the surge run-up on the opposite bank at the landslide entry point, the run-up at the dam face, and the surge height at gauge B3 shows large differences among the three empirical methods, with the Pan Jiazheng method agreeing most closely with the model tests. Based on further analysis of these differences, the Pan Jiazheng method is recommended when empirical estimation is used to predict reservoir-bank landslide surges. Some phenomena observed in the physical model tests are also explained, providing a reference for surge prediction.
