1.
Landslides are one of the most dangerous types of natural disasters, and landslide damage has been increasing in certain regions of the world because of increased precipitation. Policy decision makers require reliable information that can be used to establish spatial adaptation plans to protect people from landslide hazards. Researchers presently identify areas susceptible to landslides using various spatial distribution models; however, such results carry a high degree of uncertainty. This study focuses on quantifying the uncertainty of several spatial distribution models and identifying the effectiveness of various ensemble methods that can be used to provide reliable information to support policy decisions. The study area was Inje-gun, Republic of Korea. Ten models were selected to assess landslide susceptibility, and five ensemble methods were selected to aggregate the results of the 10 models. The uncertainty was quantified using the coefficient of variation, and the resulting uncertainty map revealed areas where the single models differ strongly. A matrix map was then created from the ensemble map and the coefficient-of-variation map. Using matrix analysis, we identified the areas that are most susceptible to landslides according to the ensemble model and that have low uncertainty. The ensemble model can thus be a useful tool for supporting decision makers, and the framework of this study can also be employed to support the establishment of landslide adaptation plans in other areas of the Republic of Korea and in other countries.
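The abstract does not give the aggregation rules or thresholds; a minimal Python sketch of the core idea (per-cell ensemble mean, coefficient of variation, and a matrix combining the two), with random placeholder rasters and hypothetical quantile thresholds, might look like this:

```python
import numpy as np

# Stack of susceptibility maps from 10 single models, each normalized to [0, 1].
# Shape: (n_models, n_rows, n_cols); random placeholders stand in for real model output.
rng = np.random.default_rng(0)
susceptibility = rng.random((10, 200, 300))

ensemble_mean = susceptibility.mean(axis=0)                  # simple-average ensemble
cv = susceptibility.std(axis=0) / (ensemble_mean + 1e-12)    # coefficient of variation

# Hypothetical matrix classification: high susceptibility + low CV = low-uncertainty hotspot
high_susc = ensemble_mean >= np.quantile(ensemble_mean, 0.9)
low_cv = cv <= np.quantile(cv, 0.5)
matrix_class = np.where(high_susc & low_cv, 2,   # susceptible, low uncertainty
                np.where(high_susc, 1, 0))       # susceptible but uncertain / all other cells

print("cells flagged as low-uncertainty hotspots:", int((matrix_class == 2).sum()))
```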
2.
Modeling snow hydrology for cold regions remains a problematic aspect of many hydro-environmental models. Temperature-index methods are commonly used and are routinely justified on the grounds that process-based models require too many input data. To test this claim, we used a physical, process-based model to simulate snowmelt at four locations across the conterminous US using energy components estimated from measured daily maximum and minimum temperature, i.e. using only the same data required for temperature-index models. The results showed good agreement between observed and predicted snow water equivalents (average R² > 0.9). We duplicated the simulations using a simple temperature-index model best fitted to the data, and the results were poorer (R² < 0.8). At one site we applied the process-based model without substantial parameter estimation, and there were no significant (α = 0.05) differences between these results and those obtained using temperature-estimated parameters, despite relatively poorly predicted specific energy budget components (R² < 0.8). These results encourage the use of mechanistic snowmelt modeling approaches in hydrological models, especially in distributed hydrological models for which landscape snow distribution may be controlled by spatially distributed components of the environmental energy budget.
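For reference, the temperature-index benchmark used for comparison is typically a degree-day formulation; a minimal sketch, with an illustrative melt factor rather than the paper's calibrated value, could be:

```python
import numpy as np

def degree_day_melt(tmax, tmin, melt_factor=3.0, t_base=0.0):
    """Temperature-index (degree-day) snowmelt in mm/day.

    tmax, tmin : daily max/min air temperature (deg C), array-like
    melt_factor: mm of melt per degree-day above t_base (calibrated in practice;
                 the value here is purely illustrative)
    """
    t_mean = (np.asarray(tmax, float) + np.asarray(tmin, float)) / 2.0
    return melt_factor * np.maximum(t_mean - t_base, 0.0)

# One week of hypothetical temperatures
tmax = [ 2.0, 4.5, 1.0, -1.0, 3.0, 6.0, 5.5]
tmin = [-4.0, -1.0, -3.0, -6.0, -2.0, 0.5, 1.0]
print(degree_day_melt(tmax, tmin))  # potential melt per day, in mm (limited by available SWE)
```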
3.
The need to combine seismic data with real-time wave height information for effective prediction of tsunami impact is emphasized in the paper. A preliminary but comprehensive study of arrival times, wave heights and run-up values at a number of locations and tide gage stations throughout the Indian Ocean seaboard is presented. Open-ocean wave height data from satellite observations are analyzed and used in the reconstruction of a tsunami source mechanism for the December 26, 2004 event. The reconstructed source is then used to numerically estimate tsunami impact along the Indian Ocean seaboard, including wave heights and arrival times at 12 tide gage stations, and inundation at 3 locations on the coast of India. The December 2004 and March 28, 2005 tsunamis are investigated, and their differences in terms of tsunami generation are analyzed and presented as a clear example of the need for both seismic and real-time tsunami data in a reliable tsunami warning system for the Indian Ocean.
4.
The main objective of this study was to quantify the error associated with input data, including various resolutions of elevation datasets and Manning's roughness, for travel time computation and floodplain mapping. This was accomplished on a test bed, the Grand River (Ohio, USA), using the HEC-RAS model. LiDAR data integrated with survey data provided conservative predictions, whereas coarser elevation datasets produced positive differences in travel time (11.03–15.01%) and inundation area (32.56–44.52%). The minimum differences in travel time and inundation area were 0.50–4.33% and 3.55–7.16%, respectively, when the results from LiDAR integrated with survey data were compared with a 10-m DEM integrated with survey data. The results suggest that a 10-m DEM in the channel and LiDAR data in the floodplain, combined with survey data, would be appropriate for a flood warning system. Additionally, Manning's roughness of the channel section was found to be more sensitive than that of the floodplain. The decrease in inundation area was highest (8.97%) for the lower value of Manning's roughness.
5.
Damage and deterioration are common in ancient timber structures still in service, and they affect the structures' seismic performance. Taking the west side hall of the Xianfu Palace in the Forbidden City (Beijing) as the study object, an idealized structural model was established by simplifying its roof, dougong bracket sets, mortise-and-tenon joints, and column-base connections; on this basis, a model of the current damaged state was built by accounting for material aging and degraded joint performance. Seismic fragility analysis yielded fragility curves for the ancient timber structure, and the damage grades and their probabilities of occurrence were compared between the idealized and damaged structures. The results show that the existing damage reduces the stiffness and natural frequency of the west side hall of the Xianfu Palace. Compared with the idealized structure, the probability of slight damage to the damaged structure under minor earthquakes increases by 21.1%, the probability of moderate damage under moderate earthquakes increases by 3.7%, and the probability of severe damage under major earthquakes increases by 10.6%. The probability of collapse under major earthquakes remains very small, reflecting the good seismic performance of timber structures.
7.
In shallow water table-controlled environments, surface water management affects groundwater table levels and soil water dynamics. The study goal was to simulate soil water dynamics in response to canal stage raises while considering uncertainty in measured soil water content. The Water and Agrochemicals in the soil, crop and Vadose Environment (WAVE) model was applied to simulate unsaturated flow above a shallow aquifer. Global sensitivity analysis was performed to identify the model input factors with the greatest influence on predicted soil water content. The Nash–Sutcliffe efficiency increased and the root mean square error decreased when uncertainties in the measured data were considered in the goodness-of-fit calculations using measurement probability distributions and probable asymmetric error boundaries, implying that model performance should be evaluated using uncertainty ranges instead of single values. Although uncertainty in the experimental measured data limited evaluation of the model's absolute predictions, WAVE was found to be a useful exploratory tool for estimating temporal variation in soil water content. Visual analysis of soil water content time series under proposed changes in canal stage management indicated that sites with land surface elevation of less than 2.0 m NGVD29 were predicted to periodically experience saturated conditions in the root zone and a shortened growing season if the canal stage is raised more than 9 cm and maintained at this level. The models developed could be combined with high-resolution digital elevation models in future studies to identify areas with the greatest risk of a saturated root zone. The study also highlighted the need to incorporate measurement uncertainty when evaluating the performance of unsaturated flow models.
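The abstract does not state the exact modified goodness-of-fit formulation; a common choice, assumed here, is to zero the residual whenever the simulation falls inside the (possibly asymmetric) measurement error band:

```python
import numpy as np

def nse(obs, sim):
    """Standard Nash-Sutcliffe efficiency."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def nse_with_uncertainty(obs, sim, lower, upper):
    """NSE where residuals are set to zero when the simulation lies inside the
    measurement uncertainty band [lower, upper] around each observation."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    resid = np.where(sim < lower, sim - lower,
             np.where(sim > upper, sim - upper, 0.0))
    return 1.0 - np.sum(resid ** 2) / np.sum((obs - obs.mean()) ** 2)

theta_obs = np.array([0.30, 0.28, 0.26, 0.33, 0.35])   # measured soil water content (illustrative)
theta_sim = np.array([0.29, 0.30, 0.27, 0.31, 0.36])
lower, upper = theta_obs - 0.02, theta_obs + 0.03       # hypothetical asymmetric error bounds
print(nse(theta_obs, theta_sim), nse_with_uncertainty(theta_obs, theta_sim, lower, upper))
```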
8.
Permeability is a key parameter in reservoir evaluation and hydrocarbon field development. Permeability estimates from conventional well-log interpretation and standard machine-learning methods are single fixed values. Because well-log data are inherently noisy, the predicted permeability can carry measurement-type random error (aleatoric uncertainty); in addition, when the test data differ from the training data, the machine-learning model may exhibit uncertainty in its parameters (epistemic uncertainty). To predict permeability accurately while quantifying the effect of both kinds of uncertainty, this paper proposes a method based on a data-distribution domain transform and a Bayesian neural network that delivers the permeability prediction and its uncertainty estimates simultaneously. The method has two parts: the mutual transformation of data distributions between domains, and Bayesian neural-network modeling for permeability prediction and uncertainty estimation. Because a Bayesian neural network makes assumptions about the data distribution, it learns the relationships in the data better when the label distribution is consistent with the network's assumed distribution. We therefore seek a function that transforms the permeability labels from the original domain into a permeability-related variable in a target domain (the target-domain permeability) that satisfies the network's distributional assumption. The Bayesian neural network predicts the target-domain permeability together with its aleatoric and epistemic uncertainty, and the inverse domain transform then maps the target-domain permeability back to the original domain. The method was applied to well-log data from 18 wells in an oil field, with 16 wells used for training and 2 for testing. The predicted permeability of the test wells agrees closely with the measured permeability. The aleatoric uncertainty estimates indicate where the predictions are affected by noise in the well-log data, and the epistemic uncertainty estimates show that locations with sparse data carry higher epistemic uncertainty. The proposed workflow not only shows considerable potential for reservoir characterization but can also reduce risk in well-log interpretation.
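The paper's exact Bayesian network and domain transform are not reproduced here; the sketch below swaps in Monte Carlo dropout with a heteroscedastic output head (a practical Bayesian approximation, not necessarily the authors' formulation) to show how aleatoric and epistemic uncertainty can be separated, with all layer sizes, inputs, and training data hypothetical. A log transform of permeability would stand in for the paper's domain transform.

```python
import torch
import torch.nn as nn

class MCDropoutNet(nn.Module):
    """Small MLP predicting mean and log-variance of log-permeability from well logs."""
    def __init__(self, n_logs, hidden=64, p_drop=0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_logs, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
        )
        self.mean_head = nn.Linear(hidden, 1)
        self.logvar_head = nn.Linear(hidden, 1)   # per-sample (aleatoric) noise

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

def heteroscedastic_loss(mu, logvar, y):
    # Gaussian negative log-likelihood with learned observation noise
    return (0.5 * torch.exp(-logvar) * (y - mu) ** 2 + 0.5 * logvar).mean()

@torch.no_grad()
def predict_with_uncertainty(model, x, n_samples=50):
    model.train()                                  # keep dropout active at prediction time
    mus, alea = [], []
    for _ in range(n_samples):
        mu, logvar = model(x)
        mus.append(mu)
        alea.append(torch.exp(logvar))
    mus = torch.stack(mus)                         # (n_samples, n_points, 1)
    epistemic = mus.var(dim=0)                     # spread across stochastic forward passes
    aleatoric = torch.stack(alea).mean(dim=0)      # average predicted observation noise
    return mus.mean(dim=0), aleatoric, epistemic

# Hypothetical usage: 5 well-log inputs, targets are log-transformed permeability
model = MCDropoutNet(n_logs=5)
x, y = torch.randn(32, 5), torch.randn(32, 1)      # placeholder data
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    mu, logvar = model(x)
    loss = heteroscedastic_loss(mu, logvar, y)
    opt.zero_grad(); loss.backward(); opt.step()
mean, aleatoric, epistemic = predict_with_uncertainty(model, x)
```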
9.
We introduce the Bayesian hierarchical modeling approach for analyzing observational data from marine ecological studies using a data set intended for inference on the effects of bottom-water hypoxia on macrobenthic communities in the northern Gulf of Mexico off the coast of Louisiana, USA. We illustrate (1) the process of developing a model, (2) the use of the hierarchical model results for statistical inference through innovative graphical presentation, and (3) a comparison to the conventional linear modeling approach (ANOVA). Our results indicate that the Bayesian hierarchical approach is better able to detect a “treatment” effect than classical ANOVA while avoiding several arbitrary assumptions necessary for linear models, and is also more easily interpreted when presented graphically. These results suggest that the hierarchical modeling approach is a better alternative than conventional linear models and should be considered for the analysis of observational field data from marine systems.
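The abstract does not specify the model structure; a minimal partial-pooling sketch in PyMC (an assumed tool, not necessarily the authors'), with a hypothetical hypoxia covariate and simulated site data, illustrates the hierarchical idea:

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
n_sites = 6
site_idx = np.repeat(np.arange(n_sites), 10)
hypoxia = np.array([0, 0, 0, 1, 1, 1])                    # hypothetical site-level "treatment"
y = rng.normal(2.0 - 0.8 * hypoxia[site_idx], 0.5)        # simulated response (e.g. log abundance)

with pm.Model() as model:
    mu0 = pm.Normal("mu0", mu=0.0, sigma=10.0)            # grand mean
    beta = pm.Normal("beta", mu=0.0, sigma=5.0)           # hypoxia ("treatment") effect
    tau = pm.HalfNormal("tau", sigma=2.0)                 # between-site spread
    site_eff = pm.Normal("site_eff", mu=0.0, sigma=tau, shape=n_sites)
    sigma = pm.HalfNormal("sigma", sigma=2.0)             # within-site noise
    mu = mu0 + beta * hypoxia[site_idx] + site_eff[site_idx]
    pm.Normal("y_obs", mu=mu, sigma=sigma, observed=y)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=1)

# Posterior probability that the hypoxia effect is negative
beta_draws = idata.posterior["beta"].values.ravel()
print("P(beta < 0) =", float((beta_draws < 0).mean()))
```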
10.
Fragility analysis for highway bridges has become increasingly important in the risk assessment of highway transportation networks exposed to seismic hazards. This study introduces a methodology to calculate fragility that considers multi-dimensional performance limit state parameters and makes a first attempt to develop fragility curves for a multi-span continuous (MSC) concrete girder bridge considering two performance limit state parameters: column ductility and transverse deformation in the abutments. The main purpose of this paper is to show that the performance limit states, which are compared with the seismic response parameters in the calculation of fragility, should be properly modeled as randomly interdependent variables instead of deterministic quantities. The sensitivity of fragility curves is also investigated when the dependency between the limit states is different. The results indicate that the proposed method can be used to describe the vulnerable behavior of bridges which are sensitive to multiple response parameters and that the fragility information generated by this method will be more reliable and likely to be implemented into transportation network loss estimation.
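A sketch of the central idea, with purely illustrative distribution parameters: fragility at a given intensity is the probability that either seismic response exceeds its capacity, where the two limit-state capacities are sampled as correlated lognormal variables rather than fixed values:

```python
import numpy as np

rng = np.random.default_rng(0)

def fragility_point(median_demand, beta_demand, n=100_000, rho=0.5):
    """P(failure) at one intensity level when EITHER response exceeds its capacity.

    Capacities for (column ductility, abutment deformation) are lognormal and
    correlated with coefficient rho; demands are lognormal as well.
    All numbers are illustrative, not taken from the paper.
    """
    mean_lnC = np.log([5.0, 100.0])             # hypothetical medians: ductility 5, deformation 100 mm
    beta_C = np.array([0.35, 0.40])
    cov = np.array([[beta_C[0]**2, rho*beta_C[0]*beta_C[1]],
                    [rho*beta_C[0]*beta_C[1], beta_C[1]**2]])
    capacity = np.exp(rng.multivariate_normal(mean_lnC, cov, size=n))

    demand = np.exp(rng.normal(np.log(median_demand), beta_demand, size=(n, 2)))
    return np.mean((demand > capacity).any(axis=1))

# Fragility at increasing intensity (median demands grow with intensity)
for pga, med in [(0.1, [1.0, 20.0]), (0.3, [3.0, 60.0]), (0.6, [6.0, 120.0])]:
    print(pga, fragility_point(np.array(med), beta_demand=0.4))
```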
11.
Infrastructure owners and operators, or governmental agencies, need rapid screening tools to prioritize detailed risk assessment and the allocation of retrofit resources. This paper provides one such tool, for use by highway administrations, based on a Bayesian belief network (BBN) and aimed at replacing so-called generic or typological seismic fragility functions for reinforced concrete girder bridges. Resources for detailed assessments should be allocated to the bridges with the highest consequences of damage, for which site hazard, bridge fragility, and traffic data are needed. The proposed BBN-based model is used to quantify the seismic fragility of bridges from data that can be obtained by visual inspection and engineering drawings. Results show that the predicted fragilities are sufficiently accurate for establishing a relative ranking and prioritizing. While the actual data and seismic hazard employed to train the network (establishing the conditional probability tables) refer to the Italian bridge stock, the network structure and engineering judgment can easily be adopted for bridges in different geographical locations.
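The trained network and its conditional probability tables are not available here; a toy hand-rolled query, with invented nodes and probabilities, shows how inspection attributes would map to a fragility-based ranking (a CPT lookup when all parents are observed, marginalization over a prior otherwise):

```python
# Hypothetical two-parent network: DesignEra -> Fragility <- PierCondition
# (structure and numbers are invented, not the paper's trained network)
p_fragile_given = {                      # P(high fragility | design_era, pier_condition)
    ("pre-1970", "poor"): 0.70,
    ("pre-1970", "good"): 0.45,
    ("post-1970", "poor"): 0.35,
    ("post-1970", "good"): 0.10,
}
p_pier = {"poor": 0.3, "good": 0.7}      # prior used when pier condition was not inspected

def p_fragile(design_era, pier_condition=None):
    """P(high fragility) given whatever inspection evidence is available."""
    if pier_condition is not None:       # all parents observed: direct CPT lookup
        return p_fragile_given[(design_era, pier_condition)]
    # parent unobserved: marginalize the CPT over the parent's prior
    return sum(p_fragile_given[(design_era, c)] * p for c, p in p_pier.items())

inventory = [
    ("Bridge A", "pre-1970", "poor"),
    ("Bridge B", "post-1970", "good"),
    ("Bridge C", "pre-1970", None),      # pier condition unknown
]
for name, era, pier in sorted(inventory, key=lambda b: p_fragile(b[1], b[2]), reverse=True):
    print(f"{name}: P(high fragility) = {p_fragile(era, pier):.2f}")
```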
13.
Accurate assessment of tsunami risk in the South China Sea is a prerequisite for effective disaster prevention and mitigation. Previous studies have generally equated the seafloor dislocation computed in an elastic half-space directly with the initial tsunami distribution before simulating tsunami propagation. Because fault rupture is not instantaneous, the rupture process makes the initial tsunami wave height smaller than the seafloor dislocation, i.e. the initial tsunami is attenuated. Based on high-resolution topography and a high-density grid, this study solves the nonlinear shallow water equations and builds numerical tsunami-propagation models for the southern, central, and northern segments of the Manila trench fault zone, in order to quantify the influence of initial tsunami attenuation on South China Sea tsunamis. The simulations show that a given amount of attenuation of the initial wave height leads to an almost identical attenuation of the tsunami wave height, and the corresponding deviation is negligible. Under a conservative initial attenuation of 10%, the simulations identify the southeastern coast of China, the eastern coast of Vietnam, and Palawan Island as tsunami hazard zones. In addition, the modeled wave-height change caused by the Coriolis force is less than 5 cm and its spatial pattern matches expectations, which further supports the reliability of the numerical model and indicates that the Coriolis force can be neglected in practical South China Sea tsunami simulations to improve computational efficiency. Combining previous sedimentological evidence with the simulation results, the study concludes that Nan'ao Island, Xisha Dong Island, and the area around Tuy Hoa in Vietnam were once struck by a tsunami at the same time, and that the fault most likely responsible is the southern segment of the Manila trench fault zone. Further sedimentological work on Nan'ao Island, Xisha Dong Island, and eastern Vietnam is needed to identify earlier tsunami events and thus better constrain when the next South China Sea tsunami may occur.
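The study's models run on real bathymetry with the nonlinear shallow water equations; the toy 1D linear, flat-bottom solver below (forward-backward staggered scheme, all parameters invented) only illustrates why a 10% reduction in initial wave height propagates into an almost identical reduction in coastal wave height:

```python
import numpy as np

def max_coastal_height(a0, nx=2000, L=2.0e6, depth=4000.0, t_end=9000.0):
    """1D linear shallow-water toy model: Gaussian initial elevation of amplitude a0,
    returns the maximum surface elevation recorded near the right-hand 'coast'."""
    g = 9.81
    dx = L / nx
    c = np.sqrt(g * depth)
    dt = 0.5 * dx / c                                   # CFL-safe time step
    x = (np.arange(nx) + 0.5) * dx
    eta = a0 * np.exp(-((x - 0.3 * L) / 5.0e4) ** 2)    # initial sea-surface elevation
    u = np.zeros(nx + 1)                                # velocities on cell faces (walls at ends)
    record = 0.0
    for _ in range(int(t_end / dt)):
        u[1:-1] -= g * dt / dx * (eta[1:] - eta[:-1])   # momentum update
        eta -= depth * dt / dx * (u[1:] - u[:-1])       # continuity update
        record = max(record, eta[-5])                   # gauge near the right boundary
    return record

h_full = max_coastal_height(1.0)
h_reduced = max_coastal_height(0.9)      # 10% lower initial height
print(h_reduced / h_full)                # ~0.90: the attenuation carries through linearly
```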
14.
Empirical fragility curves, constructed from databases of thousands of building-damage observations, are commonly used for earthquake risk assessments, particularly in Europe and Japan, where building stocks are often difficult to model analytically (e.g. old masonry structures or timber dwellings). Curves from different studies, however, display considerable differences, which lead to high uncertainty in the assessed seismic risk. One potential reason for this dispersion is the almost universal neglect of the spatial variability in ground motions and the epistemic uncertainty in ground-motion prediction. In this paper, databases of building damage are simulated using ground-motion fields that account for spatial variability and a known fragility curve. These databases are then inverted, applying a standard approach for the derivation of empirical fragility curves, and the difference from the known curve is studied. A parametric analysis is conducted to investigate the impact of various assumptions on the results. By this approach, it is concluded that ground-motion variability leads to flatter fragility curves and that the epistemic uncertainty in the ground-motion prediction equation used can have a dramatic impact on the derived curves. Without dense ground-motion recording networks in the epicentral area, empirical curves will remain highly uncertain. Moreover, the use of aggregated damage observations appears to substantially increase the uncertainty in the empirical fragility assessment. In contrast, the use of limited, randomly chosen, un-aggregated samples in the affected area can result in good predictions of fragility.
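A self-contained sketch of the mechanism, with invented fragility and ground-motion parameters: damage is simulated with unmodelled ground-motion variability, and refitting a lognormal fragility curve against the predicted (median) intensity by maximum likelihood recovers a larger dispersion, i.e. a flatter curve:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(42)

# "True" fragility: P(damage | IM) = Phi((ln IM - ln theta) / beta)  -- illustrative values
theta_true, beta_true = 0.35, 0.40

n_bldg = 5000
im_predicted = np.exp(rng.normal(np.log(0.3), 0.3, n_bldg))    # GMPE median IM at each site
sigma_gm = 0.5                                                  # unmodelled ground-motion variability
im_actual = im_predicted * np.exp(rng.normal(0.0, sigma_gm, n_bldg))

p_damage = norm.cdf((np.log(im_actual) - np.log(theta_true)) / beta_true)
damaged = rng.random(n_bldg) < p_damage

def neg_loglik(params):
    ln_theta, ln_beta = params
    p = norm.cdf((np.log(im_predicted) - ln_theta) / np.exp(ln_beta))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(damaged * np.log(p) + (~damaged) * np.log(1 - p))

fit = minimize(neg_loglik, x0=[np.log(0.3), np.log(0.4)], method="Nelder-Mead")
theta_fit, beta_fit = np.exp(fit.x)
print(f"true curve:   theta={theta_true}, beta={beta_true}")
print(f"fitted curve: theta={theta_fit:.2f}, beta={beta_fit:.2f}  (larger beta = flatter curve)")
```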
15.
Different approaches used in hydrological modelling are compared in terms of the way each one takes rainfall data into account. We examine the errors associated with accounting for rainfall variability, whether in hydrological modelling (distributed vs lumped models) or in computing catchment rainfall, as well as the impact of each approach on the representativeness of the parameters it uses. The database consists of 1859 rainfall events, distributed over 500 basins located in the southeast of France with areas ranging from 6.2 to 2851 km². The study uses as reference the hydrographs computed by a distributed hydrological model from radar rainfall. This allows us to compare and test the effects of various simplifications to the process when taking rainfall information (complete rain field vs sampled rainfall) and rainfall–runoff modelling (lumped vs distributed) into account. The results appear to show that, in general, the sampling effect can lead to errors in discharge at the outlet that are as great as, or even greater than, those one would get with a fully lumped approach. We found that small catchments are more sensitive to the uncertainties in catchment rainfall input generated by sampling rainfall data as seen through a raingauge network. Conversely, larger catchments are more sensitive to uncertainties generated when the spatial variability of rainfall events is not taken into account. These uncertainties can be compensated for relatively easily by recalibrating the parameters of the hydrological model, although such recalibration causes the parameters in question to completely lose physical meaning. Citation: Arnaud, P., Lavabre, J., Fouchier, C., Diss, S. & Javelle, P. (2011) Sensitivity of hydrological models to uncertainty of rainfall input. Hydrol. Sci. J. 56(3), 397–410.
16.
Unit hydrographs (UHs), along with design rainfalls, are frequently used to determine the discharge hydrograph for design and evaluation of hydraulic structures. Due to the presence of various uncertainties in its derivation, the resulting UH is inevitably subject to uncertainty. Consequently, the performance of hydraulic structures under the design storm condition is uncertain. This paper integrates the linearly constrained Monte-Carlo simulation with the UH theory and routing techniques to evaluate the reliability of hydraulic structures. The linear constraint is considered because the water volume of each generated design direct runoff hydrograph should be equal to that of the design effective rainfall hyetograph or the water volume of each generated UH must be equal to one inch (or cm) over the watershed. For illustration, the proposed methodology is applied to evaluate the overtopping risk of a hypothetical flood detention reservoir downstream of Tong-Tou watershed in Taiwan.
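The paper's constrained-sampling scheme is not detailed in the abstract; one simple way to impose the linear volume constraint, assumed here, is to rescale each randomly perturbed UH so that its ordinates integrate to exactly 1 cm of runoff over the watershed:

```python
import numpy as np

rng = np.random.default_rng(7)

def generate_constrained_uh(uh_mean, cv=0.2, dt_hr=1.0, area_km2=50.0, n=1000):
    """Monte-Carlo UH realizations whose runoff volume equals 1 cm over the watershed.

    uh_mean : mean unit-hydrograph ordinates (m^3/s per cm of effective rain); values
              and the rescaling trick are illustrative, not the paper's sampling scheme.
    The linear constraint sum(UH) * dt = 0.01 m * A is enforced by rescaling each realization.
    """
    target_volume = 0.01 * area_km2 * 1e6                  # m^3 corresponding to 1 cm of runoff
    dt_s = dt_hr * 3600.0
    uh_mean = np.asarray(uh_mean, float)
    noise = rng.lognormal(mean=0.0, sigma=cv, size=(n, uh_mean.size))
    uh = uh_mean * noise                                   # random perturbation of each ordinate
    uh *= target_volume / (uh.sum(axis=1, keepdims=True) * dt_s)   # impose the volume constraint
    return uh

uh_mean = np.array([0.0, 2.0, 6.0, 9.0, 7.0, 4.0, 2.0, 1.0, 0.5, 0.0])
ensemble = generate_constrained_uh(uh_mean)
print(ensemble.sum(axis=1)[:3] * 3600.0)   # each realization carries the same 1-cm runoff volume (m^3)
```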
19.
This paper develops a new method for decision-making under uncertainty. The method, Bayesian Programming (BP), addresses a class of two-stage decision problems with features that are common in environmental and water resources. BP is applicable to two-stage combinatorial problems characterized by uncertainty in unobservable parameters, only some of which is resolved upon observation of the outcome of the first-stage decision. The framework also naturally accommodates stochastic behavior, which has the effect of impeding uncertainty resolution. With the incorporation of systematic methods for decision search and Monte Carlo methods for Bayesian analysis, BP addresses limitations of other decision-analytic approaches for this class of problems, including conventional decision tree analysis and stochastic programming. The methodology is demonstrated with an illustrative problem of water quality pollution control. Its effectiveness for this problem is compared with alternative approaches, including a single-stage model in which expected costs are minimized and a deterministic model in which uncertain parameters are replaced by their mean values. A new term, the expected value of including uncertainty resolution (EVIUR), is introduced and evaluated for the illustrative problem. It is a measure of the worth of incorporating the experimental value of decisions into an optimal decision-making framework. For the illustrative problem, the two-stage adaptive management framework extracted up to approximately 50% of the gains of perfect information. The strengths and limitations of the method are discussed and conclusions are presented.
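A toy two-stage pollution-control example, with an invented cost structure and load distribution rather than the paper's problem, shows how EVIUR can be estimated by Monte Carlo as the gain of the adaptive two-stage policy over the best single-stage (commit-now) decision:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Uncertain pollutant load reaching a treatment facility (illustrative units and numbers)
load = rng.lognormal(mean=4.0, sigma=0.5, size=n)

capacities = [0.0, 40.0, 80.0, 120.0]    # candidate treatment capacities per stage
c_now, c_later = 1.0, 1.5                # unit capacity cost: building later is dearer
penalty = 5.0                            # damage cost per unit of untreated load

def cost(cap1, cap2, w):
    untreated = np.maximum(w - cap1 - cap2, 0.0)
    return c_now * cap1 + c_later * cap2 + penalty * untreated

# Deterministic model: design for the mean load, then evaluate against the full distribution
w_mean = load.mean()
det_cap = min(capacities, key=lambda a: cost(a, 0.0, np.array([w_mean])).item())
det_expected = cost(det_cap, 0.0, load).mean()

# Single-stage stochastic model: commit to all capacity now, minimize expected cost
single = min(cost(a, 0.0, load).mean() for a in capacities)

# Two-stage adaptive model: choose cap1 now; after observing a noisy signal of the load,
# choose the second-stage capacity separately for the "high" and "low" signal groups.
signal_high = load * rng.lognormal(0.0, 0.3, size=n) > w_mean

def two_stage_cost(cap1):
    total = 0.0
    for grp in (signal_high, ~signal_high):
        total += min(cost(cap1, b, load[grp]).mean() for b in capacities) * grp.mean()
    return total

adaptive = min(two_stage_cost(a) for a in capacities)
eviur = single - adaptive                # expected value of including uncertainty resolution

print(f"deterministic (mean-value) design, true expected cost: {det_expected:.1f}")
print(f"single-stage expected cost: {single:.1f}")
print(f"two-stage adaptive expected cost: {adaptive:.1f}")
print(f"EVIUR estimate: {eviur:.2f}")
```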