Similar Documents
20 similar documents were retrieved.
1.
The general modular Bayesian procedure is applied to provide a probabilistic tsunami hazard assessment (PTHA) for the Messina Strait Area (MSA), Italy. This is the first study of an Italian area in which potential tsunamigenic events caused by both submarine seismic sources (SSSs) and submarine mass failures (SMFs) are examined in a probabilistic assessment. The SSSs are localized on active faults in the MSA, as indicated by instrumental data from the Italian seismicity catalogue; the SMFs are spatially identified from their propensity to fail in the Ionian and Tyrrhenian Seas, on the basis of mean slope, mean depth, and marine-geology background knowledge. In both cases the associated probability of occurrence is provided. The run-ups were calculated at key sites, namely main cities and/or other important locations along the eastern Sicily and southern Calabria coasts where tsunami events were recorded in the past. The posterior probability distribution combines the prior probability and the likelihood calculated in the MSA. The prior probability is based on the physical model of the tsunami process, and the likelihood is based on the historical data collected in historical catalogues, background knowledge, and marine geological information. The posterior SSS and SMF tsunami probabilities are comparable and are combined to produce a final probability for a full PTHA in the MSA.

2.
Seismic hazard analysis is based on data and models, both of which are imprecise and uncertain. In particular, the interpretation of historical information into earthquake parameters, e.g. earthquake size and location, yields ambiguous and imprecise data. Models based on probability distributions have been developed in order to quantify and represent these uncertainties. Nevertheless, the majority of the procedures applied in seismic hazard assessment take into account neither these uncertainties nor the variance of the results. Therefore, a procedure based on Bayesian statistics was developed to estimate return periods for different ground motion intensities (MSK scale). Bayesian techniques provide a mathematical model to estimate the distribution of random variables in the presence of uncertainties. The developed method estimates the probability distribution of the number of occurrences in a Poisson process described by the rate parameter λ. The input data are the historical occurrences of intensities at a particular site, represented by a discrete probability distribution for each earthquake. The calculation of these historical occurrences requires a careful preparation of all input parameters, i.e. a modelling of their uncertainties. The obtained results show that the variance of the recurrence rate is smaller in regions with higher seismic activity than in less active regions. It can also be demonstrated that long return periods cannot be estimated with confidence, because the period of observation is too short. This indicates that the long return periods obtained by seismic source methods only reflect the delineated seismic sources and the chosen earthquake-size distribution law.
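As a rough illustration of the kind of Bayesian update described above (a sketch, not the authors' procedure), the following Python fragment places a Gamma prior on the Poisson rate of exceedances of a given intensity, updates it with a hypothetical count of historical occurrences, and reports the implied return period with its credible interval; all numerical values are invented for the example.

from scipy import stats

# Hypothetical Gamma prior on the annual rate of intensity >= VII at the site:
# shape a0, rate b0, i.e. prior mean a0/b0 events per year.
a0, b0 = 2.0, 200.0

# Hypothetical observations: 3 exceedances in 150 years of (assumed complete) records.
n_events, t_years = 3, 150.0

# Conjugate update for a Poisson process: the posterior is Gamma(a0 + n, b0 + t).
a1, b1 = a0 + n_events, b0 + t_years
posterior = stats.gamma(a=a1, scale=1.0 / b1)

rate_mean = posterior.mean()              # posterior mean rate (events per year)
lo, hi = posterior.interval(0.90)         # 90% credible interval for the rate
print("mean return period: %.0f yr" % (1.0 / rate_mean))
print("90%% interval on the return period: %.0f-%.0f yr" % (1.0 / hi, 1.0 / lo))

The widening of such intervals as the observation window shortens mirrors the abstract's point that long return periods cannot be estimated with confidence from short records.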

3.
Probabilistic Analysis of Tsunami Hazards*
Determining the likelihood of a disaster is a key component of any comprehensive hazard assessment. This is particularly true for tsunamis, even though most tsunami hazard assessments have in the past relied on scenario or deterministic type models. We discuss probabilistic tsunami hazard analysis (PTHA) from the standpoint of integrating computational methods with empirical analysis of past tsunami runup. PTHA is derived from probabilistic seismic hazard analysis (PSHA), with the main difference being that PTHA must account for far-field sources. The computational methods rely on numerical tsunami propagation models, rather than on the empirical attenuation relationships used in PSHA, to determine ground motions. Because a number of source parameters affect local tsunami runup height, PTHA can become complex and computationally intensive. Empirical analysis can function in one of two ways, depending on the length and completeness of the tsunami catalog. For site-specific studies where sufficient tsunami runup data are available, hazard curves can be derived primarily from empirical analysis, with computational methods used to highlight deficiencies in the tsunami catalog. For region-wide analyses and sites where there are little to no tsunami data, a computationally based method such as Monte Carlo simulation is the primary means of establishing tsunami hazards. Two case studies that describe how computational and empirical methods can be integrated are presented for Acapulco, Mexico (site-specific) and the U.S. Pacific Northwest coastline (region-wide analysis). * The U.S. Government's right to retain a non-exclusive, royalty-free license in and to any copyright is acknowledged.
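To make the notion of an empirically derived tsunami hazard curve concrete, the sketch below (Python, with entirely invented runup values rather than the Acapulco or Pacific Northwest catalogs) estimates annual exceedance rates of runup height from a catalog and converts them, under a Poissonian occurrence assumption, into probabilities of exceedance over an exposure time, which is the basic quantity a PTHA reports.

import numpy as np

# Hypothetical catalog: runup heights (m) observed at one site over a 200-year record.
runups = np.array([0.4, 0.6, 0.9, 1.3, 1.8, 2.6, 3.9])
t_record = 200.0            # years of (assumed complete) observations
t_exposure = 50.0           # exposure time of interest (years)

thresholds = np.linspace(0.5, 4.0, 8)
# Empirical annual exceedance rate for each threshold height.
rates = np.array([(runups > h).sum() / t_record for h in thresholds])
# Probability of at least one exceedance during the exposure time (Poisson assumption).
p_exceed = 1.0 - np.exp(-rates * t_exposure)

for h, lam, p in zip(thresholds, rates, p_exceed):
    print(f"runup >= {h:3.1f} m : rate = {lam:.4f}/yr, P(exceedance in {t_exposure:.0f} yr) = {p:.2f}")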

4.
A probabilistic tsunami hazard assessment is performed for the Makran subduction zone (MSZ) in the northwestern Indian Ocean, employing a combination of probability evaluation of offshore earthquake occurrence and numerical modeling of the resulting tsunamis. In our method, we extend Kijko and Sellevoll's (1992) probabilistic analysis from earthquakes to tsunamis. The results suggest that the southern coasts of Iran and Pakistan, as well as Muscat, Oman, are the most vulnerable areas among those studied. The probability of tsunami waves exceeding 5 m over a 50-year period on these coasts is estimated at 17.5%. For moderate tsunamis, this probability is estimated to be as high as 45%. We recommend the application of this method as a fresh approach to probabilistic tsunami hazard assessment. Finally, we emphasize that, given the lack of sufficient information on the mechanism of large earthquake generation in the MSZ and the inadequate data on Makran's paleo- and historical earthquakes, this study can be regarded as the first generation of PTHA for this region, and more studies should be done in the future.
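Under the usual Poissonian assumption, the 50-year exceedance probabilities quoted above translate directly into equivalent annual rates and return periods; the short sketch below (Python, using only the percentages quoted in the abstract) shows the conversion.

import math

def rate_from_prob(p, t_years):
    # Annual rate implied by probability p of at least one event in t_years (Poisson assumption).
    return -math.log(1.0 - p) / t_years

for label, p in [("waves exceeding 5 m", 0.175), ("moderate tsunamis", 0.45)]:
    lam = rate_from_prob(p, 50.0)
    print(f"{label}: annual rate ~ {lam:.4f}/yr, return period ~ {1.0 / lam:.0f} yr")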

5.
This paper presents a methodology for assessing local probability distributions by disjunctive kriging when the available data set contains some imprecise measurements, such as noisy or soft information or interval constraints. The basic idea consists in replacing the set of imprecise data by a set of pseudo-hard data simulated from their posterior distribution; an iterative algorithm based on the Gibbs sampler is proposed to achieve this simulation step. The whole procedure is repeated many times, and the final result is the average of the disjunctive kriging estimates computed from each simulated data set. Because the kriging weights do not depend on the data values, they need to be calculated only once, which enables fast computing. The simulation procedure requires encoding each datum as a pre-posterior distribution and assuming a Markov property to allow the updating of pre-posterior distributions into posterior ones. Although it suffers from some imperfections, disjunctive kriging turns out to be a much more flexible approach than conditional expectation, because of the vast class of models that allow its computation, namely isofactorial models.
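A minimal sketch of the Gibbs-sampling idea (not the authors' algorithm, which works with pre-posterior distributions in an isofactorial model): data known only to lie within intervals are repeatedly re-simulated from their full conditional distributions under an assumed multivariate Gaussian model, producing one set of pseudo-hard values per pass. The locations, covariance model, and interval constraints below are invented for illustration.

import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

# Hypothetical 1-D locations; zero-mean, unit-variance field with exponential covariance.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 2.0)
Q = np.linalg.inv(C)                                      # precision matrix

hard = {0: 0.3, 3: -0.8}                                  # exact (hard) data
soft = {1: (-1.0, 0.0), 2: (0.5, 2.0), 4: (-0.5, 0.5)}    # interval-constrained data

z = np.zeros_like(x)
for i, v in hard.items():
    z[i] = v

for _ in range(200):                                      # Gibbs sweeps
    for i, (lo, hi) in soft.items():
        # Full conditional of z[i] given all other values (standard Gaussian formulas).
        others = [j for j in range(len(x)) if j != i]
        cond_var = 1.0 / Q[i, i]
        cond_mean = -cond_var * (Q[i, others] @ z[others])
        a = (lo - cond_mean) / np.sqrt(cond_var)
        b = (hi - cond_mean) / np.sqrt(cond_var)
        z[i] = truncnorm.rvs(a, b, loc=cond_mean, scale=np.sqrt(cond_var), random_state=rng)

print("one simulated pseudo-hard data set:", np.round(z, 3))

In the full procedure, the disjunctive kriging estimate would be computed from each such pseudo-hard data set and the results averaged over many repetitions.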

6.
The Bayesian Maximum Entropy (BME) method of spatial analysis and mapping provides definite rules for incorporating prior information, hard data, and soft data into the mapping process. It has certain unique features that make it a loyal guardian of plausible reasoning under conditions of uncertainty. BME is a general approach that does not make any assumptions regarding the linearity of the estimator, the normality of the underlying probability laws, or the homogeneity of the spatial distribution. By capitalizing on various sources of information and data, BME introduces an epistemological framework that produces predictive maps that are more accurate and in many cases computationally more efficient than those derived by traditional techniques. In fact, kriging techniques can be derived as special cases of the BME approach under restrictive assumptions regarding the prior information and the data available. BME is a more rigorous approach than indicator kriging for incorporating soft data. The BME formulation applies in a spatial or a spatiotemporal domain, and its extension to the case of block and vector random fields is straightforward. New theoretical results are presented and numerical examples are discussed, which use the BME approach to account for important sources of knowledge in a systematic manner. BME can be useful in practical situations in which prior information can be used to compensate for the limited amount of measurements available (e.g., at preliminary or feasibility study levels) or in which soft data are available that can be combined with hard data to improve mapping significantly. BME may then be viewed as an effort towards the development of a more general framework of spatial/temporal analysis and mapping, which includes traditional geostatistics as its limiting case and also provides the means to derive novel results that could not be obtained by traditional geostatistics.

7.
Interpretation of geophysical data or other indirect measurements provides large-scale soft secondary data for modeling hard primary data variables. Calibration allows such soft data to be expressed as prior probability distributions of nonlinear block averages of the primary variable; poorer-quality soft data lead to prior distributions with large variance, while better-quality soft data lead to prior distributions with low variance. Another important feature of most soft data is that their quality is spatially variable; soft data may be very good in some areas and poorer in others. The main aim of this paper is to propose a new method of integrating such soft data, which are large-scale and of locally variable precision. The technique of simulated annealing is used to construct stochastic realizations that reflect the uncertainty in the soft data. This is done by constraining the cumulative probability values of the block averages to follow a specified distribution. These probability values are determined by the local soft prior distribution and a nonlinear average of the small-scale simulated values within the block, which are all known. For each realization to accurately capture the information contained in the soft data distributions, we show that the probability values should be uniformly distributed between 0 and 1. An objective function is then proposed for a simulated annealing based approach to enforce this uniform probability constraint. The theoretical justification of this approach is discussed, implementation details are considered, and an example is presented.
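As a hedged illustration of the uniform-probability constraint (a sketch, not the paper's implementation, and with a simple linear block average in place of a general nonlinear one): the probability value of each block is its soft prior CDF evaluated at the block average of the simulated small-scale values, and the objective function to be minimized by simulated annealing penalizes departure of these probability values from a uniform distribution.

import numpy as np
from scipy.stats import norm

def block_p_values(fine_values, block_index, prior_means, prior_sds):
    # Probability value of each block: soft prior CDF evaluated at the block average.
    p = []
    for b, (m, s) in enumerate(zip(prior_means, prior_sds)):
        avg = fine_values[block_index == b].mean()
        p.append(norm.cdf(avg, loc=m, scale=s))      # Gaussian soft priors assumed here
    return np.array(p)

def uniformity_objective(p_values):
    # Squared departure of the sorted probability values from uniform plotting positions.
    n = len(p_values)
    target = (np.arange(1, n + 1) - 0.5) / n
    return float(np.sum((np.sort(p_values) - target) ** 2))

# Hypothetical example: 100 fine cells grouped into 10 blocks with invented soft priors.
rng = np.random.default_rng(1)
fine = rng.normal(size=100)
blocks = np.repeat(np.arange(10), 10)
p = block_p_values(fine, blocks, prior_means=np.zeros(10), prior_sds=np.ones(10))
print("objective =", round(uniformity_objective(p), 4))

An annealing loop would perturb the fine-scale values and accept or reject perturbations so that this objective decreases towards zero.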

8.
9.
Empirical, theoretical, or hybrid methods can be used for the vulnerability analysis of structures to evaluate seismic damage data and to obtain probability damage matrices. Information on observed structural damage after earthquakes is of critical importance in driving empirical vulnerability methods. The purpose of this paper is to evaluate the damage distributions based on the data observed in the Erzincan-1992, Dinar-1995 and Kocaeli-1999 earthquakes in Turkey, utilizing two probability models—the Modified Binomial Distribution (MBiD) and the Modified Beta Distribution (MBeD). Based on these analyses, it was possible (a) to compare the advantages and limitations of the two probability models with respect to their capabilities in modelling the observed damage distributions; (b) to evaluate the damage assessment for reinforced concrete and masonry buildings in Turkey based on these models; and (c) to model the damage distribution of different sub-groups, such as buildings with different numbers of storeys or soil conditions, according to both models. The results indicate that (a) the MBeD is more suitable than the MBiD for modelling the observed damage data for both reinforced concrete and masonry buildings in Turkey; (b) the sub-groups with lower numbers of storeys are located in the lower intensity levels, while the sub-groups with higher numbers of storeys, depending on local site conditions, are concentrated in the higher intensity levels, so site conditions should also be considered in the assessment of intensity levels; and (c) detailed local models decrease the uncertainties of loss estimation, since the damage distribution of sub-groups can be modelled more accurately than with general damage distribution models.
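For orientation only (a plain binomial sketch, not the paper's modified binomial or modified beta formulations): given a mean damage grade on a 0-5 scale for an intensity level, a binomial model with n = 5 assigns a probability to each discrete damage grade, which is how a row of a damage probability matrix can be generated. The mean damage grades below are invented.

from scipy.stats import binom

N_GRADES = 5                               # damage grades 0 (none) ... 5 (collapse)

def binomial_damage_row(mean_damage_grade):
    # Probability of each damage grade 0..5 under a plain binomial model with n = 5.
    p = mean_damage_grade / N_GRADES
    return [binom.pmf(k, N_GRADES, p) for k in range(N_GRADES + 1)]

# Hypothetical mean damage grades for three intensity levels.
for intensity, mdg in [("VII", 1.2), ("VIII", 2.1), ("IX", 3.4)]:
    print(intensity, [round(v, 3) for v in binomial_damage_row(mdg)])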

10.
Probabilistic Tsunami Hazard Analysis (PTHA) can be used to evaluate and quantify tsunami hazards for the planning of integrated community-level preparedness, including mitigation of casualties and dollar losses, and to study resilient solutions for coastal communities. PTHA can provide several outputs, such as intensity measures (IMs) of the hazard quantified as a function of the recurrence interval of a tsunami event. In this paper, PTHA is developed using a logic-tree approach based on numerical modeling of tsunamis generated along the Cascadia Subduction Zone. The PTHA is applied to a community on the US Pacific Northwest coast located in Newport, Oregon. Results of the PTHA are provided for five IMs: inundation depth, flow speed, specific momentum flux, arrival time, and duration of inundation. The first three IMs are predictors of tsunami impact on the natural and built environment, and the last two are useful for tsunami evacuation and immediate response planning. Results for the five IMs are presented as annual exceedance probabilities for sites within the community along several transects with varying bathymetric and topographic features. Community-level characteristics of the spatial distribution of each IM for three recurrence intervals (500, 1000, and 2500 years) are provided. Results highlight the different patterns of IMs between land and river transects, and the significant variation in the magnitude of IMs due to complex bathymetric and topographic conditions at the various recurrence intervals. IMs show relatively higher magnitudes near the coastline, in the low-elevation regions, and at the harbor channel. In addition, results indicate a positive correlation between inundation depth and the other IMs near the coastline, but a weaker correlation at inland locations. Values of the Froude number ranged from 0.1 to 1.0 over the inland inundation area. In general, the results of this study highlight the spatial differences in IMs and suggest the need to include multiple IMs in resilience planning for a coastal community subjected to tsunami hazards.
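A toy sketch of the logic-tree aggregation step (not the study's source model or numerical results): each branch carries a weight and an annual exceedance-rate curve for an intensity measure such as inundation depth; the weighted rates are combined, and the depth corresponding to a chosen recurrence interval is read off the combined curve. All curves and weights below are invented.

import numpy as np

depths = np.linspace(0.0, 10.0, 21)                  # inundation-depth thresholds (m)

# Hypothetical logic-tree branches: (weight, annual rate of exceeding each depth).
branches = [
    (0.5, 0.002 * np.exp(-depths / 3.0)),
    (0.3, 0.004 * np.exp(-depths / 2.0)),
    (0.2, 0.001 * np.exp(-depths / 5.0)),
]
weights = np.array([w for w, _ in branches])
rates = np.array([r for _, r in branches])
mean_rate = weights @ rates                          # weighted mean annual exceedance rate

for T in (500, 1000, 2500):                          # recurrence intervals (years)
    # Depth whose combined exceedance rate equals 1/T (interpolated on the curve).
    d_T = np.interp(1.0 / T, mean_rate[::-1], depths[::-1])
    print(f"{T}-yr inundation depth ~ {d_T:.1f} m")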

11.
Bayesian inference modeling may be applied to empirical stochastic prediction in geomorphology, where outcomes of geomorphic processes can be expressed by probability density functions. Natural variations in process outputs are accommodated by the probability model. Uncertainty in the values of model parameters is reduced by considering statistically independent prior information on long-term parameter behavior. Formal combination of model and parameter information yields a Bayesian probability distribution that accounts for parameter uncertainty, but not for model uncertainty or systematic error, which are ignored herein. Prior information is determined by ordinary objective or subjective methods of geomorphic investigation. Examples involving simple stochastic models are given, as applied to the prediction of shifts in river courses, alpine rock avalanches, and fluctuating river bed levels. Bayesian inference models may be applied spatially and temporally as well as to functions of a random variable. They provide technically superior forecasts, for a given short-term data set, to those of extrapolation or stochastic simulation models. In applications, the contribution of the field geomorphologist is of fundamental quantitative importance.

12.
We describe a new approach for simulation of multiphase flows through heterogeneous porous media, such as oil reservoirs. The method, which is based on the wavelet transformation of the spatial distribution of the single-phase permeabilities, incorporates into the upscaled computational grid all the relevant data on the permeability, porosity, and other important properties of a porous medium at all length scales. The upscaling method generates a nonuniform computational grid which preserves the resolved structure of the geological model in the near-well zones as well as in the high-permeability sectors, and upscales the rest of the geological model. As such, the method is a multiscale one that preserves all the important information across all the relevant length scales. Using a robust front-detection method which eliminates numerical dispersion by a high-order total variation diminishing scheme (suitable for the type of nonuniform upscaled grid that we generate), we obtain highly accurate results at a greatly reduced computational cost. The speed-up in the computations is up to more than three orders of magnitude, depending on the degree of heterogeneity of the model. To demonstrate the accuracy and efficiency of our methods, five distinct models (including one with fractures) of heterogeneous porous media are considered, and two-phase flows in the models are studied, with and without capillary pressure.

13.
The most direct method of design flood estimation is at-site flood frequency analysis, which relies on a relatively long period of recorded streamflow data at a given site. Selection of an appropriate probability distribution and associated parameter estimation procedure is of prime importance in at-site flood frequency analysis. The choice of the probability distribution for a given application is generally made arbitrarily, as there is no sound physical basis to justify the selection. In this study, an attempt is made to investigate the suitability of as many as fifteen different probability distributions and three parameter estimation methods based on a large Australian annual maximum flood data set. A total of four goodness-of-fit measures are adopted, i.e., the Akaike information criterion, the Bayesian information criterion, the Anderson–Darling test, and the Kolmogorov–Smirnov test, to identify the best-fit probability distributions. Furthermore, the L-moments ratio diagram is used to make a visual assessment of the alternative distributions. It has been found that a single distribution cannot be specified as the best-fit distribution for all the Australian states, as was recommended in Australian Rainfall and Runoff 1987. The log-Pearson 3, generalized extreme value, and generalized Pareto distributions have been identified as the top three best-fit distributions. It is thus recommended that these three distributions should be compared as a minimum in practical applications when making the final selection of the best-fit probability distribution in a given application in Australia.
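A hedged sketch of the distribution-comparison step (synthetic data, maximum-likelihood fitting only, and AIC as the single criterion; the study itself uses a large observed data set, several estimation methods, and additional tests):

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic annual maximum flood series (the real study uses recorded Australian data).
ams = stats.genextreme.rvs(c=-0.1, loc=100.0, scale=30.0, size=60, random_state=rng)

# Candidate distributions fitted to the raw annual maxima. (The study's log-Pearson 3
# applies a Pearson type 3 distribution to log-transformed flows; it is not reproduced
# here so that the AIC values below remain directly comparable.)
candidates = {
    "GEV": stats.genextreme,
    "generalized Pareto": stats.genpareto,
    "Pearson type 3": stats.pearson3,
    "Gumbel": stats.gumbel_r,
}

results = []
for name, dist in candidates.items():
    params = dist.fit(ams)                            # maximum-likelihood fit
    loglik = np.sum(dist.logpdf(ams, *params))
    if not np.isfinite(loglik):
        continue                                      # fitted support does not cover the data
    results.append((2 * len(params) - 2 * loglik, name))

for aic, name in sorted(results):                     # smaller AIC indicates a better fit
    print(f"{name:20s} AIC = {aic:8.2f}")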

14.
Establishing robust models for predicting precipitation processes is of significant value for many applications in water resources engineering and environmental studies. In particular, understanding precipitation phenomena is crucial for managing the effects of flooding in watersheds. In this research, regional precipitation pattern modeling was undertaken using three intelligent predictive models incorporating artificial neural network (ANN), support vector machine (SVM) and random forest (RF) methods. The modeling was carried out using monthly precipitation data from a semi-arid environment located in Iraq. Twenty weather stations covering the entire region were used to construct the predictive models. At the initial stage, the region was divided into three climatic districts based on documented research. Modeling was first carried out for each district using historical information from regionally distributed meteorological stations for calibration. Subsequently, cross-station modeling was undertaken for each district using precipitation data from the other districts. The study demonstrated that cross-station modeling is an effective means of predicting the spatial distribution of precipitation in watersheds with limited meteorological data.
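The cross-station idea can be sketched as follows (synthetic data and untuned models, not the study's records from the twenty Iraqi stations): calibrate ANN, SVM and RF regressors on stations from one climatic district and evaluate them on stations from another.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

def make_district(n_months, offset):
    # Synthetic monthly records: predictors are month of year and a district attribute.
    month = np.arange(n_months) % 12
    X = np.column_stack([month, np.full(n_months, offset)])
    y = 20 + 15 * np.sin(2 * np.pi * month / 12) + offset + rng.normal(0, 3, n_months)
    return X, y

X_train, y_train = make_district(240, offset=5.0)     # calibration district
X_test, y_test = make_district(240, offset=8.0)       # cross-station (other district) test

models = {
    "ANN": MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0),
    "SVM": SVR(C=10.0, epsilon=0.5),
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
    print(f"{name}: cross-district RMSE = {rmse:.2f} mm")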

15.
We present a general framework that enables decision-making when a threshold in a process is about to be exceeded (an event). Measurements are combined with prior information to update the probability of such an event. This prior information is derived from the results of an ensemble of model realisations that span the uncertainty present in the model before any measurements are collected; only probability updates need to be calculated, which makes the procedure very fast once the basic ensemble of realisations has been set up. The procedure is demonstrated with an example where gas field production is restricted to a maximum amount of subsidence. Starting from 100 realisations spanning the prior uncertainty of the process, the measurements collected during monitoring bolster some of the realisations and expose others as irrelevant. In this procedure, more data mean a sharper determination of the posterior probability. We show the use of two different types of limits, a maximum allowed value of subsidence and a maximum allowed value of subsidence rate for all measurement points at all times. These limits have been applied in real-world cases. The framework is general and is able to deal with other types of limits in just the same way. It can also be used to optimise monitoring strategies by assessing the effect of the number, position and timing of the measurement points. Furthermore, in such a synthetic study, the prior realisations do not need to be updated; spanning the range of uncertainty with appropriate prior models is sufficient.
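A minimal sketch of the updating step (invented numbers, simple linear subsidence trends, and a Gaussian measurement-error model; the real procedure works with full model realisations): prior weights over the ensemble are uniform, each realisation's weight is updated with the likelihood of the monitoring data, and the posterior probability of exceeding the limit is the sum of the weights of the realisations that exceed it.

import numpy as np

rng = np.random.default_rng(7)

# Hypothetical prior ensemble: 100 realisations of subsidence (cm) versus time at one point.
t = np.arange(1, 11)                                             # years
realisations = rng.uniform(0.2, 1.5, 100)[:, None] * t[None, :]  # varied linear rates

# Hypothetical measurements over the first 4 years, with known noise level sigma.
true_rate, sigma = 0.9, 0.5
obs = true_rate * t[:4] + rng.normal(0, sigma, 4)

# Bayesian update of realisation weights: uniform prior, Gaussian likelihood of the misfit.
misfit = realisations[:, :4] - obs[None, :]
loglik = -0.5 * np.sum((misfit / sigma) ** 2, axis=1)
w = np.exp(loglik - loglik.max())
w /= w.sum()

# Posterior probability that subsidence exceeds a 10 cm limit at year 10.
limit = 10.0
p_exceed = float(np.sum(w[realisations[:, -1] > limit]))
print(f"posterior P(subsidence > {limit:.0f} cm at year 10) = {p_exceed:.2f}")

Only these weights change as new measurements arrive; the prior realisations themselves are never recomputed, which is what makes the procedure fast.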

16.
Sinkholes usually have a higher probability of occurrence and a greater genetic diversity in evaporite terrains than in carbonate karst areas. This is because evaporites have a higher solubility and, commonly, a lower mechanical strength. Subsidence damage resulting from evaporite dissolution generates substantial losses throughout the world, but the causes are only well understood in a few areas. To deal with these hazards, a phased approach is needed for sinkhole identification, investigation, prediction, and mitigation. Identification techniques include field surveys and geomorphological mapping combined with accounts from local people and historical sources. Detailed sinkhole maps can be constructed from sequential historical maps, recent topographical maps, and digital elevation models (DEMs) complemented with building-damage surveying, remote sensing, and high-resolution geodetic surveys. On a more detailed level, information from exposed paleosubsidence features (paleokarst), speleological explorations, geophysical investigations, trenching, dating techniques, and boreholes may help in investigating dissolution and subsidence features. Information on the hydrogeological pathways including caves, springs, and swallow holes are particularly important especially when corroborated by tracer tests. These diverse data sources make a valuable database—the karst inventory. From this dataset, sinkhole susceptibility zonations (relative probability) may be produced based on the spatial distribution of the features and good knowledge of the local geology. Sinkhole distribution can be investigated by spatial distribution analysis techniques including studies of preferential elongation, alignment, and nearest neighbor analysis. More objective susceptibility models may be obtained by analyzing the statistical relationships between the known sinkholes and the conditioning factors. Chronological information on sinkhole formation is required to estimate the probability of occurrence of sinkholes (number of sinkholes/km2 per year). Such spatial and temporal predictions, frequently derived from limited records and based on the assumption that past sinkhole activity may be extrapolated to the future, are non-corroborated hypotheses. Validation methods allow us to assess the predictive capability of the susceptibility maps and to transform them into probability maps. Avoiding the most hazardous areas by preventive planning is the safest strategy for development in sinkhole-prone areas. Corrective measures could be applied to reduce the dissolution activity and subsidence processes. A more practical solution for safe development is to reduce the vulnerability of the structures by using subsidence-proof designs.

17.
As a class of methods that has grown explosively in recent years, machine learning provides new ways of thinking and new research approaches for mineral exploration. This paper discusses the theoretical and methodological framework of mineral prospectivity prediction, reviews the current applications of machine learning in this field in two respects (feature information extraction and integrated information synthesis), and discusses the difficulties and challenges that machine learning faces in quantitative mineral resource prediction, including scarce and imbalanced training samples, the lack of uncertainty assessment during model training, the lack of studies that feed results back into geological understanding, and the choice of methods. Further...
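One of the challenges listed above, scarce and imbalanced training samples, can be illustrated with a small hedged sketch (synthetic evidence layers, not data from any real prospectivity study): class weighting lets a random forest pay proportionally more attention to the rare mineralized class.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(3)

# Synthetic evidence layers (e.g. geochemical anomaly, distance to fault, magnetics) for
# 2000 cells; only a few per cent of cells are labelled as mineralized.
X = rng.normal(size=(2000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1.0, 2000) > 2.8).astype(int)
print("mineralized fraction:", round(y.mean(), 3))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# class_weight='balanced' reweights the rare positive class during training.
clf = RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), digits=3))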

18.
The Bayesian extreme-value distribution of earthquake occurrences has been used to estimate the seismic hazard in 12 seismogenic zones of the North-East Indian peninsula. The Bayesian approach has been used very efficiently to combine prior information on seismicity obtained from geological data with historical observations in many seismogenic zones of the world. The basic parameters used to obtain the prior estimate of seismicity are the seismic moment, slip rate, earthquake recurrence rate, and magnitude. These estimates are then updated, in terms of Bayes' theorem, with historical evaluations of seismicity associated with each zone. From the Bayesian analysis of extreme earthquake occurrences for the North-East Indian peninsula, it is found that for T = 5 years the probability of occurrence of magnitude Mw = 5.0–5.5 events is greater than 0.9 for all zones. For Mw = 6.0, four zones, namely Z1 (Central Himalayas), Z5 (Indo-Burma border), Z7 (Burmese arc) and Z8 (Burma region), exhibit high probabilities. Lower probabilities are shown by zones Z4 and Z12, and the remaining zones Z2, Z3, Z6, Z9, Z10 and Z11 show moderate probabilities.

19.
The problem of incorporating the available seismological information provided by the major events of the historical catalog with that from the short period of instrumental data is investigated. Assuming that the frequency–magnitude law of Gutenberg and Richter is valid for both periods, an estimation procedure for the main parameter of this law and for the rate of earthquake occurrence during the historical period is presented. Application of the proposed method is demonstrated using both real and simulated data. An extension to allow for variable quality of complete data with different magnitude values is also included.

20.
Markov Chain Random Fields for Estimation of Categorical Variables
Multi-dimensional Markov chain conditional simulation (or interpolation) models have potential for predicting and simulating categorical variables more accurately from sample data because they can incorporate interclass relationships. This paper introduces a Markov chain random field (MCRF) theory for building one- to multi-dimensional Markov chain models for conditional simulation (or interpolation). An MCRF is defined as a single spatial Markov chain that moves (or jumps) in a space, with its conditional probability distribution at each location depending entirely on its nearest known neighbors in different directions. A general solution for the conditional probability distribution of a random variable in an MCRF is derived explicitly based on Bayes' theorem and a conditional independence assumption. One- to multi-dimensional Markov chain models for prediction and conditional simulation of categorical variables can be derived from the general solution, and MCRF-based multi-dimensional Markov chain models are nonlinear.
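A minimal numerical sketch of the kind of neighbor-conditioned probability the general solution yields under the conditional-independence assumption (the transition matrix is invented and distance dependence is ignored, so this illustrates the form of the calculation rather than the paper's full model): the probability of class k at the unsampled location is proportional to a transition probability from one neighbor into k times transition probabilities from k to each remaining neighbor, normalized over all classes.

import numpy as np

# Hypothetical transition probability matrix between three facies classes
# (rows: from-class, columns: to-class); each row sums to 1.
T = np.array([
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.1, 0.4, 0.5],
])

def mcrf_conditional(neighbor_classes):
    # P(class at the unsampled location | nearest known neighbor classes),
    # with the first neighbor treated as the one the chain moves from.
    first, rest = neighbor_classes[0], neighbor_classes[1:]
    unnorm = np.array([
        T[first, k] * np.prod([T[k, g] for g in rest])
        for k in range(T.shape[0])
    ])
    return unnorm / unnorm.sum()

print(np.round(mcrf_conditional([0, 1, 2]), 3))   # neighbors of classes 0, 1 and 2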
