Similar Articles
20 similar articles found (search time: 31 ms)
1.
Recently, a special nonhomogeneous Poisson process known as the Weibull process was proposed by C.-H. Ho for fitting historical volcanic eruptions. Revisiting this model, we find that it has some undesirable features that make it an unsatisfactory tool in this context. We then reconsider the entire question of a nonstationary model in the light of the availability and completeness of data. In our view, a nonstationary model is unnecessary and perhaps undesirable. We propose the Weibull renewal process as an alternative to the simple (homogeneous) Poisson process. For a renewal process the interevent times are independent and identically distributed with distribution function F; in the Weibull renewal process, F is the Weibull distribution, which includes the exponential as a special case. Testing for a Weibull distribution can be achieved by testing for exponentiality of the data under a simple transformation. Another alternative considered is the lognormal distribution for F. Whereas the homogeneous Poisson process represents purely random (memoryless) occurrences, the lognormal distribution corresponds to periodic behavior, and the Weibull distribution encompasses both periodicity and clustering, which helps characterize the volcano. Data from the same volcanoes considered by Ho were reanalyzed, and we found no reason to reject the hypothesis of Weibull interevent times, whereas lognormal interevent times were not supported. Prediction intervals for the next event are compared with those from Ho's nonhomogeneous model, and the Weibull renewal process appears to produce more plausible results.
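A minimal sketch of the workflow just described, assuming an invented set of repose times; the Kolmogorov–Smirnov check of the power-transformed sample is one possible way to carry out the exponentiality test, not necessarily the one used in the paper:

```python
# Hypothetical sketch: fit a Weibull renewal model to repose times and check
# whether the Weibull (or its exponential special case) is plausible.
import numpy as np
from scipy import stats

repose_days = np.array([120., 340., 95., 410., 260., 180., 530., 75., 300., 220.])

# Fit a two-parameter Weibull (location fixed at 0).
shape, loc, scale = stats.weibull_min.fit(repose_days, floc=0)
print(f"Weibull shape = {shape:.2f}, scale = {scale:.1f} days")

# Under the Weibull hypothesis, X**shape is exponential, so test the
# transformed sample for exponentiality (Kolmogorov-Smirnov here).
transformed = repose_days ** shape
ks = stats.kstest(transformed, "expon", args=(0, transformed.mean()))
print(f"KS test of exponentiality after transform: p = {ks.pvalue:.2f}")

# shape close to 1 recovers the homogeneous Poisson (memoryless) model;
# shape < 1 suggests clustering, shape > 1 quasi-periodic behaviour.
```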

2.
Chen Y., Liu J., Chen L., Chen Q., Chan L. S. Natural Hazards, 1998, 17(3): 251–267
A global seismic hazard assessment was conducted using the probabilistic approach in conjunction with a modified means of evaluating the seismicity parameters. The earthquake occurrence rate function was formulated for area source cells from recent instrumental earthquake catalogs. For the statistical application of the G–R relation in each source cell, the upper- and lower-bound magnitudes were determined, respectively, from historical earthquake data using a kernel smoothing operator and from the detection thresholds of recent catalogs. The seismic hazard at a particular site was obtained by integrating the hazard contributions from influencing cells, and the results were combined with the Poisson distribution to express the seismic hazard as the intensity with a 10% probability of exceedance in the next 50 years. The seismic hazard maps for three countries, constructed using the same method, agree well with existing maps obtained by different methods. The method is applicable to both oceanic and continental regions and to any specified duration. It can be used for regions without detailed geological information or where the relation between existing faults and earthquake occurrence is unclear.
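The final step above, converting a site's annual exceedance rates into the intensity with 10% probability of exceedance in 50 years under the Poisson assumption, can be sketched as follows; the hazard-curve values are invented for illustration:

```python
# Minimal sketch: find the intensity whose annual exceedance rate corresponds
# to a 10% probability of exceedance in 50 years (Poissonian occurrence).
import numpy as np

intensities = np.arange(5.0, 10.5, 0.5)               # e.g. macroseismic intensity
annual_rate = 0.2 * np.exp(-1.1 * (intensities - 5))  # hypothetical site hazard curve

t, p = 50.0, 0.10
target_rate = -np.log(1.0 - p) / t                    # ~0.0021 /yr, ~475-yr return period
print(f"target annual exceedance rate: {target_rate:.5f}")

# Interpolate the (monotonically decreasing) hazard curve to find the design intensity.
design_intensity = np.interp(target_rate, annual_rate[::-1], intensities[::-1])
print(f"intensity with 10% PoE in 50 yr: {design_intensity:.2f}")
```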

3.
A simple Poisson process is more specifically known as a homogeneous Poisson process, since the rate λ is assumed independent of time t. The homogeneous Poisson model generally gives a good fit to many volcanoes for forecasting volcanic eruptions. If eruptions occur according to a homogeneous Poisson process, the repose times between consecutive eruptions are independent exponential variables with mean 1/λ. The exponential distribution is applicable when the eruptions occur at random and are not affected by aging, etc. It is interesting to note that a general population of volcanoes can be related to a nonhomogeneous Poisson process with intensity factor λ(t). In this paper we specifically consider a more general Weibull distribution, WEI(θ, β), for volcanism. A Weibull process is appropriate for three types of volcanoes: increasing eruption rate (β > 1), decreasing eruption rate (β < 1), and constant eruption rate (β = 1). Statistical methods (parameter estimation, hypothesis testing, and prediction intervals) are provided to analyze the following five volcanoes: Aso, Etna, Kilauea, St. Helens, and Yake-Dake. We conclude that the generalized model provides a goodness-of-fit test for the simple exponential model (the homogeneous Poisson model) and is preferable in practice for some nonhomogeneous Poisson volcanoes with monotonic eruption rates.
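A short sketch of maximum-likelihood estimation for a Weibull (power-law intensity) process observed over a fixed window, using the standard time-truncated estimators; the eruption dates are made up for illustration:

```python
# Sketch of MLE for a Weibull process with intensity (beta/theta)*(t/theta)**(beta-1)
# observed over [0, T]; event times are illustrative, not a real eruption record.
import numpy as np

T = 100.0                                   # observation window, years
event_times = np.array([8., 21., 30., 44., 52., 63., 71., 80., 88., 96.])
n = len(event_times)

beta_hat = n / np.sum(np.log(T / event_times))     # shape: eruption-rate trend
theta_hat = T / n ** (1.0 / beta_hat)              # scale

print(f"beta  = {beta_hat:.2f}  (>1 increasing, <1 decreasing, =1 Poisson)")
print(f"theta = {theta_hat:.2f} years")

# Expected number of eruptions in the next dt years under the fitted trend,
# using the mean function Lambda(t) = (t/theta)**beta.
dt = 20.0
expected = ((T + dt) / theta_hat) ** beta_hat - (T / theta_hat) ** beta_hat
print(f"expected eruptions in the next {dt:.0f} years: {expected:.2f}")
```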

4.
In this paper, we consider a Markov renewal process (MRP) to model tropical cyclones that occurred in Bangladesh during 1877–2009. The model takes into account both the occurrence history and some physical constraints in order to capture the main physical characteristics of the storm surge process. We assume that the sequence of cyclones constitutes a Markov chain and that sojourn times follow a Weibull distribution. The parameters of the Weibull MRP, jointly with the transition probabilities, are estimated by maximum likelihood. The model shows a good fit to the observed events, and probabilities of occurrence of different types of cyclones are calculated for various lengths of time interval using the model. Stationary probabilities and mean recurrence times are also calculated. A brief comparison with a Poisson model and a marked Poisson model is also presented.
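A hypothetical sketch of the two estimation steps (embedded Markov-chain transition probabilities and Weibull sojourn times); the cyclone categories, sequence, and times are invented, and the sojourn times are pooled here rather than estimated jointly as in the paper:

```python
# Illustrative Markov renewal estimation: transition matrix from a categorical
# event sequence plus a Weibull fit of the sojourn (inter-event) times.
import numpy as np
from scipy import stats

states = ["D", "S", "V"]                      # assumed labels, e.g. depression / severe / very severe
seq    = ["D", "S", "D", "V", "S", "S", "D", "V", "D", "S"]
sojourn_years = np.array([1.2, 0.8, 2.5, 0.6, 1.9, 0.4, 3.1, 0.9, 1.5])

# 1) Transition-probability matrix of the embedded Markov chain (row-normalised counts).
idx = {s: i for i, s in enumerate(states)}
counts = np.zeros((3, 3))
for a, b in zip(seq[:-1], seq[1:]):
    counts[idx[a], idx[b]] += 1
P = counts / counts.sum(axis=1, keepdims=True)
print("transition matrix:\n", np.round(P, 2))

# 2) Weibull fit of the sojourn times (pooled here for simplicity).
shape, loc, scale = stats.weibull_min.fit(sojourn_years, floc=0)
print(f"Weibull sojourn times: shape = {shape:.2f}, scale = {scale:.2f} years")
```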

5.
A general approach for estimating tsunami height and hazard in the vicinity of active volcanoes has been developed. An empirical relationship is used to estimate the height of the tsunami generated by an eruption of a given size. This relationship can then be used to estimate the tsunami hazard based on the frequency of eruptive activity of a particular volcano. The technique is applied to the estimation of tsunami hazard from eruptions of Augustine volcano in Alaska. Modification of the approach to account for a less-than-satisfactory database and differing volcanic characteristics is also discussed, with Augustine volcano as an example. The approach can be used elsewhere with only slight modifications and, for the first time, provides a technique for estimating tsunami hazard from volcanic activity, similar to the well-established approach for estimating tsunami hazard from earthquake activity.

6.
The primary use of the natural hazards data archived at the National Geophysical Data Center (NGDC) and the co-located World Data Center for Solid Earth Geophysics (WDC for SEG) is the mitigation of future disasters. Among the responsibilities of NGDC/WDC for SEG is archiving and disseminating hazards data to city planners, educators, engineers, and others engaged in mitigation efforts (approximately 150,000 users per week on our web site). The purpose of this paper is therefore to educate the hazards community about some of the limitations of these data, in the hope that informed users will have a greater appreciation of data errors and possible sources of misinterpretation. Personnel at NGDC/WDC for SEG are in a unique position to discuss the limitations of hazards data, since we compile data from original and secondary sources; we are also in direct contact with the data users and know both the applications they make of hazard data and the misjudgments that often occur when data limitations are not known. Most hazard catalogs cover periods of less than 200 years and are reasonably complete and accurate for only the past 20 to 50 years. Such catalogs are not sufficient to investigate long-term hazard variations. Earthquake, tsunami, and volcano data catalogs acquired and integrated at NGDC/WDC for SEG illustrate artificial long-term variations created by cultural and scientific reporting changes, which can introduce unanticipated non-random variations into the catalogs. Inconsistencies are often related to changes in the way magnitudes are calculated, evolving network equipment, and discontinuities in network operation and personnel, among other error sources. Before statistical hazard studies can be done, catalogs need to be clearly understood so that systematic patterns of an observational nature can be identified.

7.
Y. Y. Kagan. Tectonophysics, 1997, 270(3–4): 207–219
This note discusses three interconnected statistical problems concerning the Parkfield sequence of moderate earthquakes and the Parkfield prediction experiment: (a) Is it possible that the quasi-periodic Parkfield sequence of characteristic earthquakes is not an uncommon, specific phenomenon (the research hypothesis) but can instead be explained by preferential selection from available earthquake catalogs? To this end we formulate the null hypothesis that earthquakes occur according to a Poisson process in time and that their sizes follow the Gutenberg–Richter relation, and we test whether this null hypothesis can be rejected as an explanation for the Parkfield sequence. (b) If the null hypothesis cannot be refuted, what is the probability of a magnitude m ≥ 6 earthquake occurring in the Parkfield region? (c) The direct goal of the Parkfield experiment is the registration of precursory phenomena prior to an m6 earthquake; however, in the absence of the characteristic earthquake, can the experiment resolve which of the two competing hypotheses is true in a reasonable time? Statistical analysis is hindered by an insufficiently rigorous definition of the research model and by inadequate or ambiguous data. However, we show that the null hypothesis cannot be decisively rejected: the quasi-periodic pattern of intermediate-size earthquakes in the Parkfield area is a statistical event likely to occur by chance if it has been preferentially selected from available earthquake catalogs. The observed magnitude–frequency curves for small and intermediate earthquakes in the Parkfield area agree with the theoretical distribution computed from a modified Gutenberg–Richter law (gamma distribution), using deformation rates for the San Andreas fault. We show that the size distribution of the Parkfield characteristic earthquakes can also be attributed to selection bias. According to the null hypothesis, the yearly probability of an m ≥ 6 earthquake originating in the Parkfield area is less than 1%, signifying that several more decades of observation may be needed before the expected event occurs. By its design, the Parkfield experiment cannot be expected to yield statistically significant conclusions on the validity of the research hypothesis for many decades.
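A quick numerical illustration of the closing statement, assuming a yearly occurrence probability of 1% under the null hypothesis:

```python
# With a ~1%/yr occurrence probability, several decades give only modest odds
# of observing the expected m >= 6 event at Parkfield.
p_yearly = 0.01
for years in (10, 30, 50, 100):
    p_at_least_one = 1.0 - (1.0 - p_yearly) ** years
    print(f"{years:3d} years: P(at least one m>=6 event) = {p_at_least_one:.2f}")
```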

8.
In this article, we model the volcanism near the proposed nuclear waste repository at Yucca Mountain, Nevada, U.S.A., by estimating the instantaneous recurrence rate using a nonhomogeneous Poisson process with Weibull intensity, and by using a homogeneous Poisson process to predict future eruptions. We then quantify the probability that any single eruption is disruptive in terms of a (prior) probability distribution, since not every eruption would result in disruption of the repository. Bayesian analysis is performed to evaluate the volcanic risk. Based on the Quaternary data, a 90% confidence interval for the instantaneous recurrence rate near the Yucca Mountain site is (1.85 × 10⁻⁶/yr, 1.26 × 10⁻⁵/yr). Using these confidence bounds, the corresponding 90% confidence interval for the risk (the probability of at least one disruptive eruption) over an isolation time of 10⁴ years is (1.0 × 10⁻³, 6.7 × 10⁻³), assuming the intensity remains constant during the projected time frame.
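A minimal numerical check of the risk formula implied above, risk = 1 − exp(−pλT); the per-eruption disruption probability p = 0.05 is an assumption chosen for illustration (it roughly reproduces the quoted interval), not the prior distribution used in the article:

```python
# Risk of at least one disruptive eruption over T years for a homogeneous rate
# lambda and an assumed per-eruption disruption probability p (p = 0.05 is a
# placeholder, not the article's prior).
import numpy as np

lam_low, lam_high = 1.85e-6, 1.26e-5     # 90% bounds on the recurrence rate (/yr)
T = 1.0e4                                 # isolation time, years
p = 0.05                                  # assumed per-eruption disruption probability

risk_low  = 1.0 - np.exp(-p * lam_low  * T)
risk_high = 1.0 - np.exp(-p * lam_high * T)
print(f"risk over {T:.0f} yr: ({risk_low:.1e}, {risk_high:.1e})")
```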

9.
Debris flow with intermittent surges is a major natural hazard, and accurate estimation of the total volume of a debris flow is a challenge for both researchers and engineering practitioners. This paper proposes a new model for total-volume estimation based on 15 years of observations in Jiangjia Valley, China, between 1987 and 2004. The model uses two input variables: the debris flow moving time and the average surge peak discharge. The Weibull distribution is adopted to describe the relationship between the surge peak discharge and its relative frequency. By integrating the Weibull function and applying two-point curve fitting, the relationship between the maximum discharge and the average surge peak discharge can be established, and the total debris flow volume is then linked to the moving time and the average peak discharge. Using statistical regression, the moving time is derived from the total duration of the debris flow. The proposed model fits the validation data very well and outperforms existing models, providing a new and more accurate way to estimate the total volume of debris flows with intermittent surges in engineering practice.

10.
In view of the growing importance of stochastic earthquake modeling in disaster preparation, the present study introduces a new family, the exponentiated Weibull distribution, and examines its performance in earthquake interevent time analysis in a stationary point process. This three-parameter (one scale and two shape) distribution not only contains the Weibull, exponentiated exponential, Burr type X, Rayleigh, and exponential distributions as special sub-families, but also offers both monotone and non-monotone hazard shapes. We first describe some properties of the exponentiated Weibull distribution, such as the survival rate, mode, median, and hazard rate. We then provide statistical inference and goodness-of-fit measures to compare the exponentiated Weibull model with other popular models, such as the exponential, gamma, lognormal, Weibull, and exponentiated exponential. Finally, we conduct a real-data analysis to assess the usefulness and flexibility of the exponentiated Weibull distribution for seismic interevent time modeling and associated applications. Results suggest that the exponentiated Weibull distribution performs comparably with other popular distributions of this kind, although further investigation is needed to confirm its importance and flexibility in statistical seismology.
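A hedged sketch of the model-comparison exercise: the exponentiated Weibull and several competing distributions are fitted to a synthetic interevent-time sample and compared by AIC. The parameter values and the use of SciPy's exponweib parameterization are illustrative assumptions, not the paper's estimates:

```python
# Fit candidate interevent-time distributions to synthetic data and rank by AIC.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
times = stats.exponweib.rvs(a=2.0, c=0.8, loc=0, scale=50.0, size=200, random_state=rng)

candidates = {
    "exponentiated Weibull": stats.exponweib,
    "Weibull":               stats.weibull_min,
    "gamma":                 stats.gamma,
    "exponential":           stats.expon,
    "lognormal":             stats.lognorm,
}
for name, dist in candidates.items():
    params = dist.fit(times, floc=0)           # location fixed at zero
    ll = np.sum(dist.logpdf(times, *params))
    k = len(params) - 1                        # free parameters (loc was fixed)
    print(f"{name:>22}: AIC = {2 * k - 2 * ll:.1f}")
```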

11.
The effect of undersampling on estimating the size of extreme natural hazards from historical data is examined. Tests using synthetic catalogs indicate that the tail of an empirical size distribution sampled from a pure Pareto probability distribution can range from having one to several unusually large events to appearing depleted relative to the parent distribution. Both of these effects are artifacts caused by limited catalog length. The artificially depleted empirical distributions are more difficult to diagnose, since one expects a pure Pareto distribution to be physically limited in some way. Using maximum-likelihood methods and the method of moments, we estimate the power-law exponent and the corner size parameter of tapered Pareto distributions for several natural hazard examples: tsunamis, floods, and earthquakes. Each of these examples has a different catalog length and measurement threshold relative to the largest event sizes. In many cases where there are only a few orders of magnitude between the measurement threshold and the largest events, joint two-parameter estimation is necessary to account for the dependence between the estimates of the power-law scaling exponent and the corner size parameter. Results indicate that, whereas the corner size parameter of a tapered Pareto distribution can be estimated, its upper confidence bound cannot be determined and the estimate itself is often unstable over time. Correspondingly, one cannot statistically reject a pure Pareto null hypothesis using natural hazard catalog data. Although physical limits on the hazard source size and on attenuation mechanisms from source to site constrain the maximum hazard size, historical data alone often cannot reliably determine the corner size parameter. Probabilistic assessments incorporating theoretical constraints on source size and propagation effects are therefore preferred over deterministic assessments of extreme natural hazards based on historical data.
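A sketch of joint two-parameter maximum-likelihood estimation for the tapered Pareto distribution; the synthetic event sizes, threshold, and starting values are placeholders rather than any of the catalogs analyzed in the paper:

```python
# Joint MLE of the tapered-Pareto exponent beta and corner size theta, using
# the density f(x) = (beta/x + 1/theta) * (x_t/x)**beta * exp((x_t - x)/theta).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x_t = 1.0                                          # measurement threshold
# Crude synthetic sample: pure Pareto events thinned by an exponential taper.
x = x_t * (1 - rng.random(500)) ** (-1 / 1.2)
x = x[rng.random(x.size) < np.exp(-(x - x_t) / 50.0)]

def negloglik(log_params):
    beta, theta = np.exp(log_params)               # enforce positivity
    return -np.sum(np.log(beta / x + 1.0 / theta)
                   + beta * np.log(x_t / x)
                   + (x_t - x) / theta)

res = minimize(negloglik, x0=np.log([1.0, 10.0]), method="Nelder-Mead")
beta_hat, theta_hat = np.exp(res.x)
print(f"beta = {beta_hat:.2f}, corner size theta = {theta_hat:.1f}")
```

The instability of the corner-size estimate noted in the abstract typically shows up here as a large, poorly constrained theta when the sample contains few events near the taper.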

12.
Whether earthquake occurrences follow a Poisson process model is a widely debated issue. The Poisson process model has great conceptual appeal, and those who rejected it under the pressure of empirical evidence have tried to restore it by identifying main events and suppressing foreshocks and aftershocks. The approach here is to estimate the density functions of the waiting times of future earthquakes. For this purpose, the notion of the Gram–Charlier series, a standard method for the estimation of density functions, has been extended using the orthogonality properties of certain polynomials such as the Laguerre and Legendre polynomials. It is argued that density functions are best estimated in the context of a particular null hypothesis. Using the results of the estimation, a simple test is designed which establishes that earthquakes do not occur as independent events, thereby violating one of the postulates of the Poisson process model. Both methodological and utilitarian aspects are dealt with.
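A brief sketch of an orthogonal-series (Laguerre) density estimate for waiting times, of the kind the extended Gram–Charlier approach relies on; the waiting-time sample and truncation order are invented for illustration:

```python
# Orthogonal-series density estimate on [0, inf): f(x) ~ exp(-x) * sum_k c_k L_k(x),
# where c_k = E[L_k(X)] by the orthogonality of Laguerre polynomials under exp(-x).
import numpy as np
from scipy.special import eval_laguerre

rng = np.random.default_rng(2)
waits = rng.gamma(shape=1.5, scale=1.0, size=400)      # synthetic waiting times

K = 6                                                   # truncation order
coeffs = [np.mean(eval_laguerre(k, waits)) for k in range(K + 1)]

def density(x):
    # Series estimate of the waiting-time density; under the Poisson null
    # hypothesis (unit-mean exponential) only the k = 0 term would remain.
    return np.exp(-x) * sum(c * eval_laguerre(k, x) for k, c in enumerate(coeffs))

xs = np.linspace(0, 8, 5)
print(np.round(density(xs), 3))      # compare against the exponential null density exp(-x)
```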

13.
Some Bayesian methods for dealing with inaccurate or vague data are introduced in the framework of seismic hazard assessment. Inaccurate data affected by heterogeneous errors are modeled by a probability distribution instead of the usual value-plus-random-error representation; such data are generically called imprecise. The earthquake size and the number of events in a given time are modeled as imprecise data. Imprecise data allow us to introduce into the estimation procedures the uncertainty inherent in the inaccuracy and heterogeneity of the measuring systems from which the data were obtained. The problem of estimating the parameter of a Poisson process is shown to be feasible using Bayesian techniques and imprecise data, and this background technique can be applied to the general problem of seismic hazard estimation. Initially, data in a regional earthquake catalog are assumed imprecise in both size and location (i.e., errors in the epicenter or spreading over a given source). By means of scattered attenuation laws, the regional catalog can be translated into a so-called site catalog of imprecise events. The site catalog is then used to estimate return periods or occurrence probabilities, taking into account all sources of uncertainty. Special attention is paid to the priors in the Bayesian estimation; they can be used to introduce additional information as well as scattered frequency–size laws for local events. A simple example is presented to illustrate the capabilities of this methodology.
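A minimal sketch of the Poisson-rate estimation step with an imprecise event count: the count is described by a probability distribution, and the posterior becomes a mixture of conjugate gamma posteriors. The prior hyper-parameters, observation period, and count weights are invented for illustration:

```python
# Bayesian estimation of a Poisson rate when the event count itself is imprecise.
import numpy as np
from scipy.special import gammaln

a0, b0 = 2.0, 50.0                    # gamma prior on the rate (events / yr)
T = 100.0                             # observation period, years
counts  = np.array([4, 5, 6])         # plausible event counts (the imprecise datum)
prior_w = np.array([0.2, 0.5, 0.3])   # probability attached to each count

# Marginal (negative-binomial) likelihood of each candidate count under the prior.
log_m = (gammaln(a0 + counts) - gammaln(a0) - gammaln(counts + 1)
         + a0 * np.log(b0 / (b0 + T)) + counts * np.log(T / (b0 + T)))
w = prior_w * np.exp(log_m)
w /= w.sum()

# Posterior is a mixture of Gamma(a0 + n, b0 + T); report its mean rate.
rate_mean = np.sum(w * (a0 + counts) / (b0 + T))
print(f"posterior mean rate: {rate_mean:.4f} /yr")

# Occurrence probability for the next 10 years (mixture of gamma-Poisson predictives).
t_next = 10.0
p_none = np.sum(w * ((b0 + T) / (b0 + T + t_next)) ** (a0 + counts))
print(f"P(at least one event in {t_next:.0f} yr) = {1 - p_none:.3f}")
```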

14.
In the present paper, the parameters affecting the uncertainties in the estimation of M_max are investigated by exploring the different methodologies used in the analysis of seismicity catalogues and the estimation of seismicity parameters. A critical issue to be addressed before any scientific analysis is the quality, consistency, and homogeneity of the data. Empirical relationships between different magnitude scales have been used to homogenize the seismicity catalogues employed in subsequent seismic hazard assessment studies. An attempt has been made to quantify the uncertainties due to magnitude conversion, and the seismic hazard parameters are then estimated using different methods to account for the epistemic uncertainty in the process. The study area chosen is around Delhi. The b value and the magnitude of completeness for the four seismogenic sources considered around Delhi varied by more than 40% across the three catalogues compiled with different magnitude conversion relationships. The effect of these uncertainties is then shown on the estimation of M_max and on the probabilities of occurrence of different magnitudes. The results emphasize the need to consider and quantify these uncertainties when carrying out seismic hazard assessment and, in turn, seismic microzonation.

15.
The ground motion hazard for Sumatra and the Malaysian peninsula is calculated in a probabilistic framework, using procedures developed for the US National Seismic Hazard Maps. We constructed regional earthquake source models and used standard published and modified attenuation equations to calculate peak ground acceleration at 2% and 10% probability of exceedance in 50 years for rock site conditions. We developed or modified earthquake catalogs and declustered these catalogs to include only independent earthquakes. The resulting catalogs were used to define four source zones that characterize earthquakes in four tectonic environments: subduction zone interface earthquakes, subduction zone deep intraslab earthquakes, strike-slip transform earthquakes, and intraplate earthquakes. The recurrence rates and sizes of historical earthquakes on known faults and across zones were also determined from this modified catalog. In addition to the source zones, our seismic source model considers two major faults that are known historically to generate large earthquakes: the Sumatran subduction zone and the Sumatran transform fault. Several published studies were used to describe earthquakes along these faults during historical and prehistorical time, as well as to identify segmentation models for the faults. Peak horizontal ground accelerations were calculated using ground motion prediction relations developed from seismic data obtained in the crustal interplate environment, the crustal intraplate environment, along the subduction zone interface, and from deep intraslab earthquakes. Most of these relations, however, have not been developed for the large distances needed for calculating the hazard across the Malaysian peninsula, and none were developed for earthquake ground motions generated in an interplate tectonic environment and propagated into an intraplate tectonic environment. For the interplate and intraplate crustal earthquakes, we applied ground-motion prediction relations that are consistent with California (interplate) and India (intraplate) strong-motion data that we collected for distances beyond 200 km. For the subduction zone equations, we recognized that the published relationships at large distances were not consistent with the global earthquake data we collected, and we modified the relations to be compatible with the global subduction zone ground motions. In this analysis, we used alternative source and attenuation models and weighted them to account for our uncertainty in which model is most appropriate for Sumatra or for the Malaysian peninsula. The resulting peak horizontal ground accelerations for 2% probability of exceedance in 50 years range from over 100% g to about 10% g across Sumatra and are generally less than 20% g across most of the Malaysian peninsula. The ground motions at 10% probability of exceedance in 50 years are typically about 60% of those derived for the 2% in 50 years hazard level. The largest contributors to hazard are the Sumatran faults.

16.
The purpose of this article is to study the three-parameter (scale, shape, and location) generalized exponential (GE) distribution and examine its suitability for probabilistic earthquake recurrence modeling. The GE distribution shares many physical properties with the gamma and Weibull distributions and, unlike the exponential distribution, is not burdened by the memoryless property. For shape parameter β > 1, the GE distribution has an increasing hazard function, in accordance with the elastic rebound theory of earthquake generation. In the present study, we consider a real, complete, and homogeneous earthquake catalog of 20 events with magnitude above 7.0 (Yadav et al. in Pure Appl Geophys 167:1331–1342, 2010) from northeast India and its adjacent regions (20°–32° N, 87°–100° E) to analyze earthquake inter-occurrence times with the GE distribution. We apply the modified maximum likelihood estimation method to estimate the model parameters and then perform a number of goodness-of-fit tests to evaluate the suitability of the GE model relative to competing models, such as the gamma and Weibull models. It is observed that, for the present data set, the GE distribution provides a better and more economical representation than the gamma and Weibull distributions. Finally, a few conditional probability curves (hazard curves) are presented to demonstrate the significance of the GE distribution in the probabilistic assessment of earthquake hazards.
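A minimal sketch of the conditional-probability (hazard) curves mentioned above for a generalized exponential model with β > 1; the parameter values are illustrative, not the estimates reported for the northeast India catalog:

```python
# Conditional probability of the next large event under a GE model,
# F(t) = (1 - exp(-lam*(t - mu)))**beta for t > mu.
import numpy as np

beta, lam, mu = 2.0, 0.15, 0.0        # shape, rate (1/yr), location (yr) -- illustrative

def ge_cdf(t):
    return np.where(t > mu, (1.0 - np.exp(-lam * (t - mu))) ** beta, 0.0)

def conditional_prob(elapsed, window):
    """P(next event within `window` years | `elapsed` years already passed)."""
    return (ge_cdf(elapsed + window) - ge_cdf(elapsed)) / (1.0 - ge_cdf(elapsed))

for elapsed in (0.0, 10.0, 20.0, 30.0):
    print(f"elapsed {elapsed:4.0f} yr -> P(event in next 5 yr) = "
          f"{conditional_prob(elapsed, 5.0):.2f}")
```

With β > 1 the conditional probability grows with elapsed time, which is the behavior the abstract links to elastic rebound.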

17.
Modeling landslide recurrence in Seattle, Washington, USA
To manage the hazard associated with shallow landslides, decision makers need an understanding of where and when landslides may occur. A variety of approaches have been used to estimate the hazard from shallow, rainfall-triggered landslides, such as empirical rainfall-threshold methods or probabilistic methods based on historical records. The wide availability of Geographic Information Systems (GIS) and digital topographic data has led to the development of analytic methods for landslide hazard estimation that couple steady-state hydrological models with slope stability calculations. Because these methods typically neglect the transient effects of infiltration on slope stability, their results cannot be linked with historical or forecasted rainfall sequences. Estimates of the frequency of conditions likely to cause landslides are critical for quantitative risk and hazard assessments. We present results to demonstrate how a transient infiltration model coupled with an infinite-slope stability calculation may be used to assess shallow landslide frequency in the City of Seattle, Washington, USA. A module called CRF (Critical RainFall) for estimating deterministic rainfall thresholds has been integrated into the TRIGRS (Transient Rainfall Infiltration and Grid-based Slope-Stability) model, which combines a transient, one-dimensional analytic solution for pore-pressure response to rainfall infiltration with an infinite-slope stability calculation. Input data for the extended model include topographic slope, colluvial thickness, initial water-table depth, material properties, and rainfall durations. This approach is combined with a statistical treatment of rainfall using a GEV (Generalized Extreme Value) probability distribution to produce maps showing, on a spatially distributed basis, the shallow landslide recurrence induced as a function of rainfall duration and hillslope characteristics.
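A short sketch of the statistical treatment of rainfall described above: fit a GEV distribution to (synthetic) annual-maximum rainfall for a given duration and evaluate how often a deterministic critical-rainfall threshold would be exceeded. The threshold value and the data are placeholders, not output of the CRF/TRIGRS model:

```python
# GEV fit of annual-maximum rainfall and return period of a critical threshold.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
annual_max_mm = stats.genextreme.rvs(c=-0.1, loc=80.0, scale=20.0, size=60,
                                     random_state=rng)   # 60 years of 24-h maxima

shape, loc, scale = stats.genextreme.fit(annual_max_mm)
critical_mm = 130.0                                       # hypothetical CRF threshold
p_exceed = stats.genextreme.sf(critical_mm, shape, loc, scale)
print(f"annual exceedance probability: {p_exceed:.3f}")
print(f"return period of the critical rainfall: {1.0 / p_exceed:.0f} years")
```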

18.
Areas of low strain rate are typically characterized by low to moderate seismicity, and the earthquake catalogs for these regions do not usually include large earthquakes because of their long recurrence periods. When the recurrence period of large earthquakes is much longer than the catalog time span, probabilistic seismic hazard is underestimated. The information provided by geological and paleo-seismological studies can improve seismic hazard estimation through renewal models, which assume characteristic earthquakes. In this work, we compare the differences that arise when active faults in the northwestern margin of the València trough are introduced into the hazard analysis. The differences between the models demonstrate that introducing faults in zones characterized by low seismic activity can give rise to significant changes in hazard values and in their location. The earthquake and fault seismic parameters (the recurrence interval; the segmentation or fault length, which controls the maximum-magnitude earthquake; and the time elapsed since the last event, Te) were studied to ascertain their effect on the final hazard results. The most critical parameter is the recurrence interval: shorter recurrences produce higher hazard values. The next most important parameter is fault segmentation: higher hazard values are obtained when the fault has segments capable of producing large earthquakes. Finally, the least critical parameter is the time elapsed since the last event (Te), with longer Te producing higher hazard values.
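A minimal sketch contrasting a renewal (characteristic-earthquake) model with a Poisson model for a single fault; the recurrence interval, aperiodicity, elapsed time Te, and the choice of a lognormal renewal distribution are assumptions for illustration, not the values or model used in the paper:

```python
# Conditional occurrence probability from a renewal model versus a Poisson model.
import numpy as np
from scipy import stats

mean_recurrence = 2000.0      # years (assumed)
cov = 0.5                     # aperiodicity (coefficient of variation, assumed)
Te = 1500.0                   # time elapsed since the last event, years (assumed)
dT = 50.0                     # exposure window, years

# Lognormal renewal distribution with the requested mean and coefficient of variation.
sigma = np.sqrt(np.log(1.0 + cov ** 2))
mu = np.log(mean_recurrence) - 0.5 * sigma ** 2
renewal = stats.lognorm(s=sigma, scale=np.exp(mu))

p_renewal = (renewal.cdf(Te + dT) - renewal.cdf(Te)) / renewal.sf(Te)
p_poisson = 1.0 - np.exp(-dT / mean_recurrence)
print(f"conditional renewal probability in {dT:.0f} yr: {p_renewal:.3f}")
print(f"Poisson probability in {dT:.0f} yr:            {p_poisson:.3f}")
```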

19.
To reasonably and quantitatively evaluate the distribution characteristics of root diameter, tensile resistance, and tensile strength of herbaceous plants, this study selected five herbaceous species growing in the Henan County area of Qinghai Province: Oxytropis ochrocephala, Poa annua, Stipa purpurea, Carex moorcroftii, and Kobresia humilis. Single-root tensile tests were carried out in the laboratory to measure the single-root tensile resistance, root diameter, and tensile strength of these species. On this basis, the distributions of these indices were fitted qualitatively with statistical models including the normal, gamma, Poisson, Rayleigh, and Weibull distributions, and the fits were then tested quantitatively with the Kolmogorov–Smirnov test. The results show that, among the five statistical models, the Poisson distribution described the distributions of the three indices relatively poorly, the Weibull and gamma distributions described them well, and the applicability of the remaining distributions lay between the two. Moreover, none of the indices followed the Poisson distribution; root diameter followed the normal, gamma, and Weibull distributions, and for a few species the Rayleigh distribution; tensile resistance and tensile strength followed the gamma and Weibull distributions, and for a few species the Rayleigh and normal distributions. Except for Stipa purpurea, whose root diameter was best described by the Weibull distribution, the optimal distribution for root diameter of the other species was the gamma distribution; except for Kobresia humilis, whose tensile resistance followed the Weibull distribution, the optimal distribution for tensile resistance of the remaining species was the gamma distribution; and the optimal distribution for root tensile strength was the normal distribution for Stipa purpurea, the gamma distribution for Carex moorcroftii and Kobresia humilis, and the Weibull distribution for Poa annua and Oxytropis ochrocephala. These results provide an important theoretical reference for the reasonable quantitative evaluation of the root diameter, tensile resistance, and tensile strength of herbaceous plant roots.
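An illustrative sketch of the fitting-and-testing workflow described above, applied to a synthetic sample of root tensile strength; the discrete Poisson case is omitted, and the values are not the measured data from the study:

```python
# Fit candidate distributions to (synthetic) tensile-strength data and compare
# them with the Kolmogorov-Smirnov statistic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
tensile_mpa = rng.gamma(shape=4.0, scale=5.0, size=80)   # synthetic tensile strengths

candidates = {
    "normal":   stats.norm,
    "gamma":    stats.gamma,
    "rayleigh": stats.rayleigh,
    "weibull":  stats.weibull_min,
}
for name, dist in candidates.items():
    params = dist.fit(tensile_mpa)
    d, p = stats.kstest(tensile_mpa, dist.cdf, args=params)
    print(f"{name:>8}: KS D = {d:.3f}, p = {p:.2f}")
```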

20.
A new model and method for whole-process settlement prediction
Ground settlement is a widespread environmental hazard. A new model is therefore proposed that generalizes the Poisson curve model and the Verhulst model and can accurately predict how the settlement evolves over the whole process. An approach combining nonlinear regression with cubic spline interpolation is proposed for solving the new model, overcoming the limitation of the three-segment calculation method used with the Poisson curve model. Case studies show that the proposed method solves the nonlinear model accurately and that, compared with the Verhulst model and its solution method, the new model and method greatly reduce the residuals between calculated and measured values. The new model thus provides a new scientific basis for geotechnical engineering design.
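A small sketch of the general idea of fitting a growth-type settlement curve by nonlinear regression; the Verhulst (logistic) form and the monitoring data are illustrative stand-ins for the paper's generalized model, and the cubic-spline step is omitted:

```python
# Nonlinear regression of a Verhulst-type settlement curve on monitoring data.
import numpy as np
from scipy.optimize import curve_fit

t_days = np.array([0, 30, 60, 90, 120, 180, 240, 300, 360])
s_mm   = np.array([2.0, 14.0, 32.0, 50.0, 63.0, 78.0, 85.0, 88.0, 90.0])

def verhulst(t, s_max, k, t0):
    """Logistic (Verhulst) settlement curve: s(t) = s_max / (1 + exp(-k (t - t0)))."""
    return s_max / (1.0 + np.exp(-k * (t - t0)))

popt, _ = curve_fit(verhulst, t_days, s_mm, p0=[90.0, 0.02, 90.0])
print("s_max = %.1f mm, k = %.4f /day, t0 = %.0f days" % tuple(popt))
print("predicted final settlement: %.1f mm" % popt[0])
```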

