Similar Documents (20 results)
1.
In climatology and hydrology, univariate Extreme Value Theory has become a powerful tool for modeling the distribution of extreme events. The Generalized Pareto Distribution (GPD) is routinely applied to model excesses in space or time by letting the two GPD parameters depend on appropriate covariates. Two possible pitfalls of this strategy are the modeling and the interpretation of the scale and shape GPD parameter estimates, which are often, and incorrectly, viewed as independent variables. In this note we first recall a statistical technique that makes the GPD estimates less correlated within a Maximum Likelihood (ML) estimation approach. In a second step we propose novel reparametrizations for two methods of moments particularly popular in hydrology: the Probability Weighted Moment (PWM) method and its generalized version (GPWM). Finally these three inference methods (ML, PWM and GPWM) are compared and discussed with respect to the issue of correlations.

2.
The maximum possible (regional) magnitude Mmax and other seismic hazard parameters, such as β (the slope of the Gutenberg-Richter law) and λ (the intensity, or rate, of seismic activity), are estimated in eight seismic regions on the west side of the circum-Pacific belt. The Bayesian approach, as described by Pisarenko et al. (1996) and Pisarenko and Lyubushin (1997, 1999), is a straightforward technique for estimating seismic hazard. The main assumptions of the method are a Poissonian character of the flow of seismic events, a frequency-magnitude law of Gutenberg-Richter type with a cutoff maximum value for the estimated parameter, and a seismic catalog with a rather sizeable number of events. We also estimated the quantiles of the probabilistic distribution of the "apparent" Mmax for given future time intervals.

3.
Halphen laws have been proposed as a complete system of distributions with sufficient statistics that lead to estimation with minimum variance. The Halphen system provides the flexibility to fit a large variety of data sets from natural events. In this paper we present the method of moments (MM) for estimating the parameters of the Halphen type B and type IB distributions. Their computation is very fast compared to that of the maximum likelihood (ML) method. Furthermore, this estimation method is very easy to implement since the formulae are explicit. Simulations show the equivalence of both methods when estimating quantiles for finite sample sizes.

4.
Estimation of confidence limits and intervals for the two- and three-parameter Weibull distributions is presented based on the method of moments (MOM), probability weighted moments (PWM), and maximum likelihood (ML). The asymptotic variances of the MOM, PWM, and ML quantile estimators are derived as functions of the sample size, return period, and parameters. Such variances can be used for estimating the confidence limits and confidence intervals of the population quantiles. Except for the two-parameter Weibull model, the formulas obtained do not have simple forms but can be evaluated numerically. Simulation experiments were performed to verify the applicability of the derived confidence intervals of quantiles. The results show that overall, the ML method for estimating the confidence limits performs better than the other two methods in terms of bias and mean square error. This is especially so for γ ≥ 0.5, even for small sample sizes (e.g. N = 10). However, the drawback of the ML method for determining the confidence limits is that it requires the shape parameter to be larger than 2. The Weibull model based on the MOM, ML, and PWM estimation methods was applied to fit the distribution of annual 7-day low flows and 6-h annual maximum rainfall data. The results showed that the differences in the estimated quantiles based on the three methods are not large, generally less than 10%. However, the differences between the confidence limits and confidence intervals obtained by the three estimation methods may be more significant. For instance, for the 7-day low flows, the ratio of the estimated confidence interval to the estimated quantile based on ML is about 17% for T ≥ 2, while it is about 30% for estimation based on the MOM and PWM methods.
In addition, the analysis of the rainfall data using the three-parameter Weibull showed that while the ML parameters can be estimated, the corresponding confidence limits and intervals could not be found because the shape parameter was smaller than 2.
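To make the MOM option above concrete, the following is a generic method-of-moments sketch for the two-parameter Weibull under the usual parametrization (mean = scale·Γ(1+1/k)); it does not reproduce the paper's confidence-limit derivations, and the names and test values are illustrative assumptions.

```python
import math
import random

def weibull_mom(sample):
    """Method-of-moments fit of a two-parameter Weibull(k, scale).
    The squared coefficient of variation,
        CV^2 = Gamma(1+2/k)/Gamma(1+1/k)^2 - 1,
    is monotone decreasing in the shape k, so k is found by bisection
    and the scale follows from the mean."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((v - mean) ** 2 for v in sample) / (n - 1)
    cv2 = var / mean ** 2

    def f(k):  # CV^2 implied by shape k, minus the sample CV^2
        g1 = math.gamma(1.0 + 1.0 / k)
        g2 = math.gamma(1.0 + 2.0 / k)
        return g2 / g1 ** 2 - 1.0 - cv2

    lo, hi = 0.1, 50.0  # f(lo) > 0 > f(hi) for any realistic sample
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    k = 0.5 * (lo + hi)
    scale = mean / math.gamma(1.0 + 1.0 / k)
    return k, scale

rng = random.Random(7)
# simulate Weibull(k=2, scale=3) via the inverse CDF
data = [3.0 * (-math.log(1.0 - rng.random())) ** 0.5 for _ in range(20000)]
k_hat, scale_hat = weibull_mom(data)
```

Note the abstract's caveat applies here too: MOM point estimates exist even when the shape is below 2, whereas the ML-based confidence limits discussed in the paper require shape > 2.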

5.
Extremes of stream flow and precipitation are commonly modeled by heavy-tailed distributions. When scrutinizing annual flow maxima or peaks over a threshold, the largest sample elements are quite often suspected to be low-quality data, outliers, or values corresponding to much longer return periods than the observation period. Since the interest is primarily in the estimation of the right tail (in the case of floods or heavy rainfalls), the sensitivity of upper quantiles to the largest elements of a series is a problem of special concern. This study investigated the sensitivity problem using the log-Gumbel distribution, generating samples of different sizes (n) and different values of the coefficient of variation by Monte Carlo experiments. Parameters of the log-Gumbel distribution were estimated by the probability weighted moments (PWM) method, the method of moments (MOM) and the maximum likelihood method (MLM), both for complete samples and for samples deprived of their largest elements. In the latter case, the distribution censored at the non-exceedance probability threshold F_T was considered. Using F_T instead of the censoring threshold T creates the possibility of controlling estimator properties. The effect of the F_T value on the performance of the quantile estimates was then examined. It is shown that right-censoring of the data need not reduce the accuracy of large-quantile estimates if the PWM or MOM method is employed. Moreover, by allowing bias in the estimates, one can gain in variance and in mean square error of large quantiles even if the ML method is used.

6.
Recent results in extreme value theory suggest a new technique for statistical estimation of distribution tails (Embrechts et al., 1997), based on a limit theorem known as the Gnedenko-Pickands-Balkema-de Haan theorem. This theorem gives a natural limit law for peak-over-threshold values in the form of the Generalized Pareto Distribution (GPD), a family of distributions with two parameters. The GPD has been successfully applied in a number of statistical problems related to finance, insurance, hydrology, and other domains. Here, we apply the GPD approach to the well-known seismological problem of earthquake energy distribution described by the Gutenberg-Richter seismic moment-frequency law. We analyze shallow earthquakes (depth h < 70 km) in the Harvard catalog over the period 1977-2000 in 12 seismic zones. The GPD is found to approximate the tails of the seismic moment distributions quite well above a lower threshold of approximately M = 10^24 dyne-cm, or somewhat higher (i.e., moment-magnitudes larger than m_W = 5.3). We confirm with very high statistical confidence that the b-value in mid-ocean ridges (b = 2.06 ± 0.30) is very different from that in other zones (b = 1.00 ± 0.04), and propose a physical mechanism contrasting crack-type rupture with dislocation-type behavior. The GPD can be applied as well in many problems of seismic hazard assessment on a regional scale. However, in certain cases, deviations from the GPD at the very end of the tail may occur, in particular for large samples, signaling a novel regime.
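The b-value contrast reported above rests on estimates of the Gutenberg-Richter slope. A standard way to obtain such estimates (not necessarily the tail-fitting procedure of this paper) is the Aki-Utsu maximum-likelihood estimator, sketched here on simulated magnitudes; the cutoff m_c = 5.3 echoes the abstract, but the data are synthetic.

```python
import math
import random

def b_value_ml(magnitudes, m_c):
    """Aki-Utsu maximum-likelihood estimate of the Gutenberg-Richter
    b-value from magnitudes at or above the completeness cutoff m_c:
        b_hat = log10(e) / (mean(m) - m_c)."""
    m = [x for x in magnitudes if x >= m_c]
    mean_excess = sum(m) / len(m) - m_c
    return math.log10(math.e) / mean_excess

rng = random.Random(1)
b_true, m_c = 1.0, 5.3
# under Gutenberg-Richter, magnitude excesses over m_c are
# exponential with rate b*ln(10)
mags = [m_c + rng.expovariate(b_true * math.log(10)) for _ in range(10000)]
b_hat = b_value_ml(mags, m_c)
```

The estimator follows directly from the exponential (in magnitude) form of the Gutenberg-Richter law, which is the m_W-scale counterpart of the power-law moment tail the GPD captures.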

7.
In this study, parameter estimation for the 3-parameter generalized logistic (GL) distribution is presented based on the methods of moments (MOM), maximum likelihood (ML), and probability weighted moments (PWM). The asymptotic variances of the MOM, ML, and PWM quantile estimators for the GL distribution are expressed as functions of the sample size, return period, and parameters. A Monte Carlo simulation was performed to verify the derived expressions for the variances and covariances between parameters and to evaluate the applicability of the derived asymptotic variances of quantiles for the MOM, ML and PWM methods. The simulation results generally show good agreement with the analytical results estimated from the asymptotic variances of parameters and quantiles when the shape parameter (β) of the GL distribution is between −0.10 and 0.10 for the MOM method and between −0.25 and 0.45 for the ML and PWM methods. In addition, the actual sample variances and the root mean square error (RMSE) of the asymptotic variances of quantiles for various sample sizes, return periods, and shape parameters are presented. Finally, in order to evaluate the applicability of the estimation methods to real data and to compare the estimated parameters, quantiles, and confidence intervals based on each estimation method, the GL distribution was fitted to the 24-h annual maximum rainfall data at Pohang, Korea.

8.
A Simplified Model Study of Seismic Activity
耿鲁明, 石耀霖. 《地震》 (Earthquake), 1993(1): 68-75
Seismicity in continental seismic belts exhibits cyclic behavior, with alternating quiet and active episodes, and the strong-earthquake activity within each cycle migrates in time and space. To explore the mechanism behind these features, this paper uses a numerical model to investigate the seismicity of a seismic belt driven by a uniform tectonic stress field. The simulated events reproduce several characteristics of real seismicity, such as the magnitude-frequency relation and the alternation of quiet and active periods. If the high stresses on model elements approaching their rupture strength are interpreted as precursors, we find that the precursory changes in the model are complex: the precursors of large model events are not concentrated around the future epicenter but appear over a wide range of the whole system, and there is no definite correspondence between the extent of precursor occurrence and the earthquake size. Statistics show that only an upper bound on the magnitude of the future earthquake can be estimated from the spatial extent of the precursors.

9.
Earthquake Research in China
1. METHOD
Let R be some value measured or estimated as a sequence in a "past" time interval (−T, 0):
(1) R(n) = (R_1, …, R_n), R_i ≥ R_0, R = max(R_1, …, R_n).
Values (1) could have an arbitrary physical nature. Below we shall consider Eq. (1) as earthquake magnitudes in a given seismically active region or logarithms of seismic peak ground accelerations at a given site. R_0 is a minimum cutoff value; it is defined by the possibilities of registration systems or was chosen as the …

10.
11.
The occurrence of extreme pollution events has serious effects on human health, environmental ecosystems, and the national economy. To gain a better understanding of this issue, risk assessments of the behavior of these events must be designed to anticipate the likelihood of their occurrence. In this study, we propose using the intensity-duration-frequency (IDF) technique to describe the relationship of pollution intensity (i) to its duration (d) and return period (T). As a case study, we used data from the city of Klang, Malaysia. The construction of IDF curves involves determining a partial duration series (PDS) of extreme pollution events. Based on the PDS data, a generalized Pareto distribution (GPD) is used to represent their probabilistic behavior. The estimated return periods and the IDF curves for pollution intensities corresponding to various return periods are determined from the fitted GPD model. The results reveal that pollution intensities in Klang tend to increase with the return period: although the IDF curves show different magnitudes for different return periods, all the curves show similar increasing trends, and longer return periods are associated with higher estimates of pollution intensity. Based on these results, we conclude that the IDF approach provides a good basis for decision-makers to evaluate the expected risk of future extreme pollution events.
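The return levels behind GPD-based IDF curves can be computed from the standard peaks-over-threshold return-level formula; the sketch below uses made-up parameter values purely for illustration and is not fitted to the Klang data.

```python
def gpd_return_level(threshold, scale, shape, rate, T):
    """T-period return level for a peaks-over-threshold model:
    exceedances over `threshold` follow a GPD(scale, shape) and occur
    `rate` times per period on average.  For shape != 0,
        x_T = u + (scale/shape) * ((rate*T)**shape - 1)."""
    return threshold + (scale / shape) * ((rate * T) ** shape - 1.0)

# hypothetical pollutant-concentration parameters (illustrative only):
# threshold u, GPD scale and shape, and 4 exceedances per year on average
u, sigma, xi, lam = 100.0, 20.0, 0.1, 4.0
levels = [gpd_return_level(u, sigma, xi, lam, T) for T in (2, 5, 10, 50, 100)]
```

Evaluating the formula over a grid of return periods yields one curve per duration, which is exactly the increasing family of IDF curves the abstract describes.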

12.
The Iranian Plateau does not appear to be a single crustal block, but an assemblage of zones comprising the Alborz-Azerbaijan, Zagros, Kopeh-Dagh, Makran, and Central and East Iran. Gumbel's third asymptotic distribution (GIII) method and the maximum magnitude expected by the Kijko-Sellevoll method are applied in order to check the potential of each seismogenic zone in the Iranian Plateau for the future occurrence of the maximum magnitude (Mmax). For this purpose, a homogeneous and complete seismicity database of the instrumental period 1900-2012 is used in 29 seismogenic zones of the examined region. The spatial mapping of the hazard parameters (the upper-bound magnitude ω, the most probable earthquake magnitude in the next 100 years M100, and the maximum magnitude estimated by the Kijko-Sellevoll method, Mmax^K-S) reveals that Central and East Iran, Alborz and Azerbaijan, Kopeh-Dagh and the SE Zagros are the most dangerous places for the next occurrence of a large earthquake.

13.
Changes in river flow regimes have resulted in a surge in the number of methods for non-stationary flood frequency analysis. A common assumption is a time-invariant distribution function with time-dependent location and scale parameters, while the shape parameters are time-invariant. Here, instead of the location and scale parameters of the distribution, the mean and standard deviation are used. We analyse the accuracy of two methods with respect to estimation of the time-dependent first two moments, the time-invariant skewness and the time-dependent upper quantiles. The method of maximum likelihood (ML) with a time covariate is confronted with the Two Stage (TS) method (combining Weighted Least Squares and L-moments techniques). The comparison is made by Monte Carlo simulations. Assuming a parent distribution that ensures the asymptotic superiority of the ML method, the Generalized Extreme Value distribution is considered, with the first two moments changing linearly in time at various rates, constant skewness, and various time-series lengths. Analysis of the results indicates the superiority of the TS method in all analyzed aspects. Moreover, the estimates from the TS method are more resistant to the choice of probability distribution, as demonstrated by case studies of Polish rivers.

14.
Parametric models are commonly used in frequency analysis of extreme hydrological events. To estimate extreme quantiles associated with high return periods, these models are not always appropriate, and estimators based on extreme value theory (EVT) have therefore been proposed in the literature. The Weissman estimator is one of the popular EVT-based semi-parametric estimators of extreme quantiles. In the present paper we propose a new family of EVT-based semi-parametric estimators of extreme quantiles. To build this new family of estimators, the basic idea consists in assigning weights to the k observations being used. Numerical experiments on simulated data are performed and a case study is presented. Results show that the proposed estimators are smooth, stable, less sensitive, and less biased than the Weissman estimator.
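For reference, the Weissman estimator that the paper takes as its baseline can be sketched as follows, with the tail index supplied by the Hill estimator; the choice of k and the simulated Pareto data are illustrative assumptions, and this is not the weighted family the paper proposes.

```python
import math
import random

def weissman_quantile(sample, k, p):
    """Weissman estimator of the extreme (1-p)-quantile using the top-k
    order statistics, with the tail index from the Hill estimator:
        gamma_hat = (1/k) * sum_{i=1..k} ln(X_{(n-i+1)} / X_{(n-k)})
        x_hat(p)  = X_{(n-k)} * (k / (n*p)) ** gamma_hat"""
    x = sorted(sample)
    n = len(x)
    x_nk = x[n - k - 1]  # the (n-k)-th order statistic X_{(n-k)}
    gamma = sum(math.log(x[n - i] / x_nk) for i in range(1, k + 1)) / k
    return x_nk * (k / (n * p)) ** gamma

rng = random.Random(3)
# Pareto tail with index 2 (gamma = 0.5): X = U**(-1/2), X >= 1;
# the true (1-p)-quantile is p**(-1/2), i.e. 100 for p = 1e-4
data = [(1.0 - rng.random()) ** -0.5 for _ in range(20000)]
q_hat = weissman_quantile(data, k=500, p=1e-4)
```

Note that p here lies well below 1/n, which is exactly the "beyond the sample" regime where such EVT-based extrapolation is needed.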

15.
A maximum-likelihood (ML) estimator of the correlation dimension d_2 of fractal sets of points, not affected by the left-hand truncation of their inter-distances, is defined. Such truncation might produce significant biases in the ML estimates of d_2 when the observed scale range of the phenomenon is very narrow, as often occurs in seismological studies. A second, very simple algorithm based on the determination of the first two moments of the inter-distance distribution (SOM) is also proposed, itself not biased by the left-hand truncation effect. The asymptotic variance of the ML estimates is given. Statistical tests carried out on data samples of different sizes, extracted from populations of inter-distances following a power law, suggested that the sample variances of the estimates obtained by the proposed methods are not significantly different, and are well estimated by the asymptotic variance even for samples containing a few hundred inter-distances. To examine the effects of different sources of systematic error, the two estimators were also applied to sets of inter-distances between points belonging to statistical fractal distributions, baker's maps and experimental distributions of earthquake epicentres. For a full evaluation of the results achieved by the proposed methods, these were compared with those obtained by the ML estimator for untruncated samples and by the least-squares algorithm.

16.
Water quality is often highly variable both in space and time, which poses challenges for modelling the more extreme concentrations. This study developed an alternative approach to predicting water quality quantiles at individual locations. We focused on river water quality data that were collected over 25 years, at 102 catchments across the State of Victoria, Australia. We analysed and modelled spatial patterns of the 10th, 25th, 50th, 75th and 90th percentiles of the concentrations of sediments, nutrients and salt, with six common constituents: total suspended solids (TSS), total phosphorus (TP), filterable reactive phosphorus (FRP), total Kjeldahl nitrogen (TKN), nitrate-nitrite (NOx), and electrical conductivity (EC). To predict the spatial variation of each quantile for each constituent, we developed statistical regression models and exhaustively searched through 50 catchment characteristics to identify the best set of predictors for that quantile. The models predict the spatial variation in individual quantiles of TSS, TKN and EC well (66%–96% spatial variation explained), while those for TP, FRP and NOx have lower performance (37%–73% spatial variation explained). The most common factors that influence the spatial variations of the different constituents and quantiles are: annual temperature, percentage of cropping land area in catchment and channel slope. The statistical models developed can be used to predict how low- and high-concentration quantiles change with landscape characteristics, and thus provide a useful tool for catchment managers to inform planning and policy making with changing climate and land use conditions.

17.
Tsunamis are generated by displacement or motion of large volumes of water. While there are several documented cases of tsunami generation by volcanic eruptions and landslides, most observed tsunamis are attributed to earthquakes. Kinematic models of tsunami generation by earthquakes, in which specified fault size and slip determine seafloor and sea-surface vertical motion, quantitatively explain far-field tsunami wave records. On the other hand, submarine landslides in subduction zones and other tectonic settings can generate large tsunamis that are hazardous along near-source coasts. Furthermore, the ongoing exploration of the oceans has found evidence for large paleo-landslides in many places, not just subduction zones. Thus, we want to know the relative contributions of faulting and landslides to tsunami generation. For earthquakes, only a small fraction of the minimum earthquake energy (less than 1% for typical parameter choices for shallow underthrusting earthquakes) can be converted into tsunami wave energy; yet this is enough energy to generate terrible tsunamis. For submarine landslides, tsunami wave generation and landslide motion interact in a dynamic coupling. The dynamic problem of a 2-D translational slider block on a constant-angle slope can be solved using a Green's function approach for the wave transients. The key result is that the largest waves are generated when the ratio of initial water depth above the block to the downslope vertical drop of the block, H_0/(W sin δ), is less than 1. The conversion factor of gravitational energy into tsunami wave energy varies from 0% for a slow slide in deep water to about 50% for a fast slide in shallow water whose motion is abruptly truncated. Comparing maximum tsunami wave amplitudes in the source region, great earthquakes produce amplitudes of a few meters at a wavelength fixed by the fault width of 100 km or so.
For submarine landslides, tsunami wave heights, as measured by the block height b, are small for most of the parameter regime. However, for low initial dynamic friction and values of H_0/(W sin δ) less than 1, tsunami wave heights in the downslope and upslope directions reach b and b/4, respectively. Wavelengths of these large waves scale with block width. For significant submarine slides, the value of b can range from meters up to the kilometer scale. Thus, the extreme case of efficient tsunami generation by landslides produces dramatic hazard scenarios.

18.
Prior to an earthquake, natural seismicity is correlated across multiple spatial and temporal scales. Many studies have indicated that an earthquake is hard to predict accurately with any single time-dependent precursory method. In this study, we attempt to combine four earthquake prediction methods, namely Pattern Informatics (PI), Load/Unload Response Ratio (LURR), State Vector (SV), and Accelerating Moment Release (AMR), to estimate future earthquake potential. The PI technique is founded on the premise that the change in the seismicity rate is a proxy for the change in the underlying stress. We first use the PI method to quantify localized changes surrounding the epicenters of large earthquakes and objectively delineate the anomalous areas (hot spots) of upcoming events. Next, we delineate the seismic hazard regions by integrating regional active fault zones and small-earthquake activity. Then, we further evaluate the earthquake potential in the seismic hazard regions using the LURR, SV and AMR methods. Retrospective tests of this new approach on the large earthquakes (M > 6.5) that occurred in western China over the last 3 years show that the LURR and SV time series usually climb to an anomalously high peak months to years prior to the occurrence of a large earthquake, and that the asymptote time t_c "predicted" by the AMR method corresponds to the time of the actual events. These results suggest that the combined multi-method approach can be a useful tool providing stronger constraints on forecasts of the time and location of future large events.

19.
With the objective of modelling annual rainfall maximum intensities in different geographical zones of Chile, we have created a Bayesian inference method for the generalized extreme value type I distribution (Gumbel distribution). We considered an uninformative prior distribution for the location parameter, μ, and three different prior distributions for the scale parameter, σ. Under these conditions we obtained the posterior distribution of (μ, σ) and associated summary statistics such as modes, expected values, quantiles and credibility intervals. In order to predict and estimate return periods, we obtained the posterior distribution of future observations, its expected value, quantiles and credibility intervals. To obtain several of these posterior summary measures it was necessary to utilize both numerical and Laplace approximations. Furthermore we estimate return period curves and intensity–duration–frequency curves.
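For contrast with the Bayesian treatment above, a simple frequentist counterpart, a method-of-moments Gumbel fit followed by the closed-form return-level formula, can be sketched as follows; the parameter values are illustrative and this is not the paper's posterior-based method.

```python
import math
import random

EULER = 0.5772156649015329  # Euler-Mascheroni constant

def gumbel_mom(sample):
    """Method-of-moments fit of the Gumbel distribution:
    sigma = s*sqrt(6)/pi and mu = mean - EULER*sigma,
    from the Gumbel mean mu + EULER*sigma and sd sigma*pi/sqrt(6)."""
    n = len(sample)
    mean = sum(sample) / n
    s = math.sqrt(sum((v - mean) ** 2 for v in sample) / (n - 1))
    sigma = s * math.sqrt(6.0) / math.pi
    mu = mean - EULER * sigma
    return mu, sigma

def gumbel_return_level(mu, sigma, T):
    """Intensity exceeded on average once every T periods:
    x_T = mu - sigma * ln(-ln(1 - 1/T))."""
    return mu - sigma * math.log(-math.log(1.0 - 1.0 / T))

rng = random.Random(11)
# simulated annual maxima from Gumbel(mu=50, sigma=10) via the inverse CDF
data = [50.0 - 10.0 * math.log(-math.log(rng.random())) for _ in range(20000)]
mu_hat, sigma_hat = gumbel_mom(data)
r100 = gumbel_return_level(mu_hat, sigma_hat, 100)
```

The Bayesian machinery in the paper replaces the point estimates (mu_hat, sigma_hat) with a posterior over (μ, σ), which is what yields credibility intervals for the return-level curves.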

20.
The stability of the power-law scaling of the earthquake recurrence-time distribution in a given space-time window is investigated, taking into account the magnitude of completeness and the effective starting time of aftershock sequences in earthquake catalogs from Southern California and Japan. A new method is introduced for sampling at different distances from a network of target events; it allows the recurrence times to be sampled many times over the same area. Two power laws with unknown exponents are assumed to govern the short- and long-recurrence-time ranges. This assumption is developed analytically and shown to imply a simple correlation between the two power laws. In practice, the results show that this correlation structure is not satisfied for short magnitude cutoffs (m_c = 2.5, 3.5, 4.5), and hence the recurrence-time distribution departs from power-law scaling. The scaling parameters obtained from the stack of the distributions corresponding to different magnitude thresholds are quite different for different regions of study, and significantly different scaling parameters fit the distribution for different magnitude thresholds. In particular, the power-law exponents decrease when the magnitude cutoff increases, resulting in a slower decay of the recurrence-time distribution, especially at short time ranges. For example, in the case of Japan, the exponent p2 of the power-law scaling at large recurrence times follows roughly a simple relation in the magnitude cutoff m_c. In the case of Southern California, it is shown that the Weibull distribution provides a better alternative fit to the data at moderate and large time scales.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号