Similar Articles
20 similar articles found.
1.
This study summarises the first results of almost one year of drop size distribution (DSD) measurements in the Czech Republic. The ESA-ESTEC 2D video disdrometer was used to measure raindrop parameters. The average DSD is shown to be of the gamma type. One-minute DSDs were evaluated to test the accuracy of analytical DSD models. Parameters of the gamma and exponential distribution functions were estimated for the whole data set as well as for various rain-rate intervals, using both a regression technique and the method of moments. It is shown that the parameter value depends strongly on the method of computation as well as on the rain type: its average is about 0.59 for the averaged (smoothed) one-minute DSD, while the average over un-smoothed DSDs is 11.0 (method of moments) or 5.4 (regression technique). Joss's shape parameter and Tokay and Short's parameter CS, which roughly indicates the rain type (if CS > 1, the event should be convective), are also discussed. The CS parameter tended to increase with increasing rain rate (the DSDs were grouped into classes by rain-rate value), supporting the idea that convectivity is associated with higher CS values. The study also compares the parameters of the average DSD with the averages of the parameter values of all 4183 one-minute DSDs.
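The method-of-moments step described above has a simple closed form for the gamma distribution: matching the sample mean and variance to the gamma moments gives the shape and scale directly. The sketch below uses synthetic drop-size data, not the paper's measurements.

```python
import numpy as np

def fit_gamma_moments(x):
    """Method-of-moments estimates for a gamma distribution.

    For a gamma(k, theta) variable, mean = k*theta and var = k*theta**2,
    so k = mean**2 / var and theta = var / mean.
    """
    x = np.asarray(x, dtype=float)
    mean, var = x.mean(), x.var(ddof=1)
    return mean ** 2 / var, var / mean

# Hypothetical drop-diameter sample (mm); the paper's data are not reproduced here.
rng = np.random.default_rng(0)
sample = rng.gamma(shape=2.0, scale=0.5, size=5000)
k_hat, theta_hat = fit_gamma_moments(sample)
```

With 5000 synthetic samples the estimates land close to the generating values, but as the abstract notes, moment and regression estimators can disagree sharply on real, noisy one-minute DSDs.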

2.
Chain-dependent models for daily precipitation typically model the occurrence process as a Markov chain and the precipitation intensity process using one of several probability distributions. It has been argued that the mixed exponential distribution is a superior model for the rainfall intensity process, since the value of its information criterion (Akaike information criterion or Bayesian information criterion) when fit to precipitation data is usually less than that of the more commonly used gamma distribution. The differences between the criterion values of the best and lesser models are generally small relative to the magnitude of the criterion value, which raises the question of whether these differences are statistically significant. Using a likelihood ratio statistic and nesting the gamma and mixed exponential distributions in a parent distribution, we show indirectly that the superiority of the mixed exponential distribution over the gamma distribution for modeling precipitation intensity is generally statistically significant. Comparisons are also made with a common-a gamma model, which are less informative.
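The comparison described above can be sketched in a few lines: fit a two-component exponential mixture by EM, fit a gamma by maximum likelihood, and compare log-likelihoods and BIC values. This is a generic illustration on synthetic wet-day amounts, not the paper's nested likelihood-ratio construction or its data.

```python
import numpy as np
from scipy import stats

def fit_mixed_exponential(x, n_iter=200):
    """EM for a two-component exponential mixture (an illustrative sketch)."""
    x = np.asarray(x, dtype=float)
    w, l1, l2 = 0.5, 2.0 / x.mean(), 0.5 / x.mean()
    for _ in range(n_iter):
        p1 = w * l1 * np.exp(-l1 * x)
        p2 = (1 - w) * l2 * np.exp(-l2 * x)
        r = p1 / (p1 + p2)                  # responsibility of component 1
        w = r.mean()
        l1 = r.sum() / (r * x).sum()
        l2 = (1 - r).sum() / ((1 - r) * x).sum()
    ll = np.log(w * l1 * np.exp(-l1 * x) + (1 - w) * l2 * np.exp(-l2 * x)).sum()
    return (w, l1, l2), ll

# Hypothetical wet-day rainfall amounts drawn from a mixed exponential (mm).
rng = np.random.default_rng(1)
x = np.where(rng.random(4000) < 0.3,
             rng.exponential(10.0, 4000), rng.exponential(2.0, 4000))
_, ll_mix = fit_mixed_exponential(x)
a, loc, scale = stats.gamma.fit(x, floc=0)
ll_gam = stats.gamma.logpdf(x, a, loc, scale).sum()
# BIC = k*ln(n) - 2*ll: 3 parameters for the mixture, 2 for the gamma (loc fixed).
bic_mix = 3 * np.log(x.size) - 2 * ll_mix
bic_gam = 2 * np.log(x.size) - 2 * ll_gam
```

As the abstract points out, a lower BIC alone does not settle significance; the paper's contribution is the likelihood-ratio test that makes the comparison formal.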

3.
Simplified, vertically-averaged soil moisture models have been widely used to describe and study eco-hydrological processes in water-limited ecosystems. The principal aim of these models is to understand how the main physical and biological processes linking soil, vegetation, and climate affect the statistical properties of soil moisture. A key component of these models is the stochastic nature of daily rainfall, which is mathematically described as a compound Poisson process with daily rainfall amounts drawn from an exponential distribution. Since measurements show that the exponential distribution is often not the best candidate to fit daily rainfall, we compare the soil moisture probability density functions obtained from a soil water balance model with daily rainfall depths assumed to be distributed as exponential, mixed-exponential, and gamma. The model with these different daily rainfall distributions is applied to a catchment in New South Wales, Australia, to show that the estimation of the seasonal statistics of soil moisture may be improved by using the distribution that best fits daily rainfall data. The study also shows that the choice of daily rainfall distribution can considerably affect the estimates of vegetation water stress, leakage and runoff occurrence, and the whole water balance.
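The core of such a model is a daily bucket water balance driven by stochastic rainfall. The sketch below uses a deliberately minimal loss term and illustrative (uncalibrated) parameter values; the rainfall-depth sampler is swappable, which is exactly the comparison the study performs.

```python
import numpy as np

def simulate_soil_moisture(depth_sampler, n_days=10000, p_wet=0.3,
                           zr=200.0, eta=5.0, seed=0):
    """Minimal bucket model: s' = s + rain/zr - losses, clipped to [0, 1].

    depth_sampler(rng) returns one daily rainfall depth (mm); zr is the
    active soil depth (mm) and eta a maximum daily loss rate (mm/day).
    All parameter values are illustrative, not calibrated to any catchment.
    """
    rng = np.random.default_rng(seed)
    s = 0.5
    trace = np.empty(n_days)
    for t in range(n_days):
        rain = depth_sampler(rng) if rng.random() < p_wet else 0.0
        s = s + rain / zr - (eta / zr) * s   # loss proportional to moisture
        s = min(max(s, 0.0), 1.0)            # excess becomes runoff/leakage
        trace[t] = s
    return trace

# Exponential vs gamma daily depths with the same 5 mm mean.
trace_exp = simulate_soil_moisture(lambda r: r.exponential(5.0))
trace_gam = simulate_soil_moisture(lambda r: r.gamma(0.5, 10.0))
```

Comparing the empirical densities of `trace_exp` and `trace_gam` mimics the study's comparison of soil moisture PDFs under different daily rainfall distributions.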

4.
Probabilistic performance assessment requires the development of probability distributions that can predict different performance levels of structures with reasonable accuracy. This study evaluates the performance of a non-seismically designed multi-column bridge bent retrofitted with four different alternatives and, based on their performance under an ensemble of earthquake records, proposes accurate prediction models and distribution fits for different performance criteria as a case study. Finite element methods are implemented, with each retrofitting technique modeled and numerically validated against experimental results. Different statistical distributions are employed to represent the variation in the considered performance criteria for the retrofitted bridge bents. The Kolmogorov-Smirnov goodness-of-fit test was carried out to compare the distributions and find the most suitable one for each performance criterion. An important conclusion drawn here is that the yield displacement of CFRP, steel, and ECC jacketed bridge bents is best described by a gamma distribution. The crushing displacement and crushing base shear of all four retrofitted bents follow a normal and a Weibull distribution, respectively. A probabilistic model is developed to approximate the seismic performance of retrofitted bridge bents; the resulting models and response functions allow their performance to be predicted.
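The selection step described above, fitting several candidate distributions and ranking them by the Kolmogorov-Smirnov statistic, can be sketched generically as follows. The sample here is synthetic; the bridge-bent response data are not available.

```python
import numpy as np
from scipy import stats

def best_fit_by_ks(sample, candidates=("norm", "gamma", "weibull_min")):
    """Fit each candidate distribution by maximum likelihood and rank the
    fits by their Kolmogorov-Smirnov statistic (smaller is better)."""
    results = {}
    for name in candidates:
        dist = getattr(stats, name)
        params = dist.fit(sample)
        results[name] = stats.kstest(sample, name, args=params).statistic
    return min(results, key=results.get), results

# Hypothetical yield-displacement sample (mm) for illustration only.
rng = np.random.default_rng(2)
yield_disp = rng.gamma(4.0, 2.5, size=500)
best, ks_stats = best_fit_by_ks(yield_disp)
```

Strictly, using the same data to estimate parameters and to compute the KS statistic makes the test conservative; critical values should be adjusted (or bootstrapped) in a real assessment.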

5.
The occurrence of the September 28, 2004 Mw = 6.0 mainshock at Parkfield, California, has significantly increased the mean and aperiodicity of the series of time intervals between mainshocks in this segment of the San Andreas fault. We use five different statistical distributions as renewal models to fit this new series and to estimate the time-dependent probability of the next Parkfield mainshock. Three of these distributions (lognormal, gamma and Weibull) are frequently used in reliability and time-to-failure problems. The other two come from physically-based models of earthquake recurrence (the Brownian Passage Time model and the Minimalist model). The differences resulting from these five renewal models are emphasized.
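Whatever renewal distribution is chosen, the time-dependent probability is the same conditional-probability computation: given that no mainshock has occurred for the elapsed time, the probability of one within a forecast horizon. A sketch with illustrative numbers (not the paper's fitted Parkfield parameters):

```python
from scipy import stats

def conditional_probability(dist, elapsed, horizon):
    """P(next event within `horizon` | quiet for `elapsed`) under a renewal
    model whose inter-event time distribution is `dist`."""
    s = dist.sf(elapsed)                      # survival to the elapsed time
    return (s - dist.sf(elapsed + horizon)) / s

# Illustrative lognormal renewal model with a ~25-year median recurrence.
model = stats.lognorm(s=0.5, scale=25.0)
p30 = conditional_probability(model, elapsed=15.0, horizon=30.0)
```

The five renewal models differ only in which `dist` is plugged in; for distributions with increasing hazard, the conditional probability grows as the quiet period lengthens.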

6.
This study aims to model the joint probability distribution of drought duration, severity and inter-arrival time using a trivariate Plackett copula. The drought duration and inter-arrival time each follow the Weibull distribution and the drought severity follows the gamma distribution. Parameters of these univariate distributions are estimated using the method of moments (MOM), maximum likelihood method (MLM), probability weighted moments (PWM), and a genetic algorithm (GA); whereas parameters of the bivariate and trivariate Plackett copulas are estimated using the log-pseudolikelihood function method (LPLF) and GA. Streamflow data from three gaging stations, Zhuangtou, Taian and Tianyang, located in the Wei River basin, China, are employed to test the trivariate Plackett copula. The results show that the Plackett copula is capable of yielding bivariate and trivariate probability distributions of correlated drought variables.
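For context, the bivariate Plackett copula has a closed form, which makes joint non-exceedance probabilities easy to evaluate once the marginals are fitted. This is the textbook bivariate formula only, not the paper's trivariate construction, and the inputs below are hypothetical marginal probabilities.

```python
import math

def plackett_cdf(u, v, theta):
    """Bivariate Plackett copula C(u, v; theta); theta = 1 gives independence,
    theta > 1 positive dependence."""
    if abs(theta - 1.0) < 1e-12:
        return u * v
    s = 1.0 + (theta - 1.0) * (u + v)
    disc = s * s - 4.0 * u * v * theta * (theta - 1.0)
    return (s - math.sqrt(disc)) / (2.0 * (theta - 1.0))

# Joint non-exceedance probability for hypothetical marginal probabilities of
# drought duration (0.8) and severity (0.6) under positive dependence.
joint = plackett_cdf(0.8, 0.6, theta=5.0)
```

Note that with theta = 5 the joint probability exceeds the independence product 0.8 × 0.6 = 0.48, reflecting the positive association between drought duration and severity.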

7.
The purpose of this paper is to discuss the statistical distributions of recurrence times of earthquakes. Recurrence times are the time intervals between successive earthquakes at a specified location on a specified fault. Although a number of statistical distributions have been proposed for recurrence times, we argue in favor of the Weibull distribution, the only distribution that has a scale-invariant hazard function. We consider three sets of characteristic earthquakes on the San Andreas fault: (1) the Parkfield earthquakes, (2) the sequence of earthquakes identified by paleoseismic studies at the Wrightwood site, and (3) an example of a sequence of micro-repeating earthquakes at a site near San Juan Bautista. In each case we make a comparison with the applicable Weibull distribution. The number of earthquakes in each of these sequences is too small to make definitive conclusions. To overcome this difficulty we consider a sequence of earthquakes obtained from a one million year "Virtual California" simulation of San Andreas earthquakes. Very good agreement with a Weibull distribution is found. We also obtain recurrence statistics for two other model studies, a modified forest-fire model and a slider-block model; in both cases good agreement with Weibull distributions is obtained. Our conclusion is that the Weibull distribution is the preferred distribution for estimating the risk of future earthquakes on the San Andreas fault and elsewhere.
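The scale-invariance claim is concrete: the Weibull hazard h(t) = (beta/eta)(t/eta)^(beta-1) is a pure power law, so rescaling time multiplies the hazard by a constant factor c^(beta-1) regardless of t. A quick numerical check with illustrative parameter values:

```python
def weibull_hazard(t, beta, eta):
    """Hazard rate of a Weibull(beta, eta): h(t) = (beta/eta) * (t/eta)**(beta-1)."""
    return (beta / eta) * (t / eta) ** (beta - 1.0)

# Illustrative shape and scale (years); rescaling time by c multiplies the
# hazard by c**(beta-1), independent of where on the time axis we look.
beta, eta, c = 2.2, 150.0, 3.0
ratios = [weibull_hazard(c * t, beta, eta) / weibull_hazard(t, beta, eta)
          for t in (10.0, 50.0, 200.0)]
```

No other continuous lifetime distribution has this property, which is the basis of the paper's argument for the Weibull.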

8.
Chondrule mass frequency distributions determined from the Bjurböle, Chainpur, Allegan, Saratov, Elenovka and Nikolskoe meteorites have been tested to see if they could be fitted to either the Rosin or Weibull statistical functions. Whereas none of the distributions gave a fit to Rosin's law, they could all (with the exception of Nikolskoe) be fitted to the Weibull function, suggesting similar origins and/or histories. Although it is possible to produce a Weibull distribution from a Rosin distribution by the removal of lower mass particles, it is easier to envisage the Weibull mass distribution of meteoritic chondrules as a feature of the chondrule-forming event or of the material from which they formed.

9.
The ability to calculate the oil droplet size distribution (DSD) and its dynamic behavior in the water column is important in oil spill modeling. Breaking waves disperse oil from a surface slick into the water column as droplets of varying sizes, and the droplets undergo further breakup and coalescence due to turbulence. Available models simulate the oil DSD based on empirical/equilibrium equations; however, the evolution of the DSD due to subsequent droplet breakup and coalescence in the water column is best represented by a dynamic population model. This paper develops a phenomenological model, based on droplet breakup and coalescence, to calculate the oil DSD under breaking waves and ocean turbulence. Its results are compared with data from laboratory experiments covering different oil types, weathering times, and breaking wave heights, and show good agreement with the experimental data.

10.
A mixed model is proposed to fit the earthquake interevent time distribution. In this model, the whole distribution is constructed by mixing the distribution of clustered seismicity with a suitable distribution of background seismicity. Namely, the fit is tested assuming a clustered seismicity component modeled by a non-homogeneous Poisson process and a background component modeled using different hypothetical models (exponential, gamma and Weibull). For southern California, Japan, and Turkey, the best fit is found when a Weibull distribution is implemented as the model for background seismicity. Our study uses an earthquake random sampling method we introduced recently, applied here to account for space–time clustering of earthquakes at different distances from a given source and to increase the number of samples used to estimate the earthquake interevent time distribution and its power-law scaling. For Japan, the contribution of clustered pairs of events to the whole distribution is analyzed for different magnitude cutoffs, mc, and different time periods. The results show that the power laws are mainly produced by the dominance of correlated pairs at small and long time ranges. In particular, both power laws, observed at short and long time ranges, can be attributed to time–space clustering revealed by the standard Gardner and Knopoff declustering windows.

11.
The majority of continental arc volcanoes go through decades or centuries of inactivity, and thus communities become inured to their threat. Here we demonstrate a method to quantify the hazard from sporadically active volcanoes and to develop probabilistic eruption forecasts. We compiled an eruption-event record for the last c. 9,500 years at Mt Taranaki, New Zealand, through detailed radiocarbon dating of recent deposits and a sediment core from a nearby lake. This is the highest-precision record ever collected from the volcano, but it still probably underestimates the frequency of eruptions, which will only be better approximated by adding data from more sediment core sites in different tephra-dispersal directions. A mixture of Weibull distributions provided the best fit to the inter-event period data for the 123 events. Depending on which date is accepted for the last event, the mixture-of-Weibulls model gives a probability of at least 0.37–0.48 for a new eruption from Mt Taranaki in the next 50 years. A polymodal distribution of inter-event periods indicates that a range of nested processes controls eruption recurrence at this type of arc volcano. These could possibly be related by further statistical analysis to intrinsic factors such as step-wise processes of magma rise, assembly and storage.

12.
The main characteristics of the significant wave height in an area of increased interest, the north Atlantic Ocean, are studied based on satellite records and corresponding simulations obtained from the numerical wave prediction model WAM. The two data sets are analyzed by means of a variety of statistical measures, mainly focusing on the distributions that they form. Moreover, new techniques for estimating and minimizing the discrepancies between observed and modeled values are proposed, based on ideas and methodologies from a relatively new branch of mathematics, information geometry. The results show that the modeled values overestimate the corresponding observations throughout the study period. On the other hand, two-parameter Weibull distributions fit the data well. However, one cannot use the same probability density function to describe the whole study area, since the corresponding scale and shape parameters deviate significantly between regions. This variation should be taken into account in optimization or assimilation procedures, which is possible by means of information geometry techniques.

13.
Monthly rainfall amounts are distributed according to different frequency distribution functions in different parts of the world. In extremely arid regions, however, the gamma probability distribution is most often found to fit the existing data well. Libyan monthly rainfall is found to follow the gamma probability distribution, as confirmed by chi-square tests. Almost all rainfall sequences recorded over at least the last 20 years in Libya are investigated statistically, and gamma distribution parameters are calculated at the existing stations. The shape and scale parameters are then regionalized, making it possible to find the parameter values at any desired location within the study area and to generate synthetic sequences according to the gamma distribution. Predictions of 10, 25, 50 and 100 mm rainfall amounts are obtained from this probability function.
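The fit-then-test procedure described above can be sketched as follows: fit a gamma by maximum likelihood, bin the data into equal-probability classes of the fitted distribution, and run a chi-square test with the degrees of freedom reduced by the two estimated parameters. The monthly totals below are synthetic, not the Libyan records.

```python
import numpy as np
from scipy import stats

def gamma_chi_square(sample, n_bins=8):
    """Fit a gamma distribution and run a chi-square goodness-of-fit test.

    Bins are equal-probability intervals of the fitted gamma, and the
    degrees of freedom are reduced by the 2 estimated parameters."""
    a, loc, scale = stats.gamma.fit(sample, floc=0)
    edges = stats.gamma.ppf(np.linspace(0, 1, n_bins + 1), a, loc, scale)
    observed, _ = np.histogram(sample, bins=edges)
    expected = np.full(n_bins, len(sample) / n_bins)
    chi2, _ = stats.chisquare(observed, expected)
    p_value = stats.chi2.sf(chi2, df=n_bins - 1 - 2)
    return (a, scale), p_value

# Hypothetical monthly rainfall totals (mm), 20 years of records.
rng = np.random.default_rng(3)
monthly = rng.gamma(1.2, 30.0, size=240)
params, p_value = gamma_chi_square(monthly)
```

A large p-value means the gamma hypothesis is not rejected, which is the kind of evidence the chi-square tests in the study provide.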

14.
Multicomponent probability distributions such as the two‐component Gumbel distribution are sometimes applied to annual flood maxima when individual floods are seen as belonging to different classes, depending on physical processes or time of year. However, hydrological inconsistencies may arise if only nonclassified annual maxima are available to estimate the component distribution parameters. In particular, an unconstrained best fit to annual flood maxima may yield some component distributions with a high probability of simulating floods with negative discharge. In such situations, multicomponent distributions cannot be justified as an improved approximation to a local physical reality of mixed flood types, even though a good data fit is achieved. This effect usefully illustrates that a good match to data is no guarantee against degeneracy of hydrological models. Copyright © 2016 John Wiley & Sons, Ltd.
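The degeneracy the abstract warns about is easy to quantify: since the Gumbel distribution has unbounded support on the left, a fitted component whose location is small relative to its scale assigns visible probability to negative discharge. The parameter values below are illustrative, not a fit to any record.

```python
import math

def gumbel_cdf(x, mu, beta):
    """CDF of the Gumbel (EV1) distribution for maxima:
    F(x) = exp(-exp(-(x - mu) / beta))."""
    return math.exp(-math.exp(-(x - mu) / beta))

# A hypothetical unconstrained component with location 20 m^3/s and scale
# 40 m^3/s: P(Q < 0) = exp(-exp(mu/beta)) is far from negligible.
p_negative = gumbel_cdf(0.0, mu=20.0, beta=40.0)
```

Here roughly 19% of simulated floods from this component would have negative discharge, exactly the hydrological inconsistency a good overall data fit can hide.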

15.
The quantile of a probability distribution, known as the return period or hydrological design value of a hydrological variable, is the value corresponding to a fixed non-exceedance probability and is a very important notion in hydrology. In hydraulic engineering design and water resources management, confidence interval (CI) estimation for a population quantile is of primary interest and, among other applications, is used to assess the pollution level of a contaminant in water, air, etc. The accuracy of such estimation directly influences engineering investments and safety. The two-parameter Weibull, Pareto, Lognormal, Inverse Gaussian and Gamma are some commonly used probability models in such applications. In spite of its practical importance, the problem of CI estimation for a quantile of these widely applicable distributions has received little attention in the literature. In this paper, a new method is proposed to obtain a CI for a quantile of any distribution for which generalized pivotal quantities (GPQs) exist for its parameters (or for the parameters of the distribution of any one-to-one function of the underlying random variable). The proposed method is illustrated by constructing CIs for quantiles of the Weibull, Pareto, Lognormal, type-I Extreme value (for minima), Exponential and Normal distributions, for complete as well as type II singly right-censored samples. Empirical performance evaluation showed that the proposed method has well-concentrated coverage probabilities near the nominal level, even for uncensored samples as small as 5 and for censored samples with a proportion of censored observations up to 0.70. Existing methods for the Weibull distribution have poor or dispersed coverage probabilities relative to the nominal level for complete samples. Applications of the proposed method to groundwater monitoring and air pollution assessment are illustrated for practitioners.
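The GPQ idea is most transparent in the normal case, where exact pivots exist for both parameters. The sketch below Monte-Carlo-samples the standard GPQs for sigma and mu and combines them into a GPQ for the p-th quantile; it illustrates the generalized-pivotal principle only and is not the paper's general construction, and the data are hypothetical concentrations.

```python
import numpy as np
from scipy import stats

def normal_quantile_gpq_ci(sample, p=0.95, conf=0.95, n_mc=20000, seed=0):
    """GPQ-based CI for the p-th quantile of a normal population.

    Uses the pivots Z = sqrt(n)(xbar - mu)/sigma ~ N(0,1) and
    V = (n-1)s^2/sigma^2 ~ chi2(n-1) to build GPQs for sigma, mu, and
    hence for the quantile mu + z_p * sigma."""
    rng = np.random.default_rng(seed)
    x = np.asarray(sample, dtype=float)
    n, xbar, s = x.size, x.mean(), x.std(ddof=1)
    z = rng.standard_normal(n_mc)
    v = rng.chisquare(n - 1, n_mc)
    sigma_g = s * np.sqrt((n - 1) / v)           # GPQ for sigma
    mu_g = xbar - z * sigma_g / np.sqrt(n)       # GPQ for mu
    q_g = mu_g + stats.norm.ppf(p) * sigma_g     # GPQ for the quantile
    lo, hi = np.quantile(q_g, [(1 - conf) / 2, (1 + conf) / 2])
    return lo, hi

# Hypothetical contaminant concentrations (ug/L) from 30 monitoring samples.
rng = np.random.default_rng(4)
data = rng.normal(50.0, 10.0, size=30)
lo, hi = normal_quantile_gpq_ci(data)
```

The empirical quantiles of the simulated GPQ give the interval; for lognormal data the same machinery applies after a log transform, which is the one-to-one-function device mentioned in the abstract.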

16.
17.
Three approaches to modelling the duration of streamflow droughts at eight southern African sites are considered; a non-parametric method (that of Kaplan-Meier) is compared with the fitting of two simple parametric models, the exponential and the Weibull. All techniques allow the instantaneous probability of a drought coming to an end to differ between wet and dry seasons, using the concept of censored data. Model fitting is discussed, and the Kaplan-Meier estimates permit an assessment of the fit of the parametric models, with the aim of finding a parsimonious model for the data which can be used for predictive purposes. In most cases considered herein, either the exponential or the Weibull approach is found to be adequate.
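The non-parametric baseline is simple to compute by hand: at each observed drought termination the survival estimate is multiplied by one minus one over the number of spells still at risk, while censored spells reduce the risk set without a jump. A minimal sketch on hypothetical drought durations:

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate; events[i] = 1 if the drought ended
    (observed termination), 0 if censored. Returns (time, S(t)) steps."""
    t_arr = np.asarray(times)
    d_arr = np.asarray(events)
    # Sort by time, processing terminations before censorings at ties.
    order = np.lexsort((1 - d_arr, t_arr))
    t, d = t_arr[order], d_arr[order]
    n = len(t)
    surv, s = [], 1.0
    for i in range(n):
        at_risk = n - i
        if d[i]:
            s *= 1.0 - 1.0 / at_risk
        surv.append((t[i], s))
    return surv

# Hypothetical drought durations (days); a 0 in events marks a censored spell.
times = [5, 8, 12, 12, 20, 25, 30, 41]
events = [1, 1, 1, 0, 1, 0, 1, 1]
km = kaplan_meier(times, events)
```

Plotting these steps against the survival functions of fitted exponential and Weibull models is the fit assessment the abstract describes.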

18.
The temporal distribution of earthquakes with Mw > 6 in the Dasht-e-Bayaz region, eastern Iran, has been investigated using time-dependent models. In such models, it is assumed that the times between consecutive large earthquakes follow a certain statistical distribution. Four time-dependent inter-event distributions, the Weibull, gamma, lognormal, and Brownian Passage Time (BPT), are used in this study, and the associated parameters are estimated by maximum likelihood. The most suitable distribution is selected on the basis of the log-likelihood function and the Bayesian Information Criterion. The probability of occurrence of the next large earthquake during a specified interval of time was calculated for each model, and the concept of conditional probability was then applied to forecast the next major (Mw > 6) earthquake at the site of interest. The emphasis is on statistical methods that attempt to quantify the probability of an earthquake occurring within specified time, space, and magnitude windows. According to the results, the probability of occurrence of an earthquake with Mw > 6 in the near future is significantly high.

19.
The open literature reveals several types of bivariate exponential distributions. Of them, only the Nagao–Kadoya distribution (Nagao and Kadoya, 1970, 1971) has a general form whose marginals are standard exponential distributions and whose correlation coefficient satisfies 0 ≤ ρ < 1. On the basis of the principle that if a theoretical probability distribution can represent the statistical properties of sample data, then probabilities computed from the theoretical model should fit the observed ones well, numerical experiments are executed to investigate the applicability of the Nagao–Kadoya bivariate exponential distribution for modeling the joint distribution of two correlated random variables with exponential marginals. Results indicate that this model is suitable for analyzing the joint distribution of two exponentially distributed variables. The procedure for using this model to represent the joint statistical properties of two correlated exponentially distributed variables is also presented.

20.
Probabilistic characterization of environmental variables or data typically involves distributional fitting. Correlations, when present in variables or data, can considerably complicate the fitting process. In this work, the effects of high-order correlations on distributional fitting were examined, and the way they are technically accounted for was described using two multi-dimensional formulation methods: maximum entropy (ME) and Koehler–Symanowski (KS). The ME method formulates a least-biased distribution by maximizing its entropy, and the KS method uses a formulation that conserves specified marginal distributions. Two bivariate environmental data sets, ambient particulate matter and water quality, were chosen for illustration and discussion. Three metrics (log-likelihood function, root-mean-square error, and the bivariate Kolmogorov–Smirnov statistic) were used to evaluate distributional fit. Bootstrap confidence intervals were also employed to help inspect the degree of agreement between distributional and sample moments. It is shown that both methods are capable of fitting the data well and have potential for practical use. The KS distributions were found to be of good quality, and using the maximum likelihood method for the parameter estimation of a KS distribution is computationally efficient.
