Similar Documents
20 similar documents found (search time: 363 ms)
1.
A comparison of different methods for estimating T-year events is presented, all based on the Extreme Value Type I distribution. Series of annual maximum floods from ten gauging stations on the South Island of New Zealand were used. Different methods of predicting the 100-year event and the associated uncertainty were applied: at-site estimation and regional index-flood estimation, with and without accounting for intersite correlation, using either the method of moments or the method of probability weighted moments for parameter estimation. Furthermore, estimation at ungauged sites was considered, applying either a log-linear relationship between the at-site mean annual flood and catchment characteristics or a direct log-linear relationship between 100-year events and catchment characteristics. Comparison of the results shows that the existence of at-site measurements significantly reduces the prediction uncertainty and that the presence of intersite correlation tends to increase it. A simulation study revealed that, in regional index-flood estimation, the method of probability weighted moments is preferable to the method of moments with regard to bias and RMSE.
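The at-site EV1 fitting by probability weighted moments discussed above can be sketched in a few lines. The following is an illustrative Python sketch, not the paper's code: function names and numeric details are ours, and only the at-site case is shown (the regional index-flood variant would first scale each site's series by its at-site mean annual flood before pooling).

```python
import numpy as np

def gumbel_pwm_fit(x):
    """Fit the Extreme Value Type I (Gumbel) distribution by
    probability weighted moments, using the unbiased estimators
    b0 (sample mean) and b1."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    # unbiased PWM: b1 = sum((i-1) * x_(i)) / (n * (n - 1)), i = 1..n
    b1 = np.sum((i - 1) * x) / (n * (n - 1))
    alpha = (2.0 * b1 - b0) / np.log(2.0)   # scale
    xi = b0 - 0.5772156649 * alpha          # location (Euler's constant)
    return xi, alpha

def gumbel_quantile(xi, alpha, T):
    """T-year event, i.e. the quantile with non-exceedance
    probability F = 1 - 1/T."""
    return xi - alpha * np.log(-np.log(1.0 - 1.0 / T))
```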

2.
The log-Pearson type 3 distribution is widely used in North America and Australia for fitting annual flood series. Four different versions of the method of moments used in fitting this distribution are compared using Monte Carlo samples that reflect some of the characteristics of annual flood series observed at some Canadian rivers. The bias, standard error, root mean square error and skew of the parameter estimates, and of estimates of events associated with different probabilities of occurrence, are examined, as are the correlation coefficients between the parameter estimates and between the sample moments used in each of the four methods of estimation. It is observed that variances, covariances and correlation coefficients calculated using the usual first-order asymptotic approximation can have considerable error and should therefore be used with caution. On the basis of the mean square error of events with return period above the range covered by the sample, a method proposed earlier that uses moments of order 1, 2 and 3 in real space performs better than the other three methods, even though some of the other methods follow the recommendation put forward by some investigators that higher-order moments (order 3 or more) be avoided in flood frequency estimation. It is argued in the present study that higher-order moments should not be avoided simply because they have high variability, since the variability of the estimated design flood events is determined not only by the variability of the moments but also by the correlation between those moments. Some recommendations are given at the end of the study aimed at achieving better efficiency in flood frequency research at a time when more and more distributions and methods of estimation are being proposed.
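One common moment variant for this distribution fits a Pearson type 3 to the logarithms of the peaks. A minimal Python sketch, assuming scipy's `pearson3` and our own function names (it illustrates the log-space method of moments only, not the four real/log-space versions compared in the paper):

```python
import numpy as np
from scipy import stats

def lp3_fit_log_moments(q):
    """Fit log-Pearson type 3 by the method of moments applied to the
    natural logs of the annual peaks: the mean, standard deviation and
    skew of the log-flows define a Pearson type 3 in log space."""
    y = np.log(np.asarray(q, dtype=float))
    return y.mean(), y.std(ddof=1), stats.skew(y, bias=False)

def lp3_quantile(mu, sigma, g, T):
    """T-year flood: Pearson type 3 quantile of the logs,
    back-transformed to real space."""
    yT = stats.pearson3.ppf(1.0 - 1.0 / T, g, loc=mu, scale=sigma)
    return np.exp(yT)
```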

3.
4.
5.
Abstract

The impulse response of a linear convective-diffusion analogy (LD) model used for flow routing in open channels is proposed as a probability distribution for flood frequency analysis. The flood frequency model has two parameters, which are derived using the methods of moments and maximum likelihood. Errors in quantiles for these parameter estimation methods are also derived. The distribution shows that the two methods are equivalent in terms of producing mean values, an important property when the true distribution function is unknown. The flood frequency model is tested using annual peak discharges for the gauging sections of 39 Polish rivers, where the average value of the ratio of the coefficient of skewness to the coefficient of variation is about 2.52, a value closer to the ratio of the LD model than to that of the gamma or lognormal model. The likelihood ratio indicates a preference for the LD over the lognormal in 27 out of 39 cases. It is found that the proposed flood frequency model represents flood frequency characteristics well (as measured by the moment ratio) when the LD flood routing model is likely to be the best of all linear flow routing models.

6.
The principle of maximum entropy (POME) was used to derive the Pearson type (PT) III distribution. The POME yielded the minimally prejudiced PT III distribution by maximizing the entropy subject to two appropriate constraints: the mean and the mean of the logarithm of the real values about a constant > 0. This provided a unique method for parameter estimation. Historical flood data were used to evaluate this method and compare it with the methods of moments and maximum likelihood estimation.

7.
Selection of a flood frequency distribution and an associated parameter estimation procedure is an important step in flood frequency analysis. This is, however, a difficult task because of the problems in selecting the best-fit distribution from the large number of candidate distributions and parameter estimation procedures available in the literature. This paper presents a case study with flood data from Tasmania in Australia, which examines four model selection criteria: the Akaike Information Criterion (AIC), the Akaike Information Criterion second-order variant (AICc), the Bayesian Information Criterion (BIC) and a modified Anderson–Darling Criterion (ADC). It was found from Monte Carlo simulation that the ADC is more successful in recognizing the parent distribution correctly than the AIC and BIC when the parent is a three-parameter distribution; on the other hand, the AIC and BIC are better at recognizing the parent distribution correctly when the parent is a two-parameter distribution. Of the seven probability distributions examined for Tasmania, two-parameter distributions were found preferable to three-parameter ones, with the Log Normal appearing to be the best choice. The paper also evaluates the three most widely used parameter estimation procedures for the Log Normal distribution: the method of moments (MOM), maximum likelihood estimation (MLE) and the Bayesian Markov chain Monte Carlo method (BAY). The BAY procedure was found to provide better parameter estimates for the Log Normal distribution, resulting in flood quantile estimates with smaller bias and standard error than the MOM and MLE. The findings of this study should be useful for flood frequency analyses in other Australian states and other countries, in particular when selecting an appropriate probability distribution from a number of alternatives.
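The AIC/BIC selection step described above can be sketched as follows. This is an illustrative Python sketch assuming scipy: the default candidate list is a small stand-in, not the seven distributions of the Tasmanian study, and the modified ADC is not implemented here.

```python
import numpy as np
from scipy import stats

def select_distribution(data, candidates=("gumbel_r", "lognorm", "pearson3")):
    """Rank candidate distributions by AIC = 2k - 2 ln L, where k is the
    number of fitted parameters and L the maximized likelihood; the
    analogous BIC = k ln n - 2 ln L is also reported. Candidate names
    are scipy.stats distribution names (an illustrative set)."""
    data = np.asarray(data, dtype=float)
    n = len(data)
    scores = {}
    for name in candidates:
        dist = getattr(stats, name)
        params = dist.fit(data)                     # maximum likelihood fit
        loglik = np.sum(dist.logpdf(data, *params))
        k = len(params)
        scores[name] = {"aic": 2 * k - 2 * loglik,
                        "bic": k * np.log(n) - 2 * loglik}
    best = min(scores, key=lambda name: scores[name]["aic"])
    return best, scores
```

With a long sample drawn from a skewed parent, the skewed candidate should win the AIC comparison against symmetric alternatives.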

8.
Abstract

The identification of flood seasonality is a procedure with many practical applications in hydrology and water resources management. Several statistical methods for capturing flood seasonality have emerged during the last decade. So far, however, little attention has been paid to the uncertainty involved in the use of these methods, or to the reliability of their estimates. This paper compares the performance of annual maximum (AM) and peaks-over-threshold (POT) sampling models in flood seasonality estimation. Flood seasonality is determined by the two most frequently used methods, one based on directional statistics (DS) and the other on the distribution of monthly relative frequencies of flood occurrence (RF). The performance is evaluated for the AM model and three common POT sampling models, depending on the estimation method, flood seasonality type and sample record length. The results demonstrate that the POT models outperform the AM model in most analysed scenarios. POT sampling provides significantly more information on flood seasonality than AM sampling; for certain flood seasonality types, POT samples achieve an estimation uncertainty that would otherwise require AM samples up to ten times longer. The performance of the RF method does not depend on the flood seasonality type as much as that of the DS method, which performs poorly on samples generated from complex seasonality distributions.
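The DS method mentioned above summarizes flood timing with circular statistics. A minimal Python sketch under our own naming, using a simplified 365.25-day year rather than any convention from the paper:

```python
import numpy as np

def flood_seasonality_ds(days_of_year, year_length=365.25):
    """Directional-statistics (DS) summary of flood timing: each flood
    date is mapped to an angle on the annual circle; the mean direction
    gives the mean flood day, and the mean resultant length r in [0, 1]
    measures the concentration (strength) of the seasonality."""
    theta = 2.0 * np.pi * np.asarray(days_of_year, dtype=float) / year_length
    xbar, ybar = np.cos(theta).mean(), np.sin(theta).mean()
    mean_day = (np.arctan2(ybar, xbar) * year_length / (2.0 * np.pi)) % year_length
    r = np.hypot(xbar, ybar)
    return mean_day, r
```

A value of r near 1 indicates floods tightly clustered around the mean day; r near 0 indicates no preferred season (e.g. two equally frequent flood seasons half a year apart).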

9.
Abstract

This paper describes a first attempt at developing a regional flood estimation methodology for Lebanon. The analyses are based on instantaneous flood peak data for the whole country, and cover the period from the start of observations in the 1930s to the start of the civil war in the mid-1970s. Three main flood-generating zones are identified, and regional flood growth curves are derived for each zone using the Generalized Extreme Value distribution fitted by probability-weighted moments. Typical parameter values are presented, together with regression coefficients for estimating the mean annual flood. Based on this work, several recommendations are made on future data collection and the analysis requirements for developing a national flood estimation methodology for Lebanon.
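The GEV-by-PWM fitting used for such regional growth curves can be sketched at-site as follows. This is an illustrative Python sketch, not the paper's procedure: it uses Hosking's widely cited rational approximation for the shape parameter, and the names and test data are ours.

```python
import math
import numpy as np

def gev_pwm_fit(x):
    """Fit the GEV distribution by probability weighted moments /
    L-moments. The shape k uses Hosking's approximation
    k = 7.8590 c + 2.9554 c^2 with c = 2/(3 + t3) - ln 2 / ln 3
    (k > 0: bounded upper tail; k < 0: heavy tail; valid for k != 0)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) * x) / (n * (n - 1))
    b2 = np.sum((i - 1) * (i - 2) * x) / (n * (n - 1) * (n - 2))
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    t3 = l3 / l2                               # L-skewness
    c = 2.0 / (3.0 + t3) - math.log(2.0) / math.log(3.0)
    k = 7.8590 * c + 2.9554 * c ** 2
    alpha = l2 * k / ((1.0 - 2.0 ** (-k)) * math.gamma(1.0 + k))
    xi = l1 - alpha * (1.0 - math.gamma(1.0 + k)) / k
    return xi, alpha, k

def gev_quantile(xi, alpha, k, T):
    """T-year event for non-exceedance probability F = 1 - 1/T (k != 0)."""
    return xi + alpha * (1.0 - (-math.log(1.0 - 1.0 / T)) ** k) / k
```

Dividing the fitted quantiles by the mean annual flood l1 gives the dimensionless growth curve used in index-flood regionalization.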

10.
Conventional flood frequency analysis is concerned with providing an unbiased estimate of the magnitude of the design flow exceeded with probability p, but sampling uncertainties imply that such estimates will, on average, be exceeded more frequently. An alternative approach is therefore to derive an estimator which gives an unbiased estimate of flow risk: the difference between the two magnitudes reflects uncertainties in parameter estimation. An empirical procedure has been developed to estimate the mean true exceedance probabilities of conventional estimates made using a GEV distribution fitted by probability weighted moments, and adjustment factors have been determined to enable the estimation of flood magnitudes exceeded, on average, with the desired probability.

11.
Abstract

The applicability of the log-Gumbel (LG) and log-logistic (LL) probability distributions in hydrological studies is critically examined under real conditions, where the assumed distribution differs from the true one. The set of alternative distributions consists of five two-parameter distributions with zero lower bound, including LG and LL as well as the lognormal (LN), linear diffusion analogy (LD) and gamma (Ga) distributions. The log-Gumbel distribution is considered as both a false and a true distribution. The model error of upper quantiles and of the first two moments is analytically derived for three estimation methods: the method of moments (MOM), the linear moments method (LMM) and the maximum likelihood method (MLM). These estimation methods are used as methods of approximation of one distribution by another distribution. As found in the first paper of this two-part series, MLM turns out to be the worst method if the assumed LG or LL distribution is not the true one: it produces a huge bias in upper quantiles, at least one order of magnitude higher than that of the other two methods. However, the reverse case, i.e. acceptance of LN, LD or Ga as a hypothetical distribution while the LG or LL distribution is the true one, gives an MLM bias of reasonable magnitude in upper quantiles. Therefore, one should avoid choosing the LG and LL distributions in flood frequency analysis, especially if MLM is to be applied.

12.
The cross-entropy method with fractile constraints has been developed to estimate a random variable when the data are a set of independent observations of the variable. The method can claim several advantages over existing methods. It uses a reference distribution, like the prior distribution in Bayesian analysis, and likewise generates a posterior distribution. The method is of interest, in particular, because it satisfies two fundamental requirements for self-consistency in the analysis of a probabilistic system based on data: a principle of invariance and a principle of data monotonicity. The method is applied to flood analysis, and the robustness of the minimum cross-entropy method is compared with that of other methods: the method of moments and maximum likelihood.

13.
The index flood procedure coupled with the L-moments method is applied to the annual flood peak data taken at all stream-gauging stations in Turkey having at least 15-year-long records. First, screening of the data is done based on the discordancy measure (Di) in terms of the L-moments. Homogeneity of the total geographical area of Turkey is tested using the L-moments based heterogeneity measure, H, computed on 500 simulations generated using the four-parameter Kappa distribution. The L-moments analysis of the recorded annual flood peak data at 543 gauged sites indicates that Turkey as a whole is hydrologically heterogeneous, and 45 of the 543 gauged sites are discordant; these are discarded from further analyses. The catchment areas of the 543 sites vary from 9.9 to 75 121 km² and their mean annual peak floods vary from 1.72 to 3739.5 m³ s⁻¹. The probability distributions used in the analyses, whose parameters are computed by the L-moments method, are the generalized extreme value (GEV), generalized logistic (GLO), generalized normal (GNO), Pearson type III (PE3), generalized Pareto (GPA) and five-parameter Wakeby (WAK) distributions. Based on the L-moment ratio diagrams and the |Z^dist|-statistic criteria, the GEV distribution is identified as the robust distribution for the study area (498 gauged sites). Hence, for estimation of flood magnitudes of various return periods in Turkey, a regional flood frequency relationship is developed using the GEV distribution. Next, the quantiles computed at all 543 gauged sites by the GEV and Wakeby distributions are compared with the observed values of the same probability based on two criteria: mean absolute relative error and the coefficient of determination. The results of these comparisons indicate that both the GEV and Wakeby distributions, with parameters computed by the L-moments method, are adequate for predicting quantile estimates. Copyright © 2011 John Wiley & Sons, Ltd.
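The sample L-moments underlying this kind of discordancy and heterogeneity screening are computed from unbiased PWMs. A minimal Python sketch under our own naming (the Di and H measures themselves, which require multi-site statistics, are not shown):

```python
import numpy as np

def sample_l_moments(x):
    """Unbiased sample probability weighted moments b0..b3 and the
    derived L-moments and L-moment ratios: returns
    (lambda1, lambda2, t, t3, t4) where t = L-CV, t3 = L-skewness,
    t4 = L-kurtosis."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) * x) / (n * (n - 1))
    b2 = np.sum((i - 1) * (i - 2) * x) / (n * (n - 1) * (n - 2))
    b3 = np.sum((i - 1) * (i - 2) * (i - 3) * x) / (n * (n - 1) * (n - 2) * (n - 3))
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l2 / l1, l3 / l2, l4 / l2
```

For a perfectly uniform, symmetric sample the L-skewness and L-kurtosis are both zero, which makes a convenient sanity check.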

14.
Regression-based regional flood frequency analysis (RFFA) methods are widely adopted in hydrology. This paper compares two regression-based RFFA methods using a Bayesian generalized least squares (GLS) modelling framework: the quantile regression technique (QRT) and the parameter regression technique (PRT). In this study, the QRT focuses on the development of prediction equations for flood quantiles in the range of 2- to 100-year average recurrence intervals (ARI), while the PRT develops prediction equations for the first three moments of the log-Pearson Type 3 (LP3) distribution, namely the mean, standard deviation and skew of the logarithms of the annual maximum flows; these regional parameters are then used to fit the LP3 distribution to estimate the desired flood quantiles at a given site. It is shown that, using a method similar to stepwise regression and employing a number of statistics such as the model error variance, average variance of prediction, Bayesian information criterion and Akaike information criterion, the best set of explanatory variables in the GLS regression can be identified. A range of statistics and diagnostic plots have been adopted to evaluate the regression models. The method has been applied to 53 catchments in Tasmania, Australia. It has been found that catchment area and design rainfall intensity are the most important explanatory variables in predicting flood quantiles using the QRT. For the PRT, a total of four explanatory variables were adopted for predicting the mean, standard deviation and skew. The developed regression models satisfy the underlying model assumptions quite well; importantly, no outlier sites are detected in the regression diagnostics of the adopted regression equations.
Based on 'one-at-a-time cross-validation' and a number of evaluation statistics, it has been found that for Tasmania the QRT provides more accurate flood quantile estimates for the higher ARIs, while the PRT provides relatively better estimates for the smaller ARIs. The RFFA techniques presented here can easily be adapted to other Australian states and countries to derive more accurate regional flood predictions. Copyright © 2011 John Wiley & Sons, Ltd.

15.
16.
Changes in river flow regimes have resulted in a surge in the number of methods of non-stationary flood frequency analysis. A common assumption is a time-invariant distribution function with time-dependent location and scale parameters, while the shape parameters remain time-invariant. Here, instead of the location and scale parameters of the distribution, the mean and standard deviation are used. We analyse the accuracy of two methods with respect to the estimation of the time-dependent first two moments, time-invariant skewness and time-dependent upper quantiles. The method of maximum likelihood (ML) with a time covariate is confronted with the Two-Stage (TS) method (combining Weighted Least Squares and L-moments techniques). The comparison is made by Monte Carlo simulations. Assuming a parent distribution that ensures the asymptotic superiority of the ML method, the Generalized Extreme Value distribution with various values of the linearly time-varying first two moments, constant skewness and various time-series lengths is considered. Analysis of the results indicates the superiority of the TS method in all analysed aspects. Moreover, the estimates from the TS method are more resistant to the choice of probability distribution, as demonstrated by case studies of Polish rivers.

17.
Abstract

Flood frequency analysis can be carried out using two types of flood peak series, i.e. the annual maximum (AM) and peaks-over-threshold (POT) series. This study presents a comparison of the results of both methods for data from the Litija 1 gauging station on the Sava River in Slovenia. Six commonly used distribution functions and three different parameter estimation techniques were considered in the AM analyses. The results showed a better performance for the method of L-moments (ML) compared with conventional moments and maximum likelihood estimation. The combination of the ML and the log-Pearson type 3 distribution gave the best results of all the considered AM cases. The POT method gave better results than the AM method. The binomial distribution did not offer any noticeable improvement over the Poisson distribution for modelling the annual number of exceedances above the threshold.
Editor D. Koutsoyiannis

Citation Bezak, N., Brilly, M., and Šraj, M., 2014. Comparison between the peaks-over-threshold method and the annual maximum method for flood frequency analysis. Hydrological Sciences Journal, 59 (5), 959–977.
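POT sampling starts from extracting independent peaks above a threshold. A simplified Python sketch under our own naming (operational POT studies such as this one typically add further independence conditions, e.g. a minimum inter-event time and a flow-drop requirement between peaks):

```python
import numpy as np

def pot_series(flows, threshold):
    """Extract a peaks-over-threshold (POT) series: each contiguous run
    of flow above the threshold is treated as one flood event, and each
    event contributes its maximum discharge."""
    flows = np.asarray(flows, dtype=float)
    above = flows > threshold
    peaks = []
    start = None
    for t, a in enumerate(above):
        if a and start is None:
            start = t                      # event begins
        elif not a and start is not None:
            peaks.append(flows[start:t].max())  # event ends: keep its peak
            start = None
    if start is not None:                  # series ends while above threshold
        peaks.append(flows[start:].max())
    return np.array(peaks)
```

Unlike the AM series, which keeps exactly one value per year, the POT series keeps every independent exceedance, so it generally carries more information about flood behaviour.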

18.
Abstract

Flood frequency estimation is crucial in both engineering practice and hydrological research. Regional analysis of flood peak discharges is used to obtain more accurate estimates of flood quantiles in ungauged or poorly gauged catchments. It is based on the identification of homogeneous zones, where the probability distribution of annual maximum peak flows is invariant except for a scale factor represented by an index flood. The numerous applications of this method have highlighted that obtaining accurate estimates of the index flood is a critical step, especially in ungauged or poorly gauged sections, where direct estimation by the sample mean of the annual flood series (AFS) is not possible, or is inaccurate. In such cases, indirect methods have to be used. Most indirect methods are based on empirical relationships that link the index flood to hydrological, climatological and morphological catchment characteristics, developed by means of multi-regression analysis or a simplified lumped representation of rainfall-runoff processes. The limits of these approaches become increasingly evident as the size and spatial variability of the catchment increase. In these cases, the use of a spatially distributed, physically based hydrological model and time-continuous simulation of discharge can improve estimation of the index flood. This work presents an application of the FEST-WB model for the reconstruction of 29 years of hourly streamflows for an Alpine snow-fed catchment in northern Italy, to be used for index flood estimation. To extend the length of the simulated discharge time series, meteorological forcings given by daily precipitation and temperature at automatic ground weather stations are disaggregated to hourly resolution and then fed to FEST-WB. The accuracy of the method in estimating the index flood as a function of the length of the simulated series is discussed, and suggestions for the use of the methodology are provided.
Editor D. Koutsoyiannis

19.
The generalized gamma (GG) distribution has a density function that can take on many of the forms commonly encountered in hydrologic applications. This fact has led many authors to study the properties of the distribution and to propose various estimation techniques (method of moments, mixed moments, maximum likelihood, etc.). We discuss some of the most important properties of this flexible distribution and present a flexible method of parameter estimation, called the generalized method of moments (GMM), which combines any three moments of the GG distribution. The main advantage of this general method is that it has many of the previously proposed methods of estimation as special cases. We also give a general formula for the variance of the T-year event X_T obtained by the GMM, along with a general formula for the parameter estimates and for the covariances and correlation coefficients between any pair of such estimates. By applying the GMM and carefully choosing the order of the moments used in the estimation, one can significantly reduce the variance of T-year events for the range of return periods of interest.

20.
Similarities and differences between linear flood routing modelling (LFRM) and flood frequency analysis (FFA) techniques are presented. The moment matching used in LFRM to approximate the impulse response function (IRF) was applied in FFA to derive the asymptotic bias caused by a false distribution assumption. Proceeding in this way, other estimation methods were used as approximation methods in FFA to derive the asymptotic bias. Using simulation experiments, the above investigation was extended to evaluate the sampling bias. As a feedback, the maximum likelihood method (MLM) can be used for approximating the linear channel response (LCR) by the IRFs of conceptual models. Impulse responses of the convective diffusion and kinematic diffusion models were applied and developed as FFA models. Based on the kinematic diffusion LFRM, the equivalence of the estimation problems of a discrete-continuous distribution and a single-censored sample is shown both for the method of moments (MOM) and for the MLM; hence, the applicability of MOM is extended to the case of censored samples. Owing to the complexity and non-linearity of hydrological systems and the resulting processes, the use of simple models is often questionable, and the rationale of simple models is discussed. The problems of model choice and overparameterization are common to mathematical modelling and flood frequency modelling. Some results on the use of simple models in stationary FFA are presented, and the problems of model discrimination are then discussed. Finally, a conjunction of linear stochastic processes and LFRM is presented. The influence of river courses on the stochastic properties of the runoff process is shown by combining Gaussian input with the LCR of the simplified Saint Venant model. It is shown that both LFRM and FFA can benefit from a classification of the ways in which they have developed. Copyright © 2006 John Wiley & Sons, Ltd.
