Similar documents
20 similar documents found.
1.
Abstract

Pooling of flood data is widely used to provide a framework for estimating design floods by the Index Flood method. Design flood estimation with this approach involves derivation of a growth curve which shows the relationship between XT and the return period T, where XT = QT/QI and QI is the index flood at the site of interest. An implicit assumption of the Index Flood pooling procedure is that the XT–T relationship is the same at all sites in a homogeneous pooling group, although this assumption is generally violated to some extent in practice, i.e. some degree of heterogeneity exists. In fact, the homogeneity criterion is effectively satisfied in only some cases under Irish conditions. In this paper, the performance of index-flood pooling analysis is assessed in the Irish low-CV (coefficient of variation) hydrology context, taking heterogeneity into account. It is found that the performance of the pooling method is satisfactory provided at least 350 station-years of data are included. It is also found that, in a highly heterogeneous group, it is preferable to have many sites with short record lengths rather than a smaller number of sites with long record lengths. Increased heterogeneity decreases the advantage of pooled estimation over at-site estimation. Only a heterogeneity measure (H1) less than 4.0 can render the pooled estimate of the 100-year flood preferable to the at-site estimate. In moderately to highly heterogeneous regions it is preferable to conduct at-site analysis for estimation of the 100-year flood if the record length at the site concerned exceeds 50 years.

Editor Z.W. Kundzewicz; Associate editor A. Carsteanu

Citation Das, S. and Cunnane, C., 2012. Performance of flood frequency pooling analysis in a low CV context. Hydrological Sciences Journal, 57 (3), 433–444.
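The index-flood idea described in this abstract, scaling each site's annual maxima by its index flood (here the at-site mean), pooling the rescaled data, and reading the growth factor XT off a fitted growth curve, can be sketched as follows. This is a minimal illustration using a Gumbel growth distribution fitted by the method of moments and synthetic records; it is not the pooling scheme of the cited paper.

```python
import numpy as np

def growth_curve_gumbel(site_records, T):
    """Pool AM flood series after scaling each by its index flood
    (the at-site mean), then fit a Gumbel distribution by the method
    of moments and return the growth factor x_T = Q_T / Q_I."""
    pooled = np.concatenate([q / np.mean(q) for q in site_records])
    # Gumbel method-of-moments: scale and location from mean and std
    alpha = np.sqrt(6.0) * np.std(pooled, ddof=1) / np.pi
    xi = np.mean(pooled) - 0.5772 * alpha
    # Gumbel quantile at non-exceedance probability 1 - 1/T
    return xi - alpha * np.log(-np.log(1.0 - 1.0 / T))

# Synthetic pooling group: three sites with proportional loc/scale
rng = np.random.default_rng(1)
sites = [rng.gumbel(10.0 * s, 2.0 * s, size=40) for s in (1, 2, 5)]
x100 = growth_curve_gumbel(sites, 100.0)
```

The T-year flood at the target site would then be QI times the pooled growth factor x100.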

2.
Hydrological Sciences Journal, 2013, 58 (5), 974–991
Abstract

The aim is to build a seasonal flood frequency analysis model and estimate seasonal design floods. The importance of seasonal flood frequency analysis and the advantages of considering seasonal design floods in the derivation of reservoir planning and operating rules are discussed, recognising that seasonal flood frequency models have been in use for over 30 years. A set of non-identical models with non-constant parameters is proposed and developed to describe flows that reflect seasonal flood variation. The peaks-over-threshold (POT) sampling method was used, as it is considered to provide significantly more information on flood seasonality than annual maximum (AM) sampling and to perform better in flood seasonality estimation. The number of exceedances is assumed to follow the Poisson distribution (Po), while the peak exceedances are described by the exponential (Ex) and generalized Pareto (GP) distributions and a combination of both, resulting in three models, viz. Po-Ex, Po-GP and Po-Ex/GP. Their performances are analysed and compared. The Geheyan and Baiyunshan reservoirs were chosen for the case study. The application and statistical experiment results show that each model has its merits and that the Po-Ex/GP model performs best. Use of the Po-Ex/GP model is recommended in seasonal flood frequency analysis for the purpose of deriving reservoir operation rules.
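Under the Po-Ex model named above (Poisson arrivals of threshold exceedances with exponentially distributed excesses), the T-year return level has a well-known closed form, x_T = u + beta * ln(lambda * T). A minimal sketch with hypothetical peak data, not the cited paper's seasonal formulation:

```python
import numpy as np

def po_ex_return_level(exceedances, n_years, threshold, T):
    """T-year flood under the Poisson-exponential (Po-Ex) POT model.
    lam  - mean number of peaks above the threshold per year
    beta - mean excess over the threshold (exponential scale)
    The annual quantile is u + beta * ln(lam * T)."""
    lam = len(exceedances) / n_years
    beta = np.mean(np.asarray(exceedances) - threshold)
    return threshold + beta * np.log(lam * T)

# Hypothetical POT peaks from 4 years of record, threshold 500 m3/s
peaks = [520.0, 610.0, 545.0, 700.0, 580.0, 655.0, 530.0, 760.0]
q100 = po_ex_return_level(peaks, n_years=4.0, threshold=500.0, T=100.0)
```

Here lam = 2 peaks/year and beta = 112.5, giving a 100-year flood of roughly 1096.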

3.
4.
In this paper, a new index is proposed for the selection of the best regional frequency analysis method. First, based on the theory of reliability, the new selective index is developed. The variances of three regional T-year event estimators are then derived. The proposed methodology is applied to an actual watershed. For each regional method, the reliability of various T-year regional estimates is computed. Finally, the reliability-based selective index graph is constructed, from which the best regional method can be determined. In addition, the selection result is compared with that based on the traditional index, root mean square error. The proposed new index is recommended as an alternative to existing indices such as root mean square error, because it considers both the influence of uncertainty and the accuracy of estimates. Copyright © 2003 John Wiley & Sons, Ltd.

5.
Abstract

Flood frequency analysis (FFA) is essential for water resources management. Long flow records improve the precision of estimated quantiles; however, in some cases the sample size at a single location is not sufficient to achieve a reliable estimate of the statistical parameters, so regional FFA is commonly used to decrease the uncertainty of the prediction. In this paper, the bias of several commonly used parameter estimators applied to the generalized extreme value (GEV) distribution, including L-moments, probability weighted moments and maximum likelihood estimation, is evaluated using Monte Carlo simulation. Two bias compensation approaches, one based on the shape parameter and one using all three GEV parameters, are proposed based on the analysis, and the models are then applied to streamflow records in southern Alberta. Compensation efficiency varies among estimators and between compensation approaches. The results overall suggest that compensating for the bias due to the estimator and the short sample size would significantly improve the accuracy of quantile estimation. In addition, at-site FFA can provide reliable estimates from short records when the estimator bias is accounted for appropriately.
Editor D. Koutsoyiannis; Associate editor Sheng Yue
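The Monte Carlo bias-evaluation idea in this abstract can be sketched generically: draw many samples of a given size from a known parent, fit the distribution, and compare the mean fitted quantile with the true one. For brevity this sketch uses the Gumbel distribution (the shape-zero limit of the GEV) with a method-of-moments fit, not the estimators or compensation schemes of the cited paper:

```python
import numpy as np

def gumbel_q(xi, alpha, T):
    # Gumbel quantile for return period T
    return xi - alpha * np.log(-np.log(1.0 - 1.0 / T))

def quantile_bias(xi, alpha, n, T, n_rep=5000, seed=0):
    """Monte Carlo estimate of the relative bias of the T-year
    method-of-moments quantile for samples of size n."""
    rng = np.random.default_rng(seed)
    true_q = gumbel_q(xi, alpha, T)
    est = np.empty(n_rep)
    for i in range(n_rep):
        x = rng.gumbel(xi, alpha, size=n)
        a_hat = np.sqrt(6.0) * np.std(x, ddof=1) / np.pi
        xi_hat = np.mean(x) - 0.5772 * a_hat
        est[i] = gumbel_q(xi_hat, a_hat, T)
    return (np.mean(est) - true_q) / true_q

b30 = quantile_bias(100.0, 30.0, n=30, T=100.0)    # short record
b100 = quantile_bias(100.0, 30.0, n=100, T=100.0)  # long record
```

A compensation step would subtract the simulated bias from the at-site estimate, which is the spirit of the approaches compared in the abstract.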

6.
This study proposes an improved nonstationary model for flood frequency analysis by investigating the relationship between flood peak and flood volume, using the Three Gorges Dam (TGD), China, for verification. First, the generalized additive model for location, scale and shape (GAMLSS) is used as the prior distribution. Then, under Bayesian theory, the prior distribution is updated using the conditional distribution, which is derived from the copula function. The results show that the improvement of the proposed model is significant compared with the GAMLSS-based prior distribution alone. Meanwhile, the choice of prior distribution has a significant effect on the degree of improvement. For applications to the TGD, the nonstationary model can appreciably increase the engineering management benefits and reduce the perceived risks of large floods. This study provides guidance for the dynamic management of hydraulic engineering under nonstationary conditions.

7.
Many civil infrastructures are located near the confluence of two streams, where they may be subject to inundation by high flows from either stream or both. These infrastructures, such as highway bridges, are designed to meet specified performance objectives for floods of a specified return period (e.g. the 100-year flood). Because the flooding of structures on one stream can be affected by high flows on the other stream, knowing the relationship between the coincident exceedance probabilities of the confluent stream pair is important in hydrological engineering practice. Currently, the National Flood Frequency Program (NFF), which was developed by the US Geological Survey (USGS) and is based on regional analysis, is probably the most popular model for flood estimation at ungauged sites and could be employed to estimate flood probabilities at confluence points. The need for improved infrastructure design at such sites has motivated renewed interest in the development of more rigorous joint probability distributions of the coincident flows. To accomplish this, a practical procedure is needed to determine the crucial bivariate distributions of design flows at stream confluences. The copula method provides a way to construct such multivariate distribution functions. This paper develops the Copula-based Flood Frequency (COFF) method at confluence points with any type of marginal distribution via the use of Archimedean copulas and their dependence parameters. The practical implementation was assessed and tested against the standard NFF approach in a case study on the Des Moines River in Iowa. Monte Carlo simulations demonstrated the success of the generalized copula-based joint distribution algorithm. Copyright © 2009 John Wiley & Sons, Ltd.
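The coincident-exceedance calculation at a confluence can be sketched with an Archimedean copula. Given marginal non-exceedance probabilities u and v on the two tributaries, the probability that both are exceeded follows by inclusion-exclusion from the copula. This sketch uses the Gumbel-Hougaard family with an arbitrary dependence parameter; the COFF method itself is not reproduced here:

```python
import numpy as np

def gumbel_copula(u, v, theta):
    """Gumbel-Hougaard Archimedean copula C(u, v); theta >= 1
    (theta = 1 is independence)."""
    s = (-np.log(u)) ** theta + (-np.log(v)) ** theta
    return np.exp(-s ** (1.0 / theta))

def joint_exceedance(u, v, theta):
    """P(U > u, V > v) by inclusion-exclusion on the copula."""
    return 1.0 - u - v + gumbel_copula(u, v, theta)

# Probability that both tributaries exceed their 100-year flood in
# the same year, under moderate upper-tail dependence (theta = 2)
p = joint_exceedance(0.99, 0.99, theta=2.0)
```

With theta = 2 the joint probability (about 0.006) lies well above the independence value of 0.0001, which is exactly why the dependence between confluent flows matters for design.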

8.
The objective of the study was to compare the relative accuracy of three methodologies of regional flood frequency analysis in areas with limited flood records. Thirty-two drainage basins of different characteristics, located mainly in the southwest region of Saudi Arabia, were selected for the study. In the first methodology, region curves were developed and used, together with the mean annual flood estimated from the characteristics of the drainage basin, to estimate flood flows at a location in the basin. The second methodology was to fit probability distribution functions to annual maximum rainfall intensity in a drainage basin; the best-fitting probability function was then used together with common peak flow models to estimate the annual maximum flood flows in the basin. In the third methodology, duration reduction curves were developed and used together with the average flood flow in a basin to estimate the peak flood flows in the basin. The results obtained from each methodology were compared with the flood records of the selected stations using three statistical measures of goodness-of-fit. The first methodology was found to be best when the record length at a drainage basin is short, and the second methodology also produced satisfactory results. Thus, in areas where data are insufficient and/or unreliable, use of the first methodology is recommended.

9.
Flood frequency analysis is usually based on the fitting of an extreme value distribution to the local streamflow series. However, when the local data series is short, frequency analysis results become unreliable. Regional frequency analysis is a convenient way to reduce the estimation uncertainty. In this work, we propose a regional Bayesian model for short record length sites. This model is less restrictive than the index flood model while preserving the formalism of “homogeneous regions”. The performance of the proposed model is assessed on a set of gauging stations in France. The accuracy of quantile estimates as a function of the degree of homogeneity of the pooling group is also analysed. The results indicate that the regional Bayesian model outperforms the index flood model and local estimators. Furthermore, it seems that working with relatively large and homogeneous regions may lead to more accurate results than working with smaller and highly homogeneous regions.

10.
L. Brocca, F. Melone, T. Moramarco. Hydrological Processes, 2011, 25 (18), 2801–2813
Nowadays, many rainfall-runoff (RR) models are available in the scientific literature, ranging from simple ones, with a limited number of parameters, to highly complex ones, with many parameters. The selection of the best structure and parameterisation for a model is therefore not straightforward, as it depends on a number of factors: climatic conditions, catchment characteristics, temporal and spatial resolution, model objectives, etc. In this study, the structure of a continuous semi-distributed RR model named MISDc (‘Modello Idrologico Semi-Distribuito in continuo’), developed for flood simulation in the Upper Tiber River (central Italy), is presented. Most notably, the methodology employed to detect the processes most relevant to the modelling of high floods, and hence to build the model structure and identify its parameters, is described. For this purpose, an intensive programme of soil moisture and runoff monitoring in experimental catchments was carried out, allowing the derivation of a parsimonious and reliable continuous RR model operating at an hourly (or finer) time scale. In particular, the important role of antecedent wetness conditions in determining the catchment hydrological response is emphasized. The application of MISDc both to design flood estimation and to flood forecasting is reported here, demonstrating its reliability as well as its computational efficiency, another important factor in hydrological practice. As far as the flood forecasting applications are concerned, only the accuracy of the model in reproducing discharge hydrographs, assuming rainfall is correctly known throughout the event, is investigated in depth. MISDc has been implemented in the framework of Civil Protection activities for the Upper Tiber River basin. Copyright © 2011 John Wiley & Sons, Ltd.

11.
Despite some theoretical advantages of peaks-over-threshold (POT) series over annual maximum (AMAX) series, some practical aspects of flood frequency analysis using AMAX or POT series are still subject to debate. Only minor attention has been given to the POT method in the context of pooled frequency analysis. The objective of this research is to develop a framework to promote the implementation of pooled frequency modelling based on POT series. The framework benefits from a semi-automated threshold selection method. This study introduces a formalized and effective approach to construct homogeneous pooling groups. The proposed framework also offers means to compare the performance of pooled flood estimation based on AMAX or POT series. An application of the framework is presented for a large collection of Canadian catchments. The proposed POT pooling technique generally improved flood quantile estimation in comparison to the AMAX pooling scheme, and achieved smaller uncertainty associated with the quantile estimates.
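Semi-automated threshold selection for POT modelling typically builds on diagnostics such as the mean residual life: for a generalized Pareto tail, the mean excess is roughly linear in the threshold. The cited framework's exact method is not reproduced here; this is a minimal numpy sketch with synthetic exponential-tailed peaks, for which the mean excess should be flat:

```python
import numpy as np

def mean_residual_life(peaks, thresholds):
    """Mean excess over each candidate threshold. Approximate
    linearity of this curve in u is the usual graphical basis for
    (semi-)automated threshold selection."""
    peaks = np.asarray(peaks, dtype=float)
    return np.array([peaks[peaks > u].mean() - u for u in thresholds])

# Synthetic peaks with an exponential tail (scale 50 above 200):
# the mean excess is then ~50 at every threshold (memorylessness)
rng = np.random.default_rng(2)
peaks = 200.0 + rng.exponential(50.0, size=500)
mrl = mean_residual_life(peaks, thresholds=np.array([210.0, 240.0, 270.0]))
```

A stability criterion on this curve (or on fitted GP parameters across thresholds) is one common way to automate the choice.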

12.
The parametric method of flood frequency analysis (FFA) involves fitting a probability distribution to the observed flood data at the site of interest. When the record at a given site is relatively long and the flood data exhibit skewness, a distribution with three or more parameters, such as the log-Pearson type 3, is often used in FFA. This paper examines the suitability of the five-parameter Wakeby distribution for annual maximum flood data in eastern Australia. We adopt a Monte Carlo simulation technique to select an appropriate plotting position formula and to derive a probability plot correlation coefficient (PPCC) test statistic for the Wakeby distribution. The Weibull plotting position formula has been found to be the most appropriate for the Wakeby distribution. Regression equations for the PPCC test statistics associated with the Wakeby distribution at different levels of significance have been derived. Furthermore, a power study to estimate the rejection rate associated with the derived PPCC test statistics has been undertaken. Finally, an application using annual maximum flood series data from 91 catchments in eastern Australia is presented. Results show that the developed regression equations can be used with a high degree of confidence to test whether the Wakeby distribution fits the annual maximum flood series data at a given station. The methodology developed in this paper can be adapted to other probability distributions and other study areas. Copyright © 2014 John Wiley & Sons, Ltd.
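The PPCC statistic itself is simple: the correlation between the ordered sample and the distribution quantiles evaluated at the plotting positions (here the Weibull positions p_i = i/(n+1), as recommended above). The Wakeby quantile function is available in closed form; the parameter values below are purely illustrative, not fitted to any real record:

```python
import numpy as np

def wakeby_quantile(F, xi, a, b, c, d):
    """Explicit Wakeby quantile function:
    x(F) = xi + (a/b)(1-(1-F)^b) - (c/d)(1-(1-F)^-d)."""
    return xi + (a / b) * (1 - (1 - F) ** b) - (c / d) * (1 - (1 - F) ** (-d))

def ppcc(sample, quantile_fn):
    """Probability plot correlation coefficient with the Weibull
    plotting position p_i = i / (n + 1)."""
    x = np.sort(sample)
    n = len(x)
    p = np.arange(1, n + 1) / (n + 1.0)
    return np.corrcoef(x, quantile_fn(p))[0, 1]

# Illustrative Wakeby parameters (hypothetical, not from the paper)
pars = dict(xi=0.0, a=5.0, b=2.0, c=1.0, d=0.2)
rng = np.random.default_rng(7)
u = rng.uniform(size=60)
sample = wakeby_quantile(u, **pars)  # exact draws from this Wakeby
r = ppcc(sample, lambda p: wakeby_quantile(p, **pars))
```

Because the sample is drawn from the hypothesized distribution, r should be close to 1; the test in the paper rejects the Wakeby hypothesis when r falls below a critical value from the derived regression equations.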

13.
Abstract

This paper describes a first attempt at developing a regional flood estimation methodology for Lebanon. The analyses are based on instantaneous flood peak data for the whole country, and cover the period from the start of observations in the 1930s to the start of the civil war in the mid-1970s. Three main flood-generating zones are identified, and regional flood growth curves are derived for each zone using the Generalized Extreme Value distribution fitted by probability-weighted moments. Typical parameter values are presented, together with regression coefficients for estimating the mean annual flood. Based on this work, several recommendations are made on the future data collection and analysis requirements to develop a national flood estimation methodology for Lebanon.
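Fitting the GEV by probability-weighted moments, as used for the growth curves above, follows the standard closed-form estimators of Hosking, Wallis and Wood (1985). A self-contained sketch on synthetic annual maxima (the data are hypothetical):

```python
import math
import numpy as np

def gev_pwm(x):
    """GEV parameters (xi, alpha, k; Hosking's convention, k > 0 for
    a bounded upper tail) from probability-weighted moments."""
    x = np.sort(x)
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    c = (2 * b1 - b0) / (3 * b2 - b0) - math.log(2) / math.log(3)
    k = 7.8590 * c + 2.9554 * c ** 2          # Hosking's approximation
    alpha = (2 * b1 - b0) * k / (math.gamma(1 + k) * (1 - 2 ** (-k)))
    xi = b0 + alpha * (math.gamma(1 + k) - 1) / k
    return xi, alpha, k

def gev_quantile(F, xi, alpha, k):
    # GEV quantile at non-exceedance probability F
    return xi + alpha * (1 - (-math.log(F)) ** k) / k

rng = np.random.default_rng(3)
am = 50.0 + 12.0 * rng.gumbel(size=80)   # synthetic annual maxima
xi, alpha, k = gev_pwm(am)
q100 = gev_quantile(0.99, xi, alpha, k)
```

Since the synthetic data are Gumbel-distributed, the fitted shape k should come out near zero and the 100-year quantile near xi + 4.6 * alpha.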

14.
In this article, an approach using residual kriging (RK) in physiographical space is proposed for regional flood frequency analysis. The physiographical space is constructed from the physiographical/climatic characteristics of gauged basins by means of canonical correlation analysis (CCA). This approach is a modified version of the original method based on ordinary kriging (OK), intended to handle effectively any spatial trends in the hydrological variables over the physiographical space. In this approach, the trend is first quantified and removed from the hydrological variable by a quadratic spatial regression; OK is then applied to the regression residuals. The final estimate of a specific quantile at an ungauged station is the sum of the spatial regression estimate and the kriged residual. To evaluate the performance of the proposed method, a cross-validation procedure is applied. Results indicate that RK in the CCA physiographical space leads to more efficient estimates of regional flood quantiles than the original approach and a straightforward regression-based estimator. Copyright © 2010 John Wiley & Sons, Ltd.

15.
Abstract

Flood frequency analysis can be carried out using two types of flood peak series, i.e. the annual maximum (AM) and peaks-over-threshold (POT) series. This study presents a comparison of the results of both methods for data from the Litija 1 gauging station on the Sava River in Slovenia. Six commonly used distribution functions and three different parameter estimation techniques were considered in the AM analyses. The results showed a better performance for the method of L-moments (ML) compared with conventional moments and maximum likelihood estimation. The combination of the ML and the log-Pearson type 3 distribution gave the best results of all the AM cases considered. The POT method gave better results than the AM method. The binomial distribution did not offer any noticeable improvement over the Poisson distribution for modelling the annual number of exceedances above the threshold.
Editor D. Koutsoyiannis

Citation Bezak, N., Brilly, M., and Šraj, M., 2014. Comparison between the peaks-over-threshold method and the annual maximum method for flood frequency analysis. Hydrological Sciences Journal, 59 (5), 959–977.
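Extracting the two series compared above from a daily discharge record can be sketched as follows. The POT declustering here is deliberately simple (one peak per run of consecutive days above the threshold); operational studies, including the one cited, additionally enforce independence criteria between peaks:

```python
import numpy as np

def am_series(daily_q, days_per_year=365):
    """Annual maximum (AM) series from a daily discharge record."""
    q = np.asarray(daily_q, dtype=float)
    n_years = len(q) // days_per_year
    return q[:n_years * days_per_year].reshape(n_years, days_per_year).max(axis=1)

def pot_series(daily_q, threshold):
    """Peaks-over-threshold (POT) series: the maximum of each run of
    consecutive days above the threshold (a very simple declustering)."""
    q = np.asarray(daily_q, dtype=float)
    a = np.r_[False, q > threshold, False]      # pad so edges close runs
    starts = np.flatnonzero(a[1:] & ~a[:-1])    # first day of each run
    ends = np.flatnonzero(~a[1:] & a[:-1])      # day after each run
    return np.array([q[s:e].max() for s, e in zip(starts, ends)])

# Tiny hypothetical record: three exceedance clusters above 5
daily = np.array([1, 2, 8, 9, 3, 1, 7, 1, 1, 10, 11, 12, 1], dtype=float)
pot = pot_series(daily, threshold=5.0)   # -> [9., 7., 12.]
```

The POT series retains three peaks from this toy record where an AM series of the same span would keep only the largest, which is the information advantage the comparison above turns on.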

16.
Asymmetric copula in multivariate flood frequency analysis
Univariate flood frequency analysis is widely used in hydrological studies, and often only the flood peak or flood volume is statistically analysed. A more complete analysis requires the three main characteristics of a flood event, i.e. peak, volume and duration. To fully understand these variables and their relationships, a multivariate statistical approach is necessary. The main aim of this paper is to define the trivariate probability density and cumulative distribution functions. When the joint distribution is known, it is possible to define the bivariate distribution of volume and duration conditioned on the peak discharge; consequently, volume–duration pairs statistically linked to peak values become available. The authors build the trivariate joint distribution of flood event variables using fully nested, or asymmetric, Archimedean copula functions. They describe the properties of this copula class and perform extensive simulations to highlight differences from the well-known symmetric Archimedean copulas. They apply the asymmetric distributions to observed flood data and compare the results with those obtained using distributions built with symmetric copulas and the standard Gumbel logistic model.
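A fully nested (asymmetric) trivariate Archimedean copula couples the most dependent pair first and then couples the result with the third variable: C(u, v, w) = C_outer(C_inner(u, v), w), with the inner dependence parameter at least as large as the outer one. A minimal sketch with the Gumbel-Hougaard generator and arbitrary illustrative parameters:

```python
import numpy as np

def gumbel2(u, v, theta):
    """Bivariate Gumbel-Hougaard copula, theta >= 1."""
    s = (-np.log(u)) ** theta + (-np.log(v)) ** theta
    return np.exp(-s ** (1.0 / theta))

def nested_gumbel3(u, v, w, theta_in, theta_out):
    """Fully nested (asymmetric) trivariate Archimedean copula:
    the inner pair (u, v) is coupled with the stronger dependence
    theta_in, then the result with w at theta_out. The nesting
    condition requires theta_in >= theta_out >= 1."""
    return gumbel2(gumbel2(u, v, theta_in), w, theta_out)

# e.g. u, v, w = marginal CDF values of peak, volume and duration
c = nested_gumbel3(0.9, 0.8, 0.7, theta_in=3.0, theta_out=1.5)
```

By construction the value lies between the independence copula u*v*w and the comonotone bound min(u, v, w), and, unlike a symmetric trivariate Archimedean copula, the three pairwise dependencies need not be equal.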

17.
To explore the aftershock occurrence process of the 2008 Wenchuan strong earthquake, the spatio-temporal point pattern analysis method is employed to study the sequences of aftershocks with magnitude M≥4.0, M≥4.5 and M≥5.0. It is found that these data exhibit spatio-temporal clustering on a certain distance scale and a certain time scale. In particular, the space-time interaction strengthens markedly when the distance is less than 60 km and the time is less than 260 h for the first two aftershoc...

18.
Abstract

A new technique is developed for identifying groups for regional flood frequency analysis. The technique uses a clustering algorithm as a starting point for partitioning the collection of catchments. The groups formed using the clustering algorithm are subsequently revised to improve the regional characteristics based on three requirements that are defined for effective groups. The result is overlapping groups that can be used to estimate extreme flow quantiles for gauged or ungauged catchments. The technique is applied to a collection of catchments from India and the results indicate that regions with the desired characteristics can be identified using the technique. The use of the groups for estimating extreme flow quantiles is demonstrated for three example sites.
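The clustering step used as a starting point above can be sketched with plain k-means on standardized catchment attributes. The attribute values below are entirely hypothetical, and the revision step that produces the final overlapping groups is not reproduced:

```python
import numpy as np

def kmeans(X, k, n_iter=20):
    """Plain k-means on standardized attributes; centres are seeded
    deterministically from evenly spaced records for reproducibility."""
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    idx = np.linspace(0, len(X) - 1, k).astype(int)
    centres = X[idx].copy()
    for _ in range(n_iter):
        # squared Euclidean distance of every point to every centre
        d = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return labels

# Hypothetical attributes: area (km2), mean annual rainfall (mm), slope
attrs = np.array([[120, 900, 0.02], [150, 950, 0.03], [110, 880, 0.02],
                  [900, 1600, 0.10], [850, 1700, 0.12], [950, 1650, 0.11]])
labels = kmeans(attrs, k=2)
```

In a regionalization workflow the resulting labels would then be checked against homogeneity requirements and revised, as the abstract describes.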

19.
This study analyses the differences in significant trends in magnitude and frequency of floods detected in annual maximum flood (AMF) and peak-over-threshold (POT) flood peak series for the period 1965–2005. Flood peaks are identified from European daily discharge data using a baseflow-based algorithm, and significant trends in the AMF series are compared with those in the POT series, derived for six different exceedance thresholds. The results show that more trends in flood magnitude are detected in the AMF than in the POT series, and for the POT series more significant trends are detected in flood frequency than in flood magnitude. Spatially coherent patterns of significant trends are detected, which are further investigated by stratifying the results into five regions based on catchment and hydro-climatic characteristics. All data and tools used in this study are open-access and the results are fully reproducible.
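Trend significance in flood peak series is conventionally assessed with the nonparametric Mann-Kendall test; the abstract does not state the test used, so this is offered only as the standard baseline. A minimal numpy implementation without tie correction, applied to a synthetic upward-trending series:

```python
import math
import numpy as np

def mann_kendall(x):
    """Mann-Kendall trend test (no tie correction): returns the S
    statistic and the standard normal score Z (|Z| > 1.96 indicates a
    significant trend at the 5% level)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S = sum over all pairs i < j of sign(x_j - x_i)
    s = int(np.sum(np.sign(x[None, :] - x[:, None])[np.triu_indices(n, 1)]))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# Synthetic 41-year AMF series (1965-2005) with an imposed upward trend
rng = np.random.default_rng(5)
years = np.arange(41)
flows = 400.0 + 3.0 * years + rng.normal(0.0, 20.0, size=41)
s, z = mann_kendall(flows)
```

Applied station by station, the sign and significance of Z is what produces the spatial trend maps the abstract refers to.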

20.
Abstract

Flood frequency analysis based on a set of systematic data and a set of historical floods is applied to several Mediterranean catchments. After identification and collection of data on historical floods, several hydraulic models were constructed to account for geomorphological changes. Recent and historical rating curves were constructed and applied to reconstruct flood discharge series, together with their uncertainty. This uncertainty stems from two types of error: (a) random errors related to the water-level readings; and (b) systematic errors related to over- or under-estimation of the rating curve. A Bayesian frequency analysis is performed to take both sources of uncertainty into account. It is shown that the uncertainty affecting discharges should be carefully evaluated and taken into account in the flood frequency analysis, as it can widen the confidence intervals of the quantiles. The quantiles are found to be consistent with those obtained with empirical methods for two of the four catchments.

Citation Neppel, L., Renard, B., Lang, M., Ayral, P.-A., Coeur, D., Gaume, E., Jacob, N., Payrastre, O., Pobanz, K. & Vinet, F. (2010) Flood frequency analysis using historical data: accounting for random and systematic errors. Hydrol. Sci. J. 55(2), 192–208.
