Similar Articles
20 similar articles found
1.
The index flood method is widely used in regional flood frequency analysis (RFFA) but explicitly relies on the identification of 'acceptably homogeneous regions'. This paper presents an alternative RFFA method, which is particularly useful when acceptably homogeneous regions cannot be identified. The new RFFA method is based on the region of influence (ROI) approach, where a 'local region' can be formed to estimate statistics at the site of interest. The new method is applied here to regionalize the parameters of the log-Pearson 3 (LP3) flood probability model using Bayesian generalized least squares (GLS) regression. The ROI approach is used to reduce the model error arising from the heterogeneity left unaccounted for by the predictor variables in the traditional fixed-region GLS analysis. A case study was undertaken for 55 catchments located in eastern New South Wales, Australia. The selection of predictor variables was guided by minimizing model error. Using an approach similar to stepwise regression, the best model for the LP3 mean was found to use catchment area and the 50-year, 12-h rainfall intensity as explanatory variables, whereas the models for the LP3 standard deviation and skewness had only a constant term for the derived ROIs. Diagnostics based on leave-one-out cross validation show that the regression model assumptions were not inconsistent with the data and, importantly, no genuine outlier sites were identified. Significantly, the ROI GLS approach produced more accurate and consistent results than a fixed-region GLS model, highlighting the superior ability of the ROI approach to deal with heterogeneity. The method is particularly applicable to regions that show a high degree of regional heterogeneity. Copyright © 2014 John Wiley & Sons, Ltd.
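The core mechanics of the approach — forming a region of influence around the target site in predictor space and estimating a regional LP3 parameter by GLS regression — can be sketched as below. This is a minimal illustration, not the paper's Bayesian implementation: the data, the ROI size n_roi, the error covariance and the model-error variance are all assumptions.

```python
import numpy as np

# Hypothetical data: one row per gauged catchment.
rng = np.random.default_rng(1)
area = rng.lognormal(5, 1, 55)             # catchment area (km^2)
i50_12h = rng.lognormal(1, 0.2, 55)        # 50-year, 12-h rainfall intensity (mm/h)
log_mean = 0.8 * np.log10(area) + 1.5 * np.log10(i50_12h) + rng.normal(0, 0.1, 55)

X = np.column_stack([np.ones(55), np.log10(area), np.log10(i50_12h)])
target = 0                                  # index of the site of interest

# 1) Region of influence: the n_roi sites closest to the target in standardized
#    predictor space (n_roi is assumed; the paper selects it by minimizing model error).
n_roi = 20
Z = (X[:, 1:] - X[:, 1:].mean(0)) / X[:, 1:].std(0)
roi = np.argsort(np.linalg.norm(Z - Z[target], axis=1))[:n_roi]

# 2) GLS estimate within the ROI, beta = (X' L^-1 X)^-1 X' L^-1 y, with an assumed
#    diagonal error covariance L = sampling variance + model-error variance.
sampling_var = rng.uniform(0.005, 0.02, 55)     # would come from at-site record lengths
model_error_var = 0.01                          # assumed; Bayesian GLS estimates this
Li = np.linalg.inv(np.diag(sampling_var[roi] + model_error_var))
Xr, yr = X[roi], log_mean[roi]
beta = np.linalg.solve(Xr.T @ Li @ Xr, Xr.T @ Li @ yr)

# Regional prediction of the LP3 mean (in log10 space) at the target site.
pred_log_mean = X[target] @ beta
```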

2.
Sheng Yue, Hydrological Processes, 2001, 15(6): 1033–1045
A gamma distribution is one of the most frequently selected distribution types for hydrological frequency analysis. The bivariate gamma distribution with gamma marginals may be useful for analysing multivariate hydrological events. This study investigates the applicability of a bivariate gamma model with five parameters for describing the joint probabilistic behaviour of multivariate flood events. The parameters are proposed to be estimated from the marginal distributions by the method of moments. The joint distribution, the conditional distribution, and the associated return periods are derived from the marginals. The usefulness of the model is demonstrated by representing the joint probabilistic behaviour between correlated flood peak and flood volume and between correlated flood volume and flood duration in the Madawaska River basin in the province of Quebec, Canada. Copyright © 2001 John Wiley & Sons, Ltd.
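A moment-based fit of the gamma marginals — the first step of such a bivariate model — can be sketched as follows. The sketch stops at the marginals and the sample correlation that drives the dependence parameter; the full five-parameter bivariate density is not reproduced, and the peak/volume arrays are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical correlated flood peaks (m^3/s) and volumes (10^6 m^3).
rng = np.random.default_rng(2)
peak = stats.gamma.rvs(a=3.0, scale=150.0, size=60, random_state=rng)
volume = 0.6 * peak + stats.gamma.rvs(a=2.0, scale=40.0, size=60, random_state=rng)

def gamma_moments(x):
    """Method-of-moments estimates of the gamma shape and scale."""
    m, v = x.mean(), x.var(ddof=1)
    return m**2 / v, v / m          # shape, scale

a_p, b_p = gamma_moments(peak)
a_v, b_v = gamma_moments(volume)

# The dependence parameter of the bivariate model is driven by the product-moment
# correlation between the two variables (estimated here by its sample value).
rho = np.corrcoef(peak, volume)[0, 1]

# Marginal non-exceedance probabilities and univariate return periods (annual series).
F_peak = stats.gamma.cdf(peak, a_p, scale=b_p)
T_peak = 1.0 / (1.0 - F_peak)
```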

3.
The objective of the study was to compare the relative accuracy of three methodologies of regional flood frequency analysis in areas of limited flood records. Thirty-two drainage basins of different characteristics, located mainly in the southwest region of Saudi Arabia, were selected for the study. In the first methodology, region curves were developed and used together with the mean annual flood, estimated from the characteristics of the drainage basin, to estimate flood flows at a location in the basin. The second methodology was to fit probability distribution functions to the annual maximum rainfall intensity in a drainage basin; the best-fitting probability function was then used together with common peak flow models to estimate the annual maximum flood flows in the basin. In the third methodology, duration reduction curves were developed and used together with the average flood flow in a basin to estimate the peak flood flows in the basin. The results obtained from each methodology were compared to the flood records of the selected stations using three statistical measures of goodness of fit. The first methodology was found to perform best when only a short record is available at a drainage basin, and the second methodology also produced satisfactory results. The first methodology is therefore recommended in areas where data are insufficient and/or unreliable.

4.
Abstract

Physically-based flood frequency models use readily available rainfall data and catchment characteristics to derive the flood frequency distribution. In the present study, a new physically-based flood frequency distribution has been developed. The model uses a bivariate exponential distribution for rainfall intensity and duration, and the Soil Conservation Service Curve Number (SCS-CN) method for deriving the probability density function (pdf) of effective rainfall. The effective rainfall-runoff model is based on kinematic-wave theory. The results of applying the derived model to three Indian basins indicate that it is a useful alternative for estimating flood flow quantiles at ungauged sites.
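The SCS-CN step that converts a storm depth into effective rainfall follows the standard curve-number relations; a short sketch (in mm, with the usual initial-abstraction ratio of 0.2 and a hypothetical CN) is given below. The derived-distribution and kinematic-wave parts of the model are not reproduced here.

```python
def scs_cn_effective_rainfall(p_mm: float, cn: float, ia_ratio: float = 0.2) -> float:
    """Effective (excess) rainfall depth by the SCS Curve Number method.

    p_mm     : storm rainfall depth (mm)
    cn       : curve number (0 < CN <= 100)
    ia_ratio : initial abstraction as a fraction of the potential retention S
    """
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = ia_ratio * s                 # initial abstraction (mm)
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Example: a 90 mm storm on a catchment with an assumed CN of 75.
print(scs_cn_effective_rainfall(90.0, 75.0))
```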

5.
As an alternative to the commonly used univariate flood frequency analysis, copula frequency analysis can be used. In this study, 58 flood events at the Litija gauging station on the Sava River in Slovenia were analysed, selected based on annual maximum discharge values. Corresponding hydrograph volumes and durations were considered. Different bivariate copulas from three families were applied and compared using different statistical, graphical and upper tail dependence tests. The parameters of the copulas were estimated using the method of moments with the inversion of Kendall's tau. The Gumbel–Hougaard copula was selected as the most appropriate for the pair of peak discharge and hydrograph volume (Q–V). The same copula was also selected for the pair of hydrograph volume and duration (V–D), and the Student-t copula was selected for the pair of peak discharge and hydrograph duration (Q–D). The differences among most of the applied copulas were not significant. Different primary, secondary and conditional return periods were calculated and compared, and some relationships among them were obtained. Copyright © 2014 John Wiley & Sons, Ltd.
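Parameter estimation by inversion of Kendall's tau and the primary ("OR"/"AND") joint return periods for the Q–V pair can be sketched as follows; the peak/volume series and the Gumbel marginal choice below are illustrative assumptions, not the study's fitted marginals.

```python
import numpy as np
from scipy import stats

# Hypothetical annual-maximum flood peaks and corresponding hydrograph volumes.
rng = np.random.default_rng(3)
peak = stats.gumbel_r.rvs(loc=300, scale=80, size=58, random_state=rng)
volume = 0.5 * peak + stats.gumbel_r.rvs(loc=50, scale=20, size=58, random_state=rng)

# Gumbel-Hougaard copula parameter by inversion of Kendall's tau: theta = 1/(1 - tau).
tau, _ = stats.kendalltau(peak, volume)
theta = 1.0 / (1.0 - tau)

def gumbel_hougaard(u, v, theta):
    """C(u, v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta))."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

# Marginal non-exceedance probabilities of a design event (Gumbel marginals assumed).
q_design, v_design = 500.0, 320.0
u = stats.gumbel_r.cdf(q_design, *stats.gumbel_r.fit(peak))
v = stats.gumbel_r.cdf(v_design, *stats.gumbel_r.fit(volume))
c = gumbel_hougaard(u, v, theta)

# Primary joint return periods (annual series, mean inter-arrival time = 1 year):
T_or = 1.0 / (1.0 - c)               # Q > q OR V > v
T_and = 1.0 / (1.0 - u - v + c)      # Q > q AND V > v
```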

6.
Regression-based regional flood frequency analysis (RFFA) methods are widely adopted in hydrology. This paper compares two regression-based RFFA methods within a Bayesian generalized least squares (GLS) modelling framework: the quantile regression technique (QRT) and the parameter regression technique (PRT). In this study, the QRT focuses on the development of prediction equations for flood quantiles with average recurrence intervals (ARI) in the range of 2 to 100 years, while the PRT develops prediction equations for the first three moments of the log Pearson Type 3 (LP3) distribution, which are the mean, standard deviation and skew of the logarithms of the annual maximum flows; these regional parameters are then used to fit the LP3 distribution to estimate the desired flood quantiles at a given site. It has been shown that, using a method similar to stepwise regression and employing a number of statistics such as the model error variance, average variance of prediction, Bayesian information criterion and Akaike information criterion, the best set of explanatory variables in the GLS regression can be identified. In this study, a range of statistics and diagnostic plots have been adopted to evaluate the regression models. The method has been applied to 53 catchments in Tasmania, Australia. It has been found that catchment area and design rainfall intensity are the most important explanatory variables in predicting flood quantiles using the QRT. For the PRT, a total of four explanatory variables were adopted for predicting the mean, standard deviation and skew. The developed regression models satisfy the underlying model assumptions quite well; importantly, no outlier sites are detected in the regression diagnostic plots of the adopted regression equations. Based on 'one-at-a-time cross validation' and a number of evaluation statistics, it has been found that for Tasmania the QRT provides more accurate flood quantile estimates for the higher ARIs, while the PRT provides relatively better estimates for the smaller ARIs. The RFFA techniques presented here can easily be adapted to other Australian states and countries to derive more accurate regional flood predictions. Copyright © 2011 John Wiley & Sons, Ltd.
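The PRT step — fitting an LP3 distribution from regionally predicted mean, standard deviation and skew of the log-flows and reading off a quantile — can be sketched with scipy's Pearson type 3 implementation; the regional moment values below are placeholders, not outputs of the paper's regression equations.

```python
from scipy import stats

def lp3_quantile(aep: float, mean_log: float, std_log: float, skew_log: float) -> float:
    """Flood quantile from an LP3 fitted to base-10 logarithms of annual maxima.

    aep                          : annual exceedance probability (e.g. 0.01 for 100-year ARI)
    mean_log, std_log, skew_log  : regional estimates of the first three moments
                                   of log10(annual maximum flow)
    """
    log_q = stats.pearson3.ppf(1.0 - aep, skew_log, loc=mean_log, scale=std_log)
    return 10.0 ** log_q

# Placeholder regional moments, as would be predicted by the PRT regression equations.
mean_log, std_log, skew_log = 1.9, 0.35, -0.2
for ari in (2, 10, 20, 50, 100):
    print(ari, round(lp3_quantile(1.0 / ari, mean_log, std_log, skew_log), 1))
```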

7.
Abstract

Pooling of flood data is widely used to provide a framework for estimating design floods by the Index Flood method. Design flood estimation with this approach involves derivation of a growth curve, which shows the relationship between XT and the return period T, where XT = QT/QI and QI is the index flood at the site of interest. An implicit assumption of the Index Flood pooling procedure is that the XT–T relationship is the same at all sites in a homogeneous pooling group, although this assumption would generally be violated to some extent in practical cases, i.e. some degree of heterogeneity exists. In fact, the homogeneity criterion is effectively satisfied in only some cases under Irish conditions. In this paper, the performance of the index-flood pooling analysis is assessed in the Irish low CV (coefficient of variation) hydrology context, taking heterogeneity into account. It is found that the performance of the pooling method is satisfactory provided at least 350 station-years of data are included. It is also found that, in a highly heterogeneous group, it is more desirable to have many sites with short record lengths than a smaller number of sites with long record lengths. Increased heterogeneity decreases the advantage of pooling-group-based estimation over at-site estimation. Only when the heterogeneity measure (H1) is less than 4.0 is the pooled estimate of the 100-year flood preferable to the at-site estimate. In moderately to highly heterogeneous regions it is preferable to conduct at-site analysis for the estimation of the 100-year flood if the record length at the site concerned exceeds 50 years.

Editor Z.W. Kundzewicz; Associate editor A. Carsteanu

Citation Das, S. and Cunnane, C., 2012. Performance of flood frequency pooling analysis in a low CV context. Hydrological Sciences Journal, 57 (3), 433–444.
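The index-flood construction that underlies the pooling analysis — standardize each site's annual maxima by its index flood, pool the dimensionless growth factors, and fit a growth curve — can be sketched as below. The synthetic records and the GEV/maximum-likelihood growth-curve choice are assumptions for illustration (pooled growth curves are commonly fitted by L-moments instead).

```python
import numpy as np
from scipy import stats

# Hypothetical pooling group: annual maximum series of several sites (unequal lengths).
rng = np.random.default_rng(4)
group = [stats.genextreme.rvs(-0.1, loc=100 * s, scale=30 * s, size=n, random_state=rng)
         for s, n in zip((1.0, 2.5, 0.7, 1.8), (45, 60, 38, 52))]

station_years = sum(len(am) for am in group)   # the paper suggests >= 350 for the 100-year flood

# Index flood QI = at-site mean annual flood; pool the dimensionless growth factors.
growth = np.concatenate([am / am.mean() for am in group])

# Growth curve X_T fitted to the pooled sample (GEV by maximum likelihood, for illustration).
shape, loc, scale = stats.genextreme.fit(growth)
x100 = stats.genextreme.ppf(1 - 1 / 100, shape, loc=loc, scale=scale)

# Design flood at the site of interest: Q_T = X_T * QI.
qi_target = group[0].mean()
q100_target = x100 * qi_target
```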

8.
9.
10.
In this article, an approach using residual kriging (RK) in physiographical space is proposed for regional flood frequency analysis. The physiographical space is constructed from the physiographical/climatic characteristics of gauged basins by means of canonical correlation analysis (CCA). This approach is a modified version of the original method based on ordinary kriging (OK), and is intended to handle effectively any possible spatial trends in the hydrological variables over the physiographical space. In this approach, the trend is first quantified and removed from the hydrological variable by a quadratic spatial regression. OK is then applied to the regression residuals. The final estimate of a specific quantile at an ungauged station is the sum of the spatial regression estimate and the kriged residual. To evaluate the performance of the proposed method, a cross-validation procedure is applied. The results indicate that RK in the CCA physiographical space leads to more efficient estimates of regional flood quantiles than the original approach and a straightforward regression-based estimator. Copyright © 2010 John Wiley & Sons, Ltd.
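A compact sketch of the RK-in-CCA-space chain (canonical coordinates, quadratic trend, ordinary kriging of the residuals) is given below; the data are synthetic, the exponential variogram and its range are assumed rather than fitted, and sklearn's CCA stands in for the canonical correlation analysis used in the paper.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Synthetic example: 40 gauged basins with physiographic/climatic descriptors and
# at-site hydrological statistics (e.g. specific flood quantiles).
rng = np.random.default_rng(5)
phys = rng.normal(size=(40, 4))
hydro = phys[:, :2] @ np.array([[0.8, 0.1], [0.2, 0.9]]) + rng.normal(0.0, 0.3, (40, 2))
q100 = hydro[:, 0] + 0.5 * hydro[:, 1] + rng.normal(0.0, 0.1, 40)   # target quantile

# 1) Physiographical space: first two canonical coordinates from CCA.
cca = CCA(n_components=2).fit(phys, hydro)
xy = cca.transform(phys)

# 2) Quadratic spatial trend over the canonical coordinates.
def quad(xy):
    x, y = xy[:, 0], xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

beta, *_ = np.linalg.lstsq(quad(xy), q100, rcond=None)
resid = q100 - quad(xy) @ beta

# 3) Ordinary kriging of the residuals with an assumed exponential variogram.
def semivariogram(h, sill, rng_par=2.0):
    return sill * (1.0 - np.exp(-h / rng_par))

def krige_residual(xy_obs, r_obs, xy_new, sill):
    n = len(r_obs)
    d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    a = np.zeros((n + 1, n + 1))
    a[:n, :n] = semivariogram(d, sill)
    a[:n, n] = 1.0
    a[n, :n] = 1.0                                     # Lagrange-multiplier row/column
    b = np.append(semivariogram(np.linalg.norm(xy_obs - xy_new, axis=-1), sill), 1.0)
    w = np.linalg.solve(a, b)[:n]                      # kriging weights (sum to 1)
    return w @ r_obs

# Quantile at an "ungauged" basin = trend estimate + kriged residual.
xy_u = cca.transform(rng.normal(size=(1, 4)))
q_hat = quad(xy_u) @ beta + krige_residual(xy, resid, xy_u[0], resid.var())
print(round(q_hat[0], 3))
```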

11.
Hydrological Sciences Journal, 2013, 58(4): 601–618
Abstract

Several methods for the exploration and modelling of spatial point patterns are introduced to study the spatial patterns of homogeneous pooling groups for flood frequency analysis. The study is based on selected catchments in Great Britain, where a high density of gauging stations has been established. Initial pooling groups are formed using the K-means clustering algorithm with appropriately selected similarity measures. The pooling groups are subsequently revised to improve the homogeneity in the hydrological response. Spatial patterns of the initial and final pooling groups are explored in terms of intensity and dependence of the spatial distribution of the catchments. A test against a spatial point process is used to confirm or reject the initial impression of spatial clustering. Changes in the spatial patterns from the initial to the final pooling groups are examined using two comparison methods. The spatial pattern analysis described above can be used to answer the following questions: whether homogeneous catchments tend to exist in the vicinity of each other; whether the improvement in homogeneity tends to form more clustered pooling groups; and how the spatial patterns observed can be used to direct the selection of pooling variables.
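Forming the initial pooling groups by K-means on standardized catchment descriptors — before any homogeneity-driven revision — can be sketched as follows; the descriptor set and the number of groups are assumptions, not the similarity measures adopted in the study.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical catchment descriptors: area (km^2), mean annual rainfall (mm),
# baseflow index, and a soil/permeability index.
rng = np.random.default_rng(6)
descriptors = np.column_stack([
    rng.lognormal(4.5, 1.0, 200),
    rng.normal(900, 250, 200),
    rng.uniform(0.2, 0.8, 200),
    rng.uniform(0.1, 0.9, 200),
])

# Similarity is measured in the space of standardized descriptors
# (log-transforming area first, since it is strongly skewed).
X = descriptors.copy()
X[:, 0] = np.log(X[:, 0])
X = StandardScaler().fit_transform(X)

# Initial pooling groups from K-means; the group count (8 here) is an assumption
# and would normally be tuned before the homogeneity-based revision step.
groups = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(groups))      # number of catchments per initial pooling group
```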

12.
Despite some theoretical advantages of peaks-over-threshold (POT) series over annual maximum (AMAX) series, some practical aspects of flood frequency analysis using AMAX or POT series are still subject to debate. Only minor attention has been given to the POT method in the context of pooled frequency analysis. The objective of this research is to develop a framework to promote the implementation of pooled frequency modelling based on POT series. The framework benefits from a semi-automated threshold selection method. This study introduces a formalized and effective approach to construct homogeneous pooling groups. The proposed framework also offers means to compare the performance of pooled flood estimation based on AMAX or POT series. An application of the framework is presented for a large collection of Canadian catchments. The proposed POT pooling technique generally improved flood quantile estimation in comparison to the AMAX pooling scheme, and achieved smaller uncertainty associated with the quantile estimates.
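The POT series construction itself — peaks above a threshold, de-clustered by a minimum separation so that retained peaks can be treated as independent — can be sketched as below; the threshold and separation values are illustrative choices, not the semi-automated selections made in the paper.

```python
import numpy as np
from scipy.signal import find_peaks

# Hypothetical daily discharge record (m^3/s) for about 30 years.
rng = np.random.default_rng(7)
q = rng.gamma(shape=2.0, scale=20.0, size=30 * 365)
q[rng.integers(0, q.size, 150)] += rng.gamma(3.0, 60.0, 150)   # superimposed flood spikes

threshold = np.quantile(q, 0.995)     # illustrative high threshold
min_separation_days = 7               # independence criterion between retained peaks

# Peaks over threshold, de-clustered by the minimum separation.
peak_idx, _ = find_peaks(q, height=threshold, distance=min_separation_days)
pot = q[peak_idx]

events_per_year = pot.size / 30.0
print(pot.size, round(events_per_year, 2))
```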

13.
In this paper, a new index is proposed for the selection of the best regional frequency analysis method. First, based on reliability theory, the new selective index is developed. The variances of three regional T-year event estimators are then derived. The proposed methodology is applied to an actual watershed. For each regional method, the reliability of various T-year regional estimates is computed. Finally, the reliability-based selective index graph is constructed, from which the best regional method can be determined. In addition, the selection result is compared with that based on the traditional index, the root mean square error. The proposed new index is recommended as an alternative to existing indices such as the root mean square error, because both the influence of uncertainty and the accuracy of the estimates are considered. Copyright © 2003 John Wiley & Sons, Ltd.

14.
Regional flood frequency analysis (RFFA) is widely used in practice to estimate flood quantiles in ungauged catchments. The most commonly adopted RFFA methods, such as the quantile regression technique (QRT), assume a log-linear relationship between the dependent variable and a set of predictor variables. As non-linear models and universal approximators, artificial neural networks (ANN) have been widely adopted in rainfall-runoff modeling and hydrologic forecasting, but there have been relatively few studies applying ANN to RFFA for estimating flood quantiles in ungauged catchments. This paper therefore focuses on the development and testing of an ANN-based RFFA model using an extensive Australian database consisting of 452 gauged catchments. Based on independent testing, it has been found that an ANN-based RFFA model with only two predictor variables can provide flood quantile estimates that are more accurate than those of the traditional QRT. Seven different regions have been compared using the ANN-based RFFA model, and it has been shown that when the data from all the eastern Australian states are combined to form a single region, the ANN presents the best-performing RFFA model. This indicates that a relatively larger dataset is better suited to successful training and testing of ANN-based RFFA models.
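An ANN-based RFFA model with two predictors can be sketched as a small multilayer perceptron regressing log-quantiles on log(area) and design rainfall intensity; the network size, the predictor pair and the synthetic data below are assumptions, not the configuration adopted in the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical regional database: catchment area (km^2), design rainfall intensity
# (mm/h) and the at-site 100-year flood quantile (m^3/s).
rng = np.random.default_rng(8)
area = rng.lognormal(5.0, 1.2, 452)
intensity = rng.lognormal(1.2, 0.25, 452)
q100 = 2.5 * area**0.75 * intensity**1.1 * rng.lognormal(0.0, 0.3, 452)

X = np.column_stack([np.log(area), np.log(intensity)])
y = np.log(q100)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Small MLP (one hidden layer) on standardized inputs; quantiles are modelled in
# log space and back-transformed for evaluation on the independent test set.
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
ann.fit(X_tr, y_tr)

rel_error = np.abs(np.exp(ann.predict(X_te)) - np.exp(y_te)) / np.exp(y_te)
print(round(np.median(rel_error), 3))     # median relative error on the test catchments
```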

15.
The annual peak flow series of Polish rivers are mixtures of summer and winter flows. As Part II of a sequence of two papers, practical aspects of the applicability of the seasonal approach to flood frequency analysis (FFA) of Polish rivers are discussed. Taking a Two-Component Extreme Value (TCEV1) model as an example, it was shown in Part I that, regardless of the estimation method, the seasonal approach can yield gains in upper-quantile estimation accuracy; the gain rises with the return period of the quantile and is greatest when there is no seasonal variation. In this part, an assessment of the annual maxima (AM) versus seasonal maxima (SM) approach to FFA was carried out for the seasonal and annual peak flow series of 38 Polish gauging stations. First, the assumption of mutual independence of the seasonal maxima was tested. The smoothness of the SM and AM empirical probability distribution functions was analysed and compared. The TCEV1 model with seasonally estimated parameters was found to be inappropriate for most Polish data, as it considerably underestimates the skewness of the AM distributions as well as the upper quantile values. Consequently, discrepancies between the SM and AM estimates of TCEV1 are observed. Using the SM series and the TCEV1 distribution, the dominant season in the AM series was compared with the predominant season for extreme floods. The key argument for the presumed superiority of the SM approach, namely that SM samples are more statistically homogeneous than AM samples, has not been confirmed by the data. An analysis of the fit of seven distributions to the SM and AM series of the Polish datasets pointed to the Pearson type 3 distribution as the best for AM and summer maxima, whereas it was impossible to select a single best model for the winter samples. In a multi-model approach to FFA, three distributions, i.e. Pe(3), CD3 and LN3, should be included for both SM and AM. As a case study, the Warsaw gauge on the Vistula River was selected. While most AM elements here come from the winter season, the prevailing majority of extreme annual floods are summer maxima. The upper quantile estimates obtained by the classical annual and the two-season methods turn out to be fairly close; moreover, they are nearly equal to the quantiles calculated for the season of dominant extreme floods alone. Copyright © 2011 John Wiley & Sons, Ltd.
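The link between the seasonal and annual approaches rests on the identity F_AM(x) = F_S(x) · F_W(x) for independent seasonal maxima; with Gumbel-type components this gives the TCEV form. A sketch that derives an annual upper quantile from assumed seasonal parameters is given below; the parameter values are purely illustrative.

```python
import numpy as np
from scipy.optimize import brentq

# Assumed seasonal-maximum distributions (Gumbel-type components of a TCEV model):
# F_i(x) = exp(-lambda_i * exp(-x / theta_i)); the parameter values are illustrative.
seasons = {"summer": (8.0, 55.0), "winter": (12.0, 35.0)}   # (lambda_i, theta_i)

def f_season(x, lam, theta):
    return np.exp(-lam * np.exp(-x / theta))

def f_annual(x):
    """Annual-maximum CDF as the product of independent seasonal-maximum CDFs."""
    out = 1.0
    for lam, theta in seasons.values():
        out *= f_season(x, lam, theta)
    return out

def annual_quantile(return_period):
    """x such that F_AM(x) = 1 - 1/T, found by root search."""
    p = 1.0 - 1.0 / return_period
    return brentq(lambda x: f_annual(x) - p, 1e-6, 1e4)

for t in (10, 100, 1000):
    print(t, round(annual_quantile(t), 1))
```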

16.
17.
ABSTRACT

Series of observed flood intervals, defined as the time intervals between successive flood peaks over a threshold, were extracted directly from 11 approximately 100-year streamflow datasets from Queensland, Australia. A range of discharge thresholds was analysed, corresponding to return periods of approximately 3.7 months to 6.3 years. Flood interval histograms at South East Queensland gauges were consistently unimodal, whereas those of the North and Central Queensland sites were often multimodal. The exponential probability distribution (pd) is often used to describe interval exceedance probabilities, but fitting using the Anderson-Darling statistic found little evidence that it is the most suitable. The fatigue life pd dominated sub-year return periods (<1 year), often transitioning to a log Pearson 3 pd at above-year return periods. The fatigue life pd is used in the analysis of lifetime to structural failure when a threshold is exceeded, and this paper demonstrates its relevance also to the elapsed time between above-threshold floods. At most sites, the interval medians were substantially less than the means for sub-year return periods. Statistically, the median is a better measure of the central tendency of skewed distributions, but the mean is generally used in practice to describe the classical concept of the flood return period.
Editor Z.W. Kundzewicz; Associate editor I. Nalbantis
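Fitting candidate distributions to a series of flood intervals and ranking them with the Anderson-Darling statistic can be sketched as below; the interval data are synthetic, and the comparison is limited to the fatigue life (Birnbaum-Saunders) and log Pearson 3 distributions named in the abstract.

```python
import numpy as np
from scipy import stats

# Hypothetical series of intervals (days) between successive over-threshold flood peaks.
rng = np.random.default_rng(9)
intervals = stats.fatiguelife.rvs(1.2, scale=120.0, size=150, random_state=rng)

def anderson_darling(cdf_values):
    """A^2 statistic computed from the fitted CDF evaluated at the sample points."""
    f = np.sort(cdf_values)
    n = len(f)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(f) + np.log(1.0 - f[::-1])))

# Fatigue life (Birnbaum-Saunders) fit.
fl_params = stats.fatiguelife.fit(intervals, floc=0)
a2_fl = anderson_darling(stats.fatiguelife.cdf(intervals, *fl_params))

# Log Pearson 3 fit: Pearson type 3 fitted to the logarithms of the intervals.
log_int = np.log10(intervals)
lp3_params = stats.pearson3.fit(log_int)
a2_lp3 = anderson_darling(stats.pearson3.cdf(log_int, *lp3_params))

# The smaller A^2 indicates the better-fitting distribution for this sample.
print(round(a2_fl, 3), round(a2_lp3, 3))
```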

18.
Abstract

The aim of this paper is to understand the causal factors controlling the relationship between flood peaks and volumes in a regional context. A case study is performed based on 330 catchments in Austria ranging from 6 to 500 km² in size. Maximum annual flood discharges are compared with the associated flood volumes, and the consistency of the peak–volume relationship is quantified by the Spearman rank correlation coefficient. The results indicate that climate-related factors are more important than catchment-related factors in controlling this consistency. Spearman rank correlation coefficients typically range from about 0.2 in the high alpine catchments to about 0.8 in the lowlands. The weak dependence in the high alpine catchments is due to the mix of flood types, including long-duration snowmelt, synoptic floods and flash floods. In the lowlands, the flood durations vary less in a given catchment, which is related to the filtering of the distribution of all storms by the catchment response time to produce the distribution of flood-producing storms.
Editor Z.W. Kundzewicz

19.
Many civil infrastructures are located near the confluence of two streams, where they may be subject to inundation by high flows from either stream or both. These infrastructures, such as highway bridges, are designed to meet specified performance objectives for floods of a specified return period (e.g. the 100-year flood). Because the flooding of structures on one stream can be affected by high flows on the other stream, in many hydrological engineering applications it is important to know the relationship between the coincident exceedance probabilities on the confluent stream pair. Currently, the National Flood Frequency Program (NFF), which was developed by the US Geological Survey (USGS) and is based on regional analysis, is probably the most popular model for flood estimation at ungauged sites and could be employed to estimate flood probabilities at confluence points. The need for improved infrastructure design at such sites has motivated renewed interest in the development of more rigorous joint probability distributions of the coincident flows. To accomplish this, a practical procedure is needed to determine the crucial bivariate distributions of design flows at stream confluences. The copula method provides a way to construct such multivariate distribution functions. This paper aims to develop a Copula-based Flood Frequency (COFF) method at confluence points with any type of marginal distributions via the use of Archimedean copulas and dependence parameters. The practical implementation was assessed and tested against the standard NFF approach through a case study on Iowa's Des Moines River. Monte Carlo simulations demonstrated the success of the generalized copula-based joint distribution algorithm. Copyright © 2009 John Wiley & Sons, Ltd.
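A Monte Carlo estimate of a coincident exceedance probability under an Archimedean copula can be sketched as follows. The Clayton copula is used here because it has a simple conditional-sampling scheme; it is only one member of the Archimedean family considered in the paper, and the Gumbel marginals, the value of Kendall's tau and the design levels are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)

# Assumed marginal distributions of annual maximum flow on the two confluent streams.
main_stem = stats.gumbel_r(loc=800.0, scale=220.0)
tributary = stats.gumbel_r(loc=300.0, scale=90.0)

# Clayton copula parameter from Kendall's tau: theta = 2*tau / (1 - tau); tau assumed.
tau = 0.5
theta = 2.0 * tau / (1.0 - tau)

# Conditional sampling from the Clayton copula:
#   u ~ U(0,1), w ~ U(0,1),  v = [u^-theta * (w^(-theta/(1+theta)) - 1) + 1]^(-1/theta)
n = 200_000
u = rng.uniform(size=n)
w = rng.uniform(size=n)
v = (u ** (-theta) * (w ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)

# Transform to flows via the marginal quantile functions.
q_main = main_stem.ppf(u)
q_trib = tributary.ppf(v)

# Coincident exceedance probability of the two 100-year flows.
q100_main = main_stem.ppf(1 - 1 / 100)
q100_trib = tributary.ppf(1 - 1 / 100)
p_joint = np.mean((q_main > q100_main) & (q_trib > q100_trib))
print(p_joint)        # larger than 0.01**2 because of the positive dependence
```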

20.
Methods based on recursive probability, the extreme number theorem, and Markov chain (MC) concepts were applied to predict drought lengths (durations) from the standardized sequences (termed the standardized hydrological index, SHI) of monthly and annual river flows from Atlantic Canada. Results of the study indicated that the MC-based method is the most efficient, reliable and versatile method for predicting drought durations, followed by the extreme-number-based method. The recursive-probability-based method was found to be computationally intensive and less efficient, although it provided a powerful means for calibrating the empirical plotting position formula needed in the MC-based method. The Weibull plotting position formula turned out to be a suitable measure of the exceedance probability in the MC methodology for predicting drought lengths in Atlantic Canada. Based on these results, it can be inferred that the MC-based method can be extended to MC2 and higher-order chains for predicting drought lengths on SHI sequences. The predictive capability of the extreme-number-theorem-based method is limited to independent or weakly first-order persistent SHI sequences.
Editor D. Koutsoyiannis; Associate editor Q. Zhang
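The first-order MC ingredient of such drought-length prediction can be sketched as follows: standardize the flows into an SHI sequence, define deficit states below a truncation level, estimate the deficit-to-deficit transition probability, and use the geometric expectation 1/(1 - p) for the mean drought length. The truncation level and the synthetic flow series are assumptions; the Weibull plotting position m/(n+1) is included as the empirical exceedance-probability measure mentioned in the abstract.

```python
import numpy as np

# Hypothetical monthly flow series with lag-1 persistence (AR(1) on the log scale).
rng = np.random.default_rng(11)
n = 600
noise = rng.normal(size=n)
x = np.empty(n)
x[0] = noise[0]
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + noise[t]
flow = np.exp(0.4 * x + 3.0)

# Standardized hydrological index (SHI); month-by-month standardization is skipped
# here for brevity and the whole series is standardized instead.
shi = (flow - flow.mean()) / flow.std(ddof=1)

# Deficit (drought) state: SHI below an assumed truncation level.
truncation = 0.0
deficit = shi < truncation

# First-order Markov chain: p = P(deficit at t | deficit at t-1).
stay = np.sum(deficit[1:] & deficit[:-1])
p = stay / np.sum(deficit[:-1])

# Expected drought length for a first-order chain (geometric run length).
expected_length = 1.0 / (1.0 - p)

# Observed drought run lengths (consecutive deficit months).
runs, length = [], 0
for d in deficit:
    if d:
        length += 1
    elif length:
        runs.append(length)
        length = 0
if length:
    runs.append(length)

# Weibull plotting position m/(n+1): empirical exceedance probability of the m-th
# largest observed drought length (here m = 1, the longest drought).
runs_sorted = sorted(runs, reverse=True)
p_longest = 1.0 / (len(runs_sorted) + 1.0)
print(round(p, 3), round(expected_length, 2), runs_sorted[:3], round(p_longest, 3))
```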
