Similar Documents
 20 similar documents found (search time: 156 ms)
1.
Abstract

Flood frequency analysis (FFA) is essential for water resources management. Long flow records improve the precision of estimated quantiles; however, in some cases the sample size at a single location is not sufficient to achieve a reliable estimate of the statistical parameters, so regional FFA is commonly used to decrease the uncertainty in the prediction. In this paper, the bias of several commonly used parameter estimators applied to the generalized extreme value (GEV) distribution, including L-moment, probability weighted moment and maximum likelihood estimation, is evaluated using Monte Carlo simulation. Two bias compensation approaches are proposed based on this analysis: one based on the shape parameter alone, and one using all three GEV parameters. The models are then applied to streamflow records in southern Alberta. Compensation efficiency varies among estimators and between compensation approaches. The results overall suggest that compensating for the bias due to the estimator and the short sample size would significantly improve the accuracy of quantile estimation. In addition, at-site FFA can provide reliable estimates from short records when the bias in the estimator is accounted for appropriately.
Editor D. Koutsoyiannis; Associate editor Sheng Yue
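The Monte Carlo bias evaluation described in this abstract can be sketched as follows. The GEV parameters, record length and replicate count below are illustrative assumptions, not the paper's settings, and scipy's maximum likelihood fit stands in for the several estimators the paper compares.

```python
import numpy as np
from scipy.stats import genextreme

# Illustrative sketch: repeatedly draw short GEV samples, refit by
# maximum likelihood, and measure the bias of the shape estimate.
rng = np.random.default_rng(42)
true_c = -0.1          # scipy's c is the negated shape; c < 0 gives a heavy upper tail
n, reps = 30, 500      # short record length, number of Monte Carlo replicates

estimates = []
for _ in range(reps):
    sample = genextreme.rvs(true_c, loc=100, scale=25, size=n, random_state=rng)
    c_hat, loc_hat, scale_hat = genextreme.fit(sample)
    estimates.append(c_hat)

bias = np.mean(estimates) - true_c
print(f"mean shape estimate: {np.mean(estimates):.3f}, bias: {bias:+.3f}")
```

The same loop, repeated over estimators and sample sizes, yields the bias surfaces from which compensation factors could be built.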

2.
In the hydrologic analysis of extreme events such as precipitation or floods, the data can generally be divided into two types: partial duration series and annual maximum series. Partial duration series analysis is a robust method for analyzing hydrologic extremes, but the adaptive choice of an optimal threshold is challenging. The main goal of this paper was to determine the best method for choosing optimal thresholds. Ten semi-parametric tail index estimators were applied to find the optimal threshold for 24-h duration precipitation using data from the Korean Meteorological Administration. The mean square errors of the 10 estimators were calculated to determine the optimal threshold using a semi-parametric bootstrap method. A modified generalized Jackknife estimator showed the best performance among the 10 estimators evaluated with regard to estimating the mean square error of the shape estimator for the generalized Pareto distribution.
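A minimal illustration of the threshold-selection setting (not the paper's bootstrap-MSE procedure): fit the generalized Pareto distribution to exceedances over a grid of candidate thresholds and inspect the stability of the shape estimate. The bootstrap-MSE criterion described above would then pick the threshold minimizing the estimated mean square error of the shape estimator. All numbers here are assumptions on synthetic data.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
data = genpareto.rvs(0.2, scale=10, size=2000, random_state=rng)  # synthetic "precipitation"

# Fit the GPD to exceedances over a grid of candidate thresholds; for
# truly GPD data, the shape estimate should be stable across thresholds.
for q in (0.80, 0.90, 0.95):
    u = np.quantile(data, q)
    excesses = data[data > u] - u
    shape, _, scale = genpareto.fit(excesses, floc=0)
    print(f"threshold quantile {q:.2f}: shape ~ {shape:.3f} ({excesses.size} exceedances)")
```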

3.
Asymptotic properties of maximum likelihood parameter and quantile estimators of the 2-parameter kappa distribution are studied. Eight methods for obtaining large sample confidence intervals for the shape parameter and for quantiles of this distribution are proposed and compared by using Monte Carlo simulation. The best method is highlighted on the basis of the coverage probability of the confidence intervals that it produces for sample sizes commonly found in practice. For such sample sizes, confidence intervals for quantiles and for the shape parameter are shown to be more accurate if the quantile estimators are assumed to be log normally distributed rather than normally distributed (same for the shape parameter estimator). Also, confidence intervals based on the observed Fisher information matrix perform slightly better than those based on the expected value of this matrix. A hydrological example is provided in which the obtained theoretical results are applied.

4.
The use of historical data can significantly reduce the uncertainty around estimates of the magnitude of rare events obtained with extreme value statistical models. For historical data to be included in the statistical analysis, a number of their properties, e.g. their number and magnitude, need to be known with a reasonable level of confidence. Another key aspect of the historical data which needs to be known is the coverage period of the historical information, i.e. the period of time over which it is assumed that all large events above a certain threshold are known. It may, however, not be possible to retrieve information on the coverage period with sufficient confidence, in which case it needs to be estimated. In this paper, methods to perform such estimation are introduced and evaluated. The statistical formulation of the problem corresponds to estimating the size of a population for which only a few data points are available. This problem is generally referred to as the German tank problem, which arose during the Second World War, when statistical estimates of the number of tanks available to the German army were obtained. Different estimators can be derived using different statistical estimation approaches, the maximum spacing estimator being the minimum-variance unbiased estimator. The properties of three estimators are investigated by means of a simulation study, both for the simple estimation of the historical coverage and for the estimation of the extreme value statistical model. The maximum spacing estimator is confirmed to be a good approach to the estimation of the historical coverage period for practical use, and its application to a case study in Britain is presented.
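For the discrete serial-number version of the German tank problem, the minimum-variance unbiased estimator has the closed form N_hat = m(1 + 1/k) - 1, with m the sample maximum and k the sample size. A small simulation (illustrative sizes, not the paper's study design) checks its unbiasedness.

```python
import numpy as np

# Minimum-variance unbiased estimator for the German tank problem:
# serial numbers drawn without replacement from 1..N.
def umvu_population_size(sample):
    m, k = max(sample), len(sample)
    return m * (1 + 1 / k) - 1

rng = np.random.default_rng(1)
N = 300                                    # true (unknown) population size
estimates = [
    umvu_population_size(rng.choice(np.arange(1, N + 1), size=5, replace=False))
    for _ in range(4000)
]
print(f"mean estimate: {np.mean(estimates):.1f} (true N = {N})")
```

In the historical-flood setting, the "serial numbers" are the years of known historical events within the unknown coverage period.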

5.
This paper empirically investigates the asymptotic behaviour of the flood probability distribution and, more precisely, the possible occurrence of heavy-tailed distributions, generally predicted by multiplicative cascades. Since heavy tails considerably increase the frequency of extremes, they have many practical and societal consequences. A French database of 173 daily discharge time series is analyzed. These series correspond to various climatic and hydrological conditions, with drainage areas ranging from 10 to 10⁵ km² and record lengths from 22 to 95 years. The peaks-over-threshold method has been used with a set of semi-parametric estimators (Hill and generalized Hill estimators) and parametric estimators (maximum likelihood and L-moments). We discuss the respective merits of the estimators and compare their estimates of the shape parameter of the probability distribution of the peaks. We emphasize the influence of the number of highest observations selected for the estimation procedure and, in this respect, the particular interest of the semi-parametric estimators. Nevertheless, the various estimators agree on the prevalence of heavy tails, and we point out some links between their presence and hydrological and climatic conditions.
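The Hill estimator mentioned above has a particularly compact form: the mean of the log-spacings of the k largest order statistics above the (k+1)-th. A minimal sketch on synthetic Pareto-tailed data follows; the tail parameters are chosen for illustration, not taken from the French discharge database.

```python
import numpy as np

# Hill estimator of the extreme value index gamma from the k largest
# order statistics: mean of log X_(1..k) minus log X_(k+1).
def hill_estimator(data, k):
    x = np.sort(data)[::-1]             # descending order statistics
    logs = np.log(x[:k + 1])
    return np.mean(logs[:k]) - logs[k]

rng = np.random.default_rng(2)
# Pareto-tailed sample with tail index alpha = 2, i.e. gamma = 1/alpha = 0.5
sample = rng.pareto(2.0, size=5000) + 1.0
for k in (50, 100, 200):
    print(f"k={k}: gamma_hat = {hill_estimator(sample, k):.3f}")
```

The dependence of gamma_hat on k is exactly the sensitivity to the number of highest observations that the abstract emphasizes.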

6.
Q.J. Wang, Journal of Hydrology (1990), 120(1-4): 115-124
Unbiased estimators of probability weighted moments (PWM) and partial probability weighted moments (PPWM) from systematic and historical flood information are derived. Applications are made to estimating parameters and quantiles of the generalized extreme value (GEV) distribution. The effect of lower bound censoring, which might be deliberately introduced in practice, is also considered.
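For a complete systematic sample, the unbiased PWM estimators take the standard form b_r = n⁻¹ Σ_j [∏_{i=1}^{r} (j−i)/(n−i)] x_(j) over the ascending order statistics. The sketch below computes them for synthetic Gumbel data and converts to the first two L-moments; the historical-information and censoring extensions derived in the paper are not reproduced.

```python
import numpy as np

# Unbiased estimator of the probability weighted moment
# beta_r = E[X F(X)^r] from a complete sample.
def pwm(x, r):
    x = np.sort(x)
    n = len(x)
    j = np.arange(1, n + 1)
    weights = np.ones(n)
    for i in range(1, r + 1):
        weights *= (j - i) / (n - i)
    return np.mean(weights * x)

rng = np.random.default_rng(3)
x = rng.gumbel(loc=100, scale=20, size=1000)
b0, b1, b2 = (pwm(x, r) for r in range(3))
l1, l2 = b0, 2 * b1 - b0          # first two L-moments from PWMs
print(f"L-mean ~ {l1:.1f}, L-scale ~ {l2:.1f}")
```

For a Gumbel(100, 20) sample the L-mean should approach 100 + 20γ ≈ 111.5 and the L-scale 20 ln 2 ≈ 13.9.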

7.
Q. J. Wang, Journal of Hydrology (1990), 120(1-4): 103-114
The concept of partial probability weighted moments (PPWM), which can be used to estimate a distribution from censored samples, is introduced. Unbiased estimators of PPWM are derived. An application is made to estimating parameters and quantiles of the generalized extreme value (GEV) distribution from censored samples. Censored samples yield high quantile estimates which are almost as efficient as those obtained from uncensored samples. This could be a very useful technique for dealing with the undesirable effects of low outliers which occur in semiarid and arid zones.

8.
Ad hoc techniques for estimating the quantiles of the generalized Pareto (GP) and generalized extreme value (GEV) distributions are introduced. The proposed estimators are based on new estimators of the location and scale parameters recently introduced in the literature. They provide valuable estimates of the quantiles of interest both when the shape parameter is known and when it is unknown (the latter case being of great relevance in practical applications). In addition, weakly consistent estimators are introduced whose calculation does not require knowledge of any parameter. The procedures are tested on simulated data, and comparisons with other techniques are shown. The research was partially supported by Contract n. ENV4-CT97-0529 within the project “FRAMEWORK” of the European Community – D.G. XII. Grants by “Progetto Giovani Ricercatori” are also acknowledged.

9.
10.
The key problem in nonparametric frequency analysis of floods and droughts is the estimation of the bandwidth parameter, which defines the degree of smoothing. Most of the proposed bandwidth estimators have been based on the density function rather than the cumulative distribution function or the quantile function, which are the primary interest in frequency analysis. We propose a new bandwidth estimator derived from properties of quantile estimators, building on work by Altman and Léger (1995). The estimator is compared to the well-known method of least squares cross-validation (LSCV) using synthetic data generated from various parametric distributions used in hydrologic frequency analysis. Simulations suggest that our estimator performs at least as well as, and in many cases better than, the method of LSCV. In particular, the use of the proposed plug-in estimator reduces bias in the estimation compared to LSCV. When applied to data sets containing observations with identical values, typically the result of rounding or truncation, LSCV and most other techniques generally underestimate the bandwidth. The proposed technique performs very well in such situations.

11.
12.
Water quality is often highly variable both in space and time, which poses challenges for modelling the more extreme concentrations. This study developed an alternative approach to predicting water quality quantiles at individual locations. We focused on river water quality data that were collected over 25 years, at 102 catchments across the State of Victoria, Australia. We analysed and modelled spatial patterns of the 10th, 25th, 50th, 75th and 90th percentiles of the concentrations of sediments, nutrients and salt, with six common constituents: total suspended solids (TSS), total phosphorus (TP), filterable reactive phosphorus (FRP), total Kjeldahl nitrogen (TKN), nitrate-nitrite (NOx), and electrical conductivity (EC). To predict the spatial variation of each quantile for each constituent, we developed statistical regression models and exhaustively searched through 50 catchment characteristics to identify the best set of predictors for that quantile. The models predict the spatial variation in individual quantiles of TSS, TKN and EC well (66%–96% spatial variation explained), while those for TP, FRP and NOx have lower performance (37%–73% spatial variation explained). The most common factors that influence the spatial variations of the different constituents and quantiles are: annual temperature, percentage of cropping land area in catchment and channel slope. The statistical models developed can be used to predict how low- and high-concentration quantiles change with landscape characteristics, and thus provide a useful tool for catchment managers to inform planning and policy making with changing climate and land use conditions.

13.
14.
A maximum-likelihood (ML) estimator of the correlation dimension d₂ of fractal sets of points, not affected by the left-hand truncation of their inter-distances, is defined. Such truncation can produce significant biases in the ML estimates of d₂ when the observed scale range of the phenomenon is very narrow, as often occurs in seismological studies. A second, very simple algorithm based on the first two moments of the inter-distance distribution (SOM), itself not biased by the left-hand truncation effect, is also proposed. The asymptotic variance of the ML estimates is given. Statistical tests carried out on data samples of different sizes, extracted from populations of inter-distances following a power law, suggest that the sample variances of the estimates obtained by the proposed methods are not significantly different, and are well estimated by the asymptotic variance even for samples containing a few hundred inter-distances. To examine the effects of different sources of systematic error, the two estimators were also applied to sets of inter-distances between points belonging to statistical fractal distributions, baker's maps and experimental distributions of earthquake epicentres. For a full evaluation of the results achieved by the proposed methods, they were compared with those obtained by the ML estimator for untruncated samples and by the least-squares algorithm.
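A maximum-likelihood (Takens-type) correlation dimension estimator for inter-distances below a cutoff r₀ can be sketched as follows. This is the untruncated form, without the left-truncation correction developed in the paper, and the point set is an illustrative 2-D example.

```python
import numpy as np

# Takens-type ML estimator of the correlation dimension from
# inter-point distances below a cutoff r0 (no left truncation).
def takens_ml_dimension(distances, r0):
    r = distances[distances < r0]
    return -len(r) / np.sum(np.log(r / r0))

rng = np.random.default_rng(6)
pts = rng.random((800, 2))                 # uniform points in the unit square: d2 = 2
i, j = np.triu_indices(len(pts), k=1)
dists = np.linalg.norm(pts[i] - pts[j], axis=1)
print(f"estimated d2 ~ {takens_ml_dimension(dists, r0=0.05):.2f}")
```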

15.
The idea of this paper is to present estimators for combining terrestrial gravity data with Earth gravity models to produce a high-quality source of the Earth's gravity field data through all wavelengths. To do so, integral and point-wise estimators are mathematically developed, based on spectral combination theory, in such a way that they combine terrestrial data with one and/or two Earth gravity models. The integral estimators are developed so that they are either biased or unbiased towards a priori information. To test the quality of the estimators, their global mean square errors are generated using the EGM08 model and one of the recent products of the Gravity field and steady-state Ocean Circulation Explorer (GOCE) mission. Numerical results show that the integral estimators have smaller global root mean square errors than the point-wise ones, but they are not efficient in practice. The integral estimator of the biased type is the best suited, owing to its smallest global root mean square error compared to the rest of the estimators. Due largely to the omission errors of Earth gravity models, the point-wise estimators are not sensitive to the Earth gravity model commission error; therefore, the use of high-degree Earth gravity models is very influential in reducing their root mean square errors. It is also shown that the use of the GOCE Earth gravity model does not significantly reduce the root mean square errors of the presented estimators in the presence of EGM08. All estimators are applied in the region of Fennoscandia; a cap size of 2° for numerical integration and a maximum degree of 2500 for generation of band-limited kernels are found suitable for the integral estimators.

16.
Studies have illustrated the performance of at-site and regional flood quantile estimators. For realistic generalized extreme value (GEV) distributions and short records, a simple index-flood quantile estimator performs better than two-parameter (2P) GEV quantile estimators with probability weighted moment (PWM) estimation using a regional shape parameter and at-site mean and L-coefficient of variation (L-CV), and full three-parameter at-site GEV/PWM quantile estimators. However, as regional heterogeneity or record lengths increase, the 2P-estimator quickly dominates. This paper generalizes the index flood procedure by employing regression with physiographic information to refine a normalized T-year flood estimator. A linear empirical Bayes estimator uses the normalized quantile regression estimator to define a prior distribution which is employed with the normalized 2P-quantile estimator. Monte Carlo simulations indicate that this empirical Bayes estimator does essentially as well as or better than the simpler normalized quantile regression estimator at sites with short records, and performs as well as or better than the 2P-estimator at sites with longer records or smaller L-CV.
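The simple index-flood quantile estimator referred to above scales a regional dimensionless growth curve by the at-site mean flood. A hedged sketch follows, with assumed regional GEV parameters rather than values from the study.

```python
import numpy as np
from scipy.stats import genextreme

# Index-flood idea: T-year quantile = at-site mean * regional growth factor,
# where the growth factor comes from a regional dimensionless GEV curve.
def index_flood_quantile(site_flows, regional_c, regional_loc, regional_scale, T):
    growth = genextreme.ppf(1 - 1 / T, regional_c, loc=regional_loc, scale=regional_scale)
    return np.mean(site_flows) * growth

rng = np.random.default_rng(4)
# short synthetic at-site record (20 years), scaled to look like discharges
flows = genextreme.rvs(-0.1, loc=1.0, scale=0.3, size=20, random_state=rng) * 150
q100 = index_flood_quantile(flows, -0.1, 1.0, 0.3, T=100)
print(f"estimated 100-year flood: {q100:.0f}")
```

Only the at-site mean is estimated from the short record; the shape of the distribution is borrowed from the region, which is why this estimator is competitive when records are short.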

17.
18.
1 Introduction  The process of remotely sensed data acquisition is affected by factors such as the rotation of the earth, the finite scan rate of some sensors, the curvature of the earth, non-ideal sensors, and variations in platform altitude, attitude and velocity [1]. One important procedure which should be done prior to analyzing remotely sensed data is geometric correction (image to map) or registration (image to image) of the remotely sensed data. The purpose of geometric correction or registration is to e…

19.
Abstract

A new technique is developed for identifying groups for regional flood frequency analysis. The technique uses a clustering algorithm as a starting point for partitioning the collection of catchments. The groups formed using the clustering algorithm are subsequently revised to improve the regional characteristics based on three requirements that are defined for effective groups. The result is overlapping groups that can be used to estimate extreme flow quantiles for gauged or ungauged catchments. The technique is applied to a collection of catchments from India and the results indicate that regions with the desired characteristics can be identified using the technique. The use of the groups for estimating extreme flow quantiles is demonstrated for three example sites.

20.
A class of regression-type estimators of the parameter d in a fractionally differenced ARMA(p, q) process is introduced. This class is an extension of the estimator considered by Geweke and Porter-Hudak. In a simulation study, we compared three estimators from this class together with two approximate maximum likelihood estimators, which are based on two separate approximations to the likelihood. One approximation ignores the determinant term in the likelihood and the other includes a compensating factor for the determinant. When the determinant term is included, the estimate tends to be much less biased and is in general superior to the other estimate. The approximate maximum likelihood estimator outperformed, by a large margin, the regression-type estimators for pure ARIMA(0, d, 0) processes. However, for ARIMA(1, d, 1) processes, a regression-type estimator turned out to be the best for realizations of length 400 in 3 out of the 5 cases we tried.
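The Geweke and Porter-Hudak estimator that this class extends is a log-periodogram regression: regress log I(ω_j) on log(4 sin²(ω_j/2)) over the m lowest Fourier frequencies and take d_hat as minus the slope. A minimal sketch, tested on white noise (for which d = 0); the choice m = √n is a common default, not the paper's setting.

```python
import numpy as np

# GPH log-periodogram regression estimator of the fractional
# differencing parameter d.
def gph_estimate(x, m=None):
    n = len(x)
    m = m or int(n ** 0.5)                        # number of low frequencies used
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    periodogram = np.abs(np.fft.fft(x)[1:m + 1]) ** 2 / (2 * np.pi * n)
    regressor = np.log(4 * np.sin(freqs / 2) ** 2)
    slope = np.polyfit(regressor, np.log(periodogram), 1)[0]
    return -slope

rng = np.random.default_rng(5)
white_noise = rng.standard_normal(2048)
print(f"d_hat for white noise ~ {gph_estimate(white_noise):.3f}")
```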


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号