Similar documents
 A total of 20 similar documents were retrieved (search time: 15 ms).
1.
Discharge time series are one of the core data sets used in hydrological investigations. Errors in the data mainly occur through uncertainty in gauging (measurement uncertainty) and uncertainty in determination of the stage–discharge relationship (rating curve uncertainty). Thirty‐six flow gauges from the Namoi River catchment, Australia, were examined to explore how rating curve uncertainty affects gauge reliability and uncertainty of observed flow records. The analysis focused on the deviations in gaugings from the rating curves because standard (statistical) uncertainty methods could not be applied. Deviations of more than ±10% were considered significant to allow for a measurement uncertainty threshold of 10%, determined from quality coding of gaugings and operational procedures. The deviations in gaugings were compared against various factors to examine trends and identify major controls, including stage height, date, month, rating table, gauging frequency and quality, catchment area and type of control. The analysis gave important insights into data quality and the reliability of each gauge, which had previously not been recognized. These included identification of more/less reliable periods of record, which varied widely between gauges, and identification of more/less reliable parts of the hydrograph. Most gauges showed significant deviations at low stages, affecting the determination of low flows. This was independent of the type of gauge control, with many gauges experiencing problems in the stability of the rating curve, likely as a result of sediment flux. The deviations in gaugings also have widespread application in modelling, for example, informing suitable calibration periods and defining error distributions. This paper demonstrates the value and importance of undertaking qualitative analyses of observed records. Copyright © 2012 John Wiley & Sons, Ltd.
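As a minimal illustration of this kind of screening, the sketch below compares hypothetical gaugings against an assumed power-law rating table and flags departures beyond a ±10% threshold. All values, and the rating form itself, are invented for illustration and are not taken from the Namoi data.

```python
import numpy as np

def rating_discharge(stage, a, h0, b):
    """Standard power-law rating curve Q = a * (h - h0)**b (assumed form)."""
    return a * (stage - h0) ** b

# Hypothetical gaugings: stage (m) and measured discharge (m^3/s)
stage_obs = np.array([0.42, 0.55, 0.80, 1.10, 1.60, 2.40, 3.10])
q_obs     = np.array([0.35, 0.90, 2.60, 5.80, 14.0, 36.0, 62.0])

# Rating-table parameters assumed to be in force at the time of each gauging
a, h0, b = 9.0, 0.30, 1.9
q_rated = rating_discharge(stage_obs, a, h0, b)

# Percentage deviation of each gauging from the rating curve
dev_pct = 100.0 * (q_obs - q_rated) / q_rated

# Flag deviations beyond the +/-10% measurement-uncertainty threshold
significant = np.abs(dev_pct) > 10.0
for h, d, flag in zip(stage_obs, dev_pct, significant):
    print(f"stage {h:4.2f} m: deviation {d:+7.1f} % {'<-- significant' if flag else ''}")
```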

2.
Parsimonious stage–fall–discharge rating curve models for gauging stations subject to backwater complications are developed from simple hydraulic theory. The rating curve models are compounded in order to allow for possible shifts in the hydraulics when variable backwater becomes effective. The models provide a prior scientific understanding through the relationship between the rating curve parameters and the hydraulic properties of the channel section under study. This characteristic enables prior distributions for the rating curve parameters to be easily elicited according to site‐specific information and the magnitude of well‐known hydraulic quantities. Posterior results from three Norwegian and one American twin‐gauge stations affected by variable backwater are obtained using Markov chain Monte Carlo simulation techniques. The case studies demonstrate that the proposed Bayesian rating curve assessment is appropriate for developing rating procedures for gauging stations that are subject to variable backwater. Copyright © 2009 John Wiley & Sons, Ltd.
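A toy, numpy-only sketch of the general idea (Bayesian estimation of a simplified stage–fall–discharge power law by random-walk Metropolis sampling) is given below. The model form, the priors, the step sizes and the synthetic twin-gauge data are all assumptions; the paper's compound hydraulic models and elicitation procedure are considerably richer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic twin-gauge data: stage h (m), fall F (m), discharge Q (m^3/s).
# Q = C*(h - h0)**b * F**p is a simple stage-fall-discharge variant used
# here only for illustration.
h0_true, C_true, b_true, p_true = 0.2, 12.0, 1.7, 0.5
h = rng.uniform(0.5, 3.0, 60)
F = rng.uniform(0.05, 0.6, 60)
Q = C_true * (h - h0_true) ** b_true * F ** p_true * rng.lognormal(0.0, 0.05, 60)

def log_post(theta, h0=0.2, sigma=0.05):
    """Log-posterior with weak Gaussian priors and lognormal residuals (assumed)."""
    logC, b, p = theta
    mu = logC + b * np.log(h - h0) + p * np.log(F)
    loglik = -0.5 * np.sum(((np.log(Q) - mu) / sigma) ** 2)
    logprior = -0.5 * ((logC / 5) ** 2 + ((b - 2) / 1) ** 2 + (p / 1) ** 2)
    return loglik + logprior

# Random-walk Metropolis sampler
theta = np.array([np.log(10.0), 2.0, 0.4])
lp = log_post(theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0, [0.02, 0.02, 0.02])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())

post = np.array(samples[5000:])            # discard burn-in
print("posterior means (logC, b, p):", post.mean(axis=0))
```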

3.
In order to quantify total error affecting hydrological models and predictions, we must explicitly recognize errors in input data, model structure, model parameters and validation data. This paper tackles the last of these: errors in discharge measurements used to calibrate a rainfall‐runoff model, caused by stage–discharge rating‐curve uncertainty. This uncertainty may be due to several combined sources, including errors in stage and velocity measurements during individual gaugings, assumptions regarding a particular form of stage–discharge relationship, extrapolation of the stage–discharge relationship beyond the maximum gauging, and cross‐section change due to vegetation growth and/or bed movement. A methodology is presented to systematically assess and quantify the uncertainty in discharge measurements due to all of these sources. For a given stage measurement, a complete PDF of true discharge is estimated. Consequently, new model calibration techniques can be introduced to explicitly account for the discharge error distribution. The method is demonstrated for a gravel‐bed river in New Zealand, where all the above uncertainty sources can be identified, including significant uncertainty in cross‐section form due to scour and re‐deposition of sediment. Results show that rigorous consideration of uncertainty in flow data results in significant improvement of the model's ability to predict the observed flow. Copyright © 2010 John Wiley & Sons, Ltd.
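A hedged sketch of the basic Monte Carlo step, turning assumed rating-parameter and scatter distributions into a PDF of "true" discharge for one stage reading, is shown below. The distributions are illustrative placeholders rather than the paper's error model.

```python
import numpy as np

rng = np.random.default_rng(1)

def discharge_pdf_samples(stage, n=20000):
    """Monte Carlo samples of 'true' discharge for a given stage reading,
    combining assumed rating-parameter uncertainty with a multiplicative
    error term standing in for section change and gauging scatter."""
    a   = rng.normal(8.5, 0.6, n)       # rating coefficient (assumed)
    h0  = rng.normal(0.25, 0.03, n)     # cease-to-flow stage (assumed)
    b   = rng.normal(1.8, 0.08, n)      # rating exponent (assumed)
    eps = rng.lognormal(0.0, 0.08, n)   # residual scatter, ~8% (assumed)
    return a * np.clip(stage - h0, 1e-3, None) ** b * eps

q = discharge_pdf_samples(stage=1.2)
lo, med, hi = np.percentile(q, [5, 50, 95])
print(f"discharge at stage 1.2 m: median {med:.1f}, 90% interval [{lo:.1f}, {hi:.1f}] m^3/s")
```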

4.
5.
Strain and rotation tensors of figure elements in a spherical coordinate system and the computation of their errors
Using Taylor series expansion and the geometric equations of elasticity, analytical formulas are derived for computing the strain and rotation tensors of figure elements from GPS displacement data in a spherical coordinate system. Through linearization and the law of error propagation, the error formulas for the element strain and rotation tensors are derived in detail. Using the latest GPS station velocity data for the Sichuan–Yunnan region, the areal strain rate and maximum shear strain rate distributions of the region are computed with the figure element method and analysed preliminarily. The limitations of the mathematical model of figure element strain are also described, the significance of computing strain with the figure element method and the choice of figure elements are discussed, the weighting scheme for the strain computation is further analysed, and the relationship between strain and differences in the geocentric radii of the figure elements of a GPS network is also discussed.
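The underlying idea can be sketched in a simplified planar (local tangent-plane) form: estimate the strain rate and rotation of one figure element from station velocities by weighted least squares, then read formal errors off the parameter covariance. The station geometry and velocities below are invented, and the spherical-coordinate terms derived in the paper are omitted.

```python
import numpy as np

def element_strain(xy, vel, sig):
    """Least-squares strain rate and rotation of one figure element.

    xy  : (n, 2) station coordinates in a local tangent plane (m)
    vel : (n, 2) east/north velocities (m/yr)
    sig : (n, 2) one-sigma velocity uncertainties (m/yr)

    Model per station: u = tx + exx*dx + exy*dy - w*dy
                       v = ty + exy*dx + eyy*dy + w*dx
    Formal errors come from the parameter covariance (G^T W G)^-1.
    """
    xy = np.asarray(xy, float) - np.mean(xy, axis=0)   # reduce to the centroid
    n = len(xy)
    G = np.zeros((2 * n, 6))
    d = np.asarray(vel, float).reshape(-1)
    w = 1.0 / np.asarray(sig, float).reshape(-1) ** 2
    for i, (dx, dy) in enumerate(xy):
        G[2 * i]     = [1, 0, dx, dy, 0, -dy]   # east equation
        G[2 * i + 1] = [0, 1, 0, dx, dy,  dx]   # north equation
    cov = np.linalg.inv(G.T @ (w[:, None] * G))
    m = cov @ (G.T @ (w * d))
    tx, ty, exx, exy, eyy, omega = m
    return (exx, exy, eyy, omega), np.sqrt(np.diag(cov))[2:]

# Hypothetical triangle of stations (m) and velocities (mm/yr converted to m/yr)
xy  = [[0.0, 0.0], [60e3, 5e3], [20e3, 70e3]]
vel = np.array([[8.0, 2.0], [10.5, 1.0], [9.0, 5.5]]) * 1e-3
sig = np.full((3, 2), 0.5e-3)
params, errors = element_strain(xy, vel, sig)
print("exx, exy, eyy, omega [1/yr]:", params)
print("formal 1-sigma errors      :", errors)
```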

6.
River discharge and nutrient measurements are subject to aleatory and epistemic uncertainties. In this study, we present a novel method for estimating these uncertainties in colocated discharge and phosphorus (P) measurements. The “voting point”‐based method constrains the derived stage‐discharge rating curve both on the fit to available gaugings and to the catchment water balance. This helps reduce the uncertainty beyond the range of available gaugings and during out of bank situations. In the example presented here, for the top 5% of flows, uncertainties are shown to be 139% using a traditional power law fit, compared with 40% when using our updated “voting point” method. Furthermore, the method is extended to in situ and lab analysed nutrient concentration data pairings, with lower uncertainties (81%) shown for high concentrations (top 5%) than when a traditional regression is applied (102%). Overall, for both discharge and nutrient data, the method presented goes some way to accounting for epistemic uncertainties associated with nonstationary physical characteristics of the monitoring site.

7.
Precipitation and Reference Evapotranspiration (ETo) are the most important variables for rainfall–runoff modelling. However, it is not always possible to get access to them from ground‐based measurements, particularly in ungauged catchments. This study explores the performance of rainfall and ETo data from the global European Centre for Medium Range Weather Forecasts (ECMWF) ERA‐Interim reanalysis data for discharge prediction. The Weather Research and Forecasting (WRF) mesoscale model coupled with the NOAH Land Surface Model is used for the retrieval of hydro‐meteorological variables by downscaling ECMWF datasets. The conceptual Probability Distribution Model (PDM) is chosen for this study for the discharge prediction. The input data and model parameter sensitivity analysis and uncertainty estimations are taken into account for the PDM calibration and prediction in the case study catchment in England following the Generalized Likelihood Uncertainty Estimation approach. The goodness of calibration and prediction uncertainty is judged on the basis of the p‐factor (observations bracketed by the prediction uncertainty) and the r‐factor (achievement of a small uncertainty band). The overall analysis suggests that the uncertainty estimates using WRF‐downscaled ETo have slightly smaller p and r values (p = 0.65; r = 0.58) as compared to ground‐based observation datasets (p = 0.71; r = 0.65) during the validation and are hence promising for discharge prediction. On the contrary, WRF precipitation has the worst performance, and further research is needed for its improvement (p = 0.04; r = 0.10). Copyright © 2013 John Wiley & Sons, Ltd.
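The two band diagnostics can be computed directly from a prediction band and the observations, as in the minimal sketch below. The normalisation of the r-factor by the standard deviation of the observations is the common convention and is assumed here; all numbers are illustrative.

```python
import numpy as np

def p_and_r_factor(obs, lower, upper):
    """GLUE-style band diagnostics: p-factor is the fraction of observations
    falling inside the [lower, upper] prediction band; r-factor is the mean
    band width scaled by the standard deviation of the observations."""
    obs, lower, upper = map(np.asarray, (obs, lower, upper))
    p = np.mean((obs >= lower) & (obs <= upper))
    r = np.mean(upper - lower) / np.std(obs)
    return p, r

# Hypothetical daily flows and a 90% prediction band from behavioural runs
obs   = np.array([3.2, 4.1, 9.8, 15.0, 7.4, 5.0, 3.9])
lower = np.array([2.5, 3.0, 7.0, 10.0, 6.0, 4.2, 3.0])
upper = np.array([4.0, 5.5, 12.0, 18.5, 9.5, 6.5, 4.6])
p, r = p_and_r_factor(obs, lower, upper)
print(f"p-factor = {p:.2f}, r-factor = {r:.2f}")
```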

8.
The Natural Resources Conservation Service Curve Number model is one of the most recognizable procedures in the field of rainfall–run‐off estimation. It has been widely applied for different purposes in hydrological models. In spite of its widespread use, some uncertainties have not yet been clarified and must be examined for its proper application. These concern, in particular, the choice of the most representative rainfall–run‐off events and of the coefficient λ that relates the parameters of the model (curve number CN and initial abstraction Ia). In this research, an advanced analysis is developed to evaluate the influence of λ for a set of representative watersheds of the Agricultural Research Service of the United States Department of Agriculture. They are characterized by different soil properties, land uses, and climatic conditions. Finally, two novel methodologies for the selection of the most representative rainfall–run‐off events and for the adaptation of the coefficient λ are included, based on the pattern of rainfall distribution.
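For reference, the standard SCS-CN event equations with an adjustable initial-abstraction coefficient λ can be written as in the sketch below. The rainfall, CN and λ values are illustrative only, and strictly speaking the CN itself should be re-calibrated when λ is changed.

```python
def scs_runoff(p_mm, cn, lam=0.2):
    """SCS-CN direct run-off depth (mm) for event rainfall p_mm.

    S  = 25400 / CN - 254            (potential maximum retention, mm)
    Ia = lam * S                     (initial abstraction)
    Q  = (P - Ia)^2 / (P - Ia + S)   for P > Ia, else 0
    """
    s = 25400.0 / cn - 254.0
    ia = lam * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Effect of the initial-abstraction coefficient for an illustrative event
for lam in (0.05, 0.2):
    print(f"lambda = {lam}: Q = {scs_runoff(60.0, cn=75, lam=lam):.1f} mm")
```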

9.
10.
Abstract

The water-centric community has continuously made efforts to identify, assess and implement rigorous uncertainty analyses for routine hydrological measurements. This paper reviews some of the most relevant efforts and subsequently demonstrates that the Guide to the Expression of Uncertainty in Measurement (GUM) is a good candidate for estimation of uncertainty intervals for hydrometry. The demonstration is made by applying the GUM to typical hydrometric applications and comparing the analysis results with those obtained using the Monte Carlo method. The results show that hydrological measurements would benefit from the adoption of the GUM as the working standard, because of its soundness, the availability of software for practical implementation and the potential for extending the GUM to hydrological/hydraulic numerical simulations.

Editor D. Koutsoyiannis

Citation Muste, M., Lee, K. and Bertrand-Krajewski, J.-L., 2012. Standardized uncertainty analysis for hydrometry: a review of relevant approaches and implementation examples. Hydrological Sciences Journal, 57 (4), 643–667.
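A small sketch of the kind of comparison described, the GUM law of propagation versus Monte Carlo propagation for a simple velocity-area discharge, is given below with purely illustrative input uncertainties rather than the paper's worked examples.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simple velocity-area discharge Q = B * D * V with assumed standard
# uncertainties (values are illustrative, not from the paper).
B, u_B = 25.0, 0.25      # width (m)
D, u_D = 1.8, 0.05       # mean depth (m)
V, u_V = 0.9, 0.03       # mean velocity (m/s)
Q = B * D * V

# GUM law of propagation for a product: relative variances add
u_Q_gum = Q * np.sqrt((u_B / B) ** 2 + (u_D / D) ** 2 + (u_V / V) ** 2)

# Monte Carlo propagation with the same (Gaussian) input assumptions
n = 200000
q_mc = rng.normal(B, u_B, n) * rng.normal(D, u_D, n) * rng.normal(V, u_V, n)
u_Q_mc = q_mc.std()

print(f"Q = {Q:.1f} m^3/s; GUM u(Q) = {u_Q_gum:.2f}; Monte Carlo u(Q) = {u_Q_mc:.2f}")
```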

11.
Solute concentrations in streamflow typically vary systematically with stream discharge, and the resulting concentration–discharge relationships are important signatures of catchment biogeochemical processes. Solutes derived from mineral weathering often exhibit decreasing concentrations with increasing flows, suggesting dilution of a kinetically limited weathering flux by a variable flux of water. However, previous work showed that concentration–discharge relationships of weathering‐derived solutes in 59 headwater catchments were much weaker than this simple dilution model would predict. Instead, catchments behaved as chemostats, with rates of solute production and/or mobilization that were nearly proportional to water fluxes, on both event and interannual timescales. Here, we re‐examine these findings using data for a wider range of solutes from 2,186 catchments, ranging from ~10 to >1,000,000 km2 in drainage area and spanning a wide range of lithologic and climatic settings. Concentration–discharge relationships among this much larger set of larger catchments are broadly consistent with the previously described chemostatic behaviour, at least on event and interannual timescales for weathering‐derived solutes. Among these same catchments, however, site‐to‐site variations in mean concentrations of weathering‐derived solutes exhibit strong negative correlations with long‐term average precipitation and discharge, reflecting strong climatic control on long‐term leaching of the critical zone. We use multiple regression of site characteristics including discharge to identify potential controls on long‐term mean concentrations and find that lithologic and land cover controls are significant predictors for many analytes. The picture that emerges is one in which, on event and interannual timescales, weathering‐derived stream solute concentrations are chemostatically buffered by groundwater storage and fast chemical reactions, but each catchment's chemostatic “set point” reflects site‐to‐site differences in climatically driven evolution of the critical zone. In contrast to these weathering products, some nutrients and particulates are often near‐chemostatic across all timescales, and their long‐term mean concentrations correlate more strongly with land use than climatic characteristics.
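A minimal sketch of the basic diagnostic, the slope b of the log–log concentration–discharge relation (b near 0 indicating chemostatic behaviour, b near −1 indicating simple dilution of a constant solute flux), is shown below with invented data.

```python
import numpy as np

def cq_slope(conc, q):
    """Slope b of log(C) = log(a) + b*log(Q); b near 0 indicates chemostatic
    behaviour, b near -1 indicates dilution of a constant solute flux."""
    b, _ = np.polyfit(np.log(q), np.log(conc), 1)
    return b

# Hypothetical paired discharge (m^3/s) and calcium concentration (mg/L)
q    = np.array([0.5, 1.2, 3.0, 8.0, 20.0, 55.0])
conc = np.array([42.0, 40.0, 39.0, 36.0, 35.0, 33.0])
print(f"C-Q slope b = {cq_slope(conc, q):.2f}")
```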

12.
The proper assessment of design hydrographs and their main properties (peak, volume and duration) in small and ungauged basins is a key point of many hydrological applications. In general, two types of methods can be used to evaluate the design hydrograph: one approach is based on the statistics of storm events, while the other relies on continuously simulating rainfall‐runoff time series. In the first class of methods, the design hydrograph is obtained by applying a rainfall‐runoff model to a design hyetograph that synthesises the storm event. In the second approach, the design hydrograph is quantified by analysing long synthetic runoff time series that are obtained by transforming synthetic rainfall sequences through a rainfall‐runoff model. These simulation‐based procedures overcome some of the unrealistic hypotheses which characterize the event‐based approaches. In this paper, a simulation experiment is carried out to examine the differences between the two types of methods in terms of the design hydrograph's peak, volume and duration. The results indicate that the continuous simulation methods are preferable because the event‐based approaches tend to underestimate the hydrograph's volume and duration. Copyright © 2011 John Wiley & Sons, Ltd.

13.
A need for more accurate flood inundation maps has recently arisen because of the increasing frequency and extremity of flood events. The accuracy of flood inundation maps is determined by the uncertainty propagated from all of the variables involved in the overall process of flood inundation modelling. Despite our advanced understanding of flood progression, it is impossible to eliminate the uncertainty because of the constraints involving cost, time, knowledge, and technology. Nevertheless, uncertainty analysis in flood inundation mapping can provide useful information for flood risk management. The twin objectives of this study were firstly to estimate the propagated uncertainty rates of key variables in flood inundation mapping by using the first‐order approximation method and secondly to evaluate the relative sensitivities of the model variables by using the Hornberger–Spear–Young (HSY) method. Monte Carlo simulations using the Hydrologic Engineering Center's River Analysis System and triangle‐based interpolation were performed to investigate the uncertainty arising from discharge, topography, and Manning's n in the East Fork of the White River near Seymour, Indiana, and in Strouds Creek in Orange County, North Carolina. We found that the uncertainty of a single variable is propagated differently to the flood inundation area depending on the effects of other variables in the overall process. The uncertainty was linearly/nonlinearly propagated corresponding to valley shapes of the reaches. In addition, the HSY sensitivity analysis revealed the topography of Seymour reach and the discharge of Strouds Creek to be major contributors to the change of flood inundation area. Copyright © 2014 John Wiley & Sons, Ltd.
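The first-order propagation step can be sketched generically as below. The response function standing in for the hydraulic model, and all means and variances, are assumptions for illustration and are unrelated to the HEC-RAS runs of the study.

```python
import numpy as np

def fosm_variance(f, x_mean, x_var, rel_step=1e-4):
    """First-order (FOSM) approximation assuming independent inputs:
    Var[f(X)] ~= sum_i (df/dx_i)^2 * Var[x_i], with central-difference
    derivatives evaluated at the mean."""
    x_mean = np.asarray(x_mean, dtype=float)
    var = 0.0
    for i, (m, v) in enumerate(zip(x_mean, x_var)):
        h = rel_step * max(abs(m), 1.0)
        xp, xm = x_mean.copy(), x_mean.copy()
        xp[i] += h
        xm[i] -= h
        dfdx = (f(xp) - f(xm)) / (2 * h)
        var += dfdx ** 2 * v
    return var

# Toy stand-in for an inundation-area response to discharge Q, Manning's n
# and a channel-geometry term g (purely illustrative, not a hydraulic model)
area = lambda x: 0.8 * x[0] ** 0.6 * x[1] ** 0.4 / x[2] ** 0.3

x_mean = [450.0, 0.035, 2.0]             # Q (m^3/s), n (-), g (-)
x_var  = [90.0 ** 2, 0.005 ** 2, 0.2 ** 2]
v = fosm_variance(area, x_mean, x_var)
print(f"propagated std of inundation metric: {np.sqrt(v):.2f}")
```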

14.
How much data is needed for calibration of a hydrological catchment model? In this paper we address this question by evaluating the information contained in different subsets of discharge and groundwater time series for multi‐objective calibration of a conceptual hydrological model within the framework of an uncertainty analysis. The study site was a 5.6‐km2 catchment within the Forsmark research site in central Sweden along the Baltic coast. Daily time series data were available for discharge and several groundwater wells within the catchment for a continuous 1065‐day period. The hydrological model was a site‐specific modification of the conceptual HBV model. The uncertainty analyses were based on a selective Monte Carlo procedure. Thirteen subsets of the complete time series data were investigated with the idea that these represent realistic intermittent sampling strategies. Data subsets included split‐samples and various combinations of weekly, monthly, and quarterly fixed interval subsets, as well as a 53‐day ‘informed observer’ subset that utilized once per month samples except during March and April—the months containing large and often dominant snow melt events—when sampling was once per week. Several of these subsets, including that of the informed observer, provided very similar constraints on model calibration and parameter identification as the full data record, in terms of credibility bands on simulated time series, posterior parameter distributions, and performance indices calculated to the full dataset. This result suggests that hydrological sampling designs can, at least in some cases, be optimized. Copyright © 2009 John Wiley & Sons, Ltd.

15.
Timber harvest temporarily increases water yield; however, relationships between hydrologic and nutrient chemistry changes have not been consistent. This study quantified the effects of forest harvesting and site preparation without fertilization and with modern best management practices on nutrient concentrations and yields in small headwater streams of the Southeastern Coastal Plain. We monitored two watershed pairs for 2 years prior to and 1 year following timber harvest and for 2 more years following site preparation and planting. Treatment watersheds were clearcut, and downstream portions of streamside management zones were thinned in Fall 2003. Site preparation (herbicide application and burning) and planting followed a year later. All operations followed 1999 Georgia forestry best management practices. Previously published research revealed a large increase in water yield following harvest. Nutrient concentrations varied significantly within and between monitoring periods, even in reference watersheds. Silvicultural activities had no discernible effect on phosphorus and ammonium concentrations; however, statistically significant increases in nitrate/nitrite (67–340 µg L−1) and total nitrogen concentrations (100–400 µg L−1) in treatment watersheds followed stand re‐establishment. Nutrient yields increased after timber harvest largely as a result of increased water yields, although increased nutrient yields were small relative to inter‐annual and inter‐watershed variability. Annual water yield largely explained the variability in annual nitrogen and phosphorus export from reference and treatment streams (r2 values from 0.65 to 0.98). High NOx concentrations coming from an upstream agricultural area decreased by 1600–1800 µg L−1 over several hundred metres in the treatment streams by dilution, uptake or denitrification. Copyright © 2013 John Wiley & Sons, Ltd.

16.
In a related study developed by the authors, building fragility is represented by intensity‐specific distributions of damage exceedance probability of various damage states. The contribution of the latter has been demonstrated in the context of loss estimation of building portfolios, where it is shown that the proposed concept of conditional fragility functions provides the link between seismic intensity and the uncertainty in damage exceedance probabilities. In the present study, this methodology is extended to the definition of building vulnerability, whereby vulnerability functions are characterized by hazard‐consistent distributions of damage ratio per level of primary seismic intensity parameter—Sa(T1). The latter is further included in a loss assessment framework, in which the impact of variability and spatial correlation of damage ratio in the probabilistic evaluation of seismic loss is accounted for, using test‐bed portfolios of 2, 5, and 8‐story precode reinforced concrete buildings located in the district of Lisbon, Portugal. This methodology is evaluated in comparison with current state‐of‐the‐art methods of vulnerability and loss calculation, highlighting the discrepancies that can arise in loss estimates when the variability and spatial distributions of damage ratio, influenced by ground motion properties other than the considered primary intensity measure, are not taken into account.
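A hedged sketch of the kind of calculation involved, lognormal fragility curves converted to damage-state probabilities at a given Sa(T1) followed by sampling of a variable damage ratio per state, is given below. All medians, dispersions and damage-ratio statistics are invented placeholders, not the study's fragility model.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Assumed lognormal fragility curves: median Sa (g) and dispersion beta for
# three damage states of a pre-code RC building class (illustrative values).
medians = np.array([0.10, 0.25, 0.55])
betas   = np.array([0.45, 0.45, 0.50])

def state_probabilities(sa):
    """Probability of ending up in each state (none, DS1, DS2, DS3) at Sa."""
    p_exc = norm.cdf(np.log(sa / medians) / betas)   # P(DS >= ds_i | Sa)
    p_exc = np.concatenate(([1.0], p_exc, [0.0]))
    return p_exc[:-1] - p_exc[1:]

# Assumed mean damage ratio and coefficient of variation for each state
mdr = np.array([0.0, 0.05, 0.30, 0.80])
cv  = np.array([0.0, 0.80, 0.50, 0.30])

def loss_ratio_samples(sa, n=10000):
    """Sample loss ratios: draw a damage state, then a lognormal damage ratio
    whose mean and coefficient of variation match the state's assumed values."""
    states = rng.choice(4, size=n, p=state_probabilities(sa))
    dr = np.zeros(n)
    nz = states > 0
    mu, sd = mdr[states[nz]], mdr[states[nz]] * cv[states[nz]]
    sig_ln = np.sqrt(np.log(1.0 + (sd / mu) ** 2))
    dr[nz] = rng.lognormal(np.log(mu) - 0.5 * sig_ln ** 2, sig_ln)
    return np.clip(dr, 0.0, 1.0)

loss = loss_ratio_samples(sa=0.30)
print(f"mean loss ratio {loss.mean():.3f}, 95th percentile {np.percentile(loss, 95):.3f}")
```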

17.
Multiple segmented rating curves have been proposed to better capture the variability of the physical and hydraulic characteristics of river–floodplain systems. We evaluate the accuracy of one- and two-segmented rating curves by exploiting a large and unique database of direct measurements of stage and discharge data in more than 200 Swedish catchments. Such a comparison is made by explicitly accounting for the potential impact of measurement uncertainty. This study shows that two-segmented rating curves did not fit the data significantly better, nor did they generate fewer errors than one-segmented rating curves. Two-segmented rating curves were found to be slightly beneficial for low flow when there were strong indications of segmentation, but predicted the rating relationship worse in cases of weak indication of segmentation. Other factors were found to have a larger impact on rating curve errors, such as the uncertainty of the discharge measurements and the type of regression method.
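A minimal comparison of one- and two-segment power-law fits in log space might look like the sketch below, with an assumed cease-to-flow stage and an assumed breakpoint; the study treats both, together with measurement uncertainty, far more carefully.

```python
import numpy as np

# Hypothetical gaugings: stage h (m) and discharge Q (m^3/s); cease-to-flow
# stage h0 is assumed known here to keep the fits linear in log space.
h0 = 0.10
h = np.array([0.35, 0.5, 0.7, 0.9, 1.2, 1.6, 2.1, 2.8, 3.6])
Q = np.array([0.20, 0.55, 1.4, 2.7, 5.6, 11.0, 21.0, 41.0, 75.0])
x, y = np.log(h - h0), np.log(Q)

def log_rmse(pred_logq):
    return np.sqrt(np.mean((y - pred_logq) ** 2))

# One-segment power law: log Q = log a + b * log(h - h0)
b1, loga1 = np.polyfit(x, y, 1)
rmse_one = log_rmse(loga1 + b1 * x)

# Two segments joined at an assumed breakpoint stage of 1.0 m
low, high = h < 1.0, h >= 1.0
pred = np.empty_like(y)
for mask in (low, high):
    b, loga = np.polyfit(x[mask], y[mask], 1)
    pred[mask] = loga + b * x[mask]
rmse_two = log_rmse(pred)

print(f"log-space RMSE: one segment {rmse_one:.3f}, two segments {rmse_two:.3f}")
```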

18.
Lacustrine groundwater discharge (LGD) transports nutrients from a catchment to a lake, which may fuel eutrophication, one of the major threats to our fresh waters. Unfortunately, LGD has often been disregarded in lake nutrient studies. Most measurement techniques are based on separate determinations of volume and nutrient concentration of LGD: Loads are calculated by multiplying seepage volumes by concentrations of exfiltrating water. Typically low phosphorus (P) concentrations of pristine groundwater often are increased due to anthropogenic sources such as fertilizer, manure or sewage. Mineralization of naturally present organic matter might also increase groundwater P. Reducing redox conditions favour P transport through the aquifer to the reactive aquifer‐lake interface. In some cases, large decreases of P concentrations may occur at the interface, for example, due to increased oxygen availability, while in other cases, there is nearly no decrease in P. The high reactivity of the interface complicates quantification of groundwater‐borne P loads to the lake, making difficult clear differentiation of internal and external P loads to surface water. Anthropogenic sources of nitrogen (N) in groundwater are similar to those of phosphate. However, the environmental fate of N differs fundamentally from P because N occurs in several different redox states, each with different mobility. While nitrate behaves essentially conservatively in most oxic aquifers, ammonium's mobility is similar to that of phosphate. Nitrate may be transformed to gaseous N2 in reducing conditions and permanently removed from the system. Biogeochemical turnover of N is common at the reactive aquifer‐lake interface. Nutrient loads from LGD were compiled from the literature. Groundwater‐borne P loads vary from 0.74 to 2900 mg PO4‐P m−2 year−1; for N, these loads vary from 0.001 to 640 g m−2 year−1. Even small amounts of seepage can carry large nutrient loads due to often high nutrient concentrations in groundwater. Large spatial heterogeneity, uncertain areal extent of the interface and difficult accessibility make every determination of LGD a challenge. However, determinations of LGD are essential to effective lake management. Copyright © 2014 John Wiley & Sons, Ltd.

19.
T. Furuichi, Z. Win and R. J. Wasson, Hydrological Processes, 2009, 23(11): 1631–1641
Among the large rivers rising on the Tibetan Plateau and adjacent high mountains, the discharge and suspended sediment load of the Ayeyarwady (Irrawaddy) River are the least well known. Data collected between 1969 and 1996 at Pyay (Prome) are analysed to provide the best available modern estimate of discharge (379 ± 47 × 10⁹ m³/year) and suspended sediment load (325 ± 57 × 10⁶ t/year) for the river upstream of the delta head. A statistical comparison with data collected in the nineteenth century (1871 to 1879) shows discharge has significantly decreased in the last ~100 years. Regression and correlation analyses between discharge in the modern period and indices of El Niño–Southern Oscillation (ENSO) show a relationship. Copyright © 2009 John Wiley & Sons, Ltd.

20.
The Equotip surface hardness tester is becoming a popular method for rock and stone weathering research. In order to improve the reliability of Equotip for on‐site application, this study tested four porous limestones under laboratory conditions. The range of stone porosity was chosen to represent likely porosities found in weathered limestones in the field. We consider several key issues: (i) its suitability for soft and porous stones; (ii) the type of probe required for specific on‐site applications; (iii) appropriate (non‐parametrical) statistical methods for Equotip data; (iv) sufficient sampling size. This study shows that the Equotip is suitable for soft and porous rock and stone. Of the two tested probes, the DL probe has some advantages over the D probe as it correlates slightly better with open porosity and allows for more controlled sampling in recessed areas and rough or curved areas. We show that appropriate sampling sizes and robust non‐parametric methods for subsequent data evaluation can produce meaningful measures of rock surface hardness derived from the Equotip. The novel Hybrid dynamic hardness, a combination of two measuring procedures [single impact method (SIM) and repeated impact method (RIM)], has been adapted and is based on median values to provide a more robust data evaluation. For the tested stones in this study we propose a sample size of 45 readings (for a confidence level of 95%). This approach can certainly be transferred to stone and rock with similar porosities and hardness. Our approach also allows for consistent comparisons to be made across a wide variety of studies in the fields of rock weathering and stone deterioration research. Copyright © 2016 John Wiley & Sons, Ltd.
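One simple way to reason about sample size is sketched below: bootstrap the width of the 95% confidence interval of the median as a function of the number of readings, given pilot readings and a precision target. Both the pilot data and the target are assumptions; this illustrates the logic only and is not the procedure used in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def median_ci_halfwidth(readings, n, n_boot=2000):
    """Bootstrap half-width of the 95% confidence interval of the median
    for samples of size n drawn from a pool of pilot readings."""
    medians = [np.median(rng.choice(readings, n, replace=True)) for _ in range(n_boot)]
    lo, hi = np.percentile(medians, [2.5, 97.5])
    return (hi - lo) / 2

# Pilot Leeb-hardness readings on one stone surface (illustrative values)
pilot = rng.normal(420, 45, 200)

target = 15.0   # acceptable half-width in Leeb hardness units (assumed)
for n in (10, 20, 30, 45, 60):
    hw = median_ci_halfwidth(pilot, n)
    print(f"n = {n:2d}: 95% CI half-width of median ~ {hw:.1f}"
          + ("  <- meets target" if hw <= target else ""))
```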
