Similar Articles
20 similar articles found (search time: 234 ms)
1.
The effect of sample size on prediction quality is well understood. Recently, studies have assessed this relationship using near-continuous water quality samples. However, such sampling is rarely possible because of financial constraints, and many studies have therefore relied on simulation-based methods utilizing more affordable surrogates. A limitation of simulation-based methods is that they require a strong surrogate relationship, which is often absent. Catchment managers therefore require a direct method to estimate the effect of sample size on the mean using historical water quality data. One measure of prediction quality is the precision with which a mean is estimated; this is the focus of this work. By characterizing the effect of sample size on the precision of the mean, catchment managers can adjust the sample size in relation to both cost and precision. Historical data are often sparse and generally collected using several different sampling schemes, all without inclusion probabilities, so a model-based approach is needed to obtain unbiased estimates of the variance of the mean. Using total phosphorus data from 17 sub-catchments in southeastern Australia, we assessed the ability of a model-based approach to estimate the effect of sample size on the precision of event and base-flow mean concentrations. The results showed that for estimating the annual base-flow mean concentration, little gain in precision was achieved above 12 observations per year. Increasing the sample size improved event-based estimates up to about 12 samples per event; beyond that, additional samples did not greatly reduce the event mean concentration uncertainties. The precision of the base-flow estimates was most strongly correlated with percentage urban cover, whereas the precision of the event mean estimates was most strongly correlated with catchment size. The method proposed in this work could be readily applied to other water quality variables and other monitoring sites. Copyright © 2014 John Wiley & Sons, Ltd.
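The core idea in the abstract above, that the precision of a mean improves with the square root of the sample size, can be sketched directly. This is a minimal illustration of the diminishing returns around a dozen samples, not the paper's model-based estimator, and the standard deviation used is hypothetical:

```python
import numpy as np

def se_of_mean(sample_sd, n):
    """Standard error of the mean for n independent samples."""
    return sample_sd / np.sqrt(n)

# Diminishing returns: going from 4 to 12 samples buys far more
# precision than going from 12 to 36 (hypothetical SD of 1.0).
sd = 1.0
gain_4_to_12 = se_of_mean(sd, 4) - se_of_mean(sd, 12)
gain_12_to_36 = se_of_mean(sd, 12) - se_of_mean(sd, 36)
```

Real water quality data are serially correlated and sampled unevenly, which is exactly why the paper needs a model-based variance estimator rather than this textbook formula.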

2.
Anderson W.P., Evans D.G. Ground Water, 2007, 45(4): 499-505
Ground water recharge is often estimated through the calibration of ground water flow models. We examine the nature of calibration errors by considering some simple mathematical and numerical calculations. From these calculations, we conclude that calibrating a steady-state ground water flow model to water level extremes yields estimates of recharge that have the same value as the time-varying recharge at the time the water levels are measured. These recharge values, however, are a subdued version of the actual transient recharge signal. In addition, calibrating a steady-state ground water flow model to data collected during periods of rising water levels will produce recharge values that underestimate the actual transient recharge. Similarly, calibrating during periods of falling water levels will overestimate the actual transient recharge. We also demonstrate that average water levels can be used to estimate the actual average recharge rate provided that water level data have been collected for a sufficient amount of time.

3.
This paper presents a methodology to optimise measurement networks for the prediction of groundwater flow. Two different strategies are followed: the design of a measurement network that aims at minimising the log-transmissivity variance (averaged over the domain of interest), or a design that minimises the hydraulic head variance (averaged over the domain of interest). The methodology consists of three steps. In the first step, the prior log-transmissivity and hydraulic head variances are estimated. This step is completely general in the sense that the prior variances may be unconditional, or may be conditioned to log-transmissivity and/or hydraulic head measurements. If hydraulic head measurements are available in the first step, the inverse groundwater flow problem is solved by the sequential self-calibrated method. In the second step, the full covariance matrices of hydraulic head and log-transmissivity are calculated numerically on the basis of a sufficiently large number of Monte Carlo realisations. On the basis of the estimated covariances, the impact of an additional measurement in terms of variance reduction is calculated. The measurement that yields the maximum domain-averaged variance reduction is selected. Additional measurement locations are selected according to the same procedure. The procedure has been tested for a series of synthetic reference cases. Different sampling designs are tested for each of these cases, and the proposed strategies are compared with other sampling strategies. Although the proposed strategies indeed reach their objective and yield in most cases the lowest posterior log-transmissivity variance or hydraulic head variance, the differences compared with alternative sampling strategies are frequently small.
For the cases considered here, a sampling design that covers the aquifer more or less regularly performs well. The paper also illustrates that for the optimal estimation of a well catchment, a heuristic criterion (spreading measurement points as regularly as possible over the zone where there is some uncertainty regarding the capture probability) yields better results than a sampling design that minimises the posterior log-transmissivity variance or posterior hydraulic head variance.
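The greedy selection loop described in the second and third steps can be sketched with a synthetic ensemble. This is an assumed 1-D toy version (exponential covariance, illustrative grid size), not the sequential self-calibrated method itself: the covariance is recomputed from Monte Carlo realizations after each pick, and the candidate maximizing the domain-averaged variance reduction is selected.

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo realizations of log-transmissivity on a small 1-D grid
# (a stand-in for the paper's 2-D domain), with an assumed exponential
# covariance; none of the numbers come from the paper.
n_nodes, n_real = 30, 2000
x = np.arange(n_nodes)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 5.0)
L = np.linalg.cholesky(C + 1e-10 * np.eye(n_nodes))
fields = L @ rng.standard_normal((n_nodes, n_real))

def greedy_design(fields, n_picks):
    """Sequentially pick the measurement giving the largest
    domain-averaged variance reduction, recomputing the covariance
    from the (conditioned) ensemble after each pick."""
    f = fields.copy()
    picks = []
    for _ in range(n_picks):
        cov = np.cov(f)
        var = np.diag(cov).copy()
        var[var < 1e-12] = np.inf            # skip already-measured nodes
        # variance reduction at node j from measuring node i: cov[i,j]**2 / var[i]
        score = (cov**2 / var[:, None]).mean(axis=1)
        i = int(np.argmax(score))
        picks.append(i)
        # condition the zero-mean ensemble on an exact value at node i
        f = f - np.outer(cov[i] / var[i], f[i])
        f[i] = 0.0
    return picks, float(np.diag(np.cov(f)).mean())

picks, post_var = greedy_design(fields, 3)
```

Consistent with the abstract's finding, the selected points tend to spread roughly regularly over the domain once the first measurement has absorbed the most correlated area.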

4.
Acquisition of Representative Ground Water Quality Samples for Metals (cited 1 time: 0 self-citations, 1 by others)
R.S. Kerr Environmental Research Laboratory (RSKERL) personnel have evaluated sampling procedures for the collection of representative, accurate, and reproducible ground water quality samples for metals for the past four years. Intensive sampling research at three different field sites has shown that the method by which samples are collected has a greater impact on sample quality, accuracy, and reproducibility than whether the samples are filtered or not. In particular, sample collection practices that induce artificially high levels of turbidity have been shown to have the greatest negative impacts on sample quality. Results indicated the ineffectiveness of bailers for collection of representative metal samples. Inconsistent operator usage together with excessive purging generally resulted in excessive turbidity (>100 NTUs) and large differences in filtered and unfiltered metal samples. The use of low flow rate purging and sampling consistently produced filtered and unfiltered samples that showed no significant differences in concentrations. Turbidity levels were generally less than 5 NTUs, even in fine-textured glacial till. We recommend the use of low flow rates, during both purging and sampling, placement of the sampling intake at the desired sampling point, minimal disturbance of the stagnant water column above the screened interval, monitoring of water quality indicators during purging, minimization of atmospheric contact with samples, and collection of unfiltered samples for metal analyses to estimate total contaminant loading in the system. While additional time is spent due to use of low flow rates, this is compensated for by eliminating the need for filtration, decreased volume of contaminated purge water, and less resampling to address inconsistent data results.

5.
Conventional flood frequency analysis is concerned with providing an unbiased estimate of the magnitude of the design flow exceeded with the probability p, but sampling uncertainties imply that such estimates will, on average, be exceeded more frequently. An alternative approach, therefore, is to derive an estimator which gives an unbiased estimate of flow risk: the difference between the two magnitudes reflects uncertainties in parameter estimation. An empirical procedure has been developed to estimate the mean true exceedance probabilities of conventional estimates made using a GEV distribution fitted by probability weighted moments, and adjustment factors have been determined to enable the estimation of flood magnitudes exceeded with, on average, the desired probability.
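The claim that design flow estimates are, on average, exceeded more often than their nominal probability can be checked by simulation. The sketch below uses a Gumbel parent fitted by the method of moments rather than the paper's GEV/PWM procedure, with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)
EULER = 0.57721566  # Euler-Mascheroni constant

def gumbel_quantile(mu, beta, p_exc):
    """Flow exceeded with probability p_exc under Gumbel(mu, beta)."""
    return mu - beta * np.log(-np.log(1.0 - p_exc))

def true_exceedance(x, mu, beta):
    """True probability that a Gumbel(mu, beta) annual maximum exceeds x."""
    return 1.0 - np.exp(-np.exp(-(x - mu) / beta))

# Repeatedly fit 20-year records drawn from a known Gumbel parent and
# check how often the nominal 1-in-50 estimate is actually exceeded.
mu_true, beta_true, n, p = 100.0, 20.0, 20, 0.02
probs = []
for _ in range(3000):
    sample = mu_true - beta_true * np.log(-np.log(rng.random(n)))
    beta_hat = sample.std(ddof=1) * np.sqrt(6) / np.pi   # method of moments
    mu_hat = sample.mean() - EULER * beta_hat
    est = gumbel_quantile(mu_hat, beta_hat, p)
    probs.append(true_exceedance(est, mu_true, beta_true))
mean_true_p = float(np.mean(probs))
```

Because the exceedance probability is a convex function of the estimated quantile in the upper tail, the mean true exceedance probability comes out above the nominal 0.02, which is the motivation for the paper's adjustment factors.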

6.
Regional estimates of acid neutralizing capacity (ANC) in stream waters are found using a regression model. The model has landscape classifications based on catchment characteristics as its main independent variables. It also includes continuously varying covariates. Landscape classifications and covariates are selected from a priori scientific understanding of acidification processes. Parameter estimates for the model are found using measurements of ANC in 50 streams in Galloway, south-west Scotland with a history of acidification. The parameterized model is then used to provide ANC simulations for streams throughout a subregion, assuming conservative mixing of ANC through the flow network. The stream water sampling survey is designed to reduce the variance of parameter estimates. A variance model is suggested for the concentrations, and this is used to simulate the variance of ANC concentrations throughout the subregion. Monte Carlo simulation is used to estimate the distribution of the length of river reach with ANC less than zero. © Crown Copyright 2004. Reproduced with the permission of Her Majesty's Stationery Office. Published by John Wiley & Sons, Ltd.

7.
Because of their fast response to hydrological events, small catchments show strong quantitative and qualitative variations in their water runoff. Fluxes of solutes or suspended material can be estimated from water samples only if an appropriate sampling scheme is used. We used continuous in-stream measurements of the electrical conductivity of the runoff in a small subalpine catchment (64 ha) in central Switzerland and in a very small (0.16 ha) subcatchment. Different sampling and flux integration methods were simulated for weekly water analyses. Fluxes calculated directly from grab samples are strongly biased towards high conductivities observed at low discharges. Several regressions and weighted averages have been proposed to correct for this bias. Their accuracy and precision are better, but none of these integration methods gives a consistently low bias and a low residual error. Different methods of peak sampling were also tested. Like regressions, they produce important residual errors and their bias is variable. This variability (both between methods and between catchments) does not allow one to tell a priori which sampling scheme and integration method would be more accurate. Only discharge-proportional sampling methods were found to give essentially unbiased flux estimates. Programmed samplers with a fraction collector allow for proportional pooling and are appropriate for short-term studies. For long-term monitoring or experiments, sampling at a frequency proportional to the discharge appears to be the best way to obtain accurate and precise flux estimates. Copyright © 2006 John Wiley & Sons, Ltd.
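The bias of grab sampling versus the unbiasedness of discharge-proportional sampling can be shown with a synthetic record. The power-law dilution relationship below is assumed for illustration and is not the catchment's actual chemistry:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic year of hourly discharge (lognormal) and a diluting solute:
# concentration falls as discharge rises (assumed power law).
q = rng.lognormal(mean=0.0, sigma=1.0, size=24 * 365)
c = 10.0 * q**-0.4

true_flux = float(np.mean(c * q))                 # flux per unit time

# Weekly grab samples: mean concentration times mean discharge is
# biased towards the high concentrations seen at low flow.
idx = np.arange(0, q.size, 24 * 7)
grab_flux = float(np.mean(c[idx]) * np.mean(q))

# Discharge-proportional sampling is equivalent to the
# discharge-weighted mean concentration, which is unbiased.
dw_conc = float(np.sum(c * q) / np.sum(q))
prop_flux = float(dw_conc * np.mean(q))
```

For this diluting solute the grab-sample estimate overstates the flux, while the discharge-weighted estimate reproduces it exactly, mirroring the abstract's conclusion.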

8.
Regional ground water flow is most usually estimated using Darcy's law, with hydraulic conductivities estimated from pumping tests, but can also be estimated using ground water residence times derived from radioactive tracers. The two methods agree reasonably well in relatively homogeneous aquifers but it is not clear which is likely to produce more reliable estimates of ground water flow rates in heterogeneous systems. The aim of this paper is to compare bias and uncertainty of tracer and hydraulic approaches to assess ground water flow in heterogeneous aquifers. Synthetic two-dimensional aquifers with different levels of heterogeneity (correlation lengths, variances) are used to simulate ground water flow, pumping tests, and transport of radioactive tracers. Results show that bias and uncertainty of flow rates increase with the variance of the hydraulic conductivity for both methods. The bias resulting from the nonlinearity of the concentration–time relationship can be reduced by choosing a tracer with a decay rate similar to the mean ground water residence time. The bias on flow rates estimated from pumping tests is reduced when performing long duration tests. The uncertainty on ground water flow is minimized when the sampling volume is large compared to the correlation length. For tracers, the uncertainty is related to the ratio of correlation length to the distance between sampling wells. For pumping tests, it is related to the ratio of correlation length to the pumping test's radius of influence. In regional systems, it may be easier to minimize this ratio for tracers than for pumping tests.
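The tracer side of the comparison rests on converting a measured activity ratio into a residence time. A minimal piston-flow sketch (illustrative distances and half-life; the values are not from the paper):

```python
import numpy as np

def tracer_age(c_ratio, half_life_yr):
    """Piston-flow ground water age from a radioactive tracer:
    t = -ln(C/C0) / lambda, with lambda = ln(2) / half-life."""
    lam = np.log(2.0) / half_life_yr
    return -np.log(c_ratio) / lam

# A measured/input activity ratio of 0.5 with a 12.32-yr tracer
# (tritium's half-life) corresponds to one half-life of travel time.
age = tracer_age(0.5, 12.32)                # 12.32 years

# Mean pore velocity between two wells 500 m apart where the ratio
# has fallen to 0.25 (two half-lives); the distance is illustrative.
velocity = 500.0 / tracer_age(0.25, 12.32)  # m per year
```

The abstract's point about decay rate follows from this curve: when the decay timescale matches the mean residence time, the concentration-age relationship is used in its most sensitive, least nonlinear range.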

9.
This work proposes two modelling frameworks for diagnosing temporal variations in nonlinear rating curves that describe suspended sediment–discharge relationships. A variant of the weighted regression on time, discharge, and season model is proposed and is compared against dynamic nonlinear modelling, a newly developed nonlinear time series filter based on sequential Monte Carlo sampling. Both approaches estimate a time series of rating curve parameters, with uncertainty, that can be used to diagnose variability in the sediment–discharge relationship over time. We evaluate the models with a variety of synthetic scenarios to highlight their ability to estimate signals of known rating curve change. Results reveal important bias–variance trade-offs unique to each approach, and in general, suggest that dynamic nonlinear modelling is better suited for rapid rating curve changes, whereas the weighted regression on time, discharge, and season variant more precisely estimates slow change. The techniques are then applied in two case studies in the Upper Hudson and Mohawk Rivers in New York. We conclude with a discussion of the implications of dynamic rating curves for the management of water quality in riverine and estuary systems.

10.
Studies of aquatic invertebrate production have been primarily conducted at the level of individual taxa or populations. Advancing our understanding of the functioning and energy flow in aquatic ecosystems necessitates scaling-up to community and whole-lake levels, as well as integrating across benthic and pelagic habitats and across multiple trophic levels. In this paper, we compare a suite of non-cohort based methods for estimating benthic invertebrate production at subpopulation, habitat, and whole-lake levels for Sparkling Lake, WI, USA. Estimates of the overall mean benthic invertebrate production (i.e. whole-lake level) ranged from 1.9 to 5.0 g DM m−2 y−1, depending on the method. Production estimates varied widely among depths and habitats, and there was general qualitative agreement among methods with regards to differences in production among habitats. However, there were also consistent and systematic differences among methods. The size-frequency method gave the highest, while the regression model of Banse and Mosher (Ecol Monogr 50:355–379, 1980) gave the lowest production estimates. The regression model of Plante and Downing (Can J Fish Aquat Sci 46:1489–1498, 1989) had the lowest average coefficients of variation at habitat (CV = 0.17) and whole-lake (CV = 0.08) levels. At the habitat level, variance in production estimates decreased with sampling effort, with little improvement after 10–15 samples. Our study shows how different production estimates can be generated from the same field data, though aggregating estimates up to the whole-lake level does produce an averaging effect that tends to reduce variance.

11.
Water quality in several tributaries of the Dnepr in the southeastern part of its basin in the territory of the Republic of Belarus was estimated by six biotic indices and by the comparison with reference sites as accepted in the European Water Framework Directive. Water quality estimates obtained by different indices for the same sites are significantly different. The most adequate estimates were obtained from the British and Belgian indices for the assessment of the state of flowing waters. The comparative analysis of the two approaches showed that the method based on reference sites yields a more stringent estimate of river water quality than biotic indices.

12.
Paillet F.L. Ground Water, 2001, 39(5): 667-675
Permeability profiles derived from high-resolution flow logs in heterogeneous aquifers provide a limited sample of the most permeable beds or fractures determining the hydraulic properties of those aquifers. This paper demonstrates that flow logs can also be used to infer the large-scale properties of aquifers surrounding boreholes. The analysis is based on the interpretation of the hydraulic head values estimated from the flow log analysis. Pairs of quasi-steady flow profiles obtained under ambient conditions and while either pumping or injecting are used to estimate the hydraulic head in each water-producing zone. Although the analysis yields localized estimates of transmissivity for a few water-producing zones, the hydraulic head estimates apply to the far-field aquifers to which these zones are connected. The hydraulic head data are combined with information from other sources to identify the large-scale structure of heterogeneous aquifers. More complicated cross-borehole flow experiments are used to characterize the pattern of connection between large-scale aquifer units inferred from the hydraulic head estimates. The interpretation of hydraulic heads in situ under steady and transient conditions is illustrated by several case studies, including an example with heterogeneous permeable beds in an unconsolidated aquifer, and four examples with heterogeneous distributions of bedding planes and/or fractures in bedrock aquifers.

13.
The logistical demands of coring lake sediments tend to preclude the replicate coring necessary to establish error estimates for measured sedimentary parameters. However, if such parameters are to be used to reconstruct sediment yield, and particularly to identify temporal variability of sediment yield, reasonable error estimates are required. In this paper data from a series of alpine lakes in British Columbia are applied to develop a new method for deriving such estimates. Regression surfaces fitted to point values of sediment mass are used to model the physically controlled spatial variability of sedimentation. Deviations from these surfaces are assumed to represent remaining unstructured variance, which constitutes a conservative error estimate. Application of the technique to the alpine lake dataset gives sediment yield estimates with error ranges of ±7–21 per cent. The potential error is minimized where the spatial variability of sedimentation is strongly predictable. The best fits were achieved for elongate lakes of simple basin morphology. The range of the error estimates is sufficiently low to allow detection of variability in Holocene sediment yield to one of the lakes. By using this technique, absolute sediment yields with associated error estimates may be derived. The associated gains in precision justify multicore approaches to lake sediment-based reconstructions of sediment yield. Copyright © 2000 John Wiley & Sons, Ltd.

14.
Soil-gas sampling and analysis is a common tool used in vapor intrusion assessments; however, sample collection becomes more difficult in fine-grained, low-permeability soils because of limitations on the flow rate that can be sustained during purging and sampling. This affects the time required to extract sufficient volume to satisfy purging and sampling requirements. The soil-gas probe tubing or pipe and sandpack around the probe screen should generally be purged prior to sampling. After purging, additional soil gas must be extracted for chemical analysis, which may include field screening, laboratory analysis, occasional duplicate samples, or analysis for more than one analytical method (e.g., volatile organic compounds and semivolatile organic compounds). At present, most regulatory guidance documents do not distinguish between soil-gas sampling methods that are appropriate for high- or low-permeability soils. This paper discusses permeability influences on soil-gas sample collection and reports data from a case study involving soil-gas sampling from silt and clay-rich soils with moderate to extremely low gas permeability to identify a sampling approach that yields reproducible samples with data quality appropriate for vapor intrusion investigations for a wide range of gas-permeability conditions.

15.
Six methods were compared for calculating annual stream exports of sulfate, nitrate, calcium, magnesium and aluminum from six small Appalachian watersheds. Approximately 250–400 stream samples and concurrent stream flow measurements were collected during baseflows and storm flows for the 1989 water year at five Pennsylvania watersheds and during the 1989–1992 water years at a West Virginia watershed. Continuous stream flow records were also collected at each watershed. Solute exports were calculated from the complete data set using six different scenarios ranging from instantaneous monthly measurements of stream chemistry and stream flow, to intensive monitoring of storm flow events and multiple regression equations. The results for five of the methods were compared with the regression method because statistically significant models were developed and the regression equations allowed for prediction of solute concentrations during unsampled storm flows. Results indicated that continuous stream flow measurement was critical to producing exports within 10% of regression estimates. For solutes whose concentrations were not correlated strongly with stream flow, weekly grab samples combined with continuous records of stream flow were sufficient to produce export estimates within 10% of the regression method. For solutes whose concentrations were correlated strongly with stream flow, more intensive sampling during storm flows or the use of multiple regression equations were the most appropriate methods, especially for watersheds where stream flows changed most quickly. Concentration–stream flow relationships, stream hydrological response, available resources and required level of accuracy of chemical budgets should be considered when choosing a method for calculating solute exports. © 1997 John Wiley & Sons, Ltd.
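Two of the export scenarios, instantaneous monthly sampling versus a concentration-discharge regression, can be sketched on synthetic daily data. The rating coefficients and noise level below are assumed for illustration and are not from the study:

```python
import numpy as np

rng = np.random.default_rng(3)

# Daily flow (m3/s) and a solute whose concentration rises with flow
# (assumed power-law rating c = a * q**b with multiplicative noise).
q = rng.lognormal(1.0, 0.8, 365)
c = 2.0 * q**0.5 * rng.lognormal(0.0, 0.1, 365)

seconds_per_day = 86400.0
true_export = float(np.sum(c * q) * seconds_per_day)   # g/yr if c in g/m3

# Method A: instantaneous monthly samples, scaled up to a year
m = np.arange(0, 365, 30)
monthly_export = float(np.mean(c[m] * q[m]) * 365 * seconds_per_day)

# Method B: regress ln c on ln q from weekly samples, predict every day
# (the small log-retransformation bias correction is ignored here)
w = np.arange(0, 365, 7)
b1, b0 = np.polyfit(np.log(q[w]), np.log(c[w]), 1)
c_hat = np.exp(b0) * q**b1
regression_export = float(np.sum(c_hat * q) * seconds_per_day)
```

Because this solute's concentration is strongly flow-correlated, the regression estimate driven by the continuous flow record typically tracks the true export far more closely than the sparse monthly products, which is the pattern the abstract reports.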

16.
The largest grains found in samples of transported sediment are commonly used to estimate flow competence. With samples from a range of flows, a relationship between the flow and the largest mobile grain can be derived and used to estimate the critical shear stress for incipient motion of the different grain sizes in the bed sediment or, inversely, to estimate the magnitude of the flow from the largest grain found in a transport sample. Because these estimates are based on an extreme value of the transport grain-size distribution, however, they are subject to large errors and are sensitive to the effect of sample size, which tends to vary widely in sediment transport samples from natural flows. Furthermore, estimates of the critical shear stress based on the largest sampled moving grain cannot be scaled in a manner that permits reasonable comparison between fractions. The degree to which sample size and scaling problems make largest-grain estimates of fractional critical shear stress deviate from a true relationship cannot be predicted exactly, although the direction of such a deviation can be demonstrated. The large errors and unknown bias suggest that the largest sampled mobile grain is not a reliable predictor of either critical shear stress or flow magnitude. It is possible to define a single flow competence for the entire mixture, based on a central value of the transport grain-size distribution. Such a measure is relatively stable, does not require between-fraction scaling, and appears to be well supported by observation.

17.
18.
We have used two different sampling techniques to study the geochemical response of a small lowland rural catchment to episodic storm runoff. The first method involves traditional daily spot sampling and has been used to develop a standard end-member mixing analysis (EMMA) of the relative contributions of ground water flow and surface runoff to the total stream flow. The second method utilizes a continuous sampling device, powered by an osmotic pump, to produce an integrated 24-h sample of the stream flow. When combined with the EMMA results from the spot samples, analyses of the integrated samples reveal the presence of a third component that makes a significant contribution to the dissolved NO3, Ca and K export from the catchment during the rising limb of the hydrographic profile of a storm event following a prolonged dry period. The storm occurred in the middle of the night, so that the response of the stream chemistry was not captured by the daily samples. We hypothesize that this third component is derived from the flushing of stored soil water that contains the geochemical signature of decaying vegetation. Copyright © 2011 John Wiley & Sons, Ltd.
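The two-component end-member mixing analysis (EMMA) used in the first method reduces to a single linear mixing equation per tracer. A minimal sketch with hypothetical end-member concentrations (not the catchment's measured values):

```python
import numpy as np

def two_component_mixing(c_stream, c_gw, c_runoff):
    """Fraction of streamflow from ground water, from one conservative
    tracer: f_gw = (C_stream - C_runoff) / (C_gw - C_runoff)."""
    f = (c_stream - c_runoff) / (c_gw - c_runoff)
    return np.clip(f, 0.0, 1.0)

# Illustrative end members: ground water Ca = 40 mg/L, surface
# runoff Ca = 5 mg/L; three stream samples through a storm.
f_gw = two_component_mixing(np.array([33.0, 19.0, 12.0]), 40.0, 5.0)
```

A third component, such as the flushed soil water the abstract hypothesizes, shows up precisely when samples fall off the two-end-member mixing line in tracer space.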

19.
Stream sampling programmes for water quality estimation constitute a statistical survey of a correlated population. The properties of parameter and other estimates made from sample values from such programmes are set in the context of statistical sampling theory. It is shown that a model-based rather than a design-based approach to statistical analysis is usually appropriate. The influence of model structure and sampling design on the robustness and suitability of estimation procedures is investigated, and relationships with kriging are demonstrated. Methodology is discussed with reference to data from a UK sampling programme.
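The model-based point matters because serial correlation inflates the variance of the sample mean well beyond the design-based sigma^2/n. A sketch under an assumed AR(1) correlation structure (the rho value is illustrative):

```python
import numpy as np

def var_mean_ar1(sigma2, n, rho):
    """Model-based variance of the sample mean of n equally spaced
    observations from an AR(1) process with lag-1 correlation rho."""
    k = np.arange(1, n)
    return (sigma2 / n) * (1.0 + 2.0 * np.sum((1.0 - k / n) * rho**k))

# 52 weekly samples of a serially correlated water quality series
# carry far less information than 52 independent samples would.
v_indep = var_mean_ar1(1.0, 52, 0.0)   # classical sigma^2 / n
v_corr = var_mean_ar1(1.0, 52, 0.7)
n_eff = 1.0 / v_corr                   # effective number of samples
```

Ignoring the correlation, a design-based analysis would report the smaller v_indep and so overstate the precision of the estimated mean, which is the abstract's argument for model-based inference.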

20.
Chemical hydrograph separation using electrical conductivity and digital filters is applied to quantify runoff components in the 1,640 km2 semi-arid Kaap River catchment and its subcatchments in South Africa. A rich data set of weekly to monthly water quality data ranging from 1978 to 2012 (450 to 940 samples per site) was analysed at 4 sampling locations in the catchment. The data were routinely collected by South Africa's national Department of Water and Sanitation, using standard sampling procedures. Chemical hydrograph separation using electrical conductivity (EC) as a tracer was used as reference and a recursive digital filter was then calibrated for the catchment. Results of the two-component hydrograph separation indicate the dominance of baseflow in the low flow regime, with a contribution of about 90% of total flow; however, during the wet season, baseflow accounts for 50% of total flow. The digital filter parameters were very sensitive and required calibration, using chemical hydrograph separation as a reference. Calibrated baseflow estimates ranged from 40% of total flow at the catchment outlet to 70% in the tributaries. The study demonstrates that routinely monitored water quality data, especially EC, can be used as a meaningful tracer, which could also aid in the calibration of a digital filter method and reduce uncertainty of estimated flow components. This information enhances our understanding of how baseflow is generated and contributed to streamflow throughout the year, which can aid in quantification of environmental flows, as well as to better parameterize hydrological models used for water resources planning and management. Baseflow estimates can also be useful for groundwater and water quality management.
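Both separation methods in the abstract can be sketched compactly: the EC mixing equation as the tracer reference, and a one-parameter recursive filter of the Lyne-Hollick form whose alpha would be calibrated against it. The event data below are invented for illustration:

```python
import numpy as np

def ec_baseflow(q, ec, ec_bf, ec_ro):
    """Two-component separation with EC as tracer:
    Qb = Q * (EC - EC_runoff) / (EC_baseflow - EC_runoff)."""
    f = np.clip((ec - ec_ro) / (ec_bf - ec_ro), 0.0, 1.0)
    return q * f

def lyne_hollick_baseflow(q, alpha=0.925):
    """One-parameter recursive digital filter (Lyne-Hollick form);
    alpha is typically calibrated, here against the tracer result."""
    qf = np.zeros_like(q, dtype=float)   # quick-flow component
    for t in range(1, len(q)):
        qf[t] = alpha * qf[t - 1] + 0.5 * (1 + alpha) * (q[t] - q[t - 1])
        qf[t] = max(0.0, min(qf[t], q[t]))
    return q - qf

# Small synthetic event: rising then receding flow, diluting EC
q = np.array([2.0, 2.0, 8.0, 14.0, 9.0, 5.0, 3.0, 2.5])
ec = np.array([400, 400, 190, 120, 200, 300, 360, 380], float)
qb_tracer = ec_baseflow(q, ec, ec_bf=400.0, ec_ro=60.0)
qb_filter = lyne_hollick_baseflow(q)
```

In practice alpha would be adjusted until qb_filter best matches qb_tracer, which is the calibration strategy the study applies to the Kaap River records.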
