Similar documents
Found 20 similar documents (search time: 234 ms)
1.
Practical application of the power-law regression model with an unknown location parameter can be plagued by non-finite least squares parameter estimates. This presents a serious problem in hydrology, since stream flow data are mainly obtained using an estimated stage–discharge power-law rating curve. This study provides a set of sufficient requirements on the data that ensure the existence of finite least squares parameter estimates for a power-law regression with an unknown location parameter. It is shown that, in practice, these requirements are in most cases also necessary for a finite least squares solution to exist. Furthermore, it is proved that the model itself produces, with finite probability, data having non-finite least squares parameter estimates. The implications of this result are discussed in the context of asymptotic predictions, inference and experimental design. A Bayesian approach to the actual regression problem is recommended.
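To illustrate the kind of fit involved (a minimal sketch with hypothetical stage–discharge numbers, not the authors' data or code), the rating curve Q = C(h − a)^b can be fitted by profiling the location parameter a over a grid and solving the remaining log-linear least squares problem at each candidate value:

```python
import math

# Hypothetical stage (h, m) and discharge (Q, m^3/s) measurements.
stages = [1.2, 1.5, 2.0, 2.6, 3.3, 4.1]
flows  = [3.1, 6.8, 15.2, 28.9, 52.0, 84.5]

def fit_power_law(h, q, a):
    """Least squares fit of log Q = log C + b*log(h - a) for a fixed
    location parameter a; returns (C, b, residual sum of squares)."""
    x = [math.log(hi - a) for hi in h]
    y = [math.log(qi) for qi in q]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    b = sxy / sxx
    logC = ybar - b * xbar
    rss = sum((yi - (logC + b * xi)) ** 2 for xi, yi in zip(x, y))
    return math.exp(logC), b, rss

# Profile a over a grid strictly below the lowest observed stage.
best = min((fit_power_law(stages, flows, a) + (a,)
            for a in [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]),
           key=lambda t: t[2])
C, b, rss, a = best
print(f"a={a:.1f}  C={C:.2f}  b={b:.2f}  RSS={rss:.4f}")
```

The non-finiteness issue the abstract discusses arises because the profiled RSS can keep decreasing as a approaches the lowest observed stage, so the grid boundary here is an illustrative simplification.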

2.
This paper presents a Bayesian approach for fitting the standard power-law rating curve model to a set of stage-discharge measurements. Methods for eliciting both regional and at-site prior information, and issues concerning the determination of prior forms, are discussed. An efficient MCMC algorithm for the specific problem is derived. The appropriateness of the proposed method is demonstrated by applying the model to both simulated and real-life data. However, some problems came to light in the applications, and these are discussed.

3.
The estimation of missing rainfall data is an important problem for data analysis and modelling studies in hydrology. This paper develops a Bayesian method to address missing rainfall estimation from runoff measurements based on a pre-calibrated conceptual rainfall–runoff model. The Bayesian method assigns posterior probability of rainfall estimates proportional to the likelihood function of measured runoff flows and prior rainfall information, which is represented by uniform distributions in the absence of rainfall data. The likelihood function of measured runoff can be determined via the testing of different residual error models in the calibration phase. The application of this method to a French urban catchment indicates that the proposed Bayesian method is able to assess missing rainfall and its uncertainty based only on runoff measurements, which provides an alternative to the reverse model for missing rainfall estimates.
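The posterior-proportional-to-likelihood-times-uniform-prior construction can be sketched for a single time step (a toy linear runoff model with hypothetical parameters, not the paper's calibrated conceptual model):

```python
import math

# Toy conceptual model: runoff = k * rainfall, with k assumed known from
# calibration, and a Gaussian residual error of standard deviation sigma.
k, sigma = 0.6, 0.4
q_obs = 3.1            # measured runoff for the step with missing rainfall

def log_likelihood(rain):
    """Gaussian log-likelihood of the measured runoff given a rainfall value."""
    q_sim = k * rain
    return -0.5 * ((q_obs - q_sim) / sigma) ** 2

# Uniform prior on [0, 20] mm; posterior evaluated on a grid and normalized.
grid = [i * 0.02 for i in range(1001)]   # 0 .. 20 mm
w = [math.exp(log_likelihood(r)) for r in grid]
z = sum(w)
post_mean = sum(r * wi for r, wi in zip(grid, w)) / z
print(f"posterior mean rainfall ~ {post_mean:.2f} mm")
```

With a flat prior the posterior mean lands near the value that reproduces the measured runoff (q_obs / k), and the posterior spread quantifies the rainfall uncertainty the abstract refers to.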

4.
Recent advances in sediment fingerprinting research have seen Bayesian mixing models being increasingly employed as an effective method to coherently translate component uncertainties into source apportionment results. Here, we advance earlier work by presenting an extended Bayesian mixing model capable of providing a full Bayes treatment of geochemical uncertainties. The performance of the extended full Bayes model was assessed against the equivalent empirical Bayes model and traditional frequentist optimisation. The performance of models coded in different Bayesian software (JAGS and Stan) was also evaluated, alongside an assessment of model sensitivity to reduced source representativeness and nonconservative fingerprint behaviour. Results revealed comparable accuracy and precision for the full and empirical Bayes models across both synthetic and real sediment geochemistry datasets, demonstrating that the empirical treatment of source data here represents a close approximation of the full Bayes treatment. Contrasts in the performance of models coded in JAGS and Stan revealed that the choice of software employed can impact significantly upon source apportionment results. Bayesian models coded in Stan were the least sensitive to both reduced source representativeness and nonconservative fingerprint behaviour, indicating Stan as the preferred software for future Bayesian sediment fingerprinting studies. Whilst the frequentist optimisation generally yielded comparable accuracy to the Bayesian models, uncertainties around apportionment estimates were substantially greater and the frequentist model was less effective at dealing with nonconservative behaviour. Overall, the effective performance of the extended full Bayes mixing model coded in Stan represents a notable advancement in source apportionment modelling relative to previous approaches. 
Both the mixing model and the software comparisons presented here should provide useful guidelines for future sediment fingerprinting studies.
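The frequentist end of the comparison above can be illustrated with a deliberately tiny unmixing example (two hypothetical sources and two tracers; real fingerprinting models handle many tracers, source variability, and sample full posteriors of the proportions):

```python
# Hypothetical two-source, two-tracer unmixing: the sediment tracer
# concentration is a proportion-weighted mixture of source means.
sources = {"topsoil": [12.0, 340.0],     # tracer means per source
           "channel": [28.0, 150.0]}
mixture = [18.0, 270.0]                  # measured sediment tracers

# With two sources, p_topsoil + p_channel = 1, so each tracer gives a
# direct estimate of p_topsoil; averaging them is a crude frequentist
# point estimate (Bayesian mixing models instead propagate uncertainty).
ps = []
for tracer in range(2):
    a, b = sources["topsoil"][tracer], sources["channel"][tracer]
    ps.append((mixture[tracer] - b) / (a - b))
p_topsoil = sum(ps) / len(ps)
print(f"topsoil proportion ~ {p_topsoil:.2f}")
```

The spread between the per-tracer estimates hints at exactly the component uncertainty that the Bayesian treatment translates coherently into the apportionment result.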

5.
The Bayesian inverse approach proposed by Woodbury and Ulrych (2000) is extended to estimate the transmissivity fields of highly heterogeneous aquifers for steady state ground water flow. Boundary conditions are of Dirichlet and Neumann type, and sink and source terms are included. A first-order Taylor series approximation of the exponential terms introduced by sinks and sources or the Neumann condition in the governing equation is adopted. Such a treatment leads to a linear finite element formulation between hydraulic head and perturbations of the logarithm of transmissivity, denoted ln(T). An updating procedure similar to that of Woodbury and Ulrych (2000) can be performed. This new algorithm is examined against a generic example. It is found that the linearized solution approximates the true solution with an R² coefficient of 0.96 for an ln(T) variance of 9 in the test case. The addition of hydraulic head data is shown to improve the ln(T) estimates, in comparison to simply interpolating the sparse ln(T) data alone. The new Bayesian code is also employed to calibrate a high-resolution finite difference MODFLOW model of the Edwards Aquifer in southwest Texas. The posterior ln(T) field from this application yields a better head fit when compared to the prior ln(T) field determined from upscaling and cokriging. We believe that traditional MODFLOW grids could be imported into the new Bayes code fairly seamlessly, thereby enhancing the existing calibration of many aquifers.

6.
A common way to simulate fluid flow in porous media is to use Lattice Boltzmann (LB) methods. Permeability predictions from such flow simulations are controlled by parameters whose settings must be calibrated in order to produce realistic modelling results. Herein we focus on the simplest and most commonly used implementation of the LB method: the single-relaxation-time BGK model. A key parameter in the BGK model is the relaxation time τ which controls flow velocity and has a substantial influence on the permeability calculation. Currently there is no rigorous scheme to calibrate its value for models of real media. We show that the standard method of calibration, by matching the flow profile of the analytic Hagen-Poiseuille pipe-flow model, results in a BGK-LB model that is unable to accurately predict permeability even in simple realistic porous media (herein, Fontainebleau sandstone). In order to reconcile the differences between predicted permeability and experimental data, we propose a method to calibrate τ using an enhanced Transitional Markov Chain Monte Carlo method, which is suitable for parallel computer architectures. We also propose a porosity-dependent τ calibration that provides an excellent fit to experimental data and which creates an empirical model that can be used to choose τ for new samples of known porosity. Our Bayesian framework thus provides robust predictions of permeability of realistic porous media, herein demonstrated on the BGK-LB model, and should therefore replace the standard pipe-flow based methods of calibration for more complex media. The calibration methodology can also be extended to more advanced LB methods.

7.
Maximum-likelihood estimators properly represent measurement error and thus provide a statistically sound basis for evaluating the adequacy of a model fit and for finding the multivariate parameter confidence region. We demonstrate the advantages of using maximum-likelihood estimators rather than simple least-squares estimators for the problem of finding unsaturated hydraulic parameters. Inversion of outflow data given independent retention data can be treated by an extension to a Bayesian estimator. As an example, we apply the methodology to retention and transient unsaturated outflow observations, both obtained on the same medium sand sample. We found the van Genuchten expression to be adequate for the retention data, as the best fit was within measurement error. The Cramér–Rao confidence bound described the true parameter uncertainty approximately. The Mualem–van Genuchten expression was, however, inadequate for our outflow observations, suggesting that the parameters (α, n) may not always be equivalent in describing both retention and unsaturated conductivity.

8.
Estimation of low flows in rivers continues to be a vexing problem despite advances in statistical and process-based hydrological models. We develop a method to estimate minimum streamflow at seasonal to annual timescales from measured streamflow, based on regional similarity in the deviations of daily streamflow from minimum streamflow for a period of interest. The method is applied to 1,019 gauged sites in the Western United States for June to December 2015. The gauges were clustered into six regions with distinct timing and magnitude of low flows. A gamma distribution was fitted each day to the deviations in specific discharge (daily streamflow divided by drainage area) from minimum specific discharge for gauges in each region. The Kolmogorov–Smirnov test identified days when the gamma distribution was adequate to represent the distribution of deviations in a region. The performance of the gamma distribution was evaluated at gauges by comparing daily estimates of minimum streamflow with estimates from area-based regression relations for minimum streamflow. Each region had at least 8 days during the period when streamflow measurements would provide better estimates than the regional regression equation, but the number of such days varied by region depending on aridity and homogeneity of streamflow within the region. Synoptic streamflow measurements at ungauged sites have value for estimating minimum streamflow and improving the spatial resolution of hydrological models in regions with streamflow-gauging networks.
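The fit-then-test step described above can be sketched as follows (assuming SciPy is available; the deviations here are synthetic stand-ins for one region on one day, not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical specific-discharge deviations (daily flow per unit area
# minus the period-minimum value) for the gauges in one region, one day.
deviations = rng.gamma(shape=2.0, scale=0.5, size=200)

# Fit a gamma distribution with the location fixed at zero, since the
# deviations are non-negative by construction.
shape, loc, scale = stats.gamma.fit(deviations, floc=0.0)

# Kolmogorov-Smirnov test of the fitted distribution against the data;
# the day is retained when the gamma fit is not rejected.
ks_stat, p_value = stats.kstest(deviations, "gamma", args=(shape, loc, scale))
adequate = p_value > 0.05
print(f"shape={shape:.2f} scale={scale:.2f} KS p={p_value:.3f} adequate={adequate}")
```

Note that applying the KS test with parameters estimated from the same sample makes the test conservative; the sketch ignores that refinement.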

9.
The effect of channel size on residence time distributions (RTDs) of solute in rivers is investigated in this paper using tracer test data and the variable residence time (VART) model. Specifically, the investigation focuses on the influence of shear dispersion and hyporheic exchange on the shape of solute RTDs, and on how these two transport processes prevail in larger and smaller streams, respectively, leading to distinct tails of the RTD. Simulation results show that (1) RTDs are dispersion-dependent and thereby channel-size (scale) dependent, with the shape of the RTD changing with increasing longitudinal dispersion coefficient. Small streams with negligible dispersion coefficient may display various types of RTD, from upward-curving patterns to a straight line (power-law distributions) and further to downward-curving lognormal distributions, when plotted in log–log coordinates. Moderate-sized rivers are transitional in terms of RTDs and commonly exhibit lognormal and power-law RTDs; (2) the incorporation of water and solute losses/gains in the VART model can improve simulation results and make parameter values more reasonable; (3) the ratio of the time to peak concentration to the minimum mean residence time is equal to the recovery ratio of tracer, a relation that provides a simple method for determining the minimum mean residence time; and (4) the VART model is able to reproduce various RTDs observed in rivers with 3–4 fitting parameters, with no user-specified RTD functions needed.

10.
11.
Selection of a flood frequency distribution and associated parameter estimation procedure is an important step in flood frequency analysis. This is, however, a difficult task due to problems in selecting the best-fit distribution from the large number of candidate distributions and parameter estimation procedures available in the literature. This paper presents a case study with flood data from Tasmania in Australia, which examines four model selection criteria: Akaike Information Criterion (AIC), Akaike Information Criterion—second-order variant (AICc), Bayesian Information Criterion (BIC) and a modified Anderson–Darling Criterion (ADC). It has been found from the Monte Carlo simulation that ADC is more successful in recognizing the parent distribution correctly than the AIC and BIC when the parent is a three-parameter distribution. On the other hand, AIC and BIC are better in recognizing the parent distribution correctly when the parent is a two-parameter distribution. Of the seven different probability distributions examined for Tasmania, it has been found that two-parameter distributions are preferable to three-parameter ones, with the Log Normal distribution appearing to be the best choice. The paper also evaluates the three most widely used parameter estimation procedures for the Log Normal distribution: method of moments (MOM), maximum likelihood estimation (MLE) and the Bayesian Markov chain Monte Carlo method (BAY). It has been found that the BAY procedure provides better parameter estimates for the Log Normal distribution, resulting in flood quantile estimates with smaller bias and standard error than those from MOM and MLE. The findings from this study would be useful in flood frequency analyses in other Australian states and in other countries, particularly when selecting an appropriate probability distribution from a number of alternatives.
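The AIC/BIC bookkeeping behind such a comparison is simple to sketch for the two-parameter Log Normal case (the flood series below is invented for illustration; AIC = 2k − 2·lnL and BIC = k·ln(n) − 2·lnL, with k the number of fitted parameters):

```python
import math

# Hypothetical annual maximum flood series (m^3/s).
floods = [120, 95, 210, 160, 310, 140, 180, 250, 130, 170,
          115, 205, 145, 280, 190, 160, 135, 225, 150, 175]

def lognormal_aic_bic(x):
    """MLE for the two-parameter Log Normal and the resulting AIC/BIC."""
    logs = [math.log(v) for v in x]
    n = len(logs)
    mu = sum(logs) / n
    var = sum((l - mu) ** 2 for l in logs) / n        # MLE variance
    sigma = math.sqrt(var)
    # Log-likelihood of the lognormal density evaluated at the MLE.
    ll = sum(-math.log(v * sigma * math.sqrt(2 * math.pi))
             - (math.log(v) - mu) ** 2 / (2 * var) for v in x)
    k = 2                                             # parameters: mu, sigma
    aic = 2 * k - 2 * ll
    bic = k * math.log(n) - 2 * ll
    return aic, bic

aic, bic = lognormal_aic_bic(floods)
print(f"Log Normal: AIC={aic:.1f}  BIC={bic:.1f}")
```

Repeating the same computation for each candidate distribution and ranking by criterion value is all that AIC/BIC selection amounts to; BIC penalizes extra parameters more heavily once n exceeds about 7.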

12.
Many dating techniques include significant error terms which are not independent between the samples to be dated. This is typically the case in Optically Stimulated Luminescence (OSL) dating, where the conversion from characteristic equivalent doses to the corresponding ages using the annual dosimetry data includes error terms that are common to all produced datings. Dealing with these errors is essential when estimating ages from a set of datings whose chronological ordering is known. In this work, we propose and study a Bayesian model to address this problem. For this purpose, we first consider a multivariate model with multiplicative Gaussian errors in a Bayesian framework. This model relates a set of characteristic equivalent doses to the corresponding ages while accounting for the systematic and non-systematic errors associated with the dosimetry. It thus offers the opportunity to deal properly with stratigraphic constraints within OSL datings, but also with other datings whose errors are independent of the systematic errors of OSL (e.g. radiocarbon). Then, we use this model to extend an existing Bayesian model for the assessment of characteristic equivalent doses from single-aliquot regenerative-dose (SAR) measurements. The overall Bayesian model leads to the joint estimation of all the variables (which include all the dose–response functions and characteristic equivalent doses) of a sequence of, possibly heterogeneous, datings. We also consider a more generic solution consisting of using the age model directly on a set of characteristic equivalent dose estimates and their associated standard errors. We finally give an example of application on a set of five OSL datings with stratigraphic constraints and observe a good adequacy between the two approaches.

13.
How can spatially explicit nonlinear regression modelling be used for obtaining nonpoint source loading estimates in watersheds with limited information? What is the value of additional monitoring, and where should future data-collection efforts focus? In this study, we address two frequently asked questions in watershed modelling by implementing Bayesian inference techniques to parameterize SPAtially Referenced Regressions On Watershed attributes (SPARROW), a model that empirically estimates the relation between in-stream measurements of nutrient fluxes and the sources/sinks of nutrients within the watershed. Our case study is the Hamilton Harbour watershed, a mixed agricultural and urban residential area located at the western end of Lake Ontario, Canada. The proposed Bayesian approach explicitly accounts for the uncertainty associated with the existing knowledge from the system and the different types of spatial correlation typically underlying the parameter estimation of watershed models. Informative prior parameter distributions were formulated to overcome the problem of inadequate data quantity and quality, whereas the potential bias introduced from the pertinent assumptions is subsequently examined by quantifying the relative change of the posterior parameter patterns. Our modelling exercise offers the first estimates of export coefficients and delivery rates from the different subcatchments and thus generates testable hypotheses regarding the nutrient export ‘hot spots’ in the studied watershed. Despite substantial uncertainties characterizing our calibration dataset, ranging from 17% to nearly 400%, we arrived at an uncertainty level for the whole-basin nutrient export estimates of only 36%. Finally, we conduct modelling experiments that evaluate the potential improvement of the model parameter estimates and the decrease of the predictive uncertainty if the uncertainty associated with the current nutrient loading estimates is reduced.
Copyright © 2012 John Wiley & Sons, Ltd.

14.
Bayesian methods for estimating multi-segment discharge rating curves
This study explores Bayesian methods for handling compound stage–discharge relationships, a problem which arises in many natural rivers. It is assumed that: (1) the stage–discharge relationship in each rating curve segment is a power-law with a location parameter, or zero-plane displacement; (2) the segment transitions are abrupt and continuous; and (3) multiplicative measurement errors are of equal variance. The rating curve fitting procedure is then formulated as a piecewise regression problem where the number of segments and the associated changepoints are assumed unknown. Procedures are developed for describing both global and site-specific prior distributions for all rating curve parameters, including the changepoints. Estimation and uncertainty analysis are carried out using Markov chain Monte Carlo (MCMC) simulation techniques. The first model explored accounts for parameter and model uncertainties in the interpolated area, i.e. within the range of available stage–discharge measurements. A second model is constructed in an attempt to include the uncertainty in extrapolation, which is necessary when the rating curve is used to estimate discharges beyond the highest or lowest measurement. This is done by assuming that the rate of changepoints both inside and outside the measured area follows a Poisson process. The theory is applied to actual data from Norwegian gauging stations. The MCMC solutions give results that appear sensible and useful for inferential purposes, though the latter model needs further effort to obtain a more efficient simulation scheme.
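The deterministic core of the piecewise problem, locating a single changepoint between two power-law segments, can be sketched with an exhaustive search (hypothetical measurements spanning two hydraulic controls; the sketch fits each segment independently in log-log space and ignores the continuity constraint and location parameters that the full Bayesian model handles):

```python
import math

# Hypothetical stage-discharge measurements from two hydraulic controls.
data = [(0.5, 1.0), (0.8, 2.6), (1.1, 5.1), (1.5, 9.8),       # lower control
        (2.0, 30.0), (2.6, 62.0), (3.3, 115.0), (4.0, 190.0)]  # upper control

def segment_rss(points):
    """RSS of a straight-line fit in log-log space (power-law segment)."""
    x = [math.log(h) for h, _ in points]
    y = [math.log(q) for _, q in points]
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    b = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y)) \
        / sum((xi - xb) ** 2 for xi in x)
    a = yb - b * xb
    return sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))

# Exhaustive changepoint search: split after index k, keep the best split.
best_k, best_rss = None, float("inf")
for k in range(2, len(data) - 2):          # at least 2 points per segment
    rss = segment_rss(data[:k]) + segment_rss(data[k:])
    if rss < best_rss:
        best_k, best_rss = k, rss

print(f"changepoint after point {best_k}, stage ~ {data[best_k - 1][0]} m")
```

The Bayesian formulation replaces this single point estimate with a posterior over both the number of changepoints and their locations, which is what carries the uncertainty into discharge estimates.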

15.
This paper presents a scheme describing low flow formation processes in areas with different environmental conditions, including the impact of the selection and explanatory power of predictors for a probabilistic model based on the Logit model. The research was carried out using 29 daily streamflow gauges located in the Lublin region of southeastern Poland for the hydrological period 1976–2018. Analysis resulted in two distinct low flow schemes. In the lowland rivers, low flows occur during the warm season and are related to evaporation exceeding precipitation. In the upland rivers, hydrogeological factors related to water levels in the local Cretaceous aquifers determine the occurrence of low flows. This differentiation affects the quality of the predictive models. For lowland rivers, models based on the climatic water balance with a monthly shift have a better fit, while these models used for upland rivers are characterized by an approximately 10% decrease in accuracy. For upland rivers, the combined CtHt models without shifts produce the best model fit. The generalized precision of the Logit models is around 80%–90%.
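A Logit model of this kind can be sketched with a minimal hand-rolled logistic regression (entirely hypothetical predictor and occurrence data, with the climatic water balance rescaled to order one; a real analysis would use a statistics package rather than plain gradient ascent):

```python
import math

# Hypothetical monthly climatic water balance (precipitation minus
# evaporation, rescaled) and low-flow occurrence (1 = low flow observed).
cwb = [-0.60, -0.45, -0.30, -0.20, -0.10, 0.00,
        0.10, 0.25, 0.40, 0.55, 0.70, 0.90]
low = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0]

def sigmoid(z):
    """Numerically safe logistic function."""
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

# Fit P(low flow) = sigmoid(b0 + b1 * cwb) by gradient ascent on the
# log-likelihood.
b0, b1, lr = 0.0, 0.0, 0.1
for _ in range(5000):
    r = [y - sigmoid(b0 + b1 * x) for x, y in zip(cwb, low)]
    b0 += lr * sum(r)
    b1 += lr * sum(ri * x for ri, x in zip(r, cwb))

correct = sum((sigmoid(b0 + b1 * x) >= 0.5) == bool(y)
              for x, y in zip(cwb, low))
accuracy = correct / len(low)
print(f"b1={b1:.2f}  accuracy={accuracy:.0%}")
```

A negative slope means low flows become more probable as the water balance turns negative, matching the lowland mechanism described above; the classification accuracy plays the role of the paper's 80%–90% generalized precision.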

16.
When large quantities of seismic data are involved it is impossible to examine all gathers by eye for AVO anomalies. The standard approach is to compute, for each amplitude profile (at a specific time) on each gather, the intercept and gradient of a straight-line fit to seismic amplitudes. These intercepts and gradients are each plotted as a sort of seismic section: an intercept section and a gradient section. Estimation of the intercept and gradient for a straight-line fit to each amplitude profile traditionally proceeds via least-squares. Two undesirable features can be hidden from the user by the fitting procedure, namely (i) the effect of outlying or uncharacteristic amplitudes on the intercept and gradient estimates, and (ii) complete breakdown of the straight-line model for the amplitudes, rendering the intercept and gradient estimates meaningless. It should be remembered that least-squares can always fit any sequence of numbers to any other sequence of numbers; checks are needed to show that the result is meaningful. It is shown that statistically robust estimation methods can greatly limit the damage done by outlying amplitudes, and that a simple test on the model, the runs statistic, is capable of detecting breakdown of the straight-line assumption. It is demonstrated using two seismic data sets that these two techniques, used in tandem, facilitate much better quality control of AVO intercept and gradient calculations.
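The two safeguards can be sketched together (hypothetical amplitude profile with one outlying trace; the robust estimator here is Theil–Sen, standing in for whichever robust method the paper uses, and the runs count is computed without its significance threshold):

```python
from itertools import combinations

# Hypothetical AVO amplitude profile: amplitude vs sin^2(theta),
# with one outlying trace at x = 0.20.
x = [0.00, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35]
y = [0.21, 0.19, 0.18, 0.16, 0.45, 0.13, 0.11, 0.10]

# Robust Theil-Sen fit: median of all pairwise slopes, then the median
# intercept implied by that slope.
slopes = sorted((yj - yi) / (xj - xi)
                for (xi, yi), (xj, yj) in combinations(zip(x, y), 2))
gradient = slopes[len(slopes) // 2]
intercepts = sorted(yi - gradient * xi for xi, yi in zip(x, y))
intercept = intercepts[len(intercepts) // 2]

# Runs statistic on residual signs: a run count far below the expected
# value flags breakdown of the straight-line model.
signs = [yi - (intercept + gradient * xi) >= 0 for xi, yi in zip(x, y)]
runs = 1 + sum(s1 != s2 for s1, s2 in zip(signs, signs[1:]))
print(f"intercept={intercept:.3f} gradient={gradient:.3f} runs={runs}")
```

An ordinary least-squares fit on the same profile would be dragged toward the outlier at x = 0.20, whereas the median-based fit recovers the negative gradient of the remaining traces.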

17.
The Usutu virus is an arbovirus transmitted by mosquitoes that causes disease in birds. The virus was detected in Austria for the first time in 2001, while a major outbreak occurred in 2003. Rubel et al. (2008) developed a nine-compartment deterministic SEIR model to explain the spread of the disease. We extended this to a hierarchical Bayes model assuming random variation in temperature data, in reproduction data of birds, and in the number of birds found to be infected. The model was implemented in R, combined with the FORTRAN subroutine for the original deterministic model. Analysis was performed by MCMC using a random walk Metropolis scheme. Posterior means, medians, and credible intervals were calculated for the parameters. The hierarchical Bayes approach proved to be fruitful in extending the deterministic model into a stochastic one. It allowed for Bayesian point and interval estimation and quantification of the uncertainty of predictions. The analysis revealed that some model parameters were not identifiable; we therefore kept some of them constant and analyzed the others conditional on them. Identifiability problems are common in models aiming to mirror the mechanism of the process, since parameters with natural interpretation are likely to exhibit interrelationships. This study illustrated that Bayesian modeling combined with conditional analysis may help in those cases. Its application to the Usutu model improved model fit and revealed the structure of interdependencies between model parameters: it demonstrated that determining some of them experimentally would enable estimation of the others, except one, from available data.

18.
This paper discusses two model-based geostatistical methods for spatial interpolation of the number of days on which ground-level ozone exceeds a threshold level. The first method assumes counts to approximately follow a Poisson distribution, while the second assumes a log-Normal distribution. First, these methods were compared using an extensive data set covering the Netherlands, Belgium and Germany. Second, the focus was placed on the Netherlands alone, where only a small data set was used. Bayesian techniques were used for parameter estimation and interpolation. Parameter estimates are comparable due to the log-link in both models. The Poisson model predicts more accurately (maximum kriging standard deviation of 2.16 compared to 2.69) but shows smoother surfaces than the log-Normal model. The log-Normal approach ensures a better representation of the observations and gives more realistic patterns (an RMSE of 2.26 compared to 2.44). Model-based geostatistical procedures are useful for interpolating limited data sets of counts of ozone exceedance days. Spatial risk estimates using existing prior information can be made relating health effects to environmental thresholds.

19.
Precise measurements of seismological Q are difficult because we lack detailed knowledge on how the Earth’s fine velocity structure affects the amplitude data. In a number of recent papers, Morozov (Geophys J Int 175:239–252, 2008; Seism Res Lett 80:5–7, 2009; Pure Appl Geophys, this volume, 2010) proposes a new procedure intended to improve Q determinations. The procedure relies on quantifying the structural effects using a new form of geometrical spreading (GS) model that has an exponentially decaying component with time, e^(−γt); γ is a free parameter and is measured together with Q. Morozov has refit many previously published sets of amplitude attenuation data. In general, the new Q estimates are much higher than previous estimates, and all of the previously estimated frequency-dependence values for Q disappear in the new estimates. In this paper I show that (1) the traditional modeling of seismic amplitudes is physically based, whereas the new model lacks a physical basis; (2) the method of measuring Q using the new model is effectively just a curve-fitting procedure using a first-order Taylor series expansion; (3) previous high-frequency data that were fit by a power-law frequency dependence for Q are expected to be also fit by the first-order expansion in the limited frequency bands involved, because of the long tails of power-law functions; (4) recent laboratory measurements of intrinsic Q of mantle materials at seismic frequencies provide independent evidence that intrinsic Q is often frequency-dependent, which should lead to frequency-dependent total Q; (5) published long-period surface wave data that were used to derive several recent Q models inherently contradict the new GS model; and (6) previous modeling has already included a special case that is mathematically identical to the new GS model, but with physical assumptions and measured Q values that differ from those with the new GS model. Therefore, while individually the previous Q measurements have limited precision, they cannot be improved by using the new GS model. The large number of Q measurements by seismologists are sufficient to show that Q values in the Earth are highly laterally variable and are often frequency dependent.

20.
First, I benchmark existing methods of calculating subsurface 26Al, 10Be, and 14C production rates due to cosmic-ray muons against published calibration data from bedrock cores and mine excavations. This shows that methods based on downward propagation of the surface muon energy spectrum fit calibration data adequately. Of these methods, one that uses a simpler geographic scaling method based on energy-dependent attenuation of muons in the atmosphere appears to fit calibration data better than a more complicated one that uses the results of a global particle transport model to estimate geographic variation in the surface muon energy spectrum. Second, I show that although highly simplified and computationally much cheaper exponential function approximations for subsurface production rates are not globally adequate for accurate production rate estimates at arbitrary location and depth, they can be used with acceptable accuracy for many exposure-dating and erosion-rate-estimation applications.
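The exponential approximation referred to above has the generic form P(z) = P0 · exp(−z·ρ/Λ); the sketch below uses hypothetical values for the surface production rate P0, rock density ρ, and effective attenuation length Λ, purely to show the shape of the profile (the paper's point is precisely that a single such exponential is not globally adequate):

```python
import math

# Hypothetical parameters: surface production rate P0 (atoms/g/yr),
# rock density rho (g/cm^3), effective attenuation length L (g/cm^2).
P0, rho, L = 0.28, 2.65, 8500.0

def production_rate(depth_cm):
    """Exponential approximation of the muon production rate at depth."""
    return P0 * math.exp(-depth_cm * rho / L)

# The production rate falls to 1/e of its surface value at depth L/rho.
efold = L / rho
assert abs(production_rate(efold) - P0 / math.e) < 1e-12
print(f"e-folding depth ~ {efold / 100:.1f} m")
```

The computational appeal is clear: evaluating one exponential per depth replaces propagating the full surface muon energy spectrum downward.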

