Similar Documents
1.
Input uncertainty is as significant as model error: it affects parameter estimation and yields biased and misleading results. This study performed a comprehensive comparison and evaluation of uncertainty estimates under the impact of precipitation errors, using GLUE and a Bayesian method based on the Metropolis-Hastings algorithm in a validated conceptual hydrological model (WASMOD). It aims to explain the sensitivity of, and differences between, GLUE and the Bayesian method when applied to a hydrological model under precipitation errors represented by constant and random multiplier parameters. The 95% confidence intervals of monthly discharge for low, medium and high flows were selected for comparison. Four indices, i.e. the average relative interval length, the percentage of observations bracketed by the confidence interval, the percentage of observations bracketed by the unit confidence interval, and the continuous ranked probability score (CRPS), were used for the sensitivity analysis under model input error via the GLUE and Bayesian methods. It was found that (1) the posterior distributions derived by the Bayesian method are narrower and sharper than those obtained by GLUE under precipitation errors, but the differences are quite small; (2) the Bayesian method is more sensitive than GLUE in its uncertainty estimates of discharge under the impact of precipitation errors; (3) both GLUE and the Bayesian method are more sensitive to precipitation errors in the uncertainty estimate of high flow than in those of the other flows; and (4) under the impact of precipitation errors, the CRPS results for low and medium flows are quite stable for both GLUE and the Bayesian method, whereas the CRPS for high flow is sensitive under the Bayesian method.
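As a rough illustration of how such comparison indices can be computed, the sketch below evaluates the average relative interval length, the coverage of a 95% interval, and the empirical ensemble CRPS; the arrays `obs`, `lower`, `upper` and `ens` are hypothetical stand-ins for observed and simulated monthly discharge, not outputs of WASMOD.

```python
import numpy as np

def interval_metrics(obs, lower, upper):
    """Average relative interval length and fraction of observations bracketed by a 95% interval."""
    aril = np.mean((upper - lower) / obs)                # interval width relative to the observation
    coverage = np.mean((obs >= lower) & (obs <= upper))  # percentage of observations bracketed
    return aril, coverage

def crps_ensemble(y, ens):
    """Empirical CRPS for one observation y and an ensemble of simulated discharges."""
    ens = np.asarray(ens, dtype=float)
    spread = np.abs(ens[:, None] - ens[None, :]).mean()  # mean pairwise distance within the ensemble
    return np.abs(ens - y).mean() - 0.5 * spread
```

A narrower interval (lower ARIL) with high coverage, and a lower CRPS, indicate sharper yet still reliable predictive distributions.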

2.
Topographic surveys inevitably contain error, introducing uncertainty into estimates of volumetric or mean change based on the differencing of repeated surveys. In the geomorphic community, uncertainty has often been framed as a problem of separating out real change from apparent change due purely to error, and addressed by removing measured change considered indistinguishable from random noise from analyses (thresholding). Thresholding is important when quantifying gross changes (i.e. total erosion or total deposition), which are systematically biased by random errors in stable parts of a landscape. However, net change estimates are not substantially influenced by those same random errors, and the use of thresholds results in inherently biased, and potentially misleading, estimates of net change and uncertainty. More generally, thresholding is unrelated to the important process of propagating uncertainty in order to place uncertainty bounds around final estimates. Error propagation methods for uncorrelated, correlated, and systematic errors are presented. Those equations demonstrate that uncertainties in modern net change analyses, as well as in gross change analyses using reasonable thresholds, are likely to be dominated by low-magnitude but highly correlated or systematic errors, even after careful attempts to reduce those errors. In contrast, random errors with little to no correlation largely cancel to negligible levels when averaged or summed. Propagated uncertainty is then typically insensitive to the precision of individual measurements, and is instead defined by the relative mean error (accuracy) over the area of interest. Given that real-world mean elevation changes in many landscape settings are often similar in magnitude to potential mean errors in repeat topographic analyses, reducing highly correlated or systematic errors will be central to obtaining accurate change estimates, while placing uncertainty bounds around those results provides essential context for their interpretation. Published 2018. This article is a U.S. Government work and is in the public domain in the USA.  相似文献   
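A minimal sketch of the error-propagation idea described above, assuming a DEM of difference with a random (uncorrelated) per-cell error and a systematic mean error; all numbers are hypothetical and only show how the uncorrelated term collapses when averaged over many cells while the systematic term does not.

```python
import numpy as np

def net_change_uncertainty(n_cells, cell_area, sigma_random, sigma_systematic, mean_dz):
    """1-sigma uncertainty of net volumetric change from a DEM of difference.

    sigma_random     : uncorrelated per-cell elevation error (m); largely cancels when averaged
    sigma_systematic : highly correlated / systematic mean elevation error (m); does not cancel
    """
    sigma_mean = np.sqrt(sigma_random**2 / n_cells + sigma_systematic**2)  # error of the mean change
    area = n_cells * cell_area
    return mean_dz * area, sigma_mean * area  # net volume change and its propagated uncertainty

# hypothetical values: 1e6 one-metre cells, 0.10 m random error, 0.02 m systematic error, 0.03 m mean change
print(net_change_uncertainty(1_000_000, 1.0, 0.10, 0.02, 0.03))
```

With these numbers the random term contributes only about 0.0001 m to the uncertainty of the mean change, while the 0.02 m systematic term passes through essentially unchanged, illustrating why propagated uncertainty in net change is typically governed by the relative mean error rather than by per-cell precision.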

3.
Hydrologic risk analysis for dam safety relies on a series of probabilistic analyses of rainfall-runoff and flow routing models, and their associated inputs. This is a complex problem in that the probability distributions of multiple independent and derived random variables need to be estimated in order to evaluate the probability of dam overtopping. Typically, parametric density estimation methods have been applied in this setting, and the exhaustive Monte Carlo simulation (MCS) of models is used to derive some of the distributions. Often, the distributions used to model some of the random variables are inappropriate relative to the expected behaviour of these variables, and as a result, simulations of the system can lead to unrealistic values of extreme rainfall or water surface levels and hence of the probability of dam overtopping. In this paper, three major innovations are introduced to address this situation. The first is the use of nonparametric probability density estimation methods for selected variables, the second is the use of Latin Hypercube sampling to improve the efficiency of MCS driven by the multiple random variables, and the third is the use of Bootstrap resampling to determine initial water surface level. An application to the Soyang Dam in South Korea illustrates how the traditional parametric approach can lead to potentially unrealistic estimates of dam safety, while the proposed approach provides rather reasonable estimates and an assessment of their sensitivity to key parameters.  相似文献   
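To illustrate the Latin Hypercube idea mentioned above, the sketch below stratifies the unit hypercube and shuffles strata independently per variable; the marginal distributions of the hydrologic inputs would then be imposed through their inverse CDFs. This is a generic LHS sketch, not the scheme implemented for the Soyang Dam study.

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, rng=None):
    """Latin Hypercube sample on the unit hypercube: one point in each equal-probability stratum."""
    rng = np.random.default_rng(rng)
    u = (rng.random((n_samples, n_vars)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_vars):
        u[:, j] = rng.permutation(u[:, j])   # decouple the strata across variables
    return u

# map the uniform samples through the inverse CDFs of the chosen marginals, e.g. (hypothetical marginal):
# rainfall = scipy.stats.lognorm(s=0.5, scale=80.0).ppf(u[:, 0])
```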

4.
The random function is a mathematical model commonly used in the assessment of uncertainty associated with a spatially correlated attribute that has been partially sampled. There are multiple algorithms for modeling such random functions, all sharing the requirement of specifying various parameters that have a critical influence on the results. The importance of finding ways to compare the methods and set their parameters so as to obtain results that better model uncertainty has increased as these algorithms have grown in number and complexity. Crossvalidation has been used in spatial statistics, mostly in kriging, for the analysis of mean square errors. An appeal of this approach is its ability to work with the same empirical sample available for running the algorithms. This paper goes beyond checking estimates by formulating a function sensitive to conditional bias. Under ideal conditions, such a function turns into a straight line, which can be used as a reference for preparing measures of performance. Applied to kriging, deviations from the ideal line provide a sensitivity to the semivariogram that is lacking in crossvalidation of kriging errors, and they are more sensitive to conditional bias than analyses of errors. In terms of stochastic simulation, in addition to finding better parameters, the deviations allow comparison of the realizations resulting from the application of different methods. Examples show improvements of about 30% in the deviations and approximately 10% in the square root of mean square errors between a reasonable starting model and the solutions obtained according to the new criteria.

5.
ABSTRACT

A linear approach is presented for analysing flood discharge series affected by measurement errors which are random in nature. A general model based upon the conditional probability concept is introduced to represent random errors and to analyse their effect on flood estimates. Flood predictions provided by quantiles are shown to be positively biased when performed from a sample of measured discharge. Though for design purposes such an effect is conservative, this bias cannot be neglected if the peak discharges are determined from stage measurements by means of the extrapolated tail of the rating curve for the gauging station concerned. Monte Carlo experiments, which have been carried out to analyse small sample effects, have finally shown that the use of the method of maximum likelihood is able to reduce the bias due to measurement errors in discharge data.  相似文献   
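A minimal sketch of the kind of Monte Carlo experiment described above: synthetic "true" annual floods are perturbed by random multiplicative measurement errors and a distribution is refitted by maximum likelihood. The Gumbel choice, the 15% error level and all other numbers are hypothetical, and the paper's own conditional-probability error model is not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_peaks = stats.gumbel_r.rvs(loc=500.0, scale=150.0, size=50, random_state=rng)  # "true" annual peaks, m3/s
measured = true_peaks * (1.0 + rng.normal(0.0, 0.15, size=true_peaks.size))         # random rating-curve errors

loc_hat, scale_hat = stats.gumbel_r.fit(measured)                  # maximum-likelihood fit to the noisy sample
q100_est = stats.gumbel_r.ppf(0.99, loc=loc_hat, scale=scale_hat)  # estimated 100-year quantile
q100_true = stats.gumbel_r.ppf(0.99, loc=500.0, scale=150.0)
print(q100_est, q100_true)   # repeating over many synthetic samples exposes the bias of the quantile estimate
```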

6.
Time series in the Earth Sciences are often characterized as self-affine long-range persistent, where the power spectral density, S, exhibits a power-law dependence on frequency, f: S(f) ~ f^(−β), with β the persistence strength. For modelling purposes, it is important to determine the strength of self-affine long-range persistence β as precisely as possible and to quantify the uncertainty of this estimate. After an extensive review and discussion of asymptotic long-range persistence and the more specific case of self-affine long-range persistence, we compare four common analysis techniques for quantifying self-affine long-range persistence: (a) rescaled range (R/S) analysis, (b) semivariogram analysis, (c) detrended fluctuation analysis, and (d) power spectral analysis. To evaluate these methods, we construct ensembles of synthetic self-affine noises and motions with different (1) time series lengths N = 64, 128, 256, …, 131,072, (2) modelled persistence strengths β_model = −1.0, −0.8, −0.6, …, 4.0, and (3) one-point probability distributions (Gaussian; log-normal with coefficient of variation c_v = 0.0 to 2.0; Lévy with tail parameter a = 1.0 to 2.0), and evaluate the four techniques by statistically comparing their performance. Over 17,000 sets of parameters are produced, each characterizing a given process; for each process type, 100 realizations are created. The four techniques give the following results in terms of systematic error (bias = average estimated β over 100 realizations minus modelled β) and random error (standard deviation of estimated β over 100 realizations): (1) Hurst rescaled range (R/S) analysis is not recommended, due to large systematic errors. (2) Semivariogram analysis shows no systematic errors but large random errors for self-affine noises with 1.2 ≤ β ≤ 2.8. (3) Detrended fluctuation analysis is well suited for time series with thin-tailed probability distributions and for persistence strengths of β ≥ 0.0. (4) Spectral techniques perform the best of all four: for self-affine noises with positive persistence (β ≥ 0.0) and symmetric one-point distributions, they have no systematic errors and, compared to the other three techniques, small random errors; for anti-persistent self-affine noises (β < 0.0) and asymmetric one-point probability distributions, spectral techniques have small systematic and random errors. For quantifying the strength of long-range persistence of a time series, benchmark-based improvements to the estimator, predicated on the performance for self-affine noises with the same time series length and one-point probability distribution, are proposed. This scheme adjusts for the systematic errors of the considered technique and results in realistic 95% confidence intervals for the estimated strength of persistence. We finish this paper by quantifying the long-range persistence (and corresponding uncertainties) of three geophysical time series (palaeotemperature, river discharge, and the auroral electrojet index), representing three different types of probability distribution: Gaussian, log-normal, and Lévy, respectively.
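A minimal sketch of the power spectral technique recommended above: β is estimated as minus the slope of a log-log regression of the periodogram on frequency. The benchmark-based bias correction proposed in the paper would then be applied on top of this raw estimate; the function below is a generic illustration only.

```python
import numpy as np

def spectral_beta(x, dt=1.0):
    """Estimate persistence strength beta from the periodogram slope, assuming S(f) ~ f**(-beta)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = x.size
    freqs = np.fft.rfftfreq(n, d=dt)[1:]             # drop the zero frequency
    power = np.abs(np.fft.rfft(x))[1:] ** 2 / n      # raw periodogram
    slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
    return -slope                                     # beta is minus the log-log slope
```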

7.
Structural strain modes are able to detect changes in local structural performance, but errors are inevitably intermixed in the measured data. In this paper, strain modal parameters are considered as random variables, and their uncertainty is analyzed by a Bayesian method based on the structural frequency response function (FRF). The estimates of strain modal parameters with maximal posterior probability are determined. Several independent measurements of the FRF of a four-story reinforced concrete frame structural model were performed in the laboratory. The ability to identify the stiffness change in a concrete column using the strain mode was verified. It is shown that the uncertainty of the natural frequency is very small. Compared with the displacement mode shape, the variations of strain mode shapes at each point are quite different. The damping ratios are more affected by the types of test systems. Except for the case where a high order strain mode does not identify local damage, the first order strain mode can provide an exact indication of the damage location.  相似文献   

8.
Sun Qingshan, Li Le. Earthquake (《地震》), 2018, 38(3): 92-102
Earthquakes that occurred along the northern segment of the Red River fault zone from 1999 to 2015 were relocated using the double-difference earthquake location method combined with waveform cross-correlation. After relocation the residuals decreased markedly, the epicentres became more tightly clustered than before relocation, and the vast majority of events lie in the middle to upper crust at depths of 5-15 km. Error analysis of the relocated events shows that, at the 95% confidence level, the major axes of the location error ellipses generally do not exceed 2.6 km and the horizontal and vertical location errors are both about 1.6 km, indicating that the relocation results are stable.
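A minimal sketch of the waveform cross-correlation step used to refine differential arrival times for double-difference relocation; `w1` and `w2` are hypothetical windowed waveforms of an event pair at a common station, and `dt` is the sample interval. This is a generic illustration, not the processing chain of the study above.

```python
import numpy as np

def cc_delay(w1, w2, dt):
    """Relative delay (s) and approximate peak correlation between two event waveforms."""
    w1 = (w1 - w1.mean()) / w1.std()
    w2 = (w2 - w2.mean()) / w2.std()
    cc = np.correlate(w1, w2, mode="full")
    lag = int(np.argmax(cc)) - (len(w2) - 1)   # positive lag: w1 arrives later than w2
    return lag * dt, cc.max() / len(w1)        # coefficient is approximate away from zero lag
```

Sub-sample precision would normally be obtained by interpolating around the correlation peak, and the resulting differential times feed the double-difference equations.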

9.
Three downscaling models, namely the Statistical Down‐Scaling Model (SDSM), the Long Ashton Research Station Weather Generator (LARS‐WG) model and an artificial neural network (ANN) model, have been compared in terms of various uncertainty attributes exhibited in their downscaled results of daily precipitation, daily maximum and minimum temperature. The uncertainty attributes are described by the model errors and the 95% confidence intervals in the estimates of means and variances of downscaled data. The significance of those errors has been examined by suitable statistical tests at the 95% confidence level. The 95% confidence intervals in the estimates of means and variances of downscaled data have been estimated using the bootstrapping method and compared with the observed data. The study has been carried out using 40 years of observed and downscaled daily precipitation data and daily maximum and minimum temperature data, starting from 1961 to 2000. In all the downscaling experiments, the simulated predictors of the Canadian Global Climate Model (CGCM1) have been used. The uncertainty assessment results indicate that, in daily precipitation downscaling, the LARS‐WG model errors are significant at the 95% confidence level only in a very few months, the SDSM errors are significant in some months, and the ANN model errors are significant in almost all months of the year. In downscaling daily maximum and minimum temperature, the performance of all three models is similar in terms of model errors evaluation at the 95% confidence level. But, according to the evaluation of variability and uncertainty in the estimates of means and variances of downscaled precipitation and temperature, the performances of the LARS‐WG model and the SDSM are almost similar, whereas the ANN model performance is found to be poor in that consideration. Further assessment of those models, in terms of skewness and average dry‐spell length comparison between observed and downscaled daily precipitation, indicates that the downscaled daily precipitation skewness and average dry‐spell lengths of the LARS‐WG model and the SDSM are closer to the observed data, whereas the ANN model downscaled precipitation underestimated those statistics in all months. Copyright © 2006 John Wiley & Sons, Ltd.  相似文献   
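As a rough illustration of the bootstrap step described above, the sketch below computes a percentile bootstrap confidence interval for the mean (or variance) of a downscaled series; `x` stands for a hypothetical array of downscaled daily values, and the 2000 resamples and 95% level are arbitrary choices.

```python
import numpy as np

def bootstrap_ci(x, stat=np.mean, n_boot=2000, alpha=0.05, rng=None):
    """Percentile bootstrap (1 - alpha) confidence interval for a statistic of a sample."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x)
    boots = np.array([stat(rng.choice(x, size=x.size, replace=True)) for _ in range(n_boot)])
    return np.quantile(boots, [alpha / 2.0, 1.0 - alpha / 2.0])

# e.g. bootstrap_ci(downscaled_january_precip, stat=np.var) for the variance of one month's values
```

Comparing the observed statistic with the interval obtained from each model's downscaled series indicates whether the model error is significant at the 95% level.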

10.
Shear‐wave polarization and time delay are attributes commonly used for fracture detection and characterization. In time‐lapse analysis these parameters can be used as indicators of changes in the fracture orientation and density. Indeed, changes in fracture characteristics provide key information for increased reservoir characterization and exploitation. However, relative to the data uncertainty, is the comparison of these parameters over time statistically meaningful? We present the uncertainty in shear‐wave polarization and time delay as a function of acquisition uncertainties, such as receiver and source misorientation, miscoupling and band‐limited random noise. This study is applied to a time‐lapse borehole seismic survey, recorded in Vacuum Field, New Mexico. From the estimated uncertainties for each survey, the uncertainty in the difference between the two surveys is 31° for the shear‐wave polarization angle and 4 ms for the shear‐wave time delay. Any changes in these parameters greater than these error estimates can be interpreted with confidence. This analysis can be applied to any time‐lapse measurement to provide an interval of confidence in the interpretation of shear‐wave polarization angles and time splitting.  相似文献   

11.
In many branches of science, techniques designed for use in one context are used in other contexts, often with the belief that results which hold in the former will also hold or be relevant in the latter. Practical limitations are frequently overlooked or ignored. Three techniques used in seismic data analysis are often misused or their limitations poorly understood: (1) maximum entropy spectral analysis; (2) the role of goodness-of-fit and the real meaning of a wavelet estimate; (3) the use of multiple confidence intervals. It is demonstrated that in practice maximum entropy spectral estimates depend on a data-dependent smoothing window with unpleasant properties, which can result in poor spectral estimates for seismic data. Secondly, it is pointed out that the level of smoothing needed to give least errors in a wavelet estimate will not give rise to the best goodness-of-fit between the seismic trace and the wavelet estimate convolved with the broadband synthetic. Even if the smoothing used corresponds to near-minimum errors in the wavelet, the actual noise realization on the seismic data can cause important perturbations in residual wavelets following wavelet deconvolution. Finally the computation of multiple confidence intervals (e.g. at several spatial positions) is considered. Suppose a nominal, say 90%, confidence interval is calculated at each location. The confidence attaching to the simultaneous use of the confidence intervals is not then 90%. Methods do exist for working out suitable confidence levels. This is illustrated using porosity maps computed using conditional simulation.  相似文献   
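The point about multiple confidence intervals can be made concrete with a small sketch: if k intervals are computed independently at a nominal level, their joint confidence is much lower, and a Šidák-type adjustment recovers a chosen joint level. This is a generic calculation under an independence assumption, not the method used for the porosity maps mentioned above.

```python
def simultaneous_level(nominal, k):
    """Joint confidence of k independent intervals, each held at the nominal level."""
    return nominal ** k

def sidak_individual(joint, k):
    """Individual level needed so that k independent intervals are jointly at the `joint` level."""
    return joint ** (1.0 / k)

# ten independent 90% intervals are jointly only ~35% confident,
# and each must be held at ~98.95% to reach 90% joint confidence
print(simultaneous_level(0.90, 10), sidak_individual(0.90, 10))
```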

12.
2D Monte Carlo versus 2D Fuzzy Monte Carlo health risk assessment
Risk estimates can be calculated using crisp estimates of the exposure variables (i.e., contaminant concentration, contact rate, exposure frequency and duration, body weight, and averaging time). However, aggregate and cumulative exposure studies require a better understanding of the exposure variables and of the uncertainty and variability associated with them. Probabilistic risk assessment (PRA) studies use probability distributions for one or more variables of the risk equation in order to quantitatively characterize variability and uncertainty. Two-dimensional Monte Carlo Analysis (2D MCA) is one of the advanced modeling approaches that may be used to conduct PRA studies. In this analysis the variables of the risk equation, along with the parameters of these variables (for example the mean and standard deviation of a normal distribution), are described in terms of probability density functions (PDFs). A variable described in this way is called a second-order random variable. Significant data or considerable insight into the uncertainty associated with these variables are necessary to develop appropriate PDFs for these random parameters. Typically, the available data, and the accuracy and reliability of such data, are not sufficient for conducting a reliable 2D MCA. Thus, other theories and computational methods that propagate uncertainty and variability in exposure and health risk assessment are needed. One such theory is possibility analysis based on fuzzy set theory, which allows the utilization of incomplete information (i.e., vague and imprecise information that is not sufficient to generate probability distributions for the parameters of the random variables of the risk equation) together with expert judgment. In this paper, as an alternative to 2D MCA, we propose a 2D Fuzzy Monte Carlo Analysis (2D FMCA) to overcome this difficulty. In this approach, instead of describing the parameters of the PDFs used in defining the variables of the risk equation as random variables, we describe them as fuzzy numbers. This approach introduces new concepts and risk characterization methods. We provide a comparison of the two approaches relative to their computational requirements, data requirements and data availability. For a hypothetical case, we also provide a comparative interpretation of the results generated.
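A minimal sketch of a conventional 2D MCA of the risk equation, with an outer loop over uncertain distribution parameters (the second-order random variables) and an inner loop over inter-individual variability; all distributions, parameter ranges and the slope factor are hypothetical placeholders. The fuzzy (2D FMCA) variant would replace the outer-loop draws with fuzzy-number membership levels.

```python
import numpy as np

rng = np.random.default_rng(0)
n_outer, n_inner = 200, 1000          # outer: parameter uncertainty, inner: inter-individual variability

p95 = np.empty(n_outer)
for i in range(n_outer):
    # outer loop: draw uncertain parameters of the exposure PDFs (second-order random variables)
    conc_median = rng.uniform(0.8, 1.6)   # mg/L, hypothetical uncertainty about the concentration median
    conc_gsd = rng.uniform(1.3, 2.0)      # geometric standard deviation
    intake_mean = rng.uniform(1.5, 2.5)   # L/day
    # inner loop: draw variability across individuals, given those parameters
    conc = rng.lognormal(np.log(conc_median), np.log(conc_gsd), n_inner)
    intake = np.clip(rng.normal(intake_mean, 0.4, n_inner), 0.0, None)
    bw = np.clip(rng.normal(70.0, 10.0, n_inner), 30.0, None)   # kg body weight
    sf = 0.01                                                    # (mg/kg-day)^-1 slope factor, fixed here
    risk = conc * intake / bw * sf
    p95[i] = np.percentile(risk, 95)      # variability percentile for this parameter draw

# uncertainty band on the 95th-percentile individual risk
print(np.percentile(p95, [5, 50, 95]))
```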

13.
Nonuniqueness in geophysical inverse problems is naturally resolved by incorporating prior information about unknown models into the observed data. In practical estimation procedures, the prior information must be quantitatively expressed. We represent the prior information in the same form as the observational equations, in general nonlinear equations with random errors, and treat it as data. We may then define a posterior probability density function of the model parameters for given observed data and prior data, and use the maximum likelihood criterion to solve the problem. Supposing Gaussian errors in both the observed data and the prior data, we obtain a simple algorithm for an iterative search to find the maximum likelihood estimates. We also obtain an asymptotic expression for the covariance of the estimation errors, which gives a good approximation to the exact covariance when the estimated model is linearly close to the true model. We demonstrate that our approach is a general extension of various inverse methods dealing with Gaussian data. By way of example, we apply the new approach to the problem of inferring the final rupture state of the 1943 Tottori earthquake (M = 7.4) from coseismic geodetic data. The example shows that the use of sufficient prior information effectively suppresses both the nonuniqueness and the nonlinearity of the problem.
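For the linear(ized) Gaussian case described above, treating prior information as extra data leads to a standard weighted least-squares solution. The sketch below shows that closed form, with `G` a hypothetical (linearized) observation matrix, `Cd` and `Cm` the data and prior covariances, and `m_prior` the prior model; in the nonlinear case this solve would be applied iteratively about the current model estimate.

```python
import numpy as np

def map_estimate(G, d, Cd, m_prior, Cm):
    """Maximum-likelihood / MAP estimate when Gaussian prior information is treated as extra data.

    Solves (G^T Cd^-1 G + Cm^-1) m = G^T Cd^-1 d + Cm^-1 m_prior.
    """
    Cd_inv = np.linalg.inv(Cd)
    Cm_inv = np.linalg.inv(Cm)
    A = G.T @ Cd_inv @ G + Cm_inv
    b = G.T @ Cd_inv @ d + Cm_inv @ m_prior
    m_hat = np.linalg.solve(A, b)
    cov = np.linalg.inv(A)        # asymptotic covariance of the estimation errors
    return m_hat, cov
```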

14.
Soil sampling design, the number of samples collected and the lateral variation of caesium-137 (137Cs) in uneroded reference locations were extracted from previously published work. The focus was on published work which used 137Cs reference inventory (Bq m−2) for qualitative or quantitative estimation of sediment redistribution (SRD) within the landscape. The objective of this study was to address one of the methodological concerns facing the 137Cs technique—that is, the lack of a rigorous statistical treatment of reference locations. The limited attention paid to the reference location is not justified as ‘true’ estimates of SRD are based on the assumption of an unbiased, independent, random probability sample estimate, commonly the arithmetic mean. Results from the literature survey indicated that only 11% of the reference locations sampled for 137Cs expressly stated that a probability sampling design was used (transect or systematic-aligned grid). The remaining locations were generally sampled using a non-probability based design, more commonly known as haphazard sampling. Of the 75 reference study areas identified only 40 provided enough information to determine the dispersion around the mean, and from this the coefficient of variation (CV) was calculated for all available data. The median CV was 19·3%, with 95% confidence limits of 13·0–23.4%, indicating that approximately 11 random, independent samples would generally be necessary to adequately quantify the reference 137Cs area activity with an allowable error of 10% at 90% confidence. Further analysis indicated that only one-third of the studies sampled a sufficient number of 137Cs reference locations. This value would actually be lower as sampling frameworks were based on non-probability sampling procedures. For 137Cs reference locations it is recommended that a probability sampling design be utilized, preferably the systematic-aligned grid method, and as a minimum first-order estimate about 11 samples should be collected for inventory estimates.  相似文献   
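A minimal sketch of the sample-size calculation implied above, using the large-sample form n = (z * CV / E)^2, where E is the allowable relative error; the function name and defaults are illustrative only.

```python
import math
from scipy import stats

def required_samples(cv_percent, allowable_error_percent=10.0, confidence=0.90):
    """Samples needed so the sample mean is within the allowable relative error at the given confidence."""
    z = stats.norm.ppf(1.0 - (1.0 - confidence) / 2.0)     # two-sided normal quantile
    return math.ceil((z * cv_percent / allowable_error_percent) ** 2)

# with the survey's median CV of 19.3%, a 10% allowable error and 90% confidence -> about 11 samples
print(required_samples(19.3))
```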

15.
The sensitivity and overall uncertainty in peak ground acceleration (PGA) estimates have been calculated for the city of Tabriz, northwestern Iran, by using a specific randomized blocks design. Eight seismic hazard models and parameters with randomly selected uncertainties at two levels have been considered, and a linear model between the predicted PGA at a given probability level and the uncertainties has then been fitted. The input models and parameters are those related to the attenuation, magnitude rupture-length and recurrence relationships, with their uncertainties. Application of this procedure to the studied area indicates that the effects of the simultaneous variation of all eight input models and parameters on the sensitivity of the seismic hazard can be investigated with a decreasing number of computations for all possible combinations at a fixed annual probability. The results show that the choice of a mathematical model of the source mechanism, the attenuation relationships and the definition of seismic parameters are most critical in estimating the sensitivity of seismic hazard evaluation, in particular at low levels of probability of exceedance. The overall uncertainty in the expected PGA for an annual probability of 0.0021 (10% exceedance in 50 yr) is expressed by a coefficient of variation (CV) of about 34% at the 68% confidence level for a distance of about 5 km from the field of the major faults. The CV decreases with increasing site-source distance and remains constant, CV = 15%, for distances larger than 15 km. Finally, the effect of treating alternative models on the overall uncertainty is investigated through additional outliers in the input decisions.

16.
This paper investigates the effects of uncertainty in rock-physics models on reservoir parameter estimation using seismic amplitude variation with angle and controlled-source electromagnetics data. The reservoir parameters are related to electrical resistivity by the Poupon model and to elastic moduli and density by the Xu-White model. To handle uncertainty in the rock-physics models, we consider their outputs to be random functions with modes or means given by the predictions of those rock-physics models and we consider the parameters of the rock-physics models to be random variables defined by specified probability distributions. Using a Bayesian framework and Markov Chain Monte Carlo sampling methods, we are able to obtain estimates of reservoir parameters and information on the uncertainty in the estimation. The developed method is applied to a synthetic case study based on a layered reservoir model and the results show that uncertainty in both rock-physics models and in their parameters may have significant effects on reservoir parameter estimation. When the biases in rock-physics models and in their associated parameters are unknown, conventional joint inversion approaches, which consider rock-physics models as deterministic functions and the model parameters as fixed values, may produce misleading results. The developed stochastic method in this study provides an integrated approach for quantifying how uncertainty and biases in rock-physics models and in their associated parameters affect the estimates of reservoir parameters and therefore is a more robust method for reservoir parameter estimation.  相似文献   
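A minimal random-walk Metropolis sampler of the kind that could drive such a stochastic joint inversion; `log_post` is a hypothetical function returning the log-posterior of the reservoir parameters (combining the AVA and CSEM likelihoods with the priors and the randomized rock-physics links), and the step size and iteration count are placeholders.

```python
import numpy as np

def metropolis_hastings(log_post, m0, n_iter=20000, step=0.05, rng=None):
    """Random-walk Metropolis sampling of a posterior over reservoir parameters."""
    rng = np.random.default_rng(rng)
    m = np.asarray(m0, dtype=float)
    lp = log_post(m)
    chain = np.empty((n_iter, m.size))
    for i in range(n_iter):
        prop = m + step * rng.standard_normal(m.size)   # symmetric Gaussian proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:         # accept with probability min(1, ratio)
            m, lp = prop, lp_prop
        chain[i] = m
    return chain
```

After discarding burn-in, the retained chain provides both parameter estimates and their uncertainty, including the extra spread induced by the randomized rock-physics models.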

17.
Abstract

Hydrological models are commonly used to perform real-time runoff forecasting for flood warning. Their application requires catchment characteristics and precipitation series that are not always available. An alternative approach is nonparametric modelling based only on runoff series. However, the following questions arise: Can nonparametric models provide reliable forecasts? Can they perform as reliably as hydrological models? We performed probabilistic forecasting one, two and three hours ahead for a runoff series, with the aim of ascribing a probability density function to the predicted discharge using time series analysis based on stochastic dynamics theory. The derived dynamic terms were compared to a hydrological model, LARSIM. Our procedure was able to forecast the 1-, 2- and 3-h-ahead discharge probability functions, with 95% confidence intervals spanning about 1.40 m³/s and relative errors (%) in the range [−30, 30]. The LARSIM model and the best nonparametric approaches gave similar results, but the range of relative errors was larger for the nonparametric approaches.

Editor D. Koutsoyiannis; Associate editor K. Hamed

Citation Costa, A.C., Bronstert, A. and Kneis, D., 2012. Probabilistic flood forecasting for a mountainous headwater catchment using a nonparametric stochastic dynamic approach. Hydrological Sciences Journal, 57 (1), 10–25.  相似文献   

18.
Precise estimates of the covariance parameters are essential in least-squares collocation (LSC) when accuracy requirements are high. This paper implements the restricted maximum likelihood (REML) method for the estimation of three covariance parameters in LSC with the Gauss-Markov second-order function (GM2), which is often used in the interpolation of gravity anomalies. The estimates are then validated with the use of an independent technique, which has often been omitted in previous works that are confined to covariance parameter errors based on the information matrix. The crossvalidation of the REML estimates with the hold-out (HO) method helps in understanding the REML estimation errors. We analyzed in detail the global minimum of the negative log-likelihood function (NLLF) in the estimation of the covariance parameters, as well as the accuracy of the estimates. We found that the correlation between covariance parameters may contribute critically to the errors of their estimation. It was also found that knowing some intrinsic properties of the covariance function may help in the scoring process.

19.
The stochastic integral equation method (S.I.E.M.) is used to evaluate the relative performance of a set of both calibrated and uncalibrated rainfall-runoff models with respect to prediction errors. The S.I.E.M. is also used to estimate confidence (prediction) interval values of a runoff criterion variable, given a prescribed rainfall-runoff model and a similarity measure used to condition the storms that are utilized for model calibration purposes. Because of the increasing attention given to the issue of uncertainty in rainfall-runoff modeling estimates, the S.I.E.M. provides a promising tool for the hydrologist to consider in both research and design.
