Similar Documents
20 similar documents found.
1.
Sheng Yue, Hydrological Processes, 2001, 15(6): 1033–1045
A gamma distribution is one of the most frequently selected distribution types for hydrological frequency analysis. The bivariate gamma distribution with gamma marginals may be useful for analysing multivariate hydrological events. This study investigates the applicability of a bivariate gamma model with five parameters for describing the joint probability behaviour of multivariate flood events. The parameters are proposed to be estimated from the marginal distributions by the method of moments. The joint distribution, the conditional distribution, and the associated return periods are derived from the marginals. The usefulness of the model is demonstrated by representing the joint probabilistic behaviour between correlated flood peak and flood volume, and between correlated flood volume and flood duration, in the Madawaska River basin in the province of Quebec, Canada. Copyright © 2001 John Wiley & Sons, Ltd.
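A minimal sketch of the method-of-moments step for a gamma marginal, using synthetic data with an assumed shape of 2.0 and scale of 3.0 (illustrative values, not from the paper):

```python
import numpy as np

def gamma_mom(x):
    """Method-of-moments estimators for a gamma marginal:
    shape k = mean^2 / var, scale theta = var / mean."""
    m, v = np.mean(x), np.var(x, ddof=1)
    return m * m / v, v / m

rng = np.random.default_rng(42)
# Synthetic "flood peak" sample with known shape=2.0, scale=3.0 (assumed)
sample = rng.gamma(shape=2.0, scale=3.0, size=20000)
k_hat, theta_hat = gamma_mom(sample)
```

The same moment matching would be applied separately to each marginal (e.g. peak and volume) before assembling the bivariate model.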

2.
Sheng Yue, Hydrological Processes, 2000, 14(14): 2575–2588
Complex hydrological events such as floods always appear to be multivariate events that are characterized by a few correlated variables. A complete understanding of these events requires investigating the joint probabilistic behaviour of these correlated variables. The lognormal distribution is one of the most frequently selected candidates for flood‐frequency analysis. The multivariate lognormal distribution can therefore serve as an important tool for analysing a multivariate flood episode. This article presents a procedure for using the bivariate lognormal distribution to describe the joint distributions of correlated flood peaks and volumes, and correlated flood volumes and durations. Joint distributions, conditional distributions, and the associated return periods of these random variables can be readily derived from their marginal distributions. The approach is verified using observed streamflow data from the Nord River basin, located in the Province of Quebec, Canada. The theoretical distributions show a good fit to the observed ones. Copyright © 2000 John Wiley & Sons, Ltd.
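Since the logarithms of bivariate lognormal variables are bivariate normal, joint exceedance probabilities such as P(Q > q, V > v) can be estimated by simulation. A sketch with hypothetical parameters (means, standard deviations and correlation are all assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
mu = np.array([1.0, 2.0])                 # means of ln Q, ln V (assumed)
rho = 0.7                                 # correlation of ln Q, ln V (assumed)
# Covariance built from assumed log-scale sds 0.5 and 0.4
cov = np.array([[0.25, rho * 0.5 * 0.4],
                [rho * 0.5 * 0.4, 0.16]])
logs = rng.multivariate_normal(mu, cov, size=100000)
q, v = np.exp(logs[:, 0]), np.exp(logs[:, 1])
# Joint exceedance of the marginal medians
p_joint = np.mean((q > np.exp(1.0)) & (v > np.exp(2.0)))
# For annual maxima, the joint return period is T = 1 / p_joint
```

With rho = 0.7, p_joint is noticeably larger than the 0.25 expected under independence, which is exactly why joint rather than marginal analysis matters here.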

3.
Probabilistic-fuzzy health risk modeling
Health risk analysis of multi-pathway exposure to contaminated water involves the use of mechanistic models that include many uncertain and highly variable parameters. Currently, the uncertainties in these models are treated using statistical approaches. However, not all uncertainties in data or model parameters are due to randomness. Other sources of imprecision that may lead to uncertainty include scarce or incomplete data, measurement error, data obtained from expert judgment, and subjective interpretation of available information. Such non-random uncertainties cannot be treated by statistical methods alone. In this paper we propose the use of fuzzy set theory together with probability theory to incorporate uncertainties into health risk analysis. We identify this approach as probabilistic-fuzzy risk assessment (PFRA). Based on the form of the available information, fuzzy set theory, probability theory, or a combination of both can be used to incorporate parameter uncertainty and variability into mechanistic risk assessment models. In this study, tap water concentration is used as the source of contamination in the human exposure model. Ingestion, inhalation and dermal contact are considered as multiple exposure pathways. The tap water concentration of the contaminant and the cancer potency factors for ingestion, inhalation and dermal contact are treated as fuzzy variables, while the remaining model parameters are treated using probability density functions. Combined utilization of fuzzy and random variables produces membership functions of risk to individuals at different fractiles of risk, as well as probability distributions of risk for various alpha-cut levels of the membership function. The proposed method provides a robust approach for evaluating human health risk from exposure when there is both uncertainty and variability in the model parameters. PFRA allows the utilization of certain types of information which have not been used directly in existing risk assessment methods.
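The alpha-cut mechanics behind combining fuzzy and random variables can be sketched as below. The triangular fuzzy concentration, the intake distribution and the potency factor are all hypothetical illustrative values, not the paper's inputs:

```python
import numpy as np

def tri_alpha_cut(a, b, c, alpha):
    """Alpha-cut interval of a triangular fuzzy number (a, b, c)."""
    return a + alpha * (b - a), c - alpha * (c - b)

rng = np.random.default_rng(0)
conc = (0.5, 1.0, 2.0)        # fuzzy tap-water concentration, mg/L (assumed)
intake = rng.lognormal(0.0, 0.3, size=5000)   # random intake, L/kg/day (assumed)
slope = 0.01                   # cancer potency factor (assumed)

for alpha in (0.0, 0.5, 1.0):
    lo, hi = tri_alpha_cut(*conc, alpha)
    # At each membership level, the concentration interval propagates
    # through the Monte Carlo variability, giving a risk interval at
    # a chosen fractile (here the 95th percentile of variability).
    risk_lo = np.percentile(slope * lo * intake, 95)
    risk_hi = np.percentile(slope * hi * intake, 95)
```

At alpha = 1 the interval collapses to the most credible value, so the risk interval collapses too; lower alpha levels widen the band, tracing out the membership function of risk.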

4.
Frequent ash fallout from long-lived eruptions (with active phases greater than 5 years) may lead to local populations experiencing unacceptably high cumulative exposures to respirable particulate matter. Ash from Montserrat has been shown to contain significant levels of cristobalite and other reactive agents that are associated with an increased risk of developing pneumoconiosis (including silicosis) and other long-term health problems. There are a number of difficulties associated with estimating risks in populations due to uncertain and wide-ranging individual exposures, changes in behaviour with time and the natural variation in individual response. Present estimates of risk in workers and other population groups are simplifications based on a limited number of exposure measurements taken on Montserrat (1996–1999), and exposure-response curves from epidemiological studies of coal workers exposed to siliceous dust. In this paper we present a method for calculating the long-term cumulative exposure to cristobalite from volcanic ash by Monte Carlo simulation. Code has been written to generate synthetic time series for volcanic activity, rainfall, ash deposition and erosion to give daily ash deposit values and cristobalite fraction at a range of locations. The daily mean personal exposure for PM10 and cristobalite is obtained by sampling from a probability distribution, with distribution parameters dependent on occupation, ground deposit depth and daily weather conditions. Output from multiple runs is processed to calculate the exceedance probability for cumulative exposure over a range of occupation types, locations and exposure periods. Results are interpreted in terms of current occupational standards, and epidemiological exposure-response functions for silicosis are applied to quantify the long-term health risk. Assuming continuing volcanic activity, the median risk of silicosis (profusion 1/0 or higher) for an average adult after 20 years of continuous exposure is estimated to range from approximately 0.5% in northern Montserrat to 1.6% in Cork Hill. The occupational group with the highest exposure to ash is gardeners, with a corresponding 2% to 4% risk of silicosis. In situations where opportunities for in-depth exposure studies are limited, computer simulations provide a good indication of risk based on current expert knowledge. By running the code for a range of input scenarios, the cost-benefit of mitigation measures (such as a programme of active ash clearance) can be estimated. Results may also be used to identify situations where full exposure studies or fieldwork would be beneficial. Editorial responsibility: J Stix
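The exceedance-probability calculation by Monte Carlo can be sketched as follows. The lognormal daily-exposure model, the 3x occupational multiplier for gardeners and the threshold are assumptions for illustration, not the Montserrat inputs:

```python
import numpy as np

rng = np.random.default_rng(7)
n_runs, n_days = 500, 365 * 20            # 20-year horizon (sketch)

def cumulative_exposure(median, n_runs, n_days):
    # Hypothetical model: daily cristobalite exposure (ug/m^3) is
    # lognormal around an occupation-dependent median.
    daily = rng.lognormal(np.log(median), 1.0, size=(n_runs, n_days))
    return daily.sum(axis=1) / 365.0      # cumulative exposure, ug/m^3-years

cum_ref = cumulative_exposure(5.0, n_runs, n_days)     # reference group (assumed)
cum_gard = cumulative_exposure(15.0, n_runs, n_days)   # gardeners, assumed 3x
threshold = 300.0                          # illustrative exceedance level
p_exceed_ref = float(np.mean(cum_ref > threshold))
p_exceed_gard = float(np.mean(cum_gard > threshold))
```

In the paper's framework the daily medians would themselves depend on simulated deposit depth and weather, and the exceedance curves would then be fed through exposure-response functions.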

5.
2D Monte Carlo versus 2D Fuzzy Monte Carlo health risk assessment
Risk estimates can be calculated using crisp estimates of the exposure variables (i.e., contaminant concentration, contact rate, exposure frequency and duration, body weight, and averaging time). However, aggregate and cumulative exposure studies require a better understanding of exposure variables and the uncertainty and variability associated with them. Probabilistic risk assessment (PRA) studies use probability distributions for one or more variables of the risk equation in order to quantitatively characterize variability and uncertainty. Two-dimensional Monte Carlo Analysis (2D MCA) is one of the advanced modeling approaches that may be used to conduct PRA studies. In this analysis the variables of the risk equation, along with the parameters of these variables (for example, the mean and standard deviation of a normal distribution), are described in terms of probability density functions (PDFs). A variable described in this way is called a second-order random variable. Significant data or considerable insight into the uncertainty associated with these variables is necessary to develop appropriate PDFs for these random parameters. Typically, the available data, and the accuracy and reliability of such data, are not sufficient for conducting a reliable 2D MCA. Thus, other theories and computational methods that propagate uncertainty and variability in exposure and health risk assessment are needed. One such theory is possibility analysis based on fuzzy set theory, which allows the utilization of incomplete information (including vague and imprecise information that is not sufficient to generate probability distributions for the parameters of the random variables of the risk equation) together with expert judgment. In this paper, as an alternative to 2D MCA, we propose a 2D Fuzzy Monte Carlo Analysis (2D FMCA) to overcome this difficulty. In this approach, instead of describing the parameters of the PDFs used in defining the variables of the risk equation as random variables, we describe them as fuzzy numbers. This approach introduces new concepts and risk characterization methods. In this paper we provide a comparison of these two approaches relative to their computational requirements, data requirements and availability. For a hypothetical case, we also provide a comparative interpretation of the results generated.
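A 2D MCA separates an outer loop over parameter uncertainty from an inner loop over inter-individual variability. A minimal sketch with hypothetical hyper-distributions for the intake parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
n_outer, n_inner = 200, 1000
risks_p95 = np.empty(n_outer)
for i in range(n_outer):
    # Outer loop: uncertainty about the intake distribution's parameters
    # (hyper-distributions assumed for illustration)
    mu = rng.normal(0.0, 0.1)          # uncertain log-mean of intake
    sigma = rng.uniform(0.2, 0.4)      # uncertain log-sd of intake
    # Inner loop: inter-individual variability given those parameters
    intake = rng.lognormal(mu, sigma, size=n_inner)
    conc, slope = 1.0, 0.01            # held fixed for the sketch
    risks_p95[i] = np.percentile(conc * slope * intake, 95)
# Uncertainty band on the 95th-percentile (high-variability) risk
band = np.percentile(risks_p95, [5, 95])
```

In a 2D FMCA, the outer loop's random draws of mu and sigma would be replaced by alpha-cut intervals of fuzzy numbers.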

6.
Six precipitation probability distributions (exponential, Gamma, Weibull, skewed normal, mixed exponential and hybrid exponential/Pareto distributions) are evaluated on their ability to reproduce the statistics of the original observed time series. Each probability distribution is also indirectly assessed by looking at its ability to reproduce key hydrological variables after being used as inputs to a lumped hydrological model. Data from 24 weather stations and two watersheds (Chute‐du‐Diable and Yamaska watersheds) in the province of Quebec (Canada) were used for this assessment. Various indices or statistics, such as the mean, variance, frequency distribution and extreme values are used to quantify the performance in simulating the precipitation and discharge. Performance in reproducing key statistics of the precipitation time series is well correlated to the number of parameters of the distribution function, and the three‐parameter precipitation models outperform the other models, with the mixed exponential distribution being the best at simulating daily precipitation. The advantage of using more complex precipitation distributions is not as clear‐cut when the simulated time series are used to drive a hydrological model. Although the advantage of using functions with more parameters is not nearly as obvious, the mixed exponential distribution appears nonetheless as the best candidate for hydrological modelling. The implications of choosing a distribution function with respect to hydrological modelling and climate change impact studies are also discussed. Copyright © 2012 John Wiley & Sons, Ltd.

7.
Multicomponent probability distributions such as the two‐component Gumbel distribution are sometimes applied to annual flood maxima when individual floods are seen as belonging to different classes, depending on physical processes or time of year. However, hydrological inconsistencies may arise if only nonclassified annual maxima are available to estimate the component distribution parameters. In particular, an unconstrained best fit to annual flood maxima may yield some component distributions with a high probability of simulating floods with negative discharge. In such situations, multicomponent distributions cannot be justified as an improved approximation to a local physical reality of mixed flood types, even though a good data fit is achieved. This effect usefully illustrates that a good match to data is no guarantee against degeneracy of hydrological models. Copyright © 2016 John Wiley & Sons, Ltd.
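The degeneracy described above is easy to check once component parameters are fitted: the Gumbel CDF evaluated at zero gives each component's probability of simulating a negative discharge. A sketch with hypothetical component parameters:

```python
import math

def gumbel_cdf(x, mu, beta):
    """CDF of the Gumbel (EV1) distribution used for annual maxima."""
    return math.exp(-math.exp(-(x - mu) / beta))

# Hypothetical fitted components of a two-component model: an unconstrained
# fit can produce a component whose mass extends well below zero.
p_neg_ok = gumbel_cdf(0.0, mu=100.0, beta=30.0)   # well-behaved component
p_neg_bad = gumbel_cdf(0.0, mu=20.0, beta=60.0)   # degenerate component
```

For the second component, roughly a quarter of simulated "floods" would have negative discharge, which is the hydrological inconsistency the paper warns about despite a possibly good overall fit.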

8.
In the past, arithmetic and geometric means have both been used to characterise pathogen densities in samples used for microbial risk assessment models. The calculation of total (annual) risk is based on cumulative independent (daily) exposures and the use of an exponential dose–response model, such as that used for exposure to Giardia or Cryptosporidium. Mathematical analysis suggests that the arithmetic mean is the appropriate measure of central tendency for microbial concentration with respect to repeated samples of daily exposure in risk assessment. This is despite the frequent characterisation of microbial density by the geometric mean, since microbial distributions may be lognormal or otherwise skewed in nature. Mathematical derivation supporting the use of the arithmetic mean has been based on deterministic analysis, prior assumptions and definitions, and the use of point estimates of probability, and has not included from the outset the influence of an actual distribution of microbial densities. We address these issues by experiments using two real-world pathogen datasets, together with Monte Carlo simulation, and it is revealed that the arithmetic mean also holds in the case of a daily dose with a finite distribution in microbial density, even when the distribution is very highly skewed, as often occurs in environmental samples. Further, for simplicity, in many risk assessment models the daily infection risk is assumed to be the same for each day of the year and is represented by a single value, which is then used in the calculation of p_Σ, a numerical estimate of the annual risk P_Σ; we highlight the fact that this is simply a function of the geometric mean of the daily complementary risk probabilities (although it is sometimes approximated by the arithmetic mean of daily risk in the low-dose case). Finally, the risk estimate is an imprecise probability with no indication of error, and we investigate and clarify the distinction between risk and uncertainty assessment with respect to the predictive model used for total risk assessment.
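The relationship between annual risk, the geometric mean of the daily complementary probabilities, and the low-dose arithmetic-mean approximation can be sketched as follows (the skewed daily risk distribution is illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical highly skewed daily infection risks over one year
p_daily = np.minimum(rng.lognormal(-9.0, 1.5, size=365), 1.0)

# Annual risk from cumulative independent daily exposures
p_annual = 1.0 - np.prod(1.0 - p_daily)

# Equivalent form: a function of the geometric mean g of the daily
# complementary probabilities (1 - p_d)
g = np.exp(np.mean(np.log(1.0 - p_daily)))
p_annual_g = 1.0 - g ** 365

# Low-dose approximation via the arithmetic mean of daily risk
p_annual_am = 1.0 - (1.0 - np.mean(p_daily)) ** 365
```

By the AM-GM inequality the arithmetic-mean form never exceeds the exact annual risk, and the two agree closely only when daily risks are small and not too variable.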

9.
This study aims to model the joint probability distribution of drought duration, severity and inter-arrival time using a trivariate Plackett copula. The drought duration and inter-arrival time each follow the Weibull distribution and the drought severity follows the gamma distribution. Parameters of these univariate distributions are estimated using the method of moments (MOM), maximum likelihood method (MLM), probability weighted moments (PWM), and a genetic algorithm (GA); whereas parameters of the bivariate and trivariate Plackett copulas are estimated using the log-pseudolikelihood function method (LPLF) and GA. Streamflow data from three gaging stations, Zhuangtou, Taian and Tianyang, located in the Wei River basin, China, are employed to test the trivariate Plackett copula. The results show that the Plackett copula is capable of yielding bivariate and trivariate probability distributions of correlated drought variables.

10.
Several alternative probability distributions for estimating the probability of exceedance of contaminant concentrations are examined for their appropriateness for developing inputs to risk assessments. The rationale is provided for using the log Pearson Type III distribution, a three-parameter model, for estimation of the exceedance probabilities.

11.
The deterioration of the condition of process plant assets has a major negative impact on the safety of their operation. Risk based integrity modeling provides a methodology to quantify the risks posed by an aging asset, and thereby a means of protecting human life, financial investment and the environment from the consequences of failures. This methodology is based on modeling the uncertainty in material degradations using probability distributions, known as priors. Using Bayes theorem, one may improve the prior distribution to obtain a posterior distribution using actual inspection data. Although the choice of priors is often subjective, a rational consensus can be achieved by judgmental studies and by analyzing the generic data from the same or similar installations. The first part of this paper presents a framework for risk based integrity modeling. This includes a methodology to select the prior distributions for the various types of corrosion degradation mechanisms, namely uniform, localized and erosion corrosion. Several statistical tests were conducted, based on data extracted from the literature, to check which of the prior distributions best fits the data. Once the underlying distribution has been confirmed, one can estimate the parameters of the distributions. In the second part, the selected priors are tested and validated using actual plant inspection data obtained from existing assets in operation. It is found that uniform corrosion can be best described using 3P-Weibull and 3P-Lognormal distributions. Localized corrosion can be best described using the Type 1 extreme value and 3P-Weibull distributions, while erosion corrosion can best be described using the 3P-Weibull, Type 1 extreme value, or 3P-Lognormal distributions.

12.
Chain-dependent models for daily precipitation typically model the occurrence process as a Markov chain and the precipitation intensity process using one of several probability distributions. It has been argued that the mixed exponential distribution is a superior model for the rainfall intensity process, since the value of its information criterion (Akaike information criterion or Bayesian information criterion) when fit to precipitation data is usually less than that of the more commonly used gamma distribution. The differences between the criterion values of the best and lesser models are generally small relative to the magnitude of the criterion value, which raises the question of whether these differences are statistically significant. Using a likelihood ratio statistic and nesting the gamma and mixed exponential distributions in a parent distribution, we show indirectly that the superiority of the mixed exponential distribution over the gamma distribution for modeling precipitation intensity is generally statistically significant. Comparisons are also made with a common-α gamma model, which prove less informative.
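The comparison can be sketched by fitting both candidate distributions to data drawn from a mixed exponential and comparing information criteria; the mixture is fitted here with a short EM loop, and all parameter values are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
# Synthetic wet-day intensities from a mixed exponential (assumed: 40% of
# days from a light-rain component, 60% from a heavy-rain component)
n = 5000
light = rng.random(n) < 0.4
data = np.where(light, rng.exponential(2.0, n), rng.exponential(15.0, n))

# Gamma fit (location fixed at zero) and its log-likelihood
a, _, scale = stats.gamma.fit(data, floc=0.0)
ll_gamma = np.sum(stats.gamma.logpdf(data, a, scale=scale))

# Two-component exponential mixture fitted by a short EM loop
w, m1, m2 = 0.5, data.mean() / 2.0, data.mean() * 2.0
for _ in range(200):
    d1 = w * stats.expon.pdf(data, scale=m1)
    d2 = (1.0 - w) * stats.expon.pdf(data, scale=m2)
    r = d1 / (d1 + d2)                       # E-step responsibilities
    w = r.mean()                              # M-step updates
    m1 = np.average(data, weights=r)
    m2 = np.average(data, weights=1.0 - r)
ll_mix = np.sum(np.log(w * stats.expon.pdf(data, scale=m1)
                       + (1.0 - w) * stats.expon.pdf(data, scale=m2)))

# Information criteria: the gamma has 2 free parameters, the mixture 3
aic_gamma = 2 * 2 - 2 * ll_gamma
aic_mix = 2 * 3 - 2 * ll_mix
```

The paper's point is subtler than this raw comparison: it asks whether such criterion differences are significant, which the nesting-in-a-parent-distribution device addresses.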

13.
Nowadays, Flood Forecasting and Warning Systems (FFWSs) are known as among the most inexpensive and efficient non‐structural measures for flood damage mitigation in the world. The benefit-cost ratio of FFWSs has been reported to be several times that of other flood mitigation measures. Despite these advantages, uncertainty in flood predictions may affect an FFWS's reliability and the benefits of these systems. Determining the reliability of advanced flood warning systems based on rainfall–runoff models is a challenge in the assessment of FFWS performance, and it is the subject of this study. In this paper, a stochastic methodology is proposed to provide the uncertainty band of the rainfall–runoff model and to calculate the probability of acceptable forecasts. The proposed method is based on Monte Carlo simulation and multivariate analysis of the predicted time and discharge error data sets. For this purpose, after calibration of the rainfall–runoff model, the probability distributions of the input calibration parameters and the uncertainty band of the model are estimated through Bayesian inference. Then, data sets of the time and discharge errors are calculated using Monte Carlo simulation, and the probability of acceptable model forecasts is calculated by multivariate analysis of the data using copula functions. The proposed approach was applied to a small watershed in Iran as a case study. The results showed that using rainfall–runoff modeling based on real‐time precipitation is not enough to attain high performance for FFWSs in small watersheds, and it seems that using weather forecasts as the inputs of rainfall–runoff models is essential to increase lead times and the reliability of FFWSs in small watersheds. Copyright © 2013 John Wiley & Sons, Ltd.

14.
The objective of this study was to develop a novel risk analysis approach to assess ozone exposure as a risk factor for respiratory health. Based on a human exposure experiment, the study first constructed the relationship between lung function decrement and respiratory symptoms scores (ranging from 0 to 1, corresponding to absent through severe symptoms). This study used a toxicodynamic model to estimate the lung function decrement, measured as percent forced expiratory volume in 1 s (%FEV1), associated with different levels of ozone exposure concentration. The relationships between 8-h ozone exposure and %FEV1 decrement were also constructed with a concentration–response model. The recorded time series of environmental monitoring of ozone concentrations in Taiwan were used to analyze the statistical indicators which may have predictability for ozone-induced airway function disorders. A statistical indicator-based probabilistic risk assessment framework was used to predict and assess the ozone-associated respiratory symptoms scores. The results showed that ozone-associated lung function decrement can be detected by using information from statistical indicators. The coefficient of variation and skewness were the common indicators which were highly correlated with %FEV1 decrement in the next 7 days. The model predictability can be further improved by a composite statistical indicator. There was a 50% probability that mean and maximum respiratory symptoms scores would fall within the moderate region, 0.33–0.67, with estimates of 0.36 (95% confidence interval 0.27–0.45) and 0.50 (0.41–0.59), respectively. We conclude that statistical indicators related to variability and skewness can provide a powerful tool for detecting ozone-induced health effects from empirical data in specific populations.

15.
A challenge when working with multivariate data in a geostatistical context is that the data are rarely Gaussian. Multivariate distributions may include nonlinear features, clustering, long tails, functional boundaries, spikes, and heteroskedasticity. Multivariate transformations account for such features so that they are reproduced in geostatistical models. Projection pursuit as developed for high dimensional data exploration can also be used to transform a multivariate distribution into a multivariate Gaussian distribution with an identity covariance matrix. Its application within a geostatistical modeling context is called the projection pursuit multivariate transform (PPMT). An approach to incorporate exhaustive secondary variables in the PPMT is introduced. With this approach the PPMT can incorporate any number of secondary variables with any number of primary variables. A necessary alteration to the approach to make this numerically practical was the implementation of a continuous probability estimator that relies on Bernstein polynomials for the transformation that takes place in the projections. Stopping criteria were updated to incorporate a bootstrap t test that compares data sampled from a multivariate Gaussian distribution with the data undergoing transformation.

16.
Hydrologic risk analysis for dam safety relies on a series of probabilistic analyses of rainfall-runoff and flow routing models, and their associated inputs. This is a complex problem in that the probability distributions of multiple independent and derived random variables need to be estimated in order to evaluate the probability of dam overtopping. Typically, parametric density estimation methods have been applied in this setting, and the exhaustive Monte Carlo simulation (MCS) of models is used to derive some of the distributions. Often, the distributions used to model some of the random variables are inappropriate relative to the expected behaviour of these variables, and as a result, simulations of the system can lead to unrealistic values of extreme rainfall or water surface levels and hence of the probability of dam overtopping. In this paper, three major innovations are introduced to address this situation. The first is the use of nonparametric probability density estimation methods for selected variables, the second is the use of Latin Hypercube sampling to improve the efficiency of MCS driven by the multiple random variables, and the third is the use of Bootstrap resampling to determine initial water surface level. An application to the Soyang Dam in South Korea illustrates how the traditional parametric approach can lead to potentially unrealistic estimates of dam safety, while the proposed approach provides rather reasonable estimates and an assessment of their sensitivity to key parameters.
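The Latin Hypercube step can be sketched with scipy's `qmc` module: stratified uniforms are drawn and mapped through the inverse CDFs of the input distributions. The rainfall and runoff-coefficient distributions below are hypothetical placeholders:

```python
import numpy as np
from scipy.stats import norm, qmc

# Latin Hypercube sample of two model inputs
sampler = qmc.LatinHypercube(d=2, seed=9)
u = sampler.random(n=1000)                 # stratified uniforms in [0, 1)^2
# Map through inverse CDFs (both input distributions are assumed)
rain = norm.ppf(u[:, 0], loc=100.0, scale=20.0)      # rainfall depth, mm
coeff = qmc.scale(u[:, 1:2], [0.3], [0.7]).ravel()   # runoff coefficient
peak = coeff * rain                         # toy response variable
```

Because each of the 1000 equal-probability strata contributes exactly one sample per dimension, input statistics converge with far fewer runs than plain Monte Carlo.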

17.
The objective of the paper is to show that the use of a discrimination procedure for selecting a flood frequency model, without knowledge of its performance for the considered underlying distributions, may lead to erroneous conclusions. The problem considered is one of choosing between the lognormal (LN) and convective diffusion (CD) distributions for a given random sample of flood observations. The probability density functions of these distributions are similarly shaped in the range of the main probability mass, and the discrepancies grow with increasing values of the coefficient of variation (CV). This problem was addressed using the likelihood ratio (LR) procedure. Simulation experiments were performed to determine the probability of correct selection (PCS) for the LR method. Pseudo-random samples were generated for several combinations of sample sizes and coefficient of variation values from each of the two distributions. Surprisingly, the PCS of the LN model was half that of the CD model, rarely exceeding 50%. The results obtained from simulation were analyzed and compared both with those obtained using real data and with the results obtained from another selection procedure known as the QK method. The results from the QK method are just the opposite of those from the LR procedure.

18.
This study aims to develop a joint probability function of peak ground acceleration (PGA) and cumulative absolute velocity (CAV) for the strong ground motion data from Taiwan. First, a total of 40,385 earthquake time histories are collected from the Taiwan Strong Motion Instrumentation Program. Then, the copula approach is introduced and applied to model the joint probability distribution of PGA and CAV. Finally, the correlation results using the PGA‐CAV empirical data and the normalized residuals are compared. The results indicate that there exists a strong positive correlation between PGA and CAV. For both the PGA and CAV empirical data and the normalized residuals, the multivariate lognormal distribution composed of two lognormal marginal distributions and the Gaussian copula provides adequate characterization of the PGA‐CAV joint distribution observed in Taiwan. This finding demonstrates the validity of the conventional two‐step approach for developing empirical ground motion prediction equations (GMPEs) of multiple ground motion parameters from the copula viewpoint. Copyright © 2016 John Wiley & Sons, Ltd.
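A Gaussian copula with lognormal marginals can be sampled by mapping correlated standard normals through the normal CDF and then through the marginal quantile functions. A sketch with illustrative parameters (not the fitted Taiwan values):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(21)
rho = 0.85                                  # copula correlation (assumed)
z = rng.multivariate_normal([0.0, 0.0],
                            [[1.0, rho], [rho, 1.0]], size=50000)
u = stats.norm.cdf(z)                       # correlated uniforms
# Lognormal marginals with assumed shape/scale parameters
pga = stats.lognorm.ppf(u[:, 0], s=1.0, scale=0.05)
cav = stats.lognorm.ppf(u[:, 1], s=0.8, scale=0.30)
# With lognormal marginals, the log-space correlation recovers rho
r = np.corrcoef(np.log(pga), np.log(cav))[0, 1]
```

This construction is exactly the "two lognormal marginals plus Gaussian copula" model the abstract identifies as adequate for the PGA-CAV data.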

19.
This paper presents a Bayesian Monte Carlo method for evaluating the uncertainty in the delineation of well capture zones and its application to a wellfield in a heterogeneous, multiaquifer system. In the method presented, Bayes' rule is used to update prior distributions for the unknown parameters of the stochastic model for the hydraulic conductivity, and to calculate probability-based weights for parameter realizations using head residuals. These weights are then assigned to the corresponding capture zones obtained using forward particle tracking. Statistical analysis of the set of weighted protection zones results in a probability distribution for the capture zones. The suitability of the Bayesian stochastic method for a multilayered system is investigated, using the wellfield Het Rot at Nieuwrode, Belgium, located in a three-layered aquifer system, as an example. The hydraulic conductivity of the production aquifer is modeled as a spatially correlated random function with uncertain parameters. The aquitard and overlying unconfined aquifer are assigned random, homogeneous conductivities. The stochastic results are compared with deterministic capture zones obtained with a calibrated model for the area. The predictions of the stochastic approach are more conservative and indicate that parameter uncertainty should be taken into account in the delineation of well capture zones.

20.
Exposure estimation using repeated blood concentration measurements
Physiologically based toxicokinetic (PBTK) modeling has been well established as a tool to study the distributions of chemicals in target tissues. In addition, the hierarchical Bayesian statistical approach using Markov chain Monte Carlo (MCMC) simulations has been applied successfully for parameter estimation. The aim was to estimate the (assumed constant) inhalation exposure concentration using a PBTK model based on repeated measurements in venous blood, so that exposures could be reconstructed. By treating the constant exterior exposure as an unknown parameter of a four-compartment PBTK model, we applied MCMC simulations to estimate the exposure based on a hierarchical Bayesian approach. A dataset on 16 volunteers exposed to 100 ppm (≅0.538 mg/L) trichloroethylene vapors for 4 h was reanalyzed as an illustration. Cases of time-dependent exposures with a constant mean were also studied via 100 simulated datasets. The posterior geometric mean of 0.571, with a narrow 95% posterior confidence interval (CI) of (0.506, 0.645), estimated the true trichloroethylene inhalation concentration (0.538 mg/L) with very high precision. The proposed method also estimated the overall constant mean of the simulated time-dependent exposure scenarios well, with slightly wider 95% CIs. These results from a real dataset and from simulation studies numerically support the accuracy of exposure estimation from biomonitoring data using a PBTK model and MCMC simulations, and provide a starting point for future applications in occupational exposure assessment.
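The inverse-estimation step can be sketched with a toy one-parameter stand-in for the PBTK model and a random-walk Metropolis sampler; all numbers (transfer factor, noise level, prior) are assumed for illustration and are not the paper's four-compartment model:

```python
import numpy as np

rng = np.random.default_rng(13)
# Toy forward model: blood concentration is proportional to the unknown
# exposure c, observed with lognormal measurement noise (all assumed)
c_true, k = 0.538, 0.9                     # mg/L exposure, transfer factor
obs = c_true * k * rng.lognormal(0.0, 0.1, size=16)

def log_post(c):
    if c <= 0.0:
        return -np.inf
    resid = np.log(obs) - np.log(c * k)    # lognormal likelihood
    prior = -0.5 * (np.log(c) / 2.0) ** 2  # weak lognormal prior on c
    return -0.5 * np.sum((resid / 0.1) ** 2) + prior

c, lp, chain = 1.0, log_post(1.0), []
for _ in range(20000):
    prop = c + rng.normal(0.0, 0.05)       # symmetric random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        c, lp = prop, lp_prop
    chain.append(c)
post = np.array(chain[5000:])              # discard burn-in
c_hat = np.exp(np.mean(np.log(post)))      # posterior geometric mean
```

The real analysis replaces the proportional forward model with the four-compartment PBTK model and samples many physiological parameters hierarchically, but the mechanics of recovering exposure from repeated blood measurements are the same.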


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号