Similar Articles
20 similar articles retrieved.
1.
2.
The National Weather Service (NWS) uses the SNOW17 model to forecast snow accumulation and ablation processes in snow-dominated watersheds nationwide. Successful application of the SNOW17 relies heavily on site-specific estimation of model parameters. The current study undertakes a comprehensive sensitivity and uncertainty analysis of SNOW17 model parameters using forcing and snow water equivalent (SWE) data from 12 sites with differing meteorological and geographic characteristics. The Generalized Sensitivity Analysis and the recently developed Differential Evolution Adaptive Metropolis (DREAM) algorithm are utilized to explore the parameter space and assess model parametric and predictive uncertainty. Results indicate that SNOW17 parameter sensitivity and uncertainty generally vary among sites. Of the six hydroclimatic characteristics studied, only air temperature shows strong correlation with the sensitivity and uncertainty ranges of two parameters, while precipitation is highly correlated with the uncertainty of one parameter. Posterior marginal distributions of two parameters are also shown to be site-dependent in terms of distribution type. The SNOW17 prediction ensembles generated by the DREAM-derived posterior parameter sets contain most of the observed SWE. The proposed uncertainty analysis provides posterior information on parameter uncertainty and distribution types that can serve as a foundation for a data assimilation framework for hydrologic models.
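The differential-evolution proposal at the heart of DREAM can be illustrated in a few lines: each chain jumps along the difference of two other chains and accepts with a Metropolis rule. The sketch below samples a toy one-parameter posterior (a standard normal stands in for the SNOW17/SWE likelihood); chain count, iteration count, and jitter scale are illustrative assumptions, not values from the paper.

```python
import math, random

def log_post(theta):
    # toy posterior: a standard normal stands in for the SWE likelihood
    return -0.5 * theta * theta

def de_mc(n_chains=6, n_iter=4000, seed=1):
    """One-parameter differential-evolution Metropolis sampler (the proposal
    rule at the core of DREAM): jump along the difference of two other chains."""
    rng = random.Random(seed)
    chains = [rng.uniform(-3.0, 3.0) for _ in range(n_chains)]
    gamma = 2.38 / math.sqrt(2.0)          # recommended scaling for 1 parameter
    samples = []
    for _ in range(n_iter):
        for i in range(n_chains):
            a, b = rng.sample([j for j in range(n_chains) if j != i], 2)
            prop = chains[i] + gamma * (chains[a] - chains[b]) + rng.gauss(0.0, 1e-3)
            if math.log(rng.random()) < log_post(prop) - log_post(chains[i]):
                chains[i] = prop
            samples.append(chains[i])
    return samples

samples = de_mc()
post_mean = sum(samples) / len(samples)
post_var = sum((s - post_mean) ** 2 for s in samples) / len(samples)
```

With enough iterations the pooled chain moments recover the target posterior's mean and variance, which is the property the paper relies on when deriving posterior parameter distributions.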

3.
Uncertainty factors substantially influence numerical simulations of earthquakes. However, most simulation methods are deterministic and do not sufficiently account for these factors. Studying them is therefore important both for predicting future destructive earthquakes and for probabilistic hazard analysis, since it improves the reliability and accuracy of ground-motion predictions. In this paper, we investigate several uncertainty factors, namely the initial rupture point, the stress drop, and the number of sub-faults, all of which substantially influence ground-motion predictions, via sensitivity analysis. The associated prediction uncertainties are derived from the uncertainties in the parameter values. The sensitivity analysis identifies which factors most strongly influence ground-motion predictions, allowing appropriate weights to be allocated to them during the prediction process. We employ the empirical Green's function method as the numerical simulation tool; its effectiveness has been validated previously, especially in areas with abundant earthquake records such as Japan, Southwest China, and Taiwan, China. Accordingly, we analyse the sensitivities of the uncertainty factors in a prediction of strong ground motion using the empirical Green's function method and draw the following conclusions. (1) The stress drop has the largest influence on ground-motion predictions: the discrepancy between the maximum and minimum PGA among three different stations is very large, the PGV and PGD also change drastically, and the Arias intensity increases exponentially with the stress-drop ratio of the two earthquakes. (2) The number of sub-faults also strongly influences various ground-motion parameters but has little effect on the Fourier spectrum and response spectrum. (3) The initial rupture point largely influences the PGA and Arias intensity. Additional attention should accordingly be paid to these uncertainty factors in future ground-motion predictions.
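The Arias intensity cited in conclusion (1) is the time integral of squared acceleration scaled by π/2g. A minimal sketch with a synthetic record (the sine burst is purely illustrative, not an empirical Green's function result):

```python
import math

def arias_intensity(acc, dt, g=9.81):
    """Arias intensity Ia = pi/(2g) * integral of a(t)^2 dt (trapezoidal rule)."""
    integral = sum(0.5 * dt * (acc[i] ** 2 + acc[i + 1] ** 2)
                   for i in range(len(acc) - 1))
    return math.pi / (2.0 * g) * integral

dt = 0.01
# synthetic 10 s, 1 Hz sine "record" with 2 m/s^2 amplitude (illustrative)
acc = [2.0 * math.sin(2.0 * math.pi * i * dt) for i in range(1001)]
ia = arias_intensity(acc, dt)
ia_doubled = arias_intensity([2.0 * a for a in acc], dt)
```

Because the integrand is squared, doubling the motion amplitude quadruples the Arias intensity, which is why this measure responds so strongly to the stress drop.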

4.
I. IORGULESCU & A. MUSY, Hydrological Processes, 1997, 11(9): 1353–1355
A generalization of the TOPMODEL equations for a power law vertical profile of hydraulic conductivity is introduced. The exponential profile of TOPMODEL is obtained as a limit case of the new general form. © 1997 John Wiley & Sons, Ltd.
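The limit relationship can be checked numerically: a power-law profile of the form (1 − D/nm)^n (an illustrative parameterization, not necessarily the paper's exact notation) tends to TOPMODEL's exponential profile exp(−D/m) as the exponent n grows.

```python
import math

def power_law_profile(deficit, m, n):
    """Illustrative power-law decay of relative transmissivity with storage
    deficit; the exponent n controls the profile shape (form assumed)."""
    return (1.0 - deficit / (n * m)) ** n

deficit, m = 0.02, 0.03
exact = math.exp(-deficit / m)           # TOPMODEL's exponential profile
errors = [abs(power_law_profile(deficit, m, n) - exact)
          for n in (1, 5, 50, 5000)]     # error shrinks as n grows
```

The shrinking error sequence is the standard (1 + x/n)^n → e^x limit, which is the sense in which the exponential profile is a limit case of the power-law family.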

5.
Higher-order approximation techniques for estimating the stochastic parameters of the non-homogeneous Poisson (NHP) model are presented. The NHP model is characterized by a two-parameter cumulative probability distribution function (CDF) of sediment displacement. Those two parameters are the temporal and spatial intensity functions, physically representing the inverse of the average rest period and step length of sediment particles, respectively. The difficulty of estimating these parameters has, however, restricted the applications of the NHP model. The approximation techniques are proposed to address this problem. The basic idea of the method is to approximate a model involving stochastic parameters by Taylor series expansion, preserving certain higher-order terms of interest. Using experimental (laboratory or field) data, one can determine the model parameters through a system of equations that are simplified by the approximation technique. The parameters so determined are used to predict the cumulative distribution of sediment displacement. The second-order approximation leads to a significant reduction of the CDF error (of the order of 47%) compared to the first-order approximation. Error analysis is performed to evaluate the accuracy of the first- and second-order approximations with respect to the experimental data. The higher-order approximations provide better estimates of sediment transport and deposition, which are critical factors in environments such as spawning gravel beds.
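The first- versus second-order idea can be made concrete with a scalar stand-in for the NHP CDF: approximate E[f(λ)] by f(μ) (first order) versus f(μ) + ½ f″(μ) Var(λ) (second order). The exponential waiting-time CDF and the normal intensity below are illustrative assumptions, not the paper's two-parameter form.

```python
import math

mu, sigma, t = 2.0, 0.5, 1.0            # assumed moments of the intensity

def cdf(lam):
    # waiting-time CDF with stochastic intensity lam (stand-in for the NHP CDF)
    return 1.0 - math.exp(-lam * t)

first = cdf(mu)                          # first order: evaluate at the mean
# second order: add 0.5 * f''(mu) * Var(lam), with f''(lam) = -t^2 exp(-lam t)
second = first + 0.5 * (-t * t * math.exp(-mu * t)) * sigma ** 2
# closed-form reference: E[1 - e^{-lam t}] for lam ~ N(mu, sigma^2)
exact = 1.0 - math.exp(-mu * t + 0.5 * (sigma * t) ** 2)
```

Retaining the second-order Taylor term cuts the error by an order of magnitude here, mirroring the CDF-error reduction the paper reports.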

6.

7.
An extension of TOPMODEL was developed for rainfall–runoff simulation in agricultural watersheds equipped with tile drains. Tile drain functions are incorporated into the framework of TOPMODEL. Nine possible flow generation scenarios are suggested for tile-drained watersheds and applied in the modelling procedure. In the model development, two methods of simulation of the flow in the unsaturated zone were compared: the traditional, physically based storage approach and a new approach using a transfer function. A regionalized sensitivity analysis was used to determine the sensitivity of parameters and to compare the behaviour of the transfer function with that of the simple storage-related formulation. The number of accepted combinations of parameter values, on average, was higher for the transfer function approach than when using a Monte Carlo method of parameter estimation. Since the rainfall–runoff response pattern tends to vary seasonally, seven events distributed throughout a year were used in the sensitivity analysis to investigate the seasonal variation of the hydrological characteristics. © 1997 John Wiley & Sons, Ltd.
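A regionalized (Hornberger–Spear) sensitivity analysis reduces to comparing a parameter's distribution in "behavioural" versus "non-behavioural" Monte Carlo runs: a large gap between the two empirical CDFs marks a sensitive parameter. The synthetic acceptance rule below is illustrative, not the tile-drain model.

```python
import random

def rsa_sensitivity(values, behavioural, n_grid=50):
    """Regionalised sensitivity: the largest gap between a parameter's
    empirical CDFs in the behavioural and non-behavioural sets."""
    good = sorted(v for v, b in zip(values, behavioural) if b)
    bad = sorted(v for v, b in zip(values, behavioural) if not b)
    lo, hi = min(values), max(values)
    cdf = lambda vals, x: sum(v <= x for v in vals) / len(vals)
    grid = [lo + (hi - lo) * i / n_grid for i in range(n_grid + 1)]
    return max(abs(cdf(good, x) - cdf(bad, x)) for x in grid)

rng = random.Random(0)
xs = [rng.random() for _ in range(2000)]   # parameter that controls behaviour
ys = [rng.random() for _ in range(2000)]   # parameter that does not
behavioural = [x > 0.5 for x in xs]        # synthetic acceptance rule
ks_sensitive = rsa_sensitivity(xs, behavioural)
ks_insensitive = rsa_sensitivity(ys, behavioural)
```

The sensitive parameter yields a near-maximal CDF separation while the irrelevant one stays near zero, which is how the paper ranks parameters across the seven seasonal events.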

8.
Epistemic uncertainties can be classified into two major categories: parameter and model. While the first stems from the difficulties in estimating the values of input model parameters, the second comes from the difficulties in selecting the appropriate type of model. Investigating their combined effects and ranking each of them in terms of their influence on the predicted losses can be useful in guiding future investigations. In this context, we propose a strategy relying on variance-based global sensitivity analysis, which is demonstrated using an earthquake loss assessment for Pointe-à-Pitre (Guadeloupe, France). For the considered assumptions, we show that uncertainty of losses would be greatly reduced if all the models could be unambiguously selected, and that the most influential source of uncertainty (whether of parameter or model type) corresponds to the seismic activity group. Finally, a sampling strategy was proposed to test the influence of the experts’ weights on models and of the assumed coefficients of variation of parameter uncertainty. The former influenced the sensitivity measures of the model uncertainties, whereas the latter could completely change the importance rank of the uncertainties associated with the vulnerability assessment step.
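A variance-based first-order sensitivity index can be estimated with a simple pick-freeze Monte Carlo scheme: correlate model outputs from two samples that share only the coordinate of interest. The toy additive loss model below is illustrative, not the Pointe-à-Pitre loss chain.

```python
import random

def sobol_first_order(f, d, i, n=20000, seed=0):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol index
    S_i = Var(E[Y|X_i]) / Var(Y)."""
    rng = random.Random(seed)
    ya, yab = [], []
    for _ in range(n):
        a = [rng.random() for _ in range(d)]
        b = [rng.random() for _ in range(d)]
        b[i] = a[i]                        # freeze coordinate i
        ya.append(f(a))
        yab.append(f(b))
    mean = sum(ya) / n
    var = sum((y - mean) ** 2 for y in ya) / n
    cov = sum((p - mean) * (q - mean) for p, q in zip(ya, yab)) / n
    return cov / var

loss = lambda x: 4.0 * x[0] + x[1]         # toy loss model: factor 0 dominates
s0 = sobol_first_order(loss, 2, 0)
s1 = sobol_first_order(loss, 2, 1)
```

For this additive model the analytical indices are 16/17 and 1/17, so the estimator correctly ranks the dominant uncertainty source, which is exactly what the paper's ranking of parameter versus model uncertainties requires.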

9.
The modified Universal Soil Loss Equation combined with the U.S. Soil Conservation Service runoff calculation method is analysed with respect to parameter uncertainty. The case of agricultural watersheds, where the above method gives quite accurate estimates of the multiannual sediment yield, illustrates the methodology. Parameter uncertainty results in a coefficient of variation of the calculated sediment yield in the range 0.14–0.21. For the general case, five groups of statistical error bounds for the parameters are distinguished, based on available information such as the scale of maps.
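For a multiplicative model such as the (M)USLE, first-order error propagation gives CV_Y ≈ sqrt(Σ CV_i²) for independent factors; that is how per-parameter error bounds roll up into a coefficient of variation of the yield. The per-factor CVs below are assumed for illustration, not the paper's values.

```python
import math

def product_cv(factor_cvs):
    """First-order CV of a product of independent factors:
    CV_Y ~= sqrt(sum of CV_i^2)."""
    return math.sqrt(sum(c * c for c in factor_cvs))

# illustrative per-factor coefficients of variation (assumed)
cv_yield = product_cv([0.10, 0.08, 0.05, 0.12])
```

With these assumed inputs the combined CV lands near 0.18, i.e. within the 0.14–0.21 band reported for agricultural watersheds.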

10.
Rapid population growth and economic development have led to increasing reliance on water resources. The pressure is even greater for agricultural irrigation systems, where more water is needed to support the growing population. In this study, an inexact programming method based on two-stage stochastic programming and interval-parameter programming is developed to obtain optimal water-allocation strategies for agricultural irrigation systems. It can handle problems where two-stage decisions must be made under random and interval-parameter inputs. An interactive solution procedure derived from conventional interval-parameter programming allows the impact of the lower and upper bounds of the interval inputs to be reflected in the resulting solutions. An agricultural irrigation management problem is then presented to demonstrate the applicability of the method, and reasonable solutions are obtained. Compared with the solutions from a representative interval-parameter programming model with only one decision stage, the interval of the optimized objective-function value is narrower, indicating that more alternatives can be provided when water-allocation targets are set high. However, the chance of obtaining more benefits comes with the risk of paying higher penalties; this trade-off becomes more pronounced when water availability varies strongly.
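The two-stage structure (commit an allocation target now, pay a recourse penalty once random availability is revealed) can be shown by direct enumeration over a tiny scenario set. All numbers below, including the two penalty values standing in for interval bounds, are assumed for illustration.

```python
def expected_profit(target, scenarios, benefit, penalty):
    """Two-stage recourse: commit `target` now; once availability is revealed,
    pay `penalty` per unit of shortfall."""
    return sum(p * (benefit * target - penalty * max(0.0, target - avail))
               for p, avail in scenarios)

scenarios = [(0.2, 3.0), (0.5, 5.0), (0.3, 7.0)]   # (probability, water available)
benefit = 10.0
best_target = {}
for penalty in (12.0, 20.0):   # lower/upper bound of an interval penalty (assumed)
    best_target[penalty] = max(range(9),
                               key=lambda t: expected_profit(t, scenarios, benefit, penalty))
```

Solving at both ends of the penalty interval brackets the optimal policy: the cheap-penalty bound favours an ambitious target while the expensive-penalty bound retreats to a safer one, which is the trade-off the abstract describes.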

11.
12.
Optimization of multi-phase transport models is important both for calibrating model parameters to observed data and for analyzing management options. We focus on examples of geological carbon sequestration (GCS) process-based multi-phase models. Realistic GCS models can be very computationally expensive, not only because of the spatial distribution of the model but also because of the complex nonlinear multi-phase and multi-component transport equations to be solved. As a result, we need optimization methods that obtain accurate answers with relatively few simulations. In this analysis we compare a variety of optimization algorithms to understand which types are best suited to different types of problems, including an analysis of which characteristics of a problem matter in the choice of algorithm. The goal of this paper is to evaluate which optimization algorithms are the most efficient in a given situation, taking into account the shape of the optimization problem (e.g. uni- or multi-modal) and the number of simulations that can be afforded. The algorithms compared are the widely used derivative-based PEST optimization algorithm, the derivative-based iTOUGH2, the Kriging response-surface algorithm EGO, the heuristics-based DDS (Dynamically Dimensioned Search), and the Radial Basis Function surrogate response-surface global optimization algorithms ‘GORBIT’ and ‘Stochastic RBF’. We calibrate a simple homogeneous model ‘3hom’ and two more realistic models, ‘20layer’ and ‘6het’; the latter takes 2 h per simulation. Using rigorous statistical tests, we show that while the derivative-based PEST algorithm is efficient on the simple 3hom model, it does poorly in comparison with the surrogate optimization methods Stochastic RBF and GORBIT on the more realistic models. We then identify the shapes of the optimization surfaces of the three models using enumerative simulations and find that 3hom is smooth and unimodal whereas the more realistic models are rough and multi-modal. When the number of simulations is limited, surrogate response-surface algorithms perform best on multi-modal, bumpy objective functions, which we expect for most realistic multi-phase flow models such as those for GCS.
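The surrogate response-surface idea (fit a cheap interpolant to a handful of expensive evaluations, then search the interpolant instead of the simulator) can be sketched with a one-dimensional Gaussian RBF. The stand-in "simulator" and shape parameter are assumptions; production codes such as Stochastic RBF add restarts and explicit exploration terms.

```python
import numpy as np

def rbf_fit(X, y, eps=1.0):
    """Gaussian RBF interpolant through (X, y): the cheap surrogate that
    stands in for the expensive simulator."""
    r = np.abs(X[:, None] - X[None, :])
    w = np.linalg.solve(np.exp(-(eps * r) ** 2), y)
    return lambda x: np.exp(-(eps * np.abs(x - X)) ** 2) @ w

expensive = lambda x: (x - 1.3) ** 2 + np.sin(3.0 * x)   # stand-in simulator
X = np.linspace(-2.0, 4.0, 13)                            # 13 "expensive" runs
surrogate = rbf_fit(X, expensive(X))
grid = np.linspace(-2.0, 4.0, 601)
x_next = grid[np.argmin([surrogate(g) for g in grid])]    # candidate to evaluate next
```

Searching the surrogate costs microseconds rather than hours, so even a rough interpolant built from a dozen runs can steer the next expensive simulation toward the basin of the optimum.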

13.
The TOPMODEL framework was used to derive expressions that account for saturated and unsaturated flow through shallow soil on a hillslope. The resulting equations were the basis for a shallow‐soil TOPMODEL (STOPMODEL). The common TOPMODEL theory implicitly assumes a water table below the entire watershed and this does not conceptually apply to systems hydrologically controlled by shallow interflow of perched groundwater. STOPMODEL provides an approach for extending TOPMODEL's conceptualization to apply to shallow, interflow‐driven watersheds by using soil moisture deficit instead of water table depth as the state variable. Deriving STOPMODEL by using a hydraulic conductivity function that changes exponentially with soil moisture content results in equations that look very similar to those commonly associated with TOPMODEL. This alternative way of conceptualizing TOPMODEL makes the modelling approach available to researchers, planners, and engineers who work in areas where TOPMODEL was previously believed to be unsuited, such as the New York City Watershed in the Catskills region of New York State. Copyright © 2002 John Wiley & Sons, Ltd.  
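The change of state variable can be written directly: with the soil moisture deficit D replacing water-table depth, downslope flow per unit contour length keeps TOPMODEL's familiar exponential form. The expression and parameter values below are illustrative, not STOPMODEL's exact equations.

```python
import math

def subsurface_flow(deficit, t0, slope_rad, m):
    """Downslope flow per unit contour length in a deficit-based form:
    q = T0 * tan(slope) * exp(-D/m) (illustrative STOPMODEL-style expression)."""
    return t0 * math.tan(slope_rad) * math.exp(-deficit / m)

t0, m, beta = 5.0, 0.03, math.radians(10.0)
q_wet = subsurface_flow(0.01, t0, beta, m)   # small deficit: wet soil
q_dry = subsurface_flow(0.06, t0, beta, m)   # large deficit: dry soil
```

Flow decays exponentially as the deficit grows, exactly mirroring the exp(−z/m) water-table form, which is why the deficit-based equations "look very similar" to classical TOPMODEL.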

14.
In this study we propose a probabilistic approach for coupled distributed hydrological‐hillslope stability models that accounts for soil parameter uncertainty at the basin scale. The geotechnical and soil retention curve parameters are treated as random variables across the basin, and theoretical probability distributions of the Factor of Safety (FS) are estimated. The derived distributions are used to obtain the spatio‐temporal dynamics of the probability of failure, in terms of parameter uncertainty, conditioned on soil moisture dynamics. The framework has been implemented in the tRIBS‐VEGGIE (Triangulated Irregular Network (TIN)‐based Real‐time Integrated Basin Simulator‐VEGetation Generator for Interactive Evolution)‐Landslide model and applied to a basin in the Luquillo Experimental Forest (Puerto Rico) where shallow landslides are common. In particular, the methodology was used to evaluate how the spatial and temporal patterns of precipitation, whose variability is significant over the basin, affect the distribution of the probability of failure, through event-scale analyses. Results indicate that hyetographs in which heavy precipitation falls near the end of the event lead to the most critical conditions in terms of probability of failure. Copyright © 2015 John Wiley & Sons, Ltd.
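The probability of failure given parameter uncertainty can be sketched with an infinite-slope factor of safety and Monte Carlo sampling of the geotechnical parameters; the FS expression, the distributions, and all values below are common textbook assumptions, not the tRIBS‐VEGGIE‐Landslide formulation.

```python
import math, random

def factor_of_safety(c, phi_deg, gamma, z, beta_deg, m):
    """Infinite-slope factor of safety with relative saturation m
    (one common form, assumed here; gamma_w = 9.81 kN/m^3)."""
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    resist = c + (gamma * z * math.cos(beta) ** 2 - 9.81 * m * z) * math.tan(phi)
    drive = gamma * z * math.sin(beta) * math.cos(beta)
    return resist / drive

rng = random.Random(42)
n, fails = 20000, 0
for _ in range(n):
    c = rng.gauss(5.0, 1.5)        # cohesion, kPa (assumed distribution)
    phi = rng.gauss(30.0, 3.0)     # friction angle, degrees (assumed)
    fails += factor_of_safety(c, phi, gamma=18.0, z=2.0, beta_deg=30.0, m=0.3) < 1.0
p_fail = fails / n
```

Even though the mean-parameter slope is stable (FS > 1), parameter uncertainty yields a substantial failure probability; repeating this per cell and per time step, with m driven by simulated soil moisture, gives the spatio-temporal probability-of-failure maps the paper describes.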

15.
LIN Ting-Ting, LIN Xiao-Xue, WAN Ling & YANG Ying, Chinese Journal of Geophysics (《地球物理学报》), 2020, 63(11): 4256–4267
Surface magnetic resonance sounding for groundwater detection is approaching maturity, and with continued research the technique has also been applied to advance detection ahead of tunnel faces. However, because of the constraints of the tunnel environment, the acquired magnetic resonance signal has an extremely low signal-to-noise ratio, and the certainty of the parameters in the interpreted results deserves scrutiny. To address this problem, this paper proposes an uncertainty-analysis scheme for the model parameters of tunnel magnetic resonance sounding advance detection, so that before field work the instrument configuration can be optimized for the detection target and the ambient noise level, improving the accuracy of the interpretation. Building on surface magnetic resonance theory, we first derive expressions for the excitation field of a rectangular tunnel coil that account for the antenna laying angle, and simulate the quasi-whole-space forward response of magnetic resonance sounding in a tunnel. Second, based on the posterior model covariance matrix, we compute standard-deviation factors for the model parameters and grade their uncertainty. Finally, a three-layer water-bearing model is constructed, with the second water-bearing layer as the observation target. Using synthetic data, we examine how resistivity, water content, water-body thickness, coil side length, number of turns, coil rotation angle and noise level affect determination of the target water body. Comparative analysis yields the following conclusions: when the resistivity of the stratum in front of the target is below 10 Ωm, the uncertainty of the target water body decreases as that resistivity increases, whereas above 10 Ωm it has no effect; increasing the water content of the stratum in front of the target markedly increases the target's uncertainty; the resistivity and water content of the target layer itself have almost no effect on its uncertainty; the thicker the target layer, the better determined its water body; increasing the coil side length and the number of turns both greatly reduce the uncertainty; the coil deflection angle does not affect the target's uncertainty; and the larger the noise amplitude in the magnetic resonance signal, the greater the uncertainty of the water-body parameters. These conclusions help improve the accuracy of parameters inverted from tunnel magnetic resonance data and provide a scheme for optimizing survey parameters in advance.
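The posterior-covariance step can be illustrated for a linearized problem: C = σ²(JᵀJ)⁻¹, with the square roots of the diagonal grading each parameter's uncertainty. The two-parameter linear forward model below is a stand-in for the magnetic resonance kernel, chosen only to show the mechanics.

```python
import numpy as np

def parameter_std(J, sigma_noise):
    """Linearised posterior covariance C = sigma^2 (J^T J)^{-1}; the square
    roots of its diagonal grade each parameter's uncertainty."""
    C = sigma_noise ** 2 * np.linalg.inv(J.T @ J)
    return np.sqrt(np.diag(C))

# toy forward model y = a*x + b sampled at five offsets
# (a stand-in for the magnetic resonance sensitivity kernel)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
J = np.column_stack([x, np.ones_like(x)])
std_low_noise = parameter_std(J, 0.1)
std_high_noise = parameter_std(J, 0.5)
```

Parameter standard deviations scale linearly with the noise level, which is exactly why the paper grades uncertainty against the ambient noise before choosing coil size and turn count.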

16.

17.

The effect of land-use or land-cover change on stream runoff dynamics is not fully understood. In many parts of the world, forest management is the major land-cover change agent. While the paired-catchment approach has been the primary methodology used to quantify such effects, it is only feasible for small headwater catchments where precipitation inputs and catchment characteristics are uniform between the treatment and control catchments. This paper presents a model-based change-detection approach that includes model and parameter uncertainty as an alternative to the traditional paired-catchment method for larger catchments. We use the HBV model and data from the HJ Andrews Experimental Forest in Oregon, USA, to develop and test the approach on two small (<1 km²) headwater catchments (a 100% clear-cut and a control) and then apply the technique to the larger 62 km² Lookout catchment. Three different approaches are used to detect changes in stream peak flows: (a) calibration for a period before (or after) the change and simulation of the runoff that would have been observed without land-cover change (reconstruction of the runoff series); (b) comparison of calibrated parameter values for periods before and after a land-cover change; and (c) comparison of runoff predicted with parameter sets calibrated for periods before and after a land-cover change. Our proof-of-concept change-detection modelling showed that peak flows increased in the clear-cut headwater catchment relative to the headwater control catchment, and several parameter values in the model changed after the clear-cutting. Some minor changes were also detected in the control, illustrating the problem of false detections. For the larger Lookout catchment, moderately increased peak flows were detected. Monte Carlo techniques used to quantify parameter uncertainty and to compute confidence intervals on model results and parameter ranges showed rather wide distributions of model simulations. While this makes change detection more difficult, it also demonstrates the need to explicitly consider parameter uncertainty in the modelling approach to obtain reliable results.

Citation Seibert, J. & McDonnell, J. J. (2010) Land-cover impacts on streamflow: a change-detection modelling approach that incorporates parameter uncertainty. Hydrol. Sci. J. 55(3), 316–332.
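Approach (a), reconstructing a no-change runoff series with an uncertain parameter ensemble, amounts to flagging events whose observed peak falls outside the ensemble's prediction band. The Gaussian ensembles below are synthetic stand-ins for HBV runs; the band width and event values are assumptions.

```python
import random

def detect_change(observed_peaks, ensembles, alpha=0.05):
    """Flag events whose observed peak lies outside the ensemble's (1-alpha)
    band: change detection that accounts for parameter uncertainty."""
    flags = []
    for obs, ens in zip(observed_peaks, ensembles):
        s = sorted(ens)
        lo = s[int(len(s) * alpha / 2)]
        hi = s[int(len(s) * (1 - alpha / 2)) - 1]
        flags.append(obs < lo or obs > hi)
    return flags

rng = random.Random(7)
# per-event ensemble of "no-change" simulated peaks (stand-in for HBV runs)
ensembles = [[rng.gauss(10.0, 1.0) for _ in range(1000)] for _ in range(3)]
observed = [10.2, 14.0, 9.5]            # only the second event clearly increased
flags = detect_change(observed, ensembles)
```

The wider the parameter-induced band, the harder it is to flag a change, which is the trade-off the abstract highlights: ignoring parameter uncertainty would make detection easier but unreliable.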

18.
The application of a modified version of dynamic TOPMODEL for two subcatchments at Plynlimon, Wales is described. Conservative chemical mixing within mobile and immobile stores has been added to the hydrological model in an attempt to simulate observed stream chloride concentrations. The model was not fully able to simulate the observed behaviour, in particular the short‐ to medium‐term dynamics. One of the primary problems highlighted by the study was the representation of dry deposition and cloud‐droplet‐deposited chloride, which formed a significant part of the long‐term chloride mass budget. Equifinality of parameter sets inhibited the ability to determine the effective catchment mixing volumes and coefficients, or the most likely partition between occult mass inputs and chloride mass inputs determined by catchment immobile‐store antecedent conditions. Some success was achieved, inasmuch as some aspects of the dynamic behaviour of the signal were satisfactorily simulated, although spectral analysis showed that the model could not fully reproduce the 1/f power spectra of observed stream chloride concentrations, with its implication of a wide distribution of residence times for water in the catchment. Copyright © 2006 John Wiley & Sons, Ltd.
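The conservative-mixing bookkeeping between a mobile and an immobile store can be written as one explicit mass-balance step; the exchange formulation and all values below are assumptions for illustration, not the paper's exact scheme. The key invariant is that total chloride mass changes only by what rain adds and discharge removes.

```python
def mix_step(v_mob, c_mob, v_imm, c_imm, p, c_p, q, ex):
    """One explicit step of conservative chloride mixing: rain p (concentration
    c_p) enters the mobile store, discharge q leaves it at concentration c_mob,
    and a volume ex is exchanged with the immobile store (shared arbitrary
    units; formulation assumed for illustration)."""
    m_mob = v_mob * c_mob + p * c_p - q * c_mob + ex * (c_imm - c_mob)
    m_imm = v_imm * c_imm - ex * (c_imm - c_mob)
    v_mob_new = v_mob + p - q
    return v_mob_new, m_mob / v_mob_new, v_imm, m_imm / v_imm

v_mob, c_mob, v_imm, c_imm = 100.0, 5.0, 400.0, 8.0
mass_in = 10.0 * 2.0                     # p * c_p
mass_out = 10.0 * c_mob                  # q * old mobile concentration
v_mob, c_mob, v_imm, c_imm = mix_step(v_mob, c_mob, v_imm, c_imm,
                                      p=10.0, c_p=2.0, q=10.0, ex=5.0)
total_mass = v_mob * c_mob + v_imm * c_imm
```

Because the exchange term moves mass between stores without creating or destroying it, the stream signal's damping and long memory come entirely from the store volumes and exchange rate, which are the parameters the paper found equifinal.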

19.
Three techniques for locating field lines in the magnetosphere that contain standing ULF pulsations are compared using dynamic spectra. The first technique compares ratios of the H- and D-components of the magnetic field at a single site; the second examines the ratios of the H-components at neighboring sites along a magnetic meridian; and the third displays the phase difference between H-components at neighboring sites. We find that the H:D ratio at a single station appears to detect magnetospheric standing waves but not their precise location. In contrast, the dual-station H-ratio technique is sensitive to resonances local to the stations and has advantages over the widely used phase-gradient technique. In contrast to the latter technique, calculating the H-power ratio does not require precise timing and provides two resonant locations, not one. We also find that the stations used need not be strictly confined to a single magnetic meridian. Resonance signatures can be detected with stations up to 1300 km in east–west separation. In our initial data near L=2, multiple-harmonic structure is generally not observed. The resonant wave period, when assumed to be the fundamental of the standing Alfvén wave, gives densities in the range 3000–8000 amu/cm3. These mass densities agree with in situ observations at earlier epochs. The equatorial mass density varies most during the day (by over a factor of two for the case studied) at L=1.86 and much less (20%) at L=2.2. This is consistent with a constant upward flux of ions over this latitude range flowing into a flux tube whose volume increases rapidly with increasing L-value.
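The dual-station amplitude-ratio idea can be illustrated with toy Lorentzian resonance profiles: the ratio of H amplitudes at two stations crosses unity midway between their local resonant frequencies and has extrema near each one, which is how the technique yields two resonant locations. Profiles, widths, and frequencies below are purely illustrative.

```python
import math

def amp(f, f0, w=0.001):
    # toy Lorentzian amplitude profile of a field-line resonance at f0 (Hz)
    return 1.0 / math.sqrt((f - f0) ** 2 + w ** 2)

f_a, f_b = 0.028, 0.032                       # local resonances (assumed), Hz
freqs = [0.020 + i * 1e-4 for i in range(201)]
ratio = [amp(f, f_a) / amp(f, f_b) for f in freqs]
i_max = max(range(len(freqs)), key=lambda i: ratio[i])
i_min = min(range(len(freqs)), key=lambda i: ratio[i])
f_max, f_min = freqs[i_max], freqs[i_min]      # near f_a and f_b respectively
```

Only amplitude spectra are needed, no cross-station timing, which is the practical advantage over the phase-gradient technique noted in the abstract.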

20.
Regression‐based regional flood frequency analysis (RFFA) methods are widely adopted in hydrology. This paper compares two regression‐based RFFA methods using a Bayesian generalized least squares (GLS) modelling framework; the two are quantile regression technique (QRT) and parameter regression technique (PRT). In this study, the QRT focuses on the development of prediction equations for a flood quantile in the range of 2 to 100 years average recurrence intervals (ARI), while the PRT develops prediction equations for the first three moments of the log Pearson Type 3 (LP3) distribution, which are the mean, standard deviation and skew of the logarithms of the annual maximum flows; these regional parameters are then used to fit the LP3 distribution to estimate the desired flood quantiles at a given site. It has been shown that using a method similar to stepwise regression and by employing a number of statistics such as the model error variance, average variance of prediction, Bayesian information criterion and Akaike information criterion, the best set of explanatory variables in the GLS regression can be identified. In this study, a range of statistics and diagnostic plots have been adopted to evaluate the regression models. The method has been applied to 53 catchments in Tasmania, Australia. It has been found that catchment area and design rainfall intensity are the most important explanatory variables in predicting flood quantiles using the QRT. For the PRT, a total of four explanatory variables were adopted for predicting the mean, standard deviation and skew. The developed regression models satisfy the underlying model assumptions quite well; of importance, no outlier sites are detected in the plots of the regression diagnostics of the adopted regression equations. 
Based on ‘one‐at‐a‐time cross validation’ and a number of evaluation statistics, it has been found that for Tasmania the QRT provides more accurate flood quantile estimates for the higher ARIs while the PRT provides relatively better estimates for the smaller ARIs. The RFFA techniques presented here can easily be adapted to other Australian states and countries to derive more accurate regional flood predictions. Copyright © 2011 John Wiley & Sons, Ltd.
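The PRT step, turning the three regional LP3 moments into a flood quantile, can be sketched with the Wilson–Hilferty frequency-factor approximation for the Pearson III deviate. The moment values below are assumed for illustration, not Tasmanian regression output.

```python
from statistics import NormalDist

def lp3_quantile(mean_log10, std_log10, skew, aep):
    """LP3 flood quantile from the first three moments of log10 flows (the PRT
    route), using the Wilson-Hilferty frequency-factor approximation."""
    z = NormalDist().inv_cdf(1.0 - aep)
    g = skew
    if abs(g) < 1e-9:
        k = z
    else:
        k = (2.0 / g) * ((1.0 + g * z / 6.0 - g * g / 36.0) ** 3 - 1.0)
    return 10.0 ** (mean_log10 + k * std_log10)

# illustrative regional moments (assumed): mean, std and skew of log10 flows
q100 = lp3_quantile(2.0, 0.3, -0.2, aep=0.01)   # ~100-year ARI quantile
q2 = lp3_quantile(2.0, 0.3, -0.2, aep=0.5)      # ~2-year ARI quantile
```

Once the regression supplies the mean, standard deviation, and skew at an ungauged site, every quantile from 2- to 100-year ARI follows from this one formula, which is the appeal of the parameter-regression route over fitting a separate equation per quantile.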
