Similar Literature
20 similar documents found
1.
2.
Estimating erroneous parameters in ensemble-based snow data assimilation systems has received little attention in the literature. Little is known about the effectiveness and performance of the related methods, or about their sensitivity to other error sources such as model structural error. This research tackles these questions by running synthetic one-dimensional snow data assimilation with the ensemble Kalman filter (EnKF), in which state and parameter are simultaneously updated. The first part of the paper investigates the effectiveness of this parameter estimation approach in a perfect-model-structure scenario, and the second part focuses on its dependence on model structural error. The results of the first part demonstrate the advantages of this parameter estimation approach in reducing the systematic error of snow water equivalent (SWE) estimates and in retrieving the correct parameter value. The results of the second part indicate that, at least in our experiment, there is an evident dependence of parameter search convergence on model structural error. In the imperfect-model-structure run, the parameter search diverges, although the state variable is still simulated well. This result suggests that good data assimilation performance in estimating state variables is not a sufficient indicator of reliable parameter retrieval in the presence of model structural error. The generality of this conclusion needs to be tested by data assimilation experiments with more complex structural error configurations.
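A minimal numerical sketch of the joint state-parameter EnKF update described above. The toy melt model, parameter values, and noise levels are invented for illustration and are not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ens = 100
true_param = 2.0      # melt-rate parameter used to generate the synthetic truth
obs_err = 2.0         # observation error standard deviation (mm)

# Augmented ensemble: each member carries [SWE state (mm), melt parameter].
ens = np.column_stack([
    rng.normal(100.0, 10.0, n_ens),   # SWE prior
    rng.normal(1.0, 0.5, n_ens),      # biased, uncertain parameter prior
])

truth = 100.0
for _ in range(30):
    # Forecast: toy melt model, SWE decreases by param per (unit) degree-day.
    truth -= true_param
    ens[:, 0] -= ens[:, 1]
    # Analysis: update the augmented vector with one synthetic SWE observation.
    y = truth + rng.normal(0.0, obs_err)
    hx = ens[:, 0].copy()                              # observation operator: SWE
    anom = ens - ens.mean(axis=0)
    p_xy = anom.T @ (hx - hx.mean()) / (n_ens - 1)     # cov([SWE, param], obs)
    p_yy = hx.var(ddof=1) + obs_err ** 2
    gain = p_xy / p_yy                                 # Kalman gain, length 2
    perturbed = y + rng.normal(0.0, obs_err, n_ens)    # stochastic EnKF
    ens += np.outer(perturbed - hx, gain)

param_est = float(ens[:, 1].mean())
```

Because the parameter is carried in the augmented vector, its cross-covariance with the observed SWE drives the correction, which is the mechanism that lets the filter pull a biased parameter prior toward the true value in the perfect-model-structure case.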

3.
A Monte Carlo-based approach to assess uncertainty in recharge areas shows that incorporation of atmospheric tracer observations (in this case, tritium concentration) and prior information on model parameters leads to more precise predictions of recharge areas. Variance-covariance matrices, from model calibration and calculation of sensitivities, were used to generate parameter sets that account for parameter correlation and uncertainty. Constraining parameter sets to those that met acceptance criteria, which included a standard error criterion, did not appear to bias model results. Although the addition of atmospheric tracer observations and prior information produced similar changes in the extent of predicted recharge areas, prior information had the effect of increasing probabilities within the recharge area to a greater extent than atmospheric tracer observations. Uncertainty in the recharge area propagates into predictions that directly affect water quality, such as land cover in the recharge area associated with a well and the residence time associated with the well. Assessments of well vulnerability that depend on these factors should include an assessment of model parameter uncertainty. A formal simulation of parameter uncertainty can be used to delineate probabilistic recharge areas, and the results can be expressed in ways that can be useful to water-resource managers. Although no one model is the correct model, the results of multiple models can be evaluated in terms of the decision being made and the probability of a given outcome from each model.
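The core sampling step above, generating correlated parameter sets from a calibration variance-covariance matrix and screening them with acceptance criteria, can be sketched as follows; the optimum, covariance, and bounds are hypothetical values, not the study's:

```python
import numpy as np

rng = np.random.default_rng(1)

# Calibrated optimum and variance-covariance matrix of two parameters
# (invented numbers; in practice these come from the regression output).
p_opt = np.array([1e-4, 0.25])            # e.g. hydraulic conductivity, porosity
cov = np.array([[4e-10, 1.0e-6],
                [1.0e-6, 4e-3]])          # off-diagonals encode correlation

draws = rng.multivariate_normal(p_opt, cov, size=5000)

# Acceptance criteria: keep physically plausible sets; a standard-error
# criterion on simulated equivalents would be applied the same way.
ok = (draws[:, 0] > 0) & (draws[:, 1] > 0) & (draws[:, 1] < 1)
accepted = draws[ok]
frac_accepted = float(ok.mean())
corr = float(np.corrcoef(accepted.T)[0, 1])
```

Each accepted set would then be run through the flow model to map a recharge area, and the stack of areas summarized as probabilities.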

4.
The inverse problem of parameter structure identification in a distributed parameter system remains challenging. Identifying a more complex parameter structure requires more data. There is also the problem of over-parameterization. In this study, we propose a modified Tabu search for parameter structure identification. We embed an adjoint state procedure in the search process to improve the efficiency of the Tabu search. We use Voronoi tessellation for automatic parameterization to reduce the dimension of the distributed parameter. Additionally, a coarse-fine grid technique is applied to further improve the effectiveness and efficiency of the proposed methodology. To avoid over-parameterization, at each level of parameter complexity we calculate the residual error for parameter fitting, the parameter uncertainty error and a modified Akaike Information Criterion. To demonstrate the proposed methodology, we conduct numerical experiments with synthetic data that simulate both discrete hydraulic conductivity zones and a continuous hydraulic conductivity distribution. Our results indicate that the Tabu search allied with the adjoint state method significantly improves computational efficiency and effectiveness in solving the inverse problem of parameter structure identification.
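The over-parameterization guard described above, scoring each level of parameter complexity with an information criterion, can be illustrated with a toy example; polynomial degree stands in for the number of conductivity zones, and the plain Gaussian-likelihood AIC stands in for the paper's modified criterion:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 40)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.05, x.size)   # truly linear "field"

def aic(n, rss, k):
    # Gaussian-likelihood AIC: n*ln(RSS/n) + 2k, with k model parameters.
    return n * np.log(rss / n) + 2 * k

scores = {}
for degree in range(6):                 # increasing parameter complexity
    coef = np.polyfit(x, y, degree)
    rss = float(((y - np.polyval(coef, x)) ** 2).sum())
    scores[degree] = aic(x.size, rss, degree + 1)

best_degree = min(scores, key=scores.get)
```

Residual error keeps falling as complexity grows, but the 2k penalty stops the selection near the true complexity rather than at the most complex model.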

5.
6.
Information theory is the basis for understanding how information is transmitted as observations. Observation data can be used to compare uncertainty on parameter estimates and predictions between models. Jacobian Information (JI) is quantified as the determinant of the weighted Jacobian (sensitivity) matrix. Fisher Information (FI) is quantified as the determinant of the weighted FI matrix. FI measures the relative disorder of a model (entropy) in a set of models. One‐dimensional models are used to demonstrate the relationship between JI and FI, and the resulting uncertainty on estimated parameter values and model predictions for increasing model complexity, different model structures, different boundary conditions, and over‐fitted models. Greater model complexity results in increased JI accompanied by an increase in parameter and prediction uncertainty. FI generally increases with increasing model complexity unless model error is large. Models with lower FI have a higher level of disorder (increase in entropy) which results in greater uncertainty of parameter estimates and model predictions. A constant‐head boundary constrains the heads in the area near the boundary, reducing sensitivity of simulated equivalents to estimated parameters. JI and FI are lower for this boundary condition as compared to a constant‐outflow boundary in which the heads in the area of the boundary can adjust freely. Complex, over‐fitted models, in which the structure of the model is not supported by the observation dataset, result in lower JI and FI because there is insufficient information to estimate all parameters in the model.
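The Fisher Information measure above can be computed directly from a weighted sensitivity matrix; the Jacobian entries and observation weights below are made-up numbers for a 3-observation, 2-parameter case:

```python
import numpy as np

# Sensitivity (Jacobian) matrix: 3 observations x 2 parameters, with
# diagonal weights equal to the inverse observation variances.
J = np.array([[1.0, 0.2],
              [0.8, 0.5],
              [0.3, 0.9]])
W = np.diag([1.0, 1.0, 0.25])

F = J.T @ W @ J                         # Fisher information matrix
fisher_info = float(np.linalg.det(F))   # scalar information measure

# Lower information implies a larger parameter covariance, i.e. greater
# uncertainty on the estimated parameter values.
param_cov = np.linalg.inv(F)
```

An insensitive observation (small Jacobian row, or a row damped by a constant-head boundary) shrinks det(F) and inflates the parameter covariance, which is the relationship the abstract exploits.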

7.
Multi-scale successive approximation genetic algorithm inversion of magnetotelluric data
The genetic algorithm is a stochastic global search algorithm. Compared with conventional optimization methods based on local linearization, its dependence on the initial model is greatly reduced, but it suffers from the loss of effective genes and from premature convergence. A multi-scale successive-approximation genetic algorithm, built on the idea of multi-scale successive-approximation inversion, can effectively solve these problems. The algorithm was applied to the inversion of magnetotelluric data; trials on theoretical curves and field data show that the multi-scale successive-approximation genetic algorithm can automatically invert geoelectric parameters.
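The global-search machinery this abstract builds on can be sketched with a minimal single-scale, real-coded genetic algorithm on a toy two-parameter misfit; the multi-scale successive-approximation scheme and the magnetotelluric forward model are deliberately not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(3)
target = np.array([2.0, 0.5])        # "true" model the search should recover

def misfit(pop):
    # Toy objective standing in for the data misfit of a forward model.
    return ((pop - target) ** 2).sum(axis=1)

lo, hi = 0.0, 4.0
pop = rng.uniform(lo, hi, (50, 2))
for _ in range(60):
    f = misfit(pop)
    # Binary tournament selection.
    i, j = rng.integers(0, 50, (2, 50))
    parents = np.where((f[i] < f[j])[:, None], pop[i], pop[j])
    # Arithmetic crossover with a neighbouring parent.
    w = rng.uniform(size=(50, 1))
    children = w * parents + (1 - w) * np.roll(parents, 1, axis=0)
    # Gaussian mutation, clipped to the search bounds.
    children += rng.normal(0.0, 0.05, children.shape)
    children = np.clip(children, lo, hi)
    # Elitism: carry over the best parent unmutated.
    children[0] = parents[np.argmin(misfit(parents))]
    pop = children

best = pop[np.argmin(misfit(pop))]
```

The multi-scale idea in the abstract would wrap a loop like this one, first searching a coarse parameterization and then successively refining the scale around the coarse solution, which mitigates effective-gene loss and premature convergence.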

8.
Calibration of hydrologic models is very difficult because of measurement errors in input and response, errors in model structure, and the large number of non-identifiable parameters of distributed models. The difficulties even increase in arid regions with high seasonal variation of precipitation, where the modelled residuals often exhibit high heteroscedasticity and autocorrelation. On the other hand, support of water management by hydrologic models is important in arid regions, particularly if there is increasing water demand due to urbanization. The use and assessment of model results for this purpose require a careful calibration and uncertainty analysis. Extending earlier work in this field, we developed a procedure to overcome (i) the problem of non-identifiability of distributed parameters by introducing aggregate parameters and using Bayesian inference, (ii) the problem of heteroscedasticity of errors by combining a Box–Cox transformation of results and data with seasonally dependent error variances, (iii) the problems of autocorrelated errors, missing data and outlier omission with a continuous-time autoregressive error model, and (iv) the problem of the seasonal variation of error correlations with seasonally dependent characteristic correlation times. The technique was tested with the calibration of the hydrologic sub-model of the Soil and Water Assessment Tool (SWAT) in the Chaohe Basin in North China. The results demonstrated the good performance of this approach to uncertainty analysis, particularly with respect to the fulfilment of statistical assumptions of the error model. A comparison with an independent error model and with error models that only considered a subset of the suggested techniques clearly showed the superiority of the approach based on all the features (i)–(iv) mentioned above.
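Two ingredients of the error model above, a Box-Cox transformation to tame heteroscedastic residuals and a lag-one autocorrelation check of the transformed residuals, can be sketched on synthetic data; the flows, error magnitude, and lambda value are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def box_cox(q, lam):
    # Box-Cox transformation; lam -> 0 recovers log(q).
    return (q ** lam - 1.0) / lam if lam != 0 else np.log(q)

# Synthetic discharge with multiplicative (hence heteroscedastic) error.
sim = rng.uniform(1.0, 50.0, 500)
obs = sim * np.exp(rng.normal(0.0, 0.1, sim.size))

raw = obs - sim                                  # variance grows with flow
bc = box_cox(obs, 0.2) - box_cox(sim, 0.2)       # much closer to constant

high, low = sim > 25.0, sim <= 25.0
raw_ratio = raw[high].var() / raw[low].var()
bc_ratio = bc[high].var() / bc[low].var()

# Lag-one autocorrelation of the transformed residuals; an AR(1) model
# (or its continuous-time analogue, as in the paper) would capture this
# dependence. The synthetic errors here are independent, so phi ~ 0.
phi = float(np.corrcoef(bc[:-1], bc[1:])[0, 1])
```

On real residuals phi is typically well above zero, which is why the paper adds a continuous-time autoregressive error model on top of the transformation.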

9.
Automatic calibration of complex subsurface reaction models involves numerous difficulties, including the existence of multiple plausible models, parameter non-uniqueness, and excessive computational burden. To overcome these difficulties, this study investigated a novel procedure for performing simultaneous calibration of multiple models (SCMM). By combining a hybrid global-plus-polishing search heuristic with a biased-but-random adaptive model evaluation step, the new SCMM method calibrates multiple models via efficient exploration of the multi-model calibration space. Central algorithm components are an adaptive assignment of model preference weights, mapping functions relating the uncertain parameters of the alternative models, and a shuffling step that efficiently exploits pseudo-optimal configurations of the alternative models. The SCMM approach was applied to two nitrate contamination problems involving batch reactions and one-dimensional reactive transport. For the chosen problems, the new method produced improved model fits (i.e. up to 35% reduction in objective function) at significantly reduced computational expense (i.e. 40–90% reduction in model evaluations), relative to previously established benchmarks. Although the method was effective for the test cases, SCMM relies on a relatively ad hoc approach to assigning intermediate preference weights and parameter mapping functions. Despite these limitations, the results of the numerical experiments are empirically promising and the reasoning and structure of the approach provide a strong foundation for further development.

10.
Probabilistic-fuzzy health risk modeling
Health risk analysis of multi-pathway exposure to contaminated water involves the use of mechanistic models that include many uncertain and highly variable parameters. Currently, the uncertainties in these models are treated using statistical approaches. However, not all uncertainties in data or model parameters are due to randomness. Other sources of imprecision that may lead to uncertainty include scarce or incomplete data, measurement error, data obtained from expert judgment, or subjective interpretation of available information. These kinds of uncertainties and also the non-random uncertainty cannot be treated solely by statistical methods. In this paper we propose the use of fuzzy set theory together with probability theory to incorporate uncertainties into the health risk analysis. We identify this approach as probabilistic-fuzzy risk assessment (PFRA). Based on the form of available information, fuzzy set theory, probability theory, or a combination of both can be used to incorporate parameter uncertainty and variability into mechanistic risk assessment models. In this study, tap water concentration is used as the source of contamination in the human exposure model. Ingestion, inhalation and dermal contact are considered as multiple exposure pathways. The tap water concentration of the contaminant and cancer potency factors for ingestion, inhalation and dermal contact are treated as fuzzy variables while the remaining model parameters are treated using probability density functions. Combined utilization of fuzzy and random variables produces membership functions of risk to individuals at different fractiles of risk as well as probability distributions of risk for various alpha-cut levels of the membership function. The proposed method provides a robust approach for evaluating human health risk from exposure when there is both uncertainty and variability in model parameters. PFRA allows utilization of certain types of information which have not been used directly in existing risk assessment methods.
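A minimal sketch of the fuzzy-probabilistic combination described above: a cancer potency factor is treated as a triangular fuzzy number evaluated at alpha-cuts, the dose is random, and the result is an interval of risk at each membership level. All numerical values are hypothetical, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

def tri_alpha_cut(low, mode, high, alpha):
    # Interval of a triangular fuzzy number at membership level alpha.
    return low + alpha * (mode - low), high - alpha * (high - mode)

# Probabilistic part: uncertain daily dose (mg/kg/day), hypothetical.
dose = rng.lognormal(mean=-3.0, sigma=0.5, size=10_000)
d95 = float(np.percentile(dose, 95))        # 95th-percentile exposure

# Fuzzy part: cancer potency factor as a triangular fuzzy number.
risk_bounds = {}
for alpha in (0.0, 0.5, 1.0):
    cpf_lo, cpf_hi = tri_alpha_cut(0.01, 0.05, 0.10, alpha)
    risk_bounds[alpha] = (cpf_lo * d95, cpf_hi * d95)
```

The intervals are nested: alpha = 0 gives the widest (most pessimistic) risk bounds, and alpha = 1 collapses to the single most plausible value, mirroring the membership functions of risk at different fractiles described in the abstract.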

11.
The curve number (CN) method is widely used for rainfall–runoff modelling in continuous hydrologic simulation models. A sound continuous soil moisture accounting procedure is necessary for models using the CN method. For shallow soils and soils with low storage, the existing methods have limitations in their ability to reproduce the observed runoff. Therefore, a simple one‐parameter model based on the Soil Conservation Service CN procedure is developed for use in continuous hydrologic simulation. The sensitivity of the model parameter to runoff predictions was also analysed. In addition, the behaviour of the procedure developed and the existing continuous soil moisture accounting procedure used in hydrologic models, in combination with Penman–Monteith and Hargreaves evapotranspiration (ET) methods was also analysed. The new CN methodology, its behaviour and the sensitivity of the depletion coefficient (model parameter) were tested in four United States Geological Survey defined eight‐digit watersheds in different water resources regions of the USA using the SWAT model. In addition to easy parameterization for calibration, the one‐parameter model developed performed adequately in predicting runoff. When tested for shallow soils, the parameter is found to be very sensitive to surface runoff and subsurface flow and less sensitive to ET. Copyright © 2007 John Wiley & Sons, Ltd.
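The standard SCS curve number runoff equation underlying the method discussed above can be written down directly (metric form, with initial abstraction Ia = 0.2 S); the CN and rainfall depths below are just examples:

```python
import numpy as np

def scs_runoff(p_mm, cn):
    # Potential maximum retention S (mm) and initial abstraction Ia = 0.2*S.
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    # Runoff Q = (P - Ia)^2 / (P - Ia + S) once rainfall exceeds Ia, else 0.
    return np.where(p_mm > ia, (p_mm - ia) ** 2 / (p_mm - ia + s), 0.0)

p = np.array([10.0, 50.0, 100.0])   # example storm depths (mm)
q75 = scs_runoff(p, 75.0)           # moderate curve number
q90 = scs_runoff(p, 90.0)           # higher CN, less storage, more runoff
```

The continuous soil moisture accounting problem the abstract addresses amounts to evolving the retention S (equivalently the CN) between storms, which is where the paper's one-parameter depletion coefficient enters.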

12.
A new parameter estimation algorithm based on ensemble Kalman filter (EnKF) is developed. The developed algorithm combined with the proposed problem parametrization offers an efficient parameter estimation method that converges using very small ensembles. The inverse problem is formulated as a sequential data integration problem. Gaussian process regression is used to integrate the prior knowledge (static data). The search space is further parameterized using Karhunen–Loève expansion to build a set of basis functions that spans the search space. Optimal weights of the reduced basis functions are estimated by an iterative regularized EnKF algorithm. The filter is converted to an optimization algorithm by using a pseudo time-stepping technique such that the model output matches the time dependent data. The EnKF Kalman gain matrix is regularized using truncated SVD to filter out noisy correlations. Numerical results show that the proposed algorithm is a promising approach for parameter estimation of subsurface flow models.
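One ingredient of the algorithm above, regularizing the ensemble Kalman gain with a truncated SVD so that small singular values carrying sampling noise are dropped before inversion, can be sketched on synthetic anomaly matrices (sizes and values are invented):

```python
import numpy as np

rng = np.random.default_rng(6)
n_par, n_obs, n_ens = 8, 6, 20

X = rng.normal(size=(n_par, n_ens))     # parameter (KL-weight) anomalies
Y = rng.normal(size=(n_obs, n_ens))     # predicted-data anomalies
R = 0.1 * np.eye(n_obs)                 # observation error covariance

C_xy = X @ Y.T / (n_ens - 1)
C_yy = Y @ Y.T / (n_ens - 1) + R

# Truncated-SVD pseudo-inverse of the innovation covariance: drop the
# smallest singular values, which mostly carry ensemble sampling noise.
U, s, Vt = np.linalg.svd(C_yy)
keep = np.cumsum(s) / s.sum() <= 0.99
keep[0] = True                          # always retain the leading mode
C_inv = (Vt[keep].T / s[keep]) @ U[:, keep].T

K = C_xy @ C_inv                        # regularized Kalman gain
```

With very small ensembles, as targeted by the paper, C_yy is poorly conditioned and this truncation is what keeps the gain from amplifying spurious correlations.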

13.
The derivation and history of the frequently cited aeolian transport model of White are considered in light of the continued replication of an error in the original expression. The error may have escaped notice because the expression is still dimensionally correct and it yields predictions that appear reasonable in comparison with both the predictions of other models and with field data. The incorrect expression has come to be identified as a distinct model. However, the correct formulation of the ‘White model’ is, in fact, a rearrangement of the Kawamura model with a slightly smaller (c.6%) empirical coefficient. © 1997 John Wiley & Sons, Ltd.

14.
Considerable uncertainty occurs in the parameter estimates of traditional rainfall–water level transfer function noise (TFN) models, especially with the models built using monthly time step datasets. This is due to the equal weights assigned for rainfall occurring during both water level rise and water level drop events while estimating the TFN model parameters using the least square technique. As an alternative to this approach, a threshold rainfall-based binary-weighted least square method was adopted to estimate the TFN model parameters. The efficacy of this binary-weighted approach in estimating the TFN model parameters was tested on 26 observation wells distributed across the Adyar River basin in Southern India. Model performance indices such as mean absolute error and coefficient of determination values showed that the proposed binary-weighted approach of fitting independent threshold-based TFN models for water level rise and water level drop scenarios considerably improves the model accuracy over other traditional TFN models.
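The binary-weighting idea above, fitting separate regressions for water-level-rise and water-level-drop events by zeroing out the weights of the other regime, can be sketched on synthetic data; the rainfall threshold, response slopes, and noise level are invented, and a single no-intercept regressor stands in for the full transfer function noise model:

```python
import numpy as np

rng = np.random.default_rng(7)
rain = rng.uniform(0.0, 200.0, 120)                  # monthly rainfall (mm)
rise = rain > 60.0                                   # threshold indicator
# Water level responds more strongly during rise events (synthetic truth).
level = np.where(rise, 0.02, 0.005) * rain + rng.normal(0.0, 0.2, rain.size)

def wls_slope(x, y, w):
    # Weighted least squares for a one-regressor model without intercept.
    return float((w * x * y).sum() / (w * x * x).sum())

slope_rise = wls_slope(rain, level, rise.astype(float))     # weight 1 on rises
slope_drop = wls_slope(rain, level, (~rise).astype(float))  # weight 1 on drops
slope_all = wls_slope(rain, level, np.ones_like(rain))      # traditional fit
```

The equal-weight fit blurs the two regimes into one compromise slope, which is the source of the parameter uncertainty the abstract attributes to traditional TFN calibration.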

15.
Flood forecasting is of prime importance when it comes to reducing the possible number of lives lost to storm-induced floods. Because rainfall-runoff models are far from being perfect, hydrologists need to continuously update outputs from the rainfall-runoff model they use, in order to adapt to the actual emergency situation. This paper introduces a new updating procedure that can be combined with conceptual rainfall-runoff models for flood forecasting purposes. Conceptual models are highly nonlinear and cannot easily accommodate theoretically optimal methods such as Kalman filtering. Most methods developed so far mainly update the states of the system, i.e. the contents of the reservoirs involved in the rainfall-runoff model. The new parameter updating method proves to be superior to a standard error correction method on four watersheds whose floods can cause damage to the greater Paris area. Moreover, further developments of the approach are possible, especially along the idea of combining parameter updating with assimilation of additional data such as soil moisture data from field measurements and/or from remote sensing.

16.
The error in physically-based rainfall-runoff modelling is broken into components, and these components are assigned to three groups: (1) model structure error, associated with the model’s equations; (2) parameter error, associated with the parameter values used in the equations; and (3) run time error, associated with rainfall and other forcing data. The error components all contribute to “integrated” errors, such as the difference between simulated and observed runoff, but their individual contributions cannot usually be isolated because the modelling process is complex and there is a lack of knowledge about the catchment and its hydrological responses. A simple model of the Slapton Wood Catchment is developed within a theoretical framework in which the catchment and its responses are assumed to be known perfectly. This makes it possible to analyse the contributions of the error components when predicting the effects of a physical change in the catchment. The standard approach to predicting change effects involves: (1) running “unchanged” simulations using current parameter sets; (2) making adjustments to the sets to allow for physical change; and (3) running “changed” simulations. Calibration or uncertainty-handling methods such as GLUE are used to obtain the current sets based on forcing and runoff data for a calibration period, by minimising or creating statistical bounds for the “integrated” errors in simulations of runoff. It is shown that current parameter sets derived in this fashion are unreliable for predicting change effects, because of model structure error and its interaction with parameter error, so caution is needed if the standard approach is to be used when making management decisions about change in catchments.

17.
Magnetic resonance sounding (MRS) has increasingly become an important method in hydrogeophysics because it allows for estimations of essential hydraulic properties such as porosity and hydraulic conductivity. A resistivity model is required for magnetic resonance sounding modelling and inversion. Therefore, joint interpretation or inversion is favourable to reduce the ambiguities that arise in separate magnetic resonance sounding and vertical electrical sounding (VES) inversions. A new method is suggested for the joint inversion of magnetic resonance sounding and vertical electrical sounding data. A one‐dimensional blocky model with varying layer thicknesses is used for the subsurface discretization. Instead of conventional derivative‐based inversion schemes that are strongly dependent on initial models, a global multi‐objective optimization scheme (a genetic algorithm [GA] in this case) is preferred to examine a set of possible solutions in a predefined search space. Multi‐objective joint optimization avoids the domination of one objective over the other without applying a weighting scheme. The outcome is a group of non‐dominated optimal solutions referred to as the Pareto‐optimal set. Tests conducted using synthetic data show that the multi‐objective joint optimization approximates the joint model parameters within the experimental error level and illustrates the range of trade‐off solutions, which is useful for understanding the consistency and conflicts between two models and objectives. Overall, the Levenberg‐Marquardt inversion of field data measured during a survey on a North Sea island presents similar solutions. However, the multi‐objective genetic algorithm method presents an efficient method for exploring the search space by producing a set of non‐dominated solutions. Borehole data were used to provide a verification of the inversion outcomes and indicate that the suggested genetic algorithm method is complementary for derivative‐based inversions.  
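The Pareto-optimality notion used in the multi-objective joint inversion above can be illustrated with a small non-dominated filter; the per-model misfit values for the two objectives (e.g. MRS and VES data fits) are arbitrary examples:

```python
import numpy as np

def non_dominated(costs):
    # costs: (n_models, n_objectives); lower is better in every objective.
    n = costs.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # Model j dominates model i if j is <= everywhere and < somewhere.
        dominated = np.all(costs <= costs[i], axis=1) & np.any(costs < costs[i], axis=1)
        if dominated.any():
            keep[i] = False
    return keep

costs = np.array([[1.0, 5.0],   # good MRS fit, poor VES fit
                  [5.0, 1.0],   # the reverse trade-off
                  [2.0, 2.0],   # balanced, non-dominated
                  [3.0, 3.0]])  # dominated by [2.0, 2.0]
front = non_dominated(costs)
```

Keeping the whole non-dominated front, rather than collapsing the two objectives with weights, is what lets the method expose the trade-offs and conflicts between the MRS and VES models that the abstract describes.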

18.
Analysis of slug tests in formations of high hydraulic conductivity
A new procedure is presented for the analysis of slug tests performed in partially penetrating wells in formations of high hydraulic conductivity. This approach is a simple, spreadsheet-based implementation of existing models that can be used for analysis of tests from confined or unconfined aquifers. Field examples of tests exhibiting oscillatory and nonoscillatory behavior are used to illustrate the procedure and to compare results with estimates obtained using alternative approaches. The procedure is considerably simpler than recently proposed methods for this hydrogeologic setting. Although the simplifications required by the approach can introduce error into hydraulic-conductivity estimates, this additional error becomes negligible when appropriate measures are taken in the field. These measures are summarized in a set of practical field guidelines for slug tests in highly permeable aquifers.

19.
L. Foglia & S.W. Mehl, Ground Water, 2015, 53(1):130–139
In this work, we provide suggestions for designing experiments where calibration of many models is required and guidance for identifying problematic calibrations. Calibration of many conceptual models which have different representations of the physical processes in the system, as is done in cross‐validation studies or multi‐model analysis, often uses computationally frugal inversion techniques to achieve tractable execution times. However, because these frugal methods are usually local methods, and the inverse problem is almost always nonlinear, there is no guarantee that the optimal solution will be found. Furthermore, evaluation of each inverse model's performance to identify poor calibrations can be tedious. Results of this study show that if poorly calibrated models are included in the analysis, simulated predictions and measures of prediction uncertainty can be affected in unexpected ways. Guidelines are provided to help identify problematic regressions and correct them.

20.
This paper describes a fuzzy rule-based approach applied for reconstruction of missing precipitation events. The working rules are formulated from a set of past observations using an adaptive algorithm. A case study is carried out using the data from three precipitation stations in northern Italy. The study evaluates the performance of this approach compared with an artificial neural network and a traditional statistical approach. The results indicate that, within the parameter sub-space where its rules are trained, the fuzzy rule-based model provided solutions with low mean square error between observations and predictions. The problems that have yet to be addressed are overfitting and applicability outside the range of training data.
