Similar Articles
20 similar articles found.
1.
This study compares formal Bayesian inference to the informal generalized likelihood uncertainty estimation (GLUE) approach for uncertainty-based calibration of rainfall-runoff models in a multi-criteria context. Bayesian inference is accomplished through Markov Chain Monte Carlo (MCMC) sampling based on an auto-regressive multi-criteria likelihood formulation. Non-converged MCMC sampling is also considered as an alternative method. These methods are compared along multiple comparative measures calculated over the calibration and validation periods of two case studies. Results demonstrate that there can be considerable differences in hydrograph prediction intervals generated by formal and informal strategies for uncertainty-based multi-criteria calibration. Also, the formal approach generates clearly preferable validation-period results compared to GLUE (i.e., tighter prediction intervals that show higher reliability) under identical computational budgets. Moreover, the performance of non-converged MCMC (based on the standard Gelman–Rubin metric) is reasonably consistent with that of a formal, fully-converged Bayesian approach, even though fully-converged results require a significantly larger number of samples (model evaluations) for the two case studies. Therefore, research to define alternative and more practical convergence criteria for MCMC applications to computationally intensive hydrologic models may be warranted.
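The Gelman–Rubin metric referenced above diagnoses MCMC convergence by comparing within-chain and between-chain variance across parallel chains. A minimal Python sketch (not the authors' code; modern split-chain variants differ slightly):

```python
import numpy as np

def gelman_rubin(chains):
    """Gelman-Rubin R-hat for a set of equal-length 1-D MCMC chains.

    chains: array-like of shape (m, n) -- m chains, n samples each.
    Values near 1 indicate convergence; a common cutoff is R-hat < 1.1.
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()      # within-chain variance
    B = n * chain_means.var(ddof=1)            # between-chain variance
    var_hat = (n - 1) / n * W + B / n          # pooled variance estimate
    return np.sqrt(var_hat / W)
```

Chains drawn from the same distribution give R-hat close to 1; chains stuck at different means give a value well above 1.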

2.
Single and multiple surrogate models were compared for single-objective pumping optimization problems of a hypothetical and a real-world coastal aquifer. Different instances of radial basis function and kriging surrogates were utilized to reduce the computational cost of direct optimization with variable-density and salt transport models. An adaptive surrogate update scheme was embedded in the operations of an evolutionary algorithm to efficiently control the feasibility of optimal solutions in pumping optimization problems with multiple constraints. For a set of independent optimization runs, results showed that multiple surrogates, whether selecting the best or using ensembles, did not necessarily outperform the single-surrogate approach. Nevertheless, the ensemble with optimal weights produced slightly better results than selecting only the best surrogate or applying simple averaging. In all cases, the computational cost of using single or multiple surrogate models was up to 90% lower than that of direct optimization.
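One common heuristic for building the kind of weighted surrogate ensemble described above is to weight each surrogate by the inverse of its cross-validation error. The paper's "optimal weights" are computed differently, so this is only an illustrative sketch:

```python
import numpy as np

def ensemble_predict(predictions, cv_errors):
    """Combine surrogate predictions at one point into an ensemble prediction.

    predictions: per-surrogate predicted values at the query point.
    cv_errors:   per-surrogate cross-validation error estimates (> 0).
    Weights are proportional to inverse CV error -- an illustrative
    heuristic, not the optimal-weight scheme of the paper.
    """
    w = 1.0 / np.asarray(cv_errors, dtype=float)
    w /= w.sum()
    return w @ np.asarray(predictions, dtype=float)
```

With equal errors this reduces to simple averaging; a surrogate with much smaller error dominates the ensemble.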

3.
The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, a sufficiently large ensemble size is usually required to guarantee accuracy. As an alternative approach, the probabilistic collocation based Kalman filter (PCKF) employs the polynomial chaos expansion (PCE) to represent and propagate the uncertainties in parameters and states. However, PCKF suffers from the so-called “curse of dimensionality”: its computational cost increases drastically with the number of parameters and the degree of system nonlinearity. Furthermore, PCKF may fail to provide accurate estimations for strongly nonlinear models due to its joint updating scheme. Motivated by recent developments in uncertainty quantification and EnKF, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and active PCE basis functions are adaptively selected at each assimilation step; the “restart” scheme is utilized to eliminate the inconsistency between updated model parameters and state variables. The performance of RAPCKF is systematically tested with numerical cases of unsaturated flow models. It is shown that the adaptive approach and restart scheme can significantly improve the performance of PCKF. Moreover, RAPCKF has been demonstrated to outperform EnKF at the same computational cost.
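For context, the EnKF baseline used for comparison updates an ensemble of states with perturbed observations in its analysis step. A minimal stochastic-EnKF sketch (variable names and shapes are our own assumptions, not the paper's):

```python
import numpy as np

def enkf_update(X, HX, y, R, seed=0):
    """Stochastic EnKF analysis step.

    X:  (n_state, N) state ensemble (columns are members).
    HX: (n_obs, N) predicted observations for each member.
    y:  (n_obs,) observation vector.
    R:  (n_obs, n_obs) observation error covariance.
    """
    rng = np.random.default_rng(seed)
    N = X.shape[1]
    Xa = X - X.mean(axis=1, keepdims=True)      # state anomalies
    Ha = HX - HX.mean(axis=1, keepdims=True)    # predicted-obs anomalies
    P_xy = Xa @ Ha.T / (N - 1)                  # state-obs covariance
    P_yy = Ha @ Ha.T / (N - 1) + R              # innovation covariance
    K = P_xy @ np.linalg.inv(P_yy)              # Kalman gain
    # Perturbed observations, one draw per ensemble member:
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T
    return X + K @ (Y - HX)
```

For a 1-D state observed directly, the updated ensemble mean moves from the prior mean toward the observation, weighted by the relative variances.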

4.
Non-linear numerical models of the injection phase of a carbon sequestration (CS) project are computationally demanding. Thus, the computational cost of calibrating these models using sampling-based solutions can be formidable. The Bayesian adaptive response surface method (BARSM)—an adaptive response surface method (RSM)—is developed to mitigate the cost of sampling-based, continuous calibration of CS models. It is demonstrated that the adaptive scheme has a negligible effect on accuracy while providing a significant increase in efficiency. In the BARSM, a meta-model replaces the computationally costly full model during the majority of the calibration cycles. In the remaining cycles, the full model is used, and samples from these cycles are utilized to adaptively update the meta-model. The idea behind the BARSM is to take advantage of the fact that sampling-based calibration algorithms typically sample more frequently from areas with a larger posterior density than from areas with a smaller posterior density. This behaviour is used to adaptively update the meta-model and make it more accurate where it is most likely to be evaluated. The BARSM is integrated with Unscented Importance Sampling (UIS) (Sarkarfarshi and Gracie, Stoch Env Res Risk Assess 29: 975–993, 2015), an efficient Bayesian calibration algorithm. A synthesized case of supercritical CO2 injection in a heterogeneous saline aquifer is used to assess the performance of the BARSM and to compare it with a classical non-adaptive RSM approach and with Bayesian calibration using UIS alone, without an RSM. The BARSM is shown to reduce the computational cost compared to non-adaptive Bayesian calibration by 87%, with negligible effect on accuracy. It is demonstrated that the error of the meta-model fitted using the BARSM, when samples are drawn from the posterior parameter distribution, is negligible and smaller than the monitoring error.

5.
The use of distributed data for model calibration is becoming more popular with the advent of spatially distributed observations. Hydrological model calibration has traditionally been carried out using single-objective optimisation and has only recently been extended to the multi-objective optimisation domain. By formulating the calibration problem with several objectives, each relating to a set of observations, the parameter sets can be constrained more effectively. However, many previous multi-objective calibration studies do not consider individual observations or catchment responses separately, but instead utilise some form of aggregation of objectives. This paper proposes a multi-objective calibration approach that can efficiently handle many objectives using both clustering and preference-ordered ranking. The algorithm is applied to calibrate the MIKE SHE distributed hydrologic model and tested on the Karup catchment in Denmark. The results indicate that the preferred solutions selected using the proposed algorithm are good compromise solutions and the parameter values are well defined. Clustering with Kohonen mapping was able to reduce the number of objective functions from 18 to 5. Calibration using the standard deviation of groundwater level residuals enabled us to identify a group of wells that may not be simulated properly, thus highlighting potential problems with the model parameterisation.

6.
This study presents single‐objective and multi‐objective particle swarm optimization (PSO) algorithms for automatic calibration of the Hydrologic Engineering Center Hydrologic Modeling System (HEC‐HMS) rainfall‐runoff model of the Tamar Sub‐basin of the Gorganroud River Basin in northern Iran. Three flood events were used for calibration and one for verification. Four performance criteria (objective functions) were considered in multi‐objective calibration, where different combinations of objective functions were examined. For comparison purposes, a fuzzy set‐based approach was used to determine the best compromise solutions from the Pareto fronts obtained by multi‐objective PSO. The candidate parameter sets determined from different single‐objective and multi‐objective calibration scenarios were tested against the fourth event in the verification stage, where the initial abstraction parameters were recalibrated. A step‐by‐step screening procedure was used in this stage to evaluate and compare the candidate parameter sets, which resulted in a few promising sets that performed well with respect to at least three of the four performance criteria. The promising sets all came from the multi‐objective calibration scenarios, showing that multi‐objective calibration outperforms the single‐objective approach. However, the results indicated that increasing the number of objective functions did not necessarily lead to better performance, as bi‐objective calibration with a proper combination of objective functions performed as satisfactorily as triple‐objective calibration. This is important because handling multi‐objective optimization with an increased number of objective functions is challenging, especially from a computational point of view. Copyright © 2012 John Wiley & Sons, Ltd.
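The single-objective PSO used as a building block above can be sketched in a few lines. This is a generic textbook formulation with assumed inertia and acceleration coefficients, not the paper's implementation:

```python
import numpy as np

def pso(f, lb, ub, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over the box [lb, ub] with a basic particle swarm.

    w is the inertia weight; c1/c2 are cognitive/social coefficients
    (typical textbook values, assumed here).
    """
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    d = lb.size
    x = rng.uniform(lb, ub, (n_particles, d))       # positions
    v = np.zeros_like(x)                            # velocities
    pbest = x.copy()                                # personal bests
    pbest_f = np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()              # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, d))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()
```

On a smooth test function such as the sphere, this converges to the minimum within a few hundred iterations.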

7.
Considerable uncertainty occurs in the parameter estimates of traditional rainfall–water level transfer function noise (TFN) models, especially with the models built using monthly time step datasets. This is due to the equal weights assigned for rainfall occurring during both water level rise and water level drop events while estimating the TFN model parameters using the least square technique. As an alternative to this approach, a threshold rainfall-based binary-weighted least square method was adopted to estimate the TFN model parameters. The efficacy of this binary-weighted approach in estimating the TFN model parameters was tested on 26 observation wells distributed across the Adyar River basin in Southern India. Model performance indices such as mean absolute error and coefficient of determination values showed that the proposed binary-weighted approach of fitting independent threshold-based TFN models for water level rise and water level drop scenarios considerably improves the model accuracy over other traditional TFN models.
EDITOR D. Koutsoyiannis

ASSOCIATE EDITOR A. Fiori
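The binary-weighted least-squares idea above amounts to assigning different weights to water-level rise and drop samples before a standard weighted least-squares fit. A hypothetical sketch (the weight values and the rise/drop mask are illustrative, not the paper's threshold scheme):

```python
import numpy as np

def binary_weighted_ls(X, y, rise_mask, w_rise=1.0, w_drop=0.3):
    """Weighted least squares with two weight classes.

    X:         (n, p) design matrix; y: (n,) response.
    rise_mask: boolean array, True for water-level-rise samples.
    w_rise/w_drop are illustrative weights, not values from the paper.
    """
    w = np.where(rise_mask, w_rise, w_drop)
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta
```

When the data fit the model exactly, any positive weights recover the same coefficients; the weights matter only when the two sample classes pull the fit in different directions.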

8.
In this study, the calibration of subsurface batch and reactive-transport models involving complex biogeochemical processes was systematically evaluated. Two hypothetical nitrate biodegradation scenarios were developed and simulated in numerical experiments to evaluate the performance of three calibration search procedures: a multi-start non-linear regression algorithm (i.e. multi-start Levenberg–Marquardt), a global search heuristic (i.e. particle swarm optimization), and a hybrid algorithm that combines the particle swarm procedure with a regression-based “polishing” step. Graphical analysis of the selected calibration problems revealed heterogeneous regions of extreme parameter sensitivity and insensitivity along with abundant numbers of local minima. These characteristics hindered the performance of the multi-start non-linear regression technique, which was generally the least effective of the considered algorithms. In most cases, the global search and hybrid methods were capable of producing improved model fits at comparable computational expense. In other cases, the multi-start and hybrid calibration algorithms yielded comparable fitness values but markedly differing parameter estimates and associated uncertainty measures.

9.
Journal of Hydrology, 2006, 316(1–4): 129–140
The genetic algorithm (GA) searches globally and is therefore useful for optimizing multiobjective problems, especially where the objective functions are ill-defined. Conceptual rainfall–runoff models, which aim to predict streamflow from knowledge of precipitation over a catchment, have become a basic tool for flood forecasting. Calibrating the parameters of a conceptual model usually involves multiple criteria for judging its performance against observed data. However, it is often difficult to combine all objective functions in the parameter calibration problem of a conceptual model. Thus, a new method for the multiple-criteria parameter calibration problem, which combines GA with TOPSIS (technique for order preference by similarity to ideal solution), is presented for the Xinanjiang model. This study is a direct further development of the authors' previous research (Cheng, C.T., Ou, C.P., Chau, K.W., 2002. Combining a fuzzy optimal model with a genetic algorithm to solve multi-objective rainfall–runoff model calibration. Journal of Hydrology, 268, 72–86), whose main disadvantage was that it split the procedure into two parts, making it difficult to grasp the model's best behaviour as a whole during calibration. The current method integrates the two parts of Xinanjiang rainfall–runoff model calibration, simplifying the procedures of model calibration and validation and more readily revealing the intrinsic behaviour of the observed data as a whole. Comparison with the two-step procedure shows that the current methodology gives similar results to the previous method and is likewise feasible and robust, but simpler and easier to apply in practice.
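TOPSIS, used above to rank GA solutions, scores each alternative by its relative closeness to an ideal solution. A minimal sketch assuming all criteria are to be minimized (the paper's normalization and weighting details may differ):

```python
import numpy as np

def topsis(scores, weights):
    """Relative closeness of each alternative to the ideal solution.

    scores:  (n_alternatives, n_criteria), smaller is better (assumed).
    weights: (n_criteria,) criterion weights.
    Returns closeness in [0, 1]; higher means closer to the ideal.
    """
    S = np.asarray(scores, float)
    S = S / np.linalg.norm(S, axis=0)          # vector normalization
    S = S * np.asarray(weights, float)
    ideal, nadir = S.min(axis=0), S.max(axis=0)
    d_pos = np.linalg.norm(S - ideal, axis=1)  # distance to ideal
    d_neg = np.linalg.norm(S - nadir, axis=1)  # distance to worst
    return d_neg / (d_pos + d_neg)
```

An alternative that matches the ideal in every criterion scores 1; one that matches the worst in every criterion scores 0.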

10.
In order to successfully calibrate an urban drainage model, multiple criteria should be considered, which raises the issue of adopting a method for comparing different parameter sets according to a set of objectives. Multi-objective genetic algorithms (MOGA) have proved effective in numerous such applications, with most techniques relying on the condition of Pareto efficiency to compare different solutions. However, as the number of criteria increases, the ratio of Pareto-optimal to feasible solutions increases as well, worsening the efficiency of the genetic algorithm search. In this paper, the drawbacks of the single-objective calibration approach are first highlighted. Then a new MOGA, the preference ordering genetic algorithm, is proposed that alleviates the drawbacks of conventional Pareto-based methods. The efficacy of this algorithm is demonstrated on the calibration of a physically-based, distributed sewer network model, and a comparison is made with the well-known MOGA NSGA-II. The results are very encouraging because the obtained parameter sets reproduced both calibration and validation events closely. The identifiability of 10 model parameters was analysed, showing significantly smaller ranges of optimal values for parameters related to impervious areas compared to those related to pervious areas, which is reasonable considering the relatively low rainfall intensities. In addition to standard ways of presenting calibration results, “radar” plots were also used to present trade-off information for eight objective functions over four rainfall-runoff events.
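The Pareto-efficiency condition that conventional MOGAs rely on can be checked directly: a solution survives only if no other solution is at least as good in every objective and strictly better in at least one. A brute-force sketch for minimization:

```python
import numpy as np

def pareto_front(F):
    """Indices of non-dominated rows of objective matrix F (minimization).

    F: (n_solutions, n_objectives). O(n^2) scan -- fine for small fronts,
    illustrative only.
    """
    F = np.asarray(F, float)
    n = F.shape[0]
    keep = []
    for i in range(n):
        dominated = any(
            np.all(F[j] <= F[i]) and np.any(F[j] < F[i]) for j in range(n)
        )
        if not dominated:
            keep.append(i)
    return keep
```

As the abstract notes, with more objectives a larger fraction of random solutions ends up mutually non-dominated, which is what degrades plain Pareto-ranking selection.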

11.
Transverse isotropy with a tilted axis of symmetry (TTI) causes image distortion if isotropic models are assumed during data processing. A simple anisotropic migration approach needs long computational times and is sensitive to the signal-to-noise ratio. This paper presents an efficient, general approach to common-depth-point (CDP) mapping to image the subsurface in TTI media from qP-wave seismic data by adding anisotropic and dip parameters to the velocity model. The method consists of three steps: (i) calculating traveltimes and positions of the CDP points; (ii) determining CDP trajectories; (iii) CDP imaging. A crucial step is the rapid computation of traveltimes and raypaths in the TTI media, which is achieved by the Fermat method, specially adapted for anisotropic layered media. The algorithm can image the subsurface of a given model quickly and accurately, and is suitable for application to a bending reflector. The effectiveness of the method is demonstrated by comparing the raypaths, the traveltimes and the results of CDP mapping, when assuming isotropic media, transversely isotropic media with a vertical axis of symmetry (TIV), and TTI media.

12.
A new uncertainty estimation method, which we recently introduced in the literature, allows for the comprehensive search of model posterior space while maintaining a high degree of computational efficiency. The method starts with an optimal solution to an inverse problem, performs a parameter reduction step and then searches the resulting feasible model space using prior parameter bounds and sparse‐grid polynomial interpolation methods. After misfit rejection, the resulting model ensemble represents the equivalent model space and can be used to estimate inverse solution uncertainty. While parameter reduction introduces a posterior bias, it also allows for scaling this method to higher-dimensional problems. The use of Smolyak sparse‐grid interpolation also dramatically increases sampling efficiency for large stochastic dimensions. Unlike Bayesian inference, which treats the posterior sampling problem as a random process, this geometric sampling method exploits the structure and smoothness in posterior distributions by solving a polynomial interpolation problem and then resampling from the resulting interpolant. The two questions we address in this paper are (1) whether our results are generally compatible with established Bayesian inference methods and (2) how our method compares in terms of posterior sampling efficiency. We accomplish this by comparing our method for two electromagnetic problems from the literature with two commonly used Bayesian sampling schemes: Gibbs and Metropolis‐Hastings. While both the sparse‐grid and Bayesian samplers produce compatible results in both examples, the sparse‐grid approach has a much higher sampling efficiency, requiring an order of magnitude fewer samples, suggesting that sparse‐grid methods can significantly improve the tractability of inference solutions for problems in high dimensions or with more costly forward physics.
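The random-walk Metropolis–Hastings sampler used as one of the Bayesian baselines accepts a proposed move with probability min(1, posterior ratio). A minimal sketch (the Gaussian proposal and step size are illustrative assumptions):

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_samples, step=0.5, seed=0):
    """Random-walk Metropolis sampling from an unnormalized log-posterior.

    log_post: callable returning log posterior density (up to a constant).
    x0:       starting point (array-like).
    step:     proposal standard deviation (assumed isotropic Gaussian).
    Returns an (n_samples, dim) array of correlated draws.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    lp = log_post(x)
    out = np.empty((n_samples, x.size))
    for i in range(n_samples):
        prop = x + step * rng.standard_normal(x.size)
        lp_prop = log_post(prop)
        # Accept with probability min(1, exp(lp_prop - lp)):
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        out[i] = x
    return out
```

Targeting a standard normal, the post-burn-in draws reproduce its mean and spread, though successive samples are correlated, which is exactly the inefficiency the sparse-grid comparison above is about.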

13.
Hydrologic model development and calibration have continued in most cases to focus only on accurately reproducing streamflows. However, complex models, for example, the so‐called physically based models, possess large degrees of freedom that, if not constrained properly, may lead to poor model performance when used for prediction. We argue that constraining a model to represent streamflow, which is an integrated resultant of many factors across the watershed, is necessary but by no means sufficient to develop a high‐fidelity model. To address this problem, we develop a framework to utilize the Gravity Recovery and Climate Experiment's (GRACE) total water storage anomaly data as a supplement to streamflows for model calibration, in a multiobjective setting. The VARS method (Variogram Analysis of Response Surfaces) for global sensitivity analysis is used to understand the model behaviour with respect to streamflow and GRACE data, and the BORG multiobjective optimization method is applied for model calibration. Two subbasins of the Saskatchewan River Basin in Western Canada are used as a case study. Results show that the developed framework is superior to the conventional approach of calibration only to streamflows, even when multiple streamflow‐based error functions are simultaneously minimized. It is shown that a range of (possibly false) system trajectories in state variable space can lead to similar (acceptable) model responses. This observation has significant implications for land‐surface and hydrologic model development and, if not addressed properly, may undermine the credibility of the model in prediction. The framework effectively constrains the model behaviour (by constraining posterior parameter space) and results in more credible representation of hydrology across the watershed.
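Streamflow-based error functions of the kind minimized above are often variants of the Nash–Sutcliffe efficiency (NSE); the abstract does not name the specific functions used, so this is only a generic example:

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the
    skill of always predicting the observed mean, negative is worse."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
```

In a multiobjective setting, one NSE-like term per data source (e.g. streamflow and GRACE storage anomalies) yields the vector of objectives handed to the optimizer.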

14.
Epistemic uncertainties can be classified into two major categories: parameter and model. While the first one stems from the difficulties in estimating the values of input model parameters, the second comes from the difficulties in selecting the appropriate type of model. Investigating their combined effects and ranking each of them in terms of their influence on the predicted losses can be useful in guiding future investigations. In this context, we propose a strategy relying on variance-based global sensitivity analysis, which is demonstrated using an earthquake loss assessment for Pointe-à-Pitre (Guadeloupe, France). For the considered assumptions, we show: that uncertainty of losses would be greatly reduced if all the models could be unambiguously selected; and that the most influential source of uncertainty (whether of parameter or model type) corresponds to the seismic activity group. Finally, a sampling strategy was proposed to test the influence of the experts’ weights on models and on the assumed coefficients of variation of parameter uncertainty. The former influenced the sensitivity measures of the model uncertainties, whereas the latter could completely change the importance rank of the uncertainties associated to the vulnerability assessment step.

15.
16.
A new methodology is proposed for the development of parameter-independent reduced models for transient groundwater flow models. The model reduction technique is based on Galerkin projection of a highly discretized model onto a subspace spanned by a small number of optimally chosen basis functions. We propose two greedy algorithms that iteratively select optimal parameter sets and snapshot times between the parameter space and the time domain in order to generate snapshots. The snapshots are used to build the Galerkin projection matrix, which covers the entire parameter space in the full model. We then apply the reduced subspace model to solve two inverse problems: a deterministic inverse problem and a Bayesian inverse problem with a Markov Chain Monte Carlo (MCMC) method. The proposed methodology is validated with a conceptual one-dimensional groundwater flow model. We then apply the methodology to a basin-scale, conceptual aquifer in the Oristano plain of Sardinia, Italy. Using the methodology, the full model governed by 29,197 ordinary differential equations is reduced by two to three orders of magnitude, resulting in a drastic reduction in computational requirements.
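Galerkin projection onto a snapshot-derived basis, the core of the reduction above, can be sketched with an SVD. This assumes a linear system written in matrix form, a deliberate simplification of the groundwater model in the paper:

```python
import numpy as np

def reduce_model(A, snapshots, rank):
    """Project system operator A onto a POD basis built from snapshots.

    A:         (n, n) full-order system matrix.
    snapshots: (n, k) matrix whose columns are solution snapshots.
    rank:      dimension of the reduced model.
    Returns (A_reduced, V) with A_reduced = V^T A V of shape (rank, rank).
    """
    U, _, _ = np.linalg.svd(np.asarray(snapshots, float), full_matrices=False)
    V = U[:, :rank]            # leading left singular vectors = POD basis
    return V.T @ A @ V, V
```

If the snapshots span an invariant subspace of A, the reduced operator reproduces the corresponding eigenvalues exactly; in general it approximates the dynamics restricted to the snapshot subspace.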

17.
We examine the value of additional information in multiple objective calibration in terms of model performance and parameter uncertainty. We calibrate and validate a semi‐distributed conceptual catchment model for two 11‐year periods in 320 Austrian catchments and test three approaches of parameter calibration: (a) traditional single objective calibration (SINGLE) on daily runoff; (b) multiple objective calibration (MULTI) using daily runoff and snow cover data; (c) multiple objective calibration (APRIORI) that incorporates an a priori expert guess about the parameter distribution as additional information to runoff and snow cover data. Results indicate that the MULTI approach performs slightly poorer than the SINGLE approach in terms of runoff simulations, but significantly better in terms of snow cover simulations. The APRIORI approach is essentially as good as the SINGLE approach in terms of runoff simulations but is slightly poorer than the MULTI approach in terms of snow cover simulations. An analysis of the parameter uncertainty indicates that the MULTI approach significantly decreases the uncertainty of the model parameters related to snow processes but does not decrease the uncertainty of other model parameters as compared to the SINGLE case. The APRIORI approach tends to decrease the uncertainty of all model parameters as compared to the SINGLE case. Copyright © 2006 John Wiley & Sons, Ltd.

18.
This paper develops concepts and methods to study stochastic hydrologic models. Problems regarding the application of the existing stochastic approaches in the study of groundwater flow are acknowledged, and an attempt is made to develop efficient means for their solution. These problems include: the spatial multi-dimensionality of the differential equation models governing transport-type phenomena; physically unrealistic assumptions and approximations and the inadequacy of the ordinary perturbation techniques. Multi-dimensionality creates serious mathematical and technical difficulties in the stochastic analysis of groundwater flow, due to the need for large mesh sizes and the poorly conditioned matrices arising from numerical approximations. An alternative to the purely computational approach is to simplify the complex partial differential equations analytically. This can be achieved efficiently by means of a space transformation approach, which transforms the original multi-dimensional problem to a much simpler unidimensional space. The space transformation method is applied to stochastic partial differential equations whose coefficients are random functions of space and/or time. Such equations constitute an integral part of groundwater flow and solute transport. Ordinary perturbation methods for studying stochastic flow equations are in many cases physically inadequate and may lead to questionable approximations of the actual flow. To address these problems, a perturbation analysis based on Feynman-diagram expansions is proposed in this paper. This approach incorporates important information on spatial variability and fulfills essential physical requirements, both important advantages over ordinary hydrologic perturbation techniques. Moreover, the diagram-expansion approach reduces the original stochastic flow problem to a closed set of equations for the mean and the covariance function.

19.
ABSTRACT

The calibration of hydrological models is formulated as a blackbox optimization problem where the only information available is the objective function value. Distributed hydrological models are generally computationally intensive, and their calibration may require several hours or days which can be an issue for many operational contexts. Different optimization algorithms have been developed over the years and exhibit different strengths when applied to the calibration of computationally intensive hydrological models. This paper shows how the dynamically dimensioned search (DDS) and the mesh adaptive direct search (MADS) algorithms can be combined to significantly reduce the computational time of calibrating distributed hydrological models while ensuring robustness and stability regarding the final objective function values. Five transitional features are described to adequately merge both algorithms. The hybrid approach is applied to the distributed and computationally intensive HYDROTEL model on three different river basins located in Québec (Canada).
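For reference, the DDS algorithm combined above perturbs a randomly chosen subset of decision variables whose expected size shrinks as the evaluation budget is spent. A sketch following the commonly published formulation (the perturbation factor r and the reflection rule are assumptions, not details from this paper):

```python
import numpy as np

def dds(f, lb, ub, budget=500, r=0.2, seed=0):
    """Dynamically dimensioned search for blackbox minimization on a box.

    Greedy: keep the best point found; each candidate perturbs a random
    subset of dimensions whose inclusion probability decays with the
    fraction of the budget already used.
    """
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    d = lb.size
    best = rng.uniform(lb, ub)
    fbest = f(best)
    for i in range(1, budget):
        p = 1.0 - np.log(i) / np.log(budget)   # per-dimension perturb prob.
        mask = rng.random(d) < p
        if not mask.any():
            mask[rng.integers(d)] = True       # always perturb >= 1 dim
        cand = best.copy()
        cand[mask] += r * (ub[mask] - lb[mask]) * rng.standard_normal(mask.sum())
        # Reflect candidates that leave the box, then clip as a safeguard:
        cand = np.where(cand < lb, 2 * lb - cand, cand)
        cand = np.where(cand > ub, 2 * ub - cand, cand)
        cand = np.clip(cand, lb, ub)
        fc = f(cand)
        if fc < fbest:
            best, fbest = cand, fc
    return best, fbest
```

DDS needs no tuning beyond the budget, which is why it pairs well with a local refiner such as MADS in the hybrid scheme the paper describes.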

20.
This paper presents an efficient numerical tool for the prediction of railway dynamic response. A behavior calibration of the infinite Euler-Bernoulli beam resting on a continuous viscoelastic foundation is proposed. Constitutive laws of the discrete elements are determined for a rectilinear ballasted track. A three-dimensional model coupled with an adaptive meshing scheme is employed to calibrate the beam model impedances by finding the similarity between the output signals using a genetic algorithm. The model performs well, with a significant reduction in computational effort. This study emphasizes the major impact of the excitation characteristics on the parameters of the discrete models.

