Similar documents
 20 similar documents found (search time: 46 ms)
1.
Calibration is typically used for improving the predictability of mechanistic simulation models by adjusting a set of model parameters and fitting model predictions to observations. Calibration does not, however, account for or correct potential misspecifications in the model structure, limiting the accuracy of modeled predictions. This paper presents a new approach that addresses both parameter error and model structural error to improve the predictive capabilities of a model. The new approach simultaneously conducts a numeric search for model parameter estimation and a symbolic (regression) search to determine a function to correct misspecifications in model equations. It is based on an evolutionary computation approach that integrates genetic algorithm and genetic programming operators. While this new approach is designed generically and can be applied to a broad array of mechanistic models, it is demonstrated for an illustrative case study involving water quality modeling and prediction. Results based on extensive testing and evaluation show that the new procedure performs consistently well in fitting a set of training data as well as predicting a set of validation data, and outperforms a calibration procedure and an empirical model fitting procedure.
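A loose illustration of the idea in item 1 — mutating a numeric parameter (GA-like) and a symbolic correction term (GP-like) simultaneously against a deliberately misspecified toy model. The model, data, and candidate corrections below are all hypothetical, not from the paper:

```python
import random
random.seed(0)

# Toy setup: the "mechanistic" model is y = a*x, but the true process is
# y = 2*x + 0.5*x**2, so the model structure is misspecified.
xs = [0.5 * i for i in range(1, 11)]
obs = [2.0 * x + 0.5 * x * x for x in xs]

# Candidate correction terms for the symbolic (regression) search.
corrections = {
    "none": lambda x, c: 0.0,
    "c*x": lambda x, c: c * x,
    "c*x^2": lambda x, c: c * x * x,
}

def sse(a, name, c):
    """Sum of squared errors of the corrected model a*x + f(x, c)."""
    f = corrections[name]
    return sum((a * x + f(x, c) - y) ** 2 for x, y in zip(xs, obs))

# Crude evolutionary loop: mutate the numeric parameter (GA-like) and,
# occasionally, the correction structure (GP-like) at the same time.
best = (1.0, "none", 0.0)
best_err = sse(*best)
for _ in range(5000):
    a, name, c = best
    cand = (a + random.gauss(0, 0.2),
            random.choice(list(corrections)) if random.random() < 0.3 else name,
            c + random.gauss(0, 0.2))
    err = sse(*cand)
    if err < best_err:
        best, best_err = cand, err

print(best[1], round(best_err, 3))  # the quadratic correction typically wins
```

Calibration alone (tuning `a` with the correction fixed to "none") cannot drive the error to zero here; the joint search can, which is the point of the combined procedure.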

2.
3.
The error in physically-based rainfall-runoff modelling is broken into components, and these components are assigned to three groups: (1) model structure error, associated with the model’s equations; (2) parameter error, associated with the parameter values used in the equations; and (3) run time error, associated with rainfall and other forcing data. The error components all contribute to “integrated” errors, such as the difference between simulated and observed runoff, but their individual contributions cannot usually be isolated because the modelling process is complex and there is a lack of knowledge about the catchment and its hydrological responses. A simple model of the Slapton Wood Catchment is developed within a theoretical framework in which the catchment and its responses are assumed to be known perfectly. This makes it possible to analyse the contributions of the error components when predicting the effects of a physical change in the catchment. The standard approach to predicting change effects involves: (1) running “unchanged” simulations using current parameter sets; (2) making adjustments to the sets to allow for physical change; and (3) running “changed” simulations. Calibration or uncertainty-handling methods such as GLUE are used to obtain the current sets based on forcing and runoff data for a calibration period, by minimising or creating statistical bounds for the “integrated” errors in simulations of runoff. It is shown that current parameter sets derived in this fashion are unreliable for predicting change effects, because of model structure error and its interaction with parameter error, so caution is needed if the standard approach is to be used when making management decisions about change in catchments.

4.
The prediction error of a relatively simple soil acidification model (SMART2) was assessed before and after calibration, focussing on the Al and NO3 concentrations on a block scale. Although SMART2 was developed especially for application on a national to European scale, it still runs at a point support. A 5×5 km² grid was used for application on the European scale. Block characteristic values were obtained simply by taking the median value of the point support values within the corresponding grid cell. In order to increase confidence in model predictions on large spatial scales, the model was calibrated and validated for the Netherlands, using a resolution that is feasible for Europe as a whole. Because observations are available only at the point support, it was necessary to transfer them to the block support of the model results. For this purpose, about 250 point observations of soil solution concentrations in forest soils were upscaled to a 5×5 km² grid map, using multiple linear regression analysis combined with block kriging. The resulting map with upscaled observations was used for both validation and calibration. A comparison of the map with model predictions using nominal parameter values and the map with the upscaled observations showed that the model overestimated the Al and NO3 concentrations. The nominal model results were still in the 95% confidence interval of the upscaled observations, but calibration improved the model predictions and strongly reduced the model error. However, the model error after calibration remains rather large.

5.
Using nonlinear error growth theory, and taking the Lorenz system as an example, the effects of initial error and parameter error on the predictability of chaotic systems are compared. The results show that when initial error and parameter error exist separately, the predictability limit of the system varies with error magnitude in essentially the same way; for errors of equal magnitude, initial error and parameter error have almost identical effects on the predictability limit, and this result is largely insensitive to the choice of parameter range. When initial error and parameter error coexist, their relative contributions to the predictability limit depend mainly on their relative magnitudes. When the initial error is much larger than the parameter error, the predictability limit of the Lorenz system is determined mainly by the initial error, and the effect of parameter error on the predictability of the forecast model can be neglected; conversely, when the parameter error is much larger than the initial error, the predictability limit is determined mainly by the parameter error. When the two errors are of comparable magnitude, both play an important role in determining the predictability limit. In the latter two cases, the effect of parameter error must be considered alongside that of initial error when assessing predictability. This is a reminder that in operational numerical weather prediction, attention should be paid not only to specifying the initial conditions but also to determining the control parameters of the numerical model.
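The comparison in item 5 can be reproduced qualitatively in a few lines of stdlib Python: integrate a control Lorenz run against a run perturbed either in the initial condition or in the parameter rho, and record the time until the trajectories diverge. The tolerance, step size, and perturbation sizes are illustrative choices, not the paper's:

```python
# Qualitative sketch: predictability limit of the Lorenz system under an
# initial-condition perturbation vs. an equal-sized parameter perturbation.

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(s, dt, **p):
    add = lambda a, b, f: tuple(ai + f * bi for ai, bi in zip(a, b))
    k1 = lorenz(s, **p)
    k2 = lorenz(add(s, k1, dt / 2), **p)
    k3 = lorenz(add(s, k2, dt / 2), **p)
    k4 = lorenz(add(s, k3, dt), **p)
    return tuple(si + dt / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def predictability_limit(pert_ic=0.0, pert_rho=0.0, dt=0.01, tol=1.0):
    """Time until the perturbed run drifts from the control run by tol."""
    ctrl = (1.0, 1.0, 1.0)
    pert = (1.0 + pert_ic, 1.0, 1.0)
    for step in range(1, 5001):
        ctrl = rk4_step(ctrl, dt)
        pert = rk4_step(pert, dt, rho=28.0 + pert_rho)
        err = sum((a - b) ** 2 for a, b in zip(ctrl, pert)) ** 0.5
        if err > tol:
            return step * dt
    return 5000 * dt

t_ic = predictability_limit(pert_ic=1e-4)    # initial-condition error only
t_par = predictability_limit(pert_rho=1e-4)  # parameter error only
print(t_ic, t_par)  # equal-sized errors give comparable limits
```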

6.
Hydrologic model development and calibration have continued in most cases to focus only on accurately reproducing streamflows. However, complex models, for example, the so‐called physically based models, possess large degrees of freedom that, if not constrained properly, may lead to poor model performance when used for prediction. We argue that constraining a model to represent streamflow, which is an integrated resultant of many factors across the watershed, is necessary but by no means sufficient to develop a high‐fidelity model. To address this problem, we develop a framework to utilize the Gravity Recovery and Climate Experiment's (GRACE) total water storage anomaly data as a supplement to streamflows for model calibration, in a multiobjective setting. The VARS method (Variogram Analysis of Response Surfaces) for global sensitivity analysis is used to understand the model behaviour with respect to streamflow and GRACE data, and the BORG multiobjective optimization method is applied for model calibration. Two subbasins of the Saskatchewan River Basin in Western Canada are used as a case study. Results show that the developed framework is superior to the conventional approach of calibration only to streamflows, even when multiple streamflow‐based error functions are simultaneously minimized. It is shown that a range of (possibly false) system trajectories in state variable space can lead to similar (acceptable) model responses. This observation has significant implications for land‐surface and hydrologic model development and, if not addressed properly, may undermine the credibility of the model in prediction. The framework effectively constrains the model behaviour (by constraining posterior parameter space) and results in more credible representation of hydrology across the watershed.
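A toy sketch of the multiobjective setting in item 6: each candidate parameter set carries two error values — one for streamflow, one for a GRACE-like storage anomaly — and the non-dominated set forms the trade-off front. The errors here are synthetic stand-ins, and this is only the Pareto-filter step, not the VARS/BORG machinery itself:

```python
import random
random.seed(3)

# Each candidate parameter set carries two synthetic error values:
# (streamflow error, storage-anomaly error). A real calibration would
# obtain these from model runs.
candidates = [(random.random(), random.random()) for _ in range(200)]

def dominates(a, b):
    """a dominates b: no worse in both objectives, better in at least one."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

# The Pareto front is the set of non-dominated trade-offs between the two
# objectives -- what a multiobjective optimizer such as BORG approximates.
front = sorted(c for c in candidates
               if not any(dominates(o, c) for o in candidates))
print(len(front), tuple(round(v, 3) for v in front[0]))
```

Calibrating to streamflow alone corresponds to keeping only the minimizer of the first objective; the front makes the cost in the second objective explicit.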

7.
Phytoplankton biomass is an important factor for short-term forecasts of algal blooms. Our new hydrodynamic-phytoplankton model is primarily intended for simulating the spatial and temporal distribution of phytoplankton in Lake Taihu within a time frame of 1-5 days. The model combines two modules: a simple phytoplankton kinetics module for growth and loss; and a mass-transport module, which defines phytoplankton transport horizontally with a two-dimensional hydrodynamic model. To adapt field data for model input and calibration, we introduce two simplifications: (a) exclusion of some processes related to phytoplankton dynamics like nutrient dynamics, sediment resuspension, mineralization and nitrification, and (b) use of monthly measured data of the nutrient state. Chlorophyll-a concentration, representing phytoplankton biomass, is the only state variable in the model. A sensitivity analysis was carried out to identify the most sensitive parameter set in the phytoplankton kinetics module. The model was calibrated with field data collected in 2008 and validated with additional data obtained in 2009. A comparison of simulated and observed chlorophyll-a concentration for 33 grid cells achieved an accuracy of 78.7%. However, mean percent error and mean absolute percent error were 13.4% and 58.2%, respectively, which implies that further improvement is necessary, e.g. by reducing uncertainty of the model input and by an improved parameter calibration.
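With chlorophyll-a as the only state variable, the kinetics module of item 7 reduces to a single growth-minus-loss ODE; a minimal explicit-Euler sketch with hypothetical rate values (not the calibrated Taihu parameters, and omitting the transport module):

```python
# Hypothetical rates -- the paper's calibrated values are not given here.
mu_max = 0.8     # maximum growth rate (1/day)
loss = 0.3       # lumped loss rate: respiration + settling + grazing (1/day)
f_env = 0.6      # light/temperature/nutrient limitation factor (0-1)

def step(chl, dt):
    """One explicit-Euler step of dC/dt = (mu_max * f_env - loss) * C."""
    return chl + dt * (mu_max * f_env - loss) * chl

chl = 10.0                # initial chlorophyll-a concentration (ug/L)
dt = 1.0 / 24.0           # hourly step, in days
for _ in range(5 * 24):   # the model's 1-5 day forecast horizon
    chl = step(chl, dt)
print(round(chl, 2))      # net rate 0.18/day compounds over the 5 days
```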

8.
Calibration of a groundwater model requires that hydraulic properties be estimated throughout a model domain. This generally constitutes an underdetermined inverse problem, for which a solution can only be found when some kind of regularization device is included in the inversion process. Inclusion of regularization in the calibration process can be implicit, for example through the use of zones of constant parameter value, or explicit, for example through solution of a constrained minimization problem in which parameters are made to respect preferred values, or preferred relationships, to the degree necessary for a unique solution to be obtained.
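The explicit, preferred-value form of regularization in item 8 can be sketched on a deliberately underdetermined toy problem — one observation constraining two parameters, so infinitely many exact fits exist until the preferred values break the tie. All numbers are illustrative:

```python
# Toy underdetermined problem: one head observation, two hydraulic-property
# parameters. All values are illustrative.
G = [0.5, 0.5]    # sensitivity of the observation to each parameter
d = 3.0           # observed value
m0 = [2.0, 2.0]   # preferred ("prior") parameter values
lam = 0.1         # regularization weight

# Tikhonov solution of min ||d - G m||^2 + lam ||m - m0||^2, i.e.
# m = m0 + G^T (G G^T + lam I)^-1 (d - G m0); with a single observation
# the matrix inverse collapses to a scalar division.
GGt = sum(g * g for g in G)
resid = d - sum(g * m0i for g, m0i in zip(G, m0))
w = resid / (GGt + lam)
m = [m0i + g * w for m0i, g in zip(m0, G)]
print([round(v, 4) for v in m])  # both parameters pulled equally toward the data
```

Without the `lam` term the problem has no unique solution; the regularization selects the fit closest to the preferred values, which is exactly the "respect preferred values" device the abstract describes.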

9.
An integrated groundwater/surface water hydrological model with a 1 km² grid has been constructed for Denmark covering 43,000 km². The model is composed of a relatively simple root zone component for estimating the net precipitation, a comprehensive three-dimensional groundwater component for estimating recharge to and hydraulic heads in different geological layers, and a river component for streamflow routing and calculating stream–aquifer interaction. The model was constructed on the basis of the MIKE SHE code and by utilising comprehensive national databases on geology, soil, topography, river systems, climate and hydrology. The present paper describes the modelling process for the 7330 km² island of Sjælland with emphasis on the problems experienced in combining the classical paradigms of groundwater modelling, such as inverse modelling of steady-state conditions, and catchment modelling, focussing on dynamic conditions and discharge simulation. Three model versions with different assumptions on input data and parameter values were required until the performance of the final model, evaluated according to pre-defined accuracy criteria, was judged satisfactory. The paper highlights the methodological issues related to establishment of performance criteria, parameterisation and assessment of parameter values from field data, calibration and validation test schemes. Most of the parameter values were assessed directly from field data, while about 10 ‘free’ parameters were subject to calibration using a combination of inverse steady-state groundwater modelling and manual trial-and-error dynamic groundwater/surface water modelling. Emphasising the importance of tests against independent data, the validation schemes included combinations of split-sample tests (another period) and proxy-basin tests (another area).

10.
This research is part of a larger effort to better understand and quantify the epistemic model uncertainty in dynamic response-history simulations. This paper focuses on how calibration methods influence model uncertainty. Structural models in earthquake engineering are typically built up from independently calibrated component models. During component calibration, engineers often use experimental component response under quasi-static loading to find parameters that minimize the error in structural response under dynamic loading. Since the calibration and the simulation environments are different, if a calibration method wants to provide optimal parameters for simulation, it has to focus on features of the component response that are important from the perspective of global structural behavior. Relevance describes how efficiently a calibration method can focus on such important features. A framework of virtual experiments and a methodology are proposed to evaluate the influence of calibration relevance on model error in simulations. The evaluation is demonstrated through a case study with buckling-restrained braced frames (BRBF). Two calibration methods are compared in the case study. The first, highly relevant calibration method is based on stiffness and hardening characteristics of braces; the second, less relevant calibration method is based on the axial force response of braces. The highly relevant calibration method consistently identified the preferable parameter sets. In contrast, the less relevant calibration method showed poor to mediocre performance. The framework and methodology presented here are not limited to BRBF. They have the potential to facilitate and systematize the improvement of component-model calibration methods for any structural system.

11.
This paper presents the analytical properties of the sensitivity of the two-dimensional, steady-state groundwater flow equation to the flow parameters and to the boundary conditions, based on the perturbation approach. These analytical properties are used to provide guidelines for model design, model calibration and monitoring network design. The sensitivity patterns are shown to depend on the nature of both the perturbed parameter and the variable investigated. Indeed, the sensitivity of the hydraulic head to the hydraulic conductivity extends mainly in the flow direction, while the sensitivity to the recharge spreads radially. Besides, the sensitivity of the flow longitudinal velocity to the hydraulic conductivity propagates in both the longitudinal and transverse directions, whereas the sensitivity of the flow transverse velocity propagates in the diagonal directions to the flow. The analytical results are confirmed by application examples on idealized and real-world simulations. These analytical findings allow some general rules to be established for model design, model calibration and monitoring network design. In particular, the optimal location of measurement points depends on the nature of the variable of interest. Measurement network design thus proves to be problem-dependent. Moreover, adequate monitoring well network design may allow discrimination between the possible sources of error.

12.
A new methodology is proposed for the development of parameter-independent reduced models for transient groundwater flow models. The model reduction technique is based on Galerkin projection of a highly discretized model onto a subspace spanned by a small number of optimally chosen basis functions. We propose two greedy algorithms that iteratively select optimal parameter sets from the parameter space and snapshot times from the time domain in order to generate snapshots. The snapshots are used to build the Galerkin projection matrix, which covers the entire parameter space in the full model. We then apply the reduced subspace model to solve two inverse problems: a deterministic inverse problem and a Bayesian inverse problem with a Markov Chain Monte Carlo (MCMC) method. The proposed methodology is validated with a conceptual one-dimensional groundwater flow model. We then apply the methodology to a basin-scale, conceptual aquifer in the Oristano plain of Sardinia, Italy. Using the methodology, the full model governed by 29,197 ordinary differential equations is reduced by two to three orders of magnitude, resulting in a drastic reduction in computational requirements.
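The Galerkin-projection step in item 12 can be illustrated on a tiny stand-in for the full model: a semi-discretized 1-D diffusion equation projected onto a single basis vector. For simplicity the known dominant mode is used directly as the basis, rather than snapshot-derived bases as in the paper:

```python
import math

# Stand-in "full model": 1-D diffusion du/dt = A u, semi-discretized on
# n interior nodes with Dirichlet boundaries (not the paper's aquifer model).
n, D, dt = 20, 1.0, 1e-4
dx = 1.0 / (n + 1)

def apply_A(u):
    out = []
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        out.append(D * (left - 2.0 * u[i] + right) / dx ** 2)
    return out

# Single basis vector: the dominant diffusion mode. In the paper, the
# basis functions come from optimally chosen snapshots.
phi = [math.sin(math.pi * (i + 1) / (n + 1)) for i in range(n)]

# Galerkin projection of du/dt = A u onto phi gives the scalar reduced
# model da/dt = (phi^T A phi / phi^T phi) a.
lam = sum(p * q for p, q in zip(phi, apply_A(phi))) / sum(p * p for p in phi)

u, a = phi[:], 1.0               # start the full model on the basis vector
for _ in range(2000):
    u = [ui + dt * fi for ui, fi in zip(u, apply_A(u))]   # full: n ODEs
    a += dt * lam * a                                      # reduced: 1 ODE

err = max(abs(ui - a * pi) for ui, pi in zip(u, phi))
print(round(err, 8), round(a, 3))  # reduced model tracks the full model
```

Here 20 ODEs collapse to one; the paper's 29,197-equation model is reduced by the same projection mechanism, just with more basis vectors.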

13.
In situ calibration is a proposed strategy for continuous as well as initial calibration of an impact disdrometer. In previous work, a collocated tipping bucket had been utilized to provide a rainfall-rate-based ~11/3-moment reference to an impact disdrometer’s signal processing system for implementation of adaptive calibration. Using rainfall rate only, transformation of impulse amplitude to a drop volume based on a simple power law was used to define an error surface in the model’s parameter space. By incorporating optical extinction second-moment measurements with rainfall rate data, an improved in situ disdrometer calibration algorithm results from the use of multiple (two or more) independent moments of the drop size distribution in the error function definition. The resulting improvement in calibration performance can be quantified by detailed examination of the parameter space error surface using simulation as well as real data.

14.
Finding an operational parameter vector is always challenging in the application of hydrologic models, with over‐parameterization and limited information from observations leading to uncertainty about the best parameter vectors. Thus, it is beneficial to find every possible behavioural parameter vector. This paper presents a new methodology, called the patient rule induction method for parameter estimation (PRIM‐PE), to define where the behavioural parameter vectors are located in the parameter space. The PRIM‐PE was used to discover all regions of the parameter space containing an acceptable model behaviour. This algorithm consists of an initial sampling procedure to generate a parameter sample that sufficiently represents the response surface with a uniform distribution within the “good‐enough” region (i.e., performance better than a predefined threshold) and a rule induction component (PRIM), which is then used to define regions in the parameter space in which the acceptable parameter vectors are located. To investigate its ability in different situations, the methodology is evaluated using four test problems. The PRIM‐PE sampling procedure was also compared against a Markov chain Monte Carlo sampler known as the differential evolution adaptive Metropolis (DREAMZS) algorithm. Finally, a spatially distributed hydrological model calibration problem with two settings (a three‐parameter calibration problem and a 23‐parameter calibration problem) was solved using the PRIM‐PE algorithm. The results show that the PRIM‐PE method captured the good‐enough region in the parameter space successfully using 8 and 107 boxes for the three‐parameter and 23‐parameter problems, respectively. This good‐enough region can be used in a global sensitivity analysis to provide a broad range of parameter vectors that produce acceptable model performance. Moreover, for a specific objective function and model structure, the size of the boxes can be used as a measure of equifinality.
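A stripped-down version of the PRIM peeling idea in item 14: starting from the full parameter box, repeatedly peel off the 10% slice whose removal most increases the fraction of behavioural points, and stop when the box is nearly pure. This is a synthetic 2-D example; the actual PRIM-PE adds its own sampling stage, pasting, and multiple boxes:

```python
import random
random.seed(1)

# Synthetic 2-D parameter space: behaviour is acceptable only inside the
# box 0.2 < x < 0.5, 0.6 < y < 0.9 (a stand-in for a "good-enough" region).
pts = [(random.random(), random.random()) for _ in range(2000)]
good = lambda p: 0.2 < p[0] < 0.5 and 0.6 < p[1] < 0.9

def inside(p, b):
    return all(lo <= v <= hi for v, (lo, hi) in zip(p, b))

def purity(b):
    """(fraction of behavioural points, point count) inside box b."""
    sub = [p for p in pts if inside(p, b)]
    return (sum(good(p) for p in sub) / len(sub), len(sub)) if sub else (0.0, 0)

box = [[0.0, 1.0], [0.0, 1.0]]   # [lo, hi] per dimension
alpha = 0.1                      # peel fraction per step

# Greedy peeling: try removing an alpha-slice from either end of either
# dimension and keep the peel that most increases the box purity.
while purity(box)[0] < 0.95 and purity(box)[1] > 20:
    best = None
    for dim in (0, 1):
        lo, hi = box[dim]
        cut = alpha * (hi - lo)
        for new in ([lo + cut, hi], [lo, hi - cut]):
            trial = [b[:] for b in box]
            trial[dim] = new
            score = purity(trial)[0]
            if best is None or score > best[0]:
                best = (score, trial)
    box = best[1]

print([[round(v, 2) for v in b] for b in box])
```

The final box approximates the behavioural region, and as in the abstract, its size is a direct, readable measure of equifinality.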

15.
ABSTRACT

Reliable simulations of hydrological models require that model parameters are precisely identified. In constraining model parameters to small ranges, high parameter identifiability is achieved. In this study, it is investigated how precisely model parameters can be constrained in relation to a set of contrasting performance criteria. For this, model simulations with identical parameter samplings are carried out with a hydrological model (SWAT) applied to three contrasting catchments in Germany (lowland, mid-range mountains, alpine regions). Ten performance criteria including statistical metrics and signature measures are calculated for each model simulation. Based on the parameter identifiability that is computed separately for each performance criterion, model parameters are constrained to smaller ranges individually for each catchment. An iterative repetition of model simulations with successively constrained parameter ranges leads to more precise parameter identifiability and improves model performance. Based on these results, a more consistent handling of model parameters is achieved for model calibration.

16.
Calibration of hydrologic models is very difficult because of measurement errors in input and response, errors in model structure, and the large number of non-identifiable parameters of distributed models. The difficulties even increase in arid regions with high seasonal variation of precipitation, where the modelled residuals often exhibit high heteroscedasticity and autocorrelation. On the other hand, support of water management by hydrologic models is important in arid regions, particularly if there is increasing water demand due to urbanization. The use and assessment of model results for this purpose require a careful calibration and uncertainty analysis. Extending earlier work in this field, we developed a procedure to overcome (i) the problem of non-identifiability of distributed parameters by introducing aggregate parameters and using Bayesian inference, (ii) the problem of heteroscedasticity of errors by combining a Box–Cox transformation of results and data with seasonally dependent error variances, (iii) the problems of autocorrelated errors, missing data and outlier omission with a continuous-time autoregressive error model, and (iv) the problem of the seasonal variation of error correlations with seasonally dependent characteristic correlation times. The technique was tested with the calibration of the hydrologic sub-model of the Soil and Water Assessment Tool (SWAT) in the Chaohe Basin in North China. The results demonstrated the good performance of this approach to uncertainty analysis, particularly with respect to the fulfilment of statistical assumptions of the error model. A comparison with an independent error model and with error models that only considered a subset of the suggested techniques clearly showed the superiority of the approach based on all the features (i)–(iv) mentioned above.
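Point (ii) of item 16 — using a Box–Cox transformation to stabilize heteroscedastic residuals — can be demonstrated on synthetic flows with multiplicative observation error; all numbers here are invented:

```python
import math, random
random.seed(42)

def boxcox(y, lam):
    """Box-Cox transform; lam = 0 reduces to the log transform."""
    return math.log(y) if lam == 0 else (y ** lam - 1.0) / lam

# Synthetic flows with multiplicative (heteroscedastic) observation error,
# as is typical of strongly seasonal runoff records.
sim = [10.0 * (1.2 + math.sin(i / 5.0)) for i in range(200)]
obs = [s * math.exp(random.gauss(0, 0.1)) for s in sim]

def resid_spread(lam):
    """Ratio of residual spread at high flows to spread at low flows."""
    r = [boxcox(o, lam) - boxcox(s, lam) for o, s in zip(obs, sim)]
    lo = [ri for ri, s in zip(r, sim) if s < 12.0]
    hi = [ri for ri, s in zip(r, sim) if s >= 12.0]
    sd = lambda v: (sum((x - sum(v) / len(v)) ** 2 for x in v) / len(v)) ** 0.5
    return sd(hi) / sd(lo)

r1, r0 = resid_spread(1.0), resid_spread(0.0)
print(round(r1, 2), round(r0, 2))
# raw residuals (lam = 1) spread much more at high flows; after the
# transform (lam = 0) the spread is roughly uniform across flow levels
```

Homogeneous residual variance is what makes the constant-variance assumption of the downstream error model defensible, which is why the transform is combined with the autoregressive error model in the paper.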

17.
In this paper, we present a methodology to perform geophysical inversion of large‐scale linear systems via a covariance‐free orthogonal transformation: the discrete cosine transform. The methodology consists of compressing the matrix of the linear system as a digital image and using the interesting properties of orthogonal transformations to define an approximation of the Moore–Penrose pseudo‐inverse. This methodology is also highly scalable since the model reduction achieved by these techniques increases with the number of parameters of the linear system involved due to the high correlation needed for these parameters to accomplish very detailed forward predictions and allows for a very fast computation of the inverse problem solution. We show the application of this methodology to a simple synthetic two‐dimensional gravimetric problem for different dimensionalities and different levels of white Gaussian noise and to a synthetic linear system whose system matrix has been generated via geostatistical simulation to produce a random field with a given spatial correlation. The numerical results show that the discrete cosine transform pseudo‐inverse outperforms the classical least‐squares techniques, mainly in the presence of noise, since the solutions that are obtained are more stable and fit the observed data with the lowest root‐mean‐square error. Besides, we show that model reduction is a very effective way of parameter regularisation when the conditioning of the reduced discrete cosine transform matrix is taken into account. We finally show its application to the inversion of a real gravity profile in the Atacama Desert (north Chile) obtaining very successful results in this non‐linear inverse problem. 
The methodology presented here has a general character and can be applied to solve any linear and non‐linear inverse problems (through linearisation) arising in technology and, particularly, in geophysics, independently of the geophysical model discretisation and dimensionality. Nevertheless, the results shown in this paper are better in the case of ill‐conditioned inverse problems for which the matrix compression is more efficient. In that sense, a natural extension of this methodology would be its application to the set of normal equations.

18.
Non-linear numerical models of the injection phase of a carbon sequestration (CS) project are computationally demanding. Thus, the computational cost of the calibration of these models using sampling-based solutions can be formidable. The Bayesian adaptive response surface method (BARSM)—an adaptive response surface method (RSM)—is developed to mitigate the cost of sampling-based, continuous calibration of CS models. It is demonstrated that the adaptive scheme has a negligible effect on accuracy, while providing a significant increase in efficiency. In the BARSM, a meta-model replaces the computationally costly full model during the majority of the calibration cycles. In the remaining cycles, the full model is used and samples of these cycles are utilized for adaptively updating the meta-model. The idea behind the BARSM is to take advantage of the fact that sampling-based calibration algorithms typically tend to sample more frequently from areas with a larger posterior density than from areas with a smaller posterior density. This behavior of the sampling-based calibration algorithms is used to adaptively update the meta-model and to make it more accurate where it is most likely to be evaluated. The BARSM is integrated with Unscented Importance Sampling (UIS) (Sarkarfarshi and Gracie, Stoch Env Res Risk Assess 29: 975–993, 2015), which is an efficient Bayesian calibration algorithm. A synthesized case of supercritical CO2 injection in a heterogeneous saline aquifer is used to assess the performance of the BARSM and to compare it with a classical non-adaptive RSM approach and Bayesian calibration method UIS without using RSM. The BARSM is shown to reduce the computational cost compared to non-adaptive Bayesian calibration by 87%, with negligible effect on accuracy. It is demonstrated that the error of the meta-model fitted using the BARSM, when samples are drawn from the posterior parameter distribution, is negligible and smaller than the monitoring error.

19.
A simple phosphorus (P) transfer model of the Welland catchment, UK, is evaluated against multiple objective functions using a Monte Carlo approach that combines calibration, identifiability, sensitivity and uncertainty analysis. The model is based on simple conceptual rainfall‐runoff and river routing components, combined with estimates of the daily non‐point source load derived from annual landuse‐based export coefficients, disaggregated as a function of the runoff. The model has limited data requirements, consistent with data availability, and is parsimonious with respect to the number of parameters identified through inverse modelling. The best performing parameter sets capture the main aspects of the observed flow and total P (TP) concentrations and provide a suitable basis for a decision‐support tool. However, a trade‐off is evident between matching the observed flow peaks, flow recessions and TP concentrations simultaneously, highlighting some limitations of the model structure and/or calibration data. Model analysis indicates that daily non‐point source load cannot be described as a function of near‐surface runoff and land use alone, but that other influences, including seasonality, are important. However, further model development to improve performance is likely to introduce additional complexity (in terms of parameter numbers), and hence additional problems of parameter identifiability and output uncertainty, which in turn raises issues of the information content of the available data. Copyright © 2004 John Wiley & Sons, Ltd.
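The load model in item 19 — annual land-use export coefficients disaggregated to daily values in proportion to runoff — can be sketched directly. The coefficients, areas, and runoff series below are hypothetical, not the Welland values:

```python
# Hypothetical annual export coefficients (kg P/ha/yr) and land-use areas
# (ha); the paper's actual Welland values are not reproduced here.
export_coeff = {"arable": 0.6, "grassland": 0.3, "urban": 1.0}
area = {"arable": 20000.0, "grassland": 15000.0, "urban": 5000.0}

annual_load = sum(export_coeff[u] * area[u] for u in area)  # kg P/yr

# Disaggregate the annual non-point source load to daily values in
# proportion to each day's share of the annual runoff.
daily_runoff = [1.0, 4.0, 2.5, 0.5, 2.0]   # mm/day (toy series)
total_runoff = sum(daily_runoff)
daily_load = [annual_load * q / total_runoff for q in daily_runoff]

print(round(annual_load, 1), [round(x, 1) for x in daily_load])
```

By construction the daily loads sum back to the annual total, and high-runoff days carry proportionally more load — which is also why, as the abstract notes, runoff alone cannot capture seasonal effects on the load.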

20.
The interactive multi-objective genetic algorithm (IMOGA) combines traditional optimization with an interactive framework that considers the subjective knowledge of hydro-geological experts in addition to quantitative calibration measures such as calibration errors and regularization to solve the groundwater inverse problem. The IMOGA is inherently a deterministic framework and identifies multiple large-scale parameter fields (typically head and transmissivity data are used to identify transmissivity fields). These large-scale parameter fields represent the optimal trade-offs between the different criteria (quantitative and qualitative) used in the IMOGA. This paper further extends the IMOGA to incorporate uncertainty both in the large-scale trends as well as the small-scale variability (which cannot be resolved using the field data) in the parameter fields. The different parameter fields identified by the IMOGA represent the uncertainty in large-scale trends, and this uncertainty is modeled using a Bayesian approach where calibration error, regularization, and the expert’s subjective preference are combined to compute a likelihood metric for each parameter field. Small-scale (stochastic) variability is modeled using a geostatistical approach and added onto the large-scale trends identified by the IMOGA. This approach is applied to the Waste Isolation Pilot Plant (WIPP) case study. Results, with and without expert interaction, are analyzed and the impact that expert judgment has on predictive uncertainty at the WIPP site is discussed. It is shown that for this case, expert interaction leads to more conservative solutions as the expert compensates for some of the lack of data and modeling approximations introduced in the formulation of the problem.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号