Similar Documents
20 similar documents found (search time: 46 ms)
1.
2.
Hydrologic model development and calibration have continued in most cases to focus only on accurately reproducing streamflows. However, complex models, for example, the so‐called physically based models, possess large degrees of freedom that, if not constrained properly, may lead to poor model performance when used for prediction. We argue that constraining a model to represent streamflow, which is an integrated resultant of many factors across the watershed, is necessary but by no means sufficient to develop a high‐fidelity model. To address this problem, we develop a framework to utilize the Gravity Recovery and Climate Experiment's (GRACE) total water storage anomaly data as a supplement to streamflows for model calibration, in a multiobjective setting. The VARS method (Variogram Analysis of Response Surfaces) for global sensitivity analysis is used to understand the model behaviour with respect to streamflow and GRACE data, and the BORG multiobjective optimization method is applied for model calibration. Two subbasins of the Saskatchewan River Basin in Western Canada are used as a case study. Results show that the developed framework is superior to the conventional approach of calibration only to streamflows, even when multiple streamflow‐based error functions are simultaneously minimized. It is shown that a range of (possibly false) system trajectories in state variable space can lead to similar (acceptable) model responses. This observation has significant implications for land‐surface and hydrologic model development and, if not addressed properly, may undermine the credibility of the model in prediction. The framework effectively constrains the model behaviour (by constraining posterior parameter space) and results in more credible representation of hydrology across the watershed.
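The storage-anomaly constraint described above can be illustrated with a small sketch: a modelled total water storage anomaly (TWSA) is formed the way GRACE anomalies are referenced (summed storages minus a long-term mean) and scored alongside a streamflow objective. All numbers below are invented, and the paper's actual machinery (VARS sensitivity analysis, BORG optimization) is not reproduced.

```python
# Two-objective calibration check: streamflow fit (NSE) plus a
# GRACE-style total water storage anomaly fit (RMSE). Toy data only.

def nse(sim, obs):
    """Nash-Sutcliffe efficiency of simulated vs observed streamflow."""
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for s, o in zip(sim, obs))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def twsa(storages):
    """Total water storage anomaly: summed storage states minus their
    time mean, mimicking how GRACE anomalies are referenced."""
    total = [sum(states) for states in storages]
    mean_total = sum(total) / len(total)
    return [t - mean_total for t in total]

def rmse(sim, obs):
    return (sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs)) ** 0.5

# Hypothetical model states per month: (soil water, snow, groundwater).
storages = [(100.0, 20.0, 300.0), (90.0, 40.0, 310.0), (80.0, 10.0, 305.0)]
model_twsa = twsa(storages)
```

A calibration run would then minimize both `1 - nse(...)` on streamflow and `rmse(model_twsa, grace_twsa)` simultaneously rather than streamflow error alone.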

3.
In situ calibration is a proposed strategy for continuous as well as initial calibration of an impact disdrometer. In previous work, a collocated tipping bucket had been utilized to provide a rainfall-rate-based ~11/3-moment reference to an impact disdrometer's signal processing system for implementation of adaptive calibration. Using rainfall rate only, a transformation of impulse amplitude to drop volume based on a simple power law was used to define an error surface in the model's parameter space. By incorporating optical extinction second-moment measurements together with rainfall rate data, an improved in situ disdrometer calibration algorithm results, owing to the use of multiple (two or more) independent moments of the drop size distribution in the error function definition. The resulting improvement in calibration performance can be quantified by detailed examination of the parameter-space error surface using simulation as well as real data.
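The two-moment error surface idea can be sketched as follows: a hypothetical power law a·A^b maps impulse amplitude to drop volume, and candidate (a, b) pairs are scored against reference values of the 2nd and 11/3 drop-size-distribution moments. The amplitudes and coefficients here are purely illustrative, not the instrument's actual calibration.

```python
import math

def moment(diams, order):
    """Sum of drop diameters raised to the given moment order."""
    return sum(d ** order for d in diams)

def amp_to_diam(a, b, A):
    """Power-law volume V = a * A**b converted to equivalent diameter."""
    return (6.0 * a * A ** b / math.pi) ** (1.0 / 3.0)

def error(a, b, amplitudes, ref_m2, ref_m11_3):
    """Error-surface value combining relative misfits of two DSD moments:
    the 2nd (optical extinction) and the ~11/3 (rainfall rate)."""
    diams = [amp_to_diam(a, b, A) for A in amplitudes]
    e2 = (moment(diams, 2) - ref_m2) / ref_m2
    e113 = (moment(diams, 11.0 / 3.0) - ref_m11_3) / ref_m11_3
    return e2 ** 2 + e113 ** 2

# A "true" mapping (a=0.001, b=1.5) generates the reference moments.
amps = [1.0, 2.0, 4.0, 8.0]
true_diams = [amp_to_diam(0.001, 1.5, A) for A in amps]
ref2 = moment(true_diams, 2)
ref113 = moment(true_diams, 11.0 / 3.0)
```

Scanning `error(a, b, ...)` over a grid of (a, b) traces out the parameter-space error surface the abstract examines; with two independent moments its minimum is better localized than with rainfall rate alone.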

4.
A number of challenges, including instability, nonconvergence, nonuniqueness, nonoptimality, and the lack of a general guideline for inverse modelling, have limited the application of automatic calibration by generic inversion codes to the saltwater intrusion problem in real-world cases. A systematic parameter selection procedure for the selection of a small number of independent parameters is applied to a real case of saltwater intrusion in a small island aquifer system in the semiarid region of the Persian Gulf. The methodology aims at reducing parameter nonuniqueness and uncertainty and the time spent on inverse modelling computations. Subsequent to the automatic calibration of the numerical model, uncertainty is analysed by constrained nonlinear optimization of the inverse model. The results define the percentage of uncertainty in the parameter estimation that will maintain the model inside a user-defined neighbourhood of the best possible calibrated model. Sensitivity maps of both pressure and concentration for the small island aquifer system are also developed. These maps indicate higher sensitivity of pressure to model parameters compared with concentration. They serve as a benchmark for correlation analysis and also assist in the selection of observation points of pressure and concentration in the calibration process. Copyright © 2012 John Wiley & Sons, Ltd.

5.
With the recent development of distributed hydrological models, the use of multi-site observed data to evaluate model performance is becoming more common. Distributed hydrological models have many advantages, but at the same time they face the challenge of calibrating a large number of parameters. As a typical distributed hydrological model, the Soil and Water Assessment Tool (SWAT) also has problems with parameter calibration. In this paper, four different uncertainty approaches, Particle Swarm Optimization (PSO), Generalized Likelihood Uncertainty Estimation (GLUE), the Sequential Uncertainty Fitting algorithm (SUFI-2) and Parameter Solution (PARASOL), are compared using the SWAT model applied to the Peace River Basin, central Florida. The observed river discharge data used in SWAT model calibration were collected from three gauging stations on the main tributary of the Peace River. These approaches share a common philosophy: all seek out many parameter sets to represent the uncertainty arising from non-uniqueness in model parameter evaluation. On the basis of the statistical results of the four uncertainty methods, the difficulty level of each method, the number of runs, and the theoretical basis, the factors that affected the accuracy of simulation were analysed and compared. Furthermore, for the four uncertainty methods with the SWAT model in the study area, the pairwise correlations between parameters and the distributions of model-fit summary statistics, computed from sampling over the behavioural-parameter and the entire feasible calibration-parameter spaces, were identified and examined. This provided additional insight into the relative identifiability of the four uncertainty methods. Copyright © 2014 John Wiley & Sons, Ltd.

6.
The level of model complexity that can be effectively supported by available information has long been a subject of many studies in hydrologic modelling. In particular, distributed parameter models tend to be regarded as overparameterized because of the numerous parameters used to describe spatially heterogeneous hydrologic processes. However, it is not clear how parameters and observations influence the degree of overparameterization, equifinality of parameter values, and uncertainty. This study investigated the impact of the numbers of observations and parameters on calibration quality, including equifinality among calibrated parameter values, model performance, and output/parameter uncertainty, using the Soil and Water Assessment Tool model. In the experiments, the number of observations was increased by expanding the calibration period or by including measurements made at inner points of a watershed. Similarly, additional calibration parameters were included in the order of their sensitivity. Then, unique sets of parameters were calibrated with the same objective function, optimization algorithm, and stopping criteria but different numbers of observations. The calibration quality was quantified with statistics calculated based on the 'behavioural' parameter sets, identified using 1% and 5% cut-off thresholds in a generalized likelihood uncertainty estimation framework. The study demonstrated that equifinality, model performance, and output/parameter uncertainty were responsive to the numbers of observations and calibration parameters; however, the relationship between those numbers, equifinality, and uncertainty was not always conclusive. Model performance improved with increased numbers of calibration parameters and observations, and substantial equifinality did not necessarily mean bad model performance or large uncertainty in the model outputs and parameters. Copyright © 2015 John Wiley & Sons, Ltd.
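The behavioural cut-off idea used above (from the generalized likelihood uncertainty estimation, GLUE, framework) can be sketched with a toy model standing in for SWAT: sample parameters, rank by a likelihood measure (Nash-Sutcliffe efficiency here), keep the top 5%, and read the parameter uncertainty band off the behavioural set. Everything below is illustrative.

```python
import random

random.seed(0)
x_obs = [1.0, 2.0, 3.0]
y_obs = [2.1, 3.9, 6.2]              # roughly generated by y = 2 * x

def nse(sim, obs):
    """Nash-Sutcliffe efficiency used as the informal likelihood measure."""
    m = sum(obs) / len(obs)
    sse = sum((s - o) ** 2 for s, o in zip(sim, obs))
    sst = sum((o - m) ** 2 for o in obs)
    return 1.0 - sse / sst

# Monte Carlo sampling of the single parameter p of the toy model y = p*x.
samples = []
for _ in range(2000):
    p = random.uniform(0.0, 4.0)
    sim = [p * x for x in x_obs]
    samples.append((nse(sim, y_obs), p))

samples.sort(reverse=True)
behavioural = samples[: len(samples) // 20]   # 5% cut-off threshold
p_values = sorted(p for _, p in behavioural)
p_lo, p_hi = p_values[0], p_values[-1]        # parameter uncertainty band
```

Tightening the threshold from 5% to 1% shrinks `behavioural` and usually narrows `[p_lo, p_hi]`, which is exactly the equifinality/uncertainty trade-off the study quantifies.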

7.
8.
The use of distributed data for model calibration is becoming more popular with the advent of spatially distributed observations. Hydrological model calibration has traditionally been carried out using single-objective optimisation and only recently has been extended to a multi-objective optimisation domain. By formulating the calibration problem with several objectives, each objective relating to a set of observations, the parameter sets can be constrained more effectively. However, many previous multi-objective calibration studies do not consider individual observations or catchment responses separately, but instead utilise some form of aggregation of objectives. This paper proposes a multi-objective calibration approach that can efficiently handle many objectives using both clustering and preference-ordered ranking. The algorithm is applied to calibrate the MIKE SHE distributed hydrologic model and tested on the Karup catchment in Denmark. The results indicate that the preferred solutions selected using the proposed algorithm are good compromise solutions and the parameter values are well defined. Clustering with Kohonen mapping was able to reduce the number of objective functions from 18 to 5. Calibration using the standard deviation of groundwater level residuals enabled us to identify a group of wells that may not be simulated properly, thus highlighting potential problems with the model parameterisation.
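The objective-reduction step can be illustrated without a full Kohonen map: objectives whose values are strongly correlated across candidate parameter sets carry redundant information and can share one cluster representative, which is the effect the clustering achieves. The greedy correlation grouping below, on made-up objective vectors, is a simplified stand-in rather than the paper's algorithm.

```python
def pearson(u, v):
    """Pearson correlation between two objective-value vectors."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return cov / (su * sv)

def reduce_objectives(objs, threshold=0.9):
    """Greedily group objectives: join an existing group when correlation
    with the group's first member exceeds the threshold."""
    groups = []
    for i, vec in enumerate(objs):
        for g in groups:
            if pearson(objs[g[0]], vec) > threshold:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

# Five objectives evaluated on four parameter sets: 0 and 1 are nearly
# identical, 2 and 3 are nearly identical, 4 is distinct.
objs = [
    [1.0, 2.0, 3.0, 4.0],
    [1.1, 2.1, 3.2, 4.0],
    [4.0, 3.0, 2.0, 1.0],
    [3.9, 3.1, 2.0, 1.1],
    [1.0, 3.0, 1.0, 3.0],
]
groups = reduce_objectives(objs)
```

Here five objectives collapse to three groups; in the paper the same principle took 18 objective functions down to 5.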

9.
Calibration is required for most soil moisture sensors if accurate measurements are to be obtained. This can be time consuming and costly, especially if field calibration is undertaken, but can be facilitated by a good understanding of the behaviour of the particular sensor being calibrated. We develop generalized temperature correction and soil water calibration relationships for Campbell Scientific CS615 water-content reflectometer sensors. The temperature correction is estimated as a function of the raw sensor measurement. The calibration relationship requires one soil-related parameter to be set. These relationships facilitate field calibration of these sensors to acceptable accuracies with only a small number of samples. Copyright © 2005 John Wiley & Sons, Ltd.

10.
Conventional three-dimensional magnetotelluric inversion uses an L2-norm regularization term, which expects the spatial distribution of resistivity to be smooth everywhere and thus weakens the algorithm's ability to resolve sharp resistivity interfaces. This paper implements a three-dimensional magnetotelluric inversion algorithm whose regularization term is the L1 norm, giving the model's spatial gradient vector a better chance of attaining a sparse solution; under sufficiently regularized iterations, this can effectively emphasize the true resistivity interfaces of the model. To avoid the difficulty that the L1 norm is not differentiable at zero, iteratively reweighted least squares is used to convert the original problem into a sequence of L2-regularized subproblems that are solved iteratively. Each subproblem is minimized with a modified quasi-Newton method whose descent direction both preserves the accuracy of the Hessian of the regularization term and allows the inversion to update the regularization parameter flexibly from iteration to iteration. The regularization parameter is updated adaptively by a ratio method or a piecewise-decay method to avoid falling into singular solutions in early iterations, which improves the stability of convergence and reduces dependence on the initial model. Inversion of noise-free synthetic data shows that the L1-regularized algorithm recovers the model better than L2 regularization; inversions of synthetic data with different noise levels show that the algorithm is robust; and comparative inversion of field data shows that, with a reasonable strategy for adjusting the regularization parameter, L1-regularized results have better model resolution than L2-regularized results. In addition, tests with different initial models show that a poorly chosen regularization parameter may cause the L1 regularization to produce blocky artefacts.
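The iteratively reweighted least squares (IRLS) device mentioned above, which replaces the non-differentiable L1 term with a sequence of L2 subproblems, can be sketched on a toy linear problem; the 3-D MT forward operator, quasi-Newton solver and regularization-parameter schedules are all omitted, and the matrices below are synthetic.

```python
import numpy as np

def irls_l1(G, d, lam, n_iter=50, eps=1e-6):
    """Approximately solve min ||G m - d||^2 + lam * ||m||_1 via IRLS:
    |m_i| ~ m_i^2 / |m_i|, so each pass solves an L2 subproblem with
    weights W = diag(1 / (|m_i| + eps))."""
    m = np.linalg.lstsq(G, d, rcond=None)[0]
    for _ in range(n_iter):
        W = np.diag(1.0 / (np.abs(m) + eps))
        m = np.linalg.solve(G.T @ G + lam * W, G.T @ d)
    return m

rng = np.random.default_rng(1)
G = rng.standard_normal((30, 10))
m_true = np.zeros(10)
m_true[[2, 7]] = [1.0, -2.0]     # sparse "model" with sharp jumps
d = G @ m_true
m_est = irls_l1(G, d, lam=0.01)
```

The L1 weighting drives near-zero components toward exactly zero while barely shrinking the large ones, which is why the gradient of the recovered model stays sparse and interfaces stay sharp.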

11.
Landscape evolution models (LEMs) have the capability to characterize key aspects of geomorphological and hydrological processes. However, their usefulness is hindered by model equifinality and paucity of available calibration data. Estimating uncertainty in the parameter space and resultant model predictions is rarely achieved as this is computationally intensive and the uncertainties inherent in the observed data are large. Therefore, a limits-of-acceptability (LoA) uncertainty analysis approach was adopted in this study to assess the value of uncertain hydrological and geomorphic data. These were used to constrain simulations of catchment responses and to explore the parameter uncertainty in model predictions. We applied this approach to the River Derwent and Cocker catchments in the UK using the LEM CAESAR-Lisflood. Results show that the model was generally able to produce behavioural simulations within the uncertainty limits of the streamflow. Reliability metrics ranged from 24.4% to 41.2% and captured the high-magnitude low-frequency sediment events. Since different sets of behavioural simulations were found across different parts of the catchment, evaluating LEM performance, in quantifying and assessing both at-a-point behaviour and spatial catchment response, remains a challenge. Our results show that evaluating LEMs within an uncertainty analysis framework, while taking into account the varying quality of different observations, constrains behavioural simulations and parameter distributions and is a step towards a full-ensemble uncertainty evaluation of such models. We believe that this approach will have benefits for reflecting uncertainties in flooding events where channel morphological changes are occurring and various diverse (and yet often sparse) data have been collected over such events.
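The limits-of-acceptability test can be sketched in a few lines: a simulation is behavioural when enough of its time steps fall inside observation uncertainty limits, and a reliability metric is the fraction of steps inside. The series and the ±20% limits below are invented for illustration.

```python
def reliability(sim, lower, upper):
    """Fraction of time steps where the simulation lies inside the
    observation uncertainty limits."""
    inside = sum(1 for s, lo, hi in zip(sim, lower, upper) if lo <= s <= hi)
    return inside / len(sim)

obs = [10.0, 14.0, 30.0, 22.0]
lower = [o * 0.8 for o in obs]       # hypothetical +/-20% observation limits
upper = [o * 1.2 for o in obs]

sim_a = [11.0, 13.5, 28.0, 21.0]     # inside the limits at every step
sim_b = [11.0, 13.5, 40.0, 5.0]      # misses the peak and the recession
r_a = reliability(sim_a, lower, upper)
r_b = reliability(sim_b, lower, upper)

# Keep only simulations reliable enough to count as behavioural.
behavioural = [name for name, r in [("a", r_a), ("b", r_b)] if r >= 0.75]
```

In the study the same idea is applied with observation-specific limits of varying quality, and the retained ensemble defines the constrained parameter distributions.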

12.
A new parameter estimation algorithm based on the ensemble Kalman filter (EnKF) is developed. The algorithm, combined with the proposed problem parametrization, offers an efficient parameter estimation method that converges using very small ensembles. The inverse problem is formulated as a sequential data integration problem. Gaussian process regression is used to integrate the prior knowledge (static data). The search space is further parameterized using a Karhunen–Loève expansion to build a set of basis functions that spans the search space. Optimal weights of the reduced basis functions are estimated by an iterative regularized EnKF algorithm. The filter is converted to an optimization algorithm by using a pseudo time-stepping technique such that the model output matches the time-dependent data. The EnKF Kalman gain matrix is regularized using truncated SVD to filter out noisy correlations. Numerical results show that the proposed algorithm is a promising approach for parameter estimation of subsurface flow models.
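The truncated-SVD regularization of the Kalman gain can be sketched on toy matrices: small singular values of the data covariance, which mostly carry sampling noise in a finite ensemble, are discarded before the gain is formed. No flow model, Gaussian process prior or KL expansion appears here; the covariances are invented.

```python
import numpy as np

def truncated_svd_gain(C_md, C_dd, k):
    """Kalman-like gain K = C_md @ pinv_k(C_dd), where pinv_k keeps only
    the k largest singular values of the data covariance C_dd."""
    U, s, Vt = np.linalg.svd(C_dd)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]          # zero out the small, noisy directions
    return C_md @ (Vt.T @ np.diag(s_inv) @ U.T)

rng = np.random.default_rng(0)
C_md = rng.standard_normal((5, 4))   # toy model-data cross covariance
A = rng.standard_normal((4, 4))
C_dd = A @ A.T + np.eye(4)           # symmetric positive definite toy C_dd

K_full = truncated_svd_gain(C_md, C_dd, k=4)   # equals C_md @ inv(C_dd)
K_trunc = truncated_svd_gain(C_md, C_dd, k=2)  # regularized, rank <= 2
```

Truncation trades a small bias for a large variance reduction: the update no longer amplifies directions of the data space that a small ensemble cannot estimate reliably.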

13.
A novel algorithm called the Isometric Method (IM) for solving smooth real-valued non-linear inverse problems has been developed. Model and data spaces are represented by m + 1 corresponding vectors at a time (m is the dimension of the model space). Relations among vectors in the data space are set up and then transferred into the model space, thus generating a new model. If the problem is truly linear, this new model is the exact solution of the inverse problem. If the problem is non-linear, the whole procedure has to be repeated iteratively. The basic underlying idea of IM is to postulate the distance in the model space in such a way that the model and data spaces are isometric, i.e. distances in both spaces have the same measure. As all model-data vector pairs are used many times in successive iterations, the number of forward-problem computations is minimized. There is no need to deal with derivatives, and the computer memory requirement is low. IM is especially suitable for solving smooth, moderately non-linear problems where forward modelling is time-consuming and minimizing the number of function evaluations matters. Applications of IM to synthetic and real geophysical problems are also presented.

14.
Calibrating a comprehensive, multi-parameter conceptual hydrological model, such as the Hydrological Simulation Program Fortran (HSPF) model, is a major challenge. This paper describes calibration procedures for the water-quantity parameters of HSPF version 10.11 using the automatic-calibration parameter estimator model coupled with a geographical information system (GIS) approach for spatially averaged properties. The study area was the Grand River watershed, located in southern Ontario, Canada, between 79°30′ and 80°57′W longitude and 42°51′ and 44°31′N latitude. The drainage area is 6965 km². Calibration efforts were directed to those model parameters that produced large changes in model response during sensitivity tests run prior to undertaking calibration. A GIS was used extensively in this study. It was first used in the watershed segmentation process. During calibration, the GIS data were used to establish realistic starting values for the surface and subsurface zone parameters LZSN, UZSN, COVER, and INFILT, and physically reasonable ratios of these parameters among watersheds were preserved during calibration, with the ratios based on the known properties of the subwatersheds determined using GIS. This calibration procedure produced very satisfactory results; the percentage difference between the simulated and the measured yearly discharge ranged between 4 and 16%, which is classified as good to very good calibration. The average simulated daily discharge for the watershed outlet at Brantford for the years 1981–85 was 67 m³ s⁻¹ and the average measured discharge at Brantford was 70 m³ s⁻¹. The coupling of a GIS with automatic calibration produced a realistic and accurate calibration for the HSPF model with much less effort and subjectivity than would be required for unassisted calibration. Copyright © 2002 John Wiley & Sons, Ltd.

15.
We introduce a nonlinear orthogonal matching pursuit (NOMP) for sparse calibration of subsurface flow models. Sparse calibration is a challenging problem, as the unknowns are both the non-zero components of the solution and their associated weights. NOMP is a greedy algorithm that discovers at each iteration the basis function most correlated with the residual from a large pool of basis functions. The discovered basis (a.k.a. the support) is augmented across the nonlinear iterations. Once a set of basis functions is selected, the solution is obtained by applying Tikhonov regularization. The proposed algorithm relies on a stochastically approximated gradient using an iterative stochastic ensemble method (ISEM). In the current study, the search space is parameterized using an overcomplete dictionary of basis functions built using the K-SVD algorithm. The proposed algorithm is the first ensemble-based algorithm that tackles the sparse nonlinear parameter estimation problem.
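The greedy loop that NOMP builds on is, in its linear form, classical orthogonal matching pursuit: pick the dictionary atom most correlated with the residual, grow the support, and re-fit on the support. The sketch below is that linear version on a random dictionary; the paper's nonlinear extension, ISEM gradients and K-SVD dictionary are not reproduced.

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Linear orthogonal matching pursuit: greedy support discovery
    followed by a least-squares re-fit on the selected atoms."""
    support, residual = [], y.copy()
    coef = np.zeros(0)
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(3)
D = rng.standard_normal((50, 10))
D /= np.linalg.norm(D, axis=0)       # unit-norm dictionary atoms
x_true = np.zeros(10)
x_true[[1, 7]] = [2.0, -1.5]         # 2-sparse ground truth
y = D @ x_true
x_est = omp(D, y, n_nonzero=2)
```

In the nonlinear setting the correlation step and the re-fit both require model runs, which is where the ensemble-based gradient approximation comes in.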

16.
A new uncertainty estimation method, which we recently introduced in the literature, allows for the comprehensive search of model posterior space while maintaining a high degree of computational efficiency. The method starts with an optimal solution to an inverse problem, performs a parameter reduction step and then searches the resulting feasible model space using prior parameter bounds and sparse-grid polynomial interpolation methods. After misfit rejection, the resulting model ensemble represents the equivalent model space and can be used to estimate inverse solution uncertainty. While parameter reduction introduces a posterior bias, it also allows for scaling this method to higher dimensional problems. The use of Smolyak sparse-grid interpolation also dramatically increases sampling efficiency for large stochastic dimensions. Unlike Bayesian inference, which treats the posterior sampling problem as a random process, this geometric sampling method exploits the structure and smoothness in posterior distributions by solving a polynomial interpolation problem and then resampling from the resulting interpolant. The two questions we address in this paper are (1) whether our results are generally compatible with established Bayesian inference methods and (2) how our method compares in terms of posterior sampling efficiency. We accomplish this by comparing our method for two electromagnetic problems from the literature with two commonly used Bayesian sampling schemes: Gibbs' and Metropolis-Hastings. While both the sparse-grid and Bayesian samplers produce compatible results, in both examples the sparse-grid approach has a much higher sampling efficiency, requiring an order of magnitude fewer samples, suggesting that sparse-grid methods can significantly improve the tractability of inference solutions for problems in high dimensions or with more costly forward physics.
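One of the Bayesian baselines named above, Metropolis-Hastings, can be sketched in a few lines with a random-walk proposal on a 1-D Gaussian target; the sparse-grid interpolant itself is not reproduced here, and the target is purely illustrative.

```python
import math
import random

random.seed(4)

def log_post(x):
    """Unnormalized log posterior: a standard normal target."""
    return -0.5 * x * x

x, chain = 0.0, []
for _ in range(20000):
    prop = x + random.gauss(0.0, 1.0)            # random-walk proposal
    # Accept with probability min(1, posterior ratio).
    if math.log(random.random()) < log_post(prop) - log_post(x):
        x = prop
    chain.append(x)

burned = chain[2000:]                            # discard burn-in
mean = sum(burned) / len(burned)
var = sum((c - mean) ** 2 for c in burned) / len(burned)
```

The autocorrelation of such a chain is what drives up the sample counts the abstract compares against: many correlated draws are needed per effectively independent posterior sample.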

17.
Seismic Event Location: Nonlinear Inversion Using a Neighbourhood Algorithm
A recently developed direct search method for inversion, known as a neighbourhood algorithm (NA), is applied to the hypocentre location problem. Like some previous methods, the algorithm uses randomised, or stochastic, sampling of a four-dimensional hypocentral parameter space to search for solutions with acceptable data fit. Considerable flexibility is allowed in the choice of misfit measure.

At each stage the hypocentral parameter space is partitioned into a series of convex polygons called Voronoi cells. Each cell surrounds a previously generated hypocentre for which the fit to the data has been determined. As the algorithm proceeds, new hypocentres are randomly generated in the neighbourhood of those hypocentres with smaller data misfit. In this way all previous hypocentres guide the search, and the more promising regions of parameter space are preferentially sampled.

The NA procedure makes use of just two tuning parameters. It is possible to choose their values so that the behaviour of the algorithm is similar to that of a contracting irregular grid in 4-D. This is the feature of the algorithm that we exploit for hypocentre location. In experiments with different events and data sources, the NA approach is able to achieve comparable or better levels of data fit than a range of alternative methods: linearised least-squares, genetic algorithms, simulated annealing and a contracting grid scheme. Moreover, convergence was achieved with a substantially reduced number of travel-time/slowness calculations compared with other nonlinear inversion techniques. Even when initial parameter bounds are very loose, the NA procedure produced robust convergence with acceptable levels of data fit.
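The contracting-search flavour of the NA can be sketched without true Voronoi-cell geometry: keep resampling near the best-misfit hypocentres with a shrinking perturbation, so that all earlier samples guide the search. The Gaussian-step stand-in and the quadratic "misfit" below are simplifications, not the published algorithm.

```python
import random

random.seed(7)
true_hypo = (2.0, -1.0, 5.0, 0.3)    # toy x, y, depth, origin time

def misfit(h):
    # Stand-in for a travel-time misfit: squared distance to the truth.
    return sum((a - b) ** 2 for a, b in zip(h, true_hypo))

# Initial uniform sampling of the 4-D hypocentral parameter space.
samples = [tuple(random.uniform(-10, 10) for _ in range(4)) for _ in range(50)]

for it in range(60):
    samples.sort(key=misfit)
    step = 2.0 * 0.9 ** it           # contracting neighbourhood size
    # The two NA-like tuning knobs here: how many best cells to resample
    # (5) and how many new samples to draw per iteration (1 per cell).
    for best_so_far in samples[:5]:
        samples.append(tuple(c + random.gauss(0.0, step) for c in best_so_far))

best = min(samples, key=misfit)
```

As the neighbourhood contracts, the sampler behaves like the "contracting irregular grid in 4-D" the abstract describes, homing in on the low-misfit region without any derivatives.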

18.
This paper implements fast simultaneous computation of multi-frequency three-dimensional controlled-source electromagnetic (CSEM) forward responses using a rational Krylov subspace model-order-reduction algorithm. The governing equations are first discretized in space with a mimetic finite-volume method on a Yee staggered grid, and the electric-field response at an arbitrary frequency is expressed as a transfer function of the frequency parameter, which is then evaluated with a rational Krylov subspace algorithm. Building an m-dimensional rational Krylov subspace normally requires solving m (tens to hundreds of) linear systems involving the rational-function poles and the discretized coefficient matrix. To address this, we adopt a rational Krylov model-order-reduction algorithm with a single repeated pole: combined with the direct solver PARDISO and Gram–Schmidt orthogonalization, the subspace can be constructed with only one matrix factorization and m back-substitutions, greatly reducing the computational cost. For the choice of the optimal pole, starting from the error analysis of rational Krylov subspace projection of the transfer function, we introduce a convergence-rate function for the single repeated pole and obtain the optimal repeated pole directly by maximizing this convergence rate. The result is fast computation of 3-D CSEM forward responses at multiple transmitting frequencies. Multi-frequency CSAMT and marine CSEM responses are computed for typical layered models, and comparison with analytical solutions verifies the accuracy and efficiency of the algorithm for multi-frequency forward modelling; an example of optimizing the transmitting frequencies and receiver area in a 3-D marine CSEM survey design further illustrates its advantages.

19.
Watershed simulation models are used extensively to investigate hydrologic processes, land-use and climate change impacts, pollutant load assessments and best management practices (BMPs). Developing, calibrating and validating these models require a number of critical decisions that influence the ability of the model to represent real-world conditions. Understanding how these decisions influence model performance is crucial, especially when making science-based policy decisions. This study used the Soil and Water Assessment Tool (SWAT) model in the West Lake Erie Basin (WLEB) to examine the influence of several of these decisions on hydrological processes and streamflow simulations. Specifically, this study addressed the following objectives: (1) demonstrate the importance of considering intra-watershed processes during model development, (2) compare and evaluate spatial calibration versus calibration at the outlet and (3) evaluate parameter transfers across temporal and spatial scales. A coarser resolution (HUC-12) model and a finer resolution (NHDPlus) model were used to support the objectives. Results showed that knowledge of watershed characteristics and intra-watershed processes is critical to produce accurate and realistic hydrologic simulations. The spatial calibration strategy produced better results than the outlet calibration strategy and provided more confidence. Transferring parameter values across spatial scales (i.e. from the coarser resolution model to the finer resolution model) needs additional fine tuning to produce realistic results. Transferring parameters across temporal scales (i.e. from monthly to yearly and daily time-steps) performed well with a similar spatial resolution model. Furthermore, this study shows that relying solely on quantitative statistics without considering additional information can produce good but unrealistic simulations. Copyright © 2015 John Wiley & Sons, Ltd.

20.
The groundwater inverse problem of estimating heterogeneous groundwater model parameters (hydraulic conductivity in this case) given measurements of aquifer response (such as hydraulic heads) is known to be an ill-posed problem, with multiple parameter values giving similar fits to the aquifer response measurements. This problem is further exacerbated by the lack of extensive data, typical of most real-world problems. In such cases, it is desirable to incorporate expert knowledge in the estimation process to generate more reasonable estimates. This work presents a novel interactive framework, called the 'Interactive Multi-Objective Genetic Algorithm' (IMOGA), to solve the groundwater inverse problem considering different sources of quantitative data as well as qualitative expert knowledge about the site. The IMOGA is unique in that it treats groundwater model calibration as a multi-objective problem consisting of quantitative objectives – calibration error and regularization – and a 'qualitative' objective based on the preference of the geological expert for different spatial characteristics of the conductivity field. All these objectives are then included within a multi-objective genetic algorithm to find multiple solutions that represent the best combination of all quantitative and qualitative objectives. A hypothetical aquifer case-study (based on the test case presented by Freyberg [Freyberg DL. An exercise in ground-water model calibration and prediction. Ground Water 1988;26(3)]), for which the 'true' parameter values are known, is used as a test case to demonstrate the applicability of this method. It is shown that using automated calibration techniques without expert interaction leads to parameter values that are not consistent with site knowledge. Adding expert interaction is shown to improve not only the plausibility of the estimated conductivity fields but also the predictive accuracy of the calibrated model.
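The multi-objective core of such a framework is Pareto dominance over the objectives; the expert's qualitative preference score would simply enter as one more objective alongside calibration error and regularization. A minimal sketch on invented (calibration error, regularization) pairs:

```python
def dominates(a, b):
    """a dominates b when it is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Candidates not dominated by any other candidate."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (calibration error, regularization) for five candidate
# conductivity fields; a qualitative expert score would be a third entry.
candidates = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 3.5), (5.0, 5.0)]
front = pareto_front(candidates)
```

A genetic algorithm using this ranking keeps the whole trade-off surface alive, so the expert can inspect several best-compromise fields rather than a single "optimal" one.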

