Similar Articles
20 similar articles retrieved (search time: 31 ms)
1.
L. Foglia  S.W. Mehl 《Ground water》2015,53(1):130-139
In this work, we provide suggestions for designing experiments where calibration of many models is required and guidance for identifying problematic calibrations. Calibration of many conceptual models which have different representations of the physical processes in the system, as is done in cross-validation studies or multi-model analysis, often uses computationally frugal inversion techniques to achieve tractable execution times. However, because these frugal methods are usually local methods, and the inverse problem is almost always nonlinear, there is no guarantee that the optimal solution will be found. Furthermore, evaluation of each inverse model's performance to identify poor calibrations can be tedious. Results of this study show that if poorly calibrated models are included in the analysis, simulated predictions and measures of prediction uncertainty can be affected in unexpected ways. Guidelines are provided to help identify problematic regressions and correct them.

2.
This work demonstrates how available knowledge can be used to build more transparent and refutable computer models of groundwater systems. The Death Valley regional groundwater flow system, which surrounds a proposed site for a high-level nuclear waste repository of the United States of America, and the Nevada National Security Site (NNSS), where nuclear weapons were tested, is used to explore model adequacy, identify parameters important to (and informed by) observations, and identify existing old and potential new observations important to predictions. Model development is pursued using a set of fundamental questions addressed with carefully designed metrics. Critical methods include using a hydrogeologic model, managing model nonlinearity by designing models that are robust while maintaining realism, using error-based weighting to combine disparate types of data, and identifying important and unimportant parameters and observations and optimizing parameter values with computationally frugal schemes. The frugal schemes employed in this study require relatively few (10s–1000s of) parallelizable model runs. This is beneficial because models able to approximate the complex site geology defensibly tend to have high computational cost. The issue of model defensibility is particularly important given the contentious political issues involved.
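The error-based weighting mentioned above is commonly implemented by dividing each residual by the standard deviation of its expected observation error, so that heads, flows and other disparate data types contribute on a comparable scale. The sketch below is a generic illustration of that idea, with invented variable names and values rather than anything from the study.

```python
import numpy as np

def weighted_sse(simulated, observed, obs_std):
    """Weighted sum of squared residuals.

    Each residual is scaled by the standard deviation of its observation
    error, so different data types contribute on a comparable scale.
    """
    residuals = (np.asarray(observed) - np.asarray(simulated)) / np.asarray(obs_std)
    return float(np.sum(residuals ** 2))

# Example: two heads (m) and one spring flow (m^3/d) in one objective function
sim = [101.2, 98.7, 5400.0]
obs = [101.5, 99.1, 5150.0]
std = [0.5, 0.5, 500.0]          # assumed measurement-error standard deviations
print(weighted_sse(sim, obs, std))
```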

3.
Simulation tools used for management purposes should fulfill several conditions: they should be computationally fast, user-friendly, realistic, generic and reliable. These traits often conflict, since they simultaneously demand model complexity as well as simplicity. Here we develop a strategy to overcome this general problem of environmental modelling for management use. Major ingredients are model analysis and reduction as new core components of the modelling process. In detail, a set of combined methods is proposed. Within a large class of models, the set allows for automatically exploring model behaviour and for aggregating fine-scale process knowledge together with spatio-temporal resolution. Applications to a large aquatic European regional seas ecosystem model (ERSEM), a complex photosynthesis model (PGEN) and a simple diagenetic model are presented. The analysis and aggregation methods provide first steps towards a new generation of decision support tools able to cope with an increase in scientific knowledge as well as management demands.

4.
5.
Realistic environmental models used for decision making typically require a highly parameterized approach. Calibration of such models is computationally intensive because widely used parameter estimation approaches require individual forward runs for each parameter adjusted. These runs construct a parameter-to-observation sensitivity, or Jacobian, matrix used to develop candidate parameter upgrades. Parameter estimation algorithms are also commonly adversely affected by numerical noise in the calculated sensitivities within the Jacobian matrix, which can result in unnecessary parameter estimation iterations and a poorer model-to-measurement fit. Ideally, approaches to reduce the computational burden of parameter estimation will also increase the signal-to-noise ratio of observations influential to the parameter estimation even as the number of forward runs decreases. In this work, a simultaneous-increments approach, an iterative ensemble smoother (IES), and a randomized Jacobian approach were compared to a traditional approach that uses a full Jacobian matrix. All approaches were applied to the same model developed for decision making in the Mississippi Alluvial Plain, USA. Both the IES and randomized Jacobian approach achieved a desirable fit and similar parameter fields in many fewer forward runs than the traditional approach; in both cases the fit was obtained in fewer runs than the number of adjustable parameters. The simultaneous-increments approach did not perform as well as the other methods due to its inability to overcome suboptimal dropping of parameter sensitivities. This work indicates that use of highly efficient algorithms can greatly speed parameter estimation, which in turn improves calibration vetting and the utility of realistic models used for decision making.
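As a point of reference for the run counts discussed above, a conventional Jacobian is usually filled by perturbing one parameter per forward run; the sketch below (our own generic illustration, not code from the study) makes that cost explicit.

```python
import numpy as np

def finite_difference_jacobian(forward_model, params, rel_step=0.01):
    """One-sided finite-difference Jacobian: one forward run per parameter.

    forward_model: callable mapping a parameter vector to simulated observations.
    Returns an (n_obs x n_par) sensitivity matrix.
    """
    params = np.asarray(params, dtype=float)
    base = np.asarray(forward_model(params))
    jac = np.zeros((base.size, params.size))
    for j in range(params.size):
        dp = rel_step * max(abs(params[j]), 1e-12)
        perturbed = params.copy()
        perturbed[j] += dp
        jac[:, j] = (np.asarray(forward_model(perturbed)) - base) / dp
    return jac  # total cost: n_par + 1 forward runs, the burden ensemble methods avoid
```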

6.
Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and the observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations.
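For least-squares calibration, the information criteria named above are often computed directly from the sum of squared weighted residuals. The formulas below are the standard textbook forms (not taken from the paper itself) and show why the criteria are so cheap: they need only the best-fit objective value, the number of observations and the number of estimated parameters.

```python
import numpy as np

def information_criteria(sse, n_obs, n_params):
    """AIC, corrected AIC (AICc), and BIC for a least-squares model.

    sse: sum of squared (weighted) residuals; n_obs: number of observations;
    n_params: number of estimated parameters (error variance added below).
    """
    k = n_params + 1                      # +1 for the estimated error variance
    aic = n_obs * np.log(sse / n_obs) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n_obs - k - 1)
    bic = n_obs * np.log(sse / n_obs) + k * np.log(n_obs)
    return {"AIC": aic, "AICc": aicc, "BIC": bic}

# Rank two alternative models fit to the same 50 observations (illustrative numbers)
print(information_criteria(sse=12.4, n_obs=50, n_params=4))
print(information_criteria(sse=10.9, n_obs=50, n_params=9))
```

The model with the smallest criterion value is preferred; AICc and BIC penalize the extra parameters of the second model more strongly than plain AIC.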

7.
Earthquake dynamic response analysis of large complex structures, especially in the presence of nonlinearities, usually turns out to be computationally expensive. In this paper, the methodological development of a new model order reduction (MOR) strategy based on the proper orthogonal decomposition (POD) method, as well as its practical applicability to a realistic building structure, is presented. The seismic performance of the building structure, a medical complex, is to be improved by means of base isolation realized by frictional pendulum bearings. According to the newly introduced MOR strategy, a set of deterministic POD modes (transformation matrix) is assembled, which is derived from parts of the response history, so-called snapshots, of the structure under a representative earthquake excitation. Subsequently, this transformation matrix is utilized to create reduced-order models of the structure subjected to different earthquake excitations. These sets of nonlinear low-order representations are then solved in a fraction of the time needed for the computations of the full (non-reduced) systems. The results demonstrate accurate approximations of the physical (full) responses by means of this new MOR strategy if the probable behavior of the structure has already been captured in the POD snapshots. Copyright © 2016 The Authors. Earthquake Engineering & Structural Dynamics published by John Wiley & Sons Ltd.
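A common way to obtain the POD transformation matrix described above is a singular value decomposition of the snapshot matrix. The sketch below shows that generic construction; the matrix sizes, the random placeholder snapshots and the identity mass/stiffness matrices are purely illustrative and not those of the building model.

```python
import numpy as np

# Snapshot matrix: each column is the response vector at one time step
n_dof, n_snapshots = 500, 200
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((n_dof, n_snapshots))    # placeholder response data

# POD basis from a thin SVD of the snapshots
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1               # keep 99.9% of the "energy"
Phi = U[:, :r]                                            # transformation matrix (n_dof x r)

# Reduce a structural system M q'' + K q = f via q ≈ Phi q_r
M = np.eye(n_dof)               # placeholder mass matrix
K = np.eye(n_dof)               # placeholder stiffness matrix
M_r = Phi.T @ M @ Phi
K_r = Phi.T @ K @ Phi
print(Phi.shape, M_r.shape)
```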

8.
The main goal of this study is to assess the potential of evolutionary algorithms to solve highly non-linear and multi-modal tomography problems (such as first-arrival traveltime tomography) and their ability to estimate reliable uncertainties. Classical tomography methods apply derivative-based optimization algorithms that require the user to determine the value of several parameters (such as the regularization level and the initial model) prior to the inversion, as these strongly affect the final inverted model. In addition, derivative-based methods only perform a local search dependent on the chosen starting model. Global optimization methods based on Markov chain Monte Carlo that thoroughly sample the model parameter space are theoretically insensitive to the initial model but turn out to be computationally expensive. Evolutionary algorithms are population-based global optimization methods and are thus intrinsically parallel, allowing them to make full use of available computing resources. We apply three evolutionary algorithms to solve a refraction traveltime tomography problem, namely differential evolution, competitive particle swarm optimization and the covariance matrix adaptation–evolution strategy. We apply these methodologies to a smoothed version of the Marmousi velocity model and compare their performances in terms of optimization and estimates of uncertainty. By performing scalability and statistical analysis over the results obtained with several runs, we assess the benefits and shortcomings of each algorithm.
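Of the three evolutionary algorithms compared, differential evolution is readily available in SciPy. The snippet below applies it to a stand-in misfit function with several local minima; the two-parameter objective is purely illustrative and is not a traveltime tomography problem.

```python
import numpy as np
from scipy.optimize import differential_evolution

def misfit(m):
    """Stand-in data misfit with several local minima."""
    x, y = m
    return (x - 1.5) ** 2 + (y + 0.5) ** 2 + 0.3 * np.sin(5 * x) * np.sin(5 * y)

bounds = [(-3.0, 3.0), (-3.0, 3.0)]     # search ranges for the two model parameters
result = differential_evolution(misfit, bounds, popsize=20, tol=1e-7,
                                seed=1, polish=True)
print(result.x, result.fun)
```

Because every member of the population can be evaluated independently, this kind of search parallelizes naturally, which is the practical appeal noted in the abstract.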

9.
In 1988, an important publication moved model calibration and forecasting beyond case studies and theoretical analysis. It reported on a somewhat idyllic graduate student modeling exercise in which many of the system properties were known; the primary forecasts of interest were heads in pumping wells after a river was modified. The model was calibrated using manual trial-and-error approaches, and a model's forecast quality was not related to how well it was calibrated. Here, we investigate whether tools widely available today obviate the shortcomings identified 30 years ago. A reconstructed version of the 1988 true model was tested using increasing parameter estimation sophistication. The parameter estimation demonstrated that the inverse problem was non-unique because only head data were available for calibration. When a flux observation was included, current parameter estimation approaches were able to overcome all calibration and forecast issues noted in 1988. The best forecasts were obtained from a highly parameterized model that used pilot points for hydraulic conductivity and was constrained with soft knowledge. Like the 1988 results, however, the best calibrated model did not produce the best forecasts due to parameter overfitting. Finally, a computationally frugal linear uncertainty analysis demonstrated that the single-zone model was oversimplified, with only half of the forecasts falling within the calculated uncertainty bounds. Uncertainty bounds from the highly parameterized models contained all six forecasts. The current results outperformed those of the 1988 effort, demonstrating the value of quantitative parameter estimation and uncertainty analysis methods.
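The computationally frugal linear uncertainty analysis mentioned above is typically a first-order propagation of the post-calibration parameter covariance onto each forecast. The sketch below shows that generic calculation with invented matrices; it is an illustration of the general technique, not the workflow used in the paper.

```python
import numpy as np

def linear_forecast_std(jac_obs, obs_weights, jac_forecast, sigma2=1.0):
    """First-order (linear) forecast uncertainty.

    jac_obs:      (n_obs x n_par) sensitivities of observations to parameters
    obs_weights:  (n_obs,) weights, e.g. 1/variance of the observation error
    jac_forecast: (n_forecast x n_par) sensitivities of forecasts to parameters
    Returns forecast standard deviations.
    """
    W = np.diag(obs_weights)
    param_cov = sigma2 * np.linalg.inv(jac_obs.T @ W @ jac_obs)   # post-calibration covariance
    forecast_cov = jac_forecast @ param_cov @ jac_forecast.T
    return np.sqrt(np.diag(forecast_cov))

rng = np.random.default_rng(2)
J = rng.standard_normal((30, 5))        # 30 observations, 5 parameters (illustrative)
Z = rng.standard_normal((6, 5))         # 6 forecasts (illustrative)
print(linear_forecast_std(J, np.ones(30), Z))
```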

10.
Haitjema HM 《Ground water》2006,44(1):102-105
The analytic element method, like the boundary integral equation method, gives rise to a system of equations with a fully populated coefficient matrix. For simple problems, these systems of equations are linear, and a direct solution method, such as Gauss elimination, offers the most efficient solution strategy. However, more realistic models of regional ground water flow involve nonlinear equations, particularly when including surface water and ground water interactions. The problem may still be solved by use of Gauss elimination, but it requires an iterative procedure with a reconstruction and decomposition of the coefficient matrix at every iteration step. The nonlinearities manifest themselves as changes in individual matrix coefficients and the elimination (or reintroduction) of several equations between one iteration and the next. The repeated matrix reconstruction and decomposition is computationally intense and may be avoided by use of the Sherman-Morrison formula, which can be used to modify the original solution in accordance with (small) changes in the coefficient matrix. The computational efficiency of the Sherman-Morrison formula decreases with increasing numbers of equations to be modified. In view of this, the Sherman-Morrison formula is only used to remove equations from the original set of equations, while treating all other nonlinearities by use of an iterative refinement procedure.
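For reference, the Sherman-Morrison formula updates a known inverse after a rank-one change to the coefficient matrix, which is what allows the solution to be adjusted without re-decomposing the full matrix. The demonstration below is generic and uses a random test matrix, not the analytic element system of the paper.

```python
import numpy as np

def sherman_morrison_update(A_inv, u, v):
    """Inverse of (A + u v^T), given A_inv, via the Sherman-Morrison formula."""
    u = u.reshape(-1, 1)
    v = v.reshape(-1, 1)
    denom = 1.0 + float(v.T @ A_inv @ u)
    return A_inv - (A_inv @ u @ v.T @ A_inv) / denom

rng = np.random.default_rng(3)
n = 6
A = rng.standard_normal((n, n)) + n * np.eye(n)     # well-conditioned test matrix
A_inv = np.linalg.inv(A)
u, v = rng.standard_normal(n), rng.standard_normal(n)

updated = sherman_morrison_update(A_inv, u, v)
direct = np.linalg.inv(A + np.outer(u, v))
print(np.allclose(updated, direct))                 # True: both inverses agree
```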

11.
Inverse modeling is widely used to assist with forecasting problems in the subsurface. However, full inverse modeling can be time-consuming, requiring iteration over a high-dimensional parameter space with computationally expensive forward models and complex spatial priors. In this paper, we investigate a prediction-focused approach (PFA) that aims at building a statistical relationship between data variables and forecast variables, avoiding the inversion of model parameters altogether. The statistical relationship is built by first applying the forward model related to the data variables and the forward model related to the prediction variables on a limited set of spatial prior model realizations, typically generated through geostatistical methods. The relationship observed between data and prediction is highly non-linear for many forecasting problems in the subsurface. In this paper we propose a Canonical Functional Component Analysis (CFCA) to map the data and forecast variables into a low-dimensional space where, if successful, the relationship is linear. CFCA consists of (1) functional principal component analysis (FPCA) for dimension reduction of time-series data and (2) canonical correlation analysis (CCA); the latter aiming to establish a linear relationship between data and forecast components. If such mapping is successful, then we illustrate with several cases that (1) simple regression techniques with a multi-Gaussian framework can be used to directly quantify uncertainty on the forecast without any model inversion and that (2) such uncertainty is a good approximation of uncertainty obtained from full posterior sampling with rejection sampling.
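The mapping described above can be imitated in a few lines with off-the-shelf tools: principal components reduce the data and forecast series, and canonical correlation analysis then seeks a linear relationship between the two reduced sets. The sketch below is a simplified stand-in for CFCA; it uses ordinary PCA rather than functional PCA and synthetic arrays in place of simulated prior realizations.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(4)
n_real = 200                                          # prior model realizations
data_series = rng.standard_normal((n_real, 120))      # simulated data variable (time series)
forecast_series = (data_series[:, :60] @ rng.standard_normal((60, 80))
                   + 0.1 * rng.standard_normal((n_real, 80)))   # correlated forecast variable

# (1) dimension reduction of both blocks
d_scores = PCA(n_components=5).fit_transform(data_series)
h_scores = PCA(n_components=5).fit_transform(forecast_series)

# (2) canonical correlation analysis to linearize the data-forecast relationship
cca = CCA(n_components=3)
d_c, h_c = cca.fit_transform(d_scores, h_scores)

# Correlation of each canonical pair: values near 1 indicate a usable linear relationship
for i in range(3):
    print(np.corrcoef(d_c[:, i], h_c[:, i])[0, 1])
```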

12.
Many methods developed for calibration and validation of physically based distributed hydrological models are time consuming and computationally intensive. Only a small set of input parameters can be optimized, and the optimization often results in unrealistic values. In this study we adopted a multi-variable and multi-site approach to calibration and validation of the Soil and Water Assessment Tool (SWAT) model for the Motueka catchment, making use of extensive field measurements. Not only were a number of hydrological processes (model components) in the catchment evaluated, but a number of subcatchments were also used in the calibration. The internal variables used were PET, annual water yield, daily streamflow, baseflow, and soil moisture. The study was conducted using an 11-year historical flow record (1990–2000); 1990–94 was used for calibration and 1995–2000 for validation. SWAT generally predicted the PET, water yield and daily streamflow well. The predicted daily streamflow matched the observed values, with a Nash–Sutcliffe coefficient of 0.78 during calibration and 0.72 during validation. However, values for subcatchments ranged from 0.31 to 0.67 during calibration, and 0.36 to 0.52 during validation. The predicted soil moisture remained wetter than the measurements. About 50% of the extra soil water storage predicted by the model can be ascribed to overprediction of precipitation; the remaining 50% discrepancy was likely a result of poor representation of soil properties. Hydrological compensations in the modelling results are derived from water balances in the various pathways and storages (evaporation, streamflow, surface runoff, soil moisture and groundwater) and the contributions to streamflow from different geographic areas (hill slopes, variable source areas, sub-basins, and subcatchments). The use of an integrated multi-variable and multi-site method improved the model calibration and validation and highlighted the areas and hydrological processes requiring greater calibration effort. Copyright © 2005 John Wiley & Sons, Ltd.
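The Nash–Sutcliffe coefficient quoted above is a standard fit criterion; its usual definition is shown below for reference. The streamflow arrays are placeholders, not data from the Motueka catchment.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 is no better than the observed mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

obs = [3.1, 2.8, 10.4, 7.2, 4.0, 3.3]      # observed daily streamflow (placeholder values)
sim = [2.9, 3.0, 9.1, 7.9, 4.4, 3.1]       # simulated daily streamflow (placeholder values)
print(round(nash_sutcliffe(obs, sim), 2))
```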

13.
冯波  王华忠  冯伟 《地球物理学报》2019,62(4):1471-1479
Kinematic attributes of seismic waves (traveltimes, slopes, etc.) are commonly used for macro velocity model building. For traveltime inversion methods, a fundamental problem is traveltime picking or the estimation of reflection traveltime residuals. For image-domain inversion methods, reflection time shifts can be approximated from residual depth shifts in common-image gathers. In the data domain, observed reflection data are band-limited signals, so the arrival time of a reflection is difficult to determine precisely unless the onset time of the wavelet can be identified accurately. On the other hand, without prior information about the model it is difficult to measure precisely the time shift between observed and simulated events originating from the same subsurface reflector. To address the problems of traveltime definition and time-shift measurement, we start from a sparse representation of prestack seismic data and use characteristic wavefield decomposition to extract reflection wavelets and to estimate the incident and emergent ray parameters of local plane waves. Furthermore, to achieve automatic and stable traveltime picking, the arrival time of a reflection is defined as the time of the envelope maximum of the seismic phase, which enables automatic generation of the stereotomographic data. In theory, a traveltime defined by the envelope maximum is larger than the true reflection traveltime unless the observed signal has infinite bandwidth (i.e., a delta pulse). However, because traveltime inversion aims to estimate the intermediate- to large-scale background velocity structure, the velocity errors caused by this traveltime error remain within an acceptable range. Using localized propagation operators and a characteristic-wave focusing imaging condition, the characteristic-wave data are projected directly onto virtual subsurface reflection points, yielding a new method for estimating reflection time shifts that avoids cycle skipping and cross-layer mis-association while removing the influence of amplitudes on the time-shift measurement. Finally, building on this work, we propose a new fully automatic, characteristic-wavefield-decomposition-based reflection traveltime inversion method (CWRTI). By linearizing the gradient of the objective functional and extracting its low-wavenumber component with total-variation regularization, the background velocity is estimated iteratively. In theory, the method requires neither long-offset data nor low-frequency information, depends only weakly on the initial model, and is computationally efficient; it can provide a reliable starting velocity model for subsequent full-waveform inversion. Tests on synthetic and field data demonstrate the effectiveness of the proposed method.
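The envelope-extremum definition of arrival time used above can be illustrated with a Hilbert transform: the pick is taken where the envelope of the band-limited wavelet peaks, which tracks the wavelet's energy maximum rather than its first break. The sketch below is our own illustration on a synthetic Ricker wavelet and is not the CWRTI code.

```python
import numpy as np
from scipy.signal import hilbert

dt = 0.001                                    # sample interval, s
t = np.arange(0, 1.0, dt)
t0, f0 = 0.40, 25.0                           # wavelet center time and dominant frequency

# Synthetic band-limited reflection: a Ricker wavelet centered at t0, plus mild noise
tau = t - t0
trace = (1 - 2 * (np.pi * f0 * tau) ** 2) * np.exp(-(np.pi * f0 * tau) ** 2)
trace += 0.02 * np.random.default_rng(5).standard_normal(t.size)

# Envelope via the analytic signal; the pick is the envelope maximum
envelope = np.abs(hilbert(trace))
picked = t[np.argmax(envelope)]
print(f"wavelet center = {t0:.3f} s, envelope pick = {picked:.3f} s")
```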

14.
With the popularity of complex hydrologic models, the time taken to run these models is increasing substantially. Comparing and evaluating the efficacy of different optimization algorithms for calibrating computationally intensive hydrologic models is becoming a nontrivial issue. In this study, five global optimization algorithms (genetic algorithms, shuffled complex evolution, particle swarm optimization, differential evolution, and artificial immune system) were tested for automatic parameter calibration of a complex hydrologic model, the Soil and Water Assessment Tool (SWAT), in four watersheds. The results show that genetic algorithms (GA) outperform the other four algorithms given model evaluation numbers larger than 2000, while particle swarm optimization (PSO) can obtain better parameter solutions than the other algorithms given a smaller number of model runs (fewer than 2000). Given limited computational time, the PSO algorithm is preferred, while GA should be chosen given plenty of computational resources. When applying GA and PSO for parameter optimization of SWAT, a small population size should be chosen. Copyright © 2008 John Wiley & Sons, Ltd.
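For readers unfamiliar with the algorithms compared above, a bare-bones particle swarm optimizer fits in a few lines. The version below uses standard textbook update rules and a toy two-parameter objective in place of a SWAT run; all settings are illustrative.

```python
import numpy as np

def pso(objective, bounds, n_particles=20, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (global-best topology)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))            # positions
    v = np.zeros_like(x)                                   # velocities
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()                 # global best position
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Toy stand-in for a model calibration objective (two parameters)
best, best_val = pso(lambda p: (p[0] - 2) ** 2 + (p[1] + 1) ** 2,
                     bounds=[(-5, 5), (-5, 5)])
print(best, best_val)
```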

15.
We develop methodologies to enable applications of reliability-based design optimization (RBDO) to environmental policy setting problems. RBDO considers uncertainty as random variables and parameters in an optimization framework with probabilistic constraints. Current RBDO methods do not efficiently handle three challenges that arise in environmental decision-making problems: (1) non-normally distributed random parameters, (2) discrete random parameters, and (3) joint reliability constraints (e.g., meeting several constraints simultaneously with a single reliability). We propose a modified sequential quadratic programming algorithm to address these challenges. An active set strategy is combined with a reliability contour formulation to solve problems with multiple non-normal random parameters. The reliability contour formulation can also handle discrete random parameters by converting them to equivalent continuous ones. Joint reliability constraints are estimated by their theoretical upper bounds using reliability indexes and angles of normal vectors between active constraints. To demonstrate the methods, we consider a simplified airshed example where CO and NOx standards are violated and are brought into compliance by reducing the speed limits of two nearby highways. This analytical example is based on the CALINE4 model. Results show the potential of this approach to handle complex large-scale environmental regulation problems.
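As a minimal illustration of the reliability indexes referred to above: for a linear limit-state function of jointly normal random parameters, the failure probability follows directly from the reliability index. The example below is a generic textbook-style calculation, unrelated to the CALINE4 airshed case and much simpler than the non-normal, discrete and joint-constraint situations the paper addresses.

```python
import numpy as np
from scipy.stats import norm

def reliability_index_linear(a, b, mean, cov):
    """Reliability index for the linear limit state g(X) = b - a^T X, failure when g <= 0.

    X ~ Normal(mean, cov). Returns (beta, failure probability).
    """
    a = np.asarray(a, dtype=float)
    mu_g = b - a @ mean
    sigma_g = np.sqrt(a @ cov @ a)
    beta = mu_g / sigma_g
    return beta, norm.cdf(-beta)

# Example: a standard b is exceeded when the weighted sum of two uncertain sources is too high
mean = np.array([4.0, 3.0])
cov = np.array([[0.25, 0.05],
                [0.05, 0.16]])
beta, pf = reliability_index_linear(a=[1.0, 1.0], b=9.0, mean=mean, cov=cov)
print(beta, pf)
```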

16.
Mediterranean catchments are characterized by strong nonlinearities in their hydrological behaviour. Properly simulating those nonlinearities remains a great challenge and, at the same time, an important step towards improving our knowledge of their hydrological behaviour. The main aim of this work is to identify modelling approaches able to reproduce the observed nonlinear hydrological behaviour in a small Mediterranean catchment, Can Vila (Vallcebre, NE Spain). To this end, three hydrological models were considered: two lumped models of increasing complexity, called LU3 and LU4, and a distributed model called TETIS. The structures of these different models were used as hypotheses which could explain and reproduce the observed nonlinear behaviour at the outlet. Four analyses were carried out: (i) goodness-of-fit criteria analysis, (ii) residual errors analysis, (iii) sensitivity analysis and (iv) multicriteria analysis based on the concept of Pareto optimality. These analyses showed the higher capability and robustness of the distributed model to reproduce the observed complex hydrological behaviour in this catchment. Copyright © 2015 John Wiley & Sons, Ltd.

17.
A Stable and Efficient Numerical Algorithm for Unconfined Aquifer Analysis
The nonlinearity of the equations governing flow in unconfined aquifers poses challenges for numerical models, particularly in field-scale applications. Existing methods are often unstable, do not converge, or require extremely fine grids and small time steps. Standard modeling procedures such as automated model calibration and Monte Carlo uncertainty analysis typically require thousands of model runs. Stable and efficient model performance is essential to these analyses. We propose a new method that offers improvements in stability and efficiency and is relatively tolerant of coarse grids. It applies a strategy similar to that in the MODFLOW code to the solution of the Richards equation with a grid-dependent pressure/saturation relationship. The method imposes a contrast between horizontal and vertical permeability in gridblocks containing the water table, does not require "dry" cells to convert to inactive cells, and allows recharge to flow through relatively dry cells to the water table. We establish the accuracy of the method by comparison to an analytical solution for radial flow to a well in an unconfined aquifer with delayed yield. Using a suite of test problems, we demonstrate the efficiencies gained in speed and accuracy over two-phase simulations, and improved stability when compared to MODFLOW. The advantages for applications to transient unconfined aquifer analysis are clearly demonstrated by our examples. We also demonstrate applicability to mixed vadose zone/saturated zone applications, including transport, and find that the method shows great promise for these types of problem as well.

18.
Dynamic substructuring refers to physical testing with computational models in the loop. This paper presents a new strategy for such testing. The key feature of this strategy is that it decouples the substructuring controller from the physical subsystem. Unlike conventional approaches, it does not explicitly include a tracking controller. Consequently, the design and implementation of the substructuring controls are greatly simplified. This paper motivates the strategy and discusses the main concept along with details of the substructuring control design. The focus is on configurations that use shake tables and active mass drivers. An extensive experimental assessment of the new strategy is presented in a companion paper, where the influence of various factors such as virtual subsystem dynamics, control gains, and nonlinearities is investigated, and it is shown that robustly stable and accurate substructuring is achieved.

19.
Modeling the spread of subsurface contaminants requires coupling a groundwater flow model with a contaminant transport model. Such coupling may provide accurate estimates of future subsurface hydrologic states if essential flow and contaminant data are assimilated in the model. Assuming perfect flow, an ensemble Kalman filter (EnKF) can be used for direct data assimilation into the transport model. This is, however, a crude assumption as flow models can be subject to many sources of uncertainty. If the flow is not accurately simulated, contaminant predictions will likely be inaccurate even after successive Kalman updates of the contaminant model with the data. The problem is better handled when both flow and contaminant states are concurrently estimated using the traditional joint state augmentation approach. In this paper, we introduce a dual estimation strategy for data assimilation into a one-way coupled system by treating the flow and the contaminant models separately while intertwining a pair of distinct EnKFs, one for each model. The presented strategy only deals with the estimation of state variables but it can also be used for state and parameter estimation problems. This EnKF-based dual state-state estimation procedure presents a number of novel features: (i) it allows for simultaneous estimation of both flow and contaminant states in parallel; (ii) it provides a time consistent sequential updating scheme between the two models (first flow, then transport); (iii) it simplifies the implementation of the filtering system; and (iv) it yields more stable and accurate solutions than does the standard joint approach. We conducted synthetic numerical experiments based on various time stepping and observation strategies to evaluate the dual EnKF approach and compare its performance with the joint state augmentation approach. Experimental results show that on average, the dual strategy could reduce the estimation error of the coupled states by 15% compared with the joint approach. Furthermore, the dual estimation is proven to be very effective computationally, recovering accurate estimates at a reasonable cost.
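The core of each EnKF in the dual scheme is the standard ensemble analysis step. A compact, stochastic (perturbed-observation) version is sketched below with generic arrays; it is independent of the coupled flow-transport model and assumes a linear observation operator.

```python
import numpy as np

def enkf_update(ensemble, observations, H, obs_std, seed=0):
    """Stochastic EnKF analysis step.

    ensemble:     (n_state x n_members) forecast ensemble
    observations: (n_obs,) measured values
    H:            (n_obs x n_state) linear observation operator
    obs_std:      observation error standard deviation (scalar)
    """
    rng = np.random.default_rng(seed)
    n_state, n_mem = ensemble.shape
    A = ensemble - ensemble.mean(axis=1, keepdims=True)    # ensemble anomalies
    P = A @ A.T / (n_mem - 1)                              # sample forecast covariance
    R = (obs_std ** 2) * np.eye(observations.size)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)           # Kalman gain
    perturbed = observations[:, None] + obs_std * rng.standard_normal((observations.size, n_mem))
    return ensemble + K @ (perturbed - H @ ensemble)

rng = np.random.default_rng(6)
ens = rng.standard_normal((50, 30))        # 50 state cells, 30 members (illustrative)
H = np.zeros((2, 50)); H[0, 10] = H[1, 40] = 1.0
obs = np.array([0.8, -0.3])
print(enkf_update(ens, obs, H, obs_std=0.1).shape)
```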

20.
Hydrologic risk analysis for dam safety relies on a series of probabilistic analyses of rainfall-runoff and flow routing models, and their associated inputs. This is a complex problem in that the probability distributions of multiple independent and derived random variables need to be estimated in order to evaluate the probability of dam overtopping. Typically, parametric density estimation methods have been applied in this setting, and exhaustive Monte Carlo simulation (MCS) of the models is used to derive some of the distributions. Often, the distributions used to model some of the random variables are inappropriate relative to the expected behaviour of these variables, and as a result, simulations of the system can lead to unrealistic values of extreme rainfall or water surface levels and hence of the probability of dam overtopping. In this paper, three major innovations are introduced to address this situation. The first is the use of nonparametric probability density estimation methods for selected variables, the second is the use of Latin hypercube sampling to improve the efficiency of MCS driven by the multiple random variables, and the third is the use of bootstrap resampling to determine the initial water surface level. An application to the Soyang Dam in South Korea illustrates how the traditional parametric approach can lead to potentially unrealistic estimates of dam safety, while the proposed approach provides rather reasonable estimates and an assessment of their sensitivity to key parameters.
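Latin hypercube sampling of the kind advocated above is available in SciPy's quasi-Monte Carlo module. The snippet below draws a stratified sample of two hydrologic inputs; the variable names, distributions and ranges are invented for illustration and are not those of the Soyang Dam study.

```python
import numpy as np
from scipy.stats import qmc, gumbel_r

sampler = qmc.LatinHypercube(d=2, seed=7)
unit_sample = sampler.random(n=1000)            # stratified points on the unit square

# Map column 0 to a Gumbel-distributed 24-h rainfall depth (mm) via its inverse CDF,
# and column 1 to a uniform initial reservoir level (m) between 190 and 198.
rainfall = gumbel_r.ppf(unit_sample[:, 0], loc=120.0, scale=35.0)
init_level = qmc.scale(unit_sample[:, [1]], l_bounds=[190.0], u_bounds=[198.0]).ravel()

print(rainfall.mean(), init_level.min(), init_level.max())
```

Because every marginal stratum is sampled exactly once, far fewer model runs are needed than with plain Monte Carlo to cover the same input ranges.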
