Similar Literature (20 results)
1.
The main goal of this study is to assess the potential of evolutionary algorithms to solve highly non-linear and multi-modal tomography problems (such as first-arrival traveltime tomography) and their ability to estimate reliable uncertainties. Classical tomography methods apply derivative-based optimization algorithms that require the user to set several parameters (such as the regularization level and the initial model) prior to the inversion, as these strongly affect the final inverted model. In addition, derivative-based methods only perform a local search dependent on the chosen starting model. Global optimization methods based on Markov chain Monte Carlo that thoroughly sample the model parameter space are theoretically insensitive to the initial model but turn out to be computationally expensive. Evolutionary algorithms are population-based global optimization methods and are thus intrinsically parallel, which lets them make full use of available computing resources. We apply three evolutionary algorithms to solve a refraction traveltime tomography problem, namely differential evolution, competitive particle swarm optimization and the covariance matrix adaptation–evolution strategy. We apply these methodologies to a smoothed version of the Marmousi velocity model and compare their performances in terms of optimization and estimates of uncertainty. By performing scalability and statistical analyses over the results obtained across several runs, we assess the benefits and shortcomings of each algorithm.
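As a rough illustration of the population-based search described in this entry, the following minimal differential evolution (DE/rand/1/bin) sketch shows the mutation-crossover-selection loop; the Rastrigin-style misfit and the parameter bounds are placeholders, not the paper's traveltime tomography setup.

import numpy as np

def misfit(m):
    # Placeholder multi-modal objective (Rastrigin); a real application
    # would return a traveltime residual norm for velocity model m.
    return 10 * m.size + np.sum(m**2 - 10 * np.cos(2 * np.pi * m))

def differential_evolution(misfit, bounds, pop_size=40, F=0.8, CR=0.9,
                           n_gen=200, rng=np.random.default_rng(0)):
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    cost = np.array([misfit(p) for p in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # mutation
            cross = rng.random(dim) < CR                # binomial crossover
            cross[rng.integers(dim)] = True             # keep >= 1 mutant gene
            trial = np.where(cross, mutant, pop[i])
            f = misfit(trial)
            if f < cost[i]:                             # greedy selection
                pop[i], cost[i] = trial, f
    best = np.argmin(cost)
    return pop[best], cost[best]

bounds = np.array([[-5.0, 5.0]] * 4)
m_best, f_best = differential_evolution(misfit, bounds)
print(m_best, f_best)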

2.

The use of a physically-based hydrological model for streamflow forecasting is limited by the complexity of the model structure and the data requirements for model calibration. The calibration of such models is a difficult task, and running a complex model for a single simulation can take up to several days, depending on the simulation period and model complexity. The information contained in a time series is not uniformly distributed; therefore, if we can find the critical events that are important for identifying the model parameters, we can speed up the calibration process. The aim of this study is to test the applicability of the Identification of Critical Events (ICE) algorithm for physically-based models and to test whether ICE-based calibration depends on the optimization algorithm used. The ICE algorithm, which uses a data depth function, was used here to identify the critical events in a time series: a low depth value flags an unusual combination in multivariate data, and this concept was used to identify the critical events on which the model was then calibrated (a minimal sketch of the depth idea follows this entry). The concept is demonstrated by applying the physically-based hydrological model WaSiM-ETH to the Rems catchment, Germany. The model was calibrated on the whole available data set and on the critical events selected by the ICE algorithm. In both calibration cases, three different optimization algorithms, shuffled complex evolution (SCE-UA), parameter estimation (PEST) and robust parameter estimation (ROPE), were used. It was found that, for all the optimization algorithms, calibration using only the critical events gave very similar performance to calibration on the whole time series. Hence, ICE-based calibration is suitable for physically-based models and does not depend much on the kind of optimization algorithm. These findings may be useful for calibrating physically-based models on much less data.

Editor D. Koutsoyiannis; Associate editor A. Montanari

Citation Singh, S.K., Liang, J.Y., and Bárdossy, A., 2012. Improving calibration strategy of physically-based model WaSiM-ETH using critical events. Hydrological Sciences Journal, 57 (8), 1487–1505.
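A minimal sketch of the data-depth idea behind the ICE algorithm above, assuming a Mahalanobis-type depth function (the paper's depth function may differ, e.g. halfspace depth); the two event features and sample sizes are illustrative stand-ins for quantities such as rainfall and runoff per time window.

import numpy as np

def mahalanobis_depth(X):
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d2 = np.einsum('ij,jk,ik->i', X - mu, cov_inv, X - mu)
    return 1.0 / (1.0 + d2)          # small depth = unusual combination

rng = np.random.default_rng(1)
events = rng.multivariate_normal([10.0, 3.0], [[4.0, 1.5], [1.5, 1.0]], 500)
depth = mahalanobis_depth(events)
critical = np.argsort(depth)[:25]    # 25 lowest-depth (most unusual) events
print(events[critical][:5])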

3.
The use of optimized arrays generated with the ‘Compare R’ method for cross-borehole resistivity measurements is examined in this paper. We compare the performance of two array optimization algorithms, one that maximizes the model resolution and another that minimizes the point-spread value. Although both algorithms give similar results, the model resolution maximization algorithm is several times faster. A study of the point-spread function plots for a cross-borehole survey shows that the model resolution within the central zone surrounded by the borehole electrodes is much higher than near the bottom ends of the boreholes. Tests with synthetic and experimental data show that the optimized arrays generated by the ‘Compare R’ method have significantly better resolution than a ‘standard’ measurement sequence used in previous surveys. The resolution of the optimized arrays is lower if arrays with both current (or both potential) electrodes in the same borehole are excluded, but they are still better than the ‘standard’ arrays.
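A sketch of the model-resolution criterion that ‘Compare R’-style selection optimizes: the diagonal of R = (G'G + e^2 I)^-1 G'G scores how well each model cell is resolved, and candidate arrays are added greedily to maximize its sum. The sensitivity matrix, damping value and greedy loop here are illustrative assumptions, not the authors' implementation.

import numpy as np

def resolution_diag(G, damping=0.01):
    # R = (G'G + e^2 I)^-1 G'G ; diag(R) near 1 => well-resolved cell
    GtG = G.T @ G
    R = np.linalg.solve(GtG + damping**2 * np.eye(GtG.shape[0]), GtG)
    return np.diag(R)

rng = np.random.default_rng(2)
G_all = rng.normal(size=(200, 50))        # sensitivities of 200 candidate arrays
chosen = []
for _ in range(40):                        # greedily grow the measurement set
    best, best_gain = None, -np.inf
    for i in range(G_all.shape[0]):
        if i in chosen:
            continue
        gain = resolution_diag(G_all[chosen + [i]]).sum()
        if gain > best_gain:
            best, best_gain = i, gain
    chosen.append(best)
print(f"selected {len(chosen)} arrays, total resolution {best_gain:.2f}")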

4.
Stochastic optimization methods, such as genetic algorithms, search for the global minimum of the misfit function within a given parameter range and do not require any calculation of the gradients of the misfit surface. More importantly, these methods collect a series of models and associated likelihoods that can be used to estimate the posterior probability distribution. However, because genetic algorithms are not a Markov chain Monte Carlo method, directly using the genetic-algorithm-sampled models and their associated likelihoods produces a biased estimate of the posterior probability distribution. In contrast, Markov chain Monte Carlo methods, such as the Metropolis–Hastings algorithm and the Gibbs sampler, provide accurate posterior probability distributions but at considerable computational cost. In this paper, we use a hybrid method that combines the speed of a genetic algorithm in finding an optimal solution with the accuracy of a Gibbs sampler in estimating the posterior probability distributions. First, we test this method on an analytical function and show that the genetic algorithm alone cannot recover the true probability distributions and tends to underestimate the true uncertainties. Conversely, combining the genetic algorithm optimization with a Gibbs sampler step enables us to recover the true posterior probability distributions. Then, we demonstrate the applicability of this hybrid method by performing one-dimensional elastic full-waveform inversions on synthetic and field data. We also discuss how an appropriate genetic algorithm implementation is essential to attenuate the “genetic drift” effect and to maximize the exploration of the model space. In fact, a wide and efficient exploration of the model space is important not only to avoid entrapment in local minima during the genetic algorithm optimization but also to ensure a reliable estimate of the posterior probability distributions in the subsequent Gibbs sampler step.
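A toy sketch of the hybrid strategy in this entry: a global optimizer first locates the high-probability region, then a coordinate-wise Metropolis-within-Gibbs sampler characterizes the posterior around it. scipy's differential_evolution stands in for the genetic algorithm, and the 2-D quadratic misfit is a placeholder for a full-waveform objective; both are assumptions, not the paper's code.

import numpy as np
from scipy.optimize import differential_evolution

def misfit(m):
    return 0.5 * ((m[0] - 1.0)**2 / 0.04 + (m[1] + 2.0)**2 / 0.25)

bounds = [(-5, 5), (-5, 5)]
m0 = differential_evolution(misfit, bounds, seed=0).x   # global stage

rng = np.random.default_rng(0)
m, samples = m0.copy(), []
log_post = -misfit(m)
for _ in range(20000):
    for j in range(len(m)):                 # update one coordinate at a time
        prop = m.copy()
        prop[j] += rng.normal(scale=0.3)
        lp = -misfit(prop)
        if np.log(rng.random()) < lp - log_post:   # accept/reject
            m, log_post = prop, lp
    samples.append(m.copy())

samples = np.array(samples[2000:])          # discard burn-in
print(samples.mean(axis=0), samples.std(axis=0))   # posterior mean, std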

5.
6.
Single and multiple surrogate models were compared for single-objective pumping optimization problems in a hypothetical and a real-world coastal aquifer. Different instances of radial basis function and kriging surrogates were used to reduce the computational cost of direct optimization with variable-density and salt transport models. An adaptive surrogate update scheme was embedded in the operations of an evolutionary algorithm to efficiently control the feasibility of optimal solutions in pumping optimization problems with multiple constraints. For a set of independent optimization runs, the results showed that multiple surrogates, whether by selecting the best or by using ensembles, did not necessarily outperform the single-surrogate approach. Nevertheless, the ensemble with optimal weights produced slightly better results than selecting only the best surrogate or applying simple averaging. In all cases, using single or multiple surrogate models reduced the computational cost by up to 90% relative to direct optimization.
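A sketch of a weighted surrogate ensemble in the spirit of this entry, using scipy's RBFInterpolator with several kernels; the "expensive model", the inverse-leave-one-out-error weighting rule, and the sample sizes are illustrative assumptions, not the study's variable-density flow setup.

import numpy as np
from scipy.interpolate import RBFInterpolator

def expensive_model(x):        # stand-in for a variable-density flow model
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1]**2

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(60, 2))
y = expensive_model(X)

kernels = ['thin_plate_spline', 'cubic', 'multiquadric']
surrogates, errs = [], []
for k in kernels:              # leave-one-out error per surrogate type
    e = np.array([
        (RBFInterpolator(np.delete(X, i, 0), np.delete(y, i),
                         kernel=k, epsilon=1.0)(X[i:i+1])[0] - y[i])**2
        for i in range(len(X))]).mean()
    surrogates.append(RBFInterpolator(X, y, kernel=k, epsilon=1.0))
    errs.append(e)

w = 1 / np.array(errs); w /= w.sum()        # inverse-error ensemble weights
X_new = rng.uniform(-1, 1, size=(5, 2))
pred = sum(wi * s(X_new) for wi, s in zip(w, surrogates))
print(pred - expensive_model(X_new))        # ensemble prediction error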

7.
With the growing popularity of complex hydrologic models, the time taken to run them is increasing substantially. Comparing and evaluating the efficacy of different optimization algorithms for calibrating computationally intensive hydrologic models is therefore becoming a nontrivial issue. In this study, five global optimization algorithms (genetic algorithms, shuffled complex evolution, particle swarm optimization, differential evolution, and artificial immune system) were tested for automatic parameter calibration of a complex hydrologic model, the Soil and Water Assessment Tool (SWAT), in four watersheds. The results show that genetic algorithms (GA) outperform the other four algorithms when the number of model evaluations exceeds 2000, while particle swarm optimization (PSO) obtains better parameter solutions than the other algorithms for smaller numbers of model runs (fewer than 2000). Given limited computational time, PSO is preferred, while GA should be chosen when computational resources are plentiful. When applying GA and PSO to parameter optimization of SWAT, a small population size should be chosen. Copyright © 2008 John Wiley & Sons, Ltd.
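A minimal real-coded genetic algorithm sketch (tournament selection, blend crossover, Gaussian mutation, elitism) of the kind compared in this entry; the quadratic objective stands in for an expensive SWAT run and all tuning constants are illustrative assumptions.

import numpy as np

def objective(m):               # placeholder for, e.g., 1 - NSE of a SWAT run
    return np.sum((m - 0.3)**2)

rng = np.random.default_rng(4)
dim, pop_size, n_gen = 6, 30, 100
pop = rng.uniform(0, 1, (pop_size, dim))
for _ in range(n_gen):
    fit = np.array([objective(p) for p in pop])
    new = [pop[np.argmin(fit)].copy()]               # elitism
    while len(new) < pop_size:
        i, j = rng.integers(pop_size, size=2)        # tournament selection
        p1 = pop[i] if fit[i] < fit[j] else pop[j]
        i, j = rng.integers(pop_size, size=2)
        p2 = pop[i] if fit[i] < fit[j] else pop[j]
        alpha = rng.random(dim)                      # blend crossover
        child = alpha * p1 + (1 - alpha) * p2
        child += rng.normal(0, 0.02, dim) * (rng.random(dim) < 0.2)  # mutation
        new.append(np.clip(child, 0, 1))
    pop = np.array(new)
print(pop[np.argmin([objective(p) for p in pop])])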

8.
Advances in models of lake eutrophication response and optimal watershed regulation decision-making
Lake eutrophication is a long-term challenge for the global water environment, and models of eutrophication response and optimal watershed decision-making are key to designing economical and effective control schemes. However, existing model reviews have focused on single aspects such as model development, case applications, sensitivity analysis, and uncertainty analysis, and lack a synthesis aimed at the latest lake-management challenges such as non-linear responses and long-term ecosystem evolution. This paper reviews data-driven statistical models, causally driven mechanistic models, and decision-oriented optimization models. Statistical models include classical statistical, Bayesian and machine-learning models, commonly used to establish response relationships, analyze time-series features, and support forecasting and early warning. Mechanistic models cover watershed hydrology and pollutant transport as well as lake hydrology, hydrodynamics, water quality and aquatic ecology, and are used to simulate processes at different spatial and temporal scales; for complex mechanistic models, sensitivity analysis, parameter calibration and uncertainty quantification carry a high computational cost. Optimization models combined with mechanistic models form "simulation-optimization" frameworks, from which stochastic and interval optimization methods have been derived under uncertainty; the computational bottleneck can be partly relieved through parallel computing and simplified or surrogate models. The paper identifies the challenges facing lake management, including: (1) how to quantitatively characterize the non-linear superposition of external inputs and the spatial heterogeneity of nitrogen, phosphorus and algal dynamics in lakes; (2) how to strengthen the linkage, and improve the precision, between optimal regulation decisions and water-quality targets; and (3) how to reveal the long-term trajectories and drivers of lake ecosystem change. Finally, research prospects are proposed for these challenges, mainly including: (1) fusing multi-source data with machine-learning algorithms to improve short-term water-quality forecasts for lakes; (2) upscaling or downscaling coupling of biomass-based mechanistic models with behaviour-driven individual-based models to represent material interactions across multiple scales; (3) direct coupling or data assimilation between machine-learning algorithms and mechanistic models to reduce simulation error; and (4) integrating multimedia simulation models of differing spatial and temporal scales to achieve precise and dynamic optimal regulation.

9.
Coupling basin- and site-scale inverse models of the Española aquifer
Large-scale models are frequently used to estimate fluxes to small-scale models. The uncertainty associated with these flux estimates, however, is rarely addressed. We present a case study from the Española Basin, northern New Mexico, where we use a basin-scale model coupled with a high-resolution, nested site-scale model. Both models are three-dimensional and are analyzed by the codes FEHM and PEST. Using constrained nonlinear optimization, we examine the effect of parameter uncertainty in the basin-scale model on the nonlinear confidence limits of predicted fluxes to the site-scale model. We find that some of the fluxes are very well constrained, while for others there is fairly large uncertainty. Site-scale transport simulation results, however, are relatively insensitive to the estimated uncertainty in the fluxes. We also compare parameter estimates obtained by the basin- and site-scale inverse models. Differences in the model grid resolution (scale of parameter estimation) result in differing delineation of hydrostratigraphic units, so the two models produce different estimates for some units. The effect is similar to the observed scale effect in medium properties owing to differences in tested volume. More important, estimation uncertainty of model parameters is quite different at the two scales. Overall, the basin inverse model resulted in significantly lower estimates of uncertainty, because of the larger calibration dataset available. This suggests that the basin-scale model contributes not only important boundary condition information but also improved parameter identification for some units. Our results demonstrate that caution is warranted when applying parameter estimates inferred from a large-scale model to small-scale simulations, and vice versa.

10.
Stochastic ground motion models produce synthetic time-histories by modulating a white noise sequence through functions that address the spectral and temporal properties of the excitation. The resultant ground motions can then be used in simulation-based seismic risk assessment applications. This is established by relating the parameters of the aforementioned functions to earthquake and site characteristics through predictive relationships. An important concern related to the use of these models is that, with current approaches to selecting these predictive relationships, compatibility with the seismic hazard is not guaranteed. This work offers a computationally efficient framework for modifying stochastic ground motion models to match target intensity measures (IMs) for a specific site and structure of interest. This is set as an optimization problem with a dual objective. The first objective minimizes the discrepancy between the target IMs and the predictions established through the stochastic ground motion model for a chosen earthquake scenario. The second objective constrains the deviation from the model characteristics suggested by existing predictive relationships, guaranteeing that the resultant ground motions not only match the target IMs but are also compatible with regional trends. A framework leveraging kriging surrogate modeling is formulated for performing the resulting multi-objective optimization, and different computational aspects of this optimization are discussed in detail. The illustrative implementation shows that the proposed framework can provide ground motions highly compatible with the target IMs at only a small deviation from existing predictive relationships, and discusses approaches for selecting a final compromise between these two competing objectives.
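A toy sketch of the dual-objective idea in this entry: minimize the IM mismatch while penalizing deviation of the ground-motion-model parameters theta from the values suggested by predictive relationships (theta_pred). The stand-in forward model, the weighted-sum scalarization and the scipy minimizer are illustrative assumptions; the paper itself uses kriging surrogates within a genuine multi-objective search.

import numpy as np
from scipy.optimize import minimize

theta_pred = np.array([2.0, 0.5])            # from predictive relationships
target_IM = np.array([0.8, 1.2, 0.6])        # target intensity measures

def predicted_IM(theta):                     # stand-in for the stochastic model
    return np.array([theta[0] * 0.3 + theta[1],
                     theta[0] * 0.5,
                     theta[1] * 1.1])

def scalarized(theta, w):
    f1 = np.sum((predicted_IM(theta) - target_IM)**2)   # IM mismatch
    f2 = np.sum((theta - theta_pred)**2)                # model deviation
    return w * f1 + (1 - w) * f2

# Trace a simple trade-off curve by sweeping the weight
for w in (0.2, 0.5, 0.8):
    res = minimize(scalarized, theta_pred, args=(w,))
    print(w, res.x, scalarized(res.x, w))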

11.
A hybrid algorithm, combining Monte Carlo optimization with simultaneous iterative reconstruction technique (SIRT) tomography, is used to invert first-arrival traveltimes from seismic data for building a velocity model. Stochastic algorithms may localize a point around the global minimum of the misfit function but are not suitable for identifying the precise solution. On the other hand, a tomographic model reconstruction based on a local linearization will only succeed if an initial model already close to the best solution is available. To overcome these problems, in the method proposed here, a first model obtained using a classical Monte Carlo-based optimization is used as a good initial guess for starting the local search with the SIRT tomographic reconstruction. In the forward problem, the first-break times are calculated by solving the eikonal equation through a velocity model with a fast finite-difference method instead of the traditionally slow ray-tracing technique. In addition, for the SIRT tomography the seismic energy from sources to receivers is propagated by applying a fast Fresnel-volume approach which, when combined with turning rays, can handle models with both positive and negative velocity gradients. The performance of this two-step optimization scheme has been tested on synthetic and field data for building a geologically plausible velocity model. This is an efficient and fast search mechanism, which permits the insertion of geophysical, geological and geodynamic a priori constraints into the grid model, while ray tracing is completely avoided. Extending the technique to 3D data, and to the solution of 'static correction' problems, is straightforward.
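A minimal SIRT sketch for the local-search stage of this entry, applied to a linearized traveltime system G s = t, where G holds ray (or Fresnel-volume) path lengths and s is slowness; the random matrix, data and relaxation factor are toy values, not the paper's setup.

import numpy as np

def sirt(G, t, n_iter=200, relax=1.0):
    row_sums = np.clip(G.sum(axis=1), 1e-12, None)   # total path length per ray
    col_counts = (G > 0).sum(axis=0).clip(min=1)     # rays touching each cell
    s = np.full(G.shape[1], t.sum() / row_sums.sum())  # uniform starting model
    for _ in range(n_iter):
        resid = (t - G @ s) / row_sums               # per-ray normalized residual
        s += relax * (G.T @ resid) / col_counts      # averaged update per cell
    return s

rng = np.random.default_rng(5)
G = rng.random((80, 25)) * (rng.random((80, 25)) < 0.3)   # sparse path lengths
s_true = 0.25 + 0.05 * rng.random(25)
t = G @ s_true
print(np.abs(sirt(G, t) - s_true).max())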

12.
In the analysis and design of unbraced steel frames, various models are employed to represent the behaviour of beam-to-column connections. In one such model, termed here ‘Simple Construction’, pinned connections are assumed when resisting gravity loads, whereas the same connections are assumed to be moment-resistant rigid connections when resisting lateral loads due to an earthquake or wind. Such connections are designed for moments due to lateral loads only; thus, they are not only flexible but may yield when the gravity and lateral loads act concurrently. This paper establishes the seismic performance of two (one 5-storey and one 10-storey) unbraced steel building frames designed according to the ‘Simple Construction’ technique and limit state principles. The first part of the paper briefly describes the design of such frames and compares their static responses with the corresponding responses of frames designed under the ‘Continuous Construction’ assumption. Using realistic moment-rotation behaviour for flexible beam-to-column connections and realistic member behaviour, the non-linear dynamic responses of such frames to the 1940 El Centro record and twice the 1952 Taft record have been established using step-by-step time-history analyses. Floor lateral displacement envelopes, storey shear envelopes and cumulative inelastic rotations of beams, columns and connections are presented. The results indicate that the ‘Simple Construction’ frames experience larger lateral deflections while attracting smaller storey shears. During a major earthquake, the columns and connections of the ‘Simple Construction’ frames experience yielding, whereas in ‘Continuous Construction’ frames the beams and columns experience yielding. The cyclic plastic rotations in the connections and columns of the ‘Simple Construction’ frames are found to be considerably higher.

13.
14.
In this paper, we propose a coupling of a finite element model with a metaheuristic optimization algorithm for solving the inverse problem in groundwater flow (Darcy's equations). This two-phase coupling combines two codes: the HySubF-FEM code (hydrodynamics of subsurface flow by the finite element method), used in the first phase to compute the flow, and the CMA-ES code (covariance matrix adaptation evolution strategy), adopted in the second phase for the optimization process. The combination of these two codes was implemented to identify the transmissivity field of groundwater from the hydraulic head known at some points of the studied domain. The integrated optimization algorithm HySubF-FEM/CMA-ES was successfully validated on a schematic case offering an analytical solution. As a realistic application, HySubF-FEM/CMA-ES was applied to a complex groundwater system in the north of France to identify the transmissivity field. This application does not use zonation techniques but solves an optimization problem at each internal node of the mesh. The obtained results are considered excellent, with high accuracy and full consistency with the hydrogeological characteristics of the studied aquifer. However, the various numerical simulations performed in this paper have shown that the CMA-ES algorithm is time-consuming. Finally, the paper concludes that the proposed algorithm can be considered an efficient tool for solving inverse problems in groundwater flow.
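A sketch of the second-phase optimization in this entry using the open-source `cma` package (pip install cma), which is an assumption standing in for the authors' CMA-ES code; the "forward model" below is a toy head predictor, not HySubF-FEM, and the log-transmissivities and observed heads are invented.

import numpy as np
import cma

obs_head = np.array([12.1, 11.4, 10.2, 9.8])

def forward_heads(log_T):
    # Placeholder for a finite element flow solve given transmissivities
    T = 10.0 ** np.asarray(log_T)
    return 13.0 - np.cumsum(0.5 / T)

def objective(log_T):
    return float(np.sum((forward_heads(log_T) - obs_head) ** 2))

# Start from log10 T = -3 everywhere with an initial step of 0.5 decades
best, es = cma.fmin2(objective, 4 * [-3.0], 0.5, {'verbose': -9})
print(best, objective(best))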

15.
With the availability of spatially distributed data, distributed hydrologic models are increasingly used to simulate spatially varied hydrologic processes in order to understand and manage natural and human activities that affect watershed systems. Multi-objective optimization methods have been applied to calibrate distributed hydrologic models using observed data from multiple sites. As the time consumed by running these complex models increases substantially, selecting efficient and effective multi-objective optimization algorithms is becoming a nontrivial issue. In this study, we evaluated a multi-algorithm, genetically adaptive multi-objective method (AMALGAM) for multi-site calibration of a distributed hydrologic model, the Soil and Water Assessment Tool (SWAT), and compared its performance with two widely used evolutionary multi-objective optimization (EMO) algorithms, the Strength Pareto Evolutionary Algorithm 2 (SPEA2) and the Non-dominated Sorted Genetic Algorithm II (NSGA-II). To provide insight into each method's overall performance, the three methods were tested in four watersheds with various characteristics. The test results indicate that AMALGAM can consistently provide competitive or superior results compared with the other two methods. The multi-method search framework of AMALGAM, which can flexibly and adaptively utilize multiple optimization algorithms, makes it a promising tool for multi-site calibration of the distributed SWAT. For practical use of AMALGAM, it is suggested to run this method in multiple trials with a relatively small number of model runs rather than once with long iterations. In addition, incorporating different multi-objective optimization algorithms and multi-mode search operators into AMALGAM deserves further research. Copyright © 2009 John Wiley & Sons, Ltd.
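A sketch of the Pareto-dominance bookkeeping that underlies NSGA-II/SPEA2/AMALGAM-style multi-objective calibration; the two objectives here are toy per-site errors, and the simple O(n^2) non-dominated filter is illustrative rather than any of these algorithms' full sorting machinery.

import numpy as np

def non_dominated(F):
    # Row i of F (minimization) is kept if no other row dominates it
    mask = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        mask[i] = not dominated.any()
    return mask

rng = np.random.default_rng(6)
F = rng.random((200, 2))                 # e.g. RMSE at two gauging stations
front = F[non_dominated(F)]
print(front[np.argsort(front[:, 0])])    # Pareto front sorted by objective 1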

16.
In this paper, we apply particle swarm optimization (PSO), an artificial intelligence technique, to velocity calibration in microseismic monitoring. We ran simulations with four 1-D layered velocity models and three different initial model ranges. The results using the basic PSO algorithm were reliable and accurate for simple models, but unsuccessful for complex models. We propose the staged shrinkage strategy (SSS) for the PSO algorithm. The SSS-PSO algorithm produced robust inversion results and had a fast convergence rate. We investigated the effects of PSO's velocity clamping factor on algorithm reliability and computational efficiency. The velocity clamping factor had little impact on the reliability and efficiency of basic PSO, whereas it had a large effect on the efficiency of SSS-PSO. Reassuringly, SSS-PSO exhibits only marginal reliability fluctuations, which suggests that it can be confidently implemented.
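A minimal basic-PSO sketch showing the velocity clamping factor k that this entry studies; the objective is a toy stand-in for a microseismic velocity-model misfit, and the staged shrinkage strategy (SSS) itself is not reproduced here.

import numpy as np

def objective(m):
    return np.sum((m - 2.5)**2 + 0.5 * np.sin(5 * m)**2)

rng = np.random.default_rng(7)
n, dim, lo, hi = 30, 4, 0.0, 6.0
k = 0.2                                   # velocity clamping factor
v_max = k * (hi - lo)
x = rng.uniform(lo, hi, (n, dim))
v = np.zeros((n, dim))
pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(300):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = 0.72 * v + 1.49 * r1 * (pbest - x) + 1.49 * r2 * (gbest - x)
    v = np.clip(v, -v_max, v_max)         # clamp particle velocities
    x = np.clip(x + v, lo, hi)
    f = np.array([objective(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()
print(gbest, pbest_f.min())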

17.
A methodology for the optimal design of supplemental viscous dampers for regular as well as irregular yielding shear-frames is presented. It addresses the problem of minimizing the added damping subject to a constraint on an energy-based global damage index (GDI) for an ensemble of realistic ground motion records. The applicability of the methodology for irregular structures is achieved by choosing an appropriate GDI. For a particular choice of the parameters comprising the GDI, a design for the elastic behavior of the frame or equal damage for all stories is achieved. The use of a gradient-based optimization algorithm for the solution of the optimization problem is enabled by first deriving an expression for the gradient of the constraint. The optimization process is started for one ‘active’ ground motion record which is efficiently selected from the given ensemble. If the resulting optimal design fails to satisfy the constraints for other records from the original ensemble, additional ground motions (loading conditions) are added one by one to the ‘active’ set until the optimum is reached. Two examples for the optimal designs of supplemental dampers are given: a 2-story shear frame with varying strength distribution and a 10-story shear frame. The 2-story shear frame is designed for one given ground motion whereas the 10-story frame is designed for an ensemble of twenty ground motions. Copyright © 2005 John Wiley & Sons, Ltd.

18.
During exploration and pre-feasibility studies of a typical petroleum project, many analyses are required to support decision making. Among them is reservoir lithofacies modeling, preferably with uncertainty assessment, which can be carried out with geostatistical simulation. The resulting multiple, equally probable facies models can be used, for instance, in flow simulations. This allows assessing uncertainties in reservoir flow behavior during its production lifetime, which is useful for injector and producer well planning. Flow, among other factors, is controlled by elements that act as flow corridors and barriers. Clean sand channels and shale layers are examples of such reservoir elements, and they have specific geometries. Besides simulating the necessary facies, it is also important to simulate their shapes. Object-based and process-based simulations excel at geometry reproduction, while variogram-based simulations perform very well at data conditioning. Multiple-point geostatistics (MPS) combines both characteristics; consequently, it was employed in this study to produce models of a real-world reservoir that are both data-adherent and geologically realistic. This work aims at illustrating how subsurface information typically available in petroleum projects can be used with MPS to generate realistic reservoir models. A workflow using the SNESIM algorithm is demonstrated, incorporating various sources of information. The results show that complex structures (e.g. channel networks) emerged from a simple training model (e.g. a single branch), and the reservoir facies models produced with MPS were judged suitable for geometry-sensitive applications such as flow simulations.

19.
Journal of Hydrology, 2006, 316 (1–4), 266–280
Traditionally, the calibration of groundwater models has depended on gradient-based local optimization methods. These methods provide a reasonable degree of success only when the objective function is smooth, twice differentiable, and satisfies the Lipschitz condition. For complicated and highly nonlinear objective functions it is almost impractical to satisfy these conditions simultaneously. Research on the calibration of conceptual rainfall-runoff models has shown that global optimization methods are more successful at locating the global optimum in regions with multiple local optima. In this study, a global optimization technique known as shuffled complex evolution (SCE) is coupled to the gradient-based Levenberg–Marquardt algorithm (GBLM). The resulting hybrid global optimization algorithm (SCEGB) is then deployed in parallel testing with SCE and GBLM to solve several inverse problems in which parameters of a nonlinear numerical groundwater flow model are estimated. Using perfect (i.e. noise-free) observation data, it is shown that SCEGB and SCE successfully identify the global optimum and predict all model parameters, whereas the commonly applied GBLM fails to identify the optimum. In subsequent inverse simulations using observation data corrupted with noise, SCEGB and SCE again outperform GBLM by consistently producing more accurate parameter estimates. Finally, in all simulations the hybrid SCEGB is seen to be as effective as SCE but computationally more efficient.
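A sketch of the global-then-local hybrid described above, with scipy's differential_evolution standing in for SCE (an assumption) and least_squares(method='lm') providing the Levenberg–Marquardt refinement; the "groundwater model" is a toy exponential drawdown curve, not the paper's numerical flow model.

import numpy as np
from scipy.optimize import differential_evolution, least_squares

t = np.linspace(0.1, 10, 40)
true = np.array([2.0, 0.7])
obs = true[0] * (1 - np.exp(-true[1] * t)) \
      + np.random.default_rng(8).normal(0, 0.02, t.size)

def residuals(p):
    return p[0] * (1 - np.exp(-p[1] * t)) - obs

# Global stage: locate the basin of the global optimum
glob = differential_evolution(lambda p: np.sum(residuals(p)**2),
                              bounds=[(0, 10), (0, 5)], seed=0)
# Local stage: Levenberg-Marquardt polish from the global solution
loc = least_squares(residuals, glob.x, method='lm')
print(glob.x, '->', loc.x)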

20.
Almost all earth-science inverse problems are nonlinear and involve a large number of unknown parameters, making the application of analytical inversion methods quite restrictive. In practice, most analytical methods are local in nature and rely on a linearized form of the problem equations, adopting an iterative procedure that typically employs partial derivatives to optimize the starting (initial) model by minimizing a misfit (penalty) function. Unfortunately, especially in highly non-linear cases, the final model strongly depends on the initial model and is hence prone to solution entrapment in local minima of the misfit function, while the derivative calculation is often computationally inefficient and creates instabilities when numerical approximations are used. An alternative is to employ global techniques which do not rely on partial derivatives, are independent of the misfit form and are computationally robust. Such methods employ pseudo-randomly generated models (sampling an appropriately selected section of the model space) which are assessed in terms of their data fit. A typical example is the class of methods known as genetic algorithms (GA), which achieve this through model representation and manipulation, and which have attracted the attention of the earth-sciences community during the last decade, with applications already presented for several geophysical problems. In this paper, we examine the efficiency of combining typical regularized least-squares and genetic methods for a typical seismic tomography problem. The proposed approach combines a local (LOM) and a global (GOM) optimization method, in an attempt to overcome the limitations of each individual approach, such as local minima and slow convergence, respectively. The potential of both optimization methods is tested and compared, both independently and jointly, using several test models and synthetic refraction travel-time data sets that share the same experimental geometry, wavelength and geometrical characteristics of the model anomalies. Moreover, real data from a crosswell tomographic project for the subsurface mapping of an ancient wall foundation are used to test the efficiency of the proposed algorithm. The results show that the combined use of both methods can exploit the benefits of each approach, leading to improved final models and producing realistic velocity models, without significantly increasing the required computation time.
