Similar documents
20 similar documents found (search time: 31 ms)
1.
In this study, we determine an updated finite element model of a reinforced concrete building, which was damaged by shaking during the 1994 Northridge earthquake, using forced-vibration test data and a novel model-updating technique. Developed and verified in the companion paper (viz. BVLSrc, Earthquake Eng. Struct. Dyn. 2006; this issue), this iterative technique incorporates novel sensitivity-based relative constraints to avoid the ill conditioning that results from spatial incompleteness of measured data. We used frequency response functions and natural frequencies as input for the model-updating problem. These data were extracted from measurements obtained during a white-noise excitation applied at the roof of the building using a linear inertial shaker. Flexural stiffness values of properly grouped structural members, modal damping ratios, and translational and rotational mass values were chosen as the updating parameters, so that the converged results had direct physical interpretations and could therefore be compared with common parameters used in the seismic design and evaluation of buildings. We investigated the veracity of the updated finite element model by comparing the predicted and measured dynamic responses under a second and different type of forced-vibration test (sine sweep). These results indicate that the updated model replicates the dynamic behaviour of the building reasonably well. Furthermore, the updated stiffness factors appear to be well correlated with the observed building damage patterns (i.e. their location and severity). Copyright © 2006 John Wiley & Sons, Ltd.
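Sensitivity-based frequency matching of this kind can be illustrated with a deliberately minimal sketch: a single-degree-of-freedom oscillator whose stiffness is corrected by Gauss-Newton iterations until the model frequency matches a "measured" one. All numbers and function names below are hypothetical and are not taken from the building study.

```python
import math

def natural_freq(k, m):
    """Natural frequency (Hz) of an SDOF oscillator with stiffness k and mass m."""
    return math.sqrt(k / m) / (2.0 * math.pi)

def update_stiffness(k0, m, f_measured, iters=20):
    """Iteratively correct k so the model frequency matches f_measured.

    Uses the analytical sensitivity df/dk = f/(2k) in a Gauss-Newton step,
    mirroring (in one dimension) the sensitivity-based updating idea."""
    k = k0
    for _ in range(iters):
        f = natural_freq(k, m)
        sensitivity = f / (2.0 * k)          # df/dk for f = sqrt(k/m)/(2*pi)
        k += (f_measured - f) / sensitivity  # Gauss-Newton update
    return k

# hypothetical initial stiffness (N/m), mass (kg) and "measured" frequency (Hz)
k_updated = update_stiffness(k0=1.0e6, m=1.0e3, f_measured=4.0)
```

In the full problem the scalar sensitivity becomes a sensitivity matrix over grouped stiffness, damping and mass parameters, and the relative constraints described in the abstract are needed precisely because that matrix is rank-deficient for spatially incomplete measurements.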

2.
Regularization is the most popular technique to overcome the null space of model parameters in geophysical inverse problems, and is implemented by including a constraint term as well as the data‐misfit term in the objective function being minimized. The weighting of the constraint term relative to the data‐fitting term is controlled by a regularization parameter, and its adjustment to obtain the best model has received much attention. The empirical Bayes approach discussed in this paper determines the optimum value of the regularization parameter from a given data set. The regularization term can be regarded as representing a priori information about the model parameters. The empirical Bayes approach and its more practical variant, Akaike's Bayesian Information Criterion, adjust the regularization parameter automatically in response to the level of data noise and to the suitability of the assumed a priori model information for the given data. When the noise level is high, the regularization parameter is made large, which means that the a priori information is emphasized. If the assumed a priori information is not suitable for the given data, the regularization parameter is made small. Both these behaviours are desirable characteristics for the regularized solutions of practical inverse problems. Four simple examples are presented to illustrate these characteristics for an underdetermined problem, a problem adopting an improper prior constraint and a problem having an unknown data variance, all frequently encountered geophysical inverse problems. Numerical experiments using Akaike's Bayesian Information Criterion for synthetic data provide results consistent with these characteristics. 
In addition, concerning the selection of an appropriate type of a priori model information, a comparison between four types of difference‐operator model – the zeroth‐, first‐, second‐ and third‐order difference‐operator models – suggests that the automatic determination of the optimum regularization parameter becomes more difficult with increasing order of the difference operators. Accordingly, taking the effect of data noise into account, it is better to employ the lower‐order difference‐operator models for inversions of noisy data.
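The trade-off controlled by the regularization parameter can be seen in a closed-form toy problem: one observation of two parameters (an underdetermined system) with a zeroth-order, minimum-norm prior. The numbers are hypothetical.

```python
def tikhonov_1x2(g, d, lam):
    """Minimizer of (d - g.m)^2 + lam*||m||^2 for one observation g of two parameters.

    Closed form of the regularized normal equations: m = g^T d / (g.g^T + lam)."""
    denom = g[0] ** 2 + g[1] ** 2 + lam
    return [g[0] * d / denom, g[1] * d / denom]

# same datum, small vs large regularization parameter
m_small_lam = tikhonov_1x2([1.0, 1.0], 2.0, lam=0.01)
m_large_lam = tikhonov_1x2([1.0, 1.0], 2.0, lam=10.0)
```

With a small regularization parameter the solution approaches the minimum-norm solution [1, 1] of m1 + m2 = 2; a large parameter (as ABIC would select for noisy data) pulls the solution toward the prior (zero), which is exactly the behaviour described in the abstract.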

3.
A reliable computational model is necessary for evaluating the state and predicting the future performance of existing structures, especially after exposure to damaging effects such as an earthquake. A major problem with existing iterative model-updating methods is that the search might be trapped in local optima. Genetic algorithms (GAs) offer a desirable alternative because of their ability to perform a robust search for the global optimal solution. This paper presents a GA-based model-updating approach using a real-coding scheme for global model updating based on dynamic measurement data. An eigensensitivity method is employed to further fine-tune the GA-updated results in case sensitivity problems arise due to restricted measurement information. The application to shear-type frames reveals that with a limited amount of modal data, namely the lowest three natural frequencies and the first mode shape, it is possible to achieve satisfactory updating by the GA alone for cases involving a limited number of parameters (storey stiffness herein). With the incorporation of the eigensensitivity algorithm, the updating capability is extended to a sufficiently large number of parameters. When the modal data contain errors, the GA is also shown to be able to update the model to a satisfactory accuracy, provided the required amount of modal data is available. An example is given in which a 6-DOF stick model for an actual six-storey RC frame is updated using the measured dynamic properties. The effectiveness of the updating is evaluated by comparing the measured and predicted seismic response using the updated model. Copyright © 2005 John Wiley & Sons, Ltd.
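A minimal real-coded GA of the kind described can be sketched as follows. The "misfit" here is a stand-in for the frequency/mode-shape misfit, with hypothetical true stiffness values (3, 5), and the operators (truncation selection, blend crossover, Gaussian mutation) are common choices rather than the paper's exact scheme.

```python
import random

def misfit(params, target=(3.0, 5.0)):
    """Toy stand-in for the modal misfit: distance of the trial storey
    stiffnesses from hypothetical 'true' values."""
    return sum((p - t) ** 2 for p, t in zip(params, target))

def real_coded_ga(bounds, pop_size=40, gens=80, seed=1):
    """Elitist real-coded GA: truncation selection, blend crossover, Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=misfit)
        elite = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            w = rng.random()
            child = [w * x + (1.0 - w) * y for x, y in zip(a, b)]  # blend crossover
            if rng.random() < 0.2:            # occasional Gaussian mutation
                j = rng.randrange(len(child))
                child[j] += rng.gauss(0.0, 0.1)
            children.append(child)
        pop = elite + children                # elitism: best individual always survives
    return min(pop, key=misfit)

best = real_coded_ga([(0.0, 10.0), (0.0, 10.0)])
```

Because the search is population-based and derivative-free, it does not get trapped in the way a local sensitivity iteration can; the eigensensitivity refinement described in the abstract would then polish `best` when many parameters are involved.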

4.
Most groundwater models simulate stream-aquifer interactions with a head-dependent flux boundary condition based on a river conductance (CRIV). CRIV is usually calibrated with other parameters by history matching. However, the inverse problem of groundwater models is often ill-posed, and individual model parameters are likely to be poorly constrained. Ill-posedness can be addressed by Tikhonov regularization with prior knowledge of parameter values. The difficulty with a lumped parameter like CRIV, which cannot be measured in the field, is to find suitable initial and regularization values. Several formulations have been proposed for the estimation of CRIV from physical parameters. However, these methods are either too simple to provide a reliable estimate of CRIV, or too complex to be easily implemented by groundwater modelers. This paper addresses the issue with a flexible and operational tool based on a 2D numerical model in a local vertical cross section, where the river conductance is computed from selected geometric and hydrodynamic parameters. In contrast to other approaches, the grid size of the regional model and the anisotropy of the aquifer hydraulic conductivity are also taken into account. A global sensitivity analysis indicates the strong sensitivity of CRIV to these parameters. This enhancement of the prior estimation of CRIV is a step forward for the calibration and uncertainty analysis of surface-subsurface models. It is especially useful for modeling objectives that require CRIV to be well known, such as conjunctive surface water-groundwater use.
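For orientation, the classic lumped conductance formula (as used, for example, in MODFLOW's RIV package) and the resulting head-dependent flux can be sketched as below; the parameter values are hypothetical. The tool described in the abstract refines exactly this kind of prior estimate using a local 2D cross-sectional model.

```python
def river_conductance(k_bed, length, width, thickness):
    """Classic lumped river-bed conductance: CRIV = K * L * W / M,
    with bed hydraulic conductivity K, reach length L, width W, bed thickness M."""
    return k_bed * length * width / thickness

def river_aquifer_flux(criv, river_stage, aquifer_head):
    """Head-dependent flux boundary condition: Q = CRIV * (stage - head)."""
    return criv * (river_stage - aquifer_head)

# hypothetical values: K = 0.5 m/d, L = 100 m, W = 10 m, M = 1 m
criv = river_conductance(k_bed=0.5, length=100.0, width=10.0, thickness=1.0)
q = river_aquifer_flux(criv, river_stage=12.0, aquifer_head=11.5)  # m^3/d
```

The abstract's point is that this simple product hides grid-size and anisotropy effects, which is why a cross-sectional numerical model gives a better prior for CRIV before regularized calibration.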

5.
Pump‐and‐treat systems can prevent the migration of groundwater contaminants and candidate systems are typically evaluated with groundwater models. Such models should be rigorously assessed to determine predictive capabilities and numerous tools and techniques for model assessment are available. While various assessment methodologies (e.g., model calibration, uncertainty analysis, and Bayesian inference) are well‐established for groundwater modeling, this paper calls attention to an alternative assessment technique known as screening‐level sensitivity analysis (SLSA). SLSA can quickly quantify first‐order (i.e., main effects) measures of parameter influence in connection with various model outputs. Subsequent comparisons of parameter influence with respect to calibration vs. prediction outputs can suggest gaps in model structure and/or data. Thus, while SLSA has received little attention in the context of groundwater modeling and remedial system design, it can nonetheless serve as a useful and computationally efficient tool for preliminary model assessment. To illustrate the use of SLSA in the context of designing groundwater remediation systems, four SLSA techniques were applied to a hypothetical, yet realistic, pump‐and‐treat case study to determine the relative influence of six hydraulic conductivity parameters. Considered methods were: Taguchi design‐of‐experiments (TDOE); Monte Carlo statistical independence (MCSI) tests; average composite scaled sensitivities (ACSS); and elementary effects sensitivity analysis (EESA). In terms of performance, the various methods identified the same parameters as being the most influential for a given simulation output. Furthermore, results indicate that the background hydraulic conductivity is important for predicting system performance, but calibration outputs are insensitive to this parameter (KBK). 
The observed insensitivity is attributed to a nonphysical specified‐head boundary condition used in the model formulation which effectively “staples” head values located within the conductivity zone. Thus, potential strategies for improving model predictive capabilities include additional data collection targeting the KBK parameter and/or revision of model structure to reduce the influence of the specified head boundary.
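Elementary-effects screening (EESA above is commonly implemented as a Morris-type design) can be sketched with a toy model in which one parameter is inert, mimicking a calibration output that is insensitive to KBK. The model and all values are hypothetical.

```python
import random

def model(x):
    """Toy simulation output: parameter 0 dominates, parameter 2 is inert
    (mirroring an output that carries no information about one parameter)."""
    return 5.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]

def mean_abs_elementary_effects(f, n_params, n_samples=50, delta=0.1, seed=3):
    """Morris-style screening: average |f(x + delta*e_i) - f(x)| / delta per parameter."""
    rng = random.Random(seed)
    mu_star = [0.0] * n_params
    for _ in range(n_samples):
        x = [rng.random() for _ in range(n_params)]
        base = f(x)
        for i in range(n_params):
            xp = list(x)
            xp[i] += delta          # perturb one parameter at a time
            mu_star[i] += abs(f(xp) - base) / delta
    return [m / n_samples for m in mu_star]

mu = mean_abs_elementary_effects(model, 3)
```

Comparing such screening scores between calibration outputs and prediction outputs is precisely how a gap like the KBK insensitivity would be flagged: a parameter with a large score for predictions but a near-zero score for calibration targets cannot be constrained by history matching.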

6.
A two-and-a-half-dimensional model-based inversion algorithm for the reconstruction of the geometry and conductivity of unknown regions using marine controlled-source electromagnetic (CSEM) data is presented. In the model-based inversion, the inversion domain is described by the so-called regional conductivity model, and both the geometry and the material parameters associated with this model are reconstructed in the inversion process. This method has the advantage of using a priori information such as the background conductivity distribution, structural information extracted from seismic and/or gravity measurements, and/or inversion results previously derived from a pixel-based inversion method. By incorporating this a priori information, the number of unknown parameters to be retrieved is significantly reduced. The inversion is based on a regularized Gauss-Newton minimization scheme. The robustness of the inversion is enhanced by adopting nonlinear constraints and applying a quadratic line-search algorithm to the optimization process. We also introduce the adjoint formulation to calculate the Jacobian matrix with respect to the geometrical parameters. The model-based inversion method is validated using several numerical examples, including the inversion of the Troll field data. These results show that the model-based inversion method can quantitatively reconstruct the shapes and conductivities of reservoirs.
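The regularized Gauss-Newton iteration at the core of such schemes can be sketched on a tiny nonlinear problem: fitting a two-parameter exponential (a stand-in for the geometry/conductivity parameters) to noise-free synthetic data. The damping value, model and data are hypothetical, and the quadratic line search and adjoint-based Jacobian of the actual method are omitted for brevity.

```python
import math

def solve2(A, b):
    """Cramer's rule for a 2x2 linear system."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def gauss_newton_exp_fit(t, d, m0, lam=1e-6, iters=30):
    """Damped Gauss-Newton for d_i ~ a*exp(b*t_i): solve (J^T J + lam*I) dm = J^T r."""
    a, b = m0
    for _ in range(iters):
        r = [di - a * math.exp(b * ti) for ti, di in zip(t, d)]          # residuals
        J = [[math.exp(b * ti), a * ti * math.exp(b * ti)] for ti in t]  # Jacobian
        JtJ = [[sum(Ji[p] * Ji[q] for Ji in J) + (lam if p == q else 0.0)
                for q in range(2)] for p in range(2)]
        Jtr = [sum(Ji[p] * ri for Ji, ri in zip(J, r)) for p in range(2)]
        da, db = solve2(JtJ, Jtr)
        a, b = a + da, b + db
    return a, b

t_obs = [0.0, 0.5, 1.0, 1.5, 2.0]
d_obs = [2.0 * math.exp(-0.8 * ti) for ti in t_obs]   # noise-free synthetic data
a_hat, b_hat = gauss_newton_exp_fit(t_obs, d_obs, m0=(1.5, -0.7))
```

In the CSEM setting the Jacobian columns for geometrical parameters are what the adjoint formulation supplies cheaply, since explicit differentiation of the forward solver with respect to interface positions would otherwise be expensive.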

7.
This paper presents a finite element (FE) model updating procedure applied to complex structures using an eigenvalue sensitivity-based updating approach. The objective of the model updating is to reduce the difference between the calculated and the measured frequencies. The method is based on the first-order Taylor-series expansion of the eigenvalues with respect to some structural parameters selected to be adjusted. These parameters are assumed to be bounded by some prescribed regions, which are determined according to the degrees of uncertainty that exist in the parameters. The changes of these parameters are found iteratively by solving a constrained optimization problem. The improvement offered by the current study is the use of an objective function that is the sum of a weighted frequency error norm and a weighted perturbation norm of the parameters. Two weighting matrices are introduced to provide flexibility for individual tuning of frequency errors and parameter perturbations. The proposed method is applied to a 1/150 scaled suspension bridge model. Using 11 measured frequencies as reference, the FE model is updated by adjusting ten selected structural parameters. The final updated FE model for the suspension bridge model is able to produce natural frequencies in close agreement with the measured ones. Copyright © 2000 John Wiley & Sons, Ltd.

8.
Inversion for seismic impedance is an inherently complicated problem: it is ill-posed and the data are band-limited, so the inversion results are non-unique and the process is unstable. Combining regularization with constraints using sonic and density log data can help to reduce these problems. To achieve this, we developed an inversion method by constructing a new objective function, including edge-preserving regularization and a soft constraint based on a Markov random field. The method includes the selection of proper initial values of the regularization parameters by a statistical method, and it adaptively adjusts the regularization parameters by the maximum likelihood method in a fast simulated-annealing procedure to improve the inversion result and the convergence speed. Moreover, the method uses two kinds of regularization parameter: a 'weighting factor' λ and a 'scaling parameter' δ. We tested the method on both synthetic and field data examples. Tests on 2D synthetic data indicate that the inversion results, especially the aspects of discontinuity, differ significantly for different regularization functions. Initial values of the regularization parameters that are too large or too small lead, respectively, to over-smoothed or unstable results, and they also affect the convergence speed. When selecting the initial value of λ, the type of regularization function should be considered. The results obtained with constant regularization parameters are smoother than those obtained by adaptively adjusting the regularization parameters. The inversion results for the field data provide more detailed information about the layers, and they match the impedance curves calculated from the well logs at the three wells over most portions of the curves.
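The roles of the two parameters λ (weighting) and δ (scaling) can be illustrated with a Geman-McClure-type potential, a common edge-preserving choice used here purely as an example (not necessarily the paper's exact function): it behaves quadratically for jumps much smaller than δ but saturates for jumps much larger than δ, so sharp impedance discontinuities are not heavily penalized.

```python
def quadratic_penalty(x):
    """Standard L2 (Tikhonov-type) roughness penalty: grows without bound."""
    return x * x

def edge_preserving_penalty(x, delta):
    """Geman-McClure-type potential: ~ (x/delta)^2 for |x| << delta,
    saturating toward 1 for |x| >> delta, so edges survive regularization."""
    u = (x / delta) ** 2
    return u / (1.0 + u)

# in the objective the penalty enters as lam * edge_preserving_penalty(jump, delta),
# so lam weights the constraint term and delta sets the jump scale treated as an "edge"
```

A large impedance jump costs at most λ under the saturating potential, but λ times the squared jump under the quadratic one, which is why the L2 choice blurs discontinuities.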

9.
Inversion of gravity and/or magnetic data attempts to recover the density and/or magnetic susceptibility distribution in a 3D earth model for subsequent geological interpretation. This is a challenging problem for a number of reasons. First, airborne gravity and magnetic surveys are characterized by very large data volumes. Second, the 3D modelling of data from large-scale surveys is a computationally challenging problem. Third, gravity and magnetic data are finite and noisy, and their inversion is ill-posed, so regularization must be introduced for the recovery of the most geologically plausible solutions from an infinite number of mathematically equivalent solutions. These difficulties, and how they can be addressed in terms of large-scale 3D potential field inversion, are discussed in this paper. Since potential fields are linear, they lend themselves to full parallelization with near-linear scaling on modern parallel computers. Moreover, we exploit the fact that an instrument's sensitivity (or footprint) is considerably smaller than the survey area. As multiple footprints superimpose themselves over the same 3D earth model, the sensitivity matrix for the entire earth model is constructed. We use the re-weighted regularized conjugate gradient method for minimizing the objective functional and incorporate a wide variety of regularization options. We demonstrate our approach with the 3D inversion of 1743 line km of FALCON gravity gradiometry and magnetic data acquired over the Timmins district in Ontario, Canada. Our results are shown to be in good agreement with independent interpretations of the same data.

10.
This study presents analytical solutions for three-dimensional groundwater flow to a well in leaky confined and leaky water table wedge-shaped aquifers. Leaky wedge-shaped aquifers with and without storage in the aquitard are considered, and both transient and steady-state drawdown solutions are derived. Unlike previous solutions for wedge-shaped aquifers, these solutions account for leakage from the aquitard; and unlike similar previous work for leaky aquifers, leakage from the aquitard and from the water table is treated through the lower and upper boundary conditions. A special form of finite Fourier transform is used to transform the z-coordinate in deriving the solutions. The leakage induced by a partially penetrating pumping well in a wedge-shaped aquifer depends on the aquitard hydraulic parameters, the wedge-shaped aquifer parameters, and the pumping well parameters. We calculate the lateral-boundary dimensionless flux at a representative line and investigate its sensitivity to the aquitard hydraulic parameters. We also investigate the effects of wedge angle, partial penetration, screen location and piezometer location on the steady-state dimensionless drawdown for different leakage parameters. Results of our study are presented in the form of dimensionless flux-dimensionless time and dimensionless drawdown-leakage parameter type curves. The results are useful for evaluating the relative roles of lateral wedge boundaries and leakage sources on flow in wedge-shaped aquifers, which is very useful for water management problems and for assessing groundwater pollution. The presented analytical solutions can also be used in parameter identification and in calculating stream depletion rate and volume. Copyright © 2011 John Wiley & Sons, Ltd.

11.
The anisotropy of the land surface is best described by the bidirectional reflectance distribution function (BRDF). As the field of multiangular remote sensing advances, it is increasingly probable that BRDF models can be inverted to estimate important biological or climatological parameters of the earth's surface, such as leaf area index and albedo. The state of the art is the use of linear kernel-driven models, mathematically described as a linear combination of an isotropic kernel, a volume-scattering kernel and a geometric-optics kernel. The computational stability is characterized by the algebraic spectrum of the kernel matrix and by the observation errors; the retrieval of the model coefficients is therefore of great importance for computing land surface albedos. We first consider the smoothing solution method of the kernel-driven BRDF models for the retrieval of land surface albedos, which is known to be an ill-posed inverse problem. The ill-posedness arises because the linear kernel-driven BRDF model is usually underdetermined if there are too few looks or poor directional ranges, or if the observations are highly dependent. For example, a single angular observation may lead to an underdetermined system with either infinitely many solutions (the null space of the kernel operator contains nonzero vectors) or no solution (the rank of the coefficient matrix is not equal to that of the augmented matrix). Therefore, some smoothing or regularization technique should be applied to suppress the ill-posedness. So far, least-squares error methods with a priori knowledge, a QR decomposition method for inversion of the BRDF model, and regularization theories for ill-posed inversion have been developed. In this paper, we emphasize imposing a priori information in different spaces. We first propose a general regularization model problem with imposed a priori information, and then address two forms of regularization scheme. 
The first is a regularized singular value decomposition method; the second is a retrieval method in l1 space. We show that the proposed methods are suitable for solving the land surface parameter retrieval problem when the sampling data are poor. Numerical experiments are also given to show the efficiency of the proposed methods. Supported by National Natural Science Foundation of China (Grant Nos. 10501051, 10871191), and Key Project of Chinese National Programs for Fundamental Research and Development (Grant Nos. 2007CB714400, 2005CB422104)
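The kernel-driven model and a ridge-regularized retrieval can be sketched as follows. The kernel values and coefficients are invented for illustration; real k_vol and k_geo values would come from, for example, RossThick and LiSparse kernels evaluated at the observation geometry.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def retrieve_brdf(K, refl, lam):
    """Ridge-regularized normal equations (K^T K + lam*I) f = K^T r for the
    kernel-driven model r = f_iso + f_vol*k_vol + f_geo*k_geo."""
    KtK = [[sum(row[i] * row[j] for row in K) + (lam if i == j else 0.0)
            for j in range(3)] for i in range(3)]
    Ktr = [sum(row[i] * ri for row, ri in zip(K, refl)) for i in range(3)]
    return solve(KtK, Ktr)

# hypothetical kernel values [1, k_vol, k_geo] at four viewing geometries
K = [[1.0, 0.2, -0.1], [1.0, 0.5, 0.1], [1.0, -0.3, 0.3], [1.0, 0.1, -0.4]]
f_true = [0.3, 0.1, 0.05]                     # invented 'true' coefficients
refl = [sum(k * f for k, f in zip(row, f_true)) for row in K]
f_hat = retrieve_brdf(K, refl, lam=1e-8)
```

With only one look, K has a single row and K^T K is rank-one, so only the regularized system remains solvable: the underdetermined case described in the abstract.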

12.
When studying hydrological processes with a numerical model, global sensitivity analysis (GSA) is essential if one is to understand the impact of model parameters and model formulation on results. However, different definitions of sensitivity can lead to differences in the ranking of importance of the different model factors. Here we combine a fuzzy performance function with different methods of calculating global sensitivity to perform a multi-method global sensitivity analysis (MMGSA). We use an application of a finite element subsurface flow model (ESTEL-2D) to a flood inundation event on a floodplain of the River Severn to illustrate this new methodology. We demonstrate the utility of the method for model understanding and show how the prediction of state variables, such as Darcian velocity vectors, can be affected by such an MMGSA. This paper is a first attempt to use GSA with a numerically intensive hydrological model. Copyright © 2007 John Wiley & Sons, Ltd.

13.
This paper presents a novel nonlinear finite element (FE) model updating framework, in which advanced nonlinear structural FE modeling and analysis techniques are used jointly with the extended Kalman filter (EKF) to estimate time-invariant parameters associated with the nonlinear material constitutive models used in the FE model of the structural system of interest. The EKF as a parameter estimation tool requires the computation of structural FE response sensitivities (total partial derivatives) with respect to the material parameters to be estimated. Employing the direct differentiation method, a well-established procedure for FE response sensitivity analysis, facilitates the application of the EKF to the parameter estimation problem. To verify the proposed nonlinear FE model updating framework, two proof-of-concept examples are presented. For each example, the FE-simulated response of a realistic prototype structure to a set of earthquake ground motions of varying intensity is polluted with artificial measurement noise and used as the structural response measurement to estimate the assumed unknown material parameters using the proposed nonlinear FE model updating framework. The first example consists of a cantilever steel bridge column with three unknown material parameters, while a three-story three-bay moment-resisting steel frame with six unknown material parameters is used as the second example. Both examples demonstrate the excellent performance of the proposed parameter estimation framework even in the presence of high measurement noise. Copyright © 2015 John Wiley & Sons, Ltd.
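The parameter-estimation use of the EKF can be sketched in the scalar case: the unknown material parameter is treated as a (nearly) constant state, and the response sensitivity dh/dθ, which the direct differentiation method supplies in the FE setting, is here available analytically. The response function, noise levels and all values are hypothetical.

```python
import math
import random

def response(theta, t):
    """Hypothetical scalar structural response, nonlinear in the parameter theta."""
    return theta * math.sin(t) + 0.1 * theta ** 2

def ekf_estimate(times, measurements, theta0, p0, r_var, q_var=1e-6):
    """Scalar EKF for a time-invariant parameter modelled as a random-walk state."""
    theta, p = theta0, p0
    for t, y in zip(times, measurements):
        p += q_var                                  # time update (random walk)
        h_sens = math.sin(t) + 0.2 * theta          # dh/dtheta: the response sensitivity
        s = h_sens * p * h_sens + r_var             # innovation variance
        k_gain = p * h_sens / s
        theta += k_gain * (y - response(theta, t))  # measurement update
        p *= 1.0 - k_gain * h_sens
    return theta

rng = random.Random(7)
true_theta = 2.0
times = [0.1 * i for i in range(1, 201)]
ys = [response(true_theta, t) + rng.gauss(0.0, 0.01) for t in times]  # noisy data
theta_hat = ekf_estimate(times, ys, theta0=1.0, p0=1.0, r_var=0.01 ** 2)
```

In the full framework, `response` is the nonlinear FE simulation and `h_sens` becomes a sensitivity vector over all constitutive parameters, computed alongside the response by direct differentiation rather than by finite differences.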

14.
First-arrival traveltime tomography is a robust tool for near-surface velocity estimation. A common approach to stabilizing the ill-posed inverse problem is to apply Tikhonov regularization to the inversion. However, the Tikhonov regularization method recovers smooth local structures while blurring the sharp features in the model solution. We present a first-arrival traveltime tomography method with modified total-variation regularization to preserve sharp velocity contrasts and improve the accuracy of velocity inversion. To solve the minimization problem of the new traveltime tomography method, we decouple the original optimization problem into the following two subproblems: a standard traveltime tomography problem with traditional Tikhonov regularization, and an L2 total-variation problem. We apply the conjugate gradient method and the split-Bregman iterative method to solve these two subproblems, respectively. Our synthetic examples show that the new method produces higher-resolution models than conventional traveltime tomography with Tikhonov regularization, and creates fewer artefacts than the total-variation regularization method for models with sharp interfaces. For the field data, pre-stack time migration sections show that the modified total-variation traveltime tomography produces a near-surface velocity model that makes statics corrections more accurate.
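The split-Bregman treatment of an L2 total-variation subproblem can be sketched in 1D, where the u-update is a tridiagonal solve and the d-update is soft shrinkage. Denoising a step signal stands in for the velocity-model subproblem; all parameter values are hypothetical.

```python
def thomas(sub, diag, sup, rhs):
    """Tridiagonal solver (Thomas algorithm); sub[0] and sup[-1] are unused zeros."""
    n = len(diag)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        denom = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / denom
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def shrink(v, t):
    """Soft thresholding: closed-form solution of the L1 subproblem."""
    return max(abs(v) - t, 0.0) * (1.0 if v >= 0.0 else -1.0)

def tv_denoise(f, mu, lam=1.0, iters=200):
    """Split-Bregman for min_u 0.5*||u - f||^2 + mu*|Du|_1, (Du)_i = u[i+1] - u[i]."""
    n = len(f)
    u, d, b = list(f), [0.0] * (n - 1), [0.0] * (n - 1)
    for _ in range(iters):
        # u-subproblem: (I + lam*D^T D) u = f + lam*D^T(d - b), a tridiagonal system
        rhs = list(f)
        for i in range(n - 1):
            v = lam * (d[i] - b[i])
            rhs[i] -= v
            rhs[i + 1] += v
        sub = [0.0] + [-lam] * (n - 1)
        diag = [1.0 + lam] + [1.0 + 2.0 * lam] * (n - 2) + [1.0 + lam]
        sup = [-lam] * (n - 1) + [0.0]
        u = thomas(sub, diag, sup, rhs)
        # d-subproblem (shrinkage) and Bregman variable update
        for i in range(n - 1):
            du = u[i + 1] - u[i]
            d[i] = shrink(du + b[i], mu / lam)
            b[i] += du - d[i]
    return u

def total_variation(x):
    return sum(abs(x[i + 1] - x[i]) for i in range(len(x) - 1))

clean = [0.0] * 25 + [1.0] * 25                       # a sharp 'interface'
noisy = [c + (0.1 if i % 2 == 0 else -0.1) for i, c in enumerate(clean)]
denoised = tv_denoise(noisy, mu=0.2)
```

The shrinkage step leaves the large jump at the interface essentially intact while flattening the small oscillations, which is the behaviour that preserves sharp velocity contrasts in the tomography.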

15.
Nonlinear finite element (FE) analysis has been widely used in the design and analysis of structural and geotechnical systems, and the response sensitivities (or gradients) with respect to the model parameters are of significant importance in these realistic engineering problems. However, sensitivity calculation has lagged behind, leaving a gap between advanced FE response analysis and the many research areas that use the response gradient. Response sensitivity analysis is crucial for any gradient-based algorithm, such as reliability analysis, system identification and structural optimization. Among the various sensitivity analysis methods, the direct differentiation method (DDM) has advantages in computational efficiency and accuracy, providing an ideal tool for response gradient calculation. This paper extends the DDM framework to realistic, complicated soil-foundation-structure interaction (SFSI) models by developing the response gradients for the various constraints, elements and materials involved. The enhanced framework is applied to three-dimensional SFSI system prototypes for a pile-supported bridge pier and a pile-supported reinforced concrete building frame structure, subjected to earthquake loading conditions. The DDM results are verified against the forward finite difference (FFD) method: the FFD results converge asymptotically toward the DDM results, demonstrating the advantages of the DDM (it is accurate, efficient and insensitive to numerical noise). Furthermore, the relative importance (RI) and effects of the model parameters of the structure, foundation and soil materials on the responses of SFSI systems are investigated using the sensitivity analysis results. The extension of the DDM to SFSI systems greatly broadens the application areas of gradient-based algorithms, e.g. FE model updating and nonlinear system identification of complicated SFSI systems.
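The FFD-vs-DDM verification pattern can be sketched on a toy response function with a known analytical (DDM-like) derivative; the FFD error shrinks linearly with the perturbation size, which is what "converges asymptotically toward the DDM results" means in practice. The response function is hypothetical.

```python
def response(k):
    """Toy nonlinear 'FE response' as a function of a material parameter k."""
    return k ** 2 + 3.0 * k

def ddm_sensitivity(k):
    """Exact (direct-differentiation-style) sensitivity dR/dk."""
    return 2.0 * k + 3.0

def ffd_sensitivity(k, h):
    """Forward finite difference approximation of dR/dk with perturbation h."""
    return (response(k + h) - response(k)) / h

k = 2.0
# FFD error at decreasing perturbation sizes: O(h) truncation error
errs = [abs(ffd_sensitivity(k, h) - ddm_sensitivity(k)) for h in (1e-1, 1e-2, 1e-3)]
```

In a full FE model the FFD check also requires one complete reanalysis per parameter and per perturbation size, and suffers from step-size sensitivity, which is exactly why the DDM (one differentiated analysis, no step size) is preferred once it is implemented.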

16.
The quantitative explanation of the potential field data of three-dimensional geological structures remains one of the most challenging issues in modern geophysical inversion, and obtaining a stable solution that can simultaneously resolve complicated geological structures is a critical inverse problem in geophysics. I have developed a new method for determining a three-dimensional petrophysical property distribution that reproduces the corresponding potential field anomaly. In contrast with traditional inversion algorithms, my inversion method proposes a new model norm incorporating two important weighting functions: one is the L0 quasinorm (enforcing sparsity constraints), and the other is a depth weighting that counteracts the influence of source depth on the resulting potential field data of the solution. Sparsity constraints are imposed by applying the L0 quasinorm to the model parameters; to make this tractable, an L0 quasinorm minimisation model with different smooth approximations is proposed. Working in the data space (of dimension N, much smaller than the model-space dimension M) combined with a gradient-projection method, or in the model space combined with a modified Newton method for the L0 quasinorm sparsity constraints, leads to a computationally efficient method, since an N × N system is solved instead of an M × M one because N ≪ M. Tests on synthetic data and real datasets demonstrate the stability and validity of the L0 quasinorm sparse inversion method. With the aim of obtaining blocky results, the inversion method with L0 quasinorm sparsity constraints performs better than the traditional L2 norm (standard Tikhonov regularisation): it easily obtains focused and sparse results. Then, the Bouguer anomaly survey data of the salt dome, offshore Louisiana, are considered as a real case study. 
The inversion result shows that the inclusion of the L0 quasinorm sparsity constraints leads to a simpler and better resolved solution, and the density distribution obtained for this area reveals its geological structure. These results confirm the validity of the L0 quasinorm sparsity-constraint method and indicate its applicability to other potential field data inversions and to the exploration of geological structures.

17.
Linearized inversion methods such as Gauss-Newton and multiple re-weighted least-squares are iterative processes in which an update to the current model is computed as a function of the data misfit and the gradient of the data with respect to the model parameters. The main advantage of these methods is their ability to refine the model parameters, although they have a high computational cost for seismic inversion. In the Gauss-Newton method a system of equations, corresponding to the sensitivity matrix, is solved in the least-squares sense at each iteration, while in the multiple re-weighted least-squares method many systems are solved using the same sensitivity matrix. The sensitivity matrix arising from these methods is usually not sparse, limiting the use of standard preconditioners in the solution of the linearized systems. To reduce the computational cost of the linearized inversion methods, we propose preconditioners based on a partial orthogonalization of the columns of the sensitivity matrix. The new approach collapses a band of co-diagonals of the normal-equations matrix into the main diagonal, which is equivalent to computing the least-squares solution starting from a partial solution of the linear system. The preconditioning is driven by a bandwidth L, which can be interpreted as the distance over which the correlation between model parameters is relevant. To illustrate the benefit of the proposed approach, we apply the multiple re-weighted least-squares method to the 2D acoustic seismic waveform inversion problem. We verify the reduction in the number of conjugate-gradient iterations as the bandwidth of the preconditioners increases, which reduces the total computational cost of the inversion.

18.
Structural damage assessment under external loading, such as earthquake excitation, is an important issue in structural safety evaluation. In this regard, appropriate data analysis and feature extraction techniques are required to interpret the measured data, to identify the state of the structure and, if possible, to detect the damage. In this study, recursive subspace identification with a Bona-fide LQ renewing algorithm (RSI-BonaFide-Oblique), incorporating a moving-window technique, is utilized to identify modal parameters such as natural frequencies, damping ratios and mode shapes at each instant of time during strong earthquake excitation. From these, the least-squares stiffness method (LSSM), combined with a model updating technique called the efficient model correction method (EMCM), is used to estimate the first-stage system stiffness matrix using the simplified model from the previously identified modal parameters (nominal model). In the second stage, two different damage assessment algorithms related to the nominal system stiffness matrix are derived. First, the model updating technique (EMCM) is applied to correct the nominal model with the newly identified modal parameters during the strong motion. Second, an element damage index can be calculated using the element damage index method (EDIM) to quantify the damage extent in each element. The proposed methods are verified using shaking table test data from two different types of structures and a building earthquake response record, identifying the damage location, the time of occurrence during the excitation, and the percentage of stiffness reduction.

19.
A new tool for two‐dimensional apparent‐resistivity data modelling and inversion is presented. The study is developed according to the idea that the best way to deal with the ill‐posedness of geoelectrical inverse problems lies in constructing algorithms which allow flexible control of the physical and mathematical elements involved in the resolution. The forward problem is solved through a finite‐difference algorithm, whose main features are a versatile user‐defined discretization of the domain and a new approach to the solution of the inverse Fourier transform. The inversion procedure is based on an iterative smoothness‐constrained least‐squares algorithm. As mentioned, the code is constructed to ensure flexibility in resolution. This is first achieved by starting the inversion from an arbitrarily defined model. In our approach, a Jacobian matrix is calculated at each iteration, using a generalization of Cohn's network sensitivity theorem. Another versatile feature is the possibility of introducing a priori information about the solution. Regions of the domain can be constrained to vary between two limits (the lower and upper bounds) by using inequality constraints. A second possibility is to include the starting model in the objective function used to determine an improved estimate of the unknown parameters, thereby constraining the solution to remain close to that model. Furthermore, the possibility either of defining a discretization of the domain that exactly fits the underground structures or of refining the mesh of the grid certainly leads to more accurate solutions. Control of the mathematical elements in the inversion algorithm is also allowed. The smoothness matrix can be modified in order to penalize roughness in any one direction. An empirical way of assigning the regularization parameter (damping) is defined, but the user can also assign it manually at each iteration. 
An appropriate tool was constructed to handle the inversion results, for example to correct reconstructed models and to check the effects of such changes on the calculated apparent resistivity. Tests on synthetic and real data, in particular on indeterminate cases, show that this flexible approach is a good way to build a detailed picture of the prospected area.  相似文献   
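The smoothness‐constrained least‐squares update at the heart of such inversions can be sketched in one dimension: each iteration minimizes the data misfit plus a damped roughness term, and the damping parameter trades misfit against smoothness. The operator sizes, the linear forward operator `G`, and the damping values below are illustrative assumptions; a 2D code would use separate horizontal and vertical difference operators to penalize roughness in any one direction.

```python
import numpy as np

def roughening(n):
    """First-difference roughening operator D; ||D m||^2 measures model roughness."""
    return np.diff(np.eye(n), axis=0)

def smooth_lsq_step(G, d, m, D, lam):
    """One iteration of smoothness-constrained least squares: minimizes
    ||d - G m||^2 + lam * ||D m||^2 (linear forward, so one step is exact)."""
    lhs = G.T @ G + lam * (D.T @ D)
    rhs = G.T @ (d - G @ m) - lam * (D.T @ (D @ m))
    return m + np.linalg.solve(lhs, rhs)

rng = np.random.default_rng(1)
n = 40
G = rng.standard_normal((25, n))           # underdetermined: 25 data, 40 parameters
m_true = np.sin(np.linspace(0, np.pi, n))  # smooth synthetic model profile
d = G @ m_true + 0.01 * rng.standard_normal(25)
D = roughening(n)

m_light = smooth_lsq_step(G, d, np.zeros(n), D, lam=1e-3)  # weak damping
m_heavy = smooth_lsq_step(G, d, np.zeros(n), D, lam=10.0)  # strong damping
```

Raising the damping `lam` yields a visibly smoother model at the price of a larger data misfit, which is exactly the trade‐off the user‐adjustable regularization parameter controls.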

20.
C. Dobler  F. Pappenberger 《水文研究》2013,27(26):3922-3940
The increasing complexity of hydrological models results in a large number of parameters to be estimated. In order to better understand how these complex models work, efficient screening methods are required to identify the most important parameters. This is of particular importance for models that are used within an operational real‐time forecasting chain such as HQsim. The objectives of this investigation are to (i) identify the most sensitive parameters of the complex HQsim model applied in the Alpine Lech catchment and (ii) compare the model parameter sensitivity rankings obtained from three global sensitivity analysis techniques: (i) regional sensitivity analysis, (ii) Morris analysis and (iii) state‐dependent parameter modelling. The results indicate that parameters affecting snow melt as well as processes in the unsaturated soil zone are highly significant in the analysed catchment. The snow melt parameters show clear temporal patterns in their sensitivity, whereas most of the parameters affecting processes in the unsaturated soil zone do not vary in importance across the year. Overall, the maximum degree‐day factor (meltfunc_max) has been identified as playing a key role within the HQsim model. Although the parameter sensitivity rankings agree between methods for a number of parameters, differing results were obtained for several key parameters. An uncertainty analysis demonstrates that a parameter ranking attained from only one method is subject to large uncertainty. Copyright © 2012 John Wiley & Sons, Ltd.  相似文献   
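Of the three techniques compared, the Morris method is the simplest to sketch: it ranks parameters by the mean absolute elementary effect over random one‐at‐a‐time trajectories. The toy response function and parameter names below are invented for illustration (only `meltfunc_max` is taken from the abstract); this is not the HQsim model or the authors' analysis.

```python
import numpy as np

def morris_mu_star(f, n_params, n_traj=20, delta=0.5, seed=0):
    """Morris elementary-effects screening: mu* (mean |EE| per parameter,
    over random trajectories) ranks parameters by importance."""
    rng = np.random.default_rng(seed)
    ee = [[] for _ in range(n_params)]
    for _ in range(n_traj):
        x = rng.uniform(0.0, 1.0 - delta, n_params)  # random base point in [0,1]^p
        y = f(x)
        for j in rng.permutation(n_params):          # perturb one factor at a time
            x_new = x.copy()
            x_new[j] += delta
            y_new = f(x_new)
            ee[j].append((y_new - y) / delta)        # elementary effect of factor j
            x, y = x_new, y_new
    return np.array([np.mean(np.abs(e)) for e in ee])

# Toy runoff-like response: the melt factor dominates, a soil parameter is
# secondary, and the third parameter is nearly inert (names are hypothetical).
def toy_model(x):
    meltfunc_max, soil_k, inert_par = x
    return 10.0 * meltfunc_max + 2.0 * soil_k**2 + 0.1 * inert_par

mu = morris_mu_star(toy_model, n_params=3)
ranking = np.argsort(mu)[::-1]   # indices from most to least influential
```

For this separable toy function the screening recovers the built‐in ordering exactly; on a real model the ranking would, as the abstract cautions, carry method‐dependent uncertainty.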
