Similar Literature
20 similar documents found (search time: 31 ms)
1.
Parameter identification is an essential step in constructing a groundwater model. The process of identifying model parameter values by conditioning on observed data of the state variable is referred to as the inverse problem. A series of inverse methods has been proposed to solve the inverse problem, ranging from trial-and-error manual calibration to today's complex automatic data-assimilation algorithms. This paper does not attempt to be another overview of inverse models; rather, it analyzes and tracks the evolution of inverse methods over recent decades, mostly within the realm of hydrogeology, revealing their transformation, motivation, and recent trends. Issues confronting the inverse problem, such as dealing with multi-Gaussianity and whether or not to preserve the prior statistics, are discussed.

2.
A Homotopy Method for Parameter Inversion of Viscoelastic Two-Phase Media
Using the dynamic time-domain responses of the medium (displacement, velocity, and acceleration) as the basis for inversion, this paper presents a back-analysis of the material parameters of viscoelastic two-phase media. Based on supplementary conditions, and accounting for the ill-posedness of the inverse problem caused by measurement errors and insufficient measurement data, the parameter inversion problem for viscoelastic two-phase media is recast as the minimization of a nonlinear functional; a globally convergent homotopy method is then applied to find the minimizer of the functional as the solution of the inverse problem. Finally, a numerical inversion analysis is carried out for a two-dimensional half-space model of a viscoelastic two-phase medium. The numerical results show that the objective function converges stably during parameter inversion and exhibits a degree of robustness to measurement noise; compared with inversions based on velocity and acceleration data, inversions based on displacement data achieve higher accuracy and better noise resistance.
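The continuation idea can be sketched in a few lines. This is a minimal illustration with a hypothetical two-parameter forward model g, not the paper's viscoelastic two-phase formulation: the homotopy blends an easy convex functional into the true misfit while the minimizer is tracked from a known starting point.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for the data-misfit functional: F(m) = ||g(m) - d_obs||^2
# with a toy forward model g (not the paper's dynamic response solver).
def g(m):
    return np.array([m[0] * m[1], np.exp(0.5 * m[0]) + m[1] ** 2])

d_obs = g(np.array([1.2, 0.8]))            # synthetic "observed" data

def misfit(m):
    r = g(m) - d_obs
    return r @ r

# Convex auxiliary functional with a known minimizer m0.
m0 = np.array([0.5, 0.5])
def aux(m):
    return np.sum((m - m0) ** 2)

# Homotopy H(m, t) = t*F(m) + (1 - t)*aux(m): follow the minimizer as t is
# stepped from 0 to 1, warm-starting each solve with the previous minimizer.
m = m0.copy()
for t in np.linspace(0.0, 1.0, 11):
    obj = lambda m, t=t: t * misfit(m) + (1.0 - t) * aux(m)
    m = minimize(obj, m, method="BFGS").x

print("recovered parameters:", m)          # should approach [1.2, 0.8]
```

The continuation path is what gives the method its large-range convergence: each subproblem starts close to its own minimizer, so a local optimizer suffices at every step.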

3.
This paper addresses the parametric inverse problem of locating the point of release of atmospheric pollution. A finite set of observed mixing ratios is compared, by use of least squares, with the analogous mixing ratios computed by an adjoint dispersion model for all possible locations of the release. Classically, the least squares are weighted using the covariance matrix of the measurement errors. In practice, however, this matrix cannot be determined for the prevailing part of these errors, which arises from the limited representativity of the dispersion model. An alternative weighting proposed here is related to a unified treatment of the parametric and assimilative inverse problems, corresponding respectively to identification of the point of emission and estimation of the distributed emissions. The proposed weighting is shown to optimize the resolution and numerical stability of the inversion. The most common type of monitoring network, with point detectors at various locations, is shown to constitute a misleading singular case. It is also shown that a monitoring network, under given meteorological conditions, itself contains natural statistics about the emissions, irrespective of prior assumptions.
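A minimal sketch of the parametric inversion, assuming the adjoint runs have been condensed into a source-receptor matrix; the weighting choice and all names here are ours, not the paper's:

```python
import numpy as np

# Hypothetical source-receptor sensitivities: srs[i, j] is the mixing ratio
# detector i would record for a unit release at candidate location j.
# In practice this comes from one adjoint-model run per detector.
rng = np.random.default_rng(0)
n_det, n_loc = 8, 200
srs = rng.lognormal(mean=-2.0, sigma=1.0, size=(n_det, n_loc))

true_j, true_q = 123, 5.0                     # unknown location and rate
y = true_q * srs[:, true_j]
y += 0.05 * y * rng.standard_normal(n_det)    # multiplicative "model" error

W = np.diag(1.0 / np.maximum(y, 1e-12) ** 2)  # one possible weighting choice

# For each candidate location, the release rate minimizing the weighted
# least squares has a closed form; keep the residual cost per location.
cost = np.empty(n_loc)
for j in range(n_loc):
    a = srs[:, j]
    q = (a @ W @ y) / (a @ W @ a)             # best-fit rate for location j
    r = y - q * a
    cost[j] = r @ W @ r

print("estimated location:", int(np.argmin(cost)), "(true:", true_j, ")")
```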

4.
An inverse method is developed to simultaneously estimate multiple hydraulic conductivities, source/sink strengths, and boundary conditions for two-dimensional confined and unconfined aquifers under non-pumping or pumping conditions. The method incorporates noisy observed data (hydraulic heads, groundwater fluxes, or well rates) at measurement locations. With a set of hybrid formulations and sufficient measurement data, the method yields well-posed systems of equations that can be solved efficiently via nonlinear optimization. The solution is stable when measurement errors are increased. The method is successfully tested on problems with regular and irregular geometries, different heterogeneity patterns and variances (the maximum Kmax/Kmin tested is 10,000), and different error magnitudes. Under non-pumping conditions, when error-free observed data are used, the estimated conductivities and recharge rates are accurate to within 8% of the true values. When data contain increasing errors, the estimated parameters become less accurate, as expected. For problems where the underlying parameter variation is unknown, equivalent conductivities and average recharge rates can be estimated. Under pumping (and/or injection) conditions, a hybrid formulation is developed to address these local source/sink effects, while different types of boundary conditions can also exert significant influences on drawdowns. Local grid refinement near wells is not needed to obtain accurate results; inversion thus succeeds on coarse inverse grids, leading to high computational efficiency. Furthermore, flux measurements are not needed for the inversion to succeed, so the method's data requirements are not much different from those of interpreting classic well tests. Finally, inversion accuracy is not sensitive to the degree of nonlinearity of the flow equations. Performance of the inverse method for confined and unconfined aquifer problems is similar in terms of the accuracy of the estimated parameters, the recovered head fields, and the solver speed.
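As an illustration of this kind of estimation, here is a minimal sketch, not the paper's hybrid formulation: a steady one-dimensional confined aquifer with two conductivity zones, fitted to noisy heads by nonlinear least squares.

```python
import numpy as np
from scipy.optimize import least_squares

# Minimal sketch (not the paper's hybrid formulation): steady 1D confined
# flow d/dx(K dh/dx) = -r with fixed-head boundaries and two conductivity
# zones. The recharge rate r is taken as known here: with head data alone,
# scaling (K, r) together leaves heads unchanged, so some flux or source
# information is needed to fix the absolute scale.
n, L, r = 51, 1000.0, 1e-4
x = np.linspace(0.0, L, n); dx = x[1] - x[0]

def solve_heads(K1, K2, hL=100.0, hR=95.0):
    K = np.where(x < L / 2, K1, K2)
    Ke = 2 * K[:-1] * K[1:] / (K[:-1] + K[1:])     # harmonic interface means
    A = np.zeros((n, n)); b = np.full(n, -r * dx**2)
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i + 1] = Ke[i - 1], Ke[i]
        A[i, i] = -(Ke[i - 1] + Ke[i])
    A[0, 0] = A[-1, -1] = 1.0; b[0], b[-1] = hL, hR
    return np.linalg.solve(A, b)

rng = np.random.default_rng(1)
h_obs = solve_heads(10.0, 2.0) + 0.02 * rng.standard_normal(n)

fit = least_squares(lambda p: solve_heads(*p) - h_obs, x0=[5.0, 5.0],
                    bounds=([0.1, 0.1], [100.0, 100.0]))
print("estimated K1, K2:", fit.x)   # close to the true values (10, 2)
```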

5.
On seismograms recorded at sea, bubble-pulse oscillations can present a serious problem to an interpreter. We propose a new approach, based on generalized linear inverse theory, to the solution of the debubbling problem. Under the usual assumption that a seismogram can be modelled as the convolution of the earth's impulse response and a source wavelet, we show that estimation of either the wavelet or the impulse response can be formulated as a generalized linear inverse problem. This parametric approach involves solving a system of equations by minimizing the error vector (ΔX = Xobs − Xcal) in a least-squares sense. One of the most significant results is that the method enables us to control the accuracy of the solution so that it is consistent with the observational errors and/or known noise levels. The complete debubbling procedure can be described in four steps: (1) apply minimum entropy deconvolution to the observed data to obtain a deconvolved spike trace, a first approximation to the earth's response function; (2) use this trace and the observed data as input for the generalized linear inverse procedure to compute an estimated basic bubble-pulse wavelet; (3) use the results of steps 1 and 2 to construct the compound source signature consisting of the primary pulse plus appropriate bubble oscillations; and (4) use the compound source signature and the observed data as input for the generalized linear inverse method to determine the estimated earth impulse response: a debubbled, deconvolved seismogram. We illustrate the applicability of the new approach with a set of synthetic seismic traces and with a set of field seismograms. A disadvantage of the procedure is that it is computationally expensive, so it may be most appropriate in cases where standard analysis techniques do not give acceptable results. In such cases the inherent advantages of the method may be exploited to provide better-quality seismograms.
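Step 2 of the procedure, estimating the bubble wavelet given a first approximation of the earth response, reduces under the convolution model to a linear least-squares problem. A minimal sketch follows, with a toy reflectivity and wavelet, and simple damping standing in for the paper's error-controlled solution:

```python
import numpy as np
from scipy.linalg import toeplitz

# With an estimated impulse response r (e.g. from MED, step 1), the
# convolution model d = r * w is linear in the wavelet w, so w follows
# from damped least squares. All signals here are toy illustrations.
rng = np.random.default_rng(2)
nt, nw = 200, 30
r = np.zeros(nt); r[[20, 55, 120, 160]] = [1.0, -0.6, 0.4, -0.3]  # spikes

t = np.arange(nw)
w_true = np.exp(-0.15 * t) * np.sin(0.9 * t)       # toy bubble wavelet
d = np.convolve(r, w_true)[:nt] + 0.01 * rng.standard_normal(nt)

# Convolution matrix: column j of R is r delayed by j samples.
R = toeplitz(r, np.zeros(nw))
# Damped (regularized) least squares keeps the solution consistent with
# the noise level, in the spirit of the generalized linear inverse.
mu = 0.01
w_est = np.linalg.solve(R.T @ R + mu * np.eye(nw), R.T @ d)
print("wavelet relative error:", np.linalg.norm(w_est - w_true)
      / np.linalg.norm(w_true))
```

Step 4 has the same structure with the roles swapped: the compound source signature builds the convolution matrix and the impulse response becomes the unknown.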

6.
Seismic Data Reconstruction Based on the Non-uniform Fourier Transform
Irregularly sampled seismic data seriously degrade multichannel processing of seismic data. This work combines the non-uniform Fourier transform with Bayesian parameter inversion to reconstruct irregularly sampled, spatially band-limited seismic data. For each frequency, the bandwidth of the reconstructed data is determined from the minimum apparent velocity, and the spatial Fourier coefficients of the reconstructed data are then estimated from the irregular seismic data. Reconstruction of irregular seismic data is treated as a geophysical inverse problem of information recovery, and Bayesian parameter inversion theory is used to estimate the Fourier coefficients. A conjugate-gradient algorithm is used in the inversion to ensure stable solutions and to accelerate convergence. Tests on synthetic models and field data confirm the effectiveness and practicality of the method.
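A minimal single-frequency sketch of the reconstruction, with an assumed bandwidth instead of one derived from the minimum apparent velocity, and simple damping standing in for the Bayesian prior:

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

# One frequency slice: estimate band-limited spatial Fourier coefficients c
# from traces at irregular positions x, then evaluate on a regular grid.
rng = np.random.default_rng(3)
n_tr, n_k = 40, 21                       # irregular traces, Fourier terms
x = np.sort(rng.uniform(0.0, 1.0, n_tr)) # irregular trace coordinates
k = np.arange(-(n_k // 2), n_k // 2 + 1) # assumed band-limited wavenumbers

A = np.exp(2j * np.pi * np.outer(x, k))  # non-uniform Fourier synthesis
c_true = rng.standard_normal(n_k) + 1j * rng.standard_normal(n_k)
c_true *= np.exp(-0.05 * k**2)           # smooth spatial spectrum
d = A @ c_true + 0.01 * (rng.standard_normal(n_tr)
                         + 1j * rng.standard_normal(n_tr))

# Damped normal equations (A^H A + mu I) c = A^H d, solved with CG;
# the damping plays the role of the Bayesian prior on the coefficients.
mu = 0.1
op = LinearOperator((n_k, n_k),
                    matvec=lambda v: A.conj().T @ (A @ v) + mu * v,
                    dtype=complex)
c_est, info = cg(op, A.conj().T @ d)
x_reg = np.linspace(0.0, 1.0, 101)       # regular output grid
d_reg = np.exp(2j * np.pi * np.outer(x_reg, k)) @ c_est
print("CG converged:", info == 0,
      "| coef rel. error:", np.linalg.norm(c_est - c_true)
      / np.linalg.norm(c_true))
```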

7.
Benjamin Ross, Ground Water, 1984, 22(5): 569–572
In using least-squares parameter estimation techniques to solve for hydrogeologic parameters, one may use a weighting function to reflect the differing reliabilities of head measurements. In studies published to date, the weighting function has been used in an ad hoc manner or not at all. The inverse square of the observed hydraulic gradient, adjusted to reflect the modeler's perception of geologic heterogeneity and data reliability, is typically an appropriate weighting function.
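A small sketch of this weighting; the reliability adjustment is an assumed illustration of the subjective scaling the note describes:

```python
import numpy as np

# Weights proportional to the inverse square of the observed hydraulic
# gradient at each measurement point, scaled by a subjective reliability
# factor (values here are illustrative).
grad_h = np.array([0.004, 0.010, 0.0005, 0.002])   # observed |dh/dx| at wells
reliability = np.array([1.0, 1.0, 0.5, 1.0])       # assumed data-quality factors

w = reliability / grad_h**2
w /= w.sum()                                        # normalized weights

# Weighted least-squares objective for simulated vs. observed heads:
def objective(h_sim, h_obs):
    return np.sum(w * (h_sim - h_obs) ** 2)
```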

8.
The paper discusses the smallest obtainable parameter errors (variances) in least-squares interpretation. Useful approximations of the sums of squares contained in the minimum-error expressions are obtained using results of numerical integration. The approximations lead to especially simple results for long interpretation profiles, where the parameter errors are proportional to the square root of the point separation. Formulae are developed, and examples shown, for minimum-error calculation in gravimetric interpretation with the cylindrical model and in magnetic interpretation with the two-dimensional plate model. The smallest errors are obtained when the interpretation profile is chosen around the anomaly maximum, except for dip and depth-extent interpretation of magnetic plates.

9.
10.
The interactive multi-objective genetic algorithm (IMOGA) combines traditional optimization with an interactive framework that considers the subjective knowledge of hydrogeological experts in addition to quantitative calibration measures such as calibration errors and regularization to solve the groundwater inverse problem. The IMOGA is inherently a deterministic framework and identifies multiple large-scale parameter fields (typically head and transmissivity data are used to identify transmissivity fields). These large-scale parameter fields represent the optimal trade-offs between the different criteria (quantitative and qualitative) used in the IMOGA. This paper further extends the IMOGA to incorporate uncertainty in both the large-scale trends and the small-scale variability (which cannot be resolved using the field data) of the parameter fields. The different parameter fields identified by the IMOGA represent the uncertainty in large-scale trends, and this uncertainty is modeled using a Bayesian approach in which calibration error, regularization, and the expert's subjective preference are combined to compute a likelihood metric for each parameter field. Small-scale (stochastic) variability is modeled using a geostatistical approach and added onto the large-scale trends identified by the IMOGA. The approach is applied to the Waste Isolation Pilot Plant (WIPP) case study. Results, with and without expert interaction, are analyzed, and the impact that expert judgment has on predictive uncertainty at the WIPP site is discussed. It is shown that, for this case, expert interaction leads to more conservative solutions, as the expert compensates for some of the lack of data and the modeling approximations introduced in the formulation of the problem.
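A sketch of how such a likelihood metric might combine the three criteria into posterior weights for the candidate fields; the functional form and the trade-off coefficients below are our assumptions, not the paper's:

```python
import numpy as np

# Combine calibration error, a regularization (roughness) measure, and an
# expert preference score into an unnormalized log-likelihood per field,
# then normalize to posterior weights over the Pareto-optimal fields.
fields = {                       # per candidate transmissivity field:
    "A": dict(rmse=1.2, rough=0.8, expert=0.9),   # head RMSE, roughness,
    "B": dict(rmse=0.9, rough=1.5, expert=0.4),   # expert score in (0, 1]
    "C": dict(rmse=1.0, rough=1.0, expert=0.7),
}
beta_e, beta_r, beta_x = 2.0, 0.5, 1.0            # assumed trade-off weights

loglik = {k: -beta_e * f["rmse"]**2 - beta_r * f["rough"]
             + beta_x * np.log(f["expert"])
          for k, f in fields.items()}
m = max(loglik.values())
w = {k: np.exp(v - m) for k, v in loglik.items()} # stable exponentiation
Z = sum(w.values())
posterior = {k: v / Z for k, v in w.items()}
print(posterior)   # weights for sampling the large-scale trends
```

Small-scale variability would then be simulated geostatistically and superimposed on trend fields drawn according to these weights.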

11.
A calibration method to solve the groundwater inverse problem under steady- and transient-state conditions is presented. The method compares kriged and numerical head-field gradients to modify hydraulic conductivity without the use of nonlinear optimization techniques. The process is repeated iteratively until a close match with piezometric data is reached. The approach includes a damping factor, to avoid divergence and oscillation of the solution in areas of low hydraulic gradient, and a weighting factor, to account for temporal head variation in transient simulations. The efficiency of the method in terms of computing time and calibration results is demonstrated with a synthetic field. It is shown that the proposed method provides parameter fields that reproduce both hydraulic conductivity and piezometric data in a few forward-model solutions. Stochastic numerical experiments are conducted to evaluate the sensitivity of the method to the damping function and to head-field estimation errors.
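One plausible reading of a single calibration iteration, offered as our interpretation rather than the authors' exact formulas: at fixed flux, Darcy's law gives K proportional to the inverse head gradient, which suggests a damped multiplicative update.

```python
import numpy as np

# One calibration iteration (our reading of the scheme): at fixed flux,
# q = -K * grad(h) implies K ~ 1/|grad h|, so cells where the model
# gradient exceeds the kriged (observed) gradient need a larger K, and
# vice versa. A damping factor alpha < 1 limits the step, and the
# correction is switched off where gradients are tiny.
def update_K(K, grad_model, grad_kriged, alpha=0.5, gmin=1e-6):
    ratio = np.ones_like(K)
    ok = (np.abs(grad_kriged) > gmin) & (np.abs(grad_model) > gmin)
    ratio[ok] = np.abs(grad_model[ok]) / np.abs(grad_kriged[ok])
    return K * ratio**alpha        # damped multiplicative correction

K = np.full(100, 5.0)              # current conductivity field
gm = np.random.default_rng(4).uniform(1e-3, 1e-2, 100)   # model gradients
gk = gm * 1.3                      # kriged gradients steeper than modelled
K_new = update_K(K, gm, gk)        # K decreases to steepen modelled heads
```

Iterating this update with a forward-model solve in between continues until the simulated heads match the piezometric data, with no nonlinear optimizer involved.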

12.
The influence of vertical transverse isotropy (VTI) on amplitude-versus-angle (AVA) responses is first studied on the linearized formula for the PP-reflection coefficient. Up to medium angles of incidence, as in the isotropic case, only two quantities can be retrieved, the second with less accuracy than the first. These quantities are the P-impedance and the S-impedance multiplied by 1 − δ/2, where δ is one of the two anisotropic parameters introduced by Thomsen. To extend these results to the exact formulae, the AVA analysis is then formulated as an inverse problem and a least-squares cost function is defined. A study of the eigenvalues and eigenvectors of the Hessian of the least-squares cost function confirms these results. Though dependent on the amount of data and on the maximum angle of incidence available, the results hold for small and medium angles of incidence. Thanks to this inverse formulation, the work can be extended to multicomponent AVA responses. The addition of PS-reflection data further constrains the problem, but the S-impedance and δ remain coupled. The addition of SS-reflection data, however, gives an estimation of both P- and S-impedances and δ. The last two parameters, the density and the second anisotropic parameter ε, remain difficult to determine, at least with small-to-medium angular apertures.
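The eigenvalue diagnostic can be illustrated in the simpler isotropic two-term analogue R(θ) ≈ A + B sin²θ; the same qualitative conclusion (a well-resolved first quantity, a poorly resolved second) emerges from the Gauss-Newton Hessian for a small-to-medium angular aperture. This sketch is ours, not the paper's VTI formulation:

```python
import numpy as np

# Two-term AVA analogue: R(theta) ~ A + B*sin^2(theta). The Hessian J^T J
# of the least-squares cost shows how much better the first parameter is
# resolved than the second for a limited angular aperture (in the VTI case
# the second quantity is the S-impedance combined with delta).
theta = np.deg2rad(np.arange(0.0, 31.0, 3.0))      # small/medium angles
J = np.column_stack([np.ones_like(theta), np.sin(theta) ** 2])

H = J.T @ J                                        # Gauss-Newton Hessian
evals, evecs = np.linalg.eigh(H)
print("eigenvalues:", evals)                       # orders of magnitude apart
print("condition number:", evals[-1] / evals[0])
# The dominant eigenvector points essentially along A: the intercept is
# well resolved, the gradient term much less so for this aperture.
```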

13.
The error in physically-based rainfall-runoff modelling is broken into components, and these components are assigned to three groups: (1) model structure error, associated with the model’s equations; (2) parameter error, associated with the parameter values used in the equations; and (3) run time error, associated with rainfall and other forcing data. The error components all contribute to “integrated” errors, such as the difference between simulated and observed runoff, but their individual contributions cannot usually be isolated because the modelling process is complex and there is a lack of knowledge about the catchment and its hydrological responses. A simple model of the Slapton Wood Catchment is developed within a theoretical framework in which the catchment and its responses are assumed to be known perfectly. This makes it possible to analyse the contributions of the error components when predicting the effects of a physical change in the catchment. The standard approach to predicting change effects involves: (1) running “unchanged” simulations using current parameter sets; (2) making adjustments to the sets to allow for physical change; and (3) running “changed” simulations. Calibration or uncertainty-handling methods such as GLUE are used to obtain the current sets based on forcing and runoff data for a calibration period, by minimising or creating statistical bounds for the “integrated” errors in simulations of runoff. It is shown that current parameter sets derived in this fashion are unreliable for predicting change effects, because of model structure error and its interaction with parameter error, so caution is needed if the standard approach is to be used when making management decisions about change in catchments.

14.
Tomography is the inversion of boundary projections to reconstruct the internal characteristics of the medium between the source and detector boreholes. Tomography is used to image the structure of geological formations and localized inhomogeneities. This imaging technique may be applied to either seismic or electromagnetic data, typically recorded as transmission measurements between two or more boreholes. Algebraic algorithms are error-driven solutions whose goal is to minimize the error between measured and predicted projections. The purpose of this study is to assess the effect of the ray-propagation model, the measurement errors, and the error functions on the resolving ability of algebraic algorithms. The problem under consideration is the identification of a two-dimensional circular anomaly surveyed using crosshole measurements. The results show that: (1) convergence to the position of the circular anomaly in depth between vertical boreholes is significantly better than convergence in the horizontal direction; (2) error surfaces may not be convex, even in the absence of measurement and model errors; (3) the distribution of information content significantly affects the convexity of averaging error functions; (4) measurement noise and model inaccuracy manifest in increased residuals and in reduced convergence gradients near optimum convergence; and (5) the maximum-ray-error function increases convergence gradients compared with the average error function and is unaffected by the distribution of information content, but has a higher probability of local minima. Inversions based on minimizing the maximum ray error may therefore be advantageous in crosshole tomography, although they require smooth projections. These results are applicable to both electromagnetic and seismic data for wavelengths significantly smaller than the size of the anomalies.
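A minimal algebraic-reconstruction (Kaczmarz/ART) sketch with a straight-ray model and average-error-driven updates; the geometry, grid, and noise levels are arbitrary illustrations:

```python
import numpy as np

# ART (Kaczmarz) on a straight-ray crosshole geometry: rows of G hold ray
# lengths through the cells of a slowness grid, t are travel times, and
# each sweep projects the current model onto the hyperplane of one ray
# equation. Curved-ray or full-wave models would change G, not the loop.
rng = np.random.default_rng(5)
n_cells, n_rays = 100, 150
G = rng.uniform(0.0, 1.0, (n_rays, n_cells))
G[G < 0.7] = 0.0                       # each ray touches a subset of cells

s_true = np.full(n_cells, 1.0)
s_true[44:47] = 1.5                    # small slow anomaly
t = G @ s_true + 0.001 * rng.standard_normal(n_rays)

s = np.full(n_cells, 1.0)              # starting model
lam = 0.5                              # relaxation factor
for sweep in range(20):
    for i in range(n_rays):
        gi = G[i]
        denom = gi @ gi
        if denom > 0.0:
            s += lam * (t[i] - gi @ s) / denom * gi

print("residual RMS:", np.sqrt(np.mean((G @ s - t) ** 2)))
```

A maximum-ray-error variant would instead update only against the ray with the largest residual each pass, which is the alternative error function whose trade-offs the abstract describes.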

15.
A new methodology is proposed for the development of parameter-independent reduced models for transient groundwater flow models. The model-reduction technique is based on Galerkin projection of a highly discretized model onto a subspace spanned by a small number of optimally chosen basis functions. We propose two greedy algorithms that iteratively select optimal parameter sets and snapshot times from the parameter space and the time domain in order to generate snapshots. The snapshots are used to build the Galerkin projection matrix, which covers the entire parameter space of the full model. We then apply the reduced subspace model to solve two inverse problems: a deterministic inverse problem and a Bayesian inverse problem with a Markov chain Monte Carlo (MCMC) method. The proposed methodology is validated with a conceptual one-dimensional groundwater flow model. We then apply the methodology to a basin-scale conceptual aquifer in the Oristano plain of Sardinia, Italy. Using the methodology, the full model, governed by 29,197 ordinary differential equations, is reduced by two to three orders of magnitude, resulting in a drastic reduction in computational requirements.
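A minimal sketch of the projection step for a linear transient model, with a single snapshot trajectory standing in for the paper's greedy snapshot selection across the parameter space:

```python
import numpy as np

# Galerkin projection for a linear transient model dh/dt = A h + b:
# collect snapshot solutions, take the leading left singular vectors as
# the basis Phi, and time-step the r-dimensional reduced system.
n, r, dt, nsteps = 400, 10, 0.01, 200
rng = np.random.default_rng(6)
A = -np.eye(n) + 0.4 * np.diag(np.ones(n - 1), 1) \
              + 0.4 * np.diag(np.ones(n - 1), -1)   # diffusion-like operator
b = rng.uniform(0.0, 1.0, n)                        # sources/boundary terms

h = np.zeros(n); snaps = []
for _ in range(nsteps):                             # full model, implicit Euler
    h = np.linalg.solve(np.eye(n) - dt * A, h + dt * b)
    snaps.append(h.copy())

Phi = np.linalg.svd(np.array(snaps).T, full_matrices=False)[0][:, :r]
Ar, br = Phi.T @ A @ Phi, Phi.T @ b                 # Galerkin projection

z = np.zeros(r)                                     # reduced model run
for _ in range(nsteps):
    z = np.linalg.solve(np.eye(r) - dt * Ar, z + dt * br)

print("reduced-model error:", np.linalg.norm(Phi @ z - h) / np.linalg.norm(h))
```

In an MCMC setting the payoff is that every proposal evaluation solves the r-dimensional system rather than the full one; the greedy snapshot selection is what keeps the basis valid away from the snapshot parameters.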

16.
The seismological inverse problem has much in common with the data assimilation problem found in meteorology and oceanography. Using the data assimilation methodology, I will formulate the seismological inverse problem for estimating seismic source and Earth structure parameters in the form of a weak-constraint generalized inverse, in which the seismic wave equation and the associated initial and boundary conditions are allowed to contain errors. The resulting Euler–Lagrange equations are closely related to the adjoint method and the scattering-integral method, which have been successfully applied in full-3D, full-wave seismic tomography and earthquake source parameter inversions. I will review some recent applications of the full-wave methodology in seismic tomography and seismic source parameter inversions and discuss some challenging issues related to the computational implementation and the effective exploitation of seismic waveform data.

17.
18.
Regularization is the most popular technique to overcome the null space of model parameters in geophysical inverse problems, and is implemented by including a constraint term as well as the data-misfit term in the objective function being minimized. The weighting of the constraint term relative to the data-fitting term is controlled by a regularization parameter, and its adjustment to obtain the best model has received much attention. The empirical Bayes approach discussed in this paper determines the optimum value of the regularization parameter from a given data set. The regularization term can be regarded as representing a priori information about the model parameters. The empirical Bayes approach and its more practical variant, Akaike's Bayesian Information Criterion, adjust the regularization parameter automatically in response to the level of data noise and to the suitability of the assumed a priori model information for the given data. When the noise level is high, the regularization parameter is made large, which means that the a priori information is emphasized. If the assumed a priori information is not suitable for the given data, the regularization parameter is made small. Both these behaviours are desirable characteristics for the regularized solutions of practical inverse problems. Four simple examples are presented to illustrate these characteristics for an underdetermined problem, a problem adopting an improper prior constraint and a problem having an unknown data variance, all frequently encountered geophysical inverse problems. Numerical experiments using Akaike's Bayesian Information Criterion for synthetic data provide results consistent with these characteristics. In addition, concerning the selection of an appropriate type of a priori model information, a comparison between four types of difference-operator model (the zeroth-, first-, second- and third-order difference-operator models) suggests that the automatic determination of the optimum regularization parameter becomes more difficult with increasing order of the difference operators. Accordingly, taking the effect of data noise into account, it is better to employ the lower-order difference-operator models for inversions of noisy data.
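A small empirical-Bayes sketch for the zeroth-order (norm-damping) prior, selecting the regularization parameter by maximizing the Gaussian evidence; the noise variance is assumed known here, which the paper's unknown-variance case relaxes:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Empirical Bayes with a zeroth-order prior: for d = G m + e with
# e ~ N(0, s2*I) and m ~ N(0, (s2/lam^2)*I), the evidence p(d | lam) is
# Gaussian, so an ABIC-style criterion -2*log p(d | lam) can be minimized
# over lam directly.
rng = np.random.default_rng(7)
N, M, s2 = 40, 60, 0.01                     # underdetermined: M > N
G = rng.standard_normal((N, M)) / np.sqrt(M)
m_true = rng.standard_normal(M)
d = G @ m_true + np.sqrt(s2) * rng.standard_normal(N)

def neg2_log_evidence(lam):
    C = s2 * (np.eye(N) + (G @ G.T) / lam**2)   # covariance of d given lam
    return -2.0 * multivariate_normal(mean=np.zeros(N), cov=C).logpdf(d)

lams = np.logspace(-2, 2, 41)
scores = [neg2_log_evidence(l) for l in lams]
lam_best = lams[int(np.argmin(scores))]
print("selected regularization parameter:", lam_best)

# The regularized estimate at the selected lam:
m_hat = np.linalg.solve(G.T @ G + lam_best**2 * np.eye(M), G.T @ d)
```

Raising the noise level s2 pushes the selected lam upward, the behaviour the abstract highlights; a first- or higher-order difference operator would replace the identity in the prior.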

19.
Hydrological Sciences Journal, 2013, 58(5): 917–935

For urban drainage and urban flood modelling applications, fine spatial and temporal rainfall resolution is required, and simulation methods have been developed to overcome the problem of data limitations. Although temporal resolutions coarser than 10–20 minutes are not well suited to detailed rainfall-runoff modelling of urban drainage networks, longer time intervals can be used for master planning or similar purposes in the absence of monitored data. A methodology is presented for the temporal disaggregation and spatial distribution of hourly rainfall fields, tested on observations for a 10-year period at 16 raingauges in the urban catchment of Dalmuir (UK). Daily rainfall time series are simulated with a generalized linear model (GLM). Next, using a single-site disaggregation model, the daily data of the central gauge in the catchment are downscaled to an hourly time scale. This hourly pattern is then applied linearly in space to disaggregate the daily data into hourly rainfall at all sites. Finally, the spatial rainfall field is obtained using inverse distance weighting (IDW) to interpolate the data over the whole catchment. Results are satisfactory: at individual sites within the region, the simulated data preserve properties that match the observed statistics to a level acceptable for practical purposes.
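A minimal sketch of the final spatial step, IDW interpolation of one hour of gauge rainfall onto a catchment grid; the exponent p = 2 is a common default assumed here, not necessarily the paper's:

```python
import numpy as np

# Inverse distance weighting of hourly gauge rainfall onto grid points.
def idw(xy_gauges, values, xy_grid, p=2.0, eps=1e-9):
    d = np.linalg.norm(xy_grid[:, None, :] - xy_gauges[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** p               # eps guards collocated points
    w /= w.sum(axis=1, keepdims=True)
    return w @ values

gauges = np.array([[0.1, 0.2], [0.8, 0.3], [0.5, 0.9], [0.3, 0.6]])
rain_1h = np.array([2.4, 0.8, 1.6, 3.1])   # one hour of disaggregated rain (mm)
gx, gy = np.meshgrid(np.linspace(0, 1, 25), np.linspace(0, 1, 25))
grid = np.column_stack([gx.ravel(), gy.ravel()])
field = idw(gauges, rain_1h, grid)          # hourly rainfall field on the grid
```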

20.
Root-zone soil water content affects plant water availability and the land energy and water balances. Because the hydrological model error, the observation errors, and the statistical characteristics of those errors are unknown, the widely used Kalman filter (KF) and its extensions struggle to retrieve the root-zone soil water content from the surface soil water content. If the soil hydraulic parameters are poorly estimated, the KF and its extensions fail to accurately estimate the root-zone soil water. The H-infinity filter (HF) is a robust version of the KF; it is widely used in data assimilation and is superior to the KF, especially when the performance of the model is not well understood. The objective of this study is to examine the impact of uncertain soil hydraulic parameters, initial soil moisture content, and observation period on the ability of HF assimilation to predict in situ soil water content. Seven cases are studied. The results show that the soil hydraulic parameters play a critical role in the course of assimilation. When the soil hydraulic parameters are poorly estimated, an accurate estimate of root-zone soil water content cannot be retrieved by the HF assimilation approach; when the estimated soil hydraulic parameters are close to their actual values, the soil water content at various depths can be retrieved accurately. The HF assimilation is not very sensitive to the initial soil water content, whose influence on the assimilation scheme is eliminated after about 5–7 days. The observation interval is important for retrieving the soil water profile with the HF: the shorter the observation interval, the shorter the time required to reach the actual soil water content. However, the retrieval results are not very accurate at a depth of 100 cm. Determining the weighting coefficient and the error attenuation parameter in the HF assimilation is also complex; here, a trial-and-error method was used, in which a limited range for each parameter was first established and the best parameter set was then selected from that range. For the soil conditions investigated, the HF assimilation results are better than the open-loop results.
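A sketch of one common suboptimal discrete-time H-infinity recursion on a linear two-layer toy (surface layer observed, root zone hidden); the paper's soil-moisture model is nonlinear and richer, so this only illustrates the filter structure and the role of the robustness level theta:

```python
import numpy as np

# Suboptimal discrete-time H-infinity filter on a toy linear system with
# two moisture "layers" and only the surface layer observed. The matrices
# and parameter values are illustrative assumptions. theta = 1/gamma^2
# tunes robustness: theta -> 0 recovers the Kalman filter recursion.
rng = np.random.default_rng(8)
F = np.array([[0.95, 0.04], [0.05, 0.96]])   # surface <-> root-zone exchange
H = np.array([[1.0, 0.0]])                   # only surface moisture observed
W, V = 1e-4 * np.eye(2), np.array([[1e-3]])  # assumed noise weights
S = np.eye(2)                                # weight on estimating both states
theta = 0.1                                  # robustness level

x = np.array([0.30, 0.25])                   # true [surface, root] moisture
xh, P = np.array([0.20, 0.40]), 0.1 * np.eye(2)   # poor initial guess

for k in range(60):
    x = F @ x + np.sqrt(1e-4) * rng.standard_normal(2)
    y = H @ x + np.sqrt(1e-3) * rng.standard_normal(1)
    # H-infinity gain and covariance-like update:
    Vi = np.linalg.inv(V)
    M = np.linalg.inv(np.eye(2) - theta * S @ P + H.T @ Vi @ H @ P)
    K = P @ M @ H.T @ Vi
    xh = F @ xh + F @ K @ (y - H @ xh)
    P = F @ P @ M @ F.T + W

print("true:", x, "estimated:", xh)
```

The update remains valid only while P stays consistent with the chosen theta (too aggressive a theta breaks the positivity condition), which mirrors the abstract's point that the weighting coefficient and error attenuation parameter must be tuned, here by trial and error.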
