Similar Documents
20 similar documents found (search time: 31 ms)
1.
2.
3.
Regularization is the most popular technique to overcome the null space of model parameters in geophysical inverse problems, and is implemented by including a constraint term as well as the data‐misfit term in the objective function being minimized. The weighting of the constraint term relative to the data‐fitting term is controlled by a regularization parameter, and its adjustment to obtain the best model has received much attention. The empirical Bayes approach discussed in this paper determines the optimum value of the regularization parameter from a given data set. The regularization term can be regarded as representing a priori information about the model parameters. The empirical Bayes approach and its more practical variant, Akaike's Bayesian Information Criterion, adjust the regularization parameter automatically in response to the level of data noise and to the suitability of the assumed a priori model information for the given data. When the noise level is high, the regularization parameter is made large, which means that the a priori information is emphasized. If the assumed a priori information is not suitable for the given data, the regularization parameter is made small. Both these behaviours are desirable characteristics for the regularized solutions of practical inverse problems. Four simple examples are presented to illustrate these characteristics for an underdetermined problem, a problem adopting an improper prior constraint, and a problem having an unknown data variance, all of which are frequently encountered in geophysical inverse problems. Numerical experiments using Akaike's Bayesian Information Criterion for synthetic data provide results consistent with these characteristics.
In addition, concerning the selection of an appropriate type of a priori model information, a comparison between four types of difference‐operator model – the zeroth‐, first‐, second‐ and third‐order difference‐operator models – suggests that the automatic determination of the optimum regularization parameter becomes more difficult with increasing order of the difference operators. Accordingly, taking the effect of data noise into account, it is better to employ the lower‐order difference‐operator models for inversions of noisy data.
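A minimal sketch (not the authors' code) of the selection principle described above: for a toy underdetermined linear problem with a zeroth-order (identity) prior, the regularization parameter is chosen by minimizing -2 log of the Bayesian marginal likelihood, the quantity underlying ABIC. All sizes and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_data, n_model = 25, 40          # underdetermined: fewer data than unknowns
x = np.linspace(0.0, 1.0, n_model)
m_true = np.sin(2.0 * np.pi * x)
G = rng.normal(size=(n_data, n_model)) / np.sqrt(n_model)
sigma = 0.1
d = G @ m_true + sigma * rng.normal(size=n_data)

def neg2_log_evidence(lam):
    # With prior m ~ N(0, (sigma^2/lam) I), the marginal distribution of the
    # data is Gaussian with covariance sigma^2 (I + G G^T / lam).
    C = np.eye(n_data) + G @ G.T / lam
    sign, logdet = np.linalg.slogdet(C)
    quad = d @ np.linalg.solve(C, d) / sigma**2
    return n_data * np.log(2.0 * np.pi * sigma**2) + logdet + quad

lams = np.logspace(-4, 4, 81)
scores = [neg2_log_evidence(lam) for lam in lams]
lam_best = lams[int(np.argmin(scores))]

# MAP model for the selected parameter: (G^T G + lam I) m = G^T d
m_map = np.linalg.solve(G.T @ G + lam_best * np.eye(n_model), G.T @ d)
print(f"selected lambda = {lam_best:.3g}")
```

In line with the abstract, making the noise level `sigma` larger in this toy pushes the selected `lam_best` upward, i.e. the prior is weighted more heavily.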

4.
Nonuniqueness in geophysical inverse problems is naturally resolved by incorporating prior information about unknown models into observed data. In practical estimation procedures, the prior information must be quantitatively expressed. We represent the prior information in the same form as observational equations, in general nonlinear equations with random errors, and treat them as data. We may then define a posterior probability density function of the model parameters for given observed data and prior data, and use the maximum likelihood criterion to solve the problem. Supposing Gaussian errors in both observed data and prior data, we obtain a simple algorithm for iterative search to find the maximum likelihood estimates. We also obtain an asymptotic expression for the covariance of the estimation errors, which gives a good approximation to the exact covariance when the estimated model is linearly close to a true model. We demonstrate that our approach is a general extension of various inverse methods dealing with Gaussian data. By way of example, we apply the new approach to a problem of inferring the final rupture state of the 1943 Tottori earthquake (M = 7.4) from coseismic geodetic data. The example shows that the use of sufficient prior information effectively suppresses both the nonuniqueness and the nonlinearity of the problem.
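In the linear Gaussian case, the prior-as-data idea above reduces to solving a weighted stacked system of observational and prior equations. A hedged toy sketch (the operator, noise levels, and "prior data" below are all illustrative assumptions, not the paper's example):

```python
import numpy as np

rng = np.random.default_rng(1)
n_data, n_model = 10, 20                     # nonunique on the data alone
G = rng.normal(size=(n_data, n_model))
m_true = rng.normal(size=n_model)
sigma_d, sigma_p = 0.05, 0.3
d = G @ m_true + sigma_d * rng.normal(size=n_data)

# "Prior data": a direct, noisy a priori estimate of each parameter,
# written in the same form as an observation equation m = m_prior + error.
m_prior = m_true + sigma_p * rng.normal(size=n_model)

A = np.vstack([G / sigma_d, np.eye(n_model) / sigma_p])   # weighted stack
b = np.concatenate([d / sigma_d, m_prior / sigma_p])
m_est, *_ = np.linalg.lstsq(A, b, rcond=None)

# Without the prior block the 10x20 system is underdetermined; with it,
# the stacked 30x20 system has a unique least-squares solution.
print("rms error:", np.sqrt(np.mean((m_est - m_true) ** 2)))
```

For the nonlinear case the paper iterates; this linear sketch shows only why stacking prior equations removes the nonuniqueness.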

5.
Man's engineering activities are concentrated on the uppermost part of the earth's crust, which is called the engineering-geologic zone. This zone is characterized by significant spatial-temporal variation in the physical state and properties of the rocks and of the saturating waters. This variation determines the specificity of geophysical and, particularly, geoelectrical investigations. Planning of geoelectric investigations in the engineering-geologic zone and their subsequent interpretation requires a priori geologic-geophysical information on the main peculiarities of the engineering-geologic and hydrogeologic conditions in the region under investigation. This information serves as a basis for the creation of an initial geoelectric model of the section. Following field investigations, the model is used in interpretation. Formalization of this a priori model can be achieved by the solution of direct geoelectric problems. Additional geologic-geophysical information incorporated in the model of the medium makes it possible to diminish the effect of the “principle of equivalence” by introducing flexible limitations on the section's parameters. Further geophysical observations, as well as the correlations between geophysical and engineering-geologic parameters of the section, permit the next step in the specification of the geoelectric model and its approximation to the real medium. Subsequent corrections of the model are made as additional information accumulates. The solution of inverse problems with the utilization of computer programs permits specification of the model in the general iterative cycle of interpretation.

6.
A new class of algorithms for solving the inverse problems of gravity prospecting is considered. The best interpretation is selected from the set Q of admissible versions by optimality criteria that are borrowed from decision-making theory and adapted for geophysical problems. The concept of retrieving information about the sources of gravity anomalies, which treats the result of the interpretation as a set of locally optimal solutions of the inverse problem rather than as a single globally optimal solution, is discussed. The locally optimal solutions of the inverse problem are, in effect, singular points of the set Q. They are preferable to the other admissible solutions by a certain criterion formulated in terms of the geologically important information about the anomalous bodies. The admissible versions of the interpretation of the gravimetry data that meet the criteria of decision-making theory are the primary candidates for the singular points. The results of the numerical calculations are presented. The set of admissible solutions from which the locally optimal versions of interpretation are selected is formed by modifications of the assembly method developed by V.N. Strakhov.

7.
The classical aim of non-linear inversion of seismograms is to obtain the earth model which, for null initial conditions and given sources, best predicts the observed seismograms. This problem is currently solved by an iterative method: each iteration involves the resolution of the wave equation with the actual sources in the current medium; the resolution of the wave equation, backwards in time, with the current residuals as sources; and the correlation, at each point of space, of the two wavefields thus obtained. Our view of inversion is more general: we want to obtain a whole set of earth model, initial conditions, source functions and predicted seismograms, which are the closest to some a priori values and which are related through the wave equation. This allows us to justify the previous method, but it also allows us to set the same inverse problem in a different way: what is now searched for is the best fit between calculated and a priori initial conditions, for given sources and observed surface displacements. This leads to a completely different iterative method, in which each iteration involves the downward extrapolation of given surface displacements and tractions, down to a given depth (the 'bottom'); the upward extrapolation of null displacements and tractions at the bottom, using as sources the initial time conditions of the previous field; and a correlation, at each point of space, of the two wavefields thus obtained. Besides the theoretical interest of the result, it opens the way to alternative numerical methods for the resolution of the inverse problem. Once the non-linear inversion using forward-backward time propagations works, this non-linear inversion using downward-upward extrapolations will give the same results but more economically, because of some tricks which may be used in depth extrapolation (calculation frequency by frequency, inversion of the top layers before the bottom layers, etc.).

8.
This paper gives a review of Bayesian parameter estimation. The Bayesian approach is fundamental and applicable to all kinds of inverse problems. Its basic formulation is probabilistic. Information from data is combined with a priori information on model parameters. The result is called the a posteriori probability density function and it is the solution to the inverse problem. In practice an estimate of the parameters is obtained by taking its maximum. Well-known estimation procedures like least-squares inversion or l1 norm inversion result, depending on the type of noise and a priori information given. Due to the a priori information the maximum will be unique and the estimation procedures will be stable except (in theory) for the most pathological problems which are very unlikely to occur in practice. The approach of Tarantola and Valette can be derived within classical probability theory. The Bayesian approach allows a full resolution and uncertainty analysis which is discussed in Part II of the paper.
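The dependence of the estimate on the assumed noise type, noted above, can be shown with a deliberately tiny example: for repeated measurements of one constant under a flat prior, Gaussian noise makes the MAP estimate the least-squares mean, while double-exponential (Laplacian) noise makes it the l1 median. The sample values are made up for illustration.

```python
import numpy as np

samples = np.array([1.0, 1.1, 0.9, 1.05, 8.0])   # one gross outlier

l2_estimate = samples.mean()      # maximizes the Gaussian likelihood
l1_estimate = np.median(samples)  # maximizes the Laplacian likelihood

print(l2_estimate, l1_estimate)   # the l1 estimate ignores the outlier
```

The mean is dragged to 2.41 by the outlier; the median stays at 1.05, which is why l1-norm inversion is the robust choice for long-tailed noise.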

9.
A parameter estimation or inversion procedure is incomplete without an analysis of uncertainties in the results. In the fundamental approach of Bayesian parameter estimation, discussed in Part I of this paper, the a posteriori probability density function (pdf) is the solution to the inverse problem. It is the product of the a priori pdf, containing a priori information on the parameters, and the likelihood function, which represents the information from the data. The maximum of the a posteriori pdf is usually taken as a point estimate of the parameters. The shape of this pdf, however, gives the full picture of uncertainty in the parameters. Uncertainty analysis is strictly a problem of information reduction. This can be achieved in several stages. Standard deviations can be computed as overall uncertainty measures of the parameters, when the shape of the a posteriori pdf is not too far from Gaussian. Covariance and related matrices give more detailed information. An eigenvalue or principal component analysis allows the inspection of essential linear combinations of the parameters. The relative contributions of a priori information and data to the solution can be elegantly studied. Results in this paper are especially worked out for the non-linear Gaussian case. Comparisons with other approaches are given. The procedures are illustrated with a simple two-parameter inverse problem.
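A hedged sketch of the staged uncertainty analysis described above, for the linear Gaussian case (the two-parameter operator and noise levels here are illustrative, not the paper's example): the a posteriori covariance combines data and prior terms, its diagonal gives the overall standard deviations, and its eigen-decomposition (principal components) exposes the well- and poorly-constrained parameter combinations.

```python
import numpy as np

rng = np.random.default_rng(2)
n_data, n_model = 6, 2
G = rng.normal(size=(n_data, n_model))
sigma_d, sigma_p = 0.1, 2.0

# Posterior covariance: inverse of (data precision + prior precision).
C_post = np.linalg.inv(G.T @ G / sigma_d**2 + np.eye(n_model) / sigma_p**2)
std = np.sqrt(np.diag(C_post))            # overall per-parameter uncertainty

eigval, eigvec = np.linalg.eigh(C_post)   # principal-component analysis
# eigvec[:, -1] is the least-constrained linear combination (largest variance)
print("parameter std devs:", std)
print("largest/smallest eigenvalue:", eigval[-1], eigval[0])
```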

10.
A generalized inversion technique for processing potential-field data
This paper discusses the application of the linear generalized inversion method to the continuation problem for potential fields. When a finite-energy constraint is imposed, the Lagrange multiplier method yields an inversion formula identical to the stochastic inverse (Franklin), without having to assume that the model is a Gaussian white-noise random process. After a spectral decomposition of the inversion operator, the Lagrange multiplier acts as a trade-off factor, so that varying its value yields a numerical solution of the inverse problem that strikes the best compromise between resolution and error. Applying this generalized inversion technique to the downward continuation of a potential field from an arbitrary surface to the top surface of the sources gives good results. Compared with the BG method, the accuracy of the downward continuation is similar, but the computation is faster.
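The trade-off role of the Lagrange multiplier after spectral decomposition can be sketched as follows (an illustrative toy, not the paper's implementation: a random operator with an exponentially decaying spectrum stands in for upward continuation, whose inverse, downward continuation, is unstable):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 30
U, _, Vt = np.linalg.svd(rng.normal(size=(n, n)))
s = np.exp(-0.5 * np.arange(n))            # rapidly decaying singular values
A = U @ np.diag(s) @ Vt                    # smoothing forward operator
m_true = rng.normal(size=n)
d = A @ m_true + 1e-3 * rng.normal(size=n)

def damped_svd_solution(lam):
    # Spectral filter s_i / (s_i^2 + lam): lam -> 0 is the naive inverse
    # (full resolution, noise blow-up); larger lam damps harder.
    f = s / (s**2 + lam)
    return Vt.T @ (f * (U.T @ d))

m_naive = damped_svd_solution(0.0)
m_damped = damped_svd_solution(1e-5)
err = lambda m: np.linalg.norm(m - m_true)
print(err(m_naive), err(m_damped))
```

Sweeping `lam` and comparing resolution against error amplification reproduces the compromise the abstract describes.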

11.
Satellite remote sensing deals with a complex system coupling atmosphere and surface. Any physical model with reasonable precision needs several to tens of parameters. Without a priori knowledge of these parameters, Proposition 3 of Verstraete et al. requires the number of independent observations to be greater than the number of unknown parameters. This requirement can hardly be satisfied even in the coming EOS era. As Tarantola pointed out, the inverse problems in geoscience are always underdetermined in some sense. In order to make good use of every kind of a priori knowledge for effectively extracting information from remote sensing observations, the right question to set is as follows: given an imperfect model and a certain amount of a priori information on model parameters, in which sense should one modify the a priori information, given the actual observation with noise? A priori knowledge of physical parameters can be presented in different ways, such as physical limits, global statistical means and variances for a certain land-cover type, or previous statistics and temporal variation of a specific target. When such a priori knowledge can be expressed as a joint probability density, Bayes' theorem can be used in the inversion to obtain posterior probability densities of the parameters from newly acquired observations. There is no prerequirement on how many independent observations must be made, and the knowledge gained depends only on the information content of the new observations. Some specific problems of knowledge accumulation and renewal are also discussed.

12.
In the traditional inversion of the Rayleigh dispersion curve, layer thickness, which is the second most sensitive parameter of modelling the Rayleigh dispersion curve, is usually assumed as correct and is used as fixed a priori information. Because the knowledge of the layer thickness is typically not precise, the use of such a priori information may result in the traditional Rayleigh dispersion curve inversions getting trapped in some local minima and may show results that are far from the real solution. In this study, we try to avoid this issue by using a joint inversion of the Rayleigh dispersion curve data with vertical electric sounding data, where we use the common‐layer thickness to couple the two methods. The key idea of the proposed joint inversion scheme is to combine methods in one joint Jacobian matrix and to invert for layer S‐wave velocity, resistivity, and layer thickness as an additional parameter, in contrast with a traditional Rayleigh dispersion curve inversion. The proposed joint inversion approach is tested with noise‐free and Gaussian noise data on six characteristic, synthetic sub‐surface models: a model with a typical dispersion; a low‐velocity, half‐space model; a model with particularly stiff and soft layers, respectively; and a model reproduced from the stiff and soft layers for different layer‐resistivity propagation. In the joint inversion process, the non‐linear damped least squares method is used together with the singular value decomposition approach to find a proper damping value for each iteration. The proposed joint inversion scheme tests many damping values, and it chooses the one that best approximates the observed data in the current iteration. The quality of the joint inversion is checked with the relative distance measure. In addition, a sensitivity analysis is performed for the typical dispersive sub‐surface model to illustrate the benefits of the proposed joint scheme. 
The results for the synthetic models revealed that the combination of the Rayleigh dispersion curve and vertical electric sounding methods in a joint scheme makes it possible to obtain reliable sub‐surface models even in complex and challenging situations, without using any a priori information.
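The per-iteration damping strategy described above can be sketched on a linear toy problem (standing in for the nonlinear joint dispersion-plus-electric system; operator and trial damping values are illustrative assumptions): each iteration computes a damped least-squares step via the SVD for several trial damping values and keeps the one whose updated model best reproduces the observed data.

```python
import numpy as np

rng = np.random.default_rng(4)
G = rng.normal(size=(15, 8))
m_true = rng.normal(size=8)
d_obs = G @ m_true + 0.01 * rng.normal(size=15)

U, s, Vt = np.linalg.svd(G, full_matrices=False)

def damped_step(residual, lam):
    # Damped least-squares update via the SVD filter s/(s^2 + lam).
    return Vt.T @ ((s / (s**2 + lam)) * (U.T @ residual))

m = np.zeros(8)
for _ in range(5):                          # a few iterations suffice here
    r = d_obs - G @ m
    trials = [m + damped_step(r, lam) for lam in (1e-4, 1e-2, 1.0)]
    # keep the damping value whose updated model best fits the data
    m = min(trials, key=lambda mt: np.linalg.norm(d_obs - G @ mt))

print("data misfit:", np.linalg.norm(d_obs - G @ m))
```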

13.
The methods of anomaly transformations considered are based on a system of combined analysis of the geophysical field and a priori information on the structure of a geological object. The methods involve calculation of a transformative polynomial (describing geophysical noise) which makes it possible to separate, in a correlatively optimal way, the residual field component related to the geological characteristic under study. The structure of the transformative polynomial is determined by the nature of the geophysical noise that is eliminated by the field transformation. Various correlation methods of anomaly transformations arise, depending on the structure of the transformative polynomial chosen. By way of example, the correlation method employed for separating the geophysical anomalies is shown to be highly effective in investigating local geological structure.
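A minimal illustration of the transformation idea above (the profile, trend, and anomaly below are synthetic assumptions): a low-order polynomial describing the geophysical noise, here a smooth regional trend, is fitted to the observed profile and removed, leaving the residual component related to the local structure under study.

```python
import numpy as np

x = np.linspace(0.0, 10.0, 200)
regional = 5.0 + 0.8 * x - 0.05 * x**2            # smooth "noise" to eliminate
local = 1.5 * np.exp(-((x - 6.0) ** 2) / 0.5)     # anomaly of interest
field = regional + local

coeffs = np.polyfit(x, field, deg=2)              # the transformative polynomial
residual = field - np.polyval(coeffs, x)

print("residual peak near x =", x[np.argmax(residual)])
```

Choosing the polynomial degree corresponds to choosing the structure of the transformative polynomial, and hence the particular correlation method.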

14.
Optimization of Cell Parameterizations for Tomographic Inverse Problems
We develop algorithms for the construction of irregular cell (block) models for parameterization of tomographic inverse problems. The forward problem is defined on a regular basic grid of non-overlapping cells. The basic cells are used as building blocks for construction of non-overlapping irregular cells. The construction algorithms are not computationally intensive and not particularly complex, and, in general, allow for grid optimization where cell size is determined from scalar functions, e.g., measures of model sampling or a priori estimates of model resolution. The link between a particular cell j in the regular basic grid and its host cell k in the irregular grid is provided by a pointer array which implicitly defines the irregular cell model. The complex geometrical aspects of irregular cell models are not needed in the forward or in the inverse problem. The matrix system of tomographic equations is computed once on the regular basic cell model. After grid construction, the basic matrix equation is mapped using the pointer array on a new matrix equation in which the model vector relates directly to cells in the irregular model. Next, the mapped system can be solved on the irregular grid. This approach avoids forward computation on the complex geometry of irregular grids. Generally, grid optimization can aim at reducing the number of model parameters in volumes poorly sampled by the data while elsewhere retaining the power to resolve the smallest scales warranted by the data. Unnecessary overparameterization of the model space can be avoided and grid construction can aim at improving the conditioning of the inverse problem. We present simple theory and optimization algorithms in the context of seismic tomography and apply the methods to Rayleigh-wave group velocity inversion and global travel-time tomography.
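The pointer-array mapping described above can be sketched in a few lines (sizes and the pointer assignment are illustrative assumptions): the tomographic matrix is computed once on the regular basic grid, and a pointer array (basic cell j → host cell k) maps it onto the irregular model by summing, for each irregular cell, the columns of the basic cells merged into it.

```python
import numpy as np

rng = np.random.default_rng(5)
n_rays, n_basic = 12, 8
A_basic = rng.normal(size=(n_rays, n_basic))      # ray lengths per basic cell

# pointer array: basic cells {0,1} -> irregular cell 0, {2,3,4} -> 1, etc.
pointer = np.array([0, 0, 1, 1, 1, 2, 3, 3])
n_irregular = pointer.max() + 1

A_irregular = np.zeros((n_rays, n_irregular))
for j, k in enumerate(pointer):
    A_irregular[:, k] += A_basic[:, j]            # merge columns into host cell

# A model that is constant within each irregular cell predicts identical
# data through either matrix, so no forward computation is needed on the
# irregular geometry:
m_irr = rng.normal(size=n_irregular)
assert np.allclose(A_basic @ m_irr[pointer], A_irregular @ m_irr)
print(A_irregular.shape)
```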

15.
16.

Recent work pertaining to estimating error and accuracies in geomagnetic field modeling is reviewed from a unified viewpoint and illustrated with examples. The formulation of a finite dimensional approximation to the underlying infinite dimensional problem is developed. Central to the formulation is an inner product and norm in the solution space through which a priori information can be brought to bear on the problem. Such information is crucial to estimation of the effects of higher degree fields at the Core-Mantle boundary (CMB) because the behavior of higher degree fields is masked in our measurements by the presence of the field from the Earth's crust. Contributions to the errors in predicting geophysical quantities based on the approximate model are separated into three categories: (1) the usual error from the measurement noise; (2) the error from unmodeled fields, i.e. from sources in the crust, ionosphere, etc.; and (3) the error from truncating to a finite dimensioned solution and prediction space. The combination of the first two is termed low degree error while the third is referred to as truncation error.

The error analysis problem consists of “characterizing” the difference δz = z − ẑ, where z is some quantity depending on the magnetic field and ẑ is the estimate of z resulting from our model. Two approaches are discussed. The method of Confidence Set Inference (CSI) seeks to find an upper bound for |z − ẑ|. Statistical methods, i.e. Bayesian or stochastic estimation, seek to estimate E(δz²), where E is the expectation value. Estimation of both the truncation error and the low degree error is discussed for both approaches. Expressions are found for an upper bound for |δz| and for E(δz²). Of particular interest is the computation of the radial field, Br, at the CMB, for which error estimates are made as examples of the methods. Estimated accuracies of the Gauss coefficients are given for the various methods. In general, the lowest error estimates result when the greatest amount of a priori information is available and, indeed, the estimates for truncation error are completely dependent upon the nature of the a priori information assumed. For the most conservative approach, the error in computing point values of Br at the CMB is unbounded and one must be content with, e.g., averages over some large area. The various assumptions about a priori information are reviewed. Work is needed to extend and develop this information. In particular, information regarding the truncated fields is needed to determine if the pessimistic bounds presently available are realistic or if there is a real physical basis for lower error estimates. Characterization of crustal fields for degree greater than 50 is needed as is more rigorous characterization of the external fields.

17.
Inversion of multimode surface-wave data is of increasing interest in the near-surface geophysics community. For a given near-surface geophysical problem, it is essential to understand how well the data, calculated according to a layered-earth model, might match the observed data. A data-resolution matrix is a function of the data kernel (determined by a geophysical model and a priori information applied to the problem), not the data. A data-resolution matrix of high-frequency (≥2 Hz) Rayleigh-wave phase velocities, therefore, offers a quantitative tool for designing field surveys and predicting the match between calculated and observed data. We employed a data-resolution matrix to select data that would be well predicted, and found that there are advantages in incorporating higher modes in the inversion. The resulting discussion using the data-resolution matrix provides insight into the process of inverting Rayleigh-wave phase velocities with higher-mode data to estimate S-wave velocity structure. The discussion also suggested that each near-surface geophysical target can only be resolved using Rayleigh-wave phase velocities within specific frequency ranges, and that higher-mode data are normally more accurately predicted than fundamental-mode data because of restrictions on the data kernel for the inversion system. We used synthetic and real-world examples to demonstrate that data selected with the data-resolution matrix can provide better inversion results, and to explain with the data-resolution matrix why incorporating higher-mode data in inversion can provide better results. We also calculated model-resolution matrices in these examples to show the potential of increasing model resolution with selected surface-wave data.
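A hedged sketch of the tool described above (the kernel here is a random stand-in, not a real dispersion kernel): for a linearized kernel G, the data-resolution matrix N = G G⁺ depends only on the kernel and whatever regularization is folded into the generalized inverse, not on the data; diagonal elements near 1 mark data that the inversion can predict well, which is the selection criterion.

```python
import numpy as np

rng = np.random.default_rng(6)
G = rng.normal(size=(10, 4))                 # e.g. 10 phase velocities, 4 layers

G_pinv = np.linalg.pinv(G)                   # generalized inverse via the SVD
N = G @ G_pinv                               # data-resolution matrix

importances = np.diag(N)                     # per-datum "predictability"
print("trace of N (= rank of G here):", np.trace(N))
```

Sorting `importances` and keeping the data with diagonal values closest to 1 mimics the data-selection step discussed in the abstract.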

18.
The anisotropy of the land surface can best be described by the bidirectional reflectance distribution function (BRDF). As the field of multiangular remote sensing advances, it is increasingly probable that BRDF models can be inverted to estimate important biological or climatological parameters of the earth's surface, such as leaf area index and albedo. The state of the art in BRDF modelling is the use of linear kernel-driven models, mathematically described as the linear combination of an isotropic kernel, a volume-scattering kernel and a geometric-optics kernel. The computational stability is characterized by the algebraic operator spectrum of the kernel matrix and the observation errors. Therefore, the retrieval of the model coefficients is of great importance for computation of the land surface albedos. We first consider the smoothing solution method of the kernel-driven BRDF models for retrieval of land surface albedos. This is known as an ill-posed inverse problem. The ill-posedness arises because the linear kernel-driven BRDF model is usually underdetermined if there are too few looks or poor directional ranges, or if the observations are highly dependent. For example, a single angular observation may lead to an underdetermined system whose solution set is infinite (the null space of the kernel operator contains nonzero vectors) or which has no solution (the rank of the coefficient matrix is not equal to that of the augmented matrix). Therefore, some smoothing or regularization technique should be applied to suppress the ill-posedness. So far, least-squares error methods with a priori knowledge, a QR decomposition method for inversion of the BRDF model, and regularization theories for ill-posed inversion have been developed. In this paper, we emphasize imposing a priori information in different spaces. We first propose a general a priori-imposed regularization model problem, and then address two forms of regularization scheme.
The first is a regularized singular value decomposition method; the second is a retrieval method in l1 space. We show that the proposed methods are suitable for solving the land surface parameter retrieval problem when the sampling data are poor. Numerical experiments are given to show the efficiency of the proposed methods. Supported by National Natural Science Foundation of China (Grant Nos. 10501051, 10871191) and the Key Project of Chinese National Programs for Fundamental Research and Development (Grant Nos. 2007CB714400, 2005CB422104).
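One standard way to compute an l1-space retrieval of the kind proposed above, not necessarily the authors' algorithm, is iterative soft thresholding (ISTA). The sketch below assumes a synthetic 3-column kernel matrix with only four directional samples; it is not a real (isotropic, volumetric, geometric) kernel set.

```python
import numpy as np

rng = np.random.default_rng(7)
K = rng.normal(size=(4, 3))                  # only 4 directional samples
f_true = np.array([0.5, 0.0, 0.2])           # sparse kernel coefficients
r_obs = K @ f_true + 0.01 * rng.normal(size=4)

lam = 0.01
step = 1.0 / np.linalg.norm(K, ord=2) ** 2   # safe gradient step size

f = np.zeros(3)
for _ in range(500):
    g = K.T @ (K @ f - r_obs)                # gradient of the data misfit
    z = f - step * g
    f = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

print("retrieved coefficients:", f)
```

The soft-threshold step is what distinguishes the l1 scheme from the regularized SVD: it drives weakly constrained coefficients exactly to zero rather than merely damping them.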

19.
On the basis of a comparison of the approaches to the solution of inverse problems in information theory and in geophysics, it is shown that results obtained in information theory can usefully supplement the theory of geophysical inverse problems. The conditions for the existence and uniqueness of the solutions of inverse problems in their practical discrete statement are specified. The terms ε-entropy Hε and informational capacity Cε, characterizing the “volumes” of the unknown and observed data, are introduced. It is shown that the instability of the solution of the inverse problem decreases with an increase in Hε (an increase in the “complexity” of the studied section), provided that the relation Hε ≤ Cε is maintained.

20.
Linearized residual statics estimation will often fail when large static corrections are needed. Cycle skipping may easily occur, and the consequence may be that the solution is trapped in a local maximum of the stack-power function. In order to find the global solution, Monte Carlo optimization in terms of simulated annealing has been applied in the stack-power maximization technique. However, a major problem when using simulated annealing is to determine a critical parameter known as the temperature. An efficient solution to this difficulty was provided by Nulton and Salamon (1988) and Andresen et al. (1988), who used statistical information about the problem, acquired during the optimization itself, to compute near-optimal annealing schedules. Although theoretically solved, the problem of finding the Nulton–Salamon temperature schedule, often referred to as the schedule at constant thermodynamic speed, may itself be computationally heavy. Many extra iterations are needed to establish the schedule. For an important geophysical inverse problem, the residual statics problem of reflection seismology, we suggest a strategy to avoid the many extra iterations. Based on an analysis of a few residual statics problems, we compute approximations to Nulton–Salamon schedules for almost arbitrary residual statics problems. The performance of the approximated schedules is evaluated on synthetic and real data.
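A hedged toy sketch of the optimization problem described above: a multimodal objective (a made-up stand-in for negative stack power as a function of a single static shift, with local minima playing the role of cycle skips) minimized by simulated annealing. The simple geometric temperature decrement used here is exactly what a Nulton–Salamon constant-thermodynamic-speed schedule would replace.

```python
import numpy as np

rng = np.random.default_rng(8)

def energy(s):
    # Oscillation creates cycle-skip local minima; the weak quadratic
    # envelope makes s = 2.0 the global minimum.
    return -np.cos(4.0 * (s - 2.0)) + 0.05 * (s - 2.0) ** 2

s = -5.0                                     # start in a distant local basin
e = energy(s)
best_s, best_e = s, e
T = 2.0
while T > 1e-3:
    for _ in range(100):                     # sweeps at this temperature
        s_new = s + rng.normal(scale=0.5)
        e_new = energy(s_new)
        # Metropolis rule: always accept improvements, sometimes accept worse
        if e_new < e or rng.random() < np.exp(-(e_new - e) / T):
            s, e = s_new, e_new
            if e < best_e:
                best_s, best_e = s, e
    T *= 0.9                                 # geometric schedule (the simple case)

print("best shift found:", best_s)
```

Replacing the fixed factor 0.9 with a decrement computed from the energy statistics gathered during the run is the Nulton–Salamon refinement whose cost the paper seeks to avoid.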
