Similar documents (20 results)
1.
The similarity between maximum entropy (MaxEnt) and minimum relative entropy (MRE) allows recent advances in probabilistic inversion to obviate some of the shortcomings of the former method. The purpose of this paper is to review and extend the theory and practice of minimum relative entropy. In this regard, we illustrate important philosophies on inversion and the similarities and differences between maximum entropy, minimum relative entropy, classical smallest model (SVD) and Bayesian solutions for inverse problems. MaxEnt is applicable when we are determining a function that can be regarded as a probability distribution. The approach can be extended to the case of the general linear problem and is interpreted as the model which fits all the constraints and has the greatest multiplicity or "spread-out", i.e. can be realized in the greatest number of ways. The MRE solution to the inverse problem differs from the maximum entropy viewpoint as noted above. The relative entropy formulation provides the advantage of allowing for non-positive models, a prior bias in the estimated pdf and 'hard' bounds if desired. We outline how MRE can be used as a measure of resolution in linear inversion and show that MRE provides us with a method to explore the limits of model space. The Bayesian methodology readily lends itself to the problem of updating prior probabilities based on uncertain field measurements, and its validity follows from the theorems of total and compound probabilities. In the Bayesian approach information is complete and Bayes' theorem gives a unique posterior pdf. In comparing the results of the classical, MaxEnt, MRE and Bayesian approaches we notice that the approaches produce different results. In comparing MaxEnt with MRE for Jaynes' die problem we see excellent agreement between the results. 
We compare MaxEnt, smallest-model and MRE approaches for the density distribution of an equivalent spherically symmetric earth and for the contaminant plume-source problem. Theoretical comparisons between MRE and Bayesian solutions for the case of the linear model and Gaussian priors may show different results. The Bayesian expected-value solution approaches that of MRE and that of the smallest model as the prior distribution becomes uniform, but the Bayesian maximum a posteriori (MAP) solution may not exist for an underdetermined case with a uniform prior.
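The Jaynes die problem mentioned above makes the MaxEnt machinery concrete: under a mean constraint the entropy-maximizing pmf has the exponential (Gibbs) form, and the single Lagrange multiplier can be found by bisection. A minimal sketch (not code from the paper), using Jaynes' classic constraint of mean 4.5:

```python
import math

def maxent_die(mean, faces=range(1, 7)):
    """Maximum-entropy pmf over die faces subject to a fixed mean.

    The MaxEnt solution has the Gibbs form p_f ∝ exp(-lam * f); the
    multiplier lam is found by bisection on the mean constraint.
    """
    def model_mean(lam):
        w = [math.exp(-lam * f) for f in faces]
        return sum(f * wi for f, wi in zip(faces, w)) / sum(w)

    lo, hi = -30.0, 30.0              # bracket for the multiplier
    for _ in range(200):              # model_mean is decreasing in lam
        mid = 0.5 * (lo + hi)
        if model_mean(mid) > mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * f) for f in faces]
    z = sum(w)
    return [wi / z for wi in w]

probs = maxent_die(4.5)               # Jaynes' classic constraint
```

Because the constrained mean (4.5) exceeds the fair-die mean (3.5), the resulting probabilities increase monotonically with the face value.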

3.
Determination of impedance or velocity from a stacked seismic trace generally suffers from noise and from the fact that seismic data are bandlimited. These deficiencies can frequently be alleviated by ancillary information, which is often expressed more naturally in terms of probabilities than in the form of equations or inequalities. In such a situation, information theory can be used to include 'soft' information in the inversion process. The vehicle used for this purpose is the Maximum Entropy (ME) principle. The basic idea is that a prior probability distribution (pd) of the unknown parameter(s) or function(s) is converted into a posterior pd which has a larger entropy than any other pd which also accounts for the information. Since providing new information generally lowers the entropy, this means that the ME pd is as non-committal as possible with regard to information which is not (yet) available. If the information used is correct, then the ME pd cannot be contradicted by new, also correct, data and thus represents a conservative solution to the inverse problem. In the actual implementation, the final result is generally not the pd itself (which may be quite broad) but rather the expectation values of the desired parameter(s) or function(s). A general problem of the ME approach is the need for a prior pd for the parameter(s) to be estimated. The approach used here for the velocity is based on an invariance criterion, which ensures that the result is the same whether velocity or slowness is estimated. Unfortunately, this criterion does not provide a unique prior pd but rather a class of functions from which a suitable one must be selected with the help of other considerations.

4.
5.
Inverse problems involving the characterization of hydraulic properties of groundwater flow systems by conditioning on observations of the state variables are mathematically ill-posed because they have multiple solutions and are sensitive to small changes in the data. In the framework of MCMC methods for nonlinear optimization, and under an iterative spatial resampling transition kernel, we present an algorithm for narrowing the prior and thus producing improved proposal realizations. To achieve this goal, we cosimulate the facies distribution conditionally on facies observations and normal-score-transformed hydrologic response measurements, assuming a linear coregionalization model. The approach works by creating an importance sampling effect that steers the process to selected areas of the prior. The effectiveness of our approach is demonstrated by an example application to a synthetic underdetermined inverse problem in aquifer characterization.
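The authors' cosimulation and iterative spatial resampling scheme is not reproduced here, but the Metropolis transition kernel that such MCMC inverse methods build on can be sketched for an assumed toy scalar problem (all data values and the proposal scale are illustrative):

```python
import math
import random

def metropolis(loglike, proposal, x0, n_steps, seed=0):
    """Generic Metropolis sampler: a proposed model is accepted with
    probability min(1, exp(loglike(cand) - loglike(x)))."""
    rng = random.Random(seed)
    x, ll = x0, loglike(x0)
    chain = []
    for _ in range(n_steps):
        cand = proposal(x, rng)
        ll_cand = loglike(cand)
        if ll_cand >= ll or rng.random() < math.exp(ll_cand - ll):
            x, ll = cand, ll_cand
        chain.append(x)
    return chain

# Toy scalar inverse problem: recover a parameter m from noisy
# observations d_i = m + noise, Gaussian likelihood with sigma = 0.5.
data = [2.1, 1.8, 2.3, 2.0, 1.9]
sigma = 0.5
loglike = lambda m: -sum((d - m) ** 2 for d in data) / (2 * sigma ** 2)
chain = metropolis(loglike, lambda m, rng: m + rng.gauss(0.0, 0.3), 0.0, 20000)
posterior_mean = sum(chain[5000:]) / len(chain[5000:])
```

With a flat prior the posterior mean should sit near the sample mean of the data; the first 5000 samples are discarded as burn-in.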

6.
The guaranteed approach to the solution of inverse problems of gravimetry, which is fundamentally different from the common approach, is presented. Instead of providing single (optimum) estimates of the model parameters, whose quality is, generally, random, interpretation in the suggested approach yields the volume of reliable information on the disturbing objects that is contained in the field measurements together with the a priori constraints. A method of solving the inverse problem of gravimetry is developed within this approach in which, in contrast to the conventional approach, both the geometrical parameters of the geological bodies and the densities of the rocks composing them are treated as unknowns. The generalized assembly algorithm suggested by V.N. Strakhov is proposed as the basic working tool to implement this approach. The results of testing this algorithm and the guaranteed approach itself on model and practical examples are discussed.

7.
Application of minimum relative entropy theory for streamflow forecasting
This paper develops and applies the minimum relative entropy (MRE) theory, with spectral power as a random variable, for streamflow forecasting. The MRE theory consists of (1) hypothesizing a prior probability distribution for the random variable, (2) determining the spectral power distribution, (3) extending the autocorrelation function, and (4) forecasting. The MRE theory was verified using streamflow data from the Mississippi River watershed. The exponential distribution was chosen as the prior probability in applying the MRE theory by evaluating the historical data of the Mississippi River. If no prior information is given, the MRE theory is equivalent to the Burg entropy (BE) theory. The spectral density obtained by the MRE theory led to higher resolution than did the BE theory. The MRE theory did not miss the largest peak at the 1/12th frequency, which is the main periodicity of streamflow of the Mississippi River, but the BE theory sometimes did. The MRE theory was found to be capable of forecasting monthly streamflow with a lead time of 12 to 48 months. The coefficient of determination (r²) between observed and forecasted streamflows was 0.912 for the Upper Mississippi River and 0.855 for the Lower Mississippi River. Both the MRE and BE theories were generally more reliable and had longer forecasting lead times than the autoregressive (AR) method. The forecasting lead time for MRE and BE could be as long as 48–60 months, while it was less than 48 months for the AR method. However, BE was comparable to MRE only when the observations fitted the AR process well. The MRE theory provided more reliable forecasts than did the BE theory, and the advantage of using MRE is more significant for downstream flows with irregular flow patterns or where periodicity information is limited. The reliability of monthly streamflow forecasting was highest for MRE, followed by BE and then AR.
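The BE limit mentioned above (MRE with no prior information) can be sketched in a few lines: a Burg recursion fits a maximum-entropy AR model, and the AR recursion then extrapolates the series. This is a minimal sketch, not the paper's MRE machinery, and the seeded AR(1) series below is an illustrative stand-in for streamflow data:

```python
import random

def burg_ar(x, order):
    """Burg's maximum-entropy AR fit: each reflection coefficient
    minimizes the summed forward + backward prediction error power."""
    n = len(x)
    f, b = x[:], x[:]                 # forward / backward errors
    a = [1.0]                         # AR polynomial, a[0] = 1
    for m in range(order):
        num = -2.0 * sum(f[t] * b[t - 1] for t in range(m + 1, n))
        den = sum(f[t] ** 2 + b[t - 1] ** 2 for t in range(m + 1, n))
        k = num / den                 # reflection coefficient
        ext = a + [0.0]
        a = [ext[i] + k * ext[m + 1 - i] for i in range(m + 2)]
        for t in range(n - 1, m, -1): # update errors, reusing old b
            ft = f[t]
            f[t] = ft + k * b[t - 1]
            b[t] = b[t - 1] + k * ft
    return a

def ar_forecast(x, a, steps):
    """Extend the series with the AR recursion x[t] = -sum a[i]*x[t-i]."""
    h = list(x)
    for _ in range(steps):
        h.append(-sum(a[i] * h[-i] for i in range(1, len(a))))
    return h[len(x):]

# Synthetic AR(1) series with coefficient 0.8 (true a = [1, -0.8]):
rng = random.Random(1)
x = [0.0]
for _ in range(499):
    x.append(0.8 * x[-1] + rng.gauss(0.0, 1.0))
a = burg_ar(x, 1)
```

On this synthetic series the fitted coefficient should recover roughly -0.8, and `ar_forecast` extends the record by the requested number of steps.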

8.
A method is presented for the two-dimensional combined inversion of short- and long-normal tool direct-current resistivity data with symmetry. The forward problem is solved using the finite element method in the cylindrical coordinate system. The inverse problem is solved using a conjugate gradient technique, with the partial derivatives obtained using reciprocity. The parameters were obtained by means of both conjugate gradient relaxation and the conventional conjugate gradient method. The solution of this highly underdetermined inverse problem is stabilized using Tikhonov regularization, and the scheme yields a blurred image of the subsurface. The scheme is tested using synthetic and field data. Tests using synthetic data suggest that traces of the horizontal boundaries are delineated within the range of the exploration distance, while the resolution of vertical boundaries depends upon the solution regularization. Application to field data shows that additional information is necessary for resolving the resistivity structure when there are low resistivity contrasts between formation units.

9.
In the case of 3D multilayered structures, 2D interval velocity analysis may be inaccurate. This fact is illustrated by synthetic examples. The method proposed solves the 3D inverse problem within the scope of the ray approach. The solution, i.e. the interval velocities and the reflection interface positions, is obtained using data from conventional 2D line profiles arbitrarily located and from normal-incidence time maps. Although the input information is essentially limited, the method presented yields only minor bias in the velocity estimates. In order to implement the proposed 3D inversion method, we developed a processing procedure. The procedure performs the evaluation of reflection times and ray parameters along line profiles, 3D interval velocity estimation, and time-to-depth map migration. Tools to stabilize the 3D inversion are investigated. The application of the 3D inversion technique to synthetic and real data is compared with the results of the 2D inversion.

10.
A new real-time electromagnetic inverse scattering method
To solve the inverse scattering problem for a dielectric cylinder, a new online inverse scattering method is proposed which converts the inverse scattering problem into a regression estimation problem by means of support vector machines. The method can be applied to a variety of inverse scattering tasks, in particular the reconstruction of the geometric and electromagnetic parameters of targets and the detection of buried targets. This paper applies the support vector machine method to this field for the first time: multiple observation points of the scattered field are set up, different items of information extracted from the scattered field are used as training samples for the support vector machine, and an inverse scattering model of the dielectric cylinder is established. Using this model, the electromagnetic parameters of the dielectric cylinder are reconstructed and the buried position is detected. Numerical results demonstrate the effectiveness and accuracy of the method, which provides an effective approach for real-time inverse scattering studies.

11.
Common-depth-point stacking velocities may differ from root-mean-square velocities because of large offsets and because of dipping reflectors. This paper shows that the two effects may be treated separately, and proceeds to examine the effect of dip. If stacking velocities are assumed equal to rms velocities for the purpose of time-to-depth conversion, then errors are introduced comparable to the difference between migrated and unmigrated depths. Consequently, if the effect of dip on stacking velocity is ignored, there is no point in migrating the resulting depth data. For a multi-layered model having parallel dip, a formula is developed to compute interval velocities and depths from the stacking velocities, time picks and time slope of the seismic section. It is shown that cross-dip need not be considered if all the reflectors have the same dip azimuth. The problem becomes intractable if the dips are not parallel. But the inverse problem is soluble: to obtain stacking velocities, time picks and time slopes from a given depth and interval-velocity model. Finally, the inverse solution is combined with an approximate forward solution. This provides an iterative method to obtain depths and interval velocities from stacking velocities, time picks and time slopes. It is assumed that the dip azimuth is the same for all reflectors, but not necessarily in the plane of the section, and that the curvature of the reflecting horizons is negligible. The effect of onset delay is examined. It is shown that onset corrections may be unnecessary when converting from time to depth.
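In the zero-dip special case (stacking velocity equal to rms velocity), the interval-velocity computation reduces to the classical Dix conversion, sketched below together with its forward counterpart; the velocities and layer times are illustrative, not from the paper:

```python
import math

def dix_interval_velocities(v_rms, t0):
    """Dix conversion: interval velocities from rms velocities and
    cumulative zero-offset two-way times (the zero-dip limit, where
    stacking velocity equals rms velocity)."""
    v_int = [v_rms[0]]
    for k in range(1, len(v_rms)):
        num = v_rms[k] ** 2 * t0[k] - v_rms[k - 1] ** 2 * t0[k - 1]
        v_int.append(math.sqrt(num / (t0[k] - t0[k - 1])))
    return v_int

def rms_from_interval(v_int, dt):
    """Forward check: rms velocities from interval velocities and
    per-layer two-way times."""
    t, s, out = 0.0, 0.0, []
    for v, d in zip(v_int, dt):
        t += d
        s += v * v * d
        out.append(math.sqrt(s / t))
    return out

v_int_true = [1500.0, 2000.0, 3000.0]   # m/s, illustrative
dt = [1.0, 0.8, 0.6]                    # two-way time per layer (s)
t0 = [1.0, 1.8, 2.4]                    # cumulative two-way times (s)
v_rms = rms_from_interval(v_int_true, dt)
v_int = dix_interval_velocities(v_rms, t0)
```

Running the forward model and then the Dix conversion recovers the interval velocities exactly, which is a convenient consistency check on both routines.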

12.
Categorical data play an important role in a wide variety of spatial applications, while modeling and predicting this type of statistical variable has proved to be complex in many cases. Among other possible approaches, the Bayesian maximum entropy methodology has been developed and advocated for this goal and has been successfully applied in various spatial prediction problems. This approach aims at building a multivariate probability table from bivariate probability functions used as constraints that need to be fulfilled, in order to compute a posterior conditional distribution that accounts for hard or soft information sources. In this paper, our goal is to generalize the theoretical results further in order to account for a much wider range of information sources, such as probability inequalities. We first show how the maximum entropy principle can be implemented efficiently using a linear iterative approximation based on a minimum norm criterion, where the minimum norm solution is obtained at each step from simple matrix operations and converges to the requested maximum entropy solution. Based on this result, we then show how the maximum entropy problem can be related to the more general minimum divergence problem, which may involve equality and inequality constraints and which can be solved based on iterated minimum norm solutions. This allows us to account for a much larger panel of information types, where more qualitative information, such as probability inequalities, can be used. When combined with a Bayesian data fusion approach, this approach handles the case where potentially conflicting information is available. Although the theoretical results presented in this paper can be applied to any study (spatial or non-spatial) involving categorical data, the results are illustrated in a spatial context where the goal is to best predict the occurrence of cultivated land in Ethiopia based on crowdsourced information. 
The results emphasize the benefit of the methodology, which integrates conflicting information and provides a spatially exhaustive map of these occurrence classes over the whole country.
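The iterative construction of a maximum-entropy joint table from lower-order probability constraints has a classic special case: iterative proportional fitting, where the constraints are univariate marginals rather than the paper's bivariate tables. A minimal sketch (not the authors' minimum-norm algorithm):

```python
import numpy as np

def ipf(row_marg, col_marg, n_iter=100):
    """Iterative proportional fitting: builds the maximum-entropy joint
    table matching given marginal constraints by alternately rescaling
    rows and columns, starting from the uniform table."""
    P = np.ones((len(row_marg), len(col_marg)))
    P /= P.sum()
    for _ in range(n_iter):
        P *= (np.asarray(row_marg) / P.sum(axis=1))[:, None]
        P *= np.asarray(col_marg) / P.sum(axis=0)
    return P

# With only univariate marginals as constraints, the MaxEnt table is
# the independence product of the marginals (illustrative values):
P = ipf([0.2, 0.3, 0.5], [0.6, 0.4])
```

The independence product is the known closed-form MaxEnt solution for this constraint set, so it serves as a direct correctness check.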

13.
Utilizing electromagnetic data in geophysical exploration work is difficult when measured responses are complicated by the effects of 3D structures. 1D and 2D models may not be capable of accurately simulating the physical processes that contribute to a measured response. 3D conductive-host modelling is difficult, costly and time-consuming. Using a 3D inverse procedure it is possible to automate the interpretation of controlled-source electromagnetic data. This procedure uses an inverse formulation based on frequency-domain, volume integral equations and a pulse-basis representation for the internal electrical field and anomalous conductivity. Beginning with an initial model composed of a 3D inhomogeneous region residing in a laterally homogeneous (layered-earth) geoelectrical section, iterative least-squares algorithms are used to refine the geometry and the conductivity of the inhomogeneity. This novel approach for 3D electromagnetic interpretation yields a reliable and stable inverse solution provided constraints on how much the variable can change at each iteration are incorporated. Integral-equation-based inverse formulations that do not correctly address the non-linearity of this inverse problem may have poor convergence properties, particularly when dealing with the high conductivity contrasts that are typical of many exploration problems. While problems associated with contamination of the data by random noise and non-uniqueness of solutions do not usually influence the inverse solution in an adverse manner, problems associated with model inadequacy and errors in an assumed background conductivity structure can produce undesirable effects.

14.
Calibration of a groundwater model requires that hydraulic properties be estimated throughout a model domain. This generally constitutes an underdetermined inverse problem, for which a solution can only be found when some kind of regularization device is included in the inversion process. Inclusion of regularization in the calibration process can be implicit, for example through the use of zones of constant parameter value, or explicit, for example through solution of a constrained minimization problem in which parameters are made to respect preferred values, or preferred relationships, to the degree necessary for a unique solution to be obtained.
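The explicit-regularization option described above (parameters made to respect preferred values) can be sketched as Tikhonov-regularized least squares; the toy system below, with 2 observations constraining 5 conductivity-like parameters, is illustrative and not from the paper:

```python
import numpy as np

def tikhonov(G, d, lam, m_pref=None):
    """Explicitly regularized least squares for an underdetermined
    problem: minimize ||G m - d||^2 + lam^2 ||m - m_pref||^2.  With
    lam > 0 the normal equations are uniquely solvable."""
    n = G.shape[1]
    if m_pref is None:
        m_pref = np.zeros(n)
    return np.linalg.solve(G.T @ G + lam ** 2 * np.eye(n),
                           G.T @ d + lam ** 2 * m_pref)

# Two observations, five unknowns; with a zero preferred value and a
# small lam, the result approaches the minimum-norm (smallest) model:
G = np.array([[1.0, 1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0, 1.0]])
d = np.array([3.0, 6.0])
m = tikhonov(G, d, lam=1e-3)
```

Without the `lam**2 * np.eye(n)` term the normal equations here would be singular; the regularization is exactly the "device" that makes the underdetermined problem solvable.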

15.
We consider an iterative numerical method for solving two-dimensional (2D) inverse problems of magnetotelluric sounding which significantly reduces the computational burden of the inverse problem solution in the class of quasi-layered models. The idea of the method is to replace the operator of the direct 2D problem of calculating the low-frequency electromagnetic field in a quasi-layered medium by a quasi-one-dimensional operator at each observation point. The method is applicable for solving the inverse problems of magnetotellurics with either the E- or H-polarized field, and in the case when the inverse problem is solved simultaneously using the impedance values for the fields of both polarizations. We describe the numerical method and present examples of its application to the numerical solution of a number of model inverse problems of magnetotelluric sounding.

16.
On seismograms recorded at sea, bubble pulse oscillations can present a serious problem to an interpreter. We propose a new approach, based on generalized linear inverse theory, to the solution of the debubbling problem. Under the usual assumption that a seismogram can be modelled as the convolution of the earth's impulse response and a source wavelet, we show that estimation of either the wavelet or the impulse response can be formulated as a generalized linear inverse problem. This parametric approach involves solution of a system of equations by minimizing the error vector (ΔX = Xobs − Xcal) in a least-squares sense. One of the most significant results is that the method enables us to control the accuracy of the solution so that it is consistent with the observational errors and/or known noise levels. The complete debubbling procedure can be described in four steps: (1) apply minimum entropy deconvolution to the observed data to obtain a deconvolved spike trace, a first approximation to the earth's response function; (2) use this trace and the observed data as input for the generalized linear inverse procedure to compute an estimated basic bubble pulse wavelet; (3) use the results of steps 1 and 2 to construct the compound source signature consisting of the primary pulse plus appropriate bubble oscillations; and (4) use the compound source signature and the observed data as input for the generalized linear inverse method to determine the estimated earth impulse response—a debubbled, deconvolved seismogram. We illustrate the applicability of the new approach with a set of synthetic seismic traces and with a set of field seismograms. A disadvantage of the procedure is that it is computationally expensive. Thus it may be more appropriate to apply the technique in cases where standard analysis techniques do not give acceptable results. In such cases the inherent advantages of the method may be exploited to provide better quality seismograms.
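For the linear convolution model above, the wavelet-estimation step reduces to an ordinary overdetermined least-squares solve. A minimal sketch of step (2), where an assumed spiky reflectivity stands in for the minimum-entropy-deconvolution output of step (1), and the wavelet values are illustrative:

```python
import numpy as np

def conv_matrix(r, nw):
    """Columns are shifted copies of r, so C @ w == np.convolve(r, w)."""
    C = np.zeros((len(r) + nw - 1, nw))
    for j in range(nw):
        C[j:j + len(r), j] = r
    return C

def estimate_wavelet(r, s, nw):
    """Least-squares wavelet estimate: solve conv(r, w) ≈ s for w."""
    w, *_ = np.linalg.lstsq(conv_matrix(r, nw), s, rcond=None)
    return w

# Spiky stand-in reflectivity and a short wavelet with trailing lobes:
r = np.zeros(20)
r[3], r[11] = 1.0, -0.7
w_true = np.array([1.0, -0.6, 0.3])
s = np.convolve(r, w_true)            # synthetic observed trace
w_est = estimate_wavelet(r, s, 3)
```

On noise-free synthetic data the wavelet is recovered exactly; with noisy field data the same least-squares machinery returns the best-fitting wavelet instead.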

17.
Regularization is the most popular technique to overcome the null space of model parameters in geophysical inverse problems, and is implemented by including a constraint term as well as the data‐misfit term in the objective function being minimized. The weighting of the constraint term relative to the data‐fitting term is controlled by a regularization parameter, and its adjustment to obtain the best model has received much attention. The empirical Bayes approach discussed in this paper determines the optimum value of the regularization parameter from a given data set. The regularization term can be regarded as representing a priori information about the model parameters. The empirical Bayes approach and its more practical variant, Akaike's Bayesian Information Criterion, adjust the regularization parameter automatically in response to the level of data noise and to the suitability of the assumed a priori model information for the given data. When the noise level is high, the regularization parameter is made large, which means that the a priori information is emphasized. If the assumed a priori information is not suitable for the given data, the regularization parameter is made small. Both these behaviours are desirable characteristics for the regularized solutions of practical inverse problems. Four simple examples are presented to illustrate these characteristics for an underdetermined problem, a problem adopting an improper prior constraint and a problem having an unknown data variance, all frequently encountered geophysical inverse problems. Numerical experiments using Akaike's Bayesian Information Criterion for synthetic data provide results consistent with these characteristics. 
In addition, concerning the selection of an appropriate type of a priori model information, a comparison between four types of difference‐operator model – the zeroth‐, first‐, second‐ and third‐order difference‐operator models – suggests that the automatic determination of the optimum regularization parameter becomes more difficult with increasing order of the difference operators. Accordingly, taking the effect of data noise into account, it is better to employ the lower‐order difference‐operator models for inversions of noisy data.
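ABIC itself involves a determinant term that is beyond a short sketch, but the behaviour highlighted above — the regularization parameter growing with the data noise level — can be illustrated with the simpler discrepancy principle, which picks the parameter so the residual matches the expected noise norm. A hedged sketch on an assumed toy smoothing problem (the operator, noise level and model are all illustrative):

```python
import numpy as np

def tikhonov_solve(G, d, lam):
    """Regularized normal equations: (G^T G + lam^2 I) m = G^T d."""
    return np.linalg.solve(G.T @ G + lam ** 2 * np.eye(G.shape[1]), G.T @ d)

def discrepancy_lambda(G, d, sigma, lo=1e-8, hi=1e4, n_iter=100):
    """Choose lam so that ||G m(lam) - d|| matches the expected noise
    norm sqrt(N) * sigma.  The misfit grows monotonically with lam,
    so a bisection in log space converges."""
    target = np.sqrt(len(d)) * sigma
    for _ in range(n_iter):
        mid = np.sqrt(lo * hi)
        if np.linalg.norm(G @ tikhonov_solve(G, d, mid) - d) < target:
            lo = mid
        else:
            hi = mid
    return np.sqrt(lo * hi)

# Toy problem: G blurs a smooth model, data carry noise of level sigma.
n = 40
idx = np.arange(n)
G = np.exp(-0.5 * (idx[:, None] - idx[None, :]) ** 2)
m_true = np.sin(2 * np.pi * idx / n)
sigma = 0.01
d = G @ m_true + np.random.default_rng(0).normal(0.0, sigma, n)
lam = discrepancy_lambda(G, d, sigma)
m_est = tikhonov_solve(G, d, lam)
```

Doubling `sigma` in this sketch yields a larger selected `lam`, mirroring the noise-adaptive behaviour the abstract attributes to the empirical Bayes / ABIC criterion.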

18.
Anyone working on inverse problems is aware of their ill-posed character. In the case of inverse problems, this concept, proposed by J. Hadamard in 1902, admits revision, since it is somehow related to their ill-conditioning and to the use of local optimization methods to find their solution. A more general and interesting approach regarding risk analysis and epistemological decision making would consist in analyzing the existence of families of equivalent model parameters that are compatible with the prior information and predict the observed data within the same error bounds. Otherwise said, the ill-posed character of discrete inverse problems (ill-conditioning) means that their solution is uncertain. Traditionally, nonlinear inverse problems in discrete form have been solved via local optimization methods with regularization, but linear analysis techniques fail to account for the uncertainty in the solution that is adopted. As a result, uncertainty analysis in nonlinear inverse problems has been approached in a probabilistic framework (the Bayesian approach), but these methods are hindered by the curse of dimensionality and by the high computational cost needed to solve the corresponding forward problems. Global optimization techniques are very attractive, but most of the time they are heuristic and have the same limitations as Monte Carlo methods. New research is needed to provide uncertainty estimates, especially in the case of high-dimensional nonlinear inverse problems with very costly forward problems. After the discredit of deterministic methods and some initial years of Bayesian fever, the pendulum now seems to swing back, because practitioners are aware that the uncertainty analysis of high-dimensional nonlinear inverse problems cannot (and should not) be solved via random sampling methodologies. 
The main reason is that the uncertainty "space" of nonlinear inverse problems has a mathematical structure that is embedded in the forward physics and also in the observed data. Thus, problems with structure should be approached via linear algebra and optimization techniques. This paper provides new insights for understanding uncertainty from a deterministic point of view, which is a necessary step in designing more efficient methods to sample the uncertainty region(s) of equivalent solutions.

19.
Parameters in a stack of homogeneous anelastic layers are estimated from seismic data, using the amplitude-versus-offset (AVO) variations and the traveltimes. The unknown parameters in each layer are the layer thickness, the P-wave velocity, the S-wave velocity, the density and the quality factor. Dynamic ray tracing is used to solve the forward problem. Multiple reflections are included, but wave-mode conversions are not considered. The S-wave velocities are estimated from the PP reflection and transmission coefficients. The inverse problem is solved using a stabilized least-squares procedure. The Gauss-Newton approximation to the Hessian matrix is used, and the derivatives of the dynamic ray-tracing equations are calculated analytically for each iteration. A conventional velocity analysis, the common-midpoint (CMP) stack and a set of CMP gathers are used to identify the number of layers and to establish initial estimates for the P-wave velocities and the layer thicknesses. The inversion is carried out globally for all parameters simultaneously or by a stepwise approach in which a smaller number of parameters is considered in each step. We discuss several practical problems related to the inversion of real data. The performance of the algorithm is tested on one synthetic and two real data sets. For the real-data inversion, we explained up to 90% of the energy in the data. However, the reliability of the parameter estimates must at this stage be considered uncertain.
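The Gauss-Newton update used above can be sketched on a toy forward model; the exponential amplitude-decay model below is an illustrative stand-in for dynamic ray tracing, not the paper's parametrization:

```python
import numpy as np

def gauss_newton(f, jac, m0, d, n_iter=50, damping=1e-8):
    """Damped Gauss-Newton: iterate m += (J^T J + damping*I)^{-1} J^T r,
    using the J^T J approximation to the Hessian."""
    m = np.asarray(m0, dtype=float)
    for _ in range(n_iter):
        r = d - f(m)                  # residual with current model
        J = jac(m)                    # Jacobian of the forward model
        m = m + np.linalg.solve(J.T @ J + damping * np.eye(len(m)), J.T @ r)
    return m

# Toy forward model: amplitude decaying with offset,
# d_i = a * exp(-b * x_i), with unknowns m = (a, b).
x = np.linspace(0.0, 2.0, 21)
f = lambda m: m[0] * np.exp(-m[1] * x)
jac = lambda m: np.column_stack([np.exp(-m[1] * x),
                                 -m[0] * x * np.exp(-m[1] * x)])
d = 2.0 * np.exp(-0.7 * x)            # noise-free synthetic data
m_est = gauss_newton(f, jac, [1.5, 0.5], d)
```

For this noise-free zero-residual problem the iteration converges to the true parameters; with noisy data the same iteration converges to the least-squares estimate instead.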

20.
The root cause of the instability of the least-squares (LS) solution of the resistivity inverse problem is the ill-conditioning of the sensitivity matrix. To circumvent this problem, a new LS approach is investigated in this paper. At each iteration, the sensitivity matrix is weighted in multiple ways, generating a set of systems of linear equations. By solving each system, several candidate models are obtained. As a consequence, the space of models is explored in a more extensive and effective way, resulting in a more robust and stable LS approach to solving the resistivity inverse problem. This new approach is called the multiple reweighted LS method (MRLS). The problems encountered when using the L1- or L2-norm are discussed and the advantages of working with the MRLS method are highlighted. A five-layer earth model, which generates an ill-conditioned matrix due to equivalence, is used to generate a synthetic data set for the Schlumberger configuration. The data are randomly corrupted by noise and then inverted using the L2-norm, the L1-norm and the MRLS algorithm. Stabilized solutions, even though blurred, could only be obtained with the L2- and L1-norms by using a heavy ridge-regression parameter. On the other hand, the MRLS solution is stable without regression factors and is superior and clearer. For a better appraisal, the same initial model was used in all cases. The MRLS algorithm is also demonstrated on a field data set: a stable solution is obtained.
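The candidate-generation idea — weight the rows of the sensitivity matrix in several ways, solve each weighted system, and keep the best candidate — can be sketched as follows. This is an illustrative sketch of the idea only, not the authors' exact weighting scheme, and the toy linear system is assumed:

```python
import numpy as np

def multiple_reweighted_ls(G, d, n_candidates=20, seed=0):
    """Sketch of the MRLS idea: solve several differently row-weighted
    versions of the system and keep the candidate model with the
    smallest unweighted data misfit."""
    rng = np.random.default_rng(seed)
    best_m, best_misfit = None, np.inf
    for _ in range(n_candidates):
        w = rng.uniform(0.1, 1.0, size=G.shape[0])    # row weights
        m, *_ = np.linalg.lstsq(G * w[:, None], d * w, rcond=None)
        misfit = np.linalg.norm(G @ m - d)
        if misfit < best_misfit:
            best_m, best_misfit = m, misfit
    return best_m

# Consistent toy system: every weighted solve fits exactly, so the
# selected candidate reproduces the true parameters.
G = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0], [1.0, 3.0]])
d = G @ np.array([1.0, 2.0])
m_best = multiple_reweighted_ls(G, d)
```

The benefit the abstract describes appears on ill-conditioned, noisy systems, where the differently weighted solves scatter across the space of near-equivalent models and the misfit criterion selects among them.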
