Similar Documents
A total of 20 similar documents were retrieved.
1.

The use of spontaneous potential (SP) anomalies is well established in the geophysical literature because of their effectiveness in solving many complex problems in mineral exploration. The inverse problem of self-potential data interpretation is generally ill-posed and nonlinear, and methods based on derivative analysis often fail to reach the optimal solution (the global minimum), becoming trapped in a local minimum instead. A simple new heuristic solution for SP anomalies due to a 2D inclined sheet of infinite horizontal extent is investigated in this study to address these problems. The method uses the whale optimization algorithm (WOA) as an effective heuristic solver for the inverse problem of the self-potential field due to a 2D inclined sheet. The WOA was first applied to a synthetic example, implemented in MATLAB, where the effect of random noise was examined and the method gave good results. The technique was then applied to several real field profiles from different localities to determine the parameters of mineralized zones or the associated shear zones. The inversion results show that the WOA recovered the unknown parameters accurately and compared well with published inversion methods.
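The abstract describes the optimizer only verbally. Below is a minimal, hedged Python sketch (not the authors' MATLAB code) of the standard whale optimization algorithm applied to a least-squares SP misfit; the inclined-sheet expression `sp_sheet` and all parameter values are illustrative assumptions.

```python
import numpy as np

def sp_sheet(x, k, x0, h, a, alpha):
    """Commonly used SP expression for a 2-D inclined sheet (assumed form, not the
    authors' exact parametrization): k polarization constant, x0 centre, h depth to
    centre, a half-width, alpha inclination in radians."""
    num = (x - x0 - a * np.cos(alpha)) ** 2 + (h - a * np.sin(alpha)) ** 2
    den = (x - x0 + a * np.cos(alpha)) ** 2 + (h + a * np.sin(alpha)) ** 2
    return k * np.log(num / den)

def woa(misfit, bounds, n_whales=30, n_iter=200, seed=0):
    """Bare-bones whale optimization algorithm (encircling / spiral / random search)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    X = rng.uniform(lo, hi, size=(n_whales, len(lo)))
    best = X[np.argmin([misfit(x) for x in X])].copy()
    for t in range(n_iter):
        a = 2.0 * (1.0 - t / n_iter)              # linearly decreasing control parameter
        for i in range(n_whales):
            r1, r2 = rng.random(len(lo)), rng.random(len(lo))
            A, C = 2.0 * a * r1 - a, 2.0 * r2
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1.0):       # encircle the current best solution
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                             # explore around a random whale
                    Xr = X[rng.integers(n_whales)]
                    X[i] = Xr - A * np.abs(C * Xr - X[i])
            else:                                 # logarithmic spiral (bubble-net) update
                l = rng.uniform(-1.0, 1.0)
                X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lo, hi)
        f = np.array([misfit(x) for x in X])
        if f.min() < misfit(best):
            best = X[np.argmin(f)].copy()
    return best

# Toy usage: invert noisy synthetic data for (k, x0, h, a, alpha).
x_obs = np.linspace(-100, 100, 101)
true = (-80.0, 5.0, 20.0, 10.0, np.deg2rad(40.0))
d_obs = sp_sheet(x_obs, *true) + np.random.default_rng(1).normal(0, 1.0, x_obs.size)
mis = lambda p: np.sum((sp_sheet(x_obs, *p) - d_obs) ** 2)
bounds = np.array([[-200, 200], [-50, 50], [1, 60], [1, 40], [0.05, np.pi / 2]])
print(woa(mis, bounds))
```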


2.
3.
We investigate the use of general, non-l2 measures of data misfit and model structure in the solution of the non-linear inverse problem. Of particular interest are robust measures of data misfit, and measures of model structure which enable piecewise-constant models to be constructed. General measures can be incorporated into traditional linearized, iterative solutions to the non-linear problem through the use of an iteratively reweighted least-squares (IRLS) algorithm. We show how such an algorithm can be used to solve the linear inverse problem when general measures of misfit and structure are considered. The magnetic stripe example of Parker (1994) is used as an illustration. This example also emphasizes the benefits of using a robust measure of misfit when outliers are present in the data. We then show how the IRLS algorithm can be used within a linearized, iterative solution to the non-linear problem. The relevant procedure contains two iterative loops which can be combined in a number of ways. We present two possibilities. The first involves a line search to determine the most appropriate value of the trade-off parameter and the complete solution, via the IRLS algorithm, of the linearized inverse problem for each value of the trade-off parameter. In the second approach, a schedule of prescribed values for the trade-off parameter is used and the iterations required by the IRLS algorithm are combined with those for the linearized, iterative inversion procedure. These two variations are then applied to the 1-D inversion of both synthetic and field time-domain electromagnetic data.
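As a concrete illustration of the IRLS idea described above, here is a minimal sketch for a linear problem Gm = d with an lp misfit and lq roughness measure; the particular norms, the first-difference roughness operator and the stopping rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def irls(G, d, L, beta, p=1.0, q=1.0, n_iter=30, eps=1e-6):
    """Iteratively reweighted least squares for
    min ||G m - d||_p^p + beta * ||L m||_q^q, handled via quadratic reweighting."""
    m = np.linalg.lstsq(np.vstack([G, np.sqrt(beta) * L]),
                        np.concatenate([d, np.zeros(L.shape[0])]), rcond=None)[0]
    for _ in range(n_iter):
        r_d = G @ m - d                              # data residuals
        r_m = L @ m                                  # model roughness elements
        # IRLS weights |r|^(p-2), with eps keeping them finite near zero residuals
        w_d = np.maximum(np.abs(r_d), eps) ** (p - 2.0)
        w_m = np.maximum(np.abs(r_m), eps) ** (q - 2.0)
        A = np.vstack([np.sqrt(w_d)[:, None] * G, np.sqrt(beta * w_m)[:, None] * L])
        rhs = np.concatenate([np.sqrt(w_d) * d, np.zeros(L.shape[0])])
        m = np.linalg.lstsq(A, rhs, rcond=None)[0]   # solve the reweighted LS problem
    return m

# Toy usage: p = 1 gives a robust fit in the presence of outliers,
# q = 1 favours piecewise-constant models.
rng = np.random.default_rng(0)
G = rng.normal(size=(80, 20))
m_true = np.zeros(20); m_true[5:10] = 1.0            # piecewise-constant target
d = G @ m_true + 0.01 * rng.normal(size=80)
d[::15] += 5.0                                       # a few large outliers
L = np.diff(np.eye(20), axis=0)                      # first-difference roughness operator
print(np.round(irls(G, d, L, beta=0.1, p=1.0, q=1.0), 2))
```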

4.
The inversion of high-resolution geoid anomaly maps derived from satellite altimetry should allow one to retrieve the lithospheric elastic thickness, Te, and crustal density, ρc. Indeed, the bending of a lithospheric plate under the load of a seamount depends on both parameters, and the associated geoid anomaly is correspondingly dependent on the two parameters. The difference between the observed and modelled geoid signatures is estimated by a cost function, J, of the two variables Te and ρc. We show that this cost function forms a valley structure along which many local minima appear, the global minimum of J corresponding to the true values of the lithospheric parameters. Classical gradient methods fail to find this global minimum because they converge to the first local minimum of J encountered, so that the final parameter estimate strongly depends on the starting pair of values (Te, ρc). We here implement a non-linear optimization algorithm to recover these two parameters from altimetry data. We demonstrate from the inversion of synthetic data that this approach ensures robust estimates of Te and ρc by activating two search phases alternately: a gradient phase to find a local minimum of J, and a tunnelling phase through high values of the cost function. The accuracy of the solution can be improved by a search in an iteratively restricted parameter subspace. Applying our non-linear inversion to the Great Meteor Seamount geoid data, we further show that the inverse problem is intrinsically ill-posed. As a consequence, minute geoid (or gravity) data errors can induce large changes in any recovery of lithospheric elastic thickness and crustal density.
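A hedged sketch of the alternating gradient/tunnelling strategy, using the classical tunnelling function of Levy and Montalvo as a stand-in for the authors' exact implementation; the two-parameter cost function J below is a toy multi-minimum surrogate, not a geoid misfit.

```python
import numpy as np
from scipy.optimize import minimize

def J(x):
    """Toy cost function with a valley containing several local minima
    (surrogate for the (Te, rho_c) geoid misfit discussed in the abstract)."""
    return (x[0] + x[1] - 3.0) ** 2 + 0.3 * np.sin(3.0 * x[0]) ** 2 + 0.05 * (x[0] - 2.0) ** 2

def gradient_tunnelling(J, x0, n_cycles=10, lam=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x_best = minimize(J, x0, method="Nelder-Mead").x        # local (gradient-type) phase
    J_best = J(x_best)
    for _ in range(n_cycles):
        # Tunnelling phase: look for a point with J(x) <= J_best by minimizing
        # T(x) = (J(x) - J_best) / ||x - x_best||^(2*lam).
        def T(x):
            d = np.linalg.norm(x - x_best)
            return (J(x) - J_best) / max(d, 1e-8) ** (2.0 * lam)
        x_start = x_best + rng.normal(scale=0.5, size=x_best.size)
        x_t = minimize(T, x_start, method="Nelder-Mead").x
        if J(x_t) < J_best:                                  # tunnel succeeded: new basin
            x_best = minimize(J, x_t, method="Nelder-Mead").x
            J_best = J(x_best)
    return x_best, J_best

print(gradient_tunnelling(J, x0=np.array([0.0, 0.0])))
```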

5.
Inversion of time domain three-dimensional electromagnetic data
We present a general formulation for inverting time domain electromagnetic data to recover a 3-D distribution of electrical conductivity. The forward problem is solved using finite volume methods in the spatial domain and an implicit method (backward Euler) in the time domain. A modified Gauss–Newton strategy is employed to solve the inverse problem. The modifications include the use of a quasi-Newton method to generate a pre-conditioner for the perturbed system, and the implementation of an iterative Tikhonov approach in the solution of the inverse problem. In addition, we show how the size of the inverse problem can be reduced through a corrective source procedure. The same procedure can correct for the discretization errors that inevitably arise. We also show how the inversion can be carried out efficiently even when the decay time of the conductor is significantly larger than the repetition time of the transmitter waveform. This requires a second processor to carry out an additional forward modelling. Our inversion algorithm is general and is applicable to any electromagnetic field (E, H, dB/dt) measured in the air, on the ground or in boreholes, and to an arbitrary grounded or ungrounded source. Three synthetic examples illustrate the basic functionality of the algorithm, and a field example demonstrates its applicability at larger scale.
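For reference, the generic damped (Tikhonov-regularized) Gauss–Newton model update that underlies schemes of this kind can be written as follows; this is a schematic form, and the paper's quasi-Newton preconditioning and corrective-source details are not shown.

```latex
\delta \mathbf{m}_k =
\left( \mathbf{J}_k^{\mathsf T} \mathbf{J}_k + \beta\, \mathbf{W}^{\mathsf T}\mathbf{W} \right)^{-1}
\left[ \mathbf{J}_k^{\mathsf T} \big( \mathbf{d}^{\mathrm{obs}} - F(\mathbf{m}_k) \big)
     - \beta\, \mathbf{W}^{\mathsf T}\mathbf{W} \left( \mathbf{m}_k - \mathbf{m}_{\mathrm{ref}} \right) \right],
\qquad
\mathbf{m}_{k+1} = \mathbf{m}_k + \alpha_k\, \delta \mathbf{m}_k ,
```

where F is the forward operator, J_k its sensitivity matrix at the current model m_k, W a regularization operator, β the trade-off parameter and α_k a step length.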

6.
An iterative solution to the non-linear 3-D electromagnetic inverse problem is obtained by successive linearized model updates using the method of conjugate gradients. Full wave equation modelling for controlled sources is employed to compute model sensitivities and predicted data in the frequency domain with an efficient 3-D finite-difference algorithm. Necessity dictates that the inverse problem be underdetermined, since realistic reconstructions require the solution for tens of thousands of parameters. In addition, large-scale 3-D forward modelling is required, and this can easily involve the solution of several million electric field unknowns per solve. A massively parallel computing platform has therefore been utilized to obtain reasonable execution times, and results are given for the 1840-node Intel Paragon. The solution is demonstrated on a synthetic example with added Gaussian noise, where the data were produced with an integral equation forward-modelling code that is different from the finite-difference code embedded in the inversion algorithm.
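One common way to realize such a linearized update step is to solve J δm ≈ δd with conjugate gradients on the normal equations (CGLS), using only products with the sensitivity matrix and its transpose. The sketch below is a generic, matrix-free version; the damping value and the toy matrix standing in for the EM sensitivities are assumptions.

```python
import numpy as np

def cgls(Jv, Jtv, delta_d, n, n_iter=200, damp=0.0, tol=1e-10):
    """CGLS for min ||J x - delta_d||^2 + damp*||x||^2.
    Jv(x) and Jtv(y) apply the sensitivity matrix and its transpose; J is never formed."""
    x = np.zeros(n)
    r = delta_d.copy()                 # residual in data space
    s = Jtv(r) - damp * x              # (negative) gradient in model space
    p = s.copy()
    gamma = s @ s
    for _ in range(n_iter):
        q = Jv(p)
        alpha = gamma / (q @ q + damp * (p @ p))
        x += alpha * p
        r -= alpha * q
        s = Jtv(r) - damp * x
        gamma_new = s @ s
        if np.sqrt(gamma_new) < tol:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

# Toy usage with an explicit matrix standing in for the EM sensitivities.
rng = np.random.default_rng(0)
J = rng.normal(size=(60, 40))
dm_true = rng.normal(size=40)
dd = J @ dm_true
dm = cgls(lambda v: J @ v, lambda v: J.T @ v, dd, n=40)
print("recovery error:", np.linalg.norm(dm - dm_true))
```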

7.
All conventional stress inversion methods, when applied to earthquake focal mechanism data, suffer from uncertainty as to which nodal plane is the true fault plane. This paper deals with several problems in stress inversion brought about by this uncertainty. Our analysis shows that the direction of shear stress on the auxiliary plane does not coincide with the hypothetical slip direction unless the B-axis is parallel to one of the three principal stress directions. Based on this simple fact, we propose a new algorithm for dealing with the ambiguity in fault/auxiliary plane identification. We also propose a method to handle the inhomogeneity of data quality, a problem that is common in, and peculiar to, focal mechanism data. Different inversion methods and algorithms are applied to two sets of 'focal mechanism' data simulated from field fault-slip measurements. The inversion results show that, among the four stress parameters inverted, the stress ratio suffers the most from the ambiguity in fault/auxiliary plane identity, whereas the solutions for the principal stress directions are surprisingly good. The errors in the inversion solutions resulting from the fault/auxiliary plane ambiguity can be significantly reduced by subjectively controlling the sample variance of the measurement errors. Our results also suggest that the fault plane cannot be distinguished correctly from the auxiliary plane with high probability on the basis of stress inversion alone.
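The geometric point about shear-stress directions can be checked directly: under the Wallace–Bott hypothesis the predicted slip direction is the direction of resolved shear traction on the plane. A minimal sketch follows; the example stress tensor and the sign conventions are assumptions, not taken from the paper.

```python
import numpy as np

def shear_direction(sigma, n):
    """Unit direction of the shear (tangential) traction on a plane with unit normal n,
    for stress tensor sigma (Wallace-Bott predicted slip direction, up to sign convention)."""
    t = sigma @ n                        # traction vector on the plane
    t_shear = t - (t @ n) * n            # remove the normal component
    return t_shear / np.linalg.norm(t_shear)

# A symmetric stress tensor whose principal axes are oblique to the fault geometry,
# so the B-axis is not parallel to any principal stress direction (compression negative).
sigma = np.array([[-2.8, -0.4, -0.6],
                  [-0.4, -1.7,  0.3],
                  [-0.6,  0.3, -1.1]])

n_fault = np.array([0.0, 1.0, 0.0])        # hypothetical fault-plane normal
slip = shear_direction(sigma, n_fault)     # predicted slip on the fault plane
n_aux = slip                               # auxiliary-plane normal equals the slip direction

# For a double couple the hypothetical slip on the auxiliary plane is the fault normal;
# the resolved shear direction on that plane generally differs from it.
print("shear on fault plane    :", np.round(slip, 3))
print("shear on auxiliary plane:", np.round(shear_direction(sigma, n_aux), 3))
print("fault-plane normal      :", n_fault)
```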

8.
A tomographic inversion technique that inverts traveltimes to obtain a model of the subsurface in terms of velocities and interfaces is presented. It uses a combination of refraction, wide-angle reflection and normal-incidence data, it simultaneously inverts for velocities and interface depths, and it is able to quantify the errors and trade-offs in the final model. The technique uses an iterative linearized approach to the non-linear traveltime inversion problem. The subsurface is represented as a set of layers separated by interfaces, across which the velocity may be discontinuous. Within each layer the velocity varies in two dimensions and has a continuous first derivative. Rays are traced in this medium using a technique based on ray perturbation theory, and two-point ray tracing is avoided by interpolating the traveltimes to the receivers from a roughly equidistant fan of rays. The calculated traveltimes are inverted by simultaneously minimizing the misfit between the data and calculated traveltimes, and the roughness of the model. This 'smoothing regularization' stabilizes the solution of the inverse problem. In practice, the first iterations are performed with a high level of smoothing. As the inversion proceeds, the level of smoothing is gradually reduced until the traveltime residual is at the estimated level of noise in the data. At this point, a minimum-feature solution is obtained, which should contain only those features discernible over the noise.
The technique is tested on a synthetic data set, demonstrating its accuracy and stability and also illustrating the desirability of including a large number of different ray types in an inversion.
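Schematically, the smoothing-regularized objective described in the abstract above has the generic form (not the authors' exact notation):

```latex
\Phi(\mathbf{m}) \;=\;
\left\| \mathbf{C}_d^{-1/2} \left( \mathbf{t}^{\mathrm{obs}} - \mathbf{t}(\mathbf{m}) \right) \right\|^{2}
\;+\; \lambda \left\| \mathbf{R}\,\mathbf{m} \right\|^{2} ,
```

where t(m) are the calculated traveltimes, C_d the data covariance, R a roughness operator acting on the velocity and interface-depth parameters, and λ the smoothing weight, started high and reduced between iterations until the traveltime residual reaches the estimated noise level.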

9.
Summary. Born inverse methods give accurate and stable results when the source wavelet is impulsive. However, in many practical applications (reflection seismology) an impulsive source cannot be realized and the inversion needs to be generalized to include an arbitrary source function. In this paper, we present a Born solution to the seismic inverse problem which can accommodate an arbitrary source function and give accurate and stable results. It is shown that the generalized inversion algorithm reduces to a Wiener shaping filter, which is solved efficiently using a Levinson recursion algorithm. Numerical examples with synthetic and real field data illustrate the validity of our method.
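A minimal sketch of the Wiener shaping-filter step, solved with SciPy's Levinson-based Toeplitz solver; the wavelet, the desired output and the filter length are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_toeplitz   # Levinson-Durbin recursion under the hood

def shaping_filter(source, desired, nf):
    """Least-squares (Wiener) filter f of length nf such that f * source ~ desired.
    The symmetric Toeplitz normal equations are solved by Levinson recursion."""
    full_ss = np.correlate(source, source, mode="full")
    full_ds = np.correlate(desired, source, mode="full")
    zero = len(source) - 1                       # index of zero lag
    r = full_ss[zero:zero + nf]                  # source autocorrelation, lags 0..nf-1
    g = full_ds[zero:zero + nf]                  # desired/source cross-correlation, lags 0..nf-1
    return solve_toeplitz(r, g)

# Toy usage: shape a non-impulsive source wavelet towards a spike at its peak time.
dt = 0.004
t = np.arange(0, 0.2, dt)
source = np.exp(-((t - 0.06) / 0.015) ** 2) * np.cos(2 * np.pi * 30 * (t - 0.06))
desired = np.zeros_like(source); desired[15] = 1.0
f = shaping_filter(source, desired, nf=40)
shaped = np.convolve(f, source)[:len(source)]
print(np.round(shaped[10:20], 2))
```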

10.
Summary. The first non-trivial inverse problem for media with non-horizontal reflectors z = h(x, y) was set up for a model of the type V = V(z), 0 ≤ z ≤ h(x, y), and the possibility of reconstructing the functions h(x, y) and V(z) for z ∈ (min h, max h) was shown. In the alternative case, when h = constant and V = V(x), there is a unique solution. Only particular cases have been considered for media with h = constant and V = V(x, z). In the second half of the 1970s, the conditional correctness of a number of inverse problems was proved and the important concept of a sufficient data system was proposed.
Over the last 20 years much attention has been paid to layered homogeneous media with curved interfaces, which act as reflectors and refractors at the same time. The task of continuing the second derivatives of the eikonal played a very important role in this problem. Using the connection between the second derivatives of the CDP travel-time curve and those of the eikonal from a phantom source at the base of the normal ray (V. Chernyak, S. Gritsenko, T. Krey), formulae of the Dix type were obtained.
Recently, methods based on linearization with respect to a small parameter have been proposed for media with slightly curved interfaces. A number of iterative algorithms for optimization and inversion have been developed, which exploit advances in the solution of direct kinematical problems. The development of the theory of inverse problems and of the statistical theory of interpretation has led to a general concept of multistep algorithms and their classification.

11.
The inversion of recent borehole temperature measurements has proved to be a successful tool for determining past ground surface temperature histories. To take into account the heterogeneity of thermal properties and their non-linear dependence on temperature itself, a versatile 1-D inversion technique based on a finite-difference approach has been developed. Regularization of the generally ill-posed problem is obtained by an appropriate version of Tikhonov regularization of variable order. In this approach, a regularization parameter has to be determined, representing a trade-off between data fit and model smoothness. We propose to select this parameter by generalized cross-validation. The resulting technique is employed in case studies from the Kola ultradeep drilling site and from another borehole in northeastern Poland. Comparing the results from both sites corroborates the hypothesis that subglacial ground surface temperatures, as met at Kola, are often much higher than those in areas exposed to atmospheric conditions (Poland).
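A minimal sketch of choosing a Tikhonov regularization parameter by generalized cross-validation for a linear(ized) problem, evaluated via the SVD; the forward matrix and noise level below are toy assumptions, not the borehole problem itself.

```python
import numpy as np

def gcv_lambda(X, d, lambdas):
    """Generalized cross-validation for Tikhonov regularization:
    GCV(lam) = n * ||(I - A(lam)) d||^2 / trace(I - A(lam))^2,
    with A(lam) = X (X^T X + lam I)^(-1) X^T, computed cheaply from the SVD of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    beta = U.T @ d                       # data projected on the left singular vectors
    d_perp2 = d @ d - beta @ beta        # part of d outside the range of X
    scores = []
    for lam in lambdas:
        f = s**2 / (s**2 + lam)          # Tikhonov filter factors
        resid2 = np.sum(((1.0 - f) * beta) ** 2) + d_perp2
        trace = len(d) - np.sum(f)
        scores.append(len(d) * resid2 / trace**2)
    return lambdas[int(np.argmin(scores))], np.array(scores)

# Toy usage on a mildly ill-posed linear problem.
rng = np.random.default_rng(0)
n, m = 60, 40
X = rng.normal(size=(n, m)) @ np.diag(1.0 / np.arange(1, m + 1))   # decaying singular values
m_true = rng.normal(size=m)
d = X @ m_true + 0.05 * rng.normal(size=n)
lam_best, _ = gcv_lambda(X, d, np.logspace(-8, 2, 60))
print("GCV-selected regularization parameter:", lam_best)
```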

12.
Topographic databases normally contain areas of different land cover classes, commonly defining a planar partition, that is, gaps and overlaps are not allowed. When reducing the scale of such a database, some areas become too small for representation and need to be aggregated. This unintentionally but unavoidably results in changes of classes. In this article we present an optimisation method for the aggregation problem. This method aims to minimise changes of classes and to create compact shapes, subject to hard constraints ensuring aggregates of sufficient size for the target scale. To quantify class changes we apply a semantic distance measure. We give a graph theoretical problem formulation and prove that the problem is NP-hard, meaning that we cannot hope to find an efficient algorithm. Instead, we present a solution by mixed-integer programming that can be used to optimally solve small instances with existing optimisation software. In order to process large datasets, we introduce specialised heuristics that allow certain variables to be eliminated in advance and a problem instance to be decomposed into independent sub-instances. We tested our method for a dataset of the official German topographic database ATKIS with input scale 1:50,000 and output scale 1:250,000. For small instances, we compare results of this approach with optimal solutions that were obtained without heuristics. We compare results for large instances with those of an existing iterative algorithm and an alternative optimisation approach by simulated annealing. These tests allow us to conclude that, with the defined heuristics, our optimisation method yields high-quality results for large datasets in modest time.
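A heavily simplified sketch of the mixed-integer idea (class reassignment with minimum-area constraints only; the compactness term, contiguity of aggregates and the real semantic distances are omitted or assumed). The open-source PuLP modeller with the bundled CBC solver is used here as a stand-in for whatever optimisation software the authors used; all input values are invented.

```python
import pulp

# Toy input: area sizes, original classes, a semantic distance between classes,
# and the minimum total area a class must reach at the target scale (assumed values).
sizes = {"a1": 4.0, "a2": 1.0, "a3": 2.0, "a4": 0.8, "a5": 3.0}
orig = {"a1": "forest", "a2": "water", "a3": "forest", "a4": "settlement", "a5": "settlement"}
classes = ["forest", "water", "settlement"]
dist = {(c1, c2): (0.0 if c1 == c2 else 1.0) for c1 in classes for c2 in classes}
dist[("forest", "settlement")] = dist[("settlement", "forest")] = 2.0
min_area = 3.0

prob = pulp.LpProblem("aggregation", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (sizes, classes), cat=pulp.LpBinary)   # area a gets class c
y = pulp.LpVariable.dicts("y", classes, cat=pulp.LpBinary)            # class c is used at all

# Objective: minimise the area-weighted semantic distance of class changes.
prob += pulp.lpSum(sizes[a] * dist[(orig[a], c)] * x[a][c] for a in sizes for c in classes)
for a in sizes:                                    # every area receives exactly one class
    prob += pulp.lpSum(x[a][c] for c in classes) == 1
for c in classes:
    for a in sizes:                                # a class can only be used if it is "open"
        prob += x[a][c] <= y[c]
    # an open class must reach the minimum area required at the target scale
    prob += pulp.lpSum(sizes[a] * x[a][c] for a in sizes) >= min_area * y[c]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({a: next(c for c in classes if x[a][c].value() > 0.5) for a in sizes})
```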

13.
We have formulated a 3-D inverse solution for the magnetotelluric (MT) problem using the non-linear conjugate gradient method. Finite-difference methods are used to compute predicted data and objective functional gradients efficiently. Only six forward modelling applications per frequency are typically required to produce the model update at each iteration. This efficiency is achieved by incorporating a simple line search procedure that calls for a sufficient reduction in the objective functional, instead of an exact determination of its minimum along a given descent direction. Additional efficiencies are sought by incorporating preconditioning to accelerate solution convergence. Even with these efficiencies, the solution's realism and complexity are still limited by the speed and memory of serial processors. To overcome this barrier, the scheme has been implemented on a parallel computing platform where tens to thousands of processors operate on the problem simultaneously. The inversion scheme is tested by inverting data produced with a forward modelling code that is algorithmically different from the one employed in the inversion. This check provides independent verification of the scheme, since the two forward modelling algorithms are prone to different types of numerical error.
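A minimal sketch of non-linear conjugate gradients with a sufficient-decrease (backtracking) line search and optional preconditioning, applied to a generic objective; the MT objective functional itself is not reproduced here, and the toy quadratic and Jacobi preconditioner are assumptions.

```python
import numpy as np

def nlcg(fun, grad, m0, precond=None, n_iter=100, c1=1e-4, tol=1e-8):
    """Preconditioned non-linear conjugate gradients (Polak-Ribiere) with a backtracking
    line search that enforces only the Armijo sufficient-decrease condition."""
    precond = precond or (lambda g: g)          # identity preconditioner by default
    m = m0.copy()
    g = grad(m)
    h = precond(g)
    p = -h
    for _ in range(n_iter):
        # Backtracking line search: shrink alpha until sufficient decrease is achieved.
        alpha, f0, slope = 1.0, fun(m), g @ p
        while fun(m + alpha * p) > f0 + c1 * alpha * slope:
            alpha *= 0.5
            if alpha < 1e-12:
                return m
        m = m + alpha * p
        g_new = grad(m)
        if np.linalg.norm(g_new) < tol:
            break
        h_new = precond(g_new)
        beta = max(0.0, (g_new - g) @ h_new / (g @ h))   # Polak-Ribiere(+)
        p = -h_new + beta * p
        g, h = g_new, h_new
    return m

# Toy usage: an ill-conditioned quadratic with a diagonal (Jacobi) preconditioner.
A = np.diag(np.linspace(1.0, 200.0, 30))
b = np.ones(30)
fun = lambda m: 0.5 * m @ A @ m - b @ m
grad = lambda m: A @ m - b
m_hat = nlcg(fun, grad, np.zeros(30), precond=lambda g: g / np.diag(A))
print("final gradient norm:", np.linalg.norm(A @ m_hat - b))
```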

14.
Regularization is usually necessary in solving seismic tomographic inversion problems. In general, the equation system of seismic tomography is very large, often making a suitable choice of the regularization parameter difficult. In this paper, we propose an algorithm for the practical choice of the regularization parameter in linear tomographic inversion. The algorithm is based on the types of statistical assumptions most commonly used in seismic tomography. We first project the system of equations onto a Krylov subspace using Lanczos bidiagonalization. In the transformed subspace, the system of equations then takes the form of a standard damped least-squares normal equation. The solution of this normal equation can be written as an explicit function of the regularization parameter, which makes the choice of the regularization parameter computationally convenient. Two criteria for the choice of the regularization parameter are investigated with numerical simulations. If the dimension of the transformed space is much smaller than that of the original model space, the algorithm is very efficient computationally, which is useful in practice for large seismic tomography problems.
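A minimal sketch of the projection step: run a few Lanczos (Golub–Kahan) bidiagonalization steps once, then solve the small damped least-squares problem in the Krylov subspace for any number of trial regularization parameters almost for free. Reorthogonalization and the statistical selection criteria discussed in the paper are omitted; the toy matrix is an assumption.

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan (Lanczos) bidiagonalization of A with starting vector b.
    Returns U (m x k+1), B (k+1 x k lower bidiagonal), V (n x k), and beta1 = ||b||."""
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
    beta1 = np.linalg.norm(b); U[:, 0] = b / beta1
    v = A.T @ U[:, 0]; alpha = np.linalg.norm(v); V[:, 0] = v / alpha
    B[0, 0] = alpha
    for i in range(k):
        u = A @ V[:, i] - alpha * U[:, i]
        beta = np.linalg.norm(u); U[:, i + 1] = u / beta
        B[i + 1, i] = beta
        if i + 1 < k:
            v = A.T @ U[:, i + 1] - beta * V[:, i]
            alpha = np.linalg.norm(v); V[:, i + 1] = v / alpha
            B[i + 1, i + 1] = alpha
    return U, B, V, beta1

def damped_solution(B, V, beta1, lam, k):
    """Solve min ||B y - beta1*e1||^2 + lam^2 ||y||^2 in the Krylov subspace and
    map back to model space as x = V y (an explicit, cheap function of lam)."""
    rhs = np.zeros(B.shape[0] + k); rhs[0] = beta1
    M = np.vstack([B, lam * np.eye(k)])
    return V @ np.linalg.lstsq(M, rhs, rcond=None)[0]

# Toy usage: one projection, many damping values.
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 120)) @ np.diag(1.0 / np.arange(1, 121) ** 1.5)
b = A @ rng.normal(size=120) + 0.01 * rng.normal(size=200)
U, B, V, beta1 = golub_kahan(A, b, k=25)
for lam in [1e-4, 1e-2, 1e-1]:
    x = damped_solution(B, V, beta1, lam, k=25)
    print(lam, round(float(np.linalg.norm(A @ x - b)), 4))
```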

15.
This paper presents a simple non-linear method of magnetotelluric inversion that accounts for the computation of depth averages of the electrical conductivity profile of the Earth. The method is not exact but it still preserves the non-linear character of the magnetotelluric inverse problem. The basic formula for the averages is derived from the well-known conductance equation, but instead of following the tradition of solving directly for conductivity, a solution is sought in terms of spatial averages of the conductivity distribution. Formulas for the variance and the resolution are then readily derived. In terms of Backus–Gilbert theory for linear appraisal, it is possible to inspect the classical trade-off curves between variance and resolution, but instead of resorting to linearized iterative methods the curves can be computed analytically. The stability of the averages naturally depends on their variance, but this can be controlled at will. In general, the better the resolution, the worse the variance. For the case of optimal resolution and worst variance, the formula for the averages reduces to the well-known Niblett–Bostick transformation. This explains why the transformation is unstable for noisy data. In this respect, the computation of averages leads naturally to a stable version of the Niblett–Bostick transformation. The performance of the method is illustrated with numerical experiments and applications to field data. These validate the formula as an approximate but useful tool for making inferences about the deep conductivity profile of the Earth, using no information or assumption other than the surface geophysical measurements.
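For reference, the limiting case mentioned at the end, the Niblett–Bostick transformation, takes only a few lines to write down; the formulas below are the commonly quoted form of the transform (slope taken with respect to period), and the sounding curve is a synthetic placeholder, not field data.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

def niblett_bostick(period, rho_app):
    """Classical Niblett-Bostick approximate transform of apparent resistivity rho_app(T):
    depth = sqrt(rho_app*T / (2*pi*mu0)),  rho_NB = rho_app*(1+m)/(1-m),
    with m = d log(rho_app) / d log(T) and |m| < 1 assumed."""
    depth = np.sqrt(rho_app * period / (2.0 * np.pi * MU0))
    m = np.gradient(np.log(rho_app), np.log(period))   # log-log slope of the sounding curve
    rho_nb = rho_app * (1.0 + m) / (1.0 - m)
    return depth, rho_nb

# Synthetic placeholder sounding curve.
period = np.logspace(-2, 3, 30)                      # s
rho_app = 100.0 + 80.0 * np.tanh(np.log10(period))   # ohm*m, smooth rise with period
depth, rho_nb = niblett_bostick(period, rho_app)
for z, r in list(zip(depth, rho_nb))[::6]:
    print(f"{z:12.0f} m   {r:8.1f} ohm*m")
```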

16.
The post-seismic response of a viscoelastic Earth to a seismic dislocation can be computed analytically within the framework of normal modes, based on the application of propagator methods. This technique, widely documented in the literature, suffers from several shortcomings; the main drawback is related to the numerical solution of the secular equation, whose degree increases linearly with the number of viscoelastic layers, so that only coarse-layered models are practically solvable. Recently, a viable alternative to the standard normal-mode approach, based on the Post–Widder Laplace inversion formula, has been proposed in the realm of postglacial rebound modelling. The main advantage of this method is that it bypasses the explicit solution of the secular equation, while retaining the analytical structure of the propagator formalism. At the same time, the numerical computation is much simplified, so that additional features such as linear non-Maxwell rheologies can be implemented straightforwardly. In this work, for the first time, we apply the Post–Widder Laplace inversion formula to a post-seismic rebound model. We test the method against the standard normal-mode solution and perform various benchmarks aimed at tuning the algorithm and optimizing computational performance while ensuring the stability of the solution. As an application, we address the issue of finding the minimum number of layers with distinct elastic properties needed to describe accurately the post-seismic relaxation of a realistic Earth model. Finally, we demonstrate the potential of our code by modelling the post-seismic relaxation after the 2004 Sumatra–Andaman earthquake, comparing results based upon Maxwell and Burgers rheologies.
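For reference, the Post–Widder formula on which this alternative inversion route rests reads as follows (general form; in practice a finite n is used, with acceleration or regularization, to approximate the limit):

```latex
f(t) \;=\; \lim_{n \to \infty} \frac{(-1)^{n}}{n!}
\left( \frac{n}{t} \right)^{n+1} F^{(n)}\!\left( \frac{n}{t} \right),
```

where F(s) is the Laplace transform of f(t) and F^{(n)} its n-th derivative, so that only real, positive values of the Laplace variable are ever required, and the secular equation never has to be solved explicitly.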

17.
Generalized Born scattering of elastic waves in 3-D media
It is well known that when a seismic wave propagates through an elastic medium with gradients in the parameters which describe it (e.g. slowness and density), energy is scattered from the incident wave, generating low-frequency partial reflections. Many approximate solutions to the wave equation, e.g. geometrical ray theory (GRT), Maslov theory and Gaussian beams, do not model these signals. The problem of describing partial reflections in 1-D media has been extensively studied in the seismic literature, and considerable progress has been made using iterative techniques based on WKBJ, Airy or Langer type ansätze. In this paper we derive a first-order scattering formalism to describe partial reflections in 3-D media. The correction term describing the scattered energy is developed as a volume integral over terms dependent upon the first spatial derivatives (gradients) of the parameters describing the medium and the solution. The relationship we derive could, in principle, be used as the basis for an iterative scheme, but the computational expense, particularly for elastic media, will usually prohibit this approach. The result we obtain is closely related to the usual Born approximation, but differs in that the scattering term is not derived from a perturbation to a background model, but rather from the error in an approximate Green's function. We examine analytically the relationship between the results produced by the new formalism and the usual Born approximation for a medium which has no long-wavelength heterogeneities. We show that in such a case the two methods agree approximately, as expected, but that in media with heterogeneities of all wavelengths the new gradient scattering formalism is superior. We establish analytically the connection between the formalism developed here and the iterative approach based on the WKBJ solution which has been used previously in 1-D media. Numerical examples are shown to illustrate the points discussed.
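For comparison, the familiar acoustic (constant-density) first-order Born volume integral has the schematic form below; the elastic, gradient-based correction derived in the paper replaces the model perturbation δs² by terms in the first spatial derivatives of the medium parameters and of the approximate solution.

```latex
u^{\mathrm{sc}}(\mathbf{x},\omega) \;\approx\;
\omega^{2} \int_{V} G_{0}(\mathbf{x},\mathbf{x}';\omega)\,
\delta s^{2}(\mathbf{x}')\, u^{0}(\mathbf{x}',\omega)\, \mathrm{d}V' ,
```

where u⁰ and G₀ are the incident field and the Green's function in the background medium and δs² is the slowness-squared perturbation.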

18.
Spherical Slepian functions and the polar gap in geodesy
The estimation of potential fields such as the gravitational or magnetic potential at the surface of a spherical planet from noisy observations taken at an altitude over an incomplete portion of the globe is a classic example of an ill-posed inverse problem. We show that this potential-field estimation problem has deep-seated connections to Slepian's spatiospectral localization problem which seeks bandlimited spherical functions whose energy is optimally concentrated in some closed portion of the unit sphere. This allows us to formulate an alternative solution to the traditional damped least-squares spherical harmonic approach in geodesy, whereby the source field is now expanded in a truncated Slepian function basis set. We discuss the relative performance of both methods with regard to standard statistical measures such as bias, variance and mean squared error, and pay special attention to the algorithmic efficiency of computing the Slepian functions on the region complementary to the axisymmetric polar gap characteristic of satellite surveys. The ease, speed, and accuracy of our method make the use of spherical Slepian functions in earth and planetary geodesy practical.
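Schematically, the spatiospectral concentration problem referred to here is the following (orthonormal real spherical harmonics assumed): among bandlimited functions g = Σ_{l≤L} Σ_m g_{lm} Y_{lm}, maximize the fraction of energy inside the region R,

```latex
\lambda \;=\; \frac{\displaystyle\int_{R} g^{2}\, d\Omega}
                   {\displaystyle\int_{\Omega} g^{2}\, d\Omega} \;=\; \mathrm{max},
\qquad\Longrightarrow\qquad
\sum_{l'=0}^{L}\sum_{m'=-l'}^{l'} D_{lm,\,l'm'}\, g_{l'm'} \;=\; \lambda\, g_{lm},
\qquad
D_{lm,\,l'm'} \;=\; \int_{R} Y_{lm}\, Y_{l'm'}\, d\Omega ,
```

whose eigenfunctions form the Slepian basis used in place of damped spherical harmonics.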

19.
Small-scale spatial events are situations in which elements or objects vary in such a way that temporal dynamics are intrinsic to their representation and explanation. Some of the clearest examples involve local movement, from conventional traffic modeling to disaster evacuation, where congestion, crowding, panic, and related safety issues are key features. We propose that such events can be simulated using new variants of pedestrian models, which embody ideas about how behavior emerges from the accumulated interactions between small-scale objects. We present a model in which the event space is first explored by agents using 'swarm intelligence'. Armed with information about the space, agents then move in unobstructed fashion to the event. Congestion and safety problems are then resolved by introducing controls in an iterative fashion, rerunning the model until a 'safe solution' is reached. The model has been developed to simulate the effect of changing the route of the Notting Hill Carnival, an annual event held in west central London over two days each August. One of the key issues in using such simulations is how the process of modeling interacts with those who manage and control the event. As such, this changes the nature of the modeling problem from one in which control and optimization are external to the model to one in which they are intrinsic to the simulation.

20.
This paper presents a new derivative-free search method for finding models of acceptable data fit in a multidimensional parameter space. It falls into the same class of methods as simulated annealing and genetic algorithms, which are commonly used for global optimization problems. The objective here is to find an ensemble of models that preferentially sample the good data-fitting regions of parameter space, rather than seeking a single optimal model. (A related paper deals with the quantitative appraisal of the ensemble.)
  The new search algorithm makes use of the geometrical constructs known as Voronoi cells to drive the search in parameter space. These are nearest-neighbour regions defined under a suitable distance norm. The algorithm is conceptually simple, requires just two 'tuning parameters', and makes use of only the rank of a data-fit criterion rather than its numerical value. In this way all difficulties associated with the scaling of a data misfit function are avoided, and any combination of data-fit criteria can be used. It is also shown how Voronoi cells can be used to enhance any existing direct search algorithm, by intermittently replacing the forward modelling calculations with nearest-neighbour calculations.
  The new direct search algorithm is illustrated with an application to a synthetic problem involving the inversion of receiver functions for crustal seismic structure. This is known to be a non-linear problem, where linearized inversion techniques suffer from a strong dependence on the starting solution. It is shown that the new algorithm produces a sophisticated type of 'self-adaptive' search behaviour, which to our knowledge has not been demonstrated in any previous technique of this kind.
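A heavily simplified sketch of the Voronoi-cell idea (not the published neighbourhood algorithm's Gibbs sampler): new models are proposed near the best-ranked members of the ensemble and accepted only if they fall inside that member's Voronoi cell, i.e. if that member is their nearest neighbour under a range-scaled distance. The multimodal test misfit and all tuning values are assumptions.

```python
import numpy as np

def voronoi_search(misfit, bounds, n_init=50, n_resample=10, n_iter=40, seed=0):
    """Rank-based direct search: resample inside the Voronoi cells of the currently
    best models (simplified rejection version of the neighbourhood idea)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    scale = hi - lo                                   # distance norm scaled by parameter range
    X = rng.uniform(lo, hi, size=(n_init, len(lo)))
    F = np.array([misfit(x) for x in X])
    for _ in range(n_iter):
        order = np.argsort(F)                         # only the rank of the misfit is used
        new = []
        for j in order[:n_resample]:                  # resample the n_resample best cells
            centre = X[j]
            for _ in range(20):                       # rejection step: stay inside the cell
                cand = np.clip(centre + 0.1 * scale * rng.normal(size=len(lo)), lo, hi)
                d2 = np.sum(((X - cand) / scale) ** 2, axis=1)
                if np.argmin(d2) == j:                # the chosen model is the nearest neighbour
                    new.append(cand)
                    break
        if not new:
            break
        X = np.vstack([X, np.array(new)])
        F = np.concatenate([F, [misfit(x) for x in new]])
    return X, F

# Toy usage on a multimodal 2-D misfit (stand-in for the receiver-function problem).
mis = lambda x: x[0] ** 2 + x[1] ** 2 + 2.0 * np.sin(3.0 * x[0]) ** 2 * np.sin(3.0 * x[1]) ** 2
bounds = np.array([[-3.0, 3.0], [-3.0, 3.0]])
X, F = voronoi_search(mis, bounds)
print("best model:", np.round(X[np.argmin(F)], 3), " misfit:", round(float(F.min()), 4))
```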
