Similar Documents
20 similar documents retrieved.
1.
Regularization is usually necessary in solving seismic tomographic inversion problems. In general the equation system of seismic tomography is very large, often making a suitable choice of the regularization parameter difficult. In this paper, we propose an algorithm for the practical choice of the regularization parameter in linear tomographic inversion. The algorithm is based on the types of statistical assumptions most commonly used in seismic tomography. We first transfer the system of equations into a Krylov subspace by using Lanczos bidiagonalization. In the transformed subspace, the system of equations is then changed into the form of a standard damped least-squares normal equation. The solution to this normal equation can be written as an explicit function of the regularization parameter, which makes the choice of the regularization parameter computationally convenient. Two criteria for the choice of the regularization parameter are investigated with numerical simulations. If the dimensions of the transformed space are much smaller than those of the original model space, the algorithm can be very computationally efficient, which is practically useful in large seismic tomography problems.
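A minimal NumPy sketch of the projection step described above, assuming a dense matrix and synthetic data; the bidiagonalization routine, the damping formula and all names are illustrative and do not reproduce the authors' implementation or their parameter-choice criteria.

import numpy as np

def lanczos_bidiag(A, b, k):
    # Golub-Kahan (Lanczos) bidiagonalization of A with starting vector b.
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k + 1)
    beta[0] = np.linalg.norm(b)
    U[:, 0] = b / beta[0]
    for j in range(k):
        v = A.T @ U[:, j] - (beta[j] * V[:, j - 1] if j > 0 else 0.0)
        alpha[j] = np.linalg.norm(v)
        V[:, j] = v / alpha[j]
        u = A @ V[:, j] - alpha[j] * U[:, j]
        beta[j + 1] = np.linalg.norm(u)
        U[:, j + 1] = u / beta[j + 1]
    B = np.zeros((k + 1, k))                     # (k+1) x k lower-bidiagonal projection
    B[np.arange(k), np.arange(k)] = alpha
    B[np.arange(1, k + 1), np.arange(k)] = beta[1:]
    return U, B, V, beta[0]

def damped_krylov_solution(B, V, beta1, lam):
    # Damped least-squares solution in the subspace for one value of the
    # regularization parameter lam; cheap to repeat because B is small.
    k = B.shape[1]
    rhs = beta1 * B[0, :]                        # B^T e1 scaled by ||b||
    y = np.linalg.solve(B.T @ B + lam**2 * np.eye(k), rhs)
    return V @ y

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 120))                  # stand-in for the tomography matrix
x_true = rng.normal(size=120)
b = A @ x_true + 0.05 * rng.normal(size=200)
U, B, V, beta1 = lanczos_bidiag(A, b, k=30)
for lam in (1e-3, 1e-1, 1.0):
    x = damped_krylov_solution(B, V, beta1, lam)
    print(lam, np.linalg.norm(A @ x - b))

Because the projected matrix is only (k+1) × k, re-solving for many trial values of the regularization parameter costs almost nothing once the subspace has been built, which is the property the abstract exploits.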

2.
We have developed a new geodetic inversion method for the space–time distribution of fault slip velocity with time-varying smoothing regularization, in order to reconstruct accurate time histories of aseismic fault slip transients. We introduce a temporal smoothing regularization on slip and slip velocity through a Bayesian state space approach in which the strength of regularization (the temporal smoothness of slip velocity) is controlled by a hyperparameter. The time-varying smoothing regularization is realized by treating the hyperparameter as a time-dependent stochastic variable and adopting a hierarchical Bayesian state space model, in which a prior distribution on the hyperparameter is introduced in addition to a conventional Bayesian state space model. We have tested this inversion method on two synthetic data sets generated by simulated aseismic slip transients. The results show that our method reproduces well both rapid changes of slip velocity and steady-state velocity, without the significant oversmoothing and undersmoothing that are hard to avoid with the conventional Bayesian approach using time-independent smoothing regularization. Application of this method to the transient deformation caused in 2002 by a silent earthquake off the Boso peninsula, Japan, also shows similar advantages of this method over the conventional approach.
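As a much-simplified illustration of the state-space idea (not the authors' hierarchical formulation), the sketch below runs a scalar Kalman filter on a random-walk "slip velocity" whose process variance q_t plays the role of the time-varying smoothing hyperparameter; here q_t is simply prescribed rather than estimated through a prior, and all numbers are synthetic.

import numpy as np

rng = np.random.default_rng(10)
n = 200
v_true = np.where((np.arange(n) > 80) & (np.arange(n) < 120), 1.0, 0.1)   # slip-velocity transient
obs = v_true + 0.3 * rng.normal(size=n)        # noisy stand-in for geodetic data
r = 0.3**2                                     # observation variance
q = np.full(n, 1e-4)                           # strong temporal smoothing by default
q[75:125] = 1e-1                               # smoothing relaxed around the transient

v_est = np.zeros(n)
v, p = 0.0, 1.0                                # state and its variance
for t in range(n):
    p = p + q[t]                               # predict (random-walk model)
    k = p / (p + r)                            # Kalman gain
    v = v + k * (obs[t] - v)                   # update with the observation
    p = (1 - k) * p
    v_est[t] = v
print("rms error:", np.sqrt(np.mean((v_est - v_true) ** 2)))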

3.
4.
A new algorithm is presented for the integrated 2-D inversion of seismic traveltime and gravity data. The algorithm adopts the 'maximum likelihood' regularization scheme. We construct a 'probability density function' which includes three kinds of information: information derived from gravity measurements; information derived from the seismic traveltime inversion procedure applied to the model; and information on the physical correlation among the density and the velocity parameters. We assume a linear relation between density and velocity, which can be node-dependent; that is, we can choose different relationships for different parts of the velocity–density grid. In addition, our procedure allows us to consider a covariance matrix related to the error propagation in linking density to velocity. We use seismic data to estimate starting velocity values and the position of boundary nodes. Subsequently, the sequential integrated inversion (SII) optimizes the layer velocities and densities for our models. The procedure is applicable, as an additional step, to any type of seismic tomographic inversion.
We illustrate the method by comparing the velocity models recovered from a standard seismic traveltime inversion with those retrieved using our algorithm. The inversion of synthetic data calculated for a 2-D isotropic, laterally inhomogeneous model shows the stability and accuracy of this procedure, demonstrates the improvements to the recovery of true velocity anomalies, and proves that this technique can efficiently overcome some of the limitations of both gravity and seismic traveltime inversions, when they are used independently.
An interpretation of field data from the 1994 Vesuvius test experiment is also presented. At depths down to 4.5 km, the model retrieved after an SII shows a more detailed structure than the model obtained from an interpretation of seismic traveltimes only, and yields additional information for further study of the area.
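A toy illustration of the coupling used above: if density is tied to velocity by a node-dependent linear relation, gravity and traveltime data can be stacked into a single weighted least-squares system for velocity. The sketch below assumes dense, randomly generated kernels and noise-free data purely for illustration; it is not the authors' sequential procedure.

import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_tt, n_grav = 20, 40, 15
G_seis = rng.normal(size=(n_tt, n_nodes))      # traveltime kernel (stand-in)
G_grav = rng.normal(size=(n_grav, n_nodes))    # gravity kernel (stand-in)
a = np.full(n_nodes, 1.7)                      # intercepts of the node-wise v-rho relation
b = np.full(n_nodes, 0.3)                      # slopes (may differ node by node)
v_true = 3.0 + 0.5 * rng.normal(size=n_nodes)
d_tt = G_seis @ v_true                         # synthetic traveltime data
d_grav = G_grav @ (a + b * v_true)             # synthetic gravity data via rho = a + b v

# Stack both data sets into one weighted linear system for the velocities:
#   [ G_seis           ] v = [ d_tt              ]
#   [ G_grav * diag(b) ]     [ d_grav - G_grav a ]
sigma_tt, sigma_grav = 0.05, 0.1
A = np.vstack([G_seis / sigma_tt, (G_grav * b) / sigma_grav])
rhs = np.concatenate([d_tt / sigma_tt, (d_grav - G_grav @ a) / sigma_grav])
v_est, *_ = np.linalg.lstsq(A, rhs, rcond=None)
print("max |v_est - v_true| =", np.abs(v_est - v_true).max())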

5.
The inversion of recent borehole temperatures has proved to be a successful tool to determine ancient ground surface temperature histories. To take into account heterogeneity of thermal properties and their non-linear dependence on temperature itself, a versatile 1-D inversion technique based on a finite-difference approach has been developed. Regularization of the generally ill-posed problem is obtained by an appropriate version of Tikhonov regularization of variable order. In this approach, a regularization parameter has to be determined, representing a trade-off between data fit and model smoothness. We propose to select this parameter by generalized cross-validation. The resulting technique is employed in case studies from the Kola ultradeep drilling site, and another borehole from northeastern Poland. Comparing the results from both sites corroborates the hypothesis that subglacial ground surface temperatures as met in Kola often are much higher than the ones in areas exposed to atmospheric conditions (Poland).
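A compact sketch of the parameter-selection step, assuming zeroth-order Tikhonov regularization of a linear problem (the paper uses variable-order regularization within a non-linear finite-difference scheme); the GCV function is evaluated from the SVD and the regularization parameter is taken at its minimum.

import numpy as np

def gcv_curve(A, d, lambdas):
    # Generalized cross-validation function for zeroth-order Tikhonov
    # regularization of A x ~ d, evaluated cheaply via the SVD of A.
    m = A.shape[0]
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    utd = U.T @ d
    resid_perp2 = np.linalg.norm(d - U @ utd) ** 2   # part of d outside range(A)
    values = []
    for lam in lambdas:
        f = s**2 / (s**2 + lam**2)                   # Tikhonov filter factors
        resid2 = np.sum(((1.0 - f) * utd) ** 2) + resid_perp2
        values.append(m * resid2 / (m - np.sum(f)) ** 2)
    return np.array(values)

rng = np.random.default_rng(1)
A = rng.normal(size=(60, 30))
x_true = rng.normal(size=30)
d = A @ x_true + 0.05 * rng.normal(size=60)
lams = np.logspace(-4, 1, 60)
lam_best = lams[np.argmin(gcv_curve(A, d, lams))]
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_reg = Vt.T @ ((s / (s**2 + lam_best**2)) * (U.T @ d))  # regularized solution
print("GCV-selected regularization parameter:", lam_best)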

6.
1 Introduction  Subsidence, as vertical movement of the earth's crust, has occurred in many parts of the world, particularly in densely populated deltaic regions. With the occurrence of surface subsidence, a lot of damage has been induced. Surface subsidence can result from natural causes, such as tectonic motion and sea-level rise; from man-made causes, such as excessive withdrawal of groundwater, geothermal fluids, oil and gas, or extraction of coal, sulphur, gold and other solids through mining or underground construction (tunnelling); or from other mixed causes such as hydro…

7.
This paper presents a new derivative-free search method for finding models of acceptable data fit in a multidimensional parameter space. It falls into the same class of method as simulated annealing and genetic algorithms, which are commonly used for global optimization problems. The objective here is to find an ensemble of models that preferentially sample the good data-fitting regions of parameter space, rather than seeking a single optimal model. (A related paper deals with the quantitative appraisal of the ensemble.)
  The new search algorithm makes use of the geometrical constructs known as Voronoi cells to drive the search in parameter space. These are nearest neighbour regions defined under a suitable distance norm. The algorithm is conceptually simple, requires just two 'tuning parameters', and makes use of only the rank of a data fit criterion rather than the numerical value. In this way all difficulties associated with the scaling of a data misfit function are avoided, and any combination of data fit criteria can be used. It is also shown how Voronoi cells can be used to enhance any existing direct search algorithm, by intermittently replacing the forward modelling calculations with nearest neighbour calculations.
  The new direct search algorithm is illustrated with an application to a synthetic problem involving the inversion of receiver functions for crustal seismic structure. This is known to be a non-linear problem, where linearized inversion techniques suffer from a strong dependence on the starting solution. It is shown that the new algorithm produces a sophisticated type of 'self-adaptive' search behaviour, which to our knowledge has not been demonstrated in any previous technique of this kind.
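A deliberately crude, rejection-sampling caricature of the Voronoi-cell idea, written only to show how nearest-neighbour queries decide which cell a candidate model falls in; the published algorithm instead performs a Gibbs-type random walk with exact cell boundaries, so the sketch below should not be read as that method, and the misfit function and all tuning numbers are invented.

import numpy as np
from scipy.spatial import cKDTree

def misfit(m):
    # toy multimodal misfit (illustrative only)
    return np.sum((m - 0.3) ** 2) + 0.2 * np.sum(np.sin(8 * np.pi * m) ** 2)

rng = np.random.default_rng(2)
ndim, n_init, n_resamp, n_best, n_iter = 4, 64, 32, 8, 20
models = rng.uniform(0, 1, size=(n_init, ndim))
for _ in range(n_iter):
    ranks = np.argsort([misfit(m) for m in models])   # only the rank of the fit is used
    best_cells = set(int(i) for i in ranks[:n_best])
    tree = cKDTree(models)                            # the samples define the Voronoi cells
    new, tries = [], 0
    while len(new) < n_resamp and tries < 200000:
        tries += 1
        cand = rng.uniform(0, 1, size=ndim)
        _, owner = tree.query(cand)                   # cell containing the candidate
        if int(owner) in best_cells:                  # keep points landing in the
            new.append(cand)                          # n_best best-fitting cells
    if new:
        models = np.vstack([models, new])
best_model = min(models, key=misfit)
print("best misfit found:", misfit(best_model))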

8.
Summary. Due to the non-uniqueness of traveltime inversion of seismic data, it is more appropriate to determine a velocity–depth (v-z) envelope, rather than just a v-z function. Several methods of obtaining a v-z envelope by extremal inversion have been proposed, all of which invert the data primarily from either the x-p or τ-p domain, or both. These extremal inversion methods may be divided into two groups: linear extremal and non-linear extremal. There is some debate whether linearized perturbation techniques should be applied to the inherently non-linear problem of traveltime inversion. We have obtained a v-z envelope by extremal inversion in τ-p with the constraint that the inversion paths also satisfy the x-p observations. Thus we use data jointly in τ-p and x-p, and yet avoid the linearity assumptions.
This joint, non-linear extremal inversion method has been applied to obtain a v-z envelope down to a depth of about 30 km in the Baltimore Canyon trough, using x-t data from an Expanding Spread Profile acquired during the LASE project. We have found that the area enclosed by the v-z envelope is reduced by about 15 per cent using x-p control on the τ-p inversion paths, compared to the inversion without x-p control.

9.
This paper re-evaluates the origin of some peculiar patterns of ground deformation in the Central Apennines, observed by space geodetic techniques during the two earthquakes of the Colfiorito seismic sequence on September 26th, 1997. The surface displacement field due to the fault dislocation, as modelled with the classic Okada elastic formulations, shows some areas with high residuals which cannot be attributed to non-simulated model complexities. The residuals were investigated using geomorphological analysis, recognising the geologic evidence of deep-seated gravitational slope deformations (DSGSD) of the block-slide type. The shape and direction of the co-seismic ground displacement observed in these areas are correlated with the expected pattern of movement produced by the reactivation of the identified DSGSD. At least a few centimetres of negative "Line of Sight" ground displacement was determined for the Costa Picchio, Mt. Pennino, and Mt. Prefoglio areas. A considerable horizontal component of movement in the Costa Picchio DSGSD is evident from a qualitative analysis of ascending and descending interferograms. The timing of the geodetic data indicates that the ground movement occurred during the seismic shaking, and that it did not progress appreciably during the following months. This work has verified the seismic triggering of DSGSD previously hypothesized by many researchers. A further implication is that, in the assessment of DSGSD hazard, seismic input needs to be considered as an important cause of accelerated deformation.

10.
A data space approach to magnetotelluric (MT) inversion reduces the size of the system of equations that must be solved from M × M, as required for a model space approach, to only N × N, where M is the number of model parameters and N is the number of data. This reduction makes 3-D MT inversion on a personal computer possible for modest values of M and N. However, the need to store the N × M sensitivity matrix J remains a serious limitation. Here, we consider application of conjugate gradient (CG) methods to solve the system of data space Gauss–Newton equations. With this approach J is not explicitly formed and stored; instead, the product of J with an arbitrary vector is computed by solving one forward problem. As a test of this data space conjugate gradient (DCG) algorithm, we consider the 2-D MT inverse problem. Computational efficiency is assessed and compared to the data space Occam's (DASOCC) inversion by counting the number of forward modelling calls. Experiments with synthetic data show that although DCG requires significantly less memory, it generally requires more forward problem solutions than a scheme such as DASOCC, which is based on a full computation of J.
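The key implementation point, matrix-free products with J, can be sketched with SciPy's LinearOperator. In the snippet below the "forward" and "adjoint" products are simulated with a hidden dense matrix, and the data and model covariances are taken as scaled identities, so this is only an illustration of the data-space system, not the DASOCC or DCG codes themselves.

import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(3)
M, N = 500, 40                       # model parameters, data
J_hidden = rng.normal(size=(N, M))   # stands in for the sensitivity; never "stored" below

def J_times(v):                      # in a real code: one forward-modelling run
    return J_hidden @ v

def Jt_times(w):                     # in a real code: one adjoint (transpose) run
    return J_hidden.T @ w

lam = 1e-2                           # damping / trade-off parameter
def dataspace_matvec(w):
    # (J J^T + lam I) w, assembled only from matrix-free products
    return J_times(Jt_times(w)) + lam * w

A = LinearOperator((N, N), matvec=dataspace_matvec)
residual = rng.normal(size=N)                        # data-misfit vector
beta, info = cg(A, residual)                         # solve the N x N data-space system
model_update = Jt_times(beta)                        # map back to the M-dimensional model space
print("CG converged:", info == 0, "update norm:", np.linalg.norm(model_update))

Each CG iteration therefore costs one "forward-like" and one "adjoint-like" product instead of storing the N × M matrix, which is exactly the memory saving the abstract describes.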

11.
We investigate the use of general, non-l2 measures of data misfit and model structure in the solution of the non-linear inverse problem. Of particular interest are robust measures of data misfit, and measures of model structure which enable piecewise-constant models to be constructed. General measures can be incorporated into traditional linearized, iterative solutions to the non-linear problem through the use of an iteratively reweighted least-squares (IRLS) algorithm. We show how such an algorithm can be used to solve the linear inverse problem when general measures of misfit and structure are considered. The magnetic stripe example of Parker (1994) is used as an illustration. This example also emphasizes the benefits of using a robust measure of misfit when outliers are present in the data. We then show how the IRLS algorithm can be used within a linearized, iterative solution to the non-linear problem. The relevant procedure contains two iterative loops which can be combined in a number of ways. We present two possibilities. The first involves a line search to determine the most appropriate value of the trade-off parameter and the complete solution, via the IRLS algorithm, of the linearized inverse problem for each value of the trade-off parameter. In the second approach, a schedule of prescribed values for the trade-off parameter is used and the iterations required by the IRLS algorithm are combined with those for the linearized, iterative inversion procedure. These two variations are then applied to the 1-D inversion of both synthetic and field time-domain electromagnetic data.
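A minimal IRLS sketch for a robust (l1-type) data misfit on a linear problem, assuming synthetic data with a few gross outliers; the measure of model structure, the trade-off parameter schedule and the outer linearization loop discussed above are omitted.

import numpy as np

def irls(G, d, p=1.0, n_iter=30, eps=1e-6):
    # Iteratively reweighted least squares for min ||G m - d||_p
    # (p = 1 gives a robust, outlier-resistant misfit).
    m = np.linalg.lstsq(G, d, rcond=None)[0]     # ordinary l2 starting model
    for _ in range(n_iter):
        r = G @ m - d
        w = (np.abs(r) + eps) ** (p - 2.0)       # IRLS weights
        m = np.linalg.solve(G.T @ (w[:, None] * G), G.T @ (w * d))
    return m

# Linear test problem with outliers: the l1 fit shrugs them off,
# while the ordinary l2 fit is dragged away from the true model.
rng = np.random.default_rng(4)
G = np.column_stack([np.ones(50), np.linspace(0, 1, 50)])
m_true = np.array([1.0, 2.0])
d = G @ m_true + 0.02 * rng.normal(size=50)
d[::10] += 3.0                                   # gross outliers
print("l2 :", np.linalg.lstsq(G, d, rcond=None)[0])
print("l1 :", irls(G, d))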

12.
Summary. For the determination of lateral velocity or absorption inhomogeneities, methods such as the generalized matrix inversion and its damped versions, for example the stochastic inverse, are usually applied in seismology to travel-time or amplitude anomalies. These methods are not appropriate for the solution of very extensive systems of equations. Reconstruction techniques as developed for computer tomography are suitable for operations with extremely large numbers of equations and unknown parameters. In this paper solutions obtained with the BPT (Back Projection Technique), ART (Algebraic Reconstruction Technique) and SIRT (Simultaneous Iterative Reconstruction Technique) are compared with those obtained from a damped version of the generalized inverse method. Data of 2-D model-seismic experiments are presented for demonstration.
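A small SIRT sketch with the usual row- and column-sum normalizations, applied to a randomly generated, non-negative "ray-path" matrix; matrix sizes and iteration counts are arbitrary, and the BPT and ART variants would differ only in how the residual is redistributed.

import numpy as np

def sirt(A, b, n_iter=200):
    # Simultaneous Iterative Reconstruction Technique for A x ~ b.
    m, n = A.shape
    row_sum = np.abs(A).sum(axis=1); row_sum[row_sum == 0] = 1.0
    col_sum = np.abs(A).sum(axis=0); col_sum[col_sum == 0] = 1.0
    x = np.zeros(n)
    for _ in range(n_iter):
        r = (b - A @ x) / row_sum          # back-project row-normalized residuals
        x = x + (A.T @ r) / col_sum        # simultaneous update of all cells
    return x

# Toy "straight-ray" tomography: rows of A act like path lengths of rays
# through a small grid of slowness cells (purely illustrative sizes).
rng = np.random.default_rng(5)
A = rng.uniform(0, 1, size=(80, 25)) * (rng.uniform(size=(80, 25)) > 0.6)
x_true = 0.25 + 0.05 * rng.normal(size=25)
b = A @ x_true
x_sirt = sirt(A, b)
print("rms error:", np.sqrt(np.mean((x_sirt - x_true) ** 2)))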

13.
Oil exploration requires quantitative determination of structural geometry in sedimentary basins. This leads to back-and-forth use of geological methods, e.g. cross-section balancing, and geophysical techniques, such as tomography, and the synthesis becomes tedious, especially in three dimensions. This suggests that they should be as much as possible quantitatively integrated into a single consistent framework. For this integration, we propose using inversion techniques, i.e. multicriteria optimization. We locally model a geological structure as a (geometric) foliation, the leaves of which represent deposition isochrons. We consider a geological structure as a set of foliations joined along faults and unconformities. We propose five kinds of geological data to constrain structural geometry quantitatively: dip measurements that may be available along wells, developability and smoothness of deposition isochrons, the directions of fold axes, and layer parallelism. Using concepts of differential geometry, we formulate these data in terms of least-squares criteria. To solve the canonical non-uniqueness problem raised by the inversion of parametric representations of geometrical objects such as foliations (many parametrizations describe the same object), we introduce the additional criterion method, which consists of adding an unphysical objective function to the physical objective function so as to make the solution unique. Assuming well trajectories and borehole correlations to be known, we optimize, with respect to these criteria, several simple structures comprising one foliation, including a field example.

14.
Controlled-source electromagnetic (CSEM) surveys have the ability to provide tomographic images of electrical conductivity within the Earth. The interpretation of such data sets has long been hampered by inadequate modelling and inversion techniques. In this paper, a subspace inversion technique is described that allows electric dipole–dipole data to be inverted for a 2-D electrical conductivity model more efficiently than with existing techniques. The subspace technique is validated by comparison with conventional inversion methods and by inverting data collected over the East Pacific Rise in 1989. A model study indicates that, with adequate data, a variety of possible mid-ocean-ridge conductivity models could be distinguished on the basis of a CSEM survey.

15.

The use of spontaneous potential (SP) anomalies is well known in the geophysical literature because of its effectiveness and significance in solving many complex problems in mineral exploration. The inverse problem of self-potential data interpretation is generally ill-posed and non-linear. Methods based on derivative analysis usually fail to reach the optimal solution (global minimum) and become trapped in a local minimum. A new, simple heuristic solution for SP anomalies due to a 2D inclined sheet of infinite horizontal extent is investigated in this study to address these problems. The method is based on the whale optimization algorithm (WOA) as an effective heuristic solution to the inverse problem of the self-potential field due to a 2D inclined sheet. In this context, the WOA was first applied to a synthetic example, where the effect of random noise was examined and the method gave good results using a purpose-written MATLAB code. The technique was then applied to several real field profiles from different localities with the aim of determining the parameters of mineralized zones or the associated shear zones. The inversion results show that the WOA accurately recovered the unknown parameters and compared well with published inversion methods.

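A compact sketch of the whale optimization algorithm (encircling, bubble-net spiral and random-search phases) applied to a toy two-parameter curve-fitting problem; the anomaly function below is a simple stand-in, not the inclined-sheet self-potential formula used in the paper, and all tuning values are illustrative.

import numpy as np

def woa(objective, bounds, n_whales=30, n_iter=200, seed=0):
    # Minimal whale optimization algorithm for a bound-constrained objective.
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    X = rng.uniform(lo, hi, size=(n_whales, dim))
    fit = np.array([objective(x) for x in X])
    best, best_f = X[np.argmin(fit)].copy(), fit.min()
    for t in range(n_iter):
        a = 2.0 * (1 - t / n_iter)                   # decreases linearly from 2 to 0
        for i in range(n_whales):
            A = 2 * a * rng.random(dim) - a
            C = 2 * rng.random(dim)
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1):            # encircle the current best whale
                    D = np.abs(C * best - X[i])
                    X[i] = best - A * D
                else:                                # explore around a random whale
                    Xr = X[rng.integers(n_whales)]
                    D = np.abs(C * Xr - X[i])
                    X[i] = Xr - A * D
            else:                                    # logarithmic spiral (bubble-net) move
                l = rng.uniform(-1, 1, dim)
                X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lo, hi)
            f = objective(X[i])
            if f < best_f:
                best_f, best = f, X[i].copy()
    return best, best_f

# Toy use: recover two parameters of a simple anomaly curve by misfit minimization.
xs = np.linspace(-50, 50, 101)
def anomaly(params, x=xs):
    depth, amp = params
    return amp * depth / (x**2 + depth**2)
obs = anomaly([12.0, -300.0])
objective = lambda p: np.sum((anomaly(p) - obs) ** 2)
bounds = np.array([[1.0, 40.0], [-600.0, 0.0]])
best, best_f = woa(objective, bounds)
print("recovered depth and amplitude:", best, " misfit:", best_f)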

16.
Non-linear Bayesian joint inversion of seismic reflection coefficients
Inversion of seismic reflection coefficients is formulated in a Bayesian framework. Measured reflection coefficients and model parameters are assigned statistical distributions based on information known prior to the inversion, and together with the forward-model uncertainties these are propagated into the final result. This enables a quantification of the reliability of the inversion. Quadratic approximations to the Zoeppritz equations are used as the forward model. Compared with the linear approximations, the bias is reduced and the uncertainty estimate is more reliable. The differences between using the quadratic approximations and the exact expressions are minor. The solution algorithm is sampling based and, because of the non-linear forward model, the Metropolis–Hastings algorithm is used. To achieve convergence it is important to keep strict control of the acceptance probability in the algorithm. Joint inversion using information from both reflected PP waves and converted PS waves yields smaller bias and reduced uncertainty compared to using only reflected PP waves.
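A minimal random-walk Metropolis-Hastings sketch for a non-linear forward model with a Gaussian prior and Gaussian data noise; the quadratic toy "forward" below merely stands in for a quadratic Zoeppritz approximation, and the proposal step size is the knob that controls the acceptance probability mentioned above.

import numpy as np

rng = np.random.default_rng(6)

def forward(m):
    # toy non-linear (quadratic) forward operator: 3 parameters -> 5 "reflection" data
    return np.array([m[0] + 0.5 * m[1],
                     m[1] - 0.3 * m[2],
                     m[0] * m[1],
                     m[2] + 0.2 * m[0]**2,
                     m[1]**2 - m[2]])

m_true = np.array([0.1, -0.05, 0.2])
sigma_d = 0.01
d_obs = forward(m_true) + sigma_d * rng.normal(size=5)
m_prior, sigma_m = np.zeros(3), 0.2

def log_post(m):
    misfit = np.sum((forward(m) - d_obs) ** 2) / (2 * sigma_d**2)
    prior = np.sum((m - m_prior) ** 2) / (2 * sigma_m**2)
    return -(misfit + prior)

step = 0.01                     # tune so the acceptance rate stays in a sensible range
m, lp = m_prior.copy(), log_post(m_prior)
samples, accepted = [], 0
for it in range(20000):
    prop = m + step * rng.normal(size=3)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:          # Metropolis-Hastings acceptance test
        m, lp = prop, lp_prop
        accepted += 1
    samples.append(m.copy())
samples = np.array(samples[5000:])                   # drop burn-in
print("acceptance rate:", accepted / 20000)
print("posterior mean:", samples.mean(axis=0), "+/-", samples.std(axis=0))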

17.
Summary. Moment tensor inversion methods can be applied with success in the determination of source properties of simple earthquakes. However, these methods utilize the assumption of a point source, which is inadequate for modelling many complicated, shallow earthquakes. For complex earthquakes, an inversion using finite faulting models is desirable, but the number of parameters involved requires that a good starting model be found or that independent constraints be placed on some of the parameters. A method is presented for low-pass filtering both the data and the Green's functions, passing only signals with wavelengths greater than the dimension of the entire fault. The filter tends to smooth complications in the waveforms and allows application of the point source moment tensor inversion. This method is applied to body waves from the 1978 Thessaloniki, Greece, earthquake, the 1971 San Fernando earthquake, and to a multiple-point source synthetic model of the San Fernando event. For the Thessaloniki event, although a multiple-source mechanism has been suggested, inversion results before and after filtering were essentially identical, indicating that a point source mechanism is sufficient in modelling the long-period, teleseismic body waves. In the case of the San Fernando earthquake, the point source Green's functions were incapable of simultaneously modelling the P- and SH-waves. Inversion of P-waves alone resulted in extreme parameter resolution problems, but allowed constraint of one axis of the moment tensor and suggested an overall source time function. Inversion of a synthetic San Fernando data set yielded similar results, but allowed an investigation of the shortcomings of the method under controlled circumstances. Although the results may require substantial interpretation, the method presented represents a simple first step in the analysis of complex earthquakes.
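The filtering-plus-linear-inversion step can be sketched as follows, assuming purely synthetic waveforms: both the "data" and the Green's functions are passed through the same zero-phase low-pass filter, after which the six moment-tensor components follow from ordinary least squares. Everything here is a stand-in for real seismograms and Green's functions, and the corner frequency is arbitrary.

import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(7)
nt = 512                                              # samples per station
G = rng.normal(size=(3 * nt, 6)).cumsum(axis=0)       # six synthetic Green's functions, 3 stations stacked
M_true = np.array([1.0, -0.4, -0.6, 0.3, 0.1, -0.2])
u = G @ M_true + 0.5 * rng.normal(size=3 * nt)        # "observed" waveforms with noise

b, a = butter(4, 0.02)                                # low-pass corner as a fraction of Nyquist
G_f = filtfilt(b, a, G, axis=0)                       # zero-phase filter applied to Green's functions
u_f = filtfilt(b, a, u)                               # and to the data
M_est, *_ = np.linalg.lstsq(G_f, u_f, rcond=None)     # linear point-source inversion
print("estimated moment tensor components:", np.round(M_est, 3))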

18.
Fractional vegetation cover (FVC) is a key parameter for monitoring ecosystems and their functioning, and improving the accuracy of large-area FVC retrieval is crucial for the sustainable development of ecologically fragile regions. In this study, machine learning methods including artificial neural networks, support vector regression and random forests were combined with unmanned aerial vehicle (UAV), Worldview-2 and Landsat 8 OLI remote sensing data to retrieve the fractional vegetation cover of the Horqin Sandy Land at multiple scales. The results show that the random forest model outperforms the artificial neural network and support vector regression models and can retrieve the FVC of the sandy land with high accuracy at both the unit (test area) and regional (study area) scales; the retrieved values are significantly linearly correlated with the UAV-measured values (P<0.01). At the unit and regional scales, the test-set R2 of the constructed FVC retrieval models is 0.84 and 0.80, the MSE is 0.0145 and 0.0370, and the index of agreement d is 0.9576 and 0.8991, respectively. Using multi-source remote sensing data and machine learning to extend high-accuracy local retrievals to large-area FVC retrieval from low-spatial-resolution imagery not only effectively improves the retrieval accuracy for sandy land (R2=0.78, greater than 0.63), but also provides support for regional ecological monitoring and ecosystem health assessment.
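A hedged sketch of the regression step with scikit-learn, using a synthetic feature matrix in place of the Worldview-2 / Landsat 8 OLI predictors and UAV-derived cover values; it reports the R2, MSE and index of agreement d quoted in the abstract, but the numbers themselves are of course not reproduced.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(8)
n = 600
X = rng.uniform(0, 1, size=(n, 6))                   # stand-in for bands and vegetation indices
fvc = np.clip(0.8 * X[:, 0] - 0.3 * X[:, 1] + 0.2 * X[:, 2]
              + 0.05 * rng.normal(size=n), 0, 1)     # stand-in for UAV-derived cover

X_tr, X_te, y_tr, y_te = train_test_split(X, fvc, test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0)
rf.fit(X_tr, y_tr)
pred = rf.predict(X_te)

def index_of_agreement(obs, sim):
    # Willmott's index of agreement d
    obar = obs.mean()
    return 1 - np.sum((sim - obs) ** 2) / np.sum((np.abs(sim - obar) + np.abs(obs - obar)) ** 2)

print("R2 :", r2_score(y_te, pred))
print("MSE:", mean_squared_error(y_te, pred))
print("d  :", index_of_agreement(y_te, pred))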

19.
Ecological optima and tolerances with respect to autumn pH were estimated for 63 diatom taxa in 47 Finnish lakes. The methods used were weighted averaging (WA), least squares (LS) and maximum likelihood (ML), the two latter methods assuming the Gaussian response model. WA produces optimum estimates which are necessarily within the observed lake pH range, whereas there is no such restriction in ML and LS. When the most extreme estimates of ML and LS were excluded, a reasonably close agreement among the results of different estimation methods was observed. When the species with unrealistic optima were excluded, the tolerance estimates were also rather similar, although the ML estimates were systematically greater. The parameter estimates were used to predict the autumn pH of 34 other lakes by weighted averaging. The ML and LS estimates including the extreme optima produced inferior predictions. A good prediction was obtained, however, when prediction with these estimates was additionally scaled with inverse squared tolerances, or when the extreme values were removed (censored). Tolerance downweighting was perhaps more efficient, and when it was used, no additional improvement was gained by censoring. The WA estimates produced good predictions without any manipulations, but these predictions tended to be biased towards the centroid of the observed range of pH values. At best, the average bias in prediction, as measured by mean difference between predicted and observed pH, was 0.082 pH units and the standard deviation of the differences, measuring the average random prediction error, was 0.256 pH units.
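A short numerical sketch of the WA estimators and the tolerance-downweighted prediction described above, using synthetic Gaussian species responses; deshrinking and the ML/LS estimators are left out, and the sizes simply mirror the 47 training and 34 test lakes of the abstract.

import numpy as np

rng = np.random.default_rng(9)
n_lakes, n_taxa = 47, 63
ph = rng.uniform(4.5, 7.5, size=n_lakes)
opt_true = rng.uniform(4.5, 7.5, size=n_taxa)
tol_true = rng.uniform(0.3, 0.8, size=n_taxa)
Y = np.exp(-0.5 * ((ph[:, None] - opt_true) / tol_true) ** 2)   # abundances (lakes x taxa)

# WA optimum: abundance-weighted mean of pH; WA tolerance: weighted standard deviation.
col = Y.sum(axis=0)
opt = (Y * ph[:, None]).sum(axis=0) / col
tol = np.sqrt((Y * (ph[:, None] - opt) ** 2).sum(axis=0) / col)

def wa_predict(y_new, opt, tol=None):
    w = y_new if tol is None else y_new / tol**2     # optional tolerance downweighting
    return (w * opt).sum(axis=1) / w.sum(axis=1)

ph_new = rng.uniform(4.5, 7.5, size=34)
Y_new = np.exp(-0.5 * ((ph_new[:, None] - opt_true) / tol_true) ** 2)
pred = wa_predict(Y_new, opt, tol)
print("mean bias:", np.mean(pred - ph_new), "rmse:", np.sqrt(np.mean((pred - ph_new) ** 2)))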

20.
According to the textual research of scholars, there were pattern charts 40,000 years ago. The pottery maps made by the Babylonians in the 26th century B.C. have been well preserved today. In China, Ba Zhen and Jiu Ding are the earliest maps. At that time, the primitive measuring tools were yardsticks, cords, dividers and rules. There is an account of maps in the early Zhou Li - Di Guan - Tu Xun. The term "map" in English is not aptly worded, as it means map, sky chart and star chart. The Russian word "Kapta" means map, picture and card; the main meaning is picture and card. This paper focus…

