Similar Articles
20 similar articles found.
1.
We compare the performances of four stochastic optimisation methods using four analytic objective functions and two highly non‐linear geophysical optimisation problems: one‐dimensional elastic full‐waveform inversion and residual static computation. The four methods we consider, namely, adaptive simulated annealing, genetic algorithm, neighbourhood algorithm, and particle swarm optimisation, are frequently employed for solving geophysical inverse problems. Because geophysical optimisations typically involve many unknown model parameters, we are particularly interested in comparing the performances of these stochastic methods as the number of unknown parameters increases. The four analytic functions we choose simulate common types of objective functions encountered in solving geophysical optimisations: a convex function, two multi‐minima functions that differ in the distribution of minima, and a nearly flat function. Similar to the analytic tests, the two seismic optimisation problems we analyse are characterised by very different objective functions. The first problem is a one‐dimensional elastic full‐waveform inversion, which is strongly ill‐conditioned and exhibits a nearly flat objective function, with a valley of minima extended along the density direction. The second problem is the residual static computation, which is characterised by a multi‐minima objective function produced by the so‐called cycle‐skipping phenomenon. According to the tests on the analytic functions and on the seismic data, the genetic algorithm generally displays the best scaling with the number of parameters. It encounters problems only in the case of an irregular distribution of minima, that is, when the global minimum is at the border of the search space and a number of important local minima are distant from the global minimum. The adaptive simulated annealing method is often the best‐performing method for low‐dimensional model spaces, but its performance worsens as the number of unknowns increases. Particle swarm optimisation is effective in finding the global minimum in the case of low‐dimensional model spaces with few local minima or in the case of a narrow flat valley. Finally, the neighbourhood algorithm method is competitive with the other methods only for low‐dimensional model spaces; its performance worsens considerably in the case of multi‐minima objective functions.
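As a concrete reference for the kind of global search being benchmarked here, the following is a minimal particle swarm optimisation sketch applied to the Rastrigin function, a standard multi‐minima analytic test function. The swarm size, inertia and acceleration constants are generic defaults, not the settings or test functions used by the authors.

```python
import numpy as np

def rastrigin(x):
    # Multi-minima analytic test objective; global minimum 0 at x = 0.
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def pso(objective, ndim, n_particles=40, n_iter=200, bounds=(-5.12, 5.12)):
    # Generic particle swarm optimisation (illustrative settings, not the paper's).
    rng = np.random.default_rng(0)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, size=(n_particles, ndim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration weights
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, ndim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        val = np.array([objective(p) for p in pos])
        better = val < pbest_val
        pbest[better], pbest_val[better] = pos[better], val[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

best_model, best_misfit = pso(rastrigin, ndim=10)
print(best_model, best_misfit)
```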

2.
The interrelation between different variants of the method of linear integral representations in spaces of arbitrary dimension is considered. The combined approximation of the topography and geopotential fields allows the selection of the optimal parameters of the method in solving a wide range of inverse problems in geophysics and geomorphology, as well as the most thorough use of a priori information about the elevations and the elements of the anomalous fields. A method for numerically solving the inverse problem of finding mass distributions that are equivalent in terms of the external field, both in ordinary three-dimensional (3D) space and in four-dimensional (4D) space, is described.

3.
A new geophysical inversion method: the simulated atomic transition inversion method
The correspondence between the iterative optimization process used to solve general geophysical inverse problems and the atomic transition process in physics is studied in detail, and correspondences are established between the model space, initial model, local-minimum models and optimal model of the inverse problem on the one hand and the state space, stationary states, excited states and ground state of an atom on the other. On this basis, the physical process by which an atom transitions from an excited state to the ground state is simulated, a non-linear stochastic transition mathematical model and a transition search criterion for model solutions corresponding to the atomic transition process are established, and a non-linear inversion algorithm simulating atomic transitions, applicable to general geophysical data, is derived. Numerical experiments on this new inversion method using analytic test functions show that the solution does not depend on the initial model and that the method converges quickly.

4.
A new uncertainty estimation method, which we recently introduced in the literature, allows for the comprehensive search of model posterior space while maintaining a high degree of computational efficiency. The method starts with an optimal solution to an inverse problem, performs a parameter reduction step and then searches the resulting feasible model space using prior parameter bounds and sparse‐grid polynomial interpolation methods. After misfit rejection, the resulting model ensemble represents the equivalent model space and can be used to estimate inverse solution uncertainty. While parameter reduction introduces a posterior bias, it also allows for scaling this method to higher dimensional problems. The use of Smolyak sparse‐grid interpolation also dramatically increases sampling efficiency for large stochastic dimensions. Unlike Bayesian inference, which treats the posterior sampling problem as a random process, this geometric sampling method exploits the structure and smoothness in posterior distributions by solving a polynomial interpolation problem and then resampling from the resulting interpolant. The two questions we address in this paper are 1) whether our results are generally compatible with established Bayesian inference methods and 2) how our method compares in terms of posterior sampling efficiency. We accomplish this by comparing our method for two electromagnetic problems from the literature with two commonly used Bayesian sampling schemes: Gibbs’ and Metropolis‐Hastings. While both the sparse‐grid and Bayesian samplers produce compatible results in both examples, the sparse‐grid approach has a much higher sampling efficiency, requiring an order of magnitude fewer samples, suggesting that sparse‐grid methods can significantly improve the tractability of inference solutions for problems in high dimensions or with more costly forward physics.
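A minimal sketch of the interpolate-then-resample idea follows, assuming a toy two-parameter misfit and a plain regular-grid interpolant in place of the Smolyak sparse-grid interpolation described above; the rejection threshold is likewise illustrative.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def misfit(m1, m2):
    # Toy smooth misfit standing in for an expensive forward simulation.
    return (m1 - 1.0) ** 2 + 10.0 * (m2 + 0.5) ** 2

# 1) Evaluate the misfit on a coarse grid spanning the prior parameter bounds
#    (a regular grid here, Smolyak sparse grids in the paper).
g1 = np.linspace(-2, 2, 17)
g2 = np.linspace(-2, 2, 17)
M1, M2 = np.meshgrid(g1, g2, indexing="ij")
surrogate = RegularGridInterpolator((g1, g2), misfit(M1, M2))

# 2) Resample densely from the interpolant instead of the forward model.
rng = np.random.default_rng(0)
samples = rng.uniform(-2, 2, size=(100_000, 2))
predicted_misfit = surrogate(samples)

# 3) Misfit rejection: keep the ensemble of equivalent models below a threshold.
ensemble = samples[predicted_misfit < 0.5]
print(ensemble.shape, ensemble.mean(axis=0))
```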

5.
A major difficulty in inverting geodetic data for fault slip distribution is that measurement errors are mapped from the data space onto the solution space. The amplitude of this mapping is sensitive to the condition number of the inverse problem, i.e., the ratio between the largest and smallest singular values of the forward matrix. Thus, unless the problem is well-conditioned, slip inversions cannot reveal the actual fault slip distribution. In this study, we describe a new iterative algorithm that optimizes the conditioning of the slip inversion through discretization of InSAR data. We present a numerical example that demonstrates the effectiveness of our approach. We show that the condition numbers of the reconditioned data sets are not only much smaller than those of uniformly spaced data sets of the same dimension but also much smaller than those of non-uniformly spaced data sets with data density that increases towards the model fault.
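The following sketch illustrates the underlying quantities, assuming a random forward matrix and a simple greedy row-selection rule as stand-ins for the paper's InSAR discretization algorithm: the condition number is the ratio of the largest to the smallest singular value, and data points are added so that this ratio stays small.

```python
import numpy as np

def cond(G):
    # Condition number = largest / smallest singular value of the forward matrix.
    s = np.linalg.svd(G, compute_uv=False)
    return s[0] / s[-1]

rng = np.random.default_rng(1)
G_full = rng.standard_normal((200, 10))        # candidate data points x slip parameters (synthetic)
n_select = 30

selected = list(range(10))                     # seed subset with enough rows for full column rank
remaining = set(range(10, 200))
while len(selected) < n_select:
    # Greedy rule (illustrative): add the data point that keeps the subset best conditioned.
    best_row, best_cond = None, np.inf
    for i in remaining:
        c = cond(G_full[selected + [i]])
        if c < best_cond:
            best_row, best_cond = i, c
    selected.append(best_row)
    remaining.remove(best_row)

print("condition of selected subset:", cond(G_full[selected]))
print("condition of a same-size uniform subset:", cond(G_full[:n_select]))
```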

6.
Optimization of Cell Parameterizations for Tomographic Inverse Problems
We develop algorithms for the construction of irregular cell (block) models for the parameterization of tomographic inverse problems. The forward problem is defined on a regular basic grid of non-overlapping cells. The basic cells are used as building blocks for the construction of non-overlapping irregular cells. The construction algorithms are not computationally intensive and not particularly complex, and, in general, allow for grid optimization where cell size is determined from scalar functions, e.g., measures of model sampling or a priori estimates of model resolution. The link between a particular cell j in the regular basic grid and its host cell k in the irregular grid is provided by a pointer array which implicitly defines the irregular cell model. The complex geometrical aspects of irregular cell models are not needed in the forward or in the inverse problem. The matrix system of tomographic equations is computed once on the regular basic cell model. After grid construction, the basic matrix equation is mapped, using the pointer array, onto a new matrix equation in which the model vector relates directly to cells in the irregular model. Next, the mapped system can be solved on the irregular grid. This approach avoids forward computation on the complex geometry of irregular grids. Generally, grid optimization can aim at reducing the number of model parameters in volumes poorly sampled by the data while elsewhere retaining the power to resolve the smallest scales warranted by the data. Unnecessary overparameterization of the model space can be avoided and grid construction can aim at improving the conditioning of the inverse problem. We present simple theory and optimization algorithms in the context of seismic tomography and apply the methods to Rayleigh-wave group velocity inversion and global travel-time tomography.
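A minimal sketch of the pointer-array mapping described above, using toy sizes and a random matrix in place of real ray geometries:

```python
import numpy as np

n_rays, n_basic, n_irregular = 50, 12, 4
rng = np.random.default_rng(0)
G_basic = rng.random((n_rays, n_basic))        # tomographic matrix computed once on the basic grid

# pointer[j] = k: basic cell j is hosted by irregular cell k (toy grouping).
pointer = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3])

# Map the basic matrix onto the irregular grid by accumulating columns per host cell.
G_irregular = np.zeros((n_rays, n_irregular))
for j, k in enumerate(pointer):
    G_irregular[:, k] += G_basic[:, j]

# A model defined on the irregular cells predicts the same data as its
# expansion back onto the basic grid, so the mapped system can be solved directly.
m_irregular = rng.random(n_irregular)
assert np.allclose(G_irregular @ m_irregular, G_basic @ m_irregular[pointer])
```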

7.
8.
Inverse modeling is widely used to assist with forecasting problems in the subsurface. However, full inverse modeling can be time-consuming, requiring iteration over a high-dimensional parameter space with computationally expensive forward models and complex spatial priors. In this paper, we investigate a prediction-focused approach (PFA) that aims at building a statistical relationship between data variables and forecast variables, avoiding the inversion of model parameters altogether. The statistical relationship is built by first applying the forward model related to the data variables and the forward model related to the prediction variables to a limited set of spatial prior model realizations, typically generated through geostatistical methods. The relationship observed between data and prediction is highly non-linear for many forecasting problems in the subsurface. In this paper we propose a Canonical Functional Component Analysis (CFCA) to map the data and forecast variables into a low-dimensional space where, if successful, the relationship is linear. CFCA consists of (1) functional principal component analysis (FPCA) for dimension reduction of time-series data and (2) canonical correlation analysis (CCA), the latter aiming to establish a linear relationship between data and forecast components. If such a mapping is successful, we illustrate with several cases that (1) simple regression techniques within a multi-Gaussian framework can be used to directly quantify uncertainty on the forecast without any model inversion and that (2) such uncertainty is a good approximation of the uncertainty obtained from full posterior sampling with rejection sampling.
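The following is a minimal sketch of this workflow, with plain PCA standing in for functional PCA and simple synthetic "forward models" in place of the subsurface simulators; scikit-learn's CCA supplies both the canonical mapping and the linear regression step.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_prior, n_t = 500, 100
t = np.linspace(0, 1, n_t)
m = rng.uniform(0.5, 2.0, n_prior)                                         # prior model realizations
D = np.exp(-np.outer(m, t)) + 0.01 * rng.standard_normal((n_prior, n_t))   # toy data variables
H = np.cumsum(np.exp(-0.5 * np.outer(m, t)), axis=1)                       # toy forecast variables

# 1) Dimension reduction of both time series (plain PCA in place of FPCA).
d_pca, h_pca = PCA(n_components=3).fit(D), PCA(n_components=3).fit(H)
d_pc, h_pc = d_pca.transform(D), h_pca.transform(H)

# 2) CCA maps data and forecast components into a space where their relation is
#    approximately linear; its built-in regression predicts forecast components
#    from an observed data realization without inverting for model parameters.
cca = CCA(n_components=2).fit(d_pc, h_pc)
d_obs = D[:1]                                  # stand-in for the field observation
forecast = h_pca.inverse_transform(cca.predict(d_pca.transform(d_obs)))
print(forecast.shape)
```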

9.
The root cause of the instability of the least-squares (LS) solution of the resistivity inverse problem is the ill-conditioning of the sensitivity matrix. To circumvent this problem, a new LS approach is investigated in this paper. At each iteration, the sensitivity matrix is weighted in multiple ways, generating a set of systems of linear equations. By solving each system, several candidate models are obtained. As a consequence, the space of models is explored in a more extensive and effective way, resulting in a more robust and stable LS approach to solving the resistivity inverse problem. This new approach is called the multiple reweighted LS method (MRLS). The problems encountered when using the L1- or L2-norm are discussed and the advantages of working with the MRLS method are highlighted. A five-layer earth model, which generates an ill-conditioned matrix due to equivalence, is used to generate a synthetic data set for the Schlumberger configuration. The data are randomly corrupted by noise and then inverted using the L2, L1 and MRLS algorithms. Stabilized solutions, even though blurred, could only be obtained with the L2- and L1-norms by using a heavy ridge regression parameter. On the other hand, the MRLS solution is stable without regression factors and is superior and clearer. For a better appraisal, the same initial model was used in all cases. The MRLS algorithm is also demonstrated on a field data set, where a stable solution is obtained.
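A minimal sketch of the multiple-reweighting idea follows; the ill-conditioned test matrix and the particular column-weighting rules are illustrative assumptions, not the MRLS recipe itself.

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((40, 5)) @ np.diag([1.0, 1.0, 1.0, 1e-3, 1e-4])   # ill-conditioned sensitivities
m_true = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
d = G @ m_true + 0.01 * rng.standard_normal(40)

candidates = []
for p in (0.0, 0.5, 1.0, 1.5, 2.0):
    # Each pass weights the sensitivity matrix differently (illustrative column weights).
    w = 1.0 / (np.linalg.norm(G, axis=0) ** p + 1e-12)
    m_w, *_ = np.linalg.lstsq(G * w, d, rcond=None)
    candidates.append(w * m_w)                 # map back to unweighted parameters

# Keep the candidate model that best explains the data.
misfits = [np.linalg.norm(G @ m - d) for m in candidates]
m_best = candidates[int(np.argmin(misfits))]
print("best candidate:", np.round(m_best, 3), "misfit:", min(misfits))
```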

10.
A common example of a large-scale non-linear inverse problem is the inversion of seismic waveforms. Techniques used to solve this type of problem usually involve finding the minimum of some misfit function between observations and theoretical predictions. As the size of the problem increases, techniques requiring the inversion of large matrices become very cumbersome. Considerable storage and computational effort are required to perform the inversion and to avoid stability problems. Consequently, methods which do not require any large-scale matrix inversion have proved to be very popular. Currently, descent-type algorithms are in widespread use. Usually, at each iteration a descent direction is derived from the gradient of the misfit function and an improvement is made to an existing model based on this, and perhaps previous, descent directions. A common feature in nearly all geophysically relevant problems is the existence of separate parameter types in the inversion, i.e. unknowns of different dimension and character. However, this fundamental difference in parameter types is not reflected in the inversion algorithms used. Usually, gradient methods either mix parameter types together and take little notice of their individual character or assume some knowledge of their relative importance within the inversion process. We propose a new strategy for the non-linear inversion of multi-offset reflection data. The paper is entirely theoretical and its aim is to show how a technique which has been applied in reflection tomography and to the inversion of arrival times for 3D structure may be used in the waveform case. Specifically, we show how to extend the algorithm presented by Tarantola to incorporate the subspace scheme. The proposed strategy involves no large-scale matrix inversion but pays particular attention to the different parameter types in the inversion. We use the formulae of Tarantola to state the problem as one of optimization and derive the same descent vectors. The new technique splits the descent vector so that each part depends on a different parameter type, and proceeds to minimize the misfit function within the subspace defined by these individual descent vectors. In this way, optimal use is made of the descent vector components, i.e. one finds the combination which produces the greatest reduction in the misfit function based on a local linearization of the problem within the subspace. This is not the case with other gradient methods. By solving a linearized problem in the chosen subspace, at each iteration one need only invert a small, well-conditioned matrix (the projection of the full Hessian onto the subspace). The method is a hybrid between gradient and matrix inversion methods. The proposed algorithm requires the same gradient vectors to be determined as in the algorithm of Tarantola, although its primary aim is to make better use of those calculations in minimizing the objective function.
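A minimal sketch of one subspace iteration for a linearised least-squares misfit with two parameter types follows; the synthetic matrix, data and parameter split are illustrative, not a waveform-inversion implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_data, n_v, n_rho = 100, 30, 30               # data, "velocity-type" and "density-type" unknowns
G = np.hstack([rng.standard_normal((n_data, n_v)),
               5.0 * rng.standard_normal((n_data, n_rho))])   # parameter types of different character
d = rng.standard_normal(n_data)
m = np.zeros(n_v + n_rho)

for _ in range(20):
    r = G @ m - d
    g = G.T @ r                                             # full gradient of 0.5*||Gm - d||^2
    a1 = np.concatenate([g[:n_v], np.zeros(n_rho)])         # descent vector restricted to type 1
    a2 = np.concatenate([np.zeros(n_v), g[n_v:]])           # descent vector restricted to type 2
    A = np.column_stack([a1, a2])
    H_proj = A.T @ (G.T @ (G @ A))                          # 2x2 projection of the Hessian
    mu = np.linalg.solve(H_proj, -A.T @ g)                  # best combination within the subspace
    m = m + A @ mu

print("final misfit:", 0.5 * np.linalg.norm(G @ m - d) ** 2)
```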

11.
Introduction: 3-D seismic tomography has been applied to various geophysical problems. Aki and Lee (1976) and Hawley et al. (1981) inverted 3-D mode...

12.
An optimal inversion method based on mixed norms
When solving geophysical inverse problems, the objective function is usually constructed according to the least-squares criterion, an approach that has been widely used in practice. To further enhance the robustness of the inversion and reduce its non-uniqueness without sacrificing the resolution of the results, this paper proposes an optimal inversion method under mixed norms: because the data and the model may follow different probability distributions, different norms are applied to the data space and the model space when constructing the objective function. On the basis of the resulting objective function, the linear inversion equations under mixed norms are derived. Owing to the complexity of these equations, an iteratively reweighted conjugate-gradient method under mixed norms is used to solve them. Finally, the feasibility of the proposed computational method is verified by inverting simulated resistivity data.
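A minimal iteratively reweighted least-squares sketch of such a mixed-norm objective follows, assuming a small linear test problem and an L1 data norm with an L2 model norm; a direct solve replaces the conjugate-gradient solver mentioned in the abstract.

```python
import numpy as np

def irls_mixed_norm(G, d, p=1.0, q=2.0, lam=1e-2, n_iter=30, eps=1e-6):
    # Minimise ||Gm - d||_p^p + lam * ||m||_q^q by iterative reweighting:
    # each pass solves a weighted least-squares system (a CG solver could be used instead).
    m = np.zeros(G.shape[1])
    for _ in range(n_iter):
        r = G @ m - d
        wd = np.ones_like(r) if p == 2 else (np.abs(r) + eps) ** (p - 2)   # data reweighting
        wm = np.ones_like(m) if q == 2 else (np.abs(m) + eps) ** (q - 2)   # model reweighting
        A = G.T @ (wd[:, None] * G) + lam * np.diag(wm)
        m = np.linalg.solve(A, G.T @ (wd * d))
    return m

rng = np.random.default_rng(0)
G = rng.standard_normal((80, 20))
m_true = np.zeros(20)
m_true[[3, 11]] = [1.0, -2.0]
d = G @ m_true + 0.05 * rng.standard_normal(80)
d[::10] += 3.0                                 # outliers that an L1 data norm down-weights
print(np.round(irls_mixed_norm(G, d), 2))
```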

13.
Hopfield neural networks are massively parallel automata that support specific models and are adept at solving optimization problems. They suffer from a ‘rough’ solution space and convergence properties that are highly dependent on the starting model or prior. These drawbacks may be overcome by introducing regularization into the network in the form of local feedback smoothing. Application of regularized Hopfield networks to over 50 optimization test cases has yielded successful results, even with uniform (minimal-information) priors. In particular, the non-linear, one- and two-dimensional magnetotelluric inverse problems have been solved by means of these regularized networks. The solutions compare favourably with those produced by other methods. Such regularized networks, with either hardware or programmed, parallel-computer implementation, can be extended to the problem of three-dimensional magnetotelluric inversion. Because neural networks are natural analog-to-digital converters, it is predicted that they will be the basic building blocks of future magnetotelluric instrumentation.

14.
We consider an iterative numerical method for solving two-dimensional (2D) inverse problems of magnetotelluric sounding which significantly reduces the computational burden of the inverse problem solution in the class of quasi-layered models. The idea of the method is to replace the operator of the direct 2D problem of calculating the low-frequency electromagnetic field in a quasi-layered medium by a quasi-one-dimensional operator at each observation point. The method is applicable to the inverse problems of magnetotellurics with either the E- or H-polarized field, as well as to the case when the inverse problem is solved simultaneously using the impedance values for the fields of both polarizations. We describe the numerical method and present examples of its application to the numerical solution of a number of model inverse problems of magnetotelluric sounding.

15.
Electrical resistivity tomography is a non-linear and ill-posed geophysical inverse problem that is usually solved through gradient-descent methods. This strategy is computationally fast and easy to implement but impedes accurate uncertainty appraisals. We present a probabilistic approach to two-dimensional electrical resistivity tomography in which a Markov chain Monte Carlo algorithm is used to numerically evaluate the posterior probability density function that fully quantifies the uncertainty affecting the recovered solution. The main drawback of Markov chain Monte Carlo approaches is related to the considerable number of sampled models needed to achieve accurate posterior assessments in high-dimensional parameter spaces. Therefore, to reduce the computational burden of the inversion process, we employ the differential evolution Markov chain, a hybrid method between non-linear optimization and Markov chain Monte Carlo sampling, which exploits multiple and interactive chains to speed up the probabilistic sampling. Moreover, the discrete cosine transform reparameterization is employed to reduce the dimensionality of the parameter space removing the high-frequency components of the resistivity model which are not sensitive to data. In this framework, the unknown parameters become the series of coefficients associated with the retained discrete cosine transform basis functions. First, synthetic data inversions are used to validate the proposed method and to demonstrate the benefits provided by the discrete cosine transform compression. To this end, we compare the outcomes of the implemented approach with those provided by a differential evolution Markov chain algorithm running in the full, un-reduced model space. Then, we apply the method to invert field data acquired along a river embankment. The results yielded by the implemented approach are also benchmarked against a standard local inversion algorithm. The proposed Bayesian inversion provides posterior mean models in agreement with the predictions achieved by the gradient-based inversion, but it also provides model uncertainties, which can be used for penetration depth and resolution limit identification.
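The following sketch shows only the discrete cosine transform reparameterization step on a smooth synthetic log-resistivity section; the differential evolution Markov chain sampling itself is not reproduced, and the retained block size is an illustrative choice.

```python
import numpy as np
from scipy.fft import dctn, idctn

nz, nx = 24, 60
z, x = np.meshgrid(np.arange(nz), np.arange(nx), indexing="ij")
log_rho = 2.0 + 0.5 * np.sin(2 * np.pi * x / nx) * np.exp(-z / 10.0)   # smooth synthetic log-resistivity

coeffs = dctn(log_rho, norm="ortho")
kz, kx = 6, 12                                 # retained low-frequency coefficients (illustrative)
reduced = np.zeros_like(coeffs)
reduced[:kz, :kx] = coeffs[:kz, :kx]           # these kz*kx numbers become the sampler's unknowns

log_rho_rec = idctn(reduced, norm="ortho")
rel_err = np.linalg.norm(log_rho_rec - log_rho) / np.linalg.norm(log_rho)
print(f"parameters: {nz * nx} -> {kz * kx}, relative reconstruction error: {rel_err:.3e}")
```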

16.
Markov chain Monte Carlo algorithms are commonly employed for accurate uncertainty appraisals in non-linear inverse problems. The downside of these algorithms is the considerable number of samples needed to achieve reliable posterior estimations, especially in high-dimensional model spaces. To overcome this issue, the Hamiltonian Monte Carlo algorithm has recently been introduced to solve geophysical inversions. Different from classical Markov chain Monte Carlo algorithms, this approach exploits the derivative information of the target posterior probability density to guide the sampling of the model space. However, its main downside is the computational cost of the derivative computation (i.e. the computation of the Jacobian matrix around each sampled model). Possible strategies to mitigate this issue are the reduction of the dimensionality of the model space and/or the use of efficient methods to compute the gradient of the target density. Here we focus on the estimation of elastic properties (P-, S-wave velocities and density) from pre-stack data through a non-linear amplitude versus angle inversion in which the Hamiltonian Monte Carlo algorithm is used to sample the posterior probability. To decrease the computational cost of the inversion procedure, we employ the discrete cosine transform to reparametrize the model space, and we train a convolutional neural network to predict the Jacobian matrix around each sampled model. The training data set for the network is also parametrized in the discrete cosine transform space, thus allowing for a reduction of the number of parameters to be optimized during the learning phase. Once trained, the network can be used to compute the Jacobian matrix associated with each sampled model in real time. The outcomes of the proposed approach are compared and validated against the predictions of Hamiltonian Monte Carlo inversions in which a computationally expensive, but accurate, finite-difference scheme is used to compute the Jacobian matrix, and against those obtained by replacing the Jacobian with a matrix operator derived from a linear approximation of the Zoeppritz equations. Synthetic and field inversion experiments demonstrate that the proposed approach dramatically reduces the cost of the Hamiltonian Monte Carlo inversion while preserving an accurate and efficient sampling of the posterior probability.
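A minimal Hamiltonian Monte Carlo sketch on a correlated Gaussian target follows, showing the gradient-guided leapfrog proposals this approach relies on; the AVA forward model, the discrete cosine transform reparametrization and the CNN-predicted Jacobian are not included, and the step size, path length and target are assumptions.

```python
import numpy as np

# Toy correlated-Gaussian "posterior" (stand-in for the AVA posterior in the paper).
C = np.array([[1.0, 0.9], [0.9, 1.0]])
Cinv = np.linalg.inv(C)
log_post = lambda m: -0.5 * m @ Cinv @ m
grad_log_post = lambda m: -Cinv @ m            # the role played by the Jacobian-based gradient

def hmc(n_samples=5000, eps=0.15, n_leapfrog=20, seed=0):
    rng = np.random.default_rng(seed)
    m = np.zeros(2)
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(2)                           # auxiliary momentum
        m_new, p_new = m.copy(), p + 0.5 * eps * grad_log_post(m)
        for _ in range(n_leapfrog):                          # leapfrog integration
            m_new = m_new + eps * p_new
            p_new = p_new + eps * grad_log_post(m_new)
        p_new = p_new - 0.5 * eps * grad_log_post(m_new)     # trim the extra half momentum step
        log_alpha = (log_post(m_new) - 0.5 * p_new @ p_new) - (log_post(m) - 0.5 * p @ p)
        if np.log(rng.random()) < log_alpha:                 # Metropolis accept/reject
            m = m_new
        samples.append(m)
    return np.array(samples)

samples = hmc()
print("sample covariance:\n", np.cov(samples[1000:].T))
```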

17.
New methods for solving the three-dimensional inverse gravity problem in the class of contact surfaces are described. Based on the approach previously suggested by the authors, new algorithms are developed whose application significantly reduces the number of iterations and the computing time compared to the earlier ones. The algorithms have been implemented numerically on a multicore processor. An example of solving the structural inverse gravity problem for a four-layer medium model (using gravity field measurements) is presented.

18.
This work adopts a continuation approach, based on path tracking in model space, to solve the non-linear least-squares problem for discrimination of unexploded ordnance (UXO) using multi-receiver electromagnetic induction (EMI) data. The forward model corresponds to a stretched-exponential decay of eddy currents induced in a magnetic spheroid. We formulate an over-determined, or under-parameterized, inverse problem. An example using synthetic multi-receiver EMI responses illustrates the efficiency of the method. The fast inversion of actual field multi-receiver EMI responses of inert, buried ordnances is also shown. Software based on the continuation method could be installed within a multi-receiver EMI sensor and used for near-real-time UXO decision-making purposes without the need for a highly-trained operator.
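As an illustration of the forward model only, the following sketch fits a stretched-exponential decay with a standard non-linear least-squares solver; the continuation/path-tracking strategy and the spheroid polarisability model of the paper are not reproduced, and the parameter values and noise level are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

t = np.geomspace(1e-4, 1e-2, 30)               # time gates (s), illustrative
true = dict(A=5.0, tau=2e-3, beta=0.7)
rng = np.random.default_rng(0)
d_obs = true["A"] * np.exp(-(t / true["tau"]) ** true["beta"]) + 0.02 * rng.standard_normal(t.size)

def residuals(params):
    # Stretched-exponential decay d(t) = A * exp(-(t / tau)**beta).
    A, tau, beta = params
    return A * np.exp(-(t / tau) ** beta) - d_obs

fit = least_squares(residuals, x0=[1.0, 1e-3, 1.0],
                    bounds=([1e-3, 1e-5, 0.1], [np.inf, 1e-1, 2.0]))
print("recovered A, tau, beta:", np.round(fit.x, 4))
```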

19.
20.
A concept of environmental forecasting based on a variational approach is discussed. The basic idea is to augment the existing technology of modeling by a combination of direct and inverse methods. By this means, the scope of environmental studies can be substantially enlarged. In the concept, mathematical models of processes and observation data subject to some uncertainties are considered. The modeling system is derived from a specially formulated weak-constraint variational principle. A set of algorithms for implementing the concept is presented. These are: algorithms for the solution of direct, adjoint, and inverse problems; adjoint sensitivity algorithms; data assimilation procedures; etc. Methods of quantitative estimation of uncertainty are of particular interest since uncertainty functions play a fundamental role in data assimilation, assessment of model quality, and inverse problem solving. A scenario approach is an essential part of the concept. Some methods of orthogonal decomposition of multi-dimensional phase spaces are used to reconstruct the hydrodynamic background fields from available data and to include climatic data in long-term prognostic scenarios. Subspaces with informative bases are constructed for use in deterministic or stochastic-deterministic scenarios for forecasting air quality and risk assessment. The results of implementing example scenarios for the Siberian regions are presented.
