Similar Literature
20 similar documents found (search time: 31 ms)
1.
We compare the performances of four stochastic optimisation methods using four analytic objective functions and two highly non-linear geophysical optimisation problems: one-dimensional elastic full-waveform inversion and residual static computation. The four methods we consider, namely, adaptive simulated annealing, genetic algorithm, neighbourhood algorithm, and particle swarm optimisation, are frequently employed for solving geophysical inverse problems. Because geophysical optimisations typically involve many unknown model parameters, we are particularly interested in comparing the performances of these stochastic methods as the number of unknown parameters increases. The four analytic functions we choose simulate common types of objective functions encountered in solving geophysical optimisations: a convex function, two multi-minima functions that differ in the distribution of minima, and a nearly flat function. Similar to the analytic tests, the two seismic optimisation problems we analyse are characterised by very different objective functions. The first problem is a one-dimensional elastic full-waveform inversion, which is strongly ill-conditioned and exhibits a nearly flat objective function, with a valley of minima extending along the density direction. The second problem is the residual static computation, which is characterised by a multi-minima objective function produced by the so-called cycle-skipping phenomenon. According to the tests on the analytic functions and on the seismic data, the genetic algorithm generally displays the best scaling with the number of parameters. It encounters problems only in the case of an irregular distribution of minima, that is, when the global minimum is at the border of the search space and a number of important local minima are distant from the global minimum.
The adaptive simulated annealing method is often the best-performing method for low-dimensional model spaces, but its performance worsens as the number of unknowns increases. Particle swarm optimisation is effective in finding the global minimum for low-dimensional model spaces with few local minima or with a narrow flat valley. Finally, the neighbourhood algorithm is competitive with the other methods only for low-dimensional model spaces; its performance worsens markedly for multi-minima objective functions.
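To make the comparison concrete, here is a minimal global-best particle swarm optimiser run on the Rastrigin function, a standard multi-minima benchmark. This is an illustrative sketch: the parameter values (inertia `w=0.7`, acceleration `c1=c2=1.5`) and the test function are assumptions, not the tuned variants or the four analytic functions of the study.

```python
import numpy as np

def pso(objective, bounds, n_particles=40, n_iter=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimiser (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))  # positions
    v = np.zeros_like(x)                                  # velocities
    pbest = x.copy()                                      # personal bests
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()                  # global best
    for _ in range(n_iter):
        r1 = rng.random(x.shape)
        r2 = rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved] = x[improved]
        pbest_f[improved] = f[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

def rastrigin(p):
    """Multi-minima benchmark with a regular grid of local minima."""
    return 10.0 * p.size + np.sum(p ** 2 - 10.0 * np.cos(2.0 * np.pi * p))

best_x, best_f = pso(rastrigin, (np.full(2, -5.12), np.full(2, 5.12)))
```

Scaling experiments like those in the abstract amount to rerunning such a loop while increasing the dimension of `bounds`.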

2.
In conventional seismic exploration, especially in marine seismic exploration, shot gathers with missing near-offset traces are common. Interferometric interpolation methods are one of a range of different methods that have been developed to solve this problem. Interferometric interpolation methods differ from conventional interpolation methods in that they utilise information from multiples in the interpolation process. In this study, we apply both conventional interferometric interpolation (shot domain) and multi-domain interferometric interpolation (shot and receiver domains) to a synthetic and a real towed-marine dataset from the Baltic Sea, with the primary aim of improving the image of the seabed by extrapolation across a near-offset gap. We utilise a matching filter after interferometric interpolation to partially mitigate artefacts and coherent noise associated with the far-field approximation and a limited recording aperture size. The results show that an improved image of the seabed is obtained after performing interferometric interpolation. In most cases, the results from multi-domain interferometric interpolation are similar to those from conventional interferometric interpolation. However, when the source–receiver aperture is limited, the multi-domain method performs better. A quantitative analysis for assessing the performance of interferometric interpolation shows that multi-domain interferometric interpolation typically performs better than conventional interferometric interpolation. We also benchmark the interpolated results generated by interferometric interpolation against those obtained using sparse recovery interpolation.
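The core interferometric operation is a cross-correlation of the wavefields at two receivers, stacked over sources, so that the peak lag approximates the traveltime between them. The following bare-bones sketch (spike traces, fixed 5-sample moveout) is an assumption-laden toy, not the shot/receiver-domain scheme of the study:

```python
import numpy as np

def virtual_trace(traces_a, traces_b):
    """Stack cross-correlations over sources; the peak lag approximates the
    traveltime between the two receivers (a toy interferometric trace)."""
    n = traces_a.shape[1]
    out = np.zeros(2 * n - 1)
    for a, b in zip(traces_a, traces_b):
        out += np.correlate(b, a, mode="full")
    return out

# toy test: every source arrives at receiver B 5 samples later than at A
rng = np.random.default_rng(0)
n, lag, n_src = 100, 5, 20
traces_a = np.zeros((n_src, n))
traces_b = np.zeros((n_src, n))
for i, t0 in enumerate(rng.integers(10, 80, n_src)):
    traces_a[i, t0] = 1.0
    traces_b[i, t0 + lag] = 1.0
out = virtual_trace(traces_a, traces_b)
```

The stacked correlation is nonzero only at the inter-receiver lag, which is why multiples (extra source-side bounces) add usable illumination rather than noise.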

3.
Using seismic attributes such as coherence and curvature to characterise faults can not only improve the efficiency of seismic interpretation but also expand the capability to detect faults. Coherence and curvature have been widely applied to fault characterisation for years. These two methods detect faults based on the similarity of seismic waveforms and on the shapes of the reflectors, respectively; they are complementary to each other, and both have advantages and disadvantages in fault characterisation. A recent development in fault characterisation based on reflector shapes has been the use of the rate of change of curvature. Through an application to seismic data from Western Tazhong of the Tarim Basin, China, it was demonstrated that the rate of change of curvature is more capable of detecting subtle faults with quite small throws and heaves. However, there often exist multiple extreme values indicating the same fault when applying the rate of change of curvature, which significantly degrades the signal-to-noise ratio of the computed result because the multiple extrema interfere with each other. To resolve this problem, we propose using a linear combination of arctangent and proportional functions as the directrix of a cylindrical surface to fit the fault model, and we calculate its third derivative, which can then be used to characterise the fault. Through an application to the 3D seismic data from Western Tazhong of the Tarim Basin, the results show that the proposed method not only retains the same capability to detect subtle faults with small throws as the curvature change rate but also greatly improves the signal-to-noise ratio of the calculated result.
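A quick numerical sketch of the idea: model the fault directrix as an arctangent step (the fault) plus a proportional term (regional dip), then take the third derivative. The coefficients `a`, `b` and half-width `w` below are invented for illustration; for this model the third derivative attains its single largest-magnitude extremum at the fault location.

```python
import numpy as np

x = np.linspace(-50.0, 50.0, 2001)
a, b, w = 10.0, 0.05, 5.0
# directrix: arctangent (fault step) plus proportional (regional dip) term
f = a * np.arctan(x / w) + b * x

# third derivative by repeated numerical differentiation
d3 = f.copy()
for _ in range(3):
    d3 = np.gradient(d3, x)

peak = x[np.argmax(np.abs(d3))]   # location of the strongest extremum
```

Analytically, f'''(x) = a·w·(6x² − 2w²)/(w² + x²)³, whose magnitude is maximal at x = 0, consistent with the single-extremum claim.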

4.
5.
To verify the importance of the non-stationary frequency characteristics of seismic ground motion, a joint time–frequency analysis technique for time signals, called chirplet-based signal approximation, is developed to extract the non-stationary frequency information from recorded data. The chirplet-based signal approximation is clear in concept, similar to the Fourier transform in its mathematical expression but with different basis functions. Case studies show that the chirplet-based signal approximation can represent the joint time–frequency variation of seismic ground motion quite well. Random models of both the uniform modulating process and the evolutionary process are employed to generate artificial seismic waves. The joint time–frequency modulating function in the random model of the evolutionary process is determined by chirplet-based signal approximation. Finally, non-linear response analysis of an SDOF (single-degree-of-freedom) system and a frame structure is performed based on the generated artificial seismic waves. The results show that the non-stationary frequency characteristics of seismic ground motion can significantly change the non-linear response characteristics of structures, particularly when a structure goes into the collapse phase under seismic action. It is concluded that the non-stationary frequency characteristics of seismic ground motion should be considered in the assessment of the seismic capacity of structures. Copyright © 2002 John Wiley & Sons, Ltd.
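A chirplet atom is essentially a Gaussian-windowed linear chirp, so its instantaneous frequency drifts across the window, which is what lets the basis track non-stationary frequency content. The parameterisation below (centre `tc`, width `sigma`, onset frequency `f0`, `chirp_rate`) is a generic textbook form, not necessarily the exact parameterisation of the paper:

```python
import numpy as np

def chirplet(t, tc, sigma, f0, chirp_rate, phase=0.0):
    """Gaussian-envelope linear-chirp atom (a basic chirplet)."""
    tau = t - tc
    return np.exp(-0.5 * (tau / sigma) ** 2) * np.cos(
        2.0 * np.pi * (f0 * tau + 0.5 * chirp_rate * tau ** 2) + phase)

t = np.linspace(0.0, 10.0, 2000)
atom = chirplet(t, tc=5.0, sigma=1.0, f0=2.0, chirp_rate=0.5)

# a record built from this atom projects onto it with the correct amplitude
signal = 3.0 * atom
coef = (signal @ atom) / (atom @ atom)
```

An approximation of a real record would match many such atoms, each contributing a local patch of the joint time–frequency modulating function.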

6.
The phase-shift-plus-interpolation and extended-split-step-Fourier methods are wavefield-continuation algorithms for seismic migration imaging. These two methods can be applied to regions with complex geological structures. Based on their unified separable formulas, we show that these two methods have the same kinematic characteristics by using the theory of pseudodifferential operators. Numerical tests on a Marmousi model demonstrate this conclusion. Another important aspect of these two methods is the selection of reference velocities; we explore its influence by comparing the geometric progression method and the statistical method. We show that the geometric progression method is simple but does not take the velocity distribution into account, while the statistical approach is more complex but reflects the velocity distribution.
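Both algorithms build on the constant-velocity phase-shift extrapolator, which multiplies each wavenumber component by exp(i·kz·dz) with kz = sqrt((ω/v)² − kx²); PSPI then interpolates between several such extrapolations at different reference velocities. A sketch of the single constant-velocity depth step (parameter values below are arbitrary examples):

```python
import numpy as np

def phase_shift_step(P_kx, omega, v, kx, dz):
    """One depth step of constant-velocity phase-shift extrapolation.
    Propagating components get a pure phase rotation; evanescent components
    (kx^2 > (omega/v)^2) are exponentially attenuated."""
    kz2 = (omega / v) ** 2 - kx ** 2
    propagating = kz2 >= 0
    kz = np.sqrt(np.abs(kz2))
    step = np.where(propagating, np.exp(1j * kz * dz), np.exp(-kz * dz))
    return P_kx * step

omega = 2.0 * np.pi * 30.0          # 30 Hz monochromatic slice
v, dz = 2000.0, 10.0                # m/s reference velocity, m depth step
kx = np.linspace(-0.3, 0.3, 201)    # horizontal wavenumber, rad/m
P = np.ones_like(kx, dtype=complex)
P_down = phase_shift_step(P, omega, v, kx, dz)
```

The pure phase rotation on propagating components is why the operator is kinematically exact for the reference velocity; the reference-velocity selection schemes discussed above determine how well a laterally varying medium is bracketed by such steps.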

7.
We present a Gaussian packet migration method based on Gabor frame decomposition and asymptotic propagation of Gaussian packets. A Gaussian packet has both Gaussian‐shaped time–frequency localization and space–direction localization. Its evolution can be obtained by ray tracing and dynamic ray tracing. In this paper, we first briefly review the concept of Gaussian packets. After discussing how initial parameters affect the shape of a Gaussian packet, we then propose two Gabor‐frame‐based Gaussian packet decomposition methods that can sparsely and accurately represent seismic data. One method is the dreamlet–Gaussian packet method. Dreamlets are physical wavelets defined on an observation plane and can represent seismic data efficiently in the local time–frequency space–wavenumber domain. After decomposition, dreamlet coefficients can be easily converted to the corresponding Gaussian packet coefficients. The other method is the Gabor‐frame Gaussian beam method. In this method, a local slant stack, which is widely used in Gaussian beam migration, is combined with the Gabor frame decomposition to obtain uniform sampled horizontal slowness for each local frequency. Based on these decomposition methods, we derive a poststack depth migration method through the summation of the backpropagated Gaussian packets and the application of the imaging condition. To demonstrate the Gaussian packet evolution and migration/imaging in complex models, we show several numerical examples. We first use the evolution of a single Gaussian packet in media with different complexities to show the accuracy of Gaussian packet propagation. Then we test the point source responses in smoothed varying velocity models to show the accuracy of Gaussian packet summation. Finally, using poststack synthetic data sets of a four‐layer model and the two‐dimensional SEG/EAGE model, we demonstrate the validity and accuracy of the migration method. 
Compared with the more accurate but more time-consuming one-way wave-equation-based migration, such as beamlet migration, the Gaussian packet method proposed in this paper can correctly image the major structures of the complex model, especially in subsalt areas, with much higher efficiency. This shows the application potential of Gaussian packet migration in complicated areas.

8.
9.
Compressed sensing has recently proved itself a successful tool for addressing the challenges of acquiring and processing seismic data sets. Compressed sensing shows that the information contained in sparse signals can be recovered accurately from a small number of linear measurements using a sparsity-promoting regularization. This paper investigates two aspects of compressed sensing in seismic exploration: (i) using a general non-convex regularizer instead of the conventional one-norm minimization for sparsity promotion and (ii) using a frequency mask to additionally subsample the acquired traces in the frequency–space domain. The proposed non-convex regularizer has better sparse recovery performance than one-norm minimization, and the additional frequency mask allows us to incorporate a priori information about the events contained in the wavefields into the reconstruction. For example, (i) seismic data are band-limited; therefore one can use only a partial set of frequency coefficients in the range of the reflection band, where the signal-to-noise ratio is high and spatial aliasing is low, to reconstruct the original wavefield, and (ii) the low-frequency character of coherent ground roll allows it to be eliminated directly during reconstruction by disregarding the corresponding frequency coefficients (usually below 10 Hz) via a frequency mask. The results of this paper show that several challenges of reconstruction and denoising in seismic exploration can be addressed under a unified formulation. It is illustrated numerically that the compressed sensing performance for seismic data interpolation improves significantly when an additional coherent subsampling is performed in the frequency–space domain compared with the time–space-only case. Numerical experiments on both simulated and real field data are included to illustrate the effectiveness of the presented method.
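The reconstruction machinery can be sketched with a simple iterative-thresholding (POCS-style) loop: transform, shrink small Fourier coefficients, inverse-transform, and reinsert the measured samples. This is a related classical scheme shown for illustration under a 1-D, randomly subsampled toy model; the paper's non-convex regularizer and soft-threshold (one-norm) variants follow the same pattern with a different shrinkage rule.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
t = np.arange(n)
# a wavefield stand-in that is sparse in the Fourier domain (two harmonics)
signal = np.cos(2 * np.pi * 10 * t / n) + 0.5 * np.cos(2 * np.pi * 37 * t / n)
mask = (rng.random(n) < 0.4).astype(float)   # keep ~40% of the samples
observed = mask * signal

def pocs_interpolate(observed, mask, n_iter=100):
    """Iterative thresholding with a linearly shrinking Fourier threshold."""
    x = observed.copy()
    for k in range(n_iter):
        X = np.fft.fft(x)
        thresh = np.abs(X).max() * (1.0 - (k + 1) / n_iter)
        X[np.abs(X) < thresh] = 0.0          # keep only the strongest coefficients
        x = np.fft.ifft(X).real
        x = observed + (1.0 - mask) * x      # honour the measured samples
    return x

recovered = pocs_interpolate(observed, mask)
```

A frequency mask as described in the abstract would simply zero a fixed set of coefficients (e.g. below 10 Hz) inside the same loop.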

10.
Time reversal mirrors can be used to backpropagate and refocus incident wavefields to their actual source location, with the subsequent benefits of imaging with high-resolution and super-stacking properties. These benefits of time reversal mirrors have been previously verified with computer simulations and laboratory experiments but not with exploration-scale seismic data. We now demonstrate the high-resolution and the super-stacking properties in locating seismic sources with field seismic data that include multiple scattering. Tests on both synthetic data and field data show that a time reversal mirror has the potential to exceed the Rayleigh resolution limit by factors of 4 or more. Results also show that a time reversal mirror has a significant resilience to strong Gaussian noise and that accurate imaging of source locations from passive seismic data can be accomplished with traces having signal-to-noise ratios as low as 0.001. Synthetic tests also demonstrate that time reversal mirrors can sometimes enhance the signal by a factor proportional to the square root of the product of the number of traces, denoted as N, and the number of events in the traces. This enhancement property is denoted as super-stacking and greatly exceeds the classical signal-to-noise enhancement factor of √N. High-resolution and super-stacking are properties also enjoyed by seismic interferometry and reverse-time migration with the exact velocity model.
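The classical √N baseline that super-stacking exceeds is easy to verify numerically: averaging N traces with an aligned event and independent unit-variance noise reduces the noise standard deviation by √N. A toy demonstration (spike event, Gaussian noise, arbitrary trace counts):

```python
import numpy as np

rng = np.random.default_rng(0)
n_traces, n_samples = 400, 1000
signal = np.zeros(n_samples)
signal[500] = 1.0                               # event aligned on every trace
gather = signal + rng.normal(0.0, 1.0, (n_traces, n_samples))

stack = gather.mean(axis=0)
noise_single = gather[0, :400].std()            # noise window of one trace
noise_stack = stack[:400].std()                 # noise window of the stack
gain = noise_single / noise_stack               # ≈ sqrt(400) = 20
```

Super-stacking, by contrast, also exploits the multiple events per trace that a time reversal mirror refocuses coherently, which is where the extra factor in the abstract comes from.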

11.
In this paper, we discuss high-resolution coherence functions for the estimation of stacking parameters in seismic signal processing. We focus on Multiple Signal Classification (MUSIC), which uses the eigendecomposition of the seismic data to measure coherence along stacking curves. This algorithm can outperform traditional semblance in cases of close or interfering reflections, generating a sharper velocity spectrum. Our main contribution is to propose complexity-reducing strategies for its implementation to make it a feasible alternative to semblance. First, we show how to compute the MUSIC spectrum based on the eigendecomposition of the temporal correlation matrix of the seismic data. This matrix has a lower order than the spatial correlation matrix used by other methods, so computing its eigendecomposition is simpler. Then we show how to compute its coherence measure in terms of the signal subspace of the seismic data. This further reduces the computational cost, as we now have to compute fewer eigenvectors than those required by the noise subspace currently used in the literature. Furthermore, we show how these eigenvectors can be computed with the low-complexity power method. As a result of these simplifications, we show that the complexity of computing the MUSIC velocity spectrum is only about three times greater than that of semblance. Also, we propose a new normalization function to deal with the high dynamic range of the velocity spectrum. Numerical examples with synthetic and real seismic data indicate that the proposed approach provides stacking parameters with better resolution than conventional semblance, at an affordable computational cost.
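The low-complexity eigenvector step mentioned above can be sketched with the textbook power method applied to a temporal correlation matrix; the toy data (one coherent event across ten traces plus weak noise) and sizes are assumptions for illustration only:

```python
import numpy as np

def power_method(R, n_iter=200, seed=0):
    """Dominant eigenpair of a symmetric matrix by repeated multiplication."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=R.shape[0])
    for _ in range(n_iter):
        v = R @ v
        v /= np.linalg.norm(v)
    return v @ R @ v, v          # Rayleigh quotient, eigenvector

# toy temporal correlation matrix: coherent event plus weak noise
rng = np.random.default_rng(1)
s = np.sin(np.linspace(0.0, 4.0 * np.pi, 30))
traces = np.outer(np.ones(10), s) + 0.1 * rng.normal(size=(10, 30))
R = traces.T @ traces / 10.0     # 30 x 30 temporal correlation matrix
lam, v = power_method(R)
```

Because the signal subspace here is (nearly) rank one, a single power-method eigenvector already spans it, which is the cost advantage over computing the full noise subspace.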

12.
Local seismic event slopes contain subsurface velocity information and can be used to estimate the seismic stacking velocity. In this paper, we propose a novel approach to estimate the stacking velocity automatically from seismic reflection data using similarity-weighted k-means clustering, in which the weights are the local similarity between each trace in a common midpoint gather and a reference trace. Local similarity reflects the local signal-to-noise ratio in the common midpoint gather. We select the data points with high signal-to-noise ratio to be used in the velocity estimation, giving them large weights in the mapped traveltime–velocity domain via similarity-weighted k-means clustering with thresholding. By using weighted k-means clustering, we make the clustering centroids closer to those data points with large weights, which are more reliable and have higher signal-to-noise ratio. Interpolation is then used to obtain the whole velocity volume from the velocity points calculated by weighted k-means clustering. Using the proposed method, one obtains a more accurate estimate of the stacking velocity because the similarity-based weighting in clustering takes into account the signal-to-noise ratio and reliability of different data points in the mapped traveltime–velocity domain. To demonstrate this, we apply the proposed method to synthetic and field data examples, and the resulting images are of higher quality than those obtained using existing methods.
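Weighted k-means itself is a small modification of Lloyd's algorithm: the centroid update becomes a weighted mean, so high-weight (here: high-similarity, high-SNR) points pull harder. The sketch below uses invented two-dimensional point clouds as stand-ins for mapped (traveltime, velocity) points and a deterministic seeding chosen only to keep the toy reproducible:

```python
import numpy as np

def weighted_kmeans(points, weights, k, n_iter=50):
    """k-means with per-point weights in the centroid update."""
    # simple deterministic seeding for this sketch: evenly spaced points
    centroids = points[np.linspace(0, len(points) - 1, k, dtype=int)].astype(float)
    labels = np.zeros(len(points), dtype=int)
    for _ in range(n_iter):
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)                  # assign to nearest centroid
        for j in range(k):
            m = labels == j
            if m.any():
                w = weights[m][:, None]
                centroids[j] = (w * points[m]).sum(axis=0) / w.sum()
    return centroids, labels

rng = np.random.default_rng(0)
cloud_a = rng.normal(0.0, 0.3, (50, 2))            # stand-in (traveltime, velocity) points
cloud_b = rng.normal(5.0, 0.3, (50, 2))
points = np.vstack([cloud_a, cloud_b])
weights = rng.uniform(0.5, 1.0, 100)               # stand-in for local similarity
centroids, labels = weighted_kmeans(points, weights, 2)
```

With similarity-derived weights, noisy low-SNR points influence the cluster centres (and thus the picked velocities) far less than in unweighted k-means.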

13.
Empirical mode decomposition aims to decompose the input signal into a small number of components, named intrinsic mode functions, with slowly varying amplitudes and frequencies. In spite of its simplicity and usefulness, however, empirical mode decomposition lacks a solid mathematical foundation. In this paper, we describe a method to extract the intrinsic mode functions of the input signal using the non-stationary Prony method. The proposed method captures the philosophy of empirical mode decomposition but uses a different method to compute the intrinsic mode functions. Having obtained the intrinsic mode functions, we then compute the spectrum of the input signal using the Hilbert transform. Synthetic and field data validate that the proposed method can correctly compute the spectrum of the input signal and could be used in seismic data analysis to facilitate interpretation.
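For orientation, here is the classic (stationary) Prony method, which fits a signal as a sum of complex exponentials via linear prediction; the non-stationary variant used in the paper generalises this idea to time-varying parameters. The damped-cosine toy signal and the mode count `p=2` are assumptions for the sketch:

```python
import numpy as np

def prony(x, p):
    """Classic Prony: fit x[n] = sum_k a_k * z_k**n with p exponential modes."""
    N = len(x)
    # linear prediction: x[n] = -sum_{m=1..p} c_m x[n-m]
    A = np.column_stack([x[p - m:N - m] for m in range(1, p + 1)])
    c = np.linalg.lstsq(A, -x[p:], rcond=None)[0]
    z = np.roots(np.concatenate(([1.0], c)))           # mode poles
    V = np.vander(z, N, increasing=True).T             # rows [z_1**n, ..., z_p**n]
    a = np.linalg.lstsq(V, x.astype(complex), rcond=None)[0]
    return z, a

# toy: one damped cosine = two complex-conjugate exponential modes
n = np.arange(100)
x = np.exp(-0.02 * n) * np.cos(2 * np.pi * 0.05 * n)
z, a = prony(x, 2)
freq = np.abs(np.angle(z)) / (2 * np.pi)   # recovered normalized frequency
```

The pole magnitudes give the damping (here e^{-0.02}) and the pole angles the frequency, which is exactly the slowly varying amplitude/frequency information an intrinsic mode function is meant to carry.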

14.
Many natural phenomena, including geologic events and geophysical data, are fundamentally nonstationary, exhibiting statistical variation that changes in space and time. Time-frequency characterization is useful for analysing such data, seismic traces in particular. We present a novel time-frequency decomposition, which aims to depict the nonstationary character of seismic data. The proposed decomposition uses a Fourier basis matched to the target signal by regularized least-squares inversion. The decomposition is invertible, which makes it suitable for analysing nonstationary data. The proposed method can provide a more flexible time-frequency representation than the classical S transform. Results of applying the method to both synthetic and field data examples demonstrate that the local time-frequency decomposition can characterize nonstationary variation of seismic data and be used in practical applications, such as seismic ground-roll noise attenuation and multicomponent data registration.
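The regularized least-squares idea can be sketched directly: give every Fourier basis function a time-varying coefficient series, and make the heavily over-parameterized fit well-posed with a smoothness (first-difference) penalty on each series. The frequencies, penalty weight, and test signal below are assumptions; the paper's actual regularization (e.g. shaping regularization) may differ.

```python
import numpy as np

def local_tf_decompose(x, freqs, eps=0.1):
    """Fit x(t) ~ sum_k [a_k(t) cos(w_k t) + b_k(t) sin(w_k t)] by regularized
    least squares, with a first-difference penalty keeping each a_k, b_k smooth."""
    n = len(x)
    t = np.arange(n)
    basis = []
    for f in freqs:
        basis.append(np.cos(2 * np.pi * f * t))
        basis.append(np.sin(2 * np.pi * f * t))
    K = len(basis)
    A = np.zeros((n, n * K))                  # each coefficient series is unknown
    for j, c in enumerate(basis):
        A[np.arange(n), j * n + np.arange(n)] = c
    D = np.diff(np.eye(n), axis=0)            # first-difference operator
    L = np.kron(np.eye(K), D)                 # one smoothness block per series
    coeffs = np.linalg.solve(A.T @ A + eps * (L.T @ L), A.T @ x)
    return coeffs.reshape(K, n), A @ coeffs

n = 128
t = np.arange(n)
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * t / n)   # slow amplitude variation
x = envelope * np.cos(2 * np.pi * 0.1 * t)
coeffs, recon = local_tf_decompose(x, freqs=[0.05, 0.1, 0.2])
```

Because the fit is a linear inverse problem, summing the fitted components reproduces the trace, which is the invertibility property emphasised in the abstract.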

15.
Spectral decomposition is a widely used technique in the analysis and interpretation of seismic data. According to the uncertainty principle, there exists a lower bound for the joint time–frequency resolution of seismic signals. The highest temporal resolution is achieved by a matching pursuit approach, which uses waveforms from a dictionary of functions (atoms). This method, in its pure mathematical form, can result in atoms whose shape and phase bear no relation to the seismic trace. The high-definition frequency decomposition algorithm presented in this paper interleaves iterations of atom matching and optimization. It divides the seismic trace into independent sections delineated by envelope troughs and simultaneously matches atoms to all peaks. Co-optimization of overlapping atoms ensures that the effects of interference between them are minimized. Finally, a second atom matching and optimization phase is performed in order to minimize the difference between the original and the reconstructed trace. The fully reconstructed traces can be used as inputs for a frequency-based reconstruction and red–green–blue colour blending. Comparison with the results of the original matching pursuit frequency decomposition illustrates that high-definition frequency decomposition based colour blends provide very high temporal resolution, even in the low-energy parts of the seismic data, enabling a precise analysis of geometrical variations of geological features.
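Plain matching pursuit, the baseline the algorithm improves on, greedily subtracts the best-correlating atom at each step. The Gaussian-windowed cosine dictionary below is an invented toy (centres every 8 samples, three fixed frequencies), not the paper's dictionary:

```python
import numpy as np

def matching_pursuit(x, dictionary, n_atoms=10):
    """Greedy matching pursuit over a dictionary of unit-norm atoms (rows)."""
    resid = x.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[0])
    for _ in range(n_atoms):
        corr = dictionary @ resid              # correlate residual with all atoms
        k = np.argmax(np.abs(corr))
        coeffs[k] += corr[k]                   # accumulate the best atom
        resid -= corr[k] * dictionary[k]
    return coeffs, resid

# toy dictionary: unit-norm Gaussian-windowed cosines ("atoms")
n = 128
t = np.arange(n)
atoms = []
for centre in range(0, n, 8):
    for f in (0.05, 0.1, 0.2):
        a = np.exp(-0.5 * ((t - centre) / 6.0) ** 2) * np.cos(2 * np.pi * f * (t - centre))
        atoms.append(a / np.linalg.norm(a))
D = np.array(atoms)

x = 2.0 * D[10] - 1.5 * D[25]                  # trace built from two atoms
coeffs, resid = matching_pursuit(x, D)
```

The co-optimization step described in the abstract addresses exactly the weakness visible here: when atoms overlap strongly, greedy subtraction distorts their shapes and phases.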

16.
Modelling and inversion of controlled-source electromagnetic (CSEM) fields requires accurate interpolation of modelled results near strong resistivity contrasts. There, simple linear interpolation may produce large errors, whereas higher-order interpolation may lead to oscillatory behaviour in the interpolated result. We propose to use an essentially non-oscillatory, piecewise polynomial interpolation scheme designed for piecewise smooth functions that contain discontinuities in the function itself or in its first or higher derivatives. The scheme uses a non-linear adaptive algorithm to select the set of interpolation points that represents the smoothest part of the function among the sets of neighbouring points. We present numerical examples to demonstrate the usefulness of the scheme. The first example shows that the essentially non-oscillatory (ENO) interpolation scheme better captures an isolated discontinuity. In the second example, we consider the case of sampling the electric field computed by a finite-volume CSEM code at a receiver location. In this example, the ENO interpolation performs quite well; however, the overall error is dominated by the discretization error. The other examples compare sampling with essentially non-oscillatory interpolation against existing interpolation schemes. In these examples, essentially non-oscillatory interpolation provides more accurate results than standard interpolation, especially near discontinuities.
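The adaptive stencil selection at the heart of ENO can be sketched in 1-D: starting from the interval containing the evaluation point, grow the stencil one point at a time toward whichever side yields the smaller divided difference, then evaluate the Newton form. This is a minimal textbook-style sketch, not the scheme of the paper:

```python
import numpy as np

def eno_interpolate(xs, ys, x, order=3):
    """ENO interpolation: grow the stencil toward the side with the smaller
    divided difference, then evaluate the Newton-form polynomial."""
    i = int(np.searchsorted(xs, x)) - 1       # left index of containing interval
    left, right = i, i + 1

    def dd(a, b):                             # divided difference over xs[a..b]
        if a == b:
            return ys[a]
        return (dd(a + 1, b) - dd(a, b - 1)) / (xs[b] - xs[a])

    for _ in range(order - 1):
        can_left = left - 1 >= 0
        can_right = right + 1 <= len(xs) - 1
        if can_left and (not can_right or
                         abs(dd(left - 1, right)) < abs(dd(left, right + 1))):
            left -= 1                         # the left extension is smoother
        else:
            right += 1
    # Newton-form evaluation on the chosen stencil
    val, prod = 0.0, 1.0
    for j in range(left, right + 1):
        val += dd(left, j) * prod
        prod *= x - xs[j]
    return val

# exact on a cubic, whichever 4-point stencil gets chosen
xs = np.linspace(-2.0, 2.0, 9)
val_cubic = eno_interpolate(xs, xs ** 3, 0.37)
# near a step, the stencil grows away from the jump, avoiding overshoot
xs2 = np.arange(10.0)
val_step = eno_interpolate(xs2, (xs2 >= 5).astype(float), 2.5)
```

Away from a discontinuity the scheme matches ordinary polynomial interpolation; near one, the divided-difference test steers the stencil onto the smooth side, which is what suppresses the oscillations mentioned above.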

17.
We review the multifocusing method for traveltime moveout approximation of multicoverage seismic data. Multifocusing constructs the moveout based on two notional spherical waves at each source and receiver point, respectively. These two waves are mutually related by a focusing quantity. We clarify the role of this focusing quantity and emphasize that it is a function of the source and receiver location, rather than a fixed parameter for a given multicoverage gather. The focusing function can be designed to make the traveltime moveout exact in certain generic cases that have practical importance in seismic processing and interpretation. The case of a plane dipping reflector (planar multifocusing) has been the subject of all publications so far. We show that the focusing function can be generalized to other surfaces, most importantly to the spherical reflector (spherical multifocusing). At the same time, the generalization implies a simplification of the multifocusing method. The exact traveltime moveout on spherical surfaces is a very versatile and robust formula, which is valid for a wide range of offsets and locations of source and receiver, even on rugged topography. In two‐dimensional surveys, it depends on the same three parameters that are commonly used in planar multifocusing and the common‐reflection surface (CRS) stack method: the radii of curvature of the normal and normal‐incidence‐point waves and the emergence angle. In three dimensions the exact traveltime moveout on spherical surfaces depends on only one additional parameter, the inclination of the plane containing the source, receiver and reflection point. Comparison of the planar and spherical multifocusing with the CRS moveout expression for a range of reflectors with increasing curvature shows that the planar multifocusing can be remarkably accurate but the CRS becomes increasingly inaccurate. 
This can be attributed to the fact that the CRS formula is based on a Taylor expansion, whereas the multifocusing formulae are double-square-root formulae. As a result, planar and spherical multifocusing are better suited to model the moveout of diffracted waves.
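To illustrate why double-square-root forms handle diffractions naturally, consider the textbook traveltime of a point diffractor at lateral position $x_0$ and depth $z$ in a constant-velocity medium (a generic example, not the multifocusing formula itself):

```latex
t(x_s, x_r) = \frac{1}{v}\left(\sqrt{z^2 + (x_s - x_0)^2} + \sqrt{z^2 + (x_r - x_0)^2}\right)
```

Each square root is exact for its leg of the raypath at any offset, whereas a Taylor expansion of $t$ in offset, as in the CRS formula, degrades as the effective reflector curvature grows.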

18.
Three-dimensional seismic survey design should provide an acquisition geometry that enables imaging and amplitude-versus-offset applications of target reflectors with sufficient data quality under given economical and operational constraints. However, in land or shallow-water environments, surface waves are often dominant in the seismic data. The effectiveness of surface-wave separation or attenuation significantly affects the quality of the final result. Therefore, the need for surface-wave attenuation imposes additional constraints on the acquisition geometry. Recently, we have proposed a method for surface-wave attenuation that can better deal with aliased seismic data than classic methods such as slowness/velocity-based filtering. Here, we investigate how surface-wave attenuation affects the selection of survey parameters and the resulting data quality. To quantify the latter, we introduce a measure that represents the estimated signal-to-noise ratio between the desired subsurface signal and the surface waves that are deemed to be noise. In a case study, we applied surface-wave attenuation and signal-to-noise ratio estimation to several data sets with different survey parameters. The spatial sampling intervals of the basic subset are the survey parameters that affect the performance of surface-wave attenuation methods the most. Finer spatial sampling will reduce aliasing and make surface-wave attenuation easier, resulting in better data quality until no further improvement is obtained. We observed this behaviour as a main trend that levels off at increasingly denser sampling. With our method, this trend curve lies at a considerably higher signal-to-noise ratio than with a classic filtering method. This means that we can obtain much better data quality for a given survey effort, or the same data quality as with a conventional method at a lower cost.

19.
Reverse-time migration is a two-way time-domain finite-frequency technique that accurately handles the propagation of complex scattered waves and produces a band-limited representation of the subsurface structure that is conventionally assumed to be linear in the contrasts in model parameters. Because of this underlying linear single-scattering assumption, most implementations of this method do not satisfy the energy conservation principle and do not optimally use illumination and model sensitivity of multiply scattered waves. Migrating multiply scattered waves requires preserving the non-linear relation between the image and perturbation of model parameters. I modify the extrapolation of source and receiver wavefields to more accurately handle multiply scattered waves. I extend the concept of the imaging condition in order to map into the subsurface structurally coherent seismic events that correspond to the interaction of both singly and multiply scattered waves. This results in an imaging process referred to here as non-linear reverse-time migration. It includes a strategy that analyses separated contributions of singly and multiply scattered waves to a final non-linear image. The goal is to provide a tool suitable for seismic interpretation and potentially migration velocity analysis that benefits from increased illumination and sensitivity from multiply scattered seismic waves. It is noteworthy that this method can migrate internal multiples, a clear advantage for imaging challenging complex subsurface features, e.g., in salt and basalt environments. The results of synthetic seismic imaging experiments, including a subsalt imaging example, illustrate the technique.

20.
Potential field data such as geoid and gravity anomalies are globally available and offer valuable information about the Earth's lithosphere, especially in areas where seismic data coverage is sparse. For instance, non-linear inversion of Bouguer anomalies can be used to estimate crustal structure, including variations of the crustal density and of the depth of the crust–mantle boundary, that is, the Moho. However, due to the non-linearity of this inverse problem, classical inversion methods fail whenever there is no reliable initial model. Swarm intelligence algorithms, such as particle swarm optimisation, are a promising alternative to classical inversion methods because the quality of their solutions does not depend on the initial model; they do not use the derivatives of the objective function, hence allowing the use of the L1 norm; and, finally, they are global search methods, meaning the problem may be non-convex. In this paper, quantum-behaved particle swarm optimisation, a probabilistic swarm-intelligence-like algorithm, is used to solve the non-linear gravity inverse problem. The method is first successfully tested on a realistic synthetic crustal model with a linear vertical density gradient and lateral density and depth variations at the base of the crust, in the presence of white Gaussian noise. Then, it is applied to EIGEN-6C4, a combined global gravity model, to estimate the depth to the base of the crust and the mean density contrast between the crust and the upper-mantle lithosphere in the Eurasia–Arabia continental collision zone along a 400 km profile crossing the Zagros Mountains (Iran). The results agree well with previously published works, including both seismic and potential field studies.
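A first-order sanity check on the forward problem (not the non-linear inversion itself) is the Bouguer slab formula, Δg = 2πGΔρh, which gives the gravity effect of a horizontal slab of thickness h and density contrast Δρ. The example values (2 km Moho relief, 400 kg/m³ crust–mantle contrast) are illustrative assumptions:

```python
import numpy as np

G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2

def bouguer_slab(delta_rho, thickness):
    """Gravity effect of an infinite horizontal slab: dg = 2*pi*G*drho*h."""
    return 2.0 * np.pi * G * delta_rho * thickness

# e.g. 2 km of Moho relief with a 400 kg/m^3 crust-mantle density contrast
dg = bouguer_slab(400.0, 2000.0)          # in m/s^2
dg_mgal = dg * 1e5                        # 1 mGal = 1e-5 m/s^2
```

The tens-of-milligal magnitude this yields is why Moho topography is resolvable in combined gravity models such as EIGEN-6C4, while the trade-off between Δρ and h in the product is precisely the non-uniqueness the inversion must navigate.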


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)