Similar Documents
1.
Surface waves in seismic data are often dominant in a land or shallow‐water environment. Separating them from primaries is of great importance either for removing them as noise for reservoir imaging and characterization or for extracting them as signal for near‐surface characterization. However, their complex properties make the surface‐wave separation significantly challenging in seismic processing. To address the challenges, we propose a method of three‐dimensional surface‐wave estimation and separation using an iterative closed‐loop approach. The closed loop contains a relatively simple forward model of surface waves and adaptive subtraction of the forward‐modelled surface waves from the observed surface waves, making it possible to evaluate the residual between them. In this approach, the surface‐wave model is parameterized by the frequency‐dependent slowness and source properties for each surface‐wave mode. The optimal parameters are estimated in such a way that the residual is minimized and, consequently, this approach solves the inverse problem. Through real data examples, we demonstrate that the proposed method successfully estimates the surface waves and separates them out from the seismic data. In addition, it is demonstrated that our method can also be applied to undersampled, irregularly sampled, and blended seismic data.
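The closed-loop idea (forward-model the surface waves from a few parameters, subtract them from the data, and update the parameters so the residual shrinks) can be illustrated with a minimal single-mode sketch. The wavelet, geometry, frequency-independent slowness, and grid-search update below are our own simplifications for illustration, not the authors' parameterization.

```python
import numpy as np

# Toy closed-loop estimation of one surface-wave mode: the "observed" data
# contain a linear event with slowness s_true; we forward-model candidate
# slownesses and keep the one whose residual energy is smallest.
dt, nt = 0.004, 500
offsets = np.arange(10, 210, 10.0)           # offsets in metres (illustrative)
t = np.arange(nt) * dt
s_true = 0.002                                # s/m, i.e. 500 m/s

def wavelet(tau):
    # simple Ricker-like pulse centred at time tau
    a = (t - tau) / 0.03
    return (1 - 2 * a**2) * np.exp(-a**2)

def forward(s):
    # forward model: one event delayed by offset * slowness on each trace
    return np.stack([wavelet(x * s) for x in offsets])

observed = forward(s_true)

# closed loop: scan slowness, evaluate the residual between modelled and
# observed surface waves, keep the minimizer (a stand-in for the inversion)
candidates = np.arange(0.001, 0.003, 0.0001)
residuals = [np.sum((observed - forward(s))**2) for s in candidates]
s_est = candidates[int(np.argmin(residuals))]

separated = observed - forward(s_est)         # plain (non-adaptive) subtraction
```

In the paper the subtraction is adaptive and the slowness is frequency dependent per mode; here a single scalar and a direct subtraction keep the loop visible.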

2.
The common-reflection surface method describes the shape of seismic events through traveltime attributes, typically slopes (dips) and curvatures. The most systematic approach to estimating the common-reflection surface traveltime attributes is to employ a sequence of single-variable search procedures, inheriting the advantage of a low computational cost but also the disadvantage of a poor estimation quality. A search strategy where the common-reflection surface attributes are globally estimated in a single stage may yield more accurate estimates. In this paper, we propose to use the bio-inspired global optimization algorithm differential evolution to estimate all the two-dimensional common-offset common-reflection surface attributes simultaneously. The differential evolution algorithm can provide accurate estimates of the common-reflection surface traveltime attributes, with the benefit of having only a small set of input parameters to configure. We apply the differential evolution algorithm to estimate the two-dimensional common-reflection surface attributes in the synthetic Marmousi data set, contaminated by noise, and in land field data with a small fold. By analysing the stacked and coherence sections, we observe that the differential-evolution-based common-offset common-reflection surface approach provides a significant signal-to-noise ratio enhancement.
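To give a flavour of such a single-stage global search, the sketch below uses SciPy's differential evolution to estimate two moveout parameters simultaneously from traveltime picks. The two-parameter hyperbola stands in for the full common-reflection surface traveltime, and the bounds, seed, and tolerance are illustrative choices.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Traveltime picks following a hyperbolic moveout t(x)^2 = t0^2 + (x/v)^2
offsets = np.linspace(0, 2000, 41)            # metres
t0_true, v_true = 1.0, 2000.0                 # s, m/s
picks = np.sqrt(t0_true**2 + (offsets / v_true)**2)

def misfit(p):
    # squared traveltime residual for candidate attributes (t0, v)
    t0, v = p
    pred = np.sqrt(t0**2 + (offsets / v)**2)
    return np.sum((picks - pred)**2)

# global, simultaneous estimation of both attributes
result = differential_evolution(misfit,
                                bounds=[(0.5, 1.5), (1000.0, 3000.0)],
                                seed=0, tol=1e-10)
t0_est, v_est = result.x
```

A semblance-style coherence along the candidate curve would replace the synthetic-pick misfit in a real implementation, but the search machinery is the same.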

3.
Linear prediction filters are an effective tool for reducing random noise from seismic records. Unfortunately, the ability of prediction filters to enhance seismic records deteriorates when the data are contaminated by erratic noise. Erratic noise in this article designates non‐Gaussian noise that consists of large isolated events with known or unknown distribution. We propose a robust fx projection filtering scheme for simultaneous attenuation of erratic noise and Gaussian random noise. Instead of adopting the ℓ2‐norm, as commonly used in the conventional design of fx filters, we utilize a hybrid ℓ1/ℓ2‐norm to penalize the energy of the additive noise. The prediction error filter and the additive noise sequence are estimated in an alternating fashion. First, the additive noise sequence is fixed, and the prediction error filter is estimated via the least‐squares solution of a system of linear equations. Then, the prediction error filter is fixed, and the additive noise sequence is estimated through a cost function containing the hybrid ℓ1/ℓ2‐norm, which prevents erratic noise from influencing the final solution. In other words, we propose and design a robust M‐estimate of a special autoregressive moving‐average model in the fx domain. Synthetic and field data examples are used to evaluate the performance of the proposed algorithm.

4.
5.
Seismic attenuation compensation is a spectrum-broadening technique for enhancing the resolution of non-stationary seismic data. Single-trace attenuation compensation algorithms ignore the prior information that seismic reflection events are generally continuous along seismic traces; thus, the compensated result may have poor spatial continuity and a low signal-to-noise ratio. To address this problem, we extend the single-trace approaches to multi-trace algorithms and, furthermore, propose a multi-trace attenuation compensation with a spatial constraint. Frequency-space prediction filters are the key to constructing this spatial regularization. We test the effectiveness of the proposed spatially constrained attenuation compensation algorithm on both synthetic and field data examples. Synthetic data tests indicate that the proposed multi-trace attenuation compensation approach can provide a better compensated result than the single-trace algorithm in terms of suppressing noise amplification and guaranteeing structural continuity. Field data applications further confirm its stability and practicality for improving seismic resolution.

6.
We present a new approach to enhancing weak prestack reflection signals without sacrificing higher frequencies. As a first step, we employ known multidimensional local stacking to obtain an approximate ‘model of the signal’. Guided by phase spectra from this model, we can detect very weak signals and make them visible and coherent by ‘repairing’ the corrupted phase of the original data. Both presented approaches – phase substitution and phase sign corrections – show good performance on complex synthetic and field data suffering from severe near-surface scattering, where conventional processing methods are rendered ineffective. The methods are mathematically formulated as a special case of time-frequency masking (common in speech processing) combined with the signal model from local stacking. This powerful combination opens the way to a completely new family of approaches for multi-channel seismic processing that can address processing of land data acquired with nodes and single sensors in a desert environment.
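A one-trace sketch of phase substitution: keep the amplitude spectrum of the corrupted trace but replace its phase with the phase of the signal model. Here an idealized reference plays the role of the local-stack model, and the substitution is done on the whole-trace spectrum rather than per time-frequency cell as in the paper; all signal parameters are invented for illustration.

```python
import numpy as np

# Phase substitution: amplitudes from the noisy data, phases from the model.
rng = np.random.default_rng(1)
t = np.arange(512) * 0.002
model = np.exp(-((t - 0.5) / 0.02)**2)           # reference pulse ("stack")
data = model + 0.5 * rng.normal(size=t.size)     # corrupted single trace

D = np.fft.rfft(data)
M = np.fft.rfft(model)
# repaired trace: |D(f)| with the model phase spectrum
repaired = np.fft.irfft(np.abs(D) * np.exp(1j * np.angle(M)), n=t.size)

def corr(a, b):
    # similarity to the reference, used as a simple quality measure
    return np.corrcoef(a, b)[0, 1]
```

Borrowing coherent phase concentrates the (noisy) spectral energy back at the event location, which is why the repaired trace correlates better with the model than the raw data do.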

7.
A study of multiple suppression by inverse data processing in the plane-wave domain
In seismic exploration, and especially in marine seismic exploration, multiples have long been one of the main factors affecting seismic processing and interpretation. Based on the feedback model in the x-t domain and the theory of multiple attenuation, this paper derives in detail the method of inverse data processing in the x-t domain. Then, based on the generation mechanism of multiples in the plane-wave domain and drawing on the x-t-domain inverse data processing method, the principle of inverse data processing in the plane-wave domain is derived. A finite-difference synthetic data set is used for testing. The results show that the method effectively attenuates surface-related multiples and improves the accuracy of seismic data analysis, removing multiples more efficiently while preserving primary energy.

8.
The tau‐p inversion algorithm is widely employed to generate starting models with many computer programs that implement refraction tomography. However, this algorithm can frequently fail to detect even major lateral variations in seismic velocities, such as a 50 m wide shear zone, which is the subject of this study. By contrast, the shear zone is successfully defined with the inversion algorithms of the generalized reciprocal method. The shear zone is confirmed with a 2D analysis of the head wave amplitudes, a spectral analysis of the refraction convolution section and with numerous closely spaced orthogonal seismic profiles recorded for a later 3D refraction investigation. Further improvements in resolution, which facilitate the recognition of additional zones with moderate reductions in seismic velocity, are achieved with a novel application of the Hilbert transform to the refractor velocity analysis algorithm. However, the improved resolution also requires the use of a lower average vertical seismic velocity, which accommodates a velocity reversal in the weathering. The lower seismic velocity is derived with the generalized reciprocal method, whereas most refraction tomography programs assume vertical velocity gradients as the default. Although all of the tomograms are consistent with the traveltime data, the resolution of each tomogram is comparable only with that of the starting model. Therefore, it is essential to employ inversion algorithms that can generate detailed starting models, where detailed lateral resolution is the objective. Non‐uniqueness can often be readily resolved with head wave amplitudes, attribute processing of the refraction convolution section and additional seismic traverses, prior to the acquisition of any borehole data. 
It is concluded that, unless specific measures are taken to address non‐uniqueness, the production of a single refraction tomogram that fits the traveltime data to sufficient accuracy does not necessarily demonstrate that the result is either correct or even the most probable.

9.
Seismic data acquired along rugged topographic surfaces present well‐known problems in seismic imaging. In conventional seismic data processing, datum statics are approximated under the surface-consistency assumption, which states that all seismic rays travel vertically in the top layer; hence, the datum static for each single trace is constant. Where this assumption does not apply, non‐constant statics are required. The common reflection surface (CRS) stack for rugged surface topography provides the capability to deal with this non‐vertical static issue. It handles the surface elevation as a coordinate component and treats the elevation variation in the sense of directional datuming. In this paper I apply the CRS stack method to a synthetic data set that simulates acquisition along an irregular surface topography. After the CRS stack, by means of the wavefield attributes, a simple algorithm redatums the CRS stack section to an arbitrarily chosen planar surface. The redatumed section simulates a stack section whose acquisition surface is the chosen planar surface.

10.
Full‐waveform inversion is re‐emerging as a powerful data‐fitting procedure for quantitative seismic imaging of the subsurface from wide‐azimuth seismic data. This method is suitable to build high‐resolution velocity models provided that the targeted area is sampled by both diving waves and reflected waves. However, the conventional formulation of full‐waveform inversion prevents the reconstruction of the small wavenumber components of the velocity model when the subsurface is sampled by reflected waves only. This typically occurs as the depth becomes significant with respect to the length of the receiver array. This study first aims to highlight the limits of the conventional form of full‐waveform inversion when applied to seismic reflection data, through a simple canonical example of seismic imaging and to propose a new inversion workflow that overcomes these limitations. The governing idea is to decompose the subsurface model as a background part, which we seek to update and a singular part that corresponds to some prior knowledge of the reflectivity. Forcing this scale uncoupling in the full‐waveform inversion formalism brings out the transmitted wavepaths that connect the sources and receivers to the reflectors in the sensitivity kernel of the full‐waveform inversion, which is otherwise dominated by the migration impulse responses formed by the correlation of the downgoing direct wavefields coming from the shot and receiver positions. This transmission regime makes full‐waveform inversion amenable to the update of the long‐to‐intermediate wavelengths of the background model from the wide scattering‐angle information. However, we show that this prior knowledge of the reflectivity does not prevent the use of a suitable misfit measurement based on cross‐correlation, to avoid cycle‐skipping issues as well as a suitable inversion domain as the pseudo‐depth domain that allows us to preserve the invariant property of the zero‐offset time. 
This latter feature is useful to avoid updating the reflectivity information at each non‐linear iteration of the full‐waveform inversion, hence considerably reducing the computational cost of the entire workflow. Prior information of the reflectivity in the full‐waveform inversion formalism, a robust misfit function that prevents cycle‐skipping issues and a suitable inversion domain that preserves the seismic invariant are the three key ingredients that should ensure well‐posedness and computational efficiency of full‐waveform inversion algorithms for seismic reflection data.

11.
Modern regional airborne magnetic datasets, when acquired in populated areas, are inevitably degraded by cultural interference. In the United Kingdom context, the spatial densities of interfering structures and their complex spatial form severely limit our ability to successfully process and interpret the data. Deculturing procedures previously adopted have used semi‐automatic methods that incorporate additional geographical databases that guide manual assessment and refinement of the acquired database. Here we present an improved component of that procedure that guides the detection of localized responses associated with non‐geological perturbations. The procedure derives from a well‐established technique for the detection of kimberlite pipes and is a form of moving‐window correlation using grid‐based data. The procedure lends itself to automatic removal of perturbed data, although manual intervention to accept/reject outputs of the procedure is wise. The technique is evaluated using recently acquired regional United Kingdom survey data, which benefits from having an offshore component and areas of largely non‐magnetic granitic response. The methodology is effective at identifying (and hence removing) the isolated perturbations that form a persistent spatial noise background to the entire dataset. Probably in common with all such methods, the technique fails to isolate and remove amalgamated responses due to complex superimposed effects. The procedure forms an improved component of partial automation in the context of a wider deculturing procedure applied to United Kingdom aeromagnetic data.
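The moving-window correlation is essentially a normalized template match on the grid, as in kimberlite-pipe screening: slide a compact anomaly template over the data and flag windows where the zero-mean correlation peaks. The grid size, template shape, anomaly position, and threshold below are invented for illustration.

```python
import numpy as np

# Grid-based moving-window correlation: detect a compact "cultural" response
# buried in background noise by normalized cross-correlation with a template.
rng = np.random.default_rng(2)
grid = 0.1 * rng.normal(size=(60, 60))       # background field

y, x = np.mgrid[-3:4, -3:4]
template = np.exp(-(x**2 + y**2) / 2.0)      # 7x7 compact anomaly shape
grid[20:27, 30:37] += 3 * template           # bury one anomaly in the noise

h, w = template.shape
tz = template - template.mean()              # zero-mean template
score = np.zeros_like(grid)
for i in range(grid.shape[0] - h + 1):
    for j in range(grid.shape[1] - w + 1):
        win = grid[i:i + h, j:j + w]
        wz = win - win.mean()
        denom = np.sqrt((wz**2).sum() * (tz**2).sum())
        score[i + h // 2, j + w // 2] = (wz * tz).sum() / denom

peak = np.unravel_index(np.argmax(score), score.shape)
```

Windows exceeding a correlation threshold would then be passed to the accept/reject stage rather than removed blindly, in the spirit of the semi-automatic workflow.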

12.
Coherent noise in land seismic data primarily consists of source‐generated surface‐wave modes. The component traditionally considered most relevant is the so‐called ground roll, consisting of surface‐wave modes propagating directly from sources to receivers. In many geological situations, near‐surface heterogeneities and discontinuities, as well as topography irregularities, diffract the surface waves and generate secondary events, which can heavily contaminate records. The diffracted and converted surface waves are often called scattered noise and can be a severe problem, particularly in areas with shallow or outcropping hard lithological formations. Conventional noise attenuation techniques are not effective with scattering: they can usually address the tails but not the apices of the scattered events. Large source and receiver arrays can attenuate scattering, but only at the price of compromised signal fidelity and resolution. We present a model‐based technique for scattering attenuation, based on the estimation of surface‐wave properties and on the prediction of surface waves along complex paths involving diffractions. The properties are estimated first, to produce surface‐consistent volumes of the propagation properties. Then, for each gather to filter, we integrate the contributions of all possible diffractors, building a scattering model. The estimated scattered wavefield is then subtracted from the data. The method can work in different domains and copes with aliased surface waves. The benefits of the method are demonstrated on synthetic and real data.

13.
Compressed sensing has recently proved itself a successful tool for addressing the challenges of acquiring and processing seismic data sets. Compressed sensing shows that the information contained in sparse signals can be recovered accurately from a small number of linear measurements using a sparsity‐promoting regularization. This paper investigates two aspects of compressed sensing in seismic exploration: (i) using a general non‐convex regularizer instead of the conventional one‐norm minimization for sparsity promotion and (ii) using a frequency mask to additionally subsample the acquired traces in the frequency‐space (f‐x) domain. The proposed non‐convex regularizer has better sparse recovery performance than one‐norm minimization, and the additional frequency mask allows us to incorporate a priori information about the events contained in the wavefields into the reconstruction. For example, (i) seismic data are band‐limited; therefore one can use only a partial set of frequency coefficients in the range of the reflection band, where the signal‐to‐noise ratio is high and spatial aliasing is low, to reconstruct the original wavefield, and (ii) the low‐frequency character of the coherent ground roll allows its direct elimination during reconstruction by disregarding the corresponding frequency coefficients (usually below 10 Hz) via a frequency mask. The results of this paper show that several challenges of reconstruction and denoising in seismic exploration can be addressed under a unified formulation. It is illustrated numerically that the compressed sensing performance for seismic data interpolation improves significantly when an additional coherent subsampling is performed in the f‐x domain compared with the t‐x domain case. Numerical experiments on both simulated and real field data illustrate the effectiveness of the presented method.
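The reconstruction-with-a-frequency-mask idea can be sketched as a 1D iterative soft-thresholding loop: enforce consistency with the observed samples, then promote sparsity in the Fourier domain while a mask keeps only the a priori signal band. The signal, sampling pattern, mask width, and threshold are toy choices, and we use the plain one-norm soft threshold rather than the paper's non-convex regularizer.

```python
import numpy as np

# Sparsity-promoting reconstruction of randomly subsampled data (ISTA-style)
# with a band-limiting frequency mask encoding prior knowledge.
rng = np.random.default_rng(3)
N = 256
t = np.arange(N)
signal = (np.sin(2 * np.pi * 10 * t / N)
          + 0.5 * np.sin(2 * np.pi * 24 * t / N))   # sparse in Fourier

keep = rng.random(N) < 0.5                 # ~50% randomly observed samples
observed = signal * keep

idx = np.fft.fftfreq(N) * N
fmask = np.zeros(N)
fmask[np.abs(idx) <= 40] = 1.0             # a priori: signal band-limited

x = np.zeros(N)
lam = 0.02
for _ in range(200):
    x = x + (observed - x * keep)          # data-consistency step
    X = np.fft.fft(x) * fmask              # frequency mask
    mag = np.abs(X)
    X = X * np.maximum(mag - lam * N / 2, 0) / np.maximum(mag, 1e-12)
    x = np.real(np.fft.ifft(X))            # soft threshold, back to time

err = np.linalg.norm(x - signal) / np.linalg.norm(signal)
```

Disregarding coefficients outside the mask is exactly how the abstract's "below 10 Hz ground-roll" elimination would enter: those bins are simply zeroed in `fmask`.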

14.
In marine acquisition, reflections of sound energy from the water–air interface result in ghosts in the seismic data, on both the source side and the receiver side. Ghosts limit the bandwidth of the useful signal and blur the final image. The process of separating the ghost and primary signals, called deghosting, can fill the ghost notch, broaden the frequency band, and help achieve high‐resolution images. A low signal‐to‐noise ratio near the notch frequencies and 3D effects are two challenges that deghosting has to face. In this paper, starting from an introduction to deghosting, we present and compare two strategies to solve the latter. The first is an adaptive mechanism that adjusts the deghosting operator to compensate for 3D effects or errors in source/receiver depth measurement. This method does not explicitly include the crossline slowness component and is not affected by the sparse sampling in that direction. The second is an inversion‐type approach that does include the crossline slowness component in the algorithm and handles the 3D effects explicitly. Both synthetic and field data examples in wide‐azimuth acquisition settings are shown to compare the two strategies. Both methods provide satisfactory results.
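A 1D sketch of the ghost model and of notch-stable deghosting by damped spectral division (crossline/3D effects are ignored, and the depth, reflection coefficient, and damping are illustrative values, not those of the paper):

```python
import numpy as np

# 1D receiver-ghost model: in the frequency domain the ghost operator is
# g(w) = 1 - r * exp(-i*w*tau) with tau = 2*depth/c, producing spectral
# notches. Deghosting inverts g with a damping term to stabilise the notch.
dt, N = 0.002, 1024
depth, c, r = 7.5, 1500.0, 0.95
tau = 2 * depth / c                           # ghost delay: 0.01 s

t = np.arange(N) * dt
a = (np.pi * 25.0 * (t - 0.4))**2
primary = (1 - 2 * a) * np.exp(-a)            # 25 Hz Ricker, up-going field

w = 2 * np.pi * np.fft.rfftfreq(N, dt)
g = 1 - r * np.exp(-1j * w * tau)             # notches at 0 and 100 Hz

ghosted = np.fft.irfft(np.fft.rfft(primary) * g, n=N)

eps = 0.01                                    # damping fills the notch safely
G = np.fft.rfft(ghosted)
deghosted = np.fft.irfft(G * np.conj(g) / (np.abs(g)**2 + eps), n=N)

err = np.linalg.norm(deghosted - primary) / np.linalg.norm(primary)
```

The adaptive strategy in the abstract would, in effect, tune `depth` (hence `tau`) per location instead of assuming it known; the inversion-type strategy extends `g` with the crossline slowness dependence.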

15.
Despite being less general than 3D surface‐related multiple elimination (3D‐SRME), multiple prediction based on wavefield extrapolation can still be of interest because it is less CPU‐ and I/O‐demanding than 3D‐SRME and, moreover, does not require any prior data regularization. Here we propose a fast implementation of water‐bottom multiple prediction that uses the Kirchhoff formulation of wavefield extrapolation. With wavefield extrapolation, multiple prediction is usually obtained through the cascade of two extrapolation steps. By applying Fermat's principle (i.e., minimum reflection traveltime), we show that the cascade of two operators can be replaced by a single approximated extrapolation step. The approximation holds as long as the water bottom is not too complex. Indeed, the proposed approach has proved to work well on synthetic and field data when the water bottom is such that wavefront triplications are negligible, as happens in many practical situations.

16.
The reassignment method remaps the energy of each point in a time‐frequency spectrum to a new coordinate that is closer to the actual time‐frequency location. Two applications of the reassignment method are developed in this paper. We first describe time‐frequency reassignment as a tool for spectral decomposition. The reassignment method helps to generate clearer frequency slices of layers and therefore facilitates the interpretation of thin layers. The second application is seismic data de‐noising. By thresholding in the reassigned domain rather than in the Gabor domain, random noise is more easily attenuated, since seismic events are represented more compactly, with a relatively larger energy than the noise. A reconstruction process that permits the recovery of seismic data from a reassigned time‐frequency spectrum is developed. Two variants of the reassignment method are used in this paper: a trace‐by‐trace time reassignment, used mainly for seismic spectral decomposition, and a spatial reassignment, used mainly for seismic de‐noising. Synthetic examples and two field data examples are used to test the proposed method. For comparison, the Gabor transform method, an inversion‐based method and a common deconvolution method are also used in the examples.

17.
We present a novel method to enhance seismic data for manual and automatic interpretation. We use a genetic algorithm to optimize a kernel that, when convolved with the seismic image, appears to enhance the internal characteristics of salt bodies and the sub‐salt stratigraphy. The performance of the genetic algorithm was validated by the use of test images prior to its application on the seismic data. We present the evolution of the resulting kernel and its convolved image. This image was analysed by a seismic interpreter, highlighting possible advantages over the original one. The effects of the kernel were also subject to an automatic interpretation technique based on principal component analysis. Statistical comparison of these results with those from the original image, by means of the Mann‐Whitney U‐test, proved the convolved image to be more appropriate for automatic interpretation.
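The kernel-evolution step can be mimicked on a test image, as in the paper's validation stage: a small genetic algorithm evolves a 3x3 kernel whose convolution output matches a target produced by a known kernel. The population size, mutation rate, generation count, and Laplacian target kernel are our illustrative choices, not the authors' configuration.

```python
import numpy as np

# Genetic algorithm evolving a 3x3 convolution kernel on a test image.
rng = np.random.default_rng(4)
img = rng.random((32, 32))

def conv2(im, k):
    # valid-mode 3x3 correlation, sufficient as a fitness measure
    out = np.zeros((im.shape[0] - 2, im.shape[1] - 2))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * im[i:i + out.shape[0], j:j + out.shape[1]]
    return out

target_kernel = np.array([[0., -1., 0.], [-1., 4., -1.], [0., -1., 0.]])
target = conv2(img, target_kernel)        # known-answer validation image

def fitness(k):
    return -np.sum((conv2(img, k.reshape(3, 3)) - target)**2)

pop = rng.normal(0.0, 1.0, (40, 9))
for _ in range(300):
    order = np.argsort([fitness(ind) for ind in pop])[::-1]
    parents = pop[order[:10]]                          # elitist selection
    children = []
    for _ in range(30):
        pa, pb = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, 9)                       # one-point crossover
        child = np.concatenate([pa[:cut], pb[cut:]])
        child += rng.normal(0.0, 0.1, 9) * (rng.random(9) < 0.3)  # mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[int(np.argmax([fitness(ind) for ind in pop]))].reshape(3, 3)
rel_err = np.sum((conv2(img, best) - target)**2) / np.sum(target**2)
```

On real seismic data there is no known target, so the fitness would instead score interpreter-relevant image properties, which is the harder part of the published method.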

18.
Selecting a seismic time‐to‐depth conversion method can be a subjective choice that is made by geophysicists, and is particularly difficult if the accuracy of these methods is unknown. This study presents an automated statistical approach for assessing seismic time‐to‐depth conversion accuracy by integrating the cross‐validation method with four commonly used seismic time‐to‐depth conversion methods. To showcase this automated approach, we use a regional dataset from the Cooper and Eromanga basins, Australia, consisting of 13 three‐dimensional (3D) seismic surveys, 73 two‐way‐time surface grids and 729 wells. Approximately 10,000 error values (predicted depth vs. measured well depth) and associated variables were calculated. The average velocity method was the most accurate overall (7.6 m mean error); however, the most accurate method and the expected error changed by several metres depending on the combination and value of the most significant variables. Cluster analysis tested the significance of the associated variables to find that the seismic survey location (potentially related to local geology (i.e. sedimentology, structural geology, cementation, pore pressure, etc.), processing workflow, or seismic vintage), formation (potentially associated with reduced signal‐to‐noise with increasing depth or the changes in lithology), distance to the nearest well control, and the spatial location of the predicted well relative to the existing well data envelope had the largest impact on accuracy. Importantly, the effect of these significant variables on accuracy were found to be more important than choosing between the four methods, highlighting the importance of better understanding seismic time‐to‐depth conversions, which can be achieved by applying this automated cross‐validation method.
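The cross-validation loop at the heart of this approach can be sketched for the average-velocity method: hold out each well, calibrate the average velocity on the remaining wells, predict the held-out depth from its two-way time, and record the error. The synthetic well list, velocity, and noise level below are our own stand-ins for the real dataset.

```python
import numpy as np

# Leave-one-out cross-validation of the average-velocity time-to-depth method.
rng = np.random.default_rng(5)
n_wells = 30
twt = rng.uniform(1.0, 2.5, n_wells)                 # two-way time, seconds
v_avg_true = 2400.0                                  # m/s (illustrative)
depth = v_avg_true * twt / 2 + rng.normal(0, 10, n_wells)  # "measured" depths

errors = []
for i in range(n_wells):
    mask = np.arange(n_wells) != i
    v_avg = np.mean(2 * depth[mask] / twt[mask])     # calibrate on other wells
    pred = v_avg * twt[i] / 2                        # predict held-out well
    errors.append(pred - depth[i])
errors = np.asarray(errors)
mean_abs_error = np.mean(np.abs(errors))
```

Running the same loop for each of the four conversion methods, and binning the errors by survey, formation, and distance to well control, gives exactly the error population the abstract analyses.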

19.
In this paper, we discuss high‐resolution coherence functions for the estimation of the stacking parameters in seismic signal processing. We focus on the Multiple Signal Classification which uses the eigendecomposition of the seismic data to measure the coherence along stacking curves. This algorithm can outperform the traditional semblance in cases of close or interfering reflections, generating a sharper velocity spectrum. Our main contribution is to propose complexity‐reducing strategies for its implementation to make it a feasible alternative to semblance. First, we show how to compute the multiple signal classification spectrum based on the eigendecomposition of the temporal correlation matrix of the seismic data. This matrix has a lower order than the spatial correlation used by other methods, so computing its eigendecomposition is simpler. Then we show how to compute its coherence measure in terms of the signal subspace of seismic data. This further reduces the computational cost as we now have to compute fewer eigenvectors than those required by the noise subspace currently used in the literature. Furthermore, we show how these eigenvectors can be computed with the low‐complexity power method. As a result of these simplifications, we show that the complexity of computing the multiple signal classification velocity spectrum is only about three times greater than semblance. Also, we propose a new normalization function to deal with the high dynamic range of the velocity spectrum. Numerical examples with synthetic and real seismic data indicate that the proposed approach provides stacking parameters with better resolution than conventional semblance, at an affordable computational cost.
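The power-method ingredient can be shown in isolation: build a temporal correlation matrix from sliding windows of a trace, iterate `R @ v` to obtain the dominant (signal-subspace) eigenvector, and read off its eigenvalue from the Rayleigh quotient. The window length, noise level, and iteration count are illustrative.

```python
import numpy as np

# Power method for the dominant eigenvector of a temporal correlation matrix,
# the low-cost building block of the signal-subspace coherence measure.
rng = np.random.default_rng(6)
M = 8                                      # temporal window length
s = np.sin(0.7 * np.arange(100))           # toy trace

# sample temporal correlation matrix from noisy sliding windows
X = np.stack([s[i:i + M] for i in range(90)]) + 0.1 * rng.normal(size=(90, M))
R = X.T @ X / 90                           # M x M, small by construction

v = rng.normal(size=M)
for _ in range(200):
    v = R @ v                              # power iteration
    v = v / np.linalg.norm(v)
eig_est = v @ R @ v                        # Rayleigh quotient (top eigenvalue)

# reference decomposition to check against
w, U = np.linalg.eigh(R)
```

Because the temporal matrix is only M x M, a few such power iterations are far cheaper than a full eigendecomposition of a large spatial correlation matrix, which is the complexity argument the abstract makes.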

20.
Least-squares reverse time migration has the potential to yield high-quality images of the Earth. Compared with acoustic methods, elastic least-squares reverse time migration can effectively address mode conversion and provide velocity/impedance and density perturbation models. However, elastic least-squares reverse time migration is an ill-posed problem that lacks uniqueness, and its solution is not stable. We develop two new elastic least-squares reverse time migration methods based on weighted L2-norm multiplicative and modified total-variation regularizations. In the proposed methods, the original minimization problem is divided into two subproblems, and the images and auxiliary variables are updated alternately. The method with modified total-variation regularization solves the two subproblems, a Tikhonov regularization problem and an L2-total-variation regularization problem, via an efficient inversion workflow and the split-Bregman iterative method, respectively. The method with multiplicative regularization updates the images and auxiliary variables by the efficient inversion workflow and nonlinear conjugate gradient methods in a nested fashion. We validate the proposed methods using synthetic and field seismic data. Numerical results demonstrate that the proposed methods with regularization improve the resolution and fidelity of the migration profiles and exhibit superior anti-noise ability compared with the conventional method. Moreover, the modified-total-variation-based method has marginally higher accuracy than the multiplicative-regularization-based method for noisy data. The computational cost of the two proposed methods is approximately the same as that of conventional least-squares reverse time migration because no additional forward computation is required in the inversion of the auxiliary variables.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号