Similar Documents (20 results)
1.
Despite being less general than 3D surface‐related multiple elimination (3D‐SRME), multiple prediction based on wavefield extrapolation can still be of interest because it is less CPU‐ and I/O‐demanding than 3D‐SRME and, moreover, does not require any prior data regularization. Here we propose a fast implementation of water‐bottom multiple prediction that uses the Kirchhoff formulation of wavefield extrapolation. With wavefield extrapolation, multiple prediction is usually obtained through the cascade of two extrapolation steps. By applying Fermat's principle (i.e., minimum reflection traveltime), we show that the cascade of two operators can be replaced by a single approximated extrapolation step. The approximation holds as long as the water bottom is not too complex. Indeed, the proposed approach has proved to work well on synthetic and field data when the water bottom is such that wavefront triplications are negligible, as happens in many practical situations.
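As a rough illustration of the single-step idea, here is a minimal NumPy sketch (not the paper's Kirchhoff implementation): the water-bottom primary traveltime is obtained by Fermat minimization over candidate bounce points, and the first-order multiple traveltime is then approximated by minimizing, over surface positions, the sum of two primary traveltimes instead of cascading two extrapolation operators. Constant water velocity, 2D geometry and all numerical values are illustrative assumptions.

```python
import numpy as np

def primary_wb_time(xs, xr, wb_x, wb_z, v_water=1500.0):
    """Water-bottom primary traveltime from source xs to receiver xr (both at z=0),
    found by Fermat's principle: minimise over candidate bounce points on the bottom."""
    t_down = np.hypot(wb_x - xs, wb_z) / v_water
    t_up = np.hypot(wb_x - xr, wb_z) / v_water
    return np.min(t_down + t_up)

def first_order_multiple_time(xs, xr, wb_x, wb_z, surf_x, v_water=1500.0):
    """Single-step approximation: minimise over surface points the sum of two
    primary traveltimes, instead of cascading two extrapolation operators."""
    t = [primary_wb_time(xs, x, wb_x, wb_z, v_water) +
         primary_wb_time(x, xr, wb_x, wb_z, v_water) for x in surf_x]
    return np.min(t)

# toy example: gently dipping water bottom, 1500 m offset
wb_x = np.linspace(-2000.0, 4000.0, 601)
wb_z = 300.0 + 0.02 * wb_x              # water-bottom depth profile
surf_x = np.linspace(-1000.0, 3000.0, 401)
print(first_order_multiple_time(0.0, 1500.0, wb_x, wb_z, surf_x))
```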

2.
Scattered ground roll is a type of noise observed in land seismic data that can be particularly difficult to suppress. Typically, this type of noise cannot be removed using conventional velocity‐based filters. In this paper, we discuss a model‐driven form of seismic interferometry that allows suppression of scattered ground‐roll noise in land seismic data. The conventional cross‐correlate and stack interferometry approach results in scattered noise estimates between two receiver locations (i.e. as if one of the receivers had been replaced by a source). For noise suppression, this requires that each source we wish to attenuate the noise from is co‐located with a receiver. The model‐driven form differs, as the use of a simple model in place of one of the inputs for interferometry allows the scattered noise estimate to be made between a source and a receiver. This allows the method to be more flexible, as co‐location of sources and receivers is not required, and the method can be applied to data sets with a variety of different acquisition geometries. A simple plane‐wave model is used, allowing the method to remain relatively data driven, with weighting factors for the plane waves determined using a least‐squares solution. Using a number of both synthetic and real two‐dimensional (2D) and three‐dimensional (3D) land seismic data sets, we show that this model‐driven approach provides effective results, allowing suppression of scattered ground‐roll noise without having an adverse effect on the underlying signal.

3.
The reassignment method remaps the energy of each point in a time‐frequency spectrum to a new coordinate that is closer to the actual time‐frequency location. Two applications of the reassignment method are developed in this paper. We first describe time‐frequency reassignment as a tool for spectral decomposition. The reassignment method helps to generate clearer frequency slices of layers and therefore facilitates the interpretation of thin layers. The second application is to seismic data de‐noising. Through thresholding in the reassigned domain rather than in the Gabor domain, random noise is more easily attenuated, since seismic events are more compactly represented with a relatively larger energy than the noise. A reconstruction process that permits the recovery of seismic data from a reassigned time‐frequency spectrum is developed. Two variants of the reassignment method are used in this paper: trace-by-trace time reassignment, which is mainly used for seismic spectral decomposition, and spatial reassignment, which is mainly used for seismic de‐noising. Synthetic examples and two field data examples are used to test the proposed method. For comparison, the Gabor transform method, an inversion‐based method and a common deconvolution method are also used in the examples.
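For readers unfamiliar with reassignment, the following is a compact NumPy sketch of classical Auger-Flandrin spectrogram reassignment; the paper's trace-by-trace and spatial variants are not reproduced, and the window length, hop and function names are illustrative assumptions.

```python
import numpy as np

def _stft(x, win, hop):
    """Frame-based STFT: rows of the output are frequencies, columns are frames."""
    n = len(win)
    frames = [x[i:i + n] * win for i in range(0, len(x) - n + 1, hop)]
    return np.fft.rfft(np.asarray(frames), axis=1).T

def reassigned_spectrogram(x, dt, nwin=64, hop=8):
    """Classical spectrogram reassignment: each (t, f) cell of |S_h|^2 is moved to
    reassigned coordinates computed from ratios of three STFTs (window h,
    time-ramped window t*h, derivative window h')."""
    tw = (np.arange(nwin) - nwin // 2) * dt            # window time axis, centred
    h = np.hanning(nwin)
    Sh, Sth, Sdh = (_stft(x, w, hop) for w in (h, tw * h, np.gradient(h, dt)))
    eps = 1e-12 * np.abs(Sh).max()
    frame_t = (np.arange(Sh.shape[1]) * hop + nwin // 2) * dt   # frame-centre times
    omega = 2 * np.pi * np.fft.rfftfreq(nwin, dt)               # angular frequencies
    T, W = np.meshgrid(frame_t, omega)
    t_hat = T + np.real(Sth / (Sh + eps))              # reassigned times
    w_hat = W - np.imag(Sdh / (Sh + eps))              # reassigned angular frequencies
    return t_hat, w_hat, np.abs(Sh) ** 2               # coordinates and energy to remap
```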

4.
Linear prediction filters are an effective tool for reducing random noise from seismic records. Unfortunately, the ability of prediction filters to enhance seismic records deteriorates when the data are contaminated by erratic noise. Erratic noise in this article designates non‐Gaussian noise that consists of large isolated events with known or unknown distribution. We propose a robust f-x projection filtering scheme for simultaneous erratic noise and Gaussian random noise attenuation. Instead of adopting the ℓ2‐norm, as commonly used in the conventional design of f-x filters, we utilize a hybrid ℓ1/ℓ2‐norm to penalize the energy of the additive noise. The prediction error filter and the additive noise sequence are estimated in an alternating fashion. First, the additive noise sequence is fixed, and the prediction error filter is estimated via the least‐squares solution of a system of linear equations. Then, the prediction error filter is fixed, and the additive noise sequence is estimated through a cost function containing a hybrid ℓ1/ℓ2‐norm that prevents erratic noise from influencing the final solution. In other words, we propose and design a robust M‐estimate of a special autoregressive moving‐average model in the f-x domain. Synthetic and field data examples are used to evaluate the performance of the proposed algorithm.
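A minimal sketch of the robust estimation idea for a single f-x frequency slice, assuming a Huber-type iteratively reweighted least-squares (IRLS) scheme rather than the authors' exact alternating ARMA formulation; all names and parameter values are illustrative.

```python
import numpy as np

def robust_prediction_filter(d, order=4, n_iter=10, k=1.5):
    """IRLS estimate of a forward prediction filter for one f-x frequency slice d
    (complex values along the spatial axis), using Huber-style weights so that
    isolated erratic traces do not dominate the fit."""
    nx = len(d)
    # autoregression system: d[n] ~ sum_j a[j] * d[n-1-j]
    A = np.array([d[n - order:n][::-1] for n in range(order, nx)])
    y = d[order:]
    w = np.ones(len(y))
    for _ in range(n_iter):
        a, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)
        r = np.abs(y - A @ a)
        s = 1.4826 * np.median(r) + 1e-12                    # robust scale (MAD)
        w = np.minimum(1.0, k * s / np.maximum(r, 1e-12))    # Huber-style weights
    return a, y - A @ a                                      # filter and residual (noise estimate)

# usage sketch: one frequency slice with an erratic burst on one trace
x = np.arange(60)
d = np.exp(1j * 0.3 * x) + 0.05 * (np.random.randn(60) + 1j * np.random.randn(60))
d[25] += 5.0                                                 # simulated erratic noise
a, resid = robust_prediction_filter(d)
```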

5.
In marine acquisition, reflections of sound energy from the water–air interface result in ghosts in the seismic data, on both the source side and the receiver side. Ghosts limit the bandwidth of the useful signal and blur the final image. The process that separates the ghost and primary signals, called the deghosting process, can fill the ghost notch, broaden the frequency band, and help achieve high‐resolution images. A low signal‐to‐noise ratio near the notch frequencies and 3D effects are two challenges that the deghosting process has to face. In this paper, starting from an introduction to the deghosting process, we present and compare two strategies to address the latter. The first is an adaptive mechanism that adjusts the deghosting operator to compensate for 3D effects or errors in source/receiver depth measurement. This method does not explicitly include the crossline slowness component and is therefore not affected by the sparse sampling in that direction. The second method is an inversion‐type approach that does include the crossline slowness component in the algorithm and handles the 3D effects explicitly. Both synthetic and field data examples in wide‐azimuth acquisition settings are shown to compare the two strategies. Both methods provide satisfactory results.
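As background to the deghosting step itself, a minimal sketch of 1D receiver-side deghosting by regularized spectral division under a vertical-incidence ghost model; it ignores the 3D effects that the two strategies in the paper address, and all parameter values are illustrative.

```python
import numpy as np

def receiver_deghost(trace, dt, depth, v=1500.0, r=-1.0, eps=0.05):
    """1D vertical-incidence receiver deghosting by regularised spectral division.
    Ghost model: G(f) = 1 + r*exp(-i*2*pi*f*tau), tau = 2*depth/v, with free-surface
    reflectivity r ~ -1. The small eps stabilises the division near ghost notches."""
    n = len(trace)
    f = np.fft.rfftfreq(n, dt)
    tau = 2.0 * depth / v
    G = 1.0 + r * np.exp(-2j * np.pi * f * tau)
    D = np.fft.rfft(trace)
    P = D * np.conj(G) / (np.abs(G) ** 2 + eps * np.max(np.abs(G)) ** 2)
    return np.fft.irfft(P, n)
```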

6.
Random noise is the main factor degrading the useful signal in seismic exploration, and its presence greatly reduces the signal-to-noise ratio of seismic records. While noise-suppression methods are continuously being improved, studying the characteristics of random noise and understanding its generation mechanism is a prerequisite for suppressing it; current research mainly characterizes the noise to find regularities, and qualitative and quantitative analyses are still relatively scarce. Based on the actual acquisition environment of the Tarim desert area, and considering the computational inconvenience caused by the continuity of the noise, we assume that the various noise sources are distributed around the geophones as point sources. The source function of each type of noise source is determined from the corresponding theory, the noise it excites is propagated via the wave equation, and the random noise is treated as the composite wavefield produced jointly by all noise sources, thereby establishing a theoretical model of the random noise. By analysing the influence of the different kinds of noise on the seismic record, a suitable filtering method is selected to suppress them. Experimental results show that the theoretical model of random noise in desert areas provides theoretical guidance for choosing effective filtering methods and improving the signal-to-noise ratio of seismic records.
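A toy NumPy sketch of the modelling idea, under strong simplifying assumptions: point noise sources with random positions, onset times and dominant frequencies are summed at the receivers, with propagation approximated by a traveltime delay and 1/sqrt(r) geometrical spreading instead of a full wave-equation solver; all names and values are illustrative.

```python
import numpy as np

def ricker(t, f0):
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def synthesize_noise(rec_x, nt, dt, n_src=200, v=300.0, seed=0):
    """Toy ambient-noise model: point noise sources with random positions,
    onset times and dominant frequencies; each contribution is delayed by its
    traveltime and scaled by 1/sqrt(r) spreading (surface-wave-like)."""
    rng = np.random.default_rng(seed)
    t = np.arange(nt) * dt
    noise = np.zeros((len(rec_x), nt))
    for _ in range(n_src):
        sx = rng.uniform(rec_x.min() - 2000, rec_x.max() + 2000)   # source position
        t0 = rng.uniform(0, t[-1])                                  # onset time
        f0 = rng.uniform(5.0, 30.0)                                 # dominant frequency
        amp = rng.lognormal(0.0, 1.0)
        for i, rx in enumerate(rec_x):
            dist = max(abs(rx - sx), 1.0)
            noise[i] += amp / np.sqrt(dist) * ricker(t - t0 - dist / v, f0)
    return noise

noise = synthesize_noise(np.arange(0, 1000, 20.0), nt=1000, dt=0.002)
```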

7.
Surface waves in seismic data are often dominant in a land or shallow‐water environment. Separating them from primaries is of great importance either for removing them as noise for reservoir imaging and characterization or for extracting them as signal for near‐surface characterization. However, their complex properties make the surface‐wave separation significantly challenging in seismic processing. To address the challenges, we propose a method of three‐dimensional surface‐wave estimation and separation using an iterative closed‐loop approach. The closed loop contains a relatively simple forward model of surface waves and adaptive subtraction of the forward‐modelled surface waves from the observed surface waves, making it possible to evaluate the residual between them. In this approach, the surface‐wave model is parameterized by the frequency‐dependent slowness and source properties for each surface‐wave mode. The optimal parameters are estimated in such a way that the residual is minimized and, consequently, this approach solves the inverse problem. Through real data examples, we demonstrate that the proposed method successfully estimates the surface waves and separates them out from the seismic data. In addition, it is demonstrated that our method can also be applied to undersampled, irregularly sampled, and blended seismic data.

8.
Tensor algebra provides a robust framework for multi-dimensional seismic data processing. A low-rank tensor can represent a noise-free seismic data volume. Additive random noise will increase the rank of the tensor. Hence, tensor rank-reduction techniques can be used to filter random noise. Our filtering method adopts the Candecomp/Parafac decomposition to approximate an N-dimensional seismic data volume via the superposition of rank-one tensors. Similar to the singular value decomposition for matrices, a low-rank Candecomp/Parafac decomposition can capture the signal and exclude random noise in situations where a low-rank tensor can represent the ideal noise-free seismic volume. The alternating least squares method is adopted to compute the Candecomp/Parafac decomposition with a provided target rank. This method involves solving a series of highly over-determined linear least-squares subproblems. To improve the efficiency of the alternating least squares algorithm, we uniformly randomly sample equations of the linear least-squares subproblems to reduce the size of the problem significantly. The computational overhead is further reduced by avoiding unfolding and folding large dense tensors. We investigate the applicability of the randomized Candecomp/Parafac decomposition for incoherent noise attenuation via experiments conducted on a synthetic dataset and field data seismic volumes. We also compare the proposed algorithm (randomized Candecomp/Parafac decomposition) against multi-dimensional singular spectrum analysis and classical prediction filtering. We conclude that the proposed approach can achieve slightly better denoising performance in terms of signal-to-noise ratio enhancement than traditional methods, but at a lower computational cost.
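A compact NumPy sketch of randomized CP-ALS for a 3-way data volume, in the spirit of the approach described above: each alternating least-squares subproblem is solved on a uniformly sampled subset of its equations. This is a simplified illustration, not the authors' implementation; ranks, sampling fractions and names are assumptions.

```python
import numpy as np

def randomized_cp_als(X, rank, n_iter=25, sample_frac=0.3, seed=0):
    """Rank-`rank` Candecomp/Parafac approximation of a 3-way array X by alternating
    least squares, where each LS subproblem is solved on a uniformly sampled subset
    of its equations (randomized CP-ALS sketch)."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A, B, C = (rng.standard_normal((n, rank)) for n in (I, J, K))

    def update(mode, F1, F2):
        # solve for the factor of `mode`, given the two other factors F1, F2
        n1, n2 = F1.shape[0], F2.shape[0]
        ns = max(rank, int(sample_frac * n1 * n2))      # number of sampled equations
        i1, i2 = rng.integers(0, n1, ns), rng.integers(0, n2, ns)
        Z = F1[i1] * F2[i2]                             # sampled Khatri-Rao rows
        Xs = np.moveaxis(X, mode, 0)[:, i1, i2]         # matching sampled data
        gram = Z.T @ Z + 1e-10 * np.eye(rank)
        return np.linalg.solve(gram, (Xs @ Z).T).T

    for _ in range(n_iter):
        A = update(0, B, C)
        B = update(1, A, C)
        C = update(2, A, B)
    return A, B, C, np.einsum('ir,jr,kr->ijk', A, B, C)   # factors and low-rank volume
```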

9.
We present a new approach to enhancing weak prestack reflection signals without sacrificing higher frequencies. As a first step, we employ known multidimensional local stacking to obtain an approximate ‘model of the signal’. Guided by phase spectra from this model, we can detect very weak signals and make them visible and coherent by ‘repairing’ corrupted phase of original data. Both presented approaches – phase substitution and phase sign corrections – show good performance on complex synthetic and field data suffering from severe near-surface scattering where conventional processing methods are rendered ineffective. The methods are mathematically formulated as a special case of time-frequency masking (common in speech processing) combined with the signal model from local stacking. This powerful combination opens the avenue for a completely new family of approaches for multi-channel seismic processing that can address seismic processing of land data with nodes and single sensors in the desert environment.
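A minimal sketch of the phase-substitution idea using SciPy's STFT: the amplitude spectrum of the noisy trace is kept while the phase is borrowed from the local-stack signal model, then the trace is inverted back to time. Window parameters and names are illustrative, and the paper's phase-sign-correction variant is not shown.

```python
import numpy as np
from scipy.signal import stft, istft

def phase_substitution(noisy_trace, model_trace, fs, nperseg=64):
    """Keep the amplitude spectrum of the noisy trace but borrow the phase from the
    local-stack signal model (a simple time-frequency masking / phase-repair sketch)."""
    _, _, D = stft(noisy_trace, fs=fs, nperseg=nperseg)
    _, _, M = stft(model_trace, fs=fs, nperseg=nperseg)
    repaired = np.abs(D) * np.exp(1j * np.angle(M))   # data magnitude, model phase
    _, x = istft(repaired, fs=fs, nperseg=nperseg)
    return x[:len(noisy_trace)]
```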

10.
Shake tables provide a direct means by which to evaluate structural performance under earthquake excitation. Because the entire structure is mounted on the base plate and subjected to the ground motion in real time, dynamic effects and rate‐dependent behavior can be accurately represented. Shake table control is not straightforward as the desired signal is an acceleration record, while most actuators operate in displacement feedback for stability. At the same time, the payload is typically large relative to the capacity of the actuator, leading to pronounced control‐structure interaction. Through this interaction, the dynamics of the specimen influence the dynamics of the shake table, which can be problematic when specimens change behavior because of damage or other nonlinearities. Moreover, shake tables are themselves inherently nonlinear, making it difficult to accurately recreate a desired acceleration record over a broad range of amplitudes and frequencies. A model‐based multi‐metric shake table control strategy is proposed to improve tracking of the desired acceleration of a uniaxial shake table, remaining robust to nonlinearities including changes in specimen condition. The proposed strategy is verified for the shake table testing of both linear and nonlinear structures. Copyright © 2013 John Wiley & Sons, Ltd.

11.
Wave‐equation based shot‐record migration provides accurate images but is computationally expensive because every shot must be migrated separately. Shot‐encoding migration, such as random shot‐encoding or plane‐wave migration, aims to reduce the computational cost of the imaging process by combining the original data into synthesized common‐source gathers. Random shot‐encoding migration and plane‐wave migration have different and complementary features: the first recovers the full spatial bandwidth of the image but introduces strong artefacts, which are due to the interference between the different shot wavefields; the second provides an image with limited spatial detail but is free of crosstalk noise. We design a hybrid scheme that combines linear and random shot‐encoding in order to limit the drawbacks and merge the advantages of these two techniques. We advocate mixed shot‐encoding migration through dithering of plane waves. This approach reduces the crosstalk noise relative to random shot‐encoding migration and increases the spatial bandwidth relative to conventional plane‐wave migration when the take‐off angle is limited to reduce the duration of the plane‐wave gather. In turn, this decreases the migration cost. Migration with dithered plane waves operates as a hybrid encoding scheme in‐between the end members represented by plane‐wave migration and random shot‐encoding. Migration with dithered plane waves has several advantages: every synthesized common‐source gather images in a larger aperture, the crosstalk noise is limited and higher spatial resolution is achievable compared to shot‐record migration, random shot‐encoding and linear shot‐encoding, respectively. Computational cost is also reduced relative to both random and linear shot‐encoding migration since fewer synthesized common‐source gathers are necessary to obtain a high signal‐to‐noise ratio and high spatial resolution in the final image.
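A small NumPy sketch of how one dithered plane-wave super-shot could be synthesized: each shot gather is delayed by a linear (plane-wave) moveout plus a random dither and the results are summed. The data layout, ray parameter and dither range are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def encode_shots(data, src_x, dt, p, dither_max, seed=0):
    """Build one encoded common-source gather by delaying each shot gather with a
    linear (plane-wave) moveout p*src_x plus a random dither, then summing.
    data: (n_shots, n_receivers, n_samples)."""
    rng = np.random.default_rng(seed)
    n_shots, n_rec, nt = data.shape
    f = np.fft.rfftfreq(nt, dt)
    delays = p * (src_x - src_x[0]) + rng.uniform(0.0, dither_max, n_shots)
    encoded = np.zeros((n_rec, len(f)), dtype=complex)
    for s in range(n_shots):
        shift = np.exp(-2j * np.pi * f * delays[s])      # frequency-domain time shift
        encoded += np.fft.rfft(data[s], axis=-1) * shift
    return np.fft.irfft(encoded, nt, axis=-1)            # the blended super-shot

# usage sketch: encoded = encode_shots(data, src_x, dt=0.004, p=2e-4, dither_max=0.2)
```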

12.
Modern regional airborne magnetic datasets, when acquired in populated areas, are inevitably degraded by cultural interference. In the United Kingdom context, the spatial densities of interfering structures and their complex spatial form severely limit our ability to successfully process and interpret the data. Deculturing procedures previously adopted have used semi‐automatic methods that incorporate additional geographical databases that guide manual assessment and refinement of the acquired database. Here we present an improved component of that procedure that guides the detection of localized responses associated with non‐geological perturbations. The procedure derives from a well‐established technique for the detection of kimberlite pipes and is a form of moving‐window correlation using grid‐based data. The procedure lends itself to automatic removal of perturbed data, although manual intervention to accept/reject outputs of the procedure is wise. The technique is evaluated using recently acquired regional United Kingdom survey data, which benefits from having an offshore component and areas of largely non‐magnetic granitic response. The methodology is effective at identifying (and hence removing) the isolated perturbations that form a persistent spatial noise background to the entire dataset. Probably in common with all such methods, the technique fails to isolate and remove amalgamated responses due to complex superimposed effects. The procedure forms an improved component of partial automation in the context of a wider deculturing procedure applied to United Kingdom aeromagnetic data.
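A simple sketch of grid-based moving-window correlation against a template of an isolated anomaly, in the spirit of the detection step described above (the wider deculturing workflow is not reproduced); the kernel, threshold and names are illustrative.

```python
import numpy as np

def anomaly_correlation(grid, kernel, threshold=0.7):
    """Moving-window correlation of a gridded magnetic anomaly with a small template
    of an isolated (culture-like) response; windows whose correlation coefficient
    exceeds `threshold` are flagged for possible removal."""
    ny, nx = grid.shape
    ky, kx = kernel.shape
    k = (kernel - kernel.mean()).ravel()
    k /= np.linalg.norm(k) + 1e-12
    corr = np.zeros((ny - ky + 1, nx - kx + 1))
    for i in range(corr.shape[0]):
        for j in range(corr.shape[1]):
            w = grid[i:i + ky, j:j + kx].ravel()
            w = w - w.mean()
            corr[i, j] = np.dot(w, k) / (np.linalg.norm(w) + 1e-12)
    return corr, np.argwhere(np.abs(corr) > threshold)   # correlation map and flagged windows
```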

13.
A set of algorithms combined with a substructure technique is proposed for an online hybrid test framework, in which the substructures are encapsulated by a standard interface that implements displacements and forces at the common substructure boundaries. A coordinator equipped with the proposed algorithms is designed to achieve boundary compatibility and equilibrium, thereby endowing the substructures with the ability to behave as one piece. A model‐based predictor and corrector, and a noniterative procedure, characterize the set of algorithms. The coordinator solves the dynamics of the entire structure and updates the static boundary state simultaneously by a quasi‐Newton procedure, which gradually formulates the condensed stiffness matrix associated with the corresponding degrees of freedom. With the condensed stiffness matrix and dynamic information, a condensed equation of motion is derived and then solved by a typical time integration algorithm. Three strategies for updating the condensed stiffness matrix are incorporated into the proposed algorithms, each adopting a different stiffness matrix during the prediction and correction stages. These algorithms are validated by two numerical substructure simulations and a hybrid test. The effectiveness and feasibility are fully demonstrated. Copyright © 2012 John Wiley & Sons, Ltd.
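As an illustration of quasi-Newton updating of a condensed boundary stiffness matrix, here is a hedged sketch of a rank-one (Broyden-type) secant update from successive boundary displacement and force increments; the paper's three specific updating strategies and the coupling to the time-integration scheme are not reproduced.

```python
import numpy as np

def broyden_stiffness_update(K, du, df):
    """Rank-one quasi-Newton update of the condensed boundary stiffness so that the
    updated matrix maps the latest displacement increment du onto the measured
    force increment df (secant condition K_new @ du = df)."""
    du = np.asarray(du, dtype=float)
    df = np.asarray(df, dtype=float)
    denom = du @ du
    if denom < 1e-12:
        return K                      # ignore negligible increments
    return K + np.outer(df - K @ du, du) / denom

# usage sketch: K starts from an initial estimate and is refined every step
K = np.eye(3) * 1.0e5
K = broyden_stiffness_update(K, du=[1e-3, 0.0, 0.0], df=[120.0, 5.0, -2.0])
```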

14.
Seismic attenuation compensation is a spectrum-broadening technique for enhancing the resolution of non-stationary seismic data. Single-trace attenuation compensation algorithms ignore the prior information that seismic reflection events are generally continuous across neighbouring traces; thus, the compensated result may have poor spatial continuity and a low signal-to-noise ratio. To address this problem, we extend the single-trace approaches to multi-trace algorithms and furthermore propose a multi-trace attenuation compensation with a spatial constraint. Frequency-space prediction filters are the key to constructing this spatial regularization. We test the effectiveness of the proposed spatially constrained attenuation compensation algorithm on both synthetic and field data examples. Synthetic data tests indicate that the proposed multi-trace attenuation compensation approach can provide a better compensated result than the single-trace algorithm in terms of suppressing noise amplification and guaranteeing structural continuity. Field data applications further confirm its stability and practicality in improving seismic resolution.

15.
The common-reflection-surface method describes the shape of seismic events through traveltime attributes, typically slopes (dips) and curvatures. The most systematic approach to estimating the common-reflection-surface traveltime attributes is to employ a sequence of single-variable search procedures, which inherits the advantage of a low computational cost but also the disadvantage of poor estimation quality. A search strategy in which the common-reflection-surface attributes are globally estimated in a single stage may yield more accurate estimates. In this paper, we propose to use the bio-inspired global optimization algorithm differential evolution to estimate all the two-dimensional common-offset common-reflection-surface attributes simultaneously. The differential evolution algorithm can provide accurate estimates of the common-reflection-surface traveltime attributes, with the benefit of having only a small set of input parameters to be configured. We apply the differential evolution algorithm to estimate the two-dimensional common-reflection-surface attributes in the synthetic Marmousi data set, contaminated by noise, and on a land field data set with a small fold. By analysing the stacked and coherence sections, we observe that the differential evolution based common-offset common-reflection-surface approach yields a significant signal-to-noise ratio enhancement.
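A simplified illustration of the global-search idea with scipy.optimize.differential_evolution: instead of the full 2D common-offset CRS operator, a second-order (slope plus curvature) traveltime approximation is fitted to a gather by maximizing semblance over both parameters simultaneously. The parameterization, bounds and names are assumptions rather than the paper's formulation.

```python
import numpy as np
from scipy.optimize import differential_evolution

def semblance(gather, dt, dx, t0, slope, curvature, win=5):
    """Semblance of the event t(x) = t0 + slope*x + curvature*x**2 in a gather of
    shape (n_traces, n_samples) with trace spacing dx."""
    n_tr, nt = gather.shape
    x = (np.arange(n_tr) - n_tr // 2) * dx
    t = t0 + slope * x + curvature * x ** 2
    idx = np.clip(np.round(t / dt).astype(int), 0, nt - 1 - win)
    window = np.array([gather[i, idx[i]:idx[i] + win] for i in range(n_tr)])
    num = np.sum(window.sum(axis=0) ** 2)
    den = n_tr * np.sum(window ** 2) + 1e-12
    return num / den

def estimate_attributes(gather, dt, dx, t0):
    """Estimate slope and curvature jointly with differential evolution."""
    cost = lambda p: -semblance(gather, dt, dx, t0, p[0], p[1])
    res = differential_evolution(cost, bounds=[(-1e-3, 1e-3), (-1e-6, 1e-6)],
                                 seed=0, tol=1e-7)
    return res.x, -res.fun        # best (slope, curvature) and its semblance
```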

16.
We introduce the signal-dependent time–frequency distribution, which is a time–frequency distribution that allows the user to optimize the tradeoff between joint time–frequency resolution and suppression of transform artefacts. The signal‐dependent time–frequency distribution, as well as the short‐time Fourier transform, Stockwell transform, and the Fourier transform are analysed for their ability to estimate the spectrum of a known wavelet used in a tuning wedge model. Next, the signal‐dependent time–frequency distribution and fixed‐ and variable‐window transforms are used to estimate spectra from a zero‐offset synthetic seismogram. Attenuation is estimated from the associated spectral ratio curves, and the accuracy of the results is compared. The synthetic consisted of six pairs of strong reflections, based on real well‐log data, with a modeled intrinsic attenuation value of 1000/Q = 20. The signal‐dependent time–frequency distribution was the only time–frequency transform found to produce spectra that estimated consistent attenuation values, with an average of 1000/Q = 26±2; results from the fixed‐ and variable‐window transforms were 24±17 and 39±10, respectively. Finally, all three time–frequency transforms were used in a pre‐stack attenuation estimation method (the pre‐stack Q inversion algorithm) applied to a gather from a North Sea seismic dataset, to estimate attenuation between nine different strong reflections. In this case, the signal‐dependent time‐frequency distribution produced spectra more consistent with the constant‐Q model of attenuation assumed in the pre‐stack attenuation estimation algorithm: the average L1 residuals of the spectral ratio surfaces from the theoretical constant‐Q expectation for the signal‐dependent time‐frequency distribution, short‐time Fourier transform, and Stockwell transform were 0.12, 0.21, and 0.33, respectively. Based on the results shown, the signal‐dependent time‐frequency distribution is a time–frequency distribution that can provide more accurate and precise estimations of the amplitude spectrum of a reflection, due to a higher attainable time–frequency resolution.
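For reference, the spectral-ratio attenuation estimate used in such comparisons can be sketched in a few lines: under a constant-Q model the log ratio of two amplitude spectra separated by traveltime delta_t is linear in frequency with slope -pi*delta_t/Q. The frequency band and names below are illustrative.

```python
import numpy as np

def spectral_ratio_q(spec_shallow, spec_deep, freqs, delta_t, fband=(10.0, 60.0)):
    """Estimate Q from two amplitude spectra of the same wavelet observed at two
    reflectors separated by traveltime delta_t, using the log spectral ratio:
    ln(S_deep / S_shallow) = -pi * f * delta_t / Q + const."""
    mask = (freqs >= fband[0]) & (freqs <= fband[1])
    ratio = np.log(spec_deep[mask] / spec_shallow[mask])
    slope, _ = np.polyfit(freqs[mask], ratio, 1)     # linear fit over the band
    return -np.pi * delta_t / slope                  # Q estimate
```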

17.
Three‐dimensional seismic survey design should provide an acquisition geometry that enables imaging and amplitude‐versus‐offset applications of target reflectors with sufficient data quality under given economical and operational constraints. However, in land or shallow‐water environments, surface waves are often dominant in the seismic data. The effectiveness of surface‐wave separation or attenuation significantly affects the quality of the final result. Therefore, the need for surface‐wave attenuation imposes additional constraints on the acquisition geometry. Recently, we have proposed a method for surface‐wave attenuation that can better deal with aliased seismic data than classic methods such as slowness/velocity‐based filtering. Here, we investigate how surface‐wave attenuation affects the selection of survey parameters and the resulting data quality. To quantify the latter, we introduce a measure that represents the estimated signal‐to‐noise ratio between the desired subsurface signal and the surface waves that are deemed to be noise. In a case study, we applied surface‐wave attenuation and signal‐to‐noise ratio estimation to several data sets with different survey parameters. The spatial sampling intervals of the basic subset are the survey parameters that affect the performance of surface‐wave attenuation methods the most. Finer spatial sampling will reduce aliasing and make surface‐wave attenuation easier, resulting in better data quality until no further improvement is obtained. We observed this behaviour as a main trend that levels off at increasingly denser sampling. With our method, this trend curve lies at a considerably higher signal‐to‐noise ratio than with a classic filtering method. This means that we can obtain a much better data quality for given survey effort or the same data quality as with a conventional method at a lower cost.

18.
We propose to adopt a deep learning based framework using generative adversarial networks for ground-roll attenuation in land seismic data. Accounting for the non-stationary properties of seismic data and the associated ground-roll noise, we create training labels using the local time–frequency transform and regularized non-stationary regression. The basic idea is to train the network using a few shot gathers such that the network can learn the weights associated with noise attenuation for the training shot gathers. We then apply the learned weights to shot gathers that are not part of the training input to obtain the desired signal. This approach gives results similar to the local time–frequency transform and regularized non-stationary regression but at a significantly reduced computational cost. The proposed approach automates the ground-roll attenuation process without requiring any manual parameter picking for each shot gather beyond the training data. Tests on field-data examples verify the effectiveness of the proposed approach.
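A heavily simplified PyTorch sketch of a conditional-GAN set-up for noise attenuation (adversarial plus L1 loss on labelled shot gathers); the architecture, losses and hyper-parameters are generic assumptions and do not reproduce the authors' network.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a noisy shot gather (1 x nt x nx) to its ground-roll-attenuated version."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores whether a gather looks like a clean (label) gather."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1))
    def forward(self, x):
        return self.net(x)

def train(pairs, epochs=50, lam=100.0):
    """pairs: iterable of (noisy, label) tensors of shape (1, 1, nt, nx)."""
    G, D = Generator(), Discriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
    for _ in range(epochs):
        for x, y in pairs:
            # discriminator step: label gathers vs. generator output
            fake = G(x).detach()
            real_logit, fake_logit = D(y), D(fake)
            loss_d = bce(real_logit, torch.ones_like(real_logit)) + \
                     bce(fake_logit, torch.zeros_like(fake_logit))
            opt_d.zero_grad(); loss_d.backward(); opt_d.step()
            # generator step: fool the discriminator and stay close to the label gather
            fake = G(x)
            fake_logit = D(fake)
            loss_g = bce(fake_logit, torch.ones_like(fake_logit)) + lam * l1(fake, y)
            opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return G
```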

19.
We present a Gaussian packet migration method based on Gabor frame decomposition and asymptotic propagation of Gaussian packets. A Gaussian packet has both Gaussian‐shaped time–frequency localization and space–direction localization. Its evolution can be obtained by ray tracing and dynamic ray tracing. In this paper, we first briefly review the concept of Gaussian packets. After discussing how initial parameters affect the shape of a Gaussian packet, we then propose two Gabor‐frame‐based Gaussian packet decomposition methods that can sparsely and accurately represent seismic data. One method is the dreamlet–Gaussian packet method. Dreamlets are physical wavelets defined on an observation plane and can represent seismic data efficiently in the local time–frequency space–wavenumber domain. After decomposition, dreamlet coefficients can be easily converted to the corresponding Gaussian packet coefficients. The other method is the Gabor‐frame Gaussian beam method. In this method, a local slant stack, which is widely used in Gaussian beam migration, is combined with the Gabor frame decomposition to obtain uniform sampled horizontal slowness for each local frequency. Based on these decomposition methods, we derive a poststack depth migration method through the summation of the backpropagated Gaussian packets and the application of the imaging condition. To demonstrate the Gaussian packet evolution and migration/imaging in complex models, we show several numerical examples. We first use the evolution of a single Gaussian packet in media with different complexities to show the accuracy of Gaussian packet propagation. Then we test the point source responses in smoothed varying velocity models to show the accuracy of Gaussian packet summation. Finally, using poststack synthetic data sets of a four‐layer model and the two‐dimensional SEG/EAGE model, we demonstrate the validity and accuracy of the migration method. Compared with the more accurate but more time‐consuming one‐way wave‐equation‐based migration, such as beamlet migration, the Gaussian packet method proposed in this paper can correctly image the major structures of the complex model, especially in subsalt areas, with much higher efficiency. This shows the application potential of Gaussian packet migration in complicated areas.

20.
A new methodology that levels airborne magnetic data without orthogonal tie‐lines is presented in this study. The technique utilizes the low‐wavenumber content of the flight‐line data to construct a smooth representation of the regional field at a scale appropriate to the line lengths of the survey. Levelling errors are then calculated between the raw flight‐line data and the derived regional field through a least squares approach. Minimizing the magnitude of the error, with a first‐degree error function, results in significant improvements to the unlevelled data. The technique is tested and demonstrated using three recent airborne surveys.
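A hedged NumPy sketch of the tie-line-free levelling idea: here the smooth regional field is approximated by a low-order 2D polynomial fitted to all flight-line readings (the paper instead derives it from the low-wavenumber content of the line data), and a first-degree levelling error is then fitted by least squares and removed per line; names and the polynomial order are assumptions.

```python
import numpy as np

def _poly_design(x, y, order):
    """Design matrix for a 2-D polynomial of total degree `order`."""
    return np.column_stack([x**i * y**j for i in range(order + 1)
                            for j in range(order + 1 - i)])

def level_lines(xy_per_line, data_per_line, poly_order=3):
    """Tie-line-free levelling sketch.
    xy_per_line: list of (n_i, 2) coordinate arrays; data_per_line: list of (n_i,) readings."""
    xy_all = np.vstack(xy_per_line)
    d_all = np.concatenate(data_per_line)
    mean, scale = xy_all.mean(axis=0), xy_all.std(axis=0) + 1e-12   # normalise coordinates
    norm = lambda pts: (np.asarray(pts) - mean) / scale
    G = _poly_design(*norm(xy_all).T, poly_order)
    coeff, *_ = np.linalg.lstsq(G, d_all, rcond=None)               # smooth regional field

    levelled = []
    for pts, line in zip(xy_per_line, data_per_line):
        regional = _poly_design(*norm(pts).T, poly_order) @ coeff
        s = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]  # along-line distance
        err = np.polyfit(s, line - regional, 1)                     # first-degree error function
        levelled.append(line - np.polyval(err, s))
    return levelled
```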
