Similar Literature
20 similar documents found (search time: 78 ms)
1.
The most common sources of seismic energy onshore are a vertical impact on the earth's surface or an explosion at some depth in a borehole. These sources produce mainly compressional waves. Here it is shown how they can also be used to generate shear waves, either by conversion at depth or in the immediate vicinity of the source itself. Theoretical seismograms can help to identify the individual onsets, especially on the horizontal components of the earth's motion. Because of their more complex raypaths, converted waves require special data processing. Their spectral behaviour can be improved by spectral balancing followed by spiking deconvolution. As the CDP concept is no longer applicable to converted PS-waves, a special sorting to a common conversion point (CCP) is applied. The identification and analysis of the individual waves can be simplified by a detailed polarization analysis that takes into account the full dynamic behaviour of the observed waves. Prestack depth migration of converted PS-waves makes it possible to deduce additional information on the material properties of reflecting horizons; the depth migration of individual shot gathers is carried out in the frequency-space domain. The kinematic and dynamic behaviour of these secondary waves is thus a valuable tool for a better understanding of the elastic properties of the subsurface.

2.
Characteristic wavefield decomposition, migration imaging and tomographic inversion of prestack seismic data
This paper presents a processing workflow for the sparse representation (characteristic wavefield synthesis) of prestack seismic data, prestack depth migration and tomographic inversion. Unlike conventional transform-domain sparse-representation theory, it uses the propagation directions (slowness vectors) of local plane waves to perform beamforming simultaneously at the central shot and receiver points, projecting the seismic data into the local plane-wave (high-dimensional) domain. Because the beamformed data describe the directional characteristics of local plane waves, they are called characteristic wavefields. The beamforming algorithm, however, requires estimates of the slowness vectors of the local plane waves, and when the data are contaminated by noise it is difficult to estimate the ray parameters (slowness vectors) automatically from a conventional τ-p spectrum. We therefore propose an inversion-based method of characteristic wavefield synthesis that inverts simultaneously for the local plane waves and their propagation directions, improving the automation and robustness of the synthesis. Through characteristic wavefield synthesis, the seismic data can be decomposed into individual phases (waveforms) that can be used directly for imaging and inversion. In the local plane-wave domain, where the incident and emergent ray parameters of each local plane wave are known, conventional Kirchhoff prestack depth migration (PSDM) and Gaussian-beam/controlled-beam PSDM can change from "smearing along isochrons" to "direct projection onto reflection points (segments)", improving both the efficiency and the quality of prestack migration. Moreover, the one-to-one mapping between characteristic wavefields and subsurface reflection points (segments) unifies prestack depth migration and tomography, which can greatly improve the efficiency of velocity inversion. Numerical experiments demonstrate the effectiveness of characteristic wavefield synthesis, prestack depth imaging and tomographic inversion.

3.
Conventional Kirchhoff prestack time migration based on hyperbolic moveout can cause ambiguity in laterally inhomogeneous media, because the root-mean-square velocity corresponds to a one-dimensional model under the horizontal-layer assumption and does not include lateral variations. Shot/receiver configurations with different offsets and azimuths should use different migration velocities even when they contribute to a single image point. We therefore propose an offset-vector description of the lateral variations through an offset-dependent velocity that accounts for the different paths from the surface points to the image point. The offset-vector is decomposed into orthogonal components along the in-line and cross-line directions, so that the single velocity can be expressed as a series of actual velocities. We use simple Snell's-law ray tracing to calculate the traveltime recorded at the image point and convert it to an equivalent velocity corresponding to a pseudo-straight ray. The double-square-root equation using such an equivalent velocity in the offset-vector domain is non-hyperbolic and asymmetrical, which improves the accuracy of the migration. Numerical examples using the Marmousi model and wide-azimuth field data show that the proposed method achieves reasonable accuracy and significantly enhances the imaging of complex structures.
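The non-hyperbolic, asymmetric traveltime the abstract refers to can be sketched with the double-square-root equation below. The leg-dependent velocities `v_s` and `v_r`, standing in for the abstract's offset-dependent equivalent velocities, are an assumption for illustration:

```python
import math

def dsr_traveltime(t0, xs, xr, v_s, v_r):
    """Double-square-root traveltime for prestack time migration.

    t0  : two-way zero-offset time at the image point (s)
    xs  : horizontal distance from source to image point (m)
    xr  : horizontal distance from receiver to image point (m)
    v_s : velocity along the source leg (m/s)
    v_r : velocity along the receiver leg (m/s)

    Allowing v_s != v_r is what makes the summation surface
    non-hyperbolic and asymmetric in the offset-vector domain.
    """
    half = t0 / 2.0
    t_src = math.sqrt(half ** 2 + (xs / v_s) ** 2)
    t_rec = math.sqrt(half ** 2 + (xr / v_r) ** 2)
    return t_src + t_rec
```

With zero source and receiver distances the expression collapses to the zero-offset time t0, which is a quick sanity check on any implementation.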

4.
For sparse/irregularly sampled or low signal-to-noise data, beam extraction is difficult and aliasing arises, seriously contaminating the stacked sections and gathers. To improve the imaging of beam migration on sparse, low signal-to-noise acquisitions, this paper proposes an anti-aliasing method for local slant-stack beamforming migration based on triangle filtering. Beam migration first divides the seismic data into supergathers, which after partial NMO are converted into common-offset data defined at the beam centres; both the slant stack and the anti-aliasing operation are carried out in local common-midpoint coordinates. Slant stacking in the time domain is a time-shift-and-sum operation on the data, and triangle low-pass filtering can likewise be done in the time domain: after causal and anti-causal integration of the data, it also reduces to time-shifted summation. The triangle low-pass filter and the slant stack can therefore be combined and performed together in the time domain, avoiding the forward and inverse Fourier transforms of frequency-domain filtering. A weighting coefficient is introduced into the anti-aliasing formula to control the degree of anti-aliasing and achieve the best trade-off between resolution and noise suppression. Using a 3D marine field dataset, we show that anti-aliased beamforming effectively suppresses noise in both the migrated stacked section and the common-image-point offset gathers.
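The time-domain equivalence the abstract exploits, a triangle low-pass filter realised as a causal plus an anti-causal box-car running sum, can be sketched as follows. The `half_len` parameter is hypothetical; in the method it would be chosen from the local slowness and trace spacing to set the anti-alias cutoff:

```python
import numpy as np

def triangle_filter(trace, half_len):
    """Anti-alias triangle low-pass filter applied in the time domain.

    Convolution with a (2*half_len + 1)-sample triangle equals two
    cascaded box-car running sums: a causal pass followed by an
    anti-causal pass.  This is why the slant stack can absorb the
    filter as extra time-shifted accumulations, with no FFTs needed.
    """
    box = np.ones(half_len + 1) / (half_len + 1)
    # causal box-car pass (delays by half_len/2 samples on average)
    causal = np.convolve(trace, box, mode="full")[: len(trace)]
    # anti-causal box-car pass (advances by the same amount), done by
    # reversing, filtering causally, and reversing back
    anticausal = np.convolve(causal[::-1], box, mode="full")[: len(trace)][::-1]
    return anticausal
```

Applied to a unit impulse, the cascade returns the zero-phase triangle itself, confirming that the two passes cancel each other's delay.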

5.
We present the theory and numerical results for interferometric interpolation of 2D and 3D marine surface seismic data. For the interpolation we use the combination of a recorded Green's function and a model-based Green's function for a water-layer model. Synthetic (2D and 3D) and field (2D) results show that seismic data with sparse receiver intervals can be accurately interpolated to smaller intervals using multiples in the data. An up- and downgoing separation of both the recorded and the model-based Green's functions helps to minimize artefacts in a virtual shot gather; if this separation is not possible, noticeable artefacts are generated. As a partial remedy we iteratively apply a non-stationary 1D multi-channel matching filter to the interpolated data. The results suggest that a sparse marine seismic survey can yield more information about reflectors if traces are interpolated by interferometry. Compared with f-k interpolation, the interferometric method gives comparable results on the synthetic example and better interpolation quality on the field example.

6.
The τ-p transform is an invertible transformation of seismic shot records, expressed as a function of time and offset, into the τ (intercept time) and p (ray parameter) domain. It is derived from the solution of the wave equation for a point source in a three-dimensional, vertically inhomogeneous medium and is therefore a true-amplitude process for the assumed model. The main advantage of the transformation is that it presents a point-source shot record as a series of plane-wave experiments. The asymptotic expansion of the transformation is useful in reflection seismic data processing. The τ-p and frequency-wavenumber (f-k) processes are closely related: the τ-p process embodies the frequency-wavenumber transformation, so the technique suffers the same limitations as the f-k technique. In particular, the wavefield must be sampled with sufficient spatial density to avoid wavenumber aliasing. The computation of the transform and its inverse consists of a two-dimensional fast Fourier transform, followed by an interpolation and then by an inverse-time fast Fourier transform. The technique extends from a vertically inhomogeneous three-dimensional medium to a vertically and laterally inhomogeneous one. The τ-p transform may create artifacts (truncation and aliasing effects), which can be reduced by a finer spatial density of geophone groups, by balancing the seismic data and by tapering its extremities. The τ-p domain serves as a temporary domain in which coherent noise can be attacked effectively; the technique can be viewed as 'time-variant f-k filtering'. In addition, deconvolution and multiple suppression are addressed at least as well in the τ-p domain as in the time-offset domain.
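A minimal time-domain slant stack illustrates the forward τ-p transform; the abstract's implementation works via 2D FFT and interpolation, so this brute-force version is for illustration only:

```python
import numpy as np

def tau_p_transform(data, offsets, dt, p_values):
    """Discrete linear slant stack (forward tau-p transform).

    data     : 2-D array, shape (n_traces, n_samples), d(x, t)
    offsets  : offset of each trace (m)
    dt       : time sample interval (s)
    p_values : ray parameters to evaluate (s/m)

    Returns u(p, tau) = sum over offsets of d(x, tau + p*x), i.e. each
    output trace is a plane-wave experiment with slope p, using linear
    interpolation along each input trace.
    """
    n_traces, n_samples = data.shape
    t = np.arange(n_samples) * dt
    out = np.zeros((len(p_values), n_samples))
    for ip, p in enumerate(p_values):
        for ix, x in enumerate(offsets):
            # sample d(x, tau + p*x); zero outside the recorded window
            out[ip] += np.interp(t + p * x, t, data[ix], left=0.0, right=0.0)
    return out
```

A linear event with slope p0 across the gather stacks coherently into a single point at (p0, τ0), which is the property the abstract's noise-attack and demultiple applications rely on.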

7.
Raw single-shot records from deep seismic reflection surveys are non-stationary, weak-energy reflection signals with a low signal-to-noise ratio, and raising that ratio has long been a major difficulty in the pre-processing of deep seismic reflection data. The S-transform is a time-frequency method suited to analysing non-stationary signals. Compared with other methods for time-varying signals, its basic wavelet need not satisfy the admissibility condition of zero mean in the time domain, its time-frequency resolution depends on the frequency of the analysed signal, its integral over time yields the Fourier spectrum, and its inverse transform is simple. The S-transform therefore readily represents the complex time-frequency characteristics of deep seismic reflection signals. Building on the S-transform, this paper applies soft-threshold filtering to deep seismic reflection data. Experiments show that the method effectively improves the signal-to-noise ratio, suppresses mixed-frequency interference within the effective frequency band and brings out weak reflections, enriching the wave-group information. This helps the continuous tracking of effective reflection wave groups and the identification of thin layers, and in particular improves the resolution of the deep Moho reflection, laying a foundation for the subsequent processing and accurate interpretation of deep seismic reflection profiles.
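The soft-threshold operation at the heart of the method can be sketched as below; it is shown here on generic complex time-frequency coefficients rather than on an actual S-transform, whose computation is omitted for brevity:

```python
import numpy as np

def soft_threshold(coeffs, thresh):
    """Complex soft-threshold (shrinkage) operator for denoising.

    Shrinks the magnitude of each time-frequency coefficient toward
    zero by `thresh` and zeroes anything below it, while preserving
    the phase.  Applied to S-transform coefficients and followed by
    the inverse transform, this suppresses low-amplitude noise while
    keeping weak but coherent reflections.
    """
    mag = np.abs(coeffs)
    # scale factor max(|c| - thresh, 0) / |c|, guarded against /0
    scale = np.maximum(mag - thresh, 0.0) / np.maximum(mag, 1e-30)
    return coeffs * scale
```

The threshold would in practice be set from a noise estimate per frequency band; a fixed value is used here only to keep the sketch self-contained.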

8.
The implementation of a stacking filter involves filtering each trace with an individual filter and then summing all outputs. Neither the actual position of a trace in space nor certain simultaneous shifts of traces and filter components in time influence the process; the resulting output is consequently invariant under various arbitrary coordinate transformations. For a certain useful class of ensembles of non-linear moveout arrival times, a particular transformation can be found which maps a given ensemble onto one consisting only of straight lines. It is thus possible to reduce, for instance, the analysis of a stacking filter designed for hyperbola-like moveout curves to the analysis of a velocity filter with linear moveout curves. As the f-k transform is a very useful concept for describing a velocity filter, it can consequently be applied to characterize a stacking filter with regard to its performance on input signals with non-linear moveout.

9.
Statistical deconvolution, as usually applied on a routine basis, designs an operator from the trace autocorrelation to compress the wavelet that is convolved with the reflectivity sequence. Under the assumption of a white reflectivity sequence (and a minimum-delay wavelet) this simple approach is valid. If the reflectivity is distinctly non-white, however, the deconvolution will confuse the contributions of the wavelet and of the reflectivity to the spectral shape of the trace. Given logs from a nearby well, a simple two-parameter model may be used to describe the power-spectral shape of the reflection coefficients derived from the broadband synthetic. This modelling is attractive in that structure in the smoothed spectrum which is consistent with random effects is not built into the model. The two parameters are used to compute simple inverse- and forward-correcting filters, which can be applied before and after the design and implementation of the standard predictive deconvolution operators. For whitening deconvolution, application of the inverse filter prior to deconvolution is unnecessary, provided the minimum-delay version of the forward filter is used. Application of the technique to seismic data shows the correction procedure to be fast and cheap, and case histories display subtle but important differences between conventionally deconvolved sections and those produced by incorporating the correction procedure into the processing sequence. It is concluded that, even with a moderate amount of non-whiteness, the corrected section can show appreciably better resolution than the conventionally processed section.
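The routine spiking-deconvolution step the abstract builds on can be sketched as a Wiener filter designed from the trace autocorrelation; the operator length and prewhitening fraction are assumed parameters, and the sketch is only valid under the stated white-reflectivity, minimum-delay assumptions:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def spiking_decon(trace, n_lags, prewhite=0.001):
    """Wiener spiking deconvolution from the trace autocorrelation.

    Solves the Toeplitz normal equations R a = e0 for an inverse
    (spiking) operator of length `n_lags`, with a small prewhitening
    fraction added to the zero lag for stability, then applies the
    operator to the trace.
    """
    n = len(trace)
    full = np.correlate(trace, trace, mode="full")
    r = full[n - 1 : n - 1 + n_lags].copy()   # non-negative lags
    r[0] *= 1.0 + prewhite                     # prewhitening
    rhs = np.zeros(n_lags)
    rhs[0] = 1.0                               # desired output: a spike
    a = solve_toeplitz(r, rhs)
    return np.convolve(trace, a)[:n]
```

For a pure minimum-delay wavelet such as (1, -0.5), the operator converges to the exact inverse filter and the output collapses to a spike, which is a convenient correctness check.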

10.
Passive microseismic data are commonly buried in noise, which presents a significant challenge for signal detection and recovery. For recordings from a surface sensor array in which each trace contains a time-delayed arrival from the event, we propose an autocorrelation-based stacking method that designs a denoising filter from all the traces, as well as a multi-channel detection scheme. This approach circumvents the need to time-align the traces prior to stacking, because every trace's autocorrelation is centred at zero in the lag domain. The effect of white noise is concentrated near zero lag, so the filter design requires a predictable adjustment of the zero-lag value. Truncation of the autocorrelation is employed to smooth the impulse response of the denoising filter. To extend the applicability of the algorithm, we also propose a noise-prewhitening scheme that addresses cases with coloured noise. The simplicity and robustness of the method are validated with synthetic and real seismic traces.
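The core idea, stacking single-trace autocorrelations so that no time alignment is needed, can be sketched as:

```python
import numpy as np

def stacked_autocorrelation(traces, max_lag):
    """Stack per-trace autocorrelations without time alignment.

    Because each trace's autocorrelation peaks at zero lag regardless
    of the event's arrival time, traces with different moveout delays
    can be stacked directly in the lag domain.  Uncorrelated white
    noise piles up only near zero lag, which is why the filter design
    in the abstract adjusts the zero-lag value afterwards.
    """
    n = traces.shape[1]
    acc = np.zeros(2 * max_lag + 1)
    for tr in traces:
        full = np.correlate(tr, tr, mode="full")  # zero lag at index n-1
        acc += full[n - 1 - max_lag : n + max_lag]
    return acc / len(traces)
```

Two traces carrying the same wavelet at different delays contribute identical autocorrelations, so the stack equals either one alone, exactly the alignment-free property the method exploits.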

11.
The calculation of dip moveout (DMO) involves spreading the amplitudes of each input trace along the source-receiver axis and then stacking the results into a 3D zero-offset data cube. The offset-traveltime (x-t) domain integral implementation of the DMO operator is very efficient in terms of computation time but suffers from operator aliasing. The log-stretch approach, which uses a logarithmic transformation of the time axis to make the DMO operator time-invariant, can avoid operator aliasing by direct implementation in the frequency-wavenumber (f-k) domain. An alternative technique for log-stretch DMO corrections, using the anti-aliasing filters of the f-k approach in the x-log t domain, is presented here. Conventionally, the 2D filter representing the DMO operator is designed and applied in the f-k domain. The new technique uses a 2D convolution filter acting in single-input/multiple-output trace mode: each input trace is passed through several 1D filters to create the overall DMO response of that trace, and the resulting traces can be stacked directly into the 3D data cube. The single-trace filters are the result of a filter-design technique that reduces the 2D problem to several 1D problems. These filters can be decomposed into a pure time delay and a low-pass filter, representing the kinematic and dynamic behaviour of the DMO operator. The low-pass filters avoid any incidental operator aliasing, and different types of low-pass filter can be used to achieve different amplitude-versus-offset characteristics of the DMO operator.

12.
In conventional seismic exploration, especially marine seismic exploration, shot gathers with missing near-offset traces are common. Interferometric interpolation methods are one of a range of methods developed to solve this problem; they differ from conventional interpolation methods in that they utilise information from multiples in the interpolation process. In this study, we apply both conventional interferometric interpolation (shot domain) and multi-domain interferometric interpolation (shot and receiver domain) to a synthetic and a real towed-marine dataset from the Baltic Sea, with the primary aim of improving the image of the seabed by extrapolation across a near-offset gap. We apply a matching filter after interferometric interpolation to partially mitigate artefacts and coherent noise associated with the far-field approximation and a limited recording aperture. The results show that an improved image of the seabed is obtained after interferometric interpolation. In most cases, the results from multi-domain interferometric interpolation are similar to those from conventional interferometric interpolation; when the source-receiver aperture is limited, however, the multi-domain method performs better. A quantitative analysis of the performance of interferometric interpolation confirms that the multi-domain variant typically performs better than the conventional one. We also benchmark the interpolated results against those obtained using sparse-recovery interpolation.

13.
The common-depth-point method of shooting in oil exploration provides a series of seismic traces which yield information about the substrata at one location. After normal-moveout and static corrections have been applied, the traces are combined by horizontal stacking, or by linear multichannel filtering, into a single record in which the primary reflections have been enhanced relative to the multiple reflections and random noise. The criterion used in optimum horizontal stacking is to maximize the signal-to-noise power ratio, where signal refers to the primary reflection sequence and noise includes the multiple reflections. It is shown when this criterion is equivalent to minimizing the mean-square difference between the desired signal (the primary reflection sequence) and the weighted, horizontally stacked traces. If the seismic traces are combined by multichannel linear filtering, the primary reflection sequence will have undergone some phase and frequency distortion on the resulting record; the signal-to-noise power ratio then becomes a less meaningful design criterion for the optimum linear multichannel filter, and the mean-square criterion is adopted. In general, however, since more a priori information about the seismic traces is required to design the optimum linear multichannel filter than to find the optimum set of weights for horizontal stacking, the former will be an improvement over the latter. It becomes evident that optimum horizontal stacking is a restricted form of linear multichannel filtering.
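Under the simplest version of the abstract's model, the same primary signal on every trace and mutually uncorrelated noise, the SNR-maximising stack reduces to inverse-variance weighting. This closed form is not stated in the abstract and is an assumption of the sketch:

```python
import numpy as np

def optimum_stack(traces, noise_var):
    """Weighted horizontal stack maximising signal-to-noise power ratio.

    traces    : 2-D array, shape (n_traces, n_samples)
    noise_var : per-trace noise variances (noise uncorrelated between
                traces, identical signal amplitude on every trace)

    Under these assumptions the optimum weights are inverse-variance
    weights; normalising them to sum to one preserves the signal
    amplitude, which also minimises the mean-square error up to scale.
    """
    w = 1.0 / np.asarray(noise_var, dtype=float)
    w /= w.sum()
    return w @ traces
```

With equal noise variances this degenerates to the plain average, so the weighting only matters when trace quality actually varies across the gather.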

15.
To make 3D prestack depth migration feasible on modern computers it is necessary to use a target-oriented migration scheme: by limiting the output of the migration to a specific depth interval (the target zone), the efficiency of the scheme is improved considerably. The first step in such a target-oriented approach is redatuming of the shot records from the surface to the upper boundary of the target zone. For this purpose, efficient non-recursive wavefield extrapolation operators must be generated; we propose either a ray-tracing method or the Gaussian-beam method. With both methods, operators can be generated efficiently for any irregular shooting geometry at the surface. As expected, the amplitude behaviour of the Gaussian-beam method is better than that of the ray-tracing-based operators. The redatuming algorithm is performed per shot record, which makes the data handling very efficient. From the shot records at the surface, 'genuine zero-offset data' are generated at the upper boundary of the target zone. Particularly in situations with a complicated overburden, the quality of target-oriented zero-offset data is much better than can be reached with CMP stacking at the surface. The target-oriented zero-offset data can be used as input to a full 3D zero-offset depth-migration scheme, in order to obtain a depth section of the target zone.

16.
In vertical seismic profiles (VSPs) shot with a large source offset, rays from shot to receiver can have large angles of incidence. Shear waves generated by the source and by conversions at interfaces are likely to be recorded by both the vertical and the horizontal geophones, and varying angles of incidence may give strong variations in the recorded amplitudes. Separation of P- and SV-waves and recovery of their full amplitudes are important for proper processing and interpretation of the data. A P-S separation filter for three-component offset VSP data which performs this operation is presented. The separation filter is applied in the k-f domain and needs an estimate of the P- and S-velocities along the borehole as input. Implementation and stability aspects of the filter are considered. The filter was tested on an 1800 m offset VSP and proved robust: large velocity variations along the borehole could be handled, and the results were superior to those obtained by velocity filtering.

17.
A 2D reflection tomographic method is described for estimating an improved macrovelocity field for prestack depth migration. An event-oriented local approach of the 'layer-stripping' type has been developed, in which each input event is defined by its traveltime and a traveltime derivative taken with respect to one of four coordinates in the source/receiver and midpoint/half-offset systems. Recent work has shown that the results of reflection tomography may be improved by performing event picking in a prestack depth domain. We adopt this approach and allow events to be picked either before or after prestack depth migration. Hence, if events have been picked in a depth domain, such as the common-shot or common-offset depth domain, a depth-time transformation is required before velocity estimation. The event transformation may, for example, be done by conventional kinematic ray tracing with respect to the original depth-migration velocity field. By this means, we expect the input events for velocity updating to become less sensitive to migration-velocity errors. For the purpose of velocity estimation, events are subdivided into two categories: reference-horizon events and individual events. The reference-horizon events correspond to a fixed offset and provide basic information about reflector geometry, whereas the individual events, corresponding to any offset, provide the additional information needed for velocity estimation. An iterative updating approach is used, based on the calculation of derivatives of event reflection points with respect to velocity. The event reflection points are obtained by ray-theoretical depth conversion, and the reflection-point derivatives are calculated accurately and efficiently from information pertaining to single rays.
A number of reference-horizon events and a single individual event constitute the minimum information required to update the velocity locally, and the iterations proceed until the individual event's reflection point is consistent with those of the reference-horizon events. Normally, three to four iterations are sufficient to attain convergence. As a by-product of the process, we obtain so-called uncertainty amplification factors, which relate a picking error to the corresponding error in the estimated velocity or depth-horizon position. The vector formulation of the updating relationship makes it applicable to smooth horizons of arbitrary dip, and by applying velocity updating in combination with a flexible model-builder, very general macro-model structures can be obtained. As a first step in evaluating the new method, error-free traveltime events were generated by forward ray tracing within given macrovelocity models. With such 'perfect' observations, the velocity-estimation algorithm gave consistent reconstructions of macro-models containing interfaces with differential dip and curvature, a low-velocity layer and a layer with a laterally varying velocity function.

18.
A new method to suppress water-bottom multiples (water-bottom reverberations) uses the fact that, in the domain of intercept time and ray parameter (the τ-p domain), the water-bottom reverberations are strictly periodic for a horizontally flat sea bottom. Using this property, a comb filter can be designed whose window should be approximately equal to the duration of the source pulse. The algorithm finds the maximum of the periodic energy throughout the τ-p domain and then designs the comb filter which eliminates the water-bottom reverberations from each trace in the τ-p domain. This process can be repeated for higher-order reverberations. Finally, the τ-p domain with attenuated multiples is transformed back to the conventional x-t space. The method is illustrated on a variety of synthetic data and on a set of real marine CMP data acquired in the North Sea near the Norwegian shore.
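A one-notch comb (Backus-type) filter exploiting this periodicity can be sketched as below. It assumes the reverberation train decays by a factor of -r per period; both r (the water-bottom reflection coefficient) and the period in samples are assumed inputs that the abstract's algorithm would estimate from the periodic energy in the τ-p domain:

```python
import numpy as np

def comb_demultiple(trace, period_samples, r):
    """Single-stage comb filter for periodic water-bottom multiples.

    For a strictly periodic reverberation series
        p(t) - r*p(t - T) + r^2*p(t - 2T) - ...
    adding a copy of the trace delayed by one period T and scaled by r
    (operator 1 + r z^T) cancels each multiple against the event one
    period earlier, leaving only the primary p(t).
    """
    out = trace.copy()
    out[period_samples:] += r * trace[:-period_samples]
    return out
```

On a synthetic trace built from that geometric series, the filter removes every multiple bounce exactly and leaves the primary untouched.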

19.
The filter for wave-equation-based water-layer multiple suppression, developed by the authors in the x-t, linear τ-p and f-k domains, is extended to the parabolic τ-p domain. The multiple-reject areas are determined automatically by comparing, in τ-p space, the energy on traces of the multiple model (generated by a wave-extrapolation method from the original data) with the original input data (multiples + primaries). The advantage of applying the data-adaptive 2D demultiple filter in the parabolic τ-p domain is that the waves are well separated there. Numerical examples demonstrate the effectiveness of the dereverberation procedure: filtering of multiples in the parabolic τ-p domain works on both the far-offset and the near-offset traces, whereas filtering in the f-k domain is effective only for the far-offset traces. Tests on a synthetic common-shot-point (CSP) gather show that the demultiple filter is relatively immune to slight errors in water velocity and water depth, provided the resulting arrival-time errors of the multiples in the model traces are less than the time dimension (about one quarter of the wavelet length) of the filter's energy-summation window. The multiples in the predicted multiple-model traces need not be exact replicas of the multiples in the input data, in either a wavelet-shape or a traveltime sense. The demultiple filter also works reasonably well for input data contaminated by up to 25% random noise. A shallow-water CSP gather acquired on the North West Shelf of Australia demonstrates the effectiveness of the technique on real data.

20.
The widespread use of common-depth-point techniques has emphasized the need for accurate static corrections. Manual interpretation methods can give excellent results, but a computer technique is desirable because of the great volume of data recorded in common-depth-point shooting. The redundancy inherent in common-depth-point data may be used to compute a statistical estimate of the static corrections, which are assumed to be time-invariant, surface-consistent and independent of frequency. Surface consistency implies that all traces from a particular shot receive the same shot static correction and all traces from a particular receiver position receive the same receiver correction. Time shifts are computed for all input traces using crosscorrelation functions between common-depth-point traces; the time shift for each trace is composed of a shot static, a receiver static, residual normal moveout (if present) and noise. Estimates of the shot and receiver static corrections are obtained by averaging different sets of the measured time shifts, and time shifts which are greatly in error are detected and removed from the computations. The method is useful for data with a moderate to good signal-to-noise ratio; residual normal moveout should be corrected before estimating the statics. The program estimates the statics needed to stack common-depth-point traces correctly, but it is not sensitive to constant or very slowly changing static errors.
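The surface-consistent decomposition of measured time shifts into shot and receiver terms can be sketched as a least-squares problem. The abstract's program uses averages of time-shift sets; the `lstsq` formulation shown here is an equivalent small-scale stand-in, not the original algorithm:

```python
import numpy as np

def surface_consistent_statics(shot_idx, recv_idx, shifts, n_shots, n_recv):
    """Least-squares split of crosscorrelation shifts into statics.

    Models each measured shift as shot static + receiver static
    (surface consistency) and solves the overdetermined linear system.
    The decomposition has a free constant (a bulk shift can move
    between the shot and receiver terms), which mirrors the abstract's
    remark that constant or very slowly changing errors are not
    resolved by the method.
    """
    n_obs = len(shifts)
    A = np.zeros((n_obs, n_shots + n_recv))
    A[np.arange(n_obs), shot_idx] = 1.0                     # shot term
    A[np.arange(n_obs), n_shots + np.asarray(recv_idx)] = 1.0  # receiver term
    sol, *_ = np.linalg.lstsq(A, np.asarray(shifts, dtype=float), rcond=None)
    return sol[:n_shots], sol[n_shots:]
```

Because of the free constant, only sums s_i + r_j and differences within each group are uniquely determined, which is what the checks below verify.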

