Similar documents
1.
The common depth point method of shooting in oil exploration provides a series of seismic traces which yield information about the substrata layers at one location. After normal moveout and static corrections have been applied, the traces are combined by horizontal stacking, or linear multichannel filtering, into a single record in which the primary reflections have been enhanced relative to the multiple reflections and random noise. The criterion used in optimum horizontal stacking is to maximize the signal-to-noise power ratio, where signal refers to the primary reflection sequence and noise includes the multiple reflections. It is shown under which conditions this criterion is equivalent to minimizing the mean square difference between the desired signal (the primary reflection sequence) and the weighted horizontally stacked traces. If the seismic traces are combined by multichannel linear filtering, the primary reflection sequence will have undergone some phase and frequency distortion on the resulting record. The signal-to-noise power ratio then becomes a less meaningful criterion for designing the optimum linear multichannel filter, and the mean square criterion is adopted. In general, however, since more a priori information about the seismic traces is required to design the optimum linear multichannel filter than is required for the optimum set of weights of the horizontal stacking process, the former will be an improvement over the latter. It becomes evident that optimum horizontal stacking is a restricted form of linear multichannel filtering.
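The weighted stack the abstract describes can be illustrated with a small numerical sketch (our construction, not the paper's derivation): choosing weights inversely proportional to each trace's noise power reduces the mean square difference from the desired signal relative to a plain average.

```python
# Sketch: optimum weighted horizontal stacking with inverse-variance weights.
# The signal, noise levels, and sizes below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_traces = 500, 6
signal = np.sin(2 * np.pi * np.arange(n_samples) / 50.0)  # primary sequence proxy
noise_std = np.array([0.2, 0.3, 0.5, 0.8, 1.0, 1.5])      # per-trace noise level
traces = signal[:, None] + rng.normal(0.0, noise_std, (n_samples, n_traces))

# Inverse-variance weights, normalised so the signal amplitude is preserved.
w = 1.0 / noise_std**2
w /= w.sum()
weighted_stack = traces @ w
plain_stack = traces.mean(axis=1)

def mse(stack):
    """Mean square difference between a stack and the desired signal."""
    return float(np.mean((stack - signal) ** 2))

print(mse(weighted_stack) < mse(plain_stack))
```

With unequal noise levels across the traces, the weighted stack is consistently closer to the desired signal than the unweighted average.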

3.
Different types of median-based methods can be used to improve multichannel seismic data, particularly at the stacking stage in processing. Different applications of the median concept are described and discussed. The most direct application is the Simple Median Stack (SMS), i.e. to use as output the median value of the input amplitudes at each reflection time. By the Alpha-Trimmed Mean (ATM) method it is possible to exclude an optional amount of the input amplitudes that differ most from the median value. A more novel use of the median concept is the Weighted Median Stack (WMS). This method is based on a long-gapped median filter. The implicit weighting, which is purely statistical in nature, is due to the edge effects that occur when the gapped filter is applied. By shifting the traces around before filtering, the maximum weight may be given to, for example, the far-offset traces. The fourth method is the Iterative Median Stack (IMS). This method, which also includes a strong element of weighting, consists of a repeated use of a gapped median filter combined with a gradual shortening of the filter after each pass. Examples show how the seismic data can benefit from the application of these methods.
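The two simplest methods, SMS and ATM, can be sketched in a few lines (the function names and the trim rule are our own reading of the abstract, not the paper's code):

```python
# Sketch of the Simple Median Stack (SMS) and Alpha-Trimmed Mean (ATM) stack.
import numpy as np

def simple_median_stack(gather):
    """SMS: the median of the input amplitudes at each reflection time."""
    return np.median(gather, axis=1)

def alpha_trimmed_mean_stack(gather, alpha=0.2):
    """ATM: at each time sample, drop the fraction `alpha` of amplitudes
    that differ most from the median, then average the remainder."""
    n_keep = max(1, int(round(gather.shape[1] * (1.0 - alpha))))
    out = np.empty(gather.shape[0])
    for t, row in enumerate(gather):
        keep = np.argsort(np.abs(row - np.median(row)))[:n_keep]
        out[t] = row[keep].mean()
    return out

# A noise burst on one trace barely affects either median-based stack.
gather = np.ones((4, 5))          # 4 time samples, 5 traces
gather[2, 0] = 100.0              # burst on the first trace
print(simple_median_stack(gather)[2], alpha_trimmed_mean_stack(gather)[2])
```

Both stacks return 1.0 at the contaminated sample, which is the robustness to bursts that motivates median-based stacking.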

4.
The so-called ‘enhanced migration’ which uses diffraction tomography as the ‘repair tool’ for correction of amplitudes (reflection coefficients) of migrated sections is discussed. As with any linearized procedure, diffraction tomography requires knowledge of the initial model. It is suggested that the initial model be taken as the migrated image. It is demonstrated that diffraction tomography applied to the data residuals improves the amplitudes of the migrated images. Migration is redefined as the reconstruction of the wavefront sets of distributions (reflection interfaces), and the inversion process as tomographic correction of migrated images.

5.
The VLF filtering technique of Karous and Hjelt has been applied to fixed-loop step-response transient electromagnetic data. This allows the data measured in each channel to be converted to an equivalent current-density pseudosection. For a conductive half-space, the maximum value of the equivalent current density starts near the transmitter loop and migrates outwards as a function of delay time. The rate of migration tends to increase as a function of delay time, with the increase being faster for a surficial conductive layer than it is for a half-space. Theoretical and field examples show that the currents tend to be more persistent in the relatively conductive areas, so that a pseudosection which is the average of the current densities at all delay times will highlight the more conductive zones. In resistive ground, it is not so critical to average the pseudosections as a particular delay time may give a better idea of the conductivity structure. For example, the latest possible delay time will reveal the most conductive features.

6.
Elastic redatuming can be carried out before or after decomposition of the multicomponent data into independent PP, PS, SP, and SS responses. We argue that from a practical point of view, elastic redatuming is preferably applied after decomposition. We review forward and inverse extrapolation of decomposed P- and S-wavefields. We use the forward extrapolation operators to derive a model of discrete multicomponent seismic data. This forward model is fully described in terms of matrix manipulations. By applying these matrix manipulations in reverse order we arrive at an elastic processing scheme for multicomponent data in which elastic redatuming plays an essential role. Finally, we illustrate elastic redatuming with a controlled 2D example, consisting of simulated multicomponent seismic data.

7.
Seismic data often contain traces that are dominated by noise; these traces should be removed (edited) before multichannel filtering or stacking. Noise bursts and spikes should be edited before single channel filtering. Spikes can be edited using a running median filter with a threshold; noise bursts can be edited by comparing the amplitudes of each trace to those of traces that are nearby in offset-common midpoint space. Relative amplitude decay rates of traces are diagnostic of their signal-to-noise (S/N) ratios and can be used to define trace editing criteria. The relative amplitude decay rate is calculated by comparing the time-gated trace amplitudes to a control function that is the median trace amplitude as a function of time, offset, and common midpoint. The editing threshold is set using a data-adaptive procedure that analyses a histogram of the amplitude decay rates. A performance evaluation shows that the algorithm makes slightly fewer incorrect trace editing decisions than human editors. The procedure for threshold setting achieves a good balance between preserving the fold of the data and removing the noisiest traces. Tests using a synthetic seismic line show that the relative amplitude decay rates are diagnostic of the traces’ S/N ratios. However, the S/N ratios cannot be usefully estimated at the start of processing, where noisy-trace editing is most needed; this is the fundamental limit to the accuracy of noisy trace editing. When trace equalization is omitted from the processing flow (as in amplitude-versus-offset analysis), precise noisy-trace editing is critical. The S/N ratio of the stack is more sensitive to type 2 errors (failing to reject noisy traces) than it is to type 1 errors (rejecting good traces). However, as the fold of the data decreases, the S/N ratio of the stack becomes increasingly sensitive to type 1 errors.
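The running-median spike editor mentioned at the start of the abstract might look like the sketch below (a generic construction; the paper's actual thresholding uses amplitude decay rates and a data-adaptive histogram, which we do not reproduce here):

```python
# Sketch: spike editing with a running median filter and a MAD-based threshold.
import numpy as np

def despike(trace, window=5, threshold=4.0):
    """Replace samples that deviate from the running median by more than
    `threshold` times the median absolute deviation of the residuals."""
    half = window // 2
    padded = np.pad(trace, half, mode="edge")
    med = np.array([np.median(padded[i:i + window]) for i in range(len(trace))])
    resid = np.abs(trace - med)
    mad = np.median(resid) + 1e-12       # guard against an all-zero residual
    out = trace.copy()
    bad = resid > threshold * mad
    out[bad] = med[bad]
    return out

rng = np.random.default_rng(0)
trace = np.sin(np.linspace(0.0, 4.0 * np.pi, 200)) + 0.05 * rng.standard_normal(200)
trace[60] += 25.0                        # isolated spike
edited = despike(trace)
print(abs(trace[60]) > 20.0, abs(edited[60]) < 2.0)
```

The spike is replaced by the local median while samples consistent with their neighbours pass through largely unchanged.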

8.
Three methods for least-squares inversion of receiver array-filtered seismic data are investigated: (1) point receiver inversion where array effects are neglected; (2) preprocessing of the data with an inverse array filter, followed by point receiver inversion; (3) array inversion, where the array effects are included in the forward modelling. The methods are tested on synthetic data generated using the acoustic wave equation and a horizontally stratified earth model. It is assumed that the group length and the group interval are identical. For arrays that are shorter than the minimum wavelength of the emitted wavefield, and when the data are appropriately muted, point receiver inversion (first method) gives satisfactory results. For longer arrays, array inversion (third method) should be used. The failure of the inverse array filter (second method) is due to aliasing problems in the data.

9.
The annual variations in the three types of deformation data recorded at Dahuichang are clearly related to variations in rainfall and temperature, which to some extent mask the useful information in the curves. The pronounced changes that began in 1975, before the Tangshan earthquake, cannot be explained by these disturbing factors. Because of the rainfall and temperature disturbances, it is difficult to determine the amplitude of the anomaly and its onset time. We used multichannel Wiener prediction filtering to predict the outputs (short-baseline levelling, connected water-tube tiltmeter and extensometer data) from the inputs (rainfall and temperature). The difference between the predicted output and the observed data (the prediction residual) is the effective information we seek. The anomalous change in the Dahuichang short-baseline levelling began in April 1975, more than a year before the Tangshan earthquake. A clear short-term anomaly was observed about three months before the earthquake. The total anomaly amplitude is about 2 mm (on a 26 m levelling baseline). The anomaly indicates that the Babaoshan fault exhibited thrust-type activity before the Tangshan earthquake.
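A drastically simplified sketch of the idea follows, with ordinary least squares standing in for the multichannel Wiener prediction filter; the synthetic series, coefficients, and anomaly are all illustrative assumptions:

```python
# Sketch: predict deformation from environmental inputs, keep the residual.
import numpy as np

rng = np.random.default_rng(7)
n = 500
rain = rng.standard_normal(n)                     # rainfall proxy
temp = np.sin(2 * np.pi * np.arange(n) / 365.0)   # annual temperature wave
anomaly = np.where(np.arange(n) > 400, 0.5, 0.0)  # step-like "precursor"
leveling = 0.8 * rain + 1.5 * temp + anomaly + 0.05 * rng.standard_normal(n)

# Predict the output (levelling) from the inputs (rain, temperature);
# the prediction residual carries what the inputs cannot explain.
A = np.column_stack([rain, temp, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, leveling, rcond=None)
residual = leveling - A @ coef

step = residual[450:].mean() - residual[:300].mean()
print(step > 0.1)
```

The rainfall- and temperature-driven parts are removed (the residual is orthogonal to the inputs by construction), while the step-like anomaly survives in the residual.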

10.
The Karhunen-Loève transform, which optimally extracts coherent information from multichannel input data in a least-squares sense, is used for two specific problems in seismic data processing. The first is the enhancement of stacked seismic sections by a reconstruction procedure which increases the signal-to-noise ratio by removing from the data that information which is incoherent trace-to-trace. The technique is demonstrated on synthetic data examples and works well on real data. The Karhunen-Loève transform is useful for data compression for the transmission and storage of stacked seismic data. The second problem is the suppression of multiples in CMP or CDP gathers. After moveout correction with the velocity associated with the multiples, the gather is reconstructed using the Karhunen-Loève procedure, and the information associated with the multiples omitted. Examples of this technique for synthetic and real data are presented.
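The reconstruction step can be sketched with the singular value decomposition, whose eigenimages form the Karhunen-Loève basis of the section (a generic illustration on synthetic numbers, not the paper's data):

```python
# Sketch: Karhunen-Loève (eigenimage) reconstruction of a noisy section.
import numpy as np

rng = np.random.default_rng(1)
n_t, n_x = 300, 24                        # time samples x traces
event = np.sin(2 * np.pi * np.arange(n_t) / 60.0)
coherent = np.outer(event, np.ones(n_x))  # flat event, coherent trace-to-trace
section = coherent + 0.5 * rng.standard_normal((n_t, n_x))

U, s, Vt = np.linalg.svd(section, full_matrices=False)
k = 1                                     # number of eigenimages to retain
recon = (U[:, :k] * s[:k]) @ Vt[:k, :]

err_before = np.linalg.norm(section - coherent)
err_after = np.linalg.norm(recon - coherent)
print(err_after < err_before)
```

Keeping only the leading eigenimage retains the trace-to-trace coherent energy and discards most of the incoherent noise; the same truncation is what makes the transform attractive for data compression.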

11.
Model-based inversion of seismic reflection data is a global optimization problem when prior information is sparse. We investigate the use of an efficient, global, stochastic optimization method, that of simulated annealing, for determining the two-way traveltimes and the reflection coefficients. We exploit the advantage of an ensemble approach to the inversion of full-scale target zones on 2D seismic sections. In our ensemble approach, several copies of the model-algorithm system are run in parallel. In this way, estimation of true ensemble statistics for the process is made possible, and improved annealing schedules can be produced. It is shown that the method can produce reliable results efficiently in the 2D case, even when prior information is sparse.
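A minimal single-chain sketch of simulated annealing follows (the paper runs an ensemble of such chains in parallel to estimate ensemble statistics; the misfit function and cooling schedule here are toy choices):

```python
# Sketch: simulated annealing on a one-parameter toy misfit.
import math
import random

random.seed(42)

def misfit(x):
    return (x - 3.0) ** 2                # stand-in for a traveltime misfit

x = best = 10.0                          # poor starting model
T = 1.0                                  # initial temperature
for _ in range(2000):
    cand = x + random.gauss(0.0, 0.5)    # random perturbation of the model
    d = misfit(cand) - misfit(x)
    if d < 0.0 or random.random() < math.exp(-d / T):
        x = cand                         # Metropolis acceptance rule
    if misfit(x) < misfit(best):
        best = x
    T *= 0.995                           # geometric cooling schedule

print(misfit(best))
```

Early in the run the high temperature lets the chain escape local structure; as T decays the walk settles near the global minimum.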

12.
Depth migration consists of two different steps: wavefield extrapolation and imaging. The wave propagation is firmly founded on a mathematical framework, and is simulated by solving different types of wave equations, dependent on the physical model under investigation. In contrast, the imaging part of migration is usually based on ad hoc ‘principles’, rather than on a physical model with an associated mathematical expression. The imaging is usually performed using the U/D concept of Claerbout (1971), which states that reflectors exist at points in the subsurface where the first arrival of the downgoing wave is time-coincident with the upgoing wave. Inversion can, as with migration, be divided into the two steps of wavefield extrapolation and imaging. In contrast to the imaging principle in migration, imaging in inversion follows from the mathematical formulation of the problem. The image with respect to the bulk modulus (or velocity) perturbations is proportional to the correlation between the time derivatives of a forward-propagated field and a backward-propagated residual field (Lailly 1984; Tarantola 1984). We assume a physical model in which the wave propagation is governed by the 2D acoustic wave equation. The wave equation is solved numerically using an efficient finite-difference scheme, making simulations in realistically sized models feasible. The two imaging concepts of migration and inversion are tested and compared in depth imaging from a synthetic offset vertical seismic profile section. In order to test the velocity sensitivity of the algorithms, two erroneous input velocity models are tested. We find that the algorithm founded on inverse theory is less sensitive to velocity errors than depth migration using the more ad hoc U/D imaging principle.

13.
First breaks of 2D deep reflection data were used to construct velocity-depth models for improved static corrections to a deeper datum level and for geological interpretations. The highly redundant traveltime data were automatically picked and transformed directly into a velocity-depth model by maximum depth methods such as the Giese and Slichter methods. Comparisons with the results of synthetic calculations and a tomographic approach using iterative inversion methods (ART, SIRT) showed that maximum depth methods provide reliable velocity models as a basis for the computation of static corrections. These methods can be applied economically during data acquisition in the field. They provide particularly long-period static anomalies, which are of the order of 20–40 ms (0.5–1 wavelength) within CMP gathers of an example of a deep reflection profile in SW-Germany sited on crystalline basement. Reprocessing of this profile, which was aimed at the comparison between the effects of the originally used and the new statics, did not result in dramatically improved stacking quality but showed a subtle influence on the detailed appearance of deep crustal events.

14.
Seismic stratigraphy and sedimentological studies of the Gemlik Gulf in the Sea of Marmara, Turkey, have been carried out. For this purpose, 19 lines totalling 189 km of excellent quality, high-resolution seismic data were recorded. Four major acoustic units were identified in the seismic profiles. Three were sedimentary units: irregular layered, cross-layered and well-layered; and the fourth was an acoustic basement which is probably composed of crystalline volcanic rocks. Some local areas in the Neogene formation contain gas accumulations. The formation of faults in E–W and N–S directions can be explained by the existence of shear stresses in the Gulf. The bathymetric map agrees well with the shoreline, as does the tectonic map.

15.
The moveout of P-SV mode-converted seismic reflection events in a common-midpoint gather is non-hyperbolic. This is true even if the medium has constant P- and SV-wave velocities. Furthermore, reflection-point smear occurs even along horizontal reflectors. These effects reduce the resolution of the zero-offset stack. In such a medium, the generalization of the dip moveout transformation to P-SV data can be calculated analytically. The resulting P-SV dip moveout operators solve the problem of reflection-point smear, and image any reflector regardless of dip or depth. The viability of this technique is demonstrated on synthetic and field data.

16.
The development of crosshole seismic tomography as an imaging method for the subsurface has been hampered by the scarcity of real data. For boreholes in excess of a few hundred metres depth, crosshole seismic data acquisition is still a poorly developed and expensive technology. A partial solution to this relative lack of data has been achieved by the use of an ultrasonic seismic modelling system. Such ultrasonic data, obtained in the laboratory from physical models, provide a useful test of crosshole imaging software. In particular, ultrasonic data have been used to test the efficacy of a convolutional back-projection algorithm, designed for crosshole imaging. The algorithm is described and shown to be less susceptible to noise contamination than a Simultaneous Iterative Reconstruction Technique (SIRT) algorithm, and much more computationally efficient.

17.
Application of Kalman filtering to the processing of settlement monitoring data
Kalman filtering is applied to the analysis of building deformation monitoring data. The approach to constructing a discrete Kalman filter model is presented, together with the corresponding precision assessment formulae. Using settlement observations from a building in Xi'an, a discrete Kalman filter model was established and the observations were processed with a Matlab program. The resulting plots show that the filtered curve follows essentially the same trend as the original observation curve. The model reproduces the pattern of building settlement well and is very effective in improving the precision of settlement monitoring data processing.
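A minimal discrete Kalman filter of the kind described can be sketched as follows (in Python rather than Matlab; the constant-rate state model and all noise settings are illustrative assumptions, not the paper's):

```python
# Sketch: discrete Kalman filter for noisy settlement (levelling) readings.
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])    # state: [settlement level, rate]
H = np.array([[1.0, 0.0]])               # only the level is observed
Q = 1e-4 * np.eye(2)                     # process noise covariance
R = np.array([[0.25]])                   # measurement noise covariance

def kalman_filter(zs):
    x = np.array([[zs[0]], [0.0]])       # initial state from first reading
    P = np.eye(2)
    levels = []
    for z in zs:
        x = F @ x                        # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R              # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        levels.append(x[0, 0])
    return np.array(levels)

rng = np.random.default_rng(3)
true_level = -0.02 * np.arange(100)                 # steady settlement
obs = true_level + rng.normal(0.0, 0.5, 100)        # noisy levelling data
filtered = kalman_filter(obs)
print(np.mean((filtered - true_level) ** 2) < np.mean((obs - true_level) ** 2))
```

The filtered curve follows the trend of the raw observations while suppressing measurement noise, which is the behaviour the abstract reports.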

18.
The Wiener prediction filter has been an effective tool for accomplishing dereverberation when the input data are stationary. For nonstationary data, however, the performance of the Wiener filter is often unsatisfactory. This is not surprising since it is derived under the stationarity assumption. Dereverberation of nonstationary seismic data is here accomplished with a difference equation model having time-varying coefficients. These time-varying coefficients are in turn expanded in terms of orthogonal functions. The kernels of these orthogonal functions are then determined according to the adaptive algorithm of Nagumo and Noda. It is demonstrated that the present adaptive predictive deconvolution method, which combines the time-varying difference equation model with the adaptive method of Nagumo and Noda, is a powerful tool for removing both the long- and short-period reverberations. Several examples using both synthetic and field data illustrate the application of adaptive predictive deconvolution. The results of applying the Wiener prediction filter and the adaptive predictive deconvolution on nonstationary data indicate that the adaptive method is much more effective in removing multiples. Furthermore, the criteria for selecting various input parameters are discussed. It has been found that the output trace from the adaptive predictive deconvolution is rather sensitive to some input parameters, and that the prediction distance is by far the most influential parameter.
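The stationary baseline, a gapped Wiener prediction filter, can be sketched as a least-squares problem; this is our generic construction for a synthetic stationary trace, not the paper's adaptive time-varying scheme:

```python
# Sketch: gapped prediction-error filtering for dereverberation.
import numpy as np

def prediction_error(trace, n_filter=40, gap=25):
    """Design a least-squares filter that predicts trace[s] from the
    n_filter samples at lags gap .. gap+n_filter-1 (the prediction
    distance is `gap`), then output the prediction error: the
    unpredictable, reverberation-free part of the trace."""
    N = len(trace)
    rows, targets = [], []
    for t in range(n_filter, N - gap):
        rows.append(trace[t - n_filter:t][::-1])   # samples at lags gap..gap+n_filter-1
        targets.append(trace[t + gap - 1])
    A, b = np.asarray(rows), np.asarray(targets)
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    out = trace.copy()
    for i, t in enumerate(range(n_filter, N - gap)):
        out[t + gap - 1] = trace[t + gap - 1] - A[i] @ f
    return out

# Impulse at t=50 followed by water-layer multiples every 30 samples.
N = 400
trace = np.zeros(N)
for k in range(12):
    trace[50 + 30 * k] = 0.7 ** k
out = prediction_error(trace)
print(out[50], np.abs(out[100:]).max())   # primary kept, multiples removed
```

Because the reverberation train is perfectly periodic (stationary), the filter predicts and removes every multiple while the unpredictable primary at t=50 passes through untouched; it is exactly this assumption that breaks down on nonstationary data and motivates the adaptive method.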

19.
The one-dimensional seismic inverse problem consists of recovering the acoustic impedance (or reflectivity function) as a function of traveltime from the reflection response of a horizontally layered medium excited by a plane-wave impulsive source. Most seismic sources behave like point sources, and the data must be corrected for geometrical spreading before the inversion procedure is applied. This correction is usually not exact because the geometrical spreading is different for primary and multiple reflections. An improved algorithm is proposed which takes the geometrical spreading from a point source into account. The zero-offset reflection response from a stack of homogeneous layers of variable thickness is used to compute the thickness, velocity and density of each layer. This is possible because the geometrical spreading contains additional information about the velocities.

20.
Processing geoelectric resistivity observations with an improved Wiener filter
An improved Wiener filter was applied to the geoelectric resistivity data from Qingdao, removing the influences of groundwater level, temperature, rainfall and air pressure and thereby improving the reliability of the data. After processing, all three measurement directions of the geoelectric resistivity showed clear short-term and imminent anomalies 11–12 days before the Yellow Sea Ms 5.3 earthquake, and the anomalies appeared almost synchronously.
