Similar Documents
1.
We present a novel approach to automated volume extraction in seismic data and apply it to the detection of allochthonous salt bodies. Using a genetic algorithm, we determine the optimal size of the volume elements that, statistically according to the U-test, best characterize the contrast between the textures inside and outside the salt bodies through a principal component analysis approach. We then use this information to implement a seeded region-growing algorithm that extracts the bodies directly from the cube of seismic amplitudes. We present the resulting three-dimensional bodies and compare our final results to those of an interpreter, with encouraging results.
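The final extraction step described above, seeded region growing over the amplitude cube, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the tolerance-based acceptance test, the 6-connectivity, and the toy cube are all assumptions.

```python
import numpy as np
from collections import deque

def region_grow(volume, seed, tol):
    """Grow a connected region from `seed`, accepting 6-connected
    neighbours whose amplitude is within `tol` of the seed value."""
    mask = np.zeros(volume.shape, dtype=bool)
    seed_val = volume[seed]
    queue = deque([seed])
    mask[seed] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        p = queue.popleft()
        for d in offsets:
            q = tuple(p[i] + d[i] for i in range(3))
            if all(0 <= q[i] < volume.shape[i] for i in range(3)) and not mask[q]:
                if abs(volume[q] - seed_val) <= tol:
                    mask[q] = True
                    queue.append(q)
    return mask

# toy cube: a high-amplitude 'salt body' embedded in a zero background
cube = np.zeros((10, 10, 10))
cube[3:7, 3:7, 3:7] = 5.0
body = region_grow(cube, (5, 5, 5), tol=1.0)
print(int(body.sum()))  # 64 voxels recovered
```

In the paper the seed and element size come from the genetic-algorithm/U-test stage; here the seed is simply placed inside the toy body.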

2.
Seismic sections used in interpretation are actually images. We often superimpose colour-coded pictures of seismic attributes on seismic sections, so it seems straightforward to use image processing algorithms to enhance the quality of the seismic images. From an image processing point of view, seismic horizons can be thought of as edges in the seismic image. We present a novel approach to detecting seismic horizons that combines 2D median filtering of the instantaneous phase attribute with an edge detection algorithm. The resulting edge-magnitude picture provides a skeletonized image of the seismic section, on which structural and stratigraphic patterns can be better recognized.

3.
Seismic facies analysis is a well-established technique in the workflow followed by seismic interpreters. Typically, huge volumes of seismic data are scanned to derive maps of interesting features and to find particular patterns, correlating them with the subsurface lithology and lateral changes in the reservoir. In this paper, we show how seismic facies analysis can be accomplished in a way that is effective and complementary to the usual one. Our idea is to translate the seismic data into the musical domain through a process called sonification, based mainly on a very accurate time–frequency analysis of the original seismic signals. From these sonified seismic data, we extract several original musical attributes for seismic facies analysis, and we show that they can capture and explain underlying stratigraphic and structural features. Moreover, we introduce a complete workflow for seismic facies analysis that starts exclusively from musical attributes, based on state-of-the-art machine learning techniques applied to the classification of these attributes. We apply this workflow to two case studies: a sub-salt two-dimensional seismic section and a three-dimensional seismic cube. Seismic facies analysis through musical attributes proves very useful in enhancing the interpretation of complicated structural features and in anticipating the presence of hydrocarbon-bearing layers.

4.
5.
In seismic interpretation and seismic data analysis, it is of critical importance to effectively identify certain geologic formations from very large seismic data sets. In particular, the problem of salt characterization from seismic data can lead to important savings in time during the interpretation process if solved efficiently and in an automatic manner. In this work, we present a novel numerical approach that automatically segments and identifies salt structures from a post-stack seismic data set with minimal intervention from the interpreter. The proposed methodology is based on the recent theory of sparse representation and consists of three major steps: first, a supervised learning step, assisted by the user, which is performed only once; second, a segmentation process via unconstrained ℓ1 optimization; and finally, a post-processing step based on signal separation. Furthermore, since the second step depends only upon local information at each time, the whole process greatly benefits from parallel computing platforms. We conduct numerical experiments on a synthetic 3D seismic data set, demonstrating the viability of our method. More specifically, we found that the proposed approach matches the corresponding 3D velocity model, available in advance, by up to 98.53%. Finally, in Appendices A and B, we present a convergence analysis providing theoretical guarantees for the proposed method.
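Unconstrained ℓ1 optimization of the kind used in the second step is commonly solved by iterative shrinkage. The sketch below is a generic ISTA solver, not the authors' algorithm: the random dictionary, the sparse test vector, and all parameter values are assumptions.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam * ||x||_1 (elementwise shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista(A, b, lam, n_iter=200):
    """Minimise 0.5*||Ax - b||^2 + lam*||x||_1 by iterative
    shrinkage-thresholding with a fixed 1/L gradient step."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - (A.T @ (A @ x - b)) / L, lam / L)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 42, 77]] = [2.0, -3.0, 1.5]      # 3-sparse ground truth
b = A @ x_true
x_hat = ista(A, b, lam=0.1)
# the recovered coefficients concentrate on the true support
print(np.argsort(np.abs(x_hat))[-3:])
```

The same shrinkage step is what makes the process local and thus easy to parallelise, as the abstract notes.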

6.
We present the chain of time-reverse modeling, image-space wavefield decomposition, and several imaging conditions as a migration-like algorithm called time-reverse imaging. The algorithm locates subsurface sources in passive seismic data and diffractors in active data. We use elastic propagators to capitalize on the full waveforms available in multicomponent data, although an acoustic example is presented as well. For the elastic case, we perform wavefield decomposition in the image domain with spatial derivatives to calculate P and S potentials. To locate sources, the time axis is collapsed by extracting the zero-lag of auto- and cross-correlations to return images in physical space. The impulse response of the algorithm is strongly dependent on the acquisition geometry and needs to be evaluated with point sources before processing field data. Band-limited data processed with these techniques image the radiation pattern of the source rather than just its location. We present several imaging conditions, but others could be designed to investigate specific hypotheses concerning the nature of the source mechanism. We illustrate this flexible technique with synthetic 2D passive data examples and a surface acquisition geometry specifically designed to investigate tremor-type signals that are not easily identified or interpreted in the time domain.

7.
In many land seismic situations, the complex seismic wave propagation effects in the near-surface area, due to its unconsolidated character, deteriorate the image quality. Although several methods have been proposed to address this problem, the negative impact of complex 3D near-surface structures remains largely unsolved. This paper presents a complete 3D data-driven solution for the near-surface problem based on 3D one-way traveltime operators, which extends our previous attempts that were limited to the 2D situation. Our solution is composed of four steps: 1) seismic wave propagation from the surface to a suitable datum reflector is described by parametrized one-way propagation operators, with all parameters estimated by a new genetic algorithm, the self-adjustable input genetic algorithm, in an automatic and purely data-driven way; 2) surface-consistent residual static corrections are estimated to accommodate the fast variations in the near-surface area; 3) a replacement velocity model is estimated from the traveltime operators in the good-data area (without the near-surface problem); 4) data interpolation and surface-layer replacement based on the estimated traveltime operators and the replacement velocity model are carried out in an interleaved manner, in order both to remove the near-surface imprints in the original data and to keep the valuable geological information above the datum. Our method is demonstrated on a subset of a 3D field data set from the Middle East, yielding encouraging results.

8.
The refraction convolution section (RCS) is a new method for imaging shallow seismic refraction data. It is a simple and efficient approach to full-trace processing which generates a time cross-section similar to the familiar reflection cross-section. The RCS advances the interpretation of shallow seismic refraction data by including both time structure and amplitudes within a single presentation. The RCS is generated by the convolution of forward and reverse shot records. The convolution operation effectively adds the first-arrival traveltimes of each pair of forward and reverse traces and produces a measure of the depth to the refracting interface in units of time, equivalent to the time-depth function of the generalized reciprocal method (GRM). Convolution also multiplies the amplitudes of the first-arrival signals. To a good approximation, this operation compensates for the large effects of geometrical spreading, with the result that the convolved amplitude is essentially proportional to the square of the head coefficient. The signal-to-noise (S/N) ratios of the RCS show much less variation than those of the original shot records. The head coefficient is approximately proportional to the ratio of the specific acoustic impedances in the upper layer and in the refractor. The convolved amplitudes, or the equivalent shot amplitude products, can be useful in resolving ambiguities in the determination of wave speeds. The RCS can also include a separation between each pair of forward and reverse traces in order to accommodate the offset distance, in a manner similar to the XY spacing of the GRM. The use of finite XY values improves the resolution of lateral variations in both amplitudes and time-depths. The use of amplitudes with 3D data effectively improves the spatial resolution of wave speeds by almost an order of magnitude. Amplitudes provide a measure of refractor wave speeds at each detector, whereas the analysis of traveltimes provides a measure over several detectors, commonly a minimum of six. The ratio of amplitudes obtained with different shot azimuths provides a detailed qualitative measure of azimuthal anisotropy and, in turn, of rock fabric. The RCS facilitates the stacking of refraction data in a manner similar to the common-midpoint methods of reflection seismology, and it can significantly improve S/N ratios. Most of the data processing with the RCS, as with the GRM, is carried out in the time domain rather than in the depth domain. This is a significant advantage because the realities of undetected layers, incomplete sampling of the detected layers, and inappropriate sampling in the horizontal rather than the vertical direction result in traveltime data that are neither a complete, accurate, nor representative portrayal of the wave-speed stratification. The RCS facilitates the advancement of shallow refraction seismology through the application of current seismic reflection acquisition, processing and interpretation technology.
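The core RCS operation, convolving a forward and a reverse trace so that first-arrival traveltimes add and amplitudes multiply, can be sketched on spike traces. The sample interval, arrival times, and amplitudes below are toy assumptions, not values from the paper.

```python
import numpy as np

dt = 0.001                       # sample interval in seconds (assumed)
n = 512

def spike_trace(t_arrival, amp):
    """A trace that is zero except for a first-arrival spike."""
    tr = np.zeros(n)
    tr[round(t_arrival / dt)] = amp
    return tr

# forward and reverse first arrivals at one geophone (toy numbers)
fwd = spike_trace(0.120, 0.8)
rev = spike_trace(0.150, 0.5)

# full-trace convolution: traveltimes add, amplitudes multiply
rcs = np.convolve(fwd, rev)
t_sum = np.argmax(np.abs(rcs)) * dt
print(t_sum, rcs.max())          # ~0.27 s; amplitude 0.8 * 0.5 = 0.4
```

The summed time plays the role of the GRM time-depth, and the amplitude product is the quantity that is approximately proportional to the square of the head coefficient.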

9.
In certain seismic data processing and interpretation tasks such as spiking deconvolution, tuning analysis, impedance inversion, and spectral decomposition, it is commonly assumed that the vertical direction is normal to reflectors. This assumption is false in the case of dipping layers and may therefore lead to inaccurate results. To overcome this limitation, we propose a coordinate system in which geometry follows the shape of each reflector and the vertical direction corresponds to normal reflectivity. We call this coordinate system stratigraphic coordinates. We develop a constructive algorithm that transfers seismic images into the stratigraphic coordinate system. The algorithm consists of two steps. First, local slopes of seismic events are estimated by plane‐wave destruction; then structural information is spread along the estimated local slopes, and horizons are picked everywhere in the seismic volume by the predictive‐painting algorithm. These picked horizons represent level sets of the first axis of the stratigraphic coordinate system. Next, an upwind finite‐difference scheme is used to find the two other axes, which are perpendicular to the first axis, by solving the appropriate gradient equations. After seismic data are transformed into stratigraphic coordinates, seismic horizons should appear flat, and seismic traces should represent the direction normal to the reflectors. Immediate applications of the stratigraphic coordinate system are in seismic image flattening and spectral decomposition. Synthetic and real data examples demonstrate the effectiveness of stratigraphic coordinates.  相似文献   

10.
Optimization of sub-band coding method for seismic data compression
Seismic data volumes, which require huge transmission capacities and massive storage media, continue to increase rapidly due to acquisition of 3D and 4D multiple streamer surveys, multicomponent data sets, reprocessing of prestack seismic data, calculation of post-stack seismic data attributes, etc. We consider lossy compression as an important tool for efficient handling of large seismic data sets. We present a 2D lossy seismic data compression algorithm, based on sub-band coding, and we focus on adaptation and optimization of the method for common-offset gathers. The sub-band coding algorithm consists of five stages: first, a preprocessing phase using an automatic gain control to decrease the non-stationary behaviour of seismic data; second, a decorrelation stage using a uniform analysis filter bank to concentrate the energy of seismic data into a minimum number of sub-bands; third, an iterative classification algorithm, based on an estimation of variances of blocks of sub-band samples, to classify the sub-band samples into a fixed number of classes with approximately the same statistics; fourth, a quantization step using a uniform scalar quantizer, which gives an approximation of the sub-band samples to allow for high compression ratios; and fifth, an entropy coding stage using a fixed number of arithmetic encoders matched to the corresponding statistics of the classified and quantized sub-band samples to achieve compression. Decompression basically performs the opposite operations in reverse order. We compare the proposed algorithm with three other seismic data compression algorithms. The high performance of our optimized sub-band coding method is supported by objective and subjective results.
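The fourth stage, the uniform scalar quantizer, can be sketched in a few lines. This is a generic illustration, not the paper's implementation: the step size and the Gaussian stand-in for the sub-band samples are assumptions, and no classification or entropy coding is shown.

```python
import numpy as np

def quantize(x, step):
    """Uniform scalar quantizer: map each sample to an integer index."""
    return np.round(x / step).astype(int)

def dequantize(idx, step):
    """Reconstruct an approximation of the samples from the indices."""
    return idx * step

rng = np.random.default_rng(0)
subband = rng.standard_normal(1000)   # stand-in for one class of sub-band samples
step = 0.1                            # quantizer step size (assumed)

idx = quantize(subband, step)
rec = dequantize(idx, step)

# the quantization error is bounded by half the step size
print(np.max(np.abs(subband - rec)))
```

The integer indices `idx` are what the arithmetic encoders of the fifth stage would compress; a coarser `step` trades reconstruction error for compression ratio.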

11.
Singularity exponents from wavelet-based multiscale analysis: a new seismic attribute
Traditional seismic interpretation mainly examines variations in seismic amplitude and phase, but amplitude can also mask the true geology of the subsurface. In many cases, important geological information may be carried by singularity parameters unrelated to amplitude characteristics. This paper proposes the singularity exponent, i.e., the Hölder exponent α (also known as the Lipschitz exponent), as a new seismic attribute that accurately reflects the positions and strengths of singularities in the data. The Hölder exponent α measures the singularity strength at a point, or within a small neighbourhood of a point; a large α indicates weak singularity (high regularity). Our study shows that α can serve as a natural seismic attribute that precisely delineates stratigraphic boundaries. Based on this idea, we perform wavelet-based multiscale analysis of synthetic and field seismic data. The results show that the α obtained by our algorithm improves the delineation of stratigraphic boundaries that are not evident in conventional seismic amplitude displays.
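The Hölder exponent α at a point can be estimated by regressing the logarithm of the wavelet-transform modulus against the logarithm of scale. The sketch below is a generic illustration of this idea, not the paper's algorithm: the Mexican-hat wavelet, the L1 normalisation, the scale range, and the test cusp |t|^0.5 (whose Hölder exponent at t = 0 is 0.5) are all assumptions.

```python
import numpy as np

def mexican_hat(x):
    """Second-derivative-of-Gaussian wavelet (two vanishing moments)."""
    return (1.0 - x**2) * np.exp(-x**2 / 2.0)

def holder_exponent(time, signal, t0, scales):
    """Estimate the Hölder exponent at t0 as the slope of
    log|W(s, t0)| versus log(s), with an L1-normalised wavelet,
    since |W(s, t0)| scales like s**alpha for a cusp |t - t0|**alpha."""
    dt = time[1] - time[0]
    mags = []
    for s in scales:
        psi = mexican_hat((time - t0) / s) / s   # dilated, L1-normalised
        mags.append(abs(np.sum(signal * psi) * dt))
    slope, _ = np.polyfit(np.log(scales), np.log(mags), 1)
    return slope

# the cusp |t|^0.5 has Hölder exponent 0.5 at t = 0
time = np.arange(-2.0, 2.0, 0.001)
signal = np.abs(time) ** 0.5
alpha = holder_exponent(time, signal, 0.0, np.geomspace(0.01, 0.2, 10))
print(alpha)   # close to 0.5
```

In an attribute volume this regression would be repeated at every sample, giving low α along sharp stratigraphic boundaries and high α in smooth intervals.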

12.
Prestack wave-equation migration has proved to be a very accurate shot-by-shot imaging tool. However, 3D imaging of a large field acquisition with this technique, especially one with hundreds of thousands of shots, is prohibitively costly. Simply adapting the technique to migrate many superposed shot-gathers simultaneously would render 3D wavefield prestack migration cost-effective, but it introduces uncontrolled non-physical interference among the shot-gathers, making the final image useless. However, it has been observed that multishot signal interference can be kept under some control by averaging over many such images, if each multishot migration is modified by a random phase encoding of the frequency spectra of the seismic traces. In this article, we analyse this technique, giving a theoretical basis for its observed behaviour: the error of the image produced by averaging over M phase-encoded migrations decreases as M⁻¹. Furthermore, we expand the technique and define a general class of Monte-Carlo encoding methods for which the noise variance of the average imaging condition decreases as M⁻¹; these methods thus all converge asymptotically to the correct reflectivity map, without generating prohibitive costs. The theoretical asymptotic behaviour is illustrated for three such methods on a 2D test case. Numerical verification in 3D is then presented for one such method, implemented with a 3D PSPI extrapolation kernel, for two test cases: the SEG–EAGE salt model and a real test constructed from field data.
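The M⁻¹ variance decay can be illustrated with a toy model in which a random ±1 code makes the crosstalk terms zero-mean while leaving the signal (diagonal) terms untouched. The matrix of shot interactions and all sizes below are assumptions for illustration, not data from the article.

```python
import numpy as np

rng = np.random.default_rng(42)
n_shots = 8
# x[i, j]: cross-contribution of shot pair (i, j) at one image point;
# the diagonal is the physical signal, off-diagonal terms are crosstalk
x = rng.standard_normal((n_shots, n_shots))
true_image = np.trace(x)

def phase_encoded_estimate(M):
    """Average M encoded 'migrations'. Each draws random +/-1 codes c,
    so the signal terms c_i**2 * x_ii survive exactly while the
    crosstalk terms c_i * c_j * x_ij (i != j) have zero mean."""
    vals = np.empty(M)
    for k in range(M):
        c = rng.choice([-1.0, 1.0], size=n_shots)
        vals[k] = c @ x @ c          # sum_ij c_i c_j x_ij
    return vals.mean()

# empirical variance of the error decays roughly like 1/M
errs = {M: np.var([phase_encoded_estimate(M) - true_image for _ in range(400)])
        for M in (1, 4, 16)}
print(errs[1] / errs[4], errs[4] / errs[16])   # both ratios near 4
```

Quadrupling M roughly quarters the crosstalk variance, which is the M⁻¹ behaviour the article proves for its class of Monte-Carlo encodings.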

13.
Liu Yang, Wang Dian, Liu Cai, Liu Dianmi, Zhang Peng. Chinese Journal of Geophysics, 2014, 57(4): 1177–1187
The automatic detection of discontinuous geological features (such as faults) has long been one of the key problems in post-stack seismic data interpretation, and is especially important in the 3D case. However, most edge detection and coherence algorithms are sensitive to random noise, and random noise attenuation is another major problem in post-stack seismic interpretation. Addressing structure-preserving denoising and fault detection, this paper refines a structure-oriented filtering method based on non-stationary similarity coefficients and proposes an automatic fault detection method, forming a matched pair of processing techniques. The structure-oriented filter effectively attenuates random noise while preserving faults and other features in the seismic data, and enhances the continuity of curved and dipping events in seismic sections. Based on the local dip of the seismic data, predictions of the current trace are constructed from neighbouring traces, and a reference trace is obtained by stacking the prediction traces; the non-stationary similarity coefficients between the prediction traces and the reference trace are used to design a data-driven weighted median filter. On the other hand, the non-stationary similarity coefficients between the prediction traces and the original trace can be used for fault-indicating coherence analysis. Both methods are based on structural prediction and non-stationary similarity coefficients, but they use different tuning parameters and processing schemes. Results on synthetic models and field data demonstrate the effectiveness of the proposed structure-oriented filtering and fault detection methods.

14.
Improving seismic resolution is essential for obtaining more detailed structural and stratigraphic information. We present a new algorithm to increase seismic resolution with a minimum of user-defined parameters. The algorithm inherits useful properties of both the short-time Fourier transform and the cepstrum to smooth and broaden the frequency spectrum at each translation of the spectral decomposition window. The key idea is to replace the amplitude spectrum with its logarithm in each window of the short-time Fourier transform. We describe the mathematical formulation of the algorithm and test it on synthetic and real seismic data, obtaining broader frequency spectra and thus enhanced seismic resolution.
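The key idea, replacing the amplitude spectrum with its logarithm inside each analysis window while keeping the phase, can be sketched on a single window. This is a hedged illustration, not the published algorithm: the log1p shift (to keep amplitudes non-negative), the toy wavelet, and the peak-to-mean whiteness measure are assumptions.

```python
import numpy as np

def broaden_window(win):
    """Keep the phase of one analysis window but replace the amplitude
    spectrum by its logarithm (log1p keeps amplitudes non-negative)."""
    spec = np.fft.rfft(win)
    amp, phase = np.abs(spec), np.angle(spec)
    return np.fft.irfft(np.log1p(amp) * np.exp(1j * phase), n=len(win))

# narrow-band toy wavelet; the log substitution flattens its spectrum
t = np.linspace(-0.1, 0.1, 128)
wavelet = np.exp(-(t / 0.02) ** 2) * np.cos(2 * np.pi * 40.0 * t)
out = broaden_window(wavelet)

amp_in = np.abs(np.fft.rfft(wavelet))
amp_out = np.abs(np.fft.rfft(out))
# peak-to-mean ratio drops, i.e. the output spectrum is broader/flatter
print(amp_in.max() / amp_in.mean(), amp_out.max() / amp_out.mean())
```

In the full algorithm this substitution is applied at every translation of the short-time Fourier window and the windows are recombined into a higher-resolution trace.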

15.
Understanding fracture orientations is important for optimal field development of fractured reservoirs because fractures can act as conduits for fluid flow. This is especially true for unconventional reservoirs (e.g., tight gas sands and shale gas). Walkaround vertical seismic profiling (VSP) technology presents a unique opportunity to identify seismic azimuthal anisotropy for use in mapping potential fracture zones and their orientation around a borehole. Saudi Aramco recently completed the acquisition, processing, and analysis of a walkaround VSP survey through an unconventional tight gas sand reservoir to help characterize fractures. In this paper, we present the results of the seismic azimuthal anisotropy analysis using seismic traveltimes, shear-wave splitting, and amplitude attenuation. The azimuthal anisotropy results are compared to the fracture orientations derived from dipole sonic and image logs. The image log interpretation suggests that an orthorhombic fracture system is present. The VSP data show that the P-wave traveltime anisotropy direction is NE to SW, consistent with the cemented fractures from the image log interpretation. The seismic amplitude attenuation anisotropy direction is NW to SE, consistent with one of the two orientations obtained using transverse-to-radial amplitude ratio analysis, with the dipole sonic, and with the open fracture directions interpreted from image log data.

16.
Principles and methods of 3D seismic data visualization
Visualization is an effective way to achieve true 3D interpretation of 3D seismic data. This paper introduces the basic principles of 3D seismic data visualization, describes the main steps of implementing visualization with a ray-casting algorithm, derives in detail the formulas for compositing the visualized image, and analyses the physical meaning of the opacity curve and its role in adjusting the visualized image. Experiments on two real 3D seismic data sets show that the method can image geological bodies directly from the original 3D seismic data and link scattered, isolated pieces of information together, revealing geological features and patterns hidden in the data.

17.
Seismic inversion plays an important role in reservoir modelling and characterisation due to its potential for assessing the spatial distribution of sub-surface petro-elastic properties. Seismic amplitude-versus-angle inversion methodologies make it possible to retrieve P-wave and S-wave velocities and density individually, allowing a better characterisation of existing litho-fluid facies. We present an iterative geostatistical seismic amplitude-versus-angle inversion algorithm that inverts pre-stack seismic data, sorted by angle gather, directly for density, P-wave velocity, and S-wave velocity models. The proposed iterative geostatistical inverse procedure is based on the use of stochastic sequential simulation and co-simulation algorithms as the perturbation technique of the model parameter space, and on the use of a genetic algorithm as a global optimiser that makes the simulated elastic models converge from iteration to iteration. All the elastic models simulated during the iterative procedure honour the marginal prior distributions of P-wave velocity, S-wave velocity, and density estimated from the available well-log data, and the corresponding joint distributions between density and P-wave velocity and between P-wave and S-wave velocity. We successfully tested the proposed inversion procedure on a pre-stack synthetic dataset, built from a real reservoir, and on a real pre-stack seismic dataset acquired over a deep-water gas reservoir. In both cases, the results show good convergence between real and synthetic seismic data and reliable high-resolution elastic sub-surface Earth models.

18.
This paper illustrates the use of image processing techniques for separating seismic waves. Because of the non-stationarity of seismic signals, the continuous wavelet transform is more suitable than the conventional Fourier transform for the representation, and thus the analysis, of seismic processes. It provides a 2D representation of a 1D signal, called a scalogram, in which the seismic events are well localized and isolated. Supervised methods based on this time-scale representation have already been used to separate seismic events, but they require strong interaction with the geophysicist. This paper focuses on the use of the watershed algorithm to segment time-scale representations of seismic signals, which leads to an automatic estimation of the wavelet representation of each wave separately. Computing the inverse wavelet transform then reconstructs the different waves. This segmentation, tracked over the different traces of the seismic profile, enables an accurate separation of the different wavefields. The method has been successfully validated on several real data sets.

19.
We present a new inversion method to estimate, from prestack seismic data, blocky P- and S-wave velocity and density images and the associated sparse reflectivity levels. The method uses the three-term Aki and Richards approximation to linearise the seismic inversion problem. To this end, we adopt a weighted mixed ℓ2,1 norm that promotes structured forms of sparsity, thus leading to blocky solutions in time. In addition, our algorithm incorporates a covariance or scale matrix to simultaneously constrain P- and S-wave velocities and density. This a priori information is obtained from nearby well-log data. We also include a term containing a low-frequency background model. The ℓ2,1 mixed norm leads to a convex objective function that can be minimised using proximal algorithms; in particular, we use the fast iterative shrinkage-thresholding algorithm (FISTA). A key advantage of this algorithm is that it requires only matrix–vector multiplications and no direct matrix inversion, which makes it numerically stable, easy to apply, and economical in terms of computational cost. Tests on synthetic and field data show that the proposed method, contrary to conventional ℓ2- or ℓ1-norm regularised solutions, is able to provide consistent blocky and/or sparse estimators of P- and S-wave velocities and density from a noisy and limited number of observations.
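The blocky behaviour comes from the proximal operator of the mixed ℓ2,1 norm, which thresholds all three reflectivity components at a time sample as one group. The sketch below is a minimal generic illustration, not the paper's solver: the row grouping and the λ value are assumptions.

```python
import numpy as np

def prox_l21(X, lam):
    """Proximal operator of lam * sum_t ||X[t, :]||_2 (mixed l2,1 norm):
    group soft-thresholding of each row. Here a row collects the three
    reflectivity terms (P, S, density) at one time sample, so a whole
    row is either shrunk or zeroed together -> blocky solutions."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
    return scale * X

R = np.array([[3.0, 4.0, 0.0],     # strong reflection sample (norm 5)
              [0.1, 0.1, 0.1]])    # weak sample, zeroed as a group
out = prox_l21(R, lam=1.0)
print(out)   # first row scaled by 0.8, second row all zeros
```

Inside FISTA this operator replaces the elementwise shrinkage of plain ℓ1 regularisation, which is what couples the P, S, and density estimates at each time sample.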

20.
Improving the signal-to-noise ratio of seismic section images using digital image processing
We propose a new method for improving the signal-to-noise ratio of seismic sections using digital image processing. First, the seismic section data are converted into the format required by digital image processing to obtain seismic section images. After analysing the characteristics of the seismic data and preliminary experimental results on these images, we designed a new preprocessing method, "2D along-horizon filtering". On this basis, an improved optical-flow analysis technique, capable of handling large inter-frame motion and motion variation, is used to compute the displacements of corresponding points across multiple seismic sections. Image stacking is then applied to accumulate these sections, improving the signal-to-noise ratio of the 3D seismic data volume. The method makes full use of the 3D seismic information: it not only improves the signal-to-noise ratio of the entire data volume but also reduces the loss of signal energy and preserves the original energy relationships, markedly improving the quality of the seismic sections and laying a good foundation for seismic interpretation.
