Similar Documents
20 similar documents found (search time: 982 ms)
1.
This paper presents a new algorithm for estimating non-minimum-phase seismic wavelets by using the second- and higher-order statistics (HOS) of the wavelets. In contrast to many, if not most, HOS-based methods, the proposed method does not need to assume that the subsurface seismic reflectivity is a non-Gaussian, statistically independent and identically distributed random process. The amplitude and phase spectra of the wavelets are estimated, respectively, from the second-order statistics (SOS) and the third-order moment (TOM) of the wavelets, which are in turn derived from the HOS of the seismic traces. In our approach, the wavelets can be 'calculated' from the seismic traces efficiently; no optimization or inversion is required. Very good results have been obtained by applying this method to both synthetic and real field data sets.
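As a rough, second-order-only illustration of this idea, the sketch below recovers a wavelet amplitude spectrum from a trace's autocorrelation under an explicitly assumed white-reflectivity convolutional model; the function name and defaults are hypothetical, and the paper's TOM-based phase estimation is not reproduced.

```python
import numpy as np

def amplitude_spectrum_from_sos(trace, nfft=256):
    """Estimate |W(f)| from second-order statistics: under a white-reflectivity
    convolutional model, the trace autocorrelation carries |W(f)|^2."""
    r = np.correlate(trace, trace, mode="full")   # autocorrelation, all lags
    power = np.abs(np.fft.rfft(r, n=nfft))        # = |W(f)|^2 (lag shift affects phase only)
    return np.sqrt(power)
```

For a trace equal to the wavelet itself (an isolated spike in the reflectivity), this returns the wavelet's amplitude spectrum exactly; phase recovery requires the third-order moment, as described above.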

2.
This paper presents a linear predictor (LP)-based lossless sensor data compression algorithm for efficient transmission, storage and retrieval of seismic data. An Auto-Regressive with eXogenous input (ARX) model is selected as the model structure of the LP. Since earthquake ground motion is typically measured at the base of monitored structures, the ARX model parameters are calculated in a system identification framework using sensor network data and measured input signals. In this way, sensor data compression takes advantage of structural system information to maximize compression performance. Numerical simulation results show that several factors, including the LP order, measurement noise, the input signal and the limited number of sensors, affect the performance of the proposed algorithm. Overall, the lossless data compression algorithm is capable of reducing the size of raw sensor data while causing no information loss. Copyright © 2005 John Wiley & Sons, Ltd.
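A toy version of the lossless-LP idea (all names hypothetical; a plain AR predictor stands in for the paper's ARX model, which would additionally use the measured input signal as a regressor):

```python
import numpy as np

def fit_lp(x, order=2):
    """Least-squares AR(order) coefficients predicting x[t] from x[t-1..t-order]."""
    x = np.asarray(x, dtype=float)
    X = np.column_stack([x[order - 1 - k:len(x) - 1 - k] for k in range(order)])
    a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    return a

def _predict(a, past):
    # past[k] holds x[t-1-k]; rounding keeps residuals integer -> lossless
    return int(np.rint(float(np.dot(a, past))))

def lp_compress(x, a):
    order = len(a)
    resid = [int(x[t]) - _predict(a, [x[t - 1 - k] for k in range(order)])
             for t in range(order, len(x))]
    return list(x[:order]), resid          # header samples + small residuals

def lp_decompress(head, resid, a):
    x = list(head)
    for r in resid:
        x.append(_predict(a, [x[-1 - k] for k in range(len(head))]) + r)
    return x
```

Because encoder and decoder round the same prediction, reconstruction is bit-exact; the small integer residuals are what would be entropy-coded for transmission.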

3.
Scattered ground roll is a type of noise observed in land seismic data that can be particularly difficult to suppress. Typically, this type of noise cannot be removed using conventional velocity-based filters. In this paper, we discuss a model-driven form of seismic interferometry that allows suppression of scattered ground-roll noise in land seismic data. The conventional cross-correlate and stack interferometry approach results in scattered noise estimates between two receiver locations (i.e. as if one of the receivers had been replaced by a source). For noise suppression, this requires that each source we wish to attenuate the noise from is co-located with a receiver. The model-driven form differs, as the use of a simple model in place of one of the inputs for interferometry allows the scattered noise estimate to be made between a source and a receiver. This allows the method to be more flexible, as co-location of sources and receivers is not required, and the method can be applied to data sets with a variety of different acquisition geometries. A simple plane-wave model is used, allowing the method to remain relatively data driven, with weighting factors for the plane waves determined using a least-squares solution. Using a number of both synthetic and real two-dimensional (2D) and three-dimensional (3D) land seismic data sets, we show that this model-driven approach provides effective results, allowing suppression of scattered ground-roll noise without having an adverse effect on the underlying signal.
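The conventional cross-correlate-and-stack step described above can be sketched as follows (illustrative only; in the model-driven variant, one of the two inputs would be replaced by plane-wave model traces):

```python
import numpy as np

def crosscorrelate_and_stack(gathers_a, gathers_b):
    """Stack, over all sources, the cross-correlation of the traces recorded
    at receivers A and B: an estimate of the (scattered) wavefield travelling
    from A to B, as if a source sat at receiver A."""
    n = gathers_a.shape[1]
    acc = np.zeros(2 * n - 1)
    for ta, tb in zip(gathers_a, gathers_b):
        acc += np.correlate(tb, ta, mode="full")
    return acc / len(gathers_a)
```

For an event that consistently arrives at B a fixed number of samples after A, the stacked correlation peaks at that inter-receiver travel time.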

4.
The local cosine/sine basis is a localized version of the cosine/sine basis with a window function that can have arbitrary smoothness. It has orthogonality and good time and frequency localization properties. The adaptive local cosine/sine basis is a best basis obtained from an overcomplete library of cosine/sine packets on the basis of a cost functional. We propose a 2D semi-adaptive (time-adaptive or space-adaptive) local cosine transform (referred to as a 2D semi-ALCT) and apply it to the SEG-EAEG salt model synthetic data set for compression. From the numerical results, we see that most of the important features of the data set are well preserved even in the high compression ratio (CR = 40:1) case. Using data reconstructed from the highly compressed ALCT coefficients (CR = 40:1) for migration, we can still obtain a high-quality image, including subsalt structures. Furthermore, we find that the window partition generated by the 2D semi-ALCT is well adapted to the characteristics of the seismic data set, and that the compression capability of the 2D semi-ALCT is greater than that of the 2D uniform local cosine transform (2D ULCT). We also find that a (32, 32) or (32, 64) minimum (time, space) window size generates the best compression results for the SEG-EAEG salt data set.
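A crude stand-in for this kind of block cosine compression, using non-overlapping orthonormal DCT-II blocks rather than the smooth-window, best-basis ALCT of the paper (all names and the keep fraction are illustrative):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows are cosine basis vectors)."""
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C

def block_compress(data, block=32, keep=0.025):
    """Zero all but the largest `keep` fraction of 2D cosine coefficients in
    each block, then reconstruct (assumes dimensions divisible by `block`)."""
    C = dct_matrix(block)
    out = np.empty_like(data)
    for i in range(0, data.shape[0], block):
        for j in range(0, data.shape[1], block):
            coef = C @ data[i:i + block, j:j + block] @ C.T
            thresh = np.quantile(np.abs(coef), 1 - keep)
            coef[np.abs(coef) < thresh] = 0.0
            out[i:i + block, j:j + block] = C.T @ coef @ C
    return out
```

Keeping a fraction `keep` of the coefficients corresponds to a compression ratio of roughly 1/keep (40:1 for keep = 0.025), before any entropy coding of the surviving coefficients.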

5.
Interpreting a post-stack seismic section is difficult due to the band-limited nature of seismic data, even after deconvolution. Deconvolution is a process that is universally applied to extend the bandwidth of seismic data; however, it falls short of this task, as the low and high frequencies of the deconvolved data are either still missing or contaminated by noise. In this paper we introduce an algorithm that recovers these missing frequencies using autoregressive extrapolation, a technique widely used in signal processing to replace missing or corrupted samples. The method operates in the spectral domain. The spectral band to be extrapolated with autoregressive prediction filters is first selected from the part of the spectrum that has a high signal-to-noise ratio (S/N) and is then extended. As there can be more than one zone of good S/N in the spectrum, the results of prediction-filter design and extrapolation from three different bands are averaged. When the spectrum of deconvolved data is extended in this way, the results show higher vertical resolution, to the degree that the final seismic data closely resemble the reflectivity sequence of the layered medium. This helps to obtain acoustic impedance by inversion with stable integration. The results show that autoregressive spectral extrapolation greatly increases vertical resolution and improves horizon tracking for determining continuities and faults. This increase in coherence ultimately yields a more interpretable seismic section.
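A minimal sketch of the core step, fitting a least-squares prediction filter on reliable spectrum samples and extrapolating (hypothetical names; the real algorithm selects high-S/N bands automatically and averages extrapolations from three bands):

```python
import numpy as np

def ar_extrapolate(spec, known, order, n_out):
    """Extend complex spectrum samples beyond the reliable band `known` with a
    least-squares forward prediction (AR) filter fitted inside that band."""
    s = spec[known]
    # each row predicts s[t] from the `order` previous samples
    X = np.column_stack([s[order - 1 - k:len(s) - 1 - k] for k in range(order)])
    a, *_ = np.linalg.lstsq(X, s[order:], rcond=None)
    out = list(s)
    for _ in range(n_out):
        out.append(np.dot(a, [out[-1 - k] for k in range(order)]))
    return np.array(out)
```

For spectrum samples that exactly satisfy a low-order recursion, such as a sum of damped complex exponentials, the extrapolation is exact; on real spectra the filter is fitted over the high-S/N band only.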

6.
Exploration in the basalt-covered areas offshore the Faroes has always suffered from poor seismic imaging below the basalt. Long-offset 2D and 3D seismic data were acquired, and a significant improvement in the seismic image below top basalt has been achieved. Deep towing of the source and receiver cables helped by extending the seismic bandwidth towards lower frequencies. Bubble-tuned rather than conventional peak-tuned source arrays gave little, if any, incremental benefit. The improvement in the imaging comes primarily from the approach to processing the data. High frequencies (dominantly noise) are filtered out of the data early in the processing to concentrate on the low-frequency data. Careful multiple removal is important, with several passes of demultiple applied to the data using both surface-related multiple elimination (SRME) and Radon techniques. Velocity analysis is performed as an iterative process taking the geological model into account. Reprocessing legacy 2D surveys, acquired with wide-ranging parameters, using these processing techniques improved those datasets significantly, indicating that sub-basalt imaging is more sensitive to processing than to the choice of acquisition parameters.

7.
3D seismic data are usually recorded and processed on rectangular grids, for which sampling requirements are generally derived from the usual 1D viewpoint. For a 3D data set, the band region (the region of the Fourier space in which the amplitude spectrum is not zero) can be approximated by a domain bounded by two cones. Considering the particular shape of this band region we can use the 3D sampling viewpoint, which leads to weaker sampling requirements than does the 1D viewpoint; i.e. fewer sample points are needed to represent data with the same degree of accuracy. The 3D sampling viewpoint considers regular nonrectangular sampling grids. The recording and processing of 3D seismic data on a hexagonal sampling grid is explored. The acquisition of 3D seismic data on a hexagonal sampling grid is an advantageous economic alternative because it requires 13.4% fewer sample points than a rectangular sampling grid. The hexagonal sampling offers savings in data storage and processing of 3D seismic data. A fast algorithm for 3D discrete spectrum evaluation and trace interpolation in the case of a 3D seismic data set sampled on a hexagonal grid is presented and illustrated by synthetic examples. It is shown that by using this algorithm the hexagonal sampling offers, approximately, the same advantage of saving 13.4% in data storage and computational time for 3D phase-shift migration.
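The quoted 13.4% saving is a one-line calculation: for an isotropically band-limited spectrum, the optimal hexagonal grid needs only sqrt(3)/2 times as many samples as the rectangular grid with the same alias-free bandwidth.

```python
import math

# Sample-density ratio of hexagonal vs. rectangular sampling for a signal
# whose spectral support is circular (conical in 3D), i.e. the saving
# quoted in the abstract:
saving = 1.0 - math.sqrt(3) / 2.0
print(f"{100 * saving:.1f}% fewer samples")   # 13.4% fewer samples
```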

8.
This paper illustrates the use of image processing techniques for separating seismic waves. Because of the non-stationarity of seismic signals, the continuous wavelet transform is more suitable than the conventional Fourier transforms for the representation, and thus the analysis, of seismic processes. It provides a 2D representation, called a scalogram, of a 1D signal where the seismic events are well localized and isolated. Supervised methods based on this time-scale representation have already been used to separate seismic events, but they require strong interactions with the geophysicist. This paper focuses on the use of the watershed algorithm to segment time-scale representations of seismic signals, which leads to an automatic estimation of the wavelet representation of each wave separately. The computation of the inverse wavelet transform then leads to the reconstruction of the different waves. This segmentation, tracked over the different traces of the seismic profile, enables an accurate separation of the different wavefields. This method has been successfully validated on several real data sets.
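A minimal scalogram builder in the spirit of the first step, using Ricker wavelets and plain convolution (the watershed segmentation and the inverse transform of the paper are not reproduced here):

```python
import numpy as np

def ricker(t, a):
    """Ricker (Mexican-hat) wavelet at scale a."""
    return (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def scalogram(signal, scales, half=64):
    """Time-scale magnitude image of a 1D trace, one row per scale.
    Isolated events show up as well-localized blobs, which is what the
    watershed step then segments."""
    t = np.arange(-half, half + 1)
    return np.array([np.abs(np.convolve(signal, ricker(t, a), mode="same"))
                     for a in scales])
```

Two well-separated impulses in the trace produce two separated ridges in the scalogram, one per event, across all scales.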

9.
A fuzzy dynamic flood routing model (FDFRM) for natural channels is presented, wherein the flood wave can be approximated by a monoclinal wave. The study is a modification of earlier published work by the same authors, in which the wave was of gravity type. The momentum equation of the dynamic wave model is replaced by a fuzzy rule-based model, while the continuity equation is retained in its complete form. The FDFRM therefore dispenses with the assumptions associated with the momentum equation. It also removes the need to calculate the friction slope (Sf) in flood routing, thereby eliminating the associated uncertainties. The fuzzy rule-based model is built on an equation for wave velocity, obtained in terms of discontinuities in the gradient of the flow parameters. The channel reach is divided into a number of approximately uniform sub-reaches. The training set required to develop the fuzzy rule-based model for each sub-reach is obtained from the discharge-area relationship at its mean section. For highly heterogeneous sub-reaches, optimized fuzzy rule-based models are obtained by means of a neuro-fuzzy algorithm. For demonstration, the FDFRM is applied to flood routing problems in a fictitious channel with a single uniform reach, in a fictitious channel with two uniform sub-reaches, and in a natural channel with a number of approximately uniform sub-reaches. For the fictitious channels, the FDFRM outputs match well with those of an implicit numerical model (INM), which solves the dynamic wave equations using an implicit numerical scheme. For the natural channel, the FDFRM outputs are comparable to those of the HEC-RAS model. Copyright © 2009 John Wiley & Sons, Ltd.
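The wave-velocity relation on which the fuzzy rules are trained comes from the discharge-area curve at a sub-reach's mean section; a finite-difference sketch of the celerity c = dQ/dA (function names hypothetical):

```python
def wave_celerity(Q_of_A, A, dA=1e-3):
    """Monoclinal/kinematic wave celerity c = dQ/dA at flow area A,
    by central difference on the section's discharge-area relation."""
    return (Q_of_A(A + dA) - Q_of_A(A - dA)) / (2.0 * dA)
```

For a Manning-type rating Q = k*A**(5/3), this returns approximately (5/3)*k*A**(2/3).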

10.
Analysis of the recorded geophysical information shows a large spread in the probability distribution of instrument readings at moments of seismic activity. To lower the demand on computer data resources and decrease power consumption in autonomous computer-based systems, coding algorithms are used that place minimal load on the central processor and, as a consequence, minimize its power consumption. These include the static Huffman algorithm and algorithms using Elias, Rice, Golomb, and Fibonacci codes. Applying Fibonacci codes to this information yields a gain in compression ratio of 10-30% relative to the other coding methods.
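For reference, a minimal Fibonacci (Zeckendorf) coder: every codeword ends in '11', so the code is self-delimiting and resynchronizes after bit errors, which suits the low-power setting described above (sketch only; the systems in question also use Huffman, Elias, Rice and Golomb coders):

```python
def fib_encode(n):
    """Fibonacci code of a positive integer: Zeckendorf bits, least-significant
    Fibonacci number first, terminated by an extra '1' (every word ends '11')."""
    fibs = [1, 2]                     # F(2), F(3), ...
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    bits = []
    for f in reversed(fibs[:-1]):     # greedy Zeckendorf decomposition
        if f <= n:
            bits.append("1")
            n -= f
        else:
            bits.append("0")
    return "".join(reversed(bits)) + "1"

def fib_decode(code):
    """Inverse of fib_encode (expects the terminating '1' to be present)."""
    fibs = [1, 2]
    while len(fibs) < len(code) - 1:
        fibs.append(fibs[-1] + fibs[-2])
    return sum(f for f, b in zip(fibs, code[:-1]) if b == "1")
```

For example, fib_encode(11) gives '001011' (11 = 8 + 3).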

11.
This paper presents a new methodology, based on structural performance, for determining uniform fragility design spectra, i.e., spectra with the same probability of exceedance of a performance level for a given seismic intensity. The design spectra calculated with this methodology directly provide the lateral strength, in terms of yield pseudo-accelerations, associated with the rate of exceedance of the specific ductility characterizing the performance level for which the structures will be designed. The procedure involves assessing the seismic hazard using a sufficiently large number of seismic records of several magnitudes; these records are simulated with an improved empirical Green's function method. The statistics of the performance of a single-degree-of-freedom system are obtained by Monte Carlo simulation, treating the seismic demand, the fundamental period and the strength of the structure as uncertain variables. From these results, the conditional probability that a structure exceeds a specific performance level is obtained. The authors consider the proposed procedure a significant improvement over others in the literature and a useful research tool for the further development of uniform fragility spectra for the performance-based seismic design and retrofit of structures.
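The exceedance-probability step can be miniaturized as below; a closed-form lognormal demand model stands in for the paper's simulated single-degree-of-freedom responses with uncertain period and strength (names and distribution are assumptions):

```python
import numpy as np

def exceedance_probability(demand_median, demand_beta, capacity, n=200000, seed=0):
    """Monte Carlo estimate of a fragility ordinate, P(ductility demand >
    capacity), with lognormal demand uncertainty."""
    rng = np.random.default_rng(seed)
    demand = demand_median * np.exp(demand_beta * rng.standard_normal(n))
    return float(np.mean(demand > capacity))
```

Repeating this over intensity levels traces out a fragility curve; when the median demand equals the capacity, the exceedance probability is 0.5 by symmetry.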

12.
This paper proposes a lossy seismic data compression method based on the biorthogonal wavelet transform; the method is simple to implement and introduces little distortion even at high compression ratios. The compression-induced signal distortion is analysed using the signal-to-noise ratio and the residual energy as evaluation metrics. The results show that for compression ratios in the range 0-31, the SNR remains high, the residual energy stays above 99%, and the original and reconstructed signals are strongly correlated. Even under less favourable conditions, when the reconstructed signal is used for rapid determination of earthquake parameters, the maximum error in the computed magnitude is 8.6×10-3 and the maximum error in the computed epicentral distance is 3.69 km.
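One concrete biorthogonal analysis/synthesis pair, the LeGall 5/3 wavelet in lifting form, can sketch the transform stage (the abstract does not specify the filters used; this choice and the periodic boundary handling are assumptions):

```python
import numpy as np

def legall53_forward(x):
    """One level of the biorthogonal LeGall 5/3 wavelet via lifting
    (x must have even length; periodic extension at the boundaries)."""
    s = x[0::2].astype(float)
    d = x[1::2].astype(float)
    d -= 0.5 * (s + np.roll(s, -1))        # predict odd samples from even neighbours
    s += 0.25 * (d + np.roll(d, 1))        # update even samples to preserve the mean
    return s, d                            # approximation and detail coefficients

def legall53_inverse(s, d):
    """Exact inverse: undo the lifting steps in reverse order."""
    s = s - 0.25 * (d + np.roll(d, 1))
    d = d + 0.5 * (s + np.roll(s, -1))
    x = np.empty(2 * len(s))
    x[0::2], x[1::2] = s, d
    return x
```

Lossy compression then amounts to quantizing or zeroing the small detail coefficients d before inverting; reconstruction is exact when the coefficients are untouched.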

13.
Seismic inversion plays an important role in reservoir modelling and characterisation due to its potential for assessing the spatial distribution of sub-surface petro-elastic properties. Seismic amplitude-versus-angle inversion methodologies allow P-wave velocity, S-wave velocity and density to be retrieved individually, enabling a better characterisation of existing litho-fluid facies. We present an iterative geostatistical seismic amplitude-versus-angle inversion algorithm that inverts pre-stack seismic data, sorted by angle gather, directly for density, P-wave velocity and S-wave velocity models. The proposed iterative geostatistical inverse procedure is based on stochastic sequential simulation and co-simulation algorithms as the perturbation technique for the model parameter space, and on a genetic algorithm as a global optimiser that makes the simulated elastic models converge from iteration to iteration. All the elastic models simulated during the iterative procedure honour the marginal prior distributions of P-wave velocity, S-wave velocity and density estimated from the available well-log data, and the corresponding joint distributions of density versus P-wave velocity and P-wave versus S-wave velocity. We successfully tested the proposed inversion procedure on a pre-stack synthetic dataset built from a real reservoir, and on a real pre-stack seismic dataset acquired over a deep-water gas reservoir. In both cases the results show good convergence between real and synthetic seismic data and reliable high-resolution elastic sub-surface Earth models.

14.
Much research has been conducted on physics-based ground-motion simulation to reproduce the seismic response of soil and structures precisely and to mitigate earthquake damage. We aim to enable physics-based ground-motion simulations of complex three-dimensional (3D) models with multiple materials, such as a digital twin (a high-fidelity 3D model of the physical world constructed in cyberspace). A single run of such a simulation carries a high computational cost, and many runs are needed to estimate parameters or to account for uncertainty in the underground soil structure data. To overcome this problem, we propose a fast simulation method using graphics-processing-unit computing that enables simulation with small computational resources. We developed a finite-element-based method for large-scale 3D seismic response analysis with small programming effort and high maintainability by using OpenACC, a directive-based parallel programming model. A lower-precision variable format was introduced to speed up the simulation further. As an example application, we applied the developed method to soil liquefaction analysis and conducted two sets of simulations comparing countermeasures against soil liquefaction: grid-form ground improvement to strengthen the earthquake resistance of existing houses, and replacement of liquefiable backfill soil at river wharves for seismic reinforcement of the wharf structure. The developed method accelerates the simulation and enables us to quantitatively estimate the effect of countermeasures using high-fidelity 3D soil-structure models on a small cluster of computers.

15.
Acoustic impedance is one of the best attributes for seismic interpretation and reservoir characterisation. We present an approach for estimating acoustic impedance accurately from band-limited and noisy seismic data. The approach is composed of two stages: inverting seismic data for reflectivity, and then estimating impedance from the reflectivity inverted in the first stage. For the first stage, we perform a two-step spectral inversion that locates the positions of the reflection coefficients in the first step and determines their amplitudes in the second step, under the constraints of the positions located in the first step. For the second stage, we construct an iterative impedance estimation algorithm based on the reflectivity. In each iteration, the algorithm estimates the absolute acoustic impedance from an initial model given by summing the high-frequency component of the acoustic impedance estimated at the previous iteration and a low-frequency component determined in advance from other data. The known low-frequency component is used to constrain the trend of the acoustic impedance in each iteration. Examples using one- and two-dimensional synthetic and field seismic data show that the approach is flexible and superior to conventional spectral inversion and recursive inversion methods, generating more accurate acoustic impedance models.
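The relation behind the second stage is the classical recursive (trace-integration) inversion, sketched below; the iterative low-frequency merging of the actual algorithm is omitted:

```python
import numpy as np

def impedance_from_reflectivity(z0, r):
    """Recursive impedance update Z[i+1] = Z[i] * (1 + r[i]) / (1 - r[i]),
    starting from a known impedance z0 at the top of the trace."""
    z = [float(z0)]
    for ri in r:
        z.append(z[-1] * (1 + ri) / (1 - ri))
    return np.array(z)

def reflectivity_from_impedance(z):
    """Normal-incidence reflection coefficients from an impedance log."""
    return (z[1:] - z[:-1]) / (z[1:] + z[:-1])
```

The two functions are exact inverses of each other, since (1 + r) / (1 - r) with r = (Z2 - Z1) / (Z2 + Z1) equals Z2 / Z1.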

16.
Theory and methods of seismic data compression based on the Dreamlet transform
Treating seismic data as ordinary image data is far from sufficient for representing them effectively; the kinematic properties of the seismic waves embedded in the data must also be taken into account. This paper discusses a seismic data compression method based on the Dreamlet transform and, exploiting the dispersion relation inherent in seismic data, further proposes a multi-scale Dreamlet compression method. The Dreamlet transform is constructed as the tensor product of two 1D local harmonic transforms; it provides time-space localization of the seismic wavefield while preserving the wavefield's kinematic properties. Examples on 2D SEG/EAGE pre-stack and post-stack data sets demonstrate the effectiveness of the Dreamlet transform for seismic data compression. Imaging with the compressed data further shows that, compared with the Curvelet transform, the Dreamlet and multi-scale Dreamlet methods achieve higher compression ratios, and that at the same compression ratio, data reconstructed after Dreamlet and multi-scale Dreamlet compression better preserve the important structures in the imaging results.

17.
Geng Yu, Wu Ru-Shan and Gao Jing-Huai. Chinese Journal of Geophysics (地球物理学报), 2012, 55(8): 2705-2715

18.
A real-time algorithm for broadband high dynamic seismic data compression. Sha-Bai LI (李沙白), Qi-Yuan LIU (刘启元) and Li-Ren SHEN (沈立人) (Instit...)

19.
Hard rock seismic exploration normally has to deal with rather complex geological environments, usually characterized by a large number of local heterogeneities (e.g., faults, fracture zones and steeply dipping interfaces). The seismic data from such environments often have a poor signal-to-noise ratio because of the complexity of hard rock geology. To obtain reliable images of subsurface structures under such geological conditions, reflection seismic exploration requires processing algorithms capable of handling data with a low signal-to-noise ratio. In this paper, we describe a modification of the 3D Kirchhoff post-stack migration algorithm that utilizes coherency attributes, obtained by a 3D diffraction imaging algorithm, to steer the main Kirchhoff summation. Application to a 3D synthetic model shows the stability of the presented steered migration in the presence of high levels of random noise. A test on a 3D seismic volume acquired at a mine site in Western Australia demonstrates the capability of the approach to image steep and sharp objects such as fracture and fault zones and lateral heterogeneity.

20.
We present a novel method to enhance seismic data for manual and automatic interpretation. We use a genetic algorithm to optimize a kernel that, when convolved with the seismic image, appears to enhance the internal characteristics of salt bodies and the sub-salt stratigraphy. The performance of the genetic algorithm was validated on test images prior to its application to the seismic data. We present the evolution of the resulting kernel and its convolved image. This image was analysed by a seismic interpreter, highlighting possible advantages over the original one. The effects of the kernel were also subjected to an automatic interpretation technique based on principal component analysis. Statistical comparison of these results with those from the original image, by means of the Mann-Whitney U-test, showed the convolved image to be more appropriate for automatic interpretation.
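A toy genetic algorithm of this shape, evolving a 3x3 convolution kernel against an image-domain fitness (plain MSE to a target image here, whereas the paper scores interpretability; population size, rates and all names are illustrative):

```python
import numpy as np

def conv2(img, k):
    """2D correlation (no kernel flip) with zero padding, 'same' output size."""
    ksize = k.shape[0]
    pad = np.pad(img, ksize // 2)
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(pad[i:i + ksize, j:j + ksize] * k)
    return out

def evolve_kernel(image, target, ksize=3, pop=30, gens=40, seed=1):
    """Evolve a kernel so that conv2(image, kernel) approximates target."""
    rng = np.random.default_rng(seed)
    P = rng.normal(size=(pop, ksize, ksize))
    def fitness(k):
        return -np.mean((conv2(image, k) - target) ** 2)
    for _ in range(gens):
        order = np.argsort([fitness(k) for k in P])[::-1]   # best first
        P = P[order]
        elite = P[:pop // 2]                                # carried unchanged
        idx = rng.integers(0, len(elite), size=(pop - len(elite), 2))
        parents = elite[idx]
        # uniform crossover plus Gaussian mutation for the other half
        mask = rng.random((pop - len(elite), ksize, ksize)) < 0.5
        children = np.where(mask, parents[:, 0], parents[:, 1])
        children = children + 0.1 * rng.normal(size=children.shape)
        P = np.concatenate([elite, children])
    return P[0]
```

Elitism (carrying the top half unchanged) guarantees the returned kernel is at least as fit as the best member of the initial population.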
