Similar Documents
 20 similar documents found (search time: 906 ms)
1.
Deconvolution and deringing are well-known subjects, and it is not necessary to restate their objectives or the basic methods used to achieve them. Let us simply recall that, among many others, the following two assumptions are generally made for simplification purposes:
  • for deconvolution, it is assumed that the recorded seismic signal is constant, meaning that its shape is the same throughout the time interval over which the trace is to be deconvolved;
  • for deringing, it is assumed that the ringing period is constant and that the intensity of the ringing phenomenon is independent of time.
With these two assumptions, a single constant operator can be applied for deconvolution, deringing, or both. In most cases, the time variations of the signal or of the ringing are small enough that the error resulting from the application of a constant operator is acceptable: it amounts to a slight increase in the noise level or a small residual ringing in the processed trace. When this noise or the residual ringing becomes too large, the assumption of a constant signal and ringing period must be rejected. That is the case examined here, in the following steps:
  • a short definition of the problem;
  • a quick evaluation of some possible solutions;
  • the selected solution: the resulting approximations and how to obviate them, the computing method, and a remark about the operators;
  • a theoretical example: the efficiency of the process is evaluated on data for which the intended results are known, and the influence of the numerical values assigned to the parameters is examined;
  • real cases: comparison of results obtained with the Protee process and with more conventional processes, either assuming time invariance or using a weighted combination of several conventional processes, each with a different operator.
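As a sketch of the time-variant alternative, one can design a separate least-squares spiking operator in each of several overlapping windows and blend the filtered windows with tapers. This is only an illustrative construction, not the Protee process itself; the function names, pre-whitening level, and triangular blending are choices made here:

```python
import numpy as np

def spiking_operator(segment, nop, prewhite=0.01):
    """Least-squares spiking (inverse) filter from a window's autocorrelation."""
    ac = np.correlate(segment, segment, mode="full")[len(segment) - 1:][:nop]
    ac = ac.astype(float)
    ac[0] *= 1.0 + prewhite                     # pre-whitening stabilises the solve
    R = np.array([[ac[abs(i - j)] for j in range(nop)] for i in range(nop)])
    rhs = np.zeros(nop)
    rhs[0] = 1.0                                # desired output: spike at zero lag
    return np.linalg.solve(R, rhs)

def time_variant_decon(trace, nwin, nop):
    """Blend window-by-window spiking operators with triangular tapers."""
    n = len(trace)
    out = np.zeros(n)
    wsum = np.zeros(n)
    hop = nwin // 2
    for start in range(0, n, hop):
        stop = min(start + nwin, n)
        seg = trace[start:stop]
        if len(seg) <= nop:                     # tail window too short for an operator
            break
        op = spiking_operator(seg, nop)
        filtered = np.convolve(trace, op)[:n]
        taper = np.bartlett(stop - start) + 1e-3
        out[start:stop] += taper * filtered[start:stop]
        wsum[start:stop] += taper
    return out / np.maximum(wsum, 1e-12)
```

With a slowly varying wavelet, each window's operator adapts to the local signal shape, which is the essential point of the time-variant approach.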

2.
Conventional time-space domain and frequency-space domain prediction filtering methods assume that seismic data consist of two parts, signal and random noise; that is, the so-called additive-noise model. However, when estimating the random noise, it is assumed that the noise can be predicted from the seismic data by convolution with a prediction error filter; that is, the source-noise model. This model inconsistency before and after denoising compromises the noise-attenuation and signal-preservation performance of prediction filtering methods. This study therefore presents an inversion-based time-space domain random noise attenuation method that overcomes the inconsistency. In this method, a prediction error filter (PEF) is first estimated from the seismic data; the filter characterizes the predictability of the data and adaptively describes their spatial structure. The PEF is then applied as a regularization constraint in an inversion for the seismic signal from the noisy data. Unlike conventional random noise attenuation methods, the proposed method solves a seismic data inversion problem with a regularization constraint, which overcomes the model inconsistency of prediction filtering. The method was tested on both synthetic and real seismic data, and its results were compared with those of prediction filtering. The tests demonstrate that the proposed method suppresses noise effectively and preserves the signal better.
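The inversion idea can be sketched in one dimension: estimate a PEF from the data, then use it as a regularization operator when inverting for the signal. The function names, the filter length, and the damping value below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def estimate_pef(d, nf):
    """Least-squares prediction coefficients; the PEF is (1, -a_1, ..., -a_nf)."""
    rows = np.array([d[t - nf:t][::-1] for t in range(nf, len(d))])
    a, *_ = np.linalg.lstsq(rows, d[nf:], rcond=None)
    return np.concatenate(([1.0], -a))

def conv_matrix(f, n):
    """(n + len(f) - 1) x n convolution matrix of the filter f."""
    A = np.zeros((n + len(f) - 1, n))
    for i in range(n):
        A[i:i + len(f), i] = f
    return A

def pef_regularized_denoise(d, nf=10, lam=1.0):
    """Solve min_m ||m - d||^2 + lam * ||F m||^2, where F convolves with the PEF."""
    A = conv_matrix(estimate_pef(d, nf), len(d))
    return np.linalg.solve(np.eye(len(d)) + lam * (A.T @ A), d)
```

The PEF nearly annihilates the predictable signal, so the penalty term mainly shrinks the unpredictable (noise) component of the model.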

3.
The application of homomorphic filtering in marine seismic reflection work is investigated, with the aims of estimating the basic wavelet, performing wavelet deconvolution, and eliminating multiples. Each of these deconvolution problems can be subdivided into two parts: the first is the detection of those parts of the cepstrum which ought to be suppressed in processing; the second comprises the actual filtering process and the problem of minimizing the random noise which is generally enhanced during the homomorphic procedure. The application of homomorphic filters to synthetic seismograms and air-gun measurements shows the possibilities for practical application of the method as well as the critical parameters which determine the quality of the results. These parameters are:
  • a) the signal-to-noise ratio (SNR) of the input data;
  • b) the window width and the cepstrum components used to separate the individual parts;
  • c) the time invariance of the signal in the trace.
In the presence of random noise, the power cepstrum is most efficient for the detection of wavelet arrival times. For wavelet estimation, overlapping signals can be detected with the power cepstrum down to an SNR of three. In comparison, the detection of long-period multiples is much more complicated. While the arrival times of water reverberations can be determined exactly with the power cepstrum down to a multiples-to-primaries ratio of three to five, the detection of internal multiples is generally not possible, since for these multiples this threshold of detectability and arrival-time determination is generally not reached. For wavelet estimation, comb filtering of the complex cepstrum is most valuable. Wavelet estimation poses no problems down to an SNR of ten; even with stronger noise, a reasonable estimate can be obtained down to an SNR of five by filtering the phase spectrum during the computation of the complex cepstrum. In contrast, successful application of the method to multiple reduction is confined to SNRs of ten or more, since the filtering of the phase spectrum for noise reduction cannot then be applied. Even though these threshold results are empirical, they show the limits for the successful application of the method.
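The use of the power cepstrum to pick echo (multiple) arrival times can be illustrated on a toy trace; the wavelet shape, echo lag, and echo amplitude below are arbitrary choices made for this sketch:

```python
import numpy as np

def power_cepstrum(x):
    """Power cepstrum: inverse FFT of the log of the power spectrum."""
    power = np.abs(np.fft.rfft(x)) ** 2
    return np.fft.irfft(np.log(power + 1e-12), n=len(x))

# A decaying wavelet plus a delayed, scaled echo (a crude water-layer multiple).
n, delay, r = 512, 60, 0.5
t = np.arange(n)
wavelet = np.exp(-0.05 * t) * np.sin(0.4 * t)
trace = wavelet + r * np.roll(wavelet, delay)

cep = power_cepstrum(trace)
# The echo shows up as a cepstral peak at its delay; skip the low-quefrency
# region, which is dominated by the wavelet itself.
picked = np.argmax(cep[10:n // 2]) + 10
```

The echo contributes a peak of height about r at quefrency `delay`, while the smooth wavelet contributes only at low quefrencies, which is why the two can be separated.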

4.
The purpose of deconvolution is to retrieve the reflectivity from seismic data. This requires an estimate of the seismic wavelet, which in some techniques is estimated simultaneously with the reflectivity and in others is assumed known. The most popular deconvolution technique is inverse filtering. It has the property that the deconvolved reflectivity is band-limited. Band-limitation implies that reflectors are not sharply resolved, which can lead to serious interpretation problems in detailed delineation. To overcome the adverse effects of band-limitation, various alternatives to inverse filtering have been proposed. One class of alternatives is Lp-norm deconvolution, L1-norm deconvolution being the best known of this class. We show that, for an exact convolutional forward model and statistically independent reflectivity and additive noise, the maximum likelihood estimate of the reflectivity can be obtained by Lp-norm deconvolution for a range of multivariate probability density functions of the reflectivity and the noise. The L∞-norm corresponds to a uniform distribution, the L2-norm to a Gaussian distribution, the L1-norm to an exponential distribution, and the L0-norm to a sparsely distributed variable. For instance, if we assume sparse and spiky reflectivity and Gaussian noise with zero mean, the Lp-norm deconvolution problem is best solved by minimizing the L0-norm of the reflectivity and the L2-norm of the noise. However, the L0-norm is difficult to implement in an algorithm. From a practical point of view, the frequency-domain mixed-norm method that minimizes the L1-norm of the reflectivity and the L2-norm of the noise is the best alternative. Lp-norm deconvolution can be stated in both the time domain and the frequency domain. We show that the two formulations are equivalent only when the noise is minimized with the L2-norm. Finally, some Lp-norm deconvolution methods are compared on synthetic and field data.
For the practical examples, the wide range of possible Lp-norm deconvolution methods is narrowed down to three methods with p = 1 and/or 2. Given the assumptions of sparsely distributed reflectivity and Gaussian noise, we conclude that the mixed L1-norm (reflectivity) L2-norm (noise) method performs best. However, the problems inherent in single-trace deconvolution techniques, for example the generation of spurious events, remain. For practical application, a greater problem is that only the main, well-separated events are properly resolved.
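The frequency-domain mixed-norm method itself is not reproduced here, but the time-domain mixed L1 (reflectivity) / L2 (noise) problem can be sketched with a standard iterative soft-thresholding (ISTA) scheme; the step size and threshold follow the usual ISTA recipe, and the names and values are choices of this sketch:

```python
import numpy as np

def ista_decon(d, w, lam=0.05, niter=300):
    """Mixed-norm sketch: min 0.5 * ||W r - d||_2^2 + lam * ||r||_1 via ISTA,
    i.e. an L2 norm on the noise and an L1 norm on the reflectivity."""
    n = len(d)
    W = np.zeros((n, n))
    for i in range(n):                          # columns = shifted copies of the wavelet
        m = min(len(w), n - i)
        W[i:i + m, i] = w[:m]
    L = np.linalg.norm(W, 2) ** 2               # Lipschitz constant of the data gradient
    r = np.zeros(n)
    for _ in range(niter):
        z = r - W.T @ (W @ r - d) / L           # gradient step on the L2 data term
        r = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold (L1)
    return r
```

The soft threshold is what produces a sparse, spiky reflectivity estimate instead of the band-limited output of inverse filtering.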

5.
The least squares estimation procedures used in different disciplines can be classified into four categories:
  • a. Wiener filtering,
  • b. autoregressive estimation,
  • c. Kalman filtering,
  • d. recursive least squares estimation.
The recursive least squares estimator is the time-average form of the Kalman filter. Likewise, the autoregressive estimator is the time-average form of the Wiener filter. Both the Kalman and the Wiener filters use ensemble averages and can basically be constructed without having a particular measurement realisation available. It follows that seismic deconvolution should be based either on autoregression theory or on recursive least squares estimation theory, rather than on the normally used Wiener or Kalman theory. A consequence of this change is the need to apply significance tests to the filter coefficients. Recursive least squares estimation theory is particularly suitable for solving the time-variant deconvolution problem.
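A minimal recursive least squares update, the time-average estimator recommended above, can be sketched as follows; the forgetting factor and the initialisation of the inverse-covariance matrix are conventional choices, not taken from the paper:

```python
import numpy as np

def rls_filter(x, d, nf, lam=0.99, delta=100.0):
    """Recursive least squares: adapt h so that h . [x[t-1],...,x[t-nf]] ~ d[t].

    lam is the forgetting factor (lam < 1 tracks time-variant filters);
    delta initialises the inverse correlation matrix P."""
    h = np.zeros(nf)
    P = np.eye(nf) * delta
    errors = np.zeros(len(x))
    for t in range(nf, len(x)):
        u = x[t - nf:t][::-1]                   # most recent samples first
        k = P @ u / (lam + u @ P @ u)           # gain vector
        errors[t] = d[t] - h @ u                # a priori prediction error
        h = h + k * errors[t]
        P = (P - np.outer(k, u @ P)) / lam
    return h, errors
```

Because each sample updates the estimate recursively, the filter can follow a slowly changing wavelet, which is exactly the time-variant deconvolution setting.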

6.
7.
Window-based Euler deconvolution is commonly applied to magnetic, and sometimes to gravity, interpretation problems. For the deconvolution to be geologically meaningful, care must be taken to choose the parameters properly. The process design rules proposed here are based partly on mathematical analysis and partly on experience.

8.
To perform good pulse compression, the conventional spiking deconvolution method requires that the wavelet be stationary. This requirement is never met, since the seismic wave always suffers high-frequency attenuation and dispersion as it propagates through real materials. The data therefore need to pass through some kind of inverse-Q filter. Most methods attempt to correct the attenuation effect by applying larger gains to the high-frequency components of the signal; the problem with this procedure is that it generally boosts high-frequency noise. To deal with this problem, we present a new inversion method designed to estimate the reflectivity function in attenuating media. The key feature of the proposed method is the use of the least absolute error (L1-norm) to define both the data error and the model error in the objective functional. The L1-norm is more immune to noise than the usual L2-norm, especially when the data are contaminated by discrepant sample values. Used to define the model error in the regularization of the inverse problem, it also favours sparse reflectivity and increases resolution, since efficient pulse compression is attained. Tests on synthetic and real data demonstrate the efficacy of the method in raising the resolution of the seismic signal without boosting its noise component.
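As an illustration of the L1-L1 idea, both norms can be approximated by iteratively reweighted least squares (IRLS); this is a generic sketch, not the authors' algorithm, and the damping and smoothing constants are assumptions of the sketch:

```python
import numpy as np

def irls_l1_decon(d, W, lam=0.01, niter=50, eps=1e-6):
    """L1 data misfit (robust to discrepant samples) plus L1 model norm
    (sparse reflectivity), approximated by iteratively reweighted least squares."""
    n = W.shape[1]
    r = np.zeros(n)
    for _ in range(niter):
        res = W @ r - d
        wd = 1.0 / np.sqrt(res ** 2 + eps)      # data weights ~ 1/|residual|
        wm = 1.0 / np.sqrt(r ** 2 + eps)        # model weights ~ 1/|r|
        lhs = W.T @ (wd[:, None] * W) + lam * np.diag(wm)
        r = np.linalg.solve(lhs, W.T @ (wd * d))
    return r
```

Samples with large residuals receive small weights, so a single discrepant value barely influences the solution, which is the robustness property the abstract emphasises.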

9.
10.
11.
Gravity data inversion can provide valuable information on the structure of the underlying mass distribution. The inversion of gravity data is an ill-posed problem, and many methods have been proposed for solving it with various systematic techniques. The method proposed here is a new approach based on the collocation principle, derived from Wiener filtering and prediction theory. The natural multiplicity of solutions of the inverse gravimetric problem can be overcome only by assuming a substantially simplified model, in this case a two-layer model, i.e. one with a single separation surface and a single density contrast. The presence of gravity disturbances and/or outliers in the upper layer is also taken into account. The basic idea of the method is to propagate the covariance structure of the depth function of the separation surface to the covariance structure of the gravity field measured on a reference plane. This can be done because the gravity field produced by the layers is a (linearized) functional of the depth. Furthermore, this approach yields the variance of the estimation error, which indicates the precision of the computed solution. The method has proved effective on simulated data fulfilling the a priori hypotheses. In real cases displaying the required statistical homogeneity, good preliminary solutions, useful for further quantitative interpretation, have also been derived. A case study is discussed.
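The collocation step, predicting a signal and its estimation-error variance from noisy observations through an assumed covariance function, can be sketched generically; the Gaussian covariance and noise level below are illustrative stand-ins, not the paper's two-layer gravity model:

```python
import numpy as np

def collocation(x_obs, g_obs, x_new, cov, noise_var):
    """Least-squares collocation sketch: predict the signal at x_new from
    noisy observations g_obs, given a covariance function cov(a, b)."""
    Cgg = cov(x_obs[:, None], x_obs[None, :]) + noise_var * np.eye(len(x_obs))
    Csg = cov(x_new[:, None], x_obs[None, :])
    sol = np.linalg.solve(Cgg, g_obs)
    est = Csg @ sol
    # estimation-error variance: prior variance minus the explained part
    reduction = np.einsum("ij,ji->i", Csg, np.linalg.solve(Cgg, Csg.T))
    var = np.diag(cov(x_new[:, None], x_new[None, :])) - reduction
    return est, var
```

Returning `var` alongside the estimate mirrors the paper's point that collocation supplies the precision of the computed solution, not just the solution itself.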

12.
Pseudo-velocity logs are tentative determinations of subsurface velocity variations with depth, using both seismic amplitude information and reflection curvature. A rigorous theoretical method would consist of:
  • a) deconvolving the seismic traces to remove the filtering effects of the ground and of the recording equipment;
  • b) demultiplying the deconvolved traces by a complete desynthesization with convergence criteria;
  • c) computing the velocities.
While this method works on synthetic examples, it is not generally applicable to field cases, one reason being the poor reliability of desynthesization in the presence of noise. The present method is a compromise between a rigorous and a practical process: the complete desynthesization is not performed; deconvolution and demultiplication are done by more classical techniques using true amplitudes; and absolute velocities are determined to fit both the reflection coefficients and the rms velocities. It leads to pseudo-velocity logs accurate enough to show lithologic variations, yet smoothed enough to preserve the signal-to-noise ratio. Examples are shown from Flexichoc profiles recorded in 2500 m (8000–9000 feet) deep areas of the Mediterranean Sea. The pseudo-velocity logs show 1000 m (3000 feet) of a Plio-Pleistocene marl formation whose velocity increases with depth, overlying Miocene evaporites. Intercalations of high- and low-velocity layers in the evaporites seem to indicate vertical facies variations. The pseudo-velocity log, associated with other lithologic determination processes, should become a geological tool for deep offshore exploration.
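The final step, turning reflection coefficients into velocities, follows directly from the definition of the reflection coefficient when density is assumed constant; a minimal sketch of that recursion:

```python
import numpy as np

def pseudo_velocity_log(refl, v0):
    """Integrate reflection coefficients into interval velocities, assuming
    constant density, from r_k = (v_{k+1} - v_k) / (v_{k+1} + v_k)."""
    v = np.empty(len(refl) + 1)
    v[0] = v0
    for k, r in enumerate(refl):
        v[k + 1] = v[k] * (1.0 + r) / (1.0 - r)
    return v
```

Note that the recursion only fixes velocity ratios; the absolute level comes from the starting value v0, which is why the paper calibrates against rms velocities.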

13.
Iterative regularization methods for seismic migration inversion imaging
Using the adjoint operator L*, direct migration methods usually yield a low-resolution or blurred seismic image. Linearized migration inversion requires solving a least-squares problem, but the direct least-squares solution is numerically unstable, which makes visual interpretation difficult. This paper establishes a constrained regularization model and studies iterative regularization methods for the seismic migration inversion imaging problem. A regularization constraint is first imposed on the least-squares problem, and gradient iteration methods are then used to solve the inversion imaging problem; in particular, a hybrid implementation of the conjugate gradient method is proposed. To demonstrate its practical applicability, numerical simulations were carried out on one-, two- and three-dimensional seismic models. The results show that the regularized migration inversion imaging method is effective and holds good promise for practical seismic imaging problems.
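The regularized least-squares step can be sketched with a plain conjugate-gradient solver for the normal equations; this is the generic Tikhonov/CG scheme, not the paper's hybrid implementation, and the damping value is an assumption:

```python
import numpy as np

def cg_solve(apply_a, b, niter=100, tol=1e-12):
    """Conjugate gradients for an SPD system A x = b, with A given as a function."""
    x = np.zeros_like(b)
    r = b - apply_a(x)
    p = r.copy()
    rs = r @ r
    for _ in range(niter):
        ap = apply_a(p)
        alpha = rs / (p @ ap)
        x = x + alpha * p
        r = r - alpha * ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def regularized_migration(L, d, mu):
    """Tikhonov-regularized inversion: solve (L^T L + mu I) m = L^T d by CG.
    Plain migration is just the adjoint image m = L^T d; CG sharpens it."""
    return cg_solve(lambda m: L.T @ (L @ m) + mu * m, L.T @ d)
```

The operator is supplied as a function, so for realistic problems `L` and `L.T` can be matrix-free migration/demigration routines rather than explicit matrices.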

14.
The conventional nonstationary convolutional model assumes that the seismic signal is recorded at normal incidence. Raw shot gathers are far from this assumption because of the effect of offset. We therefore propose a novel prestack nonstationary deconvolution approach: we introduce the radial trace (RT) transform into nonstationary deconvolution, estimate the nonstationary deconvolution factor with hyperbolic smoothing based on variable-step sampling (VSS) in the RT domain, and thereby obtain high-resolution prestack nonstationary deconvolution data. The RT transform maps the shot record from offset-traveltime coordinates to apparent-velocity-traveltime coordinates. The ray paths of traces in the RT domain better satisfy the assumptions of the convolutional model. The proposed method combines the advantages of stationary deconvolution and inverse-Q filtering, without requiring prior information about Q. Nonstationary deconvolution in the RT domain is more suitable for prestack data than in the space-time (XT) domain because it is a generalized extension of normal incidence. Tests with synthetic and real data demonstrate that the proposed method is more effective in compensating large-offset and deep data.
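The RT transform itself is easy to sketch: for each chosen apparent velocity, sample the gather along the straight path x = v·t. The linear interpolation across offset and the zero padding outside the recorded offsets are simplifications assumed here:

```python
import numpy as np

def radial_trace_transform(gather, offsets, dt, velocities):
    """Map a shot gather (time x offset) into the radial domain: each radial
    trace collects samples along the straight path x = v * t, here by linear
    interpolation across offset at every time sample."""
    nt = gather.shape[0]
    rt = np.zeros((nt, len(velocities)))
    for j, v in enumerate(velocities):
        for i in range(nt):
            rt[i, j] = np.interp(v * i * dt, offsets, gather[i, :],
                                 left=0.0, right=0.0)
    return rt
```

Along each radial path the wave has travelled a time proportional to distance from the source, which is why a single-channel (normal-incidence-like) deconvolution becomes a better approximation in this domain.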

15.
Spectral analysis and filtering of GOCE gravity gradients
The gradient data provided by the GOCE satellite contain very large low-frequency errors, and handling these errors is one of the most critical tasks in GOCE data processing. Starting from the orbital characteristics of the GOCE satellite, this paper first analyses the frequency properties of the gradient data and derives the correspondence between frequency and spherical-harmonic degree. On this basis, filtering methods aimed at the low-frequency errors are introduced, namely the remove-restore method and forward-backward filtering: the former solves the problem of low-frequency signal loss during filtering, while the latter mainly eliminates the phase shifts introduced by filtering. The final results show that, although the temporal frequencies of the gravity gradients do not correspond one-to-one with the degrees of the spherical harmonic expansion, the maximum cutoff frequency corresponding to each degree does have an explicit expression. They also show that the filtering methods adopted here are effective, removing the low-frequency errors while preserving the signal in the measurement band.
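The forward-backward idea, running a causal filter forward and then again over the time-reversed output so the two phase responses cancel, can be sketched with a one-pole high-pass standing in for the actual GOCE filter (the filter and its coefficient are assumptions of this sketch):

```python
import numpy as np

def causal_highpass(x, alpha=0.9):
    """One-pole causal high-pass filter; on its own it shifts phase."""
    y = np.zeros_like(x)
    px = py = 0.0
    for i, xi in enumerate(x):
        y[i] = alpha * (py + xi - px)
        px, py = xi, y[i]
    return y

def forward_backward(x, filt):
    """Apply the filter forward, then over the time-reversed output: the
    phase shifts of the two passes cancel, leaving a zero-phase result."""
    return filt(filt(x)[::-1])[::-1]
```

The amplitude response is applied twice (squared), so the low-frequency errors are attenuated more strongly while the pass band keeps its timing, which is the property the abstract highlights.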

16.
A review of the most significant mathematical properties of digital operators and an introduction to their important applications in seismic digital filtering are given. Basic definitions in the time-series field and the principles of digital filtering are introduced starting from the Z-transform domain. Predictive decomposition of stationary stochastic processes and inverse operators are also discussed. Applications of digital filtering to seismic signals concern predictive deconvolution, the characteristics of dispersive and recursive operators, matched filters, and multichannel operators. A brief discussion of frequency, wavenumber, and velocity filtering philosophy is given at the end of the paper.

17.
Noncausal spatial prediction filtering based on the ARMA model
Conventional frequency-domain prediction filtering is based on the autoregressive (AR) model, which leads to inconsistent assumptions before and after filtering: the error section is first computed under a source-noise assumption, yet it is then treated as additive noise and subtracted from the original section to obtain the signal. In this paper, an autoregressive/moving-average (ARMA) model is established instead. A noncausal prediction error filter operator is first solved for, and the additive noise is then estimated through a projection filtering process in self-deconvolution form, thereby attenuating the random noise. This procedure effectively avoids the inconsistency inherent in the AR model. On this basis, the one-dimensional ARMA model is extended to the two-dimensional spatial domain, and frequency-domain noncausal spatial prediction filtering based on the 2D ARMA model is applied to random-noise attenuation in 3D seismic data. Tests on synthetic models and field data show that the method suppresses random noise more thoroughly while preserving reflection information well, clearly outperforming conventional frequency-domain prediction denoising.
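The noncausal (two-sided) prediction idea can be sketched in one dimension: predict each sample from neighbours on both sides, which is the kind of operator a noncausal prediction-error scheme uses. This toy version estimates the filter by plain least squares; that estimator, the filter length, and the names are assumptions of the sketch, not the paper's ARMA solution:

```python
import numpy as np

def noncausal_pef(d, half):
    """Estimate a two-sided (noncausal) prediction filter: d[t] is predicted
    from `half` samples on each side, by least squares."""
    rows, target = [], []
    for t in range(half, len(d) - half):
        rows.append(np.concatenate((d[t - half:t], d[t + 1:t + half + 1])))
        target.append(d[t])
    a, *_ = np.linalg.lstsq(np.array(rows), np.array(target), rcond=None)
    return a

def apply_noncausal_pef(d, a, half):
    """Replace each interior sample by its two-sided prediction."""
    out = np.array(d, dtype=float)
    for t in range(half, len(d) - half):
        ctx = np.concatenate((d[t - half:t], d[t + 1:t + half + 1]))
        out[t] = ctx @ a
    return out
```

The predictable (coherent) part of the section survives the two-sided prediction, while the unpredictable random noise is reduced.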

18.
To recover the attenuation losses that seismic-wave energy suffers during propagation, an inverse-Q filtering method is proposed that combines a convolution-based estimation of the quality factor Q with the modified generalized S-transform. In a test on a seismic-wave attenuation compensation model, time-frequency analysis of the test data with the modified generalized S-transform yields the energy distribution of the signal and the correspondence between time and frequency; the convolution-based method then provides time-varying Q values; and inverse-Q filtering of the test data compensates the seismic-wave energy. The results show that the proposed inverse-Q filtering improves the compensation of seismic-wave attenuation, broadens the frequency band of the seismic data, and raises its resolution, which benefits high-resolution seismic exploration, the enhancement of deep signals, and hydrocarbon reservoir prediction.
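An amplitude-only version of inverse-Q filtering for a single arrival time can be sketched as follows. The gain clip `gmax`, which limits the boosting of high-frequency noise, and the constant-Q model without dispersion are simplifying assumptions of this sketch; the paper's method additionally derives time-varying Q from the S-transform:

```python
import numpy as np

def attenuate(trace, dt, q, t0):
    """Constant-Q amplitude attenuation of an arrival at time t0 (no dispersion)."""
    f = np.fft.rfftfreq(len(trace), dt)
    return np.fft.irfft(np.fft.rfft(trace) * np.exp(-np.pi * f * t0 / q),
                        n=len(trace))

def compensate(trace, dt, q, t0, gmax=50.0):
    """Amplitude-only inverse-Q filter: gain exp(pi * f * t0 / Q), clipped at
    gmax so that high-frequency noise is not boosted without bound."""
    f = np.fft.rfftfreq(len(trace), dt)
    gain = np.minimum(np.exp(np.pi * f * t0 / q), gmax)
    return np.fft.irfft(np.fft.rfft(trace) * gain, n=len(trace))
```

The exponential gain restores the high frequencies lost to absorption, which is what broadens the band and raises resolution; the clip is the standard guard against amplifying noise.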

19.
The interpretation of magnetic anomalies on the basis of model bodies is preferably done with "trial and error" methods. These manual methods are tedious and time-consuming, but they can be transferred to the computer by making the required adjustments with the method of least squares. The general principles of the method are described. The essential prerequisites are:
  • 1. the assumption of definite model bodies;
  • 2. the existence of approximate values of the unknown quantities (position, dip, magnetization, etc.);
  • 3. a sufficiently large number of measured values, so that the adjustment can be carried out.
The advantages of the method are:
  • 1. substantial automation and a quick procedure using computers;
  • 2. determination of the errors of the unknown quantities.
The method was applied to the interpretation of two-dimensional ΔZ- and ΔT-anomalies. Three types of model bodies are taken as the basis of the computer program: the thin dyke, either infinite or finite in its downward extension, and the circular cylinder. Only the measured values are given to the computer. The interpretation proceeds in the following steps:
  • 1. calculation of approximate values;
  • 2. determination of the best-fitting model body;
  • 3. iteration on the best-fitting model body.
The computer produces the final values of the unknown quantities, their mean errors, and the corresponding theoretical anomalies. These results are passed to a plotter, which draws the measured curve, the theoretical curve, and the model bodies. Interpretation examples are given.
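The adjustment loop can be sketched as a Gauss-Newton iteration with a numerical Jacobian, returning both the fitted unknowns and their standard errors. The symmetric anomaly formula below is a schematic stand-in for the dyke and cylinder expressions, not the actual field equations:

```python
import numpy as np

def model(p, x):
    """Schematic symmetric anomaly: amplitude a, centre x0, depth h."""
    a, x0, h = p
    return a * h / ((x - x0) ** 2 + h ** 2)

def gauss_newton(x, obs, p0, niter=30):
    """Least-squares adjustment with a numerical Jacobian; returns the fitted
    parameters and their standard errors (mean errors of the unknowns)."""
    p = np.array(p0, float)
    for _ in range(niter):
        res = obs - model(p, x)
        J = np.empty((len(x), len(p)))
        for j in range(len(p)):
            dp = np.zeros(len(p))
            dp[j] = 1e-6 * max(abs(p[j]), 1.0)
            J[:, j] = (model(p + dp, x) - model(p, x)) / dp[j]
        step, *_ = np.linalg.lstsq(J, res, rcond=None)
        p = p + step
    dof = len(x) - len(p)
    s2 = np.sum((obs - model(p, x)) ** 2) / dof     # residual variance
    cov = s2 * np.linalg.inv(J.T @ J)
    return p, np.sqrt(np.diag(cov))
```

The covariance of the adjusted unknowns is what supplies the "mean errors" that the manual trial-and-error procedure cannot provide.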

20.
Scaling geology and seismic deconvolution
The reflection seismic signal observed at the surface is the convolution of a wavelet with a reflection sequence representing the geology. Deconvolution of the observations without prior knowledge of the wavelet can be done by making assumptions about the statistics of the reflection sequence. In particular, the widely used prediction error filter is obtained by assuming that the power spectra of reflection sequences are white. However, evidence from well logs suggests that the power spectra are in fact proportional to a power of the frequency f, that is, to f^β, with β approximately equal to 1. We have found a simple modification to the prediction error filter that markedly improves deconvolution for reflection sequences with such scaling behaviour. We have calculated three reflection sequences from sonic logs of a well off Newfoundland and two wells in Quebec. The three values of β were 0.84, 0.95, and 1.20. We made artificial seismograms from the sequences and deconvolved them with the prediction error filter and with our new filters. The errors between the known reflection sequences and the recovered ones were 20%, 26%, and 31% for the prediction error filter, and 0.5%, 2.0%, and 0.5% for the new filters.
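The scaling exponent β can be checked directly from a log-log fit to the power spectrum; the synthesis routine below builds a test sequence with a prescribed β. This illustrates only the diagnostic step, not the paper's modified prediction error filter:

```python
import numpy as np

def make_scaling_sequence(n, beta, rng):
    """Synthesize a random sequence whose power spectrum is ~ f**beta."""
    f = np.fft.rfftfreq(n)
    amp = np.zeros_like(f)
    amp[1:] = f[1:] ** (beta / 2.0)             # amplitude = sqrt(power)
    phase = np.exp(2j * np.pi * rng.random(f.size))
    return np.fft.irfft(amp * phase, n=n)

def spectral_exponent(r):
    """Estimate beta from the log-log slope of the power spectrum."""
    power = np.abs(np.fft.rfft(r)) ** 2
    f = np.fft.rfftfreq(r.size)
    keep = (f > 0) & (f < 0.5)                  # drop DC and the Nyquist bin
    slope, _ = np.polyfit(np.log(f[keep]), np.log(power[keep] + 1e-30), 1)
    return slope
```

Run on a reflection sequence from a sonic log, the fitted slope plays the role of the β values (0.84, 0.95, 1.20) reported above.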


Copyright©北京勤云科技发展有限公司  京ICP备09084417号