Similar Documents
20 similar documents found.
1.
This study employs an event location algorithm based on grid search to investigate the possibility of improving seismic event location accuracy by using non-Gaussian error models. The primary departure from the Gaussian error model that is considered is the explicit use of non-Gaussian probability distributions in defining optimal estimates of location parameters. Specifically, the class of generalized Gaussian distributions is considered, which leads to the minimization of Lp norms of arrival time residuals for arbitrary p≥1. The generalized Gaussian error models are implemented both with fixed standard errors assigned to the data and with an iterative reweighting of the data on a station/phase basis. An implicit departure from a Gaussian model is also considered, namely, the use of a simple outlier rejection criterion for disassociating arrivals during the grid-search process. These various mechanisms were applied to the ISC phase picks for the IWREF reference events, and the resulting grid-search solutions were compared to the GT locations of the events as well as the ISC solutions. The results indicate that event mislocations resulting from the minimization of Lp residual norms, with p near 1, are generally better than those resulting from the conventional L2 norm minimization (Gaussian error assumption). However, this result did not always hold for mislocations in event depth. Further, outlier rejection and iterative reweighting, applied with L2 minimization, performed nearly as well as L1 minimization in some cases. The results of this study suggest that ISC can potentially improve its location capability with the use of global search methods and non-Gaussian error models. However, given the limitations of this study, further research, including the investigation of other statistical and optimization techniques not addressed here, is needed to assess this potential more completely.  相似文献   
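As an editorial illustration of the Lp-norm grid search described above, the following minimal sketch locates a synthetic event with p = 1 and p = 2 in the presence of one outlier pick. The constant-velocity travel-time model, station layout, known origin time and noise levels are all hypothetical stand-ins for the ISC/IWREF setting, not the authors' algorithm.

```python
# Minimal sketch of event location by grid search with an Lp residual norm
# (p = 1 vs p = 2). The constant velocity, station layout, known origin time
# and the single outlier pick are hypothetical illustrations, not the
# ISC/IWREF configuration discussed above.
import numpy as np

rng = np.random.default_rng(0)
v = 6.0                                      # km/s, assumed constant crustal velocity
stations = rng.uniform(-100, 100, (8, 2))    # 8 stations in a 200 x 200 km area
true_src = np.array([20.0, -35.0])           # true epicentre (km)

def travel_times(src):
    return np.linalg.norm(stations - src, axis=1) / v

t_obs = travel_times(true_src) + rng.normal(0, 0.1, 8)   # 0.1 s picking noise
t_obs[0] += 3.0                                          # one badly mis-picked arrival

def locate(p):
    xs = ys = np.linspace(-100, 100, 201)
    best, best_cost = None, np.inf
    for x in xs:
        for y in ys:
            r = t_obs - travel_times(np.array([x, y]))
            cost = np.sum(np.abs(r) ** p)    # Lp norm of the arrival-time residuals
            if cost < best_cost:
                best, best_cost = (x, y), cost
    return np.array(best)

for p in (1.0, 2.0):
    err = np.linalg.norm(locate(p) - true_src)
    print(f"p = {p}: mislocation = {err:.1f} km")
```

With the outlier present, the p = 1 location is typically less biased than the p = 2 one, mirroring the behaviour reported above.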

2.
Least-squares migration (LSM) is applied to image subsurface structures and lithology by minimizing an objective function built from the observed seismic data and the reverse-time-migrated residuals of candidate subsurface reflectivity models. LSM reduces migration artifacts, enhances the spatial resolution of the migrated images, and yields a more accurate subsurface reflectivity distribution than standard migration. Introducing regularization constraints effectively improves the stability of least-squares migration. The commonly used regularization terms are based on the L2-norm, which provides stability but smooths the migration result, e.g., by smearing the reflectivities. In exploration geophysics, however, reflecting structures defined by velocity and density contrasts are generally discontinuous in depth, i.e., the reflectivity is sparse. To obtain a sparse migration profile, we propose a super-resolution least-squares Kirchhoff prestack depth migration that solves an L0-norm-constrained optimization problem, and we introduce a two-stage iterative soft- and hard-thresholding algorithm to retrieve the super-resolution reflectivity distribution. The algorithm is applied to complex synthetic data, and its sensitivity to noise and to the dominant frequency of the source wavelet is evaluated. We conclude that the proposed method improves spatial resolution, achieves an impulse-like reflectivity distribution, and can be applied to structural interpretation and complex subsurface imaging.
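The two-stage thresholding idea can be made concrete with a generic iterative shrinkage sketch; the random linear operator below stands in for the Kirchhoff migration/demigration pair, and the step counts and threshold are invented, so this illustrates soft versus hard thresholding rather than the authors' implementation.

```python
# Generic sketch of two-stage iterative soft- then hard-thresholding for a sparse
# model m from data d = A m + noise. The random operator A, step counts and the
# threshold tau are invented; A stands in for a migration/demigration pair.
import numpy as np

rng = np.random.default_rng(1)
n, k = 200, 5
A = rng.normal(size=(300, n)) / np.sqrt(300)
m_true = np.zeros(n)
m_true[rng.choice(n, k, replace=False)] = rng.normal(0, 2, k)   # sparse "reflectivity"
d = A @ m_true + rng.normal(0, 0.05, 300)

soft = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)  # proximal step of the L1 norm
hard = lambda x, t: np.where(np.abs(x) > t, x, 0.0)              # projection promoting the L0 constraint

def two_stage(A, d, n_soft=100, n_hard=50, tau=0.05):
    m = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2           # gradient step for 0.5*||d - A m||^2
    for i in range(n_soft + n_hard):
        m = m + step * A.T @ (d - A @ m)             # gradient update (remigrate the residual)
        m = soft(m, tau) if i < n_soft else hard(m, tau)
    return m

m_est = two_stage(A, d)
print("nonzeros recovered:", np.count_nonzero(m_est), "of", k)
```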

3.
Using 116 earthquakes above M_L 3.8 in the Inner Mongolia region from 2008 to 2015, the local magnitude M_L and surface-wave magnitude M_S were remeasured. Conversion relations between M_L and M_S were then established using standard linear regression (SR1 and SR2) and orthogonal regression (OR), and the fits were tested under Gaussian perturbations of the data. The orthogonal regression (OR) gives the best-fitting curve, with the conversion relation M_S = 0.96 M_L − 0.10. The difference between this result and Guo Lücan's relation (M_S = 1.13 M_L − 1.08) may be caused by regional tectonic characteristics. The M_S values measured for Inner Mongolia are systematically higher than those given by the empirical relation, with an average difference of 0.23; the differences between the empirical and the revised relation lie in the range 0.2–0.3.
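For readers who want to reproduce this type of fit, here is a minimal sketch of an orthogonal (total-least-squares) regression obtained from an SVD, together with the quoted conversion applied to one value; the synthetic magnitudes are invented and do not reproduce the 116-event catalogue.

```python
# Sketch of an orthogonal (total least squares) fit M_S = a*M_L + b via the SVD,
# plus the quoted conversion applied to one value. The synthetic magnitudes are
# illustrative only and are not the 116 catalogued events.
import numpy as np

rng = np.random.default_rng(2)
ML = rng.uniform(3.8, 6.0, 116)
MS = 0.96 * ML - 0.10 + rng.normal(0, 0.15, 116)   # pretend both magnitudes carry errors

x, y = ML - ML.mean(), MS - MS.mean()
_, _, Vt = np.linalg.svd(np.column_stack([x, y]), full_matrices=False)
a = -Vt[-1, 0] / Vt[-1, 1]          # slope from the smallest right singular vector
b = MS.mean() - a * ML.mean()
print(f"orthogonal fit: M_S = {a:.2f} M_L {b:+.2f}")

ml = 4.5                            # applying the published relation to one event
print("converted M_S =", round(0.96 * ml - 0.10, 2))
```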

4.
A strategy for multiple removal consists of estimating a model of the multiples and then adaptively subtracting this model from the data by estimating shaping filters. A possible and efficient way of computing these filters is by minimizing the difference or misfit between the input data and the filtered multiples in a least‐squares sense. Therefore, the signal is assumed to have minimum energy and to be orthogonal to the noise. Some problems arise when these conditions are not met. For instance, for strong primaries with weak multiples, we might fit the multiple model to the signal (primaries) and not to the noise (multiples). Consequently, when the signal does not exhibit minimum energy, we propose using the L1‐norm, as opposed to the L2‐norm, for the filter estimation step. This choice comes from the well‐known fact that the L1‐norm is robust to ‘large’ amplitude differences when measuring data misfit. The L1‐norm is approximated by a hybrid L1/L2‐norm minimized with an iteratively reweighted least‐squares (IRLS) method. The hybrid norm is obtained by applying a simple weight to the data residual. This technique is an excellent approximation to the L1‐norm. We illustrate our method with synthetic and field data where internal multiples are attenuated. We show that the L1‐norm leads to much improved attenuation of the multiples when the minimum energy assumption is violated. In particular, the multiple model is fitted to the multiples in the data only, while preserving the primaries.  相似文献   
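A minimal IRLS sketch of this hybrid-norm filter estimation is given below; the synthetic traces, filter length and hybrid-norm threshold eps are assumptions made for illustration, not the authors' multichannel implementation.

```python
# IRLS sketch of adaptive subtraction: estimate a short shaping filter f so that
# data - M f is small in a hybrid L1/L2 sense (M applies convolution with the
# multiple model). The traces, filter length nf and threshold eps are assumptions.
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(3)
n, nf = 400, 11
multiples = rng.normal(size=n)                       # multiple model
primaries = np.zeros(n)
primaries[[60, 200, 330]] = [5.0, -4.0, 6.0]         # strong, sparse primaries
true_f = np.r_[np.zeros(4), 0.8, -0.3, np.zeros(5)]
data = np.convolve(multiples, true_f, mode="same") + primaries

# Convolution matrix such that M @ f equals np.convolve(multiples, f, mode="same").
col = np.r_[multiples, np.zeros(nf - 1)]
M = toeplitz(col, np.r_[multiples[0], np.zeros(nf - 1)])[nf // 2 : nf // 2 + n, :]

f = np.zeros(nf)
eps = 0.1 * np.std(data)
for _ in range(20):                                  # IRLS iterations
    r = data - M @ f
    w = 1.0 / np.sqrt(r ** 2 + eps ** 2)             # hybrid L1/L2 weights
    sw = np.sqrt(w)
    f, *_ = np.linalg.lstsq(sw[:, None] * M, sw * data, rcond=None)

print("estimated filter taps 4-5:", np.round(f[4:6], 2), "true:", true_f[4:6])
```

Because the strong primaries act as large residual outliers, the reweighting keeps them from biasing the filter, which is the behaviour the hybrid L1/L2 misfit is meant to provide.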

5.
In order to perform a good pulse compression, the conventional spike deconvolution method requires that the wavelet is stationary. However, this requirement is never reached since the seismic wave always suffers high‐frequency attenuation and dispersion as it propagates in real materials. Due to this issue, the data need to pass through some kind of inverse‐Q filter. Most methods attempt to correct the attenuation effect by applying greater gains for high‐frequency components of the signal. The problem with this procedure is that it generally boosts high‐frequency noise. In order to deal with this problem, we present a new inversion method designed to estimate the reflectivity function in attenuating media. The key feature of the proposed method is the use of the least absolute error (L1 norm) to define both the data and model error in the objective functional. The L1 norm is more immune to noise when compared to the usual L2 one, especially when the data are contaminated by discrepant sample values. It also favours sparse reflectivity when used to define the model error in regularization of the inverse problem and also increases the resolution, since an efficient pulse compression is attained. Tests on synthetic and real data demonstrate the efficacy of the method in raising the resolution of the seismic signal without boosting its noise component.  相似文献   

6.
Linear prediction filters are an effective tool for reducing random noise in seismic records. Unfortunately, the ability of prediction filters to enhance seismic records deteriorates when the data are contaminated by erratic noise. Erratic noise in this article designates non-Gaussian noise that consists of large isolated events with known or unknown distribution. We propose a robust f-x projection filtering scheme for simultaneous attenuation of erratic noise and Gaussian random noise. Instead of adopting the ℓ2-norm, as commonly used in the conventional design of f-x filters, we utilize a hybrid ℓ1/ℓ2-norm to penalize the energy of the additive noise. The estimation of the prediction error filter and of the additive noise sequence is performed in an alternating fashion. First, the additive noise sequence is fixed, and the prediction error filter is estimated via the least-squares solution of a system of linear equations. Then, the prediction error filter is fixed, and the additive noise sequence is estimated through a cost function containing the hybrid ℓ1/ℓ2-norm, which prevents erratic noise from influencing the final solution. In other words, we propose and design a robust M-estimate of a special autoregressive moving-average model in the f-x domain. Synthetic and field data examples are used to evaluate the performance of the proposed algorithm.

7.
The Bussgang algorithm was originally proposed for convolutive blind source separation; here it is applied to seismic blind deconvolution. Because the generalized Gaussian probability density function can approximate an arbitrary probability density function, a generalized Gaussian distribution is introduced, based on the statistical characteristics of the reflectivity sequence, to represent its super-Gaussian character. Following these statistics and the principle of the Bussgang algorithm, an objective function is constructed that uses the Kullback-Leibler distance as the measure of non-Gaussianity, and the memoryless nonlinear function involved in the algorithm is derived, thereby achieving seismic blind deconvolution. Tests on synthetic models and field data show that the method adapts well to non-minimum-phase systems, estimates the seismic wavelet and the reflectivity simultaneously, and effectively improves the resolution of seismic data.
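To make the super-Gaussian assumption concrete, the short sketch below evaluates the excess kurtosis and tail probability of the generalized Gaussian density p(x) ∝ exp(−|x/α|^β) for a few shape parameters; the β values are arbitrary, and β = 2 recovers the Gaussian case.

```python
# Sketch of the generalized Gaussian density p(x) ~ exp(-|x/alpha|^beta): beta = 2
# is Gaussian, beta < 2 is heavier-tailed (super-Gaussian), which is the reflectivity
# model referred to above. The beta values are arbitrary illustrations.
from scipy.special import gamma
from scipy.stats import gennorm

for beta in (0.8, 1.0, 2.0):
    excess_kurt = gamma(5 / beta) * gamma(1 / beta) / gamma(3 / beta) ** 2 - 3.0
    tail = 2 * gennorm.sf(3 * gennorm.std(beta), beta)   # P(|x| > 3 standard deviations)
    print(f"beta = {beta}: excess kurtosis = {excess_kurt:5.2f}, P(|x|>3 std) = {tail:.4f}")
```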

8.
A numerical comparison of 2D resistivity imaging with 10 electrode arrays
Numerical simulations are used to compare the resolution and efficiency of 2D resistivity imaging surveys for 10 electrode arrays. The arrays analysed include pole‐pole (PP), pole‐dipole (PD), half‐Wenner (HW), Wenner‐α (WN), Schlumberger (SC), dipole‐dipole (DD), Wenner‐β (WB), γ‐array (GM), multiple or moving gradient array (GD) and midpoint‐potential‐referred measurement (MPR) arrays. Five synthetic geological models, simulating a buried channel, a narrow conductive dike, a narrow resistive dike, dipping blocks and covered waste ponds, were used to examine the surveying efficiency (anomaly effects, signal‐to‐noise ratios) and the imaging capabilities of these arrays. The responses to variations in the data density and noise sensitivities of these electrode configurations were also investigated using robust (L1‐norm) inversion and smoothness‐constrained least‐squares (L2‐norm) inversion for the five synthetic models. The results show the following. (i) GM and WN are less contaminated by noise than the other electrode arrays. (ii) The relative anomaly effects for the different arrays vary with the geological models. However, the relatively high anomaly effects of PP, GM and WB surveys do not always give a high‐resolution image. PD, DD and GD can yield better resolution images than GM, PP, WN and WB, although they are more susceptible to noise contamination. SC is also a strong candidate but is expected to give more edge effects. (iii) The imaging quality of these arrays is relatively robust with respect to reductions in the data density of a multi‐electrode layout within the tested ranges. (iv) The robust inversion generally gives better imaging results than the L2‐norm inversion, especially with noisy data, except for the dipping block structure presented here. (v) GD and MPR are well suited to multichannel surveying and GD may produce images that are comparable to those obtained with DD and PD. Accordingly, the GD, PD, DD and SC arrays are strongly recommended for 2D resistivity imaging, where the final choice will be determined by the expected geology, the purpose of the survey and logistical considerations.  相似文献   

9.
A robust metric of data misfit such as the ℓ1-norm is required for geophysical parameter estimation when the data are contaminated by erratic noise. Recently, the iteratively reweighted and refined least-squares algorithm was introduced for the efficient solution of geophysical inverse problems in the presence of additive Gaussian noise in the data. We extend the algorithm in two practically important directions: to make it applicable to data with non-Gaussian noise, and to make its regularisation parameter tuning more efficient and automatic. The regularisation parameter in the iteratively reweighted and refined least-squares algorithm varies with iteration, allowing the efficient solution of constrained problems. A technique based on the secant method for root finding is proposed to home in on a solution that satisfies the constraint, either fitting a target misfit (if a bound on the noise is available) or having a target size (if a bound on the solution is available). This technique leads to an automatic update of the regularisation parameter at each and every iteration. We further propose a simple and efficient scheme that tunes the regularisation parameter without requiring target bounds, which is of great importance for field-data inversion where there is no information about the size of the noise or of the solution. Numerical examples from non-stationary seismic deconvolution and velocity-stack inversion show that the proposed algorithm is efficient, stable, and robust, and that it outperforms conventional and state-of-the-art methods.
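A minimal sketch of the secant-style tuning is shown below, where a plain Tikhonov sub-problem stands in for the iteratively reweighted and refined least-squares inner solver and the target misfit follows the discrepancy principle; matrix sizes and the noise level are invented.

```python
# Sketch of secant-based tuning of the regularisation parameter so that the misfit
# ||A m(lam) - d|| matches a target (discrepancy principle). A plain Tikhonov solve
# stands in for the inner solver; sizes and noise level are invented.
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(80, 50))
m_true = rng.normal(size=50)
sigma = 0.5
d = A @ m_true + rng.normal(0, sigma, 80)
target = sigma * np.sqrt(80)                # expected noise norm

def misfit(lam):
    m = np.linalg.solve(A.T @ A + lam * np.eye(50), A.T @ d)
    return np.linalg.norm(A @ m - d)

g = lambda x: misfit(10.0 ** x) - target    # root in x = log10(lam) keeps lam positive
x0, x1 = -3.0, 2.0
for _ in range(15):                         # secant iterations
    g0, g1 = g(x0), g(x1)
    if abs(g1 - g0) < 1e-12:
        break
    x0, x1 = x1, x1 - g1 * (x1 - x0) / (g1 - g0)

lam = 10.0 ** x1
print(f"lambda = {lam:.3g}, misfit = {misfit(lam):.2f}, target = {target:.2f}")
```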

10.
Consider a lamina of ore of thickness 2t whose electrical resistivity ρ2 is much smaller than the resistivity ρ1 of the surrounding host rock. The induced polarization response of such an ore body is investigated under the assumption that it arises from the variation of ρ2 with the frequency of measurement. Let ρ2l and ρ2h be the resistivities of the ore body at the low and high measurement frequencies, and L a length of the order of the distance between the transmitting electrodes. A theory is developed under the assumption that each of the quantities t/L, ρ2l/ρ1, ρ2h/ρ1, Lρ2l/2tρ1 and Lρ2h/tρ1 is small. The main conclusion is that the frequency-effect parameter P is given approximately by P = cL(ρ2l − ρ2h)/2tρ1, where the constant c is independent of t, ρ2l, ρ2h and ρ1. Thus, for a family of similar ore bodies having differing values of t, P will be larger the smaller t is. Detailed results are given for a semi-infinite submerged dipping dyke and the two-dimensional Wenner array.
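The practical reading of this result is easy to check numerically; in the sketch below all resistivities, lengths and the geometric constant c are invented placeholders, and it simply shows that halving the half-thickness t doubles the frequency effect P.

```python
# Numerical reading of P = c*L*(rho2l - rho2h) / (2*t*rho1): for a family of similar
# ore bodies, halving the half-thickness t doubles the frequency effect. All values
# are invented, and the geometric constant c is simply set to 1 here.
c = 1.0
L = 100.0                  # m, of the order of the transmitter electrode separation
rho1 = 1000.0              # ohm-m, host rock
rho2l, rho2h = 1.0, 0.8    # ohm-m, ore resistivity at low and high frequency

for t in (5.0, 2.5):       # half-thickness of the lamina (m)
    P = c * L * (rho2l - rho2h) / (2 * t * rho1)
    print(f"t = {t:4.1f} m  ->  P = {P:.4f}")
```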

11.
Solar and lunar geomagnetic tides in H at Alibag have been determined by spectral analysis of discrete Fourier transforms, following the method of Black and the well-known Chapman-Miller method. The seasonal variation in L2(H) is opposite to that in L2(D), with a maximum in the d season and a minimum in the j season. In both H and D, the enhancement due to sunspot activity is larger in the lunar tide than in the solar tide. Surprisingly, the enhancement due to magnetic activity is greater in L2(H) than in S1(H), while the contrary is true for declination. It is inferred that there is a local-time component of the storm-time variation, contrary to the view expressed by Green and Malin. The enhancements of the amplitudes L2 and S1 in H and D due to sunspot activity and due to magnetic activity have been separated. The results show that the amplitude at zero sunspot number increases with magnetic activity in all four parameters, while the enhancement due to sunspot activity at different levels of magnetic activity decreases with increasing Kp. When both Kp and R increase, the enhancement due to R decreases as Kp increases, and the enhancement due to Kp decreases as R increases.

12.
We present a new inversion method to estimate, from prestack seismic data, blocky P- and S-wave velocity and density images and the associated sparse reflectivity levels. The method uses the three-term Aki and Richards approximation to linearise the seismic inversion problem. To this end, we adopt a weighted mixed ℓ2,1-norm that promotes structured forms of sparsity, thus leading to blocky solutions in time. In addition, our algorithm incorporates a covariance or scale matrix to simultaneously constrain P- and S-wave velocities and density; this a priori information is obtained from nearby well-log data. We also include a term containing a low-frequency background model. The mixed ℓ2,1-norm leads to a convex objective function that can be minimised using proximal algorithms; in particular, we use the fast iterative shrinkage-thresholding algorithm (FISTA). A key advantage of this algorithm is that it only requires matrix-vector multiplications and no direct matrix inversion, which makes it numerically stable, easy to apply, and economical in terms of computational cost. Tests on synthetic and field data show that the proposed method, contrary to conventional ℓ2- or ℓ1-norm regularised solutions, is able to provide consistent blocky and/or sparse estimates of P- and S-wave velocities and density from a noisy and limited number of observations.
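A minimal FISTA sketch with the proximal operator of a mixed ℓ2,1-norm (group soft-thresholding) is given below; the random operator and the grouping of the model into triplets (standing in for the P-velocity, S-velocity and density reflectivities at each time sample) are schematic assumptions, not the three-term Aki-Richards operator used by the authors.

```python
# FISTA sketch with the proximal operator of a mixed l2,1-norm (group soft-
# thresholding). The random operator A and the grouping into triplets (standing in
# for P-velocity, S-velocity and density reflectivities per time sample) are
# schematic assumptions, not the three-term Aki-Richards operator.
import numpy as np

rng = np.random.default_rng(5)
nt, ng = 60, 3                            # 60 time samples, 3 parameters per sample
n = nt * ng
A = rng.normal(size=(150, n)) / np.sqrt(150)
m_true = np.zeros((nt, ng))
m_true[[10, 25, 40], :] = rng.normal(0, 2, (3, ng))    # few active triplets -> blocky/sparse model
d = A @ m_true.ravel() + rng.normal(0, 0.05, 150)

def prox_l21(m, tau):
    """Group soft-thresholding: shrink each length-3 group by its l2 norm."""
    g = m.reshape(nt, ng)
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    return (np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0) * g).ravel()

L_lip = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the misfit gradient
tau, m, z, t = 0.02, np.zeros(n), np.zeros(n), 1.0
for _ in range(300):                      # FISTA iterations
    m_new = prox_l21(z - (A.T @ (A @ z - d)) / L_lip, tau / L_lip)
    t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    z = m_new + ((t - 1.0) / t_new) * (m_new - m)
    m, t = m_new, t_new

groups = np.flatnonzero(np.linalg.norm(m.reshape(nt, ng), axis=1) > 0.1)
print("active triplets found at samples:", groups)
```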

13.
Seismic imaging is an important step in imaging the subsurface structures of the Earth. One attractive domain for seismic imaging is explicit frequency-space (f-x) prestack depth migration. So far, work in this domain has focused on migrating seismic data in acoustic media, and very little has assumed visco-acoustic media. In reality, seismic exploration data amplitudes suffer from attenuation. To tackle the problem of attenuation, new operators that compensate for it are required. We propose a weighted L1-error minimisation technique to design visco-acoustic f-x wavefield extrapolators. The L1-error wavenumber responses provide superior extrapolator designs compared with the previously designed equiripple L4-norm and L∞-norm extrapolation wavenumber responses. To verify the new compensating designs, prestack depth migration is performed on the challenging Marmousi model dataset. A reference migrated section is obtained by applying non-compensating f-x extrapolators to an acoustic dataset; then both compensating and non-compensating extrapolators are applied to a visco-acoustic dataset, and the migrated sections are compared. The final images show that the proposed weighted L1-error method enhances the resolution and results in practically stable images.

14.
15.
The use of relaxation mechanisms has recently made it possible to simulate viscoelastic (Q) effects accurately in time-domain numerical computations of seismic responses. As a result, seismograms may now be synthesized for models with arbitrary spatial variations in the compressional- and shear-wave quality factors (Qp and Qs), as well as in density (ρ) and the compressional- and shear-wave velocities (Vp and Vs). Reflections produced by Q contrasts alone may have amplitudes as large as those produced by velocity contrasts. Q effects, including their interaction with Vp, Vs and ρ, contribute significantly to the seismic response of reservoirs. For band-limited data at typical seismic frequencies, the effects of Q on reflectivity and attenuation are more visible than those on dispersion. Synthetic examples include practical applications to reservoir exploration, evaluation and monitoring. Q effects are clearly visible in both surface and offset vertical seismic profile data. Thus, AVO analyses that neglect Q may produce erroneous conclusions.

16.
The Pannonian Basin is a deep intra-continental basin that formed as part of the Alpine orogeny. In order to study the nature of the crustal basement we used the long-wavelength magnetic anomalies acquired by the CHAMP satellite. The anomalies were distributed on a spherical shell: some 107,927 data recorded between January 1 and December 31 of 2008, covering the Pannonian Basin and its vicinity. These anomaly data were interpolated onto a 0.5° × 0.5° spherical grid at an elevation of 324 km using a Gaussian weight function. The vertical gradient of the total magnetic anomalies was also computed and mapped onto the same 324 km sphere, and the spherical anomaly data at 425 km altitude were continued downward to 324 km. To interpret the data at 324 km we used an inversion method with a polygonal prism as the forward model. The minimization problem was solved numerically by the simplex and simulated annealing methods, using an L2 norm in the case of Gaussian-distributed parameters and an L1 norm in the case of Laplace-distributed parameters. We interpret the magnetic anomaly as being produced by several sources, including the effect of the stable magnetization of exsolved hemo-ilmenite minerals in the upper-crustal metamorphic rocks.
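The contrast between the two misfit choices can be sketched with a derivative-free simplex search on a toy problem; the one-dimensional bell-shaped "anomaly" below replaces the polygonal prism forward model, and all parameter values are invented.

```python
# Sketch of the two misfit choices with a derivative-free simplex (Nelder-Mead) search:
# an L2 norm for Gaussian errors versus an L1 norm for Laplace (heavy-tailed) errors.
# The toy 1-D bell-shaped "anomaly" replaces the polygonal prism forward model.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
x = np.linspace(-50.0, 50.0, 101)

def forward(p):                              # p = (amplitude, centre, width)
    return p[0] * np.exp(-((x - p[1]) / p[2]) ** 2)

d_obs = forward((120.0, 5.0, 12.0)) + rng.laplace(0.0, 4.0, x.size)   # heavy-tailed noise

l2_misfit = lambda p: np.sum((forward(p) - d_obs) ** 2)
l1_misfit = lambda p: np.sum(np.abs(forward(p) - d_obs))

p0 = (80.0, 0.0, 20.0)                       # rough starting model
for name, obj in (("L2", l2_misfit), ("L1", l1_misfit)):
    res = minimize(obj, p0, method="Nelder-Mead")
    print(name, "estimate (amp, centre, width):", np.round(res.x, 1))
```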

17.
Over the past several decades, different groundwater modeling approaches of various complexities and data use have been developed. A recently developed approach for mapping hydraulic conductivity (K) and specific storage (Ss) heterogeneity is hydraulic tomography, the performance of which has not been compared to other more “traditional” methods that have been utilized over the past several decades. In this study, we compare seven methods of modeling heterogeneity which are (1) kriging, (2) effective parameter models, (3) transition probability/Markov Chain geostatistics models, (4) geological models, (5) stochastic inverse models conditioned to local K data, (6) hydraulic tomography, and (7) hydraulic tomography conditioned to local K data using data collected in five boreholes at a field site on the University of Waterloo (UW) campus, in Waterloo, Ontario, Canada. The performance of each heterogeneity model is first assessed during model calibration. In particular, the correspondence between simulated and observed drawdowns is assessed using the mean absolute error norm, (L1), mean square error norm (L2), and correlation coefficient (R) as well as through scatterplots. We also assess the various models on their ability to predict drawdown data not used in the calibration effort from nine pumping tests. Results reveal that hydraulic tomography is best able to reproduce these tests in terms of the smallest discrepancy and highest correlation between simulated and observed drawdowns. However, conditioning of hydraulic tomography results with permeameter K data caused a slight deterioration in accuracy of drawdown predictions which suggests that data integration may need to be conducted carefully.  相似文献   
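The three calibration metrics named above are straightforward to compute; in the sketch below the observed and simulated drawdown vectors are placeholders.

```python
# The three calibration metrics used above, computed for placeholder drawdown vectors:
# mean absolute error (L1), mean squared error (L2) and correlation coefficient (R).
import numpy as np

observed  = np.array([0.12, 0.34, 0.55, 0.71, 0.90, 1.02])   # hypothetical drawdowns (m)
simulated = np.array([0.10, 0.30, 0.58, 0.75, 0.85, 1.08])

L1 = np.mean(np.abs(simulated - observed))        # mean absolute error norm
L2 = np.mean((simulated - observed) ** 2)         # mean square error norm
R  = np.corrcoef(simulated, observed)[0, 1]       # correlation coefficient
print(f"L1 = {L1:.3f} m, L2 = {L2:.4f} m^2, R = {R:.3f}")
```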

18.
Methods of minimum entropy deconvolution (MED) try to take advantage of the non-Gaussian distribution of primary reflectivities in the design of deconvolution operators. Of these, Wiggins’(1978) original method performs as well as any in practice. However, we present examples to show that it does not provide a reliable means of deconvolving seismic data: its operators are not stable and, instead of whitening the data, they often band-pass filter it severely. The method could be more appropriately called maximum kurtosis deconvolution since the varimax norm it employs is really an estimate of kurtosis. Its poor performance is explained in terms of the relation between the kurtosis of a noisy band-limited seismic trace and the kurtosis of the underlying reflectivity sequence, and between the estimation errors in a maximum kurtosis operator and the data and design parameters. The scheme put forward by Fourmann in 1984, whereby the data are corrected by the phase rotation that maximizes their kurtosis, is a more practical method. This preserves the main attraction of MED, its potential for phase control, and leaves trace whitening and noise control to proven conventional methods. The correction can be determined without actually applying a whole series of phase shifts to the data. The application of the method is illustrated by means of practical and synthetic examples, and summarized by rules derived from theory. In particular, the signal-dominated bandwidth must exceed a threshold for the method to work at all and estimation of the phase correction requires a considerable amount of data. Kurtosis can estimate phase better than other norms that are misleadingly declared to be more efficient by theory based on full-band, noise-free data.  相似文献   
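The phase-only correction can be illustrated with a short sketch that scans constant phase rotations of a synthetic trace and keeps the one that maximises a kurtosis (varimax-like) norm; the wavelet, its 60° phase and the sparse reflectivity are invented for the example.

```python
# Sketch of the phase-only correction: scan constant phase rotations of a synthetic
# trace and keep the angle that maximises a kurtosis (varimax-like) norm, instead of
# filtering the data repeatedly. The wavelet, its 60-degree phase and the sparse
# reflectivity are invented for the example.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(7)
r = np.zeros(600)
r[rng.choice(600, 20, replace=False)] = rng.normal(0, 1, 20)      # sparse reflectivity
t = np.arange(-40, 41)
wav = np.exp(-(t / 10.0) ** 2) * np.cos(2 * np.pi * t / 25.0 + np.deg2rad(60.0))
trace = np.convolve(r, wav, mode="same")                          # non-zero-phase trace

analytic = hilbert(trace)
kurt = lambda x: np.mean(x ** 4) / np.mean(x ** 2) ** 2           # kurtosis-style norm

angles = np.deg2rad(np.arange(-90, 91))
scores = [kurt(np.real(analytic * np.exp(1j * a))) for a in angles]
best = np.rad2deg(angles[int(np.argmax(scores))])
print(f"phase rotation that maximises kurtosis: {best:.0f} degrees")
```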

19.
Simultaneous estimation of velocity gradients and anisotropic parameters from seismic reflection data is one of the main challenges in transversely isotropic media with a vertical symmetry axis migration velocity analysis. In migration velocity analysis, we usually construct the objective function using the l2 norm along with a linear conjugate gradient scheme to solve the inversion problem. Nevertheless, for seismic data this inversion scheme is not stable and may not converge in finite time. In order to ensure the uniform convergence of parameter inversion and improve the efficiency of migration velocity analysis, this paper develops a double parameterized regularization model and gives the corresponding algorithms. The model is based on the combination of the l2 norm and the non‐smooth l1 norm. For solving such an inversion problem, the quasi‐Newton method is utilized to make the iterative process stable, which can ensure the positive definiteness of the Hessian matrix. Numerical simulation indicates that this method allows fast convergence to the true model and simultaneously generates inversion results with a higher accuracy. Therefore, our proposed method is very promising for practical migration velocity analysis in anisotropic media.  相似文献   

20.
In this study, observed seismic attributes from shot gather 11 of the SAREX experiment are used to derive a preliminary velocity and attenuation model for the northern end of the profile in southern Alberta. Shot gather 11 was selected because of its prominent Pn arrivals and good signal-to-noise ratio. The 2-D Gaussian beam method was used to perform the modeling of the seismic attributes, including travel times, peak envelope amplitudes and pulse instantaneous frequencies for selected phases. The preliminary model was obtained from the seismic attributes of shot gather 11, starting from prior tomographic results. The amplitudes and instantaneous frequencies were used to constrain the velocity and attenuation structure, with the amplitudes being more sensitive to the velocity gradients and the instantaneous frequencies more sensitive to the attenuation structure. The resulting velocity model has a velocity discontinuity between the upper and lower crust, and lower velocity gradients in the upper and lower crust compared to earlier studies. The attenuation model has Q_p^{-1} values between 0.011 and 0.004 in the upper crust, 0.0019 in the lower crust and a laterally variable Q_p^{-1} in the upper mantle. The Q_p^{-1} values are similar to those found in Archean terranes in other studies. Although the results from a single gather are non-unique, the initial model derived here provides a self-consistent starting point for a more complete seismic-attribute inversion for the velocity and attenuation structure.

