Similar articles
20 similar articles retrieved (search time: 93 ms)
1.
Potential, potential field and potential-field gradient data are supplemental to each other for resolving sources of interest in both exploration and solid Earth studies. We propose flexible, high-accuracy practical techniques to perform 3D and 2D integral transformations from potential field components to potential and from potential-field gradient components to potential field components in the space domain using cubic B-splines. The spline techniques are applicable to either uniform or non-uniform rectangular grids for the 3D case, and to either regular or irregular grids for the 2D case. The spline-based indefinite integrations can be computed at any point in the computational domain. In our synthetic 3D gravity and magnetic transformation examples, we show that the spline techniques are substantially more accurate than the Fourier transform techniques, demonstrate that harmonicity is confirmed substantially better for the spline method than for the Fourier transform method, and show that spline-based integration and differentiation are invertible. The cost of the increase in accuracy is an increase in computing time. Our real-data examples of 3D transformations show that the spline-based results agree with the observed data substantially better, or at least better, than do the Fourier-based results. The spline techniques would therefore be very useful for data quality control through comparisons of the computed and observed components. If certain desired components of the potential field or gradient data are not measured, they can be obtained using the spline-based transformations as alternatives to the Fourier transform techniques.
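A minimal 1D sketch of the spline-based integration idea, assuming NumPy/SciPy are available: SciPy's CubicSpline (with its exact antiderivative) stands in for the paper's cubic B-spline machinery, and the profile, grid and field values are hypothetical.

```python
# Integrate a gravity-gradient profile dgz/dx back to gz along x using a cubic
# spline antiderivative; differentiating the result recovers the gradient,
# illustrating the invertibility of spline-based integration/differentiation.
import numpy as np
from scipy.interpolate import CubicSpline

x = np.linspace(0.0, 10000.0, 201)                              # profile coordinate (m), hypothetical
dgz_dx = np.gradient(np.exp(-((x - 5000.0) / 1500.0) ** 2), x)  # synthetic gradient data

spline = CubicSpline(x, dgz_dx)                 # cubic spline through the gradient samples
gz = spline.antiderivative()(x)                 # indefinite integral, evaluable anywhere
gz -= gz[0]                                     # fix the integration constant at the profile start

recovered = CubicSpline(x, gz).derivative()(x)  # differentiate the integrated field back
print(np.max(np.abs(recovered - dgz_dx)))       # small residual away from the profile ends
```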

2.
Smoothing turns a single interface into an inhomogeneous thin layer, so that the interface reflection becomes a layer reflection. To investigate the effect of smoothing, a plane wave is taken as the incident wavefield. We first derive a theoretical expression for the reflected signal from the reflection coefficient of a transition layer, then present a concrete algorithm for computing the reflection coefficient of an inhomogeneous thin layer, and demonstrate its accuracy by comparison with the classical analytical reflection coefficient of the Epstein transition layer. Finally, a comparative analysis of a single interface and its smoothed versions leads to several important conclusions: as the number of smoothing passes increases, the arrival time of the reflected signal remains essentially unchanged, while the dominant frequency and the energy of the reflected signal decrease; the signal energy decays noticeably faster at low smoothing counts than at high smoothing counts.
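A hedged numerical sketch of the effect described above, assuming NumPy: the smoothed interface is approximated by a stack of thin homogeneous sub-layers and its normal-incidence reflectivity is computed with a standard bottom-up layer recursion. This is not the transition-layer algorithm of the paper; the velocities, the Gardner-type density relation and the smoothing operator are assumptions.

```python
import numpy as np

def layered_reflectivity(freq, v, rho, d):
    """Normal-incidence reflectivity of a stack of thin homogeneous layers.
    v, rho: layer velocities/densities (top half-space ... bottom half-space);
    d: thicknesses of the interior layers (len(v) - 2 values)."""
    z = v * rho
    r = (z[1:] - z[:-1]) / (z[1:] + z[:-1])          # interface reflection coefficients
    R = r[-1] + 0j                                   # start at the deepest interface
    for i in range(len(r) - 2, -1, -1):              # march upward through the stack
        delay = np.exp(2j * 2 * np.pi * freq * d[i] / v[i + 1])   # two-way phase in layer i+1
        Rd = R * delay
        R = (r[i] + Rd) / (1.0 + r[i] * Rd)
    return R

# A velocity step smoothed an increasing number of times: the reflection amplitude
# at a fixed frequency decreases as the transition zone widens.
v = np.r_[np.full(50, 2000.0), np.full(50, 3000.0)]
for n_smooth in (0, 20, 200):
    vs = v.copy()
    for _ in range(n_smooth):
        vs[1:-1] = 0.25 * vs[:-2] + 0.5 * vs[1:-1] + 0.25 * vs[2:]
    rho = 310.0 * vs ** 0.25                         # Gardner-type density, only for the sketch
    d = np.full(vs.size - 2, 1.0)                    # 1 m sub-layers
    print(n_smooth, abs(layered_reflectivity(100.0, vs, rho, d)))
```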

3.
We have developed a novel method for missing seismic data interpolation using f-x-domain regularised nonstationary autoregression. f-x regularised nonstationary autoregression interpolation can handle events with space-varying dips. We assume that the coefficients of f-x regularised nonstationary autoregression vary smoothly along the space axis. The method includes two steps: estimation of the coefficients and interpolation of the missing traces using the estimated coefficients. We estimate the f-x regularised nonstationary autoregression coefficients for the complete data using weighted nonstationary autoregression equations with smoothing constraints. For regularly missing data, similar to Spitz f-x interpolation, we use autoregression coefficients estimated from the non-aliased low-frequency components to obtain autoregression coefficients for the aliased high-frequency components. For irregularly missing or gapped data, we use the known traces to establish regularised nonstationary autoregression equations and estimate the f-x autoregression coefficients of the complete data. We implement the algorithm with an iterative scheme using a frequency-domain conjugate-gradient method with shaping regularisation. The proposed method improves computational efficiency through shaping regularisation and the frequency-domain implementation. The applicability and effectiveness of the proposed method are demonstrated with synthetic and field data examples.

4.
Conventional CSEM processing usually extracts only the fundamental-frequency signal, or extracts a few low-order harmonics based on the amplitude ratio of the harmonics to the fundamental; because an objective criterion is lacking, this introduces considerable uncertainty in practice. Based on the wavelet transform and the Hilbert analytic envelope, this paper proposes a new method for evaluating noise in CSEM signals. First, an accurate power spectrum of the raw signal is obtained in the time domain with a mixed-radix fast Fourier transform. Second, in the frequency domain, the spectrum is preconditioned using the amplitudes of the frequencies adjacent to the CSEM frequencies; the preconditioned spectrum is then split into low-frequency and high-frequency parts with the discrete wavelet transform, the upper envelope of the high-frequency part is identified with the Hilbert transform, and it is recombined with the low-frequency part to obtain the overall upper envelope of the spectrum. Finally, the influence of noise is estimated from the ratio of the envelope to the amplitude at the corresponding CSEM frequency, and a threshold is used to select the fundamental and harmonic signals with high signal-to-noise ratio. Without any additional field effort, the method extracts a large number of frequencies, especially high-order harmonics, densifying the frequency sampling and improving the vertical resolution and energy efficiency of CSEM.
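A hedged sketch of the envelope-based screening described above, assuming NumPy, SciPy and PyWavelets (pywt) are available. The transmitter waveform, sampling rate, wavelet, preprocessing window and threshold are all assumptions, and the paper's exact preprocessing is not reproduced.

```python
import numpy as np
import pywt
from scipy.signal import hilbert

fs, f0 = 2400.0, 4.0                            # sampling rate and CSEM fundamental (hypothetical)
t = np.arange(0, 60.0, 1 / fs)
sig = np.sign(np.sin(2 * np.pi * f0 * t))       # idealised square-wave transmission (odd harmonics)
sig = sig + 0.3 * np.random.randn(t.size)       # broadband noise

amp = np.abs(np.fft.rfft(sig))                  # amplitude spectrum of the raw signal
freqs = np.fft.rfftfreq(sig.size, 1 / fs)
harmonics = f0 * np.arange(1, 120, 2)           # odd harmonics of the square wave
idx = np.rint(harmonics / freqs[1]).astype(int)

amp_pre = amp.copy()                            # crude preprocessing: replace the signal peaks
for i in idx:                                   # by nearby noise-floor amplitudes
    amp_pre[i - 1:i + 2] = np.median(amp[i + 3:i + 9])

cA, cD = pywt.dwt(amp_pre, "db4")               # split the spectrum into smooth + detail parts
low = pywt.idwt(cA, None, "db4")[: amp.size]
high = pywt.idwt(None, cD, "db4")[: amp.size]
envelope = low + np.abs(hilbert(high))          # overall upper envelope of the spectrum

ratio = envelope[idx] / amp[idx]                # small ratio = peak well above the noise floor
usable = harmonics[ratio < 0.5]                 # hypothetical threshold
```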

5.
In order to establish a reliable chronology for lacustrine sediments of the Frickenhauser See (central Germany) different dating methods have been applied. A total of 17 AMS 14C dates, all within the last 2000 years, were supplemented with 137Cs/210Pb dating and varve counting of the uppermost sediments (131 years). The age–depth model for the Frickenhauser See has to cope with highly variable sedimentation rates and overlapping probability distributions of calibrated 14C dates. The uncertainty of calibrated 14C dates could be considerably reduced by including the stratigraphic relationship of the dated samples, the age information derived from short-lived isotopes and varve counting as well as an upper and lower limit of realistic sedimentation rates as ‘a priori’ information in the calibration procedure. Sets of possible age combinations obtained by repeated sampling from the modified probability distributions were used to calculate continuous age–depth relationships based on monotonic smoothing splines. The obtained age–depth model for the sediment record of the Frickenhauser See represents the average of over 16,000 such model runs and suggests a drastic increase in sedimentation rates from around 1–2 mm a−1 (200–1000 AD) to over 25 mm a−1 for the period between 1100 and 1300 AD. From then on, sedimentation rates exhibit relatively stable values around 3–9 mm a−1. ‘Conventional’ age–depth models such as general polynomial regression or cubic splines either do not include the obtained age information in a satisfying manner (the model being too “stiff”) or exhibit “swings” causing age reversals in the model. Although the age–depth relationships obtained for monotonic smoothing splines and mixed-effect regression are generally very similar, they differ in their respective sedimentation rates as well as in their uncertainties. Mixed-effect regression resulted in much higher sedimentation rates of more than 37 mm a−1. These results suggest that monotonic smoothing splines give better control of the age–depth model characteristics and are well suited in situations where the integrity of 14C dates is high, i.e. the dated material represents the age of the respective layer.
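A hedged Monte Carlo sketch of the age-depth idea, assuming NumPy/SciPy: ages are drawn from per-depth distributions (Gaussian here, calibrated 14C distributions in the paper), draws with age reversals are rejected, and a monotone interpolant (PCHIP, standing in for the paper's monotonic smoothing splines) is fitted to each accepted draw and averaged. Depths, ages and uncertainties are hypothetical.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

depths = np.array([0.2, 1.0, 2.5, 4.0, 6.5, 8.0])                   # m below surface (hypothetical)
age_mu = np.array([1950.0, 1800.0, 1300.0, 1200.0, 900.0, 250.0])   # central ages (AD)
age_sd = np.array([5.0, 40.0, 60.0, 60.0, 80.0, 90.0])              # 1-sigma uncertainties (years)

rng = np.random.default_rng(0)
grid = np.linspace(depths[0], depths[-1], 200)
runs = []
while len(runs) < 2000:
    ages = rng.normal(age_mu, age_sd)
    if np.all(np.diff(ages) < 0):                        # reject draws with age reversals
        runs.append(PchipInterpolator(depths, ages)(grid))

model = np.mean(runs, axis=0)                            # averaged age-depth model (AD vs depth)
sed_rate = -1.0 / np.gradient(model, grid)               # sedimentation rate in m per year
```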

6.
Planar wave events recorded in a seismic array can be represented as lines in the Fourier domain. However, in the real world, seismic events usually have curvature or amplitude variability, which means that their Fourier transforms are no longer strictly linear but rather occupy conic regions of the Fourier domain that are narrow at low frequencies but broaden at high frequencies where the effect of curvature becomes more pronounced. One can consider these regions as localised “signal cones”. In this work, we consider a space–time variable signal cone to model the seismic data. The variability of the signal cone is obtained through scaling, slanting, and translation of the kernel for cone-limited (C-limited) functions (functions whose Fourier transform lives within a cone) or the C-Gaussian function (a multivariate function whose Fourier transform decays exponentially with respect to slowness and frequency), which constitutes our dictionary. We find a discrete number of scaling, slanting, and translation parameters from a continuum by optimally matching the data. This is a non-linear optimisation problem, which we address by a fixed-point method that utilises a variable projection method with ℓ1 constraints on the linear parameters and bound constraints on the non-linear parameters. We observe that the slow decay and oscillatory behaviour of the kernel for C-limited functions constitute bottlenecks for the optimisation problem, which we partially overcome by the C-Gaussian function. We demonstrate our method through an interpolation example. We present the interpolation result using the estimated parameters obtained from the proposed method and compare it with those obtained using sparsity-promoting curvelet decomposition, matching pursuit Fourier interpolation, and sparsity-promoting plane-wave decomposition methods.

7.
8.
This paper attempts to show analytically that the energy-input spectra of damped SDOF systems and undamped MDOF systems excited by an earthquake motion can be predicted by smoothing the Fourier amplitude spectrum of the base acceleration. The spectral window for smoothing in the frequency domain for a damped SDOF system is identical with the probability density function of the time-variant or instantaneous vibration frequency resulting from non-linear hysteresis. The spectral window for an undamped MDOF system is identical with the set of squared participation factors associated with vibration modes. It was found that the increase in damping factor and the increase in participation of higher modes provide wider spectral windows, resulting in more flattened or unaltered energy-input spectra due to enhanced smoothing effects.
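A hedged sketch, assuming NumPy, of the smoothing operation the abstract refers to: the Fourier amplitude spectrum of a ground acceleration is averaged with a normalised spectral window, and a wider window (standing in for higher damping or stronger higher-mode participation) flattens the result. The Gaussian window, its relative width and the synthetic record are assumptions, not the paper's analytically derived windows.

```python
import numpy as np

def smoothed_input_spectrum(acc, dt, rel_width=0.1):
    """Return frequencies and the window-smoothed Fourier amplitude spectrum."""
    A = np.abs(np.fft.rfft(acc)) * dt
    f = np.fft.rfftfreq(acc.size, dt)
    df = f[1]
    smoothed = np.empty_like(A)
    for i, fi in enumerate(f):
        sigma = max(rel_width * fi, df)              # window width (assumed to grow with f)
        w = np.exp(-0.5 * ((f - fi) / sigma) ** 2)
        smoothed[i] = np.sum(w * A) / np.sum(w)      # normalised window: weights sum to one
    return f, smoothed

dt = 0.01
acc = np.random.randn(4096)                          # stand-in for a recorded accelerogram
f, narrow = smoothed_input_spectrum(acc, dt, rel_width=0.05)   # "light damping"
_, wide = smoothed_input_spectrum(acc, dt, rel_width=0.30)     # "heavy damping": flatter spectrum
```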

9.
10.
王贵宣 《地震》1993,(4):63-71
Based on the principle of suppressing interference with digital filters, this paper sets out concrete criteria for using the frequency response of a digital filter, for choosing among digital processing methods, and for determining optimal filter parameters. For processing methods whose frequency response or period-selectivity function is not given, the frequency response can be computed directly from the weighting coefficients of the method. The formula for the frequency response of a recursive filter can also be used to compute the frequency response of other, non-recursive filters. Using the coefficients and the period-selectivity functions of several methods for computing the zero-point drift at the central point in gravity Earth-tide analysis, the paper computes their frequency responses and finds that the two sets of values agree. It also computes the frequency responses of commonly used processing methods such as the five-day mean, first difference, second difference, and multi-point digital smoothing, and analyses their effectiveness and limitations. Finally, using the frequency-response curves of the optimal digital filter and the DAI digital filter, it illustrates how to choose a digital filter and determine the optimal filter parameters.
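A hedged sketch, assuming NumPy/SciPy and one sample per day, of computing frequency responses directly from filter weight coefficients, as the abstract describes; the five-day mean and the difference operators are generic examples, not the paper's specific drift-estimation formulas.

```python
import numpy as np
from scipy.signal import freqz

filters = {
    "five-day mean": np.full(5, 1.0 / 5.0),          # running mean of five daily values
    "first difference": np.array([1.0, -1.0]),
    "second difference": np.array([1.0, -2.0, 1.0]),
}

for name, b in filters.items():
    w, h = freqz(b, worN=np.linspace(0.0, np.pi, 513))   # b are the FIR weight coefficients
    gain = np.abs(h)                                     # |H| versus frequency (cycles/day = w / 2*pi)
    print(f"{name}: gain at zero frequency = {gain[0]:.2f}, at the Nyquist = {gain[-1]:.2f}")
```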

11.
Based on an average-derivative method and optimization techniques, a 27-point scheme for the 3D frequency-domain scalar wave equation is developed. Compared with the rotated-coordinate approach, the average-derivative optimal method is not only concise but also applies to both equal and unequal directional sampling intervals. The resulting 27-point scheme uses a 27-point operator to approximate the spatial derivatives and the mass acceleration term. The coefficients are determined by minimizing phase-velocity dispersion errors, and the resulting optimal coefficients depend on the ratios of the directional sampling intervals. Compared with the classical 7-point scheme, the number of grid points per shortest wavelength is reduced from approximately 13 to approximately 4 by this 27-point optimal scheme, for both equal and unequal directional sampling intervals. Two numerical examples are presented to demonstrate the theoretical analysis. The average-derivative algorithm is also extended to a 3D frequency-domain viscous scalar wave equation.

12.
Optimum design of structures for earthquake loading is achieved by simulated annealing. To reduce the computational work, a fast wavelet transform is used, by means of which the number of points in the earthquake record is decreased. The record is decomposed into two parts: one part contains the low-frequency content of the record, and the other contains the high-frequency content. The low-frequency content is the effective part, since most of the energy of the record is contained in it, and this part of the record is therefore used for the dynamic analysis. Then, using a wavelet neural network, the dynamic responses of the structures are approximated. By such approximation, the dynamic analysis of the structure becomes unnecessary in the process of optimization. The wavelet neural networks have been employed as a general approximation tool for time-history dynamic analysis. A number of structures are designed for optimal weight and the results are compared with those corresponding to the exact dynamic analysis. Copyright © 2004 John Wiley & Sons, Ltd.
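A hedged sketch, assuming NumPy and PyWavelets, of the record-decomposition step only: a discrete wavelet transform splits an accelerogram into low- and high-frequency parts, and the approximation (low-frequency) part, with far fewer points, is what would feed the repeated dynamic analyses. The wavelet, decomposition level and synthetic record are assumptions; the simulated annealing and wavelet-neural-network stages are not shown.

```python
import numpy as np
import pywt

dt = 0.02
acc = np.random.randn(2048)                     # stand-in for a recorded accelerogram

level = 2
coeffs = pywt.wavedec(acc, "db4", level=level)  # [cA2, cD2, cD1]

# Low-frequency reconstruction at the original sampling: zero all detail coefficients.
low_coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
acc_low = pywt.waverec(low_coeffs, "db4")[: acc.size]

# The approximation coefficients alone form a much shorter record (coarser sampling),
# which is what makes the repeated time-history analyses cheaper.
acc_coarse = coeffs[0]
dt_coarse = dt * 2 ** level
```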

13.
In this paper, the analytical layer-element method is utilized to analyze the plane-strain dynamic response of a transversely isotropic multilayered half-plane due to a moving load. We assume that the studied system moves synchronously with the moving load, so that the moving load is motionless relative to the moving system. Therefore, the vertical stress and the vertical displacement under the moving load need not be updated as the load position varies. Based on the governing equations of motion in the moving system, the analytical layer-element solutions for a finite layer and a half-plane in the Fourier transform domain are derived by using the algebraic operations in Ref. [7]. The global matrix of the problem can be obtained by assembling the analytical layer-elements of all layers. The corresponding solution in the frequency domain is further recovered by the inverse Fourier transform. Several examples are given to confirm the accuracy of the proposed method and to illustrate the influence of material properties.

14.
This paper introduces the theory of multiresolution approximation for computing reflection-coefficient series from sonic logs. It analyses in detail the geological requirements for deriving a low-resolution reflection-coefficient series from high-resolution sonic-log data, then discusses theoretically how the multiresolution approximation should be chosen for this class of problems, and compares two typical constructions: the multiresolution approximation built from the Haar system and that built from cubic spline functions. Finally, synthetic models and field-data examples verify that a properly chosen multiresolution approximation is effective for this class of problems.
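A hedged sketch, assuming NumPy, of the Haar-type option mentioned above: the impedance derived from a sonic log is block-averaged (the Haar multiresolution approximation) before being differenced into a low-resolution reflection-coefficient series. The synthetic log, the Gardner-type density relation and the coarsening level are assumptions.

```python
import numpy as np

def reflectivity(impedance):
    return (impedance[1:] - impedance[:-1]) / (impedance[1:] + impedance[:-1])

rng = np.random.default_rng(1)
vp = 2500.0 + 20.0 * np.cumsum(rng.standard_normal(1024))   # synthetic sonic (velocity) log, m/s
rho = 310.0 * vp ** 0.25                                     # Gardner-type density (assumption)
z = vp * rho                                                 # acoustic impedance

r_fine = reflectivity(z)                                     # full-resolution reflectivity

level = 3                                                    # coarsen by 2**level samples
block = 2 ** level
z_coarse = z[: z.size // block * block].reshape(-1, block).mean(axis=1)   # Haar approximation
r_coarse = reflectivity(z_coarse)                            # low-resolution reflectivity
```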

15.
Time-domain full-waveform inversion based on frequency-domain attenuation
郭雪豹  刘洪  石颖 《地球物理学报》2016,59(10):3777-3787
Because time-domain full-waveform inversion uses the full frequency band, information at different wavelengths cannot be rebuilt progressively from low to high during the iterations, and the inversion easily falls into local minima. In this paper the data are divided into frequency bands: the seismic data are forward and inverse Fourier transformed, and frequency-domain exponential attenuation is used to progressively remove their high-frequency components, so that waveform inversion proceeds in the time domain from low to high frequencies. This reduces the nonlinearity of the inversion and allows the information at different wavelengths to be recovered steadily. At the same time, as the high-frequency components are attenuated, the energy of later arrivals is also weakened, which reduces the interference of deep reflections in the early stages of the inversion. The only extra cost is the forward and inverse Fourier transforms of the data; unlike mixed-domain inversion, there is no need to extract the corresponding frequency components of the entire wavefield. For computational efficiency, the inversion is accelerated on GPUs, and the cufft routines supplied with CUDA are used to speed up the transforms. Tests on the Marmousi model verify the effectiveness of the method.
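A hedged sketch, assuming NumPy, of the data-preconditioning step only: each trace is Fourier transformed, multiplied by an exponential weight that attenuates high frequencies, and transformed back; successively larger damping frequencies re-admit higher frequencies and give the low-to-high multiscale sequence described above. The weight's functional form, the damping schedule and the gather are assumptions, and the FWI gradient/update machinery is not shown.

```python
import numpy as np

def attenuate_high_frequencies(data, dt, f_damp):
    """data: (n_traces, n_samples) gather; f_damp: e-folding frequency in Hz."""
    freqs = np.fft.rfftfreq(data.shape[-1], dt)
    weight = np.exp(-freqs / f_damp)                       # frequency-domain exponential decay
    spec = np.fft.rfft(data, axis=-1) * weight
    return np.fft.irfft(spec, n=data.shape[-1], axis=-1)

dt = 0.002
gather = np.random.randn(96, 2000)                         # stand-in for an observed shot gather
for f_damp in (3.0, 6.0, 12.0, 25.0):                      # hypothetical multiscale schedule
    d_stage = attenuate_high_frequencies(gather, dt, f_damp)
    # ... run time-domain FWI iterations against d_stage at this stage ...
```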

16.
Ground roll attenuation using the S and x-f-k transforms
Ground roll, which is characterized by low frequency and high amplitude, is an old seismic data processing problem in land-based seismic acquisition. Common techniques for ground roll attenuation are frequency filtering, f-k or velocity filtering and a type of f-k filtering based on the time-offset windowed Fourier transform. These techniques assume that the seismic signal is stationary. In this study we utilized the S, x-f-k and t-f-k transforms as alternative methods to the Fourier transform. The S transform is a type of time-frequency transform that provides frequency-dependent resolution while maintaining a direct relationship with the Fourier spectrum. Application of a filter based on the S transform to land seismic shot records attenuates ground roll in a time-frequency domain. The t-f-k and x-f-k transforms are approaches to localize the apparent velocity panel of a seismic record in time and offset domains, respectively. These transforms provide a convenient way to define offset or time-varying reject zones on the separate f-k panel at different offsets or times.
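A hedged sketch, assuming NumPy, of a basic discrete S (Stockwell) transform, the time-frequency tool named in the abstract, implemented through its standard frequency-domain formulation; the x-f-k / t-f-k constructions and the actual reject-zone design are not reproduced, and the test chirp is hypothetical.

```python
import numpy as np

def s_transform(x):
    """Return S[f, t] of a 1-D signal; rows are frequency indices 0 .. N//2."""
    N = x.size
    X = np.fft.fft(x)
    m = np.fft.fftfreq(N) * N                         # integer frequency offsets
    S = np.zeros((N // 2 + 1, N), dtype=complex)
    S[0] = np.mean(x)                                 # zero-frequency voice
    for n in range(1, N // 2 + 1):
        gauss = np.exp(-2.0 * np.pi ** 2 * m ** 2 / n ** 2)   # frequency-domain Gaussian window
        S[n] = np.fft.ifft(np.roll(X, -n) * gauss)
    return S

fs = 500.0
t = np.arange(0, 2.0, 1 / fs)
sig = np.sin(2 * np.pi * (5 + 15 * t) * t)            # chirp: a ridge whose frequency drifts in time
S = s_transform(sig)
amplitude = np.abs(S)                                 # rows span 0 .. fs/2 Hz, columns are time samples
```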

17.
A high-resolution method to image the horizontal boundaries of gravity and magnetic sources is presented (the enhanced horizontal derivative (EHD) method). The EHD is formed by taking the horizontal derivative of a sum of vertical derivatives of increasing order. The location of EHD maxima is used to outline the source boundaries. While for gravity anomalies the method can be applied immediately, magnetic anomalies should be previously reduced to the pole. We found that working on reduced-to-the-pole magnetic anomalies leads to better results than those obtainable by working on magnetic anomalies in dipolar form, even when the magnetization direction parameters are not well estimated. This is confirmed also for other popular methods used to estimate the horizontal location of potential fields source boundaries. The EHD method is highly flexible, and different conditions of signal-to-noise ratios and depths-to-source can be treated by an appropriate selection of the terms of the summation. A strategy to perform high-order vertical derivatives is also suggested. This involves both frequency- and space-domain transformations and gives more stable results than the usual Fourier method. The high resolution of the EHD method is demonstrated on a number of synthetic gravity and magnetic fields due to isolated as well as to interfering deep-seated prismatic sources. The resolving power of this method was tested also by comparing the results with those obtained by another high-resolution method based on the analytic signal. The success of the EHD method in the definition of the source boundary is due to the fact that it conveys efficiently all the different boundary information contained in any single term of the sum. Application to a magnetic data set of a volcanic area in southern Italy helped to define the probable boundaries of a calderic collapse, marked by a number of magmatic intrusions. Previous interpretations of gravity and magnetic fields suggested a subcircular shape for this caldera, the boundaries of which are imaged with better detail using the EHD method.
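A hedged sketch, assuming NumPy, of the core EHD computation: vertical derivatives of increasing order are obtained in the wavenumber domain (multiplication by |k| raised to the order), summed with weights, and the magnitude of the horizontal gradient of that sum is returned, whose maxima outline the source edges. The orders and weights are assumptions, and the paper's mixed frequency/space strategy for stabilising high-order derivatives is not reproduced.

```python
import numpy as np

def ehd(field, dx, dy, orders=(0, 1, 2), weights=None):
    """Enhanced horizontal derivative of a gridded (reduced-to-pole) field."""
    ny, nx = field.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, dy)
    K = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)       # radial wavenumber |k|
    F = np.fft.fft2(field)
    if weights is None:
        weights = np.ones(len(orders))
    summed = np.zeros_like(field, dtype=float)
    for wgt, n in zip(weights, orders):
        summed += wgt * np.real(np.fft.ifft2(F * K ** n))  # n-th order vertical derivative
    gy, gx = np.gradient(summed, dy, dx)                   # horizontal derivatives
    return np.hypot(gx, gy)                                # EHD: its maxima outline source edges

# Usage (hypothetical grid): edges = ehd(gravity_grid, dx=100.0, dy=100.0, orders=(0, 1, 2, 3))
```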

18.
Because the large amount of surface-related multiples present in marine data seriously affects the results of data processing and interpretation, many researchers have attempted to develop effective methods to remove them. The most successful surface-related multiple elimination method is based on data-driven theory; however, its elimination effect is often unsatisfactory owing to amplitude and phase errors. Although the subsequent curvelet-domain multiple-primary separation method achieved better results, its poor computational efficiency prevented wide application. In this paper, we adopt the cubic B-spline function to improve the traditional curvelet multiple-matching method. First, a small number of unknowns are selected as the basis points of the matching coefficient; second, the cubic B-spline function is applied to these basis points to reconstruct the matching array; third, a constrained solving equation is built from the relationships among the predicted multiples, the matching coefficients, and the recorded data; finally, the BFGS algorithm is used to iterate and obtain a fast sparse-constrained solution of the multiple-matching problem. Moreover, a soft-threshold method is used to further improve performance. With the cubic B-spline function, the differences between the predicted multiples and the original data diminish, which results in less processing time to obtain optimal solutions and fewer iterations in the solving procedure based on the L1-norm constraint. Applications to synthetic and field data validate the practicability and validity of the method.
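A hedged sketch, assuming NumPy/SciPy, of the matching idea described above: the slowly varying matching coefficient is parameterised by a handful of coarse basis points, expanded to every sample with a cubic spline, and fitted here by ordinary least squares (a simple L2 stand-in for the paper's L1 / BFGS / soft-threshold solver) before the predicted multiples are subtracted. The traces, knot count and decay constants are hypothetical.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def spline_design_matrix(t_fine, t_knots):
    """Columns are cubic-spline interpolations of unit impulses placed at the knots."""
    B = np.zeros((t_fine.size, t_knots.size))
    for j in range(t_knots.size):
        e = np.zeros(t_knots.size)
        e[j] = 1.0
        B[:, j] = CubicSpline(t_knots, e)(t_fine)
    return B

def match_and_subtract(data, predicted, n_knots=8):
    t = np.arange(data.size, dtype=float)
    knots = np.linspace(0.0, data.size - 1.0, n_knots)
    B = spline_design_matrix(t, knots)
    A = B * predicted[:, None]                     # model: data ~ (B @ c) * predicted
    c, *_ = np.linalg.lstsq(A, data, rcond=None)   # coefficients at the coarse basis points
    matched = (B @ c) * predicted
    return data - matched                          # estimated primaries

rng = np.random.default_rng(0)
n = 1500
primaries = rng.standard_normal(n) * np.exp(-np.arange(n) / 600.0)
true_mult = rng.standard_normal(n) * np.exp(-np.arange(n) / 900.0)
data = primaries + 0.8 * true_mult
predicted = 1.3 * true_mult                        # predicted multiples with an amplitude error
est_primaries = match_and_subtract(data, predicted)
```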

19.
Broadband constant-coefficient propagators
The phase error between the real phase shift and the Gazdag background phase shift, due to lateral velocity variations about a reference velocity, can be decomposed into axial and paraxial phase errors. The axial phase error depends only on velocity perturbations and hence can be completely removed by the split-step Fourier method. The paraxial phase error is a cross function of velocity perturbations and propagation angles. The cross function can be approximated with various differential operators by allowing the coefficients to vary with velocity perturbations and propagation angles. These variable-coefficient operators require finite-difference numerical implementation. Broadband constant-coefficient operators may provide an efficient alternative that approximates the cross function within the split-step framework and allows implementation using Fourier transforms alone. The resulting migration accuracy depends on the localization of the constant-coefficient operators. A simple broadband constant-coefficient operator has been designed and is tested with the SEG/EAEG salt model. Compared with the split-step Fourier method that applies to either weak-contrast media or at small propagation angles, this operator improves wavefield extrapolation for large to strong lateral heterogeneities, except within the weak-contrast region. Incorporating the split-step Fourier operator into a hybrid implementation can eliminate the poor performance of the broadband constant-coefficient operator in the weak-contrast region. This study may indicate a direction of improving the split-step Fourier method, with little loss of efficiency, while allowing it to remain faster than more precise methods such as the Fourier finite-difference method.
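A hedged sketch, assuming NumPy, of one depth step of the plain split-step Fourier extrapolation that the abstract builds on: a background phase shift in the wavenumber domain followed by a space-domain correction for the velocity perturbation. The broadband constant-coefficient correction proposed in the paper replaces or extends that simple perturbation term and is not shown; the geometry, frequency and velocities are hypothetical.

```python
import numpy as np

def split_step_extrapolate(wavefield, v, v0, omega, dx, dz):
    """Downward-continue a monochromatic 2-D wavefield (values along x) by one step dz."""
    kx = 2 * np.pi * np.fft.fftfreq(wavefield.size, dx)
    kz2 = (omega / v0) ** 2 - kx ** 2
    kz = np.where(kz2 > 0, np.sqrt(np.abs(kz2)), 0.0)     # evanescent part: no phase shift (crude)
    # 1) background (Gazdag) phase shift in the wavenumber domain
    shifted = np.fft.ifft(np.fft.fft(wavefield) * np.exp(1j * kz * dz))
    # 2) split-step correction for lateral velocity perturbations, in the space domain
    return shifted * np.exp(1j * omega * (1.0 / v - 1.0 / v0) * dz)

x = np.arange(512) * 10.0
v = np.full(512, 2000.0)
v[200:300] = 2600.0                                       # laterally varying velocity slice
field = np.exp(-((x - 2560.0) / 200.0) ** 2).astype(complex)
field_dz = split_step_extrapolate(field, v, v0=v.mean(), omega=2 * np.pi * 20.0, dx=10.0, dz=10.0)
```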

20.
In this study, we formulate an improved finite element model-updating method to address the numerical difficulties associated with ill conditioning and rank deficiency. These complications are frequently encountered in model-updating problems and occur when the identification of a larger number of physical parameters is attempted than is warranted by the information content of the experimental data. Based on the standard bounded variables least-squares (BVLS) method, which incorporates the usual upper/lower-bound constraints, the proposed method (henceforth referred to as BVLSrc) is equipped with novel sensitivity-based relative constraints. The relative constraints are automatically constructed using the correlation coefficients between the sensitivity vectors of the updating parameters. The veracity and effectiveness of BVLSrc are investigated through the simulated, yet realistic, forced-vibration testing of a simple framed structure using its frequency response function as input data. By comparing the results of BVLSrc with those obtained via the competing pure BVLS and regularization methods, we show that BVLSrc and regularization methods yield approximate solutions with similar and sufficiently high accuracy, while the pure BVLS method yields physically inadmissible solutions. We further demonstrate that BVLSrc is computationally more efficient because, unlike regularization methods, it does not require the laborious a priori calculations needed to determine an optimal penalty parameter, and its results are far less sensitive to the initial estimates of the updating parameters. Copyright © 2006 John Wiley & Sons, Ltd.
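A hedged sketch, assuming NumPy/SciPy, of the bounded-variables least-squares core of such an updating step: a sensitivity system S dp = r is solved subject to box bounds with scipy.optimize.lsq_linear. The sensitivity-correlation "relative constraints" of BVLSrc are only imitated here by a single soft penalty row tying the two most correlated parameters together; the matrix, weights and bounds are all assumptions.

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
n_meas, n_par = 40, 6
S = rng.standard_normal((n_meas, n_par))                 # sensitivities of the FRF to the parameters
S[:, 5] = S[:, 4] + 1e-3 * rng.standard_normal(n_meas)   # two nearly collinear columns (ill conditioning)
dp_true = np.array([0.05, -0.02, 0.10, 0.0, 0.03, 0.03])
r = S @ dp_true + 0.01 * rng.standard_normal(n_meas)     # residual between test and model FRFs

corr = np.corrcoef(S, rowvar=False)                      # correlations between sensitivity vectors
np.fill_diagonal(corr, 0.0)
i, j = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)

w = 10.0                                                 # soft relative-constraint weight (assumption)
row = np.zeros(n_par)
row[i], row[j] = w, -w                                   # penalise unequal changes of the tied pair
A = np.vstack([S, row])
b = np.r_[r, 0.0]

res = lsq_linear(A, b, bounds=(-0.2, 0.2))               # upper/lower bounds on the parameter changes
print(res.x)
```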
