Similar articles
1.
The purpose of this study is to develop a technique to discriminate artificial explosions from local small earthquakes (M ≤ 4.0) in the time–frequency domain. In order to obtain spectral features of artificial explosions and earthquakes, 3-D spectrograms (frequency, time and amplitude) have been used. They represent a useful tool for studying the frequency content of entire seismic waveforms observed at local and regional distances (Kim, Simpson & Richards 1994). P and S(Lg) waves from quarry blasts show that the frequency content associated with the dominant amplitude appears above 10 Hz, and Rg phases are observed at close distances. P and S(Lg) waves from the Tongosan earthquake have strong amplitudes below 10 Hz. For the Munkyong earthquake, however, a broader frequency content up to 20 Hz is found.
For discrimination between small earthquakes and explosions, Pg/Lg spectral ratios are used below 10 Hz, and through spectrogram analysis we can see the different frequency contents of explosions and earthquakes. Unfortunately, because the explosion data recorded at the KSRS array are digitized at 20 sps, the Nyquist frequency restricts the analysis to below 10 Hz. In order to select time windows, the group velocity was computed using multiple-filter analysis (MFA), and free-surface effects have been removed from all three-component data in order to improve data quality. Using the FFT, a log-average spectral amplitude is calculated over seven frequency bands: 0.5–3, 2–4, 3–5, 4–6, 5–7, 6–8 and 8–10 Hz. The best separation between explosions and earthquakes is observed from 6 to 8 Hz. In this frequency band we can separate explosions, with log(Pg/Lg) above −0.5 (except EXP1 recorded at SIHY1-1), from earthquakes, below −0.5 (except the Munkyong earthquake record at station KMH).
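A minimal sketch of the band-limited discriminant described above, assuming the Pg and Lg windows have already been extracted (the MFA-based window selection is omitted); the 6–8 Hz band and the −0.5 threshold are taken from the abstract, while the tapering and band-averaging details are assumptions:

```python
# Sketch of the log(Pg/Lg) spectral-ratio discriminant; windowing details assumed.
import numpy as np

def log_band_amplitude(trace, fs, f_lo, f_hi):
    """Log-average spectral amplitude of a windowed phase in [f_lo, f_hi] Hz."""
    spec = np.abs(np.fft.rfft(trace * np.hanning(len(trace))))
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return np.mean(np.log10(spec[band]))

def pg_lg_ratio(pg_window, lg_window, fs, band=(6.0, 8.0)):
    """log10(Pg/Lg) in the 6-8 Hz band, where the paper reports best separation."""
    return (log_band_amplitude(pg_window, fs, *band)
            - log_band_amplitude(lg_window, fs, *band))

# Classification rule from the abstract:
# ratio = pg_lg_ratio(pg, lg, fs=20.0)
# label = "explosion" if ratio > -0.5 else "earthquake"
```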

2.
Inversion of seismic attributes for velocity and attenuation structure
We have developed an inversion formulation for velocity and attenuation structure using seismic attributes, including envelope amplitude, instantaneous frequency and arrival times of selected seismic phases. We refer to this approach as AFT inversion, for amplitude, (instantaneous) frequency and time. Complex trace analysis is used to extract the different seismic attributes. The instantaneous frequency data are converted to t* using a matching procedure that approximately removes the effects of the source spectra. To invert for structure, ray-perturbation methods are used to compute the sensitivity of the seismic attributes to variations in the model. An iterative inversion procedure is then performed from smooth to less smooth models that progressively incorporates the shorter-wavelength components of the model. To illustrate the method, seismic attributes are extracted from seismic-refraction data of the Ouachita PASSCAL experiment and used to invert for shallow crustal velocity and attenuation structure. Although amplitude data are sensitive to model roughness, the inverted velocity and attenuation models were required by the data to maintain a relatively smooth character. The amplitude and t* data were needed, along with the traveltimes, at each step of the inversion in order to fit all the seismic attributes at the final iteration.
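The complex-trace attributes that feed the AFT inversion can be extracted with a Hilbert transform; a minimal sketch follows (the t* matching procedure and the ray-perturbation sensitivities are beyond its scope):

```python
# Sketch of complex-trace analysis: envelope and instantaneous frequency.
import numpy as np
from scipy.signal import hilbert

def complex_trace_attributes(trace, fs):
    """Return envelope amplitude and instantaneous frequency (Hz) of a trace."""
    analytic = hilbert(trace)                 # x(t) + i * H[x](t)
    envelope = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.gradient(phase) * fs / (2.0 * np.pi)
    return envelope, inst_freq
```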

3.
By means of the region–time–length (RTL) algorithm, which is widely used for investigating precursory seismicity changes in China, Italy, Japan, Russia and Turkey, we examine the precursory seismic activity that occurred around the epicentre prior to the 1999 Mw = 7.6 Chi-Chi earthquake. Based on our calculation of the RTL values, the epicentral area has been found to exhibit a strong signature of anomalous activity, associated with seismic quiescence and activation, before the main shock. Also proposed in this study is a helpful method for determining two important parameters used in the RTL analysis, the characteristic time and distance. Such a method largely reduces the ambiguity in the original RTL algorithm.
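A minimal sketch of the RTL weighted sums, with the background-trend removal and normalization omitted; the functional forms of the three factors and all parameter values are illustrative assumptions:

```python
# Sketch of the R, T and L factors of the RTL algorithm (detrending omitted).
import numpy as np

def rtl_value(events, x0, t_eval, r0=50.0, tau0=1.0, t_window=2.0):
    """events: array of rows (x, y, t, rupture_len); evaluate RTL at x0, t_eval.
    r0 and tau0 are the characteristic distance and time discussed above."""
    xy, t, rup = events[:, :2], events[:, 2], events[:, 3]
    r = np.linalg.norm(xy - np.asarray(x0), axis=1)
    # only prior events inside the space-time window contribute
    m = (t < t_eval) & (t > t_eval - t_window) & (r < 2.0 * r0)
    R = np.sum(np.exp(-r[m] / r0))                      # epicentral-distance term
    T = np.sum(np.exp(-(t_eval - t[m]) / tau0))          # elapsed-time term
    L = np.sum(rup[m] / np.maximum(r[m], 1e-3))          # rupture-length term
    return R * T * L   # negative anomalies (after detrending) indicate quiescence
```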

4.
Summary. A new method is presented for the direct inversion of seismic refraction data in dipping planar structure. Three recording geometries, each consisting of two common-shot profiles, are considered: reversed, split, and roll-along profiles. Inversion is achieved via slant stacking the common-shot wavefield to obtain a delay time–slowness (tau–p) wavefield. The tau–p curves from two shotpoints describing the critical raypath of refracted and post-critically reflected arrivals are automatically picked using coherency measurements, and the two curves are jointly used to calculate the velocity and dip of isovelocity lines iteratively, thereby obtaining the final two-dimensional velocity model.
This procedure has been successfully applied to synthetic seismograms calculated for a dipping structure and to field data from central California. The results indicate that direct inversion of closely spaced refraction/wide-aperture reflection data can practically be achieved in laterally inhomogeneous structures.
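The slant-stacking step maps a common-shot gather into the tau–p domain; a minimal sketch follows (coherency-based picking of the tau–p curves is not shown):

```python
# Sketch of slant stacking a common-shot gather into the tau-p domain.
import numpy as np

def slant_stack(gather, offsets, dt, slownesses):
    """gather: (n_traces, n_samples); returns a tau-p panel (n_p, n_samples)."""
    n_traces, n_samples = gather.shape
    taus = np.arange(n_samples) * dt
    panel = np.zeros((len(slownesses), n_samples))
    for ip, p in enumerate(slownesses):
        for ix, x in enumerate(offsets):
            # shift trace by p*x and stack: u(tau, p) = sum_x u(tau + p*x, x)
            shifted = np.interp(taus + p * x, taus, gather[ix], left=0.0, right=0.0)
            panel[ip] += shifted
    return panel
```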

5.
Summary. Recently, time-domain methods have been shown to be advantageous in determining the quality of observations of terrestrial eigenvibrations. Complex demodulation has been used in estimating amplitude, period, decay constant Q and phase. Non-linear regression in the frequency domain yields formal uncertainties in each estimate. A crucial problem arises in isolating neighbouring modes of nearly the same frequency and comparable amplitude. Complex demodulation displays can be used to identify interferences such as beating. In such cases, the assumption of stationarity is violated in the regression analysis, yielding biased results. Improved resolution has now been obtained by formulating the regression scheme to estimate simultaneously small sets of neighbouring spectral peaks so that stationarity is not violated. Under the assumption of stationarity this procedure becomes optimal in a statistical sense. A general analysis of variance serves to indicate the amount of correlation that exists between estimated parameters of neighbouring modes, as well as the relative information that is supplied by each Fourier-transform point to the overall system of equations. The covariance matrix indicates that there is no linear correlation between spectral peaks when their frequencies are separated by 5 per cent. There are, however, significant correlations when two frequencies differ by 0.5 per cent. The result is that overtone modes very near in frequency to fundamental modes can, under certain conditions, be resolved with a non-linear regression technique, although parameter uncertainties are underestimated in general.
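A minimal sketch of complex demodulation around a target mode frequency, as used here to estimate amplitude and phase; the filter choice and bandwidth are assumptions:

```python
# Sketch of complex demodulation: shift the target frequency to zero and low-pass.
import numpy as np
from scipy.signal import butter, filtfilt

def complex_demodulate(x, fs, f0, bw=1e-4):
    """Return slowly varying amplitude and phase of the component near f0 (Hz)."""
    t = np.arange(len(x)) / fs
    baseband = x * np.exp(-2j * np.pi * f0 * t)
    b, a = butter(4, bw / (fs / 2.0))        # low-pass keeps |f - f0| < bw
    z = filtfilt(b, a, baseband.real) + 1j * filtfilt(b, a, baseband.imag)
    return 2.0 * np.abs(z), np.unwrap(np.angle(z))

# For a decaying mode, log-amplitude falls linearly with slope -pi*f0/Q.
```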

6.
We present an approximate method to estimate the resolution, covariance and correlation matrix for linear tomographic systems Ax = b that are too large to be solved by singular value decomposition. An explicit expression for the approximate inverse matrix A† is found using one-step backprojections on the Penrose condition AA† ≈ I, from which we calculate the statistical properties of the solution. The computation of A† can easily be parallelized, each column being constructed independently.
The method is validated on small systems for which the exact covariance can still be computed with singular value decomposition. Though A† is not accurate enough to actually compute the solution x, the qualitative agreement obtained for resolution and covariance is sufficient for many purposes, such as a rough assessment of model precision or the reparametrization of the model by the grouping of correlating parameters. We present an example of the computation of the complete covariance matrix of a very large (69 043 × 9610) system with 5.9 × 10⁶ non-zero elements in A. Computation time is proportional to the number of non-zero elements in A. If the correlation matrix is computed for the purpose of reparametrization by combining highly correlating unknowns x_i, a further gain in efficiency can be obtained by neglecting the small elements in A†, but a more accurate estimation of the correlation requires a full treatment of even the smaller elements A†_ij. We finally develop a formalism to compute a damped version of A†.
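A minimal sketch of the validation idea on a small system, using a row-scaled backprojection as a stand-in for the paper's one-step approximate inverse (an assumption) and comparing its covariance estimate against the SVD-based pseudo-inverse:

```python
# Sketch: backprojection-style approximate inverse vs. exact pseudo-inverse.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(60, 20))                # small system: 60 data, 20 unknowns

# Scaled A^T as approximate inverse, with diag(A_dag @ A) = 1 by construction.
d = 1.0 / np.einsum('ij,ij->j', A, A)        # 1 / diag(A^T A)
A_dag = d[:, None] * A.T                     # each row built independently

R = A_dag @ A                                # approximate resolution matrix
C = A_dag @ A_dag.T                          # approximate unit-error covariance

# Exact reference from the SVD-based pseudo-inverse, feasible at this size:
A_pinv = np.linalg.pinv(A)
C_ref = A_pinv @ A_pinv.T
print(np.corrcoef(C.ravel(), C_ref.ravel())[0, 1])   # qualitative agreement
```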

7.
A new algorithm is presented for the integrated 2-D inversion of seismic traveltime and gravity data. The algorithm adopts the 'maximum likelihood' regularization scheme. We construct a 'probability density function' which includes three kinds of information: information derived from gravity measurements; information derived from the seismic traveltime inversion procedure applied to the model; and information on the physical correlation among the density and the velocity parameters. We assume a linear relation between density and velocity, which can be node-dependent; that is, we can choose different relationships for different parts of the velocity–density grid. In addition, our procedure allows us to consider a covariance matrix related to the error propagation in linking density to velocity. We use seismic data to estimate starting velocity values and the position of boundary nodes. Subsequently, the sequential integrated inversion (SII) optimizes the layer velocities and densities for our models. The procedure is applicable, as an additional step, to any type of seismic tomographic inversion.
We illustrate the method by comparing the velocity models recovered from a standard seismic traveltime inversion with those retrieved using our algorithm. The inversion of synthetic data calculated for a 2-D isotropic, laterally inhomogeneous model shows the stability and accuracy of this procedure, demonstrates the improvements to the recovery of true velocity anomalies, and proves that this technique can efficiently overcome some of the limitations of both gravity and seismic traveltime inversions, when they are used independently.
An interpretation of field data from the 1994 Vesuvius test experiment is also presented. At depths down to 4.5 km, the model retrieved after an SII shows a more detailed structure than the model obtained from an interpretation of seismic traveltimes only, and yields additional information for further study of the area.
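A minimal sketch of the kind of joint objective the SII optimizes, combining traveltime and gravity residuals with a node-dependent linear velocity–density link; the operators, link coefficients and weights are all illustrative assumptions:

```python
# Sketch of a joint traveltime + gravity misfit with a linear v-rho link.
import numpy as np

def joint_misfit(v, rho, t_obs, g_obs, G_t, G_g, a, b, w=(1.0, 1.0, 1.0)):
    """v, rho: model vectors; G_t, G_g: linearized forward operators;
    rho ~ a + b*v, node by node, encodes the physical-correlation term."""
    r_t = G_t @ v - t_obs                    # traveltime residuals
    r_g = G_g @ rho - g_obs                  # gravity residuals
    r_link = rho - (a + b * v)               # departure from the v-rho relation
    return w[0] * r_t @ r_t + w[1] * r_g @ r_g + w[2] * r_link @ r_link
```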

8.
Summary. The single-channel scalar deconvolution method presented by Oldenburg has been extended to include N channels of data and vector models of the form m(t) = (m_1(t), m_2(t), …, m_α(t))^T. The solution has its foundation in the linear inverse theory of Backus & Gilbert and is effected by computing a set of N filters which, when convolved with the data, yield unique averages of one of the scalar functions of the model. Those averages are the summation of the scalar model convolved with a primary averaging function, plus contamination from secondary averaging functions convolved with other model components. It is shown how a set of suitably selected weights can annihilate these secondary averaging functions and thereby greatly simplify the interpretation. The computations are efficiently carried out in the frequency domain and require the inversion of an N × N Hermitian matrix at each frequency. As a type example, we have shown how the time-varying elements of a seismic moment tensor might be computed from a set of seismograms.
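A minimal sketch of the per-frequency computation: at each frequency an N × N Hermitian system is solved for the channel filters; the rank-one construction and damping here are assumptions standing in for the full Backus–Gilbert design:

```python
# Sketch of the frequency-domain multichannel step: one Hermitian solve per bin.
import numpy as np

def multichannel_filters(data, reg=1e-3):
    """data: (N, n_samples) channels; returns filter spectra F of shape (N, n_freq)."""
    D = np.fft.rfft(data, axis=1)            # per-channel spectra
    N, n_freq = D.shape
    F = np.zeros((N, n_freq), dtype=complex)
    for k in range(n_freq):
        d = D[:, k]
        H = np.outer(d, d.conj())            # N x N Hermitian matrix at this bin
        H += reg * np.trace(H).real / N * np.eye(N)   # damping for stability
        F[:, k] = np.linalg.solve(H, d)      # filter spectra for this frequency
    return F
```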

9.
Spatial distribution of earthquake frequency and intensity in China
Considering that seismic records from different regions span unequal lengths of time, we improve the "grid-point density of epicentre distribution by magnitude" algorithm and use GIS spatial-analysis methods to convert the point data of the earthquake catalogue into raster data reflecting the frequency of earthquake occurrence. Based on the relation between earthquake magnitude and intensity, together with an elliptical spatial attenuation model of seismic intensity, approximate computation combined with spatial interpolation yields a raster map of seismic intensity across China. The frequency results show that, roughly bounded by Ningxia, Gansu, Sichuan and Yunnan, earthquakes of magnitude 3 and above occur more frequently in western China than in the east. The intensity results show that regions lying within seismic belts, such as Gansu, Shaanxi, Ningxia, Shanxi, Hebei, Sichuan and Yunnan, experience higher intensities when earthquakes occur.
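A minimal sketch of evaluating an elliptical intensity-attenuation model at a point; the attenuation coefficients and axis ratio are illustrative assumptions:

```python
# Sketch of an elliptical intensity-attenuation evaluation; coefficients assumed.
import numpy as np

def intensity(mag, dx, dy, strike_deg, a=2.0, b=1.5, c=2.0, r0=10.0, ecc=2.0):
    """Intensity at offset (dx, dy) km from the epicentre for magnitude mag.
    The ellipse's long axis follows the fault strike; ecc = long/short axis ratio."""
    th = np.radians(strike_deg)
    u = dx * np.cos(th) + dy * np.sin(th)    # along-strike distance
    v = -dx * np.sin(th) + dy * np.cos(th)   # across-strike distance
    r_eff = np.hypot(u / ecc, v)             # elliptical equivalent distance
    return a + b * mag - c * np.log(r_eff + r0)
```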

10.
A fundamental geologic problem in the Steam-Assisted Gravity Drainage (SAGD) heavy-oil developments in the McMurray Formation of northern Alberta is to determine the location of shales in the reservoirs that may interfere with the steaming or recovery process. Petrophysical analysis shows that a key acoustic indicator of the presence of shale is bulk density. In theory, density can be derived from seismic data using Amplitude Versus Offset (AVO) analysis of conventional or multicomponent seismic data, but this is not widely accepted in practice. However, with billions of dollars slated for SAGD developments in the upcoming years, this technology warrants further investigation. In addition, many attributes can be investigated using modern tools like neural networks, so the density extracted from seismic data using AVO can be compared and combined with more conventional attributes in solving this problem. Density AVO attributes are extracted and correlated with "density synthetics" created from the logs, just as the seismic stack correlates to conventional synthetics. However, multiattribute tests show that more than density is required to best predict the volume proportion of shale (Vsh). Vsh estimates are generated by passing seismic attributes (derived from conventional PP and multicomponent PS seismic data, AVO and inversion along an arbitrary line following the pilot SAGD wells) through a neural network. This estimate shows good correlation to shale proportions estimated from core. The results have encouraged the application of the method to the entire 3-D survey.

11.
In Part I of this paper, we derived a set of data tapers designed to minimize the spectral leakage of decaying sinusoids immersed in white noise. Multiplying a long-period seismic record by K of these tapers creates K time series. A decaying sinusoid is fit to these K time series in the frequency domain at a number of chosen frequencies by a least-squares procedure. The fit is tested at each frequency using a statistical F-test. In Part I, we demonstrated that the multiple-taper method is a more sensitive detector of decaying sinusoids than the conventional direct spectral estimate.
In this paper, we present a number of extensions to the multiple-taper method. We explain how the technique can be modified to estimate the harmonic components of records containing gaps. We discuss how sinusoids at frequencies between FFT bin frequencies can be detected, and how this method can be combined with conventional multi-station stacking procedures.
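A minimal sketch of the multiple-taper F-test for a harmonic line; standard Slepian (DPSS) tapers are used here as a stand-in for the decaying-sinusoid tapers derived in Part I (an assumption):

```python
# Sketch of Thomson's harmonic F-test with K tapers.
import numpy as np
from scipy.signal.windows import dpss

def harmonic_f_test(x, K=5, NW=4.0):
    """Return (freqs, F), where F is the F-statistic for a line component;
    compare against the F(2, 2K-2) distribution."""
    n = len(x)
    tapers = dpss(n, NW, Kmax=K)                 # (K, n) Slepian tapers
    Y = np.fft.rfft(tapers * x, axis=1)          # eigencoefficients, (K, n_freq)
    U0 = tapers.sum(axis=1)                      # taper DFTs at zero frequency
    mu = (U0 @ Y) / np.sum(U0**2)                # complex line-amplitude estimate
    resid = Y - np.outer(U0, mu)
    F = ((K - 1) * np.abs(mu)**2 * np.sum(U0**2)
         / np.sum(np.abs(resid)**2, axis=0))
    freqs = np.fft.rfftfreq(n)                   # in cycles per sample
    return freqs, F
```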

12.
We propose a method to evaluate the existence of spatial variability in the covariance structure in a geographically weighted principal components analysis (GWPCA). The method, which extends to locally weighted principal components analysis, is based on performing a statistical hypothesis test using the eigenvectors of the covariance matrix of the PCA scores. Application of the method to simulated data shows that it has greater statistical power than the current statistical test, which uses the eigenvalues of the raw-data covariance matrix. Finally, the method was applied to a real problem whose objective is to find spatial distribution patterns in a set of soil pollutants. The results show the utility of GWPCA versus PCA.
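A minimal sketch of the locally weighted covariance and eigen-decomposition at a single target location, the building block of a GWPCA; the Gaussian kernel and bandwidth handling are assumptions:

```python
# Sketch of one GWPCA evaluation: kernel-weighted covariance + eigendecomposition.
import numpy as np

def gwpca_at(location, coords, X, bandwidth):
    """coords: (n, 2) sites; X: (n, p) variables; returns local eigvals/eigvecs."""
    d = np.linalg.norm(coords - location, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)      # Gaussian distance kernel
    w /= w.sum()
    mean = w @ X                                 # weighted mean
    Xc = X - mean
    C = (Xc * w[:, None]).T @ Xc                 # weighted covariance (p x p)
    vals, vecs = np.linalg.eigh(C)
    return vals[::-1], vecs[:, ::-1]             # descending order
```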

13.
Array techniques are particularly well suited for detecting and quantifying the complex seismic wavefields associated with volcanic activity, such as volcanic tremor and long-period events. The methods based on analysis of the signal in the frequency domain, or spectral methods, have the main advantages of both resolving closely spaced sources and reducing the necessary computer time, but may fail severely in the analysis of monochromatic, non-stationary signals. Conversely, the time-domain methods, based on the maximization of a multichannel coherence estimate, can be applied even to short-duration pulses. However, for both the time- and frequency-domain approaches, an exhaustive definition of the errors associated with the slowness-vector estimate is not yet available. Such a definition becomes crucial once the slowness-vector estimates are used to infer source location and extent. In this work we develop a method based on a probabilistic formalism, which allows for a complete definition of the uncertainties associated with the estimate of frequency–slowness power spectra from measurement of the zero-lag cross-correlation. The method is based on the estimate of the theoretical frequency–slowness power spectrum, which is expressed as the convolution of the true signal slowness with the array response pattern. Using a Bayesian formalism, the a posteriori probability density function for signal slowness is expressed through the difference, in the least-squares sense, between the model spectrum and that derived from application of the zero-lag cross-correlation technique. The method is tested using synthetic waveforms resembling the quasi-monochromatic signals often associated with volcanic activity. Examples of application to data from Stromboli volcano, Italy, allow estimation of the source location and extent of the explosive activity.
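A minimal sketch of the zero-lag cross-correlation measurement over a slowness grid, the quantity whose theoretical counterpart the Bayesian formalism models; the plane-wave delay model and the normalization are assumptions:

```python
# Sketch of slowness power from zero-lag cross-correlation of aligned traces.
import numpy as np

def slowness_power(traces, coords, fs, s_grid):
    """traces: (n_sta, n_samp); coords: (n_sta, 2) km; s_grid: (n_s, 2) s/km."""
    n_sta, n_samp = traces.shape
    t = np.arange(n_samp) / fs
    power = np.zeros(len(s_grid))
    for k, s in enumerate(s_grid):
        delays = coords @ s                      # plane-wave delay per station
        aligned = np.array([np.interp(t + d, t, tr, left=0.0, right=0.0)
                            for d, tr in zip(delays, traces)])
        norm = aligned / (np.linalg.norm(aligned, axis=1, keepdims=True) + 1e-12)
        R = norm @ norm.T                        # zero-lag correlation matrix
        power[k] = (R.sum() - n_sta) / (n_sta * (n_sta - 1))  # mean off-diagonal
    return power
```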

14.
Wang Cong, Huang Ning & Yang Bao. 《地理科学》 (Scientia Geographica Sinica), 2014, 34(2): 237–241.
In climate-reconstruction research, the limited amount of reconstruction data strongly constrains the work. The optimal regional averaging method is an effective approach to this problem: it computes the average temperature of a target region from limited temperature data using optimal weights. In applying the method, the optimal weights are first obtained by minimizing the mean-squared error with the method of Lagrange multipliers, and the regional mean temperature is then computed by combining the optimal weights with the temperature data. In its current form, optimal regional averaging has weaknesses when computing mean temperatures over large regions. To overcome this, so that it can handle large regions such as the Northern Hemisphere, this paper improves the method as follows: instead of solving for the covariance model by grid-cell summation, the covariance model is obtained using Haar wavelet functions and matrix operators, and the system of linear algebraic equations for the optimal weights is solved by Gaussian elimination with complete pivoting. The results show that using Haar wavelets and matrix operators makes the computed covariance model more accurate. The data used come from the Climatic Research Unit (CRU), regarded as one of the most authoritative data sources. Taking the Northern Hemisphere mean temperature for 1961–1990 as an example, the result computed with the improved method correlates more strongly with the existing CRU result than that of the original method. The improved optimal regional averaging method therefore provides a more reasonable and practical way to deal with the limited proxy records available in palaeoclimate reconstruction.
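A minimal sketch of the Lagrange-multiplier step in simplified form: minimizing wᵀCw subject to Σw = 1 gives w = C⁻¹1 / (1ᵀC⁻¹1); the paper's full formulation (Haar-wavelet covariance model, complete-pivoting elimination) is not reproduced:

```python
# Sketch of optimal averaging weights via Lagrange multipliers.
import numpy as np

def optimal_weights(C):
    """C: (n, n) error covariance of station records; weights sum to 1."""
    ones = np.ones(C.shape[0])
    z = np.linalg.solve(C, ones)     # C^{-1} 1 (solve, don't invert explicitly)
    return z / (ones @ z)

def regional_mean(temps, C):
    """temps: (n,) station anomalies; weighted regional mean."""
    return optimal_weights(C) @ temps
```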

15.
Multicomponent near-surface correction for land VSP data
Multicomponent seismic data collected using directional sources are degraded by the wave-excitation process, owing to inaccurate control of the ground motion, unequal activation strengths or ground couplings between differently oriented sources, and misalignment of the pad. These acquisition uncertainties are exacerbated by the complicated near-surface scattering present in most seismic areas. Neither group of effects should be neglected in multicomponent analyses that make use of relative wavefield attributes derived from compressional and shear waves. These effects prevent analysis of the direct and reflected waves using procedures based on standard scalar techniques or a prima facie interpretation of the vector wavefield properties, even for the seemingly straightforward case of a near-offset vertical seismic profile (VSP). Near-surface correction, using a simple matrix operator designed from the shallowest recordings, alleviates many of these interpretational difficulties in near-offset VSP data. Results from application of this technique to direct waves from a nine-component VSP shot at the Conoco test-site facility, Oklahoma, are encouraging. The technique corrects for unexpected compressional-wave energy from shear-wave vibrators and collapses near-surface multiples, thus facilitating further processing of the upgoing wavefield. The method provides a simple and effective processing step for routine application to near-offset VSP analyses.
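A minimal sketch of applying a matrix correction operator designed from the shallowest level; a single frequency-independent 3 × 3 amplitude matrix is an assumption standing in for the paper's operator design:

```python
# Sketch of a matrix near-surface correction for nine-component VSP data.
import numpy as np

def near_surface_correct(vsp, shallow):
    """vsp: (n_levels, 3, 3, n_samp) data for 3 sources x 3 components;
    shallow: (3, 3) source/component amplitude matrix estimated at the
    shallowest level (assumed frequency-independent here)."""
    op = np.linalg.inv(shallow)              # correction operator
    # apply over the source axis at every depth level and time sample
    return np.einsum('ij,ljkt->likt', op, vsp)
```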

16.
Rank estimation by canonical correlation analysis in multivariate statistics has been proposed as an alternative approach for estimating the number of components in a multicomponent mixture. A methodological turning point of this new approach is that it focuses on the difference in structure rather than in magnitude in characterizing the difference between the signal and the noise. This structural difference is quantified through the analysis of canonical correlation, which is a well-established data-reduction technique in multivariate statistics. Unfortunately, there is a price to be paid for having this structural difference: at least two replicate data matrices are needed to carry out the analysis. In this paper we continue to explore the potential and to extend the scope of the canonical correlation technique. In particular, we propose a bootstrap resampling method which makes it possible to perform the canonical correlation analysis on a single data matrix. Since a robust estimator is introduced to make inference about the rank, the procedure may be applied to a wide range of data without any restriction on the noise distribution. Results from real as well as simulated mixture samples indicate that, when used in conjunction with this resampling method, canonical correlation analysis of a single data matrix is as efficient as that of replicate data matrices.
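A minimal sketch of the single-matrix idea: canonical correlations between two bootstrap pseudo-replicates, with the count of near-unity correlations as a rank estimate; the resampling scheme and threshold are assumptions standing in for the paper's robust estimator:

```python
# Sketch of bootstrap rank estimation via canonical correlations.
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between column spaces of X and Y (rows = samples)."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)

def bootstrap_rank(D, n_boot=200, thresh=0.9, seed=None):
    """D: (n, p) single data matrix with n >= p; returns median estimated rank."""
    rng = np.random.default_rng(seed)
    ranks = []
    for _ in range(n_boot):
        i = rng.integers(0, len(D), len(D))   # resample rows with replacement
        j = rng.integers(0, len(D), len(D))
        r = canonical_correlations(D[i], D[j])
        ranks.append(int(np.sum(r > thresh))) # components shared by replicates
    return int(np.median(ranks))
```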

17.
Clustering of temporal event processes
A temporal point process is a sequence of points, each representing the occurrence time of an event. Each temporal point process is related to the behavior of an entity. As a result, clustering of temporal point processes can help differentiate between entities, thereby revealing patterns of behavior. This study proposes a hierarchical cluster method for clustering temporal point processes based on the discrete Fréchet (DF) distance. The DF cluster method is divided into four steps: (1) constructing a DF similarity matrix between temporal point processes; (2) constructing a complete-linkage hierarchical tree based on the DF similarity matrix; (3) clustering the point processes with a threshold determined by locating the local maxima on the curve of the pseudo-F statistic (an index which measures the separability between clusters and the compactness within clusters); and (4) identifying inner patterns for each cluster, formed by a series of dense intervals, each of which contains at least one event from all processes of the cluster. The contributions of the article are: (1) the proposed DF cluster method can cluster temporal point processes into different groups; and (2) more importantly, it can identify the inner pattern of each cluster. Two synthetic data sets were created, to illustrate the DF distance between temporal point process clusters (the first data set) and to validate the proposed DF cluster method (the second data set), respectively. An experiment and a comparison with a method based on dynamic time warping show that DF cluster successfully identifies the preconfigured patterns in the second synthetic data set. The cluster method was then applied to a population migration history data set for the Northern Plains of the United States, revealing some interesting population migration patterns.
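A minimal sketch of the discrete Fréchet distance between two event-time sequences (step 1 of the DF cluster method), via the standard dynamic program:

```python
# Sketch of the discrete Frechet distance between two event-time sequences.
import numpy as np

def discrete_frechet(p, q):
    """p, q: 1-D arrays of event times; returns the discrete Frechet distance."""
    n, m = len(p), len(q)
    ca = np.full((n, m), np.inf)
    ca[0, 0] = abs(p[0] - q[0])
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], abs(p[i] - q[0]))
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], abs(p[0] - q[j]))
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]),
                           abs(p[i] - q[j]))
    return ca[-1, -1]

# The pairwise distance matrix can then feed a complete-linkage tree, e.g. via
# scipy.cluster.hierarchy.linkage(condensed_distances, method='complete').
```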

18.
This paper describes an efficient approach for computing the frequency response of seismic waves propagating in 2- and 3-D earth models when the magnitude and phase are required at many locations. The approach consists of running an explicit finite-difference time-domain (TD) code with a time-harmonic source out to steady state. The magnitudes and phases at locations in the model are computed using phase-sensitive detection (PSD). PSD does not require storage of time-series (unlike a fast Fourier transform), reducing its memory requirements. Additionally, the response from multiple sources can be obtained from a single finite-difference run by encoding each source with a different frequency. For 2-D models with many sources, this time-domain phase-sensitive detection (TD–PSD) approach has a higher arithmetic complexity than direct solution of the finite-difference frequency-domain (FD) equations using nested-dissection re-ordering (FD–ND). The storage requirements of 2-D finite-difference TD–PSD are lower than those of FD–ND. For 3-D finite-difference models, TD–PSD has significantly lower arithmetic complexity and storage requirements than FD–ND and, therefore, may prove useful for computing the frequency response of large 3-D earth models.
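A minimal sketch of the PSD accumulation: once the run is at steady state, running in-phase and quadrature sums recover magnitude and phase without storing time-series; the accumulator interface is an assumption:

```python
# Sketch of phase-sensitive detection inside a time-stepping loop.
import numpy as np

def psd_accumulate(u_t, t, omega, acc):
    """Call once per time step: u_t = field values now (any shape), t = time;
    acc = (I, Q, n) running sums; returns the updated accumulator."""
    I, Q, n = acc
    return I + u_t * np.cos(omega * t), Q + u_t * np.sin(omega * t), n + 1

def psd_finish(acc):
    """Return (magnitude, phase) from the in-phase/quadrature sums, assuming
    accumulation over an integer number of source periods at steady state."""
    I, Q, n = acc
    return 2.0 * np.hypot(I, Q) / n, np.arctan2(-Q, I)
```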

19.
Surface-wave polarization data and global anisotropic structure
In the past few years, seismic tomography has begun to provide detailed images of seismic velocity in the Earth's interior which, for the first time, give direct observational constraints on the mechanisms of heat and mass transfer. The study of surface waves has led to quite detailed maps of upper-mantle structure, and the current global models agree reasonably well down to wavelengths of approximately 2000 km. Usually, the models contain only elastic isotropic structure, which provides an excellent fit to the data in most cases. For example, the variance reduction for minor and major arc phase data in the frequency range 7–15 mHz is typically 65–92 per cent and the data are fit to within 1–2 standard deviations. The fit to great-circle phase data, which are not subject to bias from unknown source or instrument effects, is even better. However, there is clear evidence for seismic anisotropy in various places on the globe. This study demonstrates how much (or little) the fit to the data is improved by including anisotropy in the modelling process. It also illuminates some of the trade-offs between isotropic and anisotropic structure and gives an estimate of how much bias is introduced by neglecting anisotropy. Finally, we show that the addition of polarization data has the potential for improving recovery of anisotropic structure by diminishing the trade-offs between isotropic and anisotropic effects.

20.
We construct a catalogue of all the possible elementary point sources of seismic waves. There are three general classes of sources, two spheroidal and one toroidal. We consider excitation functions for these point-like sources, as well as for sources of finite size, in the far, intermediate and near field for an infinite homogeneous isotropic medium. The sources corresponding to seismic-moment tensors of the second, third and fourth rank are considered in more detail; we identify 10 different seismic sources in this range: one monopole, two or three dipoles, three quadrupoles, etc. For a step-function release of the scalar seismic moment, the amplitude spectrum of the third-rank sources is proportional to the angular frequency ω in the region below the corner frequency ω_cr. The fourth-rank sources have an ω² spectrum in the same range. The possibility of separate and simultaneous inversion of seismic body-wave data and static deformation data for sources of different order is discussed. Some equivalent-force moment higher-rank sources are 'shielded' by lower-rank sources of the same order; the former sources cannot be inverted from seismic data without additional assumptions. Because of their simple radiation pattern, the lower-order multipoles, i.e. the monopole and dipoles, are the first sources other than the double-couple that should be considered for inversion.
