Similar Documents
20 similar documents found (search time: 62 ms)
1.
By analyzing the storage characteristics and rules of the file header and data sections of the EDAS Event and SEED seismic data formats, a seismic waveform data processing program was developed in Visual Basic. With this software, seismic records can be converted between formats and extracted and merged, keeping the records continuous and properly joined and improving the continuity and usability of the data.
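Neither format's byte layout is given in the abstract, and the original tool was written in Visual Basic. Purely to illustrate the header-plus-samples parsing pattern it describes, here is a minimal Python sketch that reads a hypothetical fixed-size event header and rewrites the samples as miniSEED with ObsPy; the field names, sizes, and the `convert_event` helper are assumptions, not the actual EDAS Event specification.

```python
import struct

import numpy as np
from obspy import Stream, Trace, UTCDateTime

# Hypothetical EDAS-like event header: station code (8 bytes), sample
# rate (float32), sample count (int32), start time as POSIX seconds
# (float64).  The real EDAS Event layout differs.
HEADER = struct.Struct("<8sfid")

def convert_event(in_path, out_path):
    with open(in_path, "rb") as f:
        sta, rate, nsamp, t0 = HEADER.unpack(f.read(HEADER.size))
        data = np.frombuffer(f.read(4 * nsamp), dtype="<i4")
    tr = Trace(data=data.astype(np.int32))          # writable copy for ObsPy
    tr.stats.station = sta.decode("ascii").strip("\x00 ")
    tr.stats.sampling_rate = rate
    tr.stats.starttime = UTCDateTime(t0)
    Stream([tr]).write(out_path, format="MSEED")    # miniSEED data records
```

Extraction and merging of several such files would then reduce to collecting the resulting `Trace` objects into one `Stream` and calling its `merge` method.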

2.
Collecting seismic data by hand is time-consuming, labor-intensive, and error-prone. Using the VB development environment together with Windows Socket network programming, a tool for extracting seismic event waveforms and phase files was developed; it reduces the chance of human error, lowers the workload, and improves the utilization of seismic monitoring data.

3.
Seismic waveform data formats used at the National Digital Seismograph Network Center and their conversion. Cited by: 2 (self-citations: 0, citations by others: 2)
黄金莉  顾小虹 《地震》2001,21(4):60-65
SAC is a software tool widely used in seismological analysis and research, and the SAC format has become a common format for seismic waveform data. The National Digital Seismograph Network Center provides users with event waveform data in this format from 47 national digital seismic stations. This paper describes the structure, characteristics, and contents of the data format in detail, and presents the workflow and program implementation of the format conversion. Finally, waveforms and header variables of selected seismic events are shown.
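As a quick, hedged illustration of what the SAC header variables mentioned above look like in practice (this is not the conversion program described in the paper), a short sketch using ObsPy with a placeholder file name:

```python
from obspy import read

# "event.sac" is a placeholder, not a file distributed by the network center.
tr = read("event.sac", format="SAC")[0]
print(tr.stats.station, tr.stats.channel, tr.stats.npts, tr.stats.delta)
# SAC-specific header words (begin/end times, event location, magnitude, ...)
for key in ("b", "e", "evla", "evlo", "evdp", "mag"):
    print(key, tr.stats.sac.get(key))
```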

4.
The earthquake cataloging system of the Beijing Earthquake Administration digital telemetered seismic network and its products. Cited by: 1 (self-citations: 0, citations by others: 0)
The digital telemetered seismic network of the Beijing Earthquake Administration uses the "regional network seismic observation data management software EDSP-DMS" for seismic data management and earthquake cataloging. Through database and network technologies, the earthquake catalogs, seismic observation data, and event waveform files produced by the network are managed and retrieved. Since January 2002 the network has officially produced monthly earthquake catalogs and seismic observation reports.

5.
陈银 《四川地震》2002,(3):44-44
After the Chengdu digital telemetered seismic network was built, its capability for monitoring and rapid reporting of large earthquakes improved markedly and its functions were enhanced; it also provides earthquake prediction research and basic research with digital observation data of large dynamic range, high resolution, and good linearity that are easy to store, exchange, and process intelligently by computer. The network saves its continuous waveform recordings in files one hour long, and a problem often arises in rapid-reporting work: for relatively large earthquakes, the first arrivals may be recorded in one hourly file while the surface waves appear in the following file (the maximum record length for a triggered event is set to 60 minutes, but as the waveform decays the record is generally at most 20 minutes long). For such earthquakes, directly using the existing software at the network recording center…
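The paper's own fix presumably extends the recording-center software; as a generic sketch of the underlying operation of joining an event that straddles two hourly files, assuming the files are in a format ObsPy can read (file names and the cut window below are placeholders):

```python
from obspy import read

st = read("2002-03-01T10.mseed") + read("2002-03-01T11.mseed")
st.sort(["starttime"])
st.merge(method=1, fill_value="interpolate")   # join traces of the same channel
# cut an arbitrary window around the event (placeholder times)
t0 = st[0].stats.starttime
st.trim(t0 + 1800, t0 + 3600)
st.write("event_joined.mseed", format="MSEED")
```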

6.
By analyzing the structure of MSDP binary phase files, phase-data extraction software was developed in Visual Basic. It automatically extracts the data in phase files and produces raw data in the formats required for earthquake catalogs, seismic reports, and double-difference location studies, freeing researchers from tedious format conversion and substantially speeding up research work while improving its accuracy.
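The MSDP phase-record layout is not given in the abstract, so the record definition below is purely hypothetical; the sketch only illustrates the general pattern of turning fixed-length binary phase records into catalog-style text such as a double-difference input file.

```python
import struct

# Assumed record: station (8 bytes), phase name (4 bytes), arrival time
# (float64), residual (float64), weight (float32) - not the MSDP format.
RECORD = struct.Struct("<8s4sddf")

def extract_phases(path):
    rows = []
    with open(path, "rb") as f:
        while chunk := f.read(RECORD.size):
            if len(chunk) < RECORD.size:
                break
            sta, pha, t, res, w = RECORD.unpack(chunk)
            rows.append((sta.decode().strip("\x00"),
                         pha.decode().strip("\x00"), t, res, w))
    return rows

for sta, pha, t, res, w in extract_phases("event.pha"):   # placeholder file name
    print(f"{sta:<6}{pha:<4}{t:15.3f}{res:7.2f}{w:5.2f}")
```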

7.
Using an FSS-3DBH borehole seismometer, mine tremors, blasts, and natural earthquakes occurring in the Zoucheng area of Shandong Province were recorded, and the characteristics of the different event types were compared through waveform and spectral analysis. The results show clear differences in waveform, amplitude, frequency, and other characteristic indicators among the mine tremors, blasts, and natural earthquakes in this area. Real-time monitoring of seismic events in the region, together with such characteristic analysis, can provide objective data and a scientific basis for studying the basic patterns of regional seismicity.

8.
Management and service of seismic event waveforms from the Capital Region digital seismic network. Cited by: 1 (self-citations: 1, citations by others: 0)
This paper introduces the basic functions and usage of the management and service system for seismic event waveform data from the Capital Region digital seismic network. The system uses GMT and SAC as back-end processing software and publishes the event waveform data on the network through perl and Unix Shell programs.

9.
Using digital records of artificial blasts and natural earthquakes from the Fujian seismic network, the waveform characteristics of blasts and earthquakes occurring in the same area were analyzed by waveform comparison. The results show that blasts and earthquakes differ in seismic phases, the distribution of P-wave first-motion polarities, and the amplitude ratio As/Ap. Effective criteria for blast identification were derived from these differences and successfully tested on a suspected blast, providing a basis for identifying blasts in the future.
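The abstract does not say how the measurement windows are chosen; as a hedged illustration of the As/Ap ratio itself, a short NumPy sketch with assumed window lengths and a synthetic trace:

```python
import numpy as np

def as_ap_ratio(trace, dt, tp, ts, win=2.0):
    """Peak S-to-P amplitude ratio; tp/ts are onset times in seconds,
    win is an assumed window length after each onset."""
    ip, i_s, n = round(tp / dt), round(ts / dt), round(win / dt)
    ap = np.max(np.abs(trace[ip:ip + n]))
    a_s = np.max(np.abs(trace[i_s:i_s + n]))
    return a_s / ap

# toy usage on a synthetic trace
rng = np.random.default_rng(0)
tr = rng.normal(scale=0.1, size=3000)
tr[500:520] += 1.0     # "P" pulse
tr[1500:1540] += 2.5   # "S" pulse
print(round(as_ap_ratio(tr, dt=0.01, tp=5.0, ts=15.0), 2))
```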

10.
Establishment of a digital seismogram library of typical earthquakes recorded at CDSN stations. Cited by: 2 (self-citations: 2, citations by others: 0)
This paper mainly describes how a digital seismogram library was built using combinations of UNIX shell commands. The library is arranged in chronological order, and the seismic waveform files are stored separately from the related tables in the database; the waveform files are stored in CSS3.0 format, consistent with ARS usage conventions. To use the library, a user only needs to change the database name and can then use the powerful ARS analysis software to replay and process the digital seismograms.

11.
The seismic industry is increasingly acquiring broadband data in order to reap the benefits of extra low- and high-frequency content. At the low end, as the sharp low-cut decay gets closer to zero frequency, it becomes harder for a well tie to estimate the low-frequency response correctly. The fundamental difficulty is that well logs are too short to allow accurate estimation of the long-period content of the data. Three distinct techniques, namely parametric constant phase, frequency-domain least squares with multi-tapering, and Bayesian time domain with broadband priors, are introduced in this paper to provide a robust solution to the wavelet estimation problem for broadband seismic data. Each of these techniques has a different mathematical foundation, which enables one to explore a wide range of solutions that can be used on a case-by-case basis depending on the problem at hand. A case study from the North West Shelf Australia is used to analyse the performance of the proposed techniques. Cross-validation is proposed as a robust quality-control measure for evaluating well-tie applications. It is observed that when the seismic data are carefully processed, the constant-phase approach is likely to offer a good solution. The frequency-domain method does not assume a constant phase; this flexibility makes it prone to over-fitting when the phase is approximately constant. Broadband priors for the time-domain least-squares method are found to perform well in defining the low-frequency side lobes of the wavelet.

12.
A new filtering technique for single-fold wide-angle reflection/refraction seismic data is presented. The technique is based on the wavelet decomposition of a set of adjacent traces followed by coherence analysis. The filtering procedure consists of three steps. In the first, a wavelet decomposition of traces into different detail levels is performed. In the second, the coherence attributes for each level are evaluated by calculating cross-correlation functions of detail portions contained in a space–time moving window. Finally, the filtered traces are obtained as a weighted reconstruction of the trace details. Each weight is obtained from the coherence-attributes distribution estimated in a proper interval. A sequence of tests is then conducted in order to select possible optimum or unsuitable wavelet bases. The efficiency of the proposed filter was assessed by calculating some properly designed parameters in order to compare it with other standard de-noising techniques. The proposed method produced a clear signal enhancement in high-density wide-angle seismic data, thus proving that it is a useful processing tool for a reliable correlation of seismic phases.
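A minimal Python sketch of this three-step scheme using PyWavelets: it collapses the paper's moving space–time window to a single whole-trace window and uses the zero-lag normalized correlation of each detail level with the same level of one neighbouring trace as the weight, so it shows the structure of the filter rather than reproducing it.

```python
import numpy as np
import pywt

def coherence_weighted_filter(traces, wavelet="db4", level=3):
    traces = np.asarray(traces, dtype=float)      # (n_traces, n_samples)
    coeffs = [pywt.wavedec(tr, wavelet, level=level) for tr in traces]
    out = np.empty_like(traces)
    for i, c in enumerate(coeffs):
        j = i + 1 if i + 1 < len(coeffs) else i - 1          # neighbouring trace
        weighted = [c[0]]                                     # keep approximation
        for lev in range(1, len(c)):
            a, b = c[lev], coeffs[j][lev]
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            w = abs(np.dot(a, b)) / denom if denom > 0 else 0.0
            weighted.append(w * a)                            # damp incoherent detail
        out[i] = pywt.waverec(weighted, wavelet)[: traces.shape[1]]
    return out
```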

13.
Multi-source seismic technology is an efficient seismic acquisition method that requires a group of blended seismic data to be separated into single-source seismic data for subsequent processing. The separation of blended seismic data is a linear inverse problem. Depending on the relationship between the number of shots and the number of simultaneous sources in the acquisition system, this separation is either an even-determined or overdetermined linear inverse problem, which is relatively easy to solve, or an underdetermined linear inverse problem, which is difficult to solve. For the latter, this paper presents an optimization method that imposes a sparsity constraint on the wavefields to construct the objective function of the inversion, and the problem is solved using an iterative thresholding method. For the most extreme underdetermined separation problem, with a single shot and multiple sources, this paper presents a method of pseudo-deblending with random-noise filtering. In this method, approximate common-shot gathers are obtained through the pseudo-deblending process, and the random noise that appears when the approximate common-shot gathers are sorted into common-receiver gathers is removed by filtering. The separation methods proposed in this paper are applied to three types of numerically simulated data, namely noise-free data, data with random noise, and data with linear coherent noise, with satisfactory results. The noise-suppression effect of these methods is good, particularly for single-shot blended seismic data, which verifies the effectiveness of the proposed methods.
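The blending operator and thresholding schedule used in the paper are not reproduced here; as a generic, hedged sketch of the sparsity-constrained inversion it describes, an iterative soft-thresholding (ISTA) loop on a toy underdetermined system:

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, At, b, lam, n_iter=500, step=1.0):
    """Minimize ||A x - b||^2 + lam ||x||_1; A and At are callables
    applying the forward operator and its adjoint, step <= 1 / ||A||^2."""
    x = np.zeros_like(At(b))
    for _ in range(n_iter):
        x = soft(x + step * At(b - A(x)), lam * step)
    return x

# toy example: a random underdetermined "blending" matrix
rng = np.random.default_rng(1)
M = rng.normal(size=(50, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = 1.0
b = M @ x_true
x_hat = ista(lambda v: M @ v, lambda v: M.T @ v, b, lam=0.05,
             step=1.0 / np.linalg.norm(M, 2) ** 2)
print(np.round(np.sort(x_hat)[-5:], 2))   # the five recovered spikes
```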

14.
We present a new method for the prediction of discontinuities and lithological variations ahead of the tunnel face. The automatic procedure is applied to data collected by seismic reflection surveys, with the sources and sensors located along the tunnel. The method allows one: i) to estimate an average value of the wave velocity; ii) to detect the discontinuities for each source point; and iii) to analyze and plot the number of superposing estimates for each node of the domain. The final result can be interpreted as the probability of detecting a discontinuity at a certain distance from the tunnel face. The method automatically estimates the peaks in the seismograms that can be related to a reflection. On the basis of this process, the method requires only the source–receiver geometry and the data-acquisition parameters. The procedure has been tested on synthetic and real data from a seismic survey on a tunnel under construction. The results indicate that the method runs very fast and is reliable in identifying lithological changes and discontinuities up to more than 100 m ahead of the tunnel face.

15.
Seismic wavelet estimation is a key problem in seismic data processing and interpretation, and the reliability of the estimated wavelet directly affects the accuracy of deconvolution and inversion. Existing wavelet estimation methods fall into deterministic and statistical types. This paper combines the two: a deterministic spectral-analysis method and a statistical skewness-maximization method are used to extract the amplitude and phase information of a time-varying wavelet, respectively, yielding an estimated time-varying wavelet. The approach requires no assumption of time invariance or of a particular phase, and it can estimate time-varying phase. The estimated time-varying wavelet is then used for non-stationary deconvolution, improving the fidelity of the seismic records and providing high-quality sections for detailed reservoir prediction and description. Tests on synthetic models verify the feasibility of the method, and application to real seismic data shows that it can effectively extract the time-varying information of the wavelet.
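As a hedged illustration of the statistical half of this scheme only, here is a sketch that scans constant phase rotations of one data window and keeps the angle that maximizes the absolute skewness; the spectral-analysis amplitude step and the full time-varying decomposition are omitted.

```python
import numpy as np
from scipy.signal import hilbert
from scipy.stats import skew

def phase_by_skewness(segment, angles=np.arange(-180, 181, 1)):
    """Constant phase (degrees) that maximizes |skewness| of the
    phase-rotated segment."""
    h = np.imag(hilbert(segment))
    best, best_s = 0.0, -np.inf
    for a in angles:
        r = np.deg2rad(a)
        rotated = segment * np.cos(r) - h * np.sin(r)
        s = abs(skew(rotated))
        if s > best_s:
            best, best_s = float(a), s
    return best
```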

16.
Introduction  Research on the structure and physical properties of ancient buried hills, igneous rocks, and the basement is relatively difficult using seismic data alone. If seismic data, magnetotelluric (MT) data, and other geophysical data are combined, better results can be obtained for this problem. A number of geophysicists at home and abroad, such as CHEN and WANG (1990) and Siripunvarapor and Egbert (2000), have tried many methods to solve the problem by the inversion of seismic da…

17.
At present, the effective use of multiples has centered only on multiple imaging, which tries to obtain richer subsurface structural information by imaging the multiples. Taking a different path, this paper explores the use of multiples to improve the resolution of seismic data. First, based on the idea of the focal transform, the order of the multiples is reduced in the focal domain; a theoretical derivation leads to the important conclusion that, in the focal domain, the multiples behave as a multidimensional wavelet deconvolution of the original data, which proves in principle that the method can improve the resolution of seismic data. Then a non-stationary regression adaptive matched-filtering method with shaping regularization is used to separate the high-resolution data constructed from the multiples in the focal domain, achieving a high-resolution transformation of the original data. Unlike conventional deconvolution models, the method is derived from wave theory and is applicable to arbitrarily complex situations; all shot records contribute to each output trace, which imposes spatial constraints and improves the lateral resolution of the data while increasing the vertical resolution. Finally, synthetic tests and real-data processing verify the effectiveness, adaptability, and practicality of the method.

18.
A brief discussion of regularization in reflection traveltime tomography. Cited by: 1 (self-citations: 2, citations by others: 1)
Reflection traveltime tomography is essentially an ill-posed problem, and regularization is an effective means of reducing its ill-posedness. Reflection traveltime tomography ultimately reduces to solving a system of linear equations, and this paper discusses the role and forms of regularization in that solution. The roles of regularization are: (1) constraining the underdetermined and null-space components with the overdetermined component; (2) constraining the underdetermined and null-space components with prior information; (3) damping the effect of uneven ray coverage; and (4) damping the effect of inaccuracies in the data. Regularization can be introduced in two ways: (1) additive (appending a regularization matrix below the tomography matrix, including derivative-type and zeroth-order regularization; first-derivative regularization corresponds to the flattest solution, second-derivative regularization to the smoothest solution, and zeroth-order regularization to a tightly constrained solution); and (2) multiplicative (multiplying the tomography matrix by a regularization matrix, mainly damping-type regularization). The effects of regularization were tested on a simple model. Compared with the velocity model obtained without any regularization, the models obtained with the various regularization constraints recover anomalies of smaller amplitude, but the velocity sections are much smoother, which is more favorable for subsequent ray-tracing forward modeling and tomographic inversion.
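As a small illustration of the additive form on a toy system (the construction of the actual tomography matrix is outside the scope of the abstract; the matrix sizes and the weight `eps` below are arbitrary), a SciPy sketch contrasting first-derivative regularization with simple damping:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
n_rays, n_cells = 40, 100
G = sp.random(n_rays, n_cells, density=0.05, random_state=0, format="csr")  # toy ray matrix
d = rng.normal(size=n_rays)                                                  # toy residuals

# Additive, first-derivative ("flattest model") regularization: append
# eps * D below G and zeros below d, then solve the augmented system.
eps = 0.5
D = sp.diags([-np.ones(n_cells - 1), np.ones(n_cells - 1)], [0, 1],
             shape=(n_cells - 1, n_cells))
A = sp.vstack([G, eps * D]).tocsr()
b = np.concatenate([d, np.zeros(n_cells - 1)])
m_flat = lsqr(A, b)[0]

# Damping-type regularization via LSQR's damp parameter (equivalent to
# appending damp * I, i.e. zeroth-order regularization).
m_damped = lsqr(G, d, damp=0.5)[0]
print(np.linalg.norm(m_flat), np.linalg.norm(m_damped))
```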

19.
The automatic seismic data batch-processing system EDSP_LX was designed to meet the need for rapid processing of the massive observation data from mobile reservoir monitoring networks. The software implements file-system-based automatic data retrieval, seismic event detection and data storage, and continuous data storage. This paper explains the design ideas and main functions of the software, with emphasis on the system architecture and the automatic retrieval of waveform data.
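EDSP_LX's own detector is not described in the abstract; as a generic sketch of the kind of event detection such a batch system performs, here is an STA/LTA trigger with ObsPy (the file name and trigger thresholds are placeholders):

```python
from obspy import read
from obspy.signal.trigger import classic_sta_lta, trigger_onset

tr = read("reservoir_station.mseed")[0]        # placeholder file name
df = tr.stats.sampling_rate
cft = classic_sta_lta(tr.data.astype(float), int(1 * df), int(20 * df))  # 1 s STA, 20 s LTA
for on, off in trigger_onset(cft, 3.0, 1.5):   # assumed on/off thresholds
    print("event candidate:", tr.stats.starttime + on / df,
          "to", tr.stats.starttime + off / df)
```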

20.
Seismic intensity, measured through the Mercalli–Cancani–Sieberg (MCS) scale, provides an assessment of ground shaking level deduced from building damage, changes in the natural environment, and any other observed effects or perceptions. Generally, moving away from the earthquake epicentre the effects weaken, but intensities may vary in space, as there can be areas that amplify or reduce the shaking depending on the earthquake source geometry, geological features, and local factors. Currently, the Istituto Nazionale di Geofisica e Vulcanologia analyzes, for each seismic event, intensity data collected through the online macroseismic questionnaire available at the web page www.haisentitoilterremoto.it. Questionnaire responses are aggregated at the municipality level and analyzed to obtain an intensity defined on an ordinal categorical scale. The main aim of this work is to model macroseismic attenuation and obtain an intensity prediction equation that describes the decay of macroseismic intensity as a function of magnitude and distance from the hypocentre. To do this we employ an ordered probit model, assuming that the intensity response variable is related to a set of predictors through the probit link function. Differently from what is commonly done in the macroseismic literature, this approach properly takes into account the qualitative and ordinal nature of macroseismic intensity as defined on the MCS scale. Using Markov chain Monte Carlo methods, we estimate the posterior probability of the intensity at each site. Moreover, by comparing observed and estimated intensities we are able to detect anomalous areas in terms of residuals. This kind of information can be useful for a better assessment of seismic risk and for promoting effective policies to reduce major damage.
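In notation not taken from the paper (the predictors and their functional form here are illustrative), an ordered probit for the intensity I_s observed at site s, given magnitude M and hypocentral distance D_s, can be written as

```latex
P(I_s = k \mid M, D_s) = \Phi(c_k - \mu_s) - \Phi(c_{k-1} - \mu_s),
\qquad \mu_s = \beta_1 M + \beta_2 \ln D_s ,
```

where Phi is the standard normal CDF and c_0 < c_1 < ... < c_K are the cut-points separating the ordered intensity classes; the posterior of the beta coefficients and cut-points is then sampled with MCMC, as described above.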
