Article Search
By access: fee-based full text 7, free 3
By subject: surveying and mapping 1, geophysics 7, geology 1, physical geography 1
By year: 2018 (2), 2017 (1), 2013 (1), 2009 (1), 2003 (1), 2000 (1), 1994 (1), 1989 (2)
10 results in total
1.
2.
We conducted a study of the spatial distributions of seismicity and earthquake hazard parameters for Turkey and the adjacent areas, applying the maximum likelihood method. The procedure allows for the use of either historical or instrumental data, or a combination of the two. By using this method, we can estimate the earthquake hazard parameters, which include the maximum regional magnitude Mmax, the activity rate λ of seismic events, and the well-known b value, the slope of the frequency-magnitude Gutenberg-Richter relationship. These three parameters are determined simultaneously using an iterative scheme. The uncertainty in the determination of the magnitudes is also taken into consideration. The return periods (RP) of earthquakes with a magnitude M ≥ m are also evaluated. The examined area is divided into 24 seismic regions based on their seismotectonic regime. Homogeneity of the magnitudes is an essential factor in such studies; to achieve it, formulas that convert any magnitude to the surface-wave magnitude (MS) scale are developed. New completeness cutoffs and their corresponding time intervals are also assessed for each of the 24 seismic regions. Each of the obtained parameters is attributed to its respective seismic region, allowing for an analysis of the localized seismicity parameters and a representation of their regional variation on a map. The earthquake hazard level is also calculated as a function of the form Θ = (Mmax, RP6.0), and a relative hazard scale (defined as the index K) is introduced for each seismic region. The investigated regions are then classified into five groups using these parameters. This classification is useful for theoretical and practical reasons and provides a picture of quantitative seismicity. An attempt is then made to relate these values to the local tectonics.
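The paper's joint iterative scheme is not reproduced in the abstract, but the Gutenberg-Richter quantities it estimates are easy to illustrate. The Python sketch below uses the classic Aki-Utsu maximum-likelihood estimator for the b value (a simpler, non-iterative stand-in for the simultaneous scheme) on a hypothetical magnitude catalogue, then derives a return period from the fitted relationship; all numbers are illustrative.

```python
import numpy as np

# Hypothetical magnitude catalogue for one seismic region (illustrative data).
mags = np.array([4.1, 4.3, 4.0, 5.2, 4.6, 4.4, 4.8, 4.0, 4.2, 5.7, 4.1, 4.5])
Mc, dM = 4.0, 0.1            # completeness cutoff and magnitude bin width
years = 50.0                 # assumed catalogue span

# Aki-Utsu maximum-likelihood b value with bin-width correction:
#   b = log10(e) / (mean(M) - (Mc - dM/2))
b = np.log10(np.e) / (mags.mean() - (Mc - dM / 2.0))

# Mean annual activity rate of events with M >= Mc
lam = len(mags) / years

def return_period(m):
    """Gutenberg-Richter rate above magnitude m, and its reciprocal (RP)."""
    rate = lam * 10.0 ** (-b * (m - Mc))   # events per year with M >= m
    return 1.0 / rate

print(f"b = {b:.2f}, RP(M >= 6.0) = {return_period(6.0):.0f} years")
```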
3.
Three-dimensional receiver ghost attenuation (deghosting) of dual-sensor towed-streamer data is straightforward, in principle. In its simplest form, it requires applying a three-dimensional frequency–wavenumber filter to the vertical component of the particle-motion data, to correct for the amplitude reduction on the vertical component of non-normal-incidence plane waves, before combining with the pressure data. More elaborate techniques apply three-dimensional filters to both components before summation, for example, for ghost wavelet dephasing and mitigation of noise of different strengths on the individual components in optimum deghosting. The problem with all these techniques is, of course, that it is usually impossible to transform the data into the crossline wavenumber domain because of aliasing. Hence, usually, a two-dimensional version of deghosting is applied to the data in the frequency–inline-wavenumber domain. We investigate going down the "dimensionality ladder" one more step, to a one-dimensional weighted summation of the records of the collocated sensors, to create an approximate deghosting procedure. We specifically consider amplitude-balancing weights computed via a standard automatic gain control before summation, reminiscent of a diversity stack of the dual-sensor recordings. This technique is independent of the actual streamer depth and insensitive to variations in the sea-surface reflection coefficient. The automatic gain control weights serve two purposes: (i) to approximately correct for the geometric amplitude loss of the Z data and (ii) to mitigate noise-strength variations on the two components. Here, Z denotes the vertical component of the velocity of particle motion scaled by the seismic impedance of the near-sensor water volume. The weights are time-varying and can also be made frequency-band dependent, adapting better to frequency variations of the noise. The investigated process is a very robust, almost fully hands-off, approximate three-dimensional deghosting step for dual-sensor data, requiring no spatial filtering and no explicit estimates of noise power. We argue that this technique performs well in terms of ghost attenuation (albeit not exact ghost removal) and balancing the signal-to-noise ratio in the output data. For instances where full three-dimensional receiver deghosting is the final product, the proposed technique is appropriate for efficient quality control of the data acquired and for aiding the parameterisation of the subsequent deghosting processing.
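As a rough illustration of the one-dimensional weighted summation, the sketch below forms diversity-style weights from a sliding-RMS (AGC-like) amplitude gauge on each component and sums the collocated P and Z traces. The window length and the exact inverse-power weighting rule are assumptions for illustration, not the authors' production parameterisation.

```python
import numpy as np

def sliding_rms(x, win):
    """Time-varying RMS amplitude in a sliding window (simple AGC gauge)."""
    power = np.convolve(x ** 2, np.ones(win) / win, mode="same")
    return np.sqrt(power) + 1e-12   # guard against division by zero

def diversity_pz_sum(p, z, win=125):
    """Diversity-style weighted sum of collocated P (pressure) and Z
    (impedance-scaled vertical particle velocity) traces. Weights are
    inverse AGC powers, so the locally noisier/weaker component contributes
    less: an approximate 1-D deghosting, not exact ghost removal."""
    wp = 1.0 / sliding_rms(p, win) ** 2
    wz = 1.0 / sliding_rms(z, win) ** 2
    return (wp * p + wz * z) / (wp + wz)
```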
4.
We propose a three-step bandwidth-enhancing wavelet deconvolution process, combining linear inverse filtering and non-linear reflectivity construction based on a sparseness assumption. The first step is conventional Wiener deconvolution. The second step consists of further spectral whitening outside the spectral bandwidth of the residual wavelet after Wiener deconvolution, i.e., the wavelet resulting from application of the Wiener deconvolution filter to the original wavelet, which usually is not a perfect spike due to band limitations of the original wavelet. We specifically propose a zero-phase filtered sparse-spike deconvolution as the second step, to recover the reflectivity dominantly outside the bandwidth of the residual wavelet after Wiener deconvolution. The filter applied to the sparse-spike deconvolution result is proportional to the deviation of the amplitude spectrum of the residual wavelet from unity, i.e., it is of higher amplitude the closer the amplitude spectrum of the residual wavelet is to zero, and of very low amplitude the closer it is to unity. The third step consists of summation of the data from the first two steps, gradually adding the contribution from the sparse-spike deconvolution result at those frequencies at which the residual wavelet after Wiener deconvolution has small amplitudes. We propose to call this technique "sparsity-enhanced wavelet deconvolution". We demonstrate the technique on real data with the deconvolution of the (normal-incidence) source-side sea-surface ghost of marine towed-streamer data. We also present the extension of the proposed technique to time-varying wavelet deconvolution.
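The third, summation step can be sketched as a frequency-domain weighting. The Python fragment below assumes the Wiener-deconvolved trace, the sparse-spike result, and the residual wavelet are available as sampled arrays; it weights the sparse-spike contribution by the deviation of the residual-wavelet amplitude spectrum from unity before adding. The passband normalisation of the spectrum is an illustrative choice, not taken from the paper.

```python
import numpy as np

def sparsity_enhanced_sum(d_wiener, d_sparse, w_residual):
    """Step 3 of the scheme: add the (zero-phase filtered) sparse-spike
    result to the Wiener-deconvolved trace, weighted per frequency by the
    deviation of the residual-wavelet amplitude spectrum from unity."""
    n = len(d_wiener)
    W = np.abs(np.fft.rfft(w_residual, n))
    W = W / W.max()                     # normalise passband to ~1 (assumption)
    g = np.clip(1.0 - W, 0.0, 1.0)      # ~0 inside the passband, ~1 outside
    D = np.fft.rfft(d_wiener) + g * np.fft.rfft(d_sparse)
    return np.fft.irfft(D, n)
```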
5.
In two-component seismic observations with vertical and in-line horizontal geophones, the compressional (P-) wave amplitudes, as well as the vertically polarized shear (SV-) wave amplitudes, are observed on both vertical and horizontal geophones. In our case, we use a P-wave source, while the SV waves are the result of mode conversion. The mode-conversion mechanism considered here is related to the near-surface layers, i.e. we have a P-leg from the source and mode conversion at/in the weathered layer. The resulting SV waves will therefore show lateral variations, because the elastic parameters of the near-surface layers vary along the seismic line, but these variations will be consistent at the surface. This effect is demonstrated by a synthetic example based on elastic parameters representative of the actual seismic line being considered. To separate the individual P and SV arrivals, we apply a two-dimensional convolution filter designed to meet the wavenumber-frequency (k-f) domain transfer function for P-SV separation, which can be derived from the k-f domain geophone receiving characteristic and the near-surface P- and S-wave velocities. The reason for P-SV separation filtering in the offset-traveltime (X-T) domain, instead of filtering directly in the k-f domain, is a great saving in computer time, as X-T filters with few coefficients can be used. In this paper, after a short summary of the k-f domain P-SV separation filters and their transformation to the X-T domain, we apply the X-T filters to synthetic data in order to demonstrate that our design is correct. We also work on actual data and discuss the problems faced, which mainly originate from the different geophone groups and, as a consequence, the different scalings of vertical and horizontal geophones. The advantage of two-component seismic observations is twofold: first, a clean P-wave section is obtained (SV energy arriving at the receivers is cancelled by applying the aforesaid separation filter) and, second, we obtain an additional SV-wave section at almost no extra data-acquisition cost. These two sections contribute towards distinguishing between true and false bright spots, so they are used as direct hydrocarbon indicator tools.
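The paper's separation filter is built from the geophone receiving characteristics and near-surface velocities; as a generic stand-in, the sketch below designs a small X-T convolution filter from a k-f velocity-fan mask and applies it by two-dimensional convolution, which illustrates why few-coefficient X-T filters are cheap. All parameters (grid sizes, sampling, velocity bounds, filter lengths) are hypothetical.

```python
import numpy as np
from scipy.signal import convolve2d

def fan_filter_xt(nt, nx, dt, dx, v_low, v_high, nt_out=31, nx_out=11):
    """Design a small X-T convolution filter from a k-f domain velocity-fan
    mask (a generic stand-in for the paper's P-SV separation transfer
    function). Truncation to few coefficients is what makes X-T filtering
    far cheaper than direct k-f filtering."""
    f = np.fft.fftfreq(nt, dt)[:, None]    # temporal frequencies (Hz)
    k = np.fft.fftfreq(nx, dx)[None, :]    # spatial wavenumbers (1/m)
    v_app = np.abs(f) / np.maximum(np.abs(k), 1e-9)   # apparent velocity f/k
    mask = ((v_app >= v_low) & (v_app <= v_high)).astype(float)
    h = np.fft.fftshift(np.real(np.fft.ifft2(mask)))  # X-T impulse response
    ct, cx = nt // 2, nx // 2
    return h[ct - nt_out // 2: ct + nt_out // 2 + 1,
             cx - nx_out // 2: cx + nx_out // 2 + 1]

# usage on a synthetic common-shot gather (512 samples, 64 traces)
gather = np.random.randn(512, 64)
h = fan_filter_xt(512, 64, dt=0.004, dx=25.0, v_low=1500.0, v_high=6000.0)
p_section = convolve2d(gather, h, mode="same")
```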
6.
The goal of seismic reflection surveys is the derivation of petrophysical subsurface parameters from surface measurements. Today's well-established technique, in data-acquisition as well as processing terms, is based on the acoustic approximation to real-world wave propagation. In recent years much work has been done to extend the technique to the elastic approximation. In particular, there has been an important trend towards elastic inversion techniques operating on plane-wave seismograms, called simultaneous P-SV inversion (or, for short, P-SV inversion) in this paper. While still under investigation, some important aspects of P-SV inversion concerning data acquisition as well as pre-processing should be pointed out. To fit the assumptions of P-SV inversion schemes, at least a two-dimensional picture of the reflected wavefield has to be recorded with vertical and in-line horizontal receivers. Moreover, the theoretical work done suggests that, in addition to a survey with a compressional-wave source, a second survey using sources radiating vertically polarized shear waves is needed. Finally, proper slant stacking must be performed to obtain plane-wave seismograms. The P/S-separated plane-wave seismograms are then well prepared for feeding into the inversion algorithms. In this paper, a tutorial overview of the data acquisition and pre-processing in accordance with the P-SV inversion philosophy is given and illustrated using synthetic seismograms. A judgement on the feasibility of the P-SV inversion philosophy must be left to ongoing research.
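The slant-stacking step mentioned above can be sketched in a few lines. The following is a minimal linear tau-p transform with nearest-sample lookup; "proper" slant stacking for inversion purposes would add interpolation, anti-alias protection, and amplitude handling.

```python
import numpy as np

def slant_stack(gather, offsets, dt, p_values):
    """Minimal linear slant stack (tau-p transform): for each ray parameter
    p, sum the gather along the line t = tau + p*x."""
    nt, nx = gather.shape
    taus = np.arange(nt) * dt
    taup = np.zeros((nt, len(p_values)))
    for j, p in enumerate(p_values):
        for i, x in enumerate(offsets):
            idx = np.rint((taus + p * x) / dt).astype(int)  # nearest sample
            valid = (idx >= 0) & (idx < nt)
            taup[valid, j] += gather[idx[valid], i]
    return taup

# usage: 48-trace gather, 4 ms sampling, ray parameters up to 0.5 s/km
# taup = slant_stack(gather, offsets=np.arange(48) * 50.0, dt=0.004,
#                    p_values=np.linspace(0.0, 0.5e-3, 101))
```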
7.
The stacking velocity best characterizes the normal-moveout curves in a common-midpoint gather, while the migration velocity characterizes the diffraction curves in a zero-offset section as well as in a common-midpoint gather. For horizontally layered media, the two velocity types coincide due to the conformance of the normal and the image ray. In the case of dipping subsurface structures, stacking velocities depend on the dip of the reflector and relate to normal rays, but with a dip-dependent lateral smear of the reflection point. After dip-moveout correction, the stacking velocities are reduced while the reflection-point smear vanishes, focusing the rays on the common reflection points. For homogeneous media, the dip-moveout correction is independent of the actual velocity and can be applied as a dip-moveout correction to multiple offset before velocity analysis. Migration to multiple offset is a prestack time-migration technique which produces data sets that mimic high-fold, bin-centre-adjusted, common-midpoint gathers. This method is independent of velocity and can migrate any 2D or 3D data set with arbitrary acquisition geometry. The gathers generated can be analysed for normal-moveout velocities using traditional methods such as the interpretation of multi-velocity-function stacks. These stacks, however, are equivalent to multi-velocity-function time migrations, and the derived velocities are migration velocities.
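The dip dependence of stacking velocity described above follows, for a constant-velocity medium, Levin's relation V_stack = V / cos(θ); DMO removes this dependence. A small numerical illustration with an assumed medium velocity:

```python
import numpy as np

v_medium = 2500.0   # assumed constant medium velocity, m/s
for dip_deg in (0.0, 15.0, 30.0, 45.0):
    v_stack = v_medium / np.cos(np.radians(dip_deg))   # Levin's relation
    print(f"dip {dip_deg:4.1f} deg: V_stack = {v_stack:6.1f} m/s; "
          f"after DMO: {v_medium:.1f} m/s")
```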
8.
A community atlas is an effective method of promoting student-centered, learning-oriented instruction. It provides an integrated framework for teaching thematic interdisciplinary material and promotes collaborative work by students, whose efforts can be shared amongst themselves and with the community. This paper describes community atlas projects from three West Virginia middle schools, in which 320 students and five teachers participated. Younger and less structured students responded with more enthusiasm to the open-ended nature of the assignment. Self-disciplined students produced effective web pages combining images, maps, and non-spatial information such as demographic tables and local perceptions. Although this project was a collaboration between a university and local middle schools, sufficient resources are available for teachers to implement community atlases without specialized assistance.
9.
Single-component towed-streamer marine data acquisition records the pressure variations of the upgoing compressional waves followed by the polarity-reversed pressure variations of the downgoing waves, creating sea-surface ghost events in the data. The sea-surface ghost for constant-depth towed-streamer acquisition is usually characterised by a ghost operator acting on the upgoing waves, which can be formulated as a filtering process in the frequency–wavenumber domain. The deghosting operation, usually via the application of the inverse Wiener filter related to the ghost operator, acts on the signal as well as the noise. The noise power transferred into the deghosted data is proportional to the power spectrum of the inverse Wiener filter, which amplifies the noise strongly at the notch wavenumbers and frequencies of the ghost operator. For variable-depth streamer acquisition, the sea-surface ghost can no longer be described as a wavenumber–frequency operator, but rather as a linear relationship between the wavenumber–frequency representation of the upgoing waves at the sea surface and the data in the space–frequency domain. In this article, we investigate how the application of the inverse process acts on noise. It turns out that the noise magnification is less severe with variable-depth streamer data than with constant-depth data, and is inversely proportional to the local slant of the streamer. We support this statement via application of the deghosting process to real and numerical random noise. We also propose a more general concept of a wavenumber–frequency ghost power transfer function, applicable to variable-depth streamer acquisition, and demonstrate that the inverse of the proposed variable-depth ghost power transfer function can be used to approximately quantify the action of the variable-depth streamer deghosting process on noise.
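For the constant-depth case, the notch-driven noise amplification can be sketched directly. Assuming a vertical-incidence ghost operator G(f) = 1 + R·exp(-i2πfτ) with two-way delay τ = 2d/c and a stabilised inverse Wiener filter F, the noise power transfer is |F(f)|²; the depth, reflection coefficient, and stabilisation constant below are illustrative, not values from the paper.

```python
import numpy as np

d, c, R, eps = 15.0, 1500.0, -0.98, 1e-3   # depth (m), water velocity (m/s),
                                           # reflection coeff., stabiliser
tau = 2.0 * d / c                          # two-way ghost delay (s)
f = np.linspace(0.0, 125.0, 1001)          # frequency axis (Hz)
G = 1.0 + R * np.exp(-2j * np.pi * f * tau)     # vertical-incidence ghost
F = np.conj(G) / (np.abs(G) ** 2 + eps)         # stabilised inverse filter
noise_gain = np.abs(F) ** 2                     # noise power transfer
print(f"notch spacing 1/tau = {1.0 / tau:.0f} Hz; "
      f"max noise power gain = {noise_gain.max():.0f}x")
```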
10.
'Coverage' or 'fold' is defined as the multiplicity of common-midpoint (CMP) data. For CMP stacking, the coverage equals the number of traces sharing a common reflection point on flat subsurface reflectors. This relationship does not hold for dipping reflectors. The deficiencies of CMP stacking with respect to imaging dipping events have long been overcome by the introduction of the dip-moveout (DMO) correction. However, the concept of coverage has not yet been satisfactorily updated to a 'DMO coverage' consistent with DMO stacking. A definition of constant-velocity DMO coverage is proposed here. A subsurface reflector is illuminated from a given source and receiver location if the time difference between the reflector's zero-offset traveltime and the NMO- and DMO-corrected traveltime of the reflection event is less than half a dominant wavelength. Because a subsurface reflector location is determined by its zero-offset traveltime, its strike and its dip, the DMO coverage also depends on these three parameters. For every surface location, the proposed DMO coverage consists of a 3D fold distribution over reflector strike, dip and zero-offset traveltime.
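The proposed illumination test is directly computable. The toy sketch below, reading "half a dominant wavelength" as half a dominant period in the time domain, counts the traces whose NMO- and DMO-corrected traveltimes fall within that tolerance of a reflector element's zero-offset traveltime; all names and numbers are hypothetical.

```python
import numpy as np

def illuminates(t_corrected, t0, f_dom):
    """A trace illuminates a reflector element if its NMO- and DMO-corrected
    traveltime lies within half a dominant period of the element's
    zero-offset traveltime (times in seconds, f_dom in Hz)."""
    return np.abs(t_corrected - t0) < 0.5 / f_dom

# Toy DMO fold for one reflector element (fixed t0, strike, dip);
# t_corr holds hypothetical corrected traveltimes of candidate traces.
t0, f_dom = 1.200, 30.0
t_corr = np.array([1.198, 1.205, 1.230, 1.201, 1.187])
fold = int(illuminates(t_corr, t0, f_dom).sum())
print(f"DMO fold for this reflector element: {fold}")
```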