Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
2.
3.
4.
5.
In an accompanying paper, we used waveform tomography to obtain a velocity model between two boreholes from a real crosshole seismic experiment. As with all inversions of geophysical data, it is important to assess the final model, to determine which parts are well resolved and can confidently be used for geological interpretation. In this paper we use checkerboard tests to provide a quantitative estimate of the performance of the inversion and the reliability of the final velocity model. We use the output from the checkerboard tests to determine resolvability across the velocity model. Such tests can act as good guides for designing appropriate inversion strategies. Here we found that, by including both reference-model and smoothing constraints in initial inversions, and then relaxing the smoothing constraint in later inversions, an optimum velocity image was obtained. Additionally, we noticed that the performance of the inversion depended on a relationship between velocity perturbation and checkerboard grid size: larger velocity perturbations were better resolved when the grid size was also increased. Our results suggest that model assessment is an essential step before interpreting features in waveform tomographic images.
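The checkerboard procedure described above is easy to sketch numerically. The example below is a minimal illustration with assumed grid dimensions, cell size and perturbation amplitude (not those of the paper): it builds an alternating velocity perturbation and scores how well a band-limited "recovered" model reproduces it, standing in for the tomographic resolvability measure.

```python
import numpy as np

def checkerboard(shape, cell, amplitude):
    """Alternating +/- amplitude blocks of cell x cell grid nodes."""
    yy, xx = np.indices(shape)
    return amplitude * (-1.0) ** ((yy // cell) + (xx // cell))

def resolvability(target, recovered):
    """Normalized correlation between the input checkerboard and its recovery;
    1.0 means perfectly resolved, values near 0 mean unresolved."""
    return float(np.sum(target * recovered) /
                 np.sqrt(np.sum(target**2) * np.sum(recovered**2)))

pert = checkerboard((40, 20), cell=5, amplitude=200.0)   # m/s perturbation
recovered = pert.copy()
# crude vertical smoothing stands in for the band-limited inversion output
recovered[1:-1, :] = (pert[:-2, :] + pert[1:-1, :] + pert[2:, :]) / 3.0
score = resolvability(pert, recovered)
```

A score computed cell-by-cell over local windows, rather than globally as here, gives the spatial resolvability map the abstract refers to.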

6.
7.
8.
Regularization is usually necessary in solving seismic tomographic inversion problems. In general, the equation system of seismic tomography is very large, often making a suitable choice of the regularization parameter difficult. In this paper, we propose an algorithm for the practical choice of the regularization parameter in linear tomographic inversion. The algorithm is based on the types of statistical assumptions most commonly used in seismic tomography. We first transfer the system of equations into a Krylov subspace using Lanczos bidiagonalization. In the transformed subspace, the system of equations takes the form of a standard damped least-squares normal equation. The solution to this normal equation can be written as an explicit function of the regularization parameter, which makes the choice of the regularization parameter computationally convenient. Two criteria for the choice of the regularization parameter are investigated with numerical simulations. If the dimension of the transformed space is much smaller than that of the original model space, the algorithm can be very efficient, which is practically useful in large seismic tomography problems.
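The Krylov-subspace construction described above can be sketched with a small dense system (the matrix sizes, subspace dimension and damping value below are illustrative assumptions). After k Golub-Kahan/Lanczos bidiagonalization steps, the damped least-squares solution is an explicit, cheap function of the regularization parameter: only a k-by-k system must be re-solved for each candidate value, which is what makes scanning many regularization parameters affordable.

```python
import numpy as np

def golub_kahan(G, d, k):
    """k steps of Lanczos (Golub-Kahan) bidiagonalization of G, started from d,
    with full reorthogonalization for numerical safety."""
    m, n = G.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
    beta0 = np.linalg.norm(d)
    U[:, 0] = d / beta0
    v = G.T @ U[:, 0]
    for j in range(k):
        v -= V[:, :j] @ (V[:, :j].T @ v)            # reorthogonalize
        alpha = np.linalg.norm(v); V[:, j] = v / alpha; B[j, j] = alpha
        u = G @ V[:, j] - alpha * U[:, j]
        u -= U[:, :j + 1] @ (U[:, :j + 1].T @ u)    # reorthogonalize
        beta = np.linalg.norm(u); U[:, j + 1] = u / beta; B[j + 1, j] = beta
        v = G.T @ U[:, j + 1] - beta * V[:, j]
    return U, B, V, beta0

def damped_solution(B, V, beta0, lam):
    """Damped least-squares solution in the transformed subspace, written as an
    explicit function of the regularization parameter lam (only k x k solve)."""
    k = B.shape[1]
    rhs = np.zeros(B.shape[0]); rhs[0] = beta0      # d maps to beta0 * e1
    y = np.linalg.solve(B.T @ B + lam**2 * np.eye(k), B.T @ rhs)
    return V @ y

rng = np.random.default_rng(0)
G = rng.standard_normal((60, 40))
m_true = rng.standard_normal(40)
d = G @ m_true + 0.01 * rng.standard_normal(60)
U, B, V, beta0 = golub_kahan(G, d, k=30)
m_est = damped_solution(B, V, beta0, lam=0.1)
```

Because U and V are orthonormal, the original misfit ||Gm - d|| restricted to m = Vy equals ||By - beta0*e1||, which is why the subspace problem has the standard damped normal-equation form.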

9.
10.
Inversion of seismic attributes for velocity and attenuation structure
We have developed an inversion formulation for velocity and attenuation structure using seismic attributes, including envelope amplitude, instantaneous frequency and arrival times of selected seismic phases. We refer to this approach as AFT inversion, for amplitude, (instantaneous) frequency and time. Complex trace analysis is used to extract the different seismic attributes. The instantaneous frequency data are converted to t* using a matching procedure that approximately removes the effects of the source spectra. To invert for structure, ray-perturbation methods are used to compute the sensitivity of the seismic attributes to variations in the model. An iterative inversion procedure is then performed from smooth to less smooth models, progressively incorporating the shorter-wavelength components of the model. To illustrate the method, seismic attributes are extracted from seismic-refraction data of the Ouachita PASSCAL experiment and used to invert for shallow crustal velocity and attenuation structure. Although amplitude data are sensitive to model roughness, the inverted velocity and attenuation models were required by the data to maintain a relatively smooth character. The amplitude and t* data were needed, along with the traveltimes, at each step of the inversion in order to fit all the seismic attributes at the final iteration.
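The complex trace analysis step is straightforward to sketch: the analytic signal yields envelope amplitude and instantaneous frequency directly. This is a minimal illustration on a synthetic 30 Hz tone, not data from the Ouachita experiment, and the sample interval is an assumption.

```python
import numpy as np
from scipy.signal import hilbert

def trace_attributes(trace, dt):
    """Envelope amplitude and instantaneous frequency from the analytic
    signal z(t) = x(t) + i*H[x](t), where H is the Hilbert transform."""
    z = hilbert(trace)
    envelope = np.abs(z)
    phase = np.unwrap(np.angle(z))
    inst_freq = np.gradient(phase, dt) / (2.0 * np.pi)   # Hz
    return envelope, inst_freq

dt = 0.001                                   # s, assumed sample interval
t = np.arange(0.0, 1.0, dt)
trace = np.cos(2.0 * np.pi * 30.0 * t)       # synthetic 30 Hz arrival
env, f = trace_attributes(trace, dt)
```

For the pure tone the instantaneous frequency recovers roughly 30 Hz away from the trace edges; on field data a time window around the picked arrival would be used instead of the whole trace.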

11.
Reweighting strategies in seismic deconvolution
Reweighting strategies have been widely used to diminish the influence of outliers in inverse problems. In a similar fashion, they can be used to design the regularization term that must be incorporated to solve an inverse problem successfully. Zero-order quadratic regularization, or damped least squares (pre-whitening) is a common procedure used to regularize the deconvolution problem. This procedure entails the definition of a constant damping term which is used to control the roughness of the deconvolved trace. In this paper I examine two different regularization criteria that lead to an algorithm where the damping term is adapted to successfully retrieve a broad-band reflectivity.
Synthetic and field data examples are used to illustrate the ability of the algorithm to deconvolve seismic traces.
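The adaptive-damping idea can be sketched with iteratively reweighted least squares. The Cauchy-type weights, wavelet, sizes and damping constants below are illustrative assumptions, not the paper's specific criteria: each reflectivity sample receives its own damping, updated from the current estimate, so strong spikes stay broadband while near-zero samples are damped toward zero, unlike a single constant pre-whitening term.

```python
import numpy as np

def irls_deconv(d, wavelet, n_iter=10, eps=1e-3):
    """Sparse-spike deconvolution with per-sample adaptive damping
    (Cauchy-type reweighting) instead of constant pre-whitening."""
    n = len(d)
    W = np.zeros((n, n))                 # causal convolution matrix
    for i, w in enumerate(wavelet):
        W += w * np.eye(n, k=-i)
    # constant-damping (pre-whitened) starting estimate
    r = np.linalg.solve(W.T @ W + 1e-2 * np.eye(n), W.T @ d)
    for _ in range(n_iter):
        # small |r_i| -> large damping -> pushed toward zero (sparsity)
        Q = np.diag(1.0 / (r**2 + eps))
        r = np.linalg.solve(W.T @ W + 1e-3 * Q, W.T @ d)
    return r

# synthetic trace: two spikes convolved with a short wavelet, plus noise
rng = np.random.default_rng(1)
n = 100
r_true = np.zeros(n); r_true[20] = 1.0; r_true[60] = -0.7
wavelet = np.array([1.0, -0.5, 0.2])
Wmat = np.zeros((n, n))
for i, w in enumerate(wavelet):
    Wmat += w * np.eye(n, k=-i)
d = Wmat @ r_true + 0.005 * rng.standard_normal(n)
r_est = irls_deconv(d, wavelet)
```

The reweighting leaves the two spikes essentially untouched while suppressing the noise-level samples, which is the broad-band reflectivity behaviour the abstract targets.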

12.
We propose a two-step inversion of three-component seismograms that (1) recovers the far-field source time function at each station and (2) estimates the distribution of co-seismic slip on the fault plane for small earthquakes (magnitude 3 to 4). The empirical Green's function (EGF) method consists of finding a small earthquake located near the one we wish to study and then performing a deconvolution to remove the path, site, and instrumental effects from the main-event signal.
The deconvolution between the two earthquakes is an unstable procedure: we have therefore developed a simulated annealing technique to recover a stable and positive source time function (STF) in the time domain at each station with an estimation of uncertainties. Given a good azimuthal coverage, we can obtain information on the directivity effect as well as on the rupture process. We propose an inversion method by simulated annealing using the STF to recover the distribution of slip on the fault plane with a constant rupture-velocity model. This method permits estimation of physical quantities on the fault plane, as well as possible identification of the real fault plane.
We apply this two-step procedure to an event of magnitude 3 recorded in the Gulf of Corinth in August 1991. A nearby event of magnitude 2 provides empirical Green's functions for each station. We estimate an active fault area of 0.02 to 0.15 km² and deduce a stress-drop value of 1 to 30 bar and an average slip of 0.1 to 1.6 cm. The fault plane selected for the main event is in good agreement with the existence of a detachment surface inferred from the tectonics of this half-graben.
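The stress-drop range quoted above is consistent with standard circular-crack scaling. As a quick check, assuming a crustal rigidity of 3e10 Pa (an assumption, not a value given in the abstract): the moment is M0 = mu * A * D, and the circular-crack stress drop is 7*M0 / (16*a^3) with crack radius a = sqrt(A/pi). The upper-bound values A = 0.15 km² and D = 1.6 cm reproduce roughly 30 bar.

```python
import math

MU = 3.0e10  # Pa, assumed crustal rigidity

def stress_drop_bar(area_km2, slip_cm):
    """Circular-crack stress drop from fault area and average slip:
    M0 = mu*A*D, dsigma = 7*M0/(16*a^3), a = sqrt(A/pi)."""
    A = area_km2 * 1e6            # km^2 -> m^2
    D = slip_cm * 1e-2            # cm -> m
    M0 = MU * A * D               # seismic moment, N*m
    a = math.sqrt(A / math.pi)    # crack radius, m
    return 7.0 * M0 / (16.0 * a**3) / 1e5   # Pa -> bar

upper = stress_drop_bar(0.15, 1.6)   # upper-bound area and slip
```

The lower-bound pair (0.02 km², 0.1 cm) gives a few bar, the same order as the quoted 1 bar.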

13.
A set of coordinate transformations is used to linearize a general geophysical inverse problem. Statistical and analytic techniques are employed to estimate the parameters of such linearization transformations. In the transformed space, techniques from linear inverse theory may be utilized. Consequently, important concepts, such as model parameter covariance, model parameter resolution and averaging kernels, may be carried over to non-linear inverse problems. I apply the approach to a set of seismic cross-borehole traveltimes gathered at the Conoco Borehole Test Facility. The seismic survey was conducted within the Fort Riley formation, a limestone with thin interbedded shales. Between the boreholes, the velocity structure of the Fort Riley formation consists of a high-velocity region overlying a section of lower velocity. It is found that model parameter resolution is poorest and spatial averaging lengths are greatest in the underlying low-velocity region.
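A toy illustration of the linearization idea (a hypothetical power-law forward relation, not the transformations estimated in the paper): taking logarithms of both the model and data coordinates makes the relation exactly linear, after which the linear-inverse machinery of covariance and resolution applies in the transformed space.

```python
import numpy as np

# Hypothetical non-linear forward relation t = c * v**p between a datum t
# and a model parameter v; in (log v, log t) coordinates it is linear.
rng = np.random.default_rng(2)
v = rng.uniform(3000.0, 5500.0, 50)                          # m/s
t = 2.5 * v**-1.0 * np.exp(0.01 * rng.standard_normal(50))   # multiplicative noise

# linear least squares in the transformed coordinates
X = np.column_stack([np.ones_like(v), np.log(v)])
y = np.log(t)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
log_c, p = coef
```

Because the transformed problem is linear, the parameter covariance is simply (X^T X)^(-1) scaled by the noise variance, which is the kind of quantity the abstract carries over to the non-linear setting.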

14.
15.
The frequency-domain version of waveform tomography enables the use of distinct frequency components to reconstruct the subsurface velocity field adequately, thereby dramatically reducing the input data quantity required for the inversion process. This makes waveform tomography a computationally tractable problem for production use, but its applicability to real seismic data, particularly at the petroleum exploration and development scale, needs to be examined. As real data are often band-limited and missing low frequencies, a good starting model is necessary for waveform tomography, to fill the gap of low frequencies before inversion of the available frequencies. In the inversion stage, a group of frequencies should be used simultaneously at each iteration, to suppress the effect of data noise in the frequency domain. Meanwhile, a smoothness constraint must be imposed on the model in the inversion, to cope with the effects of data noise, the non-linearity of the problem, and the strong sensitivity to short-wavelength model variations. In this paper we use frequency-domain waveform tomography to provide quantitative velocity images of a crosshole target between boreholes 300 m apart. Owing to the complexity of the local geology, the velocity variations were extreme (between 3000 and 5500 m s−1), making the inversion problem highly non-linear. Nevertheless, the waveform tomography results correlate well with borehole logs, and provide realistic geological information that can be tracked between the boreholes with confidence.

16.
17.
18.
19.
20.
When discussing error estimates of the point-source mechanism and the source time function obtained by the two-step procedure of Šílený, Panza & Campus (1992), the authors insist that in the first step (inversion of seismograms, after Sipkin 1982, to obtain the moment tensor rate functions, MTRFs) a homogeneous variance for all the data is needed to keep the advantageous symmetry of the normal equations. We show that this is too strong a requirement and can be dropped.
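The point is easy to verify numerically with a toy weighted least-squares problem (sizes and variance ranges assumed): with a diagonal data covariance of heterogeneous variances, the normal-equation matrix G^T C_d^{-1} G remains symmetric (and positive definite for full-rank G), so nothing in the least-squares machinery requires a homogeneous variance.

```python
import numpy as np

rng = np.random.default_rng(4)
G = rng.standard_normal((30, 5))
m_true = rng.standard_normal(5)
sigma = rng.uniform(0.5, 2.0, 30)        # heterogeneous per-datum std devs
d = G @ m_true + sigma * rng.standard_normal(30)

Cinv = np.diag(1.0 / sigma**2)           # inverse data covariance
A = G.T @ Cinv @ G                       # weighted normal-equation matrix
m_est = np.linalg.solve(A, G.T @ Cinv @ d)
```

Symmetry holds because (G^T C_d^{-1} G)^T = G^T (C_d^{-1})^T G and C_d^{-1} is itself symmetric, regardless of whether its diagonal entries are equal.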


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号