1.
Reverse‐time migration gives high‐quality, complete images by using full‐wave extrapolations. It is thus not subject to important limitations of other migrations that are based on high‐frequency or one‐way approximations. The cross‐correlation imaging condition in two‐dimensional pre‐stack reverse‐time migration of common‐source data explicitly sums the product of the (forward‐propagating) source and (backward‐propagating) receiver wavefields over all image times. The primary contribution at any image point travels a minimum‐time path that has only one (specular) reflection, and it usually corresponds to a local maximum amplitude. All other contributions at the same image point are various types of multipaths, including prismatic multi‐arrivals, free‐surface and internal multiples, converted waves, and all crosstalk noise, which are imaged at later times, and potentially create migration artefacts. A solution that facilitates inclusion of correctly imaged, non‐primary arrivals and removal of the related artefacts is to save the depth versus incident angle slice at each image time (rather than automatically summing them). This results in a three‐parameter (incident angle, depth, and image time) common‐image volume that integrates, into a single unified representation, attributes that were previously computed by separate processes. The volume can be post‐processed by selecting any desired combination of primary and/or multipath data before stacking over image time. Separate images (with or without artefacts) and various projections can then be produced without having to remigrate the data, providing an efficient tool for optimization of migration images. A numerical example for a simple model shows how primary and prismatic multipath contributions merge into a single incident angle versus image time trajectory. 
A second example, using synthetic data from the Sigsbee2 model, shows that the contributions to subsalt images of primary and multipath (in this case, turning wave) reflections are different. The primary reflections contain most of the information in regions away from the salt, but both primary and multipath data contribute in the subsalt region.
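The zero‐lag cross‐correlation imaging condition described above can be sketched in a few lines of NumPy. This is an illustrative fragment, not the authors' implementation: the source and receiver wavefields are assumed to be precomputed arrays, and the hypothetical `keep_time_axis` flag mimics the idea of retaining the image‐time axis instead of summing over it.

```python
import numpy as np

def crosscorr_image(src_wf, rec_wf, keep_time_axis=False):
    """Zero-lag cross-correlation imaging condition.

    src_wf, rec_wf : arrays of shape (nt, nz, nx) holding the
    forward-propagated source and backward-propagated receiver
    wavefields (assumed precomputed by some extrapolation engine).
    With keep_time_axis=True the per-image-time contributions are
    returned instead of being summed, mimicking the idea of keeping
    the image-time axis for later selective stacking.
    """
    contrib = src_wf * rec_wf          # pointwise product per time step
    if keep_time_axis:
        return contrib                 # shape (nt, nz, nx)
    return contrib.sum(axis=0)         # stacked image, shape (nz, nx)

# tiny synthetic example: an image point lights up where the two
# wavefields overlap at the same place and time
nt, nz, nx = 8, 4, 5
s = np.zeros((nt, nz, nx))
r = np.zeros((nt, nz, nx))
s[3, 2, 2] = 1.0                       # source field passes (2, 2) at t = 3
r[3, 2, 2] = 2.0                       # receiver field passes the same point
image = crosscorr_image(s, r)
```

Summing the retained volume over its first axis recovers the conventional stacked image, which is what makes post‐stack selection of contributions possible without remigration.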

2.
We develop a two‐dimensional full waveform inversion approach for the simultaneous determination of S‐wave velocity and density models from SH‐ and Love‐wave data. We illustrate the advantages of the SH/Love full waveform inversion with a simple synthetic example and demonstrate the method's applicability to a near‐surface dataset, recorded in the village Čachtice in Northwestern Slovakia. The goal of the survey was to map remains of historical building foundations in a highly heterogeneous subsurface. The seismic survey comprises two parallel SH‐profiles with maximum offsets of 24 m and covers a frequency range from 5 Hz to 80 Hz with high signal‐to‐noise ratio well suited for full waveform inversion. Using the Wiechert–Herglotz method, we determined a one‐dimensional gradient velocity model as a starting model for full waveform inversion. The two‐dimensional waveform inversion approach uses the global correlation norm as objective function in combination with a sequential inversion of low‐pass filtered field data. This mitigates the non‐linearity of the multi‐parameter inverse problem. Test computations show that the influence of visco‐elastic effects on the waveform inversion result is rather small. Further tests using a mono‐parameter shear modulus inversion reveal that the inversion of the density model has no significant impact on the final data fit. The final full waveform inversion S‐wave velocity and density models show a prominent low‐velocity weathering layer. Below this layer, the subsurface is highly heterogeneous. Minimum anomaly sizes correspond to approximately half of the dominant Love‐wavelength. The results demonstrate the ability of two‐dimensional SH waveform inversion to image shallow small‐scale soil structure. However, they do not show any evidence of foundation walls.
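The global correlation norm used as the objective function can be illustrated with a small sketch. This is the generic trace‐normalised cross‐correlation misfit in its usual form, not the authors' code; `global_correlation_misfit` is a hypothetical helper name.

```python
import numpy as np

def global_correlation_misfit(d_obs, d_syn, eps=1e-12):
    """Global correlation norm between observed and synthetic data.

    d_obs, d_syn : arrays of shape (ntraces, nt).  Each trace is
    normalised to unit energy, so the misfit is insensitive to
    amplitude scaling; it ranges from -1 (perfect fit) to +1.
    """
    on = d_obs / (np.linalg.norm(d_obs, axis=1, keepdims=True) + eps)
    sn = d_syn / (np.linalg.norm(d_syn, axis=1, keepdims=True) + eps)
    return -np.mean(np.sum(on * sn, axis=1))

t = np.linspace(0.0, 1.0, 200)
trace = np.sin(2 * np.pi * 5 * t)
# scaling the synthetic trace does not change the misfit
perfect = global_correlation_misfit(trace[None, :], 3.0 * trace[None, :])
# an orthogonal (90-degree phase-shifted) trace gives near-zero correlation
poor = global_correlation_misfit(trace[None, :], np.cos(2 * np.pi * 5 * t)[None, :])
```

The amplitude insensitivity is exactly what makes this norm attractive when absolute field amplitudes are unreliable, as is typical for near‐surface data.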

3.
This study presents single‐objective and multi‐objective particle swarm optimization (PSO) algorithms for automatic calibration of the Hydrologic Engineering Center's Hydrologic Modeling System (HEC‐HMS) rainfall‐runoff model of Tamar Sub‐basin of Gorganroud River Basin in northern Iran. Three flood events were used for calibration and one for verification. Four performance criteria (objective functions) were considered in multi‐objective calibration where different combinations of objective functions were examined. For comparison purposes, a fuzzy set‐based approach was used to determine the best compromise solutions from the Pareto fronts obtained by multi‐objective PSO. The candidate parameter sets determined from different single‐objective and multi‐objective calibration scenarios were tested against the fourth event in the verification stage, where the initial abstraction parameters were recalibrated. A step‐by‐step screening procedure was used in this stage while evaluating and comparing the candidate parameter sets, which resulted in a few promising sets that performed well with respect to at least three of four performance criteria. The promising sets were all from the multi‐objective calibration scenarios, which revealed that multi‐objective calibration outperforms single‐objective calibration. However, the results indicated that increasing the number of objective functions did not necessarily lead to a better performance, as the results of bi‐objective function calibration with a proper combination of objective functions performed as satisfactorily as those of triple‐objective function calibration. This is important because handling multi‐objective optimization with an increased number of objective functions is challenging especially from a computational point of view. Copyright © 2012 John Wiley & Sons, Ltd.
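As an illustration of the optimiser itself (not of the HEC‐HMS calibration workflow), a minimal single‐objective particle swarm optimiser can be written as follows; all names and parameter values are illustrative defaults, and a real calibration would replace the toy objective with the rainfall‐runoff model's performance criterion.

```python
import numpy as np

def pso_minimise(f, bounds, n_particles=30, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal single-objective particle swarm optimiser.

    bounds : (lo, hi) arrays defining the search box; f maps an
    (ndim,) array to a scalar.  Standard velocity update with
    inertia w and cognitive/social weights c1, c2.
    """
    rng = np.random.default_rng(seed)
    lo, hi = (np.asarray(b, float) for b in bounds)
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, lo.size))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)          # keep particles inside the box
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# toy "calibration": recover two parameters of a quadratic response
best, best_f = pso_minimise(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
                            (np.array([-5.0, -5.0]), np.array([5.0, 5.0])))
```

The multi‐objective variant used in the study additionally maintains an archive of non‐dominated solutions, which this single‐objective sketch omits.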

4.
Radio‐frequency electromagnetic tomography (or radio imaging method) employs radio‐frequency (typically 0.1–10 MHz) electromagnetic wave propagation to delineate the distribution of electric properties between two boreholes. Currently, the straight‐ray imaging method is the primary imaging method for the radio imaging method data acquired for mineral exploration. We carried out synthetic studies using three‐dimensional finite‐element modelling implemented in COMSOL Multiphysics to study the electromagnetic field characteristics and to assess the capability of the straight‐ray imaging method using amplitude and phase data separately. We studied four sets of experiments with models of interest in the mining setting. In the first two experiments, we studied models with perfect conductors in homogeneous backgrounds, which show that the characteristics of the electromagnetic fields depend mainly on the wavelength. When the borehole separations are less than one wavelength, induction effects occur; conductors with simple geometries can be recovered acceptably with amplitude data but are incorrectly imaged on the phase tomogram. When the borehole separations are longer than two wavelengths, radiation effects play a major role. In this case, phase tomography provides images with acceptable quality, whereas amplitude tomography does not provide satisfactory results. The third experiment shows that imaging with both original and reciprocal datasets is somewhat helpful in improving the imaging quality by reducing the impact of noise. In the last experiment, we studied models with conductive zones extended into the borehole plane with different lengths, which were not accurately recovered with amplitude tomography. The experiment implies that it is difficult to determine the extent of a mineralised zone that has been intersected by one of the boreholes. 
Due to the large variation of the wavelength in the radio‐frequency range, we suggest investigating the local electric properties to select an operating frequency prior to a survey. We conclude that straight‐ray tomography with either amplitude or phase data cannot provide high‐quality imaging results. We suggest using more general methods based on full electromagnetic modelling to interpret the data. In circumstances when computational time is critical, we suggest saving time by using either induction methods for borehole separations less than one wavelength or wave‐based methods (only radiation fields are considered) for borehole separations larger than two wavelengths.
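The suggested frequency selection can be sketched by computing the plane‐wave wavelength and skin depth from assumed local electric properties. The formula is the standard complex wavenumber for a lossy medium; the hypothetical `regime` helper simply encodes the one‐/two‐wavelength rules of thumb quoted above.

```python
import numpy as np

MU0 = 4e-7 * np.pi               # vacuum permeability (H/m)
EPS0 = 8.854187817e-12           # vacuum permittivity (F/m)

def em_wavelength(freq, sigma, eps_r=1.0, mu_r=1.0):
    """Plane-wave wavelength and skin depth in a lossy medium.

    Complex wavenumber k = w * sqrt(mu * (eps - 1j * sigma / w));
    wavelength = 2*pi / Re(k), skin depth = -1 / Im(k).
    """
    w = 2 * np.pi * freq
    k = w * np.sqrt(mu_r * MU0 * (eps_r * EPS0 - 1j * sigma / w))
    return 2 * np.pi / k.real, -1.0 / k.imag

def regime(borehole_sep, wavelength):
    """Crude classification from the abstract: induction effects below
    one wavelength, radiation effects above two wavelengths."""
    if borehole_sep < wavelength:
        return "induction"
    if borehole_sep > 2 * wavelength:
        return "radiation"
    return "transition"

# e.g. 1 MHz in a moderately conductive host rock
lam, delta = em_wavelength(freq=1e6, sigma=0.01, eps_r=10.0)
```

In the good‐conductor limit the wavelength approaches 2π times the skin depth, which is why conductive ground shortens the wavelength so drastically compared with free space.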

5.
We compare the performances of four stochastic optimisation methods using four analytic objective functions and two highly non‐linear geophysical optimisation problems: one‐dimensional elastic full‐waveform inversion and residual static computation. The four methods we consider, namely, adaptive simulated annealing, genetic algorithm, neighbourhood algorithm, and particle swarm optimisation, are frequently employed for solving geophysical inverse problems. Because geophysical optimisations typically involve many unknown model parameters, we are particularly interested in comparing the performances of these stochastic methods as the number of unknown parameters increases. The four analytic functions we choose simulate common types of objective functions encountered in solving geophysical optimisations: a convex function, two multi‐minima functions that differ in the distribution of minima, and a nearly flat function. Similar to the analytic tests, the two seismic optimisation problems we analyse are characterised by very different objective functions. The first problem is a one‐dimensional elastic full‐waveform inversion, which is strongly ill‐conditioned and exhibits a nearly flat objective function, with a valley of minima extended along the density direction. The second problem is the residual static computation, which is characterised by a multi‐minima objective function produced by the so‐called cycle‐skipping phenomenon. According to the tests on the analytic functions and on the seismic data, genetic algorithm generally displays the best scaling with the number of parameters. It encounters problems only in the case of irregular distribution of minima, that is, when the global minimum is at the border of the search space and a number of important local minima are distant from the global minimum. 
The adaptive simulated annealing method is often the best‐performing method for low‐dimensional model spaces, but its performance worsens as the number of unknowns increases. The particle swarm optimisation is effective in finding the global minimum in the case of low‐dimensional model spaces with few local minima or in the case of a narrow flat valley. Finally, the neighbourhood algorithm method is competitive with the other methods only for low‐dimensional model spaces; its performance worsens noticeably in the case of multi‐minima objective functions.

6.
In this paper, we discuss the effects of anomalous out‐of‐plane bodies in two‐dimensional (2D) borehole‐to‐surface electrical resistivity tomography with numerical resistivity modelling and synthetic inversion tests. The results of the two groups of synthetic resistivity model tests illustrate that anomalous bodies out of the plane of interest have an effect on two‐dimensional inversion and that the degree of influence of an out‐of‐plane body on the inverted images varies. These differences arise in two cases: in one, different resistivity models share the same electrode array; in the other, the same resistivity model is measured with different electrode arrays. Qualitative interpretation based on the inversion tests shows that we cannot find a reasonable electrode array to determine the best inverse solution and reveal the subsurface resistivity distribution for all types of geoelectrical models. Because of the three‐dimensional effect arising from neighbouring anomalous bodies, the qualitative interpretation of inverted images from the two‐dimensional inversion of electrical resistivity tomography data without prior information can be misleading. Two‐dimensional inversion with drilling data can decrease the three‐dimensional effect. We employed two‐ and three‐dimensional borehole‐to‐surface electrical resistivity tomography methods with a pole–pole array and a bipole–bipole array for mineral exploration at Abag Banner and Hexigten Banner in Inner Mongolia, China. Different inverse schemes were carried out for different cases. The subsurface resistivity distribution obtained from the two‐dimensional inversion of the field electrical resistivity tomography data with sufficient prior information, such as drilling data and other non‐electrical data, can better describe the actual geological situation. 
When there is not enough prior information to carry out constrained two‐dimensional inversion, a three‐dimensional electrical resistivity tomography survey is the better choice.

7.
Three‐dimensional receiver ghost attenuation (deghosting) of dual‐sensor towed‐streamer data is straightforward, in principle. In its simplest form, it requires applying a three‐dimensional frequency–wavenumber filter to the vertical component of the particle motion data to correct for the amplitude reduction on the vertical component of non‐normal incidence plane waves before combining with the pressure data. More elaborate techniques apply three‐dimensional filters to both components before summation, for example, for ghost wavelet dephasing and mitigation of noise of different strengths on the individual components in optimum deghosting. The problem with all these techniques is, of course, that it is usually impossible to transform the data into the crossline wavenumber domain because of aliasing. Hence, usually, a two‐dimensional version of deghosting is applied to the data in the frequency–inline wavenumber domain. We investigate going down the “dimensionality ladder” one more step to a one‐dimensional weighted summation of the records of the collocated sensors to create an approximate deghosting procedure. We specifically consider amplitude‐balancing weights computed via a standard automatic gain control before summation, reminiscent of a diversity stack of the dual‐sensor recordings. This technique is independent of the actual streamer depth and insensitive to variations in the sea‐surface reflection coefficient. The automatic gain control weights serve two purposes: (i) to approximately correct for the geometric amplitude loss of the Z data and (ii) to mitigate noise strength variations on the two components. Here, Z denotes the vertical component of the velocity of particle motion scaled by the seismic impedance of the near‐sensor water volume. The weights are time‐varying and can also be made frequency‐band dependent, adapting better to frequency variations of the noise. 
The investigated process is a very robust, almost fully hands‐off, approximate three‐dimensional deghosting step for dual‐sensor data, requiring no spatial filtering and no explicit estimates of noise power. We argue that this technique performs well in terms of ghost attenuation (albeit, not exact ghost removal) and balancing the signal‐to‐noise ratio in the output data. For instances where full three‐dimensional receiver deghosting is the final product, the proposed technique is appropriate for efficient quality control of the data acquired and in aiding the parameterisation of the subsequent deghosting processing.
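The AGC‐weighted summation idea can be sketched as follows. This is a schematic version assuming each component is simply divided by its own running RMS before summation; the actual processing (time‐varying, frequency‐band‐dependent weights) is more elaborate, and the helper names are illustrative.

```python
import numpy as np

def running_rms(x, win):
    """Running RMS over a sliding window: a simple AGC gain estimate."""
    pad = win // 2
    xp = np.pad(np.asarray(x, float) ** 2, pad, mode="edge")
    kernel = np.ones(win) / win
    return np.sqrt(np.convolve(xp, kernel, mode="same")[pad:pad + len(x)])

def agc_sum(p, z, win=51, eps=1e-12):
    """Approximate dual-sensor combination: each component is balanced
    by its own running RMS (an AGC gain) before summation, so the
    result is insensitive to a slowly varying amplitude imbalance
    between the pressure and scaled particle-velocity sensors."""
    return p / (running_rms(p, win) + eps) + z / (running_rms(z, win) + eps)

t = np.linspace(0.0, 1.0, 500)
p = np.sin(2 * np.pi * 10 * t)
balanced = agc_sum(p, p)
imbalanced = agc_sum(p, 3.0 * p)   # 3x amplitude imbalance on Z
```

Because each trace is gained to unit running amplitude before the sum, a static scaling of one component leaves the output essentially unchanged, which is the amplitude‐balancing property the abstract relies on.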

8.
With the availability of spatially distributed data, distributed hydrologic models are increasingly used for simulation of spatially varied hydrologic processes to understand and manage natural and human activities that affect watershed systems. Multi‐objective optimization methods have been applied to calibrate distributed hydrologic models using observed data from multiple sites. As the time consumed by running these complex models is increasing substantially, selecting efficient and effective multi‐objective optimization algorithms is becoming a nontrivial issue. In this study, we evaluated a multi‐algorithm, genetically adaptive multi‐objective method (AMALGAM) for multi‐site calibration of a distributed hydrologic model—Soil and Water Assessment Tool (SWAT), and compared its performance with two widely used evolutionary multi‐objective optimization (EMO) algorithms (i.e. Strength Pareto Evolutionary Algorithm 2 (SPEA2) and Non‐dominated Sorted Genetic Algorithm II (NSGA‐II)). In order to provide insights into each method's overall performance, these three methods were tested in four watersheds with various characteristics. The test results indicate that AMALGAM can consistently provide competitive or superior results compared with the other two methods. The multi‐method search framework of AMALGAM, which can flexibly and adaptively utilize multiple optimization algorithms, makes it a promising tool for multi‐site calibration of the distributed SWAT. For practical use of AMALGAM, it is suggested to implement this method in multiple trials with a relatively small number of model runs rather than running it once with many iterations. In addition, incorporating different multi‐objective optimization algorithms and multi‐mode search operators into AMALGAM deserves further research. Copyright © 2009 John Wiley & Sons, Ltd.

9.
In this study, NSGA‐II is applied to multireservoir system optimization. Here, a four‐dimensional multireservoir system in the Han River basin was formulated. Two objective functions and three cases having different constraint conditions are used to achieve nondominated solutions. NSGA‐II effectively determines these solutions without being subject to any user‐defined penalty function, as it is applied to a multireservoir system optimization having a number of constraints (here, 246), multiple objectives, and infeasible initial solutions. Most research on multi‐objective genetic algorithms only reveals the trade‐off present in the objective function space, and thus the decision maker must reanalyse this trade‐off relationship in order to obtain information on the decision variables. In contrast, this study suggests a method for identifying the best solutions among the nondominated ones by analysing the relation between objective function values and decision variables. Our conclusions demonstrate that NSGA‐II performs well in multireservoir system optimization with multiple objectives. Copyright © 2005 John Wiley & Sons, Ltd.
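The core building block of NSGA‐II is non‐dominated sorting. A minimal first‐front extraction for a minimisation problem can be sketched as follows (illustrative code, not the study's implementation):

```python
def dominates(a, b):
    """True if objective vector a dominates b (minimisation): a is no
    worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Indices of the non-dominated (first-front) solutions."""
    return [i for i, p in enumerate(points)
            if not any(dominates(q, p)
                       for j, q in enumerate(points) if j != i)]

# five candidate solutions with two objectives to minimise,
# e.g. (flood-control deficit, water-supply deficit)
objs = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 4.0), (5.0, 5.0)]
front = pareto_front(objs)
```

NSGA‐II repeats this sorting over successive fronts and adds crowding‐distance selection; the sketch only shows the dominance test that underpins the trade‐off surfaces discussed above.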

10.
Three‐dimensional seismic survey design should provide an acquisition geometry that enables imaging and amplitude‐versus‐offset applications of target reflectors with sufficient data quality under given economical and operational constraints. However, in land or shallow‐water environments, surface waves are often dominant in the seismic data. The effectiveness of surface‐wave separation or attenuation significantly affects the quality of the final result. Therefore, the need for surface‐wave attenuation imposes additional constraints on the acquisition geometry. Recently, we have proposed a method for surface‐wave attenuation that can better deal with aliased seismic data than classic methods such as slowness/velocity‐based filtering. Here, we investigate how surface‐wave attenuation affects the selection of survey parameters and the resulting data quality. To quantify the latter, we introduce a measure that represents the estimated signal‐to‐noise ratio between the desired subsurface signal and the surface waves that are deemed to be noise. In a case study, we applied surface‐wave attenuation and signal‐to‐noise ratio estimation to several data sets with different survey parameters. The spatial sampling intervals of the basic subset are the survey parameters that affect the performance of surface‐wave attenuation methods the most. Finer spatial sampling will reduce aliasing and make surface‐wave attenuation easier, resulting in better data quality until no further improvement is obtained. We observed this behaviour as a main trend that levels off at increasingly denser sampling. With our method, this trend curve lies at a considerably higher signal‐to‐noise ratio than with a classic filtering method. This means that we can obtain a much better data quality for a given survey effort, or the same data quality as with a conventional method at a lower cost.
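A signal‐to‐noise measure of the kind described, comparing the power of the desired subsurface signal with that of the surface waves, might be sketched as follows. The abstract does not specify the authors' estimator, so this is a generic power‐ratio definition with a hypothetical helper name.

```python
import numpy as np

def snr_db(signal, noise, eps=1e-20):
    """Signal-to-noise ratio in dB from the mean power of two windows
    (or estimates) of desired signal and surface-wave noise."""
    ps = np.mean(np.asarray(signal, float) ** 2)
    pn = np.mean(np.asarray(noise, float) ** 2)
    return 10.0 * np.log10((ps + eps) / (pn + eps))

rng = np.random.default_rng(1)
reflection = np.sin(np.linspace(0, 20 * np.pi, 1000))   # proxy for desired signal
ground_roll = 0.1 * rng.standard_normal(1000)           # proxy for residual surface waves
value = snr_db(reflection, ground_roll)
```

In a survey‐design loop one would evaluate such a measure after surface‐wave attenuation for each candidate set of spatial sampling intervals and pick the coarsest sampling at which the trend curve has levelled off.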

11.
Finite‐difference frequency‐domain modelling of seismic wave propagation is attractive for its efficient solution of multisource problems, and this is crucial for full‐waveform inversion and seismic imaging, especially in the three‐dimensional seismic problem. However, implementing the free surface in the finite‐difference method is nontrivial. Based on an average medium method and the limit theorem, we present an adaptive free‐surface expression to describe the behaviour of wavefields at the free surface, and no extra work for the free‐surface boundary condition is needed. Essentially, the proposed free‐surface expression is a modification of density and constitutive relation at the free surface. In comparison with a direct difference approximation of the free‐surface boundary condition, this adaptive free‐surface expression can produce more accurate and stable results for a broad range of Poisson's ratio. In addition, this expression handles lateral variations of Poisson's ratio adaptively and without instability.

12.
We developed a frequency‐domain acoustic‐elastic coupled waveform inversion based on the Gauss‐Newton conjugate gradient method. Despite the use of a high‐performance computer system and a state‐of‐the‐art parallel computation algorithm, it remained computationally prohibitive to calculate the approximate Hessian explicitly for a large‐scale inverse problem. Therefore, we adopted the conjugate gradient least‐squares algorithm, which is frequently used for geophysical inverse problems, to implement the Gauss‐Newton method so that the approximate Hessian is calculated implicitly. Thus, there was no need to store the Hessian matrix. By simultaneously back‐propagating multi‐components consisting of the pressure and displacements, we could efficiently extract information on the subsurface structures. To verify our algorithm, we applied it to synthetic data sets generated from the Marmousi‐2 model and the modified SEG/EAGE salt model. We also extended our algorithm to the ocean‐bottom cable environment and verified it using ocean‐bottom cable data generated from the Marmousi‐2 model. With the assumption of a hard seafloor, we recovered both the P‐wave velocity of complicated subsurface structures and the S‐wave velocity. Although the inversion of the S‐wave velocity is not feasible for the high Poisson's ratios used to simulate a soft seafloor, several strategies exist to treat this problem. Our example using multi‐component data showed some promise in mitigating the soft seafloor effect. However, this issue still remains open.
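The conjugate gradient least‐squares idea, in which the approximate Hessian J^T J is applied only implicitly through Jacobian‐vector products, can be sketched on a small dense test problem. This is illustrative only; in the inversion described above the products would come from forward and adjoint wavefield modelling rather than from a stored matrix.

```python
import numpy as np

def cgls(jvp, jtvp, r, n, n_iter=50, tol=1e-10):
    """Conjugate-gradient least squares for min ||J dm - r||.

    jvp(x)  : Jacobian-vector product  J @ x
    jtvp(y) : transposed product       J.T @ y
    The approximate Hessian J.T @ J is never formed or stored.
    """
    dm = np.zeros(n)
    s = r - jvp(dm)                # data-space residual
    p = g = jtvp(s)                # gradient and first search direction
    gamma = g @ g
    for _ in range(n_iter):
        q = jvp(p)
        alpha = gamma / (q @ q)
        dm += alpha * p
        s -= alpha * q
        g = jtvp(s)
        gamma_new = g @ g
        if gamma_new < tol:
            break
        p = g + (gamma_new / gamma) * p
        gamma = gamma_new
    return dm

# verification against a dense least-squares solve on a random system
rng = np.random.default_rng(0)
J = rng.standard_normal((20, 6))
r = rng.standard_normal(20)
dm = cgls(lambda x: J @ x, lambda y: J.T @ y, r, 6)
dm_ref = np.linalg.lstsq(J, r, rcond=None)[0]
```

Only two operator applications per iteration are needed, which is what makes the Gauss‐Newton update affordable when each product costs a full modelling run.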

13.
A procedure which involves a non‐linear eigenvalue problem and is based on the substructure method is proposed for the free‐vibration analysis of a soil–structure system. In this procedure, the structure is modelled by the standard finite element method, while the unbounded soil is modelled by the scaled boundary finite element method. The fundamental frequency, and the corresponding radiation damping ratio as well as the modal shape are obtained by using inverse iteration. The free vibration of a dam–foundation system, a hemispherical cavity and a hemispherical deposit are analysed in detail. The numerical results are compared with available results and are also verified by the Fourier transform of the impulsive response calculated in the time domain by the three‐dimensional soil–structure–wave interaction analysis procedure proposed in our previous paper. The fundamental frequency obtained by the present procedure is very close to that obtained by Touhei and Ohmachi, but the damping ratio and the imaginary part of the modal shape are significantly different due to the different definitions of the damping ratio. This study shows that although the classical mode‐superposition method is not applicable to a soil–structure system due to the frequency dependence of the radiation damping, it is still of interest in earthquake engineering to evaluate the fundamental frequency and the corresponding radiation damping ratio of the soil–structure system. Copyright © 2001 John Wiley & Sons, Ltd.
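Inverse iteration itself can be illustrated on a small linear symmetric matrix; the paper applies it to a non‐linear eigenvalue problem, which this sketch does not attempt to reproduce. The shift selects the eigenpair whose eigenvalue lies nearest to it.

```python
import numpy as np

def inverse_iteration(A, shift=0.0, n_iter=50, tol=1e-12, seed=0):
    """Inverse iteration: converges to the eigenpair of A whose
    eigenvalue lies closest to `shift` (A real symmetric here)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    M = A - shift * np.eye(n)
    lam_old = np.inf
    for _ in range(n_iter):
        v = np.linalg.solve(M, v)     # amplify the component nearest the shift
        v /= np.linalg.norm(v)
        lam = v @ A @ v               # Rayleigh-quotient eigenvalue estimate
        if abs(lam - lam_old) < tol:
            break
        lam_old = lam
    return lam, v

# eigenvalues are 1, 4 and 9; a shift of 3.5 picks out the pair at 4
A = np.diag([1.0, 4.0, 9.0])
lam, v = inverse_iteration(A, shift=3.5)
```

In the soil–structure setting the matrix is frequency dependent, so each iteration additionally updates the dynamic stiffness, but the selection mechanism is the same.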

14.
Surface waves in seismic data are often dominant in a land or shallow‐water environment. Separating them from primaries is of great importance either for removing them as noise for reservoir imaging and characterization or for extracting them as signal for near‐surface characterization. However, their complex properties make the surface‐wave separation significantly challenging in seismic processing. To address the challenges, we propose a method of three‐dimensional surface‐wave estimation and separation using an iterative closed‐loop approach. The closed loop contains a relatively simple forward model of surface waves and adaptive subtraction of the forward‐modelled surface waves from the observed surface waves, making it possible to evaluate the residual between them. In this approach, the surface‐wave model is parameterized by the frequency‐dependent slowness and source properties for each surface‐wave mode. The optimal parameters are estimated in such a way that the residual is minimized and, consequently, this approach solves the inverse problem. Through real data examples, we demonstrate that the proposed method successfully estimates the surface waves and separates them out from the seismic data. In addition, it is demonstrated that our method can also be applied to undersampled, irregularly sampled, and blended seismic data.

15.
Wavefield decomposition forms an important ingredient of various geophysical methods. An example of wavefield decomposition is the decomposition into upgoing and downgoing wavefields and simultaneous decomposition into different wave/field types. The multi‐component field decomposition scheme makes use of the recordings of different field quantities (such as particle velocity and pressure). In practice, different recordings can be obscured by different sensor characteristics, requiring calibration with an unknown calibration factor. Not all field quantities required for multi‐component field decomposition might be available, or they can suffer from different noise levels. The multi‐depth‐level decomposition approach makes use of field quantities recorded at multiple depth levels, e.g., two horizontal boreholes closely separated from each other, a combination of a single receiver array combined with free‐surface boundary conditions, or acquisition geometries with a high‐density of vertical boreholes. We theoretically describe the multi‐depth‐level decomposition approach in a unified form, showing that it can be applied to different kinds of fields in dissipative, inhomogeneous, anisotropic media, e.g., acoustic, electromagnetic, elastodynamic, poroelastic, and seismoelectric fields. We express the one‐way fields at one depth level in terms of the observed fields at multiple depth levels, using extrapolation operators that are dependent on the medium parameters between the two depth levels. Lateral invariance at the depth level of decomposition allows us to carry out the multi‐depth‐level decomposition in the horizontal wavenumber–frequency domain. We illustrate the multi‐depth‐level decomposition scheme using two synthetic elastodynamic examples. The first example uses particle velocity recordings at two depth levels, whereas the second example combines recordings at one depth level with the Dirichlet free‐surface boundary condition of zero traction. 
Comparison with multi‐component decomposed fields shows a perfect match in both amplitude and phase for both cases. The multi‐depth‐level decomposition scheme is fully customizable to the desired acquisition geometry. The decomposition problem is in principle an inverse problem. Notches may occur at certain frequencies, causing the multi‐depth‐level composition matrix to become singular and requiring additional notch filters. We can add multi‐depth‐level free‐surface boundary conditions as extra equations to the multi‐component composition matrix, thereby overdetermining this inverse problem. The combined multi‐component–multi‐depth‐level decomposition on a land data set clearly shows improvements in the decomposition results, compared with the performance of the multi‐component decomposition scheme.

16.
Although waveform inversion has been intensively studied in an effort to properly delineate the Earth's structures since the early 1980s, most of the time‐ and frequency‐domain waveform inversion algorithms still have critical limitations in their applications to field data. This may be attributed to the highly non‐linear objective function and the unreliable low‐frequency components. To overcome the weaknesses of conventional waveform inversion algorithms, the acoustic Laplace‐domain waveform inversion has been proposed. The Laplace‐domain waveform inversion has been known to provide a long‐wavelength velocity model even for field data, which may be because it employs the zero‐frequency component of the damped wavefield and a well‐behaved logarithmic objective function. However, its applications have been confined to 2D acoustic media. We extend the Laplace‐domain waveform inversion algorithm to a 2D acoustic‐elastic coupled medium, which is encountered in marine exploration environments. In 2D acoustic‐elastic coupled media, the Laplace‐domain pressures behave differently from those of 2D acoustic media, although the overall features are similar to each other. The main differences are that the pressure wavefields for acoustic‐elastic coupled media show negative values even for simple geological structures unlike in acoustic media, when the Laplace damping constant is small and the water depth is shallow. The negative values may result from more complicated wave propagation in elastic media and at fluid‐solid interfaces. Our Laplace‐domain waveform inversion algorithm is also based on the finite‐element method and logarithmic wavefields. To compute gradient direction, we apply the back‐propagation technique. Under the assumption that density is fixed, P‐ and S‐wave velocity models are inverted from the pressure data. 
We applied our inversion algorithm to the SEG/EAGE salt model, and the numerical results showed that the Laplace‐domain waveform inversion successfully recovers the long‐wavelength structures of the P‐ and S‐wave velocity models from the noise‐free data. The models inverted by the Laplace‐domain waveform inversion could then be successfully used as initial models in the subsequent frequency‐domain waveform inversion, which is performed to describe the short‐wavelength structures of the true models.
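The damped, zero‐frequency wavefield at the heart of the method is simply the Laplace transform of the trace. A sketch with a trapezoidal‐rule transform and the logarithmic residual follows; the helper names are hypothetical and a single 1D trace is assumed, whereas the paper works with finite‐element wavefields.

```python
import numpy as np

def laplace_transform_trace(p, dt, s):
    """Damped zero-frequency component of a trace:
    ptilde(s) = integral of p(t) * exp(-s*t) dt (trapezoidal rule)."""
    t = np.arange(p.size) * dt
    w = p * np.exp(-s * t)
    return dt * (w.sum() - 0.5 * (w[0] + w[-1]))

def log_residual(p_obs, p_syn, dt, s):
    """Logarithmic misfit kernel used in Laplace-domain inversion:
    ln(ptilde_obs / ptilde_syn) for one trace and damping constant s."""
    return np.log(laplace_transform_trace(p_obs, dt, s)
                  / laplace_transform_trace(p_syn, dt, s))

# analytic check: the Laplace transform of exp(-a*t) is 1/(s + a)
dt, n = 0.001, 2000
t = np.arange(n) * dt
trace = np.exp(-t / 0.2)                    # a = 5, so ptilde(10) ~ 1/15
val = laplace_transform_trace(trace, dt, s=10.0)
```

Because the transform is dominated by the earliest, largest arrivals, the logarithmic residual varies smoothly with the model, which is what yields the long‐wavelength velocity updates described above.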

17.
Three‐component borehole magnetics provide important additional information compared to total field or horizontal and vertical measurements. These data can be used for several tasks such as the localization of ferromagnetic objects, the determination of apparent polar wander curves and the computation of the magnetization of rock units. However, the crucial point in three‐component borehole magnetics is the reorientation of the measured data from the tool's frame to the geographic reference frame (North, East, Down). For this purpose, our tool, the Göttinger Borehole Magnetometer, comprises three orthogonally aligned fibre optic gyros along with three fluxgate sensors. With these sensors, the vector of the magnetic field along with the tool rotation can be recorded continuously during the measurement. Using the high‐precision gyro data, we can compute the vector of the magnetic anomaly with respect to the Earth's reference frame. Based on the comparison of several logs measured in the Outokumpu Deep Drill Hole (OKU R2500, Finland), the repeatability of the magnetic field vector is 0.8° in azimuthal direction, 0.08° in inclination and 71 nT in magnitude.
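The reorientation step can be illustrated with a simplified rotation from the tool frame to the North‐East‐Down frame, assuming the gyros deliver three orientation angles; the actual continuous gyro processing is considerably more involved, and the angle convention below is an assumption.

```python
import numpy as np

def tool_to_ned(b_tool, azimuth, inclination, roll):
    """Rotate a magnetic-field vector from the tool frame to the
    geographic North-East-Down frame, given tool orientation angles
    in radians (a simplified z-y-x rotation sequence)."""
    ca, sa = np.cos(azimuth), np.sin(azimuth)
    ci, si = np.cos(inclination), np.sin(inclination)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[ca, -sa, 0.0], [sa, ca, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[ci, 0.0, si], [0.0, 1.0, 0.0], [-si, 0.0, ci]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx @ np.asarray(b_tool, float)

# identity check: zero rotation angles leave the fluxgate vector unchanged
b_ref = [20000.0, 0.0, 45000.0]           # nT, tool frame
b_ned = tool_to_ned(b_ref, 0.0, 0.0, 0.0)
```

A useful sanity check in any such pipeline is that the rotation preserves the field magnitude, since misalignment errors show up as direction errors only.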

18.
Extrapolating wavefields and imaging at each depth during three‐dimensional recursive wave‐equation migration is a time‐consuming endeavour. For efficiency, most commercial techniques extrapolate wavefields through thick slabs and then interpolate the wavefield within each thick slab. In this article, we develop this strategy by combining more efficient interpolators with a Fourier‐transform‐based wavefield extrapolation method. First, we formulate a three‐dimensional first‐order separation‐of‐variables screen propagator for large‐step wavefield extrapolation, which allows wide‐angle propagation in highly contrasting media. This propagator significantly improves the performance of the split‐step Fourier method in dealing with strong lateral heterogeneities, at the cost of only one more fast Fourier transform in each thick slab. We then extend the two‐dimensional Kirchhoff and Born–Kirchhoff local wavefield interpolators to three dimensions for each slab. The three‐dimensional Kirchhoff interpolator is based on the traditional Kirchhoff formula and applies to moderate lateral velocity variations, whereas the three‐dimensional Born–Kirchhoff interpolator is derived from the Lippmann–Schwinger integral equation under the Born approximation and is adapted to strongly laterally varying media. Numerical examples on the three‐dimensional salt model of the Society of Exploration Geophysicists/European Association of Geoscientists and Engineers (SEG/EAGE) demonstrate that three‐dimensional first‐order separation‐of‐variables screen propagator Born–Kirchhoff depth migration, using thick‐slab wavefield extrapolation plus thin‐slab interpolation, tolerates a considerable depth‐step size of up to 72 ms, resulting in an efficiency improvement of nearly 80% without obvious loss of imaging accuracy. Although the proposed three‐dimensional interpolators are presented with one‐way Fourier extrapolation methods, they can be extended to general migration methods.
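As background for the abstract above, one depth step of the plain split-step Fourier method (the baseline that the proposed screen propagator improves on) can be sketched as follows: a wavenumber-domain phase shift with a background velocity, followed by a space-domain phase screen for the slowness perturbation. The naive DFT and all parameter values are for illustration only, not from the paper.

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def split_step_extrapolate(p, dx, dz, omega, c0, c):
    """One depth step: phase shift with background velocity c0 in the
    wavenumber domain, then a space-domain phase screen for the
    slowness perturbation 1/c(x) - 1/c0."""
    n = len(p)
    P = dft(p)
    shifted = []
    for k in range(n):
        kx = 2.0 * math.pi * (k if k <= n // 2 else k - n) / (n * dx)
        kz = cmath.sqrt((omega / c0) ** 2 - kx ** 2)  # imaginary -> evanescent decay
        shifted.append(P[k] * cmath.exp(1j * kz * dz))
    q = idft(shifted)
    return [q[i] * cmath.exp(1j * omega * (1.0 / c[i] - 1.0 / c0) * dz)
            for i in range(n)]

# homogeneous medium: a constant (vertically travelling) field only
# acquires a phase, so its amplitude is preserved
c0 = 1500.0
q = split_step_extrapolate([1.0 + 0.0j] * 8, dx=10.0, dz=5.0,
                           omega=60.0, c0=c0, c=[c0] * 8)
```

The screen propagator of the paper adds a separable correction term on top of this scheme (one extra FFT per thick slab) to handle wide angles in strongly contrasting media.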

19.
Scattered ground roll is a type of noise observed in land seismic data that can be particularly difficult to suppress; typically, it cannot be removed using conventional velocity‐based filters. In this paper, we discuss a model‐driven form of seismic interferometry that allows suppression of scattered ground‐roll noise in land seismic data. The conventional cross‐correlate‐and‐stack interferometry approach yields scattered‐noise estimates between two receiver locations (i.e. as if one of the receivers had been replaced by a source). For noise suppression, this requires that each source from which we wish to attenuate the noise be co‐located with a receiver. The model‐driven form differs in that the use of a simple model in place of one of the inputs for interferometry allows the scattered‐noise estimate to be made between a source and a receiver. This makes the method more flexible, as co‐location of sources and receivers is not required, and it can be applied to data sets with a variety of acquisition geometries. A simple plane‐wave model is used, allowing the method to remain relatively data driven, with weighting factors for the plane waves determined by a least‐squares solution. Using a number of synthetic and real two‐dimensional (2D) and three‐dimensional (3D) land seismic data sets, we show that this model‐driven approach provides effective results, suppressing scattered ground‐roll noise without adversely affecting the underlying signal.
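The conventional cross-correlate-and-stack step that the abstract contrasts with can be sketched directly: correlate the recordings at two receivers for every source and stack over sources. This is a minimal illustration of the baseline only, not the authors' model-driven variant.

```python
def xcorr(a, b):
    """Cross-correlation of two equal-length traces for all lags
    from -(n-1) to n-1."""
    n = len(a)
    return [sum(a[t] * b[t + lag] for t in range(n) if 0 <= t + lag < n)
            for lag in range(-(n - 1), n)]

def interferometric_estimate(recs_a, recs_b):
    """Cross-correlate and stack: correlate the two receivers'
    recordings for every source and sum over sources, estimating
    the inter-receiver response as if receiver A were a source."""
    stack = [0.0] * (2 * len(recs_a[0]) - 1)
    for a, b in zip(recs_a, recs_b):
        stack = [s + c for s, c in zip(stack, xcorr(a, b))]
    return stack

# one source; receiver B records the same spike one sample later,
# so the stacked correlation peaks at lag +1
est = interferometric_estimate([[1.0, 0.0, 0.0]], [[0.0, 1.0, 0.0]])
```

In the noise-suppression workflow, the stacked estimate would be adaptively subtracted from the recorded data; the model-driven variant replaces one of the correlation inputs with a weighted plane-wave model so the estimate can be made between a source and a receiver.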

20.
In tight gas sands, the signal‐to‐noise ratio of nuclear magnetic resonance (NMR) log data is usually low, which limits the application of NMR logs in this type of reservoir. This study uses wavelet‐domain adaptive filtering to denoise NMR log data from tight gas sands. The criteria of maximum correlation coefficient and minimum root‐mean‐square error are used to select the optimal basis function for the wavelet transform. The feasibility and effectiveness of this method are verified by analysing numerical simulation results and core experimental data. Compared with the wavelet‐thresholding denoising method, this adaptive filtering method is more effective at filtering noise, improving both the signal‐to‐noise ratio of the NMR data and the inversion precision of the transverse relaxation time (T2) spectrum. Application of this method to NMR logs shows that it not only improves the accuracy of NMR porosity but also enhances the ability to recognize tight gas sands in NMR logs.
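For reference, the wavelet-thresholding baseline that the adaptive filter is compared against can be sketched with a single-level Haar transform and soft thresholding. The wavelet choice and threshold here are illustrative; the paper selects the basis function by the correlation and root-mean-square-error criteria above.

```python
import math

def haar_level(x):
    """One level of the Haar wavelet transform (even-length input):
    returns (approximation, detail) coefficients."""
    a = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def inverse_haar_level(a, d):
    x = []
    for ai, di in zip(a, d):
        x.append((ai + di) / math.sqrt(2))
        x.append((ai - di) / math.sqrt(2))
    return x

def soft_threshold(coeffs, thr):
    return [math.copysign(max(abs(c) - thr, 0.0), c) for c in coeffs]

def threshold_denoise(x, thr):
    """Baseline wavelet thresholding: shrink the detail coefficients
    and reconstruct, so small (noise-like) details are removed."""
    a, d = haar_level(x)
    return inverse_haar_level(a, soft_threshold(d, thr))
```

An adaptive filter operates on the same wavelet coefficients but updates its weights from the data rather than applying a fixed threshold, which is why it can outperform thresholding at low signal-to-noise ratios.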
