1.
Frequency-domain electromagnetic methods with a grounded-wire source are powerful tools in geophysical exploration. However, the signal may be too weak to guarantee the quality of survey data in complex electromagnetic environments, especially when the receiver is located in the air, as in the newly developed grounded-source airborne frequency-domain electromagnetic method. In this paper, a signal enhancement method with multiple sources is proposed to solve this problem. To evaluate the signal enhancement effect, we compared the signals generated by a single source and by multiple sources with equal electric moment. The signal differences caused by synchronisation error and by the separation distance between source elements were analysed, and methods to achieve the maximum signal were introduced. In addition, we discussed the interaction between adjacent source elements to ensure system safety, including the changes in output current and the safe distance between two sources, using a dual-source model. Lastly, a comprehensive field experiment was designed and conducted to test the multiple-source method. The data processing results are comparable for single and dual sources, and the signal-to-noise ratio of the dual source is higher in the field test. The subsurface resistivity structure at the test site is consistent with that from a previous controlled-source audio-frequency magnetotellurics survey. These results show that signal enhancement with multiple sources is feasible. This study provides guidance for the application of multiple sources in field surveys when the survey environment is complex and harsh.
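The synchronisation-error effect described above can be illustrated with a toy calculation (not the paper's field system): two identical sinusoidal source elements with a timing error dt combine to a relative amplitude of 2·cos(ω·dt/2) compared with a single unit source, so perfect synchronisation doubles the signal while timing errors erode the gain. The frequency and error values below are arbitrary illustrations.

```python
import numpy as np

def dual_source_amplitude(freq_hz, dt_sync):
    """Relative amplitude of two unit sources with sync error dt_sync (s).

    The combined field of two identical sinusoidal sources, one delayed
    by dt_sync, has amplitude 2*|cos(w*dt_sync/2)| relative to a single
    unit-amplitude source.
    """
    w = 2.0 * np.pi * freq_hz
    return 2.0 * abs(np.cos(w * dt_sync / 2.0))

# Perfect synchronisation doubles the signal; a quarter-period timing
# error at 100 Hz (2.5 ms) reduces the gain noticeably.
print(dual_source_amplitude(100.0, 0.0))     # 2.0
print(dual_source_amplitude(100.0, 2.5e-3))  # ~1.414
```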

2.
Planar wave events recorded by a seismic array can be represented as lines in the Fourier domain. However, in the real world, seismic events usually have curvature or amplitude variability, which means that their Fourier transforms are no longer strictly linear but rather occupy conic regions of the Fourier domain that are narrow at low frequencies but broaden at high frequencies, where the effect of curvature becomes more pronounced. One can consider these regions as localised "signal cones". In this work, we consider a space-time variable signal cone to model the seismic data. The variability of the signal cone is obtained through scaling, slanting, and translation of the kernel for cone-limited (C-limited) functions (functions whose Fourier transform lives within a cone) or the C-Gaussian function (a multivariate function whose Fourier transform decays exponentially with respect to slowness and frequency), which constitutes our dictionary. We find a discrete number of scaling, slanting, and translation parameters from a continuum by optimally matching the data. This is a non-linear optimisation problem, which we address by a fixed-point method that utilises a variable projection method with ℓ1 constraints on the linear parameters and bound constraints on the non-linear parameters. We observe that the slow decay and oscillatory behaviour of the kernel for C-limited functions constitute bottlenecks for the optimisation problem, which we partially overcome with the C-Gaussian function. We demonstrate our method through an interpolation example. We present the interpolation result using the estimated parameters obtained from the proposed method and compare it with those obtained using sparsity-promoting curvelet decomposition, matching pursuit Fourier interpolation, and sparsity-promoting plane-wave decomposition methods.

3.
We present a new inversion method to estimate, from prestack seismic data, blocky P- and S-wave velocity and density images and the associated sparse reflectivity levels. The method uses the three-term Aki and Richards approximation to linearise the seismic inversion problem. To this end, we adopt a weighted mixed ℓ2,1-norm that promotes structured forms of sparsity, thus leading to blocky solutions in time. In addition, our algorithm incorporates a covariance or scale matrix to simultaneously constrain P- and S-wave velocities and density. This a priori information is obtained from nearby well-log data. We also include a term containing a low-frequency background model. The mixed ℓ2,1-norm leads to a convex objective function that can be minimised using proximal algorithms. In particular, we use the fast iterative shrinkage-thresholding algorithm (FISTA). A key advantage of this algorithm is that it only requires matrix-vector multiplications and no direct matrix inversion. The latter makes our algorithm numerically stable, easy to apply, and economical in terms of computational cost. Tests on synthetic and field data show that the proposed method, contrary to conventional ℓ2- or ℓ1-norm regularised solutions, is able to provide consistent blocky and/or sparse estimators of P- and S-wave velocities and density from a noisy and limited number of observations.
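The abstract's minimisation engine, the fast iterative shrinkage-thresholding algorithm, relies only on matrix-vector products. Below is a generic FISTA sketch for the simpler ℓ1-regularised least-squares problem; the paper's weighted mixed ℓ2,1-norm, scale matrix, and well-log constraints are not reproduced here, and the test matrix is a random stand-in.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, lam, n_iter=500):
    """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1 (matrix-vector only)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)             # gradient of the data-misfit term
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x

# Recover a sparse vector from noiseless overdetermined data.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[[2, 7, 15]] = [1.5, -2.0, 1.0]
x_est = fista(A, A @ x_true, lam=1e-3, n_iter=2000)
```

Note that, as the abstract stresses, no matrix is ever inverted: each iteration costs two matrix-vector products plus an element-wise shrinkage.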

4.
Passive seismic has recently attracted a great deal of attention because non-artificial sources are used in subsurface imaging. Passive sources are low cost compared with artificial-source exploration. In general, constructing virtual shot gathers by cross-correlation is a preliminary step in passive seismic data processing, which provides the basis for applying conventional seismic processing methods. However, the subsurface structure is not uniformly illuminated by passive sources, so the ray paths of passive seismic data do not satisfy the hyperbolic assumption, and the travel times in the virtual shot gathers are therefore incorrect. In addition, the cross-correlation results are contaminated by incoherent noise, since the passive sources are usually natural. Such noise is kinematically similar to seismic events and challenging to attenuate, which inevitably reduces the accuracy of subsequent processing. Although primary estimation for transient-source seismic data has already been proposed, it is not feasible for noise-source seismic data because of the incoherent noise. To overcome these problems, we propose combining the focal transform and local similarity into a highly integrated operator, which is then added to closed-loop surface-related multiple elimination based on the 3D L1-norm sparse inversion framework. Results prove that the method is capable of reliably estimating noise-free primaries and correcting travel times at far offsets for the aforementioned virtual shot gathers in a simultaneous closed-loop inversion manner.
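The preliminary cross-correlation step can be sketched with a toy example (ignoring the illumination and noise issues the paper addresses): two receivers record the same noise-like passive source with different travel times, and their cross-correlation peaks at the differential travel time, as if one receiver had become a virtual source. The delays and sampling interval below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
nt, dt = 2048, 0.004                 # samples and sampling interval (s)
src = rng.standard_normal(nt)        # noise-like passive source signature
d1, d2 = 10, 35                      # travel times to the two receivers (samples)
rec1 = np.roll(src, d1)
rec2 = np.roll(src, d2)

# Cross-correlate the two receiver recordings; the peak lag equals the
# differential travel time d2 - d1, i.e. the travel time from the
# virtual source (receiver 1) to receiver 2.
xcorr = np.correlate(rec2, rec1, mode="full")
lag = np.argmax(xcorr) - (nt - 1)    # shift peak index to a signed lag
print(lag * dt)                      # ~0.1 s = (d2 - d1) * dt
```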

5.
Scattered ground roll is a type of noise observed in land seismic data that can be particularly difficult to suppress. Typically, this type of noise cannot be removed using conventional velocity-based filters. In this paper, we discuss a model-driven form of seismic interferometry that allows suppression of scattered ground-roll noise in land seismic data. The conventional cross-correlate and stack interferometry approach results in scattered noise estimates between two receiver locations (i.e. as if one of the receivers had been replaced by a source). For noise suppression, this requires that each source we wish to attenuate the noise from is co-located with a receiver. The model-driven form differs, as the use of a simple model in place of one of the inputs for interferometry allows the scattered noise estimate to be made between a source and a receiver. This allows the method to be more flexible, as co-location of sources and receivers is not required, and the method can be applied to data sets with a variety of different acquisition geometries. A simple plane-wave model is used, allowing the method to remain relatively data driven, with weighting factors for the plane waves determined using a least-squares solution. Using a number of both synthetic and real two-dimensional (2D) and three-dimensional (3D) land seismic data sets, we show that this model-driven approach provides effective results, allowing suppression of scattered ground-roll noise without having an adverse effect on the underlying signal.

6.
The cartography of erosion risk is mainly based on the development of models, which evaluate, qualitatively and quantitatively, the physical processes of erosion (CORINE, EHU, INRA). These models are mainly semi-quantitative but can be physically based and spatially distributed (the Pan-European Soil Erosion Risk Assessment, PESERA). They are characterized by their simplicity and their potential applicability at large temporal and spatial scales. In developing our model SCALES (Spatialisation d'éChelle fine de l'ALéa Erosion des Sols/large-scale assessment and mapping model of soil erosion hazard), we had several objectives in mind: (1) to map soil erosion at a regional scale while guaranteeing high accuracy at the local level; (2) to make the model applicable in European oceanic areas; (3) to focus the erosion hazard estimation on source areas (on-site erosion), which are the agricultural parcels; and (4) to take into account the weight of the temporality of agricultural practices (land-use concept). Because of these objectives, the nature of the variables that characterize the erosion factors, and its structure, SCALES differs from other models. Tested in Basse-Normandie (Calvados, 5500 km2), SCALES reveals a strong predisposition of the study area to soil erosion, which would be expected to be expressed in a wet year. Apart from an internal validation, we attempted an intermediate one by comparing our results with those from INRA and PESERA. It appeared that these models underestimate medium erosion levels and differ in the spatial localization of the areas with the highest erosion risks. SCALES underlines here the limitations of using pedo-transfer functions and of interpolating input data with a low resolution. One must not forget, however, that these models are mainly focused on an interregional comparative approach. Therefore the comparison of SCALES data with those of the INRA and PESERA models cannot provide a convincing validation of our model. For the moment the validation is based on the opinion of local experts, who agree with the qualitative indications delivered by our cartography. An external validation of SCALES is foreseen, which will be based on a thorough inventory of erosion signals in areas with different hazard levels. Copyright © 2010 John Wiley & Sons, Ltd.

7.
Nonparametric inverse methods provide a general framework for solving potential-field problems. The use of weighted norms leads to a general regularization problem of Tikhonov form. We present an alternative procedure to estimate the source susceptibility distribution from potential-field measurements, exploiting inversion methods by means of a flexible depth-weighting function in the Tikhonov formulation. Our approach improves on the formulation proposed by Li and Oldenburg (1996, 1998), differing significantly in the definition of the depth-weighting function. In our formalism, the depth-weighting function is associated not with the field decay of a single block (which can be representative of just a part of the source) but with the field decay of the whole source, thus implying that the data inversion is independent of the cell shape. So, in our procedure, the depth-weighting function is not given with a fixed exponent but with the structural index N of the source as the exponent. Unlike previous methods, this choice gives a substantial objectivity to the form of the depth-weighting function and to the consequent solutions. The allowed values for the exponent of the depth-weighting function depend on the range of N for sources: 0 ≤ N ≤ 3 (magnetic case). The analysis of simple sources such as dipoles, dipole lines, dykes or contacts validates our hypothesis. The study of a complex synthetic case also proves that the depth-weighting decay cannot necessarily be assumed equal to 3. Moreover, it should not be kept constant for multi-source models but should instead depend on the structural indices of the different sources. In this way we are able to successfully invert the magnetic data of the Vulture area, Southern Italy. An original aspect of the proposed inversion scheme is that it establishes an explicit link between two widely used types of interpretation methods, namely those assuming homogeneous fields, such as Euler deconvolution or the depth-from-extreme-points transformation, and inversion in the Tikhonov form including a depth-weighting function. The availability of further constraints, from drillings or known geology, will definitely improve the quality of the solution.

8.
Seismic time-lapse surveys are susceptible to repeatability errors due to varying environmental conditions. To mitigate this problem, we propose the use of interferometric least-squares migration to estimate the migration images for the baseline and monitor surveys. Here, a known reflector is used as the reference reflector for interferometric least-squares migration, and the data are approximately redatumed to this reference reflector before imaging. This virtual redatuming mitigates the repeatability errors in the time-lapse migration image. Results with synthetic and field data show that interferometric least-squares migration can sometimes reduce or eliminate artifacts caused by non-repeatability in time-lapse surveys and provide a high-resolution estimate of the time-lapse change in the reservoir.

9.
Topography and severe variations of near-surface layers lead to travel-time perturbations of the events in seismic exploration. These perturbations can usually be estimated and eliminated using refraction methods. The virtual refraction method is a relatively new technique for retrieving refraction information from seismic records contaminated by noise. Building on the virtual refraction, this paper proposes super-virtual refraction interferometry by cross-correlation, which retrieves refraction wavefields by summing the cross-correlation of raw refraction wavefields and virtual refraction wavefields over all receivers located outside the retrieved source-receiver pair. This method enhances the refraction signal progressively as the source-receiver offset decreases. For further enhancement of the refracted waves, a scheme of hybrid virtual refraction wavefields is applied by stacking correlation-type and convolution-type super-virtual refractions. Our new method does not need any information about the near-surface velocity model, solves the problem that virtual refraction energy from a virtual source at the surface cannot be measured directly, and extends the acquisition aperture to its maximum extent in the raw seismic records. It can also effectively reduce the influence of random noise in raw seismic records and, building on the improved signal-to-noise ratio of the virtual refraction, improve the signal-to-noise ratio of the refracted waves by a factor proportional to the square root of the number of receivers positioned at stationary-phase points. Using results from synthetic and field data, we show that our new method is effective in retrieving refraction information from raw seismic records and improves the accuracy of first-arrival picks.
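The sqrt(N) signal-to-noise gain from stacking over receivers at stationary-phase points can be demonstrated with synthetic traces carrying an identical signal plus independent noise; this is a generic stacking demo with arbitrary trace counts and frequencies, not the paper's super-virtual workflow.

```python
import numpy as np

rng = np.random.default_rng(1)
nt, n_traces = 1000, 64
t = np.linspace(0.0, 1.0, nt)
signal = np.sin(2.0 * np.pi * 5.0 * t)          # common coherent arrival
traces = signal + rng.standard_normal((n_traces, nt))  # + independent noise

def snr(trace, ref):
    """Ratio of signal energy to residual-noise energy (amplitude sense)."""
    noise = trace - ref
    return np.linalg.norm(ref) / np.linalg.norm(noise)

single = snr(traces[0], signal)
stacked = snr(traces.mean(axis=0), signal)      # stack over all 64 traces
print(stacked / single)                         # ~sqrt(64) = 8
```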

10.
This paper presents the application of a multimodel method using a wavelet-based Kalman filter (WKF) bank to simultaneously estimate decomposed state variables and unknown parameters for real-time flood forecasting. Applying the Haar wavelet transform alters the state vector and input vector of the state space. In this way, an overall detail plus approximation describes each new state vector and input vector, which allows the WKF to simultaneously estimate and decompose state variables. The wavelet-based multimodel Kalman filter (WMKF) is a multimodel Kalman filter (MKF) in which the Kalman filter has been replaced by a WKF. The WMKF then obtains M estimated state vectors. Next, the M state estimates, each weighted by its probability, which is also determined on-line, are combined to form an optimal estimate. Validations conducted for the Wu-Tu watershed, a small watershed in Taiwan, have demonstrated that the method is effective because of the decomposition of the wavelet transform, the adaptation of the time-varying Kalman filter and the characteristics of the multimodel method. Validation results also reveal that the resulting method enhances the accuracy of the runoff prediction of the rainfall-runoff process in the Wu-Tu watershed. Copyright © 2004 John Wiley & Sons, Ltd.
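A minimal sketch of the Haar analysis/synthesis step that underlies the WKF (the Kalman filtering itself is omitted): one level of the orthonormal Haar transform splits a signal of even length into an approximation and a detail, and the inverse reconstructs it exactly.

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar transform (even-length input).

    Returns (approximation, detail): pairwise sums and differences
    scaled by 1/sqrt(2) so the transform preserves energy.
    """
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_inverse(approx, detail):
    """Exact inverse of haar_step."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

x = np.array([4.0, 6.0, 10.0, 12.0])
approx, detail = haar_step(x)
print(np.allclose(haar_inverse(approx, detail), x))   # True
```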

11.
Most modern seismic imaging methods separate input data into parts (shot gathers). We develop a formulation that is able to incorporate all available data at once while numerically propagating the recorded multidimensional wavefield forward or backward in time. This approach has the potential for generating accurate images free of artefacts associated with conventional approaches. We derive novel high-order partial differential equations in the source-receiver time domain. The fourth-order nature of the extrapolation in time leads to four solutions, two of which correspond to the incoming and outgoing P-waves and reduce to the zero-offset exploding-reflector solutions when the source coincides with the receiver. A challenge for implementing two-way time extrapolation is an essential singularity for horizontally travelling waves. This singularity can be avoided by limiting the range of wavenumbers treated in a spectral-based extrapolation. Using spectral methods based on the low-rank approximation of the propagation symbol, we extrapolate only the desired solutions in an accurate and efficient manner with reduced dispersion artefacts. Applications to synthetic data demonstrate the accuracy of the new prestack modelling and migration approach.

12.
Interferometric redatuming is a data-driven method to transform seismic responses with sources at one level and receivers at a deeper level into virtual reflection data with both sources and receivers at the deeper level. Although this method has traditionally been applied by cross-correlation, accurate redatuming through a heterogeneous overburden requires solving a multidimensional deconvolution problem. Input data can be obtained either by direct observation (for instance in a horizontal borehole), by modelling or by a novel iterative scheme that is currently being developed. The output of interferometric redatuming can be used for imaging below the redatuming level, resulting in a so-called interferometric image. Internal multiples from above the redatuming level are eliminated during this process. In the past, we introduced point-spread functions for interferometric redatuming by cross-correlation. These point-spread functions quantify distortions in the redatumed data, caused by internal multiple reflections in the overburden. In this paper, we define point-spread functions for interferometric imaging to quantify these distortions in the image domain. These point-spread functions are similar to conventional resolution functions for seismic migration but they contain additional information on the internal multiples in the overburden and they are partly data-driven. We show how these point-spread functions can be visualized to diagnose image defocusing and artefacts. Finally, we illustrate how point-spread functions can also be defined for interferometric imaging with passive noise sources in the subsurface or with simultaneous-source acquisition at the surface.

13.
In the field of seismic interferometry, researchers have retrieved surface waves and body waves by cross-correlating recordings of uncorrelated noise sources to extract useful subsurface information. In most applications, the retrieved wavefields are between receivers. When the positions of the noise sources are known, inter-source interferometry can be applied to retrieve the wavefields between sources, thus turning sources into virtual receivers. Previous applications of this form of interferometry assume impulsive point sources or transient sources with similar signatures. We investigate the requirements for applying inter-source seismic interferometry using non-transient noise sources with known positions to retrieve reflection responses at those positions, and show the results using synthetic drilling noise as the source. We show that, if pilot signals (estimates of the drill-bit signals) are not available, the drill-bit signals must be the same, and the phases of the virtual reflections at drill-bit positions can then be retrieved by deconvolution interferometry or by cross-coherence interferometry. Further, for this case, classic interferometry by cross-correlation can be used if the source power spectrum can be estimated. If pilot signals are available, virtual reflection responses can be obtained by first using standard seismic-while-drilling processing techniques, such as pilot cross-correlation and pilot deconvolution, to remove the drill-bit signatures in the data and then applying cross-correlation interferometry. Therefore, provided that the pilot signals are reliable, drill-bit data can be redatumed from the surface to borehole depths using this inter-source interferometry approach without any velocity information of the medium, and we show that a well-positioned image below the borehole can be obtained using interferometrically redatumed reflection responses with just a simple velocity model. We discuss some of the practical hurdles that restrict the application of the proposed method offshore.

14.
This paper presents a methodology to estimate element-by-element demand-to-capacity ratios in instrumented steel moment-resisting frames subject to earthquakes. The methodology combines a finite element model and acceleration measurements at various points throughout the building to estimate the time history of displacements and internal force demands in all members. The estimated demands and their uncertainty are compared with code-based capacity, from which probabilistic bounds on demand-to-capacity ratios are obtained. The proposed methodology is verified using a simulated six-story building and validated using acceleration data from California Strong Motion Instrumentation Program station 24370 during the Northridge and Sierra Madre earthquakes.

15.
We present a 2D inversion scheme for magnetotelluric data, where the conductivity structure is parameterised with different wavelet functions that are collected in a wavelet-based dictionary. The inversion model estimate is regularised in terms of wavelet coefficient sparsity following the compressive sensing approach. However, when the model is expressed on the basis of a single wavelet family only, the geometrical appearance of model features reflects the shape of the wavelet functions. Combining two or more wavelet families in a dictionary provides greater flexibility to represent the model structure, permitting, for example, the simultaneous occurrence of smooth and sharp anomalies within the same model. We show that the application of the sparsity regularisation scheme with wavelet dictionaries provides the user with a number of different model classes that may explain the data to the same extent. For a real data example from the Dead Sea Transform, we show that the use of such a scheme can be beneficial to evaluate the geometries of conductivity anomalies and to understand the effect of regularisation on the model estimate.

16.
Surface waves in seismic data are often dominant in a land or shallow-water environment. Separating them from primaries is of great importance either for removing them as noise for reservoir imaging and characterization or for extracting them as signal for near-surface characterization. However, their complex properties make the surface-wave separation significantly challenging in seismic processing. To address the challenges, we propose a method of three-dimensional surface-wave estimation and separation using an iterative closed-loop approach. The closed loop contains a relatively simple forward model of surface waves and adaptive subtraction of the forward-modelled surface waves from the observed surface waves, making it possible to evaluate the residual between them. In this approach, the surface-wave model is parameterized by the frequency-dependent slowness and source properties for each surface-wave mode. The optimal parameters are estimated in such a way that the residual is minimized and, consequently, this approach solves the inverse problem. Through real data examples, we demonstrate that the proposed method successfully estimates the surface waves and separates them out from the seismic data. In addition, it is demonstrated that our method can also be applied to undersampled, irregularly sampled, and blended seismic data.

17.
Determining the focal mechanism of earthquakes helps us to better define faults and understand the stress regime. This technique can be helpful in the oil and gas industry, where it can be applied to microseismic events. The objective of this paper is to find double-couple focal mechanisms, excluding scalar seismic moments, and the depths of small earthquakes using data from relatively few local stations. This objective is met by generating three-component synthetic seismograms to match the observed normalized velocity seismograms. We first calculate Green's functions for a series of depths, given an initial estimate of the earthquake's hypocentre, the locations of the seismic recording stations and a 1D velocity model of the region. Then, we calculate the moment tensor for different combinations of strike, dip and rake at each depth. These moment tensors are combined with the Green's functions and then convolved with a source time function to produce synthetic seismograms. We use a grid search to find the synthetic seismogram with the largest objective-function value, that is, the one that best fits all three components of the observed velocity seismogram. These parameters define the focal mechanism solution of the earthquake. We tested the method using three earthquakes in Southern California with moment magnitudes of 5.0, 5.1 and 4.4, using the frequency range 0.1–2.0 Hz. The source mechanisms of the events were determined independently using data from a multitude of stations. Our results, obtained from as few as three stations, generally match those obtained by the Southern California Earthquake Data Center. The main advantage of this method is that we use relatively high-frequency full waveforms, including those from short-period instruments, which makes it possible to find the focal mechanism and depth of earthquakes using as few as three stations when the velocity structure is known.
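The grid search over source parameters can be caricatured as follows. Here synth() is a purely hypothetical stand-in for the Green's-function/moment-tensor modelling described above, and normalised cross-correlation plays the role of the objective function; the parameter values and grid spacings are arbitrary.

```python
import numpy as np

def synth(strike_deg, depth_km, t):
    """Hypothetical forward model: phase set by strike, decay by depth."""
    return np.sin(2.0 * np.pi * t + np.radians(strike_deg)) * np.exp(-t / depth_km)

t = np.linspace(0.0, 5.0, 500)
observed = synth(40.0, 7.0, t)       # "observed" seismogram with known answer

def objective(d, o):
    """Normalised cross-correlation between synthetic d and observed o."""
    return np.dot(d, o) / (np.linalg.norm(d) * np.linalg.norm(o))

# Exhaustive grid search over (strike, depth); the best-fitting synthetic
# defines the solution, mirroring the paper's strike/dip/rake/depth search.
best = max(
    ((s, z) for s in range(0, 360, 5) for z in range(1, 15)),
    key=lambda p: objective(synth(p[0], p[1], t), observed),
)
print(best)   # (40, 7)
```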

18.
This study describes the use of linearly modulated optically stimulated luminescence (LM-OSL) to distinguish surface-soil derived sediments from those derived from channel bank erosion. LM-OSL signals from quartz extracted from 15 surface-soil and five channel bank samples were analysed and compared to signals from samples collected from two downstream river sites. Discriminant analysis showed that the detrapping probabilities of the fast, first slow and second slow components of the LM-OSL signal can be used to differentiate between the samples collected from the channel bank and surface-soil sources. We show that, for each of these source end members, these components are all normally distributed. These distributions are then used to estimate the relative contribution of surface-soil derived and channel bank derived sediment to the river bed sediments. The results indicate that channel bank derived sediments dominate the sediment sources at both sites, with contributions of 90.1 ± 3% and 91.9 ± 1.9%. These results are in agreement with a previous study which used measurements of 137Cs and 210Pbex fallout radionuclides to estimate the relative contributions from these two sources. This result shows that LM-OSL may be a useful method, at least in the studied catchment, for estimating the relative contributions of surface soil and channel erosion to river sediments. However, further research in different settings is required to test the ability of OSL signals to distinguish these sediment sources. If it proves generally applicable, this technique may provide an alternative to the use of fallout radionuclides for source tracing. Copyright © 2015 John Wiley & Sons, Ltd.

19.
We have previously applied three-dimensional acoustic, anisotropic, full-waveform inversion to a shallow-water, wide-angle, ocean-bottom-cable dataset to obtain a high-resolution velocity model. This velocity model produced an improved match between synthetic and field data, better flattening of common-image gathers, a closer fit to well logs, and an improvement in the pre-stack depth-migrated image. Nevertheless, close examination reveals that there is a systematic mismatch between the observed and predicted data from this full-waveform inversion model, with the predicted data being consistently delayed in time. We demonstrate that this mismatch cannot be produced by systematic errors in the starting model, by errors in the assumed source wavelet, by incomplete convergence, or by the use of an insufficiently fine finite-difference mesh. Throughout these tests, the mismatch is remarkably robust, with the significant exception that we do not see an analogous mismatch when inverting synthetic acoustic data. We suspect therefore that the mismatch arises because of inadequacies in the physics that are used during inversion. For ocean-bottom-cable data in shallow water at low frequency, apparent observed arrival times, in wide-angle turning-ray data, result from the characteristics of the detailed interference pattern between primary refractions, surface ghosts, and a large suite of wide-angle multiple reflected and/or multiple refracted arrivals. In these circumstances, the dynamics of individual arrivals can strongly influence the apparent arrival times of the resultant compound waveforms. In acoustic full-waveform inversion, we do not normally know the density of the seabed, and we do not properly account for finite shear velocity, finite attenuation, and fine-scale anisotropy variation, all of which can influence the relative amplitudes of different interfering arrivals, which in their turn influence the apparent kinematics. Here, we demonstrate that the introduction of a non-physical offset-variable water density during acoustic full-waveform inversion of this ocean-bottom-cable field dataset can compensate efficiently and heuristically for these inaccuracies. This approach improves the travel-time match and consequently increases both the accuracy and resolution of the final velocity model that is obtained using purely acoustic full-waveform inversion at minimal additional cost.

20.
We present a coupled finite-element-infinite-element method for 2.5D direct-current resistivity inversion that reduces the numerical errors arising from the improper enforcement of artificial boundary conditions on the distant surface enclosing the underground part of the model, while significantly reducing computation time and memory cost. We first present the boundary value problem of the secondary potential. Then, a new type of infinite element is analysed and applied to replace the conventionally used mixed boundary condition on the distant boundary. In the internal domain, a standard finite-element method is used to derive the final system of linear equations. With a novel shape function for the infinite elements at the subsurface boundary, the final system matrix is sparse, symmetric, and independent of the source electrodes. Through LU decomposition, the multi-pole potentials can be swiftly obtained by simple back-substitutions. We embed the newly developed forward solution into the inversion procedure. To compute the sensitivity matrix, we adopt the efficient adjoint-equation approach to further reduce the computational cost. Finally, several synthetic examples are tested to show the efficiency of the inversion.
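The efficiency argument, one factorisation of a source-independent sparse matrix followed by cheap back-substitutions per source electrode, can be sketched with SciPy's sparse LU on a stand-in tridiagonal system (not the paper's finite-element-infinite-element matrix; the size and right-hand sides are arbitrary).

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# Stand-in sparse, symmetric system matrix (1D Laplacian-like).
n = 200
A = csc_matrix(
    np.diag(2.0 * np.ones(n))
    + np.diag(-1.0 * np.ones(n - 1), 1)
    + np.diag(-1.0 * np.ones(n - 1), -1)
)

lu = splu(A)                        # factorise once (source-independent)
sources = np.eye(n)[:, :5]          # five different source right-hand sides
# Each additional source costs only a forward/back substitution.
solutions = np.column_stack([lu.solve(sources[:, k]) for k in range(5)])

residual = np.linalg.norm(A @ solutions - sources)
print(residual < 1e-9)              # True
```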
