Similar Articles (20 results found)
1.
We introduce the signal-dependent time-frequency distribution, a time-frequency distribution that allows the user to optimize the trade-off between joint time-frequency resolution and suppression of transform artefacts. The signal-dependent time-frequency distribution, as well as the short-time Fourier transform, the Stockwell transform, and the Fourier transform, are analysed for their ability to estimate the spectrum of a known wavelet used in a tuning-wedge model. Next, the signal-dependent time-frequency distribution and fixed- and variable-window transforms are used to estimate spectra from a zero-offset synthetic seismogram. Attenuation is estimated from the associated spectral-ratio curves, and the accuracy of the results is compared. The synthetic consisted of six pairs of strong reflections, based on real well-log data, with a modelled intrinsic attenuation value of 1000/Q = 20. The signal-dependent time-frequency distribution was the only time-frequency transform found to produce spectra that estimated consistent attenuation values, with an average of 1000/Q = 26±2; results from the fixed- and variable-window transforms were 24±17 and 39±10, respectively. Finally, all three time-frequency transforms were used in a pre-stack attenuation estimation method (the pre-stack Q inversion algorithm) applied to a gather from a North Sea seismic dataset, to estimate attenuation between nine different strong reflections. In this case, the signal-dependent time-frequency distribution produced spectra more consistent with the constant-Q model of attenuation assumed in the pre-stack attenuation estimation algorithm: the average L1 residuals of the spectral-ratio surfaces from the theoretical constant-Q expectation for the signal-dependent time-frequency distribution, short-time Fourier transform, and Stockwell transform were 0.12, 0.21, and 0.33, respectively. Based on these results, the signal-dependent time-frequency distribution can provide more accurate and precise estimates of the amplitude spectrum of a reflection, owing to its higher attainable time-frequency resolution.
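The spectral-ratio step underlying these comparisons is compact enough to sketch. Below is a minimal, illustrative Python version, not the authors' implementation: it assumes amplitude spectra measured at two reflections separated by an interval traveltime and recovers Q from the slope of the log-spectral ratio; the function name and toy numbers are ours.

```python
import numpy as np

def estimate_q_spectral_ratio(spec_top, spec_base, freqs, dt_interval):
    """Q from the log-spectral-ratio slope between two reflections:
    ln[A_base(f) / A_top(f)] = -pi * f * dt / Q + const."""
    log_ratio = np.log(np.asarray(spec_base) / np.asarray(spec_top))
    slope, _ = np.polyfit(freqs, log_ratio, 1)   # slope = -pi * dt / Q
    return -np.pi * dt_interval / slope

# Toy check for 1000/Q = 20 (Q = 50) over a 40 ms interval.
freqs = np.linspace(10.0, 80.0, 50)
q_true, dt = 50.0, 0.040
spec_top = np.exp(-((freqs - 40.0) / 30.0) ** 2)             # reference spectrum
spec_base = spec_top * np.exp(-np.pi * freqs * dt / q_true)  # attenuated spectrum
print(1000.0 / estimate_q_spectral_ratio(spec_top, spec_base, freqs, dt))  # ~20.0
```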

2.
Imaging with reflections from dipole acoustic logging has become a research topic of increasing interest in recent years. Extracting reflections from the whole waveform is both important and extremely difficult, because the reflections are obscured by large-amplitude direct waves. A method of wavefield separation based on high-resolution Radon transforms has been applied to separate the reflected waves. First, an analysis of the common-offset gathers shows that the linear Radon transform can be used to separate the direct and reflected wavefields. However, a traditional linear Radon transform based on the least-squares method cannot focus the wave events sharply. An improved high-resolution linear Radon transform is achieved using the principles of maximum entropy and Bayesian methods, building on previous studies. The separation method is tested using synthetic data for hard and soft formations, a void model, and a fault model. The high-resolution Radon transform method is used to process a field dataset and exhibits improved results compared with those of the standard method.
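For reference, a damped least-squares linear Radon transform can be sketched per frequency as below. This is a schematic baseline with our own names and a fixed damping; a high-resolution variant like the one described above would replace the fixed damping with iteratively reweighted (maximum-entropy or Bayesian sparseness) terms.

```python
import numpy as np

def ls_linear_radon(d, t, x, p, eps=1e-3):
    """Damped least-squares linear (tau-p) Radon panel from data d(t, x).

    Solved per frequency: m = (L^H L + eps I)^(-1) L^H d, where L maps
    slowness p to offset x through the linear-moveout phase shift.
    """
    nt = len(t)
    D = np.fft.rfft(d, axis=0)
    freqs = np.fft.rfftfreq(nt, d=t[1] - t[0])
    M = np.zeros((len(freqs), len(p)), dtype=complex)
    for k, f in enumerate(freqs):
        L = np.exp(-2j * np.pi * f * np.outer(x, p))    # (nx, np) operator
        A = L.conj().T @ L + eps * np.eye(len(p))
        M[k] = np.linalg.solve(A, L.conj().T @ D[k])
    return np.fft.irfft(M, n=nt, axis=0)                # m(tau, p)
```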

3.
High-resolution topography, e.g. a 1-m digital elevation model (DEM) from light detection and ranging (LiDAR), offers an opportunity for accurate identification of topographic features of relevance for hydrologic and geomorphologic modelling. Yet the computation of some derived topographic properties, such as the topographic index (TI), presents daunting challenges that hamper the full exploration of topography-based models. Particular problems arise, for example, when a distributed (or semi-distributed) rainfall-runoff model is applied to high-resolution DEMs. Indeed, the characteristic dependency between landscape resolution and the computed TI distribution results in the formation of unphysical, unconnected saturated zones, which in turn cause unrealistic representations of rainfall-runoff dynamics. In this study, we present a methodology based on a multi-resolution wavelet transformation that, by means of a soft-thresholding scheme on the wavelet coefficients, filters the noise of high-resolution topography to construct regularized sets of locally smoother topography on which the TI is computed. While the methodology needs a somewhat arbitrary definition of the wavelet-coefficient threshold, our study shows that when the information content (entropy) of the TI distribution is used as a filtering-efficiency metric, a critical threshold automatically emerges in the landscape reconstruction. The methodology is demonstrated using 1-m LiDAR data for the Elder Creek River basin in California. While the proposed case study uses a TOPMODEL approach, the methodology can be extended to different topography-based models and is not limited to hydrological applications. Copyright © 2012 John Wiley & Sons, Ltd.
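A minimal sketch of the wavelet filtering loop and the entropy metric, assuming the PyWavelets (pywt) package; the TI computation itself (flow routing over the smoothed surface) and the threshold sweep are omitted, and all names are ours.

```python
import numpy as np
import pywt  # PyWavelets

def soft_threshold_dem(dem, wavelet="db4", level=3, thresh=0.1):
    """Soft-threshold the detail coefficients of a multi-resolution
    wavelet decomposition of the DEM; the coarse approximation is kept."""
    coeffs = pywt.wavedec2(dem, wavelet, level=level)
    out = [coeffs[0]]
    for details in coeffs[1:]:
        out.append(tuple(pywt.threshold(c, thresh, mode="soft") for c in details))
    smooth = pywt.waverec2(out, wavelet)
    return smooth[: dem.shape[0], : dem.shape[1]]   # crop reconstruction padding

def ti_entropy(ti, bins=100):
    """Shannon entropy of the TI distribution - the filtering-efficiency
    metric whose behaviour under a threshold sweep reveals the critical
    threshold discussed above."""
    counts, _ = np.histogram(ti[np.isfinite(ti)], bins=bins)
    prob = counts[counts > 0] / counts.sum()
    return float(-np.sum(prob * np.log(prob)))
```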

4.
We suggest a new method to determine the piecewise-continuous vertical distribution of instantaneous velocities within sediment layers, using different-order time-domain effective velocities at their top and bottom points. We demonstrate our method using a synthetic model that consists of different compacted sediment layers characterized by monotonously increasing velocity, combined with hard rock layers, such as salt or basalt, characterized by constant fast velocities, and low-velocity layers, such as gas pockets. We first show that, by using only the root-mean-square velocities and the corresponding vertical travel times (computed from the original instantaneous velocity in depth) as input for a Dix-type inversion, many different vertical distributions of the instantaneous velocities can be obtained (inverted). Some geological constraints, such as limiting the values of the inverted vertical velocity gradients, should be applied in order to obtain more geologically plausible velocity profiles. In order to limit the non-uniqueness of the inverted velocities, additional information should be added. We have derived three different inversion solutions that yield the correct instantaneous velocity, avoiding any a priori geological constraints. The additional data at the interface points contain either the average velocities (or depths) or the fourth-order average velocities, or both. Practically, average velocities can be obtained from nearby wells, whereas the fourth-order average velocity can be estimated from the quartic moveout term during velocity analysis. Along with the three different types of input, we consider two types of vertical velocity models within each interval: a distribution with a constant velocity gradient and an exponential asymptotically bounded velocity model, which is particularly important for modelling thick layers. It has been shown that, in the case of thin intervals, both models lead to similar results. The method allows us to establish the instantaneous velocities at the top and bottom interfaces, where the velocity profile inside the intervals is given by either the linear or the exponential asymptotically bounded velocity model. Since the velocity parameters of each interval are independently inverted, discontinuities of the instantaneous velocity at the interfaces occur naturally. The improved accuracy of the inverted instantaneous velocities is particularly important for accurate time-to-depth conversion.
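As context for the inversion, the classical Dix step that this method generalizes can be written in a few lines (our own sketch and toy numbers). The paper's contribution lies in adding average velocities (or depths) and fourth-order average velocities to constrain linear-gradient or exponential asymptotically bounded profiles within each interval, which the constant-velocity step below cannot resolve.

```python
import numpy as np

def dix_interval_velocity(v_rms, t0):
    """Classical Dix inversion: constant interval velocities from RMS
    velocities v_rms and vertical two-way times t0 at the interfaces:
    v_int_k^2 = (t_k v_k^2 - t_{k-1} v_{k-1}^2) / (t_k - t_{k-1})."""
    v2t = np.asarray(v_rms, float) ** 2 * np.asarray(t0, float)
    return np.sqrt(np.diff(v2t) / np.diff(t0))

# One interval bracketed by two picks of (t0 [s], v_rms [m/s]):
print(dix_interval_velocity([1500.0, 1650.0], [0.8, 1.2]))  # ~[1915.1]
```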

5.
Reflection full-waveform inversion can update the deeper part of the subsurface velocity structure, but tends to get stuck in local minima of the waveform misfit function. These local minima cause cycle skipping if the initial background velocity model is far from the true model. Since conventional reflection full-waveform inversion using the two-way wave equation in the time domain is computationally expensive and consumes a large amount of memory, we implement a correlation-based reflection waveform inversion using one-way wave equations to retrieve the background velocity. In this method, one-way wave equations are used for seismic forward modelling, migration/de-migration, and the gradient computation of the objective function in the frequency domain. Compared with the method using the two-way wave equation, the proposed method benefits from the lower computational cost of one-way wave equations without significant loss of accuracy in cases without steep dips. It also reduces the memory requirement by roughly an order of magnitude relative to two-way implementations, in both two- and three-dimensional situations. Through numerical analysis, we also find that one-way wave equations can better construct the low-wavenumber reflection wavepath without producing high-amplitude short-wavelength components near the image points in the reflection full-waveform inversion gradient. A synthetic test and a real-data application show that the proposed method efficiently updates the background velocity model.
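The flavour of a frequency-domain one-way extrapolator can be conveyed by the textbook constant-velocity phase-shift step below. This is our simplification for illustration, not the authors' operator, which must also handle lateral velocity variation.

```python
import numpy as np

def phase_shift_step(U, freqs, kx, c, dz):
    """One-way (downward-continuation) phase-shift extrapolation of a
    wavefield panel U(f, kx) over a depth step dz, for a locally constant
    velocity c. Evanescent components (imaginary kz) are zeroed."""
    w = 2.0 * np.pi * np.asarray(freqs)[:, None]
    kz2 = (w / c) ** 2 - np.asarray(kx)[None, :] ** 2
    kz = np.sqrt(np.maximum(kz2, 0.0))
    prop = np.exp(1j * kz * dz)
    prop[kz2 <= 0.0] = 0.0
    return U * prop
```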

6.
In this paper we explore the optimum assimilation of high-resolution data into numerical models, using the example of topographic data provision for flood inundation simulation. First, we explore problems with current assimilation methods in which numerical grids are generated independently of topography. These include possible loss of significant length scales of topographic information, poor representation of the original surface, and data redundancy. These are resolved through the development of a processing chain consisting of: (i) assessment of significant length scales of variation in the input data sets; (ii) determination of significant points within the data set; (iii) translation of these into a conforming model discretization that preserves solution quality for a given numerical solver; and (iv) incorporation of otherwise redundant sub-grid data into the model in a computationally efficient manner. This processing chain is used to develop an optimal finite element discretization for a 12 km reach of the River Stour in Dorset, UK, for which a high-resolution topographic data set derived from airborne laser altimetry (LiDAR) was available. For this reach, three simulations of a 1-in-4-year flood event were conducted: a control simulation with a mesh developed independently of topography, a simulation with a topographically optimum mesh, and a further simulation with the topographically optimum mesh incorporating the sub-grid topographic data within a correction algorithm for dynamic wetting and drying in fixed-grid models. The topographically optimum model is shown to represent the 'raw' topographic data set better, and the differences between this surface and the control are hydraulically significant. Incorporation of sub-grid topographic data has a less marked impact than getting the explicit hydraulic calculation correct, but still leads to important differences in model behaviour. The paper highlights the need for better validation data capable of discriminating between these competing approaches and begins to indicate what the characteristics of such a data set should be. More generally, the techniques developed here should prove useful for any data set whose resolution exceeds that of the model in which it is to be used. Copyright © 2002 John Wiley & Sons, Ltd.
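Step (ii) of the processing chain can be illustrated with a simple greedy significant-point selector on a 1D profile. This is our schematic analogue only; the actual chain works on 2D surfaces and translates the selected points into a conforming finite element mesh.

```python
import numpy as np

def significant_points(z, tol):
    """Greedy significant-point selection on a 1D elevation profile: keep
    inserting the point worst represented by the current piecewise-linear
    reconstruction until the maximum error drops below tol."""
    z = np.asarray(z, float)
    keep = {0, len(z) - 1}
    while True:
        idx = sorted(keep)
        recon = np.interp(np.arange(len(z)), idx, z[idx])
        worst = int(np.argmax(np.abs(z - recon)))
        if abs(z[worst] - recon[worst]) < tol:
            return idx          # indices of topographically significant points
        keep.add(worst)
```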

7.
Migration velocity analysis aims at determining the background velocity model. Classical artefacts, such as migration smiles, are observed on subsurface-offset common-image gathers, due to spatial and frequency limitations of the data. We analyse their impact on the differential semblance functional and on its gradient with respect to the model. In particular, the differential semblance functional is not necessarily minimal at the expected model. Tapers are classically applied to common-image gathers to partly reduce these artefacts. Here, we first observe that the migrated image can be defined as the first gradient of an objective function formulated in the data domain. For an automatic and more robust formulation, we introduce a weight in the original data-domain objective function. The weight is determined such that the Hessian resembles a Dirac function. In that way, we extend quantitative migration to the subsurface-offset domain. This is an automatic way to compensate for illumination. We analyse the modified scheme on a very simple 2D case and on a more complex velocity model to show how migration velocity analysis becomes more robust.
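For orientation, the differential semblance functional over subsurface-offset common-image gathers has the simple form below (a schematic, with our array conventions). Migration smiles spread energy to non-zero subsurface offset h and inflate J even at the correct velocity, which is exactly the artefact behaviour analysed above.

```python
import numpy as np

def differential_semblance(cig, h):
    """Differential semblance J = 1/2 * sum |h * I(z, x, h)|^2 over
    subsurface-offset common-image gathers cig. A kinematically correct
    velocity focuses energy at h = 0 and minimizes J; migration smiles
    leak energy to large |h| and raise it."""
    return 0.5 * float(np.sum((cig * np.asarray(h)[None, None, :]) ** 2))
```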

8.
9.
The tau-p inversion algorithm is widely employed by many computer programs that implement refraction tomography to generate starting models. However, this algorithm can frequently fail to detect even major lateral variations in seismic velocities, such as the 50-m-wide shear zone that is the subject of this study. By contrast, the shear zone is successfully defined with the inversion algorithms of the generalized reciprocal method. The shear zone is confirmed with a 2D analysis of the head-wave amplitudes, a spectral analysis of the refraction convolution section, and numerous closely spaced orthogonal seismic profiles recorded for a later 3D refraction investigation. Further improvements in resolution, which facilitate the recognition of additional zones with moderate reductions in seismic velocity, are achieved with a novel application of the Hilbert transform to the refractor velocity analysis algorithm. However, the improved resolution also requires the use of a lower average vertical seismic velocity, which accommodates a velocity reversal in the weathering. The lower seismic velocity is derived with the generalized reciprocal method, whereas most refraction tomography programs assume vertical velocity gradients as the default. Although all of the tomograms are consistent with the traveltime data, the resolution of each tomogram is comparable only with that of its starting model. Therefore, it is essential to employ inversion algorithms that can generate detailed starting models where detailed lateral resolution is the objective. Non-uniqueness can often be readily resolved with head-wave amplitudes, attribute processing of the refraction convolution section, and additional seismic traverses, prior to the acquisition of any borehole data. It is concluded that, unless specific measures are taken to address non-uniqueness, the production of a single refraction tomogram that fits the traveltime data to sufficient accuracy does not necessarily demonstrate that the result is either correct or even the most probable.
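For readers unfamiliar with the generalized reciprocal method, its velocity-analysis function can be sketched as below for the special case XY = 0 (our simplification; the full method scans a range of XY separations and also computes a time-depth function).

```python
import numpy as np

def grm_velocity_analysis(x, t_fwd, t_rev, t_recip):
    """GRM velocity-analysis function t_V = (t_fwd - t_rev + t_recip) / 2
    at receiver positions x, evaluated here for XY = 0. The apparent
    refractor velocity is the inverse slope of t_V versus x; a shear zone
    shows up as a local change of slope."""
    tv = 0.5 * (np.asarray(t_fwd) - np.asarray(t_rev) + t_recip)
    slope, _ = np.polyfit(x, tv, 1)
    return tv, 1.0 / slope    # function values, average refractor velocity
```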

10.
Full-waveform inversion is re-emerging as a powerful data-fitting procedure for quantitative seismic imaging of the subsurface from wide-azimuth seismic data. This method is suitable for building high-resolution velocity models provided that the targeted area is sampled by both diving waves and reflected waves. However, the conventional formulation of full-waveform inversion prevents the reconstruction of the small-wavenumber components of the velocity model when the subsurface is sampled by reflected waves only. This typically occurs as the depth becomes significant with respect to the length of the receiver array. This study first aims to highlight the limits of the conventional form of full-waveform inversion when applied to seismic reflection data, through a simple canonical example of seismic imaging, and then to propose a new inversion workflow that overcomes these limitations. The governing idea is to decompose the subsurface model into a background part, which we seek to update, and a singular part that corresponds to some prior knowledge of the reflectivity. Forcing this scale uncoupling in the full-waveform inversion formalism brings out, in the sensitivity kernel, the transmitted wavepaths that connect the sources and receivers to the reflectors; the kernel is otherwise dominated by the migration impulse responses formed by the correlation of the downgoing direct wavefields coming from the shot and receiver positions. This transmission regime makes full-waveform inversion amenable to updating the long-to-intermediate wavelengths of the background model from the wide scattering-angle information. We show, however, that this prior knowledge of the reflectivity does not remove the need for a suitable misfit measurement based on cross-correlation, to avoid cycle-skipping issues, nor for a suitable inversion domain, such as the pseudo-depth domain, which preserves the invariant property of the zero-offset time. This latter feature avoids updating the reflectivity information at each non-linear iteration of the full-waveform inversion, hence considerably reducing the computational cost of the entire workflow. Prior information on the reflectivity in the full-waveform inversion formalism, a robust misfit function that prevents cycle-skipping issues, and a suitable inversion domain that preserves the seismic invariant are the three key ingredients that should ensure well-posedness and computational efficiency of full-waveform inversion algorithms for seismic reflection data.
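A cross-correlation-based kinematic misfit of the kind referred to above can be sketched as follows; this is our illustration, and the paper's exact functional may differ.

```python
import numpy as np

def traveltime_shift(obs, syn, dt):
    """Cross-correlation lag between observed and synthetic traces - a
    kinematic measurement that stays informative even when the waveforms
    are more than half a cycle apart, unlike a sample-wise difference."""
    xcorr = np.correlate(obs, syn, mode="full")
    lag = int(np.argmax(xcorr)) - (len(syn) - 1)
    return lag * dt   # a misfit such as J = 1/2 * sum(shift^2) is then minimized
```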

11.
We have studied in detail the theoretical and numerical properties of a finite-difference algorithm for image-wave time remigration. Numerical experiments were performed for a number of synthetic models, and we obtained perfect agreement between the theoretical predictions and the numerical results. The examples also demonstrate the computational efficiency of the algorithm. An application to ground-penetrating-radar (GPR) data shows that image-wave remigration can be used to estimate models with laterally varying velocities. The quality of the latter is confirmed by a final zero-offset time migration.

12.
This paper describes the preliminary development of a network-index approach to modify and extend the classic TOPMODEL. Application of the basic Beven and Kirkby form of TOPMODEL to high-resolution (2.0 m) laser altimetric data (based upon the UK Environment Agency's light detection and ranging (LIDAR) system) for a 13.8 km² catchment in an upland environment identified many saturated areas that remained unconnected from the drainage network even during an extreme flood event. This is shown to be a particular problem with using high-resolution topographic data, especially over large areas. To deal with the hydrological consequences of disconnected areas, we present a simple network-index modification in which saturated areas are only considered to contribute when the topographic index indicates continuous saturation along the length of a flow path to the point where the path becomes a stream. This is combined with an enhanced method for dealing with the problem of pits and hollows, which is shown to become more acute with higher resolution topographic data. The paper concludes by noting the implications of the research for both methodological and substantive research currently under way. Copyright © 2004 John Wiley & Sons, Ltd.
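The network-index rule can be stated in a few lines for a single flow path (our schematic; the real implementation works over the full flow-direction grid).

```python
import numpy as np

def connected_saturation(ti_along_path, ti_sat):
    """Network-index rule on one flow path ordered from a cell down to the
    stream (last element): a saturated cell contributes only if every cell
    between it and the stream is saturated too."""
    sat = np.asarray(ti_along_path) >= ti_sat
    return np.logical_and.accumulate(sat[::-1])[::-1]

print(connected_saturation([8.1, 6.0, 9.3, 9.0], ti_sat=7.5))
# [False False  True  True]: the upslope saturated cell (TI = 8.1) is cut
# off by the unsaturated cell (TI = 6.0) and does not contribute.
```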

13.
Artesian springs are localized aquifer outlets that originate when pressurized groundwater is allowed to rise to the surface. Computing artesian discharge directly is often subject to practical difficulties such as restricted accessibility, abundant vegetation, or slow flow rates. These circumstances call for indirect approaches to quantify flow. This paper presents a method to estimate groundwater discharge through an upwelling spring by means of a three-layer steady-state groundwater flow model. Model inputs include on-site measurements of vertical sediment permeability, sediment temperatures, and hydraulic gradients. About 70 spring-bed piezometers were used to carry out permeability tests within the spring sediments, as well as to quantify the hydraulic head at different depths below the discharge point. Sediment temperatures were measured at different depths and correlated to permeabilities in order to demonstrate the potential of temperature as a substitute for cumbersome slug tests. Results show that the spatial distribution of discharge through the spring bottom is highly heterogeneous, as sediment permeability varies by several orders of magnitude within centimetres. Sensitivity analyses imply that geostatistical interpolation is irrelevant to the results if field datasets come from a sufficiently high resolution of piezometric records. Copyright © 2013 John Wiley & Sons, Ltd.
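The discharge computation reduces to Darcy's law applied cell by cell over the spring bed; below is a schematic version with invented numbers, not the paper's three-layer model.

```python
import numpy as np

def spring_discharge(K, dh, dz, cell_area):
    """Total upwelling discharge from per-piezometer Darcy fluxes:
    q = K * dh/dz [m/s], Q = sum(q * area) [m^3/s].
    K: vertical hydraulic conductivity per cell; dh: head difference over
    the vertical separation dz (positive upward); cell_area: bed area
    represented by each piezometer."""
    q = np.asarray(K, float) * np.asarray(dh, float) / dz
    return float(np.sum(q * cell_area))

# Two neighbouring cells whose K differs by two orders of magnitude:
print(spring_discharge(K=[1e-4, 1e-6], dh=[0.05, 0.05], dz=0.5, cell_area=1.0))
```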

14.
The reassignment method remaps the energy of each point in a time-frequency spectrum to a new coordinate that is closer to the actual time-frequency location. Two applications of the reassignment method are developed in this paper. We first describe time-frequency reassignment as a tool for spectral decomposition. The reassignment method helps to generate clearer frequency slices of layers and therefore facilitates the interpretation of thin layers. The second application is seismic data de-noising. By thresholding in the reassigned domain rather than in the Gabor domain, random noise is more easily attenuated, since seismic events are represented more compactly and with relatively larger energy than the noise. A reconstruction process that permits the recovery of seismic data from a reassigned time-frequency spectrum is developed. Two variants of the reassignment method are used in this paper: trace-by-trace time reassignment, used mainly for seismic spectral decomposition, and spatial reassignment, used mainly for seismic de-noising. Synthetic examples and two field-data examples are used to test the proposed method. For comparison, the Gabor transform method, an inversion-based method, and a common deconvolution method are also used in the examples.
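The standard reassignment operators (in the Auger-Flandrin form) can be sketched with three STFTs computed with a window, its time-weighted version, and its derivative. The code below is our generic illustration, not the paper's trace-by-trace or spatial variant, and sign conventions vary between references.

```python
import numpy as np

def _stft(x, win, hop):
    """Plain (unscaled) STFT of x with an arbitrary analysis window."""
    n = len(win)
    starts = np.arange(0, len(x) - n + 1, hop)
    frames = np.stack([x[s:s + n] * win for s in starts])
    return np.fft.rfft(frames, axis=1).T              # (nfreq, nframes)

def reassigned_coords(x, fs, nperseg=256, hop=64):
    """Remap each bin's energy from the window centre to the local centre
    of gravity (t_hat, f_hat), computed from STFTs with the window h, the
    time-weighted window t*h, and the window derivative dh/dt."""
    h = np.hanning(nperseg)
    n = np.arange(nperseg) - (nperseg - 1) / 2.0
    X = _stft(x, h, hop)
    Xt = _stft(x, h * n / fs, hop)                     # time-weighted window
    Xd = _stft(x, np.gradient(h) * fs, hop)            # window derivative
    eps = 1e-12
    t = (np.arange(X.shape[1]) * hop + (nperseg - 1) / 2.0) / fs
    f = np.fft.rfftfreq(nperseg, 1.0 / fs)
    t_hat = t[None, :] + np.real(Xt / (X + eps))
    f_hat = f[:, None] - np.imag(Xd / (X + eps)) / (2.0 * np.pi)
    return t_hat, f_hat, np.abs(X) ** 2                # coordinates + energy
```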

15.
Despite being less general than 3D surface-related multiple elimination (3D-SRME), multiple prediction based on wavefield extrapolation can still be of interest, because it is less CPU- and I/O-demanding than 3D-SRME and, moreover, does not require any prior data regularization. Here we propose a fast implementation of water-bottom multiple prediction that uses the Kirchhoff formulation of wavefield extrapolation. With wavefield extrapolation, multiple prediction is usually obtained through the cascade of two extrapolation steps. By applying Fermat's principle (i.e., minimum reflection traveltime), we show that the cascade of two operators can be replaced by a single approximated extrapolation step. The approximation holds as long as the water bottom is not too complex. Indeed, the proposed approach has proved to work well on synthetic and field data when the water bottom is such that wavefront triplications are negligible, as happens in many practical situations.
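The traveltime core of the Fermat argument is a single stationary-point search; a schematic follows (amplitudes and the Kirchhoff weighting are omitted, and the names are ours).

```python
import numpy as np

def fermat_single_step(t_sx, t_xr):
    """Replace the cascade of two extrapolations by one minimum-traveltime
    search, per Fermat's principle: t(s, r) = min over x of t(s,x) + t(x,r).
    t_sx: (ns, nx) traveltimes from sources to candidate bounce points x;
    t_xr: (nx, nr) traveltimes from bounce points to receivers."""
    return np.min(t_sx[:, :, None] + t_xr[None, :, :], axis=1)   # (ns, nr)
```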

16.
Microseismic monitoring has proven invaluable for optimizing hydraulic fracturing stimulations and monitoring reservoir changes. The signal-to-noise ratio of recorded microseismic data varies enormously from one dataset to another and can often be very low, especially in surface monitoring scenarios. Moreover, the data are often contaminated by correlated noise, such as borehole waves in the downhole monitoring case. These issues pose a significant challenge for microseismic event detection. In addition, for downhole monitoring, the location of microseismic events relies on accurate polarization analysis of the often weak P-wave to determine the event azimuth. Therefore, enhancing the microseismic signal, especially low signal-to-noise ratio P-wave data, has become an important task. In this study, a statistical approach based on the binary hypothesis test is developed to detect weak events embedded in high noise. The method constructs a vector space, known as the signal subspace, from previously detected events to represent similar, yet significantly variable, microseismic signals from specific source regions. Empirical procedures are presented for building the signal subspace from clusters of events. The distribution of the detection statistic is analysed to determine the parameters of the subspace detector, including the signal subspace dimension and the detection threshold. The effect of correlated noise is corrected in the statistical analysis. The subspace design and detection approach is illustrated on a dual-array hydrofracture monitoring dataset. The subspace approach, the array correlation method, and the array short-time-average/long-time-average (STA/LTA) detector are compared on data from the far monitoring well. It is shown that, at the same expected false-alarm rate, the subspace detector gives fewer false alarms than the array STA/LTA detector and more event detections than the array correlation detector. The additionally detected events from the subspace detector are further validated using data from the nearby monitoring well. The comparison demonstrates the potential benefit of using the subspace approach to improve the microseismic viewing distance. Following event detection, a novel method based on subspace projection is proposed to enhance weak microseismic signals. Examples on field data are presented, indicating the effectiveness of this subspace-projection-based signal enhancement procedure.
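The subspace detector's core is a projection statistic; below is a minimal sketch with our own names. Threshold selection from the statistic's distribution and the correlated-noise correction described above are omitted.

```python
import numpy as np

def build_signal_subspace(training_events, d):
    """Orthonormal signal subspace of dimension d spanned by previously
    detected events (rows = vectorized waveforms), via SVD."""
    _, _, vt = np.linalg.svd(np.asarray(training_events, float),
                             full_matrices=False)
    return vt[:d].T                                  # (nsamp, d) basis U

def subspace_statistic(U, window):
    """Fraction of window energy captured by the subspace, in [0, 1];
    declare a detection when it exceeds a threshold chosen for the target
    false-alarm rate."""
    x = np.asarray(window, float)
    x = x / (np.linalg.norm(x) + 1e-12)
    return float(np.linalg.norm(U.T @ x) ** 2)
```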

17.
A marine source generates both a direct wavefield and a ghost wavefield. This is caused by the strong surface reflectivity, resulting in a blended source array, the blending process being natural. The two unblended response wavefields correspond to the real source at the actual location below the water level and to the ghost source at the mirrored location above the water level. As a consequence, deghosting becomes deblending ('echo-deblending') and can be carried out with a deblending algorithm. In this paper we present source deghosting by an iterative deblending algorithm that properly includes the angle dependence of the ghost: it represents a closed-loop, non-causal solution. The proposed echo-deblending algorithm is also applied to the detector deghosting problem. The detector cable may be slanted, and shot records may be generated by blended source arrays, the blending being created by simultaneous sources. Similar to surface-related multiple elimination, the method is independent of the complexity of the subsurface; only what happens at and near the surface is relevant. This means that the actual sea state may cause the reflection coefficient to become frequency dependent, and the water velocity may not be constant due to temporal and lateral variations in pressure, temperature, and salinity. As a consequence, we propose that estimation of the actual ghost model should be part of the echo-deblending algorithm. This is particularly true for source deghosting, where the interaction of the source wavefield with the surface may be far from linear. The echo-deblending theory also shows how multi-level source acquisition and multi-level streamer acquisition can be numerically simulated from standard acquisition data. The simulated multi-level measurements increase the performance of the echo-deblending process. The output of the echo-deblending algorithm on the source side consists of two ghost-free records: one generated by the real source at the actual location below the water level and one generated by the ghost source at the mirrored location above the water level. If we apply our algorithm at the detector side as well, we end up with four ghost-free shot records. All these records are input to migration. Finally, we demonstrate that the proposed echo-deblending algorithm is robust in the presence of background noise.
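For reference, the conventional flat-sea, frequency-independent ghost model that echo-deblending generalizes can be written and inverted in the f-k domain as below. This single-pass stabilized deconvolution is our schematic, not the authors' iterative closed-loop algorithm, which also estimates the ghost model itself.

```python
import numpy as np

def deghost_fk(P, freqs, kx, depth, c=1480.0, r=1.0, eps=1e-2):
    """Stabilized f-k deghosting for a flat source (or receiver) depth.

    Ghost model: G(w, kx) = 1 - r * exp(2j * kz * depth), i.e. the mirror
    arrival delayed by the two-way vertical path to the free surface with
    reflection coefficient -r; kz is the vertical wavenumber."""
    w = 2.0 * np.pi * np.asarray(freqs)[:, None]
    kz = np.sqrt(np.maximum((w / c) ** 2 - np.asarray(kx)[None, :] ** 2, 0.0))
    G = 1.0 - r * np.exp(2j * kz * depth)
    return P * np.conj(G) / (np.abs(G) ** 2 + eps)    # stabilized division
```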

18.
19.
River managers and scientists interested in hyporheic processes need adequate tools for characterizing hyporheic exchange flow (HEF) at local sites where only poor information on subsurface properties is available. This study evaluates a three-dimensional modelling approach, based on detailed surface parameterization and a simplified subsurface structure, for comparing potential HEF characteristics at three experimental reaches at the channel-unit scale. First, calibration is conducted to determine the best fit of heads given the model simplification; then the structure of the residuals is used to evaluate the origin of the misfit; and finally, a sensitivity analysis is conducted to identify inter-site differences in HEF. Results show that such an approach can highlight potential magnitude differences in HEF characteristics between reaches. The sensitivity analysis is successful in delineating the small area of exchange that remains under conditions of high groundwater discharge. In this case, however, the calibrated model performs poorly in representing the exchange pattern at the sediment-water interface, suggesting that the approach is less adequate for a deterministic simulation of observed heads. The summary statistics are in the range of similar published models, for which the reported indicator is the root mean square error on heads normalized by the head drop over the reach. We recommend, however, that modellers use a more comparable indicator, such as a measure of the residuals normalized by a measure of the observed vertical head differences. Overall, when subsurface data are unavailable or sparse, a three-dimensional groundwater model based on high-resolution topographic data, combined with a sensitivity analysis, appears to be a useful tool for a preliminary characterization of HEF. Copyright © 2013 John Wiley & Sons, Ltd.
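The two misfit indicators contrasted above can be written side by side; this is our sketch, where `vhd_obs` is an assumed array of observed vertical head differences.

```python
import numpy as np

def head_misfit_metrics(h_obs, h_sim, vhd_obs):
    """RMSE on heads normalized two ways: by the head drop over the reach
    (the commonly reported indicator) and by the mean observed vertical
    head difference (the more comparable indicator recommended above)."""
    h_obs, h_sim = np.asarray(h_obs, float), np.asarray(h_sim, float)
    rmse = np.sqrt(np.mean((h_obs - h_sim) ** 2))
    by_drop = rmse / (h_obs.max() - h_obs.min())
    by_vhd = rmse / np.mean(np.abs(vhd_obs))
    return by_drop, by_vhd
```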

20.
Multi-offset phase analysis of seismic surface waves is an established technique for extracting dispersion curves with high spatial resolution and, consequently, for investigating the subsurface in terms of shear-wave velocity distribution. However, field applications are rarely documented in the published literature. In this paper, we discuss an implementation of multi-offset phase analysis that estimates the Rayleigh-wave velocity by means of a moving window with a frequency-dependent length. This maximizes the lateral resolution at high frequencies while ensuring stability at the lower frequencies. In this way, we can retrieve the shallow lateral variability with high accuracy and, at the same time, obtain a robust surface-wave velocity measurement at depth. We apply this methodology to a dataset collected for hydrogeophysical purposes and compare the inversion results with those obtained using refraction seismics and electrical resistivity tomography. The surface-wave results are in good agreement with those provided by the other methods and demonstrate a superior capability in retrieving both lateral and vertical velocity variations, including velocity inversions. Our results are further corroborated by lithological information from a borehole drilled on the acquisition line. The availability of multi-offset phase analysis data also allows us to disentangle a fairly complex interpretation of the other geophysical results.
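At a single frequency, the multi-offset phase estimate reduces to a line fit of unwrapped phase against offset; a minimal sketch with our own names follows (the paper slides a frequency-dependent window along the line, which is omitted here).

```python
import numpy as np

def phase_velocity(phases, offsets, freq):
    """Rayleigh-wave phase velocity at one frequency from multi-offset
    phases: unwrap phi(x), fit a line, use c = 2*pi*f / |dphi/dx|."""
    phi = np.unwrap(np.asarray(phases, float))
    slope, _ = np.polyfit(offsets, phi, 1)   # slope = -k for outgoing waves
    return 2.0 * np.pi * freq / abs(slope)
```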
