Similar Literature
 20 similar records found
1.
Summary The various data processing techniques (downward continuation, first and second derivatives and their downward continuation) used in gravity interpretation are analogous to different types of linear filtering operations whose theoretical filter (amplitude) responses can be derived from the general form F(u, v) = (u^2 + v^2)^(N/2) · exp(d·√(u^2 + v^2)) by suitably choosing N and d, where u and v are angular frequencies in two perpendicular directions, d is the height or depth of continuation in units of the grid interval, and N denotes the order of the vertical derivative. By incorporating a mathematical smoothing function (with α the smoothing parameter) in the theoretical filter response function, it has been possible, by selecting a suitable value of the smoothing parameter, to establish an approximate equivalence between the effect of the mathematical smoothing and the inherent smoothing introduced by the numerical approximation (approximation error) for practically all data-processing techniques. This approximate equivalence leads to a generalized method of computing sets of weight coefficients for the various data-processing techniques by the filter-response matching method. Several sets of weight coefficients have thus been computed with different values of the smoothing parameter. The amplitude response curves of the various existing sets of weight coefficients have also been calculated to assess the quality of the approximation in achieving the desired filtering operation.
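A minimal numeric sketch of such a response function, assuming the standard combined form (u^2 + v^2)^(N/2)·exp(d·√(u^2 + v^2)) with a Gaussian-type smoothing factor exp(−α(u^2 + v^2)); the Gaussian form of the smoothing function is an assumption made for illustration:

```python
import numpy as np

def filter_response(u, v, N=2, d=0.0, alpha=0.0):
    """Theoretical amplitude response of the Nth vertical derivative
    continued a depth d (in grid units), damped by a smoothing factor.
    The Gaussian smoothing form exp(-alpha*(u^2+v^2)) is an assumption."""
    r = np.sqrt(u**2 + v**2)            # radial angular frequency
    return r**N * np.exp(d * r) * np.exp(-alpha * r**2)

# Evaluate on a frequency grid (angular frequencies in rad per grid interval)
u = np.linspace(0, np.pi, 64)
v = np.linspace(0, np.pi, 64)
U, V = np.meshgrid(u, v)

# Second derivative (N=2) continued down one grid interval (d=1), with and
# without smoothing: the smoothing factor tames the high-frequency blow-up.
H_raw = filter_response(U, V, N=2, d=1.0, alpha=0.0)
H_smooth = filter_response(U, V, N=2, d=1.0, alpha=0.3)
print(H_raw.max(), H_smooth.max())
```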

2.
Research on seismic coherence-cube algorithms based on the wavelet transform   Total citations: 27 (self-citations: 2, citations by others: 27)
Starting from existing coherence-cube computation, two algorithms are proposed. Algorithm 1 applies a wavelet transform using a wavelet function that simulates the seismic wavelet (or a high-resolution derivative wavelet function) to obtain frequency-divided instantaneous phase, from which the coherence cube is computed. Algorithm 2 computes the coherence cube from the real and imaginary parts obtained by the wavelet transform (equivalent to a Hilbert transform). In oilfield structural interpretation, coherence cubes computed in separate frequency bands are recombined in order to highlight small-fault features. Computations on field data show that Algorithm 2 has stronger noise resistance than K. J. Marfurt's coherence algorithm, and that Algorithm 1 gives more pronounced results than Algorithm 2 in practical applications.
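Algorithm 2's use of the real and imaginary parts (equivalent to a Hilbert transform) can be illustrated with a small stand-in sketch: build analytic traces with scipy.signal.hilbert and measure coherence between neighbouring traces as a normalized complex cross-correlation. This is illustrative only, not the authors' exact formulation:

```python
import numpy as np
from scipy.signal import hilbert

def complex_coherence(trace_a, trace_b, win=11):
    """Sliding-window coherence between two traces using their analytic
    (real + i*imag, i.e. Hilbert-transformed) representations.
    Illustrative stand-in for a coherence-cube computation."""
    za, zb = hilbert(trace_a), hilbert(trace_b)
    half = win // 2
    coh = np.zeros(len(trace_a))
    for k in range(half, len(trace_a) - half):
        a = za[k - half:k + half + 1]
        b = zb[k - half:k + half + 1]
        num = np.abs(np.vdot(a, b))
        den = np.sqrt(np.vdot(a, a).real * np.vdot(b, b).real)
        coh[k] = num / den if den > 0 else 0.0
    return coh

# Two synthetic traces: identical except for a small time shift, mimicking
# the trace-to-trace disruption caused by a small fault.
t = np.linspace(0, 1, 500)
wavelet = lambda s: np.exp(-((t - s) ** 2) / 2e-4) * np.cos(60 * np.pi * (t - s))
trace1, trace2 = wavelet(0.5), wavelet(0.52)
print(complex_coherence(trace1, trace2)[5:-5].min())  # dips where traces disagree
```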

3.
Some of the methods, such as regional removal and second-derivative calculations, which can be used to outline anomalies on potential-data maps can be thought of as filtering operations. The analysis and design of such two-dimensional filters by means of direct and inverse two-dimensional Fourier transforms have been considered. An analysis of several published second-derivative coefficient sets indicates that, in general, they are not good approximations to the theoretical second-derivative filter. Alternative methods of designing regional-removal and second-derivative filters are discussed. The properties of various two-dimensional filters are further illustrated by means of maps obtained from the convolution of several of these filters with a set of observed field data. These maps show the large changes in anomaly shape which can result from the inclusion or rejection of various wavelength components.
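As a sketch of the Fourier-domain view described here: for a harmonic potential field the second vertical derivative equals minus the horizontal Laplacian, so the theoretical filter multiplies the spectrum by (u^2 + v^2). A hedged example, assuming unit grid spacing:

```python
import numpy as np

def second_vertical_derivative(grid, dx=1.0):
    """Apply the theoretical second-vertical-derivative filter in the
    wavenumber domain: for a harmonic potential field, d2g/dz2 equals
    minus the horizontal Laplacian, i.e. multiplication by (u^2 + v^2)."""
    ny, nx = grid.shape
    u = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    v = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    U, V = np.meshgrid(u, v)
    return np.real(np.fft.ifft2(np.fft.fft2(grid) * (U**2 + V**2)))

# Synthetic smooth anomaly: the filter sharpens it, emphasising the
# short-wavelength components that outline the source edges.
y, x = np.mgrid[-32:32, -32:32]
anomaly = 1.0 / (x**2 + y**2 + 100.0) ** 1.5
print(second_vertical_derivative(anomaly).max())
```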

4.
An accurate estimate of the seismic wavelet on a seismic section is extremely important for interpretation of fine details on the section and for estimation of acoustic impedance. In the absence of well control, the recognized best approach to wavelet estimation is to use the technique of multiple coherence analysis to estimate the coherent signal and its amplitude spectrum, and thence construct the seismic wavelet under the minimum-phase assumption. The construction of the minimum-phase wavelet is critically dependent on the decay of the spectrum at the low-frequency end. Traditional methods of cross-spectral estimation, such as frequency smoothing using a Papoulis window, suffer from substantial side-lobe leakage in the areas of the spectrum where there is a large change of power over a relatively small frequency range. The low-frequency end of the seismic spectrum (less than 4 Hz) decays rapidly to zero. Side-lobe leakage causes poor estimates of the low-frequency decay, resulting in degraded wavelet estimates. Thomson's multitaper method of cross-spectral estimation, which suffers little from side-lobe leakage, is applied here and compared with the result of using frequency smoothing with the Papoulis window. The multitaper method seems much less prone to estimating spuriously high coherences at very low frequencies. The wavelet estimated by the multitaper approach from the data used here is equivalent to imposing a low-frequency roll-off of some 48 dB/oct (below 3.91 Hz) on the amplitude spectrum. Using Papoulis smoothing the equivalent roll-off is only about 36 dB/oct. Thus the multitaper method gives a low-frequency decay rate of the amplitude spectrum which is some four times greater than for Papoulis smoothing. It also gives more consistent results across the section. Furthermore, when compared with a wavelet extracted using well data, the wavelet obtained using the multitaper method and seismic data only (with no reference to well data) has more attractive physical characteristics than does an estimate obtained using traditional smoothing.
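SciPy exposes the Slepian (DPSS) tapers at the heart of Thomson's method; below is a minimal single-trace multitaper spectrum estimate, a simplified stand-in for the full multiple-coherence workflow (the NW and K values are illustrative):

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_spectrum(x, fs, NW=4, K=7):
    """Thomson multitaper power-spectrum estimate: average the
    periodograms of the series tapered by the first K Slepian (DPSS)
    sequences with time-bandwidth product NW."""
    n = len(x)
    tapers = dpss(n, NW, Kmax=K)             # shape (K, n)
    eigspecs = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    psd = eigspecs.mean(axis=0) / fs
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, psd

# Synthetic trace: band-limited signal plus noise.  The multitaper estimate
# suffers far less side-lobe leakage than a single-window periodogram, which
# matters where power falls off steeply (e.g. below ~4 Hz in seismic data).
fs, n = 250.0, 2048
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 30 * t) + 0.1 * np.random.default_rng(0).standard_normal(n)
f, p = multitaper_spectrum(x, fs)
print(f[np.argmax(p)])                       # ~30 Hz
```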

5.
Two new algorithms are presented for efficiently selecting suites of ground motions that match a target multivariate distribution or conditional intensity-measure target. The first algorithm is a Markov chain Monte Carlo (MCMC) approach in which records are sequentially added to a selected set such that the joint probability density function (PDF) of the target distribution is progressively approximated by the discrete distribution of the selected records. The second algorithm derives from the concept of the acceptance ratio within MCMC but does not involve any sampling. The first method takes advantage of MCMC's ability to efficiently explore a sampling distribution through the implementation of a traditional MCMC algorithm, and is shown to enable very good matches to multivariate targets when the number of records to be selected is relatively large. Its weaker performance for fewer records can be circumvented by the second method, which uses greedy optimisation to impose additional constraints upon properties of the target distribution. A preselection approach based upon values of the multivariate PDF is proposed that enables near-optimal record sets to be identified with a very close match to the target. Both methods are applied for a number of response analyses associated with different sizes of record sets and rupture scenarios. Comparisons are made throughout with the Generalised Conditional Intensity Measure (GCIM) approach. The first method provides results similar to GCIM but with slightly worse performance for small record sets, while the second method outperforms both the first method and GCIM for all considered cases.
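A toy sketch of the second (greedy) idea: from a candidate pool, repeatedly add the record that most reduces a discrepancy between the selected set's empirical distribution and the target. The one-dimensional target, misfit measure and pool below are all invented for illustration:

```python
import numpy as np
from scipy import stats

def cdf_misfit(sample, grid, target_cdf):
    """Mean squared discrepancy between the sample's empirical CDF and a
    target CDF on a grid (a Cramer-von-Mises-flavoured misfit)."""
    ecdf = np.searchsorted(np.sort(sample), grid, side="right") / len(sample)
    return np.mean((ecdf - target_cdf) ** 2)

def greedy_select(pool, n_select, grid, target_cdf):
    """Greedily add the candidate record that most reduces the misfit
    between the selected set's distribution and the target -- a toy
    illustration of the greedy/acceptance-ratio idea, not the paper's code."""
    chosen = []
    for _ in range(n_select):
        best_i, best_err = -1, np.inf
        for i, x in enumerate(pool):
            if i in chosen:
                continue
            err = cdf_misfit(np.append(pool[chosen], x), grid, target_cdf)
            if err < best_err:
                best_i, best_err = i, err
        chosen.append(best_i)
    return pool[chosen]

rng = np.random.default_rng(1)
pool = rng.normal(0.3, 1.4, size=500)          # candidate ln(IM) values, say
grid = np.linspace(-4, 4, 200)
target = stats.norm(0.0, 1.0).cdf(grid)        # invented target distribution
sel = greedy_select(pool, 30, grid, target)
print(sel.mean(), sel.std())                   # ~0, ~1
```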

6.
A method to calculate the resistivity transform of Schlumberger VES curves has been developed. It consists of approximating the field apparent-resistivity data by a linear combination of simple functions, which must satisfy the following requirements: (i) they must be suitable for fitting the resistivity data; (ii) once the fitting function has been obtained, they must allow the kernel to be determined analytically. The fitting operation is carried out by the least-mean-squares method, which also accomplishes a useful smoothing of the field curve (and therefore a partial noise filtering). It offers the possibility of assigning different weights to the apparent-resistivity values to be approximated, according to their different reliability. Several examples (theoretical resistivity curves, to estimate the precision of the method, and field data, to verify its practicality) yield good results with short execution times, independent of the shape of the apparent-resistivity curve.
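The weighted least-squares fitting step can be sketched in a few lines: approximate the apparent-resistivity curve by a linear combination of simple basis functions, with per-point weights expressing reliability. The exponential basis used here is a placeholder, not the authors' specific choice:

```python
import numpy as np

def weighted_lsq_fit(x, y, w, basis):
    """Weighted least-squares fit y ~ sum_k c_k * basis_k(x).
    Weights w let unreliable apparent-resistivity readings count less."""
    A = np.column_stack([f(x) for f in basis])
    coeffs, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)
    return coeffs

# Log electrode spacing and a synthetic noisy apparent-resistivity curve
x = np.linspace(0, 3, 40)                        # log10(AB/2)
true = 100 + 80 * np.exp(-x) - 50 * np.exp(-2 * x)
y = true + np.random.default_rng(2).normal(0, 3, x.size)
w = np.ones_like(x); w[-5:] = 0.3                # distrust the last readings

basis = [lambda t: np.ones_like(t),              # placeholder simple functions
         lambda t: np.exp(-t),
         lambda t: np.exp(-2 * t)]
c = weighted_lsq_fit(x, y, w, basis)
print(c)   # close to (100, 80, -50); the smooth fit doubles as noise filtering
```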

7.
In the oil and gas industry, well test analysis using derivative plots has been the core technique for examining reservoir and well behavior over the last three decades. Recently, diagnostic plots have gained recognition in the field of hydrogeology; however, this tool is still underused by groundwater professionals. The foremost drawback is that the derivative function must be computed from noisy field measurements, usually with finite-difference schemes, which complicates the analysis. We propose a B-spline framework for smooth derivative computation, referred to as Constrained Quartic B-Splines with Free Knots. The approach presents the following novelties in relation to methodological precedents: (1) the use of automatic equality derivative constraints, (2) a knot-removal strategy and (3) the introduction of a Boolean shape parameter that defines the number of initial knots to choose. These make it possible to evaluate both simple (manually recorded drawdown measurements) and complex (transducer-recorded) datasets. Furthermore, we propose an additional shape-preserving smoothing preprocessing procedure as a simple, fast and robust way to deal with extremely noisy signals. Our framework was tested on four pumping tests by comparing the spline derivative against the Bourdet algorithm; the latter is rather noisy (even for large differentiation intervals) and its second-derivative response is basically unreadable. In contrast, the spline first and second derivatives give smoother responses, which are more suitable for interpretation. We conclude that the proposed framework is a welcome contribution to evaluating aquifer tests reliably using derivative diagnostic plots.
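SciPy's smoothing splines give the flavour of the approach, though not the paper's own Constrained Quartic B-Splines with Free Knots: fit a quartic smoothing spline to drawdown data and differentiate it analytically to obtain smooth diagnostic-plot derivatives:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Synthetic noisy drawdown record (Theis-like logarithmic growth plus noise)
rng = np.random.default_rng(3)
t = np.logspace(-1, 2, 120)                      # time, minutes
s = 0.8 * np.log(t + 1.0) + rng.normal(0, 0.01, t.size)

# Quartic smoothing spline in ln(t); the s= parameter sets the
# smoothing/fidelity trade-off.  A stand-in for the paper's constrained
# quartic B-splines with free knots.
spl = UnivariateSpline(np.log(t), s, k=4, s=0.02)

# Analytic spline derivatives: the smooth diagnostic-plot derivative
# ds/d(ln t) and its derivative, far less noisy than finite differences.
d1 = spl.derivative(1)(np.log(t))
d2 = spl.derivative(2)(np.log(t))
print(d1[-1], d2[-1])    # late-time derivative flattens for radial flow
```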

8.
We start from the Hankel transform of Stefanescu's integral written in the convolution-integral form suggested by Ghosh (1971). In this way it is possible to obtain the kernel function by linear electric filter theory. Ghosh worked out sets of filter coefficients in the frequency domain and showed the very low content of high frequencies in apparent resistivity curves. Vertical soundings in the field measure a series of apparent resistivity values at a constant increment Δx of the logarithm of electrode spacing. Without loss of information we obtain the filter coefficient series by digital convolution of the Bessel function of exponential argument with a sine function of the appropriate argument. With a series of forty-one values we obtain the kernel functions from the resistivity curves to an accuracy of better than 0.5%. With the digital method it is possible to calculate easily the filter coefficients for any electrode arrangement and any cut-off frequency.
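The core operation is a one-dimensional digital convolution of samples taken at a constant increment of log electrode spacing with a short coefficient series; a schematic sketch with made-up coefficients (a real filter, such as the forty-one-term series above, must be derived as the abstract describes):

```python
import numpy as np

def apply_linear_filter(samples, coeffs):
    """Digital linear filtering as used in VES processing: convolve
    equally log-spaced samples with a precomputed coefficient series.
    The coefficients below are placeholders, NOT a valid derived filter."""
    return np.convolve(samples, coeffs, mode="valid")

# Apparent resistivity sampled at constant increments of log(AB/2)
log_s = np.arange(0, 4, 0.2)
rho_a = 50 + 30 * np.tanh(log_s - 2)             # synthetic sounding curve

coeffs = np.array([-0.05, 0.2, 0.7, 0.2, -0.05])  # placeholder 5-term filter
kernel_vals = apply_linear_filter(rho_a, coeffs)
print(kernel_vals.shape)                          # shorter by len(coeffs)-1
```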

9.
A numerical method is proposed for solving the problem of steady current flow. The electrodynamic model is replaced by the equivalent stationary charge distribution obtained by Poisson's analysis, in which the surface integral equation for field intensity is reduced to a set of simultaneous linear algebraic equations by means of the method of sub-areas. The solution of the set allows the calculation of an approximation for the charge density distribution on the discontinuity surfaces of conductivity. The method is valid for complex conductivities, whereby the apparent phase shift of IP can be calculated from the complex potential or field intensity. The phase shift anomaly calculated as an application is very similar to the corresponding frequency effect anomaly. The method allows the calculation of the mise-à-la-masse effect as a solution to a potential problem, in which the primary current electrode is located within the body to be surveyed.
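The method of sub-areas is a collocation discretization: assume the charge density is constant on each surface patch, enforce the boundary condition at patch centres, and solve the resulting dense linear system. A schematic sketch with a deliberately simplified kernel and geometry (the true kernel involves the conductivity contrast and the primary normal field, as the abstract indicates):

```python
import numpy as np

def solve_subareas(centres, areas, normals, E_primary_n, K):
    """Collocation ('sub-areas') solve of a second-kind integral equation
    for surface charge density tau on a conductivity discontinuity:
        tau_i = 2K * [E_primary_n_i + sum_j G_n(i, j) * tau_j * area_j],
    with G_n the normal derivative of the free-space Green's function.
    Schematic: geometry and kernel details are simplified for illustration."""
    n = len(centres)
    A = np.eye(n)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue                 # self-interaction omitted in this sketch
            r = centres[i] - centres[j]
            d = np.linalg.norm(r)
            G_n = np.dot(r, normals[i]) / (4 * np.pi * d**3)
            A[i, j] -= 2 * K * G_n * areas[j]
    return np.linalg.solve(A, 2 * K * E_primary_n)

# Tiny example: a few patches on a plane interface below a point source
centres = np.array([[x, 0.0, 5.0] for x in np.linspace(-2, 2, 9)])
areas = np.full(9, 0.25)
normals = np.tile([0.0, 0.0, 1.0], (9, 1))
E_n = 1.0 / (4 * np.pi * np.linalg.norm(centres, axis=1) ** 2)  # schematic field
print(solve_subareas(centres, areas, normals, E_n, K=0.5))
```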

10.
Total magnetic intensity contour maps for the study region (between 2°E and 10°E and between 56°N and 60°N) were digitized and converted to a regular grid of 285 × 285 points. The study area measures approximately 444 km × 444 km and the grid spacing is thus 1.56 km. The International Geomagnetic Reference Field for 1975 was gridded on the same net, and from the two data sets a further grid of the ΔT field was generated. A large number of profiles were constructed which were suitable for depth determinations. The regular-grid ΔT data are also convenient for the computation of the second vertical derivative. Using the method of vertical prisms of Vacquier et al. (1963), a large suite of curvature-depth indices was measured to complement the depths obtained from the intensity slopes and from boreholes which reach the crystalline basement. The depth to the magnetic basement has been contoured, and the resulting map is shown to be in good agreement with what is known about the deeper geology of the study area. The work reported here is part of a research project supported by Amoco Norway, BP Petroleum Development Ltd, Elf Aquitaine, Esso Exploration and Production, Norwegian Gulf, Norsk Hydro, Mobil Exploration Norway, Norwegian Petroleum Directorate, Royal Norwegian Council for Scientific and Industrial Research (NTNF), Norske Shell, and Statoil.

11.
Fourier transform techniques have been used to calculate the theoretical filter (amplitude) response function of the Nth-order vertical derivative continuation operation. The amplitude response functions of the vertical gradient and its continuation follow from the same expression. These response functions are subsequently used to calculate weighting coefficients suitable for two-dimensional equispaced data. A shortening operator has been incorporated to limit the extent of the operator. For a comparative study, some previously developed coefficient sets and the one presented in this paper are analysed in the frequency domain, and their merits and demerits are discussed.
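The shortening operator can be sketched as truncating and tapering the space-domain weights obtained from the theoretical response, then checking the achieved response in the frequency domain; a minimal one-dimensional illustration using the first-vertical-derivative response |u|:

```python
import numpy as np

# Theoretical response of the first vertical derivative along one axis: |u|
n, dx = 64, 1.0
u = 2 * np.pi * np.fft.fftfreq(n, d=dx)
H = np.abs(u)

# Space-domain weight coefficients = inverse FFT of the (real, even) response
w = np.real(np.fft.ifft(H))
w = np.fft.fftshift(w)                        # centre the operator

# Shortening operator: keep only the central 2M+1 weights, tapered by a
# cosine window to suppress truncation ripple (the Gibbs effect).
M = 7
centre = n // 2
short = w[centre - M:centre + M + 1] * np.hanning(2 * M + 1)

# Achieved amplitude response of the shortened operator vs the theoretical one
H_achieved = np.abs(np.fft.fft(short, n))
print(np.max(np.abs(H_achieved - H)))         # truncation error of the operator
```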

12.
It is often very useful to be able to smooth velocity fields estimated from exploration seismic data. For example, seismic migration is most successful when accurate but also smooth migration velocity fields are used. Smoothing in one, two and three dimensions is examined using North Sea velocity data. A number of ways of carrying out this smoothing are examined, and the technique of locally weighted regression (LOESS) emerges as the most satisfactory. In this method each smoothed value is formed using a local regression on a neighbourhood of points, downweighted according to their distance from the point of interest. In addition the method incorporates ‘blending’, which saves computation by using function and derivative information, and ‘weighting and robustness’, which allows the smooth to be biased towards reliable points, or away from unreliable ones. A number of other important factors are also considered: namely, the effect of changing the scales of the axes, or of thinning the velocity field, prior to smoothing, as well as the problem of smoothing onto irregular subsurfaces.
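A minimal one-dimensional LOESS sketch (tricube distance weights, local linear regression at each output point); the full method described above adds blending, robustness iterations and reliability weighting:

```python
import numpy as np

def loess_1d(x, y, x_out, span=0.3):
    """Locally weighted regression (LOESS): for each output point, fit a
    weighted straight line to the nearest span-fraction of the data,
    with tricube weights decaying to zero at the neighbourhood edge."""
    n = len(x)
    k = max(2, int(np.ceil(span * n)))           # neighbourhood size
    y_out = np.empty_like(x_out, dtype=float)
    for m, x0 in enumerate(x_out):
        d = np.abs(x - x0)
        idx = np.argsort(d)[:k]
        w = (1 - (d[idx] / d[idx].max()) ** 3) ** 3   # tricube weights
        A = np.column_stack([np.ones(k), x[idx] - x0])
        W = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(A * W[:, None], y[idx] * W, rcond=None)
        y_out[m] = coef[0]                        # fitted value at x0
    return y_out

# Noisy "velocity" profile smoothed by LOESS
rng = np.random.default_rng(4)
x = np.linspace(0, 3000, 150)                     # depth, m
v = 1800 + 0.6 * x + rng.normal(0, 80, x.size)    # m/s, with noise
print(loess_1d(x, v, x[::10])[:3])
```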

13.
A three‐dimensional (3D) electrical resistivity modelling code is developed to interpret surface and subsurface data. Based on the integral equation, it calculates the charge density caused by conductivity gradients at each interface of the mesh, allowing the estimation of the potential everywhere without the need to interpolate between nodes. Modelling generates a huge matrix, made up of Green's functions, which is stored by using the method of pyramidal compression. The potential is compared with the analytical and the numerical solutions obtained by finite‐difference codes for two models: the two‐layer case and the vertical contact case. The integral method is more accurate around the source point and at the limits of the domain for the potential calculation using a pole‐pole array. A technique is proposed to calculate the sensitivity (Jacobian) and Hessian matrices in 3D. The sensitivity is based on the derivative with respect to the block conductivity of the potential computed using the integral equation; it is only necessary to compute the electrical field at the source location. A direct extension of this technique allows the determination of the second derivatives. The technique is compared with the analytical solutions and with the calculation of the sensitivity according to the method using the inner product of the current densities calculated at the source and receiver points. Results are very accurate when the Green's function that includes the source image is used. The calculation of the three components of the electric field on the interfaces of the mesh is carried out simultaneously and quickly, using matrix compression.
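The sensitivity computed from "the inner product of the current densities calculated at the source and receiver points" corresponds to the standard adjoint identity: for unit injected currents, the derivative of the potential with respect to a cell's conductivity is minus the integral over the cell of ∇φ_source · ∇φ_receiver. A sketch using analytic homogeneous half-space fields as a check case (not the paper's Green's-function machinery):

```python
import numpy as np

def grad_phi_halfspace(r, r_src, sigma, I=1.0):
    """Gradient of the potential of a surface point source over a
    homogeneous half-space: phi = I / (2*pi*sigma*R)."""
    d = r - r_src
    R = np.linalg.norm(d, axis=-1, keepdims=True)
    return -I / (2 * np.pi * sigma) * d / R**3

def cell_sensitivity(cell_pts, dV, r_src, r_rec, sigma):
    """Adjoint sensitivity of the potential at r_rec (unit currents) to the
    conductivity of one cell: -int_cell grad(phi_src) . grad(phi_rec) dV,
    approximated by a sum over quadrature points cell_pts."""
    gs = grad_phi_halfspace(cell_pts, r_src, sigma)
    gr = grad_phi_halfspace(cell_pts, r_rec, sigma)
    return -np.sum(gs * gr) * dV

# One cubic cell at depth, sampled by a small grid of quadrature points
g = np.linspace(-0.4, 0.4, 4)
pts = np.array([[x, y, 5.0 + z] for x in g for y in g for z in g])
dV = 0.8**3 / len(pts)                       # volume per quadrature point
print(cell_sensitivity(pts, dV, np.array([0.0, 0.0, 0.0]),
                       np.array([10.0, 0.0, 0.0]), sigma=0.01))
```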

14.
The key problem in nonparametric frequency analysis of floods and droughts is the estimation of the bandwidth parameter, which defines the degree of smoothing. Most proposed bandwidth estimators have been based on the density function rather than the cumulative distribution function or the quantile function, which are the primary interest in frequency analysis. We propose a new bandwidth estimator derived from properties of quantile estimators. The estimator builds on work by Altman and Léger (1995). The estimator is compared to the well-known method of least-squares cross-validation (LSCV) using synthetic data generated from various parametric distributions used in hydrologic frequency analysis. Simulations suggest that our estimator performs at least as well as, and in many cases better than, the method of LSCV. In particular, the use of the proposed plug-in estimator reduces bias in the estimation as compared to LSCV. When applied to data sets containing observations with identical values, typically the result of rounding or truncation, LSCV and most other techniques generally underestimate the bandwidth. The proposed technique performs very well in such situations.
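For contrast with the proposed plug-in estimator, the LSCV baseline for a Gaussian kernel can be written in closed form, using the fact that the convolution of two Gaussian kernels of width h is a Gaussian of width h√2; a minimal sketch:

```python
import numpy as np
from scipy.stats import norm

def lscv_score(h, x):
    """Least-squares cross-validation score for a Gaussian-kernel KDE:
    LSCV(h) = int fhat^2 - (2/n) * sum_i fhat_{-i}(x_i),
    with int fhat^2 computed via phi_h * phi_h = phi_{h*sqrt(2)}."""
    n = len(x)
    diff = x[:, None] - x[None, :]
    term1 = norm.pdf(diff, scale=h * np.sqrt(2)).sum() / n**2
    off = norm.pdf(diff, scale=h)
    np.fill_diagonal(off, 0.0)                   # leave-one-out: drop i == j
    term2 = 2.0 * off.sum() / (n * (n - 1))
    return term1 - term2

rng = np.random.default_rng(5)
x = rng.gumbel(loc=100, scale=30, size=200)      # synthetic flood peaks
hs = np.linspace(2, 40, 80)
scores = [lscv_score(h, x) for h in hs]
print(hs[int(np.argmin(scores))])                # LSCV bandwidth choice
# Ties from rounded data shrink many |x_i - x_j| to zero, which is what
# drives LSCV's bandwidth underestimation noted in the abstract.
```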

15.
The determination of the vertical and lateral extent of discontinuities is an important aspect of interpreting seismic reflection data. The Common Fault Point (CFP) stacking method appears to be promising for imaging discontinuities in acoustic impedance by making use of diffracted energy from a spatial array of receivers. The problems of vertical and lateral resolution in the method are most important when carrying out an interpretation. Source signature, subsurface velocities and the depth of the discontinuity are the most important parameters affecting the resolution. For a perfectly coherent source we use the first derivative of the Gaussian function, which is an antisymmetric band-limited wavelet. Rayleigh's, Ricker's and Widess' criteria are also applicable to this wavelet. The limits of vertical and lateral resolution are illustrated using a step-fault model and a dike model respectively. The vertical resolution of the CFP method is found to be of the order of λ/16, which is half the theoretically predicted value for a single receiver. The lateral resolution is still limited by the size of the Fresnel zone, which depends upon the velocity, the two-way time and the dominant frequency of the wavelet. The resolution limits of the CFP method are compared with those of the CDP method, prestack migration and post-stack migration. Obtaining high resolution with real data is limited by the extent to which it is possible to generate a coherent source, or to simulate one during computer processing of prestack seismic data. The CFP method is an artificial-intelligence approach to imaging diffracting points, as it localizes the parts of the structure that scatter acoustic waves.
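The source wavelet used here is easy to generate: the first derivative of a Gaussian is an antisymmetric, band-limited pulse. A small sketch (the width parameter is arbitrary):

```python
import numpy as np

def gaussian_first_derivative(t, sigma):
    """Antisymmetric band-limited wavelet: d/dt of exp(-t^2 / (2 sigma^2)),
    normalised to unit peak amplitude."""
    w = -t / sigma**2 * np.exp(-t**2 / (2 * sigma**2))
    return w / np.abs(w).max()

dt = 0.001                                    # 1 ms sampling
t = np.arange(-0.1, 0.1, dt)
wav = gaussian_first_derivative(t, sigma=0.01)

# Dominant frequency (which, with velocity and two-way time, controls the
# Fresnel zone and hence the lateral resolution discussed above)
spec = np.abs(np.fft.rfft(wav))
f = np.fft.rfftfreq(len(wav), d=dt)
print(f[int(np.argmax(spec))])                # peak frequency in Hz
```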

16.
A study has been made of the interaction between the thermosphere and the ionosphere at high latitudes, with particular regard to the value of the O+-O collision parameter. The European incoherent scatter radar (EISCAT) was used to make tristatic measurements of plasma parameters at F-region altitudes while simultaneous measurements of the neutral wind were made by a Fabry-Perot interferometer (FPI). The radar data were used to derive the meridional neutral winds in a way similar to that used by previous authors. The accuracy of this technique at high latitudes is reduced by the dynamic nature of the auroral ionosphere and the presence of significant vertical winds. The derived winds were compared with the meridional winds measured by the FPI. For each night, the value of the O+-O collision parameter which produced the best agreement between the two data sets was found. The precision of the collision frequency found in this way depends on the accuracy of the data. The statistical method was critically examined in an attempt to account for the variability in the data sets. This study revealed that systematic errors in the data, if unaccounted for by the analysis, have a tendency to increase the value of the derived collision frequency. Previous analyses did not weight each data set in order to account for the quality of the data; an improved method of analysis is suggested.

17.
The automatic detection of geological features such as faults and channels is a challenging problem in today's seismic exploration industry. Edge-detection filters are generally applied to locate such features, and it is desirable to reduce noise in the data before edge detection. The application of smoothing or low-pass filters results in noise suppression, but this causes edge blurring as well. Edge-preserving smoothing is a technique that achieves simultaneous edge preservation and noise suppression. Until now, edge-preserving smoothing has been carried out on rectangularly sampled seismic data. In this paper, an attempt has been made to detect edges by applying edge-preserving smoothing as a pre-processing step to hexagonally sampled seismic data in the spatial domain. Hexagonal sampling is efficient and has greater symmetry than rectangular sampling. Here, spiral architecture has been employed to handle the hexagonally sampled seismic data. A comparison of edge-preserving smoothing on rectangularly and hexagonally sampled seismic data is carried out. The data used were provided by Saudi Aramco. It is shown that hexagonal processing results in well-defined edges with fewer computations.
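One classic edge-preserving smoother, shown here on a rectangular grid purely to illustrate the idea (the paper's hexagonal, spiral-architecture variant is its own construction), is the Kuwahara filter: each sample is replaced by the mean of whichever adjacent quadrant has the smallest variance, so averaging never straddles an edge:

```python
import numpy as np

def kuwahara(img, k=2):
    """Kuwahara edge-preserving smoothing: for each pixel, consider the four
    (k+1)x(k+1) quadrants touching it and output the mean of the quadrant
    with the smallest variance -- smoothing within regions, never across edges."""
    out = img.astype(float).copy()
    p = np.pad(img.astype(float), k, mode="edge")
    ny, nx = img.shape
    for i in range(ny):
        for j in range(nx):
            ci, cj = i + k, j + k
            quads = [p[ci - k:ci + 1, cj - k:cj + 1],
                     p[ci - k:ci + 1, cj:cj + k + 1],
                     p[ci:ci + k + 1, cj - k:cj + 1],
                     p[ci:ci + k + 1, cj:cj + k + 1]]
            v = [q.var() for q in quads]
            out[i, j] = quads[int(np.argmin(v))].mean()
    return out

# A noisy step "fault" edge survives smoothing instead of being blurred
rng = np.random.default_rng(6)
img = np.zeros((40, 40)); img[:, 20:] = 1.0
img += rng.normal(0, 0.2, img.shape)
sm = kuwahara(img)
print(sm[:, 19:21].mean(axis=0))   # edge stays sharp between columns 19 and 20
```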

18.
General expressions are derived for the numerical evaluation of Duhamel's integral and its derivative. The work comprises an extension (to unequal time steps) and an application (to a piecewise-linear forcing function) of the numerical integration approach adopted by Cronin [1]. The application is particularly relevant to the digital computation of response spectra from strong-motion earthquake records.
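A hedged sketch of the numerical evaluation of Duhamel's integral for a damped single-degree-of-freedom oscillator with a piecewise-linear forcing function on unequal time steps, here via fine trapezoidal quadrature of the convolution rather than Cronin's closed-form recurrence:

```python
import numpy as np

def duhamel(t_nodes, p_nodes, m, wn, zeta, t_eval):
    """Duhamel's integral for a damped SDOF system,
        u(t) = (1/(m*wd)) * int_0^t p(tau) * exp(-zeta*wn*(t-tau))
                                          * sin(wd*(t-tau)) dtau,
    with p given as a piecewise-linear function of (possibly unequal)
    time nodes.  Evaluated by trapezoidal quadrature on a fine grid -- a
    simple stand-in for the closed-form piecewise-linear recurrence."""
    wd = wn * np.sqrt(1 - zeta**2)
    u = np.empty_like(t_eval)
    for n, t in enumerate(t_eval):
        tau = np.linspace(0.0, t, 400)
        p = np.interp(tau, t_nodes, p_nodes)      # piecewise-linear forcing
        y = p * np.exp(-zeta * wn * (t - tau)) * np.sin(wd * (t - tau))
        u[n] = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(tau)) / (m * wd)
    return u

# Unequally spaced record of a short pulse (illustrative values)
t_nodes = np.array([0.0, 0.05, 0.12, 0.3, 1.0])
p_nodes = np.array([0.0, 1.0, 0.4, 0.0, 0.0])
t_eval = np.linspace(0.01, 2.0, 50)
u = duhamel(t_nodes, p_nodes, m=1.0, wn=2 * np.pi, zeta=0.05, t_eval=t_eval)
print(u.max())    # peak response -> one ordinate of a response spectrum
```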

19.
Hydrological Sciences Journal, 2013, 58(4): 808-824
Abstract

We report results of three field campaigns conducted at 39 stations. At each station, we measured reflectance spectra in situ and collected water samples for measuring chlorophyll a (CHL) and suspended solids (SS) concentrations in the laboratory. To identify the indicative bands and develop suitable estimation models for CHL (C_CHL) and SS (C_SS) concentrations in Taihu Lake, a spectral-feature method and a derivative method were applied. The following conclusions were drawn: (a) the critical C_CHL and C_SS values probably causing their spectral variation are, respectively, 0, 10, 50 and 75 μg L⁻¹, and 0, 10, 50 and 100 mg L⁻¹; (b) the derivative method is better than the spectral-feature method for estimating C_CHL and C_SS; (c) the optimal variable for CHL is the reflectance second-order derivative at 501 nm or the reflectance first-order derivative at 698 nm; the optimal variable for SS can change when its concentration is low and its range is narrow; otherwise, the optimal variable is the reflectance first-order derivative at 878 nm; and (d) CHL and SS each have an effect on the other's retrieval. The C_CHL estimation accuracy would benefit from narrowing the C_SS range; with C_CHL increasing and its range broadening, the corresponding C_SS estimation accuracy decreases gradually.
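Derivative spectra of the kind used here are straightforward to compute from measured reflectance: first- and second-order derivatives with respect to wavelength, read off at the indicative bands. A sketch with synthetic reflectance (band positions from the abstract; any regression onto C_CHL or C_SS would come from the field data):

```python
import numpy as np

def spectral_derivatives(wl, refl):
    """First- and second-order derivatives of a reflectance spectrum with
    respect to wavelength (np.gradient handles uneven band spacing)."""
    d1 = np.gradient(refl, wl)
    d2 = np.gradient(d1, wl)
    return d1, d2

def band_value(wl, spectrum, band_nm):
    """Value of a (derivative) spectrum at the nearest measured band."""
    return spectrum[int(np.argmin(np.abs(wl - band_nm)))]

# Synthetic in-situ reflectance, 400-900 nm at 1 nm steps (toy red-edge peak)
wl = np.arange(400.0, 901.0)
refl = 0.02 + 0.015 * np.exp(-((wl - 700) / 40.0) ** 2)
d1, d2 = spectral_derivatives(wl, refl)

# Indicative bands from the study: second derivative at 501 nm or first
# derivative at 698 nm for CHL; first derivative at 878 nm for SS.
print(band_value(wl, d2, 501), band_value(wl, d1, 698), band_value(wl, d1, 878))
```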

20.
The effects of systematic (constant) and random errors in the observed data have been investigated analytically for the rational approximation method of computing the second derivative, which involves a summation of products of averages of the gravity field with corresponding weight coefficients in both the numerator and the denominator. A theoretical gravity anomaly over three spheres has been analyzed to demonstrate the high accuracy of the approximation. Since the sums of the weight coefficients in the numerator and denominator are zero and one respectively, the regional gravity anomaly, even when approximated by a constant value over the entire area under computation, can produce a substantially large error in the calculated derivative value. This happens because of the contribution of the regional field to the denominator. Thus, in spite of the high accuracy of the rational approximation, the method has limited application to field cases where a combined gravity field consisting of regional and residual anomalies is usually used. Master curves are presented for the constant and random errors, from which a rough estimate of the percentage error in the second derivative computation can be made, provided one has some idea of the magnitudes of the regional field and the random error.
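The point about the regional field can be demonstrated numerically: because the denominator weights sum to one rather than zero, adding a constant regional value R to every field average leaves the numerator unchanged but shifts the denominator by R, biasing the computed derivative. A toy sketch with made-up weights satisfying the stated sum constraints:

```python
import numpy as np

def rational_second_derivative(g_bar, a, b):
    """Rational approximation of the second derivative as a ratio of two
    weighted sums of gravity-field averages g_bar; sum(a) = 0, sum(b) = 1."""
    return np.dot(a, g_bar) / np.dot(b, g_bar)

# Made-up weight sets obeying the sum constraints stated in the abstract
a = np.array([4.0, -3.0, -1.0])             # numerator weights, sum = 0
b = np.array([0.7, 0.2, 0.1])               # denominator weights, sum = 1
g_bar = np.array([12.0, 10.0, 9.0])         # residual-field ring averages

for R in (0.0, 20.0, 100.0):                # constant regional field added
    print(R, rational_second_derivative(g_bar + R, a, b))
# The numerator is unchanged (weights sum to zero) while the denominator
# grows by R, so the derivative estimate shrinks as the regional field grows.
```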
