Similar Articles
20 similar articles found.
1.
Jones NL, Davis RJ, Sabbah W. Ground Water, 2003, 41(4): 411-419.
Interpolation of contaminant data can present a significant challenge due to sample clustering and sharp gradients in concentration. This paper presents a study of commonly used interpolation schemes applied to three-dimensional plume characterization. Kriging, natural neighbor, and inverse distance weighted interpolation were tested on four actual data sets, and the accuracy of each scheme was gauged using the cross-validation approach. Each scheme was compared to the others, and the effect of various interpolation parameters was studied. The kriging approach resulted in the lowest error at three of the four sites. The simpler and quicker inverse distance weighted approach gave a lower interpolation error at the remaining site and performed well overall. The natural neighbor method had the highest average error at all four sites, despite having been shown to perform well with clustered data. Another unexpected result was that the computationally expensive high-order nodal functions reduced accuracy for the inverse distance weighted and natural neighbor approaches.
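To make the comparison concrete, here is a minimal sketch (not the authors' code) of inverse distance weighted interpolation scored by leave-one-out cross-validation, the error measure used in the study; the exponent, sample layout and synthetic "plume" are illustrative assumptions.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Shepard-style inverse distance weighted estimate at each query point."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power              # eps guards division by zero
    return (w * z_known).sum(axis=1) / w.sum(axis=1)

def loo_cv_rmse(xy, z, power=2.0):
    """Leave-one-out cross-validation RMSE for a given IDW exponent."""
    errs = []
    for i in range(len(z)):
        mask = np.arange(len(z)) != i          # hold out sample i
        z_hat = idw(xy[mask], z[mask], xy[i:i + 1], power)
        errs.append(z_hat[0] - z[i])
    return np.sqrt(np.mean(np.square(errs)))

rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(50, 2))             # sample locations
z = np.exp(-((xy - 50.0) ** 2).sum(axis=1) / 500)  # sharp synthetic "plume"
print("LOO-CV RMSE:", loo_cv_rmse(xy, z, power=2.0))
```

The same scoring loop can wrap a kriging or natural neighbor estimator, which is how the schemes can be ranked site by site.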

2.
Practical decisions are often made based on the subsurface images obtained by inverting geophysical data. Therefore it is important to understand the resolution of the image, which is a function of several factors, including the underlying geophysical experiment, noise in the data, prior information and the ability to model the physics appropriately. An important step towards interpreting the image is to quantify how much of the solution is required to satisfy the data observations and how much exists solely due to the prior information used to stabilize the solution. A procedure to identify the regions that are not constrained by the data would help when interpreting the image. For linear inverse problems this procedure is well established, but for non-linear problems the procedure is more complicated. In this paper we compare two different approaches to resolution analysis of geophysical images: the region of data influence index and a resolution spread computed using point spread functions. The region of data influence method is a fully non-linear approach, while the point spread function analysis is a linearized approach. An approximate relationship between the region of data influence and the resolution matrix is derived, which suggests that the region of data influence is connected with the rows of the resolution matrix. The point-spread-function spread measure is connected with the columns of the resolution matrix, and therefore the point-spread-function spread and the region of data influence are fundamentally different resolution measures. From a practical point of view, if two different approaches indicate similar interpretations on post-inversion images, the confidence in the interpretation is enhanced. We demonstrate the use of the two approaches on a linear synthetic example and a non-linear synthetic example, and apply them to a non-linear electromagnetic field data example.
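For the linearized side of this comparison, the sketch below (a toy under stated assumptions, not the paper's code) builds the resolution matrix R = (G^T G + alpha*I)^-1 G^T G of a Tikhonov-regularized linear problem: its columns are point spread functions, and its rows are the averaging kernels to which the region-of-data-influence index is related. G and alpha are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.normal(size=(30, 60))        # underdetermined linear forward operator
alpha = 0.5                          # regularization weight (assumed)

# estimate m_hat = (G^T G + alpha I)^-1 G^T d, so m_hat = R m_true + noise term
R = np.linalg.solve(G.T @ G + alpha * np.eye(60), G.T @ G)

psf = R[:, 30]     # column 30: point spread function of model parameter 30
row = R[30, :]     # row 30: averaging kernel for the estimate of parameter 30
spread = np.sum((R - np.eye(60)) ** 2)   # simple Dirichlet spread of resolution
print("R[30,30] =", round(R[30, 30], 3), " total spread =", round(spread, 1))
```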

3.
Inverse distance interpolation for facies modeling
Inverse distance weighted interpolation is a robust and widely used estimation technique. In practice, inverse distance interpolation is often favored over kriging-based techniques when meaningful estimates of the field's spatial structure are difficult to obtain. To date, however, its application has been limited to modeling continuous random variables, and there is a need to extend the approach to categorical/discrete random variables. In this paper we propose such an extension using indicator formalism. The applicability of inverse distance interpolation for categorical modeling is then illustrated using Total's Joslyn Lease facies data.
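A minimal sketch of the indicator extension, under assumptions (synthetic facies codes, an arbitrary exponent, and a simple maximum-probability assignment rather than any scheme specific to the Joslyn Lease study):

```python
import numpy as np

def indicator_idw(xy_known, facies, xy_query, power=2.0, eps=1e-12):
    """IDW on per-facies 0/1 indicators; returns labels and 'probabilities'."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    w = w / w.sum(axis=1, keepdims=True)           # weights sum to one per query
    codes = np.unique(facies)
    # interpolate one indicator per facies code, then assign the most probable
    probs = np.stack([(w * (facies == c)).sum(axis=1) for c in codes], axis=1)
    return codes[np.argmax(probs, axis=1)], probs

rng = np.random.default_rng(2)
xy = rng.uniform(0, 1, size=(40, 2))
facies = rng.integers(0, 3, size=40)               # three synthetic facies codes
labels, probs = indicator_idw(xy, facies, rng.uniform(0, 1, size=(5, 2)))
print(labels, probs.round(2))
```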

4.
Full waveform inversion aims to use all information provided by seismic data to deliver high-resolution models of subsurface parameters. However, multiparameter full waveform inversion suffers from an inherent trade-off between parameters and from ill-posedness due to the highly non-linear nature of full waveform inversion. Also, the models recovered using elastic full waveform inversion are subject to local minima if the initial models are far from the optimal solution. In addition, an objective function purely based on the misfit between recorded and modelled data may honour the seismic data, but disregard the geological context. Hence, the inverted models may be geologically inconsistent, and not represent feasible lithological units. We propose that all the aforementioned difficulties can be alleviated by explicitly incorporating petrophysical information into the inversion through a penalty function based on multiple probability density functions, where each probability density function represents a different lithology with distinct properties. We treat lithological units as clusters and use unsupervised K-means clustering to separate the petrophysical information into different units of distinct lithologies that are not easily distinguishable. Through several synthetic examples, we demonstrate that the proposed framework leads full waveform inversion to elastic models that are superior to models obtained either without incorporating petrophysical information, or with a probabilistic penalty function based on a single probability density function.
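The sketch below illustrates the kind of penalty the abstract describes, not the authors' implementation: K-means splits well-log samples in (Vp, Vs, density) space into lithology clusters, one Gaussian probability density function is fitted per cluster, and a model cell is penalized by the negative log of the best-fitting lithology PDF. The logs, the choice of three clusters and the max-likelihood combination are all assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# synthetic well-log samples for three lithologies: (Vp km/s, Vs km/s, rho g/cc)
logs = np.vstack([rng.normal(m, 0.05, size=(200, 3))
                  for m in ([2.0, 1.0, 2.2], [3.0, 1.6, 2.4], [4.0, 2.2, 2.6])])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(logs)
pdfs = [multivariate_normal(logs[km.labels_ == k].mean(axis=0),
                            np.cov(logs[km.labels_ == k].T)) for k in range(3)]

def petro_penalty(cells):
    """-log of the best lithology PDF: small inside a cluster, large between."""
    like = np.stack([p.pdf(cells) for p in pdfs], axis=-1)
    return -np.log(like.max(axis=-1) + 1e-300)

cells = np.array([[2.1, 1.05, 2.21],     # consistent with lithology 1
                  [3.5, 1.90, 2.50]])    # sits between clusters
print(petro_penalty(cells).round(2))     # second cell draws the larger penalty
```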

5.
Great emphasis is being placed on the use of rainfall intensity data at short time intervals to accurately model the dynamics of modern cropping systems, runoff, erosion and pollutant transport. However, rainfall data are often readily available only at a more aggregated time scale, and measurements of rainfall intensity at higher resolution are available only at limited stations. A distribution approach is a good compromise between fine-scale (e.g. sub-daily) models and coarse-scale (e.g. daily) rainfall data, because the use of a rainfall intensity distribution can substantially improve hydrological models. In the distribution approach, the cumulative distribution function of rainfall intensity is employed to represent the effect of the within-day temporal variability of rainfall, and a disaggregation model (i.e. a model that disaggregates time series into sets of higher resolution) is used to estimate distribution parameters from the daily average effective precipitation. Scaling problems in hydrologic applications often occur in both the space and time dimensions, and temporal scaling effects on hydrologic responses may exhibit great spatial variability. Transferring disaggregation model parameter values from one station to an arbitrary position is prone to error, so a satisfactory alternative is to employ spatial interpolation between stations. This study investigates the spatial interpolation of the probability-based disaggregation model. Rainfall intensity observations are represented as a two-parameter lognormal distribution, and methods are developed to estimate distribution parameters from either high-resolution rainfall data or coarse-scale precipitation information such as effective intensity rates. Model parameters are spatially interpolated by kriging to obtain the rainfall intensity distribution when only daily totals are available. The method was applied to 56 pluviometer stations in Western Australia. Two goodness-of-fit statistics were used to evaluate the skill: the daily and quantile coefficients of efficiency between simulations and observations. Simulations based on cross-validation show that kriging performed better than the other two spatial interpolation approaches tested (B-splines and thin-plate splines).
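A small sketch of the distribution step, under assumptions: wet-interval intensities from a high-resolution record are fitted to a two-parameter lognormal by moments of the log-intensity, and the fitted distribution is then queried. The 5-minute record, wet threshold and exceedance level are illustrative; in the paper, the fitted parameter fields would then be kriged between stations.

```python
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(4)
intensity = rng.lognormal(mean=1.0, sigma=0.8, size=288)  # 5-min intensities, mm/h
wet = intensity[intensity > 0.1]                          # drop dry intervals

mu, sigma = np.log(wet).mean(), np.log(wet).std(ddof=1)   # moments of log-intensity
print(f"fitted lognormal: mu={mu:.3f}, sigma={sigma:.3f}")

# within-day variability summarized by the fitted CDF, e.g. an exceedance rate
p_exceed = lognorm.sf(10.0, s=sigma, scale=np.exp(mu))
print("P(intensity > 10 mm/h) =", round(p_exceed, 3))
```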

6.
We propose a mathematical representation that qualitatively describes the spatio-temporal slip evolution during earthquake rupture in an efficient and easy-to-use manner for numerical simulations of strong ground motion. It is based on three basis functions and associated expansion coefficients, and extends the approach of Ide and Takeo (J. Geophys. Res., 102:27379–27391, 1997). We compare the two approaches using simple kinematic source models to illustrate their differences, and show that our approach more accurately represents the spatio-temporal slip evolution. We also propose a technique based on our representation for extracting a spatio-temporal slip velocity function from a kinematic source model obtained by conventional source inversion. We then demonstrate the feasibility of our procedure by applying it to an inverted source model of the 26 March 1997 Northwestern Kagoshima, Japan, earthquake (Mw 6.1). In simulations for actual earthquakes, source models obtained from kinematic source inversions are commonly employed. Our scheme could be used as an interpolation method for slip time functions from relatively coarse finite-source models obtained by conventional kinematic source inversions.

7.
8.
Estimation of the magnitude of reservoir induced seismicity is essential for seismic risk analysis of dam sites. Different geological and empirical methods dealing with the mechanism or magnitude of such earthquakes are available in the literature. In this study, a method based on an artificial neural network utilizing radial basis functions (RBF network) was employed to analyze the problem. The network has only two input neurons, one representing the maximum depth of the reservoir and the other being a comprehensive parameter representing reservoir geometry. Magnitudes of the induced earthquakes predicted using the RBF network were compared with the actual recorded data. Compared with the conventional statistical approach, the proposed method gives a better prediction, both in terms of coefficients of correlation and error rates.
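A hedged sketch of such an RBF network with the two inputs named above (maximum reservoir depth and a geometry parameter); the Gaussian centres, width and training data are synthetic stand-ins for the study's dataset.

```python
import numpy as np

def rbf_design(X, centres, width):
    """Design matrix of Gaussian radial basis functions."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

rng = np.random.default_rng(5)
X = rng.uniform(0, 1, size=(60, 2))       # [depth, geometry], rescaled to [0, 1]
y = 3.0 + 2.0 * X[:, 0] + X[:, 1] ** 2 + rng.normal(0, 0.1, 60)  # magnitudes

centres = X[rng.choice(60, size=10, replace=False)]  # data points as RBF centres
Phi = rbf_design(X, centres, width=0.3)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)          # linear output-layer weights

y_hat = rbf_design(X, centres, 0.3) @ w
print("correlation coefficient:", np.corrcoef(y, y_hat)[0, 1].round(3))
```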

9.
As mineral exploration seeks deeper targets, there will be a greater reliance on geophysical data, and a better understanding of the geological meaning of the responses will be required; this must be achieved with less geological control from drilling. Also, exploring based on the mineral system concept requires particular understanding of geophysical responses associated with altered rocks. Where petrophysical datasets of adequate sample size and measurement quality are available, physical properties show complex variations, reflecting the combined effects of various geological processes. Large datasets, analysed as populations, are required to understand the variations. We recommend the display of petrophysical data as frequency histograms because the nature of the data distribution is easily seen with this form of display. A petrophysical dataset commonly contains a combination of overlapping sub-populations, influenced by different geological factors. To understand the geological controls on physical properties in hard rock environments, it is necessary to analyse the petrophysical data not only in terms of the properties of different rock types, but also to consider the effects of processes such as alteration, weathering, metamorphism and strain, and variables such as porosity and stratigraphy. Addressing this complexity requires much more supporting geological information than is acquired in current practice. The widespread availability of field-portable instruments means quantitative geochemical and mineralogical data can now be readily acquired, making it unnecessary to rely primarily on categorical rock classification schemes. The petrophysical data can be combined with geochemical, petrological and mineralogical data to derive explanations for observed physical property variations based not only on rigorous rock classification methods, but also on quantitative estimates of alteration and weathering. To understand how geological processes will affect different physical properties, it is useful to define three end-member forms of behaviour. Bulk behaviour depends on the physical properties of the dominant mineral components; density and, to a lesser extent, seismic velocity show such behaviour. Grain and texture behaviour occur when minor components of the rock are the dominant controls on its physical properties. Grain size and shape control grain properties, and for texture properties the relative positions of these grains are also important. Magnetic and electrical properties behave in this fashion. Thinking in terms of how geological processes change the key characteristics of the major and minor mineralogical components allows the resulting changes in physical properties to be understood and anticipated.
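A minimal sketch of the recommended display, with invented values: a frequency histogram of a synthetic magnetic-susceptibility dataset, plotted on a log axis because such data often combine overlapping sub-populations spanning orders of magnitude.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(6)
# two overlapping sub-populations, e.g. fresh vs. altered rock (assumed)
k = np.concatenate([rng.lognormal(-9, 0.8, 400), rng.lognormal(-6, 0.6, 150)])

plt.hist(np.log10(k), bins=40)
plt.xlabel("log10 magnetic susceptibility (SI)")
plt.ylabel("frequency")
plt.title("Petrophysical data displayed as a frequency histogram")
plt.show()
```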

10.
The Radon transform is a powerful tool with many applications in different stages of seismic data processing because of its capability to focus seismic events in the transform domain. A three-parameter Radon transform can optimally focus and separate different seismic events if its basis functions accurately match the events. In anisotropic media, the conventional hyperbolic or shifted-hyperbolic basis functions lose their accuracy and cannot preserve data fidelity, especially at large offsets. To address this issue, we propose an accurate traveltime approximation for transversely isotropic media with a vertical symmetry axis, and derive two versions of Radon basis functions, time-variant and time-invariant. A time-variant basis function can be used in time-domain Radon transform algorithms, while a time-invariant version can be used in the generally more efficient frequency-domain algorithms. Comparing the time-variant and time-invariant Radon transforms with the proposed basis functions, the time-invariant version can better focus different seismic events; it is also more accurate, especially in the presence of vertical heterogeneity. However, the proposed time-invariant basis functions are suitable only for a specific type of layered anisotropic media, known as factorized media. We test the proposed methods and illustrate their successful application to trace interpolation and coherent noise attenuation.
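The sketch below shows the generic machinery the paper builds on, not its anisotropic basis functions: a time-domain adjoint Radon operator that stacks data along traveltime curves t(x; t0, q). A conventional hyperbolic basis t = sqrt(t0^2 + q x^2) stands in for the proposed VTI-accurate approximation, and all geometry is synthetic.

```python
import numpy as np

def radon_adjoint(data, t, x, t0_axis, q_axis):
    """Stack data(t, x) along hyperbolic curves into a model(t0, q) panel."""
    dt = t[1] - t[0]
    model = np.zeros((len(t0_axis), len(q_axis)))
    for i, t0 in enumerate(t0_axis):
        for j, q in enumerate(q_axis):
            tau = np.sqrt(t0 ** 2 + q * x ** 2)          # basis traveltime curve
            it = np.rint((tau - t[0]) / dt).astype(int)
            ok = (it >= 0) & (it < len(t))               # keep samples inside data
            model[i, j] = data[it[ok], np.nonzero(ok)[0]].sum()
    return model

nt, nx = 500, 60
t = np.arange(nt) * 0.004                    # 4 ms sampling
x = np.linspace(0.0, 3.0, nx)                # offsets (km)
data = np.zeros((nt, nx))
event = np.sqrt(0.8 ** 2 + 0.25 * x ** 2)    # one event: t0=0.8 s, q=0.25 s^2/km^2
data[np.rint(event / 0.004).astype(int), np.arange(nx)] = 1.0

m = radon_adjoint(data, t, x, np.linspace(0.5, 1.2, 50), np.linspace(0.0, 0.5, 40))
print("panel peaks at (t0, q) indices:", np.unravel_index(m.argmax(), m.shape))
```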

11.
A reservoir parameter inversion method based on constrained least squares and trust regions
Lin Tian, Meng Xiaohong, Zhang Zhifu. Chinese Journal of Geophysics, 2017, 60(10): 3969-3983.
Seismic inversion methods for reservoir parameters built on inclusion-based rock-physics models suffer from complex mathematical forms, strong non-uniqueness, poor adaptability and costly iterative computation. This paper proposes a reservoir parameter seismic inversion method based on constrained least squares and a trust-region algorithm. Starting from a rock-physics model that links reservoir parameters to elastic parameters, the method builds a least-squares objective function and performs a globally convergent search under trust-region constraints, which effectively reduces the non-uniqueness of the seismic inversion and greatly accelerates convergence. In particular, introducing a vertical constraint into the least-squares solution markedly improves the noise robustness of the inversion results. Model tests and application to field data verify the feasibility and applicability of the method.
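A minimal sketch in the spirit of the method, with a toy forward model in place of the paper's inclusion-based rock-physics relation: a least-squares objective linking reservoir parameters to elastic observables, augmented by a vertical-continuity-style term, is solved with SciPy's bound-constrained trust-region solver ('trf'). The model g(), the previous-sample constraint and all numbers are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def g(m):
    """Toy forward model: (porosity, clay fraction) -> two elastic observables."""
    phi, clay = m
    return np.array([3.5 - 2.0 * phi - 0.5 * clay,    # pseudo-Vp
                     2.0 - 1.2 * phi - 0.8 * clay])   # pseudo-Vs

m_true = np.array([0.20, 0.15])
d_obs = g(m_true) + np.random.default_rng(7).normal(0, 0.01, 2)

def residual(m, m_prev=np.array([0.18, 0.18]), lam=0.1):
    # data misfit plus a vertical constraint toward the sample above (lam assumed)
    return np.concatenate([g(m) - d_obs, lam * (m - m_prev)])

sol = least_squares(residual, x0=[0.30, 0.30], bounds=([0, 0], [0.4, 0.6]),
                    method="trf")                     # trust-region reflective
print("estimate:", sol.x.round(3), " true:", m_true)
```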

12.
The near-surface problem is a common challenge in land seismic data processing, where events of interest are often obscured by near-surface anomalies. One method to handle this challenge is near-surface layer replacement, a wavefield reconstruction process based on downward wavefield extrapolation with the near-surface velocity model and upward wavefield extrapolation with a replacement velocity model. In theory, this requires the original wavefield to be densely sampled. In reality, data acquisition is always sparse for economic reasons, so near-surface layer replacement must resort to data interpolation. For datasets with near-surface challenges, the complex event behaviour makes a suitable interpolation scheme a challenging problem in itself, which in turn makes it difficult to carry out the layer replacement. In this research note, we first point out that the final objective of near-surface layer replacement is not to obtain a newly reconstructed wavefield but to obtain a better final image. Based on this observation, we propose a new approach, interpolation-free near-surface layer replacement, which can handle complex datasets without any interpolation. Data volume expansion is the key idea, and with its help, interpolation-free near-surface layer replacement preserves the valuable information of areas of interest in the original dataset. Two datasets, a two-dimensional synthetic dataset and a three-dimensional field dataset, are used to demonstrate the idea. One conclusion is that attempting to interpolate data before layer replacement may deteriorate the final image, whereas interpolation-free near-surface layer replacement preserves all image details in the subsurface.
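For orientation, here is a hedged sketch of the extrapolation kernel that any layer replacement rests on: a constant-velocity f-k phase-shift step that continues a wavefield down through the near-surface layer with velocity v1 and back up with a replacement velocity v2. Real near-surface models are laterally varying, and the velocities, geometry and random "data" here are stand-ins.

```python
import numpy as np

def phase_shift(wavefield, dt, dx, dz, v, direction=+1):
    """One dz step of f-k phase-shift extrapolation (direction=+1 is downward)."""
    nt, nx = wavefield.shape
    W = np.fft.fft2(wavefield)
    f = np.fft.fftfreq(nt, dt)[:, None]      # temporal frequencies (Hz)
    kx = np.fft.fftfreq(nx, dx)[None, :]     # horizontal wavenumbers (1/m)
    kz2 = (f / v) ** 2 - kx ** 2             # squared vertical wavenumber
    kz = np.sqrt(np.maximum(kz2, 0.0))
    W = np.where(kz2 > 0,                    # mute evanescent components
                 W * np.exp(-2j * np.pi * direction * kz * dz), 0.0)
    return np.real(np.fft.ifft2(W))

rng = np.random.default_rng(8)
d = rng.normal(size=(256, 64))                                  # stand-in wavefield
down = phase_shift(d, dt=0.004, dx=12.5, dz=50.0, v=800.0)      # through near-surface
replaced = phase_shift(down, dt=0.004, dx=12.5, dz=50.0, v=2000.0, direction=-1)
print(replaced.shape)
```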

13.
Statistical approach to inverse distance interpolation
Inverse distance interpolation is a robust and widely used estimation technique. Variants of kriging are often proposed as statistical techniques with superior mathematical properties such as minimum error variance; however, the robustness and simplicity of inverse distance interpolation motivate its continued use. This paper presents an approach to integrate statistical controls such as minimum error variance into inverse distance interpolation. The optimal exponent and number of data may be calculated globally or locally. Measures of uncertainty and local smoothness may be derived from inverse distance estimates.
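A minimal sketch of two of these statistical controls, with assumed formulas: the global exponent is chosen by scanning candidate powers for the lowest cross-validation error, and a local uncertainty proxy is the inverse-distance-weighted variance of the neighbouring data about the estimate.

```python
import numpy as np

def idw_weights(xy_known, xy0, power, eps=1e-12):
    d = np.linalg.norm(xy_known - xy0, axis=1)
    w = 1.0 / (d + eps) ** power
    return w / w.sum()

def cv_error(xy, z, power):
    """Mean squared leave-one-out error for one candidate exponent."""
    err = 0.0
    for i in range(len(z)):
        m = np.arange(len(z)) != i
        err += (idw_weights(xy[m], xy[i], power) @ z[m] - z[i]) ** 2
    return err / len(z)

rng = np.random.default_rng(9)
xy = rng.uniform(0, 10, (60, 2))
z = np.sin(xy[:, 0]) + 0.1 * rng.normal(size=60)

powers = np.arange(0.5, 4.51, 0.5)
best = powers[np.argmin([cv_error(xy, z, p) for p in powers])]  # global optimum

w = idw_weights(xy, np.array([5.0, 5.0]), best)
z_hat = w @ z
sigma = np.sqrt(w @ (z - z_hat) ** 2)     # IDW-weighted spread about the estimate
print(f"optimal exponent={best}, estimate={z_hat:.3f}, uncertainty={sigma:.3f}")
```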

14.
We present a new method of transforming borehole gravity meter data into vertical density logs. This new method is based on the regularized spectral-domain deconvolution of density functions. It is a novel alternative to the "classical" approach, which is very sensitive to noise, especially for high-definition surveys with relatively small sampling steps. The proposed approach responds well to vertical changes of density described by linear and polynomial functions. The model used is a vertical cylinder with large outer radius (a flat circular plate) crossed by a synthetic vertical borehole profile. The task is formulated as a minimization problem, and the result is a low-pass filter (controlled by a regularization parameter) in the spectral domain. This regularized approach is tested on synthetic datasets with noise and gives much more stable solutions than the classical approach based on the infinite Bouguer slab approximation. Tests on real-world datasets are then presented. The properties and presented results make our proposed approach a viable alternative to other processing methods for borehole gravity meter data based on horizontally layered formations.
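A hedged, one-dimensional sketch of the idea (not the paper's cylinder model): a blocky density log smoothed by an assumed kernel and contaminated with noise is recovered by spectral division with a Tikhonov damping term, which acts exactly as a low-pass filter controlled by a regularization parameter.

```python
import numpy as np

rng = np.random.default_rng(10)
n = 512
z = np.arange(n, dtype=float)                        # depth samples (m)
rho = 2.3 + 0.2 * (z > 200) - 0.15 * (z > 380)       # blocky density log (g/cm^3)

kernel = np.exp(-0.5 * ((np.arange(n) - n // 2) / 8.0) ** 2)
kernel /= kernel.sum()                               # assumed smoothing response
K = np.fft.fft(np.fft.ifftshift(kernel))
d = np.real(np.fft.ifft(K * np.fft.fft(rho))) + rng.normal(0, 0.01, n)

alpha = 1e-3                                         # regularization parameter
rho_hat = np.real(np.fft.ifft(np.conj(K) * np.fft.fft(d) / (np.abs(K) ** 2 + alpha)))
print("rms recovery error:", np.sqrt(np.mean((rho_hat - rho) ** 2)).round(4))
```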

15.
The most common approach for processing data from gravity field satellite missions is the so-called time-wise approach, in which satellite data are treated as a time series and processed by a standard least-squares method. This approach is very flexible but computationally demanding. To improve computational efficiency and numerical stability, the so-called torus and Rosborough approaches have been developed. So far, these approaches have been applied only to global gravity field determination, based on spherical harmonics as basis functions. For regional applications, basis functions with local support are superior to spherical harmonics, because they provide the same approximation quality with far fewer parameters. This paper therefore develops and tests the torus and Rosborough approaches for regional gravity field improvements, using radial basis functions. The developed regional Rosborough approach is tested against a changing gravity field produced by simulated ice-mass changes over Greenland. With only 350 parameters, a recovery of the simulated mass changes with a relative accuracy of 5% is possible.

16.
Optimal Model for Geoid Determination from Airborne Gravity
Two different approaches for transforming airborne gravity disturbances, derived from gravity observations on low-elevation flying platforms, into geoidal undulations are formulated, tested and discussed in this contribution. Their mathematical models are based on Green's integral equations, defined in the two approaches at two different levels and applied in mutually reversed order. While one approach corresponds to the classical method commonly applied in processing ground gravity data, the other represents a new method for processing gravity data in geoid determination that is unique to airborne gravimetry. Although theoretically equivalent in the continuous sense, both approaches are tested numerically for possible numerical advantages, especially due to the inverse of the discretized Fredholm integral equation of the first kind applied to different data. High-frequency synthetic gravity data contaminated by 2-mGal random noise, as expected from current airborne gravity systems, are used for numerical testing. The results show that both approaches can deliver a comparable cm-level accuracy of the geoidal undulations for the given data. The new approach, however, has significantly higher computational efficiency and would thus be recommended for real-life geoid computations. Additional errors related to regularization of gravity data and the geoid, and to the accuracy of the reference field, which would further deteriorate the quality of the estimated geoidal undulations, are not considered in this study.
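A small sketch of the numerical core named above, under assumptions: a discretized Fredholm integral equation of the first kind, d = A m, inverted with a truncated SVD to stabilize the solution. The smoothing kernel, noise level (echoing the 2-mGal figure) and truncation rule are illustrative.

```python
import numpy as np

n = 200
s = np.linspace(0, 1, n)
A = np.exp(-20.0 * np.abs(s[:, None] - s[None, :])) / n  # smoothing kernel matrix
m_true = np.sin(2 * np.pi * s) + 0.5 * np.sin(6 * np.pi * s)
d = A @ m_true + np.random.default_rng(11).normal(0, 2e-4, n)

U, sv, Vt = np.linalg.svd(A)
k = int(np.sum(sv > 1e-3 * sv[0]))            # truncate small singular values
m_hat = Vt[:k].T @ ((U[:, :k].T @ d) / sv[:k])
rms = np.sqrt(np.mean((m_hat - m_true) ** 2))
print(f"kept {k}/{n} singular values, rms error {rms:.3f}")
```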

17.
In this paper we present a case history of seismic reservoir characterization in which we estimate the probability of facies from seismic data and simulate a set of reservoir models honouring the seismically-derived probabilistic information. In the appraisal and development phases, seismic data play a key role in reservoir characterization and static reservoir modelling, since in most cases seismic data are the only information available far from the wells. However, seismic data do not provide any direct measurement of reservoir properties, which must then be estimated as the solution of a joint inverse problem. For this reason, we show the application of a complete workflow for static reservoir modelling in which seismic data are integrated to derive probability volumes of facies and reservoir properties that condition reservoir geostatistical simulations. The studied case is a clastic reservoir in the Barents Sea, where a complete set of well logs from five wells and a set of partial-stacked seismic data are available. The multi-property workflow is based on seismic inversion, petrophysics and rock-physics modelling. In particular, log-facies are defined on the basis of sedimentological information, petrophysical properties and their elastic response. The link between petrophysical and elastic attributes is preserved by introducing a rock-physics model into the inversion methodology. Finally, the uncertainty in the reservoir model is represented by multiple geostatistical realizations. The main result of this workflow is a set of facies realizations and associated rock properties that honour, within a fixed tolerance, the seismic and well-log data and assess the uncertainty associated with reservoir modelling.

18.
The prediction and estimation of suspended sediment concentration are investigated using multi-layer perceptrons (MLP). The fastest MLP training algorithm, the Levenberg-Marquardt algorithm, is used to optimize the network weights for data from two stations on the Tongue River in Montana, USA. The first part of the study deals with the prediction and estimation of upstream and downstream station sediment data separately, and the second part focuses on the estimation of downstream suspended sediment data using data from both stations. In each case, the MLP test results are compared with those of generalized regression neural networks (GRNN), radial basis function (RBF) networks and multi-linear regression (MLR) for the best input combinations. Based on the comparisons, the MLP generally gives better suspended sediment concentration estimates than the other neural network techniques and the conventional statistical method (MLR). However, for the estimation of the maximum sediment peak, the RBF was mostly found to be better than the MLP and the other techniques. The results also indicate that the RBF and GRNN may outperform the MLP in the estimation of the total sediment load.
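A hedged sketch of the MLP setup: scikit-learn does not ship the Levenberg-Marquardt trainer used in the paper, so L-BFGS serves here as a stand-in second-order optimizer, and the flow/sediment records are synthetic stand-ins for the Tongue River data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(12)
Q = rng.lognormal(2.0, 0.6, 500)                         # daily discharge (stand-in)
X = np.column_stack([np.log(Q), np.log(np.roll(Q, 1))])  # today's and yesterday's flow
y = np.log(0.05 * Q ** 1.4) + rng.normal(0, 0.2, 500)    # log sediment concentration

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                   max_iter=2000, random_state=0).fit(X_tr, y_tr)
print("test R^2:", round(mlp.score(X_te, y_te), 3))
```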

19.
A new parameter estimation algorithm based on the ensemble Kalman filter (EnKF) is developed. Combined with the proposed problem parametrization, the algorithm offers an efficient parameter estimation method that converges using very small ensembles. The inverse problem is formulated as a sequential data integration problem. Gaussian process regression is used to integrate the prior knowledge (static data). The search space is further parameterized using a Karhunen–Loève expansion to build a set of basis functions that spans the search space. Optimal weights of the reduced basis functions are estimated by an iterative regularized EnKF algorithm. The filter is converted into an optimization algorithm by a pseudo time-stepping technique such that the model output matches the time-dependent data. The EnKF Kalman gain matrix is regularized using a truncated SVD to filter out noisy correlations. Numerical results show that the proposed algorithm is a promising approach for parameter estimation of subsurface flow models.
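A minimal sketch of one analysis step of such a regularized EnKF, under assumptions: the Kalman gain is formed from ensemble covariances, and the data-covariance inverse is stabilized with a truncated SVD. The linear forward map stands in for a flow simulator acting on Karhunen-Loève weights.

```python
import numpy as np

rng = np.random.default_rng(13)
Ne, Nm, Nd = 30, 10, 5                    # ensemble size, parameters, data
M = rng.normal(size=(Nm, Ne))             # prior ensemble of KL weights
H = rng.normal(size=(Nd, Nm))             # linearized forward map (stand-in)
d_obs = H @ rng.normal(size=Nm)           # synthetic observations
R = 0.01 * np.eye(Nd)                     # observation-noise covariance

D = H @ M + rng.multivariate_normal(np.zeros(Nd), R, Ne).T  # perturbed predictions
Ma = M - M.mean(axis=1, keepdims=True)
Da = D - D.mean(axis=1, keepdims=True)
C_md, C_dd = Ma @ Da.T / (Ne - 1), Da @ Da.T / (Ne - 1)

U, s, Vt = np.linalg.svd(C_dd + R)
k = int(np.sum(s > 1e-6 * s[0]))          # truncated SVD filters noisy correlations
inv = Vt[:k].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T
M = M + C_md @ inv @ (d_obs[:, None] - D) # EnKF analysis update
print("posterior ensemble spread:", M.std(axis=1).mean().round(3))
```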

20.
A new seismic interpolation and denoising method with a curvelet transform matching filter, employing the fast iterative shrinkage-thresholding algorithm (FISTA), is proposed. The approach treats the matching filter, seismic interpolation, and denoising as a single inverse problem solved with an iterative inversion algorithm. The curvelet transform is highly sparsifying and useful for separating signal from noise, so it can accurately solve the matching problem using FISTA. When the new method is applied to a synthetic noisy data set and a data set with missing traces, the optimum matching result is obtained: noise is greatly suppressed, missing seismic data are filled by interpolation, and the waveform is highly consistent. We then verified the method on real data, with satisfactory results, showing that it can reconstruct missing traces even at low signal-to-noise ratio (SNR). All three problems are solved simultaneously via the FISTA algorithm, which not only increases processing efficiency but also improves the SNR of the seismic data.
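A hedged sketch of the FISTA iteration for sparsity-promoting interpolation, minimizing ||S F⁻¹ x - d||² + λ||x||₁ with soft thresholding and Nesterov momentum; an orthonormal DCT stands in for the curvelet transform, and the sampling mask, λ and step size are assumptions.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(14)
n = 256
coef = np.zeros(n); coef[[20, 85, 180]] = [1.0, -0.7, 0.5]
signal = idct(coef, norm="ortho")                 # signal sparse in the DCT domain
mask = rng.random(n) > 0.4                        # 40% of the traces missing
d = np.where(mask, signal + rng.normal(0, 0.01, n), 0.0)

def soft(x, t):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

x = np.zeros(n); y = x.copy(); t_k = 1.0; lam = 0.02
for _ in range(200):                              # FISTA main loop, step size 1
    grad = dct(mask * (mask * idct(y, norm="ortho") - d), norm="ortho")
    x_new = soft(y - grad, lam)
    t_new = (1 + np.sqrt(1 + 4 * t_k ** 2)) / 2
    y = x_new + (t_k - 1) / t_new * (x_new - x)   # Nesterov momentum
    x, t_k = x_new, t_new

err = np.sqrt(np.mean((idct(x, norm="ortho") - signal) ** 2))
print("rms recovery error:", err.round(4))
```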
