Similar documents
20 similar documents found (search time: 718 ms)
1.
Using auxiliary information to improve the prediction accuracy of soil properties in a physically meaningful and technically efficient manner has been widely recognized in pedometrics. In this paper, we explored a novel technique to effectively integrate sampling data and auxiliary environmental information, including continuous and categorical variables, within the framework of the Bayesian maximum entropy (BME) theory. Soil samples and observed auxiliary variables were combined to generate probability distributions of the predicted soil variable at unsampled points. These probability distributions served as soft data of the BME theory at the unsampled locations and, together with the hard data (sample points), were used in spatial BME prediction. To gain practical insight, the proposed approach was implemented in a real-world case study involving a dataset of soil total nitrogen (TN) contents in Shayang County, Hubei Province (China). Five terrain indices, soil types, and soil texture were used as auxiliary variables to generate soft data. The spatial distribution of soil total nitrogen was predicted by BME, regression kriging (RK) with auxiliary variables, and ordinary kriging (OK). The results of the prediction techniques were compared in terms of the Pearson correlation coefficient (r), mean error (ME), and root mean squared error (RMSE). These results showed that the BME predictions were less biased and more accurate than those of the kriging techniques. In sum, the present work extended the BME approach to incorporate certain kinds of auxiliary information in a rigorous and efficient manner. Our findings showed that the BME prediction technique involving the transformation of variables into soft data can improve prediction accuracy considerably compared to other techniques currently in use, like RK and OK.
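A minimal sketch of the three comparison metrics named above (Pearson r, ME, RMSE), assuming two hypothetical arrays of observed and predicted TN values; the numbers are toy data, not results from the study:

```python
import numpy as np

def validation_metrics(observed, predicted):
    """Pearson r, mean error (bias), and RMSE between observed and predicted values."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    r = np.corrcoef(observed, predicted)[0, 1]             # Pearson correlation
    me = np.mean(predicted - observed)                     # mean error: near 0 = less bias
    rmse = np.sqrt(np.mean((predicted - observed) ** 2))   # root mean squared error
    return r, me, rmse

# toy usage with made-up soil TN values (g/kg)
obs = np.array([1.2, 0.9, 1.5, 1.1, 0.8])
pred = np.array([1.1, 1.0, 1.4, 1.2, 0.9])
print(validation_metrics(obs, pred))
```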

2.
Spatial interpolation methods for nonstationary plume data
Plume interpolation consists of estimating contaminant concentrations at unsampled locations using the available contaminant data surrounding those locations. The goal of ground water plume interpolation is to maximize the accuracy in estimating the spatial distribution of the contaminant plume given the data limitations associated with sparse monitoring networks with irregular geometries. Beyond data limitations, contaminant plume interpolation is a difficult task because contaminant concentration fields are highly heterogeneous, anisotropic, and nonstationary phenomena. This study provides a comprehensive performance analysis of six interpolation methods for scatter-point concentration data, ranging in complexity from intrinsic kriging based on intrinsic random function theory to a traditional implementation of inverse-distance weighting. High resolution simulation data of perchloroethylene (PCE) contamination in a highly heterogeneous alluvial aquifer were used to generate three test cases, which vary in the size and complexity of their contaminant plumes as well as the number of data available to support interpolation. Overall, the variability of PCE samples and preferential sampling controlled how well each of the interpolation schemes performed. Quantile kriging was the most robust of the interpolation methods, showing the least bias from both of these factors. This study provides guidance to practitioners balancing opposing theoretical perspectives, ease-of-implementation, and effectiveness when choosing a plume interpolation method.
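Of the six methods compared, inverse-distance weighting is the simplest to sketch; the power parameter p = 2, the `idw` helper, and the toy PCE values are illustrative assumptions, not settings from the study:

```python
import numpy as np

def idw(xy_obs, z_obs, xy_new, p=2.0, eps=1e-12):
    """Inverse-distance-weighted estimate at each row of xy_new."""
    d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=2)  # (n_new, n_obs)
    w = 1.0 / (d + eps) ** p           # weights decay with distance
    w /= w.sum(axis=1, keepdims=True)  # normalize weights to sum to 1
    return w @ z_obs

# toy example: PCE concentrations at four wells, estimated at two unsampled points
xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
conc = np.array([5.0, 1.0, 2.0, 0.5])
print(idw(xy, conc, np.array([[5.0, 5.0], [1.0, 1.0]])))
```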

3.
Data collected along transects are becoming more common in environmental studies as indirect measurement devices, such as geophysical sensors that can be attached to mobile platforms, become more prevalent. Because exhaustive sampling is not always possible under constraints of time and cost, geostatistical interpolation techniques are used to estimate unknown values at unsampled locations from transect data. It is known that outlying observations can receive significantly greater ordinary kriging weights than centrally located observations when the data are contiguously aligned along a transect within a finite search window. Deutsch (1994) proposed a kriging algorithm, finite domain kriging, that uses a redundancy measure in place of the covariance function in the data-to-data kriging matrix to address the problem of overweighting the outlying observations. This paper compares the performance of two kriging techniques, ordinary kriging (OK) and finite domain kriging (FDK), in examining unexploded ordnance (UXO) densities by comparing prediction errors at unsampled locations. The impact of sampling design on object count prediction is also investigated using data collected from transects and at random locations. The Poisson process is used to model the spatial distribution of UXO for three 5000 × 5000 m fields: one has no ordnance target (homogeneous field), while the other two have an ordnance target in the center of the site (isotropic and anisotropic fields). In general, for a given sampling transect width, the differences between OK and FDK in terms of the mean error and the mean square error are not significant regardless of the sampled area and the choice of the field. When 20% or more of the site is sampled, the estimation of object counts is unbiased on average for all three fields regardless of the choice of the transect width and the choice of the kriging algorithm. However, for non-homogeneous fields (isotropic and anisotropic fields), the mean error fluctuates considerably when a small number of transects are sampled. The difference between transect sampling and random sampling in terms of prediction errors becomes almost negligible if more than 20% of the site is sampled. Overall, FDK is no better than OK in terms of prediction performance when the transect sampling procedure is used.
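The homogeneous test field above can be mimicked in a few lines; the intensity value is an assumption for illustration (the non-homogeneous fields would add a target via thinning or a spatially varying intensity):

```python
import numpy as np

rng = np.random.default_rng(42)

def homogeneous_poisson(intensity, width=5000.0, height=5000.0):
    """Simulate a homogeneous Poisson point process on a width x height field."""
    n = rng.poisson(intensity * width * height)  # total count ~ Poisson(lambda * area)
    x = rng.uniform(0.0, width, n)               # given n, locations are uniform
    y = rng.uniform(0.0, height, n)
    return np.column_stack([x, y])

uxo = homogeneous_poisson(intensity=4e-5)  # assumed 4e-5 objects per square metre
print(uxo.shape[0], "simulated objects")
```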

4.
Sequential kriging and cokriging: Two powerful geostatistical approaches
A sequential linear estimator is developed in this study to progressively incorporate new or different spatial data sets into the estimation. It begins with a classical linear estimator (i.e., kriging or cokriging) to estimate means conditioned to a given observed data set. When an additional data set becomes available, the sequential estimator improves the previous estimate by using linearly weighted sums of differences between the new data set and previous estimates at sample locations. Like the classical linear estimator, the weights used in the sequential linear estimator are derived from a system of equations that contains covariances and cross-covariances between sample locations and the location where the estimate is to be made. However, the covariances and cross-covariances are conditioned upon the previous data sets. The sequential estimator is shown to produce the best unbiased linear estimate, and to provide the same estimates and variances as classical simple kriging or cokriging with the simultaneous use of the entire data set. However, by using data sets sequentially, this new algorithm alleviates numerical difficulties associated with the classical kriging or cokriging techniques when large amounts of data are used. It also provides a new way to incorporate additional information into a previous estimation.
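For context, the classical linear estimator that the sequential scheme starts from can be sketched as plain simple kriging; the exponential covariance model and all sample values below are assumptions for illustration, not the authors' algorithm:

```python
import numpy as np

def exp_cov(h, sill=1.0, range_=50.0):
    """Exponential covariance model C(h) = sill * exp(-h / range)."""
    return sill * np.exp(-h / range_)

def simple_kriging(xy_obs, z_obs, xy_new, mean=0.0):
    """Simple kriging with a known mean: the weights solve C w = c0."""
    d_oo = np.linalg.norm(xy_obs[:, None] - xy_obs[None, :], axis=2)
    d_on = np.linalg.norm(xy_obs[:, None] - xy_new[None, :], axis=2)
    C = exp_cov(d_oo)                   # data-to-data covariances
    c0 = exp_cov(d_on)                  # data-to-prediction covariances
    w = np.linalg.solve(C, c0)          # kriging weights, one column per target
    est = mean + w.T @ (z_obs - mean)
    var = exp_cov(0.0) - np.einsum('ij,ij->j', w, c0)  # simple kriging variance
    return est, var

xy = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 40.0]])
z = np.array([1.0, 0.4, 0.7])
print(simple_kriging(xy, z, np.array([[10.0, 10.0]]), mean=0.6))
```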

5.
Interpolations of groundwater table elevation in dissected uplands
Chung JW, Rogers JD. Ground Water 2012, 50(4): 598–607
The variable elevation of the groundwater table in the St. Louis area was estimated using multiple linear regression (MLR), ordinary kriging, and cokriging as part of a regional program seeking to assess liquefaction potential. Surface water features were used to determine the minimum water table for MLR and to supplement the principal variables for ordinary kriging and cokriging. By evaluating the known depth to the water and the minimum water table elevation, the MLR analysis approximates the groundwater elevation for a contiguous hydrologic system. Ordinary kriging and cokriging estimate values in unsampled areas by calculating the spatial relationships between the unsampled and sampled locations. In this study, ordinary kriging did not incorporate topographic variations as an independent variable, while cokriging included topography as a supporting covariable. Cross validation suggests that cokriging provides a more reliable estimate at known data points with less uncertainty than the other methods. Profiles extending through the dissected uplands terrain suggest that: (1) the groundwater table generated by MLR mimics the ground surface and elicits an exaggerated interpolation of groundwater elevation; (2) the groundwater table estimated by ordinary kriging tends to ignore local topography and exhibits oversmoothing of the actual undulations in the water table; and (3) cokriging appears to give a realistic water surface, which rises and falls in proportion to the overlying topography. The authors concluded that cokriging provided the most realistic estimate of the groundwater surface, which is the key variable in assessing soil liquefaction potential in unconsolidated sediments.
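The cross-validation used to rank the methods can be sketched for the MLR component as leave-one-out prediction error; the covariates (ground elevation, distance to surface water) and all numbers are hypothetical stand-ins:

```python
import numpy as np

def loo_cv_rmse(X, y):
    """Leave-one-out cross-validation RMSE for multiple linear regression."""
    n = len(y)
    X1 = np.column_stack([np.ones(n), X])    # add an intercept column
    errs = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        beta, *_ = np.linalg.lstsq(X1[keep], y[keep], rcond=None)
        errs[i] = X1[i] @ beta - y[i]        # prediction error at the held-out well
    return np.sqrt(np.mean(errs ** 2))

# toy wells: water-table elevation vs. ground elevation and distance to surface water
rng = np.random.default_rng(0)
ground = rng.uniform(100.0, 160.0, 25)
dist = rng.uniform(0.0, 2000.0, 25)
wt = 0.8 * ground - 0.002 * dist + rng.normal(0.0, 1.5, 25)
print(loo_cv_rmse(np.column_stack([ground, dist]), wt))
```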

6.
Top-kriging is a method for estimating stream flow-related variables on a river network. Top-kriging treats these variables as emerging from a two-dimensional spatially continuous process in the landscape. The top-kriging weights are estimated by regularising the point variogram over the catchment area (kriging support), which accounts for the nested nature of the catchments. We test the top-kriging method on a comprehensive Austrian data set of low stream flows. We compare it with the regional regression approach, where linear regression models between low stream flow and catchment characteristics are fitted independently for sub-regions of the study area that are deemed to be homogeneous in terms of flow processes. Leave-one-out cross-validation results indicate that top-kriging outperforms the regional regression on average over the entire study domain. The coefficients of determination (cross-validation) of specific low stream flows are 0.75 and 0.68 for the top-kriging and regional regression methods, respectively. For locations without upstream data points, the performances of the two methods are similar. For locations with upstream data points, top-kriging performs much better than regional regression as it exploits the low flow information of the neighbouring locations.
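The regularization step, averaging the point variogram over pairs of catchment supports, can be approximated by Monte Carlo integration; the rectangles below stand in for catchment polygons, and the exponential variogram parameters are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def point_variogram(h, sill=1.0, range_=40.0):
    """Exponential point variogram gamma(h)."""
    return sill * (1.0 - np.exp(-h / range_))

def gamma_bar(rect_a, rect_b, n=5000):
    """Regularized variogram between two rectangular supports, by Monte Carlo.
    rect = (xmin, xmax, ymin, ymax); real top-kriging uses catchment polygons."""
    pa = np.column_stack([rng.uniform(rect_a[0], rect_a[1], n),
                          rng.uniform(rect_a[2], rect_a[3], n)])
    pb = np.column_stack([rng.uniform(rect_b[0], rect_b[1], n),
                          rng.uniform(rect_b[2], rect_b[3], n)])
    return point_variogram(np.linalg.norm(pa - pb, axis=1)).mean()

# nested "catchments": a small headwater support inside a large downstream one
print(gamma_bar((0, 20, 0, 20), (0, 100, 0, 100)))
```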

7.
Spatial prediction of river channel topography by kriging
Topographic information is fundamental to geomorphic inquiry, and spatial prediction of bed elevation from irregular survey data is an important component of many reach-scale studies. Kriging is a geostatistical technique for obtaining these predictions along with measures of their reliability, and this paper outlines a specialized framework intended for application to river channels. Our modular approach includes an algorithm for transforming the coordinates of data and prediction locations to a channel-centered coordinate system, several different methods of representing the trend component of topographic variation, and search strategies that incorporate geomorphic information to determine which survey data are used to make a prediction at a specific location. For example, a relationship between curvature and the lateral position of maximum depth can be used to include cross-sectional asymmetry in a two-dimensional trend surface model, and topographic breaklines can be used to restrict which data are retained in a local neighborhood around each prediction location. Using survey data from a restored gravel-bed river, we demonstrate how transformation to the channel-centered coordinate system facilitates interpretation of the variogram, a statistical model of reach-scale spatial structure used in kriging, and how the choice of a trend model affects the variogram of the residuals from that trend. Similarly, we show how decomposing kriging predictions into their trend and residual components can yield useful information on channel morphology. Cross-validation analyses involving different data configurations and kriging variants indicate that kriging is quite robust and that survey density is the primary control on the accuracy of bed elevation predictions. The root mean-square error of these predictions is directly proportional to the spacing between surveyed cross-sections, even in a reconfigured channel with a relatively simple morphology; sophisticated methods of spatial prediction are no substitute for field data.
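The trend-plus-residual decomposition described above can be sketched in a few lines; the quadratic trend along a streamwise coordinate and the toy bed-elevation profile are assumptions, not the paper's trend models:

```python
import numpy as np

rng = np.random.default_rng(2)

# toy bed-elevation profile along a channel-centered streamwise coordinate s (m)
s = np.linspace(0.0, 500.0, 60)
z = 100.0 - 0.01 * s + 0.5 * np.sin(s / 30.0) + rng.normal(0.0, 0.1, s.size)

# trend component: low-order polynomial in the channel-centered coordinate
coeffs = np.polyfit(s, z, deg=2)
trend = np.polyval(coeffs, s)

# residual component: the part left for the variogram and kriging to model
resid = z - trend
print(f"residual std = {resid.std():.3f} m")
```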

8.
Bayesian data fusion in a spatial prediction context: a general formulation
In spite of the exponential growth in the amount of data that one may expect to provide greater modeling and prediction opportunities, the number and diversity of sources over which this information is fragmented is growing at an even faster rate. As a consequence, there is a real need for methods that aim at reconciling them inside an epistemically sound theoretical framework. In a statistical spatial prediction framework, classical methods are based on a multivariate approach to the problem, at the price of strong modeling hypotheses. Though new avenues have recently been opened by focusing on the integration of uncertain data sources, to the best of our knowledge there have been no systematic attempts to explicitly account for information redundancy through a data fusion procedure. Starting from the simple concept of measurement errors, this paper proposes an approach for integrating multiple information processing as a part of the prediction process itself through a Bayesian approach. A general formulation is first proposed for deriving the prediction distribution of a continuous variable of interest at unsampled locations using more or less uncertain (soft) information at neighboring locations. The case of multiple information sources is then considered, with a Bayesian solution to the problem of fusing multiple pieces of information that are provided as separate conditional probability distributions. Well-known methods and results are derived as limit cases. The convenient hypothesis of conditional independence is discussed in the light of information theory and the maximum entropy principle, and a methodology is suggested for the optimal selection of the most informative subset of information, if needed. Based on a synthetic case study, an application of the methodology is presented and discussed.
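As a concrete special case of fusing separate conditional probability distributions: under conditional independence and a flat prior, Gaussian sources combine by precision weighting. The sketch below assumes this simplified Gaussian setting, not the paper's general formulation:

```python
import numpy as np

def fuse_gaussians(means, variances):
    """Fuse independent Gaussian soft sources N(m_i, v_i) about one quantity.
    With a flat prior, the fused pdf is Gaussian with precision-weighted mean."""
    means = np.asarray(means, dtype=float)
    prec = 1.0 / np.asarray(variances, dtype=float)  # precisions 1/v_i
    v_fused = 1.0 / prec.sum()
    m_fused = v_fused * (prec * means).sum()
    return m_fused, v_fused

# two sensors reporting the same unsampled value with different uncertainty
print(fuse_gaussians([2.0, 2.6], [0.5, 0.2]))
```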

9.
We focus on the Bayesian estimation of strongly heterogeneous transmissivity fields conditional on data sampled at a set of locations in an aquifer. Log-transmissivity, Y, is modeled as a stochastic Gaussian process, parameterized through a truncated Karhunen–Loève (KL) expansion. We consider Y fields characterized by a short correlation scale as compared to the size of the observed domain. These systems are associated with a KL decomposition that still requires a high number of parameters, thus hampering the efficiency of the Bayesian estimation of the underlying stochastic field. The distinctive aim of this work is to present an efficient approach for the stochastic inverse modeling of fully saturated groundwater flow in these types of strongly heterogeneous domains. The methodology is grounded on the construction of an optimal sparse KL decomposition, which is achieved by retaining only a limited set of modes in the expansion. Mode selection is driven by model selection criteria and is conditional on available data of hydraulic heads and (optionally) Y. Bayesian inversion of the optimal sparse KL expansion is then performed using Markov chain Monte Carlo (MCMC) samplers. As a test bed, we illustrate our approach by way of a suite of computational examples where noisy head and Y values are sampled from a given randomly generated system. Our findings suggest that the proposed methodology yields a globally satisfactory inversion of the stochastic head and Y fields. Comparison of reference values against the corresponding MCMC predictive distributions suggests that observed values are well reproduced in a probabilistic sense. In a few cases, reference values at some unsampled locations (typically far from measurements) are not captured by the posterior probability distributions. In these cases, the quality of the estimation could be improved, e.g., by increasing the number of measurements and/or the threshold for the selection of KL modes.
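The truncated KL parameterization can be sketched on a 1D grid; the exponential covariance, its parameters, and the retained number of modes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# 1D grid and exponential covariance for log-transmissivity Y (illustrative values)
x = np.linspace(0.0, 100.0, 200)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 10.0)  # short correlation scale

# discrete KL expansion: eigendecomposition of the covariance matrix
vals, vecs = np.linalg.eigh(C)
order = np.argsort(vals)[::-1]                       # sort modes by energy
vals, vecs = vals[order], vecs[:, order]

k = 25                                               # retain only k modes (sparse KL)
xi = rng.standard_normal(k)                          # N(0, 1) KL coefficients
Y = vecs[:, :k] @ (np.sqrt(vals[:k]) * xi)           # one realization of Y

print(f"{k} modes capture {vals[:k].sum() / vals.sum():.1%} of the variance")
```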

10.
Determining Earth’s structure is a fundamental goal of Earth science, and geophysical methods play a prominent role in investigating Earth’s interior. Geochemical, cosmochemical, and petrological analyses of terrestrial samples and meteoritic material provide equally important insights. Complementary information comes from high-pressure mineral physics and chemistry, i.e., the use of sophisticated experimental techniques and numerical methods that are capable of attaining or simulating physical properties at very high pressures and temperatures, thereby allowing recovered samples from Earth’s crust and mantle to be analyzed in the laboratory or simulated computationally at the conditions that prevail in Earth’s mantle and core. This is particularly important given that the vast bulk of Earth’s interior is geochemically unsampled. This paper describes a quantitative approach that combines data and results from mineral physics, petrological analyses of mantle minerals, and geophysical inverse calculations, in order to invert geophysical data directly for mantle composition (major element chemistry and water content) and thermal state. We illustrate the methodology by inverting a set of long-period electromagnetic response functions beneath six geomagnetic stations that cover a range of geological settings for major element chemistry, water content, and thermal state of the mantle. The results indicate that the interior structure and constitution of the mantle can be well retrieved given a specific set of measurements describing (1) the conductivity of mantle minerals, (2) the partitioning behavior of water between major upper-mantle and transition-zone minerals, and (3) the ability of nominally anhydrous minerals to store water in their crystal structures. Specifically, the upper-mantle water contents determined here bracket the ranges obtained from analyses of natural samples, whereas the transition-zone water concentration is an order of magnitude greater than that of the upper mantle and appears to vary laterally beneath the investigated locations.

11.
The estimation of overburden sediment thickness is important in hydrogeology, geotechnics, and geophysics. Usually, thickness is known precisely only at a few sparse borehole locations. To improve the precision of estimation, one useful source of complementary information is the known position of outcrops. One intuitive approach is to code the outcrops as zero-thickness data. A problem with this approach is that the outcrops are preferentially observed compared to other thickness information. This introduces a strong bias in the thickness estimation that kriging is not able to remove. We consider a new approach to incorporating point or surface outcrop information based on the use of a non-stationary covariance model in kriging. The non-stationary model is defined so as to restrict the distance of influence of the outcrops. Within this distance of influence, the covariance parameters are assumed to be simple regular functions of the distance to the nearest outcrop. Outside the distance of influence of the outcrops, the thickness covariance is assumed stationary. The distance of influence is obtained through cross-validation. Compared to kriging based on a stationary model with or without zero thickness at outcrop locations, the non-stationary model provides more precise estimation, especially at points close to an outcrop. Moreover, the thickness map obtained with the non-stationary covariance model is more realistic, since it forces the estimates to zero close to outcrops without the bias incurred when outcrops are simply treated as zero thickness in a stationary model.
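One simple way to build a covariance of this kind is to scale a stationary correlation by a standard deviation that grows from zero at an outcrop to the stationary value beyond the distance of influence; the linear ramp and all parameters below are assumptions, not the paper's exact model:

```python
import numpy as np

def sd_factor(dist_to_outcrop, influence=100.0, sill_sd=1.0):
    """Standard deviation rising from 0 at an outcrop to the stationary
    value beyond the distance of influence (linear ramp, assumed)."""
    return sill_sd * np.minimum(dist_to_outcrop / influence, 1.0)

def nonstat_cov(xy_a, xy_b, d_out_a, d_out_b, range_=50.0):
    """Non-stationary covariance C(a, b) = sd(a) * sd(b) * rho(|a - b|)."""
    h = np.linalg.norm(np.asarray(xy_a) - np.asarray(xy_b))
    rho = np.exp(-h / range_)  # stationary correlation away from outcrops
    return sd_factor(d_out_a) * sd_factor(d_out_b) * rho

# thickness 20 m from an outcrop vs. a point far (300 m) from any outcrop
print(nonstat_cov([0.0, 0.0], [30.0, 0.0], 20.0, 300.0))
```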

12.
Illman WA, Berg SJ, Yeh TC. Ground Water 2012, 50(3): 421–431
The main purpose of this paper was to compare three approaches for predicting solute transport. The approaches include: (1) an effective parameter/macrodispersion approach (Gelhar and Axness 1983); (2) a heterogeneous approach using ordinary kriging based on core samples; and (3) a heterogeneous approach based on hydraulic tomography. We conducted our comparison in a heterogeneous sandbox aquifer. The aquifer was first characterized by taking 48 core samples to obtain local-scale hydraulic conductivity (K). The spatial statistics of these K values were then used to calculate the effective parameters. These K values and their statistics were also used for kriging to obtain a heterogeneous K field. In parallel, we performed a hydraulic tomography survey using hydraulic tests conducted in a dipole fashion with the drawdown data analyzed using the sequential successive linear estimator code (Yeh and Liu 2000) to obtain a K distribution (or K tomogram). The effective parameters and the heterogeneous K fields from kriging and hydraulic tomography were used in forward simulations of a dipole conservative tracer test. The simulated and observed breakthrough curves and their temporal moments were compared. Results show an improvement in predictions of drawdown behavior and tracer transport when the K tomogram from hydraulic tomography was used. This suggests that the high-resolution prediction of solute transport is possible without collecting a large number of small-scale samples to estimate flow and transport properties that are costly to obtain at the field scale.
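The temporal moments used to compare breakthrough curves can be sketched directly; the Gaussian-shaped toy curve below is an assumption for illustration:

```python
import numpy as np

# toy breakthrough curve: concentration vs. time at an observation port
t = np.linspace(0.0, 100.0, 500)           # time (min)
c = np.exp(-((t - 40.0) / 12.0) ** 2)      # concentration (arbitrary units)
dt = t[1] - t[0]

m0 = np.sum(c) * dt                        # zeroth moment (mass proxy)
m1 = np.sum(t * c) * dt                    # first moment
mean_arrival = m1 / m0                     # normalized first moment: mean arrival
spread = np.sum((t - mean_arrival) ** 2 * c) * dt / m0  # second central moment

print(f"mean arrival = {mean_arrival:.1f} min, spread = {spread:.1f} min^2")
```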

13.
In most groundwater applications, measurements of concentration are limited in number and sparsely distributed within the domain of interest. Therefore, interpolation techniques are needed to obtain the most likely values of concentration at locations where no measurements are available. For further processing, for example, in environmental risk analysis, interpolated values should be given with uncertainty bounds, so that a geostatistical framework is preferable. Linear interpolation of steady-state concentration measurements is problematic because the dependence of concentration on the primary uncertain material property, the hydraulic conductivity field, is highly nonlinear, suggesting that the statistical interrelationship between concentration values at different points is also nonlinear. We suggest interpolating steady-state concentration measurements by conditioning an ensemble of realizations of the underlying log-conductivity field on the available hydrological data in a conditional Monte Carlo approach. Flow and transport simulations for each conditional conductivity field must meet the measurements within their given uncertainty. The ensemble of transport simulations based on the conditional log-conductivity fields yields conditional statistical distributions of concentration at points between observation points. This method implicitly meets physical bounds on concentration values and the non-Gaussianity of their statistical distributions, and obeys the nonlinearity of the underlying processes. We validate our method using artificial test cases and compare the results to kriging estimates assuming different conditional statistical distributions of concentration. Assuming a beta distribution in kriging leads to estimates of concentration with zero probability of concentrations below zero or above the maximum possible value; however, the concentrations are not forced to meet the advection-dispersion equation.
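The conditional Monte Carlo idea can be sketched as a rejection loop: keep only realizations whose simulated observations match the measurements within their uncertainty. The forward model below is a stand-in local average, not a flow-and-transport solver, and all values are toys:

```python
import numpy as np

rng = np.random.default_rng(4)

n_cells, n_real = 50, 5000
x = np.arange(n_cells)

# prior ensemble: unconditional Gaussian realizations of log-conductivity
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 8.0)
L = np.linalg.cholesky(C + 1e-10 * np.eye(n_cells))
logK = L @ rng.standard_normal((n_cells, n_real))

def forward(ens):
    """Stand-in forward model: an 'observation' as a local average of logK."""
    return ens[20:30].mean(axis=0)

h_obs, tol = 0.3, 0.1
accepted = np.abs(forward(logK) - h_obs) < tol  # keep fits within uncertainty
post = logK[:, accepted]

# conditional distribution of the prediction at an unsampled cell
print(post.shape[1], "accepted;", np.percentile(post[40], [5, 50, 95]))
```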

14.
A novel grid-free geostatistical simulation method (GFS) allows coregionalized variables to be represented as an analytical function of the coordinates of the simulation locations. Simulation on unstructured grids, as well as regridding and refinement of available realizations of natural phenomena (including, but not limited to, environmental systems), is possible with GFS in a consistent manner. The unconditional realizations are generated by utilizing the linear model of coregionalization and a Fourier series-based decomposition of the covariance function. The conditioning to data is performed by kriging. The data can be measured at scattered point-scale locations or sampled at a block scale. Secondary data are usually used in conjunction with primary data for improved modeling. Satellite imaging is an example of exhaustively sampled secondary data. Improvements and recommendations are made to the implementation of GFS to properly assimilate secondary exhaustive data sets in a grid-free manner. Intrinsic cokriging (ICK) is utilized to reduce computational time and preserve the overall quality of the simulation. To further reduce the computational cost of ICK, a block matrix inversion is implemented in the calculation of the kriging weights. A projection approach to ICK is proposed to avoid artifacts in the realizations around the edges of the exhaustive data region when the data do not cover the entire modeling domain. The point-scale block value representation of the block-scale data is developed as an alternative to block cokriging to integrate block-scale data into realizations within the GFS framework. Several case studies support the proposed enhancements.
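A grid-free unconditional simulator based on a Fourier-type decomposition can be sketched with a random-cosine expansion whose covariance approaches a Gaussian model as the number of harmonics grows; all parameters are illustrative and this is not the authors' GFS implementation:

```python
import numpy as np

rng = np.random.default_rng(5)

def grid_free_field(sigma2=1.0, ell=20.0, n_harm=500):
    """Random-cosine expansion whose covariance approaches the Gaussian model
    sigma2 * exp(-h^2 / (2 ell^2)); returns a function of arbitrary 2D coords."""
    w = rng.normal(0.0, 1.0 / ell, size=(n_harm, 2))  # spectral frequencies
    phi = rng.uniform(0.0, 2.0 * np.pi, n_harm)       # random phases
    amp = np.sqrt(2.0 * sigma2 / n_harm)
    return lambda xy: amp * np.cos(xy @ w.T + phi).sum(axis=1)

z = grid_free_field()
# the same realization evaluated at scattered, grid-free locations
print(z(np.array([[3.2, 7.9], [3.2, 8.0], [250.0, 110.5]])))
```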

15.
Why do we need and how should we implement Bayesian kriging methods
The spatial prediction methodology that has become known under the heading of kriging is largely based on the assumptions that the underlying random field is Gaussian and the covariance function is exactly known. In practical applications, however, these assumptions will not hold. Beyond Gaussianity of the random field, lognormal kriging, disjunctive kriging, (generalized linear) model-based kriging and trans-Gaussian kriging have been proposed in the literature. The latter approach makes use of the Box–Cox transform of the data. Still, none of the alternatives mentioned takes into account the uncertainty with respect to the distribution (or transformation) and the estimated covariance function of the data. The Bayesian trans-Gaussian kriging methodology proposed in the present paper is in the spirit of the “Bayesian bootstrap” idea advocated by Rubin (Ann Stat 9:130–134, 1981); it avoids the unusual specification of noninformative priors often made in the literature and is entirely based on the sample distribution of the estimators of the covariance function and of the Box–Cox parameter. After some notes on Bayesian spatial prediction and noninformative priors, and after developing our new methodology, we present an example illustrating our pragmatic approach to Bayesian prediction by means of a simulated data set.
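The Box–Cox transform underlying trans-Gaussian kriging is easy to sketch, together with its back-transform (a full Bayesian treatment would also propagate the uncertainty in the transformation parameter, which this toy does not):

```python
import numpy as np

def boxcox(y, lam):
    """Box-Cox transform: (y^lam - 1) / lam, or log(y) when lam == 0."""
    y = np.asarray(y, dtype=float)
    return np.log(y) if lam == 0 else (y ** lam - 1.0) / lam

def inv_boxcox(z, lam):
    """Back-transform from the Gaussian scale to the data scale."""
    z = np.asarray(z, dtype=float)
    return np.exp(z) if lam == 0 else (lam * z + 1.0) ** (1.0 / lam)

y = np.array([0.5, 1.2, 3.7, 8.9])
z = boxcox(y, 0.3)
print(np.allclose(inv_boxcox(z, 0.3), y))  # round-trip check
```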

16.
In the geostatistical analysis of regionalized data, the practitioner may not be interested in mapping the unsampled values of the variable that has been monitored, but in assessing the risk that these values exceed or fall short of a regulatory threshold. This kind of concern is part of the more general problem of estimating a transfer function of the variable under study. In this paper, we focus on the multigaussian model, for which the regionalized variable can be represented (up to a nonlinear transformation) by a Gaussian random field. Two cases are analyzed, depending on whether the mean of this Gaussian field is considered known or not, which lead to the simple and ordinary multigaussian kriging estimators, respectively. Although both of these estimators are theoretically unbiased, the latter may be preferred to the former for practical applications since it is robust to a misspecification of the mean value over the domain of interest and also to local fluctuations around this mean value. An advantage of multigaussian kriging over other nonlinear geostatistical methods such as indicator and disjunctive kriging is that it makes use of the multivariate distribution of the available data and does not produce order relation violations. The use of expansions into Hermite polynomials provides three additional results: first, an expression of the multigaussian kriging estimators in terms of series that can be calculated without numerical integration; second, an expression of the associated estimation variances; third, the derivation of a disjunctive-type estimator that minimizes the variance of the error when the mean is unknown.
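In the multigaussian model with a known mean, the conditional distribution at an unsampled point is Gaussian with the simple kriging mean and variance (on the normal-score scale), so a threshold-exceedance risk has a closed form; the numbers below are assumed simple-kriging outputs, not values from the paper:

```python
from math import erf, sqrt

def exceedance_prob(threshold, sk_mean, sk_var):
    """P(Y > threshold) when Y | data ~ N(sk_mean, sk_var) on the
    normal-score scale (multigaussian model, known mean)."""
    u = (threshold - sk_mean) / sqrt(sk_var)
    cdf = 0.5 * (1.0 + erf(u / sqrt(2.0)))  # standard normal CDF
    return 1.0 - cdf

# assumed simple kriging output at one location (normal-score units)
print(exceedance_prob(threshold=1.0, sk_mean=0.4, sk_var=0.6))
```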

17.
Gaussian conditional realizations are routinely used for risk assessment and planning in a variety of Earth sciences applications. Assuming a Gaussian random field, conditional realizations can be obtained by first creating unconditional realizations that are then post-conditioned by kriging. Many efficient algorithms are available for the first step, so the bottleneck resides in the second step. Instead of doing the conditional simulations with the desired covariance (F approach) or with a tapered covariance (T approach), we propose to use the tapered covariance only in the conditioning step (half-taper or HT approach). This speeds up the computations and reduces memory requirements for the conditioning step while keeping the right short-scale variations in the realizations. A criterion based on the mean square error of the simulation is derived to help anticipate the similarity of HT to F. Moreover, an index is used to predict the sparsity of the kriging matrix for the conditioning step. Some guidance on the choice of the taper function is provided. The distributions of a series of 1D, 2D and 3D scalar response functions are compared for the F, T and HT approaches. The distributions obtained indicate a much better similarity to F with HT than with T.
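Tapering multiplies the covariance by a compactly supported correlation so that the conditioning matrix becomes sparse; a sketch with a spherical taper, with the covariance range and taper range chosen arbitrarily for illustration:

```python
import numpy as np

def spherical_taper(h, taper_range):
    """Compactly supported spherical correlation: exactly 0 beyond taper_range."""
    u = np.clip(h / taper_range, 0.0, 1.0)
    return (1.0 - 1.5 * u + 0.5 * u ** 3) * (h < taper_range)

rng = np.random.default_rng(6)
xy = rng.uniform(0.0, 1000.0, (400, 2))
h = np.linalg.norm(xy[:, None] - xy[None, :], axis=2)

C = np.exp(-h / 200.0)                 # dense covariance (F approach)
C_tap = C * spherical_taper(h, 150.0)  # tapered covariance: mostly zeros

print(f"{np.mean(C_tap == 0.0):.1%} of the kriging matrix entries are zero")
```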

18.
Interpolation techniques for spatial data have been applied frequently in various fields of geosciences. Although most conventional interpolation methods assume that it is sufficient to use first- and second-order statistics to characterize random fields, researchers have now realized that these methods cannot always provide reliable interpolation results, since geological and environmental phenomena tend to be very complex, exhibiting non-Gaussian distributions and/or non-linear inter-variable relationships. This paper proposes a new approach to the interpolation of spatial data, which can be applied with great flexibility. Suitable cross-variable higher-order spatial statistics are developed to measure the spatial relationship between the random variable at an unsampled location and those in its neighbourhood. Given the computed cross-variable higher-order spatial statistics, the conditional probability density function is approximated via polynomial expansions, which is then utilized to determine the interpolated value at the unsampled location as an expectation. In addition, the uncertainty associated with the interpolation is quantified by constructing prediction intervals of interpolated values. The proposed method is applied to a mineral deposit dataset, and the results demonstrate that it outperforms kriging methods in uncertainty quantification. The introduction of the cross-variable higher-order spatial statistics noticeably improves the quality of the interpolation, since it enriches the information that can be extracted from the observed data, and this benefit is substantial when working with data that are sparse or have non-trivial dependence structures.

19.
The mapping of saline soils is the first task before any reclamation effort. Reclamation is based on knowledge of soil salinity in space and how it evolves with time. Soil salinity is traditionally determined by soil sampling and laboratory analysis. Recently, it became possible to complement these hard data with soft secondary data made available using field sensors like electrode probes. In this study, we had two data sets. The first includes measurements of field salinity (ECa) at 413 locations and 19 time instants. The second, which is a subset of the first (13 to 20 locations), contains, in addition to ECa, salinity determined in the laboratory (EC2.5). Based on a cross-validation procedure, we compared the prediction performance in the space-time domain of three methods: kriging using either only hard data (HK) or hard and mid-interval soft data (HMIK), and Bayesian maximum entropy (BME) using probabilistic soft data. We found that BME was less biased and more accurate, and gave estimates that were better correlated with the observed values, than the two kriging techniques. In addition, BME allowed saline areas to be delineated from non-saline areas in greater detail.

20.
Bayesian Maximum Entropy (BME) has been successfully used in geostatistics to calculate predictions of spatial variables given some general knowledge base and sets of hard (precise) and soft (imprecise) data. This general knowledge base commonly consists of the means at each of the locations considered in the analysis and the covariances between these locations. When the means are not known, the standard practice is to estimate them from the data; this is done by either generalized least squares or maximum likelihood. The BME prediction then treats these estimates as the general knowledge means and ignores their uncertainty. In this paper, we develop a prediction based on the BME method that can be used when the general knowledge consists of the covariance model only. This prediction incorporates the uncertainty in the estimated local mean. We show that in some special cases our prediction is equal to results from classical geostatistics. We investigate the differences between our approach and the standard approach for predicting in this common practical situation.
