Similar Documents (20 results)
1.
This paper outlines the process taken to create two separate gold prospectivity maps. The first was created using a combination of several knowledge-driven (KD) techniques. The second was created using a relatively new classification method called random forests (RF). The purpose of this study was to examine the results of the RF technique and to compare them with those of the KD model. The datasets used for the creation of evidence maps for the gold prospectivity mapping include a comprehensive lake sediment geochemical dataset, interpreted geological structures (form lines), mapped and interpreted faults, lithology, topographic features (lakes), and known Au occurrences. The RF method performed well in that the gold prospectivity map it produced was a better predictor of the known Au occurrences than the KD gold prospectivity map. This was further validated by a fivefold repetition using a subset of the input training areas. Several advantages of RF include (1) the ability to take both continuous and categorical data as input variables, (2) an internal, unbiased estimate of the mapping error (the out-of-bag error), which removes the need to cross-validate the final outputs to determine accuracy, and (3) an estimate of the importance of each input variable. Efficiency-of-prediction curves illustrate that the RF method performs better than the KD method. The success rate is significantly higher for the RF method than for the KD method.
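As a rough illustration of the RF workflow described above, the sketch below trains a random forest on hypothetical evidence-layer values (a geochemical value, distances to faults and form lines, a lithology code) at known occurrence and background cells; the out-of-bag error and variable importances mentioned in the abstract fall out of the fitted model. All variable names and data are placeholders, not the study's actual layers.

```python
# Sketch: random-forest prospectivity scoring with out-of-bag error and
# variable importance, assuming hypothetical evidence layers as inputs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
# Hypothetical evidence variables sampled at training cells:
X = np.column_stack([
    rng.lognormal(0.0, 1.0, n),        # lake-sediment Au geochemistry
    rng.uniform(0, 5000, n),           # distance to nearest fault (m)
    rng.uniform(0, 5000, n),           # distance to nearest form line (m)
    rng.integers(0, 6, n),             # lithology class (categorical code)
])
y = ((X[:, 0] > 1.0) & (X[:, 1] < 2000)).astype(int)   # 1 = "Au occurrence" cell

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X, y)

print("out-of-bag accuracy:", rf.oob_score_)        # internal, unbiased error estimate
print("variable importances:", rf.feature_importances_)

# Prospectivity map: posterior probability of the Au class for every grid cell.
grid_cells = rng.random((10, 4)) * [10, 5000, 5000, 6]
prospectivity = rf.predict_proba(grid_cells)[:, 1]
```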

2.
Changes in the Earth's gravity field are one of the important factors behind changes in terrestrial water storage. Using data from the GRACE (Gravity Recovery and Climate Experiment) gravity satellites, combined with the GLDAS (Global Land Data Assimilation System) land surface data assimilation system and measured groundwater levels, this study retrieves the seasonal dynamics of terrestrial water storage in the Keriya River basin of the Hotan region over an 11-year period, simulates the trend of groundwater equivalent water height, and builds a model for estimating groundwater levels. The results show that terrestrial water storage in the Hotan region increases in spring and summer but is in deficit in autumn and winter; the terrestrial water storage changes retrieved from GRACE are more pronounced than the water resource changes simulated by GLDAS, but the dynamic variations of the two datasets agree closely; the second-order derivative of the GLDAS equivalent water height, the first-order derivative of the reciprocal of the GLDAS water resource change, the change of the reciprocal of the GRACE terrestrial water storage change, and the first-order derivative of the groundwater storage change are the most sensitive variables; the multivariate stepwise regression model built from them is clearly superior to a linear function, and the shallower the water table, the better the estimation model applies.
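A minimal sketch of the kind of multivariate stepwise regression described above, assuming hypothetical GRACE/GLDAS-derived predictors (for example, differences of equivalent water height) and measured water levels; the predictors, data and selected feature count are placeholders, not the study's variables.

```python
# Sketch: forward stepwise (sequential) selection of GRACE/GLDAS-derived
# predictors for a groundwater-level regression model. Data are synthetic.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 120  # e.g. monthly samples over ~10 years
# Hypothetical predictors: GRACE TWS anomaly, its first difference,
# GLDAS equivalent water height, its first and second differences.
X = rng.normal(size=(n, 5))
water_level = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(0, 0.3, n)

selector = SequentialFeatureSelector(
    LinearRegression(), n_features_to_select=3, direction="forward", cv=5
)
selector.fit(X, water_level)
print("selected predictor columns:", np.flatnonzero(selector.get_support()))

model = LinearRegression().fit(X[:, selector.get_support()], water_level)
print("R^2 of stepwise model:", model.score(X[:, selector.get_support()], water_level))
```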

3.
We present an approximate method to estimate the resolution, covariance and correlation matrix for linear tomographic systems Ax = b that are too large to be solved by singular value decomposition. An explicit expression for the approximate inverse matrix Ã is found using one-step backprojections based on the Penrose condition ÃA ≈ I, from which we calculate the statistical properties of the solution. The computation of Ã can easily be parallelized, each column being constructed independently.

The method is validated on small systems for which the exact covariance can still be computed with singular value decomposition. Though Ã is not accurate enough to actually compute the solution x, the qualitative agreement obtained for resolution and covariance is sufficient for many purposes, such as a rough assessment of model precision or the reparametrization of the model by grouping correlating parameters. We present an example of the computation of the complete covariance matrix of a very large (69 043 × 9610) system with 5.9 × 10⁶ non-zero elements in A. Computation time is proportional to the number of non-zero elements in A. If the correlation matrix is computed for the purpose of reparametrization by combining highly correlating unknowns x_i, a further gain in efficiency can be obtained by neglecting the small elements in Ã, but a more accurate estimation of the correlation requires a full treatment of even the smaller Ã_ij. We finally develop a formalism to compute a damped version of Ã.
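The column-by-column construction of the approximate inverse lends itself to a very small illustration. The sketch below uses a crude scaled-backprojection guess for Ã (a toy stand-in, not the paper's explicit expression) on a tiny dense system, then forms the resolution matrix R = ÃA and the a posteriori covariance C = σ²ÃÃᵀ; the matrix sizes and noise level are arbitrary.

```python
# Sketch: approximate inverse by scaled backprojection, then resolution and
# covariance. This is a toy stand-in for the paper's explicit expression.
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(40, 15))          # toy tomographic matrix (rows = data)
sigma_d = 0.1                           # assumed data standard error

# One-step backprojection guess: row j of A_tilde is a scaled copy of the
# j-th column of A, with the scale chosen so that (A_tilde @ A)[j, j] = 1.
A_tilde = np.zeros((A.shape[1], A.shape[0]))
for j in range(A.shape[1]):             # columns/rows are independent -> parallelizable
    col = A[:, j]
    A_tilde[j, :] = col / (col @ col)

R = A_tilde @ A                         # resolution matrix, ideally ~ identity
C = sigma_d**2 * (A_tilde @ A_tilde.T)  # a posteriori model covariance
corr = C / np.sqrt(np.outer(np.diag(C), np.diag(C)))  # correlation matrix

print("diag(R) close to 1:", np.round(np.diag(R)[:5], 2))
print("largest off-diagonal correlation:",
      np.abs(corr - np.eye(len(corr))).max())
```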

4.
We investigate the effects of various sources of error on the estimation of the seismic moment tensor using a linear least squares inversion on surface wave complex spectra. A series of numerical experiments involving synthetic data subjected to controlled error contamination are used to demonstrate the effects. Random errors are seen to enter additively or multiplicatively into the complex spectra. We show that random additive errors due to background recording noise do not pose difficulties for recovering reliable estimates of the moment tensor. On the other hand, multiplicative errors from a variety of sources, such as focusing, multipathing, or epicentre mislocation, may lead to significant overestimation or underestimation of the tensor elements and in general cause the estimates to be less reliable.
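The distinction between additive and multiplicative errors can be reproduced with a toy linear inversion. The sketch below builds a synthetic system d = G m, contaminates d either with additive noise or with multiplicative factors, and compares the recovered least-squares estimates; G, m and the noise levels are arbitrary placeholders, not the surface-wave kernels of the study.

```python
# Sketch: effect of additive vs. multiplicative errors on a linear
# least-squares inversion d = G m (toy stand-in for moment-tensor recovery).
import numpy as np

rng = np.random.default_rng(3)
m_true = np.array([1.0, -0.5, 0.3, 0.8, -0.2, 0.1])   # six "moment-tensor" elements
G = rng.normal(size=(60, 6))                           # toy kernel matrix
d = G @ m_true

# Additive error (background recording noise).
d_add = d + rng.normal(0, 0.05 * np.abs(d).mean(), d.shape)
# Multiplicative error (focusing, multipathing, mislocation amplitude factors).
d_mul = d * (1.0 + rng.normal(0, 0.3, d.shape))

m_add, *_ = np.linalg.lstsq(G, d_add, rcond=None)
m_mul, *_ = np.linalg.lstsq(G, d_mul, rcond=None)

print("relative error, additive noise:      ",
      np.linalg.norm(m_add - m_true) / np.linalg.norm(m_true))
print("relative error, multiplicative noise:",
      np.linalg.norm(m_mul - m_true) / np.linalg.norm(m_true))
```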

5.
A case application of data-driven estimation of evidential belief functions (EBFs) is demonstrated for prospectivity mapping in Lundazi district (eastern Zambia). Spatial data used to represent recognition criteria of prospectivity for aquamarine-bearing pegmatites include mapped granites, mapped faults/fractures, mapped shear zones, and radioelement concentration ratios derived from gridded airborne radiometric data. Data-driven estimates of EBFs take into account not only (a) the spatial association between an evidential map layer and target deposits but also (b) the spatial relationships between classes of evidence within an evidential map layer. Data-driven estimates of EBFs can indicate which spatial data provide positive or negative evidence of prospectivity. Data-driven estimates of EBFs of only the spatial data providing positive evidence of prospectivity were integrated according to Dempster's rule of combination. The map of integrated degrees of belief was used to delineate zones of relative degrees of prospectivity for aquamarine-bearing pegmatites. The predictive map has at least an 85% prediction rate and at least a 79% success rate in delineating training and validation deposits, respectively. The results illustrate the usefulness of data-driven estimation of EBFs in GIS-based predictive mapping of mineral prospectivity. The results also show the usefulness of EBFs in managing uncertainties associated with evidential maps.
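Dempster's rule of combination for two evidential layers can be written down in a few lines. The sketch below combines belief/disbelief/uncertainty mass triplets from two hypothetical evidential maps into an integrated degree of belief for one grid cell; the input mass values and layer names are invented for illustration.

```python
# Sketch: Dempster's rule of combination for two evidential belief functions,
# each given as masses for (belief, disbelief, uncertainty) on a single target
# proposition ("prospective"). Input masses are illustrative only.
def dempster_combine(ebf1, ebf2):
    b1, d1, u1 = ebf1
    b2, d2, u2 = ebf2
    # Conflict: one layer supports the proposition while the other refutes it.
    k = b1 * d2 + d1 * b2
    norm = 1.0 - k
    belief = (b1 * b2 + b1 * u2 + u1 * b2) / norm
    disbelief = (d1 * d2 + d1 * u2 + u1 * d2) / norm
    uncertainty = (u1 * u2) / norm
    return belief, disbelief, uncertainty

# Hypothetical data-driven EBFs for one grid cell from two evidential layers
# (e.g. proximity to granites and a radioelement ratio class).
granite_ebf = (0.55, 0.15, 0.30)
radiometric_ebf = (0.40, 0.20, 0.40)
print(dempster_combine(granite_ebf, radiometric_ebf))
```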

6.
Mineral deposit grades are usually estimated using data from samples of rock cores extracted from drill holes. Commonly, grade estimates are required for each block to be mined. Every estimated grade always has a corresponding error when compared against the real grade of the block. The error depends on various factors, among which the most important is the number of correlated samples used for estimation. Samples may be collected on a regular sampling grid and, as the spacing between samples decreases, the error of the grade estimated from the data generally decreases. Sampling can be expensive. The maximum distance between samples that still provides an acceptable error in the grade estimate is useful for deciding how many samples are adequate. The error also depends on the geometry of a block, as lower errors would be expected when estimating the grade of large-volume blocks, and on the variability of the data within the region of the blocks. Local variability is measured in this study using the coefficient of variation (CV). We show charts analyzing the error in block grade estimates as a function of the sampling grid (obtained by geostatistical simulation), for various block dimensions (volumes) and for a given CV interval. These charts show results for two different attributes (Au and Ni) of two different deposits. The results show that similar errors were found for the two deposits when they share similar features: sampling grid, block volume, CV, and continuity model. Consequently, the error for other attributes with similar features could be obtained from a single chart.

7.
Artificial Intelligence (AI) models such as Artificial Neural Networks (ANNs), Decision Trees and Dempster-Shafer's Theory of Evidence have long been claimed to be more error-tolerant than conventional statistical models, but the way error is propagated through these models is unclear. Two sources of error have been identified in this study: sampling error and attribute error. The results show that these errors propagate differently through the three AI models. The Decision Tree was the most affected by error, the Artificial Neural Network was less affected, and the Theory of Evidence model was not affected by the errors at all. The study indicates that AI models handle errors in very different ways. In this case, the machine-learning models, including ANNs and Decision Trees, are more sensitive to input errors. Dempster-Shafer's Theory of Evidence has demonstrated better potential in dealing with input errors when multisource data sets are involved. The study suggests a strategy of combining AI models to improve classification accuracy. Several combination approaches have been applied, based on a 'majority voting system', a simple average, Dempster-Shafer's Theory of Evidence, and fuzzy-set theory. These approaches all increased classification accuracy to some extent. Two of them also demonstrated good performance in handling input errors. Second-stage combination approaches, which use statistical evaluation of the initial combinations, are able to further improve classification results. One of these second-stage combination approaches increased the overall classification accuracy on forest types to 54% from the original 46.5% of the Decision Tree model, and its visual appearance is also much closer to the ground data. By combining models, it becomes possible to calculate quantitative confidence measurements for the classification results, which can then serve as a better error representation. Final classification products include not only the predicted hard classes for individual cells, but also estimates of the probability and the confidence measurements of the prediction.
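The first-stage combination strategies mentioned above (majority voting and simple averaging of class probabilities) are straightforward to express in code. The sketch below combines three classifiers with scikit-learn's VotingClassifier; the decision tree, small neural network and naive Bayes stand-ins, and the synthetic data, are placeholders for the study's AI models and forest-type data.

```python
# Sketch: combining classifiers by majority vote and by simple probability
# averaging, as stand-ins for the first-stage combination approaches above.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=8, n_classes=3,
                           n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

members = [
    ("tree", DecisionTreeClassifier(random_state=0)),
    ("ann", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)),
    ("nb", GaussianNB()),               # crude stand-in for an evidence-based model
]

hard_vote = VotingClassifier(members, voting="hard").fit(X_tr, y_tr)   # majority voting
soft_vote = VotingClassifier(members, voting="soft").fit(X_tr, y_tr)   # simple average

for name, model in members:
    print(name, model.fit(X_tr, y_tr).score(X_te, y_te))
print("majority vote:", hard_vote.score(X_te, y_te))
print("probability average:", soft_vote.score(X_te, y_te))
```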

8.
9.
Recent upward trends in acres irrigated have been linked to increasing near-surface moisture. Unfortunately, stations with dew point data for monitoring near-surface moisture are sparse. Thus, models that estimate dew points from more readily observed data sources are useful. Daily average dew point temperatures were estimated and evaluated at 14 stations in Southwest Georgia using linear regression models and artificial neural networks (ANN). Estimation methods were drawn from simple and readily available meteorological observations; therefore, only temperature and precipitation were considered as input variables. In total, three linear regression models and 27 ANN were analyzed. The two methods were evaluated using root mean square error (RMSE), mean absolute error (MAE), and other model evaluation techniques to assess the skill of the estimation methods. Both methods produced adequate estimates of daily averaged dew point temperatures, with the ANN displaying the best overall skill. The optimal performance of both models was during the warm season. Both methods had higher error associated with colder dew points, potentially due to the lack of observed values in those ranges. On average, the ANN reduced RMSE by 6.86% and MAE by 8.30% when compared to the best performing linear regression model.
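A compact version of the comparison above can be set up with a linear regression and a small neural network scored by RMSE and MAE, assuming hypothetical daily temperature and precipitation inputs and dew point targets; the station data, predictors and network size here are placeholders, not the Georgia observations.

```python
# Sketch: estimating daily average dew point temperature from temperature and
# precipitation with linear regression vs. a small ANN, scored by RMSE and MAE.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n = 2000
tmax = rng.normal(28, 6, n)                        # hypothetical daily max temp (C)
tmin = tmax - rng.uniform(4, 14, n)                # daily min temp (C)
precip = rng.exponential(2.0, n)                   # daily precipitation (mm)
dewpt = tmin - 2.0 + 0.3 * np.log1p(precip) + rng.normal(0, 1.0, n)

X = np.column_stack([tmax, tmin, precip])
X_tr, X_te, y_tr, y_te = train_test_split(X, dewpt, random_state=0)

models = {
    "linear regression": LinearRegression(),
    "ANN": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000,
                                      random_state=0)),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    mae = mean_absolute_error(y_te, pred)
    print(f"{name}: RMSE={rmse:.2f} C, MAE={mae:.2f} C")
```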

10.
Kriging is an optimal method of spatial interpolation that produces an error for each interpolated value. Block kriging is a form of kriging that computes averaged estimates over blocks (areas or volumes) within the interpolation space. If this space is sampled sparsely and divided into blocks of a constant size, a variable estimation error is obtained for each block, with blocks near sample points having smaller errors than blocks farther away. An alternative strategy for sparsely sampled spaces is to vary the sizes of blocks in such a way that a block's interpolated value is just sufficiently different from that of an adjacent block given the errors on both blocks. This has the advantage of increasing spatial resolution in many regions, and conversely reducing it in others where maintaining a constant block size is unjustified (hence achieving data compression). Such a variable subdivision of space can be achieved by regular recursive decomposition using a hierarchical data structure. An implementation of this alternative strategy, employing a split-and-merge algorithm operating on a hierarchical data structure, is discussed. The technique is illustrated using an oceanographic example involving the interpolation of satellite sea surface temperature data. Consideration is given to the problem of error propagation when combining variable resolution interpolated fields in GIS modelling operations.
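The variable-resolution idea can be sketched as a recursive quadtree subdivision that splits a block only when its child estimates differ significantly relative to their errors. In the sketch below a plain inverse-distance estimate with a distance-based error stands in for block kriging, and the split test is a simplified version of the idea, not the paper's split-and-merge algorithm; all data are synthetic.

```python
# Sketch: recursive quadtree subdivision in which a block is split only when
# its four child-block estimates differ significantly given their errors.
import numpy as np

rng = np.random.default_rng(5)
pts = rng.random((60, 2))                          # sparse sample locations in [0,1]^2
vals = np.sin(4 * pts[:, 0]) + np.cos(3 * pts[:, 1]) + rng.normal(0, 0.05, 60)

def block_estimate(x0, y0, size):
    """Toy block estimate and error, from the block centre to the samples."""
    cx, cy = x0 + size / 2, y0 + size / 2
    d = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy) + 1e-9
    w = 1.0 / d**2
    est = np.sum(w * vals) / np.sum(w)
    err = d.min()                                  # error grows away from samples
    return est, err

def subdivide(x0, y0, size, min_size=0.0625, out=None):
    if out is None:
        out = []
    children = [(x0, y0), (x0 + size / 2, y0), (x0, y0 + size / 2),
                (x0 + size / 2, y0 + size / 2)]
    ests, errs = zip(*(block_estimate(cx, cy, size / 2) for cx, cy in children))
    # Split if child estimates differ by more than their combined errors.
    significant = max(ests) - min(ests) > 2 * max(errs)
    if significant and size / 2 > min_size:
        for cx, cy in children:
            subdivide(cx, cy, size / 2, min_size, out)
    else:
        out.append((x0, y0, size, *block_estimate(x0, y0, size)))
    return out

blocks = subdivide(0.0, 0.0, 1.0)
print(f"{len(blocks)} variable-size blocks (x, y, size, estimate, error)")
```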

11.
12.
Remote sensing estimation of regional-scale evapotranspiration: advances in retrieval and data assimilation
尹剑  欧照凡  付强  刘东  邢贞相 《地理科学》2018,38(3):448-456
Remote sensing has been widely applied in recent years to estimating evapotranspiration at the regional scale, but different methods often differ greatly in their driving data, model mechanisms, and ranges of applicability. With this in mind, this review describes several classes of remote sensing evapotranspiration retrieval methods, including spatial-scale extensions of traditional approaches, empirical statistical formulas, feature-space methods, and single-source and dual-source vertical energy-balance residual methods, and briefly introduces commonly used models such as the three-temperature model, non-parametric models, semi-empirical models, and integrated models. It also analyzes the main ideas by which remote sensing data assimilation enables continuous estimation of regional evapotranspiration, and reviews the principles, methodological evolution, and commonly used algorithms of data assimilation schemes based on energy balance and on complex process models. Finally, it discusses the strengths and weaknesses of the various regional evapotranspiration remote sensing methods and outlines research directions related to direct retrieval and data assimilation, such as improving model mechanisms, uncertainty analysis, and validation of results.

13.
We present a new formulation of the inverse problem of determining the temporal and spatial power moments of the seismic moment rate density distribution, in which its positivity is enforced through a set of linear conditions. To test and demonstrate the method, we apply it to artificial data for the great 1994 deep Bolivian earthquake. We use two different kinds of faulting models to generate the artificial data. One is a Haskell-type faulting model. The other consists of a collection of a few isolated points releasing moment on a fault, as was proposed in recent studies of this earthquake. The positions of 13 teleseismic stations for which P- and SH-wave data are actually available for this earthquake are used. The numerical experiments illustrate the importance of the positivity constraints, without which incorrect solutions are obtained. We also show that the Green functions associated with the problem must be approximated with a low approximation error to obtain reliable solutions. This is achieved by using a more uniform approximation than a Taylor series. We also find that it is necessary to use relatively long-period data first to obtain the low-degree (0th and 1st) moments. Using the insight obtained into the size and duration of the process from the first-degree moments, we can decrease the integration region, substitute these low-degree moments into the problem, and use higher-frequency data to find the higher-power moments, so as to obtain more reliable estimates of the spatial and temporal source dimensions. At the higher frequencies, it is necessary to divide the region in which we approximate the Green functions into small pieces and approximate the Green functions separately in each piece to achieve a low approximation error. A derivation is given showing that the mixed spatio-temporal moments of second degree represent the average speeds of the centroids in the corresponding directions.
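The closing statement about mixed second-degree moments and centroid speed can be illustrated numerically. The sketch below discretizes a toy, strictly positive moment-rate density propagating along x, computes its low-degree temporal, spatial and mixed moments, and checks that the mixed moment divided by the temporal variance reproduces the rupture's average centroid speed; the source geometry and numbers are invented, not the Bolivian-earthquake models.

```python
# Sketch: power moments of a positive moment-rate density f(x, t) on a grid,
# and the centroid speed implied by the mixed spatio-temporal moment.
import numpy as np

# Toy unilateral rupture: a pulse whose spatial centroid moves at 3 km/s.
t = np.linspace(0, 40, 401)            # time (s)
x = np.linspace(0, 150, 301)           # along-strike distance (km)
T, X = np.meshgrid(t, x, indexing="ij")
speed_true = 3.0
f = np.exp(-((T - 20) / 6) ** 2) * np.exp(-((X - speed_true * T) / 10) ** 2)
f /= f.sum()                           # normalize: 0th moment = 1

def moment(p, q):
    """Spatio-temporal power moment <x^p t^q> of the density f."""
    return np.sum(f * X**p * T**q)

t_c, x_c = moment(0, 1), moment(1, 0)              # centroid time and position
var_t = moment(0, 2) - t_c**2                      # squared duration measure
cov_xt = moment(1, 1) - x_c * t_c                  # mixed second-degree moment
print("centroid time (s):", round(t_c, 2))
print("average centroid speed (km/s):", round(cov_xt / var_t, 2))  # ~3
```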

14.
15.
A variety of methods exist for interpolating Cartesian or spherical surface data onto an equidistant lattice in a procedure known as gridding. Methods based on Green's functions are particularly simple to implement. In such methods, the Green's function for the gridding operator is determined and the resulting gridding solution is composed of the superposition of contributions from each data constraint, weighted by the Green's function evaluated for all output-input point separations. The Green's function method allows for considerable flexibility, such as complete freedom in specifying where the solution will be evaluated (it does not have to be on a lattice) and the ability to include both surface heights and surface gradients as data constraints. Green's function solutions for Cartesian data in 1-, 2- and 3-D spaces are well known, as is the dilogarithm solution for the minimum-curvature spline on a spherical surface. Here, the spherical surface case is extended to include tension and the new generalized Green's function is derived. It is shown that the new function reduces to the dilogarithm solution in the limit of zero tension. Properties of the new function are examined and the new gridding method is implemented in Matlab® and demonstrated on three geophysical data sets.
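The superposition idea is easy to show in one dimension: weights w are found by solving G w = d with G_ij = g(|x_i - x_j|), and the gridded surface is then evaluated anywhere as a weighted sum of Green's functions. The cubic Green's function |r|³ used below is the standard 1-D minimum-curvature (biharmonic spline) choice, not the spherical-surface dilogarithm or tension form derived in the paper; the data and output lattice are invented.

```python
# Sketch: Green's-function gridding in 1-D. The surface is a superposition of
# Green's functions g(r) = |r|**3 (the 1-D minimum-curvature choice), weighted
# so that the surface honours every data constraint.
import numpy as np

rng = np.random.default_rng(6)
x_data = np.sort(rng.uniform(0, 10, 12))          # irregular data locations
z_data = np.sin(x_data) + 0.1 * rng.normal(size=x_data.size)

def g(r):
    return np.abs(r) ** 3                          # 1-D biharmonic Green's function

# Solve G w = z for the weights, with G_ij = g(x_i - x_j).
G = g(x_data[:, None] - x_data[None, :])
w = np.linalg.solve(G, z_data)

# Evaluate the gridded solution anywhere (here: a regular lattice).
x_out = np.linspace(0, 10, 101)
z_out = g(x_out[:, None] - x_data[None, :]) @ w

print("misfit at the data points:",
      np.abs(g(x_data[:, None] - x_data[None, :]) @ w - z_data).max())
```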

16.
Changes in air temperature have an important influence on population health. Accurate estimates of population-weighted monthly average temperature for US counties can be used to study the associations between temperature and health behaviours or disease, for example on the basis of county-level sampling or reporting data. For such temperature estimation most researchers have used ArcGIS software, and the statistical package SAS has rarely been used. This paper compares the performance of two geostatistical models, using ArcGIS 9.3 and SAS 9.2 on the same CITGO platform to estimate monthly average temperatures for counties of the 48 conterminous US states. Monthly average temperatures from 5,435 monitoring stations across the United States for January to December 2007, together with station elevations, were used to estimate temperatures at county population centroids, with elevation as a covariate. Model performance was compared in terms of adjusted coefficient of determination (R²), mean squared error, root mean squared error, and processing time. In independent validation, prediction accuracy exceeded 90% in 11 of the 12 months for ArcGIS and in all 12 months for SAS. Compared with co-kriging in ArcGIS, co-kriging interpolation in SAS achieved higher accuracy and lower bias. County-level temperature estimates from the two software packages were positively correlated (adjusted R² between 0.95 and 0.99), and introducing elevation as a covariate improved both accuracy and precision. Both approaches are reliable for estimating county-level temperatures in the United States, but the advantages of ArcGIS in spatial data pre-processing and processing time are an important consideration in software choice, especially for projects involving multiple years or multiple states.

17.
A global estimate of the absolute oceanic general circulation from a geostrophic inversion of in situ hydrographic data is tested against and then combined with an estimate obtained from TOPEX/POSEIDON altimetric data and a geoid model computed using the JGM-3 gravity-field solution. Within the quantitative uncertainties of both the hydrographic inversion and the geoid estimate, the two estimates derived by very different methods are consistent. When the in situ inversion is combined with the altimetry/geoid scheme using a recursive inverse procedure, a new solution, fully consistent with both hydrography and altimetry, is found. There is, however, little reduction in the uncertainties of the calculated ocean circulation and its mass and heat fluxes because the best available geoid estimate remains noisy relative to the purely oceanographic inferences. The conclusion drawn from this is that the comparatively large errors present in the existing geoid models now limit the ability of satellite altimeter data to improve directly the general ocean circulation models derived from in situ measurements. Because improvements in the geoid could be realized through a dedicated spaceborne gravity recovery mission, the impact of hypothetical, much better future geoid estimates on the circulation uncertainty is also quantified, showing significant hypothetical reductions in the uncertainties of oceanic transport calculations. Full ocean general circulation models could better exploit both existing oceanographic data and future gravity-mission data, but their present use is severely limited by the inability to quantify their error budgets.

18.
Air temperature is one of the key parameters reflecting the state of the ecological environment, and accurate estimation of its spatiotemporal distribution is of great significance for climate change research. Based on measured air temperature data for Qinghai Province from 2011-2019, MODIS products and SRTM DEM data, this paper carries out remote sensing estimation of instantaneous air temperature at the pixel scale under clear-sky and cloudy conditions, evaluates the accuracy differences among the estimation methods, and then uses a multivariate regression model to generate high-accuracy monthly air te…

19.
Resource estimation of a placer deposit is always a difficult and challenging job because of high variability in the deposit. The complexity of resource estimation increases when drill-hole data are sparse. Since sparsely sampled placer deposits produce high-nugget variograms, a traditional geostatistical technique like ordinary kriging sometimes fails to produce satisfactory results. In this article, a machine learning algorithm, the support vector machine (SVM), is applied to the estimation of a platinum placer deposit. A combination of different neighborhood samples is selected for the input space of the SVM model. The trade-off parameter of the SVM and the bandwidth of the kernel function are selected by genetic algorithm learning, and the algorithm is tested on a testing data set. Results show that if eight neighborhood samples and their distances and angles from the estimated point are considered as the input space for the SVM model, the developed model performs better than other configurations. The proposed input space-configured SVM model is compared with ordinary kriging and the traditional SVM model (location as input) for resource estimation. Comparative results reveal that the proposed input space-configured SVM model outperforms the other two models.
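The input-space configuration described above (grades of the eight nearest samples plus their distances and angles to the point being estimated) can be assembled with a nearest-neighbour query before fitting the SVM. The sketch below uses scikit-learn's SVR with fixed hyperparameters rather than the genetic-algorithm tuning of the study; the deposit data are synthetic placeholders.

```python
# Sketch: SVM regression for grade estimation with an input space built from
# the 8 nearest neighbouring samples (grades, distances and angles).
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVR

rng = np.random.default_rng(7)
xy = rng.uniform(0, 1000, (300, 2))                       # sample locations (m)
grade = np.sin(xy[:, 0] / 150) * np.cos(xy[:, 1] / 200) + rng.normal(0, 0.1, 300)

def neighbour_features(points, samples_xy, samples_grade, k=8, exclude_self=False):
    """Grades, distances and angles of the k nearest samples to each point."""
    nn = NearestNeighbors(n_neighbors=k + 1 if exclude_self else k).fit(samples_xy)
    dist, idx = nn.kneighbors(points)
    if exclude_self:                        # drop the zero-distance self match
        dist, idx = dist[:, 1:], idx[:, 1:]
    dxy = samples_xy[idx] - points[:, None, :]
    angle = np.arctan2(dxy[..., 1], dxy[..., 0])
    return np.concatenate([samples_grade[idx], dist, angle], axis=1)

# Hold out some samples as "estimation points" and train on the rest.
train_xy, test_xy = xy[:250], xy[250:]
train_grade, test_grade = grade[:250], grade[250:]

X_train = neighbour_features(train_xy, train_xy, train_grade, exclude_self=True)
X_test = neighbour_features(test_xy, train_xy, train_grade)

# Fixed C and gamma stand in for the genetic-algorithm-selected values.
svr = SVR(C=10.0, gamma="scale").fit(X_train, train_grade)
print("mean absolute estimation error:",
      np.abs(svr.predict(X_test) - test_grade).mean())
```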

20.
A systematic test of time-to-failure analysis
Time-to-failure analysis is a technique for predicting earthquakes in which a failure function is fit to a time series of accumulated Benioff strain. Benioff strain is computed from regional seismicity in areas that may produce a large earthquake. We have tested the technique by fitting two functions, a power law proposed by Bufe & Varnes (1993) and a log-periodic function proposed by Sornette & Sammis (1995). We compared predictions from the two time-to-failure models to observed activity and to predicted levels of activity based upon the Poisson model. Likelihood ratios show that the most successful model is the Poisson model, with the simple Poisson model four times as likely to be correct as the best time-to-failure model. The best time-to-failure model is a blend of 90 per cent Poisson and 10 per cent log-periodic predictions. We tested the accuracy of the error estimates produced by the standard least-squares fitter and found greater accuracy for fits of the simple power law than for fits of the more complicated log-periodic function. The least-squares fitter underestimates the true error in time-to-failure functions because the error estimates are based upon linearized versions of the functions being fitted.
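The Bufe & Varnes (1993) power law tested above can be written as cumulative Benioff strain Ω(t) = A + B(t_f − t)^m and fitted by nonlinear least squares. The sketch below fits that form to a synthetic accelerating sequence and reports the linearized parameter errors returned by the fitter; the parameter values, catalogue and starting guesses are invented, and the log-periodic variant is omitted.

```python
# Sketch: fitting the Bufe & Varnes (1993) power-law time-to-failure relation
# Omega(t) = A + B * (tf - t)**m to a cumulative Benioff-strain series.
import numpy as np
from scipy.optimize import curve_fit

def time_to_failure(t, A, B, m, tf):
    return A + B * np.abs(tf - t) ** m

# Synthetic accelerating sequence with failure time tf = 10.0 "years".
rng = np.random.default_rng(8)
t_obs = np.linspace(0.0, 9.5, 80)
benioff = time_to_failure(t_obs, 12.0, -3.0, 0.3, 10.0) + rng.normal(0, 0.05, 80)

p0 = [10.0, -1.0, 0.5, 10.5]                     # rough starting guesses
popt, pcov = curve_fit(time_to_failure, t_obs, benioff, p0=p0, maxfev=20000)
perr = np.sqrt(np.diag(pcov))                    # linearized parameter errors

labels = ["A", "B", "m", "tf"]
for name, value, err in zip(labels, popt, perr):
    print(f"{name} = {value:.3f} +/- {err:.3f}")
```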
