20 similar documents found; search took 15 ms
1.
The reliability of using fractal dimension (D) as a quantitative parameter to describe geological variables is dependent mainly
on the accuracy of estimated D values from observed data. Two widely used methods for the estimation of fractal dimensions
are based on fitting a fractal model to experimental variograms or power-spectra on a log-log plot. The purpose of this paper
is to study the uncertainty in the fractal dimension estimated by these two methods. The results indicate that both spectrum
and variogram methods result in biased estimates of the D value. Fractal dimension calculated by these two methods for the
same data will be different unless the bias is properly corrected. The spectral method results in overestimated D values.
The variogram method has a critical fractal dimension, below which overestimation occurs and above which underestimation occurs.
On the basis of 36,000 simulated realizations, we propose empirical formulae to correct for biases in the spectral- and
variogram-estimated fractal dimensions. Pitfalls in estimating fractal dimension from data contaminated by white noise or data having
several fractal components have been identified and illustrated by simulated examples.
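As a minimal illustration of the variogram approach discussed above (not the authors' bias-corrected estimator), the fractal dimension of a self-affine profile can be read off the log-log slope of its experimental variogram, using gamma(h) ~ h^(2H) and D = 2 - H. Function names and lag choices are illustrative assumptions:

```python
import numpy as np

def variogram_fractal_dimension(z, max_lag=20):
    """Estimate the fractal dimension of a 1-D profile from the log-log
    slope of its experimental variogram: gamma(h) ~ h^(2H), D = 2 - H."""
    lags = np.arange(1, max_lag + 1)
    gamma = np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2) for h in lags])
    slope, _ = np.polyfit(np.log(lags), np.log(gamma), 1)
    return 2.0 - slope / 2.0

# Ordinary Brownian motion (H = 0.5) should give D near 1.5;
# per the abstract, finite-sample estimates of this kind are biased.
rng = np.random.default_rng(0)
bm = np.cumsum(rng.standard_normal(100_000))
print(round(variogram_fractal_dimension(bm), 2))
```

The same slope-fitting idea applies to the power spectrum, with P(f) ~ f^(-beta) and D = (5 - beta)/2 for a profile.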
2.
On the practice of estimating fractal dimension (cited 11 times: 0 self-citations, 11 by others)
Coastlines epitomize deterministic fractals and fractal (Hausdorff-Besicovitch) dimensions; a divider [compass] method can be used to calculate fractal dimensions for these features. Noise models are used to develop another, stochastic notion of fractals. Spectral and variogram methods are used to estimate fractal dimensions for stochastic fractals. When estimating fractal dimension, the objective of the analysis must be consistent with the method chosen for the calculation. Spectral and variogram methods yield fractal dimensions that indicate the similarity of the feature under study to noise (e.g., Brownian noise). A divider measurement method yields a fractal dimension that measures complexity of shape.
3.
Brian Klinkenberg 《Mathematical Geology》1994,26(1):23-46
An in-depth review of the more commonly applied methods used in the determination of the fractal dimension of one-dimensional curves is presented. Many often conflicting opinions about the different methods have been collected and are contrasted with each other. In addition, several little-known but potentially useful techniques are also reviewed. General recommendations to consider when applying any method are made.
4.
Using the fractal dimension of capillary pressure curves to study flow units (cited 3 times: 0 self-citations, 3 by others)
Using image data from cast thin sections of cored wells together with capillary pressure curves, the complex micro-scale pore-throat structure was quantitatively characterized as a fractal dimension by image-based fractal geometry. This characterization was found to discriminate well among the flow behaviors of oil, gas, and water in porous rocks, and can therefore be used to characterize micro-scale reservoir flow units. The paper presents the theoretical basis of pore-throat fractals, the calculation method, and the rationale for applying it to flow-unit characterization. Calibration charts relating the pore-throat fractal dimension to porosity and permeability were established for low-permeability conglomerate reservoirs in western China; with these charts, the fractal dimension can be computed across a whole reservoir from porosity and permeability derived from conventional well logs, enabling reservoir-wide flow-unit classification and evaluation with good results. The study shows that the fractal dimension of capillary pressure curves is an effective approach to characterizing micro-scale reservoir flow units.
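The abstract characterizes pore-throat structure by an image-based fractal dimension. A standard way to compute such a dimension from a binary pore image is box counting; this is a generic sketch of that technique, not the paper's exact capillary-pressure workflow:

```python
import numpy as np

def box_counting_dimension(img, sizes=(2, 4, 8, 16, 32)):
    """Box-counting fractal dimension of a binary (pore = True) image:
    count occupied boxes N(s) at several box sizes s and fit
    log N(s) = -D log s + c."""
    counts = []
    n = img.shape[0]
    for s in sizes:
        # partition into s x s boxes and count boxes containing any pore pixel
        trimmed = img[: n - n % s, : n - n % s]
        blocks = trimmed.reshape(trimmed.shape[0] // s, s,
                                 trimmed.shape[1] // s, s)
        counts.append(np.sum(blocks.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Sanity check: a completely filled square is 2-dimensional
filled = np.ones((128, 128), dtype=bool)
print(round(box_counting_dimension(filled), 2))  # -> 2.0
```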
5.
Alan L. Flint Lorraine E. Flint Edward M. Kwicklis June T. Fabryka-Martin Gudmundur S. Bodvarsson 《Hydrogeology Journal》2002,10(1):180-204
Obtaining values of net infiltration, groundwater travel time, and recharge is necessary at the Yucca Mountain site, Nevada,
USA, in order to evaluate the expected performance of a potential repository as a containment system for high-level radioactive
waste. However, the geologic complexities of this site, its low precipitation and net infiltration, with numerous mechanisms
operating simultaneously to move water through the system, provide many challenges for the estimation of the spatial distribution
of recharge. A variety of methods appropriate for arid environments has been applied, including water-balance techniques,
calculations using Darcy's law in the unsaturated zone, a soil-physics method applied to neutron-hole water-content data,
inverse modeling of thermal profiles in boreholes extending through the thick unsaturated zone, chloride mass balance, atmospheric
radionuclides, and empirical approaches. These methods indicate that near-surface infiltration rates at Yucca Mountain are
highly variable in time and space, with local (point) values ranging from zero to several hundred millimeters per year. Spatially
distributed net-infiltration values average 5 mm/year, with the highest values approaching 20 mm/year near Yucca Crest. Site-scale
recharge estimates range from less than 1 to about 12 mm/year. These results have been incorporated into a site-scale model
that has been calibrated using these data sets that reflect infiltration processes acting on highly variable temporal and
spatial scales. The modeling study predicts highly non-uniform recharge at the water table, distributed significantly differently
from the non-uniform infiltration pattern at the surface.
Electronic Publication
6.
To simultaneously evaluate the decay constant of 40K (λ) and the age of a standard (t_std) using isotopic data from geologic materials, we applied a series of statistical methods. The problem of estimating the most probable intercept of many nonlinear curves in (λ, t_std) space is formulated as an errors-in-variables nonlinear regression model. A maximum likelihood method is then applied to the model for a point estimate, which is equivalent to the nonlinear least-squares method when measurement error distributions are Gaussian. Uncertainties and confidence regions of the estimates can be approximated using three methods: the asymptotic normal approximation, the parametric bootstrap method, and Bonferroni confidence regions. Five pairs of published data for samples with ages from 2 ka to 4.5 Ga were used to estimate λ and the age of Fish Canyon sanidine (t_FCs). The statistical procedure yields most probable estimates of λ = (5.4755 ± 0.0170) × 10^-10 /year (1σ) and t_FCs = 28.269 ± 0.0661 Ma (1σ), which lie between previously published values. These results indicate the power of our approach to provide improved constraints on these parameters, although the preliminary nature of some of the input data requires further review before the values can be adopted.
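One of the three uncertainty methods named in the abstract, the parametric bootstrap, can be sketched generically: resample synthetic data sets from the fitted model assuming Gaussian measurement errors, re-estimate, and take percentile limits. The toy model here (repeated measurements of a single constant, whose maximum-likelihood estimate is the mean) and all names are illustrative assumptions, not the authors' regression setup:

```python
import numpy as np

def parametric_bootstrap_ci(estimate_fn, y, sigma, n_boot=2000,
                            level=0.95, seed=1):
    """Percentile confidence interval for estimate_fn(y) via the
    parametric bootstrap with Gaussian errors of known sigma."""
    rng = np.random.default_rng(seed)
    theta_hat = estimate_fn(y)
    # simulate data from the fitted model, re-estimate each time
    boot = np.array([estimate_fn(theta_hat + rng.normal(0.0, sigma, y.size))
                     for _ in range(n_boot)])
    lo, hi = np.percentile(boot, [50 * (1 - level), 50 * (1 + level)])
    return theta_hat, (lo, hi)

# Toy data: repeated measurements of one constant with sigma = 0.01
y = np.array([5.47, 5.48, 5.46, 5.49, 5.47])
theta, (lo, hi) = parametric_bootstrap_ci(np.mean, y, sigma=0.01)
print(theta, lo < theta < hi)
```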
7.
8.
9.
Using pore-solid fractal dimension to estimate residual LNAPLs saturation in sandy aquifers: A column experiment
The "tailing" effect caused by residual non-aqueous phase liquids (NAPLs) in porous aquifers is one of the frontiers of pollution hydrogeology research. Based on the current understanding that residual NAPLs are mainly controlled by the pore structure of the soil, this study established a method for evaluating the residual saturation of NAPLs from the fractal dimension of the porous medium. Soil-column experiments on residual light NAPLs (LNAPLs) in sandy aquifer material with different ratios of sand and soil were carried out; the correlations between the fractal dimension of the medium, the LNAPL residual, and the soil structure parameters were statistically analyzed, and the formation mechanism and main controlling factors are discussed. The results show that, under the experimental conditions: (1) the fractal dimension of the medium generally correlates positively with the residual saturation of NAPLs, and the optimal fit is the quadratic model S_R = 192.02D^2 - 890.73D + 1040.8; (2) the dominant mechanism is that smaller pores in the medium correspond to a larger fractal dimension, which leads to higher residual NAPL saturation, and stronger heterogeneity of the medium likewise corresponds to a larger fractal dimension and higher residual saturation; (3) the micro-capillary pores characterized by fine sand are the main controlling factor. It is concluded that both the theory and the method of using the fractal dimension of the medium to evaluate residual NAPL saturation are feasible. This study provides a new perspective for research on the "tailing" effect of NAPLs in porous aquifers.
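The quadratic fit reported in the abstract can be evaluated directly; the units of S_R and the range of D over which the fit is valid are as assumed in the study, and the increase of saturation with D holds on the branch above the parabola's vertex:

```python
def residual_saturation(D):
    """Residual LNAPL saturation from pore-solid fractal dimension D,
    using the quadratic fit reported in the abstract:
    S_R = 192.02 D^2 - 890.73 D + 1040.8."""
    return 192.02 * D**2 - 890.73 * D + 1040.8

# Illustrative evaluations at a few fractal dimensions
for D in (2.4, 2.6, 2.8):
    print(D, round(residual_saturation(D), 1))
```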
10.
A regionalized composition is a random vector function whose components are positive and sum to a constant at every point of the sampling region. Consequently, the components of a regionalized composition are necessarily spatially correlated. This spatial dependence, induced by the constant-sum constraint, is a spurious spatial correlation and may lead to misinterpretations of statistical analyses. Furthermore, the cross-covariance matrices of the regionalized composition are singular, as is the coefficient matrix of the cokriging system of equations. Three methods of performing estimation or prediction of a regionalized composition at unsampled points are discussed: (1) the direct approach of estimating each variable separately; (2) the basis method, which is applicable only when a random function is available that can be regarded as the size of the regionalized composition under study; (3) the logratio approach, using the additive-log-ratio transformation proposed by J. Aitchison, which allows statistical analysis of compositional data. We present a brief theoretical review of these three methods and compare them using compositional data from the Lyons West Oil Field in Kansas (USA). It is shown that, although there are no important numerical differences, the direct approach leads to invalid results, whereas the basis method and the additive-log-ratio approach are comparable.
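Aitchison's additive-log-ratio transformation mentioned in the abstract maps an n-part composition to n-1 unconstrained coordinates, on which geostatistical estimation can proceed before back-transforming. A minimal sketch, with the reference part (the last component) and the closure constant chosen arbitrarily:

```python
import numpy as np

def alr(x):
    """Additive log-ratio transform: logs of ratios of the first n-1
    parts to the last (reference) part."""
    x = np.asarray(x, dtype=float)
    return np.log(x[:-1] / x[-1])

def alr_inverse(y, total=1.0):
    """Back-transform: exponentiate, append 1 for the reference part,
    and re-close to the constant sum `total`."""
    parts = np.append(np.exp(y), 1.0)
    return total * parts / parts.sum()

comp = np.array([0.2, 0.3, 0.5])
print(alr(comp), alr_inverse(alr(comp)))  # round-trips to the original
```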
11.
Given that shallow lakes in eastern China are strongly affected by human activities, a new nonparametric method for determining lake reference conditions is proposed, combining a seasonal-decomposition nonparametric local linear regression model, frequency analysis, and the geometric block bootstrap. The method first applies the seasonal-decomposition model to observations of lake nutrients and their response variables to select the time periods suitable for inferring reference conditions; frequency analysis of the observations within those periods then yields reference-condition values for total nitrogen, total phosphorus, and chlorophyll-a; finally, the geometric block bootstrap provides confidence intervals for each. This approach effectively overcomes the shortcomings of previously proposed methods. Applied to Lake Taihu, it gives reference-condition concentrations of 0.78 mg/L for total nitrogen, 0.030 mg/L for total phosphorus, and 2.63 μg/L for chlorophyll-a, with corresponding 95% confidence intervals of 0.57-0.83 mg/L, 0.025-0.046 mg/L, and 1.86-2.65 μg/L. The method is also applicable to other shallow lakes in eastern China that are heavily affected by human activities.
12.
To address the random, fuzzy, gray, and other uncertainties in complex flood disaster systems, and within the generalized-entropy intelligent analysis framework for flood risk management, an attribute interval recognition model for flood disaster risk based on the principle of maximum entropy (AIRM-POME) is established on the basis of the maximum entropy principle and attribute interval recognition theory. Evaluation-index weights are determined by combining trapezoidal fuzzy numbers with the analytic hierarchy process; the attribute-measure intervals computed by AIRM-POME are synthesized with an averaging coefficient; hazard and vulnerability grades of each evaluation unit are rated and ranked by the confidence criterion and the eigenvalue formula; and risk grades are assigned according to the United Nations definition of natural disaster risk and its quantitative expression. The model was applied to flood disaster risk analysis of the Jingjiang flood diversion area. The case study shows that the model is reasonable and reliable, characterizes the various uncertainties in depth, constitutes a new risk analysis method, and can be extended to risk analysis of other natural hazards.
13.
A. J. Haines 《Tectonophysics》1983,93(3-4)
Before the introduction in 1977 of new procedures for calculating the local magnitudes of New Zealand earthquakes, these were calculated using a derivative of Richter's method for Southern California. There are systematic differences between the magnitudes calculated by the two methods, which are functions of position, magnitude, and the types of seismometer recording the event. Usually the difference is small, though, for large earthquakes in particular, it can be close to unity. Differences of order unity are often observed between mB values reported by the ISC and both sets of local magnitudes. There appears to be no relationship between the local magnitudes, which now allow for the nature of wave propagation below New Zealand, and those reported by the ISC.
14.
A comparison of GIS-based landslide susceptibility assessment methods: multivariate versus bivariate (cited 30 times: 0 self-citations, 30 by others)
The purpose of this study is to evaluate and compare the results of multivariate (logistic regression) and bivariate (landslide susceptibility) methods in Geographical Information System (GIS) based landslide susceptibility assessment. To achieve this goal, the Asarsuyu catchment in NW Turkey was selected as a test zone because of its well-known landslide occurrences interfering with the E-5 highway mountain pass. The two methods were applied to the test zone and two separate susceptibility maps were produced. A two-fold comparison scheme was then implemented: the methods were compared by their Seed Cell Area Indexes (SCAI) and by the spatial locations of the resultant susceptibility pixels. It was found that the two methods converge in 80% of the area; however, the weighting algorithm in the bivariate technique (landslide susceptibility method) had some severe deficiencies, as the resultant hazard classes in overweighted areas did not agree with the factual landslide inventory map. The multivariate technique (logistic regression) was more sensitive to the different local features of the test zone and yielded more accurate and homogeneous susceptibility maps.
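The Seed Cell Area Index used for the comparison divides each susceptibility class's share of map area by its share of landslide seed cells; lower values in high-susceptibility classes indicate a better map. A sketch with hypothetical class and seed-cell counts (all numbers illustrative):

```python
import numpy as np

def seed_cell_area_index(class_pixels, seed_cells_in_class):
    """SCAI per susceptibility class:
    (class area %) / (landslide seed-cell %).
    A good susceptibility map shows low SCAI in high classes."""
    area_pct = 100.0 * np.asarray(class_pixels) / np.sum(class_pixels)
    seed_pct = 100.0 * np.asarray(seed_cells_in_class) / np.sum(seed_cells_in_class)
    return area_pct / seed_pct

# hypothetical counts for low / moderate / high susceptibility classes
scai = seed_cell_area_index([50_000, 30_000, 20_000], [50, 250, 700])
print(np.round(scai, 2))  # decreasing from low to high class
```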
15.
Two data sets, one from surface sediment samples obtained from a subtidal sand wave and another consisting of three wave-height records representing different oceanographic conditions, are employed to test the application of maximum entropy (ME) and optimal number of class intervals (K) concepts. Each data set was further modified to obtain a frequency distribution determined by eight unequal-size class intervals. Both the original and modified sets were treated by a reliable statistical method. Comparison of relative entropies and results from the statistical treatments shows the advantage of transforming any original data set by means of the ME and K methods before analyzing it further. Contribution 127, Instituto Argentino de Oceanografia.
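The relative-entropy idea can be illustrated as follows: the Shannon entropy of a K-bin histogram, normalized by its maximum log K, with equal-frequency (quantile) bins attaining that maximum. This is a generic sketch under assumed definitions, not the authors' exact procedure:

```python
import numpy as np

def relative_entropy(data, k):
    """Shannon entropy of an equal-width k-bin histogram, divided by
    the maximum log(k) attained by a uniform occupation of bins."""
    counts, _ = np.histogram(data, bins=k)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p)) / np.log(k)

def maximum_entropy_bins(data, k):
    """Equal-frequency (quantile) bin edges: each class holds the same
    number of observations, so the binning attains maximum entropy."""
    return np.quantile(data, np.linspace(0.0, 1.0, k + 1))

rng = np.random.default_rng(2)
x = rng.lognormal(size=1000)          # skewed sample
edges = maximum_entropy_bins(x, 8)
counts, _ = np.histogram(x, bins=edges)
print(relative_entropy(x, 8), counts)  # equal-width entropy < 1; ~125 per quantile bin
```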
16.
17.
18.
Late Pleistocene glacial equilibrium-line altitudes in the Colorado Front Range: A comparison of methods (cited 1 time: 0 self-citations, 1 by others)
Six methods for approximating late Pleistocene (Pinedale) equilibrium-line altitudes (ELAs) are compared for rapidity of data collection and error (RMSE) from first-order trend surfaces, using the Colorado Front Range. Trend surfaces computed from rapidly applied techniques, such as glaciation threshold, median altitude of small reconstructed glaciers, and altitude of lowest cirque floors have relatively high RMSEs (97–186 m) because they are subjectively derived and are based on small glaciers sensitive to microclimatic variability. Surfaces computed for accumulation-area ratios (AARs) and toe-to-headwall altitude ratios (THARs) of large reconstructed glaciers show that an AAR of 0.65 and a THAR of 0.40 have the lowest RMSEs (about 80 m) and provide the same mean ELA estimate (about 3160 m) as that of the more subjectively derived maximum altitudes of Pinedale lateral moraines (RMSE = 149 m). Second-order trend surfaces demonstrate low ELAs in the latitudinal center of the Front Range, perhaps due to higher winter accumulation there. The mountains do not presently reach the ELA for large glaciers, and small Front Range cirque glaciers are not comparable to small glaciers existing during Pinedale time. Therefore, Pleistocene ELA depression and consequent temperature depression cannot reliably be ascertained from the calculated ELA surfaces.
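The accumulation-area ratio method referred to above picks the altitude below which the glacier's accumulation area equals a fixed fraction of total area (0.65 per the abstract's best-performing value). A sketch on hypothetical hypsometric bands; the band values are invented for illustration:

```python
import numpy as np

def ela_from_aar(band_bottoms, band_areas, aar=0.65):
    """ELA by the accumulation-area ratio method: walking down from the
    glacier head, return the bottom altitude of the band at which the
    accumulated area first reaches `aar` of the total glacier area."""
    order = np.argsort(band_bottoms)[::-1]          # highest band first
    cum = np.cumsum(np.asarray(band_areas)[order])
    i = np.searchsorted(cum, aar * cum[-1])
    return np.asarray(band_bottoms)[order][i]

# hypothetical 100-m hypsometric bands (bottom altitude in m, area in km^2)
bottoms = [3400, 3300, 3200, 3100, 3000, 2900]
areas   = [0.5,  1.0,  1.5,  1.5,  1.0,  0.5]
print(ela_from_aar(bottoms, areas))  # -> 3100
```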
19.
A comparison of equivalent-dose determination methods in optically stimulated luminescence dating of Quaternary sediments (cited 12 times: 0 self-citations, 12 by others)
Accurate determination of the equivalent dose absorbed by detrital minerals after deposition is the most critical step in luminescence dating of Quaternary sediments. Optically stimulated luminescence dating techniques were applied to detrital minerals such as quartz and feldspar from samples of Holocene slope deposits, paleosols, and loess in a comparative study of equivalent-dose determination methods.
20.
Assessment of the sampling variance of the experimental variogram is an important topic in geostatistics as it gives the uncertainty of the variogram estimates. This assessment, however, is repeatedly overlooked in most applications, mainly, perhaps, because a general approach has not been implemented in the most commonly used software packages for variogram analysis. In this paper the authors propose a solution that can be implemented easily in a computer program, and which, subject to certain assumptions, is exact. These assumptions are not very restrictive: second-order stationarity (the process has a finite variance and the variogram has a sill) and, solely for the purpose of evaluating fourth-order moments, a Gaussian distribution for the random function. The approach described here gives the variance–covariance matrix of the experimental variogram, which takes into account not only the correlation among the experimental values but also the multiple use of data in the variogram computation. Among other applications, standard errors may be attached to the variogram estimates and the variance–covariance matrix may be used for fitting a theoretical model by weighted, or by generalized, least squares. Confidence regions that hold a given confidence level for all the variogram lag estimates simultaneously have been calculated using the Bonferroni method for rectangular intervals, and using the multivariate Gaussian assumption for K-dimensional elliptical intervals (where K is the number of experimental variogram estimates). A general approach for incorporating the uncertainty of the experimental variogram into the uncertainty of the variogram model parameters is also shown. A case study with rainfall data is used to illustrate the proposed approach.
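The experimental variogram whose sampling variance the paper analyzes is the method-of-moments estimator; the repeated use of each datum across lag pairs is exactly the data reuse whose effect on the variance is quantified. A minimal 1-D sketch (function name and tolerance scheme are illustrative):

```python
import numpy as np

def experimental_variogram(x, z, lags, tol):
    """Method-of-moments experimental variogram for 1-D coordinates:
    gamma(h) = mean of 0.5*(z_i - z_j)^2 over pairs whose separation
    |x_i - x_j| lies within tol of lag h. Each datum enters many pairs,
    which correlates the lag estimates with one another."""
    x = np.asarray(x, float)
    z = np.asarray(z, float)
    d = np.abs(x[:, None] - x[None, :])
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2
    iu = np.triu_indices(len(x), k=1)     # each unordered pair once
    d, sq = d[iu], sq[iu]
    return np.array([sq[np.abs(d - h) <= tol].mean() for h in lags])

# Alternating series: gamma(1) = 0.5, gamma(2) = 0
x = np.arange(10.0)
z = np.array([0., 1., 0., 1., 0., 1., 0., 1., 0., 1.])
print(experimental_variogram(x, z, lags=[1, 2], tol=0.5))
```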