Similar Documents
20 similar documents found (search time: 6 ms)
1.
The association between constant-sum variables Xi and Xj expressed as percentages can be calculated as a product-moment correlation between Xi and Xj/(100 – Xi) and a correlation between Xj and Xi/(100 – Xj). An asymmetric, square matrix may be formed from these coefficients, and multivariate analysis performed by two methods: singular value decomposition and canonical decomposition. Either analysis avoids problems in the interpretation of correlation coefficients determined from closed arrays, and provides information about dependencies among the variables beyond that obtained from the usual correlation coefficient between Xi and Xj. Two examples show the canonical decomposition to have the greater usefulness.
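As a quick illustration of the construction described above, the sketch below (illustrative synthetic data only, not the authors' code) builds the asymmetric matrix of correlations between Xi and Xj/(100 – Xi) for a small percentage array and factors it by singular value decomposition; the canonical decomposition route is not shown.

```python
# Minimal sketch: asymmetric correlation matrix for constant-sum (percentage)
# variables, followed by its singular value decomposition. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
raw = rng.gamma(2.0, 1.0, size=(50, 4))
X = 100.0 * raw / raw.sum(axis=1, keepdims=True)   # rows sum to 100 (closed data)

p = X.shape[1]
R = np.zeros((p, p))
for i in range(p):
    for j in range(p):
        if i != j:
            # correlation between X_i and X_j rescaled by the "remaining space" of X_i
            R[i, j] = np.corrcoef(X[:, i], X[:, j] / (100.0 - X[:, i]))[0, 1]

U, s, Vt = np.linalg.svd(R)          # SVD of the asymmetric coefficient matrix
print("singular values:", np.round(s, 3))
```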

2.
Compositional data, consisting of vectors of proportions summing to unity such as the geochemical compositions of rocks, have proved difficult to analyze. Recently, the introduction of logistic and logratio transformations between the d-dimensional simplex and Euclidean space has allowed the use of familiar multivariate methods. The problem of how to model and analyze measurement errors in such data is approached through the concept of a perturbation of a composition. Such modeling allows investigation of the role of rescaling, quantification of measurement error, analysis of observer error, and assessment of the effect of measurement error on inferences.
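For readers unfamiliar with the transformations mentioned here, the following minimal sketch (illustrative values and function names only) shows the additive-logratio map from the simplex to Euclidean space and the perturbation operation used to represent measurement error; in logratio coordinates a perturbation appears as an additive shift.

```python
# Illustrative sketch only: additive-logratio (alr) coordinates and the
# perturbation operation on compositions. Numbers are made up.
import numpy as np

def close(x):
    """Rescale a positive vector so its parts sum to 1."""
    x = np.asarray(x, dtype=float)
    return x / x.sum()

def alr(x):
    """Additive logratio: log of the first d-1 parts relative to the last part."""
    x = close(x)
    return np.log(x[:-1] / x[-1])

def perturb(x, e):
    """Perturbation of composition x by error composition e (component-wise product, reclosed)."""
    return close(np.asarray(x, float) * np.asarray(e, float))

x = close([48.0, 30.0, 22.0])          # a "true" composition
e = close([1.02, 0.99, 0.99])          # a small multiplicative measurement error
print(alr(x), alr(perturb(x, e)))      # the perturbation shows up as an additive shift in alr space
```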

3.
Genetic algorithms can solve least-squares problems where local minima may trap more traditional methods. Although genetic algorithms are applicable to compositional as well as noncompositional data, the standard implementation treats compositional data awkwardly. A need to decode, renormalize, then reincode the fitted parameters to regain a composition is not only computationally costly, but may thwart convergence. A modification to the genetic algorithm, described here, adapts the tools of reproduction, crossover, and mutation to compositional data. The modification consists of replacing crossover with a linear mixture of two parents and replacing mutation with a linear mixture of one of the members of the breeding population and a randomly generated individual. By using continuously evolving populations, rather than discrete generations, reproduction is no longer required. As a test of this new approach, a mixture of four Gaussian functions with given means and variances is deconvolved to recover the mixing proportions.
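A hedged sketch of the modified operators is given below: crossover as a random convex mixture of two parent compositions, and mutation as a mixture of a population member with a randomly generated composition. This is not the authors' implementation; the population size, mixing weights, and fitness step are left out or invented for illustration.

```python
# Sketch of compositional crossover and mutation: convex mixtures keep the
# individuals on the simplex, so no decode/renormalize/reincode step is needed.
import numpy as np

rng = np.random.default_rng(1)

def random_composition(d):
    x = rng.random(d)
    return x / x.sum()

def crossover(p1, p2):
    a = rng.random()                    # mixing weight in [0, 1]
    return a * p1 + (1.0 - a) * p2      # a convex mixture of compositions is a composition

def mutate(p, strength=0.2):
    # occasionally mix with a randomly generated individual
    return crossover(p, random_composition(p.size)) if rng.random() < strength else p

pop = [random_composition(4) for _ in range(20)]
child = mutate(crossover(pop[0], pop[1]))
print(child, child.sum())               # still sums to 1
```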

4.
The high-dimensionality of many compositional data sets has caused geologists to look for insights into the observed patterns of variability through two dimension-reducing procedures: (i) the selection of a few subcompositions for particular study, and (ii) principal component analysis. After a brief critical review of the unsatisfactory state of current statistical methodology for these two procedures, this paper takes as a starting point for the resolution of persisting difficulties a recent approach to principal component analysis through a new definition of the covariance structure of a composition. This approach is first applied for expository purposes to a small illustrative compositional data set and then to a number of larger published geochemical data sets. The new approach then leads naturally to a method of measuring the extent to which a subcomposition retains the pattern of variability of the whole composition and so provides a criterion for the selection of suitable subcompositions. Such a selection process is illustrated by application to geochemical data sets.
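The sketch below illustrates one common logratio formulation of these ideas (not necessarily the paper's exact definitions): principal components of the centred-logratio covariance of a synthetic composition, and the fraction of total logratio variability retained by a three-part subcomposition.

```python
# Minimal sketch, assuming a centred-logratio (clr) covariance structure and a
# trace-ratio measure of retained variability; data are synthetic.
import numpy as np

def clr(X):
    L = np.log(X)
    return L - L.mean(axis=1, keepdims=True)

rng = np.random.default_rng(2)
raw = rng.lognormal(size=(60, 5))
X = raw / raw.sum(axis=1, keepdims=True)            # 5-part compositions

Z = clr(X)
evals = np.linalg.eigvalsh(np.cov(Z, rowvar=False))[::-1]
print("principal component variances:", np.round(evals, 4))

sub = X[:, :3] / X[:, :3].sum(axis=1, keepdims=True)   # subcomposition of parts 1-3
retained = np.trace(np.cov(clr(sub), rowvar=False)) / np.trace(np.cov(Z, rowvar=False))
print("fraction of total variability retained:", round(retained, 3))
```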

5.
刘祜  韩绍阳  赵丹  柯丹  李必红 《铀矿地质》2012,(6):370-375,387
Gravity data processing, gravity anomaly extraction, and inference of geological structure were carried out for 39 uranium metallogenic belts and 29 prediction areas across China, providing predictive elements for the national assessment of uranium resource potential, including tectonic units, faults, the extent and depth of intrusive bodies, basin boundaries, and stratigraphic architecture. The gravity-field characteristics of uranium metallogenic environments are summarized as follows: uranium deposits related to hydrothermal mineralization (granite-type, volcanic-type, and some carbonaceous-siliceous-pelitic rock type) occur mainly where the regional gravity field passes from high to low values, or toward the low-field side, and their principal ore-controlling elements, granite bodies and volcanic basins, both appear as gravity lows; basins hosting large sandstone-type uranium deposits lie in regional gravity highs while their source (provenance) areas are gravity lows, and uplifts along basin margins or within basins generally coincide with residual gravity highs; the faults closely associated with all of these deposit types appear in the gravity field as gradient belts, boundaries between gravity anomalies of different character, strings of bead-like anomalies, or banded anomalies.

6.
Commonly, geological studies compare mean values of two or more compositional data suites in order to determine if, how, and by how much they differ. Simple approaches for evaluating and statistically testing differences in mean values for open data fail for compositional (closed) data. A new parameter, an f-value, therefore has been developed, which correctly quantifies the differences among compositional mean values and allows testing those differences for statistical significance. In general, this parameter quantifies only the relative factor by which compositional variables differ across data suites; however for situations where, arguably, at least one component has neither increased nor decreased, an absolute f-value can be computed. In situations where the compositional variables have undergone many perturbations, arguments based upon the f-values and the central limit theorem indicate that logratios of compositional variables should be normally distributed.

7.
The statistical analysis of compositional data is based on determining an appropriate transformation from the simplex to real space. Possible transformations and outliers strongly interact: parameters of transformations may be influenced particularly by outliers, and the result of goodness-of-fit tests will reflect their presence. Thus, the identification of outliers in compositional datasets and the selection of an appropriate transformation of the same data are problems that cannot be separated. A robust method for outlier detection together with the likelihood of transformed data is presented as a first approach to solve those problems when the additive-logratio and multivariate Box-Cox transformations are used. Three examples illustrate the proposed methodology.
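A rough sketch of the general idea, not the paper's procedure: transform to additive-logratio coordinates and flag outliers with a robust Mahalanobis distance from a minimum-covariance-determinant fit. scikit-learn's MinCovDet is used purely for convenience, and the data and threshold are illustrative.

```python
# Illustrative sketch: robust outlier detection in alr coordinates.
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(3)
raw = rng.lognormal(size=(100, 4))
X = raw / raw.sum(axis=1, keepdims=True)          # compositions summing to 1
X[:3] = [0.90, 0.05, 0.03, 0.02]                  # plant a few atypical samples

Y = np.log(X[:, :-1] / X[:, [-1]])                # additive-logratio coordinates
mcd = MinCovDet(random_state=0).fit(Y)            # robust location/scatter estimate
d2 = mcd.mahalanobis(Y)                           # squared robust distances
cutoff = chi2.ppf(0.975, df=Y.shape[1])           # approximate outlier threshold
print("flagged rows:", np.where(d2 > cutoff)[0])
```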

8.
Proper analysis of transformed data arrays (such as percentages) requires paying special attention to the effects of the transformation process itself. Effects of several commonly used transformations (including percentage formation, row and column normalization, and the square root transformation) have been examined with emphasis placed on changes in the statistical and geometrical properties of column vectors that accompany the application of the transformation. Even though many transformations, including taking the square root, open up the percentage array, this does not allow one to ignore the fact that percentage formation may have considerably modified the statistical and geometrical properties of the columns of the matrix. In preparing to analyze percentages one should give serious consideration to using the row normalized form of the data matrix. The individual elements in such a matrix are the direction cosines of the vector in M-dimensional space, the row vectors are of unit length, and the row normalized matrix computed from the closed array is equal to the row normalized, open matrix that is unobservable. Application of a column transformation (such as range restriction and proportion of the maximum) destroys the equality of the open and percentage row normalized matrices. Despite repeated claims to the contrary, one cannot deduce the statistical and geometrical properties of the open matrix given only the statistical and geometrical properties of the closed matrix.
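The central claim about row normalization is easy to verify numerically; the short sketch below (synthetic data) shows that row-normalizing a hypothetical open array and its closed (percentage) counterpart yields the same matrix of unit-length direction-cosine rows.

```python
# Numeric check: row normalization of the open array and of the percentage
# (closed) array gives identical direction-cosine rows. Data are synthetic.
import numpy as np

rng = np.random.default_rng(4)
open_data = rng.gamma(2.0, 1.0, size=(5, 3))                       # hypothetical open array
closed = 100.0 * open_data / open_data.sum(axis=1, keepdims=True)  # percentages

def row_normalize(A):
    return A / np.linalg.norm(A, axis=1, keepdims=True)            # rows become direction cosines

print(np.allclose(row_normalize(open_data), row_normalize(closed)))   # True
print(np.linalg.norm(row_normalize(closed), axis=1))                  # every row has unit length
```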

9.
10.
Based on the principle of singular value decomposition (SVD), its feasibility for potential-field denoising is analyzed and a method for determining the effective rank k is proposed. A comparison of SVD denoising with wavelet denoising on the gravity anomaly of a polygonal thick-plate model demonstrates the effectiveness and advantages of SVD denoising. Applied to the processing of measured gravity data, the method also yields good results. The results from both the theoretical model and the field data show that SVD-based denoising can effectively remove random noise and improve the accuracy of data processing and interpretation.
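The sketch below is a minimal stand-in for the idea (the paper's rule for choosing the effective rank k is not reproduced): a noisy synthetic gravity grid is factored by SVD, the rank is picked from the cumulative singular-value energy, and the truncated reconstruction is compared with the noisy input.

```python
# Minimal sketch of SVD denoising of a gridded anomaly; the rank choice here is
# a simple cumulative-energy rule, not the paper's criterion.
import numpy as np

rng = np.random.default_rng(5)
ny, nx = 64, 64
y, x = np.mgrid[0:ny, 0:nx]
signal = np.exp(-(((x - 32) / 12.0) ** 2 + ((y - 28) / 9.0) ** 2))   # smooth "anomaly"
noisy = signal + 0.05 * rng.standard_normal(signal.shape)            # add random noise

U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
k = int(np.searchsorted(energy, 0.99)) + 1            # keep 99% of singular-value energy
denoised = (U[:, :k] * s[:k]) @ Vt[:k]

print("k =", k)
print("rms error before/after:",
      np.sqrt(np.mean((noisy - signal) ** 2)),
      np.sqrt(np.mean((denoised - signal) ** 2)))
```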

11.
罗凡  严加永  付光明  王昊  陶鑫  罗磊 《中国地质》2019,46(4):759-774
South China is the "granary" of China's metallic mineral resources, hosting a number of polymetallic metallogenic belts. The formation of such belts is commonly accompanied by a distinctive deep background and deep processes, so computing the Moho depth and studying the crust-mantle coupling reflected by crustal thickness variations across South China provides a reference for exploring how the region's enormous metal resources formed and evolved. The data of the high-degree satellite gravity field model EIGEN-6C4 were first corrected using a spherical-coordinate gravity reduction to obtain the satellite Bouguer gravity anomaly of South China. An improved Parker-Oldenburg method with a variable-density interface was then used for inversion, yielding the Moho relief of the region. Finally, combining the extents of the metallogenic belts in the area with previously published geological and geochemical data, the relationship between the sources of ore-forming material in the different belts and the Moho relief is discussed. The Middle-Lower Yangtze and eastern Qinhang belts lie over Moho uplifts; mantle-derived material plays the dominant role in their mineralization, forming polymetallic deposits dominated by copper and iron. The Nanling, Wuyi, western Qinhang, and western Hubei-western Hunan belts lie over zones of alternating Moho uplift and depression; their mineralization is closely related to the interaction of crust- and mantle-derived material, ultimately forming tungsten, tin, gold, silver, lead-zinc, and other polymetallic deposits.
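For orientation, the sketch below evaluates the classic constant-density Parker forward series in 1D, which underlies Parker-Oldenburg interface inversion; the study itself uses an improved, variable-density 3D variant, and all numbers here are invented. Here h is the interface depth perturbation relative to the mean depth z0 (positive downwards), so a crustal root produces a negative anomaly.

```python
# Simplified 1D sketch of the constant-density Parker forward series.
import numpy as np
from math import factorial

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
drho = 400.0           # assumed crust-mantle density contrast, kg/m^3
z0 = 32000.0           # assumed mean Moho depth, m
dx = 5000.0            # grid spacing, m

x = np.arange(1024) * dx
h = 3000.0 * np.exp(-((x - x.mean()) / 2.0e5) ** 2)          # 3 km Moho deepening (a root)

k = np.abs(2.0 * np.pi * np.fft.fftfreq(x.size, d=dx))       # wavenumber, rad/m
spec = np.zeros(x.size, dtype=complex)
for n in range(1, 8):                                         # truncated Parker series
    spec += (k ** (n - 1) / factorial(n)) * np.fft.fft(h ** n)
dg = np.fft.ifft(-2.0 * np.pi * G * drho * np.exp(-k * z0) * spec).real

print("minimum anomaly (mGal):", round(dg.min() * 1e5, 2))    # negative over the root
```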

12.
Linear mixing models of compositional data have been developed in various branches of the earth sciences (e.g., geochemistry, petrology, mineralogy, sedimentology) for the purpose of summarizing variation among a series of observations in terms of proportional contributions of (theoretical) end members. Methods of parameter estimation range from relatively straightforward normative partitioning by (nonnegative) least squares, to more sophisticated bilinear inversion techniques. Solving the bilinear mixing problem involves the estimation of both mixing proportions and end-member compositions from the data. Normative partitioning, also known as linear unmixing, thus can be regarded as a special situation of bilinear unmixing with (supposedly) known end members. Previous attempts to model linear mixing processes are reviewed briefly, and a new iterative strategy for solving the bilinear problem is developed. This end-member modeling algorithm is more robust and has better convergence properties than previously proposed numerical schemes. The bilinear unmixing solution is intrinsically nonunique, unless additional constraints on the model parameters are introduced. In situations where no a priori knowledge is available, the concept of an “optimal” solution may be used. This concept is based on the trade-off between mathematical and geological feasibility, two seemingly contradictory but equally desirable requirements of the unmixing solution.
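The special situation with known end members (normative partitioning by nonnegative least squares) can be written in a few lines; the sketch below uses invented end-member compositions and is not the paper's bilinear algorithm, which also estimates the end members.

```python
# Sketch of linear unmixing with assumed (known) end members via NNLS.
import numpy as np
from scipy.optimize import nnls

# columns = assumed end-member compositions (parts summing to 1)
E = np.array([[0.70, 0.10, 0.05],
              [0.20, 0.60, 0.15],
              [0.10, 0.30, 0.80]])

true_props = np.array([0.5, 0.3, 0.2])
sample = E @ true_props                         # observed mixture

p, residual = nnls(E, sample)                   # nonnegative least squares
p = p / p.sum()                                 # re-close to proportions
print("estimated mixing proportions:", np.round(p, 3))
```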

13.
Compared with conventional Euler deconvolution, joint Euler deconvolution of gravity gradient data offers higher computational accuracy and inversion resolution. To eliminate divergent solutions produced in the computation, various screening methods must be applied in practice, which makes the workflow relatively cumbersome. Providing an effective screening method and developing an easy-to-use visualization software package therefore improves the accuracy, convenience, and overall effectiveness of the method. This paper proposes a joint Euler deconvolution of gravity gradient data constrained by correlation-coefficient edge detection and, following the design principles of an intuitive interface, practical functionality, and concise code, uses the Python language and its libraries to build a software system supporting data and file management, 2D/3D visualization, edge detection, and joint Euler deconvolution of gravity gradient data. Tests on theoretical models and measured data verify the computational accuracy and the practicality of the software, and the designed system improves the results of application.
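A hedged sketch of conventional window-based Euler deconvolution on a synthetic profile over a point (sphere-like) source is given below; the proposed joint gravity-gradient formulation with correlation-coefficient edge constraints, and the accompanying software, are more involved and are not reproduced here.

```python
# Sketch of conventional Euler deconvolution on a synthetic gravity profile.
import numpy as np

G = 6.674e-11
m, xs_true, d_true = 1.0e12, 5000.0, 1500.0        # mass (kg), position (m), depth (m)
x = np.linspace(0.0, 10000.0, 201)
r2 = (x - xs_true) ** 2 + d_true ** 2

g  = G * m * d_true / r2 ** 1.5                              # vertical gravity on the profile
gx = -3.0 * G * m * d_true * (x - xs_true) / r2 ** 2.5       # horizontal derivative
gz = G * m * (3.0 * d_true ** 2 - r2) / r2 ** 2.5            # vertical derivative

N = 2.0                                            # structural index of a point mass
win = 21
half = win // 2
solutions = []
for c in range(half, x.size - half):
    s = slice(c - half, c + half + 1)
    # Euler's equation per window: xs*gx + zs*gz = x*gx + N*g  (observation height z = 0)
    A = np.column_stack([gx[s], gz[s]])
    b = x[s] * gx[s] + N * g[s]
    (xs, zs), *_ = np.linalg.lstsq(A, b, rcond=None)
    solutions.append((xs, zs))

sol = np.array(solutions)
print("median source estimate: x0 =", round(np.median(sol[:, 0]), 1),
      "depth =", round(np.median(sol[:, 1]), 1))
```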

14.
Oil and gas exploration and development deal with many kinds of data sets, and several different averages can be computed for the same data set; at present there is no clear and effective way to decide which average objectively reflects the typical level of a data set. Using statistical averaging analysis, data sets of porosity, permeability, production, and cost from exploration and development practice were analyzed systematically, and a weighted-median formula and a balanced-median rule were established. The weighted-median formula is suited to characterizing normal, ordered data sets from different fields, the weighted median being the balance point of such a data set. The balanced-median rule is suited to determining the typical level of a normal, ordered data set that is large enough to satisfy the basic requirements of statistical analysis and for which a reasonable weighting index can be chosen. A weighted mean with a clear physical meaning can also define the typical level of a data set.
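The abstract does not spell out the formulas, so the sketch below shows only the standard weighted-median computation (sort the values and take the first one at which the cumulative weight reaches half of the total weight), with invented porosity and thickness values serving as an example of a weighting index.

```python
# Standard weighted median as an illustration; not necessarily the paper's formula.
import numpy as np

def weighted_median(values, weights):
    values, weights = np.asarray(values, float), np.asarray(weights, float)
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * w.sum())]

porosity = [8.0, 12.0, 15.0, 18.0, 22.0]          # hypothetical core porosities, %
thickness = [2.0, 5.0, 10.0, 4.0, 1.0]            # hypothetical weighting index (interval thickness, m)
print(weighted_median(porosity, thickness))       # 15.0 -- the "balance point" of the data set
```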

15.
Hydraulic exponents and unit hydraulic exponents are unit-sum constrained, which requires that they be analyzed by statistical methods designed for compositional data. Though uncertainties remain regarding selection of the best constraining operation and method of handling departures from the unit-sum constraint, neither category of uncertainty should be an impediment to the selection of the appropriate statistical methodology. In a small sample study, the hydraulic geometry of different types of streams was compared: (1) semi-arid: perennial vs. ephemeral; (2) tropical: Puerto Rico vs. West Malaysia; and (3) semi-arid vs. tropical (by pooling the previous data sets). All three comparisons revealed statistically significant differences in either logratio mean vectors or logratio covariance matrices but not both. All six categories of data had logistic normal distributions. Because the derivatives at a given discharge of curvilinear hydraulic geometry relationships and hydraulic exponents on either side of the breakpoints of piecewise linear relationships are unit-sum constrained, they also can be studied by compositional methods. However, the compositional approach is limited in cases where distributions have large departures from logistic normality and for streams that have negative hydraulic exponents.

16.
The Yinchuan Plain is an area of well-developed active faults, and active faults are closely linked to earthquake hazards, so studying the fault system of the Yinchuan Plain is important for urban development. The 1:200,000 Bouguer gravity data of the Yinchuan Plain were reprocessed, the Bouguer gravity anomaly characteristics of the area were analyzed, and the fault system was delineated using gravity-anomaly edge-detection methods. The fault-parameter map method, Euler deconvolution, optimization inversion, and 2.5D interactive inversion, constrained by seismic data, were applied to quantitative inversion along profile 83503 in the study area. Using the available gravity data, the paper discusses how to interpret the fault system in plan view and how to invert faults quantitatively along profiles, and reaches new conclusions, including a northward extension of the Yinchuan Fault and a westward shift of some segments of the Yellow River Fault relative to earlier delineations, providing new reference material for the study of active faults.
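As a minimal example of gravity edge detection of the kind referred to here, the sketch below computes the total horizontal derivative (THDR) of a synthetic smoothed block anomaly; its maxima track the body's edges. The study's own combination of edge-identification and inversion methods is not reproduced.

```python
# Illustrative THDR edge detection on a synthetic gridded anomaly.
import numpy as np

ny, nx, dx = 128, 128, 500.0                       # grid size and spacing (m)
y, x = np.mgrid[0:ny, 0:nx]
block = ((x > 40) & (x < 90) & (y > 50) & (y < 100)).astype(float)
g = 15.0 * block                                   # hypothetical anomaly over a dense body
for _ in range(20):                                # crude smoothing to mimic a real field
    g = 0.25 * (np.roll(g, 1, 0) + np.roll(g, -1, 0) + np.roll(g, 1, 1) + np.roll(g, -1, 1))

gy, gx_ = np.gradient(g, dx)                       # horizontal derivatives (d/dy, d/dx)
thdr = np.hypot(gx_, gy)                           # total horizontal derivative
edge_row = thdr[75]                                # slice across the body
print("west edge near column:", int(np.argmax(edge_row[:64])))
print("east edge near column:", 64 + int(np.argmax(edge_row[64:])))
```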

17.
To investigate the formation and early evolution of the lunar mantle and crust we have analysed the oxygen isotopic composition, titanium content and modal mineralogy of a suite of lunar basalts. Our sample set included eight low-Ti basalts from the Apollo 12 and 15 collections, and 12 high-Ti basalts from the Apollo 11 and 17 collections. In addition, we have determined the oxygen isotopic composition of an Apollo 15 KREEP (K - potassium, REE - Rare Earth Element, and P - phosphorus) basalt (sample 15386) and an Apollo 14 feldspathic mare basalt (sample 14053). Our data display a continuum in bulk-rock δ18O values, from relatively low values in the most Ti-rich samples to higher values in the Ti-poor samples, with the Apollo 11 sample suite partially bridging the gap. Calculation of bulk-rock δ18O values, using a combination of previously published oxygen isotope data on mineral separates from lunar basalts, and modal mineralogy (determined in this study), matches the measured bulk-rock δ18O values. This demonstrates that differences in mineral modal assemblage produce differences in mare basalt δ18O bulk-rock values. Differences between the low- and high-Ti mare basalts appear to be largely a reflection of mantle-source heterogeneities, and in particular, the highly variable distribution of ilmenite within the lunar mantle. Bulk δ18O variation in mare basalts is also controlled by fractional crystallisation of a few key mineral phases. Thus, ilmenite fractionation is important in the case of high-Ti Apollo 17 samples, whereas olivine plays a more dominant role for the low-Ti Apollo 12 samples. Consistent with the results of previous studies, our data reveal no detectable difference between the Δ17O of the Earth and Moon. The fact that oxygen three-isotope studies have been unable to detect a measurable difference at such high precisions reinforces doubts about the giant impact hypothesis as presently formulated.

18.
Based on the principles of the wavelet transform, the application and effect of wavelet threshold denoising and potential-field separation on a measured Bouguer gravity anomaly are studied. Tests on a model of multiple rectangular prisms show that, with a suitable choice of wavelet basis and wavelet parameters, the useful information in the gravity anomaly can be extracted more effectively, providing a theoretical basis for the subsequent application to measured gravity data. Processing of the measured Bouguer gravity data shows that wavelet-based denoising and field separation can effectively remove random noise, better distinguish local from regional anomalies, and improve the accuracy of useful-signal identification in data processing and interpretation.
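A hedged sketch of wavelet threshold denoising of a synthetic gravity profile with PyWavelets is given below; the wavelet, decomposition level, and universal threshold are common defaults, not necessarily the choices made in the study.

```python
# Illustrative wavelet threshold denoising of a synthetic gravity profile.
import numpy as np
import pywt

rng = np.random.default_rng(7)
x = np.linspace(-20.0, 20.0, 1024)                       # profile coordinate, km
signal = 12.0 / (1.0 + (x / 4.0) ** 2)                   # smooth "anomaly", mGal
noisy = signal + 0.4 * rng.standard_normal(x.size)

coeffs = pywt.wavedec(noisy, 'db4', level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # noise estimate from the finest details
thr = sigma * np.sqrt(2.0 * np.log(x.size))              # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, 'db4')[:x.size]

print("rms error before/after:",
      round(float(np.sqrt(np.mean((noisy - signal) ** 2))), 3),
      round(float(np.sqrt(np.mean((denoised - signal) ** 2))), 3))
```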

19.
The Pangushan area is an important tungsten-tin metallogenic area within the Yudu-Ganxian ore cluster; its faults and granite bodies are closely related to mineralization. Gravity and magnetic data are used to study the faults and the distribution of granite bodies in the Pangushan area, providing a geophysical basis for deep prospecting. The normalized vertical derivative of the total horizontal derivative of the gravity anomaly is used to infer the plan-view positions of faults; Euler deconvolution applied to the first vertical derivative of the gravity anomaly is used to infer their average depths; the first vertical derivatives of the gravity and magnetic anomalies are used to infer the plan-view positions of granite bodies; 2.5D interactive forward and inverse modeling of gravity-magnetic profiles in the RGIS software is used to infer granite cross-sections, with borehole data verifying the correctness of the interpretation; and the 3D gravity-magnetic model editing module of RGIS is used to display the granite bodies in space. The results show that the Pangushan area contains both exposed and concealed faults, as well as exposed low-density, weakly magnetic granites and concealed low-density, strongly magnetic granites; the distribution of the granite bodies is controlled by the faults, and the distribution of the ore bodies is jointly controlled by the faults and the granite bodies.

20.
3D interactive modeling of gravity and magnetic anomaly data   Cited by: 5 (self-citations: 0, other citations: 5)
Building on 3D forward and inverse modeling techniques for the gravity and magnetic anomalies of triangulated polyhedral models, and drawing on the strengths of interactive forward modeling, 3D interactive inversion of gravity and magnetic anomaly data with triangulated polyhedral models has been implemented. By working out how to compute the partial derivatives with respect to the polyhedron vertices, the objective function is linearized, giving a procedure in which the computer iteratively and automatically adjusts the model body. Computer graphics are used to display the gravity and magnetic fields and the model bodies in three-dimensional space, and interactive model-editing tools were developed, so that known information and the interpreter's judgment and experience can be brought into the interpretation process, completing 3D modeling of gravity and magnetic anomaly data and reducing the non-uniqueness of the interpretation results.
