Similar Documents
20 similar documents found (search time: 46 ms)
1.
Data quality control in geochemistry poses a fundamental problem that statistics and computation have yet to solve fully. We used refined Monte Carlo simulations of 10,000 replications and 190 independent experiments for sample sizes of 5 to 100. Statistical contaminations of 1 to 4 observations were used to compare 9 statistical parameters (4 central-tendency estimates—mean, median, trimean, and Gastwirth mean—and 5 dispersion estimates—standard deviation, median absolute deviation, \( S_n \), \( Q_n \), and \( \widehat{\sigma}_n \)). The presence of discordant observations in the data arrays caused the outlier-based and robust parameters to disagree with each other. However, when the mean and standard deviation (outlier-based parameters) were estimated from censored data arrays obtained after the identification and separation of outlying observations, they generally provided a better estimate of the population than the robust estimates obtained from the original data arrays. This inference is contrary to the general belief, and reasons for the better performance of the outlier-based methods relative to the robust methods are therefore suggested. However, when all parameters were estimated from censored arrays and the precise and accurate correction factors put forth in this work were applied, all of them became fully consistent, i.e., the mean agreed with the median, trimean, and Gastwirth mean, and the standard deviation with the median absolute deviation, \( S_n \), \( Q_n \), and \( \widehat{\sigma}_n \). An example of inter-laboratory chemical data for the Hawaiian reference material BHVO-1 covered sample sizes from 5 to 100 and showed that small samples of up to 20 provide inconsistent estimates, whereas larger samples of 20–100, especially >40, are more appropriate for estimating statistical parameters by robust or outlier-based methods. Although all statistical estimators provided consistent results, our simulation study shows that the censored sample mean and population standard deviation are the best estimates to use.
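A minimal sketch of this kind of contamination experiment can be reproduced in a few lines of Python. The 2-sigma censoring rule, contamination offset, and sample size below are illustrative stand-ins (the paper uses formal discordancy tests and 190 experiment configurations), but the logic — outlier-based estimates on censored arrays versus robust estimates on the raw arrays — is the same:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate(n=30, n_outliers=2, replications=10_000):
    """Compare location/scale estimates on contaminated normal samples."""
    results = {"mean": [], "std": [], "median": [], "mad": []}
    for _ in range(replications):
        x = rng.normal(0.0, 1.0, n)
        x[:n_outliers] += 6.0          # statistical contamination
        # censor: drop points beyond 2 sample std devs (a crude stand-in
        # for the formal outlier tests used in the paper)
        censored = x[np.abs(x - x.mean()) < 2.0 * x.std(ddof=1)]
        results["mean"].append(censored.mean())
        results["std"].append(censored.std(ddof=1))
        results["median"].append(np.median(x))
        # MAD scaled to be consistent with sigma for normal data
        results["mad"].append(1.4826 * np.median(np.abs(x - np.median(x))))
    return {k: (np.mean(v), np.std(v)) for k, v in results.items()}

print(simulate())   # (bias, spread) of each estimator over the replications
```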

2.
Optimal discrimination among several groups can be achieved by simultaneous diagonalization of the pooled within-group (W) and among-group (A) sums-of-squares and cross-product matrices formed from axial-ratio sample statistics of quartz grains belonging to different sieve grades. This method maximizes the ratio of among-group to within-group cross-product quadratic forms (V'AV / V'WV) and simultaneously yields discriminant scores whose correlation coefficients are zero for group means as well as within each group. The procedure permits a simple Euclidean distance measure for partitioning the discriminant space for assignment. Although the W⁻¹ and A matrices are symmetric, the W⁻¹A matrix needed for multigroup discrimination is asymmetric; hence the eigenstructure of W⁻¹A is obtained by simultaneous diagonalization of the W and A matrices. The first four sample statistics (mean, standard deviation, skewness, kurtosis) of normalized axial ratios are required for discrimination, although the mean and standard deviation are the most important discriminators.
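As an illustrative sketch (not the authors' code), the simultaneous diagonalization can be carried out with SciPy's generalized symmetric eigensolver, which finds vectors V maximizing V'AV / V'WV without ever forming the asymmetric W⁻¹A explicitly; the input matrix X of grain statistics and the grouping labels are assumed placeholders:

```python
import numpy as np
from scipy.linalg import eigh

def discriminant_axes(X, groups):
    """Simultaneously diagonalize within-group (W) and among-group (A)
    cross-product matrices; eigenvectors maximize v'Av / v'Wv."""
    groups = np.asarray(groups)
    overall = X.mean(axis=0)
    W = np.zeros((X.shape[1], X.shape[1]))
    A = np.zeros_like(W)
    for g in np.unique(groups):
        Xg = X[groups == g]
        d = Xg - Xg.mean(axis=0)
        W += d.T @ d                              # pooled within-group
        m = (Xg.mean(axis=0) - overall)[:, None]
        A += len(Xg) * (m @ m.T)                  # among-group
    # eigh solves the generalized problem A v = lam W v, which is
    # equivalent to the eigenstructure of the asymmetric W^-1 A
    lam, V = eigh(A, W)
    order = np.argsort(lam)[::-1]
    return lam[order], V[:, order]
```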

3.
Numerical data summaries in many geochemical papers rely on arithmetic means, with or without standard deviations. Yet the mean is the worst average (estimate of location) for the extremely common geochemical data sets which are non-normally distributed or include outliers. The widely used geometric mean, although allowing for skewed distributions, is equally susceptible to outliers. The superior performance of 19 robust estimates of location (simple median, plus various combined, adaptive, trimmed, and skipped L, M, and W estimates) is illustrated using real geochemical data sets varying in sources of error (pure analytical error to multicomponent geological variability), modality (unimodal to polymodal), size (20 to >2000 data values), and continuity (continuous to truncated in either or both tails). The arithmetic mean tends to overestimate the location of many geochemical data sets because of positive skew and large outliers; robust estimates yield consistently smaller averages, although some (e.g., Hampel's and Andrews') perform better than others (e.g., the shorth mean, dominant cluster mode). Recommended values for international standard rocks, and for such important geochemical concepts as the average chondrite, can be reproduced far more simply via robust estimation on complete interlaboratory data sets than via the rather complicated and subjective methods (e.g., laboratory ratings) so far used in the literature. Robust estimates also seem generally less affected by truncation than the mean; for example, if values below machine detection limits are alternatively treated as missing values or as real values of zero, similar averages are obtained. The standard (and mean) deviations yield consistently larger values of scale for many geochemical data sets than the hinge width (interquartile range) or the median absolute deviation from the median. Summaries of geochemical data should therefore always include at least the simple median and hinge width, to complement the often misleading mean and standard deviation.
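The contrast between the mean and a few of the robust estimates mentioned above is easy to demonstrate on synthetic positively skewed data with outliers; this sketch uses SciPy's built-in estimators rather than the paper's full suite of 19:

```python
import numpy as np
from scipy import stats

x = stats.lognorm.rvs(s=1.0, size=200, random_state=0)   # positively skewed
x[:3] *= 25.0                                            # a few large outliers

print("arithmetic mean:", np.mean(x))        # inflated by skew and outliers
print("geometric mean :", stats.gmean(x))    # handles skew, not outliers
print("median         :", np.median(x))
print("20% trimmed    :", stats.trim_mean(x, 0.2))

q1, q3 = np.percentile(x, [25, 75])
print("hinge width    :", q3 - q1)           # robust scale, vs np.std(x)
print("MAD            :", stats.median_abs_deviation(x))
```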

4.
Summary A new concept, the feature size range of a roughness profile, is introduced in this paper. It is shown that this feature size range plays an important role in accurately estimating the fractal dimension, D, using the divider method. Discussion is given of the difficulty of using both the divider and the box methods to estimate D accurately for self-affine profiles. The line scaling method's capability to quantify the roughness of natural rock joint profiles, which may be self-affine, is explored. Fractional Brownian profiles (self-affine profiles) with and without global trends were generated using known values of D, input standard deviation, σ, and global trend angles. For different values of the input parameter of the line scaling method (step size a₀), D and another associated fractal parameter, C, were calculated for these profiles. Suitable ranges for a₀ were estimated to obtain a computed D within ±10% of the D used for the generation. Minimum and maximum feature sizes of the profiles were defined and calculated. The feature size range was found to increase with increasing D and σ, in addition to depending on the total horizontal length of the profile and the total number of data points in the profile. The suitable range for a₀ was found to depend on both D and σ, and thus, in turn, on the feature size range, indicating the importance of calculating the feature size range of roughness profiles to obtain accurate estimates of the fractal parameters. Procedures are given to estimate the suitable a₀ range for a given natural rock joint profile for use with the line scaling method in estimating fractal parameters within ±10% error. Results indicate the importance of removing global trends from roughness profiles to obtain accurate estimates of the fractal parameters. The parameters C and D are recommended for use with the line scaling method in quantifying stationary roughness. In addition, one or more parameters should be used to quantify the non-stationary part of roughness, if it exists. The estimated C was found to depend on both D and σ and seems to have potential to capture the scale effect of roughness profiles.
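A bare-bones divider (compass-walking) estimator illustrates the method whose accuracy the feature size range governs. This sketch ignores the self-affinity complications discussed above, approximates each divider step by jumping to the first sample point at least one step away, and leaves the profile and the step sizes to the caller:

```python
import numpy as np

def divider_length(x, y, step):
    """Walk dividers of span `step` along a profile; return total length."""
    i, length = 0, 0.0
    px, py = x[0], y[0]
    while True:
        d = np.hypot(x[i:] - px, y[i:] - py)
        nxt = np.argmax(d >= step)       # first point at least `step` away
        if d[nxt] < step:                # no remaining point far enough
            break
        i += nxt
        length += step                   # approximate: no interpolation
        px, py = x[i], y[i]
    return length

def fractal_dimension(x, y, steps):
    """Divider estimate: total length L(s) ~ s**(1 - D)."""
    L = np.array([divider_length(x, y, s) for s in steps])
    slope, _ = np.polyfit(np.log(steps), np.log(L), 1)
    return 1.0 - slope
```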

5.
Summary A programme of on-site quality control of the as-produced properties of rock armour used in the construction of a beach revetment for coastal protection in south Devon is described. The programme was based on guidelines set out in the recently published CIRIA/CUR Manual, The Use of Rock in Coastal and Shoreline Engineering (1991). The results of the study indicate that quality control of heavy armourstone block weight, grading, and shape would be greatly facilitated by the introduction of standard specifications as recommended in the CIRIA/CUR Manual. A completely satisfactory test for the quality control of block integrity has yet to be devised and accepted, although the drop test breakage index described in the Manual provides a starting point. An improved method of heavy armour block weight estimation by cubing-up, based on the volume of an imaginary enclosing rectanguloid multiplied by the rock density and a fractional weight shape factor, has been developed. Using shape factors computed from weighed sample blocks of known density, the accuracy of these block weight estimates is shown to be of the order of ±5% for mean or median values. These fractional shape factors also allow some control of possible undesirable block shapes, such as the rectanguloid or wedge shape.
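The cubing-up estimate reduces to one line of arithmetic. The dimensions, density, and shape factor below are hypothetical illustration values; in practice the shape factor would be calibrated from weighed sample blocks of known density, as described above:

```python
def block_weight(x, y, z, density, shape_factor):
    """Cubing-up estimate: enclosing rectanguloid volume x rock density x
    fractional weight shape factor (dimensionless, < 1)."""
    return x * y * z * density * shape_factor

# hypothetical armourstone block: 1.8 m x 1.4 m x 1.1 m enclosing box,
# limestone density 2650 kg/m3, calibrated shape factor 0.55
print(block_weight(1.8, 1.4, 1.1, 2650.0, 0.55) / 1000.0, "tonnes")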

6.
²⁹Si and ²⁷Al nuclear magnetic resonance (NMR) analysis of synthetic 2:1 trioctahedral phyllosilicates, with tetrahedral ratios Al_T/(Si + Al_T) ranging from 0 to 0.5, has shown that the ditrigonal distortion of the tetrahedral rings (angle α) is the main factor controlling the chemical shift values of the tetrahedral components in both signals. An increase in the ditrigonal rotation angle shifts these components towards more positive values. For each sample, the composition of the tetrahedral and octahedral sheets determines the value of α, and from this parameter the mean tetrahedral T–O–T angle and the chemical shift values of the components can be deduced. For a given environment, variations in the ditrigonal angle are responsible for the observed evolution of chemical shift values with bulk composition. The comparative analysis of mica and saponite samples has demonstrated that the location of the compensating charge (interlayer versus octahedral sheet) does not affect chemical shift values.

7.
Fourier optics and an optical bench model are used to construct an ensemble of candidate functions representing variational patterns in an undersampled two-dimensional function g(x,y). The known sample function s(x,y) is the product of g(x,y) and a set of unit impulses on the sample point pattern p(x,y), which, from the optical point of view, is an aperture imposing strict mathematical limits on what the sample can tell about g(x,y). The laws of optics enforce much needed—and often lacking—conceptual discipline in reconstructing candidate variational patterns in g(x,y). The Fourier transform (FT) of s(x,y) is the convolution of the FTs of g(x,y) and p(x,y). If the convolution shows aliasing or confounding of frequencies, undersampling is surely present and all reconstructions are indeterminate. Information from outside s(x,y) is then required, and it is easily expressed in frequency terms so that the principles of optical filtering and image reconstruction can be applied. In the application described and pictured, the FT of s(x,y) was filtered to eliminate unlikely or uninteresting high-frequency amplitude maxima. A menu of the 100 strongest remaining terms was taken as indicating the principal variational patterns in g(x,y). Subsets of 10 terms from the menu were chosen using stepwise regression. Restricting the subset size in this way made both the variance and the span of their inverse transforms consistent with those of the data. The amplitudes of the patterns being overdetermined, it was possible to estimate the phases as well. The inverse transforms of 9 patterns so selected are regarded as an ensemble of reconstructions, that is, as stochastic process models, from which estimates of the mean and other moments can be calculated. This paper was presented at Emerging Concepts, MGUS-87 Conference, Redwood City, California, 13–15 April 1987.
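In modern terms the optical-bench filtering step maps onto a 2-D FFT. This sketch (with a made-up test function and sampling density) keeps the 100 strongest frequency terms as the "menu"; the stepwise-regression subset selection the paper applies afterwards is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

# s(x, y): g(x, y) known only at scattered sample points (zero elsewhere)
g = np.add.outer(np.sin(np.linspace(0, 4 * np.pi, 64)),
                 np.cos(np.linspace(0, 2 * np.pi, 64)))
mask = rng.random(g.shape) < 0.15        # sparse sampling aperture p(x, y)
s = np.where(mask, g, 0.0)

S = np.fft.fft2(s)                       # FT(s) = FT(g) convolved with FT(p)
amp = np.abs(S)

# keep only the 100 strongest frequency terms as the candidate "menu"
thresh = np.sort(amp, axis=None)[-100]
S_filtered = np.where(amp >= thresh, S, 0.0)

reconstruction = np.fft.ifft2(S_filtered).real
```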

8.
In studies that involve a finite sample size of spatial data it is often of interest to test (statistically) the assumption that the marginal (or univariate) distribution of the data is Gaussian (normal). This may be important per se because, for example, a data transformation may be desired if the normality hypothesis is rejected, or it may provide a way of testing other hypotheses, such as lognormality, by testing the normality of the logarithms of the observations. The most commonly used tests, such as the Kolmogorov–Smirnov (K–S), chi-square (χ²), and Shapiro–Wilks (S–W) tests, are designed on the assumption that the observations are independent and identically distributed (iid). In geostatistical applications, however, this is not usually the case unless the spatial covariance (semivariogram) function is a pure nugget variance. If the covariance structure has a (practical) range greater than the minimum distance between observations, the data are correlated and the standard tests cannot be applied to the probability density function (pdf) or cumulative distribution function (cdf) estimated directly from the data. The problem with correlated data arises not from the correlation per se but from cases in which correlated data are clustered rather than being located on a regular grid. In these cases inferences requiring iid assumptions may be seriously biased because of the spatial correlation among the observations. If unbiased (i.e., de-clustered) estimates of the pdf or cdf are obtained, then normality tests, such as K–S, χ², or S–W, can be applied using the unbiased estimates and an effective number of samples equivalent to the iid case. There are three questions to be addressed in these cases: Is the distribution ergodic?
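A sketch of the de-clustering idea: weight each observation inversely by the occupancy of its grid cell, then compare the weighted empirical cdf to a normal cdf fitted with the weighted moments. Converting the resulting K–S distance into a p-value still requires an effective (not actual) number of samples, which depends on the spatial correlation and is not computed here; `cell_size` is a user-chosen assumption:

```python
import numpy as np
from scipy import stats

def cell_decluster_weights(coords, cell_size):
    """Cell declustering: weight each datum inversely to the number of
    data sharing its grid cell, so clustered points count less."""
    cells = np.floor(coords / cell_size).astype(int)
    _, inverse, counts = np.unique(cells, axis=0,
                                   return_inverse=True, return_counts=True)
    w = 1.0 / counts[inverse]
    return w / w.sum()

def weighted_ks_statistic(values, weights):
    """K-S distance between the declustered empirical cdf and a normal
    cdf fitted with the declustered mean and standard deviation."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    ecdf = np.cumsum(w)
    mu = np.sum(w * v)
    sd = np.sqrt(np.sum(w * (v - mu) ** 2))
    return np.max(np.abs(ecdf - stats.norm.cdf(v, mu, sd)))
```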

9.
A method for the chemical separation of Lu and Hf from rock, meteorite, and mineral samples is described, together with a much improved mass spectrometric running technique for Hf. This allows (i) geo- and cosmochronology using the ¹⁷⁶Lu→¹⁷⁶Hf decay scheme, and (ii) geochemical studies of planetary processes in the Earth and Moon. Chemical yields for the three-stage ion-exchange column procedure average 90% for Hf. Chemical blanks are <0.2 ng for Lu and Hf. From 1 μg of Hf, a total ion current of 0.5 × 10⁻¹¹ A can be maintained for 3–5 h, yielding 0.01–0.03% precision on the ¹⁷⁶Hf/¹⁷⁷Hf ratio. Normalisation to ¹⁷⁹Hf/¹⁷⁷Hf = 0.7325 is used. Extensive results for the Johnson Matthey Hf standard JMC 475 are presented, and this sample is urged as an international mass spectrometric standard; suitable aliquots, prepared from a single batch of JMC 475, are available from Denver. Lu–Hf analyses of the standard rocks BCR-1 and JB-1 are given. The potential of the Lu–Hf method in isotope geochemistry is assessed.

10.
The standard Box and Cox generalized power transform of the form (x^λ − 1)/λ is applied to preprocess hydrogeochemical uranium, sodium, potassium, calcium, magnesium, chlorine, sulphate, carbonate, vanadium, pH, and conductivity data. These data do not reduce to normal form at the optimum λ obtained using the three objective functions discussed by R. J. Howarth and S. A. M. Earle. We use an objective function based on the observed and theoretical normal frequencies of the transformed data: the uranium and calcium data reduce to the desired normal form at the λ values obtained by optimizing this new merit function; the vanadium data reduce to approximately normal form; but the potassium, chlorine, and sulphate data do not. The other elemental data follow a lognormal form. A consequence of the Box and Cox transformation is that if a set of data is reducible to normal form, then the density distribution of the original untransformed data is given by \( f(x) = \frac{x^{\lambda-1}}{\sigma\sqrt{2\pi}} \exp\!\left[-\frac{\left((x^{\lambda}-1)/\lambda-\mu\right)^{2}}{2\sigma^{2}}\right] \), where μ and σ are the mean and standard deviation of the transformed data and λ is obtained by optimization of the new merit function; potassium data are an exception.
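A sketch of an observed-versus-theoretical-frequency merit function of the kind described (the bin count, search bounds, and synthetic data are arbitrary choices, and this is not Howarth and Earle's formulation):

```python
import numpy as np
from scipy import stats, optimize

def boxcox(x, lam):
    return (x ** lam - 1.0) / lam if lam != 0 else np.log(x)

def frequency_merit(lam, x, bins=20):
    """Chi-square-like distance between observed histogram frequencies of
    the transformed data and those a fitted normal would predict."""
    y = boxcox(x, lam)
    obs, edges = np.histogram(y, bins=bins)
    p = np.diff(stats.norm.cdf(edges, y.mean(), y.std(ddof=1)))
    exp = len(y) * p
    keep = exp > 0
    return np.sum((obs[keep] - exp[keep]) ** 2 / exp[keep])

x = stats.lognorm.rvs(s=0.8, size=500, random_state=3)  # skewed stand-in data
res = optimize.minimize_scalar(frequency_merit, bounds=(-2, 2),
                               args=(x,), method="bounded")
print("optimal lambda:", res.x)
```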

11.
An objective replacement method for censored geochemical data
Geochemical data are commonly censored; that is, concentrations for some samples are reported as less than or greater than some value. Censored data hamper statistical analysis because many computational techniques require a complete set of uncensored data. We show that the simple substitution method for creating an uncensored dataset, e.g., replacement by 3/4 of the detection limit, has serious flaws, and we present an objective method to determine the replacement value. Our basic premise is that the replacement value should equal the mean of the actual values represented by the qualified data. We adapt the maximum likelihood approach (Cohen, 1961) to estimate this mean. This method reproduces the mean and skewness as well as or better than a simple substitution using 3/4 of the lower detection limit or 4/3 of the upper detection limit. For a small proportion of less-than substitutions, a simple-substitution replacement factor of 0.55 is preferable to 3/4; for a small proportion of greater-than substitutions, a simple-substitution replacement factor of 1.7 is preferable to 4/3, provided the resulting replacement value does not exceed 100%. For more than 10% replacement, a mean empirical factor may be used. However, empirically determined simple-substitution replacement factors usually vary among different data sets and are less reliable with more replacements; a maximum likelihood method is therefore superior in general. Theoretical and empirical analyses show that true replacement factors for less-thans decrease in magnitude with more replacements and larger standard deviation; those for greater-thans increase in magnitude with more replacements and larger standard deviation. In contrast to any simple substitution method, the maximum likelihood method reproduces these variations. Using the maximum likelihood method for replacing less-thans in our sample data set, correlation coefficients were reasonably accurately estimated in 90% of the cases for as much as 40% replacement and in 60% of the cases for 80% replacement. These results suggest that censored data can be utilized more than is commonly realized.
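A minimal maximum-likelihood sketch in the spirit of the Cohen (1961) approach for left-censored ("less-than") data, assuming a normal distribution (geochemical practice would usually work on log-transformed concentrations); the optimizer and starting values are incidental choices:

```python
import numpy as np
from scipy import stats, optimize

def censored_normal_mle(observed, n_censored, detection_limit):
    """Maximum likelihood for a left-censored normal sample: each censored
    value contributes Phi((DL - mu) / sigma) to the likelihood."""
    def negloglik(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)
        ll = stats.norm.logpdf(observed, mu, sigma).sum()
        ll += n_censored * stats.norm.logcdf(detection_limit, mu, sigma)
        return -ll

    start = [observed.mean(), np.log(observed.std(ddof=1))]
    res = optimize.minimize(negloglik, start, method="Nelder-Mead")
    mu, sigma = res.x[0], np.exp(res.x[1])
    # replacement value = mean of the censored tail of the fitted normal
    a = (detection_limit - mu) / sigma
    replacement = mu - sigma * stats.norm.pdf(a) / stats.norm.cdf(a)
    return mu, sigma, replacement
```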

12.
The method of Koiwa and Ishioka (Philos Mag A 47:927–938, 1983) is used, with slight modification, to evaluate the correlation factor for vacancy-mediated diffusion of impurity atoms on the sublattice of dodecahedral sites in garnet, as a function of the relevant vacancy-jump frequencies. The required values of the lattice Green's function were obtained from multiple Monte Carlo simulations in lattices of progressively larger size, extrapolated to an infinite lattice using a model that linearizes the dependence of the functional value on lattice size. As Online Resources, codes are provided that permit evaluation of the correlation factor for any chosen set of vacancy-jump frequencies, for implementation in either Mathematica® or Matlab®.

13.
Crystal size distribution (CSD) theory has been applied to drill core samples from Makaopuhi lava lake, Kilauea Volcano, Hawaii. Plagioclase and Fe-Ti oxide size distribution spectra were measured, and population densities (n) were calculated and analyzed using a steady-state crystal population balance equation: n = n₀ exp(−L/Gτ). Slopes on ln(n) versus crystal size (L) plots determine the parameter Gτ, the product of average crystal growth rate (G) and average crystal growth time (τ). The intercept is n₀ = J/G, where J is the nucleation rate. Known temperature–depth distributions for the lava lake provide an estimate of effective growth time (τ), allowing nucleation and growth rates to be determined that are independent of any kinetic model. Plagioclase growth rates decrease with increasing crystallinity (9.9–5.4 × 10⁻¹¹ cm/s), as do plagioclase nucleation rates (33.9–1.6 × 10⁻³ cm⁻³ s⁻¹). Ilmenite growth and nucleation rates also decrease with increasing crystallinity (4.9–3.4 × 10⁻¹⁰ cm/s and 15–2.2 × 10⁻³ cm⁻³ s⁻¹, respectively). Magnetite growth and nucleation rates are also estimated from the one sample collected below the magnetite liquidus (G = 2.9 × 10⁻¹⁰ cm/s, J = 7.6 × 10⁻² cm⁻³ s⁻¹). Moments of the population density function were used to examine the change in crystallization rates with time. Preliminary results suggest that total crystal volume increases approximately linearly with time after 50% crystallization; a more complete set of samples with <50% crystals is needed to define the entire crystallization history. Comparison of the calculated crystallization rates with experimental data suggests that crystallization in the lava lake occurred at very small values of undercooling. This interpretation is consistent with proposed thermal models of magmatic cooling, in which heat loss is balanced by latent heat production to maintain equilibrium cooling.
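The CSD parameters follow from a single linear fit of ln(n) against L: given an independently estimated effective growth time τ (here assumed to come from the temperature–depth history), the growth and nucleation rates drop out as below. This is a generic sketch, with L and n assumed to be the measured size bins and population densities:

```python
import numpy as np

def csd_parameters(L, n, tau):
    """Fit ln(n) = ln(n0) - L / (G * tau): the slope gives the growth rate G
    (for a known effective growth time tau), and the intercept gives the
    nucleation rate J = n0 * G."""
    slope, intercept = np.polyfit(L, np.log(n), 1)
    G = -1.0 / (slope * tau)
    n0 = np.exp(intercept)
    return G, n0 * G          # G (cm/s), J (cm^-3 s^-1)
```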

14.
When sample data are divided into groups, and observations consist of the independent variable x and an associated dependent variable y, a logical form of analysis is grouped regression. This statistical technique allows testing of the relationship between the two variables and assessment of how the relationship is affected by the grouping. A sedimentologic example illustrates the usefulness of such a technique in classifying environments of deposition based on the size of quartz grains and the quartz content.
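Grouped regression with an interaction term reproduces this test: fit a pooled model and a model with group-specific intercepts and slopes, then compare them with an F-test. The data below are fabricated purely for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# hypothetical data: quartz grain size (x) vs quartz content (y),
# sampled from two depositional environments
df = pd.DataFrame({
    "x": np.r_[rng.normal(2, 0.5, 40), rng.normal(3, 0.5, 40)],
    "group": ["beach"] * 40 + ["dune"] * 40,
})
df["y"] = 10 + 4 * df["x"] + (df["group"] == "dune") * 5 + rng.normal(0, 1, 80)

pooled = smf.ols("y ~ x", data=df).fit()
grouped = smf.ols("y ~ x * C(group)", data=df).fit()  # per-group slope/intercept
print(grouped.compare_f_test(pooled))  # (F, p, df): does grouping matter?
```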

15.
The thermal behaviour of microsommite (MC) and of davyne from Vesuvius (DV) and from Zabargad (DZ) was determined from X-ray single-crystal data obtained using a microfurnace mounted on a four-circle diffractometer. Upon heating, the a parameter increased linearly, with similar thermal expansion rates for the three samples: the mean linear expansion coefficients, α_a, were 10.2(3)·10⁻⁶, 13.4(7)·10⁻⁶, and 15.1(8)·10⁻⁶ K⁻¹ for MC, DV, and DZ, respectively. At about 473 K both MC and DZ showed a discontinuity in the expansion of the c parameter. The mean linear expansion coefficient, α_c, changed abruptly from 16(4)·10⁻⁶ K⁻¹ for both minerals below the discontinuity to 2(1)·10⁻⁶ and 3(1)·10⁻⁶ K⁻¹ for MC and DZ, respectively, above it. In DV, however, α_c was constant between 293 and 827 K, equal to 1(2)·10⁻⁶ K⁻¹.

16.
The calculation of a maximum depositional age (MDA) from a detrital zircon sample can provide insight into a variety of geological problems. However, the impact of sample size and calculation method on the accuracy of a resulting MDA has not been evaluated. We use large populations of synthetic zircon dates (N ≈ 25,000) to analyze the impact of varying sample size (n), measurement uncertainty, and the abundance of near-depositional-age zircons on the accuracy and uncertainty of 9 commonly used MDA calculation methods. Furthermore, a new method, the youngest statistical population, is tested. For each method, 500 samples of n synthetic dates were drawn from the parent population and MDAs were calculated. The mean and standard deviation of each method over the 500 trials at each n-value (50–1000, in increments of 50) were compared to the known depositional age of the synthetic population and used to compare the methods quantitatively in two simulation scenarios. The first simulation scenario varied the proportion of near-depositional-age grains in the synthetic population. The second scenario varied the uncertainty of the dates used to calculate the MDAs. Increasing sample size initially decreased the mean residual error and standard deviation calculated by each method. At higher n-values (>~300 grains), calculated MDAs changed more slowly, and the mean residual error increased or decreased depending on the method used. Increasing the proportion of near-depositional-age grains and lowering measurement uncertainty decreased the number of measurements required for the calculated MDAs to stabilize and decreased the standard deviation in the calculated MDAs of the 500 samples. Results of the two simulation scenarios show that the most successful way to increase the accuracy of a calculated MDA is to acquire a large number of low-uncertainty measurements, and we suggest that such a high-n (n > ~300) approach is used if the calculation of accurate MDAs is key to research goals. Other acquisition strategies, such as high- to moderate-precision measurement methods (e.g., 1%–5%, 2σ) acquiring low- to moderate-n datasets (n ≈ 50–300), favour the less conservative calculation methods; those methods, however, are most susceptible to producing erroneous MDAs due to contamination in the field or laboratory, or through disturbances of the youngest zircon's U–Pb systematics (e.g., lead loss). More conservative methods that still produce accurate MDAs and are less susceptible to contamination or lead loss include the youngest grain cluster at 1σ uncertainty (YGC 1σ), the youngest grain cluster at 2σ uncertainty (YGC 2σ), and the youngest statistical population (YSP). The ages calculated by these methods may be more useful and appealing when fitting calculated MDAs into pre-existing chronostratigraphic frameworks, as they are less likely to be younger than the true depositional age. From the results of our numerical models we illustrate what geologic processes (i.e., tectonic or sedimentary) can be resolved using MDAs derived from strata of different ages.
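As an illustration of one of the more conservative estimators, here is a sketch of a "youngest grain cluster at 2σ" calculation: find the youngest group of at least three dates whose 2σ intervals mutually overlap and return their inverse-variance weighted mean. Exact definitions of these methods vary between studies, so treat this as an assumption-laden approximation:

```python
import numpy as np

def ygc_2s(ages, errors_2s, min_overlap=3):
    """Youngest grain cluster at 2-sigma: weighted mean of the youngest
    `min_overlap` dates whose 2-sigma ranges all overlap."""
    order = np.argsort(ages)
    a, e = ages[order], errors_2s[order]
    for i in range(len(a) - min_overlap + 1):
        window = slice(i, i + min_overlap)
        # mutual overlap: the largest lower bound must not exceed
        # the smallest upper bound
        if np.max(a[window] - e[window]) <= np.min(a[window] + e[window]):
            w = 1.0 / e[window] ** 2
            return np.sum(w * a[window]) / np.sum(w)
    return None

ages = np.array([101.2, 100.8, 101.9, 110.5, 125.0])   # Ma, hypothetical
errs = np.array([1.5, 1.2, 1.8, 2.0, 2.5])             # 2-sigma
print(ygc_2s(ages, errs))   # weighted mean of the youngest overlapping trio
```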

17.
To simultaneously evaluate the decay constant of ⁴⁰K (λ) and the age of a standard (t_std) using isotopic data from geologic materials, we applied a series of statistical methods. The problem of estimating the most probable intercept of many nonlinear curves in λ–t_std space is formulated as an errors-in-variables nonlinear regression model. A maximum likelihood method is then applied to the model for a point estimate, which is equivalent to the nonlinear least squares method when the measurement error distributions are Gaussian. Uncertainties and confidence regions of the estimates can be approximated using three methods: the asymptotic normal approximation, the parametric bootstrap method, and Bonferroni confidence regions. Five pairs of published data for samples with ages from 2 ka to 4.5 Ga were used to estimate λ and the age of Fish Canyon sanidine (t_FCs). The statistical procedure yields most probable estimates of λ (5.4755 ± 0.0170 × 10⁻¹⁰ (1σ) per year) and t_FCs (28.269 ± 0.0661 (1σ) Ma), which lie between previously published values. These results indicate the power of our approach to provide improved constraints on these parameters, although the preliminary nature of some of the input data requires further review before the values can be adopted.
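The parametric bootstrap component generalizes readily. This skeleton refits synthetic datasets drawn from the fitted model and reports the spread of the refitted parameters; `residual_fn` and `simulate` are caller-supplied placeholders standing in for the errors-in-variables model, which is not reproduced here:

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(7)

def parametric_bootstrap(residual_fn, theta_hat, simulate, n_boot=1000):
    """Parametric bootstrap: draw synthetic data from the fitted model,
    refit each replicate, and use the spread of the refitted parameters
    as the uncertainty of theta_hat."""
    boots = []
    for _ in range(n_boot):
        synthetic = simulate(theta_hat, rng)           # model-based resample
        res = optimize.least_squares(residual_fn, theta_hat,
                                     args=(synthetic,))
        boots.append(res.x)
    boots = np.array(boots)
    return boots.mean(axis=0), np.cov(boots, rowvar=False)
```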

18.
A thermodynamic analysis of the intermediate solid solution (Iss) of near-cubanite composition has been attempted by considering an Fe–Zn exchange equilibrium between Iss and sphalerite. The interchange free-energy parameter of Fe–Zn mixing in Iss (W^Iss) and the standard free energy of the exchange equilibrium (ΔG°₁,T) have been deduced at 500, 600, 700, and 723 °C using the compositional data of sphalerite and Iss from phase equilibrium experiments and the standard method of linear regression analysis. For sphalerite, two independent activity–composition models were chosen. The extracted values of ΔG°₁,T and W^Iss using both models are compared. Although the values match, the errors in the extracted parameters are relatively larger when Hutcheon's model is used. Both ΔG°₁,T and W^Iss vary linearly with temperature, as given by the following relations:

ΔG°₁,T = −35.41 + 0.033 T (kcal; SE = 0.229)
W^Iss = 48.451 − 0.041 T (kcal; SE = 0.565)

Activity–composition relations and different mixing parameters have been calculated for the Iss phase. A large positive deviation from ideality is observed in Iss on the join CuFe₂S₃–CuZn₂S₃. No geothermometric application has been attempted in this study, even though Iss of cubanite composition (isocubanite) in association with sphalerite, pyrite, and pyrrhotite is reported from seafloor hydrothermal deposits. This is because (a) the temperatures of formation of these deposits are significantly lower than 500 °C, the lower limit of the appropriate experimental database, and (b) microprobe data for coexisting isocubanite and sphalerite in the relevant natural assemblages are not available.

Symbols:
a_i^J — activity of component i in phase J
ΔG°₁,T — standard free energy change of reaction (cal)
ΔG^IM — free energy of ideal mixing (cal)
ΔG^EM — free energy of excess mixing (cal)
ΔG_M^ex — free energy of mixing (cal)
ΔG_i^∞ — excess free energy of mixing at infinite dilution (cal)
γ_i^J — activity coefficient of component i in phase J
μ_i^{J,0} — standard chemical potential of component i in phase J (cal)
μ_i^J — chemical potential of component i in phase J (cal)
R — universal gas constant (1.98717 cal/K·mol)
T — temperature (K)
W^J — interchange free energy of phase J (cal)
X_i^J — mole fraction of component i in phase J

19.
Creep and saltation are the primary modes of surface transport involved in the fluid-like movement of aeolian sands. Although numerous studies have focused on saltation, few have focused on creep, primarily because of the experimental difficulty and the limited theoretical information available on this process. Grain size and its distribution characteristics are key controls on the modes of sand movement and their transport masses. Based on a series of wind tunnel experiments, this paper presents new data on the saltation flux, obtained using a flat sampler, and on the creeping mass, obtained using a specifically designed bed trap, at four friction velocities (0.41, 0.47, 0.55 and 0.61 m s⁻¹). These data yielded information on creeping and saltating sand grains and their particle size characteristics at various heights, leading to the following conclusions: (i) the creeping mass increases as a power function (q = −1.02 + 14.19u*³) of friction velocity, with a correlation (R²) of 0.95; (ii) the flux of aeolian sand flow decreases exponentially with increasing height (q = a exp(−z/b)) and increases as a power function (q = −26.30 + 428.40u*³) of the friction velocity; (iii) the particle size of creeping sand grains is ca. 1.15 times the mean diameter of saltating sand grains at heights of 0 to 2 cm, which is in turn 1.14 times the mean diameter of sand grains in the bed; and (iv) the mean diameter of saltating sand grains decreases rapidly with increasing height, whereas, at a given height, it is positively correlated with the friction velocity. Although these results require additional experimental validation, they provide new information for modelling aeolian sand transport processes.

20.
⁴⁰Ar/³⁹Ar incremental-release analyses were carried out on whole-rock samples and constituent white mica (illite)-rich size fractions (0.63–1 to 6.3–20 μm) within two very-low-grade, penetratively cleaved metatuffs of contrasting anchizonal metamorphic grade (northeastern Rheinisches Schiefergebirge, Federal Republic of Germany). One sample, from the upper anchizone, displays internally concordant ⁴⁰Ar/³⁹Ar spectra with plateau ages ranging between ca. 316 and 325 Ma. These are similar to the conventional K–Ar ages determined for the whole rock and size fractions. Together, the isotopic results suggest that cleavage formed at ca. 320 Ma during concomitant very-low-grade metamorphism. This is consistent with biostratigraphic controls which suggest that metamorphism and cleavage formation occurred during the Westphalian. A metatuff sample from the middle anchizone records more internally discordant ⁴⁰Ar/³⁹Ar age spectra, with total-gas ages ranging from 366 to 372 Ma. These are ca. 35–45 Ma older than the corresponding conventional K–Ar ages, indicating that marked recoil loss of ³⁹Ar occurred during irradiation. Transmission electron microscopy reveals that white mica grains within size fractions from the upper anchizone sample have clearly defined, straight edges, whereas those within the middle anchizone sample are embayed and diffuse. This results in an increased surface/volume ratio and therefore greater susceptibility to recoil loss of ³⁹Ar in the middle anchizone sample. Grain-edge morphology appears to be a major factor in determining the extent of recoil loss of ³⁹Ar during ⁴⁰Ar/³⁹Ar analysis of fine-grained size fractions.
