Similar Literature
20 similar documents found (search time: 78 ms)
1.
Weights of evidence modeling for combining indicator patterns in mineral resource evaluation is based on an application of Bayes' rule. Two weights are defined for each indicator pattern, and Bayes' rule is applied repeatedly to combine indicator patterns. If all patterns are conditionally independent with respect to deposits, the logit of the posterior probability can be calculated as the logit of the prior probability plus the sum of the weights of the overlay patterns. The information to be integrated for gold exploration in the Xiong-er Mountain Region comes from a geological map, an interpreted map of a Thematic Mapper (TM) image, and the locations of known gold deposits. Favorable stratigraphic units, structural control factors, and alteration factors are considered. The work was conducted on an S600 I2S image-processing system. FORTRAN programs were developed for creating indicator patterns, statistical calculations, and pattern integration. Six indicator patterns were selected to predict mineral potential. They are conditionally independent according to pairwise G² tests and an overall chi-square test. The potential area predicted using the 32 known deposits generally coincides with the prospect areas determined by geological fieldwork.
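The logit-sum rule described in this abstract can be sketched in a few lines. The prior and the weight values below are hypothetical, chosen only to illustrate the mechanics, not taken from the paper:

```python
import math

def posterior_probability(prior, patterns):
    """Combine binary indicator patterns under conditional independence.

    prior    -- prior probability of a deposit in a unit cell
    patterns -- list of (present, w_plus, w_minus) tuples: whether the
                pattern overlays the cell, and its two weights of evidence
    """
    logit = math.log(prior / (1.0 - prior))        # logit of the prior
    for present, w_plus, w_minus in patterns:
        logit += w_plus if present else w_minus    # add W+ or W-
    return 1.0 / (1.0 + math.exp(-logit))          # back to a probability

# Hypothetical weights for three indicator patterns (say, a favorable
# stratigraphic unit, a structural control, and an alteration factor).
p = posterior_probability(0.01, [(True, 1.2, -0.4),
                                 (True, 0.8, -0.2),
                                 (False, 1.5, -0.6)])
```

With no patterns the posterior equals the prior; each present pattern shifts the logit by its positive weight, each absent pattern by its negative weight.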

2.
Consideration of order relations is key to indicator kriging, indicator cokriging, and probability kriging, especially for the latter two methods, wherein the additional modeling of cross-covariance increases the chance of violating order relations. Herein, Gaussian-type curves are fit to estimates of the cumulative distribution function (cdf) at data quantiles to (1) yield smoothed estimates of the cdf and (2) correct violations of order relations (i.e., situations wherein the estimate of the cdf for a larger quantile is less than that for a smaller quantile). Smoothed estimates of the cdf are sought as a means to improve the approximation to the integral equation for the expected value of the regionalized variable in probability kriging. Experiments show that this smoothing yields slightly improved estimation of the expected value (in probability kriging). Another experiment, one that uses the same variogram for all indicator functions, does not yield improved estimates.
Presented at the 25th Anniversary Meeting of the IAMG, Prague, Czech Republic, October 10–15, 1993.
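For readers unfamiliar with order-relation corrections: a common practical fix, distinct from the Gaussian-curve fitting this paper proposes, clips each cdf estimate to [0, 1] and averages an upward and a downward monotone pass. A minimal sketch of that standard correction:

```python
def correct_order_relations(cdf_estimates):
    """Enforce order relations on cdf estimates at increasing cutoffs:
    clip to [0, 1], then average a forward running-max pass with a
    backward running-min pass, yielding a nondecreasing sequence."""
    clipped = [min(1.0, max(0.0, f)) for f in cdf_estimates]
    up, down = clipped[:], clipped[:]
    for i in range(1, len(up)):                  # forward: running max
        up[i] = max(up[i], up[i - 1])
    for i in range(len(down) - 2, -1, -1):       # backward: running min
        down[i] = min(down[i], down[i + 1])
    return [(u + d) / 2.0 for u, d in zip(up, down)]

# The estimate at the third cutoff (0.30) violates order relations
# against the second (0.35); the last (1.02) exceeds 1.
fixed = correct_order_relations([0.10, 0.35, 0.30, 0.60, 1.02])
```

The averaged result splits the difference at the violating pair instead of favoring either neighbor.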

3.
This paper compares the performance of four algorithms (full indicator cokriging, adjacent-cutoffs indicator cokriging, multiple indicator kriging, median indicator kriging) for modeling conditional cumulative distribution functions (ccdf). The latter three algorithms are approximations to the theoretically better full indicator cokriging, in the sense that they disregard cross-covariances between some indicator variables or consider all covariances proportional to the same function. Comparative performance is assessed using a reference soil data set that includes 2649 locations at which both topsoil copper and cobalt were measured. For all practical purposes, indicator cokriging does not perform better than the other, simpler algorithms, which involve less variogram modeling effort and smaller computational cost. Furthermore, the number of order relation deviations is found to be higher for cokriging algorithms, especially when constraints on the kriging weights are applied.

4.
Empirical Maximum Likelihood Kriging: The General Case
Although linear kriging is a distribution-free spatial interpolator, its efficiency is maximal only when the experimental data follow a Gaussian distribution. Transformation of the data to normality has thus always been appealing. The idea is to transform the experimental data to normal scores, krige values in the “Gaussian domain” and then back-transform the estimates and uncertainty measures to the “original domain.” An additional advantage of the Gaussian transform is that spatial variability is easier to model from the normal scores because the transformation reduces effects of extreme values. There are, however, difficulties with this methodology, particularly, choosing the transformation to be used and back-transforming the estimates in such a way as to ensure that the estimation is conditionally unbiased. The problem has been solved for cases in which the experimental data follow some particular type of distribution. In general, however, it is not possible to verify distributional assumptions on the basis of experimental histograms calculated from relatively few data and where the uncertainty is such that several distributional models could fit equally well. For the general case, we propose an empirical maximum likelihood method in which transformation to normality is via the empirical probability distribution function. Although the Gaussian domain simple kriging estimate is identical to the maximum likelihood estimate, we propose use of the latter, in the form of a likelihood profile, to solve the problem of conditional unbiasedness in the back-transformed estimates. Conditional unbiasedness is achieved by adopting a Bayesian procedure in which the likelihood profile is the posterior distribution of the unknown value to be estimated and the mean of the posterior distribution is the conditionally unbiased estimate. The likelihood profile also provides several ways of assessing the uncertainty of the estimation. Point estimates, interval estimates, and uncertainty measures can be calculated from the posterior distribution.

5.
The recent recognition that long-period (i.e., of the order of hours) electromagnetic induction studies could play a major role in the detection of the asthenosphere has led to much interest amongst the geophysical and geological communities in the geomagnetic response functions derived for differing tectonic environments. Experiments carried out on the ocean bottom have met with considerable success in delineating the “electrical asthenosphere”, i.e., a local maximum in electrical conductivity (minimum in electrical resistivity) in the upper mantle. In this paper, observations of the time-varying magnetic field recorded in three regions of Scandinavia, northern Sweden (Kiruna—KIR), northern Finland/northeastern Norway (Kevo—KEV) and southern Finland (Sauvamaki—SAU), are analysed in order to obtain estimates of the inductive response function, C(ω), for each region. The estimated response functions are compared with one from the centre of the East European Platform (EEP), and it is shown that the induced eddy currents, at periods of the order of 10³–10⁴ s, in the three regions flow much closer to the surface than under the platform centre. Specifically, at a period of ~3000 s, these currents are flowing at depths of the order of: KEV—120 km; KIR—180 km; SAU—210 km; EEP—280 km; implying that the transition to a conducting zone, of σ ~ 0.2 S/m, occurs at around these depths. One-dimensional inversion of the response functions shows that there must exist a good conducting zone, of σ = 0.1–1.0 S/m, under each of the two regions, of 40 km minimum thickness, at depths of: KEV, 105–115 km; KIR, 160–185 km. This is to be contrasted with EEP, where the ρ–d profile displays a monotonically decreasing resistivity with depth, reaching σ ~ 0.1 S/m at > 300 km. Finally, a possible temperature range for the asthenosphere, consistent with the deduced conductivity, is discussed. It is shown that, at present, there is insufficient knowledge of the conditions (water content, melt fraction, etc.) likely to prevail in the asthenosphere to narrow down the probable range of 900°–1500°C.

6.
Probability kriging is implemented in a general cokriging procedure (cf. Myers, 1982) for estimating both the indicator and uniform transforms. Paired-sum semi-variograms are used to facilitate the modeling of the cross-covariance between the uniform transform and each indicator transform. Estimates of the uniform transform are averaged over all cutoffs, and the average is used to derive an estimate of the original data. This estimate can be biased with respect to the mean data value, but is unbiased with respect to the data median.

7.
Macromolecular organic material, called “polymeric acids”, has been isolated from Black Trona Water by exhaustive dialysis and characterized as the sodium salt in 0.10 M sodium carbonate, pH 10, by several physico-chemical methods. Analysis by gel filtration chromatography on Sepharose-CL 6B indicates that the “polymeric acids” are polydisperse and composed of species of relatively high molecular weight (4 × 10⁵, using proteins as standards). With this method, the range of molecular weights appears to be rather narrow. If “polymeric acids” are transferred from sodium carbonate, pH 10, into distilled water, self-association occurs and all species elute in the void volume. The weight-average molecular weight determined in 0.10 M sodium carbonate, pH 10, by the light scattering method is 1.7 × 10⁵. Sedimentation velocity analysis at 20°C with the analytical ultracentrifuge gives a value for S20,w of 5.4, and the shape of the Schlieren patterns suggests a polydisperse sample with a relatively narrow range of sizes. Analysis of the molecular weight distribution by a sedimentation equilibrium method indicates that the range of molecular weights is 8 × 10⁴ to 2.1 × 10⁵. The partial specific volume of “polymeric acids” is 0.874 ml/g. Viscosity measurements yield a value for [η] of 2.5 ml/g, which indicates that the “polymeric acids” are compact (spherical or ellipsoidal) in shape.

8.
An extreme value model is developed for the situation where a cloud of sediment particles moves away from the boundary of a defined source area while undergoing constant depletion due to deposition of the larger particles. Taking the particles deposited at distance x from the source boundary to represent a distribution of largest extremes derived from a parent distribution of smallest extremes, it is possible to express the mean size of the deposited sediment in terms of the parameters of the original distribution at the source area. Thickness functions can be obtained as the product of expected diameter and particle frequency. If the spatial distribution f(x) of particle frequency along a linear transect can be inferred from a physical process, then this provides sufficient information for the construction of particle size and bed thickness prediction equations. Alternatively, the model places some restrictions on distribution selection if an empirical choice of f(x) is necessary. Some generalizations are obtained for trends in the mean and variance of the deposited particles on the basis of the hazard function of f(x).

9.
In this work, the recently developed “second-order” self-consistent method [Liu, Y., Ponte Castañeda, P., 2004a. Second-order estimates for the effective behavior and field fluctuations in viscoplastic polycrystals. J. Mech. Phys. Solids 52, 467–495] is used to simulate texture evolution in halite polycrystals. This method makes use of a suitably optimized linear comparison polycrystal and has the distinguishing property of being exact to second order in the heterogeneity contrast. The second-order model takes into consideration the effects of hardening and of the evolution of both crystallographic and morphological texture to yield reliable predictions for the macroscopic behavior of the polycrystal. Comparisons of these predictions with full-field numerical simulations [Lebensohn, R.A., Dawson, P.R., Kern, H.M., Wenk, H.R., 2003. Heterogeneous deformation and texture development in halite polycrystals: comparison of different modeling approaches and experimental data. Tectonophysics 370, 287–311], as well as with predictions resulting from the earlier “variational” and “tangent” self-consistent models, included here for comparison purposes, provide insight into how the underlying assumptions of the various models affect slip in the grains, and therefore the texture predictions, in highly anisotropic and nonlinear polycrystalline materials. The “second-order” self-consistent method, while giving a softer stress-strain response than the corresponding full-field results, predicts a pattern of texture evolution that is not captured by the other homogenization models and that agrees reasonably well with the full-field predictions and with the experimental measurements.

10.
Toward more realistic formulations for the analysis of laccoliths
The published laccolith analyses are based on the linear plate bending theory and the a priori assumption that the width of the laccolith is fixed. This is not the case in an actual situation. The dimension of the laccolith in the horizontal plane has to result from an additional matching condition at the separation lines. The published analyses are generalized by dropping the a priori assumption that the width of the laccolith is prescribed, by assuming that the magmatic pressure is not constant, and by taking into consideration the vertical compressibility of the overburden “plate” and base in the contact region. In order to determine the magnitude of the magmatic pressure, a condition is postulated that equates the measured volume of the intruded magma in a laccolith with the corresponding analytical expression for the volume. The obtained closed-form solution appears to satisfy many of the intuitive expectations. It was evaluated numerically and the results are presented as graphs. It may be concluded that even very small laccoliths may exist, provided the magmatic pressure is sufficiently larger than the overburden weight. We also show the dependence of the laccolith size on its stratigraphic position: the thicker the overburden h, the larger the size of the laccolith; and, for an overburden plate of given thickness, the larger the volume V of the intruded magma, the larger the laccolith width 2a and its height. The paper concludes by discussing a published analysis for a laccolith with flexible underburden and overburden. It is shown that this analysis is based on a formulation that is of questionable validity.

11.
Gunhild Setten, Geoforum, 2008, 39(3): 1097–1104
Since the turn of the millennium, human geography has witnessed the publishing of an increasing number of encyclopaedias and dictionaries, as well as books under the headings of “handbooks”, “readers” and “companions” to different fields within the discipline. In the present paper, I take as a point of departure this encyclopaedic “frenzy” in order to speculate on the works and values of a long-standing and authoritative geographical companion, The Dictionary of Human Geography (DHG) [Johnston, R.J., Gregory, D., Haggett, P., Smith, D.M., Stoddart, D.R. (Eds.), 1981. The Dictionary of Human Geography. Blackwell, Oxford; Johnston, R.J., Gregory, D., Smith, D.M. (Eds.), 1986a. The Dictionary of Human Geography, second ed. Blackwell, Oxford; Johnston, R.J., Gregory, D., Smith, D.M. (Eds.), 1994. The Dictionary of Human Geography, third ed. Blackwell, Oxford; Johnston, R.J., Gregory, D., Pratt, G., Watts, M. (Eds.), 2000a. The Dictionary of Human Geography, fourth ed. Blackwell, Oxford]. Apart from being subject to regular book reviews, the DHG has escaped attention from geographers critically engaged in debating the works of the discipline. It is argued here that this is because the DHG appears to have established itself as an apparently objective recording of human geographers’ myriad of interests. The DHG is, however, a product of complex webs of subjective, situated concerns, and thus a version of the discipline deserving of debate.

12.
This paper documents and investigates an important source of inaccuracy when paleoecological equations calibrated on modern biological data are applied downcore: fossil assemblages for which there are no modern analogs. Algebraic experiments with five calibration techniques are used to evaluate the sensitivity of the methods with respect to no-analog conditions. The five techniques are: species regression; principal-components regression [e.g., Imbrie, J., and Kipp, N. G. (1971). In “The Late Cenozoic Ages,” pp. 71–181]; distance-index regression [Hecht, A. D. (1973). Micropaleontology 19, 68–77]; diversity-index regression [Williams, D. F., and Johnson, W. C. (1975). Quaternary Research 5, 237–250]; and the weighted-average method [Jones, J. I. (1964). Unpublished Ph.D. Thesis, Univ. of Wisconsin]. The experiments indicate that the four regression techniques extrapolate under no-analog conditions, yielding erroneous estimates. The weighted-average technique, however, does not extrapolate under no-analog conditions and consequently is more accurate than the other techniques. Methods for recognizing no-analog conditions downcore are discussed, and ways to minimize inaccuracy are suggested. Using several equations based on different calibration techniques is recommended. Divergent estimates suggest that no-analog conditions occur and that estimates are unreliable. The value determined by the weighted-average technique, however, may well be the most accurate.
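As a concrete illustration of why a weighted-average method cannot extrapolate: the reconstruction is an abundance-weighted mean of species optima, so it is always bounded by the smallest and largest optimum. The sketch below uses invented species names and environmental values, not the paper's data:

```python
def wa_optima(modern_counts, env):
    """Abundance-weighted species optima from modern calibration data.
    modern_counts: one dict per sample mapping species -> abundance;
    env: the environmental value (e.g., temperature) for each sample."""
    totals, weighted = {}, {}
    for sample, x in zip(modern_counts, env):
        for sp, a in sample.items():
            totals[sp] = totals.get(sp, 0.0) + a
            weighted[sp] = weighted.get(sp, 0.0) + a * x
    return {sp: weighted[sp] / totals[sp] for sp in totals}

def wa_estimate(fossil_sample, optima):
    """Weighted-average reconstruction for one fossil assemblage: the
    abundance-weighted mean of the species optima, hence bounded by
    min(optima) and max(optima) even for no-analog assemblages."""
    num = sum(a * optima[sp] for sp, a in fossil_sample.items() if sp in optima)
    den = sum(a for sp, a in fossil_sample.items() if sp in optima)
    return num / den
```

A regression fit, by contrast, can return values far outside the calibration range when handed an assemblage unlike any modern sample.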

13.
The theory of variational inequalities enables us to formulate and solve free boundary problems in fixed domains, while most other methods assume the position of the unknown domain in solving the problem. Here the problem of seepage flow through a rectangular dam with a free boundary is formulated as a variational inequality following the ideas of Baiocchi. In order to demonstrate the essential ideas of extending the domain of the solution of problems with free boundaries, the problem of the deflection of a string on a rigid support is first examined. Next, variational inequalities are derived which are associated with several cases of seepage problems. An approximation theory, including a priori error estimates, is developed using finite element methods, and an associated numerical scheme is given. It is shown that for linear and quadratic finite element methods, the rates of convergence are O(h) and O(h^(1.25−δ)), 0 < δ < 0.25, respectively, if the permeability is constant.

14.
A geostatistically based approach is developed for the identification of aquifer transmissivities in Yolo Basin, California. The approach combines weighted least-squares with universal kriging and cokriging techniques in an overall scheme that (1) considers a priori known information on aquifer transmissivity and specific capacities of wells, (2) considers uncertainties in water level and transmissivity data, and (3) estimates the reliability of the generated transmissivity values. Minimization of a global least-squares function that incorporates calibration and plausibility criteria leads to a transmissivity map that shows a good agreement with pumping-test results.

15.
Despite the lack of a definition of equivalence of mathematical models or methods by Zhang et al. (Math Geosci, 2013), an “equivalence” (Zhang et al., Math Geosci, 2013, pp. 6, 7, 8, 14) of modified weights-of-evidence (Agterberg, Nat Resour Res 20:95–101, 2011) and logistic regression does not generally exist. Its alleged proof is based on a previously conjectured linear relationship between weights of evidence and logistic regression parameters (Deng, Nat Resour Res 18:249–258, 2009), which does not generally exist either (Schaeben and van den Boogaart, Nat Resour Res 20:401–406, 2011). In fact, an extremely simple linear relationship exists only if the predictor variables are conditionally independent given the target variable, in which case the contrasts, i.e., the differences of the weights, are equal to the logistic regression parameters. Thus, weights-of-evidence is the special case of logistic regression in which the predictor variables are binary and conditionally independent given the target variable.
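The special case stated in this abstract is easy to verify numerically for one predictor: the contrast C = W⁺ − W⁻ reduces to the log odds ratio of the 2 × 2 table, which is exactly the slope a saturated logistic regression would fit. A sketch with synthetic counts (the table below is invented for illustration):

```python
import math

def weights_of_evidence(data):
    """W+, W-, and contrast C = W+ - W- for one binary predictor B
    against a binary target D, from a list of (b, d) observations."""
    n11 = sum(1 for b, d in data if b and d)          # B present, D present
    n10 = sum(1 for b, d in data if b and not d)      # B present, D absent
    n01 = sum(1 for b, d in data if not b and d)
    n00 = sum(1 for b, d in data if not b and not d)
    w_plus = math.log((n11 / (n11 + n01)) / (n10 / (n10 + n00)))
    w_minus = math.log((n01 / (n11 + n01)) / (n00 / (n10 + n00)))
    return w_plus, w_minus, w_plus - w_minus

# Synthetic 2x2 table with odds ratio (30 * 40) / (10 * 20) = 6
data = ([(True, True)] * 30 + [(True, False)] * 10 +
        [(False, True)] * 20 + [(False, False)] * 40)
w_plus, w_minus, contrast = weights_of_evidence(data)
```

Here the contrast equals log 6, the log odds ratio; with several conditionally independent binary predictors, each contrast would likewise match its logistic regression coefficient.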

16.
In the present work, a detailed seismotectonic study of the broader area of the Mygdonia basin (N. Greece) is performed. Digital data for earthquakes which occurred in the broader Mygdonia basin and were recorded by the permanent telemetric network of the Geophysical Laboratory of the Aristotle University of Thessaloniki during the period 1989–1999 were collected, and fault plane solutions for 50 earthquakes which occurred in the study area were calculated with a modified first-motions approach which incorporates amplitude and radiation pattern information. Fault plane solutions for the 3 main shocks of the Volvi (23/05/78, MW = 5.8 and 20/06/78, MW = 6.5) and Arnaia (04/05/95, MW = 5.8) events and the 1978 aftershock sequence were additionally used. Moreover, data from two local networks established in the Mygdonia basin were also incorporated in the final dataset. Determination of the stress field was realized by use of the method of Gephart and Forsyth [Gephart, J.W., Forsyth, D.W., 1984. An improved method for determining the regional stress tensor using earthquake focal mechanism data: application to the San Fernando earthquake sequence. Jour. Geophys. Res., v. 89, no. B11, pp. 9305–9320] for the stress tensor inversion, and the results were compared with independent estimates based on the calculation of the average moment tensor [Papazachos, C.B., Kiratzi, A.A., 1992. A formulation for reliable estimation of active crustal deformation and its application to central Greece. Geophys. J. Int. 111, 424–432]. The obtained stress results show a relatively good agreement between the two approaches, with differences in the azimuth of the dominant extension axis of the order of 10°. Furthermore, comparison with independent information for the mean stress axes provided by the study of kinematics on neotectonic faults [Mountrakis, D., Kilias, A., Tranos, M., Thomaidou, E., Papazachos, C., Karakaisis, G., Scordilis, E., Chatzidimitriou, P., Papadimitriou, E., Vargemezis, G., Aidona, E., Karagianni, E., Vamvakaris, D., Skarlatoudis, A., 2003. Determination of the settings and the seismotectonic behavior of the main seismic-active faults of Northern Greece area using neotectonic and seismological data. Earthquake Planning and Protection Organisation (OASP) (in Greek)] shows a similar agreement, with typical misfit of the order of 10°. The stress inversion method was modified in order to select the nodal plane (one or both) of each focal mechanism which corresponds to the “true” fault plane of the occurred earthquakes, and was able to select a single fault plane in the majority of the examined cases. Using this approach, the obtained fault plane rose diagrams are in agreement with results from various neotectonic studies. Moreover, several secondary active fault branches were identified, which are still not clearly observed in the field.

17.
For any distribution of grades, a particular cutoff grade is shown here to exist at which the indicator covariance is proportional to the grade covariance to a very high degree of accuracy. The name “mononodal cutoff” is chosen to denote this grade. Its importance for robust grade variography in the presence of a large coefficient of variation—typical of precious metals—derives from the fact that the mononodal indicator variogram is then linearly related to the grade variogram yet is immune to outlier data and is found to be particularly robust under data information reduction. Thus, it is an excellent substitute to model in lieu of a difficult grade variogram. A theoretical expression for the indicator covariance is given as a double series of orthogonal polynomials that have the grade density function as weight function. Leading terms of this series suggest that indicator and grade covariances are first-order proportional, with cutoff grade dependence being carried by the proportionality factor. Kriging equations associated with this indicator covariance lead to cutoff-free kriging weights that are identical to grade kriging weights. This circumstance simplifies indicator kriging used to estimate local point-grade histograms, while at the same time obviating order relations problems.

18.

We consider the finite element (FE) approximation of the two-dimensional shallow water equations (SWE) by considering discretizations in which both space and time are established using a stable FE method. In particular, we consider the automatic variationally stable FE (AVS-FE) method, a type of discontinuous Petrov–Galerkin (DPG) method. The philosophy of the DPG method allows us to establish stable FE approximations as well as accurate a posteriori error estimators upon solution of a saddle point system of equations. The resulting error indicators allow us to employ mesh adaptive strategies and perform space-time mesh refinements, i.e., local time stepping. We establish a priori error estimates for the AVS-FE method and linearized SWE and perform numerical verifications to confirm the corresponding asymptotic convergence behavior. In an effort to keep the computational cost low, we consider an alternative space-time approach in which the space-time domain is partitioned into finite-sized space-time slices. Hence, we can perform adaptive mesh refinements on each individual slice to preset error tolerances as needed for a particular application. Numerical verifications comparing the two alternatives indicate the space-time slices are superior for simulations over long times, whereas the solutions are indistinguishable for short times. Multiple numerical verifications show the adaptive mesh refinement capabilities of the AVS-FE method, as well as the application of the method to some commonly applied benchmarks for the SWE.

19.
Weights of evidence and logistic regression are two of the most popular methods for mapping mineral prospectivity. The logistic regression model always produces unbiased estimates, whether or not the evidence variables are conditionally independent with respect to the target variable, while the weights of evidence model features an easy-to-explain and easy-to-implement modeling process. It has been shown that there exists a model combining weights of evidence and logistic regression that has both of these advantages. In this study, three models, modified fuzzy weights of evidence, fuzzy weights of evidence, and logistic regression, are compared with each other for mapping mineral prospectivity. The modified fuzzy weights of evidence model retains the advantages of both the fuzzy weights of evidence model and the logistic regression model: (1) the number of deposits predicted by the modified fuzzy weights of evidence model is nearly equal to that predicted by the logistic regression model, and (2) the model can deal with missing data. This method is shown to be an effective tool for mapping iron prospectivity in Fujian Province, China.

20.
The nature of the petrogenetic links between carbonatites and associated silicate rocks is still under discussion (e.g., Gittins, J., Harmer, R.E., 2003. Myth and reality of the carbonatite–silicate rock “association”. Period. Mineral. 72, 19–26). In the Paleozoic Kola alkaline province (NW Russia), the carbonatites are spatially and temporally associated with ultramafic cumulates (clinopyroxenite, wehrlite and dunite) and alkaline silicate rocks of the ijolite–melteigite series [(Kogarko, 1987), (Kogarko et al., 1995), (Verhulst et al., 2000), (Dunworth and Bell, 2001) and (Woolley, 2003)]. In the small (≈ 20 km²) Vuoriyarvi massif, apatite is typically a liquidus phase during the magmatic evolution, and so it can be used to test genetic relationships. Trace element contents have been obtained for both whole rocks and apatite (by LA-ICP-MS). The apatites define a single continuous chemical evolution marked by an increase in REE and Na (belovite-type substitution, i.e., 2Ca²⁺ = Na⁺ + REE³⁺). This evolution possibly reflects a fractional crystallisation process of a single batch of isotopically homogeneous, mantle-derived magma. The distribution of REE between apatite and its host carbonatite has been estimated from the apatite composition of a carbonatite vein belonging to the Neskevara conical-ring-like vein system. This carbonatite vein is tentatively interpreted as a melt, so the calculated distribution coefficients are close to partition coefficients. Rare earth elements are compatible in apatite (D > 1), with a higher compatibility for the middle REE (D_Sm = 6.1) than for the light (D_La = 4.1) and the heavy (D_Yb = 1) REE.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号