Similar Articles
20 similar articles found.
1.
Properties of statistical self-affinity are explored and explained. Semi-variogram analysis can be used to identify statistical self-affine behavior; it is, however, not the only method available for such an analysis. Some error and subjective interpretation are involved, so estimating the Hurst dimension (and from it the fractal dimension) from the semi-variogram can be misleading. Simulations based on statistical self-affine properties are used instead to develop an empirical approach to assessing statistical self-affine behavior. Analyzing the simulations using semi-variograms, and comparing these semi-variograms to those computed from actual data, offers an alternative and perhaps superior route to understanding the statistical self-affine properties of a geologic phenomenon. This empirical approach provides a method of reverse modeling for verifying estimates of the Hurst dimension obtained from semi-variograms.
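A minimal sketch of the semi-variogram route the abstract cautions about, assuming an evenly spaced 1-D profile and the self-affine relation γ(h) ∝ h^(2H), so the log-log slope gives 2H and D = 2 − H for a profile trace; the function names and the Brownian-motion test profile are illustrative, not from the paper.

```python
import numpy as np

def semivariogram(z, max_lag):
    """Empirical semi-variogram of an evenly spaced 1-D profile."""
    lags = np.arange(1, max_lag + 1)
    gamma = np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2) for h in lags])
    return lags, gamma

def hurst_from_semivariogram(z, max_lag=50):
    """Estimate H from the log-log slope: gamma(h) ~ h**(2H) for self-affine data."""
    lags, gamma = semivariogram(z, max_lag)
    slope, _ = np.polyfit(np.log(lags), np.log(gamma), 1)
    return slope / 2.0

# Empirical check against a simulated self-affine profile (ordinary Brownian
# motion, H = 0.5), in the spirit of comparing simulated and observed
# semi-variograms before trusting the estimate.
rng = np.random.default_rng(0)
profile = np.cumsum(rng.normal(size=5000))
H = hurst_from_semivariogram(profile)
print(f"estimated H = {H:.2f}, fractal dimension D = {2 - H:.2f}")
```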

2.
Nearly all data in exploration geochemistry and remote sensing represent composites; composites may arise implicitly or be created explicitly. Bearing in mind that a common exploration task is to classify data as being above or below some predetermined threshold, the size of the composite may be critical to the recognition of a relatively rare, subcomposite, anomalous event. Two approaches are developed, based on statistical and cost-analytical considerations. The statistical model allows for spatial correlation in the data, which is important when sampling is undertaken continuously along a drill core or flight line. Tables are presented for optimal composite sample size selection based on both models, and the procedure is illustrated by an example taken from a drilling program. In general, the cost-analytical model leads to smaller composites than the statistical model; when spatial independence may be assumed, the cost-optimal composite sizes are almost always smaller than those suggested by the statistical approach.
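A hedged sketch of the dilution argument behind composite-size selection, not the paper's tables: one anomalous subsample is averaged with n − 1 independent background subsamples (spatial correlation, which the paper's statistical model handles, is ignored here), and the probability that the composite still exceeds the classification threshold shrinks as the composite grows. All concentrations and the threshold are hypothetical.

```python
import numpy as np
from scipy import stats

def detection_probability(n, c_bg_mean, c_bg_sd, c_anom, threshold):
    """
    Probability that a composite of n subsamples, one of which is anomalous,
    still exceeds the classification threshold. Background subsamples are
    assumed independent N(c_bg_mean, c_bg_sd**2).
    """
    mean = ((n - 1) * c_bg_mean + c_anom) / n      # diluted composite mean
    sd = np.sqrt(n - 1) * c_bg_sd / n              # std. dev. of the composite mean
    return 1.0 - stats.norm.cdf(threshold, loc=mean, scale=sd)

# Illustrative numbers (hypothetical): background 20 +/- 5 ppm, anomaly 200 ppm,
# threshold 35 ppm. Larger composites dilute the anomaly below the threshold.
for n in (2, 5, 10, 20, 40):
    print(n, round(detection_probability(n, 20.0, 5.0, 200.0, 35.0), 3))
```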

3.
Probabilistic analysis has been used as an effective tool to evaluate the uncertainty so prevalent in the variables governing rock slope stability. In this study a probabilistic analysis procedure and related algorithms were developed by extending Monte Carlo simulation, and the approach was used to analyze rock slope stability along Interstate Highway 40 (I-40), North Carolina, USA. The probabilistic approach consists of two parts: analysis of available geotechnical data to obtain the random properties of discontinuity parameters, and probabilistic analysis of slope stability based on those parameters. Random geometric and strength parameters for discontinuities were derived from field measurements and analysis using statistical inference, or obtained from experience and engineering judgment; the study shows that a certain amount of experience and engineering judgment can be used to determine the random properties of discontinuity parameters. Probabilistic stability analysis is accomplished using statistical parameters and probability density functions for each discontinuity parameter. The two requisite conditions for evaluating rock slope stability, kinematic and kinetic instability, are determined and evaluated separately, and the two probabilities are then combined to provide an overall stability measure. Results of the probabilistic analyses, which account for variation in the parameters, were compared with those of a deterministic analysis, illustrating deficiencies in the latter procedure. Two geometries for the cut slopes on I-40 were evaluated: the original 75° slope and the 50° slope that has developed over the past 40 years of weathering.
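A minimal Monte Carlo sketch of the kinematic/kinetic split for simple planar sliding; the distributions, angles, and failure criterion below are hypothetical stand-ins, not the I-40 data or the paper's algorithms.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
slope_angle = 50.0                                 # cut-slope face angle (deg), hypothetical

# Random discontinuity parameters (hypothetical distributions, degrees)
dip      = rng.normal(35.0, 8.0, n)                # discontinuity dip
friction = rng.normal(30.0, 4.0, n)                # friction angle

# Kinematic condition for planar sliding: the discontinuity daylights in the
# face and is steep enough to slide on.
kinematic = (dip < slope_angle) & (dip > 15.0)

# Kinetic condition: factor of safety < 1 for a dry, cohesionless plane,
# FS = tan(friction) / tan(dip).
fs = np.tan(np.radians(friction)) / np.tan(np.radians(dip))
kinetic = fs < 1.0

p_kinematic = kinematic.mean()
p_failure = (kinematic & kinetic).mean()           # combined probability of instability
print(f"P(kinematic) = {p_kinematic:.3f}, P(failure) = {p_failure:.3f}")
```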

4.
The regional precipitation retrieved by the Z–R relationship method and by six radar–rain gauge merging methods differs considerably from the precipitation field observed by rain gauges. Taking these seven estimates as information sources, a statistical weight matrix method was used to integrate the seven retrievals, providing an improved method of radar precipitation estimation. The results show that, among the integrated inputs, the rainfall field estimated by the Z–R relationship method is clearly biased low and has the poorest accuracy, while the six radar–rain gauge merging methods are markedly more accurate than the Z–R method. After integration with the statistical weight matrix, the accuracy is clearly better than that of any of the individual methods; in particular, both the spatial pattern of the precipitation field and the intensity of the precipitation centers agree closely with the rain gauge field. The integrated spatial precipitation field reflects ground precipitation realistically and can be trialed operationally for estimating regional precipitation.
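One plausible reading of a statistical-weight-matrix integration, sketched below: each method's weight is taken inversely proportional to its error variance against gauge observations, and the composite is the weighted sum. The weighting scheme and all numbers are assumptions for illustration; the paper's exact formulation may differ.

```python
import numpy as np

def integrate_estimates(estimates, gauge):
    """
    estimates : array (k, m) - k retrieval methods, m gauge-collocated cells
    gauge     : array (m,)   - rain-gauge observations at those cells
    Returns per-method weights proportional to inverse error variance and the
    weighted composite field.
    """
    err_var = np.var(estimates - gauge, axis=1)      # error variance per method
    w = (1.0 / err_var) / np.sum(1.0 / err_var)      # normalized weights
    composite = w @ estimates                        # weighted integration
    return w, composite

# Hypothetical example: a Z-R estimate biased low, six merged estimates closer to gauges.
rng = np.random.default_rng(2)
gauge = rng.gamma(2.0, 5.0, 200)
zr = 0.6 * gauge + rng.normal(0, 2, 200)             # Z-R method, low bias
merged = [gauge + rng.normal(0, 1.0 + 0.2 * i, 200) for i in range(6)]
w, composite = integrate_estimates(np.vstack([zr] + merged), gauge)
print("weights:", np.round(w, 3))
print("RMSE of composite:", np.sqrt(np.mean((composite - gauge) ** 2)).round(2))
```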

5.
Sedimentary deposits are often characterized by various distinct facies, with the facies structure relating to the depositional and post-depositional environments. Permeability (k) varies within each facies, and mean values in one facies may be several orders of magnitude larger or smaller than those in another. Empirical probability density functions (PDFs) of log(k) increments from multi-facies structures often exhibit properties well modeled by the Levy PDF, which appears physically unrealistic. It is probable that the statistical properties of log(k) variations within a facies are very different from those between facies; thus, it may not make sense to perform a single statistical analysis on permeability values taken from a mix of distinct facies. As an alternative, we employed an indicator simulation approach to generate large-scale facies distributions, and a mono-fractal model, fractional Brownian motion (fBm), to generate the log(k) increments within facies. Analyses show that the simulated log(k) distributions for the entire multi-facies domain produce apparently non-Gaussian log(k) increment distributions similar to those observed in field measurements. An important implication is that the Levy-like behavior is not real in a statistical sense and that rigorous statistical measures of the log(k) increments must be extracted from within each individual facies.
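A minimal sketch of the pooling effect described above, using ordinary Brownian-motion walks (H = 0.5) within each facies instead of general fBm to keep the code short; facies means and variances are hypothetical. Increments pooled across facies boundaries look heavy-tailed and Levy-like even though each facies is internally Gaussian.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Two alternating facies with very different mean ln(k) and within-facies
# variability; within each facies, ln(k) follows a Brownian-motion-like walk.
facies_means = [-12.0, -16.0]      # hypothetical mean ln(k) per facies
facies_sigmas = [0.05, 0.2]        # within-facies increment std. dev.
segments = []
for i in range(40):                # 40 alternating facies bodies, 250 points each
    mu, sig = facies_means[i % 2], facies_sigmas[i % 2]
    segments.append(mu + np.cumsum(rng.normal(0, sig, 250)))
log_k = np.concatenate(segments)

incr = np.diff(log_k)
# Excess kurtosis far above 0 signals the heavy-tailed appearance of the pooled
# increments, driven by the jumps at facies boundaries.
print("excess kurtosis of pooled log(k) increments:", round(stats.kurtosis(incr), 1))
print("excess kurtosis within one facies:",
      round(stats.kurtosis(np.diff(segments[0])), 1))
```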

6.
The statistical analysis of compositional data is based on determining an appropriate transformation from the simplex to real space. Possible transformations and outliers interact strongly: the parameters of a transformation may be influenced particularly by outliers, and the results of goodness-of-fit tests will reflect their presence. Thus, the identification of outliers in compositional datasets and the selection of an appropriate transformation for the same data are problems that cannot be separated. A robust method for outlier detection, together with the likelihood of the transformed data, is presented as a first approach to solving these problems when the additive-logratio and multivariate Box-Cox transformations are used. Three examples illustrate the proposed methodology.
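A hedged sketch of the coupled problem: apply the additive-logratio transformation and then flag outliers with robust Mahalanobis distances, here via scikit-learn's minimum covariance determinant estimator standing in for the paper's robust method; the compositions and cut-off are illustrative.

```python
import numpy as np
from scipy import stats
from sklearn.covariance import MinCovDet

def alr(x):
    """Additive log-ratio transform; the last component is the divisor."""
    x = np.asarray(x, dtype=float)
    return np.log(x[:, :-1] / x[:, -1:])

def robust_outliers(comp, alpha=0.025):
    """Flag compositional outliers via robust Mahalanobis distance in alr space."""
    y = alr(comp)
    mcd = MinCovDet(random_state=0).fit(y)
    d2 = mcd.mahalanobis(y)                         # squared robust distances
    cutoff = stats.chi2.ppf(1 - alpha, df=y.shape[1])
    return d2 > cutoff

# Hypothetical 3-part compositions with two deliberately aberrant samples.
rng = np.random.default_rng(4)
parts = rng.dirichlet([8, 4, 2], size=100)
parts[:2] = [[0.05, 0.05, 0.90], [0.90, 0.05, 0.05]]
print("flagged samples:", np.where(robust_outliers(parts))[0])
```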

7.
Mathematical models applied to urban and regional planning were widely developed during the sixties. Since that time, scientific and technological developments have deeply transformed the field of spatial modelling. There has been a reaction against the idea that reality can be reduced to deterministic models: the paradigms of complexity, chaos, self-organisation and fractal geometry have made obvious the unpredictability of complex socio-economic systems. At the same time, the progress of computation has led to the substitution of simulation methods for analytic solutions of mathematical models. In such a context, models lose in generality and reproducibility what they gain in adaptation to empirical situations. An important challenge is also to confirm the pertinence and specificity of the geographical approach; in that respect, spatial analysis programs must demonstrate a common methodology that applies to the physical as well as the human and economic domains. We are working, for instance, on cellular automata programs applied to the historical evolution of an urban space and also to the run-off process in an elementary basin. The spatial structure of the models may differ slightly: rectangular or hexagonal tessellations in the "Human Geography" program, and a TIN structure, closer to the physical reality, in the other. The relations between the cells may also differ: they are often defined by a distance matrix in the socio-economic models, whereas a contiguity matrix is of course needed for the streaming process. Beyond these technical differences, the geographical programs are developed at a macro level, that is, on aggregate statistical units. The elementary particle is always (or should be, for a geographer) a material, spatial unit, unlike the drop of water of the hydrologists or the individual "agents" of the sociologists' multi-agent systems. The difference between the micro and macro levels is not a question of scale but a difference of logic. The simulation approach has a prerequisite, namely systematic validation through permanent comparison with the actual situation, but the objective is not prediction. The scientific concern is, above all, a precise understanding of past and recent evolutions rather than forecasting, which lies outside the specific field of scientific research: what is scientific is what can be measured. Possible prediction may rely on scientific research, but belongs strictly to the domain of intellectual and personal thinking.
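A minimal contiguity-rule cellular automaton on a rectangular tessellation, in the spirit of the aggregate, macro-level models described above; the urbanization rule, threshold, and initial seed are hypothetical.

```python
import numpy as np

def step(grid, threshold=3):
    """One CA update: a cell becomes urban if it has >= threshold urban
    4-neighbours (a simple contiguity rule on a rectangular tessellation)."""
    padded = np.pad(grid, 1)
    neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                  padded[1:-1, :-2] + padded[1:-1, 2:])
    return np.where(neighbours >= threshold, 1, grid)

rng = np.random.default_rng(5)
grid = (rng.random((50, 50)) < 0.25).astype(int)   # hypothetical initial urban seed
for _ in range(20):
    grid = step(grid)
print("urbanized fraction after 20 steps:", grid.mean().round(2))
```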

8.
Stochastic modeling based on deterministic formulation: An example
In the context of seismology, an example of stochastic modeling on the basis of an established deterministic formulation is presented. The advantages of this approach to modeling over those based solely on statistical fit are discussed. It is demonstrated that the result of applying this procedure is a model whose main parameter has a physical interpretation, so that a validation based on criteria other than statistical goodness of fit is also possible. Statistical inference, together with some illustrative examples, is also included.

9.
Weights of evidence modeling for combining indicator patterns in mineral resource evaluation is based on an application of Bayes' rule. Two weights are defined for each indicator pattern, and Bayes' rule is applied repeatedly to combine the patterns. If all patterns are conditionally independent with respect to deposits, the logit of the posterior probability can be calculated as the logit of the prior probability plus the sum of the weights of the overlay patterns. The information integrated for gold exploration in the Xiong-er Mountain Region comes from a geological map, an interpreted map of a Thematic Mapper (TM) image, and the locations of known gold deposits. Favorable stratigraphic units, structural control factors, and alteration factors are considered. The work was conducted on an S600 I2S image-processing system, and FORTRAN programs were developed for creating indicator patterns, statistical calculations, and pattern integration. Six indicator patterns were selected to predict mineral potential; they are conditionally independent according to pairwise G² tests and an overall chi-square test. The potential area predicted using the 32 known deposits generally coincides with the prospect areas determined by geological fieldwork.
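A short sketch of the weights-of-evidence arithmetic named above: W+ and W− for a binary indicator pattern computed from cell counts, and the additive logit update that is valid under conditional independence. The cell counts are hypothetical, not the Xiong-er Mountain data.

```python
import numpy as np

def pattern_weights(n_deposits_in, n_deposits_out, area_in, area_out):
    """
    W+ and W- for one binary indicator pattern (cell-counting approximation).
    n_deposits_in/out : deposits inside / outside the pattern
    area_in/out       : unit cells inside / outside the pattern
    """
    n_dep = n_deposits_in + n_deposits_out
    p_b_d  = n_deposits_in / n_dep                                    # P(pattern | deposit)
    p_b_nd = (area_in - n_deposits_in) / (area_in + area_out - n_dep) # P(pattern | no deposit)
    w_plus  = np.log(p_b_d / p_b_nd)
    w_minus = np.log((1 - p_b_d) / (1 - p_b_nd))
    return w_plus, w_minus

def posterior_probability(prior, weights_applied):
    """Posterior from the prior plus the sum of applicable weights on the logit
    scale, valid if the patterns are conditionally independent given deposits."""
    logit = np.log(prior / (1 - prior)) + sum(weights_applied)
    return 1.0 / (1.0 + np.exp(-logit))

# Hypothetical cell counts for two patterns, both present at the target cell.
w1 = pattern_weights(12, 8, 400, 9600)     # e.g. favourable stratigraphic unit
w2 = pattern_weights(15, 5, 900, 9100)     # e.g. alteration factor from the TM image
prior = 20 / 10000                         # 20 known deposits in 10,000 cells
print("posterior probability:", round(posterior_probability(prior, [w1[0], w2[0]]), 4))
```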

10.
The past few years have seen an increasing application of computer-based procedures in the processing and interpretation of geochemical data. In many cases this has been carried out by non-geologists using large, general-purpose computers remote from the exploration effort. In such situations, appreciation of the geological nature of the problems is often inadequate, and in a number of cases procedures have been misapplied. The present paper describes the Q'GAS system, which consists of a minicomputer and a series of compatible interactive programs. These programs can be used independently by geologists who have a minimum of experience with computers, allowing them to carry out personally the processing and interpretation of geochemical data and to ensure that the methodology rests on a firm geological framework. The expense of such a system is relatively low: approximately $25,000 in capital costs and $3,500 per year in running costs, including a maintenance contract. In addition, approximately one man-month per year is required for supervision and upkeep.

The system has the capability of: (a) selecting subgroups of samples that meet specified criteria (e.g. a specific rock type or value range); (b) transforming data (addition, subtraction, multiplication, division, logarithms, exponentiation and random number generation); (c) making statistical computations; and (d) producing graphical displays (e.g. histograms, X-Y plots, symbol maps, etc.). Flexible diskettes are used for data storage, and communication with the system is achieved through a video terminal; hard-copy output is produced on a small printer. The system presently includes programs for data management, univariate statistics with histograms, correlation analysis, X-Y plots, line-printer symbol maps, line-printer geochemical profiles, multiple linear regression, discriminant analysis, and R-mode factor analysis. Attention is presently being given to developing programs that use a pen plotter to produce better-quality maps and diagrams at any scale.

Experience has shown that the simpler programs for the construction of maps, graphs, and diagrams can provide an immediate improvement in the quality, thoroughness, and speed of data interpretation, as well as significantly reducing the tedium associated with manual methods. The multivariate statistical techniques, as always, require a higher level of expertise and many more man-hours if they are to be used successfully.
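The interactive capabilities listed above (subgroup selection, transformation, statistics, simple plots) have direct modern equivalents; below is a hedged pandas sketch on hypothetical stream-sediment data, not the Q'GAS code itself.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical stream-sediment data standing in for a Q'GAS-style workflow.
rng = np.random.default_rng(6)
df = pd.DataFrame({
    "rock_type": rng.choice(["granite", "basalt"], 300),
    "Cu_ppm": rng.lognormal(3.0, 0.6, 300),
    "Zn_ppm": rng.lognormal(3.5, 0.5, 300),
})

# (a) select a subgroup of samples meeting specified criteria
granites = df[(df["rock_type"] == "granite") & (df["Cu_ppm"] > 10)]

# (b) transform data - a log transform, a staple of geochemical processing
granites = granites.assign(log_Cu=np.log10(granites["Cu_ppm"]))

# (c) statistical computations: univariate summary and correlation
print(granites[["Cu_ppm", "Zn_ppm"]].describe())
print(granites[["log_Cu", "Zn_ppm"]].corr())

# (d) graphical displays: histogram and X-Y plot
granites["log_Cu"].hist(bins=20)
granites.plot.scatter(x="Cu_ppm", y="Zn_ppm", logx=True, logy=True)
plt.show()
```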

11.
A system of interactive graphic computer programs for multivariate statistical analysis of geoscience data (SIMSAG) has been developed to facilitate the construction of statistical models for evaluating potential mineral and energy resources from geoscience data. The system provides an integrated interactive package for graphic display, data management, and multivariate statistical analysis, and is specifically designed to analyze and display spatially distributed information that includes the geographic locations of observations. SIMSAG enables users not only to perform several different types of multivariate statistical analysis but also to display the selected data or the results of the analyses in map form. In the analysis of spatial data, graphic displays are particularly useful for interpretation, because the results can easily be compared with the known spatial characteristics of the data. The system also permits the user to modify variables and to select subareas outlined with a cursor; all operations and commands are performed interactively via a graphic computer terminal. A case study is presented as an example: the construction of a statistical model for evaluating potential areas for uranium exploration from geological, geophysical, geochemical, and mineral-occurrence map data quantified for equal-area cells in the Kasmere Lake area of Manitoba, Canada.
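A minimal sketch of the kind of cell-based favorability model the case study describes, using ordinary multiple linear regression on hypothetical equal-area cell data; the predictors, coefficients, and cell counts are invented for illustration.

```python
import numpy as np

# Hypothetical equal-area cell data: geophysical/geochemical predictors and
# known uranium occurrence counts per cell.
rng = np.random.default_rng(7)
n_cells = 500
X = rng.normal(size=(n_cells, 3))                   # e.g. radiometrics, geochemistry, structure density
beta_true = np.array([0.8, 0.0, 0.4])
occurrences = X @ beta_true + rng.normal(0, 0.5, n_cells)

# Multiple linear regression (least squares) relating occurrences to predictors;
# the fitted surface ranks cells as exploration targets.
A = np.column_stack([np.ones(n_cells), X])
coeffs, *_ = np.linalg.lstsq(A, occurrences, rcond=None)
favorability = A @ coeffs
top_cells = np.argsort(favorability)[-10:]          # ten most favourable cells
print("fitted coefficients:", np.round(coeffs, 2))
print("top-ranked cells:", top_cells)
```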

12.
Youth preparedness for disasters is a growing area of research; however, studies to date have relied on cross-sectional, correlational research designs. The current study replicated aspects of the one other study to date that has used a quasi-experimental strategy to evaluate youth disaster preparedness. This study evaluated whether children were more knowledgeable about and prepared for hazards generally, and also in more specific relation to the rollout of a new tsunami warning system. Using a pretest–posttest design with benchmarking, the study found that following a brief school education program, supplementing a larger community-wide effort, children reported significant gains in preparedness indicators, including increased knowledge as well as increases in physical and psychosocial preparedness. Within-group effect sizes compared favorably with those from the previous experimental study used to benchmark the current intervention-produced findings, and suggested that combining school education programs with larger community preparedness efforts can enhance preparedness. Given that this is only one of two experimentally based studies in an area of research largely dominated by cross-sectional designs, future research should consider the use of experimental designs, including those that are pragmatic and fit the needs of the school. The current approach has limitations that need to be considered, but it also has some real advantages, including the potential to be used more extensively in fieldwork studies that evaluate various types of interventions. Through increased use of experimental design strategies, researchers can have greater confidence that educational programs are the source of increases in disaster resilience in youth and their families.

13.
In this paper, four selected numerical approaches for the cyclic axial loading analysis of vertical piles are briefly described. Two of the programs utilise a load transfer type of approach while the other two utilise a simplified boundary element continuum approach. The predictions obtained from the four programs are compared for the hypothetical case of an offshore drilled and grouted pile subjected to a storm-loading condition. It is shown that the initial static responses of the pile obtained from the four programs compare favourably while the cyclic loading predictions, particularly the accumulated pile displacement, show a much greater variation.

14.
Standard multivariate statistical techniques, such as principal components analysis and hierarchical cluster analysis, have been widely used as unbiased methods for extracting meaningful information from groundwater quality data. However, these classical multivariate methods deal with two-way matrices, usually parameters × sites or parameters × time, whereas the dataset resulting from a water-quality monitoring program should often be seen as a datacube of parameters × sites × time. Three-way matrices such as this are difficult to handle and analyse with classical multivariate statistical tools and should instead be treated with approaches designed for three-way data structures. One possible approach is Partial Triadic Analysis (PTA). Applied to the dataset of the Luxembourg Sandstone aquifer, PTA appears to be a promising new statistical instrument for hydrogeologists for characterizing temporal or spatial hydrochemical variations induced by natural and anthropogenic factors. This approach to groundwater management offers potential for (1) identifying a common multivariate spatial structure, (2) uncovering the different hydrochemical patterns and explaining their controlling factors, and (3) analyzing the temporal variability of this structure and grasping hydrochemical changes.
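A hedged sketch of the core of a PTA: centre each campaign table, weight the campaigns by the leading eigenvector of their vector-correlation (interstructure) matrix, and analyse the weighted "compromise" table; a full PTA also examines the interstructure and residual trajectories in detail. The datacube below is synthetic.

```python
import numpy as np

def partial_triadic_compromise(cube):
    """
    cube : array (t, n, p) - t sampling campaigns, n sites, p hydrochemical parameters.
    Returns RV-like weights per campaign and the 'compromise' table.
    """
    # Centre and scale each campaign table
    tables = [(X - X.mean(0)) / X.std(0) for X in cube]
    # Interstructure: vector-correlation matrix between campaign tables
    S = np.array([[np.sum(A * B) / np.sqrt(np.sum(A * A) * np.sum(B * B))
                   for B in tables] for A in tables])
    # The first eigenvector of S gives the campaign weights
    vals, vecs = np.linalg.eigh(S)
    w = np.abs(vecs[:, -1])
    w = w / w.sum()
    compromise = sum(wi * Ti for wi, Ti in zip(w, tables))
    return w, compromise

# Hypothetical datacube: 6 campaigns x 12 wells x 5 parameters
rng = np.random.default_rng(8)
cube = rng.normal(size=(6, 12, 5)) + np.linspace(0, 1, 6)[:, None, None]
w, compromise = partial_triadic_compromise(cube)
# PCA (via SVD) of the compromise reveals the common multivariate spatial structure
_, s, _ = np.linalg.svd(compromise - compromise.mean(0), full_matrices=False)
print("campaign weights:", np.round(w, 2))
print("variance explained by first compromise axis:", round(s[0]**2 / np.sum(s**2), 2))
```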

15.
A totally objective procedure involving sixteen statistical tests (a total of thirty-four single- or multiple-outlier versions of these tests) for outlier detection and rejection in a univariate sample is applied to a database of sixty-four elements in a recently issued international geochemical reference material (RM), the microgabbro PM-S from Scotland. This example illustrates the relative importance and usefulness of these tests in screening modern geochemical data for possible outliers and in obtaining mean concentrations and other statistical parameters from a final, normally distributed univariate sample. The final mean values are more reliable (characterized by smaller standard deviations and narrower confidence limits) than those obtained earlier using an accommodation approach (robust techniques) applied to the same database. Very high quality (certified value equivalent, cve) mean data are now obtained for eleven elements, as well as high quality recommended values (rv) for thirty-three elements in PM-S. Earlier work using the accommodation approach failed to establish even one cve value for any of the sixty-four elements compiled here. The present procedure of outlier detection and elimination is therefore recommended for the study of RMs.
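A sketch of one of the classical univariate tests referred to above: the two-sided single-outlier Grubbs test applied iteratively, followed by the mean and 95% confidence limits of the retained sample. The choice of test and the data are illustrative, not the PM-S compilation.

```python
import numpy as np
from scipy import stats

def grubbs_reject(x, alpha=0.05):
    """Iteratively apply the two-sided single-outlier Grubbs test until no
    further value is rejected, then report statistics of the retained sample."""
    x = np.asarray(x, dtype=float)
    outliers = []
    while x.size > 3:
        n = x.size
        dev = np.abs(x - x.mean())
        i = int(np.argmax(dev))
        g = dev[i] / x.std(ddof=1)
        t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
        g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
        if g <= g_crit:
            break
        outliers.append(x[i])
        x = np.delete(x, i)
    mean = x.mean()
    half_width = stats.t.ppf(0.975, x.size - 1) * x.std(ddof=1) / np.sqrt(x.size)
    return mean, half_width, outliers

# Hypothetical inter-laboratory results for one element in a reference material.
values = np.array([52.1, 51.8, 52.4, 52.0, 51.9, 52.3, 57.9, 52.2, 52.0, 52.1])
mean, ci, rejected = grubbs_reject(values)
print(f"mean = {mean:.2f} +/- {ci:.2f} (95% CL), rejected: {rejected}")
```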

16.
17.
The impact of the air pollution generated by industrial activities may be further aggravated if the industrial area is exposed to certain atmospheric characteristics; under such conditions, the likelihood of accumulation of local air pollution is high. This paper uses two approaches, statistical and numerical simulation, to investigate the contribution of atmospheric processes to the degradation of air quality. A case study applying both approaches was conducted over the Sohar Industrial Area in the Sultanate of Oman. In the statistical approach, measured wind data were used to quantify specific atmospheric characteristics such as stagnation, ventilation, and recirculation. In the second approach, a numerical weather prediction model was used to simulate mesoscale circulation phenomena such as the sea breeze and its contribution to the processes affecting air quality. The study demonstrates that atmospheric processes contribute substantially to the degradation of air quality in the Sohar Industrial Area. The statistical analysis shows that the atmospheric dilution potential of the area is prone to stagnation and recirculation rather than ventilation, and the model simulation shows a seasonal variation in the contribution of atmospheric processes to the degradation of air quality at Sohar.
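A hedged sketch of wind-run-based stagnation/recirculation/ventilation diagnostics of the Allwine-and-Whiteman type, one common way to quantify these characteristics from measured wind data (the paper's exact indices may differ); thresholds are site-specific and the 24-h sea-breeze record is synthetic.

```python
import numpy as np

def transport_indices(speed, direction_deg, dt_hours=1.0):
    """
    Wind run (S), net transport distance (L) and recirculation factor
    (R = 1 - L/S) over a window: low S suggests stagnation, high R suggests
    recirculation, high S with low R suggests ventilation.
    """
    theta = np.radians(direction_deg)
    u = speed * np.sin(theta)                 # east-west component
    v = speed * np.cos(theta)                 # north-south component
    S = np.sum(speed) * dt_hours              # wind run
    L = np.hypot(np.sum(u), np.sum(v)) * dt_hours
    R = 1.0 - L / S if S > 0 else np.nan
    return S, L, R

# Hypothetical 24-h record with a sea-breeze reversal (onshore by day, offshore by night).
hours = np.arange(24)
speed = np.full(24, 3.0)                                   # m/s
direction = np.where((hours >= 9) & (hours <= 18), 90.0, 270.0)
S, L, R = transport_indices(speed * 3.6, direction)        # convert m/s to km/h
print(f"wind run = {S:.0f} km, net transport = {L:.0f} km, recirculation factor = {R:.2f}")
```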

18.
A database of analyses of C1 and C7 hydrocarbons from an oil- and gas-producing region in Mexico has been assembled from gas samples collected at depths of 3, 15 and 30 meters from surficial holes drilled in traverses over producing and barren structures. The surface consisted of subtropical swamps; depth to structure was 3,500 to 5,800 meters.

Hydrocarbon analyses from six structures (three producing and three barren) selected from the database were subjected to multiple discriminant function analysis to provide a retrospective statistical test of the ability of geochemical prospecting to distinguish producing from non-producing structures. The hydrocarbon spectra from 3 meters depth yielded ambiguous results; those from 30 meters produced a clear distinction between barren and producing structures. Further, the discriminant functions established a base of geochemical characteristics, founded on known (retrospective) areas, to which additional unknown (prospective) areas may be compared for classification. This suggests a bootstrapping approach to exploration in which an increasing number of "known" results can be used to continually update and refine the predictive power of the discriminant function.

This indicates the practical value of a combined geochemical and multivariate statistical prospecting approach as an exploration tool, particularly within a single geochemical/geological province. Geochemical prospecting, perhaps with relatively deep (30 m) penetration, combined with multivariate data analysis, is a rapid, potent and relatively inexpensive additional tool for petroleum exploration.
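A sketch of the retrospective discriminant-analysis test using scikit-learn's linear discriminant analysis on synthetic hydrocarbon spectra; the class means, number of holes, and the "bootstrapping" scoring step are illustrative assumptions, not the Mexican dataset.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Hypothetical soil-gas hydrocarbon spectra (7 components per hole) from holes
# over producing (1) and barren (0) structures.
rng = np.random.default_rng(9)
n_holes = 120
barren    = rng.lognormal(mean=1.0, sigma=0.4, size=(n_holes, 7))
producing = rng.lognormal(mean=1.3, sigma=0.4, size=(n_holes, 7))
X = np.log(np.vstack([barren, producing]))           # log concentrations
y = np.r_[np.zeros(n_holes), np.ones(n_holes)]

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)
print("cross-validated classification accuracy:", scores.mean().round(2))

# "Bootstrapping" exploration: refit as newly drilled (known) structures accumulate,
# then score prospective areas by their posterior probability of being productive.
lda.fit(X, y)
new_area = np.log(rng.lognormal(mean=1.3, sigma=0.4, size=(1, 7)))
print("P(producing) for a prospective area:", lda.predict_proba(new_area)[0, 1].round(2))
```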

19.
20.
Various uncertainties arising during the acquisition of geoscience data may result in anomalous data instances (i.e., outliers) that do not conform with the expected pattern of regular data instances. With sparse multivariate data obtained from geotechnical site investigation, it is impossible to identify outliers with certainty, because outliers distort the statistics of the geotechnical parameters and data sparsity introduces statistical uncertainty. This paper develops a probabilistic outlier detection method for sparse multivariate data obtained from geotechnical site investigation. The proposed approach quantifies the outlying probability of each data instance based on the Mahalanobis distance and identifies as outliers those data instances with outlying probabilities greater than 0.5. It tackles the distortion of statistics estimated from a dataset containing outliers with a re-sampling technique, and accounts rationally for the statistical uncertainty through Bayesian machine learning. Moreover, the proposed approach also provides a method to determine the outlying components of each outlier. The approach is illustrated and verified using simulated and real-life datasets. It is shown that the proposed approach properly identifies, in a probabilistic manner, outliers among sparse multivariate data and their corresponding outlying components, and that it can significantly reduce the masking effect (i.e., missing some actual outliers owing to the distortion of statistics by the outliers and to statistical uncertainty). It is also found that outliers among sparse multivariate data instances significantly affect the construction of the multivariate distribution of geotechnical parameters for uncertainty quantification, which emphasizes the necessity of a data cleaning process (e.g., outlier detection) for uncertainty quantification based on geoscience data.
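A simplified, resampling-only sketch of the idea: estimate each instance's outlying probability as the fraction of re-sampled mean/covariance estimates (computed from the other instances) under which its Mahalanobis distance exceeds a chi-square threshold, and flag instances with probability above 0.5. The paper's Bayesian machine-learning treatment of statistical uncertainty is not reproduced here; all data are synthetic.

```python
import numpy as np
from scipy import stats

def outlying_probabilities(X, n_resamples=500, subset_frac=0.7, seed=0):
    """Outlying probability of each instance from re-sampled Mahalanobis distances."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    cutoff = stats.chi2.ppf(0.975, df=p)
    probs = np.empty(n)
    for i in range(n):
        others = np.delete(np.arange(n), i)
        exceed = 0
        for _ in range(n_resamples):
            idx = rng.choice(others, size=max(p + 2, int(subset_frac * (n - 1))), replace=False)
            mu = X[idx].mean(0)
            cov = np.cov(X[idx], rowvar=False)
            d2 = (X[i] - mu) @ np.linalg.solve(cov, X[i] - mu)
            exceed += d2 > cutoff
        probs[i] = exceed / n_resamples
    return probs

# Sparse hypothetical site-investigation data (15 instances, 3 parameters), one outlier.
rng = np.random.default_rng(10)
X = rng.multivariate_normal([10.0, 25.0, 0.3], np.diag([1.0, 4.0, 0.01]), size=15)
X[0] = [14.0, 10.0, 0.6]
probs = outlying_probabilities(X)
print("outliers (P_out > 0.5):", np.where(probs > 0.5)[0])
```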
