Similar Literature
20 similar documents found (search time: 46 ms)
1.
Difficulties in applying disjunctive kriging (D.K.) with an anamorphosis to a normal distribution have led to interest in D.K. based on other distributions. After reviewing Gaussian D.K., this paper reviews D.K. based on other infinitely divisible distributions (gamma, Poisson, and negative binomial).

2.
On the basis of the negative binomial distribution of the duration of wet periods measured in days, an asymptotic model is proposed for the distribution of the maximum daily rainfall volume during a wet period. The model has the form of a mixture of Frechet distributions and coincides with the distribution of a positive power of a random variable having the Fisher–Snedecor distribution. The proof of the corresponding result is based on limit theorems for extreme order statistics in samples of random size with a mixed Poisson distribution. The adequacy of the proposed models and of the methods for their statistical analysis is demonstrated by estimating the parameters of the extreme-value distribution from real data.
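As a rough illustration of this construction, one can simulate period maxima and check the Fréchet fit. A minimal Monte Carlo sketch, assuming (purely for illustration) gamma-distributed daily volumes alongside the negative binomial wet-period lengths named in the abstract:

```python
# Monte Carlo sketch of the model above: the largest daily rainfall volume
# in a wet period whose length (in days) is negative binomial. The gamma
# daily volumes and all parameter values are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
lengths = rng.negative_binomial(n=2, p=0.3, size=20_000) + 1   # >= 1 wet day
maxima = np.array([rng.gamma(shape=1.5, scale=4.0, size=k).max() for k in lengths])

# Fit a Frechet (inverse Weibull) distribution to the simulated maxima.
c, loc, scale = stats.invweibull.fit(maxima, floc=0)
print(f"Frechet shape = {c:.2f}, scale = {scale:.2f}")
```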

3.
The two-dimensional spatial distribution of precious stones, such as diamonds in alluvial and coastal deposits, shows a high degree of clustering. Stones tend to gather in relatively small clusters or traps formed by potholes, gullies, or small depressions in the rough bedrock. Consequently, when such deposits are sampled, the discrete distributions of the number of stones counted in each sample are extremely skewed: most samples contain no stones, whereas samples containing a few hundred stones are not unusual. This paper constructs a model and a fitting method for a new and general family of counting distributions, based on the Neyman-Scott cluster model and the mixed Poisson process, which can represent differing degrees of clustering. General recursion equations for the discrete probabilities of these distributions are derived. Application of the model to simulated data shows that information such as cluster size, the number of point events per cluster, and the number of clusters per measurement unit can be extracted easily. Fitting the model to data from two real diamond deposits of totally different natures (small rich clusters in Namibia versus larger but less rich clusters in Guinea) demonstrates its flexibility.
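The recursion equations themselves are not reproduced in the abstract, but the clustering mechanism is easy to sketch. A hypothetical simulation with a Poisson number of clusters per sample and Poisson cluster sizes (both distributional choices are assumptions here, chosen only to reproduce the extreme skew described above):

```python
# Illustrative sketch (not the paper's exact recursion): counts per sample
# under a Neyman-Scott-type model, i.e. a Poisson number of clusters per
# sample, each contributing a Poisson number of stones.
import numpy as np

rng = np.random.default_rng(1)

def sample_counts(n_samples, clusters_per_sample=0.05, stones_per_cluster=200.0):
    n_clusters = rng.poisson(clusters_per_sample, size=n_samples)
    return np.array([rng.poisson(stones_per_cluster, size=k).sum() for k in n_clusters])

counts = sample_counts(10_000)
print("fraction of barren samples:", np.mean(counts == 0))
print("max stones in one sample:", counts.max())
```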

4.
Estimating the occurrence probability of volcanic eruptions with VEI ≥ 3 is challenging in several respects, including data scarcity. A suggested approach has been to use a simple model in which eruptions are assumed to follow a Poisson process, augmenting the data used to estimate the eruption onset rate with data from several analog volcanoes. In this model the eruption onset rate is a random variable following a gamma distribution, the parameters of which are estimated by an empirical Bayes analysis. The selection of analog volcanoes is an important step that needs to be considered explicitly, as we show that the analysis is not always feasible because of the over-dispersion required by the resulting negative binomial distribution for the numbers of eruptions. We propose a modification to the method that allows for both over-dispersed and under-dispersed data and permits analog volcanoes to be chosen on grounds other than mathematical tractability.
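A minimal sketch of the moment-matching step implied above, using the standard gamma-Poisson (negative binomial) identities; the count data and the unit observation window are hypothetical:

```python
# Hedged sketch of the empirical Bayes step: eruption counts from analog
# volcanoes over equal observation windows are moment-matched to a negative
# binomial (a Poisson whose rate is gamma-mixed). The match is feasible only
# when the counts are over-dispersed (variance > mean).
import numpy as np

counts = np.array([1, 0, 3, 2, 0, 5, 1, 2])  # hypothetical analog-volcano data
m, v = counts.mean(), counts.var(ddof=1)
if v <= m:
    raise ValueError("under-dispersed data: gamma-Poisson (NB) moment match fails")
# NB with mean m and variance v: r = m**2 / (v - m), p = m / v; the gamma
# prior then has shape alpha = r and rate beta = p / (1 - p) per unit time.
r, p = m**2 / (v - m), m / v
print(f"gamma shape = {r:.3f}, gamma rate = {p / (1 - p):.3f}")
```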

5.
A stochastic model of magmatic differentiation by fractional crystallization is given. The probability distribution of the quantity of crystallized solid removed from the silicate melt is assumed to be binomial at every stage of the differentiation. According to this model, when concentrations of an element are raised to a power whose exponent is the reciprocal of the difference between the element's bulk partition coefficient and unity, the resulting frequency-distribution pattern becomes exactly normal. The patterns of the concentrations themselves, however, cannot be expressed by a single type of function; they vary with the values of the bulk partition coefficients. Frequency distributions of element concentrations were computed under selected conditions on the basis of the model. The results show that the skewed pattern often observed in frequency distributions of minor-element concentrations is explained by this fractional crystallization process. The frequency distributions of Ni and Cr concentrations in the geosynclinal basalt of southwestern Japan were examined in terms of the model, and it was concluded that the model can be applied to the formation of the basalt, at least for these two elements.
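The power transform can be motivated with the ideal Rayleigh fractionation law (used here only as a simplified stand-in for the paper's binomial model): with bulk partition coefficient D and residual melt fraction F,

```latex
\[
  C_L \;=\; C_0\,F^{\,D-1}
  \qquad\Longrightarrow\qquad
  C_L^{\,1/(D-1)} \;=\; C_0^{\,1/(D-1)}\,F ,
\]
```

so the concentration raised to the reciprocal of (D - 1) is linear in F, and inherits a normal distribution whenever F is normally distributed across samples.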

6.
A common issue in spatial interpolation is the combination of data measured over different spatial supports. For example, information available for mapping disease risk typically includes point data (e.g. patients' and controls' residences) and aggregated data (e.g. socio-demographic and economic attributes recorded at the census tract level). Similarly, soil measurements at discrete locations in the field are often supplemented with choropleth maps (e.g. soil or geological maps) that model the spatial distribution of soil attributes as a juxtaposition of polygons (areas) with constant values. This paper presents a general formulation of kriging that allows the combination of both point and areal data through the use of area-to-area, area-to-point, and point-to-point covariances in the kriging system. The procedure is illustrated using two data sets: (1) a geological map and heavy metal concentrations recorded in the topsoil of the Swiss Jura, and (2) incidence rates of late-stage breast cancer diagnosis per census tract and locations of patient residences for three counties in Michigan. In the second case, the kriging system includes an error variance term, derived from the binomial distribution, to account for the varying reliability of incidence rates depending on the total number of cases recorded in each tract. Except under the binomial kriging framework, area-and-point (AAP) kriging ensures the coherence of the prediction, so that the average of interpolated values within each mapping unit equals the original areal datum. The relationships between binomial kriging, Poisson kriging, and indicator kriging are discussed under different scenarios for population size and spatial support. Sensitivity analysis demonstrates the smaller smoothing and greater prediction accuracy of the new procedure over ordinary kriging and traditional residual kriging based on the assumption that the local mean is constant within each mapping unit.
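The area-to-area and area-to-point covariances that enter such a kriging system are typically obtained by discretizing each areal unit into points and averaging a point-support covariance model. A minimal sketch of that averaging step (the exponential covariance model and the discretization density are assumptions, not the paper's choices):

```python
# Sketch of area-support covariances by discretization: an area is replaced
# by a set of points and the point-support covariance is averaged over them.
import numpy as np

def cov_point(h, sill=1.0, rng_=10.0):
    return sill * np.exp(-3.0 * h / rng_)          # exponential model

def cov_area_point(area_pts, x):
    d = np.linalg.norm(area_pts - x, axis=1)
    return cov_point(d).mean()

def cov_area_area(area_pts_a, area_pts_b):
    d = np.linalg.norm(area_pts_a[:, None, :] - area_pts_b[None, :, :], axis=2)
    return cov_point(d).mean()

# usage: a 4x4 grid of discretization points inside a unit-square "area"
g = np.linspace(0.125, 0.875, 4)
area = np.array([(i, j) for i in g for j in g])
print(cov_area_point(area, np.array([5.0, 0.0])), cov_area_area(area, area))
```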

7.
The pattern of natural size distributions (cited 4 times: 0 self-citations, 4 by others)

8.
The paper describes the development of a technique to simulate triaxial tests on specimens of railway ballast numerically at the particle scale, and its validation with reference to physical test data. The ballast particles were modelled using potential particles and the well-known discrete element method. The shapes of these elemental particles, the particle size distribution, and the number of particles (N = 2800) in each numerical triaxial specimen all closely matched the real ballast material being modelled. Confining pressures were applied to the specimen via a dynamic triangulation of the outer particle centroids. A parametric study was carried out to investigate the effects on the simulation of timestep, strain rate, damping, contact stiffness, and inter-particle friction. Finally, a set of parameters was selected that provided the best fit to experimental triaxial data, with very close agreement in mobilized friction and volumetric strain behaviour.

9.
This paper presents a theoretical formulation of Ostwald ripening of garnet and discusses the importance of the process during high-pressure, low-temperature (high P/T) metamorphism. The growth rate of garnet due to Ostwald ripening is formulated for a system consisting of minerals and an intergranular medium. Crystal size distributions (CSDs) of garnets are examined and compared with the theoretical distribution for Ostwald ripening. Two types of CSDs are recognized: one is consistent with the theoretical prediction, while the other is wider than the theoretical distribution. The former applies to samples in which garnets show homogeneous spatial distributions; the latter applies to samples in which garnets show heterogeneous spatial distributions, such as clusters or layers. These relations suggest that heterogeneity in the spatial distribution produces heterogeneity in garnet concentration, causing the wide distributions. The mean diameter (dg) varies widely among samples having narrow distributions. Because of a scaling law, Ostwald ripening explains well the similar CSD patterns in these samples despite their different dg. Compositional profiles of garnets of different sizes are consistent with Ostwald ripening rather than with nucleation and growth kinetics, suggesting that the CSDs result from Ostwald ripening. The magnitude of the heating rate determines which mechanism controls the CSD: nucleation and growth kinetics dominate when the heating rate is large, whereas Ostwald ripening dominates when the heating rate is small. CSDs of garnets in high P/T metamorphic rocks are consistent with the latter case.

10.
This paper introduces four kinds of novel bivariate maximum entropy distributions based on the bivariate normal copula, the Gumbel–Hougaard copula, the Clayton copula, and the Frank copula. These joint distributions consist of two marginal univariate maximum entropy distributions. Four types of Poisson bivariate compound maximum entropy distributions are then developed, based on the occurrence frequency of typhoons, on these novel bivariate maximum entropy distributions, and on bivariate compound extreme value theory. Groups of disaster-induced typhoon processes from 1949 to 2001 in the Qingdao area are selected, and the joint distribution of extreme water level and corresponding significant wave height within the same typhoon processes is established using the above Poisson bivariate compound maximum entropy distributions. The results show that all four distributions fit the original data well. A novel grading of disaster-induced typhoon surge intensity is established based on the joint return period of extreme water level and corresponding significant wave height, and the disaster-induced typhoons in Qingdao verify this grading criterion.
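For concreteness, a sketch of the Gumbel–Hougaard copula named above, used to compute an "AND" joint return period for a (water level, wave height) pair; the marginal probabilities u, v and the dependence parameter theta are illustrative assumptions:

```python
# Gumbel-Hougaard copula and an "AND" joint return period sketch.
import numpy as np

def gumbel_hougaard(u, v, theta=2.0):
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

u, v = 0.99, 0.98                 # marginal non-exceedance probabilities
C = gumbel_hougaard(u, v)
p_and = 1.0 - u - v + C           # P(X > x and Y > y)
print("joint 'AND' return period:", 1.0 / p_and, "events")
```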

11.
There is no single method available for estimating the seismic risk in a given area, and as a result most studies are based on some statistical model. If we denote by Z the random variable that measures the maximum magnitude of earthquakes per unit time, the seismic risk for a value m is the probability that this value will be exceeded in the next time unit, that is, R(m) = P(Z > m). Several approximations can be made by fitting different theoretical distributions to the function R, assuming different distributions for the magnitude of earthquakes. A related approach is to consider the differences between the occurrence times of consecutive earthquakes, or inter-event times. The hazard function, or failure rate function, of this variable measures the instantaneous risk of occurrence of a new earthquake, given that the last earthquake happened at time 0. In this paper we estimate the variable that measures the inter-event time using nonparametric techniques; that is, we do not assume any theoretical distribution. Moreover, because the stochastic process associated with this variable can be non-stationary, we condition each time on the previous ones. We then work with a multidimensional estimation and treat each multidimensional variable as a functional datum. Functional data analysis deals with data consisting of curves or multidimensional variables, and nonparametric estimation can be applied to functional data to describe the behavior of seismic zones and their associated instantaneous risk. The estimation techniques are illustrated by applying them to two different regions and data catalogues: California and southern Spain.
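In the simplest (one-dimensional, unconditioned) version of this idea, the hazard of the inter-event time can be estimated nonparametrically as h(t) = f(t)/S(t). A sketch using a kernel density estimate for f and the empirical survival function for S, on synthetic inter-event times standing in for a catalogue:

```python
# Nonparametric hazard of inter-event times: h(t) = f(t) / S(t).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
inter_event = rng.exponential(scale=30.0, size=500)   # days between events

f = gaussian_kde(inter_event)                         # kernel estimate of f
def hazard(t):
    S = np.mean(inter_event > t)                      # empirical survival
    return f(t)[0] / S if S > 0 else np.nan

print([round(hazard(t), 4) for t in (5.0, 30.0, 90.0)])
```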

12.
To provide constraints on the speciation of bacterial surface functional groups, we have conducted potentiometric titrations using the gram-positive aerobic species Bacillus subtilis, covering the pH range 2.1 to 9.8. Titration experiments were conducted using an auto-titrator assembly, with the bacteria suspended in fixed ionic strength (0.01 to 0.3 M) NaClO4 solutions. We observed significant adsorption of protons over the entire pH range of this study, including at the lowest pH values examined, indicating that proton saturation of the cell wall did not occur under any of the experimental conditions. Ionic strength, over the range studied here, did not have a significant effect on the observed buffering behavior relative to experimental uncertainty. Electrophoretic mobility measurements indicate that the cell wall is negatively charged even under the lowest pH conditions studied. These experimental results necessitate a definition of the zero proton condition such that the total proton concentration at the pH of suspension is offset to account for the negative bacterial surface charge, which tends toward neutrality at pH < 2. The buffering intensity of the bacterial suspensions reveals a wide spread of apparent pKa values. This spread was modeled using three significantly different approaches: a Non-Electrostatic Model, a Constant Capacitance Model, and a Langmuir-Freundlich Model. The approaches differ in how they treat surface electric field effects and in whether they treat the proton-active sites as discrete functional groups or as continuous distributions of related sites. Each type of model tested, however, provides an excellent fit to the experimental data, indicating that titration data alone are insufficient for characterizing the molecular-scale reactions that occur on the bacterial surface. Spectroscopic data on the molecular-scale properties of the bacterial surface are required to differentiate among the proton adsorption mechanisms inherent in these models. The applicability and underlying conceptual foundation of each model are discussed in the context of current knowledge of the structure of bacterial cell walls.
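As an illustration of the continuous-site-distribution idea, a hedged sketch of a Langmuir-Freundlich-type fit of the deprotonated site fraction versus pH; the one-site LF isotherm form is standard, but the synthetic data and all parameter values are assumptions, not the paper's:

```python
# Langmuir-Freundlich sketch: fraction of deprotonated sites vs pH, with a
# median pK and heterogeneity exponent m (m = 1 recovers one discrete pKa).
import numpy as np
from scipy.optimize import curve_fit

def lf_deprotonated(pH, pK, m):
    a = 10.0 ** (-pH)                    # proton activity
    Ka = 10.0 ** (-pK)
    return 1.0 / (1.0 + (a / Ka) ** m)

pH = np.linspace(2.0, 10.0, 30)
theta_obs = lf_deprotonated(pH, 6.0, 0.6) + np.random.default_rng(3).normal(0, 0.01, 30)
(pK_fit, m_fit), _ = curve_fit(lf_deprotonated, pH, theta_obs, p0=(5.0, 1.0))
print(f"fitted pK = {pK_fit:.2f}, m = {m_fit:.2f}")
```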

13.
    
Discovery sampling is a simple sampling inspection technique which has been used by internal auditors for the last 30 years; occasionally it has found application in agricultural research. It is shown that an amended version of this technique can be used in the sampling of orebodies for the presence of particles of a mineral such as gold, silver, uraninite, and so forth. Whereas classical discovery sampling involves a simple discrete distribution (multinomial, Poisson, etc.), in our case convolutions of a discrete (Poisson) and continuous distributions (gamma, lognormal) occur, because of the indirect procedure used to detect the presence of the mineral in the orebody. Illustrative examples using two distributions of gold particles are given.
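The classical arithmetic behind discovery sampling is worth recording, since the amended method builds on it: with Poisson particle counts, the number of samples needed to see at least one particle at a given confidence follows in one line (parameter values are illustrative):

```python
# Discovery-sampling sample size under Poisson particle counts.
import math

lam, confidence = 0.05, 0.95          # mean particles per sample, target confidence
p_zero = math.exp(-lam)               # Poisson P(no particle in one sample)
n = math.ceil(math.log(1.0 - confidence) / math.log(p_zero))
print(f"samples needed: {n}")         # ~60 for these values
```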

14.
Computations of uplift capacity of pile anchors in cohesionless soil (cited 3 times: 2 self-citations, 1 by others)
A method of analysis for the uplift capacity of pile anchors in cohesionless soil is proposed using Kötter's equation, which facilitates computation of the distribution of soil reaction on the axisymmetric failure surface; this surface is assumed to be the frustum of a cone with a characteristic angle of inclination to the pile-soil interface. A closed-form solution for the uplift capacity is obtained, with no need for charts or tables. Empirical relations based on the available literature are proposed for expressing the critical embedment ratio and for computing the net uplift capacity. The results are compared with a set of experimental data for 28 cases, ranging from loose to dense cohesionless soil up to a maximum embedment ratio of 40, vis-à-vis available theoretical solutions. The proposed method yields predictions that are in good agreement with the experimental results, and it further demonstrates the successful application of Kötter's equation to the estimation of the uplift capacity of pile anchors.

15.
Transmission tomography methods show a great sensitivity to data variability, which often includes data errors in field experiments. Local optimization methods, traditionally used to solve this inverse problem, are very sensitive to these difficulties, failing to converge properly in the presence of spurious data. Regularization methods partially cope with these weaknesses, damping the instabilities. A complementary approach, adopted here, is to perform a structured analysis of data variability before the inversion, oriented to discriminate the contribution of errors from that of true geological heterogeneities. The key concept of mean traveltime curves is introduced and described. Their analytical equations are deduced for isotropic homogeneous media and any recording geometry, and empirical mean traveltime curves can be inferred from traveltime data alone using the corresponding discrete estimators. The methodology proposed here proceeds through a user-defined subdivision of the domain of interest into isotropic homogeneous areas. Least-squares velocity estimates and the associated data misfits are used to scrutinize the behaviour of the implied source-receiver sets and of the ray-swept part of the geological medium. Data are considered suspicious if the estimated zonal velocities are inconsistent with a priori information. Independent fitting of both empirical curves also helps to classify the origin of the residuals; some situations are illustrated. Finally, we show the application of this technique to a data set from the Grimsel test site in Switzerland. Using this methodology, we detect some anomalous gathers, which may be responsible for the large range of velocities found in the initial imaging with this data set, and we give some indications of the possible sources of these anomalies. This approach offers a quick data-variability analysis in the pre-processing stage which, even if no data-editing algorithms are finally used, always improves the understanding of the data structure.
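For a homogeneous isotropic zone with straight rays, t = d/v, so each gather yields a least-squares zonal velocity whose residuals can flag suspicious data. A minimal sketch with synthetic geometry (the true velocity and noise level are assumptions):

```python
# Homogeneous-medium check: straight rays give t = d / v, so the
# least-squares slowness of a gather is s = sum(d*t) / sum(d*d), and
# large misfits point at suspicious source-receiver sets.
import numpy as np

rng = np.random.default_rng(4)
d = rng.uniform(5.0, 50.0, size=40)          # source-receiver distances (m)
t = d / 3000.0 + rng.normal(0, 2e-4, 40)     # traveltimes with noise (s)

s = np.dot(d, t) / np.dot(d, d)              # least-squares slowness
v = 1.0 / s
misfit = np.sqrt(np.mean((t - s * d) ** 2))
print(f"zonal velocity ~ {v:.0f} m/s, rms misfit {misfit * 1e3:.2f} ms")
```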

16.
Kriging without negative weights (cited 1 time: 0 self-citations, 1 by others)
Under a constant drift, the linear kriging estimator is a weighted average of the n available sample values. Kriging weights are determined such that the estimator is unbiased and optimal, and to meet these requirements negative kriging weights are sometimes found. Negative weights can produce negative block grades, which makes no practical sense, so in some applications all kriging weights may be required to be nonnegative. In this paper, a derivation of a set of nonlinear equations with the nonnegativity constraint is presented, and a numerical algorithm is developed for the solution of the new set of kriging equations.
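The paper derives its own set of nonlinear kriging equations; as a hedged alternative sketch, the same nonnegativity requirement can be imposed numerically by minimizing the kriging variance subject to w ≥ 0 and Σw = 1 (the covariance values below are illustrative):

```python
# Nonnegative ordinary kriging weights via constrained optimization.
import numpy as np
from scipy.optimize import minimize

C = np.array([[1.0, 0.6, 0.1],     # sample-to-sample covariances
              [0.6, 1.0, 0.2],
              [0.1, 0.2, 1.0]])
c0 = np.array([0.7, 0.3, -0.05])   # sample-to-target covariances

var = lambda w: w @ C @ w - 2.0 * w @ c0   # kriging variance up to a constant
res = minimize(var, x0=np.full(3, 1 / 3), method="SLSQP",
               bounds=[(0.0, None)] * 3,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
print("nonnegative kriging weights:", np.round(res.x, 3))
```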

17.
The statistical analysis of compositional data based on logratios of parts is not suitable when zeros are present in a data set. Nevertheless, several strategies published in the specialized literature make this modeling approach usable; in particular, substitution or imputation strategies are available for rounded zeros. In this paper, existing nonparametric imputation methods for both the additive and the multiplicative approach are reviewed, and essential properties of the latter are given. For missing values, a generalization of the multiplicative approach is proposed.
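A minimal sketch of multiplicative replacement for rounded zeros in a composition closed to 1: zeros are set to a small delta and the nonzero parts are scaled so that their ratios, and the closure, are preserved (the delta value is a user choice, assumed here):

```python
# Multiplicative replacement of rounded zeros in a unit-sum composition.
import numpy as np

def multiplicative_replacement(x, delta=0.005):
    x = np.asarray(x, dtype=float)          # composition summing to 1
    zeros = x == 0
    # zeros -> delta; nonzero parts scaled by (1 - k*delta), k = #zeros
    return np.where(zeros, delta, x * (1.0 - delta * zeros.sum()))

comp = np.array([0.60, 0.25, 0.15, 0.0])
print(multiplicative_replacement(comp), multiplicative_replacement(comp).sum())
```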

18.
Interpretation of geophysical data or other indirect measurements provides large-scale soft secondary data for modeling hard primary data variables. Calibration allows such soft data to be expressed as prior probability distributions of nonlinear block averages of the primary variable; poorer-quality soft data lead to prior distributions with large variance, while better-quality soft data lead to prior distributions with low variance. Another important feature of most soft data is that their quality is spatially variable: soft data may be very good in some areas and poorer in others. The main aim of this paper is to propose a new method of integrating such soft data, which is large-scale and has locally variable precision. The technique of simulated annealing is used to construct stochastic realizations that reflect the uncertainty in the soft data. This is done by constraining the cumulative probability values of the block averages to follow a specified distribution; these probability values are determined by the local soft prior distribution and a nonlinear average of the small-scale simulated values within the block, all of which are known. For each realization to capture accurately the information contained in the soft data distributions, we show that the probability values should be uniformly distributed between 0 and 1. An objective function is then proposed for a simulated annealing based approach to enforce this uniform probability constraint. The theoretical justification of this approach is discussed, implementation details are considered, and an example is presented.
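One plausible form of such an objective function (a sketch under our own assumptions, not necessarily the paper's exact choice) penalizes the distance between the empirical distribution of the probability values and the uniform distribution on [0, 1]:

```python
# Uniformity objective sketch for annealing: compare the empirical CDF of
# the block probability values p_i with the uniform CDF on [0, 1].
import numpy as np

def uniformity_objective(p):
    p = np.sort(np.asarray(p))
    ecdf = (np.arange(1, p.size + 1) - 0.5) / p.size
    return np.sum((ecdf - p) ** 2)          # Cramer-von-Mises-type misfit

print(uniformity_objective(np.random.default_rng(5).uniform(size=200)))  # small
print(uniformity_objective(np.full(200, 0.5)))                           # large
```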

19.
The block sizes in a rock mass play an important role in many rock engineering projects, and the assessment of in-situ block size distribution (IBSD) has therefore been an increasing pursuit of researchers in mining, quarrying, and highway cutting operations. This paper discusses further developments in the assessment of IBSD that build upon a broadly accessible approach for engineers published previously by the Geomaterials Unit. The original research provided look-up tables appropriate for field data, with theoretical joint set spacing distributions and the assumption that discontinuities extend indefinitely. The developments reported here include: the prediction of IBSD with special reference to discontinuity sets with fractal spacing distributions; the influence of the impersistence of discontinuities on the prediction of IBSD; and the use of grey correlation analysis when selecting a closely fitting theoretical distribution for discontinuity spacing data. Various approaches to IBSD assessment are discussed.

20.
Very little work has been done on generating alternatives to the Poisson process model. The work reported here develops alternatives to the Poisson process model for earthquakes and checks them using empirical data and the statistical hypothesis-testing apparatus. The strategy used for generating hypotheses is to compound the Poisson process: the parameter of the Poisson process is replaced by a random variable with a prescribed density function. The density functions used are the gamma, the chi, and an extended (gamma/chi) density, and the original distribution is then averaged with respect to them. For the compound Poisson processes, the waiting-time distributions for future events are derived. Because the parameters of the various statistical models for earthquake occurrence are not known, the problem is essentially one of composite hypothesis testing; one way of designing a test is to estimate these parameters and use them as true values. Moment matching is used here to estimate the parameters. The results of hypothesis testing using data from the Hindukush and northeast India are presented.
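For the gamma mixing density, the waiting-time distribution follows in one line (a standard result for the gamma-compounded Poisson process, shown here for that case only): if the rate Λ has a gamma density with shape α and rate β, the time T to the next event satisfies

```latex
\[
  P(T > t) \;=\; \mathbb{E}\!\left[e^{-\Lambda t}\right]
  \;=\; \int_0^{\infty} e^{-\lambda t}\,
        \frac{\beta^{\alpha}}{\Gamma(\alpha)}\,
        \lambda^{\alpha-1} e^{-\beta\lambda}\, d\lambda
  \;=\; \left(\frac{\beta}{\beta+t}\right)^{\!\alpha},
\]
```

a Lomax (Pareto type II) survival function, heavier-tailed than the exponential waiting time of the plain Poisson process.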

