This article discusses statistical models for the solar flare interval distribution in individual active regions. We analyzed
solar flare data for 55 active regions listed in the Geosynchronous Operational Environmental Satellite (GOES) soft X-ray flare catalog for the years 1981–2005. We discuss problems with the conventional procedure for deriving probability density functions from a data set and propose a new procedure that uses the maximum likelihood method
and the Akaike Information Criterion (AIC) to compare competing probability density functions objectively. Previous studies
of the solar flare interval distribution in individual active regions considered only constant or time-dependent Poisson process models. In this study we examine three competing models for the probability density function: exponential, lognormal, and inverse Gaussian. We found that the lognormal and inverse Gaussian models are better supported than the exponential model for the solar flare interval distribution in individual active regions. Possible solar flare mechanisms for these distribution models are briefly mentioned. We also briefly investigated the time dependence of the probability
density functions of the solar flare interval distribution and found that some active regions show time dependence for lognormal
and inverse Gaussian distribution functions. The results suggest that solar flares do not occur randomly in time; rather,
solar flare intervals appear to be regulated by solar flare mechanisms. Determining a solar flare interval distribution is
an essential step in probabilistic solar flare forecasting in space weather research, and applying our distribution analysis to such a forecasting method is one of the main objectives of this study.
Sampling satellite images presents specific difficulties: images overlap, and many fall partially outside the study region. Careless sampling may introduce substantial bias. This paper illustrates the risk of bias and the efficiency improvements offered by systematic, pps (probability proportional to size), and stratified sampling. A sampling method is proposed with the following criteria: (a) unbiased estimators are easy to compute; (b) it can be combined with stratification; (c) within each stratum, sampling probability is proportional to the area of the sampling unit; and (d) the geographic distribution of the sample is reasonably homogeneous. Thiessen polygons computed on image centres are sampled through a systematic grid of points. The sampling rates in different strata are tuned by dividing the systematic grid into subgrids, or replicates, and taking a certain number of replicates for each stratum. The approach is illustrated with an application to the estimation of the geometric accuracy of Image2000, a Landsat ETM+ mosaic of the European Union.
Positional error is the error produced by the discrepancy between reference and recorded locations. In urban landscapes, locations typically are obtained from global positioning systems or geocoding software. Although these technologies have improved the locational accuracy of georeferenced data, they are not error free, and this error affects the results of any spatial statistical analysis performed with a georeferenced dataset. In this paper we discuss the properties of positional error in an address-matching exercise and in the allocation of point locations to census geography units. We focus on the error's spatial structure, and more particularly on the impacts of error propagation in spatial regression analysis. For this purpose we use two geocoding sources, briefly describe the magnitude and nature of their discrepancies, and evaluate the consequences that this type of locational error has on a spatial regression analysis of pediatric blood lead data for Syracuse, NY. Our findings include: (1) confirmation of recurrent spatial clustering in positional error at various geographic resolutions; and (2) identification of a noticeable, though not dramatic, impact of positional error propagation on spatial auto-binomial regression results for the dataset analyzed.
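The general effect studied above, positional error propagating into regression estimates, can be demonstrated with a simulation. All magnitudes here are illustrative and unrelated to the Syracuse dataset, and this sketch uses ordinary least squares rather than the paper's auto-binomial model.

```python
# Perturb point locations with positional error and compare regression slopes:
# errors in the coordinates act like errors-in-variables and attenuate the estimate.
import numpy as np

rng = np.random.default_rng(1)
n = 500
true_xy = rng.uniform(0, 10, size=(n, 2))

# Outcome depends on an east-west gradient (a stand-in for an exposure surface).
risk = 0.5 * true_xy[:, 0] + rng.normal(0, 1.0, size=n)

def slope(xy, y):
    """OLS slope of y on the x-coordinate."""
    X = np.column_stack([np.ones(len(xy)), xy[:, 0]])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

# Geocoded locations: true positions plus positional error.
geocoded = true_xy + rng.normal(0, 1.0, size=(n, 2))

b_true = slope(true_xy, risk)    # recovers the 0.5 gradient
b_err = slope(geocoded, risk)    # typically attenuated toward zero
```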
Bayesian frameworks for comparing water quality information to a pre-specified standard or goal, and for comparing water quality characteristics between two entities, are presented and illustrated using chloride and total dissolved solids (TDS) measurements obtained in the shallower Chicot and the deeper Evangeline formations of the Gulf coast aquifer underlying Refugio County, TX. The Bayesian approach presents evidence for competing hypotheses that are weighed equally; unlike classical statistics, it does not force a decision in favor of one hypothesis. When comparing water quality information to a specified goal, the Bayesian approach addresses the more practical question: given all the information, what is the probability of meeting the goal? Similarly, when comparing water quality between two entities, the approach emphasizes the nature and extent of the differences and as such is better suited for evaluative studies. Bayesian analysis indicated that average chloride concentrations in the Evangeline formation were 1.65 times those in the Chicot formation, while the corresponding TDS concentration ratio was close to unity. The probability of identifying water with TDS ≤ 1,000 g/m³ was extremely low, especially in the more prolific Evangeline formation. The probability of groundwater supplies with mean chloride concentrations ≤ 500 g/m³ was relatively high in the Chicot formation but very low in the Evangeline formation, indicating a possible need to blend groundwater with other sources to meet municipal water quality goals.
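The "probability of meeting the goal" question posed above can be sketched with a simple conjugate model. The data values, known-variance assumption, and flat prior below are illustrative simplifications, not the framework actually used in the paper.

```python
# Posterior probability that the mean chloride concentration meets a 500 g/m^3
# goal, under a normal model with known variance and a flat prior on the mean.
import numpy as np
from scipy import stats

goal = 500.0
chloride = np.array([430, 510, 470, 455, 490, 520, 440, 465])  # hypothetical samples
sigma = 40.0                                   # assumed known measurement sd

# With a flat prior: mean | data ~ Normal(xbar, sigma^2 / n).
xbar = chloride.mean()
se = sigma / np.sqrt(len(chloride))
p_meet = stats.norm.cdf(goal, loc=xbar, scale=se)   # P(mean <= goal | data)
```

Unlike a classical test, the output is a direct probability statement about the goal, which is the practical quantity the abstract emphasizes.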
Statistical analysis of non-oriented directions. A method for computing the principal directions of a set of non-oriented directions or lineations, together with statistical tests of precision, is presented. Applications to the analysis of clusters of poles and to finding the intersection of remagnetisation circles are shown.
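A standard way to compute principal directions of non-oriented (axial) data, and plausibly the kind of computation meant here, is eigenanalysis of the orientation matrix, which is invariant to the arbitrary sign of each axis. The sample data below are hypothetical.

```python
# Principal directions of axial data via the orientation (scatter) matrix
# T = sum_i x_i x_i^T, whose eigenvectors are unaffected by flipping any x_i.
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical poles clustered around the z-axis.
axes = rng.normal([0, 0, 1], [0.2, 0.2, 0.05], size=(100, 3))
axes /= np.linalg.norm(axes, axis=1, keepdims=True)
axes *= rng.choice([-1.0, 1.0], size=(100, 1))   # random signs: same axes, sign-free

T = axes.T @ axes                                # 3x3 orientation matrix
eigvals, eigvecs = np.linalg.eigh(T)             # ascending eigenvalues

principal = eigvecs[:, -1]                       # eigenvector of the largest eigenvalue
```

The spread of the eigenvalues also measures how tightly the axes cluster, which is the basis of precision tests for this kind of data.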
Statistical tests were used to determine lead, copper, and chromium enrichment in sediments from the Lower Branch of the Rouge River in southeast Michigan, USA. Both absolute metal concentrations and ratios of trace metal to conservative metal concentrations were used to compare sampled sites along the Lower Branch of the Rouge River to background sites in the headwaters region. Concentration ratios were used to reduce the effects of certain chemical and physical characteristics on the level of metal contained in a given sediment. Comparison of sample sites to the background reveals metal enrichment at several sites, particularly along the highly urbanized, downstream section of the river. This section of the Lower Branch of the Rouge River exhibits significant lead and copper contamination, as well as measurable chromium enrichment, whether concentrations alone or ratios are used for comparison. The areas of metal enrichment appear to coincide closely with areas of known anthropogenic activities. Of particular interest, however, is the enrichment of lead and copper at two upstream sites where the statistical tests suggest an anthropogenic source but where no cultural activities were previously known. These data prompted a search of historical records, which uncovered several abandoned landfills immediately upstream of the metal enrichment sites.
Maps of 25 groundwater quality variables were obtained by estimating 4 km × 4 km block median concentrations. Estimates were presented as approximate 95% confidence intervals relative to four concentration levels, mostly derived from critical levels for human consumption. These maps were based on measurements from 425 monitoring sites of national and provincial groundwater quality monitoring networks. The estimation procedure was based on a stratification by soil type and land use; within each soil-land use category, measurements were interpolated, taking into account spatial dependence between measurements and regional differences in mean level. Stratification turned out to be essential: no or partial stratification (using either soil type or land use alone) results in essentially different maps. The effect of monitoring network density was studied by leaving out the 173 monitoring sites of the provincial monitoring networks. Important changes in the resulting maps were attributed to loss of information on short-distance variation as well as loss of location-specific information. For 12 variables, maps of changes in groundwater quality were made by spatial interpolation of short-term predictions calculated for each well screen from time series of yearly measurements over 5–7 years, using a simple regression model for variation over time and taking location-specific time-prediction uncertainties into account.
From a policy point of view, the resulting maps can be used either for quantifying diffuse groundwater contamination and location-specific background concentrations (to assist local contamination assessment) or for input to and validation of policy-supporting regional or national groundwater quality models. The maps can be considered a translation of point information obtained from the monitoring networks into information on spatial units of the size used in regional groundwater models. The maps also enable location-specific network optimization. In general, however, the wide confidence intervals give little reason for reducing the monitoring network density.
Summary In mining and geotechnical engineering, it is usually necessary to carry out field measurements in order to obtain information. Parameters are often measured indirectly and calculated from certain relationships to the measured quantities. More often than not, more measurements are taken than the minimum required, in order to increase the reliability of the results. However, some data points are less reliable than others, for reasons such as measurement error; a solution which best fits the measurement data is obtained accordingly. As a result, there is a residual, or difference, between each quantity measured and the value predicted from the best-fit solution. This raises the question of how large a residual is acceptable for a solution to be reliable. It is also important to know whether the data point with the largest residual is the most erroneous, whether data points with large residuals should be deleted, and if so how many. The standard deviation may provide a measure of the data divergence, but it is questionable whether this parameter can be used as a measure of the reliability of the solution. To address these problems, the author has carried out an extensive study in this area, especially as part of geotechnical data analysis. In this paper, the statistical multiple regression method is introduced to analyse the measurement data. The method is applied to the analysis of in situ stress measurements and can easily be adopted to analyse data from other field measurements and laboratory tests. An example is included which illustrates the analysis procedure and shows the advantages of the method.
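The residual question raised above can be made concrete with a small least-squares sketch: standardized residuals, which account for each point's leverage, are a more defensible flag for suspect measurements than raw residual size alone. Everything below is synthetic and illustrative, not the paper's regression procedure.

```python
# Overdetermined linear model fitted by least squares, with one grossly
# erroneous measurement flagged via standardized residuals.
import numpy as np

rng = np.random.default_rng(3)
n, p = 30, 3
A = rng.normal(size=(n, p))              # design matrix relating parameters to measurements
x_true = np.array([2.0, -1.0, 0.5])
b = A @ x_true + rng.normal(0, 0.1, size=n)
b[7] += 2.0                              # inject one grossly erroneous measurement

x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
resid = b - A @ x_hat

# Standardize residuals using the hat matrix: r_i / (s * sqrt(1 - h_ii)),
# so high-leverage points are not unfairly excused by their small raw residuals.
H = A @ np.linalg.inv(A.T @ A) @ A.T
s = np.sqrt(resid @ resid / (n - p))
standardized = resid / (s * np.sqrt(1 - np.diag(H)))

suspect = int(np.argmax(np.abs(standardized)))   # index of the most suspicious point
```

Deleting the flagged point and refitting, then repeating until no standardized residual is extreme, is one common way to answer "how many points should be deleted" without relying on the overall standard deviation alone.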