Similar Literature
20 similar records retrieved.
1.
Seismic reflection methods measure the time a seismic wave takes to travel through the ground, from a user-defined source to a series of signal-monitoring sensors known as geophones. The measured times must be converted to depth to allow integration with other geological data. To convert from time to depth, the velocity field of the rock volume must be estimated; this can be done by assigning velocity estimates to a geological model independently of the seismic processing. This article presents the results of using acoustic geophysical log data, extrapolated via sequential Gaussian simulation, to derive the velocity field. The uncertainties associated with the velocity estimates were significant and provided a means of assessing confidence limits for the depth determination itself. The technique is assessed by application to a major coal deposit, approximately 2.1 m thick and 210 m deep. Considering only the uncertainty associated with estimating the velocity field, half of the confidence-interval values indicated approximately 1 m of uncertainty in depth. The application of sequential Gaussian simulation to model the 3D distribution of acoustic velocity can be extended to other geophysical log parameters or derived estimates.
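A minimal sketch, not the authors' workflow, of how an ensemble of simulated velocities can be propagated into depth-conversion confidence limits; the velocity realizations, the two-way time pick and all numerical values are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for sequential-Gaussian-simulation output:
# 200 equally probable interval velocities (m/s) at one map location.
n_real = 200
velocities = rng.normal(loc=2400.0, scale=120.0, size=n_real)  # placeholder values

# Two-way travel time (s) picked for the target reflector at this location.
twt = 0.175  # placeholder pick

# Depth conversion for each realization: depth = v * t / 2 (two-way time).
depths = velocities * twt / 2.0

lo, hi = np.percentile(depths, [2.5, 97.5])
print(f"median depth : {np.median(depths):6.1f} m")
print(f"95% interval : {lo:6.1f} - {hi:6.1f} m (half-width ~ {(hi - lo) / 2:.1f} m)")
```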

2.
Summary. In this paper we discuss some aspects of estimating t* from short-period body waves and present some limits on t*(f) models for the central and south-western United States (CUS and SWUS). We find that for short-period data, with frequencies above 1 or 2 Hz, while the average spectral shape is stable, the smaller details of the spectra are not; thus, only an average t*, and not a frequency-dependent t*, can be derived from such information. Also, amplitudes are extremely variable for short-period data, and thus a great deal of data from many stations and azimuths must be used when amplitudes are included in attenuation studies.
The predictions of three pairs of models for t*(f) in the central and south-western United States are compared with time-domain observations of amplitudes and waveforms and frequency-domain observations of spectral slopes to put bounds on the attenuation under the different parts of the country. A model with the t* values of the CUS and SWUS converging at low frequencies and differing slightly at high frequencies matches the spectral-domain characteristics, but not the time-domain amplitudes and waveforms of short-period body waves. A model with t* curves converging at low frequencies, but diverging strongly at high frequencies, matches the time-domain observations, but not the spectral shapes. A model with nearly parallel t*(f) curves for the central and south-western United States satisfies both the time- and frequency-domain observations.
We conclude that use of both time- and frequency-domain information is essential in determining t*(f) models. For the central and south-western United States, a model with nearly parallel t*(f) curves, where Δt* ≈ 0.2 s, satisfies both kinds of data in the 0.3–2 Hz frequency range.
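As a worked illustration of the frequency-domain constraint described above: the amplitude spectrum of a teleseismic body wave decays roughly as exp(-π f t*), so an average t* over a band can be read from the slope of ln A(f) versus f once a source shape is assumed. The spectrum, source model and t* value below are synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Frequency band of interest (Hz) and a synthetic amplitude spectrum.
f = np.linspace(0.3, 2.0, 60)
t_star_true = 0.5                       # seconds (illustrative)
source = 1.0 / (1.0 + (f / 1.5) ** 2)   # crude omega-squared-like source shape
A = source * np.exp(-np.pi * f * t_star_true) * np.exp(rng.normal(0, 0.05, f.size))

# Average t* over the band from the slope of ln A(f) after removing the
# assumed source shape; slope = -pi * t*.
slope, intercept = np.polyfit(f, np.log(A / source), 1)
print(f"recovered average t* = {-slope / np.pi:.3f} s")
```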

3.
We studied the existence of dynamical stochastic relations in the evolution of the am index. A first analysis of the autocorrelation functions showed evidence of several seasonalities. We first used linear (ARMA) models, and it was found that these do not account for the whole internal dynamics of the data series. We then used various non-linear models to provide a better fit to reality. The forecast performances of the non-linear models are not significantly different from those of the linear model. We give a tentative explanation for the failure of the non-linear predictions. Finally, ARCH models were used in order to take into account the fact that the confidence interval for the predicted value depends on past observations.
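A hedged sketch of the two modelling stages mentioned above: an AR(1) mean model followed by an ARCH(1) model for the squared residuals, so that the forecast-interval width depends on past observations. The synthetic series stands in for the am index, and the crude least-squares fits below are illustrative rather than the estimation procedure used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for a geomagnetic activity index with volatility clustering.
n = 2000
x = np.zeros(n)
e_prev = 0.0
for t in range(1, n):
    sigma_t = np.sqrt(0.5 + 0.4 * e_prev ** 2)   # ARCH(1) innovations
    e_prev = sigma_t * rng.normal()
    x[t] = 0.6 * x[t - 1] + e_prev

# Stage 1: AR(1) mean model fitted by ordinary least squares.
phi = np.polyfit(x[:-1], x[1:], 1)[0]
resid = x[1:] - phi * x[:-1]

# Stage 2: ARCH(1) conditional variance, crude OLS on squared residuals:
# sigma_t^2 = a0 + a1 * e_{t-1}^2
a1, a0 = np.polyfit(resid[:-1] ** 2, resid[1:] ** 2, 1)

# One-step-ahead forecast with a conditional 95% interval whose width
# depends on the most recent residual, as in the ARCH framework.
forecast = phi * x[-1]
sigma = np.sqrt(max(a0 + a1 * resid[-1] ** 2, 1e-9))
print(f"AR(1) phi={phi:.3f}, ARCH params a0={a0:.3f}, a1={a1:.3f}")
print(f"forecast {forecast:.2f} +/- {1.96 * sigma:.2f}")
```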

4.
This paper explores three theoretical approaches for estimating how correctly the accuracy figures of a gridded Digital Elevation Model (DEM) have been estimated, depending on the number of checkpoints involved in the assessment process. The widely used average-error statistic, the Mean Square Error (MSE), was selected for measuring DEM accuracy. The work focused on DEM uncertainty assessment using approximate confidence intervals, constructed both from classical methods that assume a normal distribution of the error and from a new method based on a non-parametric approach. The first two approaches studied, Chi-squared and asymptotic Student t, assume a normal distribution of the residuals; this is strictly required in the first case, whereas the second, owing to the asymptotic properties of the t distribution, can perform reasonably well even with slightly non-normal residuals if the sample size is large enough. The third approach developed in this article is a new method based on the theory of estimating functions, which can be considered much more general than the previous two. It rests on a non-parametric approach in which no particular distribution is assumed, so the strong assumption of distributional normality accepted in previous work and in the majority of current positional-accuracy standards can be avoided. The three approaches were tested using Monte Carlo simulation for several populations of residuals generated from originally sampled data. Those original grid DEMs, considered as ground data, were collected by digital photogrammetric methods from seven areas of differing morphology, employing a 2 by 2 m sampling interval. The original grid DEMs were subsampled to generate new lower-resolution DEMs, each of which was then interpolated back to its original resolution using two different procedures. Height differences between the original and interpolated grid DEMs were calculated to obtain residual populations. One interpolation procedure resulted in slightly non-normal residual populations, whereas the other produced very non-normal residuals with frequent outliers. The Monte Carlo simulations show that the estimating-function approach was the most robust and general of those tested. The other two approaches, especially the Chi-squared method, were clearly affected by the degree of normality of the residual population distribution, producing less reliable results than the estimating-functions approach. The latter method gives good results when applied to the different datasets, even for the more leptokurtic populations. In the worst cases, no more than 64–128 checkpoints were required to construct an estimate of the global error of the DEM with 95% confidence. The approach is therefore an important step towards saving time and money in the evaluation of DEM accuracy using a single average-error statistic. Nevertheless, the MSE is essentially a single global measure of deviations, and thus incapable of characterizing the spatial variation of errors over the interpolated surface.
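For concreteness, the two ends of the methodological spectrum discussed above can be illustrated compactly: a chi-squared interval for the MSE that assumes normally distributed, zero-mean elevation errors, and a non-parametric bootstrap interval that does not. The checkpoint residuals below are simulated, and the paper's estimating-function approach is not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Simulated DEM check-point errors (metres); deliberately heavy-tailed.
n = 100
residuals = rng.standard_t(df=4, size=n) * 0.4 + 0.05

mse = np.mean(residuals ** 2)

# (a) Classical interval: n*MSE/sigma^2 ~ chi-squared(n) under zero-mean normal errors.
lo_chi2 = n * mse / stats.chi2.ppf(0.975, df=n)
hi_chi2 = n * mse / stats.chi2.ppf(0.025, df=n)

# (b) Non-parametric bootstrap interval (percentile method).
boot = np.array([np.mean(rng.choice(residuals, size=n, replace=True) ** 2)
                 for _ in range(5000)])
lo_b, hi_b = np.percentile(boot, [2.5, 97.5])

print(f"MSE = {mse:.4f} m^2")
print(f"chi-squared 95% CI : [{lo_chi2:.4f}, {hi_chi2:.4f}]")
print(f"bootstrap   95% CI : [{lo_b:.4f}, {hi_b:.4f}]")
```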

5.
A new statistical approach to the alignment of time series
Summary. Much research in the Earth Sciences is centred on the search for similarities in waveforms or amongst sets of observations. For example, in seismology and palaeomagnetism, this matching of records is used to align several series of observations against one another or to compare one set of observations against a master series. This paper gives a general mathematical and statistical formulation of the problem of transforming, linearly or otherwise, the time-scale or depth-scale of one series of data relative to another. Existing approaches to this problem, involving visual matching or the use of correlation coefficients, are shown to have several serious deficiencies, and a new statistical procedure, using least-squares cubic splines, is presented. The new method provides not only a best estimate of the 'stretching function' defining the relative alignment of the two series of observations, but also a statement, by means of confidence regions, of the precision of this transformation. The new procedure is illustrated by analyses of artificially generated data and of palaeomagnetic observations from two cores from Lake Vuokonjarvi, Finland. It may be applied in a wide variety of situations, wherever the observations satisfy the general underlying mathematical model.

6.
Summary. In palaeomagnetic studies the analysis of multicomponent magnetizations has evolved from the eye-ball, orthogonal plot, and vector difference methods to the more elaborate computer-based methods such as principal component analysis (PCA), linearity spectrum analysis (LSA), and the recent package called LINEFIND. The errors involved in estimating a particular direction in a multicomponent system from a single specimen are fundamental to PCA, LSA, and LINEFIND, yet these errors are not used in estimating an overall direction from a number of observations of a particular component (other than in some acceptance or rejection criterion). The distribution of errors relates very simply to a Fisher distribution, and so these errors may be included fairly naturally in the overall analysis. In the absence of a rigorous theory to cover all situations, we consider here approximate methods for the use of these errors in estimating overall directions and cones of confidence. Some examples are presented to demonstrate the application of these methods.

7.
Mineral-potential mapping is the process of combining a set of input maps, each representing a distinct geo-scientific variable, to produce a single map which ranks areas according to their potential to host mineral deposits of a particular type. The maps are combined using a mapping function that must be either provided by an expert (knowledge-driven approach), or induced from sample data (data-driven approach). Current data-driven approaches using multilayer perceptrons (MLPs) to represent the mapping function have several inherent problems: they are highly sensitive to the selection of training data; they do not utilize the contextual information provided by nondeposit data; and there is no objective interpretation of the values output by the MLP. This paper presents a new approach by which MLPs can be trained to output values that can be interpreted strictly as representing posterior probabilities. Other advantages of the approach are that it utilizes all data in the construction of the model, and thus eliminates any dependence on a particular selection of training data. The technique is applied to mapping gold mineralization potential in the Castlemaine region of Victoria, Australia, and results are compared with a method based on estimating probability density functions.
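A minimal sketch of a data-driven mapping function whose outputs can be read as posterior probabilities, using scikit-learn's MLPClassifier (cross-entropy training with a logistic output yields probability estimates via predict_proba). The two synthetic evidence layers and labels below are placeholders, not the Castlemaine data, and this is not the specific training scheme proposed in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)

# Synthetic evidence layers for 5000 cells, e.g. distance to a structure and
# a geochemical score (placeholders for real geoscientific input maps).
X = rng.normal(size=(5000, 2))
# Synthetic "deposit present" labels with a known dependence on the evidence.
p_true = 1.0 / (1.0 + np.exp(-(1.5 * X[:, 0] - 2.0 * X[:, 1])))
y = rng.random(5000) < p_true

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)

# Posterior probability of mineralization for each cell of a prospect map.
post = clf.predict_proba(X)[:, 1]
print("cells ranked most prospective:", np.argsort(post)[-5:])
```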

8.

The temperature distribution at depth is a key variable when assessing the potential of a supercritical geothermal resource as well as a conventional geothermal resource. Data-driven estimation by a machine-learning approach is a promising way to estimate temperature distributions at depth in geothermal fields. In this study, we developed two methodologies—one based on Bayesian estimation and the other on neural networks—to estimate temperature distributions in geothermal fields. These methodologies can be used to supplement existing temperature logs, by estimating temperature distributions in unexplored regions of the subsurface, based on electrical resistivity data, observed geological/mineralogical boundaries, and microseismic observations. We evaluated the accuracy and characteristics of these methodologies using a numerical model of the Kakkonda geothermal field, Japan, where a temperature above 500 °C was observed below a depth of about 3.7 km. When using geological and geophysical knowledge as prior information for the machine learning methods, the results demonstrate that the approaches can provide subsurface temperature estimates that are consistent with the temperature distribution given by the numerical model. Using a numerical model as a benchmark helps to understand the characteristics of the machine learning approaches and may help to identify ways of improving these methods.
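The Bayesian flavour of the first methodology can be imitated, in highly simplified form, with a Gaussian-process regression on sparse temperature-log points conditioned on auxiliary covariates. The covariates (depth, resistivity), the data values, and the use of scikit-learn's GaussianProcessRegressor are all assumptions for illustration; this is not the authors' Kakkonda workflow.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Invented training data: depth (km) and log10(resistivity) at logged locations,
# with the measured temperature (deg C) as the target.
X_train = np.array([[0.5, 2.1], [1.0, 1.8], [1.5, 1.2], [2.0, 0.9],
                    [2.5, 0.7], [3.0, 0.6], [3.5, 0.5]])
y_train = np.array([80.0, 150.0, 230.0, 300.0, 360.0, 420.0, 480.0])

kernel = 1.0 * RBF(length_scale=[1.0, 1.0]) + WhiteKernel(noise_level=25.0)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

# Predict temperature (with a posterior standard deviation) in an unexplored
# region below the deepest log.
X_new = np.array([[3.7, 0.45], [4.0, 0.40]])
mean, std = gpr.predict(X_new, return_std=True)
for x, m, s in zip(X_new, mean, std):
    print(f"depth {x[0]:.1f} km -> {m:.0f} +/- {s:.0f} deg C")
```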


9.
Trend analysis of the annual runoff series at the Yichang station based on the GAMLSS model
Jiang Cong, Xiong Lihua. Acta Geographica Sinica, 2012, 67(11): 1505–1514
Under changing environmental conditions, analysing the trends of non-stationary hydrological series is of great importance. Conventional trend analysis methods generally detect only linear trends in the mean of a hydrological series. This paper introduces the GAMLSS model to analyse trends in the annual mean discharge series and the annual minimum monthly discharge series at the Yichang station for 1882–2009, extending trend analysis from the mean to other statistical parameters such as the standard deviation (or coefficient of variation) and the skewness coefficient. The results show a clear linear decreasing trend in the mean of the annual mean discharge series at Yichang, whereas the linear trend of the annual minimum monthly discharge series is not significant. On this basis, a GAMLSS model based on polynomial regression was built; the results indicate that the annual minimum monthly discharge series at Yichang is not stationary: its mean exhibits a non-linear trend, while its skewness coefficient shows a linear trend.
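A rough, hedged analogue of the GAMLSS idea (letting distribution parameters other than the mean vary with time): the sketch below fits a normal distribution whose mean is a polynomial in time and whose log standard deviation is linear in time, by maximum likelihood with SciPy. It is not the GAMLSS software, the distribution family is simplified, and the discharge series is synthetic rather than the Yichang record.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(5)

# Synthetic annual discharge series (1882-2009) with a drifting mean and spread.
years = np.arange(1882, 2010)
t = (years - years.mean()) / years.std()
flow = 14000 - 400 * t + 150 * t ** 2 + rng.normal(0, 900 * np.exp(0.2 * t))

def neg_loglik(theta):
    b0, b1, b2, g0, g1 = theta
    mu = b0 + b1 * t + b2 * t ** 2        # time-varying mean (polynomial)
    sigma = np.exp(g0 + g1 * t)           # time-varying standard deviation
    return -np.sum(norm.logpdf(flow, loc=mu, scale=sigma))

start = np.array([flow.mean(), 0.0, 0.0, np.log(flow.std()), 0.0])
fit = minimize(neg_loglik, start, method="Nelder-Mead",
               options={"maxiter": 20000})
b0, b1, b2, g0, g1 = fit.x
print(f"mean trend coefficients : {b1:.1f} (linear), {b2:.1f} (quadratic)")
print(f"log-sigma slope         : {g1:.3f}  (non-zero => changing variability)")
```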

10.
Methods for trend analysis and change diagnosis of hydro-meteorological series and their comparison
Increasingly frequent extreme weather and hydrological events pose a major threat to economic development and human safety. Analysing and predicting trends in hydro-meteorological series is a prerequisite for avoiding and controlling such destructive global environmental change, and remains one of the pressing scientific problems in the field. Building on modern mathematical and statistical theory, researchers in meteorology and hydrology have carried out extensive work on methods for trend testing and change-point identification of hydro-meteorological variables. Focusing on the widely used parametric statistics, non-parametric rank tests and wavelet analysis methods and their underlying principles, this paper classifies and reviews these methods, systematically summarizes the problems encountered in their application and the corresponding solutions, and compares the differences among their results using the annual mean temperature at the Tuole meteorological station in the Heihe River basin as an example. A systematic framework of theory and methods for trend analysis and change diagnosis of hydro-meteorological series is distilled, providing a reference for further improvement and application of these methods.
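Of the trend tests surveyed above, the non-parametric Mann–Kendall rank test is among the most commonly used; a compact reference implementation is sketched below under the usual assumption of serially independent data without ties. The annual temperature values are made up, not the Tuole record.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Return the Mann-Kendall S statistic, standardized Z, and two-sided p-value."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S = count of positive later-minus-earlier differences minus negative ones.
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    # Variance of S without a tie correction (adequate for continuous data).
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    return s, z, p

rng = np.random.default_rng(6)
temps = 2.0 + 0.02 * np.arange(50) + rng.normal(0, 0.4, 50)  # warming trend + noise
s, z, p = mann_kendall(temps)
print(f"S = {s:.0f}, Z = {z:.2f}, p = {p:.4f}")
```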

11.
Identification of steps and pools from stream longitudinal profile data
Field research on step–pool channels has largely focused on the dimensions and sequence of steps and pools and how these features vary with slope, grain sizes and other governing variables. Measurements by different investigators are frequently compared, yet no objective means of identifying steps and pools has been used. Automated surveying instruments record the morphology of streams in unprecedented detail, making it possible to identify steps and pools objectively, provided an appropriate classification procedure can be developed.
To achieve objective identification of steps and pools from long-profile survey data, we applied a series of scale-free geometric rules that include minimum step length (2.25% of bankfull width (Wb)), minimum pool length (10% of Wb), minimum residual depth (0.23% of Wb), minimum drop height (3.3% of Wb), and minimum step slope (10° greater than the mean slope). The rules perform as well as the mean response of 11 step–pool researchers who were asked to classify four long profiles, and the results correspond well with the channel morphologies identified during the field surveys from which the long profiles were generated. The method outperforms four other techniques that have been proposed. Sensitivity analysis shows that the method is most sensitive to the choice of minimum pool length and minimum drop height.
Detailed bed scans of a step–pool channel created in a flume reveal that a single long profile with a fixed sampling interval poorly characterizes the steps and pools; five or more long profiles spread across the channel are required if a fixed sampling interval is used, and the data suggest that survey points should be spaced more closely than the diameter of the step-forming material. A single long profile collected by a surveyor who chooses breaks in slope and representative survey points was found to characterize the mean bed profile adequately.
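A simplified, hypothetical rendering of how scale-free geometric rules of the kind listed above can be applied to a surveyed long profile. Only the drop-height and step-slope criteria are implemented, with thresholds expressed as fractions of bankfull width, so this illustrates the style of the approach rather than reproducing the published classification procedure; the survey points are invented.

```python
import numpy as np

def flag_candidate_steps(distance, elevation, bankfull_width, mean_slope_deg):
    """Flag survey segments that satisfy simple drop-height and step-slope rules."""
    min_drop = 0.033 * bankfull_width             # minimum drop height (3.3% of Wb)
    min_slope = np.radians(mean_slope_deg + 10)   # step slope 10 deg above mean slope
    steps = []
    for i in range(len(distance) - 1):
        dx = distance[i + 1] - distance[i]
        drop = elevation[i] - elevation[i + 1]
        if dx > 0 and drop >= min_drop and np.arctan(drop / dx) >= min_slope:
            steps.append((distance[i], distance[i + 1]))
    return steps

# Invented long-profile survey points (m) for a small step-pool channel.
dist = np.array([0.0, 1.0, 1.6, 2.4, 3.5, 4.1, 5.0, 6.2])
elev = np.array([10.0, 9.9, 9.5, 9.45, 9.0, 8.95, 8.9, 8.5])
print(flag_candidate_steps(dist, elev, bankfull_width=3.0, mean_slope_deg=5.0))
```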

12.
Summary. This paper describes the statistical techniques available to the experimenter in palaeomagnetic work. The theory of these methods is based on an assumed probability distribution of errors. It is shown that the mathematical requirements of this distribution are obeyed by the observations from rock samples which are known to possess a stable magnetization; observations on rocks with unstable magnetization however do not conform to it. A theoretical derivation is given for this probability distribution.
The problem of estimating the mean direction of magnetization of a geological formation has in recent years become a matter of the greatest geophysical interest since it is from such estimates that the position of the pole of the Earth in past geological ages is determined. This problem is largely one of the judicious choice of samples and a procedure is suggested whereby such estimates may be achieved with the greatest sample economy.
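For concreteness, the standard Fisher-statistics quantities that follow from the error distribution discussed above (resultant length R, precision parameter k, and the 95% cone of confidence α95) can be computed as in the sketch below; the specimen directions are invented.

```python
import numpy as np

def fisher_mean(declinations_deg, inclinations_deg, p=0.05):
    """Fisher mean direction with precision parameter k and cone alpha95 (degrees)."""
    dec = np.radians(declinations_deg)
    inc = np.radians(inclinations_deg)
    # Direction cosines of each specimen direction.
    x = np.cos(inc) * np.cos(dec)
    y = np.cos(inc) * np.sin(dec)
    z = np.sin(inc)
    n = len(dec)
    r = np.sqrt(x.sum() ** 2 + y.sum() ** 2 + z.sum() ** 2)    # resultant length
    mean_dec = np.degrees(np.arctan2(y.sum(), x.sum())) % 360
    mean_inc = np.degrees(np.arcsin(z.sum() / r))
    k = (n - 1) / (n - r)                                      # precision parameter
    cos_a95 = 1 - ((n - r) / r) * ((1 / p) ** (1 / (n - 1)) - 1)
    return mean_dec, mean_inc, k, np.degrees(np.arccos(cos_a95))

decs = [352, 5, 358, 10, 3, 348, 7, 355]     # invented declinations (deg)
incs = [58, 62, 55, 60, 63, 57, 61, 59]      # invented inclinations (deg)
print("dec, inc, k, a95 = %.1f, %.1f, %.1f, %.1f" % fisher_mean(decs, incs))
```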

13.
Lead-210 assay and dating are subject to several sources of error, including natural variation, the statistical nature of measuring radioactivity, and estimation of the supported fraction. These measurable errors are considered in calculating confidence intervals for 210Pb dates. Several sources of error, including the effect of blunders or misapplication of the mathematical model, are not included in the quantitative analysis. First-order error analysis and Monte Carlo simulation (of cores from Florida PIRLA lakes) are used as independent estimates of dating uncertainty. CRS-model dates average less than 1% older than Monte Carlo median dates, but the difference increases non-linearly with age to a maximum of 11% at 160 years. First-order errors increase exponentially with calculated CRS-model dates, with the largest 95% confidence interval in the bottommost datable section being 155 ± 90 years, and the smallest being 128 ± 8 years. Monte Carlo intervals also increase exponentially with age, but the largest 95% occurrence interval is 152 ± 44 years. Confidence intervals calculated by first-order methods and ranges of Monte Carlo dates agree fairly well until the 210Pb date is about 130 years old. Older dates are unreliable because of this divergence. Ninety-five per cent confidence intervals range from about 1–2 years at 10 years of age, 10–20 at 100 years, and 8–90 at 150 years old.
This is the third of a series of papers to be published by this journal which is a contribution of the Paleoecological Investigation of Recent Lake Acidification (PIRLA) project. Drs. D.F. Charles and D.R. Whitehead are guest editors for this series.
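A toy version, with made-up activities, of the CRS-model date calculation and of a Monte Carlo treatment of counting error; the analysis in the paper also propagates supported-fraction and other errors that are omitted here, so this is only a sketch of the idea.

```python
import numpy as np

rng = np.random.default_rng(7)
LAMBDA = np.log(2) / 22.26          # 210Pb decay constant (1/yr)

# Invented unsupported 210Pb inventory per layer (Bq/m^2), top of core first,
# and a 1-sigma counting error for each layer.
inv = np.array([120.0, 95.0, 70.0, 48.0, 30.0, 17.0, 9.0, 4.0])
sig = 0.08 * inv

def crs_ages(inventory):
    """CRS model: age at the top of each layer, t = (1/lambda) * ln(A(0)/A(z)),
    where A(z) is the unsupported inventory from that level downward."""
    below = np.cumsum(inventory[::-1])[::-1]    # inventory from each layer to the bottom
    return np.log(below[0] / below) / LAMBDA

ages = crs_ages(inv)

# Monte Carlo: perturb each layer's inventory by its counting error, recompute dates.
sims = np.array([crs_ages(np.clip(rng.normal(inv, sig), 1e-3, None))
                 for _ in range(2000)])
lo, hi = np.percentile(sims, [2.5, 97.5], axis=0)
for i, (a, l, h) in enumerate(zip(ages, lo, hi)):
    print(f"layer {i}: CRS age {a:6.1f} yr, 95% range {l:6.1f}-{h:6.1f} yr")
```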

14.
Low-background gamma counting: applications for 210Pb dating of sediments
Sediment cores from three lakes were dated with 210Pb using a constant rate of supply (CRS) model. We used low-background gamma counting to measure naturally occurring levels of 210Pb, 226Ra, and 137Cs in sediment samples because sample preparation is simple and non-destructive, 226Ra activity provides a direct measure of supported 210Pb activity for each sample analyzed, and 137Cs activity may provide an independent age marker for the 1962–1963 peak in atmospheric fallout of this radionuclide. In one core, supported 210Pb activity was estimated equally well from the 226Ra activity of each sampling interval or from the mean total 210Pb activity of constant-activity samples at depth. Supported 210Pb activity was constant with depth in this core. In a short freeze core, determining the 226Ra activity of every sample proved advantageous in estimating supported 210Pb activity because supported 210Pb activity could be estimated from 210Pb measurements only at the deepest sampling interval. Supported 210Pb activity estimated from 226Ra activity also yielded more precise estimates of highly variable sedimentation rates. In the third core, 226Ra activity exceeded 210Pb activity at the top of the core and varied 20-fold with depth. This high input of 226Ra in disequilibrium with 210Pb is attributed to recent erosion of radium-bearing materials in the drainage basin. These data invalidate the assumption that supported 210Pb activity is constant in sediment cores and can be estimated from the mean total 210Pb activity at depths where 210Pb activity is constant. We recommend using gamma counting or another independent assay of 226Ra to validate the assumption of constant supported 210Pb activity in sediment cores if there is reason to expect that 226Ra activity varies with depth.
This is the fourth of a series of papers to be published by this journal following the 20th anniversary of the first application of 210Pb dating of lake sediments. Dr. P.G. Appleby is guest editing this series.

15.
In this study, two sampling protocols, one set in a model-based and one in a design-based framework, were compared to evaluate their precision in estimating the C stock in the Ludikhola watershed, Nepal. The model-based approach exploits the spatial dependence in the sampled variable and may therefore be attractive relative to the design-based approach, as it reduces the substantial survey costs and effort required by the latter. Scales of spatial variability of C stock, which led to a grid resolution of 10,000 m², were determined using a reconnaissance variogram. The Akaike information criterion was used to select the best linear model of feature space for use in kriging with external drift (KED). Among the five covariates tested, distance, elevation, and aspect were statistically significant, with the best model of feature space accounting for 87.7% of the variability in C stock. An ANOVA established significant differences in mean C stocks (P = 0.00017). KED using the best model of feature space was found to be more precise, (9.89 ± 0.17) sqrt mg C/ha, than a pure ordinary kriging approach and than the design-based approach, (9.91 ± 0.8) sqrt mg C/ha. The confidence bounds of the two estimators showed that their confidence intervals will overlap 99.7% of the time, with each confidence interval falling within the 95% confidence bounds of the other. There is less uncertainty around the mean C stock estimated using the model-based approach than around that estimated using the design-based approach. The model-based approach is therefore a promising option for the REDD framework.

16.
17.
Our main purpose is to collect all magnetic intensity data observed in the vicinity of London and to adjust these to a common site (Greenwich) to complement the 400-year series of declination (D) and inclination (I) data of Malin & Bullard (1981). The present series is necessarily shorter, since a method for the measurement of intensity in absolute units was not devised until 1832. We have also supplemented the D and I series of Malin & Bullard with recently acquired data.
We have also made observations of D, I and total intensity (F) at a number of the sites, partly to bring the series up to date and partly to check on the site differences. With the increasing urbanization of London it is necessary to seek data from remoter sites. It is shown that the site differences change significantly with time, but that allowance can be made for this.
We present curves of our best estimates of the variation of D, I, F and the horizontal intensity (H) that define the complete geomagnetic vector at Greenwich for the interval 1820–1998. Frequency analysis shows little support for a 60-year line in the power spectrum. Within the uncertainty of their determinations, there is good continuity between archaeomagnetic intensity measures and the present results. The moving eddy hypothesis of Malin & Bullard is found to be untenable.

18.
Ordinary kriging (OK) has been used widely for ore-reserve estimation because of its superior characteristics in relation to other methods. One of these characteristics is related to the quantification of uncertainty by the kriging variance. However, the kriging variance does not recognize local data variability, which is an important issue in the process of ore-reserve estimation, when heterogeneous mineral deposits with richer and poorer parts are being evaluated. This paper proposes the use of interpolation variance as a reliable measure of local data variability and, therefore, adequate for ore-reserve classification. With a reliable measurement of data variability, local confidence can be calculated using the classical confidence interval around an estimate. Errors derived from local confidence then are used to assign classes according to a degree of certainty within some confidence level. Comparative tests using both OK variance and interpolation variance are carried out using exploration data from Chapada Copper Deposit, State of Goiás, Brazil. Results show that the interpolation variance provides a better way to measure uncertainty and consequently to classify reserves.
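A small numerical contrast, under assumed sample locations, grades and covariance parameters (not the Chapada data), between the kriging variance, which depends only on data geometry, and the interpolation variance s² = Σ wᵢ(zᵢ − z*)², which also responds to local data values as argued above. Two neighbourhoods with identical geometry but different grades give the same kriging variance yet very different interpolation variances.

```python
import numpy as np

def ok_weights(coords, target, sill=1.0, rng_a=300.0):
    """Ordinary kriging weights and kriging variance, exponential covariance model."""
    def cov(h):
        return sill * np.exp(-3.0 * h / rng_a)
    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    lhs = np.zeros((n + 1, n + 1))
    lhs[:n, :n] = cov(d)
    lhs[n, :n] = lhs[:n, n] = 1.0          # unbiasedness constraint
    rhs = np.append(cov(np.linalg.norm(coords - target, axis=1)), 1.0)
    sol = np.linalg.solve(lhs, rhs)
    w, mu = sol[:n], sol[n]
    krig_var = sill - rhs[:n] @ w - mu
    return w, krig_var

coords = np.array([[0.0, 50.0], [60.0, 0.0], [0.0, -55.0], [-65.0, 0.0]])
target = np.array([0.0, 0.0])
w, kv = ok_weights(coords, target)

for label, z in [("homogeneous zone", np.array([1.1, 1.0, 0.9, 1.0])),
                 ("erratic zone    ", np.array([0.2, 3.5, 0.4, 2.8]))]:
    z_star = w @ z
    interp_var = w @ (z - z_star) ** 2     # interpolation variance
    print(f"{label}: estimate {z_star:.2f}, kriging var {kv:.3f}, "
          f"interpolation var {interp_var:.3f}")
```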

19.
Fault plane solutions using relative amplitudes of P and pP
Summary. One way of finding the fault plane orientations of small shallow earthquakes is by the generation of theoretical P-wave seismograms to match those observed at several distant stations. Here, a technique for determining the uniqueness of fault plane solutions computed using the modelling method of Douglas et al. is described. Relative amplitudes of P and pP, and their polarities if unambiguous, are measured on the observed seismograms to be modelled, and appropriate confidence limits are assigned to each measurement. A systematic search is then made for all fault plane orientations which satisfy these observations.
Examples show that if P and pP are not severely contaminated by other arrivals, a well-defined and unique fault plane orientation can often be computed using as few as three stations well distributed in azimuth. Further, even if pP is not identifiable on a particular seismogram, an upper bound on its amplitude, deduced from the observed coda, still places a significantly greater constraint on the fault plane orientation than would be provided by a P onset polarity alone. Modelling takes account of all such information, and is able to further eliminate incompatible solutions (e.g. by the correct simulation of sP). It follows that if solutions can be found which satisfy many observed seismograms, this places high significance on the validity of the assumed double-couple source mechanism.
This relative amplitude technique is contrasted with the familiar first-motion method of fault plane determination, which requires many polarity readings whose reliabilities are difficult to quantify. It is also shown that fault plane orientations can be determined for earthquakes below the magnitude at which first-motion solutions become unreliable or impossible.

20.
This article introduces a quantitative methodology for analyzing contested map borders. The article applies the new analytical technique to a data set of thirty maps showing Bulgaria in ca. 800 CE, a disputed state and period in medieval historiography with relevance to modern national politics and territorial claims. Based on the data set, we generate a series of new maps that make explicit the fluid medieval boundaries and general disagreement among geographers and historiographers. Our analysis begins with a simple point-in-polygon procedure to create a majority map that depicts the points included within the borders of the Bulgarian polity in sixteen or more of the maps (>50 percent). The majority map is then combined with percentage maps, confidence interval map boundaries, and cluster maps. The confidence interval maps are created via a spatial bootstrapping procedure and measure the uncertainty in the majority map. The cluster maps are developed via a radial basis function and provide insight into the potential affectivity based on the cartographers' countries of origin. The final map reflects the general modern consensus of the borders of the Bulgarian polity around 800 CE. Besides its quantitative contribution to medieval and modern cartographic, historiographical, and political debates, this article has developed a widely applicable methodology for synthesizing map borders and territories in cases of cartographic disagreement.
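A toy version of the first step of the method: for each grid point, count how many digitized border polygons contain it and keep the points included by more than half of them. The small polygons below are invented stand-ins for the thirty digitized maps, and the spatial bootstrap and cluster stages are not shown.

```python
import numpy as np
from shapely.geometry import Point, Polygon

# Invented stand-ins for digitized border polygons from different source maps.
borders = [
    Polygon([(0, 0), (10, 0), (10, 8), (0, 8)]),
    Polygon([(1, 0), (11, 1), (9, 9), (0, 7)]),
    Polygon([(0, 1), (9, 0), (10, 7), (1, 9)]),
    Polygon([(2, 1), (10, 2), (8, 8), (1, 6)]),
    Polygon([(0, 0), (12, 0), (11, 9), (-1, 8)]),
]

# Regular evaluation grid over the mapped region.
xs, ys = np.meshgrid(np.linspace(-2, 13, 60), np.linspace(-2, 10, 48))
counts = np.zeros(xs.shape, dtype=int)
for poly in borders:
    for idx in np.ndindex(xs.shape):
        if poly.contains(Point(xs[idx], ys[idx])):
            counts[idx] += 1

# "Majority map": cells inside more than half of the digitized borders.
majority = counts > len(borders) / 2
print(f"grid cells in majority territory: {majority.sum()} of {majority.size}")
```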

