Similar Literature
20 similar documents found (search time: 15 ms)
1.
The chemical composition of the global ocean is governed by biological, chemical, and physical processes. These processes interact with each other so that the concentrations of carbon, oxygen, nitrogen (mainly from nitrate, nitrite, and ammonium), and phosphorus (mainly from phosphate) vary in constant proportions, referred to as the Redfield ratios. We construct here the generalized total least squares estimator of these ratios. The significance of our approach is twofold: it respects the hydrological characteristics of the studied areas, and it can be applied identically in any area where enough data are available. The tests applied to Atlantic Ocean data highlight a variability of the Redfield ratios, both with geographical location and with depth. This variability emphasizes the importance of local and accurate estimates of Redfield ratios.
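Total least squares differs from ordinary regression in treating all measured tracers as noisy. A minimal sketch of the idea via the SVD, using a synthetic nitrate–phosphate pair built around the canonical 16:1 ratio (the data, noise levels, and two-variable setup are illustrative assumptions, not the paper's generalized multi-tracer estimator):

```python
import numpy as np

def tls_slope(x, y):
    """Total least squares slope of y vs x via the SVD:
    errors in both variables are treated symmetrically."""
    A = np.column_stack([x - x.mean(), y - y.mean()])
    # Right singular vector of the smallest singular value = line normal
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    a, b = Vt[-1]
    return -a / b          # slope dy/dx of the orthogonal-distance fit

# Synthetic nitrate vs phosphate data with the canonical 16:1 ratio
rng = np.random.default_rng(0)
phosphate = rng.uniform(0.5, 2.5, 200)
nitrate = 16.0 * phosphate + rng.normal(0, 0.5, 200)
phosphate_obs = phosphate + rng.normal(0, 0.02, 200)
print(round(tls_slope(phosphate_obs, nitrate), 1))
```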

2.
Stochastic Environmental Research and Risk Assessment - A Bayesian importance sampling method is developed to efficiently and accurately calibrate the parameters of non-linear and non-Gaussian...

3.
The application of the weighted and unweighted least-squares methods to the analysis of the individual concentrations of short-lived radon daughters in the open atmosphere, where, unlike in mines, the levels are low, is discussed. The method can be optimized to give minimum counting errors by proper choice of collection times and counting periods. By analysing a large number of samples, it is shown that the least-squares method gives better accuracy than the simultaneous-equations method generally in use. The calculation can be simplified by using the unweighted least-squares analysis without significant loss of accuracy. The levels of RaA, RaB and RaC in surface air at Bombay during the period January–June 1975, calculated using the least-squares method, are presented. The RaC/RaB activity ratios are shown to have an average value of around 0.6–0.9 even at 18 m height. The RaB/RaA activity ratios, however, show a much larger scatter than the RaC/RaB values. The possible reasons for these differences are discussed. The application of the least-squares method to the analysis of ThB and ThC is also described.
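For counting data, the weighted least-squares idea is to down-weight intervals with large Poisson variance, w_i ≈ 1/counts_i. A minimal sketch; the design matrix here is a hypothetical stand-in, not the actual radon decay-scheme coefficients:

```python
import numpy as np

def weighted_lstsq(A, b, w):
    """Weighted least squares: minimize sum_i w_i * (A x - b)_i^2.
    For counting data, w_i = 1/var_i with Poisson var_i ~ counts_i."""
    sw = np.sqrt(w)
    x, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
    return x

# Hypothetical design: counts in 3 intervals from 2 decaying species
A = np.array([[0.8, 0.3],
              [0.5, 0.6],
              [0.2, 0.9]])
true = np.array([100.0, 40.0])
counts = A @ true                      # noise-free synthetic counts
x = weighted_lstsq(A, counts, 1.0 / counts)
print(np.round(x, 6))
```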

4.
Episodic tremor and slip (ETS) events, a type of slow earthquake with periodic behaviour, are expected to be detectable with harmonic estimation techniques. The principal goal of this paper is the detection of these earthquakes using least squares harmonic estimation (LS-HE). To this end, the raw time series of 38 permanent GPS stations of the Pacific Northwest Geodetic Array are analyzed. Independent techniques have previously confirmed the occurrence of such events at these stations; the present study instead evaluates the spectrum of each de-trended time series using the LS-HE method. At each station, the period of the detected harmonic with the maximum power spectrum matches the average period previously reported for these events. According to the results, the recurrence interval of these events ranges from 9 months to 3 years. In sum, the study confirms that the method is efficient for investigating the occurrence of ETS events when the GPS time series is sufficiently long.
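Least-squares harmonic estimation amounts to fitting a sine/cosine pair at each trial period and ranking the periods by the drop in residual energy. A minimal sketch on a synthetic daily series with a hidden 420-day term (the station data, de-trending, and period grid are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def lshe_power(t, y, periods):
    """Least-squares harmonic estimation: for each trial period, fit a
    cosine/sine pair by least squares; power = drop in residual SSE."""
    y = y - y.mean()
    sse0 = np.sum(y ** 2)
    power = []
    for P in periods:
        w = 2 * np.pi / P
        A = np.column_stack([np.cos(w * t), np.sin(w * t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        power.append(sse0 - np.sum((y - A @ coef) ** 2))
    return np.array(power)

# Daily GPS-like positions with a hidden 420-day periodic signal
t = np.arange(0, 3000.0)
y = 2.0 * np.sin(2 * np.pi * t / 420) \
    + np.random.default_rng(1).normal(0, 1, t.size)
periods = np.arange(100, 1000, 10.0)
best = periods[np.argmax(lshe_power(t, y, periods))]
print(best)
```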

5.
Besides signal-to-noise ratio and effective-wavelet distortion, robustness is an important measure of a filtering method's performance: it characterizes the ability of a filtering system to cope with outliers, and the influence function is the standard tool for evaluating it. Support vector machines have been applied successfully to signal and image filtering, and the Ricker-wavelet kernel is particularly well suited to seismic exploration signals. By examining the influence function of Ricker-wavelet-kernel least squares support vector regression (LS-SVR) filtering, it can be shown that the method's robustness is poor; this paper improves it by weighting. Extensive numerical experiments yield an improved weight function that gives the weighted method near-ideal robustness. Applying the resulting weighted Ricker-wavelet LS-SVR to noisy synthetic and field seismic records gives good results in both cases. Starting from the unbounded influence function of LS-SVR signal-processing systems with a squared loss, the proposed weight function can be applied effectively to any processing task with a similar loss function, such as denoising, signal detection, resolution enhancement, and prediction.

6.
Reconstruction of irregularly sampled seismic data is an important problem in seismic data analysis and processing. This paper presents a least-squares inversion method for seismic data reconstruction based on the non-uniform fast Fourier transform (NUFFT). A regularizing power-spectrum constraint is introduced into the least-squares interpolation equations; the constraint is updated adaptively at each iteration via the NUFFT and a modified periodogram, so that the spectrum of the interpolated data approaches the true spectrum. A preconditioned conjugate-gradient solver guarantees stability and fast convergence. Interpolation tests on synthetic models and field seismic data show that the method removes spatial aliasing, is fast, and interpolates well, making it of practical value.
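The least-squares inversion principle behind such reconstruction can be sketched without the NUFFT machinery: fit band-limited Fourier coefficients to the irregular samples with a damped normal-equations solve, then evaluate on the regular grid. This is a toy stand-in; the paper's adaptive spectrum constraint and preconditioned conjugate-gradient solver are not reproduced:

```python
import numpy as np

def design(t, T, nfreq):
    """Real Fourier design matrix: DC plus nfreq harmonic pairs."""
    cols = [np.ones_like(t)]
    for k in range(1, nfreq + 1):
        w = 2 * np.pi * k / T
        cols += [np.cos(w * t), np.sin(w * t)]
    return np.column_stack(cols)

def ls_band_limited_interp(t_obs, y_obs, t_out, nfreq, damp=1e-6):
    """Damped least-squares fit of Fourier coefficients to irregular
    samples, evaluated on the regular output grid."""
    T = t_out[-1] - t_out[0] + (t_out[1] - t_out[0])
    A = design(t_obs, T, nfreq)
    coef = np.linalg.solve(A.T @ A + damp * np.eye(A.shape[1]), A.T @ y_obs)
    return design(t_out, T, nfreq) @ coef

# Regular grid with traces randomly removed (irregular sampling)
t_full = np.arange(0, 128.0)
signal = np.sin(2 * np.pi * t_full / 32) + 0.5 * np.cos(2 * np.pi * t_full / 16)
rng = np.random.default_rng(2)
keep = np.sort(rng.choice(t_full.size, 80, replace=False))
rec = ls_band_limited_interp(t_full[keep], signal[keep], t_full, nfreq=10)
print(round(float(np.max(np.abs(rec - signal))), 3))
```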

7.
Surface wave methods are becoming increasingly popular in many geotechnical applications and in earthquake seismology because of their noninvasive character. Inverting surface wave dispersion curves is a crucial step in most surface wave methods. Many inversion methods have been applied to dispersion curve inversion, including linearized and nonlinearized methods. In this study, a hybrid inversion method combining damped least squares (DLS) with very fast simulated annealing (VFSA) is developed for multi-mode Rayleigh wave dispersion curve inversion. Both synthetic and in situ field data were used to verify the validity of the proposed method. The results show that the proposed method is superior to the conventional VFSA method at locating the global minimum, especially when the parameter search space is close to the true parameter values. The advantage of the new method is that it retains both the merits of VFSA for global search and of DLS for local search: at high temperatures the global search dominates the runs, while at low temperatures the local search dominates, so the method can almost directly approach the actual model.

8.
Multi-station geomagnetic diurnal-variation base-value reduction using least-squares fitting
Multi-station diurnal-variation correction is the key technique for correcting the geomagnetic diurnal variation in large-area magnetic surveys of distant sea regions. In multi-station correction, the diurnal base values of the secondary stations must be reduced to the main station in order to lessen the effect of horizontal differences in the magnetic field. Based on the spatio-temporal characteristics of the geomagnetic diurnal variation, this paper introduces least-squares fitting into multi-station base-value reduction and validates the method with synchronous multi-station observations. The results show that when the diurnal variations at the main and secondary stations are similar, least-squares fitting performs well and places only modest demands on the length of the synchronous observation window; when the diurnal variations differ markedly, however, the traditional synchronous-comparison method should be used.

9.
A robust metric of data misfit such as the ℓ1-norm is required for geophysical parameter estimation when the data are contaminated by erratic noise. Recently, the iteratively re-weighted and refined least-squares algorithm was introduced for efficient solution of geophysical inverse problems in the presence of additive Gaussian noise in the data. We extend the algorithm in two practically important directions to make it applicable to data with non-Gaussian noise and to make its regularisation parameter tuning more efficient and automatic. The regularisation parameter in the iteratively re-weighted and refined least-squares algorithm varies with iteration, allowing the efficient solution of constrained problems. A technique based on the secant method for root finding is proposed to find a solution that satisfies the constraint, either fitting a target misfit (if a bound on the noise is available) or having a target size (if a bound on the solution is available). This technique leads to an automatic update of the regularisation parameter at every iteration. We further propose a simple and efficient scheme that tunes the regularisation parameter without requiring target bounds, which is of great importance for field-data inversion where there is no information about the size of the noise or the solution. Numerical examples from non-stationary seismic deconvolution and velocity-stack inversion show that the proposed algorithm is efficient, stable, and robust, and outperforms conventional and state-of-the-art methods.
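The core of an iteratively re-weighted least-squares approximation to the ℓ1 misfit is the residual weight w_i = 1/|r_i|, which down-weights erratic data. A minimal sketch on a line fit with gross outliers (the refinement step and automatic regularisation tuning described in the abstract are not reproduced):

```python
import numpy as np

def irls_l1(A, b, n_iter=50, eps=1e-8):
    """IRLS approximation of min ||A x - b||_1: re-solve a weighted
    least-squares problem with w_i = 1 / max(|r_i|, eps)."""
    x, *_ = np.linalg.lstsq(A, b, rcond=None)       # l2 start
    for _ in range(n_iter):
        r = b - A @ x
        w = 1.0 / np.maximum(np.abs(r), eps)        # eps floor avoids blow-up
        sw = np.sqrt(w)
        x, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
    return x

# Line fit with two gross outliers ("erratic noise")
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 50)
A = np.column_stack([t, np.ones_like(t)])
b = 2.0 * t + 1.0 + rng.normal(0, 0.01, 50)
b[[5, 30]] += 10.0
x = irls_l1(A, b)
print(np.round(x, 2))
```

The ℓ1 solution recovers the slope and intercept despite the outliers, where an ordinary ℓ2 fit would be pulled badly off.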

10.
To improve the accuracy and continuity of on-site instrumental seismic-intensity prediction, a continuous PGV prediction model for earthquake early warning is developed. The prediction target is the parameter used in the Chinese instrumental seismic-intensity standard: PGV, the peak of the 0.1–10 Hz band-pass-filtered three-component vector-sum velocity. Using strong-motion records from Japan's K-NET and KiK-net networks at 1–10 s after the P-wave trigger, and the least-squares support vector machine (LSSVM), a machine-learning method, an LSSVM-based PGV prediction model (LSSVM-PGV) is constructed with seven characteristic parameters as input. The results show that the prediction-error standard deviations on the training and test sets converge to similar values, indicating good generalization. At 3 s after the P-wave trigger, predicted PGV already matches observed PGV along the 1:1 line overall; as the time window lengthens, the error standard deviation decreases markedly and converges at about 6 s after the trigger, demonstrating accurate and continuous prediction. Compared with the commonly used Pd-PGV model at the same 3 s after the trigger, the LSSVM-PGV model has a clearly smaller prediction-error standard deviation, noticeably alleviates the overestimation of small values and underestimation of large values, and improves overall accuracy. A case study of the Kumamoto earthquake sequence shows that for events below M 6.5 the model predicts PGV consistent with observations along the 1:1 line within at most 3 s after the P-wave trigger; for the M 7.3 mainshock, the 3 s prediction is somewhat underestimated owing to the complexity of the rupture process, but by 6 s the predicted PGV matches the observed PGV along the 1:1 line, and the two remain consistent up to 10 s. The LSSVM-PGV model is therefore suitable for on-site instrumental seismic-intensity prediction in earthquake early warning.
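A least-squares support vector machine reduces training to a single linear solve of its KKT system. A generic regression sketch with an RBF kernel on a toy 1-D function; the seven seismic input features, training data, and hyper-parameters of the LSSVM-PGV model are not reproduced:

```python
import numpy as np

def lssvm_fit(X, y, gamma=100.0, sigma=1.0):
    """LSSVM regression: solve the KKT linear system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y] with an RBF kernel."""
    n = X.shape[0]
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                     # bias b, dual weights alpha

def lssvm_predict(Xtr, b, alpha, Xte, sigma=1.0):
    d2 = np.sum((Xte[:, None, :] - Xtr[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2)) @ alpha + b

# Fit a smooth 1-D target function
X = np.linspace(-3, 3, 60)[:, None]
y = np.sin(X[:, 0])
b, alpha = lssvm_fit(X, y)
pred = lssvm_predict(X, b, alpha, X)
print(round(float(np.max(np.abs(pred - y))), 3))
```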

11.
A Bootstrap method for source-parameter inversion and precision assessment

In source-parameter inversion, the relationship between surface deformation and source parameters is complex, multi-dimensional, and nonlinear, so the traditional precision-assessment approach based on Taylor-series expansion may not be applicable. This paper introduces the Bootstrap method into nonlinear source-parameter inversion and precision assessment. Bootstrap resampling of GPS surface-deformation observations yields bootstrap samples, a genetic algorithm (GA) searches for the source parameters, and a complete Bootstrap workflow for source-parameter precision assessment is designed. The method is applied to six simulated earthquakes and to the Amatrice and Visso earthquakes, inverting the source parameters and obtaining their confidence intervals and root-mean-square errors, which are compared with the Jackknife and Monte Carlo methods. The experiments show that the proposed approach yields more reliable confidence intervals and more accurate precision information than the Jackknife method, verifying the validity and reliability of the Bootstrap method for source-parameter precision assessment and providing a new resampling approach for this line of research.
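The percentile-bootstrap workflow — resample the observations, re-run the estimator, read confidence limits off the empirical quantiles — can be sketched with a simple slope estimator standing in for the GA-based source-parameter inversion (all data here are synthetic):

```python
import numpy as np

def bootstrap_ci(x, y, estimator, n_boot=2000, alpha=0.05, seed=4):
    """Percentile bootstrap: resample (x, y) pairs with replacement,
    re-run the estimator, and take empirical quantiles as the CI."""
    rng = np.random.default_rng(seed)
    n = x.size
    stats = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)            # resample with replacement
        stats[i] = estimator(x[idx], y[idx])
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

# Toy 'inversion': estimate the slope of a noisy linear relationship
slope = lambda x, y: np.polyfit(x, y, 1)[0]
rng = np.random.default_rng(5)
x = np.linspace(0, 10, 100)
y = 3.0 * x + rng.normal(0, 1.0, 100)
lo, hi = bootstrap_ci(x, y, slope)
print(round(float(lo), 3), round(float(hi), 3))
```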


12.
A harmonic-analysis model is established for determining the Earth's gravity field from the Tzz component and the Txx+Tyy and Tzz-Txx-Tyy combined components of the disturbing gravity-gradient tensor. Series expansions of the auto- and cross-covariance functions of the three diagonal tensor components are derived, along with practical formulas for the covariance functions between single or combined components and the geopotential coefficients, and the basic least-squares collocation formulas for recovering the gravity-field model from single and combined components are given. The results show that least-squares collocation has a degree of robustness: as the observation errors grow, the maximum resolvable degree of the recovered field model decreases and its accuracy declines. The Tzz-Txx-Tyy combination recovers the field model most accurately, followed by the Tzz component, with the Txx+Tyy combination performing worst.

13.
Spatial data are commonly minimal and may have been collected in the process of confirming the profitability of a mining venture or investigating a contaminated site. In such situations, it is common to have measurements preferentially taken in the most critical areas (sweet spots, allegedly contaminated areas), thus conditionally biasing the sample. While preferential sampling makes good practical sense, its direct use leads to distorted sample moments and percentiles. Spatial clustering is a problem that has been identified in the past and solved with approaches ranging from ad hoc solutions to highly elaborate mathematical formulations, covering mostly the effect of clustering on the cumulative frequency distribution. The method proposed here is a form of resampling that is free of special assumptions, does not use weights to adjust the measurements, does not find solutions by successive approximation, and provides variability in the results. The new method is illustrated with a synthetic dataset with an exponential semivariogram, purposely generated to follow a lognormal distribution. The lognormal distribution is both difficult to work with and typical of many attributes of practical interest. Testing of the new solution shows that sample subsets derived from resampled datasets can closely approximate the true probability distribution and the semivariogram, clearly outperforming the original preferentially sampled data.

14.
The stochastic continuum (SC) representation is one common approach for simulating the effects of fracture heterogeneity in groundwater flow and transport models. These SC reservoir models are generally developed using geostatistical methods (e.g., kriging or sequential simulation) that rely on the model semivariogram to describe the spatial variability of each continuum. Although a number of strategies for sampling spatial distributions have been published in the literature, little attention has been paid to the optimization of sampling in resource- or access-limited environments. Here we present a strategy for estimating the minimum sample spacing needed to define the spatial distribution of fractures on a vertical outcrop of basalt, located in the Box Canyon, east Snake River Plain, Idaho. We used fracture maps of similar basalts from the published literature to test experimentally the effects of different sample spacings on the resulting semivariogram model. Our final field sampling strategy was based on the lowest sample density that reproduced the semivariogram of the exhaustively sampled fracture map. Application of the derived sampling strategy to an outcrop in our field area gave excellent results, and illustrates the utility of this type of sample optimization. The method will work for developing a sampling plan for any intensive property, provided prior information for a similar domain is available; for example, fracture maps or ortho-rectified photographs from analogous rock types could be used to plan for sampling of a fractured rock outcrop.
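The experiment of thinning an exhaustively sampled map and comparing semivariograms can be sketched in one dimension; the synthetic random-walk profile below is an illustrative stand-in for a fracture map, not the paper's data:

```python
import numpy as np

def semivariogram(z, max_lag):
    """Empirical 1-D semivariogram on a regular grid:
    gamma(h) = 0.5 * mean((z[i+h] - z[i])^2), h = 1..max_lag."""
    return np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2)
                     for h in range(1, max_lag + 1)])

# Exhaustive synthetic profile vs a version thinned to every 4th sample
rng = np.random.default_rng(11)
z = np.cumsum(rng.normal(0, 1, 4000))     # spatially correlated field
full = semivariogram(z, 20)
coarse = semivariogram(z[::4], 5)
# Lags 4, 8 of the full profile correspond to lags 1, 2 of the thinned one
print(np.round(full[[3, 7]] / coarse[[0, 1]], 2))
```

If the ratios stay near 1, the coarser spacing still reproduces the semivariogram at the lags it can resolve, which is the criterion the abstract uses to pick the lowest workable sample density.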

15.
16.
The problems of calibrating soil hydraulic and transport parameters are well documented, particularly when data are limited. Programs such as CXTFIT, UCODE and PEST, based on well-established principles of statistical inference, will often provide good fits to limited observations, giving the impression that a useful model of a particular soil system has been obtained. This may be the case, but such an approach may grossly underestimate the uncertainties associated with future predictions of the system and resulting dependent variables. In this paper, this is illustrated by an application of CXTFIT within the generalised likelihood uncertainty estimation (GLUE) approach to model calibration, which is based on a quite different philosophy. CXTFIT gives very good fits to the observed breakthrough curves for several different model formulations, resulting in very small parameter uncertainty estimates. The application of GLUE, however, shows that much wider ranges of parameter values can provide acceptable fits to the data. The wider range of potential outcomes should be more robust in model prediction, especially when used to constrain field-scale models.
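A minimal GLUE loop: sample parameter sets from a prior, score each with a likelihood measure (a Nash–Sutcliffe efficiency here), and keep the "behavioural" sets above a threshold. The one-parameter exponential model, the prior, and the threshold are illustrative stand-ins for a CXTFIT transport model:

```python
import numpy as np

def glue(model, t, obs, prior_sampler, n=5000, threshold=0.7, seed=6):
    """Minimal GLUE: Monte Carlo sample parameter sets, score each with
    a Nash-Sutcliffe efficiency, keep 'behavioural' sets above threshold."""
    rng = np.random.default_rng(seed)
    denom = np.sum((obs - obs.mean()) ** 2)
    kept = []
    for _ in range(n):
        p = prior_sampler(rng)
        ns = 1.0 - np.sum((obs - model(t, p)) ** 2) / denom
        if ns > threshold:
            kept.append(p)
    return np.array(kept)

# Toy one-parameter 'transport' model: exponential decay with rate k
model = lambda t, k: np.exp(-k * t)
t = np.linspace(0.0, 5.0, 40)
obs = model(t, 0.8) + np.random.default_rng(7).normal(0, 0.02, t.size)
behavioural = glue(model, t, obs, lambda rng: rng.uniform(0.1, 2.0))
print(behavioural.size, round(float(behavioural.min()), 2),
      round(float(behavioural.max()), 2))
```

The spread of the behavioural set around the true value k = 0.8 is the GLUE point: many parameter values fit acceptably, not just the single best-fit optimum.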

17.
This study compared 2 standardized protocols to monitor subtidal rocky shores. We tested 2 sampling methods (temporal unit and quadrat) to assess the efficiency of extracting biota parameters (diversity, abundance, and biomass) of macroalgae, Mollusca, and Porifera with respect to time–cost and the number of sampling units. Species richness and occurrence of rocky subtidal habitats were better described by visual censuses than by quadrats, while the 2 methods provided the same estimated richness. The combination of a visual census and a quadrat was the most efficient way to meet these requirements. A minimum of 5 sampling units per discrete area is recommended for accurately describing habitats. We then tested the sensitivity of the proposed protocol on the Bizeux Islet to study the variations of community structures with depth and station. Based on the results, recommendations for monitoring purposes are proposed in line with European directives.

18.
The effect of the sample size on prediction quality is well understood. Recently, studies have assessed this relationship using near-continuous water quality samples. However, such sampling is rarely possible because of financial constraints, and therefore many studies have relied on simulation-based methods utilizing more affordable surrogates. A limitation of simulation-based methods is the requirement of a good surrogate relationship, which is often not present. Therefore, catchment managers require a direct method to estimate the effect of sample size on the mean using historical water quality data. One measure of prediction quality is the precision with which a mean is estimated; this is the focus of this work. By characterizing the effect of sample size on the precision of the mean, it is possible for catchment managers to adjust the sample size in relation to both cost and precision. Historical data are often sparse and generally collected under several different sampling schemes, all without inclusion probabilities, so an approach is needed to obtain unbiased estimates of the variance of the mean in a model-based framework. Using total phosphorus data from 17 sub-catchments in southeastern Australia, we assessed the ability of a model-based approach to estimate the effect of sample size on the precision of event and base-flow mean concentrations. The results showed that, for estimating the annual base-flow mean concentration, little gain in precision was achieved above 12 observations per year. Increasing the sample size improved event-based estimates, but the inclusion of more than 12 samples per event did not greatly reduce the event-mean-concentration uncertainties. The precision of the base-flow estimates was most correlated with percentage urban cover, whereas the precision of the event-mean estimates was most correlated with catchment size. The method proposed in this work could be readily applied to other water quality variables and other monitoring sites. Copyright © 2014 John Wiley & Sons, Ltd.
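The effect of sample size on the precision of a mean can be estimated directly from a historical record by repeated subsampling. The record below is a synthetic log-normal stand-in for a total-phosphorus series, not the paper's data or its model-based estimator:

```python
import numpy as np

def mean_precision(values, sizes, n_rep=2000, seed=8):
    """Empirical precision of the mean for each sample size: repeatedly
    subsample the record without replacement and take the standard
    deviation of the resulting means."""
    rng = np.random.default_rng(seed)
    prec = {}
    for n in sizes:
        means = [rng.choice(values, n, replace=False).mean()
                 for _ in range(n_rep)]
        prec[n] = float(np.std(means))
    return prec

# Synthetic log-normal record: a year of daily 'concentrations' (skewed)
conc = np.random.default_rng(9).lognormal(mean=-1.0, sigma=0.6, size=365)
prec = mean_precision(conc, [4, 12, 52])
print({n: round(s, 4) for n, s in prec.items()})
```

Plotting precision against n makes the diminishing return visible, which is how a manager can trade sampling cost against precision.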

19.
Because of their fast response to hydrological events, small catchments show strong quantitative and qualitative variations in their water runoff. Fluxes of solutes or suspended material can be estimated from water samples only if an appropriate sampling scheme is used. We used continuous in-stream measurements of the electrical conductivity of the runoff in a small subalpine catchment (64 ha) in central Switzerland and in a very small (0.16 ha) subcatchment. Different sampling and flux-integration methods were simulated for weekly water analyses. Fluxes calculated directly from grab samples are strongly biased towards the high conductivities observed at low discharges. Several regressions and weighted averages have been proposed to correct for this bias. Their accuracy and precision are better, but none of these integration methods gives a consistently low bias and a low residual error. Different methods of peak sampling were also tested. Like the regressions, they produce substantial residual errors and their bias is variable. This variability (both between methods and between catchments) does not allow one to tell a priori which sampling scheme and integration method would be more accurate. Only discharge-proportional sampling methods were found to give essentially unbiased flux estimates. Programmed samplers with a fraction collector allow for proportional pooling and are appropriate for short-term studies. For long-term monitoring or experiments, sampling at a frequency proportional to the discharge appears to be the best way to obtain accurate and precise flux estimates. Copyright © 2006 John Wiley & Sons, Ltd.
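The grab-sample bias and the discharge-proportional remedy can be sketched with a synthetic dilution relationship (concentration falling as discharge rises); all numbers here are illustrative, not the Swiss catchment data:

```python
import numpy as np

# Synthetic stream: concentration is diluted as discharge rises
rng = np.random.default_rng(10)
q = 1.0 + 4.0 * rng.random(10000) ** 4       # skewed discharge series
c = 2.0 / q + 0.1                             # dilution relationship
true_flux = np.mean(c * q)                    # mean solute flux per time step

# Sparse regular grab samples, averaged then multiplied by mean discharge:
# biased towards the high concentrations seen at low flow
grab = np.mean(c[::200]) * np.mean(q)

# Discharge-proportional sampling: sample times with probability ~ q,
# so the sampled mean concentration is flow-weighted
idx = rng.choice(q.size, 20, p=q / q.sum())
prop = np.mean(c[idx]) * np.mean(q)           # essentially unbiased
print(round(true_flux, 3), round(grab, 3), round(prop, 3))
```

The flow-weighted mean concentration times the mean discharge recovers E[c·q], which is why only the discharge-proportional scheme is essentially unbiased.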

20.
This paper deals with the investigation of the Mars subsurface by means of data collected by the Mars Advanced Radar for Subsurface and Ionosphere Sounding, operating at frequencies of a few megahertz. A data-processing strategy is presented that combines a simple inversion model with an accurate procedure for data selection. This strategy mitigates the theoretical and practical difficulties of the inverse problem that arise from inaccurate knowledge of the parameters describing both the scenario under investigation and the radiated electromagnetic field impinging on the Mars surface. The results presented in this paper show that it is possible to reliably retrieve the electromagnetic properties of deeper structures if this strategy is applied carefully. An example is given here, where the analysis of the data collected on Gemina Lingula, a region of the north polar layered deposits, allowed us to retrieve permittivity values for the basal unit in agreement with those usually associated with terrestrial basaltic rocks.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号