Similar Articles
20 similar articles found (search time: 15 ms)
1.
Compositional Bayesian indicator estimation   (Total citations: 1; self-citations: 1; citations by others: 0)
Indicator kriging is widely used for mapping spatial binary variables and for estimating the global and local spatial distributions of variables in the geosciences. For continuous random variables, indicator kriging gives an estimate of the cumulative distribution function, for a given threshold, which is then an estimate of a probability. Like any other kriging procedure, indicator kriging provides an estimation variance that, although not often used in applications, should be taken into account because it assesses the uncertainty of the estimate. An alternative approach to indicator estimation is proposed in this paper, in which the complete probability density function of the indicator estimate is evaluated. The procedure is described in a Bayesian framework, using a multivariate Gaussian likelihood and a prior distribution, which are combined according to Bayes' theorem to obtain a posterior distribution for the indicator estimate. From this posterior distribution, point estimates, interval estimates and uncertainty measures can be obtained. Among the point estimates, the median of the posterior distribution is the maximum entropy estimate, because there is a fifty-fifty chance of the unknown value being larger or smaller than the median; that is, there is maximum uncertainty in the choice between the two alternatives. In this sense, the median is an indicator estimator, alternative to the kriging estimator, that carries its own uncertainty. On the other hand, the mode of the posterior distribution, assuming a uniform prior, coincides with the simple kriging estimator. Additionally, because the indicator estimate can be considered as a two-part composition whose domain of definition is the simplex, the method is extended to compositional Bayesian indicator estimation.
Bayesian indicator estimation and compositional Bayesian indicator estimation are illustrated with an environmental case study in which the probability that the content of a geochemical element in soil exceeds a particular threshold is of interest. The computer code and its user guide are in the public domain and freely available.
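As a minimal illustration of carrying a full posterior for an indicator estimate, the sketch below uses the conjugate Beta-Bernoulli analogue rather than the paper's multivariate Gaussian likelihood: with a uniform prior, the posterior mode reduces to the sample proportion (mirroring the coincidence with the simple kriging estimator noted above), while the posterior median gives the maximum-entropy point estimate. All names and numbers are illustrative, not the paper's.

```python
def beta_posterior_mode(successes, trials, a=1.0, b=1.0):
    """Mode of Beta(a+successes, b+trials-successes); with a=b=1 (uniform
    prior) this reduces to the sample proportion."""
    post_a = a + successes
    post_b = b + (trials - successes)
    return (post_a - 1) / (post_a + post_b - 2)

def beta_posterior_median(successes, trials, a=1.0, b=1.0, grid=100001):
    """Median via numerical inversion of the Beta CDF on a grid (no SciPy)."""
    post_a, post_b = a + successes, b + trials - successes
    xs = [i / (grid - 1) for i in range(1, grid - 1)]
    pdf = [x ** (post_a - 1) * (1 - x) ** (post_b - 1) for x in xs]
    total = sum(pdf)
    acc = 0.0
    for x, p in zip(xs, pdf):
        acc += p
        if acc >= 0.5 * total:
            return x
    return xs[-1]

# 7 of 10 hypothetical indicator values exceed the threshold
mode = beta_posterior_mode(7, 10)      # equals the sample proportion, 0.7
median = beta_posterior_median(7, 10)  # close to, but not equal to, the mode
```

The gap between mode and median is exactly the asymmetry of the posterior that a single kriging estimate would hide.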

2.
The main objective of AVO inversion is to obtain posterior distributions for P-wave velocity, S-wave velocity and density from specified prior distributions, seismic data and well-log data. The inversion problem also involves estimation of a seismic wavelet and the seismic-noise level. The noise model is represented by a zero-mean Gaussian distribution specified by a covariance matrix. A method for joint AVO inversion, wavelet estimation and estimation of the noise level is developed in a Bayesian framework. The stochastic model includes uncertainty in the elastic parameters, the wavelet, and the seismic and well-log data. The posterior distribution is explored by Markov-chain Monte-Carlo simulation using the Gibbs sampler algorithm. The inversion algorithm has been tested on a seismic line from the Heidrun Field with two wells located on the line. The use of a coloured seismic-noise model resulted in about 10% lower uncertainties for the P-wave velocity, S-wave velocity and density compared with a white-noise model. The uncertainty of the estimated wavelet is low. In the Heidrun example, the effect of including uncertainty of the wavelet and the noise level was marginal with respect to the AVO inversion results.

3.
Bayesian inference for the Errors-In-Variables model   (Total citations: 1; self-citations: 0; citations by others: 1)
We discuss Bayesian inference based on the Errors-In-Variables (EIV) model. The proposed estimators are developed not only for the unknown parameters but also for the variance factor, with or without prior information. The proposed Total Least-Squares (TLS) estimators of the unknown parameters can be regarded as quasi Least-Squares (LS) and quasi maximum a posteriori (MAP) solutions. In addition, the variance factor of the EIV model is proven to be always smaller than the variance factor of the traditional linear model. A numerical example demonstrates the performance of the proposed solutions.
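For context, the deterministic backbone of the EIV problem can be sketched with the classical closed-form total least-squares (orthogonal regression) fit of a straight line, where errors are shared by both variables; the paper's Bayesian treatment generalizes this with priors and a variance factor. This is a generic textbook formula, not the authors' estimator.

```python
from math import sqrt

def tls_line(xs, ys):
    """Orthogonal-regression (TLS) fit of y = a + b*x: slope of the first
    principal axis of the centred (x, y) scatter, from the 2x2 eigenproblem."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = (syy - sxx + sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
    return my - b * mx, b  # intercept, slope

# the exact line is recovered when the data are noise-free
a, b = tls_line([0, 1, 2, 3], [1, 3, 5, 7])
```

Unlike ordinary least squares, this minimizes perpendicular (not vertical) distances, which is what makes it appropriate when x carries errors too.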

4.
International Journal of Sediment Research, 2019, 34(6): 577-590
Bayesian and discriminant function analysis (DFA) models have recently been used as tools to estimate sediment source contributions. Unlike existing multivariate mixing models, the accuracy of these two models remains unclear. In the current study, four well-distinguished source samples were used to create artificial mixtures to test the performance of the Bayesian and DFA models. These models were tested against the Walling-Collins model, a credible model for estimating sediment source contributions, as a reference. The artificial mixtures were divided into five groups, with each group consisting of five samples with known source percentages. The relative contributions of the sediment sources to the individual and grouped samples were calculated using each of the models. The mean absolute error (MAE) and the standard error (SE) of the MAE were used to test the accuracy of each model and the robustness of the optimized solutions. For the individual sediment samples, the source contributions calculated with the Bayesian (MAE = 7.4%, SE = 0.6%) and Walling-Collins (MAE = 7.5%, SE = 0.7%) models were closest to the actual percentages of the source contributions to the sediment mixtures. The DFA model produced the worst estimates (MAE = 18.4%, SE = 1.4%). For the grouped sediment samples, the Walling-Collins model (MAE = 5.4%) was the best predictor, closely followed by the Bayesian model (MAE = 5.9%). The results obtained with the DFA model were similar to those for the individual sediment samples, with the accuracy of the source contribution values again being the poorest of any of the models (MAE = 18.5%). An increase in sample size improved the accuracy of the Walling-Collins and Bayesian models, but the DFA model produced similarly inaccurate results for both the individual and grouped sediment samples.
Generally, the accuracy of the Walling-Collins and Bayesian models was similar (p > 0.01), while there were significant differences (p < 0.01) between the DFA model and the other models. This study demonstrated that the Bayesian model can provide a credible estimate of sediment source contributions and has great practical potential, while the accuracy of the DFA model still requires considerable improvement.
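The accuracy metrics used in this study can be sketched as follows; the data values are invented for illustration and are not from the study.

```python
from math import sqrt

def mae_and_se(estimated, actual):
    """Mean absolute error between estimated and known source contributions,
    and the standard error of the absolute errors (sample SD / sqrt(n))."""
    errs = [abs(e - a) for e, a in zip(estimated, actual)]
    n = len(errs)
    mae = sum(errs) / n
    sd = sqrt(sum((e - mae) ** 2 for e in errs) / (n - 1))
    return mae, sd / sqrt(n)

est = [32.1, 18.7, 25.4, 23.8]   # model output, percent (hypothetical)
act = [30.0, 20.0, 25.0, 25.0]   # known mixture percentages (hypothetical)
mae, se = mae_and_se(est, act)
```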

5.
The forecasting of large aftershocks is a preliminary and critical step in seismic hazard analysis and seismic risk management. From a statistical point of view, it relies entirely on estimating the properties of aftershock sequences using a set of laws with well-defined parameters. Since the frequentist and Bayesian approaches are common tools for assessing these parameter values, we compare the two approaches for the Modified Omori Law on a selection of mainshock–aftershock sequences in the Iranian Plateau. There is general agreement between the two methods, but the Bayesian approach appears to be more efficient as the number of recorded aftershocks decreases. Taking into account temporal variations of the b-value (the slope of the frequency-size distribution), the probability of occurrence of a strong aftershock, or of an event larger than the main shock, was calculated in a finite time window using the parameters of the Modified Omori Law observed in the Iranian Plateau.
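Assuming aftershocks follow a non-stationary Poisson process with the Modified Omori rate n(t) = K/(t + c)^p, the probability of at least one event in a finite window [t1, t2] is 1 - exp(-N), where N is the integrated rate. The sketch below implements this standard relation; the parameter values are illustrative, not the Iranian Plateau estimates.

```python
from math import exp, log

def expected_count(K, c, p, t1, t2):
    """Integral of K/(t+c)^p over [t1, t2]; separate case for p = 1."""
    if abs(p - 1.0) < 1e-12:
        return K * (log(t2 + c) - log(t1 + c))
    return K * ((t1 + c) ** (1 - p) - (t2 + c) ** (1 - p)) / (p - 1)

def prob_at_least_one(K, c, p, t1, t2):
    """Poisson probability of one or more aftershocks in [t1, t2]."""
    return 1.0 - exp(-expected_count(K, c, p, t1, t2))

# e.g. K=5 events/day, c=0.1 day, p=1.1: occurrence probability for days 1-10
P = prob_at_least_one(5.0, 0.1, 1.1, 1.0, 10.0)
```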

6.
Flood frequency analysis is usually based on the fitting of an extreme value distribution to the local streamflow series. However, when the local data series is short, frequency analysis results become unreliable. Regional frequency analysis is a convenient way to reduce the estimation uncertainty. In this work, we propose a regional Bayesian model for short record length sites. This model is less restrictive than the index flood model while preserving the formalism of “homogeneous regions”. The performance of the proposed model is assessed on a set of gauging stations in France. The accuracy of quantile estimates as a function of the degree of homogeneity of the pooling group is also analysed. The results indicate that the regional Bayesian model outperforms the index flood model and local estimators. Furthermore, it seems that working with relatively large and homogeneous regions may lead to more accurate results than working with smaller and highly homogeneous regions.

7.
Stochastic Environmental Research and Risk Assessment - In this study, we propose a regional Bayesian hierarchical model for flood frequency analysis. The Bayesian method is an alternative to the...

8.
Spatial heterogeneity in groundwater systems introduces significant challenges in groundwater modeling and parameter calibration. In order to mitigate the modeling uncertainty, data assimilation...

9.
We focus on the Bayesian estimation of strongly heterogeneous transmissivity fields conditional on data sampled at a set of locations in an aquifer. Log-transmissivity, Y, is modeled as a stochastic Gaussian process, parameterized through a truncated Karhunen–Loève (KL) expansion. We consider Y fields characterized by a short correlation scale as compared to the size of the observed domain. These systems are associated with a KL decomposition which still requires a high number of parameters, thus hampering the efficiency of the Bayesian estimation of the underlying stochastic field. The distinctive aim of this work is to present an efficient approach for the stochastic inverse modeling of fully saturated groundwater flow in these types of strongly heterogeneous domains. The methodology is grounded on the construction of an optimal sparse KL decomposition which is achieved by retaining only a limited set of modes in the expansion. Mode selection is driven by model selection criteria and is conditional on available data of hydraulic heads and (optionally) Y. Bayesian inversion of the optimal sparse KLE is then inferred using Markov Chain Monte Carlo (MCMC) samplers. As a test bed, we illustrate our approach by way of a suite of computational examples where noisy head and Y values are sampled from a given randomly generated system. Our findings suggest that the proposed methodology yields a globally satisfactory inversion of the stochastic head and Y fields. Comparison of reference values against the corresponding MCMC predictive distributions suggests that observed values are well reproduced in a probabilistic sense. In a few cases, reference values at some unsampled locations (typically far from measurements) are not captured by the posterior probability distributions. In these cases, the quality of the estimation could be improved, e.g., by increasing the number of measurements and/or the threshold for the selection of KL modes.
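A minimal sketch of the truncated Karhunen–Loève machinery described above, for a 1-D Gaussian field with an exponential covariance (grid size, correlation scale and variance are invented). This is only the plain truncation the paper starts from, not its sparse mode-selection procedure.

```python
import numpy as np

def kl_modes(n=50, length=1.0, corr=0.1, var=1.0):
    """Eigendecomposition of an exponential covariance on a 1-D grid,
    eigenpairs returned in descending eigenvalue order."""
    x = np.linspace(0.0, length, n)
    C = var * np.exp(-np.abs(x[:, None] - x[None, :]) / corr)
    lam, phi = np.linalg.eigh(C)           # eigh returns ascending order
    order = np.argsort(lam)[::-1]
    return lam[order], phi[:, order], x

def sample_field(lam, phi, n_modes, rng):
    """Draw one realization from the truncated expansion."""
    xi = rng.standard_normal(n_modes)      # KL coefficients ~ N(0, 1)
    return phi[:, :n_modes] @ (np.sqrt(lam[:n_modes]) * xi)

lam, phi, x = kl_modes()
rng = np.random.default_rng(0)
Y = sample_field(lam, phi, n_modes=15, rng=rng)   # truncated Y-field sample
energy = lam[:15].sum() / lam.sum()  # variance fraction retained by 15 modes
```

Because the correlation scale (0.1) is short relative to the domain (1.0), many modes carry non-negligible variance, which is exactly the difficulty the paper's sparse selection addresses.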

10.
Non-linear numerical models of the injection phase of a carbon sequestration (CS) project are computationally demanding. Thus, the computational cost of the calibration of these models using sampling-based solutions can be formidable. The Bayesian adaptive response surface method (BARSM)—an adaptive response surface method (RSM)—is developed to mitigate the cost of sampling-based, continuous calibration of CS models. It is demonstrated that the adaptive scheme has a negligible effect on accuracy, while providing a significant increase in efficiency. In the BARSM, a meta-model replaces the computationally costly full model during the majority of the calibration cycles. In the remaining cycles, the full model is used and samples of these cycles are utilized for adaptively updating the meta-model. The idea behind the BARSM is to take advantage of the fact that sampling-based calibration algorithms typically tend to sample more frequently from areas with a larger posterior density than from areas with a smaller posterior density. This behavior of the sampling-based calibration algorithms is used to adaptively update the meta-model and to make it more accurate where it is most likely to be evaluated. The BARSM is integrated with Unscented Importance Sampling (UIS) (Sarkarfarshi and Gracie, Stoch Env Res Risk Assess 29: 975–993, 2015), which is an efficient Bayesian calibration algorithm. A synthesized case of supercritical CO2 injection in a heterogeneous saline aquifer is used to assess the performance of the BARSM and to compare it with a classical non-adaptive RSM approach and the Bayesian calibration method UIS without RSM. The BARSM is shown to reduce the computational cost compared to non-adaptive Bayesian calibration by 87%, with negligible effect on accuracy. It is demonstrated that the error of the meta-model fitted using the BARSM, when samples are drawn from the posterior parameter distribution, is negligible and smaller than the monitoring error.

11.
A regularized version of the direct interaction approximation closure (RDIA) is compared with ensemble-averaged direct numerical simulations (DNS) for decaying two-dimensional turbulence at large-scale Reynolds numbers ranging between low (≈50) and high (≈4000). The regularization localizes transfer by removing the interaction between large-scale and small-scale eddies depending on a specified cut-off ratio α. It thus eliminates spurious convection effects of small-scale eddies by large-scale eddies in the Eulerian direct interaction approximation (DIA), which cause the underestimation of small-scale kinetic energy by the DIA. Cumulant-update versions of the RDIA closure, which have comparable performance but are much more efficient computationally, have also been analyzed. Both the closures and DNS use discrete wavenumber representations relevant to flows on a doubly periodic domain. This means that any differences between them are intrinsic and not partly due to using a continuous wavenumber formulation for the closures.

Comparisons between the regularized closures and DNS have focused on evolved kinetic energy and palinstrophy spectra, as well as on enstrophy flux spectra and on the evolution of skewness, which depends sensitively on small-scale differences. All of these diagnostics compare quite well when α = 6, and this is the case for runs started from each of three initial spectra, for evolved large-scale Reynolds numbers ranging from ≈50 to ≈4000, and for regularized DIA closures with, and particularly without, cumulant-update restarts. The performance of the RDIA compared with quasi-Lagrangian closure models is discussed.

12.
Guotao Cui, Hydrological Sciences Journal, 2017, 62(13): 2222-2237
A Green-Ampt type model for sloping layered soils (GASLS) was developed to investigate infiltration processes. We introduced a factor c, which is the same for all layers and represents the ratio of effective hydraulic conductivity to saturated hydraulic conductivity. Guidelines to estimate the factor c were established based on 234 scenarios under various conditions. The model with the estimated factor c describes infiltration processes better than that with c = 1. For fine soils, or layered formations with finer soils on top, c is smaller than 1. For coarse soils, or layered formations with coarse soils on top, c is close to 1. Comparison with laboratory experiments on a sloping surface indicated that the GASLS model, with a slope factor adjusted by the sine of the slope angle, can represent the sloping-surface effects. The GASLS model can incorporate any slope factor.
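For reference, the single-layer Green-Ampt equations that GASLS builds on can be sketched with the effective conductivity written as Ke = c·Ks, the role the factor c plays above. The layered and sloping extensions are not reproduced, and the parameter values are invented.

```python
from math import log

def green_ampt_F(t, Ks, c, psi, dtheta, tol=1e-10):
    """Cumulative infiltration F(t) from the implicit Green-Ampt relation
    Ke*t = F - S*ln(1 + F/S), with S = psi*dtheta, solved by fixed-point
    iteration (the map is a contraction, so it converges)."""
    Ke = c * Ks                    # effective hydraulic conductivity
    S = psi * dtheta               # suction head times moisture deficit
    F = Ke * t if Ke * t > 0 else tol
    for _ in range(200):
        F_new = Ke * t + S * log(1.0 + F / S)
        if abs(F_new - F) < tol:
            break
        F = F_new
    return F

def infiltration_rate(F, Ks, c, psi, dtheta):
    """Green-Ampt infiltration rate f = Ke*(1 + psi*dtheta/F)."""
    return c * Ks * (1.0 + psi * dtheta / F)

# hypothetical soil: Ks = 1 cm/h, c = 0.8, psi = 10 cm, dtheta = 0.3, t = 2 h
F = green_ampt_F(t=2.0, Ks=1.0, c=0.8, psi=10.0, dtheta=0.3)
f = infiltration_rate(F, 1.0, 0.8, 10.0, 0.3)
```

With c < 1 (fine soils on top, as in the abstract), both F and f are lowered relative to the c = 1 model.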

13.
In ground water flow and transport modeling, the heterogeneous nature of porous media has a considerable effect on the resulting flow and solute transport. Some method of generating the heterogeneous field from a limited dataset of uncertain measurements is required. Bayesian updating is one such method, which interpolates from an uncertain dataset using the statistics of the underlying probability distribution function. In this paper, Bayesian updating was used to determine the heterogeneous natural-log transmissivity field for a carbonate and a sandstone aquifer in southern Manitoba. It was determined that the transmissivity in m2/sec followed a natural-log-normal distribution for both aquifers, with means of -7.2 and -8.0 for the carbonate and sandstone aquifers, respectively. The variograms were calculated using an estimator developed by Li and Lake (1994). Fractal behavior was not evident in the variogram from either aquifer. The Bayesian updating heterogeneous field provided good results even in cases where little data was available. A large transmissivity zone in the sandstone aquifer was created by the Bayesian procedure, which is not a reflection of any deterministic consideration but a natural outcome of updating a prior probability distribution function with observations. The statistical model returns a result that is very reasonable: that is, homogeneous in regions where little or no information is available to alter the initial state. No long-range correlation trends or fractal behavior of the log-transmissivity field was observed in either aquifer over a distance of about 300 km.
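The variogram step mentioned above can be illustrated with the textbook (Matheron) empirical semivariogram; the paper itself uses the Li and Lake (1994) estimator, which differs. The coordinates and log-transmissivity values below are invented.

```python
def semivariogram(coords, values, lags, tol):
    """gamma(h) = mean of 0.5*(z_i - z_j)^2 over pairs whose separation
    distance lies within tol of the lag h."""
    out = []
    for h in lags:
        acc, n = 0.0, 0
        for i in range(len(coords)):
            for j in range(i + 1, len(coords)):
                d = ((coords[i][0] - coords[j][0]) ** 2 +
                     (coords[i][1] - coords[j][1]) ** 2) ** 0.5
                if abs(d - h) <= tol:
                    acc += 0.5 * (values[i] - values[j]) ** 2
                    n += 1
        out.append(acc / n if n else float("nan"))
    return out

# hypothetical sample locations (km) and natural-log transmissivity values
pts = [(0, 0), (1, 0), (0, 1), (2, 0), (0, 2)]
z = [-7.1, -7.4, -7.0, -8.0, -6.9]
gamma = semivariogram(pts, z, lags=[1.0, 2.0], tol=0.1)
```

A fitted variogram model then supplies the spatial covariance that the Bayesian updating interpolates with.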

14.
In this study, we propose and implement a Bayesian model to estimate a central equivalent dose from a set of luminescence measurements. This model is based on assumptions similar to the ones used in the standard statistical pipeline (typically implemented in the Analyst software followed by a subsequent central equivalent dose analysis) but tackles some of its main limitations. More specifically, it consists of a three-stage hierarchical model that has two main advantages over the standard approach: first, it avoids the introduction of auxiliary variables (typically mean and variance), at each step of the inference process, which are likely to fail to characterise the distributions of interest; second, it ensures a homogeneous and consistent inference with respect to the overall model and data. As a Bayesian model, our model requires the specification of prior distributions; we discuss such informative and non-informative distributions and check the relevance of our choices on synthetic data. Then, we use data derived from Single Aliquot and Regenerative (SAR) dose measurements performed on single grains from laboratory-bleached and dosed samples. The results show that our Bayesian approach offers a promising alternative to the standard one. Finally, we conclude by stressing that, relying on a Bayesian hierarchical model, our approach could be modified to incorporate additional information (e.g. stratigraphic constraints) that is difficult to formalise properly with the existing approaches.

15.
The data analyzed in this paper are part of the results described in Bueno et al. (2000). Three cytogenetic endpoints were analyzed in three populations of a species of wild rodent, Akodon montensis, living in an industrial, an agricultural, and a preservation area at the Itajaí Valley, State of Santa Catarina, Brazil. The polychromatic/normochromatic ratio, the mitotic index, and the frequency of micronucleated polychromatic erythrocytes were used in an attempt to establish a genotoxic profile of each area. It was assumed that the three populations were in the same conditions with respect to the influence of confounding factors such as animal age, health, nutritional status, presence of pathogens, and intra- and inter-populational genetic variability. Therefore, any differences found in the endpoints analyzed could be attributed to the external agents present in each area. The statistical models used in this paper are mixtures of negative-binomial and Poisson variables. The Poisson variables are used as approximations of binomials for rare events. The mixing distributions are beta densities. The statistical analyses are carried out from the Bayesian perspective, as opposed to the frequentist ones often considered in the literature, as for instance in Bueno et al. (2000).

16.
Unlike stacked seismic data, pre-stack data include abundant information about shear-wave velocity and density. By inverting the shear-wave and density information from pre-stack data, we can determine oil-bearing properties from different incidence angles. State-of-the-art inversion methods suffer from either low vertical resolution or lateral discontinuities. However, a practical reservoir generally has sharp discontinuities between different layers in the vertical direction and is horizontally smooth. To obtain such a model, we present an inversion method based on regularized amplitude-versus-incidence-angle (AVA) data to estimate a piecewise-smooth model from pre-stack seismic data. This method considers the subsurface stratum as a combination of two parts: a piecewise-constant part and a smooth part. To fix the ill-posedness of the inversion, we define the AVA inversion misfit function with four terms: the data misfit itself, a total variation regularization term acting as a sparsifying operator for the piecewise-constant part, a Tikhonov regularization term acting as a smoothing operator for the smooth part, and a last term that smoothly incorporates a priori information to constrain the magnitude of the estimated model. The proposed method not only incorporates structural information and an a priori model constraint, but also leads to a convex objective function that can easily be minimized with an iterative approach. Compared with the inversion results of the TV and Tikhonov regularization methods, the P-wave velocity, S-wave velocity and density inverted by the proposed method better delineate the piecewise-smooth character of the strata.
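A sketch of the four-term misfit described above, for a toy 1-D model split into a piecewise-constant part u and a smooth part v (model m = u + v), with an identity forward operator standing in for the AVA modelling; the operators, weights and numbers are illustrative, not the authors'.

```python
def misfit(u, v, d_obs, forward, m_prior, lam_tv, lam_tik, lam_prior):
    """Data misfit + TV on the piecewise-constant part + Tikhonov on the
    smooth part + a quadratic prior-model constraint."""
    m = [ui + vi for ui, vi in zip(u, v)]
    r = forward(m)
    data_term = sum((ri - di) ** 2 for ri, di in zip(r, d_obs))
    tv_term = sum(abs(u[i + 1] - u[i]) for i in range(len(u) - 1))
    tik_term = sum((v[i + 1] - v[i]) ** 2 for i in range(len(v) - 1))
    prior_term = sum((mi - pi) ** 2 for mi, pi in zip(m, m_prior))
    return (data_term + lam_tv * tv_term + lam_tik * tik_term
            + lam_prior * prior_term)

# identity "forward operator" just for illustration
J = misfit(u=[0, 0, 1, 1], v=[0.0, 0.1, 0.2, 0.3],
           d_obs=[0.0, 0.1, 1.2, 1.3], forward=lambda m: m,
           m_prior=[0, 0, 1, 1], lam_tv=0.5, lam_tik=1.0, lam_prior=0.1)
```

The TV term (absolute differences) keeps u blocky while the Tikhonov term (squared differences) keeps v smooth, which is what yields the piecewise-smooth estimate.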

17.

Based on Bayesian estimation, this paper processes GNSS-A observations collected in water about 3 km deep using a strategy that simultaneously estimates the seafloor transponder positions and the horizontal sound-speed gradient. In both single-transponder mode and multi-transponder array mode, we analyze the influence of survey-track geometry, initial parameter values, and the parameter-estimation strategy on the three-dimensional positioning accuracy of fixed and moored seafloor transponders. The results show that the posterior uncertainty of the transponder position is proportional to its prior uncertainty, with a proportionality coefficient of about 0.1 in single-transponder mode and about 0.05 in multi-transponder mode; the root mean square (RMS) of the observed travel-time residuals is about 0.6 ms in single-transponder mode and about 4.5 ms in multi-transponder mode. For a circular survey track, the measurement time is short and the horizontal sound-speed gradient varies little, so simultaneously estimating the transponder positions and the horizontal sound-speed gradient achieves high positioning accuracy. For the grid-shaped track, however, the survey time is longer and the horizontal sound-speed gradient varies considerably, so this estimation strategy is no longer applicable and yields the lowest accuracy. The circular track places high demands on the prior accuracy of the initial parameter values; when the prior accuracy of the lever arm between the transducer and the GNSS antenna is high, estimating this parameter does not significantly improve the positioning accuracy for the circular track.

18.
This study introduces Bayesian model averaging (BMA) to deal with model structure uncertainty in groundwater management decisions. A robust optimized policy should take into account model parameter uncertainty as well as uncertainty in imprecise model structure. Due to a limited amount of groundwater head data and hydraulic conductivity data, multiple simulation models are developed based on different head boundary condition values and semivariogram models of hydraulic conductivity. Instead of selecting the best simulation model, a variance-window-based BMA method is introduced to the management model to utilize all simulation models to predict chloride concentration. Given different semivariogram models, the spatially correlated hydraulic conductivity distributions are estimated by the generalized parameterization (GP) method that combines the Voronoi zones and the ordinary kriging (OK) estimates. The model weights of BMA are estimated by the Bayesian information criterion (BIC) and the variance window in the maximum likelihood estimation. The simulation models are then weighted to predict chloride concentrations within the constraints of the management model. The methodology is implemented to manage saltwater intrusion in the “1,500-foot” sand aquifer in the Baton Rouge area, Louisiana. The management model aims to obtain optimal joint operations of the hydraulic barrier system and the saltwater extraction system to mitigate saltwater intrusion. A genetic algorithm (GA) is used to obtain the optimal injection and extraction policies. Using the BMA predictions, higher injection rates and pumping rates are needed to cover more constraint violations, which do not occur if a single best model is used.
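The BIC-based weighting at the core of BMA can be sketched as w_i proportional to exp(-ΔBIC_i/2); the paper's variance window rescales these differences, which is omitted here for brevity. The BIC scores and chloride predictions below are invented.

```python
from math import exp

def bma_weights(bics):
    """BMA weights from BIC scores: w_i ∝ exp(-(BIC_i - BIC_best)/2),
    normalized to sum to one. Lower BIC -> higher weight."""
    best = min(bics)
    raw = [exp(-(b - best) / 2.0) for b in bics]
    s = sum(raw)
    return [r / s for r in raw]

def bma_predict(predictions, weights):
    """Model-averaged prediction: weighted sum over candidate models."""
    return sum(p * w for p, w in zip(predictions, weights))

w = bma_weights([120.3, 121.5, 125.0])            # three candidate models
chloride = bma_predict([250.0, 265.0, 300.0], w)  # averaged concentration
```

Managing against the averaged prediction rather than the single best model is what drives the more conservative injection and pumping rates reported above.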

19.
The Bayesian probability algorithm is a general framework that integrates the respective advantages of prior and likelihood probabilities and can be adjusted according to the specific characteristics of the data, ensuring reasonable results. Besides being used to estimate dissolved-oxygen content in rivers (Patil, Deng, 2011), to assess the reliability of pile foundations (Zhang et al, 2006), to reconstruct synthetic aperture radar images (Vu et al, 2013), and to quantitatively evaluate offshore-drilling risk (Khakzad et al, 2013), the algorithm has also been applied to problems such as seismic slope…

20.