Similar Documents
20 similar documents found.
1.
A new wavelet-based estimation methodology, in the context of spatial functional regression, is proposed to discriminate between small-scale and large-scale variability of spatially correlated functional data defined by depth-dependent curves. Specifically, the discrete wavelet transform of the data is computed in space and depth to reduce dimensionality. Moment-based regression estimation is applied to approximate the scaling coefficients of the functional response, while its wavelet coefficients are estimated in a Bayesian regression framework. Both regression approaches are implemented from the empirical versions of the scaling and wavelet auto-covariance and cross-covariance operators, which characterize the correlation structure of the spatial functional response. Weather stations on ocean islands are highly concentrated in space, and the proposed estimation methodology overcomes the difficulties that arise in estimating the ocean temperature field at different depths from the long records of ocean temperature measurements at these stations. Data are collected from The World-Wide Ocean Optics Database. The performance of the presented approach is tested in terms of 10-fold cross-validation and residual spatial and depth correlation analysis. Additionally, an application to soil science, for the prediction of electrical conductivity profiles, is considered to compare this approach with previous related ones in the statistical analysis of spatially correlated curves in depth.
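The wavelet dimensionality-reduction step lends itself to a short illustration. The sketch below is not the authors' estimator: it compresses synthetic depth-dependent curves with PyWavelets and regresses the coarse scaling coefficients of the response by ordinary least squares, a stand-in for the moment-based regression; the station count, curve length, and db4 wavelet are all assumptions, and no real ocean data are used.

```python
# Minimal sketch of the dimensionality-reduction step: each depth-dependent
# curve is compressed by a discrete wavelet transform, and the coarse scaling
# coefficients of the response are regressed on those of a covariate.
# All data below are synthetic.
import numpy as np
import pywt

rng = np.random.default_rng(0)
n_stations, n_depths = 40, 128
covariate = rng.normal(size=(n_stations, n_depths))                  # e.g. salinity curves
response = 0.6 * covariate + rng.normal(0.0, 0.3, covariate.shape)   # e.g. temperature

def scaling_coeffs(curves, wavelet="db4", level=3):
    """DWT each curve in depth; keep only the coarse scaling coefficients."""
    return np.array([pywt.wavedec(c, wavelet, level=level)[0] for c in curves])

X = scaling_coeffs(covariate)    # (n_stations, n_coarse)
Y = scaling_coeffs(response)

# Moment-based (least-squares) regression in the scaling-coefficient domain.
B, *_ = np.linalg.lstsq(X, Y, rcond=None)
Y_hat = X @ B
print("coarse-scale R^2:", 1 - np.sum((Y - Y_hat) ** 2) / np.sum((Y - Y.mean()) ** 2))
```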

2.
A new approach is described to allow conditioning to both hard data (HD) and soft data in patch- and distance-based multiple-point geostatistical simulation. Multinomial logistic regression is used to quantify the link between HD and soft data: the logistic regression classifier converts the soft data into as many probability fields as there are categories. The local category proportions are then compared to the average category probabilities within the patch. Conditioning to HD is obtained using alternative training images (TI) and by imposing large relative weights on HD. Conditioning to soft data is obtained by measuring the probability–proportion patch distance. Both 2D and 3D cases are considered. Synthetic cases show that a stationary TI can generate non-stationary realizations that reproduce the HD, keep the texture indicated by the TI, and follow the trends identified in probability maps obtained from soft data. A real case study, the Mallik methane-hydrate field, shows perfect reproduction of HD while keeping a good reproduction of the TI texture and probability trends.
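A hedged sketch of the soft-data step, not the authors' code: a multinomial logistic classifier turns a soft covariate (a stand-in for, say, a seismic attribute) into one probability field per facies category, and a candidate patch is scored by the distance between its category proportions and the mean soft probabilities over the same window. The grid, attribute, and facies labels are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
attribute = rng.normal(size=(n, 1))                       # soft data at sampled locations
facies = (attribute[:, 0] > -0.5).astype(int) + (attribute[:, 0] > 0.7).astype(int)

clf = LogisticRegression().fit(attribute, facies)         # multinomial for 3 classes
prob_fields = clf.predict_proba(attribute)                # one probability field per category

def patch_distance(patch_labels, patch_probs, n_cat=3):
    """Distance between a candidate patch's category proportions and the
    average soft-data probabilities within the patch window."""
    props = np.bincount(patch_labels, minlength=n_cat) / len(patch_labels)
    return np.linalg.norm(props - patch_probs.mean(axis=0))

print(patch_distance(facies[:25], prob_fields[:25]))
```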

3.
4.
A methodology using ordinal logistic regression is proposed to predict the probability of occurrence of heavy metals in ground water. The predicted probabilities are defined with reference to the background concentration and the maximum contaminant level. The model predicts occurrence as a function of different influencing variables such as land use, soil hydrologic group (SHG), and surface elevation. The methodology was applied to the Sumas-Blaine Aquifer located in Washington State to predict the occurrence of five heavy metals. The influencing variables considered were (1) SHG; (2) land use; (3) elevation; (4) clay content; (5) hydraulic conductivity; and (6) well depth. The predicted probabilities were in agreement with the observed probabilities under existing conditions. The results showed that aquifer vulnerability to each heavy metal was related to a different set of influencing variables, although all heavy metals were strongly influenced by land use and SHG. The model results also provided good insight into the influence of various hydrogeochemical factors and land uses on the presence of each heavy metal. A simple economic analysis was proposed and demonstrated to evaluate the cost effects of changing land use on heavy metal occurrence.
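A sketch of ordinal (proportional-odds) logistic regression, assuming statsmodels' OrderedModel (one of several implementations; the paper does not prescribe one). The three ordered classes mimic "below background / background-to-MCL / above MCL", and the two covariates stand in for well depth and clay content; all values are synthetic.

```python
import numpy as np
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(2)
n = 500
X = rng.normal(size=(n, 2))                      # e.g. well depth, clay content
latent = 1.2 * X[:, 0] - 0.8 * X[:, 1] + rng.logistic(size=n)
y = np.digitize(latent, [-1.0, 1.0])             # 0, 1, 2: ordered occurrence class

mod = OrderedModel(y, X, distr="logit")
res = mod.fit(method="bfgs", disp=False)
probs = mod.predict(res.params, exog=X)          # P(class k | covariates), one column per class
print(np.asarray(res.params).round(2))
print(probs[:3].round(3))
```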

5.
The aim of this study was to apply, verify and compare a multiple logistic regression model for landslide susceptibility analysis in three Korean study areas using a geographic information system (GIS). Landslide locations were identified by interpreting aerial photographs and satellite images and by a field survey. Maps of the topography, soil type, forest cover, lineaments and land cover were constructed from the spatial data sets. The 14 factors that influence landslide occurrence were extracted from the database and the logistic regression coefficient of each factor was computed. Landslide susceptibility maps were drawn for these three areas using logistic regression coefficients derived not only from the data for that area but also from those for each of the other two areas (nine maps in all) as a cross-check of method validity. For verification, the results of the analyses were compared with actual landslide locations. Among the nine cases, the Janghung exercise using the logistic formula and the coefficients for Janghung had the greatest accuracy (88.44%), whereas the Janghung results, when considered with the logistic formula and the coefficients for Boeun, had the least accuracy (74.16%).
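A minimal sketch of the cross-area check with synthetic stand-ins for the GIS layers: logistic regression coefficients are fitted on one study area, then a different area is scored with them and the predictions are compared against mapped landslide cells. The factor values, area names in comments, and effect sizes are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

def make_area(n=3000, shift=0.0):
    X = rng.normal(size=(n, 14))                 # 14 influencing factors per grid cell
    logit = X[:, 0] + 0.5 * X[:, 1] - 0.7 * X[:, 2] + shift
    y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)
    return X, y

X_area_a, y_area_a = make_area()                 # e.g. Janghung
X_area_b, y_area_b = make_area(shift=0.4)        # e.g. Boeun

model = LogisticRegression(max_iter=1000).fit(X_area_b, y_area_b)
# Apply area-B-derived coefficients to area-A cells (the cross-check).
acc = model.score(X_area_a, y_area_a)
print(f"cross-area accuracy: {100 * acc:.2f}%")
```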

6.
Water temperature has a significant influence on aquatic organisms, including stenothermal fish such as salmonids. It is thus of prime importance to build reliable tools to forecast water temperature. This study evaluated a statistical scheme to model average water temperature from daily average air temperature and average discharge at the Sainte-Marguerite River, northern Canada. The aim was to test a non-parametric water temperature generalized additive model (GAM) and to compare its performance with three previously developed approaches: the logistic, residuals regression and linear regression models. Owing to its flexibility, the GAM was able to capture some of the nonlinear response of water temperature to the two explanatory variables (air temperature and flow), with the shape of these effects determined by the trends in the collected data. The four models were evaluated annually using a cross-validation technique, with three comparison criteria: the root mean square error (RMSE), the bias error and the Nash-Sutcliffe coefficient of efficiency (NSC). The goodness of fit of the four models was also compared graphically. The GAM was the best of the four models (RMSE = 1.44°C, bias = −0.04 and NSC = 0.94).
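A hedged sketch of the GAM comparison, using the pygam package (one of several GAM implementations; the study does not prescribe a library): smooth terms of air temperature and discharge predict water temperature, and the three criteria from the study are computed. All data are synthetic.

```python
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(4)
n = 600
air_t = 10 + 8 * np.sin(np.linspace(0, 4 * np.pi, n)) + rng.normal(0, 1, n)
flow = np.exp(rng.normal(3, 0.3, n))
water_t = 0.8 * air_t - 1.5 * np.log(flow) + rng.normal(0, 1, n)

X = np.column_stack([air_t, flow])
gam = LinearGAM(s(0) + s(1)).fit(X, water_t)     # one smooth term per covariate
pred = gam.predict(X)

rmse = np.sqrt(np.mean((water_t - pred) ** 2))
bias = np.mean(pred - water_t)
nsc = 1 - np.sum((water_t - pred) ** 2) / np.sum((water_t - water_t.mean()) ** 2)
print(f"RMSE={rmse:.2f}°C  bias={bias:.3f}  NSC={nsc:.3f}")
```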

7.
The problem of identifying the modal parameters of a structural model from measured ambient response time histories is addressed. A Bayesian spectral density approach (BSDA) for modal updating is presented that uses the statistical properties of a spectral density estimator to obtain not only the optimal values of the updated modal parameters but also their associated uncertainties, by calculating the posterior joint probability distribution of these parameters. Calculating the uncertainties of the identified modal parameters is very important if one plans to proceed with updating a theoretical finite element model based on the modal estimates. It is found that the updated PDF of the modal parameters can be well approximated by a Gaussian distribution centred at the optimal parameters, at which the posterior PDF is maximized. Examples using simulated data are presented to illustrate the proposed method.
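This is not the BSDA likelihood itself, only a sketch of its key output: after maximizing a posterior over modal parameters, the posterior is approximated by a Gaussian centred at the optimum, with covariance from the inverse Hessian of the negative log posterior. The toy "posterior" below, over one natural frequency and damping ratio, uses a made-up misfit and synthetic observations.

```python
import numpy as np
from scipy.optimize import minimize

f_obs = np.array([1.51, 1.49, 1.52, 1.50])   # "identified" frequencies (Hz), synthetic
b_obs = 0.06                                  # half-power bandwidth (Hz), synthetic

def neg_log_post(theta):
    f_n, zeta = theta                         # natural frequency and damping ratio
    nlp = np.sum((f_obs - f_n) ** 2) / (2 * 0.05 ** 2)       # frequency misfit
    nlp += (b_obs - 2 * zeta * f_n) ** 2 / (2 * 0.01 ** 2)   # bandwidth misfit
    nlp += 0.5 * ((zeta - 0.02) / 0.01) ** 2                 # weak damping prior
    return nlp

opt = minimize(neg_log_post, x0=[1.0, 0.05])

# Finite-difference Hessian at the optimum; its inverse is the covariance of
# the Gaussian approximation N(MAP, H^-1).
eps, k = 1e-4, 2
H = np.zeros((k, k))
for i in range(k):
    for j in range(k):
        ei, ej = np.eye(k)[i] * eps, np.eye(k)[j] * eps
        H[i, j] = (neg_log_post(opt.x + ei + ej) - neg_log_post(opt.x + ei)
                   - neg_log_post(opt.x + ej) + neg_log_post(opt.x)) / eps ** 2
cov = np.linalg.inv(H)
print("MAP:", opt.x.round(4), " posterior std:", np.sqrt(np.diag(cov)).round(4))
```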

8.
9.
Stream water temperature plays a significant role in aquatic ecosystems, where it controls many important biological and physical processes. Reliable estimates of water temperature at the daily time step are critical in managing water resources. We developed a parsimonious piecewise Bayesian model for estimating daily stream water temperatures that accounts for temporal autocorrelation and for both linear and nonlinear relationships with air temperature and discharge. The model was tested in 8 climatically different basins of the USA and at 34 sites within the mountainous Boise River Basin (Idaho, USA). The results show that the proposed model is robust, with an average root mean square error of 1.25 °C and a Nash-Sutcliffe coefficient of 0.92 over a 2-year period. Our approach can be used to predict historic daily stream water temperatures in any location using observed daily stream temperature and regional air temperature data.
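A sketch of the mean structure only, with none of the Bayesian machinery: water temperature responds linearly to air temperature with a breakpoint, plus a discharge effect, and an AR(1) residual coefficient is estimated afterwards. The breakpoint location, coefficients, and data are all synthetic assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)
n = 365
air = 12 + 10 * np.sin(np.linspace(0, 2 * np.pi, n)) + rng.normal(0, 2, n)
q = np.exp(rng.normal(2.5, 0.2, n))
water = np.maximum(0.7 * air, 0.3 * air + 2) - 0.8 * np.log(q) + rng.normal(0, 0.6, n)

def piecewise(X, b0, b1, b2, brk, g):
    a, lq = X
    slope_change = np.where(a > brk, b2 * (a - brk), 0.0)   # kink at the breakpoint
    return b0 + b1 * a + slope_change + g * lq

X = np.vstack([air, np.log(q)])
p, _ = curve_fit(piecewise, X, water, p0=[0, 0.5, 0.2, 10, -0.5])
resid = water - piecewise(X, *p)
rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]              # lag-1 autocorrelation
print("breakpoint ≈", round(p[3], 1), "°C,  AR(1) rho ≈", round(rho, 2))
```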

10.
A Bayesian differential evolution inversion method for microseismic data
Accurate first arrivals are difficult to pick in microseismic monitoring. To improve the location accuracy of the inversion and reduce its non-uniqueness, a Bayesian differential evolution inversion method for microseismic data is studied. Starting from an analysis of the inversion residuals of theoretical models and the distribution characteristics of their covariance, and comparing how the location of the residual-covariance minimum shifts and how the distribution gradient changes with and without different levels of added noise, a method for estimating the prior-information solution is proposed. In the posterior estimation, the weighting coefficients cannot be computed because the variance of the prior-information solution is difficult to obtain; by analyzing the relationship between the residual behavior and the solution, a method for deriving the weighting coefficients from the residuals is developed. To speed up the search, a differential evolution inversion method is adopted: the mutation operation uses a differential strategy, perturbing individuals with difference vectors between members of the population. This makes full use of the population's distribution, improves the search capability of the algorithm, and avoids the shortcomings of the mutation scheme in genetic algorithms. The inversion performance of the method is tested on theoretical models and compared with the results of a search method. The tests show that, for different levels of first-arrival noise, the inversion results of the proposed method approach the true solution much more closely than those of the search method, and the inversion of field data is also better than that of the search method.
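A hedged sketch of the search step only, using scipy's differential evolution (difference-vector mutation, as described above) rather than the authors' implementation: it minimizes the L2 misfit between observed and computed P-wave first arrivals over source position and origin time. The geometry, velocity, and noise level are invented.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(6)
v = 3000.0                                                    # homogeneous velocity, m/s
receivers = rng.uniform(0, 2000, size=(12, 3)) * [1, 1, 0]    # surface array
true_src, true_t0 = np.array([900.0, 1100.0, 800.0]), 0.05

def travel_times(src, t0):
    return t0 + np.linalg.norm(receivers - src, axis=1) / v

t_obs = travel_times(true_src, true_t0) + rng.normal(0, 2e-3, 12)  # noisy picks

def misfit(m):
    return np.sum((t_obs - travel_times(m[:3], m[3])) ** 2)

bounds = [(0, 2000), (0, 2000), (0, 2000), (-0.5, 0.5)]
res = differential_evolution(misfit, bounds, seed=0, tol=1e-10)
print("located source:", res.x[:3].round(1), " origin time:", round(res.x[3], 4))
```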

11.
This paper presents a Bayesian approach for fitting the standard power-law rating curve model to a set of stage-discharge measurements. Methods for eliciting both regional and at-site prior information, and issues concerning the determination of prior forms, are discussed. An efficient MCMC algorithm for the specific problem is derived. The appropriateness of the proposed method is demonstrated by applying the model to both simulated and real-life data. However, some problems came to light in the applications, and these are discussed.
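A minimal random-walk Metropolis sketch for the power-law rating curve Q = a(h − c)^b with multiplicative lognormal errors; it is a toy stand-in for the paper's tailored MCMC algorithm and elicited priors. The flat priors, proposal scales, and synthetic stage-discharge data are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
h = np.sort(rng.uniform(0.6, 3.0, 40))                     # stage measurements, m
q_obs = 5.0 * (h - 0.5) ** 2.2 * np.exp(rng.normal(0, 0.1, h.size))

def log_post(theta):
    a, b, c, s = theta
    if a <= 0 or s <= 0 or c >= h.min() or not (0.5 < b < 5):
        return -np.inf                                     # flat priors on a plausible box
    r = np.log(q_obs) - np.log(a) - b * np.log(h - c)
    return -h.size * np.log(s) - np.sum(r ** 2) / (2 * s ** 2)

theta = np.array([1.0, 1.5, 0.0, 0.5])
lp, chain = log_post(theta), []
for _ in range(20000):
    prop = theta + rng.normal(0, [0.2, 0.05, 0.02, 0.02])
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:                # Metropolis acceptance
        theta, lp = prop, lp_prop
    chain.append(theta)
post = np.array(chain[5000:])                              # discard burn-in
print("posterior means (a, b, c, sigma):", post.mean(axis=0).round(3))
```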

12.
Empirical tsunami fragility curves are developed based on a Bayesian framework by accounting for uncertainty of input tsunami hazard data in a systematic and comprehensive manner. Three fragility modeling approaches, i.e. lognormal method, binomial logistic method, and multinomial logistic method, are considered, and are applied to extensive tsunami damage data for the 2011 Tohoku earthquake. A unique aspect of this study is that uncertainty of tsunami inundation data (i.e. input hazard data in fragility modeling) is quantified by comparing two tsunami inundation/run-up datasets (one by the Ministry of Land, Infrastructure, and Transportation of the Japanese Government and the other by the Tohoku Tsunami Joint Survey group) and is then propagated through Bayesian statistical methods to assess the effects on the tsunami fragility models. The systematic implementation of the data and methods facilitates the quantitative comparison of tsunami fragility models under different assumptions. Such comparison shows that the binomial logistic method with un-binned data is preferred among the considered models; nevertheless, further investigations related to multinomial logistic regression with un-binned data are required. Finally, the developed tsunami fragility functions are integrated with building damage-loss models to investigate the influences of different tsunami fragility curves on tsunami loss estimation. Numerical results indicate that the uncertainty of input tsunami data is not negligible (coefficient of variation of 0.25) and that neglecting the input data uncertainty leads to overestimation of the model uncertainty.
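A sketch of just one of the three modeling options above: a lognormal fragility curve P(damage | depth h) = Φ((ln h − ln θ)/β), fitted to un-binned binary damage data by maximum likelihood. The damage data and true parameters are synthetic, not Tohoku survey values.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(8)
h = np.exp(rng.normal(0.5, 0.8, 2000))              # inundation depth per building, m
p_true = norm.cdf((np.log(h) - np.log(2.0)) / 0.6)  # theta = 2 m, beta = 0.6
damaged = (rng.random(h.size) < p_true).astype(int)

def neg_log_lik(params):
    ln_theta, ln_beta = params
    p = norm.cdf((np.log(h) - ln_theta) / np.exp(ln_beta))
    p = np.clip(p, 1e-12, 1 - 1e-12)                # guard the log
    return -np.sum(damaged * np.log(p) + (1 - damaged) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=[0.0, 0.0])
theta, beta = np.exp(fit.x)
print(f"theta = {theta:.2f} m, beta = {beta:.2f}")
```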

13.
Studies have illustrated the performance of at-site and regional flood quantile estimators. For realistic generalized extreme value (GEV) distributions and short records, a simple index-flood quantile estimator performs better than two-parameter (2P) GEV quantile estimators with probability weighted moment (PWM) estimation using a regional shape parameter and at-site mean and L-coefficient of variation (L-CV), and full three-parameter at-site GEV/PWM quantile estimators. However, as regional heterogeneity or record lengths increase, the 2P-estimator quickly dominates. This paper generalizes the index flood procedure by employing regression with physiographic information to refine a normalized T-year flood estimator. A linear empirical Bayes estimator uses the normalized quantile regression estimator to define a prior distribution which is employed with the normalized 2P-quantile estimator. Monte Carlo simulations indicate that this empirical Bayes estimator does essentially as well as or better than the simpler normalized quantile regression estimator at sites with short records, and performs as well as or better than the 2P-estimator at sites with longer records or smaller L-CV.
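A sketch of the building block the compared estimators share: fitting a GEV by probability weighted moments (Hosking's shape approximation), applied to a synthetic at-site record. The empirical-Bayes combination with a regional regression prior is not reproduced here.

```python
import numpy as np
from math import gamma, log

def gev_pwm(x):
    """GEV parameters (location xi, scale alpha, shape kappa) from sample PWMs."""
    x = np.sort(x)
    j = np.arange(1, x.size + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) / (x.size - 1) * x) / x.size
    b2 = np.sum((j - 1) * (j - 2) / ((x.size - 1) * (x.size - 2)) * x) / x.size
    c = (2 * b1 - b0) / (3 * b2 - b0) - log(2) / log(3)
    k = 7.8590 * c + 2.9554 * c ** 2                 # Hosking et al. (1985) approximation
    a = (2 * b1 - b0) * k / (gamma(1 + k) * (1 - 2 ** -k))
    xi = b0 + a * (gamma(1 + k) - 1) / k
    return xi, a, k

def gev_quantile(p, xi, a, k):
    return xi + a / k * (1 - (-np.log(p)) ** k)

rng = np.random.default_rng(9)
flows = gev_quantile(rng.random(30), 100.0, 30.0, -0.1)   # 30-year synthetic record
xi, a, k = gev_pwm(flows)
print("T=100yr flood estimate:", round(gev_quantile(0.99, xi, a, k), 1))
```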

14.
Transdimensional Bayesian inversion of frequency-domain airborne electromagnetic data
Traditional gradient-based inversion methods are widely used in processing frequency-domain airborne electromagnetic data; however, such methods depend strongly on the initial model and easily become trapped in local minima. To address this problem, this paper applies an improved transdimensional Bayesian inversion method to airborne electromagnetic data. The method draws random samples of the inversion model from a proposal distribution and screens reasonable candidate models according to an acceptance probability, finally yielding the probability distribution and uncertainty information of the inversion model. To overcome the poor performance of Bayesian inversion for deep low-resistivity layers, reasonable weighting coefficients are introduced to adjust the strength of the constraints on the inversion model, which substantially improves the inversion results. The model statistics are also improved: while keeping the original sampling scheme and acceptance criterion, models that satisfy the data-fit requirement are included in the statistics, weakening the influence of unreasonable models on the statistical results. Finally, the method is validated by inverting both synthetic data contaminated with Gaussian noise and field data, and by comparing the results with Occam inversion.
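A heavily simplified transdimensional (birth/death) MCMC sketch in the spirit described above, fitting a piecewise-constant log-resistivity profile to noisy synthetic "data" (here the profile itself, observed directly, instead of an EM forward model). Proposing new interfaces and values from their uniform priors, with a uniform prior on the number of layers, lets the acceptance reduce to a likelihood ratio; the full reversible-jump acceptance terms are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(10)
z = np.linspace(0, 300, 120)                       # depth axis, m
true = np.where(z < 100, 2.0, np.where(z < 200, 1.0, 2.5))
d_obs = true + rng.normal(0, 0.15, z.size)
sigma = 0.15

def predict(ifaces, vals):
    return vals[np.searchsorted(np.sort(ifaces), z)]

def log_lik(ifaces, vals):
    return -np.sum((d_obs - predict(ifaces, vals)) ** 2) / (2 * sigma ** 2)

ifaces, vals = np.array([150.0]), np.array([1.5, 1.5])
ll = log_lik(ifaces, vals)
for _ in range(30000):
    move = rng.choice(["birth", "death", "perturb"])
    if move == "birth" and ifaces.size < 8:        # add an interface and a value
        new_i = np.append(ifaces, rng.uniform(0, 300))
        new_v = np.insert(vals, rng.integers(vals.size), rng.uniform(0, 4))
    elif move == "death" and ifaces.size > 1:      # remove an interface and a value
        j = rng.integers(ifaces.size)
        new_i, new_v = np.delete(ifaces, j), np.delete(vals, j)
    else:                                          # perturb layer values
        new_i = ifaces
        new_v = vals + rng.normal(0, 0.1, vals.size)
    new_ll = log_lik(new_i, new_v)
    if np.log(rng.random()) < new_ll - ll:         # simplified acceptance
        ifaces, vals, ll = new_i, new_v, new_ll
print("layers:", vals.round(2), "interfaces:", np.sort(ifaces).round(0))
```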

15.
Enhancing the resolution and accuracy of surface ground-penetrating radar (GPR) reflection data by inverse filtering to recover a zero-phased band-limited reflectivity image requires a deconvolution technique that takes the mixed-phase character of the embedded wavelet into account. In contrast, standard stochastic deconvolution techniques assume that the wavelet is minimum phase and, hence, often meet with limited success when applied to GPR data. We present a new general-purpose blind deconvolution algorithm for mixed-phase wavelet estimation and deconvolution that (1) uses the parametrization of a mixed-phase wavelet as the convolution of the wavelet's minimum-phase equivalent with a dispersive all-pass filter, (2) includes prior information about the wavelet to be estimated in a Bayesian framework, and (3) relies on the assumption of a sparse reflectivity. Solving the normal equations using the data autocorrelation function provides an inverse filter that optimally removes the minimum-phase equivalent of the wavelet from the data, which leaves traces with a balanced amplitude spectrum but distorted phase. To compensate for the remaining phase errors, we invert in the frequency domain for an all-pass filter, thereby taking advantage of the fact that the action of the all-pass filter is exclusively contained in its phase spectrum. A key element of our algorithm, and a novelty in blind deconvolution, is the inclusion of prior information that allows resolving ambiguities in polarity and timing that cannot be resolved using the sparseness measure alone. We employ a global inversion approach for non-linear optimization to find the all-pass filter phase values for each signal frequency. We tested the robustness and reliability of our algorithm on synthetic data with different wavelets, 1-D reflectivity models of different complexity, varying levels of added noise, and different types of prior information. When applied to realistic synthetic 2-D data and 2-D field data, we obtain images with increased temporal resolution compared to the results of standard processing.
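A sketch of only the first stage of the algorithm above: a least-squares spiking filter derived from the trace autocorrelation (the normal equations), which removes the minimum-phase equivalent of the wavelet and balances the amplitude spectrum. The all-pass phase correction and the Bayesian priors are not reproduced, and the wavelet and reflectivity are synthetic.

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

rng = np.random.default_rng(11)
wavelet = np.array([0.3, 1.0, -0.6, -0.25, 0.1])    # mixed-phase, hypothetical
refl = rng.choice([0.0, 1.0], size=400, p=[0.95, 0.05]) * rng.normal(0, 1, 400)
trace = np.convolve(refl, wavelet, mode="full")[:400] + rng.normal(0, 0.01, 400)

nf = 40                                             # inverse-filter length
ac = np.correlate(trace, trace, mode="full")[trace.size - 1:][:nf]
ac[0] *= 1.01                                       # pre-whitening for stability
rhs = np.zeros(nf)
rhs[0] = ac[0]                                      # desired output: a spike at lag 0
f = solve_toeplitz(ac, rhs)                         # Levinson-type Toeplitz solve
decon = lfilter(f, [1.0], trace)                    # apply the inverse filter
print("output autocorrelation (spike-like):",
      np.correlate(decon, decon, mode="full")[decon.size - 1:][:3].round(2))
```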

16.
To reduce the dependence of EM inversion on the choice of initial model and to obtain the global minimum, we apply transdimensional Bayesian inversion to time-domain airborne electromagnetic data. The transdimensional Bayesian inversion uses the Monte Carlo method to search the model space and yields models that simultaneously satisfy the acceptance probability and data fitting requirements. Finally, we obtain the probability distribution and uncertainty of the model parameters as well as the maximum probability. Because it is difficult to know the height of the transmitting source during flight, we consider a fixed and a variable flight height. Furthermore, we introduce weights into the prior probability density function of the resistivity and adjust the constraint strength in the inversion model by changing the weighing coefficients. This effectively solves the problem of unsatisfactory inversion results in the middle high-resistivity layer. We validate the proposed method by inverting synthetic data with 3% Gaussian noise and field survey data.

17.
Categorical data play an important role in a wide variety of spatial applications, while modeling and predicting this type of statistical variable has proved to be complex in many cases. Among other possible approaches, the Bayesian maximum entropy methodology has been developed and advocated for this goal and has been successfully applied in various spatial prediction problems. This approach aims at building a multivariate probability table from bivariate probability functions used as constraints that need to be fulfilled, in order to compute a posterior conditional distribution that accounts for hard or soft information sources. In this paper, our goal is to generalize the theoretical results further in order to account for a much wider range of information sources, such as probability inequalities. We first show how the maximum entropy principle can be implemented efficiently using a linear iterative approximation based on a minimum norm criterion, where the minimum norm solution is obtained at each step from simple matrix operations that converge to the requested maximum entropy solution. Based on this result, we then show how the maximum entropy problem can be related to the more general minimum divergence problem, which might involve equality and inequality constraints and which can be solved based on iterated minimum norm solutions. This allows us to account for a much larger panel of information types, where more qualitative information, such as probability inequalities, can be used. When combined with a Bayesian data fusion approach, this also deals with the case of potentially conflicting information. Although the theoretical results presented in this paper can be applied to any study (spatial or non-spatial) involving categorical data in general, the results are illustrated in a spatial context where the goal is to best predict the occurrence of cultivated land in Ethiopia based on crowdsourced information. The results emphasize the benefit of the methodology, which integrates conflicting information and provides a spatially exhaustive map of these occurrence classes over the whole country.
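Not the paper's minimum-norm iteration, but a classic alternative for the equality-constrained case it generalizes: iterative proportional fitting builds the maximum-entropy joint table consistent with given marginals (univariate here for brevity, bivariate in the paper). The categories and marginal values are invented.

```python
import numpy as np

row_marg = np.array([0.5, 0.3, 0.2])     # e.g. P(land-cover class)
col_marg = np.array([0.6, 0.4])          # e.g. P(cultivated / not cultivated)

table = np.ones((3, 2)) / 6              # start from the uniform (max-entropy) table
for _ in range(100):
    table *= (row_marg / table.sum(axis=1))[:, None]   # enforce row constraints
    table *= col_marg / table.sum(axis=0)              # enforce column constraints

print(table.round(4))                    # rows/cols now reproduce both marginals
```

With only marginal equality constraints the fixed point is the independence table `row_marg[:, None] * col_marg[None, :]`; handling probability inequalities requires the more general minimum-divergence machinery described above.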

18.
Spatial heterogeneity in groundwater systems introduces significant challenges in groundwater modeling and parameter calibration. In order to mitigate the modeling uncertainty, data assimilation...

19.
We introduce the Bayesian hierarchical modeling approach for analyzing observational data from marine ecological studies using a data set intended for inference on the effects of bottom-water hypoxia on macrobenthic communities in the northern Gulf of Mexico off the coast of Louisiana, USA. We illustrate (1) the process of developing a model, (2) the use of the hierarchical model results for statistical inference through innovative graphical presentation, and (3) a comparison to the conventional linear modeling approach (ANOVA). Our results indicate that the Bayesian hierarchical approach is better able to detect a “treatment” effect than classical ANOVA while avoiding several arbitrary assumptions necessary for linear models, and is also more easily interpreted when presented graphically. These results suggest that the hierarchical modeling approach is a better alternative than conventional linear models and should be considered for the analysis of observational field data from marine systems.
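A hedged sketch of partial pooling, the core of the hierarchical approach: a Gibbs sampler for a normal means model y_ij ~ N(theta_j, sigma^2), theta_j ~ N(mu, tau^2), applied to synthetic "station" groups rather than the Gulf of Mexico data. Fixing sigma and tau and using a flat prior on mu keeps the sampler to two conditional updates.

```python
import numpy as np

rng = np.random.default_rng(12)
J, n_j = 6, 8                                        # stations x replicates
true_theta = rng.normal(10, 2, J)
y = true_theta[:, None] + rng.normal(0, 3, (J, n_j)) # observations per station

sigma2, tau2 = 9.0, 4.0                              # assumed known variances
mu, theta = 0.0, y.mean(axis=1).copy()
draws = []
for _ in range(4000):
    # theta_j | rest: precision-weighted compromise of station mean and mu
    prec = n_j / sigma2 + 1 / tau2
    mean = (y.sum(axis=1) / sigma2 + mu / tau2) / prec
    theta = rng.normal(mean, np.sqrt(1 / prec))
    # mu | rest: average of the theta_j under the flat prior
    mu = rng.normal(theta.mean(), np.sqrt(tau2 / J))
    draws.append(theta.copy())
post = np.array(draws[1000:])
print("posterior means:", post.mean(axis=0).round(2))
print("raw station means:", y.mean(axis=1).round(2))  # note shrinkage toward mu
```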

20.
Snow water equivalent prediction using Bayesian data assimilation methods
Using the U.S. National Weather Service’s SNOW-17 model, this study compares common sequential data assimilation methods, the ensemble Kalman filter (EnKF), the ensemble square root filter (EnSRF), and four variants of the particle filter (PF), to predict seasonal snow water equivalent (SWE) within a small watershed near Lake Tahoe, California. In addition to SWE estimation, the various data assimilation methods are used to estimate five of the most sensitive parameters of SNOW-17 by allowing them to evolve with the dynamical system. Unlike Kalman filters, particle filters do not require Gaussian assumptions for the posterior distribution of the state variables. However, the likelihood function used to scale particle weights is often assumed to be Gaussian. This study evaluates the use of an empirical cumulative distribution function (ECDF) based on the Kaplan–Meier survival probability method to compute particle weights. These weights are then used in different particle filter resampling schemes. Detailed analyses are conducted for synthetic and real data assimilation and an assessment of the procedures is made. The results suggest that the particle filter, especially the empirical likelihood variant, is superior to the ensemble Kalman filter based methods for predicting model states, as well as model parameters.
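A minimal bootstrap particle filter sketch (with the commonly assumed Gaussian likelihood, not the ECDF-based weights evaluated in the study) around a toy degree-day snow model in place of SNOW-17: particles carry SWE plus an uncertain melt-rate parameter that evolves with the system, and are resampled after each observation. All forcing and observation data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(13)
T = 120
air_t = 5 * np.sin(np.linspace(0, np.pi, T)) - 1 + rng.normal(0, 1, T)
precip = np.where(rng.random(T) < 0.3, rng.exponential(5, T), 0.0)

def step(swe, melt_rate, t_air, p):
    melt = np.maximum(t_air, 0) * melt_rate          # degree-day melt
    snow = p * (t_air < 1.0)                         # snowfall if cold enough
    return np.maximum(swe + snow - melt, 0.0)

# Synthetic truth and noisy SWE observations (e.g. snow-pillow data).
true_swe, truth = np.zeros(T), 0.0
for t in range(T):
    truth = step(truth, 2.0, air_t[t], precip[t])
    true_swe[t] = truth
obs = true_swe + rng.normal(0, 3, T)

N = 2000
swe = np.zeros(N)
melt_rate = rng.uniform(0.5, 4.0, N)                 # parameter estimated jointly
for t in range(T):
    swe = step(swe, melt_rate, air_t[t], precip[t]) + rng.normal(0, 1, N)
    swe = np.maximum(swe, 0.0)
    w = np.exp(-0.5 * ((obs[t] - swe) / 3.0) ** 2)   # Gaussian likelihood weights
    w /= w.sum()
    idx = rng.choice(N, N, p=w)                      # multinomial resampling
    swe, melt_rate = swe[idx], melt_rate[idx] + rng.normal(0, 0.02, N)
print("final melt-rate estimate:", melt_rate.mean().round(2), "(truth 2.0)")
```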

