Similar Literature
1.
3. Nonparametric correlation coefficients. The correlation coefficients introduced in the previous section require the variables to be normally distributed. Because the probability distributions of geological variables are of complicated types, nonparametric methods are needed when the distribution of a variable is unknown. Such methods require only that the variable have a continuous distribution function, of whatever type, and are therefore well suited to the statistical analysis of geological data. Several ways of computing nonparametric correlation coefficients are introduced below.
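Of the nonparametric coefficients such a section typically goes on to introduce, Spearman's rank correlation and Kendall's tau are the most widely used. A minimal sketch with SciPy, on made-up paired measurements (the data are illustrative only):

```python
import numpy as np
from scipy.stats import spearmanr, kendalltau

# Hypothetical paired geological measurements (e.g., two element grades)
x = np.array([1.2, 3.4, 2.8, 5.1, 4.0, 6.3, 5.9, 7.2])
y = np.array([0.8, 2.9, 2.1, 4.8, 3.5, 6.0, 6.4, 7.0])

# Rank-based coefficients assume nothing about the distribution type,
# only that each variable has a continuous distribution function.
rho, p_rho = spearmanr(x, y)
tau, p_tau = kendalltau(x, y)
print(f"Spearman rho = {rho:.3f}, Kendall tau = {tau:.3f}")
```

Both coefficients operate on ranks only, which is exactly why the distribution type of the raw variable does not matter.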

2.
To analyze the heterogeneity of water-bearing media, a conditional simulation method based on Markov-chain analysis of stratigraphic sequences was adopted, in which spectral analysis of the transition probability matrix replaces the variogram of classical geostatistics; the structural model was built using co-indicator kriging and conditional simulation. Taking the North China Plain as an example, the construction of the simulated structure and its application in a groundwater flow model are described. Comparison of the simulated results with measured data shows that the simulation objectively reflects the depositional pattern of the whole study area, and that assigning hydrogeological parameters by lithology resolves the heterogeneity of water-bearing media more objectively and accurately. The advantages and disadvantages of the method are analyzed, and directions for further research are proposed.
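The core quantity in the Markov approach is a facies transition probability matrix estimated from a lithological sequence. A minimal sketch, using an invented three-facies log (the codes and sequence are not from the paper):

```python
import numpy as np

# Invented vertical lithology log, coded 0 = clay, 1 = silt, 2 = sand
log = [0, 0, 1, 2, 2, 2, 1, 0, 0, 1, 2, 2, 1, 1, 0]

n_facies = 3
counts = np.zeros((n_facies, n_facies))
for a, b in zip(log[:-1], log[1:]):   # count upward transitions
    counts[a, b] += 1

# Row-normalize: T[i, j] = P(next facies = j | current facies = i)
T = counts / counts.sum(axis=1, keepdims=True)
print(np.round(T, 2))
```

In the paper's workflow, a matrix of this kind (and its spectral analysis) takes the structural role that the variogram plays in classical geostatistics.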

3.
A nonparametric method for grade evaluation of water resources systems (cited by 6, 0 self-citations)
Grade evaluation of water resources systems is a current focus, and difficulty, of water resources systems engineering. Conventional evaluation methods require the functional form of the evaluation model to be specified in advance from prior knowledge, which adapts poorly to complex, changeable real evaluation systems. A new nonparametric grade evaluation method (NGEM) is therefore proposed. NGEM needs no assumed form for the evaluation function: it mines the evaluation information implicit in the grading-standard data directly, determines the index weights objectively by a projection pursuit method driven by an accelerated genetic algorithm, and builds the evaluation function by nonparametric regression. A worked example shows that NGEM is intuitive, simple, effective, and general, and is also applicable to the prediction and simulation of various systems.
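The projection-pursuit weighting step can be sketched by searching for a unit projection vector that maximizes a projection index. In this sketch, SciPy's differential evolution stands in for the paper's accelerated genetic algorithm, and the index is a deliberately simple spread measure, not the paper's index; the data are synthetic:

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(5)

# Hypothetical standardized evaluation-indicator matrix (rows = samples)
X = rng.normal(size=(40, 4))
X[:, 0] *= 3.0                       # indicator 0 carries most of the spread

def neg_index(a):
    a = a / np.linalg.norm(a)        # unit projection vector
    return -np.std(X @ a)            # toy projection index: spread only

# Global search over non-negative weights (stand-in for the GA step)
res = differential_evolution(neg_index, bounds=[(0.01, 1.0)] * 4, seed=0)
w = res.x / np.linalg.norm(res.x)
print("indicator weights ~", np.round(w, 2))
```

The normalized optimum assigns the largest weight to the indicator that discriminates most between samples, which is the role the weights play in NGEM.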

4.
Groundwater pollution risk assessment based on the stochastic collocation method (cited by 1, 0 self-citations)
Groundwater pollution events are difficult to observe directly and their settings typically involve many uncertainties; risk assessment quantifies the harm of such events. An efficient risk assessment model was built using a stochastic collocation model and polynomial sampling, accounting for uncertain factors such as hydraulic conductivity, porosity, and dispersivity, and the distribution of the risk function under complex conditions was examined. The results show that the stochastic-collocation-based model avoids repeatedly solving the advection-dispersion equation: the computationally cheap collocation step yields a Lagrange-polynomial representation of the random concentration field, from which polynomial sampling provides concentration samples and hence the risk function. Unlike traditional analytical approaches, no assumptions are needed about the distributions of the input parameters or of the concentration; compared with the traditional Monte Carlo method, the model is markedly more efficient and converges faster. The distribution types of the input parameters significantly affect the risk distribution.
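The surrogate idea behind stochastic collocation can be sketched in one dimension: evaluate the "expensive" model only at a few collocation nodes, build a Lagrange-polynomial surrogate, and run Monte Carlo on the cheap surrogate to estimate an exceedance risk. The model function, parameter distribution, and threshold below are invented stand-ins, not the paper's transport model:

```python
import numpy as np
from scipy.interpolate import BarycentricInterpolator

rng = np.random.default_rng(0)

def expensive_model(k):
    # Stand-in for solving the advection-dispersion equation:
    # peak concentration as a smooth function of a conductivity-like k
    return 2.0 * np.exp(-0.5 * (k - 1.0) ** 2)

# Collocation nodes: Chebyshev points mapped onto the parameter range
nodes = np.cos(np.pi * (2 * np.arange(7) + 1) / 14)   # in (-1, 1)
nodes = 1.5 * nodes + 1.0                             # roughly (-0.5, 2.5)
surrogate = BarycentricInterpolator(nodes, expensive_model(nodes))

# Cheap Monte Carlo on the surrogate instead of on the PDE solver
samples = rng.normal(1.0, 0.5, 100_000)
risk = np.mean(surrogate(samples) > 1.8)  # P(concentration > threshold)
print(f"exceedance probability ~ {risk:.3f}")
```

Seven model evaluations replace a hundred thousand, which is the efficiency advantage over plain Monte Carlo that the abstract describes.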

5.
李红霞  余震果 《地下水》2014,(1):27-28,34
This paper establishes a numerical model of steady unconfined groundwater flow between two channels using the radial basis function (RBF) collocation method. The accuracy of RBF collocation depends strongly on the value chosen for the shape parameter. The error between the computed approximation and the analytical solution is very small, showing that RBF collocation is an effective and highly accurate solution method.
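A minimal RBF collocation sketch for a 1-D boundary-value problem (a Kansa-type scheme with Gaussian basis functions; the governing equation here is a simple stand-in with a known exact solution, not the paper's unconfined-flow equation):

```python
import numpy as np

# Kansa-type RBF collocation for u''(x) = -1 on (0, 1), u(0) = u(1) = 0,
# whose exact solution is u(x) = x(1 - x)/2. As the abstract notes, the
# accuracy depends strongly on the shape parameter eps.
eps = 5.0
centers = np.linspace(0.0, 1.0, 15)
x = centers                           # collocate at the centers

def phi(x, c):
    return np.exp(-(eps * (x[:, None] - c[None, :])) ** 2)

def phi_xx(x, c):
    d = x[:, None] - c[None, :]
    return (4 * eps**4 * d**2 - 2 * eps**2) * np.exp(-(eps * d) ** 2)

# Interior rows enforce the ODE; first and last rows enforce the BCs.
A = phi_xx(x, centers)
A[0, :] = phi(x[:1], centers)[0]
A[-1, :] = phi(x[-1:], centers)[0]
b = np.full_like(x, -1.0)
b[0] = b[-1] = 0.0

coef = np.linalg.solve(A, b)
u = phi(x, centers) @ coef
print("max error:", np.abs(u - x * (1 - x) / 2).max())
```

Varying `eps` in this sketch reproduces the sensitivity to the shape parameter that the abstract emphasizes: too flat and the system becomes ill-conditioned, too peaked and accuracy degrades.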

6.
To determine the parameters of a nonlinear constitutive model for rockfill efficiently and accurately, a parameter inversion method based on the response surface methodology is proposed. The staged construction of a rockfill dam is simulated numerically with the finite element method; a response surface function is established for the settlement at the dam's deformation observation points, and the coefficients of the polynomial response surface are determined. Using the deformation observations at dam completion together with the fitted response surface, the constitutive model parameters are back-calculated by optimization. Application to a real project shows that the method is computationally efficient and that the predicted dam settlements agree well with field observations.
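The response-surface inversion loop can be sketched with a polynomial surrogate and a standard optimizer. The "FEM" below is an invented one-parameter stand-in for the staged-construction dam simulation, and all numbers are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def fem_settlement(E):
    # Stand-in for a full staged-construction FEM run: crest settlement
    # (mm) as a smooth function of a stiffness-like parameter E
    return 120.0 / E + 5.0

# 1) Run the "FEM" at a few design points and fit a polynomial
#    response surface to the computed settlements
E_design = np.linspace(2.0, 10.0, 9)
surface = np.poly1d(np.polyfit(E_design, fem_settlement(E_design), 3))

# 2) Invert: find the E whose response-surface settlement matches the
#    (synthetic) observation at dam completion
s_obs = fem_settlement(4.0) + rng.normal(0.0, 0.1)
res = minimize(lambda E: (surface(E[0]) - s_obs) ** 2, x0=[6.0],
               bounds=[(2.0, 10.0)])
print("back-calculated E ~", res.x[0])
```

The efficiency gain is the same as in the paper: the optimizer queries the cheap polynomial thousands of times, while the expensive simulation runs only at the handful of design points.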

7.
The hydraulic conductivity of the soil and rock in colluvial landslides is uncertain, and conductivity is a key parameter in saturated-unsaturated seepage analysis, so seepage-deformation analysis of reservoir-bank colluvial landslides that accounts for its spatial variability is of real significance. Taking the Baishuihe landslide in the Three Gorges Reservoir area as the study object, conductivities obtained by surface nuclear magnetic resonance were used to characterize the spatial variability of the landslide mass; the vertical scale of fluctuation was obtained with the semivariogram method, and on that basis a non-stationary random field model of conductivity was built. Fluid-solid coupled simulations of the uncertain model and a deterministic model under rising and falling reservoir levels were carried out with non-intrusive stochastic finite elements, and the seepage fields, displacement behavior, and their differences were compared. The results show that, relative to the deterministic model, the lag in the pore-pressure response of the uncertain model is more pronounced, and its overall displacement under reservoir drawdown is larger; ignoring the non-stationary spatial variability of the sliding mass's conductivity underestimates the actual landslide deformation.

9.
Comparison of four turbulence models for simulating cavitating flow (cited by 4, 2 self-citations)
Four turbulence models were used to simulate cavitating flow over a NACA66 hydrofoil, and the effects of the turbulence model and the wall function on the simulation results were analyzed. The results show that the ASM and RNG k-ε models are insensitive to the cavitation number and have high accuracy; the Realizable k-ε model is very sensitive to the cavitation number, with accuracy varying greatly between cavitation numbers; and the standard k-ε model is insensitive to the cavitation number but less accurate. The enhanced wall treatment is insensitive to the cavitation number and accurate, while the standard and non-equilibrium wall functions show some sensitivity to it, although their results differ little.

10.
胡亚元 《岩土力学》2005,26(Z1):9-12
Because classical plasticity cannot justify non-associated flow rules theoretically from Drucker's postulate, an elastoplastic model for soil is developed here from the basic principles of continuum thermodynamics. Within rate-independent plasticity, multiple yield criteria and their flow rules are constructed from the Gibbs free energy and several independent dissipation functions; it is proved that the number of yield criteria equals the number of independent dissipation functions, and the influence of the form of the dissipation function on the yield criteria and plastic flow rules is analyzed. A new family of Gibbs free energy and dissipation function expressions for clay is examined that accommodates both associated and non-associated flow rules; Yin Zongze's double-yield-surface model is a special case of it, but the new model has a clearer physical meaning and can handle the non-associated case. Model parameters were fitted from laboratory tests on clay, and comparison with measured stress-strain curves shows that the new model can simulate the multi-yield-surface constitutive behavior of clay.

11.
王文川  雷冠军  刘宽 《水文》2017,37(5):1-7
Hydrological series are of limited length, so parameter estimation for the frequency curve carries sampling error; weighted curve fitting can reduce this error considerably. The nomograph of the fuzzy weighted optimal curve-fitting method is of limited length, and its membership function is not premised on an infinitely large sample. The method is therefore improved: a half-descending normal membership function is proposed, and the nomograph is extended by large-sample statistical experiments. The improved method is tested on ideal data, Monte Carlo random numbers, and measured series, using unbiasedness and efficiency as evaluation criteria and analyzing the statistical results with scoring and percentage methods. The results show that the improved fuzzy weighted optimal curve-fitting method has good statistical properties and high fitting accuracy, which should promote its application in practical engineering.
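A half-descending normal membership function of the kind proposed can be written down directly; the breakpoint and decay parameters below are illustrative choices, not the paper's calibrated values:

```python
import numpy as np

def half_normal_membership(d, a=0.0, sigma=1.0):
    """Half-descending normal membership: full membership (1.0) for
    deviations up to a, then a Gaussian-shaped decay toward 0."""
    d = np.asarray(d, dtype=float)
    out = np.ones_like(d)
    tail = d > a
    out[tail] = np.exp(-((d[tail] - a) / sigma) ** 2)
    return out

# Membership of deviations between empirical and fitted frequencies
devs = np.array([0.0, 0.5, 1.0, 2.0])
m = half_normal_membership(devs, a=0.5, sigma=1.0)
print(np.round(m, 4))
```

Small deviations keep full weight in the fitting objective, while larger deviations are discounted smoothly rather than cut off.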

12.
A soil deposit subjected to seismic loading can be viewed as a binary system: it will either liquefy or not liquefy. Generalized linear models are versatile tools for predicting the response of a binary system and hence potentially applicable to liquefaction prediction. In this study, the applicability of four generalized linear models (i.e., logistic, probit, log–log, and c-log–log) for liquefaction potential evaluation is assessed and compared. Eight liquefaction models based on the four generalized linear models and two sets of explanatory variables are evaluated. These models are first calibrated with past liquefaction performance data. A weighted-likelihood function method is used to consider the sampling bias in the calibration database. The predicted liquefaction probabilities from various models are then compared. When liquefaction probability is small, the predicted liquefaction probability is sensitive to the regression models used. The effect of sampling bias is more marked in the high cyclic stress ratio region. The eight models are finally ranked using a Bayesian model comparison method. For the generalized linear models examined, the logistic and c-log–log regression models are most supported by the past performance data. On the other hand, the probit and c-log–log regression models are much less applicable to liquefaction prediction.
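The weighted-likelihood calibration of a logistic model can be sketched with scikit-learn's sample weights. The data and the assumed population class proportions below are synthetic, not the paper's case-history database:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic case histories; the two explanatory variables could stand
# for, e.g., normalized penetration resistance and cyclic stress ratio
n = 400
X = rng.normal(size=(n, 2))
true_beta = np.array([-2.0, 1.5])
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ true_beta))))  # 1 = liquefied

# Weighted likelihood: reweight each record so the sample class balance
# matches an assumed population class balance (the 0.8/0.2 is invented)
pop = {0: 0.8, 1: 0.2}
counts = np.bincount(y) / n
weights = np.array([pop[c] / counts[c] for c in y])

model = LogisticRegression().fit(X, y, sample_weight=weights)
proba = model.predict_proba(X[:5])[:, 1]   # liquefaction probabilities
print(np.round(proba, 3))
```

Changing the link function (probit, log-log, c-log-log) changes only how the linear predictor is mapped to a probability; the weighting step is the same in each case.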

13.
Estimating Variogram Uncertainty (cited by 10, 0 self-citations)
The variogram is central to any geostatistical survey, but the precision of a variogram estimated from sample data by the method of moments is unknown. It is important to be able to quantify variogram uncertainty to ensure that the variogram estimate is sufficiently accurate for kriging. In previous studies, theoretical expressions have been derived to approximate uncertainty in both estimates of the experimental variogram and fitted variogram models. These expressions rely upon various statistical assumptions about the data and are largely untested. They express variogram uncertainty as functions of the sampling positions and the underlying variogram. Thus the expressions can be used to design efficient sampling schemes for estimating a particular variogram. Extensive simulation tests show that for a Gaussian variable with a known variogram, the expression for the uncertainty of the experimental variogram estimate is accurate. In practice, however, the variogram of the variable is unknown and the fitted variogram model must be used instead. For sampling schemes of 100 points or more this has only a small effect on the accuracy of the uncertainty estimate. The theoretical expressions for the uncertainty of fitted variogram models generally overestimate the precision of fitted parameters. The uncertainty of the fitted parameters can be determined more accurately by simulating multiple experimental variograms and fitting variogram models to these. The tests emphasize the importance of distinguishing between the variogram of the field being surveyed and the variogram of the random process which generated the field. These variograms are not necessarily identical. Most studies of variogram uncertainty describe the uncertainty associated with the variogram of the random process. Generally, however, it is the variogram of the field being surveyed which is of interest.
For intensive sampling schemes, estimates of the field variogram are significantly more precise than estimates of the random process variogram. It is important, when designing efficient sampling schemes or fitting variogram models, that the appropriate expression for variogram uncertainty is applied.
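The method-of-moments (Matheron) estimator discussed here can be sketched for a 1-D transect; the field below is a toy correlated series, not survey data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D transect with spatial correlation (a scaled random walk)
x = np.arange(200.0)
z = 0.1 * np.cumsum(rng.normal(size=200))

def experimental_variogram(x, z, lags, tol=0.5):
    """Matheron's method-of-moments estimator on 1-D data."""
    dx = np.abs(x[:, None] - x[None, :])
    dz2 = (z[:, None] - z[None, :]) ** 2
    # 2*gamma(h) = mean squared difference over pairs separated by ~h
    return np.array([dz2[np.abs(dx - h) <= tol].mean() / 2 for h in lags])

lags = np.arange(1, 21)
gamma = experimental_variogram(x, z, lags)
print(np.round(gamma[:5], 4))
```

Repeating this on many simulated fields with the same underlying variogram, as the paper recommends, shows directly how much the experimental points scatter, which is the uncertainty being quantified.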

14.
Training Images from Process-Imitating Methods (cited by 2, both self-citations)
The lack of a suitable training image is one of the main limitations of the application of multiple-point statistics (MPS) for the characterization of heterogeneity in real case studies. Process-imitating facies modeling techniques can potentially provide training images. However, the parameterization of these process-imitating techniques is not straightforward. Moreover, reproducing the resulting heterogeneous patterns with standard MPS can be challenging. Here the statistical properties of the paleoclimatic data set are used to select the best parameter sets for the process-imitating methods. The data set is composed of 278 lithological logs drilled in the lower Namoi catchment, New South Wales, Australia. A good understanding of the hydrogeological connectivity of this aquifer is needed to tackle groundwater management issues. The spatial variability of the facies within the lithological logs and calculated models is measured using fractal dimension, transition probability, and vertical facies proportion. To accommodate the vertical proportions trend of the data set, four different training images are simulated. The grain size is simulated alongside the lithological codes and used as an auxiliary variable in the direct sampling implementation of MPS. In this way, one can obtain conditional MPS simulations that preserve the quality and the realism of the training images simulated with the process-imitating method. The main outcome of this study is the possibility of obtaining MPS simulations that respect the statistical properties observed in the real data set and honor the observed conditioning data, while preserving the complex heterogeneity generated by the process-imitating method. In addition, it is demonstrated that an equilibrium of good fit among all the statistical properties of the data set should be considered when selecting a suitable set of parameters for the process-imitating simulations.

15.
16.
Logistic regression is a widely used statistical method to relate a binary response variable to a set of explanatory variables and maximum likelihood is the most commonly used method for parameter estimation. A maximum-likelihood logistic regression (MLLR) model predicts the probability of the event from binary data defining the event. Currently, MLLR models are used in a myriad of fields including geosciences, natural hazard evaluation, medical diagnosis, homeland security, finance, and many others. In such applications, the empirical sample data often exhibit class imbalance, where one class is represented by a large number of events while the other is represented by only a few. In addition, the data also exhibit sampling bias, which occurs when there is a difference between the class distribution in the sample compared to the actual class distribution in the population. Previous studies have evaluated how class imbalance and sampling bias affect the predictive capability of asymptotic classification algorithms such as MLLR, yet no definitive conclusions have been reached.

17.
3D geological models are created to integrate a set of input measurements into a single geological model. There are many problems with this approach, as there is uncertainty at every stage of the modelling process, from the initial data collection to the scheme used to calculate the geological model itself. This study looks at the uncertainty inherent in geological models due to data density and introduces a novel method to upscale geological data that optimises the information in the initial dataset. This method also allows the dominant trend of a geological dataset to be determined at different scales. By using self-organizing maps (SOMs) to examine the different metrics used to quantify a geological model, we allow for a larger range of metrics than traditional statistical methods permit, owing to the SOM's ability to deal with incomplete datasets. Classifying the models into clusters based on the geological metrics using k-means clustering provides useful insight into which models are most similar and which are statistical outliers. Our approach is guided and can be applied to any input dataset of this type to determine the effect that data density will have on the resultant model. The resulting models are all statistical derivations that represent simplifications of the initial dataset at different scales and can be used to interrogate the scale of observations.
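The metric-clustering step can be sketched with k-means alone (the SOM stage is omitted here); the metric table below is synthetic, not derived from real geological models:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)

# Hypothetical metric table: one row per geological model, one column
# per summary metric (e.g., unit volumes, contact areas, trend angles)
metrics = np.vstack([
    rng.normal(0.0, 0.3, size=(20, 4)),   # models built from dense data
    rng.normal(2.0, 0.3, size=(20, 4)),   # models built from sparse data
])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(metrics)
labels = km.labels_

# Models far from their cluster centre flag as statistical outliers
dists = np.linalg.norm(metrics - km.cluster_centers_[labels], axis=1)
print("most outlying model:", int(np.argmax(dists)))
```

The cluster assignments group statistically similar models, and the distance to the assigned centre gives a simple outlier score of the kind the study uses.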

18.
Proxy reconstructions of climatic parameters developed using transfer functions are central to the testing of many palaeoclimatic hypotheses on Holocene timescales. However, recent work shows that the mathematical models underpinning many existing transfer functions are susceptible to spatial autocorrelation, clustered training set design and the uneven sampling of environmental gradients. This may result in over‐optimistic performance statistics or, in extreme cases, a lack of predictive power. A new testate amoeba‐based transfer function is presented that fully incorporates the new recommended statistical tests to address these issues. Leave‐one‐out cross‐validation, the most commonly applied method in recent studies to assess model performance, produced over‐optimistic performance statistics for all models tested. However, the preferred model, developed using weighted averaging with tolerance downweighting, retained a predictive capacity equivalent to other published models even when less optimistic performance statistics were chosen. Application of the new statistical tests in the development of transfer functions provides a more thorough assessment of performance and greater confidence in reconstructions based on them. Only when the wider research community have sufficient confidence in transfer function‐based proxy reconstructions will they be commonly used in data comparison and palaeoclimate modelling studies of broader scientific relevance. Copyright © 2012 John Wiley & Sons, Ltd.

19.
余翔宇  徐义贤 《地球科学》2015,40(3):419-424
Insufficient geological sampling information is a major constraint on deep 3D geological modeling, whereas deep physical-property survey data are easy to acquire and can effectively support a visualized model. Building on this, a 3D geological modeling method based on physical-property survey data was developed in the course of a geological survey project. First, the mapping between physical-property parameters and the corresponding geological attributes is extracted from laboratory property measurements on rock samples; then the property data acquired by different geophysical methods are jointly modeled and interpreted; finally, the interpreted visual model is converted into a 3D geological model. Practice shows that the method can solve, in a targeted way, some of the deep 3D geological modeling problems encountered in the project.

20.
The conventional paradigm for predicting future reservoir performance from existing production data involves the construction of reservoir models that match the historical data through iterative history matching. This is generally an expensive and difficult task and often results in models that do not accurately assess the uncertainty of the forecast. We propose an alternative re-formulation of the problem, in which the role of the reservoir model is reconsidered. Instead of using the model to match the historical production, and then forecasting, the model is used in combination with Monte Carlo sampling to establish a statistical relationship between the historical and forecast variables. The estimated relationship is then used in conjunction with the actual production data to produce a statistical forecast. This allows quantifying posterior uncertainty on the forecast variable without explicit inversion or history matching. The main rationale behind this is that the reservoir model is highly complex and even so, still remains a simplified representation of the actual subsurface. As statistical relationships can generally only be constructed in low dimensions, compression and dimension reduction of the reservoir models themselves would result in further oversimplification. Conversely, production data and forecast variables are time series data, which are simpler and much more applicable for dimension reduction techniques. We present a dimension reduction approach based on functional data analysis (FDA), and mixed principal component analysis (mixed PCA), followed by canonical correlation analysis (CCA) to maximize the linear correlation between the forecast and production variables. Using these transformed variables, it is then possible to apply linear Gaussian regression and estimate the statistical relationship between the forecast and historical variables. 
This relationship is used in combination with the actual observed historical data to estimate the posterior distribution of the forecast variable. Sampling from this posterior and reconstructing the corresponding forecast time series allows assessing uncertainty on the forecast. This workflow will be demonstrated on a case based on a Libyan reservoir and compared with traditional history matching.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号