Similar Documents
 20 similar documents retrieved (search time: 31 ms)
1.
A new method for estimating the probability distribution function of trace length and its application   Cited: 1 (self-citations: 0, citations by others: 1)
This paper proposes a new method for estimating the probability distribution function of joint trace length. Its advantage is that it does not require data on the lengths of joint traces exposed in the sampling window, which substantially reduces the fieldwork required.

2.
A new method for estimating the probability distribution function of trace length and its application   Cited: 1 (self-citations: 0, citations by others: 1)
This paper proposes a new method for estimating the probability distribution function of joint trace length. Its advantage is that it does not require data on the lengths of joint traces exposed in the sampling window, which substantially reduces the fieldwork required.

3.
《Gondwana Research》2010,17(3-4):512-526
The spatial distribution of deep slow earthquake activity along the strike of the subducting Philippine Sea Plate in southwest Japan is investigated. These events usually occur simultaneously between the megathrust seismogenic zone and the deeper free-slip zone on the plate interface at depths of about 30 km. Deep low-frequency tremors are weak, prolonged vibrations with dominant frequencies of 1.5–5 Hz, whereas low-frequency earthquakes correspond to isolated pulses included within the tremors. Deep very-low-frequency earthquakes have long-period (20 s) seismic signals, and short-term slow-slip events are crustal deformations lasting for several days. Slow earthquake activity is not spatially homogeneous but is separated into segments, some of which are bounded by gaps in activity. The spatial distributions of the different phases of slow earthquake activity usually coincide, although there are some inconsistencies. Very-low-frequency earthquakes occur mainly at the edges of segments. Low-frequency earthquakes corresponding to tremors of relatively large amplitude are concentrated at spots where tremors are densely distributed within segments. The separation of segments by gaps suggests large differences in stick-slip and stable sliding caused by frictional properties of the plate interface. Within each segment, variations in the spatial distribution of slow earthquakes reflect inhomogeneities corresponding to the characteristic scales of events.

4.
Kazushige Obara   《Gondwana Research》2009,16(3-4):512-526
The spatial distribution of deep slow earthquake activity along the strike of the subducting Philippine Sea Plate in southwest Japan is investigated. These events usually occur simultaneously between the megathrust seismogenic zone and the deeper free-slip zone on the plate interface at depths of about 30 km. Deep low-frequency tremors are weak, prolonged vibrations with dominant frequencies of 1.5–5 Hz, whereas low-frequency earthquakes correspond to isolated pulses included within the tremors. Deep very-low-frequency earthquakes have long-period (20 s) seismic signals, and short-term slow-slip events are crustal deformations lasting for several days. Slow earthquake activity is not spatially homogeneous but is separated into segments, some of which are bounded by gaps in activity. The spatial distributions of the different phases of slow earthquake activity usually coincide, although there are some inconsistencies. Very-low-frequency earthquakes occur mainly at the edges of segments. Low-frequency earthquakes corresponding to tremors of relatively large amplitude are concentrated at spots where tremors are densely distributed within segments. The separation of segments by gaps suggests large differences in stick-slip and stable sliding caused by frictional properties of the plate interface. Within each segment, variations in the spatial distribution of slow earthquakes reflect inhomogeneities corresponding to the characteristic scales of events.

5.
A spatial quantile regression model is proposed to estimate the quantile curve for a given probability of non-exceedance, as a function of location and covariates. Canonical vine copulas are used to represent the spatial dependence structure. The marginal at each location is an asymmetric Laplace distribution whose parameters are functions of the covariates. The full conditional quantile distribution is given using the Joe–Clayton copula. Simulations show the flexibility of the proposed model for estimating quantiles under special dependence structures. A case study illustrates its applicability to estimating quantiles of spatial temperature anomalies.
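For a single location, the marginal step of the model in entry 5 reduces to ordinary quantile regression: maximizing an asymmetric Laplace likelihood with skewness p is equivalent to minimizing the pinball (check) loss. A minimal Python sketch of that marginal fit, ignoring the vine-copula spatial dependence and using made-up covariate data, might look like this:

    import numpy as np
    from scipy.optimize import minimize

    def pinball_loss(beta, X, y, p):
        # Check (pinball) loss; minimizing it is equivalent to maximum likelihood
        # under an asymmetric Laplace marginal with skewness p.
        r = y - X @ beta
        return np.mean(np.maximum(p * r, (p - 1) * r))

    def fit_quantile(X, y, p=0.95):
        # Fit a linear quantile curve q_p(x) = [1, x] @ beta for non-exceedance probability p.
        X1 = np.column_stack([np.ones(len(X)), X])
        beta0 = np.zeros(X1.shape[1])
        res = minimize(pinball_loss, beta0, args=(X1, y, p), method="Nelder-Mead")
        return res.x

    # toy usage: temperature anomaly against a single hypothetical covariate
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 1, 200)
    y = 1.0 + 2.0 * x + rng.normal(0, 0.3 + 0.5 * x, 200)
    beta = fit_quantile(x.reshape(-1, 1), y, p=0.95)
    print("0.95-quantile curve: q(x) = %.2f + %.2f x" % (beta[0], beta[1]))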

6.
舒苏荀  龚文惠 《岩土力学》2015,36(4):1205-1210
The randomness of geotechnical parameters directly affects the accuracy of slope stability assessments. First, based on the distributions commonly assumed for slope parameters, Latin hypercube sampling is used to generate sets of random samples of soil and geometric parameters, and the finite-element strength-reduction method is used to compute the slope safety factor for each sample set. Next, accounting for the spatial variability of the soil parameters, Monte Carlo simulation is combined with the finite-element strength-reduction method under a two-dimensional random-field model to compute the slope failure probability for each sample set. A radial basis function (RBF) neural network is then trained and tested on the sample data together with the corresponding safety factors and failure probabilities, yielding a prediction model for slope safety factor and failure probability. A worked example shows that the two-dimensional random-field model captures the spatial variability of the parameters with reasonable accuracy, and that the neural network built on this basis predicts the slope safety factor and failure probability with high accuracy while greatly reducing the time required for slope stability analysis.
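A minimal sketch of the Latin hypercube sampling step described in entry 6, with the soil-parameter marginals and the strength-reduction solver replaced by hypothetical stand-ins:

    import numpy as np
    from scipy.stats import qmc, norm, lognorm

    n = 50                                   # number of parameter sets
    sampler = qmc.LatinHypercube(d=3, seed=1)
    u = sampler.random(n)                    # uniform [0, 1) LHS design

    # hypothetical marginals: cohesion (kPa), friction angle (deg), unit weight (kN/m3)
    cohesion = lognorm(s=0.3, scale=20.0).ppf(u[:, 0])
    phi      = norm(loc=30.0, scale=3.0).ppf(u[:, 1])
    gamma    = norm(loc=19.0, scale=0.5).ppf(u[:, 2])

    def strength_reduction_fos(c, phi_deg, unit_weight):
        # Stand-in for the finite-element strength-reduction solver used in the paper.
        return 0.05 * c + 0.04 * phi_deg - 0.03 * unit_weight   # placeholder response only

    fos = [strength_reduction_fos(*p) for p in zip(cohesion, phi, gamma)]
    print("mean factor of safety over the LHS design:", np.mean(fos))

Each row of the design can then be passed to the real solver, and the resulting parameter/safety-factor pairs used to train the RBF surrogate described in the abstract.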

7.
The increased socio-economic significance of landslides has resulted in the application of statistical methods to assess their hazard, particularly at medium scales. These models evaluate where, when and what size landslides are expected. The method presented in this study evaluates landslide hazard on the basis of homogenous susceptible units (HSU). HSU are derived from a landslide susceptibility map, which combines landslide occurrences and geo-environmental factors, using an automated segmentation procedure. To divide the landslide susceptibility map into HSU, we apply a region-growing segmentation algorithm that yields segments with statistically independent spatial probability values. Independence is tested using Moran's I and a weighted variance method. For each HSU, we obtain the landslide frequency from the multi-temporal data. Temporal and size probabilities are calculated using a Poisson model and an inverse-gamma model, respectively. The methodology is tested in a landslide-prone national highway corridor in the northern Himalayas, India. Our study demonstrates that HSU can replace the commonly used terrain mapping units for combining the three probabilities in landslide hazard assessment. A quantitative estimate of landslide hazard is obtained for each HSU as the joint probability of landslide size and of landslide temporal occurrence, for different time periods and different sizes.
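Entry 7 combines a Poisson temporal model and an inverse-gamma size model into a joint hazard probability for each HSU. A small illustrative sketch of one way to combine them (assuming the three probabilities multiply, with made-up rate, shape, scale, and susceptibility values):

    import numpy as np
    from scipy.stats import invgamma

    def hazard_probability(p_spatial, rate_per_year, years, size_threshold, ig_shape, ig_scale):
        # Joint hazard probability for one homogenous susceptible unit (HSU).
        p_temporal = 1.0 - np.exp(-rate_per_year * years)               # P(at least one landslide in `years`)
        p_size = invgamma(ig_shape, scale=ig_scale).sf(size_threshold)  # P(landslide size exceeds threshold)
        return p_spatial * p_temporal * p_size

    # illustrative numbers for a single HSU and a 5-year window
    print(hazard_probability(p_spatial=0.7, rate_per_year=0.2, years=5,
                             size_threshold=1000.0, ig_shape=1.5, ig_scale=500.0))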

8.
Hou  Weisheng  Cui  Chanjie  Yang  Liang  Yang  Qiaochu  Clarke  Keith 《Mathematical Geosciences》2019,51(1):29-51

In each step of geological modeling, errors have an impact on measurements and workflow processes and so have consequences that challenge accurate three-dimensional geological modeling. In the context of classical error theory, for now, only spatial positional error is considered, acknowledging that temporal, attribute, and ontological errors, among many others, are part of the complete error budget. Existing methods usually assume that a single error distribution (Gaussian) applies to all kinds of spatial data. Yet, across, and even within, different kinds of raw data (such as borehole logs, user-defined geological sections, and geological maps), different types of positional error distributions may exist. Most statistical methods make a priori assumptions about error distributions that limit their explanatory power. Consequently, analyzing errors in multi-source and conflated data remains a grand challenge in geological modeling. In this study, a novel approach is presented for analyzing one-dimensional multiple errors in the raw data used to model geological structures. The analysis is based on the relationship between spatial error distributions and different geological attributes. Assuming that the contact points of a geological surface are determined by the geological attributes on either side of that surface, the spatial error of geological contacts can be transferred into specific probabilities of all the related geological attributes at each three-dimensional point, termed the “geological attribute probability”. Both a normal distribution and a continuous uniform distribution were transferred into geological attribute probabilities, allowing different kinds of spatial error distributions to be summed directly after the transformation. At cross-points where multiple raw data carry errors that follow different kinds of distributions, an entropy-based weight was assigned to each type of data to calculate the final probabilities; the weighting value at each point in space is determined by the related geological attribute probabilities. In a test application that accounted for the best estimates of geological contacts, the experimental results showed the following: (1) for line segments, the band shape of the geological attribute probabilities matched that of existing error models; and (2) the geological attribute probabilities directly show the error distribution and are an effective way of describing multiple error distributions among the input data.
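The entropy-based weighting named in entry 8 can be sketched as follows: where several raw-data sources give attribute probabilities at the same point, a source whose probabilities are more decided (lower entropy) receives a larger weight. The probability vectors below are illustrative only.

    import numpy as np

    def entropy_weights(prob_by_source):
        # prob_by_source: (n_sources, n_attributes) geological attribute probabilities.
        P = np.asarray(prob_by_source, dtype=float)
        P = P / P.sum(axis=1, keepdims=True)                       # normalize each source
        plogp = np.where(P > 0, P * np.log(np.where(P > 0, P, 1.0)), 0.0)
        H = -plogp.sum(axis=1) / np.log(P.shape[1])                # normalized entropy in [0, 1]
        w = 1.0 - H
        return w / w.sum()

    # two hypothetical sources (e.g. a borehole log and a geological map) over three attributes
    print(entropy_weights([[0.80, 0.15, 0.05],
                           [0.40, 0.35, 0.25]]))                   # the more decisive source gets more weight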


9.
Application of factor analysis and geostatistics to the analysis of geochemical exploration data   Cited: 1 (self-citations: 0, citations by others: 1)
Taking soil geochemical exploration data from the Tongling ore cluster as an example, factor analysis was applied to extract several principal factors from the geochemical data, and geostatistical methods were used to analyze the spatial variability of each factor and to interpolate it. The results show that the principal factors obtained by factor analysis correspond to different mineralization signals. Combining factor analysis with geostatistical analysis and interpolation better reveals the spatial distribution trends of the factor scores and their association with known mineralization, and thus supports metallogenic prediction and mineral exploration.
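A brief sketch of the factor-analysis step in entry 9 using scikit-learn; the geostatistical variography and kriging of the factor scores are not shown, and the analyte suite and data are placeholders.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    elements = ["Cu", "Pb", "Zn", "Au", "As", "Sb"]      # hypothetical analyte suite
    X = np.log(rng.lognormal(mean=0.0, sigma=0.5, size=(200, len(elements))))  # log-transformed concentrations

    fa = FactorAnalysis(n_components=3, random_state=0)
    scores = fa.fit_transform(X)        # per-sample factor scores, to be mapped and kriged
    loadings = fa.components_           # (3, n_elements) loadings linking factors to elements
    print(np.round(loadings, 2))
    print(scores.shape)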

10.

Spatial data analytics provides new opportunities for automated detection of anomalous data for data quality control and for subsurface segmentation to reduce uncertainty in spatial models. Purely data-driven anomaly detection methods do not fully integrate spatial concepts such as spatial continuity and data sparsity, and they struggle to incorporate critical geoscience and engineering expert knowledge. The proposed spatial anomaly detection method is based on a semivariogram spatial continuity model derived from sparsely sampled well data and geological interpretations. The method calculates the lag joint cumulative probability for each matched pair of spatial data, given their lag vector and the semivariogram, under the assumption of a bivariate Gaussian distribution. For each combination of paired spatial data, the associated head and tail Gaussian standardized values are mapped to the joint probability density function informed by the lag vector and the semivariogram. The paired data are classified as anomalous if the associated head and tail Gaussian standardized values fall within a low-probability zone. The anomaly decision threshold can be set using a loss function quantifying the cost of overestimation or underestimation. The proposed spatial correlation anomaly detection method is able to integrate domain expert knowledge, through trend and correlogram models, with sparse spatial data to identify anomalous samples, regions, segmentation boundaries, or facies transition zones. This is a useful automation tool for identifying the samples in big spatial data on which to focus professional attention.
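A minimal sketch of the pairwise check described in entry 10: for two standardized Gaussian values separated by lag h, the correlation implied by a unit-sill standardized semivariogram (rho = 1 - gamma(h)) defines a bivariate Gaussian density, and pairs falling in a low-density region are flagged. The variogram model, its parameters, and the threshold are illustrative assumptions, and a density criterion is used here in place of the paper's cumulative-probability criterion.

    import numpy as np
    from scipy.stats import multivariate_normal

    def spherical_gamma(h, nugget=0.0, sill=1.0, a=300.0):
        # Spherical semivariogram with range a (unit sill by default).
        h = np.minimum(h, a)
        return nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)

    def pair_is_anomalous(z_head, z_tail, lag, density_threshold=0.01):
        # Flag a head/tail pair whose joint Gaussian density falls below the threshold.
        rho = max(0.0, 1.0 - spherical_gamma(lag))
        bvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
        return bvn.pdf([z_head, z_tail]) < density_threshold

    print(pair_is_anomalous(0.2, 0.4, lag=50.0))     # spatially consistent pair -> False
    print(pair_is_anomalous(-2.5, 2.8, lag=50.0))    # strongly discordant pair  -> True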


11.
Geological boundaries on three-dimensional cross-sections are key input data for building three-dimensional geological structure models, and their uncertainty affects the geometry and attribute distribution of the resulting models. Statistical uncertainty analyses that assume a single error distribution mask the influence of other probability distributions on the model. Moving beyond the single-distribution assumption, this paper uses the Monte Carlo method to simulate the sampling of geological boundaries from cross-section data under different probability distributions, and hence the uncertainty in the spatial distribution of those boundaries. Drawing on the coupling between the spatial position of geological boundaries and geological attributes, it proposes using geological attribute probability distributions to quantitatively visualize the spatial uncertainty of geological boundaries, and examines that uncertainty under several probability distributions on real geological cross-sections. The case study shows that Monte Carlo-based uncertainty analysis can move beyond the single error-distribution assumption and, combined with geological attribute probabilities, fully reveals the coupling between the intrinsic uncertainty of the modeling data and the geometry of the model's external elements.
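The Monte Carlo idea in entry 11 can be sketched by perturbing a boundary elevation with either a normal or a uniform error model and reading off, for each query point, the probability of lying below the simulated boundary (i.e. of carrying the lower unit's geological attribute). All numbers are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    z_true = 100.0                       # nominal boundary elevation on the cross-section (m)
    n_sim = 10_000

    boundary_normal  = rng.normal(z_true, 2.0, n_sim)                   # Gaussian error, sd = 2 m
    boundary_uniform = rng.uniform(z_true - 3.0, z_true + 3.0, n_sim)   # uniform error, +/- 3 m

    z_query = np.linspace(90.0, 110.0, 5)
    # geological attribute probability: fraction of simulations in which the query point
    # falls below the simulated boundary
    p_normal  = (z_query[:, None] < boundary_normal[None, :]).mean(axis=1)
    p_uniform = (z_query[:, None] < boundary_uniform[None, :]).mean(axis=1)
    for z, pn, pu in zip(z_query, p_normal, p_uniform):
        print(f"z = {z:6.1f} m   P(below | normal) = {pn:.2f}   P(below | uniform) = {pu:.2f}")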

12.
Probabilistic analysis of underground cavern stability considering the spatial variability of joint distribution   Cited: 1 (self-citations: 0, citations by others: 1)
王川  冷先伦  李海轮  李刚 《岩土力学》2021,(1):224-232,244
Building on the "point estimate - finite element" analysis method for the spatial variability of geotechnical material parameters, and accounting for the geometric modeling and meshing issues inherent to joint analysis, this paper extends the method's applicability to the analysis of spatial variability in joint distribution and sets out the specific research steps. Taking a pumped-storage hydropower station as an example, the accuracy and reasonableness of the extended method are verified by analyzing the effect of joint spatial variability on surrounding-rock deformation and plastic zones. Probability statistics were compiled for the more than 1,400 joints exposed by excavation in the engineering case, and a finite element model of joint spatial variability was built; the extended probabilistic method was then used to study the influence of joint distribution on the excavation stability of the surrounding rock of the underground cavern group. The results show that: (1) comparing the computed probability distribution of surrounding-rock deformation with field monitoring results, most monitored deformation values fall within the computed displacement probability range once anomalous points are removed, indicating that joint spatial variability is the main factor behind fluctuations in the monitored deformation; (2) the standard deviation of the deformation probability distribution effectively identifies how strongly excavation deformation is affected by joint spatial variability, which for the given case ranks turbine pit > side wall > crown arch; and (3) probabilistic zoning of the surrounding-rock plastic zone reasonably identifies the regions and extents most affected by joints during excavation of the cavern group, providing a basis for support design during construction.

13.
李诗  陈建平  向杰  张志平  张烨 《地质通报》2019,38(12):2022-2032
In the era of big data, the increasingly complex patterns in geological big data and the spatial associations among them pose greater challenges for quantitative mineral resource prediction based on machine learning. Using the analytical power of deep convolutional networks to extract the spatial distribution features of, and relationships among, multiple two-dimensional evidence layers under different metallogenic conditions is a worthwhile exploratory experiment. Taking the sedimentary manganese deposits of the Songtao-Huayuan area as an example, the deep convolutional neural network model AlexNet was used to mine the coupling between the spatial distributions of the Mn element, sedimentary facies, outcrops of the Datangpo Formation, faults, and drainage on the one hand and the locations of manganese deposits on the other, as well as the correlations among the ore-controlling factors, and thereby to train a two-dimensional mineral-prediction classification model. After training, a deep convolutional neural network classifier with a validation accuracy of 88.89%, a recall of 66.67%, and a loss of 0.08 was obtained. Applying the model to two-dimensional metallogenic prediction over unexplored areas delineated four prospective zones, numbered 91, 96, 154, and 184; zones 91 and 154 have an ore-bearing probability of 1, and zone 96 a probability of 0.5. This suggests a high probability that undiscovered deposits exist in the predicted areas.

14.
Grey relational analysis of deformation sensitivity based on probability distribution models of surrounding-rock mechanical parameters   Cited: 1 (self-citations: 0, citations by others: 1)
To provide a reliable theoretical basis for parameter selection in the surrounding-rock stability analysis of a section of the underground caverns at the Bulunkou-Gongger hydropower station, and to address the shortcomings of conventional sensitivity analysis while accounting for the spatial variability of rock mass parameters, a grey relational analysis method for deformation sensitivity based on probability distribution models of the surrounding-rock mechanical parameters is proposed, using a three-dimensional discrete element code. The method takes six mechanical parameters of the surrounding rock, namely rock mass density, elastic modulus E, Poisson's ratio, cohesion c, internal friction angle φ, and joint internal friction angle φj, as the factor sequences and crown settlement as the target sequence, and analyzes how sensitive crown settlement is to variations of each factor over its entire range. The results show that density is the most sensitive factor, followed by elastic modulus, cohesion, and Poisson's ratio, while the internal friction angle and the joint internal friction angle are least sensitive. Finally, comparing conventional grey relational sensitivity analysis with the proposed method shows that, apart from density, internal friction angle, and joint internal friction angle, the sensitivities of the remaining parameters differ from those conclusions. By accounting for the actual probability distributions of the parameters, the proposed method therefore provides a more accurate and reasonable overall evaluation of the parameters.
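Grey relational analysis, the ranking tool named in entry 14, compares each parameter series against the crown-settlement series and scores it by a grey relational grade. A compact sketch with illustrative data follows; the discrete-element modelling itself is not represented.

    import numpy as np

    def grey_relational_grade(reference, factors, rho=0.5):
        # reference: (n,) target sequence; factors: (m, n) factor sequences; rho: resolution coefficient.
        X = np.vstack([reference, factors]).astype(float)
        X = (X - X.min(axis=1, keepdims=True)) / np.ptp(X, axis=1, keepdims=True)   # range-normalize each row
        delta = np.abs(X[1:] - X[0])                          # deviation of each factor from the reference
        d_min, d_max = delta.min(), delta.max()
        xi = (d_min + rho * d_max) / (delta + rho * d_max)    # grey relational coefficients
        return xi.mean(axis=1)                                # grade per factor (higher = more sensitive)

    # toy data: crown settlement versus three parameter series
    settlement = np.array([12.1, 13.4, 15.0, 16.2, 18.5])
    factors = np.array([[2.60, 2.65, 2.70, 2.75, 2.80],       # density
                        [20.0, 19.0, 18.0, 17.5, 16.0],       # elastic modulus
                        [0.80, 0.82, 0.85, 0.83, 0.86]])      # cohesion
    print(grey_relational_grade(settlement, factors))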

15.
Preparing a landslide susceptibility map is essential for identifying hazardous regions, constructing appropriate mitigation facilities, and planning emergency measures in a region prone to landslides triggered by rainfall. Conventional mapping methods require extensive information about past landslide records, the contributing terrain, and rainfall. They also rely heavily on the quantity and quality of accessible information and on the subjectivity of the map builder. This paper contributes a systematic and quantitative assessment for mapping landslide hazard over a region. A Geographical Information System is used to retrieve relevant parameters from data layers, including the spatial distribution of transient fluid pressures, which is estimated using the TRIGRS program. The factor of safety of each pixel in the study region is calculated analytically. Monte Carlo simulation of the random variables is conducted to repeat the estimation of fluid pressure and factor of safety many times, and the failure probability of each pixel is thus estimated. These procedures for mapping landslide potential are demonstrated in a case history. The analysis results reveal a positive correlation between landslide probability and accumulated rainfall. The simulation results are compared with field records; the location and size of the actual landslide are well predicted. An explanation for some of the inconsistencies is also provided, to emphasize the importance of site information for the accuracy of the mapping results.
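The per-pixel Monte Carlo step in entry 15 can be illustrated with the infinite-slope factor of safety used by TRIGRS-type analyses, evaluated for randomly drawn strength parameters to give a failure probability for one pixel. The parameter values and distributions below are assumptions for illustration only.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 20_000

    # hypothetical per-pixel distributions
    c   = rng.lognormal(mean=np.log(8.0), sigma=0.3, size=n)   # cohesion (kPa)
    phi = np.radians(rng.normal(32.0, 2.0, size=n))            # friction angle (rad)
    psi = rng.uniform(0.0, 1.5, size=n)                        # transient pressure head (m)

    theta   = np.radians(30.0)   # slope angle
    z       = 2.0                # depth of the slip surface (m)
    gamma_s = 19.0               # soil unit weight (kN/m3)
    gamma_w = 9.81               # water unit weight (kN/m3)

    # infinite-slope factor of safety for each parameter draw
    fs = (np.tan(phi) / np.tan(theta)
          + (c - psi * gamma_w * np.tan(phi)) / (gamma_s * z * np.sin(theta) * np.cos(theta)))
    print("pixel failure probability:", np.mean(fs < 1.0))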

16.
A procedure to estimate the probability of intercepting a contaminant groundwater plume for monitoring network design has been developed and demonstrated. The objective of the procedure is to use all available information in a method that accounts for the heterogeneity of the aquifer and the paucity of data. The major components of the procedure are geostatistical conditional simulation and parameter estimation, which are used sequentially to generate flow paths from a suspected contaminant source location to a designated monitoring transect. From the flow paths, a histogram is constructed that represents the spatial probability distribution of plume centerlines. With an independent estimate of the plume width, a relationship between the total cost and the probability of detecting a plume can be established. The method uses geostatistical information from hydraulic head measurements and is conditioned by the data and the physics of groundwater flow. This procedure was developed specifically for the design of monitoring systems at sites where very few, if any, hydraulic conductivity data are available.

17.
Existing methods of strain analysis, such as the center-to-center method and the Fry method, estimate strain from the spatial relationship between point objects in the deformed state. They assume a truncated Poisson distribution of point objects in the pre-deformed state. Significant deviations from this assumption occur in nature and diffuse the central vacancy in a Fry plot, limiting its effectiveness as a strain gauge. Therefore, a generalized center-to-center method is proposed to deal with point objects following the more general Poisson distribution, whose outcome does not depend on an analysis of a graphical central vacancy. The new method relies on the probability mass function of the Poisson distribution and adopts the maximum likelihood method to solve for strain. Its feasibility is demonstrated by applying it to artificial data sets generated for known strains. Further analysis of these sets by the bootstrap method shows that the accuracy of the strain estimate has a strong tendency to increase either with point number or with the inclusion of more pre-deformation nearest neighbors. A poorly sorted, well packed, deformed conglomerate is analyzed, yielding a strain estimate similar to the vector mean of the major-axis directions of the pebbles and the harmonic mean of their axial ratios obtained from a shape-based strain determination method. These outcomes support the applicability of the new method to the analysis of deformed rocks with appropriate strain markers.

18.
To realistically capture the stability characteristics of the Bianjiagou tailings impoundment, probabilistic and sensitivity analyses were carried out with the Slope/W software using the Monte Carlo method and accounting for the spatial variability of the sand layers in the impoundment. The analyses show that different sampling schemes cause large fluctuations in the failure probability and reliability index of the tailings dam but produce no obvious change in the safety factor, confirming that the safety factor alone is an inadequate measure of tailings dam stability. The sensitivity analysis also quantifies how strongly the main sand-layer parameters affect the safety factor and shows that cohesion is the dominant sand-layer factor governing the stability of the impoundment. The study provides a theoretical basis and practical guidance for the future operation and management of the tailings impoundment.

19.
Disruption of road segments can have a significant impact on the vulnerability of the entire network, and natural disasters are frequent causes of such disruptions. This article focuses on determining the risk of road disruptions due to landslides. Our approach is based on methodology widely used in the field of epidemiology. We had available data on the locations of the landslides, the road network, and a list of the disrupted road segments. Using a 2 × 2 table, we determined the relationship between landslide data and road segment disruptions and derived a risk coefficient based on the number of landslides in the vicinity of a road segment and its length. The result is a disruption risk map with risk coefficients ranging from 0 to 47.94. In order to distinguish the most risky segments, we calculated a threshold of 12.40 using the risk breakdown in a group of segments without damage. Nineteen percent (402 km) of the road network in the Zlín region (Czech Republic), where the methodology was applied, lies beyond this threshold. The benefits of this approach stem from its speed and its potential to define the most risky areas on which a detailed geomorphological analysis can then be focused.
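Entry 19's risk coefficient is built from a 2 × 2 contingency table in the style of an epidemiological relative risk: road segments are cross-tabulated by "landslide nearby" versus "segment disrupted". The paper's actual coefficient also accounts for the number of nearby landslides and the segment length; the sketch below, with illustrative counts, shows only the basic relative-risk form.

    def relative_risk(a, b, c, d):
        # 2 x 2 table:
        #                      disrupted   not disrupted
        #   landslide nearby       a             b
        #   no landslide           c             d
        risk_exposed   = a / (a + b)
        risk_unexposed = c / (c + d)
        return risk_exposed / risk_unexposed

    # illustrative counts: segments near landslides are 6 times more likely to be disrupted
    print(relative_risk(a=30, b=170, c=20, d=780))   # -> 6.0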

20.
Soil erosion is one of the most widespread processes of land degradation. The erodibility of a soil is a measure of its susceptibility to erosion and depends on many soil properties. The soil erodibility factor varies greatly over space and is commonly estimated using the revised universal soil loss equation. Neglecting information about estimation uncertainty may lead to improper decision-making. One geostatistical approach to spatial analysis is sequential Gaussian simulation, which draws alternative, equally probable, joint realizations of a regionalised variable. Differences between the realizations provide a measure of spatial uncertainty and allow an error analysis to be carried out. The objective of this paper was to assess the model output error of soil erodibility resulting from the uncertainties in the input attributes (texture and organic matter). The study area covers about 30 km2 (Calabria, southern Italy). Topsoil samples were collected at 175 locations within the study area in 2006 and the main chemical and physical soil properties were determined. As soil textural size fractions are compositional data, the additive log-ratio (alr) transformation was used to remove the non-negativity and constant-sum constraints on the compositional variables. A Monte Carlo analysis was performed, which consisted of drawing a large number (500) of identically distributed input attributes from the multivariable joint probability distribution function. Because the model inputs were spatially correlated, spatial cross-correlation information was incorporated through joint sequential Gaussian simulation. The erodibility model was then evaluated for each of the 500 joint realisations of the input variables, and the ensemble of model outputs was used to infer the erodibility probability distribution function. This approach also allowed delineation of the areas characterised by greater uncertainty, and hence suggests efficient supplementary sampling strategies for further improving the precision of K-value predictions.
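A small sketch of the additive log-ratio (alr) transform mentioned in entry 20, which frees the compositional texture fractions (e.g. sand, silt, clay) from the non-negativity and constant-sum constraints before simulation; the values are illustrative.

    import numpy as np

    def alr(composition):
        # Additive log-ratio transform; the last component is used as the denominator.
        x = np.asarray(composition, dtype=float)
        x = x / x.sum(axis=-1, keepdims=True)            # close the composition to sum = 1
        return np.log(x[..., :-1] / x[..., -1:])

    def alr_inverse(y):
        # Back-transform alr coordinates to a composition summing to 1.
        expy = np.exp(np.asarray(y, dtype=float))
        denom = 1.0 + expy.sum(axis=-1, keepdims=True)
        return np.concatenate([expy, np.ones_like(denom)], axis=-1) / denom

    sample = [0.55, 0.30, 0.15]                          # sand, silt, clay fractions
    y = alr(sample)
    print(y)                                             # 2 unconstrained coordinates
    print(alr_inverse(y))                                # round-trips to [0.55, 0.30, 0.15]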
