Similar Documents (20 results)
1.
A CUDA-based parallel algorithm for the seismic coherence volume   (cited by 1: 0 self, 1 others)
张全 《地质与勘探》2020,56(1):147-153
Coherence-volume technology is widely used in the interpretation of seismic exploration data, but because it operates on 3D seismic data volumes, the algorithm's run time is long. To shorten the interpretation cycle, this paper exploits the parallel computing advantages of the GPU and analyses the C3 coherence algorithm for parallelisation. The entire pipeline, from reading data from disk, through computing coherence values on the GPU, to writing results back to disk, is analysed; redundant data reads are eliminated, and the parallel design and implementation of the C3 coherence algorithm is completed. Performance tests of the serial and parallel algorithms show that the parallel algorithm reaches a speedup of about 16 times while preserving accuracy, which is of real significance for accelerating seismic data interpretation.
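The abstract includes no source; purely as a point of reference, below is a minimal NumPy sketch of the eigenstructure (C3) coherence measure that such a GPU kernel parallelises per analysis window (the function name and window convention are our assumptions, not the paper's implementation):

```python
import numpy as np

def c3_coherence(window):
    # window: (n_traces, n_samples) array of neighbouring seismic traces.
    # C3 coherence is the ratio of the dominant eigenvalue of the trace
    # covariance matrix to its trace; 1.0 means perfectly coherent.
    d = window - window.mean(axis=1, keepdims=True)
    cov = d @ d.T                        # trace-by-trace covariance
    eig = np.linalg.eigvalsh(cov)        # eigenvalues in ascending order
    return eig[-1] / max(eig.sum(), 1e-12)
```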

2.
The material point method (MPM), a combination of the finite element and meshfree methods, has advantages in simulating large deformations but suffers from a significant computational workload because of the fine mesh it requires. This paper presents a parallel computing strategy for the MPM on the graphics processing unit (GPU) to boost the method's computational efficiency. The interaction between a structural element and soil is investigated to validate the applicability of the parallelisation strategy. Two techniques are developed to parallelise the interpolation from soil particles to nodes without a data race; the technique based on parallelising the workload across threads over the nodes has the higher computational efficiency. Benchmark problems of surface footing penetration and a submarine landslide are analysed to quantify the speedup of GPU parallel computing over sequential simulations on the central processing unit. The maximum speedup with the GPU used is ∼30 for single-precision calculations and decreases to ∼20 for double-precision calculations.
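As a rough, CPU-side illustration of the race-free node-centred gather that the faster of the two techniques is based on (array layout and all names are our assumptions; this is not the authors' CUDA code):

```python
import numpy as np

def gather_to_nodes(n_nodes, node_particles, node_weights, particle_mass):
    # Particle-to-node mass interpolation parallelised over nodes:
    # node_particles[n] lists the particles in node n's support and
    # node_weights[n] the matching shape-function values. Each node
    # writes only its own entry, so the data race of a particle-centred
    # scatter never arises; on the GPU this loop is one thread per node.
    nodal_mass = np.zeros(n_nodes)
    for n in range(n_nodes):
        pids = node_particles[n]
        nodal_mass[n] = np.dot(particle_mass[pids], node_weights[n])
    return nodal_mass
```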

3.
Geophysical exploration technology is advancing rapidly, and the processing and interpretation of geophysical data place ever higher demands on high-performance computers. Compared with seismic exploration, research on parallel computing in gravity, magnetic, and electrical exploration is still in its infancy. GPU-based parallel computing offers high compute power and memory bandwidth together with good programmability, low cost, and a short development cycle. Here, a GPU parallel implementation of the Hankel transform used in 1D forward modelling of the transient electromagnetic method is presented; comparing the run times of the serial and parallel Hankel transform algorithms shows that the GPU-based parallel computation achieves a high speedup over the serial one.
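For orientation, the digital-filter form of the Hankel transform that such a kernel evaluates is embarrassingly parallel across offsets; a hedged NumPy sketch follows (the filter abscissae and weights are placeholders for a published filter, e.g. Guptasarma-Singh, and all names are ours):

```python
import numpy as np

def hankel_j0(kernel, r, base, coeffs):
    # Digital-filter evaluation of F(r) = ∫ kernel(λ) J0(λr) λ dλ
    # ≈ (1/r) Σ_i kernel(base_i / r) * coeffs_i.
    # base, coeffs: abscissae and weights of a published filter (not
    # reproduced here). Each offset r is independent, so on a GPU one
    # thread can evaluate one r.
    lam = base / r                  # filter abscissae scaled by offset
    return np.dot(kernel(lam), coeffs) / r
```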

4.
A parallel finite element analysis method based on coarse and fine meshes   (cited by 2: 0 self, 2 others)
付朝江  张武 《岩土力学》2006,27(5):807-810
Parallel computing has become a powerful trend for solving large-scale geotechnical problems. A parallel finite element algorithm combining coarse and fine meshes with the preconditioned conjugate gradient method is investigated; an effective preconditioner is derived from the multigrid stiffness matrices. The algorithm was implemented on a workstation cluster and tested numerically on a simulation of dynamic compaction of soil in ground treatment, and its parallel performance was analysed in detail. The results show that the algorithm attains good parallel speedup and efficiency and is an effective parallel algorithm.
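For reference, the preconditioned conjugate gradient loop at the core of such an algorithm looks as follows (a generic sketch: the paper derives its preconditioner from the coarse-mesh stiffness, which is abstracted here as a callable M_inv):

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, max_iter=500):
    # Solve A x = b for symmetric positive definite A, where M_inv
    # applies the preconditioner (e.g. a coarse-grid solve).
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```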

5.
This paper focuses on the efficiency of finite discrete element method (FDEM) algorithmic procedures on massively parallel computers and analyses the time-consuming contact detection and contact interaction computations in the numerical solution. A detailed, operable GPU parallel procedure was designed for the element nodal force calculation, contact detection, and contact interaction, with thread allocation and data access based on CUDA computing. The emphasis is on the parallel optimisation of the time-consuming contact detection based on load balancing and the GPU architecture. A CUDA FDEM parallel program was developed; efficiency and fidelity tests on in situ stress, UCS, and BD simulation models, run on an Intel i7-7700K CPU and an NVIDIA TITAN Z GPU, show an overall speedup ratio of more than 53 after fracturing. CUDA FDEM parallel computing improves computational efficiency significantly over CPU-based implementations with the same reliability, providing the conditions for larger-scale simulations of fracture.
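As background on what a load-balanced contact search has to do, here is a generic grid-based broad-phase sketch in 2D (our stand-in for illustration only; the paper's GPU kernels, thread allocation, and data layout are not reproduced):

```python
import numpy as np
from collections import defaultdict

def broad_phase_pairs(centroids, cell_size):
    # Bin element centroids into uniform grid cells; only elements in
    # the same or neighbouring cells become contact candidates, which
    # cuts the naive O(N^2) pair check down to near O(N).
    cells = defaultdict(list)
    for i, c in enumerate(np.asarray(centroids)):
        cells[tuple((c // cell_size).astype(int))].append(i)
    pairs = set()
    for (cx, cy), members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in cells.get((cx + dx, cy + dy), []):
                    for i in members:
                        if i < j:
                            pairs.add((i, j))
    return pairs
```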

6.
In this work, we present an efficient matrix-free ensemble Kalman filter (EnKF) algorithm for the assimilation of large data sets. The EnKF has increasingly become an essential tool for data assimilation of numerical models. It is an attractive assimilation method because it can evolve the model covariance matrix for a non-linear model, through the use of an ensemble of model states, and it is easy to implement for any numerical model. Nevertheless, the computational cost of the EnKF can increase significantly for cases involving the assimilation of large data sets. As more data become available for assimilation, a potential bottleneck in most EnKF algorithms involves the operation of the Kalman gain matrix. To reduce the complexity and cost of assimilating large data sets, a matrix-free EnKF algorithm is proposed. The algorithm uses an efficient matrix-free linear solver, based on the Sherman–Morrison formulas, to solve the implicit linear system within the Kalman gain matrix and compute the analysis. Numerical experiments with a two-dimensional shallow water model on the sphere are presented, where results show the matrix-free implementation outperforming a singular value decomposition-based implementation in computational time.
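The Sherman–Morrison device the paper builds on can be sketched for the simplest case, a diagonal matrix plus a sweep of rank-one updates (our notation and code, not the authors' implementation):

```python
import numpy as np

def sherman_morrison_solve(r_diag, U, b):
    # Solve (diag(r_diag) + U @ U.T) x = b without forming the matrix.
    # With A_{k+1} = A_k + u_k u_k^T, Sherman-Morrison gives
    #   A_{k+1}^{-1} v = A_k^{-1} v
    #       - A_k^{-1} u_k (u_k^T A_k^{-1} v) / (1 + u_k^T A_k^{-1} u_k).
    x = b / r_diag                       # A_0^{-1} b (A_0 is diagonal)
    cols = U / r_diag[:, None]           # A_0^{-1} u_k for every k
    for k in range(U.shape[1]):
        u, aku = U[:, k], cols[:, k]
        denom = 1.0 + u @ aku
        x -= aku * (u @ x) / denom
        cols[:, k + 1:] -= np.outer(aku, u @ cols[:, k + 1:]) / denom
    return x
```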

7.
Stochastic inverse modeling deals with the estimation of functions from sparse data, which is a problem with a nonunique solution, with the objective to evaluate best estimates, measures of uncertainty, and sets of solutions that are consistent with the data. As finer resolutions become desirable, the computational requirements increase dramatically when using conventional solvers. A method is developed in this paper to solve large-scale stochastic linear inverse problems, based on the hierarchical matrix (or ℋ²-matrix) approach. The proposed approach can also exploit the sparsity of the underlying measurement operator, which relates observations to unknowns. Conventional direct algorithms for solving large-scale linear inverse problems, using stochastic linear inversion techniques, typically scale as O(n²m + nm²), where n is the number of measurements and m is the number of unknowns. We typically have n ≪ m. In contrast, the algorithm presented here scales as O(n²m), i.e., it scales linearly with the larger problem dimension m. The algorithm also allows quantification of uncertainty in the solution at a computational cost that also grows only linearly in the number of unknowns. The speedup gained is significant since the number of unknowns m is often large. The effectiveness of the algorithm is demonstrated by solving a realistic crosswell tomography problem by formulating it as a stochastic linear inverse problem. In the case of the crosswell tomography problem, the sparsity of the measurement operator allows us to further reduce the cost of our proposed algorithm from O(n²m) to O(n²√m + nm). The computational speedup gained by using the new algorithm makes it easier, among other things, to optimize the location of sources and receivers, by minimizing the mean square error of the estimation. Without this fast algorithm, this optimization would be computationally impractical using conventional methods.

8.
The parabolic Radon transform (PRT) is widely used in seismic data processing. The PRT processes seismic data decoupled by frequency, a property that gives it an order-of-magnitude improvement in computational efficiency over the hyperbolic Radon transform. When solving in the frequency domain, a linear system of the same size must be solved for each frequency component. The main methods for computing the forward parabolic Radon transform are Levinson recursion, the conjugate gradient method, Cholesky decomposition, and direct matrix inversion. The matrix formed by the least-squares forward PRT has Toeplitz structure and can be solved by Levinson recursion; the Toeplitz structure of the matrix formed by the high-resolution forward PRT is destroyed, so the conjugate gradient method or Cholesky decomposition is generally used instead. Here, the Levinson recursion for complex Toeplitz matrices is derived in detail and the four solution methods are discussed; finally, numerical examples of the forward PRT are given, and the computational efficiency and accuracy of the four methods are compared.
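The structural point is easy to demonstrate: for a Toeplitz system, a Levinson-type solver (as wrapped by scipy.linalg.solve_toeplitz) matches the dense solution at O(n²) rather than O(n³) cost. A small self-contained example with arbitrary well-conditioned data (not the PRT matrices themselves):

```python
import numpy as np
from scipy.linalg import toeplitz, solve_toeplitz

n = 512
col = 0.5 ** np.arange(n)                    # first column; AR(1)-like, pos. def.
b = np.random.default_rng(0).standard_normal(n)

x_fast = solve_toeplitz(col, b)              # Levinson recursion, O(n^2)
x_dense = np.linalg.solve(toeplitz(col), b)  # generic dense solve, O(n^3)
assert np.allclose(x_fast, x_dense)
```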

9.
This work investigates the influence of compression ratio on the performance and emissions of a diesel engine using biodiesel (10, 20, 30, and 50 %) blended diesel fuel. Tests were carried out at four compression ratios (17.5, 17.7, 17.9, and 18.1). The experiments were designed using a statistical tool known as design of experiments, based on response surface methodology. The resulting response surface models were used to predict response parameters such as brake specific fuel consumption, brake thermal efficiency, carbon monoxide, hydrocarbon, and nitrogen oxides. The results showed that the best brake thermal efficiency and brake specific fuel consumption were observed at increased compression ratio. For all test fuels, an increase in compression ratio leads to a decrease in carbon monoxide and hydrocarbon emissions while nitrogen oxide emissions increase. The parameters were optimised using the desirability approach of the response surface methodology for better performance and lower emissions; a compression ratio of 17.9, a 10 % fuel blend, and 3.81 kW of power can be considered the optimum parameters for the test engine.

10.
To address the low resolution of smooth inversion in gravity exploration, this paper proposes a locally smooth constrained 3D inversion algorithm that uses prior information such as the burial depth of geological bodies and stratum dip, together with a storage scheme for the roughness matrix used in smooth inversion: the M×N roughness matrix is stored as an M×2 array, reducing memory consumption. The paper details, for multiplication of the roughness matrix with other matrices under this storage scheme, how the stored positional information is read and how the product is accumulated column by column to obtain the final result. Finally, inversion tests on a synthetic model and field data show that the locally smooth inversion is more accurate than globally smooth inversion, and that the algorithm remains stable at a certain noise level, making it effective and practical for production use.
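One way to realise an M×2 storage of a roughness matrix is sketched below: each row of a first-difference operator has exactly two nonzeros, so only their column positions need keeping (consistent with, but not necessarily identical to, the paper's scheme; all names are ours):

```python
import numpy as np

def roughness_times_model(pairs, m):
    # pairs: (M, 2) int array; pairs[i] holds the column indices of the
    # +1 and -1 entries in row i of the roughness matrix. The product
    # R @ m then never needs the full M x N matrix.
    pairs = np.asarray(pairs)
    return m[pairs[:, 0]] - m[pairs[:, 1]]

# Example: 1D forward differences over an N-cell model.
N = 6
pairs = np.column_stack([np.arange(1, N), np.arange(N - 1)])
m = np.arange(N, dtype=float) ** 2
print(roughness_times_model(pairs, m))   # [1. 3. 5. 7. 9.]
```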

11.
A method to derive a general equation for compression of structured soils is presented. It is shown that the general equation leads to the equation of compression for structured soils as proposed by Liu and Carter (Geotechnique 49(4):43–57, 1999). Using the compression characteristics of the structured and remolded soil, an equation is developed to determine a unique value for the structure degradation exponent term of the Liu-Carter equation. This equation is used to obtain the value of the structure degradation exponent of the Liu-Carter equation from the compression behaviour of undisturbed and remolded Mexico City clay. The value of the exponent is used in the Liu-Carter equation to predict the compression behaviour of the clay. Excellent agreement is observed between predictions and experimental data.
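For context, the Liu-Carter equation referred to is usually quoted in the form below (a hedged reconstruction from the literature in standard notation; the general equation this paper derives is not reproduced in the abstract):

```latex
% Structured void ratio e exceeds the intrinsic (remolded) value e* by an
% additional void ratio that degrades with mean effective stress p'
% beyond the yield stress p'_y:
\[
  e \;=\; e^{*} \;+\; \Delta e_{i}\,\Bigl(\tfrac{p'_{y}}{p'}\Bigr)^{b},
  \qquad p' \ge p'_{y},
\]
% where \Delta e_i is the additional void ratio at yield and b is the
% structure degradation exponent for which the paper derives a unique value.
```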

12.
Parallel implementation of soft-rock replacement scheme optimisation for a large cavern group   (cited by 1: 1 self, 0 others)
Parallel computing has become a powerful trend for solving large-scale geotechnical problems. Taking the optimisation of soft-rock replacement schemes for the Shuibuya large cavern group as an example, this paper examines parallel computing in scheme optimisation, analysing the programming model, task partitioning, load balancing, and programming methods involved. The parallel computation for soft-rock replacement scheme optimisation was implemented successfully on a PC cluster under Windows and achieved a nearly linear speedup, greatly increasing the speed and efficiency of scheme optimisation and providing a useful reference for parallelising geotechnical computations.
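The task-farm structure such scheme optimisation relies on (independent candidate schemes evaluated concurrently) can be caricatured in a few lines; multiprocessing stands in for the PC cluster, and the cost function and all parameters are placeholders of ours:

```python
from multiprocessing import Pool

def evaluate_scheme(params):
    # Placeholder for one full numerical analysis of a candidate
    # soft-rock replacement scheme; on a cluster, one node per job.
    thickness, extent = params
    return thickness * 2.0 + extent      # dummy cost value

if __name__ == "__main__":
    candidates = [(t, e) for t in (1.0, 1.5, 2.0) for e in (3.0, 4.0)]
    with Pool() as pool:
        # Independent jobs, so speedup is near-linear in worker count.
        costs = pool.map(evaluate_scheme, candidates)
    print(min(zip(costs, candidates)))
```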

13.
Estimating the observation error covariance matrix properly is a key step towards successful seismic history matching. Typically, observation errors of seismic data are spatially correlated; therefore, the observation error covariance matrix is non-diagonal. Estimating such a non-diagonal covariance matrix is the focus of the current study. We decompose the estimation into two steps: (1) estimate observation errors and (2) construct the covariance matrix based on the estimated observation errors. Our focus is on step (1), whereas at step (2) we use a procedure similar to that in Aanonsen et al. (2003). In Aanonsen et al. (2003), step (1) is carried out using a local moving-average algorithm. By treating seismic data as an image, this algorithm can be interpreted as a discrete convolution between an image and a rectangular window function. Following the perspective of image processing, we consider three types of image denoising methods, namely, local moving average with different window functions (as an extension of the method in Aanonsen et al. (2003)), non-local means denoising, and wavelet denoising. The performance of these three algorithms is compared using both synthetic and field seismic data. It is found that, in our investigated cases, the wavelet denoising method leads to the best performance most of the time.
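The moving-average variant of step (1), a discrete convolution of the seismic "image" with a rectangular window, is a one-liner with SciPy; the window size and boundary handling below are our choices, not the paper's:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def moving_average_errors(seismic, window=5):
    # Smooth the data with a rectangular (boxcar) window and take the
    # residual as the estimated observation error field (step 1); the
    # error covariance matrix is then built from these residuals (step 2).
    seismic = np.asarray(seismic, dtype=float)
    smooth = uniform_filter(seismic, size=window, mode="reflect")
    return seismic - smooth
```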

14.
Conditional simulation is a computationally very expensive, high-accuracy 3D interpolation algorithm. To address the excessive run time of the serial conditional simulation algorithm, a GPU-based parallel conditional simulation algorithm is proposed and applied to reserve estimation. The algorithm is analysed for parallelism and, by exploiting the high degree of parallelism of the GPU, a CUDA general-purpose computing environment is built, converting the serial conditional simulation algorithm into a parallel one and reducing its time complexity from O(n) to O(log n). A reserve estimation was then carried out for the Jiama copper deposit in Tibet. The experimental results show that, on a computer fitted with an ordinary NVIDIA graphics card and with no loss of estimation accuracy, the GPU parallel conditional simulation is more than 60 times as efficient as the CPU serial version.
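The quoted drop from O(n) to O(log n) reads like the step complexity of a parallel (tree) reduction; a serial emulation of that schedule, offered only as an interpretation of the claim and not as the paper's kernel, is:

```python
import numpy as np

def tree_reduce_sum(x):
    # Pairwise reduction: each round halves the active length, so the
    # number of rounds is O(log n) even though total work is O(n). On a
    # GPU every round runs its additions in parallel.
    x = np.asarray(x, dtype=float).copy()
    n = x.size
    while n > 1:
        half = (n + 1) // 2
        x[: n - half] += x[half:n]   # one parallel round of pairwise adds
        n = half
    return x[0]
```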

15.
The main diagenetic processes in the Chang 8 oil-bearing interval of the Yanchang Formation in the central-southern Ordos Basin include compaction, quartz secondary overgrowth, authigenic chlorite coating growth, secondary kaolinitisation, poikilitic calcite replacement, and feldspar dissolution. Based on cast thin sections and carbon-oxygen isotope analyses, the spatial distribution and origin of the main diagenetic products were determined, and the relationship between their distribution and the present-day total thin-section porosity was analysed. The reservoir quality of the Chang 8 interval is controlled mainly by the distribution of quartz overgrowths, poikilitic calcite, feldspar dissolution pores, and residual primary pores: where quartz overgrowths and poikilitic calcite are developed, reservoir properties are poor; where residual primary pores lined with authigenic chlorite coatings and feldspar dissolution pores are developed, reservoir properties are good.

16.
李明  郭培军  李鑫  梁力 《岩土力学》2016,37(12):3591-3597
Based on the basic idea of the level set method, two- and three-dimensional finite element modelling of rock containing distributions of different types of inclusions is discussed. For the 2D finite element models, several cases of regular inclusions, taking ellipses as the example, are considered: periodic arrangements and arrangements in which both position and inclusion size vary randomly. A method is suggested for grading the material properties across the interface between inclusion and host rock, and a modelling method for irregularly shaped inclusions is also given. For the 3D finite element models, distributions of ellipsoidal inclusions of arbitrary size are considered. The advantage of this modelling approach is that the same finite element mesh can be used for different rock specimens containing arbitrarily distributed inclusions; that is, variations in material properties are not constrained by the finite element mesh. Its disadvantage is the extra computational cost. Finally, combined with a hydraulic fracturing simulation method based on the smeared crack model, the propagation characteristics of hydraulic fractures in rock specimens with different inclusion distributions are simulated.
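A minimal sketch of the level-set ingredient, an implicit ellipse plus a smoothed material transition at the interface, sampled independently of any mesh, is shown below (the linear ramp and all names are our assumptions, not the paper's treatment):

```python
import numpy as np

def ellipse_level_set(x, y, cx, cy, a, b):
    # Negative inside the inclusion, positive outside.
    return ((x - cx) / a) ** 2 + ((y - cy) / b) ** 2 - 1.0

def graded_modulus(phi, E_matrix, E_incl, width=0.1):
    # Blend moduli linearly across a band of the given width at the
    # interface; the mesh never changes, material varies pointwise.
    t = np.clip((phi + width / 2.0) / width, 0.0, 1.0)  # 0 inside, 1 outside
    return E_incl + (E_matrix - E_incl) * t

xs, ys = np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 1, 101))
phi = ellipse_level_set(xs, ys, 0.5, 0.5, 0.25, 0.15)
E = graded_modulus(phi, E_matrix=30e9, E_incl=70e9)
```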

17.
余先川  任雅丽 《江苏地质》2014,38(2):238-244
Non-negative matrix factorisation (NMF) is an important matrix factorisation algorithm and data dimensionality reduction tool. This review introduces the background, definition, principles, and characteristics of NMF, summarises the currently popular NMF algorithms and research progress on the basis of the existing classification of NMF algorithms, and surveys applications of NMF in the geosciences, chiefly hyperspectral image processing and mineral resource prediction. Research directions for NMF algorithms are also forecast.
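As a concrete baseline among the algorithm families such a review covers, the classic Lee-Seung multiplicative updates for the Frobenius-norm NMF, V ≈ WH, are shown below (a generic sketch, not a method from the paper):

```python
import numpy as np

def nmf(V, r, n_iter=200, seed=0):
    # Factor non-negative V (m x n) as W (m x r) @ H (r x n); the
    # multiplicative updates preserve non-negativity by construction.
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 1e-3
    H = rng.random((r, n)) + 1e-3
    eps = 1e-12
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```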

18.
Chang  Ching S.  Deng  Yibing 《Acta Geotechnica》2022,17(7):2675-2696

The energy equation is an expression of the first law of thermodynamics or the law of conservation of energy. According to the first law of thermodynamics, the externally applied work to a system is equal to the sum of dissipation energy and Helmholtz free energy of the system. However, most of the currently available stress–dilatancy relationships are based on the energy equation of Taylor-Cam Clay type, which hypothesizes that the applied plastic work is equal solely to the frictional dissipation energy. The Helmholtz free energy has been completely neglected. Recently, observed from acoustic experiments, it has been recognized that Helmholtz free energy can be caused by deformation mechanisms other than friction between particles. Thus, it is necessary to include additional terms in the energy equation in order to correctly model the stress-dilatancy behavior. This paper addresses the issue regarding the balance of this energy equation. Analyses of experimental results are presented. Specific forms of the frictional energy and Helmholtz free energy are proposed. The proposed energy equation is verified with the experimental data obtained from Silica sand, Ottawa sand, and Nevada sand.
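For orientation, the balance the paper revisits can be written compactly in standard triaxial notation (a reconstruction of the classical background, not the paper's proposed free-energy terms):

```latex
% First law: plastic work input splits into dissipation and stored
% (Helmholtz) free energy:
\[
  \delta W^{p} \;=\; \delta \Phi \;+\; \delta F .
\]
% Taylor / Cam-Clay type energy equations set \delta F = 0 and keep only
% frictional dissipation,
\[
  p'\,\delta\varepsilon_{v}^{p} + q\,\delta\varepsilon_{q}^{p}
  \;=\; M\,p'\,\lvert \delta\varepsilon_{q}^{p} \rvert ,
\]
% which yields the familiar stress-dilatancy rule d = M - \eta with
% \eta = q/p'; the paper's argument is that \delta F \neq 0 in general.
```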


19.
Research on a statistical seepage model and a statistical damage constitutive model for rock   (cited by 5: 4 self, 1 other)
韦立德  杨春和  徐卫亚 《岩土力学》2004,25(10):1527-1530
Using the Eshelby equivalent inclusion method to build a seepage-coupled rock damage constitutive model is effective, yet the relevant literature remains scarce. Using the Eshelby equivalent inclusion method of micromechanics, this study explores the determination of a Helmholtz free energy function accounting for seepage and damage; a corresponding statistical damage constitutive model for rock under seepage is established with continuum damage mechanics, a failure criterion for rock considering the seepage and damage process is proposed, and an evolution equation for rock permeability as a function of damage and strain is suggested. Comparison with experimental results shows that the model is reasonable.
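The abstract does not spell out its statistical law; as generic background, statistical damage models of this family commonly assume Weibull-distributed micro-element strength, which gives the sketch below (entirely illustrative; the paper's seepage coupling is not represented):

```python
import numpy as np

def weibull_damage_stress(eps, E=20e9, eps0=0.004, m=3.0):
    # Damage variable from Weibull-distributed micro-element strength:
    # D = 1 - exp(-(eps/eps0)^m); effective stress sigma = E*eps*(1-D)
    # rises, peaks, then strain-softens.
    D = 1.0 - np.exp(-((eps / eps0) ** m))
    return E * eps * (1.0 - D)

strain = np.linspace(0.0, 0.02, 200)
stress = weibull_damage_stress(strain)
```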

20.
In some studies on landslide susceptibility mapping (LSM), landslide boundary and spatial shape characteristics have been expressed in the landslide inventory as points or circles instead of the accurate polygon form. Different expressions of landslide boundaries and spatial shapes may lead to substantial differences in the distribution of predicted landslide susceptibility indexes (LSIs); moreover, the presence of irregular landslide boundaries and spatial shapes introduces uncertainties into the LSM. To address this issue, taking accurately drawn polygonal boundaries as the reference, the uncertainty patterns of LSM modelling under two other expressions of landslide boundary and spatial shape, namely landslide points and circles, are compared. Within the research area of Ruijin City in China, a total of 370 landslides with accurate boundary information are obtained, and 10 environmental factors, such as slope and lithology, are selected. Then, correlation analyses between the landslide boundary shapes and selected environmental factors are performed via the frequency ratio (FR) method. Next, a support vector machine (SVM) and random forest (RF) based on landslide points, circles and accurate landslide polygons are constructed as point-, circle- and polygon-based SVM and RF models, respectively, to address LSM. Finally, the prediction capabilities of the above models are compared by computing their statistical accuracy using receiver operating characteristic analysis, and the uncertainties of the predicted LSIs under the above models are discussed. The results show that using polygonal surfaces with a higher reliability and accuracy to express the landslide boundary and spatial shape can provide a markedly improved LSM accuracy, compared to those based on the points and circles. Moreover, a higher degree of uncertainty of LSM modelling is present in the expression of points because there are too few grid units acting as model input variables. Additionally, the expression of the landslide boundary as circles introduces errors in measurement and is not as accurate as the polygonal boundary in most LSM modelling cases. Furthermore, the results under different conditions show that the polygon-based models have a higher LSM accuracy, with lower mean values and larger standard deviations, compared with the point- and circle-based models. Finally, the overall LSM accuracy of the RF is superior to that of the SVM, and similar patterns of landslide boundary and spatial shape affecting the LSM modelling are reflected in the SVM and RF models.
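The modelling loop the comparison rests on (environmental factors per grid unit in, landslide susceptibility index out, scored by ROC analysis) looks schematically like this with scikit-learn (synthetic stand-in data; the paper's 370 landslides and 10 real factors are not reproduced):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 10))      # 10 environmental factors per unit
y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(2000) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, model in [("RF", RandomForestClassifier(random_state=0)),
                    ("SVM", SVC(probability=True, random_state=0))]:
    lsi = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]  # susceptibility index
    print(name, "AUC:", roc_auc_score(y_te, lsi))
```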
