Similar documents
20 similar documents found (search time: 93 ms).
1.
The degrees of freedom (DOF) in standard ensemble-based data assimilation are limited by the ensemble size. Successful assimilation of a data set with large information content (IC) therefore requires that the DOF is sufficiently large. Too few DOF relative to the IC may result in ensemble collapse, or at least in unwarranted uncertainty reduction in the estimation results. In this situation, one has two options to restore a proper balance between the DOF and the IC: to increase the DOF or to decrease the IC. Spatially dense data sets typically have a large IC. Within subsurface applications, inverted time-lapse seismic data used for reservoir history matching is an example of a spatially dense data set. Such data are considered to have great potential due to their large IC, but they also contain errors that are challenging to characterize properly. The computational cost of running the forward simulations for reservoir history matching with any kind of data is large for field cases, such that a moderately large ensemble size is standard. Realization of the potential in seismic data for ensemble-based reservoir history matching is therefore not straightforward, not only because of the unknown character of the associated data errors, but also due to the imbalance between a large IC and too few DOF. Distance-based localization is often applied to increase the DOF but is example specific and involves cumbersome implementation work. We consider methods to obtain a proper balance between the IC and the DOF when assimilating inverted seismic data for reservoir history matching. To decrease the IC, we consider three ways to reduce the influence of the data space: subspace pseudo inversion, data coarsening, and a novel way of performing front extraction. To increase the DOF, we consider coarse-scale simulation, which allows for an increase in the DOF by increasing the ensemble size without increasing the total computational cost. We also consider a combination of decreasing the IC and increasing the DOF by proposing a novel method consisting of a combination of data coarsening and coarse-scale simulation. The methods were compared on one small and one moderately large example with seismic bulk-velocity fields at four assimilation times as data. The size of the examples allows for calculation of a reference solution obtained with standard ensemble-based data assimilation methodology and an unrealistically large ensemble size. With the reference solution as the yardstick with which the quality of other methods is measured, we find that the novel method combining data coarsening and coarse-scale simulations gave the best results. With very restricted computational resources available, this was the only method that gave satisfactory results.
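The DOF/IC balance described above can be made concrete with the standard ensemble update: the gain is built from ensemble anomalies, so its rank, and hence the effective DOF, is at most the ensemble size minus one. The sketch below is a generic ensemble-smoother update in NumPy, not the specific scheme used in the paper; all array names are illustrative.

```python
# Minimal ensemble-smoother update (a generic sketch, not the paper's scheme).
import numpy as np

def ensemble_update(X, Y, d_obs, R):
    """One global analysis step.
    X : (n_param, n_ens) prior parameter ensemble
    Y : (n_data, n_ens)  predicted data for each member
    d_obs : (n_data,)    observed data
    R : (n_data, n_data) observation-error covariance
    """
    n_ens = X.shape[1]
    Xa = X - X.mean(axis=1, keepdims=True)                 # parameter anomalies
    Ya = Y - Y.mean(axis=1, keepdims=True)                 # predicted-data anomalies
    C_dd = Ya @ Ya.T / (n_ens - 1) + R                     # data-space covariance
    K = (Xa @ Ya.T / (n_ens - 1)) @ np.linalg.pinv(C_dd)   # gain; rank <= n_ens - 1
    D = d_obs[:, None] + np.random.multivariate_normal(    # perturbed observations
        np.zeros(len(d_obs)), R, size=n_ens).T
    return X + K @ (D - Y)
```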

2.
Application of wavelet transform to the analysis of periodic variations of hydrology and climate in the Hexi region (Cited: 35; self-citations: 1; citations by others: 35)
Wavelet time-frequency analysis is superior to Fourier analysis because of its localization properties. Using the Meyer wavelet, period analysis of the annual runoff, annual precipitation and annual mean temperature of the Hexi region, Gansu, over the past 50 years shows that the hydrological and meteorological series of the region fluctuate at characteristic time scales of roughly 35 a, 22 a, 11 a, 5-6 a and 2-3 a. These basic periods coincide with the sunspot cycle and with periods of ocean-atmosphere interaction, indicating that the periodic variations of the hydrological and meteorological series in the Hexi region are influenced by variations in celestial motion. Celestial motion directly affects the periodic variation of precipitation and temperature and thereby, under given underlying-surface conditions, the periodic variation of runoff.
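As an illustration of this kind of period analysis, the sketch below computes a continuous wavelet power spectrum for a synthetic 50-year annual runoff series with PyWavelets. A Morlet wavelet is used here as a stand-in for the Meyer wavelet of the study, and the series itself is artificial, so the recovered periods only demonstrate the workflow.

```python
# Sketch: continuous wavelet spectrum of a ~50-year annual series (synthetic data).
import numpy as np
import pywt

years = np.arange(1951, 2001)
runoff = (10 + 2 * np.sin(2 * np.pi * years / 22)      # 22 a component
          + np.sin(2 * np.pi * years / 11)             # 11 a component
          + 0.5 * np.random.randn(years.size))         # noise

scales = np.arange(1, 32)
coefs, freqs = pywt.cwt(runoff, scales, 'morl', sampling_period=1.0)
power = np.abs(coefs) ** 2
dominant_periods = 1.0 / freqs[power.mean(axis=1).argsort()[::-1][:3]]
print("strongest periods (years):", np.round(dominant_periods, 1))
```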

3.
唐勇  张辉  刘丛强  饶冰 《地球化学》2010,39(2):184-190
Using albite granite from the Yichun No. 414 pluton, Jiangxi, as the experimental starting material, glasses with different P2O5 contents (0.27%-7.71%) were prepared, and the partitioning of Sn between an aqueous fluid and a phosphorus-rich peraluminous melt was studied experimentally at 100 MPa and 850 °C and 800 °C. The results show that the fluid/melt partition coefficient of Sn (D_Sn^(fluid/melt)) varies between 2.10×10^-4 and 1.36×10^3, indicating that Sn strongly tends to be enriched in the phosphorus-rich peraluminous melt. As the P2O5 content of the system increases from 0.27% to 1.91%, the fluid/melt partition coefficient of Sn gradually increases; with further increase of P2O5, the partition coefficient tends to decrease. These experimental results suggest that P is probably not the main complexing agent for the transport of Sn in the fluid phase.
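The quantity reported above is the Nernst-type partition coefficient, i.e. the concentration ratio of Sn between the coexisting fluid and melt; values below one mean that Sn stays in the melt:

```latex
D_{\mathrm{Sn}}^{\mathrm{fluid/melt}} \;=\; \frac{C_{\mathrm{Sn}}^{\mathrm{fluid}}}{C_{\mathrm{Sn}}^{\mathrm{melt}}},
\qquad D_{\mathrm{Sn}}^{\mathrm{fluid/melt}} < 1 \;\Longrightarrow\; \text{Sn partitions preferentially into the melt.}
```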

4.
5.
Many problems in mining and civil engineering require using numerical stress analysis methods to repeatedly solve large models. Widespread acceptance of tunneling methods, such as the New Austrian Tunneling Method, which depend heavily on numerical stress analysis tools, and the fact that the effects of excavation at the face of a tunnel are distinctively three-dimensional (3D), necessitate the use of 3D numerical analysis for these problems. Stress analysis of a practical mining problem can be very lengthy, and the processing time can be measured in days or weeks at times. A framework is developed to facilitate efficient modeling of underground excavations and to create an optimal 3D mesh by reducing the number of surface and volume elements while keeping the result of stress analysis accurate enough at the region of interest, where a solution is sought. Fewer surface and volume elements mean fewer degrees of freedom in the numerical model, which directly translates into savings in computational time and resources. The mesh refinement algorithm is driven by a set of criteria that are functions of distance and visibility of points from the region of interest, and the framework can be easily extended by adding new types of criteria. This paper defines the framework, whereas a second companion paper will investigate its efficiency, accuracy and application to a number of practical mining problems. Copyright © 2012 John Wiley & Sons, Ltd.
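The distance criterion can be pictured as a target element-size function that grows away from the region of interest. The sketch below is only illustrative: the linear growth law, the parameter values and the omission of the visibility criterion are assumptions, not the framework's actual rules.

```python
# Sketch of a distance-driven mesh-grading criterion: the target element size
# grows with distance from the region of interest (ROI). Names and the growth
# law are illustrative, not taken from the paper.
import numpy as np

def target_element_size(points, roi_center, h_min=0.5, h_max=20.0, growth=0.15):
    """Return a desired element size for each candidate point (metres)."""
    d = np.linalg.norm(points - roi_center, axis=1)     # distance to ROI
    return np.minimum(h_min * (1.0 + growth * d), h_max)

pts = np.random.uniform(-100, 100, size=(5, 3))         # candidate mesh points
print(target_element_size(pts, roi_center=np.zeros(3)))
```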

6.
Coarse-scale data assimilation (DA) with large ensemble size is proposed as a robust alternative to standard DA with localization for reservoir history matching problems. With coarse-scale DA, the unknown property function associated with each ensemble member is upscaled to a grid significantly coarser than the original reservoir simulator grid. The grid coarsening is automatic, ensemble-specific and non-uniform. The selection of regions where the grid can be coarsened without introducing too large modelling errors is performed using a second-generation wavelet transform allowing for seamless handling of non-dyadic grids and inactive grid cells. An inexpensive local-local upscaling is performed on each ensemble member. A DA algorithm that restarts from initial time is utilized, which avoids the need for downscaling. Since the DA computational cost roughly equals the number of ensemble members times the cost of a single forward simulation, coarse-scale DA allows for a significant increase in the number of ensemble members at the same computational cost as standard DA with localization. Fixing the computational cost for both approaches, the quality of coarse-scale DA is compared to that of standard DA with localization (using state-of-the-art localization techniques) on examples spanning a large degree of variability. It is found that coarse-scale DA is more robust with respect to variation in example type than each of the localization techniques considered with standard DA. Although the paper is concerned with two spatial dimensions, coarse-scale DA is easily extendible to three spatial dimensions, where it is expected that its advantage with respect to standard DA with localization will increase.
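A minimal picture of the upscaling step is block averaging of the property field onto a coarser grid. The paper's coarsening is non-uniform and wavelet-driven, so the uniform 2x2 averaging below is only a stand-in that shows what a cheap local upscaling of each ensemble member looks like.

```python
# Sketch: upscaling a 2-D property field by block averaging (uniform 2x2 blocks
# used only for illustration; the paper's coarsening is non-uniform).
import numpy as np

def block_average(field, factor=2):
    ny, nx = field.shape
    assert ny % factor == 0 and nx % factor == 0
    return field.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

fine = np.random.lognormal(mean=0.0, sigma=1.0, size=(64, 64))   # e.g. permeability
coarse = block_average(fine, factor=2)
print(fine.shape, "->", coarse.shape)
```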

7.
To speed up three-dimensional magnetotelluric (MT) forward modelling, this paper introduces a new algebraic multigrid algorithm, the aggregation-based algebraic multigrid (AGMG) method, into 3-D MT forward modelling. Starting from Maxwell's equations under the quasi-static approximation, the equations are discretized with a staggered-grid finite-volume method and, with Dirichlet boundary conditions of the first kind, assembled into a large sparse complex linear system. The coarsening strategy and nested-iteration technique of the AGMG algorithm are then described, and three different AGMG solvers are implemented: (1) the conventional V-cycle AGMG algorithm; (2) AGMG-preconditioned conjugate gradients (AGMG-CG); and (3) the AGMG-preconditioned generalized conjugate residual method (AGMG-GCR), thereby realizing 3-D MT forward modelling. Forward modelling of typical geoelectric models and comparison with the existing 3-D MT forward/inversion code ModEM verify the accuracy of the algorithms. In addition, comparisons with the quasi-minimal-residual (QMR) iterative solver for different mesh discretizations and polarization modes show that the AGMG-preconditioned solvers (AGMG-CG, AGMG-GCR) not only improve the stability of the algorithm but also solve the forward problem quickly and efficiently; AGMG-GCR requires fewer iterations, converges faster, and has a smoother error-decay curve. For a 144×152×104 mesh, the computation is more than ten times faster than the existing ModEM code, making the method particularly suitable for large-scale 3-D MT forward modelling.
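The solver pattern described above, an aggregation-type AMG hierarchy used as a preconditioner for a Krylov method, can be sketched with off-the-shelf Python tools. PyAMG's smoothed-aggregation solver is used here as a stand-in for AGMG, and a real symmetric Poisson matrix replaces the complex staggered-grid finite-volume system, so this is an illustration of the AGMG-CG idea rather than a reproduction of the paper's code.

```python
# Sketch: aggregation-type AMG as a preconditioner for CG on a sparse SPD system.
import numpy as np
import scipy.sparse.linalg as spla
import pyamg

A = pyamg.gallery.poisson((200, 200), format='csr')   # SPD test matrix
b = np.random.rand(A.shape[0])

ml = pyamg.smoothed_aggregation_solver(A)             # build the AMG hierarchy
M = ml.aspreconditioner(cycle='V')                    # V-cycle preconditioner

x, info = spla.cg(A, b, M=M)
print("converged" if info == 0 else f"cg info = {info}")
```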

8.
The crystal size distributions (CSDs) of plagioclase and amphibole were determined from andesites of the Soufrière Hills volcano, Montserrat. Plagioclase occurs as separate crystals and as chadocrysts in large amphibole oikocrysts. The chadocrysts represent an earlier stage of textural development, preserved by growth of the oikocryst. Seventeen rock and eight chadocryst plagioclase CSDs are considered together as a series of samples of textural development. All are curved, concave up, and coincident, differing only in their maximum crystal size. Three amphibole CSDs have a similar shape and behaviour, but at a different position from the plagioclase CSDs. A dynamic model is proposed for the origin of textures in these rocks. Crystallization of plagioclase started following emplacement of andesite magma at a depth of at least 5 km. A steep, straight CSD developed by nucleation and growth. This process was interrupted by the injection of mafic magma into the chamber, or convective overturn of hotter magma. The magma temperature rose until it was buffered, initially by plagioclase solution and later by crystallization. During this period textural coarsening (Ostwald ripening) of plagioclase and amphibole occurred: small crystals dissolved simultaneously with the growth of large crystals. The CSD became less steep and extended to larger crystal sizes. Early stages of this process are preserved in coarsened amphibole oikocrysts. Repetitions of this cycle generated the observed family of CSDs. Textural coarsening followed the ‘Communicating Neighbours’ model. Hence, each crystal has its own, unique growth–solution history, without appealing to mixing of magmas that crystallized in different environments. KEY WORDS: Ostwald ripening; textural coarsening; oikocryst; CSD; texture

9.
Diffusive coarsening (Ostwald ripening) of H2O and H2O-CO2 bubbles in rhyolite and basaltic andesite melts was studied with elevated temperature–pressure experiments to investigate the rates and time spans over which vapor bubbles may enlarge and attain sufficient buoyancy to segregate in magmatic systems. Bubble growth and segregation are also considered in terms of classical steady-state and transient (non-steady-state) ripening theory. Experimental results are consistent with diffusive coarsening as the dominant mechanism of bubble growth. Ripening is faster in experiments saturated with pure H2O than in those with a CO2-rich mixed vapor, probably due to faster diffusion of H2O than CO2 through the melt. None of the experimental series followed the time^(1/3) increase in mean bubble radius and time^(-1) decrease in bubble number density predicted by classical steady-state ripening theory. Instead, products are interpreted as resulting from transient regime ripening. Application of transient regime theory suggests that bubbly magmas may require from days to 100 years to reach steady-state ripening conditions. Experimental results, as well as theory for steady-state ripening of bubbles that are immobile or undergoing buoyant ascent, indicate that diffusive coarsening efficiently eliminates micron-sized bubbles and would produce mm-sized bubbles in 102–10years in crustal magma bodies. Once bubbles attain mm-sizes, their calculated ascent rates are sufficient that they could transit multiple kilometers over hundreds to thousands of years through mafic and silicic melt, respectively. These results show that diffusive coarsening can facilitate transfer of volatiles through, and from, magmatic systems by creating bubbles sufficiently large for rapid ascent.
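For reference, the classical steady-state scaling mentioned above says the cube of the mean radius grows linearly in time, so the radius itself grows as t^(1/3). The sketch below only illustrates that scaling; the rate constant is arbitrary and not fitted to the experiments.

```python
# Sketch: steady-state (LSW-type) ripening law r(t)^3 = r0^3 + k*t.
# k is an arbitrary illustrative value, not fitted to the experiments above.
import numpy as np

def mean_radius(t_seconds, r0=1e-6, k=1e-22):
    """Return the mean bubble radius in metres (r0 in m, k in m^3/s)."""
    return (r0**3 + k * t_seconds) ** (1.0 / 3.0)

for years in (1, 10, 100):
    t = years * 3.155e7                                # seconds per year
    print(f"{years:>4} yr : mean radius ~ {mean_radius(t) * 1e6:.1f} micrometres")
```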

10.
A new approach based on principal component analysis (PCA) for the representation of complex geological models in terms of a small number of parameters is presented. The basis matrix required by the method is constructed from a set of prior geological realizations generated using a geostatistical algorithm. Unlike standard PCA-based methods, in which the high-dimensional model is constructed from a (small) set of parameters by simply performing a multiplication using the basis matrix, in this method the mapping is formulated as an optimization problem. This enables the inclusion of bound constraints and regularization, which are shown to be useful for capturing highly connected geological features and binary/bimodal (rather than Gaussian) property distributions. The approach, referred to as optimization-based PCA (O-PCA), is applied here mainly for binary-facies systems, in which case the requisite optimization problem is separable and convex. The analytical solution of the optimization problem, as well as the derivative of the model with respect to the parameters, is obtained analytically. It is shown that the O-PCA mapping can also be viewed as a post-processing of the standard PCA model. The O-PCA procedure is applied both to generate new (random) realizations and for gradient-based history matching. For the latter, two- and three-dimensional systems, involving channelized and deltaic-fan geological models, are considered. The O-PCA method is shown to perform very well for these history matching problems, and to provide models that capture the key sand–sand and sand–shale connectivities evident in the true model. Finally, the approach is extended to generate bimodal systems in which the properties of both facies are characterized by Gaussian distributions. MATLAB code with the O-PCA implementation, and examples demonstrating its use are provided online as Supplementary Materials.
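The difference between standard PCA and O-PCA can be sketched as follows: the PCA reconstruction m = m_mean + Phi xi is replaced by a small bound-constrained problem whose regularization pushes cell values toward 0 or 1. The code below uses a generic optimizer and an assumed regularization weight instead of the paper's separable analytical solution, so it is an illustration of the idea only.

```python
# Sketch of the O-PCA idea for a binary-facies field: reconstruct from the PCA
# parameters xi by solving a bound-constrained, regularized problem rather than
# taking the straight PCA mapping. gamma and the optimizer are illustrative.
import numpy as np
from scipy.optimize import minimize

def build_pca_basis(realizations, n_comp):
    """realizations: (n_cells, n_real) matrix of prior facies models (0/1)."""
    mean = realizations.mean(axis=1, keepdims=True)
    U, S, _ = np.linalg.svd(realizations - mean, full_matrices=False)
    return mean[:, 0], U[:, :n_comp] * S[:n_comp] / np.sqrt(realizations.shape[1] - 1)

def opca_map(xi, mean, Phi, gamma=0.5):
    target = mean + Phi @ xi                        # standard PCA reconstruction
    obj = lambda m: np.sum((m - target) ** 2) + gamma * np.sum(m * (1.0 - m))
    res = minimize(obj, x0=np.clip(target, 0, 1),
                   bounds=[(0.0, 1.0)] * mean.size, method='L-BFGS-B')
    return res.x

prior = (np.random.rand(100, 200) > 0.6).astype(float)   # toy binary realizations
mean, Phi = build_pca_basis(prior, n_comp=10)
model = opca_map(np.random.randn(10), mean, Phi)
print(model.min(), model.max())
```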

11.
Upscaling methods that need to solve local problems subject to boundary conditions are addressed in this article. We define a new upscaling method based on optimization problems, which can take into account general boundary conditions applied to local problems. The determination of upscaled permeability leads to minimizing the difference of dissipated energies (or averaged velocity) at fine and large scale. Using optimal control techniques, we obtain an effective computing algorithm that allows us to recover, with classical boundary conditions, the well-known results. The uniqueness issue is tackled for the optimization problems introduced in our approach. We show that the method is stable with respect to G-convergence, a property that establishes a link with homogenization theory, and finally, 2D numerical experiments are presented.

12.
There is a correspondence between flow in a reservoir and large-scale permeability trends. This correspondence can be derived by constraining reservoir models using observed production data. One of the challenges in deriving the permeability distribution of a field using production data involves determination of the scale of resolution of the permeability. Adaptive Multiscale Estimation (AME) seeks to overcome the problems related to choosing the resolution of the permeability field through dynamic parameterisation selection. The standard AME uses a gradient algorithm in solving several optimisation problems with increasing permeability resolution. This paper presents a hybrid algorithm which combines a gradient search and a stochastic algorithm to improve the robustness of the dynamic parameterisation selection. At low dimensions, we use the stochastic algorithm to generate several optimised models. We use information from all these produced models to find new optimal refinements, and start new optimisations from several different suggested parameterisations. At higher dimensions we change to a gradient-type optimiser, where the initial solution is chosen from the ensemble of models suggested by the stochastic algorithm. The selection is based on a predefined criterion. We demonstrate the robustness of the hybrid algorithm on sample synthetic cases, most of which were considered unsolvable using the standard AME algorithm.

13.
Effects of matrix grain size on the kinetics of intergranular diffusion (Cited: 1; self-citations: 0; citations by others: 1)
A linear relationship exists between the mean volume of garnet porphyroblasts and the squared inverse of mean matrix grain diameter for six samples of garnetiferous mica quartzite with identical thermal histories and similar mineralogy and modes. This relationship accords with theoretical predictions of the dependence of intergranular diffusive fluxes on the volume fraction of grain edges that function as diffusional pathways during porphyroblast growth. The impact of matrix grain size is large: compared to a rock with a 1-mm matrix, a rock with a 10-μm matrix would experience rates of diffusion-controlled porphyroblast growth that are 10 000 times faster, and characteristic length scales for chemical equilibration that are 100 times larger. Precursor grain sizes may therefore exert a major influence on crystallization kinetics. If matrix coarsening occurs during prograde reaction, a decrease in the volume fraction of diffusional pathways will tend to counteract the exponential thermal increase in diffusive fluxes. The impact of such matrix grain growth, although difficult to assess without firm knowledge of coarsening rates in polymineralic aggregates, might be significant for matrices finer than c. 100 μm at temperatures above c. 500–600 °C, but is likely negligible for coarser grain sizes and lower temperatures.
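The quoted factors follow directly from the stated scaling, with the growth rate taken proportional to the inverse square of the matrix grain diameter d and the equilibration length scale proportional to 1/d, consistent with the numbers in the abstract:

```latex
\frac{\text{rate}(d = 10\ \mu\text{m})}{\text{rate}(d = 1\ \text{mm})}
  = \left(\frac{1\ \text{mm}}{10\ \mu\text{m}}\right)^{2} = 100^{2} = 10^{4},
\qquad
\frac{L(d = 10\ \mu\text{m})}{L(d = 1\ \text{mm})}
  = \frac{1\ \text{mm}}{10\ \mu\text{m}} = 100 .
```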

14.
Research on back analysis in geotechnical engineering based on particle swarm optimization (Cited: 11; self-citations: 0; citations by others: 11)
高玮 《岩土力学》2006,27(5):795-798
Optimized back analysis in geotechnical engineering is, in essence, a typical complex nonlinear function optimization problem, and global optimization algorithms are the ideal way to solve it. However, because the back analysis repeatedly calls the forward analysis, the computational efficiency of the whole procedure is low. To improve the efficiency of optimized back analysis, a new and more efficient bio-inspired algorithm, particle swarm optimization, is introduced into the field of geotechnical back analysis. On this basis, combined with finite-element numerical analysis, a new optimized back-analysis method for geotechnical engineering, particle-swarm-optimization back analysis, is proposed, and its effectiveness is verified with a simple example.
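For readers unfamiliar with the algorithm, the sketch below is a minimal particle swarm optimizer applied to a toy misfit function standing in for the finite-element forward analysis; the swarm parameters are common textbook defaults, and none of it reproduces the paper's actual implementation.

```python
# Minimal particle-swarm-optimization sketch for a back-analysis style objective.
import numpy as np

def pso(objective, bounds, n_particles=20, n_iter=100, w=0.7, c1=1.5, c2=1.5):
    lo, hi = np.array(bounds).T
    dim = lo.size
    x = np.random.uniform(lo, hi, (n_particles, dim))         # positions
    v = np.zeros_like(x)                                       # velocities
    pbest, pbest_f = x.copy(), np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = np.random.rand(2, n_particles, dim)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Toy misfit standing in for the FE forward run: "true" parameters are (30, 25).
misfit = lambda p: (p[0] - 30.0) ** 2 + (p[1] - 25.0) ** 2
best, fbest = pso(misfit, bounds=[(1, 100), (5, 45)])
print(best, fbest)
```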

15.
Preconditioned projection (or conjugate gradient like) methods are increasingly used for the accurate and efficient solution to finite element (FE) coupled consolidation equations. Theory indicates that preliminary row/column scaling does not affect the eigenspectrum of the iteration matrix controlling convergence as long as the preconditioner relies on the incomplete factorization of the FE coefficient matrix. However, computational experience with mid-large size problems shows that the above inexpensive operation can significantly accelerate the solver convergence, and to a minor extent also improve the final accuracy, as a result of a better solver stability to the accumulation and propagation of floating point round-off errors. This is demonstrated with the aid of the least square logarithm (LSL) scaling algorithm on FE consolidation problems of increasing size up to more than 100 000. It is shown that a major source of numerical instability rests with the sub-matrix which couples the structural to the fluid part of the underlying mathematical model. It is concluded that for mid-large size, possibly difficult, FE consolidation problems left/right LSL scaling is to be always recommended when the incomplete factorization is used as a preconditioning technique. Copyright © 2003 John Wiley & Sons, Ltd.
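A minimal version of the workflow, scale the system, build an incomplete-factorization preconditioner, solve, then undo the scaling, is sketched below. Simple symmetric diagonal scaling and SciPy's ILU stand in for the LSL scaling and the consolidation matrices of the paper.

```python
# Sketch: diagonal scaling before an ILU-preconditioned Krylov solve (stand-in
# for LSL scaling; the test matrix is artificial, not an FE consolidation system).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 3000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')
A = A + sp.diags(10.0 ** np.random.uniform(-6, 6, n))    # badly scaled diagonal
b = np.random.rand(n)

d = 1.0 / np.sqrt(A.diagonal())                          # symmetric scaling factors
D = sp.diags(d)
As, bs = D @ A @ D, d * b                                # scaled system

ilu = spla.spilu(As.tocsc(), drop_tol=1e-4)              # incomplete factorization
M = spla.LinearOperator(As.shape, ilu.solve)
ys, info = spla.gmres(As, bs, M=M)
x = d * ys                                               # undo the scaling
print("gmres info:", info)
```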

16.
Geodesy utilizes state of the art data collection techniques such as GPS (Global Positioning System) to acquire locations of points. Traditionally, the coordinates of these points are estimated using the Least Squares (LS) method. Nevertheless, Robust Estimation (RE) yields more accurate results than the LS method in the presence of blunders (gross errors) among the data set. For example, the Least Trimmed Squares (LTS) method and the Least Median Squares (LMS) method can be used for this purpose. The first method aims to minimize the sum of the squared residuals by trimming away observations with large residuals. On the other hand, the second method involves the minimization of the median of the squared residuals. Both methods can be implemented using an optimization method, i.e., the Artificial Bee Colony (ABC) algorithm. The ABC algorithm is a swarm intelligence (a branch of artificial intelligence) technique that can be used for the solution of minimization or maximization problems. In this paper, a new approach is put forward in which the LTS and LMS methods are applied to GPS data by employing the ABC algorithm. Firstly, some discussions about the theoretical principles of RE and ABC are given. Then, a numerical example is used to demonstrate the validity of the proposed approach. Numerical results show that application of the robust estimation to GPS data can easily be carried out by ABC and this approach helps to enhance the reliability of geospatial data for any application of geodesy.
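The LMS objective itself is easy to state: minimize the median of the squared residuals. The sketch below evaluates it for a toy linear model with injected blunders and minimizes it by plain random search, used here as a lightweight stand-in for the ABC algorithm.

```python
# Sketch: LMS (least median of squares) objective for a toy linear model, minimized
# by random search as a stand-in for the Artificial Bee Colony algorithm.
import numpy as np

def lms_objective(params, A, y):
    return np.median((y - A @ params) ** 2)

rng = np.random.default_rng(0)
A = np.column_stack([np.ones(30), rng.uniform(0, 10, 30)])   # design matrix
y = A @ np.array([2.0, 0.5]) + 0.01 * rng.standard_normal(30)
y[:5] += 5.0                                                  # inject blunders

best, best_f = None, np.inf
for _ in range(20000):
    cand = rng.uniform([-10, -5], [10, 5])                    # candidate parameters
    f = lms_objective(cand, A, y)
    if f < best_f:
        best, best_f = cand, f
print("LMS estimate:", best, " median squared residual:", best_f)
```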

17.
The Liupanshan Basin in southern Ningxia is a sedimentary basin with typical loess-tableland landforms. A very thick loess layer has been deposited at the surface, and the deep structural units differ greatly, so seismic data acquisition must overcome the combined effects of the near surface and the subsurface. This paper analyses the geological conditions and exploration difficulties of the area and, drawing on field data together with new techniques and methods, summarizes a set of seismic data acquisition techniques tailored to the area that can guide later seismic exploration work there.

18.
Estimation of soil parameters based on a genetic algorithm (Cited: 1; self-citations: 0; citations by others: 1)
In the study and analysis of geotechnical problems, reasonable estimation of soil parameters is very important, and back-calculating soil parameters from values measured in engineering practice is an effective new way to obtain them. In this paper, a genetic algorithm combined with a Biot-consolidation finite-element method is used to invert the main soil parameters of a two-layer foundation under a hypothetical embankment. The results show very small errors and fast convergence, indicating that the genetic algorithm, a new type of optimization algorithm, offers high accuracy and fast inversion for the optimal estimation of soil parameters and overcomes some shortcomings of traditional optimization methods; it is therefore worth promoting for parameter optimization in geotechnical engineering.

19.
Garnet is one of the most important rock-forming minerals; it commonly preserves early mineral textures and compositions while recording later deformation and metamorphic reactions. Yttrium (Y) zoning in garnet is rich and complex, and different zoning patterns usually indicate different formation environments or different metamorphic events, making it an important medium for studying metamorphic evolution. In previous studies, LA-ICP-MS has been the main technique for analysing Y in garnet, while EPMA has mainly been used for major elements. However, the beam size (44 μm) and matrix effects of LA-ICP-MS are larger than those of EPMA (0-5 μm); when garnet grains are small, rich in inclusions and fractures, or when compositional zoning varies strongly at the micro scale, the larger beam more easily obscures such fine-scale information. By tuning the EPMA measurement parameters for Y and verifying them against standards, peak and background counting times of 140 s and 70 s, respectively, were adopted, PHA peak-interference stripping was performed, and the detection limit was lowered to 54×10^-6. This paper compares in-situ EPMA major- and trace-element analyses (Ca, Mg, Mn, Fe, Y, Al, Si, Cr, Ti, Na) with LA-ICP-MS Y analyses of four garnet grains from the Foziling garnet-mica schist (sample LD025) to demonstrate the feasibility of analysing Y by EPMA. X-ray mapping and major-element profiles show that all four garnets display growth zoning: Mn shows bell-shaped zoning, Y correlates strongly and positively with X_Sps, and its correlations with X_Grs, X_Alm and X_Prp are unclear. The EPMA and LA-ICP-MS Y profiles agree well in the cores and mantles: in Grt1-Grt3, Y decreases from the core (500×10^-6 to 1200×10^-6) to the mantle (200×10^-6 to 500×10^-6), and the outermost rims have low and variably fluctuating Y (20×10^-6 to 200×10^-6); in Grt4 the variation in Y is comparatively small (180×10^-6 to 450×10^-6), with rises and falls only at the rim. Because EPMA is not sensitive enough at low Y contents (<200×10^-6), and the large LA-ICP-MS spot tends to mask the true compositional variation of narrow rim zones, the two methods differ most at the rims. Metamorphic P-T results obtained by applying the Grt-Xtm thermometer and the garnet single-mineral barometer to the EPMA and LA-ICP-MS data, respectively, show that Grt1-Grt3 (core-mantle-rim) and Grt4 (core-rim) record a relatively complete and consistent P-T evolution. The metamorphic P-T conditions from M1 to M2 to M3 are T = 530-544 °C, P = 0.78-0.82 GPa; T = 577-616 °C, P = 0.89-0.98 GPa; and T = 631-661 °C, P = 1.01-1.07 GPa, respectively, defining a clockwise P-T path, with M1 to M3 reflecting a "warm subduction" process. According to the temperature estimates, Grt1-Grt3 (1.2-1.4 mm, highly euhedral) formed earlier than Grt4 (0.8 mm, less euhedral). Thus the Y contents and zoning patterns of large garnet grains are generally more likely to reveal a relatively complete metamorphic history. This study provides a different perspective and approach for studying the evolution of metapelites and for evaluating metamorphic P-T conditions; combining EPMA major-element work (mineral compositions, X-ray mapping, BSE imaging) with trace-element analysis (e.g. Y) allows geological information to be read more precisely and comprehensively.

20.
Parallel computation of three-dimensional finite elements based on the EBE method (Cited: 4; self-citations: 1; citations by others: 4)
In hydraulic engineering, simulation of construction sequences, time-domain dynamic analysis, and cracking computations all place an urgent demand on large-scale parallel computing. However, direct finite-element solvers based on Gaussian elimination typically consume large amounts of memory and CPU time, and the problems in hydraulic engineering are mostly large-bandwidth problems, which makes these issues even more pronounced. A finite-element method based on the EBE-PCG approach avoids forming the global stiffness matrix and thus significantly reduces memory requirements; moreover, it can be parallelized efficiently, making large-scale numerical computation feasible. A finite-element program using a Jacobi-preconditioned conjugate gradient method with an element-by-element (EBE) strategy was developed and successfully applied to large-scale numerical analyses of the Xiluodu and Jinping projects. The results show that, for the large-bandwidth problems encountered in hydraulic engineering, this is a very effective parallel computing method.
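The core of the EBE idea is that the matrix-vector product needed by CG can be computed by looping over elements (gather, multiply by the element matrix, scatter), so the global stiffness matrix is never assembled and only its diagonal is accumulated for the Jacobi preconditioner. The sketch below does this for a toy 1-D chain of two-node elements with random SPD element matrices; it illustrates the data flow, not the actual hydraulic-engineering models.

```python
# Sketch: element-by-element (EBE) matrix-free matvec with Jacobi-preconditioned CG.
# Element matrices are random SPD stand-ins for real FE stiffness matrices.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n_elem = 2000
n_nodes = n_elem + 1
conn = np.column_stack([np.arange(n_elem), np.arange(1, n_elem + 1)])  # connectivity
B = np.random.rand(n_elem, 2, 2)
ke = B @ B.transpose(0, 2, 1) + np.eye(2)             # SPD 2x2 element matrices

def ebe_matvec(u):
    """r = A @ u computed element by element (gather, multiply, scatter)."""
    r = np.zeros_like(u)
    for e in range(n_elem):
        idx = conn[e]
        r[idx] += ke[e] @ u[idx]
    return r

diag = np.zeros(n_nodes)                              # assemble only the diagonal
for e in range(n_elem):
    diag[conn[e]] += np.diag(ke[e])

A = LinearOperator((n_nodes, n_nodes), matvec=ebe_matvec)
M = LinearOperator((n_nodes, n_nodes), matvec=lambda u: u / diag)   # Jacobi
b = np.random.rand(n_nodes)
x, info = cg(A, b, M=M)
print("cg converged" if info == 0 else f"info = {info}")
```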
