Similar articles
Found 20 similar articles (search time: 15 ms)
1.
A new uncertainty estimation method, which we recently introduced in the literature, allows for a comprehensive search of model posterior space while maintaining a high degree of computational efficiency. The method starts with an optimal solution to an inverse problem, performs a parameter reduction step, and then searches the resulting feasible model space using prior parameter bounds and sparse-grid polynomial interpolation methods. After misfit rejection, the resulting model ensemble represents the equivalent model space and can be used to estimate inverse solution uncertainty. While parameter reduction introduces a posterior bias, it also allows this method to scale to higher-dimensional problems. The use of Smolyak sparse-grid interpolation also dramatically increases sampling efficiency for large stochastic dimensions. Unlike Bayesian inference, which treats the posterior sampling problem as a random process, this geometric sampling method exploits the structure and smoothness of posterior distributions by solving a polynomial interpolation problem and then resampling from the resulting interpolant. The two questions we address in this paper are (1) whether our results are generally compatible with established Bayesian inference methods and (2) how our method compares in terms of posterior sampling efficiency. We accomplish this by comparing our method, for two electromagnetic problems from the literature, with two commonly used Bayesian sampling schemes: Gibbs and Metropolis–Hastings. While both the sparse-grid and Bayesian samplers produce compatible results in both examples, the sparse-grid approach has a much higher sampling efficiency, requiring an order of magnitude fewer samples, suggesting that sparse-grid methods can significantly improve the tractability of inference solutions for problems in high dimensions or with more costly forward physics.
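For readers who want the interpolate-then-resample idea in concrete form, the sketch below applies it to a hypothetical 1-D misfit and contrasts it with a bare-bones Metropolis–Hastings chain; the misfit function, node count, threshold, and temperature are all invented for illustration, and a Chebyshev interpolant stands in for the Smolyak sparse grid used in the paper.

```python
# Hedged sketch of geometric (interpolate-then-resample) posterior sampling
# versus Metropolis-Hastings on a toy 1-D problem. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
misfit = lambda m: (m**2 - 1.0)**2            # hypothetical misfit, two minima

# Geometric sampling: interpolate the misfit on a few nodes, then resample
nodes = np.cos(np.pi * np.arange(9) / 8)      # 9 Chebyshev-Lobatto nodes in [-1, 1]
coef = np.polynomial.chebyshev.chebfit(nodes, misfit(nodes), deg=8)
cand = rng.uniform(-1, 1, 20000)              # cheap draws evaluated on the interpolant
kept = cand[np.polynomial.chebyshev.chebval(cand, coef) < 0.1]   # misfit rejection
print(f"geometric: {nodes.size} forward solves, {kept.size} posterior samples")

# Metropolis-Hastings baseline: one forward solve per proposal
m, chain = 0.5, []
for _ in range(5000):
    p = m + 0.3 * rng.standard_normal()
    if rng.random() < np.exp((misfit(m) - misfit(p)) / 0.05):    # accept/reject
        m = p
    chain.append(m)
print(f"Metropolis-Hastings: 5000 forward solves, chain length {len(chain)}")
```

The contrast in the printed counts mirrors the efficiency argument above: the geometric sampler pays for a handful of forward solves at the nodes, after which resampling is nearly free, while the chain pays one forward solve per proposal.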

2.
In risk analysis, a complete characterization of the concentration distribution is necessary to determine the probability of exceeding a threshold value. The most popular method for predicting the concentration distribution is Monte Carlo simulation, which samples the cumulative distribution function with a large number of repeated operations. In this paper, we first review the three most commonly used Monte Carlo (MC) techniques: standard Monte Carlo, Latin hypercube sampling, and quasi-Monte Carlo, and investigate their performance. We then apply the stochastic collocation method (SCM) to risk assessment. Unlike MC simulations, the SCM does not require a large number of simulations of the flow and solute equations. In particular, the sparse grid collocation method and the probabilistic collocation method are employed to represent the concentration in terms of polynomials and unknown coefficients. The sparse grid collocation method takes advantage of Lagrange interpolation polynomials, while the probabilistic collocation method relies on polynomial chaos expansions. In both methods, the stochastic equations are reduced to a system of decoupled equations, which can be solved with existing solvers and whose results are used to obtain the expansion coefficients. The cumulative distribution function is then obtained by sampling the approximating polynomials. Our synthetic examples show that, among the MC methods, quasi-Monte Carlo gives the smallest variance for the predicted threshold probability due to its superior convergence, and that the stochastic collocation method is an accurate and efficient alternative to MC simulations.
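A minimal sketch of the three MC variants on an exceedance probability, assuming a hypothetical lognormal "concentration model" in place of a flow-and-transport solver; the sampler classes are from scipy.stats.qmc.

```python
# Hedged sketch: P(c > threshold) via standard MC, Latin hypercube, and
# quasi-Monte Carlo (Sobol). The concentration model is a toy stand-in.
import numpy as np
from scipy.stats import qmc, norm

conc = lambda u: np.exp(norm.ppf(u))      # hypothetical lognormal concentration
threshold = 2.0
n = 1024

u_mc = np.random.default_rng(1).random(n)                  # standard MC
u_lhs = qmc.LatinHypercube(d=1, seed=1).random(n).ravel()  # stratified
u_qmc = qmc.Sobol(d=1, scramble=True, seed=1).random_base2(m=10).ravel()

for name, u in [("MC", u_mc), ("LHS", u_lhs), ("Sobol", u_qmc)]:
    print(f"{name:5s} P(c > {threshold}) ~ {(conc(u) > threshold).mean():.4f}")
# exact value for this toy model: 1 - Phi(ln 2) ~ 0.2441
```

Repeating the three estimators over many seeds would show the variance ordering the abstract reports, with the quasi-Monte Carlo estimate tightest around the exact value.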

3.
Information from an outcrop is used as an analogue of a natural heterogeneous aquifer in order to provide an exhaustive data set of hydraulic properties. Based on these data, two commonly used borehole-based investigation methods are simulated numerically. For a scenario of sparse sampling of the aquifer, the regionalization of the borehole hydraulic conductivity values is simulated by applying a deterministic interpolation approach and conditioned stochastic simulations. Comparison of the cumulative distributions of particle arrival times illustrates the effects of the sparse sampling, the properties of the individual investigation methods, and the regionalization methods on the ability to predict flow and transport behaviour in the real system (i.e. the exhaustive data set).

4.
The seismic wave field, in its high-frequency asymptotic approximation, can be interpolated from a low- to a high-resolution spatial grid of receivers and, possibly, point sources by interpolating the eikonal (travel time) and the amplitude. These quantities can be considered as functions of position only. The travel time and the amplitude are assumed to vary only slowly in space; otherwise the validity conditions of the underlying theory would be violated. Relatively coarse spatial sampling is then usually sufficient to obtain a reasonable interpolation. The interpolation is performed in 2-D models of different complexity. The interpolation geometry is 1-D, 2-D, or 3-D according to the source-receiver distribution. Several interpolation methods are applied: Fourier interpolation based on the sampling theorem, linear interpolation, and interpolation by means of the paraxial approximation. These techniques, based on completely different concepts, are tested by comparing their results with a reference ray-theory solution computed for gathers and grids with fine sampling. Of all the investigated techniques, the paraxial method proves the most efficient and accurate for evaluating travel times. However, it is not suitable for approximating amplitudes, for which linear interpolation has proved to be universal and accurate enough to provide results acceptable for many seismological applications.
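The contrast between Fourier (sampling-theorem) and linear interpolation can be sketched on a smooth toy travel-time curve; the curve and sampling rates below are hypothetical, and scipy.signal.resample provides the Fourier interpolation.

```python
# Hedged sketch comparing Fourier-based and linear interpolation of a smooth,
# coarsely sampled travel-time curve. Illustrative parameters only.
import numpy as np
from scipy.signal import resample

x_fine = np.linspace(0.0, 1.0, 256, endpoint=False)
t_true = 0.4 + 0.05 * np.sin(2 * np.pi * x_fine)    # smooth toy travel time (s)

x_coarse = x_fine[::16]                             # coarse receiver sampling
coarse = t_true[::16]

t_fourier = resample(coarse, 256)                   # sinc/sampling-theorem interp
t_linear = np.interp(x_fine, x_coarse, coarse)      # piecewise-linear interp

print("max error, Fourier:", np.abs(t_fourier - t_true).max())
print("max error, linear :", np.abs(t_linear - t_true).max())
```

Because the toy curve is smooth and band-limited, the Fourier result is essentially exact while the linear result carries a visible sampling error, which is the slow-variation assumption of the abstract in miniature.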

5.
This paper concerns efficient uncertainty quantification techniques for inverse problems in Richards' equation which use coarse-scale simulation models. We consider the problem of determining saturated hydraulic conductivity fields conditioned on some integrated response. We use a stochastic parameterization of the saturated hydraulic conductivity and sample using Markov chain Monte Carlo (MCMC) methods. The main advantage of the method presented in this paper is the use of multiscale methods within an MCMC method based on Langevin diffusion. Additionally, we discuss techniques to combine multiscale methods with stochastic solution techniques, specifically sparse grid collocation methods. We show that the proposed algorithms dramatically reduce the computational cost associated with traditional Langevin MCMC methods while providing similar sampling performance.
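A minimal sketch of the Langevin-type MCMC step (MALA) underlying such methods, with a quadratic toy potential standing in for the coarse-scale forward model; the dimension, step size, and chain length are hypothetical.

```python
# Hedged sketch of a Metropolis-adjusted Langevin (MALA) chain. The gradient
# here comes from a toy potential; in the paper it would come from a
# (multiscale, coarse-grid) forward simulation.
import numpy as np

rng = np.random.default_rng(0)
phi = lambda m: 0.5 * m @ m       # toy negative log-posterior
grad_phi = lambda m: m            # its gradient
eps = 0.1                         # Langevin step size (hypothetical)

m = np.zeros(4)
samples = []
for _ in range(2000):
    # Langevin drift plus noise gives an informed proposal
    prop = m - eps * grad_phi(m) + np.sqrt(2 * eps) * rng.standard_normal(4)
    # Metropolis correction keeps the chain exact despite time discretization
    log_q = lambda a, b: -np.sum((a - b + eps * grad_phi(b))**2) / (4 * eps)
    log_alpha = phi(m) - phi(prop) + log_q(m, prop) - log_q(prop, m)
    if np.log(rng.random()) < log_alpha:
        m = prop
    samples.append(m.copy())
print("posterior mean ~", np.mean(samples, axis=0))
```

The cost saving described in the abstract comes from evaluating phi and grad_phi on a cheap coarse-scale model for most proposals, not from the chain mechanics themselves.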

6.
The sampling theorem in two dimensions uniquely defines a surface, provided that its values are known at points located on a regular lattice. If the data are irregularly spaced, the usual procedure is first to interpolate the surface onto a regular grid and then to contour the interpolated data; however, the resulting surface will not necessarily assume the prescribed values on the irregular grid. One way to obtain this result is to introduce a coordinate transformation such that all the original data points are mapped onto a subset of the nodes of a regular grid. The surface is then interpolated at the points corresponding to the remaining nodes of the regular grid; the contour lines are determined in the transformed plane and, using the inverse coordinate transformation, are transferred back to the original plane, where they are guaranteed to be congruent with the original data points. Nonetheless, the resulting surface is very sensitive to the interpolation method used, and two algorithms are analyzed. The first (harmonization) corresponds to determining the potential of an electric field whose boundary conditions are those defined by the data points. The second method consists of two-dimensional statistical estimation (kriging); in particular, the effects of different choices of the data auto-covariance function are discussed. The solutions are compared and some practical results are shown.
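The harmonization idea can be sketched directly: relax the grid toward a discrete harmonic (Laplace) surface while holding the data nodes fixed, so the result honors the data exactly. The grid size and data values below are hypothetical.

```python
# Hedged sketch of "harmonization": a discrete Laplace solve with the data
# nodes imposed as interior Dirichlet conditions. Jacobi sweeps keep it short.
import numpy as np

n = 32
z = np.zeros((n, n))
data = {(5, 7): 1.0, (20, 12): -0.5, (15, 25): 2.0}   # hypothetical data nodes
for (i, j), v in data.items():
    z[i, j] = v

for _ in range(2000):                     # Jacobi sweeps toward a harmonic surface
    z[1:-1, 1:-1] = 0.25 * (z[:-2, 1:-1] + z[2:, 1:-1] +
                            z[1:-1, :-2] + z[1:-1, 2:])
    for (i, j), v in data.items():        # re-impose the data after every sweep
        z[i, j] = v

print("surface honors data:", all(np.isclose(z[i, j], v)
                                  for (i, j), v in data.items()))
```

This is exactly the electrical-potential analogy of the abstract: away from the fixed nodes the surface satisfies the discrete Laplace equation, and at the nodes it takes the prescribed values.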

7.
Digital elevation models (DEMs) are still an important and current source of information for digital soil mapping and the modeling of soil processes. A grid DEM is often interpolated from contour lines, so the contour sampling step becomes an additional interpolation parameter that can play an important role. The objective of this paper is to optimize the interpolation parameters of the regularized spline with tension (RST) method in order to prepare a DEM suitable as an input for erosion modeling. Two contrasting cases, with and without a reference DEM, were investigated. If a reference DEM was available, good interpolation results were obtained with both small and larger sampling steps. In the second case, it was found that small sampling steps should be avoided. The influence of the sampling was demonstrated by the topographic potential for erosion and deposition.

8.
Modeling ground-water systems requires a good understanding of the spatial variation of aquifer hydraulic properties. More recent innovative flowmeters, such as electromagnetic and heat-pulse flowmeters, provide the sensitivity to measure both ambient and pump-induced flows. These flowmeters provide measurements of pump-induced vertical flows which are analyzed to obtain vertical variations in horizontal hydraulic conductivity, K(z). With discrete areal K-values, K(x, y), and vertical profiles of K provided by multiwell testing, the essential elements are present to produce a three-dimensional hydraulic conductivity field. The advent of these new flow-measuring devices has contributed much to the motivation behind this paper. This paper presents the results of applying deterministic and stochastic methodologies to the three-dimensional interpolation of hydraulic properties, specifically hydraulic conductivity, K. Three of the approaches applied in this paper are deterministic in nature (inverse-distance weighting, inverse-distance-squared weighting, and ordinary kriging), while the fourth is a stochastic approach based on self-affine fractals. All of the methods are applied to measured data collected from 14 wells at a site in the United States near Mobile, Alabama. The three-dimensional K-distributions generated by each of the methods are used as inputs to an advection-based transport model, with the resulting model output compared to a two-well tracer study run previously at the same site.
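A minimal sketch of the two inverse-distance variants on synthetic well data follows; all coordinates and K-values are invented, and kriging and the fractal approach are omitted for brevity.

```python
# Hedged sketch of inverse-distance weighting for a 3-D log-K field; powers
# p = 1 and p = 2 correspond to the two deterministic variants named above.
import numpy as np

def idw(xyz_obs, logk_obs, xyz_grid, p=2.0, eps=1e-12):
    """Interpolate log-conductivity at grid points from scattered wells."""
    d = np.linalg.norm(xyz_grid[:, None, :] - xyz_obs[None, :, :], axis=2)
    w = 1.0 / (d**p + eps)                  # inverse-distance weights
    return (w * logk_obs).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(0)
wells = rng.uniform(0, 100, (14, 3))        # 14 hypothetical well locations (m)
logk = rng.normal(-4.0, 1.0, 14)            # synthetic log10 K values
grid = rng.uniform(0, 100, (5, 3))          # a few target grid points
print("IDW   (p=1):", idw(wells, logk, grid, p=1.0))
print("IDW^2 (p=2):", idw(wells, logk, grid, p=2.0))
```

Raising the power p localizes the estimate around the nearest wells, which is the practical difference between the two deterministic variants compared in the paper.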

9.
Quantitative estimation of the material transported by the wind under field conditions is essential for the study and control of wind erosion. A critical step of this calculation is the integration of the curve that relates the amount of material carried by the wind to height. Several mathematical procedures have been proposed for this calculation, but results are scarce and controversial. One objective of this study was to assess the efficiency of three mathematical models (a rational, an exponential, and a simplified Gaussian function) for the calculation of mass transport, as compared to linear spline interpolation. Another objective was to compare the mass transport calculated from field measurements at a minimum of three discrete sampling heights with measurements at nine sampling heights. With this purpose, wind erosion was measured under low surface-roughness conditions on an Entic Haplustoll during 25 events. The rational function was found to be mathematically limited for the estimation of wind-eroded sediment mass flux. The simplified Gaussian model did not fit the vertical mass flux profile data. Linear spline interpolation generally produced higher mass transport estimates than the exponential equation, and it proved to be a very flexible and robust method. Using different sampling arrangements and different mass flux models can produce differences of more than 45% in mass transport estimates, even under similar field conditions. Under the conditions of this study, at least three points between the soil surface and 1.5 m height, including one point as close as possible to the surface, should be sampled in order to obtain accurate mass transport estimates. Additionally, linear spline interpolation and non-linear regression using an exponential model proved to be mathematically reliable methods for calculating the mass transport.
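As a hedged illustration of two of the estimators compared above, the sketch below fits an exponential profile to three hypothetical sampling heights and integrates it analytically, against trapezoidal (linear-spline) integration of the same samples; a real calculation would also treat the layer below the lowest sampler.

```python
# Hedged sketch: mass transport from an exponential fit q(z) = a*exp(-b*z)
# versus piecewise-linear (trapezoidal) integration. Flux values are invented.
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import trapezoid

z = np.array([0.05, 0.5, 1.5])               # three sampling heights (m)
q = np.array([120.0, 35.0, 4.0])             # hypothetical sampled flux values

expo = lambda z, a, b: a * np.exp(-b * z)
(a, b), _ = curve_fit(expo, z, q, p0=(150.0, 2.0))

Q_exp = (a / b) * (1.0 - np.exp(-b * 1.5))   # analytic integral from 0 to 1.5 m
Q_lin = trapezoid(q, z)                      # linear-spline integration of samples
print(f"exponential fit: a={a:.1f}, b={b:.2f}, Q={Q_exp:.1f}")
print(f"trapezoidal    : Q={Q_lin:.1f}")
```

Even this toy case shows why the abstract stresses near-surface sampling: the exponential integral is dominated by the region below the lowest sampler, where the trapezoidal rule has no data at all.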

10.
Sparse-domain interpolation of complex seismic wavefields based on Bregman iteration
In seismic exploration, field acquisition conditions make it difficult for the recording geometry to capture the complete seismic wavefield, so interpolation of seismic data is an important problem in processing. Under complex structural conditions in particular, missing prestack data severely affect subsequent high-precision processing. Compressed sensing theory, which originated in image acquisition, comprises the sparse representation of signals and the solution of combinatorial optimization problems, and it provides an effective framework for seismic data interpolation. In applying compressed sensing to the interpolation of complex seismic wavefields, the key issues are how best to represent the complex wavefield sparsely and how to iterate quickly and accurately. The seislet transform is a sparse multiscale transform designed specifically for representing seismic wavefields and can effectively compress seismic events. The Bregman iterative algorithm, in turn, is an effective solver within the sparsity-centred compressed sensing framework; with a suitable choice of threshold parameter, it enables a seismic data interpolation method that combines seismic wave dynamics, image-processing transforms, and compressed sensing inversion. In this paper, seismic data interpolation is cast as a constrained optimization problem; the OC-seislet sparse transform, which compresses complex wavefields effectively, is selected; the mixed-norm inverse problem under the compressed sensing framework is solved with the Bregman iterative method; and an H-curve criterion for choosing the fixed threshold in the Bregman iteration is proposed, achieving fast and accurate wavefield reconstruction. Results on synthetic models and field data verify that the Bregman-iteration sparse-domain interpolation method based on the H-curve criterion can effectively recover the missing information of complex wavefields.
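As a simplified stand-in for the method described, the sketch below interpolates randomly missing samples by iterative thresholding in the Fourier domain (a POCS-style scheme): the FFT replaces the OC-seislet transform, a fixed relative threshold replaces the H-curve selection, and the signal is synthetic.

```python
# Hedged sketch of sparsity-promoting trace interpolation by iterative
# thresholding in a transform domain. Illustrative only; not the paper's
# seislet/Bregman implementation.
import numpy as np

rng = np.random.default_rng(0)
n = 128
t = np.arange(n)
true = np.cos(2 * np.pi * 5 * t / n) + 0.5 * np.cos(2 * np.pi * 12 * t / n)

mask = rng.random(n) < 0.5              # 50% of samples randomly observed
obs = true * mask

x = obs.copy()
for _ in range(100):
    x[mask] = obs[mask]                              # enforce observed samples
    X = np.fft.fft(x)
    thr = 0.1 * np.abs(X).max()                      # fixed relative threshold
    X = np.where(np.abs(X) > thr, X, 0.0)            # threshold the spectrum
    x = np.fft.ifft(X).real

print("relative error:", np.linalg.norm(x - true) / np.linalg.norm(true))
```

The loop alternates between honoring the observed samples and enforcing sparsity, which is the same projection structure the Bregman iteration exploits with a better transform and a principled threshold.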

11.
In common-reflection-surface imaging the reflection arrival-time field is parameterized by operators that are of higher dimension or order than in conventional methods. Using the common-reflection-surface approach locally in the unmigrated prestack data domain opens a potential for trace regularization and interpolation. In most data interpolation methods based on local coherency estimation, a single operator is designed for a target sample and the output amplitude is defined as a weighted average along the operator. This approach may fail in the presence of interfering events or strong amplitude and phase variations. In this paper we introduce an alternative scheme in which no operator needs to be defined at the target sample itself. Instead, the amplitude at a target sample is constructed from multiple operators estimated at different positions. One operator may thus contribute to the construction of several target samples and, vice versa, a target sample may receive contributions from different operators. Operators are determined on a grid which can be sparser than the output grid, which dramatically decreases the computational cost. In addition, the use of multiple operators for a single target sample stabilizes the interpolation results and implicitly accommodates several contributions in the case of interfering events. Owing to the considerable computational expense, common-reflection-surface interpolation is limited to subsets of the prestack data. We present the general workflow of a common-reflection-surface-based regularization/interpolation for 3D data volumes. This workflow has been applied to an OBC common-receiver volume and to binned common-offset subsets of a 3D marine data set. The impact of the common-reflection-surface regularization is demonstrated by means of a subsequent time migration. In comparison to the time migrations of the original and DMO-interpolated data, the results show particular improvement in the continuity of reflection events. This gain is confirmed by automatic picking of a horizon in the stacked time migrations.

12.
A comparison between two series of optimal remediation designs, using deterministic and stochastic approaches, showed a number of converging features. Limited sampling measurements in a supposed contaminated aquifer formed the hydraulic conductivity field and the initial concentration distribution used in the optimization process. The deterministic and stochastic approaches employed a single simulation-optimization method and a multiple-realization approach, respectively. For both approaches, the optimization model made use of a genetic algorithm. In the deterministic approach, the total cost, extraction rate, and number of wells used increase when the design must satisfy an intensified concentration constraint. Increasing the stack size in the stochastic approach brings about the same effects. In particular, the change in the selection frequency of the extraction wells with increasing stack size in the stochastic approach can indicate the locations of the additional wells required in the deterministic approach under the intensified constraints. These converging features between the two approaches reveal that a deterministic optimization approach with controlled constraints is sufficient to design reliable remediation strategies, and that the results of a stochastic optimization approach are readily applicable to real contaminated sites.

13.
This paper studies, with both deterministic and probabilistic methods, bridge piers and abutments and small-span arch bridges under dead and live loads (including seismic effects), determining the eccentricity at a given cross-section so as to find the maximum live-load class that can pass while the eccentricity remains within the limits prescribed by the design code. Because earthquakes are strongly random, structural reliability methods based on probability theory are preferable to deterministic methods; however, the results of this paper show that the final results of the two approaches are very close.

14.
Advances in Water Resources, 2007, 30(4): 1027–1045
Streamline methods have been shown to be effective for reservoir simulation. For a regular grid, it is common to use Pollock's semi-analytical method to obtain streamlines and time-of-flight (TOF) coordinates. The usual way of handling irregular grids is a trilinear transformation of each grid cell to a unit cube together with a linear flux interpolation scaled by the Jacobian. The flux interpolation allows for fast integration of streamlines, but is inaccurate even for uniform flow. To improve the tracing accuracy, we introduce a new interpolation method, which we call corner-velocity interpolation. Instead of interpolating the velocity field from discrete fluxes at cell edges, the new method interpolates directly from reconstructed point velocities given at the corner points of the grid. This allows uniform flow to be reproduced and eliminates the influence of cell geometry on the velocity field. Using several numerical examples, we demonstrate that the new method is more accurate than the standard tracing methods.
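Pollock's semi-analytical step can be sketched for a single rectangular cell, where linear flux interpolation gives an exponential trajectory and an analytic exit time; the face velocities below are hypothetical and sign changes inside the cell are not handled.

```python
# Hedged sketch of Pollock's tracing in one rectangular cell: velocity varies
# linearly between opposite face values, so the exit time along each axis is
# analytic; the particle leaves through the face with the smallest exit time.
import numpy as np

def pollock_exit(x0, v0, v1, dx=1.0):
    """Exit time along one axis; v0, v1 are face velocities (same sign)."""
    g = (v1 - v0) / dx                         # linear velocity gradient
    v = v0 + g * x0                            # velocity at the particle
    x_exit = dx if v > 0 else 0.0
    if abs(g) < 1e-14:                         # uniform flow: linear motion
        return (x_exit - x0) / v
    v_exit = v0 + g * x_exit
    return np.log(v_exit / v) / g              # Pollock's exponential formula

tx = pollock_exit(0.3, v0=1.0, v1=2.0)         # x-direction, hypothetical fluxes
ty = pollock_exit(0.6, v0=0.5, v1=0.5)         # y-direction, uniform flow
print(f"exit times: tx={tx:.3f}, ty={ty:.3f}; exits via",
      "x-face" if tx < ty else "y-face")
```

The corner-velocity method of the paper changes what v0 and v1 mean (reconstructed corner point velocities rather than face fluxes), which is what restores exactness for uniform flow on distorted cells.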

15.
Modelling and inversion of controlled-source electromagnetic (CSEM) fields require accurate interpolation of modelled results near strong resistivity contrasts. There, simple linear interpolation may produce large errors, whereas higher-order interpolation may lead to oscillatory behaviour in the interpolated result. We propose to use an essentially non-oscillatory, piecewise-polynomial interpolation scheme designed for piecewise smooth functions that contain discontinuities in the function itself or in its first or higher derivatives. The scheme uses a non-linear adaptive algorithm to select the set of interpolation points that represents the smoothest part of the function among the sets of neighbouring points. We present numerical examples to demonstrate the usefulness of the scheme. The first example shows that the essentially non-oscillatory (ENO) interpolation scheme better captures an isolated discontinuity. In the second example, we consider sampling the electric field computed by a finite-volume CSEM code at a receiver location. In this example the ENO interpolation performs quite well, although the overall error is dominated by the discretization error. The other examples compare sampling with essentially non-oscillatory interpolation against existing interpolation schemes. In these examples, essentially non-oscillatory interpolation provides more accurate results than standard interpolation, especially near discontinuities.
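A minimal 1-D sketch of ENO interpolation, assuming a synthetic step profile: the stencil grows toward the side with the smaller divided difference, so it avoids straddling the discontinuity, unlike a fixed centered stencil.

```python
# Hedged sketch of 1-D ENO interpolation via adaptive Newton stencils.
import numpy as np

def eno_interp(x, y, xq, order=3):
    """ENO interpolation of (x, y) at xq using an adaptive stencil."""
    i = int(np.searchsorted(x, xq)) - 1          # base interval [x[i], x[i+1]]
    lo, hi = i, i + 1
    dd = {(j, j): float(y[j]) for j in range(len(x))}

    def d(a, b):                                 # divided difference f[x_a..x_b]
        if (a, b) not in dd:
            dd[(a, b)] = (d(a + 1, b) - d(a, b - 1)) / (x[b] - x[a])
        return dd[(a, b)]

    for _ in range(order - 1):                   # grow stencil to order+1 points
        can_left, can_right = lo > 0, hi < len(x) - 1
        if can_left and (not can_right or abs(d(lo - 1, hi)) < abs(d(lo, hi + 1))):
            lo -= 1                              # left extension is smoother
        elif can_right:
            hi += 1                              # right extension is smoother
    val, prod = 0.0, 1.0                         # evaluate the Newton form
    for j in range(lo, hi + 1):
        val += d(lo, j) * prod
        prod *= xq - x[j]
    return val

x = np.linspace(0.0, 1.0, 11)
y = np.where(x < 0.45, 1.0, 0.0)                 # step, e.g. a resistivity contrast
print("ENO   at 0.38:", eno_interp(x, y, 0.38))  # stencil avoids the jump: 1.0
print("cubic at 0.38:", np.polyval(np.polyfit(x[2:6], y[2:6], 3), 0.38))  # overshoots
```

The fixed centered cubic overshoots above 1 near the step, while the ENO stencil slides onto the smooth side and reproduces the plateau exactly, which is the behaviour the abstract describes.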

16.
Rock masses are divided into many closed blocks by deterministic and stochastic discontinuities and engineering interfaces in complex rock-mass engineering. Determining the sizes, shapes, and adjacency relations of blocks is important for the stability analysis of fractured rock masses. Here we propose an algorithm for identifying spatial blocks based on a hierarchical 3D Rock-mass Structure Model (RSM). First, a model is built composed of deterministic discontinuities, engineering interfaces, and the earth's su...

17.
A study of the sampling and interpolation of strong-motion records
This paper studies the sampling and interpolation of strong-motion records in both the frequency and time domains. Sampling and interpolation are treated as a signal-conversion system, and the transfer functions corresponding to the commonly used sampling and interpolation procedures are obtained by numerical computation. Further study of these transfer functions shows that the sampling and interpolation scheme has an important influence on the results of strong-motion data processing: the sampling process acts as a low-pass filter that can remove some high-frequency information from the signal, whereas the interpolation process acts like a high-frequency noise source that introduces spurious high-frequency components into the digital record. The analysis also shows that, at the same sampling density, non-uniform sampling achieves higher accuracy, whereas uniform sampling gives a wider, flatter frequency-response curve, and that parabolic interpolation yields a more accurate transfer function than linear interpolation, which is important for recovering the high-frequency content of digital signals.
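The low-pass and noise-source effects can be sketched by decimating a broadband synthetic record and re-interpolating it; a quadratic spline stands in for parabolic interpolation, and the record, rates, and band edge below are all hypothetical.

```python
# Hedged sketch: decimate a broadband toy "record", re-interpolate linearly
# and parabolically, and compare spectral content above the coarse Nyquist.
import numpy as np
from scipy.interpolate import interp1d

fs, n = 200.0, 2048
t = np.arange(n) / fs
rec = np.random.default_rng(0).standard_normal(n)     # broadband toy record

t_coarse = t[::4]                                     # resample at 50 Hz
coarse = rec[::4]

lin = np.interp(t, t_coarse, coarse)                  # linear interpolation
par = interp1d(t_coarse, coarse, kind="quadratic",
               fill_value="extrapolate")(t)           # parabolic stand-in

f = np.fft.rfftfreq(n, 1 / fs)
for name, sig in [("original", rec), ("linear", lin), ("parabolic", par)]:
    hi = np.abs(np.fft.rfft(sig))[f > 25.0].mean()    # above coarse Nyquist
    print(f"{name:9s} mean |spectrum| above 25 Hz: {hi:.2f}")
```

Everything the interpolated signals carry above 25 Hz is spurious, since the 50 Hz data cannot represent it, which illustrates the "high-frequency noise source" conclusion of the abstract.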

18.
Numerical simulation of the wave equation with piecewise smooth curved boundaries
The rectangular-grid finite-difference method has a marked speed advantage in the numerical simulation of seismic wave propagation, but it is seriously inefficient in handling complex boundaries. For piecewise smooth curved boundaries, this paper defines a regularized derivative at cusp points and gives an interpolation scheme for computing the normal derivative at the grid boundary points of such curves on a rectangular grid. The rectangular-grid finite-difference method is then used to simulate seismic wavefields in earth models with complex boundaries, and series of wavefield snapshots are used to reveal how seismic waves propagate beneath rugged topography and in complex media. The simulation results show that the normal-derivative interpolation scheme provides an effective way for the rectangular-grid finite-difference method to handle complex boundaries, and that the snapshot series clearly display the reflection and transmission at reflecting interfaces, the diffraction at cusps, and the direct waves and multiple reflections at the free surface.

19.
Correlation and covariance of runoff
The application of objective methods for the interpolation of stochastic fields is based on the assumption of homogeneity with respect to the correlation function, i.e. only the relative distance between two points is of importance. This is not the case for runoff data, as demonstrated in this paper. Taking into consideration the structure of the river network and the related drainage-basin supporting areas, theoretical expressions are derived for the correlation function for flow along a river from its outlet and upstream. The results are exact for a rectangular drainage basin; for more complex basin geometry a grid approximation is suggested. The relations found are demonstrated on a real-world example, with good agreement between the theoretically calculated correlation functions and empirical data.

20.

Solving the Helmholtz equation by finite differences depends on two points: (1) the construction of the difference scheme and (2) an efficient solution algorithm. This paper discretizes the Helmholtz equation with the average-derivative method. This difference scheme has three advantages: (1) it accommodates unequal sampling intervals in the horizontal and vertical directions; (2) in the perfectly matched layer (PML) region, the difference equation is pointwise consistent with the differential equation; and (3) it reduces the number of sampling points per wavelength to fewer than four. Algorithms for solving the discretized Helmholtz equation generally fall into direct and iterative methods. Direct methods cannot be applied to large-scale problems because of their memory requirements, whereas Krylov-subspace iterative methods combined with a multigrid preconditioner are fast and efficient. For unequal horizontal and vertical sampling (called an anisotropic problem in the multigrid literature), however, the classical multigrid method fails. This paper analyzes the three key components of classical multigrid (full-weighting restriction, point relaxation, and bilinear prolongation) and replaces full coarsening with semicoarsening, point relaxation with line relaxation, and the bilinear prolongation operator with an operator-dependent prolongation, so that the anisotropic problem converges; moreover, for low-to-mid-frequency iterations in heterogeneous media, a satisfactory convergence rate is obtained.
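For orientation, the sketch below assembles a plain 5-point Helmholtz operator (not the paper's average-derivative scheme) on an unequally sampled grid and solves a small damped problem directly; for realistic sizes one would switch to a Krylov solver with the semicoarsening/line-relaxation multigrid preconditioner described above. All grid and medium parameters are hypothetical.

```python
# Hedged sketch: assemble a standard 5-point Helmholtz operator with unequal
# dx/dz and solve a small damped problem with a direct solver. Illustrative
# only; the paper's scheme and multigrid-preconditioned Krylov solver differ.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

nx, nz, dx, dz = 60, 40, 10.0, 5.0            # unequal horizontal/vertical steps
k = 2 * np.pi * 10.0 / 2000.0 * (1 + 0.02j)   # 10 Hz, 2000 m/s, 2% damping

Ix, Iz = sp.identity(nx), sp.identity(nz)
d2 = lambda n, h: sp.diags([1, -2, 1], [-1, 0, 1], (n, n)) / h**2
A = (sp.kron(Iz, d2(nx, dx)) + sp.kron(d2(nz, dz), Ix)
     + k**2 * sp.identity(nx * nz)).tocsc()   # Laplacian + k^2 I

b = np.zeros(nx * nz, dtype=complex)
b[(nz // 2) * nx + nx // 2] = 1.0             # point source at mid-grid
u = spsolve(A, b)                             # direct solve; fine at this size
print("field at source:", u[(nz // 2) * nx + nx // 2])
```

The unequal dx and dz in this toy operator are precisely what make the problem "anisotropic" for multigrid: errors smooth out at different rates along the two axes, which is why semicoarsening and line relaxation are substituted for the classical components.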

