Similar Literature
20 similar documents found (search time: 27 ms)
1.
Building models in the Earth Sciences often requires the solution of an inverse problem: some unknown model parameters need to be calibrated with actual measurements. In most cases, the set of measurements cannot completely and uniquely determine the model parameters; hence multiple models can describe the same data set. Bayesian inverse theory provides a framework for solving this problem. Bayesian methods rely on the fact that the conditional probability of the model parameters given the data (the posterior) is proportional to the product of the likelihood of observing the data and a prior belief expressed as a prior distribution of the model parameters. When the prior distribution is not Gaussian and the relation between data and parameters (the forward model) is strongly non-linear, one has to resort to iterative samplers, often Markov chain Monte Carlo methods, to generate samples that fit the data likelihood and reflect the prior model statistics. While theoretically sound, such methods can be slow to converge and are often impractical when the forward model is CPU-demanding. In this paper, we propose a new sampling method that allows sampling from a variety of priors and conditioning model parameters to a variety of data types. The method does not rely on the traditional Bayesian decomposition of the posterior into likelihood and prior; instead it uses so-called pre-posterior distributions, i.e. the probability of the model parameters given some subset of the data. The use of pre-posteriors allows the data to be decomposed into so-called "easy data" (or linear data) and "difficult data" (or nonlinear data). The method relies on fast non-iterative sequential simulation to generate model realizations.
The difficult data are matched by perturbing an initial realization using a perturbation mechanism termed "probability perturbation." The probability perturbation method moves the initial guess closer to matching the difficult data while maintaining the prior model statistics and the conditioning to the linear data. Several examples are used to illustrate the properties of this method.
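The iterative Bayesian sampling that this abstract contrasts its method with can be sketched as a plain Metropolis sampler, where the posterior is known only up to the likelihood-prior product. The one-parameter forward model, datum, noise level, and prior interval below are illustrative assumptions, not from the paper:

```python
import math
import random

random.seed(0)

# Toy 1-D inverse problem (all values are invented for illustration):
# forward model g(m) = m^2, one datum d_obs = 4.0 with Gaussian noise,
# and a non-Gaussian (uniform) prior on [0, 5].
def log_likelihood(m, d_obs=4.0, sigma=0.5):
    return -0.5 * ((m * m - d_obs) / sigma) ** 2

def log_prior(m):
    return 0.0 if 0.0 <= m <= 5.0 else -math.inf

def metropolis(n_steps=20000, step=0.3):
    m, samples = 1.0, []
    for _ in range(n_steps):
        cand = m + random.gauss(0.0, step)
        log_a = (log_likelihood(cand) + log_prior(cand)
                 - log_likelihood(m) - log_prior(m))
        # accept with probability min(1, exp(log_a))
        if log_a >= 0 or random.random() < math.exp(log_a):
            m = cand
        samples.append(m)
    return samples

samples = metropolis()
burned = samples[5000:]                     # discard burn-in
posterior_mean = sum(burned) / len(burned)  # concentrates near sqrt(4) = 2
```

Many such forward evaluations per accepted sample is exactly the cost that makes MCMC impractical for CPU-demanding forward models, which motivates the paper's non-iterative alternative.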

2.
Spatial inverse problems in the Earth Sciences are often ill-posed, requiring the specification of a prior model to constrain the nature of the inverse solutions; otherwise, inverted model realizations lack geological realism. In spatial modeling, such a prior model determines the spatial variability of the inverse solution, for example as constrained by a variogram, a Boolean model, or a training image-based model. In many cases, particularly in subsurface modeling, one lacks the amount of data needed to fully determine the nature of the spatial variability. For example, many different training images could be proposed for a given study area. Such alternative training images or scenarios relate to the different possible geological concepts, each exhibiting a distinctive geological architecture. Many inverse methods rely on priors that represent a single, subjectively chosen geological concept (a single variogram within a multi-Gaussian model or a single training image). This paper proposes a novel and practical parameterization of the prior model allowing several discrete choices of geological architecture within the prior. The method does not attempt to parameterize the possibly complex architectures by a set of model parameters. Instead, a large set of prior model realizations is provided in advance by means of Monte Carlo simulation, where the training image is randomized. The parameterization is achieved by defining a metric space that accommodates this large set of model realizations. This metric space is equipped with a "similarity distance" function, i.e. a distance function that measures the similarity of geometry between any two model realizations relevant to the problem at hand. Through examples, it is shown that inverse solutions can be efficiently found in this metric space using a simple stochastic search method.
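The search-in-metric-space idea can be sketched with a toy stand-in: a pre-generated set of "realizations" (here just random vectors), a similarity distance (plain Euclidean here, where the paper would use a problem-specific geometric distance), and a simple stochastic search over the set. All sizes and values are invented for illustration:

```python
import random

random.seed(1)

# Hypothetical prior set: 200 pre-generated "model realizations",
# each a short vector of cell values.
n_models, n_cells = 200, 10
models = [[random.random() for _ in range(n_cells)] for _ in range(n_models)]

def distance(a, b):
    # stand-in "similarity distance" between two realizations
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Synthetic "data": the response of one hidden reference model plus noise.
truth = models[42]
data = [v + random.gauss(0.0, 0.01) for v in truth]

# Very simple stochastic search in the metric space: keep drawing candidate
# realizations, retaining whichever lies closest to the data.
def search(n_iters=3000):
    best = random.randrange(n_models)
    for _ in range(n_iters):
        cand = random.randrange(n_models)
        if distance(models[cand], data) < distance(models[best], data):
            best = cand
    return best

best = search()
misfit = distance(models[best], data)
```

Because the prior set is fixed in advance, every candidate the search visits is geologically consistent by construction; the search never has to re-parameterize the architecture itself.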

3.
The frequency matching method defines a closed-form expression for a complex prior that quantifies the higher-order statistics of a proposed solution model to an inverse problem. While existing solution methods for inverse problems are capable of sampling the solution space while taking into account arbitrarily complex a priori information defined by sampling algorithms, it is not possible to directly compute the maximum a posteriori model, as the prior probability of a solution model cannot be expressed. We demonstrate how the frequency matching method enables us to compute the maximum a posteriori solution model to an inverse problem using a priori information based on multiple-point statistics learned from training images. We demonstrate the applicability of the suggested method on a synthetic tomographic crosshole inverse problem.

4.
Reservoir characterization requires the integration of various data through history matching, especially dynamic information such as production or four-dimensional seismic data. To update geostatistical realizations, the local gradual deformation method can be used. However, history matching is a complex inverse problem, and the computational effort, in terms of the number of reservoir simulations required in the optimization procedure, increases with the number of matching parameters. History matching large fields with a large number of parameters has been an ongoing challenge in reservoir simulation. This paper presents a new technique to improve history matching with the local gradual deformation method using gradient-based optimization. The new approach is based on approximate derivative calculations exploiting the partial separability of the objective function. The objective function is first split into local components, and only the most influential parameters in each component are used for the derivative computation. A perturbation design is then proposed to simultaneously compute all the derivatives with only a few simulations. This new technique makes history matching using the local gradual deformation method with large numbers of parameters tractable.
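The partial-separability trick can be illustrated on a toy objective. The component structure below (each local term depends on two adjacent parameters) is a made-up stand-in for per-region production mismatches, not the paper's reservoir objective; the point is that parameters sharing no component can be perturbed in the same "simulation", so the full gradient here costs two extra evaluations instead of six:

```python
# Hypothetical partially separable objective: F(x) = sum_i f_i(x_i, x_{i+1}),
# a stand-in for an objective split into local (per-region) components.
def components(x):
    return [(x[i] - i) ** 2 + 0.5 * (x[i + 1] - x[i]) ** 2
            for i in range(len(x) - 1)]

def grouped_gradient(x, h=1e-6):
    # Component i depends only on parameters i and i+1, so even-indexed
    # parameters never share a component, nor do odd-indexed ones: the whole
    # finite-difference gradient needs 2 perturbed evaluations, not len(x).
    n = len(x)
    base = components(x)
    grad = [0.0] * n
    for group in (range(0, n, 2), range(1, n, 2)):
        xp = list(x)
        for j in group:
            xp[j] += h          # one "simulation" perturbs the whole group
        pert = components(xp)
        for j in group:
            # only components j-1 and j can feel parameter j
            touched = [i for i in (j - 1, j) if 0 <= i < n - 1]
            grad[j] = sum(pert[i] - base[i] for i in touched) / h
    return grad

g = grouped_gradient([0.0] * 6)
# analytic gradient at the origin is [0, -2, -4, -6, -8, 0]
```

In a real history-matching run, each "evaluation" is a reservoir simulation, so grouping non-interacting parameters into one perturbed run is what keeps the derivative cost affordable.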

5.
This paper proposes a novel history-matching method in which reservoir structure is inverted from the dynamic fluid-flow response. The proposed workflow consists of searching for models that match production history within a large set of prior structural model realizations. This prior set represents the reservoir structural uncertainty arising from interpretation uncertainty on seismic sections. To make such a search effective, we introduce a parameter space defined with a "similarity distance" to accommodate this large set of realizations. The inverse solutions are found using a stochastic search method. Realistic reservoir examples are presented to demonstrate the applicability of the proposed method.

6.
7.
8.
The problem of multiphase flow in heterogeneous subsurface porous media is one involving many uncertainties. In particular, the permeability of the medium is an important aspect of the model that is inherently uncertain. Properly quantifying these uncertainties is essential in order to make reliable probabilistic predictions and future decisions. In this work, a measure-theoretic framework is employed to quantify uncertainties in a two-phase subsurface flow model in high-contrast media. Given uncertain saturation data from observation wells, the stochastic inverse problem is solved numerically in order to obtain a probability measure on the space of unknown permeability parameters characterizing the two-phase flow. As solving the stochastic inverse problem requires a number of forward model solves, we also incorporate the use of a conservative version of the generalized multiscale finite element method for added efficiency. The parameter-space probability measure is used to make predictions of saturation values where measurements are not available, and to validate the effectiveness of the proposed approach in the context of fine and coarse model solves. A number of numerical examples are offered to illustrate the measure-theoretic methodology for solving the stochastic inverse problem using both fine and coarse solution schemes.
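The idea of mapping uncertain output data back to a probability measure on parameter space can be illustrated very crudely with rejection sampling; this is not the paper's measure-theoretic algorithm, and the scalar forward model and observation interval below are invented:

```python
import random

random.seed(3)

# Crude illustration: a scalar "permeability" k in [0, 1] is pushed through
# a toy forward model, and the observed "saturation" is only known to lie in
# an interval. Keeping the parameter samples whose forward output lands in
# that interval approximates the inverse probability measure on k.
def forward(k):
    return k ** 2  # hypothetical monotone forward response

s_lo, s_hi = 0.16, 0.36   # uncertain saturation observation

accepted = [k for k in (random.random() for _ in range(20000))
            if s_lo <= forward(k) <= s_hi]

frac = len(accepted) / 20000
# The inverse image of [0.16, 0.36] under k**2 is [0.4, 0.6], so roughly
# a fifth of the uniform prior samples should be kept.
```

Each accepted sample costs one forward solve, which is why the paper pairs the stochastic inverse problem with a cheap multiscale forward solver.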

9.
A new type of indirect inverse analysis procedure is proposed to overcome the difficulties that geotechnical inverse analyses encounter, such as instability and non-uniqueness of the solutions as well as multicollinearity. These difficulties are eased by combining the objective information (i.e. the observation data) and the subjective information (i.e. the prior information) in an appropriate manner using the so-called extended Bayesian method. The method is based on a new view of the Bayesian model proposed by Akaike. The problem of model identification in the inverse analysis is also tackled by applying the well-known AIC, in its Bayesian version. A case study of an embankment on soft clay is presented to illustrate the effectiveness of the new method. A rather thorough review of geotechnical inverse analysis is also presented to indicate the necessity of the proposed procedure. An appendix is attached summarizing the statistical background of the new method.

10.
To address the complex geological problems of the Daye mining area in Hubei, high-precision gravity and magnetic field measurements were processed with three-dimensional probability analysis and physical-property inversion techniques, achieving refined processing and interpretation of the area's gravity and magnetic data and providing an important reference for deep and peripheral prospecting at the Daye crisis mine. Three-dimensional probability imaging inversion of the high-magnetic anomaly sources was first performed without constraints; then, incorporating prior geological information, an interactive terrain-corrected three-dimensional inversion of the area's aeromagnetic anomalies was completed. The spatial extension of the contact zone between the sedimentary rocks and the diorite body was predicted, and the three-dimensional physical properties of the area were quantitatively inverted. The results indicate that the deep part of the mining area (below 1000 m) and the bends of the contact zone are favorable prospecting targets, which has since been confirmed by drilling.

11.
Li, Xiaobin; Li, Yunbo; Tang, Junting. Natural Hazards, 2019, 97(1): 83-97

Mine gas disaster prediction and prevention are based on gas content measurement, which suffers from initial-stage loss when coal gas desorption contents are determined in engineering applications. We propose a Bayesian probability statistical method for the coal gas desorption model on the basis of constrained prior information. First, we use a self-made coal sample gas desorption device to measure initial-stage gas desorption data for tectonic coal and undeformed coal. Second, we calculate the initial-stage loss of different coal samples from the power exponential function parameters by using Bayesian probability statistics and least squares estimation. Results show that Bayesian probability statistics and least squares estimation can be used to obtain the regression and desorption coefficients, illustrating the validity and reliability of the Bayesian estimation method. Because the Bayesian probability method can apply prior information to constrain the model's posterior parameters, it provides statistically significant results for the initial-stage loss of coal gas desorption by connecting observation data and prior information.
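The least-squares step can be sketched for one common choice of desorption law. The power form Q(t) = a·t^b and all numbers below are assumptions for illustration, not the paper's fitted model; taking logarithms turns the fit into ordinary linear least squares:

```python
import math

# Hypothetical desorption record following Q(t) = a * t**b; the parameters
# are recovered by linear least squares on log Q = log a + b * log t.
t = [1, 2, 4, 8, 16, 32]                  # elapsed time, minutes (invented)
a_true, b_true = 3.0, 0.5
q = [a_true * ti ** b_true for ti in t]   # noiseless synthetic data

x = [math.log(ti) for ti in t]
y = [math.log(qi) for qi in q]
n = len(t)
sx, sy = sum(x), sum(y)
sxx = sum(xi * xi for xi in x)
sxy = sum(xi * yi for xi, yi in zip(x, y))

# closed-form simple linear regression: slope b_hat, intercept log(a_hat)
b_hat = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a_hat = math.exp((sy - b_hat * sx) / n)

# Extrapolating the fitted curve below the first timed measurement is how an
# "initial stage loss" estimate would be formed from such a fit.
q_at_first_reading = a_hat * t[0] ** b_hat
```

With noiseless synthetic data the regression recovers a and b exactly; the paper's contribution is constraining such parameters further with a Bayesian prior when the real, noisy sample is small.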


12.
Constraining stochastic models of reservoir properties such as porosity and permeability can be formulated as an optimization problem. While an optimization based on random search methods preserves the spatial variability of the stochastic model, it is prohibitively computer-intensive. In contrast, gradient search methods may be very efficient, but they do not preserve the spatial variability of the stochastic model. The gradual deformation method allows a reservoir model (i.e., a realization of the stochastic model) to be modified from a small number of parameters while preserving its spatial variability. It can be considered a first step towards the merger of random and gradient search methods. The gradual deformation method yields chains of reservoir models that can be investigated successively to identify an optimal reservoir model. The investigation of each chain is based on gradient computations, but the building of the chains of reservoir models is random. In this paper, we propose an algorithm that further improves the efficiency of the gradual deformation method. Contrary to the previous gradual deformation method, we also use gradient information to build the chains of reservoir models. The idea is to combine the initial reservoir model, or the previously optimized reservoir model, with a compound reservoir model. This compound model is a linear combination of a set of independent reservoir models. The combination coefficients are calculated so that the search direction from the initial model is as close as possible to the gradient search direction. This new gradual deformation scheme allows us to reduce the number of optimization parameters while selecting an optimal search direction. A numerical example compares the performance of the new gradual deformation scheme with that of the traditional one.
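The core of the gradual deformation idea admits a compact sketch: two independent standard Gaussian realizations combined with cos/sin weights give a one-parameter chain of realizations that all keep standard Gaussian statistics. This is the textbook form of the combination, demonstrated here on toy white-noise "realizations" rather than real reservoir fields:

```python
import math
import random

random.seed(5)

# Two independent standard Gaussian "realizations" (white noise stand-ins
# for geostatistical reservoir realizations).
n = 100000
z1 = [random.gauss(0.0, 1.0) for _ in range(n)]
z2 = [random.gauss(0.0, 1.0) for _ in range(n)]

def deform(t):
    # z(t) = z1*cos(t) + z2*sin(t): since cos(t)^2 + sin(t)^2 = 1, every
    # member of the chain is again standard Gaussian, so one scalar t
    # parameterizes a whole family of candidate models.
    c, s = math.cos(t), math.sin(t)
    return [c * a + s * b for a, b in zip(z1, z2)]

z = deform(0.7)
mean = sum(z) / n
var = sum(v * v for v in z) / n - mean ** 2   # should stay close to 1
```

An optimizer can then tune the single deformation parameter t (or a few of them, for chained combinations) against the mismatch, instead of tuning every gridblock value independently.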

13.
Bayesian Probabilistic Inference of the Settlement Correction Coefficient for Foundations
After analyzing the deterministic and arbitrary nature of conventional methods for selecting the settlement correction coefficient, Bayesian theory, which builds on past information and current sample information, is introduced. Using a red-clay subgrade project on a passenger-dedicated railway line as a case study, the range of the correction coefficient is obtained from its posterior distribution. The case study shows that, combining past experience with sample information, the prior probability of the correction coefficient can be taken as uniform over an interval. The correction coefficients derived from comparing settlements measured in field loading tests with theoretically computed settlements combine the field settlement observations with the prior information; applying Bayesian statistics to the small-sample test data, the posterior probability of the correction coefficient is found to follow a normal distribution. Interval estimation of the posterior parameters gives an optimized range of [1.0, 1.7] for the settlement correction coefficient of red-clay foundations in this region, and probability distribution models of the correction coefficient under different load conditions are analyzed.
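The update described above (uniform prior on an interval, normal likelihood from a small sample) admits a compact numerical sketch. The sample values, scatter, and prior support below are invented for illustration, not the paper's field data:

```python
import math

# Hedged sketch of the Bayesian update: a uniform prior for the settlement
# correction coefficient m on a plausible interval, a Gaussian likelihood
# from a small sample of measured/computed settlement ratios, and a grid
# evaluation of the resulting posterior.
sample = [1.25, 1.40, 1.10, 1.35, 1.30]   # hypothetical observed ratios
sigma = 0.15                              # assumed measurement scatter
lo, hi = 1.0, 1.7                         # uniform prior support

grid = [lo + (hi - lo) * i / 2000 for i in range(2001)]

def log_like(m):
    return -0.5 * sum(((x - m) / sigma) ** 2 for x in sample)

w = [math.exp(log_like(m)) for m in grid]   # unnormalized posterior weights
total = sum(w)
post_mean = sum(m * wi for m, wi in zip(grid, w)) / total
```

With a flat prior the posterior is just the (truncated) normal likelihood, so the posterior mean lands near the sample mean of 1.28, and its spread shrinks as more load-test ratios are added.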

14.
15.
In this paper, we propose a method to detect damage and estimate its severity by means of a multifield-based inverse analysis. The fields considered are displacement, temperature, and water pressure. Furthermore, the uncertainties due to the size of the damage, errors in the measurement data, and errors in the model parameters are also investigated. The uncertainty due to the measurements is quantified by assuming different sources of noise in the measurements. The inverse problem is solved repeatedly in a sampling process, so that the uncertainties in the inverse solutions can be quantified by their probability distributions. The method can be applied to identify damage in masonry dams by solving coupled nonlinear thermo-hydro-mechanical problems.

16.
The conditional probabilities (CP) method implements a new procedure for generating transmissivity fields conditional to piezometric head data that is capable of sampling non-multi-Gaussian random functions and of integrating soft and secondary information. The CP method combines the advantages of the self-calibrated (SC) method with probability fields to circumvent some of the drawbacks of the SC method, namely its difficulty in integrating soft and secondary information or in generating non-Gaussian fields. The SC method is based on the perturbation of a seed transmissivity field already conditional to transmissivity and secondary data, with the perturbation being a function of the transmissivity variogram. The CP method is also based on the perturbation of a seed field; however, the perturbation is made a function of the full transmissivity bivariate distribution and of the correlation to the secondary data. The two methods are applied to a sample of an exhaustive non-Gaussian data set of natural origin to demonstrate the interest of using a simulation method capable of modeling the spatial patterns of transmissivity variability beyond the variogram. A comparison of the probabilistic predictions of convective transport derived from a Monte Carlo exercise using both methods demonstrates the superiority of the CP method when the underlying spatial variability is non-Gaussian.

17.
Optimization with the Gradual Deformation Method
Building reservoir models consistent with production data and prior geological knowledge is usually carried out through the minimization of an objective function. Such optimization problems are nonlinear and may be difficult to solve because they tend to be ill-posed and to involve many parameters. The gradual deformation technique was introduced recently to simplify these problems. Its main feature is the preservation of spatial structure: perturbed realizations exhibit the same spatial variability as the starting ones. It is shown that optimizations based on gradual deformation converge exponentially to the global minimum, at least for linear problems. In addition, it appears that combining the gradual deformation parameterization with optimization may remove, step by step, the structure-preservation capability of the gradual deformation method. This bias is negligible when deformation is restricted to a few realization chains, but grows as the number of chains tends to infinity. Since, in practice, the optimization of reservoir models is limited to a small number of iterations relative to the number of gridblocks, the spatial variability is preserved. Last, the optimization processes are implemented on the basis of the Levenberg–Marquardt method. Although the objective functions, written in terms of Gaussian white noises, are reduced to the data mismatch term, the conditional realization space can be properly sampled.

18.
Gradual deformation is a parameterization method that considerably reduces the unknown parameter space of stochastic models. The method can be used in an iterative optimization procedure for constraining stochastic simulations to data that are complex, nonanalytical functions of the simulated variables. It is based on the fact that linear combinations of multi-Gaussian random functions remain multi-Gaussian random functions. During the past few years, we developed the gradual deformation method by combining independent realizations. This paper investigates another alternative: the combination of dependent realizations. One of our motivations for combining dependent realizations is to improve the numerical stability of the gradual deformation method. Because of limitations both in the size of simulation grids and in the precision of simulation algorithms, numerical realizations of a stochastic model are never perfectly independent. It has been shown that the accumulation of very small dependences between realizations may result in significant structural drift from the initial stochastic model. From the combination of random functions whose covariance and cross-covariance are proportional to each other, we derive a new formulation of the gradual deformation method that can explicitly take into account the numerical dependence between realizations. This new formulation allows us to reduce the structural deterioration during the iterative optimization. The problem of combining dependent realizations also arises when deforming conditional realizations of a stochastic model. As opposed to the combination of independent realizations, combining conditional realizations avoids the additional conditioning step during the optimization process. However, this procedure is limited to global deformations with fixed structural parameters.

19.
Matching seismic data in assisted history-matching processes can be a challenging task. One main idea is to bring flexibility to the choice of the parameters to be perturbed, focusing on the information provided by the seismic data. Local parameterization techniques such as pilot-point or gradual deformation methods can be introduced, given their high adaptability. However, the choice of the spatial supports associated with the perturbed parameters is crucial to successfully reduce the seismic mismatch. Information related to the seismic data is sometimes used to initialize such local methods, and recent attempts have been made to define the regions adaptively, focusing on the mismatch between simulated and reference seismic data; however, in these attempts the regions are defined manually for each optimization process. We therefore propose to drive the definition of the parameter support through an automatic definition of the regions to be perturbed from the residual maps related to the 3D seismic data. Two methods are developed in this paper. The first consists of clustering the residual map with classification algorithms. The second drives the generation of pilot-point locations in an adaptive way: residual maps, after proper normalization, are treated as probability density functions for the pilot-point locations. Both procedures lead to a completely adaptive and highly flexible perturbation technique for 3D seismic matching. A synthetic study based on the PUNQ test case is introduced to illustrate the potential of these adaptive strategies.
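The second strategy described above can be sketched directly: normalize a residual map into a probability density and draw pilot-point locations from it, so that cells with large seismic mismatch are perturbed more often. The 1-D "map" and its values below are hypothetical:

```python
import random

random.seed(8)

# Hypothetical 1-D seismic residual "map": large mismatch around cells 3-5.
residual = [0.1, 0.1, 0.2, 4.0, 6.0, 4.0, 0.2, 0.1, 0.1, 0.1]
total = sum(residual)
pdf = [r / total for r in residual]   # normalized residuals as a pdf

def draw_pilot_point():
    # inverse-CDF sampling of a cell index from the discrete pdf
    u = random.random()
    acc = 0.0
    for i, p in enumerate(pdf):
        acc += p
        if u < acc:
            return i
    return len(pdf) - 1

draws = [draw_pilot_point() for _ in range(5000)]
# fraction of pilot points falling in the high-mismatch cells 3, 4, 5
high_mismatch = sum(1 for d in draws if d in (3, 4, 5)) / len(draws)
```

As the match improves and the residual map flattens, the same mechanism automatically spreads the pilot points back out, which is what makes the perturbation adaptive across optimization iterations.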

20.
In the present paper, a new geostatistical parameterization technique is introduced for solving inverse problems in groundwater hydrology or petroleum engineering. The purpose is to characterize permeability at the field scale from the available dynamic data, that is, data depending on fluid displacements. Thus, a permeability model is built that yields numerical flow responses similar to the data collected. This problem is often formulated as an objective function to be minimized. We are especially focused on the possibility of locally changing the permeability model so as to further reduce the objective function; this concern is of interest when dealing with 4D seismic data. The calibration phase consists of selecting sub-domains, or pilot blocks, and varying their log-permeability averages. The permeability model is then constrained to these fictitious block data through simple cokriging. In addition, we estimate the prior probability density function of the pilot block values and incorporate this prior information into the objective function. Therefore, variations in block values are governed by the optimizer while accounting for nearby point and block data. Pilot-block-based optimizations provide permeability models respecting point data at their locations, spatial variability models inferred from point data, and dynamic data in a least-squares sense. A synthetic example is presented to demonstrate the applicability of the proposed matching methodology.
