Similar Documents
20 similar documents found (search time: 31 ms)
2.
Inverse problems are ubiquitous in the Earth Sciences. Many such problems are ill-posed in the sense that multiple solutions can be found that match the data to be inverted. To impose restrictions on these solutions, a prior distribution of the model parameters is required. In a spatial context this prior model can be as simple as a multi-Gaussian law with a prior covariance matrix, or it can come in the form of a complex training image describing the prior statistics of the model parameters. In this paper, two methods for generating inverse solutions constrained to such a prior model are compared. The gradual deformation method treats the search for inverse solutions as an optimization problem: using a perturbation mechanism, it searches (optimizes) in the prior model space for solutions that match the data to be inverted. The perturbation mechanism guarantees that the prior model statistics are honored. However, a simple example shows that this perturbation method does not necessarily draw samples accurately from a given posterior distribution when the inverse problem is framed in a Bayesian context. The probability perturbation method, on the other hand, approaches the inverse problem as a data integration problem. It deals explicitly with the problem of combining prior probabilities with pre-posterior probabilities derived from the data. It is shown that the sampling properties of the probability perturbation method approach the accuracy of well-known Markov chain Monte Carlo samplers such as the rejection sampler. The paper uses simple examples to illustrate the clear differences between these two methods.
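For reference, the rejection sampler mentioned above is simple to state. The sketch below is not from the paper: the forward model g(m) = m², the Gaussian prior and likelihood, and the data value are all hypothetical, chosen only to show why rejection sampling serves as an accuracy benchmark, since its accepted draws are exact posterior samples.

```python
# Minimal rejection sampler: accept prior draws with probability L(m)/L_max.
import numpy as np

rng = np.random.default_rng(0)

def likelihood(m, d_obs, sigma=0.1):
    # Gaussian likelihood for a hypothetical forward model g(m) = m**2.
    return np.exp(-0.5 * ((m**2 - d_obs) / sigma) ** 2)

d_obs = 0.5
L_max = 1.0  # the Gaussian kernel never exceeds 1, so this bound is exact

candidates = rng.normal(0.0, 1.0, size=100_000)      # samples from the prior
u = rng.uniform(size=candidates.size)
posterior = candidates[u < likelihood(candidates, d_obs) / L_max]
print(posterior.size, posterior.mean())              # exact posterior sample
```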

3.
Bayesian updating methods provide an alternative philosophy for characterizing the input variables of a stochastic mathematical model. Here, a priori values of statistical parameters are assumed on subjective grounds or by analysis of a database from a geologically similar area. As measurements become available during site investigations, updated estimates of the parameters characterizing spatial variability are generated. However, in solving the traditional updating equations, the updated covariance matrix may not be positive-definite, particularly when observed data errors are small. In addition, measurements may indicate that the initial estimates of the statistical parameters are poor, and the traditional procedure has no facility for revising the parameter estimates before the update is carried out. Alternatively, Bayesian updating can be viewed as a linear inverse problem that minimizes a weighted combination of solution simplicity and data misfit; depending on the weight given to the a priori information, a different solution is generated. A Bayesian updating procedure for log-conductivity interpolation that uses a singular value decomposition (SVD) is presented. An efficient and stable algorithm is outlined that computes the updated log-conductivity field and the a posteriori covariance of the estimated values (the estimation errors). In addition, an information density matrix is constructed that indicates how well the predicted data match the observations; analysis of this matrix indicates the relative importance of the observed data. The SVD updating procedure is used to interpolate the log-conductivity fields of a series of hypothetical aquifers to demonstrate the pitfalls and possibilities of the method.
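As an illustration of the simplicity-versus-misfit trade-off described above, a damped-SVD solution of a linear inverse problem can be sketched as follows; the operator, data, and sizes are hypothetical, and the damping weight lam plays the role of the weight given to the a priori information.

```python
# Damped-SVD solution of G m = d and the associated information density matrix.
import numpy as np

rng = np.random.default_rng(1)
G = rng.normal(size=(20, 50))    # hypothetical linear observation operator
d = rng.normal(size=20)          # synthetic observations
lam = 0.5                        # weight on solution simplicity (a priori information)

U, s, Vt = np.linalg.svd(G, full_matrices=False)
f = s / (s**2 + lam**2)                  # damped inverse singular values
m_post = Vt.T @ (f * (U.T @ d))          # updated field estimate

# Information density matrix: predicted data = N @ d; diagonal entries close
# to 1 mark observations that strongly control their own prediction.
N = U @ np.diag(s**2 / (s**2 + lam**2)) @ U.T
print(np.round(np.diag(N), 2))
```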

5.
Building models in the Earth Sciences often requires the solution of an inverse problem: some unknown model parameters need to be calibrated against actual measurements. In most cases, the set of measurements cannot completely and uniquely determine the model parameters; hence multiple models can describe the same data set. Bayesian inverse theory provides a framework for solving this problem. Bayesian methods rely on the fact that the conditional probability of the model parameters given the data (the posterior) is proportional to the product of the likelihood of observing the data and a prior belief expressed as a prior distribution of the model parameters. When the prior distribution is not Gaussian and the relation between data and parameters (the forward model) is strongly non-linear, one has to resort to iterative samplers, often Markov chain Monte Carlo methods, to generate samples that fit the data likelihood and reflect the prior model statistics. While theoretically sound, such methods can be slow to converge and are often impractical when the forward model is CPU-demanding. In this paper, we propose a new sampling method that can sample from a variety of priors and condition model parameters to a variety of data types. The method does not rely on the traditional Bayesian decomposition of the posterior into likelihood and prior; instead it uses so-called pre-posterior distributions, i.e. the probability of the model parameters given some subset of the data. The use of pre-posteriors allows the data to be decomposed into so-called “easy data” (or linear data) and “difficult data” (or non-linear data). The method relies on fast, non-iterative sequential simulation to generate model realizations. The difficult data are matched by perturbing an initial realization using a perturbation mechanism termed “probability perturbation.” The probability perturbation method moves the initial guess closer to matching the difficult data, while maintaining the prior model statistics and the conditioning to the linear data. Several examples are used to illustrate the properties of this method.
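One common statement of the probability perturbation, written here in the notation of the related probability-perturbation literature rather than necessarily this paper's exact formulation, is

```latex
P\bigl(A(\mathbf{u}) \mid D\bigr) \;=\; (1 - r)\, i^{(0)}(\mathbf{u}) \;+\; r\, P\bigl(A(\mathbf{u})\bigr),
\qquad r \in [0, 1],
```

where i^{(0)}(u) is the indicator of the current realization and P(A(u)) the prior probability: r = 0 reproduces the current realization, r = 1 yields an independent draw from the prior, and a one-dimensional search over r moves the realization toward matching the difficult data.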

7.
Reservoir characterization needs the integration of various data through history matching, especially dynamic information such as production or four-dimensional seismic data. To update geostatistical realizations, the local gradual deformation method can be used. However, history matching is a complex inverse problem, and the computational effort, in terms of the number of reservoir simulations required in the optimization procedure, increases with the number of matching parameters. History matching large fields with large numbers of parameters has been an ongoing challenge in reservoir simulation. This paper presents a new technique to improve history matching with the local gradual deformation method using gradient-based optimization. The new approach is based on approximate derivative calculations that exploit the partial separability of the objective function. The objective function is first split into local components, and only the most influential parameters in each component are used for the derivative computation. A perturbation design is then proposed to compute all the derivatives simultaneously with only a few simulations. This new technique makes history matching using the local gradual deformation method with large numbers of parameters tractable.
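The gain from partial separability can be seen in a toy example: if each local component of the objective depends on its own small set of parameters, one simultaneous perturbation of non-overlapping parameters recovers all the corresponding derivatives from a single extra simulation. The objective below is hypothetical.

```python
# One perturbed evaluation yields a derivative for every local component,
# because the perturbed parameters do not overlap across components.
import numpy as np

def local_components(x):
    # Hypothetical objective split into local misfit components.
    return np.array([(x[0] - 1.0) ** 2,    # depends only on x[0]
                     (x[1] + 2.0) ** 2,    # depends only on x[1]
                     (0.5 * x[2]) ** 2])   # depends only on x[2]

x, h = np.zeros(3), 1e-6
grad = (local_components(x + h) - local_components(x)) / h
print(np.round(grad, 3))   # ≈ [-2.  4.  0.]: three derivatives, two simulations
```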

8.
The thermal regime of the Northeastern-German Basin from 2-D inversion
The thermal regime and the distribution of heat flow at the base of sedimentary basins are fundamental to understanding the process of basin evolution and the associated mobilization and migration of hydrocarbons and other fluids. For the Northeastern-German sedimentary basin, the available information on structure, temperature, and thermal properties along a seismic DEKORP reflection profile allows high-resolution 2-D forward and inverse simulations. This approach is attractive in situations where much information is available, even if only with considerable uncertainty; in particular, it allows “soft” information to be introduced into the analysis. In our case, forward simulations yield initial a priori estimates of the parameters, while inversion calculations yield a posteriori estimates of the parameters and their uncertainty. The a priori parameters and their assumed uncertainty are input to a Bayesian parameter estimation scheme. For the Northeastern-German sedimentary basin, the inverse analysis postulates a significant and characteristic a posteriori variation of the thermal conductivity of the Zechstein unit along the entire profile, as well as a generally large a posteriori thermal conductivity of the (pre-Permian) basement in the northern part of the basin. For the inverse calculations, we used two alternative scenarios: one assumes the thermal conductivity of the Zechstein unit to be homogeneous along the profile, while the other allows a lateral variation. A posteriori heat flow across the base of the model varies from 40 to 60 mW m⁻² or from 50 to 65 mW m⁻² for models in which the values for thermal conductivity and radiogenic heat generation rate were based on literature values or on direct measurements, respectively.
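A generic Bayesian least-squares formulation of the kind such parameter-estimation schemes minimize (a standard statement, not the paper's exact equations) is

```latex
\min_{\mathbf m}\ \tfrac12\,\bigl(g(\mathbf m)-\mathbf d\bigr)^{\mathsf T}\mathbf C_d^{-1}\bigl(g(\mathbf m)-\mathbf d\bigr)
\;+\;\tfrac12\,\bigl(\mathbf m-\mathbf m_{\mathrm{prior}}\bigr)^{\mathsf T}\mathbf C_m^{-1}\bigl(\mathbf m-\mathbf m_{\mathrm{prior}}\bigr),
\qquad
\tilde{\mathbf C}_m \approx \bigl(\mathbf G^{\mathsf T}\mathbf C_d^{-1}\mathbf G + \mathbf C_m^{-1}\bigr)^{-1},
```

where g is the forward simulation with Jacobian G, C_d and C_m are the data and a priori parameter covariances, and the a posteriori covariance C̃_m carries the parameter uncertainty referred to above.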

9.
Spatial inverse problems in the Earth Sciences are often ill-posed, requiring the specification of a prior model to constrain the nature of the inverse solutions; otherwise, inverted model realizations lack geological realism. In spatial modeling, such a prior model determines the spatial variability of the inverse solution, for example as constrained by a variogram, a Boolean model, or a training image-based model. In many cases, particularly in subsurface modeling, one lacks sufficient data to fully determine the nature of the spatial variability. For example, many different training images could be proposed for a given study area. Such alternative training images or scenarios correspond to the different possible geological concepts, each exhibiting a distinctive geological architecture. Many inverse methods rely on priors that represent a single, subjectively chosen geological concept (a single variogram within a multi-Gaussian model, or a single training image). This paper proposes a novel and practical parameterization of the prior model that allows several discrete choices of geological architecture within the prior. The method does not attempt to parameterize the possibly complex architectures by a set of model parameters. Instead, a large set of prior model realizations is provided in advance by means of Monte Carlo simulation in which the training image is randomized. The parameterization is achieved by defining a metric space which accommodates this large set of model realizations. This metric space is equipped with a “similarity distance” function, a distance that measures the similarity of geometry between any two model realizations relevant to the problem at hand. Through examples, it is shown that inverse solutions can be found efficiently in this metric space using a simple stochastic search method.
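The search idea can be sketched with synthetic realizations; here Euclidean distance between cheap proxy responses stands in for the paper's similarity distance, and all sizes are hypothetical.

```python
# Search a pre-generated set of prior realizations for those closest, in a
# similarity distance, to the observed response.
import numpy as np

rng = np.random.default_rng(2)
realizations = rng.normal(size=(500, 100))   # 500 prior model realizations

def response(m):
    # Hypothetical cheap proxy for the data response of a realization.
    return m[:10].cumsum()

d_obs = response(realizations[123])          # pretend the truth is in the set
dist = np.array([np.linalg.norm(response(m) - d_obs) for m in realizations])
print(np.argsort(dist)[:5])                  # candidates for a stochastic search; 123 comes first
```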

10.
A new type of indirect inverse analysis procedure is proposed to overcome the difficulties that geotechnical inverse analyses encounter, such as instability and non-uniqueness of the solutions as well as multicollinearity. These difficulties are eased by combining the objective information (i.e. the observation data) and the subjective information (i.e. the prior information) in an appropriate manner via the so-called extended Bayesian method. The method is based on a new view of the Bayesian model proposed by Akaike. The problem of model identification in the inverse analysis is also tackled by applying the well-known AIC, but in its Bayesian version. A case study of an embankment on soft clay is presented to illustrate the effectiveness of the new method. A rather thorough review of geotechnical inverse analysis is also presented to indicate the necessity of the proposed procedure. An appendix summarizes the statistical background of the new method.
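The "Bayesian version of AIC" mentioned above is usually written as Akaike's Bayesian Information Criterion (ABIC); in its generic form, with β the hyperparameters that weight the prior information against the data,

```latex
\mathrm{ABIC} \;=\; -2 \log \int L(\mathbf d \mid \mathbf m)\, \pi(\mathbf m \mid \boldsymbol\beta)\, d\mathbf m \;+\; 2\,\dim(\boldsymbol\beta),
```

and minimizing ABIC over β selects how strongly the subjective information should constrain the solution.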

12.
Model calibration and history matching are important techniques for adapting simulation tools to real-world systems. When prediction uncertainty needs to be quantified, one has to use the corresponding statistical counterparts, e.g., Bayesian updating of model parameters and data assimilation. For complex and large-scale systems, however, even single forward deterministic simulations may require parallel high-performance computing. This often makes accurate brute-force and nonlinear statistical approaches infeasible. We propose an advanced framework for parameter inference or history matching based on the arbitrary polynomial chaos expansion (aPC) and strict Bayesian principles. Our framework consists of two main steps. In step 1, the original model is projected onto a mathematically optimal response surface via the aPC technique. The resulting response surface can be viewed as a reduced (surrogate) model; it captures the model's dependence on all parameters relevant for history matching at high-order accuracy. Step 2 consists of matching the reduced model from step 1 to observation data via bootstrap filtering. Bootstrap filtering is a fully nonlinear Bayesian statistical approach to the inverse problem in history matching. It quantifies post-calibration parameter and prediction uncertainty and is more accurate than ensemble Kalman filtering or linearized methods. Through this combination, we obtain a statistical method for history matching that is accurate, yet has a computational speed more than sufficient to be developed towards real-time application. We motivate and demonstrate our method on the problem of CO2 storage in geological formations, using a low-parametric, homogeneous 3D benchmark problem. In a synthetic case study, we update the parameters of a CO2/brine multiphase model using pressure data monitored during CO2 injection.
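A single bootstrap-filter update is easy to sketch; the one-parameter surrogate, data value, and noise level below are hypothetical stand-ins for the aPC response surface and the monitored pressure data.

```python
# Bootstrap filtering in one step: weight prior samples by the likelihood of
# the observation, then resample to obtain a posterior ensemble.
import numpy as np

rng = np.random.default_rng(3)

def surrogate(theta):
    # Stand-in for the aPC response surface (hypothetical polynomial).
    return 2.0 * theta + 0.3 * theta**2

theta = rng.normal(0.0, 1.0, size=5000)   # prior ensemble
d_obs, sigma = 1.5, 0.2                   # synthetic datum and its noise level
w = np.exp(-0.5 * ((surrogate(theta) - d_obs) / sigma) ** 2)
w /= w.sum()
posterior = theta[rng.choice(theta.size, size=theta.size, p=w)]  # resampling
print(round(theta.mean(), 3), round(posterior.mean(), 3))
```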

13.
We consider the problem of determining the characteristics of a lava flow from physical parameters measured on its surface. The problem is formulated as an inverse boundary problem for a model simulating the dynamics of a viscous, heat-conducting, incompressible, inhomogeneous fluid: on the basis of additional data at one part of the model boundary, the missing conditions at another part of the boundary have to be determined, and the characteristics of the fluid in the entire model domain can then be reconstructed. The problem is ill-posed. We develop a numerical approach to its solution in the case of a steady-state flow. Assuming that the temperature and the heat flow are known at the upper surface of the lava, we determine the flow characteristics inside the lava. We compute model examples and show that the lava temperature and flow velocity can be determined with high precision when the initial data are smooth or only slightly noisy.

14.
Seawater intrusion (SWI) is a complex process for which 3D modeling is often necessary in order to monitor and manage the affected aquifers. Here, we present a synthetic study to test a joint hydrogeophysical inversion approach aimed at solving the inverse problem of estimating the initial and current saltwater distributions. First, we use a 3D groundwater model for variable-density flow based on discretized flow and solute mass balance equations. In addition to the groundwater model, a 3D geophysical model was developed for direct current resistivity imaging and inversion. The objective function of the coupled problem consists of data misfit and regularization terms as well as a coupling term that relates the groundwater and geophysical states. We present a novel approach to solving the inverse problem using an alternating direction method of multipliers (ADMM) to minimize this coupled objective function. ADMM makes it possible to treat the groundwater and geophysical parts separately and thus to use existing software with minor changes. To further reduce the computational cost, the sensitivities are derived analytically for the discretized system of equations, which allows us to compute the gradients efficiently in the minimization procedure. The method was tested on different synthetic scenarios in which the groundwater and geophysical data are represented by solute mass fraction data and direct current resistivity data. With the ADMM approach, we were able to obtain better estimates of the solute distribution than by considering each data set separately, solving the problem with a simple coupled approach, or directly substituting the coupling constraint.
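In generic scaled form, for a problem min f(x) + g(z) subject to Ax + Bz = c, the ADMM iterations read

```latex
\mathbf x^{k+1} = \arg\min_{\mathbf x}\; f(\mathbf x) + \tfrac{\rho}{2}\bigl\|\mathbf A\mathbf x + \mathbf B\mathbf z^{k} - \mathbf c + \mathbf u^{k}\bigr\|_2^2, \\
\mathbf z^{k+1} = \arg\min_{\mathbf z}\; g(\mathbf z) + \tfrac{\rho}{2}\bigl\|\mathbf A\mathbf x^{k+1} + \mathbf B\mathbf z - \mathbf c + \mathbf u^{k}\bigr\|_2^2, \\
\mathbf u^{k+1} = \mathbf u^{k} + \mathbf A\mathbf x^{k+1} + \mathbf B\mathbf z^{k+1} - \mathbf c.
```

In the setting described above, f would collect the groundwater terms, g the geophysical terms, and the constraint the coupling between the two states; the separate x- and z-minimizations are what allow the existing groundwater and geophysical codes to be run almost unchanged.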

15.
In this article we present a geostatistical approach to the transmission tomographic inverse problem, based on treating the inverse problem variables (velocity and traveltime errors) as regionalized variables (RVs). Their structural analysis provides a new method to study the geophysical anisotropy of the rock, an important source of a priori information for designing the anisotropic corrections. The underlying idea is that the geophysical structure can be deduced from the spatial structure of the regionalized variables obtained by solving the tomographic problem with an isotropic algorithm. Applying structural analysis to the anisotropy-corrected velocity field also allows us to characterize the reliability of these corrections (model quality analysis). The geostatistical formalism also provides different techniques (parametric and non-parametric) to estimate, and even simulate, the velocity in areas where this field has been considered anomalous on the basis of field studies and of geophysical and statistical criteria. Kriging acts as a low-pass smoothing filter for the anomalous model parameters (velocities), but it is not a substitute for adequate filtering of the outliers before the inversion. This methodology opens the possibility of treating the inverse problem variables as stochastic processes, an important feature in cases where the tomogram is to be used as an assessment tool to quantify rock heterogeneities.
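The low-pass behavior of kriging noted above can be demonstrated in one dimension; the covariance model, data, and nugget value are hypothetical, and the nugget term is what turns kriging from an exact interpolator into a smoother.

```python
# Simple kriging with a nugget: the anomalous value is damped, not reproduced.
import numpy as np

def cov(h, sill=1.0, length=10.0):
    return sill * np.exp(-np.abs(h) / length)

x_data = np.array([0.0, 5.0, 12.0, 20.0])
v_data = np.array([3.0, 3.4, 9.0, 3.1])     # 9.0 plays the anomalous velocity
nugget = 0.3

C = cov(x_data[:, None] - x_data[None, :]) + nugget * np.eye(x_data.size)
mean = v_data.mean()
x_est = np.linspace(0.0, 20.0, 81)
c = cov(x_est[:, None] - x_data[None, :])
est = mean + c @ np.linalg.solve(C, v_data - mean)
print(round(est.max(), 2))                  # well below 9.0: low-pass smoothing
```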

16.
We present a method to determine lower and upper bounds on the predicted production, or any other economic objective, from history-matched reservoir models. The method consists of two steps: 1) performing a traditional computer-assisted history match of a reservoir model, with the objective of minimizing the mismatch between predicted and observed production data by adjusting the grid block permeability values of the model; 2) performing two optimization exercises to minimize and maximize an economic objective over the remaining field life, for a fixed production strategy, by manipulating the same grid block permeabilities, but without significantly changing the mismatch obtained in step 1. This is accomplished through a hierarchical optimization procedure that limits the solution space of a secondary optimization problem to the (approximate) null space of the primary optimization problem. We applied this procedure to two different reservoir models. We performed a history match based on synthetic data, starting from a uniform prior and using a gradient-based minimization procedure. After history matching, minimization and maximization of the net present value (NPV), using a fixed control strategy, were executed as secondary optimization problems by changing the model parameters while staying close to the null space of the primary optimization problem. In other words, we optimized the secondary objective functions while requiring that optimality of the primary objective (a good history match) was preserved. The method therefore provides a way to quantify the economic consequences of the well-known fact that history matching is a strongly ill-posed problem. We also investigated how this method can be used to assess the cost-effectiveness of acquiring different data types to reduce the uncertainty in the expected NPV.
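The null-space restriction in step 2 can be illustrated with a toy linearized problem; the Jacobian and the NPV-like gradient below are hypothetical.

```python
# Secondary-objective steps projected onto the null space of the primary
# Jacobian change the secondary objective while preserving the (linearized)
# history match.
import numpy as np

rng = np.random.default_rng(4)
J = rng.normal(size=(3, 10))   # Jacobian of the primary (history-match) residuals
g = np.ones(10)                # gradient of a hypothetical NPV-like objective

P = np.eye(10) - J.T @ np.linalg.solve(J @ J.T, J)   # projector onto null(J)
m = np.zeros(10)
for _ in range(50):
    m += 0.1 * (P @ g)         # move only within the null space
print(np.linalg.norm(J @ m))   # ~1e-14: primary residuals unchanged
```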

17.
Trace element concentration data can be used systematically in the study of igneous processes by constructing models of such processes which satisfactorily account for the observations. We propose to treat this as an inverse problem. The concept of trace element paths (TEP) is introduced as a representation of the solution to the direct problem. The inverse problem consists of estimating, by a resolution of the equations, the various parameters of a model so as to provide a best fit to the observed TEP. A detailed account of the theory is given for the case of equilibrium fractional crystallization. The estimated parameters are then those figuring in the Rayleigh distillation law, namely: 1) the initial concentrations of the trace elements in the parental magma, 2) the bulk partition coefficients of the elements, and 3) the degree of crystallization corresponding to each sample of the magmatic suite analyzed. A slightly generalized maximum likelihood method is used to solve the linearized equation by a stable, iterative algorithm. Information theory is then shown to yield an account of the distribution and flow of information during the process of solving the inverse problem. The concept of Data Importances is generalized, and its use in optimizing the study is justified. The technique is successfully applied to a synthetic data set, and then illustrated on a data set from Terceira (Azores). The results are used to refine the conclusions reached in part I and permit a more detailed discussion of the model.
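The Rayleigh distillation law referred to above, for a trace element during equilibrium fractional crystallization, can be written

```latex
\frac{C_L}{C_0} \;=\; F^{\,D-1}, \qquad F = 1 - X,
```

where C_0 is the initial concentration in the parental magma, C_L the concentration in the residual liquid, D the bulk partition coefficient, F the fraction of liquid remaining, and X the degree of crystallization; these are exactly the parameters 1) to 3) estimated by the inversion.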

18.
Application of Multiple Point Geostatistics to Non-stationary Images
Simulation of flow and solute transport through aquifers or oil reservoirs requires a precise representation of subsurface heterogeneity, which can be achieved by stochastic simulation approaches. Traditional geostatistical methods based on variograms, such as truncated Gaussian simulation or sequential indicator simulation, may fail to generate the complex, curvilinear, continuous, and interconnected facies distributions that are often encountered in real geological media, because they rely on two-point statistics. Multiple Point Geostatistics (MPG) overcomes this constraint by using more complex point configurations whose statistics are retrieved from training images. Obtaining representative statistics requires stationary training images, but geological understanding often suggests a priori facies variability patterns. This research aims at extending MPG to non-stationary facies distributions. The proposed method subdivides the training images into different areas, and the statistics for each area are stored in separate frequency search trees. Several training images are used to ensure that the obtained statistics are representative. The facies probability distribution for each cell during simulation is calculated by weighting the probabilities from the frequency trees. The method is tested on two different object-based training image sets. Results show that non-stationary training images can be used to generate suitable non-stationary facies distributions.
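The weighting step can be sketched as follows; the two areas, their facies probabilities, and the linear weighting scheme are hypothetical placeholders for what the frequency search trees would return.

```python
# Draw the facies of each cell from a weighted mixture of the conditional
# probabilities stored per training-image area.
import numpy as np

rng = np.random.default_rng(5)
p_tree = {"area_A": np.array([0.7, 0.2, 0.1]),   # P(facies | data event), area A
          "area_B": np.array([0.1, 0.3, 0.6])}   # P(facies | data event), area B

def simulate_cell(x, length=100.0):
    w = x / length                               # weight shifts from area A to B
    p = (1.0 - w) * p_tree["area_A"] + w * p_tree["area_B"]
    return rng.choice(3, p=p / p.sum())

print([simulate_cell(x) for x in (5.0, 50.0, 95.0)])
```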

19.
Classic mathematical statistics recommends maximum likelihood estimators of the parameters of a model because they have minimal variance within the model. The theory of robustness showed that these estimators are unstable to small deviations of the probability density. Estimator stability is necessary in applications, where reality is always more complex than any model, especially in geology, where objects are unique. Methods of the calculus of variations give a measure of estimator stability, and the maximum likelihood estimators turn out to have little stability. Simultaneously maximizing efficiency and stability gives new estimators more suitable for applications. Estimator instability is especially harmful in the estimation of the multivariate normal distribution. To avoid instability, multivariate problems are reduced to sequences of bivariate problems. An example solving a geological problem shows that the methods of classic statistics perform poorly and the reductive method performs much better.

20.
Seismic data are inherently non-stationary, and handling missing data in complex non-stationary seismic wavefields is one of the key steps in seismic exploration data processing. Prediction filters play an important role in seismic data processing and analysis and can effectively address missing seismic data, but traditional stationary prediction filtering methods cannot adapt well to the non-stationary character of seismic data; developing efficient adaptive prediction interpolation methods for complex seismic wavefields therefore has substantial industrial value. This paper introduces the concept of stream processing into the prediction filter: the filter coefficients are updated continuously as the seismic data change, and the computation requires only vector dot-product operations, which improves computational efficiency and reduces memory use. On this basis, a seismic data interpolation method based on streaming prediction filtering is developed. Using the dynamic information of multiples, virtual primaries are constructed by cross-correlation, which effectively resolves the inaccurate estimation of filter coefficients at missing-data locations, provides more reasonable filter estimates for the interpolation process, and better addresses the reconstruction of non-stationary seismic data. Tests on the Sigsbee 2B model and field data show that the method can reasonably reconstruct missing data from complex seismic information.
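The streaming idea, coefficients refreshed sample by sample using only dot products, can be illustrated with a normalized-LMS adaptive predictor; this is a stand-in sketch on a synthetic non-stationary sweep, not the paper's update rule, and the multiple-driven virtual primaries are not reproduced here.

```python
# Adaptive one-step-ahead prediction: the filter tracks a non-stationary
# signal using only vector dot products per sample.
import numpy as np

n, order, mu = 2000, 4, 0.5
t = np.arange(n)
trace = np.sin(2 * np.pi * (0.01 + 2e-5 * t) * t)   # non-stationary sweep

a = np.zeros(order)                     # filter coefficients, updated on the fly
err = np.zeros(n)
for i in range(order, n):
    x = trace[i - order:i][::-1]        # most recent samples first
    e = trace[i] - a @ x                # dot product gives the prediction error
    a += mu * e * x / (x @ x + 1e-8)    # streaming coefficient update
    err[i] = e
print(round(np.abs(err[order:]).mean(), 4))   # small residual despite non-stationarity
```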
