20 similar documents retrieved (search time: 15 ms)
1.
Regularized sparse‐grid geometric sampling for uncertainty analysis in non‐linear inverse problems
This paper introduces an efficiency improvement to the sparse-grid geometric sampling methodology for assessing uncertainty in non-linear geophysical inverse problems. Traditional sparse-grid geometric sampling draws samples in a reduced-dimension parameter space bounded by a feasible polytope, i.e., a generalization of a polygon to dimensions above two. The feasible polytope is approximated by a hypercube. When the polytope is very irregular, the hypercube can be a poor approximation, leading to computational inefficiency in sampling. We show how the polytope can be regularized using a rotation and scaling based on principal component analysis. This simple regularization increases the efficiency of the sampling and, by extension, reduces the computational cost of the uncertainty solution. We demonstrate this on two synthetic 1D examples related to controlled-source electromagnetic and amplitude-versus-offset inversion. The results show an improvement of about 50% in the performance of the proposed methodology compared with the traditional one. However, as the amplitude-versus-offset example shows, the efficiency gains of the proposed methodology are likely to depend on the shape and complexity of the original polytope. Further investigation of the regularization of the original polytope is needed to fully understand when a simple step based on rotation and scaling is sufficient.
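To illustrate the regularization step, a minimal numpy sketch, assuming the feasible region is represented by a cloud of sample points (the paper works with the polytope itself; all names below are illustrative):

```python
import numpy as np

def pca_regularize(points):
    """Rotate and scale a point cloud (rows = samples) so its principal
    axes align with the coordinate axes and have unit spread.
    Schematic stand-in for the paper's polytope regularization."""
    center = points.mean(axis=0)
    centered = points - center
    # Principal axes from the SVD of the centered cloud.
    _, singvals, vt = np.linalg.svd(centered, full_matrices=False)
    rotated = centered @ vt.T                      # align principal axes with grid axes
    scales = singvals / np.sqrt(len(points) - 1)   # per-axis standard deviation
    return rotated / scales, (center, vt, scales)  # unit spread along every axis

def undo(reg_points, transform):
    """Map regularized samples back to the original parameter space."""
    center, vt, scales = transform
    return (reg_points * scales) @ vt + center

# Example: an elongated, tilted 2D region becomes nearly square after
# regularization, so a bounding hypercube wastes far less volume.
rng = np.random.default_rng(0)
raw = rng.normal(size=(500, 2)) @ np.array([[3.0, 1.5], [0.0, 0.3]])
reg, tf = pca_regularize(raw)
print(np.ptp(raw, axis=0), np.ptp(reg, axis=0))
```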
2.
This paper concerns efficient uncertainty quantification techniques for inverse problems in Richards' equation that use coarse-scale simulation models. We consider the problem of determining saturated hydraulic conductivity fields conditioned on some integrated response. We use a stochastic parameterization of the saturated hydraulic conductivity and sample using Markov chain Monte Carlo (MCMC) methods. The main advantage of the method presented in this paper is the use of multiscale methods within an MCMC method based on Langevin diffusion. Additionally, we discuss techniques to combine multiscale methods with stochastic solution techniques, specifically sparse-grid collocation methods. We show that the proposed algorithms dramatically reduce the computational cost associated with traditional Langevin MCMC methods while providing similar sampling performance.
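A minimal sketch of the Langevin ingredient, one Metropolis-adjusted Langevin (MALA) update, assuming generic stand-ins `log_post` and `grad_log_post` for the posterior over the conductivity parameterization; the paper's multiscale preconditioning and coarse-scale screening are not reproduced:

```python
import numpy as np

def mala_step(theta, log_post, grad_log_post, step, rng):
    """One Metropolis-adjusted Langevin (MALA) update: drift along the
    posterior gradient, then correct with a Metropolis-Hastings test."""
    noise = rng.normal(size=theta.shape)
    prop = theta + 0.5 * step * grad_log_post(theta) + np.sqrt(step) * noise

    def log_q(x, y):  # log of the asymmetric proposal density q(x | y)
        mean = y + 0.5 * step * grad_log_post(y)
        return -np.sum((x - mean) ** 2) / (2.0 * step)

    log_alpha = (log_post(prop) - log_post(theta)
                 + log_q(theta, prop) - log_q(prop, theta))
    if np.log(rng.uniform()) < log_alpha:
        return prop, True
    return theta, False

# Toy target: a standard Gaussian posterior in three dimensions.
rng = np.random.default_rng(1)
lp = lambda t: -0.5 * np.sum(t ** 2)
glp = lambda t: -t
theta = np.zeros(3)
for _ in range(1000):
    theta, _ = mala_step(theta, lp, glp, step=0.5, rng=rng)
```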
3.
1D elastic full‐waveform inversion and uncertainty estimation by means of a hybrid genetic algorithm–Gibbs sampler approach
Stochastic optimization methods, such as genetic algorithms, search for the global minimum of the misfit function within a given parameter range and do not require any calculation of the gradients of the misfit surfaces. More importantly, these methods collect a series of models and associated likelihoods that can be used to estimate the posterior probability distribution. However, because genetic algorithms are not a Markov chain Monte Carlo method, the direct use of the genetic-algorithm-sampled models and their associated likelihoods produces a biased estimate of the posterior probability distribution. In contrast, Markov chain Monte Carlo methods, such as the Metropolis–Hastings algorithm and the Gibbs sampler, provide accurate posterior probability distributions but at considerable computational cost. In this paper, we use a hybrid method that combines the speed of a genetic algorithm in finding an optimal solution with the accuracy of a Gibbs sampler to obtain a reliable estimation of the posterior probability distributions. First, we test this method on an analytical function and show that the genetic algorithm alone cannot recover the true probability distributions and tends to underestimate the true uncertainties. Conversely, combining the genetic algorithm optimization with a Gibbs sampler step enables us to recover the true posterior probability distributions. Then, we demonstrate the applicability of this hybrid method by performing one-dimensional elastic full-waveform inversions on synthetic and field data. We also discuss how an appropriate genetic algorithm implementation is essential to attenuate the "genetic drift" effect and to maximize the exploration of the model space. In fact, a wide and efficient exploration of the model space is important not only to avoid entrapment in local minima during the genetic algorithm optimization but also to ensure a reliable estimation of the posterior probability distributions in the subsequent Gibbs sampler step.
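A toy two-stage sketch of the hybrid idea on a Gaussian misfit: a bare-bones genetic algorithm locates the optimum, then a Metropolis-within-Gibbs pass (standing in for the paper's Gibbs sampler) characterizes the posterior. Everything below is illustrative, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(2)
misfit = lambda m: 0.5 * np.sum(((m - np.array([1.0, -2.0])) / 0.3) ** 2)

# --- Stage 1: a bare-bones genetic algorithm finds a good starting model.
pop = rng.uniform(-5, 5, size=(50, 2))
for _ in range(40):
    fit = np.array([misfit(m) for m in pop])
    parents = pop[np.argsort(fit)[:25]]                      # selection
    kids = parents[rng.integers(25, size=(25, 2)), [0, 1]]   # gene-wise crossover
    kids += rng.normal(scale=0.1, size=kids.shape)           # mutation
    pop = np.vstack([parents, kids])
best = pop[np.argmin([misfit(m) for m in pop])]

# --- Stage 2: Metropolis-within-Gibbs around the GA solution gives an
# unbiased picture of the posterior (the GA population alone would not).
m, chain = best.copy(), []
for _ in range(5000):
    for j in range(2):                                       # one coordinate at a time
        prop = m.copy()
        prop[j] += rng.normal(scale=0.2)
        if np.log(rng.uniform()) < misfit(m) - misfit(prop):
            m = prop
    chain.append(m.copy())
print(np.mean(chain, axis=0), np.std(chain, axis=0))         # ~[1, -2], ~0.3
```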
4.
Optimal implicit staggered‐grid finite‐difference schemes based on the sampling approximation method for seismic modelling
We propose new implicit staggered-grid finite-difference schemes with optimal coefficients based on the sampling approximation method to improve the accuracy of numerical solutions for seismic modelling. We first derive the optimized implicit staggered-grid finite-difference coefficients of arbitrary even-order accuracy for the first-order spatial derivatives using plane-wave theory and the direct sampling approximation method. The implicit staggered-grid finite-difference coefficients based on sampling approximation, which maintain high accuracy over a wider range of wavenumbers, are then used to solve the first-order spatial derivatives. By comparing the numerical dispersion of implicit staggered-grid finite-difference schemes based on sampling approximation, Taylor series expansion, and least squares, we find that the optimal scheme based on sampling approximation achieves greater precision than that based on Taylor series expansion over a wider range of wavenumbers, while having accuracy similar to that based on least squares. Finally, we apply the implicit staggered-grid finite difference based on sampling approximation to numerical modelling. The modelling results demonstrate that the new optimal method efficiently suppresses numerical dispersion and yields greater accuracy than the implicit staggered-grid finite difference based on Taylor series expansion. In addition, the results indicate that the computational cost of the sampling-approximation scheme is almost the same as that of the Taylor-series-expansion scheme.
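For orientation, a sketch of a staggered-grid first derivative and its numerical-wavenumber (dispersion) curve. One hedge: the paper's schemes are implicit, while this stencil is explicit with standard fourth-order Taylor coefficients; it only illustrates how the coefficients c_m determine the dispersion behaviour being compared:

```python
import numpy as np

def staggered_dx(f, dx, coeffs=(9 / 8, -1 / 24)):
    """Explicit staggered-grid first derivative: df[i] ~ f'(x_i + dx/2).
    `coeffs` are the half-stencil weights c_1..c_M; the defaults are the
    standard 4th-order Taylor values, standing in for the optimized
    sampling-approximation coefficients the paper derives."""
    n, M = len(f), len(coeffs)
    df = np.zeros(n)
    for i in range(M - 1, n - M):
        df[i] = sum(c * (f[i + m] - f[i - m + 1])
                    for m, c in enumerate(coeffs, 1)) / dx
    return df

# Accuracy check against an analytic derivative at the staggered points.
dx = 0.1
x = np.arange(0, 10, dx)
err = staggered_dx(np.sin(x), dx)[2:-3] - np.cos(x + dx / 2)[2:-3]
print(np.max(np.abs(err)))  # ~5e-7: fourth-order discretization error

# Numerical wavenumber of a stencil; schemes are compared by how wide a
# band of k this curve tracks the exact line k_num = k.
k = np.linspace(0.01, np.pi / dx, 200)
k_num = (2 / dx) * sum(c * np.sin((m - 0.5) * k * dx)
                       for m, c in enumerate((9 / 8, -1 / 24), 1))
```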
5.
Non-local stochastic moment equations have been used successfully to analyze groundwater flow in randomly heterogeneous media. Here we present a moment-equations-based approach to quantify the uncertainty associated with the estimation of well catchments. Our approach is based on a complete second-order formalism that yields the first statistical moments of the trajectories of conservative solute particles advected in a generally non-uniform groundwater flow. Approximate equations for the moments of the particles' trajectories are then derived on the basis of a second-order expansion in terms of the standard deviation of the aquifer log hydraulic conductivity. Analytical expressions are then obtained for the predictors of the locations of mean stagnation points, together with their associated uncertainties. We implement our approach on heterogeneous media in bounded two-dimensional domains, with and without conditioning on hydraulic conductivity information. The impact of domain size, boundary conditions, heterogeneity, and non-stationarity of hydraulic conductivity on the prediction of a well catchment is explored. The results are compared against Monte Carlo simulations and semi-analytical solutions available in the literature. The methodology is applicable to both infinite and bounded domains, is free of distributional assumptions (and so applies to both Gaussian and non-Gaussian log hydraulic conductivity fields), and formally includes the effect of conditioning on available information.
6.
In most real-world hydrogeologic situations, natural heterogeneity and measurement errors introduce major sources of uncertainty into the solution of the inverse problem. The Bayesian Maximum Entropy (BME) method of modern geostatistics offers an efficient solution to the inverse problem by first assimilating various physical knowledge bases (hydrologic laws, water table elevation data, uncertain hydraulic resistivity measurements, etc.) and then producing robust estimates of the subsurface variables across space. We present specific methods for implementing the BME conceptual framework to solve an inverse problem involving Darcy's law for subsurface flow. We illustrate one of these methods on a synthetic one-dimensional case study concerned with the estimation of hydraulic resistivity conditioned on soft data and hydraulic head measurements. The BME framework processes the physical knowledge contained in Darcy's law and generates accurate estimates of hydraulic resistivity across space. The optimal distribution of hard and soft data needed to minimize the associated estimation error at a specified sampling cost is determined.
This work was supported by grants from the National Institute of Environmental Health Sciences (Grant no. 5 P42 ES05948 and P30ES10126), the National Aeronautics and Space Administration (Grant no. 60-00RFQ041), the Army Research Office (Grant no. DAAG55-98-1-0289), and the National Science Foundation under Agreement No. DMS-0112069.
7.
Wave-equation-based methods, such as the estimation of primaries by sparse inversion, have been successful in mitigating the adverse effects of surface-related multiples on seismic imaging and migration-velocity analysis. However, the reliance of these methods on multidimensional convolutions with fully sampled data exposes the 'curse of dimensionality', which leads to disproportionate growth in computational and storage demands when moving to realistic 3D field data. To remove this fundamental impediment, we propose a dimensionality-reduction technique in which the 'data matrix' is approximated adaptively by a randomized low-rank factorization. Compared with conventional methods, which for each iteration require a pass through all the data, possibly with on-the-fly interpolation, our randomized approach reduces the total number of passes to between one and three. In addition, the low-rank matrix factorization leads to considerable reductions in the storage and computational costs of the matrix multiplies required by the sparse inversion. Application of the proposed method to two-dimensional synthetic and real data shows that significant performance improvements in speed and memory use are achievable at a low computational up-front cost required by the low-rank factorization.
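A compact sketch of a generic randomized low-rank factorization (Halko-style, two passes over the matrix); the paper's adaptive variant for the seismic data matrix differs in detail, and the example matrix is synthetic:

```python
import numpy as np

def randomized_lowrank(A, rank, n_oversample=8, rng=None):
    """Randomized low-rank factorization A ~ L @ R. A random range finder
    plus a small SVD gives a rank-`rank` approximation using only two
    passes over A (generic algorithm, not the paper's exact variant)."""
    if rng is None:
        rng = np.random.default_rng()
    k = rank + n_oversample
    Y = A @ rng.normal(size=(A.shape[1], k))   # pass 1: sample the range of A
    Q, _ = np.linalg.qr(Y)                     # orthonormal range basis
    B = Q.T @ A                                # pass 2: small (k x n) projection
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    L = Q @ U[:, :rank] * s[:rank]             # tall factor, columns scaled by s
    return L, Vt[:rank]                        # A ~ L @ Vt[:rank]

# A "data matrix" with rapidly decaying spectrum compresses well:
rng = np.random.default_rng(3)
A = rng.normal(size=(400, 30)) @ rng.normal(size=(30, 400))
L, R = randomized_lowrank(A, rank=30, rng=rng)
print(np.linalg.norm(A - L @ R) / np.linalg.norm(A))  # ~1e-14 (exact rank 30)
```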
8.
We recently proposed an efficient hybrid scheme to absorb boundary reflections in acoustic wave modelling that attains nearly perfect absorption. This scheme uses weighted averaging of wavefields in a transition area between the inner area and the model boundaries. In this paper we report on the extension of this scheme to 2D elastic wave modelling with displacement-stress formulations on staggered grids using explicit finite-difference, pseudo-implicit finite-difference, and pseudo-spectral methods. Numerical modelling results for elastic wave equations with hybrid absorbing boundary conditions show greatly improved modelling stability and significant absorption of boundary reflections compared with the conventional Higdon absorbing boundary conditions, demonstrating the effectiveness of this scheme for elastic wave modelling. The results also show that the hybrid scheme works well in 2D rotated staggered-grid modelling for isotropic media, 2D staggered-grid modelling for vertically transversely isotropic media, and 2D rotated staggered-grid modelling for tilted transversely isotropic media.
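A 1D schematic of the weighted-averaging idea, assuming a linear weight ramp across the transition strip (the paper's weights and the elastic staggered-grid details are more involved):

```python
import numpy as np

def hybrid_blend(u_interior, u_oneway, n_trans):
    """Blend two 1D wavefields across an `n_trans`-point transition strip
    at the right edge: pure two-way solution inside, pure one-way
    (absorbing) solution at the boundary, linear weights in between.
    Assumes float arrays of equal length; schematic only."""
    w = np.ones(len(u_interior))
    w[-n_trans:] = np.linspace(1.0, 0.0, n_trans)  # 1 -> 0 across the strip
    return w * u_interior + (1.0 - w) * u_oneway
```

At each time step the interior (two-way) and boundary (one-way) solutions are both computed in the strip and blended, which suppresses the reflection that either treatment alone would leave.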
9.
Alternative non-linear dynamic analysis procedures, using real ground motion records, can be used to make probability-based seismic assessments. These procedures can be used both to obtain parameter estimates for specific probabilistic assessment criteria, such as demand and capacity factored design, and to make direct probabilistic performance assessments using numerical methods. Multiple-stripe analysis is a non-linear dynamic analysis method that can be used for performance-based assessments over a wide range of ground motion intensities and multiple performance objectives, from onset of damage through global collapse. Alternatively, the analysis effort needed for the performance assessments can be reduced by performing the structural analyses and estimating the main parameters only in the region of ground motion intensity levels of interest. In particular, single-stripe and double-stripe analysis can provide local probabilistic demand assessments using a minimal number of structural analyses (around 20 to 40). As a case study, the displacement-based seismic performance of an older reinforced concrete frame structure, which is known to have suffered shear failure in its columns during the 1994 Northridge earthquake, is evaluated. Copyright © 2008 John Wiley & Sons, Ltd.
10.
Microseismic monitoring in the oil and gas industry commonly uses migration-based methods to locate very weak microseismic events. The objective of this study is to compare the most popular migration-based methods on a synthetic dataset that simulates a strike-slip source mechanism event with a low signal-to-noise ratio recorded by surface receivers (vertical components). The results show the significance of accounting for the known source mechanism in the event detection and location procedures. Without such a correction, the ability to detect weak events is reduced. We show both numerically and theoretically that neglecting the source mechanism by using only absolute values of the amplitudes reduces noise suppression during stacking and, consequently, limits the ability to retrieve weak microseismic events. On the other hand, even a simple polarization correction to the data, used with otherwise ineffective methods, can significantly improve detections and locations. Simple stacking of the data with a polarization correction provided clear event detection and location, but even better results were obtained by combining the corrected data with methods based on semblance and cross-correlation.
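A schematic of the polarity point: diffraction stacking for one trial origin, with optional first-motion signs predicted from an assumed source mechanism. With signs, opposite-polarity arrivals add coherently; with the absolute-value fallback, the noise floor rises. Names and the fixed-shift simplification are illustrative:

```python
import numpy as np

def stack_power(traces, shifts, signs=None):
    """Diffraction-stack detection value for one trial location.
    traces: (n_receivers, n_samples); shifts: per-receiver traveltime
    shifts in samples; signs: predicted first-motion polarities (+/-1)
    from an assumed source mechanism. With signs=None the common
    absolute-value fallback is used, which also stacks noise coherently.
    Real workflows scan a grid of trial locations and origin times."""
    out = np.zeros(traces.shape[1])
    for r in range(traces.shape[0]):
        tr = np.roll(traces[r], -shifts[r])        # move-out correction
        out += signs[r] * tr if signs is not None else np.abs(tr)
    return np.max(out ** 2)
```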
11.
We present a method for studying the local stability of a solution to an inverse problem and evaluate the uncertainty in determining the true values of particular observables. The investigation is done under the assumption that only the Gaussian part of the fluctuations about the local minimum of the cost (likelihood) function is essential. Our approach is based on the spectral analysis of the Hessian operator associated with the cost function at its extremal point, and we put forward an effective iterative algorithm suitable for numerical implementation in the case of a computationally large problem.
Received: 16 May 2001 / Accepted: 22 October 2001
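A toy dense-matrix illustration of the Gaussian (Laplace) step: the inverse Hessian at the minimum plays the role of the posterior covariance, from which the uncertainty of any linearized observable follows. The paper's iterative spectral algorithm targets problems where the Hessian can only be applied, never formed; everything below is schematic:

```python
import numpy as np

def laplace_uncertainty(hessian, directions):
    """1-sigma uncertainty of observables g(m) ~ g(m*) + d.T (m - m*)
    under the Gaussian approximation: posterior covariance = inv(Hessian).
    `directions` holds one linearized observable d per row."""
    evals, evecs = np.linalg.eigh(hessian)          # spectral analysis of H
    cov = evecs @ np.diag(1.0 / evals) @ evecs.T    # inverse via eigenpairs
    return np.sqrt(np.einsum('ij,jk,ik->i', directions, cov, directions))

H = np.array([[4.0, 1.0], [1.0, 9.0]])              # toy Hessian at the minimum
print(laplace_uncertainty(H, np.eye(2)))            # 1-sigma of each parameter
```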
12.
Pseudo‐spectral method using rotated staggered grid for elastic wave propagation in 3D arbitrary anisotropic media
Grid staggering is a very effective way to reduce Nyquist errors and to suppress the non-causal ringing artefacts in pseudo-spectral solutions of first-order elastic wave equations. However, the straightforward use of a staggered-grid pseudo-spectral method is problematic for simulating wave propagation when the anisotropy level is higher than orthorhombic or when the anisotropic symmetries are not aligned with the computational grids. Inspired by the rotated staggered-grid finite-difference method, we propose a modified pseudo-spectral method for wave propagation in arbitrary anisotropic media. Compared with an existing remedy based on stiffness-matrix decomposition and a possible alternative using Lebedev grids, the rotated staggered-grid pseudo-spectral method achieves the best balance between artefact mitigation and efficiency. A 2D example on a transversely isotropic model with a tilted symmetry axis verifies its effectiveness in suppressing ringing artefacts. Two 3D examples with increasing anisotropy levels demonstrate that the rotated staggered-grid pseudo-spectral method can successfully simulate complex wavefields in such anisotropic formations.
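A 1D schematic of the staggering mechanism in the wavenumber domain: the derivative is evaluated half a cell off the input grid via an extra phase factor, which removes the null response of the plain ik operator at the Nyquist wavenumber. On the rotated staggered grid the shift is applied along cell diagonals instead; this sketch ignores all of that:

```python
import numpy as np

def ps_diff_shift(f, dx):
    """Pseudo-spectral x-derivative evaluated half a cell off the input
    grid: multiply by ik and by the phase exp(ik dx/2) in the wavenumber
    domain (1D schematic of the staggered pseudo-spectral operator)."""
    k = 2 * np.pi * np.fft.fftfreq(len(f), d=dx)
    spec = np.fft.fft(f) * (1j * k) * np.exp(1j * k * dx / 2)
    return np.real(np.fft.ifft(spec))

n = 128
dx = 2 * np.pi / n
x = np.arange(n) * dx                    # exactly periodic grid
print(np.max(np.abs(ps_diff_shift(np.sin(x), dx) - np.cos(x + dx / 2))))
# ~1e-14: spectral accuracy at the shifted (staggered) points
```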
13.
With the increasing emphasis on performance-based earthquake engineering in the engineering community, several investigations have outlined simplified approaches suitable for performance-based seismic design (PBSD). Central to most of these PBSD approaches is the use of closed-form analytical solutions to the probabilistic integral equations representing the rate of exceedance of key performance measures. Situations where such closed-form solutions are not appropriate primarily involve extrapolation outside the region in which the parameters of the closed-form solution were fitted. This study presents a critical review of the closed-form solution for the annual rate of structural collapse. The closed-form solution requires the assumptions of a lognormal collapse fragility and a power-model form of the ground motion hazard, of which the latter contributes more to the error of the closed-form solution. Via a parametric study, the key variables contributing to the error between the closed-form solution and the solution via numerical integration are illustrated. As these key variables cannot be easily measured, doubt is cast on the use of such closed-form solutions in future PBSD, especially considering how simple and efficient direct numerical integration is for obtaining the solution. Copyright © 2008 John Wiley & Sons, Ltd.
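To make the comparison concrete, a small sketch under the two stated assumptions, a power-law hazard H(im) = k0*im**(-k) and a lognormal collapse fragility with median eta and dispersion beta, for which the classical closed form is lambda_c = H(eta)*exp(k**2*beta**2/2). Parameter values are illustrative:

```python
import numpy as np
from math import erf

def phi(z):  # standard normal CDF
    return 0.5 * (1.0 + erf(z / np.sqrt(2.0)))

# Power-law hazard and lognormal fragility: the two closed-form assumptions.
k0, k = 1e-4, 2.5
eta, beta = 1.0, 0.5

closed = k0 * eta ** -k * np.exp(0.5 * k ** 2 * beta ** 2)

# Direct numerical integration of lambda_c = int P(C|im) |dH/dim| dim,
# which the study argues is simple enough to use in PBSD anyway.
im = np.linspace(1e-3, 20.0, 20001)
frag = np.array([phi(np.log(v / eta) / beta) for v in im])
dH = k0 * k * im ** (-k - 1)                 # |dH/dim| for the power law
numerical = np.trapz(frag * dH, im)
print(closed, numerical)  # both ~2.18e-4 when the power law holds everywhere
```

When the hazard deviates from a pure power law over the integration range, the two numbers separate, which is the error source the parametric study isolates.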
14.
Comprehensive snow depth data, collected using georadar and hand probing, were used for statistical analyses of snow depths inside 1 km grid cells. The sub-grid-cell spatial scale was 100 m. The statistical distribution functions were found to have varying parameters, and an attempt was made to connect these statistical parameters to different terrain variables. The results showed that both the mean and the standard deviation of snow depth were significantly related to sub-grid terrain characteristics. Linear regression models could explain up to 50% of the variation in both snow-cover parameters. Copyright © 2004 John Wiley & Sons, Ltd.
15.
Guy Drijkoningen, Geophysical Prospecting, 2016, 64(3): 543–553
When a seismic source is placed in the water at a height of less than a wavelength above the water–solid interface, a prominent S-wave arrival can be observed. It travels kinematically as if it were excited at the projection point of the source on the interface. This non-geometric S-wave has been investigated before, mainly for a free-surface configuration. However, as was shown in a field experiment, the non-geometric S-wave can also be excited in a fluid–solid configuration if the S-wave speed in the solid is less than the sound speed in the water. The amplitude of this wave decreases exponentially as the source is moved away from the interface, revealing its evanescent character in the fluid. In the solid, this particular converted mode propagates as an ordinary S-wave and can be transmitted and reflected as such. There is a specific region of horizontal slownesses in which this non-geometric wave exists, depending on the ratio of the S-wave velocity to the sound speed of water. The wave appears only for ratios smaller than one, and lower ratios result in a wider region of appearance. Owing to this property, this particular P-S converted mode can be identified and filtered from other events in the Radon domain.
16.
This paper discusses two noise-resistant inverse Q-filtering methods: an inverse Q filter that accounts for the time-frequency-domain signal-to-noise ratio, and one based on a variable stability factor. Compared with the standard stabilized inverse Q filter, both methods suppress noise and improve the resolution and signal-to-noise ratio of seismic data after inverse Q filtering. However, they rest on different principles, and their processing results differ somewhat. To identify what is essential for noise suppression in inverse Q filtering, we review the principles of both methods, compare and discuss their differences, and test them on synthetic model data and field seismic data. The tests show that the key to noise suppression in inverse Q filtering is effective control of the frequency band over which amplitudes are compensated. Algorithmically, the method that accounts for the time-frequency-domain signal-to-noise ratio can select different compensation bands according to the signal-to-noise level of the data, which makes it flexible, but it first requires computing the time-frequency-domain signal-to-noise ratio of the data. The variable-stability-factor method has a simpler algorithm: its amplitude-compensation law itself provides a degree of noise suppression and avoids computing the time-frequency-domain signal-to-noise ratio. With suitably chosen parameters, when the amplitude-compensation bands of the two methods are similar, the two noise-resistant inverse Q filters yield similar processing results.
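As an illustration of the shared ingredient, controlling the frequency band of amplitude compensation, here is a sketch of a stabilized inverse-Q gain of the common Wang type; the constant sigma2 stands in for the variable stability factor, and all values are illustrative:

```python
import numpy as np

def stabilized_gain(f, t, Q, sigma2=1e-4):
    """Stabilized inverse-Q amplitude compensation per frequency f (Hz)
    at traveltime t (s). The raw gain exp(pi*f*t/Q) explodes at high
    frequency and amplifies noise; replacing it with
    beta / (beta**2 + sigma2), beta = exp(-pi*f*t/Q), caps the gain so
    compensation is effectively restricted to a signal band. A variable
    stability factor lets sigma2 depend on t or on the local S/N."""
    beta = np.exp(-np.pi * f * t / Q)      # forward attenuation factor
    return beta / (beta ** 2 + sigma2)     # ~1/beta in band, -> 0 beyond

f = np.linspace(0.0, 150.0, 500)
for s2 in (1e-2, 1e-4):
    g = stabilized_gain(f, t=1.0, Q=50.0, sigma2=s2)
    print(s2, g.max())   # cap ~ 1/(2*sqrt(sigma2)): sigma2 sets the band
```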
17.
18.
Tree-ring-based reconstructions of paleo-hydrology have proved useful for better understanding the irregularities and extent of past climate changes and, therefore, for more effective water resources management. Despite considerable advances in the field, challenges remain that introduce significant uncertainties into paleo-reconstructions. This study outlines these challenges and addresses them by developing two themes: (1) the effect of temporal scaling on the strength of the relationship between hydrologic variables (streamflow in this study) and tree growth rates, and (2) the reconstruction uncertainty of streamflow due to dissimilarity or inconsistency in the pool of tree-ring chronologies (the predictors in the reconstruction) in a basin. Based on the insight gained, a methodology is developed that moves beyond relying only on annual hydrology–growth correlations and utilizes additional information embedded in the annual time series at longer time scales (e.g., multi-year to decadal). This methodology also generates an ensemble of streamflow reconstructions to formally account for uncertainty in the pool of chronology sites. The major headwater tributaries of the Saskatchewan River Basin, the main source of surface water in the Canadian Prairie Provinces, are used as the case study. It is shown that the developed methodology explains the variance of streamflows to a larger extent than the conventional approach and better preserves the persistence and variability of streamflows across time scales (Hurst-type behaviour). The resulting ensemble of paleo-hydrologic time series can more credibly pinpoint the timing and extent of past dry and wet periods and provides a dynamic range of uncertainty in the reconstruction. This range varies with time over the course of the reconstruction period, indicating that the utility of tree-ring chronologies for paleo-reconstruction differs between time periods over the past several centuries in the history of the region. The proposed ensemble approach provides a credible range of multiple-century-long water availability scenarios that can be used for vulnerability assessment of existing water infrastructure and for improving water resources management. Copyright © 2015 John Wiley & Sons, Ltd.
19.
We study the appraisal problem for the joint inversion of seismic and controlled-source electromagnetic (CSEM) data and utilize rock-physics models to integrate these two disparate data sets. The appraisal problem is solved by adopting a Bayesian model, and we incorporate four representative sources of uncertainty: uncertainties in (1) seismic wave velocity, (2) electric conductivity, (3) seismic data, and (4) CSEM data. The uncertainties in porosity and water saturation are quantified by posterior random sampling in the model space of porosity and water saturation for a marine one-dimensional structure. We study the relative contributions of the four individual sources of uncertainty by performing several statistical experiments. The uncertainties in seismic wave velocity and electric conductivity play a more significant role in the variation of posterior uncertainty than do the seismic and CSEM data noise. The numerical simulations also show that the uncertainty in porosity is most affected by the uncertainty in seismic wave velocity and that the uncertainty in water saturation is most influenced by the uncertainty in electric conductivity. The uncertainty analysis framework presented in this study can be used to effectively reduce the uncertainty of the porosity and water saturation derived from the integration of seismic and CSEM data.
20.
Iunio Iervolino, Earthquake Engineering & Structural Dynamics, 2017, 46(10): 1711–1723
State-of-the-art approaches to the probabilistic assessment of seismic structural reliability are based on simulating structural behavior via nonlinear dynamic analysis of computer models. Simulations are carried out on samples of ground motions supposedly drawn from specific populations of signals virtually recorded at the site of interest. This serves to produce samples of structural response from which to evaluate the failure rate, which in turn allows computation of the failure risk (probability) in a time interval of interest. This procedure implies that estimation uncertainty affects the probabilistic results; it is seldom quantified in risk analyses, although it may be relevant. This short paper discusses some basic issues and some simple statistical tools that can aid the analyst in assessing the impact of sample variability on fragility functions and the resulting seismic structural risk. On the statistical inference side, the addressed strategies are based on consolidated results such as the well-known delta method and on some resampling plans belonging to the bootstrap family. On the structural side, they rely on assumptions and methods typical of performance-based earthquake engineering applications. Copyright © 2017 John Wiley & Sons, Ltd.
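A minimal sketch of the bootstrap ingredient, assuming a moment-fitted lognormal fragility and synthetic collapse-capacity data; names and values are illustrative, not taken from the paper:

```python
import numpy as np
from math import erf

rng = np.random.default_rng(4)
n = 30
caps = rng.lognormal(mean=0.0, sigma=0.4, size=n)  # stand-in collapse IMs

def fragility_at(im, sample):
    """P(collapse | IM = im) for a lognormal fragility fitted by moments."""
    mu, beta = np.mean(np.log(sample)), np.std(np.log(sample))
    return 0.5 * (1 + erf((np.log(im) - mu) / (beta * np.sqrt(2))))

# Nonparametric bootstrap: resample the n analyses with replacement and
# refit, giving the sampling variability of the fragility estimate that
# should accompany the risk number itself.
boot = [fragility_at(1.0, rng.choice(caps, size=n, replace=True))
        for _ in range(2000)]
lo, hi = np.percentile(boot, [5, 95])
print(fragility_at(1.0, caps), (lo, hi))   # point estimate and 90% band
```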