Similar Articles
20 similar articles found (search time: 24 ms).
1.
A new uncertainty estimation method, which we recently introduced in the literature, allows for the comprehensive search of model posterior space while maintaining a high degree of computational efficiency. The method starts with an optimal solution to an inverse problem, performs a parameter reduction step and then searches the resulting feasible model space using prior parameter bounds and sparse‐grid polynomial interpolation methods. After misfit rejection, the resulting model ensemble represents the equivalent model space and can be used to estimate inverse solution uncertainty. While parameter reduction introduces a posterior bias, it also allows for scaling this method to higher dimensional problems. The use of Smolyak sparse‐grid interpolation also dramatically increases sampling efficiency for large stochastic dimensions. Unlike Bayesian inference, which treats the posterior sampling problem as a random process, this geometric sampling method exploits the structure and smoothness in posterior distributions by solving a polynomial interpolation problem and then resampling from the resulting interpolant. The two questions we address in this paper are 1) whether our results are generally compatible with established Bayesian inference methods and 2) how our method compares in terms of posterior sampling efficiency. We accomplish this by comparing our method, for two electromagnetic problems from the literature, with two commonly used Bayesian sampling schemes: Gibbs sampling and Metropolis‐Hastings. While the sparse‐grid and Bayesian samplers produce compatible results in both examples, the sparse‐grid approach has a much higher sampling efficiency, requiring an order of magnitude fewer samples. This suggests that sparse‐grid methods can significantly improve the tractability of inference solutions for problems in high dimensions or with more costly forward physics.
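For context, the Metropolis‐Hastings scheme used as a baseline above is a standard random‐walk sampler. A minimal sketch follows (the log‐posterior, proposal scale, and toy target are illustrative placeholders, not the authors' electromagnetic setup):

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_samples, step=0.1, seed=0):
    """Random-walk Metropolis-Hastings with a symmetric Gaussian proposal."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    samples = np.empty((n_samples, x.size))
    n_accept = 0
    for i in range(n_samples):
        x_prop = x + step * rng.standard_normal(x.size)   # propose a move
        lp_prop = log_post(x_prop)
        if np.log(rng.uniform()) < lp_prop - lp:          # accept/reject
            x, lp = x_prop, lp_prop
            n_accept += 1
        samples[i] = x                                    # duplicate on reject
    return samples, n_accept / n_samples

# Toy usage: sample a standard 2D Gaussian "posterior".
samples, acc_rate = metropolis_hastings(lambda m: -0.5 * np.sum(m**2),
                                        [1.0, -1.0], 5000)
```

The efficiency comparison in the paper is, roughly, how many such correlated samples are needed to characterize the posterior versus the number of sparse‐grid evaluations needed for the misfit‐rejected ensemble.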

2.
Gaussian beam depth migration overcomes the single‐wavefront limitation of most implementations of Kirchhoff migration and provides a cost‐effective alternative to full‐wavefield imaging methods such as reverse‐time migration. Common‐offset beam migration was originally derived to exploit symmetries available in marine towed‐streamer acquisition. However, sparse acquisition geometries, such as cross‐spread and ocean bottom, do not easily accommodate requirements for common‐offset, common‐azimuth (or common‐offset‐vector) migration. Seismic data interpolation or regularization can be used to mitigate this problem by forming well‐populated common‐offset‐vector volumes. This procedure is computationally intensive and can, in the case of converted‐wave imaging with sparse receivers, compromise the final image resolution. As an alternative, we introduce a common‐shot (or common‐receiver) beam migration implementation, which allows migration of datasets rich in azimuth, without any regularization pre‐processing required. Using analytic, synthetic, and field data examples, we demonstrate that converted‐wave imaging of ocean‐bottom‐node data benefits from this formulation, particularly in the shallow subsurface where regularization for common‐offset‐vector migration is both necessary and difficult.

3.
We extend the frequency‐ and angle‐dependent poroelastic reflectivity to systematically analyse the characteristics of seismic waveforms for highly attenuating reservoir rocks. It is found that mesoscopic fluid pressure diffusion can significantly affect the root‐mean‐square amplitude, frequency content, and phase signatures of seismic waveforms. We loosely group the seismic amplitude‐versus‐angle and ‐frequency characteristics into three classes under different geological circumstances: (i) for Class‐I amplitude‐versus‐angle and ‐frequency, which corresponds to well‐compacted reservoirs having a Class‐I amplitude‐versus‐offset characteristic, the root‐mean‐square amplitude at near offset is boosted at high frequency, whereas seismic energy at far offset is concentrated at low frequency; (ii) for Class‐II amplitude‐versus‐angle and ‐frequency, which corresponds to moderately compacted reservoirs having a Class‐II amplitude‐versus‐offset characteristic, the weak seismic amplitude might exhibit a phase‐reversal trend, distorting both the seismic waveform and the energy distribution; (iii) for Class‐III amplitude‐versus‐angle and ‐frequency, which corresponds to unconsolidated reservoirs having a Class‐III amplitude‐versus‐offset characteristic, the mesoscopic fluid flow does not exert an appreciable effect on the seismic waveforms, but there is a non‐negligible amplitude decay compared with the elastic seismic responses based on the Zoeppritz equation.
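The amplitude‐versus‐offset classes invoked above are conventionally illustrated with linearizations of the Zoeppritz equations. Below is a sketch using the two‐term Shuey approximation, a purely elastic stand‐in for the poroelastic, frequency‐dependent reflectivity analysed in the paper; the interface properties are illustrative:

```python
import numpy as np

def shuey_reflectivity(vp1, vs1, rho1, vp2, vs2, rho2, theta_deg):
    """Two-term Shuey approximation R(theta) ~ A + B*sin^2(theta),
    with A the normal-incidence intercept and B the AVO gradient."""
    theta = np.radians(np.asarray(theta_deg, dtype=float))
    vp, vs, rho = 0.5*(vp1+vp2), 0.5*(vs1+vs2), 0.5*(rho1+rho2)  # interface averages
    dvp, dvs, drho = vp2 - vp1, vs2 - vs1, rho2 - rho1           # contrasts
    A = 0.5 * (dvp/vp + drho/rho)
    B = 0.5 * dvp/vp - 2.0 * (vs/vp)**2 * (drho/rho + 2.0*dvs/vs)
    return A + B * np.sin(theta)**2

# Class-III-style interface: shale over a soft gas sand (illustrative values,
# vp/vs in m/s, rho in g/cc): both intercept and gradient come out negative.
print(shuey_reflectivity(3000, 1400, 2.4, 2600, 1700, 2.0, np.arange(0, 41, 10)))
```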

4.
State‐of‐the‐art 3D seismic acquisition geometries have poor sampling along at least one dimension. This results in coherent migration noise that always contaminates pre‐stack migrated data, including high‐fold surveys, unless interpolation is applied prior to migration. We present a method for effective noise suppression in migrated gathers that competes with data interpolation before pre‐stack migration. The proposed technique is based on a dip decomposition of common‐offset volumes and computation of a semblance‐type measure along offset for all constant‐dip gathers. The processing thus engages six dimensions: offset, inline, crossline, depth, inline dip, and crossline dip. To reduce computational costs, we apply a two‐pass (4D in each pass) noise suppression: inline processing and then crossline processing (or vice versa). Synthetic and real‐data examples verify that the technique preserves signal amplitudes, including amplitude‐versus‐offset dependence, and that faults are not smeared.
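The semblance‐type measure at the heart of the method can be sketched with the textbook semblance definition, applied along the offset axis of a constant‐dip gather (array shapes assumed; the authors' exact measure may differ):

```python
import numpy as np

def semblance(gather, window=11):
    """Semblance along the trace axis: stacked energy over total energy,
    smoothed in a sliding time window. gather: (n_offsets, n_samples).
    Returns values in [0, 1]; 1 means perfectly coherent across offset."""
    n = gather.shape[0]
    num = np.sum(gather, axis=0) ** 2            # energy of the stack
    den = n * np.sum(gather ** 2, axis=0)        # fold times total energy
    kernel = np.ones(window)
    num = np.convolve(num, kernel, mode="same")  # temporal smoothing
    den = np.convolve(den, kernel, mode="same")
    return num / (den + 1e-12)
```

Samples whose semblance across offset falls below a threshold would then be treated as migration noise and attenuated, one constant‐dip gather at a time.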

5.
For 3‐D shallow‐water seismic surveys offshore Abu Dhabi, imaging the target reflectors requires high resolution. Characterization and monitoring of hydrocarbon reservoirs by seismic amplitude‐versus‐offset techniques demands high pre‐stack amplitude fidelity. In this region, however, it was still not clear how the survey parameters should be chosen to satisfy the required data quality. To answer this question, we applied the focal‐beam method to survey evaluation and design. This subsurface‐ and target‐oriented approach enables quantitative analysis of attributes such as the best achievable resolution and pre‐stack amplitude fidelity at a fixed grid point in the subsurface for a given acquisition geometry at the surface. The method offers an efficient way to optimize the acquisition geometry for maximum resolution and minimum amplitude‐versus‐offset imprint. We applied it to several acquisition geometries in order to understand the effects of survey parameters such as the four spatial sampling intervals and the apertures of the template geometry. The results led to a good understanding of the relationship between the survey parameters and the resulting data quality, and to identification of suitable survey parameters for reflection imaging and amplitude‐versus‐offset applications.

6.
Scattering theory, a form of perturbation theory, is a framework from within which time‐lapse seismic reflection methods can be derived and understood. It leads to expressions relating baseline and monitoring data and Earth properties, focusing on differences between these quantities as it does so. The baseline medium is, in the language of scattering theory, the reference medium and the monitoring medium is the perturbed medium. The general scattering relationship between monitoring data, baseline data, and time‐lapse Earth property changes is likely too complex to be tractable. However, there are special cases that can be analysed for physical insight. Two of these cases coincide with recognizable areas of applied reflection seismology: amplitude versus offset modelling/inversion, and imaging. The main result of this paper is a demonstration that time‐lapse difference amplitude versus offset modelling and time‐lapse difference data imaging emerge from a single theoretical framework. The time‐lapse amplitude versus offset case is considered first. We constrain the general time‐lapse scattering problem to correspond with a single immobile interface that separates a static overburden from a target medium whose properties undergo time‐lapse changes. The scattering solutions contain difference‐amplitude versus offset expressions that (although presently acoustic) resemble the expressions of Landrø (2001). In addition, however, they contain non‐linear corrective terms whose importance becomes significant as the contrasts across the interface grow. The difference‐amplitude versus offset case is exemplified with two‐parameter acoustic (bulk modulus and density) and anacoustic (P‐wave velocity and quality factor Q) examples. The time‐lapse difference data imaging case is considered next. Instead of constraining the structure of the Earth volume as in the amplitude versus offset case, we make a small‐contrast assumption, namely that the time‐lapse variations are small enough that we may disregard contributions from beyond first order. An initial analysis, in which the case of a single mobile boundary is examined in 1D, justifies the use of a particular imaging algorithm applied directly to difference‐data shot records. This algorithm, a least‐squares, shot‐profile imaging method, is additionally capable of supporting a range of regularization techniques. Synthetic examples verify the applicability of linearized imaging methods for difference‐image formation under ideal conditions.

7.
In common‐reflection‐surface imaging the reflection arrival time field is parameterized by operators that are of higher dimension or order than in conventional methods. Using the common‐reflection‐surface approach locally in the unmigrated prestack data domain opens a potential for trace regularization and interpolation. In most data interpolation methods based on local coherency estimation, a single operator is designed for a target sample and the output amplitude is defined as a weighted average along the operator. This approach may fail in the presence of interfering events or strong amplitude and phase variations. In this paper we introduce an alternative scheme in which there is no need for an operator to be defined at the target sample itself. Instead, the amplitude at a target sample is constructed from multiple operators estimated at different positions. In this case one operator may contribute to the construction of several target samples. Vice versa, a target sample might receive contributions from different operators. Operators are determined on a grid which can be sparser than the output grid, which allows the computational costs to be decreased dramatically. In addition, the use of multiple operators for a single target sample stabilizes the interpolation results and implicitly allows several contributions in the case of interfering events. Due to the considerable computational expense, common‐reflection‐surface interpolation is limited to working in subsets of the prestack data. We present the general workflow of a common‐reflection‐surface‐based regularization/interpolation for 3D data volumes. This workflow has been applied to an OBC common‐receiver volume and binned common‐offset subsets of a 3D marine data set. The impact of a common‐reflection‐surface regularization is demonstrated by means of a subsequent time migration. In comparison to the time migrations of the original and DMO‐interpolated data, the results show particular improvements in the continuity of reflection events. This gain is confirmed by automatic picking of a horizon in the stacked time migrations.

8.
Seismic amplitudes contain important information that can be related to fluid saturation. The amplitude‐versus‐offset analysis of seismic data based on Gassmann's theory and the approximation of the Zoeppritz equations has played a central role in reservoir characterization. However, this standard technique faces a long‐standing problem: its inability to distinguish between partial gas saturation and “fizz‐water” with little gas saturation. In this paper, we studied seismic dispersion and attenuation in partially saturated poroelastic media by using a frequency‐dependent rock physics model, through which the frequency‐dependent amplitude‐versus‐offset response is calculated as a function of porosity and water saturation. We propose cross‐plotting two attributes derived from the frequency‐dependent amplitude‐versus‐offset response to differentiate partial gas saturation from “fizz‐water” saturation. One attribute is a measure of the “low‐frequency”, or Gassmann, reflectivity, whereas the other is a measure of the “frequency dependence” of the reflectivity. This is in contrast to standard amplitude‐versus‐offset attributes, where there is typically no such separation. A pragmatic frequency‐dependent amplitude‐versus‐offset inversion for rock and fluid properties is also established based on Bayes' theorem. A synthetic study is performed to explore the potential of the method to estimate gas saturation and porosity variations. An advantage of our work is that the method is in principle predictive, opening the way to further testing and calibration with field data. We believe that such work should guide and augment more theoretical studies of frequency‐dependent amplitude‐versus‐offset analysis.

9.
A comprehensive controlled source electromagnetic (CSEM) modelling study, based on complex resistivity structures in a deep marine geological setting, is conducted. The study demonstrates the effects of acquisition parameters and multi‐layered resistors on CSEM responses. Three‐dimensional (3D) finite difference time domain (FDTD) grid‐modelling is used for CSEM sensitivity analysis. Interpolation of normalized CSEM responses provides attributes representing the relative sensitivity of the modelled structures. Modelling results show that a fine grid (1 × 1 km receiver spacing) provides good correlations between CSEM responses and the modelled structures, irrespective of source orientation. The resolution of CSEM attributes decreases for receiver spacings >2 × 2 km when using only in‐line data. Broadside data in the grid geometry increase data density by approximately 100–200% by filling in in‐line responses, and improve the resolution of CSEM attributes. Optimized source orientation (i.e., oblique to the strike of an elongated resistor) improves the structural definition of the CSEM anomalies for coarse‐grid geometries (receiver spacing ≥3 × 3 km). The study also shows that a multi‐resistor anomaly is not simply the summation of, but a cumulative response with mutual interference between, its constituent resistors. The combined response of the constituent resistors is approximately 50% higher than the cumulative response of the multi‐resistor for 0.5 Hz at 4000 m offset. A gradual inverse variation of offset and frequency allows differentiation of CSEM anomalies for multi‐layered resistors. Similar frequency–offset variations for laterally persistent high‐resistivity facies show visual continuity with varying geometric expressions. 3D grid‐modelling is an effective and adequate tool for CSEM survey design and sensitivity analysis.

10.
Most groundwater models simulate stream‐aquifer interactions with a head‐dependent flux boundary condition based on a river conductance (CRIV). CRIV is usually calibrated with other parameters by history matching. However, the inverse problem of groundwater models is often ill‐posed and individual model parameters are likely to be poorly constrained. Ill‐posedness can be addressed by Tikhonov regularization with prior knowledge of parameter values. The difficulty with a lumped parameter like CRIV, which cannot be measured in the field, is to find suitable initial and regularization values. Several formulations have been proposed for the estimation of CRIV from physical parameters. However, these methods are either too simple to provide a reliable estimate of CRIV, or too complex to be easily implemented by groundwater modelers. This paper addresses the issue with a flexible and operational tool based on a 2D numerical model in a local vertical cross section, where the river conductance is computed from selected geometric and hydrodynamic parameters. Contrary to other approaches, the grid size of the regional model and the anisotropy of the aquifer hydraulic conductivity are also taken into account. A global sensitivity analysis indicates the strong sensitivity of CRIV to these parameters. This enhancement of the prior estimation of CRIV is a step forward for the calibration and uncertainty analysis of surface–subsurface models. It is especially useful for modeling objectives that require CRIV to be well known, such as conjunctive surface water–groundwater use.
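For orientation, the simple riverbed estimate that such formulations refine is the textbook conductance of, e.g., MODFLOW's RIV package, which ignores grid size and anisotropy; a sketch with illustrative values:

```python
def river_conductance(k_riverbed, reach_length, river_width, bed_thickness):
    """Textbook estimate CRIV = K*L*W/M: riverbed hydraulic conductivity K [m/d],
    reach length L [m], river width W [m], riverbed thickness M [m] -> [m^2/d]."""
    return k_riverbed * reach_length * river_width / bed_thickness

# Example: K = 0.1 m/d over a 100 m reach of a 15 m wide river, 1 m thick bed.
print(river_conductance(0.1, 100.0, 15.0, 1.0))   # 150 m^2/d
```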

11.
Extracting true amplitude‐versus‐angle common image gathers is one of the key objectives in seismic processing and imaging. This is achievable to different degrees using different migration techniques (e.g., Kirchhoff, wavefield extrapolation, and reverse time migration) and is a common tool in exploration, but the costs can vary depending on the selected migration algorithm and the desired accuracy. Here, we investigate the possibility of combining the local‐shift imaging condition, specifically the time‐shift extended imaging condition, for angle gathers with a Kirchhoff migration. The aims are not to replace the more accurate full‐wavefield migration but to offer a cheaper alternative where ray‐based methods are applicable; to use Kirchhoff time‐lag common image gathers to help bridge the gap between traditional offset common image gathers and reverse time migration angle gathers; and, given the higher level of summation inside the extended imaging migration, to understand the impact on the amplitude‐versus‐angle response. The implementation of the time‐shift imaging condition is discussed along with its computational cost, and results from four datasets are presented. The four example datasets (two synthetic, one land, and one marine) were migrated using a Kirchhoff offset method, a Kirchhoff time‐shift method, and, for comparison, a reverse time migration algorithm. The results show that the time‐shift imaging condition at zero time lag is equivalent to the full offset stack, as expected. The output gathers are cleaner and more consistent in the time‐lag‐derived angle gathers, and the conversion from time lag to angle can be treated as a post‐processing step. The main difference arises in the amplitude‐versus‐offset/angle distribution, where the responses differ, dramatically so for the land data. The results from the synthetics and real data show that a Kirchhoff migration with an extended imaging condition is capable of generating subsurface angle gathers. The same disadvantages of a ray‐based approach relative to a wave‐equation angle‐gather solution apply when using the extended imaging condition. Nevertheless, this approach allows one to explore the relationship between the velocity model and the focusing of reflected energy, to use the Radon transform to remove noise and multiples, and to generate consistent products from a ray‐based migration and a full wave‐equation migration, which can then be interchanged depending on the process under study.
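The time‐shift extended imaging condition itself is compact. A minimal wave‐equation‐style sketch (after Sava and Fomel's formulation; precomputed wavefield arrays and a plain loop over lags are simplifications):

```python
import numpy as np

def time_shift_image(src_wf, rec_wf, lags):
    """Extended imaging condition I(x, tau) = sum_t S(x, t - tau) * R(x, t + tau).
    src_wf, rec_wf: wavefields of shape (n_t, n_z, n_x); lags: integer sample
    shifts. The tau = 0 slice reproduces the conventional zero-lag image."""
    n_t = src_wf.shape[0]
    image = np.zeros((len(lags),) + src_wf.shape[1:])
    for i, tau in enumerate(lags):
        t = np.arange(abs(tau), n_t - abs(tau))   # keep t - tau and t + tau in range
        image[i] = np.sum(src_wf[t - tau] * rec_wf[t + tau], axis=0)
    return image
```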

12.
Reverse‐time migration has become an industry standard for imaging in complex geological areas. We present an approach for increasing its imaging resolution by employing time‐shift gathers. The method consists of two steps: (i) migrating seismic data with the extended imaging condition to get time‐shift gathers and (ii) accumulating the information from time‐shift gathers after they are transformed to zero‐lag time‐shift by a post‐stack depth migration on a finer grid. The final image is generated on a grid denser than that of the original image, thus improving the resolution of the migrated images. Our method is based on the observation that non‐zero‐lag time‐shift images recorded on the regular computing grid contain the information of the zero‐lag time‐shift image on a denser grid, and such information can be continued to zero‐lag time‐shift and refocused at the correct locations on the denser grid. The extra computational cost of the proposed method amounts to the computational cost of zero‐offset migration and is almost negligible compared with the cost of pre‐stack shot‐record reverse‐time migration. Numerical tests on synthetic models demonstrate that the method can effectively improve reverse‐time migration resolution. It can also be regarded as an approach to improve the efficiency of reverse‐time migration by performing wavefield extrapolation on a coarse grid and by generating the final image on the desired fine grid.

13.
Three‐dimensional seismic survey design should provide an acquisition geometry that enables imaging and amplitude‐versus‐offset applications of target reflectors with sufficient data quality under given economical and operational constraints. However, in land or shallow‐water environments, surface waves are often dominant in the seismic data. The effectiveness of surface‐wave separation or attenuation significantly affects the quality of the final result. Therefore, the need for surface‐wave attenuation imposes additional constraints on the acquisition geometry. Recently, we proposed a method for surface‐wave attenuation that can better deal with aliased seismic data than classic methods such as slowness/velocity‐based filtering. Here, we investigate how surface‐wave attenuation affects the selection of survey parameters and the resulting data quality. To quantify the latter, we introduce a measure that represents the estimated signal‐to‐noise ratio between the desired subsurface signal and the surface waves that are deemed to be noise. In a case study, we applied surface‐wave attenuation and signal‐to‐noise ratio estimation to several data sets with different survey parameters. The spatial sampling intervals of the basic subset are the survey parameters that affect the performance of surface‐wave attenuation methods the most. Finer spatial sampling will reduce aliasing and make surface‐wave attenuation easier, resulting in better data quality until no further improvement is obtained. We observed this behaviour as a main trend that levels off with increasingly dense sampling. With our method, this trend curve lies at a considerably higher signal‐to‐noise ratio than with a classic filtering method. This means that we can obtain much better data quality for a given survey effort, or the same data quality as with a conventional method at a lower cost.
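The data‐quality measure described above reduces, in its simplest form, to an energy ratio between the two separated components; a generic sketch (not necessarily the exact estimator the authors introduce):

```python
import numpy as np

def snr_db(signal_estimate, noise_estimate):
    """Signal-to-noise ratio in dB from separated components of equal shape:
    the reflection signal versus the surface waves deemed to be noise."""
    e_signal = np.sum(np.asarray(signal_estimate, dtype=float) ** 2)
    e_noise = np.sum(np.asarray(noise_estimate, dtype=float) ** 2)
    return 10.0 * np.log10(e_signal / e_noise)
```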

14.
We propose new implicit staggered‐grid finite‐difference schemes with optimal coefficients, based on the sampling approximation method, to improve the accuracy of numerical solutions for seismic modelling. We first derive the optimized implicit staggered‐grid finite‐difference coefficients of arbitrary even‐order accuracy for the first‐order spatial derivatives using plane‐wave theory and the direct sampling approximation method. The sampling‐approximation coefficients, which widen the range of wavenumbers over which great accuracy is maintained, are then used to solve the first‐order spatial derivatives. By comparing the numerical dispersion of implicit staggered‐grid finite‐difference schemes based on sampling approximation, Taylor series expansion, and least squares, we find that the optimal scheme based on sampling approximation achieves greater precision than that based on Taylor series expansion over a wider range of wavenumbers, and has accuracy similar to that based on least squares. Finally, we apply the implicit staggered‐grid finite difference based on sampling approximation to numerical modelling. The modelling results demonstrate that the new optimal method can efficiently suppress numerical dispersion and leads to greater accuracy than the implicit staggered‐grid finite difference based on Taylor series expansion. In addition, the results indicate that the computational cost of the sampling‐approximation scheme is almost the same as that of the Taylor‐series scheme.
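To make the "least squares" comparison point concrete, the sketch below fits explicit staggered‐grid coefficients by least squares over a wavenumber band (explicit rather than implicit for brevity; it starts from the same plane‐wave identity the paper uses):

```python
import numpy as np

def ls_staggered_coeffs(half_order, beta_max=2.0, n_pts=200):
    """Least-squares staggered-grid coefficients for d/dx. For a plane wave,
    (1/h) * sum_m c_m * [f(x + (m-1/2)h) - f(x - (m-1/2)h)] has the spectral
    response sum_m 2*c_m*sin((m - 1/2)*beta), beta = k*h. Here that response
    is fitted to beta over (0, beta_max] instead of matched at beta -> 0
    (the Taylor-series construction)."""
    beta = np.linspace(1e-3, beta_max, n_pts)
    m = np.arange(1, half_order + 1)
    A = 2.0 * np.sin(np.outer(beta, m - 0.5))      # (n_pts, half_order)
    c, *_ = np.linalg.lstsq(A, beta, rcond=None)
    return c

# half_order=4 (8th order): compare with the Taylor values 1.19629, -0.07975, ...
print(ls_staggered_coeffs(4))
```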

15.
We propose an improvement of the overland‐flow parameterization in a distributed hydrological model which uses a constant horizontal grid resolution and employs the kinematic wave approximation for both hillslope and river channel flow. The standard parameterization lacks any channel flow characteristics for rivers, which results in reduced river flow velocities for streams narrower than the horizontal grid resolution. Moreover, the surface areas through which these wider model rivers may exchange water with the subsurface are larger than those of the real river channels, potentially leading to unrealistic vertical flows. We propose an approximation of the subscale channel flow by scaling Manning's roughness in the kinematic wave formulation via a relationship between river width and grid cell size, following a simplified version of the Barré de Saint‐Venant equations (Manning–Strickler equations). The overly large exchange areas between model rivers and the subsurface are compensated for by a grid‐resolution‐dependent scaling of the infiltration/exfiltration rate across river beds. We test both scaling approaches in the integrated hydrological model ParFlow. An empirical relation is used for estimating the true river width from the mean annual discharge. Our simulations show that the scaling of the roughness coefficient and the hydraulic conductivity effectively corrects overland flow velocities calculated on the coarse grid, leading to a better representation of flood waves in the river channels.
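One way to derive such a width‐based roughness scaling (our reconstruction under a discharge‐conservation assumption, not necessarily the paper's exact relationship): spreading channel flow of width w over a cell of size dx dilutes the flow depth by w/dx, and keeping the Manning–Strickler discharge unchanged then requires a lower effective roughness:

```python
def scaled_manning_n(n_channel, river_width, cell_size):
    """Effective Manning roughness for a subgrid river (assumed derivation):
    conserving discharge Q = (1/n) * w * h**(5/3) * sqrt(S) when the depth is
    diluted to h*w/dx over the cell width dx gives n_eff = n * (w/dx)**(2/3),
    i.e. lower roughness and hence faster model flow for narrow rivers."""
    return n_channel * (river_width / cell_size) ** (2.0 / 3.0)

# A 20 m wide river on a 1 km grid: n = 0.035 drops to about 0.0026.
print(scaled_manning_n(0.035, 20.0, 1000.0))
```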

16.
Incremental dynamic analysis (IDA) is presented as a powerful tool to evaluate the variability in the seismic demand and capacity of non‐deterministic structural models, building upon existing methodologies of Monte Carlo simulation and approximate moment estimation. A nine‐story steel moment‐resisting frame is used as a testbed, employing parameterized moment‐rotation relationships with non‐deterministic quadrilinear backbones for the beam plastic hinges. The uncertain properties of the backbones include the yield moment, the post‐yield hardening ratio, the end‐of‐hardening rotation, the slope of the descending branch, the residual moment capacity and the ultimate rotation reached. IDA is employed to accurately assess the seismic performance of the model for any combination of the parameters by performing multiple nonlinear time‐history analyses for a suite of ground motion records. Sensitivity analyses at both the IDA and the static pushover level reveal the yield moment and the two rotational‐ductility parameters to be the most influential for the frame behavior. To propagate the parametric uncertainty to the actual seismic performance we employ (a) Monte Carlo simulation with Latin hypercube sampling, (b) point‐estimate and (c) first‐order second‐moment techniques, thus offering competing methods that represent different compromises between speed and accuracy. The final results provide firm ground for challenging current assumptions in seismic guidelines on using a median‐parameter model to estimate the median seismic performance and employing the well‐known square‐root‐sum‐of‐squares rule to combine aleatory randomness and epistemic uncertainty. Copyright © 2009 John Wiley & Sons, Ltd.
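Latin hypercube sampling, one of the propagation techniques listed above, stratifies each parameter's range so that even small Monte Carlo samples cover it evenly. A minimal sketch (the parameter bounds are illustrative, not the paper's backbone values):

```python
import numpy as np

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin hypercube sample: one point per equal-probability stratum in each
    dimension, strata randomly paired across dimensions. bounds: [(lo, hi), ...]."""
    rng = np.random.default_rng(seed)
    # argsort of uniforms -> an independent random permutation per column
    ranks = np.argsort(rng.uniform(size=(n_samples, len(bounds))), axis=0)
    u = (ranks + rng.uniform(size=ranks.shape)) / n_samples  # stratified U(0,1)
    lo, hi = np.array(bounds, dtype=float).T
    return lo + u * (hi - lo)

# E.g. normalized yield moment, hardening ratio, ultimate rotation (illustrative).
x = latin_hypercube(30, [(0.8, 1.2), (0.0, 0.1), (0.04, 0.12)])
```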

17.
The implementation of Monte Carlo simulations (MCSs) for the propagation of uncertainty in real-world seawater intrusion (SWI) numerical models often becomes computationally prohibitive due to the large number of deterministic solves needed to achieve an acceptable level of accuracy. Previous studies have mostly relied on parallelization and grid computing to decrease the computational time of MCSs. However, another approach, which has received less attention in the literature, is to decrease the number of deterministic simulations by using more efficient sampling strategies. Sampling efficiency is a measure of the optimality of a sampling strategy: a more efficient strategy requires fewer simulations and less computational time to reach a given level of accuracy, and its efficiency is highly related to its space-filling characteristics. This paper illustrates that the use of optimized Latin hypercube sampling (OLHS) strategies, instead of the widely employed simple random sampling (SRS) and Latin hypercube sampling (LHS) strategies, can significantly improve sampling efficiency and hence decrease the simulation time of MCSs. Nine OLHS strategies are evaluated: improved Latin hypercube sampling (IHS); optimum Latin hypercube (OLH) sampling; genetic optimum Latin hypercube (GOLH) sampling; three strategies based on the enhanced stochastic evolutionary (ESE) algorithm, namely φp-ESE, which employs the φp space-filling criterion, CLD-ESE, which utilizes the centered L2-discrepancy (CLD) criterion, and SLD-ESE, which uses the star L2-discrepancy (SLD) criterion; and three strategies based on the simulated annealing (SA) algorithm, namely φp-SA, CLD-SA, and SLD-SA, which employ the φp, CLD, and SLD criteria, respectively. The study applies SRS, LHS, and the nine OLHS strategies to MCSs of two synthetic test cases of SWI: the Henry problem and a two-dimensional radial representation of SWI in a circular island. The comparison demonstrates that the CLD-ESE strategy is the most efficient among the evaluated strategies. This paper also demonstrates how the space-filling characteristics of different OLHS designs change with variations in the input arguments of their optimization algorithms.
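The φp space-filling criterion mentioned above (due to Morris and Mitchell) scores a design by its pairwise distances, heavily weighting the closest pair; lower is better. A sketch:

```python
import numpy as np
from scipy.spatial.distance import pdist

def phi_p(design, p=50):
    """Morris-Mitchell criterion phi_p = (sum_{i<j} d_ij**-p)**(1/p) over all
    pairwise Euclidean distances; for large p it approaches 1/min(d_ij),
    so minimizing phi_p pushes the closest pair of points apart."""
    d = pdist(design)
    return float(np.sum(d ** (-float(p))) ** (1.0 / p))

# Score a plain random design; an OLHS optimizer (e.g. ESE or simulated
# annealing, as in the paper) would search for designs that drive this down.
rng = np.random.default_rng(1)
print(phi_p(rng.uniform(size=(20, 2))))
```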

18.
The inverse problem of parameter structure identification in a distributed parameter system remains challenging. Identifying a more complex parameter structure requires more data. There is also the problem of over-parameterization. In this study, we propose a modified Tabu search for parameter structure identification. We embed an adjoint state procedure in the search process to improve the efficiency of the Tabu search. We use Voronoi tessellation for automatic parameterization to reduce the dimension of the distributed parameter. Additionally, a coarse-fine grid technique is applied to further improve the effectiveness and efficiency of the proposed methodology. To avoid over-parameterization, at each level of parameter complexity we calculate the residual error for parameter fitting, the parameter uncertainty error and a modified Akaike Information Criterion. To demonstrate the proposed methodology, we conduct numerical experiments with synthetic data that simulate both discrete hydraulic conductivity zones and a continuous hydraulic conductivity distribution. Our results indicate that the Tabu search allied with the adjoint state method significantly improves computational efficiency and effectiveness in solving the inverse problem of parameter structure identification.
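The Voronoi parameterization reduces a distributed parameter field to a handful of seed locations and zone values: every grid cell inherits the value of its nearest seed. A minimal sketch (seed positions and conductivities are illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def voronoi_zonation(grid_xy, seed_xy, seed_values):
    """Assign each grid cell the value of its nearest Voronoi seed.
    grid_xy: (n_cells, 2); seed_xy: (n_zones, 2); seed_values: (n_zones,)."""
    _, nearest = cKDTree(seed_xy).query(grid_xy)
    return seed_values[nearest]

# 50 x 50 grid, four zones; the unknowns are the seed log10-conductivities.
xs, ys = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
grid = np.column_stack([xs.ravel(), ys.ravel()])
seeds = np.array([[0.2, 0.3], [0.7, 0.2], [0.4, 0.8], [0.9, 0.7]])
log10_k = voronoi_zonation(grid, seeds, np.array([-4.0, -3.2, -5.1, -3.8]))
```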

19.
With ill‐posed inverse problems such as Full‐Waveform Inversion (FWI), regularization schemes are needed to constrain the solution. Whereas many regularization schemes end up smoothing the model, an undesirable effect in FWI where high‐resolution maps are sought, blocky regularization does not: it identifies and preserves strong velocity contrasts, leading to step‐like functions. Such models may be needed for imaging with wave‐equation‐based techniques such as Reverse Time Migration or for reservoir characterization. Enforcing blockiness in the model space amounts to enforcing a sparse representation of the discontinuities in the model. Sparseness can be obtained using the ℓ1 norm or the Cauchy function, which are related to long‐tailed probability density functions. Detecting these discontinuities with vertical and horizontal gradient operators helps constrain the model in both directions. Blocky regularization can also help recover higher wavenumbers than the data used for inversion would otherwise allow, thus helping to control the cost of FWI. While the Cauchy function yields blockier models, both the ℓ1 norm and the Cauchy function attenuate illumination and inversion artifacts.
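The two sparsity‐promoting penalties can be written down directly; applied to vertical and horizontal model gradients, they penalize many small variations more than a few large jumps, which is what produces blocky models. A sketch (the scale sigma is a tuning parameter):

```python
import numpy as np

def l1_penalty(g):
    """l1 norm of gradient samples: linear growth keeps large jumps affordable."""
    return np.sum(np.abs(g))

def cauchy_penalty(g, sigma=1.0):
    """Cauchy penalty, from a long-tailed density: grows only logarithmically,
    so it tolerates large contrasts even more than l1 (blockier models)."""
    return np.sum(np.log1p((g / sigma) ** 2))

# Vertical and horizontal first differences of a 2D velocity model m.
m = np.random.default_rng(0).normal(size=(60, 80))
gz, gx = np.diff(m, axis=0), np.diff(m, axis=1)
reg = l1_penalty(gz) + l1_penalty(gx)   # or cauchy_penalty(gz) + cauchy_penalty(gx)
```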

20.
Fluid depletion within a compacting reservoir can lead to significant stress and strain changes and potentially severe geomechanical issues, both inside and outside the reservoir. We extend previous research on time‐lapse seismic interpretation by incorporating synthetic near‐offset and full‐offset common‐midpoint reflection data, using anisotropic ray tracing, to investigate uncertainties in time‐lapse seismic observations. The time‐lapse seismic simulations use dynamic elasticity models built from hydro‐geomechanical simulation output and a stress‐dependent rock physics model. The reservoir model is a conceptual two‐fault graben reservoir, where we allow the fault fluid‐flow transmissibility to vary from high to low to simulate non‐compartmentalized and compartmentalized reservoirs, respectively. The results indicate that time‐lapse seismic amplitude changes and travel‐time shifts can be used to qualitatively identify reservoir compartmentalization. Due to the high repeatability and good quality of the synthetic time‐lapse dataset, the estimated travel‐time shifts and amplitude changes for near‐offset data match the true model subsurface changes with minimal errors. A 1D velocity–strain relation was used to estimate the vertical velocity change at the reservoir bottom interface by applying zero‐offset time shifts from both the near‐offset and full‐offset measurements. For near‐offset data, the estimated P‐wave velocity changes were within 10% of the true values. For full‐offset data, however, time‐lapse attributes obtained with standard time‐lapse seismic methods are only quantitatively reliable when an updated velocity model is used rather than the baseline model.
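A common form of the 1D velocity–strain relation used for this kind of estimate (our assumption as to the exact variant; cf. the R‐factor of Hatchell and Bourne) links the fractional time shift to vertical strain and velocity change:

```python
def dv_over_v_from_timeshift(dt_over_t, r_factor=5.0):
    """With vertical strain e_zz and dv/v = -R*e_zz, the fractional two-way
    time shift across an interval is dt/t = e_zz - dv/v = (1 + R)*e_zz,
    hence dv/v = -R/(1 + R) * dt/t. R ~ 4-6 is a typical overburden value
    (illustrative; the paper's calibration may differ)."""
    return -(r_factor / (1.0 + r_factor)) * dt_over_t

# A +0.2% stretch in two-way time maps to about a -0.17% velocity change (R=5).
print(dv_over_v_from_timeshift(0.002))
```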

