Similar Articles
 20 similar articles were retrieved.
1.
In the traditional inversion of the Rayleigh dispersion curve, layer thickness, which is the second most sensitive parameter in modelling the Rayleigh dispersion curve, is usually assumed to be correct and is used as fixed a priori information. Because knowledge of the layer thickness is typically not precise, the use of such a priori information may cause the traditional Rayleigh dispersion curve inversion to get trapped in local minima and may yield results that are far from the real solution. In this study, we try to avoid this issue by using a joint inversion of Rayleigh dispersion curve data with vertical electric sounding data, where we use the common layer thickness to couple the two methods. The key idea of the proposed joint inversion scheme is to combine the methods in one joint Jacobian matrix and to invert for layer S-wave velocity, resistivity, and layer thickness as an additional parameter, in contrast with a traditional Rayleigh dispersion curve inversion. The proposed joint inversion approach is tested with noise-free and Gaussian-noise data on six characteristic, synthetic sub-surface models: a model with typical dispersion; a low-velocity half-space model; models with a particularly stiff and a particularly soft layer, respectively; and models derived from the stiff- and soft-layer cases with different layer-resistivity distributions. In the joint inversion process, the non-linear damped least-squares method is used together with the singular value decomposition approach to find a proper damping value for each iteration. The proposed joint inversion scheme tests many damping values and chooses the one that best approximates the observed data in the current iteration. The quality of the joint inversion is checked with the relative distance measure. In addition, a sensitivity analysis is performed for the typical dispersive sub-surface model to illustrate the benefits of the proposed joint scheme. The results for the synthetic models reveal that combining the Rayleigh dispersion curve and vertical electric sounding methods in a joint scheme provides reliable sub-surface models even in complex and challenging situations, without using any a priori information.
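As an illustration only (not the authors' code), the sketch below shows one damped least-squares iteration in which several damping values are tried and the one that best reproduces the observed data is kept, computed via the SVD of the joint Jacobian. The Jacobian `J`, forward operator `forward` and data vector `d_obs` are placeholders for the joint Rayleigh-dispersion/VES system.

```python
import numpy as np

def damped_lsq_step(m, J, d_obs, forward, dampings=np.logspace(-3, 1, 20)):
    """One Marquardt-style update dm = V diag(s/(s^2+eps^2)) U^T r from the SVD of J;
    the damping eps giving the lowest RMS data misfit is kept for this iteration."""
    r = d_obs - forward(m)                              # data residual
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    best_m, best_rms = m, np.inf
    for eps in dampings:
        dm = Vt.T @ ((s / (s**2 + eps**2)) * (U.T @ r))
        trial = m + dm
        rms = np.sqrt(np.mean((d_obs - forward(trial)) ** 2))
        if rms < best_rms:
            best_m, best_rms = trial, rms
    return best_m, best_rms
```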

2.
We present a new inversion method to estimate, from prestack seismic data, blocky P- and S-wave velocity and density images and the associated sparse reflectivity levels. The method uses the three-term Aki and Richards approximation to linearise the seismic inversion problem. To this end, we adopt a weighted mixed l2,1-norm that promotes structured forms of sparsity, thus leading to blocky solutions in time. In addition, our algorithm incorporates a covariance or scale matrix to simultaneously constrain P- and S-wave velocities and density. This a priori information is obtained from nearby well-log data. We also include a term containing a low-frequency background model. The mixed l2,1-norm leads to a convex objective function that can be minimised using proximal algorithms. In particular, we use the fast iterative shrinkage-thresholding algorithm. A key advantage of this algorithm is that it only requires matrix–vector multiplications and no direct matrix inversion. The latter makes our algorithm numerically stable, easy to apply, and economical in terms of computational cost. Tests on synthetic and field data show that the proposed method, in contrast to conventional l2- or l1-norm regularised solutions, is able to provide consistent blocky and/or sparse estimates of P- and S-wave velocities and density from a noisy and limited number of observations.
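A minimal sketch of the optimization machinery (assumed here; the paper's weighting matrix, background term and exact operators are omitted): FISTA with the proximal operator of the l2,1-norm reduces to gradient steps plus group soft-thresholding, where each group collects the three Aki–Richards reflectivities at one time sample, and only matrix–vector products are needed.

```python
import numpy as np

def group_soft_threshold(x, lam, group_size=3):
    """Proximal operator of the l2,1 mixed norm over consecutive groups of size group_size."""
    z = x.reshape(-1, group_size)
    norms = np.linalg.norm(z, axis=1, keepdims=True)
    scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
    return (z * scale).ravel()

def fista(A, b, lam, n_iter=200):
    """Minimise 0.5*||A x - b||^2 + lam * sum_g ||x_g||_2 with FISTA."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the smooth part
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)            # only matrix-vector multiplications
        x_new = group_soft_threshold(y - grad / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t**2)) / 2
        y = x_new + (t - 1) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x
```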

3.
The accurate estimation of sub-seafloor resistivity features from marine controlled-source electromagnetic data using inverse modelling is hindered by the limitations of the inversion routines. The most commonly used one-dimensional inversion techniques for resolving subsurface resistivity structures are gradient-based methods, namely Occam and Marquardt. The first approach relies on the smoothness of the model and is recommended when there are no sharp resistivity boundaries. The Marquardt routine is relevant for many electromagnetic applications with sharp resistivity contrasts but is subject to the appropriate choice of a starting model. In this paper, we explore the ability of different 1D inversion schemes to derive sub-seafloor resistivity structures from time-domain marine controlled-source electromagnetic data measured along an 8-km-long profile in the German North Sea. Seismic reflection data reveal a dipping shallow amplitude anomaly that was the target of the controlled-source electromagnetic survey. We tested four inversion schemes to find suitable starting models for the final Marquardt inversion. In the first scenario, Occam inversion results are used as a starting model for the subsequent Marquardt inversion (Occam–Marquardt). In the second scenario, we employ a global method called Differential Evolution Adaptive Metropolis and couple it sequentially with Marquardt inversion. The third approach corresponds to Marquardt inversion with lateral constraints. Finally, we include the lateral constraints in the Differential Evolution Adaptive Metropolis optimization, and the results are sequentially utilized by Marquardt inversion. Occam–Marquardt may provide an accurate estimation of the subsurface features, but it depends on the appropriate conversion of the smooth, multi-layered Occam model into an acceptable starting model for Marquardt inversion, which is not straightforward. By exploring parameter spaces, the Differential Evolution Adaptive Metropolis approach can be pertinent for determining the a priori information required by Marquardt inversion; nevertheless, the uncertainties in the Differential Evolution Adaptive Metropolis optimization introduce some inaccuracies into the Marquardt inversion results. Laterally constrained Marquardt may be promising for resolving sub-seafloor features, but it is not stable if there are significant lateral changes of the sub-seafloor structure, owing to the dependence of the method on the starting model. Including the lateral constraints in the Differential Evolution Adaptive Metropolis approach allows faster convergence of the routine with consistent results, furnishing a more accurate estimation of a priori models for the subsequent Marquardt inversion.

4.
Attempts have previously been made to predict anisotropic permeability in fractured reservoirs from seismic Amplitude Versus Angle and Azimuth data on the basis of a consistent permeability-stiffness model and the anisotropic Gassmann relations of Brown and Korringa. However, these attempts were not very successful, mainly because the effective stiffness tensor of a fractured porous medium under saturated (drained) conditions is much less sensitive to the aperture of the fractures than the corresponding permeability tensor. Here we show that one can obtain information about the fracture aperture as well as the fracture density and orientation (which determines the effective permeability) from frequency-dependent seismic Amplitude Versus Angle and Azimuth data. Our workflow is based on a unified stiffness-permeability model, which takes into account seismic attenuation by wave-induced fluid flow. Synthetic seismic Amplitude Versus Angle and Azimuth data are generated by using a combination of a dynamic effective medium theory with Rüger's approximations for PP reflection coefficients in Horizontally Transversely Isotropic media. A Monte Carlo method is used to perform a Bayesian inversion of these synthetic seismic Amplitude Versus Angle and Azimuth data with respect to the parameters of the fractures. An effective permeability model is then used to construct the corresponding probability density functions for the different components of the effective permeability constants. The results suggest that an improved characterization of fractured reservoirs can indeed be obtained from frequency-dependent seismic Amplitude Versus Angle and Azimuth data, provided that a dynamic effective medium model is used in the inversion process and a priori information about the fracture length is available.
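The abstract states only that a Monte Carlo method is used for the Bayesian inversion; the random-walk Metropolis sketch below is therefore an assumption about the kind of sampler involved. Given a user-supplied `log_posterior` for the fracture parameters, histograms of the returned chain give the probability density functions mentioned above.

```python
import numpy as np

def metropolis(log_posterior, m0, proposal_std, n_samples=10000, seed=0):
    """Random-walk Metropolis sampler over the fracture-parameter vector."""
    rng = np.random.default_rng(seed)
    m = np.asarray(m0, dtype=float)
    logp = log_posterior(m)
    chain = []
    for _ in range(n_samples):
        cand = m + proposal_std * rng.standard_normal(m.size)
        logp_cand = log_posterior(cand)
        if np.log(rng.uniform()) < logp_cand - logp:   # accept with prob min(1, ratio)
            m, logp = cand, logp_cand
        chain.append(m.copy())
    return np.array(chain)   # column-wise histograms approximate the posterior PDFs
```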

5.
A new tool for two-dimensional apparent-resistivity data modelling and inversion is presented. The study is developed according to the idea that the best way to deal with the ill-posedness of geoelectrical inverse problems lies in constructing algorithms which allow flexible control of the physical and mathematical elements involved in the resolution. The forward problem is solved through a finite-difference algorithm, whose main features are a versatile user-defined discretization of the domain and a new approach to the solution of the inverse Fourier transform. The inversion procedure is based on an iterative smoothness-constrained least-squares algorithm. As mentioned, the code is constructed to ensure flexibility in resolution. This is first achieved by starting the inversion from an arbitrarily defined model. In our approach, a Jacobian matrix is calculated at each iteration, using a generalization of Cohn's network sensitivity theorem. Another versatile feature is the way a priori information about the solution can be introduced. Regions of the domain can be constrained to vary between two limits (the lower and upper bounds) by using inequality constraints. A second possibility is to include the starting model in the objective function used to determine an improved estimate of the unknown parameters and to constrain the solution to that model. Furthermore, the possibility either of defining a discretization of the domain that exactly fits the underground structures or of refining the mesh of the grid certainly leads to more accurate solutions. Control of the mathematical elements in the inversion algorithm is also allowed. The smoothness matrix can be modified in order to penalize roughness in any one direction. An empirical way of assigning the regularization parameter (damping) is defined, but the user can also decide to assign it manually at each iteration. An appropriate tool was constructed with the purpose of handling the inversion results, for example to correct reconstructed models and to check the effects of such changes on the calculated apparent resistivity. Tests on synthetic and real data, in particular in handling indeterminate cases, show that the flexible approach is a good way to build a detailed picture of the prospected area.
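As a hedged illustration of the directional smoothness control described above (not the tool's actual implementation), a first-difference roughness matrix can be assembled with separate horizontal and vertical weights for an nx-by-nz grid whose model vector is ordered row by row (x fastest):

```python
import numpy as np
import scipy.sparse as sp

def roughness_matrix(nx, nz, wx=1.0, wz=1.0):
    """Anisotropic roughness operator: wx scales horizontal differences, wz vertical ones."""
    Dx = sp.diags([-1, 1], [0, 1], shape=(nx - 1, nx))   # 1D first differences along x
    Dz = sp.diags([-1, 1], [0, 1], shape=(nz - 1, nz))   # 1D first differences along z
    Ix, Iz = sp.identity(nx), sp.identity(nz)
    Cx = sp.kron(Iz, Dx)        # x-differences within every row of cells
    Cz = sp.kron(Dz, Ix)        # z-differences between adjacent rows
    return sp.vstack([wx * Cx, wz * Cz])
```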

6.
A major complication caused by anisotropy in velocity analysis and imaging is the uncertainty in estimating the vertical velocity and depth scale of the model from surface data. For laterally homogeneous VTI (transversely isotropic with a vertical symmetry axis) media above the target reflector, P-wave moveout has to be combined with other information (e.g. borehole data or converted waves) to build velocity models for depth imaging. The presence of lateral heterogeneity in the overburden creates the dependence of P-wave reflection data on all three relevant parameters (the vertical velocity VP0 and the Thomsen coefficients ε and δ) and, therefore, may help to determine the depth scale of the velocity field. Here, we propose a tomographic algorithm designed to invert NMO ellipses (obtained from azimuthally varying stacking velocities) and zero-offset traveltimes of P-waves for the parameters of homogeneous VTI layers separated by either plane dipping or curved interfaces. For plane non-intersecting layer boundaries, the interval parameters cannot be recovered from P-wave moveout in a unique way. Nonetheless, if the reflectors have sufficiently different azimuths, a priori knowledge of any single interval parameter makes it possible to reconstruct the whole model in depth. For example, the parameter estimation becomes unique if the subsurface layer is known to be isotropic. In the case of 2D inversion on the dip line of co-orientated reflectors, it is necessary to specify one parameter (e.g. the vertical velocity) per layer. Despite the higher complexity of models with curved interfaces, the increased angle coverage of reflected rays helps to resolve the trade-offs between the medium parameters. Singular value decomposition (SVD) shows that in the presence of sufficient interface curvature all parameters needed for anisotropic depth processing can be obtained solely from conventional-spread P-wave moveout. By performing tests on noise-contaminated data we demonstrate that the tomographic inversion procedure reconstructs both the interfaces and the VTI parameters with high accuracy. Both SVD analysis and moveout inversion are implemented using an efficient modelling technique based on the theory of NMO-velocity surfaces generalized for wave propagation through curved interfaces.
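A small sketch of the kind of SVD diagnostic referred to above (an assumption about its form, not the authors' code): the singular values of the Jacobian of the moveout data with respect to the layer parameters (VP0, ε, δ, interface terms) indicate which parameter combinations are well resolved and which fall into the null space.

```python
import numpy as np

def resolvable_combinations(J, rel_cutoff=1e-3):
    """Return singular values of J and the right singular vectors (parameter
    combinations) whose singular values exceed rel_cutoff times the largest one."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    well_resolved = s > rel_cutoff * s[0]
    return s, Vt[well_resolved]
```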

7.
8.
9.
10.
In order to couple spatial data from frequency-domain helicopter-borne electromagnetics with electromagnetic measurements from ground geophysics (transient electromagnetics and radiomagnetotellurics), a common 1D weighted joint inversion algorithm for helicopter-borne electromagnetics, transient electromagnetics and radiomagnetotellurics data has been developed. The depth of investigation of helicopter-borne electromagnetics data is rather limited compared to time-domain electromagnetic sounding methods on the ground. In order to improve the accuracy of model parameters at shallow depth as well as at greater depth, the helicopter-borne electromagnetics, transient electromagnetics, and radiomagnetotellurics measurements can be combined by using a joint inversion methodology. The 1D joint inversion algorithm is tested on synthetic helicopter-borne electromagnetics, transient electromagnetics and radiomagnetotellurics data. The proposed concept of the joint inversion takes advantage of each method, thus providing the capability to resolve near-surface (radiomagnetotellurics) and deeper electrical conductivity structures (transient electromagnetics) in combination with valuable spatial information (helicopter-borne electromagnetics). Furthermore, the joint inversion has been applied to field data (helicopter-borne electromagnetics and transient electromagnetics) measured in the Cuxhaven area, Germany. In order to avoid lessening the resolution capacity of any one data type, and thus to balance the use of the inherent and ideally complementary information content, a parameter reweighting scheme based on the exploration depth ranges of the specific methods is proposed. A comparison of the conventional joint inversion algorithm, proposed by Jupp and Vozoff (1975), and of the newly developed algorithm is presented. The new algorithm weights the different model parameters differently. It is inferred from the synthetic and field data examples that the weighted joint inversion is more successful in explaining the subsurface than the classical joint inversion approach. In addition, the data fits in the weighted joint inversion are also improved.
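One possible way to assemble such a weighted joint system is sketched below; this is an assumption about the general idea (stacking error-weighted Jacobians with per-method, depth-dependent parameter weights), not the published reweighting scheme of the paper.

```python
import numpy as np

def joint_system(J_list, r_list, std_list, w_list):
    """Stack per-method (HEM, TEM, RMT) Jacobians and residuals into one joint system.
    std_list: data standard deviations per method (data weighting);
    w_list: per-method weights on the model parameters, e.g. based on each
    method's depth of investigation."""
    J_rows, r_rows = [], []
    for J, r, std, w in zip(J_list, r_list, std_list, w_list):
        Wd = 1.0 / np.asarray(std)
        J_rows.append(Wd[:, None] * np.asarray(J) * np.asarray(w)[None, :])
        r_rows.append(Wd * np.asarray(r))
    return np.vstack(J_rows), np.concatenate(r_rows)
```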

11.
Non-uniqueness occurs with the 1D parametrization of refraction traveltime graphs in the vertical dimension and with the 2D lateral resolution of individual layers in the horizontal dimension. The most common source of non-uniqueness is the inversion algorithm used to generate the starting model. This study applies 1D, 1.5D and 2D inversion algorithms to traveltime data for a syncline (2D) model, in order to generate starting models for wave path eikonal traveltime tomography. The 1D tau-p algorithm produced a tomogram with an anticline rather than a syncline and an artefact with a high seismic velocity. The 2D generalized reciprocal method generated tomograms that accurately reproduced the syncline, together with narrow regions at the thalweg with seismic velocities that are less than and greater than the true seismic velocities, as well as the true values. It is concluded that 2D inversion algorithms, which explicitly identify forward and reverse traveltime data, are required to generate useful starting models in the near-surface where irregular refractors are common. The most likely tomogram can be selected either as the simplest model or by using a priori information, such as head-wave amplitudes. The determination of vertical velocity functions within individual layers is also subject to non-uniqueness. Depths computed with vertical velocity gradients, which are the default with many tomography programs, are generally 50% greater than those computed with constant velocities for the same traveltime data. The average vertical velocity provides a more accurate measure of depth estimates, where it can be derived. Non-uniqueness is a fundamental reality with the inversion of all near-surface seismic refraction data. Unless specific measures are taken to explicitly address non-uniqueness, the production of a single refraction tomogram that fits the traveltime data to sufficient accuracy does not necessarily demonstrate that the result is either 'correct' or the most probable.

12.
Anisotropy is often observed due to thin layering or aligned micro-structures, such as small fractures. At the scale of cross-well tomography, the anisotropic effects cannot be neglected. In this paper, we propose a method of full-wave inversion for transversely isotropic media and we test its robustness against structured noisy data. Optimization inversion techniques based on a least-squares formalism are used. In this framework, analytical expressions of the misfit-function gradient, based on the adjoint technique in the time domain, allow one to solve the inverse problem with a high number of parameters and for a completely heterogeneous medium. The wave propagation equation for transversely isotropic media with a vertical symmetry axis is solved using the finite-difference method in the cylindrical system of coordinates. This system allows one to model the 3D propagation in a 2D medium with a revolution symmetry. In the case of approximately horizontal layering, this approximation is sufficient. The full-wave inversion method is applied to a crosswell synthetic 2-component (radial and vertical) dataset generated using a 2D model with three different anisotropic regions. Complex noise has been added to these synthetic observed data. This noise is Gaussian and has the same amplitude f–k spectrum as the data. Part of the noise is localized as a coda of arrivals; the other part is not localized. Five parameter fields are estimated: (vertical) P-wave velocity, (vertical) S-wave velocity, volumetric mass (density), and the Thomsen anisotropy parameters epsilon and delta. Horizontal exponential correlations have been used. The results show that the full-wave inversion of cross-well data is relatively robust to high-level noise, even for second-order parameters such as the Thomsen epsilon and delta anisotropy parameters.

13.
In this paper, we describe a non-linear constrained inversion technique for 2D interpretation of high-resolution magnetic field data along flight lines using a simple dike model. We first estimate the strike direction of a quasi-2D structure based on the eigenvector corresponding to the minimum eigenvalue of the pseudogravity gradient tensor derived from gridded, low-pass filtered magnetic field anomalies, assuming that the magnetization direction is known. Then the measured magnetic field can be transformed into the strike coordinate system, and all magnetic dike parameters – horizontal position, depth to the top, dip angle, width and susceptibility contrast – can be estimated by non-linear least-squares inversion of the high-resolution magnetic field data along the flight lines. We use the Levenberg-Marquardt algorithm together with the trust-region-reflective method, which enables users to define inequality constraints on the model parameters so that the estimated parameters always remain in a trust region. Assuming that the maximum of the calculated gzz (vertical gradient of the pseudogravity field) is approximately located above the causative body, data points enclosed by a window along the profile, centred at the maximum of gzz, are used in the inversion scheme for estimating the dike parameters. The size of the window is increased until it exceeds a predefined limit. Then the solution corresponding to the minimum data-fit error is chosen as the most reliable one. Using synthetic data we study the effect of random noise and interfering sources on the estimated models, and we apply our method to a new aeromagnetic data set from the Särna area, west-central Sweden, including constraints from laboratory measurements on rock samples from the area.
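A hedged sketch of bounded non-linear least squares of this kind: SciPy's trust-region-reflective solver (`method='trf'`) accepts lower/upper bounds on the parameters (SciPy's `'lm'` option does not), so the example below uses 'trf' alone rather than the authors' combined Levenberg-Marquardt/trust-region scheme. The forward model `toy_anomaly` is a deliberately simplified placeholder, not a full dike magnetic response.

```python
import numpy as np
from scipy.optimize import least_squares

def toy_anomaly(p, x):
    x0, depth, amp = p                        # horizontal position, depth to top, amplitude
    return amp * depth / ((x - x0) ** 2 + depth ** 2)

x = np.linspace(-500.0, 500.0, 201)
truth = np.array([60.0, 80.0, 5000.0])
data = toy_anomaly(truth, x) + np.random.default_rng(0).normal(0.0, 0.5, x.size)

res = least_squares(
    lambda p: toy_anomaly(p, x) - data,       # residual function
    x0=np.array([0.0, 150.0, 1000.0]),        # starting model
    bounds=([-400.0, 10.0, 0.0], [400.0, 400.0, 1e5]),
    method='trf',                             # trust-region-reflective honours the bounds
)
print(res.x)                                  # estimated (position, depth, amplitude)
```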

14.
The inversion of induced-polarization parameters is important in the characterization of the frequency-dependent electrical response of porous rocks. A Bayesian approach is developed to invert these parameters assuming the electrical response is described by a Cole–Cole model in the time or frequency domain. We show that the Bayesian approach provides a better analysis of the uncertainty associated with the parameters of the Cole–Cole model compared with more conventional methods based on the minimization of a cost function using the least-squares criterion. This is due to the strong non-linearity of the inverse problem and the non-uniqueness of the solution in the time domain. The Bayesian approach consists of propagating the information provided by the measurements through the model and combining this information with a priori knowledge of the data. Our analysis demonstrates that the uncertainty in estimating the Cole–Cole model parameters from induced-polarization data is much higher for measurements performed in the time domain than in the frequency domain. Our conclusion is that it is very difficult, if not impossible, to retrieve the correct values of the Cole–Cole parameters from time-domain induced-polarization data using standard least-squares methods. In contrast, the Cole–Cole parameters can be inverted more reliably in the frequency domain. These results are also valid for other models describing the induced-polarization spectral response, such as the Cole–Davidson or power-law models.
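For reference, the Pelton form of the Cole–Cole complex resistivity commonly used in frequency-domain induced polarization is rho(omega) = rho0 * [1 - m * (1 - 1/(1 + (i*omega*tau)^c))]; the short sketch below forward-models the spectra that such a Bayesian inversion would fit (parameter values are arbitrary examples).

```python
import numpy as np

def cole_cole(freq_hz, rho0, m, tau, c):
    """Pelton Cole-Cole complex resistivity: chargeability m, time constant tau, exponent c."""
    omega = 2.0 * np.pi * np.asarray(freq_hz)
    return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * omega * tau) ** c)))

freqs = np.logspace(-2, 4, 50)                       # 0.01 Hz to 10 kHz
spectrum = cole_cole(freqs, rho0=100.0, m=0.3, tau=0.01, c=0.5)
print(np.abs(spectrum[:3]), np.angle(spectrum[:3]))  # amplitude and phase samples
```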

15.
In this study, we formulate an improved finite element model-updating method to address the numerical difficulties associated with ill conditioning and rank deficiency. These complications are frequently encountered in model-updating problems and occur when one attempts to identify a larger number of physical parameters than is warranted by the information content of the experimental data. Based on the standard bounded variables least-squares (BVLS) method, which incorporates the usual upper/lower-bound constraints, the proposed method (henceforth referred to as BVLSrc) is equipped with novel sensitivity-based relative constraints. The relative constraints are automatically constructed using the correlation coefficients between the sensitivity vectors of the updating parameters. The veracity and effectiveness of BVLSrc are investigated through the simulated, yet realistic, forced-vibration testing of a simple framed structure using its frequency response function as input data. By comparing the results of BVLSrc with those obtained via the competing pure BVLS and regularization methods, we show that BVLSrc and regularization methods yield approximate solutions with similar and sufficiently high accuracy, while the pure BVLS method yields physically inadmissible solutions. We further demonstrate that BVLSrc is computationally more efficient, because, unlike regularization methods, it does not require laborious a priori calculations to determine an optimal penalty parameter, and its results are far less sensitive to the initial estimates of the updating parameters. Copyright © 2006 John Wiley & Sons, Ltd.
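A hedged sketch of the bounded-variables least-squares building block only (SciPy's `lsq_linear` provides a BVLS solver); the sensitivity-based relative constraints that distinguish BVLSrc are not reproduced here, and the matrix and bounds below are synthetic placeholders.

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 6))                 # sensitivity (Jacobian) matrix
x_true = np.array([0.9, 1.1, 1.0, 0.95, 1.05, 1.0])
b = A @ x_true + 0.01 * rng.standard_normal(40)  # simulated response data

# upper/lower-bound constraints keep the updating parameters physically admissible
res = lsq_linear(A, b, bounds=(0.5 * np.ones(6), 1.5 * np.ones(6)), method='bvls')
print(res.x)                                     # estimates stay inside the bounds
```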

16.
In the Norwegian North Sea, the Sleipner field produces gas with a high CO2 content. For environmental reasons, since 1996, more than 11 Mt of this carbon dioxide (CO2) have been injected into the Utsira Sand saline aquifer located above the hydrocarbon reservoir. A series of seven 3D seismic surveys were recorded to monitor the CO2 plume evolution. With this case study, time-lapse seismics have been shown to be successful in mapping the spread of CO2 over the past decade and in ensuring the integrity of the overburden. Stratigraphic inversion of seismic data is currently used in the petroleum industry for quantitative reservoir characterization and enhanced oil recovery. Now it may also be used to evaluate the expansion of a CO2 plume in an underground reservoir. The aim of this study is to estimate the P-wave impedances via a Bayesian model-based stratigraphic inversion. We have focused our study on the 1994 vintage, acquired before CO2 injection, and the 2006 vintage, acquired after injection of 8.4 Mt of CO2. In spite of some difficulties due to the lack of time-lapse well-log data over the area of interest, the full application of our inversion workflow allowed us to obtain, for the first time to our knowledge, 3D impedance cubes including the Utsira Sand. These results can be used to better characterize the spreading of CO2 in a reservoir. With the post-stack inversion workflow applied to CO2 storage, we point out the importance of the a priori model and the difficulty of obtaining coherent results between sequential inversions of different seismic vintages. The stacking-velocity workflow that yields the migration model and the a priori model, specific to each vintage, can induce a slight inconsistency in the results.

17.
Regularization is the most popular technique for overcoming the null space of model parameters in geophysical inverse problems, and is implemented by including a constraint term as well as the data-misfit term in the objective function being minimized. The weighting of the constraint term relative to the data-fitting term is controlled by a regularization parameter, and its adjustment to obtain the best model has received much attention. The empirical Bayes approach discussed in this paper determines the optimum value of the regularization parameter from a given data set. The regularization term can be regarded as representing a priori information about the model parameters. The empirical Bayes approach and its more practical variant, Akaike's Bayesian Information Criterion, adjust the regularization parameter automatically in response to the level of data noise and to the suitability of the assumed a priori model information for the given data. When the noise level is high, the regularization parameter is made large, which means that the a priori information is emphasized. If the assumed a priori information is not suitable for the given data, the regularization parameter is made small. Both these behaviours are desirable characteristics for the regularized solutions of practical inverse problems. Four simple examples are presented to illustrate these characteristics for an underdetermined problem, a problem adopting an improper prior constraint and a problem having an unknown data variance, all of which are frequently encountered in geophysical inverse problems. Numerical experiments using Akaike's Bayesian Information Criterion for synthetic data provide results consistent with these characteristics. In addition, concerning the selection of an appropriate type of a priori model information, a comparison between four types of difference-operator model – the zeroth-, first-, second- and third-order difference-operator models – suggests that the automatic determination of the optimum regularization parameter becomes more difficult with increasing order of the difference operators. Accordingly, taking the effect of data noise into account, it is better to employ the lower-order difference-operator models for inversions of noisy data.
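As a simplified illustration (an assumption restricted to the zeroth-order, ridge-like case with known data variance, not the paper's general ABIC formulation): with prior m ~ N(0, (sigma^2/lam) I) and noise e ~ N(0, sigma^2 I), the data d = G m + e have a Gaussian marginal distribution, and the regularization parameter lam can be chosen to maximize that marginal (evidence) likelihood; ABIC is essentially minus twice the maximized log marginal likelihood plus twice the number of hyperparameters.

```python
import numpy as np

def log_evidence(G, d, sigma2, lam):
    """Log marginal likelihood of d for the zeroth-order (ridge) prior with weight lam."""
    N = d.size
    C = sigma2 * (G @ G.T / lam + np.eye(N))      # marginal data covariance
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (N * np.log(2.0 * np.pi) + logdet + d @ np.linalg.solve(C, d))

def best_lambda(G, d, sigma2, grid=np.logspace(-4, 4, 81)):
    scores = [log_evidence(G, d, sigma2, lam) for lam in grid]
    return grid[int(np.argmax(scores))]           # noisier data favour larger lam
```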

18.
The quantitative explanation of the potential field data of three-dimensional geological structures remains one of the most challenging issues in modern geophysical inversion. Obtaining a stable solution that can simultaneously resolve complicated geological structures is a critical inverse problem in geophysics. I have developed a new method for determining a three-dimensional petrophysical property distribution, which produces a corresponding potential field anomaly. In contrast with traditional inversion algorithms, my inversion method proposes a new model norm, which incorporates two important weighting functions. One is the L0 quasinorm (enforcing sparse constraints), and the other is depth weighting, which counteracts the influence of source depth on the resulting potential field data. Sparseness constraints are imposed by using the L0 quasinorm on the model parameters. To solve the representation problem, an L0 quasinorm minimisation model with different smooth approximations is proposed. Hence, the data-space (N) formulation combined with the gradient-projection method, and the model-space (M) formulation combined with a modified Newton method for the L0 quasinorm sparse constraints, lead to a computationally efficient method that works with an N × N system instead of an M × M one, because N ≪ M. Tests on synthetic data and real datasets demonstrate the stability and validity of the L0 quasinorm sparse inversion method. With the aim of obtaining blocky results, the inversion with L0 quasinorm sparse constraints performs better than the traditional L2 norm (standard Tikhonov regularisation); it easily yields focused and sparse results. Then, Bouguer anomaly survey data over a salt dome, offshore Louisiana, are considered as a real case study. The real inversion result shows that the inclusion of the L0 quasinorm sparse constraints leads to a simpler and better-resolved solution, and the density distribution obtained in this area reveals its geological structure. These results confirm the validity of the L0 quasinorm sparse constraints method and indicate its applicability to other potential field data inversions and to the exploration of geological structures.
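A hedged sketch of two of the ingredients named above: a Li–Oldenburg-style depth weighting that counteracts the decay of potential-field sensitivity with source depth, and a minimum-support-type weight, which is one common smooth approximation of the L0 quasinorm used in iteratively reweighted schemes (the paper's exact approximation and focusing parameters may differ).

```python
import numpy as np

def depth_weighting(z_centres, z0=1.0, beta=2.0):
    """w(z) = (z + z0)^(-beta/2); beta near 2 is typical for gravity (Bouguer) data."""
    return (np.asarray(z_centres) + z0) ** (-beta / 2.0)

def minimum_support_weight(m, eps=1e-4):
    """IRLS-style weight: with W = diag(1/sqrt(m_prev^2 + eps^2)), the weighted L2 norm
    ||W m||^2 approximates sum_i m_i^2/(m_i^2 + eps^2), i.e. the number of non-zero
    cells as eps -> 0 (a smooth surrogate for the L0 quasinorm)."""
    m = np.asarray(m)
    return 1.0 / np.sqrt(m ** 2 + eps ** 2)
```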

19.
Magnetic resonance sounding (MRS) has increasingly become an important method in hydrogeophysics because it allows estimation of essential hydraulic properties such as porosity and hydraulic conductivity. A resistivity model is required for magnetic resonance sounding modelling and inversion. Therefore, joint interpretation or inversion is favourable to reduce the ambiguities that arise in separate magnetic resonance sounding and vertical electrical sounding (VES) inversions. A new method is suggested for the joint inversion of magnetic resonance sounding and vertical electrical sounding data. A one-dimensional blocky model with varying layer thicknesses is used for the subsurface discretization. Instead of conventional derivative-based inversion schemes, which are strongly dependent on initial models, a global multi-objective optimization scheme (a genetic algorithm [GA] in this case) is preferred to examine a set of possible solutions in a predefined search space. Multi-objective joint optimization avoids the domination of one objective over the other without applying a weighting scheme. The outcome is a group of non-dominated optimal solutions referred to as the Pareto-optimal set. Tests conducted using synthetic data show that the multi-objective joint optimization approximates the joint model parameters within the experimental error level and illustrates the range of trade-off solutions, which is useful for understanding the consistency and conflicts between the two models and objectives. Overall, the Levenberg-Marquardt inversion of field data measured during a survey on a North Sea island presents similar solutions. However, the multi-objective genetic algorithm provides an efficient means of exploring the search space by producing a set of non-dominated solutions. Borehole data were used to verify the inversion outcomes and indicate that the suggested genetic algorithm method is complementary to derivative-based inversions.
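A small sketch of the Pareto-dominance test behind such a multi-objective search (an assumed, generic implementation, not the paper's GA): a candidate model is kept in the Pareto-optimal set if no other candidate is at least as good on both misfits (MRS and VES) and strictly better on one.

```python
import numpy as np

def pareto_front(objectives):
    """objectives: (n_models, n_objectives) array of misfits to be minimized.
    Returns a boolean mask of non-dominated (Pareto-optimal) models."""
    f = np.asarray(objectives, dtype=float)
    keep = np.ones(len(f), dtype=bool)
    for i in range(len(f)):
        dominated = np.any(np.all(f <= f[i], axis=1) & np.any(f < f[i], axis=1))
        keep[i] = not dominated
    return keep
```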

20.
Hydraulic tomography is an emerging field and modeling method that provides a continuous hydraulic conductivity (K) distribution for an investigated region. Characterization approaches that rely on interpolation between one-dimensional (1D) profiles have limited ability to accurately identify high-K channels, juxtapositions of lenses with high K contrast, and breaches in layers or channels between such profiles. However, locating these features is especially important for groundwater flow and transport modeling, and for design and operation of in situ remediation in complex hydrogeologic environments. We use transient hydraulic tomography to estimate 3D K in a volume of 15-m diameter by 20-m saturated thickness in a highly heterogeneous unconfined alluvial (clay to sand-and-gravel) aquifer with a K range of approximately seven orders of magnitude at an active industrial site in Assemini, Sardinia, Italy. A modified Levenberg-Marquardt algorithm was used for geostatistical inversion to deal with the nonlinear nature of the highly heterogeneous system. The imaging results are validated with pumping tests not used in the tomographic inversion. These tests were conducted from three of five clusters of continuous multichannel tubing (CMTs) installed for observation in the tomographic testing. Locations of high-K continuity and discontinuity, juxtaposition of very high-K and very low-K lenses, and low-K "plugs" are evident in regions of the investigated volume where they likely would not have been identified with interpolation from 1D profiles at the positions of the pumping well and five CMT clusters. Quality assessment methods identified a suspect high-K feature between the tested volume and a lateral boundary of the model.
