Similar Documents
20 similar documents found
1.
The present article describes the use of plant bioassays for the detection of genotoxins in the aquatic environment and gives an overview of test methods, their detection spectrum for environmental mutagens and carcinogens, and their limitations and pitfalls. The most widely used test systems are micronucleus assays with meiotic cells of Tradescantia and with meristematic root tip cells of Allium and Vicia. In recent years, protocols for single cell gel electrophoresis assays have been developed which can be conducted with a variety of species. Various gene mutation test procedures have also been developed with plants, but they have hardly ever been used in environmental studies. Plants detect a broad variety of environmentally relevant genotoxins, in particular directly acting compounds. Many pesticides and industrial chemicals gave positive results; plant assays are also a unique tool for the detection of DNA-reactive carcinogenic heavy metals in the environment. In many studies with complex mixtures, positive results were obtained, which indicates that plants are sufficiently sensitive to detect effects without concentration of water samples. One of the shortcomings of the use of plants as indicators is their lack of sensitivity towards certain classes of promutagens such as nitrosamines, heterocyclic amines, and polycyclic aromatic hydrocarbons (PAHs). However, the former compounds are rarely encountered in the environment, and PAHs can easily be detected chemically and in other mutagenicity tests. Taken together, the currently available data show that plant bioassays are a useful component of test batteries for environmental monitoring.

2.
Aromatic sulfonates can be found in drinking water; thus, they must have passed through water treatment and survived ozonation. Degradation of aromatic sulfonates can be achieved by the UV/H2O2 process. Since drinking water is often treated with chlorine as a disinfectant, the formation of disinfection by-products has to be considered. Therefore, the production of AOX (adsorbable organic halogens, determined on activated carbon) after chlorination of the sulfonates, with and without preoxidation, was investigated. Instead of analysing the individual degradation products, the determination of the sum parameter AOX was used as a fast screening method. The investigated sulfonates were anthraquinone-2-sulfonate, naphthalene-2-sulfonate, 2-aminonaphthalene-1-sulfonate, and 4,4′-diaminostilbene-2,2′-disulfonate. All sulfonates containing amino groups showed a high potential for AOX formation. Preoxidation with ozone generally increased the AOX formation potential. When the sulfonates were treated by the UV/H2O2 process, the formation potentials dropped to zero after passing through a maximum value.

3.
Scattering theory, a form of perturbation theory, is a framework from within which time-lapse seismic reflection methods can be derived and understood. It leads to expressions relating baseline and monitoring data and Earth properties, focusing on differences between these quantities as it does so. The baseline medium is, in the language of scattering theory, the reference medium, and the monitoring medium is the perturbed medium. The general scattering relationship between monitoring data, baseline data, and time-lapse Earth property changes is likely too complex to be tractable. However, there are special cases that can be analysed for physical insight. Two of these cases coincide with recognizable areas of applied reflection seismology: amplitude versus offset modelling/inversion, and imaging. The main result of this paper is a demonstration that time-lapse difference amplitude versus offset modelling and time-lapse difference data imaging emerge from a single theoretical framework. The time-lapse amplitude versus offset case is considered first. We constrain the general time-lapse scattering problem to correspond with a single immobile interface that separates a static overburden from a target medium whose properties undergo time-lapse changes. The scattering solutions contain difference-amplitude versus offset expressions that (although presently acoustic) resemble the expressions of Landro (2001). In addition, however, they contain non-linear corrective terms whose importance becomes significant as the contrasts across the interface grow. The difference-amplitude versus offset case is exemplified with two-parameter acoustic (bulk modulus and density) and anacoustic (P-wave velocity and quality factor Q) examples. The time-lapse difference data imaging case is considered next. Instead of constraining the structure of the Earth volume as in the amplitude versus offset case, we instead make a small-contrast assumption, namely that the time-lapse variations are small enough that we may disregard contributions from beyond first order. An initial analysis, in which the case of a single mobile boundary is examined in 1D, justifies the use of a particular imaging algorithm applied directly to difference-data shot records. This algorithm, a least-squares, shot-profile imaging method, is additionally capable of supporting a range of regularization techniques. Synthetic examples verify the applicability of linearized imaging methods for difference-image formation under ideal conditions.
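As a purely numerical illustration of the difference-amplitude-versus-offset idea summarised above, the Python sketch below computes plane-wave acoustic reflection coefficients for a baseline and a monitor target beneath a static overburden and prints their difference with angle. The medium properties are hypothetical placeholders, and the sketch is not the scattering-series derivation of the paper.

import numpy as np

def refl_coeff(v1, rho1, v2, rho2, theta1):
    """Plane-wave acoustic reflection coefficient for incidence angle theta1 (radians)."""
    theta2 = np.arcsin(np.clip(v2 / v1 * np.sin(theta1), -1.0, 1.0))  # Snell's law (pre-critical)
    z1 = rho1 * v1 / np.cos(theta1)
    z2 = rho2 * v2 / np.cos(theta2)
    return (z2 - z1) / (z2 + z1)

theta = np.radians(np.arange(0.0, 41.0, 5.0))        # incidence angles
v_ob, rho_ob = 2500.0, 2200.0                        # static overburden (hypothetical)
v_base, rho_base = 3000.0, 2300.0                    # target, baseline survey (hypothetical)
v_mon, rho_mon = 2850.0, 2280.0                      # target, monitor survey (hypothetical)

dR = (refl_coeff(v_ob, rho_ob, v_mon, rho_mon, theta)
      - refl_coeff(v_ob, rho_ob, v_base, rho_base, theta))   # time-lapse difference AVO
for ang, d in zip(np.degrees(theta), dR):
    print(f"theta = {ang:4.1f} deg   delta_R = {d:+.4f}")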

4.

It is shown that, even for vanishingly small diffusivities of momentum and heat, a rotating stratified zonal shear flow is more unstable to zonally symmetric disturbances than would be indicated by the classical inviscid adiabatic criterion, unless the Prandtl number σ equals 1. Both monotonic instability and growing oscillations ("overstability") are involved, the former determining the stability criterion and having the higher growth rates. The more σ differs from 1, the larger the region in parameter space for which the flow is stable by the classical criterion but actually unstable.

If the baroclinity is sufficiently great for the classical criterion also to indicate instability, the corresponding inviscid adiabatic modes usually have the numerically highest growth rates. An exception is the case of small isotherm slope and small σ.

A single normal mode of the linearized theory is also, formally, a finite amplitude solution; however, no theoretical attempt is made to assess the effect of finite amplitude in general. But, in a following paper, viscous overturning (the mechanism giving rise to the sub-classical monotonic instability when σ > 1) is shown to play an important role at finite amplitude in certain examples of nonlinear steady thermally-driven axisymmetric flow of water in a rotating annulus. Irrespective of whether analogous mechanisms turn out to be identifiable and important in large-scale nature, it appears then that a Prandtl-type parameter should enter the discussion of any attempt to make laboratory or numerical models of zonally-symmetric baroclinic geophysical or astrophysical flows.

5.
In their spatial distribution as well as in their different states of activity, rockglaciers carry important information on former and recent permafrost conditions. Two different methods were applied in one study area (Turtmann Valley, Swiss Alps) in order to compare their suitability for assessing rockglacier activity. The comparison of geomorphological mapping and photogrammetric monitoring showed good agreement, especially on a regional scale. On a local scale, some differences in the delimitation of the landforms as well as in the degree of activity were found. One reason for the observed differences is the qualitative character of geomorphological mapping, resulting from the variable suitability of single parameters, and combinations thereof, for determining rockglacier activity. Based on these results, geomorphological mapping of rockglaciers can be improved by data from photogrammetric monitoring. Ideally, therefore, the two methods are combined when analysing former and present permafrost distribution. Copyright © 2007 John Wiley & Sons, Ltd.

6.
Methods for the determination of three compound classes, i.e., diaminotoluenes, nitrophenols, and chloroaromatics, in groundwater of a former ammunition plant are reported. Diaminotoluenes were extracted by discontinuous liquid/liquid extraction, nitrophenols by continuous liquid/liquid extraction using dichloromethane, and chloroaromatics by solid-phase extraction. These compound classes may be analyzed by gas chromatography (GC) or gas chromatography coupled to mass spectrometry (GC/MS) without derivatization, or after derivatization with N-methyl-bis(trifluoroacetamide) (MBTFA) or heptafluorobutyric anhydride (HFBA) in the case of diaminotoluenes, and HFBA or acetic anhydride in the case of nitrophenols. An atomic emission detector (AED) coupled to a gas chromatograph may be employed for the analysis of chloroaromatics; high selectivity can be achieved using the characteristic wavelengths of chlorine. A variety of these compounds were identified and quantified in a groundwater sample from the former ammunition plant Elsnig (Saxony, Germany). Concentrations were in the lower ppb range. Dichlorobenzenes, which may have been used as substitutes at the end of World War II, could thus be identified in groundwater samples at this site.

7.
Pump-and-treat systems can prevent the migration of groundwater contaminants, and candidate systems are typically evaluated with groundwater models. Such models should be rigorously assessed to determine their predictive capabilities, and numerous tools and techniques for model assessment are available. While various assessment methodologies (e.g., model calibration, uncertainty analysis, and Bayesian inference) are well established for groundwater modeling, this paper calls attention to an alternative assessment technique known as screening-level sensitivity analysis (SLSA). SLSA can quickly quantify first-order (i.e., main-effects) measures of parameter influence in connection with various model outputs. Subsequent comparisons of parameter influence with respect to calibration vs. prediction outputs can suggest gaps in model structure and/or data. Thus, while SLSA has received little attention in the context of groundwater modeling and remedial system design, it can nonetheless serve as a useful and computationally efficient tool for preliminary model assessment. To illustrate the use of SLSA in the context of designing groundwater remediation systems, four SLSA techniques were applied to a hypothetical, yet realistic, pump-and-treat case study to determine the relative influence of six hydraulic conductivity parameters. The methods considered were: Taguchi design-of-experiments (TDOE); Monte Carlo statistical independence (MCSI) tests; average composite scaled sensitivities (ACSS); and elementary effects sensitivity analysis (EESA). In terms of performance, the various methods identified the same parameters as being the most influential for a given simulation output. Furthermore, the results indicate that the background hydraulic conductivity (KBK) is important for predicting system performance, but that calibration outputs are insensitive to this parameter. The observed insensitivity is attributed to a nonphysical specified-head boundary condition used in the model formulation, which effectively "staples" head values located within the conductivity zone. Thus, potential strategies for improving model predictive capabilities include additional data collection targeting the KBK parameter and/or revision of the model structure to reduce the influence of the specified-head boundary.
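To make the elementary-effects style of screening (EESA) concrete, the sketch below applies a simplified one-at-a-time Morris-type design to a stand-in analytic function with six parameters, reporting the mean absolute elementary effect (mu*) and its spread for each parameter. The function and parameter ranges are hypothetical; a real application would wrap the groundwater model run instead, and the full trajectory design differs in detail.

import numpy as np

def model(x):
    """Stand-in for a simulation output (e.g., a predicted head or capture metric)."""
    k1, k2, k3, k4, k5, k6 = x
    return 3.0 * k1 + 0.5 * k2 ** 2 + k3 * k4 + 0.1 * k5 + 0.0 * k6   # k6 deliberately inert

rng = np.random.default_rng(0)
n_par, n_traj, p_levels = 6, 20, 4
delta = p_levels / (2.0 * (p_levels - 1))            # standard Morris step on the unit cube
n_valid = int(round((1.0 - delta) * (p_levels - 1))) + 1

effects = np.zeros((n_traj, n_par))
for r in range(n_traj):
    base = rng.integers(0, n_valid, n_par) / (p_levels - 1)   # grid point with room for +delta
    y0 = model(base)
    for i in range(n_par):                           # perturb one parameter at a time
        x = base.copy()
        x[i] += delta
        effects[r, i] = (model(x) - y0) / delta

mu_star = np.abs(effects).mean(axis=0)               # overall influence of each parameter
sigma = effects.std(axis=0)                          # nonlinearity / interaction indicator
for i, (m, s) in enumerate(zip(mu_star, sigma), start=1):
    print(f"K{i}: mu* = {m:6.3f}   sigma = {s:6.3f}")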

8.
A two-parameter transfer function with an infinite characteristic time is proposed for conceptual rainfall–runoff models. The large-time behaviour of the unit response is an inverse power function of time. The infinite characteristic time allows long-term memory effects to be accounted for; such effects are observed in mountainous and karst catchments. The governing equation of the model is a fractional differential equation in the limit of long times. Although linear, the proposed transfer function yields discharge signals that can usually be obtained only using non-linear models. The model is applied successfully to two catchments, the Dud Koshi mountainous catchment in the Himalayas and the Durzon karst catchment in France. It compares favourably with linear and non-linear single-reservoir models and with the GR4J model. With a single reservoir and a single transfer function, the model is capable of reproducing hysteretic behaviours identified as typical of long-term memory effects. Computational efficiency is enhanced by approximating the infinite-characteristic-time transfer function with a sum of simpler, exponential transfer functions. This amounts to partitioning the reservoir into several linear sub-reservoirs, the output discharges of which are easy to compute. An efficient partitioning strategy is presented to facilitate the practical implementation of the model. Copyright © 2015 John Wiley & Sons, Ltd.
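A rough Python sketch of the acceleration described above, under invented numbers: a slowly decaying power-law unit response (infinite characteristic time) is approximated by a least-squares combination of exponential kernels with log-spaced time constants, i.e., a set of linear sub-reservoirs, and both kernels are convolved with the same rainfall pulse to check the discharge error. The exponent, time constants and rainfall are hypothetical, not the paper's calibrated model.

import numpy as np

t = np.arange(1.0, 2001.0)                           # time in days (hypothetical)
alpha = 0.6
h_power = t ** (-alpha)                              # power-law (long-memory) unit response
h_power /= h_power.sum()                             # normalise to unit volume

taus = np.logspace(0.0, 3.5, 8)                      # sub-reservoir time constants, days
basis = np.exp(-t[:, None] / taus[None, :]) / taus[None, :]
weights, *_ = np.linalg.lstsq(basis, h_power, rcond=None)
h_expsum = basis @ weights                           # sum-of-exponentials approximation

rain = np.zeros_like(t)
rain[10:15] = 20.0                                   # a 5-day rainfall pulse, mm/day (hypothetical)
q_power = np.convolve(rain, h_power)[: t.size]
q_expsum = np.convolve(rain, h_expsum)[: t.size]
print(f"max relative discharge error: {np.max(np.abs(q_power - q_expsum)) / q_power.max():.3f}")

Each exponential term can then be advanced recursively in time, which is what makes the partitioned form cheap to run.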

9.
An approximation is developed that allows mapped 4D seismic amplitudes and time-shifts to be related directly to a weighted linear sum of pore-pressure and saturation changes. The weights in this relation are identified as key groups of parameters from a petroelastic model and include the reservoir porosity. This dependence on groups of parameters explains the inherent non-uniqueness of this problem experienced by previous researchers. The proposed relation is of use in 4D seismic data feasibility studies and in inversion and interpretation of the 4D seismic response in terms of pore-pressure and water-saturation changes. A further result is drawn from analysis of data from the North Sea and West Africa, which reveals that the relative interplay between the effects of pore-pressure and saturation changes on the seismic data can be simplified to the control of a single, spatially variant parameter CS/CP. Combining these results with those from the published literature, we find that CS/CP = 8 appears to hold generally across a range of clastic reservoirs with a similar mean porosity. Using this CS/CP value, an in situ, seismic-scale constraint for the rock stress-sensitivity component of the petroelastic model is constructed, since this component carries the largest uncertainty.
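The sketch below illustrates the linearised relation in the simplest possible way: if the mapped amplitude change and time-shift are each a weighted linear sum of the pore-pressure and water-saturation changes, the two changes follow from a 2x2 solve at every map location. The sensitivity coefficients and "true" changes here are hypothetical placeholders, not the petroelastic parameter groups calibrated in the paper.

import numpy as np

# Hypothetical linearised sensitivities:  dA = a_p*dP + a_s*dSw,  dT = b_p*dP + b_s*dSw
a_p, a_s = -0.004, 0.060      # amplitude change per MPa and per unit saturation change
b_p, b_s = 0.150, 1.200       # time-shift (ms) per MPa and per unit saturation change

dP_true, dSw_true = 3.0, 0.25                        # synthetic "truth" for one map location
dA = a_p * dP_true + a_s * dSw_true                  # forward-modelled 4D attributes
dT = b_p * dP_true + b_s * dSw_true

M = np.array([[a_p, a_s],
              [b_p, b_s]])
dP_est, dSw_est = np.linalg.solve(M, np.array([dA, dT]))
print(f"recovered dP = {dP_est:.2f} MPa, dSw = {dSw_est:.3f}")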

10.
An extension of a previously developed rock physics model is made that quantifies the relationship between the ductile fraction of a brittle/ductile binary mixture and the isotropic seismic reflection response. By making a weak-scattering (Born) approximation and a plane-wave (eikonal) approximation, with a subsequent ordering according to the angles of incidence, singular value decomposition analyses are performed to understand the stack weightings, the number of stacks, and the type of stacks that will optimally estimate two fundamental rock physics parameters – the ductile fraction and the compaction and/or diagenesis. It is concluded that the full PP stack, i.e., the sum of all PP offset traces, and the "full" PS stack, i.e., a linearly weighted sum of PS offset traces, are the two optimal stacks needed to estimate the two rock physics parameters. They dominate over both the second-order amplitude-variation-with-offset "gradient" stack, which is a quadratically weighted sum of PP offset traces that is effectively the far offset traces minus the near offset traces, and the higher-order (fourth-order) PP stack, even at large angles of incidence. Using this result and model-based Bayesian inversion, the seismic detectability of the ductile fraction (shown by others to be the important rock property for the geomechanical response of unconventional reservoir fracking) is demonstrated on a model characteristic of the Marcellus shale play.

11.
Methods for determining the T2 cutoff in NMR logging and analysis of their applicability
The T2 cutoff is an important parameter in nuclear magnetic resonance (NMR) logging: it determines the accuracy of the effective porosity, permeability, free-fluid saturation, irreducible-water saturation and other parameters derived from NMR logging measurements. The values commonly adopted at home and abroad are 33 ms for sand-shale (clastic) reservoirs and 92 ms for carbonate reservoirs. In practice, however, the T2 cutoff is found to be a variable quantity rather than a single value, and simply applying a single T2 cutoff to compute formation parameters inevitably introduces errors or even leads to incorrect interpretations. This paper describes the origin of the 33 ms T2 cutoff and its shortcomings, and analyses the various methods currently used at home and abroad to determine the T2 cutoff, together with their applicability.
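The toy calculation below shows why the choice matters: a synthetic T2 distribution is split at a cutoff into bound-fluid (BVI) and free-fluid (FFI) volumes, and shifting the cutoff away from the conventional 33 ms visibly changes both. The distribution and porosity are invented for illustration only.

import numpy as np

t2 = np.logspace(-1, 4, 64)                          # T2 bins, 0.1 ms to 10 s
# Synthetic bimodal T2 distribution (small-pore + large-pore water), hypothetical
amp = (0.4 * np.exp(-0.5 * ((np.log10(t2) - 0.5) / 0.3) ** 2)
       + 1.0 * np.exp(-0.5 * ((np.log10(t2) - 2.3) / 0.4) ** 2))
phi_total = 0.20                                     # total NMR porosity (fraction)
phi_bins = amp / amp.sum() * phi_total               # porosity carried by each T2 bin

for cutoff in (20.0, 33.0, 50.0):                    # ms; 33 ms is the conventional clastic value
    bvi = phi_bins[t2 < cutoff].sum()                # bound (irreducible) fluid volume
    ffi = phi_bins[t2 >= cutoff].sum()               # free-fluid volume
    print(f"T2cutoff = {cutoff:5.1f} ms   BVI = {bvi:.3f}   FFI = {ffi:.3f}")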

12.
The generalization of the parameters of rainfall–runoff models, to enable application at ungauged sites, is an important and ongoing area of research. This paper compares the performance of three alternative methods of generalization, for two parameter-sparse conceptual models (PDM and TATE), specifically for use in flood frequency estimation using continuous simulation. Two of the methods are based on fitting regression relationships between catchment properties and calibrated parameter values, using weighted or sequential regression (with weights based on estimates of calibration uncertainty), and the third is based on the use of pooling groups, defined through measures of site-similarity based on catchment properties. The study uses a relatively large sample of catchments in Britain. For the PDM, the site-similarity method performs best, but not greatly better than either regression method, so there may be cases where the use of regression would be preferable. For the TATE model, weighted regression performs best (with a very similar level of performance to that of the PDM with site-similarity), whereas site-similarity performs worst (due to poor performance for catchments with higher baseflow), indicating that the choice of model and generalization method should not be separated. The use of sequential regression, which was developed to try to allow for parameter interdependence, shows no clear advantage for either model. Other than the poor performance of the TATE model with site-similarity for catchments with a higher baseflow index, there are no clear relationships between performance of any model/method and catchment type. Copyright © 2006 John Wiley & Sons, Ltd.
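A minimal sketch of the weighted-regression flavour of generalization discussed above, on synthetic data: each catchment's calibrated parameter value is regressed on its catchment properties, with weights taken as the inverse of an assumed calibration variance, and the fitted relation is then evaluated at an "ungauged" site. Properties, coefficients and variances are all invented.

import numpy as np

rng = np.random.default_rng(1)
n_catch = 40
# Hypothetical catchment descriptors: intercept, log(area), annual rainfall, baseflow index
X = np.column_stack([np.ones(n_catch),
                     rng.normal(5.0, 1.0, n_catch),
                     rng.normal(1000.0, 200.0, n_catch),
                     rng.uniform(0.2, 0.8, n_catch)])
beta_true = np.array([2.0, 0.5, 0.002, -1.5])        # invented "regional" relationship
calib_var = rng.uniform(0.05, 0.5, n_catch)          # calibration uncertainty per catchment
y = X @ beta_true + rng.normal(0.0, np.sqrt(calib_var))

w = np.sqrt(1.0 / calib_var)                         # weight = 1 / calibration variance
beta_hat, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
print("fitted coefficients:", np.round(beta_hat, 3))

x_ungauged = np.array([1.0, 5.5, 900.0, 0.6])        # properties of an ungauged site (hypothetical)
print("generalized parameter value:", round(float(x_ungauged @ beta_hat), 3))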

13.
The Vistula (Wisla) river, the biggest river in Poland, is 1038 km long and has a drainage area of 168 689 km2. The river is strongly polluted by wastewaters. Large industrial plants are situated mainly in the upper part of the river, where it is slow-flowing. This paper presents the results of the analysis of bottom sediment samples gathered from different locations along the Vistula river, from Kraków to Gdansk. The study was conducted in 2005. The following parameters were determined: chloroorganic pesticides, polychlorinated biphenyls (PCBs), chlorophenols, polycyclic aromatic hydrocarbons (PAHs), and total organic carbon (TOC). The sum of chloroorganic pesticides was in the range of 2.0 to 77.5 ng/g d.w. (dry weight), with the highest values in the upper part of the river; p,p′-DDT was found in the highest concentration. The sum of PCBs was in the range of 0.9 to 64.2 ng/g d.w. The sum of chlorophenols varied from 0.48 to 14.3 ng/g d.w.; 2,4-dichlorophenol and pentachlorophenol occurred in the highest concentrations. The sum of PAHs was in the range of 1552 to 7832 ng/g d.w.; phenanthrene was found in the highest concentration and anthracene in the lowest. TOC values varied from 4.3 to 43.9 g/kg d.w. The concentrations of pesticides and PCBs were highest in the upper part of the river and decreased along the course of the river; the other determined compounds did not show this trend, although the highest values always occurred in the upper part of the river.

14.
A value of 0.001 is recommended by the United States Environmental Protection Agency (USEPA) for its groundwater-to-indoor air Generic Attenuation Factor (GAFG), used in assessing potential vapor intrusion (VI) impacts to indoor air, given measured groundwater concentrations of volatile chemicals of concern (e.g., chlorinated solvents). The GAFG can, in turn, be used for developing groundwater screening levels for VI given target indoor air quality screening levels. In this study, we examine the validity and applicability of the GAFG both for predicting indoor air impacts and for determining groundwater screening levels. This is done using both analysis of published data and screening model calculations. Among the 774 total paired groundwater–indoor air measurements in the USEPA's VI database (which were used by that agency to generate the GAFG), we found 427 pairs for which a single groundwater measurement or interpolated value was applied to multiple buildings. In one case, up to 73 buildings were associated with a single interpolated groundwater value, and in another case up to 15 buildings were associated with a single groundwater measurement (i.e., the indoor air contaminant concentrations in all of the associated buildings were taken to be influenced by the concentration determined at a single point). In more than 70% of the cases (390 of 536 paired measurements in which the horizontal building–monitoring well distance was recorded), the monitoring wells were located more than 30 m (and in one case over 200 m) from the associated buildings. In a few cases, the measurements in the database even improbably implied that soil gas contaminant concentrations increased, rather than decreased, in the upward direction from a contaminant source to a foundation slab. Such observations indicate problematic source characterization within the data set used to generate the GAFG, and some indicate the possibility of a significant influence of a preferential contaminant pathway. While the inherent value of the USEPA database itself is not being questioned here, the above facts raise the very real possibility that the recommended groundwater attenuation factors are being influenced by variables or conditions that have not thus far been fully accounted for. In addition, the predicted groundwater attenuation factors often fall far beyond the upper limits of predictions from mathematical models of VI, ranging from screening models to detailed computational fluid dynamic models. All these models are based on the same fundamental conceptual site model, involving a vadose zone vapor transport pathway starting at an underlying uniform groundwater source and leading to the foundation of a building of concern. According to the analysis presented here, we believe that for scenarios for which such a "traditional" VI pathway is appropriate, 10−4 is a more appropriately conservative generic groundwater-to-indoor air attenuation factor than the EPA-recommended 10−3. This is based both on the statistical analysis of USEPA's VI database and on the traditional mathematical models of VI. This result has been validated by comparison with results from some well-documented field studies.
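The arithmetic behind the attenuation-factor debate is simple, and the sketch below walks through it for a hypothetical site: the indoor air concentration is the attenuation factor times the vapour concentration in equilibrium with groundwater (Henry's law), and the groundwater screening level is the target indoor air concentration divided by the same product. The Henry's law constant is an approximate textbook value for TCE and the concentrations are invented; none of the numbers are taken from the paper.

# C_indoor [ug/m3] = AF * H * C_gw [ug/L] * 1000 [L/m3]
H_TCE = 0.4             # dimensionless Henry's law constant for TCE, approximate (~25 C)
c_gw = 5.0              # measured groundwater concentration, ug/L (hypothetical)
target_indoor = 2.0     # target indoor air screening level, ug/m3 (hypothetical)

for af in (1e-3, 1e-4):                              # USEPA generic GAF vs. the value argued for above
    c_indoor = af * H_TCE * c_gw * 1000.0            # predicted indoor air concentration, ug/m3
    gw_screen = target_indoor / (af * H_TCE * 1000.0)  # groundwater screening level, ug/L
    print(f"AF = {af:.0e}:  indoor air = {c_indoor:.2f} ug/m3,  "
          f"groundwater screening level = {gw_screen:.1f} ug/L")

Dropping the attenuation factor by one order of magnitude raises the corresponding groundwater screening level by the same factor of ten, which is the practical consequence the paper argues over.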

15.
We present a seismic Test Line, provided by Saudi Aramco to various research teams, to highlight a few major challenges in land data processing due to near-surface anomalies. We discuss state-of-the-art methods used to compensate for shallow distortions, including single-layer, multilayer, plus/minus, refraction and tomostatics methods. They are a starting point for the new technologies presented in other papers, all dealing with the same challenging data described here. The difficulties on the Test Line are mostly due to the assumption of vertical raypaths, inherent in classical applications of near-surface correction statics. Even the most detailed velocity/depth model presents difficulties, due to the complexity of the raypaths. There is a need for methods which are based on more complex models and theories.

16.
A number of methods have been proposed that utilize time-domain transformations of frequency-dependent dynamic impedance functions to perform a time-history analysis. Though these methods have been available in the literature for a number of years, they exhibit stability issues depending on how the model parameters are calibrated. In this study, a novel method is proposed with which the stability of a numerical integration scheme combined with a time-domain representation of a frequency-dependent dynamic impedance function can be evaluated. The method is verified with three independent recursive parameter models. The proposed method is expected to be a useful tool for evaluating potential stability issues of a time-domain analysis before running a full-fledged nonlinear time-domain analysis of a soil–structure system in which the dynamic impedance of the soil–foundation system is represented with a recursive parameter model. Copyright © 2015 John Wiley & Sons, Ltd.
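One ingredient of such an assessment can be illustrated compactly: if the frequency-dependent impedance has been fitted with a discrete-time recursive (IIR) parameter model, the recursion itself is stable only when all poles of its transfer function lie inside the unit circle. The coefficient sets below are arbitrary placeholders, and this pole check alone is not the paper's full procedure, which also accounts for the coupled time-integration scheme.

import numpy as np

def recursive_model_poles(a_coeffs):
    """Poles of y[n] = sum_k b[k]*x[n-k] - sum_k a[k]*y[n-k], i.e. roots of
    the denominator polynomial 1 + a1*z^-1 + ... + aM*z^-M."""
    return np.roots(np.concatenate(([1.0], a_coeffs)))

for a in ([-1.6, 0.68], [-1.9, 0.97], [-2.05, 1.10]):   # hypothetical denominator coefficients
    poles = recursive_model_poles(np.asarray(a))
    stable = bool(np.all(np.abs(poles) < 1.0))
    print(f"a = {a}:  |poles| = {np.round(np.abs(poles), 3)}  ->  stable = {stable}")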

17.
Compressional-wave Q estimation from full-waveform sonic data
There is significant evidence that the anelastic loss of seismic energy is linked to petrophysical properties such as porosity, permeability and clay content. Thus, reliable estimation of anelastic attenuation from seismic data can lead to improved methods for the prediction of petrophysical properties. This paper is concerned with methods for the estimation of attenuation at sonic frequencies (5–30 kHz) from in situ data. Two independent methods have been developed and tested for estimating compressional-wave attenuation from full-waveform sonic data. A well-established technique, the logarithm spectral ratio (LSR) method, is compared with a new technique, the instantaneous frequency (IF) method. The LSR method uses the whole spectrum of the seismic pulse, whilst the IF method uses a carefully estimated value of instantaneous frequency which is representative of the centre frequency of the pulse. In the former case, attenuation estimation is based on the relative variation of amplitudes at different frequencies, whilst in the latter case it is based on the shift of the centre frequency of the pulse to lower values during anelastic wave propagation. The IF method does not assume frequency independence of Q, which is a necessary assumption for the LSR method, and it provides a stable frequency log, the peak instantaneous frequency (PIF) log, which may be used as an indicator of attenuation under certain limitations. The development and implementation of the two methods aimed at minimizing the effect of secondary arrivals, such as leaky modes, and involved a series of parameter tests. Testing of the two methods using full-waveform sonic data of variable quality, obtained from a gas-bearing sandstone reservoir, showed that the IF method is in general more stable and better suited to full-waveform sonic data than the LSR method. This was evident especially in data sets with high background noise levels and wave-interference effects. For good-quality data, the two methods gave results that showed good agreement, whilst comparison with other log types further increased confidence in the results obtained. A significant decrease (approximately 5 kHz) in the PIF values was observed in the transition from an evaporite/shale sequence to the gas-bearing sandstone. Average Q values of 54 and 51 were obtained using good-quality data from a test region within the gas-saturated sandstone reservoir, using the LSR and IF methods, respectively.
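The core of the LSR method can be demonstrated on a synthetic pulse pair, since for a frequency-independent Q the log spectral ratio is linear in frequency: ln[A2(f)/A1(f)] = -pi*f*dt/Q + constant, so Q follows from the fitted slope. The Ricker pulse, travel-time difference and Q value below are synthetic, chosen only to sit in the sonic band; this is a bare-bones sketch, not the authors' full processing chain.

import numpy as np

fs = 200_000.0                                       # sampling rate, Hz (sonic band)
t = np.arange(0.0, 2e-3, 1.0 / fs)
f0 = 12_000.0                                        # Ricker centre frequency, Hz
arg = (np.pi * f0 * (t - 5e-4)) ** 2
near = (1.0 - 2.0 * arg) * np.exp(-arg)              # pulse at the near receiver

q_true, dt_prop = 50.0, 4.0e-4                       # Q and extra travel time to the far receiver (s)
freq = np.fft.rfftfreq(t.size, 1.0 / fs)
spec_near = np.fft.rfft(near)
spec_far = spec_near * np.exp(-np.pi * freq * dt_prop / q_true)   # constant-Q amplitude decay

band = (freq > 5_000.0) & (freq < 30_000.0)          # fit only within the usable band
log_ratio = np.log(np.abs(spec_far[band]) / np.abs(spec_near[band]))
slope, _ = np.polyfit(freq[band], log_ratio, 1)
print(f"true Q = {q_true:.0f},  LSR-estimated Q = {-np.pi * dt_prop / slope:.1f}")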

18.
The robustness of large quantile estimates to the largest elements in a small sample was evaluated and compared for the method of moments (MOM), L-moments (LMM) and maximum likelihood (MLM). Bias (B) and mean square error (MSE) were used to measure the performance of the estimation methods. Quantiles were estimated by eight two-parameter probability distributions with the coefficient of variation as the shape parameter. The effect of dropping the largest elements of the series on large quantile values was assessed for various coefficient of variation (CV)/sample size (n) combinations, with n = 30 as the basic value. To that end, both Monte Carlo sampling experiments and an asymptotic approach consisting of distribution truncation were applied. In general, both the sampling and asymptotic approaches point to MLM as the most robust of the three methods considered with respect to the bias of large quantiles. Comparing the performance of the two other methods, the MOM estimates were found to be more robust than the LMM estimates for small and moderate hydrological samples drawn from distributions with a zero lower bound. Extending the evaluation to outliers, it was shown that all the above findings remain valid. However, using the MSE variation as a measure of performance, LMM was found to be the best for most distribution/variation coefficient combinations, whereas MOM was found to be the worst. Moreover, removal of the largest sample element need not result in a loss of estimation efficiency. A gain in accuracy is observed for the heavy-tailed and log-normal distributions, being particularly distinct for LMM. In practice, when dealing with a single sample deprived of its largest element, one should choose the estimation method giving the lowest MSE of large quantiles. For n = 30 and several distribution/variation coefficient combinations, MLM outperformed the two other methods in this respect and its advantage grew with sample size, while MOM was usually the worst. Copyright © 2006 John Wiley & Sons, Ltd.
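In the same spirit, the Monte Carlo sketch below compares the bias and error of an upper quantile estimated by the method of moments and by maximum likelihood for samples of size n = 30, using the Gumbel distribution purely as a stand-in; the paper itself works with eight two-parameter distributions parameterised through the coefficient of variation, and also examines L-moments and sample censoring, none of which is reproduced here.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, n_rep, p = 30, 2000, 0.99                         # sample size, replicates, quantile order
loc0, scale0 = 100.0, 30.0                           # hypothetical "true" Gumbel parameters
x_true = stats.gumbel_r.ppf(p, loc=loc0, scale=scale0)
euler_gamma = 0.57721566                             # Euler-Mascheroni constant

q_mom, q_mlm = [], []
for _ in range(n_rep):
    s = stats.gumbel_r.rvs(loc=loc0, scale=scale0, size=n, random_state=rng)
    scale_mom = s.std(ddof=1) * np.sqrt(6.0) / np.pi     # method-of-moments estimators
    loc_mom = s.mean() - euler_gamma * scale_mom
    q_mom.append(stats.gumbel_r.ppf(p, loc=loc_mom, scale=scale_mom))
    loc_ml, scale_ml = stats.gumbel_r.fit(s)             # maximum-likelihood estimators
    q_mlm.append(stats.gumbel_r.ppf(p, loc=loc_ml, scale=scale_ml))

for name, q in (("MOM", np.asarray(q_mom)), ("MLM", np.asarray(q_mlm))):
    print(f"{name}: relative bias = {(q.mean() - x_true) / x_true:+.3%},  "
          f"RMSE = {np.sqrt(np.mean((q - x_true) ** 2)):.1f}")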

19.
The rapid development of seismo-electromagnetic satellites has raised a number of related research topics. Drawing on the mature monitoring techniques and parameters of ground-based seismo-electromagnetic observation, this paper analyses how information possibly related to seismo-electromagnetic anomalies can be extracted from the electric- and magnetic-field components recorded by such satellites. It introduces amplitude statistics of electromagnetic-field time series; the calculation of the electromagnetic spectrum, in particular the auto-power spectrum, which is sensitive to noise interference; a method for analysing changes in the coherence between orthogonal electric and magnetic field components; the derivation of plane-wave characteristic indices for space electromagnetic waves; and, under the assumption of simple plane-wave characteristics, a method for computing the spatial impedance tensor.
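A short sketch of two of the quantities listed above (the auto-power spectrum of one field component and the magnitude-squared coherence between two orthogonal components), computed with Welch averaging on synthetic signals; the sampling rate, signal content and the 15 Hz line shared by the two channels are invented for illustration.

import numpy as np
from scipy import signal

fs = 512.0                                           # sampling rate, Hz (hypothetical)
t = np.arange(0.0, 60.0, 1.0 / fs)
rng = np.random.default_rng(0)
ex = np.sin(2 * np.pi * 15.0 * t) + 0.8 * rng.standard_normal(t.size)              # electric component
by = 0.5 * np.sin(2 * np.pi * 15.0 * t + 0.3) + 0.8 * rng.standard_normal(t.size)  # orthogonal magnetic component

f, pxx = signal.welch(ex, fs=fs, nperseg=2048)            # auto-power spectral density of ex
fc, coh = signal.coherence(ex, by, fs=fs, nperseg=2048)   # coherence between ex and by

i = np.argmin(np.abs(f - 15.0))
j = np.argmin(np.abs(fc - 15.0))
print(f"PSD of ex at 15 Hz: {pxx[i]:.3e}   coherence(ex, by) at 15 Hz: {coh[j]:.2f}")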

20.
Soil vapor extraction (SVE) is a prevalent remediation remedy for volatile organic compound (VOC) contaminants in the vadose zone. To support selection of an appropriate condition at which SVE may be terminated for site closure or for transition to another remedy, an evaluation is needed to determine whether vadose zone VOC contamination has been diminished sufficiently to keep groundwater concentrations below threshold values. A conceptual model for this evaluation was developed for VOC fate and transport from a vadose zone source to groundwater when vapor-phase diffusive transport is the dominant transport process. A numerical analysis showed that, for these conditions, the groundwater concentration is controlled by a limited set of parameters, including site-specific dimensions, vadose zone properties, and source characteristics. On the basis of these findings, a procedure was then developed for estimating groundwater concentrations using results from the three-dimensional multiphase transport simulations for a matrix of parameter value combinations and covering a range of potential site conditions. Interpolation and scaling processes are applied to estimate groundwater concentrations at compliance (monitoring) wells for specific site conditions of interest using the data from the simulation results. The interpolation and scaling methodology using these simulation results provides a far less computationally intensive alternative to site-specific three-dimensional multiphase site modeling, while still allowing for parameter sensitivity and uncertainty analyses. With iterative application, the approach can be used to consider the effect of a diminishing vadose zone source over time on future groundwater concentrations. This novel approach and related simulation results have been incorporated into a user-friendly Microsoft® Excel®-based spreadsheet tool entitled SVEET (Soil Vapor Extraction Endstate Tool), which has been made available to the public.
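The interpolation-and-scaling step can be pictured with a toy lookup table: normalised groundwater concentrations pre-computed on a coarse grid of two site parameters are interpolated at the site-specific values and scaled by the source concentration. The parameter axes, table values, units and scaling below are placeholders, not the actual SVEET tables.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical pre-computed results: groundwater concentration per unit source
# concentration, simulated on a coarse grid of depth to water table [m] and
# lateral source-to-well distance [m].
depth_grid = np.array([5.0, 10.0, 20.0, 40.0])
dist_grid = np.array([1.0, 3.0, 10.0, 30.0])
norm_cgw = np.array([[0.200, 0.120, 0.050, 0.010],
                     [0.150, 0.090, 0.040, 0.008],
                     [0.100, 0.060, 0.025, 0.005],
                     [0.060, 0.035, 0.015, 0.003]])

interp = RegularGridInterpolator((depth_grid, dist_grid), norm_cgw)

site_depth, site_dist = 12.0, 6.0                    # site-specific values (hypothetical)
source_conc = 40.0                                   # vadose-zone source concentration (hypothetical units)
c_well = interp([[site_depth, site_dist]]).item() * source_conc
print(f"estimated concentration at the compliance well: {c_well:.2f} (same units as the source)")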

