The objective of in situ thermal treatment is typically to reduce the contaminant mass or average soil concentration below a specified value. Whether the objective has been met is usually evaluated by averaging soil concentrations from a limited number of soil samples. Results from several field sites indicate large performance uncertainty with this approach, even when the number of samples is large. We propose a method to estimate the average soil concentration by fitting a log-normal probability model to thermal mass recovery data. A statistical approach is presented for making termination decisions from mass recovery data, soil sample data, or both, for an entire treatment volume or for subregions, that explicitly considers estimation uncertainty; it is coupled to a stochastic optimization algorithm that identifies monitoring strategies to meet objectives at minimum expected cost. Early termination of heating in regions that reach cleanup targets sooner reduces operating costs while ensuring a high likelihood of meeting remediation objectives. Results for an example problem demonstrate that significant performance improvements and cost reductions can be achieved with this approach.
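The core estimation step, fitting a log-normal model to concentration data and reporting its mean, can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's method for combining mass recovery and soil sample data; the sample values and distribution parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical soil-concentration samples (mg/kg), standing in for field data.
samples = rng.lognormal(mean=1.0, sigma=0.8, size=40)

# Maximum-likelihood log-normal parameters from the log-transformed data.
log_s = np.log(samples)
mu, sigma = log_s.mean(), log_s.std(ddof=1)

# Mean of a log-normal variable: exp(mu + sigma^2 / 2).
est_mean = np.exp(mu + 0.5 * sigma**2)
print(f"estimated average concentration: {est_mean:.2f} mg/kg")
```

Because the log-normal mean depends on both mu and sigma, this estimator uses all samples' spread rather than only their arithmetic average, which is why a distributional fit can outperform simple averaging when samples are few.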
This paper presents a general framework for predicting the residual drift of idealized SDOF systems that can be used to represent non-degrading structures, including those with supplemental dampers. The framework first uses post-peak oscillation analysis to predict the maximum ratio of residual displacement to peak transient displacement in a random sample. Residual displacement ratios obtained from nonlinear time-history analyses using both far-field and near-fault-pulse records were then examined to identify trends, which were explained using the oscillation mechanics of SDOF systems. It is shown that large errors can result from existing probability models that do not capture the influence of key parameters on the residual displacement. Building on these observations, a general probability distribution for the ratio of residual displacement to peak transient displacement, one that more accurately reflects the physical bounds obtained from post-peak oscillation analysis, is proposed for capturing the probabilistic residual displacement response of these systems. The proposed distribution is shown to be more accurate than previously proposed distributions in the literature because it explicitly accounts for dynamic and damping properties, which have a significant impact on residual displacement. This study provides a rational basis for further development of a residual drift prediction tool for the performance-based design and analysis of more complex multi-degree-of-freedom systems.
This article introduces a type of DBMS called the Intentionally-Linked Entities (ILE) DBMS for use as the basis for temporal and historical Geographical Information Systems. ILE represents each entity in a database only once, thereby largely eliminating redundancy and fragmentation, two major problems in Relational and other database systems. These advantages are realized by using relationship objects and pointers to implement all relationships among data entities natively, with dynamically allocated linked data structures. ILE can be considered a modern and extended implementation of the E/R data model. ILE also facilitates storage of entries that are more faithful to the historical record, such as gazetteer entries for places with imprecisely known or unknown locations. This is difficult in Relational database systems but is a routine task in ILE because ILE is implemented using modern memory allocation techniques. We use the China Historical GIS (CHGIS) and other databases to illustrate the advantages of ILE by modeling these databases in ILE and comparing them to the existing Relational implementations.
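The pointer-based linking idea can be illustrated with a small sketch. This is a hypothetical Python analogue, not the ILE implementation itself: each entity object exists exactly once, and a relationship object holds references to its members rather than copies, so updating an entity is visible through every relationship that links it.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    # Each real-world entity is stored exactly once.
    name: str

@dataclass
class Relationship:
    # A relationship object links entities by reference (pointer),
    # so no entity data is duplicated across relations.
    kind: str
    members: list = field(default_factory=list)

# Hypothetical gazetteer-style example in the spirit of CHGIS.
changan = Entity("Chang'an")
tang = Entity("Tang dynasty")
rel = Relationship("capital_of", [changan, tang])

# The relationship points at the same single Entity instance,
# not a copied row as in a Relational join table.
assert rel.members[0] is changan
```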
Fault-controlled hydrothermal dolomitization in tectonically complex basins can occur at any depth and from fluids of different compositions, including 'deep-seated', 'crustal' or 'basinal' brines. Nevertheless, many studies have failed to identify the actual source of these fluids, leaving a gap in our knowledge of the likely source of magnesium in hydrothermal dolomitization. Drawing on new concepts in hydrothermal dolomitization, this study tests the hypothesis that the dolomitizing fluids were sourced from seawater, from ultramafic carbonation, or from a mixture of the two, using the Cambrian Mount Whyte Formation as an example. Here, the large-scale dolostone bodies are fabric-destructive, with a range of crystal fabrics including euhedral replacement (RD1) and anhedral replacement (RD2). Since the dolomite is cross-cut by low-amplitude stylolites, dolomitization is interpreted to have occurred shortly after deposition, at very shallow depth (<1 km). At this time, there would have been sufficient porosity in the mudstones for extensive dolomitization to occur, and the high heat flows and faulting associated with Cambrian rifting would have transferred hot brines to the near surface. While the δ18Owater values and 87Sr/86Sr ratios of RD1 are comparable with Cambrian seawater, RD2 shows higher values in both parameters. Therefore, although aspects of the fluid geochemistry are consistent with dolomitization from seawater, the very high fluid temperature and salinity suggest mixing with another, hydrothermal fluid. The very high temperature, positive Eu anomaly, enriched metal concentrations, and cogenetic relationship with quartz could indicate that the hot brines were at least partially sourced from ultramafic rocks, potentially as a result of interaction between the underlying Proterozoic serpentinites and CO2-rich fluids.
This study highlights that large-scale hydrothermal dolostone bodies can form at shallow burial depths via mixing during fluid pulses, providing a potential explanation for the mass balance problem often associated with their genesis.
This study focuses on a passive treatment system known as the horizontal reactive treatment well (HRX Well®), installed parallel to groundwater flow, which operates on the principle of flow focusing arising from the hydraulic conductivity (K) contrast between the well and aquifer media. Passive flow and capture in the HRX Well are described by simplified equations adapted from Darcy's Law. A field pilot-scale study (PSS) and numerical simulations using a finite element method (FEM) were conducted to verify the HRX Well concept and test the validity of the simplified equations. The hydraulic performance results from both studies were in close agreement with the simplified equations, with a hydraulic capture width approximately five times the well diameter (0.20 m). Key parameters affecting capture included the aquifer thickness, the well diameter, and the permeability ratio between the HRX Well treatment media and the aquifer material. During pilot testing, the HRX Well captured 39% of the flow while representing only 0.5% of the test-pit cross-sectional area, indicating that the well captures a substantial amount of surrounding groundwater. While uncertainty in the aquifer and well properties (porosity, K, well losses), including the effects of boundary conditions, may have caused minor differences in the results, the data from this study indicate that the simplified equations are valid for the conceptual design of a field study. Based on these outcomes, a full-scale HRX Well was installed at Site SS003 at Vandenberg Air Force Base, California, in July/August 2018.
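The magnitude of flow focusing implied by the pilot numbers above can be checked with simple arithmetic: capturing 39% of the flow through 0.5% of the cross-sectional area corresponds to a flux through the well roughly 78 times the ambient Darcy flux. A minimal sketch (the two input fractions are taken from the abstract; everything else is arithmetic):

```python
# Numbers reported for the pilot-scale study (PSS).
captured_fraction = 0.39    # fraction of test-pit flow captured by the well
well_area_fraction = 0.005  # well cross-section / test-pit cross-section

# Flow-focusing factor: flux through the well relative to the ambient
# flux through an equivalent cross-sectional area of aquifer.
focusing = captured_fraction / well_area_fraction
print(f"flow-focusing factor: {focusing:.0f}x ambient flux")
```

By Darcy's Law (q = K·i), a flux enhancement of this size is only possible because the treatment media's K greatly exceeds the aquifer's, which is the permeability-contrast principle the simplified equations formalize.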
The horizontal reactive media treatment well (HRX Well®) uses directionally drilled horizontal wells filled with treatment media to induce flow-focusing behavior, created by the well-to-aquifer permeability contrast, that passively captures proportionally large volumes of groundwater. Groundwater is treated in situ as it flows through the HRX Well, and downgradient portions of the aquifer are cleaned via elution as these zones are flushed with clean water discharging from the well. The HRX Well concept is particularly well suited to sites where long-term mass discharge control is a primary performance objective, and it is appropriate for recalcitrant and difficult-to-treat constituents, including chlorinated solvents, per- and polyfluoroalkyl substances (PFAS), 1,4-dioxane, and metals. A full-scale HRX Well was installed and operated to treat trichloroethene (TCE) with zero-valent iron (ZVI). The model-predicted enhanced flow through the HRX Well (relative to the flow through an equivalent cross-sectional area orthogonal to flow in the natural formation before installation) and the predicted treatment zone width were consistent with flows and widths estimated independently by point velocity probe (PVP) testing, HRX Well tracer testing, and observed treatment in downgradient monitoring wells. The actual average capture zone width was estimated to be between 45 and 69 feet. Total TCE mass discharge reduction was maintained throughout the performance monitoring period and exceeded 99.99%. Decreases in TCE concentrations were observed at all four downgradient monitoring wells within the treatment zone (ranging from 50 to 74% at day 436), and the first arrival of treated water was consistent with model predictions.
The field demonstration confirmed that the HRX Well technology is well suited to long-term mass discharge control, can be installed under active infrastructure, requires limited ongoing operation and maintenance, and has low life-cycle energy and water requirements.
Parameterization of wave runup is of paramount importance for the assessment of coastal hazards. Parametric models employ wave (e.g., Hs and Lp) and beach (i.e., β) parameters to estimate extreme runup (e.g., R2%). Recent studies have therefore sought to improve such parameterizations by including additional information on wave forcing or beach morphology. However, the effects of intra-wave dynamics, related to the random nature of the wave transformation process, on runup statistics have not been incorporated. This work employs a phase- and depth-resolving model, based on the Reynolds-averaged Navier-Stokes equations, to investigate different sources of variability associated with runup on planar beaches. The numerical model is validated against laboratory runup data. Subsequently, the roles of aleatory uncertainty and of other known sources of runup variability (i.e., frequency spreading and bed roughness) are investigated. Model results show that aleatory uncertainty can be more important than the contributions from other sources of variability such as bed roughness and frequency spreading. Ensemble results are employed to develop a new parametric model that uses the Hunt (J Waterw Port Coastal Ocean Eng 85:123–152, 1959) scaling parameter \(\beta (H_{s}L_{p})^{1/2}\).
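The Hunt scaling parameter β(Hs·Lp)^1/2 is straightforward to evaluate. The sketch below uses hypothetical wave and beach values, and it assumes the deep-water linear-theory wavelength Lp = g·Tp²/(2π); the coefficients of the paper's new parametric model are not reproduced here.

```python
import math

# Hypothetical inputs for illustration only.
Hs = 2.0       # significant wave height (m)
Tp = 10.0      # peak wave period (s)
beta = 0.05    # planar beach slope (-)

g = 9.81
# Deep-water peak wavelength from linear wave theory: Lp = g*Tp^2 / (2*pi).
Lp = g * Tp**2 / (2 * math.pi)

# Hunt (1959) scaling parameter used by the proposed parametric model.
hunt = beta * math.sqrt(Hs * Lp)
print(f"Lp = {Lp:.1f} m, beta*(Hs*Lp)^0.5 = {hunt:.2f} m")
```

Because runup scales with the square root of Hs·Lp, longer-period (longer-wavelength) swell raises R2% even at fixed wave height, which is why both wave parameters appear in the scaling.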
Various critical-state-related formulas, for the critical state line (CSL) and for the critical-state-dependent interlocking effect, have been proposed in constitutive modeling of granular materials during recent decades, raising confusion about how to select an appropriate model in geotechnical applications. This paper discusses the selection of these critical-state-related formulas and the identification of their parameters. Three formulas for the critical state line and two formulas for the critical-state-dependent interlocking effect are combined to produce six elasto-plastic models. Drained and undrained triaxial tests on four different granular materials are selected for simulation. To eliminate artificial errors, a new hybrid genetic-algorithm-based intelligent method is proposed and used to identify parameters and obtain simulations with minimum error for each granular material and each model. The performance of each CSL and each state parameter is then evaluated using two information criteria, and further assessed by simulating three footing tests with finite-element analysis in which the models are implemented. All comparisons demonstrate that incorporating a nonlinear critical state line combined with the state parameter e/ec in constitutive modeling yields relatively more satisfactory simulation results.
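The idea behind genetic-algorithm-based parameter identification, evolving a population of candidate parameters to minimize the misfit between simulated and observed responses, can be sketched minimally. This is a generic, hypothetical GA on a one-parameter proxy curve, not the paper's hybrid intelligent method or its elasto-plastic models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "stress-strain" data with a known parameter, standing in for
# triaxial test results; the real objective compares elasto-plastic simulations.
strain = np.linspace(0.0, 0.1, 50)
true_p = 2.5
observed = true_p * strain / (0.01 + strain)  # hyperbolic proxy curve

def misfit(p):
    """Sum of squared errors between simulated and observed curves."""
    sim = p * strain / (0.01 + strain)
    return np.sum((sim - observed) ** 2)

# Minimal GA: truncation selection, blend crossover, Gaussian mutation.
pop = rng.uniform(0.5, 5.0, size=30)
for gen in range(40):
    fitness = np.array([misfit(p) for p in pop])
    parents = pop[np.argsort(fitness)][:10]          # keep the 10 best (elitism)
    children = []
    for _ in range(20):
        a, b = rng.choice(parents, 2)
        children.append(0.5 * (a + b) + rng.normal(0, 0.05))  # crossover + mutation
    pop = np.concatenate([parents, children])

best = pop[np.argmin([misfit(p) for p in pop])]
print(f"identified parameter: {best:.3f}")
```

Because parameters are found by minimizing a single objective rather than fitted by hand, the same procedure can be applied uniformly to every model and material, which is what removes "artificial errors" from the model comparison.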
Using a subset of the SEG Advanced Modeling Program Phase I controlled-source electromagnetic data, we apply our standard controlled-source electromagnetic interpretation workflows to delineate a simulated hydrocarbon reservoir. Experience gained from characterizing such a complicated model offers an opportunity to refine our workflows and achieve better interpretation quality. The exercise proceeded in a blind-test style, in which the interpreting geophysicists did not know the true resistivity model until the end of the project. Rather, the interpreters were provided a traditional controlled-source electromagnetic data package, including electric field measurements, interpreted seismic horizons, and well log data. Based on petrophysical analysis, a background resistivity model was established first. The interpreters then carried out feasibility studies to establish the recoverability of the prospect and carefully stepped through 1D, 2.5D, and 3D inversions, with seismic and well log data integrated at each stage. A high-resistivity zone is identified with 1D analysis and further characterized with 2.5D inversions; its lateral distribution is confirmed with a 3D anisotropic inversion. The results demonstrate the importance of integrating all available geophysical and petrophysical data to derive a more accurate interpretation.
This issue paper examines how certain policies for managing groundwater quality lead to unexpected and undesirable results, despite being backed by seemingly reasonable assumptions. This happened in part because the supposedly reasonable decisions were not based on an integrative and quantitative methodology. The policies surveyed here are: (1) implementation of a program for aquifer restoration to pristine conditions followed, after failure, by leaving it to natural attenuation; (2) the "Forget About The Aquifer" (FATA) approach, which ignores the damage that contaminated groundwater can inflict on other environmental systems; (3) groundwater recharge in municipal areas that neglects the presence of contaminants in the unsaturated zone and the conditions imposed by overlying impervious surfaces; (4) the Soil Aquifer Treatment (SAT) practice, which treats aquifers as "filters of infinite capacity"; and (5) focusing on well contamination rather than aquifer contamination in order to defer grappling with the problem of the aquifer as a whole. Possible reasons for the failure of these seemingly rational policies are: (1) the characteristic times of groundwater processes, which are usually orders of magnitude greater than the residence times of decision makers in their managerial positions; (2) the proliferation of improperly trained "groundwater experts" and of policymakers with sectoral agendas, alongside legitimate differences of opinion among groundwater scientists; (3) neglect of the cyclic nature of natural phenomena; and (4) discounting of future long-term costs in favor of immediate costs.