Similar Literature
20 similar documents found (search time: 62 ms)
1.
Small‐scale point velocity probe (PVP)‐derived velocities were compared to conventional large‐scale velocity estimates from Darcy calculations and tracer tests, and the possibility of upscaling PVP data to match the other velocity estimates was evaluated. Hydraulic conductivity was estimated from grain‐size data derived from cores, and single‐well response testing or slug tests of onsite wells. Horizontal hydraulic gradients were calculated using 3‐point estimators from all of the wells within an extensive monitoring network, as well as by representing the water table as a single best fit plane through the entire network. Velocities determined from PVP testing were generally consistent in magnitude with those from depth specific data collected from multilevel monitoring locations in the tracer test, and similar in horizontal flow direction to the average hydraulic gradient. However, scaling up velocity estimates based on PVP measurements for comparison with site‐wide Darcy‐based velocities revealed issues that challenge the use of Darcy calculations as a generally applicable standard for comparison. The Darcy calculations were shown to underestimate the groundwater velocities determined both by the PVPs and large‐scale tracer testing, in a depth‐specific sense and as a site‐wide average. Some of this discrepancy is attributable to the selective placement of the PVPs in the aquifer. Nevertheless, this result has important implications for the design of in situ treatment systems. It is concluded that Darcy estimations of velocity should be supplemented with independent assessments for these kinds of applications.
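The Darcy-based estimates compared above come from the textbook relation v = K·i/n (hydraulic conductivity times gradient, divided by effective porosity). A minimal sketch — the parameter values below are hypothetical, not taken from the study:

```python
def darcy_velocity(K, gradient, porosity):
    """Average linear groundwater velocity from Darcy's law.

    K        : hydraulic conductivity (m/d)
    gradient : dimensionless hydraulic gradient (dh/dl)
    porosity : effective porosity (-)
    """
    return K * gradient / porosity

# Hypothetical values, for illustration only
v = darcy_velocity(K=10.0, gradient=4.5e-4, porosity=0.30)
print(v)  # ≈ 0.015 m/d
```

Because v scales linearly with K, any bias in grain-size-derived conductivity propagates directly into the velocity estimate — one reason independent checks such as PVPs or tracer tests matter.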

2.
This paper presents a methodology to optimise measurement networks for the prediction of groundwater flow. Two different strategies are followed: the design of a measurement network that minimises the log-transmissivity variance (averaged over the domain of interest) or a design that minimises the hydraulic head variance (averaged over the domain of interest). The methodology consists of three steps. In the first step the prior log-transmissivity and hydraulic head variances are estimated. This step is completely general in the sense that the prior variances may be unconditional, or may be conditioned to log-transmissivity and/or hydraulic head measurements. If hydraulic head measurements are available in the first step, the inverse groundwater flow problem is solved by the sequential self-calibrated method. In the second step, the full covariance matrices of hydraulic head and log-transmissivity are calculated numerically on the basis of a sufficiently large number of Monte Carlo realisations. On the basis of the estimated covariances, the impact of an additional measurement in terms of variance reduction is calculated. The measurement that yields the maximum domain-averaged variance reduction is selected. Additional measurement locations are selected according to the same procedure. The procedure has been tested for a series of synthetic reference cases. Different sampling designs are tested for each of these cases, and the proposed strategies are compared with other sampling strategies. Although the proposed strategies indeed reach their objective and yield in most cases the lowest posterior log-transmissivity variance or hydraulic head variance, the differences as compared to alternative sampling strategies are frequently small.
For the cases considered here, a sampling design that covers the aquifer more or less regularly performs well. The paper also illustrates that for the optimal estimation of a well catchment a heuristic criterion (spreading measurement points as regularly as possible over the zone where there is some uncertainty regarding the capture probability) yields better results than a sampling design that minimises the posterior log-transmissivity variance or posterior hydraulic head variance.
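The greedy selection step in the second stage above can be sketched directly from Monte Carlo realizations: under a linear (kriging-like) update, conditioning on a candidate location j reduces the variance at location i by cov(i,j)²/var(j), and the candidate with the largest summed reduction is picked. This simple update formula is an assumption for illustration, not the paper's exact sequential self-calibrated machinery:

```python
import numpy as np

def best_new_measurement(realizations):
    """Pick the measurement location with the maximum domain-summed
    variance reduction, estimated from Monte Carlo realizations
    (rows = locations, columns = realizations).  Conditioning on
    location j reduces the variance at i by cov(i,j)^2 / var(j)
    under a simple linear update (an illustrative assumption)."""
    C = np.cov(realizations)                  # full covariance matrix
    var = np.diag(C)
    reduction = (C ** 2).sum(axis=0) / var    # summed over all i, per candidate j
    return int(np.argmax(reduction))

# Synthetic demo: location 1 is the common driver of locations 0 and 2
rng = np.random.default_rng(7)
z = rng.normal(0.0, 1.0, 5000)
fields = np.vstack([z + rng.normal(0, 0.5, 5000), z, z + rng.normal(0, 0.5, 5000)])
print(best_new_measurement(fields))
```

Repeating the call after (synthetically) conditioning on the chosen point reproduces the sequential design loop described above.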

3.
Effects of measurement error on horizontal hydraulic gradient estimates
During the design of a natural gradient tracer experiment, it was noticed that the hydraulic gradient was too small to measure reliably on an approximately 500-m² site. Additional wells were installed to increase the monitored area to 26,500 m², and wells were instrumented with pressure transducers. The resulting monitoring system was capable of measuring heads with a precision of ±1.3 × 10⁻² m. This measurement error was incorporated into Monte Carlo calculations, in which only hydraulic head values were varied between realizations. The standard deviation in the estimated gradient and the flow direction angle from the x-axis (east direction) were calculated. The data yielded an average hydraulic gradient of 4.5 × 10⁻⁴ ± 25% with a flow direction of 56° southeast ± 18°, with the variations representing 1 standard deviation. Further Monte Carlo calculations investigated the effects of the number of wells, the aspect ratio of the monitored area, and the size of the monitored area on the previously mentioned uncertainties. The exercise showed that monitored areas must exceed a size determined by the magnitude of the measurement error if meaningful gradient estimates and flow directions are to be obtained. The aspect ratio of the monitored zone should be as close to 1 as possible, although departures as great as 0.5 to 2 did not degrade the quality of the data unduly. Numbers of wells beyond three to five provided little advantage. These conclusions were supported for the general case with a preliminary theoretical analysis.
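The gradient magnitude and flow-direction angle analysed above come from fitting a plane to measured heads; with exactly three wells this is the 3-point estimator. A sketch with hypothetical well coordinates and heads:

```python
import numpy as np

def three_point_gradient(pts):
    """Fit a plane h = a*x + b*y + c through three (x, y, head) points
    and return the gradient magnitude and the down-gradient flow
    azimuth in degrees clockwise from north (x = east, y = north)."""
    A = np.array([[x, y, 1.0] for x, y, _ in pts])
    h = np.array([p[2] for p in pts])
    a, b, _ = np.linalg.solve(A, h)
    magnitude = np.hypot(a, b)
    # Flow is down-gradient, i.e. along (-a, -b)
    azimuth = np.degrees(np.arctan2(-a, -b)) % 360.0
    return magnitude, azimuth

# Hypothetical well coordinates (m) and heads (m)
mag, az = three_point_gradient([(0, 0, 10.000), (100, 0, 9.955), (0, 100, 10.000)])
print(mag, az)  # ≈ 4.5e-4, flow due east (90°)
```

Perturbing the three head values by the ±1.3 × 10⁻² m measurement error and repeating the fit reproduces the Monte Carlo uncertainty analysis described above.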

4.
This paper, based on a real-world case study (Limmat aquifer, Switzerland), compares inverse groundwater flow models calibrated with specified numbers of monitoring head locations. These models are updated in real time with the ensemble Kalman filter (EnKF), and the prediction improvement is assessed in relation to the number of monitoring locations used for calibration and updating. The prediction errors of the models calibrated in transient state are smaller if the number of monitoring locations used for the calibration is larger. For highly dynamic groundwater flow systems a transient calibration is recommended, as a model calibrated in steady state can lead to worse results than a noncalibrated model with a well-chosen uniform conductivity. The model predictions can be improved further with the assimilation of new measurement data from on-line sensors with the EnKF. Within all the studied models the reduction of the 1-day hydraulic head prediction error (in terms of mean absolute error [MAE]) with EnKF lies between 31% (assimilation of head data from 5 locations) and 72% (assimilation of head data from 85 locations). The largest prediction improvements are expected for models that were calibrated with only a limited amount of historical information. It is worthwhile to update the model even with few monitoring locations, as the error reduction with EnKF appears to decrease exponentially with the number of monitoring locations used. These results prove the feasibility of data assimilation with EnKF for a real-world case and show that improved predictions of groundwater levels can be obtained.
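A minimal sketch of the stochastic EnKF analysis step behind the head updates described above; the array shapes, sensor indices, and error variance are illustrative, not the paper's configuration:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_idx, obs_var, rng):
    """One stochastic EnKF analysis step for hydraulic-head states.

    ensemble : (n_state, n_ens) forecast ensemble
    obs      : (n_obs,) measured heads from the on-line sensors
    obs_idx  : state indices corresponding to the sensor locations
    obs_var  : measurement-error variance
    """
    n_obs, n_ens = len(obs), ensemble.shape[1]
    HX = ensemble[obs_idx, :]                        # forecasted observations
    Xp = ensemble - ensemble.mean(axis=1, keepdims=True)
    HXp = HX - HX.mean(axis=1, keepdims=True)
    PHt = Xp @ HXp.T / (n_ens - 1)                   # state-obs cross-covariance
    S = HXp @ HXp.T / (n_ens - 1) + obs_var * np.eye(n_obs)
    K = PHt @ np.linalg.inv(S)                       # Kalman gain
    # Perturbed observations (stochastic EnKF variant)
    D = obs[:, None] + rng.normal(0.0, obs_var ** 0.5, (n_obs, n_ens))
    return ensemble + K @ (D - HX)                   # updated ensemble
```

Each assimilation cycle alternates a model forecast with this update, which is how new sensor data keeps pulling the heads toward the measurements.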

5.
1 Introduction. The process of remotely sensed data acquisition is affected by factors such as the rotation of the earth, the finite scan rate of some sensors, the curvature of the earth, non-ideal sensor behaviour, and variation in platform altitude, attitude, and velocity [1]. One important procedure that should be performed prior to analyzing remotely sensed data is geometric correction (image to map) or registration (image to image) of the data. The purpose of geometric correction or registration is to e…

6.
This paper studies the impact of sensor measurement error on the design of a water quality monitoring network for a river system, and shows that robust sensor locations can be obtained when an optimization algorithm is combined with a statistical process control (SPC) method. Specifically, we develop a probabilistic model of sensor measurement error and embed it in a simulation model of a river system. An optimization algorithm is used to find the sensor locations that minimize the expected time until spill detection, subject to a constraint on the probability of detecting a spill. The experimental results show that the optimal sensor locations are highly sensitive to the variability of measurement error and that false alarm rates are often unacceptably high. An SPC method is useful in finding thresholds that guarantee a false alarm rate no greater than a pre-specified target level, and an optimization algorithm combined with these thresholds finds a robust sensor network.
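Under a Gaussian in-control assumption, an SPC-style threshold that caps the per-sample false alarm rate is simply an inverse-normal quantile of the baseline (no-spill) signal. A sketch — the baseline statistics are hypothetical:

```python
from statistics import NormalDist

def alarm_threshold(baseline_mean, baseline_sd, target_far):
    """Concentration threshold whose per-sample false alarm rate under
    in-control (Gaussian) conditions equals target_far -- the kind of
    SPC control limit used to keep false alarms below a target level."""
    z = NormalDist().inv_cdf(1.0 - target_far)
    return baseline_mean + z * baseline_sd

# Hypothetical standardized baseline; 0.00135 is the classical 3-sigma rate
print(round(alarm_threshold(0.0, 1.0, 0.00135), 2))  # 3.0
```

Sensors then alarm only when a reading exceeds this limit, so the detection-time objective is optimized subject to a guaranteed false alarm bound.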

7.
A study of the effects of grid discretization on the migration of DNAPL within a discrete-fracture network embedded in a porous rock matrix is presented. It is shown that an insufficiently fine discretization of the fracture elements can lead to an overprediction of the volume of DNAPL that continues to migrate vertically at the intersection of a vertical and horizontal fracture. Uniform discretization of elements at the scale of one centimetre (or less) accurately resolved the density and capillary pressure components of the head gradient in the DNAPL. An alternative, non-uniform method of discretization of elements within the discrete-fracture network is presented whereby only fracture elements immediately adjacent to fracture intersections are refined. To further limit the number of elements employed, the porous matrix elements adjacent to the fracture elements are not similarly refined. Results show this alternative method of discretization reduces the numerical error to an acceptable level, while allowing the simulation of field-scale DNAPL contamination problems. The results from two field-scale simulations of a DNAPL-contaminated carbonate bedrock site in Ontario, Canada are presented. These simulations compare different methods of grid discretization, and highlight the importance of grid refinement when simulating DNAPL migration problems in fractured porous media.

8.
The use of historical data can significantly reduce the uncertainty around estimates of the magnitude of rare events obtained with extreme value statistical models. For historical data to be included in the statistical analysis, a number of their properties, e.g. their number and magnitude, need to be known with a reasonable level of confidence. Another key aspect of the historical data which needs to be known is the coverage period of the historical information, i.e. the period of time over which it is assumed that all large events above a certain threshold are known. It may be the case, though, that information on the coverage period cannot easily be retrieved with sufficient confidence, so the period needs to be estimated. In this paper methods to perform such estimation are introduced and evaluated. The statistical definition of the problem corresponds to estimating the size of a population for which only a few data points are available. This is generally referred to as the German tank problem, which arose during the Second World War, when statistical estimates of the number of tanks available to the German army were obtained. Different estimators can be derived using different statistical estimation approaches, with the maximum spacing estimator being the minimum-variance unbiased estimator. The properties of three estimators are investigated by means of a simulation study, both for the simple estimation of the historical coverage and for the estimation of the extreme value statistical model. The maximum spacing estimator is confirmed to be a good approach to the estimation of the historical coverage period for practical use, and its application to a case study in Britain is presented.
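The underlying estimation problem is the classic German tank problem; for the discrete serial-number version, the familiar unbiased estimator of the population size is N̂ = m + m/k − 1, where m is the sample maximum and k the sample size. A sketch with made-up "serial numbers":

```python
def tank_estimate(serials):
    """Minimum-variance unbiased estimate of the population size N from
    a sample of serial numbers drawn without replacement (the discrete
    German tank problem): N_hat = m + m/k - 1, where m is the sample
    maximum and k the sample size."""
    m, k = max(serials), len(serials)
    return m + m / k - 1

# Hypothetical sample of k = 4 observed serial numbers
print(tank_estimate([19, 40, 42, 60]))  # 74.0
```

In the coverage-period setting, the "serial numbers" are the years of the known historical events and N̂ estimates the length of the period over which such events would have been recorded.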

9.
Maximum-likelihood estimators properly represent measurement error and thus provide a statistically sound basis for evaluating the adequacy of a model fit and for finding the multivariate parameter confidence region. We demonstrate the advantages of using maximum-likelihood estimators rather than simple least-squares estimators for the problem of finding unsaturated hydraulic parameters. Inversion of outflow data given independent retention data can be treated by an extension to a Bayesian estimator. As an example, we apply the methodology to retention and transient unsaturated outflow observations, both obtained on the same medium-sand sample. We found the van Genuchten expression to be adequate for the retention data, as the best fit was within measurement error. The Cramer–Rao confidence bound described the true parameter uncertainty approximately. The Mualem–van Genuchten expression was, however, inadequate for our outflow observations, suggesting that the parameters (α, n) may not always be equivalent in describing both retention and unsaturated conductivity.

10.
Vertical hydraulic gradient is commonly measured in rivers, lakes, and streams for studies of groundwater–surface water interaction. While a number of methods with subtle differences have been applied, these methods can generally be separated into two categories: measuring surface water elevation and pressure in the subsurface separately, or making direct measurements of the head difference with a manometer. Making separate head measurements allows for the use of electronic pressure sensors, providing large datasets that are particularly useful when the vertical hydraulic gradient fluctuates over time. On the other hand, using a manometer-based method provides an easier and more rapid measurement with a simpler computation to calculate the vertical hydraulic gradient. In this study, we evaluated a wet/wet differential pressure sensor for use in measuring vertical hydraulic gradient. This approach combines the advantage of the high-temporal-frequency measurements obtained with instrumented piezometers with the simplicity and reduced potential for human-induced error of a manometer-board method. Our results showed that the wet/wet differential pressure sensor provided results comparable to more traditional methods, making it an acceptable method for future use.
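The computation behind the sensor reading is a single ratio: the head difference across a known vertical separation. A sketch with illustrative values:

```python
def vertical_hydraulic_gradient(dh, dz):
    """Vertical hydraulic gradient from a differential head reading.

    dh : head difference between the piezometer and the surface water (m);
         a wet/wet differential pressure sensor reports this directly
    dz : depth from the streambed surface to the piezometer screen (m)

    With this sign convention, positive values indicate upward flow
    (groundwater discharging to the stream).
    """
    return dh / dz

# Illustrative reading: 1.2 cm head difference over a 0.5 m deep screen
print(vertical_hydraulic_gradient(dh=0.012, dz=0.50))  # 0.024
```

Logging dh from the differential sensor at a fixed interval yields the high-frequency gradient time series the study compares against manometer readings.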

11.
The key problem in nonparametric frequency analysis of floods and droughts is the estimation of the bandwidth parameter, which defines the degree of smoothing. Most of the proposed bandwidth estimators have been based on the density function rather than the cumulative distribution function or the quantile, which are the primary interest in frequency analysis. We propose a new bandwidth estimator derived from properties of quantile estimators. The estimator builds on work by Altman and Léger (1995). The estimator is compared to the well-known method of least squares cross-validation (LSCV) using synthetic data generated from various parametric distributions used in hydrologic frequency analysis. Simulations suggest that our estimator performs at least as well as, and in many cases better than, the method of LSCV. In particular, the use of the proposed plug-in estimator reduces bias in the estimation as compared to LSCV. When applied to data sets containing observations with identical values, typically the result of rounding or truncation, LSCV and most other techniques generally underestimate the bandwidth. The proposed technique performs very well in such situations.

12.
This article presents a method to estimate flow variables for an open channel network governed by the linearized Saint-Venant equations and subject to periodic forcing. The discharge at the upstream end of the system and the stage at the downstream end are defined as the model inputs; the flow properties at selected internal locations, as well as the other external boundary conditions, are defined as the outputs. Both inputs and outputs are affected by noise, and we use the model to improve the data quality. A spatially dependent transfer matrix in the frequency domain is constructed to relate the model input and output using modal decomposition. A data reconciliation technique is used to incorporate the error in the measured data and results in a set of reconciled external boundary conditions; subsequently, the flow properties at any location in the system can be accurately estimated from the input measurements. The applicability and effectiveness of the method are demonstrated with a case study of river flow subject to tidal forcing in the Sacramento–San Joaquin Delta, in California. We used existing USGS sensors in place in the Delta as measurement points, and deployed our own sensors at selected locations to produce data used for validation. The proposed method gives an accurate estimation of the flow properties at intermediate locations within the channel network.
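The data reconciliation step can be sketched in its simplest linear form: adjust the measurements as little as possible, weighted by their error variances, so that the physical constraints hold exactly. The junction mass balance below is a hypothetical stand-in for the paper's Saint-Venant transfer-matrix constraints:

```python
import numpy as np

def reconcile(y, variances, A):
    """Linear data reconciliation: return the adjusted measurements
    closest to y (in the variance-weighted sense) that satisfy the
    linear constraints A @ y_hat = 0 exactly."""
    V = np.diag(variances)
    correction = V @ A.T @ np.linalg.solve(A @ V @ A.T, A @ y)
    return y - correction

# Hypothetical mass balance at a channel junction: q1 + q2 - q3 = 0
A = np.array([[1.0, 1.0, -1.0]])
q = np.array([10.2, 5.1, 15.0])                    # noisy discharges (m^3/s)
q_hat = reconcile(q, np.array([0.04, 0.04, 0.04]), A)
print(q_hat)  # the 0.3 m^3/s imbalance is spread equally: [10.1, 5.0, 15.1]
```

With equal variances the imbalance is split evenly; a sensor known to be noisier would absorb a proportionally larger share of the correction.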

13.
A challenge in microseismic monitoring is the quantification of survey acquisition and processing errors, and how these errors jointly affect estimated locations. Quantifying acquisition and processing errors and uncertainty has multiple benefits, such as more accurate and precise estimation of locations, anisotropy and moment tensor inversion and, potentially, the detection of 4D reservoir changes. Here, we quantify uncertainty due to acquisition, receiver orientation error, and hodogram analysis. Additionally, we illustrate the effect of signal-to-noise ratio variations on event detection. We apply the processing steps to a downhole microseismic dataset from Pouce Coupe, Alberta, Canada. We use a probabilistic location approach to identify the optimal bottom-well location based upon known source locations. Probability density functions are used to quantify uncertainty and propagate it through processing, including source location inversion, to describe the three-dimensional event location likelihood. Event locations are calculated, and an amplitude-stacking approach is used to reduce the error associated with first-break picking and the minimization against modelled travel times. Changes in the early processing steps have allowed the location uncertainty of the mapped microseismic events to be understood.

14.
15.
In Seo and Smith (this issue), a set of estimators was built in a Bayesian framework to estimate rainfall depth at an ungaged location using raingage measurements and radar rainfall data. The estimators are equivalent to lognormal co-kriging (simple co-kriging in the Gaussian domain) with uncertain mean and variance of gage rainfall. In this paper, the estimators are evaluated via cross-validation using hourly radar rainfall data and simulated hourly raingage data. Generation of raingage data is based on sample statistics of actual raingage measurements and radar rainfall data. The estimators are compared with lognormal co-kriging and nonparametric estimators. The Bayesian estimators are shown to provide some improvement over lognormal co-kriging under the criteria of mean error, root mean square error, and standardized mean square error. It is shown that, if the prior could be assessed more accurately, the margin of improvement in predicting estimation variance could be larger. In updating the uncertain mean and variance of gage rainfall, inclusion of radar rainfall data is seen to provide little improvement over using raingage data only.

16.
Relative-magnitude techniques based on microseismic characteristics: research and application
With the exploitation and development of unconventional gas reservoirs, microseismic monitoring has become a key technique for evaluating hydraulic fracturing performance. Development of unconventional oil and gas reservoirs in the Sichuan Basin is at an early stage, and the extremely low well density makes it difficult to find a suitable deep well near the treatment well to serve as a monitoring well, while alternative observation schemes such as surface or shallow-well arrays risk failing to detect the microseismic signals effectively. Because microseismic events are weak in energy and radiate directionally, predicting the observation azimuth and assessing the effective monitoring distance are decisive factors in the success of microseismic monitoring. This paper proposes a relative-magnitude calculation technique based on the energy radiation pattern of fracturing-induced microseisms and the propagation characteristics of the formation; by simulating the radiation pattern of microseismic events and their dynamic behaviour during propagation through the formation, the nonlinear relationship between the relative magnitude, the receiver azimuth, and the propagation distance can be evaluated. Theoretical analysis and validation against field microseismic monitoring data show that the method effectively solves the problems of selecting the optimal observation azimuth and assessing the effective propagation distance.

17.
We present results of processed microseismic events induced by hydraulic fracturing and detected using dual downhole monitoring arrays. The results provide valuable insight into hydraulic fracturing. For our study, we detected and located microseismic events and determined their magnitudes, source mechanisms and inverted stress field orientation. Event locations formed a distinct linear trend above the stimulated intervals. Source mechanisms were only computed for high‐quality events detected on a sufficient number of receivers. All the detected source mechanisms were dip‐slip mechanisms with steep and nearly horizontal nodal planes. The source mechanisms represented shear events and the non‐double‐couple components were very small. Such small, non‐double‐couple components are consistent with a noise level in the data and velocity model uncertainties. Strikes of inverted mechanisms corresponding to the nearly vertical fault plane are (within the error of measurements) identical with the strike of the location trend. Ambient principal stress directions were inverted from the source mechanisms. The least principal stress, σ3, was determined perpendicular to the strike of the trend of the locations, indicating that the hydraulic fracture propagated in the direction of maximum horizontal stress. Our analysis indicated that the source mechanisms observed using downhole instruments are consistent with the source mechanisms observed in microseismic monitoring arrays in other locations. Furthermore, the orientation of the inverted principal components of the ambient stress field is in agreement with the orientation of the known regional stress, implying that microseismic events induced by hydraulic fracturing are controlled by the regional stress field.

18.
Borehole flowmeters that measure horizontal flow velocity and direction of groundwater flow are being increasingly applied to a wide variety of environmental problems. This study was carried out to evaluate the measurement accuracy of several types of flowmeters in an unconsolidated aquifer simulator. Flowmeter response to hydraulic gradient, aquifer properties, and well‐screen construction was measured during 2003 and 2005 at the U.S. Geological Survey Hydrologic Instrumentation Facility in Bay St. Louis, Mississippi. The flowmeters tested included a commercially available heat‐pulse flowmeter, an acoustic Doppler flowmeter, a scanning colloidal borescope flowmeter, and a fluid‐conductivity logging system. Results of the study indicated that at least one flowmeter was capable of measuring borehole flow velocity and direction in most simulated conditions. The mean error in direction measurements ranged from 15.1° to 23.5°, and the directional accuracy of all tested flowmeters improved with increasing hydraulic gradient. The Darcy velocities examined in this study ranged from 4.3 to 155 ft/d. For many plots comparing the simulated and measured Darcy velocity, the squared correlation coefficient (r²) exceeded 0.92. The accuracy of velocity measurements varied with well construction and velocity magnitude. The use of horizontal flowmeters in environmental studies appears promising, but applications may require more than one type of flowmeter to span the range of conditions encountered in the field. Interpreting flowmeter data from field settings may be complicated by geologic heterogeneity, preferential flow, vertical flow, constricted screen openings, and nonoptimal screen orientation.

19.
Analysis of the earthquake location capability of a regional seismic network
Zhao Zhonghe, Acta Seismologica Sinica (《地震学报》), 1983, 5(4): 467–476
This paper presents a general method for analysing the earthquake location capability of a regional seismic network. The analysis accounts for the fact that, as the earthquake magnitude and the location of the hypocentre within the network region change, so does the combination of stations at which P- and S-phase arrival times can be read. As a concrete example, the location capability of the Beijing telemetered seismic network is analysed. The Beijing network currently comprises 19 stations covering an area of roughly 300 km × 400 km. From the observational data, empirical formulas were established to determine the detection capability of each station as a function of its instrument magnification and the earthquake magnitude. Then, for each possible event of given magnitude and hypocentre location, the composition of the corresponding subnetwork is determined. On the basis of these subnetworks, singular value decomposition is used to compute, for a prescribed standard error of the arrival-time data, the error distribution of the hypocentre coordinates and the distribution of the condition number of the linearized system of condition equations. For a given magnitude (e.g. ML = 1.0, 2.0 or 3.0), the results are plotted as contour maps. For comparison, a proposed expansion of the Beijing network to 61 stations is also analysed, so that the earthquake location capability of the future network can be estimated in advance.

20.
An effective bias correction procedure using gauge measurements is a significant step in radar data processing to reduce systematic error in hydrological applications. In these bias correction methods, the spatial matching of precipitation patterns between radar and gauge networks is an important premise. However, the wind-drift effect on radar measurement induces an inconsistent spatial relationship between radar and gauge measurements, as the raindrops observed by radar do not fall vertically to the ground. Consequently, a rain gauge does not correspond to the radar pixel based on the projected location of the radar beam. In this study, we introduce an adjustment method to incorporate the wind-drift effect into a bias correction scheme. We first simulate the trajectory of raindrops in the air using downscaled three-dimensional wind data from the weather research and forecasting model (WRF) and calculate the final location of the raindrops on the ground. The displacement of rainfall is then estimated and a radar–gauge spatial relationship is reconstructed. Based on this, the local real-time biases of the bin-averaged radar data were estimated for 12 selected events. Then, the reference mean local gauge rainfall, the mean local bias, and the adjusted radar rainfall calculated with and without consideration of the wind-drift effect are compared for different events and locations. There are considerable differences among the three estimators, indicating that wind drift has a considerable impact on real-time radar bias correction. Based on these findings, we suggest that bias correction schemes based on the spatial correlation between radar and gauge measurements should consider adjusting for the wind-drift effect, and the proposed adjustment method is a promising solution to achieve this.
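To first order, the wind-drift displacement simulated above is the wind velocity integrated over the raindrop's fall time. A sketch assuming constant wind and terminal fall speed (the values are illustrative; the study uses full three-dimensional WRF wind profiles rather than this single-layer approximation):

```python
def raindrop_drift(beam_height, fall_speed, wind_u, wind_v):
    """First-order wind-drift displacement of a raindrop observed by the
    radar at beam_height (m): horizontal offset = wind velocity * fall
    time, assuming constant wind (m/s) and terminal fall speed (m/s).
    Returns (dx_east, dy_north) in metres."""
    t_fall = beam_height / fall_speed
    return wind_u * t_fall, wind_v * t_fall

# Illustrative case: 2 km beam height, 6 m/s fall speed, 9 m/s westerly wind
dx, dy = raindrop_drift(beam_height=2000.0, fall_speed=6.0, wind_u=9.0, wind_v=0.0)
print(dx, dy)  # 3000.0 0.0 -> pair the gauge with a radar pixel ~3 km upwind
```

Shifting each gauge to its drift-corrected radar pixel before computing local biases is the spatial-relationship reconstruction the abstract describes.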


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号