Similar documents
20 similar documents found (search time: 15 ms)
1.
Monitoring and estimation of snow depth in alpine catchments is needed for a proper assessment of management alternatives for water supply in these water resources systems. The distribution of snowpack thickness is usually approached using field data from snow samples collected at a given number of locations that constitute the monitoring network. Optimal design of this network is required to obtain the best possible estimates. Assuming that a monitoring network already exists, its optimization may imply selecting an optimal network as a subset of the existing one (if there are insufficient funds to maintain all stations) or enlarging the existing network by one or more stations (the optimal augmentation problem). We propose an optimization procedure that minimizes the total variance in the estimate of snowpack thickness. The novelty of this work is to treat, for the first time, the problem of snow observation network optimization for an entire mountain range rather than for small catchments, as in previous studies. Given the limited data available, a common problem in many mountain ranges, proper design of these observation networks is all the more important. Snowpack thickness is estimated by combining regression models, to capture the effect of the explanatory variables, with kriging techniques, to account for the influence of the stake locations. We solve the optimization problems under different hypotheses, studying the impacts of augmentation and reduction, both one by one and in pairs. We also analyse the sensitivity of the results to non-snow measurements deduced from satellite information. Finally, we design a new optimal network by combining the reduction and augmentation methods. The methodology has been applied to the Sierra Nevada mountain range (southern Spain), where very limited resources are employed to monitor snowfall and where an optimal snow network design could prove critical.
An optimal snow observation network is obtained by relocating some observation points; it would reduce the estimation variance by around 600 cm² (15%).
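The one-by-one reduction step can be sketched in a few lines of Python. Here `total_variance` is a toy distance-based proxy for the estimation variance (the paper combines regression models with kriging, which requires real covariance data), and all names are illustrative:

```python
import math

def total_variance(stations, grid):
    # Toy proxy: estimation variance at a grid cell grows with the squared
    # distance to the nearest remaining station. This stands in for the
    # paper's regression + kriging variance, which needs fitted covariances.
    return sum(min(math.dist(g, s) for s in stations) ** 2 for g in grid)

def greedy_reduce(stations, grid, keep):
    # One-by-one reduction: repeatedly drop the station whose removal
    # increases the total estimation variance the least.
    stations = list(stations)
    while len(stations) > keep:
        drop = min(stations,
                   key=lambda s: total_variance(
                       [t for t in stations if t != s], grid))
        stations.remove(drop)
    return stations
```

Augmentation works symmetrically: instead of dropping the least harmful station, add the candidate location that decreases the total variance the most.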

2.
Monitoring networks are expensive to establish and to maintain. In this paper, we extend an existing data-worth estimation method from the suite of PEST utilities with a global optimization method for optimal sensor placement (optimal design) in groundwater monitoring networks. Design optimization can include multiple simultaneous sensor locations and multiple sensor types; both location and sensor type are treated simultaneously as decision variables. Our method combines linear uncertainty quantification with a modified genetic algorithm for discrete multilocation, multitype search. The efficiency of the global optimization is enhanced by an archive of past samples and by parallel computing. We demonstrate our methodology for a groundwater monitoring network at the Steinlach experimental site, south-western Germany, which was established to monitor river–groundwater exchange processes. The target of optimization is the design with minimum variance in predicting the mean travel time of the hyporheic exchange. Our results demonstrate that the information gain of monitoring network designs can be explored efficiently, with easily accessible tools, prior to taking new field measurements or installing additional measurement points. The proposed methods proved to be efficient and can be applied for model-based optimal design of any type of monitoring network in approximately linear systems. Our key contributions are (1) the use of easy-to-implement tools for an otherwise complex task and (2) the consideration of data-worth interdependencies in the simultaneous optimization of multiple sensor locations and sensor types.

3.
MAROS: a decision support system for optimizing monitoring plans
The Monitoring and Remediation Optimization System (MAROS), a decision-support software, was developed to assist in formulating cost-effective ground water long-term monitoring plans. MAROS optimizes an existing ground water monitoring program using both temporal and spatial data analyses to determine the general monitoring system category and the locations and frequency of sampling for future compliance monitoring at the site. The objective of the MAROS optimization is to minimize monitoring locations in the sampling network and reduce sampling frequency without significant loss of information, ensuring adequate future characterization of the contaminant plume. The interpretive trend analysis approach recommends the general monitoring system category for a site based on plume stability and site-specific hydrogeologic information. Plume stability is characterized using primary lines of evidence (i.e., Mann-Kendall analysis and linear regression analysis) based on concentration trends, and secondary lines of evidence based on modeling results and empirical data. The sampling optimization approach, consisting of a two-dimensional spatial sampling reduction method (Delaunay method) and a temporal sampling analysis method (Modified CES method), provides detailed sampling location and frequency results. The Delaunay method is designed to identify and eliminate redundant sampling locations without causing significant information loss in characterizing the plume. The Modified CES method determines the optimal sampling frequency for a sampling location based on the direction, magnitude, and uncertainty in its concentration trend. MAROS addresses a variety of ground water contaminants (fuels, solvents, and metals), allows import of various data formats, and is designed for continual modification of long-term monitoring plans as the plume or site conditions change over time.
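The Mann-Kendall test used above as a primary line of evidence for plume stability reduces to a sign count over all sample pairs. A minimal sketch (no-ties variance formula; function name is illustrative):

```python
import math

def mann_kendall(x):
    # S statistic: signed count over all ordered pairs;
    # S > 0 suggests an upward concentration trend, S < 0 a downward one.
    n = len(x)
    s = sum((b > a) - (b < a) for i, a in enumerate(x) for b in x[i + 1:])
    # Variance of S assuming no tied values.
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    # Continuity-corrected normal score; |z| > 1.96 ≈ significant at 5%.
    z = 0.0 if s == 0 else (s - math.copysign(1, s)) / math.sqrt(var_s)
    return s, z
```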

4.
A new methodology is proposed to optimize monitoring networks for identifying the extent of contaminant plumes. The optimal locations for monitoring wells are determined as the points where the greatest decreases are expected in the quantified uncertainty about contaminant existence after well installation. In this study, hydraulic conductivity is considered to be the factor that causes uncertainty. The successive random addition (SRA) method is used to generate random fields of hydraulic conductivity. The expected value of information criterion for the existence of a contaminant plume is evaluated based on how much the uncertainty of the plume distribution is reduced as the size of the monitoring network increases. The minimum array of monitoring wells that yields the maximum information is selected as the optimal monitoring network. To quantify the uncertainty of the plume distribution, a probability map of contaminant existence is constructed from all generated contaminant plume realizations on the domain field. The uncertainty is defined as the sum of the areas where the probability of contaminant existence or nonexistence is uncertain. Results of numerical experiments for determining optimal monitoring networks in heterogeneous conductivity fields are presented.
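The uncertainty measure described above — the area where the plume's presence is neither confidently affirmed nor ruled out — can be computed directly from a set of realizations. A hypothetical minimal version, with cells indexed on a flattened grid and an assumed 0.1/0.9 confidence band:

```python
def existence_probability(realizations, ncells):
    # realizations: iterable of sets of contaminated cell indices,
    # one set per generated plume realization.
    counts = [0] * ncells
    for plume in realizations:
        for cell in plume:
            counts[cell] += 1
    return [c / len(realizations) for c in counts]

def uncertain_area(prob, lo=0.1, hi=0.9):
    # Cells whose contaminant existence is neither confidently present
    # nor confidently absent contribute to the uncertainty measure.
    return sum(lo < p < hi for p in prob)
```

A candidate well location is then scored by how much `uncertain_area` shrinks once the realizations are conditioned on the simulated measurement there.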

5.
Non-perennial streams comprise over half of the global stream network and impact downstream water quality. Although aridity is a primary driver of stream drying globally, surface flow permanence varies spatially and temporally within many headwater streams, suggesting that these complex drying patterns may be driven by topographic and subsurface factors. Indeed, these factors affect shallow groundwater flows in perennial systems, but there has been only limited characterisation of shallow groundwater residence times and groundwater contributions to intermittent streams. Here, we asked how groundwater residence times, shallow groundwater contributions to streamflow, and topography interact to control stream drying in headwater streams. We evaluated this overarching question in eight semi-arid headwater catchments based on surface flow observations during the low-flow period, coupled with tracer-based groundwater residence times. For one headwater catchment, we analysed stream drying during the seasonal flow recession and rewetting period using a sensor network that was interspersed between groundwater monitoring locations, and linked drying patterns to groundwater inputs and topography. We found a poor relationship between groundwater residence times and flowing network extent (R2 < 0.24). Although groundwater residence times indicated that old groundwater was present in all headwater streams, surface drying also occurred in each of them, suggesting old, deep flowpaths are insufficient to sustain surface flows. Indeed, the timing of stream drying at any given point typically coincided with a decrease in the contribution from near-surface sources and an increased relative contribution of groundwater to streamflow at that location, whereas the spatial pattern of drying within the stream network typically correlated with locations where groundwater inputs were most seasonally variable. 
Topographic metrics explained only ~30% of the variability in seasonal flow permanence and, surprisingly, we found no correlation between seasonal drying and down-valley subsurface storage area. Because we found complex spatial patterns, future studies should pair dense spatial observations of subsurface properties, such as hydraulic conductivity and transmissivity, with observations of seasonal flow permanence.

6.
We present a methodology for global optimal design of ground water quality monitoring networks using a linear mixed-integer formulation. The proposed methodology incorporates ordinary kriging (OK) within the decision model formulation for spatial estimation of contaminant concentration values. Different monitoring network design models incorporating concentration estimation error, variance estimation error, mass estimation error, error in locating the plume centroid, and spatial coverage of the designed network are developed. A big-M technique is used for reformulating the monitoring network design model as a linear decision model while incorporating different objectives and the OK equations. Global optimality of the solutions obtained for the monitoring network design can be ensured due to the proposed linear mixed-integer programming formulations. The performance of the proposed models is evaluated for both field and hypothetical illustrative systems. Evaluation results indicate that the proposed methodology performs satisfactorily. These performance evaluation results demonstrate the potential applicability of the proposed methodology for optimal ground water contaminant monitoring network design.

7.
This paper studies the impact of sensor measurement error on designing a water quality monitoring network for a river system, and shows that robust sensor locations can be obtained when an optimization algorithm is combined with a statistical process control (SPC) method. Specifically, we develop a probabilistic model of sensor measurement error and embed it into a simulation model of a river system. An optimization algorithm is used to find the sensor locations that minimize the expected time until spill detection, subject to a constraint on the probability of detecting a spill. The experimental results show that the optimal sensor locations are highly sensitive to the variability of the measurement error, and that false alarm rates are often unacceptably high. An SPC method is useful in finding thresholds that guarantee a false alarm rate no higher than a pre-specified target level, and an optimization algorithm combined with these thresholds finds a robust sensor network.
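Assuming Gaussian measurement error (one plausible choice; the paper's error model may differ), an SPC-style detection threshold that caps the false alarm rate follows directly from the inverse normal CDF:

```python
from statistics import NormalDist

def alarm_threshold(baseline_mean, error_sd, target_far):
    # One-sided control limit: under Gaussian measurement error and no
    # spill, P(reading > threshold) <= target_far (the false alarm rate).
    return baseline_mean + NormalDist().inv_cdf(1.0 - target_far) * error_sd
```

For a 5% target false alarm rate this places the threshold about 1.64 standard deviations above the no-spill baseline; noisier sensors get proportionally higher thresholds, which is what makes the resulting network robust to measurement-error variability.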

8.
Continuous seismic threshold monitoring is a technique that has been developed over the past several years to assess the upper magnitude limit of possible seismic events that might have occurred in a geographical target area. The method provides continuous monitoring in time at a given confidence level, and can be applied in a site-specific, regional, or global context.

In this paper (Part 1) and a companion paper (Part 2) we address the problem of optimizing the site-specific approach in order to achieve the highest possible automatic monitoring capability for particularly interesting areas. The present paper addresses the application of the method to cases where a regional monitoring network is available. We have in particular analyzed events from the region around the Novaya Zemlya nuclear test site to develop a set of optimized processing parameters for the arrays SPITS, ARCES, FINES, and NORES. From analysis of the calibration events we have derived values for beam-forming steering delays, filter bands, short-term average (STA) lengths, phase travel times (P and S waves), and amplitude-magnitude relationships for each array. By using these parameters for threshold monitoring of the Novaya Zemlya testing area, we obtain a monitoring capability varying between mb 2.0 and 2.5 during normal noise conditions.

The advantage of using a network, rather than a single station or array, for monitoring purposes becomes particularly evident during intervals with high global seismic activity (aftershock sequences), high seismic noise levels (wind, water waves, ice cracks), or station outages. For the period November–December 1997, all time intervals with network magnitude thresholds exceeding mb 2.5 were visually analyzed, and we found that all of these threshold peaks could be explained by teleseismic, regional, or local signals from events outside the Novaya Zemlya testing area. We could therefore conclude, within the confidence level provided by the method, that no seismic event of magnitude exceeding 2.5 occurred at the Novaya Zemlya test site during this two-month interval.

As an example of particular interest in a monitoring context, we apply optimized threshold processing to the SPITS array for a time interval around the 16 August 1997 mb 3.5 event in the Kara Sea. We show that this processing enables us to detect a second, smaller event from the same site (mb 2.6), occurring about 4 hours later. This second event was not defined automatically by standard processing.

9.
A methodology is developed for optimal operation of reservoirs to control water quality requirements at downstream locations. The physicochemical processes involved are incorporated using a numerical simulation model. This simulation model is then linked externally with an optimization algorithm. This linked simulation–optimization‐based methodology is used to obtain optimal reservoir operation policy. An elitist genetic algorithm is used as the optimization algorithm. This elitist‐genetic‐algorithm‐based linked simulation–optimization model is capable of evolving short‐term optimal operation strategies for controlling water quality downstream of a reservoir. The performance of the methodology developed is evaluated for an illustrative example problem. Different plausible scenarios of management are considered. The operation policies obtained are tested by simulating the resulting pollutant concentrations downstream of the reservoir. These performance evaluations consider various scenarios of inflow, permissible concentration limits, and a number of management periods. These evaluations establish the potential applicability of the developed methodology for optimal control of water quality downstream of a reservoir. Copyright © 2007 John Wiley & Sons, Ltd.

10.
A new method is developed to design a multi-objective, multi-pollutant air quality monitoring network (AQMN) for an industrial district. A dispersion model is employed to estimate the ground-level concentrations of the air pollutants emitted from different emission sources. The primary objective of the AQMN is to provide maximum information about the pollutants with respect to (1) maximum coverage area, (2) maximum detection of violations of ambient air standards, and (3) sensitivity of the monitoring stations to emission sources. An ant colony optimization (ACO) algorithm and a genetic algorithm (GA) are adopted as the optimization tools to identify the optimal configuration of the monitoring network. A comparison between the results of ACO and GA shows that both algorithms perform acceptably in finding the optimal configuration of the AQMN. Application of the method to a network of existing refinery stacks indicates that three stations are sufficient to cover the study area. The sensitivity of the three optimal station locations to emission sources is investigated, and a database containing the sensitivity of each station to each source is created.
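As a stand-in for the ACO/GA search (whose details the abstract does not give), a greedy maximum-coverage heuristic illustrates the coverage objective; all names and data are hypothetical:

```python
def greedy_stations(coverage, nstations):
    # coverage: candidate site -> set of grid cells its monitor observes
    # (e.g. cells where dispersion-modelled concentration is detectable).
    # Greedy rule: each step adds the site with the largest marginal gain
    # in newly covered area.
    chosen, covered = [], set()
    for _ in range(nstations):
        best = max((s for s in coverage if s not in chosen),
                   key=lambda s: len(coverage[s] - covered))
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered
```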

11.
Bayesian data fusion in a spatial prediction context: a general formulation
In spite of the exponential growth in the amount of data that one may expect to provide greater modeling and prediction opportunities, the number and diversity of sources over which this information is fragmented is growing at an even faster rate. As a consequence, there is a real need for methods that aim at reconciling them inside an epistemically sound theoretical framework. In a statistical spatial prediction framework, classical methods are based on a multivariate approach to the problem, at the price of strong modeling hypotheses. Though new avenues have recently been opened by focusing on the integration of uncertain data sources, to the best of our knowledge there have been no systematic attempts to explicitly account for information redundancy through a data fusion procedure. Starting from the simple concept of measurement errors, this paper proposes an approach for integrating multiple information processing as part of the prediction process itself, through a Bayesian approach. A general formulation is first proposed for deriving the prediction distribution of a continuous variable of interest at unsampled locations, using more or less uncertain (soft) information at neighboring locations. The case of multiple information sources is then considered, with a Bayesian solution to the problem of fusing multiple pieces of information that are provided as separate conditional probability distributions. Well-known methods and results are derived as limit cases. The convenient hypothesis of conditional independence is discussed in the light of information theory and the maximum entropy principle, and a methodology is suggested for the optimal selection of the most informative subset of information, if needed. Based on a synthetic case study, an application of the methodology is presented and discussed.
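Under the conditional-independence hypothesis discussed above, fusing discrete conditional distributions that share a common prior reduces to the product rule p(x | y_1..y_n) ∝ p(x)^(1-n) · Π_i p(x | y_i). A minimal sketch (the prior must be strictly positive; with a single source the fused result collapses to that source's conditional, one of the limit cases):

```python
from math import prod

def fuse(prior, conditionals):
    # prior: p(x) over a discrete support (all entries > 0).
    # conditionals: list of per-source posteriors p(x | y_i), same support.
    # Conditional independence of sources given x yields the product rule.
    n = len(conditionals)
    unnorm = [p ** (1 - n) * prod(c[k] for c in conditionals)
              for k, p in enumerate(prior)]
    z = sum(unnorm)
    return [u / z for u in unnorm]
```

Note how two sources that each mildly favour the same state reinforce one another: the prior's weight is divided out once per extra source, which is exactly the redundancy accounting the abstract refers to.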

12.
A novel methodology for the optimal placement of sensors in watercourses is presented. The methodology aims to find sensor locations that maximize the amount of information available for monitoring flow and velocity in watercourses. It is based on maximizing the Gram determinant of the sensor responses at each possible location or combination of locations. Two illustrative examples are presented in which locations for the one- and two-sensor cases are selected for velocity and flow monitoring of the Venero Claro River Basin, Spain. The kinematic wave method was used to evaluate the sensor response at every possible location. The results confirm the suitability of this methodology for finding optimal sensor locations for monitoring flow or velocity in watercourses.
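The Gram-determinant criterion can be illustrated for the one- and two-sensor cases: responses that are nearly collinear add little information, so their Gram determinant is small. A sketch with made-up response vectors standing in for kinematic wave model output:

```python
from itertools import combinations

def gram_det(vectors):
    # Determinant of the Gram matrix G[i][j] = <v_i, v_j>; only the
    # one- and two-sensor cases from the paper are handled here.
    g = [[sum(a * b for a, b in zip(u, v)) for v in vectors] for u in vectors]
    if len(g) == 1:
        return g[0][0]
    (a, b), (c, d) = g
    return a * d - b * c

def best_pair(responses):
    # responses: candidate location -> response vector over flow scenarios
    # (assumed precomputed, e.g. by a kinematic wave model).
    return max(combinations(responses, 2),
               key=lambda pair: gram_det([responses[k] for k in pair]))
```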

13.
Mapping groundwater quality in the Netherlands
Maps of 25 groundwater quality variables were obtained by estimating 4 km × 4 km block median concentrations. Estimates were presented as approximate 95% confidence intervals related to four concentration levels mostly obtained from critical levels for human consumption. These maps were based on measurements from 425 monitoring sites of national and provincial groundwater quality monitoring networks. The estimation procedure was based on a stratification by soil type and land use. Within each soil-land use category, measurements were interpolated. Spatial dependence between measurements and regional differences in mean level were taken into account. Stratification turned out to be essential: no or partial stratification (using either soil type or land use) results in essentially different maps. The effect of monitoring network density was studied by leaving out the 173 monitoring sites of the provincial monitoring networks. Important changes in resulting maps were assigned to loss of information on short-distance variation, as well as loss of location-specific information. For 12 variables, maps of changes in groundwater quality were made by spatial interpolation of short-term predictions calculated for each well screen from time series of yearly measurements over 5–7 years, using a simple regression model for variation over time and taking location-specific time-prediction uncertainties into account.

From a policy point of view, the resulting maps can be used either for quantifying diffuse groundwater contamination and location-specific background concentrations (in order to assist local contamination assessment) or for input and validation of policy supporting regional or national groundwater quality models. The maps can be considered as a translation of point information obtained from the monitoring networks into information on spatial units, the size of which is used in regional groundwater models. The maps enable location-specific network optimization. In general, the maps give little reason for reducing the monitoring network density (wide confidence intervals).


14.
This paper investigates a methodology for locating strong motion accelerographs in a seismically active region. Starting with the probability density of earthquakes in a given region, the paper attempts, within the framework of optimization theory, to formulate the following two questions: (1) given N accelerographs, where should they be located in a seismically active zone, and (2) having fixed these N accelerographs, where should the next M be located? Three different cost functions are presented. Some closed form solutions are illustrated for problems when N and M are small. For larger arrays, numerical optimization is resorted to. To demonstrate the methodology, a region with J faults, with given spatial locations is selected. An efficient algorithm for optimization is utilized and the technique illustrated. Good agreement with closed form solutions obtained in some simple cases is indicated. Specific application of the method to the placing of twenty strong motion instruments in a seismically active area has been carried out, and the patterns of sensor location, for each of the cost functions, illustrated.

15.
Several environmental health studies suggest that birth weight is associated with outdoor air pollution during gestation. In these studies, exposure assignments are usually based on measurements collected at air quality monitoring stations that do not coincide with the health data locations, so estimated exposures can be misleading if their uncertainty is not taken into account. In this article we conducted a semi-ecological study to analyze associations between air quality during gestation and birth weight. Air quality during gestation was measured using a biomonitor, as an alternative to data from traditional air quality monitoring stations, in order to increase the spatial resolution of the exposure measurements. To our knowledge this is the first time that the association between air quality and birth weight has been studied using biomonitors. To address exposure uncertainty at the health locations, we applied geostatistical simulation to the biomonitoring data, which provided multiple equally probable realizations of the biomonitoring data, reproducing the observed histogram and spatial covariance while honouring the conditioning data. Each simulation represented a measure of exposure at each location, and the set of simulations provided a measure of exposure uncertainty at each location. To incorporate this uncertainty in our analysis we used generalized linear models fitted to the simulation outputs and birth weight data, and assessed the statistical significance of the exposure parameter using non-parametric bootstrap techniques. We found a positive association between air quality and birth weight; however, this association was not statistically significant. We also found a modest but significant association between air quality and birth weight among babies exposed to gestational tobacco smoke.

16.
The need for high-resolution rainfall data at temporal scales varying from daily to hourly or even minutes is a very important problem in hydrology. For many locations in the world, rainfall data quality is very poor and reliable measurements are only available at a coarse time resolution, such as monthly. The purpose of this work is to apply a stochastic method for disaggregating monthly precipitation to daily values in two steps: 1. Initialization of the daily rainfall series using the truncated normal model as a reference distribution. 2. Restructuring of the series according to various time series statistics (autocorrelation function, scaling properties, seasonality) using a Markov chain Monte Carlo based algorithm. The method was applied to a data set from a rainfall network in the central plains of Venezuela, where rainfall is highly seasonal and data availability at a daily or finer time scale is very limited. A detailed analysis was carried out to study the seasonal and spatial variability of many properties of the daily rainfall, such as scaling properties and the autocorrelation function, in order to incorporate the selected statistics and their annual cycle into an objective function to be minimized in the simulation procedure. Comparisons between the observed and simulated data suggest the adequacy of the technique in providing rainfall sequences with consistent statistical properties at a daily time scale given the monthly totals. The methodology, although highly computationally intensive, needs only a moderate number of statistical properties of the daily rainfall. Regionalization of these statistical properties is an important next step for applying this technique to regions where daily data are not available.
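Step 1 of the disaggregation (initialization from a zero-truncated normal, rescaled to the monthly total) can be sketched as follows; the MCMC restructuring of step 2 is omitted, and all parameter values are illustrative:

```python
import random

def init_daily(monthly_total, ndays, mean=1.0, sd=1.0, seed=42):
    # Step 1 only: draw daily values from a normal distribution truncated
    # at zero (by rejection sampling), then rescale so the days sum to the
    # observed monthly total. Step 2 (MCMC restructuring to match the
    # autocorrelation, scaling and seasonality statistics) is not shown.
    rng = random.Random(seed)
    draws = []
    while len(draws) < ndays:
        v = rng.gauss(mean, sd)
        if v >= 0.0:                 # rejection = truncation at zero
            draws.append(v)
    scale = monthly_total / sum(draws)
    return [d * scale for d in draws]
```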

17.
A multivariate spatial sampling design based on spatial vine copulas is presented that aims to simultaneously reduce the prediction uncertainty of multiple variables by selecting additional sampling locations based on the multivariate relationship between variables, the spatial configuration of existing locations, and the values of the observations at those locations. Novel aspects of the methodology include the development of optimal designs that use spatial vine copulas to estimate prediction uncertainty and, additionally, the use of transformation methods for dimension reduction to model multivariate spatial dependence. Spatial vine copulas capture non-linear spatial dependence within variables, whilst a chained transformation that uses non-linear principal component analysis captures the non-linear multivariate dependence between variables. The proposed design methodology is applied to two environmental case studies. Performance of the proposed methodology is evaluated through partial redesigns of the original spatial designs. The first application is a soil contamination example that demonstrates the ability of the proposed methodology to address spatial non-linearity in the data. The second application is a forest biomass study that highlights the strength of the methodology in incorporating non-linear multivariate dependence into the design.

18.
Several risk and decision analysis applications are characterized by spatial elements: there are spatially dependent uncertain variables of interest, decisions are made at spatial locations, and there are opportunities for spatial data acquisition. Spatial dependence implies that the data gathered at one coordinate could inform and assist a decision maker at other locations as well, and one should account for this learning effect when analyzing and comparing information gathering schemes. In this paper, we present concepts and methods for evaluating sequential information gathering schemes in spatial decision situations. Static and sequential information gathering schemes are outlined using the decision theoretic notion of value of information, and we use heuristics for approximating the value of sequential information in large-size spatial applications. We illustrate the concepts using a Bayesian network example motivated from risks associated with CO2 sequestration. We present a case study from mining where there are risks of rock hazard in the tunnels, and information about the spatial distribution of joints in the rocks may lead to a better allocation of resources for choosing rock reinforcement locations. In this application, the spatial variables are modeled by a Gaussian process. In both examples there can be large values associated with adaptive information gathering.
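The decision-theoretic value of information used above has a compact discrete form: VOI is the expected value of deciding after seeing the data minus the value of deciding on the prior alone. A sketch loosely modeled on the rock-reinforcement example (states, payoffs, and probabilities are invented):

```python
def best_expected_payoff(actions, p_state):
    # actions: name -> payoff per state; pick the best expected payoff.
    return max(sum(p * u for p, u in zip(p_state, payoffs))
               for payoffs in actions.values())

def value_of_information(actions, p_state, likelihood):
    # likelihood: signal -> p(signal | state) per state.
    # VOI = E[value deciding after the signal] - value deciding on prior.
    prior_value = best_expected_payoff(actions, p_state)
    with_data = 0.0
    for lik in likelihood.values():
        p_signal = sum(l * p for l, p in zip(lik, p_state))
        if p_signal > 0.0:
            posterior = [l * p / p_signal for l, p in zip(lik, p_state)]
            with_data += p_signal * best_expected_payoff(actions, posterior)
    return with_data - prior_value
```

With states (hazard, safe) at prior (0.3, 0.7), actions "reinforce" (payoff -1 in either state) and "skip" (-10 if hazard, 0 if safe), a perfect joint survey is worth 0.7: it lets the decision maker skip reinforcement exactly where it is safe to do so.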

19.
Seismogenic structures excited during the 1992 Lima (力马), Sichuan earthquake sequence
Zhao Zhu, Chen Nong. Earthquake Research in China (《中国地震》), 1996, 12(1): 64–74
In-depth study of small-earthquake sequences may serve as a substitute for studying the post-event trends of large-earthquake sequences. This paper attempts such a substitution using the Lima small-earthquake sequence, whose analogue seismic records have been converted to digital form.

20.
The present study demonstrates a methodology for optimization of environmental data acquisition. Based on the premise that the worth of data increases in proportion to its ability to reduce the uncertainty of key model predictions, the methodology can be used to compare the worth of different data types, gathered at different locations within study areas of arbitrary complexity. The method is applied to a hypothetical nonlinear, variable density numerical model of salt and heat transport. The relative utilities of temperature and concentration measurements at different locations within the model domain are assessed in terms of their ability to reduce the uncertainty associated with predictions of movement of the salt water interface in response to a decrease in fresh water recharge. In order to test the sensitivity of the method to nonlinear model behavior, analyses were repeated for multiple realizations of system properties. Rankings of observation worth were similar for all realizations, indicating robust performance of the methodology when employed in conjunction with a highly nonlinear model. The analysis showed that while concentration and temperature measurements can both aid in the prediction of interface movement, concentration measurements, especially when taken in proximity to the interface at locations where the interface is expected to move, are of greater worth than temperature measurements. Nevertheless, it was also demonstrated that pairs of temperature measurements, taken in strategic locations with respect to the interface, can also lead to more precise predictions of interface movement.
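The premise that data worth equals the reduction in prediction uncertainty can be illustrated with a first-order (linear), single-parameter sketch; the study's actual analysis uses a full numerical model, so this is only the scalar skeleton of the idea, with invented sensitivities and variances:

```python
def worth_of_observation(sens_pred, sens_obs, var_param, var_noise):
    # Linear (first-order) data worth with one uncertain parameter:
    # posterior parameter variance after one observation is
    # 1 / (1/var_param + sens_obs**2 / var_noise),
    # and worth is the resulting drop in prediction variance.
    post = 1.0 / (1.0 / var_param + sens_obs ** 2 / var_noise)
    return sens_pred ** 2 * (var_param - post)

def rank_candidates(candidates, sens_pred, var_param):
    # candidates: name -> (observation sensitivity, noise variance);
    # most worthwhile observation first.
    return sorted(candidates, reverse=True,
                  key=lambda k: worth_of_observation(
                      sens_pred, candidates[k][0], var_param, candidates[k][1]))
```

With equal noise, the candidate whose sensitivity to the uncertain parameter is largest (here, a concentration measurement near the interface rather than a temperature one) ranks highest, mirroring the study's finding.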


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号