Similar Articles
20 similar articles found (search time: 343 ms)
1.
The groundwater inverse problem of estimating heterogeneous groundwater model parameters (hydraulic conductivity in this case) given measurements of aquifer response (such as hydraulic heads) is known to be an ill-posed problem, with multiple parameter values giving similar fits to the aquifer response measurements. This problem is further exacerbated by the lack of extensive data, typical of most real-world problems. In such cases, it is desirable to incorporate expert knowledge in the estimation process to generate more reasonable estimates. This work presents a novel interactive framework, called the ‘Interactive Multi-Objective Genetic Algorithm’ (IMOGA), to solve the groundwater inverse problem considering different sources of quantitative data as well as qualitative expert knowledge about the site. The IMOGA is unique in that it looks at groundwater model calibration as a multi-objective problem consisting of quantitative objectives – calibration error and regularization – and a ‘qualitative’ objective based on the preference of the geological expert for different spatial characteristics of the conductivity field. All these objectives are then included within a multi-objective genetic algorithm to find multiple solutions that represent the best combination of all quantitative and qualitative objectives. A hypothetical aquifer case study (based on the test case presented by Freyberg [Freyberg DL. An exercise in ground-water model calibration and prediction. Ground Water 1988;26(3)]), for which the ‘true’ parameter values are known, is used as a test case to demonstrate the applicability of this method. It is shown that using automated calibration techniques without expert interaction leads to parameter values that are not consistent with site knowledge. Adding expert interaction is shown to improve not only the plausibility of the estimated conductivity fields but also the predictive accuracy of the calibrated model.

2.
We examine the value of additional information in multiple objective calibration in terms of model performance and parameter uncertainty. We calibrate and validate a semi-distributed conceptual catchment model for two 11-year periods in 320 Austrian catchments and test three approaches of parameter calibration: (a) traditional single objective calibration (SINGLE) on daily runoff; (b) multiple objective calibration (MULTI) using daily runoff and snow cover data; (c) multiple objective calibration (APRIORI) that incorporates an a priori expert guess about the parameter distribution as additional information to runoff and snow cover data. Results indicate that the MULTI approach performs slightly poorer than the SINGLE approach in terms of runoff simulations, but significantly better in terms of snow cover simulations. The APRIORI approach is essentially as good as the SINGLE approach in terms of runoff simulations but is slightly poorer than the MULTI approach in terms of snow cover simulations. An analysis of the parameter uncertainty indicates that the MULTI approach significantly decreases the uncertainty of the model parameters related to snow processes but does not decrease the uncertainty of other model parameters as compared to the SINGLE case. The APRIORI approach tends to decrease the uncertainty of all model parameters as compared to the SINGLE case. Copyright © 2006 John Wiley & Sons, Ltd.
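The compromise between runoff fit and snow-cover fit in such a multiple objective calibration is commonly realized as a weighted combination of per-variable efficiencies. A minimal sketch, assuming Nash–Sutcliffe efficiency as the metric and a hypothetical weight `w` (the cited study's exact objective may differ):

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1.0 means a perfect fit."""
    sim = np.asarray(sim, dtype=float)
    obs = np.asarray(obs, dtype=float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def multi_objective(sim_runoff, obs_runoff, sim_snow, obs_snow, w=0.5):
    """Weighted compromise between runoff fit and snow-cover fit.
    Returns a value to maximize (1.0 = both variables fit perfectly)."""
    return w * nse(sim_runoff, obs_runoff) + (1.0 - w) * nse(sim_snow, obs_snow)
```

A MULTI-style calibration would maximize `multi_objective` over the model parameters, while the SINGLE strategy corresponds to `w = 1`.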

3.
The self-calibrated method has been extended for the generation of equally likely realizations of transmissivity and storativity conditional to transmissivity and storativity data and to steady-state and transient hydraulic head data. Conditioning to transmissivity and storativity data is achieved by means of standard geostatistical co-simulation algorithms, whereas conditioning to hydraulic head data, given its non-linear relation to transmissivity and storativity, is achieved through non-linear optimization, similar to standard inverse algorithms. The algorithm is demonstrated in a synthetic study based on data from the WIPP site in New Mexico. Seven alternative scenarios are investigated, generating 100 realizations for each of them. The differences among the scenarios range from the number of conditioning data, to their spatial configuration, to the pumping strategies at the pumping wells. In all scenarios, the self-calibrated algorithm is able to generate transmissivity–storativity realization couples conditional to all the sample data. For the specific case studied here the results are not surprising. Of the piezometric head data, the steady-state values are the most consequential for transmissivity characterization. Conditioning to transient head data only introduces local adjustments on the transmissivity fields and serves to improve the characterization of the storativity fields.

4.
Ground water model calibration using pilot points and regularization   (cited by 9; 0 self-citations, 9 by others)
Doherty J. Ground Water 2003;41(2):170-177
Use of nonlinear parameter estimation techniques is now commonplace in ground water model calibration. However, there is still ample room for further development of these techniques in order to enable them to extract more information from calibration datasets, to more thoroughly explore the uncertainty associated with model predictions, and to make them easier to implement in various modeling contexts. This paper describes the use of "pilot points" as a methodology for spatial hydraulic property characterization. When used in conjunction with nonlinear parameter estimation software that incorporates advanced regularization functionality (such as PEST), use of pilot points can add a great deal of flexibility to the calibration process at the same time as it makes this process easier to implement. Pilot points can be used either as a substitute for zones of piecewise parameter uniformity, or in conjunction with such zones. In either case, they allow the disposition of areas of high and low hydraulic property value to be inferred through the calibration process, without the need for the modeler to guess the geometry of such areas prior to estimating the parameters that pertain to them. Pilot points and regularization can also be used as an adjunct to geostatistically based stochastic parameterization methods. Using the techniques described herein, a series of hydraulic property fields can be generated, all of which recognize the stochastic characterization of an area at the same time that they satisfy the constraints imposed on hydraulic property values by the need to ensure that model outputs match field measurements. Model predictions can then be made using all of these fields as a mechanism for exploring predictive uncertainty.
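The pilot-point step amounts to spreading a handful of estimated property values out to every model cell. PEST-style workflows use kriging for this interpolation; the sketch below substitutes inverse-distance weighting for brevity, and all names and values are hypothetical:

```python
import numpy as np

def idw_field(pilot_xy, pilot_logK, grid_xy, power=2.0):
    """Interpolate log-conductivity from pilot points to grid nodes by
    inverse-distance weighting (a stand-in for the kriging used in PEST)."""
    pilot_xy = np.atleast_2d(pilot_xy)
    grid_xy = np.atleast_2d(grid_xy)
    # Distance from every grid node to every pilot point.
    d = np.linalg.norm(grid_xy[:, None, :] - pilot_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)  # avoid division by zero at a pilot point
    w = d ** (-power)
    return (w * pilot_logK).sum(axis=1) / w.sum(axis=1)
```

In a calibration loop, the optimizer would adjust `pilot_logK` and the interpolated field would be fed to the flow model; the regularization term then penalizes departures from the prior at the pilot points.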

5.
A comparison of two stochastic inverse methods in a field-scale application   (cited by 1; 0 self-citations, 1 by others)
Inverse modeling is a useful tool in ground water flow modeling studies. The most frequent difficulties encountered when using this technique are the lack of conditioning information (e.g., heads and transmissivities), the uncertainty in available data, and the nonuniqueness of the solution. These problems can be addressed and quantified through a stochastic Monte Carlo approach. The aim of this work was to compare the applicability of two stochastic inverse modeling approaches in a field-scale application. The multi-scaling (MS) approach uses a downscaling parameterization procedure that is not based on geostatistics. The pilot point (PP) approach uses geostatistical random fields as initial transmissivity values and an experimental variogram to condition the calibration. The studied area (375 km²) is part of a regional aquifer, northwest of Montreal in the St. Lawrence lowlands (southern Québec). It is located in limestone, dolomite, and sandstone formations, and is mostly a fractured porous medium. The MS approach generated small errors on heads, but the calibrated transmissivity fields did not reproduce the variogram of observed transmissivities. The PP approach generated larger errors on heads but better reproduced the spatial structure of observed transmissivities. The PP approach was also less sensitive to uncertainty in head measurements. If reliable heads are available but no transmissivities are measured, the MS approach provides useful results. If reliable transmissivities with a well-inferred spatial structure are available, then the PP approach is a better alternative. This approach, however, must be used with caution if measured transmissivities are not reliable.

6.
In this study, we examine the effects of conditioning spatially variable transmissivity fields using head and/or transmissivity measurements on well-capture zones. In order to address the challenge posed by conditioning a flow model with spatially varying parameters, an innovative inverse algorithm, the Representers method, is employed. The method explicitly considers this spatial variability.

A number of uniform measurement grids with different densities are used to condition transmissivity fields using the Representers method. Deterministic and stochastic analyses of well-capture zones are then examined. The deterministic study focuses on a comparison between reference well-capture zones and their estimated mean conditioned on head data. It shows that model performance due to head conditioning on well-capture zone estimation is related to pumping rate. At moderate pumping rates, transmissivity observations are more crucial for identifying effects arising from small-scale variations in pore water velocity. However, with more aggressive pumping these effects are reduced; consequently, model performance through incorporating head observations markedly improves. In the stochastic study, the effect of conditioning using head and/or transmissivity data on well-capture zone uncertainty is examined. The Representers method is coupled with the Monte Carlo method to propagate uncertainty in transmissivity fields to well-capture zones. For the scenario studied, the results showed that a combination of 48 head and transmissivity data could reduce the area of uncertainty (95% confidence interval) in well-capture zone location by over 50%, compared to a 40% reduction using either head or transmissivity data. This performance was comparable to that obtained through calibrating on three and a half times the number of head observations alone.


7.
A Monte Carlo approach is described for the quantification of uncertainty on travel time estimates. A real (non-synthetic) and exhaustive data set of natural genesis is used for reference. Using an approach based on binary indicators, constraint interval data are easily accommodated in the modeling process. It is shown how the incorporation of imprecise data can drastically reduce the uncertainty in the estimates. It is also shown that unrealistic results are obtained when deterministic modeling is carried out using a kriging estimate of the transmissivity field. Problems related to using sequential indicator simulation for the generation of fields incorporating constraint interval data are discussed. The final results consist of 95% probability intervals of arrival times at selected control planes reflecting the original uncertainty on the transmissivity maps.
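The 95% probability intervals of arrival times come directly from the empirical percentiles of the Monte Carlo sample. A minimal sketch, with a lognormal placeholder standing in for an actual flow-and-transport run on one transmissivity realization (all distribution parameters are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)

def travel_time_one_realization():
    """Hypothetical stand-in for one flow-and-transport simulation:
    returns the arrival time at a control plane for one equally
    likely transmissivity realization."""
    return rng.lognormal(mean=3.0, sigma=0.5)

# Monte Carlo ensemble of arrival times over many realizations.
times = np.array([travel_time_one_realization() for _ in range(2000)])

# Empirical 95% probability interval at the control plane.
t_lo, t_hi = np.percentile(times, [2.5, 97.5])
```

In the actual study each sample would be produced by solving flow and transport on a conditioned transmissivity field rather than drawn from a fixed distribution.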

9.
In many fields of study, and certainly in hydrogeology, uncertainty propagation is a recurring subject. Usually, parametrized probability density functions (PDFs) are used to represent data uncertainty, which limits their use to particular distributions. Often, this problem is solved by Monte Carlo simulation, with the disadvantage that one needs a large number of calculations to achieve reliable results. In this paper, a method is proposed based on a piecewise linear approximation of PDFs. The uncertainty propagation with these discretized PDFs is distribution independent. The method is applied to the upscaling of transmissivity data, and carried out in two steps: the vertical upscaling of conductivity values from borehole data to aquifer scale, and the spatial interpolation of the transmissivities. The results of this first step are complete PDFs of the transmissivities at borehole locations reflecting the uncertainties of the conductivities and the layer thicknesses. The second step results in a spatially distributed transmissivity field with a complete PDF at every grid cell. We argue that the proposed method is applicable to a wide range of uncertainty propagation problems.
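The distribution-independent propagation of discretized PDFs can be illustrated for the sum of two independent quantities (the flavor of vertical upscaling, where transmissivity accumulates over layers): the PDF of the sum is the convolution of the two tabulated PDFs. A sketch assuming piecewise-linear PDFs on equally spaced support points; this is a generic illustration, not the paper's exact scheme:

```python
import numpy as np

def convolve_pdfs(x, f, z, g):
    """PDF of the sum of two independent random variables whose PDFs
    are given as piecewise-linear tables f(x) and g(z) on equally
    spaced grids with the same spacing. Returns (support, pdf)."""
    dx = x[1] - x[0]
    h = np.convolve(f, g) * dx  # discrete convolution approximates the integral
    s = x[0] + z[0] + dx * np.arange(h.size)
    return s, h

# Example: two uniform PDFs on [0, 1] convolve to a triangular PDF on [0, 2].
x = np.linspace(0.0, 1.0, 101)
s, h = convolve_pdfs(x, np.ones(101), x, np.ones(101))
```

No distributional assumption enters: any tabulated PDF can be pushed through, which is the point of the discretized representation.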

10.
In catchments characterized by spatially varying hydrological processes and responses, the optimal parameter values or regions of attraction in parameter space may differ with location-specific characteristics and dominating processes. This paper evaluates the value of semi-distributed calibration parameters for large-scale streamflow simulation using the spatially distributed LISFLOOD model. We employ the Shuffled Complex Evolution Metropolis (SCEM-UA) global optimization algorithm to infer the calibration parameters using daily discharge observations. The resulting posterior parameter distribution reflects the uncertainty about the model parameters and forms the basis for making probabilistic flow predictions. We assess the value of semi-distributing the calibration parameters by comparing three different calibration strategies. In the first calibration strategy, uniform values over the entire area of interest are adopted for the unknown parameters, which are calibrated against discharge observations at the downstream outlet of the catchment. In the second calibration strategy, the parameters are also uniformly distributed, but they are calibrated against observed discharges at the catchment outlet and at internal stations. In the third strategy, a semi-distributed approach is adopted. Starting from upstream, parameters in each subcatchment are calibrated against the observed discharges at the outlet of the subcatchment. In order not to propagate upstream errors in the calibration process, observed discharges at upstream catchment outlets are used as inflow when calibrating downstream subcatchments. As an illustrative example, we demonstrate the methodology for a part of the Morava catchment, covering an area of approximately 10 000 km². The calibration results reveal that the additional value of the internal discharge stations is limited when applying a lumped parameter approach. Moving from a lumped to a semi-distributed parameter approach: (i) improves the accuracy of the flow predictions, especially in the upstream subcatchments; and (ii) results in a more correct representation of flow prediction uncertainty. The results show the clear need to distribute the calibration parameters, especially in large catchments characterized by spatially varying hydrological processes and responses.

11.
A parameter-estimation technique based on existing hydrological, geophysical, and geological data was developed to approximate transmissivity values for use in a ground-water flow model of the Animas Valley, southwest New Mexico. Complete Bouguer gravity anomaly maps together with seismic-refraction profiles, geologic maps, geologic, geophysical, and drillers' logs, water levels, and pumping-test data provide insight into the transmissivity of bolson deposits throughout the basin. The transmissivity distribution was primarily based on reported pumping and specific-capacity tests in conjunction with complete Bouguer gravity anomaly maps and well log data. Reported transmissivity values were characterized by gravity values and well log data. In grid blocks lacking pumping and specific-capacity tests, transmissivity values were assigned based on the relationship of gravity values and well log data within the grid block to gravity values and well log data within other grid blocks for which transmissivity values are available. A two-dimensional, finite-difference, ground-water flow computer code was used to evaluate the effectiveness of the parameter-estimation technique. Although the trial-and-error method of calibration was employed, the actual computer time necessary for model calibration was minimal. The conceptually straightforward approach for parameter estimation utilizing existing hydrological, geophysical, and geological data provides realistic parameter estimates.

12.
Water level time series from groundwater production wells offer a transient dataset that can be used to estimate aquifer properties in areas with active groundwater development. This article describes a new parameter estimation method to infer aquifer properties from such datasets. Specifically, the method analyzes long-term water level measurements from multiple, interacting groundwater production wells and relies on temporal water level derivatives to estimate the aquifer transmissivity and storativity. Analytically modeled derivatives are compared to derivatives calculated directly from the observed water level data; an optimization technique is used to identify best-fitting transmissivity and storativity values that minimize the difference between modeled and observed derivatives. We demonstrate how the consideration of derivative (slope) behavior eliminates uncertainty associated with static water levels and well-loss coefficients, enabling effective use of water level data from groundwater production wells. The method is applied to time-series data collected over a period of 6 years from a municipal well field operating in the Denver Basin, Colorado (USA). The estimated aquifer properties are shown to be consistent with previously published values. The parameter estimation method is further tested using synthetic water level time series generated with a numerical model that incorporates the style of heterogeneity that occurs in the Denver Basin sandstone aquifers.
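The derivative idea can be illustrated with its classic single-well relative: under the Cooper-Jacob approximation of the Theis solution, drawdown is linear in ln(t), the slope gives transmissivity, and the intercept gives storativity. This is a simplified sketch of that straight-line variant, not the paper's multi-well derivative-matching optimization; all symbols are standard (Q pumping rate, r radial distance):

```python
import numpy as np

EULER = 0.5772156649  # Euler-Mascheroni constant

def jacob_drawdown(t, r, Q, T, S):
    """Cooper-Jacob approximation of Theis drawdown (valid for small u)."""
    return Q / (4.0 * np.pi * T) * (np.log(4.0 * T * t / (r**2 * S)) - EULER)

def estimate_T_S(t, s, r, Q):
    """Fit s = a*ln(t) + b; the slope a yields T, the intercept b yields S."""
    a, b = np.polyfit(np.log(t), s, 1)
    T = Q / (4.0 * np.pi * a)
    S = 4.0 * T / r**2 * np.exp(-(b / a) - EULER)
    return T, S

# Synthetic check: generate late-time drawdown and recover the parameters.
T_true, S_true, Q, r = 1e-3, 1e-4, 0.01, 10.0
t = np.logspace(3, 6, 50)
s = jacob_drawdown(t, r, Q, T_true, S_true)
T_est, S_est = estimate_T_S(t, s, r, Q)
```

Working with the log-time slope rather than absolute water levels is what removes the static-level and well-loss offsets mentioned in the abstract, since any constant shift in s leaves the slope unchanged.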

13.
A common approach for the performance assessment of radionuclide migration from a nuclear waste repository is by means of Monte-Carlo techniques. Multiple realizations of the parameters controlling radionuclide transport are generated, and each one of these realizations is used in a numerical model to provide a transport prediction. The statistical analysis of all transport predictions is then used in performance assessment. In order to reduce the uncertainty of the predictions, it is necessary to incorporate as much information as possible in the generation of the parameter fields. In this regard, this paper focuses on the impact that conditioning the transmissivity fields to geophysical data and/or piezometric head data has on convective transport predictions in a two-dimensional heterogeneous formation. The Walker Lake database is used to produce a heterogeneous log-transmissivity field with distinct non-Gaussian characteristics and a secondary variable that represents some geophysical attribute. In addition, the piezometric head field resulting from the steady-state solution of the groundwater flow equation is computed. These three reference fields are sampled to mimic a sampling campaign. Then, a series of Monte-Carlo exercises using different combinations of sampled data shows the relative worth of secondary data with respect to piezometric head data for transport predictions. The analysis shows that secondary data allow reproduction of the main spatial patterns of the reference transmissivity field and improve the mass transport predictions with respect to the case in which only transmissivity data are used. However, a few piezometric head measurements could be equally effective for the characterization of transport predictions.

15.
If a parameter field to be calibrated consists of more than one statistical population, usually not only the parameter values are uncertain, but the spatial distributions of the populations are uncertain as well. In this study, we demonstrate the potential of the multimodal calibration method we proposed recently for the calibration of such fields, as applied to real-world ground water models with several additional stochastic parameter fields. Our method enables the calibration of the spatial distribution of the statistical populations, as well as their spatially correlated parameterization, while honoring the complete prior geostatistical definition of the multimodal parameter field. We illustrate the implications of the method in terms of the reliability of the posterior model by comparing its performance to that of a "conventional" calibration approach in which the positions of the statistical populations are not allowed to change. Information from synthetic calibration runs is used to show how ignoring the uncertainty involved in the positions of the statistical populations not only denies the modeler the opportunity to use the measurement information to improve these positions but also unduly influences the posterior intrapopulation distributions, causes unjustified adjustments to the cocalibrated parameter fields, and results in poorer observation reproduction. The proposed multimodal calibration allows a more complete treatment of the relevant uncertainties, which prevents the abovementioned adverse effects and renders a more trustworthy posterior model.

16.
Seismic impedance inversion is a well-known method used to image subsurface geological structures. By utilizing the spatial coherence among seismic traces, laterally constrained multitrace impedance inversion (LCI) is superior to trace-by-trace inversion and can produce a more realistic image of the subsurface structures. However, when the traces are numerous, solving the large-scale matrix in the multitrace inversion incurs great computational cost and memory use, which restricts the efficiency and applicability of the existing multitrace inversion algorithm. In addition, multitrace inversion methods must consider not only the lateral correlation but also constraints in the temporal dimension. These vertical constraints usually represent the stratigraphic characteristics of the reservoir. For instance, total-variation regularization is adopted to obtain a blocky structure, but it limits the magnitude of model parameter variation and therefore somewhat distorts the real image. In this paper, we propose two schemes to address these issues. First, we introduce a fast algorithm called block coordinate descent (BCD) to derive a new framework of laterally constrained multitrace impedance inversion. This new BCD-based inversion approach is fast and requires less memory. Next, we introduce a minimum gradient support regularization into the BCD-based laterally constrained inversion. This new approach can adapt to sharp layer boundaries while keeping the spatial coherence. The feasibility of the proposed method is illustrated by numerical tests on both synthetic data and field seismic data.

17.
18.
Parameter uncertainty involved in hydrological and sediment modeling often refers to the parameter dispersion and the sensitivity of the parameter. However, a limitation of previous studies lies in that the assignment of a range and the specification of a probability distribution for each parameter are usually difficult and subjective. Therefore, there is great uncertainty in the process of parameter calibration and model prediction. In this study, the impact of the parameter probability distribution on hydrological and sediment modeling was evaluated using a semi-distributed model, the Soil and Water Assessment Tool (SWAT), together with the Monte Carlo (MC) method, in the Daning River watershed of the Three Gorges Reservoir Region (TGRA), China. The classic types of parameter distribution, such as the uniform, normal and logarithmic normal distributions, were involved in this study. Based on the results, the parameter probability distribution showed a diverse degree of influence on the hydrological and sediment predictions, such as on the sampling size, the width of the 95% confidence interval (CI), the ranking of the parameters related to uncertainty, and the sensitivity of the parameters on model output. It can further be inferred that model parameters presented greater uncertainty in certain regions of the primitive parameter range, and parameter samples densely obtained from these regions would lead to a wider 95% CI, resulting in a more doubtful prediction. This study suggested that the optimized value obtained by the parameter calibration process may also be of vital importance in selecting the probability distribution function (PDF). In such cases, where the parameter value corresponds to the watershed characteristics, it can be used to provide a more credible distribution for both hydrological and sediment modeling.

19.
Probabilistic-fuzzy health risk modeling   (cited by 3; 2 self-citations, 1 by others)
Health risk analysis of multi-pathway exposure to contaminated water involves the use of mechanistic models that include many uncertain and highly variable parameters. Currently, the uncertainties in these models are treated using statistical approaches. However, not all uncertainties in data or model parameters are due to randomness. Other sources of imprecision that may lead to uncertainty include scarce or incomplete data, measurement error, data obtained from expert judgment, or subjective interpretation of available information. These kinds of uncertainties, and also the non-random uncertainty, cannot be treated solely by statistical methods. In this paper we propose the use of fuzzy set theory together with probability theory to incorporate uncertainties into the health risk analysis. We identify this approach as probabilistic-fuzzy risk assessment (PFRA). Based on the form of available information, fuzzy set theory, probability theory, or a combination of both can be used to incorporate parameter uncertainty and variability into mechanistic risk assessment models. In this study, tap water concentration is used as the source of contamination in the human exposure model. Ingestion, inhalation and dermal contact are considered as multiple exposure pathways. The tap water concentration of the contaminant and cancer potency factors for ingestion, inhalation and dermal contact are treated as fuzzy variables, while the remaining model parameters are treated using probability density functions. Combined utilization of fuzzy and random variables produces membership functions of risk to individuals at different fractiles of risk, as well as probability distributions of risk for various alpha-cut levels of the membership function. The proposed method provides a robust approach to evaluating human health risk from exposure when there is both uncertainty and variability in model parameters. PFRA allows utilization of certain types of information which have not been used directly in existing risk assessment methods.

20.
As continental to global scale high-resolution meteorological datasets continue to be developed, there are sufficient meteorological datasets available now for modellers to construct a historical forcing ensemble. The forcing ensemble can be a collection of multiple deterministic meteorological datasets or come from an ensemble meteorological dataset. In hydrological model calibration, the forcing ensemble can be used to represent forcing data uncertainty. This study examines the potential of using the forcing ensemble to identify more robust parameters through model calibration. Specifically, we compare an ensemble forcing-based calibration with two deterministic forcing-based calibrations and investigate their flow simulation and parameter estimation properties and the ability to resist poor-quality forcings. The comparison experiment is conducted with a six-parameter hydrological model for 30 synthetic studies and 20 real data studies to provide a better assessment of the average performance of the deterministic and ensemble forcing-based calibrations. Results show that the ensemble forcing-based calibration generates parameter estimates that are less biased and have higher frequency of covering the true parameter values than the deterministic forcing-based calibration does. Using a forcing ensemble in model calibration reduces the risk of inaccurate flow simulation caused by poor-quality meteorological inputs, and improves the reliability and overall simulation skill of ensemble simulation results. The poor-quality meteorological inputs can be effectively filtered out via our ensemble forcing-based calibration methodology and thus discarded in any post-calibration model applications. The proposed ensemble forcing-based calibration method can be considered as a more generalized framework to include parameter and forcing uncertainties in model calibration.
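Ensemble forcing-based calibration amounts to scoring each candidate parameter set against every forcing member and optimizing the aggregate score. A minimal sketch with simple mean aggregation; the function names, the toy model, and the error metric are all hypothetical stand-ins for the study's actual setup:

```python
import numpy as np

def ensemble_objective(params, forcings, obs, model, err):
    """Average calibration error of one parameter set across all forcing
    members; minimizing this is the ensemble forcing-based calibration."""
    return float(np.mean([err(model(params, f), obs) for f in forcings]))

# Toy illustration: a linear "hydrological model" and a perturbed forcing ensemble.
toy_model = lambda p, f: p * f
mse = lambda sim, obs: float(np.mean((np.asarray(sim) - np.asarray(obs)) ** 2))

f_true = np.linspace(1.0, 10.0, 20)          # unknown true forcing
obs = 2.0 * f_true                           # observations from true parameter p = 2
forcings = [f_true * (1.0 + e) for e in (-0.1, 0.0, 0.1)]  # ensemble members
```

A deterministic calibration would evaluate `err` against a single forcing member; averaging over the ensemble is what makes a poor-quality member merely one vote among many rather than the sole driver of the parameter estimate.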
