Similar Documents
20 similar documents found (search time: 46 ms)
1.
Flow and transport models in heterogeneous geological formations are usually large-scale with excessive computational complexity and uncertain characteristics. Uncertainty quantification for predicting subsurface flow and transport often entails utilizing a numerical Monte Carlo framework, which repeatedly simulates the model according to a random field parameter representing hydrogeological characteristics of the aquifer. The physical resolution (e.g. spatial grid resolution) for the simulation is customarily chosen based on recommendations in the literature, independent of the number of Monte Carlo realizations. This practice may lead to either excessive computational burden or inaccurate solutions. We develop an optimization-based methodology that considers the trade-off between the following conflicting objectives: time associated with computational costs, statistical convergence of the model prediction and physical errors corresponding to numerical grid resolution. Computational resources are allocated by considering the overall error based on a joint statistical–numerical analysis and optimizing the error model subject to a given computational constraint. The derived expression for the overall error explicitly takes into account the joint dependence between the discretization error of the physical space and the statistical error associated with Monte Carlo realizations. The performance of the framework is tested against computationally extensive simulations of flow and transport in spatially heterogeneous aquifers. Results show that modelers can achieve optimum physical and statistical resolutions while keeping a minimum error for a given computational time. The physical and statistical resolutions obtained through our analysis yield lower computational costs when compared to the results obtained with prevalent recommendations in the literature. 
Lastly, we highlight the significance of the geometrical characteristics of the contaminant source zone on the optimum physical and statistical resolutions.
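The trade-off described in this abstract can be illustrated numerically. The sketch below is a minimal stand-in, assuming a generic overall-error model ε(h, N) = a·h^p + b/√N (discretization error plus Monte Carlo sampling error) and a cost model T = c·N·h^(-d); the coefficients, exponents, and search grids are placeholder assumptions, not the paper's calibrated error model.

```python
import math

def total_error(h, n, a=1.0, p=2, b=1.0):
    # Overall error: discretization term a*h^p plus Monte Carlo sampling term b/sqrt(n)
    return a * h**p + b / math.sqrt(n)

def cost(h, n, c=1.0, d=2):
    # Computational time: n realizations, each scaling as h^-d on a d-dimensional grid
    return c * n * h**(-d)

def optimize(budget, h_grid, n_grid):
    # Grid search for the (h, n) pair minimizing total error within the time budget
    best = None
    for h in h_grid:
        for n in n_grid:
            if cost(h, n) <= budget:
                e = total_error(h, n)
                if best is None or e < best[0]:
                    best = (e, h, n)
    return best
```

As the budget grows, the optimum shifts jointly toward finer grids and more realizations, rather than refining either resolution in isolation.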

2.
Hydrological Sciences Journal, 2012, 57(15): 1803-1823

A new methodology is proposed for improving the accuracy of groundwater-level estimations and increasing the efficiency of groundwater-level monitoring networks. Three spatio-temporal (S-T) simulation models, numerical groundwater flow, artificial neural network and S-T kriging, are implemented to simulate water-table level variations. Individual models are combined using model fusion techniques and the most accurate of the individual and combined simulation models is selected for the estimation. Leave-one-out cross-validation shows that the estimation error of the best fusion model is significantly less than that of the three individual models. The selected fusion model is then considered for optimal S-T redesign of the groundwater monitoring network of the Dehgolan Plain (Iran). Using a Bayesian maximum entropy interpolation technique, soft data are included in the geostatistical analyses. Different scenarios are defined to incorporate economic considerations and different levels of precision in selecting the best monitoring network; a network of 37 wells is proposed as the best configuration. The mean variance estimation errors of all scenarios decrease significantly compared to that of the existing monitoring network. A reduction in equivalent uniform annual costs of different scenarios is achieved.
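The leave-one-out cross-validation used here to rank competing estimators can be sketched in a few lines. This is a toy illustration only: inverse-distance weighting stands in for the paper's kriging/ANN/flow models, and the one-dimensional point data are hypothetical.

```python
def idw(train, x, power=2):
    # Inverse-distance-weighted estimate at location x from (location, value) pairs
    num = den = 0.0
    for xi, yi in train:
        w = (abs(x - xi) or 1e-12) ** -power
        num += w * yi
        den += w
    return num / den

def loocv_rmse(points, predict):
    # Leave-one-out RMSE: re-estimate each observation from all the others
    errors = []
    for i, (x, y) in enumerate(points):
        others = points[:i] + points[i + 1:]
        errors.append((predict(others, x) - y) ** 2)
    return (sum(errors) / len(errors)) ** 0.5
```

Ranking candidate models by `loocv_rmse` mirrors how the best individual or fusion model is selected before network redesign.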

3.
Data-based models, namely artificial neural network (ANN), support vector machine (SVM), genetic programming (GP) and extreme learning machine (ELM), were developed to approximate three-dimensional, density-dependent flow and transport processes in a coastal aquifer. A simulation model, SEAWAT, was used to generate data required for the training and testing of the data-based models. Statistical analysis of the simulation results obtained by the four models shows that the data-based models could simulate the complex salt water intrusion process successfully. The selected models were also compared based on their computational ability, and the results show that the ELM is the fastest technique, taking just 0.5 s to simulate the dataset; however, the SVM is the most accurate, with a Nash-Sutcliffe efficiency (NSE) ≥ 0.95 and correlation coefficient R ≥ 0.92 for all the wells. The root mean square error (RMSE) for the SVM is also significantly less, ranging from 12.28 to 77.61 mg/L.
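The NSE and RMSE scores used to compare these surrogate models are standard and easy to compute. A minimal sketch of both metrics, applied to generic observed/simulated series:

```python
import math

def nse(obs, sim):
    # Nash-Sutcliffe efficiency: 1 - SSE / variance of observations about their mean
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    svar = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / svar

def rmse(obs, sim):
    # Root mean square error between observed and simulated series
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))
```

NSE = 1 indicates a perfect match, NSE ≤ 0 means the model is no better than predicting the observed mean; RMSE carries the units of the data (mg/L here).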

4.
Hydrologic analysis of urban drainage networks often encounters a number of issues, including data acquisition and preparation for modelling, which can be costly and time-consuming processes. Moreover, it can get more challenging with missing data and complex loops inside networks. In this article, Gibbs' model is applied to urban drainage networks to investigate the possibility of replacing an actual existing urban drainage network in terms of the shape and peak flow of the hydrographs at the outlet. The characteristic network configuration is given as a value of a parameter β of Gibbs' model. Instead of the actual network, stochastic networks from Monte-Carlo simulation are utilized to obtain a synthetic width function from the generated networks, and runoff hydrographs are estimated based on it. The results show that the synthetic width function and the resulting hydrographs obtained from the networks simulated by Gibbs' model are close to those from the actual network. The result also shows that even the behaviour of a looped network can be approximated by equivalent dendritic networks generated by Gibbs' model. The applicability of a stochastic network model in urban catchment implies a complement to modelling approaches in case of data unavailability. Moreover, the network property (β) is utilized not only to estimate the discharge hydrograph of a catchment but also as a key link to evaluate the effect from rainstorm movement in urban catchments. Copyright © 2012 John Wiley & Sons, Ltd.

5.
A graphical method was devised for designing contaminant detection monitoring networks in aquifers. The approach eliminates bias in detection efficiency among well pairs, thereby improving the overall efficiency of a ground water monitoring network. In the equidistant configurations derived by the graphical approach, all wells are located the same distance from a landfill, with the distance measured parallel to ground water flow. Measured perpendicular to ground water flow, there is also an equal spacing between wells in an equidistant network. A simulation model was used to compare an equidistant network to a peripheral monitoring configuration, in which wells were spaced evenly along the downgradient boundaries of a landfill. The equidistant network yielded a 12.4% higher detection efficiency and also facilitated earlier release detection. In practice, the graphical approach that yields equidistant configurations can be used to identify candidate monitoring networks to detect potential releases from landfills.

6.
Detailed numerical flow and radionuclide simulations are used to predict the flux of radionuclides from three underground nuclear tests located in the Climax granite stock on the Nevada Test Site. The numerical modeling approach consists of both a regional-scale and local-scale flow model. The regional-scale model incorporates conceptual model uncertainty through the inclusion of five models of hydrostratigraphy and five models describing recharge processes for a total of 25 hydrostratigraphic–recharge combinations. Uncertainty from each of the 25 models is propagated to the local-scale model through constant head boundary conditions that transfer hydraulic gradients and flow patterns from each of the model alternatives in the vicinity of the Climax stock, a fluid flux calibration target, and model weights that describe the plausibility of each conceptual model. The local-scale model utilizes an upscaled discrete fracture network methodology where fluid flow and radionuclides are restricted to an interconnected network of fracture zones mapped onto a continuum grid. Standard Monte Carlo techniques are used to generate 200 random fracture zone networks for each of the 25 conceptual models for a total of 5,000 local-scale flow and transport realizations. Parameters of the fracture zone networks are based on statistical analysis of site-specific fracture data, with the exclusion of fracture density, which was calibrated to match the amount of fluid flux simulated through the Climax stock by the regional-scale models. Radionuclide transport is simulated according to a random walk particle method that tracks particle trajectories through the fracture continuum flow fields according to advection, dispersion and diffusional mass exchange between fractures and matrix. The breakthrough of a conservative radionuclide with a long half-life is used to evaluate the influence of conceptual and parametric uncertainty on radionuclide mass flux estimates. 
The fluid flux calibration target was found to correlate with fracture density, and particle breakthroughs were generally found to increase with increases in fracture density. Boundary conditions extrapolated from the regional-scale model exerted a secondary influence on radionuclide breakthrough for models with equal fracture density. The incorporation of weights into radionuclide flux estimates resulted in both noise about the original (unweighted) mass flux curves and decreases in the variance and expected value of radionuclide mass flux.

7.
Developing a hydrological forecasting model based on past records is crucial to effective hydropower reservoir management and scheduling. Traditionally, time series analysis and modeling are used to build mathematical models that generate hydrologic records in hydrology and water resources. Artificial intelligence (AI), as a branch of computer science, is capable of analyzing long-series and large-scale hydrological data, and applying AI technology to hydrological forecasting has become a prominent research topic in recent years. In this paper, autoregressive moving-average (ARMA) models, artificial neural network (ANN) approaches, adaptive neuro-fuzzy inference system (ANFIS) techniques, genetic programming (GP) models and the support vector machine (SVM) method are examined using long-term observations of monthly river flow discharges. Four standard quantitative statistical performance evaluation measures, the coefficient of correlation (R), Nash–Sutcliffe efficiency coefficient (E), root mean squared error (RMSE) and mean absolute percentage error (MAPE), are employed to evaluate the performances of the various models developed. Two case study river sites are provided to illustrate their respective performances. The results indicate that the best performance is obtained by ANFIS, GP and SVM, in terms of different evaluation criteria, during the training and validation phases.
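The simplest member of the ARMA family compared here is the AR(1) model, which can be fitted and used for recursive forecasting in a few lines. This is a generic sketch of AR(1) only, assuming a zero-mean series; it is not the paper's fitted model.

```python
def fit_ar1(x):
    # Least-squares estimate of phi in the AR(1) model x_t = phi * x_{t-1} + e_t
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

def forecast_ar1(last_value, phi, steps):
    # Recursive multi-step forecast from the last observed value
    out = []
    for _ in range(steps):
        last_value = phi * last_value
        out.append(last_value)
    return out
```

The data-driven alternatives (ANN, ANFIS, GP, SVM) replace the linear recursion x_t = φ·x_{t-1} with a learned nonlinear mapping from lagged flows to the next flow.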

8.
This study presents several new observations from the study of a numerically simulated warm-core ring (WCR) in the Gulf of Mexico based on the ECCO2 global ocean simulation. Using Lagrangian coherent structures (LCS) techniques to investigate this flow reveals a pattern of transversely intersecting LCS in the mixed layer of the WCR which experiences consistent stretching behavior over a large region of space and time. A detailed analysis of this flow region leads to an analytical model of the velocity field which captures the essential elements that generate the transversely intersecting LCS. The model parameters are determined from the simulated WCR and the resulting LCS show excellent agreement with those observed in the WCR. The three-dimensional transport behavior that creates these structures relies on the small radial outflow that is present in the mixed layer and is not seen below the pycnocline, leading to a sharp change in the character of the LCS at the bottom of the mixed layer. The flow behavior revealed by the LCS limits fluid exchange between the WCR and the surrounding ocean, contributing to the long life of WCRs. Further study of these structures and their associated transport behavior may lead to further insights into the development and persistence of such geophysical vortices as well as their transport behavior.

9.
Climate impact studies in hydrology often rely on climate change information at fine spatial resolution. However, general circulation models (GCMs), which are among the most advanced tools for estimating future climate change scenarios, operate on a coarse scale. Therefore, the output from a GCM has to be downscaled to obtain the information relevant to hydrologic studies. In this paper, a support vector machine (SVM) approach is proposed for statistical downscaling of precipitation at the monthly time scale. The effectiveness of this approach is illustrated through its application to meteorological sub-divisions (MSDs) in India. First, climate variables affecting the spatio-temporal variation of precipitation at each MSD in India are identified. Following this, the data pertaining to the identified climate variables (predictors) at each MSD are classified using cluster analysis to form two groups, representing wet and dry seasons. For each MSD, an SVM-based downscaling model (DM) is developed for season(s) with significant rainfall, using principal components extracted from the predictors as input and the contemporaneous precipitation observed at the MSD as output. The proposed DM is shown to be superior to conventional downscaling using multi-layer back-propagation artificial neural networks. Subsequently, the SVM-based DM is applied to future climate predictions from the second generation Coupled Global Climate Model (CGCM2) to obtain future projections of precipitation for the MSDs. The results are then analyzed to assess the impact of climate change on precipitation over India. It is shown that SVMs provide a promising alternative to conventional artificial neural networks for statistical downscaling, and are suitable for conducting climate impact studies.

10.
Soil moisture is an integral quantity in hydrology that represents the average conditions in a finite volume of soil. In this paper, a novel regression technique called the Support Vector Machine (SVM) is presented and applied to soil moisture estimation using remote sensing data. SVM is based on statistical learning theory and uses a hypothesis space of linear functions based on a kernel approach. SVM has been used to predict a quantity forward in time based on training from past data. The strength of SVM lies in minimizing the empirical classification error and maximizing the geometric margin by solving an inverse problem. The SVM model is applied to 10 sites for soil moisture estimation in the Lower Colorado River Basin (LCRB) in the western United States. The sites comprise low to dense vegetation. Remote sensing data that include backscatter and incidence angle from the Tropical Rainfall Measuring Mission (TRMM), and the Normalized Difference Vegetation Index (NDVI) from the Advanced Very High Resolution Radiometer (AVHRR), are used to estimate soil water content (SM). Simulated SM (%) time series for the study sites are available from the Variable Infiltration Capacity Three Layer (VIC) model for the top 10 cm layer of soil for the years 1998–2005. The SVM model is trained on 5 years of data (1998–2002) and tested on 3 years of data (2003–2005). Two models are developed to evaluate the strength of SVM modeling in estimating soil moisture. In model I, training and testing are done on six sites; this results in six separate SVM models, one for each site. Model II comprises two subparts: (a) data from all six sites used in model I are combined and a single SVM model is developed and tested on the same sites, and (b) a single model is developed using data from six sites (same as model II-A) but tested on four separate sites not used to train the model. Model I shows satisfactory results, and the SM estimates are in good agreement with the estimates from the VIC model. The SM estimate correlation coefficients range from 0.34 to 0.77 with RMSE less than 2% at all the selected sites. A probabilistic absolute error between the VIC SM and modeled SM is computed for all models. For model I, the results indicate that 80% of the SM estimates have an absolute error of less than 5%, whereas for models II-A and II-B, 80% and 60% of the SM estimates have an error of less than 10% and 15%, respectively. The SVM model is also trained and tested against measured soil moisture in the LCRB. Results with RMSE, MAE and R of 2.01, 1.97 and 0.57, respectively, show that the SVM model is able to capture the variability in measured soil moisture. Results from the SVM modeling are compared with estimates obtained from a feed-forward back-propagation Artificial Neural Network (ANN) model and a Multivariate Linear Regression (MLR) model, and show that the SVM model performs better for soil moisture estimation than the ANN and MLR models.

11.
This paper, based on a real world case study (Limmat aquifer, Switzerland), compares inverse groundwater flow models calibrated with specified numbers of monitoring head locations. These models are updated in real time with the ensemble Kalman filter (EnKF) and the prediction improvement is assessed in relation to the number of monitoring locations used for calibration and updating. The prediction errors of the models calibrated in transient state are smaller if the number of monitoring locations used for the calibration is larger. For highly dynamic groundwater flow systems a transient calibration is recommended, as a model calibrated in steady state can lead to worse results than a noncalibrated model with a well-chosen uniform conductivity. The model predictions can be improved further with the assimilation of new measurement data from on-line sensors with the EnKF. Across all the studied models, the reduction of the 1-day hydraulic head prediction error (in terms of mean absolute error [MAE]) with EnKF lies between 31% (assimilation of head data from 5 locations) and 72% (assimilation of head data from 85 locations). The largest prediction improvements are expected for models that were calibrated with only a limited amount of historical information. It is worthwhile to update the model even with few monitoring locations, as the error reduction with EnKF appears to decrease exponentially with the number of monitoring locations used. These results prove the feasibility of data assimilation with EnKF for a real world case and show that improved predictions of groundwater levels can be obtained.
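The EnKF update step used in this study can be sketched for the simplest possible case: a directly observed scalar state (observation operator H = 1) with perturbed observations. This is a textbook illustration under those assumptions, not the paper's full multi-well implementation.

```python
import random

def enkf_update(ensemble, obs, obs_err_sd, rng=None):
    # Perturbed-observation EnKF update for a directly observed scalar state (H = 1)
    rng = rng or random.Random(0)
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)  # ensemble (forecast) variance
    gain = var / (var + obs_err_sd ** 2)  # Kalman gain
    # Each member assimilates a perturbed copy of the observation
    return [x + gain * (obs + rng.gauss(0, obs_err_sd) - x) for x in ensemble]
```

The update pulls the ensemble mean toward the observed head and shrinks the ensemble spread, exactly the behavior that reduces the 1-day MAE reported above.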

12.
The use of shear wave velocity data as a field index for evaluating the liquefaction potential of sands is receiving increased attention because both shear wave velocity and liquefaction resistance are similarly influenced by many of the same factors, such as void ratio, state of stress, stress history and geologic age. In this paper, a support vector machine (SVM)-based classification approach is used to assess liquefaction potential from actual shear wave velocity data. This approach approximately implements the structural risk minimization (SRM) induction principle, which aims at minimizing a bound on the generalization error of a model rather than minimizing only the mean square error over the data set. Here, SVM is used as a classification tool to predict the liquefaction potential of a soil based on shear wave velocity. The dataset consists of soil characteristics such as effective vertical stress (σ′v0), soil type and shear wave velocity (Vs), and earthquake parameters such as peak horizontal acceleration (amax) and earthquake magnitude (M). Of the available 186 datasets, 130 are used for training and the remaining 56 for testing the model. The study indicates that SVM can successfully model the complex relationship between seismic parameters, soil parameters and liquefaction potential. The model based on soil characteristics uses σ′v0, soil type, Vs, amax and M as input parameters; the other model, based on shear wave velocity alone, uses Vs, amax and M. It is demonstrated that Vs alone can be used to predict the liquefaction potential of a soil using a support vector machine model.

13.
Due to the complexity of the influencing factors and the limits of existing scientific knowledge, current monthly inflow predictions are not yet accurate enough to meet the requirements of various water users. A flow time series is usually considered a combination of quasi-periodic signals contaminated by noise, so prediction accuracy can be improved by data preprocessing. Singular spectrum analysis (SSA), an efficient preprocessing method, is used to decompose the original inflow series into a filtered series and noise. Current applications of SSA select only the filtered series as model input, discarding the noise. This paper argues that the noise may contain hydrological information and cannot be ignored, and proposes a new method that considers both the filtered and noise series. Support vector machine (SVM), genetic programming (GP) and seasonal autoregressive (SAR) models are chosen as the prediction models. Four criteria are selected to evaluate prediction performance: Nash–Sutcliffe efficiency, water balance efficiency, relative error of the annual average maximum (REmax) monthly flow and relative error of the annual average minimum (REmin) monthly flow. The monthly inflow data of the Three Gorges Reservoir are analyzed as a case study. The main results are as follows: (1) coupled with SSA, the SVM and GP models show a significant improvement in predicting the inflow series, whereas there is no significant change in the performance of the SAR(1) model; (2) after incorporating the noise, the modified SSA-SVM and SSA-GP models both perform better than the SSA-SVM and SSA-GP models. The results indicate that the SSA preprocessing method can significantly improve the prediction precision of the SVM and GP models, and confirm that the noise series still contains information and has an important influence on model performance.
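The central idea, split a series into a filtered part and a residual "noise" part and feed both to the predictor, can be sketched without a full SSA (embedding plus SVD). The moving-average split below is a deliberately crude stand-in for SSA, used only to show the additive decomposition the paper exploits.

```python
def decompose(series, window=3):
    # Crude stand-in for SSA: centered moving average as the "filtered" series,
    # residual as the "noise" series (which the paper argues still carries signal)
    half = window // 2
    filtered = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        filtered.append(sum(series[lo:hi]) / (hi - lo))
    noise = [s - f for s, f in zip(series, filtered)]
    return filtered, noise
```

By construction filtered + noise reconstructs the original series exactly, so no information is lost; the modified models simply stop throwing the noise component away.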

14.
To accurately predict earthquake fatalities, a support vector machine (SVM) model optimized with principal component analysis (PCA) and particle swarm optimization (PSO) is proposed. First, PCA is applied to reduce the dimensionality of six of the seven factors influencing earthquake fatalities, while the seventh factor, the time of earthquake occurrence, is separately classified into intervals. The extracted principal components are then normalized and used as the input vector to the SVM, whose optimal parameters are found by PSO, yielding a PCA-PSO-SVM fatality prediction model. Fatalities are predicted for five sample sets, and the model's performance is compared with and without the occurrence-time factor. The results show that without the occurrence-time factor, the minimum, maximum and mean errors of the PCA-PSO-SVM model are 0.85%, 20% and 10%, respectively; its mean error is 2.08% and 2.28% lower than those of the PSO-SVM and SVM models, respectively. After adding the occurrence-time classification data to the input vector, the minimum, maximum and mean errors of the PCA-PSO-SVM model are 0.25%, 20% and 7.18%, respectively; its mean error is 3.34% and 3.50% lower than those of the PSO-SVM and SVM models, respectively. Thus, adding the occurrence-time factor clearly reduces the mean error of all three models, and the principal component reduction in the PCA-PSO-SVM model markedly improves running efficiency and prediction accuracy while reducing model complexity.

15.
Constructed wetlands are being utilized worldwide to effectively reduce excess nutrients in agricultural runoff and wastewater. Despite their frequency, a multi-dimensional, physically based, spatially distributed modelling approach has rarely been applied for flow and solute transport in treatment wetlands. This article presents a two-dimensional hydrodynamic and solute transport modelling of a large-scale, subtropical, free water surface constructed wetland of about 8 km² in the Everglades of Florida, USA. In this study, MIKE 21 was adopted as the basic model framework. Field monitoring of the time series hydrological and chloride data, as well as spatially distributed data such as bathymetry and vegetation distribution, provided the necessary model input and testing data. Simulated water level profiles were in good agreement with the spatio-temporal variations of measured ones. On average, the root-mean-square error of model calibration on annual water level fluctuations was 0.09 m. Manning's roughness coefficients for the dense emergent and submerged aquatic vegetation areas, which were estimated as a function of vegetation type, ranged from 0.67 to 1.0 and 0.12 to 0.15 s/m^(1/3), respectively. The solute transport model calibration for four monitoring sites agreed well with the measured annual variations in chloride concentration, with an average percent model error of about 15%. The longitudinal dispersivity was estimated to be about 2 m, more than an order of magnitude higher than the transverse one. This study is expected to serve as a stepping stone for future modelling efforts on the development and application of more advanced flow and transport models applicable to a variety of constructed wetland systems, as well as to the Everglades stormwater treatment areas in operation or in preparation. Copyright © 2010 John Wiley & Sons, Ltd.

16.
It is increasingly recognized that effective river management requires a catchment scale approach. Sediment transport processes are relevant to a number of river functions, but quantifying sediment fluxes at network scales is hampered by the difficulty of measuring the variables required for most sediment transport equations (e.g. shear stress, velocity, and flow depth). We develop new bedload and total load sediment transport equations based on specific stream power. These equations use data that are relatively easy to collect or estimate throughout stream networks using remote sensing and other available data: slope, discharge, channel width, and grain size. The new equations are parsimonious yet have similar accuracy to other, more established, alternatives. We further confirm previous findings that the dimensionless critical specific stream power for incipient particle motion is generally consistent across datasets, and that the uncertainty in this parameter has only a minor impact on calculated sediment transport rates. Finally, we test the new bedload transport equation by applying it in a simple channel incision model. Our model results are in close agreement with flume observations and can predict incision rates more accurately than a more complicated morphodynamic model. These new sediment transport equations are well suited for use at stream network scales, allowing quantification of this important process for river management applications. Copyright © 2017 John Wiley & Sons, Ltd.
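Specific stream power is directly computable from the easy-to-estimate variables the abstract lists (discharge, slope, width). The sketch below uses the standard definition ω = ρ·g·Q·S/w and a generic Bagnold-type excess-power transport law; the exponent 1.5 and coefficient k are illustrative placeholders, not the authors' fitted equations.

```python
def specific_stream_power(discharge, slope, width, rho=1000.0, g=9.81):
    # Specific (unit) stream power: omega = rho * g * Q * S / w  [W/m^2]
    return rho * g * discharge * slope / width

def bedload_rate(omega, omega_c, k=1.0e-4):
    # Generic excess-power transport law q_b = k * (omega - omega_c)^1.5;
    # zero below the critical stream power for incipient motion
    excess = omega - omega_c
    return k * excess ** 1.5 if excess > 0 else 0.0
```

Because Q, S and w can be mapped across a whole network from remote sensing, this form scales to network-wide flux estimates in a way shear-stress-based equations do not.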

17.
The Karhunen-Loeve (KL) decomposition and the polynomial chaos (PC) expansion are elegant and efficient tools for uncertainty propagation in porous media. Over recent years, KL/PC-based frameworks have successfully been applied in several contributions for the flow problem in the subsurface context. It was also shown, however, that the accurate solution of the transport problem with KL/PC techniques is more challenging. We propose a framework that utilizes KL/PC in combination with sparse Smolyak quadrature for the flow problem only. In a subsequent step, a Lagrangian sampling technique is used for transport. The flow field samples are calculated based on a PC expansion derived from the solutions at relatively few quadrature points. To increase the computational efficiency of the PC-based flow field sampling, a new reduction method is applied. For advection dominated transport scenarios, where a Lagrangian approach is applicable, the proposed PC/Monte Carlo method (PCMCM) is very efficient and avoids accuracy problems that arise when applying KL/PC techniques to both flow and transport. The applicability of PCMCM is demonstrated for transport simulations in multivariate Gaussian log-conductivity fields that are unconditional and conditional on conductivity measurements.

18.
19.
Support vector machines and their application prospects in earthquake prediction
Statistical learning theory (SLT) studies the laws of machine learning in small-sample settings. The support vector machine (SVM), built on statistical learning theory, can handle highly nonlinear classification and regression problems; it not only deals well with practical difficulties such as small samples, overfitting, high dimensionality and local minima, but also has strong generalization (prediction) ability. This paper introduces SVM classification and regression methods, analyzes the characteristics of the approach, and discusses its application prospects in earthquake prediction.

20.
A method is presented to design monitoring networks for detecting groundwater pollution at industrial sites. The goal is to detect the pollution at some distance from the site’s boundary so that it can be cleaned up or hydrologically contained before contaminating groundwater outside the site. It is assumed that pollution may occur anywhere on the site, that transport is by advection only and that no retardation and chemical reactions take place. However, the approach can be easily extended to include designated (and uncertain) source areas, dispersion and reactive transport. The method starts from the premise that it is impossible to detect 100% of all the contaminant plumes with reasonable costs and therefore seeks a balance between the risk of pollution and network density. The design approach takes account of uncertainty in the flow field by simulating realisations of conductivity, groundwater head and associated flow fields, using geostatistical simulation and a groundwater flow model. The realisations are conditioned to conductivity and head observations that may already be present on the site. The result is an ensemble of flow fields that is further analysed using a particle track program. From this the probability of missing a contaminant plume originating anywhere on the terrain can be estimated for a given network. From this probability follows the risk, i.e. the expected costs of an undetected pollution. The total costs of the monitoring strategy are calculated by adding the risk of pollution to the costs of installing and maintaining the monitoring wells and the routinely performed chemical analyses. By repeating this procedure for networks of varying well numbers, the best network is chosen as the one that minimises total cost. The method is illustrated with a simulated example showing the added worth of exploratory wells for characterising hydraulic conductivity of a site.
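The cost-minimisation step described above, add the risk of an undetected plume to the cost of the wells and pick the cheapest network size, can be sketched directly. The miss-probability curve and all cost figures below are hypothetical placeholders; in the paper that probability comes from particle tracking over the flow-field ensemble.

```python
def expected_total_cost(n_wells, miss_prob, well_cost, failure_cost):
    # Installation/monitoring cost plus risk: P(undetected plume) * cost of failure
    return n_wells * well_cost + miss_prob(n_wells) * failure_cost

def best_network(candidate_sizes, miss_prob, well_cost, failure_cost):
    # Choose the network size that minimises the expected total cost
    return min(candidate_sizes,
               key=lambda n: expected_total_cost(n, miss_prob, well_cost, failure_cost))
```

Because the risk term falls with well count while the installation term rises linearly, the total cost curve has an interior minimum, which is exactly the "balance between the risk of pollution and network density" the method seeks.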

