Similar Literature
20 similar documents retrieved (search time: 15 ms)
1.
Mapping saturation areas during rainfall events is important for understanding the dynamics of overland flow. In this study, we evaluate the potential of high temporal resolution time‐lapse photography for mapping the dynamics of saturation areas (i.e., areas where water is visually ponding on the surface) on the hillslope scale during natural rainfall. We take 1 image per minute over a 100 × 15 m² depression area on an agricultural field in the Hydrological Open Air Laboratory, Austria. The images are georectified and classified by an automated procedure, using grey intensity as a threshold to identify saturation areas. The optimum threshold T is obtained by comparing saturation areas from the automated analysis with the manual analysis of 149 images. T is found to be highly correlated with an image brightness characteristic defined as the greyscale image histogram mode M (Pearson correlation r = 0.91). We estimate T as T = M + C, where C is a calibration parameter assumed to be constant during each event. The automated procedure estimates the total saturation area close to the manual analysis, with a mean normalized root mean square error of 9% and 21% if C is calibrated for each event or taken constant for all events, respectively. The spatial patterns of saturation are estimated with a geometric mean accuracy index of 94% as compared to the manual analysis of the same photos. The patterns are tested against field observations for one date as a preliminary demonstration, which yields a root mean square error of 23 cm for the shortest distance between the measured boundary points and the automatically classified boundary. The usefulness of the patterns is illustrated by exploring run‐off generation processes of an example event. Overall, the proposed classification method based on grey intensity is found to handle images with highly varying brightness well. It is more efficient than manual tracing for a large number of images, which allows the exploration of surface flow processes at high temporal resolution.
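
A minimal sketch (not the authors' code) of the grey-intensity classification described in this abstract: the threshold T is tied to the greyscale histogram mode M via T = M + C, with C an event-specific calibration offset. The value of C, the image size, and the assumption that ponded pixels fall below the threshold are all illustrative.

```python
import numpy as np

def classify_saturation(grey_image, C=-25):
    """Return a boolean mask of saturated (ponded) pixels.

    grey_image : 2-D array of grey intensities in 0..255
    C          : calibration offset (assumed constant within an event)
    """
    counts, edges = np.histogram(grey_image, bins=256, range=(0, 256))
    M = edges[np.argmax(counts)]   # histogram mode = image-brightness characteristic
    T = M + C                      # threshold from T = M + C
    # Whether ponded water is darker or brighter than T depends on illumination;
    # darker-than-threshold is assumed here purely for illustration.
    return grey_image < T

# Example: fraction of one synthetic frame classified as saturated
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(150, 1000)).astype(float)
mask = classify_saturation(frame)
print("saturated fraction:", round(float(mask.mean()), 3))
```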

2.
The correct identification of homogeneous areas in regional rainfall frequency analysis is fundamental to ensure the best selection of the probability distribution and the regional model, which produce low bias and low root mean square error of quantile estimation. In an attempt to delineate spatially homogeneous rainfall regions, the paper explores a new approach that is based on meteo-climatic information. The results are verified ex post using standard homogeneity tests applied to the annual maximum daily rainfall series. The first step of the proposed procedure selects two different types of homogeneous large regions: convective macro-regions, which contain high values of the Convective Available Potential Energy index, normally associated with convective rainfall events, and stratiform macro-regions, which are characterized by low values of the Q-vector divergence index, associated with dynamic instability and stratiform precipitation. These macro-regions are identified using Hot Spot Analysis to emphasize clusters of extreme values of the indexes. In the second step, inside each identified macro-region, homogeneous sub-regions are found using kriging interpolation on the mean direction of the Vertically Integrated Moisture Flux. To check the proposed procedure, two detailed examples of homogeneous sub-regions are examined.

3.
Low‐flow characteristics can be estimated by multiple linear regressions or the index‐streamgage approach. The latter transfers streamflow information from a hydrologically similar, continuously gaged basin (‘index streamgage’) to one with a very limited streamflow record, but often results in biased estimates. The application of the index‐streamgage approach can be generalized into three steps: (1) selection of streamflow information of interest, (2) definition of hydrologic similarity and selection of index streamgage, and (3) application of an information‐transfer approach. Here, we explore the effects of (1) the range of streamflow values, (2) the areal density of streamgages, and (3) index‐streamgage selection criteria on the bias of three information‐transfer approaches on estimates of the 7‐day, 10‐year minimum streamflow (Q7,10). The three information‐transfer approaches considered are maintenance of variance extension, base‐flow correlation, and ratio of measured to concurrent gaged streamflow (Q‐ratio invariance). Our results for 1120 streamgages throughout the United States suggest that only a small portion of the total bias in estimated streamflow values is explained by the areal density of the streamgages and the hydrologic similarity between the two basins. However, restricting the range of streamflow values used in the index‐streamgage approach reduces the bias of estimated Q7,10 values substantially. Importantly, estimated Q7,10 values are heavily biased when the observed Q7,10 values are near zero. Results of the analysis also showed that Q7,10 estimates from two of the three index‐streamgage approaches have lower root‐mean‐square error values than estimates derived from multiple regressions for the large regions considered in this study. Published in 2011 by John Wiley & Sons, Ltd.
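
A hedged sketch of the "Q-ratio invariance" information-transfer idea mentioned above: the ratio of flow measured at the partial-record site to the concurrent flow at the index streamgage is assumed constant, so a low-flow statistic at the index site can be scaled to the partial-record site. The data, the use of the median ratio, and the variable names are illustrative assumptions, not taken from the study.

```python
import numpy as np

def q_ratio_estimate(q_partial, q_index_concurrent, q710_index):
    """Transfer Q7,10 from an index streamgage to a partial-record site."""
    ratios = np.asarray(q_partial, float) / np.asarray(q_index_concurrent, float)
    return float(np.median(ratios)) * q710_index  # median ratio guards against outliers

# A handful of concurrent measurements (m^3/s) and the index-site Q7,10
q_partial = [0.8, 1.1, 0.6, 0.9]
q_index = [2.1, 2.8, 1.7, 2.4]
print("estimated Q7,10:", round(q_ratio_estimate(q_partial, q_index, q710_index=1.5), 3))
```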

4.
A 10‐km gridded snow water equivalent (SWE) dataset is developed over the Saint‐Maurice River basin region in southern Québec from kriging of observed snow survey data for evaluation of SWE products. The gridded SWE dataset covers 1980–2014 and is based on manual gravimetric snow surveys carried out on February 1, March 1, March 15, April 1, and April 15 of each snow season, which captures the annual maximum SWE (SWEM) with a mean interpolation error of ±19%. The dataset is used to evaluate SWEM from a range of sources including satellite retrievals, reanalyses, Canadian regional climate models, and the Canadian Meteorological Centre operational snow depth analysis. We also evaluate a number of solid precipitation datasets to determine their contribution to systematic errors in estimated SWEM. None of the evaluated datasets is able to provide estimates of SWEM that are within operational requirements of ±15% error, and insufficient solid precipitation is determined to be one of the main reasons. The Climate Forecast System Reanalysis is the only dataset where snowfall is sufficiently large to generate SWEM values comparable to observations. Inconsistencies in precipitation are also found to have a strong impact on year‐to‐year variability in SWEM dataset performance and spread. Version 3.6.1 of the Canadian Land Surface Scheme driven with ERA‐Interim output downscaled by Version 5.0.1 of the Canadian Regional Climate Model was the best physically based model at explaining the observed spatial and temporal variability in SWEM (root‐mean‐square error [RMSE] = 33%) and has potential for lower error with adjusted precipitation. Operational snow products relying on the real‐time snow depth observing network performed poorly due to a lack of real‐time data and the strong local‐scale variability of point snow depth observations. The results underscore the need for more effort to be invested in improving solid precipitation estimates for use in snow hydrology applications.

5.
Stream water temperature plays a significant role in aquatic ecosystems, where it controls many important biological and physical processes. Reliable estimates of water temperature at the daily time step are critical in managing water resources. We developed a parsimonious piecewise Bayesian model for estimating daily stream water temperatures that accounts for temporal autocorrelation and both linear and nonlinear relationships with air temperature and discharge. The model was tested at 8 climatically different basins of the USA and at 34 sites within the mountainous Boise River Basin (Idaho, USA). The results show that the proposed model is robust, with an average root mean square error of 1.25 °C and Nash–Sutcliffe coefficient of 0.92 over a 2‐year period. Our approach can be used to predict historic daily stream water temperatures in any location using observed daily stream temperature and regional air temperature data.
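
The model above is Bayesian and piecewise; as a much simpler, non-Bayesian stand-in, the sketch below fits a logistic (Mohseni-type) air–water temperature relation by least squares. It only illustrates the kind of bounded, nonlinear air-temperature dependence the abstract refers to; the discharge and autocorrelation terms of the actual model are omitted, and the synthetic data are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_tw(ta, mu, alpha, gamma, beta):
    """Water temperature as a bounded, S-shaped function of air temperature."""
    return mu + (alpha - mu) / (1.0 + np.exp(gamma * (beta - ta)))

# Synthetic daily air/water temperatures, for illustration only
rng = np.random.default_rng(1)
ta = np.linspace(-10, 30, 200)
tw_obs = logistic_tw(ta, 0.5, 22.0, 0.25, 12.0) + rng.normal(0, 0.8, ta.size)

params, _ = curve_fit(logistic_tw, ta, tw_obs, p0=[0.0, 20.0, 0.2, 10.0])
tw_hat = logistic_tw(ta, *params)
rmse = np.sqrt(np.mean((tw_hat - tw_obs) ** 2))
nse = 1 - np.sum((tw_obs - tw_hat) ** 2) / np.sum((tw_obs - tw_obs.mean()) ** 2)
print(f"RMSE = {rmse:.2f} degC, NSE = {nse:.2f}")
```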

6.
ABSTRACT

This study compares model averaging and model selection methods to estimate design floods, while accounting for the observation error that is typically associated with annual maximum flow data. Model selection refers to methods where a single distribution function is chosen based on prior knowledge or by means of selection criteria. Model averaging refers to methods where the results of multiple distribution functions are combined. Numerical experiments were carried out by generating synthetic data using the Wakeby distribution function as the parent distribution. For this study, comparisons were made in terms of relative error and root mean square error (RMSE) referring to the 1-in-100-year flood. The experiments show that model averaging and model selection methods lead to similar results, especially when short samples are drawn from a highly asymmetric parent. Also, taking an arithmetic average of all design flood estimates gives estimated variances similar to those obtained with more complex weighted model averaging.
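
A hedged sketch of the model-averaging idea discussed above: fit several candidate distributions to an annual-maximum series, take the 1-in-100-year quantile from each, and compare individual estimates with their arithmetic average. The synthetic sample and the candidate set are illustrative; the study drew samples from a Wakeby parent and also considered weighted averaging.

```python
import numpy as np
from scipy import stats

# Synthetic annual-maximum series (the study used a Wakeby parent instead)
ams = stats.genextreme.rvs(c=-0.1, loc=100, scale=30, size=40, random_state=42)

candidates = {
    "GEV": stats.genextreme,
    "Gumbel": stats.gumbel_r,
    "LogNormal": stats.lognorm,
    "Pearson3": stats.pearson3,
}

p = 1 - 1.0 / 100  # non-exceedance probability of the 100-year flood
q100 = {}
for name, dist in candidates.items():
    params = dist.fit(ams)          # maximum-likelihood fit of each candidate
    q100[name] = dist.ppf(p, *params)

print("per-model Q100:", {k: round(v, 1) for k, v in q100.items()})
print("arithmetic model average:", round(float(np.mean(list(q100.values()))), 1))
```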

7.
The index flood method is widely used in regional flood frequency analysis (RFFA) but explicitly relies on the identification of ‘acceptably homogeneous regions’. This paper presents an alternative RFFA method, which is particularly useful when ‘acceptably homogeneous regions’ cannot be identified. The new RFFA method is based on the region of influence (ROI) approach, where a ‘local region’ can be formed to estimate statistics at the site of interest. The new method is applied here to regionalize the parameters of the log‐Pearson 3 (LP3) flood probability model using Bayesian generalized least squares (GLS) regression. The ROI approach is used to reduce model error arising from the heterogeneity unaccounted for by the predictor variables in the traditional fixed‐region GLS analysis. A case study was undertaken for 55 catchments located in eastern New South Wales, Australia. The selection of predictor variables was guided by minimizing model error. Using an approach similar to stepwise regression, the best model for the LP3 mean was found to use catchment area and 50‐year, 12‐h rainfall intensity as explanatory variables, whereas the models for the LP3 standard deviation and skewness only had a constant term for the derived ROIs. Diagnostics based on leave‐one‐out cross validation show that the regression model assumptions were not inconsistent with the data and, importantly, no genuine outlier sites were identified. Significantly, the ROI GLS approach produced more accurate and consistent results than a fixed‐region GLS model, highlighting the superior ability of the ROI approach to deal with heterogeneity. This method is particularly applicable to regions that show a high degree of regional heterogeneity. Copyright © 2014 John Wiley & Sons, Ltd.

8.
Currently, the distribution areas of aquatic species are studied by using air temperature as a proxy for water temperature, which is not available at a regional scale. To simulate water temperature at a regional scale, a physically based model using the equilibrium temperature concept and including upstream‐downstream propagation of the thermal signal is proposed. This model, called Temperature‐NETwork (T‐NET), is based on a hydrographical network topology and was tested at the Loire basin scale (10⁵ km²). The T‐NET model obtained a mean root mean square error of 1.6 °C at a daily time step on the basis of 128 water temperature stations (2008–2012). The model obtained excellent performance at stations located on small and medium rivers (distance from headwater <100 km) that are strongly influenced by headwater conditions (median root mean square error of 1.8 °C). The shading factor and the headwater temperature were the most important variables for the mean simulated temperature, while the river discharge influenced the daily temperature variation and diurnal amplitude. The T‐NET model simulates specific events, such as the temperature of the Loire during the floods of June 1992 and the thermal regime response of streams during the heatwave of August 2003, much more efficiently than a simple point‐scale heat balance model. The T‐NET model is very consistent at a regional scale and could easily be transposed to changing forcing conditions and to other catchments. Copyright © 2016 John Wiley & Sons, Ltd.

9.
An algorithm for the Theis solution of pumping test data has been developed, taking into account the basic principles of the graphical approach of curve-matching. The method is simple and does not need initial approximations of transmissivity and storativity as required by the approaches suggested by Saleem (1970) and McElwee (1980). As a measure of error of fitting, the integral square error is computed between the observed drawdown and the drawdown calculated from the theoretical equation with the values of the coefficients estimated by the procedure. The root mean square deviation in drawdown is also calculated. The algorithm is capable of identifying data with errors in observation or recording. The reliability of the algorithm and its limitations are discussed on the basis of test runs with synthetic data having varying magnitudes of error and varying distributions of error points in the data set. The estimates of parameters by the proposed algorithm for a typical field test data set compare very well with the estimates by the sensitivity approach developed by McElwee (1980).
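
For orientation, the sketch below is an illustrative (not the paper's) least-squares fit of the Theis solution to pumping-test drawdown: the well function W(u) is the exponential integral E1, and transmissivity T and storativity S are found by minimising the squared error between fitted and observed drawdown. The pumping rate, distance, and synthetic data are assumptions.

```python
import numpy as np
from scipy.special import exp1
from scipy.optimize import minimize

Q = 0.01   # pumping rate, m^3/s (assumed)
r = 30.0   # distance to observation well, m (assumed)

def theis_drawdown(t, T, S):
    """Theis drawdown s = Q/(4*pi*T) * W(u), with u = r^2*S/(4*T*t)."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# Synthetic observations generated with T = 1e-3 m^2/s, S = 1e-4 plus noise
t_obs = np.geomspace(60, 8.64e4, 25)
rng = np.random.default_rng(3)
s_obs = theis_drawdown(t_obs, 1e-3, 1e-4) + rng.normal(0, 0.01, t_obs.size)

def sse(log_params):
    T, S = np.exp(log_params)   # log-transform keeps T and S positive
    return np.sum((theis_drawdown(t_obs, T, S) - s_obs) ** 2)

res = minimize(sse, x0=np.log([1e-4, 1e-5]), method="Nelder-Mead")
T_fit, S_fit = np.exp(res.x)
rmsd = np.sqrt(sse(res.x) / t_obs.size)
print(f"T = {T_fit:.2e} m^2/s, S = {S_fit:.2e}, RMSD = {rmsd:.3f} m")
```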

10.
Estimation of evapotranspiration (ET) is of great significance in modeling the water and energy interactions between land and atmosphere. The negative correlation of surface temperature (Ts) versus vegetation index (VI) from remote sensing data provides a diagnosis of the spatial pattern of surface soil moisture and ET. This study further examined the applicability of the Ts–VI triangle method, with a newly developed edge-determination technique, in estimating regional evaporative fraction (EF) and ET at the MODIS pixel scale through comparison with large aperture scintillometer (LAS) and high‐level eddy covariance measurements collected at the Changwu agro‐ecological experiment station from late June to late October 2009. An algorithm using only land and atmosphere products from MODIS onboard the Terra satellite was used to estimate the surface net radiation (Rn) and soil heat flux. In most cases, the estimated instantaneous Rn was in good agreement with the surface measurement, with a slight overestimation of 12 W/m². Validation against the LAS measurements showed that the root mean square error is 0.097 for instantaneous EF, 48 W/m² for instantaneous sensible heat flux, and 30 W/m² for daily latent heat flux. This paper demonstrates the overall capability of the Ts–VI triangle method in estimating regional EF and ET from a limited number of data. For a thorough interpretation, further comprehensive investigation needs to be done with more integration of remote sensing data and in‐situ surface measurements. Copyright © 2011 John Wiley & Sons, Ltd.
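
A hedged sketch of the Ts–VI triangle idea referred to above: within each vegetation-index bin the warmest pixel approximates the dry edge and the coolest the wet edge, and a pixel's evaporative fraction is taken as its position between the two. The bin width and these simple edge definitions are assumptions for illustration, not the paper's newly developed edge-determination technique.

```python
import numpy as np

def triangle_ef(ts, vi, n_bins=20):
    """Per-pixel evaporative fraction from surface temperature and VI arrays."""
    ts, vi = np.ravel(ts).astype(float), np.ravel(vi).astype(float)
    bins = np.linspace(vi.min(), vi.max(), n_bins + 1)
    ef = np.full(ts.shape, np.nan)
    for i in range(n_bins):
        sel = (vi >= bins[i]) & (vi <= bins[i + 1])
        if sel.sum() < 10:                      # skip sparsely populated bins
            continue
        t_dry, t_wet = ts[sel].max(), ts[sel].min()   # crude dry/wet edges per bin
        ef[sel] = np.clip((t_dry - ts[sel]) / (t_dry - t_wet), 0, 1)
    return ef

# Synthetic pixels: cooler where vegetation is denser
rng = np.random.default_rng(7)
vi = rng.uniform(0.1, 0.8, 5000)
ts = 320 - 25 * vi + rng.normal(0, 3, vi.size)
print("mean EF:", round(float(np.nanmean(triangle_ef(ts, vi))), 2))
```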

11.
A soil moisture retrieval method is proposed, in the absence of ground-based auxiliary measurements, by deriving the soil moisture content relationship from the satellite vegetation index-based evapotranspiration fraction and the soil moisture physical properties of a soil type. A temperature–vegetation dryness index threshold value is also proposed to identify water bodies and underlying saturated areas. Verification of the retrieved growing-season soil moisture was performed by comparative analysis with soil moisture obtained from conventional in situ point measurements at the 239-km² Reynolds Creek Experimental Watershed, Idaho, USA (2006–2009), and at the US Climate Reference Network (USCRN) soil moisture measurement sites in Sundance, Wyoming (2012–2015), and Lewistown, Montana (2014–2015). The proposed method best represented the effective root zone soil moisture condition, at a depth between 50 and 100 cm, with an overall average R² value of 0.72 and average root mean square error (RMSE) of 0.042.

12.
Forecasting of space–time groundwater level is important for sparsely monitored regions. Time series analysis using soft computing tools is powerful in temporal data analysis, while classical geostatistical methods provide the best estimates of spatial data. In the present work a hybrid framework for space–time groundwater level forecasting is proposed by combining a soft computing tool and a geostatistical model. Three time series forecasting models (artificial neural network, least squares support vector machine and genetic programming, GP) are individually combined with the geostatistical ordinary kriging model. The experimental variogram thus obtained fits a linear combination of a nugget effect model and a power model. The efficacy of the space–time models was decided on both visual interpretation (spatial maps) and calculated error statistics. It was found that the GP–kriging space–time model gave the most satisfactory results in terms of average absolute relative error, root mean square error, normalized mean bias error and normalized root mean square error.
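
A small sketch of the variogram form mentioned above, a nugget effect plus a power model, and of fitting it to an empirical variogram by least squares. The lag distances and semivariances are synthetic; parameter names follow common geostatistical usage (nugget c0, coefficient b, exponent lam with 0 < lam < 2).

```python
import numpy as np
from scipy.optimize import curve_fit

def nugget_power(h, c0, b, lam):
    """gamma(h) = c0 + b * h**lam, valid for 0 < lam < 2."""
    return c0 + b * np.power(h, lam)

# Synthetic empirical semivariances at a set of lag distances (km)
lags = np.array([1, 2, 4, 8, 12, 16, 24, 32], dtype=float)
gamma_emp = 0.05 + 0.02 * lags**1.3 + np.random.default_rng(5).normal(0, 0.01, lags.size)

popt, _ = curve_fit(nugget_power, lags, gamma_emp, p0=[0.01, 0.01, 1.0],
                    bounds=([0, 0, 1e-6], [np.inf, np.inf, 2.0]))
print("nugget, coefficient, exponent:", np.round(popt, 3))
```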

13.
ABSTRACT

Streamflow prediction is useful for robust water resources engineering and management. This paper introduces a new methodology to generate more effective features for streamflow prediction based on the concept of the “interaction effect”. The new features (input variables) are derived from the original features in a process called feature generation. It is necessary to select the most efficient input variables for the modelling process. Two feature selection methods, least absolute shrinkage and selection operator (LASSO) and particle swarm optimization–artificial neural networks (PSO-ANN), are used to select the effective features. Principal component analysis (PCA) is used to reduce the dimensions of the selected features. Then, optimized support vector regression (SVR) is used for monthly streamflow prediction at the Karaj River in Iran. The proposed method provided accurate prediction results with a root mean square error (RMSE) of 2.79 m³/s and a coefficient of determination (R²) of 0.92.
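
A hedged scikit-learn sketch of the feature-selection / dimension-reduction / regression chain described above: LASSO-based feature selection, then PCA, then support vector regression. The synthetic data, the lagged-product "interaction" features, and all hyperparameters are illustrative assumptions; the PSO-ANN selector and the SVR tuning used in the paper are omitted.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic monthly predictors and flow (illustrative only)
rng = np.random.default_rng(11)
n = 300
rain, temp = rng.gamma(2, 30, n), rng.normal(15, 8, n)
interactions = np.column_stack([rain * temp, rain**2, temp**2])  # "interaction effect" features
X = np.hstack([np.column_stack([rain, temp]), interactions])
y = 0.03 * rain + 0.002 * rain * temp + rng.normal(0, 1.5, n)

model = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectFromModel(Lasso(alpha=0.01))),   # LASSO-based feature selection
    ("pca", PCA(n_components=0.95)),                  # keep 95% of variance
    ("svr", SVR(C=10.0, epsilon=0.1)),
])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print("RMSE:", round(mean_squared_error(y_te, pred) ** 0.5, 2),
      "R2:", round(r2_score(y_te, pred), 2))
```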

14.
Daily actual evapotranspiration (AET) and seasonal AET values are of great practical importance in the management of regional water resources and hydrological modelling. Remotely sensed AET models and Landsat satellite images have been used widely in producing AET estimates at the field scale. However, the lack of validation at a high spatial frequency under different soil water conditions and vegetation coverages limits their operational applications. To assess the accuracies of remote sensing‐based AET in an oasis‐desert region, a total of 59 local‐scale daily AET time series, simulated using HYDRUS‐1D calibrated with soil moisture profiles, were used as ground truth values. Of the 59 sampling sites, 31 were located in the oasis subarea and 28 in the desert subarea. Additionally, the locally validated Mapping Evapotranspiration at high Resolution with Internalized Calibration (METRIC) surface energy balance model was employed to estimate instantaneous AET values in the area containing all 59 sampling sites using seven Landsat subimages acquired from June 5 to August 24, 2011. Daily AET was obtained using extrapolation and interpolation methods with the instantaneous AET maps. Compared against HYDRUS‐1D, the remote sensing‐based method produced reasonably similar daily AET values for the oasis sites, while no correlation was observed for daily AET estimated using these two methods for the desert sites. Nevertheless, reasonable monthly AET could be estimated. The correlation analysis between HYDRUS‐1D‐simulated and remote sensing‐estimated monthly AET values showed relative root‐mean‐square error values of 15.1%, 12.1%, and 12.3% for June, July, and August, respectively. The root mean square error of the summer AET was 10.0%. Overall, remotely sensed models can provide reasonable monthly and seasonal AET estimates based on periodic snapshots from Landsat images in this arid oasis‐desert region.

15.
Hydrological Sciences Journal, 2012, 57(15): 1824–1842
ABSTRACT

In this research, five novel hybrid machine learning approaches, artificial neural network (ANN)-embedded grey wolf optimizer (ANN-GWO), multi-verse optimizer (ANN-MVO), particle swarm optimizer (ANN-PSO), whale optimization algorithm (ANN-WOA) and ant lion optimizer (ANN-ALO), were applied for modelling monthly reference evapotranspiration (ETo) at Ranichauri (India) and Dar El Beida (Algeria) stations. The estimates yielded by the hybrid machine learning models were compared against three models, Valiantzas-1, 2 and 3, based on root mean square error (RMSE), Nash-Sutcliffe efficiency (NSE), Pearson correlation coefficient (PCC) and Willmott index (WI). The results of the comparison show that the ANN-GWO-1 model with five input variables (Tmin, Tmax, RH, Us, Rs) provides better estimates at both study stations (RMSE = 0.0592/0.0808, NSE = 0.9972/0.9956, PCC = 0.9986/0.9978, and WI = 0.9993/0.9989). Also, the adopted modelling strategy can build a truthful expert intelligent system for estimating monthly ETo at the study stations.
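
The four scores quoted above are standard; this small helper shows one common way to compute RMSE, Nash-Sutcliffe efficiency, Pearson correlation, and Willmott's index of agreement for any observed/modelled pair of series. The example values are made up.

```python
import numpy as np

def rmse(o, p):
    o, p = np.asarray(o, float), np.asarray(p, float)
    return float(np.sqrt(np.mean((p - o) ** 2)))

def nse(o, p):
    o, p = np.asarray(o, float), np.asarray(p, float)
    return float(1 - np.sum((o - p) ** 2) / np.sum((o - o.mean()) ** 2))

def pcc(o, p):
    return float(np.corrcoef(o, p)[0, 1])

def willmott(o, p):
    o, p = np.asarray(o, float), np.asarray(p, float)
    denom = np.sum((np.abs(p - o.mean()) + np.abs(o - o.mean())) ** 2)
    return float(1 - np.sum((o - p) ** 2) / denom)

obs = np.array([3.1, 4.2, 5.0, 6.3, 5.8, 4.4])   # e.g. monthly ETo, mm/day (illustrative)
mod = np.array([3.0, 4.4, 4.9, 6.1, 6.0, 4.5])
print([round(f(obs, mod), 4) for f in (rmse, nse, pcc, willmott)])
```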

16.
Spectral shape, epsilon and record selection
Selection of earthquake ground motions is considered with the goal of accurately estimating the response of a structure at a specified ground motion intensity, as measured by spectral acceleration at the first‐mode period of the structure, Sa(T1). Consideration is given to the magnitude, distance and epsilon (ε) values of ground motions. First, it is seen that selecting records based on their ε values is more effective than selecting records based on magnitude and distance. Second, a method is discussed for finding the conditional response spectrum of a ground motion, given a level of Sa(T1) and its associated mean (disaggregation‐based) causal magnitude, distance and ε value. Records can then be selected to match the mean of this target spectrum, and the same benefits are achieved as when records are selected based on ε. This mean target spectrum differs from a Uniform Hazard Spectrum, and it is argued that this new spectrum is a more appropriate target for record selection. When properly selecting records based on either spectral shape or ε, the reductions in bias and variance of resulting structural response estimates are comparable to the reductions achieved by using a vector‐valued measure of earthquake intensity. Copyright © 2006 John Wiley & Sons, Ltd.

17.
In tight gas sands, the signal‐to‐noise ratio of nuclear magnetic resonance log data is usually low, which limits the application of nuclear magnetic resonance logs in this type of reservoir. This project uses the method of wavelet‐domain adaptive filtering to denoise nuclear magnetic resonance log data from tight gas sands. The principles of the maximum correlation coefficient and the minimum root mean square error are used to decide on the optimal basis function for the wavelet transformation. The feasibility and effectiveness of this method are verified by analysing numerical simulation results and core experimental data. Compared with the wavelet thresholding denoising method, this adaptive filtering method is more effective in noise filtering, which can improve the signal‐to‐noise ratio of nuclear magnetic resonance data and the inversion precision of the transverse relaxation time T2 spectrum. The application of this method to nuclear magnetic resonance logs shows that it not only improves the accuracy of nuclear magnetic resonance porosity but also enhances the recognition of tight gas sands in nuclear magnetic resonance logs.
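
A hedged PyWavelets sketch related to the abstract above: it denoises a noisy synthetic multi-exponential decay by wavelet soft-thresholding (the baseline the paper compares against, not the adaptive-filtering method itself), and it picks the basis function using the correlation/RMSE criteria mentioned, here evaluated against a known synthetic clean signal. Signal shape, noise level, and the candidate wavelets are illustrative.

```python
import numpy as np
import pywt

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 1024)
clean = np.exp(-t / 0.2) + 0.5 * np.exp(-t / 0.05)   # synthetic multi-exponential decay
noisy = clean + rng.normal(0, 0.08, t.size)

def wavelet_denoise(signal, wavelet, level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745    # noise level from finest scale
    thr = sigma * np.sqrt(2 * np.log(signal.size))    # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: signal.size]

# Choose the basis with the highest correlation to the known clean signal
best = max(("db4", "sym8", "coif3"),
           key=lambda w: np.corrcoef(clean, wavelet_denoise(noisy, w))[0, 1])
den = wavelet_denoise(noisy, best)
print("best basis:", best,
      "RMSE:", round(float(np.sqrt(np.mean((den - clean) ** 2))), 4))
```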

18.
Due to the severity of extreme flood events, recent efforts have focused on the development of reliable methods for design flood estimation. Historical streamflow series are the most reliable information source for such estimation; however, they have temporal and spatial limitations that may be minimized by means of regional flood frequency analysis (RFFA). Several studies have emphasized that the identification of hydrologically homogeneous regions is the most important and challenging step in an RFFA. This study aims to identify state‐of‐the‐art clustering techniques (e.g., K-means, partition around medoids, fuzzy C-means, K-harmonic means, and genetic K-means) with potential to form hydrologically homogeneous regions for flood regionalization in Southern Brazil. The applicability of several probability density functions, such as generalized extreme value, generalized logistic, generalized normal, and Pearson type 3, was evaluated based on the regions formed. Among all 15 possible combinations of the aforementioned clustering techniques and the Euclidean, Mahalanobis, and Manhattan distance measures, the five best were selected. Several physiographic and climatological watershed attributes were chosen to derive multiple regression equations for all the combinations. The accuracy of the equations was quantified with respect to the adjusted coefficient of determination, root mean square error, and Nash–Sutcliffe coefficient, whereas a cross‐validation procedure was applied to check their reliability. It was concluded that reliable results were obtained when using robust clustering techniques based on fuzzy logic (e.g., K-harmonic means), which have not been commonly used in RFFA. Furthermore, the probability density functions were capable of representing the regional annual maximum streamflows. Drainage area, main river length, and mean altitude of the watershed were the most recurrent attributes for modelling mean annual maximum streamflow. Finally, an integration of all five best combinations stands out as a robust, reliable, and simple tool for estimation of design floods.
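
A hedged sketch of the region-formation step discussed above: standardize a few catchment attributes and cluster them with K-means, scoring candidate numbers of clusters with the silhouette coefficient. The attributes and data are synthetic, and the fuzzy, medoid-based, and genetic variants tested in the study are not shown.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

# Synthetic catchment attributes (illustrative units)
rng = np.random.default_rng(8)
n = 120
attrs = np.column_stack([
    rng.lognormal(6, 1, n),       # drainage area, km^2
    rng.lognormal(3, 0.5, n),     # main river length, km
    rng.normal(600, 200, n),      # mean altitude, m
    rng.normal(1600, 300, n),     # mean annual precipitation, mm
])
X = StandardScaler().fit_transform(attrs)

for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, "clusters -> silhouette =", round(silhouette_score(X, labels), 3))
```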

19.
Radar accuracy in quantitative precipitation estimation at distances larger than 120 km degrades rapidly because of increased volume coverage and beam height. The performance of the recently upgraded dual‐polarized technology of the NEXRAD network and its capabilities are in need of further examination, as improved rainfall estimates at large distances would allow for significant hydrological modelling improvements. Parameter-based methods were applied to radars from St. Louis (KLSX) and Kansas City (KEAX), Missouri, USA, to test the precision and accuracy of both dual‐ and single‐polarized parameter estimations of precipitation at large distances. Hourly aggregated precipitation data from terrestrial‐based tipping buckets provided ground‐truthed reference data. For all KLSX data tested, an R(Z,ZDR) algorithm provided the smallest absolute error (3.7 mm h−1) and root‐mean‐square‐error (45%) values. For most KEAX data, R(ZDR,KDP) and R(KDP) algorithms performed best, with RMSE values of 37%. With approximately 100 h of precipitation data between April and October of 2014, nearly 800 and 400 mm of precipitation were estimated by radar precipitation algorithms but not observed by terrestrial‐based precipitation gauges for KLSX and KEAX, respectively. Additionally, nearly 30 and 190 mm of measured precipitation observed by gauges were not detected by the radar rainfall estimates from KLSX and KEAX, respectively. The results improve understanding of radar-based precipitation estimates at long ranges, thereby advancing applications for hydrometeorological modelling and flood forecasting. Copyright © 2016 John Wiley & Sons, Ltd.
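
For orientation only: the single-polarization R(Z) step behind comparisons like those above is usually a power-law Z–R relation. The sketch below inverts Z = a·R^b using the classic Marshall–Palmer coefficients (a = 200, b = 1.6), which are an assumption here; the study's dual-polarization estimators R(Z,ZDR), R(KDP), and R(ZDR,KDP) use additional variables and different coefficients.

```python
import numpy as np

def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Rain rate (mm/h) from reflectivity (dBZ) via Z = a * R**b."""
    z_linear = 10.0 ** (np.asarray(dbz, float) / 10.0)   # dBZ -> Z in mm^6 m^-3
    return (z_linear / a) ** (1.0 / b)

print(np.round(rain_rate_from_dbz([20, 35, 50]), 2))   # light, moderate, heavy rain
```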

20.
The idea of this paper is to present estimators for combining terrestrial gravity data with Earth gravity models and produce a high‐quality source of the Earth's gravity field data through all wavelengths. To do so, integral and point‐wise estimators are mathematically developed, based on the spectral combination theory, in such a way that they combine terrestrial data with one and/or two Earth gravity models. The integral estimators are developed so that they become biased or unbiased to a priori information. For testing the quality of the estimators, their global mean square errors are generated using the EGM08 model and one of the recent products of the Gravity field and steady‐state Ocean Circulation Explorer (GOCE) mission. Numerical results show that the integral estimators have smaller global root mean square errors than the point‐wise ones, but they are not efficient practically. The integral estimator of the biased type is the most suitable owing to its smallest global root mean square error compared to the rest of the estimators. Due largely to the omission errors of Earth gravity models, the point‐wise estimators are not sensitive to the Earth gravity model commission error; therefore, the use of high‐degree Earth gravity models is very influential for reduction of their root mean square errors. It is also shown that the use of the GOCE Earth gravity model does not significantly reduce the root mean square errors of the presented estimators in the presence of EGM08. All estimators are applied in the region of Fennoscandia, and a cap size of 2° for numerical integration and a maximum degree of 2500 for generation of band‐limited kernels are found suitable for the integral estimators.
