Similar Articles
 20 similar articles found (search time: 31 ms)
1.
We use advanced methods to extract quantitative time dynamics from geomagnetic signals. In particular we analyse daily geomagnetic time series measured at three stations in Norway. The dynamics of the geomagnetic measurements has been investigated using autoregressive models. The procedure is based on two forecasting approaches: the global autoregressive approximation and the local autoregressive approximation. The first technique views the data as a realisation of a linear stochastic process, whereas the second considers them as a realisation of a deterministic process, supposedly non-linear. The comparison of the predictive skill of the two techniques is a strong test to discriminate between low-dimensional chaos and stochastic dynamics. Our findings suggest that the physical system governing the phenomena is characterised by stochastic dynamics, and the process could be described by numerous degrees of freedom. We also investigated the kind of stochasticity of the geomagnetic signals by analysing the power spectral density. We identify a power law P(f) ∝ f^(−α), where the scaling exponent α is a typical fingerprint of irregular processes. In this analysis we use the Higuchi method, which provides an interesting relationship between the fractal dimension D and the spectral power-law scaling index α.
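Below is a minimal sketch of the global-versus-local autoregressive comparison described above, run on a toy random-walk series instead of the Norwegian geomagnetic records; the embedding order, neighbour count and data are illustrative assumptions, not the authors' settings. (For self-affine signals the Higuchi fractal dimension and the spectral exponent are commonly related by D ≈ (5 − α)/2, presumably the relationship referred to here.)

```python
# Sketch (not the authors' code): global AR fit vs. local nearest-neighbour AR fit
# as a crude test to discriminate low-dimensional determinism from stochasticity.
import numpy as np

def embed(x, p):
    """Delay-embedding matrix: rows are [x[t-1], ..., x[t-p]], targets x[t]."""
    X = np.column_stack([x[p - 1 - j: len(x) - 1 - j] for j in range(p)])
    return X, x[p:]

def global_ar_rmse(train, test, p=3):
    X, y = embed(train, p)
    coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(X)), X]), y, rcond=None)
    Xt, yt = embed(np.concatenate([train[-p:], test]), p)
    pred = np.column_stack([np.ones(len(Xt)), Xt]) @ coef
    return np.sqrt(np.mean((pred - yt) ** 2))

def local_ar_rmse(train, test, p=3, k=20):
    X, y = embed(train, p)
    Xt, yt = embed(np.concatenate([train[-p:], test]), p)
    preds = []
    for v in Xt:
        idx = np.argsort(np.linalg.norm(X - v, axis=1))[:k]   # k nearest neighbours
        A = np.column_stack([np.ones(k), X[idx]])
        coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
        preds.append(np.concatenate([[1.0], v]) @ coef)
    return np.sqrt(np.mean((np.array(preds) - yt) ** 2))

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(3000))        # toy stochastic (random-walk) series
train, test = x[:2500], x[2500:]
print("global AR RMSE:", global_ar_rmse(train, test))
print("local  AR RMSE:", local_ar_rmse(train, test))
```

For a genuinely low-dimensional deterministic signal the local fit would be expected to beat the global one markedly, whereas for a stochastic series the two errors stay comparable; that contrast is the discrimination idea used in the paper.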

2.
3.
Linking atmospheric and hydrological models is challenging because of a mismatch in the spatial and temporal resolutions at which the models operate: dynamic hydrological models need input at a relatively fine temporal (daily) scale, but the outputs from general circulation models are usually not realistic at that scale, even though fine-scale outputs are available. Temporal downscaling methods, called disaggregation, are designed to produce finer temporal-scale data from reliable larger temporal-scale data. Here, we investigate a hybrid stochastic weather-generation method to simulate a high-frequency (daily) precipitation sequence based on lower-frequency (monthly) amounts. To deal with many small precipitation amounts while still capturing large amounts, we divide the precipitation amounts on rainy days (days with non-zero precipitation) into two states (named moist and wet states, respectively) by a pre-defined threshold and propose a multi-state Markov chain model for the occurrences of the different states (also including non-rain days, called the dry state). The truncated Gamma and censored extended Burr XII distributions are then employed to model the precipitation amounts in the moist and wet states, respectively. This approach avoids the need to deal with discontinuity in the distribution, and ensures that the states (dry, moist and wet) and the corresponding amounts on rainy days are well matched. The method also considers seasonality by constructing individual models for different months, and monthly variation by incorporating the low-frequency amounts as a model predictor. The proposed method is compared with existing models using typical catchment data in Australia with different climate conditions (non-seasonal rainfall, summer rainfall and winter rainfall patterns) and demonstrates better performance under several evaluation criteria that are important in hydrological studies.
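The occurrence-and-amounts structure lends itself to a compact simulation sketch. The three-state transition matrix, the moist/wet threshold and the Gamma parameters below are invented for illustration, and the censored extended Burr XII model the paper uses for wet-day amounts is replaced here by a simple shifted Gamma tail.

```python
# Illustrative three-state daily precipitation generator:
# 0 = dry, 1 = moist (small amounts), 2 = wet (large amounts).
import numpy as np

rng = np.random.default_rng(1)
THRESH = 10.0                      # mm, assumed moist/wet split (hypothetical)

# Assumed transition matrix P[i, j] = Pr(next state j | current state i)
P = np.array([[0.70, 0.20, 0.10],
              [0.40, 0.40, 0.20],
              [0.30, 0.35, 0.35]])

def simulate(n_days, state=0):
    states, amounts = [], []
    for _ in range(n_days):
        state = rng.choice(3, p=P[state])
        if state == 0:                                  # dry day
            amt = 0.0
        elif state == 1:                                # moist: Gamma truncated to (0, THRESH]
            amt = rng.gamma(shape=0.8, scale=4.0)
            while amt > THRESH:                         # rejection sampling = truncation
                amt = rng.gamma(shape=0.8, scale=4.0)
        else:                                           # wet: simplified heavy tail above THRESH
            amt = THRESH + rng.gamma(shape=1.2, scale=8.0)
        states.append(state)
        amounts.append(amt)
    return np.array(states), np.array(amounts)

s, a = simulate(365)
print("fraction of dry days:", np.mean(s == 0), " mean monthly total:", a.sum() / 12)
```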

4.
The meaningful quantification of uncertainty in hydrological model outputs is a challenging task, since complete knowledge about the hydrologic system is still lacking. Owing to the nonlinearity and complexity associated with hydrological processes, artificial neural network (ANN) based models have gained a lot of attention for their effectiveness in function approximation. However, only a few studies have been reported on the assessment of uncertainty associated with ANN outputs. This study uses a simple method for quantifying the predictive uncertainty of ANN model output through first-order Taylor series expansion. The first-order partial derivatives of the non-linear function approximated by the ANN with respect to the weights and biases of the ANN model are derived. A bootstrap technique is employed to estimate the mean and standard deviation of the ANN parameters, which are then used to quantify the predictive uncertainty. The method is demonstrated through a case study of the Upper White watershed in the United States. The quantitative assessment of uncertainty is carried out with two measures: percentage of coverage and average width. In order to show the magnitude of uncertainty in different flow domains, the values are statistically categorized into low-, medium- and high-flow series. The results suggest that the uncertainty bounds of ANN outputs can be effectively quantified using the proposed method. It is observed that the level of uncertainty is directly proportional to the magnitude of the flow and hence varies over time. A comparison of the uncertainty assessments shows that the proposed method quantifies the uncertainty more effectively than the bootstrap method.
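The propagation step the abstract describes can be sketched as a delta-method calculation: the first-order Taylor expansion of the network output in its parameters gives Var[y(x)] ≈ g(x)^T Cov[θ] g(x) with g(x) = ∂y/∂θ. In the sketch below the tiny one-hidden-layer network, its "trained" parameters and the parameter covariance are placeholders; in the paper the covariance comes from bootstrap estimates of the ANN weights and biases.

```python
# First-order (delta-method) propagation of parameter uncertainty to an ANN output.
import numpy as np

H = 4                                      # hidden units (assumption)
rng = np.random.default_rng(3)
theta_hat = rng.standard_normal(3*H + 1)   # "trained" parameters (placeholder)
cov_theta = 0.01 * np.eye(len(theta_hat))  # parameter covariance (placeholder for bootstrap estimate)

def net(x, theta):
    """y = w2 . tanh(W1*x + b1) + b2, parameters packed into one flat vector."""
    W1, b1 = theta[:H], theta[H:2*H]
    w2, b2 = theta[2*H:3*H], theta[3*H]
    return float(w2 @ np.tanh(W1 * x + b1) + b2)

def grad_theta(x, theta, eps=1e-6):
    """Numerical gradient of the network output with respect to its parameters."""
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        tp = theta.copy(); tp[i] += eps
        g[i] = (net(x, tp) - net(x, theta)) / eps
    return g

x = 0.7
g = grad_theta(x, theta_hat)
var_y = g @ cov_theta @ g                  # first-order output variance
print(f"prediction {net(x, theta_hat):.3f} +/- {1.96*np.sqrt(var_y):.3f} (95% band)")
```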

5.
Higher-order approximation techniques for estimating the stochastic parameters of the non-homogeneous Poisson (NHP) model are presented. The NHP model is characterized by a two-parameter cumulative probability distribution function (CDF) of sediment displacement. The two parameters are the temporal and spatial intensity functions, physically representing the inverse of the average rest period and step length of sediment particles, respectively. The difficulty of estimating these parameters has, however, restricted the applications of the NHP model, and the approximation techniques are proposed to address this problem. The basic idea of the method is to approximate a model involving stochastic parameters by a Taylor series expansion that preserves certain higher-order terms of interest. Using experimental (laboratory or field) data, one can determine the model parameters through a system of equations that are simplified by the approximation technique. The parameters so determined are used to predict the cumulative distribution of sediment displacement. The second-order approximation leads to a significant reduction of the CDF error (of the order of 47%) compared to the first-order approximation. Error analysis is performed to evaluate the accuracy of the first- and second-order approximations with respect to the experimental data. The higher-order approximations provide better estimates of sediment transport and deposition, which are critical factors for environments such as spawning gravel beds.
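As a generic illustration of why the second-order term matters (this is not the NHP sediment-displacement CDF itself, whose form is not reproduced here), the sketch below compares first- and second-order Taylor approximations of the expectation of a nonlinear function of a stochastic parameter against a Monte Carlo estimate; the function g and the parameter distribution are arbitrary stand-ins.

```python
# First vs. second-order Taylor approximation of E[g(theta)] about the mean:
#   E[g(theta)] ~ g(mu)                          (first order)
#   E[g(theta)] ~ g(mu) + 0.5 * g''(mu) * Var    (second order)
import numpy as np

rng = np.random.default_rng(4)
mu, sigma = 1.0, 0.3
theta = rng.normal(mu, sigma, 200_000)         # stochastic parameter samples

g = lambda t: np.exp(-t)                       # stand-in nonlinear model (hypothetical)
exact = g(theta).mean()                        # Monte Carlo "truth"
first = g(mu)
second = g(mu) + 0.5 * np.exp(-mu) * sigma**2  # g''(mu) = exp(-mu) for this g

for name, val in [("first order", first), ("second order", second)]:
    print(f"{name:12s}: {val:.5f}  (relative error {abs(val - exact)/exact:.2%})")
```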

7.
When evaluating water quality, the influence of the physical weight of the observed index is normally taken into account, but the influence of stochastic observation error (SOE) is not adequately considered. Using Monte Carlo simulation combined with Shannon entropy, the Principle of Maximum Entropy (POME) and Tsallis entropy, this study investigates the influence of SOE for two cases of the observed index: small observation error and large observation error. Randomness and fuzziness represent two types of uncertainty that are deemed significant and should be considered simultaneously when developing or evaluating water quality models. To that end, three models are employed here: two of them, named model I and model II, consider both fuzziness and randomness, while the third considers only fuzziness. The results from three representative lakes in China show that, for all three models, the influence of SOE on water quality evaluation can be significant irrespective of whether the water quality index has a small or a large observation error. Furthermore, when there is a significant difference in the accuracy of the observations, the influence of SOE on water quality evaluation increases. The water quality index whose SOE is smallest determines the result of the evaluation.
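One simple way to see how SOE propagates into an entropy-based evaluation is to perturb the observed index matrix inside a Monte Carlo loop and track how the Shannon-entropy weights move. The index values, relative error levels and number of lakes below are fabricated for illustration; the paper's models I and II and its POME/Tsallis formulations are not reproduced.

```python
# Monte Carlo propagation of observation error into Shannon-entropy index weights.
import numpy as np

rng = np.random.default_rng(5)
obs = np.array([[6.2, 0.30, 0.045],      # rows: lakes, cols: water-quality indices (made-up)
                [8.1, 0.55, 0.060],
                [5.4, 0.90, 0.110]])
rel_err = np.array([0.02, 0.10, 0.10])   # assumed relative SOE per index

def entropy_weights(X):
    p = X / X.sum(axis=0)                                    # column-wise proportions
    e = -(p * np.log(p)).sum(axis=0) / np.log(X.shape[0])    # Shannon entropy per index
    d = 1.0 - e
    return d / d.sum()

W = np.array([entropy_weights(obs * (1 + rel_err * rng.standard_normal(obs.shape)))
              for _ in range(5000)])
print("mean weights:", W.mean(axis=0).round(3))
print("weight std  :", W.std(axis=0).round(3))
```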

8.
9.
The stochastic model has been widely used for simulation studies. However, reproducing the skewness of observed series is difficult, and so stochastic models for skewness preservation have appeared. Whereas the skewness in the residuals of the stochastic model has usually been handled directly for skewness preservation, this study uses a random resampling technique of residuals from the stochastic models for the simulation study and for the investigation of the skewness coefficient. The main advantage of this resampling scheme, called the bootstrap method, is that it does not rely on an assumption about the population distribution; this study uses a combined model of the stochastic and bootstrapped models. The stochastic and bootstrapped stochastic (or combined) models are used to investigate skewness preservation and the reproduction of the probability density function of the simulated series. The models are applied to the annual and monthly streamflows of the Yongdam site in Korea and the Yakima River, Washington, USA, and the statistics and probability density functions of the observed and simulated streamflows are compared. As a result, the bootstrapped stochastic model reproduces the skewness and probability density function much better than the stochastic model. This evidence suggests that the bootstrapped stochastic model might be more appropriate than the stochastic model for the preservation of skewness and for simulation purposes.
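A minimal sketch of the comparison, assuming a synthetic skewed AR(1) series rather than the Yongdam or Yakima records: fit an AR(1), simulate once with Gaussian innovations and once with bootstrapped (resampled) residuals, and compare the resulting skewness; the lag-1 moment estimate used here is a simplification of a full model fit.

```python
# AR(1) with Gaussian innovations vs. AR(1) with bootstrapped residuals: which
# better preserves the skewness of a synthetic "observed" series?
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n = 2000
e = rng.gamma(shape=1.5, scale=1.0, size=n) - 1.5       # skewed innovations
obs = np.zeros(n)
for t in range(1, n):
    obs[t] = 0.6 * obs[t - 1] + e[t]                    # synthetic skewed streamflow anomaly

phi = np.corrcoef(obs[:-1], obs[1:])[0, 1]              # lag-1 AR coefficient estimate
resid = obs[1:] - phi * obs[:-1]

def simulate(innovations):
    x = np.zeros(len(innovations) + 1)
    for t, eps in enumerate(innovations, start=1):
        x[t] = phi * x[t - 1] + eps
    return x[1:]

gauss_sim = simulate(rng.normal(resid.mean(), resid.std(), n))
boot_sim  = simulate(rng.choice(resid, size=n, replace=True))   # bootstrap resampling

for name, x in [("observed", obs), ("AR(1)+Gaussian", gauss_sim), ("AR(1)+bootstrap", boot_sim)]:
    print(f"{name:16s} skewness = {stats.skew(x):+.2f}")
```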

10.
A method for variance component estimation (VCE) in errors-in-variables (EIV) models is proposed, which leads to a novel rigorous total least-squares (TLS) approach. To achieve a realistic estimation of parameters, knowledge about the stochastic model, in addition to the functional model, is required. For an EIV model, the existing TLS techniques either do not consider the stochastic model at all or assume approximate models such as those with only one variance component. In contrast to such TLS techniques, the proposed method considers an unknown structure for the stochastic model in the adjustment of an EIV model. It simultaneously predicts the stochastic model and estimates the unknown parameters of the functional model. Moreover, the method shows how an EIV model can support the Gauss-Helmert model in some cases. To make the VCE theory for EIV models more applicable, two simplified algorithms are also proposed. The proposed methods can be applied to linear regression and datum transformation, and we apply them to both examples; in particular, a 3-D non-linear similarity transformation close to the identity is performed. Two simulation studies and an experimental example give insight into the efficiency of the algorithms.
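For context only, the sketch below computes the classical SVD-based total least-squares solution of an errors-in-variables regression, the baseline that implicitly assumes a single variance component for all entries of the data matrix and observation vector; it is not the paper's VCE algorithm, and the simulated data and noise levels are arbitrary.

```python
# Classical TLS of A x ~ b via the SVD of the augmented matrix [A | b],
# compared with ordinary least squares on noisy (errors-in-variables) data.
import numpy as np

rng = np.random.default_rng(7)
x_true = np.array([2.0, -1.0])
A_clean = rng.standard_normal((50, 2))
b_clean = A_clean @ x_true
A = A_clean + 0.05 * rng.standard_normal(A_clean.shape)   # errors in the variables
b = b_clean + 0.05 * rng.standard_normal(b_clean.shape)   # errors in the observations

# TLS: the smallest right singular vector of [A b] gives the solution direction.
_, _, Vt = np.linalg.svd(np.column_stack([A, b]))
v = Vt[-1]
x_tls = -v[:-1] / v[-1]
x_ols = np.linalg.lstsq(A, b, rcond=None)[0]
print("true:", x_true, " TLS:", x_tls.round(3), " OLS:", x_ols.round(3))
```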

11.
Mean monthly flows of the Tatry alpine mountain region in Slovakia are predominantly fed by snowmelt in the spring and convective precipitation in the summer, so their regime properties exhibit clear seasonal patterns. Positive deviations from these patterns have substantially different features than negative ones, which provides intuitive justification for applying nonlinear two-regime models to the modelling and forecasting of these time series. Nonlinear time series structures often lead to good fitting performance; however, this does not guarantee an equally good forecasting performance. In this paper, therefore, the forecasting performance of several nonlinear time series models is compared with respect to their capability of forecasting monthly and seasonal flows in the Tatry region. A new type of regime-switching model is also proposed and tested. The best predictive performance was achieved for a new model subclass involving aggregation operators.
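To make the regime-switching idea concrete, here is a plain two-regime threshold AR(1) (SETAR-type) fit-and-forecast sketch on a synthetic seasonal series; the threshold, lag order and data are illustrative choices, and the paper's own models, including the aggregation-operator subclass, are more elaborate than this.

```python
# Two-regime threshold AR(1): fit separate AR(1) models below and above a threshold
# on the lagged value, then forecast one step ahead from the current regime.
import numpy as np

def setar_fit(x, threshold):
    lag, cur = x[:-1], x[1:]
    coefs = {}
    for name, mask in [("low", lag <= threshold), ("high", lag > threshold)]:
        A = np.column_stack([np.ones(mask.sum()), lag[mask]])
        coefs[name], *_ = np.linalg.lstsq(A, cur[mask], rcond=None)
    return coefs

def setar_forecast(last, coefs, threshold):
    c = coefs["low"] if last <= threshold else coefs["high"]
    return c[0] + c[1] * last

rng = np.random.default_rng(8)
x = 50 + 30 * np.sin(np.arange(480) * 2 * np.pi / 12) + 10 * rng.standard_normal(480)
thr = np.median(x)                          # threshold choice is an assumption
coefs = setar_fit(x[:-1], thr)              # hold out the last value
fcst = setar_forecast(x[-2], coefs, thr)
print(f"one-step forecast: {fcst:.1f}   actual: {x[-1]:.1f}")
```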

12.
In this paper we present a stochastic model reduction method for efficiently solving nonlinear unconfined flow problems in heterogeneous random porous media. The input random fields of the flow model are parameterized in a stochastic space for simulation. This often results in high stochastic dimensionality due to the small correlation lengths of the covariance functions of the input fields. To treat the high-dimensional stochastic problem efficiently, we extend a recently proposed hybrid high-dimensional model representation (HDMR) technique to high-dimensional problems with multiple random input fields and integrate it with a sparse grid stochastic collocation method (SGSCM). Hybrid HDMR can decompose the high-dimensional model into a moderate M-dimensional model and a few one-dimensional models. The moderate-dimensional model depends only on the M most important random dimensions, which are identified from the full stochastic space by sensitivity analysis. To extend the hybrid HDMR, we consider two different criteria for the sensitivity test. Each of the derived low-dimensional stochastic models is solved by the SGSCM. This leads to a set of uncoupled deterministic problems at the collocation points, which can be solved by a deterministic solver. To demonstrate the efficiency and accuracy of the proposed method, a few numerical experiments are carried out for unconfined flow problems in heterogeneous porous media with different correlation lengths. The results show that a good trade-off between computational complexity and approximation accuracy can be achieved for stochastic unconfined flow problems by selecting a suitable number of the most important dimensions in the M-dimensional model of hybrid HDMR.
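The dimension-screening step can be sketched in a few lines: rank the input random dimensions by the variance of their first-order cut-HDMR components and keep the M largest. The stand-in model, anchor point and sensitivity weights below are invented; the unconfined-flow solver and the sparse-grid collocation of the paper are not reproduced.

```python
# Rank random dimensions by the variance of their first-order cut-HDMR components
# f_i(xi_i) = f(anchor with xi_i varied), then keep the M most important ones.
import numpy as np

rng = np.random.default_rng(9)
d, M = 10, 3
weights = np.array([3.0, 0.2, 1.5, 0.1, 0.05, 0.8, 0.02, 0.01, 0.3, 0.05])

def model(xi):
    """Stand-in stochastic model output (hypothetical, not an unconfined-flow solver)."""
    return np.exp(0.3 * weights @ xi) + 0.5 * xi[0] * xi[2]

anchor = np.zeros(d)                        # cut-HDMR anchor point (mean of xi)
samples = rng.standard_normal(2000)
first_order_var = np.empty(d)
for i in range(d):
    xi = np.tile(anchor, (len(samples), 1))
    xi[:, i] = samples                      # vary only dimension i
    first_order_var[i] = np.var([model(row) for row in xi])

important = np.argsort(first_order_var)[::-1][:M]
print("selected dimensions:", sorted(important.tolist()))
```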

13.
This study compares the predictive accuracy of eight state-of-the-art modelling techniques for 12 landform types in a cold environment. The methods used are Random Forest (RF), Artificial Neural Networks (ANN), Generalized Boosting Methods (GBM), Generalized Linear Models (GLM), Generalized Additive Models (GAM), Multivariate Adaptive Regression Splines (MARS), Classification Tree Analysis (CTA) and Mixture Discriminant Analysis (MDA). The spatial distributions of the 12 periglacial landform types were recorded in the sub-Arctic landscape of northern Finland in 2032 grid squares at a resolution of 25 ha. First, three topographic variables were implemented in the eight modelling techniques (simple model), and then six other variables were added (three soil and three vegetation variables; complex model) to reflect the environmental conditions of each grid square. The predictive accuracy was measured by two methods: the area under the curve (AUC) of a receiver operating characteristic (ROC) plot, and the Kappa index (κ), based on spatially independent model evaluation data. The mean AUC values of the simple models varied between 0.709 and 0.796, whereas the AUC values of the complex models ranged from 0.725 to 0.825. For both simple and complex models, GAM, GLM, ANN and GBM provided the highest predictive performance based on both AUC and κ values. The results encourage further applications of the novel modelling methods in geomorphology. Copyright © 2008 John Wiley & Sons, Ltd.
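The comparison workflow translates directly into a few lines of scikit-learn. The sketch below uses synthetic presence/absence data instead of the Finnish landform grid and only three of the eight methods (RF, GBM and a logistic-regression stand-in for the GLM), scored by AUC on a hold-out split.

```python
# Compare a few presence/absence classifiers by AUC on held-out data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=2000, n_features=9, n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "RF (Random Forest)":  RandomForestClassifier(n_estimators=300, random_state=0),
    "GBM (boosting)":      GradientBoostingClassifier(random_state=0),
    "GLM (logistic reg.)": LogisticRegression(max_iter=1000),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, m.predict_proba(X_te)[:, 1])
    print(f"{name:22s} AUC = {auc:.3f}")
```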

14.
A stochastic model for the analysis of the temporal change of dry spells
In the present paper a stochastic approach that treats the arrival of rainfall events as a Poisson process is proposed to analyse sequences of non-rainy days (dry spells). In particular, among the different Poisson models, a non-homogeneous Poisson model was selected and then applied, as a test series, to the daily rainfall series registered at the Cosenza rain gauge (Calabria, southern Italy). The aim was to evaluate the different behaviour of the dry spells observed in two 30-year periods, 1951–1980 and 1981–2010. Analyses performed through Monte Carlo simulations assessed the statistical significance of the variation, at the annual scale, of the mean expected dry-spell values of the second period with respect to the first. The model was then verified by comparing the results of the test series with those obtained from three other rain gauges in the same region. Moreover, greater occurrence probabilities for long dry spells in 1981–2010 than in 1951–1980 were detected for the test series. Analogously, the return periods evaluated for fixed long dry spells with the synthetic data of the period 1981–2010 were less than half of the corresponding ones evaluated with the data generated for the previous 30-year period.
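A sketch of the underlying construction, assuming an invented seasonal intensity rather than the fitted Cosenza model: simulate rainfall-event arrivals as a non-homogeneous Poisson process by thinning and read off the dry spells as the gaps between successive events.

```python
# Non-homogeneous Poisson rainfall arrivals via thinning (Lewis-Shedler),
# with dry spells taken as the gaps (in days) between consecutive events.
import numpy as np

rng = np.random.default_rng(10)

def intensity(t_days):
    """Assumed seasonal event rate [events/day]; not the paper's fitted model."""
    return 0.25 + 0.15 * np.cos(2 * np.pi * t_days / 365.25)

def simulate_arrivals(T=365.25 * 30, lam_max=0.4):
    arrivals, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)        # candidate from the bounding rate
        if t > T:
            break
        if rng.uniform() < intensity(t) / lam_max: # accept with probability lambda(t)/lam_max
            arrivals.append(t)
    return np.array(arrivals)

events = simulate_arrivals()
dry_spells = np.diff(events)
print("mean dry spell (days):", dry_spells.mean().round(2),
      " max dry spell:", dry_spells.max().round(1))
```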

15.
In this paper a very general rainfall-runoff model structure (described below) is shown to reduce to a unit hydrograph model structure. For the general model, a multi-linear unit hydrograph approach is used to develop subarea runoff and is coupled to a multi-linear channel flow routing method to develop a link-node rainfall-runoff model network. The spatial and temporal rainfall distribution over the catchment is probabilistically related to a known rainfall data source located in the catchment in order to account for the stochastic nature of rainfall with respect to the rain-gauge-measured data. The resulting link-node model structure is a series of stochastic integral equations, one equation for each subarea. A cumulative stochastic integral equation is developed as the sum of this series and includes the complete spatial and temporal variability of the rainfall over the catchment. The resulting stochastic integral equation is seen to be an extension of the well-known single-area unit hydrograph method, except that the model output for a runoff hydrograph is a distribution of outcomes (or realizations) when applied to problems involving prediction of storm runoff; that is, the model output is a set of probable runoff hydrographs, each outcome being the result of calibration to a known storm event.
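A stripped-down sketch of the single-subarea case: runoff is the convolution of effective rainfall with a unit hydrograph, and the gauge-to-catchment rainfall relation is represented by a random multiplier, so repeating the convolution yields a distribution of hydrographs rather than a single one. The unit-hydrograph ordinates, storm depths and multiplier distribution below are all hypothetical.

```python
# Ensemble of unit-hydrograph convolutions under a stochastic rainfall scaling factor.
import numpy as np

rng = np.random.default_rng(11)
uh = np.array([0.1, 0.3, 0.35, 0.15, 0.07, 0.03])          # assumed unit hydrograph ordinates
gauge_rain = np.array([0., 5., 12., 8., 2., 0., 0., 0.])   # mm per time step (made up)

n_real = 1000
peaks = np.empty(n_real)
for k in range(n_real):
    factor = rng.lognormal(mean=0.0, sigma=0.25)   # stochastic gauge-to-catchment scaling
    runoff = np.convolve(factor * gauge_rain, uh)  # linear (unit-hydrograph) response
    peaks[k] = runoff.max()

print("median peak:", np.median(peaks).round(2),
      " 90% interval:", np.percentile(peaks, [5, 95]).round(2))
```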

17.
Clustering stochastic point process model for flood risk analysis
Since its introduction into flood risk analysis, the partial duration series method has gained increasing acceptance as an appealing alternative to the annual maximum series method. However, when the base flow is low, there is clustering in the flood peak or flow volume point process, and in this case the general stochastic point process model is not suitable for risk analysis. Therefore, two types of models for flood risk analysis are derived in this paper on the basis of clustering stochastic point process theory. The most remarkable characteristic of these models is that the flood risk is considered directly within the time domain. The acceptability of the different models is also discussed using the flood-peak counting process over twenty years at the Yichang station on the Yangtze River. The results show that the two kinds of models are suitable for flood risk analysis and are more flexible than the traditional flood risk models derived on the basis of the annual maximum series method or general stochastic point process theory. Received: September 29, 1997
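As a rough illustration of why clustering matters (this is not the paper's cluster model), the sketch below compares a plain Poisson count model with an overdispersed negative-binomial stand-in for a clustered exceedance process at the same mean event rate, looking at the probability of at least one exceedance per year and over a 20-year horizon.

```python
# Probability of at least one threshold exceedance: Poisson counts vs. an
# overdispersed (negative-binomial) stand-in for a clustered peak process.
import numpy as np
from scipy import stats

lam = 0.1    # mean number of exceedances of a high design level per year (assumed)
k = 1.2      # NB shape: smaller k means stronger clustering (assumed)

p_poisson = 1 - stats.poisson(mu=lam).pmf(0)
p_cluster = 1 - stats.nbinom(n=k, p=k / (k + lam)).pmf(0)   # same mean, overdispersed
print(f"P(>=1 exceedance/year): Poisson {p_poisson:.4f}, clustered (NB) {p_cluster:.4f}")

# Flood risk over a 20-year horizon, treating years as independent:
for name, p1 in [("Poisson", p_poisson), ("clustered", p_cluster)]:
    print(f"20-year risk, {name:9s}: {1 - (1 - p1)**20:.4f}")
```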

19.
Stochastic dynamic game models can be applied to derive optimal reservoir operation policies by considering the interactions among water users and the reservoir operator, their preferences, their levels of information availability and their cooperative behaviors. The stochastic dynamic game model with perfect information (PSDNG) was developed by [Ganji A, Khalili D, Karamouz M. Development of stochastic dynamic Nash game model for reservoir operation. I. The symmetric stochastic model with perfect information. Adv Water Resour, this issue]. This paper develops four additional versions of the stochastic dynamic game model of water-user interactions based on the cooperative behavior and hydrologic information availability of the beneficiary sectors of reservoir systems. It is shown that the proposed models are quite capable of providing appropriate reservoir operating policies when compared with alternative operating models, as indicated by several reservoir performance characteristics. Among the proposed models, the one that considers cooperative behavior and additional hydrologic information (about the random nature of reservoir operation parameters), as exercised by the reservoir operator, provides the highest level of performance and efficiency. Furthermore, the selected model is more realistic since it also considers the actual behavior of water users and the reservoir operator in the analysis.

20.