Similar Documents
20 similar documents found (search time: 15 ms).
1.
Abstract

The natural variability of precipitation in agricultural regions, in both time and space, is modelled using extensions of the Box & Jenkins (1976) ARMA methodology. This broad class of aggregate regional models belongs to the general family of Space-Time Autoregressive Moving Average (STARMA) processes. The paper develops a three-stage iterative procedure for building a STARMA model of multiple precipitation series; the identified model is STMA(13). The emphasis is placed on the three stages of the model-building procedure: identification, parameter estimation and diagnostic checking. In the parameter estimation stage, the polytope (or simplex) method and three further classical nonlinear optimization algorithms are used, namely two conjugate gradient methods and a quasi-Newton method. The polytope method was adopted, and the resulting model described the spatio-temporal characteristics of the multiple precipitation series well. The approach is applied to a rural watershed in southern Canada.
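As an illustration of the estimation stage, the sketch below (a minimal sketch, not the authors' code; the series z and spatial weight matrix W are invented placeholders) minimizes the conditional sum of squared innovations of a simple first-order space-time MA model using SciPy's Nelder-Mead implementation of the polytope method.

```python
# A minimal sketch, not the authors' implementation: fitting a first-order
# space-time MA model z_t = a_t - th0*a_{t-1} - th1*W a_{t-1} by minimizing
# the conditional sum of squared innovations with the polytope (Nelder-Mead)
# method. The series z and spatial weight matrix W are placeholders.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
z = rng.normal(size=(200, 4))                 # 200 time steps, 4 rain gauges
W = np.full((4, 4), 1.0 / 3.0)                # row-normalized neighbour weights
np.fill_diagonal(W, 0.0)

def css(params, z, W):
    """Recover innovations a_t recursively (a_0 conditioned to z_0)."""
    th0, th1 = params
    a = np.zeros_like(z)
    a[0] = z[0]
    for t in range(1, z.shape[0]):
        a[t] = z[t] + th0 * a[t - 1] + th1 * (W @ a[t - 1])
    return np.sum(a ** 2)

res = minimize(css, x0=[0.1, 0.1], args=(z, W), method="Nelder-Mead")
print("estimated (theta0, theta1):", res.x)
```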

2.
Streamflow time series in arid and semi‐arid regions can be characterized as a sequence of single discrete flow episodes or clusters of hydrographs separated by periods of zero discharge. Here, two point process models are presented for the joint occurrence of flow events at neighbouring river sites. The first allows for excess clustering by adding autocorrelated errors to an empirically derived, seasonally varying probability of an event, and is extended to the case of the joint occurrence of flow events in two catchments. The second approach is to explicitly model the occurrences of clusters of events and the bivariate point process of event occurrences within them at both sites. For the two models, the magnitudes of event peaks are assumed to be drawn from continuous distributions with seasonally varying parameters. Rises and recessions in discharge are interpolated between the peaks using regression estimates of hydrographs. The models are fitted to mean daily flows at two sites in Namibia and are shown to provide realistic simulations of the hydrology. Copyright © 2002 John Wiley & Sons, Ltd.
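A minimal sketch of the first ingredient (with invented seasonal parameters, not the paper's fitted model): daily event occurrences drawn from a seasonally varying probability, with peak magnitudes from a seasonally parameterized distribution.

```python
# A minimal sketch, with invented seasonal parameters: daily flow-event
# occurrences from a seasonally varying probability, and event peak
# magnitudes from a seasonally parameterized lognormal distribution.
import numpy as np

rng = np.random.default_rng(1)
days = np.arange(365)
# hypothetical occurrence probability, peaking in an assumed wet season
p_event = 0.02 + 0.08 * np.maximum(0.0, np.sin(2 * np.pi * (days - 330) / 365))
occurs = rng.random(365) < p_event
# seasonally varying lognormal mean for peak discharge; illustrative only
mu = 1.0 + 0.5 * np.sin(2 * np.pi * (days - 330) / 365)
peaks = np.where(occurs, rng.lognormal(mean=mu, sigma=0.8), 0.0)
print(f"{occurs.sum()} events, max peak {peaks.max():.1f} m^3/s")
```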

3.
Long-term time-dependent stochastic modelling of extreme waves
This paper presents a literature survey on time-dependent statistical modelling of extreme waves and sea states. The focus is twofold: statistical modelling of extreme waves, and space- and time-dependent statistical modelling. The first part consists of a literature review of statistical modelling of extreme waves and wave parameters, most notably the modelling of extreme significant wave height. The second part addresses statistical modelling of time- and space-dependent variables in a more general sense, focusing on the methodology and models used in other relevant application areas. It was found that limited effort has been put into developing statistical models for waves that incorporate spatial and long-term temporal variability, and it is suggested that model improvements could be achieved by adopting approaches from other application areas. In particular, Bayesian hierarchical space–time models were identified as promising tools for spatio-temporal modelling of extreme waves. Finally, a review of projections of future extreme wave climate is presented.

4.
Kalman filtering for stochastic dynamic tidal models is a hyperbolic filtering problem. The questions of observability and stability of the filter, as well as the effects of the finite difference approximation on filter performance, are studied. The degradation of filter performance when an erroneous filter model is used is also investigated. In this paper we discuss these various practical aspects of applying Kalman filtering to tidal flow identification problems. Filters are derived on the basis of the linear shallow water equations, and analytical methods are used to study their performance under a variety of circumstances.
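For reference, a minimal sketch of the generic linear Kalman recursion that such filters build on; the matrices here are illustrative placeholders, not a discretized shallow-water scheme.

```python
# A minimal sketch of one linear Kalman predict/update cycle for a generic
# discretized state x (e.g. water levels at grid points). All matrices are
# illustrative placeholders, not a shallow-water discretization.
import numpy as np

def kalman_step(x, P, A, Q, H, R, y):
    """One predict/update cycle: x, P are the state estimate and covariance."""
    x_pred = A @ x                        # time update (model propagation)
    P_pred = A @ P @ A.T + Q
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

n = 4
x, P = np.zeros(n), np.eye(n)
A, Q = 0.9 * np.eye(n), 0.01 * np.eye(n)
H, R = np.eye(2, n), 0.1 * np.eye(2)      # observe the first two states
x, P = kalman_step(x, P, A, Q, H, R, y=np.array([0.5, -0.2]))
print(x)
```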

5.
ABSTRACT

The old principle of parsimonious modelling of natural processes has regained its importance in the last few years. The inevitability of uncertainty and risk, and the value of stochastic modelling in dealing with them, are also again appreciated, after a period of growing hopes for a radical reduction of uncertainty. Yet, in stochastic modelling of natural processes, several families of models are used that are often non-parsimonious, unnatural or artificial, theoretically unjustified and, eventually, unnecessary. Here we develop a general methodology for more theoretically justified stochastic processes, which evolve in continuous time and stem from maximum entropy production considerations. Their discrete-time properties are theoretically derived from the continuous-time ones, and a general simulation methodology in discrete time is built which explicitly handles the effects of discretization and truncation. Some additional modelling issues are discussed, with a focus on model identification and fitting, which are often performed using inappropriate methods.
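One standard worked example of deriving discrete-time properties from a continuous-time process (the Ornstein-Uhlenbeck process, not the specific maximum-entropy family developed in the paper): its exact discretization is an AR(1) whose coefficient follows directly from the continuous-time parameters.

```python
# A standard example, not the paper's process family: the Ornstein-Uhlenbeck
# process dX = -kappa*X dt + sigma dW discretizes exactly to an AR(1) with
# lag-1 correlation exp(-kappa * dt) and stationary variance sigma^2/(2*kappa).
import numpy as np

kappa, sigma, dt, n = 0.5, 1.0, 1.0, 10_000
rho = np.exp(-kappa * dt)                         # discrete-time AR(1) coefficient
sd = sigma * np.sqrt((1 - rho**2) / (2 * kappa))  # exact innovation scale

rng = np.random.default_rng(2)
x = np.zeros(n)
for t in range(1, n):
    x[t] = rho * x[t - 1] + sd * rng.normal()

print(f"theoretical lag-1 rho = {rho:.3f}, "
      f"empirical = {np.corrcoef(x[:-1], x[1:])[0, 1]:.3f}")
```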


6.
Stochastic modelling is a useful way of simulating complex hard-rock aquifers, as hydrological properties (permeability, porosity, etc.) can be described using random variables with known statistics. However, very few studies have assessed the influence of topological uncertainty (i.e. the variability of thickness of the conductive zones in the aquifer), probably because it is not easy to retrieve accurate statistics of the aquifer geometry, especially in a hard-rock context. In this paper, we assessed the potential of using geophysical surveys to describe the geometry of a hard-rock aquifer in a stochastic modelling framework.

The study site was a small experimental watershed in South India, where the aquifer consisted of a clayey to loamy–sandy zone (regolith) underlain by a conductive fissured rock layer (protolith), with the unweathered gneiss (bedrock) at the bottom. The spatial variability of the thickness of the regolith and fissured layers was estimated from electrical resistivity tomography (ERT) profiles performed along a few cross-sections in the watershed. For stochastic analysis using Monte Carlo simulation, the generated random layer thickness was made conditional on the available data from the geophysics. In order to simulate steady-state flow in the irregular domain with variable geometry, we used an isoparametric finite element method to discretize the flow equation over an unstructured grid with irregular hexahedral elements.

The results indicated that the spatial variability of the layer thickness had a significant effect in reducing the simulated effective steady seepage flux, and that using the conditional simulations reduced the uncertainty of the simulated seepage flux. In conclusion, combining information on the aquifer geometry obtained from geophysical surveys with stochastic modelling is a promising methodology for improving the simulation of groundwater flow in complex hard-rock aquifers.
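A minimal sketch of the conditioning idea, using a conditional multivariate normal along a single transect; the covariance model, thickness statistics and "ERT" values are assumptions for illustration, not the study's data or workflow.

```python
# A minimal illustrative sketch: Monte Carlo realizations of layer thickness
# along a transect, drawn from a Gaussian random field with exponential
# covariance and conditioned on thickness "observed" at ERT profile locations
# via the conditional multivariate normal. All numbers are assumed.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 1000, 101)                 # transect coordinates (m)
mean, var, corr_len = 15.0, 9.0, 200.0        # hypothetical thickness stats
C = var * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

obs_idx = np.array([10, 50, 90])              # nodes crossed by ERT profiles
obs_val = np.array([12.0, 18.0, 14.0])        # "measured" thickness (m)

# conditional mean and covariance of the field given the observations
Coo = C[np.ix_(obs_idx, obs_idx)]
Cxo = C[:, obs_idx]
w = Cxo @ np.linalg.inv(Coo)
cond_mean = mean + w @ (obs_val - mean)
cond_cov = C - w @ Cxo.T

# conditional realizations for the stochastic flow simulations
reals = rng.multivariate_normal(cond_mean, cond_cov, size=100)
print(reals.shape, reals[:, obs_idx].std(axis=0))  # ~0 spread at data points
```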

7.
Specially designed arrays of strong-motion seismographs located near earthquake sources are required for engineering studies of near-source earthquake properties as well as the spatial variation of seismic waves. The SMART-1 array in Taiwan provides good records for this type of study. Based on the SMART-1 array data, the principal direction of wave propagation and the space-time correlation of several events recorded by SMART-1 have been analysed. A stochastic model for predicting the differential ground movement was also developed. This stochastic model includes the effects of source characteristics, attenuation along the wave passage and spatial correlation characteristics. The performance of this model is discussed and compared with the ground movement recorded by the SMART-1 array. From the present study, it is found that spatial correlations do exist as seismic waves propagate across the array site. Generally, the loss of coherence in the direction of propagation can be explained by energy at the same frequency travelling at slightly different velocities over the measurement intervals. It is also concluded that the phase velocity of seismic waves and the corner frequency of the ground displacement spectrum are the controlling factors in predicting the root mean square of the differential ground displacement.
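A minimal sketch (synthetic signals, not SMART-1 data) of the kind of space-time correlation analysis involved: the lag maximizing the cross-correlation between two stations gives an apparent propagation delay and hence a phase velocity.

```python
# A minimal sketch with synthetic signals: estimate the propagation delay
# between two array stations from the lag of maximum cross-correlation,
# then convert it to an apparent phase velocity. All numbers are invented.
import numpy as np

rng = np.random.default_rng(4)
fs, delay_samples, sep = 200.0, 10, 400.0   # Hz, true delay, separation (m)
s = rng.normal(size=2000)
a = s + 0.1 * rng.normal(size=2000)                           # station A
b = np.roll(s, delay_samples) + 0.1 * rng.normal(size=2000)   # station B

lags = np.arange(-50, 51)
cc = [np.corrcoef(a, np.roll(b, -k))[0, 1] for k in lags]
best = lags[int(np.argmax(cc))]
print(f"estimated delay {best / fs * 1000:.0f} ms -> "
      f"apparent velocity {sep / (best / fs):.0f} m/s")
```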

8.
Abstract

The uncertainty associated with a rainfall–runoff and non-point source loading (NPS) model can be attributed to both the parameterization and the model structure. An interesting implication of the areal nature of NPS models is the direct relationship between model structure (i.e. sub-watershed size) and sample size for the parameterization of spatial data. The approach of this research is to find structural limitations in scale for the use of the conceptual NPS model, then examine the scales at which suitable stochastic depictions of key parameter sets can be generated. The overlapping regions are optimal (and possibly the only suitable regions) for conducting meaningful stochastic analysis with a given NPS model. Previous work has sought to find optimal scales for deterministic analysis (where, in fact, calibration can be adjusted to compensate for sub-optimal scale selection); however, the analysis of stochastic suitability and uncertainty associated with both the conceptual model and the parameter set, as presented here, is novel, as is the strategy of delineating a watershed based on the uncertainty distribution. The results of this paper demonstrate a narrow range of acceptable model structure for stochastic analysis in the chosen NPS model. In the case examined, the uncertainties associated with parameterization and parameter sensitivity are shown to be outweighed in significance by those resulting from structural and conceptual decisions.

Citation: Parker, G. T., Rennie, C. D. & Droste, R. L. (2011) Model structure and uncertainty for stochastic non-point source modelling applications. Hydrol. Sci. J. 56(5), 870–882.

9.
This paper presents the development of a seismological model for the Tehran area. This modelling approach, originally developed in Eastern North America, has been used successfully in other parts of the world, including Australia and China, for simulating accelerograms and elastic response spectra. Parameters required for input into the model were inferred from seismological and geological information obtained locally. The attenuation properties of the earth's crust were derived from the analysis of accelerogram records collated from within the region in a previous study. In modelling local modifications of seismic waves in the upper crust, shear-wave velocity profiles were constructed in accordance with a power law. Information inferred from micro-zonation studies (for near-surface conditions) and from measurements of teleseismic P-waves reflected from the deeper crust, as reported in the literature, was used to constrain the parameters of the power-law relationships. This method of obtaining amplification factors for the upper crust distinguishes this study from earlier studies of the Tehran area (in which site amplification factors were inferred from the H/V ratio of the recorded ground motions). The region-specific seismological model so constructed enabled accelerograms to be simulated and elastic response spectra to be calculated for a series of magnitude–distance combinations. Importantly, elastic response spectra calculated from the simulated accelerograms have been compared with those calculated from accelerograms recorded from earthquakes with magnitudes ranging between M6.3 and M7.4. The peak ground velocity values calculated from the simulated accelerograms have also been correlated with values inferred from macro-seismic intensity data of 17 historical earthquakes with magnitudes varying between 5.4 and 7.7 and distances varying between 40 and 230 km. This paper forms part of the authors' long-term strategy of applying modern techniques for modelling the attenuation behaviour of earthquakes in countries lacking instrumental earthquake data.
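A minimal sketch of one standard way (the square-root-impedance, or quarter-wavelength, approximation; the paper does not spell out its exact scheme) to turn a power-law shear-wave velocity profile into upper-crust amplification factors; all parameter values are illustrative.

```python
# A minimal sketch of the square-root-impedance (quarter-wavelength)
# amplification for a power-law shear-wave velocity profile
# V(z) = V0 * (z/z0)**alpha. All parameter values are illustrative.
import numpy as np

V0, z0, alpha = 250.0, 1.0, 0.3               # assumed: m/s at 1 m, exponent
rho, Vsrc, rho_src = 2000.0, 3500.0, 2800.0   # assumed densities and source Vs

def travel_time(z):
    # integral of dz' / V(z') from 0 to z for the power-law profile
    return z0 / (V0 * (1 - alpha)) * (z / z0) ** (1 - alpha)

z = np.logspace(0, 3, 50)              # depths from 1 m to 1 km
Vbar = z / travel_time(z)              # time-averaged velocity down to depth z
f = 1.0 / (4.0 * travel_time(z))       # quarter-wavelength frequency for depth z
amp = np.sqrt((rho_src * Vsrc) / (rho * Vbar))   # impedance-ratio amplification
print(np.c_[f[:3], amp[:3]])           # (frequency, amplification) pairs
```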

10.
11.
12.
A Bayesian post‐processor is used to generate a representation of the likely hydrograph forecast flow error distribution, using raingauge and radar input to a stochastic catchment model and its deterministic equivalent. A hydrograph ensemble is thus constructed. Experiments are analysed using the model applied to the River Croal in north‐west England. It is found that for rainfall input to the model having errors of less than 3 mm h⁻¹, corresponding to about a 15% error in peak flow, the stochastic model outperforms the deterministic model. The range of hydrographs associated with the different model simulations and the measured hydrographs are compared. The significant improvement possible using a stochastic approach is demonstrated for a specific case study, although the mean hydrograph derived using the stochastic model has an error range associated with it. Copyright © 2007 John Wiley & Sons, Ltd.
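A minimal sketch (a simple multiplicative lognormal error model, not the paper's Bayesian post-processor) of how an ensemble can be built around a deterministic hydrograph forecast.

```python
# A minimal sketch with an assumed multiplicative lognormal error model:
# build a hydrograph ensemble around a deterministic forecast and summarize
# its uncertainty band. The forecast shape and error size are invented.
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(48)                                        # forecast hours
q_det = 5 + 40 * np.exp(-0.5 * ((t - 18) / 6.0) ** 2)    # deterministic forecast
sigma = 0.15                                             # ~15% error at the peak
ensemble = q_det * rng.lognormal(mean=0.0, sigma=sigma, size=(100, t.size))
lo, hi = np.percentile(ensemble, [5, 95], axis=0)
print(f"peak 5-95% band: {lo.max():.1f} - {hi.max():.1f} m^3/s")
```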

13.
Extreme environmental events have considerable impacts on society. Preparing to mitigate or accurately forecast these events is a growing concern for governments. In this regard, policy and decision makers require accurate tools for risk estimation in order to take informed decisions. This work proposes a Bayesian framework for a unified treatment and statistical modelling of the main components of risk: hazard, vulnerability and exposure. Risk is defined as the expected economic loss or population affected as a consequence of a hazard event. Vulnerability is interpreted as the loss experienced by an exposed population due to hazard events. The framework combines data of different spatial and temporal supports. It produces a sequence of temporal risk maps for the domain of interest, including a measure of uncertainty for the hazard and vulnerability. In particular, the considered hazard (rainfall) is interpolated from point-based rainfall measurements using a hierarchical spatio-temporal kriging model, whose parameters are estimated using the Bayesian paradigm. Vulnerability is modelled using zero-inflated distributions with parameters dependent on climatic variables at local and large scales. Exposure is defined as the total population settled in the spatial domain and is interpolated using census data. The proposed methodology was applied to the Vargas state of Venezuela to map the spatio-temporal risk for the period 1970–2006. The framework highlights both high- and low-risk areas given extreme rainfall events.
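A minimal sketch of the risk composition the framework formalizes, expected loss as exposure times vulnerability evaluated at the hazard, on a grid; the vulnerability curve and the hazard and exposure fields below are purely illustrative.

```python
# A minimal sketch of risk = exposure * vulnerability(hazard) on a grid.
# The rainfall field, population field and vulnerability curve are invented.
import numpy as np

rng = np.random.default_rng(6)
rain = rng.gamma(shape=2.0, scale=30.0, size=(20, 20))    # hazard: mm/day
pop = rng.poisson(lam=500, size=(20, 20)).astype(float)   # exposure: people/cell

def vulnerability(r, r0=80.0, k=0.05):
    """Fraction of the exposed population affected, rising with rainfall."""
    return 1.0 / (1.0 + np.exp(-k * (r - r0)))

risk = pop * vulnerability(rain)      # expected population affected per cell
print(f"total expected affected: {risk.sum():.0f}")
```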

14.
J. J. Yu, X. S. Qin & O. Larsen, Hydrological Processes, 2015, 29(6), 1267–1279
A generalized likelihood uncertainty estimation (GLUE) method incorporating moving least squares (MLS) with entropy for stochastic sampling (denoted GLUE‐MLS‐E) was proposed for uncertainty analysis of flood inundation modelling. The MLS-with-entropy (MLS‐E) surrogate was established from the pairs of parameters and likelihoods generated by a limited number of direct model executions. It was then applied to approximate the model evaluation, facilitating target sample acceptance in GLUE during the Monte‐Carlo‐based stochastic simulation process. The results from a case study showed that the proposed GLUE‐MLS‐E method had a performance comparable to GLUE in terms of posterior parameter estimation and predicted confidence intervals, but could significantly reduce the computational cost. A comparison with other surrogate models, including MLS, a quadratic response surface and artificial neural networks (ANN), revealed that MLS‐E outperformed the others in terms of both the predicted confidence interval and the most likely value of water depths. ANN was shown to be a viable alternative, performing slightly worse than MLS‐E. The proposed surrogate method for stochastic sampling is of practical significance in computationally expensive problems such as flood risk analysis, real‐time forecasting and simulation‐based engineering design, and has general applicability in many other numerical simulation fields that require extensive effort in uncertainty assessment. Copyright © 2014 John Wiley & Sons, Ltd.
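A minimal sketch of surrogate-assisted GLUE acceptance; for brevity the surrogate here is SciPy's RBF interpolator rather than the paper's MLS-with-entropy scheme, and the "model" is a stand-in likelihood surface, so only the accept/reject structure carries over.

```python
# A minimal sketch of surrogate-assisted GLUE sampling. The surrogate is an
# RBF interpolator (a stand-in for the paper's moving least squares with
# entropy), and model() is a placeholder pseudo log-likelihood surface.
import numpy as np
from scipy.interpolate import RBFInterpolator

def model(theta):                              # stand-in for the flood model
    return -np.sum((theta - 0.3) ** 2)         # pseudo log-likelihood

rng = np.random.default_rng(7)
train_x = rng.uniform(0, 1, size=(60, 2))      # limited direct model runs
train_y = np.array([model(t) for t in train_x])
surrogate = RBFInterpolator(train_x, train_y)  # cheap likelihood approximation

cand = rng.uniform(0, 1, size=(10_000, 2))     # Monte Carlo candidates
lik = surrogate(cand)
behavioural = cand[lik > np.quantile(lik, 0.9)]   # GLUE acceptance threshold
print(behavioural.mean(axis=0))                # posterior-like parameter summary
```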

15.
Three simple models of the behaviour of a series of basaltic eruptions have been tested against the eruptive history of Nyamuragira. The data set contains the repose periods and the volumes of lava emitted in 22 eruptions since 1901. Model 1 is fully stochastic: eruptions of any volume with random repose intervals are possible. Models 2 and 3 are constrained by deterministic limits on the maximum capacity of the magma reservoir and on the lowest drainage level of the reservoir, respectively. The method of testing these models involves (1) seeking change points in the time series to determine regimes of uniform magma supply rate, and (2) applying linear regression to these regimes, which for models 2 and 3 yields the deterministic limits of those models. Two change points in the time series for Nyamuragira, in 1958 and 1980, were determined using a Kolmogorov–Smirnov technique. The latter change involved an increase in the magma supply rate by a factor of 2.5, from 0.55 to 1.37 m³ s⁻¹. Model 2 provides the best fit to the behaviour of Nyamuragira, with the ratio of variation explained by the model to total variation, R², greater than 0.9 for all three regimes. This fit can be interpreted to mean that there is a deterministic limit to the elastic strength of the magma reservoir 4–8 km below the summit of the volcano.
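A minimal sketch (synthetic repose data, not the Nyamuragira record) of a Kolmogorov-Smirnov change-point scan of the kind described: split the series at each candidate point and keep the split with the smallest two-sample KS p-value.

```python
# A minimal sketch with synthetic data: scan candidate change points in a
# series of repose intervals with the two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(8)
# synthetic repose intervals: the supply rate increases after index 14
repose = np.r_[rng.exponential(30.0, 14), rng.exponential(12.0, 8)]

best = min(
    (ks_2samp(repose[:k], repose[k:]).pvalue, k)
    for k in range(5, len(repose) - 4)    # keep both segments non-trivial
)
print(f"change point at eruption #{best[1]} (p = {best[0]:.3f})")
```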

16.
The 2013 Aigion earthquake swarm, which took place in the western part of the Gulf of Corinth, is investigated to reveal the faulting and seismicity properties of the activated area. The activity started on May 21 and remained appreciably intense over the next 3 months. The recordings of the Hellenic Unified Seismological Network (HUSN), which is adequately dense around the affected area, were used to accurately locate 1501 events. The double-difference (hypoDD) technique was employed on the manually picked P and S phases, along with differential times derived from waveform cross-correlation, to improve location accuracy. The activated area, with dimensions 6 × 2 km, is located approximately 5 km SE of Aigion. Focal mechanisms of 77 events with M ≥ 2.0 were determined from P-wave first motions and used to identify the geometry of the ruptured segments. The spatio-temporal distribution of earthquakes revealed eastward and westward hypocentral migration from the starting point, suggesting the division of the seismic swarm into four major clusters. The hypocentral migration was corroborated by Coulomb stress change calculations, indicating that the four fault segments involved in the rupture process failed successively, each failure encouraged by the stress change. Examination of fluid flow showed that it cannot be unambiguously considered the driving mechanism for the successive failures.
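The Coulomb stress transfer test reduces to the standard failure-stress change, sketched below with illustrative numbers (not values from the study).

```python
# A minimal sketch of the standard Coulomb failure stress change used to test
# whether one segment's failure encouraged the next:
# dCFS = d_tau + mu_eff * d_sigma_n (positive values promote failure).
def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
    """d_tau: shear stress change (MPa, in the slip direction);
    d_sigma_n: normal stress change (MPa, unclamping positive)."""
    return d_tau + mu_eff * d_sigma_n

print(coulomb_stress_change(0.05, 0.02))  # 0.058 MPa -> failure encouraged
```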

17.
Stochastic models are recent but indispensable tools for snow avalanche hazard mapping and can be described in a general system framework. For the computation of design return periods, magnitude and frequency have to be evaluated. The magnitude model consists of a set of physical equations for avalanche propagation associated with a statistical formalism adapted to the input–output data structure. The friction law includes at least one latent friction coefficient. The Bayesian paradigm and the associated simulation techniques assist considerably in performing the inference and in taking estimation errors into account for prediction. Starting from the general case, simplifying hypotheses allow the predictive distribution of high return periods to be computed on a case study. Only release and runout altitudes are considered, so that the model can use the French database. An invertible propagation model makes it possible to work with the latent friction coefficient as if it were observed. Prior knowledge is borrowed from an avalanche path with similar topographical characteristics. Justifications for the working hypotheses and further developments are discussed. In particular, the whole approach is positioned with respect to both deterministic and stochastic hydrology.
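A minimal sketch (an assumed event frequency and an invented runout sample, not the paper's fitted model) of turning a magnitude-frequency model into design return periods via T(x) = 1 / (lambda * P(X >= x)).

```python
# A minimal sketch with invented numbers: return period of a runout abscissa
# from an assumed avalanche frequency and a Monte Carlo runout sample.
import numpy as np

rng = np.random.default_rng(9)
lam = 2.5                                           # avalanches/winter (assumed)
x_sim = 1200 + 300 * rng.weibull(1.5, size=50_000)  # simulated runouts (m)

def return_period(x):
    exceed = (x_sim >= x).mean()                    # P(X >= x) from the sample
    return np.inf if exceed == 0 else 1.0 / (lam * exceed)

for x in (1400, 1700, 2000):
    print(f"runout {x} m: T = {return_period(x):.0f} yr")
```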

18.
19.
Six precipitation probability distributions (exponential, Gamma, Weibull, skewed normal, mixed exponential and hybrid exponential/Pareto) are evaluated on their ability to reproduce the statistics of the original observed time series. Each probability distribution is also indirectly assessed by examining its ability to reproduce key hydrological variables after being used as input to a lumped hydrological model. Data from 24 weather stations and two watersheds (the Chute‐du‐Diable and Yamaska watersheds) in the province of Quebec (Canada) were used for this assessment. Various indices and statistics, such as the mean, variance, frequency distribution and extreme values, are used to quantify the performance in simulating precipitation and discharge. Performance in reproducing key statistics of the precipitation time series is well correlated with the number of parameters of the distribution function, and the three‐parameter precipitation models outperform the other models, with the mixed exponential distribution being the best at simulating daily precipitation. The advantage of using more complex precipitation distributions is not as clear‐cut when the simulated time series are used to drive a hydrological model. Although the advantage of functions with more parameters is then far less obvious, the mixed exponential distribution nonetheless appears to be the best candidate for hydrological modelling. The implications of choosing a distribution function for hydrological modelling and climate change impact studies are also discussed. Copyright © 2012 John Wiley & Sons, Ltd.
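A minimal sketch (synthetic wet-day amounts, not the Quebec data) of fitting the mixed exponential distribution, f(x) = p/b1 exp(-x/b1) + (1-p)/b2 exp(-x/b2), by maximum likelihood.

```python
# A minimal sketch with synthetic data: maximum-likelihood fit of the
# three-parameter mixed exponential distribution to wet-day amounts.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(10)
x = np.where(rng.random(5000) < 0.7,
             rng.exponential(3.0, 5000), rng.exponential(15.0, 5000))

def nll(params):
    p, b1, b2 = params
    if not (0 < p < 1 and b1 > 0 and b2 > 0):
        return np.inf                      # reject infeasible parameters
    pdf = p / b1 * np.exp(-x / b1) + (1 - p) / b2 * np.exp(-x / b2)
    return -np.sum(np.log(pdf))            # negative log-likelihood

fit = minimize(nll, x0=[0.5, 2.0, 10.0], method="Nelder-Mead")
print(fit.x)                               # estimated (p, b1, b2)
```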

20.