Similar Documents
20 similar documents found (search time: 109 ms)
1.
Model performance evaluation for real-time flood forecasting has been conducted using various criteria. Although the coefficient of efficiency (CE) is the most widely used, we demonstrate that a model achieving good efficiency may actually be inferior to the naïve (or persistence) forecast if the flow series has a high lag-1 autocorrelation coefficient. We derived sample-dependent and AR-model-dependent asymptotic relationships between the coefficient of efficiency and the coefficient of persistence (CP), which form the basis of a proposed CE–CP coupled model performance evaluation criterion. Considering flow persistence and model simplicity, the AR(2) model is suggested as the benchmark model for performance evaluation of real-time flood forecasting models. We emphasize that performance evaluation of flood forecasting models using the proposed CE–CP coupled criterion should be carried out with respect to individual flood events. A single CE or CP value derived from a multi-event artifactual series by no means provides an overall multi-event evaluation and may actually disguise the real capability of the proposed model.
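The two criteria discussed above have simple closed forms; the sketch below (function and variable names are mine, not the paper's) illustrates how a model can score a high CE yet add nothing over the naive forecast on a strongly autocorrelated series.

```python
import numpy as np

def coefficient_of_efficiency(obs, sim):
    """Coefficient of efficiency (CE): 1 - SSE(model) / SSE(mean benchmark)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def coefficient_of_persistence(obs, sim, lag=1):
    """Coefficient of persistence (CP): the benchmark is the naive
    (persistence) forecast obs[t - lag] instead of the long-term mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    num = np.sum((obs[lag:] - sim[lag:]) ** 2)
    den = np.sum((obs[lag:] - obs[:-lag]) ** 2)
    return 1.0 - num / den
```

On a trending, highly lag-1-autocorrelated series, the naive forecast itself earns a CE near 1 while its CP is exactly zero, which is the pitfall the abstract warns about.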

2.
Despite significant research advances achieved during the last decades, seemingly inconsistent forecasting results related to stochastic, chaotic, and black-box approaches have been reported. Herein, we attempt to address the entropy/complexity resulting from hydrological and climatological conditions. Accordingly, the mutual information function, correlation dimension, averaged false nearest neighbor method with the E1 and E2 quantities, and a complexity analysis that uses sample entropy coupled with the iterative amplitude-adjusted Fourier transform were employed as nonlinear deterministic identification tools. We investigated forecasting of daily streamflow for three climatologically different Swedish rivers, the Helge, Ljusnan, and Kalix, using self-exciting threshold autoregressive (SETAR) models, the k-nearest neighbor (k-nn) method, and artificial neural networks (ANN). The results suggest that streamflow in these rivers during the 1957–2012 period exhibited dynamics ranging from low to high complexity. Specifically, (1) lower complexity leads to higher predictability at all lead times, and the models' worst performances were obtained for the most complex streamflow (Ljusnan River); (2) ANN was the best model for 1-day-ahead forecasting regardless of complexity; (3) SETAR was the best model for 7-day-ahead forecasting in terms of performance indices, especially for less complex series; (4) the largest error propagation was obtained with k-nn and ANN, and thus these models should be used carefully beyond 2-day forecasting; and (5) adding input variables beyond the dominant ones had an insignificant impact on the forecasting performance of the ANN and k-nn models.
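Of the identification tools named above, sample entropy is the easiest to sketch; the following is a simplified illustration (not the authors' implementation), with the tolerance parameter r expressed as a fraction of the series' standard deviation:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r): -ln(A / B), where B counts pairs of
    length-m templates within Chebyshev distance r * std(x), and A does
    the same for length m + 1. Higher values indicate more complexity."""
    x = np.asarray(x, float)
    tol = r * x.std()

    def count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        c = 0
        for i in range(len(templ) - 1):
            # Chebyshev distance of template i to all later templates
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            c += int(np.sum(d <= tol))
        return c

    B, A = count(m), count(m + 1)
    return -np.log(A / B)
```

A regular (e.g., sinusoidal) signal scores much lower than white noise, matching the low-to-high complexity ordering the abstract describes.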

3.
Effects of temporally correlated infiltration on water flow in an unsaturated–saturated system were investigated. Both white-noise and exponentially correlated infiltration processes were considered. The moment equations of the pressure head (ψ) were solved numerically to obtain the variance and autocorrelation functions of ψ at 14 observation points. Monte Carlo simulations were conducted to verify the numerical results and to estimate the power spectrum of ψ (S_ψψ). It was found that, as water flows through the system, the variance of ψ (σ_ψ²) is damped by the system: the deeper in the system, the smaller σ_ψ², and the larger the correlation timescale of the infiltration process (λ_I), the larger σ_ψ². The unsaturated–saturated system gradually filters out the short-term fluctuations of ψ, and the damping effect is most significant in the upper part of the system. The fluctuations of ψ are non-stationary at early time and become stationary as time progresses: the larger the value of λ_I, the longer the non-stationary period. The correlation timescale of ψ (λ_ψ) increases with depth and approaches a constant value at depth: the larger the value of λ_I, the larger the value of λ_ψ. The estimated S_ψψ is consistent with the results for the variance and autocorrelation function.
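An exponentially correlated forcing of the kind considered here can be generated as a first-order autoregressive (Ornstein–Uhlenbeck-type) series whose autocovariance is σ²·exp(−|s|/λ_I); a sketch under the assumption of Gaussian fluctuations (names and parameter values are illustrative):

```python
import numpy as np

def correlated_infiltration(n, dt, mean, sigma, lam):
    """AR(1) series with stationary variance sigma**2 and exponential
    autocorrelation exp(-|s| / lam), i.e. correlation timescale lam."""
    rho = np.exp(-dt / lam)
    x = np.empty(n)
    x[0] = np.random.normal(0.0, sigma)
    for k in range(1, n):
        # innovation variance chosen to keep the stationary variance sigma**2
        x[k] = rho * x[k - 1] + np.random.normal(0.0, sigma * np.sqrt(1.0 - rho ** 2))
    return mean + x
```

As λ_I → 0 (relative to dt) the series degenerates to the white-noise forcing case, the other limit studied in the abstract.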

4.
Seismicity has been identified as an example of a natural, nonlinear system in which the distribution of frequency and event size follows a power law called the Gutenberg–Richter (G-R) law. The parameters of the G-R law, namely the b- and a-values, have been widely used in many studies of seismic hazards, earthquake forecasting models, and related topics. However, the plausibility of the power-law model and the applicability of its parameters have mainly been verified with the statistical error σ of the b-value, the effectiveness of which is still doubtful. In this research, we used the newly defined p value developed by Clauset et al. (Power-Law Distributions in Empirical Data, SIAM Rev. 51, 661–703, 2009) instead of the statistical error σ of the b-value and verified its effectiveness as a plausibility index of the power-law model. Furthermore, we also verified the effectiveness of the K–S statistic as a goodness-of-fit test for estimating the crucial parameter M_c of the power-law model.
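The two ingredients of the Clauset et al. approach, a maximum-likelihood fit and a Kolmogorov–Smirnov distance, can be sketched for the G-R case as follows (using the standard Aki/Utsu estimator; an illustration, not the study's code):

```python
import numpy as np

def b_value_mle(mags, mc, dm=0.1):
    """Aki/Utsu maximum-likelihood b-value for magnitudes >= mc,
    with the usual half-bin correction dm/2 for binned catalogs."""
    m = np.asarray(mags, float)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

def ks_distance(mags, mc, b):
    """Kolmogorov-Smirnov distance between the empirical CDF and the
    G-R (exponential) model CDF above the completeness magnitude mc."""
    m = np.sort(np.asarray(mags, float))
    m = m[m >= mc]
    beta = b * np.log(10.0)
    model = 1.0 - np.exp(-beta * (m - mc))
    empirical = np.arange(1, len(m) + 1) / len(m)
    return np.max(np.abs(empirical - model))
```

Scanning candidate M_c values and keeping the one that minimizes the K–S distance is the essence of the goodness-of-fit test mentioned above.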

5.
We attempted to assess shear-wave velocity values (V_S) and, to a lesser extent, V_P values from ambient noise recordings in an array configuration. Five array sites were situated in close proximity to borehole sites. Shear-wave velocity profiles were modeled at these five array sites with the aid of two computational techniques, viz. spatial autocorrelation (SPAC) and H/V ellipticity. Of these five array sites, velocity estimates could be reliably inferred at three locations. The shear-wave velocities estimated by these methods are quite consistent with each other. The computed V_S values down to 30 m depth are in the range of 275 to 375 m/s at most of the sites, which implies the prevalence of a low-velocity zone in some pocket areas. The results were corroborated by evidence from site geology as well as geotechnical information.

6.
This paper develops a minimum relative entropy theory with frequency as the random variable, called MREF henceforth, for streamflow forecasting. The MREF theory consists of three main components: (1) determination of the spectral density, (2) determination of parameters by cepstrum analysis, and (3) extension of the autocorrelation function. MREF is robust at determining the main periodicity and provides higher-resolution spectral density. The theory is evaluated using monthly streamflow observed at 20 stations in the Mississippi River basin, where forecasted monthly streamflows show a coefficient of determination (r²) of 0.876, which is slightly higher in the Upper Mississippi (r² = 0.932) than in the Lower Mississippi (r² = 0.806). Comparison of different priors shows that the prior with a background spectral density peaking at a frequency of 1/12 provides satisfactory accuracy and can be used to forecast monthly streamflow with limited information. Four different entropy theories are compared, and it is found that minimum relative entropy has an advantage over maximum entropy (ME) for both spectral estimation and streamflow forecasting if additional information is given as a prior. In addition, MREF is found to be more convenient for estimating parameters with cepstrum analysis than minimum relative entropy with spectral power as the random variable (MRES), and less information is needed to assume the prior. In general, the reliability of monthly streamflow forecasting, from highest to lowest, is: MREF, MRES, configuration entropy (CE), Burg entropy (BE), and the autoregressive method (AR).

7.
The problem of estimating the time derivatives of the horizontal components of the geomagnetic field and forecasting the probability of the occurrence of perturbations that exceed a given threshold level (over-threshold perturbations) arises in applications concerned with geomagnetically induced currents (GICs). In this work, we consider the temporal and spatial structure of the Pi3 pulsations with quasi-periods of 10² to 10³ s during which the auroral and subauroral stations of the IMAGE network record over-threshold values in the derivatives of the meridional (along the longitudinal circle) B_X component and latitudinal (along the latitudinal circle) B_Y component. The extreme |dB_X/dt| values mainly develop against the background of Pi3 pulsations with a complex frequency content, whereas the extreme |dB_Y/dt| values appear when the buildup (decay) phases of the bay-like disturbance associated with the evolution of a substorm coincide with the respective phases of the pulsation field. The conditions under which the derivatives |dB_X/dt| and |dB_Y/dt| reach their over-threshold values are studied for subauroral latitudes by the technique of superposed epoch analysis. The extreme values of the derivatives most frequently occur during the main phase of moderate magnetic storms or, outside storms, during high substorm activity under the conditions of a negative vertical component of the interplanetary magnetic field. The probability of the occurrence of over-threshold values increases at high amplitudes of the Pi3 pulsations and depends on their spectral content. The problem of analyzing and forecasting the over-threshold |dB_Y/dt| perturbations is complicated by the fact that the scale of the perturbations is small along lines of latitude and large along the meridians. This can result in GIC excitation in north–south oriented electric power lines by geomagnetic perturbations localized within a narrow band in longitude, which can be missed during measurements.
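Flagging over-threshold derivative values in a uniformly sampled field component reduces to numerical differentiation plus thresholding; a minimal sketch (the function and names are mine, not from the paper):

```python
import numpy as np

def over_threshold_mask(b, dt, threshold):
    """Central-difference estimate of dB/dt from samples b (nT) taken
    every dt seconds, plus a boolean mask of over-threshold samples."""
    dbdt = np.gradient(np.asarray(b, float), dt)
    return dbdt, np.abs(dbdt) >= threshold
```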

8.
A preliminary study of the b value of rocks with two kinds of structural models has been made on the basis of a new acoustic emission recording system. It shows that the b value of the sample decreases markedly when a sample with compressive en echelon faults changes into a tensile one after an interchange occurs between the stress axes σ₁ and σ₂. A similar decrease is observed when a sample with tensile en echelon faults changes into one with a bent fault after the two segments of the en echelon fault link up. These facts indicate that the variation of the b value may contain information about the regionally dominant structural model. Therefore, b-value analysis could be a new method for studying regionally dominant structural models.

9.
Hydrological models have been widely applied in flood forecasting, water resource management, and other environmental sciences. Most hydrological models calibrate and validate parameters with the available records. However, the first step of hydrological simulation is always to split the samples used for calibration and validation quantitatively and objectively. In this paper, we propose a framework that addresses this issue by combining a hierarchical trial-and-error scheme for the systematic testing of hydrological models with hypothesis testing to check the statistical significance of goodness-of-fit indices. That is, the framework evaluates the performance of a hydrological model using sample splitting for calibration and validation, and assesses the statistical significance of the Nash–Sutcliffe efficiency index (Ef), which is commonly used to assess the performance of hydrological models. The sample splitting scheme is judged acceptable if the Ef values exceed the threshold of the hypothesis test. According to the requirements of the hierarchical scheme for the systematic testing of hydrological models, cross calibration and validation help to increase the reliability of the splitting scheme and to reduce the effective range of sample sizes for both calibration and validation. It is illustrated that the threshold of Ef depends on the significance level, the evaluation criteria (both regarded as the population), the distribution type, and the sample size. The performance rating of Ef is largely dependent on the evaluation criteria. Three types of distributions, based on an approximately standard normal distribution, a chi-square distribution, and a bootstrap method, are used to investigate their effects on the thresholds, with two commonly used significance levels. The highest threshold is from the bootstrap method, the middle one is from the approximately standard normal distribution, and the lowest is from the chi-square distribution. It was found that the smaller the sample size, the higher the threshold values. Sample splitting was improved by providing more records. In addition, outliers with a large bias between the simulation and the observation can affect the sample values of Ef, and hence the output of the sample splitting scheme. Physical hydrological processes and the purpose of the model should be carefully considered when assessing outliers. The proposed framework cannot guarantee the best splitting scheme, but the results show the necessary conditions for splitting schemes to calibrate and validate hydrological models from a statistical point of view.
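The Ef index and a bootstrap approximation of its threshold can be sketched as follows; this is a simplified pairwise-resampling illustration under my own assumptions, not the paper's exact procedure:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency index (Ef)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def bootstrap_threshold(obs, sim, alpha=0.05, n_boot=1000, seed=None):
    """Lower alpha-quantile of the bootstrap sampling distribution of Ef,
    obtained by resampling (obs, sim) pairs with replacement."""
    rng = np.random.default_rng(seed)
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    n = len(obs)
    vals = [nse(obs[idx], sim[idx])
            for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    return float(np.quantile(vals, alpha))
```

Because the quantile tightens as the number of resampled pairs grows, a smaller sample size yields a wider bootstrap distribution, consistent with the abstract's finding that smaller samples demand higher thresholds.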

10.
To alert the public to the possibility of a tornado (T), hail (H), or convective wind (C), the National Weather Service (NWS) issues watches (V) and warnings (W). There are severe thunderstorm watches (SV), tornado watches (TV), and particularly dangerous situation watches (PV); and there are severe thunderstorm warnings (SW) and tornado warnings (TW). Two stochastic models are formulated that quantify uncertainty in severe weather alarms for the purpose of making decisions: a one-stage model for deciders who respond to warnings, and a two-stage model for deciders who respond to watches and warnings. The models identify all possible sequences of watches, warnings, and events, and characterize the associated uncertainties in terms of transition probabilities. The modeling approach is demonstrated on data from the NWS Norman, Oklahoma, warning area, years 2000–2007. The major findings are these. (i) Irrespective of its official designation, every warning type {SW, TW} predicts with a significant probability every event type {T, H, C}. (ii) An ordered intersection of SW and TW, defined as a reinforced warning (RW), provides additional predictive information and outperforms SW and TW. (iii) A watch rarely leads directly to an event, and most frequently is false. But a watch that precedes a warning does matter. The watch type {SV, TV, PV} is a predictor of the warning type {SW, RW, TW} and of the warning performance: it sharpens the false alarm rate of the warning and the predictive probability of an event, and it increases the average lead time of the warning.
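The transition probabilities that characterize watch → warning → event sequences can be estimated by simple counting; an illustrative sketch (the state labels follow the abstract, but the function and toy data are mine):

```python
from collections import Counter

def transition_probabilities(sequences):
    """Estimate P(next state | current state) from observed alarm
    sequences such as ('TV', 'TW', 'T')."""
    pair_counts, from_counts = Counter(), Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            pair_counts[(a, b)] += 1
            from_counts[a] += 1
    return {pair: count / from_counts[pair[0]]
            for pair, count in pair_counts.items()}
```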

11.
An alternative model for the nonlinear interaction term S_nl in spectral wave models, the so-called generalized kinetic equation (Janssen J Phys Oceanogr 33(4):863–884, 2003; Annenkov and Shrira J Fluid Mech 561:181–207, 2006b; Gramstad and Stiassnie J Fluid Mech 718:280–303, 2013), is discussed and implemented in the third-generation wave model WAVEWATCH-III. The generalized kinetic equation includes the effects of near-resonant nonlinear interactions and is therefore able, in theory, to describe faster nonlinear evolution than the existing forms of S_nl, which are based on the standard Hasselmann kinetic equation (Hasselmann J Fluid Mech 12:481–500, 1962). Numerical simulations with WAVEWATCH have been carried out to thoroughly test the performance of the new form of S_nl and to compare it to the existing models for S_nl in WAVEWATCH, the DIA and WRT. Some differences between the models for S_nl are observed. As expected, the DIA is shown to perform less well than the exact terms in certain situations, in particular for narrow wave spectra. Also, for the case of turning wind, significant differences between the models are observed. Nevertheless, unlike the case of unidirectional waves, where the generalized kinetic equation represents an obvious improvement over the standard forms of S_nl (Gramstad and Stiassnie 2013), the differences seem to be less pronounced for the more realistic cases considered in this paper.

12.
One of the crucial components in seismic hazard analysis is the estimation of the maximum earthquake magnitude and its associated uncertainty. In the present study, the uncertainty related to the maximum expected magnitude μ is determined in terms of confidence intervals for an imposed level of confidence. Previous work by Salamat et al. (Pure Appl Geophys 174:763–777, 2017) shows the divergence of the confidence interval of the maximum possible magnitude m_max for high levels of confidence in six seismotectonic zones of Iran. In this work, the maximum expected earthquake magnitude μ is calculated in a predefined finite time interval and for an imposed level of confidence. For this, we use a conceptual model based on a doubly truncated Gutenberg-Richter law for magnitudes with a constant b-value and calculate the posterior distribution of μ for a future time interval T_f. We assume a stationary Poisson process in time and a Gutenberg-Richter relation for magnitudes. The upper bound of the magnitude confidence interval is calculated for time intervals of 30, 50, and 100 years and imposed significance levels α = 0.5, 0.1, 0.05, and 0.01. The posterior distributions of the waiting times T_f to the next earthquake with a given magnitude equal to 6.5, 7.0, and 7.5 are calculated in each zone. In order to find the influence of declustering, we use both the original and the declustered version of the catalog. The earthquake catalog of the territory of Iran and its surroundings is subdivided into six seismotectonic zones: Alborz, Azerbaijan, Central Iran, Zagros, Kopet Dagh, and Makran. We assume the maximum possible magnitude m_max = 8.5 and calculate the upper bound of the confidence interval of μ in each zone. The results indicate that for short time intervals equal to 30 and 50 years and imposed confidence levels 1 − α = 0.95 and 0.90, the probability distribution of μ is around μ = 7.16–8.23 in all seismic zones.
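Under the stated assumptions (stationary Poisson occurrence, doubly truncated G-R magnitudes), P(max magnitude in T_f ≤ m) = exp(−λ·T_f·(1 − F(m))), so a confidence bound on μ can be found by inverting this probability; a sketch by bisection (the parameter values in the example are illustrative, not the paper's Iranian-zone estimates):

```python
import numpy as np

def truncated_gr_cdf(m, b, m_min, m_max):
    """CDF of the doubly truncated Gutenberg-Richter law on [m_min, m_max]."""
    beta = b * np.log(10.0)
    num = 1.0 - np.exp(-beta * (m - m_min))
    den = 1.0 - np.exp(-beta * (m_max - m_min))
    return num / den

def max_magnitude_quantile(p, rate, t_f, b, m_min, m_max):
    """Magnitude mu with P(max magnitude in t_f <= mu) = p, assuming a
    Poisson process with annual rate `rate` above m_min and a doubly
    truncated G-R magnitude law. Solved by bisection."""
    lo, hi = m_min, m_max
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        prob = np.exp(-rate * t_f * (1.0 - truncated_gr_cdf(mid, b, m_min, m_max)))
        if prob < p:
            lo = mid   # need a larger magnitude to reach probability p
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Raising the confidence level pushes the bound toward m_max, which mirrors the divergence of the interval at high confidence noted above.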

13.
We analyzed the behavior of the F2-layer parameters during nighttime periods of enhanced electron concentration using the results of vertical sounding of the ionosphere carried out with five-minute periodicity in Almaty (76°55′ E, 43°15′ N) in 2001–2012. The results are obtained within the framework of the unified concept of different types of ionospheric plasma disturbances manifested as variations in the height and half-thickness of the layer, accompanied by an increase and decrease of N_mF2 at the moments of maximum compression and expansion of the layer. A good correlation is found between the height h_Am, which corresponds to the maximum increase, and the layer peak height h_mF, while h_Am is always less than h_mF. The difference between h_Am and h_mF increases linearly with increasing h_mF: whereas the difference is ~38 km for h_mF = 280 km, it is ~54 km for h_mF = 380 km. Additionally, there is a good correlation between the increase in the electron concentration at the layer maximum ΔN_m and the maximum enhancement at a fixed height ΔN; the electron-concentration enhancement at the layer maximum is about two to three times lower than its maximum enhancement at the fixed height.

16.
We used CHAMP satellite vector data and the latest IGRF12 model to investigate the regional magnetic anomalies over mainland China. We assumed the satellite points to lie on the same surface (307.69 km altitude) and constructed a spherical cap harmonic model of the satellite magnetic anomalies for the elements X, Y, Z, and F over the Chinese mainland for 2010.0 (SCH2010), based on 498 selected points. We removed the external field by using the CM4 model. The pole of the spherical cap is at 36°N and 104°E, and its half-angle is 30°. After checking and comparing the root mean square (RMS) errors of ΔX, ΔY, and ΔZ and of X, Y, and Z, we established the truncation level at K_max = 9. The results suggest that the created China Geomagnetic Referenced Field at satellite level (CGRF2010) is consistent with the CM4 model. We compared SCH2010 with other models and found that the intensities and distributions are consistent. In view of the variation of F at different altitudes, the SCH2010 model results obey the basics of the geomagnetic field. Moreover, the rates of change of X, Y, and Z for SCH2010 and CM4 are consistent. The proposed model can successfully reproduce the geomagnetic data, as do other data-fitting models, but the inherent sources of error have to be considered as well.

17.
We continue applying the general concept of seismic risk analysis in a number of seismic regions worldwide by constructing regional seismic hazard maps based on morphostructural analysis, pattern recognition, and the Unified Scaling Law for Earthquakes (USLE), which generalizes the Gutenberg-Richter relationship by making use of the naturally fractal distribution of earthquake sources of different sizes in a seismic region. The USLE is the empirical relationship log₁₀N(M, L) = A + B·(5 − M) + C·log₁₀L, where N(M, L) is the expected annual number of earthquakes of a certain magnitude M within a seismically prone area of linear dimension L. We use the parameters A, B, and C of the USLE to estimate, first, the expected maximum magnitude in a time interval at seismically prone nodes of the morphostructural scheme of the region under study, and then map the corresponding expected ground-shaking parameters (e.g., peak ground acceleration, PGA, or macroseismic intensity). After rigorous verification against the available seismic evidence from the past (usually the observed instrumental PGA or the historically reported macroseismic intensity), such a seismic hazard map is used to generate maps of specific earthquake risks for population, cities, and infrastructure (e.g., those based on a census of population or a buildings inventory). The methodology of seismic hazard and risk assessment is illustrated by application to the territory of the Greater Caucasus and Crimea.
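The USLE formula above converts directly into an expected annual event count; a one-line sketch (the parameter values in the example are illustrative, not fitted to any region):

```python
import math

def usle_annual_number(M, L, A, B, C):
    """Expected annual number N(M, L) of magnitude-M earthquakes within an
    area of linear dimension L, from log10 N = A + B*(5 - M) + C*log10(L)."""
    return 10.0 ** (A + B * (5.0 - M) + C * math.log10(L))
```

With B near 1 the relation reduces to the familiar tenfold drop in rate per unit magnitude, while C captures how the count scales with the linear size of the area.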

18.
The time variations in three parameters during the last decades are considered. R(foF2) is the correlation coefficient between the nighttime and daytime values of foF2 for the same day. Stable trends are found for the minimum (R(foF2)min) and maximum (R(foF2)max) values of R(foF2) during a year. The foF2(night)/foF2(day) ratio demonstrates both negative and positive trends, and the trend sign depends on the inclination I and declination D of the magnetic field. The correlation coefficient r(h, fo) between foF2 and the 100 hPa level in the stratosphere demonstrates a decrease (in the years of maximum and minimum solar activity) from the 1980s to the 1990s. The trends in all three groups of data are considered under the assumption of long-term changes in the circulation of the upper atmosphere.

19.
Seismic observations exhibit the presence of abnormal b-values prior to numerous earthquakes. The time interval from the appearance of abnormal b-values to the occurrence of the mainshock is called the precursor time. Two kinds of precursor times are in use: the first, denoted by T, is the time interval from the moment when the b-value starts to increase from the normal value toward the abnormal one to the occurrence time of the forthcoming mainshock; the second, denoted by T_p, is the time interval from the moment when the abnormal b-value reaches its peak to the occurrence time of the forthcoming mainshock. Let T* be the waiting time from the moment when the abnormal b-value returns to the normal one to the occurrence time of the forthcoming mainshock. The precursor time T (usually in days) has been found to be related to the magnitude M of the expected mainshock in the linear form log(T) = q + rM, where q and r are the intercept and slope, respectively. In this study, the values of T, T_p, and T* of 45 earthquakes with 3 ≤ M ≤ 9 that occurred in various tectonic regions are compiled or measured from the temporal variations in b-values given in numerous source materials. The relationships of T and T_p, respectively, versus M are inferred from the compiled data. The difference between the values of T and T_p decreases with increasing M. In addition, plots of T*/T versus M, T* versus T, and T* versus T − T* are made, and the related equations between each pair of quantities are inferred from the given data.
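The relation log(T) = q + rM is an ordinary least-squares line in (M, log T) space; a sketch fitted to synthetic data (not the study's 45-event compilation):

```python
import numpy as np

def fit_precursor_relation(magnitudes, precursor_days):
    """Fit log10(T) = q + r*M by least squares; returns (q, r)."""
    M = np.asarray(magnitudes, float)
    logT = np.log10(np.asarray(precursor_days, float))
    r, q = np.polyfit(M, logT, 1)   # polyfit returns slope first, intercept second
    return q, r
```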

20.
The regularities in the radiation and propagation of seismic waves in the regions of the North Caucasus are analyzed for estimating the ground motion parameters during probable future strong earthquakes. Based on the records of regional earthquakes with magnitudes M_W ~ 3.9–5.6 within epicentral distances up to ~300 km obtained during the period of digital measurements at the Sochi and Anapa seismic stations, the Q-factors in the vicinities of these sites are estimated at ~55f^0.9 and ~90f^0.7, respectively. The estimates were obtained by the coda normalization method developed by Aki, Rautian, and other authors. This method is based on the phenomenon of suppression of the earthquake (source) effects and local (site) responses by coda waves in the S-wave spectra. The obtained Q-factor estimates can be used for forecasting the ground shaking parameters for probable future strong earthquakes in the North Caucasus in the vicinities of Sochi and Anapa.
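A frequency-dependent quality factor Q(f) = Q₀·f^n implies an anelastic amplitude decay of exp(−πft/Q(f)) over travel time t; a sketch using the two regional estimates quoted above as example parameters:

```python
import math

def coda_q(f, q0, n):
    """Frequency-dependent quality factor Q(f) = q0 * f**n,
    e.g. ~55 f^0.9 (Sochi) and ~90 f^0.7 (Anapa)."""
    return q0 * f ** n

def attenuation_factor(f, travel_time, q0, n):
    """Anelastic amplitude decay exp(-pi * f * t / Q(f)) for an S wave
    with the given travel time t (seconds)."""
    return math.exp(-math.pi * f * travel_time / coda_q(f, q0, n))
```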
