Similar Documents
20 similar documents retrieved.
1.
2.
If earthquakes are modelled by a stochastic process, it is possible to interpret the associated response spectrum in terms of the statistics of extreme values of oscillator response to the process. For a stationary earthquake model this interpretation leads to a relationship between the power spectral density function of the process and the response spectrum. This relationship is examined in this paper and forms the basis for two methods presented to obtain the power spectrum of the earthquake process from its response spectrum. One of these methods is approximate but leads to an explicit representation of the power spectral density function in terms of the response spectrum. The other method is exact; for it, an iterative scheme for the solution of the problem is established. An example problem is solved to illustrate the use of the two methods, and it is shown that for small values of damping the approximate derivation yields a fairly accurate solution.
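The exact method's iterative character can be sketched as follows: start from a flat guess for the one-sided PSD, compute each oscillator's RMS response, convert it to a spectral ordinate with a peak factor, and rescale the PSD by the squared ratio of target to computed spectrum. This is a minimal sketch under assumptions not in the abstract (a constant peak factor and a pseudo-acceleration target); all names and the discretization are illustrative.

    import numpy as np

    def psd_from_response_spectrum(omega, Sa_target, zeta=0.05,
                                   peak_factor=3.0, n_iter=50):
        """Iteratively back out a one-sided PSD G(omega) whose oscillator
        response spectrum matches Sa_target.  The peak factor is held
        constant here, whereas the exact scheme would update it from the
        response statistics at every pass."""
        omega = np.asarray(omega, float)
        Sa_target = np.asarray(Sa_target, float)
        G = np.full_like(omega, 1e-4)              # flat initial guess
        for _ in range(n_iter):
            Sa_model = np.empty_like(omega)
            for i, wn in enumerate(omega):
                # squared modulus of the SDOF transfer function
                H2 = 1.0 / ((wn**2 - omega**2)**2 + (2.0*zeta*wn*omega)**2)
                sigma2 = np.trapz(H2 * G, omega)   # response variance
                Sa_model[i] = peak_factor * wn**2 * np.sqrt(sigma2)
            G *= (Sa_target / Sa_model)**2         # multiplicative update
        return G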

3.
A constructed estimator is introduced for the right truncation point of the truncated exponential distribution. The new estimator is most efficient in important ranges of truncation points for finite sample sizes, and the introduced inverse mean squared error clearly indicates its good behaviour. The estimation of the scale parameter is considered in all discussions and computations. The methods and models of extreme value theory are not appropriate for estimating the truncation point because they work only for very large sample sizes. Furthermore, a procedure for a first goodness-of-fit test is introduced. All of this is investigated by extensive Monte Carlo simulations for different truncation points and sample sizes. Finally, the new inference methods are applied to the distributions of wildfire sizes and earthquake magnitudes.
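For orientation, plain maximum likelihood for this model takes the sample maximum as the truncation point and solves the mean equation for the rate; the constructed estimator of the abstract improves on the maximum's downward bias, a correction this sketch does not reproduce. The solver bracket is an assumption.

    import numpy as np
    from scipy.optimize import brentq

    def fit_truncated_exponential(x):
        """ML fit of an exponential right-truncated at T: T_hat is the
        sample maximum (biased low) and the rate solves E[X] = xbar,
        where E[X] = 1/lam - T/(exp(lam*T) - 1)."""
        x = np.asarray(x, float)
        T_hat = x.max()
        xbar = x.mean()
        f = lambda lam: 1.0/lam - T_hat/np.expm1(lam*T_hat) - xbar
        lam_hat = brentq(f, 1e-8, 1e6)     # assumed bracketing interval
        return lam_hat, T_hat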

4.
The use of historical data can significantly reduce the uncertainty around estimates of the magnitude of rare events obtained with extreme value statistical models. For historical data to be included in the statistical analysis, a number of their properties, e.g. their number and magnitude, need to be known with a reasonable level of confidence. Another key aspect which needs to be known is the coverage period of the historical information, i.e. the period of time over which it is assumed that all large events above a certain threshold are known. It may be, though, that information on the coverage period cannot easily be retrieved with sufficient confidence, in which case it needs to be estimated. In this paper methods to perform such estimation are introduced and evaluated. The statistical definition of the problem corresponds to estimating the size of a population for which only a few data points are available. This problem is generally referred to as the German tank problem, which arose during the Second World War, when statistical estimates of the number of tanks available to the German army were obtained. Different estimators can be derived using different statistical estimation approaches, with the maximum spacing estimator being the minimum-variance unbiased estimator. The properties of three estimators are investigated by means of a simulation study, both for the simple estimation of the historical coverage and for the estimation of the extreme value statistical model. The maximum spacing estimator is confirmed to be a good approach for practical estimation of the historical coverage period, and its application to a case study in Britain is presented.
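The estimators in play are compact enough to state directly: the classic discrete MVUE for the German tank problem, and its continuous analogue when the known event dates are modelled as uniform over the unknown coverage length. A sketch with illustrative names:

    import numpy as np

    def german_tank(serials):
        """Discrete MVUE for serial numbers 1..N: m + m/k - 1."""
        s = np.asarray(serials)
        k, m = len(s), s.max()
        return m + m / k - 1

    def coverage_length(event_times, origin=0.0):
        """Continuous analogue: if k event times are uniform on
        (origin, origin + N), scale the maximum up by the average
        spacing, N_hat = m * (k + 1) / k."""
        t = np.asarray(event_times, float) - origin
        k, m = len(t), t.max()
        return m * (k + 1) / k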

5.
The problem of identification of the modal parameters of a structural model using measured ambient response time histories is addressed. A Bayesian spectral density approach (BSDA) for modal updating is presented which uses the statistical properties of a spectral density estimator to obtain not only the optimal values of the updated modal parameters but also their associated uncertainties by calculating the posterior joint probability distribution of these parameters. Calculation of the uncertainties of the identified modal parameters is very important if one plans to proceed with the updating of a theoretical finite element model based on modal estimates. It is found that the updated PDF of the modal parameters can be well approximated by a Gaussian distribution centred at the optimal parameters at which the posterior PDF is maximized. Examples using simulated data are presented to illustrate the proposed method. Copyright © 2001 John Wiley & Sons, Ltd.
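The Gaussian approximation described in the abstract is the standard Laplace construction: centre at the optimal (MAP) parameters and take the inverse Hessian of the negative log posterior as the covariance. A generic sketch of that construction, not the BSDA's actual spectral-density likelihood:

    import numpy as np
    from scipy.optimize import minimize

    def laplace_approximation(neg_log_post, theta0, h=1e-4):
        """MAP estimate plus covariance from a central-difference
        Hessian of the negative log posterior."""
        theta = minimize(neg_log_post, theta0, method="Nelder-Mead").x
        n = len(theta)
        H = np.empty((n, n))
        for i in range(n):
            for j in range(n):
                ei, ej = np.eye(n)[i]*h, np.eye(n)[j]*h
                H[i, j] = (neg_log_post(theta + ei + ej)
                           - neg_log_post(theta + ei - ej)
                           - neg_log_post(theta - ei + ej)
                           + neg_log_post(theta - ei - ej)) / (4*h*h)
        return theta, np.linalg.inv(H)   # optimal parameters, uncertainty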

6.
A nonparametric density estimate that incorporates spatial dependency has not previously been studied in the literature. In this article, we propose a new spatial density estimator that depends on two kernels: one controls the distance between observations while the other controls the spatial dependence structure. The uniform almost sure convergence of the density estimate is established along with its rate of convergence. The consistency of the mode of this kernel density is also studied. A spatial hierarchical unsupervised clustering algorithm based on the mode estimate is then presented. Simulations as well as an application to the Monsoon Asia Drought Atlas data illustrate the efficiency of our algorithm, and a comparison of the spatial structures of these data as detected by the density estimate and by the clustering algorithm is presented.
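One hedged reading of the two-kernel construction: a value kernel smooths over the observations while a spatial kernel downweights measurements from distant sites. The exact weighting and normalisation in the article may differ; everything here is illustrative.

    import numpy as np

    def spatial_kde(x, obs_values, obs_sites, site, h_val, h_spat):
        """Density estimate at value x and location site, using a
        Gaussian kernel in the value domain and another over the
        distances between measurement sites."""
        gauss = lambda u: np.exp(-0.5*u**2) / np.sqrt(2*np.pi)
        k_val = gauss((x - obs_values) / h_val)
        dist = np.linalg.norm(obs_sites - site, axis=1)
        k_spat = gauss(dist / h_spat)
        # spatial weights sum to one, so the result integrates to 1 in x
        return np.sum(k_val * k_spat) / (h_val * np.sum(k_spat))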

7.
Parametric models are commonly used in frequency analysis of extreme hydrological events. To estimate extreme quantiles associated with high return periods, these models are not always appropriate. Therefore, estimators based on extreme value theory (EVT) have been proposed in the literature. The Weissman estimator is one of the popular EVT-based semi-parametric estimators of extreme quantiles. In the present paper we propose a new family of EVT-based semi-parametric estimators of extreme quantiles. To build this new family of estimators, the basic idea consists in assigning weights to the k observations being used. Numerical experiments on simulated data are performed and a case study is presented. Results show that the proposed estimators are smooth, stable, less sensitive, and less biased than the Weissman estimator.
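The baseline being improved on is compact: the Weissman estimator extrapolates from the (k+1)-th largest order statistic using the Hill tail-index estimate, and the new family replaces the Hill estimator's equal weights 1/k with a chosen weight sequence. A sketch of the baseline only:

    import numpy as np

    def weissman_quantile(x, k, p):
        """Weissman estimator of the quantile exceeded with probability
        p, with the Hill estimator of the tail index gamma computed
        from the k largest order statistics."""
        xs = np.sort(x)[::-1]           # descending order statistics
        n = len(xs)
        gamma = np.mean(np.log(xs[:k]) - np.log(xs[k]))   # Hill
        return xs[k] * (k / (n * p)) ** gamma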

8.
Studies have illustrated the performance of at-site and regional flood quantile estimators. For realistic generalized extreme value (GEV) distributions and short records, a simple index-flood quantile estimator performs better than two-parameter (2P) GEV quantile estimators with probability weighted moment (PWM) estimation using a regional shape parameter and at-site mean and L-coefficient of variation (L-CV), and full three-parameter at-site GEV/PWM quantile estimators. However, as regional heterogeneity or record lengths increase, the 2P-estimator quickly dominates. This paper generalizes the index flood procedure by employing regression with physiographic information to refine a normalized T-year flood estimator. A linear empirical Bayes estimator uses the normalized quantile regression estimator to define a prior distribution which is employed with the normalized 2P-quantile estimator. Monte Carlo simulations indicate that this empirical Bayes estimator does essentially as well as or better than the simpler normalized quantile regression estimator at sites with short records, and performs as well as or better than the 2P-estimator at sites with longer records or smaller L-CV.
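The linear empirical Bayes step amounts to a precision-weighted average of the regression-based (prior) and at-site 2P normalized quantiles. A minimal sketch, with the two variances assumed supplied by the regression standard error and sampling theory:

    def empirical_bayes_quantile(q_reg, var_reg, q_2p, var_2p):
        """Shrink the at-site 2P quantile toward the regression prior
        in proportion to their precisions."""
        w = var_2p / (var_reg + var_2p)     # weight on the prior mean
        return w * q_reg + (1 - w) * q_2p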

9.
In the hydrologic analysis of extreme events such as precipitation or floods, the data can generally be divided into two types: partial duration series and annual maximum series. Partial duration series analysis is a robust way to analyze hydrologic extremes, but the adaptive choice of an optimal threshold is challenging. The main goal of this paper is to determine the best method for choosing optimal thresholds. Ten semi-parametric tail index estimators were applied to find the optimal threshold for 24-h duration precipitation using data from the Korean Meteorological Administration. The mean square errors of the 10 estimators were calculated with a semi-parametric bootstrap method to determine the optimal threshold. A modified generalized Jackknife estimator showed the best performance in this study among the 10 estimators evaluated with regard to estimating the mean square error of the shape estimator for the generalized Pareto distribution.
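The selection loop can be sketched as follows: for each candidate threshold, bootstrap the fitted generalized Pareto shape parameter over the exceedances and keep the threshold with the smallest mean square error. This uses a plain bootstrap and a single fit where the paper uses a semi-parametric bootstrap and ten tail-index estimators; xi_ref and all names are assumptions.

    import numpy as np
    from scipy.stats import genpareto

    def best_threshold(x, candidates, xi_ref, n_boot=200, seed=0):
        """Pick the threshold minimising the bootstrap MSE of the GPD
        shape estimate; xi_ref is a reference shape value, e.g. from a
        stable fit at a very high threshold."""
        rng = np.random.default_rng(seed)
        mses = []
        for u in candidates:
            exc = x[x > u] - u             # exceedances over u
            boots = []
            for _ in range(n_boot):
                res = rng.choice(exc, size=len(exc), replace=True)
                xi, _, _ = genpareto.fit(res, floc=0)
                boots.append(xi)
            mses.append(np.mean((np.asarray(boots) - xi_ref)**2))
        return candidates[int(np.argmin(mses))]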

10.
11.
Erosion from logging road surfaces, cut slopes, banks, and ditches represents a chronic source of sediment input to streams that can degrade aquatic habitats. Road surface erosion is of particular concern because the magnitude of sediment generation when traffic levels are high can be large. Current models for predicting sediment production from roads require information on area‐specific sediment delivery, which is not often available. Here, we developed a model to quantify suspended sediment concentrations (SSC) generated by forest road surfaces under different conditions of use and density. This model is designed for a typical medium‐size coastal watershed of British Columbia or the American Pacific Northwest, and was applied to the Chilliwack River watershed as a case study. The results illustrate that intensive use of forest roads combined with high road density can increase the number of extreme sedimentation events over a predetermined threshold. A comparison of the effects of road density and the level of road use suggests that the level of road use is more important than the road density for the generation of fine sediment from road surfaces. However, the model omits the impact of roads on mass movements in a watershed, which represent a major source of sediment in steep watersheds, so the effect of road density is likely more substantial than the model predicts. The model is an attempt to overcome field data limitations by using an empirical relation between SSC and traffic variables, and presents a starting point for more intensive field studies that could be used to validate it. Copyright © 2013 John Wiley & Sons, Ltd.

12.
An Extreme Value Probability Density Method for Dynamic Reliability Analysis of Stochastic Structures
An extreme value probability density method is proposed for the dynamic reliability analysis of stochastic structures. Based on the basic idea of probability density evolution, a virtual stochastic process is constructed such that the extreme value of the dynamic response of the stochastic structure is a cross-section random variable of that virtual process. The probability density evolution method is then used to establish and solve the probability density evolution equation, giving the extreme value distribution of the stochastic structural dynamic response. Integrating this distribution over the safety domain yields the structural dynamic reliability; when the safety bound is itself a random variable, the method requires almost no additional effort. Comparison with stochastic simulation results shows that the proposed method has good accuracy and efficiency.
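Once the extreme value distribution of the response is in hand, reliability is its integral over the safety domain, and a random safety bound only adds an outer average, which is why it costs almost nothing extra. A sketch that approximates the extreme value density by a KDE over sampled extremes rather than by solving the density evolution equation:

    import numpy as np
    from scipy.stats import gaussian_kde

    def dynamic_reliability(extreme_samples, safe_bound):
        """Integrate the (approximated) extreme value density over the
        safety domain (-inf, safe_bound)."""
        f_ext = gaussian_kde(extreme_samples)
        return f_ext.integrate_box_1d(-np.inf, safe_bound)

    def reliability_random_bound(extreme_samples, bound_samples):
        """Random safety bound: average the conditional reliability
        over samples of the bound."""
        f_ext = gaussian_kde(extreme_samples)
        return np.mean([f_ext.integrate_box_1d(-np.inf, b)
                        for b in bound_samples])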

13.
We derive wavenumber domain expressions to calculate the gravity anomaly of a body with irregular bounding surfaces and an exponential density‐depth relationship. We apply the method to sedimentary basins, which commonly have this type of geometry and density distribution. The mathematical formulation also allows the exponential density‐depth relationship to be measured from an arbitrary irregular surface rather than the top surface. Using this arrangement, the gravity anomaly of exhumed sedimentary basins can be predicted if the amount of eroded section can be estimated. The corresponding inverse algorithms are also derived. Examples of the use of the forward algorithms, from the Galicia Interior Basin and the Central Irish Sea Basin, are used to illustrate these methods.

14.
Extreme value analysis of precipitation is of great importance for several types of engineering studies and policy decisions. For return level estimation of extreme 24-h precipitation, practitioners often use daily measurements (usually 08:00–08:00 local time) since high-frequency measurements are scarce. Annual maxima of daily series are smaller than or equal to continuous 24-h precipitation maxima, so the resulting return levels may be systematically underestimated. In this paper we use a rule, derived earlier, for converting the generalized extreme value (GEV) distribution of daily maxima to that of 24-h maxima. We develop an estimator for the conversion exponent by combining daily maxima and high-frequency sampled 24-h maxima in one joint log-likelihood. Once the conversion exponent has been estimated, GEV parameters of 24-h maxima can be obtained at sites where only daily data are available. The new methodology has been extended to spatial regression models.
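A hedged sketch of the joint log-likelihood idea: treat the 24-h sliding maxima as GEV-distributed with location and scale both multiplied by a conversion factor a (same shape), which is one simple form such a conversion rule can take, and fit both samples at once. The scaling assumption, the starting value 1.13 (a classical Hershfield-type factor) and all names are illustrative, not the paper's derived rule; scipy's shape parameter is c = -xi.

    import numpy as np
    from scipy.stats import genextreme
    from scipy.optimize import minimize

    def fit_joint(daily_max, sliding_max):
        """Joint ML fit of daily and 24-h sliding annual maxima under
        the assumed scaling Y = a * X in distribution."""
        def nll(theta):
            c, loc, scale, a = theta
            if scale <= 0 or a <= 0:
                return np.inf
            ll = genextreme.logpdf(daily_max, c, loc, scale).sum()
            ll += genextreme.logpdf(sliding_max, c, a*loc, a*scale).sum()
            return -ll
        theta0 = [0.1, np.mean(daily_max), np.std(daily_max), 1.13]
        return minimize(nll, theta0, method="Nelder-Mead").x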

15.
The Halphen family of distributions is a flexible and complete system for fitting sets of independent and identically distributed observations. It has recently been shown that this family of distributions is a potential alternative to the generalized extreme value distributions for modelling extreme hydrological events. The existence of jointly sufficient statistics for parameter estimation leads to optimality of the method of maximum likelihood (ML). Nevertheless, the ML method requires numerical approximations that can reduce accuracy. Estimators by the method of moments (MM), by contrast, are explicit and fast to compute, but although the MM method gives good results, it is not optimal. In order to combine the advantages of ML (optimality) and MM (efficiency and fast computation), two new mixed methods are proposed in this paper: one direct and one iterative, denoted the direct mixed method (MMD) and the iterative mixed method (MMI), respectively. An overall comparison of the four estimation methods (MM, ML, MMD and MMI) was performed using Monte Carlo simulations for the three Halphen distributions. Generally, the MMI method can be recommended for the three Halphen distributions since it performs well in a majority of cases encountered in hydrology. The principal idea of the mixed methods MMD and MMI could be generalized to other distributions with complicated density functions.

16.
17.
This paper aims to demonstrate that the elastic stiffnesses and the anisotropic parameters of rocks can be accurately predicted from geophysical features such as porosity, density, compression stress, pore pressure and burial depth using relevant machine learning methods. It also suggests that the extreme gradient boosting method is the best method for this purpose: it is more accurate, far faster to train and more robust than the artificial neural network and support vector machine methods. Very high R-squared scores were obtained for the predicted elastic stiffnesses on a relevant dataset available in the literature. This dataset contains different types of rocks, and the values of the features span large ranges. An optimal set of parameters was obtained through an appropriate sensitivity analysis. The optimized model is very easy to implement in Python for practical applications.
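A minimal sketch of the prediction pipeline with the features named in the abstract; the placeholder data, hyperparameters and train/test split are assumptions, not the authors' optimized configuration.

    import numpy as np
    from xgboost import XGBRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score

    # placeholder data standing in for the literature dataset
    rng = np.random.default_rng(0)
    X = rng.random((500, 5))   # porosity, density, compression stress,
                               # pore pressure, burial depth
    y = rng.random(500)        # e.g. one elastic stiffness component

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              random_state=0)
    model = XGBRegressor(n_estimators=400, max_depth=4, learning_rate=0.05)
    model.fit(X_tr, y_tr)
    print("R^2:", r2_score(y_te, model.predict(X_te)))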

18.
There are two basic approaches for estimating flood quantiles: parametric and nonparametric. In this study, comparisons of parametric and nonparametric models for annual maximum flood data of the Goan gauging station in Korea were performed based on Monte Carlo simulation. In order to account for uncertainties arising from model and data errors, kernel density estimation for fitting the sampling distributions was chosen to determine safety factors (SFs) that depend on the probability model used to fit the real data. The relative biases of the Sheather and Jones plug-in (SJ) are the smallest in most cases among the seven bandwidth selectors applied. The relative root mean square errors (RRMSEs) of the Gumbel (GUM) are smaller than those of any other model regardless of the parent model considered. When the Weibull-2 is assumed as the parent model, the RRMSEs of kernel density estimation are relatively small, while they are much larger than those of the parametric methods for the other parent models. However, within the interpolation range the RRMSEs of kernel density estimation compare much more favourably with those of the parametric methods than in the extrapolation range. Among the applied distributions, the GUM model has the smallest SFs for all parent models, and the generalized extreme value model has the largest values for all parent models considered.

19.
The use of radiography is illustrated in four areas of interest to the hydrologist and geomorphologist. Soil samples analysed show that zones of varying hydraulic conductivity can be identified by X-ray analysis, as can the extent of the root zone. In addition, examples are presented which illustrate that radiography may have potential in the examination of overland flow erosion and deposition characteristics, and in the analysis of the behaviour of clay soils during and after an extremely dry period.

20.
The key problem in nonparametric frequency analysis of floods and droughts is the estimation of the bandwidth parameter, which defines the degree of smoothing. Most proposed bandwidth estimators have been based on the density function rather than the cumulative distribution function or the quantile function, which are the primary interest in frequency analysis. We propose a new bandwidth estimator derived from properties of quantile estimators, building on work by Altman and Léger (1995). The estimator is compared to the well-known method of least squares cross-validation (LSCV) using synthetic data generated from various parametric distributions used in hydrologic frequency analysis. Simulations suggest that our estimator performs at least as well as, and in many cases better than, the method of LSCV. In particular, the proposed plug-in estimator reduces bias in the estimation compared to LSCV. When applied to data sets containing observations with identical values, typically the result of rounding or truncation, LSCV and most other techniques generally underestimate the bandwidth. The proposed technique performs very well in such situations.
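For reference, the LSCV benchmark minimises the unbiased risk estimate CV(h) = integral of fhat^2 - (2/n) * sum_i fhat_{-i}(X_i), which has a closed form for the Gaussian kernel. A sketch of that benchmark (the proposed quantile-based plug-in itself is not reproduced here); the search bounds are assumptions.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def lscv_bandwidth(x):
        """LSCV bandwidth for a Gaussian-kernel density estimate; with
        many tied observations this criterion tends to pick too small
        a bandwidth, as the abstract notes."""
        x = np.asarray(x, float)
        n = len(x)
        d = x[:, None] - x[None, :]          # pairwise differences
        phi = lambda u, s: np.exp(-0.5*(u/s)**2) / (s*np.sqrt(2*np.pi))
        def cv(h):
            int_f2 = phi(d, h*np.sqrt(2)).sum() / n**2   # integral of fhat^2
            loo = (phi(d, h).sum() - n*phi(0.0, h)) / (n*(n - 1))
            return int_f2 - 2.0*loo
        res = minimize_scalar(cv, bounds=(1e-3*x.std(), 2*x.std()),
                              method="bounded")
        return res.x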
