Similar Documents
20 similar documents found.
1.
This paper introduces a portfolio approach for quantifying pollution risk from PM\(_{2.5}\) concentrations in cities. The model is based on a copula dependence structure. To estimate the model parameters, we analyze a limited data set of PM\(_{2.5}\) levels for Beijing, Tianjin, Chengde, Hengshui, and Xingtai. This analysis reveals that a t-copula dependence structure with generalized hyperbolic marginal distributions best fits the PM\(_{2.5}\) log-ratios of the cities. Furthermore, we show how to efficiently simulate the risk measures clean-air-at-risk and conditional clean-air-at-risk using importance sampling and stratified importance sampling. Our numerical results show that clean-air-at-risk at the 0.01 probability level reaches up to \(352\ \mu\text{g}\,\text{m}^{-3}\) (initial PM\(_{2.5}\) concentrations of the cities are assumed to be \(100\ \mu\text{g}\,\text{m}^{-3}\)) for the constructed sample portfolio, and that the proposed methods are much more efficient than naive simulation for computing exceedance probabilities and conditional excesses.
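As a rough illustration of the simulation side of this approach, the sketch below estimates an exceedance probability for a PM\(_{2.5}\) portfolio driven by a t-copula, once by naive Monte Carlo and once by importance sampling with an inflated-scale proposal. It is not the authors' code: the correlation matrix, degrees of freedom and threshold are assumptions, and the generalized hyperbolic marginals are replaced by lognormal ones for brevity.

```python
# Minimal sketch (not the paper's code): tail probability for a PM2.5 "portfolio"
# driven by a t-copula. Lognormal marginals stand in for the generalized
# hyperbolic ones; all parameter values are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
d, nu = 5, 4                                    # cities, t-copula dof (assumed)
R = 0.6 * np.ones((d, d)) + 0.4 * np.eye(d)     # correlation matrix (assumed)
marg = stats.lognorm(s=0.4, scale=100.0)        # stand-in marginal, mu g/m^3
threshold = 352.0                               # exceedance level of interest

def portfolio(z):
    """Map multivariate-t draws -> copula uniforms -> marginal PM2.5 -> mean."""
    u = stats.t.cdf(z, df=nu)                   # R has unit diagonal
    return marg.ppf(u).mean(axis=1)

target = stats.multivariate_t(loc=np.zeros(d), shape=R, df=nu)

# Naive Monte Carlo (may well return zero for such a rare event)
n = 100_000
p_mc = np.mean(portfolio(target.rvs(n, random_state=rng)) > threshold)

# Importance sampling: heavier proposal (inflated scale), weight by density ratio
c = 2.0
proposal = stats.multivariate_t(loc=np.zeros(d), shape=c**2 * R, df=nu)
z = proposal.rvs(n, random_state=rng)
w = target.pdf(z) / proposal.pdf(z)
p_is = np.mean((portfolio(z) > threshold) * w)

print(f"naive MC: {p_mc:.2e}, importance sampling: {p_is:.2e}")
```

Stratifying the proposal draws, as in the paper's stratified importance sampling, would reduce the variance further; the point of the plain comparison is that the naive estimate of such a small exceedance probability is often zero or extremely noisy.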

2.
Vegetation is known to influence the hydrological state variables of soil, namely suction \( \left( \psi \right) \) and volumetric water content (\( \theta_{w} \)). In addition, vegetation induces heterogeneity in the soil porous structure and consequently in the relative permeability (\( k_{r} \)) of water under unsaturated conditions. The indirect method of utilising the soil water characteristic curve (SWCC) is commonly adopted for the determination of \( k_{r} \). In such cases, it is essential to address the stochastic behaviour of the SWCC in order to conduct a robust analysis of the \( k_{r} \) of vegetative covers. The main aim of this study is to address the uncertainties associated with \( k_{r} \), using probabilistic analysis, for vegetative covers (i.e., grass and tree species) with bare cover as the control treatment. We propose two approaches to accomplish this objective. The univariate suction approach predicts the probability distribution functions of \( k_{r} \) on the basis of the identified best probability distribution of suction. The bivariate suction and water content approach deals with the bivariate modelling of water content and suction (the SWCC), in order to capture the randomness in the permeability curves due to the presence of vegetation. For this purpose, the dependence structure of \( \psi \) and \( \theta_{w} \) is established via copula theory, and the \( k_{r} \) curves are predicted with respect to varying levels of \( \psi - \theta_{w} \) correlation. The results show that the \( k_{r} \) of vegetative covers is substantially lower than that of bare covers. The reduction in \( k_{r} \) with drying is greater under tree cover than under grass cover, since tree roots induce higher levels of suction. Moreover, the air entry value of the soil depends on the magnitude of the \( \psi - \theta_{w} \) correlation, which in turn is influenced by the type of vegetation in the soil. \( k_{r} \) is found to be highly uncertain in the desaturation zone of the relative permeability curve. The stochastic behaviour of \( k_{r} \) is found to be most significant in tree covers. Finally, a simplified case study is presented in order to demonstrate the impact of the uncertainty in \( k_{r} \) on the stability of vegetated slopes. With an increment in the parameter \( \alpha \), the factor of safety (FS) is found to decrease; the trend of FS with the parameter \( n \) is the reverse. Overall, FS is found to vary by around 4–5% for both bare and vegetated slopes.
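To make the idea of propagating SWCC uncertainty into \(k_r\) concrete, the sketch below samples van Genuchten parameters and evaluates the Mualem relative permeability for each draw. This is only an assumed stand-in for the paper's copula-based bivariate approach; the parameter distributions are illustrative.

```python
# Illustrative sketch: propagating SWCC-parameter uncertainty into the relative
# permeability k_r with the van Genuchten-Mualem model. All distributions below
# are assumptions, not the paper's fitted values.
import numpy as np

rng = np.random.default_rng(1)
psi = np.logspace(-1, 3, 200)            # suction [kPa]

def kr_vg(psi, alpha, n):
    """van Genuchten-Mualem relative permeability as a function of suction."""
    m = 1.0 - 1.0 / n
    Se = (1.0 + (alpha * psi) ** n) ** (-m)          # effective saturation
    return np.sqrt(Se) * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2

# Lognormal parameter uncertainty, e.g. for a vegetated cover (assumed values)
alpha = rng.lognormal(mean=np.log(0.1), sigma=0.3, size=2000)   # [1/kPa]
n     = rng.lognormal(mean=np.log(1.6), sigma=0.1, size=2000)

curves = np.array([kr_vg(psi, a, b) for a, b in zip(alpha, n)])
lo, med, hi = np.percentile(curves, [5, 50, 95], axis=0)
print("k_r 90% band at psi = 10 kPa:", lo[psi.searchsorted(10)], hi[psi.searchsorted(10)])
```

Plotting the 5th and 95th percentile curves against \(\psi\) shows where the parameter uncertainty concentrates along the drying path.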

3.
Temperature data from SABER/TIMED and Empirical Orthogonal Function (EOF) analysis are used to examine possible modulation of the temperature migrating diurnal tide (DW1) by the latitudinal gradient of the zonal mean zonal wind (denoted \(\bar{u}_y\)). The results show that \(\bar{u}_y\) increases with altitude and displays clear seasonal and interannual variability. In the upper mesosphere and lower thermosphere (MLT), at latitudes between 20°N and 20°S, when \(\bar{u}_y\) strengthens (weakens) at equinoxes (solstices), the DW1 amplitude increases (decreases) simultaneously. A stronger maximum during the March–April equinox occurs in both \(\bar{u}_y\) and the DW1 amplitude. In addition, a quasi-biennial oscillation of DW1 is found to be synchronous with \(\bar{u}_y\). These resembling spatio-temporal features suggest that \(\bar{u}_y\) in the upper tropical MLT probably plays an important role in modulating the semiannual (SAO), annual, and quasi-biennial oscillations of DW1 at the same latitudes and altitudes. Moreover, \(\bar{u}_y\) in the mesosphere possibly affects the propagation of DW1 and produces an SAO of DW1 in the lower thermosphere. Thus, the SAO of DW1 in the upper MLT may be a combined effect of \(\bar{u}_y\) in both the mesosphere and the upper MLT, which modelling studies should determine in the future.

4.
Diurnal S\(_1\) tidal oscillations in the coupled atmosphere–ocean system induce small perturbations of Earth's prograde annual nutation, but matching geophysical model estimates of this Sun-synchronous rotation signal with the observed effect in geodetic Very Long Baseline Interferometry (VLBI) data has thus far been elusive. The present study assesses the problem from a geophysical model perspective, using four modern-day atmospheric assimilation systems and a consistently forced barotropic ocean model that dissipates its energy excess in the global abyssal ocean through a parameterized tidal conversion scheme. The use of contemporary meteorological data does not, however, guarantee accurate nutation estimates per se; two of the probed datasets produce atmosphere–ocean-driven S\(_1\) terms that deviate by more than 30 \(\upmu \)as (microarcseconds) from the VLBI-observed harmonic of \(-16.2+i113.4\) \(\upmu \)as. Partial deficiencies of these models in the diurnal band are also borne out by a validation of the air pressure tide against barometric in situ estimates, as well as by comparisons of simulated sea surface elevations with a global network of S\(_1\) tide gauge determinations. Credence is lent to the global S\(_1\) tide derived from the Modern-Era Retrospective Analysis for Research and Applications (MERRA) and the operational model of the European Centre for Medium-Range Weather Forecasts (ECMWF). When averaged over the period 2004 to 2013, their nutation contributions are estimated to be \(-8.0+i106.0\) \(\upmu \)as (MERRA) and \(-9.4+i121.8\) \(\upmu \)as (ECMWF operational), thus being virtually equivalent to the VLBI estimate. This remarkably close agreement will likely aid forthcoming nutation theories in their unambiguous a priori account of Earth's prograde annual celestial motion.

5.
Conditional risk based on multivariate hazard scenarios
We present a novel methodology to compute conditional risk measures when the conditioning event depends on a number of random variables. Specifically, given a random vector \((\mathbf {X},Y)\), we consider risk measures that express the risk of Y given that \(\mathbf {X}\) assumes values in an extreme multidimensional region. In particular, the considered risky regions are related to the AND, OR, Kendall and Survival Kendall hazard scenarios that are commonly used in the environmental literature. Several closed-form expressions are provided (especially for the AND and OR scenarios). An application to spatial risk analysis involving real data is discussed.
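The following minimal sketch shows how such a conditional risk measure can be approximated by simulation for the AND scenario: a quantile of Y is computed given that both conditioning variables exceed high thresholds. A Gaussian copula with standard normal marginals and an assumed correlation matrix stand in for the real model; the paper's closed formulas are not reproduced here.

```python
# Minimal sketch (assumptions throughout): conditional quantile of Y given that
# the conditioning vector falls in an "AND" hazard region X1 > q, X2 > q.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
R = np.array([[1.0, 0.5, 0.4],
              [0.5, 1.0, 0.3],
              [0.4, 0.3, 1.0]])        # correlation of (X1, X2, Y), assumed
z = stats.multivariate_normal(cov=R).rvs(500_000, random_state=rng)
x1, x2, y = z.T

q = stats.norm.ppf(0.95)               # marginal 95% thresholds define the AND region
in_and = (x1 > q) & (x2 > q)

alpha = 0.95
risk_y = np.quantile(y[in_and], alpha) # risk of Y conditional on the hazard scenario
print(f"P(AND region) = {in_and.mean():.4f},  {alpha:.0%}-quantile of Y | AND = {risk_y:.3f}")
```

Replacing `&` with `|` in the indicator gives the OR scenario; the Kendall and Survival Kendall regions additionally require the copula's Kendall distribution function.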

6.
During the last 15 years, increasing attention has been paid to deriving analytic formulae for the gravitational potential and field of polyhedral mass bodies with complicated polynomial density contrasts, because such formulae can approximate the true mass density variations of the earth (e.g., sedimentary basins and bedrock topography) better than methods that use finer volume discretization with constant density contrasts. In this study, we derive analytic formulae for gravity anomalies of arbitrary polyhedral bodies with complicated polynomial density contrasts in 3D space. The anomalous mass density is allowed to vary in both horizontal and vertical directions in a polynomial form \(\lambda =ax^m+by^n+cz^t\), where \(m\), \(n\), \(t\) are nonnegative integers and \(a\), \(b\), \(c\) are density coefficients. First, the singular volume integrals of the gravity anomalies are transformed to regular or weakly singular surface integrals over each polygon of the polyhedral body. Then, in terms of the derived singularity-free analytic formulae of these surface integrals, singularity-free analytic formulae for gravity anomalies of arbitrary polyhedral bodies with horizontal and vertical polynomial density contrasts are obtained. For an arbitrary polyhedron, we successfully derive analytic formulae for the gravity potential and the gravity field in the case of \(m\le 1\), \(n\le 1\), \(t\le 1\), and an analytic formula for the gravity potential in the case of \(m=n=t=2\). For a rectangular prism, we derive an analytic formula of the gravity potential for \(m\le 3\), \(n\le 3\) and \(t\le 3\), and closed forms of the gravity field are presented for \(m\le 1\), \(n\le 1\) and \(t\le 4\). Besides generalizing previously published closed-form solutions for constant and linear mass density contrasts to higher polynomial order, to the best of our knowledge this is the first time that closed-form solutions are presented for the gravitational potential of a general polyhedral body with quadratic density contrast in all spatial directions, and for the vertical gravitational field of a prismatic body with quartic density contrast along the vertical direction. To verify the new analytic formulae, a prismatic model with depth-dependent polynomial density contrast and a polyhedral body in the form of a triangular prism with constant contrast are tested. Excellent agreement between results of published analytic formulae and our results is achieved. The new analytic formulae are useful tools for computing gravity anomalies of complicated mass density contrasts in the earth when the observation sites are close to the surface or within the mass bodies.
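Analytic formulae of this kind are typically validated against brute-force numerical integration. The sketch below performs such a check for the vertical gravity of a rectangular prism with a polynomial density contrast by direct summation of Newton's integral on a grid; the geometry, coefficients and observation point are illustrative assumptions, and this is not the paper's analytic solution.

```python
# Brute-force numerical check (not the analytic formulae of the paper): vertical
# gravity g_z of a rectangular prism with rho(x, y, z) = a*x**m + b*y**n + c*z**t,
# evaluated at an observation point above the prism by summing Newton's integral.
import numpy as np

G = 6.674e-11                                     # gravitational constant [m^3 kg^-1 s^-2]

def centers(lo, hi, ncells):
    """Cell-centre coordinates and cell width for a uniform 1-D grid."""
    step = (hi - lo) / ncells
    return lo + (np.arange(ncells) + 0.5) * step, step

# Prism bounds [m] (z positive downward) -- illustrative geometry
x, dx = centers(-100.0, 100.0, 60)
y, dy = centers(-100.0, 100.0, 60)
z, dz = centers(10.0, 110.0, 60)
a, b, c, m, n, t = 0.0, 0.0, 10.0, 0, 0, 1        # rho = 10*z [kg/m^3]: linear in depth

X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
rho = a * X**m + b * Y**n + c * Z**t

x0, y0, z0 = 0.0, 0.0, 0.0                        # observation point on the surface
r3 = ((X - x0)**2 + (Y - y0)**2 + (Z - z0)**2) ** 1.5
gz = G * np.sum(rho * (Z - z0) / r3) * dx * dy * dz
print(f"g_z ~ {gz:.3e} m/s^2")
```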

7.
High-biomass red tides occur frequently in some semi-enclosed bays of Hong Kong where ambient nutrients are not high enough to support such a high phytoplankton biomass. These high-biomass red tides release massive amounts of inorganic nutrients into local waters during their collapse. We hypothesized that the inorganic nutrients released from the collapse of red tides would fuel the growth of other phytoplankton species and thereby influence phytoplankton species composition. We tested this hypothesis using a red tide event caused by Mesodinium rubrum (M. rubrum) in a semi-enclosed bay, Port Shelter. The red tide patch had a cell density as high as \(5.0\times 10^{5}\) cells L\(^{-1}\) and high chlorophyll a (63.71 μg L\(^{-1}\)). Ambient inorganic nutrients (nitrate: \(\rm{NO}_3^-\), ammonium: \(\rm{NH}_4^+\), phosphate: \(\rm{PO}_4^{3-}\), silicate: \(\rm{SiO}_4^{3-}\)) were low both in the red tide patch and in the non-red-tide patch (clear waters outside the red tide patch). Nutrient addition experiments were conducted by adding all the inorganic nutrients to water samples from the two patches, followed by incubation for 9 days. The results showed that the addition of inorganic nutrients did not sustain the high M. rubrum cell density, which collapsed after day 1, and did not drive M. rubrum in the non-red-tide patch sample to the same high cell density as in the red tide patch sample. This confirmed that nutrients were neither the driving factor for the formation of this red tide event nor for its collapse. The death of M. rubrum after day 1 released high concentrations of \(\rm{NO}_3^-\), \(\rm{PO}_4^{3-}\), \(\rm{SiO}_4^{3-}\), \(\rm{NH}_4^+\), and urea. Bacterial abundance and heterotrophic activity increased, peaking on day 3 or 4, and decreased as the cell density of M. rubrum declined. The released nutrients stimulated the growth of diatoms, such as Chaetoceros affinis var. circinalis, Thalassiothrix frauenfeldii, and Nitzschia sp., and of other species, particularly in the \(\rm{SiO}_4^{3-}\) addition treatments. These results demonstrate that the initiation of M. rubrum red tides in the bay was not directly driven by nutrients. However, the massive inorganic nutrients released from the collapse of the red tide could induce a second bloom in low-ambient-nutrient water, influencing phytoplankton species composition.

8.
Vulnerability maps are designed to show areas of greatest potential for groundwater contamination on the basis of hydrogeological conditions and human impacts. The objectives of this research are (1) to assess groundwater vulnerability using the DRASTIC method and (2) to improve the DRASTIC method for the evaluation of groundwater contamination risk using AI methods, such as the ANN, SFL, MFL, NF and SCMAI approaches. This optimization is illustrated using a case study. For this purpose, the DRASTIC model is developed using seven parameters. To validate the contamination risk assessment, a total of 243 groundwater samples were collected from different aquifer types of the study area and analyzed for \( {\text{NO}}_{ 3}^{ - } \) concentration. To develop the AI and CMAI models, the 243 data points are divided into two sets, training and validation, based on a cross-validation approach. The vulnerability indices calculated with the DRASTIC method are corrected by the \( {\text{NO}}_{3}^{ - } \) data used in the training step. The input data of the AI models comprise the seven parameters of the DRASTIC method, while the output is the corrected vulnerability index obtained using the \( {\text{NO}}_{3}^{ - } \) concentration data from the study area, which is called the groundwater contamination risk. In other words, the target value (known output) is estimated from the DRASTIC vulnerability and the \( {\text{NO}}_{3}^{ - } \) concentration values. After model training, the AI models are verified using the second \( {\text{NO}}_{3}^{ - } \) concentration dataset. The results reveal that NF and SFL produce acceptable performance, whereas ANN and MFL predict poorly. A supervised committee machine artificial intelligence (SCMAI) model, which combines the results of the individual AI models using a supervised artificial neural network, was developed for better prediction of vulnerability. The performance of SCMAI was also compared to those of the simple-averaging and weighted-averaging committee machine intelligence (CMI) methods. As a result, the SCMAI model produced reliable estimates of groundwater contamination risk.
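The two-step idea described above—compute a DRASTIC index, then train a model that corrects it toward a nitrate-based risk—can be sketched as follows. The standard DRASTIC weights are assumed, a scikit-learn MLP stands in for the ANN committee member, and the data are synthetic placeholders rather than the study's 243 samples.

```python
# Sketch only: (1) DRASTIC vulnerability index as a weighted sum of ratings,
# (2) an ANN mapping the seven DRASTIC parameters to a nitrate-corrected risk.
# Weights are the usual DRASTIC weights (assumed); data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

weights = np.array([5, 4, 3, 2, 1, 5, 3])        # D, R, A, S, T, I, C

rng = np.random.default_rng(3)
ratings = rng.integers(1, 11, size=(243, 7))     # ratings 1-10 for each parameter
drastic_index = ratings @ weights                # classical DRASTIC vulnerability

# Hypothetical nitrate-corrected target (in practice derived from measured NO3-)
no3 = rng.gamma(shape=2.0, scale=10.0, size=243)
risk = drastic_index * (no3 / no3.max())

X_tr, X_va, y_tr, y_va = train_test_split(ratings, risk, test_size=0.3, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0).fit(X_tr, y_tr)
print("validation R^2:", round(ann.score(X_va, y_va), 3))
```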

9.
In this study, the 11 August 2012 \(M_w\) 6.4 Ahar earthquake is investigated using ground motion simulation based on the stochastic finite-fault model. The earthquake occurred in northwestern Iran, causing extensive damage in the city of Ahar and surrounding areas. A network consisting of 58 acceleration stations recorded the earthquake within 8–217 km of the epicenter. Strong ground motion records from six significant, well-recorded stations close to the epicenter have been simulated. These stations are installed in areas that experienced significant structural damage and loss of life during the earthquake. The simulation is carried out using the dynamic corner frequency model of rupture propagation implemented in the extended fault simulation program (EXSIM). For this purpose, the shear-wave propagation features, including the quality factor \( Q_s \), the kappa value \( k_0 \), and the soil amplification coefficients at each site, are required. The kappa values are obtained from the slope of the smoothed Fourier amplitude spectra of acceleration at higher frequencies. The determined kappa values for the vertical and horizontal components are 0.02 and 0.05 s, respectively. Furthermore, an anelastic attenuation parameter is derived from the energy decay of the seismic waves using the continuous wavelet transform (CWT) for each station. The average frequency-dependent relation estimated for the region is \( Q=\left(122\pm 38\right){f}^{\left(1.40\pm 0.16\right)} \). Moreover, the horizontal-to-vertical spectral ratio \( H/V \) is applied to estimate the site effects at the stations. Spectral analysis of the data indicates that the best match between the observed and simulated spectra occurs for an average stress drop of 70 bars. Finally, the simulated and observed results are compared in terms of pseudo-acceleration spectra and peak ground motions. The comparison shows good agreement between the observed and simulated waveforms at frequencies of engineering interest.
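For reference, the kappa value mentioned above is usually obtained from the high-frequency decay of the Fourier amplitude spectrum (the Anderson–Hough approach): \(\ln A(f)\) is linear in \(f\) with slope \(-\pi\kappa\). The sketch below applies that fit to a synthetic spectrum; the fitting band is an assumption.

```python
# Sketch of the usual kappa estimate: fit a straight line to ln|A(f)| over a
# high-frequency band; kappa = -slope / pi. Synthetic spectrum, assumed 10-25 Hz band.
import numpy as np

f = np.linspace(0.5, 40.0, 400)                     # frequency axis [Hz]
kappa_true, A0 = 0.04, 1.0e-3
amp = A0 * np.exp(-np.pi * kappa_true * f)          # synthetic acceleration FAS
amp *= np.exp(0.05 * np.random.default_rng(4).standard_normal(f.size))  # noise

band = (f >= 10.0) & (f <= 25.0)                    # high-frequency fitting band
slope, intercept = np.polyfit(f[band], np.log(amp[band]), 1)
kappa = -slope / np.pi
print(f"estimated kappa = {kappa:.3f} s (true {kappa_true} s)")
```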

10.
This paper considers the problem of analyzing the temporal and spatial structure of particulate matter (PM) data, with emphasis on high \(\text {PM}_{10}\) levels. The proposed method is based on a combination of the generalized extreme value (GEV) distribution and a multiscale concept from the scaling-property theory used in hydrology. In this study, we use hourly \(\text {PM}_{10}\) data observed for 5 years at 25 stations located in the Seoul metropolitan area, Korea. For our analysis, we calculate monthly maximum values for various duration times and area coverages at each station, and show that their distribution follows a GEV distribution. In addition, we identify that the GEV parameters of the \(\text {PM}_{10}\) maxima obey a new scaling property, termed the 'piecewise linear scaling property', for certain duration times. By using this property, we construct a 12-month return level map of hourly \(\text {PM}_{10}\) data for an arbitrary d-hour duration. Furthermore, we extend our study to understand the spatio-temporal multiscale structure of \(\text {PM}_{10}\) extremes over different temporal and spatial scales.
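The core GEV step can be sketched in a few lines: fit a GEV to block maxima and read the T-block return level off the \(1-1/T\) quantile, with \(T=12\) for a 12-month return level. The monthly maxima below are synthetic, so the numbers are purely illustrative.

```python
# Illustrative GEV return-level computation on synthetic monthly PM10 maxima.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(5)
monthly_max = genextreme.rvs(c=-0.1, loc=80.0, scale=20.0, size=60, random_state=rng)
# (c is scipy's shape parameter; c < 0 corresponds to a heavy upper tail)

shape, loc, scale = genextreme.fit(monthly_max)
T = 12
return_level = genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale)
print(f"12-month return level ~ {return_level:.1f} ug/m^3")
```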

11.
Modelling seismic attenuation is one of the most critical points in the hazard assessment process. In this article we consider the spatial distribution of the effects caused by an earthquake, as expressed by the values of the macroseismic intensity recorded at various locations surrounding the epicentre. Considering the ordinal nature of the intensity, a way to show its decay with distance is to draw curves—isoseismal lines—on maps, which bound points of intensity not smaller than a fixed value. These lines usually take the form of closed and nested curves around the epicentre, with highly different shapes because of the effects of ground conditions and of complexities in rupture propagation. Forecasting the seismic attenuation of future earthquakes requires stochastic modelling of the decay on the basis of a common spatial pattern. The aim of this study is to present a statistical methodology that identifies a general shape, if it exists, for the isoseismal lines of a set of macroseismic fields. Data depth is a general nonparametric method for the analysis of probability distributions and datasets. It arose as a statistical method to order points of a multivariate space, e.g., Euclidean space \({\mathbb {R}}^{p}\), \(p \ge 1\), according to their centrality with respect to a distribution or a given data cloud. Recently, this method has been extended to the ordering of functions and trajectories. In our case, for a fixed intensity decay \(\varDelta I\), we build a set of convex hulls that enclose the sites of felt intensity \(I_s \ge I_0 -\varDelta I\), one for each macroseismic field of a set of earthquakes that are considered similar from the attenuation point of view. By applying data depth functions to this functional dataset, it is possible to identify the most central curve, i.e., the attenuation pattern, and to consider other properties such as variability, outlyingness, and possible clustering of such curves. Results are shown for earthquakes that occurred in the Central Po Plain in May 2012 and on the eastern flank of Mt. Etna since 1865.

12.
Nowadays, most site classification schemes are based on the predominant period of the site, as determined from the average horizontal-to-vertical spectral ratio of seismic motion or microtremor. However, difficulty arises in identifying the predominant period, in particular when the observed average response spectral ratio does not present a clear peak but rather a broadband amplification or multiple peaks. In this work, based on the Eurocode-8 (2004) site classification, and assuming bounded random fields for the shear- and compression-wave velocities, the damping coefficient, the natural period and the depth of the soil profile, we propose a new site-classification approach based on "target" simulated average \( H/V \) spectral ratios defined for each soil class. Taking advantage of the relationship of Kawase et al. (Bull Seismol Soc Am 101:2001–2014, 2011), which links the \( H/V \) spectral ratio to the ratio of the horizontal (\( HTF \)) to the vertical (\( VTF \)) transfer functions, statistics of the \( H/V \) spectral ratio are computed for the 4 soil classes via deterministic visco-elastic seismic analysis based on wave propagation theory. The obtained results show that \( H/V \) and \( HTF \) have amplitudes and shapes that differ markedly among the four soil classes, while exhibiting fundamental peaks in remarkably similar period ranges. Moreover, the "target" simulated average \( H/V \) spectral ratios for the 4 soil classes are in good agreement with the experimental ones obtained by Zhao et al. (Bull Seismol Soc Am 96:914–925, 2006) from the abundant and reliable Japanese strong-motion database Kik-net, by Ghasemi et al. (Soil Dyn Earthq Eng 29:121–132, 2009) from the Iranian strong motion data, and by Di Alessandro et al. (Bull Seismol Soc Am 106:2, 2011, https://doi.org/10.1785/0120110084) from the Italian strong motion data. In addition to the 4 EC-8 standard soil classes (A, B, C and D), the superposition of the 4 target \( H/V \) ratios reveals 3 new boundary site classes, AB, BC and CD, for overlapping \( V_{s,30} \) ranges when the predominant peak is not clearly consistent with any of the 4 proposed classes. Finally, we propose a site classification index based on the ratio between the cross-correlation and the mean quadratic error between the in situ \( H/V \) spectral ratio and the "target" one. In order to test the reliability of the proposed approach, data from 139 sites were used, 132 collected from the Kik-net network database of Japan and 7 from Algeria. The site classification success rates per site class are around 93, 82, 89 and 100% for rock, hard soil, medium soil and soft soil, respectively. Zhao et al. (2006) found an average success rate for the 4 classes of soil close to 60%, similar to what we find in the present study (63%) without considering the new soil classes, but much smaller than when they are considered (86%). In the absence of \( V_{s,30} \) data, the proposed approach can be an alternative for site classification.
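A classification index of the kind described above—the ratio of the cross-correlation to the mean squared error between an observed \(H/V\) curve and each class target—can be sketched as follows. The curve shapes and normalisation are assumptions; the paper's target curves come from its stochastic visco-elastic simulations, not from the toy generator used here.

```python
# Sketch of an H/V-based classification index: corr / MSE against each class
# "target" curve, the class with the largest index winning. Curves are synthetic.
import numpy as np

periods = np.logspace(-1, 1, 100)                       # 0.1-10 s

def hv_peak(T0, amp):                                   # toy H/V shape (assumption)
    return 1.0 + amp * np.exp(-((np.log(periods / T0)) ** 2) / 0.08)

targets = {"A": hv_peak(0.15, 1.5), "B": hv_peak(0.3, 2.0),
           "C": hv_peak(0.6, 2.5), "D": hv_peak(1.2, 3.0)}

observed = hv_peak(0.55, 2.3) + 0.1 * np.random.default_rng(6).standard_normal(periods.size)

def index(obs, tgt):
    corr = np.corrcoef(obs, tgt)[0, 1]
    mse = np.mean((obs - tgt) ** 2)
    return corr / mse

scores = {cls: index(observed, tgt) for cls, tgt in targets.items()}
print("assigned class:", max(scores, key=scores.get), scores)
```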

13.
In this work, we map the absorption properties of the French crust by analyzing the decay properties of coda waves. Estimation of the coda quality factor \(Q_{c}\) in five non-overlapping frequency bands between 1 and 32 Hz is performed for more than 12,000 high-quality seismograms from about 1700 weak-to-moderate crustal earthquakes recorded between 1995 and 2013. Based on a sensitivity analysis, \(Q_{c}\) is subsequently approximated as an integral of the intrinsic shear wave quality factor \(Q_{i}\) along the ray connecting the source to the station. After discretization of the medium on a 2-D Cartesian grid, this yields a linear inverse problem for the spatial distribution of \(Q_{i}\). The solution is approximated by redistributing \(Q_{c}\) in the pixels connecting the source to the station and averaging over all paths. This simple procedure yields frequency-dependent maps of apparent absorption that show lateral variations of \(50\%\) at length scales ranging from 50 km to 150 km, in all the frequency bands analyzed. At low frequency, the small-scale geological features of the crust are clearly delineated: the Meso-Cenozoic basins (Aquitaine, Brabant, Southeast) appear as strong absorption regions, while the crystalline massifs (Armorican, Central Massif, Alps) appear as low absorption zones. At high frequency, the correlation between the surface geological features and the absorption map disappears, except for the deepest Meso-Cenozoic basins, which exhibit a strong absorption signature. Based on the tomographic results, we explore the implications of lateral variations of absorption for the analysis of both instrumental and historical seismicity. The main conclusions are as follows: (1) the current local magnitude \(M_{L}\) can be over- (resp. under-)estimated when absorption is weaker (resp. stronger) than the nominal value assumed in the amplitude–distance relation; (2) both the forward prediction of the earthquake macroseismic intensity field and the estimation of historical earthquake seismological parameters using macroseismic intensity data are significantly improved by taking into account a realistic 2-D distribution of absorption. In the future, both \(M_{L}\) estimation and macroseismic intensity attenuation models should benefit from high-resolution models of frequency-dependent absorption such as the one produced in this study.
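For context, a single \(Q_c\) measurement of the type used here typically follows the single-backscattering model, in which the coda envelope decays as \(A(t)\propto t^{-1}\exp(-\pi f t/Q_c)\), so a straight-line fit of \(\ln(A\,t)\) against lapse time gives \(Q_c\) from the slope. The sketch below applies that fit to a synthetic envelope; the window and band choices are assumptions.

```python
# Sketch of a single-station Qc measurement from a synthetic coda envelope.
import numpy as np

f = 4.0                                   # centre frequency of the band [Hz]
Qc_true = 600.0
t = np.linspace(30.0, 90.0, 300)          # coda lapse-time window [s] (assumed)
rng = np.random.default_rng(7)
env = 1e3 * t**-1 * np.exp(-np.pi * f * t / Qc_true)
env *= np.exp(0.05 * rng.standard_normal(t.size))      # multiplicative noise

slope, _ = np.polyfit(t, np.log(env * t), 1)           # ln(A*t) = const - (pi*f/Qc)*t
Qc = -np.pi * f / slope
print(f"Qc({f} Hz) = {Qc:.0f} (true {Qc_true:.0f})")
```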

14.
Random fields based on energy functionals with local interactions possess flexible covariance functions, lead to computationally efficient algorithms for spatial data processing, and have important applications in Bayesian field theory. In this paper we address the calculation of covariance functions for a family of isotropic local-interaction random fields in two dimensions. We derive explicit expressions for non-differentiable Spartan covariance functions in \({\mathbb{R}}^2\) that are based on the modified Bessel function of the second kind. We also derive a family of infinitely differentiable Bessel-Lommel covariance functions that exhibit a hole effect and are valid in \({\mathbb{R}}^{d}\), where \(d > 2\). Finally, we define a generalized spectrum of correlation scales that, in contrast to the smoothness microscale, can be applied to both differentiable and non-differentiable random fields.

15.
A semiempirical mathematical model of iron and manganese migration from bottom sediments into the water mass of water bodies has been proposed based on some basic regularities in the geochemistry of those elements. The entry of dissolved forms of iron and manganese under aeration conditions is assumed negligible. When the dissolved-oxygen concentration is <0.5 mg/L, the elements start releasing from bottom sediments, and their release rate reaches its maximum under anoxic conditions. The fluxes of dissolved iron and manganese (Me) from bottom sediments into the water mass (\(J_{Me}\)) are governed by the gradients of their concentrations in the diffusion water sublayer adjacent to the sediment surface, which has an average thickness of h = 0.025 cm: \(J_{Me} = - D_{Me}\frac{C_{Me(ss)} - C_{Me(w)}}{h}\), where \(D_{Me} \approx 1\times 10^{-9}\) m\(^{2}\)/s is the molecular diffusion coefficient of component Me in solution, and \(C_{Me(ss)}\) and \(C_{Me(w)} \approx 0\) are the Me concentrations on the sediment surface (the bottom boundary of the diffusion water sublayer) and in the water mass (the upper boundary of the diffusion water sublayer), respectively. The value of \(C_{Me(ss)}\) depends on the water saturation with dissolved oxygen (\(\eta_{O_2}\)) in accordance with the empirical relationship \(C_{Me(ss)} = \frac{C_{Me(ss)}^{\max}}{1 + k\eta_{O_2}}\), where k is a constant factor equal to 300 for iron and 100 for manganese, and \(C_{Me(ss)}^{\max}\) is the maximal concentration of Me on the bottom boundary of the diffusion water sublayer, with \(C_{Fe(ss)}^{\max} \approx 200\) μM (11 mg/L) and \(C_{Mn(ss)}^{\max} \approx 100\) μM (5.5 mg/L).
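The two relations quoted above translate directly into code. The sketch below evaluates the benthic flux for iron and manganese as a function of oxygen saturation, using the constants given in the abstract; the oxygen-saturation values are illustrative.

```python
# Direct implementation of the two relations above; D_Me, h, k and C_max are
# taken from the abstract, the eta_O2 grid is illustrative.
import numpy as np

D_ME = 1.0e-9          # molecular diffusion coefficient [m^2/s]
H = 0.025e-2           # diffusion sublayer thickness: 0.025 cm -> m
PARAMS = {             # k [-],  C_max [mol/m^3]  (200 uM = 0.200 mol/m^3)
    "Fe": (300.0, 0.200),
    "Mn": (100.0, 0.100),
}

def benthic_flux(metal, eta_o2, c_water=0.0):
    """J_Me = -D_Me (C_ss - C_w)/h, as given above; |J| is the release rate."""
    k, c_max = PARAMS[metal]
    c_ss = c_max / (1.0 + k * eta_o2)          # concentration at sediment surface
    return -D_ME * (c_ss - c_water) / H

for eta in (0.0, 0.05, 0.5, 1.0):              # anoxic ... fully saturated
    print(f"eta_O2={eta:4.2f}  |J_Fe|={abs(benthic_flux('Fe', eta)):.2e}  "
          f"|J_Mn|={abs(benthic_flux('Mn', eta)):.2e}  mol m^-2 s^-1")
```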

16.
In a previous publication, the seismicity of Japan from 1 January 1984 to 11 March 2011 (the time of the \(M9\) Tohoku earthquake) was analyzed in a time domain called natural time \(\chi\). The order parameter of seismicity in this time domain is the variance of \(\chi\) weighted by the normalized energy of each earthquake. It was found that the fluctuations of the order parameter exhibit 15 distinct minima—deeper than a certain threshold—1 to around 3 months before the occurrence of large earthquakes in Japan during 1984–2011. Six (out of 15) of these minima were followed by all the shallow earthquakes of magnitude 7.6 or larger during the whole period studied. Here, we show that the probability of achieving the latter result by chance is of the order of \(10^{-5}\). This conclusion is strengthened by also employing the receiver operating characteristic (ROC) technique.
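For readers unfamiliar with natural time analysis, the order parameter referred to above is commonly written as \(\kappa_1 = \langle\chi^2\rangle - \langle\chi\rangle^2\), with \(\chi_k = k/N\) and weights \(p_k\) equal to the normalized seismic energies. The sketch below computes it for a synthetic magnitude window; the energy scaling \(E\propto 10^{1.5M}\) is the usual assumption.

```python
# Minimal sketch of the natural-time order parameter kappa_1 for a window of N events.
import numpy as np

def kappa1(magnitudes):
    m = np.asarray(magnitudes, dtype=float)
    N = m.size
    chi = np.arange(1, N + 1) / N                 # natural time of each event
    p = 10.0 ** (1.5 * m)
    p /= p.sum()                                  # normalized energies
    return np.sum(p * chi**2) - np.sum(p * chi) ** 2

rng = np.random.default_rng(8)
window = 3.5 + rng.exponential(scale=1.0 / np.log(10), size=300)   # G-R-like magnitudes
print(f"kappa_1 = {kappa1(window):.4f}")
```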

17.
Recent publications on the regression between earthquake magnitudes assume that both magnitudes are affected by error and that only the ratio of error variances is known. If X and Y represent observed magnitudes, and x and y represent the corresponding theoretical values, the problem is to find the a and b of the best-fit line \(y = a x + b\). This problem has a closed-form solution only for homoscedastic errors (their variances are all equal for each of the two variables). The published solution was derived using a method that cannot provide a sum of squares of residuals, so it is not possible to compare the goodness of fit for different pairs of magnitudes. Furthermore, that method does not provide expressions for the estimates of x and y. The least-squares method introduced here does not have these drawbacks. The two methods of solution result in the same equations for a and b. General properties of a that have been discussed in the literature but not proved, or proved only for particular cases, are derived here. A comparison of different expressions for the variances of a and b is provided. The paper also considers the statistical aspects of the ongoing debate regarding the prediction of y given X. Analysis of actual data from the literature shows that a new approach produces an average improvement of less than 0.1 magnitude units over the standard approach when applied to \(M_{w}\) vs. \(m_{b}\) and \(M_{w}\) vs. \(M_{S}\) regressions. This improvement is minor, within the typical error of \(M_{w}\). Moreover, a test subset of 100 predicted magnitudes shows that the new approach results in magnitudes closer to the theoretically true magnitudes for only 65% of them; for the remaining 35%, the standard approach produces closer values. Therefore, the new approach does not always give the most accurate magnitude estimates.
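A standard closed form for this errors-in-both-variables setting with a known error-variance ratio is the Deming (general orthogonal) regression slope. The sketch below implements that closed form and applies it to synthetic magnitude pairs; it is a generic illustration, not a reproduction of the paper's least-squares derivation.

```python
# Sketch of the closed-form errors-in-both-variables fit: only the ratio
# delta = var(err_Y)/var(err_X) is assumed known. Data are synthetic magnitudes.
import numpy as np

def deming_fit(X, Y, delta=1.0):
    """Return slope a and intercept b of y = a*x + b with errors in X and Y."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    sxx = np.var(X, ddof=1)
    syy = np.var(Y, ddof=1)
    sxy = np.cov(X, Y, ddof=1)[0, 1]
    a = (syy - delta * sxx + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy**2)) / (2 * sxy)
    b = Y.mean() - a * X.mean()
    return a, b

rng = np.random.default_rng(9)
x_true = rng.uniform(4.0, 7.0, 200)            # "theoretical" magnitudes
y_true = 1.1 * x_true - 0.5                    # assumed true relation
X = x_true + rng.normal(0, 0.15, x_true.size)  # observed m_b-like magnitudes
Y = y_true + rng.normal(0, 0.15, x_true.size)  # observed M_w-like magnitudes

print("slope, intercept:", deming_fit(X, Y, delta=1.0))
```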

18.
Rapid magnitude estimation relations for earthquake early warning systems in the Alborz region have been developed based on the first few seconds after the P-wave arrival. For this purpose, a total of 717 accelerograms recorded by the Building and Housing Research Center in the Alborz region, with a magnitude (Mw) range of 4.8–6.5 and covering the period between 1995 and 2013, were employed. The average ground motion period (\( \tau_{\text{c}} \)) and the peak displacement (\( P_{\text{d}} \)) in different time windows from the P-wave arrival were calculated, and their relation with magnitude was examined. Four earthquakes that were excluded from the analysis were used to validate the results, and the estimated magnitudes were found to be in good agreement with the observed ones. The results show that, using the proposed relations for the Alborz region, earthquake magnitude can be estimated with acceptable accuracy even from 1 s of data after the P-wave arrival.
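The two parameters can be computed directly from the early P-wave displacement window: \(\tau_c = 2\pi\sqrt{\int u^2\,dt / \int \dot u^2\,dt}\) and \(P_d = \max|u|\). The sketch below evaluates both on a synthetic trace; real processing would also include high-pass filtering, and the regression coefficients linking them to magnitude are calibrated regionally and are not reproduced here.

```python
# Sketch of the two early-warning parameters named above, from a displacement
# record u(t) in the first seconds after the P arrival. The trace is synthetic.
import numpy as np

dt = 0.01                                        # sampling interval [s]
t = np.arange(0.0, 3.0, dt)                      # 3-s P-wave window
u = 1e-4 * np.sin(2 * np.pi * 1.5 * t) * np.exp(-t)   # toy displacement [m]
v = np.gradient(u, dt)                           # velocity by differentiation

tau_c = 2 * np.pi * np.sqrt(np.sum(u**2) / np.sum(v**2))   # dt cancels in the ratio
P_d = np.max(np.abs(u))
print(f"tau_c = {tau_c:.2f} s,  P_d = {P_d:.2e} m")
# Magnitude would then follow from regional regressions such as
# M = c1*log10(P_d) + c2*log10(R) + c3 (coefficients not reproduced here).
```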

19.
This paper proposes methods to detect outliers in functional data sets; the task of identifying atypical curves is carried out using the recently proposed kernelized functional spatial depth (KFSD). KFSD is a local depth that can be used to order the curves of a sample from the most to the least central, and since outliers are usually among the least central curves, we present a probabilistic result that allows one to select a threshold value for KFSD such that curves with depth values lower than the threshold are flagged as outliers. Based on this result, we propose three new outlier detection procedures. The results of a simulation study show that our proposals generally outperform a battery of competitors. We apply our procedures to a real data set consisting of daily curves of emission levels of nitrogen oxides (NO\(_{x}\)), since it is of interest to identify abnormal NO\(_{x}\) levels in order to take the necessary environmental policy actions.

20.
Fragility curves for risk-targeted seismic design maps
Seismic design using maps based on “risk-targeting” would lead to an annual probability of attaining or exceeding a certain damage state that is uniform over an entire territory. These maps are based on convolving seismic hazard curves from a standard probabilistic analysis with the derivative of fragility curves expressing the chance for a code-designed structure to attain or exceed a certain damage state given a level of input motion, e.g. peak ground acceleration (PGA). There are few published fragility curves for structures respecting the Eurocodes (ECs, principally EC8 for seismic design) that can be used for the development of risk-targeted design maps for Europe. In this article a set of fragility curves for a regular three-storey reinforced-concrete building, designed using EC2 and EC8 for medium ductility and increasing levels of design acceleration \((a_\mathrm{g})\), is developed. These curves show that structures designed using EC8 against PGAs up to about 1 m/s\(^{2}\) have similar fragilities to those that respect only EC2 (although this conclusion may not hold for irregular buildings, other geometries or materials). From these curves, the probability of yielding for a structure subjected to a PGA equal to \(a_\mathrm{g}\) varies between 0.14 (\(a_\mathrm{g}=0.7\) m/s\(^{2}\)) and 0.85 (\(a_\mathrm{g}=3\) m/s\(^{2}\)), whereas the probability of collapse for a structure subjected to a PGA equal to \(a_\mathrm{g}\) varies between \(1.7\times 10^{-7}\) (\(a_\mathrm{g}=0.7\) m/s\(^{2}\)) and \(1.0\times 10^{-5}\) (\(a_\mathrm{g}=3\) m/s\(^{2}\)).
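The convolution underlying risk-targeted maps can be sketched numerically: the annual frequency of reaching a damage state is \(\int P(DS\mid a)\,|\mathrm{d}\lambda/\mathrm{d}a|\,\mathrm{d}a\), where \(\lambda(a)\) is the hazard curve and \(P(DS\mid a)\) the fragility. The hazard and fragility parameters below are illustrative, not the paper's EC8 values.

```python
# Sketch of the risk integral behind risk-targeted maps: convolve a hazard curve
# lambda(a) with the derivative of a fragility curve P(DS | a). Toy parameters only.
import numpy as np
from scipy.stats import lognorm

a = np.linspace(0.05, 20.0, 2000)                  # PGA grid [m/s^2]
hazard = 1e-4 * (a / 1.0) ** -2.5                  # toy annual exceedance curve lambda(PGA > a)
fragility = lognorm(s=0.5, scale=3.0).cdf(a)       # P(damage state | PGA = a)

# lambda_DS = integral of P(DS|a) * |d lambda / d a| da, approximated by a sum
risk = np.sum(fragility[:-1] * np.abs(np.diff(hazard)))
print(f"annual probability of exceeding the damage state ~ {risk:.2e}")
```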
