Similar documents
20 similar documents found (search time: 62 ms).
1.
High-biomass red tides occur frequently in some semi-enclosed bays of Hong Kong, where ambient nutrients are not high enough to support such high phytoplankton biomass. These high-biomass red tides release massive amounts of inorganic nutrients into local waters during their collapse. We hypothesized that the inorganic nutrients released by the collapse of a red tide would fuel the growth of other phytoplankton species and could thereby influence phytoplankton species composition. We tested this hypothesis using a red tide event caused by Mesodinium rubrum (M. rubrum) in a semi-enclosed bay, Port Shelter. The red tide patch had a cell density as high as \(5.0\times 10^{5}\) cells L\(^{-1}\) and high chlorophyll a (63.71 μg L\(^{-1}\)). Ambient inorganic nutrients (nitrate: \(\rm{NO}_3^-\), ammonium: \(\rm{NH}_4^+\), phosphate: \(\rm{PO}_4^{3-}\), silicate: \(\rm{SiO}_4^{3-}\)) were low both in the red tide patch and in the non-red-tide patch (clear waters outside the red tide patch). Nutrient addition experiments were conducted by adding all the inorganic nutrients to water samples from the two patches, followed by incubation for 9 days. The results showed that the addition of inorganic nutrients did not sustain the high M. rubrum cell density, which collapsed after day 1, and did not drive M. rubrum in the non-red-tide patch sample to the high cell density observed in the red tide patch sample. This confirmed that nutrients were not the driving factor for the formation of this red tide event, or for its collapse. The death of M. rubrum after day 1 released high concentrations of \(\rm{NO}_3^-\), \(\rm{PO}_4^{3-}\), \(\rm{SiO}_4^{3-}\), \(\rm{NH}_4^+\), and urea. Bacterial abundance and heterotrophic activity increased, peaking on day 3 or 4, and then decreased as the cell density of M. rubrum declined. The released nutrients stimulated the growth of diatoms such as Chaetoceros affinis var. circinalis, Thalassiothrix frauenfeldii, and Nitzschia sp., particularly in the \(\rm{SiO}_4^{3-}\)-addition treatments, as well as of other species. These results demonstrate that the initiation of M. rubrum red tides in the bay was not directly driven by nutrients. However, the massive inorganic nutrients released by the collapse of the red tide could induce a second bloom in low-ambient-nutrient water, influencing phytoplankton species composition.

2.
This paper introduces a portfolio approach for quantifying pollution risk in the presence of PM\(_{2.5}\) concentration in cities. The model is based on a copula dependence structure. To estimate model parameters, we analyze a limited data set of PM\(_{2.5}\) levels for Beijing, Tianjin, Chengde, Hengshui, and Xingtai. This analysis reveals that a t-copula dependence structure with generalized hyperbolic marginal distributions fits the PM\(_{2.5}\) log-ratios of the cities best. Furthermore, we show how to efficiently simulate the risk measures clean-air-at-risk and conditional clean-air-at-risk using importance sampling and stratified importance sampling. Our numerical results show that clean-air-at-risk at the 0.01 probability level reaches up to \(352\ \mu \hbox {g}\,\hbox {m}^{-3}\) (initial PM\(_{2.5}\) concentrations of the cities are assumed to be \(100\ \mu \hbox {g}\,\hbox {m}^{-3}\)) for the constructed sample portfolio, and that the proposed methods are much more efficient than naive simulation for computing exceedance probabilities and conditional excesses.
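As a sketch of the simulation step, the snippet below draws from a t-copula and computes the two risk measures by plain Monte Carlo. The correlation matrix, degrees of freedom, and lognormal marginals are illustrative stand-ins (the paper fits generalized hyperbolic marginals and uses importance sampling, neither of which is reproduced here).

```python
import numpy as np
from scipy import stats

def sample_t_copula(corr, df, n, rng):
    """Draw n samples from a t-copula with correlation matrix corr."""
    L = np.linalg.cholesky(corr)
    z = rng.standard_normal((n, corr.shape[0])) @ L.T
    w = rng.chisquare(df, size=(n, 1))
    t = z / np.sqrt(w / df)              # multivariate t samples
    return stats.t.cdf(t, df)            # uniform marginals via the t CDF

rng = np.random.default_rng(0)
corr = np.array([[1.0, 0.6], [0.6, 1.0]])
u = sample_t_copula(corr, df=4, n=100_000, rng=rng)

# Stand-in marginals: lognormal multiplicative shocks for two cities.
pm0 = 100.0                               # initial PM2.5, ug/m^3
growth = stats.lognorm.ppf(u, s=0.15)     # marginal transform
pm = pm0 * growth                         # next-period concentrations
portfolio = pm.mean(axis=1)               # equal-weight "portfolio"

# Clean-air-at-risk at the 0.01 level: level exceeded with probability 0.01.
caar = np.quantile(portfolio, 0.99)
# Conditional clean-air-at-risk: mean concentration beyond that quantile.
ccaar = portfolio[portfolio > caar].mean()
print(caar, ccaar)
```

Naive Monte Carlo like this needs very many samples for small exceedance probabilities, which is exactly the inefficiency the paper's importance-sampling schemes address.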

3.
This article deals with the right-tail behavior of a response distribution \(F_Y\) conditional on a regressor vector \({\mathbf {X}}={\mathbf {x}}\), restricted to the heavy-tailed case of Pareto-type conditional distributions \(F_Y(y|\ {\mathbf {x}})=P(Y\le y|\ {\mathbf {X}}={\mathbf {x}})\), with the heaviness of the right tail characterized by the conditional extreme value index \(\gamma ({\mathbf {x}})>0\). We particularly focus on testing the hypothesis \({\mathscr {H}}_{0,tail}:\ \gamma ({\mathbf {x}})=\gamma _0\) of constant tail behavior for some \(\gamma _0>0\) and all possible \({\mathbf {x}}\). When \({\mathbf {x}}\) is considered a time index, the term trend analysis is commonly used. Several such trend analyses of extreme-value data have been published recently, mostly focusing on time-varying modeling of location or scale parameters of the response distribution. In many such environmental studies a simple test against trend based on Kendall's tau statistic is applied. This test is powerful when the center of the conditional distribution \(F_Y(y|{\mathbf {x}})\) changes monotonically in \({\mathbf {x}}\), for instance in a simple location model \(\mu ({\mathbf {x}})=\mu _0+x\cdot \mu _1\), \({\mathbf {x}}=(1,x)'\), but it is rather insensitive to monotonic tail behavior, say \(\gamma ({\mathbf {x}})=\eta _0+x\cdot \eta _1\). This matters because, in many environmental applications, the main interest is in the tail rather than the center of a distribution. Our work is motivated by this problem, and our goal is to demonstrate the opportunities and limits of detecting and estimating non-constant conditional heavy-tail behavior with regard to applications from hydrology. We present and compare four different procedures by simulation and illustrate our findings on real data from hydrology: weekly maxima of hourly precipitation from France and monthly maximal river flows from Germany.
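The contrast drawn above (testing for a trend in the tail index rather than in the center) can be sketched by applying Kendall's tau to sliding-window Hill estimates. The data, window sizes, and trend below are synthetic illustrations, not the paper's four procedures.

```python
import numpy as np
from scipy import stats

def hill(x, k):
    """Hill estimator of the extreme value index from the k largest values."""
    xs = np.sort(x)
    return np.mean(np.log(xs[-k:])) - np.log(xs[-k - 1])

rng = np.random.default_rng(1)
T, win, k = 20000, 2000, 200
gam = 0.2 + 0.6 * np.arange(T) / T        # trend in the tail index only
x = rng.uniform(size=T) ** (-gam)          # Pareto-type samples, index gam

# Hill estimates in non-overlapping windows, then Kendall's tau vs. time.
idx = np.arange(T // win)
est = np.array([hill(x[i * win:(i + 1) * win], k) for i in idx])
tau, p = stats.kendalltau(idx, est)
print(est.round(2), tau, p)
```

Applied to the raw series instead of the tail estimates, Kendall's tau would see no location trend here, which is the insensitivity the abstract points out.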

4.
Vegetation is known to influence the hydrological state variables of soil: suction (\( \psi \)) and volumetric water content (\( \theta_{w} \)). In addition, vegetation induces heterogeneity in the soil porous structure and consequently in the relative permeability (\( k_{r} \)) of water under unsaturated conditions. The indirect method of utilising the soil water characteristic curve (SWCC) is commonly adopted for the determination of \( k_{r} \). In such cases, it is essential to address the stochastic behaviour of the SWCC in order to conduct a robust analysis of the \( k_{r} \) of a vegetative cover. The main aim of this study is to address the uncertainties associated with \( k_{r} \), using probabilistic analysis, for vegetative covers (i.e., grass and tree species) with bare cover as the control treatment. We propose two approaches to accomplish this objective. The univariate suction approach predicts the probability distribution functions of \( k_{r} \) on the basis of the identified best probability distribution of suction. The bivariate suction and water content approach deals with the bivariate modelling of water content and suction (the SWCC), in order to capture the randomness in the permeability curves due to the presence of vegetation. For this purpose, the dependence structure of \( \psi \) and \( \theta_{w} \) is established via copula theory, and the \( k_{r} \) curves are predicted with respect to varying levels of \( \psi \)–\( \theta_{w} \) correlation. The results showed that the \( k_{r} \) of vegetative covers is substantially lower than that of bare covers. The reduction in \( k_{r} \) with drying is greater under tree cover than under grass cover, since tree roots induce higher levels of suction. Moreover, the air entry value of the soil depends on the magnitude of the \( \psi \)–\( \theta_{w} \) correlation, which in turn is influenced by the type of vegetation in the soil. \( k_{r} \) is found to be highly uncertain in the desaturation zone of the relative permeability curve, and its stochastic behaviour is most significant in tree covers. Finally, a simplified case study is presented to demonstrate the impact of the uncertainty in \( k_{r} \) on the stability of vegetated slopes. With an increment in the parameter \( \alpha \), the factor of safety (FS) is found to decrease; the trend with parameter \( n \) is the reverse. Overall, FS is found to vary by around 4–5% for both bare and vegetated slopes.
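One common closed form linking the SWCC to \( k_{r} \) is the van Genuchten–Mualem model; the sketch below uses it with purely hypothetical parameter sets for a bare and a vegetated soil. This is only the deterministic backbone, without the copula-based probabilistic treatment described in the abstract.

```python
import numpy as np

def vg_se(psi, alpha, n):
    """Effective saturation from suction (van Genuchten SWCC)."""
    m = 1.0 - 1.0 / n
    return (1.0 + (alpha * psi) ** n) ** (-m)

def mualem_kr(psi, alpha, n):
    """Relative permeability predicted from the SWCC (Mualem's model)."""
    m = 1.0 - 1.0 / n
    se = vg_se(psi, alpha, n)
    return np.sqrt(se) * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

psi = np.array([0.0, 1.0, 10.0, 100.0])      # suction values (illustrative)
kr_bare = mualem_kr(psi, alpha=0.1, n=2.0)   # hypothetical bare soil
kr_tree = mualem_kr(psi, alpha=0.05, n=1.6)  # hypothetical vegetated soil
print(kr_bare, kr_tree)
```

In the paper's bivariate approach, randomness in the \( \psi \)–\( \theta_{w} \) pairs propagates through a relation of this kind into a family of \( k_{r} \) curves.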

5.
Vulnerability maps are designed to show the areas of greatest potential for groundwater contamination on the basis of hydrogeological conditions and human impacts. The objectives of this research are (1) to assess groundwater vulnerability using the DRASTIC method and (2) to improve the DRASTIC method for the evaluation of groundwater contamination risk using AI methods, namely artificial neural network (ANN), Sugeno fuzzy logic (SFL), Mamdani fuzzy logic (MFL), and neuro-fuzzy (NF) approaches. This optimization is illustrated with a case study. For this purpose, a DRASTIC model is developed using seven parameters. To validate the contamination risk assessment, a total of 243 groundwater samples were collected from the different aquifer types of the study area and analyzed for \( {\text{NO}}_{3}^{ - } \) concentration. To develop the AI models, the 243 data points were divided into training and validation sets based on a cross-validation approach. The vulnerability indices calculated with the DRASTIC method are corrected by the \( {\text{NO}}_{3}^{ - } \) data used in the training step. The input data of the AI models comprise the seven parameters of the DRASTIC method, while the output is the vulnerability index corrected using the \( {\text{NO}}_{3}^{ - } \) concentration data from the study area, which is called the groundwater contamination risk. In other words, there is a target value (known output) estimated by a formula from the DRASTIC vulnerability and \( {\text{NO}}_{3}^{ - } \) concentration values. After training, the AI models are verified with the second \( {\text{NO}}_{3}^{ - } \) concentration dataset. The results revealed that NF and SFL produced acceptable performance, while ANN and MFL predicted poorly. A supervised committee machine artificial intelligence (SCMAI) model, which combines the results of the individual AI models using a supervised artificial neural network, was then developed for better prediction of vulnerability. The performance of SCMAI was also compared with those of simple-averaging and weighted-averaging committee machine intelligence (CMI) methods. The SCMAI model produced reliable estimates of groundwater contamination risk.
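The raw DRASTIC index underlying the abstract is a weighted sum of seven parameter ratings. Below is a minimal sketch, assuming the standard weights of Aller et al. (1987) and hypothetical ratings; the \( {\text{NO}}_{3}^{ - } \)-based correction and the AI models are not reproduced.

```python
# Standard DRASTIC weights (Aller et al., 1987); ratings run from 1 to 10.
WEIGHTS = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}

def drastic_index(ratings):
    """Weighted sum of the seven DRASTIC parameter ratings (range 23-230)."""
    return sum(WEIGHTS[p] * ratings[p] for p in WEIGHTS)

# Hypothetical grid cell: shallow water table and high recharge -> vulnerable.
cell = {"D": 9, "R": 8, "A": 6, "S": 5, "T": 9, "I": 7, "C": 4}
print(drastic_index(cell))
```

The paper's contribution starts where this sketch stops: the AI models learn a mapping from these seven inputs to an index corrected by observed nitrate.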

6.
In a previous publication, the seismicity of Japan from 1 January 1984 to 11 March 2011 (the time of occurrence of the \(M9\) Tohoku earthquake) was analyzed in a time domain called natural time \(\chi\). The order parameter of seismicity in this time domain is the variance of \(\chi\) weighted by the normalized energy of each earthquake. It was found that the fluctuations of this order parameter exhibit 15 distinct minima (deeper than a certain threshold) one to around three months before the occurrence of large earthquakes in Japan during 1984–2011. Six of these 15 minima were followed by all of the shallow earthquakes of magnitude 7.6 or larger during the whole period studied. Here, we show that the probability of achieving the latter result by chance is of the order of \(10^{-5}\). This conclusion is strengthened by also employing the receiver operating characteristics technique.
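The order parameter described above, the variance of natural time weighted by normalized event energies, can be computed directly from an event catalog. The equal-energy catalog below is a toy input, used only to recover the known uniform limit of 1/12; the sliding-window minima detection of the study is not reproduced.

```python
import numpy as np

def kappa1(energies):
    """Variance of natural time chi_k = k/N weighted by normalized energies."""
    e = np.asarray(energies, dtype=float)
    p = e / e.sum()                       # normalized energy of each event
    n = len(e)
    chi = np.arange(1, n + 1) / n         # natural time of the k-th event
    return np.sum(p * chi ** 2) - np.sum(p * chi) ** 2

# Equal-energy events: kappa1 approaches 1/12 (the uniform limit) as N grows.
print(kappa1(np.ones(1000)))
```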

7.
Microstructure measurements were performed along two sections through the Halmahera Sea and the Ombai Strait and at a station in the deep Banda Sea. Contrasting dissipation rates (\(\epsilon\)) and vertical eddy diffusivities (\(K_z\)) were obtained, with depth-averaged ranges of \(\sim [9 \times 10^{-10}-10^{-5}]\) W kg\(^{-1}\) and \(\sim [1 \times 10^{-5}-2 \times 10^{-3}]\) m\(^2\) s\(^{-1}\), respectively. Similarly, the turbulence intensity \(I={\epsilon }/(\nu N^{2})\), with \(\nu\) the kinematic viscosity and \(N\) the buoyancy frequency, was found to vary over seven orders of magnitude, with values up to \(10^{7}\). These large ranges of variation were correlated with the internal tide energy level, which highlights the contrast between regions close to and far from internal tide generation sites. Finescale parameterizations of \(\epsilon\) induced by the breaking of weakly nonlinear internal waves were only relevant in regions located far from any generation area ("far field"), at the deep Banda Sea station. Closer to generation areas, at the "intermediate field" station of the Halmahera Sea, a modified formulation of MacKinnon and Gregg (2005) was validated for moderately turbulent regimes with \(100 < I < 1000\). Near generation areas marked by strongly turbulent regimes, such as the "near field" stations within straits and passages, \(\epsilon\) is most adequately inferred from horizontal velocities, provided that part of the inertial subrange is resolved, according to Kolmogorov scaling.
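The turbulence intensity defined above, \(I=\epsilon/(\nu N^2)\), is a one-line computation; the viscosity, stratification, and dissipation values below are illustrative, chosen only to span the regimes mentioned in the abstract.

```python
import numpy as np

def turbulence_intensity(eps, nu=1.0e-6, n2=1.0e-5):
    """I = eps / (nu * N^2).
    eps: dissipation rate (W/kg); nu: kinematic viscosity (m^2/s);
    n2: squared buoyancy frequency (s^-2). Values here are illustrative."""
    return eps / (nu * n2)

eps = np.array([9e-10, 5e-9, 1e-5])       # spans weak to strong turbulence
I = turbulence_intensity(eps)
print(I)   # middle value falls in the moderately turbulent band 100 < I < 1000
```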

8.
Diurnal S\(_1\) tidal oscillations in the coupled atmosphere–ocean system induce small perturbations of Earth's prograde annual nutation, but matching geophysical model estimates of this Sun-synchronous rotation signal with the observed effect in geodetic Very Long Baseline Interferometry (VLBI) data has thus far been elusive. The present study assesses the problem from a geophysical model perspective, using four modern-day atmospheric assimilation systems and a consistently forced barotropic ocean model that dissipates its energy excess in the global abyssal ocean through a parameterized tidal conversion scheme. The use of contemporary meteorological data does not, however, guarantee accurate nutation estimates per se; two of the probed datasets produce atmosphere–ocean-driven S\(_1\) terms that deviate by more than 30 \(\upmu \)as (microarcseconds) from the VLBI-observed harmonic of \(-16.2+i113.4\) \(\upmu \)as. Partial deficiencies of these models in the diurnal band are also borne out by a validation of the air pressure tide against barometric in situ estimates, as well as by comparisons of simulated sea surface elevations with a global network of S\(_1\) tide gauge determinations. Credence is lent to the global S\(_1\) tide derived from the Modern-Era Retrospective Analysis for Research and Applications (MERRA) and the operational model of the European Centre for Medium-Range Weather Forecasts (ECMWF). When averaged over 2004 to 2013, their nutation contributions are estimated to be \(-8.0+i106.0\) \(\upmu \)as (MERRA) and \(-9.4+i121.8\) \(\upmu \)as (ECMWF operational), thus being virtually equivalent to the VLBI estimate. This remarkably close agreement will likely aid forthcoming nutation theories in their unambiguous a priori account of Earth's prograde annual celestial motion.

9.
In this work, we map the absorption properties of the French crust by analyzing the decay properties of coda waves. Estimation of the coda quality factor \(Q_{c}\) in five non-overlapping frequency bands between 1 and 32 Hz is performed for more than 12,000 high-quality seismograms from about 1700 weak-to-moderate crustal earthquakes recorded between 1995 and 2013. Based on a sensitivity analysis, \(Q_{c}\) is subsequently approximated as an integral of the intrinsic shear wave quality factor \(Q_{i}\) along the ray connecting the source to the station. After discretization of the medium on a 2-D Cartesian grid, this yields a linear inverse problem for the spatial distribution of \(Q_{i}\). The solution is approximated by redistributing \(Q_{c}\) over the pixels connecting the source to the station and averaging over all paths. This simple procedure allows us to obtain frequency-dependent maps of apparent absorption that show lateral variations of \(50\%\) at length scales ranging from 50 km to 150 km, in all the frequency bands analyzed. At low frequency, the small-scale geological features of the crust are clearly delineated: the Meso-Cenozoic basins (Aquitaine, Brabant, Southeast) appear as strong-absorption regions, while crystalline massifs (Armorican, Central Massif, Alps) appear as low-absorption zones. At high frequency, the correlation between surface geological features and the absorption map disappears, except for the deepest Meso-Cenozoic basins, which retain a strong absorption signature. Based on the tomographic results, we explore the implications of lateral variations of absorption for the analysis of both instrumental and historical seismicity. The main conclusions are as follows: (1) the current local magnitude \(M_{L}\) can be over- (resp. under-)estimated when absorption is weaker (resp. stronger) than the nominal value assumed in the amplitude–distance relation; (2) both the forward prediction of the earthquake macroseismic intensity field and the estimation of historical earthquake seismological parameters from macroseismic intensity data are significantly improved by taking into account a realistic 2-D distribution of absorption. In the future, both \(M_{L}\) estimation and macroseismic intensity attenuation models should benefit from high-resolution models of frequency-dependent absorption such as the one produced in this study.
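The inversion step described above, redistributing each path's observed \(Q_{c}\) over the pixels crossed by the source-station ray and averaging over all paths, can be sketched as follows. Straight rays, point sampling along the ray, and the tiny grid are simplifications of the actual procedure.

```python
import numpy as np

def ray_pixels(src, sta, nx, ny, nsamp=200):
    """Indices of grid pixels crossed by the straight source-station ray
    (simple point sampling; a production code would trace rays exactly)."""
    t = np.linspace(0.0, 1.0, nsamp)
    xy = src + t[:, None] * (sta - src)
    ij = np.unique(np.floor(xy).astype(int), axis=0)
    return ij[:, 0].clip(0, nx - 1), ij[:, 1].clip(0, ny - 1)

def backproject(paths, qc_obs, nx, ny):
    """Average observed 1/Qc over the pixels each ray crosses."""
    num = np.zeros((nx, ny))
    cnt = np.zeros((nx, ny))
    for (src, sta), qc in zip(paths, qc_obs):
        i, j = ray_pixels(np.asarray(src, float), np.asarray(sta, float), nx, ny)
        num[i, j] += 1.0 / qc
        cnt[i, j] += 1.0
    qi = np.full((nx, ny), np.nan)        # NaN where no ray passes
    hit = cnt > 0
    qi[hit] = 1.0 / (num[hit] / cnt[hit])  # pixel-wise average of 1/Qc
    return qi

paths = [((0.5, 0.5), (9.5, 9.5)), ((0.5, 9.5), (9.5, 0.5))]
qmap = backproject(paths, qc_obs=[400.0, 400.0], nx=10, ny=10)
print(np.nanmin(qmap), np.nanmax(qmap))
```

With identical \(Q_{c}\) observations on both paths the map is uniform on the covered pixels, a minimal sanity check of the averaging.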

10.
Fragility curves for risk-targeted seismic design maps
Seismic design using maps based on "risk-targeting" would lead to an annual probability of attaining or exceeding a certain damage state that is uniform over an entire territory. These maps are based on convolving seismic hazard curves from a standard probabilistic analysis with the derivative of fragility curves expressing the chance for a code-designed structure to attain or exceed a certain damage state given a level of input motion, e.g. peak ground acceleration (PGA). There are few published fragility curves for structures respecting the Eurocodes (ECs, principally EC8 for seismic design) that can be used for the development of risk-targeted design maps for Europe. In this article, a set of fragility curves is developed for a regular three-storey reinforced-concrete building designed using EC2 and EC8 for medium ductility and increasing levels of design acceleration \(a_\mathrm{g}\). These curves show that structures designed using EC8 against PGAs up to about 1 m/s\(^{2}\) have similar fragilities to those that respect only EC2 (although this conclusion may not hold for irregular buildings, other geometries or materials). From these curves, the probability of yielding for a structure subjected to a PGA equal to \(a_\mathrm{g}\) varies between 0.14 (\(a_\mathrm{g}=0.7\) m/s\(^{2}\)) and 0.85 (\(a_\mathrm{g}=3\) m/s\(^{2}\)), whereas the probability of collapse for a structure subjected to a PGA equal to \(a_\mathrm{g}\) varies between \(1.7\times 10^{-7}\) (\(a_\mathrm{g}=0.7\) m/s\(^{2}\)) and \(1.0\times 10^{-5}\) (\(a_\mathrm{g}=3\) m/s\(^{2}\)).
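The convolution defining the annual rate of reaching a damage state can be sketched numerically; by integration by parts it equals the hazard curve integrated against the fragility-curve derivative. The power-law hazard and lognormal fragility parameters below are hypothetical, not the EC8 curves of the article.

```python
import numpy as np
from scipy import stats

# Hypothetical hazard curve: annual probability of exceeding PGA a (power law).
def hazard(a, k0=1e-4, k=2.5):
    return np.minimum(1.0, k0 * np.asarray(a, dtype=float) ** (-k))  # a in m/s^2

# Hypothetical lognormal fragility for one damage state.
median, beta = 3.0, 0.5
a = np.linspace(0.01, 30.0, 20000)
F = stats.lognorm.cdf(a, s=beta, scale=median)   # fragility curve
f = stats.lognorm.pdf(a, s=beta, scale=median)   # its derivative

# Annual rate of reaching the damage state:
#   lambda = int F(a) |dH/da| da = int H(a) f(a) da   (integration by parts)
y = hazard(a) * f
lam = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(a)))  # trapezoid rule
print(lam)
```

Risk-targeted maps invert this computation: the design acceleration is chosen so that `lam` matches a prescribed target rate everywhere.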

11.
Conditional risk based on multivariate hazard scenarios
We present a novel methodology to compute conditional risk measures when the conditioning event depends on a number of random variables. Specifically, given a random vector \((\mathbf {X},Y)\), we consider risk measures that express the risk of Y given that \(\mathbf {X}\) assumes values in an extreme multidimensional region. In particular, the considered risky regions are related to the AND, OR, Kendall and Survival Kendall hazard scenarios that are commonly used in environmental literature. Several closed formulas are considered (especially in the AND and OR scenarios). An application to spatial risk analysis involving real data is discussed.

12.
The first part of this paper reviews methods that use effective solar indices to update a background ionospheric model, focusing on those employing the Kriging method to perform the spatial interpolation. It then proposes a method to update the International Reference Ionosphere (IRI) model through the assimilation of data collected by a European ionosonde network. The method, called International Reference Ionosphere UPdate (IRI UP), can potentially operate in real time. It is mathematically described and validated for the period 9–25 March 2015 (a time window including the well-known St. Patrick's Day storm that occurred on 17 March), using the IRI and IRI Real Time Assimilative Model (IRTAM) models as references. It relies on the foF2 and M(3000)F2 ionospheric characteristics, recorded routinely by a network of 12 European ionosonde stations, which are used to calculate for each station effective values of the IRI indices \(IG_{12}\) and \(R_{12}\) (identified as \(IG_{{12{\text{eff}}}}\) and \(R_{{12{\text{eff}}}}\)); then, starting from this discrete dataset of values, two-dimensional (2-D) maps of \(IG_{{12{\text{eff}}}}\) and \(R_{{12{\text{eff}}}}\) are generated through the universal Kriging method. Five variogram models are proposed and tested statistically to select the best performer for each effective index. The computed maps of \(IG_{{12{\text{eff}}}}\) and \(R_{{12{\text{eff}}}}\) are then used in the IRI model to synthesize updated values of foF2 and hmF2. To evaluate the ability of the proposed method to reproduce the rapid local changes that are common under disturbed conditions, quality metrics are calculated for the IRI, IRI UP, and IRTAM models at two test stations whose measurements were not assimilated in IRI UP: Fairford (51.7°N, 1.5°W) and San Vito (40.6°N, 17.8°E). The proposed method turns out to be very effective under highly disturbed conditions, with significant improvements in the representation of foF2 and noticeable improvements in that of hmF2. Important improvements are also verified for quiet and moderately disturbed conditions. A visual analysis of foF2 and hmF2 maps highlights the ability of the IRI UP method to capture small-scale changes occurring under disturbed conditions that are not seen by IRI.
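As a sketch of the interpolation step, the snippet below performs ordinary kriging (a simplification of the universal Kriging used in the paper) with an exponential variogram, on hypothetical effective-index values at four station locations.

```python
import numpy as np

def exp_variogram(h, sill=1.0, rng_=2.0, nugget=0.0):
    """Exponential variogram model (parameters illustrative)."""
    return nugget + sill * (1.0 - np.exp(-h / rng_))

def ordinary_kriging(xy, z, xy0, sill=1.0, rng_=2.0):
    """Ordinary-kriging prediction at xy0 from samples (xy, z)."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_variogram(d, sill, rng_)
    A[n, n] = 0.0                          # Lagrange-multiplier row/column
    b = np.ones(n + 1)
    b[:n] = exp_variogram(np.linalg.norm(xy - xy0, axis=-1), sill, rng_)
    w = np.linalg.solve(A, b)
    return float(w[:n] @ z)                # weighted sum of station values

# Toy "effective index" values at four station locations (hypothetical).
xy = np.array([[0.0, 0.0], [0.0, 4.0], [4.0, 0.0], [4.0, 4.0]])
z = np.array([10.0, 12.0, 14.0, 16.0])
print(ordinary_kriging(xy, z, np.array([0.0, 0.0])),   # exact at a station
      ordinary_kriging(xy, z, np.array([2.0, 2.0])))   # interior estimate
```

With zero nugget, kriging is an exact interpolator at the stations, which is why the paper validates at stations withheld from the assimilation.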

13.
Temperature data from SABER/TIMED and Empirical Orthogonal Function (EOF) analysis are used to examine possible modulations of the temperature migrating diurnal tide (DW1) by the latitudinal gradient of the zonal-mean zonal wind (hereafter \(\bar{u}_y\)). The results show that \(\bar{u}_y\) increases with altitude and displays clear seasonal and interannual variability. In the upper mesosphere and lower thermosphere (MLT), at latitudes between 20°N and 20°S, when \(\bar{u}_y\) strengthens (weakens) at equinoxes (solstices), the DW1 amplitude increases (decreases) simultaneously. A stronger maximum in the March–April equinox occurs in both \(\bar{u}_y\) and the DW1 amplitude. In addition, a quasi-biennial oscillation of DW1 is found to be synchronous with \(\bar{u}_y\). These resembling spatial–temporal features suggest that \(\bar{u}_y\) in the upper tropical MLT probably plays an important role in modulating the semiannual, annual, and quasi-biennial oscillations of DW1 at the same latitudes and altitudes. Furthermore, \(\bar{u}_y\) in the mesosphere possibly affects the propagation of DW1 and produces a semiannual oscillation (SAO) of DW1 in the lower thermosphere. Thus, the SAO of DW1 in the upper MLT may be a combined effect of \(\bar{u}_y\) in both the mesosphere and the upper MLT, which model studies should determine in the future.

14.
This paper proposes methods to detect outliers in functional data sets; the task of identifying atypical curves is carried out using the recently proposed kernelized functional spatial depth (KFSD). KFSD is a local depth that can be used to order the curves of a sample from most to least central. Since outliers are usually among the least central curves, we present a probabilistic result that allows one to select a threshold value for KFSD such that curves with depth values lower than the threshold are flagged as outliers. Based on this result, we propose three new outlier detection procedures. The results of a simulation study show that our proposals generally outperform a battery of competitors. We apply our procedures to a real data set consisting of daily curves of emission levels of nitrogen oxides (NO\(_{x}\)), since it is of interest to identify abnormal NO\(_{x}\) levels in order to take the necessary environmental policy actions.

15.
In this study we assume that a gravitational curvature tensor, i.e. a tensor of third-order directional derivatives of the Earth's gravitational potential, is observable at satellite altitudes. Such a tensor is composed of ten different components, i.e. gravitational curvatures, which may be combined into vertical–vertical–vertical, vertical–vertical–horizontal, vertical–horizontal–horizontal and horizontal–horizontal–horizontal gravitational curvatures. Firstly, we study the spectral properties of the gravitational curvatures. Secondly, we derive new quadrature formulas for the spherical harmonic analysis of the four gravitational curvatures and provide their corresponding analytical error models. Thirdly, requirements for an instrument that would eventually observe gravitational curvatures by differential accelerometry are investigated. The results reveal that measuring third-order directional derivatives of the gravitational potential imposes very high requirements on the accuracy of the deployed accelerometers, beyond the limits of currently available sensors. For example, for orbital parameters and performance similar to those of the GOCE mission, observing third-order directional derivatives requires accelerometers with a noise level of \({\sim}10^{-17}\,\hbox {m}\,\hbox {s}^{-2}\) Hz\(^{-1/2}\).

16.
A damaging seismic sequence hit a wide area mainly located in the Emilia-Romagna region (Northern Italy) during 2012, with several events of local magnitude \(\hbox {M}_\mathrm{l} \ge 5\), among which the \(\hbox {M}_\mathrm{l}\) 5.9 event of May 20 and the \(\hbox {M}_\mathrm{l}\) 5.8 event of May 29 were the main shocks. Thanks to the presence of a permanent accelerometric station very close to the epicentre and to the temporary installations deployed in the aftermath of the first shock, a large number of strong motion recordings are available. On this basis, we compared the recorded signals with the values provided by the current Italian seismic regulations and observed several differences with respect to the horizontal components when the simplified approach for site conditions (based on Vs30 classes) is used. On the contrary, when using the more accurate approach based on the local seismic response, we generally obtain much better agreement, at least in the frequency range for which a quarter wavelength is comparable with the depth of the available subsoil data. Some unresolved questions remain, such as the low-frequency behaviour (\(<\)1 Hz), which could be due either to complex propagation at depths larger than those presently investigated or to near-source effects, and the behaviour of the vertical spectra, whose recorded/code difference is too large to be explained with the information currently available.

17.
Recent publications on the regression between earthquake magnitudes assume that both magnitudes are affected by error and that only the ratio of the error variances is known. If X and Y represent observed magnitudes, and x and y represent the corresponding theoretical values, the problem is to find the a and b of the best-fit line \(y = a x + b\). This problem has a closed solution only for homoscedastic errors (variances all equal for each of the two variables). The published solution was derived using a method that cannot provide a sum of squares of residuals; therefore, it is not possible to compare the goodness of fit for different pairs of magnitudes. Furthermore, that method does not provide expressions for the x and y. The least-squares method introduced here does not have these drawbacks. The two methods of solution result in the same equations for a and b. General properties of a that have been discussed in the literature but not proved, or proved only for particular cases, are derived here. A comparison of different expressions for the variances of a and b is provided. The paper also considers the statistical aspects of the ongoing debate regarding the prediction of y given X. Analysis of actual data from the literature shows that the new approach produces an average improvement of less than 0.1 magnitude units over the standard approach when applied to \(M_{w}\) vs. \(m_{b}\) and \(M_{w}\) vs. \(M_{S}\) regressions. This improvement is minor, within the typical error of \(M_{w}\). Moreover, a test subset of 100 predicted magnitudes shows that the new approach yields magnitudes closer to the theoretically true magnitudes for only 65% of them; for the remaining 35%, the standard approach produces closer values. Therefore, the new approach does not always give the most accurate magnitude estimates.
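The best-fit line when both magnitudes carry error, with a known error-variance ratio, has a closed-form slope (the errors-in-variables, or Deming, solution). Below is a sketch with synthetic magnitudes, contrasted against ordinary least squares, which attenuates the slope; all data and parameters are invented for illustration.

```python
import numpy as np

def deming(X, Y, lam=1.0):
    """Best-fit line y = a*x + b when both X and Y carry error;
    lam = (error variance of Y) / (error variance of X)."""
    xm, ym = X.mean(), Y.mean()
    sxx = np.mean((X - xm) ** 2)
    syy = np.mean((Y - ym) ** 2)
    sxy = np.mean((X - xm) * (Y - ym))
    a = (syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2
                                   + 4.0 * lam * sxy ** 2)) / (2.0 * sxy)
    return a, ym - a * xm

rng = np.random.default_rng(2)
x_true = rng.normal(5.5, 1.0, 2000)              # "true" magnitudes
X = x_true + rng.normal(0.0, 0.2, 2000)          # observed, with error
Y = 0.8 * x_true + 1.0 + rng.normal(0.0, 0.2, 2000)
a, b = deming(X, Y, lam=1.0)
a_ols = np.polyfit(X, Y, 1)[0]                   # ordinary least squares
print(a, b, a_ols)
```

The OLS slope is biased toward zero because the regressor X is noisy; the errors-in-variables slope recovers the underlying 0.8.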

18.
The current seismic design philosophy is based on the nonlinear behavior of structures, where the foundation soil is often simplified through a modification of the input acceleration depending on the expected site effects. The latter are generally limited to a dependence on the shear-wave velocity profile or on a site classification. The findings presented in this work illustrate the importance of accounting both for soil nonlinearity due to seismic liquefaction and for soil–structure interaction when dealing with liquefiable soil deposits. This paper assesses the effect of excess pore pressure (\(\Delta p_{w}\)) and deformation in the nonlinear response of liquefiable soils on the structure's performance. For this purpose, a coupled \(\Delta p_{w}\) and soil deformation (CPD) analysis is used to represent the soil behavior. A mechanically equivalent, fully drained, decoupled (DPD) analysis is also performed. The differences between the analyses are evaluated for several engineering demand parameters. The results make it possible to identify and quantify these differences and thus to establish the situations in which the fully drained analysis might tend to overestimate or underestimate the structure's demand.

19.
A semiempirical mathematical model of iron and manganese migration from bottom sediments into the water mass of water bodies is proposed, based on basic regularities in the geochemistry of these elements. The entry of dissolved forms of iron and manganese under aeration conditions is assumed to be negligible. When the dissolved-oxygen concentration falls below 0.5 mg/L, the elements start to be released from bottom sediments, their release rate reaching its maximum under anoxic conditions. The fluxes of dissolved iron and manganese (Me) from bottom sediments into the water mass (\(J_{Me}\)) are governed by the gradients of their concentrations across a diffusive water sublayer adjacent to the sediment surface, with an average thickness of h = 0.025 cm: \({J_{Me}} = - {D_{Me}}\frac{{{C_{Me\left( {ss} \right)}} - {C_{Me\left( w \right)}}}}{h}\), where \(D_{Me} \approx 1 \times 10^{-9}\) m\(^2\)/s is the molecular diffusion coefficient of component Me in solution, and \(C_{Me(ss)}\) and \(C_{Me(w)} \approx 0\) are the Me concentrations on the sediment surface (the bottom boundary of the diffusive sublayer) and in the water mass (its upper boundary), respectively. The value of \(C_{Me(ss)}\) depends on the water's saturation with dissolved oxygen (\({\eta _{{O_2}}}\)) according to the empirical relationship \({C_{Me\left( {ss} \right)}} = \frac{{C_{_{Me\left( {ss} \right)}}^{\max }}}{{1 + k{\eta _{{O_2}}}}}\), where k is a constant factor equal to 300 for iron and 100 for manganese, and \(C_{Me(ss)}^{\max}\) is the maximal Me concentration on the bottom boundary of the diffusive sublayer, with \(C_{Fe(ss)}^{\max} \approx 200\) μM (11 mg/L) and \(C_{Mn(ss)}^{\max} \approx 100\) μM (5.5 mg/L).
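The two formulas above can be implemented directly. The sketch below evaluates the iron flux under anoxic and fully oxygenated conditions using the constants given in the abstract; with the stated gradient convention, a negative \(J_{Me}\) is a release into the water column.

```python
# Flux of dissolved Fe or Mn from bottom sediments, per the model above.
D_ME = 1.0e-9        # molecular diffusion coefficient D_Me, m^2/s
H = 0.025e-2         # diffusive sublayer thickness: 0.025 cm, in m

def c_ss(eta_o2, c_max, k):
    """Sediment-surface concentration vs. oxygen saturation eta_o2."""
    return c_max / (1.0 + k * eta_o2)

def flux(eta_o2, c_max, k, c_w=0.0):
    """J_Me = -D * (C_ss - C_w) / h, in mol m^-2 s^-1."""
    return -D_ME * (c_ss(eta_o2, c_max, k) - c_w) / H

# Iron: C_max = 200 uM = 0.2 mol/m^3, k = 300.
j_anoxic = flux(0.0, c_max=0.2, k=300)   # eta_O2 = 0: anoxic maximum
j_oxic = flux(1.0, c_max=0.2, k=300)     # eta_O2 = 1: fully oxygenated
print(j_anoxic, j_oxic, j_anoxic / j_oxic)
```

The k = 300 factor makes the oxygenated iron flux roughly 300 times smaller than the anoxic one, consistent with the assumption that release under aeration is negligible.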

20.
The Lorca Basin has been the object of recent research aimed at studying the phenomenon of earthquake-induced landslides and its assessment within different seismic scenarios. However, not until the 11 May 2011 Lorca earthquakes has it been possible to conduct a systematic approach to the problem. In this paper we present an inventory of slope instabilities triggered by the Lorca earthquakes, comprising more than 100 cases, mainly rock and soil falls of small size (1–100 \(\hbox {m}^{3}\)). The distribution of these instabilities is compared to two different earthquake-triggered landslide hazard maps: one considering the occurrence of the most probable earthquake for a 475-year return period in the Lorca Basin \((\hbox {M}_{\mathrm{w}}=5.0)\), based on both low- and high-resolution digital elevation models (DEMs), and a second one matching the occurrence of the \(\hbox {M}_{\mathrm{w}}=5.2\) 2011 Lorca earthquake, which was produced using the higher-resolution DEM. The most frequent Newmark displacements related to the slope failures triggered by the 2011 Lorca earthquakes are lower than 2 cm in both hazard scenarios considered. Additionally, the predicted Newmark displacements were correlated with the inventory of slope instabilities to develop a probability-of-failure equation. The fit appears to be very good, since most of the mapped slope failures are located in the higher-probability areas. The probability of slope failure in the Lorca Basin for a seismic event similar to the \(\hbox {M}_{\mathrm{w}}\) 5.2 2011 Lorca earthquake can be considered very low (0–4%).
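A Newmark displacement is obtained by double-integrating the ground acceleration in excess of the critical (yield) acceleration of a rigid sliding block. Below is a minimal sketch with a synthetic 1 Hz pulse and downslope-only sliding (a standard simplification); it is not the regression model used to build the hazard maps above.

```python
import numpy as np

def newmark_displacement(acc, dt, a_c):
    """Rigid-block Newmark displacement (m) for ground acceleration acc
    (m/s^2) sampled at interval dt, with critical acceleration a_c (m/s^2).
    Downslope-only sliding: the block slides while its relative velocity
    is positive, driven by acc - a_c."""
    v, d = 0.0, 0.0
    for a in acc:
        if v > 0.0 or a > a_c:
            v += (a - a_c) * dt           # integrate relative acceleration
            v = max(v, 0.0)               # sliding stops, never reverses
            d += v * dt                   # integrate relative velocity
    return d

dt = 0.005
t = np.arange(0.0, 4.0, dt)
acc = 2.0 * np.sin(2.0 * np.pi * 1.0 * t)     # 1 Hz pulse, amax = 2 m/s^2
d_slide = newmark_displacement(acc, dt, a_c=0.5)
d_none = newmark_displacement(acc, dt, a_c=3.0)  # a_c > amax: no sliding
print(d_slide, d_none)
```

Hazard maps like those in the abstract threshold such displacements (e.g. the 2 cm value quoted) to classify slopes as likely or unlikely to fail.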


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号