Similar Articles
20 similar articles retrieved.
1.
Vegetation is known to influence the hydrological state variables of soil, namely suction \( \left( \psi \right) \) and volumetric water content (\( \theta_{w} \)). In addition, vegetation induces heterogeneity in the soil porous structure and, consequently, in the relative permeability (\( k_{r} \)) of water under unsaturated conditions. The indirect method of utilising the soil water characteristic curve (SWCC) is commonly adopted for the determination of \( k_{r} \). In such cases, it is essential to address the stochastic behaviour of the SWCC in order to conduct a robust analysis of the \( k_{r} \) of a vegetative cover. The main aim of this study is to address the uncertainties associated with \( k_{r} \), using probabilistic analysis, for vegetative covers (i.e., grass and tree species) with bare cover as the control treatment. We propose two approaches to accomplish this objective. The univariate suction approach predicts the probability distribution functions of \( k_{r} \) on the basis of the best-fitting probability distribution identified for suction. The bivariate suction and water content approach deals with the bivariate modelling of water content and suction (SWCC), in order to capture the randomness in the permeability curves due to the presence of vegetation. For this purpose, the dependence structure of \( \psi \) and \( \theta_{w} \) is established via copula theory, and the \( k_{r} \) curves are predicted with respect to varying levels of \( \psi - \theta_{w} \) correlation. The results showed that the \( k_{r} \) of vegetative covers is substantially lower than that of bare covers. The reduction in \( k_{r} \) with drying is greater under tree cover than under grass cover, since tree roots induce higher levels of suction. Moreover, the air entry value of the soil depends on the magnitude of the \( \psi - \theta_{w} \) correlation, which in turn is influenced by the type of vegetation in the soil. \( k_{r} \) is found to be highly uncertain in the desaturation zone of the relative permeability curve. The stochastic behaviour of \( k_{r} \) is most significant in tree covers. Finally, a simplified case study is presented to demonstrate the impact of the uncertainty in \( k_{r} \) on the stability of vegetated slopes. With an increment in the parameter \( \alpha \), the factor of safety (FS) is found to decrease; the trend of FS with the parameter \( n \) is the reverse. Overall, FS is found to vary by around 4–5% for both bare and vegetated slopes.
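A minimal illustrative sketch (not from the paper) of the bivariate idea: sample correlated (ψ, θw) pairs with a Gaussian copula and push them through a van Genuchten–Mualem relative permeability model. The copula family, the SWCC/permeability model, and every parameter value below are assumptions for demonstration only.

import numpy as np
from scipy.stats import norm, lognorm, uniform

rng = np.random.default_rng(0)

rho = -0.8               # assumed psi-theta_w correlation (drier soil, less water)
n_samples = 20_000

# 1) Gaussian copula: correlated uniform scores
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n_samples)
u_psi, u_theta = norm.cdf(z[:, 0]), norm.cdf(z[:, 1])

# 2) Assumed marginals: lognormal suction (kPa), uniform volumetric water content
psi = lognorm(s=0.8, scale=20.0).ppf(u_psi)
theta_w = uniform(loc=0.10, scale=0.30).ppf(u_theta)        # 0.10-0.40 m3/m3

# 3) Mualem relative permeability from effective saturation Se
theta_r, theta_s = 0.05, 0.45
n_vg = 1.6
m_vg = 1.0 - 1.0 / n_vg
Se = np.clip((theta_w - theta_r) / (theta_s - theta_r), 1e-6, 1.0)
k_r = np.sqrt(Se) * (1.0 - (1.0 - Se ** (1.0 / m_vg)) ** m_vg) ** 2

# 4) Spread of k_r inside one suction band: a crude picture of the uncertainty
band = (psi > 15.0) & (psi < 25.0)
print(f"k_r at psi ~ 20 kPa: 5-95% range = "
      f"[{np.percentile(k_r[band], 5):.2e}, {np.percentile(k_r[band], 95):.2e}]")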

2.
In this paper, we introduce additional statistical tools for estimating the maximum regional earthquake magnitude, \( m_{\max} \), as a complement to those already introduced by Kijko and Singh (Acta Geophys 59(4):674–700, 2011). Four new methods are introduced and investigated with regard to their applicability and performance. We present an example of application and a comparison that includes the methods introduced earlier by the previous authors. A condition for the existence of the Tate–Pisarenko estimate and a proof of the asymptotic equivalence of the Tate–Pisarenko and Kijko–Sellevoll estimates are presented in the two appendices.
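For orientation only, a rough sketch of a classical Kijko–Sellevoll-type estimate of \( m_{\max} \) under a doubly truncated Gutenberg–Richter distribution (fixed-point form, synthetic catalogue). This reproduces none of the four new methods of the paper; the formulation and all parameter values are stated as assumptions for illustration.

import numpy as np

def kijko_sellevoll_mmax(mags, m_min, beta, tol=1e-4, max_iter=100):
    """Fixed-point iteration for a Kijko-Sellevoll-type m_max estimate under a
    doubly truncated Gutenberg-Richter magnitude distribution."""
    mags = np.asarray(mags, dtype=float)
    n = mags.size
    m_obs = mags.max()
    m_max = m_obs
    grid = np.linspace(m_min, m_obs, 2001)
    dm = grid[1] - grid[0]
    for _ in range(max_iter):
        cdf = (1.0 - np.exp(-beta * (grid - m_min))) / \
              (1.0 - np.exp(-beta * (m_max - m_min)))
        delta = np.sum(cdf ** n) * dm          # crude Riemann sum of [F(m)]^n
        m_new = m_obs + delta
        if abs(m_new - m_max) < tol:
            return m_new
        m_max = m_new
    return m_max

# Synthetic catalogue drawn from a truncated G-R law, purely illustrative
rng = np.random.default_rng(1)
beta_true, m_min, m_max_true = 2.0, 4.0, 7.5
u = rng.uniform(size=500)
cat = m_min - np.log(1.0 - u * (1.0 - np.exp(-beta_true * (m_max_true - m_min)))) / beta_true
print("m_max^obs =", round(cat.max(), 2),
      " KS-type estimate =", round(kijko_sellevoll_mmax(cat, m_min, beta_true), 2))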

3.
Diurnal S\(_1\) tidal oscillations in the coupled atmosphere–ocean system induce small perturbations of Earth’s prograde annual nutation, but matching geophysical model estimates of this Sun-synchronous rotation signal with the observed effect in geodetic Very Long Baseline Interferometry (VLBI) data has thus far been elusive. The present study assesses the problem from a geophysical model perspective, using four modern-day atmospheric assimilation systems and a consistently forced barotropic ocean model that dissipates its energy excess in the global abyssal ocean through a parameterized tidal conversion scheme. The use of contemporary meteorological data does not, however, guarantee accurate nutation estimates per se; two of the probed datasets produce atmosphere–ocean-driven S\(_1\) terms that deviate by more than 30 \(\upmu \)as (microarcseconds) from the VLBI-observed harmonic of \(-16.2+i113.4\) \(\upmu \)as. Partial deficiencies of these models in the diurnal band are also borne out by a validation of the air pressure tide against barometric in situ estimates, as well as by comparisons of simulated sea surface elevations with a global network of S\(_1\) tide gauge determinations. Credence is lent to the global S\(_1\) tide derived from the Modern-Era Retrospective Analysis for Research and Applications (MERRA) and the operational model of the European Centre for Medium-Range Weather Forecasts (ECMWF). When averaged over the period 2004 to 2013, their nutation contributions are estimated to be \(-8.0+i106.0\) \(\upmu \)as (MERRA) and \(-9.4+i121.8\) \(\upmu \)as (ECMWF operational), thus being virtually equivalent to the VLBI estimate. This remarkably close agreement will likely aid forthcoming nutation theories in their unambiguous a priori account of Earth’s prograde annual celestial motion.

4.
The theory of wave boundary layers (WBLs) developed by Reznik (J Mar Res 71:253–288, 2013; J Fluid Mech 747:605–634, 2014; J Fluid Mech 833:512–537, 2017) is extended to a rotating stratified fluid. In this case, the WBLs arise in the field of near-inertial oscillations (NIOs) driven by a tangential wind stress of finite duration. The near-surface Ekman layer is specified in the most general form; tangential stresses are zero at the lower boundary of the Ekman layer, and viscosity is neglected below this boundary. After the wind ceases, the Ekman pumping at the boundary becomes a linear superposition of inertial oscillations with coefficients dependent on the horizontal coordinates. The solution beneath the Ekman layer is obtained in the form of expansions in the vertical wave modes. We separate from the solution a part representing the NIO and demonstrate the development of a WBL near the Ekman layer boundary. With increasing time t, the WBL width decays in inverse proportion to \( \sqrt{t} \), and the gradients of fields in the WBL grow in proportion to \( \sqrt{t} \); most of the NIO is concentrated in the WBL. The structure of the WBL depends strongly on its horizontal scale L, which is determined by the scale of the wind stress. The shorter the NIO, the thinner and sharper the WBL; a short-wave NIO with L smaller than the baroclinic Rossby scale \( L_{R} \) does not penetrate deep into the ocean. On the contrary, for \( L \ge L_{R} \), the WBL has a smoother vertical structure, and a significant long-wave NIO signal is able to reach the oceanic bottom. An asymptotic theory of the WBL in a rotating stratified fluid is suggested.

5.
During the last 15 years, increasing attention has been paid to deriving analytic formulae for the gravitational potential and field of polyhedral mass bodies with complicated polynomial density contrasts, because such formulae can be more suitable for approximating the true mass density variations of the earth (e.g., sedimentary basins and bedrock topography) than methods that use finer volume discretization and constant density contrasts. In this study, we derive analytic formulae for gravity anomalies of arbitrary polyhedral bodies with complicated polynomial density contrasts in 3D space. The anomalous mass density is allowed to vary in both horizontal and vertical directions in a polynomial form of \(\lambda =ax^m+by^n+cz^t\), where m, n, t are nonnegative integers and a, b, c are coefficients of mass density. First, the singular volume integrals of the gravity anomalies are transformed to regular or weakly singular surface integrals over each polygon of the polyhedral body. Then, in terms of the derived singularity-free analytic formulae of these surface integrals, singularity-free analytic formulae for gravity anomalies of arbitrary polyhedral bodies with horizontal and vertical polynomial density contrasts are obtained. For an arbitrary polyhedron, we successfully derived analytic formulae of the gravity potential and the gravity field in the case of \(m\le 1\), \(n\le 1\), \(t\le 1\), and an analytic formula of the gravity potential in the case of \(m=n=t=2\). For a rectangular prism, we derive an analytic formula of the gravity potential for \(m\le 3\), \(n\le 3\) and \(t\le 3\), and closed forms of the gravity field are presented for \(m\le 1\), \(n\le 1\) and \(t\le 4\). Besides generalizing previously published closed-form solutions for cases of constant and linear mass density contrasts to higher polynomial orders, to the best of our knowledge this is the first time that closed-form solutions are presented for the gravitational potential of a general polyhedral body with quadratic density contrast in all spatial directions, and for the vertical gravitational field of a prismatic body with quartic density contrast along the vertical direction. To verify our new analytic formulae, a prismatic model with depth-dependent polynomial density contrast and a polyhedral body in the form of a triangular prism with constant contrast are tested. Excellent agreement between results of published analytic formulae and our results is achieved. Our new analytic formulae are useful tools for computing gravity anomalies of complicated mass density contrasts in the earth when the observation sites are close to the surface or within the mass bodies.
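Analytic results of this kind can in principle be cross-checked by brute-force numerical integration of Newton's law over the source volume. The sketch below does that for a rectangular prism with a depth-linear density contrast; geometry, density coefficients and grid resolution are arbitrary illustrative choices, and this is not the paper's analytic formulation.

import numpy as np

G = 6.674e-11  # m^3 kg^-1 s^-2

def gz_prism_numeric(x0, y0, z0, bounds, dens, n=80):
    """Brute-force midpoint quadrature of the vertical gravity component at
    (x0, y0, z0) produced by a rectangular prism with density dens(x, y, z).
    z is taken positive downward, so a positive result is a downward pull."""
    (x1, x2), (y1, y2), (z1, z2) = bounds
    xs = np.linspace(x1, x2, n + 1); xs = 0.5 * (xs[:-1] + xs[1:])
    ys = np.linspace(y1, y2, n + 1); ys = 0.5 * (ys[:-1] + ys[1:])
    zs = np.linspace(z1, z2, n + 1); zs = 0.5 * (zs[:-1] + zs[1:])
    dv = (x2 - x1) * (y2 - y1) * (z2 - z1) / n**3
    X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
    r3 = ((X - x0) ** 2 + (Y - y0) ** 2 + (Z - z0) ** 2) ** 1.5
    return G * np.sum(dens(X, Y, Z) * (Z - z0) / r3) * dv

# Polynomial density contrast lambda = a*x^m + b*y^n + c*z^t (illustrative numbers)
a, b, c, m, nn, t = 0.0, 0.0, 0.5, 0, 0, 1      # density increasing linearly with depth
dens = lambda x, y, z: a * x**m + b * y**nn + c * z**t

# A 1 km cube buried 100 m below an observation point at the origin
gz = gz_prism_numeric(0.0, 0.0, 0.0,
                      [(-500, 500), (-500, 500), (100, 1100)], dens)
print(f"g_z ~ {gz * 1e5:.4f} mGal")   # 1 mGal = 1e-5 m/s^2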

6.
In this study, the 11 August 2012 \( M_{w} \) 6.4 Ahar earthquake is investigated using ground motion simulation based on the stochastic finite-fault model. The earthquake occurred in northwestern Iran, causing extensive damage in the city of Ahar and surrounding areas. A network consisting of 58 acceleration stations recorded the earthquake within 8–217 km of the epicenter. Strong ground motion records from six significant, well-recorded stations close to the epicenter have been simulated. These stations are installed in areas which experienced significant structural damage and loss of life during the earthquake. The simulation is carried out using the dynamic corner frequency model of rupture propagation with the extended fault simulation program (EXSIM). For this purpose, the propagation features of shear waves, including the \( {Q}_s \) value, the kappa value \( {k}_0 \), and the soil amplification coefficients at each site, are required. The kappa values are obtained from the slope of the smoothed Fourier amplitude spectra of acceleration at higher frequencies. The determined kappa values for the vertical and horizontal components are 0.02 and 0.05 s, respectively. Furthermore, an anelastic attenuation parameter is derived from the energy decay of the seismic wave by using the continuous wavelet transform (CWT) at each station. The average frequency-dependent relation estimated for the region is \( Q=\left(122\pm 38\right){f}^{\left(1.40\pm 0.16\right)} \). Moreover, the horizontal-to-vertical spectral ratio \( H/V \) is applied to estimate the site effects at the stations. Spectral analysis of the data indicates that the best match between the observed and simulated spectra occurs for an average stress drop of 70 bars. Finally, the simulated and observed results are compared in terms of pseudo-acceleration spectra and peak ground motions. The comparison of time series and spectra shows good agreement between the observed and simulated waveforms at frequencies of engineering interest.
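Two of the ingredients can be sketched in a few lines: kappa from the high-frequency slope of the log amplitude spectrum, and evaluation of the reported regional Q(f) relation. The spectrum below is synthetic and the fitting band is an assumption, not the paper's processing choice.

import numpy as np

def estimate_kappa(freq, amp, f1=10.0, f2=25.0):
    """Kappa from the high-frequency decay of the acceleration amplitude
    spectrum: ln A(f) ~ const - pi*kappa*f over the band [f1, f2]."""
    band = (freq >= f1) & (freq <= f2)
    slope, _ = np.polyfit(freq[band], np.log(amp[band]), 1)
    return -slope / np.pi

# Synthetic spectrum with a known kappa of 0.05 s, purely to exercise the fit
f = np.linspace(0.5, 30.0, 300)
kappa_true = 0.05
noise = 1.0 + 0.05 * np.random.default_rng(2).standard_normal(f.size)
amp = 100.0 * np.exp(-np.pi * kappa_true * f) * noise
print(f"recovered kappa ~ {estimate_kappa(f, amp):.3f} s")

# Regional frequency-dependent quality factor reported in the abstract
Q = lambda freq: 122.0 * freq ** 1.40
print("Q at 1, 5, 10 Hz:", [round(Q(x), 1) for x in (1, 5, 10)])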

7.
This article deals with the right-tail behavior of a response distribution \(F_Y\) conditional on a regressor vector \({\mathbf {X}}={\mathbf {x}}\), restricted to the heavy-tailed case of Pareto-type conditional distributions \(F_Y(y|\ {\mathbf {x}})=P(Y\le y|\ {\mathbf {X}}={\mathbf {x}})\), with heaviness of the right tail characterized by the conditional extreme value index \(\gamma ({\mathbf {x}})>0\). We particularly focus on testing the hypothesis \({\mathscr {H}}_{0,tail}:\ \gamma ({\mathbf {x}})=\gamma _0\) of constant tail behavior for some \(\gamma _0>0\) and all possible \({\mathbf {x}}\). When \({\mathbf {x}}\) is considered as a time index, the term trend analysis is commonly used. In the recent past, several such trend analyses of extreme value data have been published, mostly focusing on time-varying modeling of location or scale parameters of the response distribution. In many such environmental studies, a simple test against trend based on Kendall’s tau statistic is applied. This test is powerful when the center of the conditional distribution \(F_Y(y|{\mathbf {x}})\) changes monotonically in \({\mathbf {x}}\), for instance in a simple location model \(\mu ({\mathbf {x}})=\mu _0+x\cdot \mu _1\), \({\mathbf {x}}=(1,x)'\), but it is rather insensitive to monotonic tail behavior, say \(\gamma ({\mathbf {x}})=\eta _0+x\cdot \eta _1\). This has to be considered, since for many environmental applications the main interest is in the tail rather than the center of a distribution. Our work is motivated by this problem, and our goal is to demonstrate the opportunities and the limits of detecting and estimating non-constant conditional heavy-tail behavior with regard to applications from hydrology. We present and compare four different procedures by simulation and illustrate our findings on real data from hydrology: weekly maxima of hourly precipitation from France and monthly maximal river flows from Germany.
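A minimal sketch of one standard ingredient of such procedures, the Hill estimator of γ applied within covariate bins. The data are synthetic Pareto-type samples with a linearly drifting tail index, and the binning scheme is an illustrative assumption, not one of the paper's four procedures.

import numpy as np

def hill_estimator(sample, k):
    """Hill estimate of the extreme value index gamma from the k largest values."""
    x = np.sort(np.asarray(sample, dtype=float))
    top = x[-k:]                      # k largest order statistics
    return np.mean(np.log(top / x[-k - 1]))

# Synthetic Pareto-type data whose tail index drifts with a covariate x in [0, 1]
rng = np.random.default_rng(3)
n = 5000
x = rng.uniform(size=n)
gamma_x = 0.2 + 0.3 * x                       # gamma(x) = eta0 + eta1 * x
y = rng.uniform(size=n) ** (-gamma_x)         # Pareto-type right tails

# Local Hill estimates in covariate bins: a crude look at non-constant tails
for lo in (0.0, 0.5):
    sel = (x >= lo) & (x < lo + 0.5)
    print(f"x in [{lo}, {lo + 0.5}): gamma_hat ~ {hill_estimator(y[sel], 150):.2f}")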

8.
In a previous publication, the seismicity of Japan from 1 January 1984 to 11 March 2011 (the time of the \(M9\) Tohoku earthquake occurrence) was analyzed in a time domain called natural time \(\chi\). The order parameter of seismicity in this time domain is the variance of \(\chi\) weighted by the normalized energy of each earthquake. It was found that the fluctuations of the order parameter of seismicity exhibit 15 distinct minima—deeper than a certain threshold—1 to around 3 months before the occurrence of large earthquakes in Japan during 1984–2011. Six (out of 15) of these minima were followed by all of the shallow earthquakes of magnitude 7.6 or larger during the whole period studied. Here, we show that the probability of achieving the latter result by chance is of the order of \(10^{-5}\). This conclusion is strengthened by also employing the receiver operating characteristics technique.
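A minimal sketch of the natural-time order parameter (the variance of χ weighted by normalized event energy). The toy catalogue is invented, and energies are taken proportional to 10^(1.5M) in relative units, which is an assumption for illustration.

import numpy as np

def natural_time_kappa1(energies):
    """Variance of natural time chi_k = k/N weighted by normalized event energy,
    i.e. kappa_1 = <chi^2> - <chi>^2."""
    E = np.asarray(energies, dtype=float)
    N = E.size
    chi = np.arange(1, N + 1) / N
    p = E / E.sum()
    return np.sum(p * chi**2) - np.sum(p * chi)**2

# Toy example: relative energies derived from magnitudes via E ~ 10^(1.5*M)
mags = np.array([4.1, 4.5, 4.0, 5.2, 4.3, 4.8, 4.2, 6.0, 4.4, 4.6])
print(f"kappa_1 = {natural_time_kappa1(10 ** (1.5 * mags)):.4f}")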

9.
Temperature data from SABER/TIMED and Empirical Orthogonal Function (EOF) analysis are used to examine possible modulations of the migrating diurnal tide in temperature (DW1) by the latitudinal gradient of the zonal-mean zonal wind. The results show that this gradient increases with altitude and displays clear seasonal and interannual variability. In the upper mesosphere and lower thermosphere (MLT), at latitudes between 20°N and 20°S, when the gradient strengthens (weakens) at equinoxes (solstices), the DW1 amplitude increases (decreases) simultaneously. A stronger maximum in the March–April equinox occurs in both the gradient and the DW1 amplitude. In addition, a quasi-biennial oscillation of DW1 is found to be synchronous with the gradient. These resembling spatial–temporal features suggest that the gradient in the upper tropical MLT probably plays an important role in modulating the semiannual oscillation (SAO), annual oscillation, and quasi-biennial oscillation of DW1 at the same latitude and altitude. Furthermore, the gradient in the mesosphere possibly affects the propagation of DW1 and produces the SAO of DW1 in the lower thermosphere. Thus, the SAO of DW1 in the upper MLT may be a combined effect of the gradient both in the mesosphere and in the upper MLT, which modeling studies should determine in the future.
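A minimal sketch of the EOF step via singular value decomposition, applied to a synthetic (time × space) anomaly matrix standing in for the SABER temperature field; the data and mode count are illustrative only.

import numpy as np

def eof_analysis(field, n_modes=3):
    """EOFs of a (time, space) anomaly matrix via SVD: returns spatial patterns,
    principal-component time series, and explained variance fractions."""
    anom = field - field.mean(axis=0)           # remove the time mean
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    pcs = u[:, :n_modes] * s[:n_modes]          # time series of each mode
    eofs = vt[:n_modes]                         # spatial patterns
    var_frac = s[:n_modes] ** 2 / np.sum(s ** 2)
    return eofs, pcs, var_frac

# Synthetic stand-in for gridded temperature anomalies (120 months x 200 points)
rng = np.random.default_rng(4)
t = np.arange(120)
pattern = np.sin(np.linspace(0, np.pi, 200))
data = np.outer(np.cos(2 * np.pi * t / 12), pattern) + 0.3 * rng.standard_normal((120, 200))
eofs, pcs, var_frac = eof_analysis(data)
print("explained variance of leading modes:", np.round(var_frac, 2))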

10.
In this work, we map the absorption properties of the French crust by analyzing the decay properties of coda waves. Estimation of the coda quality factor \(Q_{c}\) in five non-overlapping frequency bands between 1 and 32 Hz is performed for more than 12,000 high-quality seismograms from about 1700 weak to moderate crustal earthquakes recorded between 1995 and 2013. Based on a sensitivity analysis, \(Q_{c}\) is subsequently approximated as an integral of the intrinsic shear wave quality factor \(Q_{i}\) along the ray connecting the source to the station. After discretization of the medium on a 2-D Cartesian grid, this yields a linear inverse problem for the spatial distribution of \(Q_{i}\). The solution is approximated by redistributing \(Q_{c}\) over the pixels connecting the source to the station and averaging over all paths. This simple procedure makes it possible to obtain frequency-dependent maps of apparent absorption that show lateral variations of \(50\%\) at length scales ranging from 50 km to 150 km, in all the frequency bands analyzed. At low frequency, the small-scale geological features of the crust are clearly delineated: the Meso-Cenozoic basins (Aquitaine, Brabant, Southeast) appear as strong absorption regions, while crystalline massifs (Armorican, Central Massif, Alps) appear as low absorption zones. At high frequency, the correlation between the surface geological features and the absorption map disappears, except for the deepest Meso-Cenozoic basins, which exhibit a strong absorption signature. Based on the tomographic results, we explore the implications of lateral variations of absorption for the analysis of both instrumental and historical seismicity. The main conclusions are as follows: (1) the current local magnitude \(M_{L}\) can be over- (resp. under-) estimated when absorption is weaker (resp. stronger) than the nominal value assumed in the amplitude–distance relation; (2) both the forward prediction of the earthquake macroseismic intensity field and the estimation of historical earthquake seismological parameters using macroseismic intensity data are significantly improved by taking into account a realistic 2-D distribution of absorption. In the future, both \(M_{L}\) estimation and macroseismic intensity attenuation models should benefit from high-resolution models of frequency-dependent absorption such as the one produced in this study.
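A minimal sketch of a single-station Q_c measurement under the common single-backscattering coda model A(t) ∝ t^-1 exp(-π f t / Q_c). The envelope below is synthetic and the model choice is an assumption, not necessarily the paper's exact parameterization.

import numpy as np

def coda_qc(t, envelope, freq):
    """Coda quality factor Q_c from the single-backscattering decay model
    A(t) ~ t^-1 * exp(-pi*f*t/Qc): linear fit of ln(A*t) against lapse time t."""
    y = np.log(envelope * t)
    slope, _ = np.polyfit(t, y, 1)
    return -np.pi * freq / slope

# Synthetic coda envelope at 4 Hz with a known Q_c of 600 (illustrative only)
f0, qc_true = 4.0, 600.0
t = np.linspace(20.0, 80.0, 200)                 # lapse-time window (s)
rng = np.random.default_rng(5)
env = 50.0 / t * np.exp(-np.pi * f0 * t / qc_true) * np.exp(0.05 * rng.standard_normal(t.size))
print(f"recovered Q_c ~ {coda_qc(t, env, f0):.0f}")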

11.
This paper introduces a portfolio approach for quantifying pollution risk in the presence of PM\(_{2.5}\) concentration in cities. The model used is based on a copula dependence structure. To assess the model parameters, we analyze a limited dataset of PM\(_{2.5}\) levels for Beijing, Tianjin, Chengde, Hengshui, and Xingtai. This process reveals the best fit to be a t-copula dependence structure with generalized hyperbolic marginal distributions for the PM\(_{2.5}\) log-ratios of the cities. Furthermore, we show how to efficiently simulate the risk measures clean-air-at-risk and conditional clean-air-at-risk using importance sampling and stratified importance sampling. Our numerical results show that clean-air-at-risk at the 0.01 probability level reaches up to \(352\,{\mu \hbox {g}\,\hbox {m}^{-3}}\) (initial PM\(_{2.5}\) concentrations of the cities are assumed to be \(100\,{\mu \hbox {g}\,\hbox {m}^{-3}}\)) for the constructed sample portfolio, and that the proposed methods are much more efficient than naive simulation for computing the exceedance probabilities and conditional excesses.
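The variance-reduction idea can be illustrated on a much simpler problem than the paper's t-copula portfolio: estimating a small Gaussian exceedance probability with a mean-shifted importance-sampling proposal. All numbers below are illustrative.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
threshold, n = 3.5, 100_000          # P(X > 3.5) for X ~ N(0,1) is ~2.3e-4

# Naive Monte Carlo
x = rng.standard_normal(n)
p_naive = np.mean(x > threshold)

# Importance sampling: propose from N(threshold, 1) and reweight each sample
# by the likelihood ratio phi(y) / phi(y - threshold)
y = rng.standard_normal(n) + threshold
w = norm.pdf(y) / norm.pdf(y, loc=threshold)
p_is = np.mean((y > threshold) * w)

print(f"exact      {norm.sf(threshold):.3e}")
print(f"naive MC   {p_naive:.3e}")
print(f"importance {p_is:.3e}")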

12.
Vulnerability maps are designed to show areas with the greatest potential for groundwater contamination on the basis of hydrogeological conditions and human impacts. The objectives of this research are (1) to assess groundwater vulnerability using the DRASTIC method and (2) to improve the DRASTIC method for the evaluation of groundwater contamination risk using AI methods, namely ANN, SFL, MFL, NF and SCMAI approaches. This optimization method is illustrated using a case study. For this purpose, the DRASTIC model is developed using seven parameters. For validating the contamination risk assessment, a total of 243 groundwater samples were collected from the different aquifer types of the study area to analyze the \( {\text{NO}}_{ 3}^{ - } \) concentration. To develop the AI and committee machine models, the 243 data points are divided into two sets, training and validation, based on a cross-validation approach. The vulnerability indices calculated from the DRASTIC method are corrected by the \( {\text{NO}}_{3}^{ - } \) data used in the training step. The input data of the AI models comprise the seven parameters of the DRASTIC method, while the output is the vulnerability index corrected using the \( {\text{NO}}_{3}^{ - } \) concentration data from the study area, which is called the groundwater contamination risk. In other words, there is a target value (known output) which is estimated by a formula from the DRASTIC vulnerability and \( {\text{NO}}_{3}^{ - } \) concentration values. After model training, the AI models are verified against the second \( {\text{NO}}_{3}^{ - } \) concentration dataset. The results revealed that NF and SFL produced acceptable performance, while ANN and MFL had poor prediction. A supervised committee machine artificial intelligence (SCMAI) model, which combines the results of the individual AI models using a supervised artificial neural network, was developed for better prediction of vulnerability. The performance of SCMAI was also compared to those of the simple averaging and weighted averaging committee machine intelligence (CMI) methods. As a result, the SCMAI model produced reliable estimates of groundwater contamination risk.
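The DRASTIC index itself is a weighted sum of seven rated parameters. A minimal sketch follows, using the commonly cited standard weights and invented ratings for one grid cell; it does not reproduce the study's data or its AI correction step.

# Minimal sketch of the DRASTIC vulnerability index for a single grid cell.
drastic_weights = {
    "Depth_to_water": 5, "net_Recharge": 4, "Aquifer_media": 3,
    "Soil_media": 2, "Topography": 1, "Impact_vadose": 5, "Conductivity": 3,
}
cell_ratings = {           # hypothetical ratings (1-10) for one cell
    "Depth_to_water": 7, "net_Recharge": 6, "Aquifer_media": 8,
    "Soil_media": 5, "Topography": 9, "Impact_vadose": 6, "Conductivity": 4,
}
index = sum(drastic_weights[k] * cell_ratings[k] for k in drastic_weights)
print("DRASTIC index =", index)      # theoretical range with these weights: 23-230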

13.
The natural spectrum of electromagnetic variations surrounding Earth extends across an enormous frequency range and is controlled by diverse physical processes. Electromagnetic (EM) induction studies make use of external field variations with frequencies ranging from the solar cycle, which has been used for geomagnetic depth sounding, through the 10\(^{-4}\)–10\(^4\) Hz frequency band widely used for magnetotelluric and audio-magnetotelluric studies. Above 10\(^4\) Hz, the EM spectrum is dominated by man-made signals. This review emphasizes electromagnetic sources at \(\sim\)1 Hz and higher, describing major differences in the physical origin and structure of short- and long-period signals. The essential role of Earth’s internal magnetic field in defining the magnetosphere through its interactions with the solar wind and interplanetary magnetic field is briefly outlined. At its lower boundary, the magnetosphere is engaged in two-way interactions with the underlying ionosphere and neutral atmosphere. Extremely low-frequency (3 Hz–3 kHz) electromagnetic signals are generated in the form of sferics, lightning, and whistlers, which can extend to frequencies as high as the VLF range (3–30 kHz). The roughly spherical dielectric cavity bounded by the ground and the ionosphere produces the Schumann resonance at around 8 Hz, together with its harmonics. A transverse resonance also occurs at 1.7–2.0 kHz, arising from reflection off the variable-height lower boundary of the ionosphere and exhibiting line splitting due to three-dimensional structure. Ground and satellite observations are discussed in the light of their contributions to understanding the global electric circuit and to EM induction studies.

14.
Recent publications on the regression between earthquake magnitudes assume that both magnitudes are affected by error and that only the ratio of the error variances is known. If X and Y represent observed magnitudes, and x and y represent the corresponding theoretical values, the problem is to find the a and b of the best-fit line \(y = a x + b\). This problem has a closed solution only for homoscedastic errors (their variances are all equal for each of the two variables). The published solution was derived using a method that cannot provide a sum of squares of residuals; therefore, it is not possible to compare the goodness of fit for different pairs of magnitudes. Furthermore, that method does not provide expressions for the x and y. The least-squares method introduced here does not have these drawbacks. The two methods of solution result in the same equations for a and b. General properties of a that have been discussed in the literature but not proved, or proved only for particular cases, are derived here. A comparison of different expressions for the variances of a and b is provided. The paper also considers the statistical aspects of the ongoing debate regarding the prediction of y given X. Analysis of actual data from the literature shows that a new approach produces an average improvement of less than 0.1 magnitude units over the standard approach when applied to \(M_{w}\) vs. \(m_{b}\) and \(M_{w}\) vs. \(M_{S}\) regressions. This improvement is minor, within the typical error of \(M_{w}\). Moreover, a test subset of 100 predicted magnitudes shows that the new approach results in magnitudes closer to the theoretically true magnitudes for only 65% of them; for the remaining 35%, the standard approach produces closer values. Therefore, the new approach does not always give the most accurate magnitude estimates.
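The paper develops its own least-squares treatment; for orientation, the sketch below shows the textbook Deming (general orthogonal-regression) slope for the same error model, i.e. errors on both magnitudes with a known variance ratio, applied to synthetic data. Variable names and numbers are illustrative assumptions.

import numpy as np

def deming_fit(X, Y, delta=1.0):
    """Slope and intercept of the errors-in-both-variables straight-line fit
    (Deming regression); delta is the assumed ratio of the error variance in Y
    to the error variance in X."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    xm, ym = X.mean(), Y.mean()
    sxx = np.mean((X - xm) ** 2)
    syy = np.mean((Y - ym) ** 2)
    sxy = np.mean((X - xm) * (Y - ym))
    slope = (syy - delta * sxx + np.sqrt((syy - delta * sxx) ** 2
             + 4.0 * delta * sxy ** 2)) / (2.0 * sxy)
    intercept = ym - slope * xm
    return slope, intercept

# Synthetic mb / Mw pairs with noise on both magnitudes (illustrative only)
rng = np.random.default_rng(7)
mb_true = rng.uniform(4.0, 6.5, 300)
Mw_true = 1.1 * mb_true - 0.4
mb_obs = mb_true + 0.10 * rng.standard_normal(300)
Mw_obs = Mw_true + 0.10 * rng.standard_normal(300)
print("slope, intercept ~", np.round(deming_fit(mb_obs, Mw_obs, delta=1.0), 3))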

15.
To define the seismic input in non-liquefiable soils, current seismic standards allow local site effects to be treated using a simplified approach. This method is generally based on the introduction of an appropriate number of soil categories with associated soil factors that modify the shape of the elastic acceleration response spectrum computed at rocky (i.e. stiff) sites. Although this approach is highly debated among researchers, it is extensively used in practice because of its simplicity. As a matter of fact, for standard projects, this method is the driving approach for the definition of the seismic input. Nevertheless, recent empirical and numerical studies have raised doubts about the reliability and safety of the simplified approach, in view of the tendency of the current soil factors of the Italian and European building codes to underestimate the acceleration at the free surface of the soil deposit. On the other hand, for certain soil classes, the current soil factors seem to overestimate ground amplification. Furthermore, the occurrence of soil nonlinearity, whose magnitude is linked to both soil type and level of seismic intensity, highlights the fallacy of using constant soil factors for sites with different seismic hazard. The objective of this article is to propose a methodology for the definition of hazard-dependent soil factors and, simultaneously, to quantify the reliability of the coefficients specified in the current versions of Eurocode 8 (CEN 2005) and the Italian Building Code (NTC8 2008 and revision NTC18 2018). One of the most important outcomes of this study is the quantification of the relevance of soil nonlinearity through the definition of empirical relationships between soil factors and peak ground acceleration at outcropping rock sites with a flat topographic surface (reference condition).

16.
The first part of this paper reviews methods using effective solar indices to update a background ionospheric model, focusing on those employing the Kriging method to perform the spatial interpolation. It then proposes a method to update the International Reference Ionosphere (IRI) model through the assimilation of data collected by a European ionosonde network. The method, called International Reference Ionosphere UPdate (IRI UP), which can potentially operate in real time, is mathematically described and validated for the period 9–25 March 2015 (a time window including the well-known St. Patrick’s Day storm that occurred on 17 March), using the IRI and IRI Real Time Assimilative Model (IRTAM) models as references. It relies on the foF2 and M(3000)F2 ionospheric characteristics, recorded routinely by a network of 12 European ionosonde stations, which are used to calculate for each station effective values of the IRI indices \(IG_{12}\) and \(R_{12}\) (identified as \(IG_{{12{\text{eff}}}}\) and \(R_{{12{\text{eff}}}}\)); then, starting from this discrete dataset of values, two-dimensional (2D) maps of \(IG_{{12{\text{eff}}}}\) and \(R_{{12{\text{eff}}}}\) are generated through the universal Kriging method. Five variogram models are proposed and tested statistically to select the best performer for each effective index. The computed maps of \(IG_{{12{\text{eff}}}}\) and \(R_{{12{\text{eff}}}}\) are then used in the IRI model to synthesize updated values of foF2 and hmF2. To evaluate the ability of the proposed method to reproduce the rapid local changes that are common under disturbed conditions, quality metrics are calculated for the IRI, IRI UP, and IRTAM models at two test stations whose measurements were not assimilated in IRI UP, Fairford (51.7°N, 1.5°W) and San Vito (40.6°N, 17.8°E). The proposed method turns out to be very effective under highly disturbed conditions, with significant improvements in the representation of foF2 and noticeable improvements in that of hmF2. Important improvements are also verified for quiet and moderately disturbed conditions. A visual analysis of foF2 and hmF2 maps highlights the ability of the IRI UP method to capture small-scale changes occurring under disturbed conditions which are not seen by IRI.
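The spatial interpolation step can be illustrated with a small ordinary-kriging routine using an exponential variogram. Station coordinates, effective-index values and variogram parameters below are invented, and the paper itself uses universal Kriging with five candidate variogram models, so this is a simplified stand-in.

import numpy as np

def exp_variogram(h, nugget=0.0, sill=1.0, rang=10.0):
    """Exponential variogram model gamma(h)."""
    return nugget + sill * (1.0 - np.exp(-h / rang))

def ordinary_kriging(xy_obs, z_obs, xy_new, **vario):
    """Ordinary kriging of scattered values z_obs at locations xy_obs onto
    the points xy_new, using the exponential variogram above."""
    n = len(z_obs)
    d_oo = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_variogram(d_oo, **vario)
    A[n, n] = 0.0                                  # Lagrange multiplier row/column
    est = np.empty(len(xy_new))
    for i, p in enumerate(xy_new):
        d = np.linalg.norm(xy_obs - p, axis=1)
        b = np.append(exp_variogram(d, **vario), 1.0)
        w = np.linalg.solve(A, b)
        est[i] = np.dot(w[:n], z_obs)
    return est

# Toy "effective index" values at a handful of station locations (lon, lat in degrees)
stations = np.array([[-1.5, 51.7], [17.8, 40.6], [2.3, 48.7],
                     [12.5, 41.9], [8.5, 47.4], [14.3, 52.1]])
ig_eff = np.array([92.0, 101.0, 95.0, 99.0, 96.0, 93.0])
grid = np.array([[5.0, 45.0], [10.0, 50.0]])
print("kriged IG12_eff:", np.round(ordinary_kriging(stations, ig_eff, grid,
                                                    sill=20.0, rang=15.0), 1))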

17.
In situ, airborne and satellite measurements are used to characterize the structure of water vapor in the lower tropical troposphere—below the height, \(z_*\), of the triple-point isotherm, \(T_*\). The measurements are evaluated in light of our understanding of how lower-tropospheric water vapor influences clouds, convection and circulation, through both radiative and thermodynamic effects. Lower-tropospheric water vapor, which concentrates in the first few kilometers above the boundary layer, controls the radiative cooling profile of the boundary layer and lower troposphere. Elevated moist layers originating from a preferred level of convective detrainment induce a profile of radiative cooling that drives circulations which reinforce such features. A theory for this preferred level of cumulus termination is advanced, whereby the difference between \(T_*\) and the temperature at which primary ice forms gives a ‘first-mover advantage’ to glaciating cumulus convection, thereby concentrating the regions of the deepest convection and leading to more clouds and moisture near the triple point. A preferred level of convective detrainment near \(T_*\) implies relative humidity reversals below \(z_*\) which are difficult to identify using retrievals from satellite-borne microwave and infrared sounders. Isotopologue retrievals provide a hint of such features, and their ability to constrain the structure of the vertical humidity profile merits further study. Nonetheless, it will likely remain challenging to resolve dynamically important aspects of the vertical structure of water vapor from space using only passive sensors.

18.
In the framework of the SIGMA project, a study was launched to develop a parametric earthquake catalog for the historical period, covering the French metropolitan territory and calibrated in Mw. A set of candidate calibration events was selected, corresponding to earthquakes felt over a part of the French metropolitan territory which are fairly well documented both in terms of macroseismic intensity distributions (SisFrance BRGM-EDF-IRSN) and magnitude estimates. The detailed analysis of the macroseismic data led us to retain only 30 events out of 65, with Mw ranging from 3.6 to 5.8. In order to supplement the dataset with data from larger magnitude events, Italian earthquakes were also considered (11 events after 1900 with Mw ≥ 6.0, out of 15 in total), using both the DBMI11 macroseismic database (Locati et al. in Seismol Res Lett 85(3):727–734, 2014) and the parametric information from CPTI11 (Rovida et al. in CPTI11, la versione 2011 del Catalogo Parametrico dei Terremoti Italiani, Istituto Nazionale di Geofisica e Vulcanologia, Milano, Bologna, 2011. https://doi.org/10.6092/ingv.it-cpti11). To avoid introducing bias related to the differences in intensity scales (MSK vs. MCS), only intensities smaller than or equal to VII were considered (Traversa et al. in On the use of cross-border macroseismic data to improve the estimation of past earthquakes seismological parameters, 2014). Mw and depth metadata were defined according to the Si-Hex catalogue (Cara et al. in Bull Soc Géol Fr 186:3–19, 2015. https://doi.org/10.2113/qssqfbull.186.1.3), published information, and the specific work conducted within SIGMA on early instrumental recordings (Benjumea et al. in Study of instrumented earthquakes that occurred during the first part of the 20th century (1905–1962), 2015). For the depth estimates, we also performed a macroseismic analysis to evaluate the range of plausible values and check the consistency of the solutions. Uncertainties on the metadata of the calibration earthquakes were evaluated using the range of available alternative estimates. The intensity attenuation models were developed using a one-step maximum likelihood scheme. Several mathematical formulations and sub-datasets were considered to evaluate the robustness of the results (similarly to Baumont and Scotti in Accounting for data and modeling uncertainties in empirical macroseismic predictive equations (EMPEs). Towards “European” EMPEs based on SISFRANCE, DBMI, ECOS macroseismic database, 2008). In particular, as the region of interest may be characterized by significant laterally varying attenuation properties (Bakun and Scotti in Geophys J Int 164:596–610, 2006; Gasperini in Bull Seismol Soc Am 91:826–841, 2001), we introduced regional attenuation terms to account for this variability. Two zonation schemes were tested, one at the national scale (France/Italy), the other at the regional scale based on the studies of Mayor et al. (Bull Earthq Eng, 2017. https://doi.org/10.1007/s10518-017-0124-8) for France and Gasperini (2001) for Italy. Between-event and within-event residuals were analyzed in detail to identify the best models, that is, the ones associated with the best misfit and the most limited residual trends with intensity and distance. This analysis led us to select four sets of models for which no significant trend in the between- and within-event residuals is detected. These models are considered to be valid over a wide range of Mw, covering approximately 3.5–7.0.

19.
Rapid magnitude estimation relations for earthquake early warning systems in the Alborz region have been developed based on the initial seconds of the P-wave arrival. For this purpose, a total of 717 accelerograms recorded by the Building and Housing Research Center in the Alborz region, with magnitudes (Mw) in the range 4.8–6.5 and covering the period 1995–2013, were employed. The average ground motion period (\( \tau_{\text{c}} \)) and the peak displacement (\( P_{\text{d}} \)) in different time windows from the P-wave arrival were calculated, and their relations with magnitude were examined. Four earthquakes that were excluded from the analysis were used to validate the results, and the estimated magnitudes were found to be in good agreement with the observed ones. The results show that, using the proposed relations for the Alborz region, earthquake magnitude can be estimated with acceptable accuracy even from only 1 s of data after the P-wave arrival.
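A minimal sketch of how τc and Pd are typically computed from the first seconds of P-wave displacement; the regression coefficients linking them to magnitude are the paper's result and are not reproduced here, and the waveform below is synthetic.

import numpy as np

def tau_c_and_pd(displacement, dt, window=3.0):
    """Average period tau_c and peak displacement P_d from the first `window`
    seconds after the P arrival: tau_c = 2*pi*sqrt(int u^2 dt / int u_dot^2 dt)."""
    n = int(window / dt)
    u = np.asarray(displacement[:n], dtype=float)
    v = np.gradient(u, dt)                       # numerical time derivative of u
    tau_c = 2.0 * np.pi * np.sqrt((np.sum(u**2) * dt) / (np.sum(v**2) * dt))
    return tau_c, np.max(np.abs(u))

# Synthetic displacement pulse standing in for the first seconds after the P arrival
dt = 0.01
t = np.arange(0.0, 3.0, dt)
u = 0.002 * np.sin(2.0 * np.pi * t / 1.2) * np.exp(-t / 2.0)   # metres
tau_c, p_d = tau_c_and_pd(u, dt)
print(f"tau_c ~ {tau_c:.2f} s, P_d ~ {p_d * 100:.3f} cm")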

20.
We review joint inversion studies of the rupture processes of significant earthquakes, defining a joint inversion in earthquake source imaging as a source inversion of multiple kinds of datasets (waveform, geodetic, or tsunami). Yoshida and Koketsu (Geophys J Int 103:355–362, 1990) and Wald and Heaton (Bull Seismol Soc Am 84:668–691, 1994) independently initiated joint inversion methods, finding that joint inversion provides more reliable rupture process models than single-dataset inversion, which led to an increase in joint inversion studies. A list of these studies was compiled using the finite-source rupture model database (Mai and Thingbaijam in Seismol Res Lett 85:1348–1357, 2014). Outstanding issues regarding joint inversion are also discussed.
