Similar Documents (20 records)
1.
We reviewed joint inversion studies of the rupture processes of significant earthquakes, defining a joint inversion in earthquake source imaging as a source inversion of multiple kinds of datasets (waveform, geodetic, or tsunami). Yoshida and Koketsu (Geophys J Int 103:355–362, 1990) and Wald and Heaton (Bull Seismol Soc Am 84:668–691, 1994) independently initiated joint inversion methods, finding that joint inversion provides more reliable rupture process models than single-dataset inversion, which led to an increase in joint inversion studies. A list of these studies was compiled using the finite-source rupture model database (Mai and Thingbaijam in Seismol Res Lett 85:1348–1357, 2014). Outstanding issues regarding joint inversion are also discussed.

2.
The destructive Pacific Ocean tsunami generated off the east coast of Honshu, Japan, on 11 March 2011 prompted the West Coast and Alaska Tsunami Warning Center (WCATWC) to issue a tsunami warning and advisory for the coastal regions of Alaska, British Columbia, Washington, Oregon, and California. Estimating the length of time the warning or advisory would remain in effect proved difficult. To address this problem, the WCATWC developed a technique to estimate the amplitude decay of a tsunami recorded at tide stations within the Warning Center’s Area of Responsibility (AOR). At many sites along the West Coast of North America, tsunami wave amplitudes decay exponentially following the arrival of the maximum wave (Mofjeld et al., Nat Hazards 22:71–89, 2000). To estimate the time it will take before wave amplitudes drop to safe levels, the real-time tide gauge data are filtered to remove the effects of tidal variations. The analytic envelope is computed, and a 2 h sequence of amplitude values following the tsunami peak is used to obtain a least-squares fit to an exponential function. This yields a decay curve which is then combined with an average West Coast decay function to provide an initial tsunami amplitude-duration forecast. This information may then be provided to emergency managers to assist with response planning.
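The decay-forecast step described above can be sketched in a few lines: fit the de-tided envelope amplitudes to an exponential in log space, then solve for the time at which the fitted envelope drops below a safe threshold. The data, the 7.5 h decay constant, and the 0.3 m safe level below are purely illustrative; the actual WCATWC procedure additionally blends in an average West Coast decay function.

```python
import numpy as np

def fit_decay(t, env):
    """Least-squares fit of env(t) ~ A0 * exp(-t / tau), done in log space."""
    slope, intercept = np.polyfit(t, np.log(env), 1)
    return np.exp(intercept), -1.0 / slope  # (A0, tau)

def time_to_safe_level(A0, tau, safe_amp):
    """Hours until the fitted envelope drops below safe_amp."""
    return tau * np.log(A0 / safe_amp)

# Synthetic 2 h envelope sampled every minute after the tsunami peak
t = np.arange(0, 2.0, 1.0 / 60.0)   # hours since peak
env = 1.8 * np.exp(-t / 7.5)        # metres; assumed tau = 7.5 h
A0, tau = fit_decay(t, env)
print(round(tau, 2))                              # → 7.5
print(round(time_to_safe_level(A0, tau, 0.3), 1))  # ≈ 13.4 h
```

Because the fit is linear in log amplitude, a simple `polyfit` suffices; with real, noisy gauge data the same two-parameter fit is applied to the analytic envelope after tidal filtering.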

3.
When the stability of a sharply stratified shear flow is studied, the density profile is usually taken to be stepwise, and the weak stratification between pycnoclines is neglected. As a consequence, two-sided neutral curves appear in the instability domain of the flow, such that the waves corresponding to them are neutrally stable whereas the neighboring waves on either side of the curve are unstable, in contrast with the classical result of Miles (J Fluid Mech 16:209–227, 1963), who proved that in stratified flows unstable oscillations can occur only on one side of the neutral curve. In this paper, the contradiction is resolved, and the changes in the flow stability pattern under the transition from a model stepwise to a continuous density profile are analyzed. On this basis, a simple self-consistent algorithm is proposed for studying the stability of sharply stratified shear flows with a continuous density variation and an arbitrary monotonic velocity profile without inflection points. Because our calculations and the algorithm are both based on a method of stability analysis (Churilov, J Fluid Mech 539:25–55, 2005; ibid 617:301–326, 2008) that differs essentially from those usually used, the paper starts with a brief review of the method and the results obtained with it.

4.
A vulnerability analysis of c. 300 unreinforced masonry churches in New Zealand is presented. The analysis uses a recently developed vulnerability index method (Cattari et al. in Proceedings of the New Zealand Society for Earthquake Engineering NZSEE 2015 conference, Rotorua, New Zealand, 2015a, b; SECED 2015 conference: earthquake risk and engineering towards a resilient world, Cambridge; Goded et al. in Vulnerability analysis of unreinforced masonry churches (EQC 14/660)—final report, 2016; Lagomarsino et al. in Bull Earthq Eng, 2018), specifically designed for New Zealand churches and based on a widely tested approach for European historical buildings. It consists of a macroseismic approach in which the seismic hazard is defined by the intensity and correlated to post-seismic damage. The many differences in typology between New Zealand and European churches, with very simple architectural designs and a majority of one-nave churches in New Zealand, justified the need to develop a method specifically created for this country. A statistical analysis of the churches damaged during the 2010–2011 Canterbury earthquake sequence was previously carried out to develop the vulnerability index modifiers for New Zealand churches. This new method has been applied to generate seismic scenarios for each church, based on the most likely seismic event for a 500-year return period, using the latest version of New Zealand’s National Seismic Hazard Model. Results show that highly vulnerable churches (e.g. stone churches and/or those with a weak structural design) tend to sustain higher expected damage even when the intensity level is lower than that for less vulnerable churches in areas with slightly higher seismicity. The results of this paper provide a preliminary tool to identify buildings requiring in-depth structural analyses.
This paper is a first step towards a vulnerability analysis of all the historical buildings in the country, in order to preserve New Zealand’s cultural and historical heritage.

5.
In the framework of the SIGMA project, a study was launched to develop a parametric earthquake catalog for the historical period, covering the French metropolitan territory and calibrated in Mw. A set of candidate calibration events was selected, corresponding to earthquakes felt over a part of the French metropolitan territory that are fairly well documented both in terms of macroseismic intensity distributions (SisFrance BRGM-EDF-IRSN) and magnitude estimates. The detailed analysis of the macroseismic data led us to retain only 30 events out of 65, with Mw ranging from 3.6 to 5.8. In order to supplement the dataset with data from larger-magnitude events, Italian earthquakes were also considered (11 events after 1900 with Mw ≥ 6.0, out of 15 in total), using both the DBMI11 macroseismic database (Locati et al. in Seismol Res Lett 85(3):727–734, 2014) and the parametric information from CPTI11 (Rovida et al. in CPTI11, la versione 2011 del Catalogo Parametrico dei Terremoti Italiani, Istituto Nazionale di Geofisica e Vulcanologia, Milano, Bologna, 2011. https://doi.org/10.6092/ingv.it-cpti11). To avoid introducing bias related to differences in intensity scales (MSK vs. MCS), only intensities smaller than or equal to VII were considered (Traversa et al. in On the use of cross-border macroseismic data to improve the estimation of past earthquakes seismological parameters, 2014). Mw and depth metadata were defined according to the Si-Hex catalogue (Cara et al. in Bull Soc Géol Fr 186:3–19, 2015. https://doi.org/10.2113/gssgfbull.186.1.3), published information, and the specific work conducted within SIGMA on early instrumental recordings (Benjumea et al. in Study of instrumented earthquakes that occurred during the first part of the 20th century (1905–1962), 2015). For the depth estimates, we also performed a macroseismic analysis to evaluate the range of plausible estimates and check the consistency of the solutions.
Uncertainties on the metadata of the calibration earthquakes were evaluated using the range of available alternative estimates. The intensity attenuation models were developed using a one-step maximum likelihood scheme. Several mathematical formulations and sub-datasets were considered to evaluate the robustness of the results (similarly to Baumont and Scotti in Accounting for data and modeling uncertainties in empirical macroseismic predictive equations (EMPEs). Towards “European” EMPEs based on SISFRANCE, DBMI, ECOS macroseismic database, 2008). In particular, as the region of interest may be characterized by significant laterally varying attenuation properties (Bakun and Scotti in Geophys J Int 164:596–610, 2006; Gasperini in Bull Seismol Soc Am 91:826–841, 2001), we introduced regional attenuation terms to account for this variability. Two zonation schemes were tested, one at the national scale (France/Italy) and another at the regional scale based on the studies of Mayor et al. (Bull Earthq Eng, 2017. https://doi.org/10.1007/s10518-017-0124-8) for France and Gasperini (2001) for Italy. Between-event and within-event residuals were analyzed in detail to identify the best models, that is, the ones associated with the best misfit and the most limited residual trends with intensity and distance. This analysis led us to select four sets of models for which no significant trend in the between- and within-event residuals is detected. These models are considered valid over a wide range of Mw, covering ~3.5–7.0.
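As an illustration of the attenuation-model calibration described above, the sketch below inverts a simple intensity prediction equation of the form I = a + b·Mw + c·log10(R) from synthetic, noise-free data. The functional form and coefficient values are hypothetical placeholders; the SIGMA study used a one-step maximum likelihood scheme with regional attenuation terms, not this plain least-squares fit.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic macroseismic dataset with assumed "true" coefficients
a_true, b_true, c_true = 1.5, 1.4, -3.0
mw = rng.uniform(3.5, 7.0, 500)          # moment magnitudes
r = rng.uniform(5.0, 300.0, 500)         # distances, km
intensity = a_true + b_true * mw + c_true * np.log10(r)

# One-step least-squares inversion for (a, b, c)
G = np.column_stack([np.ones_like(mw), mw, np.log10(r)])
coeffs, *_ = np.linalg.lstsq(G, intensity, rcond=None)
print(np.round(coeffs, 2))  # recovers approximately [1.5, 1.4, -3.0]
```

With real intensity data the inversion would additionally carry per-event terms and regional attenuation coefficients, and the between-/within-event residuals of the fit would be inspected exactly as described in the abstract.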

6.
The transition from symmetric to baroclinic instability in the Eady model
Here, we explore the transition from symmetric instability to ageostrophic baroclinic instability in the Eady model, an idealised representation of a submesoscale mixed-layer front. We revisit the linear stability problem considered by Stone (J Atmos Sci 23:390–400, 1966; J Atmos Sci 27:721–726, 1970; J Atmos Sci 29:419–426, 1972), with a particular focus on three-dimensional ‘mixed modes’ (which are neither purely symmetric nor purely baroclinic), and find that these modes can have growth rates within just a few percent of the corresponding two-dimensional growth rate maximum. In addition, we perform very high resolution numerical simulations allowing an exploration of the transition from symmetric to baroclinic instability. Three-dimensional mixed modes represent the largest contribution to the turbulent kinetic energy during the transition period between symmetric and baroclinic instability. In each simulation, we see the development of sharp fronts with associated high rms vertical velocities of up to 30 mm s−1. Furthermore, we see significant transfer of energy to small scales, demonstrated by time-integrated mixing and energy dissipation by small-scale three-dimensional turbulence totalling about 30% of the initial kinetic energy in all cases.

7.
We summarize the main elements of a ground-motion model, built in a three-year effort within the Earthquake Model of the Middle East (EMME) project. Together with the earthquake source models, the ground-motion models are used for a probabilistic seismic hazard assessment (PSHA) of a region covering eleven countries: Afghanistan, Armenia, Azerbaijan, Cyprus, Georgia, Iran, Jordan, Lebanon, Pakistan, Syria, and Turkey. Given the wide variety of ground-motion predictive models, selecting the appropriate ones for modeling the intrinsic epistemic uncertainty can be challenging. In this respect, we provide a strategy for ground-motion model selection based on data-driven testing and sensitivity analysis. Our testing procedure highlights the models of good performance in terms of both data-driven and non-data-driven testing criteria. The former aims at measuring the match between the ground-motion data and the prediction of each model, whereas the latter aims at identifying discrepancies between the models. The selected set of ground-motion models was directly used in the sensitivity analyses that eventually led to decisions on the final logic tree structure. The strategy described in detail hereafter was successfully applied to shallow active crustal regions, and the final logic tree consists of four models (Akkar and Çağnan in Bull Seismol Soc Am 100:2978–2995, 2010; Akkar et al. in Bull Earthquake Eng 12(1):359–387, 2014; Chiou and Youngs in Earthq Spectra 24:173–215, 2008; Zhao et al. in Bull Seismol Soc Am 96:898–913, 2006). For other tectonic provinces in the considered region (i.e., subduction), we adopted the predictive models selected within the 2013 Euro-Mediterranean Seismic Hazard Model (Woessner et al. in Bull Earthq Eng 13(12):3553–3596, 2015). Finally, we believe that this framework for selecting and building a regional ground-motion model represents a step forward in ground-motion modeling, particularly for large-scale PSHA models.

8.
In this short note, I comment on the research of Pisarenko et al. (Pure Appl Geophys 171:1599–1624, 2014) regarding extreme value theory and statistics in the case of earthquake magnitudes. The link between the generalized extreme value distribution (GEVD), as an asymptotic model for the block maxima of a random variable, and the generalized Pareto distribution (GPD), as a model for the peaks over threshold (POT) of the same random variable, is presented more clearly. Pisarenko et al. (2014) neglected to note that the approximations by the GEVD and GPD work only asymptotically in most cases. This is particularly the case with the truncated exponential distribution (TED), a popular distribution model for earthquake magnitudes. I explain why the classical models and methods of extreme value theory and statistics do not work well for truncated exponential distributions. Consequently, these classical methods should not be used for the estimation of the upper bound magnitude and corresponding parameters. Furthermore, I comment on various issues of statistical inference in Pisarenko et al. (2014) and propose alternatives. I argue why the GPD and GEVD would work for various types of stochastic earthquake processes in time, and not only for the homogeneous (stationary) Poisson process assumed by Pisarenko et al. (2014). The crucial point for earthquake magnitudes is the poor convergence of their tail distribution to the GPD, not the earthquake process over time.
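The GPD–POT link discussed above can be illustrated numerically: exceedances of an untruncated exponential magnitude distribution follow a GPD with shape parameter ξ = 0 exactly, so a maximum-likelihood fit recovers ξ ≈ 0. The sketch below uses synthetic Gutenberg–Richter magnitudes with an assumed b-value of 1; for a truncated exponential the same fit would converge only poorly in the tail, which is the note's central point.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(42)
# Untruncated exponential magnitudes (Gutenberg-Richter with b = 1):
# POT exceedances are again exponential, i.e. a GPD with shape xi = 0.
beta = 1.0 / np.log(10)                     # scale for b = 1
mags = 4.0 + rng.exponential(beta, 50000)
threshold = 5.0
excess = mags[mags > threshold] - threshold

# Maximum-likelihood GPD fit with the location fixed at zero
xi, loc, scale = genpareto.fit(excess, floc=0.0)
print(round(xi, 2), round(scale, 2))        # shape near 0, scale near beta
```

For a TED, truncation at an upper bound magnitude forces the exceedance distribution away from any fixed GPD shape as the threshold rises, which is why the classical POT estimators behave poorly there.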

9.
Point-measurement-based estimation of bedload transport in the coastal zone is very difficult. The only way to assess the magnitude and direction of bedload transport in larger areas, particularly those characterized by complex bottom topography and hydrodynamics, is to use a holistic approach. This requires modeling of waves, currents, the critical bed shear stress, and the bedload transport magnitude, with due consideration of the realistic bathymetry and the distribution of surface sediment types. Such a holistic approach is presented in this paper, which describes modeling of bedload transport in the Gulf of Gdańsk. Extreme storm conditions defined on the basis of 138 years of NOAA data were assumed. The SWAN model (Booij et al. 1999) was used to define wind-wave fields, wave-induced currents were calculated using the Kołodko and Gic-Grusza (2015) model, and the magnitude of bedload transport was estimated using the modified Meyer-Peter and Müller (1948) formula. The calculations were performed using a GIS model. The results obtained are innovative. The approach presented appears to be a valuable source of information on bedload transport in the coastal zone.
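A minimal sketch of the Meyer-Peter and Müller step in the chain above, using the classical dimensionless form Φ = 8(θ − θcr)^(3/2) with the usual threshold θcr ≈ 0.047. The grain size and Shields parameter values are illustrative only; the paper's modified formula and its GIS coupling are not reproduced here.

```python
import math

def mpm_bedload(theta, d50, s=2.65, g=9.81, theta_cr=0.047):
    """Meyer-Peter & Mueller (1948) bedload transport rate per unit width
    (m^2/s), from the Shields parameter theta and median grain size d50 (m).
    Below the critical Shields value no transport occurs."""
    excess = max(theta - theta_cr, 0.0)
    phi = 8.0 * excess ** 1.5                       # dimensionless transport
    return phi * math.sqrt((s - 1.0) * g * d50 ** 3)

print(mpm_bedload(0.02, 2e-4))   # below threshold → 0.0
q = mpm_bedload(0.3, 2e-4)       # storm-like Shields parameter, fine sand
print(q > 0.0)
```

In the holistic workflow, θ would be computed cell by cell from the SWAN wave field and the modeled currents, and the resulting q vectors mapped over the gulf bathymetry.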

10.
The third-generation wave model WAVEWATCH III was employed to simulate bulk wave parameters in the Persian Gulf using three different wind sources: ERA-Interim, CCMP, and GFS-Analysis. Different formulations for the whitecapping term and the energy transfer from wind to waves were used, namely the Tolman and Chalikov (J Phys Oceanogr 26:497–518, 1996), WAM cycle 4 (BJA and WAM4), and Ardhuin et al. (J Phys Oceanogr 40(9):1917–1941, 2010) (TEST405 and TEST451 parameterizations) source term packages. The results of the numerical simulations were compared to altimeter-derived significant wave heights and measured wave parameters at two stations in the northern part of the Persian Gulf through statistical indicators and the Taylor diagram. Comparison of the bulk wave parameters with measured values showed underestimation of wave height for all wind sources. However, the performance of the model was best when GFS-Analysis wind data were used. In general, when the wind veered from southeast to northwest and the wind speed was high during the rotation, the model’s underestimation of wave height was severe. Except for the Tolman and Chalikov (1996) source term package, which severely underestimated the bulk wave parameters during stormy conditions, the performances of the other formulations were practically similar. However, in terms of statistics, the Ardhuin et al. (2010) source terms with the TEST405 parameterization were the most successful formulation for the Persian Gulf when compared to in situ and altimeter-derived observations.

11.
The theory of wave boundary layers (WBLs) developed by Reznik (J Mar Res 71:253–288, 2013; J Fluid Mech 747:605–634, 2014; J Fluid Mech 833:512–537, 2017) is extended to a rotating stratified fluid. In this case, the WBLs arise in the field of near-inertial oscillations (NIOs) driven by a tangential wind stress of finite duration. The near-surface Ekman layer is specified in the most general form; the tangential stresses are zero at the lower boundary of the Ekman layer, and viscosity is neglected below this boundary. After the wind ceases, the Ekman pumping at the boundary becomes a linear superposition of inertial oscillations with coefficients dependent on the horizontal coordinates. The solution under the Ekman layer is obtained in the form of expansions in the vertical wave modes. We separate from the solution a part representing the NIO and demonstrate the development of a WBL near the Ekman layer boundary. With increasing time t, the WBL width decays in inverse proportion to √t, and the gradients of the fields in the WBL grow in proportion to √t; most of the NIO is concentrated in the WBL. The structure of the WBL depends strongly on its horizontal scale L, determined by the scale of the wind stress. The shorter the NIO, the thinner and sharper the WBL; a short-wave NIO with L smaller than the baroclinic Rossby scale LR does not penetrate deep into the ocean. On the contrary, for L ≥ LR, the WBL has a smoother vertical structure, and a significant long-wave NIO signal is able to reach the oceanic bottom. An asymptotic theory of the WBL in a rotating stratified fluid is suggested.

12.
High-frequency (HF) surface wave radars provide the unique capability to continuously monitor the coastal environment far beyond the range of conventional microwave radars. Bragg-resonant backscattering by ocean waves with half the electromagnetic radar wavelength allows ocean surface currents to be measured at distances of up to 200 km. When a tsunami propagates from the deep ocean to shallow water, a specific ocean current signature is generated throughout the water column. Due to the long range of an HF radar, it is possible to detect this current signature at the shelf edge. When the shelf edge is about 100 km in front of the coastline, the radar can detect the tsunami about 45 min before it hits the coast, leaving enough time to issue an early warning. As no HF radar measurements of an approaching tsunami exist to date, a simulation study was carried out to fix parameters such as the required spatial resolution and the maximum coherent integration time allowed. The simulation involves several steps, starting with the Hamburg Shelf Ocean Model (HAMSOM), which is used to estimate the tsunami-induced current velocity at 1 km spatial resolution and a 1 s time step. This ocean current signal is then superimposed on modelled and measured HF radar backscatter signals using a new modulation technique. After applying conventional HF radar signal processing techniques, the surface current maps contain the rapidly changing tsunami-induced current features, which can be compared to the HAMSOM data. The specific radial tsunami current signatures can clearly be observed in these maps if appropriate spatial and temporal resolution is used. Based on the entropy of the ocean current maps, a tsunami detection algorithm is described which can be used to issue an automated tsunami warning message.
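The Bragg condition mentioned above (resonant ocean waves at half the radar wavelength) fixes the first-order sea-echo frequency through the deep-water dispersion relation; a short sketch, with the 13.5 MHz operating frequency chosen purely as an example:

```python
import math

def bragg_frequency(radar_freq_mhz):
    """First-order Bragg frequency (Hz) of the sea echo for an HF radar
    in deep water: resonant ocean waves have half the radar wavelength."""
    c = 2.998e8                          # speed of light, m/s
    lam_radar = c / (radar_freq_mhz * 1e6)
    lam_ocean = lam_radar / 2.0          # Bragg resonance condition
    k = 2.0 * math.pi / lam_ocean        # ocean wavenumber
    return math.sqrt(9.81 * k) / (2.0 * math.pi)  # deep-water dispersion

print(round(bragg_frequency(13.5), 3))  # ≈ 0.375 Hz for a 13.5 MHz radar
```

A tsunami-induced surface current shifts this Bragg line in Doppler frequency, which is the signal the detection algorithm looks for in the current maps.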

13.
A novel implementation of parameters estimating space-time wave extremes within the spectral wave model WAVEWATCH III (WW3) is presented. The new output parameters, available in WW3 version 5.16, rely on the theoretical model of Fedele (J Phys Oceanogr 42(9):1601–1615, 2012), extended by Benetazzo et al. (J Phys Oceanogr 45(9):2261–2275, 2015), to estimate the maximum second-order nonlinear crest height over a given space-time region. In order to assess the wave height associated with the maximum crest height and the maximum wave height (generally different in a broad-band stormy sea state), the linear quasi-determinism theory of Boccotti (2000) is considered. The new WW3 implementation is tested by simulating sea states and space-time extremes over the Mediterranean Sea (forced by the wind fields produced by the COSMO-ME atmospheric model). Model simulations are compared to space-time wave maxima observed on 10 March 2014 in the northern Adriatic Sea (Italy) by a stereo camera system installed on board the “Acqua Alta” oceanographic tower. Results show that the modeled space-time extremes are in general agreement with the observations. Differences are mostly ascribed to the accuracy of the wind forcing and, to a lesser extent, to the approximations introduced in the space-time extremes parameterizations. Model estimates are expected to be even more accurate over areas larger than the mean wavelength (for instance, the model grid size).

14.
An efficient method for inferring Manning’s n coefficients using water surface elevation data was presented in Sraj et al. (Ocean Modell 83:82–97, 2014a), focusing on a test case based on data collected during the Tōhoku earthquake and tsunami. Polynomial chaos (PC) expansions were used to build an inexpensive surrogate for the numerical model GeoClaw, which was then used to perform a sensitivity analysis in addition to the inversion. In this paper, a new analysis is performed with the goal of inferring the fault slip distribution of the Tōhoku earthquake using a similar problem setup. The same approach to constructing the PC surrogate did not lead to a converging expansion; however, an alternative approach based on basis pursuit denoising was found to be suitable. Our results show that the fault slip distribution can be inferred using water surface elevation data, with the inferred values minimizing the error between the observations and the numerical model. The numerical approach and the resulting inversion are presented in this work.

15.
The new Database of Italy’s Seismogenic Sources (Basili et al. 2008) identifies areas whose degree of homogeneity in earthquake generation mechanism is judged sufficiently high. Nevertheless, their seismic sequences show rather long and regular interoccurrence times mixed with irregularly distributed short interoccurrence times. Accordingly, the following questions naturally arise: do sequences consist of nearly periodic events perturbed by a kind of noise; are they Poissonian; or do short interoccurrence times predominate, as in a cluster model? The relative reliability of these hypotheses is at present a matter of discussion (Faenza et al., Geophys J Int 155:521–531, 2003; Corral, Proc Geoph 12:89–100, 2005; Tectonophysics 424:177–193, 2006). In our regions, a statistical validation is not feasible because of the paucity of data. Moreover, the classical tests do not clearly suggest which one among the different proposed models should be favoured. In this paper, we adopt a model of interoccurrence times able to represent the three different hypotheses, ranging from exponential to Weibull distributions, in a scenario of increasing degree of predictability. In order to judge which of these hypotheses is favoured, we adopt, instead of the classical tests, a more selective indicator measuring the error with respect to the chosen panorama of possible truths. Earthquake prediction is here simply defined and calculated through the conditional probability of occurrence, depending on the elapsed time t0 since the last earthquake. Short-term and medium-term predictions are performed for all the Italian seismic zones on the basis of datasets built in the context of the National Projects INGV-DPC 2004–2006, in the frame of which this research was developed. The mathematical model of interoccurrence times (a mixture of exponential and Weibull distributions) is justified in its analytical structure.
A dimensionless procedure is used in order to reduce the number of parameters and to make comparisons easier. Three different procedures are taken into consideration for the estimation of the parameter values; in most cases, they give comparable results. The degree of credibility of the proposed methods is evaluated, and their robustness as well as their sensitivity are discussed. The comparison of the probabilities of occurrence of an Mw > 5.3 event in the next 5 and 30 years from January 1, 2003, conditional on the time elapsed since the last event, shows that the relative ranking of impending rupture in 5 years is roughly maintained in a 30-year perspective, with higher probabilities and large fluctuations between sources belonging to the same macro-region.
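The conditional probability of occurrence used above follows directly from the survival function of the exponential–Weibull mixture: P(t0 < T ≤ t0+Δ | T > t0) = [S(t0) − S(t0+Δ)] / S(t0). The sketch below uses entirely hypothetical parameter values, not those estimated for the Italian sources; with a Weibull shape k > 1 the hazard grows with elapsed time, as in the quasi-periodic scenario.

```python
import math

def survival(t, w, lam, k, scale):
    """Survival function of a mixture: exponential (weight w, rate lam)
    plus Weibull (shape k, scale)."""
    return w * math.exp(-lam * t) + (1.0 - w) * math.exp(-(t / scale) ** k)

def cond_prob(t0, dt, *params):
    """P(event in (t0, t0 + dt] | no event up to t0)."""
    s0 = survival(t0, *params)
    return (s0 - survival(t0 + dt, *params)) / s0

# Hypothetical parameters: 30% exponential component, mean ~100 yr each
p = (0.3, 0.01, 2.0, 100.0)
print(round(cond_prob(50.0, 30.0, *p), 3))   # 30-yr probability, 50 yr elapsed
print(round(cond_prob(200.0, 30.0, *p), 3))  # larger: hazard grows with t0
```

For a pure exponential (w = 1) the same function returns a value independent of t0, which is the memoryless Poissonian end of the spectrum of hypotheses.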

16.
An alternative model for the nonlinear interaction term Snl in spectral wave models, the so-called generalized kinetic equation (Janssen J Phys Oceanogr 33(4):863–884, 2003; Annenkov and Shrira J Fluid Mech 561:181–207, 2006b; Gramstad and Stiassnie J Fluid Mech 718:280–303, 2013), is discussed and implemented in the third-generation wave model WAVEWATCH III. The generalized kinetic equation includes the effects of near-resonant nonlinear interactions and is therefore able, in theory, to describe faster nonlinear evolution than the existing forms of Snl, which are based on the standard Hasselmann kinetic equation (Hasselmann J Fluid Mech 12:481–500, 1962). Numerical simulations with WAVEWATCH have been carried out to thoroughly test the performance of the new form of Snl and to compare it to the existing models for Snl in WAVEWATCH: the DIA and WRT. Some differences between the models for Snl are observed. As expected, the DIA is shown to perform less well than the exact terms in certain situations, in particular for narrow wave spectra. Also, for the case of turning wind, significant differences between the models are observed. Nevertheless, unlike the case of unidirectional waves, where the generalized kinetic equation represents an obvious improvement over the standard forms of Snl (Gramstad and Stiassnie 2013), the differences seem to be less pronounced for the more realistic cases considered in this paper.

17.
A recently compiled, comprehensive, good-quality strong-motion database of Iranian earthquakes has been used to develop local empirical equations for the prediction of peak ground acceleration (PGA) and 5%-damped pseudo-spectral accelerations (PSA) up to 4.0 s. The equations account for style of faulting and four site classes, and use the horizontal distance from the surface projection of the rupture plane as the distance measure. The model predicts the geometric mean of the horizontal components and the vertical-to-horizontal ratio. A total of 1551 free-field acceleration time histories, recorded at distances of up to 200 km from 200 shallow earthquakes (depth < 30 km) with moment magnitudes ranging from Mw 4.0 to 7.3, are used to perform regression analysis using the random effects algorithm of Abrahamson and Youngs (Bull Seism Soc Am 82:505–510, 1992), which considers between-event as well as within-event errors. Due to the limited data used in the development of previous Iranian ground motion prediction equations (GMPEs) and strong trade-offs between different terms of GMPEs, the previously determined models are likely to have less precise coefficients than those of the current study. The richer database of the current study allows improving on prior work by considering additional variables that could not previously be adequately constrained. Here, a functional form used by Boore and Atkinson (Earthq Spectra 24:99–138, 2008) and Bindi et al. (Bull Seism Soc Am 9:1899–1920, 2011) has been adopted that accounts for the saturation of ground motions at close distances. A regression has also been performed for the V/H ratio in order to retrieve vertical components by scaling horizontal spectra. In order to take epistemic uncertainty into account, the new model can be used along with other appropriate GMPEs through a logic tree framework for seismic hazard assessment in Iran and the Middle East region.
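The near-source saturation mentioned above is commonly obtained by replacing the distance R with an effective distance sqrt(R² + h²), where h is a pseudo-depth coefficient. The sketch below uses entirely hypothetical coefficients in a simplified BA08-style functional form, not the published Iranian model:

```python
import math

def ln_psa(mw, rjb, c=(-1.0, 1.2, -0.1, -1.3), h=7.5, mref=5.5):
    """Sketch of a GMPE functional form with magnitude scaling and
    geometric spreading; the pseudo-depth h caps the effective distance,
    so predictions saturate as rjb -> 0. All coefficients hypothetical."""
    c1, c2, c3, c4 = c
    r = math.sqrt(rjb ** 2 + h ** 2)     # effective distance, km
    dm = mw - mref
    return c1 + c2 * dm + c3 * dm ** 2 + c4 * math.log(r)

# Ground motion stops growing as the site approaches the rupture trace
near = ln_psa(6.5, 1.0)
very_near = ln_psa(6.5, 0.1)
print(abs(near - very_near) < 0.05)      # tiny change: distance saturation
print(ln_psa(6.5, 10.0) > ln_psa(6.5, 100.0))  # normal decay at distance
```

In a real regression the coefficients, h, and the between-/within-event variance components are all estimated jointly with a random effects algorithm, as in the study summarized above.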

18.
Ground-motion prediction equations (GMPEs) are essential tools in seismic hazard studies for estimating the ground motions generated by potential seismic sources. Global GMPEs, which are based on well-compiled global strong-motion databanks, have certain advantages over local GMPEs, including more sophisticated parameters in terms of distance, faulting style, and site classification, but they cannot guarantee the local/region-specific propagation characteristics of shear waves (e.g., geometric spreading behavior, quality factor) for different seismic regions at larger distances (beyond about 80 km). Here, strong-motion records of northern Iran have been used to estimate the propagation characteristics of shear waves and to determine region-specific adjustment parameters that make three of the NGA-West2 GMPEs applicable in northern Iran. The dataset consists of 260 three-component records from 28 earthquakes, recorded at 139 stations, with moment magnitudes between 4.9 and 7.4, horizontal distances to the surface projection of the rupture (RJB) less than 200 km, and average shear-wave velocities over the top 30 m of the subsurface (VS30) between 155 and 1500 m/s. The paper also presents the ranking results for three of the NGA-West2 GMPEs against strong motions recorded in northern Iran, before and after adjustment for region-dependent attenuation characteristics. The ranking is based on the likelihood and log-likelihood methods (LH and LLH) proposed by Scherbaum et al. (Bull Seismol Soc Am 94:2164–2185, 2004; Bull Seismol Soc Am 99:3234–3247, 2009, respectively), the Nash–Sutcliffe model efficiency coefficient (Nash and Sutcliffe, J Hydrol 10:282–290, 1970), and the EDR method of Kale and Akkar (Bull Seismol Soc Am 103:1069–1084, 2012). The best-fitting models over the whole frequency range are the ASK14 and BSSA14 models.
Given that the models’ performances improved after applying the adjustment factors, at least moderate regional variation of ground motions is indicated. The regional adjustment based on the Iranian database reveals an upward trend (indicating a high Q factor) for the selected database. Further investigation to determine adjustment factors based on a much richer database of Iranian strong-motion records is of utmost importance for seismic hazard and risk analysis studies in northern Iran, which contains major cities including the capital city of Tehran.

19.
In this paper we propose universal trace co-kriging, a novel methodology for the interpolation of multivariate Hilbert-space-valued functional data. Such data commonly arise in multi-fidelity numerical modeling of the subsurface and are part of many modern uncertainty quantification studies. Besides theoretical developments, we also present a methodological evaluation and comparisons with the recently published projection-based approach of Bohorquez et al. (Stoch Environ Res Risk Assess 31(1):53–70, 2016. https://doi.org/10.1007/s00477-016-1266-y). Our evaluations and analyses were performed on synthetic (oil reservoir) and real field (uranium contamination) subsurface uncertainty quantification case studies. Monte Carlo analyses were conducted to draw important conclusions and to provide practical guidelines for future practitioners.

20.
Following the now widespread idea that macroseismic intensity should be expressed in probabilistic terms, a beta-binomial model has been proposed in the literature to estimate the probability of the intensity at a site in a Bayesian framework, and a clustering procedure has been adopted to define the learning sets of macroseismic fields required to assign prior distributions to the model parameters. This article presents the results concerning the learning sets obtained by exploiting the large Italian macroseismic database DBMI11 (Locati et al. in DBMI11, the 2011 version of the Italian Macroseismic Database, 2011. http://emidius.mi.ingv.it/DBMI11/) and discusses the problems related to their use in probabilistic modelling of the attenuation in seismic regions of the European countries that are partners in the UPStrat-MAFA project (2012), namely South Iceland, Portugal, SE Spain, and the Mt Etna volcano area (Italy). Anisotropy and the presence of offshore earthquakes are some of the problems addressed. All the work has been carried out in the framework of Task B of the project.
