Similar Documents
20 similar documents found (search time: 31 ms)
1.
The spatial power spectrum of the scalar potential (V) of the main geomagnetic field shows power-law behaviour at the core-mantle boundary (CMB), and the corresponding phases are distributed almost uniformly. This is strong evidence for a fractal topography of V with a non-integer dimension of 2.2 (with an uncertainty of ±0.1), a value indeed obtained from an analysis of the power spectra of 32 spherical harmonic models of V spanning the interval 1647 to 1990.
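The fractal dimension in this abstract follows from the slope of a power-law spectrum. A minimal sketch of the slope-estimation step is below; the spectrum amplitude and exponent are invented for illustration, not taken from the paper's field models:

```python
import numpy as np

# Hypothetical Lowes-Mauersberger-style spectrum with a power-law decay
# R_n ∝ n^(-beta); beta_true is an assumed value, not the paper's result.
beta_true = 3.0
degrees = np.arange(1, 33)                 # spherical-harmonic degrees n = 1..32
spectrum = 10.0 * degrees ** (-beta_true)

# Estimate the spectral slope by least-squares regression in log-log space.
slope, intercept = np.polyfit(np.log(degrees), np.log(spectrum), 1)
beta_est = -slope
```

With noise-free input the log-log regression recovers the exponent exactly; on real spectra the residual scatter around the fitted line is what carries the uncertainty estimate.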

2.
An earthquake catalog derived from the detection of seismically-generated T-waves is used to study the time-clustering behavior of moderate-size (≥3.0 M) earthquakes between 15 and 35°N along the Mid-Atlantic Ridge (MAR). Within this region, the distribution of inter-event times is consistent with a non-periodic, non-random, clustered process. The highest degrees of clustering are associated temporally with large mainshock-aftershock sequences; however, some swarm-like activity is also evident. Temporal fluctuations characterized by a power spectral density P(f) that decays as 1/f^α are present within the time sequence, with α ranging from 0.12 to 0.55 for different regions of the spreading axis. This behavior is negligible at time scales less than ~5×10³ s, and earthquake occurrence becomes less clustered (smaller α) as increasing size thresholds are applied to the catalog. A power-law size-frequency scaling for Mid-Atlantic Ridge earthquakes can also be demonstrated using the distribution of acoustic magnitudes, or source levels. Although fractal seismic behavior has been linked to the structure of the underlying fault population in other environments, power-law fault size distributions have not been widely observed in the mid-ocean ridge setting.
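The 1/f^α spectral estimate can be sketched as follows: a series with a known exponent (an assumed value within the paper's 0.12–0.55 range, not derived from the T-wave catalog) is synthesized, and α is recovered from the periodogram slope in log-log space:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.4                # assumed spectral exponent, within the paper's range
n = 4096

# Spectral synthesis: random phases with amplitudes |X(f)| ∝ f^(-alpha/2),
# so the power spectral density decays as 1/f^alpha.
freqs = np.fft.rfftfreq(n, d=1.0)
amps = np.zeros_like(freqs)
amps[1:] = freqs[1:] ** (-alpha / 2.0)
phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
series = np.fft.irfft(amps * np.exp(1j * phases), n)

# Recover alpha from the log-log slope of the periodogram
# (the DC and Nyquist bins are excluded from the fit).
power = np.abs(np.fft.rfft(series)) ** 2
slope, _ = np.polyfit(np.log(freqs[1:-1]), np.log(power[1:-1]), 1)
alpha_est = -slope
```

For a real event catalog one would first bin the origin times into a counts-per-interval series before forming the periodogram; that step is omitted here.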

3.
The knowledge of the high-intensity tails of the probability distributions that determine the rate of occurrence of extreme solar energetic particle events is a critical element in the evaluation of hazards for human and robotic space missions. Here, instead of the standard approach based on fitting a selected distribution function to the observed data, we investigate a different approach based on the scaling properties of the maximum particle flux in time intervals of increasing length. To find the tail of the probability distributions we apply the “Max-Spectrum” method (Stoev, S.A., Michailidis, G., 2006. On the estimation of the heavy-tail exponent in time series using the Max-Spectrum. Technical Report 447, Department of Statistics, University of Michigan) to 1973–1997 IMP-8 proton data and 1987–2008 GOES data, which together cover a wide range of proton energies. We find that both data sets indicate a power-law tail with exponents close to 0.6, at least in the energy range 9–60 MeV. The underlying probability distribution is consistent with the Fréchet-type (power-law behavior) extreme value distribution. Since the production of high fluxes of energetic particles is caused by fast Coronal Mass Ejections (CMEs), this heavy-tailed distribution also means that the Sun generates more fast CMEs than would be expected from a Poissonian-type process.
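The Max-Spectrum estimator regresses the mean log2 of dyadic-block maxima on the block scale; the slope approximates 1/(tail exponent). A sketch on a synthetic heavy-tailed sample follows; the Pareto exponent 0.6 echoes the paper's estimate, but the data are simulated, not IMP-8/GOES fluxes:

```python
import numpy as np

rng = np.random.default_rng(42)
alpha = 0.6                              # assumed tail exponent
x = rng.pareto(alpha, 2 ** 16) + 1.0     # heavy-tailed sample, P(X > x) ~ x^(-alpha)

def max_spectrum_tail_exponent(data, j_min=4, j_max=12):
    """Estimate the heavy-tail exponent via the Max-Spectrum:
    regress the mean log2 of block maxima against the log2 block size j."""
    n = len(data)
    ys, js = [], []
    for j in range(j_min, j_max + 1):
        m = 2 ** j                       # block length
        k = n // m                       # number of complete blocks
        if k < 2:
            break
        maxima = data[: k * m].reshape(k, m).max(axis=1)
        ys.append(np.log2(maxima).mean())
        js.append(j)
    slope, _ = np.polyfit(js, ys, 1)
    return 1.0 / slope                   # slope ≈ 1/alpha

alpha_est = max_spectrum_tail_exponent(x)
```

The choice of j_min trades bias (small blocks not yet in the extreme-value regime) against variance (few large blocks); the values used here are illustrative.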

4.
Morphometric analyses of glaciated valleys typically involve attempts to fit empirical data from valley cross-profiles to quadratic (parabolic) or power-law equations, with the choice of equation depending on the goal of the analysis. In assessing the relative merits of these two types of equations, some confusion has arisen because of apparent variations in coordinate-system datums among studies that have made use of the power-law equation. However, we show that this confusion stems from a simple mathematical error, rather than any true difference in methodology. Of more significance is the fact that in fitting a power-law equation to observed profiles, significant bias is introduced by the use of a logarithmic data transformation. Because of this implicit bias, power-law equation parameters are influenced most strongly by data points close in elevation to the centre of the valley, so failure to account for the depositional elements which frequently occur in these areas can lead to significant errors if the aim of the analysis is to characterize the form of the erosional profile.
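The log-transform bias can be demonstrated directly: with noise-free data the log-log fit recovers the exponent exactly, but a small additive disturbance near the valley centre (mimicking depositional infill) shifts the fitted exponent noticeably. All profile parameters below are invented for illustration:

```python
import numpy as np

# Idealised valley cross-profile y = a*x^b (x = distance from centreline,
# y = height above valley floor); a and b are illustrative, not surveyed values.
a_true, b_true = 0.05, 2.0
x = np.linspace(5.0, 500.0, 50)
y = a_true * x ** b_true

def fit_power_law_via_logs(x, y):
    """Least-squares fit of log y = log a + b log x (the transform the paper critiques)."""
    b, log_a = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(log_a), b

# Clean data: the log-transform fit recovers b exactly.
_, b_clean = fit_power_law_via_logs(x, y)

# Simulate valley-floor deposition: raise the five points nearest the centre by 2 m.
# In log space these low-elevation points carry disproportionate weight.
y_filled = y.copy()
y_filled[:5] += 2.0
_, b_filled = fit_power_law_via_logs(x, y_filled)
```

The 2 m of infill is tiny relative to the valley sides, yet it visibly lowers the fitted exponent, because the logarithm amplifies relative changes at the small-y (valley-centre) points.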

5.
Most GPS time-series exhibit a seasonal signal that can have an amplitude of a few millimetres. This seasonal signal can be removed by fitting an extra sinusoidal signal with a period of one year to the GPS data during the estimation of the linear trend. However, Blewitt and Lavallée (2002) showed that including an annual signal in the estimation process can still give a larger linear trend error than the trend error estimated from data from which the annual signal has been removed by other means. They assumed that the GPS data contained only white noise; we extend their result to the case of power-law plus white noise, which is known to exist in most GPS observations. For the GPS stations CASC, LAGO, PDEL and TETN, the difference in trend error between having or not having an annual signal in the data is around ten times larger when a power-law plus white noise model is used instead of a pure white noise model. Next, our methodology can be used to estimate, for any station, how much the accuracy of the linear trend will improve when one tries to subtract the annual signal from the GPS time-series by using a physical model. Finally, we demonstrate that for short time-series the trend error is more influenced by the fact that the noise properties also need to be estimated from the data. On average, this causes an underestimation of the trend error.
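Estimating the trend together with an annual term is a linear least-squares problem. A sketch with a synthetic daily series follows; the trend, annual amplitude and noise level are invented, and the formal trend error computed here assumes pure white noise, which is exactly the assumption the paper shows to be over-optimistic when power-law noise is present:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(3650) / 365.25             # 10 years of daily epochs, in years

# Synthetic coordinate series: linear trend + annual cycle + white noise.
# These values are illustrative, not from the CASC/LAGO/PDEL/TETN stations.
trend_true = 3.0                         # mm/yr
series = (trend_true * t + 2.5 * np.sin(2 * np.pi * t)
          + rng.normal(0.0, 2.0, t.size))

# Design matrix: intercept, trend, and an annual sine/cosine pair.
A = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coeffs, *_ = np.linalg.lstsq(A, series, rcond=None)
trend_est = coeffs[1]

# Formal trend error under the white-noise assumption:
# var(coeffs) = sigma^2 * (A^T A)^(-1).
resid = series - A @ coeffs
sigma2 = resid @ resid / (t.size - A.shape[1])
trend_sigma = np.sqrt(sigma2 * np.linalg.inv(A.T @ A)[1, 1])
```

Replacing the identity noise covariance with a power-law plus white covariance in the generalised least-squares formula is what inflates the trend error by the factors reported in the abstract.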

6.
A comparative study of smoothing functions in spatially smoothed seismicity models
徐伟进, 高孟潭. Acta Seismologica Sinica (地震学报), 2012, 34(2): 244–256
Using the seismic hazard analysis method proposed by Frankel, based on spatially smoothed seismicity models, and earthquake records from South China, North China, and the Sichuan-Yunnan region, we compare the applicability of three smoothing functions in different regions: Gaussian, power-law, and a fractal earthquake-distribution function. The results show that cross-validation can select an appropriate correlation distance c for the Gaussian smoothing function; the resulting smoothed seismicity model faithfully reflects the seismicity characteristics of the study region, and the peak ground acceleration (PGA) distribution computed from the model agrees with accepted views of the region's seismic hazard. The power-law smoothing function is suitable for regions of strong seismicity and has the advantage that its smoothing parameter is easy to determine; because it smooths only weakly, it is unsuitable for regions of weak seismicity, where the more strongly smoothing Gaussian function should be chosen instead. The fractal-distribution smoothing function is unsuitable for regions with strong but highly variable seismicity, because it tends to overestimate the contribution of large earthquakes to the hazard while neglecting the contribution of smaller ones. For regions of weak and relatively uniform seismicity, however, the fractal-distribution smoothing function can be used, and its smoothing parameter is likewise easy to determine.
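The Gaussian smoothing used in Frankel-type hazard models can be sketched as follows; the grid and event counts are toy values, and the correlation distance c is expressed in grid cells rather than kilometres:

```python
import numpy as np

# Toy grid of earthquake counts per cell; the two clusters are invented.
counts = np.zeros((20, 20))
counts[5, 5] = 30.0
counts[12, 14] = 10.0

def gaussian_smooth(counts, c_cells):
    """Frankel-style Gaussian smoothing:
    n~_i = sum_j n_j exp(-d_ij^2 / c^2) / sum_j exp(-d_ij^2 / c^2)."""
    ny, nx = counts.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    smoothed = np.zeros_like(counts)
    for i in range(ny):
        for j in range(nx):
            w = np.exp(-((yy - i) ** 2 + (xx - j) ** 2) / c_cells ** 2)
            smoothed[i, j] = (w * counts).sum() / w.sum()
    return smoothed

smoothed = gaussian_smooth(counts, c_cells=2.0)
```

Choosing c by cross-validation, as the abstract describes, amounts to repeating this smoothing for a range of c values and scoring each model's likelihood on held-out events.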

7.
On 17 June 1996, Ruapehu volcano, New Zealand, produced a sustained andesitic sub-Plinian eruption, which generated a narrow tephra-fall deposit extending more than 200 km from the volcano. The extremely detailed data set from this eruption allowed methods for the determination of total grain-size distribution and volume of tephra-fall deposits to be critically investigated. Calculated total grain-size distributions of tephra-fall deposits depend strongly on the method used and on the availability of data across the entire dispersal area. The Voronoi Tessellation method was tested for the Ruapehu deposit and gave the best results when applied to a data set extending out to isomass values of <1 g m⁻². The total grain-size distribution of a deposit is also strongly influenced by the very proximal samples, as can be shown by artificially constructing subsets from the Ruapehu database. Unless the available data set is large, all existing techniques for calculating total grain-size distribution give only apparent distributions. The tephra-fall deposit from Ruapehu does not show a simple exponential thinning, but can be approximated well by at least three straight-line segments or by a power-law fit on semi-log plots of thickness vs. (area)^(1/2). Integration of both fits gives similar volumes of about 4×10⁶ m³. Integration of at least three exponential segments, or of a power-law fit with at least ten isopach contours available, can be considered a good estimate of the actual volume of tephra fall. Integrations of smaller data sets are more problematic. Editorial responsibility: H. Shinohara
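The exponential-segment integration works as follows: for a single segment T = T0·exp(−k·x) with x = √(area), the deposit volume is V = ∫T dA = 2·T0/k². The thickness data below are invented, not the Ruapehu isopachs:

```python
import numpy as np

# Synthetic isopach data: thickness T (m) vs square root of enclosed area x (m).
# A single exponential thinning segment is assumed; values are illustrative.
T0_true, k_true = 2.0, 1.0e-4
x = np.linspace(0.0, 5.0e4, 11)          # sqrt(area) out to 50 km
T = T0_true * np.exp(-k_true * x)

# Fit the straight line on the semi-log thickness vs sqrt(area) plot.
slope, logT0 = np.polyfit(x, np.log(T), 1)
k_fit, T0_fit = -slope, np.exp(logT0)

# Analytic volume of one exponential segment:
# V = ∫ T dA = ∫ T0 exp(-k x) * 2x dx = 2*T0/k^2  (with A = x^2).
volume = 2.0 * T0_fit / k_fit ** 2
```

With several segments, as for the Ruapehu deposit, each segment is integrated over its own x-range and the pieces are summed; a power-law fit is integrated the same way but needs explicit inner and outer limits because the integral diverges at one end.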

8.
Accelerating rates of volcano-tectonic (VT) earthquakes are commonly observed during volcanic unrest. Understanding the repeatability of their behaviour is essential to evaluating their potential to forecast eruptions. Quantitative eruption forecasts have focused on changes in precursors over intervals of weeks or less. Previous studies at frequently erupting basaltic volcanoes, such as Kilauea in Hawaii and Piton de La Fournaise on Réunion, suggest that VT earthquake rates tend to follow a power-law acceleration with time about 2 weeks before eruption, but that this trend is often obscured by random fluctuations (or noise) in VT earthquake rate. These previous studies used a stacking procedure, in which precursory sequences for several eruptions are combined to enhance the signal from an underlying acceleration in VT earthquake rate. Such analyses assume a common precursory trend for all eruptions. This assumption is tested here for the 57 eruptions and intrusions recorded at Kilauea between 1959 and 1984. Applying rigorous criteria for selecting data (e.g. maximum depth; restricting magnitudes to be greater than the completeness magnitude, 2.1), we find a much less pronounced increase in the aggregate rate of earthquakes than previously reported. The stacked trend is also strongly controlled by the behaviour of one particular pre-eruptive sequence. In contrast, a robust signal emerges among stacked VT earthquake rates for a subset of the eruptions and intrusions. The results are consistent with two different precursory styles at Kilauea: (1) a small proportion of eruptions and intrusions that are preceded by accelerating rates of VT earthquakes over intervals of weeks to months and (2) a much larger number of eruptions that show no consistent increase until a few hours beforehand.
The results also confirm the importance of testing precursory trends against data that have been filtered according to simple constraints on the spatial distribution and completeness magnitude of the VT earthquakes.

9.
We used 3D continuum-scale reactive transport models to simulate eight core-flood experiments for two different carbonate rocks. In these experiments the core samples were reacted with brines equilibrated with pCO2 = 3, 2, 1 and 0.5 MPa (Smith et al., 2013 [27]). The carbonate rocks were from specific Marly dolostone and Vuggy limestone flow units at the IEAGHG Weyburn-Midale CO2 Monitoring and Storage Project in south-eastern Saskatchewan, Canada. Initial model porosity, permeability, mineral, and surface area distributions were constructed from micro-tomography and microscopy characterization data. We constrained the model reaction kinetics and porosity–permeability equations with the experimental data, which included time-dependent solution chemistry, differential pressure measured across the core, and the initial and final pore space and mineral distributions. Calibration of the model with the experimental data allowed investigation of the effects of carbonate reactivity, flow velocity, effective permeability, and time on the development and consequences of stable and unstable dissolution fronts.

The continuum-scale model captured the evolution of distinct dissolution fronts that developed as a consequence of carbonate mineral dissolution and pore-scale transport properties. The results show that initial heterogeneity and porosity contrast control the development of the dissolution fronts in these highly reactive systems. This finding is consistent with linear stability analysis and the known positive feedback between mineral dissolution and fluid flow in carbonate formations. Differences in the carbonate kinetic drivers, resulting from the range of pCO2 used in the experiments and the different proportions of more-reactive calcite and less-reactive dolomite, contributed to the development of new pore space, but not to the type of dissolution fronts observed for the two rock types.
The development of the dissolution front was much more dependent on the physical heterogeneity of the carbonate rock. The observed stable dissolution fronts with small but visible dissolution fingers were a consequence of the clustering of a small percentage of larger pores in an otherwise homogeneous Marly dolostone. The observed wormholes in the heterogeneous Vuggy limestone initiated and developed in areas of greater porosity and permeability contrast, following pre-existing preferential flow paths.

Model calibration of core-flood experiments is one way to constrain site-specific parameter input for larger-scale simulations. Calibration of the governing rate equations and constants for the Vuggy limestones showed that the dissolution rate constants agree reasonably with published values. However, the calcite dissolution rate constants fitted to the Marly dolostone experiments are much lower than those suggested by the literature. The differences in fitted calcite rate constants between the two rock types reflect uncertainty associated with measured reactive surface area and with appropriately scaling the heterogeneous distribution of less-abundant reactive minerals. Calibration of the power-law based porosity–permeability equations was sensitive to the overall heterogeneity of the cores. Stable dissolution fronts of the more homogeneous Marly dolostone could be fit with exponent n = 3, consistent with the traditional Kozeny–Carman equation developed for porous sandstones. Less permeable, more heterogeneous cores required larger n values (n = 6–8).
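The porosity–permeability update described here is a one-line power law; the sketch below uses invented porosity and permeability values to contrast the calibrated exponents (n = 3 vs the n = 6–8 needed for heterogeneous cores):

```python
def permeability(phi, phi0, k0, n):
    """Power-law porosity-permeability update k = k0 * (phi/phi0)^n.
    n = 3 recovers a Kozeny-Carman-like scaling; larger n amplifies
    the permeability gain from the same porosity increase."""
    return k0 * (phi / phi0) ** n

# Illustrative values only: porosity rising from 0.25 to 0.30
# around a reference permeability of 1e-14 m^2.
k_homog = permeability(0.30, 0.25, 1.0e-14, 3)   # stable-front, homogeneous case
k_heter = permeability(0.30, 0.25, 1.0e-14, 7)   # wormholing, heterogeneous case
```

The same 20% porosity gain raises permeability by roughly 1.7× for n = 3 but about 3.6× for n = 7, which is why the exponent choice matters so much once dissolution localises into fingers or wormholes.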

10.
The size distribution of earthquakes has been investigated since the early 20th century. In 1932 Wadati assumed a power-law distribution n(E) = kE^(−w) for earthquake energy E and estimated the w value to be 1.7–2.1. Since the introduction of the magnitude-frequency relation by Gutenberg and Richter in 1944 in the form log n(M) = a − bM, the spatial or temporal variation (or stability) of the b value has been a frequently discussed subject in seismicity studies. The log n(M) versus M plots for some data sets exhibit considerable deviation from a straight line, and many modifications of the G-R relation have been proposed to represent this character. The modified equations include the truncated G-R equation, the two-range G-R equation, and equations with various terms added to the original G-R equation. The gamma distribution of seismic moments is equivalent to one of these equations.

In this paper we examine which equation is most suitable for magnitude data from Japan and the world using the Akaike Information Criterion (AIC). In some cases the original G-R equation is the most suitable, but in other cases different equations fit far better. The AIC is also a powerful tool for testing the significance of the difference in parameter values between two sets of magnitude data, under the assumption that the magnitudes are distributed according to a specified equation. Even if there is no significant difference in b value between two data sets (assuming the G-R relation), we may find a significant difference between the same data sets under the assumption of another relation. To represent the character of the size distribution, there are indexes other than the parameters of the magnitude-frequency distribution; the η value is one such index. Although it is certain that these indexes vary among different data sets and can represent certain features of seismicity, their usefulness in practical problems such as foreshock discrimination has not yet been established.
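An AIC comparison between the original and truncated G-R forms can be sketched as follows. The synthetic catalog is drawn from a truncated G-R distribution with assumed b = 1 and magnitude bounds, so the truncated model should win (lower AIC); the coarse grid-search estimator is a simplification for illustration, not the paper's fitting procedure:

```python
import numpy as np

rng = np.random.default_rng(7)
m_min, m_max, b_true = 4.0, 5.0, 1.0     # assumed bounds and b-value
beta = b_true * np.log(10.0)

# Sample magnitudes from a truncated G-R (exponential) law by inverse transform.
u = rng.uniform(size=2000)
span = 1.0 - np.exp(-beta * (m_max - m_min))
mags = m_min - np.log(1.0 - u * span) / beta

def aic_gr(m, m_min):
    """AIC for the unbounded G-R (exponential) model; beta by Aki's MLE."""
    beta_hat = 1.0 / (m.mean() - m_min)
    loglik = m.size * np.log(beta_hat) - beta_hat * (m - m_min).sum()
    return 2 * 1 - 2 * loglik

def aic_truncated_gr(m, m_min, m_max):
    """AIC for the truncated G-R model; beta by a coarse grid search."""
    betas = np.linspace(0.5, 5.0, 451)
    norm = 1.0 - np.exp(-betas[:, None] * (m_max - m_min))
    logliks = (np.log(betas[:, None]) - betas[:, None] * (m - m_min)[None, :]
               - np.log(norm)).sum(axis=1)
    return 2 * 1 - 2 * logliks.max()

aic_plain = aic_gr(mags, m_min)
aic_trunc = aic_truncated_gr(mags, m_min, m_max)   # should be lower here
```

Both models have one free parameter, so here the AIC comparison reduces to a likelihood comparison; models with extra terms, as discussed in the abstract, pay the 2-per-parameter AIC penalty.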

11.
《Journal of Geodynamics》2009,47(3-5):118-130
Since microphysics cannot say definitively whether the rheology of the mantle is linear or non-linear, the aim of this paper is to constrain mantle rheology from observations related to the glacial isostatic adjustment (GIA) process—namely relative sea-levels (RSLs), land uplift rate from GPS and gravity-rate-of-change from GRACE. We consider three earth model types that can have power-law rheology (n = 3 or 4) in the upper mantle, the lower mantle or throughout the mantle. For each model type, a range of the creep-law parameter A is explored, and the predicted GIA responses are compared to the observations to see which value of A has the potential to explain all the data simultaneously. The coupled Laplace finite-element (CLFE) method is used to calculate the response of a 3D spherical self-gravitating viscoelastic Earth to forcing by the ICE-4G ice history model, with ocean loads in self-gravitating oceans. Results show that the ice thickness in Laurentide needs to be increased significantly or delayed by 2 ka; otherwise the predicted uplift rate, gravity rate-of-change and RSL amplitude for sites inside the Laurentide ice margin are too low to explain the observations. The ice thickness elsewhere needs only slight modification in order to explain the global RSL data outside Laurentide. If the ice model is modified in this way, then the results of this paper indicate that models with power-law rheology in the lower mantle (with A ≈ 10⁻³⁵ Pa⁻³ s⁻¹ for n = 3) have a higher potential than the other model types to simultaneously explain all the observed RSL, uplift-rate and gravity-rate-of-change data.

12.
A power-law relation for the frequency–area distribution (FAD) of medium and large landslides (tens to millions of square metres) has been observed by numerous authors. The FAD of small landslides, however, diverges from the power-law distribution, with a rollover point below which frequencies decrease for smaller landslides. Some studies conclude that this divergence is an artifact of unmapped small landslides due to a lack of spatial or temporal resolution; others posit that it is caused by a change in the underlying failure process. Resolving this dilemma is essential for evaluating the factors that control landslide FADs and power-law scaling, which bear on both landscape evolution and landslide hazard assessment. This study examines the FADs of 45 earthquake-induced landslide inventories from around the world in the context of the proposed explanations. We show that each inventory probably involves some combination of the proposed explanations, though not all explanations contribute to each case. We propose an alternative explanation for the divergence from a power-law: the geometry of a landslide at the time of mapping reflects not just one single movement but many, including the propagation of numerous smaller landslides before and after the main failure. Because only the resulting combination of these landslides can be observed, owing to a lack of temporal resolution, many smaller landslides are not counted in the inventory. The divergence from the power-law is therefore not necessarily attributable to the incompleteness of an inventory. This conceptual model will need to be validated by ongoing observation and analysis. We also show that, because of the subjectivity of mapping procedures, the total number of landslides and total landslide areas in inventories differ significantly, and therefore the shapes of FADs also differ considerably. © 2018 The Authors. Earth Surface Processes and Landforms published by John Wiley & Sons Ltd.

13.
Practical application of the power-law regression model with an unknown location parameter can be plagued by non-finite least squares parameter estimates. This presents a serious problem in hydrology, since stream flow data are mainly obtained using an estimated stage–discharge power-law rating curve. This study provides a set of sufficient requirements on the data to ensure the existence of finite least squares parameter estimates for a power-law regression with an unknown location parameter. It is shown that in practice these requirements are, in most cases, also necessary for a finite least squares solution. Furthermore, it is proved that there is a finite probability for the model to produce data having non-finite least squares parameter estimates. The implications of this result are discussed in the context of asymptotic predictions, inference and experimental design. A Bayesian approach to the actual regression problem is recommended.
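A common practical approach to the stage–discharge power law Q = C·(h − h0)^b is to profile over the unknown location parameter h0, fitting C and b in log space for each candidate; the pathologies discussed in the abstract arise when this profile has no finite minimiser. The rating-curve parameters below are invented for illustration:

```python
import numpy as np

# Synthetic stage-discharge pairs following Q = C*(h - h0)^b, with h0 the
# unknown location parameter (cease-to-flow stage); values are illustrative.
C_true, h0_true, b_true = 5.0, 1.2, 1.8
h = np.linspace(1.5, 4.0, 25)
Q = C_true * (h - h0_true) ** b_true

def fit_rating_curve(h, Q):
    """Profile the residual sum of squares over a grid of candidate h0 values;
    for each h0, C and b follow from a log-log linear fit."""
    best_rss, best_params = np.inf, None
    for h0 in np.linspace(0.0, h.min() - 1e-3, 500):   # keep h - h0 > 0
        b, logC = np.polyfit(np.log(h - h0), np.log(Q), 1)
        rss = np.sum((Q - np.exp(logC) * (h - h0) ** b) ** 2)
        if rss < best_rss:
            best_rss, best_params = rss, (np.exp(logC), h0, b)
    return best_params

C_fit, h0_fit, b_fit = fit_rating_curve(h, Q)
```

For noisy real data the profiled objective can decrease monotonically as h0 approaches h.min(), with b and C diverging, which is the non-finite-estimate problem that motivates the Bayesian treatment recommended in the abstract.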

14.
The Fontana Lapilli deposit was erupted in the late Pleistocene from a vent, or multiple vents, located near Masaya volcano (Nicaragua) and is the product of one of the largest basaltic Plinian eruptions studied so far. This eruption evolved from an initial sequence of fluctuating fountain-like events and moderately explosive pulses to a sustained Plinian episode depositing fall beds of highly vesicular basaltic-andesite scoria (SiO2 > 53 wt%). Samples show a unimodal grain-size distribution and a moderate sorting that are uniform in time. The juvenile component predominates (>96 wt%) and consists of vesicular clasts with both sub-angular and fluidal, elongated shapes. We obtain a maximum plume height of 32 km and an associated mass eruption rate of 1.4×10⁸ kg s⁻¹ for the Plinian phase. Estimates of erupted volume are strongly sensitive to the technique used for the calculation and to the distribution of field data. Our best estimate for the erupted volume of the majority of the climactic Plinian phase is between 2.9 and 3.8 km³, obtained by applying a power-law fitting technique with different integration limits. The estimated eruption duration varies between 4 and 6 h. Marine-core data confirm that the tephra thinning is better fitted by a power-law than by an exponential trend.

15.
Power-Law Testing for Fault Attributes Distributions
This paper is devoted to statistical analysis of fault attributes. The distributions of lengths, widths of damage zones, displacements and thicknesses of fault cores are studied. The truncated power-law (TPL) is considered in comparison with the commonly used simple power-law (PL, or Pareto) distribution. The maximum likelihood estimate and confidence interval of the exponent for both PL and TPL are obtained by appropriate statistical methods. The Kolmogorov–Smirnov (KS) test and the likelihood ratio test against an alternative non-nested exponential distribution are used to verify the statistical approximation. Furthermore, the advantage of the TPL is confirmed by the Bayesian information criterion. Our results suggest that a TPL is more suitable for describing fault attributes, and that it holds over a wide range of fault scales. We propose that using truncated power laws in general might eliminate or relax the bias related to sampling strategy and the resolution of measurements (such as censoring, truncation, and the cut effect), so that the most reliable range of data can be used for the statistical approximation of fault attributes.
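Two of the building blocks used in the paper, the maximum-likelihood power-law exponent and the KS distance, can be sketched on a synthetic Pareto sample; the exponent 1.5 and the sample size are invented, and the TPL likelihood and BIC comparison are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(3)
alpha_true, x_min = 1.5, 1.0

# Pareto sample by inverse transform: P(X > x) = (x_min/x)^alpha.
lengths = x_min * (1.0 - rng.uniform(size=5000)) ** (-1.0 / alpha_true)

# Maximum-likelihood exponent for the simple power law (Hill/MLE form):
# alpha_hat = n / sum(log(x_i / x_min)).
alpha_hat = lengths.size / np.log(lengths / x_min).sum()

# One-sample Kolmogorov-Smirnov statistic against the fitted Pareto CDF.
x_sorted = np.sort(lengths)
cdf_model = 1.0 - (x_min / x_sorted) ** alpha_hat
ecdf_hi = np.arange(1, lengths.size + 1) / lengths.size
ecdf_lo = np.arange(0, lengths.size) / lengths.size
ks_stat = max(np.max(ecdf_hi - cdf_model), np.max(cdf_model - ecdf_lo))
```

The TPL version replaces the Pareto CDF with its upper-truncated counterpart and maximises the corresponding likelihood numerically; the KS statistic is then computed against that fitted truncated CDF in the same way.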

16.
Observational studies indicate that large earthquakes are sometimes preceded by phases of accelerated seismic release (ASR), characterized by cumulative Benioff strain following a power-law time-to-failure relation with a term (t_f − t)^m, where t_f is the failure time of the large event and observed values of m are close to 0.3. We discuss properties of ASR and related aspects of seismicity patterns associated with several theoretical frameworks. The subcritical crack growth approach, developed to describe deformation on a crack prior to the occurrence of dynamic rupture, predicts great variability and low asymptotic values of the exponent m that are not compatible with observed ASR phases. Statistical physics studies, assuming that system-size failures in a deforming region correspond to critical phase transitions, predict the establishment of long-range correlations of dynamic variables and power-law statistics before large events. Using stress and earthquake histories simulated by the model of Ben-Zion (1996) for a discrete fault with quenched heterogeneities in a 3-D elastic half space, we show that large model earthquakes are associated with non-repeating cyclical establishment and destruction of long-range stress correlations, accompanied by nonstationary cumulative Benioff strain release. We then analyze results associated with a regional lithospheric model consisting of a seismogenic upper crust governed by the damage rheology of Lyakhovsky et al. (1997) over a viscoelastic substrate. We demonstrate analytically for a simplified 1-D case that the employed damage rheology leads to a singular power-law equation for strain proportional to (t_f − t)^(−1/3), and a nonsingular power-law relation for cumulative Benioff strain proportional to (t_f − t)^(1/3). A simple approximate generalization of the latter for regional cumulative Benioff strain is obtained by adding to the result a linear function of time representing a stationary background release.
To go beyond the analytical expectations, we examine results generated by various realizations of the regional lithospheric model producing seismicity that follows characteristic frequency-size statistics, the Gutenberg-Richter power-law distribution, and mode-switching activity. We find that phases of ASR exist only when the seismicity preceding a given large event has broad frequency-size statistics. In such cases the simulated ASR phases can be fitted well by the singular analytical relation with m = −1/3, the nonsingular equation with m = 0.2, and the generalized version of the latter including a linear term with m = 1/3. The good fits obtained with all three relations highlight the difficulty of deriving reliable information on functional forms and parameter values from such data sets. The activation process in the simulated ASR phases is found to be accommodated both by increasing rates of moderate events and by increasing average event size, with the former starting a few years earlier than the latter. The lack of ASR in portions of the seismicity not having broad frequency-size statistics may explain why some large earthquakes are preceded by ASR and others are not. The results suggest that observations of moderate and large events contain two complementary end-member predictive signals on the time of future large earthquakes. In portions of seismicity following the characteristic earthquake distribution, such information exists directly in the associated quasi-periodic temporal distribution of large events. In portions of seismicity having broad frequency-size statistics with a random or clustered temporal distribution of large events, the ASR phases carry the predictive information. The extent to which natural seismicity may be understood in terms of these end-member cases remains to be clarified.
Continuing studies of evolving stress and other dynamic variables in model calculations, combined with advanced analyses of simulated and observed seismicity patterns, may lead to improvements in existing forecasting strategies.
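The time-to-failure fits compared in this abstract can be sketched as a one-parameter grid search: for each candidate exponent m, the amplitude and offset follow from a linear fit of cumulative Benioff strain against (t_f − t)^m. The sequence below is synthesized with the nonsingular exponent m = 1/3 and invented amplitudes, so the search should return a value near 1/3:

```python
import numpy as np

# Synthetic ASR sequence: cumulative Benioff strain S(t) = A - B*(tf - t)^m,
# using the paper's nonsingular exponent m = 1/3; A, B and tf are invented.
A, B, tf, m_true = 100.0, 10.0, 10.0, 1.0 / 3.0
t = np.linspace(0.0, 9.5, 200)
S = A - B * (tf - t) ** m_true

def fit_time_to_failure(t, S, tf):
    """Grid-search the exponent m; for each m, the amplitude and offset
    come from a linear least-squares fit of S against (tf - t)^m."""
    best_rss, best_m = np.inf, None
    for m in np.linspace(0.05, 1.0, 96):
        x = (tf - t) ** m
        coef = np.polyfit(x, S, 1)
        rss = np.sum((S - np.polyval(coef, x)) ** 2)
        if rss < best_rss:
            best_rss, best_m = rss, m
    return best_m

m_est = fit_time_to_failure(t, S, tf)
```

On noisy catalog data the residual surface over m is typically very flat, which is the practical reason, noted in the abstract, why fits with m = −1/3, 0.2 and 1/3 can all look equally good.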

17.
Modeling dispersion in homogeneous porous media with the convection–dispersion equation commonly requires computing effective transport coefficients. In this work, we investigate longitudinal and transverse dispersion coefficients arising from the method of volume averaging, for a variety of periodic, homogeneous porous media over a range of particle Péclet (Pe_p) numbers. Our objective is to validate the upscaled transverse dispersion coefficients and concentration profiles by comparison to experimental data reported in the literature, and to compare the upscaling approach to the more common approach of inverse modeling, which relies on fitting the dispersion coefficients to measured data. This work is unique in that the exact microscale geometry is available; thus, no simplifying assumptions regarding the geometry are required to predict the effective dispersion coefficients directly from theory. Transport of both an inert tracer and non-chemotactic bacteria is investigated for an experimental system that was designed to promote transverse dispersion. We highlight the occurrence of transverse dispersion coefficients that (1) depart from power-law behavior at relatively low Pe_p values and (2) are greater than their longitudinal counterparts for a specific range of Pe_p values. The upscaling theory provides values for the transverse dispersion coefficient that are within the 98% confidence interval of the values obtained from inverse modeling. The mean absolute error between experimental and upscaled concentration profiles was very similar to that between the experiments and inverse modeling; in all cases it did not exceed 12%. Overall, this work suggests that volume averaging can potentially be used as an alternative to inverse modeling for dispersion in homogeneous porous media.

18.
Precise measurements of seismological Q are difficult because we lack detailed knowledge of how the Earth's fine velocity structure affects the amplitude data. In a number of recent papers, Morozov (Geophys J Int 175:239–252, 2008; Seism Res Lett 80:5–7, 2009; Pure Appl Geophys, this volume, 2010) proposes a new procedure intended to improve Q determinations. The procedure relies on quantifying the structural effects using a new form of geometrical spreading (GS) model that has a component decaying exponentially with time, e^(−γt), where γ is a free parameter measured together with Q. Morozov has refit many previously published sets of amplitude attenuation data. In general, the new Q estimates are much higher than previous estimates, and all of the previously estimated frequency-dependence values for Q disappear in the new estimates. In this paper I show that (1) the traditional modeling of seismic amplitudes is physically based, whereas the new model lacks a physical basis; (2) the method of measuring Q using the new model is effectively just a curve-fitting procedure using a first-order Taylor series expansion; (3) previous high-frequency data that were fit by a power-law frequency dependence for Q are expected also to be fit by the first-order expansion in the limited frequency bands involved, because of the long tails of power-law functions; (4) recent laboratory measurements of the intrinsic Q of mantle materials at seismic frequencies provide independent evidence that intrinsic Q is often frequency-dependent, which should lead to frequency-dependent total Q; (5) published long-period surface wave data that were used to derive several recent Q models inherently contradict the new GS model; and (6) previous modeling has already included a special case that is mathematically identical to the new GS model, but with physical assumptions and measured Q values that differ from those obtained with the new GS model.
Therefore, while the previous Q measurements individually have limited precision, they cannot be improved by using the new GS model. The large number of Q measurements made by seismologists is sufficient to show that Q values in the Earth are highly laterally variable and often frequency dependent.

19.
Spatiotemporal mapping of the minimum magnitude of completeness Mc and the b-value of the Gutenberg–Richter law is conducted for the earthquake catalog of Greece. The data were recorded by the seismic network of the Institute of Geodynamics of the National Observatory of Athens (GINOA) in 1970–2010 and by the Hellenic Unified Seismic Network (HUSN) in 2011–2014. With the beginning of measurements at HUSN, the number of recorded events more than quintupled. The magnitude of completeness Mc of the earthquake catalog for 1970–2010 varies within 2.7 to 3.5, whereas from April 2011 it decreases to 1.5–1.8 in the central part of the region and fluctuates around an average of 2.0 over the study region as a whole. The magnitude of completeness Mc and b-value for the catalogs of earthquakes recorded by the old (GINOA) and new (HUSN) seismic networks are compared, and it is hypothesized that the magnitude of completeness Mc may affect the b-value estimates. The spatial distribution of the b-value determined from the HUSN catalog generally agrees with the main geotectonic features of the studied territory: the b-value is below 1 in zones of compression and larger than or equal to 1 in zones dominated by extension. The established depth dependence of the b-value is broadly consistent with the hypothesis of a brittle–ductile transition zone in the Earth's crust. It is suggested that the source depth of a strong earthquake can probably be estimated from the depth distribution of the b-value, which can be used for seismic hazard assessment.
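The catalog-level estimates described here, Mc from the most-populated magnitude bin (a maximum-curvature-style proxy) and b from the Aki/Utsu maximum-likelihood formula with binning correction, can be sketched on a synthetic Gutenberg–Richter catalog; b = 1, the completeness level, and the catalog size are assumed values, and no sub-completeness events are simulated:

```python
import numpy as np

rng = np.random.default_rng(11)
dm = 0.1                                 # magnitude bin width of the catalog

# Synthetic G-R catalog with b = 1, complete above magnitude 2.0.
beta = np.log(10.0)                      # beta = b * ln(10), with b = 1
mags = np.round((2.0 + rng.exponential(1.0 / beta, 20000)) / dm) * dm

# Mc proxy: the most populated magnitude bin (maximum-curvature-style pick).
bins = np.round(np.arange(0.0, 8.0, dm), 1)
counts = np.array([np.sum(np.abs(mags - b0) < dm / 2) for b0 in bins])
mc_est = bins[counts.argmax()]

# b-value by the Aki/Utsu maximum-likelihood estimator with the dm/2
# binning correction (small offset guards against float comparison).
above = mags[mags >= mc_est - dm / 4]
b_est = np.log10(np.e) / (above.mean() - (mc_est - dm / 2))
```

Mapping Mc and b spatially, as in the abstract, repeats this pair of estimates for the events inside each grid cell or moving window, which is why a robust per-cell Mc is a prerequisite for unbiased b-value maps.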

20.
Time Distribution of Immediate Foreshocks Obtained by a Stacking Method
We apply a stacking method to investigate the time distribution of foreshock activity immediately before a mainshock. Foreshocks are searched for among events with M ≥ 3.0 within a distance of 50 km and two days of each mainshock with M ≥ 5.0 in the JMA catalog from 1977 through 30 September 1997. About 33% of M ≥ 5.0 earthquakes are preceded by foreshocks, and 50–70% in some areas. The relative location and time of three types of representative foreshock, namely the largest one, the nearest one to the mainshock in distance, and the nearest one in time, are stacked in reference to each mainshock. A statistical test on the stacked time distribution of foreshocks within 30 km of, and two days before, mainshocks shows that an inverse power-law probability density in time fits significantly better than an exponential one for all three types of representative foreshock. Two explanations can possibly account for the results: one is that foreshocks occur as a result of a stress change in the region; the other is that a foreshock causes a stress change in the region and thereby triggers the mainshock. The second explanation is compatible with the relationship between a mainshock and its aftershocks when an aftershock happens to become larger than the mainshock. However, the power-law exponents obtained for stacked foreshocks are significantly smaller than those for similarly stacked aftershocks, so the foreshock–mainshock relation should not be explained as normal aftershock activity. Probably an increase of stress during foreshock activity results in apparently smaller exponent values, if the second explanation holds.

