Similar Articles
20 similar articles found (search time: 921 ms)
1.
A mixed model is proposed to fit the earthquake interevent time distribution. In this model, the whole distribution is constructed by mixing the distribution of clustered seismicity with a suitable distribution of background seismicity. Namely, the fit is tested assuming a clustered seismicity component modeled by a non-homogeneous Poisson process and a background component modeled using different hypothetical models (exponential, gamma and Weibull). For southern California, Japan, and Turkey, the best fit is found when a Weibull distribution is implemented as the model for background seismicity. Our study uses the earthquake random sampling method we introduced recently. It is performed here to account for space–time clustering of earthquakes at different distances from a given source and to increase the number of samples used to estimate the earthquake interevent time distribution and its power-law scaling. For Japan, the contribution of clustered pairs of events to the whole distribution is analyzed for different magnitude cutoffs, m_c, and different time periods. The results show that the power laws are mainly produced by the dominance of correlated pairs at short and long time ranges. In particular, both power laws, observed at short and long time ranges, can be attributed to time–space clustering revealed by the standard Gardner and Knopoff declustering windows.
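A minimal sketch of the background-model comparison described above: maximum-likelihood fits of exponential, gamma, and Weibull distributions ranked by AIC. The interevent times are synthetic stand-ins; the paper's catalogs and random sampling scheme are not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
dt = rng.weibull(0.8, size=2000) * 30.0   # hypothetical interevent times (days)

candidates = {
    "exponential": stats.expon,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
}
for name, dist in candidates.items():
    params = dist.fit(dt, floc=0)           # MLE with location pinned at zero
    ll = np.sum(dist.logpdf(dt, *params))   # total log-likelihood
    aic = 2 * len(params) - 2 * ll          # Akaike information criterion
    print(f"{name:12s}  logL = {ll:9.1f}  AIC = {aic:9.1f}")
```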

2.
We quantify the correlation between earthquakes and use it to distinguish relevant, causally connected earthquakes. Our correlation metric is a variation on the one introduced by Baiesi and Paczuski (Phys. Rev. E 69:066106, 2004). A network of earthquakes is constructed, which is time-ordered and with links between the more correlated ones. Data pertaining to the California region were used in the study. Recurrences of earthquakes are identified using correlation thresholds to demarcate the most meaningful ones in each cluster. The distributions of recurrence lengths and recurrence times are then analyzed to extract information about the complex dynamics. We find that the unimodal feature of recurrence lengths helps to associate typical rupture lengths with earthquakes of different magnitudes. The out-degree of the network shows a hub structure rooted on the large-magnitude earthquakes. The in-degree distribution is seen to depend on the density of events in the neighborhood. Power laws are also obtained, with the recurrence time distribution agreeing with the Omori law.
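A sketch of the metric in its Baiesi–Paczuski form, n_ij ∝ t_ij · r_ij^d_f · 10^(−b·m_i), where smaller n_ij marks a more strongly correlated pair; the toy catalog and the values of d_f and b are assumptions, and the paper's variant of the metric may differ in detail.

```python
import numpy as np

d_f, b = 1.6, 1.0  # assumed fractal dimension of epicenters and GR b-value

def correlation_metric(t_i, x_i, y_i, m_i, t_j, x_j, y_j):
    """n_ij for earthquake i occurring before earthquake j."""
    t_ij = t_j - t_i                            # elapsed time (days)
    r_ij = np.hypot(x_j - x_i, y_j - y_i)       # epicentral distance (km)
    return t_ij * r_ij**d_f * 10.0**(-b * m_i)

# Link each event to its most correlated predecessor (its "parent").
# Catalog rows: (time, x, y, magnitude), time-ordered, invented here:
catalog = [(0.0, 0.0, 0.0, 5.5), (2.0, 3.0, 1.0, 3.2), (10.0, 40.0, 5.0, 4.1)]
for j in range(1, len(catalog)):
    n = [correlation_metric(*catalog[i][:4], *catalog[j][:3]) for i in range(j)]
    print(f"event {j} -> parent {int(np.argmin(n))}, n_ij = {min(n):.3g}")
```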

3.
4.
There are two fundamentally different approaches to assessing the probabilistic risk of earthquake occurrence. The first is fault-based: the statistical occurrence of earthquakes is determined for mapped faults, and the applicable models are renewal models in that tectonic loading of faults is included. The second approach is seismicity-based: the risk of future earthquakes is inferred from past seismicity in the region. These are also known as cluster models; an example is the epidemic type aftershock sequence (ETAS) model. In this paper we discuss an alternative branching aftershock sequence (BASS) model. In the BASS model an initial, or seed, earthquake is specified. The subsequent earthquakes are obtained from statistical distributions of magnitude, time, and location. The magnitude scaling is based on a combination of the Gutenberg-Richter scaling relation and the modified Båth's law for the scaling of aftershock magnitudes relative to the magnitude of the main earthquake. Omori's law specifies the distribution of earthquake times, and a modified form of Omori's law specifies the distribution of earthquake locations. Unlike the ETAS model, the BASS model is fully self-similar and is not sensitive to the low-magnitude cutoff.
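A one-generation sketch of a BASS-type cascade under the distributions named above (Gutenberg-Richter magnitudes, the modified Båth's law for daughter counts, Omori-law times). All parameter values are illustrative, and aftershock locations are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
b, dm_star, m_min = 1.0, 1.2, 2.0   # GR slope, modified-Bath offset, cutoff (assumed)
c, p = 0.1, 1.25                    # Omori parameters in days (assumed)

def aftershocks(m_parent):
    # Daughter count from GR scaling plus the modified Bath's law
    n = int(10 ** (b * (m_parent - dm_star - m_min)))
    # Daughter magnitudes: GR distribution truncated below at m_min
    mags = m_min - np.log10(rng.random(n)) / b
    # Daughter times: invert the cumulative form of Omori's law
    u = rng.random(n)
    times = c * ((1.0 - u) ** (1.0 / (1.0 - p)) - 1.0)
    return mags, times

mags, times = aftershocks(7.0)
print(f"{mags.size} first-generation aftershocks, "
      f"largest M{mags.max():.1f} at t = {times[mags.argmax()]:.2f} days")
```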

5.
A maximum likelihood method is used to estimate the earthquake hazard parameters maximum magnitude M_max, annual activity rate λ, and the b value of the Gutenberg-Richter equation in the Vrancea (Romania) region. The applied procedure permits the use of mixed catalogs with incomplete historical as well as complete instrumental parts, the consideration of variable detection thresholds, and the incorporation of earthquake magnitude uncertainty. Our input data comprise 105 historical earthquakes which occurred between 984 and 1934, and a complete data file containing 1067 earthquakes which occurred during the period from 1935 to 30 August 1986. The complete part was divided into four subcatalogs according to different thresholds of completeness. Only subcrustal events were considered, and dependent events were removed. The obtained b value (b = 0.65) is at the lower range of previously reported results, but it appears consistent with conceptual and observational facts. The same holds for the inferred values of M_max = 7.8 and activity rate λ_4.0 = 5.34.
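For the b-value component, the standard Aki-Utsu maximum-likelihood estimator is a useful reference point; the catalog below is invented, and the paper's joint procedure for M_max, λ, and b is more elaborate than this one-parameter sketch.

```python
import numpy as np

def b_value_mle(mags, m_c, dm=0.1):
    """Aki-Utsu estimator: b = log10(e) / (mean(M) - (m_c - dm/2))."""
    m = np.asarray(mags)
    m = m[m >= m_c]                 # keep only events above completeness
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

mags = [4.1, 4.3, 4.0, 5.2, 4.6, 4.0, 4.8, 4.2, 4.4, 4.1]   # invented catalog
print(f"b = {b_value_mle(mags, m_c=4.0):.2f}")
```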

6.
The optimal scaling problem for the time t(L × L) between two successive events in a seismogenic cell of size L is considered. The quantity t(L × L) is defined for a random cell of a grid covering a seismic region G. We solve the problem in terms of a multifractal characteristic of epicenters in G known as the tau-function, or generalized fractal dimensions; the solution depends on the type of cell randomization. Our theoretical deductions are corroborated by California seismicity with magnitude M ≥ 2. In other words, the population of waiting time distributions for L = 10–100 km provides positive information on the multifractal nature of seismicity, which prevents the population from being collapsed into a single unified law by scaling. This study is a follow-up of our analysis of power/unified laws for seismicity (see Pure and Applied Geophysics 162 (2005), 1135 and GJI 162 (2005), 899).
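A box-counting sketch of the generalized dimensions D_q (equivalently the tau-function, via τ(q) = (q − 1)·D_q) for an epicenter set; the synthetic points and grid sizes are illustrative, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
pts = rng.random((5000, 2))     # hypothetical epicenters in a unit square

def d_q(points, q, sizes=(1/4, 1/8, 1/16, 1/32)):
    """D_q = tau(q)/(q-1), with tau(q) the slope of log sum(p_i^q) vs log L."""
    log_L, log_chi = [], []
    for L in sizes:
        cells = np.floor(points / L).astype(int)          # grid cell of each point
        _, counts = np.unique(cells, axis=0, return_counts=True)
        p_i = counts / counts.sum()                       # occupation probabilities
        log_L.append(np.log(L))
        log_chi.append(np.log(np.sum(p_i ** q)))
    tau = np.polyfit(log_L, log_chi, 1)[0]                # slope = tau(q)
    return tau / (q - 1.0)

for q in (0.5, 2.0, 3.0):   # q = 1 needs the separate information-dimension limit
    print(f"D_{q} = {d_q(pts, q):.2f}")
```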

7.
Introduction: In assessing the probabilities of time-dependent and long-term seismic hazards for segments of active faults, it is necessary to have the probability density, f(T), describing the recurrence interval distribution for segment-rupturing earthquakes. From f(T) and the following equation, the conditional probability, p_c, which increases with the time, T_e, elapsed since the latest earthquake, can be calculated (Nishenko and Buland, 1987; Working Group on California Earthquake Probabilities, 1995; Wen, 1995, 1998) [equation not legible in the source], where ΔT is the time interval for the for…
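The conditional probability referred to here is p_c = [F(T_e + ΔT) − F(T_e)] / [1 − F(T_e)], where F is the cumulative form of f(T). A minimal sketch assuming a lognormal recurrence model with invented parameters:

```python
from scipy import stats

# Lognormal recurrence model: median ~150 yr, log-std 0.5 (both assumed)
recurrence = stats.lognorm(s=0.5, scale=150.0)

def conditional_probability(T_e, dT):
    """P(rupture within the next dT years | quiet for T_e years)."""
    F = recurrence.cdf
    return (F(T_e + dT) - F(T_e)) / (1.0 - F(T_e))

print(f"p_c = {conditional_probability(T_e=120.0, dT=30.0):.2f}")
```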

8.
The paper studies the effect of magnitude errors on heterogeneous catalogs by applying the apparent magnitude theory (see Tinti and Mulargia, 1985a), which proves to be the most natural and rigorous approach to the problem. Heterogeneities in seismic catalogs are due to a number of different sources and affect instrumental as well as noninstrumental earthquake compilations. The most frequent cause of heterogeneity is certainly that recent instrumental records must be combined with historic and prehistoric event listings to secure a time coverage considerably longer than the recurrence time of the major earthquakes. Therefore the case which attracts the greatest attention in the present analysis is that of a catalog consisting of a subset of higher-quality data, S_1, spanning the interval T_1 (the instrumental catalog), and a second subset of more uncertain magnitude determination, S_2, covering a vastly longer interval T_2 (the historic and/or geologic catalog). The magnitude threshold of the subcatalog S_1 is supposedly smaller than that of S_2, which, as we will see, is one of the major causes of discrepancy between the apparent magnitude and the true magnitude distributions. We further suppose that true magnitude occurrences conform to the Gutenberg-Richter (GR) law, because this assumption simplifies the analysis without reducing the relevance of our findings. The main results are: 1) the apparent occurrence rate exceeds the true occurrence rate from a certain magnitude onward, say m_GR; 2) the apparent occurrence rate shows two distinct GR regimes separated by an intermediate transition region. The offset between the two regimes is the essential outcome of S_1 being heterogeneous with respect to S_2. The most important consequences of this study are that: 1) it provides a basis to infer the parameters of the true magnitude distribution, by correcting the bias deriving from heterogeneous magnitude errors; 2) it demonstrates that the double GR decay, which several authors have taken as incontestable proof of the failure of the GR law and as experimental evidence for the characteristic earthquake theory, is instead perfectly consistent with GR-type seismicity.

9.
Statistical tests have been used to adjust the Zemmouri seismic data using a distribution function. The Pareto law has been used, and the probabilities of various expected earthquakes were computed. A mathematical expression giving the quantiles was established. The limiting law of extreme values confirmed the accuracy of the adjustment method. Using the moment magnitude scale, a probabilistic model was built to predict the occurrence of strong earthquakes. The seismic structure has been characterized by the slope of the recurrence plot γ, fractal dimension D, concentration parameter K_sr, and Hurst exponents H_r and H_t. The values of D, γ, K_sr, H_r, and H_t diminished many months before the principal seismic shock (M = 6.9) of the studied seismoactive zone occurred. Three stages of deformation of the geophysical medium are manifested in the variation of the coefficient G% of the clustering of minor seismic events.
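The quantile expression for a Pareto law is closed-form: if F(x) = 1 − (x_m/x)^α, the p-quantile is x_m·(1 − p)^(−1/α). A sketch with assumed parameters, not those fitted to the Zemmouri data:

```python
x_m, alpha = 3.0, 1.8   # assumed Pareto threshold and exponent

def pareto_quantile(p):
    """Value exceeded with probability 1 - p under F(x) = 1 - (x_m/x)^alpha."""
    return x_m * (1.0 - p) ** (-1.0 / alpha)

for p in (0.90, 0.99):
    print(f"{p:.0%} quantile: {pareto_quantile(p):.2f}")
```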

10.
This paper provides a generic equation for the evaluation of the maximum earthquake magnitude m_max for a given seismogenic zone or entire region. The equation is capable of generating solutions in different forms, depending on the assumptions of the statistical distribution model and/or the available information regarding past seismicity. It includes the cases (i) when earthquake magnitudes are distributed according to the doubly-truncated Gutenberg-Richter relation, (ii) when the empirical magnitude distribution deviates moderately from the Gutenberg-Richter relation, and (iii) when no specific type of magnitude distribution is assumed. Both synthetic, Monte Carlo simulated seismic event catalogues and actual data from Southern California are used to demonstrate the procedures given for the evaluation of m_max. The three estimates of m_max for Southern California, obtained by the three procedures mentioned above, are respectively 8.32 ± 0.43, 8.31 ± 0.42, and 8.34 ± 0.45. All three estimates are nearly identical, although higher than the value 7.99 obtained by Field et al. (1999). In general, since the third procedure is non-parametric and does not require specification of the functional form of the magnitude distribution, its estimate of the maximum earthquake magnitude m_max is considered more reliable than the other two, which are based on the Gutenberg-Richter relation.
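A sketch of one common form of such a generic equation, a Kijko-style estimator m̂_max = m_obs + ∫[F(m)]^n dm, here with a doubly-truncated Gutenberg-Richter CDF and fixed-point iteration; b, m_min, and the catalog size are assumptions, and the paper's exact formulation may differ.

```python
import numpy as np
from scipy.integrate import trapezoid

b, m_min, n_events, m_obs = 1.0, 4.0, 1000, 7.5   # assumed catalog summary
beta = b * np.log(10.0)

def gr_cdf(m, m_max):
    """Doubly-truncated Gutenberg-Richter CDF on [m_min, m_max]."""
    num = 1.0 - np.exp(-beta * (m - m_min))
    den = 1.0 - np.exp(-beta * (m_max - m_min))
    return np.clip(num / den, 0.0, 1.0)

m_grid = np.linspace(m_min, m_obs, 2000)
m_max = m_obs                                     # start at the observed maximum
for _ in range(50):                               # fixed-point iteration
    integral = trapezoid(gr_cdf(m_grid, m_max) ** n_events, m_grid)
    m_max = m_obs + integral
print(f"m_max estimate: {m_max:.2f}")
```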

11.
In order to estimate the recurrence intervals for large earthquakes occurring in eastern Anatolia, the region enclosed within the coordinates 36–42°N, 35–45°E has been divided into nine seismogenic sources on the basis of seismological and geomorphological criteria, and a regional time- and magnitude-predictable model has been applied to these sources. This model implies that the magnitude of the preceding main shock, which is the largest earthquake during a seismic excitation in a seismogenic source, governs the time of occurrence and the magnitude of the expected main shock in this source. Data belonging to both the instrumental period (M_S ≥ 5.5) until 2003 and the historical period (I_0 ≥ 9.0, corresponding to M_S ≥ 7.0) before 1900 have been used in the analysis. The interevent times between successive main shocks with magnitude equal to or larger than a certain minimum magnitude threshold were considered in each of the nine source regions within the study area. These interevent times, as well as the magnitudes of the main shocks, have been used to determine two empirical relations [equations not legible in the source], where T_t is the interevent time measured in years, M_min is the surface wave magnitude of the smallest main shock considered, M_p is the magnitude of the preceding main shock, M_f is the magnitude of the following main shock, and M_0 is the seismic moment released per year in each source. The multiple correlation coefficient and standard deviation have been computed as 0.50 and 0.28, respectively, for the first relation; the corresponding values for the second relation are 0.64 and 0.32. It was found that the magnitude of the following main shock M_f does not depend on the preceding interevent time T_t. This is an interesting property for earthquake prediction, since it provides the ability to predict the time of occurrence of the next strong earthquake. On the other hand, a strong negative dependence of M_f on M_p was found. This result indicates that a large main shock is followed by a smaller-magnitude one and vice versa. On the basis of the first of the relations above, and taking into account the occurrence time and magnitude of the last main shock, the probabilities of occurrence P(ΔT) of main shocks in each seismogenic source of eastern Anatolia during the next 10, 20, 30, 40 and 50 years, for earthquakes with magnitudes equal to 6.0 and 7.0, were determined. The second relation has been used to estimate the magnitude of the expected main shock. According to the time- and magnitude-predictable model, a strong and a large earthquake can be expected in seismogenic Source 2 (Erzincan) with the highest probabilities, P_10 = 66% (M_f = 6.9, T_t = 12 years) and P_10 = 44% (M_f = 7.3, T_t = 24 years), during the coming decade.

12.
Interpretation of magnetic data can be carried out either in the space or the frequency domain. Interpretation in the frequency domain is computationally convenient because convolution becomes multiplication. The frequency-domain approach assumes that the distribution of magnetic sources is random and uncorrelated. On the basis of borehole data, this approach is modified to include a random and fractal distribution of sources. The physical properties of the rocks exhibit scaling behaviour, which can be defined as P(k) = A k^(−β), where P(k) is the power spectrum as a function of wavenumber k, and A and β are a constant and the scaling exponent, respectively. A white noise distribution corresponds to β = 0. High-resolution methods of power spectral estimation, e.g. the maximum entropy method and the multi-taper method, produce smooth spectra; estimation of scaling exponents from them is therefore more reliable. The values of β are found to be related to the lithology and heterogeneities in the crust. Modelling magnetic data for a scaling distribution of sources leads to an improved method of interpreting magnetic data known as the scaling spectral method. The method has found applicability in estimating basement depth and Curie depth, and in filtering magnetic data.
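A sketch of estimating β by a log-log least-squares fit to the power spectrum of a profile; the synthetic fractal profile stands in for magnetic data, and the simple periodogram replaces the high-resolution estimators discussed above.

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta_true = 4096, 1.5
k = np.fft.rfftfreq(n, d=1.0)[1:]                     # wavenumbers, k = 0 dropped
phase = np.exp(2j * np.pi * rng.random(k.size))       # random phases
spectrum = k ** (-beta_true / 2.0) * phase            # amplitude ~ k^(-beta/2)
spectrum[-1] = k[-1] ** (-beta_true / 2.0)            # Nyquist bin must be real
profile = np.fft.irfft(np.concatenate([[0.0], spectrum]), n)

power = np.abs(np.fft.rfft(profile)[1:]) ** 2         # periodogram of the profile
beta = -np.polyfit(np.log(k), np.log(power), 1)[0]    # slope of log P vs log k
print(f"estimated beta = {beta:.2f} (target {beta_true})")
```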

13.
A space-time envelope of minor seismicity related to major shallow earthquakes is identified from observations of the long-term Precursory Scale Increase (Ψ) phenomenon, which quantifies the three-stage faulting model of seismogenesis. The envelope, which includes the source area of the major earthquake, is here demarcated for 47 earthquakes from four regions, with tectonic regimes ranging from subduction to continental collision and continental transform. The earthquakes range in magnitude from 5.8 to 8.2, and include the 24 most recent mainshocks of magnitude 6.4 and larger in the San Andreas system of California, the Hellenic Arc region of Greece, and the New Zealand region, together with the six most recent mainshocks of magnitude 7.4 and larger in the Pacific Arc region of Japan. Also included are the destructive earthquakes that occurred at Kobe, Japan (1995, magnitude 7.2), Izmit, Turkey (1999, magnitude 7.4), and W. Tottori, Japan (2000, magnitude 7.3). The space (A_P) in the space-time envelope is optimised with respect to the scale increase, while the time (T_P) is the interval between the onset of the scale increase and the occurrence of the earthquake. A strong correlation is found between the envelope A_P T_P and the magnitude of the earthquake; hence the conclusion that the set of precursory earthquakes contained in the envelope is intrinsic to the seismogenic process. Yet A_P and T_P are only weakly correlated with each other, suggesting that A_P is affected by differences in static conditions, such as geological structure and lithology, and T_P by differences in dynamic conditions, such as plate velocity. Among other scaling relations, predictive regressions are found between, on the one hand, the magnitude level of the precursory seismicity and, on the other, both T_P and the major earthquake magnitude. Hence the method, as applied here to retrospective analysis, is potentially adaptable to long-range forecasting of the place, time and magnitude of major earthquakes.

14.
Landslide erosion is a dominant hillslope process and the main source of stream sediment in tropical, tectonically active mountain belts. In this study, we quantified landslide erosion triggered by 24 rainfall events from 2001 to 2009 in three mountainous watersheds in Taiwan and investigated relationships between landslide erosion and rainfall variables. The results show positive power-law relations between landslide erosion and both rainfall intensity and cumulative rainfall, with scaling exponents ranging from 2.94 to 5.03. Additionally, the landslide erosion caused by Typhoon Morakot is of comparable magnitude to that caused by the Chi-Chi earthquake (M_W = 7.6), or to 22–24 years of basin-averaged erosion. Comparison of the three watersheds indicates that deeper landslides that mobilize soil and bedrock are triggered by long-duration rainfall, whereas shallow landslides are triggered by short-duration rainfall. These results suggest that rainfall intensity and watershed characteristics are important controls on rainfall-triggered landslide erosion, and that severe typhoons, like high-magnitude earthquakes, can generate high rates of landslide erosion in Taiwan. Copyright © 2012 John Wiley & Sons, Ltd.

15.
A scaling law for the occurrence of aftershocks in southern California is proposed which suggests that the number of aftershocks is independent of the mainshock magnitude M_m if aftershocks are counted in the magnitude interval from (M_m − Δ) to M_m.

16.
Despite decades of research on the ecological consequences of stream network expansion, contraction and fragmentation, surprisingly little is known about the hydrological mechanisms that shape these processes. Here, we present field surveys of the active drainage networks of four California headwater streams (4–27 km²) spanning diverse topographic, geologic and climatic settings. We show that these stream networks dynamically expand, contract, disconnect and reconnect across all the sites we studied. Stream networks at all four sites contract and disconnect during seasonal flow recessions, with their total active network length, and thus their active drainage densities, decreasing by factors of two to three across the range of flows captured in our field surveys. The total flowing lengths of the active stream networks are approximate power-law functions of unit discharge, with scaling exponents averaging 0.27 ± 0.04 (range: 0.18–0.40). The number of points where surface flow originates obey similar power-law relationships, as do the lengths and origination points of flowing networks that are continuously connected to the outlet, with scaling exponents averaging 0.36–0.48. Even stream order shifts seasonally by up to two Strahler orders in our study catchments. Broadly similar stream length scaling has been observed in catchments spanning widely varying geologic, topographic and climatic settings and spanning more than two orders of magnitude in size, suggesting that network extension/contraction is a general phenomenon that may have a general explanation. Points of emergence or disappearance of surface flow represent the balance between subsurface transmissivity in the hyporheic zone and the delivery of water from upstream. Thus the dynamics of stream network expansion and contraction, and connection and disconnection, may offer important clues to the spatial structure of the hyporheic zone, and to patterns and processes of runoff generation. Copyright © 2014 John Wiley & Sons, Ltd.

17.
Around 700 reported precursors of about 350 earthquakes, including negative observations, have been compiled in 11 categories with 31 subdivisions. The database is subjected to an initial sorting and screening by imposing three restrictions, on the ranges of main shock magnitude (M ≥ 4.0), precursory time (t ≤ 20 years), and the epicentral distance of observation points (X_m ≤ 4·10^(0.3M)). Of the 31 subcategories of precursory phenomena, 18 with 9 or more data points are independently studied by regressing their precursory times against magnitude. The preliminary results tend to classify the precursors into three groups:
  1. The precursors which show weak or no correlation between time and the magnitude of the eventual main shock. Examples of this group are foreshocks and precursory tilt.
  2. The precursors which show clear scaling with magnitude. These include the seismic velocity ratio (V_p/V_s), travel time delay, duration of seismic quiescence, and, to some degree, the variation of b-value and anomalous seismicity.
  3. The precursors which display clustering of precursory times around a mean value, which differs for different precursors from a few hours to a few years. Examples include the conductivity rate, geoelectric current and potential, strain, water well level, geochemical anomalies, change of focal mechanism, and the enhancement of seismicity reported only for larger earthquakes. Some of the precursors in this category, such as leveling changes and the occurrence of microseismicity, show bimodal patterns of precursory times and may partially be coseismic.
In addition, each category with a sufficient number of reported estimates of distance and signal amplitude is subjected to multiple linear regression. The usefulness of these regressions at this stage appears to be limited to specifying which of the parameters shows the more significant correlation. Standard deviations of the residuals of precursory time against magnitude are generally reduced when observation distance enters as a second independent variable; the effect is more pronounced for water well level and conductivity rate changes. While a substantial portion of the data seems to suffer from personal bias, and hence should be regarded as noise, the observations of a number of strain-sensitive phenomena, such as strain, water well level, and conductivity rate changes, appear to be internally more consistent. For instance, their precursory times suggest a scaling relationship with the strain energy surface density associated with the main shock. The scaling is not identical for all three phenomena, so they may constitute the imminent, short-, and intermediate-term manifestations of the same process, i.e. strain loading.

18.
In this paper we evaluate the present state of the seismic regime in Southern California using the concentration parameter of seismogenic faults (K_sf; Sobolev and Zavyalov, 1981). The purpose of this work is to identify potential sites of large earthquakes during the next five or ten years. The data for this study derive from the California Institute of Technology's catalog of southern California earthquakes and span the period from 1932 to June 1982. We examined events as small as M_L 1.8 but used a magnitude cutoff of M_L = 3.3 for the detailed analysis. The sizes of the target earthquakes (M_M) were chosen as 5.3 and 5.8. The algorithm for calculating K_sf used here improves on the algorithm described by Sobolev and Zavyalov (1981) in that it considers the seismic history of each elementary seismoactive volume. The dimensions of the elementary seismoactive volumes were 50 km × 50 km and 20 km deep. We found that the mean value of K_sf within 6 months prior to the target events was 6.1 ± 2.0 for target events with M_L ≥ 5.3 and 5.4 ± 1.8 for targets with M_L ≥ 5.8. Seventy-three percent of the targets with M_L ≥ 5.8 occurred in areas where K_sf was less than 6.1. The variance of the time between the appearance of areas with low K_sf values and the following main shocks was quite large (from a few months to ten years), so this parameter cannot be used for accurate predictions of occurrence time. Regions where the value of K_sf was below 6.1 at the end of our data set (June 1982) are proposed as the sites of target earthquakes during the next five to ten years. The most dangerous area is the area east of San Bernardino, where K_sf values are presently between 2.9 and 3.7 and where there has been no earthquake with M_L ≥ 5.3 since 1948.

19.
Investigation of the time-dependent seismicity in 274 seismogenic regions of the entire continental fracture system indicates that strong shallow earthquakes in each region exhibit short- as well as intermediate-term time clustering (with durations extending to several years), which follows a power-law time distribution. Mainshocks, however (with interevent times of the order of decades), show quasiperiodic behaviour and follow the ‘regional time- and magnitude-predictable seismicity model’. This model is expressed by the following formulas $$\log T_t = 0.19 M_{\min} + 0.33 M_p - 0.39 \log m_0 + q$$ $$M_f = 0.73 M_{\min} - 0.28 M_p + 0.40 \log m_0 + m$$ which relate the interevent time, T_t (in years), and the surface wave magnitude, M_f, of the following mainshock with the magnitude, M_min, of the smallest mainshock considered, the magnitude, M_p, of the preceding mainshock, and the moment rate, m_0 (in dyn·cm·yr^(−1)), in a seismogenic region. The values of the parameters q and m vary from area to area. The basic properties of this model are described and problems related to its physical significance are discussed. The first of these relations, in combination with the hypothesis that the ratio T/T_t, where T is the observed interevent time, follows a lognormal distribution, has been used to calculate the probability of occurrence of the next very large mainshock (M_s ≥ 7.0) during the decade 1993–2002 in each of the 141 seismogenic regions into which the circum-Pacific convergent belt has been separated. The second of these relations has been used to estimate the magnitude of the expected mainshock in each of the regions.
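A direct application of the two regressions quoted above; the regional constants q and m, the moment rate, and the lognormal scatter σ of T/T_t are placeholders, since they vary by region and are not given per region in the abstract.

```python
import numpy as np
from scipy import stats

def predict(M_min, M_p, log_m0, q, m):
    """Interevent time T_t and following-mainshock magnitude M_f."""
    log_Tt = 0.19 * M_min + 0.33 * M_p - 0.39 * log_m0 + q
    M_f = 0.73 * M_min - 0.28 * M_p + 0.40 * log_m0 + m
    return 10.0 ** log_Tt, M_f

def prob_within(T_elapsed, dT, T_t, sigma=0.3):
    """P(next mainshock within dT yr | T_elapsed yr quiet), lognormal T/T_t."""
    ln = stats.lognorm(s=sigma, scale=T_t)
    return (ln.cdf(T_elapsed + dT) - ln.cdf(T_elapsed)) / ln.sf(T_elapsed)

# Placeholder regional constants and moment rate:
T_t, M_f = predict(M_min=7.0, M_p=7.4, log_m0=26.0, q=8.0, m=-6.1)
print(f"T_t = {T_t:.0f} yr, expected M_f = {M_f:.1f}")
print(f"P(mainshock in next decade) = {prob_within(30.0, 10.0, T_t):.2f}")
```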

20.
The earthquake recurrence time distribution in a given space-time window is studied using earthquake catalogues from different seismic regions (Southern California, Canada, and Central Asia). The quality of the available catalogues, taking into account magnitude completeness, is examined. Based on the analysis of the catalogues, it was determined that the probability densities of earthquake recurrence times can be described by a universal gamma distribution in which time is normalized by the mean rate of occurrence. The results show a deviation from the gamma distribution at short interevent times, suggesting the existence of clustering. This holds from worldwide to local scales and for quite different tectonic environments.
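A sketch of the test implied here: rescale interevent times by their mean, fit a gamma distribution, and compare the short-time tail against the fit. The waiting times below are synthetic stand-ins for a catalog.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
dt = rng.gamma(shape=0.7, scale=1.0, size=5000)   # hypothetical waiting times
theta = dt / dt.mean()                            # rescale by the mean rate

shape, loc, scale = stats.gamma.fit(theta, floc=0)
print(f"fitted gamma shape = {shape:.2f}, scale = {scale:.2f}")

# Short-time tail: an empirical excess over the fit would signal the
# clustering deviation reported in the abstract.
empirical = np.mean(theta < 0.05)
model = stats.gamma.cdf(0.05, shape, loc=0, scale=scale)
print(f"P(theta < 0.05): data {empirical:.3f} vs gamma fit {model:.3f}")
```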
