Similar documents
20 similar documents found.
1.
Empirical laws and statistics of earthquakes are valuable as a basis for a better understanding of the earthquake cycle. In this paper we focus on the postseismic phase and the physics of aftershock sequences. Using interevent time distributions for a catalogue of Icelandic seismicity, we infer that the parameter C2 in the Omori law, often considered to represent incomplete detection of aftershocks, is at least in part related to the physics of the earthquake process. We investigate the role of postseismic pore pressure diffusion after two Icelandic earthquakes on the rate of aftershocks and what we can infer about the physical meaning of C2 from the diffusion process. Using the Mohr–Coulomb failure criterion we obtain a rate of triggered points in our diffusion model that agrees with the modified Omori law, with a value of C2 that is consistent with data. Our pore pressure diffusion model suggests that C2 is related to the process of reducing high pore pressure gradients existing across a fault zone at short times after a main shock.
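A minimal Python sketch (not the authors' code) of fitting the modified Omori law may help here: the aftershock rate is written as n(t) = K/(t + c)^p, where the offset constant c plays the role of the C2 parameter discussed above. The "true" parameter values, the synthetic catalogue and the least-squares fit of the cumulative count are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import curve_fit

def omori_cumulative(t, K, c, p):
    """Expected number of aftershocks in [0, t] under the modified Omori law (p != 1)."""
    return K * (c ** (1.0 - p) - (t + c) ** (1.0 - p)) / (p - 1.0)

# Synthetic aftershock times over T days, generated by inverting the cumulative law
rng = np.random.default_rng(2)
K_true, c_true, p_true, T = 250.0, 0.08, 1.15, 100.0          # assumed "true" values
n_total = int(omori_cumulative(T, K_true, c_true, p_true))
u = np.sort(rng.uniform(0.0, omori_cumulative(T, K_true, c_true, p_true), n_total))
t_days = (c_true ** (1.0 - p_true) - u * (p_true - 1.0) / K_true) ** (1.0 / (1.0 - p_true)) - c_true

# Fit K, c, p to the observed cumulative count (a simple, illustrative estimator)
n_cum = np.arange(1, t_days.size + 1)
(K, c, p), _ = curve_fit(omori_cumulative, t_days, n_cum, p0=(100.0, 0.1, 1.1), maxfev=20000)
print(f"K = {K:.0f}, c = {c:.3f} days, p = {p:.2f}")
```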

2.
V.I. German, Tectonophysics, 2006, 424(3-4): 167
The paper describes a unified scaling theory for distribution functions of temporal and spatial characteristics in seismology. It is based on the scaling of seismological characteristics calculated for various energy–spatial–temporal intervals. General mathematical methods for the scaling of distribution functions are developed, together with means to test whether such scaling is possible. The relationship between the unified scaling theory and other existing scaling approaches is established. The theory is applied to two characteristics of different seismically active regions. The first characteristic is the waiting time between earthquakes, ΔT; the second is a new spatial parameter, ΔDmin, the minimum distance from a current seismic event to its nearest (in space) neighbour within an energy–spatial–temporal interval. The distributions of ΔT and ΔDmin allow the time interval to the next earthquake and the distance of the following earthquake from previous earthquakes to be estimated, so these characteristics are very important for seismic hazard estimation. Scaling of the distribution functions is shown to be successful for ΔDmin in all energy–spatial–temporal intervals and for ΔT under variations of the energy/magnitude range. The distribution function of ΔT for various time domains was stable in only 60% of the cases, and nearly unstable under spatial variations.
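A rough sketch, under stated assumptions, of how the two characteristics can be computed from a catalogue and checked for a scaling collapse; the synthetic catalogue and the magnitude cut-offs are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 365.0, 200))     # synthetic origin times (days)
x, y = rng.uniform(0.0, 100.0, (2, 200))      # synthetic epicentres (km)
m = rng.exponential(0.5, 200) + 2.0           # synthetic magnitudes

def waiting_times(t, m, m_min):
    """dT between consecutive events with magnitude >= m_min."""
    return np.diff(np.sort(t[m >= m_min]))

def nearest_neighbour_distances(x, y, m, m_min):
    """dDmin: distance from each selected event to its closest other event."""
    sel = m >= m_min
    pts = np.column_stack([x[sel], y[sel]])
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # exclude the event itself
    return d.min(axis=1)

# Scaling test: after rescaling by the mean, the distributions obtained for
# different magnitude cut-offs should collapse onto a single curve.
for m_min in (2.0, 2.5, 3.0):
    dT = waiting_times(t, m, m_min)
    dD = nearest_neighbour_distances(x, y, m, m_min)
    print(m_min, (dT / dT.mean()).std(), (dD / dD.mean()).std())
```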

3.
Distributions of time between consecutive earthquakes verify an approximately universal scaling law for stationary seismicity. The shape of these distributions is shown to arise as a mixture of one distribution for short-distance events and an exponential distribution for far-off events, the distinction between short and long distances being relative to the size of the region studied. The distributions of consecutive distances show a double power-law decay and verify an approximate scaling law which guarantees the simultaneous fulfilment of the scaling laws for time. The interplay between space and time can be seen as well by looking at the distribution of distances for a fixed time separation. These results suggest that seismicity can be understood as a series of intertwined independent continuous-time random walks, with power-law-distributed waiting times and Lévy-flight jumps. However, a simple model based on these ideas does not capture the invariance of seismicity under renormalization.
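A toy continuous-time random walk in the spirit described above (not the authors' model) can be simulated in a few lines: power-law distributed waiting times and Lévy-flight jump lengths with random direction. The exponents and sample size are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
alpha, beta = 1.2, 1.6                       # assumed tail exponents of waits and jumps

wait = 1.0 + rng.pareto(alpha, n)            # power-law waiting times (arbitrary units)
jump = 1.0 + rng.pareto(beta, n)             # Lévy-flight jump lengths (arbitrary units)
theta = rng.uniform(0.0, 2.0 * np.pi, n)     # random jump direction

t = np.cumsum(wait)                          # event times of the synthetic "seismicity"
x = np.cumsum(jump * np.cos(theta))          # epicentre coordinates of the walk
y = np.cumsum(jump * np.sin(theta))

rec_time = np.diff(t)                        # consecutive recurrence times
rec_dist = np.hypot(np.diff(x), np.diff(y))  # consecutive epicentre distances
```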

4.
Compound Poisson process models have been studied earlier for earthquake occurrences, with some arbitrary compounding distributions. It is more meaningful to abstract information about the compounding distribution from empirical observations of earthquake sequences. The definition of a compound distribution can be interpreted as an integral transform of the compounding distribution, which can therefore be estimated by inverting the transform. Alternatively, from the moments of the observable random variables, viz. (a) the number of earthquakes per unit time or (b) the waiting times for subsequent earthquakes, the moments of the compounding distribution can be obtained. This information can be converted into a statement about the compounding distribution.
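An illustrative sketch of the moment route mentioned above: for a Poisson process whose rate L is itself random (a compounded, i.e. mixed, Poisson model), the count N per unit time satisfies E[N] = E[L] and Var[N] = E[L] + Var[L], so the first two moments of the compounding distribution follow directly from the observed counts. The synthetic counts below are a stand-in, not real data.

```python
import numpy as np

rng = np.random.default_rng(4)
# Stand-in for observed earthquake counts per unit time: negative-binomial counts,
# exactly what a gamma-compounded Poisson process would produce.
counts = rng.negative_binomial(3, 0.3, 200)

mean_rate = counts.mean()                     # estimate of E[L]
var_rate = counts.var(ddof=1) - mean_rate     # estimate of Var[L]; ~0 for a plain Poisson
print(f"E[L] ~ {mean_rate:.2f},  Var[L] ~ {var_rate:.2f}")
```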

5.
To explore the physical mechanism of pre-seismic electromagnetic anomalies, many studies have focused on the electromagnetic radiation generated during rock fracturing. Granite samples were loaded uniaxially and deformed to failure, and the electromagnetic emission (EME) and acoustic emission (AE) signals produced throughout the process were recorded and compared. The characteristics of the EME signals during rock fracture are analysed here in two respects: (1) the energy distribution of the EME is analysed with a non-extensive statistical method and compared with the b-value approach, in order to infer the type of fracturing and the development of cracks; (2) the temporal behaviour of the EME is analysed with the unified waiting-time scaling law, in order to obtain the waiting-time distribution of EME signals during rock fracture. The results show that, when fitting the energy-distribution curves of the AE and EME signals, the b-value can fit only part of the AE data and cannot fit the EME data at all, whereas the non-extensive parameter q fits both types of signals well. Comparison of the waiting-time probability densities of the AE and EME signals shows that the EME signals collapse onto a common curve relatively poorly and may not satisfy the unified waiting-time scaling law.

6.
Very little work has been done on generating alternatives to the Poisson process model. The work reported here develops alternatives to the Poisson process model for earthquakes and checks them against empirical data using the apparatus of statistical hypothesis testing. The strategy used for generating hypotheses is to compound the Poisson process: the parameter of the Poisson process is replaced by a random variable with a prescribed density function. The density functions used are gamma, chi and an extended (gamma/chi) form, and the original distribution is averaged over each of them. For the compound Poisson processes, the waiting-time distributions for future events are derived. As the parameters of the various statistical models for earthquake occurrence are not known, the problem is essentially one of composite hypothesis testing; one way to design a test is to estimate these parameters and use them as the true values. Moment matching is used here to estimate the parameters. Results of hypothesis testing using data from the Hindukush and North-East India are presented.
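A sketch under stated assumptions (not the paper's code) of the gamma case: if the Poisson rate is compounded with a gamma(a, b) density, the waiting time T to the next event has the Lomax survival function P(T > t) = (b / (b + t))^a, and moment matching on the observed counts per unit time yields a and b. The synthetic counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
counts = rng.negative_binomial(2, 0.4, 500)   # stand-in for observed counts per unit time

def gamma_compound_from_counts(c):
    """Moment-matched gamma parameters (shape a, rate b) for the randomised Poisson rate."""
    m, v = c.mean(), c.var(ddof=1)
    var_rate = v - m                          # excess variance; positive for over-dispersed counts
    a = m ** 2 / var_rate
    b = m / var_rate
    return a, b

def waiting_time_survival(t, a, b):
    """P(T > t) for the gamma-compounded Poisson process (Lomax form)."""
    return (b / (b + t)) ** a

a, b = gamma_compound_from_counts(counts)
print(f"a = {a:.2f}, b = {b:.2f}, P(T > 1) = {waiting_time_survival(1.0, a, b):.3f}")
```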

7.
We investigate the evolution of seismicity within large earthquake cycles in a model of a discrete strike-slip fault in an elastic solid. The model dynamics is governed by realistic boundary conditions consisting of constant-velocity motion of the regions around the fault, static/kinetic friction and dislocation creep along the fault, and 3D elastic stress transfer. The fault consists of brittle parts, which fail during earthquakes and undergo small creep deformation between events, and aseismic creep cells, which are characterized by high ongoing creep motion. This mixture of brittle and creep cells is found to generate realistic aftershock sequences which follow the modified Omori law and scale with the mainshock size. Furthermore, we find that the distribution of interevent times of the simulated earthquakes is in good agreement with observations. The temporal occurrence, however, is magnitude-dependent; in particular, the small events are clustered in time, whereas the largest earthquakes occur quasiperiodically. Averaging the seismicity before several large earthquakes, we observe an increase of activity and a broadening scaling range of magnitudes as the time of the next mainshock is approached. These results are characteristic of critical-point behavior. The presence of critical-point dynamics is further supported by the evolution of the stress field in the model, which is compatible with the observation of accelerating moment release in natural fault systems.

8.
The fulfillment of a scaling law for earthquake recurrence-time distributions is a clear indication of the importance of correlations in the structure of seismicity. In order to characterize these correlations we measure conditional recurrence-time and magnitude distributions for worldwide seismicity as well as for Southern California during stationary periods. Disregarding the spatial structure, we conclude that the relevant correlations in seismicity are those of the recurrence time with previous recurrence times and magnitudes; in the latter case, the conditional distribution verifies a scaling relation depending on the difference between the magnitudes of the two events defining the recurrence time. In contrast, at our present resolution, magnitude seems to be independent of the history contained in the seismic catalogs (except perhaps for Southern California at very short time scales, less than about 30 min, for the magnitude ranges analyzed).

9.
Benford's analysis is applied to the recurrence times of approximately 17,000 seismic events in different geological contexts of Italy over the last 6 years, including the Mt. Etna volcanic area and the seismic series associated with the destructive Mw 6.3, 2009 L'Aquila earthquake. A close conformity to Benford's law and a power-law probability distribution for the recurrence times of consecutive events are found, as is typical of random multiplicative processes. The application of Benford's law to the recurrence times in seismic series of specific seismogenic regions represents a novel approach, which enlarges the occurrence and relevance of Benford-like asymmetries, with implications for the physics of natural systems approaching power-law behaviour. Moreover, we propose that the shift from close conformity to Benford's law towards Brownian dynamics, observed for time separations between non-consecutive events in the studied seismic series, may be governed by a periodic noise factor, such as the tuning of seismicity by Earth tides.
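A minimal sketch of the Benford test itself (not the authors' code): compare the observed first-significant-digit frequencies of the inter-event times with Benford's law P(d) = log10(1 + 1/d). The stand-in recurrence times below simply span several decades, as any broad multiplicative process would.

```python
import numpy as np

rng = np.random.default_rng(0)
tau = 10.0 ** rng.uniform(0.0, 4.0, 17000)    # stand-in recurrence times, seconds

def first_digit(x):
    """First significant digit of each positive value."""
    return (x / 10.0 ** np.floor(np.log10(x))).astype(int)

digits = first_digit(tau)
observed = np.array([(digits == d).mean() for d in range(1, 10)])
benford = np.log10(1.0 + 1.0 / np.arange(1, 10))

for d in range(1, 10):
    print(f"digit {d}: observed {observed[d - 1]:.3f}  Benford {benford[d - 1]:.3f}")
```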

10.
Seismic hazard analysis is based on data and models, both of which are imprecise and uncertain. In particular, the interpretation of historical information into earthquake parameters, e.g. earthquake size and location, yields ambiguous and imprecise data. Models based on probability distributions have been developed in order to quantify and represent these uncertainties. Nevertheless, the majority of the procedures applied in seismic hazard assessment neither take these uncertainties into account nor show the variance of the results. Therefore, a procedure based on Bayesian statistics was developed to estimate return periods for different ground-motion intensities (MSK scale). Bayesian techniques provide a mathematical model for estimating the distribution of random variables in the presence of uncertainties. The developed method estimates the probability distribution of the number of occurrences in a Poisson process described by the rate parameter λ. The input data are the historical occurrences of intensities at a particular site, represented by a discrete probability distribution for each earthquake. The calculation of these historical occurrences requires careful preparation of all input parameters, i.e. a modelling of their uncertainties. The results show that the variance of the recurrence rate is smaller in regions with higher seismic activity than in less active regions. It can also be demonstrated that long return periods cannot be estimated with confidence, because the period of observation is too short. This indicates that the long return periods obtained by seismic source methods only reflect the delineated seismic sources and the chosen earthquake-size distribution law.
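A hedged sketch of the conjugate Gamma-Poisson update that underlies this type of Bayesian recurrence estimate (the paper's actual formulation may differ): with a Gamma(a0, b0) prior on the Poisson rate λ of exceedances of a given MSK intensity, observing n events in T years gives a Gamma(a0 + n, b0 + T) posterior, from which a return period and its credible bounds follow. The prior and the observation counts below are assumptions.

```python
from scipy.stats import gamma

a0, b0 = 0.5, 0.0001                         # assumed weakly informative prior (shape, rate)
n, T = 4, 300.0                              # hypothetical: 4 exceedances observed in 300 years

a_post, b_post = a0 + n, b0 + T
post = gamma(a_post, scale=1.0 / b_post)     # posterior of the rate λ (events per year)

lam_mean = post.mean()
lam_lo, lam_hi = post.ppf([0.05, 0.95])
print(f"rate: {lam_mean:.4f}/yr, return period ~ {1 / lam_mean:.0f} yr "
      f"(90% band {1 / lam_hi:.0f}-{1 / lam_lo:.0f} yr)")
```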

11.
On the frequency distribution of turbidite thickness
The frequency distribution of turbidite thickness records information on flow hydrodynamics, initial sediment volumes and source migration and is an important component of petroleum reservoir models. However, the nature of this thickness distribution is currently uncertain, with log-normal or negative-exponential frequency distributions and power-law cumulative frequency distributions having been proposed by different authors. A detailed analysis of the Miocene Marnoso Arenacea Formation of the Italian Apennines shows that turbidite bed thickness and sand-interval thickness within each bed have a frequency distribution comprising the sum of a series of log-normal frequency distributions. These strata were deposited predominantly in a basin-plain setting, and bed amalgamation is relatively rare. Beds or sand intervals truncated by erosion were excluded from this analysis. Each log-normal frequency distribution characterizes bed or sand-interval thickness for a given basal grain-size or basal Bouma division. Measurements from the Silurian Aberystwyth Grits in Wales, the Cretaceous Great Valley Sequence in California and the Permian Karoo Basin in South Africa show that this conclusion holds for sequences of disparate age and variable location. The median thickness of these log-normal distributions is positively correlated with basal grain-size. The power-law exponent relating the basal grain-size and median thickness is different for turbidites with a basal A or B division and those with only C, D and E divisions. These two types of turbidite have been termed 'thin bedded' and 'thick bedded' by previous workers. A change in the power-law exponent is proposed to be related to: (i) a transition from viscous to inertial settling of sediment grains; and (ii) hindered settling at high sediment concentrations. The bimodal thickness distribution of 'thin-bedded' and 'thick-bedded' turbidites noted by previous workers is explained as the result of a change in the power-law exponent. This analysis supports the view that A and B divisions were deposited from high-concentration flow components and that distinct grain-size modes undergo different depositional processes. Summation of log-normal frequency distributions for thin- and thick-bedded turbidites produces a cumulative frequency distribution of thickness with a segmented power-law trend. Thus, the occurrence of both log-normal and segmented power-law frequency distributions can be explained in a holistic fashion. Power-law frequency distributions of turbidite thickness have previously been linked to power-law distributions of earthquake magnitude or volumes of submarine slope failure. The log-normal distribution for a given grain-size class observed in this study suggests an alternative view, that turbidite thickness is determined by the multiplicative addition of several randomly distributed parameters, in addition to the settling velocity of the grain-sizes present.

12.
13.
The quantitative parameters of the self-similarity of the aftershocks of the Japanese earthquake of March 11, 2011 were obtained. The parameter p in the Omori law (1.06), the parameter b in the Gutenberg-Richter law (0.61), and the fractal dimension D of the earthquake epicenters (1.52) were determined. Self-similarity is manifested over two orders of magnitude in temporal and spatial scales and four units of magnitude. The temporal stability of the parameter p and spatial variations in the b and p parameters were revealed.
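A quick, illustrative sketch of how the b-value quoted above is commonly estimated, using the Aki/Utsu maximum-likelihood formula with a binning correction; the synthetic magnitude sample and the completeness magnitude are assumptions, not the study's data.

```python
import numpy as np

def b_value_mle(magnitudes, m_c, dm=0.1):
    """Aki/Utsu maximum-likelihood b-value above completeness magnitude m_c (bin width dm)."""
    m = magnitudes[magnitudes >= m_c]
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

# Synthetic Gutenberg-Richter magnitudes with b = 0.61, binned to 0.1 units
rng = np.random.default_rng(6)
mags = np.round(4.5 + rng.exponential(scale=1.0 / (0.61 * np.log(10.0)), size=5000), 1)

print(f"b = {b_value_mle(mags, m_c=4.5):.2f}")
```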

14.
Physical and stochastic models of earthquake clustering
The phenomenon of earthquake clustering, i.e., the increase in occurrence probability for seismic events close in space and time to previous earthquakes, has been modeled by both statistical and physical processes. From a statistical viewpoint, the so-called epidemic model (ETAS) introduced by Ogata in 1988 and its variations have become fairly well known in the seismological community; tests on real seismicity and comparison with a plain time-independent Poissonian model through likelihood-based methods have reliably proved their validity. On the other hand, in the last decade many papers have been published on the so-called Coulomb stress change principle, based on the theory of elasticity, showing qualitatively that an increase of the Coulomb stress in a given area is usually associated with an increase of seismic activity. More specifically, the rate-and-state theory developed by Dieterich in the 1990s has given a physical justification to the phenomenon known as the Omori law, according to which a mainshock is followed by a series of aftershocks whose frequency decreases in time as an inverse power law. In this study we give an outline of the above-mentioned stochastic and physical models, and build up an approach by which these models can be merged in a single algorithm and statistically tested. The application to the seismicity of Japan from 1970 to 2003 shows that the new model incorporating the physical concept of the rate-and-state theory performs no worse than the purely stochastic model with only two free parameters. The numerical results obtained in these applications are related to physical characteristics of the model, such as the stress change produced by an earthquake close to its edges, and to the A and σ parameters of the rate-and-state constitutive law.
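For orientation, a compact sketch of the ETAS conditional intensity referred to above (not the authors' merged model): the rate at time t is a background rate plus Omori-type contributions from all earlier events, weighted exponentially by their magnitudes. All parameter values and the tiny event history are illustrative assumptions.

```python
import numpy as np

def etas_rate(t, times, mags, mu=0.2, K=0.05, alpha=1.0, c=0.01, p=1.1, m0=3.0):
    """lambda(t | history) for a temporal ETAS model (times in days, magnitudes above m0)."""
    past = times < t
    dt = t - times[past]
    return mu + np.sum(K * 10.0 ** (alpha * (mags[past] - m0)) / (dt + c) ** p)

# Example with a tiny synthetic history
times = np.array([0.0, 0.5, 2.0])
mags = np.array([5.0, 3.5, 4.2])
print(etas_rate(3.0, times, mags))
```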

15.
Bogdan Enescu & Kiyoshi Ito, Tectonophysics, 2005, 409(1-4): 147-157
Using the double-difference relocation technique, we have determined the fine structure of seismicity during the 1998 Hida Mountain earthquake swarm. The distribution of seismic activity defines two main directions (N-S and E-W) that probably correspond to the regional stress pattern. The detailed structure of seismicity reveals intense spatio-temporal clustering and earthquake lineations. Each cluster of events contains a mainshock and subsequent aftershock activity that decays according to the Omori law. The temporal and spatial patterns of seismicity and of the b-value reflect the evolution of the static stress changes during the earthquake swarm. About 80% of the swarm's best-relocated events occur in regions of increased ΔCFF. The smaller b-value found in the northern part of the swarm region and the larger b-value observed to the south, for the same period of time, can be well explained by the static stress changes caused by the larger events of the sequence. We argue that the state of stress in the crust is the main factor controlling the variation of the b-value.

16.
In the region of the Three Gorges Reservoir (TGR) in China, frequent earthquakes of moderate intensity have occurred since reservoir impoundment began in 2003. These earthquakes are generally believed to be induced by reservoir impoundment and water-level variations. Since the stress state changes when earthquakes occur, this paper adopts rate-and-state theory to simulate and estimate Coulomb stress changes in the TGR region, obtaining the pattern of Coulomb stress changes with time and with the event sequence, as well as the distribution of Coulomb stress changes in space. First, the TGR regional catalogue was analyzed and processed to quantify the magnitude of completeness and all parameters used in the stress-seismicity inversion, including the reference seismicity rates, the characteristic relaxation time, the fault constitutive parameters, and the stressing rates. Second, the temporal evolution of the stress changes in different time windows was computed and analyzed, and an association was found between the Coulomb stress changes and the rate of increase in the cumulative number of earthquakes. In addition, the earthquake that occurred in November 2008 (MS = 4.1) was analyzed, and the spatial distribution of stress changes was simulated with the stress-seismicity inversion model; the modeled area coincides with the area of earthquakes that occurred after 2008. Finally, a prediction of earthquake productivity rates after 2015 showed a rate that declines over time and ultimately returns to the background seismicity, essentially in agreement with Omori's law. In conclusion, it is rational to use the stress-inversion method to analyze the relationship between induced seismicity and local stress changes, as well as to simulate the area of earthquake occurrence and the productivity rates of reservoir-induced earthquakes.
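A hedged sketch of the Dieterich rate-and-state response used in this type of inversion (illustrative only, not the paper's code): after a sudden Coulomb stress step dCFF, the seismicity rate relative to the background rate r is R(t) = r / (1 + (exp(-dCFF/(Aσ)) - 1) exp(-t/t_a)), where Aσ is the constitutive parameter and t_a the characteristic relaxation time. All numerical values below are assumptions.

```python
import numpy as np

def dieterich_rate(t, r, dCFF, Asigma, t_a):
    """Seismicity rate at time t (same units as t_a) after a Coulomb stress step dCFF (MPa)."""
    gamma = np.exp(-dCFF / Asigma) - 1.0
    return r / (1.0 + gamma * np.exp(-t / t_a))

t = np.linspace(0.0, 10.0, 6)                                     # years after the stress step
print(dieterich_rate(t, r=2.0, dCFF=0.1, Asigma=0.02, t_a=2.0))   # assumed parameter values
```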

17.
Observations indicate that earthquake faults occur in topologically complex, multi-scale networks driven by plate tectonic forces. We present realistic numerical simulations, involving data-mining, pattern recognition, theoretical analyses and ensemble forecasting techniques, to understand how the observable space-time earthquake patterns are related to the fundamentally inaccessible and unobservable dynamics. Numerical simulations can also help us to understand how the different scales involved in earthquake physics interact and influence the resulting dynamics. Our simulations indicate that elastic interactions (stress transfer) combined with the nonlinearity in the frictional failure threshold law lead to the self-organization of the statistical dynamics, producing 1) statistical distributions for magnitudes and frequencies of earthquakes that have characteristics similar to those possessed by the Gutenberg-Richter magnitude-frequency distributions observed in nature; and 2) clear examples of stress transfer among fault activity described by stress shadows, in which an earthquake on one group of faults reduces the Coulomb failure stress on other faults, thereby delaying activity on those faults. In this paper, we describe the current state of modeling and simulation efforts for Virtual California, a model for all the major active strike slip faults in California. Noting that the Working Group on California Earthquake Probabilities (WGCEP) uses statistical distributions to produce earthquake forecast probabilities, we demonstrate that Virtual California provides a powerful tool for testing the applicability and reliability of the WGCEP statistical methods. Furthermore, we show how the simulations can be used to develop statistical earthquake forecasting techniques that are complementary to the methods used by the WGCEP, but improve upon those methods in a number of important ways. In doing so, we distinguish between the "official" forecasts of the WGCEP, and the "research-quality" forecasts that we discuss here. Finally, we provide a brief discussion of future problems and issues related to the development of ensemble earthquake hazard estimation and forecasting techniques.

18.
Determination of the return period of a design flood depends on the nature of the project and the consequences of the flood and is based on economic criteria, human casualties, and hydrological factors. Underestimation of the flood might result in casualties and economic damage, while overestimation leads to wasted capital. Therefore, in this research, a flood frequency analysis of the Dez Basin, Iran was conducted for the period 1956-2012 using a power-law approach together with ordinary distributions, including the normal, log-normal, Pearson type III, exponential, gamma, generalized extreme value, Nakagami, Rayleigh, logistic, generalized logistic, generalized Pareto, and Weibull distributions. The power law arises from the fractal nature of earth-science phenomena such as precipitation and runoff. Accordingly, partial-duration flood series for five hydrometric stations in the Dez Basin were extracted using the power law with intervals of 7, 14, 30, and 60 days and then compared with the annual maxima. The results indicated that the annual maxima were not suitable for flood frequency analysis in the Dez Basin, and that the 30-day partial-duration series obtained from the power law corresponds better to the flow and properties of the basin. The independence and stationarity of the 30-day partial-duration series were examined by the Wald-Wolfowitz test, which confirmed the independence of the series. Next, the power distribution and the typical statistical distributions were fitted to the flood data of the Dez Basin, and the performance of each distribution was assessed using the normalized root-mean-square error and Nash-Sutcliffe criteria. The results revealed that at the SDZ and TPB stations the power distribution performed better than the other distributions considered, while at the SDS, TPS, and TZ stations it ranked second. Since the power distribution performed very well in estimating floods in the Dez Basin, and its parameters are easier to calculate and apply than those of ordinary probability distributions, it can be suggested as the preferred distribution for flood frequency analysis in the Dez Basin.
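A sketch of the kind of comparison described above (not the study's code or data): fit a simple power-law (Pareto) tail to a partial-duration flood series above a threshold and score the fitted versus empirical exceedance probabilities with the Nash-Sutcliffe efficiency. The threshold and the synthetic peak flows are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
q_thr = 500.0                                           # PDS threshold (m^3/s), illustrative
q = np.sort(q_thr * (1.0 + rng.pareto(3.0, 150)))       # stand-in 30-day partial-duration peaks

# Maximum-likelihood exponent of the power-law tail P(Q > q) = (q_thr / q)**alpha
alpha = q.size / np.sum(np.log(q / q_thr))

emp = 1.0 - (np.arange(1, q.size + 1) - 0.5) / q.size   # empirical exceedance (Hazen plotting position)
mod = (q_thr / q) ** alpha                              # fitted exceedance

nse = 1.0 - np.sum((emp - mod) ** 2) / np.sum((emp - emp.mean()) ** 2)
print(f"alpha = {alpha:.2f}, Nash-Sutcliffe efficiency = {nse:.3f}")
```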

19.
The geological structure and geodynamic mechanism of the eastern segment of the Altyn Tagh fault zone have long been a focus of attention for geoscientists. In recent years, small-earthquake data have increasingly been applied to studies of the spatial distribution of active faults, the analysis of deep and shallow structures, and geodynamic mechanisms. In this paper, the double-difference location method is applied to obtain precise locations for 6013 seismic events that occurred in the study region between 2008 and 2017, and several depth profiles of the small earthquakes clearly delineate the spatial geometry of the fault system. Combining petroleum seismic profiles, wide-angle reflection/refraction profiles and deep seismic reflection profiles, and making full use of the precise small-earthquake locations and results on shallow active structures, a deep-shallow structural model of the fault system in the study area is established. The Moho in the study region deepens gradually from north to south and is offset in three places, forming a step-like pattern; a low-velocity layer about 10 km thick exists within the crust, and most earthquakes occur above it. The fault system as a whole has a "Y"-shaped geometry: its upper part consists of a series of imbricate thrust faults responsible for the uplift of the Qilian Mountains, which merge downward into a single master fault. Finally, the geodynamic mechanism of tectonic deformation in the northeastern margin of the Tibetan Plateau is discussed: the Asian plate underthrusts to the front of the Qilian Mountains, the upper crust thickens through thrust-nappe tectonics, while the middle and lower crust thicken mainly through the underthrusting of the Asian lithospheric mantle and flow driven by the drag of the upper mantle, so that the upper and lower crust thicken together.

20.
There is no single method available for estimating the seismic risk in a given area, and as a result most studies are based on some statistical model. If we denote by Z the random variable that measures the maximum magnitude of earthquakes per unit time, the seismic risk for a value m is the probability that this value will be exceeded in the next time unit, that is, R(m) = P(Z > m). Several approximations can be made by fitting different theoretical distributions to the function R, assuming different distributions for the magnitude of earthquakes. A related way to treat this problem is to consider the differences between the occurrence times of consecutive earthquakes, or inter-event times. The hazard function, or failure-rate function, of this variable measures the instantaneous risk of occurrence of a new earthquake, given that the last earthquake happened at time 0. In this paper, we consider the estimation of the variable that measures the inter-event time and apply nonparametric techniques; that is, we do not assume any theoretical distribution. Moreover, because the stochastic process associated with this variable can sometimes be non-stationary, we condition each time on the previous ones. We then work with a multidimensional estimation and consider each multidimensional variable as a functional datum. Functional data analysis deals with data consisting of curves or multidimensional variables. Nonparametric estimation can be applied to functional data to describe the behavior of seismic zones and their associated instantaneous risk. The estimation techniques are illustrated by applying them to two different regions and data catalogues: California and southern Spain.
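A rough nonparametric sketch in the spirit of the paper (not its actual estimator): estimate the hazard (failure-rate) function of the inter-event time as h(t) = f(t) / (1 - F(t)), with f from a Gaussian kernel density and F from the empirical distribution. The synthetic inter-event times are a stand-in for a real catalogue.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)
tau = rng.gamma(0.7, 12.0, 2000)                    # stand-in inter-event times (days)

kde = gaussian_kde(tau)                             # nonparametric density f
grid = np.linspace(tau.min(), np.percentile(tau, 95), 200)

f = kde(grid)
F = np.searchsorted(np.sort(tau), grid, side="right") / tau.size
hazard = f / np.clip(1.0 - F, 1e-9, None)           # instantaneous risk of a new event

print(hazard[:5])   # high immediately after an event and decaying with elapsed time
```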
