Similar Articles
20 similar articles found.
1.
Short-term earthquake prediction, months in advance, is an elusive goal of earth sciences, of great importance for fundamental science and for disaster preparedness. Here, we describe a methodology for short-term prediction named RTP (Reverse Tracing of Precursors). Using this methodology, the San Simeon earthquake in Central California (magnitude 6.5, Dec. 22, 2003) and the Tokachi-Oki earthquake in Northern Japan (magnitude 8.1, Sept. 25, 2003) were predicted 6 and 7 months in advance, respectively. The physical basis of RTP can be summed up as follows: an earthquake is generated by two interacting processes in a fault network: an accumulation of energy that the earthquake will release and a rise of instability triggering this release. Energy is carried by the stress field; instability is carried by the difference between the stress and strength fields. Both processes can be detected and characterized by “premonitory” patterns of seismicity or other relevant fields. Here, we consider an ensemble of premonitory seismicity patterns. RTP methodology is able to reconstruct these patterns by tracing their sequence backwards in time. The principles of RTP are not specific to earthquakes and may be applicable to critical transitions in a wide class of hierarchical non-linear systems.

2.
The EEPAS (“Every Earthquake a Precursor According to Scale”) model is a space–time point-process model based on the precursory scale increase (Ψ) phenomenon and associated predictive scaling relations. It has previously been fitted to the New Zealand earthquake catalogue, and applied successfully in quasi-prospective tests on the CNSS catalogue for California for forecasting earthquakes with magnitudes above 5.75 and on the JMA catalogue of Japan for magnitudes above 6.75. Here we test whether the Ψ scaling relations extend to lower magnitudes, by applying EEPAS to depth-restricted subsets of the NIED catalogue of the Kanto area, central Japan, for magnitudes above 4.75. As in previous studies, the EEPAS model is found to be more informative than a quasi-static baseline model based on proximity to past earthquakes, and much more informative than the stationary uniform Poisson model. The information that it provides is illustrated by maps of the earthquake occurrence rate density, covering magnitudes from 5.0 to 8.0, for the central Japan region as at the beginning of 2004, using the NIED and JMA catalogues to mid-2003.
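The comparison between EEPAS, the baseline model, and the uniform Poisson model is typically summarized as an information gain per earthquake (mean log-likelihood ratio). A minimal sketch of that comparison, assuming rate densities have already been evaluated at the observed earthquakes and integrated over the test region; the function name and all numbers are illustrative, not taken from the paper:

import numpy as np

def information_gain_per_event(rate_model, rate_reference, n_events,
                               integral_model, integral_reference):
    """Mean log-likelihood gain per earthquake of one rate-density model
    over a reference model (larger = more informative).

    rate_model, rate_reference : rate densities evaluated at each observed event
    integral_model, integral_reference : expected event counts (rate integrated
        over the space-time-magnitude test volume)
    """
    log_ratio = np.log(np.asarray(rate_model) / np.asarray(rate_reference))
    return (log_ratio.sum() - (integral_model - integral_reference)) / n_events

# Illustrative numbers only: 50 target earthquakes, hypothetical rate values.
rng = np.random.default_rng(0)
lam_eepas = rng.uniform(0.5, 5.0, 50)      # model rate density at each event
lam_poisson = np.full(50, 0.2)             # uniform Poisson rate density
print(information_gain_per_event(lam_eepas, lam_poisson, 50, 48.0, 50.0))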

3.
For the period 1983–1994 in western Greece, a possible correlation between the selectivity characteristics of the SES (seismic electric signals of the VAN method) and earthquake parameters has been reported by Uyeda et al. [Uyeda, S., Al-Damegh, K.S., Dologlou, E., Nagao, T., 1999. Some relationship between VAN seismic electric signals (SES) and earthquake parameters, Tectonophysics, 304, 41–55.]. They found that the earthquake source mechanism changed from largely strike-slip type to thrust type at the end of 1987, and that this coincided with a shift in the SES-sensitive site from the Pirgos (PIR) to the Ioannina (IOA) VAN station. Here, we report results for the period January 1, 2002–July 25, 2004, during which the SES-sensitive site of PIR became active again after a 10-year period of “quiescence”. This activation was followed by strike-slip earthquakes in the Hellenic arc (on August 14, 2003 and March 17, 2004, with magnitudes 6.4 and 6.5, respectively), which provides additional evidence for the correlation reported by Uyeda et al. The SES activities recorded at PIR have been discriminated from “artificial” noise by employing the recently introduced natural time analysis.

4.
Worldwide analysis of the clustering of earthquakes has led to the hypothesis that the occurrence of abnormally large clusters indicates an increase in probability of a strong earthquake in the next 3–4 years within the same region. Three long-term premonitory seismicity patterns, which correspond to different non-contradictory definitions of abnormally large clusters, were tested retrospectively in 15 regions. The results of the tests suggest that about 80% of the strongest earthquakes can be predicted by monitoring these patterns.

Most of the results concern pattern B (“burst of aftershocks”), i.e. an earthquake of medium magnitude with an abnormally large number of aftershocks during the first few days. Two other patterns, S and Σ, often complement pattern B and can replace it in some regions where the catalogs show very few aftershocks.

The practical application of these patterns is strongly limited by the fact that neither the location of the coming earthquake within the region nor its time of occurrence within the 3–4 years is indicated. However, these patterns offer the possibility of increasing the reliability of medium- and short-term precursors; they also allow activation of some important early preparatory measures.

The results impose the following empirical constraint on the theory of the generation of a strong earthquake: it is preceded by abnormal clustering of weaker earthquakes in the space–time–energy domain; the corresponding clusters are few but may occur in a wide region around the location of the coming strong earthquake; the distances are of the same order as for the other reported precursors.
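A minimal sketch of how a pattern-B (“burst of aftershocks”) scan might be implemented: count aftershocks within a fixed time and distance window after each medium-magnitude event and flag counts above a threshold. The window sizes, magnitude band, and threshold below are illustrative placeholders, not the values used in the study:

from math import radians, cos, sin, asin, sqrt

def epicentral_distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two epicentres (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def burst_of_aftershocks(catalog, main_mag=(5.0, 6.5), window_days=2.0,
                         radius_km=50.0, threshold=10):
    """Flag medium-magnitude events followed by an abnormally large number of
    aftershocks in the first `window_days` days within `radius_km`.
    `catalog` is a list of dicts with keys: t (days), lat, lon, mag."""
    flagged = []
    for ev in catalog:
        if not (main_mag[0] <= ev["mag"] < main_mag[1]):
            continue
        n_aft = sum(
            1 for a in catalog
            if 0.0 < a["t"] - ev["t"] <= window_days
            and a["mag"] < ev["mag"]
            and epicentral_distance_km(ev["lat"], ev["lon"], a["lat"], a["lon"]) <= radius_km
        )
        if n_aft >= threshold:
            flagged.append((ev, n_aft))
    return flagged

# Tiny hypothetical catalogue: one moderate event followed by two aftershocks.
demo = [
    {"t": 0.0, "lat": 36.0, "lon": 140.0, "mag": 5.5},
    {"t": 0.5, "lat": 36.1, "lon": 140.1, "mag": 3.0},
    {"t": 1.0, "lat": 36.0, "lon": 140.2, "mag": 2.8},
]
print(burst_of_aftershocks(demo, threshold=2))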

5.
Changes in the stress field of an aquifer system induced by seismotectonic activity may change the mixing ratio of groundwaters with different compositions in a well, leading to hydrochemical signals which in principle could be related to discrete earthquake events. Owing to the complexity of the interactions and the multitude of factors involved, the identification of such relationships is a difficult task. In this study we present an empirical statistical approach suitable for analysing whether there is an interdependency between changes in the chemical composition of monitoring wells and the regional seismotectonic activity of the area under consideration. To allow a rigorous comparison with hydrochemistry, the regional earthquake time series was aggregated into a univariate time series. This was done by expressing each earthquake in the form of a parameter “e”, taking into consideration both energetic (magnitude of the seismic event) and spatial parameters (position of the epicentre/hypocentre relative to the monitoring site). The earthquake and hydrochemical time series were synchronised by aggregating the e-parameters into “earthquake activity” functions E, which take into account the time of sampling relative to the earthquakes that occurred in the considered area. For the definition of the aggregation functions, a variety of different “e” parameters were considered. The set of earthquake functions E was grouped by means of factor analysis to select a limited number of significant and representative earthquake functions E to be used further in the relation analysis with the multivariate hydrochemical data set. From the hydrochemical data a restricted number of hydrochemical factors were extracted. Factor scores make it possible to represent and analyse the variation of the hydrochemical factors as a function of time. Finally, regression analysis was used to detect those hydrochemical factors which correlate significantly with the aggregated earthquake functions.

This methodological approach was tested with a hydrochemical data set collected from a deep well monitored for two years in the seismically active Vrancea region, Romania. Three of the hydrochemical factors were found to correlate significantly with the considered earthquake activities. A screening with different time combinations revealed that correlations are strongest when the cumulative seismicity over several weeks is considered. The case study also showed that the character of the interdependency sometimes depends on the geometrical distribution of the earthquake foci. By using aggregated earthquake information it was possible to detect interrelationships which could not have been identified by analysing only relations between single geochemical signals and single earthquake events. Furthermore, the approach makes it possible to determine the influence of different seismotectonic patterns on the hydrochemical composition of the sampled well. The method is suitable for use as a decision instrument in assessing whether or not a monitoring site should be included in a monitoring network within a complex earthquake prediction strategy.
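The abstract does not give the functional form of the “e” parameter, so the sketch below uses a common illustrative choice — an energy-like magnitude term divided by a power of hypocentral distance — purely to show how single events could be aggregated into an activity value E at each sampling date. The function names, constants, and catalogue values are hypothetical, not the authors' definitions:

import numpy as np

def e_parameter(magnitude, hypocentral_distance_km, distance_exponent=2.0):
    """Illustrative single-event parameter combining an energy-like magnitude
    term with a distance decay (not the form used in the original study)."""
    return 10.0 ** (1.5 * magnitude) / hypocentral_distance_km ** distance_exponent

def earthquake_activity(sampling_time, event_times, e_values, window_days=28.0):
    """Aggregate e-parameters of all events within `window_days` before the
    sampling date into one 'earthquake activity' value E."""
    event_times = np.asarray(event_times)
    e_values = np.asarray(e_values)
    mask = (event_times <= sampling_time) & (event_times > sampling_time - window_days)
    return e_values[mask].sum()

# Hypothetical catalogue: times in days, magnitudes, distances to the well (km).
times = np.array([3.0, 10.0, 25.0, 40.0])
mags = np.array([3.2, 4.1, 3.8, 4.5])
dists = np.array([80.0, 120.0, 60.0, 150.0])
e_vals = e_parameter(mags, dists)
print(earthquake_activity(sampling_time=30.0, event_times=times, e_values=e_vals))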

6.
Earthquake prediction was practiced in Japan to examine the hypothesis that “a pair of earthquakes with similar magnitudes may be a signal of an impending larger earthquake”. In the present study, predictions were announced with expected probabilities of 20–30% (rank A) or 10–20% (rank B). In 2001–2002, excepting the Ogasawara region, 26 of 61 rank-A predictions and 6 of 30 rank-B predictions were successful. Based on a statistical test of time-shift (a one-year shift in this paper) and the averaged activity in 1990–1999, the success rate of 43% for rank A was shown to be greater than that expected by chance, with a confidence level of more than 99%. The success rate of 20% for rank B gave a corresponding confidence level of only about 40%, suggesting that the rank-B predictions were not significant in this period. According to these results, a statistical test of time-shift was found to be useful for evaluating the significance of prediction methods of this type.
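A minimal sketch of the idea behind a time-shift significance test: shift the earthquake catalogue by a fixed offset (one year here), recount how many prediction windows would have “succeeded” against the shifted catalogue, and compare with the real success count. The data structures and the simple null distribution below are illustrative; the paper's exact bookkeeping may differ:

import numpy as np

def success_count(prediction_windows, quake_times):
    """Count predictions whose time window contains at least one target earthquake.
    prediction_windows: list of (start, end) in days; quake_times: event times in days."""
    quake_times = np.asarray(quake_times)
    return sum(np.any((quake_times >= s) & (quake_times <= e))
               for s, e in prediction_windows)

def time_shift_test(prediction_windows, quake_times, shift_days=365.0, n_shifts=10):
    """Compare the real success count with counts obtained after shifting the
    catalogue by multiples of `shift_days` (a crude null distribution)."""
    real = success_count(prediction_windows, quake_times)
    shifted = [success_count(prediction_windows, np.asarray(quake_times) + k * shift_days)
               for k in range(1, n_shifts + 1)]
    return real, shifted

windows = [(100, 130), (400, 430), (800, 830)]   # hypothetical prediction windows
quakes = [105.0, 850.0, 1400.0, 2100.0]          # hypothetical event times
print(time_shift_test(windows, quakes))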

7.
The parameters “radiant flux” (energy radiated per unit area of an earthquake fault in unit time) and “radiant flux per unit displacement” reflect the power dissipated on a fault during slip. Values for moderate-to-large earthquakes range over two orders of magnitude, implying considerable variations in seismic efficiency, even for events of similar magnitude occurring on faults of the same type.

8.
Seismic coupling and uncoupling at subduction zones
Seismic coupling has been used as a qualitative measure of the “interaction” between the two plates at subduction zones. Kanamori (1971) introduced seismic coupling after noting that the characteristic size of earthquakes varies systematically for the northern Pacific subduction zones. A quantitative global comparison of many subduction zones reveals a strong correlation of earthquake size with two other variables: age of the subducting lithosphere and convergence rate. The largest earthquakes occur in zones with young lithosphere and fast convergence rates, while zones with old lithosphere and slow rates are relatively aseismic for large earthquakes. Results from a study of the rupture process of three great earthquakes indicate that maximum earthquake size is directly related to the asperity distribution on the fault plane (asperities are strong regions that resist the motion between the two plates). The zones with the largest earthquakes have very large asperities, while the zones with smaller earthquakes have small scattered asperities. This observation can be translated into a simple model of seismic coupling, where the horizontal compressive stress between the two plates is proportional to the ratio of the summed asperity area to the total area of the contact surface. While the variation in asperity size is used to establish a connection between earthquake size and tectonic stress, it also implies that plate age and rate affect the asperity distribution. Plate age and rate can control asperity distribution directly by use of the horizontal compressive stress associated with the “preferred trajectory” (i.e. the vertical and horizontal velocities of subducting slabs are determined by the plate age and convergence velocity). Indirect influences are many, including oceanic plate topography and the amount of subducted sediments.

All subduction zones are apparently uncoupled below a depth of about 40 km, and we propose that the basalt to eclogite phase change in the down-going oceanic crust may be largely responsible. This phase change should start at a depth of 30–35 km, and could at least partially uncouple the plates by superplastic deformation throughout the oceanic crust during the phase change.

9.
Observations indicate that earthquake faults occur in topologically complex, multi-scale networks driven by plate tectonic forces. We present realistic numerical simulations, involving data-mining, pattern recognition, theoretical analyses and ensemble forecasting techniques, to understand how the observable space–time earthquake patterns are related to the fundamentally inaccessible and unobservable dynamics. Numerical simulations can also help us to understand how the different scales involved in earthquake physics interact and influence the resulting dynamics. Our simulations indicate that elastic interactions (stress transfer) combined with the nonlinearity in the frictional failure threshold law lead to the self-organization of the statistical dynamics, producing 1) statistical distributions for magnitudes and frequencies of earthquakes that have characteristics similar to those possessed by the Gutenberg–Richter magnitude–frequency distributions observed in nature; and 2) clear examples of stress transfer among fault activity described by stress shadows, in which an earthquake on one group of faults reduces the Coulomb failure stress on other faults, thereby delaying activity on those faults. In this paper, we describe the current state of modeling and simulation efforts for Virtual California, a model for all the major active strike slip faults in California. Noting that the Working Group on California Earthquake Probabilities (WGCEP) uses statistical distributions to produce earthquake forecast probabilities, we demonstrate that Virtual California provides a powerful tool for testing the applicability and reliability of the WGCEP statistical methods. Furthermore, we show how the simulations can be used to develop statistical earthquake forecasting techniques that are complementary to the methods used by the WGCEP, but improve upon those methods in a number of important ways. In doing so, we distinguish between the “official” forecasts of the WGCEP, and the “research-quality” forecasts that we discuss here. Finally, we provide a brief discussion of future problems and issues related to the development of ensemble earthquake hazard estimation and forecasting techniques.
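The Gutenberg–Richter comparison mentioned above amounts to checking that simulated magnitudes follow log10 N(≥M) = a − bM with a b-value near 1. A minimal sketch, assuming a plain array of simulated magnitudes and using the Aki maximum-likelihood b-value estimator; the synthetic catalogue is illustrative only:

import numpy as np

def gutenberg_richter_b(magnitudes, completeness_mag):
    """Aki (1965) maximum-likelihood b-value for magnitudes >= completeness_mag,
    treating magnitudes as continuous (no binning correction applied)."""
    m = np.asarray(magnitudes)
    m = m[m >= completeness_mag]
    return np.log10(np.e) / (m.mean() - completeness_mag)

# Hypothetical simulated catalogue: exponentially distributed magnitudes (b = 1).
rng = np.random.default_rng(42)
sim_mags = 5.0 + rng.exponential(scale=1.0 / np.log(10.0), size=5000)
print(gutenberg_richter_b(sim_mags, completeness_mag=5.0))  # should be close to 1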

10.
We detect repeating earthquakes associated with the Philippine Sea plate subduction to reveal the plate configuration. In the Kanto district, we find 140 repeating-earthquake groups comprising 428 events by waveform similarity analysis. Most repeating earthquakes in the eastern part of the Kanto district occur at a regular time interval. They have thrust-type focal mechanisms and are distributed near the upper surface of the Philippine Sea plate. These observations indicate that the repeating earthquakes there occur as repeated ruptures of isolated patches distributed on the plate boundary, owing to the concentration of stress caused by aseismic slip in the surrounding areas. This implies that the distribution of repeating earthquakes points to aseismic slip in the areas surrounding the small patches. We determine the spatial distribution of repeating earthquakes in the eastern part of the Kanto district and find that it corresponds to the upper boundary of the Philippine Sea plate, that is, the upper boundary of the oceanic crust layer of the Philippine Sea plate. The plate geometry around Choshi is newly constrained by the repeating-earthquake data, and a rather flat geometry in the eastern part of the Kanto district is revealed. The obtained geometry suggests uplift of the Philippine Sea plate due to its collision with the Pacific plate beneath Choshi.

Repeating earthquakes in the western part of the Kanto district have much shorter recurrence times, and their focal mechanisms are not of the thrust type. These repeating earthquakes are classified as “burst type” activity and are likely to occur on pre-existing fault planes distributed around the “collision zone” between the Philippine Sea plate and the inland plate. The variation among the repeating-earthquake activities in the Kanto district indicates that regular repetition of repeating earthquakes is possible only on a plate boundary with a smooth and simple geometry.
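Repeating events are typically identified by cross-correlating waveforms and grouping pairs whose correlation coefficient exceeds a threshold. A minimal sketch of that idea using zero-lag normalized cross-correlation and simple single-linkage grouping; the 0.95 threshold and synthetic traces are illustrative values, not necessarily those used in this study:

import numpy as np

def correlation_coefficient(w1, w2):
    """Normalized zero-lag cross-correlation of two equal-length waveforms."""
    w1 = (w1 - w1.mean()) / (w1.std() * len(w1))
    w2 = (w2 - w2.mean()) / w2.std()
    return float(np.dot(w1, w2))

def group_repeaters(waveforms, threshold=0.95):
    """Link events whose waveform similarity exceeds `threshold` into groups
    (single-linkage clustering over the similarity graph)."""
    n = len(waveforms)
    group_id = list(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if correlation_coefficient(waveforms[i], waveforms[j]) >= threshold:
                old, new = group_id[j], group_id[i]
                group_id = [new if g == old else g for g in group_id]
    groups = {}
    for idx, g in enumerate(group_id):
        groups.setdefault(g, []).append(idx)
    return [members for members in groups.values() if len(members) > 1]

# Hypothetical waveforms: two nearly identical traces plus one dissimilar trace.
t = np.linspace(0, 1, 200)
base = np.sin(2 * np.pi * 5 * t) * np.exp(-3 * t)
waveforms = [base,
             base + 0.01 * np.random.default_rng(1).normal(size=t.size),
             np.random.default_rng(2).normal(size=t.size)]
print(group_repeaters(waveforms))  # expected: [[0, 1]]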

11.
The mechanism of earthquake preparation and the process of earthquake occurrence are very complicated. In addition, earthquakes do not happen very often, and our knowledge of the Earth's interior structure, its patterns of activity and other key elements is still limited. As a result, progress on the theory of earthquake precursors has been greatly restricted. Ground-based gravity observation has become one of the main ways to study earthquake precursor information in many countries and regions. This paper briefly summarizes surface gravity observation technology and the observation networks in China. Surface gravity instruments have developed from Huygens' physical pendulum in the seventeenth century to today's high-precision absolute gravimeters, whose accuracy reaches ±1×10⁻⁸ m/s². China has successively established the National Gravity Network, the Digital Earthquake Observation Network of China, the Crustal Movement Observation Network of China I and the Crustal Movement Observation Network of China, providing a public platform for monitoring non-tidal gravity change, seismic gravity and tectonic movement. Specific examples illustrate the role of gravity observation data in earthquake prediction: ground gravity observations can capture gravity-change information during the preparation of strong earthquakes and provide an important basis for their long-term prediction. The temporal and spatial variation characteristics of the regional gravity field and their relation to strong earthquakes are analyzed: before earthquakes of magnitude greater than MS 5, gravity anomaly zones of large amplitude and extent generally appear, and strong earthquakes occur mainly in areas where the gravity field changes violently. Images of the dynamic change of the gravity field can clearly reflect precursory information during the preparation and occurrence of large earthquakes. Finally, the existing problems of surface gravity techniques in earthquake precursor observation are discussed, and the use of gravity measurement data in earthquake prediction research is considered in prospect.

12.
The Parkfield Area Seismic Observatory (PASO) was a dense, telemetered seismic array that operated for nearly 2 years in a 15 km aperture centered on the San Andreas Fault Observatory at Depth (SAFOD) drill site. The main objective of this deployment was to refine the locations of earthquakes that will serve as potential targets for SAFOD drilling and, in the process, develop a high (for passive seismological techniques) resolution image of the fault zone structure. A challenging aspect of the analysis of this data set was the known existence of large (20–25%) contrasts in seismic wavespeed across the San Andreas Fault. The resultant distortion of raypaths could challenge the applicability of approximate ray tracing techniques. In order to test the sensitivity of our hypocenter locations and tomographic image to the particular ray tracing and inversion technique employed, we compare an initial determination of locations and structure developed using a coarse grid and an approximate ray tracer [Thurber, C., Roecker, S., Roberts, K., Gold, M., Powell, M.L., and Rittger, K., 2003. Earthquake locations and three-dimensional fault zone structure along the creeping section of the San Andreas fault near Parkfield, CA: Preparing for SAFOD, Geophys. Res. Lett., 30, 3, 10.1029/2002GL016004.] with one derived from a relatively fine grid and an application of a finite difference algorithm [Hole, J.A., and Zelt, B.C., 1995. 3-D finite-difference reflection traveltimes, Geophys. J. Int., 121, 2, 427–434.]. In both cases, we inverted arrival-time data from about 686 local earthquakes and 23 shots simultaneously for earthquake locations and three-dimensional Vp and Vp/Vs structure. Included are data from an active source seismic experiment around the SAFOD site as well as from a vertical array of geophones installed in the 2-km-deep SAFOD pilot hole, drilled in summer 2002. Our results show that the main features of the original analysis are robust: hypocenters are located beneath the trace of the fault in the vicinity of the drill site, and the positions of major contrasts in wavespeed are largely the same. At the same time, we determine that shear wave speeds in the upper 2 km of the fault zone are significantly lower than previously estimated, and our estimate of the depth of the main part of the seismogenic zone decreases in places by about 100 m. Tests using “virtual earthquakes” (borehole receiver gathers of picks for surface shots) indicate that our event locations near the borehole are currently accurate to within a few tens of meters both horizontally and vertically.

13.
The Load/Unload Response Ratio (LURR) method has been proposed for short-to-intermediate-term earthquake prediction [Yin, X.C., Chen, X.Z., Song, Z.P., Yin, C., 1995. A New Approach to Earthquake Prediction — The Load/Unload Response Ratio (LURR) Theory, Pure Appl. Geophys., 145, 701–715]. This method is based on measuring the ratio between Benioff strains released during the time periods of loading and unloading, corresponding to the Coulomb failure stress change induced by Earth tides on optimally oriented faults. According to the method, the LURR time series usually climbs to an anomalously high peak prior to the occurrence of a large earthquake. Previous studies have indicated that the size of the critical seismogenic region selected for LURR measurements has a great influence on the evaluation of LURR. In this study, we replace the circular region usually adopted in LURR practice with an area within which the tectonic stress change would most affect the Coulomb stress on a potential seismogenic fault of a future event. The Coulomb stress change before a hypothetical earthquake is calculated based on a simple back-slip dislocation model of the event. This new algorithm, which combines the LURR method with our choice of identified areas of increased Coulomb stress, is devised to improve the sensitivity of LURR in measuring the criticality of stress accumulation before a large earthquake. Retrospective tests of this algorithm on four large earthquakes that occurred in California over the last two decades show remarkable enhancement of the LURR precursory anomalies. For some strong events of lesser magnitude that occurred in the same neighborhoods and during the same time periods, significant anomalies are found if circular areas are used, but not if increased-Coulomb-stress areas are used for LURR data selection. The unique feature of this algorithm may provide stronger constraints on forecasts of the size and location of future large events.
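A minimal sketch of the LURR computation itself: classify each event as occurring during tidal loading or unloading (here via supplied flags derived from the tidally induced Coulomb stress rate), sum Benioff strain (square root of seismic energy) in each class over a window, and take the ratio. The energy–magnitude relation and window handling are standard textbook choices, not necessarily those of the cited study:

import numpy as np

def benioff_strain(magnitude):
    """Square root of seismic energy from the Gutenberg-Richter energy relation
    log10 E = 1.5 M + 4.8 (E in joules)."""
    return np.sqrt(10.0 ** (1.5 * np.asarray(magnitude) + 4.8))

def lurr(magnitudes, loading_flags):
    """Load/Unload Response Ratio over one time window:
    (Benioff strain released while loading) / (strain released while unloading).
    loading_flags[i] is True if event i occurred during tidal loading
    (positive tidally induced Coulomb failure stress rate)."""
    s = benioff_strain(magnitudes)
    flags = np.asarray(loading_flags, dtype=bool)
    unload = s[~flags].sum()
    return s[flags].sum() / unload if unload > 0 else np.inf

# Hypothetical window of small events with precomputed loading/unloading flags.
mags = [2.1, 2.5, 3.0, 2.2, 2.8, 2.4]
flags = [True, True, False, True, False, True]
print(lurr(mags, flags))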

14.
Flood stories in the Hebrew Bible and the Koran appear to be derived from earlier flood stories like those in the Gilgamesh Epic and still earlier in the Atrahasis. All would have their source in floods of the Tigris and Euphrates rivers.

The Gilgamesh Epic magnifies the catastrophe by having the flood begin with winds, lightning, and a shattering of the earth, or earthquake. Elsewhere in Gilgamesh, an earthquake can be shown to have produced pits and chasms along with gushing of water. It is commonly observed that earthquake shaking causes water to gush from the ground and leaves pits and open fissures; the process is known as soil liquefaction. An earthquake is also a possible explanation for the verse “all the fountains of the great deep (were) broken up” that began the Flood in Genesis. Traditionally, the “great deep” was the ocean bottom. A more recent translation substitutes “burst” for “broken up” in describing the fountains, suggesting that they erupted at the ground surface and were caused by an earthquake with soil liquefaction. Another relation between soil liquefaction and the Flood is found in the Koran, where the Flood starts when “water gushed forth from the oven”. The observation that liquefied soil erupts preferentially into houses during an earthquake provides a logical interpretation if the oven is seen as a tiny house. A case can be made that earthquakes with soil liquefaction are embedded in all of these flood stories.


15.
Many different runout prediction methods can be applied to estimate the mobility of future debris flows during hazard assessment. The present article reviews empirical, analytical, simple flow-routing and numerical techniques. All these techniques were applied to back-calculate a debris flow that occurred in 1982 in the La Guingueta catchment, in the Eastern Pyrenees. A sensitivity analysis of input parameters was carried out, with special attention paid to the influence of the rheological parameters. We used the Voellmy fluid rheology for our analytical and numerical modelling, since this flow resistance law coincided best with field observations. The simulation results indicated that the “basal” friction coefficient mainly affects the runout distance, while the “turbulence” term mainly influences flow velocity. A comparison of the velocities computed on the fan showed that the analytical model calculated values similar to the numerical ones. The values of the rheological parameters calibrated at La Guingueta agree with data back-calculated for other debris flows. Empirical relationships represent another method to estimate total runout distance. The results confirmed that they contain an important uncertainty and are strictly valid only for the conditions that were the basis for their development. With regard to the simple flow-routing algorithm, this method could satisfactorily simulate the total area affected by the 1982 debris flow, but it was not able to directly calculate total runout distance and velocity. Finally, a suggestion on how the different runout prediction methods can be applied to generate debris-flow hazard maps is presented. Taking into account the definitions of hazard and intensity, the best choice would be to divide the resulting hazard maps into two types: “final hazard maps” and “preliminary hazard maps”. Only the use of numerical models provided final hazard maps, because they can incorporate different event magnitudes and they supply output values for intensity calculation. In contrast, empirical relationships and flow-routing algorithms, or a combination of both, could be applied to create preliminary hazard maps. The present study focused only on runout prediction methods. Other tasks necessary to complete the hazard assessment can be looked up in the “Guidelines for landslide susceptibility, hazard and risk zoning” included in this Special Issue.
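For reference, the Voellmy flow resistance law mentioned above combines a basal (Coulomb-type) friction term and a velocity-squared “turbulence” term: S = μ ρ g h cos θ + ρ g v² / ξ. A minimal sketch of the resistance per unit basal area, with purely illustrative parameter values (μ and ξ must be calibrated for each site, as the article discusses):

import math

def voellmy_resistance(velocity, flow_depth, slope_deg,
                       mu=0.1, xi=500.0, density=2000.0, g=9.81):
    """Voellmy flow resistance per unit basal area (Pa):
    basal friction term  mu * rho * g * h * cos(slope)
    turbulence term      rho * g * v**2 / xi
    mu : basal friction coefficient (-), xi : turbulence coefficient (m/s^2)."""
    slope = math.radians(slope_deg)
    friction = mu * density * g * flow_depth * math.cos(slope)
    turbulence = density * g * velocity ** 2 / xi
    return friction + turbulence

# Illustrative values: a 2 m deep flow moving at 8 m/s on a 15 degree slope.
print(voellmy_resistance(velocity=8.0, flow_depth=2.0, slope_deg=15.0))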

16.
Quantitative estimates of earthquake losses are needed as soon as possible after an event. A majority of earthquake-prone countries lack the necessary dense seismograph networks, modern communication, and in some places the experts to assess losses immediately, so the earliest possible warnings must come from global information and international experts. The earthquakes of interest to us are, in most areas of the world, those with M ≥ 6. In this article, we have analyzed the response time for distributing source parameter estimates from the National Earthquake Information Center (NEIC) of the US Geological Survey (USGS), the European Mediterranean Seismological Center (EMSC), and the Geophysical Institute of the Russian Academy of Sciences, Obninsk (RAS). In terms of earthquake consequences, the Pacific Tsunami Warning Center (TWC) issues assessments of the likelihood of tsunamis, the Joint Research Laboratory in Ispra, Italy (JRC) issues alerts listing sociological aspects of the affected region, and we distribute loss estimates; recently the USGS has started posting impact assessment information on their PAGER web page. Two years ago, the USGS reduced its median delay in distributing earthquake source parameters by a factor of 2 to the currently observed 26 min, and they distribute information for 99% of the events of interest to us. The median delay of the EMSC is 41 min, with 30% of our target events reported. The RAS reports after 81 min, with 30% of the target events. The first tsunami assessments by the TWC reach us 18 min (median) after large earthquakes in the Pacific area. The median delay of alerts by the JRC is 44 min (36 min recently). The World Agency for Planetary Monitoring and Earthquake Risk Reduction (WAPMERR) distributes detailed loss estimates in 41 min (median). Moment tensor solutions of the USGS, which can be helpful for refining loss estimates, reach us in 78 min (median) for 58% of the earthquakes of interest.

17.
In order to identify whether observed seismic signals are generated by an underground nuclear explosion or an earthquake, it is adequate to rely on one efficient identifier that provides a reasonably good clue in an unambiguous way. Although it is generally accepted that multi-station, multi-parameter discrimination can provide separation between explosions and earthquakes, cases do arise where signal characteristics cannot be established distinctly and satisfactorily. In the so-called “difficult” cases, which are associated with some ambiguity in deducing the nature of the source from single-station seismograms, it is shown in this paper that a reliable estimate of source depth proves extremely useful. Of the eleven typical examples of “not-easy-to-discriminate” events recorded at the Gauribidanur short-period seismic array in Southern India, seven could be successfully identified as earthquakes and the remaining four as probable underground explosions on the basis of focal-depth estimates from multi-station data.

18.
The Kuroshima Knoll is about 26 km south of Ishigaki Island in the southern part of the Ryukyu Arc. The area is considered to be the source area of “The 1771 Yaeyama Earthquake Tsunami”, which was due to a submarine landslide caused by an earthquake. Investigations using “Dolphin 3K” and “Shinkai 2000” have made it clear that there are large-scale dead Calyptogena colonies, many gravels of fallen dolomite chimneys, and carbonates on the top of the Knoll [Matsumoto, T., Uechi, C., Kimura, M., 1997; Machiyama, H., Matsumoto, T., Matsumoto, R., Hattori, M., Okano, M., Iwase, R., Tomaru, H., 2001b.]. The carbonates of Kuroshima Knoll have various shapes and macroscopic textures and have been classified into 4 types: shell crust (pavement), chimney, burrow, and nodule. All chimney and burrow carbonates are composed of dolomite, while shell-crust and nodule carbonates are composed of calcite, sometimes of both calcite and dolomite. These carbonates are considered to have been formed by cold seeps, because they are characterized by light carbon isotopic ratios (semi-biogenic) and heavy oxygen isotopic ratios. This suggests that methane hydrate layers develop under the survey area and that the water with heavy oxygen and light carbon isotopic ratios is derived from the dissociation of methane hydrate.

19.
M. Murru, R. Console, G. Falcone, Tectonophysics, 2009, 470(3–4), 214–223
We have applied an earthquake clustering epidemic model to real-time data at the Italian Earthquake Data Center operated by the Istituto Nazionale di Geofisica e Vulcanologia (INGV) for short-term forecasting of moderate and large earthquakes in Italy. In this epidemic-type model every earthquake is regarded, at the same time, as being triggered by previous events and as triggering following earthquakes. The model uses earthquake data only, with no explicit use of tectonic, geologic, or geodetic information. The forecasts are displayed as time-dependent maps showing both the expected rate density of Ml ≥ 4.0 earthquakes and the probability of ground shaking exceeding Modified Mercalli Intensity VI (PGA ≥ 0.01 g) in an area of 100 × 100 km2 around the zone of maximum expected rate density in the following 24 h. For testing purposes, the overall probability of occurrence of an Ml ≥ 4.5 earthquake in the same area of 100 × 100 km2 is also estimated. The whole procedure is tested in real time, for internal use only, at the INGV Earthquake Data Center.

Forecast verification has been carried out in a forward-retrospective way on the 2006–2007 INGV data set, making use of statistical tools such as Relative Operating Characteristic (ROC) diagrams. These procedures show that the clustering epidemic model performs up to several hundred times better than a simple random forecasting hypothesis. The seismic hazard modeling approach so developed, after a suitable period of testing and refinement, is expected to provide a useful contribution to real-time earthquake hazard assessment, and possibly a practical application for decision making and public information.
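A minimal sketch of the conditional intensity of an epidemic-type (ETAS-like) clustering model: a constant background rate plus contributions from past earthquakes, each scaled by an exponential productivity in magnitude and decaying in time according to the modified Omori law. The parameter values and catalogue below are generic illustrations, not the values fitted by the INGV system:

import numpy as np

def etas_rate(t, past_times, past_mags, mu=0.02, K=0.01,
              alpha=1.5, c=0.01, p=1.1, m0=4.0):
    """Temporal ETAS conditional intensity at time t (events per day):
    lambda(t) = mu + sum_i K * exp(alpha * (M_i - m0)) / (t - t_i + c)**p
    over all past events with t_i < t and M_i >= m0."""
    past_times = np.asarray(past_times)
    past_mags = np.asarray(past_mags)
    mask = (past_times < t) & (past_mags >= m0)
    dt = t - past_times[mask]
    triggered = K * np.exp(alpha * (past_mags[mask] - m0)) / (dt + c) ** p
    return mu + triggered.sum()

# Hypothetical recent catalogue: times in days and local magnitudes.
times = [0.0, 1.2, 3.5, 3.6]
mags = [4.2, 4.8, 5.6, 4.1]
print(etas_rate(t=4.0, past_times=times, past_mags=mags))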

20.
The surface-wave magnitudes Ms are determined for 30 great shallow earthquakes that occurred during the period from 1953 to 1977. The determination is based on the amplitude and period data from all available station bulletins, and the same procedure as that employed in Gutenberg and Richter's “Seismicity of the Earth” is used. During this period, the Chilean earthquake of 1960 has the largest Ms, 8.5. The surface-wave magnitudes listed in “Earthquake Data Reports” are found to be higher than Ms on average. By using the same method as that used by Gutenberg, the broad-band body-wave magnitudes mB are determined for great shallow shocks for the period from 1953 to 1974. mB is based on the amplitudes of P, PP and S waves, which are measured on broad-band instruments at periods of about 4–20 s. The 1-s body-wave magnitudes listed in the “Bulletin of International Seismological Center” and “Earthquake Data Reports” are found to be much smaller than mB on average. Through examination of Gutenberg and Richter's original worksheets, the relation between mB and Ms is revised to mB = 0.65 Ms + 2.5, which satisfies the mB and Ms data well for Ms between 5.2 and
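For convenience, the revised relation quoted above can be applied directly; the helper below is a trivial sketch (the stated validity range begins at Ms = 5.2, but its upper bound is truncated in this listing):

def mb_from_ms(ms):
    """Broad-band body-wave magnitude from surface-wave magnitude using the
    revised Gutenberg-Richter relation quoted above: mB = 0.65 * Ms + 2.5.
    Validity is stated for Ms from 5.2 upward (upper bound truncated here)."""
    return 0.65 * ms + 2.5

print(mb_from_ms(8.5))  # the 1960 Chilean earthquake, Ms = 8.5 -> mB about 8.0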
