Similar Documents
1.
We apply a recently developed and validated numerical model of tsunami propagation and runup to study the inundation of Resurrection Bay and the town of Seward by the 1964 Alaska tsunami. Seward was hit by both tectonic and landslide-generated tsunami waves during the $M_{\rm W}$ 9.2 1964 megathrust earthquake. The earthquake triggered a series of submarine mass failures around the fjord, which resulted in landsliding of part of the coastline into the water, along with the loss of the port facilities. These submarine mass failures generated local waves in the bay within 5 min of the beginning of strong ground motion. Recent studies estimate the total volume of underwater slide material that moved in Resurrection Bay to be about 211 million m³ (Haeussler et al. in Submarine mass movements and their consequences, pp 269–278, 2007). The first tectonic tsunami wave arrived in Resurrection Bay about 30 min after the main shock and was about the same height as the local landslide-generated waves. Our previous numerical study, which focused only on the local landslide-generated waves in Resurrection Bay, demonstrated that they were produced by a number of different slope failures, and estimated the relative contributions of different submarine slide complexes to tsunami amplitudes (Suleimani et al. in Pure Appl Geophys 166:131–152, 2009). This work extends the previous study by calculating tsunami inundation in Resurrection Bay caused by the combined impact of landslide-generated waves and the tectonic tsunami, and comparing the composite inundation area with observations. To simulate landslide tsunami runup in Seward, we use the viscous slide model of Jiang and LeBlond (J Phys Oceanogr 24(3):559–572, 1994) coupled with nonlinear shallow water equations. The input data set includes a high-resolution multibeam bathymetry and LIDAR topography grid of Resurrection Bay, and an initial thickness of slide material based on pre- and post-earthquake bathymetry difference maps. For simulation of tectonic tsunami runup, we derive the 1964 coseismic deformations from a detailed slip distribution in the rupture area, and use them as an initial condition for propagation of the tectonic tsunami. The numerical model employs nonlinear shallow water equations formulated for depth-averaged water fluxes, and calculates the temporal position of the shoreline using a free-surface moving boundary algorithm. We find that the calculated tsunami runup in Seward, caused first by local submarine landslide-generated waves and later by the tectonic tsunami, is in good agreement with observations of the inundation zone. The analysis of inundation caused by two different tsunami sources improves our understanding of their relative contributions, and supports tsunami risk mitigation in south-central Alaska. The record of the 1964 earthquake, tsunami, and submarine landslides, combined with the high-resolution topography and bathymetry of Resurrection Bay, makes it an ideal location for studying tectonic tsunamis in coastal regions susceptible to underwater landslides.

2.
We calculated tsunami runup probability (in excess of 0.5 m) at coastal sites throughout the Caribbean region. We applied a Poissonian probability model because of the variety of uncorrelated tsunami sources in the region. Coastlines were discretized into 20 km by 20 km cells, and the mean tsunami runup rate was determined for each cell. The remarkable ~500-year empirical record compiled by O’Loughlin and Lander (2003) was used to calculate an empirical tsunami probability map, the first of three constructed for this study. However, it is unclear whether the 500-year record is complete, so we conducted a seismic moment-balance exercise using a finite-element model of the Caribbean-North American plate boundaries and the earthquake catalog, and found that moment could be balanced if the seismic coupling coefficient is c = 0.32. Modeled moment release was therefore used to generate synthetic earthquake sequences to calculate 50 tsunami runup scenarios for 500-year periods. We made a second probability map from numerically calculated runup rates in each cell. Differences between the first two probability maps, based on empirical and numerically modeled rates, suggest that each captured different aspects of tsunami generation; the empirical model may be deficient in primary plate-boundary events, whereas the numerical model rates lack backarc fault and landslide sources. We thus prepared a third probability map using Bayesian likelihood functions derived from the empirical and numerical rate models and their attendant uncertainty to weight a range of rates at each 20 km by 20 km coastal cell. Our best-estimate map gives a range of 30-year runup probability from 0 to 30% regionally.
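As a rough illustration of the Poissonian model described above, the sketch below converts a mean runup rate per coastal cell into a 30-year exceedance probability; the cell names and rates are hypothetical, not values from the study.

import math

# Hypothetical mean rates of runup > 0.5 m (events per year) for two 20 km coastal cells.
mean_rates = {"cell_A": 0.002, "cell_B": 0.010}

def poisson_exceedance_probability(rate_per_year, window_years=30.0):
    """Probability of at least one exceedance in the window, assuming a Poisson process."""
    return 1.0 - math.exp(-rate_per_year * window_years)

for cell, rate in mean_rates.items():
    print(cell, f"{poisson_exceedance_probability(rate):.1%}")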

3.
A three-dimensional frequency-dependent S-wave quality factor (Qβ(f)) model for the central Honshu region of Japan has been determined in this paper using an algorithm based on inversion of strong motion data. The method of inversion for determination of three-dimensional attenuation coefficients was proposed by Hashida and Shimazaki (J Phys Earth 32, 299–316, 1984) and has been used and modified by Joshi (Curr Sci 90, 581–585, 2006; Nat Hazards 43, 129–146, 2007) and Joshi et al. (J Seismol 14, 247–272, 2010). Twenty-one earthquakes digitally recorded on strong motion stations of the Kik-net network have been used in this work. The magnitudes of these earthquakes range from 3.1 to 4.2 and their depths from 5 to 20 km. Borehole data with high signal-to-noise ratio and minimal site effects are used in the present work. The attenuation structure is determined by dividing the entire area into twenty-five three-dimensional blocks of uniform thickness, each having a different frequency-dependent shear wave quality factor. Shear wave quality factor values have been determined at frequencies of 2.5, 7.0 and 10 Hz from records in a rectangular grid defined by 35.4°N to 36.4°N and 137.2°E to 138.2°E. The obtained attenuation structure is compared with the available geological features in the region, and the comparison shows that the obtained structure is capable of resolving important tectonic features present in the area. The proposed attenuation structure is also compared with the probabilistic seismic hazard map of the region and shows some remarkable similarity to the patterns seen in the seismic hazard map.

4.
The term flood basalt is redefined emphasizing the importance of the subaerial environment. Using the well-established physical criteria of areal extent, internal structures, time of extrusion and associations, flood basalt activity in the Archean (Dharwars) of Mysore is distinguished from the geosynclinal volcanics. Study of the chemical composition of the Dharwar and other Archean volcanics in the light of Sugimura’s (1968) SWS index, and plotting of the chemical analyses on Macdonald and Katsura’s (1964) alkali-silica diagram, Kuno’s (1968) alkali-alumina-silica diagram and Scheynamann’s silica-Niggli qz diagram, shows that both geosynclinal and subaerial volcanics are mainly tholeiitic. Therefore, in deciphering the environment of volcanism, it is suggested that the physical criteria take precedence over chemical composition.

5.
We estimate the corner frequencies of 20 crustal seismic events from mainshock–aftershock sequences in different tectonic environments (mainshocks 5.7 < $M_{\rm W}$ < 7.6) using the well-established seismic coda ratio technique (Mayeda et al. in Geophys Res Lett 34:L11303, 2007; Mayeda and Malagnini in Geophys Res Lett, 2010), which provides optimal stability and does not require path or site corrections. For each sequence, we assumed the Brune source model and estimated all the events’ corner frequencies and associated apparent stresses following the MDAC spectral formulation of Walter and Taylor (A revised magnitude and distance amplitude correction (MDAC2) procedure for regional seismic discriminants, 2001), which allows for the possibility of non-self-similar source scaling. Within each sequence, we observe a systematic deviation from the self-similar $M_{0} \propto f_{\rm c}^{-3}$ line, all data being rather compatible with $M_{0} \propto f_{\rm c}^{-(3 + \varepsilon)}$, where ε > 0 (Kanamori and Rivera in Bull Seismol Soc Am 94:314–319, 2004). The deviation from a strict self-similar behavior within each earthquake sequence of our collection is indicated by a systematic increase in the estimated average static stress drop and apparent stress with increasing seismic moment (moment magnitude). Our favored physical interpretation for the increased apparent stress with earthquake size is a progressive frictional weakening for increasing seismic slip, in agreement with recent results obtained in laboratory experiments performed on state-of-the-art apparatuses at slip rates of the order of 1 m/s or larger. At smaller magnitudes ($M_{\rm W}$ < 5.5), the overall data set is characterized by a variability in apparent stress of almost three orders of magnitude, mostly from the scatter observed in strike-slip sequences. Larger events ($M_{\rm W}$ > 5.5) show much less variability: about one order of magnitude. It appears that the apparent stress (and static stress drop) does not grow indefinitely at larger magnitudes: for example, in the case of the Chi–Chi sequence (the best sampled sequence between $M_{\rm W}$ 5 and 6.5), some roughly constant stress parameters characterize earthquakes larger than $M_{\rm W}$ ~ 5.5. A representative fault slip for $M_{\rm W}$ 5.5 is a few tens of centimeters (e.g., Ide and Takeo in J Geophys Res 102:27379–27391, 1997), which corresponds to the slip amount at which effective lubrication is observed, according to recent laboratory friction experiments performed at seismic slip velocities (V ~ 1 m/s) and normal stresses representative of crustal depths (Di Toro et al. in Nature, in press, 2011, and references therein). If the observed deviation from self-similar scaling is explained in terms of an asymptotic increase in apparent stress (Malagnini et al. in Pure Appl Geophys, 2014, this volume), which is directly related to dynamic stress drop on the fault, one interpretation is that for a seismic slip of a few tens of centimeters ($M_{\rm W}$ ~ 5.5) or larger, a fully lubricated frictional state may be asymptotically approached.
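For illustration only, the following sketch shows how the deviation ε from self-similar scaling could be estimated from (seismic moment, corner frequency) pairs by a straight-line fit in log-log space; the numbers are synthetic and are not values from the study.

import numpy as np

# Synthetic (M0 [dyn cm], fc [Hz]) pairs, chosen to mimic non-self-similar scaling.
M0 = np.array([1e23, 1e24, 1e25, 1e26, 1e27])
fc = np.array([2.0, 1.04, 0.54, 0.28, 0.14])

# Fit log10(M0) = a - (3 + eps) * log10(fc); self-similar scaling would give eps ~ 0.
slope, intercept = np.polyfit(np.log10(fc), np.log10(M0), 1)
eps = -slope - 3.0
print(f"estimated epsilon = {eps:.2f}")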

6.
We applied the maximum likelihood method of Kijko and Sellevoll (Bull Seismol Soc Am 79:645–654, 1989; Bull Seismol Soc Am 82:120–134, 1992) to study the spatial distributions of seismicity and earthquake hazard parameters for the different regions in western Anatolia (WA). Since historical earthquake data are very important for examining regional earthquake hazard parameters, a procedure that allows the use of either historical or instrumental data, or even a combination of the two, has been applied in this study. By using this method, we estimated the earthquake hazard parameters, which include the maximum regional magnitude $\hat{M}_{\max}$, the activity rate of seismic events and the well-known $\hat{b}$ value, which is the slope of the frequency-magnitude Gutenberg-Richter relationship. The whole examined area is divided into 15 different seismic regions based on their tectonic and seismotectonic regimes. The probabilities and return periods of earthquakes with a magnitude M ≥ m and the relative earthquake hazard level (defined as the index K) are also evaluated for each seismic region. Each of the computed earthquake hazard parameters is mapped on the different seismic regions to represent the regional variation of these parameters. Furthermore, the investigated regions are classified into different seismic hazard level groups considering the K index. According to these maps and the classification of seismic hazard, the most seismically active regions in WA are regions 1, 8, 10 and 12, related to the Aliağa Fault, the Büyük Menderes Graben, the Aegean Arc and the Aegean Islands.
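The study uses the Kijko-Sellevoll estimator, which handles mixed historical and instrumental catalogs; as a much simpler illustration of the quantities involved, the sketch below computes a b-value with the Aki (1965) maximum-likelihood formula and a Gutenberg-Richter return period from a hypothetical complete catalog.

import numpy as np

def aki_b_value(magnitudes, mc, bin_width=0.1):
    """Aki (1965) maximum-likelihood b-value for a catalog complete above mc.
    Simpler than the Kijko-Sellevoll estimator used in the study (illustration only)."""
    m = np.asarray(magnitudes, dtype=float)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - (mc - bin_width / 2.0))

def return_period(m_target, mc, b, annual_rate_above_mc):
    """Mean return period (years) for M >= m_target under Gutenberg-Richter scaling."""
    rate = annual_rate_above_mc * 10.0 ** (-b * (m_target - mc))
    return 1.0 / rate

# Hypothetical catalog: 10 magnitudes observed over 50 years, complete above M 4.0.
mags = [4.1, 4.3, 4.0, 5.2, 4.6, 4.8, 4.2, 5.9, 4.4, 4.0]
b = aki_b_value(mags, mc=4.0)
lam = len([m for m in mags if m >= 4.0]) / 50.0
print(f"b = {b:.2f}, return period of M>=6: {return_period(6.0, 4.0, b, lam):.0f} yr")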

7.
The Load/Unload Response Ratio (LURR) method is a proposed earthquake prediction technique first put forward by Yin (1987). LURR is based on the idea that when an area enters the damage regime, the rate of seismic activity during the loading portion of the tidal cycle increases relative to the rate during unloading in the months to one year preceding a large earthquake. Since earth tides generally contribute the largest temporal variations in crustal stress, it seems plausible that earth tides would trigger earthquakes in areas that are close to failure (e.g., Vidale et al., 1998). However, the vast majority of studies have shown that earth tides do not trigger earthquakes (e.g., Vidale et al., 1998; Heaton, 1982; Rydelek et al., 1992). In this study, we conduct an independent test of the LURR method, since there would be important scientific and social implications if it were proven to be a robust method of earthquake prediction. Smith and Sammis (2004) undertook a similar study and found no evidence of predictive significance in the LURR method. We have repeated calculations of LURR for the Northridge earthquake in California, following both the parameters of X.C. Yin (personal communication) and the somewhat different ones of Smith and Sammis (2004). Though we have followed both sets of parameters closely, we have been unable to reproduce either set of results. Our examinations have shown that the LURR method is very sensitive to certain parameters. Thus it seems likely that the discrepancies between our results and those of previous studies are due to unaccounted-for differences in the calculation parameters. A general agreement was made at the 2004 ACES Workshop in China between research groups studying LURR to work cooperatively to resolve the differences in methods and results, and thus permit more definitive conclusions on the potential usefulness of the LURR method in earthquake prediction.
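A hedged sketch of the basic LURR quantity (not the authors' code): the ratio of released energy, raised to an exponent (commonly 1/2, i.e. Benioff strain), summed over events occurring during tidal loading versus tidal unloading. The loading/unloading classification is assumed to come from a separate tidal-stress calculation, and the event values below are illustrative only.

import numpy as np

def lurr(event_energies, is_loading_phase, m_exponent=0.5):
    """Load/Unload Response Ratio: sum of (energy**m) for events during tidal loading
    divided by the same sum during unloading. Flags are assumed precomputed elsewhere."""
    e = np.asarray(event_energies, dtype=float) ** m_exponent
    loading = np.asarray(is_loading_phase, dtype=bool)
    unload_sum = e[~loading].sum()
    return e[loading].sum() / unload_sum if unload_sum > 0 else np.inf

# Illustrative values only: five events, three occurring during tidal loading.
print(lurr([1e10, 5e9, 2e10, 8e9, 1e9], [True, True, False, True, False]))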

8.
Tsunami Warning Centers issue rapid and accurate tsunami warnings to coastal populations by estimating the location and size of the causative earthquake as soon as possible after rupture initiation. Both US Tsunami Warning Centers have therefore been using Mwp to issue tsunami warnings 5–10 min after earthquake origin time since 2002. However, because Mwp (Tsuboi et al., Bulletin of the Seismological Society of America 85:606–613, 1995) is based on the far-field approximation to the P-wave displacement due to a double-couple point source, we should only very carefully apply Mwp to data obtained in the near field, at distances of less than a few wavelengths from the fault. On the other hand, the surface waves from great earthquakes, including those that occur just offshore of populated areas, such as the 2011 Tohoku earthquake, clip seismographs located near the fault. Because the first-arriving P-waves from such large events are often on scale, Mwp should provide useful information even for these great earthquakes. We therefore calculate Mwp from 18 unclipped STS-1 broadband P-wave seismograms, recorded at 2–15° distance from the Tohoku epicenter, to determine whether Mwp can usefully estimate Mw for this earthquake using data obtained close to the epicenter. In this case there should be a good chance of obtaining reliable Mwp values for stations at epicentral distances of 9–10°, since the source duration for the Tohoku earthquake is less than 200 s and the time window used to estimate Mwp is 120 s in duration. Our analysis indicates that Mwp does indeed give reliable results (Mw ~ 9.1) beginning at about 11° distance from the epicenter. The values of Mwp from seismic waveforms obtained at 11–15° epicentral distance from the Mw 9.1 earthquake off the east coast of Tohoku of March 11, 2011 fell within the range 9.1–9.3, and were available within 4–5 min after origin time. Even the Mwp values of 7.7–8.4, obtained at less than 5° epicentral distance, exceed the PTWC’s threshold of Mw 7.6 for issuing a regional tsunami warning to coastal populations within 1,000 km of the epicenter, and of Mw 6.9 for issuing a local tsunami warning to the coastal populations of Hawaii.
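A rough sketch of an Mwp-style estimate following the general form of Tsuboi et al. (1995), in which the moment is proportional to the peak of the time-integrated vertical P-wave displacement; the density, P velocity, radiation-pattern factor, and input numbers below are assumed for illustration, and operational implementations apply additional empirical corrections not shown.

import numpy as np

def mwp_from_integrated_displacement(peak_int_disp_m_s, distance_m,
                                     rho=3400.0, alpha=7900.0, fp=1.0):
    """Mwp-style estimate: M0 ~ (4*pi*rho*alpha**3*r/F_p) * max|integral of u_z dt|,
    then moment magnitude via Hanks-Kanamori (M0 in N*m). All constants are assumed."""
    m0 = 4.0 * np.pi * rho * alpha ** 3 * distance_m * peak_int_disp_m_s / fp
    return (np.log10(m0) - 9.1) / 1.5

# Illustrative numbers only (not the Tohoku records discussed above).
print(round(mwp_from_integrated_displacement(2.0e-4, 1.2e6), 1))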

9.
10.
We explained spectra of distant tsunamis observed in enclosed basins by applying a synthesis method based on joint analysis of tsunami and background spectra from a number of stations. This method is a generalization of the method proposed by Rabinovich (J Geophys Res 102, 12663–12676, 1997) to separate source and topography effects in recorded tsunami waves. The source function is extracted by inversion of the tsunami/background spectra averaged over many observational sites. The method is applied to the 2009 Papua tsunami observed at the Owase tide station in southwest Japan, a region with complicated topography and numerous bays and inlets. The synthesized spectrum explains the observed spectral amplitudes for each frequency component. It is shown that averaging of tsunami and background spectra from various tide gauge stations in semi-enclosed basins is an efficient approach to extracting the source function.
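A simplified sketch of the separation idea (not the full inversion described above): the source function is approximated by the station-averaged ratio of tsunami to background spectra, here a geometric mean over hypothetical stations and frequency bins.

import numpy as np

def source_spectrum_estimate(tsunami_spectra, background_spectra):
    """Geometric mean over stations of the tsunami-to-background spectral ratio.
    A simplified stand-in for the inversion in the study; normalization omitted."""
    ratios = np.asarray(tsunami_spectra, dtype=float) / np.asarray(background_spectra, dtype=float)
    return np.exp(np.mean(np.log(ratios), axis=0))

# Illustrative: 3 stations x 4 frequency bins of spectral amplitude.
ts = [[4.0, 9.0, 2.0, 1.5], [5.0, 8.0, 2.5, 1.2], [3.5, 10.0, 1.8, 1.6]]
bg = [[1.0, 1.5, 1.0, 1.0], [1.2, 1.4, 1.1, 0.9], [0.9, 1.6, 0.8, 1.1]]
print(source_spectrum_estimate(ts, bg))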

11.
A numerical model of the wave dynamics in Chenega Cove, Alaska, during the historic Mw 9.2 megathrust earthquake is presented. During the earthquake, locally generated waves of unknown origin were identified at the village of Chenega, located in the western part of Prince William Sound. The waves appeared shortly after the shaking began and swept away most of the buildings while the shaking continued. We model the tectonic tsunami in Chenega Cove assuming different tsunami generation processes. Modeled results are compared with eyewitness reports and the observed runup. Results of the numerical experiments demonstrate the importance of including both vertical and horizontal displacements in the 1964 tsunami generation process. We also present an explanation for the fact that the arrivals of later waves in Chenega went unnoticed.

12.
Unloaded natural rock masses are known to generate seismic signals (Green et al., 2006; Hainzl et al., 2006; Husen et al., 2007; Kraft et al., 2006). Following a 1,000 m³ mass failure into the Mediterranean Sea, centimeter-wide tensile cracks were observed to have developed on top of an unstable segment of the coastal cliff. Nanoseismic monitoring techniques (Wust-Bloch and Joswig, 2006; Joswig, 2008), which function as a seismic microscope for extremely weak seismic events, were applied to verify whether brittle failure is still being generated within this unconsolidated sandstone mass and to determine whether it can be detected. Sixteen days after the initial mass failure, three small-aperture sparse arrays (Seismic Navigation Systems, SNS) were deployed on top of this 40-m high shoreline cliff. This paper analyzes dozens of spiky nanoseismic (−2.2 ≥ ML ≥ −3.4) signals recorded over one night in continuous mode (at 200 Hz) at very short slant distances (3–67 m). Waveform characterization by sonogram analysis (Joswig, 2008) shows that these spiky signals are all short in duration (<0.5 s). Most of their signal energy is concentrated in the 10–75 Hz frequency range and the waveforms display high signal similarity. The detection threshold of the data set reaches ML −3.4 at 15 m and ML −2.7 at 67 m. The spatial distribution of source signals shows 3-D clustering within 10 m of the cliff edge. The time distribution of ML magnitudes does not display any decay pattern over time. This corroborates an unusual event decay over time (modified Omori’s law), whereby an initial quiet period is followed by regained activity, which then fades again. The polarization of maximal waveform amplitude was used to estimate the spatial stress distribution. The orientation of ellipses displaying maximal signal energy is consistent with that of tensile cracks observed in the field and agrees with rock mechanics predictions. The ML–surface rupture length relationship displayed by our data fits a constant-slope extrapolation of empirical data collected by Wells and Coppersmith (1994) for normal fault features at a much larger scale. Signal characterization and location, as well as the absence of direct anthropogenic noise sources near the monitoring site, all indicate that these nanoseismic signals are generated by brittle failure within the top section of the cliff. The atypical event decay over time that was observed suggests that the cliff material is undergoing post-collapse bulk strain accommodation. This feasibility study demonstrates the potential of nanoseismic monitoring for rapidly detecting, locating and analyzing brittle failure generated within unconsolidated material before total collapse occurs.

13.
In the hours following the 2011 Honshu event, and as part of tsunami warning procedures at the Laboratoire de Géophysique in Papeete, Tahiti, the seismic source of the event was analyzed using a number of real-time procedures. The ultra-long-period mantle magnitude algorithm suggests a static moment of 4.1 × 10²⁹ dyn cm, not significantly different from the National Earthquake Information Center (NEIC) value obtained by W-phase inversion. The slowness parameter, $\Theta = -5.65$, is slightly deficient, but characteristic of other large subduction events such as Nias (2005) or Peru (2001); it remains significantly larger than for slow earthquakes such as Sumatra (2004) or Mentawai (2010). Similarly, the duration of high-frequency (2–4 Hz) P waves in relation to seismic moment or estimated energy fails to document any slowness in the seismic source. These results were confirmed in the ensuing weeks by the analysis of the lowest-frequency spheroidal modes of the Earth. A dataset of 117 fits for eight modes (including the gravest one, ₀S₂, and the breathing mode, ₀S₀) yields a remarkably flat spectrum, with an average moment of 3.5 × 10²⁹ dyn cm (×/÷ 1.07). This behavior of the Tohoku earthquake explains the generally successful real-time modeling of its teleseismic tsunami, based on available seismic source scaling laws. On the other hand, it confirms the dichotomy, among mega-quakes (M₀ > 10²⁹ dyn cm), between regular events (Nias, 2005; Chile, 2010; Sendai, 2011) and slow ones (Chile, 1960; Alaska, 1964; Sumatra, 2004; and probably Rat Island, 1965), whose origin remains unexplained.
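For context, and assuming the conventional definition of the slowness parameter (Newman and Okal, 1998), $\Theta = \log_{10}(E^{E}/M_{0})$, where $E^{E}$ is the radiated seismic energy estimated from teleseismic P waves. Reading off the values quoted above, $E^{E} \approx 10^{-5.65} \times 4.1 \times 10^{29} \approx 9 \times 10^{23}$ erg, i.e. roughly $9 \times 10^{16}$ J.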

14.
Three periods of volcanic activity connected with tectonic events form the geological history of the Valley of Mexico (Mooser 1963, 1969). An igneous rock suite ranging from rhyodacites to andesites (but lacking rhyolites and basalts) can be observed in each period. The Tertiary epochs (Oligo-Miocene and Upper Miocene-Pliocene) show a more dacitic volcanism, the Quaternary epoch a more andesitic volcanism. This result was verified by calculating the average of all available and stratigraphically datable chemical analyses by Gunn & Mooser (1971) and Negendank (1972). Using the average chemical composition of the Oligo-Miocene, Upper Miocene-Pliocene and Quaternary products, the equivalent igneous rocks were computed using the Rittmann norms in the Streckeisen Q-A-P-F double triangle, with the following result (names in parentheses are those using the classification of Middlemost (1973)): Quaternary: quartz-latite-andesite (andesite); Upper Miocene-Pliocene: leuco-quartz-latite-andesite (high-lime dacite); Oligo-Miocene: leuco-quartz-latite-andesite (high-lime dacite). The equal average composition of the two groups of Tertiary volcanic rocks seems to support the theory of a uniform primary andesite magma, regardless of which of the two possible theories of petrogenesis one favors. The calculated average trace element abundances show high Cr and Ni values, which suggests that mantle material was involved if we consider the Tertiary products as partial melting products of the lower crust. A more elegant hypothesis seems to be the model of Gunn & Mooser (1971), who consider these volcanic rocks as partial melting products of oceanic tholeiites or their high-pressure derivatives in the sense of Raleigh & Lee (1969).

15.
In the last 15 years there have been 16 tsunami events recorded at tide stations on the Pacific Coast of Canada. Eleven of these events were from distant sources covering almost all regions of the Pacific, as well as the December 26, 2004 Sumatra tsunami in the Indian Ocean. Three tsunamis were generated by local or regional earthquakes and two were meteorological tsunamis. The earliest four events, which occurred in the period 1994–1996, were recorded on analogue recorders; these tsunami records were recently re-examined, digitized and thoroughly analysed. The other 12 tsunami events were recorded using digital high-quality instruments, with a 1-min sampling interval, installed on the coast of British Columbia (B.C.) in 1998. All 16 tsunami events were recorded at Tofino on the outer B.C. coast, and some of the tsunamis were recorded at eight or more stations. The tide station at Tofino has been in operation for 100 years and these recent observations add to the dataset of tsunami events compiled previously by S.O. Wigen (1983) for the period 1906–1980. For each of the tsunami records, statistical analysis was carried out to determine essential tsunami characteristics for all events (arrival times, maximum amplitudes, frequencies and wave-train structure). The analysis of the records indicated that significant background noise at Langara, a key northern B.C. tsunami warning station located near the northern end of the Queen Charlotte Islands, creates serious problems in detecting tsunami waves. That station has now been moved to a new location with better tsunami response. The number of tsunami events observed in the past 15 years also justified re-establishing a tide gauge at Port Alberni, where large tsunami wave amplitudes were measured in March 1964. The two meteorological events are the first ever recorded on the B.C. coast. Also, there have been landslide-generated tsunami events which, although not recorded on any coastal tide gauges, demonstrate, along with the recent investigation of a historical catastrophic event, the significant risk that landslide-generated tsunamis pose to coastal and inland regions of B.C.

16.
The effect of location errors on the performance of seismicity-based forecasting methods was studied here using one particular binary forecast technique, the Pattern Informatics (PI) technique (Rundle et al., Proc Nat Acad Sci USA 99, 2514–2521, 2002; Tiampo et al., Pure Appl Geophys 159, 2429–2467, 2002). The southern California dataset was used to generate a series of perturbed catalogs by adding different levels of noise to epicenter locations. The PI technique was applied to these perturbed datasets to perform retrospective forecasts that were evaluated by means of skill scores commonly used in the atmospheric sciences. These results were then compared to the effectiveness obtained from the original dataset. Isolated instances of decline in PI performance were observed, due to the nature of the skill scores themselves, but no clear trend of degradation was identified. Dependence on the total number of events in a catalog was also studied, with no systematic degradation in PI performance for the catalog sizes considered. These results suggest that the stability of the PI method is due to the invariance of the clustering patterns identified by the TM metric (Thirumalai and Mountain, Phys Rev A 39, 3563–3573, 1989) when applied to seismicity.
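A minimal sketch of the catalog-perturbation step, assuming Gaussian epicenter noise and a flat-earth conversion from kilometers to degrees; the coordinates and noise level below are illustrative, not the actual procedure used in the study.

import numpy as np

rng = np.random.default_rng(0)

def perturb_epicenters(lons, lats, sigma_km):
    """Add Gaussian location noise (in km) to catalog epicenters, converting km to
    degrees with a flat-earth approximation (1 degree of latitude ~ 111 km)."""
    lats = np.asarray(lats, dtype=float)
    lons = np.asarray(lons, dtype=float)
    dlat = rng.normal(0.0, sigma_km / 111.0, lats.size)
    dlon = rng.normal(0.0, sigma_km / 111.0, lons.size) / np.cos(np.radians(lats))
    return lons + dlon, lats + dlat

# Illustrative southern-California-like coordinates, perturbed with 5 km noise.
print(perturb_epicenters([-118.2, -116.4], [34.1, 33.9], sigma_km=5.0))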

17.
In this paper, ground motion during the Independence Day earthquake of August 15, 1950 (Mw 8.6, Ben-Menahem et al., 1974) in the northeastern part of India is estimated by seismological approaches. A hybrid simulation technique, which combines the low-frequency ground motion simulated from an analytical source mechanism model with stochastically simulated high-frequency components, is used to obtain the acceleration time histories. A series of ground motion simulations is carried out to estimate the peak ground acceleration (PGA) and spectral accelerations at important cities and towns in the epicentral region. A sample PGA distribution across the epicentral region is also obtained. It is found that PGA in the epicentral region exceeded 1 g during this earthquake. The estimated PGAs are validated to the extent possible using MMI values. The simulated acceleration time histories can be used for the assessment of important engineering structures in northeastern India.

18.
The distribution of parameters characterizing soil response during the 1999 Chi-Chi, Taiwan, earthquake (Mw = 7.6) around the fault plane is studied. The results of stochastic finite-fault simulations performed in Pavlenko and Wen (2008) and constructed models of soil behavior at 31 soil sites were used to estimate the amplification of seismic waves in soil layers, average stresses, strains, and shear moduli reduction in the upper 30 m of soil, as well as the nonlinear components of soil response during the Chi-Chi earthquake. Amplification factors were found to increase with increasing distance from the fault (or with decreasing level of “input” motion to soil layers), whereas average stresses and strains, shear moduli reduction, and nonlinear components of soil response decrease with distance as ~r⁻¹. The area of strong nonlinearity, where soil behavior is substantially nonlinear (the content of nonlinear components in soil response is more than ~40–50% of the intensity of the response) and spectra of oscillations on the surface take a smoothed form close to E(f) ~ f⁻ⁿ, is located within ~20–25 km of the fault plane (~1/4 of its length). Nonlinearity decreases with increasing distance from the fault, and at ~40–50 km from the fault (~1/2 of the fault length) soil response becomes virtually linear. Comparing soil behavior in near-fault zones during the 1999 Chi-Chi, the 1995 Kobe (Mw = 6.8), and the 2000 Tottori, Japan (Mw = 6.7) earthquakes, we found similarity in the behavior of similar soils and a predominance of the hard type of soil behavior. Resonant phenomena in upper soil layers were observed at many of the studied sites; however, during the Chi-Chi earthquake they involved deeper layers (down to ~40–60 m) than during the lesser-magnitude Kobe and Tottori earthquakes.

19.
It has been two decades since the last comprehensive standard model of ambient earth noise was published by Peterson (Observations and modelling of seismic background noise, US Geological Survey open-file report 93-322, 1993). The Peterson model was updated by analyzing the absolute quietest conditions for stations within the GSN (Berger et al. in J Geophys Res 109, 2005; McNamara and Buland in Bull Seism Soc Am 94:1517–1527, 2004; Ringler et al. in Seismol Res Lett 81(4), doi:10.1785/gssrl.81.4.605, 2010). Unfortunately, neither the original model nor the updated models included any station deployed in North Africa or the Middle East that would reflect the noise levels of the desert environments of those regions. In this study, a survey was conducted to create a new seismic noise model from very broadband stations recently deployed in North Africa. For this purpose, 1 year of continuous seismic noise recordings from the Egyptian National Seismic Network (ENSN) was analyzed in order to create a new noise model. Seasonal and diurnal variations in noise spectra were recorded at each station. Moreover, we constructed a new noise model for each individual station. Finally, we obtained a new cumulative noise model for all the stations. We compared the new high-noise model (EHNM) and new low-noise model (ELNM) with both the high-noise model (NHNM) and low-noise model (NLNM) of Peterson (1993). The obtained noise levels are considerably lower than the Peterson (1993) low-noise model in the ultra-long-period (ULP) band, but they remain below the high-noise model. The results of this study can be considered a first step toward permanent seismic noise models for the North Africa and Middle East regions.
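As a hedged illustration of the kind of spectral estimate underlying such noise models, the sketch below computes a Welch power spectral density of a ground-acceleration trace in dB; instrument-response removal, hour-long segmentation, statistics over a full year, and comparison against the Peterson curves are not shown, and the synthetic trace is illustrative only.

import numpy as np
from scipy.signal import welch

def acceleration_psd_db(trace, fs, nperseg=4096):
    """Power spectral density of an acceleration trace in dB re (m/s^2)^2/Hz,
    the convention commonly used when comparing against the Peterson noise models."""
    f, pxx = welch(trace, fs=fs, nperseg=nperseg)
    return f[1:], 10.0 * np.log10(pxx[1:])  # drop the zero-frequency bin

# Illustrative synthetic trace: 10 minutes of white noise sampled at 20 Hz.
rng = np.random.default_rng(1)
f, psd_db = acceleration_psd_db(rng.normal(0.0, 1e-7, 20 * 600), fs=20.0)
print(psd_db.mean())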

20.
“Repeating earthquakes” identified by waveform cross-correlation, with inter-event separation of no more than 1 km, can be used to assess location precision. Assuming that the network-measured apparent inter-epicenter distance X of the “repeating doublets” indicates the location precision, we estimated the regionalized location quality of the China National Seismograph Network by comparing the “repeating events” in and around China identified by Schaff and Richards (Science 303: 1176–1178, 2004; J Geophys Res 116: B03309, 2011) with the monthly catalogue of the China Earthquake Networks Center. The comparison shows that the average X value for the China National Seismograph Network is approximately 10 km. The mis-location is larger for the Tibetan Plateau, western and northern Xinjiang, and eastern Inner Mongolia, as indicated by larger X values. Mis-location is correlated with the completeness magnitude of the earthquake catalogue. Using the data from the Beijing Capital Circle Region, the dependence of the mis-location on the distribution of seismic stations can be further confirmed.
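A small sketch of the distance measure X discussed above: the great-circle (haversine) separation between the two network locations reported for the "same" repeating event; the coordinates below are hypothetical.

import math

def inter_epicenter_distance_km(lat1, lon1, lat2, lon2, earth_radius_km=6371.0):
    """Great-circle (haversine) separation between two catalog epicenters, used here
    as a proxy for the location precision of repeating-event pairs."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2.0 * earth_radius_km * math.asin(math.sqrt(a))

# Illustrative pair of network locations for one repeating-event doublet.
print(round(inter_epicenter_distance_km(39.90, 116.40, 39.95, 116.50), 1))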
