Similar Articles
20 similar articles retrieved.
1.
2.
A straightforward Bayesian statistic is applied in five broad seismogenic source zones of the northwest frontier of the Himalayas to estimate the earthquake hazard parameters (maximum regional magnitude M_max, β value of the G–R relationship, and seismic activity rate or intensity λ). For this purpose, a reliable earthquake catalogue, homogeneous for M_W ≥ 5.0 and complete for the period 1900 to 2010, is compiled. The Hindukush–Pamir Himalaya zone has been further divided into two seismic zones of shallow (h ≤ 70 km) and intermediate depth (h > 70 km) according to the variation of seismicity with depth in the subduction zone. The earthquake hazard parameters estimated by the Bayesian approach are more stable and reliable, with lower standard deviations, than those from other approaches, although the technique is more time consuming. In this study, quantiles of the distributions of true and apparent magnitudes for future time intervals of 5, 10, 20, 50 and 100 years are calculated with confidence limits for probability levels of 50, 70 and 90 % in all seismogenic source zones. The zones with estimated M_max greater than 8.0 are the Sulaiman–Kirthar ranges, the Hindukush–Pamir Himalaya and the Himalayan Frontal Thrust belt, marking them as the most seismically hazardous regions in the examined area. The lowest M_max (6.44) is obtained for the Northern Pakistan and Hazara syntaxis zone, which also has the lowest estimated activity rate (0.0023 events/day) among the zones. The Himalayan Frontal Thrust belt exhibits the highest expected earthquake magnitude (8.01) in the next 100 years at the 90 % probability level, indicating that this zone is the most vulnerable to the occurrence of a great earthquake. The results of this study are directly useful for probabilistic seismic hazard assessment in the examined region of the Himalaya.
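The β value of the G–R relationship mentioned in this abstract can be estimated from a magnitude catalogue by Aki's maximum-likelihood formula, β = 1/(mean(M) − M_min). A minimal sketch on a synthetic catalogue (the magnitudes and parameter values below are illustrative assumptions, not the study's data):

```python
import math
import random

def gr_beta_mle(mags, m_min):
    """Aki's maximum-likelihood estimate of the Gutenberg-Richter beta
    from magnitudes at or above the completeness threshold m_min."""
    m = [x for x in mags if x >= m_min]
    return 1.0 / (sum(m) / len(m) - m_min)

# Synthetic catalogue (illustrative): magnitudes exponentially distributed
# above M_min = 5.0 with true beta = 2.0.
random.seed(1)
mags = [5.0 + random.expovariate(2.0) for _ in range(20000)]
beta_hat = gr_beta_mle(mags, 5.0)
b_value = beta_hat / math.log(10)  # conventional b-value of the G-R law
```

With 20,000 synthetic events the estimate recovers the true β closely; on a real catalogue the completeness threshold (here M_W ≥ 5.0) must be chosen first, as in the abstract.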

3.
This study evaluates and compares two methodologies, Monte Carlo simple genetic algorithm (MCSGA) and noisy genetic algorithm (NGA), for cost-effective sampling network design in the presence of uncertainties in the hydraulic conductivity (K) field. Both methodologies couple a genetic algorithm (GA) with a numerical flow and transport simulator and a global plume estimator to identify the optimal sampling network for contaminant plume monitoring. The MCSGA approach yields one optimal design for each of a large number of realizations generated to represent the uncertain K field. A composite design is developed on the basis of those potential monitoring wells that are most frequently selected by the individual designs for different K-field realizations. The NGA approach relies on a much smaller sample of K-field realizations and incorporates the average of the objective functions associated with all K-field realizations directly into the GA operators, leading to a single optimal design. The efficacy of the MCSGA-based composite design and the NGA-based optimal design is assessed by applying them to 1000 realizations of the K field and evaluating the relative errors of global mass and higher moments between the plume interpolated from a sampling network and that output by the transport model without any interpolation. For the synthetic application examined in this study, the optimal sampling network obtained using NGA achieves a potential cost savings of 45% while keeping the global mass and higher moment estimation errors comparable to those obtained using MCSGA. The results indicate that NGA can be used as a surrogate for MCSGA in cost-effective sampling network design under uncertainty. Compared with MCSGA, NGA reduces the optimization runtime by a factor of 6.5.

4.
Currently used goodness-of-fit (GOF) indicators (i.e. efficiency criteria) are largely empirical, and different GOF indicators emphasize different aspects of model performance; a thorough assessment of model skill may therefore require a robust set of skill metrics. In this study, based on the maximum likelihood method, a statistical measure termed the BC-GED error model is proposed, which first uses the Box–Cox (BC) transformation to remove the heteroscedasticity of model residuals, and then fits the distribution of the transformed residuals with a zero-mean generalized error distribution (GED). Various distance-based GOF indicators can be explicitly expressed by the BC-GED error model for different values of the BC transformation parameter λ and the GED kurtosis coefficient β. Our study shows that (1) the shape of the error distribution implied by a GOF indicator affects model performance on high or low flow discharges, because a large error-power value (β) yields a low probability of large residuals, while a small β leads to a high probability of near-zero residuals; and (2) the mean absolute error balances low and high flow values, as its assumed error distribution (the Laplace distribution, where β = 1) is the turning point of the GED derivative at zero. A comparison of SWAT model-calibration results in the Baocun watershed using six distance-based GOF indicators shows that even though the formal BC-GED is theoretically sound, the calibrated model parameters do not always yield high-performing simulations, owing to imperfections in the hydrologic model. Nevertheless, distance-based GOF indicators derived via the maximum likelihood method offer an easy way to choose GOF indicators for different study purposes and to develop multi-objective calibration strategies.
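The correspondence between distance-based GOF indicators and the (λ, β) pair can be sketched directly: the BC-GED distance is the sum of |transformed residual|^β, so with λ = 1, β = 2 recovers the sum of squared errors and β = 1 (Laplace) recovers the sum of absolute errors. A minimal sketch with made-up discharge values:

```python
import math

def box_cox(y, lam):
    """Box-Cox transform, used to remove heteroscedasticity of residuals."""
    return (y ** lam - 1.0) / lam if lam != 0 else math.log(y)

def bc_ged_distance(obs, sim, lam=1.0, beta=2.0):
    """Distance implied by a zero-mean GED error model on Box-Cox
    transformed data: sum(|residual|**beta).  With lam = 1, beta = 2
    recovers the sum of squared errors; beta = 1 recovers the sum of
    absolute errors, which weights low and high flows more evenly."""
    return sum(abs(box_cox(o, lam) - box_cox(s, lam)) ** beta
               for o, s in zip(obs, sim))

obs = [1.0, 2.0, 4.0, 8.0]   # observed discharges (made-up values)
sim = [1.5, 2.5, 3.0, 9.0]   # simulated discharges (made-up values)
sse = bc_ged_distance(obs, sim, lam=1.0, beta=2.0)  # squared-error distance
sae = bc_ged_distance(obs, sim, lam=1.0, beta=1.0)  # absolute-error distance
```

Lowering λ toward 0 (a log transform) further damps the influence of high flows, which is the heteroscedasticity correction the abstract describes.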

5.
The completeness and accuracy of the Brest sea level time series dating from 1807 make it suitable for long-term sea level trend studies. New data sets were recently discovered in the form of handwritten tabulations, including several decades of the eighteenth century; sea level observations have in fact been made in Brest since 1679. This paper presents the historical data sets assembled so far. Together they span approximately 300 years and constitute the longest, near-continuous set of sea level information in France. An important question, however, is whether the past and present-day records can be related. We partially answer this question by analysing the documents of several historical libraries alongside the tidal data, using the 'data archaeology' approach advocated by Woodworth (Geophys Res Lett 26:1589–1592, 1999b). A second question concerns the accuracy of such records. Careful editing was undertaken by examining the residuals between tidal predictions and observations. This proved useful for removing the worst effects of timing errors, in particular the sundial correction to be applied prior to August 1, 1714. A refined correction based on the sundial literature (Savoie, La Gnomonique, Editions Les Belles Lettres, Paris, 2001) is proposed, which eliminates the systematic offsets seen in the timing of the sea level measurements. The tidal analysis has also shown that shallow-water tidal harmonics at Brest cause a systematic difference of 0.023 m between mean sea level (MSL) and mean tide level (MTL); MTL should therefore not be merged with MSL time series. The study of the trends in MTL and MSL, however, indicates that MTL can be used as a proxy for MSL. Three linear trend periods are distinguished in the Brest MTL time series over the period 1807–2004.
Our results support the recent findings of Holgate and Woodworth (Geophys Res Lett) of an enhanced coastal sea level rise during the last decade compared with global estimates of about 1.8 mm/year over longer periods (Douglas, J Geophys Res 96:6981–6992, 1991). The onset of the relatively large global sea level trends observed in the twentieth century is an important question in the science of climate change. Our findings point to an 'inflexion point' at around 1890, remarkably close to the 1880 inflexion found in the Liverpool record by Woodworth (Geophys Res Lett 26:1589–1592, 1999b).

6.
The attenuation properties of the crust in the Chamoli region of the Himalaya have been examined by estimating the frequency-dependent quality factors for P waves (Qα) and S waves (Qβ) in the frequency range 1.5–24 Hz. The extended coda normalization method has been applied to the waveforms of 25 aftershocks of the 1999 Chamoli earthquake (M 6.4) recorded at five stations. The average value of Qα is found to vary from 68 at 1.5 Hz to 588 at 24 Hz, while Qβ varies from 126 at 1.5 Hz to 868 at 24 Hz. The estimated frequency-dependent relations for the quality factors are Qα = (44 ± 1)f^(0.82±0.04) and Qβ = (87 ± 3)f^(0.71±0.03). The rate of increase of Q(f) for P and S waves in the Chamoli region is comparable with that of other regions of the world. The ratio Qβ/Qα is greater than one in the region, which, along with the frequency dependence of the quality factors, indicates that scattering is an important contributor to the attenuation of body waves. A comparison of the S-wave attenuation relation estimated here (Qβ = 87f^0.71) with that of coda waves (Qc = 30f^1.21) obtained by Mandal et al. (2001) for the same region shows that Qc > Qβ at higher frequencies (>8 Hz). This indicates a possible high-frequency coda enrichment, suggesting that scattering attenuation significantly influences S-wave attenuation at frequencies >8 Hz; this observation may be further investigated using multiple scattering models. The attenuation relations obtained here may be used for the estimation of source parameters and for near-source simulation of earthquake ground motion, which in turn are required for the assessment of seismic hazard in the region.
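The ">8 Hz" crossover quoted in the abstract follows directly from the two power laws: setting 30f^1.21 = 87f^0.71 gives f^0.5 = 87/30, i.e. f = (87/30)^2 ≈ 8.4 Hz. A quick numerical check:

```python
# Crossover frequency where Qc = 30 f**1.21 overtakes Qbeta = 87 f**0.71:
# 30 f**1.21 = 87 f**0.71  =>  f**0.5 = 87/30  =>  f = (87/30)**2
f_cross = (87.0 / 30.0) ** 2
qc = 30.0 * f_cross ** 1.21      # coda-wave Q at the crossover
qbeta = 87.0 * f_cross ** 0.71   # S-wave Q at the crossover
```

The two Q values coincide at f_cross ≈ 8.4 Hz, consistent with the abstract's statement that Qc > Qβ for frequencies above about 8 Hz.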

7.
In weather forecasting, current and past observational data are routinely assimilated into numerical simulations to produce ensemble forecasts of future events in a process termed “model steering”. Here we describe a similar approach that is motivated by analyses of previous forecasts of the Working Group on California Earthquake Probabilities (WGCEP). Our approach is adapted to the problem of earthquake forecasting using topologically realistic numerical simulations for the strike-slip fault system in California. By systematically comparing simulation data to observed paleoseismic data, a series of spatial probability density functions (PDFs) can be computed that describe the probable locations of future large earthquakes. We develop this approach and show examples of PDFs associated with magnitude M > 6.5 and M > 7.0 earthquakes in California.

8.

Hydrological drought durations (lengths) in the Canadian prairies were modelled using standardized hydrological index (SHI) sequences derived from streamflow series at annual, monthly and weekly time scales. The rivers chosen for the study show high levels of persistence (lag-1 autocorrelation exceeding 0.95 in weekly SHI sequences), because they encompass large catchment areas (2210–119 000 km²) and traverse, or originate in, lakes. For such rivers, Markov chain models were found to be simple and efficient tools for predicting drought duration (in years, months, or weeks) based on annual, monthly and weekly SHI sequences. Drought durations were predicted at threshold levels from the median flow Q50 (drought probability q = 0.5) to the Q95 (q = 0.05) exceedance level in the SHI sequences. The first-order Markov chain, or random model, was found acceptable for predicting annual drought lengths, using the Hazen plotting position formula for exceedance probability, because of the small sample size of annual streamflows. On monthly and weekly time scales, the second-order Markov chain model was found satisfactory, using the Weibull plotting position formula for exceedance probability. The crucial element in modelling drought lengths is reliable estimation of the first- and second-order persistence parameters (conditional probabilities), which were estimated using notions implicit in the discrete autoregressive moving average class of models. The variance of drought durations is of particular significance because it plays a crucial role in the accurate estimation of the persistence parameters. Although the counting method of estimating the persistence parameters was found unsatisfactory, it proved useful for setting initial values and for subsequent adjustment of the variance-based estimates.
At low threshold levels corresponding to q < 0.20, even the first-order Markov chain can be construed as a satisfactory model for predicting drought durations based on monthly and weekly SHI sequences.
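The first-order Markov model described above can be sketched in a few lines: classify each SHI value as drought or non-drought against a threshold, estimate the persistence parameter p = P(drought at t | drought at t−1) by counting transitions, and note that the implied geometric run-length model gives a mean drought duration of 1/(1 − p). The SHI values below are illustrative, not data from the study:

```python
def markov_drought(shi, threshold):
    """First-order Markov model of drought spells in an SHI sequence:
    a step is in drought when SHI <= threshold.  The persistence
    parameter p = P(drought at t | drought at t-1) is estimated by
    counting transitions; the mean drought duration of the implied
    geometric run-length model is 1 / (1 - p)."""
    state = [x <= threshold for x in shi]
    in_drought = stayed = 0
    for prev, cur in zip(state, state[1:]):
        if prev:
            in_drought += 1
            stayed += cur
    p = stayed / in_drought if in_drought else 0.0
    mean_len = 1.0 / (1.0 - p) if p < 1.0 else float("inf")
    return p, mean_len

# Toy weekly SHI sequence; threshold 0 corresponds to the median flow
# (drought probability q = 0.5).  Values are made up for illustration.
shi = [0.5, -0.2, -0.6, -0.1, 0.8, 1.0, -0.3, -0.4, -0.9, 0.2, 0.1]
p, mean_len = markov_drought(shi, 0.0)
```

A second-order chain, as used for the monthly and weekly scales in the study, would condition on the two preceding states rather than one.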

Editor D. Koutsoyiannis; Associate editor C. Onof

Citation Sharma, T.C. and Panu, U.S., 2012. Prediction of hydrological drought durations based on Markov chains in the Canadian prairies. Hydrological Sciences Journal, 57 (4), 705–722.

9.
Community-scale simulations were performed to investigate the risk to groundwater and indoor air receptors downgradient of a contaminated site following the remediation of a long-term source. Six suites of Monte Carlo simulations were performed using a numerical model that accounted for groundwater flow, reactive solute transport, soil gas flow, and vapour intrusion in buildings. The model was applied to a three-dimensional, community-scale (250 m × 1000 m × 14 m) domain containing heterogeneous, spatially correlated distributions of the hydraulic conductivity, fraction of organic carbon, and biodegradation rate constant, which were varied between realizations. The analysis considered results from individual realizations as well as from each suite of Monte Carlo simulations, expressed through several novel, integrated parameters such as the probability of exceeding a regulatory standard in either groundwater or indoor air. Results showed that exceedance probabilities varied considerably with the consideration of biodegradation in the saturated zone, and were less sensitive to changes in the variance of hydraulic conductivity or to the incorporation of heterogeneous distributions of organic carbon at this spatial scale. A sharp gradient in exceedance probability existed at the lateral edges of the plumes due to variability in lateral dispersion, which defined a narrow region of exceedance uncertainty. Differences in exceedance probability between realizations (i.e., due to heterogeneity uncertainty) were similar to differences attributed to changes in the variance of hydraulic conductivity or fraction of organic carbon. Simulated clean-up times, defined by reaching an acceptable exceedance probability, were found to be on the order of decades to centuries in these community-scale domains. Results also showed that the choice of the acceptable exceedance probability level (e.g., 1 vs. 5 %) would likely affect clean-up times on the order of decades.
Moreover, in the scenarios examined here, the risk of exceeding indoor air standards was greater than that of exceeding groundwater standards at all times and places. Overall, simulations of coupled transport processes, combined with novel spatial and temporal quantification metrics for Monte Carlo analyses, provide practical tools for assessing risk in wider communities when considering site remediation.
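The integrated parameter at the core of this abstract, the probability of exceeding a regulatory standard, is simply the fraction of Monte Carlo realizations that exceed the standard at each location. A minimal sketch with made-up concentrations and a hypothetical standard (neither taken from the study):

```python
def exceedance_probability(realizations, standard):
    """Cell-by-cell probability of exceeding a regulatory standard,
    estimated as the fraction of Monte Carlo realizations whose
    simulated concentration at that cell exceeds the standard."""
    n = len(realizations)
    n_cells = len(realizations[0])
    return [sum(r[i] > standard for r in realizations) / n
            for i in range(n_cells)]

# Toy data (illustrative): 4 realizations of concentration at 3 receptor
# cells, compared against a hypothetical standard of 5.0.
realizations = [[6.0, 4.0, 1.0],
                [7.0, 6.0, 2.0],
                [5.5, 3.0, 0.5],
                [4.0, 2.0, 0.1]]
probs = exceedance_probability(realizations, 5.0)  # one probability per cell
```

Clean-up time, as defined in the abstract, would then be the first simulation time at which this probability falls below the chosen acceptable level (e.g. 1 % or 5 %).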

10.
In order to analyze observed seismicity in central Japan and Venezuela, we applied a new method for identifying semi-periodic sequences in the occurrence times of large earthquakes, which allows for the presence of multiple periodic sequences and/or events not belonging to any sequence in the time series. We also explored a scheme for diminishing the effects of a sharp cutoff magnitude threshold when selecting the events to analyze. A main four-event sequence with probability P_c = 0.991 of not having occurred by chance was identified for earthquakes with M ≥ 8.0 in central Japan. Venezuela is divided, from west to east, into four regions; for each, the magnitude ranges and identified sequences are as follows. Region 1: M ≥ 6.0, a six-event sequence with P_c = 0.923 and a four-event sequence with P_c = 0.706. Region 2: M ≥ 5.6, a five-event sequence with P_c = 0.942. Region 3: M ≥ 5.6, a four-event sequence with P_c = 0.882. Region 4: M ≥ 6.0, a five-event sequence with P_c = 0.891. Forecasts are made and evaluated for all identified sequences having four or more events and probabilities ≥0.5. The last event of each of these sequences was satisfactorily aftcast from the previous events. Whether the identified sequences do, in fact, correspond to physical processes producing semi-periodic seismicity is, of course, an open question; but the forecasts, properly used, may be useful as a factor in seismic hazard estimation.

11.
Probabilistic Assessment of Tsunami Recurrence in the Indian Ocean
The Indian Ocean is one of the most tsunamigenic regions of the world and recently experienced a mega-tsunami in the Sumatra region on 26 December 2004 (M_W 9.2 earthquake), with tsunami intensity I (Soloviev–Imamura intensity scale) equal to 4.5, causing heavy loss of life and property in the Indian Ocean rim countries. In this study, probabilities of occurrence of large tsunamis with intensities I ≥ 2.0 and I ≥ 3.0 (average wave heights H ≥ 2.83 m and H ≥ 5.66 m, respectively) during a specified time interval were calculated using three stochastic models, namely the Weibull, gamma and lognormal models. Tsunami recurrence was calculated for the whole Indian Ocean and, as a special case, for the Andaman–Sumatra–Java (ASJ) region, excluding the 1945 Makran event from the main data set. For this purpose, a reliable, homogeneous and complete tsunami catalogue with I ≥ 2.0 covering the period 1797–2006 was used. The tsunami hazard parameters were estimated using the method of maximum likelihood. The logarithm of the likelihood function (ln L) was estimated and used to test the suitability of the models in the examined region; the Weibull model was found to be the most suitable for estimating tsunami recurrence. The sample mean intervals between tsunamis with intensity I ≥ 2.0 and I ≥ 3.0 were calculated for the observed data as well as for the Weibull, gamma and lognormal models. The estimated cumulative and conditional probabilities for the whole Indian Ocean region give recurrence periods of about 27–30 years (2033–2036) and 35–36 years (2039–2040) for tsunami intensities I ≥ 2.0 and I ≥ 3.0, respectively, and about 31–35 years (2037–2041) and 41–42 years (2045–2046), respectively, for the ASJ region. A high probability (>0.9) of occurrence of a large tsunami with I ≥ 2.0 within the next 30–40 years in the Indian Ocean region was revealed.
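The conditional probabilities reported in this abstract have the general form P(next event within a window | elapsed time since the last event), computed from the fitted inter-event distribution. A sketch for the Weibull case, with hypothetical shape and scale parameters (illustrative only, not the values estimated in the study):

```python
import math

def weibull_cdf(t, shape, scale):
    """Weibull CDF of inter-event (recurrence) times, in years."""
    return 1.0 - math.exp(-((t / scale) ** shape))

def conditional_prob(elapsed, window, shape, scale):
    """P(next event within `window` years | `elapsed` years since the
    last event) -- the conditional probability used in recurrence
    studies, from the fitted inter-event distribution."""
    survive = 1.0 - weibull_cdf(elapsed, shape, scale)
    return (weibull_cdf(elapsed + window, shape, scale)
            - weibull_cdf(elapsed, shape, scale)) / survive

# Hypothetical Weibull parameters (shape = 1.5, scale = 30 yr):
p = conditional_prob(elapsed=20.0, window=10.0, shape=1.5, scale=30.0)
```

With shape > 1 the hazard rate increases with elapsed time, so the conditional probability grows the longer the region has gone without an event, which is the behaviour such recurrence studies typically report.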

12.
This study used realistic representations of cloudy atmospheres to assess errors in solar flux estimates associated with 1D radiative transfer models. A scene construction algorithm, developed for the EarthCARE mission, was applied to CloudSat, CALIPSO and MODIS satellite data, producing 3D cloudy atmospheres measuring 61 km wide by 14,000 km long at 1 km grid spacing. Broadband solar fluxes and radiances were then computed by a Monte Carlo photon transfer model run in both full 3D and 1D independent column approximation modes. Results were averaged into 1,303 (50 km)² domains. For domains with total cloud fraction A_c < 0.7, top-of-atmosphere (TOA) albedos tend to be largest for 3D transfer, with differences increasing with solar zenith angle. Differences are largest for A_c > 0.7 and characterized by small bias yet large random errors. Regardless of A_c, differences between 3D and 1D transfer rarely exceed ±30 W m⁻² for net TOA and surface fluxes and ±10 W m⁻² for atmospheric absorption. Horizontal fluxes through domain sides depend on A_c, with ~20% of cases exceeding ±30 W m⁻²; the largest values occur for A_c > 0.7. Conversely, heating rate differences rarely exceed ±20%. As a cursory test of TOA radiative closure, fluxes produced by the 3D model were averaged up to (20 km)² and compared to values measured by CERES. While relatively little attention was paid to the optical properties of ice crystals and surfaces, and aerosols were neglected entirely, ~30% of the differences between 3D model estimates and measurements fall within ±10 W m⁻², the target agreement set for EarthCARE. This, coupled with the aforementioned comparison between 3D and 1D transfer, leads to the recommendation that EarthCARE employ a 3D transport model when attempting TOA radiative closure.

13.
Gumbel's third asymptotic distribution (GIII) of extreme value theory is employed to evaluate earthquake hazard parameters in the Iranian Plateau. This research provides spatial mapping of earthquake hazard parameters, namely the annual and 100-year modes together with their 90 % probability of not being exceeded (NBE). A homogeneous and complete earthquake catalogue for the period 1900–2013 with magnitude M_w ≥ 4.0 was used, and the Iranian Plateau was divided into an equal-area mesh of 1° lat × 1° long. The annual mode with 90 % probability of NBE is expected to exceed M_w 6.0 in the eastern part of Makran, most parts of Central and East Iran, Kopeh Dagh, Alborz, Azerbaijan, and SE Zagros. The 100-year mode with 90 % probability of NBE is expected to exceed M_w 7.0 in the eastern part of Makran, Central and East Iran, Alborz, Kopeh Dagh, and Azerbaijan. The spatial distribution of the 100-year mode with 90 % probability of NBE reveals high values of the earthquake hazard parameters that are frequently associated with the main tectonic regimes of the studied area, indicating a close correspondence between the seismicity and the tectonics of the region.

14.
A transient model, hereafter referred to as ROM-TM, was developed to quantify river ecosystem metabolic rates and reaeration rates from field observations of changes in dissolved O2 (DO) and the ratio of 18O to 16O in DO (δ18O-DO). ROM-TM applies an inverse modeling approach and is programmed in MATLAB. Parameters describing photosynthesis, ecosystem respiration, gas exchange, and isotopic fractionation, such as the maximum photosynthetic rate (P_m), photosynthetic efficiency parameter (a), respiration rate at 20 °C (R_20), gas exchange coefficient (K), respiration isotopic fractionation factor (a_R), and photorespiration coefficient (β_R), can be extracted by minimizing the sum of squared errors between the fitted and the observed field data. The DO and δ18O-DO time series can then be reconstructed using the estimated parameters and input variables. Besides teasing apart metabolic processes and gas exchange to provide daily average estimates of metabolic parameters at the ecosystem scale, ROM-TM can be used to address light-related issues, including light saturation phenomena at the ecosystem level, the effect of cloud cover on the metabolic balance, and photorespiration. Error and uncertainty analysis demonstrates that ROM-TM is stable and robust to random errors in the DO time series. The photosynthetic parameters P_m and a are more sensitive than the other parameters to lower-resolution time series data.

15.
The Son–Narmada–Tapti lineament and its surroundings in Central India (CI) form the second most important tectonic regime of the Indian subcontinent after the converging margin along the Himalaya–Myanmar–Andaman belt, and the region's seismic hazard potential has attracted several geoscientists. Our study area, a part of CI, is bounded by latitudes 18°–26°N and longitudes 73°–83°E, representing a stable part of Peninsular India. Past damaging moderate-magnitude earthquakes, as well as continuing microseismicity in the area, provide enough data for seismological study. Our estimates based on the regional Gutenberg–Richter relationship show b values (between 0.68 and 0.76) lower than the average for the study area. A Probabilistic Seismic Hazard Analysis carried out over an area of ~300 km radius centred on Bhopal yielded a conspicuous relationship between earthquake return period (T) and peak ground acceleration (PGA). The analysis shows that the PGA value at bedrock varies from 0.08 to 0.15 g for 10 % (T = 475 years) and 2 % (T = 2,475 years) probabilities of exceedance in 50 years, respectively. We establish the empirical relationships ZPA_(T=475) = 0.1146 [V_s(30)]^(−0.2924) and ZPA_(T=2475) = 0.2053 [V_s(30)]^(−0.2426) between zero period acceleration (ZPA) and the shear wave velocity in the upper 30 m [V_s(30)] for the two return periods. These show that ZPA values decrease with increasing shear wave velocity, a diagnostic indicator for designing structures at a specific site of interest. The predictive design response spectra generated at a site for periods up to 4.0 s at 10 and 2 % probability of exceedance of ground motion in 50 years can be used for designing duration-dependent structures of variable vertical dimension.
We infer that this concept of assimilating uniform hazard response spectra and predictive design at 10 and 2 % probability of exceedance in 50 years, at 5 % damping, for bedrocks of different categories may offer useful inputs for designing earthquake-resistant structures of variable dimensions in the CI region under the National Earthquake Hazard Reduction Program for India.

16.

The index flood method of the regional L-moments approach is adapted to annual maximum rainfall (AMR) series of successively increasing durations from 5 minutes to 24 hours. In Turkey, there are 14 such AMR series, with standard durations of 5, 10, 15, 30, 60, 120, 180, 240, 300, 360, 480, 720, 1080 and 1440 min. The parameters of the probability distributions found suitable for these AMR series in a homogeneous region need to be adjusted so that their quantile functions do not cross each other over the entire range of probabilities. The adjustment requires that, for any specific probability, (1) the derivative of the quantile function with respect to the Gumbel reduced variate of a longer-duration AMR be greater than or equal to that of the shorter-duration AMR, and (2) the quantile of a longer-duration AMR be greater than that of the shorter-duration AMR. Accordingly, the parameters of a probability distribution fitted to an AMR series must either increase, decrease or stay constant with increasing rainstorm duration, and the parameters of different distributions fitted to two sequential AMR series must be interrelated. The index flood method by the L-moments approach, modified in this manner for successive-duration AMR series, is applied to the Inland Anatolia region of Turkey using data recorded at 31 rain-gauging stations with record lengths from 31 to 66 years.
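The two non-crossing conditions above are easy to state for the Gumbel case, where the quantile is linear in the reduced variate y = −ln(−ln F): the longer duration needs a scale (the derivative with respect to y) at least as large, and a larger quantile at every probability. A sketch with hypothetical Gumbel fits (the location/scale numbers are illustrative, not the Inland Anatolia estimates):

```python
import math

def gumbel_quantile(F, loc, scale):
    """Gumbel quantile expressed via the reduced variate y = -ln(-ln F)."""
    y = -math.log(-math.log(F))
    return loc + scale * y

def non_crossing(short_params, long_params, probs):
    """Check the two adjustment criteria from the abstract: the derivative
    of the longer-duration quantile function w.r.t. the reduced variate
    (its Gumbel scale) must be >= the shorter one's, and its quantile must
    exceed the shorter-duration quantile at every probability checked."""
    (loc_s, sc_s), (loc_l, sc_l) = short_params, long_params
    return sc_l >= sc_s and all(
        gumbel_quantile(F, loc_l, sc_l) > gumbel_quantile(F, loc_s, sc_s)
        for F in probs)

# Hypothetical (location, scale) fits for 60-min and 120-min AMR series:
probs = [0.5, 0.9, 0.99, 0.999]
ok = non_crossing((20.0, 5.0), (28.0, 6.5), probs)
```

For distributions other than the Gumbel, the same checks apply but the derivative with respect to y is no longer constant, which is why the abstract ties the parameters of sequential-duration fits together.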
EDITOR Z.W. Kundzewicz; ASSOCIATE EDITOR A. Viglione

17.
This work evaluated the spatial variability and distribution of heterogeneous hydraulic conductivity (K) in the Choushui River alluvial fan in Taiwan, using ordinary kriging (OK) and mean and individual sequential Gaussian simulations (SGS). A baseline flow model constructed from upscaled parameters was inversely calibrated to determine the pumping and recharge rates. Simulated heads using the different K realizations were then compared with historically measured heads, and global and local errors between simulated and measured heads were analysed to assess the spatial variability of the various estimated K distributions. The results of a MODFLOW simulation indicate that the OK realization had the smallest sum of absolute mean simulation errors (SAMSE) and that the SGS realizations preserved the spatial variability of the measured K fields; the SAMSE increases as the spatial variability of the K field increases. The OK realization yields small local simulation errors where the measured K field is of moderate magnitude, whereas the SGS realizations yield small local simulation errors where the measured K values are high or low. The OK realization of K can be applied in deterministic inverse calibration. The mean SGS method is suggested for constructing a K field when the application focuses on extreme values of the estimated parameters and small calibration errors, such as in a simulation of contaminant transport in heterogeneous aquifers. The individual SGS realizations are useful for stochastically assessing the spatial uncertainty of highly heterogeneous aquifers. Copyright © 2004 John Wiley & Sons, Ltd.

18.
The Gujarat and adjoining region falls within all four seismic zones (V, IV, III and II) of the seismic zoning map of India and is one of the most seismically prone intracontinental regions of the world. It has experienced two large earthquakes, of magnitude M_w 7.8 in 1819 and M_w 7.7 in 2001, and several moderate earthquakes during the past two centuries. In the present study, the probability of occurrence of earthquakes of M ≥ 5.0 during a specified time interval has been estimated for different elapsed times, on the basis of observed inter-event times, using three stochastic models: Weibull, gamma and lognormal. A complete earthquake catalogue covering 1819 to 2006 has been used. The whole region has been divided into three major seismic regions (Saurashtra, Mainland Gujarat and Kachchh) on the basis of seismotectonics and geomorphology. The earthquake hazard parameters have been estimated using the method of maximum likelihood, and the logarithm of the likelihood function (ln L) was used to test the suitability of the models in the three regions. The Weibull model fits the data well in the Saurashtra and Kachchh regions, whereas the lognormal model fits well in Mainland Gujarat. The mean intervals between earthquakes are estimated as 40.455, 20.249 and 13.338 years in the Saurashtra, Mainland Gujarat and Kachchh regions, respectively. The estimated cumulative probability (the probability that the next earthquake will occur at a time later than some specific time from the last earthquake) for earthquakes of M ≥ 5.0 reaches 0.9 after about 64 years from the last earthquake (1993) in Saurashtra, about 49 years from the last earthquake (1969) in Mainland Gujarat, and about 29 years from the last earthquake (2006) in the Kachchh region.
The conditional probability (the probability that the next earthquake will occur during some specific time interval after a certain elapsed time from the last earthquake) is also estimated; it reaches about 0.8 to 0.9 during the time interval of about 57 to 66 years from the last earthquake (1993) in the Saurashtra region, 31 to 51 years from the last earthquake (1969) in Mainland Gujarat, and about 21 to 28 years from the last earthquake (2006) in the Kachchh region.

19.
The temporal distribution of earthquakes with M_w > 6 in the Dasht-e-Bayaz region, eastern Iran, has been investigated using time-dependent models, in which the times between consecutive large earthquakes are assumed to follow a given statistical distribution. Four inter-event time distributions, the Weibull, gamma, lognormal, and Brownian Passage Time (BPT) distributions, are used in this study, and the associated parameters are estimated by the method of maximum likelihood. The most suitable distribution is selected on the basis of the log-likelihood function and the Bayesian Information Criterion. The probability of occurrence of the next large earthquake during a specified interval of time was calculated for each model, and the concept of conditional probability was then applied to forecast the next major (M_w > 6) earthquake at the site of interest. The emphasis is on statistical methods that attempt to quantify the probability of an earthquake occurring within specified time, space, and magnitude windows. According to the obtained results, the probability of occurrence of an earthquake with M_w > 6 in the near future is significantly high.

20.
We calculated tsunami runup probability (in excess of 0.5 m) at coastal sites throughout the Caribbean region. We applied a Poissonian probability model because of the variety of uncorrelated tsunami sources in the region. Coastlines were discretized into 20 km by 20 km cells, and the mean tsunami runup rate was determined for each cell. The remarkable ~500-year empirical record compiled by O'Loughlin and Lander (2003) was used to calculate an empirical tsunami probability map, the first of three constructed for this study. However, it is unclear whether the 500-year record is complete, so we conducted a seismic moment-balance exercise using a finite-element model of the Caribbean–North American plate boundaries and the earthquake catalog, and found that moment could be balanced if the seismic coupling coefficient is c = 0.32. The modeled moment release was therefore used to generate synthetic earthquake sequences from which 50 tsunami runup scenarios for 500-year periods were calculated, and a second probability map was made from the numerically calculated runup rates in each cell. Differences between the first two probability maps, based on empirical and numerically modeled rates, suggest that each captures different aspects of tsunami generation: the empirical model may be deficient in primary plate-boundary events, whereas the numerical model rates lack backarc fault and landslide sources. We therefore prepared a third probability map using Bayesian likelihood functions derived from the empirical and numerical rate models and their attendant uncertainties to weight a range of rates in each 20 km by 20 km coastal cell. Our best-estimate map gives 30-year runup probabilities ranging from 0 to 30% across the region.
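Under the Poissonian model named in this abstract, the probability of at least one qualifying runup in a time window follows directly from the cell's mean rate: P = 1 − exp(−rate × window). A one-line sketch with a hypothetical rate (not taken from the study's maps):

```python
import math

def poisson_runup_probability(rate_per_year, window_years):
    """Probability of at least one qualifying tsunami runup within the
    window, under a Poissonian model: P = 1 - exp(-rate * window)."""
    return 1.0 - math.exp(-rate_per_year * window_years)

# A coastal cell averaging one runup > 0.5 m per 200 years (hypothetical):
p30 = poisson_runup_probability(1.0 / 200.0, 30.0)  # ~0.14
```

A cell with this rate would sit comfortably inside the 0 to 30% range of 30-year probabilities the abstract reports.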


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号