Similar Articles
20 similar articles found (search time: 31 ms)
1.
A reliable automatic procedure for locating earthquakes in quasi-real time is strongly needed for seismic warning systems, earthquake preparedness, and the production of shaking maps. The reliability of an automatic location algorithm is influenced by several factors, such as errors in picking seismic phases, network geometry, and velocity-model uncertainties. The main purpose of this work is to compare the performance of different automatic procedures and choose the most suitable one for quasi-real-time earthquake location in northwestern Italy. The reliability of two automatic picking algorithms (one based on Characteristic Function (CF) analysis, the CF picker, and the other based on the Akaike information criterion, the AIC picker) and of two location methods (the “Hypoellipse” and “NonLinLoc” codes) is analysed by comparing the automatically determined hypocentral coordinates with reference locations. The reference locations are computed with the “Hypoellipse” code from manually revised data and tested using quarry blasts. The comparison is made on a dataset of 575 seismic events recorded by the Regional Seismic Network of Northwestern Italy in the period 2000–2007. For P phases, the AIC and CF pickers give similar results, both in the number of detected picks and in the size of the travel-time differences with respect to manual picks; for S phases, by contrast, the AIC picker provides a significantly greater number of readings than the CF picker. Furthermore, the “NonLinLoc” software (applied to a 3D velocity model) proves more reliable than the “Hypoellipse” code (applied to layered 1D velocity models), yielding more reliable automatic locations even when outliers (wrong picks) are present.
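The AIC picker named in this abstract can be illustrated with a minimal sketch. This is the common formulation that applies the criterion directly to the trace (split the window at every candidate sample, model the two segments as stationary noise, and pick where the AIC is minimal); it is not the authors' implementation, and the synthetic trace, noise levels, and onset position below are invented for the example.

```python
import numpy as np

def aic_pick(x):
    """AIC phase picker on a raw trace.

    For each candidate split point k, the trace is modelled as two
    stationary segments with different variances; the pick is the k
    that minimises AIC(k) = k*ln(var(x[:k])) + (n-k-1)*ln(var(x[k:])).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    aic = np.full(n, np.inf)
    for k in range(2, n - 2):
        v1 = np.var(x[:k])
        v2 = np.var(x[k:])
        if v1 > 0.0 and v2 > 0.0:
            aic[k] = k * np.log(v1) + (n - k - 1) * np.log(v2)
    return int(np.argmin(aic))

# Synthetic trace: background noise followed by a stronger "arrival"
# beginning at sample 200 (purely illustrative numbers).
rng = np.random.default_rng(0)
trace = np.concatenate([rng.normal(0.0, 1.0, 200),
                        rng.normal(0.0, 5.0, 300)])
pick = aic_pick(trace)  # should land close to sample 200
```

The sharp variance increase at the onset drives the AIC minimum to the true arrival; on real data the window is usually pre-selected by a coarser trigger first.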

2.
We present maximum usable frequency (MUF) calculations for a radiowave radiated at zero elevation angle and reflected from the ionosphere along direct and reverse paths when the latitudinal variability of the medium is significant. As an example, we consider the Novorossiysk–California path. Calculations were carried out using a “two-point” method and the Monthly MUF Prediction data for May 1980 and May 1991. The “two-point” method is validated by a new approximate representation of the Watson integral, which is an exact solution of the benchmark problem of a point-source field in a spherically layered medium. It is shown that MUFs along the reverse path are several MHz higher than MUFs along the direct path throughout the day.

3.
Gravity anomaly reference fields, required e.g. in remove-compute-restore (RCR) geoid computation, are obtained from global geopotential models (GGM) through harmonic synthesis. Usually, the gravity anomalies are computed as point values or area mean values in spherical approximation, or as point values in ellipsoidal approximation. The present study proposes a method for computing area mean gravity anomalies in ellipsoidal approximation (“ellipsoidal area means”) by applying a simple ellipsoidal correction to area means in spherical approximation. Ellipsoidal area means offer better consistency with GGM quasigeoid heights. The method is numerically validated against ellipsoidal area mean gravity derived from very fine grids of gravity point values in ellipsoidal approximation. The signal strengths of (i) the ellipsoidal effect (the difference between ellipsoidal and spherical approximation), (ii) the area mean effect (the difference between area means and point gravity) and (iii) the ellipsoidal area mean effect (the difference between ellipsoidal area means and point gravity in spherical approximation) are investigated in test areas in New Zealand and the Himalaya. The impact of both the area mean and the ellipsoidal effect on quasigeoid heights is of the order of several centimetres. The proposed new gravity data type not only allows more accurate RCR-based geoid computation, but may also be of value for GGM validation using terrestrial gravity anomalies that are available as area mean values.

4.
Gradient-based similarity in the atmospheric boundary layer
The “flux-based” and “gradient-based” similarity in the stable boundary layer, and also in the interfacial part of the convective boundary layer, is discussed. The stable case is examined on the basis of data collected during the CASES-99 experiment. Its interfacial counterpart is considered in both the quasi-steady (mid-day) and non-steady states, utilizing the results of large-eddy simulations. In the stable regime, the “gradient-based” approach is not unique and can be based on various master length scales. Three local master length scales are considered: the local Monin-Obukhov scale, the buoyancy scale, and the Ellison scale. In the convective “quasi-steady” (mid-day) case, the “mixed layer” scaling is shown to be valid in the mixed layer and invalid in the interfacial layer. The temperature variance profile in non-steady conditions can be expressed in terms of the convective temperature scale in the mixed layer. The analogous prediction for velocity variances is not valid under non-steady conditions.

5.
In this work, 2D and 3D stress fields at several scale levels near the tip of a main fault (a vertical strike-slip fault) under compression are mathematically calculated and investigated. The elastic problem is solved for a 2D “horizontal” field; a 3D stress field is obtained by superimposing a “vertical” uniaxial compression. It is shown that the surroundings of the fault are subdivided into three (not two, as is usually assumed) regions of predicted secondary fracture types: “extension,” “strike-slip,” and “compression.” Near the tip of the main fault, three different microregions occur, and the type of failure in these microregions depends on the parameters of the external load. Natural and model data on second-order fractures are compared with the calculated results and generalized. The investigation is important for determining the genesis of secondary fractures located close to a main fault, and the calculated parageneses of secondary fractures may be used to estimate the stress-tensor type of the regional field.

6.
A mathematical model for determining a local geoid by combining airborne gravity disturbances with the Earth Gravitational Model 2008 (EGM08) is briefly reviewed. The precision of the estimated local geoid model of Taiwan is tested by comparison with the “real” geoid at Global Navigation Satellite System (GNSS)/levelling points. The same comparison at GNSS/levelling points is made for the geoid evaluated using EGM08 alone. Conclusions concerning the rate of improvement of the “global” geoid from EGM08 by the “local” geoid from airborne gravity data are presented.

7.
Study on the pattern and mode of vertical crustal deformation during the seismogenic process of intraplate strong earthquakes. 杨国华, 桂昆长, 巩曰沐, 杨春花, 韩...

8.
In this paper, based on a previous study of the practical use of seismic regime windows and seismic regime belts, the problem of establishing a “seismic regime network” consisting of “windows” and “belts” is posed and discussed, motivated by the observation that many “windows” and “belts” respond to a single earthquake. For convenience of use, the “seismic regime network” is divided into two classes: the first class can be used for tendency prediction of long-term seismic activity over a large area, while the second is used for short-term prediction in a small area. After briefly discussing the physical significance of the “seismic regime network”, it is pointed out that this simple and easily applied method can be used to observe and extract seismic precursory information over a large area before a great earthquake, and can thus provide a reliable basis for analysing and judging the seismic regime tendency in time and space. No doubt, this method is of practical significance. The Chinese version of this paper appeared in the Chinese edition of Acta Seismologica Sinica, 13, 161–169, 1991. The English version was improved by Prof. Shaoxie Xu.

9.
In the present paper a statistical model for extreme value analysis is developed that accounts for seasonality. The model is applied to significant wave height data from the North Aegean Sea. It is built on a non-stationary point process that incorporates, apart from a time-varying threshold and harmonic functions with a period of one year, a component μw(t) estimated through the wavelet transform. The wavelet transform plays a dual role in the present study: it detects the significant “periodicities” of the signal by means of the wavelet global and scale-averaged power spectra, and it is then used to reconstruct the part of the time series, μw(t), represented by these significant features. A number of candidate models incorporating μw(t) in their location and scale parameters are tried. To avoid overparameterisation, an automatic model selection procedure based on the Akaike information criterion is carried out, and the best model is evaluated graphically by means of diagnostic plots. Finally, “aggregated” return levels with return periods of 20, 50 and 100 years, as well as time-dependent quantiles, are estimated by combining the results of the wavelet analysis and the Poisson process model, showing a significant reduction in return-level estimation uncertainty compared with simpler non-stationary models.
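The AIC-based automatic model selection used in this abstract can be sketched in miniature. The toy below is not the authors' point-process/extreme-value code: it compares least-squares fits under a Gaussian error model, with an invented synthetic "annual cycle plus noise" series, purely to show how the criterion penalises overparameterisation.

```python
import numpy as np

def gaussian_aic(y, yhat, n_params):
    """AIC for a least-squares fit under a Gaussian error model.

    Uses the profile likelihood in the error variance, hence the extra
    +1 parameter for the variance itself.
    """
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * (n_params + 1)

# Synthetic series: mean level + one annual harmonic + noise (illustrative).
rng = np.random.default_rng(1)
t = np.linspace(0.0, 4.0, 200)
y = 2.0 + 1.5 * np.sin(2 * np.pi * t) + rng.normal(0.0, 0.3, t.size)

# Candidate design matrices of increasing complexity.
candidates = {
    "constant": np.column_stack([np.ones_like(t)]),
    "harmonic": np.column_stack(
        [np.ones_like(t), np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)]),
    "harmonic+trend": np.column_stack(
        [np.ones_like(t), t, np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)]),
}

scores = {}
for name, X in candidates.items():
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    scores[name] = gaussian_aic(y, X @ beta, X.shape[1])
best = min(scores, key=scores.get)
```

The harmonic model fits the seasonal signal, so its AIC is far below the constant model's; the spurious trend term buys almost no fit improvement and is taxed by the 2-per-parameter penalty, which is the overparameterisation guard the abstract refers to.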

10.
Case histories of water-level subsidence in boreholes as an earthquake precursor are given here. Based on these examples, a testable quantitative theory for the causative mechanism of the precursor, the “draining-injecting water model with variable discharge” (abbreviated DIW model), is proposed. By analysing the constitutive law governing deformation changes in porous, water-saturated media under exterior stress, the authors first suggested a simple “drainage-natural restoration model” (abbreviated DNR model), calculated a group of theoretical precursor curves with it, compared the theoretical curves with the observed ones, identified the differences between the two, studied the physical factors causing those differences, and then revised the DNR model, finally arriving at the “draining-injecting water model with variable discharge” presented in this paper. The authors derive the general equation of the two-dimensional “draining-injecting water linear source drawdown field” and introduce and develop the concept of a “domain”. The DIW model can also give a possible explanation for both the regularity and the complexity of this precursor. DIW theory can quantitatively divide the short-term and impending stages of the seismogenic process into several phases, and by inverting the discharge-function q(τ) curve, the time values dividing these phases were obtained. They will be helpful for predicting the occurrence time of an earthquake and for judging between the DD and IPE models of seismogenesis. The Chinese version of this paper appeared in the Chinese edition of Acta Seismologica Sinica, 15, 194–201, 1993.

11.
The mean sea surface height (MSSH) refers to the average of the long-term sea height. The quasi-sea surface topography (QSST) is usually defined as the height difference between the MSSH and the geoid. Compared with the roughly 100-year time yardstick of geodesy, the period spanned by satellite altimetry data sets is relatively short; in this paper, the QSST therefore refers to the residual sea surface height (RSSH), i.e. the height difference between the altimetry-derived MSSH and the geoid[1]. As w…

12.
The gravity-geologic method (GGM) was implemented for 2′ by 2′ bathymetric determinations in a 1.6° longitude by 1.0° latitude region centered on the eastern end of the Shackleton Fracture Zone in the Drake Passage, Antarctica. The GGM used the Bouguer slab approximation to process satellite altimetry-derived marine free-air gravity anomalies and 6,548 local shipborne bathymetric soundings from the Korea Ocean Research and Development Institute to update the surrounding off-track bathymetry. The limitations of the Bouguer slab for modeling the gravity effects of variable-density, rugged bathymetric relief at distances up to several kilometers were mitigated by establishing ‘tuning’ densities that stabilized the GGM predictions. Tests using two-thirds of the shipborne bathymetric measurements to estimate the remaining third indicated that the tuning densities minimized root-mean-square deviations to about 29 m. The optimum GGM bathymetry model honoring all the ship observations correlated very well with widely available bathymetry models, despite local differences that ranged up to a few kilometers. The great analytical simplicity of the GGM makes it possible to update bathymetry accurately and efficiently as new gravity and bathymetric sounding data become available. Furthermore, the availability of marine free-air gravity anomaly data makes the GGM more effective than simply extrapolating or interpolating ship bathymetry coverage into unmapped regions.
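The Bouguer slab step at the heart of the GGM can be sketched as follows: residual (short-wavelength) gravity is converted to slab thickness relative to a reference depth via the infinite-slab formula g = 2πGΔρh. This is a simplified illustration, not the paper's implementation; the density contrast and depths below are invented, and in practice the ‘tuning’ density mentioned in the abstract would replace the nominal value.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
DRHO = 1640.0      # assumed rock-seawater density contrast, kg/m^3 (illustrative)
MGAL = 1e5         # 1 m/s^2 = 1e5 mGal

def bouguer_slab_mgal(thickness_m, drho=DRHO):
    """Gravity effect of an infinite horizontal slab, in mGal."""
    return 2.0 * math.pi * G * drho * thickness_m * MGAL

def ggm_depth(free_air_mgal, regional_mgal, ref_depth_m, drho=DRHO):
    """GGM-style depth estimate at an off-track point.

    The residual anomaly (free-air minus long-wavelength regional field)
    is attributed to a slab of seafloor relief above the reference depth.
    """
    residual = free_air_mgal - regional_mgal
    return ref_depth_m + residual / (2.0 * math.pi * G * drho * MGAL)

# Round trip: a 1000 m slab of relief above a -4000 m reference depth
# should be recovered as a -3000 m seafloor.
anomaly = bouguer_slab_mgal(1000.0)
depth = ggm_depth(anomaly, 0.0, -4000.0)
```

The inversion is exact only for the idealised slab; the abstract's point is that tuning the density compensates for the geometry the slab cannot model.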

13.
Attempts to build a “constant-stress-drop” scaling of the earthquake-source spectrum have invariably met with difficulties. Physically, such a scaling would mean that the low-frequency content of the spectrum controls the high-frequency content, reducing the number of parameters governing the time history of a shear dislocation to one. This is technically achieved through relationships between the corner frequency of the spectrum and the fault size, inevitably introduced in an arbitrary manner using a constant termed the “stress drop”. Throughout decades of observations, this quantity has never proved to be constant. This fact has fundamental physical reasons. The dislocation motion is controlled by two independent parameters: the final static offset and the speed at which it is reached. The former controls the low-frequency asymptote of the spectrum, while the latter controls its high-frequency content. There is no physical reason to believe that the static displacement should predetermine the slip rate, which would be implied if the “stress drop” were constant. Reducing the two parameters to just one (the seismic moment or magnitude) in a “scaling law” has no strict justification; it necessarily involves arbitrary assumptions about the relationship of one parameter to the other. This explains why the “constant-stress-drop” scaling in seismology has been believed in but never reconciled with the data.
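The corner-frequency-to-fault-size relationship criticised in this abstract can be made concrete with the widely used Brune (1970) parameterization: source radius r = 2.34β/(2πfc) and stress drop Δσ = (7/16)·M0/r³. The sketch below (the shear-wave speed and the two corner frequencies are invented example values) shows that the inferred "stress drop" scales as fc³ at fixed moment, which is why events with the same moment but different rupture speeds yield very different values.

```python
import math

def brune_stress_drop(m0_nm, fc_hz, beta_ms=3500.0):
    """Brune-model stress drop from seismic moment and corner frequency.

    m0_nm   : seismic moment, N*m
    fc_hz   : corner frequency, Hz
    beta_ms : shear-wave speed, m/s (assumed value)
    Returns stress drop in Pa.
    """
    r = 2.34 * beta_ms / (2.0 * math.pi * fc_hz)  # source radius, m
    return (7.0 / 16.0) * m0_nm / r ** 3

# Same moment (1e17 N*m), corner frequency differing by a factor of 2:
# the inferred stress drop differs by 2^3 = 8.
low = brune_stress_drop(1e17, 0.5)
high = brune_stress_drop(1e17, 1.0)
```

Because M0 and fc are observationally independent, assuming Δσ is a universal constant is exactly the arbitrary one-parameter reduction the abstract objects to.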

14.
After a mass extinction, the ecosystems in most areas were seriously damaged and may have become “ecologically barren areas” with impoverished or even absent ecosystems. Knowing what the pioneer organisms were and how they developed, and tracing the re-establishment of the ecosystems, are of great importance for the study of biological evolution and recovery in the aftermath. As one of the “big five” mass extinctions in geological history, the Late Devonian Frasnian-Famennian (F-F) e…

15.
A careful re-examination of the well-known written documents pertaining to the 2,750-year-long historical period of Mount Etna was carried out, and their interpretation was checked with the high-accuracy archeomagnetic method (>1,200 large samples), combined with 226Ra-230Th radiochronology. The magnetic dating is based upon secular variation of the direction of the geomagnetic field (DGF) and is estimated to reach a precision of ±40 years for the last 1,200 years, and ±100 to 200 years back to circa 150 B.C. Although less precise, the 226Ra-230Th method provides a unique tool for distinguishing between historic and prehistoric lavas, which in some cases might have similar DGFs. We show that despite the abundance of details on ancient historical eruptions, the primary sources of information are often too imprecise to identify their lava flows and eruptive systems. Most of the ages of these lavas, which are today accepted on the geological maps and in the catalogues, were attributed in the 1800s on the basis of their morphology and without any stratigraphic control. In fact, we found that 80% of the “historically dated” flows and cones prior to the 1700s are usually several hundred years older than recorded, the discrepancies sometimes exceeding a millennium. This is precisely the case for the volcanics presumed to belong to the “1651 east” (actually ∼1020), “1595” (actually two distinct flows, ∼1200 and ∼1060), “1566” (∼1180), “1536” (two branches dated ∼1250 and ∼950), “1444” (a branch dated ∼1270), “1408” (lower branches dated ∼450 and ∼350), “1381” (∼1160), “1329” (∼1030), “1284” (∼1450 and ∼700), and “1169 or 812” (∼1000) eruptions. Conversely, well-preserved cones and flows that are undated on the maps were produced by recent eruptions that went unnoticed in historical accounts, especially during the Middle Ages. For the few eruptions recorded between A.D. 252 and 750 B.C., none of the presumed lava flows shows a DGF in agreement with that existing at its respective date of occurrence, most of these flows being in fact prehistoric. The cinder cones of Monpeloso (presumed “A.D. 252”) and Mt. Gorna (“394 B.C.”), although roughly consistent magnetically and radiochronologically with their respective epochs, remain of unspecified age because of the lack of precision of the DGF reference curve at those times. It is concluded that on the time scale of the last millennia, Mount Etna does not show steady-state behavior. Periods of voluminous eruptions lasting 50 to 150 years (e.g., A.D. 300–450, 950–1060, 1607–1669) are followed by centuries of less productive activity, although a violent outburst may occur at any time. This revised history should be taken into account for eruptive models, magma output, the internal plumbing of the volcano, petrological evolution, volcano mapping and civil protection.

16.
Based on observations over many years, it has been found that “small earthquake modulation windows” exist in certain special geological structures; they respond sensitively to variations of regional stress fields and to the activity of earthquake swarms of moderately strong magnitude and above, and can supply precursory information. Two or more “small earthquake modulation windows” can also indicate the general direction of the first main earthquake of an earthquake cluster. Compared with the frequency-based “seismic window”, the “modulation window” undoubtedly has a unique practical significance for medium-term earthquake prediction on a time scale of two or three years. The English version was improved by Prof. Xin-Ling QIN, Institute of Geophysics, SSB, China.

17.
The pattern characteristics of the tendency variations of earth resistivity and its relation to earthquakes. He-Yun ZHAO (赵和云) (Earthquake Resear...

18.
The resistivity of rocks was studied under pressure, particularly during “stress reversal”, a procedure in which the applied pressure is increased and then decreased. It was observed that: 1) With increasing pressure, the resistivity of high-saturation rock samples (saturation 70–100%) typically increased, held steady, and then decreased, whereas low-saturation samples behaved differently. 2) In 10 out of 11 “stress reversal” tests on high-saturation samples, the resistivity dropped (by about 2%); such drops could explain the geoelectric anomalies commonly observed before earthquakes in China. 3) Shortly before rock failure it was also observed that: a) the resistivity drops far more dramatically (by about 20%) during the “stress reversal” period than the ordinary drops; b) these drops occurred not only during stress decrease but also during stress increase; c) the resistivity exhibits anisotropy, differing by up to 10% between directions. These three features may indicate that the rock is nearing failure, whereas ordinary resistivity drops are connected only with “stress reversal” and may not mean that rock failure is imminent. 4) A resistivity increase was observed during the “stress reversal” period for low-saturation rock samples. These results were explained by water flowing into and out of the cracks of the rock. Temporary factors that reduce the maximum principal stress may enhance the possibility of earthquake occurrence.

19.
A new formulation of the problem of the statistical stability of fully turbulent shear flow is proposed, in which one seeks mean fields that bound the observed flow from the stable side. In the spirit of maximum transport theory, this formulation admits a larger set of “flows” than are dynamically possible. A sequence of constraints derived from the equations of motion can narrow this set, permitting at each step the determination of a “most stable” field free of any empirical elements. Turbulent channel flow is proposed as the first application and test of this quantitative theory. Past deductive theories for this flow, from “mean field” to “transport upper bounds,” are assessed, and it is shown why these theories do not retain the significant destabilizing mechanisms of the actual flow. The implications for turbulent flow of recent work on the nonlinear and three-dimensional instability of laminar shearing flow are described. In a first exploration of the “decoupled mean” stability theory proposed here, approximate analytical and numerical stability methods are used to find an amplitude and structure for the averaged flow properties. The quantitative results differ by considerably less than a factor of two from the observed values, providing an incentive for a more complete numerical study and for further constraints on the admitted class of flows. In the language now current in nonlinear stability theory, evidence is advanced here that an N-dimensional central manifold is adjacent to the realized turbulent flow, where N has the largest possible value compatible with the dynamical relations.

20.
Variation of snow water resources in northwestern China, 1951–1997
Two models are used to simulate the high-altitude permafrost distribution on the Qinghai-Xizang Plateau: the “altitude model”, a Gaussian distribution function used to describe the latitudinal zonation of permafrost based on the three-dimensional rules of high-altitude permafrost, and the “frost number model”, a dimensionless ratio defined from freezing and thawing degree-day sums. The results show that the “altitude model” can accurately simulate the high-altitude permafrost distribution under present climate conditions. Given the essential hypotheses and using the GCM scenarios from HADCM2, the “altitude model” is used to predict the change in permafrost distribution on the Qinghai-Xizang Plateau. The results show that the permafrost on the plateau will not change significantly over the next 20–50 years, with the total area lost not exceeding 19%. However, by the year 2099, if the air temperature on the plateau increases by an average of 2.91°C, the decrease in the area of permafrost will exceed 58%, and almost all the permafrost in the southern and eastern plateau will disappear. Project “Fundamental Research of Cryosphere” supported by the Chinese Academy of Sciences.
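The “frost number model” named in this abstract has a standard closed form (the Nelson-Outcalt surface frost number): F = √DDF / (√DDF + √DDT), where DDF and DDT are the annual freezing and thawing degree-day sums, with F ≥ 0.5 conventionally taken to indicate permafrost. The sketch below uses that common formulation with invented degree-day values; it is an illustration of the ratio, not the paper's calibrated model.

```python
import math

def frost_number(ddf, ddt):
    """Surface frost number from freezing (ddf) and thawing (ddt)
    degree-day sums, both in degC*day. Returns a value in (0, 1)."""
    return math.sqrt(ddf) / (math.sqrt(ddf) + math.sqrt(ddt))

# Illustrative climates: freezing-dominated vs thawing-dominated.
f_cold = frost_number(ddf=3000.0, ddt=800.0)   # expect F > 0.5: permafrost likely
f_warm = frost_number(ddf=800.0, ddt=3000.0)   # expect F < 0.5: no permafrost
```

Warming shifts degree-days from DDF to DDT, pulling F below the 0.5 threshold, which is how a degree-day model translates a temperature scenario into permafrost loss.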


Copyright©北京勤云科技发展有限公司  京ICP备09084417号