Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
Coseismic deformation can be determined from strong-motion records of large earthquakes. Iwan et al. (Bull Seismol Soc Am 75:1225–1246, 1985) showed that baseline corrections are often required to obtain reliable coseismic deformation because baseline offsets lead to unrealistic permanent displacements. Boore (Bull Seismol Soc Am 91:1199–1211, 2001) demonstrated that different choices of time points for baseline correction can yield realistic-looking displacements, but with variable amplitudes. The baseline correction procedure of Wu and Wu (J Seismol 11:159–170, 2007) improved upon Iwan et al. (Bull Seismol Soc Am 75:1225–1246, 1985) and achieved stable results. However, their time points for baseline correction were chosen by a recursive process with an artificial criterion. In this study, we follow the procedure of Wu and Wu (J Seismol 11:159–170, 2007) but use the ratio of energy distribution in accelerograms as the criterion to determine the time points of baseline correction automatically, thus avoiding the manual choice of time points and speeding up the estimation of coseismic deformation. We use the 1999 Chi-Chi earthquake in central Taiwan and the 2003 Chengkung and 2006 Taitung earthquakes in eastern Taiwan to illustrate this new approach. Comparison between the results from this and previous studies shows that our new procedure is suitable for quick and reliable determination of coseismic deformation from strong-motion records.
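The energy-distribution criterion can be sketched as follows: the normalized cumulative squared acceleration of the record is computed, and the times at which it crosses a low and a high energy fraction are taken as the baseline-correction time points. A minimal sketch, assuming illustrative energy fractions of 5% and 95% (the paper's actual thresholds may differ):

```python
import numpy as np

def energy_ratio_time_points(acc, dt, low=0.05, high=0.95):
    """Pick baseline-correction time points from the cumulative energy of an
    accelerogram; `low` and `high` are illustrative energy fractions."""
    energy = np.cumsum(acc ** 2) * dt        # cumulative (Arias-type) energy integral
    ratio = energy / energy[-1]              # normalized energy distribution, 0..1
    t1 = np.searchsorted(ratio, low) * dt    # time when the low fraction is reached
    t2 = np.searchsorted(ratio, high) * dt   # time when the high fraction is reached
    return t1, t2

# Toy accelerogram sampled at 200 Hz with energy concentrated around t = 20 s
dt = 0.005
t = np.arange(0.0, 60.0, dt)
acc = np.exp(-0.5 * ((t - 20.0) / 5.0) ** 2) * np.sin(2 * np.pi * 1.5 * t)
print(energy_ratio_time_points(acc, dt))
```

The two returned times would then be passed to the Wu and Wu (2007) correction scheme in place of manually chosen points.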

2.
3.
Binary data such as survival, hatching and mortality are assumed to be best described by a binomial distribution. This article provides a simple and straightforward approach for deriving a no/lowest observed effect level (NOEL/LOEL) in a one-to-many control versus treatments setup. In practice, NOEL and LOEL values can be derived by means of different procedures, e.g. using Fisher’s exact test together with adjusted p values. However, using adjusted p values substantially reduces statistical power. Alternatively, multiple t tests (e.g. the Dunnett test procedure) together with arcsin-square-root transformations can be applied in order to account for variance heterogeneity of binomial data. The arcsin-square-root transformation, however, violates the normality assumption because the transformed data are bounded, whereas a normal distribution has support on \((-\infty ,\infty )\). Furthermore, results of statistical tests relying on an approximate normal distribution are themselves only approximate. When testing for trends in probabilities of success (probs), the step-down Cochran–Armitage trend test (CA) can be applied. Its test statistic is approximately normal; however, if probs approach 0 or 1, the normal approximation of the null distribution is poor, so critical values and p values lack statistical accuracy. We propose applying the closure principle (CP) and the Fisher–Freeman–Halton test (FISH). The resulting CPFISH can solve the problems mentioned above. CP is used to overcome \(\alpha\)-inflation, while FISH is applied to test for differences in probs between the control and any subset of treatment groups. Its applicability is presented by means of real data sets. Additionally, we performed a simulation study of 81 different setups (differing numbers of control replicates, numbers of treatments, etc.) and compared the results of CPFISH to CA, allowing us to point out the advantages and disadvantages of CPFISH.
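The core of the proposed CPFISH can be sketched directly: every intersection hypothesis (control versus a subset of treatments) is tested with a Fisher–Freeman–Halton test, and a treatment is declared different from the control only if all intersection hypotheses containing it are rejected. The sketch below is a simplified stand-in, not the authors' implementation: the exact FISH p value is approximated by Monte Carlo sampling of fixed-margin tables, and the data are hypothetical.

```python
import numpy as np
from itertools import combinations
from scipy.special import gammaln

rng = np.random.default_rng(1)

def ffh_mc_pvalue(succ, n, n_sim=20000):
    """Monte Carlo Fisher-Freeman-Halton p value for a 2 x C table given as
    successes `succ` out of `n` per group. Tables are drawn from the fixed-margin
    (multivariate hypergeometric) null, and the p value is the probability of a
    table at most as likely as the observed one."""
    def log_prob(x):  # log conditional table probability up to a margin-only constant
        return -(gammaln(x + 1) + gammaln(n - x + 1)).sum(axis=-1)
    sims = rng.multivariate_hypergeometric(n, int(succ.sum()), size=n_sim)
    return (np.sum(log_prob(sims) <= log_prob(succ) + 1e-12) + 1) / (n_sim + 1)

def cpfish(succ, n, alpha=0.05):
    """Closure-principle FISH: treatment i differs from the control only if every
    intersection hypothesis (control vs. each treatment subset containing i) is
    rejected, which controls the familywise error rate."""
    k = len(succ) - 1
    decisions = []
    for i in range(1, k + 1):
        subsets = [S for r in range(1, k + 1)
                   for S in combinations(range(1, k + 1), r) if i in S]
        decisions.append(all(
            ffh_mc_pvalue(np.r_[succ[0], succ[list(S)]],
                          np.r_[n[0], n[list(S)]]) <= alpha
            for S in subsets))
    return decisions

# Hypothetical data: survivors out of 20 organisms in a control and three treatments
surv = np.array([19, 18, 14, 8])
n_per_group = np.array([20, 20, 20, 20])
print(cpfish(surv, n_per_group))   # one Boolean per treatment: True = differs from control
```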

4.
Locations and velocities were calculated for microseisms occurring in samples of rock subjected to triaxial loading and injection of pore fluid. This was accomplished by analyzing arrival times of acoustic emission using an automatic first-arrival picker. Apparent velocity anomalies were observed prior to both failure of intact samples and violent slip in samples containing saw cuts. Further analysis revealed that these fluctuations in calculated velocity were not due to changes in the true seismic velocity. Instead, variations in calculated velocity are shown to be related to sampling errors in picking first arrivals. The systematic picking of late first arrivals for small-magnitude events was found to be a persistent bias resulting in low calculated velocities. This has encouraged the reexamination of earthquake records to determine how important sampling biases are in contributing to reported velocity anomalies.
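The abstract refers to an automatic first-arrival picker; a classical STA/LTA energy-ratio picker, used here purely as an assumed example (the authors' picker may differ), illustrates how such picks are made and why low-amplitude events tend to be picked late: a weak onset crosses the trigger threshold later, which is exactly the bias described above.

```python
import numpy as np

def sta_lta_pick(trace, dt, sta_win=2e-5, lta_win=2e-4, threshold=3.0):
    """Return the first time at which the short-term/long-term average energy
    ratio exceeds `threshold`. Window lengths and threshold are illustrative
    values for ultrasonic AE records; centred moving averages are used here for
    brevity, whereas operational pickers normally use trailing windows."""
    e = trace ** 2
    ns, nl = max(int(sta_win / dt), 1), max(int(lta_win / dt), 1)
    sta = np.convolve(e, np.ones(ns) / ns, mode="same")
    lta = np.convolve(e, np.ones(nl) / nl, mode="same") + 1e-20  # avoid division by zero
    ratio = sta / lta
    idx = np.argmax(ratio[nl:] > threshold) + nl                 # skip the unstable start-up
    return idx * dt

# Toy AE trace sampled at 10 MHz: noise followed by an arrival near 0.5 ms
rng = np.random.default_rng(0)
dt = 1e-7
t = np.arange(0.0, 1e-3, dt)
trace = 0.01 * rng.normal(size=t.size)
onset = t > 5e-4
trace[onset] += np.sin(2 * np.pi * 2e5 * t[onset]) * np.exp(-(t[onset] - 5e-4) / 1e-4)
print(sta_lta_pick(trace, dt))
```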

5.
The jet erosion test (JET) is a widely applied method for deriving the erodibility of cohesive soils and sediments. There are suggestions in the literature that further examination of the method widely used to interpret the results of these erosion tests is warranted. This paper presents an alternative approach for such interpretation based on the principle of energy conservation. This new approach recognizes that evaluation of erodibility using the jet tester should involve the mass of soil eroded, so determination of this eroded mass (or else scour volume and bulk density) is required. The theory partitions jet kinetic energy flux into that involved in eroding soil, the remainder being dissipated in a variety of mechanisms. The energy required to erode soil is defined as the product of the eroded mass and a resistance parameter which is the energy required to entrain unit mass of soil, denoted J (in J/kg), whose magnitude is sought. An effective component rate of jet energy consumption is defined which depends on depth of scour penetration by the jet, but not on soil type, or the uniformity of the soil type being investigated. Application of the theory depends on experimentally determining the spatial form of jet energy consumption displayed in erosion of a uniform body of soil, an approach of general application. The theory then allows determination of the soil resistance parameter J as a function of depth of scour penetration into any soil profile, thus evaluating such profile variation in erodibility as may exist. This parameter J has been used with the same meaning in soil and gully erosion studies for the last 25 years. Application of this approach will appear in a companion publication as part 2. Copyright © 2017 John Wiley & Sons, Ltd.
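As a worked illustration of the definitions above, the resistance parameter J is simply the effective jet energy expended in erosion divided by the eroded mass (scour volume times dry bulk density). The sketch uses the standard kinetic energy flux of a circular jet, 0.5ρAU³, and a purely illustrative "effective fraction" standing in for the depth-dependent partitioning described in the abstract; all numbers are hypothetical.

```python
import numpy as np

RHO_W = 1000.0  # water density, kg/m^3

def jet_kinetic_energy_flux(nozzle_diameter, jet_velocity):
    """Kinetic energy flux of a circular jet: 0.5 * rho * A * U^3, in watts."""
    area = np.pi * nozzle_diameter ** 2 / 4
    return 0.5 * RHO_W * area * jet_velocity ** 3

def soil_resistance_J(energy_flux, effective_fraction, duration, scour_volume, bulk_density):
    """Resistance parameter J (J/kg): effective jet energy spent eroding soil divided
    by the eroded mass. `effective_fraction` is an illustrative stand-in for the
    depth-dependent energy partitioning developed in the theory."""
    eroded_mass = scour_volume * bulk_density
    return energy_flux * effective_fraction * duration / eroded_mass

# Hypothetical JET run: 6.4 mm nozzle, 5 m/s jet, 10 min test, 0.4 L scoured, 1500 kg/m^3
flux = jet_kinetic_energy_flux(0.0064, 5.0)
print(flux, "W;", soil_resistance_J(flux, 0.3, 600.0, 0.4e-3, 1500.0), "J/kg")
```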

6.
This paper reports the results of jet tester experiments on soil samples of uniform properties which allow quantitative application of the new theory proposed in part 1 of these publications. This theory explores the possibility that a more adequate indicator of soil erodibility may be obtained by using the mass (and so volume) of soil eroded by the jet together with the depth of scour penetration, rather than penetration depth alone, as assumed in the commonly used data interpretation method. It is shown that scour geometry can be well described using a generalized form of the Gaussian function, defined by its standard deviation and maximum depth. Using a published expression for jet kinetic energy flux, the new theory divides this flux into that used to erode soil and the remainder, which is dissipated in a variety of ways. Jet experiments on a specially prepared uniform soil sample are reported which provide the key to determining the spatial variability in the profile resistance to erosion offered by field soils. This resistance is expressed as the work required to erode unit mass of soil, denoted J (in J/kg). The paper also gives results obtained on the profile variation in J for jet tests carried out at riverine sites on the upper Brisbane River, Queensland, Australia. As expected in most natural soil profiles, the results show an increase in J with depth in the profile. The soil resistance (J) is compared to the traditional soil erodibility indicator (kd). The graphical comparison of these two indicators illustrates the inverse type of relationship between them which is expected from their respective definitions, but this relationship is associated with significant scatter. Possible reasons for this scatter are given, together with comments on jet tester experience in a wide variety of soil types. Copyright © 2017 John Wiley & Sons, Ltd.
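A minimal sketch of the scour-geometry description: a Gaussian defined by its maximum depth and standard deviation is fitted to measured scour depths, and the scour volume (hence eroded mass, given bulk density) follows by integrating the fitted shape as a solid of revolution. The measurements below are hypothetical, and this parameterization is only one way of expressing the generalized Gaussian mentioned in the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

def scour_profile(r, max_depth, sigma):
    """Gaussian scour-hole shape defined by its maximum depth (at the jet axis,
    r = 0) and its standard deviation `sigma`."""
    return max_depth * np.exp(-0.5 * (r / sigma) ** 2)

# Hypothetical scour depths (m) measured at radial offsets r (m) from the jet axis
r = np.array([-0.10, -0.06, -0.03, 0.0, 0.03, 0.06, 0.10])
d = np.array([0.002, 0.010, 0.024, 0.031, 0.025, 0.011, 0.003])
(max_depth, sigma), _ = curve_fit(scour_profile, r, d, p0=[0.03, 0.05])

# Scour volume as a solid of revolution; multiplying by dry bulk density gives
# the eroded mass required by the energy-based theory of part 1.
rr = np.linspace(0.0, 0.2, 400)
volume = np.trapz(2.0 * np.pi * rr * scour_profile(rr, max_depth, sigma), rr)
print(max_depth, sigma, volume)
```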

7.
Non-linear least-squares inversion operates iteratively by updating the model parameters in each step by a correction vector which is the solution of a set of normal equations. Inversion of geoelectrical data is an ill-posed problem. This, and the ensuing suboptimality, restricts the initial model to the near vicinity of the true model. The problem may be reduced by introducing damping into the system of equations. It is shown that an appropriate choice of the damping parameter, obtained adaptively, and the use of a conjugate-gradient algorithm to solve the normal equations make the 1D inversion scheme efficient and robust. The scheme uses an optimal damping parameter that is dependent on the noise in the data in each iterative step. The changes in the damping and relative residual error with iteration number are illustrated. A comparison of its efficacy with the conventional Marquardt and simulated annealing methods, tested on Inman's model, is made. Inversion of induced polarization (IP) soundings is obtained by inverting twice (true and modified) the DC apparent resistivity data. The inversion of IP data presented here is generic and can be applied to any of the IP observables, such as chargeability, frequency effect, phase, etc., as long as these observables are explicitly related to the DC apparent resistivity. The scheme is used successfully in inverting noise-free and noisy synthetic data and field data taken from the published literature.
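The damped Gauss–Newton step described above can be sketched as follows: each iteration solves the damped normal equations (JᵀJ + λI)Δm = Jᵀr with a conjugate-gradient solver, where λ is the damping parameter (chosen adaptively from the data noise in the paper; here it is simply an input). A minimal sketch with a hypothetical Jacobian:

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def damped_gauss_newton_step(J, residual, damping):
    """Solve (J^T J + damping * I) dm = J^T r with conjugate gradients, as in a
    Marquardt-type iteration. `damping` stands in for the adaptively chosen,
    noise-dependent parameter described in the abstract."""
    n = J.shape[1]
    A = LinearOperator((n, n), matvec=lambda v: J.T @ (J @ v) + damping * v)
    dm, info = cg(A, J.T @ residual)
    assert info == 0, "CG did not converge"
    return dm

# Toy linearized problem: 40 data, 4 model parameters (hypothetical Jacobian and residuals)
rng = np.random.default_rng(0)
J = rng.normal(size=(40, 4))
r = rng.normal(size=40)
print(damped_gauss_newton_step(J, r, damping=0.1))
```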

8.
Removing wavelet attenuation and dispersion from ground-penetrating radar (GPR) data can greatly improve both the exploration depth and the resolution of GPR surveys. The method commonly used for this purpose is inverse Q filtering, which requires the Q parameter of the subsurface medium; however, determining Q correctly is difficult. To address this problem, this paper proposes an inverse-filtering method for removing wavelet attenuation and dispersion from GPR data. The method assumes that the subsurface reflectivity is a random series and exploits the minimum-phase property of the equivalent filter of the subsurface medium: the amplitude spectrum of the equivalent filter is estimated and used to construct its inverse filter. Finally, this inverse filter is applied to the GPR data, removing the wavelet attenuation and dispersion.
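A generic sketch of the idea described above (not the authors' exact algorithm): the amplitude spectrum of the equivalent filter is estimated from the trace spectrum (valid if the reflectivity is white), the corresponding minimum-phase spectrum is constructed by the real-cepstrum (homomorphic) method, and a stabilized inverse filter is applied. All parameter values are illustrative.

```python
import numpy as np

def minimum_phase_spectrum(amp, eps=1e-6):
    """Construct the minimum-phase spectrum having the given amplitude spectrum,
    using the real-cepstrum (homomorphic) method. `amp` is the full-length FFT
    amplitude of a real filter (assumed symmetric, length even)."""
    n = len(amp)
    cep = np.real(np.fft.ifft(np.log(np.maximum(amp, eps * amp.max()))))
    win = np.zeros(n)
    win[0] = 1.0
    win[1:n // 2] = 2.0      # fold the anti-causal cepstrum onto the causal part
    win[n // 2] = 1.0
    return np.exp(np.fft.fft(cep * win))

def inverse_filter_spectrum(amp, water_level=1e-3):
    """Stabilized inverse of the minimum-phase equivalent filter."""
    H = minimum_phase_spectrum(amp)
    return np.conj(H) / (np.abs(H) ** 2 + (water_level * np.abs(H).max()) ** 2)

# Toy use: estimate the equivalent-filter amplitude from the (smoothed) trace
# spectrum itself, which is valid if the reflectivity is white/random.
rng = np.random.default_rng(2)
trace = rng.normal(size=512)
amp = np.abs(np.fft.fft(trace))
amp = np.convolve(amp, np.ones(9) / 9, mode="same")   # crude spectral smoothing
corrected = np.real(np.fft.ifft(np.fft.fft(trace) * inverse_filter_spectrum(amp)))
```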

9.

At 12:52 Beijing time on 5 September 2022, an M 6.8 earthquake struck Luding County, Ganzi Tibetan Autonomous Prefecture, Sichuan. Using 1 Hz high-rate GNSS observations near the epicenter, coseismic velocity and displacement waveforms were obtained and the epicenter and magnitude of the Luding earthquake were rapidly determined. The results show that the epicenter derived from high-rate GNSS differs by 32 km from the epicenter released by the United States Geological Survey (USGS) and by 16 km from the value released by the China Earthquake Networks Center, while the GNSS-derived magnitude differs from both agencies' values by only 0.1 magnitude units. For time-critical applications such as earthquake early warning and rapid post-earthquake response, a fast inversion method for line-source rupture characteristics that combines high-rate GNSS and strong-motion data is proposed. The Luding experiment shows that a stable line-source model can be obtained 20 s after the earthquake, with a rupture length of 33.3 km, a rupture direction of 151° and a rupture-mode value of 0.6; the rupture direction differs by 14° from the fault strike of the USGS focal mechanism solution, and the inverted rupture mode is bilateral. The proposed fast inversion method for fault rupture characteristics can be used for earthquake early warning, rapid post-earthquake damage assessment and emergency response, and provides a reference for future rapid determination of earthquake rupture characteristics from combined high-rate GNSS and strong-motion data.
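A minimal sketch of the waveform-level quantities mentioned above (velocity waveforms and peak ground displacement from 1 Hz GNSS displacements); the epicenter, magnitude and line-source inversions themselves are not reproduced here, and the record below is synthetic.

```python
import numpy as np

def gnss_velocity_and_pgd(disp_enu, dt=1.0):
    """From 1 Hz GNSS displacement components (east, north, up), derive velocity
    waveforms by numerical differentiation and the peak ground displacement (PGD)."""
    vel = np.gradient(disp_enu, dt, axis=0)            # velocity waveforms
    pgd = np.max(np.linalg.norm(disp_enu, axis=1))     # peak 3-D displacement
    return vel, pgd

# Toy 1 Hz displacement record (metres), 120 s long, with a static offset after ~40 s
t = np.arange(120.0)
disp = np.column_stack([0.05 * np.tanh((t - 40.0) / 5.0),
                        -0.03 * np.tanh((t - 40.0) / 5.0),
                        0.01 * np.tanh((t - 40.0) / 5.0)])
vel, pgd = gnss_velocity_and_pgd(disp)
print(pgd)
```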


10.
Lifting wavelets: a new method applicable to gravity and magnetic data processing
The wavelet transform is widely used in the processing of gravity and magnetic data. Second-generation wavelets constructed with the lifting scheme inherit the good properties of first-generation wavelets while adding flexibility, adaptability and fast, easy implementation, and they have a wider range of applications. This paper introduces the idea of constructing second-generation wavelets via the lifting scheme and discusses their prospects for application in gravity and magnetic data processing.
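The lifting construction can be illustrated with its simplest instance, the Haar wavelet written as split–predict–update steps; real gravity and magnetic applications would use longer predict/update filters and several decomposition levels, so this is only a sketch of the second-generation idea.

```python
import numpy as np

def haar_lifting_forward(x):
    """One level of the Haar wavelet transform written as a lifting scheme
    (split -> predict -> update)."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)   # split
    detail = odd - even                                         # predict odd from even
    approx = even + detail / 2                                  # update: preserve the mean
    return approx, detail

def haar_lifting_inverse(approx, detail):
    """Invert the lifting steps in reverse order."""
    even = approx - detail / 2
    odd = detail + even
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

profile = np.array([5.0, 5.2, 5.1, 6.8, 7.0, 6.9, 5.3, 5.1])   # toy gravity profile (mGal)
a, d = haar_lifting_forward(profile)
assert np.allclose(haar_lifting_inverse(a, d), profile)         # lifting is exactly invertible
```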

11.
A closed-form stress plasticity solution is presented for gravitational and earthquake-induced earth pressures on retaining walls. The proposed solution is essentially an approximate yield-line approach, based on the theory of discontinuous stress fields, and takes into account the following parameters: (1) weight and friction angle of the soil material, (2) wall inclination, (3) backfill inclination, (4) wall roughness, (5) surcharge at soil surface, and (6) horizontal and vertical seismic acceleration. Both active and passive conditions are considered by means of different inclinations of the stress characteristics in the backfill. Results are presented in the form of dimensionless graphs and charts that elucidate the salient features of the problem. Comparisons with established numerical solutions, such as those of Chen and Sokolovskii, show satisfactory agreement (maximum error for active pressures about 10%). It is shown that the solution does not perfectly satisfy equilibrium at certain points in the medium, and hence cannot be classified in the context of limit analysis theorems. Nevertheless, extensive comparisons with rigorous numerical results indicate that the solution consistently overestimates active pressures and under-predicts the passive. Accordingly, it can be viewed as an approximate lower-bound solution, rather than a mere predictor of soil thrust. Compared to the Coulomb and Mononobe–Okabe equations, the proposed solution is simpler, more accurate (especially for passive pressures) and safe, as it overestimates active pressures and underestimates the passive. Contrary to the aforementioned solutions, the proposed solution is symmetric, as it can be expressed by a single equation—describing both active and passive pressures—using appropriate signs for friction angle and wall roughness.
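For orientation, the classical benchmark coefficients against which such closed-form solutions are compared can be computed as below; this sketch uses only the simplest (Rankine) special case of the Coulomb theory, i.e. a smooth vertical wall, horizontal backfill, no surcharge and no earthquake, and is not the stress-plasticity solution of the paper.

```python
import numpy as np

def rankine_coefficients(phi_deg):
    """Active and passive earth-pressure coefficients for the simplest benchmark
    case: smooth vertical wall, horizontal backfill, static conditions."""
    phi = np.radians(phi_deg)
    Ka = np.tan(np.pi / 4 - phi / 2) ** 2
    Kp = np.tan(np.pi / 4 + phi / 2) ** 2
    return Ka, Kp

for phi in (20, 30, 40):
    print(phi, rankine_coefficients(phi))
```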

12.
It is well established that small tuned mass dampers (TMDs) attached to structures are very effective in reducing excessive harmonic vibrations induced by external loads, but are less effective in the context of earthquake engineering problems. For this reason, large mass ratio TMDs have been proposed with the objective of adding a significant amount of damping to structures, thus constituting a good means of reducing structural response in these cases. This solution has other important and attractive dynamic features, such as robustness to system uncertainties and reduction of the motion of the inertial mass. In this context, this paper aims to describe an alternative methodology to existing procedures used to tune these devices to earthquake loads and to present some additional considerations regarding its performance in controlling seismic vibrations. The main feature of the proposed method consists of establishing a direct proportion between the damping ratios of the structure's first two vibration modes and the adopted mass ratio. By equalizing the damping ratios of the system's main vibration modes, this proposal also facilitates the use of simplified methods, such as modal analysis based on response spectra. To demonstrate the usefulness of this alternative methodology, an application example is presented, which was also used to perform a parametric study involving other tuning methods and to estimate mass ratio values beyond which there is no significant advantage in increasing the TMD mass. Copyright © 2012 John Wiley & Sons, Ltd.
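The quantity central to the proposed tuning, the damping ratios of the first two vibration modes of the structure–TMD system, can be evaluated from the complex eigenvalues of the coupled two-degree-of-freedom model, as sketched below. The structural and TMD parameter values are hypothetical and the tuning shown is not the authors' rule; the sketch only inspects the modal damping ratios that the method seeks to equalize.

```python
import numpy as np

def modal_damping(ms, ks, cs, mt, kt, ct):
    """Damping ratios of the two modes of a SDOF structure (ms, ks, cs) carrying a
    TMD (mt, kt, ct), from the eigenvalues of the state-space matrix."""
    M = np.diag([ms, mt])
    K = np.array([[ks + kt, -kt], [-kt, kt]])
    C = np.array([[cs + ct, -ct], [-ct, ct]])
    A = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
    lam = np.linalg.eigvals(A)
    lam = lam[np.imag(lam) > 0]                  # keep one eigenvalue per conjugate pair
    return np.sort(-np.real(lam) / np.abs(lam))  # damping ratio of each mode

# Hypothetical structure: 1e6 kg, 2 Hz, 2% damping; large-mass-ratio TMD (mu = 0.5)
ms, ks = 1e6, (2 * np.pi * 2.0) ** 2 * 1e6
cs = 2 * 0.02 * np.sqrt(ks * ms)
mt = 0.5 * ms
kt = (2 * np.pi * 1.3) ** 2 * mt                 # TMD tuned below the structure frequency
ct = 2 * 0.15 * np.sqrt(kt * mt)
print(modal_damping(ms, ks, cs, mt, kt, ct))
```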

13.
We present a new method for the prediction of discontinuities and lithological variations ahead of the tunnel face. The automatic procedure is applied to data collected by seismic reflection surveys, with the sources and sensors located along the tunnel. The method allows: i) estimating an average value of the wave velocity; ii) detecting the discontinuities for each source point; and iii) analyzing and plotting the number of superposed estimates for each node of the domain. The final result can be interpreted as the probability of detecting a discontinuity at a certain distance from the tunnel face. The method automatically identifies the peaks in the seismograms that can be related to a reflection; owing to this automation, it only requires the source–receiver geometry and the data acquisition parameters. The procedure has been tested on synthetic and real data from a seismic survey in a tunnel under construction. The results indicate that the method runs very fast and reliably identifies lithological changes and discontinuities up to more than 100 m ahead of the tunnel face.

14.
It is well-known that the mixed moisture content-pressure head formulation of Richards’ equation performs relatively poorly if the pressure head is used as primary variable, especially for problems involving infiltration into initially very dry material. For this reason, primary variable switching techniques have been proposed where, depending on the current degree of saturation, either the moisture content or the pressure head is used as primary variable when solving the discrete governing equations iteratively. In this paper, an alternative to these techniques is proposed. Although, from a mathematical point of view, the resulting procedure bears some resemblance to the standard primary variable switching procedure, it is much simpler to implement and involves only slight modification of existing codes making use of the mixed formulation with pressure head as primary variable. Representative examples are given to demonstrate the favourable performance of the new procedure.
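The switching idea that the paper takes as its starting point can be illustrated with the standard van Genuchten retention curve: in dry material the water content is the better-behaved unknown, whereas near saturation θ(h) flattens and the pressure head is preferred. The sketch below only selects a primary variable per node; the saturation threshold and soil parameters are illustrative, and it represents the standard switching technique rather than the paper's proposed alternative.

```python
import numpy as np

def van_genuchten_theta(h, theta_r=0.05, theta_s=0.40, alpha=2.0, n=1.8):
    """Water content theta(h) from the van Genuchten retention curve
    (h in metres of pressure head, negative in the unsaturated zone)."""
    m = 1.0 - 1.0 / n
    Se = np.where(h < 0, (1.0 + (alpha * np.abs(h)) ** n) ** (-m), 1.0)
    return theta_r + Se * (theta_s - theta_r)

def primary_variable(h, switch_Se=0.9, theta_r=0.05, theta_s=0.40):
    """Pick the primary variable per node: water content where the soil is
    relatively dry, pressure head near saturation (threshold is illustrative)."""
    Se = (van_genuchten_theta(h, theta_r, theta_s) - theta_r) / (theta_s - theta_r)
    return np.where(Se < switch_Se, "theta", "head")

h_nodes = np.array([-10.0, -1.0, -0.2, -0.01, 0.05])   # pressure heads along a soil column
print(primary_variable(h_nodes))
```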

15.
The ensemble Kalman filter (EnKF) is a commonly used real-time data assimilation algorithm in various disciplines. Here, the EnKF is applied, in a hydrogeological context, to condition log-conductivity realizations on log-conductivity and transient piezometric head data. In this case, the state vector is made up of log-conductivities and piezometric heads over a discretized aquifer domain, the forecast model is a groundwater flow numerical model, and the transient piezometric head data are sequentially assimilated to update the state vector. It is well known that all Kalman filters perform optimally for linear forecast models and a multiGaussian-distributed state vector. Of the different Kalman filters, the EnKF provides a robust solution to address non-linearities; however, it does not handle well non-Gaussian state-vector distributions. In the standard EnKF, as time passes and more state observations are assimilated, the distributions become closer to Gaussian, even if the initial ones are clearly non-Gaussian. A new method is proposed that transforms the original state vector into a new vector that is univariate Gaussian at all times. Back transforming the vector after the filtering ensures that the initial non-Gaussian univariate distributions of the state-vector components are preserved throughout. The proposed method is based in normal-score transforming each variable for all locations and all time steps. This new method, termed the normal-score ensemble Kalman filter (NS-EnKF), is demonstrated in a synthetic bimodal aquifer resembling a fluvial deposit, and it is compared to the standard EnKF. The proposed method performs better than the standard EnKF in all aspects analyzed (log-conductivity characterization and flow and transport predictions).
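The normal-score transform at the heart of the NS-EnKF can be sketched briefly: each variable is ranked and mapped to standard-normal quantiles before the Kalman update, and the stored mapping is inverted afterwards. Tie handling and tail extrapolation are treated more carefully in practice; the bimodal sample below is synthetic.

```python
import numpy as np
from scipy.stats import norm
from scipy.interpolate import interp1d

def normal_score_transform(values):
    """Map a sample of arbitrary (e.g. bimodal) values to standard-normal scores
    by ranking, and return the back-transform used after the Kalman update."""
    vals = np.asarray(values, dtype=float)
    order = np.argsort(vals)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, vals.size + 1)
    scores = norm.ppf((ranks - 0.5) / vals.size)            # Gaussian quantiles
    # back-transform: interpolate from score back to the original data values
    back = interp1d(np.sort(scores), vals[order],
                    bounds_error=False, fill_value=(vals.min(), vals.max()))
    return scores, back

# Bimodal log-conductivity sample (hypothetical): two facies
rng = np.random.default_rng(3)
logk = np.concatenate([rng.normal(-5, 0.3, 500), rng.normal(-2, 0.3, 500)])
z, back = normal_score_transform(logk)
updated_z = z + 0.1                                          # stand-in for a Kalman update
logk_updated = back(updated_z)                               # back to log-conductivity space
```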

16.
By comparing three sequential extraction procedures, a new optimized extraction scheme for the molybdenum association in environmental samples was proposed. Five operational steps were described: exchangeable (KH2PO4 + K2HPO4; including water-soluble), associated with organic matter (NaOH), Fe–Mn oxides and/or carbonates (HCl), sulfides (H2O2) and residue (HNO3 + HF + H2O2). The optimized extraction scheme was compared with Tessier's procedure and that of the Commission of the European Communities Bureau of Reference (BCR) as applied to black shales. Results showed that Tessier's procedure gave the lowest concentration values for exchangeable molybdenum and the highest values for residual molybdenum, which did not reflect the efficiency of the extraction reagents. The BCR procedure gave the highest values for oxidizable molybdenum and resolved only four molybdenum fractions, which did not describe the molybdenum fractions in the black shales in detail. The optimized extraction scheme demonstrated a clear improvement in extraction efficiency over Tessier's procedure, giving the lowest residual molybdenum, and revealed more detailed fraction information for molybdenum in black shales than the BCR procedure. Therefore, after comparison with the other two extraction procedures, the optimized extraction scheme proved suitable for molybdenum in black shales and allowed accurate determination of the molybdenum fractions and of the source of bioavailable Mo.

17.
Because jadeite appraisal is inherently tied to current market conditions and involves prediction, traditional appraisal methods rely too heavily on the appraiser's experience and are highly subjective and arbitrary, making it difficult to guarantee objective, scientific and fair results. Given the complex and uncertain relationship between jadeite price and its quality factors, we propose using the Delphi method to judge the relationships among the weights and then computing the weight of each quality factor with a mathematical model, so as to reduce the bias introduced by the appraiser's subjectivity. With this method, appraisers can quickly, accurately and fairly quantify the data used in the early stage of jadeite valuation.

18.
Atmospheric aerosol particle size distribution data derived from almucantar scans performed by the CIMEL sunphotometer at Belsk, Poland, in 2005 are used for the estimation of aerosol optical thickness (AOT) in the UV range by applying Mie theory. The results obtained are compared with the direct Sun measurement data from the CIMEL sunphotometer and the collocated Brewer spectrophotometer No. 064, as well as with AOT obtained by Ångström extrapolation from the direct Sun measurements performed by the CIMEL sunphotometer in the visible range of wavelengths. Mean differences between calculated and measured values of AOT are up to about 8% in the UV-B range, which is close to the measurement uncertainty of the Brewer spectrophotometer and much smaller than the differences obtained by means of Ångström extrapolation (over 24% in the UV-B range).
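The Ångström extrapolation used as a comparison above amounts to fitting the power law τ(λ) = βλ⁻ᵅ to two visible-channel AOT values and evaluating it at a UV wavelength; a minimal sketch with hypothetical numbers:

```python
import numpy as np

def angstrom_extrapolate(tau1, lam1, tau2, lam2, lam_target):
    """Extrapolate aerosol optical thickness to a target wavelength using the
    Angstrom power law tau(lambda) = beta * lambda**(-alpha), with alpha and beta
    determined from two visible-channel measurements (wavelengths in micrometres)."""
    alpha = -np.log(tau1 / tau2) / np.log(lam1 / lam2)   # Angstrom exponent
    beta = tau1 * lam1 ** alpha                          # turbidity coefficient
    return beta * lam_target ** (-alpha)

# Hypothetical visible-channel AOTs extrapolated into the UV-B (305 nm)
print(angstrom_extrapolate(tau1=0.25, lam1=0.440, tau2=0.15, lam2=0.870, lam_target=0.305))
```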

19.
Tracers, such as fluorescein dye, are widely employed to measure overland flow speeds by time-of-travel along measured flow paths. Among several disadvantages of this method are the involvement of human reaction time when using stop-watches, and the relatively long travel path that is consequently needed for reliable timing. Long flow paths mean that local variability along the flow path cannot be detected. This paper describes a new optical tachometer that overcomes these limitations, as well as offering other advantages. It is based on the use of a small floating reflector target that is carried on the surface tension film, and which passes between two reflective sensors mounted above the flow. The new device allows virtual 'spot' measurements of surface flow speed over a path as short as 1 cm, and eliminates the influence of human reaction time. The new device is battery powered and portable, and provides an improved alternative to dye timing in many field and laboratory applications. Its use will allow the collection of more refined data than have hitherto been easily achievable. Copyright © 2003 John Wiley & Sons, Ltd.

20.
Airborne electromagnetic (AEM) surveys are currently being flown over populated areas and applied to detailed problems using high flight line densities. Interpretation information is supplied through a model of the subsurface resistivity distribution. Theoretical and survey data are used here to study the character and reliability of such models. Although the survey data were obtained using a fixed-wing system, the corresponding associations with helicopter, towed-bird systems are discussed. Both Fraser half-space and 1D inversion techniques are considered in relation to their ability to distinguish geological, cultural and environmental influences on the survey data. Fraser half-space modelling provides the dual interpretation parameters of apparent resistivity and apparent depth at each operational frequency. The apparent resistivity was found to be a remarkably stable parameter and appears robust to the presence of a variety of at-surface cultural features. Such features provide both incorrect altitude data and multidimensional influences. Their influences are observed most strongly in the joint estimate of apparent depth and this accounts for the stability of the apparent resistivity. Positive apparent depths, in the example data, result from underestimated altitude measurements. It is demonstrated that increasingly negative apparent depths are associated with increasing misfits between a 1D model and the data. Centroid depth calculations, which are a transform of the Fraser half-space parameters, provide an example of the detection of non-1D influences on data obtained above a populated area. 1D inversion of both theoretical and survey data is examined. The simplest use of the 1D inversion method is in providing an estimate of a half-space resistivity. This can be undertaken prior to multilayer inversion as an initial assessment. Underestimated altitude measurements also enter the problem and, in keeping with the Fraser pseudo-layer concept, an at-surface highly resistive layer of variable thickness can be usefully introduced as a constrained parameter. It is clearly difficult to ascribe levels of significance to a ‘measure’ of misfit contained in a negative apparent depth with the dimensions of metres. The reliability of 1D models is better assessed using a formal misfit parameter. With the misfit parameter in place, the example data suggest that the 1D inversion methods provide reliable apparent resistivity values with a higher resolution than the equivalent information from the Fraser half-space estimates.
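A formal misfit parameter of the kind referred to above can be as simple as an error-normalized RMS difference between the observed AEM responses and the 1D model responses, with values near 1 indicating a fit to within the assumed data errors. This generic definition (with hypothetical data) is a sketch, not necessarily the exact parameter used in the paper.

```python
import numpy as np

def normalized_rms_misfit(data, model_response, noise_std):
    """Error-normalized RMS misfit between observed AEM data and a 1D model
    response; values near 1 indicate the model fits the data to within noise."""
    return np.sqrt(np.mean(((data - model_response) / noise_std) ** 2))

# Hypothetical in-phase/quadrature responses (ppm) at a few frequencies
obs = np.array([120.0, 310.0, 620.0, 95.0, 260.0, 540.0])
pred = np.array([125.0, 300.0, 640.0, 90.0, 255.0, 555.0])
err = np.array([10.0, 15.0, 30.0, 8.0, 12.0, 25.0])
print(normalized_rms_misfit(obs, pred, err))
```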
