The paper reviews the methods of seismic hazard analysis currently in use, analyzing the strengths and weaknesses of the different approaches. The review is performed from the perspective of a user of seismic hazard analysis results for applications such as the design of critical and general (non-critical) civil infrastructure, and technical and financial risk analysis. A set of criteria is developed for, and applied to, an objective assessment of the capabilities of the different analysis methods. It is demonstrated that traditional probabilistic seismic hazard analysis (PSHA) methods have significant deficiencies that limit their practical applicability. These deficiencies, which have been revealed in some recent large-scale studies, are rooted in the use of inadequate probabilistic models and an insufficient understanding of modern concepts of risk analysis. They result in an inability to treat dependencies between physical parameters correctly and, ultimately, in an incorrect treatment of uncertainties. As a consequence, the results of PSHA studies have been found to be unrealistic in comparison with empirical information from the real world. Attempts to compensate for these problems through the systematic use of expert elicitation have, so far, not improved the situation. It is also shown that scenario earthquakes developed by disaggregation of the results of a traditional PSHA may not be conservative with respect to energy conservation and should not be used for the design of critical infrastructure without validation. Because the assessment of the technical as well as the financial risks associated with potential earthquake damage requires a risk analysis, current methods remain based on a probabilistic approach with its unresolved deficiencies.
Traditional deterministic or scenario-based seismic hazard analysis methods provide a reliable and, in general, robust design basis for applications such as the design of critical infrastructure, especially when combined with systematic sensitivity analyses based on validated phenomenological models. Deterministic seismic hazard analysis incorporates uncertainties through safety factors, which are derived from experience as well as from expert judgment. Deterministic methods with high safety factors may lead to overly conservative results, especially when applied to generally short-lived civil structures. Scenarios used in deterministic seismic hazard analysis have a clear physical basis: they are related to seismic sources discovered by geological, geomorphological, geodetic, and seismological investigations or derived from historical records. Scenario-based methods can be expanded for risk analysis applications with an extended data analysis that provides the frequency of seismic events. Such an extension yields a better-informed risk model that is suitable for risk-informed decision making.
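The frequency extension mentioned above is commonly built on an earthquake-catalog recurrence relation. A minimal sketch using the standard Gutenberg-Richter law, log10 N(m) = a - b·m; the a and b values below are illustrative defaults, not taken from the paper:

```python
def gr_annual_rate(m, a=4.0, b=1.0):
    """Annual rate of earthquakes with magnitude >= m under the
    Gutenberg-Richter relation log10 N(m) = a - b*m.
    The a and b values are illustrative, not region-specific."""
    return 10 ** (a - b * m)

def return_period(m, a=4.0, b=1.0):
    """Mean return period (years) for events of magnitude >= m."""
    return 1.0 / gr_annual_rate(m, a, b)
```

With these illustrative parameters, a magnitude-6 event has an annual rate of 0.01 and hence a mean return period of 100 years; fitting a and b to a regional catalog is what turns a deterministic scenario set into a frequency-annotated risk model.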
Maps showing the potential for soil erosion at 1:100,000 scale are produced for a study area within Lebanon; these can be used for evaluating erosion of Mediterranean karstic terrain with two different sets of impact factors built into an erosion model. The first set of factors comprises soil erodibility, morphology, land cover/use, and rainfall erosivity. The second is obtained by adding a fifth factor, rock infiltration, to the first. High infiltration can reflect high recharge, which decreases the potential for surface runoff and hence the quantity of transported material. Infiltration is derived as a function of lithology, lineament density, karstification, and drainage density, all of which can be readily extracted from satellite imagery. The influence of these factors is assessed by a weight/rate approach that shares features of both quantitative and qualitative methods and relies on a pair-wise comparison matrix. The main outcome was the production of factorial maps and erosion susceptibility maps (scale 1:100,000). Spatial and attribute comparison of the erosion maps indicates that the model that includes a measure of rock infiltration better represents erosion potential. Field investigation of rills and gullies shows 87.5% precision for the model with rock infiltration, 17.5% greater than the precision of the model without it. These results indicate the necessity and importance of integrating information on infiltration of rock outcrops when assessing soil erosion in Mediterranean karst landscapes.
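A pair-wise comparison matrix of the kind described is typically reduced to factor weights via the principal eigenvector, which the geometric-mean (logarithmic least squares) method approximates well. The sketch below uses the four factors of the first model; the comparison judgments in the matrix are hypothetical illustrations, not the study's actual values:

```python
import math

# Factors of the first model (order fixed by the list below).
factors = ["soil erodibility", "morphology", "land cover/use", "rainfall erosivity"]

# Hypothetical pair-wise comparison matrix: A[i][j] is how strongly
# factor i is judged to matter relative to factor j (reciprocal matrix).
A = [
    [1.0, 2.0, 3.0, 2.0],
    [1/2, 1.0, 2.0, 1.0],
    [1/3, 1/2, 1.0, 1/2],
    [1/2, 1.0, 2.0, 1.0],
]

def pairwise_weights(matrix):
    """Approximate principal-eigenvector weights by normalizing the
    geometric mean of each row of the comparison matrix."""
    gms = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

weights = pairwise_weights(A)
```

The weights sum to one and are then multiplied by per-pixel factor ratings to score erosion susceptibility; adding the fifth factor (rock infiltration) simply enlarges the matrix to 5x5.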
We describe empirical results from a multi-disciplinary project that support modeling complex processes of land-use and land-cover change in exurban parts of Southeastern Michigan. Based on two different conceptual models, one describing the evolution of urban form as a consequence of residential preferences and the other describing land-cover changes in an exurban township as a consequence of residential preferences, local policies, and a diversity of development types, we describe a variety of empirical data collected to support the mechanisms that we encoded in computational agent-based models. We used multiple methods, including social surveys, remote sensing, and statistical analysis of spatial data, to collect data that could be used to validate the structure of our models, calibrate their specific parameters, and evaluate their output. The data were used to investigate this system in the context of several themes from complexity science, including (a) macro-level patterns; (b) autonomous decision-making entities (i.e., agents); (c) heterogeneity among those entities; (d) social and spatial interactions that operate across multiple scales; and (e) nonlinear feedback mechanisms. The results point to the importance of collecting data on agents and their interactions when producing agent-based models, the general validity of our conceptual models, and some changes that we needed to make to these models following data analysis. The calibrated models have been and are being used to evaluate landscape dynamics and the effects of various policy interventions on urban land-cover patterns.
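The mechanism the first conceptual model encodes, heterogeneous agents choosing residential locations by preference, can be sketched in a few lines. Everything below (the two location attributes, the linear utility, the parameter values) is an illustrative assumption, not the project's actual model:

```python
import random

random.seed(42)  # reproducible illustration

class Location:
    """A candidate residential site with two scored attributes."""
    def __init__(self, aesthetic, accessibility):
        self.aesthetic = aesthetic          # scenic quality, 0..1
        self.accessibility = accessibility  # nearness to services, 0..1
        self.occupied = False

class Resident:
    """An agent with a heterogeneous preference weight (theme (c))."""
    def __init__(self, w_aesthetic):
        self.w = w_aesthetic

    def utility(self, loc):
        # Assumed linear trade-off between aesthetics and accessibility.
        return self.w * loc.aesthetic + (1 - self.w) * loc.accessibility

    def settle(self, locations):
        free = [loc for loc in locations if not loc.occupied]
        best = max(free, key=self.utility)
        best.occupied = True
        return best

locations = [Location(random.random(), random.random()) for _ in range(50)]
residents = [Resident(random.random()) for _ in range(10)]
chosen = [r.settle(locations) for r in residents]
```

Macro-level settlement patterns (theme (a)) emerge from repeated runs of such micro-decisions; the empirical surveys described above are what calibrate the preference weights and validate the utility structure.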
Two algorithms for the in-situ detection and identification of vertical free convective and double-diffusive flows in groundwater monitoring wells or boreholes are proposed. One algorithm detects the causes (driving forces) and the other the effects (convection or double diffusion) of vertical transport processes, based on geophysical borehole measurements in the water column. Five density-driven flow processes are identified: thermal, solutal, and thermosolutal convection, which lead to an equalization of a vertical density gradient, as well as saltfingers and diffusive layering, which lead to its intensification. The occurrence of density-driven transport processes could be proven in many groundwater monitoring wells and boreholes; shallow sections of boreholes or monitoring wells in particular are affected dramatically by such vertical flows. Deep sections are also impaired, as the critical threshold for the onset of density-driven flow is considerably low. In monitoring wells or boreholes, several sections with different types of density-driven vertical flow may exist at the same time. Results from experimental investigations in a medium-scale testing facility with a high aspect ratio (height/radius = 19) and from numerical modeling of a water column agree well with the parameters of the in-situ detected convection cells.
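The distinction among the regimes listed above follows from the signs of the vertical temperature and salinity gradients. A rule-of-thumb sketch assuming a linear equation of state with textbook seawater coefficients; the paper's actual algorithms work from geophysical borehole logs, not from this simplification:

```python
# Linear equation-of-state coefficients (typical textbook values).
ALPHA = 2.0e-4  # thermal expansion coefficient (1/K)
BETA = 7.6e-4   # haline contraction coefficient (1/(g/kg))

def classify(dT_dz, dS_dz):
    """Classify the density-driven regime from vertical gradients taken
    positive upward: dT_dz > 0 means warmer water above,
    dS_dz > 0 means saltier water above."""
    thermal = -ALPHA * dT_dz  # warm above -> lighter above -> stabilizing
    haline = BETA * dS_dz     # salty above -> heavier above -> destabilizing
    total = thermal + haline  # upward density gradient; > 0 is top-heavy
    if total > 0:
        return "convection"          # statically unstable: overturning
    if dT_dz > 0 and dS_dz > 0:
        return "saltfingers"         # warm, salty over cold, fresh
    if dT_dz < 0 and dS_dz < 0:
        return "diffusive layering"  # cold, fresh over warm, salty
    return "stable"
```

Saltfingers and diffusive layering are exactly the two statically stable but double-diffusively unstable cases: the net density gradient is stable while one component (salinity or temperature, respectively) stores releasable potential energy.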
The time evolution of a two-dimensional line thermal, a turbulent flow produced by an initial element with significant buoyancy released in a large water body, is studied numerically with the two-equation k-ε model for turbulence closure. The numerical results show that the thermal is characterized by a vortex-pair flow and a kidney-shaped concentration structure with double peak maxima; the computed flow details and scalar mixing characteristics can be described by self-similar relations beyond a dimensionless time of around 10. There are two regions in the flow field of a line thermal: a mixing region, where the concentration of tracer fluid is high and the flow is turbulent and rotational with a pair of vortex eyes, and an ambient region, where the concentration is zero and the flow is potential and well described by a doublet model with strength very close to that given by earlier experimental and analytical studies. The added virtual mass coefficient of the thermal motion is also evaluated.
Lock-release gravity currents with a viscous self-similar regime are simulated using the renormalization-group (RNG) k-ε model for Reynolds-stress closure. Besides the turbulent regime, with initially a slumping phase of constant current front speed and later an inviscid self-similar phase with front speed decreasing as t^(-1/3) (where t is the time measured from release), the viscous self-similar regime is satisfactorily reproduced, with front speed decreasing as t^(-4/5), consistent with well-known experimental observations.
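The two power laws quoted above imply corresponding front-position laws, obtained by integrating the speed in time: u ~ t^(-1/3) gives x ~ t^(2/3) for the inviscid phase, and u ~ t^(-4/5) gives x ~ t^(1/5) for the viscous phase. A quick numeric check of that integration, with constant prefactors set to one for illustration:

```python
def front_position(t, exponent):
    """Front position from speed u = t**exponent (prefactor omitted):
    x(t) = t**(exponent + 1) / (exponent + 1), valid for exponent > -1."""
    return t ** (exponent + 1) / (exponent + 1)

# Doubling the elapsed time multiplies the inviscid-phase front position
# by 2**(2/3) ~ 1.587, but the viscous-phase position by only 2**(1/5) ~ 1.149.
ratio_inviscid = front_position(2.0, -1/3) / front_position(1.0, -1/3)
ratio_viscous = front_position(2.0, -4/5) / front_position(1.0, -4/5)
```

The much weaker viscous-phase growth is why the late-time regime is hard to reach experimentally and is a useful discriminator when validating a turbulence closure against lock-release data.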
We use a synthetic data experiment to assess the accuracy of ocean tides estimated from satellite altimetry data, with emphasis on the impact of the phase-locked internal tide, which has a surface expression of several centimeters near its sites of genesis. Previous tidal estimates have regarded this signal as a random measurement error; however, it is deterministic and not scale-separated from the barotropic (surface) tide around complex bathymetric features. The synthetic data experiments show that the internal tide has a negligible impact on the barotropic tidal fields inferred under these circumstances, and the barotropic dissipation (a quadratic functional of the tidal fields) is in good agreement with the energetics of the three-dimensional regional primitive equations model which is the source of the synthetic data. 相似文献