Similar Literature
20 similar documents found
1.
Building on a review of international research on pyroclastic flow hazard models, this paper applies the Flow3D model to delineate hazard zones for the pyroclastic flows that a future large eruption of Tianchi volcano, Changbaishan, in northeastern China could produce. Based on the modern topography of Tianchi volcano, eleven possible flow paths were defined for pyroclastic flows generated by a future explosive eruption. The simulations show a maximum hazard-zone radius of 13.7 km for an eruption column height of 10 km, 35.4 km for a column height of 20 km, and 57.8 km for a column height of 30 km. On this basis, the areas that pyroclastic flows would cover in future medium-scale, large-scale and very-large-scale eruptions of Tianchi volcano were derived, yielding China's first pyroclastic flow hazard zoning map for Changbaishan Tianchi volcano.

2.
M. J. Booij, Hydrological Processes, 2003, 17(13): 2581-2598
Appropriate spatial scales of dominant variables are determined and integrated into an appropriate model scale. This is done in the context of the impact of climate change on flooding in the River Meuse in Western Europe. The objective is achieved by using observed elevation, soil type, land use type and daily precipitation data from several sources and employing different relationships between scales, variable statistics and outputs. The appropriate spatial scale of a key variable is assumed to be equal to a fraction of the spatial correlation length of that variable. This fraction was determined on the basis of relationships between statistics and scale and an accepted error in the estimation of the statistic of 10%. This procedure resulted in an appropriate spatial scale for precipitation of about 20 km in an earlier study. The application to river basin variables revealed appropriate spatial scales for elevation, soil and land use of 0.1, 5.3 and 3.3 km, respectively. The appropriate model scale is determined by multiplying the appropriate variable scales with their associated weights. The weights are based on SCS curve number method relationships between the peak discharge and some specific parameters like slope and curve number. The values of these parameters are dependent on the scale of each key variable. The resulting appropriate model scale is about 10 km, implying 225–250 model cells in an appropriate model of the Meuse basin meant to assess the impact of climate change on river flooding. The usefulness of the appropriateness procedure is in its ability to assess the appropriate scales of the individual key variables before model construction and integrate them in a balanced way into an appropriate model scale. Another use of the procedure is that it provides a framework for decisions about the reduction or expansion of data networks and needs. Copyright © 2003 John Wiley & Sons, Ltd.
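The final integration step lends itself to a compact illustration. The sketch below, a minimal example in Python, combines the variable scales quoted in the abstract into a single model scale; the weights are hypothetical placeholders, since the paper derives its actual weights from SCS curve number relationships that the abstract does not reproduce.

```python
# Weighted integration of appropriate variable scales into a model scale.
# Variable scales [km] are taken from the abstract; the weights are
# hypothetical placeholders (the paper bases them on the sensitivity of
# peak discharge to slope, curve number, etc.).
appropriate_scale_km = {
    "precipitation": 20.0,   # from the earlier study cited in the abstract
    "elevation": 0.1,
    "soil": 5.3,
    "land_use": 3.3,
}
weights = {                  # hypothetical; must sum to 1
    "precipitation": 0.45,
    "elevation": 0.10,
    "soil": 0.25,
    "land_use": 0.20,
}

model_scale_km = sum(weights[k] * appropriate_scale_km[k] for k in weights)
print(f"appropriate model scale ~ {model_scale_km:.1f} km")  # ~11 km, the ~10 km order quoted
```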

3.
I. Haltas, M. L. Kavvas, Hydrological Processes, 2011, 25(23): 3659-3665
Fractals are famous for their self‐similar nature at different spatial scales. Similar to fractals, solutions of scale‐invariant processes are self‐similar at different space–time scales. This unique property of scale‐invariant processes can be utilized to translate the solution of the processes at a much larger or smaller space–time scale (domain) based on the solution calculated on the original space–time scale. This study investigates scale invariance conditions of the kinematic wave overland flow process in the framework of one‐parameter Lie groups of point transformations. The scaling (stretching) transformation is one of the one‐parameter Lie group of point transformations, and it has a unique importance among the other transformations, as it leads to the scale invariance or scale dependence of a process. Scale invariance of a process yields a self‐similar solution at different space–time scales. However, the conditions for the process to be scale invariant usually dictate various relationships between the scaling coefficients of the dependent and independent variables of the process. Therefore, the scale invariance of a process does not assure a self‐similar solution at any arbitrary space and time scale. The kinematic wave overland flow process is modelled mathematically as an initial‐boundary value problem. The conditions to be satisfied by the system of governing equations as well as the initial and boundary conditions of the kinematic wave overland flow process are established in order for the process to be scale invariant. Also, self‐similarity of the solution of the kinematic wave overland flow under the established invariance conditions is demonstrated by various numerical example problems. Copyright © 2011 John Wiley & Sons, Ltd.
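As a minimal sketch of the kind of invariance condition the abstract refers to, take the standard kinematic wave form with the roughness coefficient held invariant under the transformation (the paper's full treatment also covers the initial and boundary conditions, omitted here):

```latex
% Kinematic wave overland flow with lateral inflow r:
\frac{\partial h}{\partial t} + \frac{\partial (\alpha h^m)}{\partial x} = r(x,t)
% One-parameter scaling (stretching) transformation:
\bar{x} = \lambda^{a_x} x, \qquad
\bar{t} = \lambda^{a_t} t, \qquad
\bar{h} = \lambda^{a_h} h, \qquad
\bar{r} = \lambda^{a_r} r
% Substituting and matching powers of \lambda yields the invariance conditions
a_h - a_t \;=\; m\,a_h - a_x \;=\; a_r
```

Because the exponents are linked in this way, a self-similar solution is only available at space–time scales consistent with these relationships, which is precisely the caveat the abstract raises.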

4.
The scale issue is of central concern in hydrological processes for understanding potential upscaling or downscaling methodologies, and for developing models to scale the dominant processes at different scales and in different environments. In this study, a typical permafrost watershed in the Qinghai‐Tibet Plateau was selected. Its hydrological processes were monitored for 4 years from 2004 to 2008, measuring the effects of the freezing and thawing depth of active soil layers on runoff processes. To identify the nature and cause of variation in the runoff response in different size catchments, catchments ranging from 1.07 to 112 km² were identified in the watershed. The results indicated that the variation of runoff coefficients showed a ‘V’ shape with increasing catchment size during the spring and autumn seasons, when the active soil was subjected to thawing or freezing processes. A two‐stage method was proposed to create runoff scaling models to indicate the effects of scale on runoff processes. In summer, the scaling transition model followed an exponential function for mean daily discharge, whereas the scaling model for flood flow exhibited a linear function. In autumn, the runoff process transition across multiple scales followed an exponential function with air temperature as the driving factor. These scaling models demonstrate relatively high simulation efficiency and precision, and provide a practical way for upscaling or downscaling runoff processes in a medium‐size permafrost watershed. For permafrost catchments of this scale, the results show that the synergistic effect of scale and vegetation cover is an important driving factor in the runoff response. Copyright © 2011 John Wiley & Sons, Ltd.

5.
The accurate evaluation and appropriate treatment of uncertainties is of primary importance in modern probabilistic seismic hazard assessment (PSHA). One of the objectives of the SIGMA project was to establish a framework to improve knowledge and data on two target regions characterized by low-to-moderate seismic activity. In this paper, for South-Eastern France, we present the final PSHA performed within the SIGMA project. A new earthquake catalogue for France covering the instrumental and historical periods was used for the calculation of the magnitude-frequency distributions. The hazard model incorporates area sources, smoothed seismicity and a 3D fault model. A set of recently developed ground motion prediction equations (GMPEs) from global and regional data, evaluated as adequately representing the ground motion characteristics in the region, was used to calculate the hazard. The magnitude-frequency distributions, maximum magnitude, fault slip rates and style-of-faulting are considered as additional sources of epistemic uncertainty. The hazard results for a generic rock condition (Vs30 = 800 m/s) are displayed for 20 sites in terms of uniform hazard spectra at two return periods (475 years and 10,000 years). The contributions of the epistemic uncertainties in the ground motion characterization and in the seismic source characterization to the total hazard uncertainty are analyzed. Finally, we compare the results with existing models developed at the national scale in the framework of the first generation of models supporting the Eurocode 8 enforcement (MEDD 2002 and AFPS06) and at the European scale (within the SHARE project), highlighting significant discrepancies at short return periods.
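For orientation, the hazard curves behind a study of this kind come from the classical PSHA integral (the standard Cornell–McGuire form, not spelled out in the abstract); the epistemic alternatives enter as weighted logic-tree branches around it:

```latex
\lambda(IM > x) \;=\; \sum_{i=1}^{N_{\mathrm{src}}} \nu_i
  \iint P\left(IM > x \mid m, r\right)
  f_{M_i}(m)\, f_{R_i}(r)\, \mathrm{d}m\, \mathrm{d}r
```

Here \nu_i is the activity rate of source i, f_{M_i} and f_{R_i} are its magnitude and distance densities, and P(IM > x | m, r) is supplied by the GMPEs.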

6.
The presence of scaling statistical properties in temporal rainfall has been well established in many empirical investigations during the last decade. These properties have increasingly come to be regarded as a fundamental feature of the rainfall process. How best to use the scaling properties for applied modelling remains to be assessed, however, particularly in the case of continuous rainfall time‐series. One is therefore forced to use conventional time‐series modelling, e.g. based on point process theory, which does not explicitly take scaling into account. In light of this, there is a need to investigate the degree to which point‐process models are able to ‘unintentionally’ reproduce the empirical scaling properties. In the present study, four 25‐year series of 20‐min rainfall intensities observed in the Arno River basin, Italy, were investigated. A Neyman–Scott rectangular pulses (NSRP) model was fitted to these series, enabling the generation of synthetic time‐series suitable for investigation. A multifractal scaling behaviour was found to characterize the raw data within a range of time‐scales between approximately 20 min and 1 week. The main features of this behaviour were surprisingly well reproduced in the simulated data, although some differences were observed, particularly at small scales below the typical duration of a rain cell. This suggests the possibility of a combined use of the NSRP model and a scaling approach, in order to extend the NSRP range of applicability for simulation purposes. Copyright © 2002 John Wiley & Sons, Ltd.
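To make the model concrete, the following is a minimal sketch of how an NSRP process generates a continuous intensity series; the parameter values are illustrative placeholders, not the values fitted to the Arno River data.

```python
# Minimal sketch of a Neyman-Scott rectangular pulses (NSRP) simulator.
# Parameter values are illustrative placeholders, not fitted values.
import numpy as np

rng = np.random.default_rng(42)

def simulate_nsrp(t_max_h, lam=0.02, nu=5.0, beta=0.3, eta=1.5, mu_x=2.0):
    """Return pulse (start, duration, intensity) arrays.

    lam  : storm-origin rate [1/h] (Poisson process)
    nu   : mean number of cells per storm
    beta : rate of exponential cell displacement from the storm origin [1/h]
    eta  : rate of exponential cell duration [1/h]
    mu_x : mean exponential cell intensity [mm/h]
    """
    starts, durs, ints = [], [], []
    storm_origins = rng.uniform(0.0, t_max_h, rng.poisson(lam * t_max_h))
    for t0 in storm_origins:
        n_cells = 1 + rng.poisson(nu - 1.0)          # at least one cell per storm
        starts.extend(t0 + rng.exponential(1.0 / beta, n_cells))
        durs.extend(rng.exponential(1.0 / eta, n_cells))
        ints.extend(rng.exponential(mu_x, n_cells))
    return np.array(starts), np.array(durs), np.array(ints)

def intensity_series(starts, durs, ints, t_max_h, dt_h=1.0 / 3.0):
    """Aggregate overlapping rectangular pulses onto a 20-min grid."""
    t = np.arange(0.0, t_max_h, dt_h)
    y = np.zeros_like(t)
    for s, d, x in zip(starts, durs, ints):
        y[(t >= s) & (t < s + d)] += x               # pulses superpose
    return t, y

t, y = intensity_series(*simulate_nsrp(8760.0), 8760.0)  # one year
print(f"mean intensity {y.mean():.2f} mm/h, wet fraction {(y > 0).mean():.2f}")
```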

7.
Future catchment planning requires a good understanding of the impacts of land use and management, especially with regard to nutrient pollution. A range of readily usable tools, including models, can play a critical role in underpinning robust decision‐making. Modelling tools must articulate our process understanding, make links to a range of catchment characteristics and scales and have the capability to reflect future land‐use management changes. Hence, model application can play an important part in giving policy makers confidence that positive outcomes will arise from any proposed land‐use changes. Here, a minimum information requirement (MIR) modelling approach is presented that creates simple, parsimonious models based on more complex physically based models, making them more appropriate to catchment‐scale applications. This paper presents three separate MIR models that represent flow, nitrate losses and phosphorus losses. These models are integrated into a single catchment model (TOPCAT‐NP), which has the advantage that certain model components (such as soil type and flow paths) are shared by all three MIR models. The integrated model can simulate a number of land‐use activities that relate to typical land‐use management practices. The modelling process also gives insight into the seasonal and event nature of nutrient losses exhibited at a range of catchment scales. Three case studies are presented to reflect the range of applicability of the model. The three studies show how different runoff and nutrient loss regimes in different soil/geological and global locations can be simulated using the same model. The first case study models intense agricultural land uses in Denmark (Gjern, 114 km²), the second is an intense agricultural area dominated by high superphosphate applications in Australia (Ellen Brook, 66 km²) and the third is a small research‐scale catchment in the UK (Bollington Hall, 2 km²). Copyright © 2007 John Wiley & Sons, Ltd.

8.
Seismic risk assessment requires the adoption of appropriate models for the earthquake hazard, for the structural system and its performance, and quantification of the uncertainties involved in these models through appropriate probability distributions. Characterization of the seismic hazard is undoubtedly the most critical component of this process, the one associated with the largest amount of uncertainty. For applications involving dynamic analysis this hazard is frequently characterized through stochastic ground motion models. This paper discusses a novel, global sensitivity analysis for seismic risk with emphasis on such stochastic ground motion modeling. This analysis aims to identify the overall (i.e. global) importance of each of the uncertain model parameters, or of groups of them, towards the total risk. The methodology is based on the definition of an auxiliary density (distribution) function, proportional to the integrand of the integral quantifying seismic risk, and on comparison of this density to the initial probability distribution for the model parameters of interest. Uncertainty in the rest of the model parameters is explicitly addressed through integration of their joint auxiliary distribution to calculate the corresponding marginal distributions. The relative information entropy is used to quantify the difference between the compared density functions, and an efficient approach based on stochastic sampling is introduced for estimating this entropy for all quantities of interest. The framework is illustrated in an example that adopts a source-based stochastic ground motion model, and valuable insight is provided for its implementation within structural engineering applications.
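In symbols (notation assumed here, not taken verbatim from the paper), the construction can be sketched as:

```latex
% Seismic risk as an integral over the uncertain model parameters \theta:
H = \int h(\theta)\, p(\theta)\, \mathrm{d}\theta
% Auxiliary density proportional to the risk integrand:
\pi(\theta) = \frac{h(\theta)\, p(\theta)}{H}
% Global sensitivity of a parameter subset \theta_i: relative information
% entropy between its marginal auxiliary density and its prior:
D\bigl(\pi(\theta_i)\,\Vert\, p(\theta_i)\bigr)
  = \int \pi(\theta_i)\,\ln\frac{\pi(\theta_i)}{p(\theta_i)}\, \mathrm{d}\theta_i
```

A larger relative entropy means that the risk integrand reshapes that parameter's distribution more strongly, i.e. the parameter contributes more to the total risk.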

9.
Identification of the presence of scaling in the river flow process has been a challenging problem in hydrology. Studies conducted thus far have viewed this problem essentially from a stochastic perspective, because the river flow process has traditionally been assumed to be a result of a very large number of variables. However, recent studies employing nonlinear deterministic and chaotic dynamic concepts have reported that the river flow process could also be the outcome of a deterministic system with only a few dominant variables. In the wake of such reports, a preliminary attempt is made in this study to investigate the type of scaling behaviour in the river flow process (i.e. chaotic or stochastic). The investigation is limited only to temporal scaling. Flow data of three different scales (daily, 5-day and 7-day) observed in each of three rivers in the USA: the Kentucky River in Kentucky, the Merced River in California and the Stillaguamish River in Washington, are analysed. It is assumed that the dynamic behaviour of the river flow process at these individual scales provides clues about the scaling behaviour between these scales. The correlation dimension is used as an indicator to distinguish between chaotic and stochastic behaviours. The results are mixed with regard to the type of flow behaviour at individual scales and, hence, to the type of scaling behaviour, as some data sets show chaotic behaviour while others show stochastic behaviour. They suggest that characterization (chaotic or stochastic) of river flow should be a necessary first step in any scaling study, as it could provide important information on the appropriate approach for data transformation purposes.
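As a pointer to the indicator used here, below is a minimal Grassberger–Procaccia-style correlation-exponent estimate; the embedding parameters and the white-noise test series are illustrative only.

```python
# Minimal correlation-dimension sketch (Grassberger-Procaccia style).
import numpy as np

def correlation_sum(series, m=4, tau=1, n_radii=20):
    """C(r) for a series delay-embedded in m dimensions with delay tau."""
    n = len(series) - (m - 1) * tau
    emb = np.column_stack([series[i * tau : i * tau + n] for i in range(m)])
    # Brute-force pairwise distances; fine for short series.
    d = np.sqrt(((emb[:, None, :] - emb[None, :, :]) ** 2).sum(-1))
    d = d[np.triu_indices(n, k=1)]
    radii = np.geomspace(d[d > 0].min(), d.max(), n_radii)
    return radii, np.array([(d < r).mean() for r in radii])

rng = np.random.default_rng(0)
r, c = correlation_sum(rng.standard_normal(1000))
# Correlation exponent = slope of log C(r) vs log r in the scaling region
# (here crudely taken as the range where 0 < C < 0.5).
mask = (c > 0) & (c < 0.5)
slope = np.polyfit(np.log(r[mask]), np.log(c[mask]), 1)[0]
print(f"correlation exponent ~ {slope:.2f} for white noise at m=4")
```

For a low-dimensional chaotic series the exponent saturates as the embedding dimension m grows; for a stochastic series it keeps increasing with m, which is the distinction the study exploits.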

10.
The quasi-normal scale elimination (QNSE) is an analytical spectral theory of turbulence based upon successive ensemble averaging of the velocity and temperature modes over the smallest scales of motion and calculating the corresponding eddy viscosity and eddy diffusivity. By extending the process of successive ensemble averaging to the turbulence macroscale, one eliminates all fluctuating scales and arrives at models analogous to the conventional Reynolds stress closures. The scale dependency embedded in the QNSE method reflects contributions from different processes on different scales. Two of the most important processes in stably stratified turbulence, internal wave propagation and flow anisotropization, are explicitly accounted for in the QNSE formalism. For relatively weak stratification, the theory becomes amenable to analytical processing, revealing just how increasing stratification modifies the flow field via growing anisotropy and gravity wave radiation. The QNSE theory yields the dispersion relation for internal waves in the presence of turbulence and provides a theoretical reasoning for the Gargett et al. (J Phys Oceanogr 11:1258–1271, 1981) scaling of the vertical shear spectrum. In addition, it shows that internal wave breaking and flow anisotropization void the notion of a critical Richardson number at which turbulence is fully suppressed. The isopycnal and diapycnal viscosities and diffusivities can be expressed in the form of Richardson diffusion laws, thus providing a theoretical framework for the Okubo dispersion diagrams. Transitions in the spectral slopes can be associated with the turbulence- and wave-dominated ranges and have direct implications for the transport processes. We show that only quasi-isotropic, turbulence-dominated scales contribute to the diapycnal diffusivity. On larger, buoyancy-dominated scales, the diapycnal diffusivity becomes scale independent. This result underscores the well-known fact that waves can transfer momentum but not a scalar, and sheds new light on the Ellison–Britter–Osborn mixing model. It also provides a general framework for separating the effects of turbulence and waves even if they act on the same spatial and temporal scales. The QNSE theory-based turbulence models have been tested in various applications and have demonstrated reliable performance. It is suggested that these models present a viable alternative to conventional Reynolds stress closures.

11.
Analysis of civil structures at the life‐cycle scale requires stochastic modeling of degradation. Phenomena causing structures to degrade are typically categorized as aging and point‐in‐time overloads. Earthquake effects belong to the latter category and are the focus of this study, in the framework of performance‐based earthquake engineering (PBEE). The focus is structural seismic reliability, which requires modeling of the stochastic process describing damage progression, because of subsequent events, over time. The presented study explicitly addresses this issue via a Markov‐chain‐based approach, which is able to account for the change in seismic response of damaged structures (i.e. state‐dependent seismic fragility) as well as uncertainty in the occurrence and intensity of earthquakes (i.e. seismic hazard). The state‐dependent vulnerability issue arises when the seismic hysteretic response is evolutionary and/or when the damage measure employed is such that the degradation increment probabilistically depends on the condition of the structure at the time of the shock. The framework also takes advantage of the hypotheses of classical probabilistic seismic hazard analysis, allowing the modeling of the occurrence process of seismic shocks to be separated from the effect they produce on the structure. It is also discussed how the reliability assessment, which is in closed form, may be virtually extended to describe a generic age‐ and state‐dependent degradation process (e.g. including aging and/or when aftershock risk is of interest). Illustrative applications show the options for calibrating the model and its potential in the context of PBEE. Copyright © 2015 John Wiley & Sons, Ltd.
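One common way to write the unit-time transition of such a shock-driven damage chain, in assumed notation rather than the paper's exact closed form, is:

```latex
\mathbf{p}(t+\Delta t) = \mathbf{p}(t)\,\mathbf{P}, \qquad
P_{jk} \;\approx\; (1 - \nu\,\Delta t)\,\delta_{jk} + \nu\,\Delta t\, F_{jk},
\qquad j \le k
```

where \mathbf{p}(t) is the row vector of damage-state probabilities, \nu the shock occurrence rate from classical hazard analysis, and F_{jk} the state-dependent fragility, i.e. the probability that a shock moves the structure from state j to state k.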

12.
Multi-scale support vector algorithms for hot spot detection and modelling
The algorithmic approach to data modelling has developed rapidly in recent years; in particular, methods based on data mining and machine learning have been used in a growing number of applications. These methods follow a data-driven methodology, aiming to provide the best possible generalization and predictive abilities instead of concentrating on the properties of the data model. One of the most successful groups of such methods is known as Support Vector algorithms. Following the fruitful developments in applying Support Vector algorithms to spatial data, this paper introduces a new extension of the traditional support vector regression (SVR) algorithm. This extension allows for the simultaneous modelling of environmental data at several spatial scales. The joint influence of environmental processes presenting different patterns at different scales is here learned automatically from data, providing the optimum mixture of short- and large-scale models. The method is adaptive to the spatial scale of the data. With this advantage, it can provide efficient means to model local anomalies that may typically arise in situations at an early phase of an environmental emergency. However, the proposed approach still requires some prior knowledge of the possible existence of such short-scale patterns. This is a possible limitation of the method for its implementation in early warning systems. The purpose of this paper is to present the multi-scale SVR model and to illustrate its use with an application to the mapping of Cs137 activity, given the measurements taken in the region of Briansk following the Chernobyl accident.
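A rough flavour of the idea in a few lines of Python: the sketch below approximates the multi-scale behaviour with two RBF kernels of different bandwidths. Note the hedge: the paper learns both scales jointly inside a single SVR optimization, whereas fitting the short-scale model on the residuals of the large-scale one, as done here, is a simplification.

```python
# Two-scale kernel regression sketch: broad kernel for the regional trend,
# narrow kernel for local anomalies. Synthetic data; illustrative only.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 10.0, (300, 2))                 # measurement locations
y = np.sin(0.5 * X[:, 0]) + 0.2 * np.sin(5.0 * (X @ np.array([1.0, 1.3])))

large = SVR(kernel="rbf", gamma=0.05, C=10.0)        # large-scale component
large.fit(X, y)
small = SVR(kernel="rbf", gamma=5.0, C=10.0)         # short-scale component
small.fit(X, y - large.predict(X))                   # fitted on residuals

def predict(Xq):
    return large.predict(Xq) + small.predict(Xq)

rmse = np.sqrt(np.mean((predict(X) - y) ** 2))
print(f"training RMSE: {rmse:.3f}")
```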

13.
Analysis of the patterns of eruption occurrences may improve our understanding of volcanic processes. In this paper, the available historical data of an individual volcano, Colima in México, are used to classify its eruptions by size using the Volcanic Explosivity Index (VEI). The data show that, if eruptions are only taken into account above a certain VEI level, the stochastic process associated with the explosive volcanic events can be represented by a non-stationary Poisson point process, which can be reduced to a homogeneous Poisson process through a transformation of the time axis. When eruptions are separated by VEI values, the occurrence patterns of each magnitude category can also be represented by a Poisson distribution. Analysis of the rate of occurrence of all eruptions with VEI greater than 1 permits the recognition of three distinct regimes, or rates of volcanic activity, during the last 430 years. A doubly stochastic Poisson model is suggested to describe this non-stationary eruptive pattern of Colima volcano, and a Bayesian approach permits an estimation of the present hazard.
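The reduction to a homogeneous process mentioned above is the standard time-change result for Poisson processes:

```latex
P\{N(t) = n\} = \frac{\Lambda(t)^{\,n}}{n!}\, e^{-\Lambda(t)},
\qquad \Lambda(t) = \int_0^t \lambda(s)\, \mathrm{d}s
```

so the transformed time \tau = \Lambda(t) turns a non-stationary Poisson process with rate \lambda(t) into a homogeneous one of unit rate.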

14.
L. E. Band, Hydrological Processes, 1989, 3(2): 151-162
A framework for a watershed information system is presented in which the topology of the drainage basin is closely followed in the data structure. The topography is partitioned into a set of subcatchments and hillslopes organized around the drainage network, automatically extracted and defined from standard digital elevation data. A set of recursive algorithms performs the actual topographic feature extraction and synthesis into a full basin model, and also serves as the basis for processing registered information. The techniques are particularly well suited to supporting and parameterizing distributed-component runoff models, as full hydrologic connectivity throughout the basin is explicitly defined. Full flexibility in the scale of representation of the topography and the aggregation of physical units is achieved by the recursive nature of the data structure, allowing straightforward translation between scales and the investigation and choice of the appropriate scale for various hydrologic applications.
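The recursive flavour of such a structure is easy to sketch. The field names and the aggregation rule below are illustrative stand-ins, not the paper's actual data model:

```python
# Recursive drainage-network structure: each channel link carries its two
# adjacent hillslopes and points to its upstream links, so basin-wide
# quantities can be synthesized by recursion (illustrative sketch).
from dataclasses import dataclass, field

@dataclass
class ChannelLink:
    link_id: int
    left_hillslope_km2: float
    right_hillslope_km2: float
    upstream: list["ChannelLink"] = field(default_factory=list)

    def drainage_area_km2(self) -> float:
        """Local hillslopes plus everything drained by upstream links."""
        local = self.left_hillslope_km2 + self.right_hillslope_km2
        return local + sum(u.drainage_area_km2() for u in self.upstream)

# Two headwater links joining at an outlet link:
head1 = ChannelLink(1, 0.6, 0.8)
head2 = ChannelLink(2, 0.5, 0.7)
outlet = ChannelLink(3, 1.1, 0.9, upstream=[head1, head2])
print(f"basin area: {outlet.drainage_area_km2():.1f} km2")  # 4.6 km2
```

The same recursion that aggregates areas can route registered information (soil, land cover, runoff) through the basin, which is what gives the structure its scale flexibility.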

15.
The paper presents a computationally efficient algorithm to integrate a probabilistic, non-Gaussian parameter estimation approach for nonlinear finite element models with the performance-based earthquake engineering (PBEE) framework for accurate performance evaluations of instrumented civil infrastructure. The algorithm first utilizes a minimum variance framework to fuse predictions from a numerical model of a civil infrastructure with its measured behavior during a past earthquake, updating the parameters of the numerical model that is then used for performance prediction during future earthquakes. A nonproduct quadrature rule, based on the conjugate unscented transformation, forms an enabling tool to drive the computationally efficient model prediction, model-data fusion, and performance evaluation. The algorithm is illustrated and validated on the Meloland Road overpass, a heavily instrumented highway bridge in El Centro, CA, which experienced three moderate earthquake events in the past. The benefits of integrating measurement data into the PBEE framework are highlighted by comparing the damage fragilities of, and annual probabilities of damage to, the bridge estimated using the presented algorithm with those estimated using the conventional PBEE approach.

16.
Hans Van de Vyver, Hydrological Processes, 2018, 32(11): 1635-1647
Rainfall intensity–duration–frequency (IDF) curves are a standard tool in urban water resources engineering and management. They express how return levels of extreme rainfall intensity vary with duration. The simple scaling property of extreme rainfall intensity, with respect to duration, determines the form of IDF relationships. It is supposed that the annual maximum intensity follows the generalized extreme value (GEV) distribution. As is well known, for simple scaling processes, the location parameter and scale parameter of the GEV distribution obey a power law with the same exponent. Although the simple scaling hypothesis is commonly used as a suitable working assumption, the multiscaling approach provides a more general framework. We present a new IDF relationship that has been formulated on the basis of the multiscaling property. It turns out that the GEV parameters (location and scale) have different scaling exponents. Next, we apply a Bayesian framework to estimate the multiscaling GEV model and to choose the most appropriate model. It is shown that the model performance increases when using the multiscaling approach. The new model for IDF curves reproduces the data very well and has a reasonable degree of complexity without overfitting the data.
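In the usual notation (assumed here), the GEV return level for duration d and return period T, together with the two scaling variants the abstract contrasts, reads:

```latex
% GEV return level (intensity) for duration d and return period T:
i_T(d) = \mu(d) + \frac{\sigma(d)}{\xi}
  \left[\bigl(-\ln(1 - 1/T)\bigr)^{-\xi} - 1\right]
% Simple scaling: one exponent for both GEV parameters
\mu(d) = \mu_1\, d^{-\eta}, \qquad \sigma(d) = \sigma_1\, d^{-\eta}
% Multiscaling: the exponents may differ
\mu(d) = \mu_1\, d^{-\eta_1}, \qquad \sigma(d) = \sigma_1\, d^{-\eta_2}
```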

17.
Advances in Water Resources, 2002, 25(8-12): 1175-1213
Multi-component flow in porous media involves localized phenomena that can be due to several features, such as concentration fronts, wells or the geometry of the media. Our approach to treating the localized phenomena is to use high-resolution discretization methods in combination with adaptive mesh refinement (AMR). The purpose of AMR is to concentrate the computational work near the regions of interest in the flow. When properly designed, AMR can significantly reduce the computational effort required to obtain a desired level of accuracy in the simulation. Necessarily, AMR requires appropriate techniques for communication between length scales in a hierarchy. The selection of appropriate scaling rules as well as computationally efficient data structures is essential to the success of the overall method. However, the emphasis here is on the development of efficient techniques for solving the linear systems that arise in the numerical discretization of an elliptic equation for the incompressible pressure field. In this paper, the combined AMR technique has been applied to a two-component single-phase model for miscible flooding. Numerical results in one and two dimensions are discussed.

18.
Distributed hydrological models can make predictions with much finer spatial resolution than the supporting field data. They will, however, usually not have a predictive capability at model grid scale due to limitations of data availability and the uncertainty of model conceptualizations. In previous publications, we have introduced the Representative Elementary Scale (RES) concept as the theoretically minimum scale at which a model with a given conceptualization has a potential for obtaining a predictive accuracy corresponding to a given acceptable accuracy. The new RES concept has similarities to the 25‐year‐old Representative Elementary Area concept, but it differs in the sense that while Representative Elementary Area addresses similarity between subcatchments by sampling within the catchment, RES focuses on the effects of data or conceptualization uncertainty by Monte Carlo simulations followed by a scale analysis. In the present paper, we extend and generalize the RES concept to a framework for assessing the minimum scale of potential predictability of a distributed model, applicable also to analyses of different model structures and data availabilities. We present three examples with RES analyses and discuss our findings in relation to Beven's alternative blueprint and environmental modeling philosophy from 2002. While Beven here addresses model structural and parameter uncertainties, he does not provide a thorough methodology for assessing to what extent model predictions for variables that are not measured can achieve meaningful predictive accuracy, or whether this is impossible due to limitations in data and models. This shortcoming is addressed by the RES framework through its analysis of the relationship between the aggregation scale of model results and prediction uncertainties, and through its consideration of how alternative model structures and alternative data availability affect the results. We suggest that RES analysis should be applied in all modeling studies that aim to use simulation results at spatial scales smaller than the support scale of the calibration data.

19.
We present a system of ordinary differential equations (ODEs) capable of reproducing simultaneously the aggregated behavior of changes in water storage in the hillslope surface, the unsaturated and the saturated soil layers and the channel that drains the hillslope. The system of equations can be viewed as a two-state integral-balance model for soil moisture and groundwater dynamics. Development of the model was motivated by the need for landscape representation through hillslopes and channels organized following stream drainage network topology. Such a representation, with the basic discretization unit of a hillslope, allows ODEs-based simulation of the water transport in a basin. This, in turn, admits the use of highly efficient numerical solvers that enable space–time scaling studies. The goal of this paper is to investigate whether a nonlinear ODE system can effectively replicate observations of water storage in the unsaturated and saturated layers of the soil. Our first finding is that a previously proposed ODE hillslope model, based on readily available data, is capable of reproducing streamflow fluctuations but fails to reproduce the interactions between the surface and subsurface components at the hillslope scale. However, the more complex ODE model that we present in this paper achieves this goal. In our model, fluxes in the soil are described using a Taylor expansion of the underlying storage flux relationship. We tested the model using data collected in the Shale Hills watershed, a 7.9-ha forested site in central Pennsylvania, during an artificial drainage experiment in August 1974 where soil moisture in the unsaturated zone, groundwater dynamics and surface runoff were monitored. The ODE model can be used as an alternative to spatially explicit hillslope models, based on systems of partial differential equations, which require more computational power to resolve fluxes at the hillslope scale. Therefore, it is appropriate to be coupled to runoff routing models to investigate the effect of runoff and its uncertainty propagation across scales. However, this improved performance comes at the expense of introducing two additional parameters that have no obvious physical interpretation. We discuss the implications of this for hydrologic studies across scales.
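A minimal two-state sketch of this kind of integral-balance model is given below; the flux laws and parameter values are illustrative placeholders (the paper derives its fluxes from a Taylor expansion of the storage–flux relationship and calibrates against the Shale Hills data):

```python
# Two-state integral-balance hillslope sketch: unsaturated storage drains
# into saturated storage, which discharges nonlinearly to the channel.
# All flux laws and parameters are illustrative placeholders.
import numpy as np
from scipy.integrate import solve_ivp

def hillslope(t, S, p_mm_h=0.5, k_u=0.05, k_s=0.01, a=0.002):
    """S[0]: unsaturated storage [mm]; S[1]: saturated storage [mm]."""
    s_u, s_s = S
    recharge = k_u * s_u                  # unsaturated -> saturated drainage
    baseflow = k_s * s_s + a * s_s ** 2   # nonlinear storage-discharge
    return [p_mm_h - recharge, recharge - baseflow]

sol = solve_ivp(hillslope, (0.0, 480.0), [20.0, 50.0], dense_output=True)
t = np.linspace(0.0, 480.0, 5)
print(np.round(sol.sol(t).T, 1))          # storage trajectories [mm]
```

Because each hillslope reduces to a small ODE system like this, a whole drainage network can be integrated with fast solvers, which is what enables the space–time scaling studies mentioned above.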

20.
In all European countries the will to conserve the building heritage is very strong. Unfortunately, large areas in Europe are characterised by a high level of seismic hazard, and the vulnerability of ancient masonry structures is often significant. The large number of monumental buildings in urban areas requires facing the problem with a methodology that can be applied at the territorial scale, using simplified models that need only a small amount of easily obtainable data. Within the Risk-UE project, a new methodology has been established for the seismic vulnerability assessment of monumental buildings, which considers two different approaches: a macroseismic model, to be used with macroseismic intensity hazard maps, and a mechanical model, to be applied when the hazard is provided in terms of peak ground accelerations and spectral values. Both models can be used with data of different reliability and depth. This paper illustrates the theoretical basis and defines the parameters of the two models. An application to an important church is presented.
