Similar Literature
20 similar documents retrieved.
1.
A calibration is presented for an activity–composition model for amphiboles in the system Na2O–CaO–FeO–MgO–Al2O3–SiO2–H2O–O (NCFMASHO), formulated in terms of an independent set of six end-members: tremolite, tschermakite, pargasite, glaucophane, ferroactinolite and ferritschermakite. The model uses mixing-on-sites for the ideal-mixing activities and, for the activity coefficients, a macroscopic multicomponent van Laar model. This formulation involves 15 pairwise interaction energies and six asymmetry parameters. Calibration of the model is based on the geometrical constraints imposed by the size and shape of amphibole solvi inherent in a data set of 71 coexisting amphibole pairs from rocks formed over 400–600 °C and 2–18 kbar. The model parameters are calibrated by combining these geometric constraints with qualitative consideration of parameter relationships, given that the data are insufficient to allow all the model parameters to be determined from a regression of the data. Use of coexisting amphiboles means that amphibole activity–composition relationships are calibrated independently of the thermodynamic properties of the end-members. For practical applications, in geothermobarometry and the calculation of phase diagrams, the amphibole activity–composition relationships are placed in the context of the stability of other minerals by evaluating the properties of the end-members in the independent set against internally consistent data sets. This has been performed using an extended natural data set for hornblende–garnet–plagioclase–quartz, giving the small adjustments necessary to the enthalpies of formation of tschermakite, pargasite and glaucophane for working with the Holland and Powell data set.
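As a sketch of the macroscopic multicomponent van Laar formulation referred to here (following the asymmetric formalism of Holland and Powell; generic symbols, not the paper's calibrated values), the non-ideal part of the activity of end-member l can be written as

\[
\phi_i=\frac{x_i\,\alpha_i}{\sum_{k=1}^{n}x_k\,\alpha_k},\qquad
RT\ln\gamma_l=-\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}q_i\,q_j\,W_{ij}\,\frac{2\,\alpha_l}{\alpha_i+\alpha_j},\qquad
q_i=\begin{cases}1-\phi_i, & i=l\\ -\phi_i, & i\neq l\end{cases}
\]

with a_l = a_l(ideal) · γ_l, where the ideal part comes from mixing-on-sites. For n = 6 independent end-members this gives n(n−1)/2 = 15 pairwise interaction energies W_ij and six asymmetry parameters α_i, matching the parameter count quoted in the abstract.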

2.
Kazeev  Andrey  Postoev  German 《Natural Hazards》2016,86(1):81-105

The impact of natural hazards on mankind has increased dramatically over the past decades. Global urbanization processes and increasing spatial concentrations of exposed elements push natural hazard risk to a uniquely high level. Mitigating the associated perils requires detailed knowledge about elements at risk. Given the high spatiotemporal variability of elements at risk, detailed information is costly in terms of both time and economic resources and therefore often incomplete, aggregated, or outdated. To alleviate these restrictions, the availability of very-high-resolution satellite images enables accurate and detailed analysis of exposure over various spatial scales with large-area coverage. Valuable approaches have been proposed in the past; however, the design of information extraction procedures with a high level of automation remains challenging. In this paper, we combine remote sensing data and volunteered geographic information from the OpenStreetMap project (OSM) (i.e., freely accessible geospatial information compiled by volunteers) for a highly automated estimation of crucial exposure components (i.e., number of buildings and population) with a high level of spatial detail. To this end, we first obtain labeled training segments from the OSM data in conjunction with the satellite imagery. This allows for learning a supervised algorithmic model (i.e., a rotation forest) to extract relevant thematic classes of land use/land cover (LULC) from the satellite imagery. The extracted information is jointly deployed with information from the OSM data to estimate the number of buildings with regression techniques (i.e., a multi-linear model from ordinary least-squares optimization and a nonlinear support vector regression model are considered). Analogously, urban LULC information is used in conjunction with OSM data to spatially disaggregate population information. Experimental results were obtained for the city of Valparaíso in Chile. Thereby, we demonstrate the relevance of the approaches by estimating the number of buildings and people affected by a historical tsunami event.

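A minimal sketch of the two-stage workflow described above, with scikit-learn's RandomForestClassifier standing in for the rotation forest (which has no stock scikit-learn implementation); all data and variable names are hypothetical placeholders:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)

    # Stage 1: supervised LULC classification of image segments.
    # X_seg: per-segment spectral/texture features; y_lulc: labels derived from OSM polygons.
    X_seg = rng.random((500, 12))
    y_lulc = rng.integers(0, 4, 500)          # e.g. built-up, vegetation, water, bare soil
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_seg, y_lulc)

    # Stage 2: regression from LULC composition + OSM information to building counts.
    # X_block: per-block predictors (built-up area share, OSM building count, ...).
    X_block = rng.random((200, 3))
    n_buildings = 50 * X_block[:, 0] + 30 * X_block[:, 1] + rng.normal(0, 2, 200)
    svr = SVR(kernel="rbf", C=100.0).fit(X_block, n_buildings)
    print("predicted buildings in first block:", svr.predict(X_block[:1])[0])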

3.
Shan  Yibo  Chen  Shengshui  Zhong  Qiming  Mei  Shengyao  Yang  Meng 《Landslides》2022,19(6):1491-1518

The existing empirical models do not consider the influence of the material composition of landslide deposits on the peak breach flow, owing to the uncertainty in the material composition and the randomness of its distribution. In this study, based on statistical analyses and case comparison, the factors influencing the peak breach flow were comprehensively investigated. The highlight is a material-composition-based classification of the landslide deposits of 86 landslide cases with detailed grain-size distribution information. In order to consider the geometric morphology of landslide dams, the potential energy of dammed lakes, and the material composition of landslide deposits in an empirical model, a multiple regression method was applied to a database comprising 44 documented landslide dam breach cases. A new empirical model for predicting the peak breach flow of landslide dams was developed. Furthermore, for the same 44 documented landslide dam failures, the peak breach flows predicted by the existing empirical models for embankment and landslide dams were compared with those obtained from the newly developed model. Comparison of the root mean square error (Erms) and the multiple coefficient of determination (R²) for each empirical model verifies the accuracy and rationality of the new empirical model. For fair validation, several landslide dam breach cases that occurred in recent years in China and have reliable measured data were also used in another comparison. The results show that the new empirical model reasonably predicts the peak breach flow and exhibits the best performance among all the existing empirical models for embankment and landslide dam breaching.

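A hedged sketch of the regression-and-scoring step: fitting a power-law empirical model for peak breach flow in log space and computing Erms and R². The predictor names (dam height Hd, lake volume Vl) and all numbers are invented for illustration:

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score, mean_squared_error

    rng = np.random.default_rng(1)
    n = 44                                                    # documented breach cases
    Hd = rng.uniform(5, 100, n)                               # dam height (m), hypothetical
    Vl = rng.uniform(1e5, 1e9, n)                             # lake volume (m^3), hypothetical
    Qp = 0.1 * Hd**0.5 * Vl**0.4 * rng.lognormal(0, 0.3, n)   # synthetic peak flow (m^3/s)

    # A power-law model Qp = a * Hd^b * Vl^c becomes linear after a log transform.
    X = np.column_stack([np.log(Hd), np.log(Vl)])
    y = np.log(Qp)
    reg = LinearRegression().fit(X, y)
    y_hat = reg.predict(X)
    print("R2  :", r2_score(y, y_hat))
    print("Erms:", np.sqrt(mean_squared_error(y, y_hat)))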

4.
《Applied Geochemistry》1999,14(7):861-871
To support hydrochemical evaluation, a multivariate mathematical tool named M3 (Multivariate Mixing and Mass balance calculations) has been created within the Äspö Hard Rock Laboratory Research Programme. The computer code can be used to trace the origin of the groundwater and to calculate the mixing portions and mass balances from ambiguous groundwater data. Groundwater composition data traditionally used to describe the reactions taking place in the bedrock can now be used to trace the effect of present and past groundwater flow with increased accuracy. The M3 model consists of the following three steps:
  • Multivariate analysis, using Principal Component Analysis (PCA), is applied to summarise the information in the data set. The summarised information shown in the PCA plots is used for finding relationships, patterns and extreme waters, and for further M3 modelling.
  • From the PCA plot, mixing calculations are used to quantify the effect of groundwater mixing on the observed groundwater composition. This so-called ideal mixing model is used to calculate the mixing proportions, given in %, for all the groundwater samples.
  • The final step in the M3 calculations is the mass balance calculations. Deviations from the ideal mixing model are used to trace the sources and sinks of elements, given in mg/l, which can be due to mass balance reactions.
The tested margin of error of the model is ±10% for the Äspö site data, but depends on the data to be modelled. A mixing portion of less than 10% is regarded as below the detection limit of the model, and such calculations are therefore uncertain. This method can be used to trace the origin of, and to calculate the mixing portions and the effects of reactions on, the observed groundwater composition with higher resolution and more conveniently than many standard methods.
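A minimal sketch of the PCA-plus-mixing steps: project samples with PCA, then estimate non-negative mixing proportions of chosen end-member waters with a sum-to-one constraint enforced via an appended, heavily weighted row. The end-members, tracers and numbers are hypothetical:

    import numpy as np
    from sklearn.decomposition import PCA
    from scipy.optimize import nnls

    rng = np.random.default_rng(2)
    E = rng.random((3, 6))                    # 3 end-member waters x 6 tracers (Cl, Na, ...)
    w_true = np.array([0.6, 0.3, 0.1])
    sample = w_true @ E + rng.normal(0, 0.01, 6)

    pca = PCA(n_components=2)                 # the PCA "view" used to pick end-members
    pca.fit(np.vstack([E, sample]))
    print("explained variance:", pca.explained_variance_ratio_.round(2))

    # Mixing proportions: solve sample ~ w @ E with w >= 0 and sum(w) = 1.
    A = np.vstack([E.T, 100.0 * np.ones(3)])  # weighted ones-row enforces sum(w) = 1
    b = np.concatenate([sample, [100.0]])
    w, _ = nnls(A, b)
    print("mixing proportions (%):", np.round(100 * w, 1))

    residual = sample - w @ E                 # deviations -> element sources/sinks (mg/l)
    print("mass-balance residuals:", residual.round(3))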

5.
Li  Xiaobin  Li  Yunbo  Tang  Junting 《Natural Hazards》2019,97(1):83-97

Mine gas disaster prediction and prevention are based on gas content measurement, which suffers from initial-stage loss when coal gas desorption contents are determined in engineering applications. We propose a Bayesian statistical method for the coal gas desorption model, constrained by prior information. First, we use a purpose-built coal-sample gas desorption device to measure initial-stage gas desorption data for tectonic coal and undeformed coal. Second, we calculate the initial-stage loss of different coal samples from the power-exponential function parameters, using Bayesian statistics and least-squares estimation. Results show that Bayesian statistics and least-squares estimation can be used to obtain the regression and desorption coefficients, illustrating the validity and reliability of the Bayesian estimation method. Because the Bayesian approach can apply prior information to constrain the model's posterior parameters, it yields statistically meaningful estimates of the initial-stage loss of coal gas desorption by combining observation data with prior information.

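A sketch of how prior-constrained Bayesian estimation of a power-function desorption model Q(t) = a·t^b might look, using a small Metropolis sampler. The data, priors, bounds and exposure time t0 are all assumptions for illustration, not the paper's values:

    import numpy as np

    rng = np.random.default_rng(3)
    t0 = 3.0                                            # minutes of exposure before measurement
    t = np.arange(1, 31) + t0                           # measurement times (min)
    q = 1.8 * t**0.35 + rng.normal(0, 0.05, t.size)     # cumulative desorption (cm^3/g)
    sigma = 0.05                                        # assumed measurement noise

    def log_post(a, b):
        if not (0 < a < 10 and 0 < b < 1):              # flat prior inside physical bounds
            return -np.inf
        r = q - a * t**b
        return -0.5 * np.sum(r**2) / sigma**2

    # Metropolis random walk over (a, b).
    theta = np.array([1.0, 0.5])
    lp = log_post(*theta)
    draws = []
    for _ in range(20000):
        prop = theta + rng.normal(0, [0.05, 0.01])
        lp_prop = log_post(*prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        draws.append(theta)
    a_s, b_s = np.array(draws)[5000:].T                 # discard burn-in
    loss = a_s * t0**b_s                                # posterior of initial-stage loss Q(t0)
    print("initial loss: %.2f +/- %.2f cm^3/g" % (loss.mean(), loss.std()))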

6.
The Holland and Powell internally consistent data set version 5.5 has been augmented to include pyrite, troilite, trov (Fe0.875S), anhydrite, H2S, elemental S and S2 gas. Phase changes in troilite and pyrrhotite are modelled with a combination of multiple end-members and a Landau tricritical model. Pyrrhotite is modelled as a solid solution between hypothetical end-member troilite (trot) and Fe0.875S (trov); observed activity–composition relationships fit well to a symmetric formalism model with a value for w(trot–trov) of −3.19 kJ mol⁻¹. The hypothetical end-member approach is required to compensate for iron distribution irregularities in compositions close to troilite. Mixing in fluids is described with the van Laar asymmetric formalism model with aij values for H2O–H2S, H2S–CH4 and H2S–CO2 of 6.5, 4.15 and 0.045 kJ mol⁻¹, respectively. The derived data set is statistically acceptable and replicates the input data and data from experiments that were not included in the initial regression. The new data set is applied to the construction of pseudosections for the bulk composition of mafic greenschist facies rocks from the Golden Mile, Kalgoorlie, Western Australia. The sequence of mineral assemblages is replicated successfully, with observed assemblages predicted to be stable at X(CO2) increasing with increasing degree of hydrothermal alteration. Results are compatible with those of previous work. Assemblages are insensitive to the S bulk content at S contents of less than 1 wt%, which means that volatilization of S-bearing fluids and sulphidation are unlikely to have had major effects on the stable mineral assemblage in less metasomatized rocks. The sequence of sulphide and oxide phases is predicted successfully and there is potential to use these phases qualitatively for geobarometry. Increases in X(CO2) stabilized, in turn, pyrite–magnetite, pyrite–hematite and anhydrite–pyrite. Magnetite–pyrrhotite is predicted at temperatures greater than 410 °C. The prediction of a variety of sulphide and oxide phases in a rock of fixed bulk composition as a function of changes in fluid composition and temperature is of particular interest because it has been proposed that such a variation in phase assemblage is produced by the infiltration of multiple fluids with contrasting redox state. The work presented here shows that this need not be the case.
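For the binary pyrrhotite solution, the symmetric formalism quoted above reduces to the standard regular-solution expressions (trot = hypothetical troilite end-member, trov = Fe0.875S):

\[
RT\ln\gamma_{\mathrm{trot}} = W\,x_{\mathrm{trov}}^{2},\qquad
RT\ln\gamma_{\mathrm{trov}} = W\,x_{\mathrm{trot}}^{2},\qquad
W = w_{\mathrm{trot\text{-}trov}} = -3.19\ \mathrm{kJ\,mol^{-1}},
\]

so that a_i = x_i·γ_i; the negative W makes the solution slightly more stable than ideal mixing.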

7.
Various approaches exist to relate saturated hydraulic conductivity (Ks) to grain-size data. Most methods use a single grain-size parameter and hence omit the information encompassed by the entire grain-size distribution. This study compares two data-driven modelling methods — multiple linear regression and artificial neural networks — that use the entire grain-size distribution data as input for Ks prediction. Besides the predictive capacity of the methods, the uncertainty associated with the model predictions is also evaluated, since such information is important for stochastic groundwater flow and contaminant transport modelling. Artificial neural networks (ANNs) are combined with a generalised likelihood uncertainty estimation (GLUE) approach to predict Ks from grain-size data. The resulting GLUE-ANN hydraulic conductivity predictions and associated uncertainty estimates are compared with those obtained from the multiple linear regression models by a leave-one-out cross-validation. The GLUE-ANN ensemble prediction proved to be slightly better than multiple linear regression. The prediction uncertainty, however, was reduced by half an order of magnitude on average, and decreased at most by an order of magnitude. This demonstrates that the proposed method outperforms classical data-driven modelling techniques. Moreover, a comparison with methods from the literature demonstrates the importance of site-specific calibration. The data set used for this purpose originates mainly from unconsolidated sandy sediments of the Neogene aquifer, northern Belgium. The proposed predictive models are developed for 173 grain-size–Ks pairs. Finally, an application with the optimised models is presented for a borehole lacking Ks data.
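A compact sketch of a GLUE-style ANN ensemble for Ks prediction: train many networks on bootstrap resamples, keep only "behavioural" members above a likelihood threshold, and report likelihood-weighted estimates and prediction bounds. All settings and data here are illustrative, not the paper's:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(4)
    X = rng.random((173, 10))                           # grain-size fractions per sample
    y = X @ rng.random(10) + rng.normal(0, 0.1, 173)    # log10(Ks), synthetic

    models, likelihoods = [], []
    for i in range(50):                                 # GLUE ensemble of ANNs
        idx = rng.integers(0, len(y), len(y))           # bootstrap resample
        m = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                         random_state=i).fit(X[idx], y[idx])
        var = np.mean((y - m.predict(X))**2)
        likelihoods.append(1.0 / var)                   # simple inverse-variance likelihood
        models.append(m)

    L = np.array(likelihoods)
    keep = L > np.quantile(L, 0.5)                      # behavioural threshold (assumption)
    preds = np.array([m.predict(X[:1])[0] for k, m in zip(keep, models) if k])
    w = L[keep] / L[keep].sum()
    print("weighted mean:", preds @ w)
    print("5-95% band  :", np.quantile(preds, [0.05, 0.95]))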

8.

Blackouts aggravate the situation during an extreme river-flood event by affecting residents and visitors of an urban area; rescue services, fire brigades and basic urban infrastructure such as hospitals must also operate under suboptimal conditions. This paper aims to demonstrate how affected people and critical infrastructure — such as electricity, roads and civil protection facilities — are intertwined during a flood event, and how this can be analysed in a spatially explicit way. The city of Cologne (Germany) is used as a case study since it is river-flood prone and thousands of people were affected in the floods of 1993 and 1995. Components of vulnerability and resilience assessments are selected with a focus on analysing exposure to floods, and five steps of analysis are demonstrated using a geographic information system. Data derived by airborne and spaceborne earth observation to capture flood extent, together with demographic data, are combined with place-based information about the location and distance of objects. The results illustrate that even fire brigade stations, hospitals and refugee shelters lie within the flood scenario area. Methodologically, the paper shows how the criticality of infrastructure can be analysed and how static vulnerability assessments can be improved by adding routing calculations. Fire brigades can use this information to improve planning on how to access hospitals and shelters under flooded road conditions.

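A toy sketch of the routing step: drop flooded road segments from a network graph and recompute the fire brigade's shortest path to a hospital. The graph, node names and edge lengths are invented:

    import networkx as nx

    G = nx.Graph()
    edges = [("station", "a", 400), ("a", "hospital", 500),
             ("station", "b", 300), ("b", "hospital", 900),
             ("a", "b", 200)]
    for u, v, length in edges:
        G.add_edge(u, v, length=length)

    flooded = {("a", "hospital")}              # edges inside the flood scenario area
    G_flood = G.copy()
    G_flood.remove_edges_from(flooded)

    for g, label in [(G, "dry"), (G_flood, "flooded")]:
        path = nx.shortest_path(g, "station", "hospital", weight="length")
        dist = nx.shortest_path_length(g, "station", "hospital", weight="length")
        print(f"{label:7s} route: {path}, {dist} m")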

9.

This paper adds to the ongoing discussion on the role of reliability calculations in geotechnical design. It situates design calculations, whether verified by a global factor of safety, partial factors, or reliability-based design (RBD), in a larger context of quality management over the life cycle of the structure. It clarifies that uncertainties amenable to probabilistic treatment typically fall under the category of “known unknowns”, where some measured data and/or past experience exist for limited site-specific data to be supplemented by both objective regional data and subjective judgement derived from comparable sites elsewhere. Within this category, reliability is very useful in handling complex real-world information (multivariate correlated data) and information imperfections (scarcity of information or incomplete information). It is also very useful in handling real-world design aspects such as spatial variability that cannot be easily treated using deterministic means. Examples are presented to illustrate how reliability calculations could relieve engineering judgement from the unsuitable task of performance verification in the presence of uncertainties, so that the engineer can focus on setting up the right lines of scientific investigation, selecting the appropriate models and parameters for calculations, and verifying the reasonableness of the results.
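As a small illustration of the kind of reliability calculation discussed here — a Monte Carlo estimate of failure probability for a lognormal resistance R and load effect Q — with all distribution parameters invented:

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(5)
    n = 1_000_000
    # Lognormal R and Q, parameterized by median and log-standard deviation.
    R = rng.lognormal(mean=np.log(150.0), sigma=0.20, size=n)   # resistance
    Q = rng.lognormal(mean=np.log(100.0), sigma=0.15, size=n)   # load effect

    pf = np.mean(R < Q)                    # probability of failure
    beta = -norm.ppf(pf)                   # equivalent reliability index
    print(f"pf = {pf:.2e}, beta = {beta:.2f}")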

10.
When assessing zircon U-Pb data, Wendt’s (1984, 1989) 3-dimensional projection for calculating concordia intercept ages has a fundamental advantage over other methods: the best-fit plane in three dimensions defines a sample’s age without requiring any advance knowledge about the isotopic composition of the non-radiogenic Pb. However, until now the general validity of this approach had never been investigated using data sets measured on terrestrial samples. Best-fit plane calculations were made for three terrestrial zircon samples. The t1 and t2 concordia intercept ages of these samples were found to be statistically equivalent to the ages calculated by other means. However, the 3-dimensional calculations gave detectable differences in ages and precision estimates compared with the mean 207Pb/206Pb and line-regression techniques; such differences could be important at moderate to high precision levels. It was also found that the 3-dimensional concordia provides useful information for discerning which analyses should or should not be included in the final data set.
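A bare-bones sketch of the best-fit plane step in Wendt's 3-D approach: regress one Pb-isotope ratio on the other two coordinates. The data are random placeholders, and the unweighted least squares used here omits the error-weighting and correlations a real implementation would propagate:

    import numpy as np

    rng = np.random.default_rng(6)
    # Coordinates per analysis: x = 238U/206Pb, y = 207Pb/206Pb, z = 204Pb/206Pb.
    x = rng.uniform(2, 12, 20)
    y = 0.9 - 0.05 * x + rng.normal(0, 0.002, 20)
    z = 0.001 + 0.0005 * x + rng.normal(0, 0.0001, 20)

    # Best-fit plane z = c0 + c1*x + c2*y.
    A = np.column_stack([np.ones_like(x), x, y])
    (c0, c1, c2), *_ = np.linalg.lstsq(A, z, rcond=None)
    print("plane coefficients:", c0, c1, c2)
    # Setting z = 0 (no common Pb) gives the discordia line 0 = c0 + c1*x + c2*y
    # in conventional concordia space, whose concordia intersections define t1 and t2.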

11.
Hausmann  Peter 《Natural Hazards》2016,86(1):197-198

As a leading global reinsurer, Swiss Re deals with many hazards and risks for which geospatial data are crucial in order to obtain reliable assessments of expected insured losses or large losses from catastrophes. Typically, such data are used in combination with insurance data either in pricing tools to calculate premiums, tail risks and more, or in mapping tools. In natural perils pricing applications—the most important group of tools—geospatial data are usually “not visible” but are instead used to create probabilistic event sets. For example, a flood event set may define spatially if and how frequently a given location is flooded. Mapping tools, such as Swiss Re’s CatNet® (www.swissre.com/catnet), visualize the data in the form of maps which include many useful attributes per geographic location.


12.
Lai  Fengwen  Zhang  Ningning  Liu  Songyu  Sun  Yanxiao  Li  Yaoliang 《Acta Geotechnica》2021,16(9):2933-2961

The assessment and control of ground movements during the installation of large diameter deeply-buried (LDDB) caissons are critically important to maintain the stability of surrounding infrastructure. However, for twin LDDB caissons, which have been installed worldwide, no well-documented guidelines for assessing the induced ground movements are available, owing to the complexities of caisson–soil interaction. To this end, considering the mechanical boundaries of caissons and the mechanized installation process, this paper presents a simple kinematic mechanical model balancing computational cost and accuracy, which can be easily incorporated in commercial finite-element (FE) programs. Based on a project in Zhenjiang, China, where twin LDDB caissons were installed alternately using a newly developed installation technology in wet ground with stiff clays, a three-dimensional (3D) numerical model is developed to capture the ground movements, in terms of surface settlements and radial displacements, induced by the installation of twin LDDB caissons. Moreover, the hardening soil model with small-strain stiffness (HSSmall), conceptually capable of capturing the nonlinear soil stiffness from very small to large strain levels, is used to simulate the undrained ground. Validations against field observations, empirical predictions and centrifuge test data are carried out to demonstrate the accuracy and validity of the developed FE model. Subsequently, comparisons of the ground movements numerically obtained for three frequently used installation schemes (i.e., synchronous, asynchronous and alternating installation) are conducted to optimize the installation sequence of the twin caissons. It is found that synchronous installation is the optimal scheme for limiting ground movements. Parametric studies considering the effects of the horizontal spacing between the twin caissons, staged penetration depth, inner diameter, controllable soil-plugging height, frictional coefficient of the caisson–soil interface, and cutting edge gradient are then performed for the synchronous installation scheme. Based on an artificial data set generated through FE calculations, the multivariate adaptive regression splines (MARS) model, capable of accurately capturing nonlinear relationships between sets of input and output variables in multiple dimensions, is used to analyze the sensitivity of the caisson design parameters. Finally, MARS mathematical equations for predicting the maximum surface settlement and radial displacement, for use in preliminary caisson design, are proposed.

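A short sketch of the MARS surrogate step, assuming the open-source py-earth package (a scikit-learn-contrib project, not part of scikit-learn itself); the inputs mimic caisson design parameters, and the "FE-generated" data set is faked here:

    import numpy as np
    from pyearth import Earth   # open-source MARS implementation (assumption)

    rng = np.random.default_rng(7)
    # Design parameters: spacing, penetration depth, inner diameter, plug height, ...
    X = rng.random((300, 6))
    y = (10 * np.maximum(0, X[:, 0] - 0.4) + 5 * X[:, 1] * X[:, 2]
         + rng.normal(0, 0.1, 300))        # fake FE-computed maximum settlement

    mars = Earth(max_degree=2)             # allow pairwise hinge-function interactions
    mars.fit(X, y)
    print(mars.summary())                  # the basis functions are the 'MARS equations'
    print("predicted settlement:", mars.predict(X[:1])[0])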

13.
Based on debris-flow inventories and using a geographical information system, the susceptibility models presented here take into account fluvio-morphologic parameters gathered for every first-order catchment. Data mining techniques on the morphometric parameters are used to work out and test three different models. The first model is a logistic regression analysis based on weighting the parameters. The other two are classification trees, which are rather novel susceptibility models. These techniques enable gathering the necessary data to evaluate the performance of the models tested, with and without optimization. The analysis was performed in the Catalan Pyrenees and covered an area of more than 4,000 km². Results for the training dataset show that the optimized models’ performance lies within the previously reported range in terms of AUC, although closer to its lower end (near 70%). When the models are applied to the test set, the quality of most results decreases. However, of the three models, logistic regression seems to offer the best prediction, as training- and test-set results are very similar in terms of performance. Trees are better at extracting laws from a training set, but validation on a test set gives results that are unacceptable for prediction at the regional scale. Although they omit geological and vegetation parameters, fluvio-morphologic models based on data mining can be used in the framework of a regional debris-flow susceptibility assessment in areas where only a digital elevation model is available.
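A compact sketch of fitting and comparing the two model families on morphometric predictors, scored by AUC on a train/test split; all data are synthetic and the predictor names are placeholders:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(8)
    X = rng.random((4000, 5))          # slope, area, Melton ratio, ... per catchment
    p = 1 / (1 + np.exp(-(3 * X[:, 0] + 2 * X[:, 1] - 2.5)))
    y = rng.random(4000) < p           # debris-flow occurrence (synthetic)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    for name, model in [("logistic", LogisticRegression(max_iter=1000)),
                        ("tree", DecisionTreeClassifier(max_depth=4, random_state=0))]:
        model.fit(X_tr, y_tr)
        auc_tr = roc_auc_score(y_tr, model.predict_proba(X_tr)[:, 1])
        auc_te = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        print(f"{name}: AUC train={auc_tr:.2f}, test={auc_te:.2f}")

Comparing the train and test AUC per model mirrors the paper's finding: trees fit the training set tightly but generalize worse than the logistic model.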

14.
Lai  Zhengshou  Chen  Qiushi 《Acta Geotechnica》2019,14(1):1-18

X-ray computed tomography (CT) has emerged as the most prevalent technique to obtain three-dimensional morphological information of granular geomaterials. A key challenge in using the X-ray CT technique is to faithfully reconstruct particle morphology based on the discretized pixel information of CT images. In this work, a novel framework based on machine learning and the level set method is proposed to segment CT images and reconstruct particles of granular geomaterials. Within this framework, a feature-based machine learning technique termed Trainable Weka Segmentation is utilized for CT image segmentation, i.e., to classify material phases and to segregate particles in contact. This is a fundamentally different approach in that it predicts segmentation results based on a trained classifier model that implicitly includes image features and regression functions. Subsequently, an edge-based level set method is applied to achieve an accurate characterization of the particle shape. The proposed framework is applied to reconstruct three-dimensional realistic particle shapes of the Mojave Mars Simulant. Quantitative accuracy analysis shows that the proposed framework exhibits superior performance over the conventional watershed-based method in terms of both the pixel-based classification accuracy and the particle-based segmentation accuracy. Using the reconstructed realistic particles, the particle-size distribution is obtained and validated against experimental sieve analysis. Quantitative morphology analysis is also performed, showing the promising potential of the proposed framework in characterizing granular geomaterials.

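Trainable Weka Segmentation itself is a Fiji/ImageJ (Java) plugin; the following is a rough Python analogue of its feature-stack-plus-classifier idea, using scikit-image filters and a random forest. Everything here (image, labels, feature choice) is illustrative:

    import numpy as np
    from skimage.filters import gaussian, sobel
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(9)
    img = rng.random((128, 128))                     # stand-in for a CT slice

    # Feature stack: raw intensity, blurs at several scales, edge strength.
    feats = np.stack([img,
                      gaussian(img, sigma=1),
                      gaussian(img, sigma=4),
                      sobel(img)], axis=-1).reshape(-1, 4)

    labels = np.zeros(img.size, dtype=int)           # 0 = unlabeled pixel
    labels[:200] = 1                                 # sparse user labels: particle
    labels[-200:] = 2                                # sparse user labels: void
    train = labels > 0

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(feats[train], labels[train])
    seg = clf.predict(feats).reshape(img.shape)      # per-pixel phase map
    # A level-set step (e.g. skimage.segmentation's morphological geodesic
    # active contours) could then refine each particle's boundary.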

15.
In this contribution, a methodology is reported for building an interval fuzzy model for the pollution index PLI (a composite index using relevant heavy metal concentrations) with magnetic parameters as input variables. In general, modelling based on fuzzy set theory is designed to mimic how the human brain tends to classify imprecise information or data. The “interval fuzzy model” reported here, based on fuzzy logic and the arithmetic of fuzzy numbers, calculates an “estimation interval” and seems to be an adequate mathematical tool for this nonlinear problem. For this model, fuzzy c-means clustering is used to partition the data, from which the membership functions and rules are built. In addition, interval arithmetic is used to obtain the fuzzy intervals. The studied sets are examples of pollution by different anthropogenic sources in two different study areas: (a) soil samples collected in Antarctica and (b) road-deposited sediments collected in Argentina. The datasets comprise magnetic and chemical variables; for both cases, the relevant variables were selected: magnetic concentration-dependent variables, magnetic feature-dependent variables and one chemical variable. The model output gives an estimation interval whose width depends on the data density around the measured values. The results show not only satisfactory agreement between the estimation interval and the data, but also provide valuable information from the rule analysis that aids understanding of the magnetic behaviour of the studied variables under different conditions.
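A minimal numpy sketch of the fuzzy c-means partitioning step used to build the membership functions; the fuzzifier m, cluster count and data are assumptions:

    import numpy as np

    def fuzzy_cmeans(X, c=3, m=2.0, iters=100, seed=0):
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), c))
        U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per sample
        for _ in range(iters):
            Um = U**m
            centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
            # Standard FCM update: u_ik = d_ik^(-2/(m-1)) / sum_j d_ij^(-2/(m-1))
            U = 1.0 / (d**(2 / (m - 1)) * np.sum(d**(-2 / (m - 1)), axis=1, keepdims=True))
        return centers, U

    rng = np.random.default_rng(10)
    X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
    centers, U = fuzzy_cmeans(X, c=2)
    print("cluster centers:\n", centers.round(2))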

16.

Glacial lake outburst floods (GLOFs) are among the most serious cryospheric hazards for mountain communities. Multiple studies have predicted the potential risks posed by rapidly expanding glacial lakes in the Sagarmatha (Mt. Everest) National Park and Buffer Zone of Nepal. People’s perceptions of such cryospheric hazards can influence their actions, beliefs, and responses to those hazards and associated risks. This study provides a systematic approach that combines household survey data with ethnography to analyze people’s perceptions of GLOF risks and the socioeconomic and cultural factors influencing their perceptions. A statistical logit model of the household data showed a significant positive correlation between perceptions of GLOF risks and livelihood sources, mainly tourism. Risk perceptions are also influenced by spatial proximity to glacial lakes and whether a village lies in a potential flood zone. The 2016 emergency remediation work implemented at Imja Tsho (a glacial lake) has served as a cognitive fix, especially in the low-lying settlements. Much of the uncertainty and confusion related to GLOF risks among locals can be attributed to a disconnect between how scientific information is communicated to the local communities and how government climate change policies have been limited to awareness campaigns and emergency remediation efforts. A sustainable partnership of scientists, policymakers, and local communities is urgently needed to build a science-driven, community-based initiative that focuses not just on addressing a single GLOF threat but develops a comprehensive cryospheric risk management plan and considers the opportunities and challenges of tourism in local climate adaptation policies.


17.
It is well known that sediment composition strongly depends on grain size. A number of studies have tried to quantify this relationship focusing on the sand fraction, but only very limited data exist covering wider grain-size ranges. Geologists have a clear conceptual model of the relation between grain size and sediment petrographic composition, typically displayed in evolution diagrams. We chose a classical model covering grain sizes from fine gravel to clay and distinguishing five types of grains (rock fragments, poly- and monocrystalline quartz, feldspar and mica/clay). A compositional linear process is fitted here to a digitized version of this model, by (i) applying classical regression to the set of all pairwise log-ratios of the 5-part composition against grain size, and (ii) looking for the compositions that best approximate the set of estimated parameters, one acting as slope and one as intercept. The method is useful even in the presence of several missing values. The linear fit suggests that the relative influence of the processes controlling the relationship between grain size and sediment composition is constant along most of the grain-size spectrum.
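A sketch of steps (i)–(ii): regress every pairwise log-ratio on grain size, then recover per-component slopes on the clr scale using the identity that if b_ij = β_i − β_j, the row mean of b_ij returns β_i up to an additive constant. The composition data here are synthetic, and the sketch handles slopes only (intercepts work the same way):

    import numpy as np

    rng = np.random.default_rng(11)
    parts = ["RF", "Qp", "Qm", "F", "M"]    # rock fragments, poly-/mono-qtz, feldspar, mica
    phi = np.linspace(-2, 8, 30)            # grain size (phi scale)
    beta = np.array([-0.4, -0.1, 0.2, -0.05, 0.35])       # true clr slopes (sum = 0)
    logX = np.outer(phi, beta) + rng.normal(0, 0.05, (30, 5))
    X = np.exp(logX)
    X /= X.sum(axis=1, keepdims=True)       # closed 5-part composition

    n = len(parts)
    slope = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:                      # pairwise log-ratio regressions
                slope[i, j] = np.polyfit(phi, np.log(X[:, i] / X[:, j]), 1)[0]
    beta_hat = slope.mean(axis=1)           # clr-scale slopes via row means
    print(dict(zip(parts, np.round(beta_hat, 2))))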

18.
Applicability of the ACE algorithm for multiple regression in hydrogeology
This paper introduces the alternating conditional expectation (ACE) algorithm of Breiman and Friedman (J Am Stat Assoc 80:580–619, 1985) for estimating the transformations of a response and a set of predictor variables in multiple regression problems in hydrogeology. The proposed nonparametric approach can be applied easily for estimating the optimal transformations of different hydrogeological data to obtain maximum correlation between observed variables. The approach does not require a priori assumptions of a functional form, and the optimal transformations are derived solely from the data set. The advantages and applicability of this new approach for solving different multiple regression problems in hydrogeology, or in the Earth sciences generally, are illustrated by means of theoretical investigations and case studies. It is demonstrated that the ACE method has certain advantages over traditional multiple regression in some fitting problems of hydrogeology. To our knowledge, this is the first application of the ACE algorithm to analyze and interpret groundwater data.
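A toy bivariate ACE sketch — alternating smoothed conditional expectations, with a crude moving-average smoother standing in for the supersmoother of Breiman and Friedman (illustrative only):

    import numpy as np

    def cond_mean(z, x, k=21):
        """Crude estimate of E[z | x]: moving average of z over x-sorted order."""
        idx = np.argsort(x)
        sm = np.convolve(z[idx], np.ones(k) / k, mode="same")
        out = np.empty_like(sm)
        out[idx] = sm
        return out

    rng = np.random.default_rng(12)
    x = rng.uniform(-2, 2, 500)
    y = x**2 + rng.normal(0, 0.1, 500)          # nonlinear relation to recover

    theta = (y - y.mean()) / y.std()            # transformed response, start = y
    for _ in range(30):                         # ACE alternation
        phi = cond_mean(theta, x)               # phi(x)   = E[theta(y) | x]
        theta = cond_mean(phi, y)               # theta(y) = E[phi(x) | y], rescaled
        theta = (theta - theta.mean()) / theta.std()

    print("maximal correlation ~", np.corrcoef(theta, phi)[0, 1].round(2))
    # Plotting theta against y and phi against x reveals the optimal transforms
    # (here theta(y) roughly linear in y, phi(x) roughly x**2-shaped).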

19.
The geochemical evolution of metamorphic rocks during subduction-related metamorphism is described on the basis of multivariate statistical analyses. The studied data set comprises a series of mapped metamorphic rocks collected from the Sanbagawa metamorphic belt in central Shikoku, Japan, where metamorphic conditions range from the pumpellyite–actinolite to epidote–amphibolite facies. Recent progress in computational and information science provides a number of algorithms capable of revealing structures in large data sets. This study applies k-means cluster analysis (KCA) and non-negative matrix factorization (NMF) to a series of metapelites, the main lithotype of the Sanbagawa metamorphic belt. KCA describes the structures of the high-dimensional data, while NMF provides an end-member decomposition useful for evaluating the spatial distribution of continuous compositional trends. The analysed data set, derived from previously published work, contains 296 samples for which 14 elements (Si, Ti, Al, Fe, Mn, Mg, Ca, Na, K, P, Rb, Sr, Zr and Ba) have been analysed. The KCA and NMF analyses indicate five clusters and four end-members, respectively, successfully explaining compositional variations within the data set. KCA indicates that the chemical compositions of metapelite samples from the western (Besshi) part of the sampled area differ significantly from those in the east (Asemigawa). In the west, clusters show a good correlation with the metamorphic grade. With increasing metamorphic grade, there are decreases in SiO2 and Na2O and increases in other components. However, the compositional change with metamorphic grade is less obvious in the eastern area. End-member decomposition using NMF revealed that the evolutionary change of whole-rock composition, as correlated with metamorphic grade, approximates a stoichiometric increase of a garnet-like component in the whole-rock composition, possibly due to the precipitation of garnet and effusion of other components during progressive dehydration. Thermodynamic modelling of the evolution of the whole-rock composition yielded the following results: (1) the whole-rock composition at lower metamorphic grade favours the preferential crystallization of garnet under the conditions of the garnet zone, with biotite becoming stable together with garnet in higher-grade rock compositions under the same P–T conditions; (2) with higher-grade whole-rock compositions, more H2O is retained. These results provide insight into the mechanism suppressing dehydration under high-P metamorphic conditions. This mechanism should be considered in forward modelling of the fluid cycle in subduction zones, although such a quantitative model has yet to be developed.
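A brief sketch of the KCA-plus-NMF workflow on a samples-by-elements matrix; the numbers are synthetic stand-ins for the 296 × 14 whole-rock data:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(13)
    W_true = rng.random((296, 4))                   # hidden end-member proportions
    H_true = rng.random((4, 14))                    # hidden end-member compositions
    data = W_true @ H_true + rng.random((296, 14)) * 0.01   # non-negative wt% matrix

    km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(data)
    print("cluster sizes:", np.bincount(km.labels_))

    nmf = NMF(n_components=4, max_iter=1000, random_state=0)
    W = nmf.fit_transform(data)                     # per-sample end-member loadings
    H = nmf.components_                             # end-member "compositions"
    print("reconstruction error:", round(nmf.reconstruction_err_, 3))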

20.
Liu  Dingli  Xu  Zhisheng  Fan  Chuangang 《Natural Hazards》2019,97(3):1175-1189

Frequent fires can affect ecosystems and public safety, and fire occurrence in China varies between hot and cold months. To analyze how temperature influences fire frequency, a fire dataset including 20,622 fires and a historical weather dataset for Changsha, China, were gathered and processed. Through data mining, it was found that the mean daily fire frequency tended to be lowest in the temperature range (20 °C, 25 °C], which is likely related to the low utilization rate of electricity in that range. Through polynomial fitting, it was found that the prediction performance using the daily minimum temperature was generally better than that using the daily maximum temperature, and that a quartic (fourth-degree) polynomial model based on the mean daily minimum temperature over 3 days (the day itself and the prior 2 days) performed best. A temperature-based fire frequency prediction model was then established using quartic polynomial regression. Notably, the results contradict the provisions of China’s national standard for urban fire-danger weather ratings, GB/T 20487-2006. The findings of this study can serve as technical guidance for fire risk prediction and for the revision of GB/T 20487-2006.

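A small sketch of the final model form: a 3-day rolling mean of daily minimum temperature feeding a quartic polynomial fit of daily fire counts. The data are synthetic; the paper's coefficients are not reproduced:

    import numpy as np

    rng = np.random.default_rng(14)
    tmin = rng.uniform(-5, 35, 1000)                       # daily minimum temperature (C)
    # Mean over the day and the prior two days:
    t3 = np.convolve(tmin, np.ones(3) / 3, mode="valid")
    # Synthetic fire counts: lowest around ~20-25 C, rising toward both extremes.
    fires = 8 + 0.02 * (t3 - 22.5)**2 + rng.normal(0, 1, t3.size)

    coeffs = np.polyfit(t3, fires, deg=4)                  # quartic regression
    model = np.poly1d(coeffs)
    for temp in (0, 10, 22, 33):
        print(f"T3={temp:3d} C -> predicted daily fires: {model(temp):.1f}")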
