To drive an atmospheric general circulation model (AGCM), land surface boundary conditions such as albedo and morphological roughness, which depend on the vegetation type present, have to be prescribed. For the late Quaternary some data are available, but they are still sparse. Here an artificial neural network approach to assimilating these paleovegetation data is investigated. In contrast to a biome model, the relation between climatological parameters and vegetation type is not based on biological knowledge but is estimated from the available vegetation data and the AGCM climatology at the corresponding locations. For a test application, a data set of the modern vegetation, reduced to the amount of data available for the Holocene climate optimum (about 6000 years B.P.), is used. From this, the neural network is able to reconstruct the complete global vegetation with a kappa value of 0.56. The most pronounced errors occur in Australia and South America, in areas corresponding to large data gaps.
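The kappa statistic reported above measures agreement between the reconstructed and observed vegetation maps beyond what chance alone would produce. A minimal sketch of Cohen's kappa for two categorical maps follows; the biome labels and grid cells are purely illustrative, not data from the study:

```python
from collections import Counter

def cohens_kappa(observed, predicted):
    """Cohen's kappa: agreement between two categorical maps,
    corrected for the agreement expected by chance."""
    assert len(observed) == len(predicted)
    n = len(observed)
    # observed fraction of matching cells
    p_o = sum(o == p for o, p in zip(observed, predicted)) / n
    # chance agreement from the marginal class frequencies of each map
    obs_counts = Counter(observed)
    pred_counts = Counter(predicted)
    p_e = sum(obs_counts[c] * pred_counts.get(c, 0) for c in obs_counts) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# toy example: two hypothetical biome maps over six grid cells
obs  = ["forest", "forest", "desert", "tundra", "desert", "forest"]
pred = ["forest", "desert", "desert", "tundra", "desert", "forest"]
print(round(cohens_kappa(obs, pred), 3))
```

A value of 1 indicates perfect agreement and 0 indicates agreement no better than chance; the 0.56 reported for the neural network reconstruction lies between the two.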
Reviews of geographic software in this article: DEMO-GRAPHICS: WORLD POPULATIONS AND PROJECTIONS; ESP; GAUSS; CEMODEL (S. Damus); LIMDEP (William H. Greene); MICROSTAT 4.1; OTIS; PCIPS (Personal Computer Image Processing System; H.J. Meyers and R. Bernstein); REGRESSION ANALYSIS OF TIME SERIES (RATS); SPSS/PC+; URBAN DATA MANAGEMENT SOFTWARE (UDMS).
We analyzed thin sections from two palaeoseismic trenches across the low-slip-rate Geleen Fault in the Belgian Maas River valley to help identify the most recent large palaeoearthquake on this fault segment. In the first trench we sampled silty sediment below and above a prehistoric stone pavement that was supposedly at or near the surface at the time of the event, and subsequently thrown down. The samples from below show a well-developed in situ argillic Bt soil horizon in parent sediment containing remnants of stratification, whereas the sediment above is a structureless colluvium reworked at least partly from Bt-horizon material. Below the stone pavement, we also found evidence of contorted stratification, which is in agreement with macroscopic observations of both the sediment and the stone pavement itself, and which is attributed to co-seismic soft-sediment deformation. In the second trench, we sampled a sequence of vaguely discernible soil horizons in the hanging-wall, interpreted as a buried soil profile (Bt, E, and possibly A horizons), overlain by a featureless deposit. Thin-section analysis supports the colluvial nature of the latter, and also provides evidence that both the base of this layer and the top of the poorly developed A horizon below it once occupied a shallow position in a soil profile. A sample from the same depth in the footwall is composed of very different material. Instead of colluvium, we find patches of Bt soil, most likely representing the same pedogenic level as the in situ Bt horizon at greater depth in the hanging-wall, but displaced and subsequently degraded. Furthermore, thin sections confirm that vertical structures cutting this Bt horizon are sand dykes. These dykes could be traced macroscopically upward to the base of the colluvium. In both trenches, we have thus identified a stratigraphic boundary in the hanging-wall, close to the surface, separating an in situ soil below from colluvium above.
We interpret this boundary and the overlying colluvium as the event horizon and the colluvial wedge, respectively, of a surface-rupturing palaeoearthquake. In addition, in both cases we found evidence of soft-sediment deformation (related to liquefaction) contemporaneous with the event, within the stratigraphic resolution.
This paper presents an example application of the double solid reactant method (DSRM) of Accornero and Marini (Environmental Geology, 2007a), an effective way of modeling the fate of several dissolved trace elements during water–rock interaction. The EQ3/6 software package was used to simulate the irreversible water–rock mass transfer accompanying the generation of the groundwaters of the Porto Plain shallow aquifer, starting from a degassed, diluted crateric steam condensate. Reaction path modeling was performed in reaction progress mode and under closed-system conditions. The simulations assumed: (1) bulk dissolution (i.e., without any constraint on the kinetics of dissolution/precipitation reactions) of a single solid phase, a leucite-latitic glass, and (2) precipitation of amorphous silica, barite, alunite, jarosite, anhydrite, kaolinite, a solid mixture of smectites, fluorite, a solid mixture of hydroxides, illite-K, a solid mixture of saponites, a solid mixture of trigonal carbonates, and a solid mixture of orthorhombic carbonates. Analytical concentrations of major chemical elements and several trace elements (Cr, Mn, Fe, Ni, Cu, Zn, As, Sr and Ba) in groundwaters were satisfactorily reproduced. In addition to these simulations, similar runs for a rhyolite, a latite and a trachyte made it possible to calculate major oxide contents of the authigenic paragenesis that are comparable, to a first approximation, with the corresponding data measured for local altered rocks belonging to the silicic, advanced argillic and intermediate argillic alteration facies. The important role played by both the solid mixture of trigonal carbonates as a scavenger of Mn, Zn, Cu and Ni and the solid mixture of orthorhombic carbonates as a scavenger of Sr and Ba is emphasized.
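The reaction-progress bookkeeping described above can be illustrated in miniature: dissolve the reactant in small increments of reaction progress and let a secondary phase precipitate whenever it becomes supersaturated. The sketch below is a toy illustration of that logic, not EQ3/6; all constants are illustrative placeholders, not thermodynamic data from the study:

```python
import math

# Toy closed-system reaction-progress loop: a glass dissolves in increments
# of reaction progress xi, and amorphous silica precipitates whenever its
# saturation index SI = log10(IAP/K) exceeds zero (ideal dilute solution).
K_SiO2_am = 10 ** -2.71      # illustrative solubility product of amorphous silica
si_per_mole_glass = 0.5      # illustrative mol SiO2 released per mol glass
d_xi = 1e-5                  # reaction-progress increment (mol glass per step)

si_aq = 0.0                  # dissolved silica (mol/kg)
precipitated = 0.0           # cumulative amorphous silica precipitated (mol)
for step in range(1000):
    si_aq += si_per_mole_glass * d_xi          # dissolve one increment of glass
    SI = math.log10(si_aq / K_SiO2_am)
    if SI > 0:                                 # supersaturated: drop back to SI = 0
        precipitated += si_aq - K_SiO2_am
        si_aq = K_SiO2_am

print(f"dissolved: {si_aq:.2e} mol/kg, precipitated: {precipitated:.2e} mol")
```

Once saturation with the secondary phase is reached, further dissolution of the reactant is converted entirely into precipitate, which is the qualitative behavior that lets a reaction path scavenge trace elements into solid mixtures.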
This paper reviews the methods of seismic hazard analysis currently in use, analyzing the strengths and weaknesses of the different approaches. The review is performed from the perspective of a user of the results of seismic hazard analysis for different applications, such as the design of critical and general (non-critical) civil infrastructure and technical and financial risk analysis. A set of criteria is developed for, and applied to, an objective assessment of the capabilities of the different analysis methods. It is demonstrated that traditional probabilistic seismic hazard analysis (PSHA) methods have significant deficiencies that limit their practical applicability. These deficiencies have their roots in the use of inadequate probabilistic models and an insufficient understanding of modern concepts of risk analysis, as revealed in some recent large-scale studies. They result in an inability to treat dependencies between physical parameters correctly and, ultimately, in an incorrect treatment of uncertainties. As a consequence, the results of PSHA studies have been found to be unrealistic in comparison with empirical information from the real world. Attempts to compensate for these problems through the systematic use of expert elicitation have, so far, not improved the situation. It is also shown that scenario earthquakes developed by disaggregation from the results of a traditional PSHA may not be conservative with respect to energy conservation and should not be used for the design of critical infrastructure without validation. Because the assessment of technical as well as financial risks associated with potential earthquake damage requires a risk analysis, current practice is based on a probabilistic approach, with its unresolved deficiencies.
Traditional deterministic or scenario-based seismic hazard analysis methods provide a reliable and generally robust design basis for applications such as the design of critical infrastructure, especially when combined with systematic sensitivity analyses based on validated phenomenological models. Deterministic seismic hazard analysis incorporates uncertainties in safety factors, which are derived from experience as well as from expert judgment. Deterministic methods associated with high safety factors may lead to overly conservative results, especially when applied to generally short-lived civil structures. Scenarios used in deterministic seismic hazard analysis have a clear physical basis: they are related to seismic sources identified by geological, geomorphological, geodetic and seismological investigations or derived from historical records. Scenario-based methods can be extended to risk analysis applications through an expanded data analysis that provides the frequency of seismic events. Such an extension yields a better-informed risk model suitable for risk-informed decision making.
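For reference, the core computation a PSHA produces, the annual rate of exceeding a ground-motion level obtained from the total probability theorem, can be sketched in a few lines. The recurrence and attenuation parameters below are illustrative placeholders for a single source at a fixed distance, not a calibrated hazard model:

```python
import math

def normal_sf(z):
    """Survival function of the standard normal distribution."""
    return 0.5 * math.erfc(z / math.sqrt(2))

a_gr, b_gr = 4.0, 1.0            # illustrative Gutenberg-Richter parameters
m_min, m_max, dm = 5.0, 7.5, 0.1  # magnitude range and bin width
r_km = 30.0                       # fixed source-to-site distance
sigma_ln = 0.6                    # lognormal ground-motion scatter

def median_pga(m, r):
    """Simplistic illustrative attenuation: ln PGA = -1.0 + 0.9 m - 1.2 ln r."""
    return math.exp(-1.0 + 0.9 * m - 1.2 * math.log(r))

def exceedance_rate(pga):
    """Annual rate of PGA > pga, summed over magnitude bins."""
    rate = 0.0
    n_bins = int(round((m_max - m_min) / dm))
    for i in range(n_bins):
        m_lo = m_min + i * dm
        # annual rate of events in this bin (truncated Gutenberg-Richter)
        nu = 10 ** (a_gr - b_gr * m_lo) - 10 ** (a_gr - b_gr * (m_lo + dm))
        # probability the bin-center event exceeds the PGA level
        z = (math.log(pga) - math.log(median_pga(m_lo + dm / 2, r_km))) / sigma_ln
        rate += nu * normal_sf(z)
    return rate

print(exceedance_rate(0.1), exceedance_rate(0.3))
```

Plotting this rate against the intensity level gives the familiar decreasing hazard curve; the dependency and uncertainty treatment criticized in the review enters through the choices of recurrence model, attenuation relation, and scatter term.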
Maps showing the potential for soil erosion at 1:100,000 scale are produced for a study area in Lebanon; they can be used to evaluate erosion of Mediterranean karstic terrain with two different sets of impact factors built into an erosion model. The first set of factors comprises soil erodibility, morphology, land cover/use and rainfall erosivity. The second is obtained from the first by adding a fifth factor, rock infiltration. High infiltration can reflect high recharge, decreasing the potential for surface runoff and hence the quantity of transported material. Infiltration is derived as a function of lithology, lineament density, karstification and drainage density, all of which can be easily extracted from satellite imagery. The influence of these factors is assessed by a weight/rate approach that combines aspects of quantitative and qualitative methods and relies on a pair-wise comparison matrix. The main outcome is the production of factorial maps and erosion susceptibility maps (scale 1:100,000). Spatial and attribute comparison of the erosion maps indicates that the model including a measure of rock infiltration better represents erosion potential. Field investigation of rills and gullies shows 87.5% precision for the model with rock infiltration, 17.5% greater than the precision of the model without it. These results indicate the necessity and importance of integrating information on the infiltration of rock outcrops when assessing soil erosion in Mediterranean karst landscapes.
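The pair-wise comparison step mentioned above is commonly implemented as in the analytic hierarchy process: relative-importance judgments fill a reciprocal matrix, and factor weights are approximated from it. A minimal sketch with hypothetical judgments for the first four factors follows; the numbers are invented for illustration, not taken from the study:

```python
import math

# Hypothetical pairwise comparison matrix (Saaty 1-9 scale) for four factors.
# a[i][j] = relative importance of factor i over factor j; a[j][i] = 1/a[i][j].
factors = ["soil erodibility", "morphology", "land cover/use", "rainfall erosivity"]
a = [
    [1,     3,   2,     4],
    [1 / 3, 1,   1 / 2, 2],
    [1 / 2, 2,   1,     3],
    [1 / 4, 1 / 2, 1 / 3, 1],
]

# Approximate the principal eigenvector by the geometric mean of each row,
# then normalize so the weights sum to 1.
gm = [math.prod(row) ** (1 / len(row)) for row in a]
weights = [g / sum(gm) for g in gm]

for f, w in zip(factors, weights):
    print(f"{f}: {w:.3f}")
```

The resulting weights multiply the rated factor maps cell by cell to give the erosion susceptibility score; a consistency check of the judgment matrix would normally accompany this step.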
We describe empirical results from a multi-disciplinary project that support modeling complex processes of land-use and land-cover change in exurban parts of Southeastern Michigan. Based on two different conceptual models, one describing the evolution of urban form as a consequence of residential preferences and the other describing land-cover changes in an exurban township as a consequence of residential preferences, local policies, and a diversity of development types, we describe a variety of empirical data collected to support the mechanisms that we encoded in computational agent-based models. We used multiple methods, including social surveys, remote sensing, and statistical analysis of spatial data, to collect data that could be used to validate the structure of our models, calibrate their specific parameters, and evaluate their output. The data were used to investigate this system in the context of several themes from complexity science, including (a) macro-level patterns; (b) autonomous decision-making entities (i.e., agents); (c) heterogeneity among those entities; (d) social and spatial interactions that operate across multiple scales; and (e) nonlinear feedback mechanisms. The results point to the importance of collecting data on agents and their interactions when producing agent-based models, the general validity of our conceptual models, and some changes that we needed to make to these models following data analysis. The calibrated models have been and are being used to evaluate landscape dynamics and the effects of various policy interventions on urban land-cover patterns.
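A stripped-down illustration of the agent-based mechanism described, heterogeneous residential agents choosing locations by weighing landscape preference against accessibility, might look like the following. Everything here (grid, utility form, parameter draws) is a hypothetical sketch, not the project's actual model:

```python
import random

random.seed(0)  # reproducible toy run

# Each cell has an aesthetic quality; each agent weighs aesthetics against
# distance to a hypothetical urban center at (0, 0) and settles in the
# best unoccupied cell.
SIZE = 10
quality = [[random.random() for _ in range(SIZE)] for _ in range(SIZE)]
occupied = set()

def utility(x, y, alpha):
    """alpha = agent's preference weight for aesthetics vs. accessibility."""
    dist = (x ** 2 + y ** 2) ** 0.5
    accessibility = 1 - dist / (2 ** 0.5 * (SIZE - 1))
    return alpha * quality[x][y] + (1 - alpha) * accessibility

def settle(alpha):
    """One agent picks the highest-utility unoccupied cell."""
    best = max(
        ((x, y) for x in range(SIZE) for y in range(SIZE) if (x, y) not in occupied),
        key=lambda c: utility(*c, alpha),
    )
    occupied.add(best)
    return best

# Heterogeneous agents: preference weights drawn at random
for _ in range(20):
    settle(random.random())
print(len(occupied))  # prints 20
```

Even this caricature reproduces the qualitative point of the abstract: with heterogeneous preference weights, settlement spreads between the center and high-amenity cells rather than filling in compactly, which is why data on agent heterogeneity matter for calibration.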