To drive an atmospheric general circulation model (AGCM), land surface boundary conditions such as albedo and morphological roughness, which depend on the vegetation type present, must be prescribed. For the late Quaternary some data are available, but they remain sparse. Here, an artificial neural network approach to assimilating these paleovegetation data is investigated. In contrast to a biome model, the relation between climatological parameters and vegetation type is not based on biological knowledge but is estimated from the available vegetation data and the AGCM climatology at the corresponding locations. For a test application, a data set of the modern vegetation, reduced to the amount of data available for the Holocene climate optimum (about 6000 years B.P.), is used. From this, the neural network is able to reconstruct the complete global vegetation with a kappa value of 0.56. The most pronounced errors occur in Australia and South America, in areas corresponding to large data gaps.
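The kappa value quoted above measures agreement between the reconstructed and reference vegetation maps beyond what chance alone would produce. As an illustrative sketch (the toy grid-cell data below are invented, not from the study), Cohen's kappa for two categorical maps can be computed from their confusion matrix:

```python
import numpy as np

def cohens_kappa(observed, predicted, n_classes):
    """Cohen's kappa: agreement between two categorical maps
    beyond the agreement expected by chance."""
    cm = np.zeros((n_classes, n_classes))
    for o, p in zip(observed, predicted):
        cm[o, p] += 1                       # confusion matrix of co-occurrences
    n = cm.sum()
    p_o = np.trace(cm) / n                  # observed agreement
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement
    return (p_o - p_e) / (1.0 - p_e)

# Toy example: two vegetation maps over 8 grid cells, 3 biome classes
obs  = [0, 0, 1, 1, 2, 2, 0, 1]
pred = [0, 0, 1, 2, 2, 2, 1, 1]
print(round(cohens_kappa(obs, pred, 3), 3))  # → 0.628
```

A kappa of 0.56, as reported, is conventionally read as moderate-to-good agreement; kappa equals 1 for perfect agreement and 0 for purely chance-level agreement.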
Reviews of geographic software in this article: DEMO-GRAPHICS: WORLD POPULATIONS AND PROJECTIONS; ESP; GAUSS; CEMODEL (S. Damus); LIMDEP (William H. Greene); MICROSTAT 4.1; OTIS; PCIPS (Personal Computer Image Processing System) (H.J. Meyers and R. Bernstein); REGRESSION ANALYSIS OF TIME SERIES (RATS); SPSS/PC+; URBAN DATA MANAGEMENT SOFTWARE (UDMS).
Coal seams burning beneath the surface are recognized all over the world and have drawn increasing public attention in recent years. Frequently, such fires are analyzed by detecting anomalies such as increased exhaust-gas concentrations and soil temperatures at the surface. A proper analysis presumes an understanding of the processes involved, which determine the spatial distribution and dynamic behavior of the anomalies. In this paper, we explain the relevance of mechanical and energy transport processes with respect to the occurrence of temperature anomalies at the surface. Two approaches are presented, aiming to obtain insight into the underground coal fire situation: in-situ temperature mapping and numerical simulation. From 2000 to 2005, annual temperature mapping in the Wuda (Inner Mongolia, PR China) coal fire area showed that most thermal anomalies at the surface are closely related to fractures, where hot exhaust gases from the coal fire are released. These fractures develop due to rock-mechanical failure after volume reduction in the seams. The measured signals at the surface are therefore strongly affected by mechanical processes. More insight into the causes and effects of the energy transport processes involved is obtained by numerical simulation of the dynamic behavior of coal fires. Simulations show the interrelation between release and transport of thermal energy in and around underground coal fires. Our simulation results show a time delay between coal fire propagation and the observed appearance of the surface temperature signal. Additionally, the overall energy flux away from the burning coal seam into the surrounding bedrock is about 30 times higher than the flux through the surface. This is of particular importance for estimating the energy released based on surface temperature measurements.
Finally, the simulation results also show that a fire propagation rate estimated from the interpretation of surface anomalies can differ from the actual rate in the seam.
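The roughly 30:1 ratio between the energy flux into the bedrock and the flux through the surface implies that surface measurements alone greatly underestimate the total energy released. A minimal sketch of such a scaling estimate; the flux, area, and ratio default below are hypothetical placeholders, not values from the study:

```python
def total_power_release_mw(surface_flux_w_m2, area_m2, flux_ratio=30.0):
    """Scale a measured surface heat flux up to the total power released
    by the burning seam, using a simulated seam-to-surface flux ratio.
    flux_ratio is an assumed constant, stylizing the ~30x result above."""
    surface_power_w = surface_flux_w_m2 * area_m2
    return surface_power_w * flux_ratio / 1e6  # W -> MW

# Hypothetical anomaly: 50 W/m2 over 0.1 km2 (1e5 m2)
print(total_power_release_mw(50.0, 1.0e5))  # → 150.0
```

The point of the sketch is only that the correction factor dominates the estimate: without it, the inferred release would be 30 times too small.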
The imbalance between incoming and outgoing salt causes salinization of soils and subsoils, which increases the salinity of streamflows and agricultural land. This salinization is a serious environmental hazard, particularly in semi-arid and arid lands. In order to estimate the magnitude of the hazard posed by salinity, it is important to understand and identify the processes that control salt movement from the soil surface through the root zone to the groundwater and streamflows. For the present study, the Malaprabha sub-basin (up to the dam site) was selected; it has two distinct climatic zones, sub-humid (upstream of Khanapur) and semi-arid (downstream of Khanapur). Upstream, both surface water and groundwater are used for irrigation, whereas downstream mostly groundwater is used. Both soils and groundwaters are more saline in the downstream parts of the study area. In this study we characterized the soil salinity and groundwater quality in both areas. An attempt is also made to model the distribution of potassium concentration in the soil profile in response to varying irrigation conditions using the SWIM (Soil-Water Infiltration and Movement) model. Fair agreement was obtained between predicted and measured results, indicating the applicability of the model.
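Agreement between model predictions and field measurements, as reported for the SWIM runs, is commonly quantified with an error metric such as the root-mean-square error. A minimal sketch; the potassium-profile values below are hypothetical, not data from the study:

```python
import math

def rmse(predicted, measured):
    """Root-mean-square error between model output and observations."""
    n = len(predicted)
    return math.sqrt(sum((p - m) ** 2 for p, m in zip(predicted, measured)) / n)

# Hypothetical K concentrations (mg/kg) at five soil depths
measured  = [210.0, 185.0, 150.0, 120.0, 95.0]
predicted = [200.0, 190.0, 145.0, 128.0, 90.0]
print(round(rmse(predicted, measured), 2))  # → 6.91
```

An RMSE small relative to the range of measured values (here about 7 mg/kg against a 95-210 mg/kg range) is one way to make a claim of "fair agreement" concrete.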
This paper presents a new contact calculating algorithm for contacts between two polyhedra with planar boundaries in the three-dimensional discontinuous deformation analysis (3-D DDA). In this algorithm, all six contact types in 3-D (vertex-to-face, vertex-to-edge, vertex-to-vertex, face-to-face, edge-to-edge, and edge-to-face) are simply transformed into point-to-face contacts. The presented algorithm is simple and efficient and can easily be coded into a computer program. In this paper, formulations of the normal contact, shear contact and frictional force submatrices based on the new method are derived, and the algorithm has been programmed in VC++. Examples are provided to demonstrate the new contact rule between two blocks.
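Reducing all contact types to point-to-face contacts ultimately rests on evaluating where a candidate point lies relative to a face plane. A minimal geometric sketch of that primitive (this is an illustration of the signed point-to-plane distance, not the authors' actual DDA implementation):

```python
import numpy as np

def point_to_face_distance(p, face_vertices):
    """Signed distance from point p to the plane of a planar face.
    Negative values indicate penetration (the point lies behind the
    face with respect to its outward normal)."""
    v0, v1, v2 = (np.asarray(v, dtype=float) for v in face_vertices[:3])
    normal = np.cross(v1 - v0, v2 - v0)   # face normal from two edge vectors
    normal /= np.linalg.norm(normal)
    return float(np.dot(np.asarray(p, dtype=float) - v0, normal))

# Face in the z=0 plane with outward normal +z
face = [(0, 0, 0), (1, 0, 0), (1, 1, 0)]
print(point_to_face_distance((0.5, 0.5, 0.2), face))   # above the face
print(point_to_face_distance((0.5, 0.5, -0.1), face))  # penetrating
```

In a DDA-style contact scheme, a negative distance would trigger assembly of the normal and shear contact submatrices for that point-face pair.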
Geologic storage of CO2 is expected to produce plumes of large areal extent, and some leakage may occur along fractures, fault zones, or improperly plugged pre-existing wellbores. A review of the physical and chemical processes accompanying leakage suggests a potential for self-enhancement. The numerical simulations presented here confirm this expectation, but reveal self-limiting features as well. It seems unlikely that CO2 leakage could trigger a high-energy runaway discharge, a so-called “pneumatic eruption,” but present understanding is insufficient to rule out this possibility. The most promising avenue for increasing understanding of CO2 leakage behavior is the study of natural analogues.
This paper presents an example application of the double solid reactant method (DSRM) of Accornero and Marini (Environmental Geology, 2007a), an effective way of modeling the fate of several dissolved trace elements during water–rock interaction. The EQ3/6 software package was used to simulate the irreversible water–rock mass transfer accompanying the generation of the groundwaters of the Porto Plain shallow aquifer, starting from a degassed, diluted crateric steam condensate. Reaction path modeling was performed in reaction progress mode and under closed-system conditions. The simulations assumed: (1) bulk dissolution (i.e., without any constraint on the kinetics of dissolution/precipitation reactions) of a single solid phase, a leucite-latitic glass, and (2) precipitation of amorphous silica, barite, alunite, jarosite, anhydrite, kaolinite, a solid mixture of smectites, fluorite, a solid mixture of hydroxides, illite-K, a solid mixture of saponites, a solid mixture of trigonal carbonates, and a solid mixture of orthorhombic carbonates. Analytical concentrations of major chemical elements and several trace elements (Cr, Mn, Fe, Ni, Cu, Zn, As, Sr and Ba) in groundwaters were satisfactorily reproduced. In addition to these simulations, similar runs for a rhyolite, a latite and a trachyte made it possible to calculate major oxide contents for the authigenic paragenesis that are comparable, to a first approximation, with the corresponding data measured for local altered rocks belonging to the silicic, advanced argillic and intermediate argillic alteration facies. The important role played by the solid mixture of trigonal carbonates as a scavenger of Mn, Zn, Cu and Ni, and by the solid mixture of orthorhombic carbonates as a scavenger of Sr and Ba, is emphasized.
This research shows that in the Celje area (Slovenia), historical anthropogenic emissions amount to 1,712 tons of Zn and 9.1 tons of Cd. For Zn, this value represents approximately 0.3% of the total Zn production in the area. Close to the former zinc smelting plant, the “Zn precipitation” has been estimated at up to 0.036 mm. The 100 years of Zn production left behind a heavily contaminated area, with maximum concentrations of Zn of up to 5.6% in attic dust and 0.85% in the soil, and of Cd of 456 mg/kg in attic dust and 59.1 mg/kg in the soil. The calculation of historical emissions is based on heavy-metal concentrations in attic dust at 98 sampling points and on data from 19 measurement sites recording the weight of the total monthly air deposit. The main idea behind determining past emissions is that when the weight of the dust deposited on a small area is multiplied by the concentration of the element in that dust, the mass of the pollutant transported to the place of interest by air can be calculated. Summing these masses over the whole geochemical anomaly gives the quantity of historical emissions.
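The deposit-weight-times-concentration idea described above can be sketched as a simple summation over the anomaly; all site values below are illustrative placeholders, not data from the Celje survey:

```python
# Hypothetical site data: monthly dust deposition per unit area, Zn
# concentration in that dust, and the area each site represents
sites = [
    # (deposit_kg_per_m2_per_month, zn_mg_per_kg, area_m2)
    (0.002,  12000, 5.0e5),
    (0.001,   4500, 1.2e6),
    (0.0005,  1500, 3.0e6),
]

def historical_emissions_kg(sites, months):
    """Sum (deposited dust mass x element concentration) over all
    sites covering the geochemical anomaly."""
    total_mg = 0.0
    for deposit, conc, area in sites:
        dust_kg = deposit * months * area   # total dust deposited on this area
        total_mg += dust_kg * conc          # kg dust x mg/kg = mg of metal
    return total_mg / 1e6                   # mg -> kg

# 100 years of deposition, in kilograms of Zn
print(historical_emissions_kg(sites, 12 * 100))
```

In the study itself, the 98 attic-dust concentrations and 19 deposition measurements would take the place of these invented site tuples, with the summation carried over the mapped geochemical anomaly.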
This paper reviews the methods of seismic hazard analysis currently in use, analyzing the strengths and weaknesses of the different approaches. The review is performed from the perspective of a user of the results of seismic hazard analysis for different applications, such as the design of critical and general (non-critical) civil infrastructure and technical and financial risk analysis. A set of criteria is developed for, and applied to, an objective assessment of the capabilities of the different analysis methods. It is demonstrated that traditional probabilistic seismic hazard analysis (PSHA) methods have significant deficiencies that limit their practical applications. These deficiencies have their roots in the use of inadequate probabilistic models and in an insufficient understanding of modern concepts of risk analysis, as revealed in some recent large-scale studies. They result in an inability to correctly treat dependencies between physical parameters and, ultimately, in an incorrect treatment of uncertainties. As a consequence, the results of PSHA studies have been found to be unrealistic in comparison with empirical information from the real world. Attempts to compensate for these problems through systematic use of expert elicitation have, so far, not improved the situation. It is also shown that scenario earthquakes developed by disaggregation from the results of a traditional PSHA may not be conservative with respect to energy conservation and should not be used for the design of critical infrastructure without validation. Because the assessment of the technical as well as the financial risks associated with potential earthquake damage requires a risk analysis, the current method is based on a probabilistic approach, with its unsolved deficiencies.
Traditional deterministic or scenario-based seismic hazard analysis methods provide a reliable and, in general, robust design basis for applications such as the design of critical infrastructure, especially when combined with systematic sensitivity analyses based on validated phenomenological models. Deterministic seismic hazard analysis incorporates uncertainties in safety factors, which are derived from experience as well as from expert judgment. Deterministic methods associated with high safety factors may lead to overly conservative results, especially if applied to generally short-lived civil structures. Scenarios used in deterministic seismic hazard analysis have a clear physical basis: they are related to seismic sources discovered by geological, geomorphological, geodetic and seismological investigations, or derived from historical records. Scenario-based methods can be extended to risk analysis applications through an extended data analysis providing the frequency of seismic events. Such an extension provides a better-informed risk model that is suitable for risk-informed decision making.