The bulk composition of organic matter and of saturated and aromatic hydrocarbons extracted from 16 samples collected from two Kupferschiefer profiles in the Rudna mine, southwest Poland, has been analyzed to study the role of organic matter during base metal enrichment in the Kupferschiefer shale. The results indicate that extract yields and saturated hydrocarbon yields decrease with increasing base metal content. GC and GC/MS analyses indicate that n-alkanes and alkylated aromatic compounds were depleted and may have served as hydrogen donors for thermochemical sulfate reduction. The enrichment of base metals is closely connected with the destruction of hydrocarbons.
An integrated groundwater/surface water hydrological model with a 1 km2 grid has been constructed for Denmark, covering 43,000 km2. The model is composed of a relatively simple root zone component for estimating the net precipitation, a comprehensive three-dimensional groundwater component for estimating recharge to and hydraulic heads in different geological layers, and a river component for streamflow routing and calculating stream–aquifer interaction. The model was constructed on the basis of the MIKE SHE code and by utilising comprehensive national databases on geology, soil, topography, river systems, climate and hydrology. The present paper describes the modelling process for the 7330 km2 island of Sjælland, with emphasis on the problems experienced in combining the classical paradigms of groundwater modelling, such as inverse modelling of steady-state conditions, and catchment modelling, which focuses on dynamic conditions and discharge simulation. Three model versions with different assumptions on input data and parameter values were required before the performance of the final model was evaluated as satisfactory according to pre-defined accuracy criteria. The paper highlights the methodological issues related to the establishment of performance criteria, parameterisation and assessment of parameter values from field data, and calibration and validation test schemes. Most of the parameter values were assessed directly from field data, while about 10 ‘free’ parameters were subject to calibration using a combination of inverse steady-state groundwater modelling and manual trial-and-error dynamic groundwater/surface water modelling. Emphasising the importance of tests against independent data, the validation schemes included combinations of split-sample tests (another period) and proxy-basin tests (another area).
Data from 25 local catalogues and 30 special studies of earthquakes in central, northern and northwestern Europe have been incorporated into a Databank. The data processing includes discriminating event types, eliminating fake events and duplets, and converting different magnitudes and intensities to Mw where this is not given by the original source. The magnitude conversion is a key task of the study and implies the establishment of regression equations where no local relations exist. The Catalogue contains tectonic events from the Databank within the area 44°N–72°N, 25°W–32°E and the time period 1300–1993. The lower magnitude level for Catalogue entries is set at Mw = 3.50. The areas covered by the different catalogues are associated with polygons. Within each polygon, only data from one or a small number of the local catalogues, supplemented by data from special studies, enter the Catalogue. If two or more such catalogues or studies provide a solution for an event, a priority algorithm selects one entry for the Catalogue. Mw is then calculated from one of the magnitude types, or from macroseismic data, given by the selected entry, according to another priority scheme. The origin time, location, Mw magnitude and reference are specified for each entry of the Catalogue, as is the epicentral intensity, I0, if provided by the original source. Following these criteria, a total of about 5,000 earthquakes constitute the Catalogue. Although originally derived for the purpose of seismic hazard calculation within GSHAP, the Catalogue provides a data base for many types of seismicity and seismic hazard studies.
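The two-stage priority scheme described above (first pick one source per event, then pick one magnitude type to convert to Mw) can be sketched as follows. This is a minimal illustration: the catalogue names, priority rankings and regression coefficients are hypothetical, not those of the actual GSHAP compilation.

```python
# Sketch of a two-stage priority scheme for merging earthquake catalogues.
# Source priorities, magnitude-type priorities and the linear regression
# coefficients (Mw = a*M + b) are hypothetical examples.

CATALOGUE_PRIORITY = ["special_study", "local_A", "local_B"]   # highest first
MAGNITUDE_PRIORITY = ["Mw", "Ms", "mb", "ML", "macroseismic"]  # highest first
REGRESSION = {"Mw": (1.0, 0.0), "Ms": (0.97, 0.25), "mb": (1.1, -0.3),
              "ML": (0.9, 0.4), "macroseismic": (0.6, 1.5)}

def select_entry(solutions):
    """Pick one solution per event according to catalogue priority."""
    return min(solutions, key=lambda s: CATALOGUE_PRIORITY.index(s["source"]))

def to_mw(entry):
    """Return Mw, converting from the highest-priority magnitude available."""
    for mtype in MAGNITUDE_PRIORITY:
        if mtype in entry["magnitudes"]:
            a, b = REGRESSION[mtype]
            return a * entry["magnitudes"][mtype] + b
    raise ValueError("no usable magnitude for this entry")

# Two solutions for the same event from different sources:
solutions = [
    {"source": "local_B", "magnitudes": {"ML": 4.2}},
    {"source": "special_study", "magnitudes": {"Ms": 4.0}},
]
best = select_entry(solutions)
print(best["source"], round(to_mw(best), 2))  # special_study 4.13
```

The same pattern (ordered preference lists plus a conversion table) extends naturally to the epicentral-intensity and location fields mentioned in the abstract.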
In a novel biomanipulation experiment, salmonids were used as a tool to improve water quality. The manipulation was initiated in spring 2000 as a response to non-point sources of phosphorus in a drinking water reservoir in Saxony, Germany. Salmonids (brown trout, Salmo trutta forma lacustris) were chosen as predators because the reservoir has a large hypolimnic water body and surface temperatures rarely exceed 20 °C. The vertical distributions of prey fish and brown trout were analysed with a fleet of vertical gill nets set in the pelagic zone of the reservoir. Consumption by brown trout was estimated by means of a bioenergetic model and diet analyses of the trout. While the dominant planktivore (roach, Rutilus rutilus) was caught almost exclusively in the epilimnion during the stratification period, trout were caught mainly below a depth of 10 m. Diet analysis revealed that the trout performed vertical migrations to feed in the epilimnic layer, as adult terrestrial and aquatic insects were an important food component. The proportion of fish in the diet increased strongly with the size of the brown trout. The consumption estimate suggested that the trout had consumed 2–3% of the total roach stock during the study period (May–November 2000) of the first year of biomanipulation. We conclude that in general salmonids are suitable for food-web manipulation in deep reservoirs, but the stocked fish should be as large as possible (>300 mm) and the proportion of large trout (>500 mm) should be as high as possible.
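The kind of consumption estimate described above combines a bioenergetic daily ration with the diet fraction made up of the prey species, then compares the result with the prey standing stock. A rough sketch, with all numbers hypothetical rather than the study's data:

```python
# Hypothetical sketch of a predation-impact estimate: scale a bioenergetic
# daily ration by the diet fraction that is roach, accumulate over the
# season, and divide by the roach standing stock. All values illustrative.

def consumed_fraction(trout_biomass_kg, daily_ration_frac,
                      diet_frac_roach, days, roach_stock_kg):
    """Fraction of the roach stock eaten by trout over the period."""
    eaten_kg = trout_biomass_kg * daily_ration_frac * diet_frac_roach * days
    return eaten_kg / roach_stock_kg

frac = consumed_fraction(trout_biomass_kg=500,    # stocked trout biomass
                         daily_ration_frac=0.02,  # 2% of body mass per day
                         diet_frac_roach=0.05,    # 5% of diet is roach
                         days=180,                # May-November season
                         roach_stock_kg=4000)     # roach standing stock
print(f"{frac:.2%} of the roach stock consumed")
```

With these illustrative inputs the estimate lands near the 2–3% range the study reports, but a real bioenergetic model would make the daily ration depend on temperature and fish size rather than use a constant.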
Spectacular shallow-level migmatization of ferrogabbroic rocks occurs in the metamorphic contact aureole of a gabbroic pluton of the Tierra Mala massif (TM) on Fuerteventura (Canary Islands). To improve our knowledge of the low-pressure melting behavior of gabbroic rocks and to constrain the conditions of migmatization of the TM gabbros, we performed partial melting experiments on a natural ferrogabbro, which is assumed to be the protolith of the migmatites. The experiments were performed in an internally heated pressure vessel (IHPV) at 200 MPa and 930–1150 °C under relatively oxidizing conditions. Different amounts of water were added to the charge.
From 930 to 1000 °C, the observed experimental phases are plagioclase (An60–70), clinopyroxene, amphibole (titanian magnesiohastingsites), two Fe–Ti oxides, and a basaltic, K-poor melt. Above 1000 °C, amphibole is no longer stable. The first melts are very rich in normative plagioclase (>70 wt.%). This indicates that at the beginning of partial melting plagioclase is the major phase which is consumed to produce melt. In the experiments, plagioclase is stable up to high temperatures (1060 °C) showing increasing An content with temperature. This is not compatible with the natural migmatites, in which An-rich plagioclase is absent in the melanosomes, while amphibole is stable. Our results show that the partial melting of the natural rocks cannot be regarded as an “in-situ” process that occurred in a closed system. Considerable amounts of alkalis probably transported by water-rich fluids, derived from the mafic pluton underplating the TM gabbro, were necessary to drive the melting reaction out of the stability range of plagioclase. A partial melting experiment with a migmatite gabbro showing typical “in-situ” textures as starting material supports this assumption.
Crystallization experiments performed at 1000 °C on a glass of the fused ferrogabbro, with different water contents added to the charge, show that generally high water activities could be achieved (crystallization of amphibole) independently of the bulk water content, even in a system with a very low initial bulk water content (0.3 wt.%). Increasing water content produces plagioclase richer in An, reduces the modal proportion of plagioclase in the crystallizing assemblage and increases the melt fraction. High melt fractions of >30 wt.% could only be observed in systems with high bulk water contents (>2 wt.%). This indicates that the migmatites were generated under water-rich conditions (probably water-saturated), since those migmatites characterized as “in-situ” formations generally show high proportions of leucosomes (>30 wt.%).
The traditional remove–restore technique for geoid computation suffers from two main drawbacks. The first is the assumption of an isostatic hypothesis to compute the compensation masses. The second is the double consideration of the effect of the topographic–isostatic masses within the data window, through removing the reference field and through the terrain reduction process. To overcome the first disadvantage, seismic Moho depths, representing more or less the actual compensating masses, have been used with variable density anomalies computed by employing the topographic–isostatic mass balance principle. To avoid the double consideration of the effect of the topographic–isostatic masses within the data window, the effect of these masses for the fixed data window used, in terms of potential coefficients, has been subtracted from the reference field, yielding an adapted reference field. This adapted reference field has been used for the remove–restore technique. The necessary harmonic analysis of the topographic–isostatic potential using seismic Moho depths with variable density anomalies is given. A wide comparison is made among geoids computed with the adapted reference field, using both the Airy–Heiskanen isostatic model and seismic Moho depths with variable density anomalies, and a geoid computed by the traditional remove–restore technique. The results show that using seismic Moho depths with variable density anomalies along with the adapted reference field gives the best relative geoid accuracy compared to the GPS/levelling geoid.
Received: 3 October 2001 / Accepted: 20 September 2002
Correspondence to: H.A. Abd-Elmotaal
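The adapted-reference-field idea in the abstract above amounts to a one-time subtraction in the spherical-harmonic coefficient domain before the usual remove step, so the topographic–isostatic effect is not removed twice. A minimal sketch with hypothetical coefficient arrays, not the paper's actual harmonic analysis:

```python
import numpy as np

# Hypothetical spherical-harmonic coefficient sets up to degree/order 4,
# stored as dense (n_max + 1, n_max + 1) arrays of Cnm terms. Real geoid
# work would use a global geopotential model to a much higher degree.
rng = np.random.default_rng(0)
n_max = 4
C_ref = rng.normal(size=(n_max + 1, n_max + 1))        # reference field
C_ti = 0.1 * rng.normal(size=(n_max + 1, n_max + 1))   # topographic-isostatic potential

# Adapted reference field: subtract the topographic-isostatic contribution
# once, at the coefficient level. In the subsequent remove step, the terrain
# reduction then no longer double-counts the effect already in C_ti.
C_adapted = C_ref - C_ti

# Restoring C_ti recovers the original reference field exactly.
assert np.allclose(C_adapted + C_ti, C_ref)
```

The remove–restore sequence itself is unchanged; only the reference field fed into it is replaced by `C_adapted`.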
Several stratospheric chemistry modules from box, 2-D or 3-D models have been intercompared. The intercomparison focused on the ozone loss and associated reactive species under the conditions found in the cold, wintertime Arctic and Antarctic vortices. Comparisons of both gas-phase and heterogeneous chemistry modules show excellent agreement between the models under constrained conditions for photolysis and the microphysics of polar stratospheric clouds (PSCs). While the mean integral ozone loss ranges from 4–80% for different 30–50-day air parcel trajectories, the mean scatter of model results around these values is only about ±1.5%. In a case study where the models employed their standard photolysis and microphysical schemes, the variation around the mean percentage ozone loss increases to about ±7%. This increased scatter is mainly due to the different treatment of PSC microphysics and heterogeneous chemistry in the models, whereby the most unrealistic assumptions about PSC processes lead to the least representative ozone chemistry. Furthermore, for this case study the model results for the ozone mixing ratios at different altitudes were compared with a measured ozone profile to investigate the extent to which models reproduce the stratospheric ozone losses. It was found that, mainly in the height range of strong ozone depletion, all models underestimate the ozone loss by about a factor of two. This finding corroborates earlier studies and implies a general deficiency in our understanding of stratospheric ozone loss chemistry rather than a specific problem related to a particular model simulation.
A case study of warm air advection over the Arctic marginal sea-ice zone is presented, based on aircraft observations with direct flux measurements carried out in early spring 1998. A shallow atmospheric boundary layer (ABL) was observed, which was gradually cooling with distance downwind of the ice edge. This process was mainly connected with a strong stable stratification and downward turbulent heat fluxes of about 10–20 W m⁻², but was also due to radiative cooling. Two mesoscale models, one hydrostatic and the other non-hydrostatic, having different turbulence closures, were applied. Despite these fundamental differences between the models, the results of both agreed well with the observed data. Various closure assumptions had a more crucial influence on the results than the differences between the models. One such assumption was the parameterization of the surface roughness for momentum (z0) and heat (zT). This strongly affected the wind and temperature fields not only close to the surface but also within and above the temperature inversion layer. The best results were achieved using a formulation for z0 that took into account the form drag effect of sea-ice ridges, together with zT = 0.1 z0. The stability within the elevated inversion strongly depended on the minimum eddy diffusivity Kmin. A simple ad hoc parameterization seems applicable, where Kmin is calculated as 0.005 times the neutral eddy diffusivity. Although the longwave radiative cooling was largest within the ABL, the application of a radiation scheme was less important there than above the ABL. This was related to the interaction of the turbulent and radiative fluxes. To reproduce the strong inversion, it was necessary to use vertical and horizontal resolutions higher than those applied in most regional and large-scale atmospheric models.
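The two closure choices singled out above (zT = 0.1 z0 and Kmin = 0.005 times the neutral eddy diffusivity) are simple enough to write out directly. In this sketch the neutral eddy diffusivity is taken as the standard surface-layer form K_n = κ u* z; that specific form is an assumption for illustration, since the abstract does not state which neutral diffusivity the models use.

```python
KAPPA = 0.4  # von Karman constant

def z_t(z0):
    """Scalar roughness length for heat: zT = 0.1 * z0 (the best-performing
    choice reported in the study)."""
    return 0.1 * z0

def k_min(u_star, z, c=0.005):
    """Minimum eddy diffusivity as a fixed fraction of the neutral eddy
    diffusivity, here assumed to be the surface-layer form K_n = kappa*u*z."""
    k_neutral = KAPPA * u_star * z
    return c * k_neutral

# Illustrative values: z0 = 1 mm; friction velocity 0.3 m/s at height 50 m.
print(z_t(1e-3))         # heat roughness length, ~1e-4 m
print(k_min(0.3, 50.0))  # diffusivity floor, ~0.03 m^2/s
```

Such a floor keeps the eddy diffusivity from collapsing to zero in the very stable elevated inversion, which is exactly where the abstract reports Kmin controlling the simulated stability.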