Similar Literature
20 similar documents were retrieved.
1.
Top-down methods for defining stream classifications are based on a conceptual model or expert-defined rules, whereas bottom-up methods use biological training data and statistical modelling. We compared the performance of six classification methods for explaining the taxonomic composition of invertebrate and fish assemblages recorded at 327 and 511 sites, respectively, distributed throughout France. Classifications 1 and 2 were top-down classifications: the European Water Framework Directive System A (WFDa) and the French hydro-ecoregions (HER 2). Four bottom-up classification procedures of increasing complexity were defined based on 11 variables that included watershed characteristics describing climate, topography, and geology, and site characteristics including elevation, bed slope, and temperature. Classification 3 was defined using matrix correlation (MC) to select the combination of variable categories that best discriminated the observed taxonomic composition. Classifications 4 and 5 were defined by clustering the sites based on their taxonomic data and then using linear discriminant analysis (LDA) and random forests (RF), respectively, to discriminate the clusters based on the environmental variables. Classification 6 was defined using generalized dissimilarity modelling (GDM). Our hypothesis was that the bottom-up classifications would perform better because they flexibly accommodate complex relationships between compositional and environmental variation. We tested the classifications using the classification strength statistic (CS). The RF-based classification fitted the taxonomic patterns better than GDM or LDA, and these latter classifications generally fitted better than the MC, WFDa, or HER classifications. Cross-validation analysis showed that differences in predictive CS (i.e. the CS statistics produced from sites not used in defining the classifications) were often significant. However, these differences were generally small. Gains in predictive performance appear to be small relative to the increase in complexity of the manner in which environmental variables are combined to define classes.
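Since the comparison above hinges on the classification strength (CS) statistic, a minimal sketch of how CS is typically computed may help: following Van Sickle's definition, CS is the mean similarity of site pairs within the same class minus the mean similarity of pairs in different classes. The Bray-Curtis similarity and all names below are illustrative assumptions, not the study's actual implementation.

```python
# A minimal sketch of the classification strength (CS) statistic:
# CS = mean within-class similarity - mean between-class similarity.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def classification_strength(taxa, classes):
    """taxa: (n_sites, n_taxa) abundances; classes: (n_sites,) class labels."""
    sim = 1.0 - squareform(pdist(taxa, metric="braycurtis"))  # pairwise similarity
    i, j = np.triu_indices(len(classes), k=1)                 # unique site pairs
    same = classes[i] == classes[j]
    within = sim[i, j][same].mean()    # pairs in the same class
    between = sim[i, j][~same].mean()  # pairs in different classes
    return within - between

# Toy usage: random classes on random data should give CS near zero.
rng = np.random.default_rng(0)
print(classification_strength(rng.random((40, 25)), rng.integers(0, 2, 40)))
```

A classification explains taxonomic composition well when its CS is large; the predictive CS reported above is the same quantity computed on sites held out when the classes were defined.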

2.
3.
Current computational resources and physical knowledge of seismic wave generation and propagation processes allow for reliable numerical and analytical models of waveform generation and propagation. From the simulated ground motion, it is easy to extract the desired earthquake hazard parameters. Accordingly, a scenario-based approach to seismic hazard assessment has been developed, namely the neo-deterministic seismic hazard assessment (NDSHA), which allows a wide range of possible seismic sources to be used in the definition of reliable scenarios by means of realistic waveform modelling. Such reliable and comprehensive characterization of expected earthquake ground motion is essential to improve building codes, particularly for the protection of critical infrastructures and for land-use planning. Parvez et al. (Geophys J Int 155:489–508, 2003) published the first ever neo-deterministic seismic hazard map of India by computing synthetic seismograms from an input data set consisting of structural models, seismogenic zones, focal mechanisms, and earthquake catalogues. As described in Panza et al. (Adv Geophys 53:93–165, 2012), the NDSHA methodology has evolved with respect to the original formulation used by Parvez et al. (2003): the computer codes were improved to better meet the need to produce realistic ground-shaking maps and ground-shaking scenarios at different scale levels, exploiting the most significant pertinent progress in data acquisition and modelling. Accordingly, the present study supplies a revised NDSHA map for India. The seismic hazard, expressed in terms of maximum displacement (Dmax), maximum velocity (Vmax), and design ground acceleration (DGA), has been extracted from the synthetic signals and mapped on a regular grid over the studied territory.
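As a concrete illustration of the parameter-extraction step, the sketch below pulls Dmax and Vmax from a stand-in synthetic displacement seismogram. The signal, sampling interval, and variable names are invented for illustration; in NDSHA, the DGA additionally involves anchoring a design response spectrum, which is omitted here.

```python
# A minimal sketch: extract peak ground-motion parameters from a synthetic
# seismogram, as done when mapping NDSHA hazard on a grid.
import numpy as np

dt = 0.01                                   # sampling interval [s] (assumed)
t = np.arange(0.0, 40.0, dt)
disp = 0.02 * np.sin(2 * np.pi * 0.5 * t) * np.exp(-0.1 * t)  # stand-in displacement [m]

vel = np.gradient(disp, dt)                 # differentiate displacement -> velocity

d_max = np.abs(disp).max()                  # maximum displacement, Dmax [m]
v_max = np.abs(vel).max()                   # maximum velocity, Vmax [m/s]
print(f"Dmax = {d_max:.4f} m, Vmax = {v_max:.4f} m/s")
```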

4.
Hydrogeology: time for a new beginning? (total citations: 1; self-citations: 0; citations by others: 1)
Phillips FM. Ground Water, 2002, 40(3): 217–219.

5.
Connections are ubiquitous. The hydrologic cycle is perhaps the best example: every component of this cycle is connected to every other component, but some connections are stronger than others. Unraveling the nature and extent of connections in hydrologic systems, as well as their interactions with other systems, has always been a fundamental challenge in hydrology. Despite progress in this direction, a strong scientific theory suitable for studying all types of connections in hydrology remains elusive. In this article, I argue that the theory of networks provides a generic theory for studying all types of connections in hydrology. After presenting a general discussion of complex systems as networks, I offer a brief account of the development of network theory and some basic concepts and measures of complex networks, and explain the relevance of complex network theory to hydrologic systems with three specific examples.
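To make the network view concrete, the toy sketch below treats streamflow stations as nodes and links strongly correlated stations, then computes two of the basic measures mentioned above (degree and clustering coefficient). The station data and correlation threshold are invented assumptions.

```python
# A toy hydrologic network: nodes are stations, edges link stations whose
# daily flows are correlated above a (low, illustrative) threshold.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
flows = rng.random((8, 365))            # 8 stations x 1 year of daily flow (stand-in)
corr = np.corrcoef(flows)

G = nx.Graph()
G.add_nodes_from(range(8))
for i in range(8):
    for j in range(i + 1, 8):
        if corr[i, j] > 0.05:           # low threshold so the toy graph has edges
            G.add_edge(i, j)

print("degrees:", dict(G.degree()))
print("average clustering coefficient:", nx.average_clustering(G))
```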

6.
7.
8.
Long-term hydrological data are key to understanding catchment behaviour and to decision making in water management and planning. Given the lack of observed data in many regions worldwide, such as Central America, hydrological models are an alternative for reproducing historical streamflow series. Types of information other than locally observed discharge can be used to constrain model parameter uncertainty for ungauged catchments. Given the strong influence that large-scale climatic processes exert on streamflow variability in the Central American region, we explored the use of climate-variability knowledge as process constraints to reduce the simulated discharge uncertainty for a Costa Rican catchment treated as ungauged. To reduce model uncertainty, we first rejected parameter relationships that disagreed with our understanding of the system. Then, based on this reduced parameter space, we applied the climate-based process constraints at long-term, inter-annual, and intra-annual timescales. The first step reduced the initial number of parameter sets by 52%, and the climate constraints reduced it by a further 3%. Finally, we compared the climate-based constraints with a constraint based on global maps of low-flow statistics. This latter constraint proved more restrictive than those based on climate variability (rejecting a further 66% of parameter sets compared with 3%). Even so, the climate-based constraints rejected inconsistent model simulations that the low-flow statistics constraint did not. Taken together, the constraints produced constrained simulation uncertainty bands, and the median simulated discharge followed the observed time series about as closely as an optimized model. All the constraints proved useful in constraining model uncertainty for a basin treated as ungauged. This shows that our method is promising for modelling long-term flow data for ungauged catchments on the Pacific side of Central America and that similar methods can be developed for ungauged basins in other regions where climate variability exerts a strong control on streamflow variability.
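The rejection idea behind these constraints can be sketched in a few lines: sample many parameter sets, compute a simulated signature for each, and keep only the sets whose signature falls inside an acceptability window derived from process knowledge. The toy model, parameters, and window below are invented for illustration.

```python
# A minimal sketch of constraint-based rejection of parameter sets.
import numpy as np

rng = np.random.default_rng(2)
n_sets = 10_000
# Stand-in two-parameter model: (runoff coefficient, storage constant).
params = rng.uniform([0.1, 1.0], [0.9, 500.0], size=(n_sets, 2))

def runoff_ratio(p):
    # Placeholder for running a rainfall-runoff model and computing the
    # long-term runoff ratio from the simulated discharge series.
    return p[:, 0] * (1.0 - np.exp(-p[:, 1] / 200.0))

signature = runoff_ratio(params)
ok = (signature > 0.25) & (signature < 0.55)   # climate-based window (assumed)
print(f"kept {ok.sum()} of {n_sets} parameter sets ({100 * ok.mean():.1f}%)")
```

The retained parameter sets then define the constrained simulation uncertainty band at the ungauged site.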

9.
We examine the initial subevent (ISE) of the M 6.7, 1994 Northridge, California, earthquake in order to discriminate between two end-member rupture initiation models: the preslip and cascade models. Final earthquake size may be predictable from an ISE's seismic signature in the preslip model but not in the cascade model. In the cascade model, ISEs are simply small earthquakes that can be described as purely dynamic ruptures. In this model, a large earthquake is triggered by smaller earthquakes; there is no size scaling between triggering and triggered events, and a variety of stress transfer mechanisms are possible. Alternatively, in the preslip model, a large earthquake nucleates as an aseismically slipping patch whose dimension grows and scales with the earthquake's ultimate size; the byproduct of this loading process is the ISE. In this model, the duration of the ISE signal scales with the ultimate size of the earthquake, suggesting that nucleation and earthquake size are determined by a more predictable, measurable, and organized process. To distinguish between these two end-member models we use short-period seismograms recorded by the Southern California Seismic Network. We address questions regarding the similarity in hypocenter locations and focal mechanisms of the ISE and the mainshock. We also compare the ISE's waveform characteristics to those of small earthquakes and to the beginnings of earthquakes with a range of magnitudes. We find that the focal mechanisms of the ISE and mainshock are indistinguishable, and both events may have nucleated on and ruptured the same fault plane. These results satisfy the requirements of both models and thus do not discriminate between them. However, further tests show that the ISE's waveform characteristics are similar to those of typical small earthquakes in the vicinity and, more importantly, do not scale with the mainshock magnitude. These results are more consistent with the cascade model.
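The discriminating test can be framed as a simple scaling check: under the preslip model, ISE duration should grow with final magnitude; under the cascade model it should not. The sketch below runs that regression on invented numbers with no scaling built in.

```python
# A schematic scaling test: regress ISE duration on mainshock magnitude.
import numpy as np
from scipy.stats import linregress

mags = np.array([3.1, 3.8, 4.2, 4.9, 5.5, 6.0, 6.7])            # magnitudes (toy)
ise_dur = np.array([0.11, 0.09, 0.13, 0.10, 0.12, 0.09, 0.11])  # ISE durations [s] (toy)

fit = linregress(mags, ise_dur)
print(f"slope = {fit.slope:.4f} s per magnitude unit, p = {fit.pvalue:.3f}")
# A slope indistinguishable from zero, as here, favours the cascade model.
```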

10.
11.
The focus of this work is to explore the use of the netted whelk, Nassarius reticulatus (L.), as an indicator of mercury (Hg) contamination, by assessing the concentration of Hg in the sediments and in the whelk along the entire Portuguese coast. Total Hg concentrations ranged from below the detection limit (0.01 ng absolute mercury) up to 0.87 mg kg−1 dry weight (dwt) in sediments and between 0.06 and 1.02 mg kg−1 (dwt) in organisms, with no significant differences between males and females. Although organic mercury was not detected in the sediments, it represented, on average, 52% of the total Hg in the whelk tissues, and as much as 88% in some cases, suggesting mercury accumulation from dietary intake. Significant negative correlations were found between total Hg concentrations in the sediments and the log10 of Hg concentrations in whelk tissues for both males (r = −0.64; P < 0.01) and females (r = −0.52; P < 0.01), indicating that the species is a poor indicator of Hg contamination. Nevertheless, since the highest concentrations of organic mercury in the whelk tissues were found in the least contaminated areas, this species is likely to be highly relevant in the trophic web, namely for the possible biomagnification of mercury. The high dietary mercury accumulation from feeding on carrion and the low bioavailability of mercury to whelks in estuarine sediments may underlie the mercury accumulation pattern in N. reticulatus.
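The reported sediment-tissue relationship is a Pearson correlation between sediment Hg and the log10 of tissue Hg; the sketch below reproduces that calculation on invented stand-in values, not the study's data.

```python
# A minimal sketch of the correlation underlying the "poor indicator" result.
import numpy as np
from scipy.stats import pearsonr

sed_hg = np.array([0.02, 0.10, 0.25, 0.40, 0.60, 0.87])      # sediment Hg [mg/kg dwt]
tissue_hg = np.array([0.95, 0.70, 0.45, 0.30, 0.15, 0.08])   # tissue Hg [mg/kg dwt]

r, p = pearsonr(sed_hg, np.log10(tissue_hg))
print(f"r = {r:.2f}, P = {p:.3f}")   # a negative r mirrors the reported pattern
```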

12.
We investigated two ‘gap-filler’ methods based on GPS-derived low-degree surface loading variations (GPS-I and GPS-C) and a simpler method (REF-S) that extends a seasonal harmonic variation into the expected Gravity Recovery and Climate Experiment (GRACE) mission gap. We simulated two mission gaps in a reference solution (REF), which is derived from a joint inversion of GRACE (RL05) data, GPS-derived surface loading, and simulated ocean bottom pressure. The GPS-I and GPS-C methods both have a new type of constraint applied to mitigate the lack of GPS station network coverage over the ocean. To obtain the GPS-C solution, the GPS-I method is adjusted so that it better fits the reference solution over a 1.5-year overlap period outside of the gap. As expected, the GPS-I and GPS-C solutions contain larger errors than the reference solution, which is heavily constrained by GRACE. Within the simulated gaps, the GPS-C solution generally fits the reference solution better than the GPS-I method, both in terms of spherical harmonic loading coefficients and in terms of selected basin-averaged hydrological mass variations. Depending on the basin, the RMS error of the water storage variations (scaled for leakage effects) ranges between 1.6 cm (Yukon) and 15.3 cm (Orinoco). In terms of noise level, the seasonal gap-filler method (REF-S) even outperforms the GPS-I and GPS-C methods, which are still affected by spatial aliasing problems. However, the REF-S method cannot be used beyond the study of simple harmonic seasonal variations.
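Scoring a gap-filler against the reference reduces to an RMS error over the simulated gap; the sketch below shows that computation for basin-averaged water storage, with invented monthly values standing in for the REF and GPS-C series.

```python
# A minimal sketch: RMS error of a gap-filler solution inside the gap.
import numpy as np

ref   = np.array([3.2, 5.1, 4.0, 1.2, -2.3, -4.8, -3.1, -0.5])  # REF storage [cm]
gps_c = np.array([2.5, 6.0, 3.1, 0.4, -3.5, -3.9, -4.2, 0.6])   # GPS-C estimate [cm]

rms = np.sqrt(np.mean((gps_c - ref) ** 2))
print(f"RMS error = {rms:.2f} cm")
```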

13.
The trends in malformation prevalence in embryos of dab, Limanda limanda, in the southern North Sea after 1990 mirrored the drop in major pollutants in the rivers draining into the German Bight. Despite this general decline, we detected a pollution event in the southern North Sea in the winter of 1995/1996, employing the prevalence of malformations in pelagic dab embryos as an indicator. An abrupt rise in malformation prevalence in dab embryos corresponded to a dramatic increase in DDT levels in parent fish from the same area, indicating a hitherto unnoticed introduction of considerable quantities of DDT into the system.

14.
15.
Full-waveform inversion is an appealing technique for time-lapse imaging, especially when prior model information is included in the inversion workflow. Once the baseline reconstruction is achieved, several strategies can be used to assess the physical parameter changes, such as the parallel difference (two separate inversions of the baseline and monitor data sets), sequential difference (inversion of the monitor data set starting from the recovered baseline model), and double-difference (inversion of the difference data starting from the recovered baseline model) strategies. Using synthetic Marmousi data sets, we investigate which strategy should be adopted to obtain more robust and more accurate time-lapse velocity changes in noise-free and noisy environments. This synthetic application demonstrates that the double-difference strategy provides the more robust time-lapse result. In addition, we propose a target-oriented time-lapse imaging using regularized full-waveform inversion including a prior model and model weighting, if prior information exists on the location of expected variations. This scheme applies strong prior model constraints outside the expected areas of time-lapse changes and relatively weaker prior constraints in the time-lapse target zones. In applying this process to the Marmousi model data set, the local resolution analysis performed with spike tests shows that the target-oriented inversion prevents the occurrence of artefacts outside the target areas, which could contaminate and compromise the reconstruction of the effective time-lapse changes, especially when using the sequential difference strategy. In a strongly noisy case, the target-oriented prior model weighting ensures the same behaviour for both the double-difference and the sequential difference strategies and leads to a more robust reconstruction of the weak time-lapse changes. The double-difference strategy can deliver more accurate time-lapse variations since it focuses the inversion on the difference data. However, the double-difference strategy requires a preprocessing step, such as time-lapse binning, to ensure similar source/receiver locations between the two surveys, whereas the sequential difference strategy is less demanding in this respect. If we have prior information about the area of changes, the target-oriented sequential difference strategy can be an alternative and can provide the same robust result as the double-difference strategy.
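The three strategies differ only in which data each inversion sees and which model it starts from; the sketch below lays that out. The `fwi` function is a placeholder for a full-waveform inversion driver, not a real API, and all arrays are stand-ins.

```python
# A schematic comparison of the three time-lapse FWI strategies.
import numpy as np

def fwi(data, start_model):
    """Placeholder FWI driver; real code would iterate gradient updates."""
    return start_model

d_base = np.zeros(1000)            # baseline survey data (stand-in)
d_mon = np.zeros(1000)             # monitor survey data (stand-in)
m0 = np.full(500, 2000.0)          # initial velocity model [m/s]

m_base = fwi(d_base, m0)           # baseline reconstruction

# Parallel difference: two independent inversions from the initial model.
dm_parallel = fwi(d_mon, m0) - m_base
# Sequential difference: monitor inversion starts from the baseline model.
dm_sequential = fwi(d_mon, m_base) - m_base
# Double difference: invert composite data d_syn(m_base) + (d_mon - d_base)
# from the baseline model; needs matched acquisition between the surveys.
d_syn_base = np.zeros(1000)        # forward-modelled baseline data (stand-in)
dm_double = fwi(d_syn_base + (d_mon - d_base), m_base) - m_base
```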

16.
The use of the term "heavy metal" is regularly questioned by the scientific community. Here, we followed the evolution (1970–2020) in the number of published papers including this term in their title. We document a continuous, albeit sometimes plateauing, increase, especially in environmental journals. Following several earlier warning opinions, we propose that the term be replaced in the scientific literature by terms such as "metal", "metalloid", "trace metal element", or "potentially toxic element".

17.
Available data from nearby gauging stations can provide a great source of hydrometric information that is potentially transferable to an ungauged site. Furthermore, streamflow measurements may even be available for the ungauged site itself. This paper explores the potential of four distance-based regionalization methods to simulate daily hydrographs at almost-ungauged pollution-control sites. Two methods use only the hydrological information provided by neighbouring catchments; the other two are new regionalization methods parameterized with a limited number of streamflow measurements available at the site of interest. Based on a network of 149 streamgauges and 21 pollution-control sites located in the Upper Rhine-Meuse area, the comparative assessment demonstrates the benefit of having point streamflow measurements available at the location of interest for improving quantitative streamflow prediction. The advantage is moderate for the prediction of flow types (stormflow, recession flow, baseflow) and pulse shape (duration of the rising and falling limbs).
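One common distance-based scheme, sketched below with invented numbers, transfers the specific discharge (flow per unit area) of neighbouring gauged catchments to the target site with inverse-distance weights; this is a generic illustration, not the paper's exact four methods.

```python
# A minimal sketch of distance-based regionalization of daily flows.
import numpy as np

donor_q = np.array([[4.0, 3.5, 8.2],       # donor 1 daily flows [m3/s]
                    [2.1, 1.9, 5.0]])      # donor 2 daily flows [m3/s]
donor_area = np.array([120.0, 60.0])       # donor catchment areas [km2]
dist = np.array([15.0, 40.0])              # distances to the target site [km]
target_area = 90.0                         # target catchment area [km2]

w = (1.0 / dist) / (1.0 / dist).sum()      # inverse-distance weights
spec_q = donor_q / donor_area[:, None]     # specific discharge [m3/s/km2]
q_target = target_area * (w[:, None] * spec_q).sum(axis=0)
print(q_target)                            # estimated daily flows at the target
```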

18.
Seismic hazard disaggregation is commonly used as an aid in ground-motion selection for the seismic response analysis of structures. This short communication investigates two different approaches to disaggregation, related to the exceedance and the occurrence of a particular intensity. The impact the different approaches might have on a subsequent structural analysis at a given intensity is explored through the calculation of conditional spectra. It is found that the exceedance approach results in conditional spectra that will be conservative when used as targets for ground-motion selection. It is argued, however, that the use of the occurrence disaggregation is more consistent with the objectives of seismic response analyses in the context of performance-based earthquake engineering. Copyright © 2015 John Wiley & Sons, Ltd.
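A conditional spectrum is built by conditioning a ground-motion model's median spectrum on epsilon at the target period through the cross-period correlation of spectral accelerations; the sketch below shows the conditional-mean step with invented GMM values and a stand-in correlation function, not a published model.

```python
# A minimal sketch of a conditional mean spectrum (CMS).
import numpy as np

periods = np.array([0.1, 0.3, 0.5, 1.0, 2.0])   # response periods [s]
mu_ln = np.log([0.8, 0.6, 0.4, 0.2, 0.08])      # GMM median ln Sa(T) (assumed)
sigma_ln = np.array([0.60, 0.62, 0.65, 0.68, 0.70])

t_star, eps_star = 1.0, 1.5                     # conditioning period and epsilon

# Stand-in correlation model: decays with log-period separation.
rho = np.exp(-np.abs(np.log(periods / t_star)))

cms = np.exp(mu_ln + rho * eps_star * sigma_ln)  # conditional mean spectrum [g]
print(dict(zip(periods.tolist(), cms.round(3).tolist())))
```

Whether eps_star comes from an exceedance or an occurrence disaggregation is precisely the choice the paper examines.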

19.
The Chi-Chi earthquake (Mw 7.6) occurred at 17:47 UTC on September 20, 1999 (01:47 local time, September 21, 1999) in central Taiwan. The CWB located the epicenter at (120.82°E, 23.85°N) at a focal depth of 8 km. The Chi-Chi earthquake is the best-documented earthquake ever recorded. The abundance and quality of its near-source observations present an unparalleled opportunity for studying the rupture history from a close distance. More than 400 free-field digital accelerometers with 3-compon…

20.
In 1999, a seismic swarm of 237 teleseismically recorded events marked a submarine eruption along the Arctic Gakkel Ridge, later also analyzed by sonar, bathymetric, hydrothermal, and local seismic studies. We relocated the swarm with the global location algorithm HYPOSAT and analyzed the waveforms of the stations closest to the events by cross-correlation. We find event locations scattered around 85°35′N, 85°E at the southern rift wall and inside the rift valley of the Gakkel Ridge. Waveforms of three highly correlated events indicate a volumetric moment tensor component, and highly precise referenced double-difference arrival times lead us to believe that they occurred at the same geographical position and mark the conduit located further southeast, close to a chain of recently imaged volcanic cones. This result is supported by station residual anomalies in the direction of the potential conduit. Seismicity focused at the crust–mantle boundary at 16–20 km depth but ascended toward the potential conduit in early April 1999, indicating an opening of the vent.
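The event-similarity step rests on normalized waveform cross-correlation: the peak value measures similarity, and the peak lag refines the relative (double-difference) arrival time. The sketch below runs it on synthetic traces standing in for real station records.

```python
# A minimal sketch of normalized waveform cross-correlation between two events.
import numpy as np

dt = 0.01
t = np.arange(0.0, 5.0, dt)
tr1 = np.exp(-((t - 2.0) ** 2) / 0.01) * np.sin(2 * np.pi * 5 * t)  # event 1
tr2 = np.roll(tr1, 12)                                              # event 2, delayed 0.12 s

cc = np.correlate(tr1 - tr1.mean(), tr2 - tr2.mean(), mode="full")
cc /= np.std(tr1) * np.std(tr2) * len(tr1)                          # normalize to [-1, 1]
lag = (np.argmax(cc) - (len(tr1) - 1)) * dt                         # sign shows which trace leads
print(f"max CC = {cc.max():.2f}, relative delay = {lag:.2f} s")
```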
