991.
In the Russian school, the total normalized gradient method is among the most widespread direct interpretation methods for potential field data. The method has also been used, and partly developed, by many experts abroad. Its main advantage is its relative independence from parameters such as the expected differential density of the interpreted structures. The method is built around the construction of a specially transformed field (the total normalized gradient) on a section crossing the potential field sources; the special properties of this transformed field allow it to be used to detect source positions. From the 1960s onward, the mathematical basis of the method underwent enormous development, and several modifications were elaborated. The total normalized gradient operator itself is a relatively complicated, non-linear band-pass filter in the spectral domain. Its properties can be controlled by several parameters that act to separate information about field sources at different depth levels. In this contribution, we describe the development of the method from its very beginning (based mostly on qualitative interpretation of simple total normalized gradient sections) through more recent numerical improvements. These improvements include the quasi-singular points method, which refines the filter properties of the total normalized gradient operator and defines objective criteria (the so-called 'α' and 'Γ' criteria) for determining source depths in the section. We end by describing possibilities for further development of the method.
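The core construction described above — a gradient magnitude normalised by its mean along the profile at each depth level — can be sketched as follows. The toy field and grid are illustrative assumptions, not values from the paper:

```python
import numpy as np

def total_normalized_gradient(vx, vz):
    """Total normalized gradient on a 2-D section.

    vx, vz : arrays of shape (n_depths, n_x) holding the horizontal and
    vertical derivatives of the potential field at each depth level.
    The gradient magnitude is normalised by its mean along the profile
    at every depth, so the result is dimensionless.
    """
    g = np.hypot(vx, vz)                      # |grad V| at each grid node
    return g / g.mean(axis=1, keepdims=True)  # normalise per depth level

# Toy example: a single point-source-like anomaly centred at x = 0
x = np.linspace(-5.0, 5.0, 101)
depths = np.array([0.5, 1.0, 2.0])
vx = np.array([x / (x**2 + z**2) ** 1.5 for z in depths])
vz = np.array([z / (x**2 + z**2) ** 1.5 for z in depths])

gn = total_normalized_gradient(vx, vz)
print(gn.shape)                 # (3, 101)
print(x[np.argmax(gn[0])])      # the maximum sits over the source at x ≈ 0
```

The defining property exploited for source detection is visible here: the normalized gradient peaks above the source position at every depth level.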
992.
Geomagnetism and Aeronomy - Kinetic theory is particularly suitable and convenient for calculating collision parameters in the ionospheric medium. In this study, the mean free path (λ)...
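The abstract is truncated, but the mean free path it introduces has the standard kinetic-theory form λ = k_B·T / (√2·π·d²·p). A sketch with assumed, order-of-magnitude values (roughly representative of the upper atmosphere; none of these numbers come from the paper):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 200.0            # K, assumed neutral temperature
d = 3.7e-10          # m, assumed molecular diameter (N2-like)
p = 1.0e-2           # Pa, assumed pressure near ~90 km altitude

# Hard-sphere mean free path from kinetic theory
lam = k_B * T / (math.sqrt(2) * math.pi * d**2 * p)
print(f"{lam:.3f} m")   # fraction-of-a-metre scale at this pressure
```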
993.
Truncated pluri-Gaussian simulation (TPGS) is suitable for simulating categorical variables that show natural ordering, as the technique can account for transition probabilities. TPGS assumes that categorical variables result from the truncation of underlying latent variables. In practice, only the categorical variables are observed, which turns the practical application of TPGS into a missing-data problem in which all latent variables are missing. Latent variables are required at data locations in order to condition categorical realizations to the observed categorical data. The imputation of missing latent variables at data locations is often achieved either by assigning constant values or by spatially simulating latent variables subject to the categorical observations. Realizations of latent variables can be used to condition all model realizations. Using a single realization, or a constant value, to condition all realizations amounts to assuming that the latent variables are known at the data locations, and this assumption affects uncertainty near data locations. This article investigates techniques for the imputation of latent variables in the TPGS framework and explores their impact on the uncertainty of simulated categorical models, along with possible effects on factors affecting decision making. It is shown that the use of a single realization of latent variables leads to underestimation of uncertainty and overestimation of measured resources, while the use of constant values for latent variables may lead to considerable over- or underestimation of measured resources. The results highlight the importance of multiple data imputation in the context of TPGS.
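The truncation step that turns latent Gaussian values into ordered categories can be sketched as follows. This uses a single latent variable and illustrative thresholds; the pluri-Gaussian case in the paper involves several latent variables and a truncation mask:

```python
import numpy as np

rng = np.random.default_rng(0)
latent = rng.standard_normal(10_000)          # stand-in for a simulated Gaussian field
thresholds = [-0.5, 0.8]                      # assumed truncation thresholds

# Cutting the latent field at fixed thresholds yields ordered categories
# (0, 1, 2), which is why category transitions are naturally honoured.
categories = np.digitize(latent, thresholds)

# Category proportions implied by the thresholds:
# approximately Phi(-0.5), Phi(0.8) - Phi(-0.5), 1 - Phi(0.8)
props = np.bincount(categories) / categories.size
print(props.round(2))
```

Conditioning a categorical observation then amounts to requiring the latent value at that location to fall in the corresponding threshold interval, which is the missing-data problem the article addresses.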
994.
Contaminant intrusion in a water distribution network (DN) has three basic preconditions: a source of contaminant (e.g., a leaky sewer), a pathway (e.g., water main leaks), and a driving force (e.g., negative pressure). The impact of intrusion can be catastrophic if residual disinfectant (chlorine) is not present. To avoid microbiological water quality failure, higher secondary chlorination doses are a possible solution, but they can produce disinfection by-products, which lead to taste and odour complaints. This study presents a methodology to identify potential intrusion points in a DN and to optimize booster chlorination based on trade-offs among microbiological risk, chemical risk, and the life-cycle cost of booster chlorination. A point-scoring scheme was developed to identify the potential intrusion points within a DN. It used factors such as pollutant source (e.g., sewer characteristics), pollution pathway (water main diameter, length, age, surrounding soil properties, etc.), consequence of contamination (e.g., population and land use), and operational factors (e.g., water pressure), integrated through a geographical information system using advanced ArcMap 10 operations. Contaminant intrusion was modelled for E. coli O156:H7 (a microbiological indicator) using the EPANET-MSX programmer's toolkit. Quantitative microbial risk assessment and chemical (human health) risk assessment frameworks were adapted to estimate risk potentials. Booster chlorination locations and dosages were selected using a multi-objective genetic algorithm. The methodology was illustrated through a case study on a portion of a municipal DN.
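The abstract does not give the point-scoring scheme itself; a hedged sketch of how such a scheme might combine the four factor groups into a pipe-level intrusion score follows. The factor names, normalised scores, and weights are illustrative assumptions, not the paper's values:

```python
# Hypothetical point-scoring scheme for ranking intrusion-prone pipes.
# Each factor is assumed pre-normalised to [0, 1]; weights are assumptions.
def intrusion_score(pipe, weights=None):
    weights = weights or {
        "source": 0.30,       # pollutant source, e.g. sewer proximity/condition
        "pathway": 0.30,      # main diameter, length, age, soil properties
        "consequence": 0.25,  # population served, land use
        "pressure": 0.15,     # operational factors, e.g. low-pressure episodes
    }
    return sum(weights[k] * pipe[k] for k in weights)

pipes = {
    "P1": {"source": 0.9, "pathway": 0.7, "consequence": 0.8, "pressure": 0.6},
    "P2": {"source": 0.2, "pathway": 0.4, "consequence": 0.3, "pressure": 0.1},
}
ranked = sorted(pipes, key=lambda p: intrusion_score(pipes[p]), reverse=True)
print(ranked)   # highest-risk pipe first
```

In the study this ranking is performed spatially in a GIS; the weighted sum here only illustrates the arithmetic of combining factor scores.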
995.
Sewer inlet structures are vital components of urban drainage systems, and their operational conditions can largely affect the overall performance of the system. However, their hydraulic behaviour, and the way in which it is affected by clogging, is often overlooked in urban drainage models, leading to misrepresentation of system performance and, in particular, of flooding occurrence. In the present paper, a novel methodology is proposed to stochastically model stormwater urban drainage systems, taking into account the impact of sewer inlet operational conditions (e.g. clogging due to debris accumulation) on urban pluvial flooding. The proposed methodology comprises three main steps: (i) identification of the sewer inlets most prone to clogging, based upon a spatial analysis of their proximity to trees and an evaluation of sewer inlet locations; (ii) Monte Carlo simulation of the capacity of inlets prone to clogging and subsequent simulation of flooding for each sewer inlet capacity scenario; and (iii) delineation of stochastic flood hazard maps. The proposed methodology was demonstrated using design storms as well as two real storm events observed in the city of Coimbra (Portugal) as case studies, both of which reportedly led to flooding in different areas of the catchment. The results show that sewer inlet capacity can indeed have a large impact on the occurrence of urban pluvial flooding and that it is essential to account for variations in sewer inlet capacity in urban drainage models. Overall, the stochastic methodology proposed in this study constitutes a useful tool for dealing with uncertainties in sewer inlet operational conditions; compared with more traditional deterministic approaches, it allows a more comprehensive assessment of urban pluvial flood hazard, which in turn enables better-informed flood risk assessment and management decisions.
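Step (ii) above, Monte Carlo sampling of inlet capacities, can be sketched as follows. The clogging-level distribution and design capacity are assumptions for illustration; in the methodology each sampled scenario would drive one run of the drainage model:

```python
import numpy as np

rng = np.random.default_rng(42)
full_capacity_m3s = 0.05            # assumed design capacity of one inlet, m^3/s
n_scenarios = 1_000

# Clogging fraction for an inlet flagged as prone to clogging in step (i);
# a Beta(2, 5) draw skews the scenarios toward light clogging (an assumption).
clogging = rng.beta(2.0, 5.0, size=n_scenarios)
capacities = full_capacity_m3s * (1.0 - clogging)

# Summary statistics of the sampled capacity scenarios; the per-scenario
# flood simulations would feed the stochastic hazard maps of step (iii).
print(round(capacities.mean(), 4))
print(round(np.quantile(capacities, 0.05), 4))
```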
996.
A method is described for identifying pulsations in magnetic field time series that are simultaneously present in multiple channels of data at one or more sensor locations. Candidate pulsations of interest are first identified in the geomagnetic time series by inspection. Time series of these 'training events' are represented in matrix form and transpose-multiplied to generate time-domain covariance matrices. The ranked eigenvectors of this matrix are stored as a feature of the pulsation. In the second stage of the algorithm, a sliding window (approximately the width of the training event) is moved across the vector-valued time series comprising the channels on which the training event was observed. At each window position, the data covariance matrix and associated eigenvectors are calculated. We compare the orientation of the dominant eigenvectors of the training data to those from the windowed data and flag windows where the dominant eigenvector directions are similar. This approach was successful in automatically identifying pulses that share polarization and appear to come from the same source process. We apply the method to a case study of continuously sampled (50 Hz) data from six observatories, each equipped with three-component induction coil magnetometers. We examine a 90-day interval of data associated with a cluster of four observatories located within 50 km of Napa, California, together with two remote reference stations: one 100 km north of the cluster and the other 350 km south. When the training data contain signals present at the remote reference observatories, we can reliably identify and extract global geomagnetic signals such as solar-generated noise. When the training data contain pulsations observed only in the cluster of local observatories, we identify several types of non-plane-wave signals with similar polarization.
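The eigenvector-comparison step described above can be sketched in a few lines: compute the dominant eigenvector of the training-event covariance matrix, then compare it against the dominant eigenvector of each windowed segment. The synthetic two-channel signals and the similarity threshold below are illustrative assumptions:

```python
import numpy as np

def dominant_eigvec(x):
    """x: (n_samples, n_channels) segment; returns its unit dominant eigenvector."""
    cov = np.cov(x, rowvar=False)
    w, v = np.linalg.eigh(cov)   # eigenvalues ascending, eigenvectors in columns
    return v[:, -1]

def similarity(u, v):
    # |cos(angle)| between polarization directions; eigenvector sign is arbitrary
    return abs(np.dot(u, v))

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
pol = np.array([0.8, 0.6])   # assumed polarization direction of the training event

train = np.outer(np.sin(2 * np.pi * 5 * t), pol) + 0.05 * rng.standard_normal((200, 2))
match = np.outer(np.sin(2 * np.pi * 5 * t), pol) + 0.05 * rng.standard_normal((200, 2))
other = np.outer(np.sin(2 * np.pi * 5 * t), [-0.6, 0.8]) + 0.05 * rng.standard_normal((200, 2))

u = dominant_eigvec(train)
print(similarity(u, dominant_eigvec(match)))   # near 1: same polarization, flag window
print(similarity(u, dominant_eigvec(other)))   # near 0: different polarization, skip
```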
997.
This methods paper details the first attempt at monitoring bank erosion, flow, and suspended sediment at a site during flooding on the Mekong River induced by the passage of tropical cyclones. We deployed integrated mobile laser scanning (MLS) and multibeam echo sounding (MBES), alongside acoustic Doppler current profiling (aDcp), to directly measure changes in the river bank and bed at high (~0.05 m) spatial resolution, in conjunction with measurements of flow and suspended sediment dynamics. We outline the methodological steps used to collect and process this complex point-cloud data and detail the procedures used to process and calibrate the aDcp flow and sediment flux data. A comparison with conventional remote sensing methods of estimating bank erosion, using aerial images and Landsat imagery, reveals that traditional techniques are error-prone at the high temporal resolutions required to quantify the patterns and volumes of bank erosion induced by the passage of individual flood events. Our analysis reveals the importance of cyclone-driven flood events in causing high rates of erosion and suspended sediment transport, with a c. twofold increase in bank erosion volumes and a fourfold increase in suspended sediment volumes in the cyclone-affected wet season. Copyright © 2016 John Wiley & Sons, Ltd.
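Erosion volumes from repeat high-resolution surveys like these are typically obtained by differencing gridded surfaces and integrating the loss over cell area; a minimal sketch on a synthetic grid (the surfaces and grid spacing here are illustrative, with the spacing chosen to match the ~0.05 m resolution quoted above):

```python
import numpy as np

cell_area_m2 = 0.05 * 0.05          # ~0.05 m grid spacing, as in the surveys

pre = np.full((100, 100), 10.0)     # synthetic pre-flood bank surface, elevation in m
post = pre.copy()
post[40:60, :] -= 0.5               # 0.5 m of material lost along a 20-cell strip

# Surface of difference: negative cells are erosion; integrate over cell area
dz = post - pre
erosion_volume_m3 = -dz[dz < 0].sum() * cell_area_m2
print(round(erosion_volume_m3, 3))  # 20 * 100 cells * 0.5 m * 0.0025 m^2 = 2.5 m^3
```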
998.
Spectral decomposition is a powerful tool that can provide geological detail tied to discrete frequencies. Complex spectral decomposition using inversion strategies differs from conventional spectral decomposition methods in that it produces not only frequency information but also wavelet phase information. This method was applied to a time-lapse three-dimensional seismic dataset in order to test the feasibility of using wavelet phase changes to detect and map injected carbon dioxide within the reservoir at the Ketzin carbon dioxide storage site, Germany. Simplified zero-offset forward modelling was used to help verify the effectiveness of the technique and to better understand the wavelet phase response from the highly heterogeneous storage reservoir and carbon dioxide plume. Ambient noise and signal-to-noise ratios were calculated from the raw data in order to evaluate the extracted wavelet phase. Strong noise caused by rainfall, and the assumed spatial distribution of sandstone channels in the reservoir, could be correlated with phase anomalies. Qualitative and quantitative results indicate that the wavelet phase extracted by complex spectral decomposition has great potential as a practical tool for carbon dioxide detection at the Ketzin pilot site.
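The kind of output complex spectral decomposition provides, amplitude and phase at a discrete frequency, can be illustrated with a plain single-frequency DFT on a synthetic trace (this stands in for, and is much simpler than, the inversion-based scheme used in the paper; sampling rate and event parameters are assumptions):

```python
import numpy as np

dt = 0.002                          # 2 ms sampling, an assumption
t = np.arange(0, 0.5, dt)
f0, phase0 = 30.0, np.pi / 4        # synthetic 30 Hz event with known phase
trace = np.cos(2 * np.pi * f0 * t + phase0)

# DFT coefficient at f0; the window spans an integer number of cycles,
# so the negative-frequency term cancels exactly.
kernel = np.exp(-2j * np.pi * f0 * t)
coeff = (trace * kernel).sum() * dt

print(round(np.angle(coeff), 3))    # recovered wavelet phase at 30 Hz (pi/4 ~ 0.785)
```

Time-lapse phase anomalies at a fixed frequency, such as those attributed to the carbon dioxide plume, correspond to changes in exactly this kind of extracted phase between surveys.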
999.
Laguna Melincué is a shallow lake in Santa Fe Province, Argentina (33°41′27.8″S, 61°31′36.5″W), located in the Pampean Plains. Its catchment area of around 1495 km² was reduced to 678 km² by the construction of the San Urbano channel, built in 1941 to prevent floods and reconditioned in 1977. The floods are related to some El Niño episodes, which bring high-precipitation events. The lake has previously been studied from different approaches, mainly to understand hydrological and climatic variations, but more multidisciplinary studies are needed to understand its complex hydrological situation. Here we present the first paleomagnetic and rock magnetic studies made on a short sediment core collected from the lake, in order to contribute to identifying paleoclimatic proxies and to present the first paleomagnetic results for the site. Rock magnetic analyses suggest that the well-preserved magnetic mineralogy is dominated by pseudo-single-domain (titano)magnetite and/or maghemite. The results also indicate that a stable characteristic remanent magnetisation can be isolated and thus the directions of the geomagnetic field may be obtained, providing support for the use of this lake in paleomagnetic and paleoenvironmental studies. Changes in magnetic grain size and in the concentration of magnetic minerals suggest environmental variations and changes in lake level that are consistent with historical reports. The paleomagnetic results agree well with the CALS3k.3 model for the inclination and declination of the geomagnetic field, except for the dry period, probably because the core was extracted near the shore.
1000.
Diverse technologies have been developed and tested for their efficacy in remediating perchlorate-contaminated surface water, groundwater, and soil. Biological reduction, particularly when coupled with electron donor augmentation, has been shown to be one of the most cost-effective alternatives. Numerous electron donors have been evaluated in the literature, but few studies have compared standard versus slow-release electron donors for sequential nitrate and perchlorate reduction. This study evaluated the efficacy and kinetics of biological reduction in soil microcosms augmented with emulsified oil (EO), glycerol, and mulch extract. Results indicated that EO and glycerol spiked at approximately 100 times stoichiometric excess (i.e., 100×) achieved similar overall reductions and degradation rates for nitrate and perchlorate, although nitrate reduction appeared to follow zero-order kinetics while perchlorate reduction followed first-order kinetics. The zero-order rate constants for nitrate reduction were 3.32 mg/(L·d) for EO and 2.57 mg/(L·d) for glycerol. The first-order rate constant for perchlorate reduction was 0.36 day⁻¹ for both EO and glycerol. Stable chemical oxygen demand (COD) concentrations also highlighted the slow-release properties of EO, which would reduce electron donor consumption relative to soluble substrates in soil remediation applications. The microcosms augmented with mulch extract failed to demonstrate any nitrate or perchlorate reduction because of the extract's lower COD concentration. Augmentation with compost, or additional processing (i.e., concentration), would be necessary to make the extract a viable alternative.
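The difference between the two fitted kinetic orders reported above can be made concrete with the EO rate constants from the study; the initial concentrations below are assumptions for illustration only:

```python
import numpy as np

k0_nitrate = 3.32        # mg/(L*d), zero-order rate constant (EO, from the study)
k1_perchlorate = 0.36    # 1/day, first-order rate constant (EO, from the study)

def nitrate(t, c0=30.0):             # assumed initial concentration, mg/L
    # Zero order: concentration falls linearly at a constant rate
    return np.maximum(c0 - k0_nitrate * t, 0.0)

def perchlorate(t, c0=10.0):         # assumed initial concentration, mg/L
    # First order: exponential decay, rate proportional to concentration
    return c0 * np.exp(-k1_perchlorate * t)

print(round(float(nitrate(5.0)), 2))                  # 30 - 3.32*5 = 13.4 mg/L
half_life = np.log(2) / k1_perchlorate                # ~1.9 days
print(round(float(perchlorate(half_life)), 6))        # half of 10 mg/L = 5.0
```

The practical consequence is that the zero-order nitrate rate is independent of remaining concentration, while perchlorate removal slows as its concentration drops.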