Article Search
  Paid full text: 53380 articles
  Free: 687 articles
  Domestic free: 321 articles
Surveying and mapping: 1353 articles
Atmospheric sciences: 4245 articles
Geophysics: 10948 articles
Geology: 17224 articles
Oceanography: 4558 articles
Astronomy: 12211 articles
General: 104 articles
Physical geography: 3745 articles
  2020: 421 articles
  2019: 452 articles
  2018: 749 articles
  2017: 734 articles
  2016: 1100 articles
  2015: 820 articles
  2014: 1145 articles
  2013: 2602 articles
  2012: 1191 articles
  2011: 1823 articles
  2010: 1539 articles
  2009: 2305 articles
  2008: 2127 articles
  2007: 1908 articles
  2006: 1951 articles
  2005: 1706 articles
  2004: 1782 articles
  2003: 1675 articles
  2002: 1561 articles
  2001: 1437 articles
  2000: 1398 articles
  1999: 1211 articles
  1998: 1195 articles
  1997: 1183 articles
  1996: 1006 articles
  1995: 978 articles
  1994: 858 articles
  1993: 815 articles
  1992: 720 articles
  1991: 646 articles
  1990: 777 articles
  1989: 665 articles
  1988: 591 articles
  1987: 747 articles
  1986: 666 articles
  1985: 786 articles
  1984: 940 articles
  1983: 908 articles
  1982: 821 articles
  1981: 787 articles
  1980: 661 articles
  1979: 662 articles
  1978: 681 articles
  1977: 642 articles
  1976: 594 articles
  1975: 558 articles
  1974: 550 articles
  1973: 564 articles
  1972: 348 articles
  1971: 306 articles
A total of 10000 query results were found (search time: 15 ms).
991.
An indirect method of estimating the surface heat flux from observations of vertical velocity variance at the lower mid-levels of the convective atmospheric boundary layer is described. The resulting surface heat flux estimates agree well with those derived from boundary-layer heating rates, and the method appears especially suitable for inhomogeneous terrain, where the surface-layer profile method cannot be used.
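The abstract does not give the retrieval relations; one minimal sketch, assuming the commonly used free-convection similarity form σw ≈ 1.3 w* (z/zi)^(1/3) for the lower boundary layer (the paper's actual formulation may differ), is:

```python
import numpy as np

# Illustrative constants; values here are assumptions, not taken from the paper.
G = 9.81      # gravity, m s-2
RHO = 1.2     # air density, kg m-3
CP = 1005.0   # specific heat of air, J kg-1 K-1

def surface_heat_flux_from_sigma_w(sigma_w, z, zi, theta_v):
    """Estimate surface sensible heat flux (W m-2) from the vertical velocity
    standard deviation sigma_w (m/s) measured at height z (m) in a convective
    boundary layer of depth zi (m), assuming sigma_w = 1.3 * w_star * (z/zi)**(1/3)."""
    w_star = sigma_w / (1.3 * (z / zi) ** (1.0 / 3.0))   # convective velocity scale
    # w_star**3 = (g / theta_v) * (w'theta')_s * zi  =>  solve for the kinematic flux
    kinematic_flux = w_star ** 3 * theta_v / (G * zi)    # K m s-1
    return RHO * CP * kinematic_flux                     # W m-2

# Hypothetical example: sigma_w = 1.0 m/s at 300 m in a 1500 m deep CBL, theta_v = 300 K
print(round(surface_heat_flux_from_sigma_w(1.0, 300.0, 1500.0, 300.0), 1))
```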
992.
A direct comparison among highly uncertain emission inventories is inadequate and may lead to paradoxes, an issue of particular importance for greenhouse gases. This paper reviews methods for comparing uncertain inventories in the context of compliance checking, treating the problem as a comparison of uncertain alternatives, and provides a categorization and ranking of inventories that can induce compliance-checking conditions. Two groups of techniques for comparing uncertain estimates are considered: probabilistic and fuzzy approaches. Their similarities are revealed and stressed throughout the paper, and the group of methods most suitable for compliance purposes is identified. These methods introduce new conditions for fulfilling compliance that depend on inventory uncertainty, and these conditions considerably change the present approach, in which only the reported values of the inventories are accounted for.
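For illustration only (the specific criteria and distributions used in the paper are not given in the abstract), a probabilistic compliance check of the kind discussed might be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_compliant(base_inv, current_inv, committed_cut,
                   base_rel_unc, current_rel_unc, n=100_000):
    """Monte Carlo estimate of P(actual emissions <= (1 - committed_cut) * actual base),
    treating both reported inventories as normal variables with the given relative
    (1-sigma) uncertainties. All numbers are illustrative, not from the paper."""
    base = rng.normal(base_inv, base_rel_unc * base_inv, n)
    current = rng.normal(current_inv, current_rel_unc * current_inv, n)
    return np.mean(current <= (1.0 - committed_cut) * base)

# Hypothetical case: reported emissions fall from 100 to 93 Mt CO2-eq
# (a 7% reduction against a 6% commitment), with 5% and 10% uncertainty.
p = prob_compliant(100.0, 93.0, 0.06, 0.05, 0.10)
print(f"P(compliant) = {p:.2f}")  # an uncertainty-aware rule might require p above some threshold
```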
993.
The economic benefits of a multi-gas approach to climate change mitigation are clear. However, there is still debate on how to make the trade-off between different greenhouse gases (GHGs). The trade-off debate has mainly centered on the use of Global Warming Potentials (GWPs), which govern the trade-off under the Kyoto Protocol, with results showing that the cost-effective valuation of a short-lived GHG such as methane (CH4) should be lower than its current GWP value if the ultimate aim is to stabilize the anthropogenic temperature change. Contrary to this, however, there have also been proposals that early mitigation should mainly target short-lived GHGs. In this paper we analyze the cost-effective trade-off between a short-lived GHG, CH4, and a long-lived GHG, carbon dioxide (CO2), when a temperature target is to be met, taking into account the current uncertainty in the climate sensitivity as well as the likelihood that this uncertainty will be reduced in the future. The analysis is carried out using an integrated climate and economic model (MiMiC), and the results from this model are explored and explained using a simplified analytical economic model. The main finding is that the introduction of uncertainty and learning about the climate sensitivity increases the near-term cost-effective valuation of CH4 relative to CO2. The larger the uncertainty span, the higher the valuation of the short-lived gas. For an uncertainty span of ±1°C around an expected climate sensitivity of 3°C, CH4 is cost-effectively valued 6.8 times as high as CO2 in the year 2005. This is almost twice the valuation in a deterministic case, but still significantly lower than its GWP100 value.
994.
J. Wang, M. Ikeda, S. Zhang and R. Gerdes, Climate Dynamics (2005) 24(2-3): 115-130
The nature of the reduction trend and quasi-decadal oscillation in Northern Hemisphere sea-ice extent is investigated. The trend and the oscillation, which appear to be two separate phenomena, are both found in the data. This study examines the hypothesis that the Arctic sea-ice reduction trend of the last three decades amplified the quasi-decadal Arctic sea-ice oscillation (ASIO) through a positive ice/ocean-albedo feedback, based on data analysis and a conceptual model proposed by Ikeda et al. The conceptual model predicts that the quasi-decadal oscillation is amplified by the thinning sea ice, leading to the ASIO, which is driven by the strong positive feedback between the atmosphere and the ice-ocean system. This oscillation is predicted to be out of phase between the Arctic Basin and the Nordic Seas, with a phase difference of 3/4 and the Nordic Seas leading the Arctic. Wavelet analysis of the sea-ice data reveals that the quasi-decadal ASIO has occurred actively since the 1970s, following the trend that started in the 1960s (i.e., as sea ice became thinner and thinner), while the atmosphere experienced quasi-decadal oscillations throughout the last century. The wavelet analysis also confirms the predicted out-of-phase relationship between the two basins, which varied from 0.62 in 1960 to 0.25 in 1995. Furthermore, a coupled ice-ocean general circulation model (GCM) was used to simulate two scenarios, one without greenhouse-gas warming and the other with realistic atmospheric forcing including the warming that produces the sea-ice reduction trend. The quasi-decadal ASIO is excited in the latter case relative to the no-warming case. Wavelet analyses of the simulated ice volume were also conducted and yield the decadal ASIO and a similar phase relationship between the Arctic Ocean and the Nordic Seas. An independent data source was used to confirm a corresponding decadal oscillation in the upper-layer (or freshwater) thickness, consistent with the model simulation. A modified feedback loop for the sea-ice trend and the ASIO is proposed, based on the earlier one by Mysak and Venegas and on the ice/albedo and cloud/albedo feedbacks, which are responsible for the sea-ice reduction trend.
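The wavelet analysis itself is not specified in detail in the abstract; a minimal, generic Morlet wavelet-power sketch on synthetic data (not the observed sea-ice record, and not necessarily the authors' exact procedure) is:

```python
import numpy as np

def morlet_power(x, dt, periods, omega0=6.0):
    """Continuous wavelet power of an evenly sampled series x (spacing dt, in years)
    at the requested periods, using a Morlet mother wavelet. A generic textbook
    implementation, not the authors' exact analysis."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    t = (np.arange(n) - n // 2) * dt
    power = np.empty((len(periods), n))
    for i, p in enumerate(periods):
        scale = p * (omega0 + np.sqrt(2 + omega0**2)) / (4 * np.pi)  # period -> scale
        psi = (np.pi ** -0.25) * np.exp(1j * omega0 * t / scale) \
              * np.exp(-0.5 * (t / scale) ** 2) / np.sqrt(scale)
        coef = np.convolve(x, np.conj(psi[::-1]), mode="same") * dt   # correlation with wavelet
        power[i] = np.abs(coef) ** 2
    return power

# Synthetic example: a quasi-decadal (~10 yr) oscillation whose amplitude grows
# after 1970, superimposed on a declining trend plus noise.
years = np.arange(1900, 2001)
amp = np.where(years < 1970, 0.2, 0.6)
ice = -0.01 * (years - 1900) + amp * np.sin(2 * np.pi * years / 10.0) \
      + 0.1 * np.random.default_rng(1).standard_normal(years.size)
pw = morlet_power(ice, dt=1.0, periods=np.arange(6, 16))
print(pw.shape)   # (10, 101): wavelet power at periods 6-15 yr for each year
```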
995.
996.
Regional seismograms were collected to image the lateral variations of Lg coda Q at 1 Hz (Q0) and its frequency dependence (η) in the Middle East using a back-projection method. The data include 124 vertical-component traces recorded at 10 stations during the period 1986–1996. The resulting images reveal lateral variations in both Q0 and η. In the Turkish and Iranian Plateaus, a highly deformed and tectonically active region, Q0 ranges between about 150 and 300, with the lowest values occurring in western Anatolia where extremely high heat flow has been measured. The low Q0 values found in this region agree with those found in other tectonically active regions of the world. Throughout most of the Arabian Peninsula, a relatively stable region, Q0 varies between 350 and 450, being highest in the shield area and lowest in the eastern basins. All values are considerably lower than those found in most other stable regions. Low Q values throughout the Middle East may be caused by interstitial fluids that have migrated to the crust from the upper mantle, where they were probably generated by hydrothermal reactions at the elevated temperatures known to occur there. Low Q0 values (about 250) are also found in the Oman folded zone, a region with thick sedimentary deposits. η varies inversely with Q0 throughout most of the Middle East, with lower values (0.4–0.5) in the Arabian Peninsula and higher values (0.6–0.8) in Iran and Turkey. Q0 and η are both low in the Oman folded zone and western Anatolia.
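For context, the single-trace coda-Q measurement that underlies such tomography (an Aki-Chouet type fit; the back-projection step of the paper is not reproduced here) can be sketched as:

```python
import numpy as np

def coda_q(envelope, lapse_time, freq):
    """Estimate coda Q at centre frequency `freq` (Hz) from a smoothed narrow-band
    coda envelope sampled at the given lapse times (s). Uses
    ln(A * t) = const - (pi * f / Q) * t, i.e. body-wave spreading t**-1;
    a generic sketch, not the back-projection scheme of the paper."""
    y = np.log(envelope * lapse_time)
    slope, _ = np.polyfit(lapse_time, y, 1)
    return -np.pi * freq / slope

# Synthetic check: build an envelope with Q = 300 at 1 Hz and recover it.
t = np.linspace(30.0, 90.0, 200)                  # coda window, s
A = 5.0 / t * np.exp(-np.pi * 1.0 * t / 300.0)
print(round(coda_q(A, t, 1.0)))                   # -> 300
```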
997.
In several empirical and modelling studies on river hydraulics, dispersion was negatively correlated with surface roughness. This study aimed to investigate the influence of surface roughness on longitudinal dispersion under controlled conditions. Tracer experiments with varying channel-bed material were performed in artificial flow channels 104 m long. From the measured tracer breakthrough curves, average flow velocity, mean longitudinal dispersion, and mean longitudinal dispersivity were calculated. Longitudinal dispersion coefficients ranged from 0.018 m² s⁻¹ in channels with a smooth bed surface up to 0.209 m² s⁻¹ in channels with coarse gravel as bed material, and longitudinal dispersion was linearly related to mean flow velocity. Accordingly, longitudinal dispersivities ranged between 0.152 ± 0.017 m in channels with a smooth bed surface and 0.584 ± 0.015 m in identical channels with a coarse gravel substrate. Grain size and surface roughness of the channel bed were found to correlate positively with longitudinal dispersion. This finding contradicts several existing relations between surface roughness and dispersion. Future studies should include further variation in surface roughness to derive a better-founded empirical equation for forecasting longitudinal dispersion from surface roughness.
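The abstract does not state how velocity, dispersion, and dispersivity were obtained from the breakthrough curves; a common route is the method of temporal moments, sketched below with a made-up breakthrough curve (the authors' actual procedure may differ):

```python
import numpy as np

def btc_transport_parameters(time, conc, distance):
    """Mean velocity u, longitudinal dispersion D, and dispersivity alpha from a
    tracer breakthrough curve c(t) observed `distance` m downstream (uniform time step).
    Uses temporal moments: t_mean = x/u and var_t ~= 2*D*x/u**3 (illustrative)."""
    dt = time[1] - time[0]
    m0 = np.sum(conc) * dt
    t_mean = np.sum(conc * time) * dt / m0
    var_t = np.sum(conc * (time - t_mean) ** 2) * dt / m0
    u = distance / t_mean                     # m/s
    D = var_t * u ** 3 / (2.0 * distance)     # m2/s
    return u, D, D / u                        # velocity, dispersion, dispersivity

# Hypothetical breakthrough curve 104 m downstream (numbers are made up).
t = np.linspace(0, 1200, 2000)                        # s
c = np.exp(-((t - 400.0) ** 2) / (2 * 60.0 ** 2))     # Gaussian-like BTC
u, D, alpha = btc_transport_parameters(t, c, 104.0)
print(f"u = {u:.3f} m/s, D = {D:.3f} m2/s, alpha = {alpha:.3f} m")
```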
998.
Understanding the flow pathways and mechanisms that generate streamflow is important for understanding agrochemical contamination of surface waters in agricultural watersheds. Two environmental tracers, δ18O and electrical conductivity (EC), were monitored in tile drainage (draining 12 ha) and stream water (draining nested catchments of 6 to 5700 ha) from 2000 to 2008 in the semi-arid agricultural Missouri Flat Creek (MFC) watershed near Pullman, Washington, USA. Tile drainage and streamflow generated in the watershed were found to have a baseline δ18O value of -14.7‰ (VSMOW) year round. Winter precipitation accounted for 67% of total annual precipitation and was found to dominate streamflow, tile drainage, and groundwater recharge. 'Old' and 'new' water partitioning in streamflow was not identifiable using δ18O, but seasonal shifts of nitrate-corrected EC suggest that deep soil pathways primarily generated summer streamflow (mean EC 250 µS/cm), while shallow soil pathways dominated streamflow generation during winter (EC declining to as low as 100 µS/cm). Using summer isotopic and EC excursions from tile drainage in the larger-catchment (4700 to 5700 ha) stream waters, summer in-stream evaporation fractions were estimated at 20% to 40%, with the greatest evaporation occurring from August to October. Seasonal watershed and environmental tracer dynamics in the MFC watershed appeared to be similar to those at larger watershed scales in the Palouse River basin. A 0.9‰ enrichment of δ18O values from 2000 to 2008 in shallow groundwater drained to streams (tile drainage and soil seepage) may be evidence of altered precipitation conditions due to the Pacific Decadal Oscillation (PDO) in the Inland Northwest.
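As a purely illustrative companion to the EC interpretation (the end-member values below are hypothetical, loosely inspired by the abstract, and this is not the authors' calculation), a two-component mixing estimate of the shallow-pathway fraction could look like:

```python
def shallow_fraction(ec_stream, ec_deep=250.0, ec_shallow=100.0):
    """Two-component mixing: fraction of streamflow from shallow soil pathways,
    given stream EC and assumed end-member EC values (µS/cm).
    End-member numbers here are illustrative, not measured values."""
    f_shallow = (ec_stream - ec_deep) / (ec_shallow - ec_deep)
    return max(0.0, min(1.0, f_shallow))      # clip to the physical range [0, 1]

for ec in (240.0, 175.0, 110.0):              # summer -> spring -> winter stream EC
    print(ec, round(shallow_fraction(ec), 2))
```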
999.
In November 2012 EEFIT launched its first ever return mission to an earthquake-affected site. The L'Aquila earthquake site was chosen because it is a recent European event of interest to the UK and European earthquake engineering community. The main aim of this return mission was to document the earthquake recovery process, and this paper presents an overview of the post-disaster emergency phase and the transition to reconstruction in the L'Aquila area after the earthquake. It takes an earthquake engineering perspective, highlighting areas mainly of interest to the fields of structural/seismic engineering and reconstruction management. Within the paper, reference is made to published literature, but also to data collected in the field during the return mission that would not otherwise have been available. The paper presents some specific observations and lessons learned from the L'Aquila return mission. In light of current international efforts in conducting return missions, the paper ends with some reflections, based on the EEFIT L'Aquila experience, on the value that return missions can provide to the field of earthquake engineering in general.
1000.
Methods of minimum entropy deconvolution (MED) try to take advantage of the non-Gaussian distribution of primary reflectivities in the design of deconvolution operators. Of these, Wiggins' (1978) original method performs as well as any in practice. However, we present examples to show that it does not provide a reliable means of deconvolving seismic data: its operators are not stable and, instead of whitening the data, they often band-pass filter it severely. The method could more appropriately be called maximum kurtosis deconvolution, since the varimax norm it employs is really an estimate of kurtosis. Its poor performance is explained in terms of the relation between the kurtosis of a noisy, band-limited seismic trace and the kurtosis of the underlying reflectivity sequence, and between the estimation errors in a maximum kurtosis operator and the data and design parameters. The scheme put forward by Fourmann in 1984, whereby the data are corrected by the phase rotation that maximizes their kurtosis, is a more practical method. It preserves the main attraction of MED, its potential for phase control, and leaves trace whitening and noise control to proven conventional methods. The correction can be determined without actually applying a whole series of phase shifts to the data. The application of the method is illustrated by means of practical and synthetic examples, and summarized by rules derived from theory. In particular, the signal-dominated bandwidth must exceed a threshold for the method to work at all, and estimation of the phase correction requires a considerable amount of data. Kurtosis can estimate phase better than other norms that are misleadingly declared to be more efficient by theory based on full-band, noise-free data.
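The Fourmann-type correction described above, in which the phase is rotated to maximize kurtosis, can be sketched as follows (a generic illustration using the analytic signal; the synthetic wavelet and all constants are arbitrary, and this brute-force scan is not the shortcut mentioned in the abstract):

```python
import numpy as np
from scipy.signal import hilbert
from scipy.stats import kurtosis

def max_kurtosis_phase(trace, n_steps=180):
    """Find the constant phase rotation (degrees) that maximizes the kurtosis of a
    seismic trace. Rotation uses the analytic signal:
    x_phi = Re{(x + i*H[x]) * exp(i*phi)} = x*cos(phi) - H[x]*sin(phi)."""
    analytic = hilbert(trace)
    phis = np.deg2rad(np.arange(0, 180, 180 / n_steps))  # 0-180 deg: kurtosis ignores sign
    ks = [kurtosis(np.real(analytic * np.exp(1j * p))) for p in phis]
    best = phis[int(np.argmax(ks))]
    return np.rad2deg(best), np.real(analytic * np.exp(1j * best))

# Synthetic demo: a sparse reflectivity convolved with a 60-degree-rotated Ricker wavelet.
rng = np.random.default_rng(2)
refl = rng.standard_normal(500) * (rng.random(500) < 0.05)       # sparse spikes
t = np.arange(-50, 51) / 250.0
ricker = (1 - 2 * (np.pi * 30 * t) ** 2) * np.exp(-(np.pi * 30 * t) ** 2)
wavelet = np.real(hilbert(ricker) * np.exp(1j * np.deg2rad(60)))
trace = np.convolve(refl, wavelet, mode="same")
phi, corrected = max_kurtosis_phase(trace)
print(round(phi))   # should recover a correction near 120 deg (i.e. -60 mod 180)
```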