331.
We have carried out targeted submillimetre observations as part of a programme to explore the connection between the rest-frame ultraviolet and far-infrared properties of star-forming galaxies at high redshift, which is currently poorly understood. On the one hand, the Lyman break technique is very effective at selecting such high-redshift galaxies. On the other, 'blank-field' imaging in the submillimetre seems to turn up sources routinely, amongst which some are star-forming galaxies at similar redshifts. Much work has already been done searching for optical identifications of objects detected with the SCUBA instrument. Here we have taken the opposite approach, performing submillimetre photometry for a sample of Lyman break galaxies whose ultraviolet properties imply high star formation rates. The total signal from our Lyman break sample is undetected in the submillimetre, at an rms level of ∼0.5 mJy, which implies that the population of Lyman break galaxies does not constitute a large part of the recently detected blank-field submillimetre sources. However, our one detection suggests that, with reasonable SCUBA integrations, we might expect to detect those few Lyman break galaxies that are brightest in the far infrared.
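Quoting a single rms for the whole sample implies an average (stack) of the individual photometric measurements; a minimal sketch of an inverse-variance weighted stack is shown below. The function name and the assumption of independent Gaussian errors are illustrative, not a description of the actual SCUBA reduction.

```python
import numpy as np

def stack_fluxes(fluxes_mjy, errors_mjy):
    """Inverse-variance weighted mean flux of a source sample and its
    formal 1-sigma uncertainty (the 'rms level' of the stacked signal)."""
    flux = np.asarray(fluxes_mjy, dtype=float)
    err = np.asarray(errors_mjy, dtype=float)
    w = 1.0 / err**2                      # weights = inverse variance
    mean = np.sum(w * flux) / np.sum(w)   # weighted mean flux density (mJy)
    sigma = 1.0 / np.sqrt(np.sum(w))      # uncertainty on the weighted mean
    return mean, sigma

# Example with made-up numbers: three sources measured at low significance.
print(stack_fluxes([0.8, -0.3, 1.1], [1.0, 0.9, 1.2]))
```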
332.
We investigate the evolution of the metallicity of the intergalactic medium (IGM), with particular emphasis on its spatial distribution. We propose that metal enrichment occurs as a two-step process. First, supernova (SN) explosions eject metals into relatively small regions confined to the surroundings of star-forming galaxies. From a comprehensive treatment of blowout we show that SNe by themselves fail, by more than one order of magnitude, to distribute the products of stellar nucleosynthesis over volumes large enough to pollute the whole IGM to the metallicity levels observed. Thus, an additional (but as yet unknown) physical mechanism must be invoked to mix the metals on scales comparable to the mean distance between the galaxies that are the most efficient pollutants. From this simple hypothesis we derive a number of testable predictions for the evolution of the IGM metallicity. Specifically, we find that: (i) the fraction of metals ejected over the star-formation history of the Universe is about 50 per cent at    ; that is, approximately half of the metals today are found in the IGM; (ii) if the ejected metals were homogeneously mixed with the baryons in the Universe, the average IGM metallicity would be     at    ; however, owing to spatial inhomogeneities, the distribution of metallicities in the diffusive zones has a wide (more than two orders of magnitude) spread around this value; (iii) if metals become more uniformly distributed at    , as assumed, then at     the metallicity of the IGM is narrowly confined within the range    . Finally, we point out that our results can account for the observed metal content of the intracluster medium.
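The 'more than one order of magnitude' shortfall is essentially a volume filling factor argument; a schematic version is sketched below in standard notation. The symbols are defined here for illustration and are not taken from the paper.

```latex
% Fraction of the IGM volume reached by supernova-driven ejecta:
%   n_gal : comoving number density of the polluting galaxies
%   R_e   : maximum radius reached by the metal-enriched outflow of one galaxy
Q_{\mathrm{metals}} \;\simeq\; n_{\mathrm{gal}}\,\frac{4\pi}{3}\,R_{\mathrm{e}}^{3} \;\ll\; 1
```

A filling factor Q_metals well below unity is the sense in which SN-driven blowout alone cannot pollute the whole IGM, which is what motivates the additional mixing mechanism invoked above.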
333.
We show how to decorrelate the (pre-whitened) power spectrum measured from a galaxy survey into a set of high-resolution uncorrelated band-powers. The treatment includes non-linearity, but not redshift distortions. Amongst the infinitely many possible decorrelation matrices, the square root of the Fisher matrix, or a scaled version thereof, offers a particularly good choice, in the sense that the band-power windows are narrow, approximately symmetric, and well-behaved in the presence of noise. We use this method to compute band-power windows for, and the information content of, the Sloan Digital Sky Survey, the Las Campanas Redshift Survey, and the IRAS 1.2-Jy Survey.
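A minimal numpy sketch of the decorrelation step described above is given below, assuming the Fisher matrix F of the pre-whitened band-powers and the raw band-power estimates p_hat are already available; all array and function names are illustrative.

```python
import numpy as np
from scipy.linalg import sqrtm

def decorrelate_bandpowers(F, p_hat):
    """Decorrelate band-powers whose covariance is C = F^{-1}, using a
    row-normalized square root of the Fisher matrix as the decorrelation
    matrix. Each row of W is a band-power window summing to unity."""
    W = np.real(sqrtm(F))                      # symmetric square root of F
    W /= W.sum(axis=1, keepdims=True)          # normalize the windows
    p_dec = W @ p_hat                          # decorrelated band-powers
    cov = W @ np.linalg.solve(F, W.T)          # diagonal, up to the row scaling
    return p_dec, np.sqrt(np.diag(cov))        # estimates and their errors
```

Any matrix of the form D F^{1/2} with D diagonal decorrelates equally well; the F^{1/2} choice is the one for which the windows (rows of W) come out narrow and approximately symmetric, which is the property highlighted above.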
334.
Recognizing the beneficial effect of nonlinear soil–foundation response has led to a novel design concept, termed 'rocking isolation'. The analysis and design of such rocking structures require nonlinear dynamic time-history analyses. Analyzing the entire soil–foundation–structure system is computationally demanding, impeding the application of rocking isolation in practice. Therefore, there is an urgent need for efficient simplified analysis methods. This paper assesses the robustness of two simplified analysis methods, using (i) a nonlinear and (ii) a bilinear rocking stiffness, each combined with linear viscous damping. The robustness of the simplified methods is assessed by (i) one-to-one comparison with a benchmark finite element (FE) analysis using a selection of ground motions, and (ii) statistical comparison of probability distributions of response quantities that characterize the time-history response of rocking systems. A bridge pier (assumed rigid) supported on a square foundation, lying on a stiff clay stratum, is used as an illustrative example. Nonlinear dynamic FE time-history analysis serves as the benchmark. Both methods yield reasonably accurate predictions of the maximum rotation θ_max. Their stochastic comparison with respect to the empirical cumulative distribution function of θ_max reveals that neither the nonlinear nor the bilinear method is biased; thus, both can be used to estimate probabilities of exceeding a given threshold value of θ. The bilinear method, developed in this paper, is much easier to calibrate than the nonlinear one while offering similar performance.
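As a concrete illustration of the second ingredient above, the sketch below shows one plausible form of a bilinear rocking moment-rotation backbone. The stiffness, moment capacity and post-capacity ratio are placeholder parameters, not the calibrated values of the paper, and the full simplified method additionally includes linear viscous damping and a hysteretic unloading rule.

```python
import numpy as np

def bilinear_rocking_moment(theta, k_init, m_cap, post_ratio=0.05):
    """Bilinear moment-rotation backbone for a rocking foundation:
    initial rotational stiffness k_init up to the moment capacity m_cap,
    then a reduced branch with stiffness post_ratio * k_init."""
    theta = np.asarray(theta, dtype=float)
    theta_y = m_cap / k_init                       # rotation at capacity
    k2 = post_ratio * k_init                       # post-capacity stiffness
    return np.where(np.abs(theta) <= theta_y,
                    k_init * theta,
                    np.sign(theta) * (m_cap + k2 * (np.abs(theta) - theta_y)))

# Placeholder numbers: 5 GNm/rad initial stiffness, 50 MNm moment capacity.
rotations = np.linspace(-0.02, 0.02, 5)
print(bilinear_rocking_moment(rotations, k_init=5e9, m_cap=5e7))
```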
335.
21-cm tomography is expected to be difficult, in part because of serious foreground contamination. Previous studies have found that line-of-sight approaches are capable of cleaning foregrounds to an acceptable level on large spatial scales, but not on small spatial scales. In this paper, we introduce a Fourier-space formalism for describing the line-of-sight methods, and use it to introduce an improved method for 21-cm foreground cleaning. Heuristically, this method involves fitting foregrounds in Fourier space using weighted polynomial fits, with each pixel weighted according to its information content. We show that the new method reproduces the old one on large angular scales, and gives marked improvements on small scales at essentially no extra computational cost.
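A minimal sketch of the per-pixel weighted polynomial fit described above is given below: for each Fourier pixel on the sky, a low-order polynomial in frequency is fitted by weighted least squares and subtracted. The data layout, polynomial order and interpretation of the weights as inverse noise variance are simplifying assumptions for illustration.

```python
import numpy as np

def clean_foregrounds(data, weights, order=2):
    """Weighted polynomial foreground fit along the frequency (line-of-sight)
    direction, performed independently for each Fourier pixel on the sky.

    data    : complex array, shape (n_pix, n_freq)
    weights : real array,    shape (n_pix, n_freq), e.g. inverse noise variance
    Returns the foreground-subtracted residuals, same shape as data.
    """
    n_pix, n_freq = data.shape
    nu = np.linspace(-1.0, 1.0, n_freq)            # normalized frequency axis
    A = np.vander(nu, order + 1, increasing=True)  # (n_freq, order+1) design matrix
    cleaned = np.empty_like(data)
    for i in range(n_pix):
        sw = np.sqrt(weights[i])                   # weighted least squares
        coeffs, *_ = np.linalg.lstsq(A * sw[:, None], sw * data[i], rcond=None)
        cleaned[i] = data[i] - A @ coeffs          # subtract the smooth fit
    return cleaned
```

Because each pixel is fitted independently with its own weights, heavily contaminated or noisy pixels contribute little to the fit, which is what 'weighted according to its information content' amounts to in this simplified picture.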
336.
We present a state-of-the-art linear redshift distortion analysis of the recently published IRAS Point Source Catalog Redshift Survey (PSCz). The procedure involves linear compression into 4096 Karhunen-Loève (signal-to-noise) modes, culled from a potential pool of ∼3 × 10⁵ modes, followed by quadratic compression into three separate power spectra: the galaxy-galaxy, galaxy-velocity and velocity-velocity power spectra. Least-squares fitting to the decorrelated power spectra yields a linear redshift distortion parameter β.
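A generic sketch of the signal-to-noise (Karhunen-Loève) compression step is shown below; the actual PSCz analysis involves further ingredients (pre-whitening, quadratic compression into the three power spectra), so this only illustrates the mode construction, with illustrative array names.

```python
import numpy as np
from scipy.linalg import eigh

def karhunen_loeve_compress(data, S, N, n_modes=4096):
    """Compress a data vector into its highest signal-to-noise KL modes.

    data : observed data vector, length n
    S, N : prior signal and noise covariance matrices, shape (n, n)
    Solves the generalized eigenproblem S b = lam * N b; the eigenvalue lam
    is the expected signal-to-noise (power) of each mode."""
    evals, evecs = eigh(S, N)                 # eigenvalues in ascending order
    keep = np.argsort(evals)[::-1][:n_modes]  # highest signal-to-noise modes
    B = evecs[:, keep]                        # columns are the retained modes
    return B.T @ data, evals[keep]            # mode amplitudes and their S/N
```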
337.
A Rb-Sr whole-rock isochron study indicates that the entire Donegal granite suite was emplaced into orthotectonic Caledonian (Dalradian) rocks over a short interval during mid-Silurian to earliest-Devonian times. The Thorr pluton, probably the earliest member of the suite, yields an age of 418 ± 26 Myr and an initial ⁸⁷Sr/⁸⁶Sr ratio of 0.7055 ± 4, while the latest member, the Main Donegal pluton, has an age of 407 ± 23 Myr and an initial ⁸⁷Sr/⁸⁶Sr ratio of 0.7063 ± 5 (λ⁸⁷Rb = 1.42 × 10⁻¹¹ yr⁻¹). Errors on both the ages and the initial Sr isotope ratios incorporate both a priori and geological scatter components and are quoted at the 2σ level. The low and restricted range of initial Sr isotope ratios suggests small but significant differences in the composition of the parental granitic magmas, which were derived from a low Rb/Sr, low ⁸⁷Sr/⁸⁶Sr source.
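For reference, the ages and initial ratios quoted above follow from the standard whole-rock Rb-Sr isochron relation, sketched here in textbook form (not specific to this study):

```latex
\left(\frac{^{87}\mathrm{Sr}}{^{86}\mathrm{Sr}}\right)
  = \left(\frac{^{87}\mathrm{Sr}}{^{86}\mathrm{Sr}}\right)_{\!0}
  + \left(\frac{^{87}\mathrm{Rb}}{^{86}\mathrm{Sr}}\right)\!\left(e^{\lambda t}-1\right),
\qquad
t = \frac{1}{\lambda}\,\ln\!\left(1+m\right)
```

Each whole-rock sample defines a point in the ⁸⁷Rb/⁸⁶Sr vs ⁸⁷Sr/⁸⁶Sr plane; the slope m of the best-fit line gives the age t, the intercept gives the initial ⁸⁷Sr/⁸⁶Sr ratio, and λ is the ⁸⁷Rb decay constant quoted above.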
338.
Climate warming has not resulted in measurable thawing of the cold (−5°C to −10°C) permafrost in northern Alaska during the last half century. The maximum depths of summer thaw at five locations near Barrow, Alaska, in 2005 were within the ranges of the depths obtained at those same locations during the early 1950s. However, there has been a net warming of about 2°C, after a cooling of 0.4°C during 1953–1960, at the upper depths of the permafrost column at two of the locations. Thawing of permafrost from above (an increase in active-layer thickness) is determined by the summer thawing index for the specific year; any warming, or cooling, of the upper permafrost column results from the cumulative effect of changes in average annual air temperature over a period of years, assuming no change in surface conditions. Theoretically, thawing from the base of the permafrost should be negligible even in areas of thin (about 100–200 m) permafrost in northern Alaska. The reported shoreline erosion along the northern Alaska coast is a secondary consequence of changes in the adjacent ocean ice coverage during the stormy fall period, and is not a direct result of any "thawing" of the permafrost.
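The statement that thaw from above is governed by the summer thawing index is commonly formalized with the Stefan solution; the sketch below gives that textbook relation, not the authors' site-specific calculation.

```latex
% Stefan approximation for the depth of summer thaw (active-layer thickness):
%   k_t   : thermal conductivity of the thawed soil
%   I_t   : surface thawing index (degree-time above 0 deg C, in consistent units)
%   rho_d : dry bulk density of the soil,  w : gravimetric water content
%   L     : latent heat of fusion of ice
X \;\approx\; \sqrt{\frac{2\,k_t\,I_t}{\rho_d\,w\,L}}
```

Under this approximation the year-to-year variation in thaw depth tracks the square root of the thawing index, consistent with the observation above that active-layer thickness is set by the summer of the specific year rather than by the multi-year air-temperature trend.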
339.
The comparison of macroseismic intensity scales
The number of different macroseismic scales that have been used to express earthquake shaking in the course of the last 200 years is not known; it may reach three figures. The number of important scales that have been widely adopted is much smaller, perhaps about eight, not counting minor variants. Where data sets exist that are expressed in different scales, it is often necessary to establish some sort of equivalence between them, although best practice would be to reassign intensity values rather than convert them. This is particularly true because differences between workers in assigning intensity are often greater than differences between the scales themselves, particularly in cases where a scale may not be very well defined. The extent to which a scale guides the user to arrive at a correct assessment of the intensity is a measure of the quality of the scale. There are a number of reasons why one might prefer one scale to another for routine use, and some of these pull in different directions. If a scale has many tests (diagnostics) for each degree, it is more likely that the scale can be applied in any case that comes to hand; but if the diagnostics are so numerous that they include ones that do not accurately indicate any one intensity level, then use of the scale will tend to produce false values. The purpose of this paper is chiefly to discuss, in a general way, the principles involved in the analysis of intensity scales. Conversions from different scales to the European Macroseismic Scale are discussed.