91.
On North Harris and southeast Lewis a weathering limit separates glacially-moulded bedrock on low ground from frost-shattered bedrock and blockfields on high plateaux. Analysis of the depths of horizontal stress-release joints demonstrates significant contrasts in bedrock weathering above and below this boundary, and the survival of gibbsite only in soils above the weathering limit indicates that it represents the upper limit of Late Devensian glacial erosion. The weathering limit declines regularly in altitude on either side of the former ice shed, and is therefore interpreted as a periglacial trimline defining the upper limit of a locally-nourished ice mass at its maximum extent, rather than a former thermal boundary between protective cold-based and erosive warm-based ice. Calculated basal shear stress values are consistent with this interpretation. The configuration of the trimline indicates that at the last glacial maximum the area supported an ice cap that achieved a maximum altitude of ca. 700 m above present sea level and declined in altitude to the west-northwest and east-southeast at an average gradient of ca. 20 m km⁻¹. Extrapolation of the dimensions of this ice cap suggests that it terminated ca. 7–10 km west of the present coast of Harris, and was confluent with mainland ice a short distance east of the present coastline.
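The shear-stress consistency check can be sketched numerically. The snippet below is not the paper's calculation; it simply applies the standard driving-stress relation τ ≈ ρgH sin α to the reconstructed surface gradient of ca. 20 m km⁻¹, with the ice thicknesses chosen purely for illustration. Values in or near the 50–150 kPa range commonly quoted for active glaciers are the sense in which such estimates can be "consistent" with a dynamic, locally nourished ice cap.

```python
import math

# Rough driving-stress check (illustrative numbers, not the paper's calculation):
# basal shear stress of an ice mass is approximately tau = rho * g * H * sin(alpha),
# where alpha is the surface slope. The ~20 m/km gradient comes from the
# reconstructed trimline; the thicknesses H are assumed for illustration.
RHO_ICE = 917.0                      # kg m^-3
G = 9.81                             # m s^-2
ALPHA = math.atan(20.0 / 1000.0)     # surface slope of ca. 20 m per km

for H in (200.0, 400.0, 600.0):      # assumed ice thicknesses (m)
    tau_kpa = RHO_ICE * G * H * math.sin(ALPHA) / 1000.0
    print(f"H = {H:5.0f} m  ->  basal shear stress ~ {tau_kpa:.0f} kPa")
```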
92.
We report on the discovery of over 50 strong Hα emitting objects towards the large OB association Cyg OB2 and the H ii region DR 15 on its southern periphery. This was achieved using the INT Photometric Hα Survey of the Northern Galactic Plane (IPHAS), combined with follow-up spectroscopy using the MMT multi-object spectrometer HectoSpec. We present optical spectra, supplemented with optical r', i' and Hα photometry from IPHAS, and near-infrared J, H and K photometry from the Two Micron All Sky Survey. The position of the objects in the (J − H) versus (H − K) diagram strongly suggests that most of them are young. Many show Ca ii infrared triplet emission, indicating that they are in a pre-main-sequence phase of evolution, of T Tauri and Herbig Ae nature. Among these, we have uncovered a pronounced clustering of T Tauri stars roughly a degree south of the centre of Cyg OB2, in an arc close to the H ii region DR 15 and the radio ring nebula G79.29+0.46, whose candidacy as a luminous blue variable we discuss. The emission-line objects towards Cyg OB2 itself could be the brightest, most prominent component of a population of lower-mass pre-main-sequence stars that has yet to be uncovered. Finally, we discuss the nature of the ongoing star formation in Cyg OB2 and the possibility that the central OB stars have triggered star formation in the periphery.
93.
94.
Article 2 of the United Nations Framework Convention on Climate Change (UNFCCC) calls for stabilization of greenhouse gas (GHG) concentrations at levels that prevent dangerous anthropogenic interference (DAI) in the climate system. However, some of the recent policy literature has focused on dangerous climatic change (DCC) rather than on DAI. DAI is a set of increases in GHG concentrations that has a non-negligible possibility of provoking changes in climate that in turn have a non-negligible possibility of causing unacceptable harm, including harm to one or more of ecosystems, food production systems, and sustainable socio-economic systems, whereas DCC is a change of climate that has actually occurred or is assumed to occur and that has a non-negligible possibility of causing unacceptable harm. If the goal of climate policy is to prevent DAI, then the determination of allowable GHG concentrations requires three inputs: the probability distribution function (pdf) for climate sensitivity, the pdf for the temperature change at which significant harm occurs, and the allowed probability ("risk") of incurring harm previously deemed to be unacceptable. If the goal of climate policy is to prevent DCC, then one must know the correct climate sensitivity (along with the harm pdf and risk tolerance) in order to determine allowable GHG concentrations. DAI from elevated atmospheric CO2 also arises through its impact on ocean chemistry as the ocean absorbs CO2. The primary chemical impact is a reduction in the degree of supersaturation of ocean water with respect to calcium carbonate, the structural building material for coral and for calcareous phytoplankton at the base of the marine food chain. Here, the probability of significant harm (in particular, impacts violating the subsidiary conditions in Article 2 of the UNFCCC) is computed as a function of the ratio of total GHG radiative forcing to the radiative forcing for a CO2 doubling, using two alternative pdfs for climate sensitivity and three alternative pdfs for the harm temperature threshold. The allowable radiative forcing ratio depends on the probability of significant harm that is tolerated, and can be translated into allowable CO2 concentrations given some assumption concerning the future change in total non-CO2 GHG radiative forcing. If future non-CO2 GHG forcing is reduced to half of the present non-CO2 GHG forcing, then the allowable CO2 concentration is 290–430 ppmv for a 10% risk tolerance (depending on the chosen pdfs) and 300–500 ppmv for a 25% risk tolerance (assuming a pre-industrial CO2 concentration of 280 ppmv). For future non-CO2 GHG forcing frozen at the present value, and for a 10% risk threshold, the allowable CO2 concentration is 257–384 ppmv. The implications of these results are that (1) emissions of GHGs need to be reduced as quickly as possible, not in order to comply with the UNFCCC, but in order to minimize the extent and duration of non-compliance; (2) we do not have the luxury of trading off reductions in emissions of non-CO2 GHGs against smaller reductions in CO2 emissions; and (3) preparations should begin soon for the creation of negative CO2 emissions through the sequestration of biomass carbon.
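The structure of this calculation can be illustrated with a small Monte Carlo sketch. All distributions and numbers below (the lognormal sensitivity pdf, the normal harm-threshold pdf, the assumed present non-CO2 forcing of 0.25 doublings) are placeholder assumptions rather than the paper's inputs; the sketch only shows the logic: sample the two pdfs, find the largest forcing ratio whose harm probability stays within the tolerated risk, and convert the CO2 share of that forcing to a concentration, since CO2 forcing scales with the logarithm of concentration (a forcing of r doublings corresponds to roughly 280 × 2^r ppmv).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# Illustrative pdfs (assumptions, not the paper's distributions): a lognormal
# climate sensitivity with median ~3 K per CO2 doubling, and a normal pdf for
# the warming at which unacceptable harm begins.
S = rng.lognormal(mean=np.log(3.0), sigma=0.4, size=N)   # K per CO2 doubling
T_harm = rng.normal(loc=2.5, scale=0.8, size=N)          # harm threshold (K)

def p_harm(forcing_ratio):
    """Probability that equilibrium warming S * ratio exceeds the harm threshold."""
    return float(np.mean(S * forcing_ratio > T_harm))

def allowable_ratio(risk, ratios=np.linspace(0.0, 2.0, 401)):
    """Largest total-forcing ratio (in CO2-doubling units) within the risk tolerance."""
    ok = [r for r in ratios if p_harm(r) <= risk]
    return max(ok) if ok else 0.0

PRESENT_NONCO2 = 0.25   # assumed present non-CO2 forcing, in doubling units

for risk in (0.10, 0.25):
    r_total = allowable_ratio(risk)
    r_co2 = r_total - 0.5 * PRESENT_NONCO2    # non-CO2 forcing halved in future
    conc = 280.0 * 2.0 ** r_co2               # ppmv; forcing ~ log2(C / 280)
    print(f"risk {risk:.0%}: total ratio {r_total:.2f} -> CO2 ~ {conc:.0f} ppmv")
```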
95.
Identification details of the 66 first-order selenodetic control points, i.e. points consisting of craters having diameters larger than 8 km, are given. The distribution of all 211 reference points of the network on the Blagg-Müller maps is also presented. In the Appendix the corresponding Blagg-Müller and LPL Catalogue number for each crater, as well as the frame numbers of the Lunar Orbiter IV photographs in which the craters have been studied, are included.
On leave from the Astronomy Department, University of Athens, Greece.
The Lunar Science Institute is operated by the Universities Space Research Association under Contract No. NSR 09-051-001 with the National Aeronautics and Space Administration. This paper is Lunar Science Institute Contribution No. 237.
96.
The glaciomarine model for deglaciation of the Irish Sea basin suggests that the weight of ice at the last glacial maximum was sufficient to raise relative sea‐levels far above their present height, destabilising the ice margin and causing rapid deglaciation. Glacigenic deposits throughout the basin have been interpreted as glaciomarine. The six main lines of evidence on which the hypothesis rests (sedimentology, deformation structures, delta deposits, marine fauna, amino‐acid ratios and radiocarbon dates) are reviewed critically. The sedimentological interpretation of many sections has been challenged and it is argued that subglacial sediments are common rather than rare and that there is widespread evidence of glaciotectonism. Density‐driven deformation associated with waterlain sediments is rare and occurs where water was ponded locally. Sand and gravel deposits interpreted as Gilbert‐type deltas are similarly the result of local ponding or occur where glaciers from different source areas uncoupled. They do not record past sea‐levels and the ad hoc theory of 'piano‐key tectonics' is not required to explain the irregular pattern of altitudes. The cold‐water foraminifers interpreted as in situ are regarded as reworked from Irish Sea sediments that accumulated during much of the late Quaternary, when the basin was cold and shallow with reduced salinities. Amino‐acid age estimates used in support of the glaciomarine model are regarded as unreliable. Radiocarbon dates from distinctive foraminiferal assemblages in northeast Ireland show that glaciomarine sediments do occur above present sea‐level, but they are restricted to low altitudes in the north of the basin and record a rise rather than a fall in sea‐level. It is suggested here that the oldest dates, around 17 000 yr BP, record the first Late Devensian (Weichselian) marine inundation above present sea‐level. This accords with the pattern but not the detail of recent models of sea‐level change. Copyright © 2001 John Wiley & Sons, Ltd.
97.
Summary. A normal mode superposition approach is used to synthesize complete seismic codas for flat layered earth models and the P-SV phases. Only modes which have real eigenwavenumbers are used, so that the search for eigenvalues in the complex wavenumber plane is confined to the real axis. In order to synthesize early P-wave arrivals by summing a number of 'trapped' modes, an anomalously high velocity cap layer is added to the bottom of the structure so that most of the seismic energy is contained in the upper layers as high-order surface waves. Causality arguments are used to define time windows for which the resulting synthetic seismograms are close approximations to the exact solutions without the cap layer. The traditional Thomson–Haskell matrix approach to computing the normal modes is reformulated so that numerical problems encountered at high frequencies are avoided, and numerical results of the locked mode approximation are given.
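A stripped-down illustration of the superposition step is given below. It is not the authors' formulation, which involves the P-SV eigenfunctions of the layered model and the locked-mode device described above; the sketch simply sums a few hypothetical dispersive modes in the frequency domain and inverse-transforms to obtain a synthetic trace, to show where the mode sum enters.

```python
import numpy as np

# Minimal mode-superposition sketch (not the authors' code): a synthetic trace
# is built by summing a few dispersive modes in the frequency domain and
# transforming back to time. The dispersion curves, excitation amplitudes and
# distance are placeholder assumptions.
x = 100e3                              # source-receiver distance (m), assumed
dt, nt = 0.05, 4096                    # sampling interval (s) and trace length
freqs = np.fft.rfftfreq(nt, dt)        # Hz

def phase_velocity(n, f):
    # Hypothetical dispersion: each mode approaches a slow layer velocity at
    # high frequency and a faster half-space velocity at low frequency.
    return 2300.0 + 1200.0 / (1.0 + (f * (n + 1) / 0.3) ** 2)

spectrum = np.zeros(freqs.size, dtype=complex)
for n in range(5):                     # fundamental plus four higher modes
    c = phase_velocity(n, freqs)       # phase velocity (m/s) at each frequency
    A = np.exp(-((freqs - 0.3) ** 2) / 0.02) / (n + 1)   # placeholder excitation
    k = 2.0 * np.pi * freqs / c        # wavenumber at each frequency
    spectrum += A * np.exp(-1j * k * x)                  # propagate mode to x

trace = np.fft.irfft(spectrum, n=nt)   # time-domain synthetic seismogram
times = np.arange(nt) * dt
print("peak amplitude near t =", times[np.argmax(np.abs(trace))], "s")
```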
98.
Paul Boyle & Danny Dorling (2004) Area 36(2), 101–110
National censuses are expensive. They are conducted infrequently. They collect information that some feel infringes their human rights, and people are required by law to complete them. The outputs are not perfect, and in some situations may be misleading. Some suggest that censuses hark back to a period when regularly collected administrative data were not available. These are some of the views held about national censuses. Why, then, would others argue that they are an essential resource? In this paper, we consider some of the pros and cons of conducting national censuses, before introducing a series of papers that draw on early data available from the 2001 UK census. We argue that these papers, and the wealth of research that will be conducted in the future with 2001 census data, make a strong case for supporting the compulsory collection of personal information about the 'entire' population every ten years.
99.
This paper describes the development of a comprehensive geographic database of historical precipitation and runoff measurements for the conterminous U.S. The database is used in a spatial analysis to characterize large-scale precipitation and runoff patterns and to assess the utility and limitations of using historical hydro-meteorological data for providing spatially distributed precipitation estimates at regional and continental scales. Long-term annual average precipitation (P) and runoff (Q) surfaces (geographically referenced, digital representations of a continuous spatial distribution) generated from interpolation of point measurements are used in a distributed water balance calculation to check the reliability of precipitation estimates. The resulting input-output values (P − Q) illustrate the deficiency (sparse distribution and low-elevation bias) of historical precipitation measurements in the mountainous western U.S., where snowmelt is an important component of the annual runoff. The incorporation of high-elevation snow measurements into the precipitation record significantly improves the water balance estimates in some areas and enhances the utility of historical data for providing spatially distributed precipitation estimates in topographically diverse regions. Regions where the use of historical precipitation data may be most limited for precipitation estimation are identified, and alternatives to the use of interpolated historical data for precipitation estimation across large heterogeneous regions are suggested. The research establishes a database for continental-scale studies and provides direction for the successful development of spatially distributed regional-scale water balance models.
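The distributed water-balance check (P − Q) described above can be sketched as follows. The station locations, precipitation values, interpolation scheme and the constant runoff surface are placeholder assumptions, not the paper's data; the sketch only shows the mechanics of interpolating point measurements to a surface and flagging cells where the residual turns negative.

```python
import numpy as np

# Minimal water-balance sketch (synthetic data, not the paper's database):
# interpolate station precipitation to a grid by inverse-distance weighting,
# subtract a runoff surface, and flag cells where P - Q is negative, i.e.
# where the gauge network likely underestimates precipitation.
rng = np.random.default_rng(1)
stations = rng.uniform(0.0, 100.0, size=(40, 2))   # station x, y (km), assumed
p_obs = rng.uniform(300.0, 1500.0, size=40)        # annual precipitation (mm)

xx, yy = np.meshgrid(np.linspace(0, 100, 101), np.linspace(0, 100, 101))

def idw(x, y, power=2.0):
    """Inverse-distance-weighted precipitation estimate at one grid point."""
    d = np.hypot(stations[:, 0] - x, stations[:, 1] - y)
    w = 1.0 / np.maximum(d, 1e-6) ** power
    return np.sum(w * p_obs) / np.sum(w)

P = np.vectorize(idw)(xx, yy)              # interpolated precipitation surface (mm)
Q = np.full_like(P, 600.0)                 # placeholder runoff surface (mm)

residual = P - Q                           # distributed water-balance residual
print("cells with P - Q < 0:", int(np.sum(residual < 0)))
```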
100.
Conventional atmospheric dispersion and air quality models require that the wind field be known at a higher resolution than is currently available from field monitoring stations in most coastal areas. In this paper, a numerical model is developed to predict the wind flow field during the land-sea breeze. The form, assumptions and method of solution of the model are described. The model output is compared to atmospheric data taken from a field study conducted in the Santa Barbara Channel and the Ventura-Oxnard plain in southern California.