A total of 1,748 matching records were found (search time: 384 ms).
44.
A generic network design in close-range photogrammetry is one where optimal multi-ray intersection geometry is obtained with as few camera stations as practicable. Hyper redundancy is a concept whereby, once the generic network is in place, many additional images are recorded, with the beneficial impact upon object point precision being equivalent to the presence of multiple exposures at each camera position within the generic network. The effective number of images per station within a hyper redundant network might well be in the range of 10 to 20 or more. Since a hyper redundant network may comprise hundreds of images, the concept is only applicable in practice to fully automatic vision metrology systems, where it proves to be a very effective means of enhancing measurement accuracy at the cost of minimal additional work in the image recording phase. This paper briefly reviews the network design and accuracy aspects of hyper redundancy and illustrates the technique by way of the photogrammetric measurement of surface deformation of a radio telescope of 26 m diameter. This project required an object point measurement accuracy of σ = 0.065 mm, or 1/400 000 of the diameter of the reflector.
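As a quick cross-check (an editorial note, not part of the abstract), the quoted tolerance is simply the reflector diameter divided by the stated relative accuracy, and under the usual assumption of independent, equally weighted exposures the precision gain from k images per station scales as the square root of k:

\sigma = \frac{D}{400\,000} = \frac{26\,000\ \mathrm{mm}}{400\,000} \approx 0.065\ \mathrm{mm},
\qquad
\frac{\sigma_{\text{hyper}}}{\sigma_{\text{generic}}} \approx \frac{1}{\sqrt{k}} \approx 0.22\text{--}0.32 \quad \text{for } k = 10\text{--}20 .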
45.
We established trophic guilds of macroinvertebrate and fish taxa using correspondence analysis and a hierarchical clustering strategy for a seagrass food web in winter in the northeastern Gulf of Mexico. To create the diet matrix, we characterized the trophic linkages of macroinvertebrate and fish taxa present in Halodule wrightii seagrass habitat areas within the St. Marks National Wildlife Refuge (Florida) using binary data, combining dietary links obtained from relevant literature for macroinvertebrates with stomach analysis of common fishes collected during January and February of 1994. Hierarchical average-linkage cluster analysis of the 73 taxa of fishes and macroinvertebrates in the diet matrix yielded 14 clusters with diet similarity ≥ 0.60. We then used correspondence analysis with three factors to jointly plot the coordinates of the consumers (identified by cluster membership) and of the 33 food sources. Correspondence analysis served as a visualization tool for assigning each taxon to one of eight trophic guilds: herbivores, detritivores, suspension feeders, omnivores, molluscivores, meiobenthos consumers, macrobenthos consumers and piscivores. These trophic groups, cross-classified with major taxonomic groups, were further used to develop consumer compartments in a network analysis model of carbon flow in this seagrass ecosystem. The method presented here should greatly improve the development of future network models of food webs by providing an objective procedure for aggregating trophic groups.
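As an illustration only (no code accompanies the abstract, and the similarity coefficient is not stated; Jaccard similarity on the binary diet matrix is assumed here), the clustering step might be reproduced along these lines in Python:

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Binary diet matrix: 73 consumer taxa (rows) x 33 food sources (columns);
# 1 means the consumer feeds on that food source. Random placeholders here.
rng = np.random.default_rng(0)
diet_matrix = rng.integers(0, 2, size=(73, 33)).astype(bool)

# Pairwise Jaccard distance between consumers (distance = 1 - similarity).
distances = pdist(diet_matrix, metric="jaccard")

# Average-linkage (UPGMA) hierarchical clustering, as in the abstract.
tree = linkage(distances, method="average")

# Cut the dendrogram at a diet similarity of 0.60, i.e. a distance of 0.40.
clusters = fcluster(tree, t=0.40, criterion="distance")
print("clusters found:", clusters.max())

The correspondence-analysis step that follows in the paper would then be run on the same consumer-by-food-source table (for example via a singular value decomposition of its standardized residuals) to plot consumers and food sources in a common factor space.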
46.
A benthic index of biotic integrity was developed for use in estuaries of the mid-Atlantic region of the United States (Delaware Bay estuary through Albemarle-Pamlico Sound). The index was developed for the Mid-Atlantic Integrated Assessment Program (MAIA) of the U.S. Environmental Protection Agency using procedures similar to those applied previously in Chesapeake Bay and southeastern estuaries, and was based on sampling in July through early October. Data from seven federal and state sampling programs were used to categorize sites as degraded or non-degraded based on dissolved oxygen, sediment contaminant, and sediment toxicity criteria. Various metrics of benthic community structure and function that distinguished between degraded and reference (non-degraded) sites were selected for each of five major habitat types defined by classification analysis of assemblages. Each metric was scored according to thresholds established from the distribution of values at reference sites, so that sites with low-scoring metrics would be expected to show signs of degradation. For each habitat, metrics that correctly classified at least 50% of the degraded sites in the calibration data set were selected whenever possible to derive the index. The final index integrated the average score of the combination of metrics that performed best according to several criteria. Selected metrics included measures of productivity (abundance), diversity (number of taxa, Shannon-Wiener diversity, percent dominance), species composition and life history (percent abundance of pollution-indicative taxa, percent abundance of pollution-sensitive taxa, percent abundance of Bivalvia, Tanypodinae-Chironomidae abundance ratio), and trophic composition (percent abundance of deep-deposit feeders). The index correctly classified 82% of all sites in an independent data set. Classification efficiencies of sites were higher in the mesohaline and polyhaline habitats (81–92%) than in the oligohaline (71%) and the tidal freshwater (61%). Although application of the index to low-salinity habitats should be done with caution, the MAIA index appeared to be quite reliable with a high likelihood of correctly identifying both degraded and non-degraded conditions. The index is expected to be of great utility in regional assessments as a tool for evaluating the integrity of benthic assemblages and tracking their condition over time.
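The abstract does not give the scoring rule itself; as a hedged sketch, earlier benthic IBIs of this family typically score each metric 5, 3 or 1 against the 5th and 50th percentiles of the reference-site distribution and then average the scores, which could look like this:

import numpy as np

def score_metric(value, reference_values):
    # 5/3/1 scoring against the 5th and 50th percentiles of reference-site
    # values. These particular cut-offs are an assumption borrowed from
    # earlier benthic IBIs; the abstract only says thresholds were derived
    # from the reference-site distribution.
    p5, p50 = np.percentile(reference_values, [5, 50])
    if value >= p50:
        return 5
    if value >= p5:
        return 3
    return 1

def benthic_index(site_metrics, reference_metrics):
    # Final index = average score of the metrics selected for the habitat.
    scores = [score_metric(site_metrics[name], reference_metrics[name])
              for name in site_metrics]
    return sum(scores) / len(scores)

# Hypothetical mesohaline site with two of the selected metrics.
reference = {
    "number_of_taxa": np.array([18, 21, 22, 24, 25, 26, 27, 30]),
    "shannon_diversity": np.array([2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8]),
}
site = {"number_of_taxa": 12, "shannon_diversity": 1.9}
print(benthic_index(site, reference))  # 1.0 -> consistent with degradation

Metrics for which lower values indicate better condition (for example, percent abundance of pollution-indicative taxa) would be scored with the inequalities reversed.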
47.
The wavefield transform is a mathematical technique for transforming low-frequency electromagnetic (EM) signals to a non-diffusive wave domain. The ray approximation is valid in the transform space, which makes traveltime tomography possible for 3D mapping of the subsurface electrical conductivity distribution. The transform, however, imposes stringent frequency-bandwidth and signal-to-noise ratio requirements on the data. Here we discuss a laboratory-scale experiment designed to collect transform-quality EM data and to demonstrate the practical feasibility of transforming these data to the wavefield domain.
We have used the scalable nature of EM fields to design a time-domain experiment using graphite blocks to simulate realistic field conditions while leaving the time scale undisturbed. The spatial dimensions have been scaled down by a factor of a thousand by scaling conductivity up by a factor of a million. The graphite blocks have two holes drilled into them to carry out cross-well and borehole-to-surface experiments. Steel sheets have been inserted between the blocks to simulate a conductive layer.
Our experiments show that accurate EM data can be recorded on a laboratory-scale model even when the scaling of some features, such as drill-hole diameters, is not maintained. More importantly, the time-domain EM data recorded in cross-well and surface-to-borehole modes can be usefully and accurately transformed to the wavefield domain. In a homogeneous medium, the observed wavefield propagation delay is proportional to the direct distance between transmitter and receiver. In a layered medium, data accuracy is reduced and our results are therefore less conclusive. On the basis of the experimental results we conclude that the wavefield transform could constitute a valid approach to the interpretation of accurate, undistorted time-domain data if further improvement in the transform can be realized.
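A brief note on the scaling quoted above (an editorial addition, not in the abstract): in the diffusive regime the characteristic time of a time-domain EM response scales with the product of conductivity and the square of the length scale, so shrinking the dimensions by a factor of 10^3 while raising conductivity by 10^6 leaves the time scale untouched, exactly as the authors intend:

t_d \;\propto\; \mu_0\,\sigma\,L^{2},
\qquad
\frac{t_d'}{t_d} \;=\; \frac{\sigma'}{\sigma}\left(\frac{L'}{L}\right)^{2} \;=\; 10^{6}\times\left(10^{-3}\right)^{2} \;=\; 1 .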
48.
Isotopic variations in melting snow are poorly understood. We made weekly measurements at the Central Sierra Snow Laboratory, California, of snow temperature, density, water equivalent and liquid water volume to examine how physical changes within the snowpack govern meltwater δ18O. Snowpack samples were extracted at 0.1 m intervals from ground level to the top of the snowpack profile between December 1991 and April 1992. Approximately 800 mm of precipitation fell during the study period with δ18O values between −21.35 and −4.25‰. Corresponding snowpack δ18O ranged from −22.25 to −6.25‰. The coefficient of variation of δ18O in snowpack levels decreased from −0.37 to −0.07 from winter to spring, indicating isotopic snowpack homogenization. Meltwater δ18O ranged from −15.30 to −8.05‰, with variations of up to 2.95‰ observed within a single snowmelt episode, highlighting the need for frequent sampling. Early snowmelt originated in the lower snowpack with higher δ18O through ground heat flux and rainfall. After the snowpack became isothermal, infiltrating snowmelt displaced the higher δ18O liquid in the lower snowpack through a piston flow process. Fractionation analysis using a two-component mixing model on the isothermal snowpack indicated that δ18O in the initial and final half of major snowmelt was 1.30‰ lower and 1.45‰ higher, respectively, than the value from simple mixing. Mean snowpack δ18O on individual profiling days showed a steady increase from −15.15 to −12.05‰ due to removal of lower δ18O snowmelt and addition of higher δ18O rainfall. Results suggest that direct sampling of snowmelt and snow cores should be undertaken to quantify tracer input compositions adequately. The snowmelt sequence also suggests that regimes of early lower δ18O and later higher δ18O melt may be modeled and used in catchment tracing studies.
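The two-component mixing model referred to above is not written out in the abstract; in its standard isotope-hydrology form, tracer mass balance gives the composition of the mixture and, inverted, the fraction f of one end-member:

\delta^{18}\mathrm{O}_{\mathrm{mix}} \;=\; f\,\delta^{18}\mathrm{O}_{1} + (1-f)\,\delta^{18}\mathrm{O}_{2}
\quad\Longrightarrow\quad
f \;=\; \frac{\delta^{18}\mathrm{O}_{\mathrm{mix}} - \delta^{18}\mathrm{O}_{2}}{\delta^{18}\mathrm{O}_{1} - \delta^{18}\mathrm{O}_{2}} .

The −1.30‰ and +1.45‰ offsets quoted above are departures of the observed meltwater from this simple mixing value, which the authors attribute to fractionation.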
49.
Alex Brisbourne reports on a one-day discussion meeting organized by SEIS-UK for seismologists to discuss current seismological research undertaken in the UK and to consider how the provision of new seismic equipment and training facilities may enhance that work.
50.
Ocean Drilling Program (ODP) Hole 504B near the Costa Rica Rift is the deepest hole drilled in the ocean crust, penetrating a volcanic section, a transition zone and a sheeted dike complex. The distribution of Li and its isotopes through this 1.8-km section of oceanic crust reflects the varying conditions of seawater alteration with depth. The upper volcanic rocks, altered at low temperatures, are enriched in Li (5.6-27.3 ppm) and have heavier isotopic compositions (δ7Li = 6.6-20.8‰) relative to fresh mid-ocean ridge basalt (MORB) due to uptake of seawater Li into alteration clays. The Li content and isotopic compositions of the deeper volcanic rocks are similar to MORB, reflecting restricted seawater circulation in this section. The transition zone is a region of mixing of seawater with upwelling hydrothermal fluids and of sulfide mineralization. Li enrichment in this zone is accompanied by relatively light isotopic compositions (−0.8 to 2.1‰), which signify the influence of basalt-derived Li during mineralization and alteration. Li decreases with depth to 0.6 ppm in the sheeted dike complex as a result of increasing hydrothermal extraction in the high-temperature reaction zone. Rocks in the dike complex have variable isotopic values that range from −1.7 to 7.9‰, depending on the extent of hydrothermal recrystallization and off-axis low-temperature alteration. Hydrothermally altered rocks are isotopically light because 6Li is preferentially retained in greenschist- and amphibolite-facies minerals. The δ7Li values of the highly altered rocks of the dike complex are complementary to those of high-temperature mid-ocean ridge vent fluids and compatible with equilibrium control by the alteration mineral assemblage. The inventory of Li in basement rocks permits a reevaluation of the role of oceanic crust in the budget of Li in the ocean. On balance, the upper 1.8 km of oceanic crust remains a sink for oceanic Li. The observations at 504B and an estimated flux from the underlying 0.5 km of gabbro suggest that the global hydrothermal flux is at most 8×10⁹ mol/yr, compatible with geophysical thermal models. This work defines the distribution of Li and its isotopes in the upper ocean crust and provides a basis to interpret the contribution of subducted lithosphere to arc magmas and the cycling of crustal material in the deep mantle.