Search results: 603 articles in total.
Access: paid full text, 582; free, 14; domestic free, 7.
By subject: surveying and mapping, 19; atmospheric sciences, 40; geophysics, 140; geology, 159; oceanography, 45; astronomy, 151; general, 4; physical geography, 45.
By publication year: 2024, 1; 2022, 1; 2021, 3; 2020, 6; 2019, 8; 2018, 14; 2017, 7; 2016, 11; 2015, 11; 2014, 21; 2013, 33; 2012, 31; 2011, 29; 2010, 23; 2009, 28; 2008, 27; 2007, 31; 2006, 19; 2005, 27; 2004, 46; 2003, 27; 2002, 35; 2001, 28; 2000, 25; 1999, 17; 1998, 16; 1997, 9; 1996, 10; 1995, 10; 1994, 5; 1993, 3; 1992, 4; 1991, 1; 1990, 4; 1989, 4; 1988, 4; 1987, 1; 1986, 2; 1985, 4; 1984, 4; 1983, 5; 1981, 2; 1980, 3; 1977, 1; 1976, 2.
91.
92.
93.
We present MUSE, a software framework for combining existing computational tools for different astrophysical domains into a single multiphysics, multiscale application. MUSE facilitates the coupling of existing codes written in different languages by providing inter-language tools and by specifying an interface between each module and the framework that represents a balance between generality and computational efficiency. This approach allows scientists to use combinations of codes to solve highly coupled problems without the need to write new codes for other domains or significantly alter their existing codes. MUSE currently incorporates the domains of stellar dynamics, stellar evolution and stellar hydrodynamics for studying generalized stellar systems. We have now reached a “Noah’s Ark” milestone, with (at least) two available numerical solvers for each domain. MUSE can treat multiscale and multiphysics systems in which the time- and size-scales are well separated, such as simulations of the evolution of planetary systems, small stellar associations, dense stellar clusters, galaxies and galactic nuclei. In this paper we describe three examples calculated using MUSE: the merger of two galaxies, the merger of two evolving stars, and a hybrid N-body simulation. In addition, we demonstrate an implementation of MUSE on a distributed computer which may also include special-purpose hardware, such as GRAPEs or GPUs, to accelerate computations. The current MUSE code base is publicly available as open source at http://muse.li.
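The abstract outlines the coupling architecture without giving the interface itself, so the Python sketch below is only a hypothetical illustration of the idea, not the MUSE API: every domain solver sits behind the same small interface, and the framework advances the modules in interleaved "bridge" steps, exchanging just the quantities they share (here, stellar masses). All class and function names are invented for illustration.

import math

class ToyStellarEvolution:
    """Stand-in for a stellar-evolution code: simple exponential wind mass loss."""
    def __init__(self, masses, mass_loss_rate=1e-3):
        self.t = 0.0
        self.masses = list(masses)
        self.rate = mass_loss_rate

    def evolve_until(self, t):
        dt = t - self.t
        self.masses = [m * math.exp(-self.rate * dt) for m in self.masses]
        self.t = t

class ToyStellarDynamics:
    """Stand-in for an N-body code: only tracks masses here; a real module
    would integrate positions and velocities behind the same interface."""
    def __init__(self, masses):
        self.t = 0.0
        self.masses = list(masses)

    def evolve_until(self, t):
        self.t = t  # a real solver would advance the orbits to time t

def bridge(evolution, dynamics, t_end, dt):
    """Operator-split coupling loop over a bridge time-step dt: advance the
    stellar physics, push the updated masses into the N-body model, then
    advance the dynamics."""
    t = 0.0
    while t < t_end:
        t = min(t + dt, t_end)
        evolution.evolve_until(t)
        dynamics.masses = list(evolution.masses)
        dynamics.evolve_until(t)

if __name__ == "__main__":
    ev = ToyStellarEvolution(masses=[1.0, 2.0, 5.0])
    dyn = ToyStellarDynamics(masses=[1.0, 2.0, 5.0])
    bridge(ev, dyn, t_end=100.0, dt=1.0)
    print(dyn.masses)  # masses seen by the dynamics after 100 coupled time units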
94.
95.
We study the 37 brightest radio sources in the Subaru/XMM–Newton Deep Field. We have spectroscopic redshifts for 24 of the 37 objects and photometric redshifts for the remainder, yielding a median redshift for the whole sample of z_med ≃ 1.1 and a median radio luminosity close to the 'Fanaroff–Riley type I/type II (FR I/FR II)' luminosity divide. Using mid-infrared (mid-IR; Spitzer MIPS 24 μm) data we expect to trace nuclear accretion activity, even if it is obscured at optical wavelengths, unless the obscuring column is extreme. Our results suggest that above the FR I/FR II radio luminosity break most of the radio sources are associated with objects that have excess mid-IR emission, only some of which are broad-line objects, although there is one clear low-accretion-rate object with an FR I radio structure. For extended steep-spectrum radio sources, the fraction of objects with mid-IR excess drops dramatically below the FR I/FR II luminosity break, although there exists at least one high-accretion-rate 'radio-quiet' QSO. We have therefore shown that the strong link between radio luminosity (or radio structure) and accretion properties, well known at z ∼ 0.1, persists to z ∼ 1. Investigation of mid-IR and blue excesses shows that they are correlated as predicted by a model in which, when significant accretion exists, a torus of dust absorbs ∼30 per cent of the light, and the dust above and below the torus scatters ≳1 per cent of the light.
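As a toy reading of the quoted torus model: both excesses scale with the accretion luminosity, with roughly 30 per cent re-emitted in the mid-IR and of order 1 per cent scattered into the blue, so the two should correlate with a ratio of about 30. The short Python snippet below just spells out that arithmetic; the luminosities are invented, not from the paper.

# Fractions quoted in the abstract.
f_abs, f_scat = 0.30, 0.01

for L_acc in (1e44, 1e45, 1e46):     # illustrative accretion luminosities (erg/s)
    L_mir = f_abs * L_acc            # light absorbed by the torus, re-emitted in the mid-IR
    L_blue = f_scat * L_acc          # light scattered above/below the torus (blue excess)
    print(f"L_acc={L_acc:.1e}  mid-IR excess={L_mir:.1e}  blue excess={L_blue:.1e}  ratio={L_mir / L_blue:.0f}")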
96.
We have constructed a computer model for simulation of point-sources imaged on two-dimensional detectors. An attempt has been made to ensure that the model produces data that mimic real data taken with 2-D detectors. To be realistic, such simulations must include randomly generated noise of the appropriate type from all sources (e.g. source, background, and detector). The model is generic and accepts input values for parameters such as pixel size, read noise, source magnitude, and sky brightness. Point-source profiles are then generated with noise and detector characteristics added via our model. The synthetic data are output as simple integrations (one-dimensional), as radial slices (two-dimensional), and as intensity-contour plots (three-dimensional). Each noise source can be turned on or off so that they can be studied separately as well as in combination to yield a realistic view of an image. This paper presents the basic properties of the model and some examples of how it can be used to simulate the effects of changing image position, image scale, signal strength, noise characteristics, and data reduction procedures. Use of the model has allowed us to confirm and quantify three points: 1) the use of traditional-size apertures for photometry of faint point-sources adds substantial noise to the measurement, which can significantly degrade the quality of the observation; 2) the number of pixels used to estimate the background is important and must be considered when estimating errors; and 3) the CCD equation normally used by the astronomical community consistently overestimates the signal-to-noise obtainable by a measurement, while a revised equation, discussed here, provides a better estimator.
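The abstract does not reproduce either equation, so the Python sketch below is written under stated assumptions: snr_standard implements the classic CCD signal-to-noise equation, and snr_revised implements one widely used revision that adds the penalty for estimating the background from a finite number of pixels (n_back) plus analogue-to-digital digitization noise; whether that is exactly the revised equation discussed in the paper is an assumption. The example numbers (a faint star in a 200-pixel aperture with a 100-pixel background region) are invented, and illustrate points 1 and 2 above.

import math

def snr_standard(N_star, n_pix, N_sky, N_dark, N_read):
    # Classic CCD equation: S/N = N* / sqrt(N* + n_pix*(N_S + N_D + N_R^2)).
    return N_star / math.sqrt(N_star + n_pix * (N_sky + N_dark + N_read**2))

def snr_revised(N_star, n_pix, n_back, N_sky, N_dark, N_read, gain=2.0, sigma_f=0.289):
    # Assumed revised form: the (1 + n_pix/n_back) factor accounts for the error
    # in a background level estimated from only n_back pixels; gain^2 * sigma_f^2
    # adds digitization (quantization) noise.
    noise_var = N_star + n_pix * (1.0 + n_pix / n_back) * (
        N_sky + N_dark + N_read**2 + gain**2 * sigma_f**2)
    return N_star / math.sqrt(noise_var)

# Invented example: faint star, 200-pixel aperture, 100 background pixels.
print(snr_standard(3000.0, 200, 50.0, 2.0, 10.0))      # ~16: optimistic estimate
print(snr_revised(3000.0, 200, 100, 50.0, 2.0, 10.0))  # ~10: lower, more realistic estimate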
97.
Diatoms respond rapidly to eutrophication, and diatom-based models for inferring total phosphorus (TP) have found wide application in palaeolimnology, especially in tracking trajectories of past and recent nutrient enrichment and in establishing pre-disturbance targets for restoration. Using new analyses of existing training sets and sediment cores, we examine the statistical and ecological constraints of diatom-inferred TP (DI-TP) models. Although the models show an apparently strong relationship between measured and inferred TP in the training sets, even under cross-validation, they display three fundamental weaknesses: (1) the relationship between TP and diatom relative abundance is heavily confounded with secondary variables such as alkalinity and lake depth; (2) the models contain many taxa that are not significantly related to TP; and (3) comparison between different models shows poor or no spatial replicability. At some sites the change in the sediment-core diatom assemblage tracks the TP gradient in the training sets, and DI-TP reconstructions are consistent with monitored TP data and known catchment histories for the recent past. At others, diatom species turnover is apparently related to variables other than TP, and DI-TP fails to reproduce even plausible trends. Pre-disturbance DI-TP values are also questionable at most sites. We argue that these problems pervade many DI-TP models, particularly those where violations of the basic assumptions of the transfer function approach are ignored.
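The abstract does not state which transfer-function form the DI-TP models use; weighted averaging (WA) with inverse deshrinking is the most common choice in this literature, so the Python sketch below illustrates that approach on randomly generated (invented) training data. All names and numbers are illustrative only.

import numpy as np

def wa_optima(abundances, log_tp):
    """Taxon optima: abundance-weighted mean of log10(TP) across training lakes.
    abundances: (n_lakes, n_taxa) relative abundances; log_tp: (n_lakes,)."""
    return (abundances * log_tp[:, None]).sum(axis=0) / abundances.sum(axis=0)

def wa_infer(sample, optima):
    """Raw WA inference for one assemblage: abundance-weighted mean of optima."""
    return float(np.dot(sample, optima) / sample.sum())

rng = np.random.default_rng(0)
log_tp = rng.uniform(0.5, 2.5, size=30)        # training lakes, log10 TP (ug/L)
abund = rng.dirichlet(np.ones(8), size=30)     # 8 taxa, relative abundances

optima = wa_optima(abund, log_tp)

# Simple inverse deshrinking: regress observed log TP on the raw WA inferences.
raw = np.array([wa_infer(s, optima) for s in abund])
b1, b0 = np.polyfit(raw, log_tp, 1)

core_sample = rng.dirichlet(np.ones(8))        # a fossil assemblage from a core
di_tp = 10 ** (b0 + b1 * wa_infer(core_sample, optima))
print(f"DI-TP = {di_tp:.1f} ug/L")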
98.
In this paper, a new approach to planetary mission design is described which automates the search for gravity-assist trajectories. This method finds all conic solutions given a range of launch dates, a range of launch energies and a set of target planets. The new design tool is applied to the problems of finding multiple-encounter trajectories to the outer planets and Venus gravity-assist trajectories to Mars. The last four-planet grand tour opportunity (until the year 2153) is identified. It requires an Earth launch in 1996 and encounters Jupiter, Uranus, Neptune, and Pluto. Venus gravity-assist trajectories to Mars for the 30-year period 1995–2024 are examined. It is shown that in many cases these trajectories require less launch energy to reach Mars than direct ballistic trajectories.
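The automated conic search itself is not described in enough detail in the abstract to reproduce, so the Python sketch below only illustrates the patched-conic flyby relation at the heart of any such search: for hyperbolic excess speed v_inf and periapsis radius r_p at a planet with gravitational parameter mu, the flyby bends the v_inf vector by delta = 2*arcsin(1/(1 + r_p*v_inf^2/mu)), equivalent to a free heliocentric delta-v of 2*v_inf*sin(delta/2). The Venus constants are standard values; the flyby conditions are invented for illustration.

import math

MU_VENUS = 3.24859e5   # km^3/s^2, gravitational parameter of Venus
R_VENUS = 6052.0       # km, mean radius of Venus

def flyby_turn_angle(v_inf, r_p, mu):
    """Turn angle (rad) of the v-infinity vector for a hyperbolic flyby."""
    return 2.0 * math.asin(1.0 / (1.0 + r_p * v_inf**2 / mu))

def flyby_delta_v(v_inf, r_p, mu):
    """Magnitude (km/s) of the heliocentric velocity change delivered by the flyby."""
    return 2.0 * v_inf * math.sin(flyby_turn_angle(v_inf, r_p, mu) / 2.0)

v_inf = 5.0                  # km/s, arrival excess speed at Venus (illustrative)
r_p = R_VENUS + 300.0        # km, 300 km flyby altitude (illustrative)
print(f"turn angle        = {math.degrees(flyby_turn_angle(v_inf, r_p, MU_VENUS)):.1f} deg")
print(f"equivalent delta-v = {flyby_delta_v(v_inf, r_p, MU_VENUS):.2f} km/s")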
99.
100.
High-resolution seismic reflection profile data show that the modern sediment cover (deposited over the last 150 years) in Georgian Bay is thin and spatially discontinuous. Sediments rich in ragweed pollen, largely derived from siltation linked to land clearing and European settlement, form a thin, discontinuous veneer on the lakebed. Much of the lakebed consists of exposed sediments deposited during the late glacial or early postglacial. Accumulation rates of modern sediments range from < 0 mm/year (net erosion) to ∼3.2 mm/year, often varying over horizontal distances of only a few hundred metres. These rates are much lower than those reported for the main basin of Lake Huron and the other Great Lakes, and are attributed to the low sediment supply. Only a few small rivers flow into Georgian Bay, and most of the basin is surrounded by bedrock of Precambrian gneiss and granite to the east, and Silurian dolostone, limestone and shale to the west. Thick deposits of Pleistocene drift, found on the Georgian Bay shoreline only between Meaford and Port Severn, are the main sediment source for the entire basin at present. Holocene to modern sediments are absent even from some deep basins of Georgian Bay. These findings have implications for the ultimate fate of anthropogenic contaminants in Georgian Bay. While microfossil assemblages in the ragweed-rich sediments record increased eutrophication over the last 150 years, most pollutants generated in the Georgian Bay catchment are not accumulating on the lakebed and are probably exported from the Bay.