41.
Urban expansion at the center of the Fuzhou Basin over the past 30 years   (Total citations: 3; self-citations: 1; citations by others: 3)
Xu Hanqiu. Scientia Geographica Sinica (《地理科学》), 2011, 31(3): 351-357
Driven by urbanization, Fuzhou, which lies at the center of the Fuzhou Basin, has expanded rapidly. Built-up land information extracted from multi-temporal remote sensing images with the IBI built-up index shows that the built-up area of Fuzhou increased by 105 km² in the 30 years from 1976 to 2006, a 3.2-fold increase. With the formation of new districts such as Jinshan and Kuai'an and the expansion of the Mawei district, the urban area now stretches almost continuously from west to east. The expansion was slow at first and rapid later, proceeding first northward and then eastward and westward. The study finds that Fuzhou's spatial expansion is driven not by industry but by the tertiary sector. The rapid expansion has already brought environmental and resource problems to Fuzhou, the most prominent being the urban heat island effect and the need to control the total amount of urban built-up land.
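The IBI built-up index used to extract the built-up land can be sketched from its commonly published form. This is a hedged illustration: the function name and sample values are ours, and the component indices NDBI, SAVI and MNDWI are assumed to have been computed beforehand from the image bands.

```python
def ibi(ndbi, savi, mndwi):
    """Index-based Built-up Index (commonly published form): contrasts
    the built-up signal (NDBI) against the mean of the vegetation (SAVI)
    and water (MNDWI) signals, so built-up pixels come out positive."""
    other = (savi + mndwi) / 2.0
    return (ndbi - other) / (ndbi + other)
```

For example, a pixel with a strong built-up signal such as `ibi(0.6, 0.1, 0.1)` yields a clearly positive value, while a vegetated pixel such as `ibi(0.1, 0.5, 0.1)` yields a negative one.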
42.
Li Jun. Geospatial Information (《地理空间信息》), 2009, 7(5): 155-156
Surveying and mapping professionals routinely need to prepare documents such as design specifications, technical summaries, and monitoring plans, and the drawing tools built into Word 2003 fall far short of what these documents require. This paper describes three ways of inserting AutoCAD 2004 drawings into Word 2003 documents: copy-and-paste, converting the image format, and inserting an embedded object.
43.
The continually increasing size of geospatial data sets poses a computational challenge when conducting interactive visual analytics using conventional desktop-based visualization tools. In recent decades, improvements in parallel visualization using state-of-the-art computing techniques have significantly enhanced our capacity to analyse massive geospatial data sets. However, only a few strategies have been developed to maximize the utilization of parallel computing resources to support interactive visualization. In particular, an efficient visualization intensity prediction component is lacking from most existing parallel visualization frameworks. In this study, we propose a data-driven view-dependent visualization intensity prediction method, which can dynamically predict the visualization intensity based on the distribution patterns of spatio-temporal data. The predicted results are used to schedule the allocation of visualization tasks. We integrated this strategy with a parallel visualization system deployed in a compute unified device architecture (CUDA)-enabled graphics processing units (GPUs) cloud. To evaluate the flexibility of this strategy, we performed experiments using dust storm data sets produced from a regional climate model. The results of the experiments showed that the proposed method yields stable and accurate prediction results with acceptable computational overheads under different types of interactive visualization operations. The results also showed that our strategy improves the overall visualization efficiency by incorporating intensity-based scheduling.
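The idea of scheduling visualization tasks by predicted intensity can be illustrated with a generic greedy load balancer. This is only a stand-in sketch, not the article's CUDA/GPU framework; the per-task intensities are assumed to come from the prediction step, and the function name is ours.

```python
import heapq

def schedule_by_intensity(task_intensities, n_workers):
    """Greedy longest-processing-time assignment: hand each visualization
    task (heaviest predicted intensity first) to the currently
    least-loaded worker, keeping worker loads balanced."""
    loads = [(0.0, w) for w in range(n_workers)]  # (current load, worker id)
    heapq.heapify(loads)
    assignment = [[] for _ in range(n_workers)]
    for i in sorted(range(len(task_intensities)),
                    key=lambda i: -task_intensities[i]):
        load, w = heapq.heappop(loads)          # least-loaded worker
        assignment[w].append(i)
        heapq.heappush(loads, (load + task_intensities[i], w))
    return assignment
```

With predicted intensities `[5, 3, 3, 1]` and two workers, the heaviest task and the lightest go to one worker and the two medium tasks to the other, balancing both loads at 6.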
44.
While the inversion of electromagnetic data to recover electrical conductivity has received much attention, the inversion of those data to recover magnetic susceptibility has not been fully studied. In this paper we invert frequency-domain electromagnetic (EM) data from a horizontal coplanar system to recover a 1-D distribution of magnetic susceptibility under the assumption that the electrical conductivity is known. The inversion is carried out by dividing the earth into layers of constant susceptibility and minimizing an objective function of the susceptibility subject to fitting the data. An adjoint Green's function solution is used in the calculation of sensitivities, and it is apparent that the sensitivity problem is driven by three sources. One of the sources is the scaled electric field in the layer of interest, and the other two, related to effective magnetic charges, are located at the upper and lower boundaries of the layer. These charges give rise to a frequency-independent term in the sensitivities. Because different frequencies penetrate to different depths in the earth, the EM data contain inherent information about the depth distribution of susceptibility. This contrasts with static field measurements, which can be reproduced by a surface layer of magnetization. We illustrate the effectiveness of the inversion algorithm on synthetic and field data and show also the importance of knowing the background conductivity. In practical circumstances, where there is no a priori information about conductivity distribution, a simultaneous inversion of EM data to recover both electrical conductivity and susceptibility will be required.
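The claim that different frequencies penetrate to different depths follows from the standard electromagnetic skin-depth formula δ = √(2 / (ω μ₀ σ)). A quick sketch, assuming a uniform non-magnetic half-space (this is textbook background, not a formula stated in the abstract):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum magnetic permeability, H/m

def skin_depth(frequency_hz, conductivity_s_per_m):
    """EM skin depth in metres: the depth at which a plane-wave field
    decays to 1/e of its surface value. Lower frequencies penetrate
    deeper, which is why multi-frequency EM soundings carry information
    about the depth distribution of earth properties."""
    omega = 2.0 * math.pi * frequency_hz
    return math.sqrt(2.0 / (omega * MU0 * conductivity_s_per_m))
```

For 100 Ω·m ground (σ = 0.01 S/m), a 1 kHz signal penetrates roughly 160 m, while a 100 Hz signal penetrates about three times deeper.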
45.

Deep seismic reflection profiling has long been recognized internationally as a pioneering technique for probing the Earth's deep interior. Using longer receiver arrays and larger source energies than petroleum seismic exploration, it images fine structure and tectonic features from the surface down to the upper mantle, and it is increasingly applied at home and abroad to the continental and oceanic crust and the lithospheric upper mantle. The technique has produced rich results in revealing deep geodynamic processes, describing tectonic evolution, determining basin-mountain coupling, inferring ore-forming and reservoir-forming conditions, and analyzing seismic hazards. This paper systematically reviews the state of deep seismic reflection profiling on land in China and abroad, together with typical applications. On this basis, future directions are discussed in four areas: field data acquisition, data processing, geological interpretation of profiles, and joint multi-method surveys. We suggest that, in field acquisition, combinations of source and receiver configurations should be developed to improve the quality of deep reflection data while reducing environmental damage and economic cost; in data processing, efforts should continue to improve the signal-to-noise ratio and resolution of deep reflection profiles and to quantitatively monitor relative changes in amplitude fidelity during processing; in interpretation, the latent pre-stack and post-stack information in deep reflection profiles should be mined more deeply to reduce the non-uniqueness of interpretations based on reflection amplitudes alone; and in integrated studies, complementary evidence from multiple disciplines, perspectives, scales, and methods should be combined to reduce the ambiguity of the profiles and improve the accuracy and reliability of the results.
46.
Spatial clustering is widely used in fields such as wireless sensor networks (WSN), web clustering, and remote sensing to discover groups and identify interesting distributions in the underlying database. By examining the relationship between the optimal clustering and the initial seeds, we propose a clustering validity index and a principle for selecting initial seeds, and on this principle we recommend an initial seed-seeking strategy: SSPG (Single-Shortest-Path Graph). With the SSPG strategy used in clustering algorithms, the clustering result is more likely to be optimal. Finally, based on combinatorial optimization theory, a method is proposed for obtaining an optimal reference value of the cluster number k, and it is shown to be efficient.
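The principle behind careful seed selection can be illustrated with a simple farthest-first traversal. To be clear, this is not the article's SSPG strategy (whose graph construction is not described in the abstract); it is only a minimal stand-in for the same idea, namely that initial seeds drawn from distinct regions make the clustering more likely to converge to the optimum.

```python
import math

def farthest_first_seeds(points, k):
    """Pick k initial seeds that are mutually far apart: start from the
    first point, then repeatedly add the point whose distance to its
    nearest already-chosen seed is largest."""
    seeds = [points[0]]
    while len(seeds) < k:
        seeds.append(max(points,
                         key=lambda p: min(math.dist(p, s) for s in seeds)))
    return seeds
```

For two well-separated groups of points, the two chosen seeds land in different groups, which is exactly the property a good initialization needs.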
47.
We study the latitudinal distribution of sunspots observed from 1874 to 2009 using the center-of-latitude (COL). We calculate COL by taking the area-weighted mean latitude of sunspots for each calendar month. We then form the latitudinal distribution of COL for the sunspots appearing in the northern and southern hemispheres separately, and in both hemispheres with unsigned and signed latitudes, respectively. We repeat the analysis with subsets which are divided based on the criterion of which hemisphere is dominant for a given solar cycle. Our primary findings are as follows: (1) COL is not monotonically decreasing with time in each cycle. Small humps (or short plateaus) can be seen around every solar maximum. (2) The distribution of COL resulting from each hemisphere is bimodal, and can be well represented by a double Gaussian function. (3) As far as the primary component of the double Gaussian function is concerned, for a given data subset, the distributions due to the sunspots appearing in the two different hemispheres are alike. Regardless of which hemisphere is magnetically dominant, the primary component of the double Gaussian function seems relatively unchanged. (4) When the northern (southern) hemisphere is dominant, the width of the secondary component of the double Gaussian function in the northern (southern) hemisphere case is about twice as wide as that in the southern (northern) hemisphere. (5) The distribution of the COL averaged with signed latitude, which is basically described by a single Gaussian function, is shifted to the positive (negative) side when the northern (southern) hemisphere is dominant. Finally, we conclude by briefly discussing the implications of these findings for the variations in solar activity.
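The two quantities the analysis rests on are easy to sketch: the area-weighted COL for one month, and the two-component Gaussian model fitted to the COL histogram. The function names and the sample numbers are ours; the fitting procedure itself (e.g. least squares) is not shown.

```python
import math

def center_of_latitude(latitudes, areas):
    """Area-weighted mean latitude of the sunspots seen in one month."""
    return sum(l * a for l, a in zip(latitudes, areas)) / sum(areas)

def double_gaussian(x, a1, mu1, s1, a2, mu2, s2):
    """Sum of two Gaussian components, the model used to describe the
    bimodal COL distribution (primary + secondary component)."""
    g = lambda a, mu, s: a * math.exp(-0.5 * ((x - mu) / s) ** 2)
    return g(a1, mu1, s1) + g(a2, mu2, s2)
```

For example, two spots at 10° and 20° latitude with areas 1 and 3 give a COL of 17.5°, pulled toward the larger spot.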
48.
This article introduces a new classification scheme—head/tail breaks—to find groupings or hierarchy for data with a heavy-tailed distribution. The heavy-tailed distributions are heavily right skewed, with a minority of large values in the head and a majority of small values in the tail, commonly characterized by a power law, a lognormal, or an exponential function. For example, a country's population is often distributed in such a heavy-tailed manner, with a minority of people (e.g., 20 percent) in the countryside and the vast majority (e.g., 80 percent) in urban areas. This new classification scheme partitions all of the data values around the mean into two parts and continues the process iteratively for the values (above the mean) in the head until the head part values are no longer heavy-tailed distributed. Thus, the number of classes and the class intervals are both naturally determined. I therefore claim that the new classification scheme is more natural than the natural breaks in finding the groupings or hierarchy for data with a heavy-tailed distribution. I demonstrate the advantages of the head/tail breaks method over Jenks's natural breaks in capturing the underlying hierarchy of the data.
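The iterative partitioning described above can be sketched in a few lines. One detail is an assumption on our part: the abstract says the iteration stops when the head is "no longer heavy-tailed distributed", which we operationalize here as the head exceeding 40 percent of the values; the exact stopping rule should be taken from the article itself.

```python
def head_tail_breaks(values, head_ratio_limit=0.4):
    """Head/tail breaks: split the values around their mean, keep the
    head (values above the mean), and repeat on the head until it is no
    longer a small minority. The collected means are the class breaks,
    so both the number of classes and the intervals emerge naturally."""
    breaks = []
    data = list(values)
    while len(data) > 1:
        mean = sum(data) / len(data)
        head = [v for v in data if v > mean]
        if not head or len(head) / len(data) > head_ratio_limit:
            break  # head no longer a clear minority: stop splitting
        breaks.append(mean)
        data = head
    return breaks
```

On a heavy-tailed sample with three magnitude levels (fifty 1s, ten 10s, two 100s) the method finds two breaks, separating the three natural classes.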
49.
Because every data set available in practice is incomplete and imperfect, the problem of deriving tidal fields from observations has infinitely many admissible solutions that fit the data within measurement errors, and hence can be treated as ill-posed. Therefore, interpolating the data always relies on some a priori assumptions concerning the tides, which provide a rule of sampling or, in other words, a regularization of the ill-posed problem. Data assimilation procedures used in large scale tide modeling are viewed in a common mathematical framework as such regularizations. It is shown that they all (basis functions expansion, parameter estimation, nudging, objective analysis, general inversion, and extended general inversion), including those (objective analysis and general inversion) originally formulated in stochastic terms, may be considered as utilizations of one of the three general methods suggested by the theory of ill-posed problems. The problem of grid refinement, critical for inverse methods and nudging, is discussed.
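The regularization viewpoint can be made concrete with the simplest textbook example from the theory of ill-posed problems: Tikhonov regularization of a linear inverse problem. This is a generic illustration, not any of the specific assimilation schemes the paper reviews.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Minimize ||Ax - b||^2 + lam * ||x||^2: the penalty term encodes an
    a priori preference (here, for small solutions) that selects one
    answer from the infinitely many that fit the data comparably well.
    Solves the normal equations (A^T A + lam I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```

As `lam` grows, the solution is pulled toward zero; as it shrinks, the solution approaches the unregularized least-squares fit, recovering the usual bias-variance trade-off of regularized inversion.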
50.
To manage and make effective use of the large volume of hydrochemical data that mines accumulate over many years, this paper introduces a data management method based on Microsoft Excel. The method facilitates the analysis of hydrochemical characteristics and supports quality standardization, and the data tables designed here can serve as templates for water-hazard prevention technicians at production units. Application examples illustrate the importance of hydrochemical data in mine water-hazard prevention work and confirm that traditional simple analysis methods remain common and effective.
Copyright©北京勤云科技发展有限公司  京ICP备09084417号