61.
Buildings and other human-made constructions are accepted as an indicator of human habitation and are identified as built-up area. Identifying the built-up area of a region and measuring it are key steps in many fields of study, such as urban planning, environmental studies, and population demography. Remote sensing techniques using medium-resolution images (e.g. LISS III, Landsat) are extensively used for extracting built-up area because high-resolution images are expensive and difficult to process. Extracting built land use from medium-resolution images is challenging in regions such as the Western Ghats, the North-East of India, and tropical countries, owing to the thick evergreen tree cover: when buildings are not clustered, the spectral signature of individual houses with a small footprint is easily overpowered by the overlapping tree canopy in a medium-resolution image. Kerala is a typical case of this scenario. The research presented here proposes a stochastic-dasymetric process to aid built-up area recognition, taking Kerala as a case study. The method derives a probability surface from a set of ancillary information: distance from road junctions, distance from the road network, population density, built-up space visible in the LISS III image, the population of the region, and the household size. The methodology employs logistic regression and Monte Carlo simulation in two sub-processes: the algorithm estimates the built-up area expected in the region and distributes it among pixels according to the probability estimated from the ancillary information. The output has two components. The first is an example scenario of the built-up area distribution. The second is a probability surface, in which the value of each pixel denotes the probability that the pixel contains a significant built-up area. The algorithm was validated for regions in Kerala and found to be significant: the model correctly predicted the built-up pixel count over a 900 m validation grid in 95.2% of cases. The algorithm is implemented using Python and ArcGIS.
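The two sub-processes described above, a logistic-regression probability surface followed by Monte Carlo allocation of the estimated built-up area, can be sketched as follows. The feature stack, weights, and grid size are hypothetical stand-ins for illustration, not the authors' fitted model:

```python
import numpy as np

rng = np.random.default_rng(42)

def builtup_probability(features, weights, bias):
    """Logistic-regression probability surface from a stack of
    ancillary rasters (shape: n_features x rows x cols)."""
    z = bias + np.tensordot(weights, features, axes=1)
    return 1.0 / (1.0 + np.exp(-z))

def allocate_builtup(prob, n_pixels):
    """Monte Carlo step: mark n_pixels cells as built-up, drawing
    cells with probability proportional to the surface."""
    flat = prob.ravel()
    chosen = rng.choice(flat.size, size=n_pixels, replace=False,
                        p=flat / flat.sum())
    out = np.zeros(flat.size, dtype=bool)
    out[chosen] = True
    return out.reshape(prob.shape)

# toy run: two hypothetical ancillary layers on a 4 x 4 grid
features = rng.random((2, 4, 4))
prob = builtup_probability(features, weights=np.array([1.5, -0.8]),
                           bias=-0.2)
scenario = allocate_builtup(prob, n_pixels=5)
```

Repeating the allocation step many times and averaging the boolean scenarios would recover a frequency surface of the kind the second output component describes.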
63.
Digital elevation models (DEMs) are difficult to obtain in mountainous regions, and Light Detection and Ranging (LiDAR) is an enabling technology for this task. However, LiDAR yields a huge number of points, and when processing a LiDAR point cloud this volume leads to a rapid decline in processing speed, so the point cloud must be thinned. In this paper, a new terrain sampling rule is built on an integrated terrain-complexity measure, and based on this rule a LiDAR point cloud simplification method, referred to as TCthin, is proposed. TCthin is evaluated in experiments against two comparative methods, XUthin and Lasthin: its degree of simplification is measured by the simplification rate, and its quality by the root mean square deviation. The experimental results show that TCthin thins LiDAR point clouds effectively and improves simplification quality, and that at the 5 m, 10 m, and 30 m scale levels it applies well to areas of differing terrain complexity. This study has theoretical and practical value for sampling theory, LiDAR point cloud thinning, and building high-precision DEMs.
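A minimal sketch of complexity-driven thinning in Python; the elevation-spread proxy and keep-fractions below are illustrative stand-ins for the paper's integrated terrain-complexity rule, not the TCthin algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def thin_by_complexity(points, cell=10.0, base_keep=0.1):
    """Grid the XY plane; each cell keeps a fraction of its points
    that grows with the cell's elevation spread (a crude stand-in for
    an integrated terrain-complexity measure)."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    keys = ij[:, 0] * 100000 + ij[:, 1]          # unique cell id
    kept = []
    for key in np.unique(keys):
        idx = np.where(keys == key)[0]
        spread = points[idx, 2].std()            # rougher -> keep more
        frac = min(1.0, base_keep + spread)
        n = max(1, int(round(frac * idx.size)))
        kept.append(rng.choice(idx, size=n, replace=False))
    return points[np.concatenate(kept)]

# synthetic cloud: mostly flat, with rough terrain where x < 20 m
pts = rng.random((2000, 3)) * [100.0, 100.0, 0.05]
rough = pts[:, 0] < 20.0
pts[rough, 2] += rng.random(rough.sum()) * 5.0
thinned = thin_by_complexity(pts)
```

The design intent matches the abstract: flat cells are heavily decimated while complex terrain retains most of its points, preserving DEM accuracy where it matters.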
64.
The characteristics of sea-level change in the China Sea and its vicinity are studied by combining TOPEX/Poseidon (T/P), Jason-1, Jason-2, and Jason-3 altimeter data. First, the sea-surface height is computed from monthly data via collinear adjustment, regional selection, and crossover adjustment. The sea-level anomaly (SLA) from October 1992 to July 2017 is calculated from the difference between the value derived by inverse-distance-weighted interpolation and the CNES_CLS15 model value at each normal point. By analysing the satellite data collected simultaneously in orbit, three mean biases over the China Sea and its vicinity are obtained: −11.76 cm between T/P and Jason-1, 9.6 cm between Jason-1 and Jason-2, and 2.42 cm between Jason-2 and Jason-3. The SLAs are corrected for these biases to establish a 25-year SLA series for the study area. Analysis of the series gives mean rates of sea-level rise for the Bohai Sea, Yellow Sea, East China Sea, and South China Sea of 4.87 mm/a, 2.68 mm/a, 2.88 mm/a, and 4.67 mm/a, respectively.
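The bias correction and trend estimation can be sketched as follows; the sign convention for applying the inter-mission biases and the synthetic monthly series are assumptions for illustration, and only the three bias values come from the abstract:

```python
import numpy as np

# Mean inter-mission biases reported for the study area (cm)
BIAS_TP_J1, BIAS_J1_J2, BIAS_J2_J3 = -11.76, 9.6, 2.42

# cumulative offset that aligns each mission to the T/P reference
OFFSETS = {"TP": 0.0,
           "J1": BIAS_TP_J1,
           "J2": BIAS_TP_J1 + BIAS_J1_J2,
           "J3": BIAS_TP_J1 + BIAS_J1_J2 + BIAS_J2_J3}

def stitch(segments):
    """Shift each mission's SLA segment by its cumulative offset and
    merge the segments in time order into one continuous series."""
    t = np.concatenate([times for times, _ in segments.values()])
    y = np.concatenate([np.asarray(sla) + OFFSETS[m]
                        for m, (_, sla) in segments.items()])
    order = np.argsort(t)
    return t[order], y[order]

def sea_level_rate(t_years, sla_cm):
    """Least-squares linear trend of the SLA series, in mm/a."""
    return np.polyfit(t_years, sla_cm, 1)[0] * 10.0

# synthetic 25-year monthly series rising at 4.87 mm/a, split across missions
t = 1993.0 + np.arange(300) / 12.0
true = 0.487 * (t - 1993.0)                      # cm
segments = {"TP": (t[:100], true[:100]),
            "J1": (t[100:180], true[100:180] - OFFSETS["J1"]),
            "J2": (t[180:250], true[180:250] - OFFSETS["J2"]),
            "J3": (t[250:], true[250:] - OFFSETS["J3"])}
tt, ss = stitch(segments)
print(round(sea_level_rate(tt, ss), 2))  # -> 4.87
```

Without the offset correction, the artificial jumps at each mission boundary would bias the fitted rate, which is why the stitching step precedes the trend analysis.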
65.
We evaluated the relationships among three Landsat Enhanced Thematic Mapper Plus (ETM+) datasets: top-of-atmosphere (TOA) reflectance, surface-reflectance climate data records (surface reflectance-CDR), and images atmospherically corrected with the Fast Line-of-Sight Atmospheric Analysis of Spectral Hypercubes model (surface reflectance-FLAASH). We also evaluated their link to pecan foliar chlorophyll content (chl-cont). Foliar chlorophyll content, as determined with a SPAD meter, and remotely sensed data were collected from two mature pecan orchards (one grown in a sandy loam and the other in a clay loam soil) during the experimental period. The enhanced vegetation index (EVI) derived from the remotely sensed data was correlated with chl-cont. At both orchards, TOA reflectance was significantly lower than surface reflectance within the 550–2400 nm wavelength range. Reflectance from the atmospherically corrected images (surface reflectance-CDR and surface reflectance-FLAASH) was similar in the shortwave infrared (SWIR: 1550–1750 and 2080–2350 nm) and statistically different in the visible (350–700 nm). EVI derived from surface reflectance-CDR and surface reflectance-FLAASH had a higher correlation with chl-cont than EVI derived from TOA reflectance. Accordingly, surface reflectance is an essential prerequisite for using Landsat ETM+ data, and TOA reflectance could lead to missing or underestimating chl-cont in pecan orchards. Interestingly, the correlation (Williams t-test) between surface reflectance-CDR and chl-cont was statistically similar to that between chl-cont and the commercial atmospheric-correction model. Overall, surface reflectance-CDR, which is freely available from the EarthExplorer portal, is a reliable atmospherically corrected Landsat ETM+ image source for studying foliar chlorophyll content in pecan orchards.
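The EVI used here is presumably the standard formulation, EVI = 2.5 (NIR − Red) / (NIR + 6 Red − 7.5 Blue + 1); a sketch with hypothetical reflectances and SPAD readings (none of the values below come from the study):

```python
import numpy as np

def evi(nir, red, blue):
    """Enhanced Vegetation Index, standard formulation, with
    reflectances scaled to [0, 1]."""
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

# hypothetical plot-level reflectances for twelve sampling dates
rng = np.random.default_rng(1)
nir = 0.35 + 0.10 * rng.random(12)
red = 0.05 + 0.02 * rng.random(12)
blue = 0.03 + 0.01 * rng.random(12)
e = evi(nir, red, blue)

# hypothetical SPAD readings loosely tracking EVI
spad = 30.0 + 40.0 * e + rng.normal(0.0, 0.5, 12)
r = np.corrcoef(e, spad)[0, 1]  # Pearson correlation with the chl-cont proxy
```

Computing this correlation separately for EVI derived from TOA, CDR, and FLAASH reflectance is the comparison the abstract reports.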
Point clouds, produced by theoretically and practically different techniques, are among the most widely used data types in engineering applications and projects. The principal methods for obtaining point cloud data in terrestrial studies are close-range photogrammetry (CRP) and terrestrial laser scanning (TLS). The TLS technique, which differs from CRP in system structure, can produce denser point clouds at regular intervals. However, point clouds can also be produced from photographs taken under appropriate conditions, depending on the available hardware and software: consumer-grade digital cameras yield photographs of adequate quality, and widely used photogrammetric software supports point cloud generation. The tendency towards, and desire for, TLS is stronger because it constitutes a new area of research, and it is sometimes believed that TLS will replace CRP, which is regarded as antiquated. In this study, conducted on rock surfaces at the Istanbul Technical University Ayazaga Campus over an area of approximately 30 m × 10 m, we investigate whether a point cloud produced from photographs can be used in place of one obtained with a laser scanner. To compare the methods, 2D (area-based) and 3D (volume-based) analyses as well as an accuracy assessment were conducted. The results show that the point clouds in both cases are similar to each other and can be used for comparable studies. However, because the factors affecting the accuracy of the basic data and the derived products are highly variable for both methods, we conclude that it is not appropriate to choose between them without regard to the object of interest and the working conditions.
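A common way to quantify agreement between two point clouds of the same surface is the nearest-neighbour (cloud-to-cloud) distance. The sketch below uses synthetic stand-ins for the TLS and CRP clouds; it illustrates the generic measure, not necessarily the 2D/3D analyses used in the study:

```python
import numpy as np

def cloud_to_cloud_rmse(reference, test):
    """RMSE of nearest-neighbour distances from each test point to the
    reference cloud: a simple 3D agreement measure between two clouds
    (brute-force search; fine for small synthetic clouds)."""
    d2 = ((test[:, None, :] - reference[None, :, :]) ** 2).sum(axis=2)
    return float(np.sqrt(d2.min(axis=1).mean()))

rng = np.random.default_rng(7)
# dense "TLS" cloud over a 30 m x 10 m face, and a sparser, noisier
# "CRP" cloud of the same surface (1 cm noise per axis)
tls = rng.random((2000, 3)) * [30.0, 10.0, 2.0]
crp = tls[::5] + rng.normal(0.0, 0.01, (400, 3))
rmse = cloud_to_cloud_rmse(tls, crp)
```

For real clouds of millions of points, a spatial index (e.g. a k-d tree) would replace the brute-force distance matrix.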
69.

Background

The credibility and effectiveness of country climate targets under the Paris Agreement require that, in all greenhouse gas (GHG) sectors, the accounted mitigation outcomes reflect genuine deviations from the type and magnitude of the activities generating emissions in the base year or baseline. This is challenging for the forestry sector, as future net emissions can change irrespective of actual management activities because of age-related stand dynamics resulting from past management and natural disturbances. The solution implemented under the Kyoto Protocol (2013–2020) was to account for mitigation as a deviation from a projected (forward-looking) “forest reference level”, which considered the age-related dynamics but also allowed including the assumed future implementation of approved policies. This caused controversies, as unverifiable counterfactual scenarios with inflated future harvest could lead to credits where no change in management had actually occurred or, conversely, fail to reflect in the accounts a policy-driven increase in net emissions. Instead, here we describe an approach that sets reference levels based on the projected continuation of documented historical forest management practice, i.e. reflecting age-related dynamics but not the future impact of policies. We illustrate a possible method for implementing this approach at the level of the European Union (EU) using the Carbon Budget Model.

Results

Using EU country data, we show that forest sinks between 2013 and 2016 were greater than those assumed in the 2013–2020 EU reference level under the Kyoto Protocol, which would lead to credits of 110–120 Mt CO2/year (capped at 70–80 Mt CO2/year, equivalent to 1.3% of 1990 EU total emissions). By modelling the continuation of management practice documented historically (2000–2009), we show that these credits are mostly due to the inclusion in the reference levels of policy-assumed harvest increases that never materialized. Under our proposed approach, harvest is expected to increase (by 12% in 2030 at the EU level, relative to 2000–2009), but more slowly than in the current forest reference levels, and only because of age-related dynamics, i.e. increased growing stocks in maturing forests.
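The accounting rule described, crediting the capped positive deviation of the reported sink from the reference level, can be sketched as follows; the actual-sink and reference-level figures in the example are hypothetical, chosen only so that the uncapped deviation falls in the reported 110–120 Mt CO2/yr range with a cap in the 70–80 Mt range:

```python
def accounting_credit(actual_sink, reference_sink, cap):
    """Credit (positive) or debit (negative) in Mt CO2/yr: the
    deviation of the reported forest sink from the reference level,
    with positive deviations capped, as under the Kyoto Protocol rules."""
    deviation = actual_sink - reference_sink
    return min(deviation, cap) if deviation > 0 else deviation

# hypothetical figures: uncapped deviation of 115 Mt CO2/yr, 75 Mt cap
print(accounting_credit(actual_sink=480.0, reference_sink=365.0, cap=75.0))  # -> 75.0
```

The argument of the paper is about how `reference_sink` is set: projecting the continuation of documented historical practice rather than a policy-assumed scenario removes the possibility of credits arising purely from inflated harvest assumptions.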

Conclusions

Our science-based approach, compatible with the EU post-2020 climate legislation, helps to ensure that only genuine deviations from the continuation of historically documented forest management practices are accounted toward climate targets, therefore enhancing the consistency and comparability across GHG sectors. It provides flexibility for countries to increase harvest in future reference levels when justified by age-related dynamics. It offers a policy-neutral solution to the polarized debate on forest accounting (especially on bioenergy) and supports the credibility of forest sector mitigation under the Paris Agreement.
70.
A proper understanding of how the Earth's mass distributions and redistributions influence gravity-field-related functionals is crucial for numerous applications in geodesy, geophysics, and related geosciences. Calculation of the gravitational curvatures (GC) has been proposed in geodesy in recent years, and in view of future satellite missions, expansions of the gradients up to sixth order are becoming necessary. In this paper, a set of 3D integral GC formulas for a tesseroid mass body is provided via spherical integral kernels in the spatial domain. Based on the Taylor series expansion approach, numerical expressions of the 3D GC formulas are provided up to sixth order, and numerical experiments demonstrate the correctness of the 3D Taylor series approach for the GC formulas up to that order. Analogous to other gravitational effects (e.g., the gravitational potential, gravity vector, and gravity gradient tensor), it is found numerically that the very-near-area and polar-singularity problems arise in the GC east–east–radial, north–north–radial, and radial–radial–radial components in the spatial domain, and that the relative approximation errors of the GC components are larger than those of the other gravitational effects, owing to the influence of both the geocentric distance and the latitude. This study shows that the magnitudes of the terms of the nonzero GC functionals on a 15′ × 15′ grid at GOCE satellite height reach about 10⁻¹⁶ m⁻¹ s² at zero order, 10⁻²⁴ to 10⁻²³ m⁻¹ s² at second order, 10⁻²⁹ m⁻¹ s² at fourth order, and 10⁻³⁵ to 10⁻³⁴ m⁻¹ s² at sixth order.
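The Taylor series evaluation referred to above can be written schematically as below. This is the common tesseroid expansion about the geometric centre (shown here truncated at second order for brevity; the structure of the sixth-order case is analogous), given as a sketch rather than the paper's exact formulas:

```latex
F(P) \;\approx\; G\rho\,\Delta r\,\Delta\varphi\,\Delta\lambda
\left[ K_{000}
     + \frac{1}{24}\left( K_{200}\,\Delta r^{2}
                        + K_{020}\,\Delta\varphi^{2}
                        + K_{002}\,\Delta\lambda^{2} \right)
     + \mathcal{O}(\Delta^{4}) \right]
```

where $K_{ijk}$ denotes the $(i,j,k)$-th partial derivative of the integral kernel with respect to $(r', \varphi', \lambda')$, evaluated at the tesseroid's geometric centre. Odd-order terms vanish by symmetry of the integration limits, so a "sixth-order" expansion retains the even terms through $\Delta^{6}$.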