A total of 10,000 results were returned (search time: 15 ms).
771.
Currently, methods for extracting spatial information from satellite images are based mainly on visual interpretation by human operators, which is both costly and time consuming. The large volume of data collected by satellite sensors, together with the significant improvement in the spatial and spectral resolution of these images, calls for new methods that make optimal use of these data for the rapid and economical production and updating of road maps. In this study, a new automatic method is proposed for road extraction by integrating the SVM and level set methods. The class probabilities estimated by the SVM are used as input to the level set method. The average completeness, correctness, and quality were 84.19%, 88.69%, and 76.06%, respectively, indicating the high performance of the proposed method for road extraction from Google Earth images.
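As a rough illustration of the pipeline sketched in this abstract, the Python fragment below computes a per-pixel road-probability map with a probabilistic SVM and hands it to a level-set style segmentation. The array layout, the labelled-pixel indices and the use of scikit-image's morphological Chan–Vese routine are assumptions standing in for the paper's own level-set formulation, not a reproduction of it.

```python
# Rough sketch (assumed shapes and labels): per-pixel SVM road probabilities
# feeding a level-set style segmentation. scikit-image's morphological
# Chan-Vese is used here as a stand-in for the paper's level-set formulation.
import numpy as np
from sklearn.svm import SVC
from skimage.segmentation import morphological_chan_vese

def road_probability_map(features, train_idx, train_labels):
    """features: (rows, cols, bands) image; train_labels: 1 = road, 0 = background."""
    rows, cols, bands = features.shape
    X = features.reshape(-1, bands)
    svm = SVC(kernel="rbf", probability=True)     # probabilistic SVM
    svm.fit(X[train_idx], train_labels)           # fit on the labelled pixels only
    prob = svm.predict_proba(X)[:, 1]             # P(road) for every pixel
    return prob.reshape(rows, cols)

def extract_roads(features, train_idx, train_labels, n_iter=100):
    prob = road_probability_map(features, train_idx, train_labels)
    init = (prob > 0.5).astype(np.int8)           # initial contour from the probability map
    # Evolve a level set on the probability map itself (stand-in for the paper's scheme).
    return morphological_chan_vese(prob, n_iter, init_level_set=init, smoothing=2)
```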
772.
773.
To reduce the risk of poor efficiency and weak anti-error capability when encoding and transmitting hyperspectral images, we present a distributed source coding scheme for hyperspectral images based on three-dimensional (3D) set partitioning in hierarchical trees (SPIHT). First, the 3D wavelet transform is performed on the hyperspectral image. The low-frequency part is then treated as the Key frame and the high-frequency part as the Wyner–Ziv frame, and each is SPIHT-coded independently and transmitted over a different channel. The Wyner–Ziv encoder applies Turbo channel coding to the high-frequency information, which reflects the details of the image, to give it better anti-error capacity, while the low-frequency information carries the main energy of the image. SPIHT coding is used to obtain a bit stream with quality scalability. Results show that the proposed scheme is more efficient during coding while providing improved anti-error capability and quality scalability of the bit stream.
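A minimal sketch of the subband split described above, assuming PyWavelets for the one-level 3D wavelet transform and an arbitrary cube size; the SPIHT and Turbo-coding stages are only indicated in comments, since they are not available as standard library calls.

```python
# Minimal sketch (assumed cube shape, PyWavelets for the 3D DWT): split a
# hyperspectral cube into a low-frequency "Key" part and high-frequency
# "Wyner-Ziv" parts. SPIHT and Turbo coding are indicated only in comments,
# as they are not provided by standard Python libraries.
import numpy as np
import pywt

def split_key_wyner_ziv(cube):
    """cube: (bands, rows, cols) hyperspectral image."""
    coeffs = pywt.dwtn(cube, wavelet="bior4.4")   # one-level 3D wavelet transform
    key = coeffs["aaa"]                           # approximation subband -> Key frame
    wyner_ziv = {k: v for k, v in coeffs.items() if k != "aaa"}   # detail subbands
    # Key frame: SPIHT-encode and transmit over the conventional channel.
    # Wyner-Ziv frames: SPIHT-encode, then Turbo-encode and send only parity bits;
    # the decoder uses the reconstructed Key frame as side information.
    return key, wyner_ziv

cube = np.random.rand(32, 64, 64).astype(np.float32)   # toy stand-in for real data
key, wz = split_key_wyner_ziv(cube)
print(key.shape, sorted(wz))
```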
774.
Point clouds, produced by techniques that differ both theoretically and practically, are among the most widely used data types in various engineering applications and projects. The principal methods for obtaining point cloud data in terrestrial studies are close range photogrammetry (CRP) and terrestrial laser scanning (TLS). TLS, which differs from CRP in its system structure, can produce denser point clouds at regular intervals. However, point clouds can also be produced from photographs taken under suitable conditions, depending on the available hardware and software, and photographs of adequate quality can be obtained with consumer-grade digital cameras; widely used photogrammetric software now supports point cloud generation. Interest in TLS is nevertheless higher, since it constitutes a newer area of research, and it is often assumed that TLS supersedes CRP, which is regarded as outdated. In this study, conducted on rock surfaces at the Istanbul Technical University Ayazaga Campus, we investigate whether a point cloud produced from photographs can be used instead of one obtained with a laser scanner. The study area covers approximately 30 m × 10 m. To compare the methods, 2D and 3D analyses as well as an accuracy assessment were carried out; the 2D analysis is area-based, whereas the 3D analysis is volume-based. The results showed that the point clouds from the two methods are similar to each other and can be used in similar studies. Also, because the factors affecting the accuracy of the raw data and the derived products are highly variable for both methods, it was concluded that neither method should be chosen without regard to the object of interest and the working conditions.
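One way to quantify how close the two clouds are is a nearest-neighbour cloud-to-cloud comparison; the sketch below, using SciPy's k-d tree, is an illustrative stand-in for the paper's 2D/3D analyses and accuracy assessment, with assumed N × 3 coordinate arrays.

```python
# Illustrative cloud-to-cloud check (not the paper's exact 2D/3D analysis):
# nearest-neighbour distances from the photogrammetric cloud to the TLS cloud.
# Inputs are assumed to be registered N x 3 arrays in metres.
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_stats(photo_xyz, tls_xyz):
    tree = cKDTree(tls_xyz)                 # index the reference (TLS) cloud
    d, _ = tree.query(photo_xyz, k=1)       # closest TLS point for each photo point
    return {"rms_m": float(np.sqrt(np.mean(d ** 2))),
            "mean_m": float(d.mean()),
            "p95_m": float(np.percentile(d, 95))}

# Synthetic example standing in for the two registered clouds:
tls = np.random.rand(10000, 3) * [30.0, 10.0, 5.0]
photo = tls + np.random.normal(scale=0.01, size=tls.shape)   # ~1 cm noise
print(cloud_to_cloud_stats(photo, tls))
```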
775.
Object-based image analysis (OBIA) has attained great importance for the delineation of landscape features, particularly with the availability of high-spatial-resolution satellite images acquired by recent sensors. Statistical parametric classifiers have become less effective, mainly because of their assumption of normal distribution, the vast increase in data dimensionality, and the limited availability of ground sample data. Unlike pixel-based approaches, OBIA takes the semantic information of extracted image objects into consideration and thus provides a more comprehensive image analysis. In this study, the Indian Pines hyperspectral data set, recorded by the AVIRIS hyperspectral sensor, was used to analyse the effects of high-dimensional data with limited ground reference data. To avoid the curse of dimensionality, principal component analysis (PCA) and feature selection based on the Jeffries–Matusita (JM) distance were applied. The first 19 principal components, representing 98.5% of the image, were selected using the PCA technique, whilst 30 spectral bands of the image were selected using the JM distance. Nearest neighbour (NN) and random forest (RF) classifiers were employed to compare the performance of pixel- and object-based classification using conventional accuracy metrics. The object-based approach outperformed the traditional pixel-based approach in all cases (by up to 18%), and the RF classifier produced significantly more accurate results (by up to 10%) than the NN classifier.
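The pixel-based side of such a workflow can be sketched with scikit-learn, as below: PCA to 19 components, a per-band Jeffries–Matusita score between two classes under a Gaussian assumption, and RF versus k-NN classification. The variable names, the train/test split and the single-band JM formula are illustrative assumptions, not the study's exact configuration.

```python
# Hedged sketch of the pixel-based part of such a workflow (assumed variable
# names and split): PCA to 19 components, a single-band Jeffries-Matusita (JM)
# score between two classes under a Gaussian assumption, and RF vs. k-NN.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

def jm_distance_1d(x1, x2):
    """JM distance between two classes for one band (Gaussian assumption), range [0, 2]."""
    m1, m2, v1, v2 = x1.mean(), x2.mean(), x1.var(), x2.var()
    b = 0.25 * (m1 - m2) ** 2 / (v1 + v2) \
        + 0.5 * np.log(0.5 * (v1 + v2) / np.sqrt(v1 * v2))   # Bhattacharyya distance
    return 2.0 * (1.0 - np.exp(-b))

# Band selection idea: score every band between two classes and keep the 30 best, e.g.
# scores = [jm_distance_1d(X[y == 0, b], X[y == 1, b]) for b in range(X.shape[1])]

def compare_classifiers(X, y, n_components=19):
    Xp = PCA(n_components=n_components).fit_transform(X)    # dimensionality reduction
    Xtr, Xte, ytr, yte = train_test_split(Xp, y, test_size=0.7, stratify=y)  # scarce training data
    for clf in (RandomForestClassifier(n_estimators=200),
                KNeighborsClassifier(n_neighbors=3)):
        print(type(clf).__name__, clf.fit(Xtr, ytr).score(Xte, yte))
```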
776.

Background

The credibility and effectiveness of country climate targets under the Paris Agreement requires that, in all greenhouse gas (GHG) sectors, the accounted mitigation outcomes reflect genuine deviations from the type and magnitude of activities generating emissions in the base year or baseline. This is challenging for the forestry sector, as future net emissions can change irrespective of actual management activities, because of age-related stand dynamics resulting from past management and natural disturbances. The solution implemented under the Kyoto Protocol (2013–2020) was to account for mitigation as a deviation from a projected (forward-looking) “forest reference level”, which considered the age-related dynamics but also allowed including the assumed future implementation of approved policies. This caused controversies, as unverifiable counterfactual scenarios with inflated future harvest could lead to credits where no change in management has actually occurred or, conversely, fail to reflect in the accounts a policy-driven increase in net emissions. Instead, here we describe an approach to set reference levels based on the projected continuation of documented historical forest management practice, i.e. reflecting age-related dynamics but not the future impact of policies. We illustrate a possible method to implement this approach at the level of the European Union (EU) using the Carbon Budget Model.

Results

Using EU country data, we show that forest sinks between 2013 and 2016 were greater than those assumed in the 2013–2020 EU reference level under the Kyoto Protocol, which would lead to credits of 110–120 Mt CO2/year (capped at 70–80 Mt CO2/year, equivalent to 1.3% of 1990 EU total emissions). By modelling the continuation of management practice documented historically (2000–2009), we show that these credits are mostly due to the inclusion in the reference levels of policy-assumed harvest increases that never materialized. With our proposed approach, harvest is expected to increase (12% in 2030 at EU level, relative to 2000–2009), but more slowly than in current forest reference levels, and only because of age-related dynamics, i.e. increased growing stocks in maturing forests.

Conclusions

Our science-based approach, compatible with the EU post-2020 climate legislation, helps to ensure that only genuine deviations from the continuation of historically documented forest management practices are accounted toward climate targets, therefore enhancing the consistency and comparability across GHG sectors. It provides flexibility for countries to increase harvest in future reference levels when justified by age-related dynamics. It offers a policy-neutral solution to the polarized debate on forest accounting (especially on bioenergy) and supports the credibility of forest sector mitigation under the Paris Agreement.
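A toy calculation of the accounting logic quoted in the Results (credit = reference-level net emissions minus reported net emissions, then capped); the figures are the mid-points of the ranges given in this abstract, and the 1990 total is only back-calculated from the quoted 1.3% share.

```python
# Toy calculation of the accounting logic quoted in the Results; the numbers
# are mid-points of the ranges in this abstract, and the 1990 total is merely
# back-calculated from the quoted 1.3% share (all values in Mt CO2/yr).
credits = 115.0                          # mid-range of the 110-120 quoted above
cap = 75.0                               # mid-range of the 70-80 cap quoted above
accounted = min(credits, cap)            # only the capped amount counts toward the target
implied_1990_total = accounted / 0.013   # "1.3% of 1990 EU total emissions" -> roughly 5,800 Mt
print(accounted, round(implied_1990_total))
```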
777.

Background

Urban trees have long been valued for providing ecosystem services (mitigation of the “heat island” effect, suppression of air pollution, etc.); more recently, the potential of urban forests to store significant above ground biomass (AGB) has also been recognised. However, urban areas pose particular challenges when assessing AGB owing to the plasticity of tree form, high species diversity, and heterogeneous and complex land cover. Remote sensing, in particular light detection and ranging (LiDAR), provides a unique opportunity to assess urban AGB by directly measuring tree structure. In this study, terrestrial LiDAR measurements were used to derive new allometry for the London Borough of Camden that incorporates the wide range of tree structures typical of an urban setting. Using a wall-to-wall airborne LiDAR dataset, individual trees were then identified across the Borough with a new individual tree detection (ITD) method. The new allometry was subsequently applied to the identified trees, generating a Borough-wide estimate of AGB.

Results

Camden has an estimated median AGB density of 51.6 Mg ha−1, where the maximum AGB density is found in pockets of woodland; terrestrial LiDAR-derived AGB estimates suggest these areas are comparable to temperate and tropical forests. Multiple linear regression of terrestrial LiDAR-derived maximum height and projected crown area explained 93% of the variance in tree volume, highlighting the utility of these metrics for characterising diverse tree structure. Locally derived allometry provided accurate estimates of tree volume, whereas a Borough-wide allometry tended to overestimate AGB in woodland areas. The new ITD method successfully identified individual trees; however, AGB was underestimated by ≤ 25% when compared with terrestrial LiDAR, owing to the inability of ITD to resolve crown overlap. A Monte Carlo uncertainty analysis identified the assignment of wood density values as the largest source of uncertainty when estimating AGB.

Conclusion

Over the coming century global populations are predicted to become increasingly urbanised, leading to an unprecedented expansion of urban land cover. Urban areas will become more important as carbon sinks, and effective tools to assess carbon densities in these areas are therefore required. Using multi-scale LiDAR presents an opportunity to achieve this, providing a spatially explicit map of urban forest structure and AGB.
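The allometric relation described in the Results (tree volume explained by maximum height and projected crown area) can be sketched as a log–log multiple linear regression; the function below is a generic illustration with assumed per-tree arrays, not the study's fitted model.

```python
# Generic illustration (assumed per-tree arrays, not the study's fitted model):
# a power-law allometry V = a * H^b * CA^c estimated by ordinary least squares
# in log space, as suggested by the height/crown-area regression above.
import numpy as np

def fit_allometry(height_m, crown_area_m2, volume_m3):
    X = np.column_stack([np.ones_like(height_m),
                         np.log(height_m),
                         np.log(crown_area_m2)])
    y = np.log(volume_m3)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # ln V = ln a + b ln H + c ln CA
    r2 = 1.0 - (y - X @ coef).var() / y.var()      # variance explained in log space
    return coef, r2

def predict_volume(coef, height_m, crown_area_m2):
    ln_a, b, c = coef
    return np.exp(ln_a) * height_m ** b * crown_area_m2 ** c
```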
778.
The paper presents a numerical solution of the oblique derivative boundary value problem on and above the Earth’s topography using the finite volume method (FVM). It introduces a novel method for constructing non-uniform hexahedral 3D grids above the Earth’s surface, based on the evolution by mean curvature of a surface that approximates the Earth’s topography. To obtain optimal shapes of the non-uniform 3D grid, the proposed evolution is accompanied by a tangential redistribution of grid nodes. Afterwards, the Laplace equation is discretized using an FVM developed for such a non-uniform grid. The oblique derivative boundary condition is treated as a stationary advection equation, and we derive a new upwind-type discretization suitable for non-uniform 3D grids. The discretization of the Laplace equation, together with that of the oblique derivative boundary condition, leads to a linear system of equations whose solution gives the disturbing potential in the whole computational domain, including the Earth’s surface. Numerical experiments demonstrate the properties and efficiency of the developed FVM approach. The first experiments study the experimental order of convergence of the method. Then, a reconstruction of the harmonic function on the Earth’s topography, generated from the EGM2008 or EIGEN-6C4 global geopotential model, is presented; the obtained FVM solutions show that refining the computational grid leads to more precise results. The last experiment deals with local gravity field modelling in Slovakia using terrestrial gravity data, where a GNSS-levelling test shows the accuracy of the obtained local quasigeoid model.
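As a much-simplified stand-in for the solver pipeline described above, the sketch below assembles a finite-volume discretisation of the Laplace equation on a toy uniform 2D grid with Dirichlet data and solves the resulting sparse system; the paper's non-uniform hexahedral 3D grids and the upwind treatment of the oblique derivative condition are not reproduced here.

```python
# Much-simplified stand-in for the solver pipeline above: a finite-volume /
# 5-point discretisation of the Laplace equation on a toy uniform 2D grid with
# Dirichlet data, solved as one sparse linear system. The paper's non-uniform
# hexahedral 3D grid and the upwind oblique-derivative treatment are not shown.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def solve_laplace(n, boundary):
    """n interior cells per axis on the unit square; boundary(x, y) gives Dirichlet values."""
    h = 1.0 / (n + 1)
    main = 4.0 * np.ones(n * n)
    side = -np.ones(n * n - 1)
    side[np.arange(1, n * n) % n == 0] = 0.0            # no coupling across grid rows
    updown = -np.ones(n * n - n)
    A = sp.diags([main, side, side, updown, updown], [0, 1, -1, n, -n], format="csr")
    b = np.zeros(n * n)
    xs = np.arange(1, n + 1) * h                        # interior node coordinates
    for j, yj in enumerate(xs):
        for i, xi in enumerate(xs):
            k = j * n + i                               # move Dirichlet values to the RHS
            if i == 0:     b[k] += boundary(0.0, yj)
            if i == n - 1: b[k] += boundary(1.0, yj)
            if j == 0:     b[k] += boundary(xi, 0.0)
            if j == n - 1: b[k] += boundary(xi, 1.0)
    return spsolve(A, b).reshape(n, n)

n = 50
u = solve_laplace(n, lambda x, y: x * y)                # x*y is harmonic, so it is reproduced
xs = np.arange(1, n + 1) / (n + 1)
print(np.abs(u - np.outer(xs, xs)).max())               # ~1e-13
```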
779.
The GOCE gravity gradiometer measured highly accurate gravity gradients along the orbit during GOCE’s mission lifetime from March 17, 2009, to November 11, 2013. These measurements contain unique information on the gravity field at a spatial resolution of 80 km half wavelength, which is not provided to the same accuracy level by any other satellite mission, now or in the foreseeable future. Unfortunately, the gravity gradient in the cross-track direction is heavily perturbed in the regions around the geomagnetic poles. We show in this paper that the perturbing effect can be modeled accurately as a quadratic function of the non-gravitational acceleration of the satellite in the cross-track direction. Most importantly, we can remove the perturbation from the cross-track gravity gradient to a great extent, which significantly improves its accuracy and offers opportunities for better scientific exploitation of the GOCE gravity gradient data set.
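The correction idea can be illustrated in a few lines: fit the cross-track gradient residual with respect to some reference gradient as a quadratic function of the cross-track non-gravitational acceleration, then subtract the fitted perturbation. The input series and names below are assumptions, not GOCE product identifiers.

```python
# Illustrative sketch of the correction described above (names are assumptions,
# not GOCE product identifiers): fit the cross-track gradient residual with
# respect to a reference gradient as a quadratic in the cross-track
# non-gravitational acceleration, then subtract the fitted perturbation.
import numpy as np

def correct_cross_track_gradient(vyy_measured, vyy_reference, a_y):
    delta = vyy_measured - vyy_reference      # perturbation relative to the reference model
    coef = np.polyfit(a_y, delta, 2)          # delta ~ c2*a_y**2 + c1*a_y + c0
    perturbation = np.polyval(coef, a_y)
    return vyy_measured - perturbation, coef  # corrected gradient and model coefficients
```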
780.
Autonomous orbit determination is the ability of navigation satellites to estimate their orbit parameters on board using inter-satellite link (ISL) measurements. This study focuses on the processing of ISL measurements as a new measurement type and on their application, for the first time, to the centralized autonomous orbit determination of the new-generation Beidou navigation satellite system satellites. The ISL measurements are dual one-way measurements that follow a time division multiple access (TDMA) structure, with a ranging error of less than 0.25 ns. This paper proposes an approach for deriving the satellite clock offsets and the geometric distances from the TDMA dual one-way measurements without loss of accuracy. The derived clock offsets are used for time synchronization, and the derived geometric distances are used for autonomous orbit determination. The clock offsets from the ISL measurements are consistent with the L-band two-way satellite time and frequency transfer clock measurements, and the detrended residuals vary within 0.5 ns. The centralized autonomous orbit determination is conducted in batch mode on a ground-capable server for this feasibility study. Constant hardware delays are present in the geometric distances and become the largest error source in the autonomous orbit determination; they are therefore estimated simultaneously with the satellite orbits. To avoid uncertainties in the constellation orientation, a ground anchor station that “observes” the satellites with on-board ISL payloads is introduced into the orbit determination. The root-mean-square values of the orbit determination residuals are within 10.0 cm, and the standard deviation of the estimated ISL hardware delays is within 0.2 ns. The accuracy of the autonomous orbits is evaluated through overlap comparisons and satellite laser ranging (SLR) residuals and is compared with the accuracy of the L-band orbits. The results indicate that the radial overlap differences between the autonomous orbits are less than 15.0 cm for the inclined geosynchronous orbit (IGSO) satellites and less than 10.0 cm for the MEO satellites. The SLR residuals are approximately 15.0 cm for the IGSO satellites and approximately 10.0 cm for the MEO satellites, representing an improvement over the L-band orbits.
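The dual one-way combination underlying the derivation described above can be written down directly: the half-sum of the two one-way pseudoranges isolates the geometric distance (plus hardware delays), and the half-difference isolates the clock offset. The sketch assumes both measurements have already been reduced to a common epoch; the TDMA bookkeeping and the hardware-delay estimation are not shown.

```python
# Minimal sketch of the dual one-way combination, assuming both pseudoranges
# (in metres) have already been reduced to a common epoch; the TDMA bookkeeping
# and the hardware-delay estimation are not shown here.
C = 299_792_458.0   # speed of light, m/s

def combine_dual_one_way(rho_ab, rho_ba):
    """rho_ab: range measured at B from A's signal; rho_ba: measured at A from B's signal."""
    geometry = 0.5 * (rho_ab + rho_ba)            # geometric distance plus hardware delays
    clock_offset_s = 0.5 * (rho_ab - rho_ba) / C  # clock(B) - clock(A), biased by delay asymmetry
    return geometry, clock_offset_s
```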