202.
To improve coding efficiency and error resilience when encoding and transmitting hyperspectral images, we present a distributed source coding scheme for hyperspectral images based on three-dimensional (3D) set partitioning in hierarchical trees (SPIHT). First, the 3D wavelet transform is performed on the hyperspectral image. The low-frequency subband is then treated as the Key frame and the high-frequency subbands as the Wyner–Ziv frame, so that each can be SPIHT-coded independently and transmitted over different channels. The Wyner–Ziv encoder applies Turbo channel coding to the high-frequency information, which captures image detail, giving it stronger error resilience, while the low-frequency information carries most of the image energy. SPIHT coding yields a bit stream with quality scalability. Results show that the proposed scheme is more efficient during coding while providing improved error resilience and a quality-scalable bit stream.
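The Key/Wyner–Ziv split described above starts from a 3D wavelet transform. As a minimal illustration (not the paper's codec), the sketch below applies one level of a separable 3D Haar transform to a toy cube with numpy; the subband naming and the cube size are assumptions made for the example.

```python
import numpy as np

def haar_1d(a, axis):
    """One level of the orthonormal Haar transform along one axis: (low, high)."""
    a = np.moveaxis(a, axis, 0)
    low = (a[0::2] + a[1::2]) / np.sqrt(2.0)
    high = (a[0::2] - a[1::2]) / np.sqrt(2.0)
    return np.moveaxis(low, 0, axis), np.moveaxis(high, 0, axis)

def haar_3d_level(cube):
    """One level of a separable 3D Haar transform on a (bands, rows, cols) cube.

    Returns a dict of the 8 subbands keyed 'lll' ... 'hhh', where 'lll' is the
    low-frequency approximation (the "Key frame" in the scheme) and the seven
    remaining subbands carry the high-frequency detail ("Wyner-Ziv frame").
    """
    subbands = {'': cube}
    for axis in range(3):
        nxt = {}
        for key, data in subbands.items():
            low, high = haar_1d(data, axis)
            nxt[key + 'l'] = low
            nxt[key + 'h'] = high
        subbands = nxt
    return subbands

cube = np.random.rand(8, 16, 16)   # toy stand-in for a hyperspectral cube
sb = haar_3d_level(cube)
```

Because the Haar transform is orthonormal, the energy of the cube is split exactly between the approximation subband and the seven detail subbands, which is what motivates sending them over separate channels.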
203.
Point clouds, produced by techniques that differ both theoretically and practically, are among the most widely used data types in engineering applications and projects. The principal methods for obtaining point cloud data in terrestrial studies are close-range photogrammetry (CRP) and terrestrial laser scanning (TLS). The TLS technique, which differs from CRP in its system structure, can produce denser point clouds at regular intervals. However, point clouds can also be generated from photographs taken under appropriate conditions, depending on the available hardware and software: consumer-grade digital cameras can deliver photographs of adequate quality, and widely used photogrammetric software now supports point cloud generation. Interest in TLS is high because it constitutes a new area of research, and it is sometimes believed that TLS will replace CRP, which is regarded as outdated. In this study, conducted on rock surfaces at the Istanbul Technical University Ayazaga Campus, we investigate whether point clouds produced from photographs can be used instead of point clouds obtained with a laser scanner. The study area covers approximately 30 m × 10 m. To compare the methods, 2D (areal-based) and 3D (volume-based) analyses as well as an accuracy assessment were conducted. The results showed that the point clouds from the two methods are similar to each other and can be used in comparable studies. However, because the factors affecting the accuracy of the basic data and the derived products are highly variable for both methods, we conclude that neither method should be chosen without regard to the object of interest and the working conditions.
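Comparing a CRP-derived cloud against a TLS cloud amounts to measuring cloud-to-cloud distances. A minimal sketch on synthetic stand-in clouds (not the study's data); a real workflow would use a k-d tree rather than this brute-force search:

```python
import numpy as np

def cloud_to_cloud_distance(reference, test):
    """Nearest-neighbour distance from each test point to the reference cloud
    (brute force; large clouds would use a spatial index such as a k-d tree)."""
    diff = test[:, None, :] - reference[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2)).min(axis=1)

rng = np.random.default_rng(0)
tls = rng.uniform(0, 10, size=(500, 3))           # stand-in for the TLS cloud (m)
crp = tls + rng.normal(0, 0.01, size=tls.shape)   # stand-in CRP cloud: 1 cm noise
d = cloud_to_cloud_distance(tls, crp)             # per-point distances in metres
```

Summary statistics of `d` (mean, maximum, percentiles) are what an areal- or volume-based comparison would aggregate over the surface.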
204.
Object-based image analysis (OBIA) has become important for delineating landscape features, particularly with the availability of high-spatial-resolution satellite images from recent sensors. Statistical parametric classifiers have become ineffective, mainly because of their assumption of normal distribution, the vast increase in data dimensionality and the limited availability of ground sample data. Unlike pixel-based approaches, OBIA takes the semantic information of extracted image objects into consideration and thus provides a more comprehensive image analysis. In this study, the Indian Pines hyperspectral data set, recorded by the AVIRIS hyperspectral sensor, was used to analyse the effects of high-dimensional data combined with limited ground reference data. To avoid the curse of dimensionality, principal component analysis (PCA) and feature selection based on the Jeffries–Matusita (JM) distance were utilized. The first 19 principal components, representing 98.5% of the image variance, were selected using PCA, whilst 30 spectral bands were selected using the JM distance. Nearest neighbour (NN) and random forest (RF) classifiers were employed to compare the performance of pixel- and object-based classification using conventional accuracy metrics. The object-based approach outperformed the traditional pixel-based approach in all cases (by up to 18%), and the RF classifier produced significantly more accurate results (by up to 10%) than the NN classifier.
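The PCA step retains the leading components up to a variance threshold (98.5% in the study). A sketch of that selection on synthetic data with a known low-rank structure, not the Indian Pines set:

```python
import numpy as np

def pca_components_for_variance(X, threshold=0.985):
    """Number of principal components needed to reach the variance threshold."""
    Xc = X - X.mean(axis=0)
    # Singular values give the component variances: var_i = s_i**2 / (n - 1)
    s = np.linalg.svd(Xc, compute_uv=False)
    var = s ** 2
    cum = np.cumsum(var) / var.sum()
    return int(np.searchsorted(cum, threshold) + 1)

rng = np.random.default_rng(1)
# Toy stand-in for a (pixels x bands) matrix: 5 latent signals mixed into 50 bands
latent = rng.normal(size=(1000, 5))
mixing = rng.normal(size=(5, 50))
X = latent @ mixing + 0.01 * rng.normal(size=(1000, 50))
k = pca_components_for_variance(X, 0.985)   # expected to be at most 5
```

Because the synthetic data has rank-5 structure plus tiny noise, the threshold is reached within the first five components, mirroring how 19 of 200+ AVIRIS bands can capture 98.5% of the variance.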
205.

Background

The credibility and effectiveness of country climate targets under the Paris Agreement require that, in all greenhouse gas (GHG) sectors, the accounted mitigation outcomes reflect genuine deviations from the type and magnitude of activities generating emissions in the base year or baseline. This is challenging for the forestry sector because future net emissions can change irrespective of actual management activities, owing to age-related stand dynamics resulting from past management and natural disturbances. The solution implemented under the Kyoto Protocol (2013–2020) was to account for mitigation as the deviation from a projected (forward-looking) “forest reference level”, which considered age-related dynamics but also allowed the assumed future implementation of approved policies to be included. This caused controversies, as unverifiable counterfactual scenarios with inflated future harvest could lead to credits where no change in management had actually occurred or, conversely, fail to reflect a policy-driven increase in net emissions in the accounts. Here we instead describe an approach that sets reference levels based on the projected continuation of documented historical forest management practice, i.e. reflecting age-related dynamics but not the future impact of policies. We illustrate a possible method to implement this approach at the level of the European Union (EU) using the Carbon Budget Model.

Results

Using EU country data, we show that forest sinks between 2013 and 2016 were greater than those assumed in the 2013–2020 EU reference level under the Kyoto Protocol, which would lead to credits of 110–120 Mt CO2/year (capped at 70–80 Mt CO2/year, equivalent to 1.3% of 1990 EU total emissions). By modelling the continuation of management practice documented historically (2000–2009), we show that these credits are mostly due to the inclusion in the reference levels of policy-assumed harvest increases that never materialized. With our proposed approach, harvest is expected to increase (by 12% in 2030 at the EU level, relative to 2000–2009), but more slowly than in the current forest reference levels, and only because of age-related dynamics, i.e. increased growing stocks in maturing forests.

Conclusions

Our science-based approach, compatible with the EU post-2020 climate legislation, helps to ensure that only genuine deviations from the continuation of historically documented forest management practices are accounted toward climate targets, thereby enhancing consistency and comparability across GHG sectors. It provides flexibility for countries to increase harvest in future reference levels when justified by age-related dynamics. It offers a policy-neutral solution to the polarized debate on forest accounting (especially on bioenergy) and supports the credibility of forest sector mitigation under the Paris Agreement.
206.

Background

Urban trees have long been valued for providing ecosystem services (mitigation of the “heat island” effect, suppression of air pollution, etc.); more recently, the potential of urban forests to store significant above-ground biomass (AGB) has also been recognised. However, urban areas pose particular challenges when assessing AGB, owing to the plasticity of tree form, high species diversity, and heterogeneous, complex land cover. Remote sensing, in particular light detection and ranging (LiDAR), provides a unique opportunity to assess urban AGB by directly measuring tree structure. In this study, terrestrial LiDAR measurements were used to derive new allometry for the London Borough of Camden that incorporates the wide range of tree structures typical of an urban setting. Using a wall-to-wall airborne LiDAR dataset, individual trees were then identified across the Borough with a new individual tree detection (ITD) method. The new allometry was subsequently applied to the identified trees, generating a Borough-wide estimate of AGB.

Results

Camden has an estimated median AGB density of 51.6 Mg ha−1, with maximum AGB density found in pockets of woodland; terrestrial LiDAR-derived AGB estimates suggest these areas are comparable to temperate and tropical forest. Multiple linear regression of terrestrial LiDAR-derived maximum height and projected crown area explained 93% of the variance in tree volume, highlighting the utility of these metrics for characterising diverse tree structure. Locally derived allometry provided accurate estimates of tree volume, whereas a Borough-wide allometry tended to overestimate AGB in woodland areas. The new ITD method successfully identified individual trees; however, AGB was underestimated by ≤ 25% compared with terrestrial LiDAR, owing to the inability of ITD to resolve crown overlap. A Monte Carlo uncertainty analysis identified the assignment of wood density values as the largest source of uncertainty when estimating AGB.
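The Results describe a multiple linear regression of tree volume on maximum height and projected crown area. A sketch of such an allometric fit in log-log form on synthetic data; the coefficients, ranges and noise level are invented for illustration and are not the paper's allometry:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
height = rng.uniform(4, 30, n)         # hypothetical tree heights (m)
crown_area = rng.uniform(2, 120, n)    # hypothetical projected crown areas (m^2)
# Synthetic "true" volumes with multiplicative noise (illustrative only)
volume = 0.03 * height * crown_area * np.exp(rng.normal(0, 0.2, n))

# Log-log multiple linear regression: ln V = b0 + b1*ln H + b2*ln A
X = np.column_stack([np.ones(n), np.log(height), np.log(crown_area)])
y = np.log(volume)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

The log-log form keeps the residuals roughly homoscedastic across the wide size range typical of urban trees, which is why allometric equations are usually fitted this way.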

Conclusion

Over the coming century global populations are predicted to become increasingly urbanised, leading to an unprecedented expansion of urban land cover. Urban areas will become more important as carbon sinks and effective tools to assess carbon densities in these areas are therefore required. Using multi-scale LiDAR presents an opportunity to achieve this, providing a spatially explicit map of urban forest structure and AGB.
207.
The paper presents a numerical solution of the oblique derivative boundary value problem on and above the Earth’s topography using the finite volume method (FVM). It introduces a novel method for constructing non-uniform hexahedral 3D grids above the Earth’s surface, based on the evolution by mean curvature of a surface that approximates the Earth’s topography. To obtain optimal shapes for the non-uniform 3D grid, the proposed evolution is accompanied by a tangential redistribution of grid nodes. The Laplace equation is then discretized using an FVM developed for such a non-uniform grid. The oblique derivative boundary condition is treated as a stationary advection equation, and we derive a new upwind-type discretization suitable for non-uniform 3D grids. The discretization of the Laplace equation together with that of the oblique derivative boundary condition leads to a linear system of equations whose solution gives the disturbing potential in the whole computational domain, including the Earth’s surface. Numerical experiments demonstrate the properties and efficiency of the developed FVM approach. The first experiments study the experimental order of convergence of the method. A reconstruction of the harmonic function on the Earth’s topography, generated from the EGM2008 or EIGEN-6C4 global geopotential model, is then presented; the obtained FVM solutions show that refining the computational grid leads to more precise results. The last experiment deals with local gravity field modelling in Slovakia using terrestrial gravity data, and a GNSS-levelling test shows the accuracy of the obtained local quasigeoid model.
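The paper discretizes the Laplace equation with FVM on a non-uniform hexahedral 3D grid. As a much simpler stand-in for that machinery, the sketch below iterates the classical 5-point scheme for the 2D Laplace equation on a uniform grid, with Dirichlet data taken from a known harmonic function so the discrete solution can be checked:

```python
import numpy as np

def solve_laplace_dirichlet(boundary, n_iter=5000):
    """Jacobi iteration for the 2D Laplace equation on a uniform grid.

    `boundary` supplies the Dirichlet values on the edges; the interior is
    iterated to the discrete harmonic solution. A uniform-grid toy version
    of the paper's FVM on non-uniform 3D grids.
    """
    u = boundary.astype(float).copy()
    u[1:-1, 1:-1] = 0.0   # discard interior values; keep only the boundary
    for _ in range(n_iter):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:])
    return u

# u(x, y) = x^2 - y^2 is harmonic, so the scheme should reproduce it exactly
n = 33
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing='ij')
exact = x ** 2 - y ** 2
u = solve_laplace_dirichlet(exact)
err = float(np.abs(u - exact).max())
```

The quadratic test function is reproduced to iteration tolerance because the 5-point Laplacian is exact for quadratics, which mirrors the paper's experimental-order-of-convergence study on harmonic reference fields.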
208.
The GOCE gravity gradiometer measured highly accurate gravity gradients along the orbit during the mission lifetime from March 17, 2009, to November 11, 2013. These measurements contain unique information on the gravity field at a spatial resolution of 80 km half-wavelength, which is not provided at the same accuracy level by any other satellite mission now or in the foreseeable future. Unfortunately, the gravity gradient in the cross-track direction is heavily perturbed in the regions around the geomagnetic poles. We show in this paper that the perturbing effect can be modeled accurately as a quadratic function of the non-gravitational acceleration of the satellite in the cross-track direction. Most importantly, we can remove the perturbation from the cross-track gravity gradient to a great extent, which significantly improves its accuracy and offers opportunities for better scientific exploitation of the GOCE gravity gradient data set.
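The correction described above fits a quadratic model in the cross-track non-gravitational acceleration and subtracts it from the observed gradient. A sketch on synthetic data; all signals, coefficients and noise levels are invented for illustration and carry no GOCE-specific units:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
# Stand-ins: cross-track non-gravitational acceleration and a clean gradient signal
acc = rng.normal(0.0, 1.0, n)
clean = np.sin(np.linspace(0, 20, n))
# Synthetic perturbation that is quadratic in the acceleration
perturbation = 0.5 * acc ** 2 + 0.1 * acc
observed = clean + perturbation + rng.normal(0, 0.01, n)

# Fit observed = c2*a^2 + c1*a + c0 by least squares and remove the fit
A = np.column_stack([acc ** 2, acc, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, observed, rcond=None)
corrected = observed - A @ coef

rms_before = float(np.sqrt(np.mean((observed - clean) ** 2)))
rms_after = float(np.sqrt(np.mean((corrected - clean) ** 2)))
```

Because the perturbation depends only on the acceleration while the true gradient does not, the quadratic fit absorbs almost all of the perturbing signal and the residual error drops by an order of magnitude in this toy setup.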
209.
Autonomous orbit determination is the ability of navigation satellites to estimate orbit parameters on board using inter-satellite link (ISL) measurements. This study focuses on processing ISL measurements as a new measurement type and, for the first time, on their application to centralized autonomous orbit determination for the new-generation BeiDou navigation satellite system satellites. The ISL measurements are dual one-way measurements following a time division multiple access (TDMA) structure, with a ranging error of less than 0.25 ns. This paper proposes an approach for deriving the satellite clock offsets and the geometric distances from the TDMA dual one-way measurements without loss of accuracy. The derived clock offsets are used for time synchronization, and the derived geometric distances for autonomous orbit determination. The clock offsets from the ISL measurements are consistent with the L-band two-way satellite time and frequency transfer clock measurements, and the detrended residuals vary within 0.5 ns. The centralized autonomous orbit determination is conducted in batch mode on a ground-capable server for this feasibility study. Constant hardware delays are present in the geometric distances and are the largest source of error in the autonomous orbit determination; they are therefore estimated simultaneously with the satellite orbits. To avoid uncertainties in the constellation orientation, a ground anchor station that “observes” the satellites with on-board ISL payloads is introduced into the orbit determination. The root-mean-square values of the orbit determination residuals are within 10.0 cm, and the standard deviation of the estimated ISL hardware delays is within 0.2 ns. The accuracy of the autonomous orbits is evaluated through overlap comparison and satellite laser ranging (SLR) residuals, and is compared with that of the L-band orbits. The results indicate that the radial overlap differences between the autonomous orbits are less than 15.0 cm for the inclined geosynchronous orbit (IGSO) satellites and less than 10.0 cm for the medium Earth orbit (MEO) satellites. The SLR residuals are approximately 15.0 cm for the IGSO satellites and approximately 10.0 cm for the MEO satellites, an improvement over the L-band orbits.
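The dual one-way combination referred to above is conventionally formed as a half-difference and a half-sum: the half-difference of the two one-way pseudoranges isolates the relative clock offset, and the half-sum isolates the geometric distance. The sketch below shows that split with hardware delays and satellite motion between epochs ignored; the numbers are invented:

```python
def clock_and_range(rho_ab, rho_ba, c=299792458.0):
    """Split dual one-way ISL pseudoranges into clock offset and geometry.

    rho_ab: pseudorange measured at satellite B from A's signal (metres)
    rho_ba: pseudorange measured at satellite A from B's signal (metres)
    Each one-way range is d + c*(receiver clock - transmitter clock), so the
    half-difference gives the clock offset (B minus A) and the half-sum the
    geometric distance, under the simplifications noted above.
    """
    clock_offset_s = (rho_ab - rho_ba) / (2.0 * c)   # seconds
    distance_m = (rho_ab + rho_ba) / 2.0             # metres
    return clock_offset_s, distance_m

# Toy example: true distance 40,000 km, B's clock 100 ns ahead of A
c = 299792458.0
true_d = 4.0e7
dt = 100e-9
rho_ab = true_d + c * dt    # B's fast clock lengthens the range measured at B
rho_ba = true_d - c * dt
off, d = clock_and_range(rho_ab, rho_ba)
```

In the paper's processing, the constant hardware delays that this sketch ignores remain in the half-sum, which is why they must be estimated together with the orbits.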
210.
All BeiDou navigation satellite system (BDS) satellites transmit signals on three frequencies, which brings new opportunities and challenges for high-accuracy precise point positioning (PPP) with ambiguity resolution (AR). This paper proposes an effective uncalibrated phase delay (UPD) estimation and AR strategy based on a raw PPP model. First, triple-frequency raw PPP models are developed: the observation model and stochastic model are designed and extended to accommodate the third frequency. Then, the UPD is parameterized in raw-frequency form and estimated from high-precision, low-noise integer linear combinations of the float ambiguities derived by ambiguity decorrelation. Third, with the UPDs corrected, the LAMBDA method is used to resolve the full set of ambiguities, or the subset that can be fixed. This method can easily and flexibly be extended to dual-, triple- or even more frequencies. To verify the effectiveness and performance of triple-frequency PPP AR, real BDS data from 90 stations over 21 days were processed in static mode with three strategies: BDS triple-frequency ambiguity-float PPP, and BDS triple-frequency PPP with dual-frequency (B1/B2) and with triple-frequency AR. The results show that, compared with the ambiguity-float solution, convergence time and positioning biases are significantly improved by AR. Among the three solutions, triple-frequency PPP AR achieved the best performance. Compared with dual-frequency AR, the additional third frequency appreciably improved the position estimates during the initialization phase and in constrained environments where dual-frequency PPP AR is limited by a small number of visible satellites.
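Once the UPDs are removed, the float ambiguities should fall close to integers. The sketch below shows a simple rounding check on UPD-corrected ambiguities; this is only a stand-in for the LAMBDA-based full/partial resolution the paper actually uses, and every number in it is invented:

```python
import numpy as np

def fix_ambiguities(float_amb, upd, threshold=0.15):
    """Round UPD-corrected float ambiguities whose fractional part is small.

    Returns the integer candidates and a mask of which ones pass the
    threshold: a naive stand-in for partial ambiguity resolution.
    """
    corrected = float_amb - upd
    nearest = np.round(corrected)
    frac = np.abs(corrected - nearest)
    return nearest.astype(int), frac <= threshold

upd = 0.37                                   # hypothetical UPD, in cycles
true_int = np.array([5, -3, 12, 7])          # hypothetical integer ambiguities
float_amb = true_int + upd + np.array([0.04, -0.08, 0.02, 0.31])
fixed, ok = fix_ambiguities(float_amb, upd)  # last one fails the threshold
```

The last ambiguity is left float because its residual (0.31 cycles) exceeds the threshold, illustrating why partial fixing keeps only the reliable subset.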