163.
To address the poor coding efficiency and weak error resilience that can arise when encoding and transmitting hyperspectral images, we present a distributed source coding scheme for hyperspectral images based on three-dimensional (3D) set partitioning in hierarchical trees (SPIHT). First, a 3D wavelet transform is performed on the hyperspectral image. The low-frequency section, which carries the main energy of the image, is treated as the Key frame, and the high-frequency section, which reflects the image details, as the Wyner–Ziv frame, so that each can be SPIHT-coded independently and transmitted over a separate channel. The Wyner–Ziv encoder protects the high-frequency information with Turbo channel coding to improve its error resilience, while SPIHT coding yields a bit stream with quality scalability. Results show that the proposed scheme is more efficient during coding while also providing improved error resilience and a quality-scalable bit stream.
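As a rough sketch of the first step of such a scheme, the Python snippet below uses PyWavelets to perform a single-level 3D wavelet decomposition of a hyperspectral cube and separates the low-frequency subband (Key-frame candidate) from the high-frequency subbands (Wyner–Ziv candidates). The SPIHT and Turbo-coding stages are indicated only as comments, and the function name and toy data are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions: PyWavelets available; SPIHT and Turbo coding are
# represented only by comments, since they are not standard library calls).
import numpy as np
import pywt

def split_key_and_wyner_ziv(cube, wavelet="bior4.4"):
    """Single-level 3D DWT of a hyperspectral cube (bands, rows, cols).

    Returns the low-frequency subband (Key-frame candidate) and the dictionary
    of high-frequency subbands (Wyner-Ziv candidates).
    """
    coeffs = pywt.dwtn(cube, wavelet)      # keys: 'aaa', 'aad', ..., 'ddd'
    key_part = coeffs.pop("aaa")           # low frequency: main image energy
    wyner_ziv_part = coeffs                # high frequency: image detail
    return key_part, wyner_ziv_part

# Toy usage with a random cube standing in for a hyperspectral image.
cube = np.random.rand(32, 64, 64)
key, wz = split_key_and_wyner_ziv(cube)
print(key.shape, sorted(wz.keys()))
# In the scheme described above, `key` would be SPIHT-coded on one channel,
# while each subband in `wz` would be SPIHT-coded and protected by a Turbo
# channel code before transmission on a separate channel.
```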
164.
Point clouds produced by theoretically and practically different techniques are among the most widely used data types in engineering applications and projects. The principal methods for obtaining point clouds in terrestrial studies are close range photogrammetry (CRP) and terrestrial laser scanning (TLS). TLS, which differs from CRP in its system structure, can produce denser point clouds at regular intervals; however, point clouds can also be produced from photographs taken under appropriate conditions, depending on the hardware and software used. Photographs of adequate quality can be obtained with consumer-grade digital cameras, and widely used photogrammetric software now supports point cloud generation. Interest in TLS is nevertheless higher, since it constitutes a newer area of research, and it is sometimes assumed that TLS has superseded CRP, which is regarded as outdated. In this study, conducted on rock surfaces at the Istanbul Technical University Ayazaga Campus, we investigate whether a point cloud produced from photographs can be used in place of a point cloud obtained with a laser scanner. The study area covers approximately 30 m × 10 m. To compare the methods, 2D (areal-based) and 3D (volume-based) analyses as well as an accuracy assessment were carried out. The results show that the point clouds from the two methods are similar to each other and can be used in comparable studies. However, because the factors affecting the accuracy of the raw data and the derived products are highly variable for both methods, we conclude that a choice between them should not be made without considering the object of interest and the working conditions.
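A minimal sketch of one common way to quantify the agreement between two co-registered point clouds (nearest-neighbour, cloud-to-cloud distances with SciPy) is given below; it is not the specific 2D/3D analysis of this study, and the arrays and noise level are synthetic placeholders.

```python
# Minimal sketch (assumptions: NumPy/SciPy available; `tls_xyz` and `photo_xyz`
# are (N, 3) arrays of already co-registered point clouds).
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distances(reference_xyz, test_xyz):
    """Nearest-neighbour distance from each test point to the reference cloud."""
    tree = cKDTree(reference_xyz)
    distances, _ = tree.query(test_xyz, k=1)
    return distances

# Toy data standing in for TLS (reference) and photogrammetric (test) clouds.
rng = np.random.default_rng(0)
tls_xyz = rng.uniform(0, 10, size=(5000, 3))
photo_xyz = tls_xyz + rng.normal(0, 0.02, size=tls_xyz.shape)  # ~2 cm noise

d = cloud_to_cloud_distances(tls_xyz, photo_xyz)
print(f"mean: {d.mean():.3f} m, RMS: {np.sqrt((d**2).mean()):.3f} m, "
      f"95th percentile: {np.percentile(d, 95):.3f} m")
```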
165.
Object-based image analysis (OBIA) has become important for the delineation of landscape features, particularly with the availability of high-spatial-resolution satellite images acquired by recent sensors. Statistical parametric classifiers have become less effective, mainly because of their assumption of a normal distribution, the vast increase in data dimensionality, and the limited availability of ground sample data. Unlike pixel-based approaches, OBIA takes the semantic information of extracted image objects into consideration and thus provides a more comprehensive image analysis. In this study, the Indian Pines hyperspectral data set, recorded by the AVIRIS hyperspectral sensor, was used to analyse the effects of high-dimensional data with limited ground reference data. To avoid the curse of dimensionality, principal component analysis (PCA) and feature selection based on the Jeffries–Matusita (JM) distance were applied. The first 19 principal components, representing 98.5% of the image, were selected using PCA, while 30 spectral bands were selected using the JM distance. Nearest neighbour (NN) and random forest (RF) classifiers were employed to compare the performance of pixel- and object-based classification using conventional accuracy metrics. The object-based approach outperformed the traditional pixel-based approach in all cases (by up to 18%), and the RF classifier produced significantly more accurate results (by up to 10%) than the NN classifier.
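The workflow of dimensionality reduction followed by classifier comparison can be sketched with scikit-learn as below; this is a pixel-based, toy-data stand-in (k-nearest-neighbour in place of the OBIA nearest-neighbour classifier), not the object-based procedure of the study.

```python
# Minimal sketch (assumptions: scikit-learn available; `X` is an
# (n_pixels, n_bands) matrix of hyperspectral samples, `y` the class labels;
# both are synthetic stand-ins here).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
y = rng.integers(0, 16, size=2000)                       # 16 toy classes
X = y[:, None] * 0.1 + rng.normal(size=(2000, 200))      # 200 toy "bands"

X_pca = PCA(n_components=19).fit_transform(X)            # dimensionality reduction
# Keep only 30% of samples for training to mimic limited ground reference data.
X_tr, X_te, y_tr, y_te = train_test_split(X_pca, y, test_size=0.7, random_state=0)

for name, clf in [("NN", KNeighborsClassifier(n_neighbors=1)),
                  ("RF", RandomForestClassifier(n_estimators=200, random_state=0))]:
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))
```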
166.

Background

The credibility and effectiveness of country climate targets under the Paris Agreement require that, in all greenhouse gas (GHG) sectors, the accounted mitigation outcomes reflect genuine deviations from the type and magnitude of the activities generating emissions in the base year or baseline. This is challenging for the forestry sector, because future net emissions can change irrespective of actual management activities, owing to age-related stand dynamics resulting from past management and natural disturbances. The solution implemented under the Kyoto Protocol (2013–2020) was to account for mitigation as the deviation from a projected (forward-looking) “forest reference level”, which considered the age-related dynamics but also allowed the assumed future implementation of approved policies to be included. This caused controversies, because unverifiable counterfactual scenarios with inflated future harvest could lead to credits where no change in management had actually occurred or, conversely, could fail to reflect in the accounts a policy-driven increase in net emissions. Instead, here we describe an approach for setting reference levels based on the projected continuation of documented historical forest management practice, i.e. reflecting age-related dynamics but not the future impact of policies. We illustrate a possible method to implement this approach at the level of the European Union (EU) using the Carbon Budget Model.

Results

Using EU country data, we show that forest sinks between 2013 and 2016 were greater than those assumed in the 2013–2020 EU reference level under the Kyoto Protocol, which would lead to credits of 110–120 Mt CO2/year (capped at 70–80 Mt CO2/year, equivalent to 1.3% of 1990 EU total emissions). By modelling the continuation of the management practice documented historically (2000–2009), we show that these credits are mostly due to the inclusion in the reference levels of policy-assumed harvest increases that never materialized. With our proposed approach, harvest is expected to increase (by 12% in 2030 at the EU level, relative to 2000–2009), but more slowly than in the current forest reference levels, and only because of age-related dynamics, i.e. increased growing stocks in maturing forests.
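As a toy illustration of the capped crediting logic referred to above, the Python sketch below compares a reported net sink against a reference level and applies a cap; the function name and all numerical values are hypothetical placeholders, not EU accounting figures.

```python
# Toy illustration of capped forest-sector crediting (all values hypothetical):
# credits arise when the reported net sink is larger (more negative) than the
# reference level, and they are subject to a cap.
def forest_credits(reported_net_emissions, reference_level, cap):
    """Net emissions are negative for a sink (Mt CO2/year)."""
    raw_credit = reference_level - reported_net_emissions   # positive = credit
    return min(max(raw_credit, 0.0), cap)

# Hypothetical example: the reference level assumes a sink of -300 Mt CO2/yr,
# the reported sink is -415 Mt CO2/yr, and the cap is 75 Mt CO2/yr.
print(forest_credits(reported_net_emissions=-415.0,
                     reference_level=-300.0,
                     cap=75.0))   # -> 75.0 (the 115 Mt raw credit is capped)
```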

Conclusions

Our science-based approach, compatible with the EU post-2020 climate legislation, helps to ensure that only genuine deviations from the continuation of historically documented forest management practices are accounted toward climate targets, thereby enhancing consistency and comparability across GHG sectors. It provides flexibility for countries to increase harvest in future reference levels when justified by age-related dynamics. It offers a policy-neutral solution to the polarized debate on forest accounting (especially on bioenergy) and supports the credibility of forest sector mitigation under the Paris Agreement.
167.

Background

Urban trees have long been valued for providing ecosystem services (mitigation of the “heat island” effect, suppression of air pollution, etc.); more recently, the potential of urban forests to store significant above-ground biomass (AGB) has also been recognised. However, urban areas pose particular challenges when assessing AGB, owing to the plasticity of tree form, high species diversity, and heterogeneous, complex land cover. Remote sensing, in particular light detection and ranging (LiDAR), provides a unique opportunity to assess urban AGB by directly measuring tree structure. In this study, terrestrial LiDAR measurements were used to derive new allometry for the London Borough of Camden that incorporates the wide range of tree structures typical of an urban setting. Using a wall-to-wall airborne LiDAR dataset, individual trees were then identified across the Borough with a new individual tree detection (ITD) method. The new allometry was subsequently applied to the identified trees, generating a Borough-wide estimate of AGB.

Results

Camden has an estimated median AGB density of 51.6 Mg ha–1, with the maximum AGB density found in pockets of woodland; terrestrial LiDAR-derived AGB estimates suggest these areas are comparable to temperate and tropical forests. Multiple linear regression of terrestrial LiDAR-derived maximum height and projected crown area explained 93% of the variance in tree volume, highlighting the utility of these metrics for characterising diverse tree structures. Locally derived allometry provided accurate estimates of tree volume, whereas a Borough-wide allometry tended to overestimate AGB in woodland areas. The new ITD method successfully identified individual trees; however, AGB was underestimated by ≤ 25% compared to terrestrial LiDAR, owing to the inability of ITD to resolve crown overlap. A Monte Carlo uncertainty analysis identified the assignment of wood density values as the largest source of uncertainty when estimating AGB.
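A minimal sketch of the kind of allometric fit described above (tree volume regressed on maximum height and projected crown area, then converted to AGB with an assigned wood density) is shown below; the synthetic data, the log-space form of the fit, and the density value are illustrative assumptions, not the study's model.

```python
# Minimal sketch (assumptions: NumPy/scikit-learn available; `height_m`,
# `crown_area_m2` and `volume_m3` would come from terrestrial LiDAR tree
# models; the values below are synthetic placeholders).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
height_m = rng.uniform(5, 30, 200)
crown_area_m2 = rng.uniform(5, 150, 200)
volume_m3 = 0.02 * height_m * crown_area_m2 * rng.lognormal(0, 0.1, 200)  # toy truth

# Allometric model: tree volume as a function of maximum height and projected
# crown area (fitted in log space so that predicted volumes stay positive).
X = np.log(np.column_stack([height_m, crown_area_m2]))
model = LinearRegression().fit(X, np.log(volume_m3))
print("R^2:", model.score(X, np.log(volume_m3)))

# AGB follows from volume and an assigned wood density (Mg per m^3) -- the
# step identified above as the largest source of uncertainty.
wood_density = 0.56   # hypothetical species-mean value
agb_mg = wood_density * np.exp(model.predict(X))
print("total AGB (Mg):", round(agb_mg.sum(), 1))
```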

Conclusion

Over the coming century, global populations are predicted to become increasingly urbanised, leading to an unprecedented expansion of urban land cover. Urban areas will become more important as carbon sinks, and effective tools to assess carbon densities in these areas are therefore required. Multi-scale LiDAR presents an opportunity to achieve this, providing a spatially explicit map of urban forest structure and AGB.
168.
The paper presents a numerical solution of the oblique derivative boundary value problem on and above the Earth’s topography using the finite volume method (FVM). It introduces a novel method for constructing non-uniform hexahedral 3D grids above the Earth’s surface, based on evolving a surface that approximates the Earth’s topography by its mean curvature. To obtain optimal shapes of the non-uniform 3D grid, the proposed evolution is accompanied by a tangential redistribution of grid nodes. The Laplace equation is then discretized using an FVM developed for such a non-uniform grid. The oblique derivative boundary condition is treated as a stationary advection equation, and we derive a new upwind-type discretization suitable for non-uniform 3D grids. The discretization of the Laplace equation, together with that of the oblique derivative boundary condition, leads to a linear system of equations whose solution gives the disturbing potential in the whole computational domain, including the Earth’s surface. Numerical experiments illustrate the properties and efficiency of the developed FVM approach. The first experiments study the experimental order of convergence of the method. A reconstruction of the harmonic function on the Earth’s topography, generated from the EGM2008 or EIGEN-6C4 global geopotential model, is then presented; the obtained FVM solutions show that refining the computational grid leads to more precise results. The last experiment deals with local gravity field modelling in Slovakia using terrestrial gravity data, where a GNSS-levelling test shows the accuracy of the obtained local quasigeoid model.
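As a strongly simplified, two-dimensional finite-difference analogue of this approach, the sketch below solves the Laplace equation on a unit square with Dirichlet conditions on three sides and an oblique derivative condition on the fourth side, discretized with one-sided, upwind-style differences. The uniform grid, the test harmonic function, and all parameter values are assumptions made for illustration; this does not reproduce the paper's FVM on evolved hexahedral grids.

```python
# Simplified 2D analogue (assumptions: unit-square domain, uniform grid,
# SciPy sparse direct solver; oblique derivative a1*u_x + a2*u_y = g on the
# bottom edge, discretized with one-sided differences in the spirit of the
# advection-type treatment described above).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, h = 41, 1.0 / 40.0                        # grid points per axis, spacing
a1, a2 = 0.5, 1.0                            # oblique direction (a2 > 0: inward)
exact = lambda x, y: np.exp(x) * np.cos(y)   # harmonic reference solution

idx = lambda i, j: j * n + i                 # flatten (i, j) -> row index
A = sp.lil_matrix((n * n, n * n))
b = np.zeros(n * n)
x = np.linspace(0.0, 1.0, n)

for j in range(n):
    for i in range(n):
        k, xi = idx(i, j), x[i]
        if i in (0, n - 1) or j == n - 1:            # Dirichlet: left, right, top
            A[k, k] = 1.0
            b[k] = exact(xi, x[j])
        elif j == 0:                                  # oblique derivative: bottom
            A[k, idx(i, 1)] += a2 / h                 # u_y forward (into domain)
            A[k, k] += -a2 / h
            if a1 >= 0.0:                             # u_x one-sided, upwind-style
                A[k, k] += a1 / h
                A[k, idx(i - 1, 0)] += -a1 / h
            else:
                A[k, idx(i + 1, 0)] += a1 / h
                A[k, k] += -a1 / h
            b[k] = a1 * np.exp(xi)                    # g from the exact solution
        else:                                         # interior: 5-point Laplacian
            A[k, k] = -4.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                A[k, idx(i + di, j + dj)] = 1.0

u = spla.spsolve(A.tocsr(), b).reshape(n, n)          # u[j, i] at (x[i], x[j])
X, Y = np.meshgrid(x, x)
print("max abs error:", np.abs(u - exact(X, Y)).max())
```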
169.
All BeiDou navigation satellite system (BDS) satellites transmit signals on three frequencies, which brings new opportunities and challenges for high-accuracy precise point positioning (PPP) with ambiguity resolution (AR). This paper proposes an effective uncalibrated phase delay (UPD) estimation and AR strategy based on a raw PPP model. First, triple-frequency raw PPP models are developed: the observation model and stochastic model are designed and extended to accommodate the third frequency. Second, the UPDs are parameterized in raw-frequency form and estimated from high-precision, low-noise integer linear combinations of float ambiguities derived by ambiguity decorrelation. Third, with the UPDs corrected, the LAMBDA method is used to resolve all ambiguities, or the subset of ambiguities that can be fixed. This method can easily and flexibly be extended to dual-, triple-, or more frequencies. To verify the effectiveness and performance of triple-frequency PPP AR, tests with real BDS data from 90 stations spanning 21 days were performed in static mode. The data were processed with three strategies: BDS triple-frequency ambiguity-float PPP, and BDS triple-frequency PPP with dual-frequency (B1/B2) and with triple-frequency AR, respectively. The results showed that, compared with the ambiguity-float solution, AR significantly improves the convergence time and positioning biases. Among the three groups of solutions, triple-frequency PPP AR achieved the best performance. Compared with dual-frequency AR, the additional third frequency clearly improved the position estimates during the initialization phase and in constrained environments where dual-frequency PPP AR is limited by the small number of available satellites.
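The core idea of UPD estimation and ambiguity fixing can be sketched as follows; the fractional-part averaging, the rounding threshold, and the toy ambiguity values are illustrative assumptions, and simple rounding stands in for the LAMBDA integer search used in the paper.

```python
# Conceptual sketch (assumptions: `float_ambs` holds float ambiguities of one
# satellite observed from several stations, in cycles; the real procedure works
# with decorrelated integer combinations and uses LAMBDA, approximated here by
# rounding with a validation threshold).
import numpy as np

def estimate_upd(float_ambs):
    """Mean fractional part of the float ambiguities (the satellite UPD)."""
    frac = float_ambs - np.round(float_ambs)          # fractional parts
    # Average on the unit circle so values near +0.5 and -0.5 do not cancel.
    angle = np.angle(np.mean(np.exp(2j * np.pi * frac)))
    return angle / (2.0 * np.pi)

def fix_ambiguity(float_amb, upd, threshold=0.15):
    """Remove the UPD and round; accept only if close enough to an integer."""
    corrected = float_amb - upd
    nearest = np.round(corrected)
    return int(nearest) if abs(corrected - nearest) < threshold else None

float_ambs = np.array([12.31, -7.68, 3.32, 25.29, -1.67])   # toy values
upd = estimate_upd(float_ambs)
print("estimated UPD:", round(upd, 3))
print([fix_ambiguity(a, upd) for a in float_ambs])
```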
170.
In order to accelerate the spherical harmonic synthesis and/or analysis of an arbitrary function on the unit sphere, we developed a pair of procedures to transform between a truncated spherical harmonic expansion and the corresponding two-dimensional Fourier series. First, we obtained an analytic expression for the sine/cosine series coefficients of the 4π fully normalized associated Legendre function in terms of the rectangle values of the Wigner d function. Then, we elaborated the existing method for transforming the coefficients of a surface spherical harmonic expansion to those of the double Fourier series so that it can handle arbitrarily high degree and order. Next, we created a new method to transform a given double Fourier series back to the corresponding surface spherical harmonic expansion. The key to the new method is a pair of new recurrence formulas for computing the inverse transformation coefficients: a decreasing-order, fixed-degree, fixed-wavenumber three-term formula for general terms, and an increasing-degree-and-order, fixed-wavenumber two-term formula for diagonal terms, with the two seed values prepared analytically. Both the forward and inverse transformation procedures are confirmed to be sufficiently accurate and applicable up to extremely high degree/order/wavenumber, around 2^30 ≈ 10^9. The developed procedures will be useful not only in the synthesis and analysis of spherical harmonic expansions of arbitrarily high degree and order, but also in the evaluation of the derivatives and integrals of such expansions.
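A small numerical illustration of the longitude part of this correspondence is given below: for a band-limited expansion, an FFT along longitude recovers, order by order, Fourier coefficients that agree with the direct sums over degree. The colatitude recurrences based on the Wigner d function are not reproduced; the low maximum degree, the use of SciPy's lpmv, and the helper function pbar are assumptions made for illustration.

```python
# Numerical check of the longitude Fourier coefficients of a band-limited
# spherical harmonic expansion (assumptions: SciPy available; lpmv includes
# the Condon-Shortley phase, so signs may differ from the geodetic convention,
# but the consistency check below is unaffected).
import numpy as np
from scipy.special import lpmv
from math import factorial, sqrt

def pbar(n, m, t):
    """4-pi fully normalized associated Legendre function P̄_nm(t)."""
    norm = sqrt((2 - (m == 0)) * (2 * n + 1) * factorial(n - m) / factorial(n + m))
    return norm * lpmv(m, n, t)

nmax = 8
rng = np.random.default_rng(2)
C = {(n, m): rng.normal() for n in range(nmax + 1) for m in range(n + 1)}  # cosine terms only

theta = 0.7                                   # fixed colatitude (rad)
nlon = 64                                     # > 2*nmax, enough for an exact FFT
lam = 2.0 * np.pi * np.arange(nlon) / nlon

# Synthesize f(theta, lambda) = sum_n sum_m C_nm * Pbar_nm(cos theta) * cos(m*lambda).
f = np.zeros(nlon)
for (n, m), c in C.items():
    f += c * pbar(n, m, np.cos(theta)) * np.cos(m * lam)

# FFT along longitude: the order-m cosine coefficient should equal the
# direct sum over degree, sum_n C_nm * Pbar_nm(cos theta).
fourier = np.fft.rfft(f) / nlon
for m in range(nmax + 1):
    direct = sum(C[(n, m)] * pbar(n, m, np.cos(theta)) for n in range(m, nmax + 1))
    from_fft = (2.0 if m > 0 else 1.0) * fourier[m].real
    print(m, round(direct, 10), round(from_fft, 10))
```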