872.
S-S. Xu, A. F. Nieto-Samaniego, S. A. Alaniz-Álvarez, L. G. Velasquillo-Martínez 《International Journal of Earth Sciences》2006,95(5):841-853
The power-law exponent (n) in the equation D = cL^n, with D = maximum displacement and L = fault length, would be affected by deviations of fault trace length. (1) Assuming n = 1, numerical simulations of the effect of sampling and linkage on fault length and the length–displacement relationship are carried out in this paper. The results show that: (a) uniform relative deviations, meaning that all faults within a dataset have the same relative deviation, do not affect the value of n; (b) deviations of fault length due to unresolved fault tips decrease the value of n, and the deviations of n increase with increasing length deviations; (c) fault linkage and observed dimensions either increase or decrease the value of n depending on the distribution of deviations within a dataset; (d) mixed deviations of fault length are either negative or positive and cause the value of n to either decrease or increase; (e) a dataset combined from two or more datasets with different values of c and orders of magnitude also causes the value of n to deviate. (2) Data comprising 19 datasets, collected from the published literature and spanning more than eight orders of magnitude in fault length (10^-2 to 10^5 m), indicate that the values of n range from 0.55 to 1.5, with an average of 1.0813 and a peak value of n_d (double regression) of 1.0–1.1. Based on the above results from the simulations and published data, we propose that the relationship between maximum displacement and fault length in a single tectonic environment with uniform mechanical properties is linear, and that deviation of n from 1 is mainly caused by sampling and linkage effects.
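The exponent n in such datasets is typically estimated by linear regression in log-log space, since D = cL^n implies log D = log c + n log L. A minimal pure-Python sketch (the function name and the synthetic data below are illustrative, not from the paper):

```python
import math

def fit_power_law(lengths, displacements):
    """Fit D = c * L**n by least squares in log-log space.

    Returns (c, n). Assumes all lengths and displacements are positive.
    """
    xs = [math.log(L) for L in lengths]
    ys = [math.log(D) for D in displacements]
    m = len(xs)
    mx = sum(xs) / m
    my = sum(ys) / m
    # Slope of the log-log regression line is the exponent n.
    n = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    c = math.exp(my - n * mx)
    return c, n

# Synthetic check: data generated with c = 0.01, n = 1 should recover n = 1.
lengths = [10.0, 100.0, 1000.0, 10000.0]
c, n = fit_power_law(lengths, [0.01 * L for L in lengths])
```

Note that deviations of the kind discussed in the paper (unresolved tips, linkage) perturb the xs before the fit, which is exactly how they bias the recovered slope.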
873.
Uncertainties in polar motion and length-of-day measurements are evaluated empirically using several data series from the space-geodetic techniques of the global positioning system (GPS), satellite laser ranging (SLR), and very long baseline interferometry (VLBI) during 1997–2002. In the evaluation procedure employed here, known as the three-corner hat (TCH) technique, the signal common to each series is eliminated by forming pair-wise differences between the series, thus requiring no assumed values for the “truth” signal. From the variances of the differenced series, the uncertainty of each series can be recovered when reasonable assumptions are made about the correlations between the series. In order to form the pair-wise differences, the series data must be given at the same epoch. All measurement data sets studied here were sampled at noon (UTC), except for the VLBI series, whose data are interpolated to noon and whose UT1 values are also numerically differentiated to obtain LOD. The numerical error introduced to the VLBI values by the interpolation and differentiation is shown to be comparable in magnitude to the values determined by the TCH method for the uncertainties of the VLBI series. The TCH estimates for the VLBI series are corrupted by such numerical errors mostly as a result of the relatively large data intervals. Of the remaining data sets studied here, it is found that the IGS Final combined series has the smallest polar motion and length-of-day uncertainties.
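The core of the three-corner hat closure fits in a few lines: if the errors of the three series are assumed uncorrelated, the variance of each pairwise difference is the sum of the two individual variances, and the resulting 3x3 system inverts in closed form. A sketch (variable names are illustrative):

```python
def three_corner_hat(var_ab, var_ac, var_bc):
    """Recover individual series variances from pairwise-difference variances.

    Assumes the errors of series A, B, C are mutually uncorrelated, so that
        var(A-B) = var_A + var_B,  var(A-C) = var_A + var_C,
        var(B-C) = var_B + var_C.
    """
    var_a = (var_ab + var_ac - var_bc) / 2.0
    var_b = (var_ab + var_bc - var_ac) / 2.0
    var_c = (var_ac + var_bc - var_ab) / 2.0
    return var_a, var_b, var_c

# If the true variances are 1, 4 and 9, the difference variances are
# 5, 10 and 13, and the closure recovers the originals.
va, vb, vc = three_corner_hat(5.0, 10.0, 13.0)
```

The "reasonable assumptions about the correlations" mentioned in the abstract generalize this zero-correlation case; with correlated errors the right-hand sides acquire cross-covariance terms.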
874.
Georg Bergeton Larsen Stig Syndergaard Per Høeg Martin Bjært Sørensen 《GPS Solutions》2005,9(2):144-155
The Global Positioning System (GPS) radio occultation measurements obtained using the TurboRogue GPS receiver on the Danish satellite Ørsted have been processed using the single frequency method. Atmospheric profiles of refractivity and temperature are derived and validated against numerical weather prediction data from the European Centre for Medium-Range Weather Forecasts (ECMWF). Results from the Ørsted GPS measurement campaign in February 2000 indicate that the single frequency method can provide retrievals with accuracy comparable to that of using two frequencies. From comparisons between measured dry temperature profiles and corresponding dry temperature profiles derived from ECMWF analysis fields, we find a mean difference of less than 0.5 K and a standard deviation of 2–4 K between 500 and 30 hPa in height. Above 30 hPa the impact of the ionosphere becomes more dominant and more difficult to eliminate using the single frequency method, and the results show degraded accuracy when compared to previous analysis results of occultation data from other missions using the dual frequency method. At latitudes less than 40° (denoted low latitudes), the standard deviation is generally smaller than at latitudes higher than 40° (denoted high latitudes). A small temperature bias is observed centered at 200 hPa for low latitudes and at 300 hPa for high latitudes. This indicates that the ECMWF analyses do not adequately resolve the tropopause temperature minimum. In the lowest part of the troposphere an observed warm bias is thought to be due to erroneous tracking of the GPS signal in cases of atmospheric multipath propagation.
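For contrast with the single frequency method, the standard dual-frequency approach removes the first-order ionospheric delay by forming the ionosphere-free combination of the L1 and L2 observables. A sketch, assuming pseudoranges in metres and the textbook first-order 1/f² delay dependence (this is the generic combination, not the paper's Ørsted processing chain):

```python
F1 = 1575.42e6  # GPS L1 carrier frequency, Hz
F2 = 1227.60e6  # GPS L2 carrier frequency, Hz

def ionosphere_free(p1, p2):
    """Ionosphere-free pseudorange combination.

    Cancels any delay that scales as 1/f**2, i.e. the first-order
    ionospheric contribution, leaving the frequency-independent range.
    """
    a = F1**2 / (F1**2 - F2**2)
    b = F2**2 / (F1**2 - F2**2)
    return a * p1 - b * p2

# A first-order ionospheric delay I on L1 appears scaled by (F1/F2)**2
# on L2; the combination recovers the delay-free range rho.
rho, delay = 2.0e7, 5.0
recovered = ionosphere_free(rho + delay, rho + delay * (F1 / F2) ** 2)
```

A single-frequency receiver, such as considered in the paper, has no second observable to form this combination with, which is why the ionospheric contribution above 30 hPa is harder to eliminate.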
875.
B. Tapley, J. Ries, S. Bettadpur, D. Chambers, M. Cheng, F. Condi, B. Gunter, Z. Kang, P. Nagel, R. Pastor, T. Pekker, S. Poole, F. Wang 《Journal of Geodesy》2005,79(8):467-478
A new generation of Earth gravity field models called GGM02 has been derived using approximately 14 months of data, spanning April 2002 to December 2003, from the Gravity Recovery And Climate Experiment (GRACE). Relative to the preceding generation, GGM01, there have been improvements to the data products, the gravity estimation methods and the background models. Based on the calibrated covariances, GGM02 (both the GRACE-only model GGM02S and the combination model GGM02C) represents an improvement greater than a factor of two over the previous GGM01 models. Error estimates indicate a cumulative geoid height error of less than 1 cm to spherical harmonic degree 70, which can be said to have met the GRACE minimum mission goals.
Electronic Supplementary Material: Supplementary material is available in the online version of this article.
876.
In this letter we develop a new concept, the negative alpha filter, which we suggest has application for quantitative estimation of surface parameters beneath vegetation using polarimetric synthetic aperture radar (SAR) interferometry (POLInSAR). We first derive the filter and then validate it using simulations of L-band coherent forest scattering. We then show initial results of applying the filter to airborne data from the German Aerospace Center's E-SAR L-band sensor.
877.
Analysis of geophysical networks derived from multiscale digital elevation models: a morphological approach
Tay L.T., Sagar B.S.D., Hean Teik Chuah 《Geoscience and Remote Sensing Letters, IEEE》2005,2(4):399-403
We provide a simple and elegant framework based on morphological transformations to generate multiscale digital elevation models (DEMs) and to extract topologically significant multiscale geophysical networks. These terrain features at multiple scales are collectively useful in deriving scaling laws, which exhibit several significant terrain characteristics. We present results derived from a part of the Cameron Highlands DEM.
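The multiscale DEM generation described here rests on greyscale morphological operations. A toy 1-D illustration with flat structuring elements (a sketch of the general technique, not the authors' implementation) shows how opening at increasing radii progressively removes terrain features narrower than the structuring element:

```python
def grey_erode(profile, radius):
    """Greyscale erosion: windowed minimum with a flat element of given radius."""
    n = len(profile)
    return [min(profile[max(0, i - radius):min(n, i + radius + 1)])
            for i in range(n)]

def grey_dilate(profile, radius):
    """Greyscale dilation: windowed maximum with a flat element of given radius."""
    n = len(profile)
    return [max(profile[max(0, i - radius):min(n, i + radius + 1)])
            for i in range(n)]

def grey_open(profile, radius):
    """Opening = erosion then dilation; removes peaks narrower than the element."""
    return grey_dilate(grey_erode(profile, radius), radius)

def multiscale_dems(profile, max_radius):
    """A stack of progressively smoothed elevation profiles, one per scale."""
    return [grey_open(profile, r) for r in range(1, max_radius + 1)]

# A 1-cell-wide spike (9) is removed at radius 1, while the 3-cell-wide
# plateau of 5s survives.
profile = [2, 2, 9, 2, 2, 5, 5, 5, 2, 2]
opened = grey_open(profile, 1)
```

Applying the same idea in 2-D with growing structuring elements yields the multiscale DEM stack from which channel and ridge networks can be extracted at each scale.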
878.
A generic network design in close-range photogrammetry is one where optimal multi-ray intersection geometry is obtained with as few camera stations as practicable. Hyper redundancy is a concept whereby, once the generic network is in place, many additional images are recorded, with a beneficial impact on object point precision equivalent to the presence of multiple exposures at each camera position within the generic network. The effective number of images per station within a hyper redundant network may well be in the range of 10 to 20 or more. Since a hyper redundant network may comprise hundreds of images, the concept is only applicable in practice to fully automatic vision metrology systems, where it proves to be a very effective means of enhancing measurement accuracy at the cost of minimal additional work in the image recording phase. This paper briefly reviews the network design and accuracy aspects of hyper redundancy and illustrates the technique by way of the photogrammetric measurement of surface deformation of a radio telescope of 26 m diameter. This project required an object point measurement accuracy of σ = 0.065 mm, or 1/400 000 of the diameter of the reflector.
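The accuracy benefit of hyper redundancy follows the usual averaging law: k effective images per station improves object point precision by roughly a factor of √k over a single exposure. A back-of-the-envelope sketch (the 1/√k model is the standard network-design approximation, and the numbers below are illustrative, not figures from the paper):

```python
import math

def point_precision(sigma_single_mm, images_per_station):
    """Approximate object-point precision under k-fold image redundancy.

    Applies the 1/sqrt(k) averaging law for k effective exposures
    per camera station.
    """
    return sigma_single_mm / math.sqrt(images_per_station)

# e.g. 16 effective images per station improves a 0.26 mm single-image
# precision by a factor of 4.
sigma_16 = point_precision(0.26, 16)
```

This is why recording many additional images is cheap insurance in a fully automatic system: the per-image cost is near zero while precision keeps improving, albeit with diminishing returns as k grows.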
879.
A field experiment was conducted on wheat at New Delhi with five treatments of nitrogen (N) fertilizer application (0, 30, 60, 90 and 120 kg ha⁻¹). Relationships were established between observed leaf area index (LAI) and remotely sensed vegetation indices. These relationships were inverted and used for predicting LAI from vegetation indices on different days after sowing. A “re-initialization” strategy was implemented in the model WTGROWS, in which the initial conditions of the model are changed so that the model-simulated LAI matches the remote-sensing-predicted LAI. The model performance with re-initialization was evaluated by comparing the simulated grain yield and total above-ground dry matter (TDM) values with the actual observations. The results show that in-season re-initialization is effective in model course correction, improving the simulated results of yield and TDM for the different N treatments even though the model was run with no N-stress condition. Model re-initialization on different days shows that the closer the day of re-initialization is to crop anthesis, the more effective the model course correction. Also, the treatment showing the maximum error in yield simulation without re-initialization shows the maximum reduction in error with re-initialization. The approach shows that remote sensing inputs can substitute for some of the inputs, or errors in inputs, required by crop models for yield prediction.
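The inversion step, predicting LAI from a vegetation index, is often done with a Beer's-law-type saturating relation, NDVI = a − b·exp(−k·LAI), which inverts in closed form. A sketch with illustrative coefficients (a, b and k below are placeholders, not the values fitted in this study, and the paper does not state which functional form it used):

```python
import math

def ndvi_from_lai(lai, a=0.95, b=0.75, k=0.6):
    """Assumed saturating NDVI-LAI relation (illustrative coefficients)."""
    return a - b * math.exp(-k * lai)

def lai_from_ndvi(ndvi, a=0.95, b=0.75, k=0.6):
    """Closed-form inverse of the relation above: the 'inversion' step."""
    return -math.log((a - ndvi) / b) / k

# Round trip: the inverse recovers the LAI that produced a given NDVI,
# which is what drives the re-initialization of the crop model's state.
lai_rt = lai_from_ndvi(ndvi_from_lai(3.0))
```

In the re-initialization scheme, a value like `lai_rt` obtained from imagery on a given day after sowing replaces the model's simulated LAI before the run continues.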
880.