281.
Phase center modeling for LEO GPS receiver antennas and its impact on precise orbit determination (Total citations: 7; self-citations: 5; citations by others: 7)
Adrian Jäggi, R. Dach, O. Montenbruck, U. Hugentobler, H. Bock, G. Beutler 《Journal of Geodesy》2009,83(12):1145-1162
Most satellites in a low-Earth orbit (LEO) with demanding requirements on precise orbit determination (POD) are equipped with on-board receivers to collect observations from Global Navigation Satellite Systems (GNSS), such as the Global Positioning System (GPS). The limiting factors for LEO POD are nowadays mainly encountered in the modeling of the carrier phase observations, where precise knowledge of the phase center location of the GNSS antennas is a prerequisite for high-precision orbit analyses. Since 5 November 2006 (GPS week 1400), absolute instead of relative values for the phase center location of GNSS receiver and transmitter antennas have been adopted in the processing standards of the International GNSS Service (IGS). The absolute phase center modeling is based on robot calibrations for a number of terrestrial receiver antennas; compatible antenna models were subsequently derived for the remaining terrestrial receiver antennas by conversion from relative corrections, and for the GNSS transmitter antennas by estimation. However, consistent receiver antenna models for space missions such as GRACE and TerraSAR-X, which are equipped with non-geodetic receiver antennas, have only recently become available from robot calibrations. We use GPS data of these LEOs from the year 2007, together with the absolute antenna modeling, to assess the accuracy presently achieved by state-of-the-art reduced-dynamic LEO POD strategies for absolute and relative navigation. Near-field multipath and cross-talk with active GPS occultation antennas turn out to be significant sources of systematic carrier phase measurement errors in the actual spacecraft environments. We assess different methodologies for the in-flight determination of empirical phase pattern corrections for LEO receiver antennas and discuss their impact on POD. By means of independent K-band measurements, we show that zero-difference GRACE orbits can be significantly improved, from about 10 to 6 mm K-band standard deviation, when empirical phase corrections are taken into account, and we assess the impact of the corrections on precise baseline estimates and on further applications such as gravity field recovery from kinematic LEO positions.
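As a rough illustration of how such corrections enter the carrier phase model, the sketch below (not the authors' implementation; the function name, antenna-frame convention, and 10-degree elevation grid for the phase center variation table are illustrative assumptions) projects a phase center offset onto the line of sight and interpolates an elevation-dependent phase variation pattern:

```python
import numpy as np

def phase_center_correction(pco_body, los_body, pcv_table, elev_deg):
    """Range correction for a receiver antenna phase center (sketch).

    pco_body : 3-vector phase center offset in the antenna frame [m]
    los_body : unit line-of-sight vector to the GPS satellite, antenna frame
    pcv_table: elevation-dependent phase center variations [m], sampled
               every 10 degrees from 0 to 90 (assumed grid)
    elev_deg : elevation of the satellite [deg]
    """
    # Projection of the offset onto the line of sight changes the range
    pco_term = np.dot(pco_body, los_body)
    # Interpolate the phase variation pattern at the observed elevation
    grid = np.arange(0.0, 91.0, 10.0)
    pcv_term = np.interp(elev_deg, grid, pcv_table)
    return pco_term + pcv_term
```

In a real POD chain the correction is applied per observation and per frequency, and the PCV pattern generally also depends on azimuth.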
282.
Many regions around the world require improved gravimetric databases to support very accurate geoid modeling for the modernization of height systems using GPS. We present a simple yet effective method to assess gravity data requirements, particularly the necessary resolution, for a desired precision in geoid computation. The approach is based on simulating high-resolution gravimetry using a topography-correlated model that is adjusted to be consistent with an existing network of gravity data. Analysis of these adjusted, simulated data through Stokes's integral indicates where existing gravity data must be supplemented by new surveys in order to achieve an acceptable level of omission error in the geoid undulation. The simulated model can equally be used to analyze commission error, as well as model error and data inconsistencies to a limited extent. The proposed method is applied to South Korea and shows clearly where existing gravity data are too scarce for precise geoid computation.
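For reference, the geoid computation underlying this analysis rests on Stokes's integral in its standard form, which maps gravity anomalies \(\Delta g\) on the sphere \(\sigma\) to geoid undulations \(N\) (here \(R\) is the mean Earth radius, \(\gamma\) normal gravity, \(S(\psi)\) the Stokes function of the spherical distance \(\psi\)):

```latex
N(\varphi,\lambda) \;=\; \frac{R}{4\pi\gamma}
  \iint_{\sigma} \Delta g(\varphi',\lambda')\, S(\psi)\,\mathrm{d}\sigma .
```

The omission error studied in the paper arises because \(\Delta g\) is only available at finite resolution, so the short-wavelength part of this integral is missing.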
283.
Ben K. H. Soon, Steve Scheding, Hyung-Kuen Lee, Hung-Kyu Lee, Hugh Durrant-Whyte 《GPS Solutions》2008,12(4):261-271
This paper presents a simple and effective approach that incorporates single-frequency L1 time-differenced GPS carrier phase (TDCP) measurements without the need for ambiguity resolution techniques or the complexity of accommodating delayed-state terms. Static trial results are included to illustrate the stochastic characteristics and effectiveness of the TDCP measurements in controlling position error growth. The formulation of the TDCP observation model is also described within a 17-state, tightly coupled GPS/INS iterative extended Kalman filter (IEKF). Preliminary land vehicle trial results are also presented to illustrate the effectiveness of TDCP, which provides sub-meter positional accuracies when operating for more than 10 min.
284.
D. Rieke-Zapp, W. Tecklenburg, J. Peipe, H. Hastedt, Claudia Haig 《ISPRS Journal of Photogrammetry and Remote Sensing》2009,64(3):248-258
Recent tests on the geometric stability of several digital cameras that were not designed for photogrammetric applications have shown that the accuracies accomplished in object space are either limited or that the accuracy potential is not exploited to the fullest extent. A total of 72 calibrations were calculated with four different software products for eleven digital camera models with different hardware setups, some with mechanical fixation of one or more parts. The calibration procedure was chosen in accordance with a German guideline for the evaluation of optical 3D measuring systems [VDI/VDE, VDI/VDE 2634 Part 1, 2002. Optical 3D Measuring Systems – Imaging Systems with Point-by-point Probing. Beuth Verlag, Berlin]. All images were taken with ring flashes, which is considered a standard method for close-range photogrammetry. In cases where the flash was mounted on the lens, the force exerted on the lens tube and the camera mount greatly reduced the accomplished accuracy. Mounting the ring flash on the camera instead resulted in a large improvement of accuracy in object space. For standard calibration, the best accuracies in object space were accomplished with a Canon EOS 5D and a 35 mm Canon lens whose focusing tube was fixed with epoxy (47 μm maximum absolute length measurement error in object space). The fixation of the Canon lens was fairly easy and inexpensive, resulting in a sevenfold increase in accuracy compared with the same lens type without modification. A similar accuracy was accomplished with a Nikon D3 when mounting the ring flash on the camera instead of the lens (52 μm maximum absolute length measurement error in object space). Parameterisation of geometric instabilities by introducing an image-variant interior orientation in the calibration process improved the results for most cameras. In this case, a modified Alpa 12 WA yielded the best results (29 μm maximum absolute length measurement error in object space).
Extending the parameter model with the FiBun software to model not only an image-variant interior orientation but also deformations in the sensor domain of the cameras showed significant improvements only for a small group of cameras. The Nikon D3 yielded the best overall accuracy (25 μm maximum absolute length measurement error in object space) with this calibration procedure, indicating at the same time the presence of image-invariant errors in the sensor domain. Overall, the calibration results showed that digital cameras can be applied for accurate photogrammetric surveys and that little effort is sufficient to greatly improve the accuracy potential of digital cameras.
285.
E. H. Knickmeyer 《Journal of Geodesy》1990,64(2):161-163
Editor's comment: This letter was received from Dr. E.H. Knickmeyer, University of Calgary, Dept. of Surveying Engineering, 2500 University Dr. N., Calgary, Alberta, Canada T2N 1N4, in April 1988. A response to this letter has been written by Dr. C. Boucher, who represents the Central Bureau of IERS and IAG SSG 5.123. Similar letters, dealing with matters of interest for geodesy and for IAG, will in the future be published in Bulletin Géodésique with an answer by the person or committee in IAG which is most closely related to, or responsible for, the matter dealt with in the letter.
286.
Jane Bemigisha, John Carranza, Andrew K. Skidmore, Mike McCall, Chiara Polce, Herbert H.T. Prins 《Transactions in GIS》2009,13(3):273-293
In a project to classify livestock grazing intensity using participatory geographic information systems (PGIS), we encountered the problem of how to synthesize PGIS-based maps of livestock grazing intensity that were prepared separately by local experts. We investigated the utility of evidential belief functions (EBFs) and Dempster's rule of combination to represent classification uncertainty and integrate the PGIS-based grazing intensity maps. These maps were used as individual sets of evidence in the application of EBFs to evaluate the proposition that "This area or pixel belongs to the high, medium, or low grazing intensity class because the local expert(s) says (say) so". The class-area-weighted averages of EBFs based on each of the PGIS-based maps show that the lowest degree of classification uncertainty is associated with maps in which "vegetation species" was used as the mapping criterion. This criterion, together with local landscape attributes of livestock use, may be considered an appropriate standard measure for grazing intensity. The maps of integrated EBFs of grazing intensity show that classification uncertainty is high when the local experts apply at least two mapping criteria together. This study demonstrates the usefulness of EBFs for representing classification uncertainty and the possibility of using the EBF values to identify criteria for PGIS-based mapping of livestock grazing intensity.
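The core operation, Dempster's rule of combination, can be sketched for two evidence sources as follows (a generic implementation, not the authors' GIS workflow; representing basic probability assignments as dictionaries keyed by frozensets of hypotheses is our assumption):

```python
def dempster_combine(m1, m2):
    """Combine two basic probability assignments over the same frame.

    m1, m2: dict mapping frozenset of hypotheses -> mass (each sums to 1).
    Mass assigned to pairs of sets with empty intersection is conflicting
    evidence; Dempster's rule discards it and renormalises the rest.
    """
    combined = {}
    conflict = 0.0
    for a, mass_a in m1.items():
        for b, mass_b in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mass_a * mass_b
            else:
                conflict += mass_a * mass_b
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    scale = 1.0 - conflict
    return {s: m / scale for s, m in combined.items()}
```

For example, two experts who each assign mass 0.5-0.6 to "high" (with the remainder on the full frame, i.e. "don't know") combine to a belief of 0.8 in "high", illustrating how agreement sharpens the classification.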
287.
Background
Forest fuel treatments have been proposed as tools to stabilize carbon stocks in fire-prone forests in the western U.S.A. Although fuel treatments such as thinning and burning are known to immediately reduce forest carbon stocks, there are suggestions that these losses may be paid back over the long term if treatments sufficiently reduce future wildfire severity or prevent deforestation. Although fire severity and post-fire tree regeneration have been identified as important influences on long-term carbon dynamics, it remains unclear how natural variability in these processes might affect the ability of fuel treatments to protect forest carbon resources. We surveyed a wildfire where fuel treatments were put in place before the fire and estimated the short-term impact of treatment and wildfire on aboveground carbon stocks at our study site. We then used a common vegetation growth simulator in conjunction with sensitivity analysis techniques to assess how predicted timescales of carbon recovery after fire are sensitive to variation in rates of fire-related tree mortality and post-fire tree regeneration.
Results
We found that fuel reduction treatments were successful at ameliorating fire severity at our study site by removing an estimated 36% of aboveground biomass. Treated and untreated stands stored similar amounts of carbon three years after the wildfire, but differences in fire severity were such that untreated stands maintained only 7% of aboveground carbon as live trees, versus 51% in treated stands. Over the long term, our simulations suggest that treated stands in our study area will recover baseline carbon storage 10 to 35 years more quickly than untreated stands. Our sensitivity analysis found that rates of fire-related tree mortality strongly influence estimates of post-fire carbon recovery. Rates of regeneration were less influential on recovery timing, except when fire severity was high.
Conclusions
Our ability to predict the response of forest carbon resources to anthropogenic and natural disturbances requires models that incorporate uncertainty in processes important to long-term forest carbon dynamics. To the extent that fuel treatments are able to ameliorate tree mortality rates or prevent deforestation resulting from wildfire, our results suggest that treatments may be a viable strategy to stabilize existing forest carbon stocks.
288.
In the Global Positioning System there is, by design, no provision for real-time integrity information within the Standard Positioning Service. However, in safety-critical sectors like aviation, stringent integrity performance requirements must be met. This can be achieved using special augmentation systems, RAIM (Receiver Autonomous Integrity Monitoring), or both. RAIM, the most cost-effective method, relies on data consistency and therefore requires redundant measurements for its operation. An external aid to provide this redundancy can take the form of an inertial navigation system, which should enable continued performance even when no redundant satellite measurements are available. An algorithm presented in previous papers by the authors detects the rate of slowly growing errors; these belong to the class of errors that are most difficult to detect, and the algorithm was shown to be effective for their early detection. First, the rate detector is tested for varying faults. Second, real data are used to validate the rate detector algorithm. The data are extensively analyzed to ascertain whether they are suitable for integrity and fault diagnostics. A modification to the original rate detector algorithm is suggested through the addition of a bias state to the dynamic model. Its performance is then compared with existing techniques, and substantial improvement is shown.
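The data-consistency principle behind snapshot RAIM can be sketched as a least-squares residual test (a generic illustration, not the authors' rate detector; the matrix shapes and the absence of measurement weighting are simplifying assumptions):

```python
import numpy as np

def raim_test_statistic(H, y):
    """Least-squares residual test statistic used in snapshot RAIM.

    H : n x 4 geometry matrix (line-of-sight unit vectors plus a clock
        column), with n >= 5 so that redundancy exists
    y : n-vector of pseudorange residuals (measured minus predicted)

    Returns the sum of squared residuals after the least-squares fit.
    Under the no-fault hypothesis it follows a chi-square distribution
    with n - 4 degrees of freedom, so it can be compared to a threshold
    chosen for a required false-alarm probability.
    """
    x, *_ = np.linalg.lstsq(H, y, rcond=None)  # position/clock estimate
    residual = y - H @ x                       # what the fit cannot explain
    return float(residual @ residual)
```

With fewer than five satellites the residual is identically zero and no fault can be detected, which is exactly the redundancy gap that inertial aiding is meant to fill.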
289.
A new method is presented for the computation of the gravitational attraction of topographic masses when their height information is given on a regular grid. It is shown that representing the terrain relief by a bilinear surface not only offers a serious alternative to polyhedra modeling, but also approaches the continuous reality even more smoothly. Inserting a bilinear approximation into the known scheme of deriving closed analytical expressions for the potential and its first-order derivatives of an arbitrarily shaped polyhedron leads to a one-dimensional integral with, apparently, no analytical solution. However, due to the high degree of smoothness of the integrand, the numerical computation of this integral is very efficient. Numerical tests using synthetic data and a densely sampled digital terrain model in the Bavarian Alps prove that the new method is comparable to, or even faster than, terrain modeling using polyhedra.
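The bilinear surface spanned over a single grid cell, which underlies the method, can be sketched as follows (a generic interpolation formula; the parameter names are ours):

```python
def bilinear_height(h00, h10, h01, h11, u, v):
    """Height on the bilinear surface patch spanned by four grid corners.

    h00, h10, h01, h11: heights at the cell corners (x, y), (x+dx, y),
                        (x, y+dy), (x+dx, y+dy)
    u, v              : fractional coordinates in [0, 1] inside the cell

    The surface is linear along each grid line but contains the cross
    term u*v, which is what makes it smoother than a flat-topped prism.
    """
    return (h00 * (1 - u) * (1 - v) + h10 * u * (1 - v)
            + h01 * (1 - u) * v + h11 * u * v)
```

The gravitational attraction of the mass column beneath such a patch is what the paper reduces to an efficiently computable one-dimensional integral.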
290.
Geoid determination using adapted reference field, seismic Moho depths and variable density contrast (Total citations: 4; self-citations: 0; citations by others: 4)
The traditional remove-restore technique for geoid computation suffers from two main drawbacks. The first is the assumption of an isostatic hypothesis to compute the compensation masses. The second is the double consideration of the effect of the topographic–isostatic masses within the data window, through removing the reference field and the terrain reduction process. To overcome the first disadvantage, seismic Moho depths, representing more or less the actual compensating masses, have been used with variable density anomalies computed by employing the topographic–isostatic mass balance principle. To avoid the double consideration of the effect of the topographic–isostatic masses within the data window, the effect of these masses for the fixed data window used, expressed in terms of potential coefficients, has been subtracted from the reference field, yielding an adapted reference field. This adapted reference field has been used for the remove-restore technique. The necessary harmonic analysis of the topographic–isostatic potential using seismic Moho depths with variable density anomalies is given. A wide comparison is made among geoids computed with the adapted reference field, using both the Airy–Heiskanen isostatic model and seismic Moho depths with variable density anomalies, and a geoid computed by the traditional remove-restore technique. The results show that using seismic Moho depths with variable density anomalies together with the adapted reference field gives the best relative geoid accuracy compared to the GPS/levelling geoid.
Received: 3 October 2001 / Accepted: 20 September 2002
Correspondence to: H.A. Abd-Elmotaal
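Schematically, the remove-restore scheme discussed here can be written as follows (a minimal sketch in standard notation, not the paper's exact formulation; in the adapted variant the topographic–isostatic contribution within the data window is already absorbed into the reference field term):

```latex
\underbrace{\Delta g_{\mathrm{res}}
  = \Delta g - \Delta g_{\mathrm{ref}} - \delta g_{\mathrm{TI}}}_{\text{remove}},
\qquad
\underbrace{N
  = N_{\mathrm{ref}} + N(\Delta g_{\mathrm{res}}) + \delta N_{\mathrm{TI}}}_{\text{restore}},
```

where \(\Delta g_{\mathrm{ref}}\), \(N_{\mathrm{ref}}\) come from the (adapted) reference field, \(\delta g_{\mathrm{TI}}\) is the topographic–isostatic reduction, \(N(\Delta g_{\mathrm{res}})\) the Stokes contribution of the residual anomalies, and \(\delta N_{\mathrm{TI}}\) the indirect effect. The double-counting criticized in the abstract arises when \(\Delta g_{\mathrm{ref}}\) and \(\delta g_{\mathrm{TI}}\) both contain the same topographic–isostatic signal.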