21.

At present, spatiotemporal information and positioning/navigation services have become important new infrastructure. Driven by artificial general intelligence, an intelligent era led by large models has arrived, and increasingly powerful large models will play an ever more important role in the intelligent processing and application of surveying and mapping spatiotemporal information. Taking large models as the research paradigm, this paper summarizes the state of the art of large models in the intelligent processing of surveying and mapping spatiotemporal information, analyzes the challenges they face in this field, and elaborates the key roles of three core aspects of spatiotemporal surveying and mapping large models: multimodal fusion and understanding architecture design, prompt-engineering optimization and fine-tuning, and human-in-the-loop guided decision-making. Finally, the challenges and development trends of such models are discussed with respect to four aspects: depth of domain understanding, data-security risks, content-credibility assurance, and optimization of training and deployment costs.

22.
In an elementary approach, every geometrical height difference between the staff points of a levelling line should have a corresponding average g value for the determination of potential differences in the Earth’s gravity field. In practice this condition requires as many gravity data as the number of staff points if linear variation of g is assumed between them. Because of the expensive fieldwork, the necessary data have to be supplied from different sources. This study proposes an alternative solution, validated at a test bed located in the Mecsek Mountains, Southwest Hungary, where a detailed gravity survey, as dense as the staff-point density (~1 point/34 m), is available along a 4.3-km-long levelling line. In the first part of the paper the effect of the point density of gravity data on the accuracy of the potential difference is investigated. The average g value is simply derived from two neighbouring g measurements along the levelling line, which are incrementally decimated in consecutive rounds of processing. The results show that the error of the potential difference between the endpoints of the line exceeds 0.1 mm in terms of length unit if the sampling distance is greater than 2 km. Thereafter, a suitable method for the densification of the decimated g measurements is provided. It is based on forward gravity modelling utilising a high-resolution digital terrain model, the normal gravity and the complete Bouguer anomalies. The test shows that the error is only in the order of 10⁻³ mm even if the sampling distance of the g measurements is 4 km. As a component of the error sources of levelling, the ambiguity of the levelled height difference, which is the Euclidean distance between the inclined equipotential surfaces, is also investigated. Although its effect accumulated along the test line is almost zero, it reaches 0.15 mm in a 1-km-long intermediate section of the line.
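The elementary computation described above can be sketched in a few lines; this is an illustrative reconstruction, not the authors' code, and the function name and unit conventions are assumptions:

```python
# Illustrative sketch (not the authors' code): potential difference along a
# levelling line, assuming linear variation of g between neighbouring staff
# points, so each levelled increment is weighted by the segment-average g.
def potential_difference(height_increments, g_values):
    """height_increments: levelled height differences dh_i (m) between
    consecutive staff points; g_values: gravity at the staff points (m/s^2),
    one more value than there are increments.
    Returns sum_i(g_avg_i * dh_i) in m^2/s^2."""
    assert len(g_values) == len(height_increments) + 1
    total = 0.0
    for i, dh in enumerate(height_increments):
        g_avg = 0.5 * (g_values[i] + g_values[i + 1])  # average g over segment
        total += g_avg * dh
    return total
```

Decimating `g_values` and re-densifying them from forward gravity modelling, as the paper does, changes only the `g_avg` term, which is what makes the point-density experiment straightforward.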
23.
Many regions around the world require improved gravimetric data bases to support very accurate geoid modeling for the modernization of height systems using GPS. We present a simple yet effective method to assess gravity data requirements, particularly the necessary resolution, for a desired precision in geoid computation. The approach is based on simulating high-resolution gravimetry using a topography-correlated model that is adjusted to be consistent with an existing network of gravity data. Analysis of these adjusted, simulated data through Stokes’s integral indicates where existing gravity data must be supplemented by new surveys in order to achieve an acceptable level of omission error in the geoid undulation. The simulated model can equally be used to analyze commission error, as well as model error and data inconsistencies to a limited extent. The proposed method is applied to South Korea and shows clearly where existing gravity data are too scarce for precise geoid computation.
24.
A three-step hierarchical Semi-Automated Empirical Methane Emission Model (SEMEM) has been used to estimate methane emission from wetlands and waterlogged areas in India using data from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor onboard the Terra satellite. Wetland Surface Temperature (WST), methane emission fluxes and wetland extent have been incorporated as parameters in order to model the methane emission. Monthly MODIS data covering the whole of India from November 2004 to April 2006 were analysed and monthly methane emissions estimated. Interpolation techniques were adopted to fill the data gaps caused by cloudy conditions during the monsoon period. An AutoRegressive Integrated Moving Average (ARIMA) model was fitted in SPSS software to estimate the emitted methane for the months of May 2006 to August 2006.
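The gap-filling step can be illustrated with simple linear interpolation; the abstract does not specify which interpolation technique was used, so this is only an assumed, minimal example:

```python
# Hypothetical sketch: fill cloud-induced gaps (None) in a monthly series by
# linear interpolation between the nearest valid neighbours. The actual
# technique used in the paper is not specified in the abstract.
def fill_gaps(series):
    """series: list of monthly values with None for cloud-contaminated months.
    Interior gaps are interpolated; leading/trailing gaps are left untouched."""
    filled = list(series)
    known = [i for i, v in enumerate(series) if v is not None]
    for a, b in zip(known, known[1:]):
        for i in range(a + 1, b):
            t = (i - a) / (b - a)  # fractional position inside the gap
            filled[i] = series[a] + t * (series[b] - series[a])
    return filled
```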
25.
Interferometric Synthetic Aperture Radar (InSAR) is nowadays a precise technique for monitoring and detecting ground deformation at the millimetre level over large areas using multi-temporal SAR images. Persistent Scatterer InSAR (PSInSAR), an advanced version of InSAR, is an effective tool for measuring ground deformation using temporally stable reference points, or persistent scatterers. We apply both the PSInSAR and Small Baseline Subset (SBAS) methods, based on the spatial correlation of the interferometric phase, to estimate ground deformation and perform time-series analysis. In this study, we select Las Vegas, Nevada, USA as our test area and detect the ground deformation along the satellite line-of-sight (LOS) during November 1992–September 2000 using 44 C-band SAR images of the European Remote Sensing (ERS-1 and ERS-2) satellites. We observe that the ground displacement rate of Las Vegas is in the range of −19 to 8 mm/year over this period. We also cross-compare PSInSAR and SBAS using the mean LOS velocity and time series; the comparison shows a correlation coefficient of 0.9467 for the mean LOS velocity. As part of this study, we validate the satellite-derived ground deformation results against the ground water depth of Las Vegas using time-series analysis, and the InSAR measurements show patterns similar to the ground water data.
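The cross-comparison of the two mean LOS velocity fields reduces to a Pearson correlation over common measurement points; a minimal sketch (the inputs here are hypothetical, not the Las Vegas results):

```python
import math

# Sketch of the PSInSAR-vs-SBAS comparison step: Pearson correlation between
# the two mean LOS velocity estimates at common points (illustrative only).
def pearson_r(x, y):
    """x, y: equal-length lists of mean LOS velocities (e.g. mm/year)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```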
26.
Single-frequency precise point positioning (SF-PPP) is a promising precise positioning technique owing to its high accuracy after convergence and its low operating cost. However, challenges still limit its applications at present, such as the long convergence time, low reliability, and poor satellite availability and continuity in kinematic applications. In recent years, achievements in dual-frequency PPP have confirmed that performance can be significantly enhanced by employing a slant ionospheric delay and receiver differential code bias (DCB) constraint model together with multi-constellation Global Navigation Satellite System (GNSS) data. Accordingly, we introduce both the slant ionospheric delay and receiver DCB constraint model and multi-GNSS data into the SF-PPP model. To further overcome the drawbacks of SF-PPP in terms of reliability, continuity, and accuracy in environments where signals are easily blocked, inertial measurements are also adopted in this paper. Finally, we form a new approach that tightly integrates multi-GNSS single-frequency observations and inertial measurements to ameliorate the performance of the ionospheric delay and receiver DCB-constrained SF-PPP. In this model, the inter-system bias between each pair of GNSS systems, the inter-frequency bias between each pair of GLONASS frequencies, the hardware errors of the inertial sensors, the slant ionospheric delay of each user-satellite pair, and the receiver DCB are estimated together with the other parameters in a single Kalman filter. To demonstrate its performance, multi-GNSS and low-cost inertial data from a land-borne experiment are analyzed. The results indicate that noticeable improvements in positioning accuracy, continuity, and reliability can be achieved in both open-sky and complex conditions when using the proposed model compared to conventional GPS SF-PPP.
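All of the listed parameters are estimated jointly in one Kalman filter. A toy scalar measurement update illustrates the estimator's core step; the state, coefficients, and noise values are made up and stand in for the paper's much larger GNSS/INS state vector:

```python
# Toy sketch of a single Kalman filter measurement update (scalar state).
# In the paper's tightly coupled model the state stacks position, inertial
# sensor errors, slant ionospheric delays, ISBs, IFBs, and the receiver DCB;
# here everything is reduced to one dimension for illustration.
def kalman_update(x, P, z, H, R):
    """x: state estimate, P: its variance, z: measurement,
    H: observation coefficient, R: measurement noise variance."""
    K = P * H / (H * P * H + R)   # Kalman gain
    x_new = x + K * (z - H * x)   # correct state by weighted innovation
    P_new = (1 - K * H) * P       # reduce uncertainty after the update
    return x_new, P_new
```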
27.
The Global Navigation Satellite System presents a plausible and cost-effective way of computing the total electron content (TEC). However, the estimated TEC can be seriously affected by the differential code biases (DCB) of frequency-dependent satellites and receivers. Unlike GPS and other satellite systems, GLONASS adopts a frequency-division multiple access mode to distinguish different satellites. This strategy leads to different wavelengths and inter-frequency biases (IFBs) for both pseudo-range and carrier phase observations, whose impacts are rarely considered in ionospheric modeling. We obtained observations from four groups of co-stations to analyze the characteristics of the GLONASS receiver P1P2 pseudo-range IFB with a double-difference method. The results showed that the GLONASS P1P2 pseudo-range IFB remains stable over a period of time and can reach several meters, which cannot be absorbed by the receiver DCB during ionospheric modeling. Given these characteristics, we propose a two-step ionosphere modeling method using a priori IFB information. The experimental analysis showed that the new algorithm can effectively eliminate the adverse effects on the ionospheric model and on the estimation of hardware delay parameters in different space environments. During a period of high solar activity, compared to the traditional GPS + GLONASS modeling algorithm, the absolute average deviation of TEC decreased from 2.17 to 2.07 TECu (TEC units); the average RMS of the GPS satellite DCBs decreased from 0.225 to 0.219 ns, and the average deviation of the GLONASS satellite DCBs decreased from 0.253 to 0.113 ns, an improvement of over 55%.
28.
Precise positioning requires an accurate a priori troposphere model to enhance the solution quality. Several empirical models are available, but they may not properly characterize the state of the troposphere, especially in severe weather conditions. Another possible solution is to use regional troposphere models based on real-time or near-real-time measurements. In this study, we present total refractivity and zenith total delay (ZTD) models based on a numerical weather prediction (NWP) model, Global Navigation Satellite System (GNSS) data and ground-based meteorological observations. We reconstruct the total refractivity profiles over the western part of Switzerland, and the total refractivity profiles as well as ZTDs over Poland, using the least-squares collocation software COMEDIE (Collocation of Meteorological Data for Interpretation and Estimation of Tropospheric Pathdelays) developed at ETH Zürich. In these two case studies, profiles of the total refractivity and ZTDs are calculated from different data sets. For Switzerland, the data set in best agreement with the reference radiosonde (RS) measurements is the combination of ground-based meteorological observations and GNSS ZTDs. Introducing horizontal gradients does not improve the vertical interpolation and results in slightly larger biases and standard deviations. For Poland, the data sets based on meteorological parameters from the NWP Weather Research and Forecasting (WRF) model, and on a combination of the NWP model and GNSS ZTDs, show the best agreement with the reference RS data. In terms of ZTD, the combined NWP-GNSS and GNSS-only data sets exhibit the best accuracy, with an average bias (over all stations) of 3.7 mm and an average standard deviation of 17.0 mm w.r.t. the reference GNSS stations.
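The quoted accuracy figures are biases and standard deviations of ZTD residuals against reference GNSS stations; a minimal sketch of how such statistics might be computed (function and variable names are assumptions, not from the paper):

```python
import math

# Illustrative sketch: bias and sample standard deviation of ZTD residuals
# (model minus reference GNSS), the kind of summary statistics quoted above.
def bias_and_std(model_ztd, ref_ztd):
    """model_ztd, ref_ztd: equal-length lists of ZTD values (e.g. in mm)."""
    diffs = [m - r for m, r in zip(model_ztd, ref_ztd)]
    n = len(diffs)
    bias = sum(diffs) / n
    std = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, std
```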
29.
GPS code pseudorange measurements exhibit group delay variations at the transmitting and the receiving antenna. We calibrated C1 and P2 delay variations with respect to dual-frequency carrier phase observations and obtained nadir-dependent corrections for 32 satellites of the GPS constellation in early 2015 as well as elevation-dependent corrections for 13 receiving antenna models. The combined delay variations reach up to 1.0 m (3.3 ns) in the ionosphere-free linear combination for specific pairs of satellite and receiving antennas. Applying these corrections to the code measurements improves code/carrier single-frequency precise point positioning, ambiguity fixing based on the Melbourne–Wübbena linear combination, and determination of ionospheric total electron content. It also affects fractional cycle biases and differential code biases.
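The ionosphere-free linear combination in which the combined delay variations reach up to 1.0 m can be sketched for GPS L1/L2 code observations; the frequencies are the standard GPS carrier values, and any pseudorange inputs would be illustrative:

```python
# Sketch of the first-order ionosphere-free code combination for GPS L1/L2.
F1 = 1575.42e6  # GPS L1 carrier frequency (Hz)
F2 = 1227.60e6  # GPS L2 carrier frequency (Hz)

def iono_free(p1, p2):
    """p1, p2: code pseudoranges on L1 and L2 (m). The combination
    removes the first-order ionospheric delay, which scales as 1/f^2."""
    a = F1**2 / (F1**2 - F2**2)  # ~ 2.546
    b = F2**2 / (F1**2 - F2**2)  # ~ 1.546
    return a * p1 - b * p2
```

Because the coefficients a and b differ from 1, antenna-dependent delay variations on the two frequencies are amplified in this combination, which is why the calibrated corrections matter for the applications listed above.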
30.
Synthetic aperture radar (SAR) is a day-and-night, all-weather satellite imaging technology. An inherent property of SAR images is speckle noise, which produces granular patterns in the image. Speckle noise occurs due to the interference of backscattered echoes from the earth’s rough surface. There are various speckle reduction techniques in the spatial and transform domains. Non-local means filtering (NLMF) is a denoising technique that uses Gaussian weights. In the NLMF algorithm, filtering is performed by taking the weighted mean of all the pixels in a selected search area. The weight given to a pixel is based on a similarity measure calculated as the weighted Euclidean distance between two windows. Non-local means filtering smoothes out homogeneous areas, but edges are not preserved, so a discontinuity-adaptive weight is used in order to preserve heterogeneous areas such as edges. This technique, called discontinuity-adaptive non-local means filtering, is well adapted and robust under the additive white Gaussian noise (AWGN) model. But speckle is a multiplicative random noise, and hence the Euclidean distance is not a good choice. This paper presents evaluation results of using different distance measures to improve the accuracy of the non-local means filtering technique. The results are verified using real and synthetic images, from which it can be concluded that using the Manhattan distance improves the accuracy of the NLMF technique. The non-local approach is used as a pre- or post-processing step in many denoising algorithms, so improving the NLMF technique would help improve many existing denoising techniques.
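The core change evaluated here, swapping the Euclidean patch distance for the Manhattan distance inside the NLMF weight, can be sketched as follows; the smoothing parameter h and the patch values are illustrative choices, not the paper's settings:

```python
import math

# Minimal sketch of the patch-distance and weighting step in non-local means.
# The paper's proposal amounts to choosing "manhattan" instead of "euclidean"
# as the similarity measure between the two windows.
def patch_distance(p, q, metric="euclidean"):
    """p, q: flattened image patches (equal-length lists of pixel values)."""
    if metric == "manhattan":
        return sum(abs(a - b) for a, b in zip(p, q))
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def nlm_weight(p, q, h=10.0, metric="euclidean"):
    """Gaussian weight given to candidate patch q when denoising patch p;
    h controls the decay of the weight with patch dissimilarity."""
    d = patch_distance(p, q, metric)
    return math.exp(-(d * d) / (h * h))
```

In the full filter, each pixel is replaced by the weighted mean of the centre pixels of all patches in the search area, with these weights normalised to sum to one.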