181.
Many regions around the world require improved gravimetric databases to support very accurate geoid modeling for the modernization of height systems using GPS. We present a simple yet effective method to assess gravity data requirements, particularly the necessary resolution, for a desired precision in geoid computation. The approach is based on simulating high-resolution gravimetry using a topography-correlated model that is adjusted to be consistent with an existing network of gravity data. Analysis of these adjusted, simulated data through Stokes's integral indicates where existing gravity data must be supplemented by new surveys in order to achieve an acceptable level of omission error in the geoid undulation. The simulated model can equally be used to analyze commission error and, to a limited extent, model error and data inconsistencies. The proposed method is applied to South Korea and shows clearly where existing gravity data are too scarce for precise geoid computation.
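The resolution analysis described here can be illustrated with a toy computation. The Python sketch below evaluates a discrete Stokes integral over simulated gravity anomalies at a fine and a coarse grid spacing; the difference between the two results plays the role of the omission error. The anomaly field, the grid extents over South Korea, and the handling of the innermost zone are illustrative assumptions, not the authors' data or code.

```python
# Toy Stokes integration at two grid resolutions; the spread between the
# results is a rough proxy for the omission error discussed in the abstract.
import numpy as np

R = 6371000.0   # mean Earth radius (m)
GAMMA = 9.81    # normal gravity (m/s^2)

def stokes_kernel(psi):
    """Classical Stokes function S(psi), psi in radians (psi > 0)."""
    s = np.sin(psi / 2.0)
    return (1.0 / s - 6.0 * s + 1.0 - 5.0 * np.cos(psi)
            - 3.0 * np.cos(psi) * np.log(s + s * s))

def geoid_undulation(dg, lat, lon, lat0, lon0):
    """Discrete Stokes integral: N = R/(4 pi gamma) * sum S(psi) dg dA."""
    dlat = np.deg2rad(abs(lat[1] - lat[0]))
    dlon = np.deg2rad(abs(lon[1] - lon[0]))
    glat, glon = np.meshgrid(np.deg2rad(lat), np.deg2rad(lon), indexing="ij")
    p0, l0 = np.deg2rad(lat0), np.deg2rad(lon0)
    # spherical distance from the computation point to each grid cell
    cospsi = (np.sin(p0) * np.sin(glat)
              + np.cos(p0) * np.cos(glat) * np.cos(glon - l0))
    psi = np.arccos(np.clip(cospsi, -1.0, 1.0))
    mask = psi > 1e-6               # skip the singular innermost zone
    dA = np.cos(glat) * dlat * dlon  # cell solid angle (steradians)
    return (R / (4 * np.pi * GAMMA)
            * np.sum(stokes_kernel(psi[mask]) * dg[mask] * dA[mask]))

rng = np.random.default_rng(0)
for spacing in (0.05, 0.2):          # fine vs coarse grid (degrees)
    lat = np.arange(36.0, 39.0, spacing)
    lon = np.arange(126.0, 130.0, spacing)
    dg = 1e-5 * rng.standard_normal((lat.size, lon.size))  # ~1 mGal in m/s^2
    print(spacing, geoid_undulation(dg, lat, lon, 37.5, 128.0))
```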
182.
This paper presents a simple and effective approach that incorporates single-frequency, L1 time-differenced GPS carrier phase (TDCP) measurements without the need for ambiguity resolution techniques or the complexity of accommodating delayed-state terms. Static trial results are included to illustrate the stochastic characteristics of the TDCP measurements and their effectiveness in controlling position error growth. The formulation of the TDCP observation model within a 17-state tightly coupled GPS/INS iterated extended Kalman filter (IEKF) is also described. Preliminary land vehicle trial results illustrate the effectiveness of the TDCP approach, which provides sub-meter positional accuracy when operating for more than 10 min.
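The TDCP observable itself can be sketched in a few lines under the paper's key premise: because the integer ambiguity stays constant between epochs (absent cycle slips), it cancels in the time difference, leaving a precise delta-range with no ambiguity resolution at all. The range profile and noise level below are synthetic assumptions for illustration.

```python
# Forming a time-differenced carrier phase (TDCP) observable from synthetic
# consecutive L1 phase measurements; the constant ambiguity cancels in the
# epoch-to-epoch difference.
import numpy as np

L1_WAVELENGTH = 0.1903  # m, GPS L1 carrier wavelength

rng = np.random.default_rng(1)
n_epochs = 5
true_range = 21_000_000.0 + 50.0 * np.arange(n_epochs)  # moving receiver (m)
ambiguity = 123_456.0                                   # cycles, constant
phase_noise_m = 0.002 * rng.standard_normal(n_epochs)   # ~2 mm phase noise

# carrier phase in cycles: range/lambda + integer ambiguity + noise
phase = (true_range + phase_noise_m) / L1_WAVELENGTH + ambiguity

# TDCP: the ambiguity cancels in the difference (absent cycle slips)
tdcp_m = np.diff(phase) * L1_WAVELENGTH
print("true delta-range (m):", np.diff(true_range))
print("TDCP delta-range (m):", tdcp_m)
```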
183.
Recent tests of the geometric stability of several digital cameras that were not designed for photogrammetric applications have shown that the accuracies achieved in object space are either limited or that the accuracy potential is not exploited to the fullest extent. A total of 72 calibrations were calculated with four different software products for eleven digital camera models with different hardware setups, some with mechanical fixation of one or more parts. The calibration procedure was chosen in accordance with a German guideline for the evaluation of optical 3D measuring systems [VDI/VDE 2634 Part 1, 2002. Optical 3D Measuring Systems–Imaging Systems with Point-by-point Probing. Beuth Verlag, Berlin]. All images were taken with ringflashes, which was considered a standard method for close-range photogrammetry. In cases where the flash was mounted to the lens, the force exerted on the lens tube and the camera mount greatly reduced the achieved accuracy. Mounting the ringflash to the camera instead resulted in a large improvement of accuracy in object space. For standard calibration, the best accuracies in object space were achieved with a Canon EOS 5D and a 35 mm Canon lens whose focusing tube was fixed with epoxy (47 μm maximum absolute length measurement error in object space). The fixation of the Canon lens was fairly easy and inexpensive, resulting in a sevenfold increase in accuracy compared with the same lens type without modification. A similar accuracy was achieved with a Nikon D3 when mounting the ringflash to the camera instead of the lens (52 μm maximum absolute length measurement error in object space). Parameterisation of geometric instabilities by introducing an image-variant interior orientation in the calibration process improved the results for most cameras. In this case, a modified Alpa 12 WA yielded the best results (29 μm maximum absolute length measurement error in object space). Extending the parameter model with the FiBun software to model not only an image-variant interior orientation but also deformations in the sensor domain showed significant improvements only for a small group of cameras. The Nikon D3 yielded the best overall accuracy (25 μm maximum absolute length measurement error in object space) with this calibration procedure, indicating at the same time the presence of image-invariant error in the sensor domain. Overall, the calibration results showed that digital cameras can be applied to accurate photogrammetric surveys and that little effort is needed to greatly improve their accuracy potential.
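All the accuracy figures quoted above use the acceptance metric of VDI/VDE 2634 Part 1, the maximum absolute length measurement error: the largest deviation between photogrammetrically measured lengths and their calibrated reference values. A small sketch of that metric follows, with invented scale-bar lengths chosen so the result reproduces the 47 μm figure quoted for the epoxy-fixed Canon lens.

```python
# Maximum absolute length measurement error per VDI/VDE 2634 Part 1.
# The sample lengths are invented for illustration.
measured  = [500.012, 750.031, 1000.047, 1249.962]   # mm, from the survey
reference = [500.000, 750.000, 1000.000, 1250.000]   # mm, calibrated bars

lme = max(abs(m - r) for m, r in zip(measured, reference))
print(f"maximum absolute length measurement error: {lme * 1000:.0f} um")
```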
184.
Editor's comment: This letter was received from Dr. E.H. Knickmeyer, University of Calgary, Department of Surveying Engineering, 2500 University Dr. N., Calgary, Alberta, Canada T2N 1N4, in April 1988. A response to this letter has been written by Dr. C. Boucher, who represents the Central Bureau of IERS and IAG SSG 5.123. Similar letters, dealing with matters of interest for geodesy and for IAG, will in the future be published in Bulletin Géodésique with an answer by the person or committee in IAG most closely related to or responsible for the matter dealt with in the letter.
185.
In a project to classify livestock grazing intensity using participatory geographic information systems (PGIS), we encountered the problem of how to synthesize PGIS-based maps of livestock grazing intensity that were prepared separately by local experts. We investigated the utility of evidential belief functions (EBFs) and Dempster's rule of combination to represent classification uncertainty and to integrate the PGIS-based grazing intensity maps. These maps were used as individual sets of evidence in the application of EBFs to evaluate the proposition that "this area or pixel belongs to the high, medium, or low grazing intensity class because the local expert(s) say so". The class-area-weighted averages of EBFs based on each of the PGIS-based maps show that the lowest degree of classification uncertainty is associated with maps in which "vegetation species" was used as the mapping criterion. This criterion, together with local landscape attributes of livestock use, may be considered an appropriate standard measure of grazing intensity. The maps of integrated EBFs of grazing intensity show that classification uncertainty is high when the local experts apply at least two mapping criteria together. This study demonstrates the usefulness of EBFs for representing classification uncertainty and the possibility of using the EBF values in identifying and applying criteria for PGIS-based mapping of livestock grazing intensity.
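Dempster's rule of combination, which the study applies per area or pixel, can be sketched compactly. The example below fuses basic belief masses from two hypothetical experts over the frame {high, medium, low}; the mass values are illustrative, not the study's data.

```python
# Dempster's rule: multiply masses over intersecting focal elements and
# renormalize by the non-conflicting mass.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions keyed by frozenset focal elements."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    k = 1.0 - conflict  # normalisation constant
    return {s: w / k for s, w in combined.items()}, conflict

H, M, L = frozenset({"high"}), frozenset({"medium"}), frozenset({"low"})
theta = H | M | L  # full frame: an expert's residual ignorance

expert1 = {H: 0.6, M: 0.2, theta: 0.2}   # leans "high", some uncertainty
expert2 = {H: 0.4, L: 0.3, theta: 0.3}

fused, conflict = dempster_combine(expert1, expert2)
for s, w in sorted(fused.items(), key=lambda x: -x[1]):
    print(set(s), round(w, 3))
print("conflict:", round(conflict, 3))
```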
186.
Background
Forest fuel treatments have been proposed as tools to stabilize carbon stocks in fire-prone forests in the Western U.S.A. Although fuel treatments such as thinning and burning are known to immediately reduce forest carbon stocks, there are suggestions that these losses may be paid back over the long term if treatments sufficiently reduce future wildfire severity or prevent deforestation. Although fire severity and post-fire tree regeneration have been identified as important influences on long-term carbon dynamics, it remains unclear how natural variability in these processes might affect the ability of fuel treatments to protect forest carbon resources. We surveyed a wildfire where fuel treatments were put in place before the fire and estimated the short-term impact of treatment and wildfire on aboveground carbon stocks at our study site. We then used a common vegetation growth simulator in conjunction with sensitivity analysis techniques to assess how predicted timescales of carbon recovery after fire depend on variation in rates of fire-related tree mortality and post-fire tree regeneration.
Results
We found that fuel reduction treatments were successful in ameliorating fire severity at our study site by removing an estimated 36% of aboveground biomass. Treated and untreated stands stored similar amounts of carbon three years after the wildfire, but differences in fire severity were such that untreated stands maintained only 7% of aboveground carbon as live trees, versus 51% in treated stands. Over the long term, our simulations suggest that treated stands in our study area will recover baseline carbon storage 10–35 years more quickly than untreated stands. Our sensitivity analysis found that rates of fire-related tree mortality strongly influence estimates of post-fire carbon recovery. Rates of regeneration were less influential on recovery timing, except when fire severity was high.
Conclusions
Our ability to predict the response of forest carbon resources to anthropogenic and natural disturbances requires models that incorporate uncertainty in processes important to long-term forest carbon dynamics. To the extent that fuel treatments are able to reduce tree mortality rates or prevent deforestation resulting from wildfire, our results suggest that treatments may be a viable strategy for stabilizing existing forest carbon stocks.
187.
A new method is presented for the computation of the gravitational attraction of topographic masses when their height information is given on a regular grid. It is shown that representing the terrain relief by means of a bilinear surface not only offers a serious alternative to polyhedral modeling, but also approximates the continuous reality even more smoothly. Inserting a bilinear approximation into the known scheme for deriving closed analytical expressions for the potential and its first-order derivatives of an arbitrarily shaped polyhedron leads to a one-dimensional integral that apparently has no analytical solution. However, due to the high degree of smoothness of the integrand, the numerical computation of this integral is very efficient. Numerical tests using synthetic data and a densely sampled digital terrain model in the Bavarian Alps show that the new method is comparable to, or even faster than, terrain modeling using polyhedra.
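The two ingredients of the method, a bilinear height surface per grid cell and efficient numerical quadrature of a smooth integrand, can be sketched as follows. This toy version does only the depth integral analytically and leaves a smooth two-dimensional surface integral for quadrature; the paper's derivation goes further and reduces the problem to a single one-dimensional integral. Cell geometry, density, and corner heights are illustrative assumptions.

```python
# Vertical attraction at the origin of one grid cell whose top follows a
# bilinear surface; depth integral analytic, surface integral by quadrature.
import numpy as np

G = 6.674e-11   # gravitational constant (m^3 kg^-1 s^-2)
RHO = 2670.0    # standard topographic density (kg/m^3)

def bilinear(x, y, x0, x1, y0, y1, h00, h10, h01, h11):
    """Bilinear height surface through the four cell-corner heights."""
    u = (x - x0) / (x1 - x0)
    v = (y - y0) / (y1 - y0)
    return (h00 * (1 - u) * (1 - v) + h10 * u * (1 - v)
            + h01 * (1 - u) * v + h11 * u * v)

def gz_cell(x0, x1, y0, y1, corners, n=40):
    """Attraction from z = 0 up to h(x, y) at a station in the origin.

    The z-integral of z/r^3 is analytic: 1/s - 1/sqrt(s^2 + h^2) with
    s = |(x, y)|; the remaining smooth 2-D integral uses Gauss-Legendre.
    """
    t, w = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (x1 - x0) * t + 0.5 * (x1 + x0)
    y = 0.5 * (y1 - y0) * t + 0.5 * (y1 + y0)
    X, Y = np.meshgrid(x, y, indexing="ij")
    W = np.outer(w, w) * 0.25 * (x1 - x0) * (y1 - y0)  # weights * Jacobian
    h = bilinear(X, Y, x0, x1, y0, y1, *corners)
    s = np.hypot(X, Y)
    integrand = 1.0 / s - 1.0 / np.sqrt(s * s + h * h)
    return G * RHO * np.sum(W * integrand)

# one 50 m x 50 m cell, offset from the station, corner heights in metres
print(gz_cell(100.0, 150.0, 100.0, 150.0, (120.0, 130.0, 125.0, 140.0)))
```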
188.
The traditional remove-restore technique for geoid computation suffers from two main drawbacks. The first is the assumption of an isostatic hypothesis to compute the compensating masses. The second is the double consideration of the effect of the topographic-isostatic masses within the data window, through removing the reference field and through the terrain reduction process. To overcome the first disadvantage, seismic Moho depths, representing more or less the actual compensating masses, have been used with variable density anomalies computed by employing the topographic-isostatic mass balance principle. To avoid the double consideration of the effect of the topographic-isostatic masses within the data window, the effect of these masses for the fixed data window used, expressed in terms of potential coefficients, has been subtracted from the reference field, yielding an adapted reference field. This adapted reference field has been used for the remove-restore technique. The necessary harmonic analysis of the topographic-isostatic potential using seismic Moho depths with variable density anomalies is given. A wide comparison is made among geoids computed with the adapted reference field using both the Airy-Heiskanen isostatic model and seismic Moho depths with variable density anomalies, and a geoid computed by the traditional remove-restore technique. The results show that using seismic Moho depths with variable density anomalies along with the adapted reference field gives the best relative geoid accuracy compared to the GPS/levelling geoid.
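The mass-balance step mentioned above admits a compact illustration: rather than fixing a density contrast and solving for an Airy root, the seismic Moho is taken as given and a variable density anomaly is solved for so that each column balances the topographic mass. The normal crustal thickness, crustal density, and sample columns below are assumptions for illustration, not the paper's data.

```python
# Variable density anomaly from the topographic-isostatic mass balance
# principle: rho * H = d_rho * (T_seismic - T_normal) per column.
import numpy as np

RHO_CRUST = 2670.0   # kg/m^3, assumed crustal density
T_NORMAL = 30000.0   # m, assumed normal crustal thickness

def density_anomaly(topo_height, seismic_moho_depth):
    """Density anomaly that balances the topographic mass of each column."""
    root = seismic_moho_depth - T_NORMAL
    return RHO_CRUST * topo_height / root

H = np.array([500.0, 1500.0, 3000.0])          # topographic heights (m)
moho = np.array([33000.0, 38000.0, 45000.0])   # seismic Moho depths (m)
print(density_anomaly(H, moho))                # kg/m^3 per column
```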
189.
It is suggested that a spherical harmonic representation of the geoidal heights using global Earth gravity models (EGMs) might be accurate enough for many applications, although some short-wavelength signals are missing from a potential coefficient model. A 'direct' method of geoidal height determination from global Earth gravity model coefficients alone and an 'indirect' approach, in which geoidal heights are determined from height anomalies computed from a global gravity model, are investigated. In both methods, suitable correction terms are applied. The results of computations in two test areas show that the direct and indirect approaches yield good agreement with the classical gravimetric geoidal heights determined from Stokes' formula. Surprisingly, the indirect method yields better agreement with geoid heights derived from global positioning system (GPS)/levelling, which are used to demonstrate such improvements, than the gravimetric geoid heights at the same GPS stations. It is demonstrated that the application of correction terms in both methods improves the agreement of geoidal heights at GPS/levelling stations. It is also found that the correction terms in the direct method are mostly similar to those used for the indirect determination of geoidal heights from height anomalies.
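The 'direct' method can be sketched as a spherical-harmonic synthesis of the geoidal height from model coefficients. The sketch below uses tiny random coefficients, evaluates the series at the reference radius, and omits the paper's correction terms; it shows the structure of the summation, not the authors' implementation.

```python
# Geoid height from (toy) potential coefficients:
# N ~ R * sum_{n,m} (C cos(m*lon) + S sin(m*lon)) * Pbar_nm(sin(lat))
import numpy as np
from scipy.special import lpmv
from math import factorial

R = 6378137.0  # reference radius a (m)

def pbar(n, m, x):
    """Fully normalized associated Legendre function, geodetic convention
    (no Condon-Shortley phase)."""
    norm = np.sqrt((2 - (m == 0)) * (2 * n + 1)
                   * factorial(n - m) / factorial(n + m))
    return (-1) ** m * norm * lpmv(m, n, x)

def geoid_height(C, S, lat_deg, lon_deg, nmax):
    t = np.sin(np.deg2rad(lat_deg))   # argument of the Legendre functions
    lon = np.deg2rad(lon_deg)
    N = 0.0
    for n in range(2, nmax + 1):
        for m in range(0, n + 1):
            N += (C[n][m] * np.cos(m * lon)
                  + S[n][m] * np.sin(m * lon)) * pbar(n, m, t)
    return R * N

nmax = 8
rng = np.random.default_rng(4)
C = rng.standard_normal((nmax + 1, nmax + 1)) * 1e-7  # toy coefficients
S = rng.standard_normal((nmax + 1, nmax + 1)) * 1e-7
print(geoid_height(C, S, 52.0, 13.4, nmax), "m")
```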
190.
Omitted variables and measurement errors in explanatory variables frequently occur in hedonic price models. Ignoring these problems leads to biased estimators. In this paper, we develop a constrained autoregression-structural equation model (ASEM) to handle both types of problems. Standard panel data models for handling omitted variables bias are based on the assumption that the omitted variables are time-invariant. ASEM allows handling of both time-varying and time-invariant omitted variables by constrained autoregression. In the case of measurement error, standard approaches require additional external information, which is usually difficult to obtain. ASEM exploits the fact that panel data are repeatedly measured, which allows decomposing the variance of a variable into the true variance and the variance due to measurement error. We apply ASEM to estimate a hedonic housing model for urban Indonesia. To gain insight into the consequences of measurement error and omitted variables, we compare the ASEM estimates with the outcomes of (1) a standard SEM, which does not account for omitted variables, (2) a constrained autoregression model, which does not account for measurement error, and (3) a fixed-effects hedonic model, which ignores measurement error and time-varying omitted variables. The differences between the ASEM estimates and the outcomes of the three alternative approaches are substantial.
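The variance decomposition that ASEM exploits can be demonstrated in a few lines: with two panel waves measuring the same stable attribute with independent errors, the between-wave covariance estimates the true-score variance, and the error variance follows by subtraction. The simulated variances below are assumptions chosen so the recovery can be checked.

```python
# Decomposing total variance into true variance and measurement-error
# variance from repeated panel measurements.
import numpy as np

rng = np.random.default_rng(5)
n = 10_000
true_score = rng.normal(0.0, 1.0, n)          # stable housing attribute
wave1 = true_score + rng.normal(0.0, 0.5, n)  # wave-1 measurement
wave2 = true_score + rng.normal(0.0, 0.5, n)  # wave-2 measurement

total_var = wave1.var(ddof=1)
true_var = np.cov(wave1, wave2, ddof=1)[0, 1]  # = Var(true) if errors indep.
error_var = total_var - true_var
print(f"true variance  ~ {true_var:.3f} (simulated: 1.00)")
print(f"error variance ~ {error_var:.3f} (simulated: 0.25)")
print(f"reliability    ~ {true_var / total_var:.3f}")
```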