3124 search results found (search time: 31 ms).
31.
Two different goals in fitting straight lines to data are to estimate a true linear relation (physical law) and to predict values of the dependent variable with the smallest possible error. Regarding the first goal, a Monte Carlo study indicated that the structural-analysis (SA) method of fitting straight lines to data is superior to the ordinary least-squares (OLS) method for estimating true straight-line relations. The number of data points, the slope and intercept of the true relation, and the variances of the errors associated with the independent (X) and dependent (Y) variables influence the degree of agreement. For example, differences between the two line-fitting methods decrease as the error in X becomes small relative to the error in Y. Regarding the second goal, predicting the dependent variable, OLS is better than SA. Again, the difference diminishes as X takes on less error relative to Y. With respect to estimation of slope and intercept and prediction of Y, agreement between Monte Carlo results and large-sample theory was very good for sample sizes of 100, and fair to good for sample sizes of 20. The procedures and error measures are illustrated with two geologic examples.
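The contrast the abstract describes can be sketched with a small Monte Carlo experiment. The Deming-style errors-in-variables fit below is a hypothetical stand-in for the SA method (the abstract does not give its formulas), with the error-variance ratio `lam` assumed known; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_slope(x, y):
    """Ordinary least-squares slope (errors assumed only in y)."""
    x_c, y_c = x - x.mean(), y - y.mean()
    return (x_c @ y_c) / (x_c @ x_c)

def sa_slope(x, y, lam):
    """Deming-style structural slope for errors in both variables.
    lam = var(error in y) / var(error in x), assumed known."""
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    d = syy - lam * sxx
    return (d + np.sqrt(d * d + 4 * lam * sxy ** 2)) / (2 * sxy)

true_slope, n, trials = 2.0, 100, 2000
sx, sy = 0.5, 0.5                     # error std devs in X and Y
lam = (sy / sx) ** 2
ols_est, sa_est = [], []
for _ in range(trials):
    xi = rng.uniform(0, 10, n)        # error-free X values
    x = xi + rng.normal(0, sx, n)     # observed X, with error
    y = true_slope * xi + rng.normal(0, sy, n)
    ols_est.append(ols_slope(x, y))
    sa_est.append(sa_slope(x, y, lam))

# OLS is attenuated toward zero when X carries error; the structural
# estimator is (asymptotically) unbiased for the true slope.
print(f"true {true_slope:.3f}  OLS {np.mean(ols_est):.3f}  SA {np.mean(sa_est):.3f}")
```

As the abstract notes, shrinking `sx` relative to `sy` makes the two estimates converge.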
32.
There is a correspondence between flow in a reservoir and large-scale permeability trends, which can be derived by constraining reservoir models with observed production data. One of the challenges in deriving the permeability distribution of a field from production data is determining the scale of resolution of the permeability. Adaptive Multiscale Estimation (AME) seeks to overcome the problems related to choosing the resolution of the permeability field through dynamic parameterisation selection. The standard AME uses a gradient algorithm to solve several optimisation problems with increasing permeability resolution. This paper presents a hybrid algorithm which combines a gradient search with a stochastic algorithm to improve the robustness of the dynamic parameterisation selection. At low dimension, we use the stochastic algorithm to generate several optimised models. We use information from all of these models to find new optimal refinements, and start new optimisations from several different suggested parameterisations. At higher dimensions we switch to a gradient-type optimiser, whose initial solution is chosen from the ensemble of models suggested by the stochastic algorithm according to a predefined criterion. We demonstrate the robustness of the hybrid algorithm on synthetic test cases, most of which were considered unsolvable with the standard AME algorithm.
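The two-stage idea (a stochastic ensemble at low dimension, then gradient refinement started from the best ensemble member) can be illustrated on a toy multimodal misfit. The objective, step size and ensemble size below are invented for illustration and do not correspond to any actual reservoir model:

```python
import numpy as np

rng = np.random.default_rng(1)

def misfit(x):
    """Toy multimodal objective standing in for a production-data misfit."""
    return (x - 3.0) ** 2 + 2.0 * np.sin(5.0 * x)

def stochastic_stage(n_starts=20, span=(-5.0, 10.0)):
    """Stochastic stage: random sampling produces an ensemble of
    candidate models, sorted from best to worst misfit."""
    cands = rng.uniform(*span, n_starts)
    return sorted(cands, key=misfit)

def gradient_stage(x0, lr=0.01, steps=500):
    """Gradient stage: plain gradient descent started from the
    ensemble member selected by the predefined criterion (best misfit)."""
    x = x0
    for _ in range(steps):
        g = 2.0 * (x - 3.0) + 10.0 * np.cos(5.0 * x)   # analytic gradient
        x -= lr * g
    return x

ensemble = stochastic_stage()
x_star = gradient_stage(ensemble[0])   # refine the best stochastic candidate
print(x_star, misfit(x_star))
```

A pure gradient search from a single arbitrary start can stall in a poor local minimum here; the ensemble makes the overall search far more robust, which is the point of the hybrid scheme.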
33.
Within the framework of recent research projects, basic tools for GIS-based seismic risk assessment were developed and applied to the building stock and regional particularities of German earthquake regions. Two study areas are investigated, comparable in their level of seismic hazard and their hazard-consistent scenario events (related to mean return periods of 475, 2475 and 10,000 years). Significant differences exist with respect to the number of inhabitants, the grade and extent of urbanisation, and the quality and quantity of the building inventory: the case study of Schmölln in Eastern Thuringia seems representative of the majority of smaller towns in Germany, while the case study of Cologne (Köln) stands for larger cities. Given the similarities of hazard and scenario intensities, these considerable differences not only require proper decisions concerning appropriate methods and acceptable effort, they also enable conclusions about future research strategies and needs for disaster-reduction management. Not least, the results can sharpen the focus of public interest. Seismic risk maps are prepared for different scenario intensities, recognising the scatter and uncertainties of site-dependent ground motion and of the applied vulnerability functions. The paper illustrates the impact of model assumptions and of step-wise refinements of input variables such as site conditions, building stock and vulnerability functions on the distribution of expected building damage within the study areas. Furthermore, and in contrast to common research strategies, the results support the conclusion that in the case of stronger earthquakes the damage will be more concentrated within smaller cities like Schmölln, due to the site-amplification potential and/or the increased vulnerability of the building stock. The extent of damage will be pronounced by the large number of masonry buildings, to which lower vulnerability classes have to be assigned. Due to the effect of deep sedimentary layers and the composition of building types, the urban centre of Cologne will be less affected by an earthquake of comparable intensity.
34.
Histograms of observations from spatial phenomena are often found to be more heavy-tailed than Gaussian distributions, which makes the Gaussian random field model unsuitable. A T-distributed random field model with heavy-tailed marginal probability density functions is defined. The model is a generalization of the familiar Student-t distribution, and it may be given a Bayesian interpretation. The increased variability appears across realizations rather than within realizations, since every realization is Gaussian-like, with the variance varying between realizations. The T-distributed random field model is analytically tractable, and the conditional model is developed, which provides algorithms for conditional simulation and prediction, so-called T-kriging. The model compares favourably with most previously defined random field models. The Gaussian random field model appears as a special, limiting case of the T-distributed random field model. The model is particularly useful whenever multiple, sparsely sampled realizations of the random field are available, and it is clearly preferable to the Gaussian model in this case. The properties of the T-distributed random field model are demonstrated on well-log observations from the Gullfaks field in the North Sea. The predictions correspond to traditional kriging predictions, while the associated prediction variances are more representative, as they are layer-specific and include the uncertainty caused by using variance estimates.
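A minimal sketch of this cross-realization behaviour, assuming the T-field is generated as a Gaussian scale mixture (each realization Gaussian, with its variance drawn from an inverse-gamma distribution, which yields Student-t marginals); the covariance model, grid and parameters are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def t_field_realizations(n_real, n_pts, nu=4.0, corr_len=10.0):
    """Draw realizations of a T-distributed random field as a Gaussian
    scale mixture: variance ~ InvGamma(nu/2, nu/2) per realization."""
    # Exponential covariance on a 1-D grid (illustrative choice)
    d = np.abs(np.subtract.outer(np.arange(n_pts), np.arange(n_pts)))
    cov = np.exp(-d / corr_len)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n_pts))
    reals = []
    for _ in range(n_real):
        s2 = (nu / 2) / rng.gamma(nu / 2)      # inverse-gamma variance scale
        reals.append(np.sqrt(s2) * (L @ rng.normal(size=n_pts)))
    return np.array(reals)

fields = t_field_realizations(n_real=500, n_pts=50)
# Heavy tails show up *across* realizations: the per-realization spread
# varies widely, while each single realization looks Gaussian.
per_real_std = fields.std(axis=1)
print(per_real_std.min(), per_real_std.max())
```

Letting `nu` grow large drives the variance scale toward 1, recovering the Gaussian field as the limiting case the abstract mentions.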
35.
Research on model errors of adjustment systems and methods for their identification   Cited by: 2 (self-citations: 0, citations by others: 2)
This paper discusses some theoretical problems concerning the influence of model errors on parameter estimates, and points out the interaction and transformation between stochastic-model errors and functional-model errors. To address the selection of the optimal model for an adjustment system, a theoretical formula for estimating and identifying model errors is given that differs from the approach in the existing literature of incorporating model errors into the adjustment system. Practical formulas are then derived from it, and a method for selecting the optimal model of the adjustment system is presented.
36.
The compression of remote-sensing stereo image pairs is studied, focusing on disparity compensation and radiometric compensation between the left and right images. Considering that disparity in remote-sensing stereo pairs is unevenly distributed and that radiometric differences exist between the two images, a compression algorithm based on stereo compensation is proposed. Taking the left image as the reference, the algorithm computes disparity vectors for the right image by adaptive disparity estimation, combines radiometric correction with overlapped-block disparity compensation to obtain a smooth prediction of the right image, subtracts the prediction from the right image to obtain a residual image, and then compresses the residual with a wavelet-based algorithm. Experimental results show that the algorithm significantly improves the compression performance of remote-sensing stereo pairs.
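A bare-bones sketch of the disparity-compensation step, assuming simple non-overlapping block matching on a synthetic pair (the paper's algorithm additionally uses adaptive disparity estimation, radiometric correction and overlapped blocks); image size and the synthetic shift are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def block_disparity_predict(left, right, block=8, max_disp=8):
    """Predict the right image from the left by per-block horizontal
    disparity search (sum-of-squared-differences matching)."""
    h, w = right.shape
    pred = np.zeros_like(right)
    for r in range(0, h, block):
        for c in range(0, w, block):
            tgt = right[r:r + block, c:c + block].astype(float)
            bh, bw = tgt.shape
            best_ref, best_err = None, np.inf
            for d in range(-max_disp, max_disp + 1):
                cc = c + d
                if cc < 0 or cc + bw > w:
                    continue                      # candidate outside the image
                ref = left[r:r + bh, cc:cc + bw]
                err = np.sum((ref.astype(float) - tgt) ** 2)
                if err < best_err:
                    best_err, best_ref = err, ref
            pred[r:r + block, c:c + block] = best_ref
    return pred

# Synthetic pair: the "right" image is the "left" shifted 4 pixels.
left = rng.integers(0, 256, (32, 32))
right = np.roll(left, -4, axis=1)
pred = block_disparity_predict(left, right)
residual = right - pred
# Away from the wrapped border the prediction is exact, so the residual
# is zero there: a low-energy residual compresses far better than the
# raw right image, which is the point of the scheme.
print(np.abs(residual[:, :24]).max())
```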
37.
In this contribution, we introduce a new bootstrap-based method for Global Navigation Satellite System (GNSS) carrier-phase ambiguity resolution. Integer bootstrapping is known to be one of the simplest methods for integer ambiguity estimation with close-to-optimal performance. Its outcome is easy to compute due to the absence of an integer search, and its performance is close to optimal if the decorrelating Z-transformation of the LAMBDA method is used. Moreover, the bootstrapped estimator is presently the only integer estimator for which an exact and easy-to-compute expression of its fail-rate can be given. A possible disadvantage is, however, that the user has only a limited control over the fail-rate. Once the underlying mathematical model is given, the user has no freedom left in changing the value of the fail-rate. Here, we present an ambiguity estimator for which the user is given additional freedom. For this purpose, use is made of the class of integer aperture estimators as introduced in Teunissen (2003). This class is larger than the class of integer estimators. Integer aperture estimators are of a hybrid nature and can have integer outcomes as well as non-integer outcomes. The new estimator is referred to as integer aperture bootstrapping. This new estimator has all the advantages known from integer bootstrapping with the additional advantage that its fail-rate can be controlled by the user. This is made possible by giving the user the freedom over the aperture of the pull-in region. We also give an exact and easy-to-compute expression for its controllable fail-rate.
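Integer bootstrapping itself (sequential rounding, with each ambiguity conditioned on the ones already fixed) is simple enough to sketch. The covariance matrix and float ambiguities below are invented, and no decorrelating Z-transformation is applied; the success-rate expression is the standard product formula for sequential conditional rounding:

```python
import numpy as np
from math import erf, sqrt

def integer_bootstrap(a_float, Q):
    """Sequential conditional rounding. Uses the unit lower-triangular
    factor L of Q = L D L^T: a_i|I = a_i - sum_{j<i} L[i,j]*(a_j|J - z_j)."""
    C = np.linalg.cholesky(Q)
    L = C / np.diag(C)                 # L[i, j] = C[i, j] / C[j, j]
    n = len(a_float)
    z = np.zeros(n, dtype=int)
    a_cond = np.array(a_float, dtype=float)
    for i in range(n):
        a_cond[i] = a_float[i] - sum(L[i, j] * (a_cond[j] - z[j])
                                     for j in range(i))
        z[i] = round(a_cond[i])
    return z

def bootstrap_success_rate(Q):
    """Exact success rate prod_i (2*Phi(1/(2*sigma_i|I)) - 1); the
    conditional standard deviations are the Cholesky diagonal."""
    C = np.linalg.cholesky(Q)
    Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    return float(np.prod([2.0 * Phi(1.0 / (2.0 * C[i, i])) - 1.0
                          for i in range(len(Q))]))

# Hypothetical float ambiguities and covariance (not from a real GNSS model)
Q = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.06]])
a_hat = np.array([1.10, -2.05, 3.02])
z_fixed = integer_bootstrap(a_hat, Q)
print(z_fixed, bootstrap_success_rate(Q))
```

Note that the success rate (one minus the fail-rate) is fixed once `Q` is fixed, which is exactly the limitation that the integer aperture variant in the abstract removes.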
38.
Some theoretical problems affecting parameter estimation are discussed in this paper, and the interaction and transformation between errors of the stochastic and functional models are pointed out. For choosing the best adjustment model, a formula for estimating and identifying the model error is proposed that differs from methods existing in the literature. On the basis of the proposed formula, an effective approach for selecting the best model of the adjustment system is given. Project supported by the Open Research Fund Program of the Key Laboratory of Geospace Environment and Geodesy, Ministry of Education, Wuhan University (No. 905276031-04-10).
39.
The propagation of unmodelled systematic errors into coordinate time series computed using least squares is investigated, to improve the understanding of unexplained signals and apparent noise in geodetic (especially GPS) coordinate time series. Such coordinate time series are invariably based on a functional model linearised using only the zero- and first-order terms of a (Taylor) series expansion about the approximate coordinates of the unknown point. The effect of such truncation errors is investigated through the derivation of a generalised systematic error model for the simple case of range observations from a single known reference point to a point which is assumed to be at rest by the least-squares model but is in fact in motion. The systematic error function for a one-pseudo-satellite, two-dimensional case, designed to be as simple but as analogous to GPS positioning as possible, is quantified. It is shown that the combination of a moving reference point and unmodelled periodic displacement at the unknown point of interest, due to ocean tide loading, for example, results in an output coordinate time series containing many periodic terms when only zero- and first-order expansion terms are used in the linearisation of the functional model. The amplitude, phase and period of these terms are dependent on the input amplitude, the locations of the unknown point and reference point, and the period of the reference point's motion. The dominant output signals that arise due to truncation errors match those found in coordinate time series obtained from both simulated data and real three-dimensional GPS data.
40.
Based on the fitting of paleoearthquake data from intra-plate regions in the northern part of China, and given a statistical model of time interdependence, the potential for damaging earthquakes in a definite future period and the characteristics of present seismicity along the Lingwu fault have been analyzed using a hazard probability function and some relevant new data. We infer that the fault has entered a period in which earthquakes will probably occur. There exists a potential danger that a strong earthquake of Ms 7.0-7.5 will occur within 10-100 a.
Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号