By access: paid full text 227, free 35, free within China 24.
By discipline: Surveying and Mapping 21, Atmospheric Sciences 26, Geophysics 143, Geology 52, Oceanography 24, Astronomy 8, Multidisciplinary 11, Physical Geography 1.
By year: 2022 (4), 2021 (2), 2020 (7), 2019 (16), 2018 (6), 2017 (12), 2016 (7), 2015 (11), 2014 (14), 2013 (10), 2012 (5), 2011 (13), 2010 (9), 2009 (21), 2008 (21), 2007 (17), 2006 (6), 2005 (7), 2004 (13), 2003 (9), 2002 (7), 2001 (7), 2000 (10), 1999 (5), 1998 (9), 1997 (10), 1996 (5), 1995 (8), 1994 (2), 1993 (6), 1992 (2), 1991 (1), 1990 (2), 1989 (1), 1986 (1).
286 results in total (search time: 46 ms); results 111-120 are shown below.
111.
Based on the saturation-excess ("fill-and-spill") runoff concept for humid regions and the tank-model principle, a runoff generation and routing model is built for small watersheds, taking the upper reaches of the Fen River, a tributary of the Ying River, as an example. The process from precipitation to flood formation is analyzed, and the meaning of the model parameters and their military applications are briefly discussed.
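A minimal single-tank sketch of the saturation-excess idea the model builds on; the capacity and outlet coefficients below are illustrative placeholders, not the calibrated values for the upper Fen River basin:

```python
# Minimal single-tank rainfall-runoff sketch (saturation-excess idea).
# All parameter values are illustrative, not the paper's calibrated ones.

def tank_model(rainfall, capacity=100.0, k_side=0.3, k_base=0.05, s0=20.0):
    """Route a rainfall series (mm per step) through one storage tank.

    capacity : storage level (mm) above which the side outlet spills
    k_side   : side-outlet coefficient (fast, saturation-excess runoff)
    k_base   : bottom-outlet coefficient (baseflow / percolation)
    """
    s = s0
    runoff = []
    for p in rainfall:
        s += p                                    # add rainfall to storage
        q_fast = k_side * max(s - capacity, 0.0)  # spills only when "full"
        q_base = k_base * s                       # slow drainage from bottom
        s -= q_fast + q_base
        runoff.append(q_fast + q_base)
    return runoff

print(tank_model([0, 5, 40, 80, 10, 0, 0]))
```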
112.
Starting from the expected value of the W-3 (three-parameter Weibull) wind-frequency model and its fitting error, two sets of equations for the Weibull parameters are derived, one by the method of moments and one by the extreme-value method. Solving them numerically with measured wind data yields the corresponding W-3 wind-frequency parameters. Calculations and comparisons using measured data from several meteorological stations show that the parameters given by both methods fit more accurately than the results of the two-parameter W-2 model.
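As a rough illustration of the moment-based branch of this approach, the sketch below fits a W-3 Weibull by matching the sample mean, variance, and skewness; the function name and the bracket [0.5, 50] for the shape parameter are choices of this sketch (assuming moderately skewed data), not the paper's formulation:

```python
# Hedged sketch: method-of-moments fit of a three-parameter (W-3) Weibull.
# The sample skewness pins down the shape k (it depends on k alone);
# scale and location then follow from the variance and the mean.
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma
from scipy.stats import skew

def weibull3_mom(x):
    m, v, g1 = np.mean(x), np.var(x, ddof=1), skew(x)

    def skew_of_k(k):
        g = [gamma(1 + i / k) for i in (1, 2, 3)]
        return (g[2] - 3 * g[0] * g[1] + 2 * g[0] ** 3) / (g[1] - g[0] ** 2) ** 1.5

    k = brentq(lambda kk: skew_of_k(kk) - g1, 0.5, 50.0)  # shape from skewness
    ga, gb = gamma(1 + 1 / k), gamma(1 + 2 / k)
    c = np.sqrt(v / (gb - ga ** 2))                       # scale from variance
    loc = m - c * ga                                      # location from mean
    return k, c, loc

rng = np.random.default_rng(0)
speeds = 2.0 + 6.0 * rng.weibull(2.0, size=1000)  # synthetic wind speeds
print(weibull3_mom(speeds))                       # ~ (2.0, 6.0, 2.0)
```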
113.
Baarda's reliability measures for outliers, as well as sensitivity and separability measures for deformations, are functions of the lower bound of the non-centrality parameter (LBNP). This parameter, traditionally read from Baarda's well-known nomograms, is in fact the non-centrality parameter at which the cumulative distribution function (CDF) of the non-central χ²-distribution equals the probability complementary to the desired power of the test, i.e. the probability of a Type II error. It is investigated how the LBNP can be computed for desired probabilities (power of the test and significance level) and known degrees of freedom. Two iterative algorithms, bisection and Newton's method, were applied to compute the LBNP after defining a stable and accurate algorithm for computing the corresponding CDF. Although both iterative algorithms achieve any desired accuracy, it is shown numerically that Newton's method converges to the solution faster than bisection.
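A sketch of both root-finding schemes against SciPy's non-central χ² CDF makes the setup concrete. The Newton step uses the standard series identity ∂F(x; k, λ)/∂λ = −½[F(x; k, λ) − F(x; k+2, λ)]; the α, power, and degrees-of-freedom values are illustrative, and this is not the paper's own implementation:

```python
# Hedged sketch: find the LBNP lambda_0 with ncx2.cdf(x_crit, df, lambda_0)
# equal to beta = 1 - power, where x_crit is the central chi^2 critical value.
from scipy.stats import chi2, ncx2

def lbnp_bisection(df, alpha=0.05, power=0.80, lo=0.0, hi=200.0, tol=1e-10):
    x = chi2.ppf(1 - alpha, df)   # critical value of the central chi^2 test
    beta = 1 - power
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        # the CDF decreases in lambda: too-large CDF means lambda too small
        if ncx2.cdf(x, df, mid) > beta:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def lbnp_newton(df, alpha=0.05, power=0.80, lam=10.0, tol=1e-12):
    x = chi2.ppf(1 - alpha, df)
    beta = 1 - power
    for _ in range(100):
        f = ncx2.cdf(x, df, lam) - beta
        # dF/dlambda = -0.5 * [F(x; df, lam) - F(x; df + 2, lam)]
        dfdl = -0.5 * (ncx2.cdf(x, df, lam) - ncx2.cdf(x, df + 2, lam))
        step = f / dfdl
        lam -= step
        if abs(step) < tol:
            break
    return lam

print(lbnp_bisection(1), lbnp_newton(1))  # ~7.85 for df=1, alpha=0.05, power=0.8
```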
114.
The robustness to the largest sample element of large-quantile estimates obtained by the method of moments (MOM) and the method of L-moments (LMM) was evaluated and compared. Quantiles were estimated using the log-logistic and log-Gumbel distributions. Both are lower-bounded, two-parameter distributions, with the coefficient of variation (CV) serving as the shape parameter. The results of these two methods were also compared with those of the maximum likelihood method (MLM). Since identifying and eliminating outliers in a single sample requires knowledge of the sample's parent distribution, which is unknown, one estimates the distribution using a parameter estimation method that is relatively robust to the largest element in the sample; in practice this means the method should be robust to extreme elements (including outliers). The effect of dropping the largest element of the series on large quantile values was assessed for various combinations of the coefficient of variation (CV) and the sample size (N). To that end, Monte Carlo sampling experiments were applied. The results were compared with those obtained from a single representative sample (the first-order approximation), consisting of both the average values E[x_i] for every position i of an ordered sample and the theoretical quantiles based on the plotting-position formula (PP). The ML estimates of large quantiles were found to be most robust to the largest element of the sample, except for small samples, where the MOM estimates were more robust. Comparing the performance of the other two methods for large-quantile estimation, MOM was found to be more robust for small and moderate samples drawn from distributions with a zero lower bound, as shown for the log-Gumbel and log-logistic distributions. The results from the representative samples were fairly compatible with the Monte Carlo simulation results: the E[x]-sample results were closer to the Monte Carlo results for smaller CV values, and to the PP-sample results for larger CV values.
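The following sketch mimics one cell of such a Monte Carlo experiment: the mean relative shift of an estimated 0.99 quantile when the largest element is dropped. It uses SciPy's ML fit of a log-logistic (Fisk) distribution as a stand-in for the paper's estimators, and the shape/size/replication choices are illustrative, not the paper's design:

```python
# Hedged sketch of one Monte-Carlo cell of the robustness experiment.
import numpy as np
from scipy.stats import fisk

def fitted_quantile(x, f):
    c, loc, scale = fisk.fit(x, floc=0)   # two-parameter fit, lower bound 0
    return fisk.ppf(f, c, loc=loc, scale=scale)

def mean_quantile_shift(n=50, shape=3.0, f=0.99, reps=200, seed=1):
    rng = np.random.default_rng(seed)
    shifts = []
    for _ in range(reps):
        x = fisk.rvs(shape, size=n, random_state=rng)
        q_all = fitted_quantile(x, f)
        q_drop = fitted_quantile(np.sort(x)[:-1], f)  # drop largest element
        shifts.append(abs(q_drop - q_all) / q_all)
    return float(np.mean(shifts))   # smaller value = more robust estimator

print(mean_quantile_shift())
```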
115.
Relativity, or gravitational physics, has entered geodetic modelling and parameter determination at many levels. This concerns, first of all, the fundamental reference systems used. The Barycentric Celestial Reference System (BCRS) has to be distinguished carefully from the Geocentric Celestial Reference System (GCRS), which is the basic theoretical system for geodetic modelling, with a direct link to the International Terrestrial Reference System (ITRS) given simply by a rotation matrix. The relation to the International Celestial Reference System (ICRS) is discussed, as well as various properties and the relevance of these systems. The representation of the gravitational field is then discussed when relativity comes into play. Presently, the so-called post-Newtonian approximation to GRT (general relativity theory), which includes relativistic effects to lowest order, is sufficient for practically all geodetic applications. At the present level of accuracy, space-geodetic techniques such as VLBI (Very Long Baseline Interferometry), GPS (Global Positioning System) and SLR/LLR (Satellite/Lunar Laser Ranging) have to be modelled and analysed within a post-Newtonian formalism. In fact, all reference and time frames involved, satellite and planetary orbits, signal propagation, and the various observables (frequencies, pulse travel times, phase and travel-time differences) are treated within relativity. This paper reviews to what extent the space-geodetic techniques are affected by such a relativistic treatment and where, vice versa, relativistic parameters can be determined from the analysis of geodetic measurements. At the end, we give a brief outlook on how new or improved measurement techniques (e.g., optical clocks, Galileo) may further push relativistic parameter determination and allow for refined geodetic measurements.
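As one concrete instance of a lowest-order post-Newtonian term in signal propagation, the Shapiro (gravitational) delay can be evaluated directly; the geometry below is a rough satellite-to-ground example of this sketch, not a worked case from the paper:

```python
# Hedged numeric illustration of the Shapiro signal delay,
#   dt = (1 + gamma) * GM / c^3 * ln[(r1 + r2 + rho) / (r1 + r2 - rho)],
# with gamma = 1 in general relativity; r1, r2 are geocentric distances of
# the two endpoints and rho the straight-line range between them.
import math

GM_EARTH = 3.986004418e14   # m^3 / s^2
C = 299792458.0             # m / s

def shapiro_delay(r1, r2, rho, gm=GM_EARTH, gamma=1.0):
    return (1 + gamma) * gm / C**3 * math.log((r1 + r2 + rho) / (r1 + r2 - rho))

r_sat, r_ground = 26_560e3, 6_371e3         # geocentric distances (m)
rho = 20_189e3                              # zenith range satellite-ground (m)
print(shapiro_delay(r_sat, r_ground, rho))  # a few tens of picoseconds
```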
116.
The fully temperature-dependent model of the effective pressure of the solid matrix and the related overpressure has been derived from the pressure balance equation, mass conservation, and Darcy's law, and is directly useful in basin modeling. Application of the model in the Kuqa Depression of the Tarim Basin in western China shows that this overpressure model is highly accurate: the present-day values of the calculated overpressure histories of wells Kela2 and Yinan2 match the field-measured data with mean absolute relative residuals of 3% and 5%, respectively. This indicates that overpressure simulation is a practical alternative to rock-mechanics experiments for effective-pressure measurement. Since the calculation of the overpressure history uses the outcomes of the geohistory and geothermal-history model simulations, the relevant input data and the output of the two models for the Kela2 well are given as examples. The case studies show that the pore-fluid density and viscosity used in calculating overpressures should be temperature-dependent; otherwise the results deviate far from the field-measured pressure data. They also show that the most sensitive parameter governing overpressure is permeability, which can be calculated using either the Kozeny–Carman formula or a porosity power function. The Kozeny–Carman formula is preferable when accurate data for the specific surface area of the solid matrix (S_a) exist; otherwise the porosity power function is used. Furthermore, calibrating S_a in the Kozeny–Carman formula, or the index m in the porosity power function, against field-measured pressure data is vital for obtaining an accurate overpressure history. In these case studies, the outcome of the Kozeny–Carman formula approaches that of the porosity power function with m = 4, and both approach the field-measured pressure data.
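The two permeability closures named above can be written down compactly; the Kozeny–Carman constant (5) and the power-law prefactor below are common textbook choices of this sketch, not calibrated Kuqa Depression values:

```python
# Hedged sketch of the two permeability closures discussed above.
def kozeny_carman(phi, s_a, c=5.0):
    """Permeability (m^2) from porosity phi and specific surface s_a (1/m)."""
    return phi ** 3 / (c * s_a ** 2 * (1.0 - phi) ** 2)

def porosity_power(phi, k0=1e-15, m=4.0):
    """Power-law closure k = k0 * phi**m; m is the calibration index."""
    return k0 * phi ** m

# e.g. phi = 0.15 and s_a = 1e6 1/m give ~1e-15 m^2 (about 1 millidarcy)
print(kozeny_carman(0.15, 1e6), porosity_power(0.15))
```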
117.
Halphen laws have been proposed as a complete system of distributions with sufficient statistics that lead to estimation with minimum variance. The Halphen system provides the flexibility to fit a large variety of data sets from natural events. In this paper we present the method of moments (MM) for estimating the parameters of the Halphen Type B and Type IB distributions. Their computation is very fast compared with that of the maximum likelihood method (ML). Furthermore, this estimation method is very easy to implement, since the formulae are explicit. Simulations show the equivalence of both methods when estimating quantiles for finite sample sizes.
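For orientation only, a generic numerical moment-matching stand-in is sketched below, with the Halphen Type B density taken as proportional to x^(2ν−1) exp(−(x/m)² + αx/m), which is an assumption of this sketch; the paper's own explicit MM formulae via the exponential factorial function are not reproduced here:

```python
# Hedged sketch: numerical moment matching for an assumed Halphen Type B
# density kernel x**(2*nu - 1) * exp(-(x/m)**2 + alpha*x/m). Quadrature
# plus root finding stands in for the paper's explicit formulae.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

def raw_moment(r, m, nu, alpha):
    kernel = lambda x: x ** (r + 2 * nu - 1) * np.exp(-(x / m) ** 2 + alpha * x / m)
    num, _ = quad(kernel, 0, np.inf)
    den, _ = quad(lambda x: x ** (2 * nu - 1) * np.exp(-(x / m) ** 2 + alpha * x / m),
                  0, np.inf)
    return num / den

def halphen_b_mm(x):
    targets = [np.mean(x ** r) for r in (1, 2, 3)]  # first three raw moments

    def eqs(p):
        m, nu, alpha = np.exp(p[0]), np.exp(p[1]), p[2]  # keep m, nu positive
        return [raw_moment(r, m, nu, alpha) - t
                for r, t in zip((1, 2, 3), targets)]

    sol = fsolve(eqs, x0=[np.log(np.mean(x)), 0.0, 0.5])
    return np.exp(sol[0]), np.exp(sol[1]), sol[2]    # (m, nu, alpha)
```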
118.
Research on optimization methods for earthquake prediction models (cited by 2; self-citations: 0, citations by others: 2)
王晓青, 邵辉成, 丁香. 《地震》2004, 24(2): 53-58
Building on an analysis of the shortcomings of the existing "two-state" (anomalous/normal) earthquake precursor model based on observational indices, a "multi-state" precursor model is proposed in which an anomaly can take several states. Criteria for judging prediction efficacy under the multi-state model are given, and a method is further proposed for selecting the parameters of single-item prediction models so as to maximize prediction efficacy. Under the generalized multi-state precursor model (with temporal, spatial, and joint spatio-temporal prediction indices), the approach was applied to North China and to the North-South Seismic Belt of China, determining for each region the optimal model for the spatial distribution of earthquakes (MS ≥ 6) predicted from active faults (buffers of 20 km and 30 km around the faults, respectively) and the corresponding prediction efficacies (0.42 and 0.34, respectively).
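For orientation, the classical two-state efficacy score often used in this literature (hit rate minus the fraction of space-time occupied by alarms, commonly denoted R) fits in a few lines; the numbers below are purely illustrative, and the paper's multi-state scoring is not reproduced here:

```python
# Hedged sketch of a two-state prediction-efficacy score:
#   R = hit rate - fraction of space-time occupied by alarms.
def r_score(n_hits, n_events, alarmed_extent, total_extent):
    return n_hits / n_events - alarmed_extent / total_extent

# Illustrative numbers only: 8 of 12 events inside fault buffers that
# cover 25% of the study region.
print(r_score(8, 12, 0.25, 1.0))   # ~0.42
```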
119.
Underwater photogrammetry provides an efficient means of documenting environments that are complex and of limited accessibility. Yet establishing reference control networks in such settings is often difficult. In this regard, use of the coplanarity condition, which requires neither knowledge of object-space coordinates nor a reference control network, seems an attractive solution. However, the coplanarity relation does not hold in such environments because of refraction, and the methods proposed so far for modelling its geometric effect require knowledge of object-space quantities. This paper therefore proposes a geometrically driven approach that fulfills the coplanarity condition and thus requires no object-space data. Such an approach may prove useful not only for object-space reconstruction but also as a preparatory step for bundle block adjustment and for outlier detection, all key elements of photogrammetric practice. Results show that no unique setup is needed to estimate the relative orientation parameters with the model and that high levels of accuracy can be achieved.
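The classical in-air coplanarity condition that the proposed model generalizes is a single triple product; the sketch below only evaluates that residual for a toy geometry and does not reproduce the paper's refraction modelling:

```python
# Hedged sketch: the classical coplanarity condition b . (r1 x R r2) = 0,
# where b is the baseline, r1/r2 are image rays in their camera frames,
# and R rotates camera-2 rays into the camera-1 frame.
import numpy as np

def coplanarity_residual(b, r1, r2, R):
    return float(np.dot(b, np.cross(r1, R @ r2)))

# Toy check with a perfectly consistent (refraction-free) geometry.
R = np.eye(3)
b = np.array([1.0, 0.0, 0.0])
X = np.array([2.0, 1.0, 5.0])      # a hypothetical object point
r1, r2 = X, X - b                  # rays from the two camera centres
print(coplanarity_residual(b, r1, r2, R))   # ~0 for a consistent point
```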
120.
Performance of the SCE-UA algorithm in optimizing parameters of the soil moisture equation (cited by 2; self-citations: 2, citations by others: 0)
Using a one-dimensional soil moisture model, parameters related to soil composition and to soil properties were taken in turn as the parameters to be optimized, and observing-system simulation experiments were used to evaluate how well the SCE-UA (Shuffled Complex Evolution) algorithm recovers them. The results show that optimization performance depends not only on the value ranges of the parameters but also on their sensitivity: sensitive parameters readily reach their optimal values, whereas insensitive parameters possess "insensitive intervals" within which the optimization easily settles on sub-optimal values; narrowing the parameter search intervals and increasing the number of optimization runs can partly improve the results. In addition, the over-determined nature of the model may also produce sub-optimal parameter values, and the optimization can be improved by appropriately specifying constraints among the parameters and suitable optimization criteria.
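A toy observing-system simulation experiment of this kind is sketched below: synthetic "observations" come from known true parameters, and an optimizer tries to recover them. Since SCE-UA itself is not in SciPy, differential evolution stands in for it, and the bucket model is a placeholder of this sketch, not the paper's 1-D soil moisture model:

```python
# Hedged OSSE sketch: recover toy soil-moisture parameters from synthetic
# observations; differential_evolution substitutes for SCE-UA.
import numpy as np
from scipy.optimize import differential_evolution

def soil_moisture(params, forcing, w0=0.3):
    k_loss, k_in = params
    w, out = w0, []
    for p in forcing:
        w = w + k_in * p - k_loss * w    # toy bucket water balance
        w = min(max(w, 0.0), 1.0)        # keep soil wetness in [0, 1]
        out.append(w)
    return np.array(out)

rng = np.random.default_rng(0)
forcing = rng.random(200) * 0.05                 # synthetic rainfall forcing
true_params = (0.10, 0.60)
obs = soil_moisture(true_params, forcing) + rng.normal(0, 0.005, 200)

res = differential_evolution(
    lambda p: np.sqrt(np.mean((soil_moisture(p, forcing) - obs) ** 2)),
    bounds=[(0.01, 0.5), (0.1, 1.0)], seed=0)
print(res.x)   # should land near (0.10, 0.60); weakly constrained directions
               # mimic the "insensitive intervals" discussed above
```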