991.
Drought is one of the most devastating climate disasters, so drought forecasting plays an important role in mitigating some of its adverse effects. Data-driven models are widely used for drought forecasting, such as the ARIMA model, the artificial neural network (ANN) model, the wavelet neural network (WANN) model, the support vector regression model and the grey model. Three data-driven models (the ARIMA, ANN and WANN models) are used in this study for drought forecasting based on the standardized precipitation index (SPI) at two time scales (SPI-6 and SPI-12). The optimal data-driven model and SPI time scale are then selected for effective drought forecasting in the north of the Haihe River Basin. The effectiveness of the three data-driven models is compared using the Kolmogorov–Smirnov (K–S) test, Kendall's rank correlation, and the coefficient of determination (R2). The forecast results show that the WANN model is more suitable and effective for forecasting SPI-6 and SPI-12 values in the north of the Haihe River Basin.
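As a minimal sketch of the kind of comparison the study above performs, the snippet below ranks two competing forecasts of an SPI series by their coefficient of determination (R2). All series and values are hypothetical, not data from the paper.

```python
def r_squared(obs, pred):
    """Coefficient of determination between observed and forecast SPI values."""
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    vo = sum((o - mo) ** 2 for o in obs)
    vp = sum((p - mp) ** 2 for p in pred)
    return cov ** 2 / (vo * vp)

# Hypothetical SPI-6 series and two competing model forecasts
obs   = [-1.2, -0.8, 0.1, 0.6, -0.3, -1.5, 0.9, 1.1]
ann   = [-1.0, -0.6, 0.0, 0.5, -0.2, -1.2, 0.7, 1.0]
arima = [-0.5, -0.2, 0.3, 0.2, 0.1, -0.6, 0.4, 0.5]

# The forecast tracking the observations more closely scores a higher R2
best = "ANN" if r_squared(obs, ann) > r_squared(obs, arima) else "ARIMA"
```

In the study itself the ranking additionally uses the K–S test and Kendall's rank correlation; R2 alone is shown here only to illustrate the mechanics.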
992.
Several risk factors associated with an increased likelihood of healthcare-associated Clostridium difficile infection (CDI) have been identified in the literature, mainly related to age, previous CDI, antimicrobial exposure, and prior hospitalization. No model is available in the published literature for predicting CDI incidence from healthcare administration data. Moreover, administrative data can be imprecise, which complicates the building of classical statistical models; fuzzy set theory can deal with the imprecision inherent in such data. This research aimed to develop a model based on deterministic and fuzzy mathematical techniques for the prediction of hospital-associated CDI using explanatory variables controllable by hospitals and health authority administration. Retrospective data on CDI incidence and other administrative data obtained from 22 hospitals within a regional health authority in British Columbia were used to develop a decision tree (a deterministic technique) and a fuzzy synthetic evaluation model (a fuzzy technique). The decision tree model had a higher prediction accuracy than the fuzzy-based model; however, among the results predicted by both models in common, 72 % were correct, so this relationship was used to combine their results to increase the precision and the strength of evidence of the prediction. These models were further used to develop an Excel-based tool called C. difficile Infection Incidence Prediction in Hospitals (CDIIPH). The tool can be utilized by health authorities and hospitals to predict the magnitude of CDI incidence in the following quarter.
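A fuzzy synthetic evaluation, as used in the abstract above, scores an outcome by aggregating fuzzy membership degrees of several factors with a weight vector. The sketch below uses the standard weighted-average aggregation; the factors, membership values and weights are all hypothetical illustrations, not those of the CDIIPH model.

```python
# Hypothetical membership matrix: rows = risk factors, columns = CDI risk levels.
# Each row gives a factor's membership degrees in (low, medium, high) risk.
R = [
    [0.1, 0.3, 0.6],   # antimicrobial exposure
    [0.2, 0.5, 0.3],   # prior hospitalization
    [0.4, 0.4, 0.2],   # patient age profile
]
w = [0.5, 0.3, 0.2]    # assumed factor weights, summing to 1

levels = ["low", "medium", "high"]
# Weighted-average aggregation: b_j = sum_i w_i * R[i][j]
b = [sum(w[i] * R[i][j] for i in range(len(w))) for j in range(len(levels))]
predicted_level = levels[b.index(max(b))]
```

The actual model combines this fuzzy score with a decision tree; here only the fuzzy half is sketched.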
993.
Regional frequency analysis is an important tool for properly estimating hydrological characteristics at ungauged or partially gauged sites in order to prevent hydrological disasters. Delineating homogeneous groups of sites is an important first step for transferring information and obtaining accurate quantile estimates at the target site. The Hosking–Wallis homogeneity test is usually used to test the homogeneity of the selected sites. Despite its usefulness and good power, it has some drawbacks, including the subjective choice of a parametric distribution for the data and a poorly justified rejection threshold. The present paper addresses these drawbacks by integrating nonparametric procedures into the L-moment homogeneity test. To assess the rejection threshold, three resampling methods (permutation, bootstrap and Pólya resampling) are considered. Results indicate that the permutation and bootstrap methods perform better than the parametric Hosking–Wallis test in terms of power as well as computation time and procedural simplicity. A real-world case study shows that the nonparametric tests agree with the HW test on the homogeneity of the volume and of the bivariate case, while they disagree for the peak case, where the assumptions of the HW test are not well respected.
994.
Large observed datasets are often non-stationary and/or dependent on covariates, especially in the case of extreme hydrometeorological variables, which makes estimation with classical hydrological frequency analysis difficult. A number of non-stationary models have been developed that use linear or quadratic polynomial functions, or B-splines, to estimate the relationship between parameters and covariates. In this article, we propose a regularized generalized extreme value model with B-splines (GEV-B-splines model) in a Bayesian framework to estimate quantiles. Regularization is based on penalties and aims to favour parsimonious models, especially in high-dimensional settings. The penalties are introduced in a Bayesian framework and the corresponding priors are detailed. Five penalties are considered and the corresponding priors are developed for comparison purposes: the least absolute shrinkage and selection operator (Lasso), Ridge, and three smoothly clipped absolute deviation (SCAD) methods (SCAD1, SCAD2 and SCAD3). Markov chain Monte Carlo (MCMC) algorithms have been developed for each model to estimate quantiles and their posterior distributions. These approaches are tested and illustrated using simulated data with different sample sizes. A first simulation was carried out on polynomial B-splines functions in order to choose the most efficient model in terms of the relative mean bias (RMB) and relative mean error (RME) criteria. A second simulation was performed with the SCAD1 penalty for a sinusoidal dependence to illustrate the flexibility of the proposed approach. Results show clearly that the regularized approaches lead to a significant reduction of the bias and the mean square error, especially for small sample sizes (n < 100). A case study was considered that models annual peak flows at the Fort-Kent catchment with total annual precipitation as a covariate. Conditional quantile curves are given for both the regularized and the maximum likelihood methods.
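For reference, the three penalty families named above can be written as simple functions of a coefficient beta and a tuning parameter lambda. The SCAD form below is the standard piecewise definition with shape parameter a = 3.7; how the paper's SCAD1/SCAD2/SCAD3 variants differ (presumably in such tuning choices) is not stated in the abstract, so treat the constants as assumptions.

```python
def lasso_penalty(beta, lam):
    """Lasso: linear in |beta|, shrinks small coefficients exactly to zero."""
    return lam * abs(beta)

def ridge_penalty(beta, lam):
    """Ridge: quadratic, shrinks all coefficients but never exactly to zero."""
    return lam * beta ** 2

def scad_penalty(beta, lam, a=3.7):
    """Standard SCAD penalty: linear near zero like Lasso, then tapers off,
    then constant, so large coefficients are not over-shrunk."""
    b = abs(beta)
    if b <= lam:
        return lam * b
    if b <= a * lam:
        return (2 * a * lam * b - b ** 2 - lam ** 2) / (2 * (a - 1))
    return lam ** 2 * (a + 1) / 2
```

In the Bayesian framework of the paper, each penalty corresponds to a prior on the B-spline coefficients (e.g. a Laplace prior for the Lasso, a Gaussian prior for Ridge); the functions above only show the penalty shapes themselves.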
995.
This study describes the parametric uncertainty of artificial neural networks (ANNs) using the generalized likelihood uncertainty estimation (GLUE) method. The ANNs are used to forecast daily streamflow for three sub-basins of the Rhine Basin (East Alpine, Main, and Mosel) with different hydrological and climatological characteristics. We obtained prior parameter distributions from 5000 ANNs in the training period to capture the parametric uncertainty, and subsequently 125,000 correlated parameter sets were generated. These parameter sets were used to quantify the uncertainty in the forecasted streamflow in the testing period using three uncertainty measures: percentage of coverage, average relative length, and average asymmetry degree. The results indicated that the highest uncertainty was obtained for the Mosel sub-basin and the lowest for the East Alpine sub-basin, mainly due to hydro-climatic differences between these basins. The prediction results and uncertainty estimates of the proposed methodology were compared to the direct ensemble and bootstrap methods. The GLUE method successfully captured the observed discharges within the generated prediction intervals, especially the peak flows. Using the Wilcoxon–Mann–Whitney test, it was also shown that the uncertainty bands are sensitive to the selection of the threshold value for the Nash–Sutcliffe efficiency measure used in the GLUE method.
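The GLUE procedure referred to above can be sketched end to end on a toy model: sample parameter sets, keep the "behavioural" ones whose Nash–Sutcliffe efficiency exceeds a threshold, and form prediction bands from the behavioural ensemble. The one-parameter rainfall-runoff model and all numbers below are hypothetical stand-ins for the ANNs of the study.

```python
import random

def nse(obs, sim):
    """Nash-Sutcliffe efficiency of a simulation against observations."""
    mo = sum(obs) / len(obs)
    return 1 - sum((o - s) ** 2 for o, s in zip(obs, sim)) / \
               sum((o - mo) ** 2 for o in obs)

# Toy model: streamflow = a * rainfall, with unknown parameter a
random.seed(0)
rain = [3.0, 5.0, 2.0, 8.0, 6.0]
obs  = [6.1, 9.8, 4.2, 16.3, 11.9]   # roughly consistent with a = 2

behavioural = []
for _ in range(1000):
    a = random.uniform(0.5, 4.0)          # sample from the prior range
    sim = [a * r for r in rain]
    if nse(obs, sim) > 0.7:               # NSE threshold defines "behavioural"
        behavioural.append(sim)

# Pointwise 95 % prediction band from the behavioural ensemble
lower = [sorted(v)[int(0.025 * len(v))] for v in zip(*behavioural)]
upper = [sorted(v)[int(0.975 * len(v))] for v in zip(*behavioural)]
coverage = sum(l <= o <= u for o, l, u in
               zip(obs, lower, upper)) / len(obs)
```

The study's sensitivity of the bands to the NSE threshold is easy to see here: raising the threshold shrinks the behavioural set and narrows the band.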
996.
Modelling glacier discharge is an important issue in hydrology and climate research. Glaciers represent a fundamental water resource when melting ice and snow contributes to runoff, and they are also studied as natural global warming sensors. The GLACKMA association has implemented one of its Pilot Experimental Catchment areas on King George Island in Antarctica, which records the liquid discharge from the Collins glacier. In this paper, we propose time-varying copula models for analyzing the relationship between air temperature and glacier discharge, which is clearly non-constant and non-linear through time. A seasonal copula model is defined in which both the marginal and copula parameters vary periodically over time following a seasonal dynamic. Full Bayesian inference is performed, so the marginal and copula parameters are estimated in a single step, in contrast with the usual two-step approach. Bayesian prediction and model selection are also carried out for the proposed model, so that Bayesian credible intervals can be obtained for the conditional glacier discharge given a value of the temperature at any given time point. The proposed methodology is illustrated using the GLACKMA real data, which in addition contain a hydrological year of missing discharge values that could not be measured accurately due to problems with the sounding.
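A time-varying copula lets the dependence strength itself follow a seasonal cycle. As a rough illustration of that idea (the functional form, base level and amplitude below are assumptions, not the paper's fitted model), the sketch makes a Gumbel copula parameter oscillate over the months and converts it to the implied Kendall's tau via the known relation tau = 1 - 1/theta.

```python
import math

def seasonal_theta(month, base=2.0, amp=0.8, phase=0.0):
    """Hypothetical seasonally varying Gumbel copula parameter.
    Clamped at 1.0, the lower bound of the Gumbel family."""
    theta = base + amp * math.sin(2 * math.pi * (month / 12.0) + phase)
    return max(theta, 1.0)

# Kendall's tau implied by a Gumbel copula: tau = 1 - 1/theta.
# Dependence between temperature and discharge peaks mid-cycle here.
taus = [1 - 1 / seasonal_theta(m) for m in range(12)]
```

In the paper the seasonal dynamic is estimated jointly with the marginals in one Bayesian step; this sketch only shows how a periodic parameter translates into periodic dependence.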
997.
The present study investigates potential impacts of climate change on flood frequency in the Bazoft Basin, located in the central part of Iran. A combination of four general circulation models is used through a weighting approach to assess uncertainty in the climate projections. The LARS-WG model is applied to downscale large-scale atmospheric data to local stations. The resulting data are in turn used as input to the hydrological model Water and Energy Transfer between Soil, plants and atmosphere (WetSpa) to simulate runoff for present (1971–2000), near future (2020–2049) and far future (2071–2100) conditions. Results demonstrate good performance of both the WetSpa and LARS-WG models. In addition, the instantaneous peak flow (IPF) is estimated using several empirical equations, including the Fuller, Sangal and Fill–Steiner methods. Comparison of estimated and observed IPF shows that the Fill–Steiner method performs better than the others. Different probability distribution functions are then fitted to the IPF series. Results of the flood frequency analysis indicate that the Pearson III distribution fits the IPF data best, and that flood magnitude will decrease in the future for all return periods.
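Empirical IPF equations of the kind named above scale a mean daily flow up to an instantaneous peak using catchment area. As one example, a commonly quoted form of the Fuller equation is sketched below; the coefficient, exponent and area units are assumptions taken from the general literature, not values verified against this paper, and the inputs are hypothetical.

```python
def fuller_ipf(mean_daily_flow, area_km2):
    """Fuller-type estimate of instantaneous peak flow from mean daily flow.
    Assumed form: IPF = MDF * (1 + 2.66 / A**0.3), with area A in km^2."""
    return mean_daily_flow * (1 + 2.66 / area_km2 ** 0.3)

# Hypothetical example: a 500 km^2 catchment with a 100 m^3/s mean daily flood
peak = fuller_ipf(100.0, 500.0)
```

The ratio IPF/MDF shrinks as the catchment grows, reflecting the smoothing of short peaks in larger basins; that qualitative behaviour is what such formulas encode.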
998.
Regional bivariate modeling of droughts using L-comoments and copulas
The regional bivariate modeling of drought characteristics using copulas provides valuable information for water resources management and drought risk assessment. Regional frequency analysis (RFA) can identify similar sites within a region using the L-comoments approach. One of the important steps in RFA is estimating the regional parameters of the copula function. In the present study, an optimization-based method together with the adjusted charged system search is introduced and applied to estimate the regional parameters of the copula models. The capability of the proposed methodology is illustrated by applying copula functions to drought events. Three commonly used copulas, the Clayton, Frank and Gumbel families, are employed to derive the joint distribution of drought severity and duration. The results of the new method are compared with those of the method of moments; after applying several goodness-of-fit tests, the results indicate that the new method provides higher accuracy than the classic one. Furthermore, the upper tail dependence coefficients indicate that the Gumbel copula is the best-fitting copula for modeling drought characteristics.
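The upper tail dependence coefficient mentioned above has a closed form for the Gumbel family, lambda_U = 2 - 2^(1/theta), while the Clayton and Frank copulas have no upper tail dependence at all, which is why Gumbel is the natural candidate for jointly extreme drought severity and duration. A small sketch:

```python
def gumbel_upper_tail(theta):
    """Upper tail dependence of the Gumbel copula: lambda_U = 2 - 2**(1/theta).
    theta >= 1; theta = 1 is independence (lambda_U = 0)."""
    return 2 - 2 ** (1 / theta)

def clayton_upper_tail(theta):
    """The Clayton copula has zero upper tail dependence for any theta."""
    return 0.0

def frank_upper_tail(theta):
    """The Frank copula likewise has zero upper tail dependence."""
    return 0.0

# A larger lambda_U means jointly extreme severity and duration
# (the dangerous droughts) are more likely to co-occur.
```

With any fitted theta > 1, only the Gumbel copula assigns positive probability mass to joint extremes, matching the finding reported in the abstract.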
999.
Parameter uncertainty in hydrologic modeling is crucial to flood simulation and forecasting. The Bayesian approach allows one to estimate parameters according to prior expert knowledge as well as observational data about model parameter values. This study assesses the performance of two popular uncertainty analysis (UA) techniques, generalized likelihood uncertainty estimation (GLUE) and a Bayesian method implemented with a Markov chain Monte Carlo sampling algorithm, in evaluating model parameter uncertainty in flood simulations. The two methods were applied to the semi-distributed topographic hydrologic model (TOPMODEL), which includes five parameters, in a case study of a small humid catchment in southeastern China. The performance assessment of the GLUE and Bayesian methods was conducted with advanced tools suited to probabilistic simulations of continuous variables such as streamflow: graphical tools and scalar metrics were used to test several attributes of the simulation quality of selected flood events, namely deterministic accuracy and the accuracy of the 95 % prediction probability uncertainty band (95PPU). A sensitivity analysis was conducted to identify the sensitive parameters that largely affect the model output. The GLUE and Bayesian methods were then used to analyze the uncertainty of these parameters and to produce their posterior distributions, from which TOPMODEL simulations and the corresponding UA results were obtained. Results show that the form of the exponential decline in conductivity and the overland flow routing velocity were the sensitive parameters of TOPMODEL in our case: small changes in these two parameters lead to large differences in the flood simulation results. Results also suggest that, for both UA techniques, most streamflow observations were bracketed by the 95PPU, with a containing ratio larger than 80 %. In comparison, GLUE gave narrower prediction uncertainty bands than the Bayesian method. The mode estimates of the parameter posterior distributions yielded better deterministic performance than the 50th percentiles for both the GLUE and Bayesian analyses. In addition, the simulation results calibrated with the Rosenbrock optimization algorithm show better agreement with the observations than the 50th percentiles from the UA, but slightly worse than the hydrographs from the mode estimates. The results clearly emphasize the importance of using model uncertainty diagnostic approaches in flood simulations.
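The two band-quality measures discussed above, the containing ratio of the 95PPU and its width, are simple to compute once lower and upper band limits are available. The sketch below uses hypothetical observations and band limits, not data from the catchment study.

```python
def containing_ratio(obs, lower, upper):
    """Fraction of observations bracketed by the 95PPU band."""
    return sum(l <= o <= u for o, l, u in zip(obs, lower, upper)) / len(obs)

def average_band_width(lower, upper):
    """Mean width of the uncertainty band; narrower is sharper."""
    return sum(u - l for l, u in zip(lower, upper)) / len(lower)

# Hypothetical streamflow observations and 95PPU limits (m^3/s)
obs   = [10.0, 14.0, 30.0, 22.0, 12.0]
lower = [8.0, 11.0, 24.0, 18.0, 9.0]
upper = [13.0, 16.0, 35.0, 26.0, 15.0]

cr = containing_ratio(obs, lower, upper)
width = average_band_width(lower, upper)
```

Comparing methods involves exactly this trade-off: GLUE's narrower bands (smaller average width) are preferable only as long as the containing ratio stays acceptably high.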
1000.
Model performance evaluation for real-time flood forecasting has been conducted using various criteria. Although the coefficient of efficiency (CE) is the most widely used, we demonstrate that a model achieving good model efficiency may actually be inferior to naïve (or persistence) forecasting if the flow series has a high lag-1 autocorrelation coefficient. We derived sample-dependent and AR-model-dependent asymptotic relationships between the coefficient of efficiency and the coefficient of persistence (CP), which form the basis of a proposed CE–CP coupled model performance evaluation criterion. Considering flow persistence and model simplicity, the AR(2) model is suggested as the benchmark model for performance evaluation of real-time flood forecasting models. We emphasize that performance evaluation of flood forecasting models using the proposed CE–CP coupled criterion should be carried out with respect to individual flood events. A single CE or CP value derived from a series artificially concatenated from multiple events by no means provides an overall multi-event evaluation and may actually disguise the real capability of the proposed model.
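The abstract's central point, that a high CE can coexist with no skill over the persistence forecast, follows directly from the two definitions: CE benchmarks against the mean flow, CP against yesterday's flow. The sketch below makes this concrete on a hypothetical, strongly autocorrelated flood series, where the naive persistence forecast itself earns a "good" CE while its CP is exactly zero by construction.

```python
def ce(obs, sim):
    """Coefficient of efficiency (Nash-Sutcliffe): benchmark is the mean flow."""
    mo = sum(obs) / len(obs)
    return 1 - sum((o - s) ** 2 for o, s in zip(obs, sim)) / \
               sum((o - mo) ** 2 for o in obs)

def cp(obs, sim):
    """Coefficient of persistence: benchmark is the naive forecast obs[t-1]."""
    num = sum((obs[t] - sim[t]) ** 2 for t in range(1, len(obs)))
    den = sum((obs[t] - obs[t - 1]) ** 2 for t in range(1, len(obs)))
    return 1 - num / den

# A hypothetical, highly autocorrelated rising flood limb
obs = [10.0, 12.0, 15.0, 19.0, 24.0, 28.0, 30.0]

# The naive model just repeats the previous observation ...
naive_model = [obs[0]] + obs[:-1]
# ... yet ce(obs, naive_model) looks respectable while cp is 0:
# the model has no skill beyond persistence.
```

This is why the paper argues for the coupled CE–CP criterion, evaluated per flood event rather than on concatenated multi-event series.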