741.
Precipitation is an important part of the hydrologic cycle, and its complexity is closely related to surface runoff and changing groundwater dynamics, which in turn influence the accuracy of precipitation forecasts. In this study, we used the Lempel–Ziv algorithm (LZA) and a multi-scaling approach to assess precipitation complexity for 1958–2011 by analyzing time series data from 28 gauging stations located throughout Jilin province, China. The spatial distribution of normalized precipitation complexity was measured by LZA, a symbolic-dynamics algorithm, and by a fractal-based multi-scaling approach. In addition, the advantages and limitations of these two methods were investigated. The results indicate that both methods are applicable and consistent for calculating precipitation complexity, and that the degree of relief is the primary factor controlling precipitation complexity in the mountainous area, whereas in the plain terrain the prominent influencing factor is climate.
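To make the symbolic-dynamics step concrete, the sketch below computes a normalized Lempel–Ziv complexity for a binarized precipitation series. It is a minimal illustration, not the authors' code: the median-threshold binarization, the LZ78-style phrase counting, and the synthetic monthly record are all simplifying assumptions.

```python
import numpy as np

def symbolize(series, threshold=None):
    """Binarize a series: 1 above the threshold (default: median), else 0."""
    series = np.asarray(series, dtype=float)
    if threshold is None:
        threshold = np.median(series)
    return "".join("1" if x > threshold else "0" for x in series)

def lz_complexity(s):
    """Count distinct phrases in an LZ78-style parsing of the symbol string."""
    phrases, phrase = set(), ""
    for ch in s:
        phrase += ch
        if phrase not in phrases:
            phrases.add(phrase)
            phrase = ""
    return len(phrases) + (1 if phrase else 0)

def normalized_lz(series):
    """Normalize by the asymptotic complexity n / log2(n) of a random binary string."""
    s = symbolize(series)
    n = len(s)
    return lz_complexity(s) * np.log2(n) / n

# Toy example: one value per month for a synthetic 54-year record (1958-2011).
rng = np.random.default_rng(0)
monthly_precip = rng.gamma(shape=2.0, scale=30.0, size=54 * 12)
print(round(normalized_lz(monthly_precip), 3))
```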
742.
We introduce a density regression model for the spectral density of a bivariate extreme value distribution, which allows us to assess how extremal dependence can change over a covariate. Inference is performed through a double kernel estimator, which can be seen as an extension of the Nadaraya–Watson estimator in which the usual scalar responses are replaced by mean-constrained densities on the unit interval. Numerical experiments illustrate the resilience of the methods in a variety of contexts of practical interest. An extreme temperature dataset is used to illustrate the approach.
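For reference, the estimator being extended is the classical Nadaraya–Watson kernel regression, in which the fitted value at a covariate point is a kernel-weighted average of the responses; the paper replaces the scalar responses with mean-constrained densities on [0, 1]. The sketch below shows only the scalar-response version with a Gaussian kernel and synthetic data, as a baseline for intuition.

```python
import numpy as np

def nadaraya_watson(x0, x, y, h):
    """Classical Nadaraya-Watson estimate:
    m(x0) = sum_i K_h(x0 - x_i) y_i / sum_i K_h(x0 - x_i), Gaussian kernel K_h.
    The paper's double kernel estimator replaces the scalar y_i by densities."""
    w = np.exp(-0.5 * ((x0 - x) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

# Toy example on synthetic data.
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + rng.normal(scale=0.3, size=x.size)
print(round(nadaraya_watson(5.0, x, y, h=0.5), 3))
```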
743.
Producing accurate spatial predictions for wind power generation, together with a quantification of uncertainties, is required to plan and design optimal networks of wind farms. Toward this aim, we propose spatial models for predicting wind power generation at two different time scales: annual average wind power generation, and high temporal resolution (typically wind power averaged over 15-min time steps). In both cases, we use a spatial hierarchical statistical model in which spatial correlation is captured by a latent Gaussian field. We explore how such models can be handled with stochastic partial differential equation (SPDE) approximations of Matérn Gaussian fields together with Integrated Nested Laplace Approximations (INLA). We demonstrate the proposed methods on wind farm data from Western Denmark and compare the results to those obtained with standard geostatistical methods. The results show that our method makes it possible to obtain fast and accurate predictions from posterior marginals for wind power generation. The proposed method is applicable in scientific areas as diverse as climatology, environmental sciences, earth sciences, and epidemiology.
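The latent Gaussian field in such hierarchical models is typically given a Matérn covariance, which is the family the SPDE approach represents. The snippet below only evaluates a Matérn covariance matrix between a few hypothetical site coordinates; the length scale, smoothness, and coordinates are illustrative, and none of the SPDE/INLA machinery is shown.

```python
import numpy as np
from sklearn.gaussian_process.kernels import Matern

# Hypothetical wind-farm coordinates (km easting/northing); the actual
# Western Denmark data are not reproduced here.
coords = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 15.0], [25.0, 20.0]])

# Matérn covariance with smoothness nu and a range-like length scale: the
# covariance family whose SPDE representation is combined with INLA in the paper.
kernel = Matern(length_scale=20.0, nu=1.5)
cov = kernel(coords)          # 4 x 4 correlation matrix between sites
print(np.round(cov, 3))
```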
744.
The optimal operation of dam reservoirs can be planned and managed by predicting the inflow to these structures more accurately. To this end, various linear and nonlinear models are available. However, some hydrological problems, such as inflow with extreme seasonal variation, are neither purely linear nor purely nonlinear. To improve the forecasting accuracy of this phenomenon, a linear Seasonal Auto-Regressive Integrated Moving Average (SARIMA) model is combined with a nonlinear Artificial Neural Network (ANN) model. This hybrid model is used to predict the monthly inflow to the Jamishan dam reservoir in western Iran. A comparison of the SARIMA and ANN models with the proposed hybrid model is provided. More specifically, the models' performance in forecasting base and flood flows is evaluated, the effect of changing the forecasting period length on the models' accuracy is studied, and the effect of increasing the number of SARIMA model parameters up to five is investigated to achieve more accurate forecasting. The hybrid model predicts peak flood flows much better than the individual models, but SARIMA outperforms the other models in predicting base flow. The results indicate that the hybrid model reduces the overall forecast error more than the ANN and SARIMA models. During the forecast period, the coefficients of determination of the hybrid, ANN, and SARIMA models were 0.72, 0.64, and 0.58, and the root mean squared errors were 1.02, 1.16, and 1.27, respectively. Changing the forecasting length also indicated that these models can be used in the long term without increasing the forecast error.
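A common way to build such a hybrid (the usual additive decomposition, not necessarily the exact procedure of this paper) is to fit the SARIMA model first, train the ANN on lagged SARIMA residuals, and add the two forecasts. The sketch below illustrates this on synthetic monthly inflow; the model orders, lag depth, and network size are arbitrary choices, not the paper's.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
t = np.arange(240)                                   # 20 years of synthetic monthly inflow
inflow = 50 + 30 * np.sin(2 * np.pi * t / 12) + rng.gamma(2.0, 5.0, t.size)
train, test = inflow[:228], inflow[228:]

# 1) Linear part: seasonal ARIMA (orders are illustrative).
sarima = SARIMAX(train, order=(1, 0, 1), seasonal_order=(1, 0, 1, 12)).fit(disp=False)
linear_fc = sarima.forecast(steps=len(test))

# 2) Nonlinear part: ANN trained on lagged SARIMA residuals.
resid = sarima.resid
lags = 12
X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
y = resid[lags:]
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)
resid_fc = ann.predict(resid[-lags:].reshape(1, -1))   # one-step residual correction

# 3) Hybrid forecast = linear forecast + predicted residual (first step shown).
print(round(linear_fc[0] + resid_fc[0], 2), "vs observed", round(test[0], 2))
```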
745.
The classic univariate risk measure in environmental sciences is the Return Period (RP). The RP is traditionally defined as "the average time elapsing between two successive realizations of a prescribed event". The related notion of design quantile is also of great importance: the design quantile represents the "value of the variable(s) characterizing the event associated with a given RP". Since an individual risk may be strongly affected by the degree of dependence amongst all risks, the need for multivariate design quantiles has gained ground. In contrast to the univariate case, defining the design quantile in the multivariate setting presents certain difficulties. In particular, Salvadori, De Michele, and Durante, in "On the return period and design in a multivariate framework" (Hydrol Earth Syst Sci 15:3293–3305, 2011), define the design realization as the vector that maximizes a weight function given that the risk vector belongs to a given critical layer of its joint multivariate distribution function. In this paper, we provide the explicit expression of this multivariate risk measure in the Archimedean copula setting. Furthermore, the measure is estimated using Extreme Value Theory techniques, and the asymptotic normality of the proposed estimator is studied. The performance of our estimator is evaluated on simulated data. We conclude with an application to a real hydrological dataset.
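In the univariate case, the RP and the design quantile are linked by T(x) = mu / (1 - F(x)) and x(T) = F^{-1}(1 - mu/T), with mu the mean interarrival time (one year for annual maxima); this is the relationship the multivariate construction generalizes. The snippet below evaluates both for an illustrative GEV distribution whose parameters are not fitted to any real series.

```python
from scipy.stats import genextreme

# Univariate return period T(x) = mu / (1 - F(x)), with mu = 1 year for annual
# maxima; the design quantile is x(T) = F^{-1}(1 - mu / T).
# GEV parameters below are purely illustrative.
gev = genextreme(c=-0.1, loc=100.0, scale=25.0)

T = 100.0                                   # 100-year event
design_quantile = gev.ppf(1.0 - 1.0 / T)
print(round(design_quantile, 1))

# Conversely, the return period of a prescribed level:
print(round(1.0 / gev.sf(180.0), 1))
```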
746.
A challenge when working with multivariate data in a geostatistical context is that the data are rarely Gaussian. Multivariate distributions may include nonlinear features, clustering, long tails, functional boundaries, spikes, and heteroskedasticity. Multivariate transformations account for such features so that they are reproduced in geostatistical models. Projection pursuit, as developed for high-dimensional data exploration, can also be used to transform a multivariate distribution into a multivariate Gaussian distribution with an identity covariance matrix. Its application within a geostatistical modeling context is called the projection pursuit multivariate transform (PPMT). An approach to incorporate exhaustive secondary variables in the PPMT is introduced; with this approach the PPMT can incorporate any number of secondary variables with any number of primary variables. To make this numerically practical, a continuous probability estimator based on Bernstein polynomials was implemented for the transformation that takes place in the projections. The stopping criteria were also updated to incorporate a bootstrap t-test that compares data sampled from a multivariate Gaussian distribution with the data undergoing transformation.
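The sketch below shows only the two preprocessing steps that the projection-pursuit iterations build on: a rank-based normal-score transform of each variable and a sphering step that yields an identity covariance. The full PPMT (repeatedly finding and Gaussianizing the most non-Gaussian projection, and the Bernstein-polynomial probability estimator mentioned above) is not reproduced, and the data are synthetic stand-ins for primary and secondary variables.

```python
import numpy as np
from scipy.stats import norm, rankdata

def normal_score(x):
    """Rank-based quantile transform of one variable to standard normal scores."""
    n = len(x)
    return norm.ppf((rankdata(x) - 0.5) / n)

def sphere(z):
    """Center and decorrelate (identity covariance): the pre-processing step that
    precedes the projection-pursuit iterations of the PPMT."""
    z = z - z.mean(axis=0)
    cov = np.cov(z, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    return z @ evecs @ np.diag(evals ** -0.5) @ evecs.T

# Two correlated, skewed variables (synthetic stand-ins for primary/secondary data).
rng = np.random.default_rng(3)
a = rng.gamma(2.0, 1.0, 500)
b = 0.7 * a + rng.gamma(1.5, 0.5, 500)
z = np.column_stack([normal_score(a), normal_score(b)])
print(np.round(np.cov(sphere(z), rowvar=False), 2))   # ~ identity after sphering
```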
747.
Drought is one of the most devastating climate disasters, so drought forecasting plays an important role in mitigating some of its adverse effects. Data-driven models such as the ARIMA model, the artificial neural network (ANN) model, the wavelet neural network (WANN) model, support vector regression, and grey models are widely used for drought forecasting. Three data-driven models (ARIMA, ANN, and WANN) are used in this study for drought forecasting based on the standardized precipitation index (SPI) at two time scales (SPI-6 and SPI-12). The optimal data-driven model and SPI time scale are then selected for effective drought forecasting in the north of the Haihe River Basin. The effectiveness of the three models is compared using the Kolmogorov–Smirnov (K–S) test, the Kendall rank correlation, and the coefficient of determination (R2). The forecast results show that the WANN model is more suitable and effective for forecasting SPI-6 and SPI-12 values in the north of the Haihe River Basin.
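At the core of the SPI is a probability transform: aggregate precipitation over the chosen time scale, fit a distribution (commonly a gamma), and map the cumulative probabilities to standard normal quantiles. The simplified sketch below computes an SPI-6-like index on synthetic data; it fits a single gamma to all months and ignores zero-precipitation handling, so it is an illustration rather than an operational SPI.

```python
import numpy as np
from scipy.stats import gamma, norm

def spi(precip, scale=6):
    """Standardized Precipitation Index: aggregate over `scale` months, fit a gamma
    distribution, and map cumulative probabilities to standard normal quantiles.
    (Simplified: one gamma fit for all months, no zero-precipitation handling.)"""
    p = np.convolve(precip, np.ones(scale), mode="valid")    # running k-month totals
    a, loc, b = gamma.fit(p, floc=0)
    return norm.ppf(gamma.cdf(p, a, loc=loc, scale=b))

rng = np.random.default_rng(4)
monthly = rng.gamma(2.0, 40.0, 600)        # 50 years of synthetic monthly precipitation
spi6 = spi(monthly, scale=6)
print(round(spi6.min(), 2), round(spi6.max(), 2))
```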
748.
Several risk factors associated with an increased likelihood of healthcare-associated Clostridium difficile infection (CDI) have been identified in the literature. These risk factors are mainly related to age, previous CDI, antimicrobial exposure, and prior hospitalization. No model is available in the published literature that can predict CDI incidence from healthcare administration data. Moreover, administrative data can be imprecise and may challenge the building of classical statistical models; fuzzy set theory can deal with the imprecision inherent in such data. This research aimed to develop a model based on deterministic and fuzzy mathematical techniques for the prediction of hospital-associated CDI using explanatory variables controllable by hospitals and health authority administrations. Retrospective data on CDI incidence and other administrative data obtained from 22 hospitals within a regional health authority in British Columbia were used to develop a decision tree (a deterministic technique) and a fuzzy synthetic evaluation model (a fuzzy technique). The decision tree model had a higher prediction accuracy than the fuzzy-based model; however, among the results predicted in common by the two models, 72% were correct. This relationship was therefore used to combine their results to increase the precision and the strength of evidence of the prediction. These models were further used to develop an Excel-based tool called C. difficile Infection Incidence Prediction in Hospitals (CDIIPH). The tool can be used by health authorities and hospitals to predict the magnitude of CDI incidence in the following quarter.
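A minimal sketch of the agreement-based combination idea is given below: train a decision tree on administrative features, form a second (deliberately crude) stand-in for the fuzzy synthetic evaluation output, and report how often the two predictions concur. The features, labels, and the fuzzy stand-in are entirely synthetic and hypothetical; only the "combine where the models agree" logic reflects the paper.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical quarterly administrative features per hospital (not the BC data):
# [bed count, mean patient age, antimicrobial use index, prior CDI rate]
rng = np.random.default_rng(5)
X = rng.normal(size=(200, 4))
y = (X[:, 3] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[:150], y[:150])
tree_pred = tree.predict(X[150:])

# Stand-in for the fuzzy synthetic evaluation output (here just a thresholded score).
fuzzy_pred = (X[150:, 2] + X[150:, 3] > 0).astype(int)

# Combine: keep a prediction only where the two models agree, mirroring the
# agreement-based strengthening of evidence described above.
agree = tree_pred == fuzzy_pred
print(f"{agree.mean():.0%} of cases have concordant predictions")
```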
749.
Regional frequency analysis is an important tool for properly estimating hydrological characteristics at ungauged or partially gauged sites in order to prevent hydrological disasters. The delineation of homogeneous groups of sites is an important first step for transferring information and obtaining accurate quantile estimates at the target site. The Hosking–Wallis (HW) homogeneity test is usually used to test the homogeneity of the selected sites. Despite its usefulness and good power, it presents some drawbacks, including the subjective choice of a parametric distribution for the data and a poorly justified rejection threshold. The present paper addresses these drawbacks by integrating nonparametric procedures into the L-moment homogeneity test. To assess the rejection threshold, three resampling methods (permutation, bootstrap, and Pólya resampling) are considered. Results indicate that the permutation and bootstrap methods perform better than the parametric Hosking–Wallis test in terms of power as well as computation time and procedural simplicity. A real-world case study shows that the nonparametric tests agree with the HW test concerning the homogeneity of the volume and the bivariate case, while they disagree for the peak case; moreover, the assumptions of the HW test are not well respected in this case.
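The sketch below illustrates the nonparametric idea in its simplest form: measure the between-site dispersion of the at-site L-CVs and assess it against a permutation distribution obtained by randomly reassigning the pooled observations to sites. It uses only the L-CV (not the full set of heterogeneity statistics of the HW framework) and a plain permutation p-value, so it is a simplified illustration rather than the paper's procedure.

```python
import numpy as np

def l_cv(x):
    """Sample L-CV (t2 = l2 / l1) from unbiased probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    b0 = x.mean()
    b1 = np.sum((np.arange(n) / (n - 1)) * x) / n
    l1, l2 = b0, 2 * b1 - b0
    return l2 / l1

def dispersion(sites):
    """Record-length-weighted dispersion of the at-site L-CVs (the V statistic
    underlying the Hosking-Wallis heterogeneity measure)."""
    n = np.array([len(s) for s in sites], dtype=float)
    t = np.array([l_cv(s) for s in sites])
    t_bar = np.sum(n * t) / n.sum()
    return np.sum(n * (t - t_bar) ** 2) / n.sum()

def permutation_homogeneity_test(sites, n_perm=999, seed=0):
    """Nonparametric alternative to the parametric HW threshold: reassign pooled
    observations to sites at random and compare the observed dispersion with the
    permutation distribution."""
    rng = np.random.default_rng(seed)
    v_obs = dispersion(sites)
    pooled = np.concatenate(sites)
    sizes = [len(s) for s in sites]
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        splits = np.split(perm, np.cumsum(sizes)[:-1])
        count += dispersion(splits) >= v_obs
    return (count + 1) / (n_perm + 1)          # permutation p-value

# Toy region: three gauging sites drawn from the same gamma distribution.
rng = np.random.default_rng(6)
region = [rng.gamma(2.0, 50.0, n) for n in (30, 45, 60)]
print(round(permutation_homogeneity_test(region), 3))
```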
750.
Large observed datasets are often non-stationary and/or depend on covariates, especially in the case of extreme hydrometeorological variables, which makes estimation with classical hydrological frequency analysis difficult. A number of non-stationary models have been developed using linear or quadratic polynomial functions or B-spline functions to estimate the relationship between parameters and covariates. In this article, we propose a regularized generalized extreme value model with B-splines (GEV-B-splines model) in a Bayesian framework to estimate quantiles. Regularization is based on penalties and aims to favor parsimonious models, especially in high-dimensional settings. The penalties are introduced in a Bayesian framework and the corresponding priors are detailed. Five penalties and their corresponding priors are developed for comparison: the least absolute shrinkage and selection operator (Lasso), Ridge, and three smoothly clipped absolute deviation (SCAD) methods (SCAD1, SCAD2, and SCAD3). Markov chain Monte Carlo (MCMC) algorithms have been developed for each model to estimate quantiles and their posterior distributions. These approaches are tested and illustrated using simulated data with different sample sizes. A first simulation was carried out on polynomial B-spline functions in order to choose the most efficient model in terms of the relative mean bias (RMB) and relative mean error (RME) criteria. A second simulation was performed with the SCAD1 penalty for a sinusoidal dependence to illustrate the flexibility of the proposed approach. Results show clearly that the regularized approaches lead to a significant reduction of the bias and the mean square error, especially for small sample sizes (n < 100). A case study was considered to model annual peak flows at the Fort-Kent catchment with total annual precipitation as a covariate. The conditional quantile curves are given for the regularized and the maximum likelihood methods.
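A simplified, non-Bayesian sketch of the penalized idea is given below: the GEV location parameter depends on the covariate through a small basis (a quadratic polynomial standing in for the B-spline basis), and a ridge penalty on the basis coefficients is added to the negative log-likelihood. The Lasso and SCAD variants would change only the penalty term, and the paper's MCMC estimation is replaced here by direct numerical optimization; the data, basis, and penalty weight are illustrative.

```python
import numpy as np
from scipy.stats import genextreme
from scipy.optimize import minimize

rng = np.random.default_rng(7)
precip = rng.normal(1000.0, 150.0, 80)                  # covariate: synthetic annual precipitation
z = (precip - precip.mean()) / precip.std()
peaks = genextreme.rvs(c=-0.1, loc=200 + 30 * z, scale=40, random_state=7)

basis = np.column_stack([np.ones_like(z), z, z ** 2])   # stand-in for the B-spline basis

def penalized_nll(params, lam=1.0):
    """GEV negative log-likelihood with a ridge penalty on the covariate coefficients
    of the location parameter (Lasso/SCAD variants change only the penalty term)."""
    beta, log_scale, shape = params[:3], params[3], params[4]
    loc = basis @ beta
    nll = -genextreme.logpdf(peaks, c=shape, loc=loc, scale=np.exp(log_scale)).sum()
    return nll + lam * np.sum(beta[1:] ** 2)            # the intercept is not penalized

start = np.array([peaks.mean(), 0.0, 0.0, np.log(peaks.std()), -0.1])
fit = minimize(penalized_nll, start, method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-6})
beta_hat = fit.x[:3]
print(np.round(beta_hat, 2))
```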