671.
Abstract

The effect of data pre-processing while developing artificial intelligence (AI)-based data-driven techniques, such as artificial neural networks (ANN), model trees (MT) and linear genetic programming (LGP), is studied for Pawana Reservoir in Maharashtra, India. The daily one-step-ahead inflow forecasts are compared with flows generated from a univariate autoregressive integrated moving average (ARIMA) model. For the full-year data series, a large error is found, mainly due to the occurrence of zero values, since the reservoir is located on an intermittent river. Hence, all the techniques are evaluated using two data series: 18 years of daily full-year inflow data (from 1 January to 31 December); and 18 years of daily monsoon season inflow data (from 1 June to 31 October), to take into account the intermittent nature of the data. The relevant range of inputs for each category is selected based on autocorrelation and partial autocorrelation analyses of the inflow series. Conventional pre-processing methods, such as transformation and/or normalization of the data, do not perform well because of the large variation in magnitudes, as well as the many zero values (65% of the full-year data series). Therefore, the input data are pre-processed into un-weighted moving average (MA) series of 3 days, 5 days and 7 days. The 3-day MA series performs better, maintaining the peak inflow pattern of the actual data series, while the coarser-scale (5-day and 7-day) MA series smooth out the peak inflow pattern, leading to larger errors in peak inflow prediction. The results indicate that AI methods are powerful tools for modelling the daily flow time series with appropriate data pre-processing, in spite of the presence of many zero values. The time-lagged recurrent network (TLRN) ANN modelling technique applied in this study models the inflow forecasting better than the standard multilayer perceptron (MLP) neural networks, especially in the case of the seasonal data series. The MT technique performs equally well for low and medium inflows, but fails to predict the peak inflows. However, LGP outperforms the other AI models, and also the ARIMA model, for all inflow magnitudes. In the LGP model, the daily full-year data series with more zero inflow values performs better than the daily seasonal models.

Citation Jothiprakash, V. & Kote, A. S. (2011) Improving the performance of data-driven techniques through data pre-processing for modelling daily reservoir inflow. Hydrol. Sci. J. 56(1), 168–186.
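The un-weighted moving-average pre-processing described in this abstract is simple to reproduce. The sketch below is a minimal illustration, not the authors' code: the array `inflow` and its values are made up, and only the construction of 3-, 5- and 7-day MA input series is shown.

```python
import numpy as np

def moving_average(series, window):
    """Un-weighted (simple) moving average; returns a series shortened by window-1 samples."""
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="valid")

# Hypothetical daily inflow record (m^3/s); the zeros reflect the intermittent river.
inflow = np.array([0.0, 0.0, 12.5, 30.2, 18.7, 0.0, 0.0, 5.1, 44.0, 22.3])

ma3 = moving_average(inflow, 3)   # 3-day MA: smooths noise while keeping the peak pattern
ma5 = moving_average(inflow, 5)   # coarser scales flatten the peaks,
ma7 = moving_average(inflow, 7)   # which degrades peak-inflow prediction
print(ma3, ma5, ma7, sep="\n")
```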
672.
To study the effect of seismic wavelet phase on the inversion of reflection coefficient series, and building on an autoregressive moving average (ARMA) description of the wavelet, a method is proposed that constructs a family of seismic wavelets with identical amplitude spectra but different phase spectra by symmetrically mapping the zeros and poles of the ARMA model in the z-domain; combined with spectral division, these wavelets are used to invert reflection coefficient series from synthetic seismic records. Theoretical analysis shows that when the wavelet phase is estimated inaccurately, a pure phase filter remains in the inverted reflection coefficient series, and its phase spectrum is the difference between the phase spectra of the true wavelet and the constructed wavelet. Using abundance and variation as evaluation criteria, the true (or accurate) reflection coefficient series can be identified among the inversion results. Simulation experiments and real-data processing verify both the pattern of influence of wavelet phase on reflection coefficient inversion and the effectiveness of the evaluation criteria, pointing the way toward further improving the accuracy of inverted reflection coefficient series.
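As a hedged numerical sketch of the core idea, and not the paper's implementation: reflecting a zero of the ARMA wavelet model across the unit circle (z -> 1/conj(z)) and compensating with a gain of |z| leaves the amplitude spectrum unchanged while altering the phase spectrum. The ARMA coefficients below are invented for illustration.

```python
import numpy as np
from scipy.signal import freqz

def flip_root(roots, k):
    """Reflect the k-th root across the unit circle (z -> 1/conj(z))."""
    flipped = roots.copy()
    flipped[k] = 1.0 / np.conj(flipped[k])
    return flipped

# Hypothetical ARMA(2,2) wavelet model: numerator (zeros) and denominator (poles).
# Real zeros are chosen so the flipped polynomial keeps real coefficients.
b = np.array([1.0, -1.1, 0.3])      # zeros at 0.5 and 0.6 (inside the unit circle)
a = np.array([1.0, -0.6, 0.25])     # stable poles

zeros = np.roots(b)
zeros_flipped = flip_root(zeros, 0)

# Re-scale so the amplitude spectrum is unchanged (the |z0| gain compensates the reflection).
gain = b[0] * np.abs(zeros[0])
b_new = gain * np.poly(zeros_flipped)

w, h_old = freqz(b, a)
_, h_new = freqz(b_new, a)
print(np.allclose(np.abs(h_old), np.abs(h_new)))       # True: identical amplitude spectra
print(np.allclose(np.angle(h_old), np.angle(h_new)))   # False: phase spectra differ
```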
673.
A study of the seismic damage index of simple houses in the Yunnan region
周光全 《地震研究》2011,34(1):88-95
From 1992 to 2005, 56 destructive earthquakes occurred in the Yunnan region, yielding a rich body of earthquake damage data. In October 2005, the standard "Post-earthquake Field Works, Part 4: Assessment of Direct Loss" (GB/T 18208.4-2005) was issued and implemented; it introduced the concept of simple houses and merged the original five damage grades into three. In response to this change, the damage ratios of simple houses in the different intensity zones of the 56 Yunnan earthquakes were compiled, and, according to the damage ratios, the original ...
674.
Seismic threshold monitoring allows real-time assessment of a network's monitoring capability; the method uses the short-term average (STA) in place of A/T to compute magnitudes. To make STA-based magnitudes consistent with conventional magnitude estimates, the STA-based magnitudes must be corrected. In this paper, historical events detected at each station are analysed, and an optimal filter band is selected to compute the difference log(A/T) - log(STA), yielding the correction coefficients for STA-based magnitudes. Using data from selected stations of the Xinjiang seismic network, the network monitoring capability computed by the threshold monitoring technique is analysed, and the results agree well with the actual values.
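A minimal sketch of the correction step, assuming a generic magnitude formula of the form M = log10(A/T) + (distance/attenuation term): the station correction is estimated as the mean of log10(A/T) - log10(STA) over historical events and is then added to log10(STA) when computing STA-based magnitudes. All numbers below are hypothetical.

```python
import numpy as np

# Hypothetical historical events at one station: A/T values and the corresponding
# STA values measured in the chosen optimal filter band (units are illustrative).
a_over_t = np.array([12.0, 35.5, 8.2, 120.0, 64.3])
sta      = np.array([10.1, 30.8, 7.0, 101.5, 55.0])

# Station correction: average difference log10(A/T) - log10(STA) over historical events.
correction = np.mean(np.log10(a_over_t) - np.log10(sta))

def magnitude_from_sta(sta_value, distance_term, correction):
    """STA-based magnitude: log10(STA) plus the station correction stands in for
    log10(A/T) in a conventional magnitude formula with a distance/attenuation term."""
    return np.log10(sta_value) + correction + distance_term

print(magnitude_from_sta(42.0, 3.1, correction))
```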
675.
Spectral multi-scaling postulates a power-law type of scaling of spectral distribution functions of stationary processes of spatial averages, over nested and geometrically similar sub-regions of the spatial parameter space of a given spatio-temporal random field. A new framework is formulated here for down-scaling processes of spatial averages, following naturally from the postulate of spectral multi-scaling, and the key ingredients required for its implementation are described. Moreover, results from an extensive diagnostic study are presented, seeking statistical evidence supportive of spectral multi-scaling. Such evidence emerges from two sources of data. One is a 13-year historical record of radar observations of rainfall in southeastern UK (Chenies radar), with high spatial (2 km) and temporal (5 min) resolution. The other is an ensemble of rain rate fields simulated by a spatio-temporal random pulse model fitted to the historical data. The results are consistent between historical and simulated rainfall data, indicating frequency-dependent scaling relationships interpreted as evidence of spectral multi-scaling across a range of spatial scales.
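A toy diagnostic in the spirit of the study, not its actual procedure: spectra of spatial-average rain-rate processes over nested sub-regions are estimated and compared to look for a frequency-dependent scaling relationship. The synthetic field, region sizes and use of a simple Welch spectrum are all assumptions.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)

# Hypothetical rain-rate field on a 32 x 32 grid over 2048 time steps (toy data only).
field = rng.gamma(shape=0.5, scale=2.0, size=(2048, 32, 32))

# Spatial averages over nested, geometrically similar sub-regions:
# the whole domain and its centre quarter.
outer = field.mean(axis=(1, 2))
inner = field[:, 8:24, 8:24].mean(axis=(1, 2))

f, s_outer = welch(outer, nperseg=256)
_, s_inner = welch(inner, nperseg=256)

# Frequency-dependent scaling diagnostic: log-log regression of the spectral ratio.
ratio = s_inner[1:] / s_outer[1:]
slope = np.polyfit(np.log(f[1:]), np.log(ratio), 1)[0]
print(f"log-log slope of the spectral ratio: {slope:.3f}")
```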
676.
The energy approach is used to theoretically verify that the average acceleration method (AAM), which is unconditionally stable for linear dynamic systems, is also unconditionally stable for structures with typical nonlinear damping, including the special case of velocity power type damping with a bilinear restoring force model. Based on the energy approach, the stability of the AAM is proven for SDOF structures using the mathematical features of the velocity power function and for MDOF structures by applying the virtual displacement theorem. Finally, numerical examples are given to demonstrate the accuracy of the theoretical analysis.
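For illustration, a minimal sketch of the average acceleration method (Newmark with gamma = 1/2, beta = 1/4) for an SDOF system with velocity-power damping. A linear spring stands in for the bilinear restoring model of the paper, and the fixed-point iteration on the new acceleration is a simple choice for the sketch, not the paper's formulation.

```python
import numpy as np

def aam_step(m, u, v, a, p_next, dt, restoring, damping, tol=1e-10, max_iter=50):
    """One step of the average acceleration method (Newmark, gamma=1/2, beta=1/4)
    for m*u'' + c(u') + r(u) = p, with nonlinear damping c and restoring force r.
    Iterates on the new-step acceleration (fixed point), a simple choice for a sketch."""
    a_next = a
    for _ in range(max_iter):
        v_next = v + 0.5 * dt * (a + a_next)
        u_next = u + dt * v + 0.25 * dt**2 * (a + a_next)
        a_new = (p_next - damping(v_next) - restoring(u_next)) / m
        if abs(a_new - a_next) < tol:
            a_next = a_new
            break
        a_next = a_new
    return u_next, v_next, a_next

# Hypothetical SDOF: velocity-power damping c*sign(v)*|v|^alpha and a linear spring
# standing in for the bilinear restoring force mentioned in the abstract.
m, k, c, alpha = 1.0, 4.0 * np.pi**2, 0.3, 0.5
damping = lambda v: c * np.sign(v) * np.abs(v) ** alpha
restoring = lambda u: k * u

dt, n = 0.01, 1000
u, v = 0.1, 0.0                                   # free vibration from an initial offset
a = (0.0 - damping(v) - restoring(u)) / m
for _ in range(n):
    u, v, a = aam_step(m, u, v, a, p_next=0.0, dt=dt,
                       restoring=restoring, damping=damping)
print(f"displacement after {n * dt:.1f} s: {u:.4f}")
```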
677.
The prediction of possible future losses from earthquakes, which in many cases affect structures that are spatially distributed over a wide area, is of importance to national authorities, local governments, and the insurance and reinsurance industries. Generally, it is necessary to estimate the effects of many, or even all, potential earthquake scenarios that could impact upon these urban areas. In such cases, the purpose of the loss calculations is to estimate the annual frequency of exceedance (or the return period) of different levels of loss due to earthquakes: so-called loss exceedance curves. An attractive option for generating loss exceedance curves is to perform independent probabilistic seismic hazard assessment calculations at several locations simultaneously and to combine the losses at each site for each annual frequency of exceedance. An alternative method involves the use of multiple earthquake scenarios to generate ground motions at all sites of interest, defined through Monte Carlo simulations based on the seismicity model. The latter procedure is conceptually sounder but considerably more time-consuming. Both procedures are applied to a case study loss model, and the loss exceedance curves and average annual losses are compared to ascertain the influence of using a more theoretically robust, though computationally intensive, procedure to represent the seismic hazard in loss modelling. An erratum to this article can be found at
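A hedged sketch of the second (Monte Carlo) route: simulate many synthetic years of portfolio losses, then read off the annual frequency of exceedance for a set of loss thresholds and the average annual loss. The event rate and loss distribution below are invented and do not represent any real loss model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical Monte Carlo catalogue: simulated annual portfolio losses over many
# synthetic years (zero for years with no damaging event).
n_years = 50_000
event_years = rng.random(n_years) < 0.08                 # ~8% of years see a damaging event
losses = np.where(event_years,
                  rng.lognormal(mean=2.0, sigma=1.2, size=n_years), 0.0)

def loss_exceedance_curve(annual_losses, thresholds):
    """Empirical annual frequency of exceedance for each loss threshold."""
    annual_losses = np.asarray(annual_losses)
    return np.array([(annual_losses > t).mean() for t in thresholds])

thresholds = np.array([1.0, 5.0, 10.0, 50.0, 100.0])      # loss units are arbitrary
afe = loss_exceedance_curve(losses, thresholds)
aal = losses.mean()                                       # average annual loss

for t, freq in zip(thresholds, afe):
    rp = np.inf if freq == 0 else 1.0 / freq
    print(f"loss > {t:6.1f}: AFE = {freq:.4f}  (return period ~ {rp:.0f} yr)")
print(f"average annual loss: {aal:.2f}")
```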
678.
Random field characteristics of earthquake occurrence and test of earthquake occurrence rate. Meng-Tan GAO (高孟潭) (Institute of Geophysics, State...
679.
Detrending is a key step in studying scaling behaviors with Detrended Fluctuation Analysis (DFA), which is used to explore the long-range correlation of hydrological series. However, the irregular periodicity and the various trends within hydrological series, which result from the integrated influences of human activities (such as the construction of water reservoirs and the withdrawal of freshwater) and of climate changes (such as alterations of precipitation in both space and time), complicate the selection of detrending methods. In this study, we address the detrending problem, given its theoretical and practical importance in DFA-based scaling analysis. Focusing on the irregularity of the periodic trends, a modified DFA, the varying parameter DFA (VPDFA), and its combination with the adaptive detrending algorithm (ADA) are employed to eliminate the influence of irregular cycles on DFA-based scaling results. The results indicate that, for streamflow series with no more than 20 cycles, VPDFA is recommended; otherwise, the combined method has to be employed. A comparison study indicates that the scaling behavior of the observed streamflow series detrended by the average-removal method, when compared with the results of DFA, VPDFA and ADA, is that of the periodic residues around the averaged annual cycle for the entire series, rather than that of a series with all annual cycles removed. Moreover, although the VPDFA result for the short observed streamflow record corresponds well to that for the numerically simulated series, the scaling behavior obtained by the combined method for the long record looks strange and differs from that of the numerical analysis. We attribute this difference to the complicated hydrological structure and to possible hydrological alteration arising from the increasing integrated impacts of human activities as the record lengthens. How to include most of the important factors in the detrending procedure remains a challenging task for further study of the scaling behavior of hydrological processes. Copyright © 2012 John Wiley & Sons, Ltd.
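For reference, a minimal sketch of standard DFA (not VPDFA or the ADA combination discussed in the abstract), applied to a made-up streamflow-like series with an annual cycle; the scales and polynomial order are arbitrary choices.

```python
import numpy as np

def dfa(series, scales, order=1):
    """Detrended Fluctuation Analysis: returns the fluctuation function F(s) for each
    window size s, using a polynomial detrend of the given order in each window."""
    x = np.asarray(series, dtype=float)
    profile = np.cumsum(x - x.mean())          # integrated (profile) series
    fluctuations = []
    for s in scales:
        n_windows = len(profile) // s
        f2 = []
        for w in range(n_windows):
            segment = profile[w * s:(w + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, segment, order), t)
            f2.append(np.mean((segment - trend) ** 2))
        fluctuations.append(np.sqrt(np.mean(f2)))
    return np.array(fluctuations)

# Hypothetical daily streamflow-like series with an annual cycle plus noise.
rng = np.random.default_rng(2)
t = np.arange(20 * 365)
flow = 10 + 5 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 1, t.size)

scales = np.array([16, 32, 64, 128, 256, 512])
F = dfa(flow, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]   # scaling exponent
print(f"DFA scaling exponent alpha ~ {alpha:.2f}")
```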
680.
王发信  柏菊 《地下水》2014,(5):51-53
Using measured water-table depth data from 180 shallow groundwater observation points in the Huaibei Plain, contour maps of the regional multi-year average groundwater depth and of the groundwater depths in dry and wet years were drawn; on this basis, the spatial distribution of shallow groundwater depth in each year type was analysed, and the distribution of stations with large depths was examined separately. The results show that the multi-year average shallow groundwater depth in the Huaibei Plain is 2.48 m, with deeper water tables in the northwest and shallower ones in the southeast, providing a reference for the rational and efficient development and utilization of shallow groundwater in the Huaibei Plain.