991.
We propose a stochastic methodology for assessing the risk of a large earthquake when a long time has elapsed since the last large seismic event. We derive an approximate probability distribution for the occurrence time of the next large earthquake, given that the last one occurred long ago. We prove that, under reasonable conditions, this distribution is exponential, with a rate determined by the asymptotic slope of the cumulative intensity function of a nonhomogeneous Poisson process. Since an empirical cumulative distribution function of the waiting time for the next large earthquake cannot be obtained directly, we derive an estimator of this distribution function from existing data. We conduct a simulation study to identify scenarios in which the proposed methodology performs well. Finally, a real-world data analysis illustrates its potential applications, including a homogeneity test for the times between earthquakes.
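One way to see the exponential limit stated here (a sketch under the abstract's own setup, with \(\Lambda\) the cumulative intensity of the nonhomogeneous Poisson process and \(W\) the waiting time measured from an elapsed time \(s\)):

\[
P(W > t \mid \text{no event in } [0, s]) \;=\; e^{-[\Lambda(s+t) - \Lambda(s)]} \;\longrightarrow\; e^{-\lambda_{\infty} t} \quad (s \to \infty), \qquad \lambda_{\infty} = \lim_{s \to \infty} \frac{\Lambda(s)}{s},
\]

provided \(\Lambda(s+t) - \Lambda(s) \to \lambda_{\infty} t\), so the waiting time is asymptotically exponential with rate equal to the asymptotic slope of \(\Lambda\). The exact regularity conditions are the paper's.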
992.
Changing climate and precipitation patterns make the estimation of precipitation, which exhibits two-dimensional and sometimes chaotic behavior, increasingly challenging. In recent decades, numerous data-driven methods have been developed and applied to estimate precipitation; however, these methods rely on one-dimensional approaches, lack generality, require neighboring stations, and have low sensitivity. This paper implements the first generally applicable, highly sensitive two-dimensional data-driven model of precipitation. The model, named frequency-based imputation (FBI), relies on non-continuous monthly precipitation time series. It requires no input-parameter tuning and no data preprocessing, and it provides multiple estimates (from the most to the least probable) of each missing data unit using the series itself. A total of 34,330 monthly total precipitation observations from 70 stations in 21 basins within Turkey were used to assess the method, by removing and re-estimating observation series in annual increments. Comparisons with expectation-maximization and multiple linear regression models show that the FBI method is superior in estimating monthly precipitation. The paper also provides a link to the software code for the FBI method.
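A minimal sketch of the frequency-based idea as described above: rank candidate values for a missing month from most to least probable using the empirical histogram of that calendar month in the series itself. This is a guess at the concept for illustration, not the authors' published FBI algorithm; the function name and binning are assumptions.

```python
import numpy as np

def impute_by_frequency(series, months, target_month, n_bins=10):
    """Rank candidate estimates for a missing monthly total, most probable
    first, from the empirical histogram of that calendar month (illustrative
    stand-in for the FBI concept, not the paper's exact algorithm)."""
    vals = series[(months == target_month) & ~np.isnan(series)]
    counts, edges = np.histogram(vals, bins=n_bins)
    order = np.argsort(counts)[::-1]            # most frequent bin first
    centers = 0.5 * (edges[:-1] + edges[1:])    # bin centers as estimates
    return centers[order], counts[order] / counts.sum()

# Example: rank estimates for a missing January (month == 1) total
rng = np.random.default_rng(0)
precip = rng.gamma(2.0, 30.0, size=240)         # 20 years of synthetic monthly totals
months = np.tile(np.arange(1, 13), 20)
estimates, probs = impute_by_frequency(precip, months, target_month=1)
print(estimates[:3], probs[:3])                 # three most probable estimates
```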
993.
The highest seismic activity in Vietnam is observed in the northwest of the country; hence the practical importance of a more accurate earthquake-hazard assessment for this area. Worldwide experience of seismicity, in particular the recent Tohoku mega-earthquake (March 11, 2011, Mw = 9.0, Japan), shows that instrumental and historical data alone are insufficient for reliably estimating earthquake hazard. This is all the more relevant for Vietnam, where the period of instrumental observation is short and historical evidence is nearly lacking. We therefore attempted to construct earthquake-hazard maps from known seismicity using the available geological and geophysical data and the method of G.I. Reisner and his associates for classifying areas by seismic potential. Since the question of which geological and geophysical parameters to use, and with what weights, remains unresolved, we developed a program package to estimate Mmax under different options for the use of geological and geophysical data. In this paper we discuss the first results and the promise held by this program package.
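A hedged sketch of the "different options" idea: score grid cells as a weighted sum of normalized geological and geophysical parameters and sweep the weight combinations. The parameters, weights, and the mapping from score to Mmax below are hypothetical; the Reisner-type classification itself is not reproduced.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(4)
params = rng.random((100, 3))                    # e.g. normalized crust thickness, heat flow, gravity
weight_options = [w for w in product([0.0, 0.5, 1.0], repeat=3) if sum(w) > 0]

for w in weight_options[:3]:                     # a few of the swept weight options
    w = np.array(w) / sum(w)                     # normalize weights to sum to 1
    score = params @ w                           # per-cell hazard score in [0, 1]
    mmax = 5.0 + 3.0 * score                     # hypothetical score-to-Mmax mapping
    print(w, f"{mmax.max():.2f}")
```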
994.
Reservoirs are among the most important structures for water resources management and flood control. Much attention has been paid to the downstream effects of reservoirs and to the differences between inflows and dam-site floods caused by changes in upstream flow generation and concentration conditions after impoundment. These differences produce an inconsistency between inflow quantiles and the reservoir design criteria derived from dam-site flood series, which poses a potential risk and must be evaluated quantitatively. In this study, flood frequency analysis (FFA) and flood control risk analysis (FCRA) are applied to a long reservoir inflow series derived from a multiple-input, single-output model and a copula-based inflow estimation model. The FFA and FCRA results are compared, and the implications for reservoir flood management are discussed. The Three Gorges Reservoir (TGR) in China is selected as a case study. Results show significant differences between TGR inflows and dam-site floods, which change the reservoir's flood control risk rates. The mean values of the TGR's annual maximum inflow peak discharge and 3-day flood volume are 5.58% and 3.85% higher than the dam-site values, while the annual maximum 7-day and 15-day flood volumes have declined by 1.82% and 1.72%. The flood control risk rates increase for small and medium flood events and decline for extreme flood events. The TGR can satisfy its flood control task under the current hydrologic regime, and the results offer a reference for better management of the TGR.
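The inconsistency the abstract describes can be illustrated with a plain FFA comparison: fit a distribution to annual-maximum dam-site floods and to inflows, then ask how often the inflow series exceeds a dam-site design flood. The sketch below uses a Gumbel fit and synthetic data for concreteness; the paper's copula-based inflow model and chosen distributions are not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic annual-maximum peak discharges (m3/s); placeholders, not TGR data
dam_site = stats.gumbel_r.rvs(loc=50000, scale=8000, size=60, random_state=rng)
inflow = dam_site * 1.056                        # ~5.6% higher mean, echoing the abstract

# 1000-year design flood from the dam-site series, then its exceedance
# probability under the (shifted) inflow regime
q_design = stats.gumbel_r.ppf(0.999, *stats.gumbel_r.fit(dam_site))
p_inflow = stats.gumbel_r.sf(q_design, *stats.gumbel_r.fit(inflow))
print(f"design flood: {q_design:.0f} m3/s, inflow exceedance prob.: {p_inflow:.5f}")
```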
996.
Daily rainfall is a complex signal exhibiting an alternation of dry and wet states, seasonal fluctuations, and irregular behavior at multiple scales that stationary stochastic simulation models cannot preserve. In this paper we investigate strategies for preserving these features by comparing two recent algorithms for stochastic rainfall simulation. The first is the modified Markov model, belonging to the family of Markov-chain-based techniques, which introduces non-stationarity in the chain parameters to preserve the long-term behavior of rainfall. The second is direct sampling, based on multiple-point statistics, which aims to simulate a complex statistical structure by reproducing the data patterns found in a training data set. The two techniques are compared by simulating, first, a synthetic daily rainfall time series showing a highly irregular alternation of two regimes and, second, a real rainfall data set. This comparison allows us to analyze the efficiency of the elements characterizing the two techniques, such as a variable time dependence, adaptive kernel smoothing, and the use of low-frequency rainfall covariates. The results suggest, under different data-availability scenarios, which of these elements are most appropriate for representing the rainfall-amount probability distribution at different scales, the annual seasonality, the dry-wet temporal pattern, and the persistence of rainfall events.
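For intuition, a two-state (dry/wet) occurrence chain with month-dependent transition probabilities is the simplest version of the non-stationary-parameters idea mentioned above. The sketch below is a minimal stand-in, not the modified Markov model or direct sampling; the month index and probability values are assumptions.

```python
import numpy as np

def simulate_occurrence(p_wd, p_ww, month_of_day, n_days, rng):
    """Dry/wet Markov-chain occurrence with month-dependent transitions.
    p_wd[m]: P(wet | dry) in month m; p_ww[m]: P(wet | wet) in month m."""
    state = np.zeros(n_days, dtype=int)          # 0 = dry, 1 = wet
    for t in range(1, n_days):
        m = month_of_day[t]
        p_wet = p_ww[m] if state[t - 1] else p_wd[m]
        state[t] = rng.random() < p_wet
    return state

rng = np.random.default_rng(2)
month_of_day = (np.arange(365) // 31) % 12       # crude month index for one year
p_wd = np.full(12, 0.25)                         # seasonality enters by varying these
p_ww = np.full(12, 0.65)                         # wet spells persist
wet = simulate_occurrence(p_wd, p_ww, month_of_day, 365, rng)
print("wet-day fraction:", wet.mean())
```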
997.
Performing a comprehensive risk analysis is essential to ensuring a reliable and sustainable water supply. Although the general framework of risk analysis is well established, specific adaptation is needed for systems such as water distribution networks (WDN). Understanding the vulnerability of a WDN to deliberate contamination, and consumers' sensitivity to the use of contaminated water, is vital for informing decision-makers. This paper presents an innovative step-by-step methodology for developing comprehensive indicators for sensitivity, vulnerability, and criticality analyses in the absence of an early warning system (EWS). Assessing and aggregating these indicators with specific fuzzy operators identifies the most critical points in a WDN: intentional intrusion of contaminants at these points can harm both consumers and the water infrastructure. The methodology is demonstrated through a case study of a French WDN not equipped with sensors.
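To show what aggregating indicators with fuzzy operators can look like, the sketch below combines a per-node vulnerability score and a consumer-sensitivity score into a criticality index with two generic operators (a min t-norm and a geometric mean). The scores and operators are illustrative assumptions; the paper's specific indicators and fuzzy operators are not reproduced.

```python
import numpy as np

vulnerability = np.array([0.9, 0.4, 0.7, 0.2])   # hypothetical node scores in [0, 1]
sensitivity   = np.array([0.8, 0.9, 0.3, 0.5])

criticality_strict = np.minimum(vulnerability, sensitivity)   # both must be high (t-norm)
criticality_soft = np.sqrt(vulnerability * sensitivity)       # compensatory geometric mean
ranking = np.argsort(criticality_strict)[::-1]                # most critical node first
print("most critical nodes (strict):", ranking)
```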
998.
Successful modeling of stochastic hydro-environmental processes relies heavily on the quantity and quality of accessible data, and noisy data can degrade model performance. Moreover, in the training phase of any artificial-intelligence-based model, each training data set is usually a limited sample of the possible patterns of the process and hence may not represent the behavior of the whole population. Accordingly, in the present article a wavelet-based denoising method was first used to smooth hydrological time series; small normally distributed noises with zero mean and various standard deviations were then generated and added to the smoothed series to form different denoised-jittered training data sets. These were used for Artificial Neural Network (ANN) and Adaptive Neuro-Fuzzy Inference System (ANFIS) modeling of the daily and multi-step-ahead rainfall-runoff process of the Milledgeville station on the Oconee River (USA) and the Pole Saheb station on the Jighatu River (Iran). The proposed hybrid data-preprocessing approach is used here for the first time in time-series modeling, and in particular in the modeling of hydrological processes; the impacts of denoising (smoothing) and noise injection (jittering) have not previously been investigated simultaneously, in hydrology or in other engineering fields. To evaluate performance, the outcomes were compared with the results of multiple linear regression and Auto-Regressive Integrated Moving Average models. The ANN and ANFIS models trained with denoised-jittered data show that the proposed preprocessing approach, which combines denoising and jittering, improved single-step-ahead rainfall-runoff modeling in the verification phase by up to 14% and 12% at the Milledgeville station and up to 22% and 16% at the Pole Saheb station. Multi-step-ahead modeling with the proposed approach also improved results for both watersheds.
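A minimal sketch of the denoise-then-jitter preprocessing described above, using PyWavelets: wavelet-threshold the series (smoothing), then add zero-mean Gaussian noise. The wavelet family, decomposition level, and threshold rule below are assumptions, not the paper's exact settings.

```python
import numpy as np
import pywt

def denoise_and_jitter(series, sigma_jitter, wavelet="db4", level=3, rng=None):
    """Wavelet soft-threshold denoising followed by Gaussian jittering."""
    rng = rng or np.random.default_rng()
    coeffs = pywt.wavedec(series, wavelet, level=level)
    # Universal threshold estimated from the finest detail coefficients
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(series)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    smooth = pywt.waverec(coeffs, wavelet)[: len(series)]
    return smooth + rng.normal(0.0, sigma_jitter, size=len(series))

rng = np.random.default_rng(3)
runoff = np.abs(rng.normal(10, 3, size=512))     # synthetic daily runoff
training_set = denoise_and_jitter(runoff, sigma_jitter=0.1, rng=rng)
```

Varying `sigma_jitter` yields the "various standard deviations" of the different training sets.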
999.
A minimum 1-D seismic velocity model for routine seismic event location was determined for the western Barents Sea using a modified version of the VELEST code. The resulting model, BARENTS16, and the corresponding station corrections were produced using data from stations at regional distances, the vast majority located on the periphery of the recorded seismic activity owing to the unfavorable land-sea distribution. Recorded seismicity is drawn from a joint bulletin, produced by merging several international and regional bulletins for the region, together with additional parametric data from temporary deployments. We discuss the challenges posed by this extreme network-seismicity geometry for velocity-estimation resolution and result stability. Although the conditions do not permit meaningful station corrections at the farthermost stations, and even well-resolved corrections contribute little, we show that the process can still converge to a stable average velocity structure for the crust and upper mantle, in good agreement with a priori information on the regional structure and geology, and one that adequately reduces errors in event-location estimates.
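For context, a minimum 1-D model of this kind predicts regional first-arrival times from a layered structure. The sketch below computes them for a single crustal layer over a mantle half-space; the velocities and Moho depth are illustrative, not the BARENTS16 values.

```python
import numpy as np

v1, v2, h = 6.2, 8.1, 35.0                       # crust km/s, mantle km/s, Moho depth km
x = np.linspace(1, 600, 600)                     # epicentral distance, km

t_direct = x / v1                                # Pg: direct crustal phase
t_head = x / v2 + 2 * h * np.sqrt(1 / v1**2 - 1 / v2**2)   # Pn: Moho head wave
t_first = np.minimum(t_direct, t_head)           # predicted first arrival
crossover = x[np.argmax(t_head < t_direct)]
print(f"Pn becomes the first arrival beyond ~{crossover:.0f} km")
```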
1000.
Water resources provide the foundation for human development and environmental sustainability. Water shortages occur to varying degrees in some regions, typically causing sluggish economic activity, ecological degradation, and even conflicts and disputes among water-use sectors. Game theory can capture the behavior of the stakeholders involved and has been increasingly employed in water resources management. This paper presents a framework for allocating river-basin water cooperatively. The framework applies the TOPSIS model combined with entropy weighting to determine stakeholders' initial water shares, then reallocates water and net benefit using four solution concepts for crisp and fuzzy games. Finally, a fallback-bargaining model is employed to reach unanimous agreement over the four solution concepts. The framework is demonstrated with an application to the Dongjiang River Basin, South China. The results show that, overall, the whole basin gains more total benefit when the players participate in fuzzy coalitions rather than crisp coalitions, and that \(\{NHS_{Fuzzy} \text{ and } SV_{Crisp}\}\) best distributes the total benefit of the basin among the players. This study demonstrates the effectiveness of the framework for water-allocation decision-making in river-basin management. The results provide technical support for water-rights trading among stakeholders at the basin scale and may help relieve water-use conflicts across the entire basin.
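The initial-share step combines two standard pieces: entropy weights (criteria that discriminate more between stakeholders get more weight) and TOPSIS closeness to an ideal alternative. The sketch below is a textbook implementation under assumed criteria, not necessarily the paper's exact variant.

```python
import numpy as np

def entropy_topsis(X, benefit):
    """Entropy-weighted TOPSIS ranking.
    X: (stakeholders x criteria) matrix; benefit: True where larger is better."""
    m = X.shape[0]
    P = X / X.sum(axis=0)                                   # column proportions
    logP = np.log(np.where(P > 0, P, 1.0))                  # log(1)=0 handles zeros
    e = -(P * logP).sum(axis=0) / np.log(m)                 # per-criterion entropy
    w = (1 - e) / (1 - e).sum()                             # entropy weights
    V = w * X / np.linalg.norm(X, axis=0)                   # weighted, vector-normalized
    best = np.where(benefit, V.max(axis=0), V.min(axis=0))  # ideal solution
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0)) # anti-ideal solution
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)                     # closeness in [0, 1]

# Hypothetical stakeholders scored on water demand, population, GDP
X = np.array([[120.0, 3.5, 40.0], [80.0, 5.0, 25.0], [150.0, 2.0, 60.0]])
print(entropy_topsis(X, benefit=np.array([True, True, True])))
```

Normalizing the closeness scores to sum to one gives one plausible initial water-share vector.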