171.
Probabilistic seismic risk assessment for spatially distributed lifelines is less straightforward than for individual structures. While procedures such as the 'PEER framework' have been developed for risk assessment of individual structures, these are not easily applicable to distributed lifeline systems, due to difficulties in describing ground-motion intensity (e.g. spectral acceleration) over a region (in contrast to ground-motion intensity at a single site, which is easily quantified using Probabilistic Seismic Hazard Analysis), and because the link between the ground-motion intensities and lifeline performance is usually not available in closed form. As a result, Monte Carlo simulation (MCS) and its variants are well suited for characterizing ground motions and computing the resulting losses to lifelines. This paper proposes a simulation-based framework for developing a small but stochastically representative catalog of earthquake ground-motion intensity maps that can be used for lifeline risk assessment. In this framework, Importance Sampling is used to preferentially sample 'important' ground-motion intensity maps, and K-Means Clustering is used to identify and combine redundant maps in order to obtain a small catalog. The effects of sampling and clustering are accounted for through a weighting on each remaining map, so that the resulting catalog is still a probabilistically correct representation. The feasibility of the proposed simulation framework is illustrated by using it to assess the seismic risk of a simplified model of the San Francisco Bay Area transportation network. A catalog of just 150 intensity maps is generated to represent hazard at 1038 sites from 10 regional fault segments causing earthquakes with magnitudes between five and eight. The risk estimates obtained using these maps are consistent with those obtained using conventional MCS utilizing many orders of magnitude more ground-motion intensity maps. Therefore, the proposed technique can be used to drastically reduce the computational expense of a simulation-based risk assessment without compromising the accuracy of the risk estimates. This will facilitate computationally intensive risk analysis of systems such as transportation networks. Finally, the study shows that the uncertainties in the ground-motion intensities and the spatial correlations between ground-motion intensities at various sites must be modeled in order to obtain unbiased estimates of lifeline risk. Copyright © 2010 John Wiley & Sons, Ltd.
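A minimal sketch of the catalog-reduction step described in this abstract: importance-sampled intensity maps are merged by k-means clustering, and each surviving map inherits the summed sampling weights of its cluster, so the reduced catalog remains probabilistically consistent. The maps and weights below are random placeholders, not the authors' data or implementation.

```python
# Hypothetical sketch: build a small, weighted catalog of intensity maps.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_samples, n_sites, n_keep = 2000, 50, 150

# Stand-in for simulated log ground-motion intensity maps
# (rows = candidate maps, columns = sites).
maps = rng.normal(size=(n_samples, n_sites))

# Importance-sampling weights: ratio of target density to the
# preferential sampling density for each map (placeholder values).
weights = np.exp(rng.normal(scale=0.1, size=n_samples))
weights /= weights.sum()

# Merge redundant maps with k-means; keep one representative per cluster.
km = KMeans(n_clusters=n_keep, n_init=10, random_state=0).fit(maps)
catalog, catalog_w = [], []
for k in range(n_keep):
    members = np.flatnonzero(km.labels_ == k)
    d = np.linalg.norm(maps[members] - km.cluster_centers_[k], axis=1)
    catalog.append(maps[members[np.argmin(d)]])   # map nearest the centroid
    catalog_w.append(weights[members].sum())      # cluster's summed weight
catalog, catalog_w = np.asarray(catalog), np.asarray(catalog_w)
print(len(catalog), catalog_w.sum())              # 150 maps, weights sum to 1
```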
172.
[Abstract garbled beyond recovery in the source; the recoverable terms indicate it concerns VTEC modelling based on GPS observation data.]
173.
Tree ring dating plays an important role in obtaining past climate information. Fundamental work on obtaining tree ring samples in typical climate regions is therefore essential. This article presents the optimum distribution of tree ring sampling sites based on climate information from the Climate Observation Network (the ORPOM model). In this model, tree rings in a typical region are used to represent the surface, with strong correlation with the climate information as the guiding principle. Taking the Horqin Sandy Land in the cold and arid region of China as an example, the optimum distribution range of the tree ring sampling sites was obtained by applying the ORPOM model, which is considered a reasonably practical scheme.
174.
The accuracy of the Mexican National Forest Inventory (NFI) map is derived in four distinct ecogeographical areas, using an assessment design tailored for the project. A main achievement of the design was to integrate the high diversity of classes encompassed at the most detailed subcommunity level of the classification scheme within a cost-controlled, statistically sound assessment. A hybrid double sampling strategy was applied to the 2.5 million-ha study area. A total of 5955 reference sites were verified against their NFI map label. The availability of detailed quasi-synchronous reference data for the 2000 Landsat-derived NFI and the high diversity of mapped classes allowed a careful thematic analysis on the selected regions, relevant for national extrapolation. Global accuracy estimates of 64–78 per cent were registered among the four ecogeographical areas (two with mainly temperate climate and the other two with mainly tropical climate), with the lower accuracy levels found in areas more densely covered with forests. According to the estimates, the NFI map tends to underestimate the presence of temperate forest (especially oak) and overestimate the presence of tropical forest in the areas investigated. The analysis of confusions reveals difficulties in unambiguously interpreting or labelling forests with secondary vegetation, herbaceous and/or shrub-like vegetation, as well as distinguishing between aquatic vegetation types. The design proved useful from the perspective of accuracy assessments of regional maps in biodiverse regions.
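As a hedged illustration of the accuracy measures involved in such an assessment, the sketch below computes overall, user's, and producer's accuracies from a confusion matrix of map labels against reference sites; the counts are invented, not taken from the NFI study.

```python
# Invented confusion matrix: rows = map class, columns = reference class.
import numpy as np

cm = np.array([[420,  35,  10],
               [ 50, 310,  25],
               [ 15,  40, 180]])

overall = np.trace(cm) / cm.sum()         # fraction of sites mapped correctly
users = np.diag(cm) / cm.sum(axis=1)      # per class, complement of commission error
producers = np.diag(cm) / cm.sum(axis=0)  # per class, complement of omission error
print(overall, users, producers)
```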
175.
Application of the LHS method in slope reliability analysis   Cited by: 8 (self-citations: 0, others: 8)
吴振君, 王水林, 葛修润. 《岩土力学》 (Rock and Soil Mechanics), 2010, 31(4): 1047-1054
The Monte Carlo (MC) method is at present a relatively accurate and widely used method for slope reliability analysis; it is little constrained by the problem at hand, highly adaptable, and its error depends only on the standard deviation and the sample size. Its precision, however, is limited by the reliability of the random sampling and the number of simulations, and its slow convergence hampers practical use. Building on the limit equilibrium method, Latin hypercube sampling (LHS) is used in place of the MC method's random sampling, and slope reliability is analyzed taking the variability of and correlation between slope parameters into account. The various ways of computing the reliability index within the LHS and MC methods are discussed, and the failure probability together with the mean and standard deviation of the safety factor are recommended as evaluation indices. A worked example shows that LHS greatly improves on the efficiency of MC: a small number of samples suffices to reflect the probability distributions of the parameters, the reliability analysis converges quickly, and large numbers of simulations are unnecessary, so the method deserves wider use in slope reliability analysis. Uniform design and orthogonal design, both common in engineering practice, were also applied to slope reliability analysis; the results of orthogonal design are close to those of the central point method, whereas the results of uniform design are unreliable.
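A minimal sketch of the MC-versus-LHS comparison described above, assuming a toy factor-of-safety function in place of the paper's limit-equilibrium model; all distribution parameters are illustrative.

```python
# Toy slope model: c (kPa) and phi (deg) as independent normals.
import numpy as np
from scipy.stats import norm, qmc

def fs(c, phi_deg):
    beta = np.radians(35.0)                      # assumed slope angle
    return c / 40.0 + np.tan(np.radians(phi_deg)) / np.tan(beta)

mu = np.array([20.0, 30.0])                      # means of c, phi
sd = np.array([4.0, 3.0])                        # standard deviations

# Plain Monte Carlo: accuracy limited by the random sample size.
rng = np.random.default_rng(0)
x = rng.normal(mu, sd, size=(100_000, 2))
pf_mc = np.mean(fs(x[:, 0], x[:, 1]) < 1.0)

# Latin hypercube sampling: stratified uniforms mapped through the
# inverse normal CDF, so far fewer samples cover the distributions.
u = qmc.LatinHypercube(d=2, seed=0).random(1_000)
x_lhs = norm.ppf(u) * sd + mu
pf_lhs = np.mean(fs(x_lhs[:, 0], x_lhs[:, 1]) < 1.0)
print(pf_mc, pf_lhs)                             # failure probabilities
```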
176.
Characteristics of high-sampling-rate water seismic waves from large earthquakes recorded by the LN-3A water-level meter, and instrument improvements   Cited by: 1 (self-citations: 1, others: 0)
High-sampling-rate records of water seismic waves from the Sumatra MS 8.7 earthquake and the Wenchuan, Sichuan (China) MS 8.0 earthquake, made by the water-level meters of the Hainan precursor monitoring network, are analyzed. The detailed well water-level variations caused by the seismic waves are clearly visible in the records, but problems also appear: clipped onsets of the water seismic waves, errors in the instrument clock, and mutual overwriting of water seismic waves on the first-generation instruments. Improvements are proposed, including adding an online buffer record and automatic timed clock calibration to the water-level meter.
177.
周洪生. 《铀矿地质》 (Uranium Geology), 2009, 25(1): 37-39
A program for the automatic generation of radiometric sampling plots was developed in the AutoLISP language. The program, developed entirely in-house, automatically generates and interprets sampling plots, greatly improving the efficiency of drafting radiometric sampling plots and avoiding the errors introduced by manual drawing.
178.
This paper investigates the use of strip transect sampling to estimate object abundance when the underlying spatial distribution is assumed to be Poisson. A design-based rather than model-based approach to estimation is investigated through computer simulation, with both homogeneous and non-homogeneous fields representing individual realizations of spatial point processes being considered. Of particular interest are the effects of changing the number of transects and transect width (or alternatively, coverage percent or fraction) on the quality of the estimate. A specific application to the characterization of unexploded ordnance (UXO) in the subsurface at former military firing ranges is discussed. The results may be extended to the investigation of outcrop characteristics as well as subsurface geological features.
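A hedged sketch of the design-based strip-transect estimator on one homogeneous Poisson realization: abundance is estimated as the count inside the strips divided by the coverage fraction. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
region_w, region_h = 1000.0, 1000.0              # survey region (m)

# One realization of a homogeneous Poisson process of objects.
n_true = rng.poisson(500)
xy = rng.uniform([0.0, 0.0], [region_w, region_h], size=(n_true, 2))

# Systematically placed strips parallel to the y-axis, random start.
n_transects, width = 10, 20.0
spacing = region_w / n_transects
coverage = n_transects * width / region_w        # coverage fraction (0.2)
start = rng.uniform(0.0, spacing - width)
lefts = start + np.arange(n_transects) * spacing

in_strip = np.zeros(n_true, dtype=bool)
for x0 in lefts:
    in_strip |= (xy[:, 0] >= x0) & (xy[:, 0] < x0 + width)

n_hat = in_strip.sum() / coverage                # design-based estimate
print(n_true, round(n_hat))
```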
179.
In case of a nuclear accident, decision makers rely on high-resolution and accurate information about the spatial distribution of radioactive contamination surrounding the accident site. However, the static nuclear monitoring networks of many European countries are generally too coarse to provide the desired level of spatial accuracy. In the Netherlands, authorities are considering a strategy in which measurement density is increased during an emergency using complementary mobile measuring devices. This raises the question: where should these mobile devices be placed? This article proposes a geostatistical methodology to optimize the allocation of mobile measurement devices, such that the expected weighted sum of false-positive and false-negative areas (i.e. false classification into safe and unsafe zones) is minimized. Radioactivity concentration is modelled as the sum of a deterministic trend and a zero-mean spatially correlated stochastic residual. The trend is defined as the outcome of a physical atmospheric dispersion model, NPK-PUFF. The residual is characterized by a semivariogram of differences between the outputs of various NPK-PUFF model runs, designed to reflect the effect of uncertainty in NPK-PUFF meteorological inputs (e.g. wind speed, wind direction). Spatial simulated annealing is used to obtain the optimal monitoring design, in which accessibility of sampling sites (e.g. distance to roads) is also considered. Although the methodology is computationally demanding, results are promising and the computational load may be considerably reduced to compute optimal mobile monitoring designs in nearly real time.
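A generic spatial simulated annealing sketch in the spirit of the optimization described above. The paper's objective (the expected weighted false-classification area) is replaced here by a simple mean distance-to-nearest-station criterion, so this illustrates the search scheme only, not the geostatistical objective.

```python
import numpy as np

rng = np.random.default_rng(2)

# Prediction grid over a unit square standing in for the region at risk.
gx, gy = np.meshgrid(np.linspace(0, 1, 40), np.linspace(0, 1, 40))
grid = np.column_stack([gx.ravel(), gy.ravel()])

def objective(stations):
    # Stand-in criterion: mean distance from each grid point to its
    # nearest station (the paper minimizes expected misclassified area).
    d = np.linalg.norm(grid[:, None, :] - stations[None, :, :], axis=2)
    return d.min(axis=1).mean()

stations = rng.uniform(size=(12, 2))             # initial mobile layout
f = objective(stations)
temp = 0.1                                       # initial temperature
for _ in range(5000):
    cand = stations.copy()
    i = rng.integers(len(cand))
    cand[i] = np.clip(cand[i] + rng.normal(0.0, 0.05, 2), 0.0, 1.0)
    f_cand = objective(cand)
    # Accept improvements always, deteriorations with annealing probability.
    if f_cand < f or rng.random() < np.exp((f - f_cand) / temp):
        stations, f = cand, f_cand
    temp *= 0.999                                # geometric cooling
print(f)
```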
180.
Suspended sediments in fluvial systems originate from a myriad of diffuse and point sources, with the relative contribution from each source varying over time and space. The process of sediment fingerprinting focuses on developing methods that enable discrete sediment sources to be identified from a composite sample of suspended material. This review identifies existing methodological steps for sediment fingerprinting, including fluvial and source sampling, and critically compares biogeochemical and physical tracers used in fingerprinting studies. Implications of applying different mixing models to the same source data are explored using data from 41 catchments across Europe, Africa, Australia, Asia, and North and South America. The application of seven commonly used mixing models to two case studies from the US (North Fork Broad River watershed) and France (Bléone watershed), with local and global (genetic algorithm) optimization methods, showed that all outputs remained within the acceptable range of error defined by the original authors. We propose that future sediment fingerprinting studies use models that combine the best explanatory parameters provided by the modified Collins (using correction factors) and Hughes (relying on iterations involving all data, and not only their mean values) models with optimization using genetic algorithms to best predict the relative contribution of sediment sources to fluvial systems.
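As a small illustration of the un-mixing idea behind such models, the sketch below estimates source proportions by minimizing a Collins-type sum of squared relative tracer errors under the usual non-negativity and sum-to-one constraints; it uses SciPy's SLSQP solver rather than a genetic algorithm, and the tracer values are invented.

```python
import numpy as np
from scipy.optimize import minimize

# Invented tracer concentrations: three sources, three tracers.
C_sources = np.array([[12.0, 3.1, 40.0],         # source A
                      [ 5.0, 7.8, 22.0],         # source B
                      [ 9.0, 1.2, 65.0]])        # source C
C_mix = np.array([8.5, 4.6, 41.0])               # suspended-sediment sample

def loss(p):
    # Sum of squared relative errors between observed and mixed tracers.
    return np.sum(((C_mix - p @ C_sources) / C_mix) ** 2)

res = minimize(loss, x0=np.full(3, 1 / 3), method="SLSQP",
               bounds=[(0.0, 1.0)] * 3,
               constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}])
print(res.x)                                     # estimated source proportions
```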