31.
Jens-Uwe Klügel   《Earth》2008,88(1-2):1-32
The paper reviews methods of seismic hazard analysis currently in use, analyzing the strengths and weaknesses of the different approaches. The review is performed from the perspective of a user of the results of seismic hazard analysis for applications such as the design of critical and general (non-critical) civil infrastructure and technical and financial risk analysis. A set of criteria is developed for, and applied to, an objective assessment of the capabilities of the different analysis methods. It is demonstrated that traditional probabilistic seismic hazard analysis (PSHA) methods have significant deficiencies that limit their practical applicability. These deficiencies have their roots in the use of inadequate probabilistic models and an insufficient understanding of modern concepts of risk analysis, as revealed in some recent large-scale studies. They result in an inability to treat dependencies between physical parameters correctly and, ultimately, in an incorrect treatment of uncertainties. As a consequence, the results of PSHA studies have been found to be unrealistic in comparison with empirical information from the real world. Attempts to compensate for these problems through the systematic use of expert elicitation have, so far, not improved the situation. It is also shown that scenario earthquakes developed by disaggregation from the results of a traditional PSHA may not be conservative with respect to energy conservation and should not be used for the design of critical infrastructure without validation. Because the assessment of technical as well as financial risks associated with potential earthquake damage requires a risk analysis, current methods remain based on a probabilistic approach with its unresolved deficiencies.

Traditional deterministic or scenario-based seismic hazard analysis methods provide a reliable and generally robust design basis for applications such as the design of critical infrastructure, especially when combined with systematic sensitivity analyses based on validated phenomenological models. Deterministic seismic hazard analysis incorporates uncertainties in safety factors, which are derived from experience as well as from expert judgment. Deterministic methods associated with high safety factors may lead to overly conservative results, especially when applied to generally short-lived civil structures. Scenarios used in deterministic seismic hazard analysis have a clear physical basis: they are related to seismic sources discovered by geological, geomorphological, geodetic and seismological investigations or derived from historical records. Scenario-based methods can be extended to risk analysis applications with an extended data analysis providing the frequency of seismic events. Such an extension provides a better-informed risk model suitable for risk-informed decision making.
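To make concrete what the PSHA approach discussed above actually computes, the following minimal sketch combines scenario occurrence rates with a toy ground-motion model into an annual exceedance rate (a hazard curve point). The coefficient values, the scenario list and the function names are illustrative assumptions, not taken from the paper.

```python
import math

def p_exceed(a, m, r, c0=-4.8, c1=0.9, c2=1.0, sigma=0.6):
    """P(PGA > a | magnitude m, distance r km) under a toy GMPE:
    ln A = c0 + c1*m - c2*ln(r) + eps, with eps ~ N(0, sigma).
    All coefficients are hypothetical placeholders."""
    mu = c0 + c1 * m - c2 * math.log(r)
    z = (math.log(a) - mu) / sigma
    # Normal survival function expressed via the error function
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def hazard_curve(a, sources):
    """Annual rate of exceeding acceleration a: sum over
    (annual_rate, magnitude, distance_km) scenario tuples."""
    return sum(nu * p_exceed(a, m, r) for nu, m, r in sources)

# Hypothetical source scenarios: (annual rate, magnitude, distance km)
sources = [(0.05, 6.5, 30.0), (0.01, 7.2, 50.0)]
lam = hazard_curve(0.1, sources)   # rate of exceeding PGA = 0.1 g
p50 = 1.0 - math.exp(-lam * 50.0)  # Poisson exceedance prob. in 50 yr
```

The dependency and uncertainty problems the paper criticizes enter exactly here: the sum treats scenarios as independent and folds all variability into a single lognormal residual.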

32.
Anders Schomacker   《Earth》2008,90(3-4):103-113
In the geological record, hummocky dead-ice moraines represent the final product of the melt-out of dead ice. Processes and rates of dead-ice melting in ice-cored moraines and at debris-covered glaciers are commonly believed to be governed by climate and debris-cover properties. Here, backwasting rates from 14 dead-ice areas are assessed in relation to mean annual air temperature, mean summer air temperature, mean annual precipitation, mean summer precipitation, and the annual sum of positive degree days. The highest correlation was found between backwasting rate and mean annual air temperature. However, the overall correlation between melt rates and climate parameters is low, stressing that local processes and topography play a major role in governing backwasting rates. Backwasting rates from modern glacial environments should serve as input to de-icing models for ancient dead-ice areas in order to assess the mode and duration of deposition.

A challenge for future exploration of dead-ice environments is to obtain long-term records of field-based monitoring of melt progression. Furthermore, many modern satellite-borne sensors have high potential for recording multi-temporal Digital Elevation Models (DEMs) for detecting and quantifying changes in dead-ice environments. In recent years, high-accuracy DEMs from airborne laser scanning altimetry (LiDAR) have emerged as an additional data source. However, time series of high-resolution aerial photographs remain essential for both visual inspection and high-resolution stereographic DEM production.
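The correlation analysis described above can be sketched with a plain Pearson coefficient between site-level climate and backwasting series. The data values below are invented for illustration only; the paper's 14-site dataset is not reproduced here.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

# Hypothetical sites: mean annual air temperature (deg C)
# versus backwasting rate (m per year)
maat = [-6.0, -3.5, -1.0, 0.5, 2.0, 4.5]
backwasting = [1.8, 2.5, 3.9, 3.1, 6.0, 5.2]
r = pearson_r(maat, backwasting)  # positive, but noticeably below 1
```

A correlation well below 1, as in the paper's finding, is what motivates the conclusion that topography and local processes matter alongside climate.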
33.
Maps showing the potential for soil erosion at a scale of 1:100,000 are produced for a study area within Lebanon; they can be used for evaluating erosion of Mediterranean karstic terrain with two different sets of impact factors built into an erosion model. The first set of factors comprises soil erodibility, morphology, land cover/use and rainfall erosivity. The second set is obtained by adding a fifth factor, rock infiltration, to the first. High infiltration can reflect high recharge, thereby decreasing the potential for surface runoff and hence the quantity of transported material. Infiltration is derived as a function of lithology, lineament density, karstification and drainage density, all of which can be easily extracted from satellite imagery. The influence of these factors is assessed by a weight/rate approach that shares similarities with both quantitative and qualitative methods and depends on a pair-wise comparison matrix.

The main outcome is the production of factorial maps and erosion susceptibility maps (scale 1:100,000). Spatial and attribute comparison of the erosion maps indicates that the model that includes a measure of rock infiltration better represents erosion potential. Field investigation of rills and gullies shows 87.5% precision for the model with rock infiltration, which is 17.5% greater than the precision of the model without it. These results indicate the necessity and importance of integrating information on the infiltration of rock outcrops when assessing soil erosion in Mediterranean karst landscapes.
34.
With recent changes in the ways that state agencies implement their environmental policies, the line between public and private is becoming increasingly blurred. This includes shifts from state-led implementation of environmental policies to conservation plans that are implemented and managed by multi-sectoral networks of governments, the private sector and environmental non-governmental organizations (ENGOs). This paper examines land trusts as private conservation initiatives that become part of neoliberal governance arrangements and partnerships that challenge our conceptions of environmental preservation and democratic participation. The paper starts with an examination of the concept of neoliberalized environmental governance. Next, it addresses the shifting social constructions of property and land in the context of protecting large-scale ecosystems. Through a case study of the extension of new environmental governance arrangements on the Oak Ridges Moraine in Ontario, we examine the relationships that have formed between different levels of the state and environmental non-governmental organizations. Finally, we analyze the expansion of land trusts and private conservation initiatives that are predicated on private land ownership and the commodification of nature, the emerging discourses and practices of private conservation, and how these are implicated in the privatization and neoliberalization of nature.
35.
We describe empirical results from a multi-disciplinary project that support modeling complex processes of land-use and land-cover change in exurban parts of Southeastern Michigan. Based on two different conceptual models, one describing the evolution of urban form as a consequence of residential preferences and the other describing land-cover changes in an exurban township as a consequence of residential preferences, local policies, and a diversity of development types, we describe a variety of empirical data collected to support the mechanisms that we encoded in computational agent-based models. We used multiple methods, including social surveys, remote sensing, and statistical analysis of spatial data, to collect data that could be used to validate the structure of our models, calibrate their specific parameters, and evaluate their output. The data were used to investigate this system in the context of several themes from complexity science, including (a) macro-level patterns; (b) autonomous decision-making entities (i.e., agents); (c) heterogeneity among those entities; (d) social and spatial interactions that operate across multiple scales; and (e) nonlinear feedback mechanisms. The results point to the importance of collecting data on agents and their interactions when producing agent-based models, the general validity of our conceptual models, and some changes that we needed to make to these models following data analysis. The calibrated models have been, and continue to be, used to evaluate landscape dynamics and the effects of various policy interventions on urban land-cover patterns.
36.
Based on mineralization characteristics, the main orebody of the Jinchuan X mining area is divided into three types: massive extra-rich ore, sponge-textured rich ore, and disseminated low-grade ore. Mathematical models were built for the three mineralized bodies separately using MicroMine software, and nickel and copper grades of the model unit blocks were estimated with the inverse distance squared method. By analyzing and comparing the variations of Ni and Cu grades and of the Ni/Cu ratio on horizontal sections of the deposit model from the 900 m to the 1300 m level, it is concluded that a Ni and Cu mineralization centre lies on the west side of the central part of the X mining area, and that toward depth the intensity of Cu mineralization exceeds that of Ni.
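The inverse distance squared estimation used for the block grades above can be sketched in a few lines. The sample coordinates and grades below are invented for illustration; real block modeling (as in MicroMine) would also apply search ellipsoids and sample limits.

```python
def idw_grade(samples, x, y, z, power=2.0):
    """Inverse-distance-weighted grade estimate at a block centre
    (x, y, z) from (sx, sy, sz, grade) drillhole composites.
    power=2.0 gives the inverse distance *squared* variant."""
    num = den = 0.0
    for sx, sy, sz, g in samples:
        d2 = (sx - x) ** 2 + (sy - y) ** 2 + (sz - z) ** 2
        if d2 == 0.0:
            return g  # block centre coincides with a sample
        w = d2 ** (-power / 2.0)  # weight = 1 / distance**power
        num += w * g
        den += w
    return num / den

# Hypothetical Ni composites: (x, y, elevation m, Ni grade %)
samples = [(0.0, 0.0, 900.0, 1.2),
           (50.0, 0.0, 900.0, 0.8),
           (0.0, 50.0, 950.0, 2.5)]
est = idw_grade(samples, 10.0, 10.0, 910.0)
```

The estimate is a convex combination of the sample grades, so it always lies between the minimum and maximum input grade.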
37.
Two algorithms for the in-situ detection and identification of vertical free convective and double-diffusive flows in groundwater monitoring wells or boreholes are proposed. One algorithm detects the causes (driving forces) and the other the effects (convection or double-diffusion) of vertical transport processes, based on geophysical borehole measurements in the water column. Five density-driven flow processes are identified: thermal, solutal, and thermosolutal convection, which lead to an equalization of a vertical density gradient, as well as salt fingering and diffusive layering, which lead to its intensification. The occurrence of density-driven transport processes could be proven in many groundwater monitoring wells and boreholes; shallow sections in particular are affected dramatically by such vertical flows. Deep sections are also affected, as the critical threshold for the onset of density-driven flow is remarkably low. In a monitoring well or borehole, several sections with different types of density-driven vertical flow may exist at the same time. Results from experimental investigations in a medium-scale testing facility with a high aspect ratio (height/radius = 19) and from numerical modeling of a water column agree well with the parameters of in-situ detected convection cells.
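The kind of classification such algorithms perform can be sketched with a linear equation of state and top-minus-bottom temperature and salinity differences over a well section. This is a simplified illustration of the regime logic, not the paper's algorithms; the expansion/contraction coefficients are generic placeholder values.

```python
def regime(dT, dS, alpha=2e-4, beta=8e-4):
    """Classify a well section from top-minus-bottom temperature
    (dT, K) and salinity (dS, kg/m^3) differences, assuming a linear
    equation of state rho ~ rho0 * (1 - alpha*T + beta*S).
    alpha, beta are illustrative thermal/haline coefficients."""
    d_rho = -alpha * dT + beta * dS   # scaled top-minus-bottom density
    if d_rho > 0:
        return "convection"           # denser water on top overturns
    if dT > 0 and dS > 0:
        return "salt fingers"         # warm, salty above cold, fresh
    if dT < 0 and dS < 0:
        return "diffusive layering"   # cold, fresh above warm, salty
    return "stable"                   # both gradients stabilizing
```

With these coefficients, a section that is 2 K warmer and slightly saltier at the top classifies as salt fingering, while a top-heavy density profile classifies as convection.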
38.
Influence of climate change on the surface runoff of the Tarim River originating from the Tianshan Mountains   (Cited 21 times: 10 self-citations, 11 by others)
The water resources of the Tarim River come mainly from two headstreams on the southern slope of the Tianshan Mountains; the Aksu River in the western section and the Kaidu–Kongque River in the middle section were selected as the study areas. Over 1956–2003, air temperature in the mountainous source regions rose continuously while precipitation increased with fluctuations; warming was particularly strong in 1995–2003, with a warming rate more than three times the 48-year average. Precipitation has increased continuously since 1986, with the 1990s about 18% wetter than the 1980s, indicating that the humid island of the mountainous source regions is expanding toward the Tarim Basin. Because meteorological observations are lacking in the high mountains, changes in runoff emerging from the mountains can serve as an integrated indicator of climate change in the middle and high mountain belts. The surface runoff of the Tarim River derived from the Tianshan Mountains increased continuously during 1986–2003. For the Kumalak River, fed mainly by glacier meltwater, annual runoff since 1994 has stepped up above its earlier long-term mean. The Kaidu River, fed mainly by precipitation runoff, experienced its wettest period on record during 1986–2002, causing the water level of Bosten Lake to rise rapidly after 1986 and recover to above the highest level recorded in 1958. The annual runoff trends of the two rivers are broadly similar, but they also reveal local differences in climate change between the western and middle sections, with inconsistent timing of wet and dry periods; nevertheless, during the warming of the most recent 16 years, the magnitude and pace of annual runoff growth in the two rivers have been similar.
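The runoff-trend statements above rest on simple time-series trend estimation, which can be sketched as an ordinary least-squares slope over annual values. The runoff numbers below are invented to mimic a post-1986 increase; they are not the observed Aksu or Kaidu series.

```python
def linear_trend(years, values):
    """Ordinary least-squares slope: change in the series per year."""
    n = len(years)
    mt, mv = sum(years) / n, sum(values) / n
    sxy = sum((t - mt) * (v - mv) for t, v in zip(years, values))
    sxx = sum((t - mt) ** 2 for t in years)
    return sxy / sxx

# Hypothetical annual runoff (1e8 m^3) illustrating a rising trend
years = list(range(1986, 1996))
runoff = [35.0, 36.2, 35.8, 37.1, 38.0, 37.5, 39.2, 40.1, 41.5, 42.0]
slope = linear_trend(years, runoff)  # positive slope: increasing runoff
```

A positive slope over the post-1986 window is the quantitative counterpart of the "continuously increasing runoff" described in the abstract; in practice a significance test (e.g. Mann–Kendall) would accompany it.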
39.
The time evolution of a two-dimensional line thermal, a turbulent flow produced by an initial element with significant buoyancy released into a large water body, is studied numerically with the two-equation k–ε model for turbulence closure. The numerical results show that the thermal is characterized by a vortex-pair flow and a kidney-shaped concentration structure with double peak maxima; the computed flow details and scalar mixing characteristics can be described by self-similar relations beyond a dimensionless time of around 10. There are two regions in the flow field of a line thermal: a mixing region, where the concentration of tracer fluid is high and the flow is turbulent and rotational with a pair of vortex eyes, and an ambient region, where the concentration is zero and the flow is potential and well described by a doublet model with strength very close to the values given by early experimental and analytical studies. The added virtual mass coefficient of the thermal motion is found to be approximat
40.
A coding error in the s-Coordinate Primitive Equation Model (SPEM) has led to misleading statements about the behaviour of the Mellor–Yamada level 2 parameterization of vertical mixing. It has been claimed that the scheme removes static instability only very slowly and preserves statically unstable stratifications for an unrealistically long time. This note corrects that statement by demonstrating that the Mellor–Yamada mixing scheme, if implemented correctly, tends to overestimate rather than underestimate vertical mixing in seasonally ice-covered seas. As with other mixing schemes showing the same behaviour, this leads to spurious open-ocean deep convection, an unrealistic homogenization of the water column, and a significant reduction of sea ice volume.