Full text (subscription): 404 articles; free full text: 12 articles.
By subject: Geodesy and Surveying 38; Atmospheric Sciences 39; Geophysics 62; Geology 113; Oceanography 27; Astronomy 89; Physical Geography 48.
By publication year: 2021 (2); 2020 (6); 2019 (11); 2018 (10); 2017 (5); 2016 (9); 2015 (14); 2014 (13); 2013 (21); 2012 (4); 2011 (9); 2010 (12); 2009 (22); 2008 (18); 2007 (28); 2006 (22); 2005 (17); 2004 (15); 2003 (13); 2002 (12); 2001 (13); 2000 (11); 1999 (10); 1998 (8); 1997 (3); 1996 (5); 1995 (6); 1994 (5); 1993 (5); 1992 (2); 1991 (4); 1990 (5); 1989 (5); 1988 (4); 1987 (3); 1986 (3); 1985 (5); 1984 (5); 1983 (6); 1982 (3); 1981 (10); 1980 (4); 1978 (2); 1977 (4); 1976 (2); 1975 (5); 1974 (3); 1973 (3); 1972 (3); 1859 (1).
A total of 416 results found.
401.
The common occurrence of tree and pole blow-down by pyroclastic currents provides an opportunity to estimate properties of the currents. Blow-down may occur by uprooting (root-zone rupture), or by flexure or shear at some point on the object. If trees are delimbed before blow-down, each tree or pole can be modeled as a cylinder perpendicular to the current. The force acting on a cylinder is a function of the flow's dynamic pressure, the cylinder geometry, and the drag coefficient. Treating the object as a cantilever of circular cross-section, its strength for the appropriate failure mode (rupture, uprooting or flexure) can then be used to estimate the minimum dynamic pressure the current must have had. In some cases, larger or stronger objects left standing provide upper bounds on the dynamic pressure. We carried out this analysis in two ways: (1) assuming that the current properties are vertically constant; and (2) allowing current velocity and density to vary with height according to established models for turbulent boundary layers and stratified flow. The two methods produce similar results for dynamic pressure. The second, combined with a method to approximate average whole-current density, offers a means to estimate the average velocity and density over the height of the failed objects. The method is applied to several example cases, including Unzen, Mount St. Helens, Lamington, and Merapi volcanoes, and our results compare reasonably well with independent estimates. For several cases, we found it is possible to use the dynamic-pressure equations developed for vertically uniform flow, together with the average cloud density multiplied by a factor of 2–5, to determine the average velocity over the height of the failed object.
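As a rough illustration of the cantilever calculation described above, the sketch below solves for the minimum dynamic pressure needed to snap a delimbed trunk by flexure, assuming a vertically uniform current (method 1), a uniform drag load along the trunk, and standard beam theory for a circular cross-section. The diameter, height, drag coefficient, and modulus of rupture are illustrative assumptions, not values from the study.

```python
import math

def min_dynamic_pressure(d, h, mor, cd=1.0):
    """Minimum current dynamic pressure (Pa) to snap a cylindrical
    trunk of diameter d (m) and exposed height h (m) by flexure,
    assuming a vertically uniform current and a uniform drag load
    w = cd * q * d per unit length.

    mor : modulus of rupture of the wood (Pa)
    cd  : drag coefficient for a cylinder in cross-flow (~1 is an
          assumed, order-of-magnitude value)
    """
    # Base bending moment of a uniformly loaded cantilever: M = w * h**2 / 2
    # Bending stress at the base: sigma = M / S, with S = pi * d**3 / 32
    # Setting sigma = mor and solving for q gives:
    return mor * math.pi * d**2 / (16.0 * cd * h**2)

# Illustrative numbers only: a 0.4 m diameter, 20 m tall delimbed
# trunk with a green-wood modulus of rupture of ~50 MPa.
q = min_dynamic_pressure(d=0.4, h=20.0, mor=50e6)
print(f"minimum dynamic pressure ≈ {q / 1000:.1f} kPa")  # ≈ 3.9 kPa
```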
403.
We consider the conditions required for a cluster core to shrink, by adiabatic accretion of gas from the surrounding cluster, to densities at which stellar collisions become a likely outcome. We show that the maximum densities attained, and hence the viability of collisions, depend on the balance between core shrinkage (driven by accretion) and core puffing-up (driven by relaxation effects). The expected number of collisions scales with N_core, the number of stars in the cluster core, and with the free-fall velocity v_ff of the parent cluster (gas reservoir). Thus, whereas collisions are very unlikely in a relatively low-mass, low-internal-velocity system such as the Orion Nebula Cluster, they become considerably more important at the mass and velocity scales characteristic of globular clusters. Stellar collisions in response to accretion-induced core shrinkage therefore remain a viable prospect in more massive clusters, and may contribute to the production of intermediate-mass black holes in these systems.
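The free-fall velocity contrast that drives this conclusion is easy to make concrete. The sketch below evaluates the standard relation v_ff = sqrt(2GM/R) for an ONC-like cluster and a globular-cluster-like system; the masses and radii are round-number assumptions for illustration only, not values from the paper.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
PC = 3.086e16      # parsec, m

def v_freefall(mass_msun, radius_pc):
    """Free-fall velocity v_ff = sqrt(2 G M / R), returned in km/s."""
    return math.sqrt(2 * G * mass_msun * M_SUN / (radius_pc * PC)) / 1e3

# Assumed, order-of-magnitude cluster parameters:
print(f"ONC-like cluster:      {v_freefall(2e3, 2.0):.1f} km/s")   # ~3 km/s
print(f"globular-like cluster: {v_freefall(5e5, 5.0):.1f} km/s")   # ~30 km/s
```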
404.
Since the 1970s, there has been continuing and growing interest in developing accurate estimates of the annual fluvial transport (fluxes and loads) of suspended sediment and sediment-associated chemical constituents. This study evaluates the effects of manual sample numbers (from 4 to 12 year⁻¹) and sample scheduling (random-based, calendar-based and hydrology-based) on the precision, bias and accuracy of annual suspended sediment flux estimates. The evaluation is based on data from selected US Geological Survey daily suspended sediment stations in the USA, covering basins ranging in area from just over 900 km² to nearly 2 million km² and annual suspended sediment fluxes ranging from about 4 kt year⁻¹ to about 200 Mt year⁻¹. The results appear to indicate a scale effect for random-based and calendar-based sampling schemes, with larger sample numbers required as basin size decreases. All the sampling schemes evaluated display some level of positive (overestimate) or negative (underestimate) bias. The study further indicates that hydrology-based sampling schemes are likely to generate the most accurate annual suspended sediment flux estimates with the fewest samples, regardless of basin size. This type of scheme seems most appropriate when suspended sediment concentrations, sediment-associated chemical concentrations, and annual suspended sediment and sediment-associated chemical fluxes represent only a few of the parameters of interest in multidisciplinary, multiparameter monitoring programmes. The results are just as applicable to the calibration of autosamplers and suspended sediment surrogates currently used to measure or estimate suspended sediment concentrations and, ultimately, annual suspended sediment fluxes, because manual samples are required to adjust the data generated by these techniques so that they are depth-integrated and cross-sectionally representative. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.
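A toy version of this comparison can be run on synthetic data. The sketch below generates a synthetic daily discharge record with a power-law sediment rating curve, then estimates the annual flux from 12 samples chosen by random-based, calendar-based and hydrology-based schemes using a flow-weighted ratio estimator. The discharge distribution, rating curve and estimator are illustrative simplifications, not the study's actual procedures or data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic daily record for one year (assumed, illustrative only):
# lognormal discharge Q and a power-law rating curve C = a * Q**b.
q = rng.lognormal(mean=3.0, sigma=1.0, size=365)   # discharge
c = 5.0 * q**0.8                                   # concentration
true_flux = np.sum(c * q)                          # proportional to annual load

def ratio_estimate(idx):
    """Flow-weighted ratio estimator: scale the sampled load/discharge
    ratio by the (assumed known) total annual discharge."""
    return np.sum(c[idx] * q[idx]) / np.sum(q[idx]) * np.sum(q)

n = 12  # manual samples per year
schemes = {
    "random":    rng.choice(365, size=n, replace=False),  # random-based
    "calendar":  np.arange(0, 365, 365 // n)[:n],         # calendar-based
    "hydrology": np.argsort(q)[-n:],                      # high-flow targeted
}
for name, idx in schemes.items():
    bias = 100 * (ratio_estimate(idx) - true_flux) / true_flux
    print(f"{name:9s}: bias = {bias:+.1f}%")
```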
405.
Given an initial spatial sampling campaign, it is often important to conduct a second, more targeted campaign based on the properties of the first. Here a network re-design modifies the first network by adding and/or removing sites so that maximum information is preserved. Commonly, this optimisation is constrained by limited sampling funds and a reduced sample network is sought. To this end, we demonstrate the use of geographically weighted methods combined with a location-allocation algorithm as a means to design a second-phase sampling campaign in univariate, bivariate and multivariate contexts. As a case study, we use a freshwater chemistry data set covering much of Great Britain. Applying the two-stage procedure enables the optimal identification of a pre-specified number of sites, providing maximum spatial and univariate/bivariate/multivariate water chemistry information for the second campaign. Network re-designs that account for the buffering capacity of a freshwater site against acidification are also conducted. To complement the basic methods, robust alternatives are used to reduce the effect of anomalous observations on the re-designs. Our non-stationary re-design framework is general and provides a relatively simple, viable alternative to the geostatistical re-design procedures that are commonly adopted. Particularly in the multivariate case, it represents an important methodological advance.
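As a much-simplified stand-in for the location-allocation step, the sketch below reduces a candidate network by greedy farthest-point ("maximin") selection over combined spatial and chemistry coordinates. This is an assumed surrogate for the geographically weighted procedure described above, with entirely synthetic data; the attribute weighting is likewise an arbitrary illustration.

```python
import numpy as np

def greedy_maximin(coords, p, seed=0):
    """Greedy farthest-point selection of p sites from a candidate
    network: each step keeps the candidate farthest from the sites
    already chosen, so retained sites spread across the joint
    spatial/attribute space."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(coords)))]
    d = np.linalg.norm(coords - coords[chosen[0]], axis=1)
    while len(chosen) < p:
        nxt = int(np.argmax(d))        # candidate farthest from the set
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(coords - coords[nxt], axis=1))
    return chosen

# Toy network: 200 candidate sites with x, y and one standardised
# chemistry variable (all synthetic).
rng = np.random.default_rng(1)
sites = rng.uniform(0, 100, size=(200, 2))
chem = rng.normal(size=(200, 1))
# Factor of 10 is an assumed weight balancing space against chemistry.
keep = greedy_maximin(np.hstack([sites, 10 * chem]), p=30)
print(f"retained {len(keep)} of {len(sites)} candidate sites")
```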
409.
Toward Optimal Calibration of the SLEUTH Land Use Change Model
SLEUTH is a computational simulation model that uses adaptive cellular automata to simulate how cities grow and change their surrounding land uses. It has long been known that models are of most value when calibrated, and that back-casting (testing against known prior data) is an effective calibration method. SLEUTH is calibrated by brute force: every possible combination and permutation of its control parameters is tried, and the outcomes are tested for their success at replicating prior data. To keep this procedure tractable, previous SLEUTH applications have followed various suggested rules, most of which leave out many of the possible parameter combinations. In this research, we instead attempt to create the complete set of possible outcomes and examine them to select the optimum from among the millions of possibilities. The self-organizing map (SOM) was used as a data-reduction method to isolate the best parameter sets, and to indicate which of the 13 existing calibration metrics used in SLEUTH are necessary to arrive at the optimum. As a result, a new metric is proposed that will be of value in future SLEUTH applications. The new measure combines seven of the current measures (eight if land use is modeled) and is recommended as a way to make SLEUTH applications more directly comparable and to give superior modeling and forecasting results.
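To make the SOM data-reduction step concrete, the sketch below trains a minimal self-organizing map (written from scratch, not SLEUTH's actual tooling) on a synthetic table of 13 calibration metrics per parameter set. The grid size, decay schedules, and all data are illustrative assumptions.

```python
import numpy as np

def train_som(data, grid=(8, 8), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal self-organizing map, used as a data-reduction step over
    per-parameter-set calibration metrics. Returns the trained codebook
    of shape (gx, gy, n_features); similar metric vectors map to
    nearby grid cells, exposing clusters of candidate parameter sets."""
    rng = np.random.default_rng(seed)
    gx, gy = grid
    w = rng.uniform(data.min(), data.max(), size=(gx, gy, data.shape[1]))
    gxx, gyy = np.meshgrid(np.arange(gx), np.arange(gy), indexing="ij")
    for t in range(iters):
        x = data[rng.integers(len(data))]
        # Best-matching unit: grid cell whose weights are closest to x.
        bmu = np.unravel_index(np.argmin(((w - x) ** 2).sum(axis=2)), (gx, gy))
        # Exponentially decaying learning rate and neighbourhood radius.
        lr = lr0 * np.exp(-t / iters)
        sigma = sigma0 * np.exp(-t / iters)
        dist2 = (gxx - bmu[0]) ** 2 + (gyy - bmu[1]) ** 2
        h = np.exp(-dist2 / (2 * sigma**2))        # neighbourhood function
        w += lr * h[..., None] * (x - w)           # pull neighbours toward x
    return w

# Synthetic stand-in for calibration output: 5000 parameter sets,
# each scored by 13 metrics (all values invented here).
rng = np.random.default_rng(7)
metrics = rng.random((5000, 13))
codebook = train_som(metrics)
print(codebook.shape)  # (8, 8, 13)
```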
410.
To overcome shortcomings of traditional multibeam surveying and data processing, a new method is presented for precisely determining the instantaneous height of the multibeam transducer by ...