155 results found (search time: 218 ms).
84.
Abstract

The current research focuses on the development of a methodology for undertaking real-time spatial analysis in a supercomputing environment, specifically using massively parallel SIMD computers. Several approaches that can be used to explore the parallelization characteristics of spatial problems are introduced. Within a methodology directed toward spatial data parallelism, strategies based on both location-based and object-based data decomposition are proposed, and a programming logic for spatial operations at the local, neighborhood, and global levels is recommended. An empirical study of real-time traffic flow analysis shows the utility of the suggested approach for a complex spatial analysis problem. The example demonstrates that the proposed methodology, especially when combined with appropriate programming strategies, is preferable in situations where critical, real-time spatial analysis computations are required. Its implementation in a parallel environment also raises some interesting questions about the theoretical basis underlying the analysis of large networks.
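A minimal sketch of the location-based decomposition strategy described above, assuming a simple row-block scheme (this is illustrative, not the authors' SIMD implementation): local-level work is embarrassingly parallel, neighborhood-level work needs a one-cell halo, and global-level work combines per-block partial results.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def local_op(block):
    """Local level: each output cell depends only on its own input cell."""
    return block * 2.0

def neighborhood_op(block):
    """Neighborhood level: 3x3 mean over a block carrying a 1-cell halo."""
    h, w = block.shape
    return sum(block[i:h - 2 + i, j:w - 2 + j]
               for i in range(3) for j in range(3)) / 9.0

def row_blocks(grid, n, halo=0):
    """Location-based decomposition into contiguous row blocks (+halo rows)."""
    edges = np.linspace(halo, grid.shape[0] - halo, n + 1, dtype=int)
    return [grid[lo - halo:hi + halo] for lo, hi in zip(edges[:-1], edges[1:])]

if __name__ == "__main__":
    grid = np.random.default_rng(0).random((1024, 1024))
    padded = np.pad(grid, 1, mode="edge")   # boundary halo for the full domain
    with ProcessPoolExecutor() as pool:
        doubled = np.vstack(list(pool.map(local_op, row_blocks(grid, 4))))
        smoothed = np.vstack(list(pool.map(neighborhood_op,
                                           row_blocks(padded, 4, halo=1))))
        # Global level: per-block partial sums are reduced on the host.
        mean = sum(pool.map(np.sum, row_blocks(grid, 4))) / grid.size
    assert doubled.shape == smoothed.shape == grid.shape
```

The same pattern carries over to SIMD hardware: the halo exchange is the only inter-block communication the neighborhood operation requires.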
85.
Considering the important role played today by unconventional gas resources in North America and their enormous future potential around the world, it is vital to both policy makers and industry that the volumes of these resources, and the impact of technology on them, be assessed. To support sound decision making on energy policy, research funding, and resource development, the uncertainty in these resource assessments must be quantified reliably. Since the 1970s, studies to assess potential unconventional gas resources have been conducted by various private and governmental agencies, the most rigorous of which was by the United States Geological Survey (USGS). The USGS employed a cell-based, probabilistic methodology that used analytical equations to calculate distributions of the resources assessed. USGS assessments have generally produced distributions for potential unconventional gas resources that are, in our judgment, unrealistically narrow for what are essentially undiscovered, untested resources. In this article, we present an improved methodology to assess potential unconventional gas resources: a stochastic approach combining Monte Carlo simulation with correlation between input variables. Application of the improved methodology to the Uinta–Piceance province of Utah and Colorado with USGS data validates the means and standard deviations of the resource distributions produced by the USGS methodology, but reveals that those distributions are not right-skewed, as would be expected for a natural resource. Our investigation indicates that the unrealistic shape and width of the gas resource distributions are caused by the use of narrow triangular input parameter distributions. The stochastic methodology proposed here is more versatile and robust than the USGS analytic methodology. Adopting it, along with carefully examining and revising the input distributions, should allow a more realistic assessment of the uncertainty surrounding potential unconventional gas resources.
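A hedged sketch of the stochastic approach described above (the input names, medians, and the 0.6 correlation are invented for illustration, not the USGS Uinta–Piceance inputs): correlated lognormal inputs are sampled through a Gaussian copula and propagated by Monte Carlo, producing the right-skewed output that narrow triangular inputs cannot.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical inputs: productive area (km^2) and recovery per unit area
# (BCF/km^2), both lognormal, with correlation imposed by sampling
# correlated standard normals and mapping them to lognormal marginals.
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=corr, size=n)

area     = np.exp(np.log(500.0) + 0.5 * z[:, 0])  # median 500, sigma_ln 0.5
recovery = np.exp(np.log(2.0)   + 0.4 * z[:, 1])  # median 2.0, sigma_ln 0.4

resource = area * recovery  # potential gas resource, BCF

pcts = np.percentile(resource, [10, 50, 90])
print(f"mean = {resource.mean():.0f}  P10/P50/P90 percentiles = {pcts.round(0)}")
# The product of correlated lognormals is right-skewed (mean > median),
# unlike the near-symmetric output produced by narrow triangular inputs.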
86.
Simulations of late 20th- and 21st-century Arctic cloud amount from 20 global climate models (GCMs) in the Coupled Model Intercomparison Project phase 3 (CMIP3) dataset are synthesized and assessed. Under recent climatic conditions, the GCMs realistically simulate the spatial distribution of Arctic clouds, the magnitude of cloudiness during the warmest seasons (summer–autumn), and the prevalence of low clouds as the predominant type. The greatest intermodel spread and the most pronounced model error, excessive cloudiness, coincide with the coldest seasons (winter–spring) and locations (the perennial ice pack, Greenland, and the Canadian Archipelago). Under greenhouse forcing (SRES A1B emissions scenario), the Arctic is expected to become cloudier, especially during autumn and over sea ice, in tandem with cloud decreases in middle latitudes. Projected cloud changes for the late 21st century depend strongly on the simulated modern (late 20th-century) annual cycle of Arctic cloud amount: GCMs that correctly simulate more clouds during summer than winter at present also tend to simulate more clouds in the future. The simulated Arctic cloud changes display a tripole structure aloft, with the largest increases concentrated at low levels (below 700 hPa) and high levels (above 400 hPa) but little change in the middle troposphere. The changes in cloud radiative forcing suggest that the cloud changes are a positive feedback annually but a negative one during summer. Of the potential explanations for the simulated Arctic cloud response, local evaporation is the leading candidate, based on its high correlation with the cloud changes. The polar cloud changes are also significantly correlated with model resolution: GCMs with higher spatial resolution tend to produce larger future cloud increases.
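The dependence of projected changes on the simulated modern annual cycle is an intermodel correlation; the sketch below shows the diagnostic with synthetic stand-in values (the real calculation would use the Arctic cloud amounts from the 20 CMIP3 GCMs).

```python
import numpy as np

rng = np.random.default_rng(0)
n_models = 20  # number of CMIP3 GCMs in the study

# Synthetic stand-ins: each model's modern summer-minus-winter Arctic cloud
# amount (%) and its projected 21st-century cloud change (%); real values
# would be diagnosed from the CMIP3 archive.
seasonal_cycle = rng.normal(10.0, 5.0, n_models)
future_change = 0.4 * seasonal_cycle + rng.normal(0.0, 2.0, n_models)

r = np.corrcoef(seasonal_cycle, future_change)[0, 1]
print(f"intermodel correlation r = {r:.2f} (n = {n_models})")
```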
87.
Geoceiver observations of Navy Navigational Satellites are generally reduced by the Method of Independent Point Positioning using the NSWC Precise Ephemeris. With this approach, RMS errors on the order of one meter have been demonstrated for relative geodetic positioning based on the reduction of approximately 40 passes per station. Under the same circumstances, but with the Broadcast Ephemeris used in place of the Precise Ephemeris, accuracies on the order of 3 to 5 meters are normally to be expected. Here, the more rigorous Short Arc Method can be used to significant advantage to overcome the shortcomings of the Broadcast Ephemeris. Results of a recent field test involving a net occupied by four JMR-1 Doppler receivers show that the Short Arc Method can produce relative accuracies of better than one meter from as few as 25 passes per station when the Broadcast Ephemeris is used.
88.
The potential of applying shifting level (SL) models to hydrologic processes is discussed in light of the observed statistical characteristics of hydrologic data. An SL model and an ARMA(1, 1) model are fitted to an actual hydrologic series. Computer simulation experiments with these models are carried out to compare maximum accumulated deficit and run properties. The results indicate that the mean maximum accumulated deficit, mean longest negative run length, and mean largest negative run sum are similar for both models, while their corresponding variances differ.
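A minimal sketch of the kind of simulation experiment described (the ARMA(1,1) parameters, series length, and replicate count are illustrative, not the values fitted in the paper): the model is simulated and the maximum accumulated deficit is computed as the largest drawdown of the cumulative departures from the mean.

```python
import numpy as np

def simulate_arma11(n, phi, theta, sigma=1.0, mu=0.0, rng=None):
    """Simulate x_t = mu + phi*(x_{t-1} - mu) + e_t + theta*e_{t-1}."""
    if rng is None:
        rng = np.random.default_rng()
    e = rng.normal(0.0, sigma, n + 1)
    x = np.empty(n)
    prev = 0.0
    for t in range(n):
        prev = phi * prev + e[t + 1] + theta * e[t]
        x[t] = mu + prev
    return x

def max_accumulated_deficit(x):
    """Largest drawdown of cumulative departures from the sample mean."""
    s = np.cumsum(x - x.mean())
    run_max = np.maximum.accumulate(np.concatenate(([0.0], s)))[1:]
    return np.max(run_max - s)

rng = np.random.default_rng(1)
sims = [max_accumulated_deficit(simulate_arma11(100, 0.7, -0.4, rng=rng))
        for _ in range(1000)]
print(f"mean max accumulated deficit = {np.mean(sims):.2f} "
      f"(variance = {np.var(sims):.2f})")
```

Repeating the experiment with a fitted SL model in place of `simulate_arma11` yields the paired comparison of means and variances reported in the abstract.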
89.
The late Cenozoic deposits of central Yukon contain numerous distal tephra beds derived from vents in the Wrangell Mountains and the Aleutian arc–Alaska Peninsula region. We use a few of these tephra beds to gain a better understanding of the timing of the extensive Pleistocene glaciations that affected this area. Exposures at Fort Selkirk show that the Cordilleran Ice Sheet advanced close to the outer limit of glaciation about 1.5 myr ago. At the Midnight Dome Terrace, near Dawson City, exposed outwash gravel, aeolian sand, and loess, related to valley glaciers in the adjacent Ogilvie Mountains, are of the same age. Reid glacial deposits at Ash Bend on the Stewart River are older than oxygen isotope stage (OIS) 6 and likely of OIS 8 age, that is, about 250,000 yr B.P. Supporting evidence for this chronology comes from major peaks in the rates of terrigenous sediment input into the Gulf of Alaska at 1.5 and 0.25 myr B.P.
90.
Dispersion Modelling of the Kilauea Plume
Emissions from the Kilauea volcano pose significant environmental and health risks to the Hawaiian community. This paper describes progress toward simulating the concentration and dispersion of plumes of volcanic aerosol after they emanate from the Pu'u O'o vent of the Kilauea volcano. To produce an accurate regional forecast of the concentration and dispersion of volcanic aerosol, the Hybrid Single-Particle Lagrangian Integrated Trajectory (HY-SPLIT) model was used. Wind fields and thermodynamic data from the non-hydrostatic Mesoscale Spectral Model (MSM) were employed as input to the HY-SPLIT model. A combination of satellite remote sensing, aircraft, and ground-based observations collected during a field experiment was used to validate the model simulation of aerosol distribution. The HY-SPLIT model shows skill in reproducing the plume shape, orientation, and concentration gradients as deduced from satellite images of aerosol optical depth. Comparison of the modelled and observed values suggests that the model was able to produce reasonable plume concentrations and spatial gradients downwind of the source. Model concentrations were generally less than those observed on the leeward side of the Island of Hawaii. This deficiency may be explained by a lack of (i) background concentrations, (ii) local sources of pollution, and/or (iii) sea-breeze circulation in the prognostic input wind field. These results represent early progress toward the goal of future operational application of the HY-SPLIT model to predict volcanic aerosol concentrations in Hawaii, which may help mitigate the negative impacts of plumes on respiratory health, agriculture, and general aviation.
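The model-observation comparison reported above reduces to a few standard statistics. Below is a minimal sketch under stated assumptions: the station values are invented placeholders, and HY-SPLIT itself is not reproduced here.

```python
import numpy as np

def plume_validation_stats(observed, modelled):
    """Bias, RMSE, and correlation of modelled vs. observed concentrations."""
    observed = np.asarray(observed, dtype=float)
    modelled = np.asarray(modelled, dtype=float)
    bias = (modelled - observed).mean()
    rmse = np.sqrt(((modelled - observed) ** 2).mean())
    r = np.corrcoef(observed, modelled)[0, 1]
    return bias, rmse, r

# Hypothetical aerosol concentrations (ug/m^3) at leeward stations; real
# values would come from the field campaign described in the abstract.
obs = [42.0, 18.5, 7.2, 3.1, 55.0]
mod = [35.0, 15.0, 6.0, 2.4, 48.0]
bias, rmse, r = plume_validation_stats(obs, mod)
print(f"bias = {bias:.1f}  RMSE = {rmse:.1f}  r = {r:.2f}")
```

A negative bias, as in this toy example, corresponds to the leeward underprediction noted in the abstract.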