Fee-based full text: 179 articles
Free: 4 articles
Free (domestic): 2 articles
By discipline:
Surveying and mapping: 10 articles
Atmospheric sciences: 19 articles
Geophysics: 30 articles
Geology: 59 articles
Oceanography: 16 articles
Astronomy: 37 articles
Physical geography: 14 articles
By year:
2022: 1 article
2020: 2 articles
2019: 4 articles
2018: 7 articles
2017: 5 articles
2016: 6 articles
2015: 6 articles
2014: 2 articles
2013: 13 articles
2012: 8 articles
2011: 4 articles
2010: 13 articles
2009: 5 articles
2008: 9 articles
2007: 14 articles
2006: 6 articles
2005: 10 articles
2004: 12 articles
2003: 8 articles
2002: 6 articles
2001: 7 articles
2000: 7 articles
1999: 1 article
1998: 2 articles
1997: 1 article
1996: 3 articles
1995: 3 articles
1994: 2 articles
1993: 1 article
1992: 1 article
1990: 3 articles
1989: 1 article
1985: 1 article
1981: 1 article
1979: 2 articles
1977: 1 article
1976: 1 article
1975: 1 article
1973: 1 article
1971: 1 article
1970: 1 article
1967: 1 article
1961: 1 article
Sorted by: 185 results in total (search time: 15 ms)
11.
Abstract – We studied the mineralogy, petrology, and bulk, trace element, oxygen, and noble gas isotopic compositions of a composite clast approximately 20 mm in diameter discovered in the Larkman Nunatak (LAR) 04316 aubrite regolith breccia. The clast consists of two lithologies. One is a quench-textured intergrowth of troilite with spottily zoned metallic Fe,Ni that forms a dendritic or cellular structure; the approximately 30 μm spacings between the Fe,Ni arms yield an estimated cooling rate for this lithology of approximately 25–30 °C s−1. The other is a quench-textured enstatite-forsterite-diopside-glass vitrophyre. The composition of the clast suggests that it formed at an exceptionally high degree of partial melting, perhaps approaching complete melting, and that the melts from which the composite clast crystallized were quenched from a temperature of approximately 1380–1400 °C at a rate of approximately 25–30 °C s−1. The association of the two lithologies in a single composite clast allows, for the first time, an estimate of the cooling rate of a silicate vitrophyre in an aubrite: approximately 25–30 °C s−1. While we cannot completely rule out an impact origin for the clast, we present what we consider very strong evidence that this composite clast is one of the elusive pyroclasts produced during pyroclastic volcanism on the aubrite parent body (Wilson and Keil 1991). We further suggest that this clast was not ejected into space but retained on the aubrite parent body by virtue of its relatively large size of approximately 20 mm. Our modeling, taking the size of the clast into account, suggests that the aubrite parent body must have been between approximately 40 and 100 km in diameter, and that the melt from which the clast crystallized must have contained a volatile mass fraction in an estimated allowed range of approximately 500 to 4500 ppm.
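For readers unfamiliar with the spacing-to-cooling-rate step: dendrite and cell arm spacings are conventionally tied to cooling rate through a power law, λ = k·(dT/dt)^(−n), so a measured spacing can be inverted for the rate. The sketch below does exactly that; the constants `k` and `n` are illustrative placeholders chosen so that a ~30 μm spacing lands in the quoted 25–30 °C s−1 range, not the calibration used by the authors.

```python
# Hedged sketch: invert the power-law relation between dendrite/cell arm
# spacing and cooling rate, lambda = k * rate**(-n) => rate = (k/lambda)**(1/n).
# k and n are ILLUSTRATIVE placeholders, tuned so ~30 um gives ~25-30 degC/s;
# they are NOT the calibration used in the paper.

def cooling_rate_from_spacing(spacing_um: float, k: float = 89.0, n: float = 0.33) -> float:
    """Cooling rate (degC/s) implied by an arm spacing (micrometres)."""
    return (k / spacing_um) ** (1.0 / n)

for spacing in (28.0, 30.0, 32.0):
    print(f"{spacing:4.1f} um -> {cooling_rate_from_spacing(spacing):5.1f} degC/s")
```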
12.
Recent investigations of a limestone solution cave on the Queen Charlotte Islands (Haida Gwaii) have yielded skeletal remains of fauna including late Pleistocene and early Holocene bears, one specimen of which dates to ca. 14,400 ¹⁴C yr B.P. This new fossil evidence sheds light on early postglacial environmental conditions in this archipelago, with implications for the timing of early human migration into the Americas.
13.
14.
In the assessment of potentially contaminated land, the number of samples and the uncertainty of the measurements (including that arising from sampling) are both important factors in the planning and implementation of an investigation. Both parameters also affect the interpretation of the measurements produced and the process of making decisions based upon those measurements. Despite their importance, however, there has previously been no method for assessing whether an investigation is fit-for-purpose with respect to both of these parameters. The Whole Site Optimised Contaminated Land Investigation (WSOCLI) method has been developed to address this issue, and to allow the optimisation of an investigation with respect to both the number of samples and the measurement uncertainty, using an economic loss function. This function was developed to calculate an 'expectation of (financial) loss', incorporating the costs of the investigation itself, subsequent land remediation, and potential consequential costs. To allow evaluation of the WSOCLI method, a computer program, OCLISIM, has been developed to produce sample data from simulated contaminated land investigations. One advantage of this approach is that the 'true' contaminant concentrations are created by the program and are therefore known, which is not the case in a real contaminated land investigation; this enables direct comparisons between functions of the 'true' concentrations and functions of the simulated measurements. A second advantage of simulation is that the WSOCLI method can be tested on many different patterns and intensities of contamination. The WSOCLI method performed particularly well at high sampling densities, producing expectations of financial loss that approximated the true costs, which were also calculated by the program. WSOCLI was shown to produce notable trends in the relationship between the overall cost (i.e., expectation of loss) and both the number of samples and the measurement uncertainty: (a) Low measurement uncertainty was optimal when the decision threshold was between the mean background and the mean hot spot concentrations. (b) When the hot spot mean concentration was equal to or near the decision threshold, mid-range measurement uncertainties were optimal. (c) When the decision threshold exceeded the mean of the hot spot, mid-range measurement uncertainties were optimal; the trends indicate that the optimal uncertainty may continue to rise if the difference between the hot spot mean and the decision threshold increases further. (d) In any of the above scenarios, the optimal measurement uncertainty was lower if there was a large geochemical variance (i.e., heterogeneity) within the hot spot. (e) The optimal number of samples for each scenario was indicated by the WSOCLI method, and was generally between 50 and 100 for the scenarios considered, although there was significant noise in the predictions, which needs to be addressed in future work to make such conclusions clearer.
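To make the shape of such an economic loss function concrete, here is a minimal sketch of an 'expectation of loss' calculation. Every name and number in it is a hypothetical placeholder: the misclassification probabilities, the costs, and their dependence on sample count and measurement uncertainty are invented for illustration and are not taken from the WSOCLI paper.

```python
# Hedged sketch of an 'expectation of (financial) loss' in the spirit of the
# WSOCLI method. All names and figures are hypothetical placeholders.

def expectation_of_loss(n_samples: int, uncertainty: float) -> float:
    """Expected loss for one candidate investigation design."""
    cost_per_sample = 120.0            # investigation cost per sample (hypothetical)
    remediation_cost = 250_000.0       # cost of unnecessary remediation (hypothetical)
    consequential_cost = 1_000_000.0   # cost of missed contamination (hypothetical)

    # Fake dependence of misclassification risk on the design: more samples
    # and lower measurement uncertainty both shrink the error probabilities.
    p_false_contaminated = 0.08 * uncertainty * (50 / n_samples) ** 0.5
    p_false_clean = 0.05 * uncertainty * (50 / n_samples) ** 0.5

    investigation = n_samples * cost_per_sample
    over_remediation = p_false_contaminated * remediation_cost
    missed_contamination = p_false_clean * consequential_cost
    return investigation + over_remediation + missed_contamination

# Grid search over candidate designs, standing in for a real optimisation.
best = min((expectation_of_loss(n, u), n, u)
           for n in (25, 50, 100, 200) for u in (0.1, 0.2, 0.4))
print(f"lowest expected loss {best[0]:,.0f} at n={best[1]}, uncertainty={best[2]}")
```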
15.
16.
Ted Munn founded Boundary-Layer Meteorology in 1970 and served as Editor for 75 volumes over a 25-year period. This short article briefly reviews Ted's scientific career with the Atmospheric Environment Service (of Canada), the International Institute for Applied Systems Analysis in Austria, and the Institute of Environmental Studies at the University of Toronto, as well as his editorship of this journal.
17.
We present the first climate prediction of the coming decade made with multiple models, initialized with prior observations. This prediction arises from an international activity to exchange decadal predictions in near real-time, in order to assess differences and similarities, provide a consensus view to prevent over-confidence in forecasts from any single model, and establish current collective capability. We stress that the forecast is experimental, since the skill of the multi-model system is as yet unknown. Nevertheless, the forecast systems used here are based on models that have undergone rigorous evaluation and have individually been evaluated for forecast skill. Moreover, it is important to publish forecasts to enable open evaluation, and to provide a focus on climate change in the coming decade. Initialized forecasts of the year 2011 agree well with observations, with a pattern correlation of 0.62 compared to 0.31 for uninitialized projections. In particular, the forecast correctly predicted La Niña in the Pacific, and warm conditions in the north Atlantic and USA. A similar pattern is predicted for 2012 but with a weaker La Niña. Indices of Atlantic multi-decadal variability and Pacific decadal variability show no signal beyond climatology after 2015, while temperature in the Niño3 region is predicted to warm slightly by about 0.5 °C over the coming decade. However, uncertainties are large for individual years and initialization has little impact beyond the first 4 years in most regions. Relative to uninitialized forecasts, initialized forecasts are significantly warmer in the north Atlantic sub-polar gyre and cooler in the north Pacific throughout the decade. They are also significantly cooler in the global average and over most land and ocean regions out to several years ahead. However, in the absence of volcanic eruptions, global temperature is predicted to continue to rise, with each year from 2013 onwards having a 50 % chance of exceeding the current observed record. Verification of these forecasts will provide an important opportunity to test the performance of models and our understanding and knowledge of the drivers of climate change.
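The pattern correlations quoted here (0.62 initialized versus 0.31 uninitialized) are centred correlations between forecast and observed anomaly maps, normally computed with area weighting on a latitude-longitude grid. The sketch below shows one standard way to compute such a statistic; the cosine-latitude weighting is an assumption and may differ from the exact masking and weighting used in the paper.

```python
import numpy as np

def pattern_correlation(forecast: np.ndarray, observed: np.ndarray,
                        lats: np.ndarray) -> float:
    """Centred, area-weighted pattern correlation of two (nlat, nlon) anomaly maps.

    Grid boxes are weighted by cos(latitude) so that high-latitude boxes,
    which cover less area, contribute less.
    """
    w = np.cos(np.deg2rad(lats))[:, None] * np.ones_like(forecast)
    w /= w.sum()
    f = forecast - (w * forecast).sum()   # remove area-weighted means
    o = observed - (w * observed).sum()
    cov = (w * f * o).sum()
    return cov / np.sqrt((w * f**2).sum() * (w * o**2).sum())

# Example on synthetic anomaly fields:
lats = np.linspace(-87.5, 87.5, 72)
rng = np.random.default_rng(0)
obs = rng.standard_normal((72, 144))
fcst = 0.6 * obs + 0.8 * rng.standard_normal((72, 144))  # partially skilful forecast
print(f"pattern correlation: {pattern_correlation(fcst, obs, lats):.2f}")
```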
18.
There are two main approaches for dealing with model biases in forecasts made with initialized climate models. In full-field initialization, model biases are removed during the assimilation process by constraining the model to be close to observations. Forecasts drift back towards the model's preferred state, thereby re-establishing biases, which are then removed with an a posteriori lead-time dependent correction diagnosed from a set of historical tests (hindcasts). In anomaly initialization, the model is constrained by observed anomalies and deviates from its preferred climatology only by the observed variability. In theory, the forecasts do not drift, and biases may be removed based on the difference between observations and independent model simulations of a given period. Both approaches are currently in use, but their relative merits are unclear. Here we compare the skill of each approach in comprehensive decadal hindcasts starting each year from 1960 to 2009, made using the Met Office decadal prediction system. Both approaches are more skilful than climatology in most regions for temperature and some regions for precipitation. On seasonal timescales, full-field initialized hindcasts of regional temperature and precipitation are significantly more skilful on average than anomaly initialized hindcasts. Teleconnections associated with the El Niño Southern Oscillation are stronger with the full-field approach, providing a physical basis for the improved precipitation skill. Differences in skill on multi-year timescales are generally not significant. However, anomaly initialization provides a better estimate of forecast skill from a limited hindcast set.
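The lead-time dependent correction for full-field initialization can be written down in a few lines: the mean drift at each lead time is diagnosed from the hindcast set and subtracted from new forecasts. The sketch below is a minimal illustration of that idea with synthetic data, not the Met Office system's actual implementation.

```python
import numpy as np

def lead_time_bias(hindcasts: np.ndarray, observations: np.ndarray) -> np.ndarray:
    """Mean drift at each lead time, diagnosed from a hindcast set.

    hindcasts:    (n_starts, n_leads) hindcast values
    observations: (n_starts, n_leads) verifying observations
    """
    return (hindcasts - observations).mean(axis=0)

def correct_forecast(forecast: np.ndarray, bias: np.ndarray) -> np.ndarray:
    """Subtract the lead-time dependent bias from a new full-field forecast."""
    return forecast - bias

# Synthetic example: a model that drifts warm by 0.1 degC per lead year.
rng = np.random.default_rng(1)
n_starts, n_leads = 50, 10
truth = rng.standard_normal((n_starts, n_leads))
drift = 0.1 * np.arange(1, n_leads + 1)           # grows with lead time
hind = truth + drift + 0.2 * rng.standard_normal((n_starts, n_leads))

bias = lead_time_bias(hind, truth)
new_forecast = truth[0] + drift                   # a drifting forecast
print(np.round(correct_forecast(new_forecast, bias), 2))
```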
19.
The GPS Toolbox     
The GPS Toolbox is dedicated to highlighting algorithms utilized by GPS engineers and scientists. If you have an interesting algorithm you would like to share with our readers or if you have a topic you would like to see covered in a future column, contact us at gps-toolbox@ngs.noaa.gov. To comment on the algorithms presented here, or to leave a request for an algorithm you may be looking for, visit our Web site (http://www.ngs.noaa.gov/gps-toolbox). © 2000 John Wiley & Sons, Inc.
20.
This paper explores some of the newer techniques for acquiring and inverting electromagnetic data. Attention is confined primarily to the 2-D magnetotelluric (MT) problem, but the inverse methods are applicable to all areas of EM induction. The basis of the EMAP technique of Bostick is presented, along with examples illustrating the efficacy of that method in structural imaging and in overcoming the deleterious effects of near-surface distortions of the electric field. Reflectivity imaging methods and the application of seismic migration techniques to EM problems are also explored as imaging tools. Two new approaches to the solution of the inverse problem are presented. The AIM (Approximate Inverse Mapping) inversion of Oldenburg and Ellis uses a new way to estimate a perturbation in an iterative solution that does not involve linearization of the equations. The RRI (Rapid Relaxation Inverse) of Smith and Booker shows how approximate Fréchet derivatives and sequences of 1-D inversions can be used to develop a practical inversion algorithm. The overview is structured to provide insight into the latest inversion techniques and also to touch upon most areas of the inverse problem that must be considered to carry out a practical inversion. These include model parameterization, methods of calculating first-order sensitivities, and methods for setting up a linearized inversion.
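As a reference point for the 'linearized inversion' the overview closes with, here is a minimal sketch of the classical damped least-squares (Gauss–Newton/Tikhonov) update built from first-order sensitivities. It is a textbook illustration only; the AIM and RRI algorithms described above are designed precisely to avoid, or drastically cheapen, this step, and no detail here is taken from them.

```python
import numpy as np

def linearized_step(J: np.ndarray, residual: np.ndarray, mu: float) -> np.ndarray:
    """One damped least-squares model update.

    Solves (J^T J + mu I) dm = J^T r, where J holds the first-order
    sensitivities (Frechet derivatives) of the data with respect to the
    model parameters, r is the data residual, and mu trades off data fit
    against the size of the model update.
    """
    n = J.shape[1]
    lhs = J.T @ J + mu * np.eye(n)
    return np.linalg.solve(lhs, J.T @ residual)

# Toy example: recover a 3-parameter model from noisy linear data.
rng = np.random.default_rng(2)
J = rng.standard_normal((20, 3))        # sensitivity matrix
m_true = np.array([1.0, -0.5, 2.0])
d_obs = J @ m_true + 0.01 * rng.standard_normal(20)

m = np.zeros(3)
for _ in range(5):                      # iterate the linearized update
    m += linearized_step(J, d_obs - J @ m, mu=0.1)
print(np.round(m, 2))                   # should approach m_true
```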