101.
Numerical models with fine discretization normally demand large computational time and memory, which imposes a heavy burden on state estimation or model-parameter inversion calculations. This article presents a reduced implicit finite difference scheme based on proper orthogonal decomposition (POD) for two-dimensional transient mass transport in heterogeneous media. The reduction of the original full model is achieved by projecting the high-dimensional full model onto a low-dimensional space spanned by POD bases, which are derived from snapshots generated from the model solutions of forward simulations. The POD bases are extracted from the ensemble of snapshots by singular value decomposition, and the dimension of the Jacobian matrix is then reduced by Galerkin projection. The reduced model can thus accurately reproduce and predict the original model's transport process with significantly decreased computational time. The scheme is practical and easy to implement for the governing partial differential equations. The POD method is illustrated and validated through synthetic cases with various heterogeneous permeability field scenarios. The accuracy and efficiency of the reduced model are determined by the optimal selection of the snapshots and POD bases.
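The snapshot-to-reduced-model pipeline described above can be sketched in a few lines. This is a minimal illustration with synthetic data, not the paper's transport model: the snapshot matrix, the 99% energy threshold, and the operator `A` are all made-up placeholders.

```python
import numpy as np

# Synthetic "snapshots": columns are solution vectors from forward runs.
# We build them as a low-rank signal plus noise so a small POD basis suffices.
rng = np.random.default_rng(0)
n_cells, n_snapshots = 500, 40
modes = rng.standard_normal((n_cells, 5))
coeffs = rng.standard_normal((5, n_snapshots))
snapshots = modes @ coeffs + 0.01 * rng.standard_normal((n_cells, n_snapshots))

# Extract POD bases by singular value decomposition of the snapshot ensemble.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)

# Keep the leading modes capturing (say) 99% of the snapshot "energy".
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.99)) + 1
Phi = U[:, :k]                      # POD basis, n_cells x k, with k << n_cells

# Galerkin projection of a full-model operator A (n_cells x n_cells)
# onto the low-dimensional space: A_r = Phi^T A Phi is only k x k.
A = rng.standard_normal((n_cells, n_cells))
A_r = Phi.T @ A @ Phi
```

The reduced system `A_r` is then solved in place of the full one, and `Phi` maps the low-dimensional solution back to the fine grid.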
102.
The applicability of elevation-regression based interpolation methods for long-term temperature normals, for example the Parameter-elevation Regressions on Independent Slopes Model (PRISM), becomes increasingly limited in data-sparse, complex terrain such as that found in mountainous British Columbia (BC), Canada. Recent methods to improve both the resolution and accuracy of interpolation models have focused on the development of "up-sampling" algorithms based on local lapse rate adjustments to the original interpolated surfaces. Lapse rates can be derived from statistical models (e.g., elevation-based polynomial regression equations) or dynamical models (e.g., vertical temperature profiles from numerical weather prediction (NWP) models). This study compares a widely used statistical up-sampling algorithm, ClimateBC, with two NWP reanalysis products: the National Centers for Environmental Prediction/National Center for Atmospheric Research Reanalysis 1 (NCEP1) and the more modern European Centre for Medium-range Weather Forecasts (ECMWF) Interim Reanalysis (ERA-Interim). Thirty-year climate normals for maximum and minimum temperatures were calculated using statistical up-sampling and NWP lapse rate adjustments to existing PRISM-based climate normals at a subset of stations in BC. Specifically, up-sampling model evaluation was performed using 1951–80 climate normals from an independent set of 54 surface stations (1 m to 2347 m) which were neither included in the PRISM interpolation nor assimilated into the NWP reanalysis products. All models performed similarly for minimum temperature, showing only a slight improvement over PRISM. For maximum temperature, ClimateBC, NCEP1 and ERA-Interim all performed significantly better than PRISM, in particular during spring and summer. The ERA-Interim reanalysis outperformed NCEP1 in almost all months.
The results suggest that lapse rate adjustment algorithms based on reanalysis products will have even greater potential as progress continues on NWP components.

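The core of any lapse-rate "up-sampling" step is a simple elevation correction. A minimal sketch, with an illustrative function name and a standard-atmosphere default lapse rate (the study derives its rates from regressions or NWP profiles, not from this constant):

```python
# Adjust a coarse interpolated temperature normal to a station elevation
# using a local lapse rate gamma (degC per metre; negative means cooling
# with height). Both the function and the default value are illustrative.
def lapse_rate_adjust(t_grid, z_grid, z_station, gamma=-0.0065):
    """Return the grid-cell normal t_grid (at elevation z_grid, in metres)
    adjusted to the target elevation z_station."""
    return t_grid + gamma * (z_station - z_grid)

# Example: a grid-cell normal of 10.0 degC at 500 m, station at 1500 m.
t_station = lapse_rate_adjust(10.0, 500.0, 1500.0)   # 10.0 - 6.5 = 3.5 degC
```

In the statistical variant, `gamma` comes from an elevation-based regression fitted locally; in the NWP variant, it is read from the reanalysis vertical temperature profile over the station.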
103.
Dot mapping is a cartographic representation method for visualising discrete absolute values and their spatial distribution. To achieve this, dots of equal size and equal represented value are used: according to the dot value, a certain number of dots depict each data value, and these dots usually form clusters. The data value must be rounded to a multiple of the dot value, so the visualised data value can be roughly recovered by counting the dots and multiplying that number by the dot value. Because many parameters – dot size, dot value, map scale – must be considered when designing a dot map, manual production is complex and time-consuming. This paper presents a method to automatically create the dot representation of a dot map from given statistical data, requiring no cartographic expertise. The dot representation may be combined with other elements, such as a topographic background, to form a complete map, so the algorithm can easily be integrated into the map design process. The paper refines the basic approach to automated dot mapping published earlier; the dot placement and arrangement have been improved compared to the basic method.
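The rounding and counting step described above is straightforward to sketch. This is a toy illustration of the basic idea only, not the paper's placement algorithm (which arranges dots carefully to avoid overlap); the function names, the population figure, and the random placement are all made up.

```python
import random

# Round each statistical value to a multiple of the dot value and place
# that many dots inside the region's bounding box. Real dot-mapping
# algorithms place dots to avoid overlaps; uniform random placement here
# is purely illustrative.
def dots_for_value(value, dot_value):
    """Number of dots representing `value` at the given dot value."""
    return round(value / dot_value)

def place_dots(value, dot_value, bbox, seed=0):
    """Return (x, y) dot positions inside bbox = (xmin, ymin, xmax, ymax)."""
    rng = random.Random(seed)
    xmin, ymin, xmax, ymax = bbox
    n = dots_for_value(value, dot_value)
    return [(rng.uniform(xmin, xmax), rng.uniform(ymin, ymax)) for _ in range(n)]

# 12,700 inhabitants at a dot value of 500 -> 25 dots (represents 12,500).
dots = place_dots(12_700, 500, (0.0, 0.0, 10.0, 10.0))
```

Reading the map reverses the computation: 25 counted dots times a dot value of 500 recovers roughly 12,500.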
104.
We report new photometric observations of the transiting exoplanetary system WASP-32, made with CCD cameras at Yunnan Observatories and the Ho Koon Nature Education cum Astronomical Centre, China, from 2010 to 2012. Following our usual procedure, the observed data are corrected for systematic errors with the coarse decorrelation and SYSREM algorithms to enhance the signal of the transit events. Combined with radial velocity data from the literature, our newly observed data and earlier published photometry are analyzed simultaneously to derive the physical parameters of the system using the Markov chain Monte Carlo technique. The derived parameters are consistent with the results of the original paper on WASP-32b, but with smaller uncertainties. Moreover, our modeling supports a circular orbit for WASP-32b. Through an analysis of all available mid-transit times, we have refined the orbital period of WASP-32b; no evident transit timing variation is found in these transit events.
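Refining a period from mid-transit times, as in the last step above, amounts to fitting a linear ephemeris T_c(n) = T0 + n·P and inspecting the residuals. A minimal sketch with fabricated epoch numbers and mid-times (the period value is only an illustrative placeholder, not the paper's result):

```python
import numpy as np

# Fit the linear ephemeris T_c(n) = T0 + n * P by least squares; residuals
# from the fit are the transit timing variations (TTVs). Epochs and
# mid-times below are made up; times are in days relative to an arbitrary
# reference to keep the arithmetic well conditioned.
epochs = np.array([0.0, 3.0, 7.0, 12.0, 30.0])
t_mid = 151.05 + 2.71866 * epochs          # perfectly linear -> zero TTV

design = np.vstack([np.ones_like(epochs), epochs]).T
(t0_fit, p_fit), *_ = np.linalg.lstsq(design, t_mid, rcond=None)

ttv = t_mid - (t0_fit + p_fit * epochs)    # residuals: the TTV signal
```

With real timings the scatter of `ttv` against its uncertainties is what decides whether a timing variation is "evident".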
105.
A one-dimensional, two-layer solute transport model is developed to simulate the chemical transport process in an initially unsaturated soil with ponded water on the soil surface before surface runoff starts. The mathematical model is tested against a laboratory experiment. The infiltration and diffusion processes are mathematically lumped together and described by incomplete-mixing parameters. Based on mass conservation and water balance equations, the model describes solute transport in a two-zone layer comprising a ponded runoff zone and a soil mixing zone. The two-zone layer is treated as one system to avoid describing the complicated chemical transport processes near the soil surface in the mixing zone. The proposed model was solved analytically, and the solutions agreed well with the experimental data. The experimental method and mathematical model were used to study the effect of initial soil moisture saturation on chemical concentration in surface runoff. The results indicated that when the soil was initially saturated, the chemical concentration in surface runoff was significantly (two orders of magnitude) higher than for initially unsaturated soil, although the initial chemical concentrations in the two cases were of the same magnitude. The soil mixing depth for the initially unsaturated soil was much larger than that for the initially saturated soil, and the incomplete runoff-mixing parameter was larger for the initially unsaturated soil. The higher the infiltration rate of the soil, the greater the infiltration-related incomplete-mixing parameter. According to the quantitative analysis, the soil mixing depth was a sensitive parameter for both initially unsaturated and saturated soils; the incomplete runoff-mixing parameter was sensitive for initially saturated soil but not for initially unsaturated soil, and the incomplete infiltration-mixing parameter behaved in just the opposite way.
Some suggestions are made for reducing chemical loss from runoff. Copyright © 2010 John Wiley & Sons, Ltd.
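The lumped mass-balance idea behind the two-zone model can be sketched with a toy discrete-time update. This is not the paper's analytical model: the function, both mixing parameters, and all numbers are illustrative placeholders showing how incomplete-mixing fractions partition mass between the soil mixing zone, the runoff water, and infiltration.

```python
# Toy two-zone mass balance: each step, a fraction alpha_r of the chemical
# mass in the soil mixing zone is released to the ponded runoff water, and
# a fraction alpha_i is carried downward with infiltrating water. Both
# "incomplete mixing" parameters are made-up placeholders.
def step(m_soil, c_runoff, v_runoff, alpha_r=0.05, alpha_i=0.02):
    """Advance one time step; returns updated (m_soil, c_runoff)."""
    to_runoff = alpha_r * m_soil            # mass released to runoff water
    to_infil = alpha_i * m_soil             # mass lost to infiltration
    m_soil -= to_runoff + to_infil
    c_runoff += to_runoff / v_runoff        # well-mixed runoff volume
    return m_soil, c_runoff

m, c = 100.0, 0.0                           # initial soil mass, runoff conc.
v = 10.0                                    # runoff water volume (constant)
for _ in range(50):
    m, c = step(m, c, v)
```

Larger `alpha_r` (stronger runoff mixing) raises the runoff concentration faster; larger `alpha_i` (faster infiltration) diverts mass downward instead, which mirrors the qualitative sensitivities reported above.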
106.
107.
This paper examines different concepts of a 'warming commitment', a term often used in various ways to describe or imply that a certain level of warming is irrevocably committed to over time frames such as the next 50 to 100 years, or longer. We review and quantify four different concepts, namely (1) a 'constant emission warming commitment', (2) a 'present forcing warming commitment', (3) a 'zero emission (geophysical) warming commitment' and (4) a 'feasible scenario warming commitment'. While the 'feasible scenario warming commitment' is probably the most relevant one for policy making, it depends centrally on key assumptions as to the technical, economic and political feasibility of future greenhouse gas emission reductions. This issue is of direct policy relevance when one considers that 2002 global mean temperatures were 0.8 ± 0.2 °C above the pre-industrial (1861–1890) mean and the European Union has a stated goal of limiting warming to 2 °C above the pre-industrial mean: what is the risk that we are committed to overshoot 2 °C? Using a simple climate model (MAGICC) for probabilistic computations based on the conventional IPCC uncertainty range for climate sensitivity (1.5 to 4.5 °C), we found that (1) a constant emission scenario is virtually certain to overshoot 2 °C, with a central estimate of 2.0 °C by 2100 (4.2 °C by 2400). (2) At present radiative forcing levels it seems unlikely that 2 °C will be overshot (central warming estimate 1.1 °C by 2100 and 1.2 °C by 2400, with ~10% probability of overshooting 2 °C); however, the risk of overshooting increases rapidly if radiative forcing is stabilized much above 400 ppm CO2 equivalence (1.95 W/m2) in the long term. (3) From a geophysical point of view, if all human-induced emissions ceased tomorrow, it seems 'exceptionally unlikely' that 2 °C will be overshot (central estimate: 0.7 °C by 2100; 0.4 °C by 2400).
(4) Assuming future emissions according to the lower end of published mitigation scenarios (350 ppm CO2eq to 450 ppm CO2eq) yields central temperature projections of 1.5 to 2.1 °C by 2100 (1.5 to 2.0 °C by 2400), with a risk of overshooting 2 °C of between 10 and 50% by 2100 and 1–32% in equilibrium. Furthermore, we quantify the 'avoidable warming' at 0.16–0.26 °C for every 100 GtC of avoided CO2 emissions, based on a range of published mitigation scenarios.
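The probabilistic overshoot calculation can be illustrated with a back-of-the-envelope Monte Carlo. This is emphatically not MAGICC: it uses only the equilibrium relation ΔT = S · log2(C/C0), and the lognormal sensitivity distribution is an illustrative stand-in for the IPCC range quoted above, with made-up parameters.

```python
import math
import random

# Sample the climate sensitivity S (equilibrium warming per CO2 doubling),
# compute equilibrium warming for a stabilisation level C relative to the
# pre-industrial concentration C0, and count exceedances of 2 degC.
# The lognormal parameters are illustrative, not from the paper.
def p_overshoot(c_stab_ppm, c_pre=278.0, n=100_000, seed=1):
    rng = random.Random(seed)
    doublings = math.log(c_stab_ppm / c_pre) / math.log(2.0)
    hits = 0
    for _ in range(n):
        s = rng.lognormvariate(math.log(2.6), 0.3)   # median ~2.6 degC
        if s * doublings > 2.0:
            hits += 1
    return hits / n

risk_450 = p_overshoot(450.0)   # risk at 450 ppm CO2-equivalent
risk_550 = p_overshoot(550.0)   # higher stabilisation level -> higher risk
```

Even this crude sketch reproduces the qualitative finding above: the overshoot risk climbs steeply as the long-term stabilisation level rises past the low-400s ppm CO2-equivalent.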
108.
Access to fresh water is one of the major issues of northern and sub-Saharan Africa. The majority of the fresh water used for drinking and irrigation is obtained from large ground water basins where there is little contemporary recharge and the aquifers cross national borders. These aquifers include the Nubian Aquifer System, shared by Chad, Egypt, Libya, and Sudan; the Iullemeden Aquifer System, extending over Niger, Nigeria, Mali, Benin, and Algeria; and the Northwest Sahara Aquifer System, shared by Algeria, Libya, and Tunisia. These resources are subject to increased exploitation and may be severely stressed if not managed properly, as already witnessed by declining water levels. In order to make appropriate decisions for the sustainable management of these shared water resources, planners and managers in the different countries need an improved knowledge base of hydrological information. Three technical cooperation projects related to these aquifer systems will be implemented by the International Atomic Energy Agency, in collaboration with the United Nations Educational, Scientific and Cultural Organization and the United Nations Development Programme/Global Environment Facility. These projects focus on isotope hydrology studies to better quantify ground water recharge and dynamics. A multiple-isotope approach combining the commonly used isotopes 18O and 2H with more recently developed techniques (chlorofluorocarbons, 36Cl, noble gases) will be applied to improve the conceptual model and to study stratification and ground water flows. Moreover, the isotopes will be an important indicator of changes in the aquifers due to water abstraction, and will therefore assist in the effort to establish sustainable ground water management.
109.
110.
Natural colored fluorites were studied by means of optical absorption and electron paramagnetic resonance (EPR). Complex centers involving rare-earth ions and/or oxygen give rise to the various colors observed. These include yttrium-associated F centers (blue), coexisting yttrium- and cerium-associated F centers (yellowish-green), the (YO2) center (rose) and the O3− molecular ion (yellow). Divalent rare-earth ions also contribute to the colorations, as for instance Sm2+ (green fluorites), or they are at the origin of the strong fluorescence observed (Eu2+). Strong irradiation of the crystals with ionizing radiation leads to coagulation of color centers and to precipitation of metallic calcium colloids. There is probably no simple relation connecting the coloration and the growth process of the crystal. Thermal stability studies, however, have allowed the colors to be partially classified as being of primary or secondary origin.