Full-text access type
Paid full text | 211 |
Free | 14 |
Free (domestic) | 5 |
Subject classification
Surveying and Mapping | 1 |
Atmospheric Science | 13 |
Geophysics | 53 |
Geology | 49 |
Oceanography | 27 |
Astronomy | 52 |
Physical Geography | 35 |
Publication year
2022 | 2 |
2021 | 2 |
2020 | 6 |
2019 | 2 |
2018 | 3 |
2017 | 9 |
2016 | 5 |
2015 | 5 |
2014 | 18 |
2013 | 9 |
2012 | 10 |
2011 | 17 |
2010 | 15 |
2009 | 15 |
2008 | 10 |
2007 | 10 |
2006 | 6 |
2005 | 14 |
2004 | 8 |
2003 | 14 |
2002 | 4 |
2001 | 5 |
2000 | 7 |
1999 | 1 |
1998 | 3 |
1997 | 2 |
1996 | 1 |
1995 | 1 |
1994 | 3 |
1993 | 1 |
1990 | 2 |
1988 | 1 |
1987 | 1 |
1985 | 1 |
1984 | 1 |
1983 | 1 |
1980 | 2 |
1979 | 1 |
1976 | 2 |
1974 | 2 |
1973 | 3 |
1972 | 1 |
1971 | 2 |
1969 | 1 |
1963 | 1 |
Sort by: 230 results found (search time: 15 ms)
181.
Methods for exploring management options to reduce greenhouse gas emissions from tropical grazing systems    Cited: 1 (self-citations: 0, other citations: 1)
S. Mark Howden, David H. White, Greg M. McKeon, Joe C. Scanlan, John O. Carter. Climatic Change, 1994, 27(1): 49-70
Increasing atmospheric concentrations of greenhouse gases are expected to result in global climatic changes over the next decades. Means of evaluating and reducing greenhouse gas emissions are being sought. In this study an existing simulation model of a tropical savanna woodland grazing system was adapted to account for greenhouse gas emissions. This approach may also be useful for identifying ways to assess and limit emissions from other rangeland, agricultural and natural ecosystems. GRASSMAN, an agricultural decision-support model, was modified to include sources, sinks and storages of greenhouse gases in the tropical and sub-tropical savanna woodlands of northern Australia. The modified model was then used to predict the changes in emissions and productivity resulting from changes in stock and burning management in a hypothetical grazing system in tropical northeastern Queensland. The sensitivity of these results to different Global Warming Potentials (GWPs) and emission definitions was then tested. Management options to reduce greenhouse gas emissions from the tropical grazing system investigated were highly sensitive to the GWPs used, and to the emission definition adopted. A recommendation to reduce emissions by changing burning management would be to reduce fire frequency if both direct and indirect GWPs of CO2, CH4, N2O, CO and NO are used in evaluating emissions, but to increase fire frequency if only direct GWPs of CO2, CH4 and N2O are used. The ability to reduce greenhouse gas emissions from these systems by reducing stocking rates was also sensitive to the GWPs used.
In heavily grazed systems, the relatively small reductions in stocking rate needed to reduce emissions significantly should also reduce the degradation of soils and vegetation, thereby improving the sustainability of these enterprises. The simulation studies indicate that it is possible to alter management to maximise beef cattle production per unit greenhouse gases or per unit methane emitted, but that this is also dependent upon the emission definition used. High ratios of liveweight gain per unit net greenhouse gas emission were found in a broadly defined band covering the entire range of stocking rates likely to be used. In contrast, high values of liveweight gain per unit anthropogenic greenhouse gas emission were found only at very low stocking rates that are unlikely to be economically viable. These results suggest that policy initiatives to reduce greenhouse gas emissions from tropical grazing systems should be evaluated cautiously until the GWPs have been further developed and the implications of emission definitions more rigorously determined.
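The sensitivity to the GWP set described above can be illustrated with a toy calculation. The Python sketch below is not the GRASSMAN model: the gas fluxes for the two fire regimes and the "direct + indirect" GWP values (especially for CO and NO) are hypothetical placeholders, chosen only to show how the ranking of management options can flip when the weighting convention changes.

```python
# Illustrative only: CO2-equivalent totals under two GWP conventions.
# All fluxes and GWP values are hypothetical, not the paper's numbers.

DIRECT_GWP = {"CO2": 1.0, "CH4": 25.0, "N2O": 298.0}
# Hypothetical set adding indirect effects (CO, NO, enhanced CH4).
DIRECT_PLUS_INDIRECT_GWP = {"CO2": 1.0, "CH4": 34.0, "N2O": 298.0,
                            "CO": 3.0, "NO": 10.0}

def co2_equivalent(emissions_kg, gwp):
    """Sum GWP-weighted emissions, ignoring gases absent from the GWP set."""
    return sum(mass * gwp[gas] for gas, mass in emissions_kg.items() if gas in gwp)

# Hypothetical annual fluxes (kg/ha) for two burning regimes: frequent fire
# emits more combustion products (CO2, CO, NO); infrequent fire lets more
# CH4-dominated emissions accumulate.
frequent_fire = {"CO2": 500.0, "CH4": 4.0, "N2O": 0.2, "CO": 80.0, "NO": 2.0}
infrequent_fire = {"CO2": 450.0, "CH4": 10.0, "N2O": 0.3, "CO": 20.0, "NO": 0.5}

for label, gwp in [("direct only", DIRECT_GWP),
                   ("direct + indirect", DIRECT_PLUS_INDIRECT_GWP)]:
    a = co2_equivalent(frequent_fire, gwp)
    b = co2_equivalent(infrequent_fire, gwp)
    print(f"{label}: frequent fire = {a:.1f}, infrequent fire = {b:.1f} kg CO2-e/ha")
```

With these invented numbers, frequent burning looks preferable under direct-only GWPs but worse once the indirect contributions of CO and NO are counted, mirroring the reversal the abstract describes.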
182.
183.
Arvid M. Johnson, Kaj M. Johnson, Joe Durdella, Mete Sözen, Türel Gür. Journal of Seismology, 2002, 6(3): 329-346
The fault trace of the 12 November 1999 earthquake in the Düzce-Bolu region in Anatolia crossed the alignment of a 2.4 km viaduct in Kaynali that had been carefully surveyed. The builders of the viaduct, the ASTALDI-BAYINDIR Co., resurveyed the viaduct after the earthquake. We repeated the survey for approximately one kilometre of the eastern end of the viaduct and obtained essentially identical results. Though it was unfortunate that the earthquake damaged the new structure, the piers did produce a very rare record of ground deformation of an earthquake. In effect, the viaduct was a giant strain gage that yielded reliable data about ground movement and distortion near a fault. This paper describes the survey data and their evaluation leading to convincing evidence that (a) the fault trace must be considered, not as a fault line or plane, but as a fault zone with a finite width and that (b) the structural damage within the zone was caused, not primarily by ground acceleration, but by ground distortion. Along the right-lateral fault at Kaynali, the fault zone consists of right-lateral movement at the main trace, a zone of right-lateral distortion near the trace, bounded by left-lateral distortion. The 12 November 1999 event in Turkey, like the ground deformation and fracturing at Landers, California (Johnson et al., 1994, 1996), thus affirmed a forgotten conclusion from the studies by Lawson (1908), Gilbert and Reid (1910) of the 1906 San Francisco earthquake that earthquake ruptures typically occur throughout zones or belts, rather than along linear traces or planes.
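The "giant strain gage" idea, differentiating surveyed pier offsets along the viaduct to map distortion across the fault zone, can be sketched numerically. The pier spacing and displacements below are invented, not the Kaynali survey values; the sketch only shows how a band of non-zero displacement gradient reveals a fault zone of finite width.

```python
# Hypothetical sketch: estimate ground distortion from surveyed offsets of
# regularly spaced viaduct piers. Data are invented, not the Kaynali survey.

def shear_distortion(x, u):
    """Central-difference estimate of du/dx at interior piers."""
    return [(u[i + 1] - u[i - 1]) / (x[i + 1] - x[i - 1])
            for i in range(1, len(x) - 1)]

# Pier chainage (m) and fault-parallel offset (m); the jump near x = 160 m
# plays the role of the main trace, flanked by distributed distortion.
x = [0, 40, 80, 120, 160, 200, 240, 280]
u = [0.00, -0.02, -0.05, 0.10, 1.20, 1.35, 1.38, 1.40]

grad = shear_distortion(x, u)
# One sign of du/dx marks one sense of distortion, the opposite sign the
# other; a fault *zone* appears as a band of non-zero gradient, not a spike.
for xi, g in zip(x[1:-1], grad):
    print(f"x = {xi:4d} m  du/dx = {g:+.4f}")
```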
184.
In 1985, the first Doppler weather radar to operate in Canada was established by the Research Directorate of the Atmospheric Environment Service (AES) at a site in King City, north of Toronto, Ontario. The research program recognized from the outset the advances Doppler observations would bring to the operational sector, and a system was devised to meet both research and operational needs. This paper describes the radar system, the techniques, the data processing and the innovations developed to provide immediate intelligence from the data. The system was developed by adapting mainly commercial hardware and in-house software. The system factors that are significant for operational meteorological surveillance and analysis, the display forms and formats, and sample cases illustrating the impact of Doppler observations on both synoptic and mesoscale analysis in all seasons complete the discussion. Significant factors, in both system parameters and meteorology, that impinge on the success of the Doppler radar program and its applications are summarized. There is sufficient maturity in the technology, display capabilities and meteorological knowledge to warrant network implementation. However, research and development are still needed to interpret and synthesize the voluminous amounts of available information. In particular, conceptual models of the kinematics of mesoscale systems need considerable development.
185.
We present a new analytical method for U-series isotopes using the SHRIMP RG (Sensitive High mass Resolution Ion MicroProbe) mass spectrometer that utilizes the preconcentration of the U-series isotopes from a sample onto a single ion-exchange bead. Ion-microprobe mass spectrometry is capable of producing Th ionization efficiencies in excess of 2%. Analytical precision is typically better than alpha spectrometry, but not as good as thermal ionization mass spectrometry (TIMS) and inductively coupled plasma multicollector mass spectrometry (ICP-MS). Like TIMS and ICP-MS, the method allows analysis of small sample sizes, but also adds the advantage of rapidity of analysis. A major advantage of ion-microprobe analysis is that U and Th isotopes are analyzed in the same bead, simplifying the process of chemical separation. Analytical time on the instrument is ∼60 min per sample, and a single instrument loading can accommodate 15-20 samples to be analyzed in a 24-h day. An additional advantage is that the method allows multiple reanalyses of the same bead and that samples can be archived for reanalysis at a later time. Because the ion beam excavates a pit only a few μm deep, the mount can later be repolished and reanalyzed numerous times. The described method of preconcentrating a low-concentration sample onto a small conductive substrate to allow ion-microprobe mass spectrometry is potentially applicable to many other systems.
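The trade-off the abstract draws between precision and sample size is ultimately bounded by counting statistics. As a back-of-the-envelope sketch (not the SHRIMP RG data-reduction system), the relative 1-sigma error of an isotope ratio from Poisson-distributed counts is roughly the quadrature sum of 1/sqrt(N) for each isotope; the count totals below are hypothetical.

```python
# Counting-statistics limit on isotope-ratio precision. Count totals are
# hypothetical illustrations, not SHRIMP RG measurements.
import math

def ratio_rel_error(n_a, n_b):
    """Relative 1-sigma error of a ratio A/B from Poisson counting statistics."""
    return math.sqrt(1.0 / n_a + 1.0 / n_b)

# With a few percent ionization efficiency, even a small sample can deliver
# enough counts on the minor isotope for sub-percent ratio precision.
counts_230th = 40_000      # hypothetical total counts on the minor isotope
counts_232th = 4_000_000   # hypothetical total counts on the major isotope
rel = ratio_rel_error(counts_230th, counts_232th)
print(f"counting-statistics limit on the ratio: {100 * rel:.2f}%")
```

The minor-isotope counts dominate the error, which is why ionization efficiency (here ~2% for Th) is the figure of merit the abstract highlights.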
186.
This paper presents a study of statistical typhoon-track forecasting. Numerical models have their own systematic errors, such as a bias. In order to improve the accuracy of track forecasting, a statistical model called a DLM (dynamic linear model) is applied to remove the systematic error. In the analysis of typhoons occurring over the western North Pacific in 1997 and 2000, the DLM proved useful as an adaptive model for the prediction of typhoon tracks.
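The core of a DLM used this way is a recursive (Kalman-filter) estimate of a slowly varying bias, which is then subtracted from the raw numerical forecast. The Python sketch below is a minimal local-level DLM illustrating the general technique, not the authors' model; the error sequence and the noise variances are hypothetical.

```python
# Minimal local-level DLM (Kalman filter) for tracking a slowly varying
# systematic forecast error. Illustration only; all numbers are hypothetical.

def dlm_bias_filter(errors, q=0.05, r=1.0):
    """Filter observed forecast errors to estimate the evolving bias.

    q: variance of the random-walk evolution of the bias (state noise).
    r: variance of each observed error about the true bias (observation noise).
    """
    m, c = 0.0, 1.0          # prior mean and variance of the bias
    estimates = []
    for e in errors:
        c += q               # predict: bias follows a random walk
        k = c / (c + r)      # Kalman gain
        m += k * (e - m)     # update with the latest observed error
        c *= (1 - k)         # posterior variance
        estimates.append(m)
    return estimates

# Hypothetical along-track position errors (degrees) of successive model
# forecasts, with a persistent bias of about -0.4 plus noise.
errs = [-0.5, -0.3, -0.45, -0.35, -0.42, -0.38, -0.41]
bias = dlm_bias_filter(errs)
print(f"estimated systematic error: {bias[-1]:+.2f} deg")
# A corrected forecast would subtract this estimate from the raw model track.
```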
187.
Ruping Mo, Paul I. Joe, Chris Doyle, Paul H. Whitfield. Pure and Applied Geophysics, 2014, 171(1-2): 323-336
A brief review of the anomalous weather conditions during the Vancouver 2010 Winter Olympic and Paralympic Games and the efforts to predict these anomalies based on some preceding El Niño–Southern Oscillation (ENSO) signals are presented. It is shown that the Olympic Games were held under extraordinarily warm conditions in February 2010, with monthly mean temperature anomalies of +2.2 °C in Vancouver and +2.8 °C in Whistler, ranking respectively as the highest and the second highest in the past 30 years (1981–2010). The warm conditions continued, but became less anomalous, in March 2010 for the Paralympic Games. While the precipitation amounts in the area remained near normal through this winter, the lack of snow due to warm conditions created numerous media headlines and practical problems for the alpine competitions. A statistical model was developed on the premise that February and March temperatures in the Vancouver area could be predicted using an ENSO signal with considerable lead time. This model successfully predicted the warmer-than-normal, lower-snowfall conditions for the Vancouver 2010 Winter Olympics and Paralympics.
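A seasonal prediction of this kind typically reduces to regressing the target-month temperature anomaly on a preceding ENSO index. The Python sketch below illustrates that idea with ordinary least squares; it is not the authors' model, and the index/anomaly training pairs are invented.

```python
# Conceptual ENSO-based seasonal regression: predict a February temperature
# anomaly from a preceding-autumn ENSO index. Training data are hypothetical.

def fit_line(x, y):
    """Ordinary least squares for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Hypothetical autumn Nino-3.4 index vs. following-February anomaly (deg C).
nino = [1.8, -1.2, 0.5, 2.1, -0.8, 0.0, 1.0, -1.5]
feb_anom = [1.9, -1.0, 0.4, 2.3, -0.6, 0.1, 1.2, -1.4]

a, b = fit_line(nino, feb_anom)
# A strong El Nino autumn (positive index) maps to a warm February anomaly,
# giving months of lead time before the Games.
print(f"predicted Feb anomaly for an autumn index of +1.6: {a + b * 1.6:+.1f} deg C")
```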
188.
Crew variability in topographic surveys for monitoring wadeable streams: a case study from the Columbia River Basin
Sara Bangen, Joe Wheaton, Nicolaas Bouwes, Chris Jordan, Carol Volk, Michael B. Ward. Earth Surface Processes and Landforms, 2014, 39(15): 2070-2086
Digital elevation models (DEMs) derived from ground-based topographic surveys have become ubiquitous in the field of fluvial geomorphology. Their wide application in spatially explicit analysis includes hydraulic modeling, habitat modeling, and morphological sediment budgeting. However, there is a lack of understanding regarding the repeatability and precision of DEMs derived from ground-based surveys conducted by different, and inherently subjective, observers. This is of particular concern when we consider the proportion of studies and monitoring programs that are implemented across multiple sites and over time by different observers. We used a case study from the Columbia Habitat Monitoring Program (CHaMP), where seven field crews sampled the same six sites, to quantify the magnitude and effect of observer variability on DEMs interpolated from total station surveys. We quantified the degree to which DEM-derived metrics and measured geomorphic change were repeatable. Across all six sites, we found an average elevation standard deviation of 0.05 m among surveys, and a mean total range of 0.16 m. A variance partition between site, crew, and unexplained errors for several topographically derived metrics showed that crew variability never accounted for > 1.5% of the total variability. We calculated minor geomorphic changes at one site following a relatively dry flow year between 2011 and 2012. Calculated changes were minimal (unthresholded net changes ±1–3 cm), with six crews detecting an indeterminate sediment budget and one crew detecting a minor net erosional sediment budget. While crew variability does influence the quality of topographic surveys, this study highlights that when consistent surveying methods are employed, the data sets are still sufficient to support derivation of topographic metrics and conduct basic geomorphic change detection. Copyright © 2014 John Wiley & Sons, Ltd.
189.
190.
Calvin W. Rose, Jon M. Olley, Arman Haddadchi, Andrew P. Brooks, Joe McMahon. Earth Surface Processes and Landforms, 2018, 43(3): 735-742
The jet erosion test (JET) is a widely applied method for deriving the erodibility of cohesive soils and sediments. There are suggestions in the literature that further examination of the method widely used to interpret the results of these erosion tests is warranted. This paper presents an alternative approach for such interpretation based on the principle of energy conservation. This new approach recognizes that evaluation of erodibility using the jet tester should involve the mass of soil eroded, so determination of this eroded mass (or else scour volume and bulk density) is required. The theory partitions the jet kinetic energy flux into the part involved in eroding soil, the remainder being dissipated in a variety of mechanisms. The energy required to erode soil is defined as the product of the eroded mass and a resistance parameter, denoted J (in J/kg), which is the energy required to entrain unit mass of soil and whose magnitude is sought. An effective component rate of jet energy consumption is defined which depends on the depth of scour penetration by the jet, but not on soil type or the uniformity of the soil being investigated. Application of the theory depends on experimentally determining the spatial form of jet energy consumption displayed in erosion of a uniform body of soil, an approach of general application. The theory then allows determination of the soil resistance parameter J as a function of depth of scour penetration into any soil profile, thus evaluating such profile variation in erodibility as may exist. This parameter J has been used with the same meaning in soil and gully erosion studies for the last 25 years. Application of this approach will appear in a companion publication as Part 2. Copyright © 2017 John Wiley & Sons, Ltd.
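The energy-balance definition above reduces to a short calculation once the eroded mass is measured: J is the jet energy consumed in erosion divided by that mass. The Python sketch below is a hedged illustration of this bookkeeping, not the paper's analysis; the jet specifications, the assumed 1% erosion-energy fraction, and the scour measurements are all hypothetical.

```python
# Hedged sketch of the energy-balance definition of the erodibility parameter
# J (J/kg). Jet specs and scour measurements are hypothetical.

RHO_W = 1000.0  # water density, kg/m^3

def jet_kinetic_energy(discharge_m3s, velocity_ms, duration_s):
    """Total kinetic energy delivered by the jet: 0.5 * rho * Q * U^2 * t."""
    return 0.5 * RHO_W * discharge_m3s * velocity_ms ** 2 * duration_s

def resistance_parameter_j(energy_to_erode_j, scour_volume_m3, bulk_density):
    """J = (energy consumed in erosion) / (eroded mass)."""
    return energy_to_erode_j / (scour_volume_m3 * bulk_density)

E_jet = jet_kinetic_energy(discharge_m3s=2.0e-4, velocity_ms=5.0, duration_s=600)
# Hypothetical: suppose 1% of the delivered jet energy went into erosion at
# this scour depth; the paper determines this partition experimentally.
E_erode = 0.01 * E_jet
J = resistance_parameter_j(E_erode, scour_volume_m3=3.0e-4, bulk_density=1600.0)
print(f"delivered jet energy: {E_jet:.0f} J, resistance parameter J = {J:.1f} J/kg")
```

Repeating this at successive scour depths is what yields J as a function of depth into the soil profile.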