141.
A study on the dependency of GNSS pseudorange biases on correlator spacing   (Cited: 2; self-citations: 0; citations by others: 2)
We provide a comprehensive overview of pseudorange biases and their dependency on receiver front-end bandwidth and correlator design. Differences in chip-shape distortions among GNSS satellites cause individual pseudorange biases. These biases must be corrected in a number of applications, such as positioning with mixed signals or PPP with ambiguity resolution. The current state of the art is to split the pseudorange bias into a receiver-dependent and a satellite-dependent part. As soon as receivers with different front-end bandwidths or correlator designs are involved, the satellite biases differ between the receivers and this separation is no longer practicable. A test with special receiver firmware, which allows tracking a satellite with a range of correlator spacings, was conducted with live signals as well as a signal simulator. In addition, the variability of satellite biases is assessed through zero-baseline tests with different GNSS receivers using live satellite signals. The receivers are operated with different multipath-mitigation settings, and the resulting changes in the satellite-dependent biases are observed.
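The zero-baseline idea described above can be sketched in a few lines: two receivers share one antenna, so geometry and satellite clocks cancel in the inter-receiver single differences, and removing the common (receiver-dependent) offset exposes the satellite-dependent bias differences. This is a simplified illustration with hypothetical pseudoranges, not the paper's processing chain.

```python
# Sketch (hypothetical values): on a zero baseline, inter-receiver single
# differences cancel geometry and satellite clocks; removing the mean removes
# the common receiver-dependent offset, leaving satellite-dependent bias
# differences (relative to their across-satellite mean).

def satellite_bias_differences(p_rx_a, p_rx_b):
    """Per-satellite bias differences after removing the common offset.

    p_rx_a, p_rx_b: dicts {satellite_id: pseudorange in metres} from two
    receivers sharing one antenna (zero baseline).
    """
    sats = sorted(set(p_rx_a) & set(p_rx_b))
    sd = {s: p_rx_a[s] - p_rx_b[s] for s in sats}      # single differences
    common = sum(sd.values()) / len(sd)                # receiver-dependent part
    return {s: sd[s] - common for s in sats}           # satellite-dependent part

# Hypothetical example: satellite G07 shows a distinct chip-shape bias.
a = {"G05": 20000000.30, "G07": 21000000.75, "G12": 22000000.28}
b = {"G05": 20000000.10, "G07": 21000000.15, "G12": 22000000.10}
biases = satellite_bias_differences(a, b)
```

The residuals sum to zero by construction, so only relative satellite biases between the two correlator configurations are observable this way.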
142.
Most systematic research on large rock-slope failures is geographically biased towards reports from Europe, the Americas, the Himalayas and China. Although reports exist on large rockslides and rock avalanches in the territory of the former Soviet Union, they are not readily available, and few have been translated. To begin closing this gap, we describe preliminary data from field reconnaissance, remote sensing and geomorphometry for nine extremely large rock-slope failures in the Tien Shan Mountains of central Kyrgyzstan. Each of these catastrophic, prehistoric failures exceeds an estimated 1 km³ in volume, and two of them involve about 10 km³. Failure of rock slopes in wide valleys favoured the emplacement of hummocky long-runout deposits, often spreading over >10 km² and blocking major rivers. Most of these gigantic slope failures are located on or near active faults. Their spatial clustering and the high seismic activity of the Tien Shan support the hypothesis that strong seismic shaking caused or triggered most of these large-scale rock-slope failures. Nevertheless, detailed field studies and laboratory analyses will be necessary to exclude hydroclimatic trigger mechanisms (precipitation, fluvial undercutting, permafrost degradation) and to determine the failures' absolute ages and frequency and the large-landslide hazard of central Kyrgyzstan.
143.
Ensemble-based data assimilation methods have become popular for solving reservoir history-matching problems, but because of the practical limitation on ensemble size, localization is necessary to reduce the effect of sampling error and to increase the degrees of freedom for incorporating large amounts of data. Local analysis in the ensemble Kalman filter has been used extensively for very large models in numerical weather prediction; it scales well with the model size and the number of data and is easily parallelized. In the petroleum literature, however, iterative ensemble smoothers with localization of the Kalman gain matrix have become the state-of-the-art approach for ensemble-based history matching. By forming the Kalman gain matrix row by row, the analysis step can also be parallelized. Localization regularizes updates to model parameters and state variables using information on the distance between these variables and the observations. The truncation of small singular values in truncated singular value decomposition (TSVD) at the analysis step provides another type of regularization by projecting updates onto dominant directions spanned by the simulated data ensemble. Typically, the combined use of localization and TSVD is necessary for problems with large amounts of data. In this paper, we compare the performance of Kalman gain localization to two forms of local analysis for parameter-estimation problems with nonlocal data. The effect of TSVD with different localization methods and with the use of iteration is also analyzed. With several examples, we show that good results can be obtained for all localization methods if the localization range is chosen appropriately, but the optimal localization range differs among the methods. In general, for local analysis with an observation taper, the optimal range is somewhat shorter than for the other localization methods. Although all methods gave equivalent results when used in an iterative ensemble smoother, the local analysis methods generally converged more quickly than Kalman gain localization when the amount of data was large compared to the ensemble size.
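Kalman-gain localization as discussed above can be sketched on a toy problem: the gain estimated from a small ensemble is tapered element-wise (a Schur product) with a Gaspari–Cohn correlation function of the parameter-observation distance. The ensemble sizes, distances, and noise levels below are illustrative assumptions, not the paper's reservoir setup.

```python
# Minimal sketch of Kalman-gain localization in an ensemble method.
import numpy as np

def gaspari_cohn(d, L):
    """Gaspari-Cohn 5th-order taper: 1 at d = 0, exactly 0 for d >= 2L."""
    r = np.abs(np.asarray(d, float)) / L
    t = np.zeros_like(r)
    a = r <= 1.0
    b = (r > 1.0) & (r < 2.0)
    t[a] = -r[a]**5/4 + r[a]**4/2 + 5*r[a]**3/8 - 5*r[a]**2/3 + 1
    t[b] = (r[b]**5/12 - r[b]**4/2 + 5*r[b]**3/8 + 5*r[b]**2/3
            - 5*r[b] + 4 - 2/(3*r[b]))
    return t

def localized_gain(M, D, dist, L, obs_var):
    """Tapered Kalman gain from a parameter ensemble M (Nm x Ne) and a
    simulated-data ensemble D (Nd x Ne); dist is the Nm x Nd distance matrix."""
    Ne = M.shape[1]
    A = M - M.mean(axis=1, keepdims=True)          # parameter anomalies
    B = D - D.mean(axis=1, keepdims=True)          # simulated-data anomalies
    Cmd = A @ B.T / (Ne - 1)                       # cross-covariance
    Cdd = B @ B.T / (Ne - 1)                       # data covariance
    K = Cmd @ np.linalg.inv(Cdd + obs_var * np.eye(D.shape[0]))
    return gaspari_cohn(dist, L) * K               # element-wise Schur product

# Hypothetical toy example: 5 parameters, 3 observations, ensemble of 30.
rng = np.random.default_rng(1)
M = rng.standard_normal((5, 30))
D = 0.5 * M[:3] + 0.1 * rng.standard_normal((3, 30))
dist = np.abs(np.arange(5)[:, None] - np.arange(3)[None, :]).astype(float)
K = localized_gain(M, D, dist, L=2.0, obs_var=0.1)
```

With `L = 2.0`, parameter-observation pairs at distance ≥ 4 receive exactly zero update, which is how spurious long-range ensemble correlations are suppressed.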
144.
145.
In this study, we model the geothermal potential of deep geological formations in the Berlin region of Germany. Berlin is situated in a sedimentary setting (the northeastern German basin) comprising low-enthalpy aquifers at horizons down to 4–5 km depth. In the Berlin region, the temperature increases almost linearly with depth by about 30 K per kilometer, in principle allowing direct heating from deep aquifer reservoirs. Our model incorporates eight major sedimentary units (Jurassic, Keuper, Muschelkalk, Upper/Middle/Lower Buntsandstein, Zechstein Salt and Sedimentary Rotliegend). Owing to the lack of petro-physical rock data for the Berlin region, we evaluated literature data for the larger northeastern German basin to develop a thermodynamic field model that uses depth-corrected equations of state within statistical confidence intervals. Integration over the thicknesses of the respective structural units yields their "heat in place", the energy densities associated with the pore fluid and the rock matrix under local conditions, in Joule per unit area at the surface. The model predicts that aquifers in the Middle Buntsandstein and in the Sedimentary Rotliegend may exhibit energy densities of about 10 GJ m⁻² for the pore fluids and 20 GJ m⁻² to 40 GJ m⁻² for the rock matrices on average. Referred to the city area of Berlin (about 892 km²), this yields a significant hydrothermal potential, which, however, has remained undeveloped to date because of development risks. The model accounts for these risks through statistical confidence intervals on the order of ±60 % to ±80 % of the trend figures. To reduce these uncertainties, field exploration is required to assess the petro-physical aquifer properties locally.
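The "heat in place" integration described above reduces, for a single unit with a linear gradient, to volumetric heat capacity times temperature excess times thickness, split into fluid and matrix shares by porosity. The sketch below uses assumed property values and an assumed reference temperature, not the study's calibrated field model.

```python
# Back-of-the-envelope "heat in place" per unit surface area (J/m^2).
# All numeric values are illustrative assumptions.

GRADIENT = 0.030    # K per m (~30 K/km, as stated for the Berlin region)
T_SURFACE = 10.0    # degC, assumed mean surface temperature
T_REF = 25.0        # degC, assumed reinjection/reference temperature

def heat_in_place(top_m, thickness_m, porosity,
                  rho_c_rock=2.3e6, rho_c_fluid=4.2e6):
    """Energy densities (J/m^2) of one unit: (fluid share, matrix share).

    rho_c_* are volumetric heat capacities in J m^-3 K^-1 (assumed values);
    temperature is evaluated at mid-depth from the linear gradient.
    """
    mid_depth = top_m + thickness_m / 2.0
    t_mid = T_SURFACE + GRADIENT * mid_depth
    dT = t_mid - T_REF
    fluid = porosity * rho_c_fluid * dT * thickness_m
    rock = (1.0 - porosity) * rho_c_rock * dT * thickness_m
    return fluid, rock

# Hypothetical Middle-Buntsandstein-like unit: top 1500 m, 300 m thick, 12% porosity.
fluid_J, rock_J = heat_in_place(1500.0, 300.0, 0.12)
```

For these assumed inputs the fluid share comes out near 5 GJ m⁻² and the matrix share near 21 GJ m⁻², the same order of magnitude as the figures quoted in the abstract.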
146.
This study assesses changes in the surface area of Manzala Lake, the largest coastal lake in Egypt, with respect to changes in land use and land cover, based on a multi-temporal classification process. A regression model is provided to predict the temporal changes in the different detected classes and to assess the sustainability of the lake waterbody. Remote sensing is an effective method for detecting the impact of anthropogenic activities on the surface area of a lagoon such as Manzala Lake. The techniques used in this study include unsupervised classification, Mahalanobis-distance supervised classification, minimum-distance supervised classification, maximum-likelihood supervised classification, and the normalized difference water index. Data extracted from satellite images are used to predict the future temporal change in each class with a statistical regression model comprising calibration, validation, and prediction phases. The maximum-likelihood classification technique had the highest overall accuracy, 93.33%, and was therefore selected to observe the changes in the surface area of the lake from 1984 to 2015. The results show that the waterbody surface area of the lake declined by 46%, while the areas of floating vegetation, islands, and agricultural land increased by 153.52%, 42.86%, and 42.35%, respectively, during the study period. The linear-regression prediction indicates that the waterbody surface area will decrease by a further 25.24% between 2015 and 2030, reflecting the negative impact of human activities on lake sustainability through a severe reduction of the waterbody area.
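The prediction step above amounts to fitting a linear trend to the classified waterbody area over the observation years and extrapolating it forward. The (year, km²) pairs below are hypothetical stand-ins, not the study's Landsat-derived values.

```python
# Illustrative sketch of the trend-extrapolation step (hypothetical data).

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

years = [1984, 1990, 1998, 2005, 2010, 2015]
area_km2 = [700.0, 610.0, 540.0, 470.0, 420.0, 380.0]   # hypothetical

a, b = fit_line(years, area_km2)
pred_2030 = a + b * 2030
decline_pct = 100.0 * (area_km2[-1] - pred_2030) / area_km2[-1]
```

In practice the study also validates the fitted model on held-out years before using it for prediction; this sketch shows only the calibration and prediction phases.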
147.
148.
The hazard of any natural process can be expressed as a function of its magnitude and the annual probability of its occurrence in a particular region. Here we expand on the hypothesis that natural hazards have size–frequency relationships that in part resemble inverse power laws. We illustrate that these trends extend to extremely large events, such as mega-landslides, huge volcanic debris avalanches, and outburst flows from failures of natural dams. We review quantitative evidence supporting the important contribution of extreme events to landscape development in mountains throughout the world, and propose that their common underreporting in the Quaternary record may lead to substantial underestimates of mean process rates. We find that magnitude–frequency relationships provide a link between Quaternary science and natural-hazard research, with a degree of synergism and societal importance that neither discipline alone can deliver. Quaternary geomorphology, stratigraphy, and geochronology allow the reconstruction of the times, magnitudes, and frequencies of extreme events, whereas natural-hazard research raises public awareness of the importance of reconstructing events that have not happened historically but have the potential to cause extreme destruction and loss of life in the future.
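An inverse power-law size–frequency relation of the kind invoked above, N(≥V) = k·V⁻ᵇ, is commonly fitted by regressing log exceedance counts against log event size. The event volumes below are synthetic illustrations, not a compiled landslide inventory.

```python
# Sketch: fit N(>=V) = k * V**(-b) in log-log space from a list of event sizes.
import math

def fit_power_law(volumes):
    """Return (k, b) from the rank-size (exceedance) plot of the events."""
    v = sorted(volumes, reverse=True)
    xs = [math.log(x) for x in v]
    ys = [math.log(rank) for rank in range(1, len(v) + 1)]  # N(>=v) = rank
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), -slope

# Synthetic inventory constructed to follow N(>=V) = 120 / V exactly (b = 1).
k_fit, b_fit = fit_power_law([120.0 / r for r in range(1, 9)])
```

Real inventories follow such a law only over a limited size range, and undercounting of the largest (rarest) events, as the abstract notes, flattens the fitted tail and biases mean process rates low.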
149.
For single-phase flow through a network model of a porous medium, we report (1) solutions of the Navier–Stokes equation for the flow, (2) micro-particle image velocimetry (PIV) measurements of local flow-velocity vectors in the pore throats and pore bodies, and (3) comparisons of the computed and measured velocity vectors. A "two-dimensional" network of cylindrical pores and parallelepiped connecting throats was constructed and used for the measurements. All pore bodies had the same dimensions, but three different (square cross-section) pore-throat sizes were randomly distributed throughout the network. An unstructured computational grid for an identical network was developed and used to compute the local pressure gradients and flow vectors for several different (macroscopic) flow rates. The numerical results were compared with the experimental data, and good agreement was found. Cross-over from Darcy flow to inertial flow was observed in the computational results, and the permeability and inertia coefficients of the network were estimated. The development of inertial flow was seen to be a two-step process: (1) recirculation zones appeared in more and more pore bodies as the flow rate was increased, and (2) the strengths of individual recirculation zones increased with flow rate. Because every pore-throat and pore-body dimension is known, an experimental (and/or computed) local Reynolds number is available for every location in the porous medium at which the velocity has been measured (and/or computed).
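The Darcy-to-inertial cross-over mentioned above is conventionally described by the Forchheimer equation, −dp/dx = (μ/k)·u + β·ρ·u²; dividing by u makes the relation linear in u, so a straight-line fit yields the permeability k from the intercept and the inertia coefficient β from the slope. The fluid properties and data pairs below are hypothetical, not the paper's network results.

```python
# Sketch: estimate permeability k and inertia coefficient beta from
# (velocity, pressure-gradient) pairs via the Forchheimer equation.
# -dp/dx = (MU/k)*u + beta*RHO*u**2  =>  (-dp/dx)/u = MU/k + beta*RHO*u

MU = 1.0e-3     # Pa s (water, assumed)
RHO = 1000.0    # kg/m^3 (assumed)

def fit_forchheimer(u, grad):
    """Least-squares fit of (-dp/dx)/u vs u; returns (k in m^2, beta in 1/m)."""
    ys = [g / v for g, v in zip(grad, u)]
    n = len(u)
    mx, my = sum(u) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(u, ys))
             / sum((x - mx) ** 2 for x in u))
    intercept = my - slope * mx
    return MU / intercept, slope / RHO

# Synthetic data generated with k = 1e-10 m^2 and beta = 1e4 1/m:
k_true, beta_true = 1e-10, 1e4
u = [0.001 * i for i in range(1, 6)]                       # m/s
grad = [MU / k_true * v + beta_true * RHO * v * v for v in u]
k_est, beta_est = fit_forchheimer(u, grad)
```

At low u the β·ρ·u² term is negligible (Darcy regime); the fitted β only becomes well constrained once the data include flow rates where inertial losses matter.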
150.
We study the question of what difference it makes for the derived field-aligned conductance (K) values whether one uses Maxwellian or kappa distributions when fitting low-orbiting satellite electron flux spectra in the auroral region. The question arises because a high-energy tail is sometimes seen in the spectra. In principle, the kappa fits should always be better, because the kappa distribution is a generalization of the Maxwellian; however, the physical meaning of the parameters of the Maxwellian is clearer. It therefore makes sense to study under which circumstances it is appropriate to use a Maxwellian. We use Freja electron data (TESP and MATE) from two events, one representing quiet magnetospheric conditions (a stable arc) and the other disturbed conditions (a surge). In these Freja events, at least, kappa fitting reproduces the observed distribution better than Maxwellian fitting, but the difference in K values is not large (usually less than 20%) and can be of either sign. However, sometimes even the kappa distribution does not provide a good fit, and a more complicated distribution, such as a sum of two Maxwellians, is needed. We investigate the relative contributions of the two Maxwellians to the total field-aligned conductance in these cases and find that the contribution of the high-energy population is insignificant (usually much less than 20%). This is because K is proportional to n/√Ec, where n is the source plasma density and Ec is the characteristic energy.
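The two spectral shapes compared above can be sketched directly: the kappa distribution approaches the Maxwellian as kappa grows, while small kappa adds the high-energy tail; the n/√Ec scaling then explains why a hot, tenuous tail population contributes little conductance. Normalizations are dropped and all parameter values are illustrative.

```python
# Sketch of the two spectral shapes and the conductance scaling quoted above.
import math

def maxwellian_flux(E, T):
    """Maxwellian differential shape ~ E * exp(-E/T) (normalization dropped)."""
    return E * math.exp(-E / T)

def kappa_flux(E, T, kappa):
    """Kappa shape ~ E * (1 + E/(kappa*T))**-(kappa+1).

    Tends to the Maxwellian of the same T as kappa -> infinity.
    """
    return E * (1.0 + E / (kappa * T)) ** (-(kappa + 1.0))

def conductance_scale(n, Ec):
    """K proportional to n / sqrt(Ec) (constants dropped)."""
    return n / math.sqrt(Ec)

# Hypothetical illustration: at E = 10*T, a kappa = 3 spectrum carries a far
# larger high-energy tail than the Maxwellian with the same temperature.
tail_ratio = kappa_flux(10.0, 1.0, 3.0) / maxwellian_flux(10.0, 1.0)
```

Per the n/√Ec scaling, doubling the density doubles K, whereas quadrupling the characteristic energy only halves it, so a sparse energetic tail changes K little even when it dominates the high-energy flux.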
Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司), 京ICP备09084417号