Sorted by relevance: 347 results found (search time: 768 ms)
341.
The currently adopted rainfall-based design flood estimation method in Australia, known as the design event approach (DEA), has a flaw that is widely criticized by hydrologists. The DEA is based on the assumption that a rainfall depth of a certain frequency can be transformed into a flood peak of the same frequency by adopting 'representative values' of the other model input variables, such as temporal patterns and losses. To overcome this limitation, this paper develops stochastic model inputs to apply the Monte Carlo simulation technique (MCST) to design flood estimation. Data from 86 pluviograph stations and six catchments in eastern New South Wales (NSW), Australia, are used to regionalize the distributions of various input variables (e.g., rainfall duration, inter-event duration, intensity, temporal patterns, and loss and routing characteristics) and to simulate thousands of flood hydrographs using a nonlinear runoff routing model. The regionalized stochastic inputs are then applied with the MCST to two catchments in eastern NSW. The results indicate that the developed MCST provides more accurate flood quantile estimates than the DEA for the two test catchments. The particular advantage of the new MCST and stochastic design input variables is that they reduce the subjectivity in the selection of model input values in flood modeling. The developed MCST can be adapted to other parts of Australia and to other countries.
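The core idea of a Monte Carlo design-flood approach can be sketched in a few lines: sample storm characteristics and losses from probability distributions, push each synthetic event through a runoff model, and read design quantiles off the simulated peak population. The sketch below is illustrative only; the distributions, the loss model, and the crude runoff surrogate are assumptions for demonstration and are not the regionalized distributions or the nonlinear routing model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000  # number of synthetic storm events

# Hypothetical stochastic inputs (illustrative distributions only):
duration_h = rng.exponential(scale=12.0, size=N)      # storm duration (h)
intensity = rng.gamma(shape=2.0, scale=3.0, size=N)   # mean intensity (mm/h)
initial_loss = rng.uniform(5.0, 40.0, size=N)         # initial loss (mm)
runoff_coeff = rng.beta(2.0, 5.0, size=N)             # crude routing surrogate

# Transform each sampled storm into a flood peak (toy runoff model):
rain_depth = duration_h * intensity                   # storm depth (mm)
effective_rain = np.maximum(rain_depth - initial_loss, 0.0)
flood_peak = runoff_coeff * effective_rain / np.maximum(duration_h, 1.0)

# Flood quantiles come from the simulated peak population,
# e.g. the peak exceeded with 1% annual probability:
q100 = np.quantile(flood_peak, 1 - 1 / 100)
```

Because every input is sampled rather than fixed at a single 'representative value', the quantile reflects the joint variability of durations, intensities, and losses, which is the advantage the MCST claims over the DEA.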
342.
In the present study a turbocharged, medium-duty compression ignition engine was alternatively fuelled with biodiesel to investigate the changes in particulate matter composition relative to that obtained with diesel fuel. The engine was operated on an AC electrical dynamometer in accordance with an 8-mode, steady-state cycle. Particle numbers were estimated with an electrical low-pressure impactor, while sulfates and trace metals were analyzed by ion chromatography and inductively coupled plasma-atomic emission spectroscopy, respectively. Nitric oxide and nitrogen dioxide were measured separately using a SEMTECH-DS analyzer. Experimental results revealed that, on account of elevated ratios of nitrogen dioxide to nitrogen oxides, mean accumulation-mode particles were 42% lower with biodiesel. On the other hand, nuclei-mode particles were higher with biodiesel, owing to heterogeneous nucleation and an increase in sulfate emissions of up to 8% with biodiesel compared to diesel. On average, trace metal emissions were significantly reduced, showing 65–85.4% reduction rates with biodiesel relative to diesel. Iron, calcium, and sodium were the predominant trace metals emitted from the engine. The mean relative decrease in iron and calcium was 89–97.8% and 77.6–87%, respectively, while the relative rise in sodium was in the range of 29–46% with biodiesel. Further, elements such as zinc, chromium, and aluminum showed substantial abatement, whereas potassium, magnesium, and manganese exhibited irregular trends on account of variable engine loads and speeds during the various modes of the cycle.
343.
The Gamow-Teller strength distribution function, B(GT), for the odd-Z parent 51V, N − Z = 5, up to 30 MeV of excitation energy in the daughter 51Ti is calculated within the proton-neutron Quasiparticle Random Phase Approximation (pn-QRPA) theory. The pn-QRPA results are compared against other theoretical calculations and against (n, p) and high-resolution (d, 2He) reaction experiments. For the (d, 2He) reaction the calibration was performed for 0 ≤ Ej ≤ 5.0 MeV, where the authors stressed that within this excitation energy range the ΔL = 0 transition strength can be extracted with high accuracy for 51V. Within this energy range the current pn-QRPA total B(GT) strength of 0.79 is in good agreement with the (d, 2He) experiment's total strength of 0.9 ± 0.1. The pn-QRPA calculated Gamow-Teller centroid at 4.20 MeV in the daughter 51Ti is also in good agreement with the high-resolution (d, 2He) experiment, which placed the Gamow-Teller centroid at 4.1 ± 0.4 MeV. The detailed low-energy Gamow-Teller structure and the Gamow-Teller centroid play a significant role in the associated weak decay rates and consequently affect the stellar dynamics. The stellar weak rates are sensitive to the location and structure of these low-lying states in the daughter 51Ti. The calculated electron capture rates on 51V in stellar matter are also in good agreement with the large-scale shell model rates.
344.
Gamow-Teller (GT) transitions play a preeminent role in the collapse of the stellar core in the stages leading to a Type-II supernova. The microscopically calculated GT strength distributions from ground and excited states are used for the calculation of weak decay rates for core-collapse supernova dynamics and for probing the concomitant nucleosynthesis problem. The B(GT) strength for 57Zn is calculated within the proton-neutron Quasiparticle Random Phase Approximation (pn-QRPA) theory. No experimental insertions were made (as is usually done in other pn-QRPA calculations of the B(GT) strength function) in order to check the performance of the model for proton-rich nuclei. The calculated B(GT) strength distribution is in good agreement with measurements and shows differences from the earlier reported shell model calculation. The pn-QRPA model reproduces the measured low-lying strength for 57Zn better than the KB3G interaction used in the large-scale shell model calculation. The stellar weak rates are sensitive to the location and structure of these low-lying states in the daughter 57Cu. The structure of 57Cu plays a significant role in the nucleosynthesis of proton-rich nuclei. The primary mechanism for producing such nuclei is the rp-process, which is believed to be important in the dynamics of collapsing supermassive stars. Small changes in the binding and excitation energies can lead to significant modifications of the predictions for the synthesis of proton-rich isotopes. The β+-decay and electron capture (EC) rates on 57Zn are compared to the seminal work of Fuller, Fowler and Newman (FFN). The pn-QRPA calculated β+-decay rates are generally in good agreement with the FFN calculation. However, at high stellar temperatures the calculated β+-decay rates are almost half of the FFN rates. On the other hand, for rp-process conditions, the calculated electron capture (β+-decay) rates are larger than the FFN rates by more than a factor of 2 (1.5) and may have interesting astrophysical consequences.
345.
Palaeontology was established as a science in the Victorian era, yet has roots that stretch deeper into the recesses of history. More than 2000 years ago, the Greek philosopher Aristotle deduced that fossil sea shells were once living organisms, and around 500 BC Xenophanes used fossils to argue that many areas of land must have previously been submarine. In 1027, the Persian scholar Avicenna suggested that organisms were fossilized by petrifying fluids; this theory was accepted by most natural philosophers up until the eighteenth-century Enlightenment, and even beyond. The late 1700s were notable for the work of Georges Cuvier, who established the reality of extinction. This, coupled with advances in the recognition of faunal successions made by the canal engineer William Smith, laid the framework for the discipline that would become known as palaeontology. As the nineteenth century progressed, the scientific community became increasingly well organized. Most fossil workers were gentleman scientists and members of the clergy, who self-funded their studies in a new and exciting field. Many of the techniques used to study fossils today were developed during this 'classical' period. Perhaps the most fundamental of these is to expose a fossil by splitting the rock housing it, and then conduct investigations based upon the exposed surface (Fig. 1). This approach has served the science well over the last two centuries, having been pivotal to innumerable advances in our understanding of the history of life. Nevertheless, there are many cases where splitting a rock in this way results in incomplete data recovery: those where the fossils are not flattened, but are preserved in three dimensions. Even the ephemeral soft tissues of organisms are occasionally preserved in a three-dimensional state, for example in the Herefordshire, La Voulte-sur-Rhône and Orsten 'Fossil Lagerstätten' (sites of exceptional fossil preservation).
These rare and precious deposits provide a wealth of information about the history of life on Earth, and are perhaps our most important resource in the quest to understand the palaeobiology of extinct organisms. With the aid of twenty‐first century technology, we can now make the most of these opportunities through the field of ‘virtual palaeontology’—computer‐aided visualization of fossils.
Figure 1. A split nodule showing the fossil within, in this case a cockroachoid insect. Fossil 4 cm long (from Garwood & Sutton, in press).
346.
Bangladesh is one of the countries most vulnerable to natural disasters such as droughts. The pre-monsoon Aus rice in Bangladesh depends on rainfall and is threatened by increasing droughts. However, limited information on the changes in Aus rice as well as in droughts hampers our understanding of the country's agricultural resilience and adaptation to droughts. Here, we collected all the official statistical data on Aus rice at the district level from 1980 to 2018, and examined the inter…
347.
An explicit model management framework is introduced for predicting Groundwater Levels (GWL), particularly suitable for Observation Wells (OWs) with sparse and possibly heterogeneous data. The framework implements Multiple Models (MM) under an architecture that organises them at levels, as follows: (i) Level 0: treat heterogeneity in the data, e.g. Self-Organised Mapping (SOM) to classify the OWs, and decide on model structure, e.g. formulate a grey-box model to predict GWLs; (ii) Level 1: construct MMs, e.g. two Fuzzy Logic (FL) models and one Neurofuzzy (NF) model; (iii) Level 2: formulate strategies to combine the MMs at Level 1, for which the paper uses Artificial Neural Networks (Strategy 1) and simple averaging (Strategy 2). Whilst the above model management strategy is novel, a critical view is presented according to which modelling practices are either Inclusive Multiple Modelling (IMM) practices or, contrasted with these, existing practices branded by the paper as Exclusionary Multiple Modelling (EMM). Scientific thinking over IMMs is captured as a framework with four dimensions: Model Reuse (MR), Hierarchical Recursion (HR), Elastic Learning Environment (ELE) and Goal Orientation (GO), which together make the acronym RHEO. IMM-RHEO is therefore piloted in the aquifer of the Tabriz Plain with sparse and possibly heterogeneous data. The results provide some evidence that (i) IMM at two levels improves on the accuracy of individual models, and (ii) model combinations in IMM practices bring 'model-learning' into fashion for learning with the goal of explaining baseline conditions and the impacts of subsequent management changes.
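The Level-2 combination step can be illustrated with the simpler of the two strategies, simple averaging: predictions from several Level-1 models are averaged element-wise, and the combined series is scored against observations. The numbers below are invented for illustration; they are not the paper's Tabriz Plain data, and the three toy prediction vectors merely stand in for the two FL models and one NF model.

```python
import numpy as np

# Hypothetical groundwater-level predictions (m) from three Level-1 models,
# standing in for the paper's two FL models and one NF model:
preds = np.array([
    [12.1, 12.4, 12.0, 11.8],   # FL model 1
    [12.3, 12.6, 12.2, 11.9],   # FL model 2
    [12.0, 12.5, 12.1, 11.7],   # NF model
])
observed = np.array([12.2, 12.5, 12.1, 11.8])  # hypothetical observations

# Strategy 2 (Level 2): simple averaging of the Level-1 predictions.
combined = preds.mean(axis=0)

def rmse(y_hat, y):
    """Root-mean-square error between a prediction and the observations."""
    return float(np.sqrt(np.mean((y_hat - y) ** 2)))

individual_errors = [rmse(p, observed) for p in preds]
combined_error = rmse(combined, observed)
```

With these toy numbers the averaged prediction outscores every individual model, which is the kind of evidence the paper reports for IMM at two levels; Strategy 1 would replace the plain mean with an ANN trained to weight the Level-1 outputs.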