By access: subscription full text 385; free 16; free (domestic) 2
By discipline: Surveying and Mapping 6; Atmospheric Sciences 26; Geophysics 127; Geology 102; Oceanography 28; Astronomy 85; General 1; Physical Geography 28
By year: 2022 (5); 2021 (4); 2020 (10); 2019 (7); 2018 (11); 2017 (7); 2016 (10); 2015 (11); 2014 (7); 2013 (14);
2012 (14); 2011 (6); 2010 (17); 2009 (14); 2008 (16); 2007 (18); 2006 (8); 2005 (12); 2004 (4); 2003 (12);
2002 (6); 2001 (6); 2000 (7); 1999 (5); 1998 (4); 1997 (6); 1996 (3); 1995 (8); 1994 (12); 1993 (9);
1992 (4); 1991 (6); 1990 (4); 1987 (6); 1986 (4); 1985 (12); 1984 (9); 1983 (5); 1982 (8); 1981 (9);
1980 (3); 1979 (8); 1978 (6); 1977 (8); 1976 (6); 1975 (4); 1973 (6); 1972 (2); 1971 (5); 1969 (4)
403 results found (search time: 328 ms).
61.
This paper presents a general framework for predicting the residual drift of idealized SDOF systems that can be used to represent non-degrading structures, including those with supplemental dampers. The framework first uses post-peak oscillation analysis to predict the maximum ratio of residual displacement to peak transient displacement in a random sample. Residual displacement ratios obtained from nonlinear time-history analyses using both far-field and near-fault-pulse records are then examined to identify trends, which are explained using the oscillation mechanics of SDOF systems. It is shown that existing probability models that do not capture the influence of key parameters on the residual displacement can produce large errors. Building on these observations, a general probability distribution for the ratio of residual displacement to peak transient displacement is proposed that more accurately reflects the physical bounds obtained from post-peak oscillation analysis and captures the probabilistic residual displacement response of these systems. The proposed distribution is shown to be more accurate than previously proposed distributions in the literature because it explicitly accounts for dynamic and damping properties, which have a significant impact on the residual displacement. This study provides a rational basis for further development of a residual drift prediction tool for the performance-based design and analysis of more complex multi-degree-of-freedom systems.
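The central quantity throughout is the ratio of residual to peak transient displacement. As a minimal illustration of how that ratio might be extracted from a single response history (a sketch, not the authors' code; the tail-averaging heuristic and the `run_nltha` driver are assumptions for illustration):

```python
import numpy as np

def residual_ratio(displacement: np.ndarray, tail_fraction: float = 0.05) -> float:
    """Ratio of residual to peak transient displacement for one SDOF
    displacement history from a nonlinear time-history analysis.

    The residual displacement is approximated by averaging the final
    `tail_fraction` of the record, after free vibration has decayed.
    """
    peak = np.max(np.abs(displacement))
    n_tail = max(1, int(len(displacement) * tail_fraction))
    residual = abs(np.mean(displacement[-n_tail:]))
    return residual / peak

# Building an empirical distribution over a suite of records (run_nltha is a
# hypothetical solver returning the displacement history for one record):
# ratios = [residual_ratio(run_nltha(record)) for record in record_suite]
```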
62.
This article introduces a type of DBMS called the Intentionally-Linked Entities (ILE) DBMS for use as the basis for temporal and historical Geographical Information Systems. ILE represents each entity in a database only once, thereby largely eliminating redundancy and fragmentation, two major problems in Relational and other database systems. These advantages are realized by using relationship objects and pointers to implement all relationships among data entities natively, with dynamically allocated linked data structures; ILE can thus be considered a modern, extended implementation of the E/R data model. ILE also facilitates storing records that are more faithful to historical sources, such as gazetteer entries for places with imprecisely known or unknown locations. This is difficult in Relational database systems but routine in ILE, because ILE is implemented with modern memory-allocation techniques. We use the China Historical GIS (CHGIS) and other databases to illustrate the advantages of ILE, by modeling these databases in ILE and comparing the results with the existing Relational implementations.
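To make the pointer-based storage model concrete, here is a toy sketch (not the ILE implementation; all class and field names are invented for illustration) of entities stored once and linked through first-class relationship objects:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Entity:
    """Stored exactly once; every relationship refers to it by reference,
    so updating the entity never touches redundant copies."""
    attributes: dict
    relationships: List["Relationship"] = field(default_factory=list)

@dataclass
class Relationship:
    """A first-class relationship object holding pointers to its members,
    mirroring ILE's dynamically allocated linked data structures."""
    kind: str
    members: List[Entity]

def link(kind: str, *entities: Entity) -> Relationship:
    rel = Relationship(kind, list(entities))
    for e in entities:
        e.relationships.append(rel)  # back-pointer from each member entity
    return rel

# A gazetteer entry whose location is unknown is still a single entity;
# the missing coordinate needs no relational workaround:
luoyang = Entity({"name": "Luoyang", "location": None})
tang = Entity({"period": "Tang dynasty"})
link("capital_of", luoyang, tang)
```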
63.
Fault-controlled hydrothermal dolomitization in tectonically complex basins can occur at any depth and from fluids of different compositions, including 'deep-seated', 'crustal' or 'basinal' brines. Nevertheless, many studies have failed to identify the actual source of these fluids, leaving a gap in our knowledge of the likely source of magnesium for hydrothermal dolomitization. Drawing on new concepts in hydrothermal dolomitization, this study tests the hypothesis that the dolomitizing fluids were sourced from seawater, from ultramafic carbonation, or from a mixture of the two, using the Cambrian Mount Whyte Formation as an example. Here, the large-scale dolostone bodies are fabric-destructive, with a range of crystal fabrics including euhedral replacement (RD1) and anhedral replacement (RD2). Since the dolomite is cross-cut by low-amplitude stylolites, dolomitization is interpreted to have occurred shortly after deposition, at very shallow depth (<1 km). At this time, there would have been sufficient porosity in the mudstones for extensive dolomitization to occur, and the high heat flows and faulting associated with Cambrian rifting could transfer hot brines to the near surface. While the δ18Owater and 87Sr/86Sr values of RD1 are comparable with Cambrian seawater, RD2 shows higher values for both parameters. Therefore, although aspects of the fluid geochemistry are consistent with dolomitization from seawater, the very high fluid temperature and salinity suggest mixing with another, hydrothermal fluid. The very high temperature, positive Eu anomaly, enriched metal concentrations, and cogenetic relationship with quartz indicate that the hot brines were at least partially sourced from ultramafic rocks, potentially through interaction between the underlying Proterozoic serpentinites and CO2-rich fluids. This study highlights that large-scale hydrothermal dolostone bodies can form at shallow burial depths via mixing during fluid pulses, providing a potential explanation for the mass-balance problem often associated with their genesis.
64.
This study focuses on a passive treatment system known as the horizontal reactive treatment well (HRX Well®), installed parallel to groundwater flow, which operates on the principle of flow focusing arising from the hydraulic conductivity (K) contrast between the well and the aquifer media. Passive flow and capture in the HRX Well are described by simplified equations adapted from Darcy's Law. A field pilot-scale study (PSS) and numerical simulations using a finite element method (FEM) were conducted to verify the HRX Well concept and test the validity of these simplified equations. The hydraulic performance observed in both studies was in close agreement with the simplified equations, with a hydraulic capture width approximately five times the well diameter (0.20 m). Key parameters affecting capture included the aquifer thickness, the well diameter, and the permeability ratio between the HRX Well treatment media and the aquifer material. During pilot testing, the HRX Well captured 39% of the flow while representing only 0.5% of the test pit cross-section, indicating that the well captures a substantial amount of the surrounding groundwater. While uncertainty in the aquifer and well properties (porosity, K, well losses), including the effects of boundary conditions, may have caused minor differences in the results, the data from this study indicate that the simplified equations are valid for the conceptual design of a field study. Based on these outcomes, a full-scale HRX Well was installed at Site SS003 at Vandenberg Air Force Base, California, in July/August 2018.
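The abstract does not reproduce the paper's simplified design equations, but the flow-focusing principle can be illustrated with the classical potential-flow result for a circular inclusion in a uniform Darcy flow. This generic 2-D idealization is offered as an assumption, not the paper's equations:

```python
def focusing_factor_2d(k_well: float, k_aquifer: float) -> float:
    """Classical potential-flow result: specific-discharge amplification inside
    an infinite circular inclusion of conductivity k_well embedded in a uniform
    Darcy flow through a medium of conductivity k_aquifer (2-D, steady state).
    A generic idealization, not the HRX Well design equations."""
    return 2.0 * k_well / (k_well + k_aquifer)

def capture_width_2d(diameter: float, k_well: float, k_aquifer: float) -> float:
    """Upstream width of aquifer flow funneled through the inclusion:
    amplification times diameter, by mass balance across the inclusion."""
    return focusing_factor_2d(k_well, k_aquifer) * diameter

# For k_well >> k_aquifer the 2-D factor saturates at 2, i.e. ~2 diameters;
# the ~5x diameter capture reported in the pilot study reflects 3-D,
# finite-length well effects handled by the paper's own equations and FEM.
print(capture_width_2d(0.20, 1e-1, 1e-4))  # ≈ 0.4 m in the 2-D idealization
```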
65.
The horizontal reactive media treatment well (HRX Well®) uses directionally drilled horizontal wells filled with a treatment media to induce flow-focusing behavior, created by the well-to-aquifer permeability contrast, that passively captures proportionally large volumes of groundwater. Groundwater is treated in situ as it flows through the HRX Well, and downgradient portions of the aquifer are cleaned by elution as these zones are flushed with clean water discharging from the well. The HRX Well concept is particularly well suited to sites where long-term mass discharge control is the primary performance objective, and it is appropriate for recalcitrant and difficult-to-treat constituents, including chlorinated solvents, per- and polyfluoroalkyl substances (PFAS), 1,4-dioxane, and metals. A full-scale HRX Well was installed and operated to treat trichloroethene (TCE) with zero-valent iron (ZVI). The model-predicted enhanced flow through the HRX Well (compared with the flow through an equivalent cross-sectional area orthogonal to flow in the natural formation before installation) and the predicted treatment zone width were consistent with flows and widths estimated independently by point velocity probe (PVP) testing, HRX Well tracer testing, and observed treatment in downgradient monitoring wells. The actual average capture zone width was estimated to be between 45 and 69 feet. Total TCE mass discharge reduction was maintained throughout the performance monitoring period and exceeded 99.99%. Decreases in TCE concentrations were observed at all four downgradient monitoring wells within the treatment zone (ranging from 50 to 74% at day 436), and the first arrival of treated water was consistent with model predictions. The field demonstration confirmed that the HRX Well technology is best suited for long-term mass discharge control, can be installed under active infrastructure, requires limited ongoing operation and maintenance, and has low life-cycle energy and water requirements.
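As a back-of-the-envelope illustration of the mass-discharge metric behind the 99.99% figure (illustrative numbers only, not the site data):

```python
def mass_discharge(conc_mg_L: float, q_m3_day: float) -> float:
    """Contaminant mass discharge through a plume transect (g/day):
    concentration (mg/L) times volumetric flow (m3/day); 1 mg/L = 1 g/m3."""
    return conc_mg_L * q_m3_day

def discharge_reduction(c_up: float, c_down: float, q: float) -> float:
    """Fractional reduction in mass discharge across the treatment well,
    assuming the same flow q crosses the upgradient and downgradient transects."""
    return 1.0 - mass_discharge(c_down, q) / mass_discharge(c_up, q)

# e.g., TCE dropping from 5 mg/L to 0.0005 mg/L at equal flow:
print(f"{discharge_reduction(5.0, 0.0005, 10.0):.4%}")  # 99.9900%
```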
66.
Parameterization of wave runup is of paramount importance for the assessment of coastal hazards. Parametric models employ wave (e.g., Hs and Lp) and beach (i.e., β) parameters to estimate extreme runup (e.g., R2%). Recent studies have therefore been devoted to improving such parameterizations by including additional information on the wave forcing or beach morphology. However, the effects of intra-wave dynamics, related to the random nature of the wave-transformation process, on runup statistics have not been incorporated. This work employs a phase- and depth-resolving model, based on the Reynolds-averaged Navier-Stokes equations, to investigate different sources of variability associated with runup on planar beaches. The numerical model is validated against laboratory runup data. Subsequently, the roles of aleatory uncertainty and of other known sources of runup variability (i.e., frequency spreading and bed roughness) are investigated. Model results show that aleatory uncertainty can be more important than the contributions of other sources of variability such as bed roughness and frequency spreading. The ensemble results are used to develop a new parametric model based on the Hunt (J Waterw Port Coastal Ocean Eng 85:123–152, 1959) scaling parameter β(HsLp)^(1/2).
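A sketch of how a Hunt-type parameterization is evaluated (the coefficient k is a placeholder to be fitted to the ensemble R2% results, not the paper's calibrated value):

```python
import math

def hunt_runup(beta: float, hs: float, lp: float, k: float = 1.0) -> float:
    """Hunt-type runup scale: R ~ k * beta * sqrt(Hs * Lp), where beta is the
    beach slope, Hs the significant wave height, and Lp the peak wavelength.
    k must be calibrated to data; its default here is a placeholder."""
    return k * beta * math.sqrt(hs * lp)

# Example: 2 m waves with a 100 m peak wavelength on a 1:20 planar beach
print(hunt_runup(beta=0.05, hs=2.0, lp=100.0))  # ≈ 0.71 m per unit k
```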
67.
Various critical-state-related formulas, for the critical state line (CSL) and for the critical-state-dependent interlocking effect, have been proposed in constitutive models of granular materials over the last decades, raising confusion about how to select an appropriate model for geotechnical applications. This paper discusses the selection of these critical-state-related formulas and the identification of their parameters. Three formulas for the critical state line and two formulas for the critical-state-dependent interlocking effect are combined to form six elasto-plastic models. Drained and undrained triaxial tests on four different granular materials are selected for simulation. To eliminate artificial errors, a new hybrid genetic-algorithm-based intelligent method is proposed and used to identify parameters and produce minimum-error simulations for each granular material and each model. The performance of each CSL and each state parameter is then evaluated using two information criteria, and further assessed by simulating three footing tests with a finite-element analysis in which the models are implemented. All comparisons demonstrate that incorporating a nonlinear critical state line combined with the state parameter e/e_c in constitutive modeling yields comparatively more satisfactory simulations.
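As one concrete instance of a nonlinear CSL with the e/e_c state variable (the power-law form commonly attributed to Li and Wang (1998) is assumed here as the nonlinear candidate; the parameter values are illustrative, not fitted to the paper's materials):

```python
def e_critical(p: float, e_gamma: float, lam: float, xi: float,
               pa: float = 101.325) -> float:
    """Nonlinear critical state line in e-p' space:
    e_c = e_gamma - lam * (p/pa)^xi, with pa a reference (atmospheric)
    pressure in kPa. Parameters are material-specific and must be fitted."""
    return e_gamma - lam * (p / pa) ** xi

def state_ratio(e: float, p: float, **csl) -> float:
    """Interlocking state variable e/e_c used in the best-performing models;
    e/e_c < 1 indicates a denser-than-critical (dilative) state."""
    return e / e_critical(p, **csl)

# Hypothetical sand at p' = 500 kPa with void ratio e = 0.75:
print(state_ratio(0.75, 500.0, e_gamma=0.934, lam=0.019, xi=0.7))  # ≈ 0.86
```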
68.
Using a subset of the SEG Advanced Modeling Program Phase I controlled-source electromagnetic data, we apply our standard controlled-source electromagnetic interpretation workflows to delineate a simulated hydrocarbon reservoir. The experience gained from characterizing such a complicated model offers an opportunity to refine our workflows and achieve better interpretation quality. The exercise proceeded in blind-test style: the interpreting geophysicists did not know the true resistivity model until the end of the project. Instead, the interpreters were provided with a traditional controlled-source electromagnetic data package, including electric-field measurements, interpreted seismic horizons, and well-log data. Based on petrophysical analysis, a background resistivity model was established first. The interpreters then began with feasibility studies to establish the recoverability of the prospect and carefully stepped through 1D, 2.5D, and 3D inversions, with seismic and well-log data integrated at each stage. A high-resistivity zone is identified with the 1D analysis and further characterized with 2.5D inversions; its lateral distribution is confirmed with a 3D anisotropic inversion. The importance of integrating all available geophysical and petrophysical data to derive a more accurate interpretation is demonstrated.
69.
Ronen D, Sorek S, Gilron J. Ground Water, 2012, 50(1): 27–36
This issue paper shows how certain policies for managing groundwater quality have led to unexpected and undesirable results, despite being backed by seemingly reasonable assumptions, in part because these supposedly reasonable decisions were not based on an integrative and quantitative methodology. The policies surveyed are: (1) implementing a program for aquifer restoration to pristine conditions and then, after failure, leaving the aquifer to natural attenuation; (2) the "Forget About The Aquifer" (FATA) approach, which ignores the damage that contaminated groundwater can inflict on other environmental systems; (3) groundwater recharge in municipal areas that neglects contaminants in the unsaturated zone and the conditions imposed by overlying impervious surfaces; (4) the Soil Aquifer Treatment (SAT) practice, which treats aquifers as "filters of infinite capacity"; and (5) focusing on well contamination rather than aquifer contamination in order to defer grappling with the problem of the aquifer as a whole. Possible reasons for the failure of these seemingly rational policies are: (1) the characteristic times of groundwater processes, which are usually orders of magnitude greater than the residence times of decision makers in their managerial positions; (2) the proliferation of improperly trained "groundwater experts" and of policymakers with sectoral agendas, alongside legitimate differences of opinion among groundwater scientists; (3) neglect of the cyclic nature of natural phenomena; and (4) disregard of long-term future costs in favor of immediate costs.
70.
Probabilistic seismic risk assessment for spatially distributed lifelines is less straightforward than for individual structures. While procedures such as the 'PEER framework' have been developed for risk assessment of individual structures, they are not easily applied to distributed lifeline systems, because of the difficulty of describing ground-motion intensity (e.g., spectral acceleration) over a region (in contrast to ground-motion intensity at a single site, which is easily quantified using Probabilistic Seismic Hazard Analysis), and because the link between ground-motion intensities and lifeline performance is usually not available in closed form. As a result, Monte Carlo simulation (MCS) and its variants are well suited to characterizing ground motions and computing the resulting losses to lifelines. This paper proposes a simulation-based framework for developing a small but stochastically representative catalog of earthquake ground-motion intensity maps that can be used for lifeline risk assessment. In this framework, Importance Sampling is used to preferentially sample 'important' ground-motion intensity maps, and K-Means Clustering is used to identify and combine redundant maps in order to obtain a small catalog. The effects of sampling and clustering are accounted for through a weight on each remaining map, so that the resulting catalog remains a probabilistically correct representation. The feasibility of the proposed framework is illustrated by using it to assess the seismic risk of a simplified model of the San Francisco Bay Area transportation network. A catalog of just 150 intensity maps is generated to represent the hazard at 1038 sites from 10 regional fault segments producing earthquakes with magnitudes between five and eight. The risk estimates obtained using these maps are consistent with those obtained using conventional MCS with many orders of magnitude more ground-motion intensity maps. The proposed technique can therefore drastically reduce the computational expense of simulation-based risk assessment without compromising the accuracy of the risk estimates, facilitating computationally intensive risk analysis of systems such as transportation networks. Finally, the study shows that the uncertainties in the ground-motion intensities, and the spatial correlations between intensities at different sites, must be modeled to obtain unbiased estimates of lifeline risk.
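A compact sketch of the catalog-reduction step (assuming scikit-learn's KMeans; keeping the highest-weight map in each cluster is a simplification for illustration, not necessarily the authors' selection rule):

```python
import numpy as np
from sklearn.cluster import KMeans

def reduce_catalog(maps: np.ndarray, is_weights: np.ndarray, n_keep: int = 150):
    """Condense importance-sampled ground-motion intensity maps into a small,
    probabilistically weighted catalog (a sketch of the IS + K-Means idea).

    maps       : (n_maps, n_sites) array of intensity values, drawn by
                 importance sampling from the regional hazard model
    is_weights : importance-sampling weight of each map
                 (target density / proposal density)
    """
    km = KMeans(n_clusters=n_keep, n_init=10).fit(maps)
    catalog, weights = [], []
    for c in range(n_keep):
        idx = np.where(km.labels_ == c)[0]
        # Keep one representative map per cluster of redundant maps...
        rep = idx[np.argmax(is_weights[idx])]
        catalog.append(maps[rep])
        # ...carrying the total probability mass of the maps it replaces,
        # so the reduced catalog stays probabilistically correct.
        weights.append(is_weights[idx].sum())
    return np.array(catalog), np.array(weights) / np.sum(is_weights)
```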