23 search results.
1.
We have discussed the behavior of a non-conserved scalar in the stationary, horizontally homogeneous, neutral surface-flux layer and, on the basis of conventional second-order closure, derived analytic expressions for the flux and mean concentration of a gas subject to a first-order removal process. The analytic flux solution shows a clear deviation from the constant flux that characterizes a conserved scalar in the surface-flux layer: it decreases with height and has dropped an order of magnitude below the surface flux at a height roughly equal to the typical mean distance a molecule travels before destruction. The predicted mean concentration profile, however, deviates only slightly from the logarithmic behavior of a conserved scalar. The solution is consistent with a flux-gradient relationship whose turbulent diffusivity is corrected by the Damköhler ratio, the ratio of a characteristic turbulent time scale to the scalar's mean lifetime. We show that if we use only first-order closure and neglect the effect of the Damköhler ratio on the turbulent diffusivity, we obtain another analytic solution for the flux and mean-concentration profiles that is, from an experimental point of view, indistinguishable from the first. We have discussed two cases where the model should apply: NO, which at night is irreversibly destroyed mainly by reaction with O3, and the radioactive isotope 220Rn. Only in the latter case was it possible to find data to shed light on the validity of our predictions; the agreement was such that the model could not be falsified. It is shown how the model can be used to predict the surface flux of 220Rn from measured concentration profiles.
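The qualitative behavior described above can be reproduced with a minimal numerical sketch. This is not the authors' second-order closure: it assumes the simplest first-order picture, a neutral-surface-layer diffusivity K(z) = κu*z, a flux-gradient relation F = -K dc/dz, and a first-order sink dF/dz = -c/τ. All parameter values are hypothetical.

```python
import numpy as np

# Sketch under stated assumptions (hypothetical parameters):
# neutral surface layer, K(z) = kappa * u_star * z,
# flux-gradient closure F = -K dc/dz, first-order sink dF/dz = -c/tau.
kappa, u_star, tau = 0.4, 0.3, 4000.0   # von Karman const., friction velocity (m/s), lifetime (s)
z0, z_top, n = 0.01, 50.0, 5000          # roughness height, top of layer (m), grid points
z = np.logspace(np.log10(z0), np.log10(z_top), n)

c = np.empty(n)
F = np.empty(n)
c[0], F[0] = 1.0, 0.01                   # hypothetical surface concentration and upward flux

# Explicit Euler integration upward from the surface
for i in range(n - 1):
    dz = z[i + 1] - z[i]
    K = kappa * u_star * z[i]
    c[i + 1] = c[i] - F[i] / K * dz      # dc/dz = -F/K
    F[i + 1] = F[i] - c[i] / tau * dz    # dF/dz = -c/tau (removal)

# F decreases with height (non-constant flux), while c stays near-logarithmic.
```

The loop makes the contrast with a conserved scalar explicit: with τ → ∞ the flux F would be constant with height, whereas the sink term steadily depletes it.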
2.
3.
4.
In this paper a recently proposed method for damage localization and quantification of RC structures from response measurements is tested on experimental data. The method requires at least one response measurement along the structure and the ground-surface acceleration; further, the two lowest time-varying eigenfrequencies of the structure must be identified. The data considered are sampled from a series of three RC-frame model tests performed at the structural laboratory at Aalborg University, Denmark, during the autumn of 1996. The frames in the test series were exposed to two or three series of ground motions of increasing magnitude. After each run the damage state of the frame was examined, and each storey was classified into one of six categories: undamaged, cracked, lightly damaged, damaged, severely damaged, or collapsed. During each ground-motion event the storey accelerations were measured by accelerometers. After the last earthquake sequence the frames were cut into pieces, each beam and column was statically tested, and damage assessment was performed using the obtained stiffnesses. The storey damage determined by the suggested method was then compared with the damage classifications from the visual inspection as well as the static tests. It was found that, especially in cases where the damage is concentrated in a certain area of the structure, the suggested method gives a very good damage assessment. © 1998 John Wiley & Sons, Ltd.
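The link between identified eigenfrequencies and damage can be illustrated with a common softening index; this is only a generic sketch, not the specific localization method tested in the paper. Because modal stiffness scales with the square of an eigenfrequency, a drop from a reference frequency f_ref to an identified f_damaged maps to a relative stiffness loss. The frequency values below are hypothetical.

```python
def softening_index(f_ref, f_damaged):
    """Generic global softening index (a sketch, not the paper's method):
    stiffness scales with the square of an eigenfrequency, so
    delta = 1 - (f_damaged / f_ref)**2.
    delta = 0 means undamaged; delta -> 1 means near-total stiffness loss."""
    return 1.0 - (f_damaged / f_ref) ** 2

# Hypothetical identified lowest eigenfrequencies (Hz) before and after a run
delta = softening_index(4.0, 3.2)   # approx. 0.36, i.e. ~36% stiffness loss
```

Tracking such an index for the two lowest identified frequencies over each ground-motion run is one simple way to quantify the progressive softening the abstract refers to.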
5.
Time-domain electromagnetic data are conveniently inverted using smoothly varying 1D models with a fixed vertical discretization. The vertical smoothness of the obtained models stems from the application of Occam-type regularization constraints, which address the ill-posedness of the problem. An important side effect of such regularization, however, is that sharp horizontal layer boundaries can no longer be accurately reproduced, since the model is required to be smooth. This issue can be overcome by inverting for fewer layers with variable thicknesses; nevertheless, choosing a single, fixed number of layers for the parameterization of a large survey inversion can be equally problematic. Here we present a focusing regularization technique that obtains the best of both methodologies. The new focusing approach allows accurate reconstruction of resistivity distributions on a fixed vertical discretization while preserving the capability to reproduce horizontal boundaries. The formulation is flexible and can be coupled with traditional lateral/spatial smoothness constraints in order to resolve interfaces in stratified soils with no additional hypotheses about the number of layers. The method relies on minimizing the number of layers with non-vanishing resistivity gradient, instead of minimizing the norm of the model variation itself. This ensures that the results are consistent with the measured data while favouring, at the same time, the retrieval of abrupt horizontal changes. In addition, the focusing regularization can be applied in the horizontal direction in order to promote the reconstruction of lateral boundaries such as faults. We present the theoretical framework of our regularization methodology and illustrate its capabilities by means of both synthetic and field data sets. We further demonstrate how the concept has been integrated in our existing spatially constrained inversion formalism and show its application to large-scale time-domain electromagnetic data inversions.
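The idea of penalizing the number of non-vanishing gradients, rather than their norm, is commonly realized with minimum-gradient-support-style reweighting. The snippet below is a sketch under that assumption, not the authors' exact formulation: the penalty Σ g_i²/(g_i² + β²) approximately counts non-zero gradients, and in an iteratively reweighted least-squares (IRLS) scheme this amounts to weighting each squared gradient by w_i = 1/(g_i² + β²). The model values are hypothetical.

```python
import numpy as np

def focusing_weights(m, beta):
    """Minimum-gradient-support style reweighting (illustrative sketch):
    the focusing penalty sum_i g_i**2 / (g_i**2 + beta**2) approximately
    counts non-vanishing gradients; in IRLS it is realized by weighting
    the squared gradient g_i**2 with w_i = 1 / (g_i**2 + beta**2)."""
    g = np.diff(m)                      # vertical gradients between fixed layers
    return 1.0 / (g ** 2 + beta ** 2)

# Hypothetical two-layer log-resistivity model on a fixed discretization
m = np.array([2.0, 2.0, 2.0, 1.0, 1.0, 1.0])
w = focusing_weights(m, beta=0.1)
# The single sharp interface (index 2) receives a small weight, so its
# gradient is barely penalized; flat regions receive large weights and
# are therefore kept flat - smoothness without smearing the boundary.
```

As β → 0 the penalty approaches a pure count of non-zero gradients; larger β blends back towards conventional smoothness, which is the trade-off the focusing approach exploits.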
6.
7.
While the spatial heterogeneity of many aquatic ecosystems is acknowledged, rivers are often mistakenly described as homogeneous and well mixed. The collection and visualization of attributes such as water quality is key to our perception and management of these ecosystems. The assumption of homogeneity can lead to the conclusion that data collected at discrete, discontinuous points in space or time provide a comprehensive estimate of condition. To counter this perception, we combined high-density data collection with spatial interpolation techniques to create two-dimensional maps of water quality. Maps of four riverine transitions and habitats - wetland to urban, river to reservoir, river to estuary, and a groundwater intrusion - were constructed from the continuous data. The examples show that the most basic water-quality parameters - temperature, conductivity, salinity, turbidity, and chlorophyll fluorescence - are heterogeneous at spatial scales smaller than those captured by common point-sampling strategies. The two-dimensional, interpolation-based maps of the Hillsborough River (Tampa, FL) show significant influences of a variety of geographic features, including tributary confluences, submarine groundwater inflow, and riparian interfaces. We conclude that many sampling strategies do not account for the kind of patchy heterogeneity observed. The integration of existing in-situ sensors, inexpensive autonomous sampling platforms, and geospatial mapping techniques provides the high-resolution visualization and more comprehensive geographic perspective needed for environmental monitoring and assessment programs.
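The workflow of gridding high-density scattered measurements into a 2D map can be sketched with a standard interpolation routine. This is only an illustration with synthetic data; it is not necessarily the interpolation method or the sensor data used in the study, and all positions and values below are made up.

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical boat-track samples: (x, y) positions in metres and
# water temperature (degC), with a weak along-river gradient plus noise.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(200, 2))
temp = 20.0 + 0.05 * xy[:, 0] + rng.normal(0, 0.1, 200)

# Interpolate the scattered samples onto a regular 50 x 50 grid;
# points outside the convex hull of the samples become NaN.
xi, yi = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
tmap = griddata(xy, temp, (xi, yi), method="linear")
```

With a real transect, `tmap` is the kind of continuous 2D field that reveals patchiness invisible to a handful of fixed stations; the same call works for conductivity, turbidity, or fluorescence columns.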
8.
A 2-bay, 6-storey reinforced-concrete model test frame (scale 1:5) subjected to sequential earthquakes of increasing magnitude is considered in this paper. The frame was designed with a weak storey, in which the columns are weakened by using thinner and weaker reinforcement bars. The aim of the work is to study the global response of such buildings to a damaging strong-motion earthquake event. Special emphasis is put on examining to what extent damage in the weak storey can be identified from global response measurements during an earthquake the structure survives, and what level of excitation is necessary in order to identify the weak storey. Furthermore, emphasis is put on examining how and where damage develops in the structure, and especially how the weak storey accumulates damage. Besides the storey-wise damage assessment, the structure is characterized by applying a static load at the top storey while measuring the horizontal displacements of the storeys; visual inspection is also performed. From the investigations it is found that failure occurs in the weak storey because the absolute value of its stiffness deteriorates to a critical level at which large plastic deformations occur; the storey is then no longer capable of transferring the shear forces from the storeys above, so failure is unavoidable.
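The static identification step described above - a single load at the top storey with measured storey displacements - can be sketched as follows. With one top load P, every storey carries the same shear P, so each secant inter-storey stiffness is P divided by the inter-storey drift. The load and displacement numbers below are hypothetical, not the test data.

```python
# Sketch with hypothetical numbers: single static load P at the top
# storey, so every storey carries shear P and the secant inter-storey
# stiffness is k_i = P / (u_i - u_{i-1}).
P = 5.0                                     # kN, hypothetical top load
u = [0.0, 1.2, 2.0, 3.5, 4.1, 4.6, 5.0]     # mm: ground plus 6 storey displacements

k = [P / (u[i] - u[i - 1]) for i in range(1, len(u))]  # kN/mm per storey

# The storey with the smallest k (here storey 3, with the largest
# drift 2.0 -> 3.5 mm) is the candidate weak/damaged storey.
weak_storey = k.index(min(k)) + 1
```

Repeating this after each earthquake sequence tracks how the weak storey's stiffness deteriorates towards the critical level at which the abstract locates failure.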
9.
We have postulated a simple model for the spectral tensor Φij(k) of an anisotropic but homogeneous turbulent velocity field. It is a simple generalization of the spectral tensor Φij^iso(k) for isotropic turbulence, and we show how, in the limit of isotropy, Φij(k) becomes equal to Φij^iso(k). Whereas Φij^iso(k) is determined entirely by one scalar function of k = |k|, namely the energy spectrum, we need three independent scalar functions of k to specify Φij(k). We show how it is possible, by means of the three stream-wise velocity-component spectra, to determine the three scalar functions in Φij(k) by solving two uncoupled, ordinary linear differential equations of first and second order. The analytic form of each component spectrum has a set of three parameters: the variance and the integral length scale of the velocity component, and a dimensionless parameter governing the curvature of the spectrum in the transition from the inertial subrange towards lower wavenumbers. When the three sets of parameters are the same, the three spectra correspond to isotropic turbulence and they are all interrelated and related to the energy spectrum. We show how these spectral forms can be obtained in the neutral surface layer and in the convective boundary layer from data reported in the literature. The spectral tensor is used to predict the lateral coherences for all three velocity components, and these predictions are compared with coherences obtained in two experiments, one using three masts at a horizontally homogeneous site in Denmark and one employing two aircraft flying in formation over eastern Colorado. The comparison shows reasonable agreement, although with considerable experimental scatter. (The National Center for Atmospheric Research is sponsored by the National Science Foundation.)
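For reference, the isotropic limit referred to above has the standard incompressible form, determined entirely by the energy spectrum E(k); the abstract's generalization replaces this single scalar function by three:

```latex
\Phi^{\mathrm{iso}}_{ij}(\mathbf{k})
  = \frac{E(k)}{4\pi k^{4}}\left(\delta_{ij}k^{2} - k_i k_j\right),
  \qquad k = |\mathbf{k}| .
```

The projection factor (δij k² − ki kj)/k² enforces incompressibility (ki Φij = 0), which is why a single scalar function suffices in the isotropic case.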
10.
Abstract— Seventy-five orbits of Leonid meteors obtained during the 1998 outburst are presented. Thirty-eight are precise enough to reveal significant dispersion in the orbital elements. Results from the nights of 1998 November 16/17 and 17/18 differ, in agreement with the dominant presence of different dust components. The shower rate profile of 1998 November 16/17 was dominated by a broad component rich in bright meteors; the radiant distribution is compact, the semimajor axis is confined to values close to that of the parent comet, and the distribution of inclination has a central condensation in a narrow range. On the other hand, 1998 November 17/18 was dominated by dust responsible for a narrower secondary peak in the flux curve: the declination of the radiant and the inclination of the orbit are more widely dispersed, and the argument of perihelion, inclination, and perihelion distance are displaced. These data substantiate the hypothesis that trapping in orbital resonances is important for the dynamical evolution of the broad component.
Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号