Similar Documents
1.
Properties and limitations of sequential indicator simulation   (Cited by: 2; self-citations: 0, others: 2)
The sequential indicator algorithm is a widespread geostatistical simulation technique that relies on indicator (co)kriging and is applicable to a wide range of datasets. However, the algorithm comes up against several limitations that are often misunderstood. This work aims to highlight these limitations by examining under what conditions the realizations reproduce the input parameters (indicator means and correlograms) and what happens to the other parameters (other two-point or multiple-point statistics). Several types of random functions are considered, namely: the mosaic model, random sets, models defined by multiple indicators, and isofactorial models. In each case, the conditions for the sequential algorithm to honor the model parameters are sought. Concurrently, the properties of the multivariate distributions are identified and some conceptual impediments are emphasized. In particular, the prior multiple-point statistics are shown to depend on external factors such as the total number of simulated nodes and the number and locations of the samples. As a consequence, common applications such as a flow simulation or a change of support on the realizations may lead to hazardous interpretations.

2.
Stability and accuracy of the Chang algorithm for real-time substructure testing   (Cited by: 7; self-citations: 0, others: 7)
Compared with slow pseudo-dynamic substructure testing, real-time substructure testing has the advantage of realistically reflecting the behavior of velocity-dependent specimens. The step-by-step integration algorithms for real-time substructure testing are usually borrowed from pseudo-dynamic testing; however, current hydraulic servo actuators can hardly implement velocity feedback control, so the specimen velocity cannot attain the value assumed by the original algorithm, and the stability and accuracy of the algorithm therefore change. S.Y. Chang of Taiwan proposed an unconditionally stable explicit pseudo-dynamic algorithm; this paper analyzes the stability and accuracy of that method when applied to real-time substructure testing. It is found that in real-time substructure testing the method changes from unconditionally stable to conditionally stable, and its accuracy also changes.

3.
Data from vertical pendulum instruments nationwide with no obvious interference during 2008-2011 were compiled to analyze the influence of overburden thickness and lithology on instrument precision, together with construction costs. The results show that thicker overburden does not necessarily yield higher instrument precision; in regions with favorable lithology, the overburden need not reach a thickness of 40 m, which saves cost and makes the layout of the precursor observation network more reasonable.

4.
Using the technique for evolutionary power spectra proposed by Nakayama, and with reference to the Kameda formula, an evolutionary spectrum prediction model for a given earthquake magnitude and distance is established based on 80 large-magnitude, near-source acceleration records at rock surface from the ground motion database of the western U.S. A new iteration method is then developed for generating random accelerograms, non-stationary in both amplitude and frequency, that are compatible with a target evolutionary spectrum. The phase spectra of the simulated accelerograms are also non-stationary in the time and frequency domains, since the interaction between amplitude and phase angle is considered during generation. Furthermore, the sign of the phase spectrum increment is identified to accelerate the iteration. With the proposed statistical model for predicting evolutionary power spectra and the new method for generating compatible time histories, artificial random earthquake accelerograms, non-stationary in both amplitude and frequency, can be provided for a given magnitude and distance.

5.
This paper describes the development and numerical verification of a test method to realistically simulate the seismic structural response of full‐scale buildings. The result is a new field testing procedure referred to as the linear shaker seismic simulation (LSSS) testing method. This test method uses a linear shaker system in which a mass mounted on the structure is commanded a specified acceleration time history, which in turn induces inertial forces in the structure. The inertia force of the moving mass is transferred as dynamic force excitation to the structure. The key issues associated with the LSSS method are (1) determining for a given ground motion displacement, xg, a linear shaker motion which induces a structural response that matches as closely as possible the response of the building if it had been excited at its base by xg (i.e. the motion transformation problem) and (2) correcting the linear shaker motion from Step (1) to compensate for control–structure interaction effects associated with the fact that linear shaker systems cannot impart perfectly to the structure the specified forcing functions (i.e. the CSI problem). The motion transformation problem is solved using filters that modify xg both in the frequency domain using building transfer functions and in the time domain using a least squares approximation. The CSI problem, which is most important near the modal frequencies of the structural system, is solved for the example of a linear shaker system that is part of the NEES@UCLA equipment site. Copyright © 2005 John Wiley & Sons, Ltd.

6.
Multivariate simulation is an important longstanding problem in geostatistics. Fitting a model of coregionalization to many variables is intractable and often not permitted; however, the matrix of collocated correlation coefficients is often well informed. Performing a matrix simulation with LU decomposition of the correlation matrix at each step of sequential simulation is implemented in some software. The target correlation matrix is not reproduced because of conditioning to local data and the particular variable ordering in the sequential/LU decomposition. A correction procedure is developed to calculate a modified correlation matrix that leads to reproduction of the target correlation matrix. The theoretical and practical aspects of this correction are developed.
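The LU step this abstract refers to can be illustrated with a minimal sketch: factor a target collocated correlation matrix and use the factor to correlate independent normals. The 3x3 matrix is an illustrative assumption, not data from the paper, and Cholesky is used as the symmetric form of the LU decomposition:

```python
import numpy as np

# Target collocated correlation matrix (assumed for illustration).
rho = np.array([[1.0, 0.7, 0.4],
                [0.7, 1.0, 0.5],
                [0.4, 0.5, 1.0]])

L = np.linalg.cholesky(rho)            # lower triangular, rho = L @ L.T
rng = np.random.default_rng(0)
z = rng.standard_normal((3, 100_000))  # independent standard normals
y = L @ z                              # correlated simulated variables

emp = np.corrcoef(y)                   # empirical correlation matrix
```

In unconditional simulation the empirical correlations of `y` reproduce `rho` up to sampling error; the abstract's point is that conditioning to local data and the variable ordering break this reproduction, motivating the correction procedure.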

7.
A statistical model for estimating the errors of epicenter location and origin time is proposed and applied to the old Fennoscandian seismic network. An average crustal model (Sellevoll and Pomeroy, 1968) and P and S wave residuals as a function of azimuth have been used. The calculations are carried out for different maximum detection ranges. The analysis shows relatively small (1 km) standard errors of epicenter location of strong earthquakes for central Fennoscandia. The largest errors are found in the southern and eastern parts of Fennoscandia.

8.
The aim of this study is to improve classification results of multispectral satellite imagery for supporting flood risk assessment analysis in a catchment area in Cyprus. For this purpose, precipitation and ground spectroradiometric data have been collected and analyzed with innovative statistical analysis methods. Samples of regolith and construction material were collected in situ and examined in the spectroscopy laboratory for their spectral response under a series of different humidity conditions. Moreover, reflectance values were extracted from the same targets using Landsat TM/ETM+ images, for drought and humid time periods, using archived meteorological data. The comparison of the results showed that spectral responses for all the specimens were less correlated in cases of substantial humidity, both in the laboratory and in the satellite images. These results were validated with the application of different classification algorithms (ISODATA, maximum likelihood, object based, maximum entropy) to satellite images acquired during time periods when precipitation phenomena had been recorded.

9.
The central difference method (CDM) that is explicit for pseudo‐dynamic testing is also believed to be explicit for real-time substructure testing (RST). However, to obtain the correct velocity dependent restoring force of the physical substructure being tested, the target velocity is required to be calculated as well as the displacement. The standard CDM provides only explicit target displacement but not explicit target velocity. This paper investigates the required modification of the standard central difference method when applied to RST and analyzes the stability and accuracy of the modified CDM for RST. Copyright © 2005 John Wiley & Sons, Ltd.
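The distinction the abstract draws, explicit target displacement but not explicit target velocity, can be seen in a minimal sketch of the standard CDM for an undamped SDOF oscillator; the mass, stiffness, and step size are assumed values for illustration, not parameters from the paper:

```python
import numpy as np

# Standard CDM for m*u'' + k*u = 0.  The target displacement u[i+1] is
# explicit, but the CDM velocity v[i] = (u[i+1] - u[i-1]) / (2*dt) itself
# requires u[i+1]; this lag is the issue for real-time substructure testing.
m, k = 1.0, 4.0 * np.pi**2                       # assumed: natural period 1 s
dt, n = 0.01, 200
u, v = np.zeros(n), np.zeros(n)
u[0] = 1.0                                       # initial displacement, v0 = 0
a0 = -k * u[0] / m
u_prev = u[0] + 0.5 * dt**2 * a0                 # starting value u[-1]
for i in range(n - 1):
    a = -k * u[i] / m                            # equation of motion
    u_next = 2.0 * u[i] - u_prev + dt**2 * a     # explicit displacement
    v[i] = (u_next - u_prev) / (2.0 * dt)        # velocity needs u[i+1]
    u_prev, u[i + 1] = u[i], u_next
```

For this free-vibration case the response should stay close to cos(2*pi*t), and the velocity at t = 0 should be near zero.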

10.
Phosphate behaviour in natural estuarine systems can be studied by performing field measurements and by undertaking laboratory simulation experiments. Thus, in this paper we describe the use of a dynamic automated estuarine simulator to characterize the geochemical reactivity of phosphate in varying salinity gradients in order to study possible mechanisms of phosphate removal from the dissolved phase (e.g. formation of some kind of apatite) and how changes in pH and salinity values influence this removal. Six laboratory assays, representing various salinity and pH gradients (average pH values between 7 and 8), were carried out. The geochemical equilibrium model MINTEQA2 was employed to characterize removal of phosphate. Among the minerals from which dissolved phosphate can originate, it seems that hydroxyapatite is by far the mineral that shows the greatest saturation indexes in the experiments. Thus, there is evidence that a type of calcium phosphate (hydroxyapatite) is involved in phosphate removal in the assays. Phosphate removal by Ca2+ occurs sharply at salinity values of 1–2, whereas by Fe3+ it is relatively gradual, at least until a salinity value of 7. Copyright © 2006 John Wiley & Sons, Ltd.
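The saturation-index reasoning behind the MINTEQA2 analysis can be sketched for hydroxyapatite, Ca5(PO4)3OH, the phase the abstract identifies as driving phosphate removal. The log Ksp and the log free-ion activities below are illustrative assumptions, not outputs of the model or data from the assays:

```python
# Saturation index SI = log10(IAP / Ksp); SI > 0 means supersaturated,
# i.e. precipitation of the phase is thermodynamically favoured.
log_ksp = -44.3                                    # assumed log Ksp at 25 °C
log_a_ca, log_a_po4, log_a_oh = -3.0, -6.0, -6.0   # assumed log activities

# IAP = {Ca2+}^5 {PO4^3-}^3 {OH-}  (ion activities, not concentrations)
log_iap = 5 * log_a_ca + 3 * log_a_po4 + log_a_oh
si = log_iap - log_ksp                             # -39.0 - (-44.3) = 5.3
```

A positive SI like this is what the authors mean by hydroxyapatite showing "the greatest saturation indexes" among candidate phosphate minerals.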

11.
Ocean Dynamics - At the nautical bottom approach, part of the fluid mud layers can be included in the available depth if they present favorable rheology. As it is difficult to perform in situ...

12.
In experimental studies of structural behaviour, it is often desirable, even necessary, to perform tests on a test structure from its undamaged state, through its damaged states, and finally to failure. Experiments of this type are not often done, primarily because of their prohibitive cost. In this paper, a testing procedure is proposed in which a test structure is allowed to undergo its degradation in real time yet is not physically damaged, thus allowing it to be reused. The underlying concept is that of active structural control. Considerable research and development of active structural control in civil engineering has taken place relative to responsive control of structures against damaging environmental loads. While the use of active control systems to simulate damage in an experimental setting as proposed in this paper appears to be new, much of the existing knowledge base in active structural control is directly applicable. © 1998 John Wiley & Sons, Ltd.

13.
Lei Yao, Liding Chen & Wei Wei, Hydrological Processes 2016, 30(12): 1836–1848
Imperviousness, considered a critical indicator of the hydrologic impacts of urbanization, has gained increasing attention both in research and in practice. However, the effect of imperviousness on rainfall–runoff dynamics has not been fully determined at fine spatiotemporal scales. In this study, 69 drainage subareas (<1 ha) of a typical residential catchment in Beijing were selected to evaluate the hydrologic impacts of imperviousness under a typical storm event with a 3-year return period. Two metrics, total impervious area (TIA) and effective impervious area (EIA), were identified to represent the impervious characteristics of the selected subareas. Three runoff variables, total runoff depth (TR), peak runoff depth (PR), and lag time (LT), were simulated using a validated hydrologic model. Regression analyses were developed to explore the quantitative associations between imperviousness and the runoff variables. Three scenarios were then established to test the applicability of the results under different infiltration conditions. Our results showed that the runoff variables are significantly related to imperviousness. However, the hydrologic performances of TIA and EIA were scale dependent. Specifically, at the finer spatial scales and under heavy rainfall, TIA rather than EIA was found to contribute more to TR and PR. EIA tended to have a greater impact on LT, with a negative relationship. Moreover, the relative significance of TIA and EIA was maintained under the different infiltration conditions. These findings may provide implications for landscape and drainage design in urban areas that help to mitigate runoff risk. Copyright © 2015 John Wiley & Sons, Ltd.

14.
A new method is presented for inferring seabed type from the properties of seabed echoes stimulated by echosounders. The methodology currently used classifies echoes indirectly through feature extraction, usually in conjunction with dimensional reduction techniques such as Principal Components Analysis. The features or principal components derived from them are classified by statistical clustering or other means into groups with similar sets of mathematical properties. However, a simpler technique is to directly cluster the echoes themselves. A priori modelling or curve fitting, feature extraction, and dimensional reduction are not required, simplifying the processing and analysis chain, and eliminating data distortions. In effect the echoes are treated as geometrical entities, which are classified by their shapes and positions. Direct clustering places the analysis focus on the actual echoes, not on proxy parameters or mathematical techniques. This allows simple and direct evaluations of results, without the need to work in abstract mathematical spaces of unknown relation to echo properties. The direct clustering method for seabed echoes is demonstrated with echosounder data obtained in Balls Head Bay, Sydney Harbour, Australia, an area with mud, sand, and shell beds.
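The "direct clustering" idea, grouping raw echo waveforms by shape and position with no feature extraction or dimensional reduction, can be sketched with plain k-means on synthetic echoes. The two echo shapes, the noise level, and k = 2 are assumptions for illustration, not sonar data from the study:

```python
import numpy as np

# Each echo is a point in R^64; k-means clusters the waveforms directly.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 64)
hard = np.exp(-30.0 * (t - 0.2) ** 2)         # sharp echo (sand/shell-like)
soft = 0.5 * np.exp(-8.0 * (t - 0.35) ** 2)   # broad, weak echo (mud-like)
echoes = np.vstack([hard + 0.05 * rng.standard_normal((20, 64)),
                    soft + 0.05 * rng.standard_normal((20, 64))])

centroids = echoes[[0, -1]]                   # deterministic initialisation
for _ in range(20):                           # plain Lloyd iterations
    d = np.linalg.norm(echoes[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    centroids = np.array([echoes[labels == j].mean(axis=0) for j in range(2)])
```

With well-separated echo shapes the clusters recover the two seabed types without any intermediate parameterization, which is the simplification the abstract argues for.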

15.
Bulletin of Earthquake Engineering - Soil-structure interaction (SSI) can potentially compromise structures that are subjected to seismic excitation. In recent years, real-time hybrid testing...

16.
Microbiological degradation of perchloroethylene (PCE) under anaerobic conditions follows a series of chain reactions, in which, sequentially, trichloroethylene (TCE), cis‐dichloroethylene (c‐DCE), vinylchloride (VC) and ethene are generated. First‐order degradation rate constants, partitioning coefficients and mass exchange rates for PCE, TCE, c‐DCE and VC were compiled from the literature. The parameters were used in a case study of pump‐and‐treat remediation of a PCE‐contaminated site near Tilburg, The Netherlands. Transport, non‐equilibrium sorption and biodegradation chain processes at the site were simulated using the CHAIN_2D code without further calibration. The modelled PCE compared reasonably well with observed PCE concentrations in the pumped water. We also performed a scenario analysis by applying several increased reductive dechlorination rates, reflecting different degradation conditions (e.g. addition of yeast extract and citrate). The scenario analysis predicted considerably higher concentrations of the degradation products as a result of enhanced reductive dechlorination of PCE. The predicted levels of the very toxic compound VC were now an order of magnitude above the maximum permissible concentration levels. Copyright © 1999 John Wiley & Sons, Ltd.
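The sequential first-order chain PCE → TCE → c-DCE → VC → ethene can be sketched with an explicit-Euler integration of the batch kinetics. The rate constants below are illustrative assumptions, not the literature values the authors compiled, and transport and sorption are omitted:

```python
# First-order dechlorination chain: each species decays into the next.
# Rate constants (1/day) are ASSUMED for illustration only.
k = {'PCE': 0.05, 'TCE': 0.03, 'cDCE': 0.02, 'VC': 0.01}
c = {'PCE': 1.0, 'TCE': 0.0, 'cDCE': 0.0, 'VC': 0.0, 'ethene': 0.0}
chain = ['PCE', 'TCE', 'cDCE', 'VC', 'ethene']
dt, days = 0.1, 200.0

for _ in range(int(days / dt)):
    prod = 0.0                          # mass flux fed in from the parent
    for s in chain:
        loss = k.get(s, 0.0) * c[s]     # ethene has no decay (k = 0)
        c[s] += dt * (prod - loss)      # explicit Euler step
        prod = loss                     # this species' loss feeds the next

total = sum(c.values())                 # the scheme conserves total mass
```

Slowing the final VC step relative to the upstream steps makes VC accumulate, which mirrors the abstract's finding that enhanced dechlorination of PCE drives VC well above permissible levels.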

17.
For single-phase flow through a network model of a porous medium, we report (1) solutions of the Navier–Stokes equation for the flow, (2) micro-particle imaging velocimetry (PIV) measurements of local flow velocity vectors in the "pore throats" and "pore bodies," and (3) comparisons of the computed and measured velocity vectors. A "two-dimensional" network of cylindrical pores and parallelepiped connecting throats was constructed and used for the measurements. All pore bodies had the same dimensions, but three different (square cross-section) pore-throat sizes were randomly distributed throughout the network. An unstructured computational grid for flow through an identical network was developed and used to compute the local pressure gradients and flow vectors for several different (macroscopic) flow rates. Numerical solution results were compared with the experimental data, and good agreement was found. Cross-over from Darcy flow to inertial flow was observed in the computational results, and the permeability and inertia coefficients of the network were estimated. The development of inertial flow was seen as a "two-step" process: (1) recirculation zones appeared in more and more pore bodies as the flow rate was increased, and (2) the strengths of individual recirculation zones increased with flow rate. Because each pore-throat and pore-body dimension is known, in this approach an experimental (and/or computed) local Reynolds number is known for every location in the porous medium at which the velocity has been measured (and/or computed).

18.
The Bay of Biscay, located in the Northeast Atlantic Ocean, is exposed to energetic waves coming from the open ocean that have crucial effects on the coast. Knowledge of the wave climate and trends in this region is critical to better understand the last decades' evolution of coastal hazards and morphology and to anticipate their potential future changes. This study aims to characterize the long-term trends of the present wave climate over the second half of the twentieth century in the Bay of Biscay through a robust and homogeneous intercomparison of five wave datasets (Corrected ERA-40 (C-ERA-40), ECMWF Reanalysis Interim (ERA-Interim), Bay Of Biscay Wave Atlas (BOBWA-10kH), ANEMOC, and Bertin and Dodet (2010)). The comparison of the quality of the datasets against offshore and nearshore measurements reveals that at offshore locations, global reanalyses slightly underestimate wave heights, while regional hindcasts overestimate wave heights, especially for the highest quantiles. At coastal locations, BOBWA-10kH is the dataset that compares best with observations. Concerning long time-scale features, the comparison highlights that the main significant trends are similarly present in the five datasets, especially during summer, for which there is an increase of significant wave heights and mean wave periods (up to +15 cm and +0.6 s over the period 1970–2001) as well as a southerly shift of wave directions (around −0.4° year⁻¹). Over the same period, an increase of high quantiles of wave heights during the autumn season (around 3 cm year⁻¹ for the 90th quantile of significant wave height (SWH90)) is also apparent. During winter, significant trends are much lower than during summer and autumn, despite a slight increase of wave heights and periods during 1958–2001. These trends can be related to modifications in the wave-type occurrence.
Finally, the trends common to the five datasets are discussed by analyzing the similarities with centennial trends issued from longer time-scale studies and exploring the various factors that could explain them.

19.
A shaking table testing program was undertaken with the main objective of providing basic information for the calibration of analytical models and procedures for determining the seismic response of typical 16th–18th century stone masonry temples in Mexico. A typical colonial temple was chosen as a prototype. A model at 1:8 geometric scale was built with the same materials and techniques as the prototype and was subjected to horizontal and vertical motions of increasing intensities. The maximum applied intensity corresponded to a base shear force of about 58% of the total building weight. The vertical component of the base motion significantly affected the response and increased the damage of the model. Damage patterns were similar to those observed in actual temples. Damping coefficients of the response ranged from about 7% for the undamaged state to about 14% for severe damage. The main features of the measured response were compared with those computed using a nonlinear finite element model; for the latter, a constitutive law developed for plain concrete was adopted for reproducing cracking and crushing of the irregular stone masonry. Observed damage patterns as well as the measured response could be reproduced with reasonable accuracy by the analytical simulation, except for some local vibrations, such as those at the top of the bell towers. It can be concluded that the simple constitutive law adopted for the simulation was able to reproduce the experimental response with a reasonable level of accuracy. Copyright © 2011 John Wiley & Sons, Ltd.

20.
Different approaches to estimating the parameters of the physically based model SWAP, which describes heat and water transfer processes in the soil–vegetation (snow) cover–atmosphere system, are examined. In particular, two methods of a priori estimation of parameter values and two variants of their calibration are discussed. The parameter sets obtained by the different methods were used to simulate the runoff from 12 experimental catchments in the eastern USA. The calculations were conducted for a 39-year period (1960–1998) with a 3-hour step. The results of the calculations were compared with each other and with measured river runoff values in order to identify the parameter set that is optimal for runoff evaluation. A strategy is proposed for a priori parameter estimation in the case of basins where observational data are too poor to enable parameter calibration.
