Similar Documents
20 similar documents found (search time: 31 ms)
1.
Which rule of mixture is best for predicting the overall elastic properties of polyphase rocks from the elastic properties and volume fractions of their constituents? To address this question, we sintered forsterite-enstatite polycrystalline aggregates with varied forsterite volume fractions (0, 0.2, 0.4, 0.5, 0.6, 0.8, and 1.0). Elastic properties (shear, bulk, and Young's moduli) of these synthesized composites were measured as a function of pressure up to 3.0 GPa in a liquid-medium piston-cylinder apparatus using a high-precision ultrasonic interferometric technique. The experimental data are described much better by the shear-lag model than by the commonly used simple models such as the Voigt, Reuss, and Hill averages, Hashin-Shtrikman bounds, Ravichandran bounds, Halpin-Tsai equations, and Paul's calculations. We attribute this to the fact that the elastic interaction and stress transfer between phases are neglected in all the models except the shear-lag model. In particular, …
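As a point of reference for the simple mixing rules named above, the sketch below evaluates the Voigt, Reuss, and Voigt-Reuss-Hill estimates of the shear modulus for a two-phase forsterite-enstatite mixture. The end-member moduli are rough illustrative values, not the measured data of this study.

```python
# Illustrative Voigt / Reuss / Hill averages for a two-phase aggregate.
# The end-member shear moduli (GPa) are rough placeholder values.
def voigt(f1, m1, m2):
    """Arithmetic (iso-strain) average."""
    return f1 * m1 + (1.0 - f1) * m2

def reuss(f1, m1, m2):
    """Harmonic (iso-stress) average."""
    return 1.0 / (f1 / m1 + (1.0 - f1) / m2)

def hill(f1, m1, m2):
    """Voigt-Reuss-Hill estimate: midpoint of the two bounds."""
    return 0.5 * (voigt(f1, m1, m2) + reuss(f1, m1, m2))

G_fo, G_en = 81.0, 77.0   # assumed shear moduli of forsterite and enstatite, GPa
for f_fo in (0.0, 0.2, 0.4, 0.5, 0.6, 0.8, 1.0):
    print(f"f_fo={f_fo:.1f}  Voigt={voigt(f_fo, G_fo, G_en):.1f}  "
          f"Reuss={reuss(f_fo, G_fo, G_en):.1f}  VRH={hill(f_fo, G_fo, G_en):.1f} GPa")
```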

2.
Tight oil siltstones are rocks with a complex pore-scale structure and are characterized by low porosity and low permeability at the macroscale. The production of tight oil siltstone reservoirs can be increased by hydraulic fracturing. For optimal fracturing results, it is desirable to map the ability to fracture from seismic data before fracturing. Brittleness is currently thought to be a key parameter for evaluating the ability to fracture. To link seismic information to the brittleness distribution, a rock physics model is required. Currently, there is no commonly accepted rock physics model for tight oil siltstones. Based on the observed correlation between porosity and mineral composition and the known microstructure of tight oil siltstone in the Daqing oilfield of the Songliao Basin, we develop a rock physics model by combining the Voigt–Reuss–Hill average, the self-consistent approximation and differential effective medium theory. This rock physics model allows us to explore the dependence of brittleness on porosity, mineral composition, microcrack volume fraction and microcrack aspect ratio. The results show that, as quartz content increases and feldspar content decreases, Young's modulus tends to increase and Poisson's ratio to decrease. This is taken as a signature of higher brittleness. Using well log data and seismic inversion results, we demonstrate the versatility of the rock physics template for brittleness prediction.
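Because the brittleness signature discussed here is read from Young's modulus and Poisson's ratio, the following minimal sketch shows the standard isotropic conversions from the bulk and shear moduli that an effective-medium workflow would produce. The input moduli are placeholders chosen only to illustrate the quartz-rich versus feldspar-rich trend, not values from the Daqing data.

```python
# Convert bulk modulus K and shear modulus G (GPa) of an isotropic
# effective medium into Young's modulus E and Poisson's ratio nu.
def young_poisson(K, G):
    E = 9.0 * K * G / (3.0 * K + G)
    nu = (3.0 * K - 2.0 * G) / (2.0 * (3.0 * K + G))
    return E, nu

# Placeholder rock moduli: a stiffer (more quartz-rich) and a softer
# (more feldspar-rich) case, purely for illustration.
for label, K, G in [("quartz-rich", 37.0, 31.0), ("feldspar-rich", 44.0, 24.0)]:
    E, nu = young_poisson(K, G)
    print(f"{label:14s} E = {E:5.1f} GPa, nu = {nu:5.3f}")
```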

3.
Serpentinization of the mantle wedge is an important process that influences the seismic and mechanical properties in subduction zones. Seismic detection of serpentines relies on knowledge of the elastic properties of serpentinites, which thus far has not been possible in the absence of single-crystal elastic properties of antigorite. The elastic constants of antigorite, the dominant serpentine at high pressure in subduction zones, were measured using Brillouin spectroscopy under ambient conditions. In addition, antigorite lattice preferred orientations (LPO) were determined using an electron back-scattering diffraction (EBSD) technique. Isotropic aggregate velocities are sufficiently lower than those of peridotites to allow seismic detection of serpentinites from tomography. The isotropic VP/VS ratio is 1.76 in the Voigt–Reuss–Hill average, not very different from that of 1.73 in peridotite, but may vary between 1.70 and 1.86 between the Voigt and Reuss bounds. Antigorite and deformed serpentinites have a very high seismic anisotropy and remarkably low velocities along particular directions. VP varies between 8.9 km s⁻¹ and 5.6 km s⁻¹ (46% anisotropy) for the single crystal and between 8.3 km s⁻¹ and 5.8 km s⁻¹ (37%) for the aggregate, while VS varies between 5.1 km s⁻¹ and 2.5 km s⁻¹ (66%) and between 4.7 km s⁻¹ and 2.9 km s⁻¹ (50%), respectively. The VP/VS ratio and shear wave splitting also vary with orientation, between 1.2 and 3.4 for the single crystal and between 1.3 and 2.8 for the aggregate. Thus, deformed serpentinites can present seismic velocities similar to peridotites for wave propagation parallel to the foliation, or lower than crustal rocks for wave propagation perpendicular to the foliation. These properties can be used to detect serpentinite, quantify the amount of serpentinization, and discuss relationships between seismic anisotropy and deformation in the mantle wedge. Regions of high VP/VS ratios and extremely low velocities in the mantle wedge of subduction zones (down to about 6 and 3 km s⁻¹ for VP and VS, respectively) are difficult to explain without strong preferred orientation of serpentine. Local variations of anisotropy may result from kilometer-scale folding of serpentinites. Shear wave splitting of up to 1–1.5 s can be explained with moderately thick (10–20 km) serpentinite bodies.
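For readers who want to check the anisotropy figures, a common convention is A = 200(Vmax − Vmin)/(Vmax + Vmin); the sketch below applies it to the velocity extremes quoted above and reproduces the stated percentages to within a few percentage points (the residual differences presumably reflect the exact averaging convention used by the authors).

```python
# Percent anisotropy from maximum and minimum velocities,
# A = 200 * (Vmax - Vmin) / (Vmax + Vmin).
def anisotropy(vmax, vmin):
    return 200.0 * (vmax - vmin) / (vmax + vmin)

cases = {
    "single-crystal VP": (8.9, 5.6),   # quoted as ~46%
    "aggregate VP":      (8.3, 5.8),   # quoted as ~37%
    "single-crystal VS": (5.1, 2.5),   # quoted as ~66%
    "aggregate VS":      (4.7, 2.9),   # quoted as ~50%
}
for name, (vmax, vmin) in cases.items():
    print(f"{name:18s} {anisotropy(vmax, vmin):4.1f} %")
```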

4.
The shear-wave velocity is a very important parameter in oil and gas seismic exploration, and is vital in prestack elastic-parameter inversion and seismic attribute analysis. However, shear-velocity logging is seldom carried out because it is expensive. This paper presents a simple method for predicting S-wave velocity that covers the basic factors influencing seismic wave propagation velocity in rocks. The elastic modulus of a rock is expressed here as a weighted arithmetic average of the Voigt and Reuss bounds, where the weighting factor, w, is a measure of the geometric details of the pore space and mineral grains. The S-wave velocity can be estimated from w, which is derived from the P-wave modulus. The method is applied to well-logging data from a carbonate reservoir in the Sichuan Basin, and the predicted S-wave velocities agree well with the measured S-wave velocities.
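A minimal sketch of the idea described above, under assumed mineral and fluid properties: calibrate the weight w so that the weighted Voigt-Reuss average reproduces the measured P-wave modulus, then reuse the same w for the shear modulus to predict VS. The calibration details and input values are placeholders and may differ from the paper's actual workflow.

```python
import math

# Weighted Voigt-Reuss sketch: solve for w from the measured P-wave
# modulus, then apply the same w to the shear modulus to predict VS.
def voigt(f_solid, m_solid, m_fluid):
    return f_solid * m_solid + (1.0 - f_solid) * m_fluid

def reuss(f_solid, m_solid, m_fluid):
    return 1.0 / (f_solid / m_solid + (1.0 - f_solid) / m_fluid)

def predict_vs(vp, rho, phi, K_min, G_min, K_fl):
    """vp in km/s, rho in g/cc, moduli in GPa; returns (w, vs in km/s)."""
    M_meas = rho * vp ** 2                                 # measured P-wave modulus (GPa)
    M_solid = K_min + 4.0 * G_min / 3.0
    M_v = voigt(1 - phi, M_solid, K_fl)                    # Voigt bound on M
    M_r = reuss(1 - phi, M_solid, K_fl)                    # Reuss bound on M
    w = (M_meas - M_r) / (M_v - M_r)                       # weight implied by the data
    G = w * (1 - phi) * G_min                              # fluid shear modulus = 0, so the Reuss term vanishes
    return w, math.sqrt(G / rho)

# Placeholder carbonate-like inputs: calcite matrix, brine-filled pores.
w, vs = predict_vs(vp=5.5, rho=2.6, phi=0.10, K_min=76.8, G_min=32.0, K_fl=2.5)
print(f"w = {w:.2f}, predicted VS = {vs:.2f} km/s")
```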

5.
Low frequencies are necessary in seismic data for proper acoustic impedance imaging and for petrophysical interpretation. Without lower frequencies, images can be distorted, leading to incorrect reservoir interpretation and petrophysical predictions. As part of the Foinaven Active Reservoir Management (FARM) project, a towed-streamer survey and an Ocean Bottom Hydrophone (OBH) survey were shot in both 1995 and 1998. The OBH surveys contain lower frequencies than the streamer surveys, providing a unique opportunity to study the effects that low frequencies have on both the acoustic impedance image and petrophysical time-lapse predictions. Artefacts that could easily have been interpreted as high-resolution features in the streamer-data impedance volumes can be distinguished by comparison with the impedance volumes created from the OBH surveys containing lower frequencies. In order to obtain results from the impedance volumes, impedance must be related to saturation. The mixing of exsolved gas, oil and water phases involves using the Reuss (uniform) or Voigt (patchy approximation) mixing laws. The Voigt average is easily misused by assuming that the end-points correspond to 0% and 100% gas saturation. This implies that the patches are either at 0% or 100% gas saturation, which is never the case. Here, the distribution of gas as it comes out of solution is assumed to be uniform until the gas saturation reaches a sufficiently high value (the critical gas saturation) to allow gas to flow. Therefore, at low gas saturations the distribution is uniform, but at saturations above critical it is patchy, with patches that range from the critical gas saturation to the highest gas saturation possible (1 minus residual oil and irreducible water saturation).
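The contrast between the uniform (Reuss/Wood) and patchy (Voigt) fluid-mixing assumptions discussed above can be made concrete as follows; the fluid moduli are illustrative values only, and in the workflow described here the Voigt end-points would be the critical and maximum attainable gas saturations rather than 0% and 100%.

```python
# Effective fluid bulk modulus for a gas/oil/water mixture under the two
# end-member mixing laws.  Moduli (GPa) are illustrative only.
K_gas, K_oil, K_water = 0.04, 1.0, 2.4

def reuss_fluid(s_gas, s_oil, s_water):
    """Uniform (fine-scale) mixing: Wood/Reuss harmonic average."""
    return 1.0 / (s_gas / K_gas + s_oil / K_oil + s_water / K_water)

def voigt_fluid(s_gas, s_oil, s_water):
    """Patchy-mixing upper bound: Voigt arithmetic average."""
    return s_gas * K_gas + s_oil * K_oil + s_water * K_water

for s_gas in (0.0, 0.05, 0.2, 0.5):
    s_oil = 0.7 * (1 - s_gas)          # split the remainder 70/30 oil/water
    s_water = 0.3 * (1 - s_gas)
    print(f"Sg={s_gas:.2f}  K_Reuss={reuss_fluid(s_gas, s_oil, s_water):.3f}  "
          f"K_Voigt={voigt_fluid(s_gas, s_oil, s_water):.3f} GPa")
```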

6.
Property inversion of gravity data suffers from severe non-uniqueness, and an effective way to reduce it is to add constraints. Boundary-recognition, depth-estimation, and imaging methods can provide geometric information about geological bodies, such as their horizontal positions and depth ranges. In this paper, geometric information about the sources mined from the data themselves is used to constrain the property inversion and thereby reduce its non-uniqueness. By introducing a depth-weighting function based on the depth information and a horizontal-gradient weighting function based on the horizontal positions, optimized constraints are constructed that effectively improve the lateral and vertical resolution of the inversion results. Gravity-gradient data contain more information on the spatial characteristics of geological bodies, so the optimized constrained inversion is also applied to the inversion of full-tensor data; model tests show that the inversion results of the proposed method agree more closely with the theoretical model. Finally, application to measured airborne gravity-gradient data over the Vinton salt dome, Louisiana, USA shows that the method yields more reliable inversion results when other geophysical and geological data are insufficient.
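The depth-weighting idea mentioned above is commonly implemented with a Li-and-Oldenburg-style function w(z) = (z + z0)^(-β/2); the sketch below pairs such a weight with a generic horizontal weight that favours cells inside a laterally detected source extent. All parameter values and the exact form of the horizontal weight are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

# Generic constraint weights of the kind described above (illustrative).
# Depth weighting counteracts the decay of the gravity kernel with depth;
# a horizontal weight can favour cells near the lateral source position
# inferred from edge detection.  All parameter values are assumptions.
def depth_weight(z, z0=50.0, beta=2.0):
    """Li & Oldenburg-style depth weighting, w = (z + z0)^(-beta/2)."""
    return (z + z0) ** (-beta / 2.0)

def horizontal_weight(x, x_left, x_right, taper=200.0):
    """Smoothly down-weight cells outside the detected horizontal extent."""
    inside = (x >= x_left) & (x <= x_right)
    dist = np.minimum(np.abs(x - x_left), np.abs(x - x_right))
    return np.where(inside, 1.0, np.exp(-dist / taper))

z = np.array([100.0, 500.0, 1000.0])      # cell depths (m)
x = np.array([-800.0, 0.0, 800.0])        # cell horizontal positions (m)
print(depth_weight(z))
print(horizontal_weight(x, x_left=-500.0, x_right=500.0))
```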

7.
Advances in Water Resources, 2004, 27(10): 1017–1032
This paper presents a numerical solution for the effective conductivity of a periodic binary medium with cuboid inclusions located on an octahedral lattice. The problem is defined by five dimensionless geometric parameters and one dimensionless conductivity contrast parameter. The effective conductivity is determined by considering the flow through the “elementary flow domain” (EFD), which is an octant of the unitary domain of the periodic medium. We derive practical bounds of interest for the six-dimensional parameter space of the EFD and numerically compute solutions at regular intervals throughout the entire bounded parameter space. A continuous solution for the effective conductivity within the limits of the simulated parameter space is then obtained via interpolation of the numerical results. Comparison to effective conductivities derived for random heterogeneous media demonstrates similarities and differences in the behavior of the effective conductivity in regular periodic (low entropy) vs. random (high entropy) media. The results define the low-entropy bounds of effective conductivity in natural media, which are neither completely random nor completely periodic, over a large range of structural geometries. For aniso-probable inclusion spacing, the absolute bounds of Keff for isotropic inclusions are the Wiener bounds, not the Hashin-Shtrikman bounds. For isotropic inclusions under isoprobable conditions well below the percolation threshold, the results are in agreement with the self-consistent approach. For anisotropic cuboid inclusions, or at relatively close spacing in at least one direction (p > 0.2) (aniso-probable conditions), the effective conductivity of the periodic medium is significantly different from that found in anisotropic random binary or Gaussian media.
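For context on the bounds referred to above, the sketch below evaluates the Wiener (arithmetic/harmonic) and Hashin-Shtrikman bounds on the effective conductivity of a two-phase isotropic medium; the conductivity contrast and volume fraction are arbitrary example values.

```python
# Classical bounds on the effective conductivity of a two-phase medium.
# k1 <= k2 are the phase conductivities; f1 is the volume fraction of phase 1.
def wiener_bounds(f1, k1, k2):
    f2 = 1.0 - f1
    lower = 1.0 / (f1 / k1 + f2 / k2)   # harmonic mean (series arrangement)
    upper = f1 * k1 + f2 * k2           # arithmetic mean (parallel arrangement)
    return lower, upper

def hashin_shtrikman_bounds(f1, k1, k2):
    """Isotropic 3-D HS bounds, with k1 the low and k2 the high conductivity."""
    f2 = 1.0 - f1
    lower = k1 + f2 / (1.0 / (k2 - k1) + f1 / (3.0 * k1))
    upper = k2 + f1 / (1.0 / (k1 - k2) + f2 / (3.0 * k2))
    return lower, upper

# Example: 30% inclusions that are 100x more conductive than the matrix.
print(wiener_bounds(f1=0.7, k1=1.0, k2=100.0))
print(hashin_shtrikman_bounds(f1=0.7, k1=1.0, k2=100.0))
```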

8.
Reliability and risk assessment of lifeline systems call for efficient methods that integrate hazard and interdependencies. Such methods are computationally challenged when the probabilistic response of systems is tied to multiple events, as performance quantification requires a large catalog of ground motions. Available methods to address this issue use catalog reductions and importance sampling. However, besides comparisons against baseline Monte Carlo trials in select cases, there is no guarantee that such methods will perform or scale well in practice. This paper proposes a new efficient method for reliability assessment of interdependent lifeline systems, termed RAILS, that considers systemic performance and is particularly effective when dealing with large catalogs of events. RAILS uses the state-space partition method to estimate systemic reliability with theoretical bounds and, for the first time, supports cyclic interdependencies among lifeline systems. Recycling computations across an entire seismic catalog with RAILS considerably reduces the number of system performance evaluations in seismic performance studies. Also, when the performance estimate bounds are not tight, we adopt an importance and stratified sampling method that in our computational experiments is several orders of magnitude more efficient than crude Monte Carlo. We assess the efficiency of RAILS using synthetic networks and illustrate its application to quantify the seismic risk of realistic yet streamlined systems hypothetically located in the San Francisco Bay Region.

9.
This paper presents the evaluation of two approximate methods recently proposed in the literature to estimate residual (permanent) drift demands at the end of earthquake excitation for the seismic assessment of buildings. Both methods require an estimate of the peak (maximum) interstory drift demand and the corresponding drift demand at significant yielding of the building. Additionally, an approximate method is proposed as part of this study. The introduced method follows a coefficient-based approach similar to the Coefficient Method included in several US documents. For evaluating the approximate methods, five moment-resisting steel framed buildings with different numbers of stories were analyzed under four sets of earthquake ground motions. Quantification of the accuracy of the approximate methods in estimating residual drift demands, with respect to results from nonlinear time-history analyses, was performed through error measures computed for each building and each set of earthquake ground motions. Results show that the mean standard error tends to increase as the seismic hazard level increases. Between the two methods, the method introduced by Erochko et al. seems more effective in predicting residual drift demands than that proposed in the FEMA P-58 recommendations in the USA. It is demonstrated that including additional sources of stiffness and strength in the modeling approach constrains the amplitude of residual drift demands. As a beneficial consequence, the accuracy of both approximate methods in predicting residual drift demands is significantly improved (i.e., the mean standard error decreases). The introduced method also provides accuracy similar to that of the approximate methods available in the literature. Copyright © 2015 John Wiley & Sons, Ltd.

10.
A method is developed for determining the depth to the centroid (the geometric center) of 'semi-compact' sources. The method, called the anomaly attenuation rate (AAR) method, involves computing radial averages of AARs with increasing distances from a range of assumed source centers. For well-isolated magnetic anomalies from 'semi-compact' sources, the theoretical AARs range from 2 (close to the sources) to 3 (in the far-field region); the corresponding theoretical range of AARs for gravity anomalies is 1 to 2. When the estimated source centroid is incorrect, the AARs either exceed or fall short of the theoretical values. The levelling-off of the far-field AARs near their theoretical maximum values indicates the upper (deeper) bound of the centroid location. Similarly, near-field AARs lower than the theoretical minimum indicate the lower (shallower) bound of the centroid location. It is not always possible to determine usable upper and lower bounds of the centroids because the method depends on the characteristics of the sources/anomalies and the noise level of the data. For the environmental magnetic examples considered in this study, the determined deeper bounds were within 4% of the true centroid-to-observation distance. For the case of the gravity anomaly from the Bloomfield Pluton, Missouri, USA, determination of only the shallower bound of the centroid location (7 km) was possible. This estimate agrees closely with the centroid of a previously determined three-dimensional model of the Bloomfield Pluton. For satellite magnetic anomalies, the method is appropriate only for high-amplitude, near-circular anomalies due to the inherently low signal-to-noise ratio of satellite magnetic anomalies. Model studies indicate that the AAR method is able to place depths within ±20–30 km of actual center locations from a 400-km observation altitude. Thus, the method may be able to discriminate between upper crustal, lower crustal, and mantle magnetic sources. The results from the prominent Kentucky anomaly are relatively well resolved (centroid depth 30 km below the Earth's surface). For the Kiruna Magsat anomaly, the deleterious effects from neighboring anomalies make a determination difficult (the possible depth could be between 20 and 30 km). The centroid depths are deeper for the Kursk anomaly (40–50 km). These depths may indicate that magnetic anomalies from the near-surface Kursk iron formations (a known contributor) and deep crustal magnetic sources could combine to form the Kursk Magsat anomaly.
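The attenuation rate in the AAR method is essentially the negative slope of log(anomaly amplitude) versus log(distance) measured from an assumed centroid. The minimal sketch below estimates that slope for a synthetic dipole-like decay; it uses no field data from the study and the numbers are purely illustrative.

```python
import numpy as np

# Estimate an anomaly attenuation rate n from amplitudes A(r) ~ 1/r^n
# as the negative slope of log A versus log r (least-squares fit).
def attenuation_rate(r, amplitude):
    slope, _ = np.polyfit(np.log(r), np.log(amplitude), 1)
    return -slope

# Synthetic far-field dipole-like fall-off (n = 3), purely illustrative.
r = np.linspace(20.0, 100.0, 30)       # distances from the assumed centroid
amplitude = 1.0e6 / r ** 3
print(f"estimated AAR = {attenuation_rate(r, amplitude):.2f}")  # ~3 expected
```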

11.
High-speed computation and efficient storage techniques for 3D inversion of gravity and magnetic data with genetic algorithms
Dividing the subsurface source region into a regular grid of many small rectangular prism cells, determining the physical-property variations of these cells by inversion, and thereby delineating the distribution of the sources has gradually become an important direction in gravity and magnetic inversion, especially 3D inversion, and nonlinear techniques such as genetic algorithms are gradually becoming the trend for this type of inversion. This paper points out that applying genetic algorithms to such inversions implies an extraordinarily heavy computational load when the data volume is large, which has become the bottleneck preventing this type of inversion from realizing its full potential. The paper then proposes a targeted computational strategy that separates and stores the geometric framework, together with a distinctive equivalent-compression storage technique for that framework, which fundamentally increases the speed of the nonlinear inversion and lays a solid foundation for its effective application.
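The computational strategy described above, separating the geometric framework (the sensitivity kernel of the fixed cell grid) from the physical properties so that it is computed and stored once, can be sketched as follows: each genetic-algorithm fitness evaluation then reduces to a matrix-vector product. The kernel entries, data, and GA details below are placeholders, and the paper's equivalent-compression storage of the framework is not reproduced.

```python
import numpy as np

# Sketch: separate the geometry from the physical properties.  The kernel
# G (n_data x n_cells) depends only on the fixed cell grid and station
# layout, so it is computed once and reused for every candidate model the
# genetic algorithm proposes.  Here G is random for illustration; a real
# gravity kernel would come from prism forward-modelling formulas.
rng = np.random.default_rng(0)
n_data, n_cells = 200, 1000
G = rng.normal(size=(n_data, n_cells))          # precomputed once and stored
d_obs = G @ rng.uniform(0.0, 1.0, n_cells)      # synthetic "observed" data

def fitness(model):
    """Misfit of one candidate property model; only a mat-vec per call."""
    residual = G @ model - d_obs
    return float(residual @ residual)

# A toy GA step: evaluate a population of candidate property models.
population = rng.uniform(0.0, 1.0, size=(50, n_cells))
scores = np.array([fitness(m) for m in population])
print("best misfit in this generation:", scores.min())
```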

12.
The elastic moduli of four sandstone samples are measured at seismic (2–2000 Hz) and ultrasonic (1 MHz) frequencies under water- and glycerin-saturated conditions. We observe that the high-permeability samples under partially water-saturated conditions and the low-permeability samples under partially glycerin-saturated conditions show little dispersion at low frequencies (2–2000 Hz). However, the high-permeability samples under partially glycerin-saturated conditions and the low-permeability samples under partially water-saturated conditions produce strong dispersion in the same frequency range (2–2000 Hz). This suggests that fluid mobility largely controls the pore-fluid movement and pore pressure in a porous medium. High fluid mobility facilitates pore-pressure equilibration either between pores or between heterogeneous regions, resulting in a low-frequency domain where the Gassmann equations are valid. In contrast, low fluid mobility produces pressure gradients even at seismic frequencies, and thus dispersion. The latter shows a systematic shift to lower frequencies with decreasing mobility. The sandstone samples showed variations in Vp as a function of fluid saturation. We explore the applicability of the Gassmann model to sandstone rocks. Two theoretical bounds for the P-velocity are known, the Gassmann–Wood and Gassmann–Hill limits. The observations confirm the effect of wave-induced flow on the transition from the Gassmann–Wood to the Gassmann–Hill limit. With decreasing fluid mobility, the P-velocity at 2–2000 Hz moves from the Gassmann–Wood limit to the Gassmann–Hill limit. In addition, we investigate the mechanisms responsible for this transition.
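The two limits referred to above can be written down compactly: in the Gassmann-Wood limit the pore fluids are first mixed with Wood's (Reuss) equation and Gassmann's equation is applied once, whereas in the Gassmann-Hill limit Gassmann's equation is applied patch by patch and the saturated P-wave moduli are then combined with Hill's equation. The sketch below uses placeholder rock and fluid properties, not those of the four sandstone samples.

```python
import math

# Gassmann-Wood vs Gassmann-Hill P-velocity limits for partial saturation.
# All moduli in GPa, densities in g/cc; the input values are placeholders.
def gassmann_ksat(k_dry, k_min, k_fl, phi):
    b = 1.0 - k_dry / k_min
    return k_dry + b * b / (phi / k_fl + (1 - phi) / k_min - k_dry / k_min ** 2)

def vp(k_sat, g, rho):
    return math.sqrt((k_sat + 4.0 * g / 3.0) / rho)

k_dry, g, k_min, phi = 12.0, 10.0, 37.0, 0.20
k_water, k_gas = 2.4, 0.04
rho_dry, rho_water, rho_gas = 2.20, 1.00, 0.10
sw = 0.8                                            # water saturation
rho = rho_dry + phi * (sw * rho_water + (1 - sw) * rho_gas)

# Gassmann-Wood: mix the fluids first (Reuss), then apply Gassmann once.
k_fl_wood = 1.0 / (sw / k_water + (1 - sw) / k_gas)
vp_gw = vp(gassmann_ksat(k_dry, k_min, k_fl_wood, phi), g, rho)

# Gassmann-Hill: apply Gassmann per fluid patch, then Hill's equation on
# the P-wave moduli M = K + 4G/3 (shear modulus is the same in all patches).
m_w = gassmann_ksat(k_dry, k_min, k_water, phi) + 4.0 * g / 3.0
m_g = gassmann_ksat(k_dry, k_min, k_gas, phi) + 4.0 * g / 3.0
m_hill = 1.0 / (sw / m_w + (1 - sw) / m_g)
vp_gh = math.sqrt(m_hill / rho)

print(f"Vp (Gassmann-Wood) = {vp_gw:.2f} km/s, Vp (Gassmann-Hill) = {vp_gh:.2f} km/s")
```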

13.
Reconstructing past climate helps researchers understand the mechanisms of past climate change, recognize the context of modern climate change and predict scenarios of future climate change. Paleoclimate data assimilation (PDA), first introduced in 2000, is a promising approach and a significant issue in past climate research. PDA has the same theoretical basis as the traditional data assimilation (DA) employed in the fields of atmospheric science, ocean science and land surface science. The main aim of PDA is to optimally estimate past climate states that are consistent both with the climate signals recorded in proxies and with the dynamic understanding of the climate system, by combining the physical laws and dynamic mechanisms of climate systems represented by climate models with the climate signals recorded in proxies (e.g., tree rings, ice cores). After investigating the research status and latest advances of PDA abroad, this paper briefly introduces the background, concept and methodology of PDA. Several special aspects and the development history of PDA are systematically summarized. The theoretical basis and typical cases associated with three frequently used PDA methods (nudging, the particle filter and the ensemble square root filter) are analyzed and demonstrated. Finally, some open problems in current studies and key prospects for future research related to PDA are discussed to provide a scientific basis for PDA research.
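Of the three PDA methods listed above, the particle filter is the simplest to sketch: ensemble members are weighted by how well their proxy-equivalent values match the observed proxy, and the ensemble is then resampled according to those weights. The sketch below is a generic one-observation update with assumed numbers, not a reproduction of any particular PDA study.

```python
import numpy as np

# Minimal particle-filter update for one proxy observation (illustrative).
rng = np.random.default_rng(42)
n_particles = 1000
state = rng.normal(loc=14.0, scale=1.5, size=n_particles)  # prior temperature (deg C), assumed

# Assumed proxy system model: the proxy records the state plus Gaussian noise.
proxy_obs, proxy_err = 15.2, 0.8

# Weight each particle by the likelihood of the observation, then normalise.
log_w = -0.5 * ((proxy_obs - state) / proxy_err) ** 2
w = np.exp(log_w - log_w.max())
w /= w.sum()

# Resample particles in proportion to their weights (SIR step).
state_post = state[rng.choice(n_particles, size=n_particles, p=w)]
print(f"prior mean {state.mean():.2f}, posterior mean {state_post.mean():.2f}")
```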

14.
There has for many years been interest in finding necessary conditions for dynamo action. These are usually expressed in terms of bounds on integrated properties of the flow. The bounds can clearly be improved when the flow structure can be taken into account. Recent research presents techniques for finding optimised dynamos (that is, those with the lowest dynamo threshold) subject to constraints (e.g. fixed mean square vorticity). It is natural to ask whether such an optimum solution can exist when the mean square velocity is fixed. The aim of this note is to show that this is not the case and, in fact, that a steady or periodic dynamo can exist in a bounded conductor with an arbitrarily small value of the kinetic energy.

15.
A number of methods and formulae have been proposed in the literature to estimate the discharge capacity of compound channels. When the main channel has a meandering pattern, a reduction in the conveyance capacity for a given stage is observed, which is due to the energy dissipation caused by the development of strong secondary currents and to the decrease of the main-channel bed slope with respect to the valley bed slope. The discharges in meandering compound channels are usually assessed by applying, with some adjustments, the same methods used for straight compound channels. Specifically, the sinuosity of the main channel is frequently introduced to account for its meandering pattern, although some methods use different geometric parameters. In this paper the stage-discharge curves for several compound channels having identical cross-sectional area, roughness and bed slope but different planimetric patterns are numerically calculated and compared, in order to identify which geometric parameter should be used in empirical formulae to account for meandering patterns. The simulations are carried out using a 3D finite-volume model that solves the RANS equations with a k-ε turbulence model. The numerical code is validated against experimental data collected in both straight and meandering compound channels. The numerical results show that the sinuosity is the main parameter to be accounted for in empirical formulae for assessing the conveyance capacity of meandering compound channels. Comparison of the stage-discharge curves for the meandering compound channels with that obtained for a straight channel having an identical cross-sectional area clearly shows the reduction of discharge due to the presence of bends in the main channel. The effect of other geometric parameters, such as the meander-belt width and the mean curvature radius, proves to be very weak.

16.
This research incorporates the generalized likelihood uncertainty estimation (GLUE) methodology into a high-resolution Environmental Protection Agency Storm Water Management Model (SWMM), which we developed for a highly urbanized sewershed in Syracuse, NY, to assess SWMM modelling uncertainties and estimate parameters. We addressed two issues that have long been suggested to have a great impact on GLUE uncertainty estimation: the observations used to construct the likelihood measure and the sampling approach used to obtain the posterior samples of the input parameters and the prediction bounds of the model output. First, on the basis of Bayes' theorem, we compared the prediction bounds generated from the same Gaussian-distribution likelihood measure conditioned on flow observations of varying magnitude. Second, we employed two sampling techniques, the sampling importance resampling (SIR) and threshold sampling methods, to generate posterior parameter distributions and prediction bounds, on the basis of which the sampling efficiency was compared. In addition, for a better understanding of the hydrological responses of different pervious land covers in urban areas, we developed new parameter sets in SWMM representing the hydrological properties of trees and lawns, which were estimated through the GLUE procedure. The results showed that SIR was a more effective alternative to the conventional threshold sampling method. The combined total-flow and peak-flow data were an efficient alternative to the intensive 5-min flow data for reducing SWMM parameter and output uncertainties. Several runoff control parameters were found to have a great effect on peak flows, including the newly introduced parameters for trees. Copyright © 2013 John Wiley & Sons, Ltd.
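As background on how GLUE prediction bounds are constructed before they are compared, a generic GLUE step is sketched below: candidate parameter sets whose likelihood exceeds a behavioural threshold are retained, and likelihood-weighted quantiles of their simulated flows give the bounds at each time step. The likelihood form, threshold, and toy "model" are all assumptions for illustration, not the SWMM set-up of the study.

```python
import numpy as np

# Generic GLUE-style prediction bounds from behavioural simulations.
def weighted_quantile(values, weights, q):
    order = np.argsort(values)
    cdf = np.cumsum(weights[order]) / weights.sum()
    return np.interp(q, cdf, values[order])

rng = np.random.default_rng(1)
n_sets, n_steps = 500, 100
obs = np.abs(np.sin(np.linspace(0.0, 6.0, n_steps)))       # toy observed flows
sims = rng.normal(1.0, 0.3, size=(n_sets, 1)) * obs        # toy "model runs"

# Informal likelihood (Nash-Sutcliffe-type efficiency); keep behavioural sets.
sse = ((sims - obs) ** 2).sum(axis=1)
like = 1.0 - sse / ((obs - obs.mean()) ** 2).sum()
behavioural = like > 0.7                                   # assumed threshold
L, S = like[behavioural], sims[behavioural]

# Likelihood-weighted 5% / 95% quantiles give the prediction bounds.
lower = np.array([weighted_quantile(S[:, t], L, 0.05) for t in range(n_steps)])
upper = np.array([weighted_quantile(S[:, t], L, 0.95) for t in range(n_steps)])
print("behavioural sets:", behavioural.sum(), "mean band width:", (upper - lower).mean())
```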

17.
A theoretical framework is presented for the estimation of the physical parameters of a structure (i.e., mass, stiffness, and damping) from measured experimental data (i.e., input–output or output-only data). The framework considers two state-space models: a physics-based model derived from first principles (i.e., a white-box model) and a data-driven mathematical model derived by subspace system identification (i.e., a black-box model). Observability canonical form conversion is introduced as a powerful means of converting the data-driven mathematical model into a physically interpretable model that is termed a gray-box model. Through an explicit linking of the white-box and gray-box model forms, the physical parameters of the structural system can be extracted from the gray-box model in the form of a finite element discretization. Prior to experimental verification, the framework is numerically verified for a multi-DOF shear building structure. Without a priori knowledge of the structure, the mass, stiffness, and damping properties are accurately estimated. Then, experimental verification of the framework is conducted using a six-story steel frame structure under support excitation. With a priori knowledge of the lumped mass matrix, the spatial distribution of structural stiffness and damping is estimated. With an accurate estimation of the physical parameters of the structure, the gray-box model is shown to be capable of providing the basis for damage detection. Using the experimental structure, the gray-box model is used to reliably estimate changes in structural stiffness attributed to intentionally introduced damage. Copyright © 2012 John Wiley & Sons, Ltd.
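One step that connects the black-box and gray-box views above is purely mechanical: the eigenvalues of the identified discrete-time state matrix yield the structure's natural frequencies and damping ratios. The sketch below shows that conversion on an assumed single-degree-of-freedom example (using SciPy's matrix exponential for the toy discretization); the canonical-form conversion and stiffness extraction of the paper are not reproduced.

```python
import numpy as np
from scipy.linalg import expm

# Modal frequencies and damping ratios from an identified discrete-time
# state matrix A_d (as produced by subspace system identification).
def modal_parameters(A_d, dt):
    lam_d = np.linalg.eigvals(A_d)
    lam_c = np.log(lam_d) / dt            # map eigenvalues to continuous time
    wn = np.abs(lam_c)                    # natural frequencies (rad/s)
    zeta = -lam_c.real / wn               # damping ratios
    return wn / (2.0 * np.pi), zeta

# Assumed example: a 1-DOF oscillator (2 Hz, 2% damping) discretized at
# dt = 0.01 s, standing in for an identified black-box model.
w0, z0, dt = 2.0 * np.pi * 2.0, 0.02, 0.01
A_c = np.array([[0.0, 1.0], [-w0 ** 2, -2.0 * z0 * w0]])
A_d = expm(A_c * dt)
freqs, zetas = modal_parameters(A_d, dt)
print(freqs, zetas)                       # ~[2, 2] Hz and ~[0.02, 0.02]
```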

18.
A new uncertainty estimation method, which we recently introduced in the literature, allows for a comprehensive search of model posterior space while maintaining a high degree of computational efficiency. The method starts with an optimal solution to an inverse problem, performs a parameter reduction step and then searches the resulting feasible model space using prior parameter bounds and sparse-grid polynomial interpolation methods. After misfit rejection, the resulting model ensemble represents the equivalent model space and can be used to estimate inverse solution uncertainty. While parameter reduction introduces a posterior bias, it also allows for scaling this method to higher-dimensional problems. The use of Smolyak sparse-grid interpolation also dramatically increases sampling efficiency for large stochastic dimensions. Unlike Bayesian inference, which treats the posterior sampling problem as a random process, this geometric sampling method exploits the structure and smoothness of posterior distributions by solving a polynomial interpolation problem and then resampling from the resulting interpolant. The two questions we address in this paper are 1) whether our results are generally compatible with established Bayesian inference methods and 2) how our method compares in terms of posterior sampling efficiency. We accomplish this by comparing our method, for two electromagnetic problems from the literature, with two commonly used Bayesian sampling schemes: Gibbs and Metropolis-Hastings sampling. While both the sparse-grid and Bayesian samplers produce compatible results, in both examples the sparse-grid approach has a much higher sampling efficiency, requiring an order of magnitude fewer samples, suggesting that sparse-grid methods can significantly improve the tractability of inference solutions for problems in high dimensions or with more costly forward physics.

19.
Existing data supporting or disputing the validity of the Hashin-Shtrikman bounds on the elastic properties of multiphase aggregates often do not consider porosity, elastic anisotropy, or experimental errors. In this experiment, two-phase aggregates of KCl + (NH4Br, TlBr, CsCl, NaCl, Cu, and LiF) at every 20% volume fraction were vacuum hot-pressed, and the compressional and shear velocities were measured with a computer-controlled ultrasonic interferometer to ±0.2%. The ratio of the shear moduli, μ (phase 2/KCl), varied from about 1 to 5, producing a range of separations between the theoretical two-phase Hashin-Shtrikman bounds for the composites. Samples were generally 99% or better of the theoretical density, with less than 1% velocity anisotropy. Porosity corrections were applied assuming spherical pores, based on the observed velocity-pressure behaviour. Velocities agreed with the HS bounds calculated from the end-member single-crystal stiffnesses when anisotropy was taken into account. The velocity data were also used to estimate the bulk modulus, K, and shear modulus of the second phase by means of the matrix method, taking the K and μ of KCl as known and calculating the moduli of the other phase on the assumption that the measured velocities correspond to the two-phase Hashin-Shtrikman bounds or the Voigt-Reuss-Hill average. A narrow range of moduli estimates results only if the μ's of both phases are fairly closely matched. For μ's mismatched by a factor of 5, the theoretical uncertainty in the estimates can be 10 times larger than the experimental uncertainty. Estimates using the VRH average can lie outside the HS-based results.
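For reference, the two-phase Hashin-Shtrikman bounds used in comparisons of this kind have the standard closed form below, as given in rock-physics texts; the KCl and LiF moduli in the example are rough placeholder values, not the measured single-crystal data of this experiment.

```python
# Two-phase Hashin-Shtrikman bounds on bulk (K) and shear (G) moduli.
# Passing the stiffer phase as phase 1 gives the upper bound; passing the
# softer phase as phase 1 gives the lower bound.
def hs_bound(f1, K1, G1, K2, G2):
    f2 = 1.0 - f1
    K = K1 + f2 / (1.0 / (K2 - K1) + f1 / (K1 + 4.0 * G1 / 3.0))
    G = G1 + f2 / (1.0 / (G2 - G1) +
                   2.0 * f1 * (K1 + 2.0 * G1) /
                   (5.0 * G1 * (K1 + 4.0 * G1 / 3.0)))
    return K, G

# Placeholder moduli (GPa): KCl and a stiffer second phase such as LiF.
K_kcl, G_kcl = 17.0, 9.0
K_lif, G_lif = 67.0, 48.0
f_lif = 0.4
upper = hs_bound(f_lif, K_lif, G_lif, K_kcl, G_kcl)          # stiff phase first
lower = hs_bound(1.0 - f_lif, K_kcl, G_kcl, K_lif, G_lif)    # soft phase first
print("HS upper (K, G):", upper)
print("HS lower (K, G):", lower)
```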

20.
Hydrological Sciences Journal, 2013, 58(5): 852–871
Abstract

To reflect the uncertainties of a hydrological model in simulating and forecasting observed discharges according to rainfall inputs, the estimated result for each time step should not be just a point estimate (a single numerical value), but should be expressed as a prediction interval, i.e. a band defined by the prediction bounds of a particular confidence level α. How best to assess the quality of the prediction bounds thus becomes very important for understanding the modelling uncertainty in a comprehensive and objective way. This paper focuses on seven indices for characterizing the prediction bounds from different perspectives. For the three case-study catchments presented, these indices are calculated for the prediction bounds generated by the generalized likelihood uncertainty estimation (GLUE) method for various threshold values. In addition, the relationships among these indices are investigated, particularly that of the containing ratio (CR) to the other indices. In this context, three main findings are obtained for the prediction bounds estimated by GLUE. Firstly, both the average band-width and the average relative band-width are seen to have very strong linear correlations with the CR index. Secondly, a high CR value, a narrow band-width, and a high degree of symmetry with respect to the observed hydrograph, all of which are clearly desirable properties of the prediction bounds estimated by the uncertainty assessment methods, cannot all be achieved simultaneously. Thirdly, for the prediction bounds considered, the higher CR values and the higher degrees of symmetry with respect to the observed hydrograph are found to be associated with both the larger band-widths and the larger deviation amplitudes. It is recommended that a set of different indices, such as those considered in this study, be employed for assessing and comparing the prediction bounds in a more comprehensive and objective way.
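The first three indices mentioned above have simple definitions that can be written directly: the containing ratio is the fraction of observations falling inside the bounds, the average band-width is the mean difference between the upper and lower bounds, and the average relative band-width normalises that difference by the observed value. The sketch below evaluates them on synthetic bounds, purely for illustration.

```python
import numpy as np

# Three simple indices for assessing prediction bounds (illustrative).
def containing_ratio(obs, lower, upper):
    return np.mean((obs >= lower) & (obs <= upper))

def average_band_width(lower, upper):
    return np.mean(upper - lower)

def average_relative_band_width(obs, lower, upper):
    return np.mean((upper - lower) / obs)

# Synthetic example: an observed hydrograph with asymmetric prediction bounds.
t = np.linspace(0.0, 10.0, 200)
obs = 5.0 + 4.0 * np.exp(-0.5 * (t - 4.0) ** 2)
lower, upper = 0.85 * obs, 1.25 * obs

print("CR =", containing_ratio(obs, lower, upper))
print("B  =", average_band_width(lower, upper))
print("RB =", average_relative_band_width(obs, lower, upper))
```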

