Similar Literature
20 similar documents found (search time: 31 ms)
1.
Resources and environmental systems management (RESM) is challenged by the synchronic effects of interval uncertainties in the related practices. Synchronic interval uncertainties are misrepresented as random variables, fuzzy sets, or interval numbers in conventional RESM programming techniques, including stochastic programming. This may lead to ineffective resource allocation, high costs of recourse measures, increased risks of unreasonable decisions, and decreased optimality of system profits. To fill this gap, a synchronic interval linear programming (SILP) method is proposed in this study. Interval sets and interval functions are proposed and coupled with linear programming models, leading to an SILP model for RESM. This enables incorporation of interval uncertainties in resource constraints and synchronic interval uncertainties in the programming objective into the optimization process. An analysis of the distribution-independent geometric properties of the feasible regions of SILP models yields constraint-violation likelihoods. The tradeoff between system optimality and constraint violation is analyzed. The overall optimality of SILP systems under synchronic intervalness is quantified through the proposed integrally optimal solutions. Integrating these efforts leads to a violation-constrained interval integral method for optimizing RESM systems under synchronic interval uncertainties. Comparisons with selected existing methods reveal the effectiveness of SILP at eliminating the negative effects of synchronic intervalness, enabling risk management and overall optimality of RESM systems, and enhancing the reliability of optimization techniques for RESM problems. The developed framework for analyzing synchronic interval uncertainties in RESM systems is also helpful for addressing synchronisms of other uncertainties, such as randomness or fuzziness, and for avoiding the decision mistakes and failures that result from neglecting them.
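A common way to bracket such interval programs, offered here as a minimal sketch under simplified assumptions rather than the paper's SILP algorithm, is to solve optimistic and pessimistic sub-models over the interval bounds; all coefficients below are hypothetical.

```python
# Minimal sketch: bracketing an interval LP's optimum with best-case and
# worst-case sub-models (hypothetical data, not the paper's SILP method).
import numpy as np
from scipy.optimize import linprog

# Maximize profit c^T x (linprog minimizes, so negate c); the cost vector and
# resource limits are interval-valued: c in [c_lo, c_hi], b in [b_lo, b_hi].
c_lo, c_hi = np.array([3.0, 5.0]), np.array([4.0, 6.0])
A_ub = np.array([[1.0, 2.0], [3.0, 1.0]])                   # resource-use coefficients
b_lo, b_hi = np.array([8.0, 9.0]), np.array([10.0, 12.0])   # interval resource limits

best = linprog(-c_hi, A_ub=A_ub, b_ub=b_hi, bounds=[(0, None)] * 2)   # optimistic sub-model
worst = linprog(-c_lo, A_ub=A_ub, b_ub=b_lo, bounds=[(0, None)] * 2)  # pessimistic sub-model
print("objective value interval:", [-worst.fun, -best.fun])
```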

2.
Floods have changed in a complex manner, triggered by the changing environment (i.e., intensified human activities and global warming). Hence, for better flood control and mitigation in the future, bivariate frequency analysis of flood and extreme precipitation events needs to be performed within the context of a changing environment. Given this, in this paper the Pettitt test and wavelet coherence analysis are used in combination to identify the period with a transformed flood-generating mechanism. Subsequently, the primary and secondary return periods of annual maximum flood (AMF) discharge and extreme precipitation (Pr) during the identified period are derived based on a copula. Meanwhile, the conditional probabilities of different flood discharge magnitudes under various extreme precipitation scenarios are estimated using the joint dependence structure between AMF and Pr. Moreover, a Monte Carlo-based algorithm is used to robustly evaluate the uncertainties of the above copula-based analyses. Two catchments located on the Loess Plateau, the Weihe River Basin (WRB) and the Jinghe River Basin (JRB), are selected as study regions. Results indicate that: (1) 1994–2014 and 1981–2014 are identified as the periods with a transformed flood-generating mechanism in the WRB and JRB, respectively; (2) the primary and secondary return periods for AMF and Pr are examined, and the chance of different AMF magnitudes occurring under varying Pr scenarios is elucidated according to the joint distribution of AMF and Pr. Nevertheless, the associated uncertainties are considerable and thus greatly challenge future flood mitigation measures. Results of this study offer a technical reference for copula-based frequency analysis under a changing environment at regional and global scales.
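For illustration, the sketch below evaluates joint return periods from a Gumbel copula under one common pair of definitions (the "OR" and "AND" events); the dependence parameter and marginal probabilities are hypothetical, and this is not necessarily the paper's exact primary/secondary formulation.

```python
# Minimal sketch: copula-based joint return periods for annual maxima
# (hypothetical Gumbel dependence parameter theta and marginal quantiles).
import numpy as np

def gumbel_copula(u, v, theta):
    """C(u, v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta)), theta >= 1."""
    return np.exp(-((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta))

theta = 2.0        # hypothetical AMF-Pr dependence strength
u = v = 0.99       # marginal non-exceedance probabilities (100-year events)
C = gumbel_copula(u, v, theta)

T_or = 1.0 / (1.0 - C)            # return period: AMF or Pr exceeds its quantile
T_and = 1.0 / (1.0 - u - v + C)   # return period: both exceed their quantiles
print(f"T_or = {T_or:.1f} yr, T_and = {T_and:.1f} yr")
```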

3.
We propose a scenario-based method for simulating and mapping the risk of surge floods for use by local authorities concerned with public safety and urban planning in coastal areas. Focusing on the triad of hazard, vulnerability and adaptation capability, we estimate the comprehensive risk and display its spatial distribution using the raster calculation tool in ArcGIS. The detailed methodology is introduced via a case study of Yuhuan, an island county in Zhejiang Province, China, which is frequently affected by typhoon storm surges. First, we designed 24 typhoon scenarios and modeled the flood process in each scenario using the hydrodynamic module of MIKE 21. Second, flood depth and area were used for hazard assessment; an authorized indicator system of land-use categories and a survey of emergency shelters were used for vulnerability and adaptation-capability assessment, respectively; and a quantified model was used for assessment of the comprehensive risk. Lastly, we used the GIS raster calculation tool to map the risk of storm surges in multiple typhoon scenarios. Our principal findings are as follows: (1) Seawalls are more likely to be overtopped or destroyed by more severe storm surges as typhoon intensity increases. (2) Most of the residential areas with inadequate emergency shelters are highly vulnerable to flood events. (3) As projected in the risk mapping, if an exceptional typhoon with a central pressure of 915 or 925 hPa made landfall in Yuhuan, a wide range of areas would be flooded and at high risk. (4) Determining optimal strategies based on identification of risk-inducing factors is the most effective way to promote safe and sustainable development in coastal cities.
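The cell-wise overlay step can be sketched with numpy; the layers, exponents, and combination rule below are hypothetical stand-ins for the authors' quantified model, which an ArcGIS raster-calculator expression would evaluate in the same cell-by-cell fashion.

```python
# Minimal sketch: comprehensive risk as a cell-wise composite of hazard,
# vulnerability, and adaptation-capability grids (all layers hypothetical).
import numpy as np

rng = np.random.default_rng(0)
hazard = rng.random((100, 100))            # e.g., normalized flood depth
vulnerability = rng.random((100, 100))     # e.g., normalized land-use indicator
adaptation = rng.random((100, 100)) + 0.1  # e.g., shelter coverage (kept > 0)

# hypothetical weighted geometric combination: higher adaptation -> lower risk
risk = hazard ** 0.5 * vulnerability ** 0.3 / adaptation ** 0.2
print("cells in top 10% risk:", int((risk > np.quantile(risk, 0.9)).sum()))
```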

4.
Model performance evaluation for real-time flood forecasting has been conducted using various criteria. Although the coefficient of efficiency (CE) is most widely used, we demonstrate that a model achieving good model efficiency may actually be inferior to naïve (or persistence) forecasting if the flow series has a high lag-1 autocorrelation coefficient. We derived sample-dependent and AR-model-dependent asymptotic relationships between the coefficient of efficiency and the coefficient of persistence (CP), which form the basis of a proposed CE-CP coupled model performance evaluation criterion. Considering flow persistence and model simplicity, the AR(2) model is suggested as the benchmark model for performance evaluation of real-time flood forecasting models. We emphasize that performance evaluation of flood forecasting models using the proposed CE-CP coupled criterion should be carried out with respect to individual flood events. A single CE or CP value derived from a multi-event artifactual series by no means provides a multi-event overall evaluation and may actually disguise the real capability of the proposed model.
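For reference, a minimal sketch of the two criteria as commonly defined: CE is the Nash-Sutcliffe efficiency against the observed mean, while CP benchmarks the model against the naive persistence forecast Q[t] = Q[t-1]; the flow series is hypothetical.

```python
# Minimal sketch: coefficient of efficiency (CE) and coefficient of
# persistence (CP) for a forecast series (hypothetical data).
import numpy as np

def ce(obs, sim):
    """Nash-Sutcliffe efficiency: skill relative to the observed mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def cp(obs, sim):
    """Skill relative to the naive persistence forecast Q[t] = Q[t-1]."""
    naive_err = np.sum((obs[1:] - obs[:-1]) ** 2)
    return 1.0 - np.sum((obs[1:] - sim[1:]) ** 2) / naive_err

obs = np.array([10., 14., 30., 55., 42., 28., 18., 12.])
sim = np.array([11., 13., 26., 50., 45., 30., 17., 13.])
print(f"CE = {ce(obs, sim):.3f}, CP = {cp(obs, sim):.3f}")
```

For a series with high lag-1 autocorrelation, CE can look impressive while CP reveals that the model barely improves on persistence, which is the point the abstract makes.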

5.
Accurate and precise estimation of return levels is often a key goal of any extreme value analysis. For example, in the UK the British Standards Institution (BSI) incorporate estimates of ‘once-in-50-year wind gust speeds’—or 50-year return levels—into their design codes for new structures; similarly, the Dutch Delta Commission use estimates of the 10,000-year return level for sea-surge to aid the construction of flood defence systems. In this paper, we briefly highlight the shortcomings of standard methods for estimating return levels, including the commonly adopted block maxima and peaks-over-thresholds approaches, before presenting an estimation framework which we show can substantially increase the precision of return level estimates. Our work allows explicit quantification of seasonal effects, as well as exploiting recent developments in the estimation of the extremal index for handling extremal clustering. From frequentist ideas, we turn to the Bayesian paradigm as a natural approach for building complex hierarchical or spatial models for extremes. Through simulations we show that the return level posterior mean does not have an exceedance probability in line with the intended encounter risk; we also argue that the Bayesian posterior predictive value gives the most satisfactory representation of a return level for use in practice, accounting for uncertainty in parameter estimation and future observations. Thus, where feasible, we propose a Bayesian estimation strategy for optimal return level inference.
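As a baseline for comparison, here is a minimal frequentist sketch of the block-maxima approach criticized above, using scipy's GEV fit on simulated annual maxima; the paper's framework goes further, adding seasonal effects, the extremal index, and Bayesian posterior predictive return levels.

```python
# Minimal sketch: block-maxima return level via a maximum-likelihood GEV fit
# (simulated annual maxima; point estimate only, no uncertainty handling).
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
annual_maxima = genextreme.rvs(c=-0.1, loc=30, scale=5, size=60, random_state=rng)

params = genextreme.fit(annual_maxima)            # (shape c, loc, scale)
z50 = genextreme.ppf(1.0 - 1.0 / 50.0, *params)   # 50-year return level
print(f"50-year return level ~ {z50:.1f}")
```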

6.
During preliminary flood risk assessment in Lithuania, 54 significant flood areas (SFA) were identified. Detailed flood hazard and risk maps were prepared for these areas in 2014. The European Union Floods Directive does not specify concrete criteria for SFA delineation. The uncertainty analysis shows that the total length of SFA is not very sensitive to the methodology used. In some rivers the uncertainties of the 100-year flood peak discharge (Q1%) were large, but the variation of the SFA boundary location was relatively small due to the properties of the hydrological network. The catchment area and Q1% change rapidly near junctions with large tributaries, so the boundaries of SFA are usually attached to these junctions. Formal criteria are mostly used to evaluate the possibility of significant floods, but the delineation of SFA is usually based on subjective decisions.

7.
Self-centering buckling-restrained braces (SCBRBs) were proposed recently to minimize the residual deformation of braces induced by yielding or buckling. Although earthquake resilience of structures equipped with SCBRBs can be achieved using displacement-based design (DBD), previously proposed DBD procedures generally involve iterations. In this study, a novel direct displacement-based design method with a non-iterative procedure, named RCR DDBD, is proposed and applied to the design of steel braced-frame structures with SCBRBs. Unlike previously adopted DBD, the yield displacement does not need to be assumed initially in the proposed procedure. Instead, the yield strength and yield displacement are determined directly from the predetermined objective drift (ratio), using the relation between the strength reduction factor (R) and the constant-strength inelastic displacement ratio spectra (CR spectra), i.e., the RCR relation. Since the derived RCR relation is independent of the peak ground acceleration of the earthquake records when stiffness and strength degradation are not considered, the proposed procedure can be accurate for any seismic level. The RCR DDBD begins with the seismic excitation level (according to the structure category, site classification and owner's requirements) and the corresponding target drift, and ends with the cross sections of the main frame members and all the bracing parameters. Results for two 7-story buildings designed according to the RCR DDBD procedure demonstrate that this procedure is effective and fairly simple for practical seismic design.

8.
The Aki-Utsu method of Gutenberg-Richter (G-R) b-value estimation is often misapplied, so that estimations not using the G-R histogram are often meaningless because they are not based on adequate samples. We propose a method to estimate the likelihood Pr(b | b_m, N, M_1, M_2) that an observed estimate b_m, based on a sample of N magnitudes within a [M_1 − ΔM/2, M_2 + ΔM/2) range, where ΔM = 0.1 is the usual rounding applied to magnitudes, is due to a "true" source b value, b, and use these likelihoods to estimate source b ranges corresponding to various confidence levels. As an example of the application of the method, we estimate the b values before and after the occurrence of a magnitude-7.4 earthquake in the Mexican subduction zone, and find a difference of 0.82 between them, with 100% confidence that the b values are different.
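For context, a minimal sketch of the underlying Aki-Utsu maximum-likelihood estimator with Utsu's correction for the ΔM = 0.1 rounding; the catalog is simulated with a true b of 1, and the paper's contribution is the likelihood and confidence ranges built on top of such point estimates.

```python
# Minimal sketch: Aki-Utsu b-value with Utsu's bin-width correction
# (simulated catalog; true b = 1).
import numpy as np

def b_aki_utsu(mags, m_min, dM=0.1):
    """b = log10(e) / (mean(M) - (m_min - dM/2)) over magnitudes M >= m_min."""
    m = np.asarray(mags)
    m = m[m >= m_min]
    return np.log10(np.e) / (m.mean() - (m_min - dM / 2.0))

rng = np.random.default_rng(2)
# G-R magnitudes with b = 1 (exponential with rate b*ln10), rounded to 0.1
mags = np.round(rng.exponential(scale=1.0 / np.log(10.0), size=500) + 2.95, 1)
print(f"b estimate ~ {b_aki_utsu(mags, m_min=3.0):.2f}")  # close to 1.0
```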

9.
The effects of temporally correlated infiltration on water flow in an unsaturated–saturated system were investigated. Both white-noise and exponentially correlated infiltration processes were considered. The moment equations of the pressure head (ψ) were solved numerically to obtain the variance and autocorrelation functions of ψ at 14 observation points. Monte Carlo simulations were conducted to verify the numerical results and to estimate the power spectrum of ψ (S_ψψ). It was found that as the water flows through the system, the variance of ψ (σ_ψ²) is damped by the system: the deeper in the system, the smaller σ_ψ², and the larger the correlation timescale of the infiltration process (λ_I), the larger σ_ψ². The unsaturated–saturated system gradually filters out the short-term fluctuations of ψ, and the damping effect is most significant in the upper part of the system. The fluctuations of ψ are non-stationary at early times and become stationary as time progresses: the larger the value of λ_I, the longer the non-stationary period. The correlation timescale of ψ (λ_ψ) increases with depth and approaches a constant value at depth: the larger the value of λ_I, the larger the value of λ_ψ. The estimated S_ψψ is consistent with the variance and autocorrelation results.

10.
Shear and compressional wave velocities, coupled with other petrophysical data, are very important for hydrocarbon reservoir characterization. In situ shear wave velocity (Vs) is measured by some sonic logging tools. Shear velocity coupled with compressional velocity (Vp) is vitally important for determining geomechanical parameters, identifying lithology, designing mud weight, hydraulic fracturing, and geophysical studies such as VSP. In this paper, a correlation between compressional and shear wave velocity is obtained for the Gachsaran formation in the Maroon oil field. Real data were used to examine the accuracy of the prediction equation. Moreover, a genetic algorithm was used to obtain optimal values for the constants of the suggested equation, and an artificial neural network was used to inspect the reliability of the method. These investigations verify that the suggested equation is an efficient, fast, and cost-effective method for predicting Vs from Vp.
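The regression step itself is simple; a minimal least-squares sketch with hypothetical velocity pairs follows, whereas the paper tunes the equation's constants with a genetic algorithm and cross-checks the result with a neural network.

```python
# Minimal sketch: linear Vp-Vs correlation by least squares
# (hypothetical velocity pairs, km/s).
import numpy as np

vp = np.array([3.2, 3.8, 4.1, 4.6, 5.0, 5.4])
vs = np.array([1.6, 1.9, 2.1, 2.4, 2.6, 2.9])

a, b = np.polyfit(vp, vs, 1)       # Vs ~ a*Vp + b
pred = a * vp + b
r2 = 1 - np.sum((vs - pred) ** 2) / np.sum((vs - vs.mean()) ** 2)
print(f"Vs = {a:.3f}*Vp + {b:.3f}, R^2 = {r2:.3f}")
```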

11.
To alert the public to the possibility of tornado (T), hail (H), or convective wind (C), the National Weather Service (NWS) issues watches (V) and warnings (W). There are severe thunderstorm watches (SV), tornado watches (TV), and particularly dangerous situation watches (PV); and there are severe thunderstorm warnings (SW) and tornado warnings (TW). Two stochastic models are formulated that quantify uncertainty in severe weather alarms for the purpose of making decisions: a one-stage model for deciders who respond to warnings, and a two-stage model for deciders who respond to watches and warnings. The models identify all possible sequences of watches, warnings, and events, and characterize the associated uncertainties in terms of transition probabilities. The modeling approach is demonstrated on data from the NWS Norman, Oklahoma, warning area, years 2000–2007. The major findings are these. (i) Irrespective of its official designation, every warning type {SW, TW} predicts with a significant probability every event type {T, H, C}. (ii) An ordered intersection of SW and TW, defined as reinforced warning (RW), provides additional predictive information and outperforms SW and TW. (iii) A watch rarely leads directly to an event, and most frequently is false. But a watch that precedes a warning does matter. The watch type {SV, TV, PV} is a predictor of the warning type {SW, RW, TW} and of the warning performance: it sharpens the false alarm rate of the warning and the predictive probability of an event, and it increases the average lead time of the warning.
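A minimal sketch of how such transition probabilities can be estimated by counting over (alarm, outcome) pairs; the toy sequence below is hypothetical and far shorter than the 2000–2007 Norman dataset.

```python
# Minimal sketch: empirical transition probabilities P(event | warning type)
# from a hypothetical sequence of (warning, observed event or None) records.
from collections import Counter

records = [("SW", "H"), ("SW", "C"), ("TW", "T"), ("SW", None),
           ("TW", "H"), ("TW", None), ("SW", "C"), ("TW", "T")]

counts = Counter(records)                      # joint (warning, outcome) counts
totals = Counter(w for w, _ in records)        # marginal warning counts
for (w, e), n in sorted(counts.items(), key=str):
    print(f"P({e} | {w}) = {n / totals[w]:.2f}")
```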

12.
A preliminary study of the b value of rocks with two kinds of structural models has been made on the basis of a new acoustic emission recording system. It shows that the b value of the sample decreases markedly when a sample with compressive en echelon faults changes into a tensile one after interchange occurs between the stress axes σ1 and σ2. A similar change is observed when a sample with tensile en echelon faults changes into one with a bend fault after the two segments of the en echelon fault link up. These facts indicate that the variation of the b value may carry information on the regional dominant structural model. Therefore, b-value analysis could be a new method for studying regional dominant structural models.

13.
The analysis of the relation between the daytime and nighttime values of the critical frequency of the ionospheric F2 layer (foF2), begun in the authors' previous publication, is continued. The main regularities in the variations of the correlation coefficient R(foF2) characterizing this relation are confirmed using a larger body of statistical material (more ionospheric stations and longer observational series). Long-term trends in R(foF2) are found: at all stations, R(foF2) becomes increasingly negative with time after 1980.

14.
In this study a new method is presented for determining model parameters from magnetic anomalies caused by dipping dikes. The proposed method employs only the even component of the anomaly. First, the maximum of the even component is divided by its value at any distance x to obtain S1. Then, theoretical even-component values are computed for minimal depth (h) and half-width (b) values, and S2 is obtained by dividing their maximum by the value computed for the same distance x. A set of S2 values is calculated by slowly increasing the half-width, and the h and b for the S2 closest to S1 are determined. The same procedure is repeated with increasing depth. The determined b values are plotted against the corresponding values of h. After repeating the process and plotting curves for different distances, the actual depth and half-width values can be determined.

15.
We study the frictional and viscous effects on earthquake nucleation, especially the nucleation phase, based on a one-degree-of-freedom spring-slider model with friction and viscosity. The frictional and viscous effects are specified by the characteristic displacement, U_c, and the viscosity coefficient, η, respectively. Simulation results show that friction and viscosity can both lengthen the natural period of the system, and viscosity increases the duration of motion of the slider. Higher viscosity causes smaller-amplitude, lower-velocity motion than lower viscosity. A change of either U_c (under large η) or η (under large U_c) from a large value (U_ch for U_c and η_h for η) to a small one (U_cl for U_c and η_l for η) in two stages during sliding can result in a clear nucleation phase prior to the P-wave. The differences δU_c = U_ch − U_cl and δη = η_h − η_l are two important factors in producing a nucleation phase. The difference between the nucleation phase and the P-wave increases with either δU_c or δη. Consistent with seismic observations, the peak amplitude of the P-wave, which is associated with earthquake magnitude, is independent of the duration of the nucleation phase. A mechanism specified by a change of either η or U_c from a larger value to a smaller one, due to temporal variations in pore fluid pressure and temperature in the fault zone, is proposed on the basis of radiation efficiency to explain the simulation results and observations.

16.
Based on the theory of two-phase interacting nanoparticles, the formation of thermoremanent and chemical remanent magnetization in nanosized titanomagnetites is modeled. It is shown that the value of thermoremanent magnetization barely depends on the degree of titanomagnetite exsolution, whereas chemical remanent magnetization, which emerges during the exsolution, increases up to at most the value of thermoremanent magnetization. The values of the ratio of thermoremanent to ideal magnetization, R_t, fall within the limits 0.8 ≤ R_t ≤ 1. The analogous ratio of chemical remanent magnetization to the ideal, R_c, is below R_t at all stages of the exsolution. Moreover, the magnetic interaction between the nanoparticles reduces the values of thermoremanent and chemical magnetization but barely affects the ratio.

17.
We continue applying the general concept of seismic risk analysis in a number of seismic regions worldwide by constructing regional seismic hazard maps based on morphostructural analysis, pattern recognition, and the Unified Scaling Law for Earthquakes (USLE), which generalizes the Gutenberg-Richter relationship by making use of the naturally fractal distribution of earthquake sources of different sizes in a seismic region. The USLE is the empirical relationship log10 N(M, L) = A + B·(5 − M) + C·log10 L, where N(M, L) is the expected annual number of earthquakes of a certain magnitude M within a seismically prone area of linear dimension L. We use the parameters A, B, and C of the USLE to estimate, first, the expected maximum magnitude in a time interval at seismically prone nodes of the morphostructural scheme of the region under study, and then map the corresponding expected ground shaking parameters (e.g., peak ground acceleration, PGA, or macroseismic intensity). After rigorous verification against the available seismic evidence from the past (usually the observed instrumental PGA or the historically reported macroseismic intensity), such a seismic hazard map is used to generate maps of specific earthquake risks for population, cities, and infrastructure (e.g., based on population census and building inventory data). The methodology of seismic hazard and risk assessment is illustrated by application to the territory of the Greater Caucasus and Crimea.
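Once A, B, and C are fitted for a region, evaluating the USLE count is direct; the coefficients below are hypothetical.

```python
# Minimal sketch: expected annual earthquake counts from the USLE,
# log10 N(M, L) = A + B*(5 - M) + C*log10(L) (hypothetical coefficients).
import math

def usle_annual_count(M, L, A=-2.0, B=0.9, C=1.2):
    """Expected annual number of magnitude-M events within linear dimension L (km)."""
    return 10.0 ** (A + B * (5.0 - M) + C * math.log10(L))

for M in (5.0, 6.0, 7.0):
    print(f"M{M:.0f}: N = {usle_annual_count(M, L=50.0):.3f} events/yr")
```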

18.
Seismic intensity measures (IMs) play a pivotal role in probabilistic seismic demand modeling. Many studies have investigated appropriate IMs for structures without considering soil liquefaction potential. In particular, optimal IMs for probabilistic seismic demand modeling of bridges in liquefied and laterally spreading ground have not been comprehensively studied. In this paper, a coupled bridge-soil-foundation model is adopted to perform an in-depth investigation of optimal IMs among 26 IMs found in the literature. Uncertainties in structural and geotechnical material properties and in the geometric parameters of bridges are considered in the model to produce comprehensive scenarios. Metrics such as efficiency, practicality, proficiency, sufficiency and hazard computability are assessed for different demand parameters. Moreover, an information-theory-based approach is adopted to evaluate the relative sufficiency among the studied IMs. Results indicate the superiority of velocity-related IMs over acceleration-, displacement- and time-related ones. In particular, Housner spectrum intensity (HI), spectral acceleration at 2.0 s (Sa-2.0), peak ground velocity (PGV), cumulative absolute velocity (CAV) and its modified version (CAV5) are the optimal IMs. Conversely, Arias intensity (Ia) and shaking intensity rate (SIR), measures often used in liquefaction evaluation or related structural demand assessment, demonstrate very low correlations with the demand parameters. Moreover, the geometric parameters do not appreciably affect the choice of optimal IMs. In addition, the information-theory-based sufficiency ranking of IMs is identical to that based on the coefficient of determination (R²). This means that R² can be used to preliminarily assess the relative sufficiency of IMs.
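The efficiency and R² screening typically rest on the standard "cloud" regression ln D = a + b·ln IM; the sketch below uses simulated data and illustrates that generic step, not the paper's coupled bridge-soil-foundation analysis.

```python
# Minimal sketch: "cloud" probabilistic seismic demand regression
# ln(demand) ~ a + b*ln(IM), with residual dispersion (efficiency)
# and R^2 (sufficiency screening). Data are simulated.
import numpy as np

rng = np.random.default_rng(3)
im = rng.lognormal(mean=0.0, sigma=0.5, size=200)                 # e.g., PGV
demand = np.exp(1.2 * np.log(im) + 0.3 + rng.normal(0, 0.4, 200))  # e.g., pier drift

b, a = np.polyfit(np.log(im), np.log(demand), 1)
resid = np.log(demand) - (a + b * np.log(im))
r2 = 1 - resid.var() / np.log(demand).var()
print(f"ln D = {a:.2f} + {b:.2f} ln IM, sigma = {resid.std():.2f}, R^2 = {r2:.2f}")
```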

19.
During the rupture of an earthquake, the strain energy, ΔE, is transferred into at least three parts, i.e., the seismic radiation energy (E_s), fracture energy (E_g), and frictional energy (E_f); that is, ΔE = E_s + E_g + E_f. Friction, which some researchers represent by a velocity- and state-dependent friction law, controls all three parts. One of the main parameters of the law is the characteristic slip displacement, D_c. It is significant and necessary to evaluate a reliable value of D_c from observed and inverted seismic data. Since D_c controls the radiation efficiency, η_R = E_s/(E_s + E_g), the value of η_R is a good constraint for estimating D_c. By integrating observed data and source parameters inverted from recorded seismograms, the values of E_s and E_g of an earthquake can be measured, thus leading to the value of η_R. The constraint used to estimate a reliable value of D_c is described in this work. An example of estimating D_c, based on the observed and inverted values of the source parameters of the September 20, 1999, M_S 7.6 Chi-Chi (Ji-Ji), Taiwan region, earthquake, is presented.
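The energy bookkeeping stated above reduces to simple arithmetic; the energy values below are hypothetical.

```python
# Minimal sketch: energy partition dE = Es + Eg + Ef and radiation
# efficiency eta_R = Es / (Es + Eg) (hypothetical values in joules).
Es, Eg, Ef = 2.0e15, 1.5e15, 6.0e15

dE = Es + Eg + Ef
eta_R = Es / (Es + Eg)
print(f"strain-energy drop = {dE:.2e} J, radiation efficiency = {eta_R:.2f}")
```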

20.
Simultaneous observations of high-latitude long-period irregular pulsations at frequencies of 2.0–6.0 mHz (ipcl) and magnetic field disturbances in the solar wind plasma at low geomagnetic activity (Kp ~ 0) have been studied. The 1-s magnetic field data from the Godhavn (GDH) high-latitude observatory and the 1-min solar wind plasma and IMF data for 2011–2013 were used in the analysis. Ipcl (irregular pulsations, continuous, long) observed against a background of IMF Bz reorientation from northward to southward have been analyzed. In these cases the other solar wind plasma and IMF parameters, such as the velocity V, density n, solar wind dynamic pressure P = ρV² (ρ is the plasma density), and field strength magnitude B, were relatively stable. The effect of the IMF Bz variation rate on the ipcl spectral composition and intensity has been studied. It was established that the ipcl spectral density reaches its maximum ~10–20 min after the IMF Bz sign reversal in a predominant number of cases. It was found that the ipcl average frequency (f) is linearly related to the IMF Bz variation rate (ΔBz/Δt). It was shown that the dependence of f on ΔBz/Δt is controlled by the angle α = arctan(By/Bx), which determines the MHD discontinuity type at the front boundary of the magnetosphere. The results suggest that the formation of the observed ipcl spectrum related to the IMF Bz reorientation is caused by solar wind plasma turbulence, which promotes the development of current sheet instability and surface wave amplification at the magnetopause.
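For reference, the dynamic pressure P = ρV² quoted above evaluates as follows for typical quiet solar wind conditions (hypothetical 1-min values).

```python
# Minimal sketch: solar wind dynamic pressure P = rho * V^2
# (hypothetical plasma values: n in cm^-3, V in km/s).
M_P = 1.6726e-27                 # proton mass, kg

n = 5.0                          # number density, cm^-3
V = 400.0                        # bulk speed, km/s

rho = n * 1e6 * M_P              # mass density, kg/m^3
P = rho * (V * 1e3) ** 2         # dynamic pressure, Pa
print(f"P = {P * 1e9:.2f} nPa")  # quiet solar wind is typically ~1-2 nPa
```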

