Similar Literature
20 similar documents found (search time: 15 ms)
1.
A multivariate spatial sampling design based on spatial vine copulas is presented that aims to simultaneously reduce the prediction uncertainty of multiple variables by selecting additional sampling locations based on the multivariate relationship between variables, the spatial configuration of existing locations and the values of the observations at those locations. Novel aspects of the methodology include the development of optimal designs that use spatial vine copulas to estimate prediction uncertainty and, additionally, use transformation methods for dimension reduction to model multivariate spatial dependence. Spatial vine copulas capture non-linear spatial dependence within variables, whilst a chained transformation that uses non-linear principal component analysis captures the non-linear multivariate dependence between variables. The proposed design methodology is applied to two environmental case studies. Performance of the proposed methodology is evaluated through partial redesigns of the original spatial designs. The first application is a soil contamination example that demonstrates the ability of the proposed methodology to address spatial non-linearity in the data. The second application is a forest biomass study that highlights the strength of the methodology in incorporating non-linear multivariate dependence into the design.

2.
The test for exponentiality of a dataset in terms of a specific aging property constitutes an interesting problem in reliability analysis. To this end, a wide variety of tests has been proposed in the literature. In this paper, the excess-wealth function is recalled and new asymptotic properties are studied. By using the characterization of the exponential distribution based on the excess-wealth function, a new exponentiality test is proposed. Through simulation techniques, it is shown that this new test works well on small sample sizes. The exact null distribution and asymptotic normality of the proposed statistic are also obtained. The test and a new empirical graph based on the excess-wealth function are applied to extreme-value examples.
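The excess-wealth transform can be estimated directly from the order statistics, and the exponential characterization W(p) = μ(1 − p) then yields a simple numerical check. The sketch below is illustrative only: the paper's actual test statistic and null distribution are not reproduced here, and the function names are our own.

```python
import numpy as np

def empirical_excess_wealth(x, p):
    # W_n(p): area under the empirical survival function to the right of the
    # p-th sample quantile, accumulated over the gaps between order statistics.
    xs = np.sort(np.asarray(x, dtype=float))
    n = xs.size
    k = max(int(np.ceil(p * n)), 1)      # 1-based index of the p-quantile order statistic
    i = np.arange(k - 1, n - 1)          # 0-based indices of the gaps to its right
    return float(np.sum((1.0 - (i + 1) / n) * (xs[i + 1] - xs[i])))

def exponentiality_ratio(x, p=0.5):
    # For an Exp(mean mu) sample, W(p) = mu * (1 - p), so this ratio should be
    # close to 1; marked departures hint at non-exponential aging behavior.
    return empirical_excess_wealth(x, p) / (float(np.mean(x)) * (1.0 - p))
```

For a uniform sample, for example, the true ratio at p = 0.5 is about 0.5, so the diagnostic separates the two families clearly even at moderate sample sizes.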

3.
The properties of linear spatial interpolators of single realizations and trend components of regionalized variables are examined in this work. In the case of the single-realization estimator, explicit and exact expressions for the weighting vector and the variances of the estimator and the estimation error were obtained from a closed-form expression for the inverse of the Lagrangian matrix. The properties of the trend estimator followed directly from the Gauss-Markov theorem. It was shown that the single-realization estimator can be decomposed into two mutually orthogonal random functions of the data, one of which is the trend estimator. The implementation of linear spatial estimation was illustrated with three different methods, i.e., full information maximum likelihood (FIML), restricted maximum likelihood (RML), and Rao's minimum norm invariant quadratic unbiased estimation (MINQUE) for the single-realization case and via generalized least squares (GLS) for the trend. The case study involved a large correlation length-scale in the covariance of specific yield, producing a nested covariance structure that was nearly positive semidefinite. The sensitivity of model parameters, i.e., drift and variance components (local and structured), to the correlation length-scale, choice of covariance model (i.e., exponential and spherical), and estimation method was examined. The same type of sensitivity analysis was conducted for the spatial interpolators. It is interesting that for this case study, characterized by a large correlation length-scale of about 50 mi (80 km), both parameter estimates and linear spatial interpolators were rather insensitive to the choice of covariance model and estimation method within the range of credible values obtained for the correlation length-scale, i.e., 40–60 mi (64–96 km), with alternative estimates falling within ±5% of each other.

4.
In this study a simulation-based fuzzy chance-constrained programming (SFCCP) model is developed based on possibility theory. The model is solved through an indirect search approach which integrates fuzzy simulation, artificial neural network and simulated annealing techniques. This approach has the advantages of: (1) handling simulation and optimization problems under uncertainty associated with fuzzy parameters, (2) providing additional information (i.e. the possibility of constraint satisfaction) indicating how likely one can believe the decision results, (3) alleviating computational burdens in the optimization process, and (4) reducing the chances of being trapped in local optima. The model is applied to a petroleum-contaminated aquifer located in western Canada for supporting the optimal design of groundwater remediation systems. The model solutions provide optimal groundwater pumping rates for 3-, 5- and 10-year pumping schemes. It is observed that uncertainty significantly affects the remediation strategies. To mitigate such impacts, additional cost is required either for an increased pumping rate or for reinforced site characterization.

5.
The goal of quantile regression is to estimate conditional quantiles for specified values of quantile probability using linear or nonlinear regression equations. These estimates are prone to “quantile crossing”, where regression predictions for different quantile probabilities do not increase as probability increases. In the context of the environmental sciences, this could, for example, lead to estimates of the magnitude of a 10-year return period rainstorm that exceed the 20-year storm, or similar nonphysical results. This problem, as well as the potential for overfitting, is exacerbated for small to moderate sample sizes and for nonlinear quantile regression models. As a remedy, this study introduces a novel nonlinear quantile regression model, the monotone composite quantile regression neural network (MCQRNN), that (1) simultaneously estimates multiple non-crossing, nonlinear conditional quantile functions; (2) allows for optional monotonicity, positivity/non-negativity, and generalized additive model constraints; and (3) can be adapted to estimate standard least-squares regression and non-crossing expectile regression functions. First, the MCQRNN model is evaluated on synthetic data from multiple functions and error distributions using Monte Carlo simulations. MCQRNN outperforms the benchmark models, especially for non-normal error distributions. Next, the MCQRNN model is applied to real-world climate data by estimating rainfall Intensity–Duration–Frequency (IDF) curves at locations in Canada. IDF curves summarize the relationship between the intensity and occurrence frequency of extreme rainfall over storm durations ranging from minutes to a day. Because annual maximum rainfall intensity is a non-negative quantity that should increase monotonically as the occurrence frequency and storm duration decrease, monotonicity and non-negativity are key constraints in IDF curve estimation. In comparison to standard QRNN models, the ability of the MCQRNN model to incorporate these constraints, in addition to non-crossing, leads to more robust and realistic estimates of extreme rainfall.
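The pinball (check) loss that quantile regression minimizes can be illustrated with a minimal sketch; the MCQRNN architecture itself is not reproduced here, and the names below are our own. Among constant predictors, the tau-th empirical quantile minimizes this loss, and it is the building block that a composite (multi-tau) objective stacks together — fitting each tau independently is what allows the crossing that MCQRNN's shared monotone structure rules out.

```python
import numpy as np

def pinball_loss(y, q_hat, tau):
    # Tilted absolute ("check") loss: under-prediction is weighted by tau,
    # over-prediction by (1 - tau).
    u = y - q_hat
    return float(np.mean(np.maximum(tau * u, (tau - 1.0) * u)))

# A coarse grid search over constant predictors recovers the median
# (tau = 0.5) of a toy sample, as the theory says it should.
y = np.array([1.0, 2.0, 3.0, 4.0, 100.0])
grid = np.linspace(0.0, 100.0, 10001)
best = grid[int(np.argmin([pinball_loss(y, c, 0.5) for c in grid]))]
```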

6.
A spatial coherency function model of ground motion for design applications
This paper reviews the spatial coherency models of ground motion in common use and proposes a new spatial coherency function model with a unified form, oriented toward engineering seismic design applications. On this basis, analytical expressions are derived for the modal combination coefficients required in multi-support response spectrum and power spectrum calculations, avoiding time-consuming numerical integration. With the proposed model and method, the computation time for structural response under multi-support seismic excitation is reduced to less than 1/20 of that of the integration approach, making the multi-support response spectrum and power spectrum methods practical in terms of computing time.
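The paper's unified model is not reproduced above; as a hedged illustration of the general form such coherency functions take, here is a classical Luco-Wong-type lagged coherency, which decays with separation distance and frequency. The parameter values are placeholders, not the paper's fitted parameters.

```python
import numpy as np

def lagged_coherency(d, f, alpha=0.2, v_s=600.0):
    # Luco-Wong-type model: |gamma(d, f)| = exp(-(alpha * omega * d / v_s)^2),
    # with separation d in m, frequency f in Hz, shear-wave velocity v_s in m/s.
    # alpha and v_s here are illustrative placeholder values.
    omega = 2.0 * np.pi * np.asarray(f, dtype=float)
    return np.exp(-(alpha * omega * np.asarray(d, dtype=float) / v_s) ** 2)

# Coherency matrix for a line of supports spaced 100 m apart, at 5 Hz:
x = np.arange(5) * 100.0
gamma = lagged_coherency(np.abs(x[:, None] - x[None, :]), 5.0)
```

Coherency is 1 at zero separation and decays toward 0 for widely spaced supports, which is what makes multi-support excitation differ from uniform-base excitation.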

7.
The equivalent linearization method approximates the maximum displacement response of nonlinear structures through the corresponding equivalent linear system. By using the particle swarm optimization technique, a new statistical approach is developed to determine the key parameters of such an equivalent linear system over a 2D space of period and damping ratio. The new optimization criterion realizes the consideration of the structural safety margin in the equivalent linearization method when applied to the performance-based seismic design/evaluation of engineering structures. As an application, equations for equivalent system parameters of both bilinear hysteretic and stiffness-degrading single-degree-of-freedom systems are deduced under the assumption of a constant ductility ratio. Error analyses are also performed to validate the proposed approach.
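The paper obtains its equivalent-system parameters by statistical optimization; as a baseline sketch only, the classical secant-stiffness period and energy-based hysteretic damping for a bilinear SDOF held at constant ductility can be written as follows. These closed forms are the textbook starting point, not the paper's PSO-fitted equations.

```python
import math

def equivalent_linear_bilinear(T0, zeta0, mu, alpha):
    # Secant-stiffness equivalent period and energy-based equivalent viscous
    # damping for a bilinear SDOF at constant ductility mu, with post-yield
    # stiffness ratio alpha (0 <= alpha < 1).
    k_ratio = (1.0 + alpha * (mu - 1.0)) / mu      # secant / initial stiffness
    T_eq = T0 / math.sqrt(k_ratio)                 # softer system -> longer period
    zeta_hyst = (2.0 / math.pi) * ((mu - 1.0) * (1.0 - alpha)) \
        / (mu * (1.0 + alpha * (mu - 1.0)))
    return T_eq, zeta0 + zeta_hyst
```

At mu = 1 (elastic response) the formulas collapse to the original period and damping, as they must.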

8.
This article deals with the right-tail behavior of a response distribution \(F_Y\) conditional on a regressor vector \({\mathbf {X}}={\mathbf {x}}\) restricted to the heavy-tailed case of Pareto-type conditional distributions \(F_Y(y|\ {\mathbf {x}})=P(Y\le y|\ {\mathbf {X}}={\mathbf {x}})\), with heaviness of the right tail characterized by the conditional extreme value index \(\gamma ({\mathbf {x}})>0\). We particularly focus on testing the hypothesis \({\mathscr {H}}_{0,tail}:\ \gamma ({\mathbf {x}})=\gamma _0\) of constant tail behavior for some \(\gamma _0>0\) and all possible \({\mathbf {x}}\). When considering \({\mathbf {x}}\) as a time index, the term trend analysis is commonly used. In the recent past, several such trend analyses of extreme-value data have been published, mostly focusing on time-varying modeling of location or scale parameters of the response distribution. In many such environmental studies a simple test against trend based on Kendall's tau statistic is applied. This test is powerful when the center of the conditional distribution \(F_Y(y|{\mathbf {x}})\) changes monotonically in \({\mathbf {x}}\), for instance, in a simple location model \(\mu ({\mathbf {x}})=\mu _0+x\cdot \mu _1\), \({\mathbf {x}}=(1,x)'\), but the test is rather insensitive to monotonic tail behavior, say, \(\gamma ({\mathbf {x}})=\eta _0+x\cdot \eta _1\). This has to be considered, since for many environmental applications the main interest is in the tail rather than the center of a distribution. Our work is motivated by this problem, and it is our goal to demonstrate the opportunities and the limits of detecting and estimating non-constant conditional heavy-tail behavior with regard to applications from hydrology. We present and compare four different procedures by simulations and illustrate our findings on real data from hydrology: weekly maxima of hourly precipitation from France and monthly maximal river flows from Germany.

9.
In flood risk management, the divergent concept of resilience of a flood defense system cannot be fully defined quantitatively by one indicator, and multiple indicators need to be considered simultaneously. In this paper, a multi-objective optimization (MOO) design framework is developed to determine the optimal protection level of a levee system based on different resilience indicators that depend on the probabilistic features of the flood damage cost arising under the uncertain nature of rainfalls. An evolutionary-based MOO algorithm is used to find a set of non-dominated solutions, known as Pareto optimal solutions, for the optimal protection level. The objective functions, specifically the resilience indicators of severity, variability and graduality, that account for the uncertainty of rainfall can be evaluated by stochastic sampling of rainfall amount together with model simulations of incurred flood damage estimation for the levee system. However, these model simulations, which usually require detailed flood inundation simulation, are computationally demanding. This hinders the wide application of MOO in flood risk management and is circumvented here via a surrogate flood damage modeling technique that is integrated into the MOO algorithm. The proposed optimal design framework is applied to a levee system in a central basin of flood-prone Jakarta, Indonesia. The results suggest that the proposed framework enables the application of MOO with resilience objectives for flood defense system design under uncertainty and solves the decision-making problems efficiently by drastically reducing the required computational time.
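The core of any evolutionary MOO step is the non-dominated (Pareto) filter. A minimal, generic sketch under a minimize-all-objectives convention follows; it illustrates the concept only, not the paper's specific algorithm.

```python
def dominates(q, p):
    # q dominates p when q is no worse in every objective and strictly
    # better in at least one (all objectives are minimized).
    return all(a <= b for a, b in zip(q, p)) and any(a < b for a, b in zip(q, p))

def pareto_front(points):
    # Keep the non-dominated solutions; O(n^2), fine for small populations.
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For a two-objective set such as {(1, 5), (2, 4), (3, 3), (2, 6), (4, 4)}, the filter drops (2, 6) and (4, 4), leaving the trade-off curve between the remaining three solutions.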

10.
For snow avalanches, passive defense structures are generally designed by considering high return period events. However, defining a return period turns out to be tricky as soon as different variables are simultaneously considered. This problem can be overcome by maximizing the expected economic benefit of the defense structure, but purely stochastic approaches are not possible for paths with a complex geometry in the runout zone. Therefore, in this paper, we include a multivariate numerical avalanche propagation model within a Bayesian decisional framework. The influence of a vertical dam on an avalanche flow is quantified in terms of local energy dissipation with a simple semi-empirical relation. Costs corresponding to dam construction and the damage to a building situated in the runout zone are roughly evaluated for each dam height–hazard value pair, with damage intensity depending on avalanche velocity. Special attention is given to the poor local information to be taken into account for the decision. Using a case study from the French avalanche database, the Bayesian optimal dam height is shown to be more pessimistic than the classical optimal height because of the increasing effect of parameter uncertainty. It also appears that the lack of local information is especially critical for a building exposed to the most extreme events only. The residual hazard after dam construction is analyzed and the sensitivity to the different modelling assumptions is evaluated. Finally, possible further developments of the approach are discussed.

11.
A numerical experiment of flow in variably saturated porous media was performed in order to evaluate the spatial and temporal distribution of the groundwater recharge at the phreatic surface for a shallow aquifer as a function of the input rainfall process and soil heterogeneity. The study focused on the groundwater recharge which resulted from the percolation of the excess rainfall for a 90-day period of an actual precipitation record. Groundwater recharge was defined as the water flux across the moving phreatic surface. The observed spatial non-uniformity of the groundwater recharge was caused by soil heterogeneity and was particularly pronounced during the stage of recharge peak (substantial percolation stage). During that stage the recharge was associated with preferential flow paths defined as soil zones of locally higher hydraulic conductivity. For the periods of low percolation intensity the groundwater recharge exhibited more uniform spatial characteristics. The temporal distribution of the recharge was found to be a function of the frequency and intensity of the rainfall events. An application of sampling design demonstrated the joint influence of the spatial and temporal recharge variability on the cost-effective monitoring of groundwater potentiometric surfaces.

12.
13.
This paper investigates the dynamic behavior and the seismic effectiveness of a non‐conventional Tuned Mass Damper (TMD) with large mass ratio. Compared with a conventional TMD, the device mass is increased to be comparable with the mass of the structure to be protected, aiming at better control performance. In order to avoid the introduction of an excessive additional weight, masses already present on the structure are converted into tuned masses, retaining structural or architectural functions beyond the mere control function. A reduced order model is introduced for design purposes and the optimal design of a large mass ratio TMD for seismic applications is then formulated. The design method is specifically developed to implement High‐Damping Rubber Bearings (HDRB) to connect the device mass to the main structure, taking advantage of combining stiffness and noticeable damping characteristics. Ground acceleration is modeled as a Gaussian random process with white noise power spectral density. A numerical searching technique is used to obtain the optimal design parameter, the frequency ratio alpha, which minimizes the root‐mean‐square displacement response of the main structure. The study finally comprises shaking table tests on a 1:5 scale model under a wide selection of accelerograms, both artificial and natural, to assess the seismic effectiveness of the proposed large mass ratio TMD. Copyright © 2011 John Wiley & Sons, Ltd.

14.
Increases in the frequency and/or severity of extreme climate events are becoming apparent over multi‐decadal timescales at the global scale, albeit with relatively low scientific confidence. At the regional scale, scientific confidence in future trends of extreme event likelihood is stronger, although the trends are spatially variable. Confidence in these extreme climate risks is muddied by the confounding effects of internal landscape system dynamics and external forcing factors such as changes in land use and river and coastal engineering. Geomorphology is a critical discipline in disentangling climate change impacts from other controlling factors, thereby contributing to debates over societal adaptation to extreme events. We review four main geomorphic contributions to flood and storm science. First, we show how palaeogeomorphological and current process studies can extend the historical flood record while also unraveling the complex interactions between internal geomorphic dynamics, human impacts and changes in climate regimes. A key outcome will be improved quantification of flood probabilities and the hazard dimension of flood risk. Second, we present evidence showing how antecedent geomorphological and climate parameters can alter the risk and magnitude of landscape change caused by extreme events. Third, we show that geomorphic processes can both mediate and increase the geomorphological impacts of extreme events, influencing societal risk. Fourth, we show the potential of managing flood and storm risk through the geomorphic system, both near‐term (next 50 years) and longer‐term. We recommend that key methods of managing flooding and erosion will be more effective if risk assessments include palaeodata, if geomorphological science is used to underpin nature‐based management approaches, and if land‐use management addresses changes in geomorphic process regimes that extreme events can trigger. We argue that adopting geomorphologically‐grounded adaptation strategies will enable society to develop more resilient, less vulnerable socio‐geomorphological systems fit for an age of climate extremes. © 2016 The Authors. Earth Surface Processes and Landforms published by John Wiley & Sons Ltd.

15.
Limitations exist in current ensemble-forecasting initial-perturbation methods for describing the interactions among the various spheres of the Earth system. In this study, a new method is proposed, namely the coupled conditional nonlinear optimal perturbation (C-CNOP) method, which incorporates multi-sphere interactions more appropriately. The El Niño-Southern Oscillation (ENSO) is a typical ocean-atmosphere “coupling” (or “interaction”) phenomenon. The C-CNOP method is applied to ensemble forecasting…

16.
A simulation experiment for optimal design hyetograph selection
The aim of this work is to assess the accuracy of literature design hyetographs for the evaluation of peak discharges during flood events. Five design hyetographs are examined in a set of simulations, based upon the following steps: (i) an ideal river basin is defined, characterized by a Beta distribution shaped unit hydrograph (UH); (ii) 1000 years of synthetic rainfall are artificially generated; (iii) a discharge time‐series is obtained from the convolution of the rainfall time‐series and the UH, and the reference T‐year flood is computed from this series; (iv) for the same return period T, the parameters of the intensity–duration–frequency (IDF) curve are estimated from the 1000 years of synthetic rainfall; (v) five design hyetographs are determined from the IDF curves and are convolved with the discrete UH to find the corresponding design hydrographs; (vi) the hydrograph peaks are compared with the reference T‐year flood and the advantages and drawbacks of each of the five approaches are evaluated. The rainfall and UH parameters are varied, and the whole procedure is repeated to assess the sensitivity of results to the system configuration. We found that all design hyetographs produce flood peak estimates that are consistently biased in most of the climatic and hydrologic conditions considered. In particular, significant underestimation of the design flood results from the adoption of any rectangular hyetograph used in the context of the rational formula. In contrast, the Chicago hyetograph tends to overestimate peak flows. In two cases it is sufficient to multiply the result by a constant scaling factor to obtain robust and nearly unbiased estimates of the design floods. Copyright © 2007 John Wiley & Sons, Ltd.
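Step (v), convolving a design hyetograph with the discrete UH to obtain the design hydrograph, can be sketched in a few lines. The ordinates below are made-up placeholders, not those of the study.

```python
import numpy as np

# Dimensionless unit-hydrograph ordinates (summing to 1) and a rectangular
# design hyetograph; both are illustrative placeholders.
uh = np.array([0.05, 0.20, 0.35, 0.25, 0.10, 0.05])
rain = np.array([2.0, 2.0, 2.0])          # effective rainfall per time step

q = np.convolve(rain, uh)                  # design hydrograph ordinates
peak = float(q.max())                      # compared against the reference T-year flood
```

Because the UH ordinates sum to 1, the hydrograph volume equals the rainfall volume, which is a convenient sanity check on any discrete convolution of this kind.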

17.
Adequate knowledge of soil moisture storage as well as evaporation and transpiration at the land surface is essential to the understanding and prediction of the reciprocal influences between land surface processes and weather and climate. Traditional techniques for soil moisture measurement are ground-based, but space-based sampling is becoming available owing to recent improvements in remote sensing techniques. A fundamental question regarding soil moisture observation is how to estimate the sampling error for a given sampling scheme [G.R. North, S. Nakamoto, J. Atmos. Ocean Tech. 6 (1989) 985–992; G. Kim, J.B. Valdes, G.R. North, C. Yoo, J. Hydrol., submitted]. In this study we provide the formalism for estimating the sampling errors for the cases of ground-based sensors and space-based sensors used both separately and together. For the study, a model for soil moisture dynamics by D. Entekhabi, I. Rodriguez-Iturbe [Adv. Water Res. 17 (1994) 35–45] is introduced, and an example application is given for the Little Washita basin using the Washita '92 soil moisture data. As a result of the study we found that the ground-based sensor network is ineffective for large or continental-scale observation and should instead be limited to small-scale intensive observation, such as for a preliminary study.

18.
Source/body edge detection is a common feature in the processing and interpretation of potential field data sets. A wide range of spatial derivatives is available to enhance the information contained in the basic data. Here the ability of these procedures to assist with the mapping interpretation of non‐potential field data is considered. The study uses airborne electromagnetic (conductivity) data but also provides a general context for other conductivity/resistivity data, provided the non‐potential field nature of active and thus spatially‐focused, measurements is acknowledged. The study discusses and demonstrates the application of a range of common spatial derivative procedures, including the analytic signal and upward continuation, to both magnetic and conductivity data. The ability of the tilt derivative to provide enhanced mapping of conductivity data is considered in detail. Tilt and its associated functions are formed by taking combinations of vertical and horizontal derivatives of the data set. Theoretical forward modelling studies are first carried out to assess the performance of the tilt derivative in relation to the detection and definition of concealed conductivity structure. The tilt derivative embodies automatic gain control that normalizes the detection and definition of both weak and strong conductivity gradients across an appropriate subsurface depth range. The use of high‐order spatial derivatives inevitably results in a degree of noise (cultural perturbation) amplification that is survey and technique specific. Both of these aspects are considered using practical case studies of jointly obtained magnetic and conductivity data at a variety of spatial scales.

19.
The design and the management of pump-and-treat (PAT) remediation systems for contaminated aquifers under uncertain hydrogeological settings and parameters often involve decisions that trade off cost optimality against reliability. Both design objectives can be improved by planning site characterization programs that reduce subsurface parameter uncertainty. However, the cost for subsurface investigation often weighs heavily upon the budget of the remedial action and must thus be taken into account in the trade-off analysis. In this paper, we develop a stochastic data-worth framework with the purpose of estimating the economic opportunity of subsurface investigation programs. Since the spatial distribution of hydraulic conductivity is most often the major source of uncertainty, we focus on the direct sampling of hydraulic conductivity at prescribed locations of the aquifer. The data worth of hydraulic conductivity measurements is estimated from the reduction of the overall management cost ensuing from the reduction in parameter uncertainty obtained from sampling. The overall cost is estimated as the expected value of the cost of installing and operating the PAT system plus penalties incurred due to violations of cleanup goals and constraints. The crucial point of the data-worth framework is represented by the so-called pre-posterior analysis. Here, the trade-off between decreasing overall costs and increasing site-investigation budgets is assessed to determine a management strategy proposed on the basis of the information available at the start of remediation. The goal of the pre-posterior analysis is to indicate whether the proposed management strategy should be implemented as is, or re-designed on the basis of additional data collected with a particular site-investigation program. The study indicates that the value of information is ultimately related to the estimates of cleanup target violations and decision makers’ degree of risk-aversion.
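The pre-posterior logic can be illustrated with a toy discrete example: compare the best expected cost achievable under the prior with the data-averaged best expected cost after a Bayes update on a candidate measurement. All numbers below are invented for illustration; the paper's aquifer model is far richer.

```python
# Two hydraulic-conductivity states with prior P(s1) = 0.3, and two remediation
# designs: A (cheap, fails badly under s1) and B (robust, costlier up front).
prior = {"s0": 0.7, "s1": 0.3}
cost = {"A": {"s0": 1.0, "s1": 5.0}, "B": {"s0": 2.0, "s1": 2.5}}
# Likelihood of a (hypothetical) conductivity sample reading "hi" or "lo":
like = {"hi": {"s0": 0.1, "s1": 0.8}, "lo": {"s0": 0.9, "s1": 0.2}}

def expected_cost(design, belief):
    return sum(belief[s] * cost[design][s] for s in belief)

def best_cost(belief):
    return min(expected_cost(d, belief) for d in cost)

prior_cost = best_cost(prior)              # best design with no extra data
preposterior = 0.0                         # data-averaged best cost after sampling
for z in like:
    p_z = sum(like[z][s] * prior[s] for s in prior)
    posterior = {s: like[z][s] * prior[s] / p_z for s in prior}
    preposterior += p_z * best_cost(posterior)
evoi = prior_cost - preposterior           # expected value of the measurement
```

A positive `evoi` (here 0.48 against a sampling budget) is the signal that the site-investigation program is worth buying before committing to a design.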

20.
On the optimal risk-based design of highway drainage structures
For a proposed highway bridge or culvert, the total cost to the public during its expected service life includes the capital investment in the structures, regular operation and maintenance costs, and various flood-related costs. The flood-related damage costs include items such as replacement and repair costs of the highway bridge or culvert, floodplain property damage costs, user costs from traffic interruptions and detours, and others. As the design discharge increases, the required capital investment increases but the corresponding flood-related damage costs decrease. Hydraulic design of a bridge or culvert using a risk-based approach is to choose among the alternatives the one associated with the least total expected cost. In this paper, the risk-based design procedure is applied to pipe culvert design. The effects of hydrologic uncertainties, such as sample size and the type of flood distribution model, on the optimal culvert design parameters, including the design return period and total expected cost, are examined.
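The least-total-expected-cost principle can be sketched as a one-line trade-off: capital cost rises with the design return period T, while expected annual flood damage falls roughly as damage / T (annual exceedance probability 1/T). The cost functions and coefficients below are invented placeholders, not the paper's data.

```python
def total_expected_cost(T, c0=10.0, c1=5.0, damage=200.0):
    # Amortized capital grows (concavely) with the design return period T,
    # while expected annual flood damage shrinks as damage / T.
    return c0 + c1 * T ** 0.5 + damage / T

# Risk-based design: pick the candidate return period with least total cost.
candidates = [2, 5, 10, 25, 50, 100]
T_opt = min(candidates, key=total_expected_cost)
```

With these placeholder coefficients the optimum falls at an interior candidate (T = 25): designing for too short a return period is dominated by flood damage, and for too long a one by capital cost.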


Copyright©北京勤云科技发展有限公司  京ICP备09084417号