Similar documents
20 similar documents found (search time: 328 ms)
1.
For the offline segmentation of long hydrometeorological time series, a new algorithm is developed that combines dynamic programming with the recently introduced remaining-cost concept of the branch-and-bound approach. The algorithm, called modified dynamic programming (mDP), segments the time series based on the first-order statistical moment. Experiments are performed to test the algorithm on both real-world and artificial time series comprising hundreds or even thousands of terms. The experiments show that the mDP algorithm produces accurate segmentations in much shorter time than previously proposed segmentation algorithms.
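The mDP implementation is not given in the abstract; as a hedged sketch, first-order-moment segmentation by plain dynamic programming (cost = within-segment squared deviation from the segment mean; all names are illustrative, not the paper's code) might look like:

```python
def segment_cost(prefix, prefix_sq, i, j):
    """Sum of squared deviations from the mean over x[i:j] (first-moment cost)."""
    n = j - i
    s = prefix[j] - prefix[i]
    sq = prefix_sq[j] - prefix_sq[i]
    return sq - s * s / n

def dp_segment(x, k):
    """Split x into k contiguous segments minimizing the total within-segment
    squared deviation from each segment's mean. Returns the k-1 breakpoints."""
    n = len(x)
    prefix, prefix_sq = [0.0], [0.0]
    for v in x:
        prefix.append(prefix[-1] + v)
        prefix_sq.append(prefix_sq[-1] + v * v)
    INF = float("inf")
    # cost[m][j]: best cost of splitting x[:j] into m segments
    cost = [[INF] * (n + 1) for _ in range(k + 1)]
    back = [[0] * (n + 1) for _ in range(k + 1)]
    cost[0][0] = 0.0
    for m in range(1, k + 1):
        for j in range(m, n + 1):
            for i in range(m - 1, j):
                c = cost[m - 1][i] + segment_cost(prefix, prefix_sq, i, j)
                if c < cost[m][j]:
                    cost[m][j] = c
                    back[m][j] = i
    # walk back through the optimal split points
    cuts, j = [], n
    for m in range(k, 0, -1):
        j = back[m][j]
        cuts.append(j)
    return sorted(cuts)[1:]  # drop the leading 0

series = [1.0] * 10 + [5.0] * 10 + [2.0] * 10
cuts = dp_segment(series, 3)  # → [10, 20]
```

This exhaustive O(kn²) search is what branch-and-bound-style pruning (as in mDP) accelerates for series with thousands of terms.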

2.
Segmentation algorithm for long time series analysis
Time series analysis is an important issue in earth-science-related engineering applications such as hydrology, meteorology, and environmetrics. Inconsistency and nonhomogeneity that might arise in a time series yield segments with different statistical characteristics. In this study, an algorithm based on the first-order statistical moment (average) of a time series is developed and applied to five time series with lengths ranging from 84 to nearly 1,300 items. Comparison with existing segmentation algorithms demonstrates the applicability and usefulness of the proposed algorithm in long hydrometeorological and geophysical time series analysis.

3.
A new method was developed and implemented as an Excel Visual Basic for Applications (VBA) algorithm that uses trigonometry in an innovative way to overlap recession segments of time series and create master recession curves (MRCs). Based on this trigonometric approach, the algorithm horizontally translates succeeding recession segments of the time series, placing their vertex, that is, the highest recorded value of each recession segment, directly onto the appropriate connection line defined by measurement points of the preceding recession segment. The new method and algorithm continue the development of methods and algorithms for MRC generation, where the first published method was based on a multiple linear/nonlinear regression model approach (Posavec et al. 2006). The newly developed trigonometry-based method was tested on real case-study examples and compared with the previously published multiple linear/nonlinear regression model-based method. The results show that in some cases, that is, for some time series, the trigonometry-based method creates narrower overlaps of the recession segments, resulting in higher coefficients of determination R², while in other cases the multiple linear/nonlinear regression model-based method remains superior. The Excel VBA algorithm for modeling MRCs using the trigonometry approach is implemented in a spreadsheet tool (MRCTools v3.0, written by and available from Kristijan Posavec, Zagreb, Croatia) containing the previously published VBA algorithms for MRC generation and separation. All algorithms within MRCTools v3.0 are open access and available free of charge, supporting the idea of running science on open and freely available software.
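As an illustration of the geometric idea only (not the published VBA code), a succeeding recession segment can be shifted horizontally so that its vertex lands on the line interpolated through the preceding segment's points; the function below is a hypothetical sketch assuming monotonically decreasing segments:

```python
def translate_segment(prev_t, prev_q, seg_t, seg_q):
    """Shift a recession segment horizontally so its vertex (its first and
    highest value) lies on the line through the preceding segment's points.
    Both discharge arrays are assumed monotonically decreasing."""
    vertex_q = seg_q[0]
    for i in range(len(prev_q) - 1):
        q1, q2 = prev_q[i], prev_q[i + 1]
        if q1 >= vertex_q >= q2:
            # linear interpolation along the preceding segment
            frac = (q1 - vertex_q) / (q1 - q2) if q1 != q2 else 0.0
            t_on_prev = prev_t[i] + frac * (prev_t[i + 1] - prev_t[i])
            shift = t_on_prev - seg_t[0]
            return [t + shift for t in seg_t]
    raise ValueError("vertex lies outside the preceding segment's range")

shifted = translate_segment([0, 1, 2, 3], [10, 8, 6, 4], [0, 1, 2], [7, 5, 3])
# shifted → [1.5, 2.5, 3.5]
```

Repeating this for each successive segment builds up one overlapped master curve, which is the core of the MRC construction described above.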

4.
With the availability of spatially distributed data, distributed hydrologic models are increasingly used to simulate spatially varied hydrologic processes in order to understand and manage the natural and human activities that affect watershed systems. Multi-objective optimization methods have been applied to calibrate distributed hydrologic models using observed data from multiple sites. As the time consumed by running these complex models increases substantially, selecting efficient and effective multi-objective optimization algorithms becomes a nontrivial issue. In this study, we evaluated a multi-algorithm, genetically adaptive multi-objective method (AMALGAM) for multi-site calibration of a distributed hydrologic model, the Soil and Water Assessment Tool (SWAT), and compared its performance with two widely used evolutionary multi-objective optimization (EMO) algorithms, the Strength Pareto Evolutionary Algorithm 2 (SPEA2) and the Non-dominated Sorting Genetic Algorithm II (NSGA-II). To provide insight into each method's overall performance, the three methods were tested in four watersheds with various characteristics. The test results indicate that AMALGAM consistently provides competitive or superior results compared with the other two methods. The multi-method search framework of AMALGAM, which can flexibly and adaptively utilize multiple optimization algorithms, makes it a promising tool for multi-site calibration of the distributed SWAT. For practical use of AMALGAM, it is suggested to run this method in multiple trials with a relatively small number of model runs rather than running it once with long iterations. In addition, incorporating different multi-objective optimization algorithms and multi-mode search operators into AMALGAM deserves further research. Copyright © 2009 John Wiley & Sons, Ltd.
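AMALGAM itself combines several search operators, but both it and NSGA-II rely on Pareto ranking of candidate parameter sets; a minimal sketch of fast non-dominated sorting (assuming all objectives are minimized; names illustrative) is:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objs):
    """Return fronts as lists of indices; front 0 is the Pareto-optimal set."""
    n = len(objs)
    dominated_by = [[] for _ in range(n)]  # indices that i dominates
    dom_count = [0] * n                    # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(objs[i], objs[j]):
                dominated_by[i].append(j)
            elif dominates(objs[j], objs[i]):
                dom_count[i] += 1
        if dom_count[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

pts = [(1, 5), (2, 2), (5, 1), (3, 4), (4, 4)]
fronts = non_dominated_sort(pts)  # → [[0, 1, 2], [3], [4]]
```

In multi-site calibration, each tuple would hold one error metric per gauging site, and the first front is the trade-off set reported to the modeller.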

5.
Dense networks of wireless structural health monitoring systems can effectively overcome the disadvantages of current wire-based sparse sensing systems. However, recorded data sets may have relative time delays due to interference in radio transmission or inherent internal sensor clock errors. For structural system identification and damage detection purposes, sensor data must be time synchronized. The need for time synchronization of sensor data is illustrated through a series of tests on asynchronous data sets. Results from the identification of structural modal parameters show that frequencies and damping ratios are not influenced by asynchronous data; however, the error in identifying structural mode shapes can be significant. The results from these tests are summarized in Appendix A. The objective of this paper is to present algorithms for measurement data synchronization, and two algorithms are proposed for this purpose. The first is applicable when the input signal to a structure can be measured: the time delay between an output measurement and the input is identified based on an ARX (auto-regressive model with exogenous input) model for the input-output pair of recordings. The second can be used for a structure subject to ambient excitation, where the excitation cannot be measured: an ARMAV (auto-regressive moving average vector) model is constructed from two output signals and the time delay between them is evaluated. The proposed algorithms are verified with simulated data and with recorded seismic response data from multi-story buildings. The influence of noise on the time-delay estimates is also assessed. Copyright © 2004 John Wiley & Sons, Ltd.
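The ARX/ARMAV estimators of the paper are not reproduced here; a far simpler stand-in, estimating an integer-sample delay by maximizing the cross-correlation between two channels, can illustrate the synchronization task:

```python
import math

def estimate_delay(x, y, max_lag):
    """Estimate the integer-sample delay of y relative to x by maximizing
    the cross-correlation over lags in [-max_lag, max_lag]."""
    def xcorr(lag):
        if lag >= 0:
            pairs = zip(x[:len(x) - lag], y[lag:])
        else:
            pairs = zip(x[-lag:], y[:len(y) + lag])
        return sum(a * b for a, b in pairs)
    return max(range(-max_lag, max_lag + 1), key=xcorr)

x = [math.sin(0.3 * t) for t in range(200)]
y = [0.0] * 5 + x[:-5]           # y is x delayed by 5 samples
lag = estimate_delay(x, y, 10)   # → 5
```

Once the delay is known, one record is shifted before modal identification; the model-based ARX/ARMAV approaches above generalize this idea to noisy, band-limited structural responses.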

6.
Seismic reliability assessment of lifeline networks gives rise to various technical challenges, which are mostly caused by a large number of network components, complex network topology, and statistical dependence between component failures. For effective risk assessment and probabilistic inference based on post‐hazard observations, various non‐simulation‐based algorithms have been developed, including the selective recursive decomposition algorithm (S‐RDA). To facilitate the application of such an algorithm to large networks, a new multi‐scale approach is developed in this paper. Using spectral clustering algorithms, a network is first divided into an adequate number of clusters such that the number of inter‐cluster links is minimized while the number of the nodes in each cluster remains reasonably large. The connectivity around the identified clusters is represented by super‐links. The reduced size of the simplified network enables the S‐RDA algorithm to perform the network risk assessment efficiently. When the simplified network is still large even after a clustering, additional levels of clustering can be introduced to have a hierarchical modeling structure. The efficiency and effectiveness of the proposed multi‐scale approach are demonstrated successfully by numerical examples of a hypothetical network, a gas transmission pipeline network, and a water transmission network. Copyright © 2014 John Wiley & Sons, Ltd.
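A hedged sketch of the clustering step: the sign pattern of the Fiedler vector (the second eigenvector of the graph Laplacian) yields a bipartition that tends to cut few inter-cluster links. The power-iteration routine below is illustrative only, not the paper's spectral clustering implementation:

```python
def fiedler_partition(adj, iters=200):
    """Bipartition a graph by the sign of the Fiedler vector, obtained by
    power iteration on the shifted matrix M = c*I - L with the constant
    direction (L's null vector) projected out each iteration."""
    n = len(adj)
    deg = [sum(row) for row in adj]
    c = 2.0 * max(deg) + 1.0           # shift so M is positive definite
    v = [(-1.0) ** i for i in range(n)]
    for _ in range(iters):
        mean = sum(v) / n
        v = [x - mean for x in v]      # stay orthogonal to the constant vector
        w = []
        for i in range(n):
            Lv = deg[i] * v[i] - sum(adj[i][j] * v[j] for j in range(n))
            w.append(c * v[i] - Lv)
        norm = max(abs(x) for x in w) or 1.0
        v = [x / norm for x in w]
    return [1 if x > 0 else 0 for x in v]

# two triangles joined by a single bridge edge (nodes 2-3)
adj = [
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
]
labels = fiedler_partition(adj)
```

For a lifeline network, each recovered cluster would be collapsed behind super-links before running S-RDA; recursive application of the same split gives the hierarchical multi-scale structure described above.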

7.
In this paper, Rosenbrock-based algorithms originally developed for real-time testing of linear systems with dynamic substructuring are extended for use on nonlinear systems. With this objective in mind, and for minimal overhead, both two- and three-stage linearly implicit real-time compatible algorithms were endowed with Jacobian matrices requiring only one evaluation at the beginning of each time step. Moreover, these algorithms were improved with subcycling strategies. In detail, the paper briefly introduces Rosenbrock-based L-Stable Real-Time (LSRT) algorithms together with linearly implicit and explicit structural integrators, which are now commonly used to perform real-time tests. The LSRT algorithms are then analysed in terms of linearized stability with reference to an emulated spring pendulum, chosen as a nonlinear test problem because it exhibits a large and relatively slow nonlinear circular motion coupled to an axial motion that can be set to be stiff. An accuracy analysis on this system was performed for all the algorithms described. Following this, a coupled spring-pendulum example typical of real-time testing is analysed with respect to both stability and accuracy. Finally, the results of representative numerical simulations and real-time substructure tests, considering nonlinearities in both the numerical and the physical substructure, are explored. These tests demonstrate how the LSRT algorithms can be used for substructuring tests with strongly nonlinear components. Copyright © 2010 John Wiley & Sons, Ltd.
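The LSRT schemes themselves are multi-stage; the family's basic building block, a one-stage linearly implicit (Rosenbrock-Euler) step with a single Jacobian evaluation per step, can be sketched for the scalar case (names and the test problem are illustrative):

```python
def rosenbrock1_step(f, dfdy, y, t, h, gamma=1.0):
    """One-stage linearly implicit (Rosenbrock-Euler) step for y' = f(t, y):
    the Jacobian J = df/dy is evaluated once per step, and a single linear
    solve (scalar here) replaces the nonlinear iteration an implicit
    method would need."""
    J = dfdy(t, y)
    k = f(t, y) / (1.0 - h * gamma * J)
    return y + h * k

# stiff test problem y' = -50 y: explicit Euler diverges at h = 0.1
# (amplification factor |1 - 5| = 4), the linearly implicit step decays
y, t, h = 1.0, 0.0, 0.1
for _ in range(20):
    y = rosenbrock1_step(lambda t, y: -50.0 * y, lambda t, y: -50.0, y, t, h)
    t += h
```

The per-step amplification here is 1/(1 + 5) = 1/6, which is the L-stable behaviour that makes such schemes attractive when the physical substructure introduces stiff axial dynamics.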

8.
Local governmental agencies are increasingly undertaking potentially costly "status-and-trends" monitoring to evaluate the effectiveness of stormwater control measures and land-use planning strategies or to satisfy regulatory requirements. Little guidance is presently available for such efforts, and so we have explored the application, interpretation, and temporal limitations of well-established hydrologic metrics of runoff changes from urbanization, making use of an unusually long-duration, high-quality data set from the Pacific Northwest (USA) with direct applicability to urban and urbanizing watersheds. Three metrics previously identified for their utility in capturing biologically important hydrologic conditions that respond to watershed urbanization were examined: TQmean (the fraction of time that flows exceed the mean annual discharge), the Richards-Baker Index (characterizing flashiness relative to the mean discharge), and the annual tally of wet-season day-to-day flow reversals (the total number of days that reverse the prior day's increasing or decreasing trend). All three successfully stratify watersheds across a range of urbanization, as measured by the total contributing area of urban development. All respond with statistical significance to multidecadal trends in urbanization, but none detects trends in watershed-scale urbanization over the course of a single decade. This suggests a minimum period over which dependable trends in hydrologic alteration (or improvement) can be detected with confidence. The metrics also prove less well suited to urbanizing watersheds in a semi-arid climate, with only flow reversals showing a response consistent with prior findings from more humid regions. We also explore the use of stage as a surrogate for discharge in calculating these metrics, recognizing potentially significant agency cost savings in data collection with minimal loss of information. This approach is feasible but cannot be implemented under current data-reporting practices, as it requires measurement of water-depth values and preservation of the full precision of the original recorded data. With these caveats, however, hydrologic metrics based on stage should prove as useful as, or more useful than, those based on subsequent calculations of discharge, at least in the context of status-and-trends monitoring.
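The three metrics are simple to compute from a daily discharge series; the functions below follow the definitions quoted in the abstract, with tie and zero-change handling as assumptions of this sketch:

```python
def tqmean(q):
    """Fraction of time that discharge exceeds its mean."""
    m = sum(q) / len(q)
    return sum(1 for v in q if v > m) / len(q)

def rb_index(q):
    """Richards-Baker flashiness: day-to-day path length over total flow."""
    return sum(abs(b - a) for a, b in zip(q, q[1:])) / sum(q)

def flow_reversals(q):
    """Days that reverse the prior day's rising or falling trend
    (zero-change days carry the previous trend, an assumption here)."""
    count, prev = 0, 0
    for a, b in zip(q, q[1:]):
        d = (b > a) - (b < a)
        if d != 0 and prev != 0 and d != prev:
            count += 1
        if d != 0:
            prev = d
    return count

q = [1.0, 2.0, 3.0, 2.0, 2.0, 4.0, 1.0]  # toy daily discharge
```

To use stage as a surrogate, the same functions would be applied to water-depth values instead of discharge, which is why full-precision depth reporting matters for the approach discussed above.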

9.
How much data is needed for calibration of a hydrological catchment model? In this paper we address this question by evaluating the information contained in different subsets of discharge and groundwater time series for multi-objective calibration of a conceptual hydrological model within the framework of an uncertainty analysis. The study site was a 5.6-km² catchment within the Forsmark research site in central Sweden along the Baltic coast. Daily time series were available for discharge and several groundwater wells within the catchment for a continuous 1065-day period. The hydrological model was a site-specific modification of the conceptual HBV model. The uncertainty analyses were based on a selective Monte Carlo procedure. Thirteen subsets of the complete time series were investigated, with the idea that these represent realistic intermittent sampling strategies. Data subsets included split-samples and various combinations of weekly, monthly, and quarterly fixed-interval subsets, as well as a 53-day 'informed observer' subset that utilized once-per-month samples except during March and April (the months containing large and often dominant snowmelt events), when sampling was once per week. Several of these subsets, including that of the informed observer, provided very similar constraints on model calibration and parameter identification as the full data record, in terms of credibility bands on simulated time series, posterior parameter distributions, and performance indices calculated against the full dataset. This result suggests that hydrological sampling designs can, at least in some cases, be optimized. Copyright © 2009 John Wiley & Sons, Ltd.

10.
Karstic watersheds are highly complex hydrogeological systems characterized by multiscale behaviour corresponding to the different pathways of water through them. A central issue in karstic spring discharge fluctuations is the presence and identification of characteristic time scales in the discharge time series. To identify and characterize these dynamics, we acquired discharge data over many years at the outlets of two karstic watersheds in the south of France, at 3-min, 30-min, and daily sampling rates. To our knowledge, these hydrological records constitute the longest uninterrupted discharge time series available at these sampling rates. Analysing the records at different levels of detail leads to a natural scale analysis of the time series in a multifractal framework. Using a universal class of multifractal models based on multiplicative cascade processes, the analysis first highlights two cut-off scales, around 1 h and 16 h, that correspond to distinct responses of the aquifer drainage system. We then provide estimates of the multifractal parameters α and C1 and of the moment of divergence qD corresponding to the behaviour of karstic systems. These results constitute the first estimates of the multifractal characteristics of karstic springflows based on 10 years of high-resolution discharge time series and should lead to several improvements in rainfall-karstic springflow simulation models. Copyright © 2012 John Wiley & Sons, Ltd.
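The universal multifractal estimators for α, C1, and qD involve more machinery (e.g. double trace moments) than fits here; a minimal sketch of the underlying scale analysis, block-averaging the series and computing q-th moments at several scales, is:

```python
def block_moments(x, q, scales):
    """q-th moments of the series block-averaged at several scales.
    In a multifractal analysis, the log of these moments is regressed
    against the log of the scale; nonlinearity of the resulting exponent
    as a function of q indicates multifractality."""
    out = []
    for s in scales:
        n = len(x) // s
        blocks = [sum(x[i * s:(i + 1) * s]) / s for i in range(n)]
        out.append(sum(abs(b) ** q for b in blocks) / n)
    return out
```

Breaks in the log-log scaling of such moments are how cut-off scales like the 1 h and 16 h values above reveal themselves.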

11.
One of the methods for studying the near-surface low-velocity zone, and for subsequent determination of static corrections, is the technique of employing first arrivals. During the past few years several computer algorithms, based on simplifying assumptions, have been suggested for automatic determination of first arrivals. This paper suggests a new method for automatic picking of first arrivals that works under quite general assumptions concerning the character of the data. The method is based on the correlation properties of signals and the application of a statistical criterion for estimating first-arrival times. A number of dimensionless parameters are used in the algorithm, making it possible to regulate the reliability and resolution of the picking procedure. The second stage of the algorithm is the parameterization of the traveltime curve, that is, a division of the previously obtained t-x curve into separate rectilinear segments. The suggested parameterization algorithm is based on a heuristic use of some properties of maximum likelihood estimates. This permits location of the breakpoints of the t-x curve and estimation of the parameters of each rectilinear segment. A computer program has been written based on the picking and parameterization algorithms. The program has been tested on a large amount of field data, and the results show that it works at least as well as manual picking.
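The paper's correlation-based statistical criterion is not reproduced here; a common, much simpler first-arrival picker, the STA/LTA ratio, can stand in to illustrate automatic picking (all window lengths and the threshold are illustrative):

```python
import random

def sta_lta_pick(trace, ns, nl, threshold):
    """First-arrival pick via the short-term/long-term average ratio.
    Returns the start index of the first short window whose mean absolute
    amplitude exceeds `threshold` times the preceding long-window mean,
    or None if no trigger occurs."""
    for i in range(nl + ns, len(trace)):
        sta = sum(abs(v) for v in trace[i - ns:i]) / ns
        lta = sum(abs(v) for v in trace[i - ns - nl:i - ns]) / nl
        if lta > 0 and sta / lta > threshold:
            return i - ns
    return None

random.seed(1)
noise = [random.gauss(0.0, 0.05) for _ in range(100)]
onset = [random.gauss(0.0, 1.0) for _ in range(100)]
pick = sta_lta_pick(noise + onset, ns=5, nl=30, threshold=4.0)
```

The picked times across traces would then feed the second stage described above: fitting the t-x curve with rectilinear segments and locating its breakpoints.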

12.
Two analyses for investigating non-stationarity in environmental time series, one based on multiple regression and the other on the Holt-Winters algorithm, are presented. They are applied to monthly rainfall and average maximum temperature time series of lengths between 38 and 108 years, from six stations in the Murray Darling Basin and four cities in eastern Australia. The first analysis focuses on the residuals after fitting regression models that allow for seasonal variation, the Pacific Decadal Oscillation (PDO), and the Southern Oscillation Index (SOI). The models provided evidence that rainfall is reduced during periods of negative SOI, and that the interaction between PDO and SOI accentuates this effect during periods of negative PDO. There was no evidence of any trend in either the PDO or SOI time series. The residuals from this regression were analysed with a cumulative sum (CUSUM) technique, and statistical significance was assessed using a Monte Carlo method. The residuals were also analysed for volatility, autocorrelation, long-range dependence, and spatial correlation. For all ten rainfall and temperature time series, CUSUM plots of the residuals provided evidence of non-stationarity in both temperature and rainfall after removing seasonal effects and the effects of PDO and SOI. Rainfall was generally lower in the first half of the twentieth century and higher during the second half, although it decreased again over the last 10 years. This pattern was highlighted with 5-year moving-average plots. The residuals for temperature showed a complementary pattern, with increases in temperature corresponding to decreased rainfall. The second analysis decomposed the rainfall and temperature time series into random variation about an underlying level, trend, and additive seasonal effects; changes in the level, trend, and seasonal effects were tracked using a Holt-Winters algorithm. The results of this analysis were qualitatively similar to those of the regression analysis. Copyright © 2010 John Wiley & Sons, Ltd.
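A hedged sketch of the CUSUM-with-Monte-Carlo idea, using a permutation null rather than the paper's exact procedure:

```python
import random

def cusum(residuals):
    """Cumulative sum of deviations from the series mean."""
    m = sum(residuals) / len(residuals)
    out, s = [], 0.0
    for r in residuals:
        s += r - m
        out.append(s)
    return out

def cusum_significance(residuals, n_sim=500, seed=0):
    """Monte Carlo p-value for the CUSUM range under a permutation null:
    shuffle the residuals and count how often the shuffled range reaches
    the observed one. A small p-value suggests non-stationarity."""
    rng = random.Random(seed)
    def crange(r):
        c = cusum(r)
        return max(c) - min(c)
    observed = crange(residuals)
    shuffled = list(residuals)
    exceed = 0
    for _ in range(n_sim):
        rng.shuffle(shuffled)
        if crange(shuffled) >= observed:
            exceed += 1
    return exceed / n_sim
```

A level shift like the mid-century rainfall change described above produces a CUSUM excursion far larger than any permutation reproduces, hence a near-zero p-value.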

13.
Long-term hydrological data are key to understanding catchment behaviour and to decision making within water management and planning. Given the lack of observed data in many regions worldwide, such as Central America, hydrological models are an alternative for reproducing historical streamflow series. Additional types of information, beyond locally observed discharge, can be used to constrain model parameter uncertainty for ungauged catchments. Given the strong influence that large-scale climatic processes exert on streamflow variability in the Central American region, we explored the use of climate variability knowledge as process constraints on the simulated discharge uncertainty for a Costa Rican catchment treated as ungauged. To reduce model uncertainty, we first rejected parameter relationships that disagreed with our understanding of the system. Then, based on this reduced parameter space, we applied the climate-based process constraints at long-term, inter-annual, and intra-annual timescales. The first step reduced the initial number of parameter sets by 52%, and the climate constraints then removed a further 3%. Finally, we compared the climate-based constraints with a constraint based on global maps of low-flow statistics. This latter constraint proved more restrictive than those based on climate variability (removing 66% of parameter sets compared with 3%). Even so, the climate-based constraints rejected inconsistent model simulations that the low-flow statistics constraint did not. Taken together, the constraints produced narrowed simulation uncertainty bands, and the median simulated discharge followed the observed time series to a similar level as an optimized model. All the constraints were found useful in constraining model uncertainty for a basin treated as ungauged. This shows that our method is promising for modelling long-term flow data for ungauged catchments on the Pacific side of Central America and that similar methods can be developed for ungauged basins in other regions where climate variability exerts a strong control on streamflow variability.

14.
This paper synthesizes 10 years' worth of interannual time-series spaceborne ERS-1 and RADARSAT-1 synthetic aperture radar (SAR) data collected coincident with daily geophysical and surface radiation measurements of snow-covered, land-fast first-year sea ice (FYI) from the Seasonal Sea Ice Monitoring and Modeling Site, the Collaborative-Interdisciplinary Cryospheric Experiment, and the 1998 North Water Polynya study over the period 1992 to 2002. The objectives are to investigate the seasonal co-relationship of the SAR time-series dataset with selected surface mass (bulk snow thickness) and climate state variables (surface temperature and albedo) measured in situ, for the purpose of measuring the interannual variability of sea ice spring melt transitions and validating a time-series SAR methodology for estimating sea ice surface mass and climate state parameters. We begin with a review of the salient processes required for our interpretation of time-series microwave backscatter from land-fast FYI. Our results suggest that time-series SAR data can reliably measure the timing and duration of surface albedo transitions at daily to weekly time-scales and at spatial scales on the order of hundreds of metres. Snow thickness on FYI immediately prior to melt onset explains a statistically significant portion of the variability in the timing from SAR-detected melt onset to pond onset for SAR time series made up of more than 25 images. Our results also show that the funicular regime of snowmelt, resolved in time-series SAR data at a temporal resolution of approximately 2.5 images per week, is not detectable for snow covers less than 25 cm thick. Copyright © 2007 John Wiley & Sons, Ltd.

15.
Unsteady bedload transport was measured in two c. 5 m wide anabranches of a gravel-bed braided stream draining the Haut Glacier d'Arolla, Switzerland, during the 1998 and 1999 melt seasons. Bedload was directly sampled using 152 mm square Helley-Smith type samplers deployed from a portable measuring bridge, and independent transport rate estimates for the coarser size fractions were obtained from the dispersion of magnetically tagged tracer pebbles. Bedload transport time series show pulsing behaviour under both marginal (1998) and partial (1999) transport regimes. There are generally weak correlations between transport rates and shear stresses determined from velocity data recorded at the measuring bridge. Characteristic parameters of the bedload grain-size distributions (D50, D84) are weakly correlated with transport rates. Analysis of the full bedload grain-size distributions reveals greater structure, with a tendency for transport to become less size selective at higher transport rates. The bedload time series show autoregressive behaviour but are difficult to distinguish by this method. State-space plots, and associated measures of time-series separation, reveal the structure of the time series more clearly. The measured pulses have distinctly different time-series characteristics from those modelled using a one-dimensional sediment routing model in which bed shear stress and grain size are varied randomly. These results suggest a mechanism of pulse generation based on irregular low-amplitude bedforms that may be generated in-channel or may represent the advection of material supplied by bank erosion events. Copyright © 2003 John Wiley & Sons, Ltd.

16.
Ground motions with strong velocity pulses are of particular interest to structural earthquake engineers because they have the potential to impose extreme seismic demands on structures. Accurate classification of records is essential in several earthquake engineering fields where pulse‐like ground motions should be distinguished from nonpulse‐like records, such as probabilistic seismic hazard analysis and seismic risk assessment of structures. This study proposes an effective method to identify pulse‐like ground motions having single, multiple, or irregular pulses. To effectively characterize the intrinsic pulse‐like features, the concept of an energy‐based significant velocity half‐cycle, which is visually identifiable, is first presented. Ground motions are classified into 6 categories according to the number of significant half‐cycles in the velocity time series. The pulse energy ratio is used as an indicator for quantitative identification, and then the energy threshold values for each type of ground motions are determined. Comprehensive comparisons of the proposed approach with 4 benchmark identification methods are conducted, and the results indicate that the methodology presented in this study can more accurately and efficiently distinguish pulse‐like and nonpulse‐like ground motions. Also presented are some insights into the reasons why many pulse‐like ground motions are not detected successfully by each of the benchmark methods.
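As a hedged illustration of the energy-based half-cycle idea (not the paper's exact definition), a velocity series can be split at zero crossings and the energy fraction of the largest half-cycle used as a pulse indicator:

```python
import math

def half_cycle_energies(v):
    """Split a velocity series at zero crossings and return the energy
    (sum of squared velocity) of each half-cycle."""
    energies, e, sign = [], 0.0, 0
    for x in v:
        s = (x > 0) - (x < 0)
        if sign != 0 and s != 0 and s != sign:
            energies.append(e)
            e = 0.0
        if s != 0:
            sign = s
        e += x * x
    if e > 0:
        energies.append(e)
    return energies

def pulse_energy_ratio(v):
    """Fraction of total energy carried by the largest half-cycle."""
    e = half_cycle_energies(v)
    return max(e) / sum(e)

# one strong half-cycle followed by weaker oscillation
pulse = [math.sin(math.pi * t / 20) * (3.0 if t < 20 else 0.5)
         for t in range(100)]
```

A record whose largest half-cycle carries most of the energy (a ratio near 1) would be flagged as pulse-like; threshold values per category are what the study above calibrates.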

17.
Statistical tests on climate and hydrological series at different spatial resolutions can yield different regional trends owing to spatial heterogeneity and its temporal variability. In this study, annual series of the precipitation heterogeneity indices, the concentration index (CI) and the number of wet days (NW), along with annual total precipitation, were calculated from at-site daily precipitation series during 1962-2011 in the headwater basin of the Huaihe River, China. The regional trends of the indices were first detected from the at-site series using the aligned and intrablock methods, with field significance tests that account for spatial heterogeneity across sites. The detected trends were then compared with the trends of regional index series derived from daily areal average precipitation (DAAP), which averages out at-site differences and thus neglects spatial heterogeneity. The at-site-based regional test shows increasing trends of CI and NW in the basin, consistent with the tests on individual sites, most of which were characterized by increasing CI and NW. However, the DAAP-derived regional series of CI and NW showed a decreasing trend. The disparity between the two regional trend tests arises from a temporal change in the spatial heterogeneity, which was quantified with generalized additive models for location, scale, and shape. This study highlights that, compared with averaging indices, averaging at-site daily precipitation can lead to erroneous regional trend inference on annual precipitation heterogeneity indices. More attention should be paid to temporal variability in spatial heterogeneity when data at large scales are used for regional trend detection on hydro-meteorological events associated with intra-annual heterogeneity.

18.
Gerd Bürger. Hydrological Processes, 2017, 31(22): 4039-4042
A main obstacle to trend detection in time series occurs when they are autocorrelated. By reducing the effective sample size of a series, autocorrelation leads to decreased trend significance. Numerous recipes attempt to mitigate the effect of autocorrelation, either by adjusting for the reduced effective sample size or by removing the autocorrelated components of a series. This short note deals with the latter, also called prewhitening (PW). It is known that removing autocorrelation also removes part of the trend, which may affect the signal-to-noise ratio. Two popular methods address this problem: trend-free prewhitening (TFPW) and iterative prewhitening. Although it is generally accepted that both methods reduce the adverse effects of PW on the trend magnitude, the corresponding effects on statistical significance have not been clearly stated for TFPW. Using a Monte Carlo approach, it is demonstrated that the two methods entail quite different Type-I error rates. Iterative prewhitening produces rates that are generally close to the nominal significance level. TFPW, however, shows very high Type-I error rates with increasing autocorrelation. The corresponding rate of false trend detections is unacceptable for applications, so published trends based on TFPW need to be reassessed.
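Classic PW and a TFPW-style variant are easy to sketch; the version below uses an OLS slope where practice typically uses the Sen slope, so it is illustrative only:

```python
import random

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation coefficient."""
    m = sum(x) / len(x)
    num = sum((x[i] - m) * (x[i + 1] - m) for i in range(len(x) - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den

def ols_slope(x):
    """OLS slope of x against its time index (stand-in for the Sen slope)."""
    n = len(x)
    tbar, xbar = (n - 1) / 2.0, sum(x) / n
    return (sum((t - tbar) * (x[t] - xbar) for t in range(n))
            / sum((t - tbar) ** 2 for t in range(n)))

def prewhiten(x):
    """Classic PW: subtract the estimated lag-1 AR component."""
    r = lag1_autocorr(x)
    return [x[i] - r * x[i - 1] for i in range(1, len(x))]

def tfpw(x):
    """TFPW-style sketch: detrend, prewhiten the detrended series,
    then restore the trend before testing."""
    slope = ols_slope(x)
    detr = [x[t] - slope * t for t in range(len(x))]
    pw = prewhiten(detr)
    return [pw[t - 1] + slope * t for t in range(1, len(x))]

# synthetic AR(1) noise plus a linear trend
rng = random.Random(42)
ar = [0.0]
for _ in range(999):
    ar.append(0.6 * ar[-1] + rng.gauss(0.0, 1.0))
trended = [v + 0.02 * t for t, v in enumerate(ar)]
```

Because TFPW restores the full trend to a series whose noise variance has been reduced, its test statistic is inflated, which is the mechanism behind the elevated Type-I error rates reported above.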

19.
Various types of neural networks have been proposed in previous papers for hydrological applications. However, most of these networks are static: they are trained in batch mode, updating only after the whole training data set has been presented, so the time-varying characteristics of hydrological processes have not been modelled well. In this paper, we present an alternative approach using an artificial neural network trained by real-time recurrent learning (RTRL) for stream-flow forecasting. To characterize the RTRL algorithm, we first compare its predictive ability with that of least-squares-estimated autoregressive integrated moving average models on several synthetic time series. Our results demonstrate that the RTRL network learns with high efficiency and is an adequate model for time-series prediction. We also evaluated the RTRL network using rainfall-runoff data of the Da-Chia River in Taiwan. The results show that RTRL can be applied with high accuracy to real-time stream-flow forecasting. Copyright © 2002 John Wiley & Sons, Ltd.
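For a single recurrent unit, the RTRL forward sensitivity recursions fit in a few lines; the sketch below (the teacher-student setup and all constants are assumptions, not the paper's network) shows the online, per-step updates that distinguish RTRL from batch training:

```python
import math, random

def rtrl_train(xs, ys, lr=0.1, epochs=5, seed=0):
    """Minimal real-time recurrent learning for one recurrent unit
    h[t] = tanh(w*h[t-1] + u*x[t] + b): the sensitivities dh/dw, dh/du,
    dh/db are carried forward in time and the weights are updated at
    every step rather than after a whole batch pass."""
    rng = random.Random(seed)
    w, u, b = rng.uniform(-0.1, 0.1), rng.uniform(-0.1, 0.1), 0.0
    losses = []
    for _ in range(epochs):
        h = dw = du = db = 0.0
        sse = 0.0
        for x, target in zip(xs, ys):
            h_new = math.tanh(w * h + u * x + b)
            g = 1.0 - h_new * h_new              # tanh'
            # forward sensitivity recursions: the core of RTRL
            dw, du, db = g * (h + w * dw), g * (x + w * du), g * (1.0 + w * db)
            err = h_new - target
            sse += err * err
            w, u, b = w - lr * err * dw, u - lr * err * du, b - lr * err * db
            h = h_new
        losses.append(sse / len(xs))
    return (w, u, b), losses

# teacher-student setup: targets generated by a network of the same form
rng = random.Random(1)
xs = [rng.uniform(-1.0, 1.0) for _ in range(300)]
ys, h = [], 0.0
for x in xs:
    h = math.tanh(0.5 * h + 0.8 * x)
    ys.append(h)

params, losses = rtrl_train(xs, ys)
```

The per-step weight updates are what make the scheme usable for real-time forecasting, where new observations arrive continuously.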

20.
Comparison of waveform inversion, part 2: phase approach
In this paper, we take advantage of the natural separation into amplitude and phase of a logarithmic‐based approach to full‐wavefield inversion and concentrate on deriving purely kinematic approaches for both conventional and logarithmic‐based methods. We compare the resulting algorithms theoretically and empirically. To maintain consistency between this and the previous paper in this series, we continue with the same symbolism and notation and apply our new algorithms to the same three data sets. We show that both of these new techniques, although different in implementation style, share the same computational methodology. We also show that reverse‐time back‐propagation of the residuals for our new kinematic methods continues to be the basis for calculation of the steepest‐descent vector. We conclude that the logarithmic phase‐based method is more practical than its conventionally based counterpart, but, in spite of the fact that the conventional algorithm appears unstable, differences are not great.
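The amplitude/phase separation of the logarithmic approach can be illustrated with the complex logarithm of the data ratio; note that the principal-value phase is wrapped to (-π, π], so this is only a sketch:

```python
import cmath

def log_residual(obs, pred):
    """Logarithmic residual ln(obs/pred) per frequency component: the real
    part is the amplitude misfit (log ratio), the imaginary part the phase
    misfit, which is the natural separation the logarithmic approach uses."""
    return [cmath.log(o / p) for o, p in zip(obs, pred)]

def phase_residual(obs, pred):
    """Kinematic (phase-only) misfit: keep just the imaginary part."""
    return [z.imag for z in log_residual(obs, pred)]
```

In a purely kinematic inversion, only these phase residuals would be back-propagated to form the steepest-descent vector, discarding the amplitude term entirely.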
