Similar Documents
20 similar documents found (search time: 0 ms)
1.
We present a detrending algorithm for the removal of trends in time series. Trends in time series can be caused by various systematic and random noise sources, such as cloud passages, changes of airmass, telescope vibration, CCD noise, or defects of photometry. Such trends obscure the intrinsic signals of stars and should be removed. We determine the trends from subsets of stars that are highly correlated among themselves. These subsets are selected with a hierarchical tree clustering algorithm: a bottom-up merging algorithm, based on the departure from a normal distribution in the correlations, is developed to identify the subsets, which we call clusters. After identifying the clusters, we determine one trend per cluster as a weighted sum of normalized light curves, and then use quadratic programming to detrend all individual light curves against these trends. Experimental results with synthetic light curves containing artificial trends and events are presented and compared with results from other detrending methods. The algorithm can be applied to trend removal in time series from both narrow- and wide-field astronomy.
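As a rough sketch of the detrending step described above: assume a trend has already been extracted from a cluster of correlated light curves, and remove the best-fitting multiple of it from each light curve. This sketch uses an unweighted mean for the trend and ordinary least squares in place of the paper's weighted sum and quadratic programming; all function names are illustrative.

```python
import numpy as np

def build_trend(light_curves):
    """Trend of a cluster: mean of the normalized light curves (unweighted sketch)."""
    normed = light_curves - light_curves.mean(axis=1, keepdims=True)
    normed /= light_curves.std(axis=1, keepdims=True)
    return normed.mean(axis=0)

def detrend(lc, trend):
    """Subtract the least-squares projection of a light curve onto the trend."""
    coef = np.dot(lc - lc.mean(), trend) / np.dot(trend, trend)
    return lc - coef * trend
```

The quadratic-programming step in the paper generalizes this projection by allowing several trends at once with constrained coefficients.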

2.
We describe the largest data-producing astronomy project of the coming decade – the LSST (Large Synoptic Survey Telescope). The enormous data output, database contents, knowledge discovery, and community science expected from this project will impose massive data challenges on the astronomical research community. One of these challenge areas is the rapid machine learning, data mining, and classification of all novel astronomical events from each 3-gigapixel (6-GB) image obtained every 20 seconds throughout every night for the project duration of 10 years. We describe these challenges and a particular implementation of a classification broker for this data fire hose. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

3.
There is an increasing number of large, digital, synoptic sky surveys, in which repeated observations are obtained over large areas of the sky in multiple epochs. Likewise, there is a growth in the number of (often automated or robotic) follow-up facilities with varied capabilities in terms of instruments, depth, cadence, wavelengths, etc., most of which are geared toward some specific astrophysical phenomenon. As the number of detected transient events grows, an automated, probabilistic classification of the detected variables and transients becomes increasingly important, so that optimal use can be made of follow-up facilities without unnecessary duplication of effort. We describe a methodology now under development for a prototype event classification system; it involves Bayesian and machine learning classifiers, automated incorporation of feedback from follow-up observations, and discriminated or directed follow-up requests. This type of methodology may be essential for the massive synoptic sky surveys of the future. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

4.
We describe briefly the Palomar-Quest (PQ) digital synoptic sky survey, including its parameters, data processing, status, and plans. Exploration of the time domain is now the central scientific and technological focus of the survey. To this end, we have developed a real-time pipeline for the detection of transient sources. We describe some of the early results and lessons learned that may be useful for other, similar projects, and for time-domain astronomy in general. Finally, we discuss some issues and challenges posed by the real-time analysis and scientific exploitation of massive data streams from modern synoptic sky surveys. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

5.
We address the problem of assessing the statistical significance of candidate periodicities found using the so-called 'multiharmonic' periodogram, which is used for the detection of non-sinusoidal signals and is based on the least-squares fitting of truncated Fourier series. The author's recent investigation of the Lomb–Scargle periodogram is extended to the more general multiharmonic periodogram. As a result, closed-form and efficient analytic approximations to the false alarm probability associated with multiharmonic periodogram peaks are obtained. These analytic approximations are tested under various conditions using Monte Carlo simulations, which show them to be precise and robust.
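Analytic false-alarm approximations like those above can always be cross-checked by brute force. A minimal Monte Carlo sketch, using SciPy's single-harmonic Lomb–Scargle periodogram as a stand-in for the multiharmonic one, estimates the FAP of the highest peak under a white-noise null (the data must be scaled to unit variance for this null to apply, and `lombscargle` expects angular frequencies):

```python
import numpy as np
from scipy.signal import lombscargle

def max_peak(t, y, freqs):
    """Highest periodogram peak of a mean-subtracted time series."""
    return lombscargle(t, y - y.mean(), freqs).max()

def fap_monte_carlo(t, freqs, observed_peak, n_sim=200, seed=0):
    """Fraction of unit-variance white-noise realizations whose highest
    peak exceeds the observed one: a brute-force false-alarm probability."""
    rng = np.random.default_rng(seed)
    peaks = [max_peak(t, rng.normal(size=t.size), freqs) for _ in range(n_sim)]
    return np.mean(np.array(peaks) >= observed_peak)
```

The analytic approximations in the paper replace this simulation with closed-form expressions, which matters when the FAP is far below 1/n_sim.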

6.
7.
I provide an incomplete inventory of the astronomical variability that will be found by next-generation time-domain astronomical surveys. These phenomena span the distance range from near-Earth satellites to the farthest Gamma Ray Bursts. The surveys that detect these transients will issue alerts to the greater astronomical community; this decision process must be extremely robust to avoid a slew of "false" alerts, and to maintain the community's trust in the surveys. I review the functionality required of both the surveys and the telescope networks that will be following them up, and the role of VOEvents in this process. Finally, I offer some ideas about object and event classification, which will be explored more thoroughly by other articles in these proceedings. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

8.
The estimation of the frequency, amplitude and phase of a sinusoid from observations contaminated by correlated noise is considered. It is assumed that the observations are regularly spaced, but may suffer missing values or long time stretches with no data. The typical astronomical source of such data is high-speed photoelectric photometry of pulsating stars. The study of the observational noise properties of nearly 200 real data sets is reported: noise can almost always be characterized as a random walk with superposed white noise. A scheme for obtaining weighted non-linear least-squares estimates of the parameters of interest, as well as standard errors of these estimates, is described. Simulation results are presented for both complete and incomplete data. It is shown that, in finite data sets, results are sensitive to the initial phase of the sinusoid.
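The weighted non-linear least-squares step can be sketched with SciPy: `curve_fit` returns both the parameter estimates and a covariance matrix whose diagonal gives the standard errors. The model parametrization and starting values below are illustrative, not the paper's.

```python
import numpy as np
from scipy.optimize import curve_fit

def sinusoid(t, amp, freq, phase, offset):
    return amp * np.sin(2 * np.pi * freq * t + phase) + offset

def fit_sinusoid(t, y, sigma, p0):
    """Weighted non-linear least squares: estimates and their standard errors."""
    popt, pcov = curve_fit(sinusoid, t, y, p0=p0, sigma=sigma, absolute_sigma=True)
    return popt, np.sqrt(np.diag(pcov))
```

Gaps in the data are handled naturally, since the fit uses only the observed (t, y) pairs; correlated (random-walk) noise, however, requires the full covariance treatment described in the paper rather than a diagonal `sigma`.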

9.
It is assumed that   O − C   ('observed minus calculated') values of periodic variable stars are determined by three processes, namely measurement errors, random cycle-to-cycle jitter in the period, and possibly long-term changes in the mean period. By modelling the latter as a random walk, the covariances of all   O − C   values can be calculated. The covariances can then be used to estimate unknown model parameters, and to choose between alternative models. Pseudo-residuals which could be used in model fit assessment are also defined. The theory is illustrated by four applications to spotted stars in eclipsing binaries.
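For illustration, a covariance matrix of this flavour can be written down directly for a simplified two-process model: white measurement error plus independent cycle-to-cycle period jitter, which makes the O − C values themselves a random walk with Cov[i, j] ∝ min(i, j). This sketch omits the paper's third component (the random walk in the mean period, whose contribution is an integrated random walk); the σ values are illustrative.

```python
import numpy as np

def oc_covariance(n, sigma_meas, sigma_jitter):
    """Covariance of n successive O-C values when each cycle's period carries
    independent jitter: Cov[i, j] = s_jitter^2 * min(i, j) + s_meas^2 * I."""
    idx = np.arange(1, n + 1)
    return sigma_jitter**2 * np.minimum.outer(idx, idx) + sigma_meas**2 * np.eye(n)
```

With such a matrix in hand, generalized least squares gives the parameter estimates, and comparing likelihoods under alternative covariance structures selects between models.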

10.
One of the tools used to identify the pulsation modes of stars is a comparison of the amplitudes and phases as observed photometrically at different wavelengths. Proper application of the method requires that the errors on the measured quantities, and the correlations between them, be known (or at least estimated). It is assumed that contemporaneous measurements of the light intensity of a pulsating star are obtained in several wavebands. It is also assumed that the measurements are regularly spaced in time, although there may be missing observations. The amplitude and phase of the pulsation are estimated separately for each of the wavebands, and amplitude ratios and phase differences are calculated. A general scheme for estimating the covariance matrix of the amplitude ratios and phase differences is described. The first step is to fit a time series to the residuals after pre-whitening the observations by the best-fitting sinusoid. The residuals are then cross-correlated to study the interdependence between the errors in the different wavebands. Once the multivariate time-series structure can be modelled, the covariance matrix can be found by bootstrapping. An illustrative application is described in detail.
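The final bootstrap step can be sketched generically: given fitted values and residuals, the covariance of any derived statistic follows by resampling. The sketch below resamples residuals independently for a single series; the paper instead resamples from the fitted multivariate time-series model, which preserves the cross-band and serial correlations that this simple version ignores.

```python
import numpy as np

def bootstrap_cov(stat_fn, fitted, residuals, n_boot=500, seed=0):
    """Covariance matrix of stat_fn under a simple (i.i.d.) residual bootstrap."""
    rng = np.random.default_rng(seed)
    stats = [stat_fn(fitted + rng.choice(residuals, size=residuals.size))
             for _ in range(n_boot)]
    return np.cov(np.array(stats), rowvar=False)
```

In the mode-identification setting, `stat_fn` would refit the sinusoid per waveband and return the vector of amplitude ratios and phase differences.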

11.
12.
We present the results of applying new object classification techniques to the supernova search of the Nearby Supernova Factory. In comparison to simple threshold cuts, more sophisticated methods such as boosted decision trees, random forests, and support vector machines provide dramatically better object discrimination: we reduced the number of nonsupernova candidates by a factor of 10 while increasing our supernova identification efficiency. Methods such as these will be crucial for maintaining a reasonable false positive rate in the automated transient alert pipelines of upcoming large optical surveys. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
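A minimal version of such a real/bogus classifier can be sketched with scikit-learn. The features here are entirely synthetic stand-ins (the actual pipeline uses measured image properties of each candidate detection):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical 3-dimensional feature vectors for real and spurious detections.
X = np.vstack([rng.normal(0.0, 1.0, (200, 3)),   # real transients
               rng.normal(2.0, 1.0, (200, 3))])  # artifacts
y = np.array([1] * 200 + [0] * 200)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
scores = clf.predict_proba(X)[:, 1]  # probability that a candidate is real
```

Thresholding `scores` trades identification efficiency against the false-positive rate; the factor-of-10 reduction quoted above corresponds to choosing such an operating point.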

13.
This contribution aims to introduce the idea that a well-evolved HTN (Heterogeneous Telescope Network) of the far future, with the anticipated addition of very large apertures, could also be made to incorporate the ability to carry out photonic astronomy observations, particularly optical VLBI in a revived Hanbury Brown and Twiss Intensity Interferometry (HBTII) configuration. Such an HTN could exploit its inherent rapid reconfiguration capability to become a multi-aperture distributed photon-counting network able to study higher-order spatiotemporal photon correlations and provide a unique tool for direct diagnostics of astrophysical emission processes. We very briefly review various considerations associated with switching the HTN to a special mode in which single-photon detection events are continuously captured for a posteriori intercorrelation. In this context, photon arrival times should be determined to the highest time resolution possible, and extremely demanding absolute time-keeping and time-distribution schemes should be devised and implemented in the HTN nodes involved. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

14.
Observations of present and future X-ray telescopes include a large number of serendipitous sources of unknown types. They are a rich source of knowledge about X-ray dominated astronomical objects, their distribution, and their evolution. The large number of these sources does not permit their individual spectroscopic follow-up and classification. Here we use Chandra Multi-Wavelength public data to investigate a number of statistical algorithms for the classification of X-ray sources with optical imaging follow-up. We show that, up to statistical uncertainties, each class of X-ray sources has specific photometric characteristics that can be used for its classification. We assess the relative and absolute performance of the classification methods and measured features by comparing the behaviour of physical quantities for statistically classified objects with what is obtained from spectroscopy. We find that, among the methods we have studied, the multi-dimensional probability distribution is the best for classifying both source type and redshift, but it needs a sufficiently large input (learning) data set. In the absence of such data, a mixture of various methods can give a better final result. We discuss some of the potential applications of statistical classification and the enhancement of information obtained in this way. We also assess the effect of the classification methods and input data set on astronomical conclusions such as the distribution and properties of X-ray selected sources. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

15.
16.
DO WE NEED TO MODEL PLATES AT ALL?   Cited by: 2 (self-citations: 0, other citations: 2)
The historical method of obtaining equatorial coordinates for stars observed via astronomical photography has been to convert the measured coordinates to equatorial coordinates with the aid of a plate model which corrects for a variety of effects. However, we now have at our disposal novel smoothing techniques, of considerable generality, which in conjunction with modern star catalogues can reproduce the essence of the plate model while dramatically minimizing both the fortuitous and systematic errors of observation. In this paper we demonstrate that, with this technique and extant catalogues, one can obtain, with at least the same precision and better accuracy, the information necessary to transform the measured coordinates successfully into standard coordinates by a process that involves no sophisticated model for the plate. Using external checks we estimate the increase in accuracy to be of the order of 25 per cent.

17.
18.
The theory of low-order linear stochastic differential equations is reviewed. Solutions to these equations give the continuous time analogues of discrete time autoregressive time-series. Explicit forms for the power spectra and covariance functions of first- and second-order forms are given. A conceptually simple method is described for fitting continuous time autoregressive models to data. Formulae giving the standard errors of the parameter estimates are derived. Simulated data are used to verify the performance of the methods. Irregularly spaced observations of the two hydrogen-deficient stars FQ Aqr and NO Ser are analysed. In the case of FQ Aqr the best-fitting model is of second order, and describes a quasi-periodicity of about 20 d with an e-folding time of 3.7 d. The NO Ser data are best fitted by a first-order model with an e-folding time of 7.2 d.
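For the first-order case, the fitting idea is simple enough to sketch: between irregularly spaced observations a CAR(1) process decays by exp(−Δt/τ), which gives a closed-form Gaussian likelihood for the one-step prediction errors. The sketch below assumes a zero-mean process and plugs in the sample variance for the stationary variance, which the paper's full method estimates jointly; the parameter bounds are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def car1_neg_loglike(log_tau, t, y, var):
    """Negative log-likelihood of a zero-mean CAR(1) process with
    e-folding time tau, observed at irregular times t."""
    tau = np.exp(log_tau)
    phi = np.exp(-np.diff(t) / tau)        # decay between successive points
    resid = y[1:] - phi * y[:-1]           # one-step prediction errors
    v = var * (1.0 - phi**2)               # their conditional variances
    return 0.5 * np.sum(resid**2 / v + np.log(v))

def fit_car1(t, y):
    """Maximum-likelihood e-folding time, searched over log(tau)."""
    res = minimize_scalar(car1_neg_loglike, bounds=(-3.0, 5.0),
                          args=(t, y, y.var()), method="bounded")
    return np.exp(res.x)
```

The second-order (quasi-periodic) model used for FQ Aqr requires a state-space formulation of the same likelihood, but the principle is identical.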

19.
A statistical model is formulated that enables one to analyse jointly the times between maxima and minima in the light curves of monoperiodic pulsating stars. It is shown that the combination of both sets of data into one leads to analyses that are more sensitive. Illustrative applications to the American Association of Variable Star Observers data for a number of long-period variables demonstrate the usefulness of the approach.

20.

Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号