Similar articles (20 results)
1.
We describe briefly the Palomar‐Quest (PQ) digital synoptic sky survey, including its parameters, data processing, status, and plans. Exploration of the time domain is now the central scientific and technological focus of the survey. To this end, we have developed a real‐time pipeline for detection of transient sources. We describe some of the early results and lessons learned, which may be useful for other, similar projects and for time‐domain astronomy in general. Finally, we discuss some issues and challenges posed by the real‐time analysis and scientific exploitation of massive data streams from modern synoptic sky surveys. (© 2008 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim)

2.
Temporal sampling does more than add another axis to the vector of observables. Instead, under the recognition that how objects change (and move) in time speaks directly to the physics underlying astronomical phenomena, next‐generation wide‐field synoptic surveys are poised to revolutionize our understanding of just about anything that goes bump in the night (which is just about everything at some level). Still, even the most ambitious surveys will require targeted spectroscopic follow‐up to fill in the physical details of newly discovered transients. We are now building a new system intended to ingest and classify transient phenomena in near real‐time from high‐throughput imaging data streams. Described herein, the Transient Classification Project at Berkeley will be making use of classification techniques operating on "features" extracted from time series and contextual (static) information. We also highlight the need for community adoption of a standard representation of astronomical time series data (i.e., "VOTimeseries"). (© 2008 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim)
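As an illustration of the kind of "features" such a classifier might operate on, the minimal sketch below (Python with numpy/scipy; the feature set, frequency grid, and toy light curve are assumptions, not the Berkeley project's actual pipeline) extracts a few summary statistics and a Lomb–Scargle period from an irregularly sampled light curve. In a real system, feature vectors like these, combined with contextual information, would be passed to a trained classifier.

```python
# Illustrative sketch only: extracting simple "features" from an irregularly
# sampled light curve, of the kind a transient classifier might consume.
import numpy as np
from scipy.signal import lombscargle
from scipy.stats import skew

def light_curve_features(t, mag):
    """Return a small dictionary of summary features for one light curve."""
    amplitude = 0.5 * (np.max(mag) - np.min(mag))
    # Lomb-Scargle periodogram on a coarse frequency grid (assumed range)
    freqs = np.linspace(0.01, 10.0, 2000)              # cycles per day
    power = lombscargle(t, mag - mag.mean(), 2 * np.pi * freqs)
    best_period = 1.0 / freqs[np.argmax(power)]
    return {
        "amplitude": amplitude,
        "std": np.std(mag),
        "skew": skew(mag),
        "best_period_days": best_period,
    }

# Example with fabricated data
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 100, 150))                   # days
mag = 15.0 + 0.3 * np.sin(2 * np.pi * t / 3.7) + rng.normal(0, 0.05, t.size)
print(light_curve_features(t, mag))
```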

3.
The last decade has seen a dramatic change in the way astronomy is carried out. The advent of new microelectronic devices such as CCDs has dramatically increased the amount of observed data. Large, in some cases all‐sky, surveys have emerged in almost every wavelength range of the observable electromagnetic spectrum. This large amount of data has to be organized and published electronically, and a new style of data retrieval is essential to exploit all the hidden information in the multiwavelength data. Many statistical algorithms required for these tasks run reasonably fast when using small sets of in‐memory data, but take noticeable performance hits when operating on large databases that do not fit into memory. We utilize new software technologies to develop and evaluate fast multidimensional indexing schemes that inherently follow the underlying, highly non‐uniform distribution of the data: layered uniform indices, hierarchical binary space partitioning, and sampled flat Voronoi tessellation of the data. These techniques can dramatically speed up operations such as finding similar objects by example, classifying objects, or comparing extensive simulation sets with observations. (© 2007 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim)
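The abstract does not spell out the indexing schemes; as a rough illustration of the "find similar objects by example" use case, here is a minimal sketch using a kd-tree (one simple form of binary space partitioning) from scipy, with a made-up catalogue of photometric features rather than the paper's own indices or data.

```python
# Minimal sketch of "find similar objects by example" with a kd-tree,
# one simple form of binary space partitioning (not the paper's own indices).
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
# Hypothetical catalogue: 100 000 objects with 5 photometric features each
catalogue = rng.normal(size=(100_000, 5))

tree = cKDTree(catalogue)          # build the spatial index once

# Query: the 10 objects most similar to a given example object
example = catalogue[42]
distances, indices = tree.query(example, k=10)
print(indices)                     # row numbers of the nearest neighbours
```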

4.
An automated spectral classification technique for large sky surveys is proposed. We first perform spectral line matching to determine redshift candidates for an observed spectrum, and then estimate the spectral class by measuring the similarity between the observed spectrum and the shifted templates for each redshift candidate. As a byproduct of this approach, the spectral redshift can also be obtained with high accuracy. Compared with some approaches based on computerized learning methods in the literature, the proposed approach requires no training, a step that is time‐consuming and sensitive to the selection of the training set. Both simulated data and observed spectra are used to test the approach; the results show that the proposed method is efficient, and it can achieve a correct classification rate as high as 92.9%, 97.9% and 98.8% for stars, galaxies and quasars, respectively.
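The second step of this approach (template matching at each redshift candidate) could look schematically like the sketch below, which assumes the line-matching step has already produced a list of redshift candidates; the similarity measure (a normalized cross-correlation), the template dictionary, and all thresholds are placeholders, not the authors' implementation.

```python
# Schematic sketch: score each (class template, redshift candidate) pair by
# resampling the shifted template onto the observed wavelength grid and
# computing a normalized cross-correlation. Placeholder data structures.
import numpy as np

def best_class_and_redshift(wave_obs, flux_obs, templates, z_candidates):
    """templates: dict mapping class name -> (rest_wavelength, flux) arrays."""
    best = (None, None, -np.inf)
    for z in z_candidates:
        for name, (wave_rest, flux_t) in templates.items():
            # Redshift the template and resample onto the observed grid
            flux_shifted = np.interp(wave_obs, wave_rest * (1.0 + z), flux_t,
                                     left=np.nan, right=np.nan)
            ok = np.isfinite(flux_shifted)
            if ok.sum() < 100:                     # require enough overlap (assumed cut)
                continue
            a, b = flux_obs[ok], flux_shifted[ok]
            a = (a - a.mean()) / a.std()
            b = (b - b.mean()) / b.std()
            score = np.mean(a * b)                 # normalized cross-correlation
            if score > best[2]:
                best = (name, z, score)
    return best                                    # (class, redshift, similarity)

# Usage (placeholder inputs):
# cls, z_best, score = best_class_and_redshift(wave_obs, flux_obs,
#     {"star": (w_s, f_s), "galaxy": (w_g, f_g)}, z_candidates=[0.0, 0.12, 0.31])
```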

5.
We present the results of applying new object classification techniques to the supernova search of the Nearby Supernova Factory. In comparison to simple threshold cuts, more sophisticated methods such as boosted decision trees, random forests, and support vector machines provide dramatically better object discrimination: we reduced the number of nonsupernova candidates by a factor of 10 while increasing our supernova identification efficiency. Methods such as these will be crucial for maintaining a reasonable false positive rate in the automated transient alert pipelines of upcoming large optical surveys. (© 2008 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim)
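A generic version of one of the methods named above (a random forest), applied to fabricated per-candidate features with scikit-learn, might look like the following; it is only a sketch of the technique, not the Nearby Supernova Factory pipeline, and the feature count, labels, and probability cut are assumptions.

```python
# Generic sketch of screening supernova candidates with a random forest.
# Features and labels are fabricated; not the actual survey pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(5000, 8))                 # per-candidate image features (fake)
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.5, 5000) > 1.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

# Rank candidates by classifier probability instead of hard threshold cuts
proba = clf.predict_proba(X_te)[:, 1]
print("fraction flagged at p > 0.9:", np.mean(proba > 0.9))
```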

6.
Observations by present and future X‐ray telescopes include a large number of serendipitous sources of unknown types. They are a rich source of knowledge about X‐ray dominated astronomical objects, their distribution, and their evolution. The large number of these sources does not permit their individual spectroscopic follow‐up and classification. Here we use Chandra Multi‐Wavelength public data to investigate a number of statistical algorithms for classification of X‐ray sources with optical imaging follow‐up. We show that, up to statistical uncertainties, each class of X‐ray sources has specific photometric characteristics that can be used for its classification. We assess the relative and absolute performance of classification methods and measured features by comparing the behaviour of physical quantities for statistically classified objects with what is obtained from spectroscopy. We find that, among the methods we have studied, a multi‐dimensional probability distribution is the best for classifying both source type and redshift, but it needs a sufficiently large input (learning) data set. In the absence of such data, a mixture of various methods can give a better final result. We discuss some potential applications of the statistical classification and the enhancement of information obtained in this way. We also assess the effect of classification methods and input data sets on astronomical conclusions such as the distribution and properties of X‐ray selected sources. (© 2008 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim)

7.
Protocols for dealing with time‐sensitive observations have traditionally focused on robotic telescope networks and other types of automated dedicated facilities, mostly in the optical domain. Using UKIRT and JCMT as examples, which are infrared and sub‐millimetre telescopes with a traditional PI‐dominated user base, we discuss how such facilities can join a heterogeneous telescope network to their mutual advantage. (© 2006 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim)

8.
I provide an incomplete inventory of the astronomical variability that will be found by next‐generation time‐domain astronomical surveys. These phenomena span the distance range from near‐Earth satellites to the farthest Gamma Ray Bursts. The surveys that detect these transients will issue alerts to the greater astronomical community; this decision process must be extremely robust to avoid a slew of "false" alerts, and to maintain the community's trust in the surveys. I review the functionality required of both the surveys and the telescope networks that will be following them up, and the role of VOEvents in this process. Finally, I offer some ideas about object and event classification, which will be explored more thoroughly by other articles in these proceedings. (© 2008 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim)

9.
The present generation of weak lensing surveys will be superseded by surveys run from space with much better sky coverage and a higher signal-to-noise ratio, such as the Supernova/Acceleration Probe (SNAP). However, removal of any systematics or noise will remain a major cause of concern for any weak lensing survey. One of the best ways of spotting any undetected source of systematic noise is to compare surveys that probe the same part of the sky. In this paper we study various measures that are useful in cross-correlating weak lensing surveys with diverse survey strategies. Using two different statistics – the shear components and the aperture mass – we construct a class of estimators which encode such cross-correlations. These techniques will also be useful in studies where the entire source population from a specific survey can be divided into various redshift bins to study cross-correlations among them. We perform a detailed study of the angular size dependence and redshift dependence of these observables and of their sensitivity to the background cosmology. We find that one-point and two-point statistics provide complementary tools which allow one to constrain cosmological parameters and to obtain a simple estimate of the noise of the survey.
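For reference, the aperture-mass statistic mentioned above has the standard schematic form below (a compensated filter U over the convergence κ, or equivalently a filter Q over the tangential shear γ_t, for an aperture centred on the origin); the notation is generic rather than the paper's own, and the cross-correlation estimator between two surveys then takes the form of an average of the product of their aperture masses over common apertures.

```latex
% Standard aperture-mass definition (schematic; not the paper's notation).
% U is a compensated filter, Q the associated tangential-shear filter.
M_{\mathrm{ap}}
  = \int \mathrm{d}^2\vartheta\, U(|\boldsymbol{\vartheta}|)\,\kappa(\boldsymbol{\vartheta})
  = \int \mathrm{d}^2\vartheta\, Q(|\boldsymbol{\vartheta}|)\,\gamma_t(\boldsymbol{\vartheta}),
\qquad
Q(\vartheta) = \frac{2}{\vartheta^2}\int_0^{\vartheta}\!\vartheta'\,U(\vartheta')\,\mathrm{d}\vartheta' - U(\vartheta).
```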

10.
An automated classification technique for large stellar surveys is proposed. It uses the extended Kalman filter as a feature selector and pre-classifier of the data, and radial basis function neural networks for the classification. Experiments with real data have shown that the correct classification rate can reach as high as 93%, which is quite satisfactory. When different system models are selected for the extended Kalman filter, the classification results are relatively stable. It is shown that, for this particular case, the result using the extended Kalman filter is better than that using principal component analysis.
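As a generic sketch of the radial-basis-function network part only (the extended-Kalman-filter feature-selection stage is omitted, and the data are fabricated), one common construction uses k-means centres for the Gaussian basis functions and a linear read-out layer; this is an illustration of the technique, not the authors' code.

```python
# Minimal RBF-network classifier: k-means centres define Gaussian basis
# functions, a logistic regression provides the linear read-out layer.
# The extended-Kalman-filter feature-selection stage of the paper is omitted.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def rbf_features(X, centres, width):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 12))                    # spectral features (fake)
y = (X[:, 0] - X[:, 5] > 0).astype(int)            # two stellar classes (fake)

km = KMeans(n_clusters=30, n_init=10, random_state=0).fit(X)
width = np.median(np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1))
Phi = rbf_features(X, km.cluster_centers_, width)
readout = LogisticRegression(max_iter=1000).fit(Phi, y)
print("training accuracy:", readout.score(Phi, y))
```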

11.
The Planck Satellite will survey the entire sky in 9 millimeter/submillimeter bands and detect thousands of galaxy clusters via their thermal Sunyaev‐Zel'dovich (SZ) effect. The unprecedented volume of the survey will permit the construction of a unique catalog of massive clusters out to redshifts of order unity. We describe the expected contents of this catalog and use an empirical model of the intra‐cluster gas to predict the X‐ray properties of Planck SZ clusters. Using this information we show how a ∼10 Ms follow‐up program on XMM‐Newton could increase by ∼100‐fold the number of clusters with measured temperatures in the redshift range z = 0.5–1. Such a large sample of well‐studied massive clusters at these redshifts would be a powerful cosmological tool and a significant legacy for XMM‐Newton. (© 2008 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim)

12.
The Effelsberg‐Bonn H I Survey (EBHIS) comprises an all‐sky survey north of Dec = –5° of the Milky Way and the local volume out to a redshift of z ≃ 0.07. Using state‐of‐the‐art Field Programmable Gate Array (FPGA) spectrometers, it is feasible to cover the 100 MHz bandwidth with 16 384 spectral channels. High‐speed storage of H I spectra allows us to minimize the degradation by Radio Frequency Interference (RFI) signals. Regular EBHIS survey observations started during the winter season 2008/2009 after extensive system evaluation and verification tests. To date, we have surveyed about 8000 square degrees, focusing during the first all‐sky coverage on the Sloan Digital Sky Survey (SDSS) area and the northern extension of the Magellanic Stream. The first whole‐sky coverage will be finished in 2011. Already this first coverage will reach the same sensitivity level as the Parkes Milky Way (GASS) and extragalactic (HIPASS) surveys. EBHIS data will be calibrated, stray‐radiation corrected, and freely accessible to the scientific community via a web interface. In this paper we demonstrate the scientific data quality and explore the expected harvest of this new all‐sky survey. (© 2011 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim)

13.
The technique of gravitational microlensing is currently unique in its ability to provide a sample of terrestrial exoplanets around both Galactic disk and bulge stars, allowing their abundance to be measured and their distribution with respect to mass and orbital separation to be determined. Thus, valuable information for testing models of planet formation and orbital migration is gathered, constituting an important piece in the puzzle of the existence of life forms throughout the Universe. In order to achieve these goals in reasonable time, a well‐coordinated effort involving a network of either 2 m or 4×1 m telescopes at each site is required. It could lead to the first detection of an Earth‐mass planet outside the Solar system, and even planets less massive than Earth could be discovered. From April 2008, ARTEMiS (Automated Robotic Terrestrial Exoplanet Microlensing Search) is planned to provide a platform for a three‐step strategy of survey, follow‐up, and anomaly monitoring. As an expert system embedded in eSTAR (e‐Science Telescopes for Astronomical Research), ARTEMiS will give advice for follow‐up based on a priority algorithm that selects targets to be observed in order to maximize the expected number of planet detections, and will also alert on deviations from ordinary microlensing light curves by means of the SIGNALMEN anomaly detector. While the use of the VOEvent (Virtual Observatory Event) protocol allows a direct interaction with the telescopes that are part of the HTN (Heterogeneous Telescope Networks) consortium, additional interfaces provide means of communication with all existing microlensing campaigns that rely on human observers. The success of discovering a planet by microlensing critically depends on the availability of a telescope in a suitable location at the right time, which can mean within 10 min. To encourage follow‐up observations, microlensing campaigns are therefore releasing photometric data in real time. On ongoing planetary anomalies, world‐wide efforts are being undertaken to make sure that sufficient data are obtained, since there is no second chance. Real‐time modelling offers the opportunity of live discovery of extra‐solar planets, thereby providing "Science live to your home". (© 2008 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim)

14.
RoboNet‐II uses a global network of robotic telescopes to perform follow‐up observations of microlensing events in the Galactic Bulge. The current network consists of three 2 m telescopes located in Hawaii and Australia (owned by Las Cumbres Observatory) and the Canary Islands (owned by Liverpool John Moores University). In future years the network will be expanded by deploying clusters of 1 m telescopes in other suitable locations. A principal scientific aim of the RoboNet‐II project is the detection of cool extra‐solar planets by the method of gravitational microlensing. These detections will provide crucial constraints to models of planetary formation and orbital migration. RoboNet‐II acts in coordination with the PLANET microlensing follow‐up network and uses an optimization algorithm ("web‐PLOP") to select the targets and a distributed scheduling paradigm (eSTAR) to execute the observations. Continuous automated assessment of the observations and anomaly detection is provided by the ARTEMiS system. (© 2009 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim)

15.
We simulated both the matter and light (galaxy) distributions in a wedge of the Universe and calculated the gravitational lensing magnification caused by the mass along the line-of-sight of galaxies and galaxy groups identified in sky surveys. A large volume redshift cone containing cold dark matter particles mimics the expected cosmological matter distribution in a flat universe with low matter density and a cosmological constant. We generate a mock galaxy catalogue from the matter distribution and identify thousands of galaxy groups in the luminous sky projection. We calculate the expected magnification around galaxies and galaxy groups and then the induced quasi-stellar object (QSO)–lens angular correlation due to magnification bias. This correlation is observable and can be used both to estimate the average mass of the lens population and to make cosmological inferences. We also use analytical calculations and various analyses to compare the observational results with theoretical expectations for the cross-correlation between faint QSOs from the 2dF Survey and nearby galaxies and groups from the Automated Plate Measurement and Sloan Digital Sky Survey Early Data Release. The observed QSO–lens anticorrelations are stronger than the predictions for the cosmological model used. This suggests that there could be unknown systematic errors in the observations and data reduction, or that the model used is not adequate. If the observed signal is assumed to be solely due to gravitational lensing, then the lensing is stronger than expected, due to more massive galactic structures or more efficient lensing than simulated.

16.
Using a batch of low-redshift spectral data of AGNs taken from the SDSS, an automated K-nearest neighbor method is developed to classify AGNs into two types: broad-line and narrow-line AGNs. According to the different characteristics of the emission lines of broad-line and narrow-line AGNs, the spectral wavebands containing the Hβ, [OIII], Hα and [NII] emission lines are used separately or in combination in the classification experiments. The experiments show that the best results are obtained when only the wavebands of Hα and [NII] are used, and that for a training set of size 1000 and a testing set of 3313, we can achieve a speed of 32.89 single classifications per second. It is demonstrated that, where the typical spectral features are sufficiently exploited, the automated classification method is feasible for the spectra of AGNs in large-scale spectral surveys and provides a fast and straightforward alternative to classification schemes based on the FWHM values of emission lines or the line-strength-ratio diagnostic diagrams.
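A generic version of the classification step might look like the sketch below, which rebins the flux in a waveband around Hα/[NII] and feeds it to a k-nearest-neighbour classifier; the toy spectra, band limits, binning, and class labels are assumptions for illustration, not the authors' actual pre-processing or SDSS data.

```python
# Sketch: k-nearest-neighbour classification of AGN spectra using only the
# flux rebinned in a waveband around Halpha and [N II] (toy data throughout).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def band_features(wave, flux, lo=6400.0, hi=6800.0, n_bins=50):
    """Median-normalize and rebin the flux within [lo, hi] Angstroms."""
    sel = (wave >= lo) & (wave <= hi)
    f = flux[sel] / np.median(flux[sel])
    edges = np.linspace(lo, hi, n_bins + 1)
    idx = np.digitize(wave[sel], edges) - 1
    return np.array([f[idx == i].mean() if np.any(idx == i) else 1.0
                     for i in range(n_bins)])

# Toy spectra: broad-line objects get a wide Gaussian Halpha, narrow-line a narrow one
rng = np.random.default_rng(4)
wave = np.linspace(6300.0, 6900.0, 600)

def toy_spectrum(width):
    return (1.0 + 2.0 * np.exp(-0.5 * ((wave - 6563.0) / width) ** 2)
            + rng.normal(0, 0.05, wave.size))

X = np.array([band_features(wave, toy_spectrum(60.0 if i < 100 else 8.0))
              for i in range(200)])
y = np.r_[np.ones(100, int), np.zeros(100, int)]   # 1 = broad-line, 0 = narrow-line

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print("training accuracy:", knn.score(X, y))
```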

17.
In preparation for future, large‐scale, multi‐object, high‐resolution spectroscopic surveys of the Galaxy, we present a series of tests of the precision in radial velocity and chemical abundances that any such project can achieve at a 4 m class telescope. We briefly discuss a number of science cases that aim at studying the chemo‐dynamical history of the major Galactic components (bulge, thin and thick disks, and halo) – either as a follow‐up to the Gaia mission or on their own merits. Based on a large grid of synthetic spectra that cover the full range in stellar parameters of typical survey targets, we devise an optimal wavelength range and argue for a moderately high‐resolution spectrograph. As a result, the kinematic precision is not limited by any of these factors, but will practically only suffer from systematic effects, easily reaching uncertainties <1 km s–1. Under realistic survey conditions (namely, considering stars brighter than r = 16 mag with reasonable exposure times) we prefer an ideal resolving power of R ∼ 20 000 on average, for an overall wavelength range (with a common two‐arm spectrograph design) of [395; 456.5] nm and [587; 673] nm. We show for the first time on a general basis that it is possible to measure chemical abundance ratios to better than 0.1 dex for many species (Fe, Mg, Si, Ca, Ti, Na, Al, V, Cr, Mn, Co, Ni, Y, Ba, Nd, Eu) and to an accuracy of about 0.2 dex for other species such as Zr, La, and Sr. While our feasibility study was explicitly carried out for the 4MOST facility, the results can be readily applied to and used for any other conceptual design study for high‐resolution spectrographs. (© 2013 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim)

18.
A number of experiments for measuring anisotropies of the cosmic microwave background (CMB) use scanning strategies in which temperature fluctuations are measured along circular scans on the sky. It is possible, from a large number of such intersecting circular scans, to build two-dimensional sky maps for subsequent analysis. However, since instrumental effects – especially the excess low-frequency 1/f noise – project onto such two-dimensional maps in a non-trivial way, we discuss an analysis approach which focuses on the information contained in the individual circular scans. This natural way of looking at CMB data from experiments scanning on circles combines the advantages of the elegant simplicity of Fourier series for the computation of statistics useful for constraining cosmological scenarios, and superior efficiency in analysing and quantifying most of the crucial instrumental effects.
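As a minimal illustration of why Fourier series are natural here, the Fourier coefficients of the temperature fluctuations along one ring can be computed directly with an FFT when the ring is sampled uniformly in azimuth; the sampling rate, harmonic content, and noise below are made up, and none of the paper's actual statistics are reproduced.

```python
# Minimal illustration: Fourier coefficients of temperature fluctuations
# along one circular scan, assuming uniform sampling in azimuth phi.
import numpy as np

n = 4096                                    # samples per ring (assumed)
phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)

rng = np.random.default_rng(5)
# Toy ring: a few sky harmonics plus white noise plus a slow drift term
dT = (40e-6 * np.cos(3 * phi) + 15e-6 * np.sin(7 * phi)
      + rng.normal(0, 30e-6, n) + 20e-6 * np.cos(phi + 0.3))

coeffs = np.fft.rfft(dT) / n                # complex Fourier coefficients a_m
power = np.abs(coeffs) ** 2                 # ring power spectrum in m
print("strongest harmonics m:", np.argsort(power[1:])[::-1][:3] + 1)
```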

19.
Recent wide-field photometric surveys, which target a specific field for long durations, are ideal for studying both long- and short-period stellar variability. Here, we report on 75 variable stars detected during the observations of a field in Pegasus using the Wide Angle Search for Planets Prototype (WASP0) instrument, 73 of which are new discoveries. The variables detected include 16 δ Scuti stars, 34 eclipsing binaries, 3 BY Draconis stars and 4 RR Lyrae stars. We estimate that the fraction of stars in the field brighter than V ∼ 13.5 exhibiting variable behaviour with an amplitude greater than 0.6 per cent rms is ∼0.4 per cent. These results are compared with other wide-field stellar variability surveys, and implications for detecting transits due to extra-solar planets are discussed.
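To make the quoted 0.6 per cent rms threshold concrete, the sketch below flags stars whose fractional light-curve rms exceeds that value; a real survey would compare each star against the noise expected at its magnitude rather than a single fixed cut, and all light curves here are fabricated.

```python
# Sketch: flag stars whose fractional light-curve rms exceeds 0.6 per cent,
# the threshold quoted in the abstract. Fabricated light curves only.
import numpy as np

def fractional_rms(flux):
    flux = np.asarray(flux, dtype=float)
    return np.std(flux) / np.mean(flux)

rng = np.random.default_rng(6)
n_stars, n_epochs = 1000, 300
base = rng.uniform(1.0, 100.0, n_stars)[:, None]                  # mean fluxes
fluxes = base * (1.0 + rng.normal(0, 0.002, (n_stars, n_epochs)))
fluxes[:20] *= 1.0 + 0.02 * np.sin(np.linspace(0, 20, n_epochs))  # injected variables

rms = np.array([fractional_rms(f) for f in fluxes])
variable = rms > 0.006                                            # 0.6 per cent cut
print("flagged as variable:", variable.sum(), "of", n_stars)
```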

20.
We report the discovery of a bright blue quasar: SDSS J022218.03–062511.1. This object was discovered spectroscopically while searching for hot white dwarfs that may be used as calibration sources for large sky surveys such as the Dark Energy Survey or the Large Synoptic Survey Telescope project. We present the calibrated spectrum and spectral line shifts, and report a redshift of z = 0.521±0.0015 and a rest‐frame g‐band luminosity of 8.71×10¹¹ L⊙. (© 2015 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim)
