Similar articles
 Found 20 similar articles (search time: 15 ms)
1.
In this paper we present an interference detection toolbox consisting of a high dynamic range Digital Fast-Fourier-Transform spectrometer (DFFT, based on FPGA technology) and data analysis software for automated radio frequency interference (RFI) detection. The DFFT spectrometer allows high-speed storage of spectra on time scales of less than a second. The high dynamic range of the device ensures constant calibration even during extremely powerful RFI events. The software uses an algorithm that performs a two-dimensional baseline fit in the time-frequency domain, searching automatically for RFI signals superposed on the spectral data. We demonstrate that the software operates successfully on computer-generated RFI data as well as on real DFFT data recorded at the Effelsberg 100-m telescope. At 21-cm wavelength, RFI signals can be identified down to the 4σ rms level. A statistical analysis of all RFI events detected in our observational data revealed that: (1) the mean signal strength is comparable to the astronomical line emission of the Milky Way, (2) the interference is polarised, and (3) electronic devices in the neighbourhood of the telescope contribute significantly to the RFI radiation. We also show that the radiometer equation is no longer fulfilled in the presence of RFI signals. (© 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
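As a reader's aid (not the paper's actual pipeline), the core idea of the detection software — fit a smooth baseline and flag channels whose residual exceeds an nσ threshold — can be sketched in one dimension. All names, parameters, and the polynomial baseline model here are illustrative simplifications of the paper's two-dimensional time-frequency fit:

```python
import numpy as np

def flag_rfi(spectrum, order=3, nsigma=4.0, iterations=3):
    """Flag channels whose residual above a polynomial baseline exceeds
    nsigma times a robust rms estimate (1-D toy analogue of a
    2-D time-frequency baseline fit)."""
    x = np.linspace(-1.0, 1.0, spectrum.size)
    mask = np.zeros(spectrum.size, dtype=bool)  # True = flagged as RFI
    for _ in range(iterations):
        good = ~mask
        coeffs = np.polyfit(x[good], spectrum[good], order)
        residual = spectrum - np.polyval(coeffs, x)
        # robust rms from the median absolute deviation of unflagged channels
        dev = residual[good] - np.median(residual[good])
        rms = 1.4826 * np.median(np.abs(dev))
        mask = residual > nsigma * rms
    return mask

# toy spectrum: smooth baseline + noise + one strong narrow-band RFI spike
rng = np.random.default_rng(0)
spec = 10.0 + 0.5 * np.linspace(-1, 1, 512) ** 2 + rng.normal(0, 0.1, 512)
spec[200] += 5.0
flags = flag_rfi(spec)
```

Iterating the fit after flagging keeps strong interference from biasing the baseline, which is why the robust (MAD-based) rms matters here.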

2.
This paper is primarily an investigation of whether the 'optimal extraction' techniques used in CCD spectroscopy can be applied to imaging photometry. It is found that using such techniques provides a gain of around 10 per cent in signal-to-noise ratio over normal aperture photometry. Formally, it is shown to be equivalent to profile fitting, but offers advantages of robust error estimation, freedom from bias introduced by mis-estimating the point spread function, and convenience. In addition, some other techniques are presented that can be applied to profile fitting, aperture photometry and 'optimal' photometry. Code implementing these algorithms is available at http://www.astro.keele.ac.uk/~timn/.
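The profile-weighted estimator at the heart of optimal extraction can be written in a few lines. This is a minimal numpy sketch of the Horne-style weighted sum, assuming a sky-subtracted stamp and a known, normalised PSF; the paper's full method adds the robust error handling and PSF-bias analysis described above:

```python
import numpy as np

def optimal_photometry(data, profile, variance):
    """Profile-weighted ('optimal') flux estimate for a sky-subtracted
    stamp: F = sum(P*D/var) / sum(P^2/var), with estimated variance
    1 / sum(P^2/var). `profile` must be the normalised PSF (sum = 1)."""
    w = profile / variance
    flux = np.sum(w * data) / np.sum(w * profile)
    flux_var = 1.0 / np.sum(profile ** 2 / variance)
    return flux, flux_var

# toy stamp: Gaussian PSF, true flux 100, uniform pixel variance
y, x = np.mgrid[0:15, 0:15]
psf = np.exp(-((x - 7) ** 2 + (y - 7) ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()
stamp = 100.0 * psf
flux, flux_var = optimal_photometry(stamp, psf, np.full_like(stamp, 1.0))
```

Pixels where the PSF is bright get the largest weights, which is where the ~10 per cent S/N gain over a flat aperture sum comes from.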

3.
4.
The quantitative spectroscopy of stellar objects in complex environments is mainly limited by the ability to separate the object from the background. Standard slit spectroscopy, restricting the field of view to one dimension, is obviously not the proper technique in general. The emerging Integral Field (3D) technique, with spatially resolved spectra of a two-dimensional field of view, offers great potential for applying advanced subtraction methods. In this paper an image reconstruction algorithm to separate point sources and a smooth background is applied to 3D data. Several performance tests demonstrate the photometric quality of the method. The algorithm is applied to real 3D observations of a sample planetary nebula in M31, whose spectrum is contaminated by the bright and complex galaxy background. The ability to separate sources is also studied in a crowded field in M33. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

5.
We describe an image analysis supervised learning algorithm that can automatically classify galaxy images. The algorithm is first trained using manually classified images of elliptical, spiral and edge-on galaxies. A large set of image features is extracted from each image, and the most informative features are selected using Fisher scores. Test images can then be classified using a simple Weighted Nearest Neighbour rule such that the Fisher scores are used as the feature weights. Experimental results show that galaxy images from Galaxy Zoo can be classified automatically into spiral, elliptical and edge-on galaxies with an accuracy of ∼90 per cent compared to classifications carried out by the author. Full compilable source code of the algorithm is available for free download, and its general-purpose nature makes it suitable for other uses that involve automatic image analysis of celestial objects.
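The two ingredients named in the abstract — Fisher scores for feature selection and a weighted nearest-neighbour rule — combine naturally. A minimal sketch, with synthetic two-feature data standing in for the paper's large image-feature set (function names are illustrative):

```python
import numpy as np

def fisher_scores(X, y):
    """Per-feature Fisher score: between-class variance over
    within-class variance (higher = more informative)."""
    classes = np.unique(y)
    overall = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
        within += len(Xc) * Xc.var(axis=0)
    return between / np.maximum(within, 1e-12)

def wnn_classify(X_train, y_train, x, weights):
    """Nearest neighbour in a metric where each feature axis is
    scaled by its Fisher score."""
    d2 = ((X_train - x) ** 2 * weights).sum(axis=1)
    return y_train[np.argmin(d2)]

# synthetic data: feature 0 separates the classes, feature 1 is pure noise
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(0, 1, (50, 2))])
X[:50, 0] += 5.0
y = np.array([0] * 50 + [1] * 50)
scores = fisher_scores(X, y)
label = wnn_classify(X, y, np.array([5.0, 0.0]), scores)
```

Weighting by Fisher score lets uninformative features contribute almost nothing to the distance, which is what makes the simple nearest-neighbour rule competitive.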

6.
7.
In this paper, we describe the capabilities of E3D, the Euro3D visualization tool, to handle and display data created by large Integral Field Units (IFUs) and by mosaics consisting of multiple pointings. The reliability of the software has been tested with real data, originating from the PMAS instrument in mosaic mode and from the VIMOS instrument, which features the largest IFU currently available. The capabilities and limitations of the current software are examined in view of future large IFUs, which will produce extremely large datasets. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

8.
9.
We describe the software requirement and design specifications for all-sky panoramic astronomical pipelines. The described software aims to meet the specific needs of superwide-angle optics, and includes cosmic-ray hit rejection, image compression, star recognition, sky opacity analysis, transient detection and a web server allowing access to real-time and archived data. The presented software is being regularly used for the pipeline processing of 11 all-sky cameras located in some of the world's premier observatories. We encourage all-sky camera operators to use our software and/or our hosting services and become part of the global Night Sky Live network.

10.
The ORAC-DR data reduction pipeline has been used by the Joint Astronomy Centre since 1998. Originally developed for an infrared spectrometer and a submillimetre bolometer array, it has since expanded to support twenty instruments from nine different telescopes. By using shared code and a common infrastructure, rapid development of an automated data reduction pipeline for nearly any astronomical data is possible. This paper discusses the infrastructure available to developers and estimates the development timescales expected to reduce data for new instruments using ORAC-DR. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

11.
An algorithm for cosmic-ray rejection from single images is presented. The algorithm is based on modeling human perception using fuzzy logic. The proposed algorithm is specifically designed to reject multiple-pixel cosmic-ray hits that are larger than some of the point spread functions of the true astronomical sources. Experiments show that the algorithm can accurately reject ∼97.5% of the cosmic-ray hits, while mistakenly rejecting only 0.02% of the true astronomical sources. The major advantage of the presented algorithm is its computational efficiency. (© 2005 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

12.
13.
Using the Very Large Array (VLA), we observed all three of the 6-cm lines of the ²Π₁/₂, J = 1/2 state of OH with sub-arcsecond resolution (∼0.4 arcsec) in W49A. While the spatial distribution and the range in velocities of the 6-cm lines are similar to those of the ground-state (18-cm) OH lines, a large fraction of the total emission in all three 6-cm lines has large linewidths (∼5–10 km s⁻¹) and is spatially extended, very unlike typical ground-state OH masers, which are point-like at VLA resolutions and have linewidths ≤ 1 km s⁻¹. We find brightness temperatures of 5900, 4700 and ≥730 K for the 4660-, 4750- and 4765-MHz lines, respectively. We conclude that these are indeed maser lines. However, the gains are ∼0.3, again very unlike the 18-cm lines, which have gains ≥ 10⁴. We compare the excited-state OH emission with that from other molecules observed with comparable angular resolution to estimate physical conditions in the regions emitting the peculiar, low-gain maser lines. We also comment on the relationship with the 18-cm masers.

14.
Difference imaging is a technique for obtaining precise relative photometry of variable sources in crowded stellar fields and, as such, constitutes a crucial part of the data reduction pipeline in surveys for microlensing events or transiting extrasolar planets. The Optimal Image Subtraction (OIS) algorithm of Alard & Lupton (1998) permits the accurate differencing of images by determining convolution kernels which, when applied to reference images with particularly good seeing and signal-to-noise (S/N), provide excellent matches to the point-spread functions (PSF) in other images of the time series to be analysed. The convolution kernels are built as linear combinations of a set of basis functions, conventionally bivariate Gaussians modulated by polynomials. The kernel parameters, mainly the widths and maximal degrees of the basis function model, must be supplied by the user. Ideally, the parameters should be matched to the PSF, pixel sampling, and S/N of the data set or individual images to be analysed. We have studied the dependence of the reduction outcome on the kernel parameters using our new implementation of OIS within the IDL-based TRIPP package. From the analysis of noise-free PSF simulations of both single objects and crowded fields, as well as the test images in the ISIS OIS software package, we derive qualitative and quantitative relations between the kernel parameters and the success of the subtraction as a function of the PSF widths and sampling in reference and data images, and compare the results to those of other implementations found in the literature. On the basis of these simulations, we provide recommended parameters for data sets with different S/N and sampling. (© 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
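Because the kernel is a linear combination of fixed basis functions, fitting it reduces to linear least squares. A 1-D sketch of that core step, under simplifying assumptions (pure Gaussian basis with no polynomial modulation, a single spatially constant kernel, illustrative names throughout):

```python
import numpy as np

def fit_ois_kernel(ref, img, sigmas, half_width=20):
    """Fit a convolution kernel as a linear combination of Gaussian
    basis functions so that (ref * kernel) matches img, via linear
    least squares (1-D toy version of the OIS kernel solution)."""
    x = np.arange(-half_width, half_width + 1)
    basis = [np.exp(-x ** 2 / (2 * s ** 2)) for s in sigmas]
    # design matrix: reference convolved with each basis function
    A = np.column_stack([np.convolve(ref, b, mode="same") for b in basis])
    coeffs, *_ = np.linalg.lstsq(A, img, rcond=None)
    kernel = sum(c * b for c, b in zip(coeffs, basis))
    return kernel, A @ coeffs

# reference star (sigma = 1) and the same star in worse seeing (sigma = 2);
# a Gaussian of width sqrt(3) maps one onto the other exactly
xg = np.arange(201) - 100.0
ref = np.exp(-xg ** 2 / 2.0)
img = np.exp(-xg ** 2 / 8.0)
kernel, model = fit_ois_kernel(ref, img, sigmas=[1.0, np.sqrt(3.0), 3.0])
```

The user-supplied `sigmas` play the role of the kernel widths discussed in the abstract: the fit can only do well if the basis can represent the true PSF mismatch.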

15.
Radio interferometry significantly improves the resolution of observed images, and the final result also relies heavily on data recovery. The Cotton-Schwab CLEAN (CS-Clean) deconvolution approach is a widely used reconstruction algorithm in the field of radio synthesis imaging. However, parameter tuning for this algorithm has always been a difficult task. Here, its performance is improved by considering some internal characteristics of the data. From a mathematical point of view, a peak signal-to-noise-based (PSNR-based) method was introduced to optimize the step length of the steepest descent method in the recovery process. We also found that the loop gain curve in the new algorithm is a good indicator of parameter tuning. Tests show that the new algorithm can effectively solve the problem of oscillation for a large fixed loop gain and provides a more robust recovery.
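For context, the fixed loop gain that the paper's PSNR-based step replaces appears in the standard Högbom/CS-Clean minor cycle: find the residual peak, subtract a gain-scaled shifted PSF, repeat. A 1-D sketch of that baseline algorithm (not the paper's adaptive variant; names are illustrative):

```python
import numpy as np

def hogbom_clean_1d(dirty, psf, gain=0.1, threshold=1e-3, max_iter=1000):
    """Standard Hogbom CLEAN minor cycle with a fixed loop gain.
    `psf` is assumed to be peak-normalised to 1."""
    residual = dirty.copy()
    model = np.zeros_like(dirty)
    center = int(np.argmax(psf))
    for _ in range(max_iter):
        peak = int(np.argmax(np.abs(residual)))
        if abs(residual[peak]) < threshold:
            break
        step = gain * residual[peak]          # fixed-gain step length
        model[peak] += step
        shifted = np.roll(psf, peak - center)  # PSF centred on the peak
        # zero the wrap-around introduced by np.roll
        shift = peak - center
        if shift > 0:
            shifted[:shift] = 0.0
        elif shift < 0:
            shifted[shift:] = 0.0
        residual -= step * shifted
    return model, residual

# single point source: the dirty image is just the (normalised) PSF
psf = np.exp(-(np.arange(101) - 50.0) ** 2 / 2.0)
model, residual = hogbom_clean_1d(psf.copy(), psf)
```

With a fixed gain the residual peak shrinks geometrically (factor 1 − gain per iteration); too large a gain causes exactly the oscillation the abstract describes, which motivates choosing the step from the data instead.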

16.
An efficient algorithm for adaptive kernel smoothing (AKS) of two-dimensional imaging data has been developed and implemented using the Interactive Data Language (IDL). The functional form of the kernel can be varied (top-hat, Gaussian, etc.) to allow different weighting of the event counts registered within the smoothing region. For each individual pixel, the algorithm increases the smoothing scale until the signal-to-noise ratio (S/N) within the kernel reaches a pre-set value. Thus, noise is suppressed very efficiently, while at the same time real structure, that is, signal that is locally significant at the selected S/N level, is preserved on all scales. In particular, extended features in noise-dominated regions are visually enhanced. The asmooth algorithm differs from other AKS routines in that it allows a quantitative assessment of the goodness of the local signal estimation by producing adaptively smoothed images in which all pixel values share the same S/N above the background.
We apply asmooth both to real observational data (an X-ray image of clusters of galaxies obtained with the Chandra X-ray Observatory) and to a simulated data set. We find the asmooth-processed images to be fair representations of the input data in the sense that the residuals are consistent with pure noise, that is, they possess Poissonian variance and a near-Gaussian distribution around a mean of zero, and are spatially uncorrelated.
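The grow-until-significant rule is easy to demonstrate in one dimension with a top-hat kernel. This is a toy illustration of the principle, not the asmooth implementation: for pure Poisson counts the S/N of a summed window is simply sqrt(total counts), so the radius grows until the sum reaches the square of the target S/N:

```python
import numpy as np

def adaptive_smooth(counts, snr_min=3.0, r_max=50):
    """Top-hat adaptive kernel smoothing of a 1-D count profile: each
    pixel's window grows until S/N = sum/sqrt(sum) >= snr_min
    (pure Poisson statistics, no background, assumed)."""
    n = counts.size
    smoothed = np.zeros(n)
    for i in range(n):
        for r in range(r_max + 1):
            lo, hi = max(0, i - r), min(n, i + r + 1)
            total = counts[lo:hi].sum()
            if total > 0 and np.sqrt(total) >= snr_min:
                break
        smoothed[i] = total / (hi - lo)  # mean counts in the final kernel
    return smoothed

# bright pixel keeps its native resolution; faint regions get averaged
counts = np.ones(100)
counts[50] = 25.0
smoothed = adaptive_smooth(counts)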

17.
18.
We present the results of applying new object classification techniques to the supernova search of the Nearby Supernova Factory. In comparison to simple threshold cuts, more sophisticated methods such as boosted decision trees, random forests, and support vector machines provide dramatically better object discrimination: we reduced the number of nonsupernova candidates by a factor of 10 while increasing our supernova identification efficiency. Methods such as these will be crucial for maintaining a reasonable false positive rate in the automated transient alert pipelines of upcoming large optical surveys. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

19.
The entropic prior for distributions with positive and negative values
The maximum entropy method has been used to reconstruct images in a wide range of astronomical fields, but in its traditional form it is restricted to the reconstruction of strictly positive distributions. We present an extension of the standard method to include distributions that can take both positive and negative values. The method may therefore be applied to a much wider range of astronomical reconstruction problems. In particular, we derive the form of the entropy for positive/negative distributions and use direct counting arguments to find the form of the entropic prior. We also derive the measure on the space of positive/negative distributions, which allows the definition of probability integrals and hence the proper quantification of errors.
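For orientation, the widely quoted form of this positive/negative entropy writes the hidden distribution as the difference of two positive distributions with a common model m; the expression below is reproduced from the maximum-entropy literature as a reader's aid, not verbatim from the paper:

```latex
S(h) = \sum_i \left[ \psi_i - 2m_i
       - h_i \ln\!\left(\frac{\psi_i + h_i}{2m_i}\right) \right],
\qquad \psi_i = \sqrt{h_i^2 + 4m_i^2},
```

with the entropic prior then taking the usual form $\Pr(h \mid m, \alpha) \propto \exp[\alpha S(h)]$. For $h_i \gg m_i > 0$ this reduces to the standard (positive-only) MEM entropy, while remaining well defined for negative $h_i$.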

20.
A probabilistic technique for the joint estimation of background and sources with the aim of detecting faint and extended celestial objects is described. Bayesian probability theory is applied to gain insight into the co-existence of background and sources through a probabilistic two-component mixture model, which provides consistent uncertainties of background and sources. A multiresolution analysis is used for revealing faint and extended objects in the frame of the Bayesian mixture model. All the revealed sources are parametrized automatically providing source position, net counts, morphological parameters and their errors.
We demonstrate the capability of our method by applying it to three simulated data sets characterized by different background and source intensities. The results of employing two different priors on the source signal distribution are shown. The probabilistic method allows for the detection of bright and faint sources independently of their morphology and the kind of background. The results from our analysis of the three simulated data sets are compared with other source detection methods. Additionally, the technique is applied to ROSAT All-Sky Survey data.
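The two-component mixture idea can be made concrete for a single pixel of Poisson counts: compare the likelihood of "background only" against "background plus source" and form the posterior odds. This toy sketch assumes known background and source rates and an illustrative prior; the paper's method instead estimates these jointly with multiresolution analysis:

```python
import math

def source_probability(d, b, s, prior=0.01):
    """Posterior probability that a pixel with d observed counts
    contains a source of rate s on background b, under a
    two-component Poisson mixture with mixing weight `prior`."""
    def poisson(k, lam):
        return math.exp(-lam) * lam ** k / math.factorial(k)
    like_src = poisson(d, b + s)   # background + source hypothesis
    like_bg = poisson(d, b)        # background-only hypothesis
    num = prior * like_src
    return num / (num + (1 - prior) * like_bg)
```

For example, with background rate 5 and source rate 15, a pixel with 20 counts is almost certainly a source, while one with 5 counts almost certainly is not; the same machinery yields the "consistent uncertainties" the abstract refers to.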

