Similar Documents
20 similar documents found (search time: 15 ms)
1.
Radio interferometry significantly improves the resolution of observed images, and the final result also relies heavily on data recovery. The Cotton-Schwab CLEAN (CS-Clean) deconvolution approach is a widely used reconstruction algorithm in the field of radio synthesis imaging. However, parameter tuning for this algorithm has always been a difficult task. Here, its performance is improved by considering some internal characteristics of the data. From a mathematical point of view, a peak signal-to-noise-ratio-based (PSNR-based) method was introduced to optimize the step length of the steepest descent method in the recovery process. We also found that the loop gain curve in the new algorithm is a good indicator for parameter tuning. Tests show that the new algorithm can effectively solve the problem of oscillation for a large fixed loop gain and provides a more robust recovery.
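The loop gain discussed in this abstract is the fraction of the peak subtracted per CLEAN iteration. A minimal 1D Hogbom-style minor cycle (a generic illustration with a fixed gain, not the paper's PSNR-based adaptive step) can be sketched as:

```python
import numpy as np

def clean_1d(dirty, psf, n_iter=200, gain=0.1, threshold=1e-3):
    """Hogbom-style CLEAN minor cycle on a 1D dirty 'image' (illustration).

    `gain` is the loop gain whose tuning the abstract discusses; here it is
    a fixed user parameter rather than an adaptive, PSNR-driven step.
    """
    res = dirty.copy()
    model = np.zeros_like(dirty)
    half = len(psf) // 2
    for _ in range(n_iter):
        p = np.argmax(np.abs(res))          # locate current peak
        peak = res[p]
        if np.abs(peak) < threshold:        # stop when residual is faint
            break
        model[p] += gain * peak             # accumulate clean component
        # subtract the shifted, scaled PSF from the residual
        lo, hi = max(0, p - half), min(len(res), p + half + 1)
        res[lo:hi] -= gain * peak * psf[half - (p - lo): half + (hi - p)]
    return model, res
```

A too-large fixed gain makes the residual peak overshoot and oscillate, which is exactly the failure mode the adaptive step aims to avoid.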

2.
A new technique is presented for producing images from interferometric data. The method, 'smear fitting', makes the constraints necessary for interferometric imaging double as a model, with uncertainties, of the sky brightness distribution. It does this by modelling the sky with a set of functions and then convolving each component with its own elliptical Gaussian to account for the uncertainty in its shape and location that arises from noise. This yields much sharper resolution than CLEAN for significantly detected features, without sacrificing any sensitivity. Using appropriate functional forms for the components provides both a scientifically interesting model and imaging constraints that tend to be better than those used by traditional deconvolution methods. This allows it to avoid the most serious problems that limit the imaging quality of those methods. Comparisons of smear fitting to CLEAN and maximum entropy are given, using both real and simulated observations. It is also shown that the famous Rayleigh criterion (resolution = wavelength/baseline) is inappropriate for interferometers, as it does not consider the reliability of the measurements.

3.
The auroras on Jupiter and Saturn can be studied with high sensitivity and resolution using the Hubble Space Telescope (HST) ultraviolet (UV) and far-ultraviolet Space Telescope Imaging Spectrograph (STIS) and Advanced Camera for Surveys (ACS) instruments. We present results of automatic detection and segmentation of Jupiter's auroral emissions as observed by the HST ACS instrument with the VOronoi Image SEgmentation (VOISE) algorithm. VOISE is a dynamic algorithm for partitioning the underlying pixel grid of an image into regions according to a prescribed homogeneity criterion. The algorithm consists of an iterative procedure that dynamically constructs a tessellation of the image plane based on a Voronoi diagram, until the intensity of the underlying image within each region is classified as homogeneous. The computed tessellations allow the extraction of quantitative information about the auroral features, such as mean intensity, latitudinal and longitudinal extents, and length-scales. These outputs thus represent a more automated and objective method of characterizing auroral emissions than manual inspection.
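The core of a VOISE-style scheme is a Voronoi partition of the pixel grid plus a homogeneity test deciding which regions to subdivide. A crude one-pass sketch (the real algorithm iteratively adds and moves seeds; the variance-style criterion below is an assumption of this sketch):

```python
import numpy as np

def voronoi_partition(img, seeds):
    """Assign each pixel of `img` to its nearest seed (one tessellation
    step; VOISE repeats this while refining the seed set)."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # squared distance from every pixel to every seed -> nearest-seed label
    d2 = (yy[..., None] - seeds[:, 0]) ** 2 + (xx[..., None] - seeds[:, 1]) ** 2
    return np.argmin(d2, axis=-1)

def inhomogeneous_regions(img, labels, tol):
    """Labels of regions whose intensity spread exceeds `tol`; these are
    the regions a VOISE-like algorithm would subdivide further."""
    return [k for k in np.unique(labels)
            if np.ptp(img[labels == k]) > tol]
```

Iterating partition, test, and seed insertion until every region passes the test yields the homogeneous tessellation described in the abstract.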

4.
5.
An unbiased method for improving the resolution of astronomical images is presented. The strategy at the core of this method is to establish a linear transformation between the recorded image and an improved image at some desirable resolution. In order to establish this transformation only the actual point spread function and a desired point spread function need be known. No image actually recorded is used in establishing the linear transformation between the recorded and improved image.
This method has a number of advantages over other methods currently in use. It is not iterative, which means it is not necessary to impose any criteria, objective or otherwise, to stop the iterations. The method does not require an artificial separation of the image into 'smooth' and 'point-like' components, and thus is unbiased with respect to the character of structures present in the image. The method produces a linear transformation between the recorded image and the deconvolved image, and therefore the propagation of pixel-by-pixel flux error estimates into the deconvolved image is trivial. It is explicitly constrained to preserve photometry and should be robust against random errors.
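Since only the actual and desired PSFs are needed, such a linear map can be sketched in the Fourier domain as a regularized ratio of transfer functions (a 1D, periodic illustration; the damping constant `eps` is an assumption of this sketch, not part of the paper's construction):

```python
import numpy as np

def reconvolution_matrix_1d(actual_psf, desired_psf, eps=1e-6):
    """Build the linear map taking an image observed with `actual_psf` to
    one observed with the broader `desired_psf` (1D, periodic sketch).

    The transfer-function ratio is damped by `eps` where the actual PSF
    has little power, to keep the map well conditioned."""
    A = np.fft.fft(actual_psf)
    D = np.fft.fft(desired_psf)
    ratio = D * np.conj(A) / (np.abs(A) ** 2 + eps)
    n = len(actual_psf)
    # the operator is circulant: apply it to every unit vector at once
    return np.real(np.fft.ifft(ratio[None, :] * np.fft.fft(np.eye(n)),
                               axis=1)).T
```

Because the map is an explicit matrix, pixel-by-pixel error propagation is indeed trivial: the output covariance is just M C Mᵀ for input covariance C.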

6.
The success of LISA Pathfinder in demonstrating the LISA drag-free requirement paved the way for using space interferometers to detect low-frequency and middle-frequency gravitational waves (GWs). The TAIJI GW mission and the new LISA GW mission propose arm lengths of 3 Gm (1 Gm = 10^6 km) and 2.5 Gm, respectively. For a space laser-interferometric GW antenna, due to astrodynamical orbit variation, time delay interferometry (TDI) is needed to achieve nearly equivalent equal arms, suppressing the laser frequency noise below the level of the optical path noise, acceleration noise, etc., in order to attain the requisite sensitivity. In this paper, we simulate TDI numerically for the TAIJI mission and the new LISA mission. To do this, we work out a set of 2200-day (6-year) optimized science orbits for each mission starting on 2028 March 22, using the CGC 2.7.1 ephemeris framework. We then calculate numerically the residual optical path differences of the first-generation TDI configurations and of selected second-generation TDI configurations. The resulting optical path differences of the second-generation TDI configurations calculated for TAIJI, new LISA and eLISA are well below their respective requirements for laser frequency noise cancelation. However, for the first-generation TDI configurations, the original requirements need to be relaxed by 3- to 30-fold to be satisfied. For TAIJI and the new LISA, about one order of magnitude of relaxation would be adequate and is recommended; this could either be absorbed into the laser stability requirement, in view of recent progress in laser stability, or the GW detection sensitivities of the second-generation TDIs would have to be used in the diagnosis of the observed data instead of the commonly used X, Y and Z TDIs.
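The reason first-generation TDI degrades under orbit variation can be seen from the simplest case: for static, unequal arms the Michelson X combination cancels laser phase noise exactly, and it is only the time-varying arm lengths that leave a residual. A toy demonstration with integer-sample circular delays (illustrative only, not the paper's numerical orbit simulation):

```python
import numpy as np

def michelson_X(p, d1, d2):
    """First-generation TDI Michelson X combination for laser phase noise
    p with static arm round-trip delays d1, d2 (in samples).

    y_i(t) = p(t - d_i) - p(t) is the round-trip beat of the returned
    light against the local laser; X re-delays and differences the two
    arms so every copy of p cancels when d1, d2 are constant.
    """
    delay = lambda x, d: np.roll(x, d)   # circular delay by d samples
    y1 = delay(p, d1) - p
    y2 = delay(p, d2) - p
    return y1 + delay(y2, d1) - y2 - delay(y1, d2)
```

With drifting delays the four copies of p no longer line up exactly, which is why the second-generation combinations (with additional delay factors) are required at the missions' sensitivity levels.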

7.
Radio interferometry probes astrophysical signals through incomplete and noisy Fourier measurements. The theory of compressed sensing demonstrates that such measurements may actually suffice for accurate reconstruction of sparse or compressible signals. We propose new generic imaging techniques based on convex optimization for global minimization problems defined in this context. The versatility of the framework notably allows introduction of specific prior information on the signals, which offers the possibility of significant improvements of reconstruction relative to the standard local matching pursuit algorithm CLEAN used in radio astronomy. We illustrate the potential of the approach by studying reconstruction performances on simulations of two different kinds of signals observed with very generic interferometric configurations. The first kind is an intensity field of compact astrophysical objects. The second kind is the imprint of cosmic strings in the temperature field of the cosmic microwave background radiation, of particular interest for cosmology.
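The flavour of such convex reconstruction can be sketched with a basic iterative soft-thresholding (ISTA) ℓ1 solver on masked Fourier measurements; this is a generic stand-in for the paper's algorithms, and all parameters below are illustrative:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(y, mask, lam=0.01, n_iter=500):
    """Recover a sparse real signal from masked (incomplete) Fourier
    measurements y = mask * F x by iterative soft thresholding."""
    n = len(mask)
    F = lambda v: np.fft.fft(v) / np.sqrt(n)    # unitary DFT
    Fh = lambda v: np.fft.ifft(v) * np.sqrt(n)  # its adjoint
    x = np.zeros(n)
    for _ in range(n_iter):
        # gradient step on the data fit, then shrinkage toward sparsity
        x = soft(x + np.real(Fh(mask * (y - mask * F(x)))), lam)
    return x
```

Even with a large fraction of the Fourier plane unmeasured, the sparse field is recovered far more faithfully than by simply inverse-transforming the masked data.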

8.
Recent theoretical developments in astronomical aperture synthesis have revealed the existence of integer-ambiguity problems. Those problems, which appear in the self-calibration procedures of radio imaging, have been shown to be similar to the nearest-lattice-point (NLP) problems encountered in high-precision geodetic positioning and in global navigation satellite systems. In this paper we analyse the theoretical aspects of the matter and propose new methods for solving those NLP problems. The related optimization aspects concern both the preconditioning stage and the discrete-search stage in which the integer ambiguities are finally fixed. Our algorithms, which are described in an explicit manner, can easily be implemented. They lead to substantial gains in the processing time of both stages. Their efficiency was shown via intensive numerical tests. (© 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
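The NLP problem named here asks for the integer vector z minimizing ‖Bz − y‖ for a lattice basis B. A tiny brute-force version, enumerating integers around the rounded least-squares solution (a stand-in for the paper's preconditioned discrete search; the search radius is an assumption):

```python
import numpy as np
from itertools import product

def nlp_bruteforce(B, y, radius=3):
    """Nearest lattice point to y in {B z : z integer}, by enumerating
    integer vectors around the rounded real-valued solution."""
    z0 = np.rint(np.linalg.lstsq(B, y, rcond=None)[0]).astype(int)
    best, best_d = None, np.inf
    for dz in product(range(-radius, radius + 1), repeat=B.shape[1]):
        z = z0 + np.array(dz)
        d = np.linalg.norm(B @ z - y)       # distance of candidate point
        if d < best_d:
            best, best_d = z, d
    return best, best_d
```

Real solvers avoid this exponential enumeration: preconditioning (basis reduction) makes the naive rounding nearly correct, so only a small candidate set must be searched, which is where the paper's processing-time gains come from.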

9.
We develop a radio astronomical approach to 3D reconstruction in few-projections tomography. It is based on the 2-CLEAN DSA method, which consists of two CLEAN algorithms using a synthesized beam. In complex cases, two extreme solutions are used for the analysis of the image structure. These solutions determine the limits of permissible energy redistribution in the image among the components of small and large scales. Two variants of 3D reconstruction, proceeding from a set of two-dimensional projections (3D2D) and from a set of one-dimensional ones (3D1D), are considered. It is shown that the quality of 3D2D reconstruction should be similar to the quality of 2D1D reconstruction if the same number of equally spaced scans is used, but a doubled number of projections is required for 3D1D reconstruction. We have simulated the 3D reconstruction of an optically thin emitting object. The present technique is a component of astrotomography and has good prospects for a wide range of remote sensing. (© 2005 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

10.
The Square Kilometre Array (SKA) project is an international collaboration to build the world's largest radio telescope, whose sensitivity and survey speed will exceed those of all existing radio telescopes by an order of magnitude. Continuum surveys are one of SKA's main observing modes, and standard sky maps of the survey regions built from continuum imaging will lay an important foundation for subsequent astronomical research. The GaLactic and Extragalactic All-sky Murchison Widefield Array survey eXtended (GLEAM-X) is a new radio continuum survey carried out in 2018–2020 with the Phase II extended array of the SKA precursor telescope, the Murchison Widefield Array (MWA), which accumulated a large volume of low-frequency survey data. The automated, large-batch processing of such massive observational data is one of the greatest challenges facing the SKA project, and experience in optimizing imaging pipelines based on distributed execution frameworks will help address it. This paper describes the GLEAM-X imaging pipeline in detail, integrates and improves it, and, on the China SKA Regional Centre Prototype (China SKA Regional Centre Prototype, ...

11.
The ORAC‐DR data reduction pipeline has been used by the Joint Astronomy Centre since 1998. Originally developed for an infrared spectrometer and a submillimetre bolometer array, it has since expanded to support twenty instruments from nine different telescopes. By using shared code and a common infrastructure, rapid development of an automated data reduction pipeline for nearly any astronomical data is possible. This paper discusses the infrastructure available to developers and estimates the development timescales expected to reduce data for new instruments using ORAC‐DR. (© 2008 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim)

12.
Six spectral extraction algorithms are analysed and compared on LAMOST (Large Sky Area Multi-Object Fiber Spectroscopic Telescope) two-dimensional spectral image data: the aperture method, profile fitting, direct deconvolution, deconvolution extraction based on Tikhonov regularization, deconvolution extraction based on adaptive Landweber iteration, and deconvolution extraction based on Richardson-Lucy iteration. Experiments comparing the algorithms in terms of signal-to-noise ratio and resolution show that the three deconvolution-based algorithms (Tikhonov regularization, adaptive Landweber iteration, and Richardson-Lucy iteration) are the most reliable of the six. Finally, directions for future work are outlined.
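Of the deconvolution-based extractors, the Tikhonov variant is the simplest to state: it solves min ‖Af − b‖² + λ‖f‖², where A maps the fibre spectrum f to the observed cross-dispersion data b. A minimal sketch (illustrative matrix and regularization strength, not the paper's setup):

```python
import numpy as np

def tikhonov_extract(A, b, lam=1e-4):
    """Tikhonov-regularized extraction: solve the normal equations
    (A^T A + lam I) f = A^T b for the underlying spectrum f."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```

The regularization term stabilizes the inversion where the profile matrix A strongly damps high spatial frequencies, at the cost of a small bias controlled by λ.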

13.
14.
The key features of the matphot algorithm for precise and accurate stellar photometry and astrometry using discrete point spread functions (PSFs) are described. A discrete PSF is a sampled version of a continuous PSF, which describes the two-dimensional probability distribution of photons from a point source (star) just above the detector. The shape information about the photon scattering pattern of a discrete PSF is typically encoded using a numerical table (matrix) or a FITS (Flexible Image Transport System) image file. Discrete PSFs are shifted within an observational model using a 21-pixel-wide damped sinc function, and position partial derivatives are computed using a five-point numerical differentiation formula. Precise and accurate stellar photometry and astrometry are achieved with undersampled CCD (charge-coupled device) observations by using supersampled discrete PSFs that are sampled two, three or more times more finely than the observational data. The precision and accuracy of the matphot algorithm are demonstrated by using the C-language mpd code to analyse simulated CCD stellar observations; measured performance is compared with a theoretical performance model. Detailed analysis of simulated Next Generation Space Telescope observations demonstrates that millipixel relative astrometry and mmag photometric precision are achievable with complicated space-based discrete PSFs.
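A damped sinc kernel such as the 21-pixel-wide one mentioned here lets a sampled PSF be shifted by sub-pixel offsets with little ringing. A 1D sketch (the Gaussian damping scale is an assumption of this sketch, not matphot's actual window):

```python
import numpy as np

def sinc_shift(samples, dx, width=21, damp=3.25):
    """Shift a 1D sampled profile by sub-pixel offset dx using a
    Gaussian-damped sinc interpolation kernel of `width` taps."""
    half = width // 2
    k = np.arange(-half, half + 1)
    # ideal band-limited shift kernel, tapered to suppress truncation ringing
    kernel = np.sinc(k - dx) * np.exp(-((k - dx) / damp) ** 2)
    kernel /= kernel.sum()                  # preserve total flux (DC gain 1)
    return np.convolve(samples, kernel, mode='same')
```

Differentiating such shifts numerically (e.g. with a five-point formula, as the abstract notes) gives the position partial derivatives needed for the astrometric fit.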

15.
The entropic prior for distributions with positive and negative values
The maximum entropy method has been used to reconstruct images in a wide range of astronomical fields, but in its traditional form it is restricted to the reconstruction of strictly positive distributions. We present an extension of the standard method to include distributions that can take both positive and negative values. The method may therefore be applied to a much wider range of astronomical reconstruction problems. In particular, we derive the form of the entropy for positive/negative distributions and use direct counting arguments to find the form of the entropic prior. We also derive the measure on the space of positive/negative distributions, which allows the definition of probability integrals and hence the proper quantification of errors.
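For reference, the positive/negative entropy associated with this line of work is commonly quoted in the MEM literature in the following form (stated here from memory of that literature; signs and normalization conventions may differ from the paper's own):

```latex
\psi_i = \sqrt{h_i^2 + 4\, m_{u,i}\, m_{v,i}}, \qquad
S(h) = \sum_i \left[ \psi_i - m_{u,i} - m_{v,i}
     - h_i \ln\!\frac{\psi_i + h_i}{2\, m_{u,i}} \right],
```

where $h_i$ is the (possibly negative) reconstructed value and $m_{u,i}$, $m_{v,i}$ are models for its positive and negative parts; for $m_u = m_v$ and $h_i \gg m_i > 0$ it approaches the standard positive-only entropy.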

16.
A probabilistic technique for the joint estimation of background and sources with the aim of detecting faint and extended celestial objects is described. Bayesian probability theory is applied to gain insight into the co-existence of background and sources through a probabilistic two-component mixture model, which provides consistent uncertainties of background and sources. A multiresolution analysis is used for revealing faint and extended objects in the frame of the Bayesian mixture model. All the revealed sources are parametrized automatically providing source position, net counts, morphological parameters and their errors.
We demonstrate the capability of our method by applying it to three simulated data sets characterized by different background and source intensities. The results of employing two different priors on the source signal distribution are shown. The probabilistic method allows for the detection of bright and faint sources independently of their morphology and of the kind of background. The results from our analysis of the three simulated data sets are compared with those of other source detection methods. Additionally, the technique is applied to ROSAT All-Sky Survey data.
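The heart of a two-component mixture model like the one described above is a per-pixel posterior probability that the pixel contains source signal rather than background alone. A minimal Poisson-counts sketch (the rates and mixing prior are illustrative assumptions, and the multiresolution analysis is omitted):

```python
import numpy as np
from math import lgamma, exp, log

def source_probability(counts, bg_rate, src_rate, prior_src=0.1):
    """Posterior probability, per pixel, that observed counts come from
    background + source rather than background alone (Poisson mixture)."""
    def log_pmf(k, mu):
        # log of the Poisson pmf, via lgamma for numerical stability
        return k * log(mu) - mu - lgamma(k + 1)
    p = []
    for k in np.atleast_1d(counts):
        ls = log_pmf(k, bg_rate + src_rate) + log(prior_src)
        lb = log_pmf(k, bg_rate) + log(1.0 - prior_src)
        m = max(ls, lb)                      # log-sum-exp guard
        p.append(exp(ls - m) / (exp(ls - m) + exp(lb - m)))
    return np.array(p)
```

Because the two hypotheses share one likelihood framework, the same machinery yields consistent uncertainties for both the background and the detected sources.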

17.
The photometric calibration of the Sloan Digital Sky Survey (SDSS) is a multi-step process which involves data from three different telescopes: the 1.0-m telescope at the US Naval Observatory (USNO), Flagstaff Station, Arizona (which was used to establish the SDSS standard star network); the SDSS 0.5-m Photometric Telescope (PT) at the Apache Point Observatory (APO), New Mexico (which calculates nightly extinctions and calibrates secondary patch transfer fields); and the SDSS 2.5-m telescope at APO (which obtains the imaging data for the SDSS proper). In this paper, we describe the Monitor Telescope Pipeline, MTPIPE, the software pipeline used in processing the data from the single-CCD telescopes used in the photometric calibration of the SDSS (i.e., the USNO 1.0-m and the PT). We also describe the transformation equations that convert photometry on the USNO 1.0-m u′g′r′i′z′ system to photometry on the SDSS 2.5-m ugriz system, and the results of various validation tests of the MTPIPE software. Further, we discuss the semi-automated PT factory, which runs MTPIPE in the day-to-day standard SDSS operations at Fermilab. Finally, we discuss the use of MTPIPE in current SDSS-related projects, including the Southern u′g′r′i′z′ Standard Star project, the u′g′r′i′z′ Open Star Clusters project, and the SDSS extension (SDSS-II). (© 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

18.
The precision study of dark matter using weak lensing by large-scale structure is strongly constrained by the accuracy with which one can measure galaxy shapes. Several methods have been devised, but none has demonstrated the ability to reach the level of precision required by future weak lensing surveys. In this paper, we explore new avenues extending the existing 'Shapelets' approach, combining a priori knowledge of the galaxy profile with the power of orthogonal basis function decomposition. This paper discusses the new issues raised by this matched filter approach and proposes promising alternatives to shape measurement techniques. In particular, it appears that the use of a matched filter (e.g. a Sérsic profile) restricted to elliptical radial fitting functions resolves several well-known Shapelet issues.

19.
The numerical kernel approach to difference imaging has been implemented and applied to gravitational microlensing events observed by the PLANET collaboration. The effect of an error in the source-star coordinates is explored, and a new algorithm is presented for determining the precise coordinates of the microlens in blended events, essential for accurate photometry of difference images. It is shown that the photometric reference flux need not be measured directly from the reference image but can be obtained from measurements of the difference images combined with knowledge of the statistical flux uncertainties. The improved performance of the new algorithm, relative to ISIS2, is demonstrated.
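In the numerical-kernel approach, the convolution kernel matching the reference image to each target frame is solved for directly as a linear least-squares problem over its pixel values. A 1D sketch (real difference imaging fits a 2D, possibly spatially varying kernel plus a differential background term):

```python
import numpy as np

def fit_kernel_1d(ref, target, ksize=5):
    """Solve for the discrete kernel K minimizing ||ref * K - target||^2,
    where * is (circular) convolution; each kernel pixel is a free
    linear parameter."""
    half = ksize // 2
    # design matrix: column j holds ref delayed by (j - half) samples
    A = np.stack([np.roll(ref, j - half) for j in range(ksize)], axis=1)
    K, *_ = np.linalg.lstsq(A, target, rcond=None)
    return K
```

Subtracting the kernel-convolved reference from the target then leaves a difference image containing only the variable flux, which is what the photometry is performed on.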

20.