Similar Articles
20 similar articles found (search time: 15 ms)
1.
We present a new method for determining physical parameters of RRab variables exclusively from multicolour light curves. Our method is an inverse photometric Baade–Wesselink analysis which, using a non-linear least-squares algorithm, searches for the effective temperature (Teff) and pulsational velocity (Vp) curves and other physical parameters that best fit the observed light curves, utilizing synthetic colours and bolometric corrections from static atmosphere models. The Teff and Vp curves are initially derived from empirical relations and are then varied by the fitting algorithm. The method yields the variations and the absolute values of the radius, the effective temperature, the visual brightness and the luminosity of individual objects. Distance and mass are also determined. The method is tested on nine RRab stars previously subjected to Baade–Wesselink analyses by several authors. The physical parameters derived by our method using only the light-curve data of these stars fall well within the ranges defined by direct Baade–Wesselink and other techniques. A new empirical relation between the I_C magnitude and the pulsational velocity is also presented, which allows the Vp curve of an RRab star to be constructed purely from photometric observations to an accuracy of about 3.5 km s^−1.
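The core of the fitting step — adjusting trial parameter curves until synthetic magnitudes match the observed light curve in a least-squares sense — can be sketched in a toy one-parameter form (the sinusoidal Teff/radius curve shapes, values, and names below are illustrative assumptions, not the authors' code):

```python
import numpy as np

# Toy inverse-photometric fit: recover the temperature amplitude dT that
# best reproduces a synthetic light curve, in the spirit of varying the
# Teff/Vp curves until the model magnitudes match the observations.

def model_mag(phase, dT, T0=6500.0, dR=0.05):
    T = T0 + dT * np.sin(2 * np.pi * phase)      # assumed Teff curve
    R = 1.0 + dR * np.cos(2 * np.pi * phase)     # assumed radius curve
    L = R**2 * (T / T0)**4                       # bolometric luminosity ratio
    return -2.5 * np.log10(L)                    # magnitude variation

phase = np.linspace(0, 1, 50, endpoint=False)
observed = model_mag(phase, dT=300.0)            # fake "observed" light curve

# brute-force least squares over candidate temperature amplitudes
grid = np.linspace(0, 600, 601)
chi2 = [np.sum((observed - model_mag(phase, dT))**2) for dT in grid]
best_dT = grid[int(np.argmin(chi2))]
print(best_dT)   # recovers 300.0
```

A real implementation would fit many parameters simultaneously with a non-linear least-squares solver rather than a grid, but the objective function has the same structure.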

2.
A speedy pixon algorithm for image reconstruction is described. Two applications of the method to simulated astronomical data sets are also reported. In one case, galaxy clusters are extracted from multiwavelength microwave sky maps using the spectral dependence of the Sunyaev–Zel'dovich effect to distinguish them from the microwave background fluctuations and the instrumental noise. The second example involves the recovery of a sharply peaked emission profile, such as might be produced by a galaxy cluster observed in X-rays. These simulations show the ability of the technique both to detect sources in low signal-to-noise ratio data and to deconvolve a telescope beam in order to recover the internal structure of a source.

3.
We present a study of the dynamic range limitations in images produced with the proposed Square Kilometre Array (SKA) using the Cotton–Schwab CLEAN algorithm for data processing. The study is limited to the case of a small field of view and a snapshot observation. A new modification of the Cotton–Schwab algorithm involving optimization of the position of clean components is suggested. This algorithm can reach a dynamic range as high as 10^6 even if the point source lies between image grid points, in contrast to about 10^3 for existing CLEAN-based algorithms in the same circumstances. It is shown that the positional accuracy of clean components, floating-point precision and the w-term are extremely important at high dynamic range. The influence of these factors can be reduced if the variance of the gradient of the point spread function is minimized during the array design.
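For context, the classical Högbom CLEAN loop that Cotton–Schwab-type algorithms build on can be sketched as follows (a minimal illustration with a toy beam and source, not the modified algorithm proposed in the paper):

```python
import numpy as np

# Minimal Hoegbom CLEAN: iteratively subtract scaled, shifted copies of
# the PSF at the brightest residual pixel, accumulating a model image.

def hogbom_clean(dirty, psf, gain=0.1, niter=200, threshold=1e-3):
    residual = dirty.copy()
    model = np.zeros_like(dirty)
    cy, cx = psf.shape[0] // 2, psf.shape[1] // 2
    for _ in range(niter):
        y, x = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
        peak = residual[y, x]
        if abs(peak) < threshold:
            break
        model[y, x] += gain * peak
        # subtract the shifted PSF (edges clipped for simplicity)
        for dy in range(-cy, psf.shape[0] - cy):
            for dx in range(-cx, psf.shape[1] - cx):
                yy, xx = y + dy, x + dx
                if 0 <= yy < residual.shape[0] and 0 <= xx < residual.shape[1]:
                    residual[yy, xx] -= gain * peak * psf[cy + dy, cx + dx]
    return model, residual

# a flux-2 point source at grid point (9, 9), seen through a Gaussian beam
psf = np.exp(-0.5 * (np.mgrid[-3:4, -3:4]**2).sum(axis=0))
psf /= psf.max()
dirty = np.zeros((16, 16))
dirty[6:13, 6:13] = 2.0 * psf
model, residual = hogbom_clean(dirty, psf)   # model[9, 9] approaches 2.0
```

The paper's point is that when the source does *not* sit exactly on a grid point, this pixel-locked subtraction leaves large residuals, which their position-optimizing modification removes.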

4.
We present a new cluster detection algorithm designed for finding high-redshift clusters using optical/infrared imaging data. The algorithm has two main characteristics. First, it utilizes each galaxy's full redshift probability function, instead of an estimate of the photometric redshift based on the peak of the probability function and an associated Gaussian error. Second, it identifies cluster candidates through cross-checking the results of two substantially different selection techniques (the name 2TecX representing the cross-check of the two techniques). These are adaptations of the Voronoi Tessellations and Friends-Of-Friends methods. Monte Carlo simulations of mock catalogues show that cross-checking the cluster candidates found by the two techniques significantly reduces the detection of spurious sources. Furthermore, we examine the selection effects and the relative strengths and weaknesses of the two methods. The simulations also allow us to fine-tune the algorithm's parameters, and to define completeness and mass limit as a function of redshift. We demonstrate that the algorithm isolates high-redshift clusters at a high level of efficiency and low contamination.
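One of the two cross-checked techniques, friends-of-friends grouping, reduces to a simple linking-length search; a minimal 2D sketch (the points and linking length are hypothetical):

```python
import numpy as np

# Minimal friends-of-friends (FoF): any two points closer than the
# linking length are "friends", and friendship is transitive, so each
# connected chain of friends forms one group.

def friends_of_friends(points, linking_length):
    n = len(points)
    labels = [-1] * n
    group = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        labels[seed] = group
        stack = [seed]
        while stack:                      # grow the group transitively
            i = stack.pop()
            for j in range(n):
                if labels[j] == -1 and \
                        np.hypot(*(points[i] - points[j])) <= linking_length:
                    labels[j] = group
                    stack.append(j)
        group += 1
    return labels

pts = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.1], [10.0, 10.0]])
labels = friends_of_friends(pts, linking_length=0.7)
print(labels)  # → [0, 0, 0, 1]: a chain of three friends, one isolated point
```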

5.
Molecular clumps are the birthplaces of stars. A census of molecular clumps and a comprehensive study of their properties will help us understand star formation and even the evolution of galaxies and the Universe. As the Milky Way Imaging Scroll Painting (MWISP) project proceeds, such studies have become feasible. However, the project produces a massive volume of molecular cloud observations, so a method that can automatically detect and identify molecular clumps is urgently needed. Many 3D molecular cloud data processing methods are in wide use, typically GaussClumps, ClumpFind, FellWalker and Reinhold, but all of them require multiple input parameters to control their performance, and repeated parameter tuning and visual inspection are needed to obtain satisfactory results. For large-scale observational data, identifying molecular clumps with the existing methods would be a time- and labour-consuming task. To overcome the limitations of traditional clump detection algorithms, artificial intelligence (AI) offers a promising solution. We propose a 3D CNN (Convolutional Neural Network) method that processes 3D molecular line data automatically in two steps: detection and verification. First, ClumpFind is run with a low threshold to detect candidate objects, which are then verified by a trained 3D CNN model. A series of experiments on simulated data shows that the overall performance of this method is superior to that of the four traditional methods. Application to real MWISP data shows that the performance of the 3D CNN method is also satisfactory.
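The first, detection, step — flagging connected regions above a low threshold, in the spirit of running ClumpFind with a permissive cutoff — can be sketched on a toy cube (the CNN verification step is omitted; the function name, cube, and values are illustrative):

```python
import numpy as np
from collections import deque

# Flag candidate clumps as 6-connected regions above a low threshold in
# a 3D position-position-velocity cube, via breadth-first flood fill.

def detect_candidates(cube, threshold):
    mask = cube > threshold
    labels = np.zeros(cube.shape, dtype=int)
    current = 0
    for idx in zip(*np.nonzero(mask)):
        if labels[idx]:
            continue
        current += 1                      # start a new candidate clump
        queue = deque([idx])
        labels[idx] = current
        while queue:
            z, y, x = queue.popleft()
            for dz, dy, dx in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
                n = (z + dz, y + dy, x + dx)
                if all(0 <= n[k] < cube.shape[k] for k in range(3)) \
                        and mask[n] and not labels[n]:
                    labels[n] = current
                    queue.append(n)
    return labels, current

cube = np.zeros((8, 8, 8))
cube[1:3, 1:3, 1:3] = 5.0   # an extended clump (8 voxels)
cube[6, 6, 6] = 4.0         # a single-voxel candidate
labels, n_clumps = detect_candidates(cube, threshold=1.0)
print(n_clumps)  # → 2
```

In the paper's pipeline, each such candidate region would then be cut out and passed to the trained 3D CNN for accept/reject verification.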

6.
The AST3-2 (Antarctic Survey Telescopes) optical survey telescope is located at Dome A, the highest point of the Antarctic continent. The large volume of observational data it produces places high demands on data-processing efficiency. Meanwhile, communication with Antarctica is inconvenient and transferring the data back is difficult, so it is necessary to process the AST3-2 observations automatically on site for variable and transient source studies; however, the limited capability of the available low-power computers makes fast automatic processing challenging. Combining an existing image-subtraction scheme with machine learning algorithms, and using the AST3-2 observations from 2016 as a test sample, we developed a screening method for transients and variable sources. The method first selects possible variables with image subtraction, then extracts features of the candidate sources with principal component analysis, and uses a random forest as the machine learning classifier. In tests it achieved a recall of 97% for positive samples, verifying the feasibility of the approach, and it ultimately detected a batch of variable star candidates in the 2016 observations.

7.
The quantitative spectroscopy of stellar objects in complex environments is mainly limited by the ability of separating the object from the background. Standard slit spectroscopy, restricting the field of view to one dimension, is obviously not the proper technique in general. The emerging Integral Field (3D) technique with spatially resolved spectra of a two‐dimensional field of view provides a great potential for applying advanced subtraction methods. In this paper an image reconstruction algorithm to separate point sources and a smooth background is applied to 3D data. Several performance tests demonstrate the photometric quality of the method. The algorithm is applied to real 3D observations of a sample Planetary Nebula in M31, whose spectrum is contaminated by the bright and complex galaxy background. The ability of separating sources is also studied in a crowded field in M33. (© 2004 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim)

8.
A new technique is presented for producing images from interferometric data. The method, 'smear fitting', makes the constraints necessary for interferometric imaging double as a model, with uncertainties, of the sky brightness distribution. It does this by modelling the sky with a set of functions and then convolving each component with its own elliptical Gaussian to account for the uncertainty in its shape and location that arises from noise. This yields much sharper resolution than clean for significantly detected features, without sacrificing any sensitivity. Using appropriate functional forms for the components provides both a scientifically interesting model and imaging constraints that tend to be better than those used by traditional deconvolution methods. This allows it to avoid the most serious problems that limit the imaging quality of those methods. Comparisons of smear fitting to clean and maximum entropy are given, using both real and simulated observations. It is also shown that the famous Rayleigh criterion (resolution = wavelength/baseline) is inappropriate for interferometers as it does not consider the reliability of the measurements.
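The central idea — convolving each model component with its own elliptical Gaussian while conserving its flux — can be sketched as follows (toy sizes and fluxes, not the paper's implementation):

```python
import numpy as np

# Represent a significantly detected component as a point model smeared
# by an elliptical Gaussian whose widths (sx, sy) encode the positional
# and shape uncertainty that arises from noise.

def elliptical_gaussian(shape, x0, y0, sx, sy, flux=1.0):
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    g = np.exp(-0.5 * (((x - x0) / sx)**2 + ((y - y0) / sy)**2))
    return flux * g / g.sum()   # normalized so smearing preserves total flux

comp = elliptical_gaussian((32, 32), x0=16.0, y0=16.0, sx=1.5, sy=3.0, flux=2.5)
print(round(comp.sum(), 6))  # → 2.5, flux conserved under smearing
```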

9.
10.
AST3-2 (the second Antarctic Survey Telescope) is located at Dome A, the loftiest ice dome on the Antarctic Plateau. It produces a huge amount of observational data, which requires a more efficient data reduction program to be developed. Data transmission from Antarctica is also very difficult, so it is necessary to perform the data reduction and detect variable and transient sources automatically on site, but this attempt is restricted by the limited performance of the low-power-consumption computers available there. To this end, we developed a new method based on the existing image subtraction method and the random forest algorithm, taking the AST3-2 2016 data set as the test sample. The method performs image subtraction on the data set, then applies principal component analysis to extract features from the residual images. A random forest is used as the machine learning classifier, and in the test a recall rate of 97% was achieved for the positive sample. Our work has verified the feasibility and accuracy of this method, and finally found a batch of variable star candidates in the AST3-2 2016 data set.
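The feature-extraction step can be sketched with a plain-numpy PCA via SVD (synthetic stand-in data; the image-subtraction and random forest stages are omitted):

```python
import numpy as np

# PCA on residual (difference) image cutouts: project each flattened
# stamp onto the top principal components, producing low-dimensional
# features that a classifier such as a random forest would consume.

rng = np.random.default_rng(0)
cutouts = rng.normal(size=(100, 64))        # 100 flattened 8x8 residual stamps
centered = cutouts - cutouts.mean(axis=0)   # PCA requires centered data
_, s, vt = np.linalg.svd(centered, full_matrices=False)
features = centered @ vt[:5].T              # project onto top 5 components
print(features.shape)  # → (100, 5)
```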

11.
Spectral classification of celestial objects is an important topic in astronomy; its key step is to select and extract from the spectral data the features that are most effective for classification, and to construct a feature space from them. We propose a new feature extraction method based on 2D Fourier spectral images and apply it to the classification of stellar spectra from LAMOST (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope). The spectra are taken from LAMOST Data Release 5 (DR5): 30000 spectra of F-, G- and K-type stars are selected, and each 1D spectrum is transformed into a 2D Fourier spectral image with the Short-Time Fourier Transform (STFT). A deep convolutional network model is then used to classify the 2D Fourier spectral images, achieving a classification accuracy of 92.90%. The experimental results show that applying the STFT to LAMOST stellar spectra yields 2D Fourier spectral images that constitute a new feature space for the spectra, and that the new features are effective for spectral classification. This method is a novel attempt at spectral classification and is of pioneering significance for classifying and mining massive sets of celestial spectra.
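The 1D-to-2D transformation can be sketched with a plain numpy short-time Fourier transform (the window and hop sizes below are illustrative, not the values used in the paper):

```python
import numpy as np

# Turn a 1D spectrum into a 2D "Fourier spectral image": slide a tapered
# window along the spectrum, FFT each windowed segment, and stack the
# magnitude spectra row by row.

def stft_image(signal, win=64, hop=16):
    window = np.hanning(win)
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))   # one row per window position

spectrum = np.sin(np.linspace(0, 40 * np.pi, 1024))   # stand-in for a spectrum
image = stft_image(spectrum)
print(image.shape)  # → (61, 33): the 2D image fed to the CNN
```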

12.
Attila Elteto, Owen B. Toon, Icarus, 2010, 210(2), 566–588
We present a new parameter retrieval algorithm for Mars Global Surveyor Thermal Emission Spectrometer data. The algorithm uses Newtonian first-order sensitivity functions of the infrared spectrum in response to variations in physical parameters to fit a model spectrum to the data at 499, 1099, and 1301 cm−1. The algorithm iteratively fits the model spectrum to the data to simultaneously retrieve dust extinction optical depth, effective radius, and surface temperature. There are several sources of uncertainty in the results. The assumed dust vertical distribution can introduce errors in retrieved optical depth of a few tens of percent. The assumed dust optical constants can introduce errors in both optical depth and effective radius, although the systematic nature of these errors will not affect retrieval of trends in these parameters. The algorithm does not include the spectral signature of water ice, and hence the data need to be filtered for water ice before the algorithm is applied. The algorithm also needs sufficient dust spectral signature, and hence surface-to-atmosphere temperature contrast, to successfully retrieve the parameters. After the application of data filters the algorithm is both relatively accurate and very fast, successfully retrieving parameters, as well as meaningful parameter variability and trends, from tens of thousands of individual spectra on a global scale (Elteto, A., Toon, O.B. [2010]. Icarus, this issue). Our results for optical depth compare well with TES archive values when corrected by the single scattering albedo. Our surface temperatures are on average 1–4 K higher than the TES archive values, with greater differences at higher optical depths. Our retrievals of dust effective radius compare well with the retrievals of Wolff and Clancy (Wolff, M.J., Clancy, R.T. [2003]. J. Geophys. Res. 108 (E9), 5097) for the corresponding data selections from the same orbits.
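Iterative fitting with first-order sensitivity functions amounts to Gauss–Newton steps; a one-parameter toy sketch (the model, its values, and the `tau` parameter are invented for illustration):

```python
import numpy as np

# Update a parameter (here an "optical depth" tau) using the first-order
# sensitivity (derivative) of a model spectrum until the model matches a
# target spectrum: a one-parameter Gauss-Newton iteration.

def model(tau):
    return np.array([np.exp(-tau), np.exp(-0.5 * tau), np.exp(-2.0 * tau)])

target = model(0.8)                 # pretend these are observed radiances
tau = 0.1                           # initial guess
for _ in range(20):
    r = model(tau) - target                       # residual spectrum
    # finite-difference sensitivity of the model with respect to tau
    j = (model(tau + 1e-6) - model(tau)) / 1e-6
    tau -= (j @ r) / (j @ j)                      # Gauss-Newton step
print(round(tau, 6))  # → 0.8
```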

13.
We present new determinations of bolometric corrections and effective temperature scales as a function of infrared optical colours, using a large data base of photometric observations of about 6500 Population II giants in Galactic globular clusters (GGCs), covering a wide range in metallicity (−2.0 < [Fe/H] < 0.0). New relations for BC_K versus (V − K), (J − K) and BC_V versus (B − V), (V − I), (V − J), and new calibrations for Teff, using both an empirical relation and model atmospheres, are provided. Moreover, an empirical relation to derive the R parameter of the infrared flux method as a function of the stellar temperature is also presented.

14.
Machine learning has achieved great success in many areas today. Boosting algorithms have a strong ability to adapt to various scenarios with high accuracy, and have played a great role in many fields, but in astronomy their application is still rare. In response to the low classification accuracy on the faint star/galaxy source set of the Sloan Digital Sky Survey (SDSS), a recent machine learning method, eXtreme Gradient Boosting (XGBoost), is introduced. The complete photometric data set is obtained from the SDSS-DR7 and divided into a bright source set and a faint source set according to stellar magnitude. Firstly, the ten-fold cross-validation method is used for the bright source set and the faint source set respectively, and the XGBoost algorithm is used to establish the star/galaxy classification model. Then, grid search and other methods are used to tune the XGBoost parameters. Finally, based on the galaxy classification accuracy and other indicators, the classification results are analyzed by comparing with the models of Function Tree (FT), Adaptive Boosting (AdaBoost), Random Forest (RF), Gradient Boosting Decision Tree (GBDT), Stacked Denoising AutoEncoders (SDAE), and Deep Belief Nets (DBN). The experimental results show that XGBoost improves the classification accuracy of galaxies in the faint source set by nearly 10% compared to the function tree algorithm, and improves the classification accuracy of the faintest sources in the faint source set by nearly 5%. Compared with other traditional machine learning algorithms and deep neural networks, XGBoost also shows different degrees of improvement.
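XGBoost itself is not reimplemented here, but the boosting principle it builds on can be sketched with a minimal AdaBoost over decision stumps (synthetic data; all values are illustrative):

```python
import numpy as np

# Minimal AdaBoost: fit one-feature threshold "stumps" in rounds, each
# round reweighting the training points the previous stumps got wrong,
# then classify with the alpha-weighted vote of all stumps.

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)           # synthetic labels

w = np.full(len(y), 1 / len(y))
stumps, alphas = [], []
for _ in range(10):
    best = None
    for f in range(2):                               # exhaustive stump search
        for thr in np.linspace(-2, 2, 41):
            for sign in (1, -1):
                pred = sign * np.where(X[:, f] > thr, 1, -1)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, f, thr, sign)
    err, f, thr, sign = best
    err = max(err, 1e-10)
    alpha = 0.5 * np.log((1 - err) / err)            # stump weight
    pred = sign * np.where(X[:, f] > thr, 1, -1)
    w = w * np.exp(-alpha * y * pred)                # boost the mistakes
    w /= w.sum()
    stumps.append((f, thr, sign)); alphas.append(alpha)

score = sum(a * s * np.where(X[:, f] > t, 1, -1)
            for a, (f, t, s) in zip(alphas, stumps))
accuracy = np.mean(np.sign(score) == y)
print(accuracy)   # well above a single stump on this easy synthetic set
```

Gradient boosting (GBDT, XGBoost) replaces the reweighting with fitting each new tree to the gradient of a loss, but the additive-ensemble structure is the same.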

15.
An automatic Bayesian Kepler periodogram has been developed for identifying and characterizing multiple planetary orbits in precision radial velocity data. The periodogram is powered by a parallel tempering Markov chain Monte Carlo (MCMC) algorithm which is capable of efficiently exploring a multiplanet model parameter space. The periodogram employs an alternative method for converting the time of an observation to true anomaly that enables it to handle much larger data sets without a significant increase in computation time. Improvements in the periodogram and further tests using data from HD 208487 have resulted in the detection of a second planet with a period of 909 (+82/−92) d, an eccentricity of 0.37 (+0.26/−0.20), a semimajor axis of 1.87 (+0.13/−0.14) au and an M sin i = 0.45 (+0.11/−0.13) M_J. The revised parameters of the first planet are period = 129.8 ± 0.4 d, eccentricity = 0.20 ± 0.09, semimajor axis = 0.51 ± 0.02 au and M sin i = 0.41 ± 0.05 M_J. Particular attention is paid to several methods for calculating the model marginal likelihood, which is used to compare the probabilities of models with different numbers of planets.
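The MCMC engine behind such a periodogram can be sketched with a single-parameter Metropolis sampler (a toy circular orbit with known amplitude and phase; parallel tempering and the multi-planet model are omitted, and all values are invented):

```python
import numpy as np

# Random-walk Metropolis sampling of one parameter (the orbital period)
# given simulated radial velocities: propose a nearby period, accept it
# with probability min(1, likelihood ratio), and collect the samples.

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 500, 60))                  # observation epochs (d)
true_period, sigma = 130.0, 2.0
rv = 40 * np.sin(2 * np.pi * t / true_period) + rng.normal(0, sigma, t.size)

def log_like(period):
    model = 40 * np.sin(2 * np.pi * t / period)
    return -0.5 * np.sum((rv - model)**2) / sigma**2

period, samples = 128.0, []
lp = log_like(period)
for _ in range(4000):
    prop = period + rng.normal(0, 0.5)                # random-walk proposal
    lpp = log_like(prop)
    if np.log(rng.uniform()) < lpp - lp:              # Metropolis acceptance
        period, lp = prop, lpp
    samples.append(period)
posterior_mean = np.mean(samples[1000:])              # discard burn-in
print(posterior_mean)   # close to the true 130 d period
```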

16.
In this paper we present an interference detection toolbox consisting of a high dynamic range Digital Fast‐Fourier‐Transform spectrometer (DFFT, based on FPGA‐technology) and data analysis software for automated radio frequency interference (RFI) detection. The DFFT spectrometer allows high speed data storage of spectra on time scales of less than a second. The high dynamic range of the device assures constant calibration even during extremely powerful RFI events. The software uses an algorithm which performs a two‐dimensional baseline fit in the time‐frequency domain, searching automatically for RFI signals superposed on the spectral data. We demonstrate that the software operates successfully on computer‐generated RFI data as well as on real DFFT data recorded at the Effelsberg 100‐m telescope. At 21‐cm wavelength, RFI signals can be identified down to the 4σ rms level. A statistical analysis of all RFI events detected in our observational data revealed that: (1) the mean signal strength is comparable to the astronomical line emission of the Milky Way, (2) the interferences are polarised, (3) electronic devices in the neighbourhood of the telescope contribute significantly to the RFI radiation. We also show that the radiometer equation is no longer fulfilled in the presence of RFI signals. (© 2007 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim)
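The automated flagging idea — fit a smooth baseline, then flag deviations beyond roughly 4σ — can be sketched in one dimension (the paper fits a two-dimensional time-frequency baseline; the data and threshold details below are illustrative):

```python
import numpy as np

# Fit a low-order baseline along the frequency axis, estimate the noise
# robustly from the median absolute residual, and flag channels that
# deviate by more than 4 sigma as RFI.

rng = np.random.default_rng(2)
freq = np.arange(256)
spectrum = 10 + 0.01 * freq + rng.normal(0, 0.5, freq.size)  # baseline + noise
spectrum[100] += 20.0                                        # injected RFI spike

baseline = np.polyval(np.polyfit(freq, spectrum, 2), freq)   # smooth fit
resid = spectrum - baseline
sigma_hat = np.median(np.abs(resid)) / 0.6745                # robust sigma
flags = np.abs(resid) > 4 * sigma_hat
print(np.nonzero(flags)[0])  # flagged channels, including the injected spike
```

The median-based sigma estimate matters here: an rms computed directly from the residuals would be inflated by the RFI itself and raise the threshold.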

17.
1 INTRODUCTION The Beijing–Arizona–Taipei–Connecticut Color Survey of the Sky (hereafter BATC) utilizes the 15 intermediate-band filters to make CCD image photometric observations. The BATC photometric system ties its magnitude zero point to the spectro-photometric AB magnitude system. The AB system is a monochromatic f_ν system first introduced by Oke in 1969 with a provisional calibration designated AB69. The AB system selects F subdwarfs around visual magnitude 9 as standa…

18.
Machine learning has achieved great success in many areas today, but its predictive performance often depends on the specific problem. An ensemble learning method forecasts results by combining multiple base classifiers; its ability to adapt to various scenarios is therefore strong, and its classification accuracy is high. In response to the low classification accuracy on the faintest source magnitude set of stars/galaxies in the Sloan Digital Sky Survey (SDSS), a star/galaxy classification algorithm based on stacking ensemble learning is proposed in this paper. The complete photometric data set is obtained from the SDSS Data Release (DR) 7, and divided into a bright source magnitude set, a faint source magnitude set, and a faintest source magnitude set according to stellar magnitude. Firstly, the 10-fold nested cross-validation method is used for the faintest source magnitude set; then the Support Vector Machine (SVM), Random Forest (RF), and eXtreme Gradient Boosting (XGBoost) algorithms are used to establish the base-classifier models, with the Gradient Boosting Decision Tree (GBDT) as the meta-classifier model. Finally, based on the classification accuracy of galaxies and other indicators, the classification results are analyzed and compared with the results obtained by the Function Tree (FT), SVM, RF, GBDT, Stacked Denoising Autoencoders (SDAE), Deep Belief Nets (DBN), and Deep Perception Decision Tree (DPDT) models. The experimental results show that the stacking ensemble learning model improves the classification accuracy of galaxies in the faintest source magnitude set by nearly 10% compared to the function tree algorithm. Compared with other traditional machine learning algorithms, stronger boosting algorithms, and deep learning algorithms, the stacking ensemble learning model also shows different degrees of improvement.
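The stacking pattern itself — out-of-fold base-learner scores fed to a meta-learner — can be sketched with two simple numpy base learners (not the paper's SVM/RF/XGBoost/GBDT stack; the data and learners are toy assumptions):

```python
import numpy as np

# Stacking: each base learner produces out-of-fold scores (so the
# meta-learner never sees predictions made on a learner's own training
# data), and a simple meta-learner combines the score columns.

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 2))
y = (X[:, 0] - 0.5 * X[:, 1] > 0).astype(int)

def centroid_score(Xtr, ytr, Xte):      # base learner 1: nearest centroid
    c1, c0 = Xtr[ytr == 1].mean(axis=0), Xtr[ytr == 0].mean(axis=0)
    return ((Xte - c0)**2).sum(axis=1) - ((Xte - c1)**2).sum(axis=1)

def feature_score(Xtr, ytr, Xte):       # base learner 2: deliberately weak
    return Xte[:, 0]

meta_X = np.zeros((len(y), 2))
folds = np.array_split(np.arange(len(y)), 5)   # 5-fold out-of-fold scores
for fold in folds:
    train = np.setdiff1d(np.arange(len(y)), fold)
    meta_X[fold, 0] = centroid_score(X[train], y[train], X[fold])
    meta_X[fold, 1] = feature_score(X[train], y[train], X[fold])

# meta-learner: weight each base score by its out-of-fold accuracy
weights = np.array([np.mean((m > 0).astype(int) == y) for m in meta_X.T])
stacked = (meta_X * weights).sum(axis=1)
accuracy = np.mean((stacked > 0).astype(int) == y)
print(accuracy)   # the stack leans on the stronger base learner
```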

19.
The numerical kernel approach to difference imaging has been implemented and applied to gravitational microlensing events observed by the PLANET collaboration. The effect of an error in the source-star coordinates is explored and a new algorithm is presented for determining the precise coordinates of the microlens in blended events, essential for accurate photometry of difference images. It is shown how the photometric reference flux need not be measured directly from the reference image but can be obtained from measurements of the difference images combined with knowledge of the statistical flux uncertainties. The improved performance of the new algorithm, relative to isis2, is demonstrated.
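The kernel-solution idea behind difference imaging — solving linearly for the convolution kernel that maps the reference image onto the target image — can be sketched as follows (toy noise-free data, a 3×3 kernel, and invented helper names):

```python
import numpy as np

# Solve, by linear least squares, for the small convolution kernel K
# with target ~ ref (*) K, then form the difference image. Each kernel
# pixel contributes one column (a shifted copy of ref) to the design
# matrix, so the problem is linear in the kernel values.

def convolve2d_same(img, k):
    kh, kw = k.shape
    out = np.zeros_like(img)
    pad = np.pad(img, ((kh // 2,), (kw // 2,)), mode="constant")
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * pad[i:i + img.shape[0], j:j + img.shape[1]]
    return out

rng = np.random.default_rng(5)
ref = rng.uniform(0, 1, (20, 20))
true_k = np.array([[0.0, 0.2, 0.0], [0.2, 0.2, 0.2], [0.0, 0.2, 0.0]])
target = convolve2d_same(ref, true_k)

cols = []
for i in range(3):                       # one column per kernel pixel
    for j in range(3):
        basis = np.zeros((3, 3)); basis[i, j] = 1.0
        cols.append(convolve2d_same(ref, basis).ravel())
A = np.stack(cols, axis=1)
k_fit, *_ = np.linalg.lstsq(A, target.ravel(), rcond=None)
difference = target - convolve2d_same(ref, k_fit.reshape(3, 3))
print(np.abs(difference).max())  # ~0: the fitted kernel reproduces the target
```

With real data the difference image is nonzero exactly where sources have varied, which is what makes the approach powerful for microlensing photometry in crowded fields.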

20.
A new method of wavefront sensing that uses a pair of equally defocused images to derive the wavefront aberrations is presented. Unlike in conventional curvature-sensing systems, the sensor works in a near-focus regime where the transport of intensity equation is not valid, and, unlike in phase-diversity methods, a non-iterative algorithm is used to infer the wavefront aberrations. The sensor designs outlined only require a small number of detector pixels: two designs with five and nine pixels per plane are analysed, and the nine-element sensor (NES) is shown to have a competitive measurement sensitivity compared with existing low-order astronomical wavefront sensors. The NES is thus well suited to applications such as adaptive optics for the individual telescopes in an optical interferometer array.

