Similar Documents
A total of 18 similar documents were retrieved (search time: 187 ms).
1.
A Non-Parametric Estimation Method for the Physical Parameters of Stellar Atmospheres   Cited by: 1 in total (self-citations: 0, citations by others: 1)
The physical parameters of a stellar atmosphere (effective temperature, surface gravity, and chemical abundance) are the main factors behind the differences among stellar spectra, and their automatic measurement is an important part of the automated processing of the massive spectral data produced by LAMOST and other large-scale survey telescopes. For estimating the atmospheric parameters of each star in large samples of stellar spectra, an estimation algorithm based on a variable window-width kernel function is proposed. The variable window-width algorithm improves on the fixed window-width algorithm and consists of three steps: (1) apply PCA to the historical stellar spectra to obtain low-dimensional spectral feature data; (2) use the correspondence between the feature data and the physical parameters to build a variable window-width non-parametric estimation model; (3) with this model, directly compute the three physical parameters (effective temperature, surface gravity, and metallicity) of the spectra to be measured. Experimental results show that, compared with the fixed window-width model and with methods reported in other literature, this method achieves higher estimation accuracy and robustness.
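For illustration, the following is a minimal sketch of this kind of pipeline: PCA compression followed by a variable-bandwidth (variable window-width) Nadaraya-Watson kernel regression. The function names and the k-nearest-neighbour bandwidth rule are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch: PCA + variable-bandwidth Nadaraya-Watson kernel regression.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

def fit_pca(train_spectra, n_components=10):
    """Step (1): reduce the spectra to low-dimensional feature data."""
    pca = PCA(n_components=n_components)
    return pca, pca.fit_transform(train_spectra)

def variable_bandwidths(features, k=20):
    """Step (2): assign each training point a local bandwidth,
    here the distance to its k-th nearest neighbour (an assumed rule)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(features)
    dist, _ = nn.kneighbors(features)
    return dist[:, -1]

def predict_parameters(pca, feats, params, bandwidths, new_spectra):
    """Step (3): kernel-weighted estimate of (Teff, logg, [Fe/H]) for new spectra.
    params has shape (n_train, 3)."""
    new_feats = pca.transform(new_spectra)
    preds = []
    for x in new_feats:
        d2 = np.sum((feats - x) ** 2, axis=1)
        w = np.exp(-0.5 * d2 / bandwidths ** 2)   # per-point (variable) bandwidth
        preds.append(w @ params / w.sum())
    return np.array(preds)
```

A fixed window-width estimator corresponds to replacing the per-point bandwidths with a single constant.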

2.
The new generation of large-scale spectroscopic surveys has produced nearly ten million low-resolution stellar spectra. Based on these data, a machine-learning method named The Cannon is introduced. The method relies entirely on spectra with known stellar atmospheric parameters (effective temperature, surface gravity, metallicity, etc.): it builds feature vectors in a data-driven way, establishes a functional relation between the spectral flux features and the stellar parameters, and then applies this relation to observed spectra to derive their atmospheric parameters. The main advantages of The Cannon are that it is not directly based on any stellar physical model, so it is more widely applicable, and that, because it uses full-spectrum information, it can obtain reliable parameters even for spectra with low signal-to-noise ratio. The algorithm therefore has clear advantages in the data processing and parameter determination of large-scale stellar spectra. In addition, The Cannon is used to obtain the stellar parameters of K giants and M giants in the LAMOST spectral data.
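As a rough illustration of a Cannon-style data-driven model (a simplification, not the published The Cannon code): each pixel's flux is modelled as a quadratic polynomial in the labels, the coefficients are fitted on the training set, and the labels of a new spectrum are recovered by least squares.

```python
# Illustrative Cannon-style model: per-pixel quadratic polynomial in the labels.
import numpy as np
from scipy.optimize import least_squares

def design_matrix(labels):
    """Quadratic expansion of the labels: 1, l_i, l_i * l_j."""
    n, k = labels.shape
    cols = [np.ones(n)] + [labels[:, i] for i in range(k)]
    cols += [labels[:, i] * labels[:, j] for i in range(k) for j in range(i, k)]
    return np.vstack(cols).T

def train(train_flux, train_labels):
    """Training step: solve for the per-pixel coefficients.
    train_flux: (n_stars, n_pixels); train_labels: (n_stars, n_labels)."""
    A = design_matrix(train_labels)
    coeffs, *_ = np.linalg.lstsq(A, train_flux, rcond=None)   # (n_terms, n_pixels)
    return coeffs

def infer_labels(coeffs, flux, label_guess):
    """Test step: find the labels whose model spectrum best matches the flux."""
    def resid(l):
        return design_matrix(l[None, :])[0] @ coeffs - flux
    return least_squares(resid, label_guess).x
```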

3.
The Large Sky Area Multi-Object Fiber Spectroscopy Telescope (LAMOST, also known as the Guo Shoujing Telescope) survey provides massive stellar spectral data, and its DR5 data set contains line indices and effective temperatures for a large number of A-type stars. Machine-learning methods such as neural-network models can uncover the underlying relationships in data and have been widely applied in many disciplines. Using the 19 line indices and effective temperatures of the A-type stars in DR5, principal component analysis gives the percentage of the total data information accounted for by each line index; on this basis, the 12 line indices most closely related to the effective temperature are selected, and a neural-network regression model for the effective temperature is trained on the data whose effective-temperature error is smaller than 100 K. The model performs well on the test set, with a coefficient of determination R^2 of 0.904 and a mean absolute error of 58.38 K, a clear improvement in accuracy over the models of related studies. In addition, the model is used to re-measure the original data whose effective-temperature errors exceed 100 K, and the mean absolute error of the resulting effective temperatures decreases markedly; the A5-type stars in DR5, which lack effective-temperature values, are also supplemented by the model's measurements.
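A minimal sketch of the regression step, using scikit-learn; the selection of the 12 indices, the network size, and all names are assumptions for demonstration only.

```python
# Sketch: train a neural-network regressor for Teff from selected line indices.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_absolute_error

def train_teff_model(indices, teff, teff_err, max_err=100.0):
    """indices: (n_stars, 12) selected line indices; teff, teff_err: (n_stars,)."""
    good = teff_err < max_err                     # keep only well-measured stars
    X_tr, X_te, y_tr, y_te = train_test_split(indices[good], teff[good],
                                              test_size=0.2, random_state=0)
    scaler = StandardScaler().fit(X_tr)
    model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000,
                         random_state=0).fit(scaler.transform(X_tr), y_tr)
    pred = model.predict(scaler.transform(X_te))
    print("R^2 =", r2_score(y_te, pred), " MAE =", mean_absolute_error(y_te, pred))
    return scaler, model
```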

4.
孔旭, 程福臻. 《天文学进展》 (Progress in Astronomy), 2001, 19(3): 375-386
Evolutionary population synthesis is an effective method that, given a star formation rate and an initial mass function, uses theoretical stellar evolutionary tracks and stellar spectral libraries to produce composite properties (spectra, luminosities) that are fitted to the observed properties of composite stellar systems such as galaxies and star clusters, thereby yielding their stellar population composition. This review discusses the significance of evolutionary population synthesis in astrophysical research, its principles and algorithms, and the four main inputs that most affect its results: the stellar evolutionary tracks, the stellar spectral library, the initial mass function, and the star formation rate.
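As a toy illustration of the basic idea only (not the full method reviewed here): the composite spectrum is a star-formation-rate-weighted sum of simple-stellar-population (SSP) spectra; the inputs below are hypothetical.

```python
# Toy sketch of population synthesis: SFR-weighted sum of SSP spectra.
import numpy as np

def composite_spectrum(ssp_spectra, ages, sfr, dt):
    """ssp_spectra: (n_ages, n_wavelengths) spectra of unit-mass SSPs;
    ages: age of each SSP; sfr(t): star formation rate; dt: width of each age bin."""
    weights = np.array([sfr(t) * dt for t in ages])   # mass formed in each age bin
    return weights @ ssp_spectra                       # weighted sum over ages
```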

5.
Commonly used methods for fitting the generatrix of a shaped paraboloidal antenna include global polynomial fitting and equal-interval piecewise fitting. Global polynomial fitting yields high-order results and a heavy computational load, and high-order polynomials tend to oscillate near the edges; equal-interval piecewise fitting divides the generatrix data into segments of equal length, a somewhat blind segmentation that can lead to many fitting parameters and poor smoothness. To address these problems, an adaptive piecewise fitting method based on the distribution of the generatrix fitting residuals is proposed. The method consists of two steps, an initial global fit and a piecewise fit: the initial global fit determines the distribution of the fitting residuals at the discrete points; the piecewise fit first segments the discrete data according to this residual distribution and then fits each segment with a low-order polynomial. Comparisons on real examples show that the method avoids the instability of high-order fitting, reduces the number of segments, and is better suited to fitting the generatrix of a shaped paraboloidal antenna.
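A minimal sketch of one plausible reading of the two-step scheme (initial global fit, then residual-driven segmentation with low-order fits); the threshold rule and polynomial orders are illustrative assumptions.

```python
# Sketch: residual-driven adaptive piecewise polynomial fitting.
import numpy as np

def adaptive_piecewise_fit(x, y, global_order=8, seg_order=3, tol=None):
    # Step 1: initial global fit to obtain the residual distribution.
    resid = y - np.polyval(np.polyfit(x, y, global_order), x)
    tol = tol if tol is not None else 2.0 * np.std(resid)
    # Step 2: start a new segment wherever the residual exceeds the tolerance,
    # then fit each segment with a low-order polynomial.
    breaks = sorted(set([0] + [i for i in range(1, len(x)) if abs(resid[i]) > tol] + [len(x)]))
    segments = []
    for lo, hi in zip(breaks[:-1], breaks[1:]):
        if hi - lo > seg_order:                      # need enough points for the fit
            coeffs = np.polyfit(x[lo:hi], y[lo:hi], seg_order)
            segments.append((x[lo], x[hi - 1], coeffs))
    return segments
```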

6.
An observed stellar spectrum generally consists of a continuum, spectral lines, and noise; the continuum is the smooth variation of radiative flux with wavelength produced by black-body-like radiation. Studies such as spectral classification and the estimation of stellar physical parameters depend on accurate extraction of the continuum and line information, so a central task of spectral data processing is to fit the continuum and normalize the spectrum in order to extract the line features. Common continuum-fitting methods include polynomial fitting, median filtering, and wavelet filtering. Existing methods have limitations of varying degrees in robustness and accuracy when the signal-to-noise ratio is low, when cosmic-ray signals are present, or when emission lines exist. Since no automated normalization method has yet been applied to the ~10^7 spectra of the Guo Shoujing Telescope (LAMOST), it is urgent to develop an automated stellar-spectrum normalization method that works across different temperatures, signal-to-noise ratios, and wavelength coverages. Based on a careful analysis of different types of spectra, a continuum-fitting method based on a division into fixed windows is proposed. The method selects the data points that trace the continuum and produces a more accurate continuum by finely controlling the smoothness of a spline function. Experiments with LAMOST spectra of different spectral types, temperature ranges, and wavelength coverages show that the method has good accuracy and general applicability.
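A minimal sketch of fixed-window continuum fitting under simple assumptions: one continuum-tracing point is selected per window (here via an upper flux percentile) and a smoothing spline is fitted through those points. The window size, percentile, and smoothing value are illustrative choices, not the parameters used in the paper.

```python
# Sketch: fixed-window continuum point selection + smoothing spline.
import numpy as np
from scipy.interpolate import UnivariateSpline

def fit_continuum(wave, flux, window=100, percentile=90, smooth=None):
    """Return the fitted continuum and the normalized spectrum."""
    pts_w, pts_f = [], []
    for i in range(0, len(wave), window):
        w, f = wave[i:i + window], flux[i:i + window]
        if len(f) == 0:
            continue
        level = np.percentile(f, percentile)          # continuum-tracing flux level
        j = np.argmin(np.abs(f - level))              # point closest to that level
        pts_w.append(w[j]); pts_f.append(f[j])
    spline = UnivariateSpline(pts_w, pts_f, s=smooth)  # s controls the smoothness
    continuum = spline(wave)
    return continuum, flux / continuum
```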

7.
Stellar spectral classification is an important research problem in astronomy. For classifying the massive, high-dimensional stellar spectra already collected, template-matching methods have been fairly successful for spectral-type classification, but their drawback is that the differences between standard stellar templates are not reflected when matching real observations; in particular, template matching often fails when a joint classification of spectral type and luminosity type is required, while luminosity classification based on line-feature measurements depends strongly on the accuracy of line fitting. To solve this joint classification problem, a model for classifying stellar spectral type and luminosity type based on a convolutional neural network (Classification model of Stellar Spectral type and Luminosity type based on Convolution Neural Network, CSSL CNN) is introduced. The model uses convolutional layers to extract spectral features, learns the important features through an attention module, reduces the spectral dimension and compresses the number of model parameters with pooling operations, and uses fully connected layers to learn the features and classify the spectra. In the experiments, the public Data Release 5 (DR5) of the Large Sky Area Multi-Object Fiber Spectroscopy Telescope (LAMOST) was used to verify and evaluate the model (71282 stellar spectra, each containing more than 3000 feature dimensions). The results show that the CNN-based model reaches an accuracy of 92.04% for spectral-type classification, whereas a deep-neural-network model (Celestial bodies Spectral Classification Model, CSC Model) reaches only 87.54%; for the joint classification of spectral type and luminosity type, CSSL CNN reaches 83.91%, whereas the template-matching method MKCLASS reaches only 38.38% and is less efficient.
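For illustration, a minimal 1-D convolutional classifier in the spirit of the model described above (convolution, pooling, fully connected layers); the attention module is omitted and all layer sizes are assumptions, so this is a sketch rather than CSSL CNN itself.

```python
# Sketch: 1-D CNN for stellar spectral classification (PyTorch).
import torch
import torch.nn as nn

class SpectrumCNN(nn.Module):
    def __init__(self, n_pixels=3000, n_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (n_pixels // 16), 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):            # x: (batch, 1, n_pixels) normalized fluxes
        return self.classifier(self.features(x))

# Example forward pass on a random batch of 8 spectra.
model = SpectrumCNN()
logits = model(torch.randn(8, 1, 3000))
```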

8.
This paper summarizes methods for comparing the results of stellar-atmosphere model calculations with observations in order to derive a number of important stellar physical parameters, including the effective temperature T_eff, surface gravity log g, element abundances x_i, turbulent velocity ξ_t, stellar radius R, rotational velocity V sin i, angular diameter θ, mass M, luminosity L, and information on the star's evolutionary state.

9.
Starting from the spectral types and periods of Mira variables and the spectra of their associated OH masers, and adopting a radiation-pressure-driven stellar-wind mass-loss mechanism, this paper calculates the mass-loss rates of 42 Mira variables with double-peaked OH maser spectra, and derives the relations between the mass-loss rate and the stellar luminosity, the pulsation period, and the velocity of the associated maser source. No obvious dependence of the mass-loss rate on the surface effective temperature is found. The results are briefly discussed at the end of the paper.
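For orientation, in the commonly quoted single-scattering limit of a radiation-pressure-driven wind (not necessarily the exact formulation adopted in this paper), the photon momentum balances the momentum carried by the outflow:

```latex
\dot{M}\, v_{\mathrm{exp}} \simeq \frac{L}{c}
\qquad\Longrightarrow\qquad
\dot{M} \simeq \frac{L}{c\, v_{\mathrm{exp}}},
```

where the expansion velocity v_exp can be estimated as half the velocity separation of the two OH maser peaks.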

10.
The Chinese Space Station Telescope (CSST), expected to be launched in 2025, will mainly carry out large-scale multi-band imaging and slitless spectroscopic surveys. Before launch, ground-based telescopes are needed to test the optical imaging system, the detectors, and the long-term operational stability of the equipment. A ground test of slitless spectroscopy with the 80 cm telescope at the Xinglong Observatory was designed. Using the strong absorption and emission line features of A-type stars, B-type stars, and the Wolf-Rayet star HD4004, the dispersion relation was fitted and found to vary with position in the field. The zeroth-order spectrum positions and dispersion-relation coefficients of 53 exposures of HR3173 were fitted with quadratic surfaces, and these surfaces were then used to wavelength-calibrate the HR718 data lying within the range of the HR3173 zeroth-order image positions; the mean radial-velocity precision obtained over an 8×13 pixel region of the CCD is 51 km/s.
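A minimal sketch of fitting a quadratic surface to a dispersion coefficient as a function of zeroth-order image position, one plausible reading of the calibration step described above; the data arrays are placeholders.

```python
# Sketch: least-squares fit of a quadratic surface z(x, y) over the detector.
import numpy as np

def fit_quadratic_surface(x, y, z):
    """Fit z = a0 + a1*x + a2*y + a3*x^2 + a4*x*y + a5*y^2 to sampled points."""
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

def eval_quadratic_surface(coeffs, x, y):
    """Evaluate the fitted surface at new zeroth-order positions."""
    a0, a1, a2, a3, a4, a5 = coeffs
    return a0 + a1 * x + a2 * y + a3 * x**2 + a4 * x * y + a5 * y**2
```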

11.
The physical parameters of a stellar atmosphere, e.g. the effective temperature, surface gravity and chemical abundance, are the main factors behind the differences in stellar spectra, and the automatic measurement of these parameters is an important part of the automatic processing of the immense amount of spectral data provided by LAMOST and other survey telescopes. Aiming at the estimation of the physical parameters for every star in large samples of stellar spectral data, a variable window-width algorithm is proposed in this article. It consists of the following three steps: (1) a PCA (principal component analysis) treatment of historical stellar spectral data is carried out to obtain low-dimensional characteristic data of the spectra; (2) the correlation between the characteristic data and the physical parameters is established using a non-parametric estimator with variable window width; (3) by means of this estimator, the three physical parameters of the star are directly calculated. As shown by the experimental results, in comparison with the fixed window-width estimator and other algorithms reported in the literature, our algorithm is more accurate and robust.

12.
With the help of computer tools and algorithms, automatic stellar spectral classification has become an area of current interest. The process of stellar spectral classification mainly includes two steps: dimension reduction and classification. As a popular dimensionality reduction technique, Principal Component Analysis (PCA) is widely used in stellar spectral classification. Another dimensionality reduction technique, Locality Preserving Projections (LPP), has not been widely used in astronomy; its advantage is that it preserves the local structure of the data after dimensionality reduction. In view of this, we investigate how to apply LPP+SVM to classifying stellar spectral subclasses and compare the performance of LPP with that of PCA. The classification process consists of the following steps. Firstly, PCA and LPP are respectively applied to reduce the dimension of the spectral data. Then, a Support Vector Machine (SVM) is used to classify the 4 subclasses of K-type and 3 subclasses of F-type spectra from the Sloan Digital Sky Survey (SDSS). Lastly, the performance of LPP+SVM is compared with that of PCA+SVM in stellar spectral classification, and we find that LPP does better than PCA.
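A minimal sketch of an LPP + SVM pipeline. scikit-learn ships PCA and SVC but no LPP, so a small linear LPP is written out here from its generalized eigenproblem; the neighbourhood size, heat-kernel width, and SVM settings are illustrative assumptions, not the paper's configuration.

```python
# Sketch: Locality Preserving Projections (LPP) followed by an SVM classifier.
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph
from sklearn.svm import SVC

def lpp_fit(X, n_components=10, n_neighbors=5, t=1.0):
    """Return a projection matrix A of shape (n_features, n_components).
    In practice the spectra are often PCA-compressed first to keep the
    eigenproblem small; that step is omitted here."""
    W = kneighbors_graph(X, n_neighbors, mode="distance").toarray()
    W = np.exp(-W**2 / t) * (W > 0)                  # heat-kernel weights on k-NN edges
    W = np.maximum(W, W.T)                            # symmetrize the adjacency
    D = np.diag(W.sum(axis=1))
    L = D - W                                         # graph Laplacian
    # Smallest generalized eigenvectors give the locality-preserving directions.
    vals, vecs = eigh(X.T @ L @ X, X.T @ D @ X + 1e-6 * np.eye(X.shape[1]))
    return vecs[:, :n_components]

# Usage: project train/test spectra, then classify subclasses with an SVM.
# A = lpp_fit(X_train); clf = SVC(kernel="rbf").fit(X_train @ A, y_train)
# accuracy = clf.score(X_test @ A, y_test)
```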

13.
Using the near-infrared spectral stellar library of Cenarro et al., the behaviour of the Mg I line at 8807 Å and nearby TiO bands is analyzed in terms of the effective temperature, surface gravity and metallicity of the library stars. New spectroscopic indices for both spectral features – namely MgI and sTiO – are defined, and their sensitivities to different signal-to-noise ratios, spectral resolutions, flux calibrations and sky emission-line residuals are characterized. The two new indices exhibit interesting properties. In particular, MgI is a good indicator of the Mg abundance, whereas sTiO is a powerful dwarf-to-giant discriminator for cold spectral types. Empirical fitting polynomials that reproduce the strength of the new indices as a function of the stellar atmospheric parameters are computed, and a Fortran routine with the fitting-function predictions is made available. A thorough study of several error sources, non-solar [Mg/Fe] ratios and their influence on the fitting-function residuals is also presented. From this analysis, an [Mg/Fe] underabundance of ∼−0.04 is derived for the Galactic open cluster M67.
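As a generic illustration of how an index of this kind is measured (the exact MgI and sTiO bandpass definitions of the paper are not reproduced), the mean flux in a central feature band is compared with a pseudo-continuum interpolated between a blue and a red sideband; the wavelength limits in the usage line are placeholders.

```python
# Sketch: generic flux-ratio spectroscopic index from three bandpasses.
import numpy as np

def band_mean(wave, flux, lo, hi):
    m = (wave >= lo) & (wave <= hi)
    return wave[m].mean(), flux[m].mean()

def feature_index(wave, flux, blue, centre, red):
    """blue, centre, red: (lambda_lo, lambda_hi) bandpass limits in Angstrom."""
    wb, fb = band_mean(wave, flux, *blue)
    wr, fr = band_mean(wave, flux, *red)
    wc, fc = band_mean(wave, flux, *centre)
    # Pseudo-continuum linearly interpolated to the centre of the feature band.
    f_cont = fb + (fr - fb) * (wc - wb) / (wr - wb)
    return fc / f_cont            # dimensionless flux-ratio index

# Placeholder bandpasses, for illustration only:
# feature_index(wave, flux, (8775., 8787.), (8802., 8811.), (8815., 8850.))
```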

14.
With the availability of multi-object spectrometers and the design and operation of some large-scale sky surveys, the issue of how to deal with enormous quantities of spectral data efficiently and accurately is becoming more and more important. This work investigates the classification of stellar spectra under the assumption that there is no perfect absolute flux calibration, for example when considering spectra from the Guo Shou Jing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope, LAMOST). The proposed scheme consists of two procedures: firstly, a spectrum is normalized based on a 17th-order polynomial fit; secondly, a random forest (RF) is utilized to classify the stellar spectra. Experiments on four stellar spectral libraries show that the RF has good classification performance. This work also studies the problem of spectral feature evaluation based on the RF; the evaluation is helpful in understanding the results of the proposed stellar classification scheme and in exploring its potential improvements in the future.
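A minimal sketch of the two-step scheme, assuming the spectra and labels are already loaded: divide out a 17th-order polynomial pseudo-continuum, then classify with a random forest; hyperparameters are illustrative.

```python
# Sketch: 17th-order polynomial normalization + random forest classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def normalize(wave, flux, order=17):
    """Divide out a high-order polynomial fit as a pseudo-continuum.
    The wavelength axis is rescaled to [-1, 1] to keep the fit well conditioned."""
    w = 2 * (wave - wave.min()) / (wave.max() - wave.min()) - 1
    coeffs = np.polyfit(w, flux, order)
    return flux / np.polyval(coeffs, w)

def classify(wave, train_flux, train_labels, test_flux):
    X_train = np.array([normalize(wave, f) for f in train_flux])
    X_test = np.array([normalize(wave, f) for f in test_flux])
    rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, train_labels)
    # feature_importances_ gives a per-pixel relevance, usable for feature evaluation.
    return rf.predict(X_test), rf.feature_importances_
```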

15.
We present deep Near-Infrared (NIR) imaging of Blue Compact Dwarf Galaxies (BCDs), allowing us for the first time to derive and systematize the NIR structural properties of their stellar low-surface-brightness (LSB) host galaxies. Compared to optical data, NIR images, being less contaminated by the extended stellar and ionized gas emission from the starburst, permit studying the LSB host galaxy closer to its center. We find that radial surface brightness profiles (SBPs) of the LSB hosts show at large radii a mostly exponential intensity distribution, in agreement with previous optical studies. At small to intermediate radii, however, the NIR data reveal an inward flattening with respect to the outer exponential slope ('type V SBPs', Binggeli and Cameron, 1991) in the LSB component of more than half of the sample BCDs. This result may constitute an important observational constraint on the dynamics and evolution of BCDs. We apply a modified exponential fitting function (Papaderos et al., 1996a) to parametrize and systematically study type V profiles in BCDs. A Sérsic law is found to be less suitable for studying the LSB component of BCDs, since it yields very uncertain solutions.
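For reference, the two standard profile forms mentioned above can be written as follows (the modified exponential fitting function of Papaderos et al., 1996a, is not reproduced here):

```latex
% Pure exponential disc profile with central intensity I_0 and scale length alpha:
I(R) = I_0 \, e^{-R/\alpha}
% Sersic law (the exponential profile is the n = 1 case):
I(R) = I_e \exp\!\left\{ -b_n \left[ \left( R/R_e \right)^{1/n} - 1 \right] \right\}
```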

16.
The new generation of large sky area spectroscopic survey projects has produced nearly 10 million low-resolution stellar spectra. Based on these spectroscopic data, this paper introduces a machine learning algorithm named The Cannon. The algorithm relies entirely on spectra with known stellar atmospheric parameters (effective temperature, surface gravity, metal abundance, etc.): it builds feature vectors in a data-driven way and establishes a functional relation between the spectral flux features and the stellar parameters, which is then applied to observed spectral data to calculate their atmospheric parameters. The main advantage of The Cannon is that it is not directly based on any stellar physical model, so it has broader applicability; moreover, because it uses full-spectrum information, it can obtain highly reliable parameter solutions even for spectra with a low signal-to-noise ratio (SNR). The algorithm therefore has significant advantages in the data processing and parameter determination of large-scale stellar spectra. In addition, this paper uses The Cannon to obtain the stellar parameters of K and M giants from the LAMOST spectral data.

17.
A technique for obtaining information on the temperature structure of a stellar atmosphere from spectral line data where only flux observations are available is discussed. The direct inversion of the flux integral to obtain the line source function can be circumvented by making the physically plausible assumptions of (1) source function equality within a multiplet and (2) the dominance of line absorption over continuum absorption at line center. Consistency of the technique is demonstrated by treating a synthetic spectrum as input data and attempting to recover the temperature structure of the input atmosphere. Using high-quality solar spectrum scans obtained from K.P.N.O., we demonstrate the accuracy of source function equality for several Fe I multiplets and use one of these multiplets to obtain an empirical outer atmosphere for the Sun. Our empirical atmosphere agrees well with current solar models.
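For context, the flux integral referred to here has the standard form (up to the normalization convention adopted for the flux):

```latex
F_\nu(0) \;=\; 2 \int_0^{\infty} S_\nu(t_\nu)\, E_2(t_\nu)\, \mathrm{d}t_\nu ,
```

where S_ν is the source function, t_ν the optical depth at frequency ν, and E_2 the second exponential integral; the inversion problem is to recover S_ν(t_ν) from measurements of F_ν across the line.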

18.
In this paper we present a method that combines evolution strategies (ES) and standard optimization algorithms to solve the problem of fitting line profiles of stellar spectra. The method provides a reliable decomposition and a reduction in computing time over conventional algorithms. Using a stellar spectrum as input, we implement an evolution strategy to find an approximation of the continuum spectrum and the spectral lines. After a few generations, the parameters found by the ES are given as a starting search point to a standard optimization algorithm, which then finds the correct spectral decomposition. We use Gaussian functions to fit the spectral lines and the Planck function to represent the continuum spectrum. Our experimental results show the application of this method to real spectra, which can be approximated very accurately.
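A minimal sketch of the two-stage idea, with SciPy's differential evolution standing in for the paper's evolution strategy and a single Gaussian absorption line on a Planck continuum; the parameter bounds and names are illustrative assumptions.

```python
# Sketch: global evolutionary search, then local refinement of a Planck-continuum
# plus one Gaussian absorption line model.
import numpy as np
from scipy.optimize import differential_evolution, least_squares

h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def model(params, wave_m):
    """Scaled Planck continuum with one Gaussian absorption line (wave_m in metres)."""
    scale, T, depth, mu, sigma = params
    planck = scale * 2 * h * c**2 / wave_m**5 / (np.exp(h * c / (wave_m * k * T)) - 1)
    return planck * (1 - depth * np.exp(-0.5 * ((wave_m - mu) / sigma) ** 2))

def fit_spectrum(wave_m, flux, bounds):
    resid = lambda p: model(p, wave_m) - flux
    # Stage 1: global evolutionary search for a rough solution.
    rough = differential_evolution(lambda p: np.sum(resid(p) ** 2), bounds, seed=0)
    # Stage 2: local least-squares refinement starting from the evolutionary solution.
    return least_squares(resid, rough.x).x
```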
