Similar Articles
20 similar articles found.
1.
A better method for estimating the dynamic error of observational data is presented: the single-revolution improvement method. Starting from orbit theory, the method has a clear physical basis and avoids the uncertainty of choosing the order of a least-squares fit or the smoothing factor of Vondrak smoothing, so it estimates the dynamic error of the observations more accurately. The method can also reveal abnormal jumps in the data, which is useful for accurately assessing instrument performance and for further improving the observing instruments.

2.
A new smoothing method for satellite photoelectric data, the multi-line optimal selection method, recently adopted by the visual satellite observation group of Yunnan Observatory, is introduced. The basic principle, computational formulas, numerical results, and practical applications of the method are given.

3.
A new smoothing method for satellite photoelectric data, the multi-line optimal selection method, recently adopted by the visual satellite observation group of Yunnan Observatory, is introduced. The basic principle, computational formulas, numerical results, and practical applications of the method are given.

4.
Ji Kaifan, Acta Astronomica Sinica, 1994, 35(2): 138-142
If the processing of photon-counting data is poorly designed, the observational results of a photoelectric astrolabe can be severely affected. This paper presents a complete procedure for processing photon-counting data, from smoothing and searching to center determination. In a nine-day synchronous comparison with the original method (including full-moon nights), the new procedure yielded 1.48 times as many stars as the original method, improved the single-star determination precision by 0″.02, and essentially overcame the influence of sky background light.

5.
A mean geometric center method for determining the centers of trailed star images in artificial satellite observations is proposed, and its basic principle and implementation steps are described. Median filtering is applied to the preprocessing of the CCD data with good results. The method is preliminarily verified with real observational data; the results show that, compared with the commonly used centroid (center-of-gravity) method, it locates the centers of trailed star images with higher precision.
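The abstract does not spell out the algorithm, so the following is only a rough, hypothetical sketch of the kind of processing it describes: a median filter applied to the CCD frame, followed by a "mean geometric center" taken here as the unweighted average of the coordinates of all pixels above a detection threshold (an assumed interpretation, not necessarily the paper's definition), compared against the usual intensity-weighted centroid. All function and parameter names are illustrative.

```python
import numpy as np
from scipy.ndimage import median_filter

def trailed_image_center(frame, threshold_sigma=3.0, median_size=3):
    """Estimate the center of a trailed star image (illustrative sketch only)."""
    # Median filter suppresses hot pixels / impulsive noise before detection.
    smooth = median_filter(frame.astype(float), size=median_size)

    # Crude background and noise estimate from the frame itself (an assumption).
    bkg, sigma = np.median(smooth), np.std(smooth)
    mask = smooth > bkg + threshold_sigma * sigma

    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        raise ValueError("no pixels above threshold")

    # Mean geometric center: every selected pixel counts equally.
    geo_center = (xs.mean(), ys.mean())

    # Conventional intensity-weighted centroid, for comparison.
    w = smooth[mask] - bkg
    centroid = ((xs * w).sum() / w.sum(), (ys * w).sum() / w.sum())
    return geo_center, centroid
```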

6.
Because of the very short integration time (0.001 s) and the strong background light near the Moon, lunar occultation observations usually need some form of processing to remove noise. The α-trimmed mean has good noise-suppression capability and can be used to smooth lunar occultation data with little or no effect on the occultation diffraction pattern. This paper introduces the α-trimmed mean method and points out that the α-trimmed mean with α = 0, l = 9 is the best filter for lunar occultation data.
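As an illustration of the α-trimmed mean described above, here is a minimal sketch of a sliding-window α-trimmed mean filter. The window length l and trimming fraction α are free parameters, and the edge handling chosen here is an assumption rather than something taken from the paper.

```python
import numpy as np

def alpha_trimmed_mean_filter(x, l=9, alpha=0.1):
    """Sliding-window alpha-trimmed mean.

    In each window of length l, the alpha*l smallest and alpha*l largest
    samples are discarded and the rest are averaged, which suppresses
    outliers while affecting sharp features (such as the occultation
    diffraction pattern) less than a plain running mean.
    """
    x = np.asarray(x, dtype=float)
    half = l // 2
    # Edge handling by reflection (an assumption, not from the paper).
    padded = np.pad(x, half, mode="reflect")
    trim = int(alpha * l)          # number of samples dropped at each end
    out = np.empty_like(x)
    for i in range(x.size):
        window = np.sort(padded[i:i + l])
        kept = window[trim:l - trim] if trim > 0 else window
        out[i] = kept.mean()
    return out
```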

7.
Research on near-real-time processing algorithms for GPS common-view observation data. Cited 2 times in total (0 self-citations, 2 by others)
High-precision, fast processing of GPS common-view data makes near-real-time common-view time transfer possible, but the usual smoothing methods cannot meet the near-real-time requirement. Based on an analysis of the characteristics of GPS common-view data, a Kalman filter algorithm is designed to process the data in near real time, suppressing observation noise and estimating the clock difference between remote sites. Processing of the common-view data between the National Time Service Center of the Chinese Academy of Sciences (NTSC) and the Communications Research Laboratory of Japan (CRL), separated by more than 2000 km, and between CRL and the Korea Research Institute of Standards and Science (KRIS), separated by more than 1000 km, shows that the RMS errors of the clock differences obtained with the Kalman filter, relative to those derived from BIPM Circular T, are better than 2.9 ns and 2.6 ns, respectively. To further improve the comparison precision when near-real-time common view is applied to mutual comparisons among multiple sites, an indirect-observation adjustment technique is used on top of the Kalman filter: observation weights are set according to the distances between stations in the common-view network, and the clock difference between two stations is obtained by solving the resulting over-determined equations. Taking the three-station comparison among NTSC, CRL and KRIS as an example, and using the clock differences from BIPM Circular T as the reference, a comparison of the data before and after the indirect-observation adjustment shows that the near-real-time comparison precision can be further improved.
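The abstract does not give the filter equations, so the following is a minimal sketch of the kind of Kalman filter it describes: a two-state clock model (phase and frequency offset) with the raw common-view clock differences as scalar observations. The state model and the noise levels are illustrative assumptions, not values from the paper.

```python
import numpy as np

def kalman_clock_difference(z, tau, q_phase=1e-22, q_freq=1e-26, r=1e-17):
    """Smooth common-view clock differences with a 2-state Kalman filter.

    State x = [phase offset (s), frequency offset (s/s)]; z is the sequence
    of raw common-view differences at interval tau (s).  Process and
    measurement noise levels are illustrative only.
    """
    F = np.array([[1.0, tau], [0.0, 1.0]])            # state transition
    H = np.array([[1.0, 0.0]])                        # we observe the phase
    Q = np.array([[q_phase, 0.0], [0.0, q_freq]])     # process noise
    R = np.array([[r]])                               # measurement noise

    x = np.array([z[0], 0.0])
    P = np.eye(2) * 1e-12
    filtered = []
    for zk in z:
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the new common-view measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([zk]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        filtered.append(x[0])
    return np.array(filtered)
```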

8.
Using Fourier-transform deconvolution, this paper analyses the 2840 MHz and 2640 MHz radio burst data recorded at Beijing Astronomical Observatory on 1990 May 21 and derives the excitation function at each frequency. The excitation-function curves are clearly narrower than the observed curves, and the fine structures smoothed out during the transmission process are recovered.
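The abstract describes deconvolution via the Fourier transform; the sketch below illustrates the idea for a generic observed burst profile and a known smoothing response (assumed Gaussian in the commented example), with a small Wiener-like regularisation term to keep the spectral division stable. The function name, the response shape, and the regularisation are assumptions, not details from the paper.

```python
import numpy as np

def deconvolve_excitation(observed, response, eps=1e-3):
    """Recover the excitation function by Fourier-domain deconvolution.

    Model: observed = excitation (*) response.  In the Fourier domain
    E(f) = O(f) / R(f); the eps term avoids division by near-zero
    response amplitudes (a simple Wiener-style regularisation).
    """
    observed = np.asarray(observed, dtype=float)
    O = np.fft.rfft(observed)
    R = np.fft.rfft(np.asarray(response, dtype=float), n=observed.size)
    E = O * np.conj(R) / (np.abs(R) ** 2 + eps * np.abs(R).max() ** 2)
    return np.fft.irfft(E, n=observed.size)

# Illustrative use with an assumed Gaussian transmission response:
#   t = np.arange(1024)
#   response = np.exp(-0.5 * ((t - t.mean()) / 4.0) ** 2)
#   response /= response.sum()
#   excitation = deconvolve_excitation(burst_profile, response)
```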

9.
Based on the distribution characteristics of the "butterfly spots", this paper attempts to remove, by means of smoothing filters, the butterfly-shaped spots caused by certain interference in high-time-resolution solar radio observations, so that useful solar information can be extracted from such data.

10.
An initial orbit determination method suitable for long arcs. Cited 3 times in total (3 self-citations, 0 by others)
Lu Benkui, Ma Jingyuan, Xia Yi, Zhang Yi, Acta Astronomica Sinica, 2003, 44(4): 369-374
Based on the orbit-determination principle of the unit vector method, and building on the unit vector method for initial orbit computation of artificial satellites, an initial orbit computation method suitable for long arcs is presented. The method is applicable to both long and short arcs, to various types of observational data, and to Earth satellites of arbitrary eccentricity and inclination; it helps improve the accuracy of initial orbit determination and the convergence of the whole computation. In particular, combined with the unit vector method for perturbed initial orbit computation, it lays a solid foundation for extending the unit vector method from initial orbit determination to orbit improvement.

11.
A new method of nonradial pulsation mode identification is developed. This method is based on Fourier analysis of time series of line profile variations that have been merged into a one-dimensional, equally spaced dataset. In principle, the method is identical to a two-dimensional Fourier transform of the line profile time series, but it is much more convenient for most astronomers who have experience in period analysis of light curves. The features of both temporal frequency and Doppler spatial frequency can be accurately retrieved. This method provides an easy way to carry out mode identification from line profiles and minimizes the uncertainty of mode determination caused by random noise. Comments on and an assessment of related methods of mode identification are given.

12.
A one-dimensional CLEAN spectral-analysis method applicable to unevenly spaced time series is studied. Tests with simulated signals confirm that it also works on noisy series, although the quality of the spectrum depends strongly on the noise level. Applying the method to photometric data of the cataclysmic variable TT Ari, a quasi-periodic oscillation with a period of about 20 minutes is detected.
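A minimal sketch of one-dimensional CLEAN deconvolution for an unevenly sampled time series, in the spirit of the method described above (the classical dirty-spectrum/spectral-window formulation). The gain, number of iterations, frequency grid, and normalisation are illustrative choices, not the paper's.

```python
import numpy as np

def dirty_transform(t, x, freqs):
    """DFT of an unevenly sampled series (the 'dirty' spectrum)."""
    return np.array([np.mean(x * np.exp(-2j * np.pi * f * t)) for f in freqs])

def clean_spectrum(t, x, freqs, gain=0.5, n_iter=100):
    """1-D CLEAN: iteratively subtract the spectral-window response
    of the strongest residual peak from the dirty spectrum."""
    t = np.asarray(t, dtype=float)
    x = np.asarray(x, dtype=float) - np.mean(x)
    dirty = dirty_transform(t, x, freqs)

    # Spectral window on a grid wide enough for every shift used below.
    fmax = freqs[-1]
    wgrid = np.linspace(-2 * fmax, 2 * fmax, 4 * len(freqs))
    window = dirty_transform(t, np.ones_like(t), wgrid)

    clean = np.zeros_like(dirty)
    resid = dirty.copy()
    for _ in range(n_iter):
        k = np.argmax(np.abs(resid))
        a = gain * resid[k]
        # Remove the window response of the peak and of its
        # negative-frequency mirror (the data are real-valued).
        resid -= a * np.interp(freqs - freqs[k], wgrid, window)
        resid -= np.conj(a) * np.interp(freqs + freqs[k], wgrid, window)
        clean[k] += a
    return clean, resid
```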

13.
We have developed an exceptionally noise-resistant method for accurate and automatic identification of supergranular cell boundaries from velocity measurements. Because of its high noise tolerance the algorithm can produce reliable cell patterns with only very small amounts of smoothing of the source data in comparison to conventional methods. In this paper we describe the method and test it with simulated data. We then apply it to the analysis of velocity fields derived from high-resolution continuum data from MDI (Michelson Doppler Imager) on SOHO. From this, we can identify with high spatial resolution certain basic properties of supergranulation cells, such as their characteristic sizes, the flow speeds within cells, and their dependence on cell areas. The effect of the noise and smoothing on the derived cell boundaries is investigated and quantified by using simulated data. We show in detail the evolution of supergranular cells over their lifetime, including observations of emerging, splitting, and coalescing cells. A key result of our analysis of cell internal velocities is that there is a simple linear relation between cell size and cell internal velocity, rather than the power law usually suggested.

14.
An efficient algorithm for adaptive kernel smoothing (AKS) of two-dimensional imaging data has been developed and implemented using the Interactive Data Language (IDL). The functional form of the kernel can be varied (top-hat, Gaussian, etc.) to allow different weighting of the event counts registered within the smoothing region. For each individual pixel, the algorithm increases the smoothing scale until the signal-to-noise ratio (S/N) within the kernel reaches a pre-set value. Thus, noise is suppressed very efficiently, while at the same time real structure, that is, signal that is locally significant at the selected S/N level, is preserved on all scales. In particular, extended features in noise-dominated regions are visually enhanced. The asmooth algorithm differs from other AKS routines in that it allows a quantitative assessment of the goodness of the local signal estimation by producing adaptively smoothed images in which all pixel values share the same S/N above the background.
We apply asmooth to both real observational data (an X-ray image of clusters of galaxies obtained with the Chandra X-ray Observatory) and to a simulated data set. We find the asmoothed images to be fair representations of the input data in the sense that the residuals are consistent with pure noise, that is, they possess Poissonian variance and a near-Gaussian distribution around a mean of zero, and are spatially uncorrelated.
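A minimal sketch of the adaptive-smoothing idea described above, for a counts image with Poisson noise: for each pixel, a top-hat kernel radius is grown until the signal-to-noise ratio inside the kernel reaches a preset value. This is an unoptimized illustration of the general principle, not the published asmooth implementation (which also handles background and non-top-hat kernels).

```python
import numpy as np

def adaptive_tophat_smooth(counts, snr_target=5.0, r_max=30):
    """Adaptively smooth a counts image (illustrative, brute-force version).

    For each pixel, the top-hat radius is increased until the Poisson S/N of
    the enclosed counts, sum/sqrt(sum), reaches snr_target; the pixel is then
    assigned the mean count rate inside that kernel.  Background subtraction
    is ignored here.
    """
    counts = np.asarray(counts, dtype=float)
    ny, nx = counts.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    out = np.zeros_like(counts)
    for j in range(ny):
        for i in range(nx):
            for r in range(1, r_max + 1):
                mask = (yy - j) ** 2 + (xx - i) ** 2 <= r * r
                total = counts[mask].sum()
                if np.sqrt(total) >= snr_target or r == r_max:
                    out[j, i] = total / mask.sum()
                    break
    return out
```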

15.
In this study, we look for mid-term variations in the daily average data of solar radius measurements made at the Solar Astrolabe Station of TUBITAK National Observatory (TUG) during solar cycle 23, over the interval from 2000 February 26 to 2006 November 15. Because of the weather conditions and a latitude-dependent seasonal effect, the data series has temporal gaps. For the spectral analysis we therefore use the Date Compensated Discrete Fourier Transform (DCDFT) and the CLEANest algorithm, which are powerful methods for irregularly spaced data. The CLEANest spectra of the solar radius data exhibit several significant mid-term periodicities at 393.2, 338.9, 206.5, 195.2, 172.3 and 125.4 days, consistent with periods detected in several solar time series by several authors during different solar cycles. The origin of solar radius variations is not yet known. To see whether these variations repeat in subsequent cycles and to understand how their amplitudes change with the phase of the solar cycle, more systematic efforts and long-term homogeneous data are needed. Since most of the periodicities detected in the present study are frequently seen in solar activity indicators, the physical mechanisms driving the periodicities of solar activity may also be effective in solar radius variations.

16.
We propose a non-parametric method of smoothing supernova data over redshift using a Gaussian kernel in order to reconstruct important cosmological quantities including H(z) and w(z) in a model-independent manner. This method is shown to be successful in discriminating between different models of dark energy when the quality of data is commensurate with that expected from the future Supernova Acceleration Probe (SNAP). We find that the Hubble parameter is especially well determined and useful for this purpose. The look-back time of the Universe may also be determined to a very high degree of accuracy (≲0.2 per cent) using this method. By refining the method, it is also possible to obtain reasonable bounds on the equation of state of dark energy. We explore a new diagnostic of dark energy, the 'w-probe', which can be calculated from the first derivative of the data. We find that this diagnostic is reconstructed extremely accurately for different reconstruction methods even if Ω0m is marginalized over. The w-probe can be used to successfully distinguish between Λ cold dark matter and other models of dark energy to a high degree of accuracy.
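A minimal sketch of non-parametric Gaussian kernel smoothing of supernova distance moduli over redshift, in the spirit of the abstract above: a simple inverse-variance-weighted kernel estimator in ln(1+z). The published method iterates the smoothing around a guess model before differentiating to get H(z) and w(z); that refinement, the variable names, and the kernel width delta are omitted or assumed here.

```python
import numpy as np

def gaussian_smooth_mu(z_data, mu_data, sigma_mu, z_out, delta=0.3):
    """Smooth distance moduli mu(z) with a Gaussian kernel in ln(1+z).

    z_data, mu_data, sigma_mu: supernova redshifts, distance moduli and
    their errors; z_out: grid on which the smoothed mu is returned.
    """
    u_data = np.log(1.0 + np.asarray(z_data, dtype=float))
    u_out = np.log(1.0 + np.asarray(z_out, dtype=float))
    w_err = 1.0 / np.asarray(sigma_mu, dtype=float) ** 2   # inverse-variance weights

    mu_smooth = np.empty_like(u_out)
    for i, u in enumerate(u_out):
        k = np.exp(-0.5 * ((u_data - u) / delta) ** 2) * w_err
        mu_smooth[i] = np.sum(k * mu_data) / np.sum(k)
    return mu_smooth
```

The smoothed mu(z) can then be converted to a luminosity distance and differentiated numerically to estimate H(z), which is where the model-independent reconstruction comes from.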

17.
An aliasing effect introduced by mass assignment onto Fast Fourier Transform (FFT) grids may bias measurement of the power spectrum of large-scale structure. In this paper, based on Beylkin's unequally spaced FFT technique, we propose a new, precise method to extract the true power spectrum of a large discrete data set. We compare the traditional mass assignment schemes with the new method using the Daub6 and the 3rd-order B-spline scaling functions. Our measurements of Poisson samples and of N-body simulation samples show that the B-spline scaling function is an optimal choice for mass assignment in the sense that (1) it has compact support in real space and thus yields an efficient algorithm, and (2) it requires no extra corrections. The Fourier-space behavior of the 3rd-order B-spline scaling function enables it to accurately recover the true power spectrum, with errors of less than 5% up to the Nyquist wavenumber (k < kN). It is expected that this method can be applied to higher-order statistics in Fourier space and will enable a precise capture of the non-Gaussian features in the large-scale structure of the universe.
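A minimal one-dimensional sketch of grid mass assignment followed by an FFT power-spectrum estimate with deconvolution of the assignment window. For brevity it uses cloud-in-cell (the 1st-order B-spline) rather than the 3rd-order B-spline discussed above, and it omits shot-noise subtraction and any alias correction; the normalisation convention is also an assumption.

```python
import numpy as np

def cic_assign(positions, boxsize, ngrid):
    """Cloud-in-cell (1st-order B-spline) mass assignment in 1-D."""
    rho = np.zeros(ngrid)
    x = np.asarray(positions, dtype=float) / boxsize * ngrid
    i = np.floor(x).astype(int) % ngrid
    frac = x - np.floor(x)
    np.add.at(rho, i, 1.0 - frac)              # share each particle between
    np.add.at(rho, (i + 1) % ngrid, frac)      # its two nearest grid points
    return rho

def power_spectrum_1d(positions, boxsize, ngrid):
    """FFT power spectrum with the CIC assignment window divided out."""
    rho = cic_assign(positions, boxsize, ngrid)
    delta = rho / rho.mean() - 1.0
    delta_k = np.fft.rfft(delta) / ngrid
    k = 2.0 * np.pi * np.fft.rfftfreq(ngrid, d=boxsize / ngrid)
    # CIC window: W(k) = sinc^2(k*dx/2) = np.sinc(k*dx/(2*pi))**2 in numpy's
    # normalised convention; dividing it out undoes the assignment smoothing.
    w = np.sinc(k * (boxsize / ngrid) / (2.0 * np.pi)) ** 2
    pk = boxsize * np.abs(delta_k / w) ** 2
    return k, pk
```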

18.
The atomic time scale (such as TAI) obtained from the atomic clocks of time laboratories distributed at various locations requires that the means of time comparison have the same degree of stability as the atomic clocks in use. GPS common view (GPS CV) is currently the most widely adopted means of time comparison in the world, and its data reduction and error analysis are indispensable for improving the accuracy and precision of the technique. The data reduction and analysis of the GPS CV data of CRL-CSAO are taken as an example to illustrate the various problems to be solved and the accuracy achieved.

19.
Precise Point Positioning (PPP) determines the precise position of a single station from carrier-phase and pseudorange data while holding the GPS precise ephemerides and satellite clocks fixed. With this approach there are no parameters to be estimated in common between different stations, i.e. the stations are mutually independent, which greatly reduces the computational load. The APSG campaign data were processed with the GIPSY software developed by the Jet Propulsion Laboratory (JPL); the results show that the repeatability of PPP is comparable to that of the double-difference solutions currently in common use in China. Using a fiducial-free algorithm that better preserves the geometry of the ground network, the computations show that, after a Helmert reference-frame transformation, the external accuracy of the PPP solutions is roughly equivalent to that of the double-difference solutions. The computations also show that processing 100 stations with PPP takes about 3.5 hours, whereas processing the same data with the double-difference approach takes 18-20 hours. For the GPS networks of up to 2000 receivers planned for China's major science projects and earthquake monitoring, the PPP strategy, which saves computing resources and time while maintaining accuracy, deserves wide application.

20.
Reduction of two-way satellite time transfer. Cited 13 times in total (0 self-citations, 13 by others)
Li Zhigang, Li Huanxin, Zhang Hong, Acta Astronomica Sinica, 2002, 43(4): 422-431
In two-way satellite time transfer, two stations transmit and receive time signals simultaneously, so the path-delay errors cancel and the comparison precision is high. Its drawbacks are low efficiency, heavy use of satellite time, and unsuitability for automated operation. A terminal now under development allows many stations to observe simultaneously, overcoming these drawbacks, and a new reduction method is proposed for it. The study shows that the simultaneity of multi-station observations provides additional information that can be extracted to improve the comparison precision.
