Similar articles
20 similar articles found.
1.
The near-infrared (NIR) extinction power-law index (β) and its uncertainty are derived from three different techniques based on star counts, colour excess and a combination of them. We have applied these methods to Two Micron All Sky Survey (2MASS) data to determine maps of β and NIR extinction of the small cloud IC 1396 W. The combination of star counts and colour excess results in the most reliable method to determine β. It is found that the use of the correct β map to transform colour excess values into extinction is fundamental for column density profile analysis of clouds. We describe how artificial photometric data, based on the model of stellar population synthesis of the Galaxy, can be used to estimate uncertainties and derive systematic effects of the extinction methods presented here. We find that all colour-excess-based extinction determination methods are subject to small but systematic offsets, which do not affect the star counting technique. These offsets occur since stars seen through a cloud do not represent the same population as stars in an extinction-free control field.
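The conversion at the heart of this abstract — turning a colour excess into an extinction through the power-law index β (A_λ ∝ λ^−β) — can be written down in a few lines. The sketch below is an illustrative helper, not code from the paper; the 2MASS effective wavelengths and the function name are assumptions.

```python
import numpy as np

# Approximate effective wavelengths of the 2MASS H and Ks bands in microns (assumed values).
LAMBDA_H, LAMBDA_K = 1.66, 2.16

def colour_excess_to_ak(e_hk, beta):
    """Convert an E(H-Ks) colour excess into Ks-band extinction A_Ks,
    assuming a power-law NIR extinction curve A_lambda ∝ lambda**(-beta):

        E(H-Ks) = A_H - A_Ks = A_Ks * [(lambda_Ks / lambda_H)**beta - 1]
    """
    ratio = (LAMBDA_K / LAMBDA_H) ** beta   # = A_H / A_Ks
    return np.asarray(e_hk) / (ratio - 1.0)

# The same colour excess maps to noticeably different extinctions as beta changes,
# which is why using the correct beta map matters for column density profiles.
for beta in (1.6, 1.8, 2.0):
    print(beta, colour_excess_to_ak(0.30, beta))
```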

2.
Map making presents a significant computational challenge to the next generation of kilopixel cosmic microwave background polarization experiments. Years' worth of time-ordered data (TOD) from thousands of detectors will need to be compressed into maps of the T, Q and U Stokes parameters. Fundamental to the science goal of these experiments, the observation of B modes, is the ability to control noise and systematics. In this paper, we consider an alternative to the maximum likelihood method, called destriping, where the noise is modelled as a set of discrete offset functions and then subtracted from the time stream. We compare our destriping code (Descart: the DEStriping CARTographer) to a full maximum likelihood mapmaker, applying them to 200 Monte Carlo simulations of TOD from a ground-based, partial-sky polarization modulation experiment. In these simulations, the noise is dominated by either detector or atmospheric 1/f noise. Using prior information on the power spectrum of this noise, we produce destriped maps of T, Q and U which are negligibly different from optimal. The method does not filter the signal or bias the E- or B-mode power spectra. Depending on the length of the destriping baseline, the method delivers between five and 22 times improvement in computation time over the maximum likelihood algorithm. We find that, for the specific case of single-detector maps, it is essential to destripe the atmospheric 1/f noise in order to detect B modes, even though the Q and U signals are modulated by a half-wave plate spinning at 5 Hz.
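As a rough illustration of the destriping idea (not the Descart implementation), the sketch below models low-frequency noise as one constant offset per fixed-length chunk of the time-ordered data and alternates between binning a map and re-estimating the offsets; the chunk length, iteration count and function names are assumptions.

```python
import numpy as np

def destripe(tod, pixels, npix, baseline=100, n_iter=20):
    """Toy destriper: model 1/f noise as one offset per `baseline`-sample
    chunk, and iterate between binning a map and re-estimating offsets.

    tod    : 1-D array of time-ordered data
    pixels : integer pixel index hit by each sample
    """
    nsamp = tod.size
    chunk = np.arange(nsamp) // baseline              # chunk index of each sample
    nchunk = chunk.max() + 1
    offsets = np.zeros(nchunk)
    for _ in range(n_iter):
        # Bin a map from the offset-subtracted data (plain averaging per pixel).
        cleaned = tod - offsets[chunk]
        hits = np.bincount(pixels, minlength=npix).clip(min=1)
        m = np.bincount(pixels, cleaned, minlength=npix) / hits
        # Re-estimate offsets as chunk means of the signal-subtracted residuals.
        resid = tod - m[pixels]
        offsets = np.bincount(chunk, resid) / np.bincount(chunk)
        offsets -= offsets.mean()                     # fix the offset/map-mean degeneracy
    return tod - offsets[chunk], m
```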

3.
The colour–magnitude diagrams of resolved single stellar populations, such as open and globular clusters, have provided the best natural laboratories to test stellar evolution theory. Whilst a variety of techniques have been used to infer the basic properties of these simple populations, systematic uncertainties arise from the purely geometrical degeneracy produced by the similar shape of isochrones of different ages and metallicities. Here we present an objective and robust statistical technique which lifts this degeneracy to a great extent through the use of a key observable: the number of stars along the isochrone. Through extensive Monte Carlo simulations we show that, for instance, we can infer the four main parameters (age, metallicity, distance and reddening) in an objective way, along with robust confidence intervals and their full covariance matrix. We show that systematic uncertainties due to field contamination, unresolved binaries, and the initial or present-day stellar mass function are either negligible or well under control. This technique provides, for the first time, a proper way to infer with unprecedented accuracy the fundamental properties of simple stellar populations, in an easy-to-implement algorithm.

4.
In the absence of any compelling physical model, cosmological systematics are often misrepresented as statistical effects and the approach of marginalizing over extra nuisance systematic parameters is used to gauge the effect of the systematic. In this article, we argue that such an approach is risky at best since the key choice of function can have a large effect on the resultant cosmological errors.
As an alternative we present a functional form-filling technique in which an unknown, residual, systematic is treated as such. Since the underlying function is unknown, we evaluate the effect of every functional form allowed by the information available (either a hard boundary or some data). Using a simple toy model, we introduce the formalism of functional form filling. We show that parameter errors can be dramatically affected by the choice of function in the case of marginalizing over a systematic, but that in contrast the functional form-filling approach is independent of the choice of basis set.
We then apply the technique to cosmic shear shape measurement systematics and show that a shear calibration bias of |m(z)| ≲ 10^-3 (1 + z)^0.7 is required for a future all-sky photometric survey to yield unbiased cosmological parameter constraints to per cent accuracy.
A module associated with the work in this paper is available through the open-source icosmo code at http://www.icosmo.org.

5.
We present a detrending algorithm for the removal of trends in time series. Trends in time series can be caused by various systematic and random noise sources such as cloud passages, changes of airmass, telescope vibration, CCD noise or defects of photometry. These trends undermine the intrinsic signals of stars and should be removed. We determine the trends from subsets of stars that are highly correlated among themselves. These subsets are selected with a hierarchical tree clustering algorithm. A bottom-up merging algorithm based on the departure from a normal distribution in the correlation is developed to identify the subsets, which we call clusters. After identification of the clusters, we determine a trend per cluster by a weighted sum of the normalized light curves. We then use quadratic programming to detrend all individual light curves based on these determined trends. Experimental results with synthetic light curves containing artificial trends and events are presented, and results from other detrending methods are also compared. The developed algorithm can be applied to time series for trend removal in both narrow- and wide-field astronomy.
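A much-simplified version of the trend-removal step can be sketched as follows: build a master trend as a weighted mean of median-normalized light curves in a cluster, then remove it from each star by a linear least-squares fit. The clustering itself is omitted and the least-squares fit stands in for the paper's quadratic-programming step, so this is only a hedged illustration.

```python
import numpy as np

def build_trend(lcs, weights=None):
    """Master trend = weighted mean of median-normalized light curves.
    lcs: (n_stars, n_epochs) array of fluxes."""
    norm = lcs / np.median(lcs, axis=1, keepdims=True) - 1.0
    if weights is None:
        weights = np.ones(len(lcs))
    return np.average(norm, axis=0, weights=weights)

def detrend(lc, trend):
    """Remove the master trend from one light curve by a linear least-squares
    fit (a stand-in for the quadratic-programming step described above)."""
    flux = lc / np.median(lc) - 1.0
    design = np.vstack([trend, np.ones_like(trend)]).T
    coeff, *_ = np.linalg.lstsq(design, flux, rcond=None)
    return flux - design @ coeff

# Toy usage: three light curves sharing an airmass-like trend plus noise.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
shared = 0.02 * np.sin(2 * np.pi * t)
lcs = 1.0 + shared + 0.005 * rng.standard_normal((3, t.size))
trend = build_trend(lcs)
clean = detrend(lcs[0], trend)
print(np.std(clean))   # scatter after trend removal
```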

6.
We present ellipsoidal light-curve fits to the quiescent B, V, R and I light curves of GRO J1655–40 (Nova Scorpii 1994). The fits are based on a simple model consisting of a Roche-lobe-filling secondary and an accretion disc around the black hole primary. Unlike previous studies, no assumptions are made concerning the interstellar extinction or the distance to the source; instead these are determined self-consistently from the observed light curves. In order to obtain tighter limits on the model parameters, we used the distance determination from the kinematics of the radio jet as an additional constraint. We obtain a value for the extinction that is lower than was assumed previously; this leads to lower masses for both the black hole and the secondary star of 5.4 ± 0.3 and 1.45 ± 0.35 M⊙, respectively. The errors in the determination of the model parameters are dominated by systematic errors, in particular arising from uncertainties in the modelling of the disc structure and uncertainties in the atmosphere model for the chemically anomalous secondary in the system. A lower mass of the secondary naturally explains the transient nature of the system if it is in either a late case A or early case B mass-transfer phase.

7.
A search for gravitational waves from the millisecond pulsar PSR 0437-4715 has been initiated using the bar detector NIOBE, which is located at the University of Western Australia. We present a detailed report on the data analysis algorithm, called phase plane rotation, which will be used in this search. A discussion of the actual implementation of the algorithm is presented. The data analysis algorithm has the advantage that it requires minimal changes to the already-existing data acquisition facility of NIOBE but, at the same time, it is as efficient as optimal filtering in detecting a signal. This search involves a very long coherent integration of the bar output which may stretch over a few years. With some planned improvements in the detector, a three-year integration should be able to put an upper limit of h ∼ 10^-26 on the signal amplitude.

8.
Eddington-limited X-ray bursts from neutron stars can be used in conjunction with other spectroscopic observations to measure neutron star masses, radii and distances. In order to quantify some of the uncertainties in the determination of the Eddington limit, we analysed a large sample of photospheric radius-expansion thermonuclear bursts observed with the Rossi X-ray Timing Explorer. We identified the instant at which the expanded photosphere 'touches down' back on to the surface of the neutron star and compared the corresponding touchdown flux to the peak flux of each burst. We found that for the majority of sources, the ratio of these fluxes is smaller than ≃1.6, which is the maximum value expected from the changing gravitational redshift during the radius-expansion episodes (for a 2 M⊙ neutron star). The only sources for which this ratio is larger than ≃1.6 are high-inclination sources that include dippers and Cyg X-2. We discuss two possible geometric interpretations of this effect and show that the inferred masses and radii of neutron stars are not affected by this bias. On the other hand, systematic uncertainties as large as ∼50 per cent may be introduced to the distance determination.
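The quoted maximum ratio of ≃1.6 is simply the surface gravitational redshift factor 1 + z = (1 − 2GM/Rc²)^(−1/2): the Eddington flux seen at infinity is lower when the photosphere sits at the stellar surface than when it is far from it. The short check below assumes a 2 M⊙ neutron star and an illustrative 10 km radius (the radius is not given in the abstract).

```python
import numpy as np

G = 6.674e-11       # m^3 kg^-1 s^-2
C = 2.998e8         # m s^-1
M_SUN = 1.989e30    # kg

def redshift_factor(mass_msun, radius_km):
    """1 + z at the stellar surface for a Schwarzschild exterior."""
    rs = 2.0 * G * mass_msun * M_SUN / C**2    # Schwarzschild radius [m]
    return 1.0 / np.sqrt(1.0 - rs / (radius_km * 1e3))

# Peak flux (expanded photosphere, z ≈ 0) over touchdown flux (photosphere at R):
print(redshift_factor(2.0, 10.0))   # ≈ 1.56, i.e. the ≃1.6 quoted above
```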

9.
We present a study of the dynamic range limitations in images produced with the proposed Square Kilometre Array (SKA) using the Cotton-Schwab CLEAN algorithm for data processing. The study is limited to the case of a small field of view and a snapshot observation. A new modification of the Cotton-Schwab algorithm involving optimization of the position of clean components is suggested. This algorithm can reach a dynamic range as high as 10^6 even if the point source lies between image grid points, in contrast to about 10^3 for existing CLEAN-based algorithms in the same circumstances. It is shown that the positional accuracy of clean components, floating-point precision and the w-term are extremely important at high dynamic range. The influence of these factors can be reduced if the variance of the gradient of the point spread function is minimized during the array design.
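For context, a bare-bones Högbom-style CLEAN minor cycle is sketched below. It is far simpler than the Cotton-Schwab variant with sub-grid component positions studied in the paper and is only meant to show the find-peak/subtract loop that all CLEAN-based algorithms share.

```python
import numpy as np

def hogbom_clean(dirty, psf, gain=0.1, niter=500, threshold=0.0):
    """Bare-bones Högbom CLEAN: repeatedly locate the brightest residual
    pixel and subtract a scaled, shifted copy of the PSF there.
    `psf` is assumed to be the same shape as `dirty` and peak-centred."""
    residual = dirty.copy()
    model = np.zeros_like(dirty)
    cy, cx = np.unravel_index(np.argmax(psf), psf.shape)
    for _ in range(niter):
        py, px = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
        peak = residual[py, px]
        if np.abs(peak) < threshold:
            break
        model[py, px] += gain * peak
        # Shift the PSF so its peak lands on (py, px) and subtract it.
        # np.roll wraps at the edges, which is acceptable for this illustration.
        shifted = np.roll(np.roll(psf, py - cy, axis=0), px - cx, axis=1)
        residual -= gain * peak * shifted
    return model, residual
```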

10.
The numerical kernel approach to difference imaging has been implemented and applied to gravitational microlensing events observed by the PLANET collaboration. The effect of an error in the source-star coordinates is explored and a new algorithm is presented for determining the precise coordinates of the microlens in blended events, essential for accurate photometry of difference images. It is shown how the photometric reference flux need not be measured directly from the reference image but can be obtained from measurements of the difference images combined with the knowledge of the statistical flux uncertainties. The improved performance of the new algorithm, relative to isis2, is demonstrated.

11.
This paper presents a study of atmospheric refraction and its effect on the light-coupling efficiency in an instrument using single-mode optical fibres. We show the analytical approach which allowed us to assess the need to correct the refraction in the J and H bands while observing with an 8-m Unit Telescope. We then developed numerical simulations to go further in the calculations. The hypotheses on the instrumental characteristics are those of AMBER (Astronomical Multi BEam combineR), the near-infrared focal beam combiner of the Very Large Telescope Interferometer, but most of the conclusions can be generalized to other single-mode instruments. We used the software package caos to take into account the atmospheric turbulence effect after correction by the European Southern Observatory Multi-Application Curvature Adaptive Optics system. The optomechanical study and design of the system correcting the atmospheric refraction on AMBER are then detailed. We showed that the atmospheric refraction becomes predominant over the atmospheric turbulence for some zenith angles z and spectral conditions: for z larger than 30° in the J band, for example. The study of the optical system showed that it delivers the required instrumental performance in terms of throughput in the J and H bands. First observations in the J band of a bright star, α Cir, at more than 30° from zenith clearly showed the gain obtained by controlling the atmospheric refraction in a single-mode instrument, and validated the operating law.

12.
Luo Lin, Fan Min, Shen Mangzuo. Acta Astronomica Sinica, 2007, 48(3): 374-382
Atmospheric turbulence severely limits the spatial resolution of astronomical images obtained with large-aperture ground-based telescopes. Based on the principle of maximum-likelihood estimation, a blind deconvolution method constrained by the actual optical bandwidth is proposed that can effectively reduce the influence of atmospheric turbulence in astronomical images; a conjugate-gradient optimization algorithm is used to drive the convolution error function towards its minimum. The relation between the parameters of the telescope optical system and the frequency-domain bandwidth of the image is established, and positivity constraints on the variables together with a band-limited constraint on the point spread function are adopted to improve the convergence of the algorithm. To prevent the effective Fourier frequencies from exceeding the cut-off frequency during image processing, a single imaging element (e.g. a CCD pixel) is required to be smaller than one quarter of the diffraction-spot diameter when the telescope focal-plane images are acquired. No object support-domain constraint is used in the algorithm, so the proposed method is applicable to the restoration of full-field astronomical images. The effectiveness of the proposed method is verified by computer simulations and by the restoration of image data of a real astronomical target in Pisces.
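A minimal sketch of a constrained blind deconvolution of the kind described above: alternating gradient steps on the object and the PSF that reduce the convolution error, with positivity enforced by clipping and the optical band limit enforced by a Fourier mask. Plain gradient descent is used here instead of the paper's conjugate-gradient optimization, and the step size, cut-off fraction and iteration count are arbitrary illustrative choices.

```python
import numpy as np

def band_limit(img, cutoff_frac):
    """Zero Fourier components beyond a circular cut-off frequency (cycles/pixel)."""
    ny, nx = img.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    mask = (fy**2 + fx**2) <= cutoff_frac**2
    return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))

def blind_deconvolve(data, obj, psf, n_iter=200, step=1e-3, cutoff_frac=0.25):
    """Alternating projected gradient descent on ||obj * psf - data||^2
    (circular convolution via FFTs). The step size may need tuning for
    real data; this is only an illustrative toy."""
    D = np.fft.fft2(data)
    for _ in range(n_iter):
        O, P = np.fft.fft2(obj), np.fft.fft2(psf)
        R = O * P - D                                     # residual in Fourier space
        grad_obj = np.real(np.fft.ifft2(np.conj(P) * R))  # dE/d(obj)
        grad_psf = np.real(np.fft.ifft2(np.conj(O) * R))  # dE/d(psf)
        obj = np.clip(obj - step * grad_obj, 0.0, None)   # positivity on the object
        psf = np.clip(psf - step * grad_psf, 0.0, None)   # positivity on the PSF
        psf = band_limit(psf, cutoff_frac)                # optical band-limit constraint
        psf = np.clip(psf, 0.0, None)
        psf /= psf.sum()                                  # keep the PSF normalized
    return obj, psf
```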

13.
The long-term precise timing of Galactic millisecond pulsars holds great promise for measuring long-period (months to years) astrophysical gravitational waves. Several gravitational-wave observational programs, called Pulsar Timing Arrays (PTAs), are being pursued around the world.
Here, we develop a Bayesian algorithm for measuring the stochastic gravitational-wave background (GWB) from the PTA data. Our algorithm has several strengths: (i) it analyses the data without any loss of information; (ii) it trivially removes systematic errors of known functional form, including quadratic pulsar spin-down, annual modulations and jumps due to a change of equipment; (iii) it measures simultaneously both the amplitude and the slope of the GWB spectrum and (iv) it can deal with unevenly sampled data and coloured pulsar noise spectra. We sample the likelihood function using Markov Chain Monte Carlo simulations. We extensively test our approach on mock PTA data sets and find that the algorithm has significant benefits over currently proposed counterparts. We show the importance of characterizing all red noise components in pulsar timing noise by demonstrating that the presence of a red component would significantly hinder the detection of the GWB.
Lastly, we explore the dependence of the signal-to-noise ratio on the duration of the experiment, the number of monitored pulsars and the magnitude of the pulsar timing noise. These parameter studies will help formulate observing strategies for the PTA experiments.
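Point (ii) above — removing systematic errors of known functional form — is commonly handled by fitting and subtracting a design matrix whose columns are the known waveforms (quadratic spin-down, annual sine/cosine terms, a jump at an equipment change). The least-squares sketch below illustrates that step only; it is not the authors' full Bayesian treatment, and the time units and function name are assumptions.

```python
import numpy as np

def remove_known_systematics(times, residuals, jump_epoch=None):
    """Fit and subtract deterministic terms of known functional form:
    quadratic spin-down, annual modulation, and an optional step (jump).
    `times` are assumed to be in seconds."""
    t = times - times.mean()
    year = 365.25 * 86400.0
    cols = [np.ones_like(t), t, t**2,
            np.sin(2 * np.pi * t / year), np.cos(2 * np.pi * t / year)]
    if jump_epoch is not None:
        cols.append((times >= jump_epoch).astype(float))   # step at an equipment change
    M = np.vstack(cols).T                                   # design matrix
    coeff, *_ = np.linalg.lstsq(M, residuals, rcond=None)
    return residuals - M @ coeff
```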

14.
Group delay fringe tracking using spectrally dispersed fringes is suitable for stabilizing the optical path difference in ground-based astronomical optical interferometers in low-light-level situations. We discuss the performance of group delay tracking algorithms when the effects of atmospheric dispersion, high-frequency atmospheric temporal phase variations, non-ideal path modulation, non-ideal spectral sampling, and the detection artifacts introduced by electron-multiplying CCDs are taken into account, and we present ways in which the tracking capability can be optimized in the presence of these effects.
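To make the group-delay idea concrete, a generic estimator (not one of the specific algorithms analysed in the paper) picks the trial delay that maximizes the power of the delay-compensated coherent sum of the complex fringe phasors across the spectral channels:

```python
import numpy as np

def group_delay(visibilities, wavenumbers, trial_delays):
    """Estimate the group delay from spectrally dispersed fringe data.

    visibilities : complex fringe phasors, one per spectral channel
    wavenumbers  : 1/lambda for each channel (inverse units of the delays)
    trial_delays : grid of optical path differences to test
    """
    # Power of the delay-compensated coherent sum for every trial delay.
    phase = 2j * np.pi * np.outer(trial_delays, wavenumbers)
    power = np.abs(np.exp(-phase) @ visibilities) ** 2
    return trial_delays[np.argmax(power)]

# Toy usage: recover a 12-micron delay from noiseless phasors.
wn = 1.0 / np.linspace(1.5, 1.8, 32)       # channels across the H band [1/um]
vis = np.exp(2j * np.pi * wn * 12.0)
delays = np.linspace(-50.0, 50.0, 2001)    # trial OPDs in microns
print(group_delay(vis, wn, delays))        # ≈ 12.0
```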

15.
This paper describes a Monte Carlo simulation of type Ia supernova data. It was shown earlier that the data on SNe Ia might contain a correlation between the estimated luminosity distances and internal extinctions; this correlation was revealed by different statistical investigations of the data. In order to remove observational biases (for example, the effect of the detection limit of the observing instrument) and to test the reality of the effect found earlier, we developed a simple routine which simulates extinction values, redshifts and absolute magnitudes for type Ia supernovae. We found that the correlation seen earlier in the real data between the internal extinction and luminosity distance does not occur in the simulated sample. Furthermore, it became obvious that the detection limit of the observing devices used in supernova projects does not affect the far end of the redshift-luminosity distance relationship of type Ia supernovae. This result strengthens the earlier conclusion of the authors that SNe Ia alone do not support the existence of dark energy.

16.
The key features of the matphot algorithm for precise and accurate stellar photometry and astrometry using discrete point spread functions (PSFs) are described. A discrete PSF is a sampled version of a continuous PSF, which describes the two-dimensional probability distribution of photons from a point source (star) just above the detector. The shape information about the photon-scattering pattern of a discrete PSF is typically encoded using a numerical table (matrix) or a FITS (Flexible Image Transport System) image file. Discrete PSFs are shifted within an observational model using a 21-pixel-wide damped sinc function, and position partial derivatives are computed using a five-point numerical differentiation formula. Precise and accurate stellar photometry and astrometry are achieved with undersampled CCD (charge-coupled device) observations by using supersampled discrete PSFs that are sampled two, three or more times more finely than the observational data. The precision and accuracy of the matphot algorithm are demonstrated by using the C-language mpd code to analyse simulated CCD stellar observations; the measured performance is compared with a theoretical performance model. Detailed analysis of simulated Next Generation Space Telescope observations demonstrates that millipixel relative astrometry and mmag photometric precision are achievable with complicated space-based discrete PSFs.
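Two ingredients named above are easy to sketch: shifting a sampled PSF by a sub-pixel amount with a damped (windowed) sinc kernel, and forming position partial derivatives with a five-point finite-difference formula. The Gaussian damping constant and the exact kernel form below are assumptions; only the 21-pixel width and the five-point formula follow the description.

```python
import numpy as np

def damped_sinc_shift(profile, shift, width=21, damp=3.25):
    """Shift a 1-D sampled profile by a sub-pixel amount using a
    Gaussian-damped sinc kernel of odd `width` pixels (the damping form
    is an illustrative choice, not necessarily the one used by matphot)."""
    half = width // 2
    x = np.arange(-half, half + 1) - shift
    kernel = np.sinc(x) * np.exp(-(x / damp) ** 2)
    kernel /= kernel.sum()
    return np.convolve(profile, kernel, mode="same")

def position_partial(profile, h=0.01):
    """Five-point numerical derivative of the profile with respect to a
    sub-pixel shift: f'(0) ≈ [-f(2h) + 8f(h) - 8f(-h) + f(-2h)] / (12h)."""
    f = lambda s: damped_sinc_shift(profile, s)
    return (-f(2 * h) + 8 * f(h) - 8 * f(-h) + f(-2 * h)) / (12 * h)
```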

17.
In this paper we design and develop several filtering strategies for the analysis of data generated by a resonant bar gravitational wave (GW) antenna, with the goal of assessing the presence (or absence) therein of long-duration monochromatic GW signals, as well as the amplitude and frequency of any such signals, within the sensitivity band of the detector. Such signals are most likely generated by the fast rotation of slightly asymmetric spinning stars. We develop practical procedures, together with a study of their statistical properties, which will provide us with useful information on the performance of each technique. The selection of candidate events will then be established according to threshold-crossing probabilities, based on the Neyman–Pearson criterion. In particular, it will be shown that our approach, based on phase estimation, yields a better signal-to-noise ratio than pure spectral analysis, the most common approach.
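The Neyman–Pearson selection mentioned above fixes the detection threshold from an acceptable false-alarm probability under the noise-only hypothesis. A minimal, purely illustrative sketch for a Gaussian detection statistic (not the paper's filters):

```python
from scipy.stats import norm

def np_threshold(false_alarm_prob, noise_sigma=1.0):
    """Threshold on a Gaussian detection statistic such that noise alone
    exceeds it with probability `false_alarm_prob` (one-sided test)."""
    return noise_sigma * norm.isf(false_alarm_prob)

def detection_prob(threshold, signal_mean, noise_sigma=1.0):
    """Probability that a signal of mean `signal_mean` crosses the threshold."""
    return norm.sf((threshold - signal_mean) / noise_sigma)

thr = np_threshold(1e-4)                       # ≈ 3.72 sigma
print(thr, detection_prob(thr, signal_mean=5.0))
```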

18.
We present an algorithm (MEAD, for 'Mapping Extinction Against Distance') which will determine the intrinsic (r′ − i′) colour, extinction and distance for early-A to K4 stars extracted from the IPHAS r′/i′/Hα photometric database. These data can be binned up to map extinction in three dimensions across the northern Galactic plane. The large size of the IPHAS database (∼200 million unique objects), the accuracy of the digital photometry it contains and its faint limiting magnitude (r′ ∼ 20) allow extinction to be mapped with fine angular (∼10 arcmin) and distance (∼0.1 kpc) resolution to distances of up to 10 kpc, outside the solar circle. High reddening within the solar circle on occasion brings this range down to ∼2 kpc. The resolution achieved, both in angle and depth, greatly exceeds that of previous empirical 3D extinction maps, enabling the structure of the Galactic plane to be studied in increased detail. MEAD accounts for the effects of the survey magnitude limits, photometric errors, unresolved interstellar medium (ISM) substructure and binarity. The impact of metallicity variations, within the range typical of the Galactic disc, is small. The accuracy and reliability of MEAD are tested through the use of simulated photometry created with Monte Carlo sampling techniques. The success of this algorithm is demonstrated on a selection of fields and the results are compared to the literature.

19.
A new fast Bayesian approach is introduced for the detection of discrete objects immersed in a diffuse background. This new method, called PowellSnakes, speeds up traditional Bayesian techniques by (i) replacing the standard form of the likelihood for the parameters characterizing the discrete objects by an alternative exact form that is much quicker to evaluate; (ii) using a simultaneous multiple minimization code based on Powell's direction-set algorithm to locate rapidly the local maxima in the posterior and (iii) deciding whether each located posterior peak corresponds to a real object by performing a Bayesian model selection using an approximate evidence value based on a local Gaussian approximation to the peak. The construction of this Gaussian approximation also provides the covariance matrix of the uncertainties in the derived parameter values for the object in question. This new approach provides a speed-up in performance by a factor of roughly 100 as compared with existing Bayesian source-extraction methods that use Markov chain Monte Carlo to explore the parameter space, such as that presented by Hobson & McLachlan. The method can be implemented in either real or Fourier space. In the case of objects embedded in a homogeneous random field, working in Fourier space provides a further speed-up that takes advantage of the fact that the correlation matrix of the background is circulant. We illustrate the capabilities of the method by applying it to some simplified toy models. Furthermore, PowellSnakes has the advantage of consistently defining the threshold for acceptance/rejection based on priors, which cannot be said of the frequentist methods. We present here the first implementation of this technique (version I). Further improvements to this implementation are currently under investigation and will be published shortly. The application of the method to realistic simulated Planck observations will be presented in a forthcoming publication.
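A compressed sketch of the two numerical ingredients named in points (ii) and (iii): locating a posterior peak with Powell's direction-set method and approximating the evidence and parameter covariance with a local Gaussian (Laplace) approximation. The toy posterior and the finite-difference Hessian are illustrative stand-ins, not the PowellSnakes implementation.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_post(theta):
    """Toy negative log-posterior with a single peak at (1.0, 2.0)."""
    x, y = theta
    return 0.5 * ((x - 1.0) ** 2 / 0.04 + (y - 2.0) ** 2 / 0.09)

def numerical_hessian(f, x0, eps=1e-4):
    """Symmetric finite-difference Hessian of f at x0."""
    n = len(x0)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.zeros(n), np.zeros(n)
            e_i[i], e_j[j] = eps, eps
            H[i, j] = (f(x0 + e_i + e_j) - f(x0 + e_i - e_j)
                       - f(x0 - e_i + e_j) + f(x0 - e_i - e_j)) / (4 * eps**2)
    return H

# Locate the peak with Powell's direction-set method (no gradients needed).
res = minimize(neg_log_post, x0=[0.0, 0.0], method="Powell")
H = numerical_hessian(neg_log_post, res.x)
cov = np.linalg.inv(H)                              # parameter covariance at the peak
# Laplace (local Gaussian) approximation to the log-evidence integral:
log_evidence = -res.fun + 0.5 * len(res.x) * np.log(2 * np.pi) \
               - 0.5 * np.linalg.slogdet(H)[1]
print(res.x, cov, log_evidence)
```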

20.
Many physical properties of galaxies correlate with one another, and these correlations are often used to constrain galaxy formation models. Such correlations include the colour–magnitude relation, the luminosity–size relation, the fundamental plane, etc. However, the transformation from observable (e.g. angular size, apparent brightness) to physical quantity (physical size, luminosity) is often distance dependent. Noise in the distance estimate will lead to biased estimates of these correlations, thus compromising the ability of photometric redshift surveys to constrain galaxy formation models. We describe two methods which can remove this bias. One is a generalization of the V_max method, and the other is a maximum-likelihood approach. We illustrate their effectiveness by studying the size–luminosity relation in a mock catalogue, although both methods can be applied to other scaling relations as well. We show that if one simply uses photometric redshifts one obtains a biased relation; our methods correct for this bias and recover the true relation.
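To make the V_max idea concrete: each galaxy is weighted by the inverse of the volume within which it would still pass the survey flux limit, which undoes the bias against intrinsically faint objects in a flux-limited sample. The sketch below uses a Euclidean toy geometry (no cosmological distance measures), so the helper names and simplifications are not from the paper.

```python
import numpy as np

def vmax_weights(app_mag, distance_mpc, mag_limit):
    """Euclidean toy 1/Vmax weights for a flux-limited sample.

    A galaxy at distance d with apparent magnitude m would remain brighter
    than the limit m_lim out to d_max = d * 10**(0.2 * (m_lim - m));
    its weight is the inverse of the enclosed volume ∝ d_max**3."""
    d_max = distance_mpc * 10 ** (0.2 * (mag_limit - app_mag))
    v_max = (4.0 / 3.0) * np.pi * d_max ** 3
    return 1.0 / v_max

def weighted_size_luminosity(log_l, log_r, weights, nbins=10):
    """1/Vmax-weighted mean log-size in bins of log-luminosity."""
    bins = np.linspace(log_l.min(), log_l.max(), nbins + 1)
    idx = np.clip(np.digitize(log_l, bins) - 1, 0, nbins - 1)
    mean_r = [np.average(log_r[idx == i], weights=weights[idx == i])
              for i in range(nbins) if np.any(idx == i)]
    return bins, np.array(mean_r)
```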
