Similar Literature (20 results)
1.
DeForest, C.E. 《Solar Physics》2004,219(1):3-23
Digital image data are now commonly used throughout the field of solar physics. Many steps of image data analysis, including image co-alignment, perspective reprojection of the solar surface, and compensation for solar rotation, require re-sampling original telescope image data under a distorting coordinate transformation. The most common image re-sampling methods introduce significant, unnecessary flaws into the data. More correct techniques have been known in the computer graphics community for some time but remain little known within the solar community and hence deserve further presentation. Furthermore, image distortion under specialized coordinate transformations is a powerful analysis technique with applications well beyond image resizing and perspective compensation. Here I give a brief overview of the mathematics of data re-sampling under arbitrary distortions, present a simple algorithm for optimized re-sampling, give some examples of distortion as an analysis tool, and introduce scientific image distortion software that is freely available over the Internet. "First get your facts straight. Then you can distort them as you please." – Mark Twain
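The core operation discussed above can be illustrated with a short, hedged sketch: resampling an image under an arbitrary distortion by inverse-mapping each output pixel back into the input grid. This is only the basic step that optimized methods refine (DeForest's algorithm additionally adapts the sampling filter to the local Jacobian of the transform); the shear transform and numbers are invented for illustration.

```python
# Minimal inverse-mapping resampling sketch (NOT the paper's optimized
# algorithm): each output pixel is traced back through the inverse of the
# coordinate transform and interpolated from the input image.
import numpy as np
from scipy.ndimage import map_coordinates

def resample(image, inverse_transform, out_shape):
    """inverse_transform maps output (row, col) -> input (row, col)."""
    rows, cols = np.mgrid[0:out_shape[0], 0:out_shape[1]]
    in_r, in_c = inverse_transform(rows, cols)
    # order=3 gives cubic-spline interpolation; order=0 would be the
    # nearest-neighbour resampling whose flaws the paper warns against.
    return map_coordinates(image, [in_r, in_c], order=3, mode='constant')

# Hypothetical example: a simple shear standing in for rotation compensation.
img = np.random.rand(256, 256)
shear = 0.1                                   # invented pixels-per-row shift
warped = resample(img, lambda r, c: (r, c - shear * r), img.shape)
```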

2.
A technique for the analysis of photoelectrically scanned double star images is described. The method consists of comparing the Fourier transform of the double star profile with that of a single star profile imaged through the same telescope. If the measured profile of the double star image can be considered to be a linear superposition of two profiles, each identical in shape to the measured profile of a nearby single star, a comparison of the Fourier transforms of these profiles enables the parameters of the double star system to be determined. Certain features of the ratio of the moduli of the transforms yield both the separation and the magnitude difference between the components. A comparison of the phases of the transforms enables one to establish which of the two components is the brighter.
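As a rough illustration of the Fourier-ratio idea, the sketch below models a double-star profile as a linear superposition of two shifted copies of a single-star profile; the fringe spacing in the ratio of the transform moduli gives the separation, and the extrema of the ratio give the flux ratio (hence the magnitude difference). All numbers are synthetic, and the simple extremum reading stands in for whatever feature-fitting the original method uses.

```python
import numpy as np

n = 1024
x = np.arange(n)
psf = np.exp(-0.5 * ((x - n / 2) / 8.0) ** 2)      # measured single-star profile
sep, b_true = 25, 0.4                              # true separation (px), flux ratio
double = psf + b_true * np.roll(psf, sep)          # linear superposition model

R = np.abs(np.fft.rfft(double) / np.fft.rfft(psf)) # ratio of the moduli
# Analytically R(f) = |1 + b exp(-2*pi*i*f*sep)|: its extrema are 1+b and 1-b,
# and its first minimum sits at f = 1/(2*sep).
b = (R.max() - R.min()) / (R.max() + R.min())
first_min = np.where(np.diff(np.sign(np.diff(R))) > 0)[0][0] + 1
freqs = np.fft.rfftfreq(n)
print("flux ratio ~", round(b, 3),
      " delta_mag ~", round(-2.5 * np.log10(b), 2),
      " separation ~", round(1 / (2 * freqs[first_min]), 1), "px")
```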

3.
We performed a LA-ICP-MS study of refractory lithophile trace elements in 32 individual objects selected from a single section of the reduced CV3 chondrite Leoville. Ingredients sampled include ferromagnesian type I and II chondrules, Al-rich chondrules (ARCs), calcium-aluminum-rich inclusions (CAIs), a single amoeboid olivine aggregate (AOA), and matrix. The majority of rare earth element (REE) signatures identified are either of the category “group II” or they are relatively flat, i.e., more or less unfractionated. Data derived for bulk Leoville exhibit characteristics of the group II pattern. The bulk REE inventory is essentially governed by those of CAIs (group II), ARCs (flat or group II), type I chondrules (about 90% flat, 10% group II), and matrix (group II). Leoville matrix also shows a superimposed positive Eu anomaly. The excess in Eu is possibly due to terrestrial weathering. The group II pattern, however, testifies to volatility-controlled fractional condensation from a residual gas of solar composition at still relatively high temperature. In principle, this signature (group II) is omnipresent in all types of constituents, suggesting that the original REE carrier of all components was CAI-like dust. In addition, single-element anomalies occasionally superimposing the group II signature reveal specific changes in redox conditions. We also determined the bulk chemical composition of all objects studied. For Mg/Si, Mg/Fe, and Al/Ca, Leoville's main ingredients—type I chondrules and matrix—display a complementary relationship. Both components probably formed successively in the same source region.

4.
The made-to-measure N-body method slowly adapts the particle weights of an N-body model, whilst integrating the trajectories in an assumed static potential, until some constraints are satisfied, such as optimal fits to observational data. I propose a novel technique for this adaptation procedure, which overcomes several limitations and shortcomings of the original method. The capability of the new technique is demonstrated by generating realistic N-body equilibrium models for dark matter haloes with prescribed density profile, triaxial shape and slowly outwardly growing radial velocity anisotropy.
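For orientation, here is a deliberately crude toy rendition of the made-to-measure idea in the Syer & Tremaine spirit: particle weights are nudged against the residuals of a binned observable while the particles move on orbits in a fixed potential. The harmonic orbits, uniform target density and gain value are invented, and the temporal smoothing and regularization of real M2M implementations (including the improvements this paper proposes) are omitted.

```python
# Toy illustration (not the paper's method) of made-to-measure weight
# adaptation: weights evolve so a binned model density approaches a target
# while 'orbits' are integrated in a fixed potential.
import numpy as np

rng = np.random.default_rng(1)
n_part, n_bins, eps = 20000, 20, 0.02
phase = rng.uniform(0, 2 * np.pi, n_part)    # orbital phases
amp = rng.uniform(0.1, 1.0, n_part)          # orbital amplitudes
w = np.full(n_part, 1.0 / n_part)            # particle weights to adapt

edges = np.linspace(-1, 1, n_bins + 1)
target = np.full(n_bins, 1.0 / n_bins)       # target: uniform binned density

for step in range(2000):
    phase += 0.01                            # trajectory integration step
    x = amp * np.cos(phase)
    idx = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
    model = np.bincount(idx, weights=w, minlength=n_bins)
    delta = (model - target) / target        # normalised residual per bin
    w *= 1.0 - eps * delta[idx]              # 'force of change' on each weight
    w = np.clip(w, 0.0, None)

print("max |residual| after adaptation:", np.abs(delta).max())
```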

5.
A CPLD-Based Digital Phase-Shifting and Frequency-Dividing Clock
A digital phase-shifting and frequency-dividing clock is designed using complex programmable logic device (CPLD) technology. The hardware circuit is modularized, and all functional modules are integrated on a single chip. Compared with earlier hardware built from discrete components, the design offers a simpler circuit, higher reliability, and easier debugging.

6.
7.
8.
A New Speckle-Image Detection System at Yunnan Observatory
Speckle imaging can effectively remove the degrading effects of atmospheric turbulence and achieve diffraction-limited imaging with large ground-based astronomical telescopes. The raw data it requires are series of short-exposure speckle images of the astronomical target and of a reference star, acquired with the telescope's terminal instrument: the speckle-image detection system. In view of the technique's requirements on the raw data, this paper describes the structure and performance of the new speckle-image detection system developed at Yunnan Observatory. Actual observations show that the system essentially meets these requirements.

9.
A technique to detect man-made interference in the visibility data of the Mauritius Radio Telescope (MRT) has been developed. This technique is based on the understanding that the interference is generally ‘spiky’ in nature and has Fourier components beyond the maximum frequency which can arise from the radio sky and can therefore be identified. We take the sum of magnitudes of visibilities on all the baselines measured at a given time to improve detectability. This is then high-pass filtered to get a time series from which the contribution of the sky is removed. Interference is detected in the high-pass data using an iterative scheme. In each iteration, interference with amplitudes beyond a certain threshold is detected. These points are then removed from the original time series and the resulting data are high-pass filtered and the process repeated. We have also studied the statistics of the strength, numbers, time of occurrence and duration of the interference at the MRT. The statistics indicate that most often the interference excision can be carried out while post-integrating the visibilities by giving a zero weight to the interference points.
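A hedged sketch of the excision loop described above follows; the filter choice, 5-sigma threshold and toy data are assumptions for illustration, not the MRT pipeline's actual parameters.

```python
# Iterative interference flagging: high-pass filter the summed visibility
# magnitudes to remove the slowly varying sky, flag outliers, replace them
# with the smooth estimate, and repeat until nothing new is flagged.
import numpy as np
from scipy.ndimage import median_filter

def flag_interference(vis_sum, size=101, nsigma=5.0, max_iter=10):
    data = vis_sum.astype(float).copy()
    flags = np.zeros(data.size, dtype=bool)
    for _ in range(max_iter):
        smooth = median_filter(data, size=size, mode='nearest')
        highpass = data - smooth                 # sky contribution removed
        sigma = highpass[~flags].std()
        new = (np.abs(highpass) > nsigma * sigma) & ~flags
        if not new.any():
            break
        flags |= new
        data[new] = smooth[new]                  # remove spikes, then repeat
    return flags

t = np.arange(5000)
sky = 100 + 20 * np.sin(2 * np.pi * t / 2000)    # slowly varying sky signal
vis = sky + np.random.default_rng(0).normal(0, 2, t.size)
vis[::700] += 300                                # 'spiky' interference
print("flagged", flag_interference(vis).sum(), "samples")  # give these zero weight
```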

10.
Recently, Park & Gott claimed that there is a statistically significant, strong, negative correlation between the image separation Δθ and source redshift z_s for gravitational lenses. This is somewhat puzzling if one believes in a flat (k = 0) universe, since in this case the typical image separation is expected to be independent of the source redshift, while one expects a negative correlation in a k = −1 universe and a positive one in a k = +1 universe. Park & Gott explored several effects that could cause the observed correlation, but no combination of these can explain the observations with a realistic scenario. Here, I explore this test further in three ways. First, I show that in an inhomogeneous universe a negative correlation is expected regardless of the value of k. Secondly, I test whether the Δθ–z_s relation can be used as a test to determine λ0 and Ω0, rather than just the sign of k. Thirdly, I compare the results of the test from the Park & Gott sample with those using other samples of gravitational lenses, which can illuminate (unknown) selection effects and probe the usefulness of the Δθ–z_s relation as a cosmological test.
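The statistical core of the test is a rank correlation between Δθ and z_s over a lens sample. A minimal sketch, with invented data standing in for the Park & Gott sample:

```python
# Invented toy sample; spearmanr gives the rank correlation between image
# separation and source redshift.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
z_s = rng.uniform(1.0, 4.0, 40)            # hypothetical source redshifts
dtheta = rng.lognormal(0.3, 0.5, 40)       # hypothetical separations (arcsec)
rho, p = spearmanr(z_s, dtheta)
print(f"Spearman rho = {rho:+.2f}, p = {p:.3f}")
# A significantly negative rho would mimic the claimed effect; repeating the
# test on samples simulated for different (lambda0, Omega0) is the essence
# of using the relation as a cosmological test.
```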

11.
We describe a method for deriving the position and flux of point and compact sources observed by a scanning survey mission. Results from data simulated to test our method are presented, which demonstrate that at least a 10-fold improvement is achievable over that of extracting the image parameters, position and flux, from the equivalent data in the form of pixel maps. Our method achieves this improvement by analysing the original scan data and performing a combined, iterative solution for the image parameters. This approach allows for a full and detailed account of the point-spread function (PSF), or beam profile, of the instrument. Additionally, the positional information from different frequency channels may be combined to provide the flux-detection accuracy at each frequency for the same sky position. Ultimately, a final check and correction of the geometric calibration of the instrument may also be included. The Planck mission was used as the basis for our simulations, but our method will be beneficial for most scanning satellite missions, especially those with non-circularly symmetric PSFs.
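The essential contrast with map-based extraction can be sketched by fitting position and flux directly to irregularly sampled scan data through an assumed beam profile; the Gaussian PSF, single source and noise level below are illustrative assumptions, not the authors' full iterative multi-channel solution.

```python
# Fit position and flux directly to scan samples through an assumed beam,
# rather than to a pixelised map of the same data.
import numpy as np
from scipy.optimize import curve_fit

sig = 7.0 / 2.3548                           # assumed beam FWHM of 7 units

def scan_model(x, x0, flux):
    return flux * np.exp(-0.5 * ((x - x0) / sig) ** 2)

rng = np.random.default_rng(3)
x = rng.uniform(-30, 30, 400)                # irregular scan sampling
data = scan_model(x, 1.7, 5.0) + rng.normal(0, 0.1, x.size)

popt, pcov = curve_fit(scan_model, x, data, p0=[0.0, 1.0])
print("position, flux =", popt, "+/-", np.sqrt(np.diag(pcov)))
```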

12.
David R. Klassen 《Icarus》2009,204(1):32-47
Principal components analysis and target transformation are applied to near-infrared image cubes of Mars in a study to disentangle the spectra into a small number of spectral endmembers and characterize the spectral information. The image cubes are ground-based telescopic data from the NASA Infrared Telescope Facility during the 1995 and 1999 near-aphelion oppositions, when ice clouds were plentiful [Clancy et al., 1996], and the 2003 near-perihelion opposition, when ice clouds are generally limited to topographically high regions (volcano cap clouds) but airborne dust is more common [Martin, L.J., Zurek, R.W., 1993. J. Geophys. Res. 98 (E2), 3221-3246]. The heart of the technique is to transform the data into a vector space along the dimensions of greatest spectral variance and then choose endmembers based on these new “trait” dimensions. This is done through a target transformation technique, comparing linear combinations of the principal components to a mineral spectral library. In general, Mars can be modeled with only three spectral endmembers, which account for almost 99% of the data variance. This is similar to results in the thermal infrared with Mars Global Surveyor Thermal Emission Spectrometer data [Bandfield, J.L., Hamilton, V.E., Christensen, P.R., 2000. Science 287, 1626-1630]. The globally recovered surface endmembers can be used as inputs to radiative transfer modeling in order to measure ice abundance in martian clouds [Klassen, D.R., Bell III, J.F., 2002. Bull. Am. Astron. Soc. 34, 865], and a preliminary test of this technique is also presented.
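A compact, hedged sketch of the two steps named above, PCA followed by target transformation, using synthetic arrays in place of the Mars image cubes and the mineral spectral library:

```python
# PCA via SVD, then target transformation: test whether a candidate library
# spectrum can be expressed as a linear combination of the retained
# principal components (plus the mean spectrum).
import numpy as np

rng = np.random.default_rng(7)
n_spec, n_wave, n_keep = 500, 64, 3
cube = rng.random((n_spec, n_wave))             # stand-in for the image cube

mean = cube.mean(axis=0)
U, S, Vt = np.linalg.svd(cube - mean, full_matrices=False)
pcs = Vt[:n_keep]                               # principal components
var_frac = (S[:n_keep] ** 2).sum() / (S ** 2).sum()
print(f"variance captured by {n_keep} PCs: {var_frac:.3f}")

library_spectrum = rng.random(n_wave)           # stand-in mineral spectrum
coeffs, *_ = np.linalg.lstsq(pcs.T, library_spectrum - mean, rcond=None)
fit = mean + pcs.T @ coeffs
print("target-transformation residual:", np.linalg.norm(library_spectrum - fit))
```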

13.
Giant gaseous layers (termed “superdisks”) have been hypothesized in the past to account for the strip-like radio emission gap (or straight-edged central brightness depression) observed between twin radio lobes in over a dozen relatively nearby powerful Fanaroff-Riley Class II radio galaxies. They could also provide a plausible alternative explanation for a range of observations. Although a number of explanations have been proposed for the origin of the superdisks, little is known about their material content. Some X-ray observations of superdisk candidates indicate the presence of hot gas, but a cool dusty medium also seems to be common. If they are entirely or partly composed of neutral gas, then it may be directly detectable, and we report here a first attempt to detect/image any neutral hydrogen gas present in the superdisks that are inferred to be present in four nearby radio galaxies. We have not found a positive H I signal in any of the four sources, resulting in tight upper limits on the H I number density in the postulated superdisks, estimated directly from the central rms noise values of the final radio-continuum-subtracted images. The estimated upper limits on the neutral hydrogen number density and column density lie in the ranges 10^−4−10^−3 atoms cm^−3 and 10^19−10^20 atoms cm^−2, respectively. No positive H I signal is detected even after combining all four available H I images (with inverse-variance weighting). This clearly rules out an H I dominated superdisk as a viable model to explain these structures; however, the possibility of a superdisk composed of warm/hot gas remains open.
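How an rms noise value turns into a column-density upper limit can be sketched with the standard optically thin H I relation N_HI = 1.823 × 10^18 ∫ T_B dv; the noise level, beam and line width below are invented for illustration and are not the values from these observations.

```python
# Back-of-the-envelope 3-sigma H I column-density limit from image rms.
import numpy as np

rms_mjy = 0.3                  # assumed per-channel rms (mJy/beam)
bmaj = bmin = 45.0             # assumed beam FWHM (arcsec)
freq_ghz = 1.4                 # H I line (redshift neglected here)
dv = 400.0                     # assumed superdisk line width (km/s)

# Gaussian-beam brightness-temperature conversion:
# T_B [K] = 1.36 * lambda[cm]^2 * S[mJy] / (bmaj * bmin [arcsec^2])
lam_cm = 29.979 / freq_ghz
t_b = 1.36 * lam_cm ** 2 * rms_mjy / (bmaj * bmin)
n_hi = 1.823e18 * 3.0 * t_b * dv          # 3-sigma column-density limit
print(f"T_B rms = {t_b:.3f} K; 3-sigma N_HI < {n_hi:.2e} cm^-2")
```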

14.
Lucky imaging is a high-resolution astronomical image recovery technique with two classic implementation algorithms: image selecting, shifting and adding in image space, and data selecting and image synthesizing in Fourier space. This paper proposes a novel lucky-imaging algorithm in which, with the space-domain and frequency-domain selection rates as a link, the two classic algorithms are combined so that each becomes a proper subset of the new hybrid algorithm. Experimental results show that, with the same experimental dataset and platform, the high-resolution image obtained by the proposed algorithm is superior to those obtained by the two classic algorithms. This paper also proposes a new lucky-image selection and storage scheme, which greatly saves computer memory and enables the lucky imaging algorithm to be implemented on a common desktop or laptop with small memory and to process astronomical images with more frames and larger sizes. In addition, through simulation analysis, this paper discusses the binary star detection limits of the novel lucky imaging algorithm and of the traditional ones under different atmospheric conditions.
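For reference, here is a minimal sketch of the image-space half of classic lucky imaging (select, shift, add); the hybrid linking of space-domain and frequency-domain selection rates proposed in the paper, and its storage scheme, are not reproduced. The quality metric and selection rate are common conventional choices, not necessarily the paper's.

```python
# Classic image-space lucky imaging: rank short exposures by a sharpness
# metric, keep the best fraction, re-centre on the brightest speckle, add.
import numpy as np

def lucky_shift_and_add(frames, selection_rate=0.1):
    quality = frames.max(axis=(1, 2))              # peak-brightness metric
    n_keep = max(1, int(selection_rate * len(frames)))
    best = np.argsort(quality)[::-1][:n_keep]
    ny, nx = frames.shape[1:]
    out = np.zeros((ny, nx))
    for i in best:
        py, px = np.unravel_index(frames[i].argmax(), (ny, nx))
        out += np.roll(frames[i], (ny // 2 - py, nx // 2 - px), axis=(0, 1))
    return out / n_keep

frames = np.random.default_rng(5).random((200, 64, 64))  # stand-in exposures
result = lucky_shift_and_add(frames, selection_rate=0.05)
```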

15.
We present measurements of the positional parameters of 194 nearby binary stars performed at the 6-m BTA telescope of the SAO RAS from 2002 through 2006 using the speckle interferometric technique. The observations were conducted at central filter wavelengths ranging from 545 to 800 nm using a speckle interferometer equipped with a fast CCD and a three-stage image intensifier. A significant part of the observed systems (80 stars) are pairs whose duality was discovered or suspected from the Hipparcos satellite observations. The remaining stars are visual binaries and interferometric binary systems with orbital-period estimates from several to tens of years, as well as pairs with slow relative motion of the components, used for gauging the image scale and calibrating the position angle.

16.
The use of atmospheric transfer functions is common in image reconstruction techniques such as speckle interferometry to calibrate the Fourier amplitudes of the reconstructed images. Thus, an accurate model is needed to ensure proper photometry in the reconstruction. The situation becomes more complicated when adaptive optics (AO) are used during data acquisition. I propose a novel technique to derive two-dimensional transfer functions from data collected using AO simultaneously with the observations. The technique is capable of computing the relevant transfer functions within a short time for the prevailing atmospheric conditions and AO performance during data acquisition.
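As context, Fourier-amplitude calibration with a transfer function is conventionally done by dividing the object's mean power spectrum by one estimated from a reference star; a minimal sketch follows. The paper's contribution, deriving the 2-D transfer function from simultaneous AO data instead, is more involved than this.

```python
# Conventional transfer-function calibration: estimate the speckle transfer
# function from reference-star frames and divide it out of the object's
# mean power spectrum.
import numpy as np

def mean_power_spectrum(frames):
    f = np.fft.fft2(frames, axes=(1, 2))
    return np.mean(np.abs(f) ** 2, axis=0)

def calibrate(object_frames, reference_frames, floor=1e-8):
    stf = mean_power_spectrum(reference_frames)   # transfer-function estimate
    obj = mean_power_spectrum(object_frames)
    # divide out the atmosphere+AO response; the floor avoids noise blow-up
    return obj / np.maximum(stf, floor * stf.max())

rng = np.random.default_rng(11)                    # stand-in frame stacks
obj_ps = calibrate(rng.random((100, 64, 64)), rng.random((100, 64, 64)))
```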

17.
Radarclinometry     
A mathematical theory and a corresponding algorithm have been developed to derive topographic maps from radar images treated as photometric arrays. Thus, as radargrammetry is to photogrammetry, so radarclinometry is to photoclinometry. Photoclinometry is subject to a fundamental indeterminacy even for terrain homogeneous in normal albedo. This arises from the fact that the geometric locus of orientations of the local surface normal consistent with a given reflected specific intensity of radiation is more complicated than a fixed line in space. For a radar image, the locus is a cone whose half-angle is the incidence angle and whose axis contains the radar. The indeterminacy is removed throughout a region if one possesses a control profile as a boundary condition. In the absence of such ground truth, a point boundary condition will suffice only in conjunction with a heuristic assumption, such as that the strike-line runs perpendicular to the line of sight. In the present study I have implemented a more reasonable assumption, which I call the hypothesis of local cylindricity.
Firstly, a general theory is derived, based solely on the implicit mathematical determinacy. This theory would be directly indicative of procedure if images were completely devoid of systematic error and noise. It produces topography by an area integration of radar brightness, starting from a control profile, without need of additional idealistic assumptions. But we have also separately theorized a method of forming this control profile, which does require an additional assumption about the terrain: that the curvature properties of the terrain are locally those of a cylinder of inferable orientation, within a second-order mathematical neighborhood of every point of the terrain. While local strike and dip completely determine the radar brightness itself, the terrain curvature determines the brightness gradient in the radar image. Therefore, the control profile is formed as a line integration of brightness and its local gradient, starting from a single point of the terrain where the local orientation of the strike-line is estimated by eye.
Secondly, and independently, the calibration curve for pixel brightness versus incidence angle is produced. I assume that an applicable curve can be found from the literature or elsewhere, so that our problem is condensed to that of properly scaling the brightness axis of the calibration curve. A first estimate is found by equating the average image brightness to the point on the brightness axis corresponding to the complement of the effective radar depression angle, an angle assumed given. A statistical analysis is then used to correct, on the one hand, for the fact that the average brightness is not the brightness that corresponds to the average incidence angle, as a result of the non-linearity of the calibration curve; and, on the other hand, for the fact that the average incidence angle is not the same for a rough surface as for a flat surface (and therefore not the complement of the depression angle).
Lastly, the practical modifications that were interactively evolved to produce an operational algorithm for treating real data are developed. They are by no means considered optimized at present; such a possibility is thus far precluded by excessive computer time. Most noteworthy in this respect is the abandonment of area integration away from a control profile. Instead, the topography is produced as a set of independent line integrations down each of the parallel range lines of the image, using the theory for control-profile formation. An adaptive technique, which now appears excessive, was also employed so that SEASAT images of sand dunes could be processed. In this, the radiometric calibration was iterated to force the endpoints of each profile to zero elevation. A secondary algorithm then employed line averages of appropriate quantities to adjust the mean tilt and the mean height of each range profile. Following this step, a sequence of fairly ordinary filtering techniques was applied to the topography. An application is shown for a Motorola image of Crazy Jug Point in the Grand Canyon. Unfortunately, a radiometric calibration curve is unavailable, but a fictitious calibration curve has provided an encouraging qualitative test of these efforts.
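To make the line-integration step concrete, here is a toy sketch: pixel brightness is inverted to an incidence angle through an assumed cosine calibration curve, converted to an along-range slope (assuming the strike-line runs across-track, i.e. ignoring the cone ambiguity discussed above), and integrated to a height profile. Every number and the calibration law itself are invented.

```python
# Toy radarclinometric profile: brightness -> incidence angle -> slope ->
# cumulative height along one range line.
import numpy as np

depression = np.deg2rad(40.0)            # assumed radar depression angle
nominal_inc = np.pi / 2 - depression     # incidence angle on flat ground

def brightness(inc):                     # assumed calibration curve: cosine law
    return np.cos(inc)

def profile_from_range_line(pixels, dx=1.0):
    inc = np.arccos(np.clip(pixels, 0.0, 1.0))   # invert calibration curve
    slope = np.tan(nominal_inc - inc)            # tilt relative to flat ground
    return np.cumsum(slope) * dx                 # line integration -> heights

# Synthetic range line: a gentle hill expressed through the calibration curve.
x = np.linspace(0, 100, 200)
true_slope = 0.2 * np.sin(2 * np.pi * x / 100)
line = brightness(nominal_inc - np.arctan(true_slope))
heights = profile_from_range_line(line, dx=x[1] - x[0])
print("endpoint height (should be ~0 for this symmetric hill):", heights[-1])
```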

18.
Limited by observing conditions and by the system itself, an adaptive optics system usually achieves only partial correction of images degraded by atmospheric turbulence. A post-processing method based on frame selection and multi-frame blind deconvolution is proposed for high-resolution restoration of adaptive optics images. The method uses frame selection to pick out suitable degraded frames for the iterative blind deconvolution, requires no prior knowledge other than a positivity constraint, and has been applied to the restoration of stellar images observed with the 61-element adaptive optics system on the 1.2 m telescope at Yunnan Observatory. Experimental results show that the method can effectively compensate for the residual correction errors of the adaptive optics system and restore images that reach the diffraction limit.
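A hedged sketch of the frame-selection step only (the multi-frame blind deconvolution itself is beyond a few lines); the sharpness metric and the fraction kept are common conventional choices rather than the paper's specific criteria.

```python
# Rank degraded AO frames by a sharpness metric and keep the best ones for
# the subsequent iterative multi-frame blind deconvolution.
import numpy as np

def select_frames(frames, keep_fraction=0.3):
    # normalised sharpness sum(I^2)/sum(I)^2: higher for more concentrated images
    sharpness = (frames ** 2).sum(axis=(1, 2)) / frames.sum(axis=(1, 2)) ** 2
    n_keep = max(1, int(keep_fraction * len(frames)))
    order = np.argsort(sharpness)[::-1]
    return frames[order[:n_keep]]

stack = np.random.default_rng(9).random((150, 128, 128))  # stand-in frames
selected = select_frames(stack)
print(selected.shape)
```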

19.
We have developed a general framework for modeling gyrosynchrotron and free–free emission from solar flaring loops and used it to test the premise that 2D maps of source parameters, particularly the magnetic field, can be deduced from spatially resolved microwave spectropolarimetry data. We show quantitative results for a flaring loop with a realistic magnetic geometry, derived from a magnetic-field extrapolation, and containing an electron distribution with typical thermal and nonthermal parameters, after folding through the instrumental profile of a realistic interferometric array. We compare the parameters generated from forward-fitting a homogeneous source model to each line of sight through the folded image data cube both with the original parameters used in the model and with parameters generated from forward-fitting a homogeneous source model to the original (unfolded) image data cube. We find excellent agreement in general, but with systematic effects that can be understood as due to the finite resolution in the folded images and the variation of parameters along the line of sight, which are ignored in the homogeneous source model. We discuss the use of such 2D parameter maps within a larger framework of 3D modeling, and the prospects for applying these methods to data from a new generation of multifrequency radio arrays now or soon to be available.

20.
《Planetary and Space Science》2007,55(10):1310-1318
We present a method for labelling and locating organic material in extraterrestrial samples in order to determine its spatial arrangement in relation to inorganic components. We have taken a suite of meteorites, including carbonaceous chondrites (CC) and Shergottite–Nakhlite–Chassignites (SNCs), and attempted to locate organic material within them by labelling with osmium tetroxide vapour impregnation. This technique is limited by the abundance of organic material within each sample and by the degree of terrestrial contamination, which may contribute additional organic components or surface contaminants that mask indigenous organic components. The results confirm previous studies showing that phyllosilicates are key to organic matter accumulation in extraterrestrial environments.

