Similar Documents
1.
This paper presents the approach of using complex multiplier-accumulators (CMACs) with multiple accumulators to reduce the total number of memory operations in an input-buffered architecture for the X part of an FX correlator. A processing unit of this architecture uses an array of CMACs that are reused for different groups of baselines. The disadvantage of processing correlations in this way is that each input data sample has to be read multiple times from memory, because each input signal is used in many of these baseline groups. While a one-accumulator CMAC cannot switch to a different baseline until it has finished integrating the current one, a multiple-accumulator CMAC can. Thus, an array of multiple-accumulator CMACs can switch at any moment between processing different baselines that share some input signals, reusing the data currently in the processing buffers. In this way, significant reductions in the number of memory read operations are achieved with only a few accumulators per CMAC. For example, for a large number of input signals, three-accumulator CMACs reduce the total number of memory operations by more than a third. Simulated energy measurements of four VLSI designs in a high-performance 28 nm CMOS technology are presented to demonstrate that using multiple accumulators can also reduce the power dissipation of the processing array. Using three accumulators instead of one has been found to reduce the overall energy of 8-bit CMACs by 1.4% through reduced switching activity within their circuits, in addition to the more than 30% reduction in memory operations.
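The switching idea in this abstract can be made concrete with a toy model. Everything below (the class names, the two-baseline layout, the sample values) is our own invention for illustration, not the paper's design: each CMAC keeps one running sum per baseline, so a sample of a shared input signal can be read from memory once and applied to every baseline that uses it.

```python
# Toy model of a complex multiply-accumulate (CMAC) unit with several
# accumulators; class and variable names are invented for illustration.

class CountingMemory:
    """Wraps the sample buffers and counts every read operation."""
    def __init__(self, signals):
        self.signals = signals
        self.reads = 0

    def read(self, sig, t):
        self.reads += 1
        return self.signals[sig][t]

class MultiAccCMAC:
    """A CMAC holding one running sum per baseline assigned to it."""
    def __init__(self):
        self.acc = {}

    def mac(self, baseline, x, y):
        self.acc[baseline] = self.acc.get(baseline, 0) + x * y.conjugate()

T = 8
signals = {s: [complex(s + 1, t) for t in range(T)] for s in (0, 1, 2)}

# One accumulator: baselines (0,1) and (0,2) must be integrated one after
# the other, so every sample of signal 0 is read from memory twice.
mem1, cmac1 = CountingMemory(signals), MultiAccCMAC()
for bl in [(0, 1), (0, 2)]:
    for t in range(T):
        cmac1.mac(bl, mem1.read(bl[0], t), mem1.read(bl[1], t))

# Two accumulators: the CMAC switches baseline per sample, so each sample
# of the shared signal 0 is read once and used for both baselines.
mem2, cmac2 = CountingMemory(signals), MultiAccCMAC()
for t in range(T):
    x0 = mem2.read(0, t)
    cmac2.mac((0, 1), x0, mem2.read(1, t))
    cmac2.mac((0, 2), x0, mem2.read(2, t))

assert cmac1.acc == cmac2.acc      # identical correlations either way
print(mem1.reads, mem2.reads)      # 32 24 -> a 25% saving on reads
```

With more baselines sharing each signal, and more accumulators per CMAC, the saving grows toward the more-than-a-third figure quoted in the abstract.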

2.
The orbits of the asteroids crossing the orbit of the Earth and other planets are chaotic and cannot be computed in a deterministic way for a time span long enough to study the probability of collisions. It is possible to study the statistical behaviour of a large number of such orbits over a long span of time, provided enough computing resources and intelligent post-processing software are available. The former problem can be handled by exploiting the enormous power of parallel computing systems. The orbits of the asteroids can be studied as a restricted (N+M)-body problem, which is well suited to parallel processing: one processor computes the orbits of the planets while the others compute the orbits of the asteroids. This scheme has been implemented on LCAP-2, an array of IBM and FPS processors with shared memory designed by E. Clementi (IBM). The parallelisation efficiency has been over 80%, and the overall speed over 90 MegaFLOPS; the orbits of all the asteroids with perihelia lower than the aphelion of Mars (410 objects) have been computed for 200,000 years (Project SPACEGUARD). The most difficult step of the project is the post-processing of the very large amount of output data to gather qualitative information on the behaviour of so many orbits without resorting to the traditional technique, i.e. human examination of output in graphical form. Within Project SPACEGUARD we have developed a qualitative classification of the orbits of the planet crossers. To develop an entirely automated classification of the qualitative orbital behaviour, including crossing behaviour, resonances (mean motion and secular), and protection mechanisms avoiding collisions, is a challenge still to be met.

3.
A new architecture is presented for a Networked Signal Processing System (NSPS) suitable for handling the real-time signal processing of multi-element radio telescopes. In this system, a multi-element radio telescope is viewed as an instance of a multi-sensor data fusion problem, which can be decomposed into a general set of computing and network components for which a practical and scalable architecture is enabled by current technology. The need for such a system arose in the context of an ongoing program for reconfiguring the Ooty Radio Telescope (ORT) as a programmable 264-element array, which will enable several new observing capabilities for large-scale surveys on this mature telescope. For this application, it is necessary to manage, route and combine large volumes of data whose real-time collation requires large I/O bandwidths to be sustained. Since these are general requirements of many multi-sensor fusion applications, we first describe the basic architecture of the NSPS in terms of a Fusion Tree before elaborating on its application to the ORT. The paper addresses issues relating to high-speed distributed data acquisition, Field Programmable Gate Array (FPGA) based peer-to-peer networks supporting significant on-the-fly processing while routing, and providing a last-mile interface to a typical commodity network like Gigabit Ethernet. The system is fundamentally a pair of co-operative networks, one of which is part of a commodity high-performance computer cluster while the other is based on Commercial-Off-The-Shelf (COTS) technology with support from software/firmware components in the public domain.

4.
The new era of software signal processing has a large impact on radio astronomy instrumentation. Our design and implementation of a 32-antenna, 33 MHz, dual-polarization, fully real-time software backend for the GMRT, using only off-the-shelf components, is an example of this. We have built a correlator and a beamformer, using PCI-based ADC cards and a Linux cluster of 48 nodes with dual gigabit inter-node connectivity for real-time data transfer requirements. The highly optimized compute pipeline uses cache-efficient, multi-threaded parallel code, with the aid of vectorized processing. This backend allows flexibility in final time and frequency resolutions, and the ability to implement algorithms for radio frequency interference rejection. Our approach has allowed relatively rapid development of a fairly sophisticated and flexible backend receiver system for the GMRT, which will greatly enhance the productivity of the telescope. In this paper we describe some of the first-light observations made using this software processing pipeline. We believe this is the first instance of such a real-time observatory backend for an intermediate-sized array like the GMRT.

5.
The new generation of radio telescopes, such as the proposed Square Kilometre Array (SKA) and the Low-Frequency Array (LOFAR), rely heavily on the use of very large phased aperture arrays operating over wide bandwidths at frequencies up to approximately 1.4 GHz. The SKA in particular will include aperture arrays consisting of many thousands of elements per station, providing unparalleled survey speeds. Currently two different arrays (from nominally 70 MHz to 450 MHz and from 400 MHz to 1.4 GHz) are being studied for inclusion within the overall SKA configuration. In this paper we analyze the array contribution to system temperature for a number of regular and irregular planar antenna array configurations which are possible geometries for the low-frequency SKA (sparse disconnected arrays). We focus on the sub-500 MHz band, where the real sky contribution to system temperature (T_sys) is highly significant and dominates the overall system noise temperature. We compute the sky noise contribution to T_sys by simulating the far-field response of a number of SKA stations and then convolving it with the sky brightness temperature distribution from the Haslam 408 MHz survey, scaled to observations at 100 MHz. Our analysis of array temperature is carried out assuming observations of three cold regions above and below the Galactic plane. The results show the advantages of regular arrays when sampled at the Nyquist rate, as well as their disadvantage, grating lobes when under-sampled, in comparison to non-regular arrays.
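The sky-noise computation described here, scaling the 408 MHz survey to the observing frequency and averaging it weighted by the station beam, can be outlined in a few lines. Everything below is our assumption for illustration, not the paper's data: the map is random rather than the Haslam survey, the beam is a plain Gaussian rather than a simulated station response, and the spectral index of -2.55 is a commonly assumed value.

```python
import numpy as np

# Sketch of the sky contribution to T_sys: scale a (toy) 408 MHz brightness
# temperature map to 100 MHz with an assumed power-law spectral index, then
# take the beam-weighted average over the station power pattern.

rng = np.random.default_rng(3)
t408 = rng.uniform(15, 40, size=(90, 180))      # toy Haslam-like map [K]
beta = -2.55                                    # assumed spectral index
t100 = t408 * (100.0 / 408.0) ** beta           # map scaled to 100 MHz

# Toy station power pattern: a Gaussian beam centred on the map.
az, el = np.meshgrid(np.linspace(-1, 1, 180), np.linspace(-1, 1, 90))
beam = np.exp(-(az ** 2 + el ** 2) / 0.1)

t_sky = (beam * t100).sum() / beam.sum()        # beam-weighted average [K]
print(round(t_sky))                             # roughly 1000 K here
```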

6.
A genetic algorithm is employed to perform the pairing of meteorite fragments based on various characteristics measured from thin sections using an image analysis program and from analyses routinely carried out during classification. The genetic algorithm searches for the best group pairings by generating a population of trial pairs, linking them together to form groups, and evolving the population so that only pairs that are members of likely pairing groups survive to the next generation. In this way, meaningful pairing groups will emerge from the population, as long as characteristics within real pairing groups have a variance sufficiently small compared to the variance between groups. What constitutes “sufficiently small” is discussed and investigated by testing the genetic algorithm on artificial data, which shows that, in principle, the method can achieve a 100% success rate. The method is then tested on real data whose pairing groups are definitely known. This is achieved by gathering data from the image processing of several scenes of the same meteorite thin section, treating each scene as a separate fragment. Using thin sections from the Reg el Acfer meteorite population, we find that the genetic algorithm identifies almost all of the main pairing groups, with about half the groups found in their entirety; the pairwise success rate is 76%. Although this methodology requires some refinement before it could be applied to a population of meteorite fragments, these preliminary results are encouraging. The potential benefit of an automated approach lies in the tremendous savings in time and effort, allowing meaningful and reproducible pairings to be made from data sets that are prohibitively large for a human being.
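The “sufficiently small” variance condition can be made quantitative: similarity-based pairing can only succeed when the spread of a characteristic within a true pairing group is small relative to the spread between groups. A minimal check on invented feature values (the threshold of 100 is arbitrary, and this is the separability criterion only, not the genetic algorithm itself):

```python
import statistics as st

# Similarity pairing can only work when the within-group spread of a
# characteristic is small next to the between-group spread.
# The feature values below are invented for illustration.

groups = [[1.00, 1.02, 0.98], [5.00, 5.05], [9.00, 8.95, 9.10]]

within = st.mean(st.pvariance(g) for g in groups)     # spread inside groups
between = st.pvariance(st.mean(g) for g in groups)    # spread of group means
separability = between / within
print(separability > 100)     # True: these groups are easily recoverable
```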

7.
In recent years Java has matured into a stable, easy-to-use language with the flexibility of an interpreter (for reflection etc.) but the performance and type checking of a compiled language. When we started using Java for astronomical applications around 1999, they were the first of their kind in astronomy. Now a great deal of astronomy software is written in Java, as are many business applications. We discuss the current environment and trends concerning the language and present an actual example of scientific use of Java for high-performance distributed computing: ESA’s mission Gaia. The Gaia scanning satellite will perform a galactic census of about 1,000 million objects in our galaxy. The Gaia community has chosen to write its processing software in Java. We explore the manifold reasons for choosing Java for this large science collaboration. Gaia processing is numerically complex but highly distributable, some parts being embarrassingly parallel. We describe the Gaia processing architecture and its realisation in Java. We delve into the astrometric solution, which is the most advanced and most complex part of the processing. The Gaia simulator is also written in Java and is the most mature code in the system. It has been running successfully since about 2005 on the supercomputer “Marenostrum” in Barcelona. We relate experiences of using Java on a large shared machine. Finally, we discuss Java for scientific computing, including some of its problems.

8.
Most often, astronomers are interested in a source (e.g., moving, variable, or extreme in some colour index) that lies on a few pixels of an image. However, the classical approach in astronomical data processing is to process the entire image or set of images, even when the sole source of interest occupies only a few pixels of one or a few images. This is because pipelines have been written and designed for instruments with fixed detector properties (e.g., image size, calibration frames, overscan regions, etc.). Furthermore, all metadata and processing parameters are based on an instrument or a detector. Accordingly, out of many thousands of images for a survey, this can lead to unnecessary processing of data that is both time-consuming and wasteful. We describe the architecture and an implementation of sub-image processing in Astro-WISE. The architecture enables a user to select, retrieve and process only the relevant pixels of an image where the source exists. We show that lineage data collected during the processing and analysis of datasets can be reused to perform selective reprocessing (at the sub-image level) on datasets while the remainder of the dataset is untouched, a process that is difficult to automate without lineage.

9.
Observation data from radio telescopes are typically stored in three- (or higher) dimensional data cubes, whose resolution, coverage and size continue to grow as ever larger radio telescopes come online. The Square Kilometre Array, slated to be the largest radio telescope in the world, will generate multi-terabyte data cubes – several orders of magnitude larger than the current norm. Despite this imminent data deluge, scalable approaches to file access in astronomical visualisation software are rare: most current software packages cannot read astronomical data cubes that do not fit into system memory, or else provide access only at a serious performance cost. In addition, there is little support for interactive exploration of 3D data. We describe a scalable, hierarchical approach to 3D visualisation of very large spectral data cubes that enables rapid visualisation of large data files on standard desktop hardware. Our hierarchical approach, embodied in the AstroVis prototype, aims to provide a means of viewing large datasets that do not fit into system memory. The focus is on rapid initial response: the system first presents a reduced, coarse-grained 3D view of the selected data cube, which is then gradually refined. The user may select sub-regions of the cube to be explored in more detail, or extracted for use in applications that do not support large files. We thus shift the focus from analysis informed by narrow slices of detailed information to analysis informed by overview information, with details on demand. Our hierarchical solution to the rendering of large data cubes reduces the overall time to complete file reading, provides user feedback during file processing and is memory efficient. It does not require high-performance computing hardware and can be implemented on any platform supporting the OpenGL rendering library.
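The coarse-to-fine strategy can be sketched as a pyramid of block-averaged cubes: render the smallest level immediately, then swap in finer levels as they load. The cube size and level count below are invented for illustration; AstroVis itself works on spectral data files and an OpenGL renderer, which this sketch does not attempt.

```python
import numpy as np

# Build a mipmap-like pyramid for a data cube: level 0 is the full cube,
# each further level averages 2x2x2 blocks and is eight times smaller.

def downsample(cube, f=2):
    """Block-average a cube whose side lengths are divisible by f."""
    x, y, z = cube.shape
    return cube.reshape(x // f, f, y // f, f, z // f, f).mean(axis=(1, 3, 5))

def build_pyramid(cube, levels=3):
    pyramid = [cube]
    for _ in range(levels - 1):
        pyramid.append(downsample(pyramid[-1]))
    return pyramid

cube = np.arange(64 ** 3, dtype=np.float64).reshape(64, 64, 64)
pyr = build_pyramid(cube)

# A viewer would ship pyr[-1] (16x16x16, ~4000 voxels) to the screen first
# and refine towards pyr[0]; block averaging preserves the cube's mean.
print([p.shape for p in pyr])   # [(64, 64, 64), (32, 32, 32), (16, 16, 16)]
assert all(abs(p.mean() - cube.mean()) < 1e-5 for p in pyr)
```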

10.
uv-faceting is one of the widely used wide-field imaging techniques, and will be adopted for the data processing of the low-frequency array of the first-stage Square Kilometre Array (SKA1). Because the scale of the raw SKA1 data is unprecedentedly large, processing the data directly with the original uv-faceting imaging algorithm would be very inefficient. Therefore, a uv-faceting imaging algorithm based on MPI (Message Passing Interface) + OpenMP (Open Multi-Processing) and a uv-faceting imaging algorithm based on MPI + CUDA (Compute Unified Device Architecture) are proposed. The most time-consuming steps of the algorithm, data reading and gridding, are optimized in parallel. Verification shows that the results of the two proposed algorithms are essentially consistent with those obtained by the current mainstream data processing software CASA (Common Astronomy Software Applications), indicating that the two algorithms are essentially correct. Further analysis of accuracy and total running time shows that the MPI+CUDA method is better than the MPI+OpenMP method in both accuracy and running speed. Performance tests show that the proposed algorithms are effective and offer a degree of scalability.
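Since gridding is the hot spot that both parallel algorithms target, a serial reference version makes the optimisation target concrete. This is a bare nearest-cell gridder on random data, with invented parameter values, not the CASA or SKA1 implementation (which uses convolutional gridding kernels):

```python
import numpy as np

# Serial reference gridder: accumulate each visibility sample into the
# nearest cell of an n x n uv grid. Data and cell size are invented.

def grid_visibilities(u, v, vis, n, cell):
    grid = np.zeros((n, n), dtype=complex)
    iu = np.round(u / cell).astype(int) + n // 2
    iv = np.round(v / cell).astype(int) + n // 2
    ok = (iu >= 0) & (iu < n) & (iv >= 0) & (iv < n)
    np.add.at(grid, (iv[ok], iu[ok]), vis[ok])   # handles duplicate cells
    return grid

rng = np.random.default_rng(0)
u = rng.uniform(-100, 100, 1000)
v = rng.uniform(-100, 100, 1000)
vis = rng.normal(size=1000) + 1j * rng.normal(size=1000)

g = grid_visibilities(u, v, vis, n=64, cell=4.0)
assert abs(g.sum() - vis.sum()) < 1e-8   # every sample landed in one cell
```

The scatter-add (`np.add.at` here) is exactly the step that turns into a race condition when parallelised, so the OpenMP and CUDA versions must use atomic updates or per-thread partial grids; this is why gridding dominates the optimisation effort.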

11.
The last decade has seen a dramatic change in the way astronomy is carried out. The dawn of new microelectronic devices such as CCDs has dramatically increased the amount of observed data. Large, in some cases all-sky, surveys have emerged in almost all wavelength ranges of the observable electromagnetic spectrum. This large amount of data has to be organized and published electronically, and a new style of data retrieval is essential to exploit all the hidden information in the multiwavelength data. Many statistical algorithms required for these tasks run reasonably fast on small sets of in-memory data, but take noticeable performance hits when operating on large databases that do not fit into memory. We utilize new software technologies to develop and evaluate fast multidimensional indexing schemes that inherently follow the underlying, highly non-uniform distribution of the data: layered uniform indices, hierarchical binary space partitioning, and sampled flat Voronoi tessellation of the data. These techniques can dramatically speed up operations such as finding similar objects by example, classifying objects, or comparing extensive simulation sets with observations. (© 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
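As one concrete instance of such indexing, a find-similar-objects-by-example query over a uniform bucket grid only touches the buckets around the query point instead of scanning the whole table. This is a toy, memory-only, two-dimensional version with invented data; the layered and hierarchical schemes in the paper go well beyond it:

```python
import math

# Uniform grid index: objects are bucketed by coarse cell, and a
# similarity query scans only the 3x3 block of cells around the example.

class GridIndex:
    def __init__(self, cell):
        self.cell = cell
        self.buckets = {}

    def key(self, p):
        return tuple(int(math.floor(c / self.cell)) for c in p)

    def insert(self, p):
        self.buckets.setdefault(self.key(p), []).append(p)

    def near(self, p, r):
        """All stored points within radius r of p (assumes r <= cell)."""
        kx, ky = self.key(p)
        out = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for q in self.buckets.get((kx + dx, ky + dy), []):
                    if math.dist(p, q) <= r:
                        out.append(q)
        return out

idx = GridIndex(cell=1.0)
for p in [(0.1, 0.1), (0.2, 0.15), (3.0, 3.0), (0.9, 0.9)]:
    idx.insert(p)

print(sorted(idx.near((0.0, 0.0), 0.5)))   # only the two points near origin
```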

12.
For satellite conjunction prediction involving many objects, timely processing can be a concern. Various filters are used to identify orbiting pairs that cannot come close enough over a prescribed time period to be considered hazardous. Such pairings can then be eliminated from further computation to shorten the overall processing time. One such filter is the orbit path filter (also known as the geometric pre-filter), designed to eliminate pairs of objects based on characteristics of orbital motion. The goal of this filter is to eliminate pairings where the distance (geometry) between their orbits remains above some user-defined threshold, irrespective of the actual locations of the satellites along their paths. Rather than using a single distance bound, this work presents a toroid approach, providing a measure of versatility by allowing the user to specify different in-plane and out-of-plane bounds for the path filter. The primary orbit is used to define a focus-centered elliptical ring torus with user-defined thresholds. An assessment is then made to determine whether the secondary orbit can touch or penetrate this torus. The method detailed here can be used on coplanar as well as non-coplanar orbits.
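A stripped-down version of the torus test conveys the geometry. Here the primary is simplified to a circular orbit of radius a in the z = 0 plane (the paper handles focus-centred elliptical tori), du and dw play the role of the in-plane and out-of-plane thresholds, and the secondary path is sampled pointwise; all values are invented:

```python
import math

# Torus test, circular-primary special case: a sampled point of the
# secondary path penetrates the torus when its in-plane offset from the
# primary circle is within du AND its out-of-plane offset is within dw.

def penetrates_torus(points, a, du, dw):
    """points: (x, y, z) samples of the secondary path, primary in z = 0."""
    for x, y, z in points:
        in_plane = abs(math.hypot(x, y) - a)
        if in_plane <= du and abs(z) <= dw:
            return True
    return False

# Secondary orbit: a circle of radius 1.2 inclined 30 deg to the primary.
tilt = math.radians(30)
sec = []
for k in range(360):
    t = math.radians(k)
    sec.append((1.2 * math.cos(t),
                1.2 * math.sin(t) * math.cos(tilt),
                1.2 * math.sin(t) * math.sin(tilt)))

print(penetrates_torus(sec, a=1.0, du=0.05, dw=0.05))   # False: tight torus
print(penetrates_torus(sec, a=1.0, du=0.30, dw=0.30))   # True: loose torus
```

The pair is dismissed only when no sample gets inside the torus, which mirrors the filter's conservative role: it may keep a harmless pair, but should never discard a hazardous one.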

13.
An FX correlator implementation for the SKAMP project is presented. The completed system will provide capabilities that match those proposed for the aperture-plane array concept for the SKA. Through a novel architecture, expansion is possible to accommodate larger arrays such as the 600-station cylindrical reflector proposals. In contrast to many current prototypes, it will use digital transmission from the antenna, requiring digital filterbanks and beamformers to be located at the antenna. This will demonstrate the technologies needed for all long-baseline antennas in the SKA.

14.
Aiming to study the relationship between Venus surface heights and surface roughness, the Pioneer Venus surface altitude map and the map of r.m.s. slope on the m-dkm scale have been analysed for the Beta and Ishtar regions using a digital image processing system. To complement these data, the results of geomorphological analysis of the Venera 9 and 10 TV panoramas as well as gamma-spectrometric and photometric measurements were used. The analysis shows that the Venera 9 and 10 landing sites represent geologic-morphologic situations typical of Venus, thus enabling the results of observations made at the landing sites to be extended to large provinces. Apparently this conclusion is also applicable to the Venera 8 landing site. No strong relationship exists between the roughness of the surface and its altitude or the amount of regional slope, for either the Beta or the Ishtar region. The weak direct correlations observable for roughness-altitude pairs in the Beta region and for roughness-altitude and roughness-slope pairs in the Ishtar region are quite obviously a consequence of regional roughness control, i.e. of the overall character of the geological structure. On Venus the factors contributing to higher surface roughness on the m-dkm scale are evidently mostly volcanic and tectonic in nature, whilst those responsible for smoothing-out of the surface are chiefly exogenic. The rate of exogenic transformation of the Cytherean surface may be fairly high. On Venus, as on the Earth, active tectono-magmatic processes have possibly taken place in recent geological epochs. One of the places where they are manifest is an extensive zone running from north to south across the Beta, Phoebe and Themis highlands. Within its limits occur both shield-type basaltic volcanism and areal basalt effusions at low hypsometric levels, accounting for the formation of lowland plains at the expense of ancient rolling plains.
The basalts of the shield volcano Beta show some differences in composition compared to those of the areal effusions at low hypsometric levels. The overall character of Cytherean tectonics in the recent geologic epoch is apparently block-type, with a predominance of vertical movements. Against the background of the sinking of some blocks, others are rising and, possibly, such compensation upheavals have been responsible for the formation of the Ishtar region.

15.
Stellar dynamics     
This review attempts to place stellar dynamics in relation to other dynamical fields and to describe some of its important techniques and present-day problems. Stellar dynamics has some parallels, in increasing order of closeness, with celestial mechanics, statistical mechanics, kinetic theory, and plasma theory; but even in the last case the parallels are not very close. Stellar dynamics describes, usually through distribution functions, the motions of a large number of bodies as they all act on each other gravitationally. To a good approximation each star can be considered to move in the smoothed-out field of all the others, with random encounters between pairs of stars adding a slow statistical change to these smooth motions. Smooth-field dynamics has a well-developed theory, and the state of smooth stellar systems can be described in some detail. The ‘third integral’ presents an outstanding problem, however. Stellar encounters also have a well-developed theory, but close encounters and encounters of a single star with a binary pose serious problems for the statistical treatment. Star-cluster dynamics can be approached through a theory of smooth-field dynamics plus changes due to encounters, or alternatively through numerical simulations. The relation between the two methods is not yet close enough. The dynamical evolution of star clusters is still not fully understood.

16.
A Difference Algorithm for the State Transition Matrix and Its Applications
Hu Xiaogong, Huang Cheng, Liao Xinhao. Acta Astronomica Sinica, 2000, 41(2): 113-122
This paper points out the difficulties that arise in the program implementation of the numerical-integration method for computing the state transition matrix. Motivated by the practical needs of precise orbit determination and parameter estimation, a difference algorithm is proposed, in which the state transition matrix is computed from the difference between two neighbouring orbits. The advantages of the difference algorithm are that the program is well structured and simple to write; its drawback is a possible loss of precision in the differencing. The results of the difference algorithm are compared with those of the numerical-integration method, and a way of overcoming this drawback is proposed.
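The difference algorithm can be illustrated on a toy dynamical system. Below, the state transition matrix of a harmonic oscillator (standing in for the orbital dynamics; the real application integrates precise force models) is built column by column from the difference of two neighbouring trajectories, and the known analytic answer lets us check the finite-difference error:

```python
import math

# Difference algorithm for the state transition matrix (STM), illustrated
# on x' = v, v' = -x, whose exact STM over time t is
# [[cos t, sin t], [-sin t, cos t]].

def f(state):
    x, v = state
    return (v, -x)

def rk4_propagate(state, t, steps=1000):
    """Integrate the toy dynamics with classical RK4."""
    h = t / steps
    for _ in range(steps):
        x = state
        k1 = f(x)
        k2 = f((x[0] + h / 2 * k1[0], x[1] + h / 2 * k1[1]))
        k3 = f((x[0] + h / 2 * k2[0], x[1] + h / 2 * k2[1]))
        k4 = f((x[0] + h * k3[0], x[1] + h * k3[1]))
        state = (x[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
                 x[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
    return state

def stm_by_differences(state0, t, eps=1e-6):
    """Column j = (propagate(state0 + eps*e_j) - propagate(state0)) / eps."""
    ref = rk4_propagate(state0, t)
    cols = []
    for j in range(2):
        pert = list(state0)
        pert[j] += eps
        out = rk4_propagate(tuple(pert), t)
        cols.append([(out[i] - ref[i]) / eps for i in range(2)])
    # cols[j][i] is d state_i / d state0_j; transpose to the usual layout.
    return [[cols[j][i] for j in range(2)] for i in range(2)]

t = 1.0
phi = stm_by_differences((0.3, -0.2), t)
exact = [[math.cos(t), math.sin(t)], [-math.sin(t), math.cos(t)]]
err = max(abs(phi[i][j] - exact[i][j]) for i in range(2) for j in range(2))
assert err < 1e-5   # tiny here; for nonlinear force models the choice of
                    # eps trades truncation against rounding, which is the
                    # precision loss the abstract mentions
```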

17.
NASA is proposing a new receiving facility that needs to beamform broadband signals from hundreds of antennas. This is a problem similar to SKA beamforming, with the added requirement that the processing should not add significant noise or distortion that would interfere with processing spacecraft telemetry data. The proposed solution is based on an FX correlator architecture and uses oversampling polyphase filterbanks to avoid aliasing. Each beamformer/correlator module processes a small part of the total bandwidth for all antennas, eliminating interconnection problems. Processing the summed frequency-domain data with a synthesis polyphase filterbank reconstructs the time series. Choice of a suitable oversampling ratio and of the analysis and synthesis filters can keep aliasing below −39 dB while keeping the passband ripple low. This approach is readily integrated into the currently proposed SKA correlator architecture.

18.
A Method for Enlarging the Field of View of FAST
All large-aperture radio telescopes face the same problem: as their resolution and sensitivity increase, their field of view shrinks, and the larger the aperture, the smaller the field of view. This is an unavoidable conflict for large telescopes. One way to resolve it is to place N separate feeds on the focal plane of the telescope and operate them simultaneously, which can be regarded as enlarging the field of view N-fold and raising the observing efficiency of the telescope N-fold. The drawbacks are that the resulting field of view is discontinuous and that the number of feeds N is limited by the focal ratio (F/D) of the telescope. A dense focal plane array solves this problem well. The elements of a dense focal plane array are not horn antennas but low-directivity Vivaldi antennas. Applying a Vivaldi array to a telescope requires a clear understanding of the electrical performance of a single Vivaldi antenna and of the Vivaldi array, together with the ability to design the illumination pattern as required. One also needs to know the electromagnetic field distribution on the focal plane of the large telescope, in order to judge whether a Vivaldi array can be applied and to derive the splitting and weighting network for the Vivaldi elements. This paper mainly presents the focal-plane field distribution of FAST and discusses the feasibility of applying a Vivaldi array.

19.
Fred L. Whipple. Icarus, 1977, 30(4): 736-746
Although the common genetic origin of the Kreutz family of Sun-grazing comets has generally been accepted, there remains uncertainty with regard to genetic identity among other groups of comets whose orbital elements are nearly alike. Porter has listed a number of such groups, and Öpik has made a statistical study of the orbits of 472 comets with aphelion distance beyond Saturn. He lists 97 groups that show similarities among their three angular elements. He calculates an overall probability of some 10^-39 that these similarities could have occurred by chance, and thus concludes that 60% or more of such comets fall into genetic groups containing from two to seven members. This paper explores the statistical reality of Öpik's groups utilizing the Monte Carlo method of statistics as well as ordinary probability theory. The conclusion is reached that, except for a few pairs, the similarity among orbital elements within the groups is no greater than random expectation.
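The Monte Carlo side of such a test is easy to sketch: draw random angular elements and count how often pairs agree in all three angles to within a tolerance purely by chance. The comet count, tolerance, and trial number below are invented and much smaller than Öpik's 472-comet sample; the point is only that the chance rate matches the analytic expectation, so observed groupings must beat it to be significant.

```python
import random

# Monte Carlo estimate of how many comet pairs agree in all three angular
# elements to within a tolerance purely by chance.

random.seed(42)

def circ_close(a, b, tol):
    """True if two angles (degrees) agree to within tol on the circle."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d) <= tol

def chance_pairs(n_comets, tol_deg, trials):
    """Mean number of chance-similar pairs per randomised trial."""
    total = 0
    for _ in range(trials):
        comets = [tuple(random.uniform(0.0, 360.0) for _ in range(3))
                  for _ in range(n_comets)]
        for i in range(n_comets):
            for j in range(i + 1, n_comets):
                if all(circ_close(a, b, tol_deg)
                       for a, b in zip(comets[i], comets[j])):
                    total += 1
    return total / trials

# Analytic expectation: C(50,2) pairs, each matching with prob (20/360)^3.
expected = (50 * 49 / 2) * (20.0 / 360.0) ** 3      # about 0.21
rate = chance_pairs(50, 10.0, 300)
print(round(rate, 2), round(expected, 2))           # close to each other
```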

20.
We present a large set of radio observations of the luminous blue variable P Cygni. These include two 6-cm images obtained with MERLIN which spatially resolve the 6-cm photosphere, monitoring observations obtained at Jodrell Bank every few days over a period of two months, and VLA observations obtained every month for seven years. This combination of data shows that the circumstellar environment of P Cyg is highly inhomogeneous, that there is a radio nebula extending to almost an arcminute from the star at 2 and 6 cm, and that the radio emission is variable on a time-scale no longer than one month, and probably as short as a few days. This short-time-scale variability is difficult to explain. We present a model for the radio emission with which we demonstrate that the star has probably been losing mass at a significant rate for at least a few thousand years, and that it has undergone at least two major outbursts of increased mass loss during the past two millennia.
