Similar Articles
20 similar articles found.
1.
2.
The New Vacuum Solar Telescope (NVST) is a 1-m solar telescope that aims to observe the fine structures in both the photosphere and the chromosphere of the Sun. The observational data acquired simultaneously from one channel for the chromosphere and two channels for the photosphere pose great challenges to the data storage of NVST. The multi-channel instruments of NVST, including scientific cameras and multi-band spectrometers, generate at least 3 terabytes of data per day and require high access performance while storing massive numbers of short-exposure images. It is worth studying and implementing a storage system for NVST that balances data availability, access performance, and development cost. In this paper, we build a distributed data storage system (DDSS) for NVST and thoroughly evaluate the availability of real-time data storage in a distributed computing environment. The experimental results show that two factors, the number of concurrent reads/writes and the file size, are critically important for improving the performance of data access in a distributed environment. Based on these two factors, three strategies for storing FITS files are presented and implemented to ensure the access performance of the DDSS under simultaneous multi-host reads and writes. Real-world application of the DDSS proves that the system is capable of meeting the requirements of NVST's real-time, high-performance observational data storage. Our study of the DDSS is the first attempt for modern astronomical telescope systems to store real-time observational data on a low-cost distributed system. The research results and corresponding techniques of the DDSS provide a new option for designing real-time massive astronomical data storage systems and will serve as a reference for future astronomical data storage.
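The two factors highlighted in this abstract, concurrency and file size, are easy to probe empirically. The sketch below is not the NVST DDSS code; it is a minimal, self-contained benchmark (file counts, sizes, and worker numbers are arbitrary assumptions) that measures aggregate write throughput for many small files versus a few large ones at the same total volume:

```python
import os
import tempfile
import time
from concurrent.futures import ThreadPoolExecutor

def write_files(directory, n_files, file_size, n_workers):
    """Write n_files of file_size bytes with n_workers concurrent writers;
    return aggregate throughput in MB/s."""
    payload = os.urandom(file_size)
    def task(i):
        with open(os.path.join(directory, f"frame_{i}.fits"), "wb") as f:
            f.write(payload)
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        list(pool.map(task, range(n_files)))
    return n_files * file_size / (time.perf_counter() - start) / 1e6

# Same total volume (16 MiB), different granularity.
with tempfile.TemporaryDirectory() as d:
    small = write_files(d, 64, 256 * 1024, 8)       # 64 files of 256 KiB
with tempfile.TemporaryDirectory() as d:
    large = write_files(d, 4, 4 * 1024 * 1024, 8)   # 4 files of 4 MiB
print(f"small files: {small:.1f} MB/s, large files: {large:.1f} MB/s")
```

On many filesystems the per-file overhead penalizes the many-small-files case at equal total volume, which is the kind of effect storage strategies for short-exposure FITS images have to mitigate.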

3.
A Radiation Belt Monitor (RBM) sensitive to protons and electrons with energies ≳0.5 MeV has been designed for the High Energy Transient Experiment (HETE) satellite in order to: first, control the on-off configuration of the experiments (i.e., those susceptible to proton damage); and second, indicate the presence of proton and/or electron events that could masquerade as legitimate high-energy photon events. One of the two RBM channels has an enhanced sensitivity to electrons. Each channel of the RBM, based on a PIN silicon diode, requires a typical power of 6 milliwatts. Tests have been performed with protons with energies from 0.1 to 2.5 MeV (generated by a Cockcroft-Walton linear accelerator via the d(d,p)t reaction), and with electrons with energies up to 1 MeV (from a 1.0 µCi ²⁰⁷Bi source).

4.
P. Zasche 《New Astronomy》2009,14(2):129-132
Twenty eclipsing binaries were selected for analysis from a huge database of observations made by the INTEGRAL/OMC camera. The photometric data were processed and analyzed, resulting in a first light-curve study of these neglected eclipsing binaries. Most of the selected systems are detached. The system ET Vel was discovered to be an eccentric one. In the absence of spectroscopic studies of these stars, further detailed analyses are still needed.

5.
P. Zasche 《New Astronomy》2010,15(1):150-154
Thirty-three eclipsing binaries were selected for analysis from a huge database of observations made by the INTEGRAL/OMC camera. The photometric data were processed and analyzed, resulting in a first light-curve study of these neglected eclipsing binaries. The system CY Lac was discovered to be an eccentric one. For several systems in this sample, the orbital periods have also been confirmed or revised. In the absence of spectroscopic studies of these stars, further detailed analyses are still needed.

6.
P. Zasche 《New Astronomy》2011,16(3):157-160
Twenty-one eclipsing binaries were selected for analysis from a huge database of observations made by the INTEGRAL/OMC camera. The photometric data were processed and analyzed, resulting in a first light-curve study of these neglected eclipsing binaries. For several systems in this sample, the orbital periods have also been confirmed or revised. Thirty-two new minima times of these binaries have been derived.

7.
The 'algorithm driven by the density estimate for the identification of clusters' (DEDICA) is applied to the A3558 cluster complex in order to find substructures. This complex, located at the centre of the Shapley Concentration supercluster, is a chain formed by the ACO clusters A3556, A3558 and A3562 and the two poor clusters SC 1327-312 and SC 1329-313. We find a large number of clumps, indicating that strong dynamical processes are active. In particular, it is necessary to use a fully three-dimensional sample (i.e. using the galaxy velocity as the third coordinate) in order also to recover the clumps superimposed along the line of sight. Even though a large number of the detected substructures was already found in a previous analysis, this method is more efficient and faster when compared with a wide battery of tests, and permits a direct estimate of the detection significance. Almost all subclusters previously detected by the wavelet analyses found in the literature are recognized by DEDICA. On the basis of the substructure analysis, we also briefly discuss the origin of the A3558 complex by comparing two hypotheses: (i) the structure is a cluster–cluster collision seen just after the first core–core encounter; or (ii) this complex is the result of a series of incoherent group–group and cluster–group mergers, focused in that region by the presence of the surrounding supercluster. We studied the fraction of blue galaxies in the detected substructures and found that the bluest groups reside between A3562 and A3558, i.e. in the expected position for the scenario of a cluster–cluster collision.
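DEDICA itself is an adaptive-kernel density-estimation algorithm; as a loose, hypothetical illustration of the underlying idea (assigning galaxies to the density peaks of a three-dimensional sample), here is a fixed-bandwidth mean-shift sketch, with the bandwidth and the synthetic data chosen arbitrarily:

```python
import numpy as np

def mean_shift_modes(points, bandwidth=1.0, n_iter=50):
    """Move each point uphill on a Gaussian kernel density estimate;
    points converging to the same mode belong to the same clump."""
    x = points.copy()
    for _ in range(n_iter):
        # Gaussian weights between current positions and the fixed data.
        d2 = ((x[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        w = np.exp(-0.5 * d2 / bandwidth**2)
        x = (w @ points) / w.sum(1, keepdims=True)
    # Group converged positions that agree to within the bandwidth.
    labels = -np.ones(len(x), dtype=int)
    modes = []
    for i, p in enumerate(x):
        for j, m in enumerate(modes):
            if np.linalg.norm(p - m) < bandwidth:
                labels[i] = j
                break
        else:
            modes.append(p)
            labels[i] = len(modes) - 1
    return labels, np.array(modes)

# Two well-separated synthetic "clumps" in (RA, Dec, velocity)-like 3D space.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0.0, 0.5, (40, 3)),
                 rng.normal(5.0, 0.5, (40, 3))])
labels, modes = mean_shift_modes(pts)
print(len(modes))  # two well-separated clumps -> 2 modes
```

Using the velocity as the third coordinate, as the abstract stresses, is what lets a scheme like this separate clumps superimposed along the line of sight.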

8.
We deal with the problem of a zero-mass body oscillating perpendicular to a plane in which two heavy bodies of equal mass orbit each other on Keplerian ellipses. The zero-mass body intersects the plane of the primaries at the system's barycenter. This problem is commonly known as the Sitnikov Problem. In this work we look for a first integral related to the oscillatory motion of the zero-mass body. This is done by first expressing the equation of motion as a second-order polynomial differential equation using a Chebyshev approximation technique. Next we search for an autonomous mapping of the canonical variables over one period of the primaries. For that we discretize the time-dependent coefficient functions into a certain number of Dirac delta functions and concatenate the elementary mappings related to the single delta-function pulses. Finally, for the polynomial mapping so obtained, we look for an integral, also in polynomial form. The invariant curves in the two-dimensional phase space of the canonical variables are investigated as functions of the primaries' eccentricity and their initial phase. In addition we present a detailed analysis of the linearized Sitnikov Problem, which is valid for infinitesimally small oscillation amplitudes of the zero-mass body. All computations are performed automatically by the FORTRAN program SALOME, which has been designed for stability considerations in high-energy particle accelerators.
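The equation of motion underlying the Sitnikov Problem is z̈ = −z/(ρ(t)² + z²)^(3/2), where ρ(t) is the distance of each primary from the barycenter (unit total mass, unit semi-major axis, G = 1). The paper's Chebyshev/mapping machinery is not reproduced here; the sketch below simply integrates the circular case (ρ = 1/2), where the energy E = v²/2 − 1/√(ρ² + z²) is an exact first integral and serves as a numerical check:

```python
import numpy as np
from scipy.integrate import solve_ivp

RHO = 0.5  # circular case: each primary at distance 1/2 from the barycenter

def rhs(t, y):
    # z'' = -z / (rho^2 + z^2)^(3/2): attraction of both primaries along z.
    z, v = y
    return [v, -z / (RHO**2 + z**2) ** 1.5]

def energy(z, v):
    # First integral of the circular (MacMillan) case.
    return 0.5 * v**2 - 1.0 / np.sqrt(RHO**2 + z**2)

# Small-amplitude oscillation, integrated over many primary periods.
sol = solve_ivp(rhs, (0.0, 50.0), [0.3, 0.0], rtol=1e-10, atol=1e-12)
e0 = energy(0.3, 0.0)
e1 = energy(*sol.y[:, -1])
print(f"energy drift after t=50: {abs(e1 - e0):.2e}")
```

For eccentric primaries ρ(t) becomes time-dependent and this energy is no longer conserved, which is precisely why the paper has to construct an approximate polynomial integral instead.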

9.
In recent years Java has matured into a stable, easy-to-use language with the flexibility of an interpreter (for reflection etc.) but the performance and type checking of a compiled language. When we started using Java for astronomical applications around 1999, they were the first of their kind in astronomy. Now a great deal of astronomy software is written in Java, as are many business applications. We discuss the current environment and trends concerning the language and present an actual example of scientific use of Java for high-performance distributed computing: ESA's mission Gaia. The Gaia scanning satellite will perform a galactic census of about 1,000 million objects in our galaxy. The Gaia community has chosen to write its processing software in Java. We explore the manifold reasons for choosing Java for this large science collaboration. Gaia processing is numerically complex but highly distributable, some parts being embarrassingly parallel. We describe the Gaia processing architecture and its realisation in Java. We delve into the astrometric solution, which is the most advanced and most complex part of the processing. The Gaia simulator is also written in Java and is the most mature code in the system; it has been running successfully since about 2005 on the supercomputer “Marenostrum” in Barcelona. We relate experiences of using Java on a large shared machine. Finally we discuss Java, including some of its problems, for scientific computing.

10.
A measured, calibrated solar radiance over the range starting at 1.2 μm, at the spectral sampling of PFS, does not exist. When studying measured Planetary Fourier Spectrometer (PFS) spectra of the Earth's or Mars's atmosphere, we discovered that the most commonly used solar spectrum contains several important errors. Here we present a "calibrated" solar radiance over the wavelength range starting at 1.2 μm, at the spectral resolution of PFS, which we are going to use for studying Martian spectra. This spectrum has been assembled using measurements from Kitt Peak and from the ATMOS Spacelab experiment (uncalibrated high resolution) and theoretical results, together with a low-resolution calibrated continuum. This is the best we can have at the moment to be used with PFS, while waiting for good calibrated solar radiances. Examples of solar lines at Mars are given.

11.
We present the results of a systematic search for outbursts in the narrow positron annihilation line on various time scales (5 × 10⁴–10⁶ s) based on the SPI/INTEGRAL data obtained from 2003 to 2008. We show that no outbursts were detected with a statistical significance higher than ∼6σ for any of the time scales considered over the entire period of observations. We also show that, given the large number of independent trials, all of the observed spikes could be associated with purely statistical flux fluctuations and, in part, with a small systematic error in the prediction of the telescope's instrumental background. Based on the exposure achieved in ∼6 yr of INTEGRAL operation, we provide conservative upper limits on the rate of outbursts with a given duration and flux in different parts of the sky.
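The trials argument in this abstract can be made concrete: for N independent Gaussian trials, the chance of at least one fluctuation above kσ is 1 − (1 − p₁)^N, where p₁ is the one-sided tail probability. A short sketch (the number of trials is an arbitrary assumption, not taken from the paper):

```python
from math import erf, sqrt

def p_single(k):
    """One-sided tail probability of a >= k-sigma Gaussian fluctuation."""
    return 0.5 * (1.0 - erf(k / sqrt(2.0)))

def p_global(k, n_trials):
    """Probability of at least one >= k-sigma spike among n independent trials."""
    return 1.0 - (1.0 - p_single(k)) ** n_trials

# With ~1e5 independent sky/time bins (assumed), 4-sigma spikes are almost
# guaranteed by chance, while 6-sigma ones would be genuinely rare:
for k in (4, 5, 6):
    print(f"{k} sigma: global chance {p_global(k, 100_000):.3g}")
```

This is why a spike must clear a much higher local significance than the naive single-trial threshold before it counts as a detection.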

12.
The mineralogy of Mars is well understood on a qualitative level at a global scale due to satellite data. Quantitative analysis of visible and near-infrared (VNIR) satellite data is a desirable but nontrivial task, due partly to the nonlinearity of VNIR reflectance spectra from the mineral mixtures of the Martian surface. In this study, we investigated the use of the Hapke radiative transfer model to generate linearly mixed single scattering albedo data from nonlinearly mixed VNIR reflectance data and then quantitatively analyzed them using the linear spectral mixture model. Simplifications to the Hapke equation were tested accounting for variables that would be unknown when using satellite data. Mineral mixture spectra from the RELAB spectral library were degraded to test the robustness of the unmixing technique in the face of data that mimic some of the complexities of satellite spectral data collected at Mars. A final test was performed on spectra from shergottite meteorites to assess the technique against real Martian mineral mixtures. The simplified Hapke routine produced robust abundance estimates within 5–10% accuracy when applied to laboratory standard spectra from the synthetic mixtures of igneous minerals in agreement with previous studies. The results of tests involving degraded data to mimic the low spectral contrast of the Martian surface and the lack of a priori knowledge of the constituent mineral spectral endmembers, however, were less encouraging, with errors in abundance estimation greater than 25%. These results cast doubt on the utility of Hapke unmixing for the quantitative analysis of VNIR data of the surface of Mars.
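The workflow described above — convert reflectance to single scattering albedo (SSA) with a simplified Hapke model, mix linearly in SSA space, and unmix with constrained least squares — can be sketched as follows. The Hapke form used here (isotropic scattering, two-stream H-function approximation, no opposition effect, arbitrary fixed geometry) and the endmember values are illustrative assumptions, not the paper's exact routine:

```python
import numpy as np
from scipy.optimize import brentq, nnls

MU0, MU = np.cos(np.radians(30.0)), 1.0   # assumed incidence/emission geometry

def refl(w):
    """Simplified Hapke bidirectional reflectance for single scattering
    albedo w: isotropic scattering, two-stream H-function, no opposition
    effect (constant geometric factors absorbed)."""
    gamma = np.sqrt(1.0 - w)
    H = lambda x: (1.0 + 2.0 * x) / (1.0 + 2.0 * x * gamma)
    return (w / 4.0) / (MU0 + MU) * H(MU0) * H(MU)

def to_ssa(r):
    """Numerically invert refl() for the single scattering albedo."""
    return brentq(lambda w: refl(w) - r, 1e-9, 1.0 - 1e-9)

# Hypothetical endmember albedos at three bands, mixed 60/40 in SSA space.
A = np.array([[0.9, 0.3],
              [0.7, 0.5],
              [0.4, 0.8]])
w_mix = A @ np.array([0.6, 0.4])               # linear mixing of albedos
r_mix = refl(w_mix)                            # nonlinear reflectance "observed"
w_back = np.array([to_ssa(r) for r in r_mix])  # back to the linear domain
abund, _ = nnls(A, w_back)                     # non-negative abundance estimate
print(np.round(abund / abund.sum(), 3))
```

In this idealized round trip the 60/40 mixture is recovered; the paper's point is that with realistic spectral contrast and unknown endmembers the recovery degrades badly.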

13.
14.
We address the problem of encoding and compressing data dominated by noise. Information is decomposed into 'reference' sequences plus arrays containing noisy differences susceptible to being described by a known probability distribution. One can then give reliable estimates of the optimal compression rates by estimating the corresponding Shannon entropy. As a working example, this idea is applied to an idealized model of the cosmic microwave background (CMB) data on board the Planck satellite. Data reduction is a critical issue in space missions because the total information that can be downloaded to Earth is sometimes limited by telemetry allocation. Similar limitations might arise in remotely operated ground-based telescopes. This download-rate limitation could reduce the amount of diagnostics sent on the stability of the instruments and, as a consequence, curb the final sensitivity of the scientific signal. Our proposal for Planck consists of taking differences of consecutive circles at a given sky pointing. To a good approximation, these differences could be made independent of the external signal, so that they are dominated by thermal (white) instrumental noise, which is simpler to model than the sky signal. Similar approaches can be found in other individual applications. Generic simulations and analytical predictions show that high compression rates can be obtained with minor or zero loss of sensitivity. Possible effects of digital distortion are also analysed. The proposed scheme is flexible and reliable enough to be optimized in relation to other critical aspects of the corresponding application. For Planck, this study constitutes an important step towards a more realistic modelling of the final sensitivity of the CMB temperature anisotropy maps.
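The entropy-based rate estimate can be illustrated directly: quantize white Gaussian differences and compare the empirical Shannon entropy per sample with the raw word length. The noise rms and word length below are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_adu = 2.0   # assumed noise rms, in units of the quantization step (ADU)
raw_bits = 16     # assumed raw telemetry word length per sample

# Consecutive-circle differences modelled as quantized white Gaussian noise.
symbols = np.round(rng.normal(0.0, sigma_adu, size=500_000)).astype(int)

_, counts = np.unique(symbols, return_counts=True)
p = counts / counts.sum()
entropy = float(-(p * np.log2(p)).sum())   # bits/sample actually required
# For sigma >> 1 ADU this approaches log2(sigma * sqrt(2*pi*e)).
print(f"entropy ~ {entropy:.2f} bits/sample, "
      f"compression rate ~ {raw_bits / entropy:.1f}")
```

An entropy coder (Huffman, arithmetic) operating on the differenced stream can approach this bound, which is how a lossless scheme reaches high compression rates on noise-dominated data.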

15.
16.
The recent completion and operation of the High Energy Stereoscopic System [1], an array of ground-based imaging Cherenkov telescopes, has provided a survey with unprecedented sensitivity of the inner part of the Galaxy and revealed a new population of very high energy gamma-ray sources emitting at E > 100 GeV. Most of them were reported to have no known radio or X-ray counterpart and hypothesised to be representative of a new class of dark nucleonic cosmic sources. In fact, very high energy gamma-rays with energies E > 10¹¹ eV are the best proof of non-thermal processes in the universe and provide a direct in situ view of matter-radiation interaction at energies far greater than producible in ground-based accelerators. At lower energy, INTEGRAL has regularly observed the entire galactic plane during its first 1000 days in orbit, providing a survey in the 20–100 keV range that resulted in a soft gamma-ray sky populated with more than 200 sources, most of them galactic binaries, either Black Hole Candidates (BHC) or Neutron Stars (NS) [5]. Very recently, the new INTEGRAL source IGR J18135-1751 has been identified as the soft gamma-ray counterpart of HESS J1813-178 [18] and AX J1838.0-0655 as the X/gamma-ray counterpart of HESS J1837-069 [14]. Detection of non-thermal radio, X- and gamma-ray emission from these TeV sources is very important to discriminate between various emitting scenarios and, in turn, to fully understand their nature. The implications of these new findings for the high energy Galactic population will be addressed. On behalf of the IBIS Survey Team.

17.
Comparison of the INTEGRAL upper limits on the hard X-ray flux before and after the low-energy GRB 031203 with the XMM measurements of the dust-scattered radiation at lower energies suggests that a significant fraction of the total burst energy could be released in the form of soft X-ray radiation at an early afterglow stage with a characteristic duration of ~100–1000 s. The overall time evolution of the afterglow from GRB 031203 may not have differed qualitatively from the behavior of standard (i.e., more intense) bursts studied by the SWIFT observatory. The available data also admit the possibility that the dust-scattered radiation was associated with an additional soft component in the spectrum of the gamma-ray burst itself.

18.
19.
A timewise kinematic method for satellite gradiometry: GOCE simulations
We have defined new algorithms for the data processing of a satellite geodesy mission with a gradiometer (such as the next European mission GOCE) to extract the information on the gravity field coefficients with a realistic estimate of their accuracy. The large-scale data processing can be managed by a multistage decomposition. First, the spacecraft position is determined; i.e., a kinematic method is normally used. Second, we use a new method to perform the necessary digital calibration of the gradiometer. Third, we use a multiarc approach to separately solve for the global gravity field parameters. Fourth, we use an approximate resonant decomposition, that is, we partition the harmonic coefficients of the gravity field in a new way. Thus the normal system is reduced to blocks of manageable size without neglecting significant correlations. Still, the normal system is badly conditioned because of the polar gaps in the spatial distribution of the data. We have shown that the principal components of the uncertainty correspond to harmonic anomalies with very small signal in the region where GOCE is flying; these uncertainties cannot be removed by any data processing method. This allows a complete simulation of the GOCE mission with affordable computer resources. We show that it is possible to solve for the harmonic coefficients up to degree 200–220 with a signal-to-error ratio ≥1, taking into account systematic measurement errors. Errors in the spacecraft orbit, as expected from state-of-the-art satellite navigation, do not degrade the solution. Gradiometer calibration is the main problem. By including a systematic error model, we have shown that the results are sensitive to spurious gradiometer signals at frequencies close to the lower limit of the measurement band. If these spurious effects grow as the inverse of the frequency, then the actual error is larger than the formal error only by a factor of ≃2; that is, the results are not compromised.

20.
A mission to Mars including two Small Stations, two Penetrators and an Orbiter was launched at Baikonur, Kazakhstan, on 16 November 1996. This was called the Mars-96 mission. The Small Stations were expected to land in September 1997 (Ls approximately 178 degrees), nominally in the Amazonis-Arcadia region at locations (33° N, 169.4° W) and (37.6° N, 161.9° W). The fourth stage of the Mars-96 launcher malfunctioned and hence the mission was lost. However, the state-of-the-art concept of the Small Station can be applied to future Martian lander missions. Also, from the manufacturing and performance point of view, the Mars-96 Small Station could be built as such at low cost, and be fairly easily accommodated on almost any forthcoming Martian mission. This is primarily due to the very simple interface between the Small Station and the spacecraft. The Small Station is a sophisticated piece of equipment. With a total available power of approximately 400 mW, the Station successfully supports an ambitious scientific program. The Station accommodates a panoramic camera, an alpha-proton-X-ray spectrometer, a seismometer, a magnetometer, an oxidant instrument, equipment for meteorological observations, and sensors for atmospheric measurements during the descent phase, including images taken by a descent-phase camera. The total mass of the Small Station with payload on the Martian surface, including the airbags, is only 32 kg. Lander observations on the surface of Mars combined with data from Orbiter instruments will shed light on contemporary Mars and its evolution. As in the Mars-96 mission, specific science goals could be exploration of the interior and surface of Mars, investigation of the structure and dynamics of the atmosphere, the role of water and other materials containing volatiles, and in situ studies of atmospheric boundary layer processes. To achieve the scientific goals of the mission the lander should carry a versatile set of instruments. The Small Station accommodates devices for atmospheric measurements, geophysical and geochemical studies of the Martian surface and interior, and cameras for descent-phase and panoramic views. These instruments would be able to contribute remarkably to the process of solving some of the scientific puzzles of Mars.
