Similar articles
Found 20 similar articles (search time: 31 ms)
1.
Aiming at the precise design of the direct transfer trajectory of a Mars probe, this paper proposes a fast differential-correction algorithm. It builds a mathematical model of the difference between the control and target parameters, from which the partial-derivative matrix of the system is solved; this effectively reduces the number of integrations required in the solution process. The algorithm is verified using the 2018 Mars launch opportunity as an example. Simulation results show that, starting from initial values produced by the patched-conic method, only 6-9 orbit-integration iterations are needed to obtain a standard trajectory. The computed results are compared and validated with STK (Satellite Tool Kit).
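The differential-correction loop this abstract describes can be sketched generically: a Newton iteration on the difference between propagated and target parameters, with the partial-derivative matrix built by finite differences. This is a minimal illustration under stated assumptions, not the paper's algorithm; the toy function below stands in for the orbit propagator, and all names are invented for the example.

```python
import numpy as np

def differential_correction(propagate, x0, target, tol=1e-10, max_iter=20):
    """Newton-style differential correction: adjust the control
    parameters x until the propagated target parameters match `target`.
    The partial-derivative matrix is built by finite differences,
    costing one extra propagation per control parameter."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        f0 = propagate(x)
        err = f0 - target
        if np.linalg.norm(err) < tol:
            return x, k
        h = 1e-7
        J = np.empty((err.size, x.size))
        for j in range(x.size):
            xp = x.copy()
            xp[j] += h
            J[:, j] = (propagate(xp) - f0) / h   # finite-difference column
        x = x - np.linalg.solve(J, err)          # Newton update
    return x, max_iter

# Toy stand-in for the orbit propagator (not the paper's dynamics):
f = lambda x: np.array([x[0] ** 2 + x[1], x[0] - x[1] ** 2])
sol, iters = differential_correction(f, [1.5, 0.5], np.array([2.0, 0.0]))
```

Each iteration costs one nominal propagation plus one perturbed propagation per control parameter, which is what makes reducing the iteration count (the paper's 6-9) valuable.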

2.
In this article, we discuss four fundamental scientific problems of lunar research: (1) lunar chronology, (2) the internal structure of the Moon, (3) the lunar polar regions, and (4) lunar volcanism. After formulating the scientific problems and their components, we proceed to outline a list of technical solutions and priority lunar regions for research. Solving the listed problems requires investigations on the lunar surface using lunar rovers, which can deliver a set of analytical equipment to places where geological conditions are known from a detailed analysis of orbital information. The most critical research methods, which can answer some of the key questions, are analysis of local geological conditions from panoramic photographs, determination of the chemical, isotopic, and mineral composition of the soil, and deep seismic sounding. A preliminary list is given of lunar regions with high scientific priority.

3.
Lucky imaging is a high-resolution image-restoration technique that selects a small number of "lucky" frames from a large set of short-exposure images and then registers and stacks them, effectively reducing the impact of atmospheric turbulence on image quality. However, traditional lucky-imaging algorithms based on the Central Processing Unit (CPU) are difficult to run in real time. Using a Field Programmable Gate Array (FPGA), ...

4.
The broadband spectral energy distribution (SED) of blazars is generally interpreted as radiation arising from synchrotron and inverse Compton mechanisms. Traditionally, the underlying source parameters responsible for these emission processes, like particle energy density, magnetic field, etc., are obtained through simple visual reproduction of the observed fluxes. However, this procedure is incapable of providing confidence ranges for the estimated parameters. In this work, we propose an efficient algorithm to perform a statistical fit of the observed broadband spectrum of blazars using different emission models. Moreover, we use the observable quantities as the fit parameters, rather than the direct source parameters which govern the resultant SED. This significantly improves the convergence time and eliminates the uncertainty regarding initial guess parameters. This approach also has an added advantage of identifying the degenerate parameters, which can be removed by including more observable information and/or additional constraints. A computer code developed based on this algorithm is implemented as a user-defined routine in the standard X-ray spectral fitting package, XSPEC. Further, we demonstrate the efficacy of the algorithm by fitting the well sampled SED of blazar 3C 279 during its gamma-ray flare in 2014.
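The idea of fitting with observable quantities (e.g. peak frequency and peak flux) as free parameters, rather than the underlying source parameters, can be illustrated with a simple log-parabola fit. This sketch uses SciPy's curve_fit rather than XSPEC, and the model, data, and numbers are invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

# Log-parabola in log-log space: a generic stand-in for one SED
# component. The fit parameters are directly observable quantities:
# peak frequency, peak flux, and curvature.
def log_parabola(log_nu, log_nu_pk, log_f_pk, beta):
    return log_f_pk - beta * (log_nu - log_nu_pk) ** 2

rng = np.random.default_rng(2)
log_nu = np.linspace(10, 18, 40)
truth = (14.0, -10.5, 0.08)                       # invented "true" values
log_f = log_parabola(log_nu, *truth) + rng.normal(0, 0.02, log_nu.size)

popt, pcov = curve_fit(log_parabola, log_nu, log_f, p0=(13.0, -11.0, 0.1))
perr = np.sqrt(np.diag(pcov))                     # 1-sigma confidence ranges
```

Because the parameters are observables, a rough initial guess can be read straight off the data, which is the convergence advantage the abstract highlights; the covariance matrix `pcov` supplies the confidence ranges that visual fitting cannot.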

5.
In a previous paper it was shown how cosmological models can be characterized by few initial data on the observer's past light cone. Part of the inner geometry of the past cone as well as some matter variables taken on the cone (in the simplest case the velocity and density of a cosmological fluid) determine a world model uniquely within the past light cone. Since the initial data to some degree enter the relations between observable quantities such as redshift, luminosity, angular diameter, number counts, distortion parameters and background radiation intensity, it might be possible to determine the required initial data from observations. This problem is discussed in the present article for some observable relations involving the quantities just mentioned.

6.
In deep space exploration, many engineering and scientific requirements demand that the measured Doppler frequency be as accurate as possible. In our paper, we analyze the possible frequency measurement points of the third-order phase-locked loop (PLL) and find a new Doppler measurement strategy. Based on this finding, a Doppler frequency measurement algorithm with significantly higher measurement accuracy is obtained. In processing actual data, compared with the existing engineering software, the frequency accuracy at 1-second integration is about 5.5 times higher when using the new algorithm. The improved algorithm is simple and easy to implement, and it can easily be combined with other PLL improvements to further enhance PLL performance.

7.
A time scale is obtained by combining many precision clocks. Current time-scale computation mainly uses ALGOS-like algorithms, whose drawbacks are that the weights do not accurately reflect the noise parameters of the precision clocks, that the five noise components cannot all be optimally combined at the same time, and that a real-time time scale cannot be formed. Starting from the principle of an optimized clock-ensemble algorithm, this paper discusses how to estimate the noise parameters of precision clocks and how to decompose the noise components in their noise, proposes a more optimized time-scale algorithm (a precision clock ensemble algorithm), and suggests improvements to the existing algorithms.

8.
The increasing number of space debris has created an orbital debris environment that poses increasing impact risks to existing space systems and human space flights. For the safety of in-orbit spacecraft, we should optimally schedule surveillance tasks for the existing facilities to allocate resources in a manner that most significantly improves the ability to predict and detect events involving affected spacecraft. This paper analyzes two criteria that mainly affect the performance of a scheduling scheme and introduces an artificial intelligence algorithm into the scheduling of tasks of the space debris surveillance network. A new scheduling algorithm based on the particle swarm optimization algorithm is proposed, which can be implemented in two different ways: individual optimization and joint optimization. Numerical experiments with multiple facilities and objects are conducted based on the proposed algorithm, and simulation results have demonstrated the effectiveness of the proposed algorithm.
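A minimal particle swarm optimizer of the kind this abstract builds on can be sketched as follows. The cost function here is a generic stand-in for the surveillance-scheduling objective, and the particle encoding (e.g. task priorities per sensor) and parameter values are assumptions for illustration, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization. For surveillance-task
    scheduling, each particle would encode e.g. task priorities per
    facility; here `cost` is a generic stand-in objective."""
    x = rng.uniform(-5, 5, (n_particles, dim))    # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()          # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([cost(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Toy quadratic objective standing in for a scheduling cost:
best, val = pso(lambda p: np.sum((p - 1.0) ** 2), dim=4)
```

"Individual optimization" would run one such swarm per facility, while "joint optimization" would encode all facilities' tasks in a single particle and run one swarm over the combined cost.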

9.
The colour–magnitude diagrams of resolved single stellar populations, such as open and globular clusters, have provided the best natural laboratories to test stellar evolution theory. Whilst a variety of techniques have been used to infer the basic properties of these simple populations, systematic uncertainties arise from the purely geometrical degeneracy produced by the similar shape of isochrones of different ages and metallicities. Here we present an objective and robust statistical technique which lifts this degeneracy to a great extent through the use of a key observable: the number of stars along the isochrone. Through extensive Monte Carlo simulations we show that, for instance, we can infer the four main parameters (age, metallicity, distance and reddening) in an objective way, along with robust confidence intervals and their full covariance matrix. We show that systematic uncertainties due to field contamination, unresolved binaries, initial or present-day stellar mass function are either negligible or well under control. This technique provides, for the first time, a proper way to infer with unprecedented accuracy the fundamental properties of simple stellar populations, in an easy-to-implement algorithm.

10.
Using the definition of four important events in the evolution of Massive Close Binary systems, we define five observable evolutionary phases in the life of a Massive Close Binary system: OB+OB, WR+OB, C+OB, C+WR, and WR+WR. We define and compute a number of observable phenomena for large groups of Massive Close Binaries. For one burst of star formation, we compute the number of systems in different evolutionary phases, and the total mass loss as functions of the time. For continuous star formation, we determine the fraction of WR binary stars, occurring in different phases of massive close binary evolution, and a number of average quantities (mass, mass ratio) of WR binary systems. The results are compared with observations of WR binaries in the Galaxy and in Open Clusters. This research is supported by the National Foundation of Collective Fundamental Research (FKFO) of Belgium under contract Nr. 2.9009.79.

11.
Planetary and Space Science, 2007, 55(14): 2097–2112
We briefly describe the history of landings on Venus, the acquired geochemical data and their potential petrologic interpretations. We suggest a new approach to Venus landing site selection that would avoid the potential contamination by ejecta from upwind impact craters. We also describe candidate units to be sampled in both in situ measurement and sample return missions. For the in situ measurements, the “true” tessera terrain (tt) material is considered as the highest priority goal with the second priority given to transitional tessera terrain (ttt), shield plains (psh) and lobate plains (pl) materials. For the sample return mission, the material of regional plains with wrinkle ridges (pwr) is considered as the highest priority goal with the second priority given to tessera terrain (tt) material. Combining the desire to study materials of specific geologic units with the problem of avoiding potential contamination by ejecta from upwind impact craters, we have suggested several candidate landing sites for each of the geologic units. Although spacecraft ballistics and other constraints of specific mission profiles (VEP or others) may lead to the selection of different candidate sites, we believe that the approaches outlined in this paper can be helpful in optimizing mission science return.

12.
A two-point boundary value problem of the Kepler orbit similar to Lambert’s problem is proposed. The problem is to find a Kepler orbit that will travel through the initial and final points in a specified flight time given the radial distances of the two points and the flight-direction angle at the initial point. The Kepler orbits that meet the geometric constraints are parameterized via the universal variable z introduced by Bate. The formula for flight time of the orbits is derived. The admissible interval of the universal variable and the variation pattern of the flight time are explored intensively. A numerical iteration algorithm based on the analytical results is presented to solve the problem. A large number of randomly generated examples are used to test the reliability and efficiency of the algorithm.

13.
Telescope scheduling is a key component of telescope operations: it helps researchers arrange reasonable observing plans, improves the telescope's operating efficiency, and yields high-quality observational data. However, because different observing projects have different scientific requirements, the scheduling process is quite complex. For short-period, multi-target observing projects, we build a model that accounts for factors such as the slew time when switching sources and the observing elevation angle, and use a greedy algorithm to schedule the pulsar time-of-arrival observing list of the Nanshan 26 m telescope of the Xinjiang Astronomical Observatory, Chinese Academy of Sciences. Simulations show that observing lists produced by the algorithm can effectively reduce the average slew time during observations, improve the quality of the observational data, raise the telescope's time utilization, and lighten the burden on researchers of compiling observing lists.
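The greedy source-ordering idea can be illustrated with a toy model: at each step, observe the visible target with the shortest azimuth slew from the current pointing. The coordinates, the elevation cutoff, and the slew model below are invented for illustration and are far simpler than the paper's model.

```python
def greedy_schedule(current_az, targets, min_el=15.0):
    """Greedy observing-list ordering: repeatedly pick the visible
    target whose azimuth slew from the current pointing is shortest.
    `targets` maps name -> (azimuth_deg, elevation_deg); a fixed
    elevation is a simplified stand-in for the real visibility model."""
    order, total_slew = [], 0.0
    remaining = {n: p for n, p in targets.items() if p[1] >= min_el}
    while remaining:
        def slew(name):
            d = abs(remaining[name][0] - current_az) % 360
            return min(d, 360 - d)          # shortest wrap-around slew
        nxt = min(remaining, key=slew)
        total_slew += slew(nxt)
        current_az = remaining[nxt][0]
        order.append(nxt)
        del remaining[nxt]
    return order, total_slew

# Hypothetical pulsar positions (names and coordinates invented):
order, slew_deg = greedy_schedule(0.0, {
    "J0437": (10.0, 40.0), "J1713": (200.0, 60.0),
    "J1909": (350.0, 30.0), "LOW":   (90.0, 5.0),   # below elevation limit
})
```

A greedy pass like this is O(n^2) in the number of sources and does not guarantee the globally minimal total slew, but, as the abstract reports, it is effective at cutting the average slew time compared to a hand-ordered list.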

14.
To calculate the dynamics of celestial bodies, we suggest the nonclassic interval algorithm based on taking into account explicitly the limitations of the resolving capacity of instrumental observation facilities. This algorithm is consistent with the correspondence principle (has a classic limit) and is a system of integer mappings of a recurrent type which is free of the effect of the round-off error accumulation. Another feature is its relative simplicity, which allowed us to make the calculations less time-consuming as compared to the classic approach. This algorithm was used to calculate the evolution of the Solar System's planetary orbits over times of about 500 million years. The calculations support the results obtained earlier by classic methods and make it possible to conclude that the suggested interval approach can be adequately applied to the planetary problem under consideration.

15.
The rotation frequencies of millisecond pulsars are extremely stable, providing an independent time standard based on distant natural celestial bodies that can run for millions to billions of years, with high stability, long operating lifetime, and wide service coverage. To weaken the influence of various Gaussian noise sources on pulsar time in millisecond-pulsar timing observations, we study an ensemble pulsar time-scale algorithm based on bispectrum filtering. We process and analyze the latest data released by the International Pulsar Timing Array (IPTA) for four millisecond pulsars (PSR J0437-4715, J0613-0200, J1713+0747, and J1909-3744), analyze the stability of the ensemble pulsar time on different time scales, and compare it with the stability of the atomic clocks of four timing institutions contributing to International Atomic Time (TAI). The results show that the bispectrum filtering algorithm suppresses observational noise well and improves the stability of the ensemble pulsar time. Compared with the classical weighted algorithm, the 1 yr and 10 yr stabilities of the ensemble pulsar time improve from 7.77×10-14 and 8.56×10-16 to 1.50×10-14 and 3.50×10-16, respectively; the stability of single-pulsar times improves similarly. We also find that the stability of the ensemble pulsar time is better than that of the atomic clocks on time scales of 5 yr and longer, so it can be used to improve the long-term stability of the current atomic time.
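The classical weighted algorithm that the paper compares against can be sketched as an inverse-variance-weighted combination of timing residuals from several pulsars. This toy version with synthetic data stands in for the bispectrum-filtered combination; the noise levels and series lengths are invented for the example.

```python
import numpy as np

def ensemble_pulsar_time(residuals):
    """Classical weighted ensemble: combine timing residuals from
    several pulsars into one series, weighting each pulsar by the
    inverse variance of its residuals (a simple stand-in for the
    paper's bispectrum-filtered combination)."""
    R = np.asarray(residuals)            # shape (n_pulsars, n_epochs)
    w = 1.0 / R.var(axis=1)              # inverse-variance weights
    w /= w.sum()
    return w @ R                         # weighted mean at each epoch

rng = np.random.default_rng(1)
common = rng.normal(0, 1.0, 500)         # shared "clock" signal
# Three synthetic pulsars with different white-noise levels:
res = [common + rng.normal(0, s, 500) for s in (0.5, 1.0, 2.0)]
pt = ensemble_pulsar_time(res)
```

The ensemble tracks the common signal better than the noisiest single pulsar because the weighting suppresses uncorrelated noise; the paper's bispectrum filtering goes further by attacking the Gaussian noise in each pulsar's series before combination.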

16.
By considering prominent events that are observable from both Earth and nearby stellar systems it is possible to establish common clocks that may be useful in estimating arrival times for signals of intelligent extraterrestrial origin. The geometry and statistics of a timing strategy are developed together with quantitative estimates of its effectiveness and limits on its application. Effectiveness is measured by comparing the timing strategy with one randomized in time. Limitations arise from inaccuracies inherent in the determination of stellar parallaxes and result in standard deviations of the order of weeks to months for time estimates. The problem can be alleviated by choosing clocks close to Sender in angular distance. Signal opportunities for several nearby Sun-like stars are calculated using the bright Nova Cygni 1975 as a clock.

17.
3D numerical simulations have been very useful for the understanding of mantle convection of the Earth. In almost all previous simulations of mantle convection, the (extended) Boussinesq approximation has been used. This method is implicit in the sense that buoyancy force and viscosity are balanced, and allows the use of long timesteps that are not limited by the CFL condition. However, the resulting matrix is ill-conditioned, in particular since the viscosity strongly depends on the temperature. It is not well-suited to modern large-scale parallel machines. In this paper, we propose an explicit method which can be used to solve the mantle convection problem. If we can reduce the sound speed without changing the characteristics of the flow, we can increase the timestep and thus can use the explicit method. In order to reduce the sound speed, we multiplied the inertia term of the equation of motion by a large and viscosity-dependent coefficient. Theoretically, we can expect that this modification would not change the flow as long as the Reynolds number and the Mach number are sufficiently smaller than unity. We call this method the variable inertia method (VIM). We have performed an extensive set of numerical tests of the proposed method for thermal convection, and concluded that it works well. In particular, it can handle differences in viscosity of more than five orders of magnitude.

18.
This paper presents the approach of using complex multiplier-accumulators (CMACs) with multiple accumulators to reduce the total number of memory operations in an input-buffered architecture for the X part of an FX correlator. A processing unit of this architecture uses an array of CMACs that are reused for different groups of baselines. The disadvantage of processing correlations in this way is that each input data sample has to be read multiple times from the memory because each input signal is used in many of these baseline groups. While a one-accumulator CMAC cannot switch to a different baseline until it is finished integrating the current one, a multiple-accumulator CMAC can. Thus, the array of multiple-accumulator CMACs can switch between processing different baselines that share some input signals at any moment to reuse the current data in the processing buffers. In this way significant reductions in the number of memory read operations are achieved with only a few accumulators per CMAC. For example, for a large number of input signals three-accumulator CMACs reduce the total number of memory operations by more than a third. Simulated energy measurements of four VLSI designs in a high-performance 28 nm CMOS technology are presented in this paper to demonstrate that using multiple accumulators can also lead to reduced power dissipation of the processing array. Using three accumulators as opposed to one has been found to reduce the overall energy of 8-bit CMACs by 1.4% through the reduction of the switching activity within their circuits, which is in addition to a more than 30% reduction in the memory.

19.
Schrijver, Carolus J.; Title, Alan M. Solar Physics, 2002, 207(2): 223–240
We study the statistical properties of the connectivity of the corona over the quiet Sun by analyzing the potential magnetic field above the central area of source planes sprinkled randomly with some 300 magnetic monopoles each. We find that the field is generally more complex than one might infer from a study of the field within the source plane alone, or from a study of the 3D field around a small number of sources. Whereas a given source most commonly connects to only its nearest neighbors, it may connect to up to several dozen sources; only a weak trend relates the source strength and the number of connections. The connections between pairs of sources define volumes, or domains, of connectivity. Domains that have a finite cross section with the source plane are enclosed by surfaces that contain a pair of null points. In contrast, most of the bounding surfaces of domains that lie above the source plane appear not to contain null points. We argue that the above findings imply (i) that we should expect at best a weak correlation between coronal brightness and the flux in an underlying flux concentration, and (ii) that the low-lying chromospheric field lines (such as are observable in Hα) provide information on source connections that are largely complementary to those traced by the higher-reaching coronal field lines (observable in the extreme ultraviolet). We compare sample TRACE and SOHO/MDI observations of the quiet corona and photosphere with our finding that the number density of null points within the source plane closely matches that of the sources; because we find essentially no foci of coronal brightening away from significant photospheric magnetic flux concentrations, we conclude that coronal heating at such null points does not contribute significantly to the overall heating.
We argue that the divergence of field lines towards multiple sources restricts the propagation of braids and twists, so that any coronal heating that is associated with the dissipation of braids induced by footpoint shuffling in mixed-polarity network is likely (a) to occur predominantly low in the corona, and (b) to be relatively more efficient in quiet Sun than in active regions for a given field strength and loop length.

20.
Contemporary surveys provide a huge number of detections of small solar system bodies, mostly asteroids. Typically, the reported astrometry is not enough to compute an orbit and/or perform an identification with an already discovered object. The classical methods for preliminary orbit determination fail in such cases: a new approach is necessary. When the observations are not enough to compute an orbit we represent the data with an attributable (two angles and their time derivatives). The undetermined variables range and range rate span an admissible region of solar system orbits, which can be sampled by a set of Virtual Asteroids (VAs) selected by an optimal triangulation. The attributable results from a fit and has an uncertainty represented by a covariance matrix, thus the predictions of future observations can be described by a quasi-product structure (admissible region times confidence ellipsoid), which can be approximated by a triangulation with each node surrounded by a confidence ellipsoid. The problem of identifying two independent short arcs of observations has been solved. For each VA in the admissible region of the first arc we consider prediction at the time of the second arc and the corresponding covariance matrix, and we compare them with the attributable of the second arc with its own covariance. By using the penalty (increase in the sum of squares, as in the algorithms for identification) we select the VAs which can fit together both arcs and compute a preliminary orbit. Even two attributables may not be enough to compute an orbit with a convergent differential corrections algorithm. The preliminary orbits are used as first guess for constrained differential corrections, providing solutions along the Line Of Variations (LOV) which can be used as second generation VAs to further predict the observations at the time of a third arc. 
In general the identification with a third arc will ensure a least squares orbit, with uncertainty described by the covariance matrix.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号