1.
VMF3/GPT3: refined discrete and empirical troposphere mapping functions
Incorrect modeling of troposphere delays is one of the major error sources for space geodetic techniques such as Global Navigation Satellite Systems (GNSS) or Very Long Baseline Interferometry (VLBI). Over the years, many approaches have been devised which aim at mapping the delay of radio waves from the zenith direction down to the observed elevation angle, so-called mapping functions. This paper presents a new approach intended to refine the currently most important discrete mapping function, the Vienna Mapping Functions 1 (VMF1), henceforth referred to as the Vienna Mapping Functions 3 (VMF3). It is designed to eliminate shortcomings in the empirical coefficients b and c and in the tuning for the specific elevation angle of \(3^{\circ }\). Ray-traced delays from the ray-tracer RADIATE serve as the basis for the calculation of the new mapping function coefficients. Comparisons of modeled slant delays demonstrate the ability of VMF3 to approximate the underlying ray-traced delays more accurately than VMF1 does, in particular at low elevation angles. In other words, when the highest precision is required, VMF3 is preferable to VMF1. Aside from revising the discrete form of mapping functions, we also present a new empirical model named Global Pressure and Temperature 3 (GPT3) on a \(5^{\circ }\times 5^{\circ }\) as well as a \(1^{\circ }\times 1^{\circ }\) global grid, which is generally based on the same data. Its main components are hydrostatic and wet empirical mapping function coefficients derived from special averaging techniques applied to the respective (discrete) VMF3 data. In addition, GPT3 contains a set of meteorological quantities adopted as they stand from its predecessor, Global Pressure and Temperature 2 wet. Thus, GPT3 represents a very comprehensive troposphere model which can be used for a range of geodetic as well as meteorological and climatological purposes and is fully consistent with VMF3.
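Mapping functions of the VMF1/VMF3 type use the standard three-coefficient continued-fraction form. The sketch below illustrates that form in Python; the coefficients a, b, c are rough illustrative values, not actual VMF3 coefficients (which vary with site and epoch).

```python
import math

def mapping_function(elev_rad, a, b, c):
    """Three-coefficient continued-fraction mapping function
    (the Marini/Herring form used by VMF1 and VMF3)."""
    sin_e = math.sin(elev_rad)
    num = 1.0 + a / (1.0 + b / (1.0 + c))
    den = sin_e + a / (sin_e + b / (sin_e + c))
    return num / den

# Illustrative hydrostatic-like coefficients (NOT actual VMF3 values)
a, b, c = 1.2e-3, 2.9e-3, 62.6e-3
mf_zenith = mapping_function(math.radians(90.0), a, b, c)  # ~1 at zenith
mf_low = mapping_function(math.radians(5.0), a, b, c)      # ~10 at 5 deg
```

By construction the function equals 1 at zenith, and it grows roughly like 1/sin(e) toward low elevation angles, which is where VMF3's refinements matter most.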

2.
The AUSTRAL observing program was started in 2011, performing geodetic and astrometric very long baseline interferometry (VLBI) sessions using the new Australian AuScope VLBI antennas at Hobart, Katherine, and Yarragadee, with contributions from the Warkworth (New Zealand) 12 m and Hartebeesthoek (South Africa) 15 m antennas, making a southern hemisphere array of telescopes with similar design and capability. Designed in the style of the next-generation VLBI system, these small and fast antennas allow for a new way of observing, comprising higher data rates and more observations than the standard observing sessions coordinated by the International VLBI Service for Geodesy and Astrometry (IVS). In this contribution, the continuous development of the AUSTRAL sessions is described, which has led to an improvement of the results in terms of baseline length repeatabilities by a factor of two since the start of the program. The focus is on the scheduling strategy and increased number of observations, aspects of automated operation, and data logistics, as well as results of the 151 AUSTRAL sessions performed so far. The large number of AUSTRAL sessions makes them an important contributor to VLBI end-products, such as the terrestrial and celestial reference frames and Earth orientation parameters. We compare AUSTRAL results with other IVS sessions and discuss their suitability for the determination of baselines, station coordinates, source coordinates, and Earth orientation parameters.
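Baseline length repeatability, the metric used above, is commonly computed as the weighted RMS of session-wise baseline lengths about a fitted linear trend. A minimal sketch with synthetic data (the numbers are illustrative, not AUSTRAL results):

```python
import numpy as np

def baseline_repeatability(t, lengths, sigmas):
    """Weighted RMS of baseline-length residuals about a fitted
    linear trend -- a common repeatability measure in VLBI."""
    w = 1.0 / np.asarray(sigmas) ** 2
    A = np.vstack([np.ones_like(t), t]).T
    W = np.diag(w)
    # Weighted least-squares line fit (offset + rate)
    coef = np.linalg.solve(A.T @ W @ A, A.T @ W @ lengths)
    r = lengths - A @ coef
    return float(np.sqrt(np.sum(w * r**2) / np.sum(w)))

t = np.linspace(0.0, 5.0, 50)                  # years of sessions
true = 1000.0 + 0.002 * t                      # metres, slow tectonic trend
rng = np.random.default_rng(0)
obs = true + rng.normal(0.0, 0.005, t.size)    # 5 mm session noise
blr = baseline_repeatability(t, obs, np.full(t.size, 0.005))
```

Detrending first is what makes the statistic a repeatability rather than a simple scatter: a real tectonic rate should not be counted as noise.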

3.
Continental hydrology loading observed by VLBI measurements
Variations in continental water storage lead to loading deformation of the crust with typical peak-to-peak variations at very long baseline interferometry (VLBI) sites of 3–15 mm in the vertical component and 1–2 mm in the horizontal component. The hydrology signal at VLBI sites has annual and semi-annual components and clear interannual variations. We have calculated the hydrology loading series using mass loading distributions derived from the global land data assimilation system (GLDAS) hydrology model and alternatively from a global grid of equal-area gravity recovery and climate experiment (GRACE) mascons. In the analysis of the two weekly VLBI 24-h R1 and R4 network sessions from 2003 to 2010 the baseline length repeatabilities are reduced in 79 % (80 %) of baselines when GLDAS (GRACE) loading corrections are applied. Site vertical coordinate repeatabilities are reduced in about 80 % of the sites when either GLDAS or GRACE loading is used. In the horizontal components, reduction occurs in 70–80 % of the sites. Estimates of the annual site vertical amplitudes were reduced for 16 out of 18 sites if either loading series was applied. We estimated loading admittance factors for each site and found that the average admittances were 1.01 \(\pm \) 0.05 for GRACE and 1.39 \(\pm \) 0.07 for GLDAS. The standard deviations of the GRACE admittances and GLDAS admittances were 0.31 and 0.68, respectively. For sites that have been observed in a set of sufficiently temporally dense daily sessions, the average correlation between VLBI vertical monthly averaged series and GLDAS or GRACE loading series was 0.47 and 0.43, respectively.
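The admittance factors quoted above can be estimated per site by a one-parameter least-squares fit of the modelled loading series to the observed displacement series. A minimal sketch with synthetic data (a true admittance of 1.2 is assumed for the demo; these are not the paper's series):

```python
import numpy as np

def admittance(observed, model):
    """Least-squares admittance: the scale factor of the modelled
    loading series that best fits the observed displacements."""
    observed = np.asarray(observed, float)
    model = np.asarray(model, float)
    return float(np.dot(observed, model) / np.dot(model, model))

t = np.arange(0, 365)
model = 5.0 * np.sin(2 * np.pi * t / 365.25)     # mm, annual loading model
rng = np.random.default_rng(1)
obs = 1.2 * model + rng.normal(0, 1.0, t.size)   # true admittance 1.2
a = admittance(obs, model)
```

An admittance near 1 (as found for GRACE above) means the loading model has about the right amplitude; values well above 1 (as for GLDAS) suggest the model underestimates the real signal.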

4.
Missing or incorrect consideration of azimuthal asymmetry of troposphere delays is a considerable error source in space geodetic techniques such as Global Navigation Satellite Systems (GNSS) or Very Long Baseline Interferometry (VLBI). So-called horizontal troposphere gradients are generally utilized for modeling such azimuthal variations and are particularly required for observations at low elevation angles. Apart from estimating the gradients within the data analysis, which has become common practice in space geodetic techniques, there is also the possibility to determine the gradients beforehand from data sources other than the actual observations. Using ray-tracing through Numerical Weather Models (NWMs), we determined discrete gradient values, referred to as GRAD, for VLBI observations, based on the standard gradient model by Chen and Herring (J Geophys Res 102(B9):20489–20502, 1997.  https://doi.org/10.1029/97JB01739) and also for new, higher-order gradient models. These gradients are produced on the same data basis as the Vienna Mapping Functions 3 (VMF3) (Landskron and Böhm in J Geod, 2017.  https://doi.org/10.1007/s00190-017-1066-2), so they can also be regarded as the VMF3 gradients, being fully consistent with each other. VLBI analyses with the Vienna VLBI and Satellite Software (VieVS) show that baseline length repeatabilities (BLRs) improve on average by 5% when the a priori gradients GRAD are used instead of estimating the gradients. The reason for this improvement is that gradient estimation yields poor results for VLBI sessions with a small number of observations, whereas the GRAD a priori gradients are unaffected by this. We also developed a new empirical gradient model applicable to any time and location on Earth, which is included in the Global Pressure and Temperature 3 (GPT3) model. Although it describes only the systematic component of azimuthal asymmetry and no short-term variations at all, even this empirical a priori gradient model slightly improves the BLRs with respect to the estimation of gradients. In general, this paper shows that a priori horizontal gradients are more important for VLBI analysis than previously assumed, as both the discrete model GRAD and the empirical model GPT3 are able to refine and improve the results.
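The Chen and Herring (1997) gradient model referenced above adds an azimuth-dependent term to the slant delay. A sketch of that form (C = 0.0032 is the value commonly used for the hydrostatic part; the gradient magnitudes below are illustrative):

```python
import math

def gradient_delay(elev_rad, az_rad, grad_north, grad_east, C=0.0032):
    """Azimuth-dependent slant delay contribution from horizontal
    gradients, Chen & Herring (1997) form:
    mf_g(e) * (G_N cos A + G_E sin A), mf_g = 1/(sin e tan e + C)."""
    mf_g = 1.0 / (math.sin(elev_rad) * math.tan(elev_rad) + C)
    return mf_g * (grad_north * math.cos(az_rad) + grad_east * math.sin(az_rad))

# 1 mm north gradient, observations at 5 deg elevation, due north/south
d_north = gradient_delay(math.radians(5.0), 0.0, 1.0, 0.0)      # mm
d_south = gradient_delay(math.radians(5.0), math.pi, 1.0, 0.0)  # mm
```

The antisymmetry between opposite azimuths (d_north = -d_south) is what makes low-elevation observations at many azimuths essential for separating gradients from the symmetric delay.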

5.
The consistent estimation of terrestrial reference frames (TRF), celestial reference frames (CRF) and Earth orientation parameters (EOP) is still an open subject and offers a large field of investigations. Until now, source positions resulting from Very Long Baseline Interferometry (VLBI) observations have not been routinely combined on the level of normal equations in the same way as is common practice for station coordinates and EOPs. The combination of source positions based on VLBI observations is now integrated in the IVS combination process. We present the studies carried out to evaluate the benefit of the combination compared to individual solutions. On the level of source time series, improved statistics regarding the weighted root mean square have been found for the combination in comparison with the individual contributions. In total, 67 stations and 907 sources (including 291 ICRF2 defining sources) are included in the consistently generated CRF and TRF covering 30 years of VLBI contributions. The rotation angles \(A_1\), \(A_2\) and \(A_3\) relative to ICRF2 are \(-12.7\), 51.7 and 1.8 \(\upmu\)as, the drifts \(D_\alpha\) and \(D_\delta\) are \(-67.2\) and 19.1 \(\upmu\)as/rad, and the bias \(B_\delta\) is 26.1 \(\upmu\)as. The comparison of the TRF solution with the IVS routinely combined quarterly TRF solution shows no significant impact on the TRF when the CRF is estimated consistently with the TRF. The root mean square value of the post-fit station coordinate residuals is 0.9 cm.
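Rotation angles between two source catalogs, such as \(A_1\), \(A_2\), \(A_3\) above, can be obtained from a small-angle least-squares fit to the position differences of common sources. A sketch on unit vectors with a synthetic catalog (not the ICRF2 data; angles in radians):

```python
import numpy as np

def fit_rotation(x_ref, x_new):
    """Fit small rotation angles omega (rad) between two catalogs of
    unit vectors, using the linearized model x_new - x_ref = omega x x_ref."""
    rows, rhs = [], []
    for u, v in zip(x_ref, x_new):
        ux, uy, uz = u
        # (omega x u) written as a matrix acting on omega = (w1, w2, w3)
        rows += [[0.0, uz, -uy], [-uz, 0.0, ux], [uy, -ux, 0.0]]
        rhs += list(v - u)
    omega, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return omega

rng = np.random.default_rng(2)
pts = rng.normal(size=(100, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)     # 100 unit vectors
omega_true = np.array([1e-6, -2e-6, 0.5e-6])          # rad (~0.1-0.4 arcsec)
rotated = pts + np.cross(omega_true, pts)             # small rotation applied
est = fit_rotation(pts, rotated)
```

Real catalog comparisons extend the design matrix with drift and bias terms (the \(D_\alpha\), \(D_\delta\), \(B_\delta\) above), but the least-squares machinery is the same.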

6.
Three-dimensional ray tracing through a numerical weather model has been applied to a global precise point positioning (PPP) campaign for modeling both the elevation angle and azimuth dependence of the tropospheric delay. Rather than applying the ray-traced slant delays directly, the delay has been parameterized in terms of slant factors, which are applied in a similar manner to traditional mapping functions but can account for the azimuthal asymmetry of the delay. Five strategies are considered: (1) Vienna Mapping Functions 1 (VMF1) and estimation of a residual zenith delay parameter; (2) VMF1, estimation of a residual zenith delay and estimation of two tropospheric gradient parameters; (3) three-dimensional ray-traced slant factors and estimation of a residual zenith delay; (4) ray-traced slant factors only, with no estimation of any tropospheric parameters; and (5) ray-traced slant factors together with estimation of a residual zenith delay and two tropospheric gradient parameters. The use of the ray-traced slant factors (solution 3) showed a 3.8% improvement in the repeatability of the up component when compared to the assumption of a symmetric atmosphere (solution 1), while the estimation of two tropospheric gradient parameters gave the best results, showing a 7.6% improvement over solution 1 in the up component. Solution 4 performed well in the horizontal domain, allowing for sub-centimeter repeatability, but the up component was degraded due to deficiencies in the modeling of the zenith delay, particularly for stations located at equatorial latitudes. The magnitude of the differences in the mean coordinates between solution 2 and solution 3, the strong correlation between the differences in the north component and the ray-traced gradients (correlation coefficient of 0.83), and the impact of observation geometry on the gradient solution indicate that the use of ray-traced slant factors could have implications for the realization of reference frames. The estimated tropospheric products from the PPP solutions were compared to those derived from ray tracing. For the zenith delay, a root mean square (RMS) of 5.4 mm was found, while for the gradient terms, correlation coefficients of 0.46 and 0.42 were found for the north–south and east–west components, respectively, suggesting that there are still important differences in the gradient parameters, which could be due either to errors in the NWM or to non-tropospheric error sources leaking into the PPP-estimated gradients.
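A ray-traced slant factor is simply the ratio of the ray-traced slant delay to the zenith delay; unlike a mapping function, it may differ with azimuth at a fixed elevation angle. A sketch with hypothetical numbers (the delays below are plausible magnitudes, not values from the campaign):

```python
def slant_factor(ray_traced_slant, zenith_delay):
    """Ray-traced slant factor, applied like a mapping function but
    carrying the azimuthal asymmetry of the ray-traced delay."""
    return ray_traced_slant / zenith_delay

# Hypothetical: 2.3 m zenith delay; ray-traced slant delays at 10 deg
# elevation that differ with azimuth by a few centimetres
zd = 2.3                                 # m
sf_north = slant_factor(13.225, zd)      # looking north
sf_south = slant_factor(13.180, zd)      # looking south
asym_mm = (sf_north - sf_south) * zd * 1000.0   # azimuthal asymmetry, mm
```

A symmetric mapping function would force sf_north == sf_south; the residual asymmetry (here 45 mm of slant delay) is exactly what the gradient parameters of solutions 2 and 5 try to absorb.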

7.
We can map zenith wet delays onto precipitable water with a conversion factor, but to calculate the exact conversion factor we must precisely calculate its key variable $T_\mathrm{m}$. Yao et al. (J Geod 86:1125–1135, 2012. doi:10.1007/s00190-012-0568-1) established the first-generation global $T_\mathrm{m}$ model (GTm-I) with ground-based radiosonde data, but owing to the lack of radiosonde data at sea, the model appears abnormal in some areas. Given that sea surface temperature varies less than that on land, and that the GPT model and the Bevis $T_\mathrm{m}$–$T_\mathrm{s}$ relationship are accurate enough to describe the surface temperature and $T_\mathrm{m}$, this paper capitalizes on the GPT model and the Bevis $T_\mathrm{m}$–$T_\mathrm{s}$ relationship to provide simulated $T_\mathrm{m}$ at sea, compensating for the lack of data. Combined with the $T_\mathrm{m}$ from radiosonde data, we recalculated the GTm model coefficients. The results show that this method not only improves the accuracy of the GTm model significantly at sea but also improves it on land, making the GTm model more stable and practically applicable.
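The conversion factor mentioned above maps zenith wet delay (ZWD) to precipitable water (PW) through $T_\mathrm{m}$. A sketch using the Bevis et al. relation $T_\mathrm{m} \approx 70.2 + 0.72\,T_\mathrm{s}$ and commonly used refractivity constants (constants taken from the general literature, not from this paper):

```python
def tm_bevis(ts_kelvin):
    """Water-vapour weighted mean temperature from surface temperature,
    Bevis et al. (1992) relation."""
    return 70.2 + 0.72 * ts_kelvin

def pw_conversion_factor(tm):
    """Dimensionless factor Pi mapping ZWD to PW:
    Pi = 1e6 / (rho_w * Rv * (k2' + k3/Tm))."""
    rho_w = 1000.0   # kg m^-3, density of liquid water
    rv = 461.5       # J kg^-1 K^-1, specific gas constant of water vapour
    k2p = 0.221      # K Pa^-1  (22.1 K hPa^-1)
    k3 = 3.739e3     # K^2 Pa^-1 (3.739e5 K^2 hPa^-1)
    return 1.0e6 / (rho_w * rv * (k2p + k3 / tm))

tm = tm_bevis(288.15)                      # 15 degC surface temperature
pw = pw_conversion_factor(tm) * 200.0      # mm of PW from 200 mm of ZWD
```

The factor comes out near 0.15-0.16 for typical $T_\mathrm{m}$, which is why a 200 mm wet delay corresponds to roughly 30 mm of precipitable water; errors in $T_\mathrm{m}$ propagate almost linearly into PW.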

8.
We examine the relationship between source position stability and astrophysical properties of the radio-loud quasars making up the International Celestial Reference Frame (ICRF2). Understanding this relationship is important for improving quasar selection and analysis strategies, and therefore reference frame stability. We construct flux density time series, known as light curves, for 95 of the most frequently observed ICRF2 quasars at both the 2.3 and 8.4 GHz geodetic very long baseline interferometry (VLBI) observing bands. Because the appearance of new quasar components corresponds to an increase in quasar flux density, these light curves alert us to potential changes in source structure before they appear in VLBI images. We test how source position stability depends on three astrophysical parameters: (1) flux density variability at X band; (2) time lag between flares in the S and X bands; (3) spectral index root-mean-square (rms), defined as the variability in the ratio between S and X band flux densities. We find that the time lag between S and X band light curves provides a good indicator of position stability: sources with time lags $<0.06$ years are significantly more stable ($>20$ % improvement in weighted rms) than sources with larger time lags. A similar improvement is obtained by observing sources with low ($<0.12$) spectral index variability. On the other hand, there is no strong dependence of source position stability on flux density variability in a single frequency band. These findings can be understood by interpreting the time lag between S and X band light curves as a measure of the size of the source structure. Monitoring of source flux density at multiple frequencies therefore appears to provide a useful probe of quasar structure on scales important to geodesy. The observed astrometric position of the brightest quasar component (the core) is known to depend on observing frequency. We show how multi-frequency flux density monitoring may allow the frequency dependence of the relative core positions along the jet to be elucidated. Knowledge of the position–frequency relation has important implications for current and future geodetic VLBI programs, as well as for the alignment between the radio and optical celestial reference frames.
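A time lag between two evenly sampled light curves, like the S/X lags used above, can be estimated as the shift that maximizes their cross-correlation. A sketch with a synthetic flare (Gaussian pulse; real light curves are unevenly sampled and need interpolation or discrete-correlation methods first):

```python
import numpy as np

def time_lag(t, flux_s, flux_x):
    """Lag (in units of t) maximizing the cross-correlation between
    two evenly sampled, normalized light curves. Positive lag means
    flux_s trails flux_x."""
    s = (flux_s - flux_s.mean()) / flux_s.std()
    x = (flux_x - flux_x.mean()) / flux_x.std()
    corr = np.correlate(s, x, mode="full")
    lags = (np.arange(-len(t) + 1, len(t))) * (t[1] - t[0])
    return lags[np.argmax(corr)]

t = np.arange(0.0, 10.0, 0.05)                     # years, 0.05-yr sampling
x_band = np.exp(-0.5 * ((t - 4.0) / 0.3) ** 2)     # flare peaking at t = 4.0
s_band = np.exp(-0.5 * ((t - 4.2) / 0.3) ** 2)     # same flare, 0.2 yr later
lag = time_lag(t, s_band, x_band)
```

With the 0.06-year threshold quoted in the abstract, a recovered lag of 0.2 years would flag this source as one with extended structure and poorer position stability.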

9.
Earth orientation parameters estimated from VLBI during the CONT11 campaign
In this paper we investigate the accuracy of the Earth orientation parameters (EOP) estimated from the continuous VLBI campaign CONT11. We first estimated EOP with daily resolution and compared these to EOP estimated from GNSS data. We find that the WRMS differences are about 31 \(\upmu\)as for polar motion and 7 \(\upmu\)s for length of day. This is about the precision we could expect, based on Monte Carlo simulations and the results of the previous CONT campaigns. We also estimated EOP with hourly resolution to study the sub-diurnal variations. The results confirm those of previous studies, showing that the current IERS model for high-frequency EOP variations does not explain all the sub-diurnal variations seen in the estimated time series. We then compared our results to various empirical high-frequency EOP models; however, none of these gave an unambiguous improvement. Several simulations testing the impact of various aspects of, e.g., the observing network were also made. For example, we made simulations assuming that all CONT11 stations were equipped with fast VLBI2010 antennas and found that the WRMS error decreased by about a factor of five compared to the current VLBI system. Furthermore, the simulations showed that a homogeneous global distribution of the stations is very important for achieving the highest precision for the EOP.
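The WRMS differences quoted above are computed from the epoch-wise differences of two EOP series, weighted by the combined formal errors. A sketch with synthetic series (the noise levels are illustrative, not the CONT11 values):

```python
import numpy as np

def wrms_diff(a, b, sig_a, sig_b):
    """Weighted RMS of the difference between two EOP series,
    weighted by the combined formal errors of both series."""
    d = a - b
    w = 1.0 / (sig_a**2 + sig_b**2)
    return float(np.sqrt(np.sum(w * d**2) / np.sum(w)))

rng = np.random.default_rng(5)
n = 200
truth = rng.normal(0.0, 100.0, n)            # common signal, uas
vlbi = truth + rng.normal(0.0, 30.0, n)      # VLBI-like noise
gnss = truth + rng.normal(0.0, 5.0, n)       # GNSS-like noise
wrms_val = wrms_diff(vlbi, gnss, np.full(n, 30.0), np.full(n, 5.0))
```

Because the common signal cancels in the difference, the statistic reflects the combined noise of both techniques, here close to sqrt(30^2 + 5^2) uas.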

10.
Precise transformation between the celestial reference frames (CRF) and terrestrial reference frames (TRF) is needed for many purposes in Earth and space sciences. According to the Global Geodetic Observing System (GGOS) recommendations, the accuracy of positions and stability of reference frames should reach 1 mm and 0.1 mm year\(^{-1}\), and thus the Earth Orientation Parameters (EOP) should be estimated with similar accuracy. Different realizations of TRFs, based on the combination of solutions from four different space geodetic techniques, and CRFs, based on a single technique only (VLBI, Very Long Baseline Interferometry), might cause a slow degradation of the consistency among EOP, CRFs, and TRFs (e.g., because of differences in geometry, orientation and scale) and a misalignment of the current conventional EOP series, IERS 08 C04. We empirically assess the consistency among the conventional reference frames and EOP by analyzing the record of VLBI sessions since 1990 with varied settings to reflect the impact of changing frames or other processing strategies on the EOP estimates. Our tests show that the EOP estimates are insensitive to CRF changes, but sensitive to TRF variations and unmodeled geophysical signals at the GGOS level. The differences between the conventional IERS 08 C04 and other EOP series computed with distinct TRF settings exhibit biases and even non-negligible trends in cases where no differential rotations should appear, e.g., a drift of about 20 \(\upmu\)as year\(^{-1}\) in \(y_{\mathrm{pol}}\) when the VLBI-only frame VTRF2008 is used. Likewise, different strategies for station position modeling produce scatter larger than 150 \(\upmu\)as in the terrestrial pole coordinates.
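Biases and trends between EOP series, such as the roughly 20 \(\upmu\)as/year drift mentioned above, can be checked with a simple least-squares line fit to the difference series. A synthetic sketch:

```python
import numpy as np

def drift(t_years, series):
    """Slope (per year) of an ordinary least-squares line through an
    EOP difference series -- the 'non-negligible trend' test."""
    A = np.vstack([np.ones_like(t_years), t_years]).T
    coef, *_ = np.linalg.lstsq(A, series, rcond=None)
    return float(coef[1])

t = np.arange(0.0, 25.0, 0.1)                         # years since 1990
rng = np.random.default_rng(4)
series = 20.0 * t + rng.normal(0.0, 100.0, t.size)    # uas: 20 uas/yr drift
d = drift(t, series)
```

With 25 years of data even a drift well below the per-epoch scatter is detectable, which is why such small systematic rotations between frames show up clearly.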

11.
Many kinematic GPS applications rely on high accuracy, which usually requires the ambiguities to be fixed. Normally, a reference station in the rover's vicinity is needed for successful ambiguity resolution. Alternatively, a network surrounding the rover, allowing one to derive area correction parameters, is needed. Unfortunately, both approaches are infeasible in certain situations. This paper is a contribution to precise kinematic positioning over long baselines. Atmospheric refraction becomes critical in the error budget, but progress has been made in using numerical weather models to derive tropospheric corrections, for instance. The spatial correlation of both ionospheric and tropospheric propagation delays is investigated in this paper, and special attention is paid to the systematic error behavior of tropospheric refraction. The principles developed are applied to an extended reliability test of the ambiguities. Finally, positioning experiments demonstrate that kinematic positioning with fixed ambiguities is possible for baselines between 150 and 300 km with an accuracy of approximately 2 cm in post-mission processing.
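On long baselines the first-order ionospheric delay is usually eliminated with the dual-frequency ionosphere-free combination before ambiguity resolution is attempted. A minimal sketch for GPS L1/L2 (this illustrates the standard combination, not the paper's specific processing):

```python
def ionosphere_free(p1, p2, f1=1575.42e6, f2=1227.60e6):
    """Dual-frequency ionosphere-free combination (GPS L1/L2):
    removes the first-order ionospheric delay, which scales as 1/f^2."""
    g = (f1 / f2) ** 2
    return (g * p1 - p2) / (g - 1.0)

rho, iono = 20_000_000.0, 5.0     # metres: geometric range, L1 iono delay
g = (1575.42e6 / 1227.60e6) ** 2
p1 = rho + iono                   # L1 observable
p2 = rho + g * iono               # L2 observable (iono scales with 1/f^2)
r = ionosphere_free(p1, p2)       # ionosphere removed
```

The price of the combination is amplified noise and the loss of integer nature of the combined ambiguity, which is one reason spatial correlation of the remaining errors (as studied above) matters for fixing.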

12.
The study areas, the Tikovil and Payappara sub-watersheds of the Meenachil river, cover 158.9 and 111.9 km², respectively. These watersheds are part of the Western Ghats, an ecologically sensitive region. The drainage network of the sub-watersheds was delineated from SOI topographical maps on the 1:50,000 scale using the ArcGIS software. The stream orders were calculated using the method proposed by Strahler (1964). The drainage network shows that the terrain exhibits a dendritic to sub-dendritic drainage pattern. Stream order ranges from the fifth to the sixth order. Drainage density varies between 1.69 and 2.62 km/km². The drainage textures of the basins are 2.3 and 6.98 km⁻¹, categorized as coarse to very fine. Stream frequency is low in the case of the Payappara sub-watershed (1.78 km⁻²). The Payappara sub-watershed has the highest constant of channel maintenance (0.59), indicating less structural disturbance and lower runoff. The form factor varies between 0.42 and 0.55, suggesting an elongated shape for the Payappara sub-watershed and a rather more circular shape for the Tikovil sub-watershed. The mean bifurcation ratio (3.5) indicates that both sub-watersheds are within the natural stream system. Hence, the study shows that GIS techniques are a competent tool for morphometric analysis.
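The indices reported above follow standard Horton/Strahler definitions. A sketch of three of them (the total stream length and basin length used below are hypothetical values chosen only to reproduce the reported Payappara numbers):

```python
def drainage_density(total_stream_length_km, area_km2):
    """Horton's drainage density Dd = total channel length / basin area."""
    return total_stream_length_km / area_km2

def form_factor(area_km2, basin_length_km):
    """Horton's form factor Ff = A / Lb^2; about 0.78 for a circular
    basin, lower for elongated basins."""
    return area_km2 / basin_length_km ** 2

def bifurcation_ratio(n_lower_order, n_higher_order):
    """Ratio of stream counts in successive orders, Rb = N_u / N_{u+1}."""
    return n_lower_order / n_higher_order

dd = drainage_density(189.1, 111.9)   # hypothetical 189.1 km of channels
ff = form_factor(111.9, 16.3)         # hypothetical 16.3 km basin length
rb = bifurcation_ratio(14, 4)         # hypothetical stream counts
```

With these assumed inputs the functions return Dd of about 1.69 km/km², Ff of about 0.42 (elongated) and Rb = 3.5, matching the ranges quoted in the abstract.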

13.
The Celestial Reference System (CRS) is currently realized only by Very Long Baseline Interferometry (VLBI) because it is the space geodetic technique that enables observations in that frame. In contrast, the Terrestrial Reference System (TRS) is realized by means of the combination of four space geodetic techniques: Global Navigation Satellite System (GNSS), VLBI, Satellite Laser Ranging (SLR), and Doppler Orbitography and Radiopositioning Integrated by Satellite. The Earth orientation parameters (EOP) are the link between the two types of systems, CRS and TRS. The EOP series of the International Earth Rotation and Reference Systems Service were combined from specifically selected series from various analysis centers. Other EOP series were generated by a simultaneous estimation together with the TRF while the CRF was fixed. Those computation approaches entail inherent inconsistencies between TRF, EOP, and CRF, also because the input data sets are different. A combined normal equation (NEQ) system, which consists of all the parameters, i.e., TRF, EOP, and CRF, would overcome such an inconsistency. In this paper, we simultaneously estimate TRF, EOP, and CRF from an inter-technique combined NEQ using the latest GNSS, VLBI, and SLR data (2005–2015). The results show that the selection of local ties is most critical for the TRF. The combination of pole coordinates is beneficial for the CRF, whereas the combination of \(\varDelta \hbox {UT1}\) results in clear rotations of the estimated CRF. However, the standard deviations of the EOP and the CRF improve through the inter-technique combination, which indicates the benefits of a common estimation of all parameters. It became evident that the common determination of TRF, EOP, and CRF systematically influences future ICRF computations at the level of several \(\upmu\)as. Moreover, the CRF is influenced by up to \(50~\upmu\)as if the station coordinates and EOP are dominated by the satellite techniques.
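Combination on the normal-equation level amounts to stacking the per-technique NEQ systems for the common parameters and solving once. A toy sketch with two noise-free "techniques" observing the same parameter vector:

```python
import numpy as np

def combine_neq(neq_list, rhs_list):
    """Inter-technique combination by stacking normal equations:
    N = sum(N_i), b = sum(b_i), then solve N x = b for the common
    parameters."""
    N = sum(neq_list)
    b = sum(rhs_list)
    return np.linalg.solve(N, b)

x_true = np.array([1.0, -2.0])
rng = np.random.default_rng(3)
A1, A2 = rng.normal(size=(10, 2)), rng.normal(size=(10, 2))
y1, y2 = A1 @ x_true, A2 @ x_true          # noise-free for the demo
x = combine_neq([A1.T @ A1, A2.T @ A2],    # per-technique N_i = A_i^T A_i
                [A1.T @ y1, A2.T @ y2])    # per-technique b_i = A_i^T y_i
```

In practice each technique contributes only a subset of the parameters (TRF, EOP, CRF), so the stacking is done on a common parameter list with local ties and relative weighting, but the algebra is the same.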

14.
Deformations of radio telescopes used in geodetic and astrometric very long baseline interferometry (VLBI) observations belong to the class of systematic error sources which require correction in data analysis. In this paper we present a model for all path length variations in the geometrical optics of radio telescopes which are due to gravitational deformation. The Effelsberg 100 m radio telescope of the Max Planck Institute for Radio Astronomy, Bonn, Germany, has been surveyed by various terrestrial methods. Thus, all necessary information needed to model the path length variations is available. Additionally, a ray tracing program has been developed which uses the parameters of the measured deformations as input to produce an independent check of the theoretical model. In this program, as well as in the theoretical model, the illumination function plays an important role because it serves as the weighting function for the individual path lengths depending on the distance from the optical axis. For the Effelsberg telescope, the biggest contribution to the total path length variations is the bending of the main beam located along the elevation axis, which partly carries the weight of the paraboloid at its vertex. The difference in total path length is almost \(-100\) mm when comparing observations at \(90^{\circ}\) and at \(0^{\circ}\) elevation angle. The impact of the path length corrections is validated in a global VLBI analysis. The application of the correction model leads to a change in the vertical position of \(+120\) mm. This is more than the maximum path length, but the effect can be explained by the shape of the correction function.
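The role of the illumination function as a weighting function can be sketched as an aperture average: each ray's path length is weighted by the illumination taper and by the annular area at its radius. The numbers below are illustrative, not the Effelsberg survey data:

```python
import numpy as np

def weighted_path_length(radii, path_mm, illumination):
    """Effective path length change: individual ray path lengths
    averaged over the aperture, weighted by the illumination function
    and the annular area element 2*pi*r."""
    w = illumination * 2.0 * np.pi * radii
    return float(np.sum(w * path_mm) / np.sum(w))

R = 50.0                                  # m, aperture radius
r = np.linspace(0.0, R, 500)
illum = np.exp(-1.2 * (r / R) ** 2)       # illustrative edge taper
path = 10.0 * (r / R) ** 2                # mm, deformation grows to the rim
eff = weighted_path_length(r, path, illum)
```

Without the taper the area-weighted mean of this quadratic deformation would be 5 mm; the taper down-weights the rim, so the effective value is smaller. This is why the illumination function, not just the geometry, determines the correction.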

15.
The present paper deals with least-squares adjustment where the design matrix \(A\) is rank-deficient. The adjusted parameters \(\hat x\) as well as their variance-covariance matrix \(\sum_{\hat x}\) can be obtained as in the "standard" adjustment, where \(A\) has full column rank, supplemented with constraints \(C\hat x = w\), where \(C\) is the constraint matrix and \(w\) is sometimes called the "constant vector". In this analysis only the inner adjustment constraints are considered, where \(C\) has full row rank equal to the rank deficiency of \(A\), and \(AC^{T} = 0\). Perhaps the most important outcome points to three kinds of results:
  1. A general least-squares solution, where both \(\hat x\) and \(\sum_{\hat x}\) are indeterminate, corresponds to \(w =\) an arbitrary random vector.
  2. The minimum trace (least-squares) solution, where \(\hat x\) is indeterminate but \(\sum_{\hat x}\) is determined (and trace \(\sum_{\hat x}\) is minimized), corresponds to \(w =\) an arbitrary constant vector.
  3. The minimum norm (least-squares) solution, where both \(\hat x\) and \(\sum_{\hat x}\) are determined (and norm \(\hat x\) and trace \(\sum_{\hat x}\) are minimized), corresponds to \(w \equiv 0\).
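The minimum-norm case (case 3, \(w \equiv 0\)) can be illustrated numerically. The sketch below uses a toy rank-deficient design matrix and computes the solution both via the pseudoinverse and via a bordered normal-equation system with inner constraints \(C\hat x = 0\), where \(C\) spans the null space of \(A\) so that \(AC^{T} = 0\):

```python
import numpy as np

# Rank-deficient design: third column = col1 + col2 (rank 2, deficiency 1)
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0],
              [2.0, 1.0, 3.0]])
y = np.array([1.0, 2.0, 3.0, 4.0])

# Minimum-norm least-squares solution via the pseudoinverse
x_min_norm = np.linalg.pinv(A) @ y

# Same solution via inner constraints: C spans the null space of A,
# so A @ C.T = 0 and the constraint C x = 0 picks the min-norm solution
_, _, Vt = np.linalg.svd(A)
C = Vt[-1:]                                   # 1 x 3, rank deficiency is 1
K = np.vstack([np.hstack([A.T @ A, C.T]),     # bordered normal equations
               np.hstack([C, np.zeros((1, 1))])])
sol = np.linalg.solve(K, np.concatenate([A.T @ y, [0.0]]))
x_constrained = sol[:3]
```

Both routes give the same \(\hat x\); the bordered-system route generalizes directly to the constrained adjustment \(C\hat x = w\) with nonzero \(w\) discussed in the paper.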

16.
Well-credited and widely used ionospheric models, such as the International Reference Ionosphere or NeQuick, describe the variation of the electron density with height by means of a piecewise profile tied to the F2-peak parameters: the electron density $N_m \mathrm{F2}$ and the height $h_m \mathrm{F2}$. Accurate values of these parameters are crucial for retrieving reliable electron density estimates from those models. When direct measurements of these parameters are not available, the models compute them using the so-called ITU-R database, which was established in the early 1960s. This paper presents a technique aimed at routinely updating the ITU-R database using radio occultation electron density profiles derived from GPS measurements gathered by low Earth orbit satellites. Before being used, these radio occultation profiles are validated by fitting an electron density model to them. A re-weighted least-squares algorithm is used to down-weight unreliable measurements (occasionally, entire profiles) and to retrieve $N_m \mathrm{F2}$ and $h_m \mathrm{F2}$ values, together with their error estimates, from the profiles. These values are used to update the database monthly; the update consists of two sets of ITU-R-like coefficients that could easily be implemented in the IRI or NeQuick models. The technique was tested with radio occultation electron density profiles that are delivered to the community by the COSMIC/FORMOSAT-3 mission team. Tests were performed for solstice and equinox seasons under high and low solar activity. The global mean error of the resulting maps, as estimated by the least-squares technique, is between $0.5\times 10^{10}$ and $3.6\times 10^{10}$ el m$^{-3}$ for the F2-peak electron density (equivalent to 7 % of the value of the estimated parameter) and from 2.0 to 5.6 km for the height ($\sim$2 %).
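The down-weighting idea can be sketched with a generic iteratively re-weighted least-squares loop (here an $L_1$-like weighting on the residuals; the paper's actual weighting scheme is not reproduced):

```python
import numpy as np

def irls(A, y, n_iter=20, eps=1e-6):
    """Iteratively re-weighted least squares: weights 1/max(|r|, eps)
    down-weight measurements with large residuals (an L1-like fit)."""
    w = np.ones(len(y))
    x = None
    for _ in range(n_iter):
        W = np.diag(w)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
        r = y - A @ x
        w = 1.0 / np.maximum(np.abs(r), eps)   # re-weight for next pass
    return x

t = np.linspace(0.0, 1.0, 40)
A = np.vstack([np.ones_like(t), t]).T
y = 2.0 + 3.0 * t                  # exact line: intercept 2, slope 3
y[5] += 50.0                       # two gross outliers, as might come
y[20] -= 80.0                      # from corrupted occultation profiles
x = irls(A, y)
```

An ordinary least-squares fit would be dragged far off by the two outliers; the re-weighted fit recovers the underlying line, which is the behavior needed when occasional profiles are unreliable.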

17.
M-estimation with probabilistic models of geodetic observations
The paper concerns \(M\)-estimation with probabilistic models of geodetic observations, called \(M_{\mathcal {P}}\) estimation. Special attention is paid to \(M_{\mathcal {P}}\) estimation that includes the asymmetry and the excess kurtosis, which are basic anomalies of empirical distributions of errors of geodetic or astrometric observations (in comparison to the Gaussian errors). It is assumed that the influence function of \(M_{\mathcal {P}}\) estimation is equal to the differential equation that defines the system of Pearson distributions. The central moments \(\mu_{k},\ k=2,3,4\), are the parameters of that system and thus also the parameters of the chosen influence function. The \(M_{\mathcal {P}}\) estimation that includes the Pearson type IV and VII distributions (the \(M_{\mathrm{PD(l)}}\) method) is analyzed in great detail from a theoretical point of view as well as by numerical tests. The chosen distributions are leptokurtic with asymmetry, which reflects the general characteristics of empirical distributions. Considering \(M\)-estimation with probabilistic models, the Gram–Charlier series is also applied to approximate the models in question (the \(M_{\mathrm{G-C}}\) method). The paper shows that \(M_{\mathcal {P}}\) estimation with the application of probabilistic models belongs to the class of robust estimations; the \(M_{\mathrm{PD(l)}}\) method is especially effective in that case. It is suggested that even in the absence of significant anomalies the method in question should be regarded as robust against gross errors, with its robustness controlled by the pseudo-kurtosis.
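A classical special case of \(M\)-estimation uses the Huber influence function, whose weight function down-weights gross errors. A location-estimate sketch (this illustrates the generic \(M\)-estimation idea, not the Pearson-based \(M_{\mathcal{P}}\) scheme of the paper; residuals are used unscaled for simplicity):

```python
import numpy as np

def huber_weights(r, k=1.345):
    """Weight function w(r) = psi(r)/r for the Huber influence
    function: weight 1 in the quadratic core, k/|r| in the tails."""
    a = np.abs(r)
    return np.where(a <= k, 1.0, k / np.maximum(a, 1e-12))

data = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 25.0])   # one gross error
mu = data.mean()                   # start from the (non-robust) mean
for _ in range(50):                # iterate weighted mean to convergence
    w = huber_weights(data - mu)
    mu = float(np.sum(w * data) / np.sum(w))
```

The outlier at 25.0 ends up with a small weight, so the estimate stays near the bulk of the data instead of being pulled toward the plain mean.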

18.
The LLL algorithm, introduced by Lenstra et al. (Math Ann 261:515–534, 1982), plays a key role in many fields of applied mathematics. In particular, it is used as an effective numerical tool for preconditioning the integer least-squares problems arising in high-precision geodetic positioning and Global Navigation Satellite Systems (GNSS). In 1992, Teunissen developed a method for solving these nearest-lattice-point (NLP) problems. This method is referred to as Lambda (for Least-squares AMBiguity Decorrelation Adjustment). The preconditioning stage of Lambda corresponds to its decorrelation algorithm. From an epistemological point of view, the latter was devised through an innovative statistical approach completely independent of the LLL algorithm. Recent papers pointed out some similarities between the LLL algorithm and the Lambda decorrelation algorithm. We try to clarify this point in the paper. We first introduce a parameter measuring the orthogonality defect of the integer basis in which the NLP problem is solved: the LLL-reduced basis of the LLL algorithm, or the $\Lambda$-basis of the Lambda method. With regard to this problem, the potential qualities of these bases can then be compared. The $\Lambda$-basis is built by working at the level of the variance-covariance matrix of the float solution, while the LLL-reduced basis is built by working at the level of its inverse. As a general rule, the orthogonality defect of the $\Lambda$-basis is greater than that of the corresponding LLL-reduced basis; these bases are however very close to one another. To specify this tight relationship, we present a method that provides the dual LLL-reduced basis of a given $\Lambda$-basis. As a consequence of this basic link, all the recent developments made on the LLL algorithm can be applied to the Lambda decorrelation algorithm. This point is illustrated in a concrete manner: we present a parallel $\Lambda$-type decorrelation algorithm derived from the parallel LLL algorithm of Luo and Qiao (Proceedings of the fourth international C$^*$ conference on computer science and software engineering. ACM Int Conf P Series. ACM Press, pp 93–101, 2012).
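The orthogonality defect used above is the product of the basis-vector norms divided by the lattice volume; it equals 1 exactly for an orthogonal basis and grows as the basis becomes more skewed. A sketch (basis vectors as matrix columns):

```python
import numpy as np

def orthogonality_defect(B):
    """Product of basis-vector norms divided by the lattice volume
    sqrt(det(B^T B)); equals 1 iff the basis is orthogonal."""
    norms = np.prod(np.linalg.norm(B, axis=0))
    vol = np.sqrt(np.linalg.det(B.T @ B))
    return float(norms / vol)

B_orth = np.array([[2.0, 0.0],
                   [0.0, 3.0]])        # orthogonal basis: defect 1
B_skew = np.array([[1.0, 1.0],
                   [0.0, 0.001]])      # nearly parallel vectors: large defect
```

Reduction algorithms such as LLL or the Lambda decorrelation drive this defect toward 1, which is what makes the subsequent integer search over ambiguity candidates cheap.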

19.
For science applications of the Gravity Recovery and Climate Experiment (GRACE) monthly solutions, the GRACE estimates of \(C_{20}\) (or \(J_{2}\)) are typically replaced by the value determined from satellite laser ranging (SLR) due to an unexpectedly strong, clearly non-geophysical, variation at a period of \(\sim \)160 days. This signal has sometimes been referred to as a tide-like variation, since the period is close to the perturbation period on the GRACE orbits due to the spherical harmonic coefficient pair \(C_{22}/S_{22}\) of the S2 ocean tide. Errors in the S2 tide model used in GRACE data processing could produce a significant perturbation to the GRACE orbits, but they cannot contribute to the \(\sim \)160-day signal appearing in \(C_{20}\). Since the dominant contribution to the GRACE estimate of \(C_{20}\) is from the Global Positioning System tracking data, a time series of 138 monthly solutions up to degree and order 10 (\(10\times 10\)) was derived along with estimates of ocean tide parameters up to degree 6 for eight major tides. The results show that the \(\sim \)160-day signal remains in the \(C_{20}\) time series. Consequently, the anomalous signal in GRACE \(C_{20}\) cannot be attributed to aliasing from the errors in the S2 tide. A preliminary analysis of the cross-track forces acting on GRACE and the cross-track component of the accelerometer data suggests that a temperature-dependent systematic error in the accelerometer data could be a cause. Because a wide variety of science applications relies on the replacement values for \(C_{20}\), it is essential that the SLR estimates are as reliable as possible. An ongoing concern has been the influence of higher-degree even zonal terms on the SLR estimates of \(C_{20}\), since only \(C_{20}\) and \(C_{40}\) are currently estimated.
To investigate whether a better separation between \(C_{20}\) and the higher-degree terms could be achieved, several combinations of additional SLR satellites were investigated. In addition, a series of monthly gravity field solutions (\(60\times 60\)) were estimated from a combination of GRACE and SLR data. The results indicate that the combination of GRACE and SLR data might benefit the resonant orders in the GRACE-derived gravity fields, but it appears to degrade the recovery of the \(C_{20}\) variations. In fact, the results suggest that the poorer recovery of \(C_{40}\) by GRACE, where the annual variation is significantly underestimated, may be affecting the estimates of \(C_{20}\). Consequently, it appears appropriate to continue using the SLR-based estimates of \(C_{20}\), and possibly also \(C_{40}\), to augment the existing GRACE mission.
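The \(C_{20}\) replacement that this abstract discusses is, in practice, a simple substitution in the spherical-harmonic coefficient set before computing science products. A minimal sketch, assuming (hypothetically) that the coefficients are stored as a dict keyed by (degree, order); the function name and data layout are illustrative, not part of any GRACE processing software:

```python
def replace_c20(coeffs, slr_c20):
    """Return a copy of a spherical-harmonic coefficient set with the
    GRACE-derived C20 swapped for the SLR-derived value.

    coeffs : dict mapping (degree, order) -> (Cnm, Snm)
    slr_c20: the SLR-based C20 estimate to substitute in.
    """
    out = dict(coeffs)                 # shallow copy; original is untouched
    _, s20 = out[(2, 0)]               # keep the (zero) S20 companion term
    out[(2, 0)] = (slr_c20, s20)
    return out

# Illustrative (not real) coefficient values:
grace = {(2, 0): (-4.8400e-4, 0.0), (2, 2): (2.4e-6, -1.4e-6)}
patched = replace_c20(grace, -4.8416e-4)
```

The same pattern would extend to \(C_{40}\) if, as the abstract suggests, the SLR estimate of that term were adopted as well.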

20.
The ionosphere effective height (IEH) is a key parameter in total electron content (TEC) measurements under the widely used single-layer model assumption. To avoid the need for a large amount of simultaneous vertical and slant ionospheric observations, or for dense "coinciding" pierce-point data, a new approach to determining the optimal IEH is proposed: the vertical TEC (VTEC) value converted with a mapping function based on a given IEH is compared with the "ground truth" VTEC value provided by the combined International GNSS Service Global Ionospheric Maps. The optimal IEH in the Chinese region is determined using three different methods based on GNSS data. Based on the ionosonde data from three different locations in China, the altitude of the peak electron density (hmF2) is found to have clear diurnal, seasonal and latitudinal dependences, and the diurnal variation of hmF2 ranges from approximately 210 to 520 km in Hainan. The determination of the optimal IEH employing the inverse method suggested by Birch et al. (Radio Sci 37, 2002. doi: 10.1029/2000rs002601) did not yield a consistent altitude in the Chinese region. Tests of the method minimizing the mapping function errors suggested by Nava et al. (Adv Space Res 39:1292–1297, 2007) indicate that the optimal IEH ranges from 400 to 600 km, and that 450 km is the most frequent IEH at both high and low solar activity. It is also confirmed that an IEH of 450–550 km is preferred for the Chinese region, instead of the commonly adopted 350–450 km, using the determination method for the optimal IEH proposed in this paper.
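Under the single-layer model that this abstract builds on, the slant-to-vertical TEC conversion depends on the assumed IEH through the standard mapping function \(M(z) = 1/\cos z'\) with \(\sin z' = \frac{R_E}{R_E + H}\sin z\), where \(z\) is the zenith angle at the receiver and \(z'\) the zenith angle at the pierce point. A minimal sketch (the function name and the mean Earth radius value are illustrative assumptions):

```python
import math

R_E = 6371.0  # mean Earth radius in km (illustrative value)

def slm_mapping(elev_deg, ieh_km):
    """Single-layer-model mapping function M = STEC / VTEC for a given
    elevation angle (degrees) and ionosphere effective height (km)."""
    z = math.radians(90.0 - elev_deg)              # zenith angle at receiver
    sin_zp = R_E / (R_E + ieh_km) * math.sin(z)    # sin of pierce-point zenith angle
    return 1.0 / math.sqrt(1.0 - sin_zp ** 2)

# A higher assumed IEH gives a smaller mapping factor at low elevation,
# which is why the choice between 350-450 km and 450-550 km matters:
print(slm_mapping(10.0, 350.0))
print(slm_mapping(10.0, 450.0))
```

Comparing VTEC = STEC / M(z) against the IGS Global Ionospheric Map value for a range of candidate heights, as the abstract describes, then amounts to picking the IEH that minimizes the disagreement.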
