Similar Documents (20 results)
1.
The International GNSS Service (IGS) issues four sets of so-called ultra-rapid products per day, which are based on the contributions of the IGS Analysis Centers. The traditional (“old”) ultra-rapid orbit and earth rotation parameters (ERP) solution of the Center for Orbit Determination in Europe (CODE) was based on the output of three consecutive 3-day long-arc rapid solutions. Information from the IERS Bulletin A was required to generate the predicted part of the old CODE ultra-rapid product. The current (“new”) product, activated in November 2013, is based on the output of exactly one multi-day solution. A priori information from the IERS Bulletin A is no longer required for generating and predicting the orbits and ERPs. This article discusses the transition from the old to the new CODE ultra-rapid orbit and ERP products and the associated improvement in reliability and performance. All solutions used in this article were generated with the development version of the Bernese GNSS Software. The package was slightly extended to meet the needs of the new CODE ultra-rapid generation.

2.
Results are presented for Michibiki, the first satellite of Japan’s Quasi-Zenith Satellite System. Measurements for the analysis have been collected with five GNSS tracking stations in the service area of QZSS, which track five of the six signals transmitted by the satellite. The analysis discusses the carrier-to-noise density ratio as measured by the receiver for the different signals. Pseudorange noise and multipath are evaluated with dual-frequency and triple-frequency combinations. QZSS uses two separate antennas for signal transmission, which allows the determination of the yaw orientation of the spacecraft. Yaw angle estimation results for an attitude mode switch from yaw-steering to orbit-normal orientation are presented. Estimates of differential code biases between QZSS and GPS observations are shown in the analysis of the orbit determination results for Michibiki. The estimated orbits are compared with the broadcast ephemerides, and their accuracy is assessed with overlap comparisons.
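The dual-frequency combination used above to evaluate pseudorange noise and multipath is, in standard GNSS practice, the code-minus-carrier "MP" combination. The following is a minimal sketch of that combination, assuming GPS L1/L2 frequencies; the abstract does not give the exact formulation used for QZSS, so treat this as an illustration of the general technique only.

```python
# Sketch: dual-frequency code multipath (MP) combination, a standard way to
# evaluate pseudorange noise plus multipath. Assumptions: GPS L1/L2 carrier
# frequencies; P1 is the code observable and L1, L2 the carrier phases, all
# expressed in meters.
import numpy as np

F1, F2 = 1575.42e6, 1227.60e6          # carrier frequencies, Hz
alpha = (F1 / F2) ** 2                 # ratio of squared frequencies

def mp1(P1, L1, L2):
    """MP combination for the first frequency: geometry-free and
    ionosphere-free up to an ambiguity-dependent constant, which is
    removed here by subtracting the series mean."""
    mp = P1 - (1.0 + 2.0 / (alpha - 1.0)) * L1 + (2.0 / (alpha - 1.0)) * L2
    return mp - np.mean(mp)

# Toy usage: noise-free, ambiguity-free observables give a zero MP series.
r = np.linspace(2.0e7, 2.0001e7, 10)   # geometric range, m
print(np.allclose(mp1(r, r, r), 0.0))  # → True
```

The geometry (range, clocks, troposphere) cancels because it enters all three observables identically, so what remains is code multipath plus code noise.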

3.
4.
The paper deals with data filtering on closed surfaces using linear and nonlinear diffusion equations. We define a surface finite-volume method to numerically approximate parabolic partial differential equations on closed surfaces, namely on a sphere, an ellipsoid or the Earth’s surface. The closed surface serving as the computational domain is approximated by a polyhedral surface built from planar triangles, and we construct a dual co-volume grid. On the co-volumes we define a weak formulation of the problem by applying Green’s theorem to the Laplace–Beltrami operator. The finite-volume method is then applied to discretize the weak formulation. Weak forms of elliptic operators are expressed through surface gradients. In our numerical scheme we use a piecewise linear approximation of the solution in space and the backward Euler time discretization. Furthermore, we extend the linear diffusion on a surface to the regularized surface Perona–Malik model. This is a nonlinear diffusion equation that simultaneously reduces noise and preserves the main edges and other details important for a correct interpretation of the real data. We present four numerical experiments. The first is illustrative, showing how additive noise is filtered out of an artificial function defined on a sphere. The other three deal with real geodetic data on the Earth’s surface: (i) we reduce striping noise from the GOCE satellite-only geopotential model up to degree 240, (ii) we filter noise from real GOCE measurements (the component $T_{zz}$), and (iii) we reduce striping noise from the satellite-only mean dynamic topography over the oceans. In all experiments we focus on a comparison of the results obtained by the linear and nonlinear models, presenting the advantages of the nonlinear diffusion.
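The edge-preserving behavior of Perona–Malik diffusion described above can be sketched on a 1-D grid. The paper works on triangulated closed surfaces with backward Euler time stepping; this toy instead uses a flat 1-D grid with explicit time stepping, and the diffusivity parameter and step sizes are illustrative choices, not the paper's.

```python
# Sketch: regularized Perona-Malik diffusion on a 1-D signal. The
# edge-stopping function g suppresses diffusion where the gradient is large,
# so noise is smoothed while sharp transitions survive.
import numpy as np

def perona_malik_1d(u, n_steps=200, dt=0.1, K=0.5):
    u = u.astype(float).copy()
    for _ in range(n_steps):
        grad = np.diff(u)                  # forward differences
        g = 1.0 / (1.0 + (grad / K) ** 2)  # edge-stopping diffusivity
        flux = g * grad
        u[1:-1] += dt * np.diff(flux)      # divergence of the flux
    return u

# A noisy step: the noise is smoothed, the central edge is preserved.
step = np.concatenate([np.zeros(50), np.ones(50)])
noisy = step + 0.05 * np.random.default_rng(0).standard_normal(100)
filtered = perona_malik_1d(noisy)
print(abs(filtered[-1] - filtered[0]) > 0.7)
```

With a constant diffusivity (g ≡ 1) the same scheme reduces to linear diffusion, which blurs the edge; that is exactly the linear-versus-nonlinear comparison the abstract's experiments make.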

5.
6.
Error analyses of CHAMP data for recovery of the Earth’s gravity field
A preliminary commission error analysis, whereby orbit perturbation theory and other techniques are used to assess and predict the recovery of the Earth’s gravity field from the CHAllenging Minisatellite Payload (CHAMP) mission, is developed and implemented. With CHAMP launched in July 2000, accumulated evidence is now available to quantify the errors in the recovery procedure, including the orbital precision from GPS, attitude errors, accelerometer noise and thruster mismatch/misalignment. For the latter, numerical integrations using a variable step-length single-step Runge–Kutta integrator and a fixed step-length multi-step method are compared to assess the error associated with assuming that the thruster misalignment can be spread uniformly across a step interval. Error degree variances from simulated studies are compared to results from a recently released CHAMP-based gravity field, EIGEN-1S. It is seen that the orbital positioning, as derived from the onboard GPS receiver, is critical, with accelerometer noise contributing at a lower level. Attitude error, at currently quoted accuracy, is not a significant error source. Acknowledgements: The authors would like to thank the UK Natural Environment Research Council (Grant No. NER/A/0000/00612) for financing this study and GFZ for supplying the data and technical support.
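The single-step Runge–Kutta integration compared in the study above can be illustrated with a minimal fixed-step RK4 integrator. This is a generic sketch, not the study's integrator: it propagates a 1-D harmonic oscillator, chosen because the analytic solution makes the accumulated integration error easy to check.

```python
# Sketch: classical fixed-step RK4, the prototypical single-step method.
# A multi-step method would instead reuse several previous function
# evaluations per step, which is why the two behave differently when a
# thruster misalignment is smeared across a step interval.
import numpy as np

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def oscillator(t, y):                 # y = [position, velocity], y'' = -y
    return np.array([y[1], -y[0]])

n = 628
h = 2 * np.pi / n                     # one full period in n steps
y = np.array([1.0, 0.0])
for i in range(n):
    y = rk4_step(oscillator, i * h, y, h)

# After one period the state should return very close to [1, 0]:
print(np.allclose(y, [1.0, 0.0], atol=1e-6))  # → True
```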

7.
The Dixon resultant is proposed as an alternative to Gröbner basis or multipolynomial resultant approaches for solving systems of polynomial equations inherent in geodesy. Its small size, high density (the ratio of the number of nonzero elements to the total number of elements), speed, and robustness (insensitivity to the combinatorial sequence and monomial order, to which, e.g., the Gröbner basis is sensitive) make it extremely attractive compared to its competitors. Using 3D-intersection and conformal C7 datum transformation problems, we compare its performance to those of Sturmfels’s resultant and the Gröbner basis. For the 3D-intersection problem, Sturmfels’s resultant needed 0.578 s to solve a 6 × 6 resultant matrix whose density was 0.639; the Dixon resultant, on the other hand, took 0.266 s to solve a 4 × 4 resultant matrix whose density was 0.870. For the conformal C7 datum transformation problem, the Dixon resultant took 2.25 s to compute a quartic polynomial in the scale parameter, whereas the computation of the Gröbner basis fails. Using relative coordinates to compute the quartic polynomial in the scale parameter, the Gröbner basis needed 0.484 s, while the Dixon resultant took 0.016 s. This highlights the robustness of the Dixon resultant (i.e., the capability to use both absolute and relative coordinates with any order of variables) as opposed to the Gröbner basis, which only worked well with relative coordinates and was sensitive to the combinatorial sequence and order of variables. Geodetic users uncomfortable with the lengthy expressions of Gröbner bases or multipolynomial resultants, and who aspire to exploit the attractive features of the Dixon resultant, may find it useful.
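The elimination idea behind all the resultant methods compared above can be shown on a toy system. SymPy exposes the classical (Sylvester) resultant rather than the Dixon resultant, so this is a sketch of resultant-based elimination in general, not of the Dixon construction itself: project the system onto one variable, solve the resulting univariate polynomial, then back-substitute.

```python
# Sketch: resultant-based elimination for a two-equation polynomial system
# (circle intersected with a line), using SymPy's classical resultant.
from sympy import symbols, resultant, solve

x, y = symbols('x y')
f = x**2 + y**2 - 5          # circle of radius sqrt(5)
g = x - y - 1                # line

# Eliminating x gives a univariate polynomial in y whose roots are the
# y-coordinates of the intersection points.
r = resultant(f, g, x)
ys = solve(r, y)
print(sorted(ys))            # → [-2, 1]
```

Back-substituting into the line gives the intersection points (−1, −2) and (2, 1). The Dixon formulation plays the same role but builds a smaller, denser matrix for multivariate systems, which is the size/density advantage the abstract quantifies.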

8.
9.
Existing spatiotemporal analysis methods suppose that the involved time series are complete and have the same data interval. However, missing data inevitably occur in the position time series of Global Navigation Satellite System networks for many reasons. In this paper, we develop a modified principal component analysis to extract the Common Mode Error (CME) from incomplete position time series. The principle of the proposed method is that a time series can be reproduced from its principal components. The method is equivalent to the method of Dong et al. (J Geophys Res 111:3405–3421, 2006) when no data are missing, and to the extended ‘stacking’ approach under the assumption of a uniform spatial response. The new method is first applied to extract the CME from the position time series of the Crustal Movement Observation Network of China (CMONOC) over the period 1999–2009, where missing data occur at all stations with different gaps. The results show that the CMEs are significant in CMONOC. The first principal components for the North, East and Up coordinates account for as much as 40, 41 and 37 % of the total, and their spatial responses are not uniform. The minimum amplitudes of the first eigenvectors are only 41, 15 and 29 % for the North, East and Up coordinate components, respectively. The CMEs extracted by our method are close to those of the data-filling method; the root mean square (RMS) values computed from the differences of the maximum CMEs between the two methods are only 0.31, 0.52 and 1.55 mm for the North, East and Up coordinates, respectively. The RMS of the position time series is greatly reduced after filtering out the CMEs. The accuracies of the reconstructed missing data using the two methods are also comparable. To further test the efficiency of our method comprehensively, repeated experiments are carried out by randomly deleting different percentages of data at some stations. The results show that the CMEs can be extracted with high accuracy at epochs without missing data; at epochs with missing data, the accuracy of the extracted CMEs depends strongly on the number of stations with missing data.
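The core idea above, namely that a common mode shared by all stations shows up as the first principal component of the station-by-epoch residual matrix, can be sketched with a plain SVD. The paper's contribution is the handling of missing data; this toy assumes complete series and synthetic data throughout.

```python
# Sketch: extracting a common-mode signal from a network of stations by
# principal component analysis. All data are synthetic; a shared sinusoid
# is injected into every station on top of independent noise.
import numpy as np

rng = np.random.default_rng(1)
n_epochs, n_stations = 500, 8
cme_true = np.sin(np.linspace(0, 20, n_epochs))          # shared signal
X = cme_true[:, None] + 0.3 * rng.standard_normal((n_epochs, n_stations))

X0 = X - X.mean(axis=0)                                  # center each station
U, S, Vt = np.linalg.svd(X0, full_matrices=False)
cme_est = U[:, 0] * S[0] * Vt[0].mean()                  # first PC, stacked
                                                         # with mean loading
# The first principal component tracks the injected common signal
# (up to an overall sign, hence the absolute value).
corr = np.corrcoef(cme_est, cme_true)[0, 1]
print(abs(corr) > 0.9)
```

The spatial loadings Vt[0] play the role of the eigenvector amplitudes discussed in the abstract; a uniform spatial response would make all of them equal, recovering plain stacking.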

10.
G. Bourda, Journal of Geodesy, 2008, 82(4–5): 295–305
The temporal variations of the Earth’s gravity field, nowadays routinely determined from satellite laser ranging (SLR) and GRACE (Gravity Recovery And Climate Experiment), are related to changes in the Earth’s rotation rate through the Earth’s inertia tensor. We study this connection from actual data by comparing the traditional length-of-day (LOD) measurements provided by the International Earth Rotation and Reference Systems Service (IERS) to the variations of the degree-2 and order-0 Stokes coefficient of the gravity field determined from fitting the orbits of the LAGEOS-1 and -2 satellites since 1985. The two series show a good correlation (0.62) and similar annual and semi-annual signals, indicating that the gravity-field-derived LOD is valuable. Our analysis also provides evidence for additional signals common to both series, especially at a period near 120 days, which could be due to hydrological effects.

11.
Monitoring agricultural land is important for understanding and managing food production, environmental conservation efforts, and climate change. The United States Department of Agriculture's Cropland Data Layer (CDL), an annual satellite imagery-derived land cover map, has been increasingly used for this application since complete coverage of the conterminous United States became available in 2008. However, the CDL is designed and produced with the intent of mapping annual land cover rather than tracking changes over time, and as a result certain precautions are needed in multi-year change analyses to minimize error and misapplication. We highlight scenarios that require special considerations, suggest solutions to key challenges, and propose a set of recommended good practices and general guidelines for CDL-based land change estimation. We also characterize a problematic issue of crop area underestimation bias within the CDL that needs to be accounted for and corrected when calculating changes to crop and cropland areas. When used appropriately and in conjunction with related information, the CDL is a valuable and effective tool for detecting diverse trends in agriculture. By explicitly discussing the methods and techniques for post-classification measurement of land-cover and land-use change using the CDL, we aim to further stimulate the discourse and continued development of suitable methodologies. Recommendations generated here are intended specifically for the CDL but may be broadly applicable to additional remotely-sensed land cover datasets including the National Land Cover Database (NLCD), Moderate Resolution Imaging Spectroradiometer (MODIS)-based land cover products, and other regional, national, and global land cover classification maps.

12.
Are the National Geodetic Survey’s surface gravity data sufficient for supporting the computation of a 1 cm-accurate geoid? This paper attempts to answer this question by deriving a few measures of accuracy for this data and estimating their effects on the US geoid. We use a data set which comprises ~1.4 million gravity observations collected in 1,489 surveys. Comparisons to GRACE-derived gravity and geoid are made to estimate the long-wavelength errors. Crossover analysis and K-nearest neighbor predictions are used for estimating local gravity biases and high-frequency gravity errors, and the corresponding geoid biases and high-frequency geoid errors are evaluated. Results indicate that 244 of all 1,489 surface gravity surveys have significant biases >2 mGal, with geoid implications that reach 20 cm. Some of the biased surveys are large enough in horizontal extent to be reliably corrected by satellite-derived gravity models, but many others are not. In addition, the results suggest that the data are contaminated by high-frequency errors with an RMS of ~2.2 mGal. This causes high-frequency geoid errors of a few centimeters in and to the west of the Rocky Mountains and in the Appalachians, and a few millimeters or less everywhere else. Finally, long-wavelength (>3°) surface gravity errors on the sub-mGal level but with large horizontal extent are found. All of the south and southeast of the USA is biased by +0.3 to +0.8 mGal and the Rocky Mountains by −0.1 to −0.3 mGal. These small but extensive gravity errors lead to long-wavelength geoid errors that reach 60 cm in the interior of the USA.
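The K-nearest-neighbor bias screening mentioned above can be sketched as follows: predict each station of a candidate survey from nearby points belonging to other surveys, and read a large mean misfit as a survey bias. The abstract gives no implementation details, so the neighborhood size, field model, and all data below are illustrative assumptions.

```python
# Sketch: flagging a biased gravity survey with K-nearest-neighbour
# prediction. A smooth synthetic anomaly field is sampled by a background
# network and by one survey carrying a deliberate 3 mGal bias.
import numpy as np

rng = np.random.default_rng(2)

def knn_predict(xy_known, g_known, xy_query, k=5):
    """Mean of the k nearest known anomalies for each query point."""
    preds = []
    for q in xy_query:
        d = np.linalg.norm(xy_known - q, axis=1)
        preds.append(g_known[np.argsort(d)[:k]].mean())
    return np.array(preds)

# Background network over a 100 x 100 km area, smooth field plus noise.
xy_bg = rng.uniform(0, 100, (400, 2))
g_bg = 0.05 * xy_bg[:, 0] + 0.2 * rng.standard_normal(400)

# One survey in the interior with a 3 mGal bias.
xy_sv = rng.uniform(40, 60, (30, 2))
g_sv = 0.05 * xy_sv[:, 0] + 3.0 + 0.2 * rng.standard_normal(30)

bias_est = np.mean(g_sv - knn_predict(xy_bg, g_bg, xy_sv))
print(round(float(bias_est), 1))       # close to the injected 3.0 mGal
```

The residual scatter about the estimated bias plays the role of the high-frequency error RMS quoted in the abstract.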

13.
14.
We present a global static model of the Earth’s gravity field, entitled DGM-1S, based on GRACE and GOCE data. The collection of data sets used includes nearly 7 years of GRACE KBR data and 10 months of GOCE gravity gradient data. The KBR data are transformed with a 3-point differentiation into quantities that are approximately inter-satellite accelerations. Gravity gradients are processed in the instrumental frame. Noise is handled with a frequency-dependent data weighting. DGM-1S is complete to spherical harmonic degree 250, with a Kaula regularization applied above degree 179. Its performance is compared with a number of other satellite-only GRACE/GOCE models by confronting them with (i) an independent model of the oceanic mean dynamic topography, and (ii) independent KBR and gravity gradient data. The tests reveal a competitive quality for DGM-1S. Importantly, we study the added value of GOCE data by comparing the performance of satellite-only GRACE/GOCE models with models produced without GOCE data: either ITG-Grace2010s or EGM2008, depending on which of the two performs better in a given region. The test based on independent gravity gradients quantifies this added value as 25–38 % in continental areas poorly covered with terrestrial gravimetry data (Equatorial Africa, the Himalayas, and South America), 7–17 % in those with a good coverage of these data (Australia, North America, and North Eurasia), and 14 % in the oceans. This added value is shown to be almost entirely related to coefficients below degree 200, and it must be attributed entirely to the gravity gradients acquired by the mission. The test based on an independent model of the mean dynamic topography suggests that problems still exist in satellite-only GRACE/GOCE models over the Pacific Ocean, where noticeable deviations between these models and EGM2008 are also detected.
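The 3-point differentiation mentioned above, which turns KBR range-rate-like quantities into approximate inter-satellite accelerations, is in its simplest form a central difference. The sketch below illustrates that step on a synthetic range-rate series; the 5 s sampling interval and the signal are illustrative assumptions, not the paper's actual data handling.

```python
# Sketch: 3-point (central) numerical differentiation of a range-rate
# series into accelerations, accurate to second order in the step size.
import numpy as np

dt = 5.0                                       # sample interval, s
t = np.arange(0, 600, dt)
rho_dot = 1e-3 * np.sin(2 * np.pi * t / 300)   # synthetic range rate, m/s

# Central difference: a_i ≈ (v_{i+1} - v_{i-1}) / (2 dt)
acc = (rho_dot[2:] - rho_dot[:-2]) / (2 * dt)

# Analytic derivative at the interior epochs, for comparison.
acc_true = 1e-3 * (2 * np.pi / 300) * np.cos(2 * np.pi * t[1:-1] / 300)

print(np.max(np.abs(acc - acc_true)) < 1e-7)
```

Because differentiation amplifies high-frequency noise, such a step is naturally paired with the frequency-dependent data weighting the abstract describes.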

15.
Using observed CHAMP orbit ephemerides and the MSISE-90 dry-air model, and regarding the Earth as a sphere and as an ellipsoid, respectively, phase delays are simulated and the simulated data are retrieved under different schemes. The comparison between the inverted temperature profiles and the model temperature profiles shows that inverting observed data yields temperatures with large errors if the effect of the Earth’s oblateness is neglected. The correction method proves effective, as the temperature errors decrease markedly when it is applied.

16.
17.
The Gravity Recovery and Climate Experiment (GRACE) satellite mission has measured the Earth’s gravity field since March 2002. We propose a new filtering procedure for post-processing GRACE-based monthly gravity field solutions provided in the form of spherical harmonic coefficients. The procedure is tuned for the optimal estimation of linear trends and other signal components that show a systematic behavior over long time intervals. The key element of the developed methodology is a statistically optimal Wiener-type filter which makes use of the full covariance matrices of noise and signal. The developed methodology is applied to determine the mass balance of the Greenland ice sheet, both per drainage system and integrated, as well as the mass balance of the ice caps on the islands surrounding Greenland. The estimations are performed for three 2-year time intervals (2003–2004, 2005–2006, and 2007–2008), as well as for the 6-year interval (2003–2008). The study confirms a significant difference in the behavior of the drainage systems over time. The average 6-year rate of mass loss in Greenland is estimated as 165 ± 15 Gt/year. The rates of mass loss of the ice caps on Ellesmere Island (together with Devon Island), Baffin Island, Iceland, and Svalbard are found to be 22 ± 4, 21 ± 6, 17 ± 9, and 6 ± 2 Gt/year, respectively. All these estimates are corrected for the effect of glacial isostatic adjustment.
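The Wiener-type filter above weights each coefficient by its signal-to-(signal + noise) power ratio. The paper builds this from full covariance matrices of spherical harmonic coefficients; the sketch below applies the same scalar principle per Fourier coefficient of a 1-D series, with the signal spectrum assumed known, purely to illustrate the weighting.

```python
# Sketch: a scalar Wiener-type filter. Each spectral coefficient is scaled
# by S / (S + N), where S and N are the signal and noise power at that
# frequency, which is the minimum-mean-square-error gain.
import numpy as np

rng = np.random.default_rng(3)
n = 1024
t = np.arange(n)
signal = np.sin(2 * np.pi * t / 128) + 0.5 * np.sin(2 * np.pi * t / 64)
noisy = signal + 1.0 * rng.standard_normal(n)

S = np.abs(np.fft.rfft(signal)) ** 2   # signal power (assumed known here)
N = n * 1.0                            # flat noise power, unit-variance noise
W = S / (S + N)                        # Wiener gain per frequency

filtered = np.fft.irfft(W * np.fft.rfft(noisy), n)

err_raw = np.mean((noisy - signal) ** 2)
err_fil = np.mean((filtered - signal) ** 2)
print(err_fil < err_raw)               # filtering reduces the error
```

Replacing the per-frequency scalars with full covariance matrices couples the coefficients, which is what makes the paper's filter statistically optimal for correlated GRACE noise.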

18.
Geographical Information Science is essentially computational geography and has its own research program, namely all aspects of formal models for spatial natural processes and the interaction of humans with the environment in space and time. This is not a question of technology and technology-related research, but technology influences what questions can be researched effectively. The collection of data in the field and the simulation of field experience through Virtual Reality are just two questions of how spatial reality and human experience are linked. The focus on human spatial cognition is similarly found in software engineering for interoperable Geographic Information Systems.

19.
ABSTRACT

Bertin’s first book, Semiology of Graphics, was published in 1967. His second book, Graphics and Graphic Information Processing, was subsequently published in 1977. The word “processing” in the title of the second book is interesting because in those days there were no personal computers with interactive display systems. But in Bertin’s laboratory there were many kinds of tool kits – essentially manual tools for developing thematic maps and analyzing data. Bertin’s methods were concerned with making thematic maps and visualizing data. Maps, and more generally graphics, were represented by sets of cartographic symbols. Thus, they are abstractions that demand both theoretical and technical literacy to represent and understand. If the representation is systematic, a tool kit of some sort might be necessary, because the representation demands consistency based on the theory; otherwise, a cartographer risks an unstable and unintelligible representation. In this paper, we discuss the distinction between tool kits intended for an automated system and those intended for a process-assisting system. The latter might be useful and necessary for developing a graphic way of thinking. This investigation draws on Bertin’s books, materials conserved at the National Archives in Paris, and related software developed later.

Abbreviation: EHESS: École des Hautes Études en Sciences Sociales (successor to the École Pratique des Hautes Études since 1975)

20.
