Similar documents (20 results)
1.
M-estimation with probabilistic models of geodetic observations
The paper concerns \(M\)-estimation with probabilistic models of geodetic observations, called \(M_{\mathcal{P}}\) estimation. Special attention is paid to \(M_{\mathcal{P}}\) estimation that includes the asymmetry and the excess kurtosis, which are basic anomalies of empirical distributions of errors of geodetic or astrometric observations (in comparison to the Gaussian errors). It is assumed that the influence function of \(M_{\mathcal{P}}\) estimation is given by the differential equation that defines the system of Pearson distributions. The central moments \(\mu_k,\ k=2,3,4\), are the parameters of that system and thus also the parameters of the chosen influence function. The \(M_{\mathcal{P}}\) estimation that includes the Pearson type IV and VII distributions (the \(M_{\mathrm{PD(l)}}\) method) is analyzed in detail, both theoretically and by numerical tests. The chosen distributions are leptokurtic with asymmetry, which reflects the general character of empirical error distributions. Considering \(M\)-estimation with probabilistic models, the Gram–Charlier series are also applied to approximate the models in question (the \(M_{\mathrm{G-C}}\) method). The paper shows that \(M_{\mathcal{P}}\) estimation with probabilistic models belongs to the class of robust estimations; the \(M_{\mathrm{PD(l)}}\) method is especially effective in that case. It is suggested that even in the absence of significant anomalies the method in question should be regarded as robust against gross errors, with its robustness controlled by the pseudo-kurtosis.

2.
We show that the current levels of accuracy being achieved for the precise orbit determination (POD) of low-Earth orbiters demonstrate the need for the self-consistent treatment of tidal variations in the geocenter. Our study uses as an example the POD of the OSTM/Jason-2 satellite altimeter mission based upon Global Positioning System (GPS) tracking data. Current GPS-based POD solutions are demonstrating root-mean-square (RMS) radial orbit accuracy and precision of \({<}1\) cm and 1 mm, respectively. Meanwhile, we show that the RMS of three-dimensional tidal geocenter variations is \({<}6\) mm, but can be as large as 15 mm, with the largest component along the Earth's spin axis. Our results demonstrate that GPS-based POD of Earth orbiters is best performed using GPS satellite orbit positions that are defined in a reference frame whose origin is at the center of mass of the entire Earth system, including the ocean tides. Errors of \({<}4\) mm (3D RMS) and \({<}2\) mm (radial RMS) are introduced into the GPS-based POD solutions for OSTM/Jason-2 when tidal geocenter variations are not treated consistently. Though small, this inconsistent treatment is measurable in the OSTM/Jason-2 POD solutions and manifests through degraded post-fit tracking data residuals, orbit precision, and relative orbit accuracy. For the latter metric, sea surface height crossover variance is higher by \(6~\hbox{mm}^{2}\) when tidal geocenter variations are treated inconsistently.

3.
We develop a slope correction model to improve the accuracy of mean sea surface topography models as well as marine gravity models. The correction is greatest above ocean trenches and large seamounts, where the slope of the geoid exceeds 100 \(\upmu\)rad. In extreme cases, the correction to the mean sea surface height is 40 mm and the correction to the along-track altimeter slope is 1–2 \(\upmu\)rad, which maps into a 1–2 mGal gravity error. Both corrections are easily applied using existing grids of sea surface slope from satellite altimetry.

4.
The present paper deals with the least-squares adjustment where the design matrix (A) is rank-deficient. The adjusted parameters \(\hat x\) as well as their variance–covariance matrix \(\sum_{\hat x}\) can be obtained as in the "standard" adjustment where A has full column rank, supplemented with constraints, \(C\hat x = w\), where C is the constraint matrix and w is sometimes called the "constant vector". In this analysis only the inner adjustment constraints are considered, where C has full row rank equal to the rank deficiency of A, and \(AC^{T}=0\). Perhaps the most important outcome points to three kinds of results:
  1. The general least-squares solution, where both \(\hat x\) and \(\sum_{\hat x}\) are indeterminate, corresponds to w = an arbitrary random vector.
  2. The minimum-trace (least-squares) solution, where \(\hat x\) is indeterminate but \(\sum_{\hat x}\) is determined (with minimum trace \(\sum_{\hat x}\)), corresponds to w = an arbitrary constant vector.
  3. The minimum-norm (least-squares) solution, where both \(\hat x\) and \(\sum_{\hat x}\) are determined (with minimum norm \(\hat x\) and trace \(\sum_{\hat x}\)), corresponds to w = 0.
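The third case above can be sketched numerically. The toy fragment below (my own illustration, not from the paper) verifies that augmenting the rank-deficient normal equations with inner constraints and w = 0 reproduces the minimum-norm least-squares solution given by the Moore–Penrose pseudoinverse:

```python
import numpy as np

# Hypothetical rank-deficient design matrix: 3 parameters, rank 2.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0],
              [1.0, -1.0, 0.0]])
y = np.array([1.0, 2.0, 3.0, -1.0])

# Inner constraints: rows of C span the null space of A, so A @ C.T = 0.
# The rank deficiency is 1, hence C has a single row.
C = np.array([[1.0, 1.0, -1.0]])
assert np.allclose(A @ C.T, 0.0)

# Bordered normal equations with w = 0 give the minimum-norm solution.
N = A.T @ A
K = np.block([[N, C.T], [C, np.zeros((1, 1))]])
rhs = np.concatenate([A.T @ y, np.zeros(1)])
x_constrained = np.linalg.solve(K, rhs)[:3]

# The same solution from the pseudoinverse, for comparison.
x_pinv = np.linalg.pinv(A) @ y
assert np.allclose(x_constrained, x_pinv)
```

Note that the bordered matrix K is nonsingular even though N is singular, precisely because C spans the null space of A.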

5.
We can map zenith wet delays onto precipitable water with a conversion factor, but in order to calculate the exact conversion factor, we must precisely calculate its key variable $T_\mathrm{m}$. Yao et al. (J Geod 86:1125–1135, 2012. doi:10.1007/s00190-012-0568-1) established the first-generation global $T_\mathrm{m}$ model (GTm-I) with ground-based radiosonde data, but due to the lack of radiosonde data at sea, the model appears to be abnormal in some areas. Given that sea surface temperature varies less than that on land, and that the GPT model and the Bevis $T_\mathrm{m}$–$T_\mathrm{s}$ relationship are accurate enough to describe the surface temperature and $T_\mathrm{m}$, this paper capitalizes on the GPT model and the Bevis $T_\mathrm{m}$–$T_\mathrm{s}$ relationship to provide simulated $T_\mathrm{m}$ at sea, as compensation for the lack of data. Combined with the $T_\mathrm{m}$ from radiosonde data, we recalculated the GTm model coefficients. The results show that this method not only improves the accuracy of the GTm model significantly at sea but also improves it on land, making the GTm model more stable and practically applicable.
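For context, the conversion the abstract refers to is commonly written PW = Π($T_\mathrm{m}$) · ZWD. A minimal sketch of that mapping, assuming the widely quoted Bevis-type refractivity constants (the k2′ and k3 values below are illustrative assumptions, not necessarily the paper's exact choices):

```python
# Sketch of the zenith-wet-delay (ZWD) -> precipitable-water (PW) mapping
# driven by the weighted mean temperature T_m.
# Constants are the commonly used Bevis-type values (assumption: the paper
# may use slightly different ones).
RHO_W = 1000.0   # density of liquid water [kg/m^3]
R_V = 461.5      # specific gas constant of water vapor [J/(kg K)]
K2P = 0.221      # k2' refractivity constant [K/Pa]
K3 = 3.739e3     # k3 refractivity constant [K^2/Pa]

def conversion_factor(t_m):
    """Dimensionless Pi such that PW = Pi * ZWD (T_m in kelvin)."""
    return 1.0 / (1e-6 * (K2P + K3 / t_m) * R_V * RHO_W)

def precipitable_water(zwd_m, t_m):
    """PW in meters of liquid water from ZWD in meters and T_m in kelvin."""
    return conversion_factor(t_m) * zwd_m

# Pi(270 K) is roughly 0.15, so 0.20 m of ZWD maps to about 31 mm of PW.
pw = precipitable_water(0.20, 270.0)
```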

6.
Well credited and widely used ionospheric models, such as the International Reference Ionosphere or NeQuick, describe the variation of the electron density with height by means of a piecewise profile tied to the F2-peak parameters: the electron density, $N_m\mathrm{F2}$, and the height, $h_m\mathrm{F2}$. Accurate values of these parameters are crucial for retrieving reliable electron density estimations from those models. When direct measurements of these parameters are not available, the models compute the parameters using the so-called ITU-R database, which was established in the early 1960s. This paper presents a technique aimed at routinely updating the ITU-R database using radio occultation electron density profiles derived from GPS measurements gathered from low Earth orbit satellites. Before being used, these radio occultation profiles are validated by fitting an electron density model to them. A re-weighted Least Squares algorithm is used for down-weighting unreliable measurements (occasionally, entire profiles) and for retrieving $N_m\mathrm{F2}$ and $h_m\mathrm{F2}$ values—together with their error estimates—from the profiles. These values are used to update the database monthly; the database consists of two sets of ITU-R-like coefficients that could easily be implemented in the IRI or NeQuick models. The technique was tested with radio occultation electron density profiles that are delivered to the community by the COSMIC/FORMOSAT-3 mission team. Tests were performed for solstice and equinox seasons in high- and low-solar-activity conditions. The global mean error of the resulting maps—estimated by the Least Squares technique—is between $0.5\times 10^{10}$ and $3.6\times 10^{10}$ elec/m$^3$ for the F2-peak electron density (equivalent to 7 % of the value of the estimated parameter) and from 2.0 to 5.6 km for the height (${\sim}2$ %).

7.
Error analysis of the NGS' surface gravity database
Are the National Geodetic Survey's surface gravity data sufficient for supporting the computation of a 1 cm-accurate geoid? This paper attempts to answer this question by deriving a few measures of accuracy for these data and estimating their effects on the US geoid. We use a data set which comprises ${\sim}1.4$ million gravity observations collected in 1,489 surveys. Comparisons to GRACE-derived gravity and geoid are made to estimate the long-wavelength errors. Crossover analysis and $K$-nearest-neighbor predictions are used for estimating local gravity biases and high-frequency gravity errors, and the corresponding geoid biases and high-frequency geoid errors are evaluated. Results indicate that 244 of the 1,489 surface gravity surveys have significant biases ${>}2$ mGal, with geoid implications that reach 20 cm. Some of the biased surveys are large enough in horizontal extent to be reliably corrected by satellite-derived gravity models, but many others are not. In addition, the results suggest that the data are contaminated by high-frequency errors with an RMS of ${\sim}2.2$ mGal. This causes high-frequency geoid errors of a few centimeters in and to the west of the Rocky Mountains and in the Appalachians, and a few millimeters or less everywhere else. Finally, long-wavelength (${>}3^{\circ}$) surface gravity errors on the sub-mGal level but with large horizontal extent are found. All of the south and southeast of the USA is biased by +0.3 to +0.8 mGal and the Rocky Mountains by $-0.1$ to $-0.3$ mGal. These small but extensive gravity errors lead to long-wavelength geoid errors that reach 60 cm in the interior of the USA.

8.
This paper describes the historical sea level data that we have rescued from a tide gauge originally devised especially for geodesy. This gauge was installed in Marseille in 1884 with the primary objective of defining the origin of the height system in France. Hourly values for 1885–1988 have been digitized from the original tidal charts. They are supplemented by hourly values from an older tide gauge record (1849–1851) that was rediscovered during a survey in 2009. Both recovered data sets have been critically edited for errors and their reliability assessed. The hourly values are thoroughly analysed for the first time since their original recording. A consistent high-frequency time series is reported, notably increasing the length of one of the few European sea level records in the Mediterranean Sea spanning more than one hundred years. Changes in sea levels are examined, and previous results revisited with the extended time series. The rate of relative sea level change for the period 1849–2012 is estimated to have been \(1.08\pm 0.04\) mm/year at Marseille, a value that is slightly lower than, but in close agreement with, that of the longest time series, at Brest, over the common period (\(1.26\pm 0.04\) mm/year). The data from a permanent global positioning system station installed on the roof of the solid tide gauge building suggest a remarkable stability of the ground (\(-0.04\pm 0.25\) mm/year) since 1998, confirming the choice made by our predecessor geodesists in the nineteenth century regarding this site selection.

9.
We present new insights on the time-averaged surface velocities, convergence and extension rates along arc-normal transects in the Kumaon, Garhwal and Kashmir–Himachal regions of the Indian Himalaya from 13 years of high-precision Global Positioning System (GPS) time series (1995–2008) derived from GPS data at 14 permanent and 42 campaign stations between $29.5{-}35^{\circ}\hbox{N}$ and $76{-}81^{\circ}\hbox{E}$. The GPS surface horizontal velocities vary significantly from the Higher to the Lesser Himalaya and are of the order of 30 to 48 mm/year NE in the ITRF 2005 reference frame, and 17 to 2 mm/year SW in an India-fixed reference frame, indicating that this region is accommodating less than 2 cm/year of the India–Eurasia plate motion (${\sim}4~\hbox{cm/year}$). The total arc-normal shortening varies between ${\sim}10{-}14~\hbox{mm/year}$ along the different transects of the northwest Himalayan wedge, between the Indo-Tsangpo suture to the north and the Indo-Gangetic foreland to the south, indicating high strain accumulation in the Himalayan wedge. This convergence is being accommodated differentially along the arc-normal transects: ${\sim}5{-}10~\hbox{mm/year}$ in the Lesser Himalaya and 3–4 mm/year in the Higher Himalaya south of the South Tibetan Detachment. Most of the convergence in the Lesser Himalaya of Garhwal and Kumaon is being accommodated just south of the Main Central Thrust fault trace, indicating high strain accumulation in this region, which is consistent with its high seismic activity. In addition, for the first time an arc-normal extension of ${\sim}6~\hbox{mm/year}$ has been observed in the Tethyan Himalaya of Kumaon. Inverse modeling of GPS-derived surface deformation rates in the Garhwal and Kumaon Himalaya using a single dislocation indicates that the Main Himalayan Thrust is locked from the surface to a depth of ${\sim}15{-}20~\hbox{km}$ over a width of 110 km, with an associated slip rate of ${\sim}16{-}18~\hbox{mm/year}$. These results indicate that the arc-normal rates in the Northwest Himalaya have a complex deformation pattern involving both convergence and extension, and rigorous seismo-tectonic models in the Himalaya are necessary to account for this pattern. In addition, the results also give an estimate of the co-seismic and post-seismic motion associated with the 1999 Chamoli earthquake, which is modeled to derive the slip and geometry of the rupture plane.

10.
Reducing the draconitic errors in GNSS geodetic products
Systematic errors at harmonics of the GPS draconitic year have been found in diverse GPS-derived geodetic products like the geocenter $Z$-component, station coordinates, $Y$-pole rate and orbits (i.e., orbit overlaps). The GPS draconitic year is the repeat period of the GPS constellation w.r.t. the Sun, which is about 351 days. Different error sources have been proposed which could generate these spurious signals at the draconitic harmonics. In this study, we focus on one of these error sources, namely radiation pressure orbit modeling deficiencies. For this purpose, three GPS+GLONASS solutions covering 8 years (2004–2011) were computed which differ only in the solar radiation pressure (SRP) and satellite attitude models. The models employed in the solutions are: (1) the CODE (5-parameter) radiation pressure model widely used within the International GNSS Service community, (2) the adjustable box-wing model for SRP impacting GPS (and GLONASS) satellites, and (3) the adjustable box-wing model upgraded to use non-nominal yaw attitude, especially for satellites in eclipse seasons. Comparing the first solution with the third one, we achieved the following improvements in the GNSS geodetic products. Orbits: the draconitic errors in the orbit overlaps are reduced for the GPS satellites in all the harmonics by on average 46, 38 and 57 % for the radial, along-track and cross-track components, while for GLONASS satellites they are mainly reduced in the cross-track component, by 39 %. Geocenter $Z$-component: all the odd draconitic harmonics found when the CODE model is used show a very important reduction (almost disappearing, with a 92 % average reduction) with the new radiation pressure models. Earth orientation parameters: the draconitic errors are reduced for the $X$-pole rate and especially for the $Y$-pole rate, by 24 and 50 % respectively. Station coordinates: all the draconitic harmonics (except the 2nd harmonic in the North component) are reduced in the North, East and Height components, with average reductions of 41, 39 and 35 % respectively. This shows that part of the draconitic errors currently found in GNSS geodetic products are definitely induced by the CODE radiation pressure orbit modeling deficiencies.
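Draconitic harmonics like those discussed above are typically quantified by fitting sine/cosine pairs at multiples of the roughly 351-day period to a residual series by least squares. A minimal sketch with synthetic data (the period value, harmonic count and noise level are my assumptions, not the paper's exact setup):

```python
import numpy as np

# Sketch (not the authors' software): estimate amplitudes of draconitic
# harmonics in a daily coordinate residual series by least squares.
P_DRACONITIC = 351.4  # approximate GPS draconitic year in days

rng = np.random.default_rng(0)
t = np.arange(3000.0)  # ~8 years of daily epochs
# Synthetic series: 1st and 2nd draconitic harmonics plus white noise.
truth = (2.0 * np.sin(2 * np.pi * t / P_DRACONITIC)
         + 0.5 * np.cos(2 * np.pi * 2 * t / P_DRACONITIC))
y = truth + rng.normal(scale=0.3, size=t.size)

# Design matrix with a sine/cosine pair for harmonics 1..6.
cols = []
for k in range(1, 7):
    w = 2 * np.pi * k / P_DRACONITIC
    cols += [np.sin(w * t), np.cos(w * t)]
A = np.column_stack(cols)

coef, *_ = np.linalg.lstsq(A, y, rcond=None)
amps = np.hypot(coef[0::2], coef[1::2])  # amplitude per harmonic
# amps[0] is close to 2.0, amps[1] close to 0.5, the rest near zero.
```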

11.
Homogeneous reprocessing of GPS, GLONASS and SLR observations
The International GNSS Service (IGS) provides operational products for the GPS and GLONASS constellations. Homogeneously processed time series of parameters from the IGS are available only for GPS. Reprocessed GLONASS series are provided only by individual Analysis Centers (i.e., CODE and ESA), making it difficult to fully include the GLONASS system in a rigorous GNSS analysis. In view of the increasing number of active GLONASS satellites and a steadily growing number of GPS+GLONASS-tracking stations available over the past few years, Technische Universität Dresden, Technische Universität München, Universität Bern and Eidgenössische Technische Hochschule Zürich performed a combined reprocessing of GPS and GLONASS observations. SLR observations to GPS and GLONASS are also included in this reprocessing effort. Here, we show only SLR results from a GNSS orbit validation. In total, 18 years of data (1994–2011) have been processed from altogether 340 GNSS and 70 SLR stations. The use of GLONASS observations in addition to GPS has no impact on the estimated linear terrestrial reference frame parameters. However, daily station positions show an RMS reduction of 0.3 mm on average for the height component when additional GLONASS observations can be used for the time series determination. Analyzing satellite orbit overlaps, the rigorous combination of GPS and GLONASS neither improves nor degrades the GPS orbit precision. For GLONASS, however, the quality of the microwave-derived GLONASS orbits improves due to the combination. These findings are confirmed using independent SLR observations for a GNSS orbit validation. In comparison to previous studies, mean SLR biases for satellites GPS-35 and GPS-36 could be reduced in magnitude from \(-35\) and \(-38\) mm to \(-12\) and \(-13\) mm, respectively. Our results show that the remaining SLR biases depend on the satellite type and the use of coated or uncoated retro-reflectors. For Earth rotation parameters, the increasing number of GLONASS satellites and tracking stations over the past few years leads to differences between GPS-only and GPS+GLONASS combined solutions which are most pronounced in the pole rate estimates, with a maximum of 0.2 mas/day in magnitude. At the same time, the difference between GLONASS-only and combined solutions decreases. The derived GNSS orbits are used to estimate combined GPS+GLONASS satellite clocks, with first results presented in this paper. Phase observation residuals from a precise point positioning are at the level of 2 mm and particularly reveal poorly modeled yaw maneuver periods.

12.
Fast error analysis of continuous GNSS observations with missing data
One of the most widely used methods for the time-series analysis of continuous Global Navigation Satellite System (GNSS) observations is Maximum Likelihood Estimation (MLE), which in most implementations requires $\mathcal{O}(n^3)$ operations for $n$ observations. Previous research by the authors has shown that this amount of operations can be reduced to $\mathcal{O}(n^2)$ for observations without missing data. In the current research we present a reformulation of the equations that preserves this low amount of operations, even in the common situation of having some missing data. Our reformulation assumes that the noise is stationary to ensure a Toeplitz covariance matrix. However, most GNSS time series exhibit power-law noise, which is weakly non-stationary. To overcome this problem, we present a Toeplitz covariance matrix that provides an approximation for power-law noise that is accurate for most GNSS time series. Numerical results are given for a set of synthetic data and a set of International GNSS Service (IGS) stations, demonstrating a reduction in computation time of a factor of 10–100 compared to the standard MLE method, depending on the length of the time series and the amount of missing data.
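The $\mathcal{O}(n^2)$ saving comes from the Toeplitz structure: systems involving a stationary covariance matrix can be solved by Levinson recursion instead of a dense $\mathcal{O}(n^3)$ factorization. A minimal sketch using SciPy (the exponential autocovariance below is purely illustrative; the paper instead constructs a Toeplitz approximation for power-law noise):

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

# Sketch (not the authors' code): with stationary noise the covariance
# matrix is Toeplitz, so C^{-1} r can be formed in O(n^2) via Levinson
# recursion rather than O(n^3) via a dense solve.
n = 500
lags = np.arange(n)
autocov = np.exp(-lags / 50.0)  # illustrative stationary autocovariance

rng = np.random.default_rng(1)
r = rng.normal(size=n)  # e.g., residuals from a trend fit

x_fast = solve_toeplitz(autocov, r)               # O(n^2) Levinson solve
x_dense = np.linalg.solve(toeplitz(autocov), r)   # O(n^3) reference
assert np.allclose(x_fast, x_dense)
```

Only the first column of the covariance matrix is ever stored, which also reduces memory from $\mathcal{O}(n^2)$ to $\mathcal{O}(n)$.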

13.
The LLL algorithm, introduced by Lenstra et al. (Math Ann 261:515–534, 1982), plays a key role in many fields of applied mathematics. In particular, it is used as an effective numerical tool for preconditioning the integer least-squares problems arising in high-precision geodetic positioning and Global Navigation Satellite Systems (GNSS). In 1992, Teunissen developed a method for solving these nearest-lattice-point (NLP) problems. This method is referred to as Lambda (for Least-squares AMBiguity Decorrelation Adjustment). The preconditioning stage of Lambda corresponds to its decorrelation algorithm. From an epistemological point of view, the latter was devised through an innovative statistical approach completely independent of the LLL algorithm. Recent papers pointed out some similarities between the LLL algorithm and the Lambda decorrelation algorithm. We try to clarify this point in the paper. We first introduce a parameter measuring the orthogonality defect of the integer basis in which the NLP problem is solved: the LLL-reduced basis of the LLL algorithm, or the $\Lambda$-basis of the Lambda method. With regard to this problem, the potential qualities of these bases can then be compared. The $\Lambda$-basis is built by working at the level of the variance–covariance matrix of the float solution, while the LLL-reduced basis is built by working at the level of its inverse. As a general rule, the orthogonality defect of the $\Lambda$-basis is greater than that of the corresponding LLL-reduced basis; these bases are however very close to one another. To specify this tight relationship, we present a method that provides the dual LLL-reduced basis of a given $\Lambda$-basis. As a consequence of this basic link, all the recent developments made on the LLL algorithm can be applied to the Lambda decorrelation algorithm. This point is illustrated in a concrete manner: we present a parallel $\Lambda$-type decorrelation algorithm derived from the parallel LLL algorithm of Luo and Qiao (Proceedings of the Fourth International C$^*$ Conference on Computer Science and Software Engineering. ACM Int Conf P Series. ACM Press, pp 93–101, 2012).
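To make the orthogonality defect mentioned above concrete: for a basis matrix B with columns $b_i$, it is classically defined as $\delta(B) = \bigl(\prod_i \|b_i\|\bigr)/|\det B| \ge 1$, with equality exactly for orthogonal bases. The sketch below uses this classical definition; the paper's exact measure may differ:

```python
import numpy as np

# Orthogonality defect of a basis B (columns b_i), classical definition:
# defect(B) = (prod ||b_i||) / |det B|, always >= 1,
# equal to 1 iff the columns are mutually orthogonal.
def orthogonality_defect(B):
    B = np.asarray(B, dtype=float)
    norms = np.linalg.norm(B, axis=0)
    return float(np.prod(norms) / abs(np.linalg.det(B)))

# An orthogonal basis has defect 1.
assert abs(orthogonality_defect(np.eye(3)) - 1.0) < 1e-12

# A skewed basis has defect > 1: here (1 * sqrt(2)) / 1 = sqrt(2).
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
assert abs(orthogonality_defect(B) - np.sqrt(2)) < 1e-12
```

A smaller defect means the basis is "more orthogonal", which is the quality both LLL reduction and the Lambda decorrelation aim to improve.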

14.
This paper presents a deformation analysis of the Lake Urmia causeway (LUC) embankments in northwest Iran using observations from interferometric synthetic aperture radar (InSAR) and finite element model (FEM) simulation. 58 SAR images, comprising 10 ALOS, 30 Envisat and 18 TerraSAR-X scenes, are used to assess settlement of the embankments during 2003–2013. The interferometric dataset includes 140 differential interferograms, which are processed using the small-baseline-subset InSAR time series technique. The results show a clear indication of large deformation on the embankments, with a peak amplitude of \({>}50\) mm/year in 2003–2010, increasing to \({>}80\) mm/year in 2012–2013 in the line-of-sight (LOS) direction from ground to satellite. 2D decomposition of InSAR observations from the Envisat and ALOS satellites, which overlap in the years 2007–2010, shows that the rates of vertical settlement and horizontal motion are not uniform along the embankments; both the eastern and western embankments show significant vertical motion, while horizontal motion plays a more significant role in the eastern embankment than in the western embankment. The InSAR results are then used to simulate deformation using FEM at two cross-sections, at distances of 4 and 9 km from the westernmost edge of the LUC, for which detailed stratigraphy data are available. Results suggest that consolidation due to dissipation of excess pore pressure in the embankments can satisfactorily predict settlement of the LUC embankments. Our numerical modeling indicates that nearly half of the total consolidation has occurred since the construction of the causeway 30 years ago.

15.
Continental hydrology loading observed by VLBI measurements
Variations in continental water storage lead to loading deformation of the crust, with typical peak-to-peak variations at very long baseline interferometry (VLBI) sites of 3–15 mm in the vertical component and 1–2 mm in the horizontal component. The hydrology signal at VLBI sites has annual and semi-annual components and clear interannual variations. We have calculated the hydrology loading series using mass loading distributions derived from the global land data assimilation system (GLDAS) hydrology model and, alternatively, from a global grid of equal-area gravity recovery and climate experiment (GRACE) mascons. In the analysis of the two weekly VLBI 24-h R1 and R4 network sessions from 2003 to 2010, the baseline length repeatabilities are reduced in 79 % (80 %) of baselines when GLDAS (GRACE) loading corrections are applied. Site vertical coordinate repeatabilities are reduced at about 80 % of the sites when either GLDAS or GRACE loading is used. In the horizontal components, reduction occurs at 70–80 % of the sites. Estimates of the annual site vertical amplitudes were reduced for 16 out of 18 sites if either loading series was applied. We estimated loading admittance factors for each site and found that the average admittances were \(1.01\pm 0.05\) for GRACE and \(1.39\pm 0.07\) for GLDAS. The standard deviations of the GRACE and GLDAS admittances were 0.31 and 0.68, respectively. For sites observed in a set of sufficiently temporally dense daily sessions, the average correlation between the VLBI vertical monthly averaged series and the GLDAS or GRACE loading series was 0.47 and 0.43, respectively.

16.
Determining how the global mean sea level (GMSL) evolves with time is of primary importance for understanding one of the main consequences of global warming and its potential impact on populations living near coasts or on low-lying islands. Five groups routinely provide satellite-altimetry-based estimates of the GMSL over the altimetry era (since late 1992). Because each group has developed its own approach to computing the GMSL time series, there are some differences in the GMSL interannual variability and linear trend. While over the whole high-precision altimetry time span (1993–2012) good agreement is noticed for the computed GMSL linear trend (of $3.1\pm 0.4$ mm/year), on shorter time spans (e.g., ${<}10~\hbox{years}$) trend differences are significantly larger than the 0.4 mm/year uncertainty. Here we investigate the sources of the trend differences, focusing on the averaging methods used to generate the GMSL. For that purpose, we consider outputs from two different groups, the Colorado University (CU) and Archiving, Validation and Interpretation of Satellite Oceanographic Data (AVISO), because the associated processing of each group is largely representative of all other groups. For this investigation, we use the high-resolution MERCATOR ocean circulation model with data assimilation (version Glorys2-v1) and compute synthetic sea surface height (SSH) data by interpolating the model grids at the time and location of "true" along-track satellite altimetry measurements, focusing on the Jason-1 operating period (i.e., 2002–2009). These synthetic SSH data are then treated as "real" altimetry measurements, allowing us to test the different averaging methods used by the two processing groups for computing the GMSL: (1) averaging along-track altimetry data (as done by CU), or (2) gridding the along-track data into $2^{\circ}\times 2^{\circ}$ meshes and then geographically averaging the gridded data (as done by AVISO). We also investigate the effect of considering or not SSH data at shallow depths (${<}120~\hbox{m}$), as well as the editing procedure. We find that the main difference comes from the averaging method, with significant differences depending on latitude. In the tropics, the $2^{\circ}\times 2^{\circ}$ gridding method used by AVISO overestimates the GMSL trend by 11 %. At high latitudes (above $60^{\circ}\hbox{N/S}$), both methods underestimate the GMSL trend. Our calculation shows that the CU method (along-track averaging) and the AVISO gridding process underestimate the trend at high latitudes of the northern hemisphere by 0.9 and 1.2 mm/year, respectively. While we were able to attribute the AVISO trend overestimation in the tropics to grid cells with too few data, the cause of the underestimation at high latitudes remains unclear and needs further investigation.

17.
Comparison of GOCE-GPS gravity fields derived by different approaches
Several techniques have been proposed to exploit GNSS-derived kinematic orbit information for the determination of long-wavelength gravity field features. These methods include the (i) celestial mechanics approach, (ii) short-arc approach, (iii) point-wise acceleration approach, (iv) averaged acceleration approach, and (v) energy balance approach. Although there is a general consensus that—except for energy balance—these methods theoretically provide equivalent results, real-data gravity field solutions from kinematic orbit analysis have never been evaluated against each other within a consistent data processing environment. This contribution strives to close this gap. Target consistency criteria for our study are the input data sets, period of investigation, spherical harmonic resolution, a priori gravity field information, etc. We compare GOCE gravity field estimates based on the aforementioned approaches as computed at the Graz University of Technology, the University of Bern, the University of Stuttgart/Austrian Academy of Sciences, and by RHEA Systems for the European Space Agency. The involved research groups complied with most of the consistency criteria; deviations occur only where they were technically unavoidable. Performance measures include formal errors, differences with respect to a state-of-the-art GRACE gravity field, (cumulative) geoid height differences, and SLR residuals from precise orbit determination of geodetic satellites. We found that for approaches (i) to (iv), the cumulative geoid height differences at spherical harmonic degree 100 differ by only \({\approx}10~\%\); in the absence of the polar data gap, SLR residuals agree to \({\approx}96~\%\). From our investigations, we conclude that the real-data analysis results are in agreement with the theoretical considerations concerning the (relative) performance of the different approaches.

18.
Canadian gravimetric geoid model 2010
A new gravimetric geoid model, the Canadian Gravimetric Geoid 2010 (CGG2010), has been developed to upgrade the previous geoid model CGG2005. CGG2010 represents the separation between the GRS80 reference ellipsoid and the Earth's equipotential surface of $W_0=62{,}636{,}855.69~\mathrm{m}^2\,\mathrm{s}^{-2}$. The Stokes–Helmert method has been re-formulated for the determination of CGG2010 by a new Stokes kernel modification. It reduces the effect of the systematic error in the Canadian terrestrial gravity data on the geoid from about 20 cm with other existing modification techniques to below 2 cm, and renders a smooth spectral combination of the satellite and terrestrial gravity data. The long-wavelength components of CGG2010 include the GOCE contribution contained in a combined GRACE and GOCE geopotential model, GOCO01S, which ranges from $-20.1$ to 16.7 cm with an RMS of 2.9 cm. Improvement has also been achieved through refinement of the geoid modelling procedure and the use of new data. (1) The downward continuation effect has been accurately accounted for, ranging from $-22.1$ to 16.5 cm with an RMS of 0.9 cm. (2) The geoid residual from the Stokes integral is reduced to 4 cm in RMS by the use of an ultra-high-degree spherical harmonic representation of a global elevation model for deriving the reference Helmert field, in conjunction with a derived global geopotential model. (3) The Canadian gravimetric geoid model is published for the first time with associated error estimates. In addition, CGG2010 includes the new marine gravity data, ArcGP gravity grids, and the new Canadian Digital Elevation Data (CDED) 1:50K. CGG2010 is compared to GPS-levelling data in Canada. The standard deviations are estimated to vary from 2 to 10 cm, with the largest error in the mountainous areas of western Canada. We demonstrate its improvement over the previous models CGG2005 and EGM2008.

19.
20.
The estimation of crustal deformations from repeated baseline measurements is a singular problem in the absence of prior information. One often-applied solution is a free adjustment in which the singular normal matrix is augmented with a set of inner constraints. These constraints impose no net translation or rotation on the estimated deformations X, which may not be physically meaningful for a particular problem. The introduction of an available geophysical model, from which an expected deformation vector \(\bar X\) and its covariance matrix \(\sum_{\bar X}\) can be computed, will direct X to a physically more meaningful solution. Three possible estimators are investigated for estimating deformations from a combination of baseline measurements and geophysical models.

