Similar Documents (20 results)
1.
Griliches’ knowledge production function has been increasingly adopted at the regional level where location-specific conditions drive the spatial differences in knowledge creation dynamics. However, the large majority of such studies rely on a traditional regression approach that assumes spatially homogeneous marginal effects of knowledge input factors. This paper extends the authors’ previous work (Kang and Dall’erba in Int Reg Sci Rev, 2015. doi: 10.1177/0160017615572888) to investigate the spatial heterogeneity in the marginal effects by using nonparametric local modeling approaches such as geographically weighted regression (GWR) and mixed GWR with two distinct samples of US Metropolitan Statistical Area (MSA) and non-MSA counties. The results indicate a high degree of spatial heterogeneity in the marginal effects of the knowledge input variables, most notably for the local and distant spillovers of private knowledge measured across MSA counties. On the other hand, local academic knowledge spillovers are found to display spatially homogeneous elasticities in both MSA and non-MSA counties. Our results highlight the strengths and weaknesses of each county’s innovation capacity and suggest policy implications for regional innovation strategies.
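Hedged illustration: a minimal sketch of the geographically weighted regression idea described in this abstract, assuming a Gaussian distance kernel and made-up variable names (coords, X, y, bandwidth); it is not the authors' implementation.

```python
import numpy as np

def gwr_coefficients(coords, X, y, bandwidth):
    """Estimate location-specific coefficients with a Gaussian kernel (GWR sketch).

    coords : (n, 2) array of point locations
    X      : (n, k) design matrix of knowledge inputs (intercept included)
    y      : (n,)   knowledge output (e.g., log patent counts)
    """
    n, k = X.shape
    betas = np.empty((n, k))
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)   # distances to the focal county
        w = np.exp(-0.5 * (d / bandwidth) ** 2)          # Gaussian kernel weights
        sw = np.sqrt(w)[:, None]
        # weighted least squares centered on location i
        betas[i], *_ = np.linalg.lstsq(sw * X, np.sqrt(w) * y, rcond=None)
    return betas  # one row of local marginal effects per county
```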

2.
Missing or incorrect consideration of azimuthal asymmetry of troposphere delays is a considerable error source in space geodetic techniques such as Global Navigation Satellite Systems (GNSS) or Very Long Baseline Interferometry (VLBI). So-called horizontal troposphere gradients are generally utilized for modeling such azimuthal variations and are particularly required for observations at low elevation angles. Apart from estimating the gradients within the data analysis, which has become common practice in space geodetic techniques, there is also the possibility to determine the gradients beforehand from data sources other than the actual observations. Using ray-tracing through Numerical Weather Models (NWMs), we determined discrete gradient values, referred to as GRAD, for VLBI observations, based on the standard gradient model by Chen and Herring (J Geophys Res 102(B9):20489–20502, 1997.  https://doi.org/10.1029/97JB01739) and also for new, higher-order gradient models. These gradients are produced on the same data basis as the Vienna Mapping Functions 3 (VMF3) (Landskron and Böhm in J Geod, 2017.  https://doi.org/10.1007/s00190-017-1066-2), so they can also be regarded as the VMF3 gradients, as the two are fully consistent with each other. VLBI analyses with the Vienna VLBI and Satellite Software (VieVS) show that baseline length repeatabilities (BLRs) are improved on average by 5% when using the a priori gradients GRAD instead of estimating the gradients. The reason for this improvement is that gradient estimation yields poor results for VLBI sessions with a small number of observations, whereas the GRAD a priori gradients are unaffected by this. We also developed a new empirical gradient model applicable for any time and location on Earth, which is included in the Global Pressure and Temperature 3 (GPT3) model. Although it describes only the systematic component of azimuthal asymmetry and no short-term variations at all, even this empirical a priori gradient model slightly reduces (improves) the BLRs with respect to the estimation of gradients. In general, this paper shows that a priori horizontal gradients are more important for VLBI analysis than previously assumed, as both the discrete model GRAD and the empirical model GPT3 are able to refine and improve the results.
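For orientation, a minimal sketch of the standard gradient model of Chen and Herring (1997) cited above; the gradient mapping-function constant C = 0.0032 is the commonly quoted value and should be treated as an assumption here, not a value taken from this paper.

```python
import numpy as np

def gradient_delay(elev, az, grad_north, grad_east, C=0.0032):
    """Azimuth-dependent part of the slant troposphere delay (Chen & Herring form).

    elev, az              : elevation and azimuth in radians
    grad_north, grad_east : horizontal gradient components (same units as the delay)
    """
    m_grad = 1.0 / (np.sin(elev) * np.tan(elev) + C)   # gradient mapping function
    return m_grad * (grad_north * np.cos(az) + grad_east * np.sin(az))
```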

3.
This work is an investigation of three methods for regional geoid computation: Stokes’s formula, least-squares collocation (LSC), and spherical radial base functions (RBFs) using the spline kernel (SK). It is a first attempt to compare the three methods theoretically and numerically in a unified framework. While Stokes integration and LSC may be regarded as classic methods for regional geoid computation, RBFs may still be regarded as a modern approach. All methods are theoretically equal when applied globally, and we therefore expect them to give comparable results in regional applications. However, it has been shown by de Min (Bull Géod 69:223–232, 1995. doi: 10.1007/BF00806734) that the equivalence of Stokes’s formula and LSC does not hold in regional applications without modifying the cross-covariance function. In order to make all methods comparable in regional applications, the corresponding modification has also been introduced into the SK. Finally, we present numerical examples comparing Stokes’s formula, LSC, and SKs in a closed-loop environment using synthetic noise-free data, to verify their equivalence. All methods agree at the millimeter level.
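For reference, the classical Stokes integral that this comparison starts from, in its usual spherical form (generic notation, not taken from the paper):

```latex
% Geoid undulation from gravity anomalies via Stokes's formula
N(\varphi,\lambda) = \frac{R}{4\pi\gamma}\iint_{\sigma} \Delta g(\varphi',\lambda')\, S(\psi)\, d\sigma ,
\qquad
S(\psi) = \frac{1}{\sin(\psi/2)} - 6\sin\frac{\psi}{2} + 1 - 5\cos\psi
        - 3\cos\psi \,\ln\!\left(\sin\frac{\psi}{2} + \sin^2\frac{\psi}{2}\right)
```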

4.
The ionosphere effective height (IEH) is a very important parameter in total electron content (TEC) measurements under the widely used single-layer model assumption. To avoid the need for a large amount of simultaneous vertical and slant ionospheric observations or dense “coinciding” pierce-point data, a new approach is proposed for determining the optimal IEH: the vertical TEC (VTEC) converted with a mapping function based on a given IEH is compared with the “ground truth” VTEC provided by the combined International GNSS Service Global Ionospheric Maps. The optimal IEH in the Chinese region is determined using three different methods based on GNSS data. Based on ionosonde data from three different locations in China, the altitude of the peak electron density (hmF2) is found to have clear diurnal, seasonal and latitudinal dependences, and its diurnal variation ranges from approximately 210 to 520 km in Hainan. The determination of the optimal IEH employing the inverse method suggested by Birch et al. (Radio Sci 37, 2002. doi: 10.1029/2000rs002601) did not yield a consistent altitude in the Chinese region. Tests of the method minimizing the mapping function errors suggested by Nava et al. (Adv Space Res 39:1292–1297, 2007) indicate that the optimal IEH ranges from 400 to 600 km, and a height of 450 km is the most frequent IEH at both high and low solar activity. It is also confirmed that an IEH of 450–550 km is preferred for the Chinese region, instead of the commonly adopted 350–450 km, when using the determination method proposed in this paper.
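A minimal sketch of the single-layer model mapping function implied above (thin-shell form); the Earth radius and the zenith-angle convention at the receiver are generic assumptions, not values from the paper.

```python
import numpy as np

R_EARTH_KM = 6371.0

def slm_mapping_function(zenith_deg, ieh_km):
    """Single-layer model mapping factor STEC/VTEC for a given ionosphere effective height."""
    z = np.radians(zenith_deg)
    sin_zp = R_EARTH_KM / (R_EARTH_KM + ieh_km) * np.sin(z)   # zenith angle at the pierce point
    return 1.0 / np.sqrt(1.0 - sin_zp**2)

# converting a slant TEC to vertical TEC under a candidate IEH:
# vtec = stec / slm_mapping_function(zenith_deg=70.0, ieh_km=450.0)
```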

5.
Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem, computational efficiency, using maximum likelihood estimation (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, \(1/f^{\alpha }\) with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time-domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi: 10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet it is only an approximate solution for power-law indices >1.0 since it requires the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
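A small sketch of the filtering idea mentioned above (representing power-law noise by filtering white noise, and adding noise processes rather than combining them in quadrature), using the standard fractional-differencing recursion for the filter coefficients; this is a generic illustration, not the paper's code.

```python
import numpy as np

def powerlaw_filter_coeffs(alpha, n):
    """Impulse response h of a 1/f^alpha power-law filter (fractional differencing).

    h[0] = 1, h[k] = h[k-1] * (k - 1 + alpha/2) / k
    """
    h = np.ones(n)
    for k in range(1, n):
        h[k] = h[k - 1] * (k - 1 + alpha / 2.0) / k
    return h

def simulate_noise(alpha, n, sigma_pl=1.0, sigma_wh=1.0, seed=None):
    """White noise plus filtered power-law noise, added (not combined in quadrature)."""
    rng = np.random.default_rng(seed)
    h = powerlaw_filter_coeffs(alpha, n)
    pl = np.convolve(rng.standard_normal(n), h)[:n]   # power-law component
    return sigma_wh * rng.standard_normal(n) + sigma_pl * pl
```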

6.
Proper understanding of how the Earth’s mass distributions and redistributions influence the Earth’s gravity field-related functionals is crucial for numerous applications in geodesy, geophysics and related geosciences. Calculations of the gravitational curvatures (GC) have been proposed in geodesy in recent years. In view of future satellite missions, sixth-order developments of the gradients are becoming requisite. In this paper, a set of 3D integral GC formulas for a tesseroid mass body is provided by means of spherical integral kernels in the spatial domain. Based on the Taylor series expansion approach, the numerical expressions of the 3D GC formulas are provided up to sixth order. Moreover, numerical experiments demonstrate the correctness of the 3D Taylor series approach for the GC formulas up to sixth order. Analogous to other gravitational effects (e.g., gravitational potential, gravity vector, gravity gradient tensor), it is found numerically that the very-near-area problem and the polar singularity problem exist in the GC east–east–radial, north–north–radial and radial–radial–radial components in the spatial domain, and, compared to the other gravitational effects, the relative approximation errors of the GC components are larger due not only to the influence of the geocentric distance but also to that of the latitude. This study shows that the magnitude of each term of the nonzero GC functionals for a grid resolution of 15\(^{\prime }\,\times \) 15\(^{\prime }\) at GOCE satellite height can reach about 10\(^{-16}\) m\(^{-1}\) s\(^{-2}\) for zero order, 10\(^{-24}\) or 10\(^{-23}\) m\(^{-1}\) s\(^{-2}\) for second order, 10\(^{-29}\) m\(^{-1}\) s\(^{-2}\) for fourth order and 10\(^{-35}\) or 10\(^{-34}\) m\(^{-1}\) s\(^{-2}\) for sixth order, respectively.

7.
Automatic building extraction is an important topic for many applications such as urban planning, disaster management, 3D building modeling and updating GIS databases. Its approaches mainly depend on two data sources, light detection and ranging (LiDAR) point clouds and aerial imagery, both of which have advantages and disadvantages of their own. In this study, in order to benefit from the advantages of each data source, LiDAR and image data were combined, and the building boundaries were then extracted with an automated active contour algorithm implemented in MATLAB. The active contour algorithm uses initial contour positions to segment an object in the image. The initial contour positions were detected without user interaction by a series of image enhancements, band ratios and morphological operations. Four test areas with varying building and background levels of detail were selected from the ISPRS benchmark Vaihingen and Istanbul datasets. Vegetation and shadows were removed from all the datasets by band ratios to improve segmentation quality. Subsequently, the LiDAR point cloud data were converted to raster format and added to the aerial imagery as an extra band. The resulting merged image and the initial contour positions were given to the active contour algorithm to extract the building boundaries. In order to assess the contribution of LiDAR to the proposed method, the building boundaries were extracted from the input image before and after adding the LiDAR data as a layer. Finally, the extracted building boundaries were smoothed by the boundary regularization algorithm of Awrangjeb (Int J Remote Sens 37(3):551–579, 2016. https://doi.org/10.1080/01431161.2015.1131868). Correctness (Corr), completeness (Comp) and quality (Q) metrics were used to assess the accuracy of the segmented building boundaries by comparing them with manually digitized building boundaries. The proposed approach shows promising results, with over 93% correctness, 92% completeness and 89% quality.
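A rough sketch of the kind of preprocessing described above (a band ratio to suppress vegetation, then morphological cleaning to obtain initial contour seeds from the merged image and LiDAR height layer); the band choices, thresholds and the nDSM input are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from scipy import ndimage

def initial_contour_seeds(red, nir, ndsm, ndvi_thresh=0.3, height_thresh=2.5):
    """Seed mask for the active contour: elevated, non-vegetated pixels.

    red, nir : image bands as float arrays
    ndsm     : normalized DSM (LiDAR height above ground) on the same grid
    """
    ndvi = (nir - red) / (nir + red + 1e-9)            # band ratio suppressing vegetation
    mask = (ndvi < ndvi_thresh) & (ndsm > height_thresh)
    mask = ndimage.binary_opening(mask, structure=np.ones((5, 5)))  # remove speckle
    mask = ndimage.binary_fill_holes(mask)
    labels, n_buildings = ndimage.label(mask)          # one seed region per candidate building
    return labels, n_buildings
```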

8.
As a precursor study for the upcoming combined Earth Gravitational Model 2020 (EGM2020), the Experimental Gravity Field Model XGM2016, parameterized as a spherical harmonic series up to degree and order 719, is computed. XGM2016 shares the same combination methodology as its predecessor model GOCO05c (Fecher et al. in Surv Geophys 38(3):571–590, 2017. doi: 10.1007/s10712-016-9406-y). The main difference between these models is that XGM2016 is supported by an improved terrestrial data set of 15′ × 15′ gravity anomaly area-means provided by the United States National Geospatial-Intelligence Agency (NGA), resulting in significant upgrades compared to existing combined gravity field models, especially in continental areas such as South America, Africa, parts of Asia, and Antarctica. A combination strategy based on relative regional weighting provides improved performance in near-coastal ocean regions, including regions where the altimetric data are mostly unchanged from previous models. Comparing cumulative height anomalies from EGM2008 and XGM2016 up to degree/order 719 yields differences of 26 cm in Africa and 40 cm in South America. These differences result from the inclusion of additional satellite data as well as from the improved ground data in these regions. XGM2016 also yields a smoother Mean Dynamic Topography with significantly reduced artifacts, which indicates an improved modeling of the ocean areas.
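For context, the usual spectral measure behind the “cumulative height anomaly difference” quoted above, in generic notation (fully normalized coefficient differences; degree-0/1 handling and the normal field are left implicit, as an assumption of this sketch):

```latex
% Cumulative height-anomaly difference between two models up to degree L
\delta\zeta(L) \approx R \sqrt{\sum_{n=2}^{L}\sum_{m=0}^{n}
  \left[\left(\Delta \bar{C}_{nm}\right)^{2} + \left(\Delta \bar{S}_{nm}\right)^{2}\right]}
```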

9.
Well credited and widely used ionospheric models, such as the International Reference Ionosphere or NeQuick, describe the variation of the electron density with height by means of a piecewise profile tied to the F2-peak parameters: the electron density, NmF2, and the height, hmF2. Accurate values of these parameters are crucial for retrieving reliable electron density estimations from those models. When direct measurements of these parameters are not available, the models compute them using the so-called ITU-R database, which was established in the early 1960s. This paper presents a technique aimed at routinely updating the ITU-R database using radio occultation electron density profiles derived from GPS measurements gathered from low Earth orbit satellites. Before being used, these radio occultation profiles are validated by fitting an electron density model to them. A re-weighted least-squares algorithm is used to down-weight unreliable measurements (occasionally, entire profiles) and to retrieve NmF2 and hmF2 values—together with their error estimates—from the profiles. These values are used to update the database monthly; the database consists of two sets of ITU-R-like coefficients that could easily be implemented in the IRI or NeQuick models. The technique was tested with radio occultation electron density profiles that are delivered to the community by the COSMIC/FORMOSAT-3 mission team. Tests were performed for solstice and equinox seasons in high and low solar activity conditions. The global mean error of the resulting maps—estimated by the least-squares technique—is between 0.5 × 10^10 and 3.6 × 10^10 el/m^3 for the F2-peak electron density (equivalent to 7% of the value of the estimated parameter) and from 2.0 to 5.6 km for the height (~2%).
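A generic sketch of the re-weighted least-squares (IRLS) down-weighting scheme mentioned above; the Cauchy-type weight function, the MAD scale estimate and the tuning constant are assumptions chosen for illustration, not the weights used in the paper.

```python
import numpy as np

def irls(A, y, n_iter=10, c=2.5):
    """Iteratively re-weighted least squares with a Cauchy-type robust weight."""
    w = np.ones(len(y))
    x = None
    for _ in range(n_iter):
        sw = np.sqrt(w)[:, None]
        x, *_ = np.linalg.lstsq(sw * A, np.sqrt(w) * y, rcond=None)
        r = y - A @ x
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # robust scale (MAD)
        w = 1.0 / (1.0 + (r / (c * s)) ** 2)                      # down-weight outliers
    return x, w
```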

10.
The LLL algorithm, introduced by Lenstra et al. (Math Ann 261:515–534, 1982), plays a key role in many fields of applied mathematics. In particular, it is used as an effective numerical tool for preconditioning the integer least-squares problems arising in high-precision geodetic positioning and Global Navigation Satellite Systems (GNSS). In 1992, Teunissen developed a method for solving these nearest-lattice-point (NLP) problems. This method is referred to as Lambda (for Least-squares AMBiguity Decorrelation Adjustment). The preconditioning stage of Lambda corresponds to its decorrelation algorithm. From an epistemological point of view, the latter was devised through an innovative statistical approach completely independent of the LLL algorithm. Recent papers pointed out some similarities between the LLL algorithm and the Lambda decorrelation algorithm. We try to clarify this point in the paper. We first introduce a parameter measuring the orthogonality defect of the integer basis in which the NLP problem is solved: the LLL-reduced basis of the LLL algorithm, or the Λ-basis of the Lambda method. With regard to this problem, the potential qualities of these bases can then be compared. The Λ-basis is built by working at the level of the variance-covariance matrix of the float solution, while the LLL-reduced basis is built by working at the level of its inverse. As a general rule, the orthogonality defect of the Λ-basis is greater than that of the corresponding LLL-reduced basis; these bases are however very close to one another. To specify this tight relationship, we present a method that provides the dual LLL-reduced basis of a given Λ-basis. As a consequence of this basic link, all the recent developments made on the LLL algorithm can be applied to the Lambda decorrelation algorithm. This point is illustrated in a concrete manner: we present a parallel Λ-type decorrelation algorithm derived from the parallel LLL algorithm of Luo and Qiao (Proceedings of the fourth international C* conference on computer science and software engineering. ACM Int Conf P Series. ACM Press, pp 93–101, 2012).
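One common definition of the orthogonality defect referred to above, for a lattice basis B = (b_1, …, b_n); the notation is generic and used here only to fix ideas:

```latex
% Orthogonality defect of a lattice basis b_1, ..., b_n
\delta(B) \;=\; \frac{\prod_{i=1}^{n} \lVert b_i \rVert}{\sqrt{\det\!\left(B^{\mathsf T} B\right)}} \;\ge\; 1 ,
\qquad \delta(B) = 1 \iff \text{the basis is orthogonal.}
```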

11.
12.
The present paper deals with the least-squares adjustment where the design matrix A is rank-deficient. The adjusted parameters \(\hat{x}\) as well as their variance-covariance matrix \(\Sigma_{\hat{x}}\) can be obtained as in the “standard” adjustment where A has full column rank, supplemented with constraints \(C\hat{x} = w\), where C is the constraint matrix and w is sometimes called the “constant vector”. In this analysis only the inner adjustment constraints are considered, where C has full row rank equal to the rank deficiency of A, and \(AC^{T} = 0\). Perhaps the most important outcome points to three kinds of results (a small numerical sketch follows after the list):
  1. A general least-squares solution, where both \(\hat{x}\) and \(\Sigma_{\hat{x}}\) are indeterminate, corresponds to w = arbitrary random vector.
  2. The minimum-trace (least-squares) solution, where \(\hat{x}\) is indeterminate but \(\Sigma_{\hat{x}}\) is determined (and trace \(\Sigma_{\hat{x}}\) is minimum), corresponds to w = arbitrary constant vector.
  3. The minimum-norm (least-squares) solution, where both \(\hat{x}\) and \(\Sigma_{\hat{x}}\) are determined (and norm \(\hat{x}\) and trace \(\Sigma_{\hat{x}}\) are minimum), corresponds to w ≡ 0.
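The sketch promised above: a toy rank-deficient adjustment (a hypothetical difference-observation network, not from the paper) showing that the inner-constraint solution with w = 0 coincides with the minimum-norm pseudoinverse solution. All matrices and data are made up for illustration.

```python
import numpy as np

# Toy rank-deficient design: observations are differences of three parameters,
# so there is a one-dimensional datum defect (a common shift of all parameters).
A = np.array([[-1.0, 1.0, 0.0],
              [ 0.0,-1.0, 1.0],
              [-1.0, 0.0, 1.0]])
y = np.array([1.02, 0.98, 2.01])

# Inner constraint: C spans the null space of A (equal shifts), so A @ C.T == 0.
C = np.array([[1.0, 1.0, 1.0]])

# Minimum-norm least-squares solution via the pseudoinverse ...
x_pinv = np.linalg.pinv(A) @ y

# ... equals the solution of the normal equations bordered with C x = 0 (w = 0).
N = A.T @ A
K = np.block([[N, C.T], [C, np.zeros((1, 1))]])
rhs = np.concatenate([A.T @ y, [0.0]])
x_inner = np.linalg.solve(K, rhs)[:3]

print(np.allclose(x_pinv, x_inner))   # True: both give the minimum-norm estimate
```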

13.
The quality of the links between the different space geodetic techniques (VLBI, SLR, GNSS and DORIS) is still one of the major limiting factors for the realization of a unique global terrestrial reference frame that is accurate enough to allow the monitoring of the Earth system, i.e., of processes like sea level change, postglacial rebound and silent earthquakes. According to the specifications of the Global Geodetic Observing System of the International Association of Geodesy, such a reference frame should be accurate to 1 mm over decades, with rates of change stable at the level of 0.1 mm/year. The deficiencies arise from inaccurate or incomplete local ties at many fundamental sites as well as from systematic instrumental biases in the individual space geodetic techniques. Frequently repeated surveys, the continuous monitoring of antenna heights and the geometrical mount stability (Lösler et al. in J Geod 90:467–486, 2016.  https://doi.org/10.1007/s00190-016-0887-8) have not provided evidence for insufficient antenna stability. Therefore, we have investigated variations in the respective system delays caused by electronic circuits, which are not adequately captured by the calibration process, either because of subtle differences in the circuitry between geodetic measurement and calibration, because of high temporal variability, or because of a lack of resolving bandwidth. The measured system delay variations in the electronic chains of both the VLBI and SLR systems reach the order of 100 ps, which is equivalent to 3 cm of path length. Most of this variability is usually removed by the calibrations, but by far not all of it. This paper focuses on the development of new technologies and procedures for co-located geodetic instrumentation in order to identify and remove systematic measurement biases within and between the individual measurement techniques. A closed-loop optical time and frequency distribution system and a common inter-technique reference target provide the possibility to remove variable system delays. The main motivation for the newly established central reference target, locked to the station clock, is the combination of all space geodetic instruments at a single reference point at the observatory. On top of that, it provides the unique capability to perform a closure measurement based on the observation of time.

14.
The study areas, the Tikovil and Payappara sub-watersheds of the Meenachil river, cover 158.9 and 111.9 km², respectively. These watersheds are part of the Western Ghats, which is an ecologically sensitive region. The drainage network of the sub-watersheds was delineated from SOI topographical maps at 1:50,000 scale using the ArcGIS software. The stream orders were calculated using the method proposed by Strahler (1964). The drainage network shows that the terrain exhibits a dendritic to sub-dendritic drainage pattern. Stream order ranges from the fifth to the sixth order. Drainage density varies between 1.69 and 2.62 km/km². The drainage textures of the drainage basins are 2.3 km⁻¹ and 6.98 km⁻¹, categorized as coarse to very fine texture. Stream frequency is low in the case of the Payappara sub-watershed (1.78 km⁻²). The Payappara sub-watershed has the highest constant of channel maintenance, 0.59, indicating much fewer structural disturbances and lower runoff. The form factor varies between 0.42 and 0.55, suggesting an elongated shape for the Payappara sub-watershed and a rather more circular shape for the Tikovil sub-watershed. The mean bifurcation ratio (3.5) indicates that both sub-watersheds are within the natural stream system. Hence, the study shows that GIS techniques prove to be a competent tool in morphometric analysis.
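A worked sketch of the standard morphometric indices quoted above (drainage density, stream frequency, form factor, constant of channel maintenance, mean bifurcation ratio); the input numbers below are placeholders, not the Meenachil watershed data.

```python
def morphometric_indices(area_km2, basin_length_km, total_stream_length_km,
                         n_streams, streams_per_order):
    """Basic Horton/Strahler-style morphometric parameters."""
    dd = total_stream_length_km / area_km2   # drainage density [km/km^2]
    fs = n_streams / area_km2                # stream frequency [1/km^2]
    ff = area_km2 / basin_length_km ** 2     # form factor (higher = more circular)
    ccm = 1.0 / dd                           # constant of channel maintenance [km^2/km]
    # mean bifurcation ratio: average of N_u / N_{u+1} over successive orders
    rb = [streams_per_order[i] / streams_per_order[i + 1]
          for i in range(len(streams_per_order) - 1)]
    return dd, fs, ff, ccm, sum(rb) / len(rb)

# placeholder example (hypothetical basin, not the study data):
print(morphometric_indices(120.0, 16.0, 250.0, 210, [130, 48, 20, 8, 3, 1]))
```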

15.
We propose an approach for calibrating the horizontal tidal shear components, differential extension (\(\gamma _1\)) and engineering shear (\(\gamma _2\)), of two Sacks–Evertson (in Pap Meteorol Geophys 22:195–208, 1971) SES-3 borehole strainmeters installed in the Longitudinal Valley in eastern Taiwan. The method is based on the waveform reconstruction of the Earth and ocean tidal shear signals through linear regressions on the strain gauge signals, with variable sensor azimuth. This method allows us to derive the orientation of the sensor without any initial constraints and to calibrate the shear strain components \(\gamma _1\) and \(\gamma _2\) against the \(M_2\) tidal constituent. The results illustrate the potential of tensor strainmeters for recording horizontal tidal shear strain.
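Under the usual engineering-strain convention (an assumption here, since the abstract does not state the paper's sign conventions), the two shear components are combinations of the horizontal strain tensor elements:

```latex
% Horizontal shear strains from the strain tensor (E = east, N = north)
\gamma_1 = \varepsilon_{EE} - \varepsilon_{NN} \quad\text{(differential extension)}, \qquad
\gamma_2 = 2\,\varepsilon_{EN} \quad\text{(engineering shear)}
```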

16.
We can map zenith wet delays onto precipitable water with a conversion factor, but in order to calculate the exact conversion factor, we must precisely calculate its key variable, Tm (the weighted mean temperature). Yao et al. (J Geod 86:1125–1135, 2012. doi:10.1007/s00190-012-0568-1) established the first generation of the global Tm model (GTm-I) with ground-based radiosonde data, but due to the lack of radiosonde data at sea, the model appears to be abnormal in some areas. Given that sea surface temperature varies less than that on land, and that the GPT model and the Bevis Tm–Ts relationship are accurate enough to describe the surface temperature and Tm, this paper capitalizes on the GPT model and the Bevis Tm–Ts relationship to provide simulated Tm at sea, as compensation for the lack of data. Combined with the Tm from radiosonde data, we recalculated the GTm model coefficients. The results show that this method not only improves the accuracy of the GTm model significantly at sea but also improves that on land, making the GTm model more stable and practically applicable.
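A small sketch of the quantities involved: the Bevis et al. (1992) Tm–Ts relationship and the zenith-wet-delay-to-precipitable-water conversion factor. The refractivity constants below (k2' ≈ 22.1 K/hPa, k3 ≈ 3.739e5 K²/hPa) are commonly quoted values and are assumptions of this sketch, not values taken from the paper.

```python
def tm_from_ts(ts_kelvin):
    """Bevis et al. (1992) linear Tm-Ts relationship."""
    return 70.2 + 0.72 * ts_kelvin

def pw_conversion_factor(tm_kelvin, k2_prime=22.1, k3=3.739e5,
                         rho_w=1000.0, r_v=461.5):
    """Dimensionless factor Pi such that PW = Pi * ZWD (same length unit).

    The 1e8 combines the 1e6 refractivity scaling with the hPa-to-Pa conversion
    of the k constants; typical values of Pi are around 0.15-0.16.
    """
    return 1.0e8 / (rho_w * r_v * (k3 / tm_kelvin + k2_prime))

# example: ZWD of 0.20 m at a surface temperature of 293 K
tm = tm_from_ts(293.0)
print(pw_conversion_factor(tm) * 0.20)   # precipitable water in metres (~0.032)
```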

17.
Error analysis of the NGS’ surface gravity database
Are the National Geodetic Survey’s surface gravity data sufficient for supporting the computation of a 1 cm-accurate geoid? This paper attempts to answer this question by deriving a few measures of accuracy for this data and estimating their effects on the US geoid. We use a data set which comprises ~1.4 million gravity observations collected in 1,489 surveys. Comparisons to GRACE-derived gravity and geoid are made to estimate the long-wavelength errors. Crossover analysis and K-nearest-neighbor predictions are used for estimating local gravity biases and high-frequency gravity errors, and the corresponding geoid biases and high-frequency geoid errors are evaluated. Results indicate that 244 of all 1,489 surface gravity surveys have significant biases >2 mGal, with geoid implications that reach 20 cm. Some of the biased surveys are large enough in horizontal extent to be reliably corrected by satellite-derived gravity models, but many others are not. In addition, the results suggest that the data are contaminated by high-frequency errors with an RMS of ~2.2 mGal. This causes high-frequency geoid errors of a few centimeters in and to the west of the Rocky Mountains and in the Appalachians, and a few millimeters or less everywhere else. Finally, long-wavelength (>3°) surface gravity errors at the sub-mGal level but with large horizontal extent are found. All of the south and southeast of the USA is biased by +0.3 to +0.8 mGal and the Rocky Mountains by −0.1 to −0.3 mGal. These small but extensive gravity errors lead to long-wavelength geoid errors that reach 60 cm in the interior of the USA.

18.
Graph theory is useful for analyzing time-dependent model parameters estimated from interferometric synthetic aperture radar (InSAR) data in the temporal domain. Plotting acquisition dates (epochs) as vertices and pair-wise interferometric combinations as edges defines an incidence graph. The edge-vertex incidence matrix and the normalized edge Laplacian matrix are factors in the covariance matrix for the pair-wise data. Using empirical measures of residual scatter in the pair-wise observations, we estimate the relative variance at each epoch by inverting the covariance of the pair-wise data. We evaluate the rank deficiency of the corresponding least-squares problem via the edge-vertex incidence matrix. We implement our method in a MATLAB software package called GraphTreeTA available on GitHub (https://github.com/feigl/gipht). We apply temporal adjustment to the data set described in Lu et al. (J Geophys Res Solid Earth 110, 2005) at Okmok volcano, Alaska, which erupted most recently in 1997 and 2008. The data set contains 44 differential volumetric changes and uncertainties estimated from interferograms between 1997 and 2004. Estimates show that approximately half of the magma volume lost during the 1997 eruption was recovered by the summer of 2003. Between June 2002 and September 2003, the estimated rate of volumetric increase is (6.2 ± 0.6) × 10⁶ m³/year. Our preferred model provides a reasonable fit that is compatible with viscoelastic relaxation in the five years following the 1997 eruption. Although we demonstrate the approach using volumetric rates of change, our formulation in terms of incidence graphs applies to any quantity derived from pair-wise differences, such as range change, range gradient, or atmospheric delay.
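A minimal sketch of the temporal-adjustment idea: build the edge-vertex incidence matrix from the interferometric pairs and solve for per-epoch values by least squares, fixing the first epoch to handle the rank deficiency noted above. The epoch list and pair differences are made-up numbers, not the Okmok data, and this is not the GraphTreeTA code.

```python
import numpy as np

# epochs (vertices) and interferometric pairs (edges), with pair-wise differences
epochs = [0, 1, 2, 3]
pairs = [(0, 1), (1, 2), (0, 2), (2, 3)]
d = np.array([1.0, 0.5, 1.6, -0.2])        # e.g., change between the two epochs of each pair

# edge-vertex incidence matrix: each row has -1 at the earlier epoch, +1 at the later one
B = np.zeros((len(pairs), len(epochs)))
for k, (i, j) in enumerate(pairs):
    B[k, i], B[k, j] = -1.0, 1.0

# the columns of B sum to zero, so fix the first epoch to zero (datum choice)
m, *_ = np.linalg.lstsq(B[:, 1:], d, rcond=None)
m = np.concatenate([[0.0], m])             # per-epoch values relative to the first epoch
print(m)
```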

19.
Large-scale mass redistribution in the terrestrial water storage (TWS) leads to changes in the low-degree spherical harmonic coefficients of the Earth’s surface mass density field. Studying these low-degree fluctuations is an important task that contributes to our understanding of continental hydrology. In this study, we use global GNSS measurements of vertical and horizontal crustal displacements that we correct for atmospheric and oceanic effects, and use a set of modified basis functions similar to Clarke et al. (Geophys J Int 171:1–10, 2007) to perform an inversion of the corrected measurements in order to recover changes in the coefficients of degree-0 (hydrological mass change), degree-1 (centre of mass shift) and degree-2 (flattening of the Earth) caused by variations in the TWS over the period January 2003–January 2015. We infer from the GNSS-derived degree-0 estimate an annual variation in total continental water mass with an amplitude of \((3.49 \pm 0.19) \times 10^{3}\) Gt and a phase of \(70^{\circ } \pm 3^{\circ }\) (implying a peak in early March), in excellent agreement with corresponding values derived from the Global Land Data Assimilation System (GLDAS) water storage model that amount to \((3.39 \pm 0.10) \times 10^{3}\) Gt and \(71^{\circ } \pm 2^{\circ }\), respectively. The degree-1 coefficients we recover from GNSS predict annual geocentre motion (i.e. the offset change between the centre of common mass and the centre of figure) caused by changes in TWS with amplitudes of \(0.69 \pm 0.07\) mm for GX, \(1.31 \pm 0.08\) mm for GY and \(2.60 \pm 0.13\) mm for GZ. These values agree with GLDAS and estimates obtained from the combination of GRACE and the output of an ocean model using the approach of Swenson et al. (J Geophys Res 113(B8), 2008) at the level of about 0.5, 0.3 and 0.9 mm for GX, GY and GZ, respectively. Corresponding degree-1 coefficients from SLR, however, generally show higher variability and predict larger amplitudes for GX and GZ. The results we obtain for the degree-2 coefficients from GNSS are slightly mixed, and the level of agreement with the other sources heavily depends on the individual coefficient being investigated. The best agreement is observed for \(T_{20}^C\) and \(T_{22}^S\), which contain the most prominent annual signals among the degree-2 coefficients, with amplitudes amounting to \((5.47 \pm 0.44) \times 10^{-3}\) and \((4.52 \pm 0.31) \times 10^{-3}\) m of equivalent water height (EWH), respectively, as inferred from GNSS. Corresponding agreement with values from SLR and GRACE is at the level of or better than \(0.4 \times 10^{-3}\) and \(0.9 \times 10^{-3}\) m of EWH for \(T_{20}^C\) and \(T_{22}^S\), respectively, while for both coefficients, GLDAS predicts smaller amplitudes. Somewhat lower agreement is obtained for the order-1 coefficients, \(T_{21}^C\) and \(T_{21}^S\), while our GNSS inversion seems unable to reliably recover \(T_{22}^C\). For all the coefficients we consider, the GNSS-derived estimates from the modified inversion approach are more consistent with the solutions from the other sources than corresponding estimates obtained from an unconstrained standard inversion.

20.
We show that the current levels of accuracy being achieved for the precise orbit determination (POD) of low-Earth orbiters demonstrate the need for the self-consistent treatment of tidal variations in the geocenter. Our study uses as an example the POD of the OSTM/Jason-2 satellite altimeter mission based upon Global Positioning System (GPS) tracking data. Current GPS-based POD solutions are demonstrating root-mean-square (RMS) radial orbit accuracy and precision of <1 cm and 1 mm, respectively. Meanwhile, we show that the RMS of three-dimensional tidal geocenter variations is <6 mm, but can be as large as 15 mm, with the largest component along the Earth’s spin axis. Our results demonstrate that GPS-based POD of Earth orbiters is best performed using GPS satellite orbit positions that are defined in a reference frame whose origin is at the center of mass of the entire Earth system, including the ocean tides. Errors in the GPS-based POD solutions for OSTM/Jason-2 of <4 mm (3D RMS) and <2 mm (radial RMS) are introduced when tidal geocenter variations are not treated consistently. Nevertheless, inconsistent treatment is measurable in the OSTM/Jason-2 POD solutions and manifests through degraded post-fit tracking data residuals, orbit precision, and relative orbit accuracy. For the latter metric, sea surface height crossover variance is higher by 6 mm² when tidal geocenter variations are treated inconsistently.
