Similar documents
20 similar documents found.
1.
Jan Kouba 《Journal of Geodesy》1983,57(1-4):138-145
Short-arc orbit computations by numerical or analytical integration of the equations of motion, as traditionally used in geodetic and geodynamic satellite positioning, are relatively involved and computationally expensive. However, short-arc orbits can be evaluated more efficiently by means of least-squares polynomial approximations. Such orbit computations do not significantly increase the computation time when compared to the widely used semi-short-arc techniques that utilize externally generated orbits. A sufficiently high-degree polynomial approximation of the second time derivatives of the position coordinates, evaluated from a gravitational potential model at regular (two-minute) intervals, together with averaged initial conditions (position and velocity vectors at the beginning, the middle and the end of a pass), reproduces the U.S. Defense Mapping Agency precise ephemeris of the Navy Navigation Satellites (NNSS) to about 5 cm RMS in each coordinate. To achieve this level of orbit shape resolution for NNSS satellites, the gravitational potential model should not be truncated at less than degree and order 10. Contribution of the Earth Physics Branch No. 1034.
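The least-squares polynomial idea above is easy to sketch numerically. The two-minute sampling interval follows the abstract, but the single-coordinate cosine model, its amplitude and the fitting degree below are invented purely for illustration (the paper fits accelerations from a potential model, not positions):

```python
import numpy as np

# Hypothetical single-coordinate samples of a 30-minute pass at regular
# two-minute intervals; the cosine orbit model and its amplitude are
# invented for illustration.
t = np.arange(16) * 120.0                      # seconds
x = 7.0e6 * np.cos(2.0 * np.pi * t / 6000.0)   # metres

degree = 8                                     # a "sufficiently high-degree" fit
s = t / t.max()                                # scale abscissa for conditioning
coeffs = np.polyfit(s, x, degree)              # least-squares polynomial
x_fit = np.polyval(coeffs, s)

rms = np.sqrt(np.mean((x_fit - x) ** 2))
print(f"RMS of the polynomial approximation: {rms:.4f} m")
```

Raising the degree drives the RMS of fit further toward the few-centimetre level quoted in the abstract.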

2.
Array algebra forms the general base of fast transforms and multilinear algebra, making rigorous solutions with a large number (millions) of parameters computationally feasible. Loop inverses are operators solving the problem of general matrix inverses. Their derivation starts from the inconsistent linear equations by a parameter exchange X ↔ L₀, where X is a set of unknown observables and A₀ forms a basis of the so-called "problem space". The resulting full-rank design matrix of the parameters L₀ and its ℓ-inverse reveal properties that speed up the computational least-squares solution expressed in the observed values. The loop inverses are found by a back substitution expressing X̂ in terms of L. If p = rank(A) ≤ n, this chain operator creates the pseudoinverse A⁺. The idea of loop inverses and array algebra started in the late 1960s from the further specialized case p = n = rank(A), where the loop inverse A₀⁻¹(AA₀⁻¹)ₗ reduces to the ℓ-inverse Aₗ = (AᵀA)⁻¹Aᵀ. The physical interpretation of the design matrix AA₀⁻¹ as an interpolator, associated with the parameters L₀, and the consideration of its multidimensional version have resulted in extended rules of matrix and tensor calculus and mathematical statistics called array algebra.
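The ℓ-inverse named above can be checked numerically; a small sketch with an arbitrary full-column-rank design matrix:

```python
import numpy as np

# Sketch of the l-inverse mentioned above: for a design matrix A of full
# column rank, (A^T A)^{-1} A^T coincides with the Moore-Penrose
# pseudoinverse A+. The matrix values are arbitrary.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])                  # n = 4 observations, p = 2 parameters

ell_inverse = np.linalg.inv(A.T @ A) @ A.T  # classical least-squares inverse
pseudo = np.linalg.pinv(A)                  # Moore-Penrose A+

print(np.allclose(ell_inverse, pseudo))     # the two operators agree
```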

3.
The resolution of a nonlinear parametric adjustment model is addressed through an isomorphic geometrical setup with tensor structure and notation, represented by a u-dimensional "model surface" embedded in a flat n-dimensional "observational space". The n observations correspond to the observational-space coordinates of the point Q, the u initial parameters correspond to the model-surface coordinates of the "initial" point P, and the u adjusted parameters correspond to the model-surface coordinates of the "least-squares" point P̂. The least-squares criterion results in a minimum-distance property implying that the vector from P̂ to Q must be orthogonal to the model surface. The geometrical setup leads to the solution of modified normal equations, characterized by a positive-definite matrix. The latter contains second-order and, optionally, third-order partial derivatives of the observables with respect to the parameters. This approach significantly shortens the convergence process as compared to the standard (linearized) method.
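For contrast with the paper's higher-order approach, here is a minimal sketch of the standard linearized (Gauss-Newton) adjustment it improves upon; the one-parameter exponential model and the data are invented for illustration:

```python
import numpy as np

# Minimal sketch of the standard linearized least-squares iteration on an
# invented one-parameter model surface in R^4.
def model(a, t):
    return np.exp(a * t)          # u = 1 parameter

t = np.array([0.0, 0.5, 1.0, 1.5])
obs = model(0.7, t)               # synthetic observations (point Q), a_true = 0.7

a = 0.1                           # initial parameter (point P)
for _ in range(20):
    r = obs - model(a, t)         # residual vector
    J = t * model(a, t)           # Jacobian d(model)/da
    a += (J @ r) / (J @ J)        # normal-equation update
print(f"adjusted parameter: {a:.6f}")
```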

4.
In the last years a new formulation of Molodensky's problem has been given, in which the gravity vector is considered the independent variable of the problem, while the position vector is the dependent one. This new approach has the great advantage of transforming the problem of Molodensky, which is of free-boundary type, into a fixed-boundary problem for a nonlinear differential equation. In this paper the first results of the study of the new approach are summarized, without going into many mathematical details. The problem of Molodensky for the rotating Earth is also discussed.

5.
Techniques are presented for the design of one-dimensional gravity nets by means of given variance-covariance matrices. After a critical review of the methods for the solution of the corresponding matrix equation, we compare different numerical results in order to judge the quality of the designs carried out by means of an SVD criterion matrix, by a criterion matrix created according to an assumed distance dependence of the mean errors of the grid points, and by means of an iteratively improved criterion matrix, respectively.

6.
A simple statistical approach has been applied to the repeated electro-optical distance measurements (EDM) of 1,358 lines in the Tohoku district of Japan to obtain knowledge about the precision of EDM and the possible accumulation of strain. The average time interval between measurements is about seven or eight years. It is shown that the whole data set of differences between distance measurements repeated over a given line D can be interpreted in terms of EDM errors comprising distance-proportional systematic errors and standard errors of the usual form (a constant part plus a distance-proportional part). The rate of horizontal deformation must therefore be much smaller than the strain rates of about 0.7-0.8 ppm over 7 to 8 years which have hitherto been expected.
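The usual EDM error form mentioned above (a constant part plus a distance-proportional part) can be sketched as follows; the quadrature combination and the coefficient values are illustrative assumptions, not the paper's estimates:

```python
import math

# Hedged sketch of the usual EDM error model: a constant part a_mm and a
# distance-proportional part b_ppm * D, combined in quadrature. The
# coefficients are illustrative, not the paper's estimates.
def edm_sigma(distance_m, a_mm=3.0, b_ppm=2.0):
    """Standard error of a single distance measurement, in mm."""
    return math.hypot(a_mm, b_ppm * 1e-6 * distance_m * 1e3)

for d_km in (1.0, 5.0, 10.0):
    print(f"D = {d_km:4.1f} km   sigma = {edm_sigma(d_km * 1e3):5.2f} mm")
```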

7.
Since the Earth is closer to an ellipsoid of revolution than to a sphere, it is very important to study directly the original model of the Stokes BVP on the reference ellipsoid, with the Somigliana normal gravity and the outer normal direction h of the ellipsoid. This paper deals with: 1) simplification of the above BVP while preserving its accuracy; 2) derivation of computational formulas for the elliptical harmonic series; 3) solution of the BVP by the elliptical harmonic series; and 4) a principle for finding the elliptical harmonic model of the Earth's gravity field from the spherical harmonic coefficients of g. All results given in the paper have the same accuracy as the original BVP; that is, the accuracy of the BVP is theoretically preserved in each derivation step.

8.
The conventional expansions of the gravity gradients in the local north-oriented reference frame have a complicated form, depending on the first- and second-order derivatives of the associated Legendre functions of the colatitude and containing factors which tend to infinity when approaching the poles. In the present paper, the general term of each of these series is transformed to a product of a geopotential coefficient and a sum of several adjacent Legendre functions of the colatitude multiplied by a function of the longitude. These transformations are performed on the basis of relations between the Legendre functions and their derivatives published by Ilk (1983). The second-order geopotential derivatives corresponding to the local orbital reference frame are presented as linear functions of the north-oriented gravity gradients. The new expansions for the latter are substituted into these functions. As a result, the orbital derivatives are also presented as series depending on the geopotential coefficients multiplied by sums of the Legendre functions, whose coefficients depend on the longitude and the satellite track azimuth at an observation point. The derived expansions of the observables can be applied for constructing a geopotential model from the GOCE mission data by the time-wise and space-wise approaches. The numerical experiments demonstrate the correctness of the analytical formulas. An erratum to this article is available.

9.
The Bayesian estimate σ̂_b of the standard deviation σ in a linear model, as needed for the evaluation of reliability, is well known to be proportional to the square root of the Bayesian estimate (s²)_b of the variance component σ², with a proportionality factor involving a ratio of Gamma functions. However, in analogy to the case of the respective unbiased estimates, the troublesome exact computation of σ̂_b may be avoided by a simple approximation which turns out to be good enough for most applications, even if the degree of freedom ν is rather small. Paper presented to the Int. Conf. on "Practical Bayesian Statistics", Cambridge (UK), 8-11 July 1986.
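The flavour of such Gamma-function ratios can be shown with the analogous factor for the unbiased estimate of σ; this is an illustrative stand-in, not the paper's exact Bayesian factor:

```python
from math import gamma, sqrt

# Illustrative sketch only: the Gamma-function ratio for the unbiased
# estimate of sigma. The factor tends to 1 as the degrees of freedom nu
# grow, which is why a simple approximation suffices for most applications.
def sigma_factor(nu):
    """sqrt(2/nu) * Gamma((nu + 1)/2) / Gamma(nu/2)."""
    return sqrt(2.0 / nu) * gamma((nu + 1) / 2.0) / gamma(nu / 2.0)

for nu in (2, 5, 10, 50):
    print(f"nu = {nu:3d}   factor = {sigma_factor(nu):.6f}")
```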

10.
A new method for modeling the ionospheric delay using global positioning system (GPS) data is proposed, called the ionospheric eclipse factor method (IEFM). It is based on establishing a concept referred to as the ionospheric eclipse factor (IEF) λ of the ionospheric pierce point (IPP), together with the IEF's influence factor (IFF). The IEF can be used to make a relatively precise distinction between ionospheric daytime and nighttime, whereas the IFF is advantageous for describing the IEF's variations with day, month, season and year, associated with seasonal variations of the total electron content (TEC) of the ionosphere. By combining λ and the IFF with the local time t of the IPP, the IEFM is able to precisely distinguish between ionospheric daytime and nighttime, as well as to combine them efficiently during different seasons or months of the year at the IPP. The IEFM-based ionospheric delay estimates are validated by combining an absolute positioning mode with several ionospheric delay correction models or algorithms, using GPS data at an International GNSS Service (IGS) station (WTZR). Our results indicate that the IEFM may further improve ionospheric delay modeling using GPS data.

11.
The vector-based algorithm to transform Cartesian coordinates (X, Y, Z) into geodetic coordinates (φ, λ, h) presented by Feltens (J Geod, 2007) has been extended to triaxial ellipsoids. The extended algorithm is again based on simple formulae and has been tested successfully for the Earth and other celestial bodies and for a wide range of positive and negative ellipsoidal heights.
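As a point of comparison, a minimal sketch of the classical fixed-point iteration for the ordinary biaxial ellipsoid (a standard textbook method, not Feltens' vector-based triaxial algorithm); the WGS84-like constants and the test point are illustrative:

```python
import math

# Hedged sketch: fixed-point iteration from Cartesian (X, Y, Z) to geodetic
# (phi, lam, h) on a biaxial ellipsoid, with WGS84-like constants.
A = 6378137.0                  # semi-major axis (m)
E2 = 6.69437999014e-3          # first eccentricity squared

def cartesian_to_geodetic(x, y, z, iterations=10):
    lam = math.atan2(y, x)
    p = math.hypot(x, y)
    phi = math.atan2(z, p * (1.0 - E2))            # initial guess
    for _ in range(iterations):
        n = A / math.sqrt(1.0 - E2 * math.sin(phi) ** 2)
        h = p / math.cos(phi) - n
        phi = math.atan2(z, p * (1.0 - E2 * n / (n + h)))
    return phi, lam, h

# round-trip check at an arbitrary point
phi0, lam0, h0 = math.radians(48.2), math.radians(16.4), 1234.5
n0 = A / math.sqrt(1.0 - E2 * math.sin(phi0) ** 2)
x = (n0 + h0) * math.cos(phi0) * math.cos(lam0)
y = (n0 + h0) * math.cos(phi0) * math.sin(lam0)
z = (n0 * (1.0 - E2) + h0) * math.sin(phi0)
phi, lam, h = cartesian_to_geodetic(x, y, z)
print(f"phi error = {abs(phi - phi0):.2e} rad,  h error = {abs(h - h0):.2e} m")
```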

12.
An investigation was made of the behaviour of the variable x_ij (where the ρ_ij are the discrepancies between the direct and reverse measurements of the height difference of consecutive bench marks and the R_ij are their distances apart) in a partial net of the Italian high-precision levelling with a total length of about 1,400 km. The methods of analysis employed were in general non-parametric individual and cumulative tests; in particular, randomness, normality and asymmetry tests were carried out. The computers employed were IBM 7094/7040. From the results, evidence was obtained of the existence of an asymmetry with respect to zero of the x_ij, confirming the well-known results first given by Lallemand. A new result was obtained from the tests of randomness, which revealed trends in the mean values of the x_ij and explained some anomalous behaviours of the cumulative discrepancy curves. The extension of this investigation to a broader net, possibly covering other national nets, would be very useful to gain a deeper insight into the behaviour of the errors in high-precision levelling. Ad hoc programs for electronic computers are available to accomplish this job quickly. Presented at the 14th International Assembly of Geodesy (Lucerne, 1967).

13.
Spherical harmonic series, commonly used to represent the Earth's gravitational field, are now routinely expanded to ultra-high degree (> 2,000), where the computations of the associated Legendre functions exhibit extremely large ranges (thousands of orders) of magnitude with varying latitude. We show that in the degree-and-order domain (ℓ, m) of these functions (with full ortho-normalization), their rather stable oscillatory behavior is distinctly separated from a region of very strong attenuation by a simple linear relationship, approximately m = ℓ sin θ, where θ is the polar angle. Derivatives and integrals of associated Legendre functions share these same characteristics. This leads to an operational approach to the computation of spherical harmonic series, including derivatives and integrals of such series, that neglects the numerically insignificant functions on the basis of the above empirical relationship and obviates any concern about their broad range of magnitudes in the recursion formulas used to compute them. Tests with a simulated gravitational field show that the resulting errors can be made less than the data noise at all latitudes and up to an expansion degree of at least 10,800. Neglecting numerically insignificant terms in the spherical harmonic series also offers a computational savings of at least one third.
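The separation between the oscillatory and attenuated regions can be demonstrated with the standard fixed-order forward recursion for fully normalized associated Legendre functions; the degree, orders and angle below are arbitrary choices:

```python
import math

# Sketch of the attenuation behaviour described above: fully normalized
# associated Legendre functions collapse in magnitude once the order m
# exceeds roughly l * sin(theta).
def legendre_column(l_max, m, theta):
    """Fully normalized P_{l,m}(cos theta) for l = m .. l_max."""
    t, u = math.cos(theta), math.sin(theta)
    pmm = 1.0                                   # sectoral seed P_mm
    for k in range(1, m + 1):
        pmm *= u * (math.sqrt(3.0) if k == 1 else math.sqrt((2 * k + 1) / (2 * k)))
    col = {m: pmm}
    prev2, prev1 = 0.0, pmm
    for l in range(m + 1, l_max + 1):
        a = math.sqrt((2 * l - 1) * (2 * l + 1) / ((l - m) * (l + m)))
        b = math.sqrt((2 * l + 1) * (l + m - 1) * (l - m - 1)
                      / ((l - m) * (l + m) * (2 * l - 3))) if l > m + 1 else 0.0
        cur = a * t * prev1 - b * prev2
        col[l] = cur
        prev2, prev1 = prev1, cur
    return col

theta = math.radians(30.0)                      # sin(theta) = 0.5
osc = max(abs(v) for l, v in legendre_column(200, 80, theta).items() if l >= 190)
att = abs(legendre_column(200, 160, theta)[200])
print(f"oscillatory |P| ~ {osc:.3e}   attenuated |P| ~ {att:.3e}")
```

With ℓ = 200 and θ = 30°, the boundary sits near m = 100: order 80 lies in the oscillatory region, order 160 deep in the attenuated one.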

14.
The regularized solution of the external spherical Stokes boundary-value problem, as used for computations of geoid undulations and deflections of the vertical, is based upon the Green functions S₁(λ₀, φ₀, λ, φ) of Box 0.1 (R = R₀) and V₁(λ₀, φ₀, λ, φ) of Box 0.2 (R = R₀), which depend on the evaluation point {λ₀, φ₀} and the sampling point {λ, φ} on the reference sphere of radius R₀ carrying gravity anomalies Δg(λ, φ) with respect to a normal gravitational field of type GM/R (free-air anomaly). If the evaluation point is taken as the meta-north pole of the Stokes reference sphere, the Stokes function and the Vening-Meinesz function take the forms S(ψ) of Box 0.1 and V₂(ψ) of Box 0.2, respectively, as soon as we introduce the meta-longitude (azimuth) and meta-colatitude (spherical distance), namely {A, ψ} of Box 0.5. In order to derive the Stokes and Vening-Meinesz functions, as well as their integrals, the Stokes and Vening-Meinesz functionals, in convolutive form, we map the sampling point {λ, φ} onto the tangent plane at {λ₀, φ₀} by means of oblique map projections of type (i) equidistant (Riemann polar/normal coordinates), (ii) conformal and (iii) equiareal. Boxes 2.1-2.4 and Boxes 3.1-3.4 collect the rigorously transformed convolutive Stokes functions and Stokes integrals and the convolutive Vening-Meinesz functions and Vening-Meinesz integrals. The graphs of the corresponding Stokes functions S₂(ψ), S₃(r), ..., S₆(r), as well as of the corresponding Stokes-Helmert functions H₂(ψ), H₃(r), ..., H₆(r), are given in Figures 4.1-4.5. In contrast, the graphs of Figures 4.6-4.10 illustrate the corresponding Vening-Meinesz functions V₂(ψ), V₃(r), ..., V₆(r), as well as the corresponding Vening-Meinesz-Helmert functions Q₂(ψ), Q₃(r), ..., Q₆(r).
The differences between the Stokes / Vening-Meinesz functions and their first terms (the only terms used in the flat Fourier transforms of type FAST and FASZ), namely S₂(ψ) - (sin ψ/2)⁻¹, S₃(r) - (sin r/2R₀)⁻¹, ..., S₆(r) - 2R₀/r and V₂(ψ) + (cos ψ/2)/(2 sin² ψ/2), V₃(r) + (cos r/2R₀)/(2 sin² r/2R₀), ..., illustrate the systematic errors in the flat Stokes function 2/ψ or the flat Vening-Meinesz function -2/ψ². The newly derived Stokes functions S₃(r), ..., S₆(r) of Boxes 2.1-2.3 and Stokes integrals of Box 2.4, as well as the Vening-Meinesz functions V₃(r), ..., V₆(r) of Boxes 3.1-3.3 and Vening-Meinesz integrals of Box 3.4, all of convolutive type, pave the way for the rigorous fast Fourier transform and the rigorous wavelet transform of the Stokes integral / the Vening-Meinesz integral of type equidistant, conformal and equiareal.
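The role of the first (flat-approximation) term discussed above can be illustrated with the classical closed-form Stokes function, as given in standard textbooks:

```python
import math

# Hedged sketch: the classical closed-form Stokes function S(psi)
# (Heiskanen-Moritz form) next to its first term 1/sin(psi/2), whose
# difference is the kind of flat-approximation error discussed above.
def stokes(psi):
    s = math.sin(psi / 2.0)
    return (1.0 / s - 6.0 * s + 1.0 - 5.0 * math.cos(psi)
            - 3.0 * math.cos(psi) * math.log(s + s * s))

for deg in (1.0, 5.0, 20.0):
    psi = math.radians(deg)
    print(f"psi = {deg:4.1f} deg   S(psi) = {stokes(psi):8.3f}   "
          f"1/sin(psi/2) = {1.0 / math.sin(psi / 2.0):8.3f}")
```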

15.
Fast error analysis of continuous GPS observations
It has been generally accepted that the noise in continuous GPS observations can be well described by a power-law plus white noise model. Using maximum likelihood estimation (MLE), the numerical values of the noise model parameters can be estimated. Current methods require calculating the data covariance matrix and inverting it, which is a significant computational burden: analysing 10 years of daily GPS solutions of a single station can take around 2 h on a regular computer such as a PC with an AMD Athlon 64 X2 dual-core processor. When one analyses large networks with hundreds of stations, or hourly instead of daily solutions, the long computation times become a problem. In case the signal only contains power-law noise, the MLE computations can be simplified to an O(N²) process, where N is the number of observations. For the general case of power-law plus white noise, we present a modification of the MLE equations that allows us to reduce the number of computations within the algorithm from a cubic to a quadratic function of the number of observations when there are no data gaps. For time series of three and eight years, this means in practice a reduction factor of around 35 and 84, respectively, in computation time without loss of accuracy. In addition, this modification removes the implicit assumption that there is no noise before the first observation. Finally, we present an analytical expression for the uncertainty of the estimated trend if the data only contain power-law noise. Electronic supplementary material: the online version of this article contains supplementary material, which is available to authorized users.
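One way to see why pure power-law noise is cheap to handle: it can be written as a lower-triangular Toeplitz transformation of white noise, so "whitening" the data needs only a triangular substitution that is quadratic in N, not a full covariance inversion. The fractional-integration construction below is an illustrative assumption, not necessarily the authors' algorithm:

```python
import numpy as np

# Hedged sketch: power-law noise as x = T w with T lower-triangular
# Toeplitz (Hosking-style fractional integration); whitening then costs
# O(N^2) instead of the O(N^3) of a general covariance inversion.
def powerlaw_coeffs(n, d):
    """Fractional-integration coefficients; d = 0.5 gives flicker-like noise."""
    psi = np.empty(n)
    psi[0] = 1.0
    for i in range(1, n):
        psi[i] = psi[i - 1] * (d + i - 1.0) / i
    return psi

n, d = 500, 0.5
psi = powerlaw_coeffs(n, d)
w = np.random.default_rng(0).standard_normal(n)

# generate power-law noise x = T w, with T_ij = psi[i - j] for i >= j
x = np.array([psi[: i + 1][::-1] @ w[: i + 1] for i in range(n)])

# O(N^2) forward substitution recovers the white-noise sequence
w_rec = np.empty(n)
for i in range(n):
    w_rec[i] = x[i] - psi[1 : i + 1][::-1] @ w_rec[:i]
print(f"max whitening error: {np.abs(w_rec - w).max():.2e}")
```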

16.
Summary  The range of computation of normal calculators can be extended to functions by providing a usual machine with both a storage unit, containing approximate values of functions for arguments in rough steps together with factors of interpolation, and a device for transferring the values from the storage unit into the calculator proper. Values of a function for any argument may then be computed by direct or inverse interpolation from the stored values. Accuracy depends on the number and distribution of the stored values. For the usual trigonometric functions, five-place, sometimes even six-place, accuracy may be obtained by storing no more than 100 values of the function and 100 factors of interpolation. Such a degree of accuracy is sufficient for almost any computation in geodetic operations of lower order, including third-order triangulation. At the Geodetic Institute of the Stuttgart Technische Hochschule a try-out model was developed, with which the functions sin x, cos x, tan x, cotan x and their inverse functions, as well as sec tan x (secant of tangent), can be computed. As the basic machine, a hand calculator with Odhner wheels was used. Experiments with the hand try-out calculator showed that the number of computing errors is only half of that committed in the usual computations with customary calculators and printed tables of functions. In addition, a gain of time was reached in most computations, amounting to 50 percent in certain problems. Tests also made it clear that the operation of the function calculator, even in the actual state of the try-out machine, is very simple and can easily be learnt, so that untrained people may also operate it. It may be noted that the majority of the persons engaged in testing the try-out machine were willing to repeat the computations, if so required, by means of the function calculator, but not with the function tables.
Therefore the function calculator appears well suited not only to simplify geodetic computation considerably but also to make it more efficient.
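The storage-plus-interpolation principle described above is easy to reproduce in software: about 100 stored sine values with linear interpolation already keep the error below 10⁻⁴ over the first quadrant, in line with the four-to-five-place accuracy the summary reports.

```python
import math

# Illustrative sketch of the storage-and-interpolation principle: 100
# stored sine values plus linear interpolation over the first quadrant.
N = 100
STEP = (math.pi / 2.0) / (N - 1)
TABLE = [math.sin(i * STEP) for i in range(N)]     # the "storage unit"

def sin_interp(x):
    """Sine of x (0 <= x <= pi/2) by direct interpolation of stored values."""
    i = min(int(x / STEP), N - 2)
    frac = x / STEP - i                            # the "factor of interpolation"
    return TABLE[i] + frac * (TABLE[i + 1] - TABLE[i])

worst = max(abs(sin_interp(k * 1e-4) - math.sin(k * 1e-4)) for k in range(15708))
print(f"worst-case error: {worst:.1e}")
```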

17.
Mean 5′ × 5′ heights and depths from the ETOPO5U (Earth Topography at 5-arc-minute spacing, Updated) Digital Terrain Model (DTM) were compared with corresponding quantities of a local DTM in the test area [38°-40°, 21°-24°]. From this comparison, a shift of ETOPO5U with respect to the local DTM in the longitudinal direction equal to 5 min was found after applying an efficient fast Fourier transform (FFT) technique. Furthermore, sparse mean height differences larger than 1,000 m were observed between ETOPO5U and the local DTM, due mostly to errors of ETOPO5U. The effect of these errors on gravity and height anomalies was computed in a subregion of the area under consideration.
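An FFT technique like the one mentioned above can detect a constant grid shift from the peak of the circular cross-correlation; the profiles below are synthetic, not ETOPO5U data:

```python
import numpy as np

# Sketch of FFT-based shift detection between two grids: the argmax of the
# circular cross-correlation of two 1-D profiles marks the offset.
n = 256
s = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
local = np.sin(3.0 * s) + 0.5 * np.cos(7.0 * s)   # "local DTM" profile
etopo = np.roll(local, 1)                         # same profile shifted one cell

xcorr = np.fft.ifft(np.fft.fft(etopo) * np.conj(np.fft.fft(local))).real
found = int(np.argmax(xcorr))
print(f"detected shift: {found} cell(s)")
```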

18.
In this contribution, the regularized Earth's surface is considered as a graded 2D surface, namely a curved surface embedded in a 3D Euclidean space. Thus, the deformation of the surface can be completely specified by the changes of the metric and curvature tensors, namely the strain tensor and the tensor of change of curvature (TCC). The curvature tensor, in particular, is responsible for the detection of vertical displacements of the surface. Dealing with eigenspace components, e.g., principal components and principal directions, of 2D symmetric second-order random tensors is of central importance in this study. Namely, we introduce an eigenspace analysis, or principal component analysis, of the strain tensor and the TCC. However, due to the intricate relations between the elements of the tensors on one side and the eigenspace components on the other, we convert these relations into simple equations by simultaneous diagonalization. This provides simple synthesis equations for the eigenspace components (e.g., applicable in stochastic aspects). The last part of this research is devoted to stochastic aspects of deformation analysis. In the presence of errors in a measured random displacement field (under the assumption of a normally distributed displacement field), the stochastic behaviour of the eigenspace components of the strain tensor and the TCC is discussed. This is applied in a numerical example with the crustal deformation field, through the Pacific Northwest Geodetic Array permanent solutions in the period January 1999 to January 2004, in the Cascadia Subduction Zone. Due to the earthquake which occurred on 28 February 2001 in Puget Sound (M_w > 6.8), we performed the computations in two steps: the coseismic effect and the postseismic effect of this event.
A comparison of the patterns of the eigenspace components of the deformation tensors corresponding to the seismic events shows that, among the estimated eigenspace components near the earthquake region, the eigenvalues have significant variations, while the eigendirections have insignificant variations.
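The eigenspace analysis named above reduces, for a single 2D symmetric tensor, to an ordinary eigen-decomposition; a minimal sketch with invented tensor elements:

```python
import numpy as np

# Minimal sketch: principal values and principal directions of a 2D
# symmetric strain tensor (the tensor elements are invented).
E = np.array([[1.2e-6, 0.4e-6],
              [0.4e-6, -0.3e-6]])         # symmetric 2x2 strain tensor

eigvals, eigvecs = np.linalg.eigh(E)      # eigenvalues in ascending order
for lam, v in zip(eigvals, eigvecs.T):
    azimuth = np.degrees(np.arctan2(v[1], v[0]))
    print(f"principal strain {lam: .3e}   direction {azimuth:8.2f} deg")
```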

19.
Accuracy assessment of the National Geodetic Survey’s OPUS-RS utility
OPUS-RS is a rapid static form of the National Geodetic Survey’s On-line Positioning User Service (OPUS). Like OPUS, OPUS-RS accepts a user’s GPS tracking data and uses corresponding data from the U.S. Continuously Operating Reference Station (CORS) network to compute the 3-D positional coordinates of the user’s data-collection point, called the rover. OPUS-RS uses a new processing engine, called RSGPS, which can generate coordinates with an accuracy of a few centimeters for data sets spanning as little as 15 min of time. OPUS-RS achieves such results by interpolating (or extrapolating) the atmospheric delays, measured at several CORS located within 250 km of the rover, to predict the atmospheric delays experienced at the rover. Consequently, standard errors of computed coordinates depend highly on the local geometry of the CORS network and on the distances between the rover and the local CORS. We introduce a unitless parameter called the interpolative dilution of precision (IDOP) to quantify the local geometry of the CORS network relative to the rover, and we quantify the standard errors of the coordinates obtained via OPUS-RS by using functions of the form σ = √[(α · IDOP)² + (β · RMSD)²], where α and β are empirically determined constants and RMSD is the root-mean-square distance between the rover and the individual CORS involved in the OPUS-RS computations. We found that α = 6.7 ± 0.7 cm and β = 0.15 ± 0.03 ppm in the vertical dimension, and α = 1.8 ± 0.2 cm and β = 0.05 ± 0.01 ppm in either the east–west or north–south dimension.
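The empirical accuracy model quoted above can be sketched directly; the quadrature combination of the constant and distance-proportional parts is an assumption here:

```python
import math

# Sketch of the empirical accuracy model, assuming the constant and
# distance-proportional parts combine in quadrature (sigma and alpha in
# cm, beta in ppm, RMSD in km).
def opus_rs_sigma(idop, rmsd_km, alpha_cm, beta_ppm):
    return math.hypot(alpha_cm * idop, beta_ppm * 1e-6 * rmsd_km * 1e5)

# vertical vs. horizontal standard error for IDOP = 1, RMSD = 100 km
print(f"vertical:   {opus_rs_sigma(1.0, 100.0, 6.7, 0.15):.2f} cm")
print(f"horizontal: {opus_rs_sigma(1.0, 100.0, 1.8, 0.05):.2f} cm")
```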

20.
Summary  The least-squares collocation method has been used for the computation of a geoid solution in central Spain, combining a geopotential model complete to degree and order 360, gravity anomalies and topographic information. The area has been divided into two 1° × 1° blocks, and predictions have been made in each block, with gravity data spaced about 5′ × 5′ within each block, extended by 1/2°. Topographic effects have been calculated from 6″ × 9″ heights using an RTM reduction with a reference terrain model of 30′ × 30′ mean heights.
