1.
The global positioning system (GPS) model is distinctive in the way that the unknown parameters are not only real-valued,
the baseline coordinates, but also integers, the phase ambiguities. The GPS model therefore leads to a mixed integer–real-valued
estimation problem. Common solutions are the float solution, which ignores the fact that the ambiguities are integers, or the fixed solution, in which the ambiguities are estimated as integers and then held fixed. Confidence regions, so-called HPD (highest posterior density) regions, for the GPS baselines are derived by Bayesian statistics. They account for the integer character of the phase ambiguities but still treat them as unknown parameters. Estimating these confidence regions leads to a numerical integration problem
which is solved by Monte Carlo methods. This is computationally expensive so that approximations of the confidence regions
are also developed. In an example it is shown that for a high confidence level the confidence region consists of more than
one region.
Received: 1 February 2001 / Accepted: 18 July 2001
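The Monte Carlo route to an HPD region described above can be sketched as follows. The bimodal posterior is a hypothetical stand-in (not the authors' GPS baseline model), chosen to show how a high-confidence HPD region can consist of more than one part:

```python
import numpy as np

rng = np.random.default_rng(0)

# Unnormalized posterior density: an equal-weight mixture of two
# unit-variance Gaussians (a hypothetical stand-in posterior).
def density(x):
    return 0.5 * np.exp(-0.5 * (x + 2) ** 2) + 0.5 * np.exp(-0.5 * (x - 2) ** 2)

# Draw Monte Carlo samples from the mixture
comp = rng.integers(0, 2, size=20000)
x = rng.normal(loc=np.where(comp == 0, -2.0, 2.0), scale=1.0)

# HPD region: keep the fraction alpha of samples with the highest density
alpha = 0.90
d = density(x)
cut = np.quantile(d, 1 - alpha)   # density threshold
hpd = np.sort(x[d >= cut])

# A large gap between consecutive kept samples signals that the
# HPD region is disconnected (here: two islands around the two modes)
print(np.diff(hpd).max() > 0.5)
```

The sample-based threshold replaces the numerical integration; with more structure in the posterior, the kept samples fall apart into several islands, which is exactly the multi-region effect reported in the abstract.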
2.
Based on the Bayesian principle and the fact that GPS carrier-phase ambiguities are integers, the posterior distribution
of the ambiguities and the position parameters is derived. This is then used to derive the maximum posterior likelihood solution
of the ambiguities. The accuracy of the integer ambiguity solution and the position parameters is also studied according to
the posterior distribution. It is found that the accuracy of the integer solution depends not only on the variance of the
corresponding float ambiguity solution but also on its values.
Received: 27 July 1999 / Accepted: 22 November 2000
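The dependence of the integer solution's reliability on both the variance and the value of the float ambiguity can be illustrated with a small posterior computation. This is a sketch under simplifying assumptions (Gaussian likelihood, flat prior over the integers), not the paper's derivation:

```python
import math

def posterior_nearest(float_amb, sigma, window=5):
    """Posterior probability that the integer nearest to the float
    ambiguity is the true one, assuming a Gaussian likelihood and a
    flat prior over integer candidates (illustrative sketch only)."""
    n = round(float_amb)
    w = lambda a: math.exp(-((float_amb - a) ** 2) / (2 * sigma ** 2))
    total = sum(w(n + k) for k in range(-window, window + 1))
    return w(n) / total

# Same variance, different float values -> different reliabilities,
# matching the abstract's finding.
print(posterior_nearest(5.05, 0.25))  # close to an integer: near certain
print(posterior_nearest(5.45, 0.25))  # near the half cycle: much lower
```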
3.
The upward-downward continuation of a harmonic function like the gravitational potential is conventionally based on the direct-inverse
Abel-Poisson integral with respect to a sphere of reference. Here we aim at an error estimation of the “planar approximation”
of the Abel-Poisson kernel, which is often used due to its convolution form. Such a convolution form is a prerequisite to
applying fast Fourier transformation techniques. By means of an oblique azimuthal map projection / projection onto the local
tangent plane at an evaluation point of the reference sphere of type “equiareal” we arrive at a rigorous transformation of
the Abel-Poisson kernel/Abel-Poisson integral in a convolution form. As soon as we expand the “equiareal” Abel-Poisson kernel/Abel-Poisson
integral we gain the “planar approximation”. The differences between the exact Abel-Poisson kernel of type “equiareal” and
the “planar approximation” are plotted and tabulated. Six configurations are studied in detail in order to document the error
budget, which varies from 0.1% for points at a spherical height H = 10 km above the terrestrial reference sphere up to 98% for points at a spherical height H = 6.3×10⁶ km.
Received: 18 March 1997 / Accepted: 19 January 1998
4.
Parametric least squares collocation was used in order to study the detection of systematic errors of satellite gradiometer
data. For this purpose, simulated data sets with a priori known systematic errors were produced using ground gravity data
in the very smooth gravity field of the Canadian plains. Experiments carried out at different satellite altitudes showed that
the recovery of bias parameters from the gradiometer “measurements” is possible with high accuracy, especially in the case
of crossing tracks. The mean value of the differences (original minus estimated bias parameters) was relatively large compared
to the standard deviation of the corresponding second-order derivative component at the corresponding height. This mean value
almost vanished when gravity data at ground level were combined with the second-order derivative data set at satellite altitude.
In the case of simultaneous estimation of bias and tilt parameters from ∂²T/∂z² “measurements”, the recovery of both parameters agreed very well with the collocation error estimation.
Received: 10 October 1996 / Accepted: 25 May 1998
5.
The spacetime gravitational field of a deformable body
The high-resolution analysis of orbit perturbations of terrestrial artificial satellites has documented that the eigengravitation
of a massive body like the Earth changes in time, namely with periodic and aperiodic constituents. For the space-time variation of the gravitational field, the action of internal and external volume forces as well as surface forces on a deformable massive body is responsible. Free of any assumption on the symmetry of the constitution of the deformable body, we review the incremental
spatial (“Eulerian”) and material (“Lagrangean”) gravitational field equations, in particular the source terms (two constituents:
the divergence of the displacement field as well as the projection of the displacement field onto the gradient of the reference
mass density function) and the `jump conditions' at the boundary surface of the body as well as at internal interfaces both
in linear approximation. A spherical harmonic expansion in terms of multipoles of the incremental Eulerian gravitational potential
is presented. Three types of spherical multipoles are identified, namely the dilatation multipoles, the transport displacement
multipoles and those multipoles which are generated by mass condensation onto the boundary reference surface or internal interfaces.
The degree-one term has been identified as non-zero, thus as a “dipole moment” being responsible for the varying position
of the deformable body's mass centre. Finally, for those deformable bodies which enjoy a spherically symmetric constitution,
emphasis is on the functional relation between Green functions, namely between Fourier-/Laplace-transformed volume versus surface Love-Shida functions (h(r), l(r) versus h′(r), l′(r)) and Love functions k(r) versus k′(r). The functional relation is numerically tested for an active tidal force/potential and an active loading force/potential,
proving an excellent agreement with experimental results.
Received: December 1995 / Accepted: 1 February 1997
6.
Burkhard Schaffrin 《Journal of Geodesy》1989,63(4):395-404
The now classical collocation method in geodesy has been derived by H. Moritz (1970; 1973) within an appropriate Mixed Linear Model. According to B. Schaffrin (1985; 1986), even a generalized form of the collocation solution can be proved to represent a combined estimation/prediction procedure of type BLUUE (Best Linear Uniformly Unbiased Estimation) for the fixed parameters, and of type inhomBLIP (Best inhomogeneously LInear Prediction) for the random effects with not necessarily zero expectation. Moreover, “robust collocation” has been introduced by means of homBLUP (Best homogeneously Linear weakly Unbiased Prediction) for the random effects together with a suitable LUUE for the fixed parameters. Here we present an equivalence theorem which states that the robust collocation solution in the original Mixed Linear Model can identically be derived as a traditional LESS (LEast Squares Solution) in a modified Mixed Linear Model without using artifacts like “pseudo-observations”. This allows a nice interpretation of “robust collocation” as an adjustment technique in the presence of “weak prior information”.
7.
The total optimal search criterion in solving the mixed integer linear model with GNSS carrier phase observations
Existing algorithms for GPS ambiguity determination can be classified into three categories, i.e. ambiguity resolution in
the measurement domain, the coordinate domain and the ambiguity domain. There are many techniques available for searching
the ambiguity domain, such as FARA (Frei and Beutler in Manuscr Geod 15(4):325–356, 1990), LSAST (Hatch in Proceedings of KIS’90, Banff, Canada, pp 299–308, 1990), the modified Cholesky decomposition method (Euler and Landau in Proceedings of the sixth international geodetic symposium on satellite positioning,
Columbus, Ohio, pp 650–659, 1992), LAMBDA (Teunissen in Invited lecture, section IV theory and methodology, IAG general meeting, Beijing, China, 1993), FASF (Chen and Lachapelle in J Inst Navig 42(2):371–390, 1995) and the modified LLL algorithm (Grafarend in GPS Solut 4(2):31–44, 2000; Lou and Grafarend in Zeitschrift für Vermessungswesen 3:203–210, 2003). The widely applied LAMBDA method is based on the Least Squares Ambiguity Search (LSAS) criterion and additionally employs an effective decorrelation technique. G. Xu (J Glob Position Syst 1(2):121–131, 2002) also proposed a new general criterion, together with its equivalent objective function, for ambiguity searching that can be carried out in the coordinate domain, the ambiguity domain or both. Xu’s objective function differs from the LSAS function, leading to different numerical results. The cause of this difference is identified in this contribution and corrected. After correction, Xu’s approach and the one implied in LAMBDA are identical. We have developed a total optimal search criterion for the mixed integer linear model resolving integer ambiguities in both the coordinate and ambiguity domains, and derived the
orthogonal decomposition of the objective function and the related minimum expressions algebraically and geometrically. This
criterion is verified with real GPS phase data. The theoretical and numerical results show that (1) the LSAS objective function
can be derived from the total optimal search criterion with the constraint on the fixed integer ambiguity parameters, and
(2) Xu’s derivation of the equivalent objective function was incorrect, leading to an incorrect search procedure. The effects
of the total optimal criterion on GPS carrier phase data processing are discussed and its practical implementation is also
proposed.
8.
The problem of “global height datum unification” is solved in the gravity potential space based on: (1) high-resolution local
gravity field modeling, (2) geocentric coordinates of the reference benchmark, and (3) a known value of the geoid’s potential.
The high-resolution local gravity field model is derived based on a solution of the fixed-free two-boundary-value problem
of the Earth’s gravity field using (a) potential difference values (from precise leveling), (b) modulus of the gravity vector
(from gravimetry), (c) astronomical longitude and latitude (from geodetic astronomy and/or a combination of Global Navigation Satellite System (GNSS) observations with total station measurements), and (d) satellite altimetry. Knowing the height of the reference
benchmark in the national height system and its geocentric GNSS coordinates, and using the derived high-resolution local gravity
field model, the gravity potential value of the zero point of the height system is computed. The difference between the derived
gravity potential value of the zero point of the height system and the geoid’s potential value is computed. This potential
difference gives the offset of the zero point of the height system from geoid in the “potential space”, which is transferred
into “geometry space” using the transformation formula derived in this paper. The method was applied to the computation of
the offset of the zero point of the Iranian height datum from the geoid’s potential value W₀ = 62636855.8 m²/s². According to the geometry space computations, the height datum of Iran is 0.09 m below the geoid.
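The transfer from the potential space to the geometry space rests on dividing a potential difference by gravity. The sketch below uses a mean normal gravity value and a hypothetical datum potential; it is a simplification of the transformation formula derived in the paper:

```python
# Transfer a height-datum offset from the potential space to the
# geometry space: delta_H ~ delta_W / gamma, with gamma a mean
# normal gravity value. Simplified stand-in for the paper's formula;
# W0 follows the abstract, the datum potential is hypothetical.
W0 = 62636855.8          # geoid potential (m^2/s^2)
W_datum = W0 + 0.88      # hypothetical potential of the datum zero point
gamma = 9.806            # assumed mean normal gravity (m/s^2)

# Potential decreases upward, so a datum with W_datum > W0 lies below the geoid
offset_m = (W_datum - W0) / gamma
print(round(offset_m, 2))
```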
9.
The resolution of a nonlinear parametric adjustment model is addressed through an isomorphic geometrical setup with tensor
structure and notation, represented by a u-dimensional “model surface” embedded in a flat n-dimensional “observational space”.
Then observations correspond to the observational-space coordinates of the point Q, the u initial parameters correspond to the model-surface coordinates of the “initial” point P, and the u adjusted parameters correspond to the model-surface coordinates of the “least-squares” point. The least-squares criterion results in a minimum-distance property implying that the vector from the least-squares point to Q must be orthogonal to the model surface. The geometrical setup leads to the solution of modified normal equations, characterized by a positive-definite matrix. The latter contains second-order and, optionally, third-order partial derivatives of the observables with respect to the parameters. This approach significantly shortens the convergence process as compared to the standard (linearized)
method.
10.
Random simulation and GPS decorrelation
Peiliang Xu 《Journal of Geodesy》2001,75(7-8):408-423
(i) A random simulation approach is proposed, which is at the centre of a numerical comparison of the performances of different
GPS decorrelation methods. The most significant advantage of the approach is that it does not depend on nor favour any particular
satellite–receiver geometry and weighting system. (ii) An inverse integer Cholesky decorrelation method is proposed, which is shown to outperform the integer Gaussian decorrelation and the Lenstra–Lenstra–Lovász (LLL) algorithm, thus indicating that the integer Gaussian decorrelation is not the best decorrelation technique and that further improvement is
possible. (iii) The performance study of the LLL algorithm is the first of its kind and the results have shown that the algorithm
can indeed be used for decorrelation, but that it performs worse than the integer Gaussian decorrelation and the inverse integer
Cholesky decorrelation. (iv) Simulations have also shown that no decorrelation techniques available to date can guarantee
a smaller condition number, especially in the case of high dimension, although reducing the condition number is the goal of
decorrelation.
Received: 26 April 2000 / Accepted: 5 March 2001
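A two-dimensional sketch of the integer Gaussian decorrelation studied here: an integer (unimodular) transformation is applied to the ambiguity covariance matrix, and the condition number before and after is compared. The covariance values are invented for illustration:

```python
import numpy as np

def cond(Q):
    w = np.linalg.eigvalsh(Q)
    return w.max() / w.min()

def integer_gauss_decorrelate(Q, iters=10):
    """Alternately reduce each off-diagonal entry by an integer
    multiple of the diagonal (2-D integer Gaussian decorrelation).
    Returns the unimodular transform Z and Z Q Z^T. Sketch only."""
    Q = Q.copy()
    Z = np.eye(2)
    for _ in range(iters):
        changed = False
        for i, j in ((0, 1), (1, 0)):
            mu = round(Q[i, j] / Q[j, j])   # integer elimination multiplier
            if mu != 0:
                T = np.eye(2)
                T[i, j] = -mu
                Q = T @ Q @ T.T
                Z = T @ Z
                changed = True
        if not changed:
            break
    return Z, Q

Qa = np.array([[6.29, 5.98], [5.98, 6.29]])  # highly correlated float ambiguities
Z, Qz = integer_gauss_decorrelate(Qa)
print(cond(Qa) > cond(Qz))  # the condition number drops sharply
```

The transform Z has integer entries and determinant ±1, so it maps integer ambiguity vectors to integer vectors; that is the property all the compared decorrelation methods share.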
11.
An absolute measurement of the gravitational acceleration “g” has been made at the National Standards Laboratory, Chippendale, N.S.W., Australia.
The determination was made by studying the free motion of a body projected vertically upwards in a vacuum; the time between its initial and final passages through two horizontal planes of known vertical separation was measured.
The measured value of g at a point 12 metres above the floor in room B. 37 of the National Standards Laboratory is 9.7967134 m/s². The corresponding value at floor level at the BMR gravity station is 9.796717 m/s².
Paper presented at the meeting of the International Gravimetric Commission, Paris 7–11 September 1970.
12.
M. Louis 《Journal of Geodesy》1975,49(1):110-111
No abstract.
“Astronomy of star positions: A critical investigation of star catalogues, the methods of their construction and their purpose” by Heinrich Eichhorn. 1 volume, 357 pages, 25 U.S. $, published by Frederick Ungar Publishing Co, Inc., 250 Park Avenue South, New York, N.Y. 10003.
13.
C. C. Tscherning 《Journal of Geodesy》1978,52(1):85-92
The term “entity” covers, when used in the field of electronic data processing, the meaning of words like “thing”, “being”,
“event”, or “concept”. Each entity is characterized by a set of properties.
An information element is a triple consisting of an entity, a property and the value of a property. Geodetic information is
sets of information elements with entities being related to geodesy. This information may be stored in the form of data and is called a geodetic data base provided that (1) it contains or may contain all data necessary for the operations of a particular geodetic organization, (2) the data is stored in a form suited for many different applications, and (3) unnecessary duplications of data have been avoided.
The first step to be taken when establishing a geodetic data base is described, namely the definition of the basic entities
of the data base (such as trigonometric stations, astronomical stations, gravity stations, geodetic reference-system parameters,
etc.).
Presented at the “International Symposium on Optimization of Design and Computation of Control Networks”, Sopron, Hungary,
July 1977.
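The (entity, property, value) triple that defines an information element can be sketched directly; the entity names and properties below are illustrative, not from the paper:

```python
from typing import Any, NamedTuple

# Minimal sketch of the abstract's data model: geodetic information
# as a set of (entity, property, value) triples.
class InfoElement(NamedTuple):
    entity: str
    prop: str
    value: Any

elements = [
    InfoElement("trig_station_0042", "latitude_deg", 55.6761),
    InfoElement("trig_station_0042", "longitude_deg", 12.5683),
    InfoElement("gravity_station_17", "gravity_ms2", 9.815),
]

# Collect all properties describing one entity
station = {e.prop: e.value for e in elements if e.entity == "trig_station_0042"}
print(station)
```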
14.
Accuracy of GPS-derived relative positions as a function of interstation distance and observing-session duration
Ten days of GPS data from 1998 were processed to determine how the accuracy of a derived three-dimensional relative position
vector between GPS antennas depends on the chord distance (denoted L) between these antennas and on the duration of the GPS observing session (denoted T). It was found that the dependence of accuracy on L is negligibly small when (a) using the `final' GPS satellite orbits disseminated by the International GPS Service, (b) fixing
integer ambiguities, (c) estimating appropriate neutral-atmosphere-delay parameters, (d) 26 km ≤ L ≤ 300 km, and (e) 4 h ≤ T ≤ 24 h. Under these same conditions, the standard error for the relative position in the north–south dimension (denoted S_n and expressed in mm) is adequately approximated by the equation S_n = k_n/T^0.5 with k_n = 9.5 ± 2.1 mm·h^0.5 and T expressed in hours. Similarly, the standard errors for the relative position in the east–west and in the up–down dimensions are adequately approximated by the equations S_e = k_e/T^0.5 and S_u = k_u/T^0.5, respectively, with k_e = 9.9 ± 3.1 mm·h^0.5 and k_u = 36.5 ± 9.1 mm·h^0.5.
Received: 5 February 2001 / Accepted: 14 May 2001
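The empirical accuracy model reduces to S = k/√T per component. A small sketch using the central k values from the abstract (valid only within the stated L and T ranges):

```python
import math

# Standard errors (mm) from the empirical model S = k / sqrt(T),
# with T the observing-session duration in hours. The k values are
# the central estimates quoted in the abstract.
K = {"north": 9.5, "east": 9.9, "up": 36.5}  # mm * h^0.5

def standard_errors_mm(T_hours):
    return {axis: k / math.sqrt(T_hours) for axis, k in K.items()}

print(standard_errors_mm(4))   # shortest valid session: largest errors
print(standard_errors_mm(24))  # a full day: north/east errors near 2 mm
```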
15.
Burkhard Schaffrin 《Journal of Geodesy》1987,61(3):276-280
The Bayesian estimate s_b of the standard deviation σ in a linear model—as needed for the evaluation of reliability—is well known to be proportional to the square root of the Bayesian estimate (s²)_b of the variance component σ² by a proportionality factor a_b involving the ratio of Gamma functions. However, in analogy to the case of the respective unbiased estimates, the troublesome exact computation of a_b may be avoided by a simple approximation which turns out to be good enough for most applications even if the degree of freedom ν is rather small.
Paper presented to the Int. Conf. on “Practical Bayesian Statistics”, Cambridge (U.K.), 8.–11. July 1986.
16.
Integer carrier-phase ambiguity resolution is one of the critical issues for precise GPS applications in geodesy and geodynamics. To resolve as many integer ambiguities as possible, the ‘most-easy-to-fix’ double-difference ambiguities have to be defined. For this purpose, several strategies are implemented in existing GPS software packages, such as choosing the ambiguities according to the baseline length or the variances of the estimated real-valued ambiguities. Although their efficiencies are demonstrated in practice, it is proven in this paper that they do not reflect all effects of varying data quality, because they are based on theoretical considerations of GPS data processing. Therefore, a new approach is presented, which selects the double-difference ambiguities according to their probability of being fixed to the nearest integer. The probability is computed from estimates and variances of wide-lane and narrow-lane ambiguities. Together with an optimized ambiguity fixing procedure, the new approach is implemented in the routine data processing for the International GPS Service (IGS) at GeoForschungsZentrum (GFZ) Potsdam. Within a sub-network of about 90 IGS stations, it is demonstrated that more than 97% of the independent ambiguities are fixed correctly compared to 75% by a commonly used method, and that the additionally fixed ambiguities improve the repeatability of the station coordinates by 10–26% in regions with sparse site distribution.
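Selecting the ‘most-easy-to-fix’ ambiguities by their probability of correct rounding can be sketched as below. The probability formula assumes a normally distributed float ambiguity with a given offset from the nearest integer; the values are invented and this is not the GFZ implementation:

```python
import math

def p_correct_round(b_frac, sigma):
    """Probability that rounding a float ambiguity to the nearest
    integer is correct, for a normal error with standard deviation
    sigma (cycles) and fractional offset b_frac to the nearest
    integer (cycles). Illustrative 'easy-to-fix' ranking only."""
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return Phi((0.5 - b_frac) / sigma) + Phi((0.5 + b_frac) / sigma) - 1.0

# Rank double-difference ambiguities: fix the most probable first.
# Tuples are (name, fractional offset, sigma), all made up.
ambiguities = [("dd1", 0.02, 0.08), ("dd2", 0.45, 0.08), ("dd3", 0.02, 0.30)]
ranked = sorted(ambiguities, key=lambda a: -p_correct_round(a[1], a[2]))
print([name for name, _, _ in ranked])
```

Note how the ranking mixes both effects the abstract emphasizes: dd2 has a small variance but sits near the half cycle, so it still ranks below the noisier dd3.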
17.
This research deals with some theoretical and numerical problems of the downward continuation of mean Helmert gravity disturbances.
We prove that the downward continuation of the disturbing potential is much smoother, as well as two orders of magnitude smaller
than that of the gravity anomaly, and we give the expression in spectral form for calculating the disturbing potential term.
Numerical results show that for calculating truncation errors the first 180° of a global potential model suffice. We also discuss the theoretical convergence problem of the iterative scheme. We prove
that the 5′×5′ mean iterative scheme is convergent and the convergence speed depends on the topographic height; for Canada, to achieve an
accuracy of 0.01 mGal, at most 80 iterations are needed. The comparison of the “mean” and “point” schemes shows that the mean
scheme should give a more reasonable and reliable solution, while the point scheme brings a large error to the solution.
Received: 19 August 1996 / Accepted: 4 February 1998
18.
On the multivariate total least-squares approach to empirical coordinate transformations. Three algorithms
The multivariate total least-squares (MTLS) approach aims at estimating a matrix of parameters, Ξ, from a linear model (Y − E_Y = (X − E_X) · Ξ) that includes an observation matrix, Y, another observation matrix, X, and matrices of randomly distributed errors, E_Y and E_X. Two special cases of the MTLS approach include the standard multivariate least-squares approach where only the observation
matrix, Y, is perturbed by random errors and, on the other hand, the data least-squares approach where only the coefficient matrix
X is affected by random errors. In a previous contribution, the authors derived an iterative algorithm to solve the MTLS problem
by using the nonlinear Euler–Lagrange conditions. In this contribution, new lemmas are developed to analyze the iterative
algorithm, modify it, and compare it with a new ‘closed form’ solution that is based on the singular-value decomposition.
For an application, the total least-squares approach is used to estimate the affine transformation parameters that convert
cadastral data from the old to the new Israeli datum. Technical aspects of this approach, such as scaling the data and fixing
the columns in the coefficient matrix are investigated. This case study illuminates the issue of “symmetry” in the treatment
of two sets of coordinates for identical point fields, a topic that had already been emphasized by Teunissen (1989, Festschrift
to Torben Krarup, Geodetic Institute Bull no. 58, Copenhagen, Denmark, pp 335–342). The differences between the standard least-squares
and the TLS approach are analyzed in terms of the estimated variance component and a first-order approximation of the dispersion
matrix of the estimated parameters.
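The ‘closed form’ idea, solving a total least-squares problem via the singular-value decomposition, can be sketched for the single-parameter-column case (the paper's multivariate algorithm generalizes this to a full matrix Ξ). All data below are simulated:

```python
import numpy as np

# Closed-form total least-squares fit via the SVD for y ~ X @ xi,
# where both X and y carry random errors. Minimal sketch of the
# SVD-based 'closed form' route, not the paper's MTLS algorithm.
rng = np.random.default_rng(1)
xi_true = np.array([2.0, -1.0])
X = rng.normal(size=(100, 2))
y = X @ xi_true

# Perturb both the coefficient matrix and the observations
Xn = X + 0.01 * rng.normal(size=X.shape)
yn = y + 0.01 * rng.normal(size=y.shape)

Z = np.column_stack([Xn, yn])
U, s, Vt = np.linalg.svd(Z)
v = Vt[-1]               # right singular vector of the smallest singular value
xi_tls = -v[:2] / v[2]   # TLS estimate of the parameters
print(xi_tls)            # close to xi_true
```

The smallest singular value plays the role of the estimated error level, which is why the TLS and standard least-squares results differ in exactly the variance-component sense discussed in the abstract.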
19.
R. P. Singh S. Rovshan S. K. Goroshi S. Panigrahy J. S. Parihar 《Journal of the Indian Society of Remote Sensing》2011,39(3):345-353
The monitoring of terrestrial carbon dynamics is important in studies related to global climate change. This paper presents results on the inter-annual variability of Net Primary Productivity (NPP) from 1981 to 2000, derived from NOAA-AVHRR observations using the Global Production Efficiency Model (GloPEM). The GloPEM model is based on physiological principles and uses the production efficiency concept, in which the canopy absorption of photosynthetically active radiation (APAR) is used with a conversion “efficiency” to estimate Gross Primary Production (GPP). NPP derived from the GloPEM model over India showed a maximum of about 3,000 gC m⁻² year⁻¹ in West Bengal and a minimum of about 500 gC m⁻² year⁻¹ in Rajasthan. The India-averaged NPP varied from 1,084.7 gC m⁻² year⁻¹ to 1,390.8 gC m⁻² year⁻¹ in 1983 and 1998, respectively. Regression analysis of the 20-year NPP variability showed a significant increase in NPP over India (r = 0.7, F = 17.53, p < 0.001). The mean rate of increase was 10.43 gC m⁻² year⁻¹. The carbon fixation ability of India's terrestrial ecosystems is increasing at a rate of 34.3 TgC annually (t = 4.18, p < 0.001). The estimated net carbon fixation over the Indian landmass ranged from 3.56 PgC (in 1983) to 4.57 PgC (in 1998). Grid-level temporal correlation analysis showed that agricultural regions are the source of the increase in India's terrestrial NPP. Parts of the forest regions (the Himalayas in Nepal, northeast India) were relatively less influenced over the study period and showed lower or negative correlation (trend). The findings of the study provide valuable input for understanding global change associated with vegetation activity as a sink for atmospheric carbon dioxide.
20.
The LLL reduction of lattice vectors and its variants have been widely used to solve the weighted integer least squares (ILS)
problem, or equivalently, the weighted closest point problem. Instead of reducing lattice vectors, we propose a parallel Cholesky-based
reduction method for positive definite quadratic forms. The new reduction method directly works on the positive definite matrix
associated with the weighted ILS problem and is shown to satisfy part of the inequalities required by Minkowski’s reduction
of positive definite quadratic forms. The complexity of the algorithm can be fixed a priori by limiting the number of iterations.
The simulations have clearly shown that the parallel Cholesky-based reduction method is significantly better than the LLL algorithm at reducing the condition number of the positive definite matrix and, as a result, can significantly reduce the search space for the globally optimal weighted ILS or maximum likelihood estimate.