Similar literature
20 similar documents found.
1.
When precise positioning is carried out via GNSS carrier phases, it is important to exploit the property that every ambiguity should be an integer. Given the float solution, any integer vector with the same dimension as the ambiguity vector is the true ambiguity vector with some probability. For both integer aperture estimation and integer equivariant estimation, it is therefore of great significance to know these posterior probabilities. However, calculating a posterior probability faces the thorny problem that the normalizing sum involves an infinite number of integer vectors. In this paper, using the float solution of the ambiguity and its variance matrix, a new approach to rapidly and accurately calculate the posterior probability is proposed. The approach consists of four steps. First, the ambiguity vector is decorrelated by an integer transformation. Second, the range of admissible integers for every component is obtained directly from formulas, and a finite set of candidate integer vectors is formed by combination. Third, using these integer vectors, the principal value of the posterior probability and a correction factor are computed. Finally, the posterior probability of every integer vector and an upper bound on its error are obtained. The paper presents the detailed computation of the posterior probability and the derivations of the formulas. Theory and numerical examples indicate that the proposed approach requires little computation, achieves high accuracy, and adapts well to different scenarios.
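As a rough illustration of the idea behind such calculations (not the algorithm of this paper), the Python sketch below truncates the infinite sum over integer vectors to a small box around the rounded float solution and normalizes the resulting weights. The function name, the box width and the example numbers are assumptions; the decorrelation step and the error bound derived in the paper are omitted.

```python
import numpy as np
from itertools import product

def ambiguity_posteriors(a_float, Q, width=3):
    """Approximate posterior probabilities of integer ambiguity vectors.

    Hypothetical sketch: the infinite normalizing sum is truncated to a box
    of +/- `width` cycles around the rounded float solution, and every
    candidate z is weighted by exp(-0.5 * (a - z)^T Q^{-1} (a - z)).
    """
    Q_inv = np.linalg.inv(Q)
    center = np.round(a_float).astype(int)
    candidates, weights = [], []
    for off in product(range(-width, width + 1), repeat=len(a_float)):
        z = center + np.array(off)
        d = a_float - z
        candidates.append(z)
        weights.append(np.exp(-0.5 * d @ Q_inv @ d))
    weights = np.array(weights)
    probs = weights / weights.sum()              # normalized posterior probabilities
    order = np.argsort(-probs)
    return [candidates[i] for i in order], probs[order]

# Hypothetical 2-D float solution with a correlated variance matrix
a = np.array([3.27, -1.84])
Q = np.array([[0.040, 0.031],
              [0.031, 0.046]])
cands, probs = ambiguity_posteriors(a, Q)
print(cands[0], probs[0])    # most likely integer vector and its posterior probability
```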

2.
A technique for renormalization of integral equations is used to obtain very robust solutions. The number of multiplications needed to invert the integral equations can be reduced dramatically; in most cases only weighted means are required. The theoretical gain in computer time might be up to 100 000 times in the most favourable cases when using 1000 unknowns (practical gains will be considerably less). Solutions have been obtained with increased accuracy compared with the classical technique based on integral equations. Surface elements may be of arbitrary size, but the method is optimal for a global approach with equal-area elements. With simpler models, the solutions were found to be strictly invariant with respect to the depth of the internal sphere. Applications in surveying are possible after some modifications. Renormalization of integral equations has been widely used in quantum field theory.

3.
Collocation is widely used in physical geodesy. Its application requires solving systems with a dimension equal to the number of observations, which causes numerical problems when many observations are available. To overcome this drawback, tailored step-wise techniques are usually applied. An example of these step-wise techniques is the space-wise approach to the GOCE mission data processing. The original idea of this approach was to implement a two-step procedure, which consists of first predicting gridded values at satellite altitude by collocation and then deriving the geo-potential spherical harmonic coefficients by numerical integration. The idea was generalized to a multi-step iterative procedure by introducing a time-wise Wiener filter to reduce the highly correlated observation noise. Recent studies have shown how to optimize the original two-step procedure, while the theoretical optimization of the full multi-step procedure is investigated in this work. An iterative operator is derived such that the final estimated spherical harmonic coefficients are optimal with respect to the Wiener–Kolmogorov principle, as if they had been estimated by direct collocation. The logical scheme used to derive this optimal operator applies not only to the space-wise approach but, in general, to any case of step-wise collocation. Several numerical tests based on simulated realistic GOCE data are performed. The results show that adding a pre-processing time-wise filter to the two-step procedure of data gridding and spherical harmonic analysis is useful, in the sense that the accuracy of the estimated geo-potential coefficients is improved. This happens because, in its practical implementation, the gridding is done by collocation over local patches of data, while the observation noise has a time correlation so long that it cannot be treated within a single patch. Therefore, the multi-step operator, which is in theory equivalent to the two-step operator and to direct collocation, is in practice superior thanks to the time-wise filter that reduces the noise correlation before the gridding. The criteria for the choice of this filter are investigated numerically.
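As a side note on the time-wise pre-filter mentioned above, the sketch below applies a generic frequency-domain Wiener filter W(f) = S_s(f)/(S_s(f)+S_n(f)) to a series with strongly time-correlated noise. The assumed spectra, the random-walk noise model and all numbers are illustrative choices, not the filter actually designed in the paper.

```python
import numpy as np

def wiener_filter(y, signal_psd, noise_psd):
    """Apply a frequency-domain Wiener filter W = S_s / (S_s + S_n) to y.

    signal_psd and noise_psd are assumed to be given on the rfft frequency
    grid of y (a strong simplification of a real time-wise filter design).
    """
    W = signal_psd / (signal_psd + noise_psd)
    return np.fft.irfft(W * np.fft.rfft(y), n=len(y))

# Hypothetical example: a smooth signal buried in strongly time-correlated noise
n = 4096
t = np.arange(n)
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * t / 512)
red_noise = np.cumsum(0.02 * rng.standard_normal(n))    # random walk ~ low-frequency noise
y = signal + red_noise

f = np.fft.rfftfreq(n)
signal_psd = np.where(np.isclose(f, 1 / 512), 1.0, 1e-3)   # assumed signal spectrum
noise_psd = 1e-6 / np.maximum(f, f[1]) ** 2                 # assumed red-noise spectrum
filtered = wiener_filter(y, signal_psd, noise_psd)
```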

4.
The critical angle is the angle at which oil slicks reverse their contrast against the surrounding oil-free seawater under sunglint. Accurate determination of the critical angle can help estimate the surface roughness and refractive index of the oil slicks. Although it is difficult to determine a single critical angle, knowing the potential range of the critical angle helps improve the estimation accuracy. In this study, the angle between the viewing direction and the direction of mirror reflection is used as an indicator for quantifying the critical angle; it can be calculated from the solar/viewing geometry of observations by the Moderate Resolution Imaging Spectroradiometer (MODIS). Natural seep oil slicks in the Gulf of Mexico were first delineated using a customized segmentation approach that removes noise and applies a morphological filter. On the basis of the histograms of the brightness values of the delineated oil slicks, the potential range of the critical angle was determined, and an optimal critical angle between oil slicks and seawater was then obtained from statistical and regression analyses within this range. This critical angle corresponds to the best fit between the modeled and observed surface roughness of seep oil slicks and seawater.
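To make the geometric indicator concrete, the following sketch computes the angle between the viewing direction and the sun's mirror-reflection direction from solar/viewing zenith and azimuth angles. The azimuth convention, function name and example values are assumptions; this is not the MODIS processing chain used in the paper.

```python
import numpy as np

def angle_to_specular(sun_zen, sun_az, view_zen, view_az):
    """Angle (deg) between the viewing direction and the mirror-reflection
    direction of the sun, given solar/viewing zenith and azimuth angles (deg).

    Minimal geometric sketch; angle conventions are assumptions.
    """
    def unit(zen, az):
        zen, az = np.radians(zen), np.radians(az)
        return np.array([np.sin(zen) * np.cos(az),
                         np.sin(zen) * np.sin(az),
                         np.cos(zen)])
    # Specular (mirror) direction off a flat surface: same zenith as the sun,
    # azimuth rotated by 180 degrees.
    specular = unit(sun_zen, sun_az + 180.0)
    view = unit(view_zen, view_az)
    return np.degrees(np.arccos(np.clip(specular @ view, -1.0, 1.0)))

# Example with hypothetical geometry values
print(angle_to_specular(sun_zen=30.0, sun_az=120.0, view_zen=10.0, view_az=280.0))
```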

5.
Least-squares collocation originated as a mathematical method for combining various kinds of data to study the figure of the Earth and its gravity field, and it is now widely used in the processing of surveying and mapping data. This paper first analyses the solution methods currently used for least-squares collocation. After discussing the singular value decomposition (SVD) of a matrix, the relation between the SVD and the generalized inverse is derived, showing that the Moore-Penrose generalized inverse of a matrix can be obtained directly from its SVD. Formulas for the collocation estimates and for their accuracy based on the SVD are then derived. Finally, a gravity anomaly example is computed, confirming the correctness and feasibility of using the SVD to solve least-squares collocation and providing a new method for its solution.
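A minimal sketch of the SVD-to-pseudoinverse relation mentioned above (the collocation estimate and accuracy formulas of the paper are not reproduced): the Moore-Penrose inverse is assembled as A⁺ = V S⁺ Uᵀ, with only the non-negligible singular values inverted. The tolerance and the example matrix are assumptions.

```python
import numpy as np

def pinv_via_svd(A, tol=1e-12):
    """Moore-Penrose generalized inverse from the SVD: A+ = V S+ U^T."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(s > tol * s.max(), 1.0 / s, 0.0)   # invert only non-negligible singular values
    return Vt.T @ np.diag(s_inv) @ U.T

# Quick check against NumPy's built-in pseudoinverse on a rank-deficient matrix
A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])
print(np.allclose(pinv_via_svd(A), np.linalg.pinv(A)))   # True
```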

6.
The algorithm for transforming 3D Cartesian coordinates to geodetic coordinates is obtained by solving the equation of the Lagrange parameter. Numerical experiments show that the geodetic height can be recovered to 0.5 mm precision over the range from −6×10⁶ to 10¹⁰ m. Electronic Supplementary Material: supplementary material is available in the online version of this article.
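For reference, the sketch below converts Cartesian to geodetic coordinates with a standard fixed-point iteration on the latitude; it is not the Lagrange-parameter algorithm of the paper, and the WGS-84 constants, tolerances and example point are assumptions.

```python
import numpy as np

# WGS-84 ellipsoid constants (assumed here; the paper's method is ellipsoid-generic)
A_EARTH = 6378137.0                 # semi-major axis [m]
F = 1.0 / 298.257223563             # flattening
E2 = F * (2.0 - F)                  # first eccentricity squared

def cartesian_to_geodetic(x, y, z, tol=1e-12, max_iter=20):
    """Convert 3D Cartesian coordinates to geodetic latitude, longitude, height.

    Standard fixed-point iteration on the latitude -- a reference sketch only,
    not the Lagrange-parameter algorithm of the cited paper.
    """
    lon = np.arctan2(y, x)
    p = np.hypot(x, y)
    lat = np.arctan2(z, p * (1.0 - E2))                      # initial guess
    for _ in range(max_iter):
        n = A_EARTH / np.sqrt(1.0 - E2 * np.sin(lat) ** 2)   # prime-vertical radius
        h = p / np.cos(lat) - n
        lat_new = np.arctan2(z, p * (1.0 - E2 * n / (n + h)))
        if abs(lat_new - lat) < tol:
            lat = lat_new
            break
        lat = lat_new
    n = A_EARTH / np.sqrt(1.0 - E2 * np.sin(lat) ** 2)
    h = p / np.cos(lat) - n
    return np.degrees(lat), np.degrees(lon), h

# Example: an arbitrary test point (values are hypothetical)
print(cartesian_to_geodetic(3194419.0, 3194419.0, 4487348.0))
```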

7.
The cartogram, or value-by-area map, is a popular technique for cartographically representing social data. Such maps visually equalize a basemap prior to mapping a social variable by adjusting the size of each enumeration unit according to a second, related variable. However, to scale the basemap units according to an equalizing variable, cartograms must distort the shape and/or topology of the original geography. Such compromises reduce the effectiveness of the visualization for elemental and general map-reading tasks. Here we describe a new kind of representation, termed a value-by-alpha map, which visually equalizes the basemap by adjusting the alpha channel, rather than the size, of each enumeration unit. Although not without its own limitations, the value-by-alpha map is able to circumvent the compromise inherent to the cartogram form, perfectly equalizing the basemap while preserving both shape and topology.

8.
An approach to GLONASS ambiguity resolution   (Total citations: 7; self-citations: 2; other citations: 7)
J. Wang, Journal of Geodesy, 2000, 74(5): 421–430
When processing global navigation satellite system (GLONASS) carrier phases, the standard double-differencing (DD) procedure cannot cancel the receiver clock terms in the DD phase measurement equations, because the carrier phases are on multiple frequencies. Consequently, a receiver clock parameter has to be set up in the measurement equations in addition to the baseline components and DD ambiguities. The resulting normal matrix unfortunately becomes singular. Methods to deal with this problem have been proposed in the literature; however, these methods rely on the use of pseudo-ranges. As pseudo-ranges are contaminated by multipath and hardware delays, the biases in them are significant, which may result in unreliable ambiguity resolution. A new approach is presented that is not sensitive to the biases in the pseudo-ranges. The proposed approach includes converting the carrier phases to distances so as to cancel the receiver clock errors, and searching for the most likely single-differenced (SD) ambiguity. Based on the results of the theoretical investigation, a practical procedure for GLONASS ambiguity resolution is presented. Initial experimental results demonstrate that the proposed approach is usable for GLONASS and for combined global positioning system (GPS) and GLONASS positioning. Received: 19 August 1998 / Accepted: 12 November 1999

9.
Fast collocation     
In this paper a new method to compute the collocation solution in a fast and reliable way is presented. In order to speed up the numerical procedures, some restrictions on the input data are needed. The basic assumption is that the data are gridded and homogeneous; this implies that the autocovariance matrix entering the collocation formula is of Toeplitz type. In particular, if the observations are placed on a two-dimensional planar grid, the autocovariance matrix is a symmetric block Toeplitz matrix and each block is itself a symmetric Toeplitz matrix (Toeplitz/Toeplitz structure). The analysis can be extended to a regular geographical grid, considered as a generalization of the planar one, by taking into account the distortions of the Toeplitz/Toeplitz structure induced by the convergence of the meridians. The devised method is based on a combined application of the preconditioned conjugate gradient method and the fast Fourier transform. This allows a proper exploitation of the Toeplitz/Toeplitz structure of the autocovariance matrix in computing the collocation solution. Numerical tests proved that the application of this algorithm leads to a relevant decrease in CPU time compared with standard methods used to solve a collocation problem (Cholesky, Levinson).
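A sketch of the core idea in a simplified 1-D setting with an assumed exponential covariance: a symmetric Toeplitz matrix is embedded in a circulant one so that matrix-vector products cost O(n log n) via the FFT, and the collocation system is then solved iteratively with conjugate gradients (here without the preconditioner used in the paper, and with made-up data).

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def toeplitz_matvec(first_col, x):
    """Multiply a symmetric Toeplitz matrix (defined by its first column) by x
    via circulant embedding and the FFT, in O(n log n) instead of O(n^2)."""
    n = len(first_col)
    c = np.concatenate([first_col, first_col[-1:0:-1]])   # circulant of size 2n - 1
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x, len(c)))
    return y[:n].real

# Hypothetical 1-D example: gridded observations with an exponential covariance
n = 512
lags = np.arange(n) * 1.0                        # unit grid spacing (arbitrary units)
cov = np.exp(-lags / 50.0)                       # assumed signal autocovariance model
noise_var = 0.01
obs = np.random.default_rng(0).standard_normal(n)    # stand-in for reduced observations

# (C_signal + C_noise) as a matrix-free linear operator
first_col = cov.copy()
first_col[0] += noise_var
A = LinearOperator((n, n), matvec=lambda x: toeplitz_matvec(first_col, x), dtype=float)

# Solve (C + D) xi = obs with conjugate gradients, then predict s = C_signal @ xi
xi, info = cg(A, obs)
prediction = toeplitz_matvec(cov, xi)
```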

10.
This paper addresses implementation issues that arise when non-stationary least-squares collocation (LSC) is applied to a practical geodetic problem: fitting a gravimetric quasigeoid to discrete geometric quasigeoid heights at a local scale. This yields a surface that is useful for direct GPS heighting. Non-stationary covariance functions and a non-stationary model of the mean were applied to residual gravimetric quasigeoid determination by planar LSC in the Perth region of Western Australia. The non-stationary model of the mean did not change the LSC results significantly. However, elliptical kernels in the non-stationary covariance functions were used successfully to create an iterative optimisation loop that decreases the difference between the gravimetric and geometric quasigeoid at 99 GPS-levelling points to a user-prescribed tolerance.

11.
A constraint-based method for the automated generalization of urban street networks   (Total citations: 1; self-citations: 0; other citations: 1)
In recent years, constraint-based automated map generalization has become a research focus. Building on the theory of constraint-based automated generalization, this study proposes a method for the automated generalization of urban street networks. The method relies on the constraints involved in street network generalization and an improved dynamic decision-tree structure; it integrates the constraints into a dedicated data structure and achieves progressive generalization of the street network. Experiments demonstrate the feasibility of the method.

12.
Harnessing the radiometric information provided by photogrammetric flights could be useful for increasing the thematic applications of aerial images. The aim of this paper is to improve the relative and absolute homogenization of aerial images by applying atmospheric correction and a treatment of bidirectional effects. We propose combining remote sensing methodologies based on radiative transfer models with photogrammetric models, taking into account the three-dimensional geometry of the images (exterior orientation and Digital Elevation Model). The photogrammetric flight was carried out with a Z/I Digital Mapping Camera (DMC) at a Ground Sample Distance (GSD) of 45 cm. Spectral field data were acquired by defining radiometric control points in order to apply atmospheric correction models, yielding calibration parameters for the camera and surface reflectance images. Kernel-driven models were applied to correct the anisotropy caused by the bidirectional reflectance distribution function (BRDF) of surfaces viewed under large observation angles with constant illumination, using the overlapping area between images and the establishment of radiometric tie points. Two case studies were used: 8-bit images with applied lookup tables (LUTs), resulting from the conventional photogrammetric workflow, for the BRDF studies, and original 12-bit images (Low Resolution Color, LRC) for the correction of atmospheric and bidirectional effects. The proposed methodology shows promising results in the different phases of the process. The geometric kernel that shows the best performance is the Li-Dense kernel. The homogenization factor ranged from 6% to 25% relative to the range of digital numbers (0–255) in the 8-bit images, and from 18% to 35% relative to the levels of reflectance (0–100) in the 12-bit images, representing a relative improvement of approximately 1–30%, depending on the band analyzed.

13.
Certain geodetic problems, such as the downward continuation of gravity information from satellite or aerial altitudes to the surface of the earth, or the inverse Stokes problem, are improperly posed in the sense that the best approximate solution does not depend continuously on the given observations. In order to obtain a stable solution, a regularization technique is discussed which can be shown to be identical to the method of least-squares collocation. The characteristic features of regularization are analysed by transferring the problem into the spectral domain through a singular value decomposition, which allows an easy interpretation as a special method of filtering. In practical applications the stability depends not only on the observational errors, including the computer round-off, but equally on the number of observations and their distribution. The regularized solution should achieve a proper trade-off between sufficient smoothness and the highest possible resolution, with a limit defined by the internal accuracy of the computer.
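The filtering interpretation can be illustrated with Tikhonov regularization in the spectral domain, which is one standard realization of the idea (the exact equivalence with least-squares collocation established in the paper is not reproduced here); the test matrix, noise level and regularization parameter below are assumptions.

```python
import numpy as np

def regularized_solution(A, b, alpha):
    """Tikhonov-regularized least-squares solution via the SVD.

    Filtering view: each spectral component is damped by the filter factor
    f_i = s_i^2 / (s_i^2 + alpha^2) instead of being divided directly by a
    (possibly tiny) singular value s_i.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    filter_factors = s**2 / (s**2 + alpha**2)
    return Vt.T @ (filter_factors * (U.T @ b) / s)

# Hypothetical ill-conditioned example (a nearly rank-deficient design matrix)
rng = np.random.default_rng(1)
A = np.vander(np.linspace(0.0, 1.0, 40), 12, increasing=True)
x_true = rng.standard_normal(12)
b = A @ x_true + 1e-4 * rng.standard_normal(40)

x_naive = np.linalg.lstsq(A, b, rcond=None)[0]    # unstable for small singular values
x_reg = regularized_solution(A, b, alpha=1e-3)    # smoother, more stable estimate
```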

14.
Least squares adjustment and collocation   (Total citations: 10; self-citations: 1; other citations: 10)
Summary: For the estimation of parameters in linear models, best linear unbiased estimates are derived for the case that the parameters are random variables. If their expected values are unknown, the well-known formulas of least squares adjustment are obtained. If the expected values of the parameters are known, least squares collocation, prediction and filtering are derived. Hence, in the case of the determination of parameters, a least squares adjustment must precede a collocation, because otherwise the collocation gives biased estimates. Since the collocation can be shown to be equivalent to a special case of the least squares adjustment, the variance of unit weight can also be estimated for the collocation. This estimate gives the scale factor for the covariance matrices used in the collocation. In addition, the methods of testing hypotheses and establishing confidence intervals for the parameters of the least squares adjustment may be applied to the collocation.
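A minimal numerical sketch of the two-stage scheme described above, using generic symbols chosen here rather than the paper's notation: adjust the non-random parameters by least squares first, then predict the random signal by collocation from the adjusted residuals.

```python
import numpy as np

def adjust_then_collocate(A, l, C_ss, C_st, C_tt, D):
    """Least-squares adjustment followed by collocation (prediction/filtering).

    Assumed model (generic symbols): l = A x + t + n, with trend parameters x,
    signal t (covariance C_tt), noise n (covariance D), and a signal s to be
    predicted (auto-covariance C_ss, cross-covariance C_st with t).
    """
    C = C_tt + D                                   # covariance of the observations
    C_inv = np.linalg.inv(C)
    # 1) least-squares adjustment of the (non-random) parameters x
    N = A.T @ C_inv @ A
    x_hat = np.linalg.solve(N, A.T @ C_inv @ l)
    # 2) collocation: predict the signal from the adjusted residuals
    s_hat = C_st @ C_inv @ (l - A @ x_hat)
    # error covariance of the predicted signal (ignoring the uncertainty of x_hat)
    C_err = C_ss - C_st @ C_inv @ C_st.T
    return x_hat, s_hat, C_err

# Tiny synthetic example (all numbers hypothetical)
rng = np.random.default_rng(0)
A = np.column_stack([np.ones(5), np.arange(5.0)])                      # offset + drift trend
C_tt = 0.5 ** np.abs(np.subtract.outer(np.arange(5), np.arange(5)))    # signal covariance
D = 0.01 * np.eye(5)                                                   # noise covariance
l = A @ np.array([1.0, 0.2]) + rng.multivariate_normal(np.zeros(5), C_tt + D)
x_hat, s_hat, C_err = adjust_then_collocate(A, l, C_tt, C_tt, C_tt, D)
```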

15.
T. Krarup proposed the use of collocation with kernel functions for the approximation of a potential function on the earth's surface as well as in local regions of a sphere. Starting from the smoothing criterion of the least norm of the horizontal gradients on a sphere, a neighbourhood criterion was derived, taking into account both the smoothness and the stability properties of the series evaluation. It is finally shown how to choose the kernel functions in order to obtain a smooth interpolation function at the surface of the earth.

16.
Least-squares collocation with covariance-matching constraints   (Total citations: 1; self-citations: 0; other citations: 1)
Most geostatistical methods for spatial random field (SRF) prediction using discrete data, including least-squares collocation (LSC) and the various forms of kriging, rely on the use of prior models describing the spatial correlation of the unknown field over its domain. Based upon an optimal criterion of maximum local accuracy, LSC provides an unbiased field estimate that has the smallest mean squared prediction error, at every computation point, among all linear prediction methods that use the same data. However, LSC field estimates do not reproduce the spatial variability implied by the adopted covariance (CV) functions of the corresponding unknown signals. This smoothing effect can be considered a critical drawback in the sense that the spatio-statistical structure of the unknown SRF (e.g., the disturbing potential in the case of gravity field modeling) is not preserved during its optimal estimation process. If the objective of estimating an SRF from its observed functionals requires spatial variability to be represented in a pragmatic way, then the results obtained through LSC may pose limitations for further inference and modeling in Earth-related physical processes, despite their local optimality in terms of minimum mean squared prediction error. The aim of this paper is to present an approach that enhances LSC-based field estimates by eliminating their inherent smoothing effect, while preserving most of their local prediction accuracy. Our methodology consists of correcting a posteriori the optimal result obtained from LSC in such a way that the new field estimate matches the spatial correlation structure implied by the signal CV function. Furthermore, an optimal criterion is imposed on the CV-matching field estimator that minimizes the loss in local prediction accuracy (in the mean squared sense) which occurs when we transform the LSC solution to fit the spatial correlation of the underlying SRF.

17.
Formulas for stepwise collocation (fitting and prediction) of nonlinear systems are derived, both under white noise and under colored noise. In the stepwise collocation theory for nonlinear systems, the first step is nonlinear collocation and each subsequent step is nonlinear filtering. Stepwise collocation of nonlinear systems under white noise is a special case of stepwise collocation under colored noise.

18.
The multiresolution character of collocation   (Total citations: 3; self-citations: 0; other citations: 3)
An interesting theoretical connection between the statistical (non-stochastic) collocation principle and the multiresolution/wavelet framework of signal approximation is presented. The rapid developments in multiresolution analysis theory over the past few years have provided very useful (theoretical and practical) tools for approximation and spectral studies of irregularly varying signals, thus opening new possibilities for 'non-stationary' gravity field modeling. It is demonstrated that the classic multiresolution formalism according to Mallat's pioneering work lies at the very core of some of the general approximation principles traditionally used in physical geodesy problems. In particular, it is shown that the use of a spatio-statistical (non-probabilistic) minimum mean-square-error criterion for optimal linear estimation of deterministic signals, in conjunction with regularly gridded data, always gives rise to a generalized multiresolution analysis in the Hilbert space L²(R), under some mild constraints on the spatial covariance function and the power spectrum of the unknown field under consideration. Using the theory and the actual approximation algorithms associated with statistical collocation, a new constructive framework for building generalized multiresolution analyses in L²(R) is presented, without the need for the usual dyadic restriction that exists in classic wavelet theory. The multiresolution and 'non-stationary' aspects of the statistical collocation approximation procedure are also discussed, and finally some conclusions and recommendations for future work are given. Received: 26 January 1999 / Accepted: 16 August 1999

19.
The paper presents an approach to internal reliability analysis of observation systems known as Errors-in-Variables (EIV) models with parameters estimated by the method of least squares. Such problems are routinely treated by total least squares adjustment, or orthogonal regression. To create a suitable environment for derivations in the analysis, a general nonlinear form of such EIV models is assumed, based on a traditional adjustment method of condition equations with unknowns, also known as the Gauss–Helmert model. However, in order to apply the method of reliability analysis based on the approach to response assessment in systems with correlated observations, presented in the earlier work of this author, it was necessary to confine the considerations to a quasi-linear form of the Gauss–Helmert model, representing quasi-linear EIV models. This made it possible to obtain the linear disturbance/response relationship needed in that approach. Several specific cases of quasi-linear EIV models are discussed. The derived formulas are consistent with those already functioning for standard least squares adjustment problems. The analysis shows that, as could be expected, the average level of response-based reliability for such EIV models under investigation is lower than that for the corresponding standard linear models. For EIV models with homoscedastic and uncorrelated observations, the relationship between the average reliability indices for the independent and the dependent variables is formulated for multiple regression and coordinate transformations. Numerical examples for these two applications are provided to illustrate this analysis.

20.
A robust-estimation approach to interpolating multibeam bathymetric data   (Total citations: 2; self-citations: 0; other citations: 2)
To address the insufficient robustness of traditional gridding interpolation methods for multibeam bathymetric data, this paper applies robust estimation theory to gridding interpolation. A joint distance-uncertainty weighted interpolation model is established on the basis of uncertainty indices. Equivalent weights are introduced into the weight function of the model, and a scheme for designing the equivalent weights together with a principle for choosing the initial values of the iteration is given, in order to enhance the robustness of the model. A reasonable method for selecting the reference points in the neighbourhood of a grid node is introduced, and the basic steps of the improved interpolation method are given. Finally, measured data are used to compare the quality of the depths interpolated by inverse-distance weighting, by joint weighting and by the improved method. The results show that the improved method outperforms the other two and can be applied to the gridding of multibeam bathymetric data.
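To illustrate the flavour of such an equivalent-weight scheme, the sketch below reweights an ordinary inverse-distance interpolation of one grid node with a Huber-type weight function. The weight function, the constants and the sample soundings are assumptions and do not reproduce the uncertainty-based model or the neighbourhood selection of the paper.

```python
import numpy as np

def robust_idw(depths, distances, k=1.5, power=2, max_iter=10, tol=1e-4):
    """Inverse-distance weighting with Huber-type equivalent weights.

    Minimal robust-gridding sketch: the distance weights are iteratively
    down-weighted for observations whose residuals are large relative to a
    robust scale estimate (MAD).
    """
    w = 1.0 / np.maximum(distances, 1e-6) ** power       # initial distance weights
    z = np.sum(w * depths) / np.sum(w)                    # ordinary IDW as initial value
    for _ in range(max_iter):
        resid = depths - z
        sigma = 1.4826 * np.median(np.abs(resid)) + 1e-12  # robust scale (MAD)
        u = np.abs(resid) / sigma
        eq = np.where(u <= k, 1.0, k / u)                  # Huber-type equivalent weight factor
        w_eq = w * eq
        z_new = np.sum(w_eq * depths) / np.sum(w_eq)
        if abs(z_new - z) < tol:
            return z_new
        z = z_new
    return z

# Hypothetical node with six neighbouring soundings, one of them an outlier
depths = np.array([35.2, 35.4, 35.1, 35.3, 35.2, 41.0])
distances = np.array([3.0, 4.5, 5.0, 6.2, 7.1, 4.0])
print(robust_idw(depths, distances))
```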
