Similar literature
A total of 20 similar records were retrieved.
1.
The findings of this paper are summarized as follows: (1) We propose a sign-constrained robust estimation method, which can tolerate 50% data contamination while achieving high, least-squares-comparable efficiency. Since the objective function is identical to that of least squares, the method may also be called sign-constrained robust least squares. An iterative version of the method has been implemented and shown to be capable of resisting more than 50% contamination. As a by-product, a robust estimate of the scale parameter can also be obtained. Unlike the least median of squares method and repeated medians, which use the smallest possible number of data points to derive the solution, the sign-constrained robust least squares method attempts to employ the maximum possible number of good data points to derive the robust solution, and thus is not affected by near multi-collinearity among part of the data or by some of the data being clustered together; (2) although M-estimates have been reported to have a breakdown point of 1/(t+1), we have shown that the weights of observations can readily degrade such results and bring the breakdown point of M-estimates of Huber’s type to zero. The same zero breakdown point of the L1-norm method is also derived, again due to the weights of observations; (3) by assuming a prior distribution for the signs of outliers, we have developed the concept of a subjective breakdown point, which may be thought of as an extension of the stochastic breakdown of Donoho and Huber but can be important in explaining real-life problems in Earth Sciences and image reconstruction; and finally, (4) we have shown that the least median of squares method can still break down with a single outlier, even if neither highly concentrated good data nor highly concentrated outliers exist. An erratum to this article is available at .
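To make finding (2) concrete, the hedged sketch below shows how a prior observation weight attached to a single gross outlier can defeat the robustness of a Huber-type M-estimate, here for a simple location parameter; the data, the prior-weight scheme, and the tuning constant k = 1.345 are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch only, not the paper's sign-constrained estimator: a weighted
# Huber M-estimate of a location parameter, showing how a large prior observation
# weight attached to a single gross outlier can defeat its robustness (finding (2)).
# The data values, the prior-weight scheme and k = 1.345 are made up for illustration.
import numpy as np

def weighted_huber_location(y, prior_w, k=1.345, iters=50):
    """Iteratively reweighted location estimate with prior observation weights."""
    x = np.median(y)                                  # robust starting value
    for _ in range(iters):
        r = y - x
        s = 1.4826 * np.median(np.abs(r)) + 1e-12     # MAD scale estimate
        u = np.abs(r) / s
        huber_w = np.minimum(1.0, k / np.maximum(u, 1e-12))   # Huber weight function
        w = prior_w * huber_w                         # prior weights multiply the robust weights
        x = np.sum(w * y) / np.sum(w)
    return x

rng = np.random.default_rng(0)
y = rng.normal(10.0, 1.0, 20)
y[0] = 1.0e4                                          # one gross outlier
equal_w = np.ones_like(y)
skewed_w = equal_w.copy()
skewed_w[0] = 1.0e3                                   # the outlier carries a huge prior weight

print(weighted_huber_location(y, equal_w))            # stays near 10: Huber copes with the outlier
print(weighted_huber_location(y, skewed_w))           # dragged towards 1e4: the weight defeats robustness
```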

2.
Robust estimation of geodetic datum transformation
Y. Yang 《Journal of Geodesy》1999,73(5):268-274
The robust estimation of geodetic datum transformation is discussed. The basic principle of robust estimation is introduced. The error influence functions of the robust estimators, together with those of least-squares estimators, are given. Particular attention is given to the robust initial estimates of the transformation parameters, which should have a high breakdown point in order to provide reliable residuals for the following estimation. The median method is applied to solve for robust initial estimates of transformation parameters since it has the highest breakdown point. A smooth weight function is then used to improve the efficiency of the parameter estimates in successive iterative computations. A numerical example is given on a datum transformation between a global positioning system network and the corresponding geodetic network in China. The results show that when the coordinates are contaminated by outliers, the proposed method can still give reasonable results. Received: 25 September 1997 / Accepted: 1 March 1999
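As a rough, translation-only stand-in for the full seven-parameter datum transformation discussed above, the sketch below combines a high-breakdown median initial estimate with smooth iterative reweighting; the simulated coordinates, the Huber-type weight function, and the constant k = 1.5 are assumptions for illustration and do not reproduce Yang's exact scheme.

```python
# Translation-only toy version of the two-step idea: component-wise medians of the
# coordinate differences give high-breakdown initial values, then a smooth Huber-type
# weight function refines the estimate iteratively. The simulated network, the weight
# function and k = 1.5 are assumptions; Yang's full seven-parameter scheme is not shown.
import numpy as np

rng = np.random.default_rng(1)
src = rng.uniform(0, 1000, size=(30, 3))              # source network coordinates
true_t = np.array([12.3, -7.8, 4.1])
dst = src + true_t + rng.normal(0, 0.01, size=src.shape)
dst[:5] += rng.normal(0, 5.0, size=(5, 3))            # five stations contaminated by outliers

d = dst - src
t = np.median(d, axis=0)                              # robust initial estimate (highest breakdown)

k = 1.5
for _ in range(10):                                   # smooth reweighting iterations
    r = d - t
    s = 1.4826 * np.median(np.abs(r), axis=0) + 1e-9  # per-component robust scale
    u = np.abs(r) / s
    w = np.minimum(1.0, k / np.maximum(u, 1e-9))      # smooth Huber-type weight function
    t = np.sum(w * d, axis=0) / np.sum(w, axis=0)

print("estimated translation:", t)                    # close to true_t despite the outliers
```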

3.
To address the low fitting accuracy that conventional methods achieve on noisy point cloud data, a method that effectively improves the fitting accuracy is proposed. Building on moving least squares and taking into account the noise in the observations, a threshold is set to reject the noise, yielding results of higher accuracy. Experiments show that the proposed method can effectively remove noise from point cloud data, improve the accuracy of the fitting results, and provide better stability.
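A minimal sketch of the thresholding idea, assuming a local plane model, Gaussian moving-least-squares weights, and an arbitrary residual threshold (none of these details are specified in the abstract):

```python
# Rough sketch of the idea, assuming a local plane model, Gaussian moving-least-squares
# weights and an arbitrary residual threshold (none of these details come from the paper).
import numpy as np

def local_plane_fit(pts, center, radius=1.0, thresh=0.05):
    """Fit z = a*x + b*y + c near `center`, rejecting points flagged as noise."""
    nb = pts[np.linalg.norm(pts[:, :2] - center[:2], axis=1) < radius]
    w = np.exp(-np.sum((nb[:, :2] - center[:2]) ** 2, axis=1) / radius ** 2)  # MLS weights
    sw = np.sqrt(w)
    A = np.c_[nb[:, 0], nb[:, 1], np.ones(len(nb))]
    coef = np.linalg.lstsq(A * sw[:, None], nb[:, 2] * sw, rcond=None)[0]     # weighted LS
    keep = np.abs(A @ coef - nb[:, 2]) < thresh        # threshold the residuals to reject noise
    return np.linalg.lstsq(A[keep], nb[keep, 2], rcond=None)[0]               # refit on clean points

rng = np.random.default_rng(2)
pts = rng.uniform(-1, 1, (500, 3))
pts[:, 2] = 0.3 * pts[:, 0] - 0.2 * pts[:, 1] + rng.normal(0, 0.01, 500)
pts[::50, 2] += 0.5                                    # sprinkle in some noisy points
print(local_plane_fit(pts, np.array([0.0, 0.0, 0.0])))  # roughly [0.3, -0.2, 0.0]
```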

4.
For detection of gross errors in processing triangulation networks, this paper introduces the principle of designing robust estimators with high breakdown points based on the median approach. Three examples presented in the paper show how to form the combinations of observations while considering geometrical constraints necessary for computing the median estimate, and how to calculate the breakdown points as a measure of global reliability of the estimate.

5.
An equivalent-weight robust estimation method based on median-derived initial parameter values
Equivalent-weight robust estimation retains the favorable properties of least-squares estimation for handling normal observations, but its robustness depends strongly on the initial values; if least-squares estimates are used as initial values, the robustness is inevitably impaired. The median method has very good robustness, but it uses only part of the observations to compute the parameter estimates and therefore discards a large amount of useful information. For a robust estimation method based on median-derived parameters, a way of estimating its breakdown contamination rate with finite samples is given. Drawing on the respective advantages of the median-parameter method and the equivalent-weight robust estimation method…

6.
The proper and optimal design and subsequent assessment of geodetic networks is an integral part of most surveying engineering projects. Optimization and design are carried out before the measurements are actually made. A geodetic network is designed and optimized in terms of high reliability and the results are compared with those obtained by the robustness analysis technique. The purpose of an optimal design is to solve for both the network configuration (first-order design) and observation accuracy (second-order design) in order to meet the desired criteria. For this purpose, an analytical method is presented for performing the first-order design, second-order design, and/or the combined design. In order to evaluate the geometrical strength of a geodetic network, the results of robustness analysis are displayed in terms of robustness in rotation, robustness in shear, and robustness in scale. Results showed that the robustness parameters were affected by redundancy numbers. The largest robustness parameters were due to the observations with minimum redundancy numbers. Received: 14 August 2000 / Accepted: 2 January 2001

7.
Robust estimation of systematic errors of satellite laser range
Methods for analyzing laser-ranging residuals to estimate station-dependent systematic errors and to eliminate outliers in satellite laser ranges are discussed. A robust estimator based on the M-estimation principle is introduced. A practical calculation procedure is presented which provides a robust criterion with a high breakdown point and produces robust initial residuals for the subsequent iterative robust estimation. Comparison of the results from the least-squares method with those of the robust method shows that the estimates of the station systematic errors from the robust estimator are more reliable. Received: 18 March 1997 / Accepted: 17 March 1999

8.
姚宜斌 《测绘工程》2001,10(2):29-31,35
When gross errors are added to the observations, their influence can be eliminated by adjusting the weights of the observations; the adjustment result obtained by robust estimation of observations containing gross errors should agree with the least-squares adjustment result obtained before the gross errors were added. Following this idea, and using the functional model of indirect (parametric) adjustment together with the classical least-squares principle, this paper derives the equivalent weight function of a robust estimation based on the equivalence-analysis approach.

9.
1 Introduction On the basis of large quantities of data analyzed, statisticians point out that the probability of outliers in practice and scientific experiments is approximately 1%–10% [1]. Outliers always affect the correctness of results. The method of least squares is very sensitive to outli…

10.
Based on the basic theory of iteratively reweighted (weight-selection) estimation, this paper proposes first using LMS (least median of squares) robust estimation to determine the initial residuals and then carrying out the reweighting iterations. The resulting estimator inherits the high-breakdown-point (BP) robustness of the LMS method while retaining the high estimation efficiency of iterative reweighting; when no outliers are present, its results essentially agree with those of least-squares estimation.
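The two-stage procedure (LMS for initial residuals, then reweighting) might look roughly like the sketch below for a simple straight-line model; the elemental-subset LMS search, the hard-rejection weights, and the cutoff of 2.5 are illustrative choices rather than the paper's.

```python
# Hedged sketch of the two-stage procedure for a straight line y = a*x + b: random
# elemental subsets approximate the LMS fit, and its residuals seed a reweighted LS step.
# Subset count, the hard-rejection weights and the 2.5 cutoff are illustrative choices.
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, 50)
y[:15] += rng.uniform(5, 20, 15)                       # 30% gross errors
A = np.c_[x, np.ones_like(x)]

# Stage 1: least median of squares via random two-point elemental subsets.
best, best_med = None, np.inf
for _ in range(500):
    i = rng.choice(len(x), 2, replace=False)
    p = np.linalg.solve(A[i], y[i])
    med = np.median((y - A @ p) ** 2)
    if med < best_med:
        best, best_med = p, med

# Stage 2: reweighted least squares started from the LMS residuals.
r = y - A @ best
s = 1.4826 * np.sqrt(best_med)                         # robust scale from the LMS criterion
w = (np.abs(r) / s < 2.5).astype(float)                # simple 0/1 rejection weights
p = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)[0]
print("slope, intercept:", p)                          # near (2, 1) despite 30% outliers
```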

11.
As a representative nonlinear filter, the particle filter is widely used. The algorithm computes the posterior probability density from randomly generated weighted samples (particles). The weighting step combines system information and observation information, so when the observations contain gross errors the particles are weighted incorrectly and the filtering result is degraded. A new robust particle filter algorithm based on robust estimation is proposed, which suppresses the influence of gross errors in the observations by computing equivalent weights for the particles. Simulation analysis shows that, when the observations contain gross errors, the method effectively improves the filtering accuracy compared with the standard particle filter.
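A hedged sketch of a robust particle filter in this spirit, assuming a scalar random-walk model and a Huber-type equivalent weight applied to the standardised innovation before the particle weights are formed (the authors' model and weight function are not reproduced here):

```python
# Sketch under assumptions: a scalar random-walk state, a bootstrap particle filter, and a
# Huber-type equivalent weight applied to the standardised innovation before the particle
# weights are formed. The authors' exact model and weight function are not reproduced.
import numpy as np

rng = np.random.default_rng(5)
T, N = 60, 1000
q, r = 0.1, 0.5                                        # process / observation std devs
truth = np.cumsum(rng.normal(0, q, T))
obs = truth + rng.normal(0, r, T)
obs[30] += 20.0                                        # a single gross error in the observations

def robust_pf(obs, k=1.5):
    particles = rng.normal(0, 1, N)
    est = []
    for z in obs:
        particles = particles + rng.normal(0, q, N)    # propagate with the process model
        v = (z - particles) / r                        # standardised innovations
        infl = np.maximum(1.0, np.abs(v) / k)          # equivalent-weight inflation factor
        logw = -0.5 * v ** 2 / infl                    # flattens the likelihood at gross-error epochs
        w = np.exp(logw - logw.max())
        w /= w.sum()
        est.append(np.sum(w * particles))
        particles = particles[rng.choice(N, N, p=w)]   # multinomial resampling
    return np.array(est)

err = np.abs(robust_pf(obs) - truth)
print("error at the gross-error epoch:", err[30])      # stays small despite the outlier
```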

12.
J. Saleh 《Journal of Geodesy》2000,74(3-4):291-305
It is argued that the tendency of nature to minimize energy may be used as a unifying basis for all robust estimators. Robustness is defined and discussed based on mechanical rather than empirical and abstract tools. This mechanical view of robustness is then extended to design new and useful robust data editors that suppress the outlying content of the contaminated observations. These editors are applied to edit samples of sea-surface heights, gravity observations and reduced global positioning system baselines. Received: 12 July 1997 / Accepted: 8 June 1999

13.
This paper proposes robust methods for local planar surface fitting in 3D laser scanning data. Searching through the literature revealed that many authors frequently used Least Squares (LS) and Principal Component Analysis (PCA) for point cloud processing without any treatment of outliers. It is known that LS and PCA are sensitive to outliers and can give inconsistent and misleading estimates. RANdom SAmple Consensus (RANSAC) is one of the most well-known robust methods used for model fitting when noise and/or outliers are present. We concentrate on the recently introduced Deterministic Minimum Covariance Determinant estimator and robust PCA, and propose two variants of statistically robust algorithms for fitting planar surfaces to 3D laser scanning point cloud data. The performance of the proposed robust methods is demonstrated by qualitative and quantitative analysis through several synthetic and mobile laser scanning 3D data sets for different applications. Using simulated data, and comparisons with LS, PCA, RANSAC, variants of RANSAC and other robust statistical methods, we demonstrate that the new algorithms are significantly more efficient, faster, and produce more accurate fits and robust local statistics (e.g. surface normals), necessary for many point cloud processing tasks. Consider one example data set consisting of 100 points representing a plane, 20% of which are outliers. The proposed methods, called DetRD-PCA and DetRPCA, produce bias angles (angle between the fitted planes with and without outliers) of 0.20° and 0.24° respectively, whereas LS, PCA and RANSAC produce worse bias angles of 52.49°, 39.55° and 0.79° respectively. In terms of speed, DetRD-PCA takes 0.033 s on average for fitting a plane, which is approximately 6.5, 25.4 and 25.8 times faster than RANSAC and two other robust statistical methods, respectively. The estimated robust surface normals and curvatures from the new methods have been used for plane fitting, sharp feature preservation and segmentation in 3D point clouds obtained from laser scanners. The results are significantly better and more efficiently computed than those obtained by existing methods.
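For orientation, the sketch below implements the plain RANSAC plane fit that the paper uses as a baseline, followed by a PCA refit on the consensus set; it is not the proposed DetRD-PCA or DetRPCA estimator, and the trial count and distance threshold are arbitrary.

```python
# Bare-bones version of the RANSAC baseline (not the proposed DetRD-PCA / DetRPCA
# estimators); the trial count and distance threshold are arbitrary illustrative values.
import numpy as np

def ransac_plane(pts, trials=200, thresh=0.02, seed=6):
    """Return (unit normal, centroid) of the best plane found by RANSAC + PCA refit."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(trials):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-12:
            continue                                   # degenerate (collinear) sample
        n /= np.linalg.norm(n)
        inliers = np.abs((pts - p0) @ n) < thresh      # point-to-plane distances
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    P = pts[best_inliers]                              # refine on the consensus set
    c = P.mean(axis=0)
    normal = np.linalg.svd(P - c)[2][-1]               # smallest right singular vector
    return normal, c

rng = np.random.default_rng(7)
pts = np.c_[rng.uniform(0, 1, (100, 2)), rng.normal(0, 0.005, 100)]   # plane z ~ 0
pts[:20, 2] += rng.uniform(0.5, 1.0, 20)                              # 20% outliers
normal, _ = ransac_plane(pts)
print("estimated normal:", np.round(normal, 3))        # close to (0, 0, ±1)
```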

14.
Robust Kalman filter for rank deficient observation models
A robust Kalman filter is derived for rank deficient observation models. The datum for the Kalman filter is introduced at the zero epoch by the choice of a generalized inverse. The robust filter is obtained by Bayesian statistics and by applying a robust M-estimate. Outliers are not only looked for in the observations but also in the updated parameters. The ability of the robust Kalman filter to detect outliers is demonstrated by an example. Received: 8 November 1996 / Accepted: 11 February 1998
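A simplified illustration of this kind of robust filtering, assuming a full-rank scalar random-walk model and Huber-type inflation of the observation variance for suspect innovations; it does not reproduce the paper's rank-deficient datum handling or its Bayesian M-estimate derivation.

```python
# Simplified illustration: a full-rank scalar random-walk model with Huber-type inflation
# of the observation variance for suspect innovations. The paper's rank-deficient datum
# handling and its Bayesian M-estimate derivation are not reproduced here.
import numpy as np

rng = np.random.default_rng(8)
T, q, r = 80, 0.05, 0.3                                # epochs, process / observation std devs
truth = np.cumsum(rng.normal(0, q, T))
z = truth + rng.normal(0, r, T)
z[40] -= 8.0                                           # outlier in the observations

def robust_kf(z, k=2.0):
    x, P = 0.0, 1.0
    out = []
    for zi in z:
        P = P + q ** 2                                 # prediction (random-walk state)
        v = zi - x                                     # innovation
        t = abs(v) / np.sqrt(P + r ** 2)               # standardised innovation
        r_eq = r if t <= k else r * (t / k)            # equivalent weight: inflate R for outliers
        K = P / (P + r_eq ** 2)                        # gain with the down-weighted observation
        x = x + K * v
        P = (1 - K) * P
        out.append(x)
    return np.array(out)

print("error at the outlier epoch:", abs(robust_kf(z)[40] - truth[40]))
```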

15.
Studies on small-world networks have received intensive interdisciplinary attention during the past several years. It is well known among researchers that a small-world network is often characterized by high connectivity and clustering, but so far few effective approaches exist to evaluate small-world properties, especially for spatial networks. This paper proposes a method to examine the small-world properties of spatial networks from the perspective of network autocorrelation. Two network autocorrelation statistics, Moran’s I and Getis–Ord’s G, are used to monitor the structural properties of networks in a process of “rewiring” networks from a regular to a random network. We discovered that Moran’s I and Getis–Ord’s G tend to converge and have relatively low values when properties of small-world networks emerge. Three transportation networks at the national, metropolitan, and intra-city levels are analyzed using this approach. It is found that spatial networks at these three scales possess small-world properties when the correlation lag distances reach certain thresholds, implying that the manifestation of small-world phenomena results from the interplay between the network structure and the dynamics taking place on the network.
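A small helper for the global Moran's I statistic on a network weight matrix, shown here on a toy ring lattice; the paper's rewiring experiment and its Getis–Ord G computation are not reproduced.

```python
# Assumed formulation of global Moran's I for a binary network weight matrix, shown on a
# toy ring lattice; the paper's rewiring experiment and Getis-Ord G are not reproduced.
import numpy as np

def morans_i(x, W):
    """Global Moran's I of attribute x given a symmetric network weight matrix W."""
    z = np.asarray(x, dtype=float) - np.mean(x)
    return (len(z) / W.sum()) * np.sum(W * np.outer(z, z)) / np.sum(z ** 2)

n = 20
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1          # ring: each node linked to two neighbours
x = np.sin(np.linspace(0, 2 * np.pi, n, endpoint=False))  # smoothly varying attribute
print(round(morans_i(x, W), 3))                        # strongly positive: neighbours are similar
```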

16.
When applying single-outlier detection techniques, such as the Tau (τ) test, to examine the residuals of observations for outliers, the number of detected observations in any iteration of the adjustment is most often larger than the actual number of true outliers. A new technique is proposed which estimates the number of outliers in a network by evaluating the redundancy contributions of the detected observations. In this way, a number of potential outliers can be identified and eliminated in each iteration of an adjustment. This leads to higher efficiency in data snooping of geodetic networks. The technique is illustrated with some numerical examples.
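The redundancy contributions that the proposed technique evaluates are the diagonal elements of R = I − A(AᵀPA)⁻¹AᵀP, which sum to the total redundancy n − u; the sketch below computes them for a made-up levelling-style network (the example network and any decision rule built on the numbers are assumptions, not the paper's).

```python
# Sketch of the redundancy-number computation (the example network and any decision rule
# built on top of the numbers are assumptions, not the paper's).
import numpy as np

def redundancy_numbers(A, P):
    """Redundancy contributions: diag(I - A (A'PA)^{-1} A'P) for the model v = A x - l."""
    N_inv = np.linalg.inv(A.T @ P @ A)
    return np.diag(np.eye(len(A)) - A @ N_inv @ A.T @ P)

# Toy levelling-style network: 6 observations, 3 unknown heights, equal weights.
A = np.array([[ 1,  0,  0],
              [ 0,  1,  0],
              [ 0,  0,  1],
              [-1,  1,  0],
              [ 0, -1,  1],
              [-1,  0,  1]], dtype=float)
P = np.eye(6)
r = redundancy_numbers(A, P)
print(np.round(r, 3), "sum =", round(r.sum(), 3))      # the sum equals n - u = 3
```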

17.
The proper identification and removal of outliers in the combination of rates of vertical displacements derived from GPS, tide gauges/satellite altimetry, and GRACE observations is presented. Outlier detection is a necessary pre-screening procedure in order to ensure reliable estimates of the stochastic properties of the observations in the combined least-squares adjustment (via rescaling of covariance matrices) and to ensure that the final vertical motion model is not corrupted and/or distorted by erroneous data. Results from this study indicate that typical data snooping methods are inadequate in dealing with these heterogeneous data sets and their stochastic properties. Using simulated vertical displacement rates, it is demonstrated that a large variety of outliers (randomly scattered, adjacent, as well as jointly influential) can be dealt with if an iterative re-weighting least-squares adjustment is combined with a robust median estimator. Moreover, robust estimators are efficient in areas weakly constrained by the data, where even high-quality observations may appear to be erroneous if their estimates are largely influenced by outliers. Four combined models for the vertical motion in the region of the Great Lakes are presented. The computed vertical displacements vary between −2 mm/year (subsidence) along the southern shores and 3 mm/year (uplift) along the northern shores. The derived models provide reliable empirical constraints and error bounds for postglacial rebound models in the region.

18.
A study of the mean sea surface and its variation in China's coastal seas
A dynamic robust model for computing the mean sea surface and its variation is established and compared, using measured data, with the averaging method, the robust method, and the dynamic-model method for computing the mean sea surface. The comparison shows that the dynamic robust model not only accounts for the dynamic variation of the sea surface but also weakens the influence of anomalous sea-surface changes, giving more stable and reliable results than the other methods. Finally, the dynamic robust model is applied to compute the mean sea surface and its variation at 42 tide-gauge stations in China; the results show that from the 1950s to the 1970s the sea surface along China's coast rose at an average rate of 0.621 mm/a.

19.
Robustness analysis of geodetic horizontal networks

20.
The earth’s phase of rotation, expressed as Universal Time UT1, is the most variable component of the earth’s rotation. Continuous monitoring of this quantity is realised through daily single-baseline VLBI observations which are interleaved with VLBI network observations. The accuracy of these single-baseline observations is established mainly through statistically determined standard deviations of the adjustment process, although the results of these measurements are prone to systematic errors. The two major effects are caused by inaccuracies in the polar motion and nutation angles introduced as a priori values, which propagate into the UT1 results. In this paper, we analyse the transfer of these components into UT1 depending on the two VLBI baselines being used for short-duration UT1 monitoring. We develop transfer functions of the errors in polar motion and nutation into the UT1 estimates. Maximum values reach 30 μs per milliarcsecond, which is quite large considering that observations of nutation offsets w.r.t. the state-of-the-art nutation model show deviations of as much as one milliarcsecond.
