Similar Documents
20 similar documents found (search time: 31 ms)
1.
The well-known statistical tool of variance component estimation (VCE) is implemented in the combined least-squares (LS) adjustment of heterogeneous height data (ellipsoidal, orthometric and geoid) for the purpose of calibrating geoid error models. This general treatment of the stochastic model offers the flexibility of estimating more than one variance and/or covariance component to improve the covariance information. Specifically, the iterative minimum norm quadratic unbiased estimation (I-MINQUE) and the iterative almost unbiased estimation (I-AUE) schemes are implemented in case studies with observed height data from Switzerland and parts of Canada. The effect of correlation among measurements of the same height type, and the role of systematic effects and datum inconsistencies in the combined adjustment of ellipsoidal, geoid and orthometric heights, on the estimated variance components are investigated in detail. The results give valuable insight into the usefulness of the VCE approach for calibrating geoid error models and the challenges encountered when implementing such a scheme in practice. In all cases, the estimated variance component corresponding to the geoid height data was less than or equal to 1, indicating that an overall downscaling of the initial covariance (CV) matrix was necessary. It was also shown that overly optimistic CV matrices are obtained when diagonal-only cofactor matrices are implemented in the stochastic model for the observations. Finally, the divergence of the VCE solution and/or the computation of negative variance components provide insight into the effectiveness of the selected parametric model.
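Iterative schemes of this kind update each observation group's variance factor from its weighted residual quadratic form and redundancy until the factors stabilize. The following is a minimal sketch of a simplified Förstner-style iteration for a partitioned LS adjustment, not the authors' implementation; all function and variable names are illustrative:

```python
import numpy as np

def iterative_vce(A, y, groups, Q, n_iter=20, tol=1e-8):
    """Simplified Foerstner-style iterative variance component estimation.
    A: design matrix, y: observations, groups: list of index arrays,
    Q: list of per-group cofactor matrices."""
    s2 = np.ones(len(groups))                 # variance components, start at 1
    for _ in range(n_iter):
        # assemble the block-diagonal weight matrix from current components
        P = np.zeros((len(y), len(y)))
        for k, idx in enumerate(groups):
            P[np.ix_(idx, idx)] = np.linalg.inv(s2[k] * Q[k])
        N = A.T @ P @ A                       # normal matrix
        x = np.linalg.solve(N, A.T @ P @ y)   # adjusted parameters
        v = y - A @ x                         # residuals
        s2_new = np.empty_like(s2)
        for k, idx in enumerate(groups):
            Pk = np.linalg.inv(s2[k] * Q[k])
            # group redundancy: n_k - trace of its share of the hat matrix
            rk = len(idx) - np.trace(np.linalg.solve(N, A[idx].T @ Pk @ A[idx]))
            s2_new[k] = s2[k] * (v[idx] @ Pk @ v[idx]) / rk
        if np.max(np.abs(s2_new - s2)) < tol:
            s2 = s2_new
            break
        s2 = s2_new
    return s2, x
```

An estimated factor converging below 1 scales the initial covariance matrix of that group down, as in the overall downscaling reported above for the geoid data.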

2.
In orbit determination for the Chang'E-1 satellite of China's lunar exploration program, ranging observations and VLBI delay observations must be processed jointly to determine the time series of the satellite's angular position, which raises the question of how to weight the different observation types. Through simulations, this paper compares the angular-position accuracy obtained with ordinary least-squares adjustment and with Helmert variance component estimation under various conditions. Although observation data usually come with error estimates, these estimates do not necessarily reflect the actual observation accuracy. The simulations show that, in this situation, applying the Helmert method significantly improves the accuracy of the solution. Compared with least-squares adjustment, the Helmert method slightly increases the computational load, but this is negligible on modern computing hardware.
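Helmert's rigorous estimator relates the groups' weighted residual quadratic forms to the unknown variance factors through a small linear system. A sketch of one Helmert step for independent observation groups, with illustrative names (this is not the Chang'E-1 processing software), might look like:

```python
import numpy as np

def helmert_vce_step(A, y, groups, P):
    """One rigorous Helmert VCE step for independent observation groups.
    P: list of a priori weight matrices, one per group."""
    Ns = [A[idx].T @ P[k] @ A[idx] for k, idx in enumerate(groups)]
    N = sum(Ns)                               # combined normal matrix
    Ninv = np.linalg.inv(N)
    x = Ninv @ sum(A[idx].T @ P[k] @ y[idx] for k, idx in enumerate(groups))
    g = len(groups)
    q = np.empty(g)                           # weighted residual quadratic forms
    S = np.empty((g, g))                      # Helmert coefficient matrix
    for i, idx in enumerate(groups):
        v = y[idx] - A[idx] @ x
        q[i] = v @ P[i] @ v
        for j in range(g):
            if i == j:
                S[i, i] = (len(idx) - 2 * np.trace(Ninv @ Ns[i])
                           + np.trace(Ninv @ Ns[i] @ Ninv @ Ns[i]))
            else:
                S[i, j] = np.trace(Ninv @ Ns[i] @ Ninv @ Ns[j])
    return np.linalg.solve(S, q), x           # variance factors, parameters
```

In practice the step is iterated: the estimated factors rescale the group weights, P_k <- P_k / sigma_k^2, and the step is repeated until all factors approach 1.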

3.
高为广  苗维凯  陈谷仓  加松 《测绘学报》1957,49(12):1511-1522
The importance of the stochastic model in precise positioning is first demonstrated from the perspectives of parameter estimation, precision assessment and quality control. Then, based on a short-baseline single-difference observation model, a rigorous variance component estimation method is used to compute the precision of phase and pseudorange observations for different frequencies and satellites, the cross-correlation between arbitrary frequencies, and the temporal correlation of phase and pseudorange observations over different time intervals. Finally, the influence of the stochastic model on baseline precision and on the overall model test statistic is analyzed. The results show that the data precision of BeiDou user receivers is elevation-dependent, so an elevation-based exponential weighting function is recommended; correlations of varying degrees exist among the phase observations on the three BDS-2 frequencies, while cross-correlation among other observation types is not significant; and the temporal correlation of phase and pseudorange observations on different frequencies is noticeable and should be considered in high-precision applications. In addition, baseline precision computed with a correct stochastic model is closer to the theoretical precision. This work helps users properly understand the observation information of the three types of BeiDou satellites and use the BeiDou system correctly.

4.
5.
徐志军  沈云中 《测绘工程》2012,21(4):9-12,16
This paper reviews the research and development of variance-covariance component estimation theory and discusses variance component estimation for the signal and the observation errors in the least-squares collocation model. In practice, geometric or physical constraints may exist among the unknown parameters; for variance component estimation in least-squares collocation with constraints, the corresponding formulas are derived based on the Helmert variance component estimation principle. Simulation results show that using the constraints improves the precision of the estimated variance components, verifying the effectiveness of the method.

6.
A general analytical VCE method based on equivalent condition closures
刘志平 《测绘学报》2013,42(5):648-653
This paper points out two inherent defects of existing variance-covariance component estimation (VCE) methods: computational efficiency and the statistical properties of the χ² statistic. Using a null-space operator to eliminate the parameter vector from the general adjustment model, an equivalent condition adjustment model is established. The equivalent condition closure (ECC) is then defined, and a formula for the χ² statistic expressed in terms of the ECC is derived. On this basis, a general analytical VCE method, termed the VCE-ECC method, is proposed using the equivalent condition closures and a newly constructed invertible variance component model. Simplified formulas of the VCE-ECC method are also given for the four basic adjustment models. Real-data and simulation results show that the variance-covariance component estimates of the VCE-ECC method and existing VCE methods exhibit no significant difference in the statistical sense, while the proposed method effectively overcomes the inherent defects of the existing methods.

7.
General rigorous and simplified formulae are reported for the best invariant quadratic unbiased estimates of the variance-covariance components, which can be applied to all least-squares adjustments with the general linear stochastic model. Simplified procedures are given for two cases frequently recurring in geodetic applications: uncorrelated groups of correlated or uncorrelated observations, with more than one variance component in each group. Received: 19 November 1998 / Accepted: 21 March 2000

8.
As the second national land survey draws to a close, cities and counties have completed the construction of urban cadastral databases, and the focus of future work is updating the features in these databases. Based on the update requirements for urban cadastral databases and the business processes of land administration departments, this paper proposes two data update methods, one based on routine surveying work and one based on field inspection, and designs an update workflow. Taking the urban cadastral database of Changsha as an example, timely updating of topographic features in the database is achieved.

9.
Application of an improved Helmert variance component estimation method to precise orbit determination
This paper analyzes why negative variances arise when Helmert variance component estimation is used in precise orbit determination of artificial satellites, and attributes them mainly to rank deficiency of the normal matrix caused by insufficient observation information. On this basis, an "improved Helmert variance component estimation method" is proposed that uses prior information on the parameters to be estimated to avoid negative variances. Trial computations show that the method effectively eliminates negative variances. For compatibility with existing precise orbit determination software and ease of implementation, the Givens-Gentleman orthogonal transformation is applied to the variance component estimation, and a detailed computational…

10.
We propose a methodology for the combination of a gravimetric (quasi-) geoid with GNSS-levelling data in the presence of noise with correlations and/or spatially varying noise variances. It comprises two steps: first, a gravimetric (quasi-) geoid is computed using the available gravity data, which, in a second step, is improved using ellipsoidal heights at benchmarks provided by GNSS once they have become available. The methodology is an alternative to the integrated processing of all available data using least-squares techniques or least-squares collocation. Unlike the corrector-surface approach, the pursued approach guarantees that the corrections applied to the gravimetric (quasi-) geoid are consistent with the gravity anomaly data set. The methodology is applied to a data set comprising 109 gravimetric quasi-geoid heights, ellipsoidal heights and normal heights at benchmarks in Switzerland. Each data set is complemented by a full noise covariance matrix. We show that when neglecting noise correlations and/or spatially varying noise variances, errors up to 10% of the differences between geometric and gravimetric quasi-geoid heights are introduced. This suggests that if high-quality ellipsoidal heights at benchmarks are available and are used to compute an improved (quasi-) geoid, noise covariance matrices referring to the same datum should be used in the data processing whenever they are available. We compare the methodology with the corrector-surface approach using various corrector surface models. We show that the commonly used corrector surfaces fail to model the more complicated spatial patterns of differences between geometric and gravimetric quasi-geoid heights present in the data set. More flexible parametric models such as radial basis function approximations or minimum-curvature harmonic splines perform better. 
We also compare the proposed method with generalized least-squares collocation, which comprises a deterministic trend model, a random signal component and a random correlated noise component. Trend model parameters and signal covariance function parameters are estimated iteratively from the data using non-linear least-squares techniques. We show that the performance of generalized least-squares collocation is better than the performance of corrector surfaces, but the differences with respect to the proposed method are still significant.
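The second step described above can be viewed as a best-linear-unbiased update of the gravimetric quasi-geoid by the geometric quasi-geoid heights from GNSS-levelling, in which the full noise covariance matrices of both data sets drive the weighting. A schematic sketch under that interpretation, with all names illustrative and not taken from the paper:

```python
import numpy as np

def update_quasi_geoid(N_grav, C_grav, h_ell, H_norm, C_geom):
    """BLUE-style update of gravimetric quasi-geoid heights at benchmarks.
    Full covariance matrices let correlated and spatially varying noise
    enter the weighting (illustrative sketch, not the paper's algorithm)."""
    N_geom = h_ell - H_norm                       # geometric quasi-geoid heights
    innovation = N_geom - N_grav                  # geometric minus gravimetric
    K = C_grav @ np.linalg.inv(C_grav + C_geom)   # gain from the two covariances
    N_upd = N_grav + K @ innovation               # updated quasi-geoid heights
    C_upd = (np.eye(len(N_grav)) - K) @ C_grav    # updated covariance
    return N_upd, C_upd
```

Replacing C_grav and C_geom by diagonal approximations changes the gain K, which is one way to see how neglected noise correlations propagate into errors of up to 10% of the geometric-minus-gravimetric differences.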

11.
Helmert variance component estimation is used to weight the observations of the different systems in combined GPS/BeiDou single point positioning, yielding reasonable weights. The accuracy of kinematic combined GPS/BeiDou single point positioning is analyzed in both open-sky and obstructed environments. The results show that, for both static and kinematic observations, combined single point positioning is significantly more accurate than GPS-only single point positioning.

12.
刘志平  朱丹彤  余航  张克非 《测绘学报》2019,48(9):1088-1095
A least-squares variance-covariance component estimation method based on equivalent condition closures, termed the LSV-ECM method, is proposed. First, using the equivalent condition adjustment model, variance-covariance component estimation equations are established from quadratic forms of the equivalent condition closures; a matrix half-vectorization operator transforms them into a linear Gauss-Markov form, and the least-squares criterion then yields variance-covariance component estimation formulas that are general across models, simple in form, unbiased and optimal. Second, the equivalence of the LSV-ECM method and residual-based VCE methods is proved, and on this basis the computational efficiency of the proposed method is analyzed quantitatively via computational complexity. Finally, the correctness and computational efficiency of the new method are verified through the adjustment of a triangulateration network and the modeling of GNSS station coordinate time series in the China region.

13.
In the context of the data update project of the Shenyang geographic information public service platform, key technologies are studied, including update modes and techniques for shared platform data and an integrated update-and-publish workflow. Incremental update technology is applied to vector data: based on the positional relations between nodes and sub-segments of historical and current features, updated features in the database are detected and extracted. A word-segmentation matching method is used to measure the match of place-name information and thereby update place-name data. Based on the place names and the road network, bus-route data are detected, collected and optimized to complete the bus data update, and an integrated workflow for platform data updating and publishing is established. Through workflow control and auxiliary programs, the speed of data updating and the level of quality control are greatly improved.

14.
Variance component estimation for the seamless affine datum transformation model
李博峰 《测绘学报》2016,45(1):30-35
Taking the seamless affine datum transformation as an example, this paper studies variance component estimation theory for seamless datum transformation models and derives variance component estimation formulas for the two coordinate sets involved in the transformation. Simulation experiments show that the variance component estimation method correctly recovers the variances that objectively reflect the errors of the two coordinate sets, thereby effectively improving the accuracy of the seamless datum transformation.

15.
Research and implementation of an object-feature-based dynamic update mechanism for urban spatial databases
王磊 《测绘工程》2009,18(3):65-68
The "update conflict" and "update lag" problems that have long plagued urban spatial database construction are analyzed in depth. Drawing on previous spatio-temporal data update methods and the actual production characteristics of urban planning spatial data, the time dimension is introduced into the urban spatial database. Following an event-driven approach, an object-feature-based dynamic update model and update mechanism are proposed, which satisfy the need for concurrent updates between data management and data production and enable the output of database versions for different historical periods. Experiments show that the system meets the needs of dynamic updating of urban spatial databases.

16.
The variance component estimation (VCE) method as developed by Helmert has been applied to the global SLR data set for the year 1987. In the first part of this study the observations were divided into two groups, those from ruby and YAG laser systems, and their weights estimated over several months. It was found that the weights of both sets of stations altered slightly from month to month but that, not surprisingly, the YAG systems consistently outperformed those based on ruby lasers. The major part of this paper then considers the estimation of the variance components (i.e. weights) at each SLR station from month to month. These were tested using the F-statistic and, although it indicated that most stations had significant temporal variations, these were generally small compared with the differences between the stations themselves, i.e. the method has been shown to be capable of discriminating between the precision with which the various laser stations are operating. The station coordinates and baseline lengths computed using both a priori and estimated weights were also compared, and this showed that changes in the weights can have significant effects on the estimation of the station positions, particularly in the z component, and on the baseline lengths, so proving the importance of proper stochastic modelling when processing SLR data.

17.
The ionosphere is a dispersive medium for microwaves, and most space-geodetic techniques using two or more signal frequencies can be applied to extract information on ionospheric parameters, including terrestrial as well as satellite-based GNSS, DORIS, altimetry, and VLBI. Because of their different sensitivity regarding ionization, their different spatial and temporal data distribution, and their different signal paths, a joint analysis of all observation types seems reasonable and promises the best results for ionosphere modeling. However, it has turned out that there exist offsets between ionospheric observations of the diverse techniques mainly caused by calibration uncertainties or model errors. Direct comparisons of the information from different data types are difficult because of the inhomogeneous measurement epochs and locations. In the approach presented here, all measurements are combined into one ionosphere model of vertical total electron content (VTEC). A variance component estimation is applied to take into account the different accuracy levels of the observations. In order to consider systematic offsets, a constant bias term is allowed for each observation group. The investigations have been performed for the time interval of the CONT08 campaign (2 weeks in August 2008) in a region around the Hawaiian Islands. Almost all analyzed observation techniques show good data sensitivity and are suitable for VTEC modeling in case the systematic offsets which can reach up to 5 TECU are taken into account. Only the Envisat DORIS data cannot provide reliable results.

18.
The deformation of rock masses and structures is typically complex and nonlinear, and ordinary regression models struggle to predict it accurately. Gaussian process regression is therefore applied to time-series analysis of the nonlinear behavior exhibited by deformation monitoring data. Considering the continual updating and accumulation of monitoring data and the need to keep the hyperparameters adapted to the sample set, a "progressive-truncation" automatic hyperparameter update scheme and a training sample selection method are first studied. On this basis, a Gaussian process regression intelligent prediction model for deformation with time as the input (GPR-TIPM) is constructed. The model is applied to nonlinear time-series analysis of monitoring points on a mine slope; after analyzing the deformation trend, a composite kernel is formed by adding the Matérn 3/2 and squared exponential covariance functions. Experimental results show that the composite kernel predicts better than a single kernel, that the method improves the generalization ability of the model, and that the short-term prediction performance of the GPR-TIPM model is satisfactory.
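The composite kernel above adds a Matérn 3/2 term and a squared exponential term. The following is a minimal numpy sketch of GP prediction with such an additive kernel, using fixed (untrained) hyperparameters and illustrative names; it is not the GPR-TIPM implementation:

```python
import numpy as np

def matern32(t1, t2, s2=1.0, ell=1.0):
    """Matern 3/2 covariance between two 1-D arrays of times."""
    r = np.abs(t1[:, None] - t2[None, :]) / ell
    return s2 * (1.0 + np.sqrt(3.0) * r) * np.exp(-np.sqrt(3.0) * r)

def sqexp(t1, t2, s2=1.0, ell=1.0):
    """Squared-exponential (RBF) covariance."""
    r2 = (t1[:, None] - t2[None, :]) ** 2 / ell ** 2
    return s2 * np.exp(-0.5 * r2)

def gpr_predict(t, y, t_new, noise=1e-2):
    """GP regression with the additive Matern 3/2 + squared-exponential kernel.
    Returns predictive mean and standard deviation at t_new."""
    K = matern32(t, t) + sqexp(t, t) + noise * np.eye(len(t))
    Ks = matern32(t_new, t) + sqexp(t_new, t)
    Kss = matern32(t_new, t_new) + sqexp(t_new, t_new)
    alpha = np.linalg.solve(K, y)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))
```

In a monitoring pipeline the hyperparameters (variances, length scales, noise level) would be re-optimized as new epochs arrive, which is the role of the progressive-truncation update scheme described above.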

19.
Least-squares variance component estimation
Least-squares variance component estimation (LS-VCE) is a simple, flexible and attractive method for the estimation of unknown variance and covariance components. LS-VCE is simple because it is based on the well-known principle of LS; it is flexible because it works with a user-defined weight matrix; and it is attractive because it allows one to directly apply the existing body of knowledge of LS theory. In this contribution, we present the LS-VCE method for different scenarios and explore its various properties. The method is described for three classes of weight matrices: a general weight matrix; a weight matrix from the unit weight matrix class; and a weight matrix derived from the class of elliptically contoured distributions. We also compare the LS-VCE method with some of the existing VCE methods, some of which are shown to be special cases of LS-VCE. We also show how the existing body of knowledge of LS theory can be used to one's advantage for studying various aspects of VCE, such as the precision and estimability of VCE, the use of a priori variance component information, and the problem of nonlinear VCE. Finally, we show how the mean and the variance of the fixed effect estimator of the linear model are affected by the results of LS-VCE. Various examples are given to illustrate the theory.
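In one common misclosure formulation, LS-VCE solves normal equations built from quadratic forms of t = Bᵀy, where B spans the null space of Aᵀ, with the weight matrix taken as the inverse of the current misclosure covariance. A compact sketch along those lines (illustrative names; a simplification of the method, not the authors' code):

```python
import numpy as np

def ls_vce(A, y, Q, n_iter=10):
    """Iterative LS-VCE via the misclosure vector t = B^T y, where the
    columns of B span the null space of A^T (full-column-rank A assumed)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=True)
    B = U[:, A.shape[1]:]                      # basis of null space of A^T
    t = B.T @ y                                # misclosures
    Qt_k = [B.T @ Qk @ B for Qk in Q]          # propagated cofactor matrices
    sig = np.ones(len(Q))                      # initial variance components
    for _ in range(n_iter):
        Qt = sum(sk * Qk for sk, Qk in zip(sig, Qt_k))
        Wi = np.linalg.inv(Qt)                 # user-defined weight: Qt^{-1}
        g = len(Q)
        N = np.empty((g, g))
        rhs = np.empty(g)
        for k in range(g):
            Mk = Wi @ Qt_k[k] @ Wi
            rhs[k] = 0.5 * t @ Mk @ t
            for j in range(g):
                N[k, j] = 0.5 * np.trace(Mk @ Qt_k[j])
        sig = np.linalg.solve(N, rhs)          # LS estimate of the components
    return sig
```

The inverse of N here also provides the precision description of the estimated components that the LS framework makes directly available.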

20.
This paper introduces the theory and algorithm of the FASF (Fast Ambiguity Search Filter) method and its validation results on land, at sea and in the air, and compares it with the commonly used least-squares search method. An important feature of FASF is how the search range of each ambiguity is determined: whereas previously the search range of each ambiguity was determined separately under assumed integer values of the other ambiguities, FASF determines the search ranges recursively, taking into account the influence of the assumed ambiguities on the other ambiguities. Another feature is that a Kalman filter is used to take into account all observations from the initial epoch up to the current epoch. It is therefore unnecessary to search all possible ambiguities, which drastically reduces the computational load.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号