Similar Literature
18 similar documents found.
1.
王乐洋  陈汉清 《测绘学报》2017,46(5):658-665
Two problems arise when multibeam bathymetric data are processed with least-squares collocation: the quadratic-surface model usually cannot accurately represent the overall trend of the seafloor topography, and when the observations contain gross errors or outliers the covariance function given by the conventional approach does not accurately describe their statistical properties. To address these problems, an iterative robust least-squares collocation solution is proposed. The method first initializes the covariance function and the variance matrix of the observations and fits the trend term with a multiquadric function; it then applies equivalence-weight robust estimation and iterates, finally yielding robust estimates of the covariance-function parameters and the least-squares collocation solution. Both the proposed and the conventional methods were applied to measured multibeam bathymetric data. The experiments show that, compared with the conventional methods, the proposed method represents the overall trend of the seafloor topography better and, to some extent, overcomes the influence of gross errors or outliers in the multibeam data. Compared with the conventional robust method, it identifies outliers in the bathymetric data more effectively, gives better predictions, and is robust.
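The iterative scheme summarized above (initialize the covariance function and weights, fit the trend with a multiquadric function, then re-weight robustly) can be sketched as follows. The sketch assumes an exponential covariance model and an IGG-style equivalence-weight function; the function names, the threshold k, and the default parameters are illustrative choices, not the authors' implementation.

```python
import numpy as np

def exp_cov(d, var, L):
    """Exponential covariance model C(d) = var * exp(-d / L) (an assumed form)."""
    return var * np.exp(-d / L)

def robust_lsc(coords, obs, trend_basis, var=1.0, L=500.0, n_iter=5, k=1.5):
    """Sketch of robust least-squares collocation with equivalence weights.

    coords      : (n, 2) array of horizontal positions
    obs         : (n,)  array of observed depths
    trend_basis : callable returning the (n, m) trend design matrix (e.g. multiquadric)
    """
    A = trend_basis(coords)                                   # trend design matrix
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    C = exp_cov(d, var, L)                                    # signal covariance
    w = np.ones(len(obs))                                     # observation weights

    for _ in range(n_iter):
        Cbar = C + np.diag(1.0 / w)                           # signal + down-weighted noise
        Ci = np.linalg.inv(Cbar)
        x = np.linalg.solve(A.T @ Ci @ A, A.T @ Ci @ obs)     # trend parameters (GLS)
        v = obs - A @ x                                       # residuals w.r.t. the trend
        s = C @ Ci @ v                                        # collocation signal estimate
        r = v - s                                             # remaining residuals
        sigma = 1.4826 * np.median(np.abs(r - np.median(r)))  # robust scale (MAD)
        t = np.abs(r) / max(sigma, 1e-12)
        w = np.where(t <= k, 1.0, k / t)                      # IGG-style equivalence weights
    return x, s, w
```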

2.
Both least-squares collocation and kriging account for the systematic and the random parts of the height anomaly when used for height transformation. To compare the strengths and weaknesses of the two methods for GPS height conversion, five schemes differing in the covariance function of least-squares collocation (or the variogram of kriging) were applied to measured data. The analysis shows that least-squares collocation with a distance-based covariance function is superior to the other schemes in terms of both the algorithm and the reliability of the predicted results.

3.
Application of least-squares collocation to the prediction of GPS height anomalies   Cited 2 times (0 self-citations, 2 by others)
The basic principle of least-squares collocation is outlined. In a situation where only a few points are connected to the levelling network, a square-root function is adopted directly as the covariance function between the random parameters to predict GPS height anomalies. Because the functional model of least-squares collocation accounts for both non-random and random variables, the predicted height anomalies are more accurate. A case study also shows that this model is more accurate than the conventional plane-fitting model and the covariance prediction model. Residual analysis indicates that the method is better suited to cases where height anomalies must be both interpolated and extrapolated.
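For reference, the collocation model referred to above combines a deterministic (non-random) part and a random signal. In generic notation (not necessarily that of the paper), the model and its standard solution read

\[
\boldsymbol{\ell} = A\mathbf{x} + \mathbf{s} + \mathbf{n}, \qquad
\hat{\mathbf{x}} = \left(A^{\mathsf T}\bar{C}^{-1}A\right)^{-1}A^{\mathsf T}\bar{C}^{-1}\boldsymbol{\ell}, \qquad
\hat{\mathbf{s}}_P = C_{Ps}\,\bar{C}^{-1}\left(\boldsymbol{\ell}-A\hat{\mathbf{x}}\right),
\]

where \(\bar{C} = C_{ss} + C_{nn}\) is the covariance of signal plus noise at the observation points and \(C_{Ps}\) is the cross-covariance between the prediction points and the observation points; the square-root covariance function mentioned in the abstract would enter through \(C_{ss}\) and \(C_{Ps}\).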

4.
Parameter estimation based on a collocation-type model with multiquadric kernel functions   Cited 2 times (1 self-citation, 2 by others)
For a collocation-type model in which non-random parameters, signals at observed points, and signals at unobserved points appear simultaneously, it is difficult to establish an accurate covariance function. Building on least-squares collocation and multiquadric function fitting, the two methods are combined into a multiquadric-kernel collocation method, and the solution formulas of this adjustment method are derived. The method is then compared with least-squares collocation and with multiquadric function fitting.
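A multiquadric surface, as used for the kernel part above, represents a field as a sum of kernels of the form sqrt(||x - x_j||^2 + delta^2). The sketch below shows plain multiquadric fitting only and does not reproduce the combined collocation-type solution derived in the paper; the smoothing parameter delta and the choice of nodes are illustrative assumptions.

```python
import numpy as np

def mq_kernel(pts, nodes, delta=100.0):
    """Multiquadric kernel matrix: K[i, j] = sqrt(||pts_i - nodes_j||^2 + delta^2)."""
    d2 = np.sum((pts[:, None, :] - nodes[None, :, :]) ** 2, axis=-1)
    return np.sqrt(d2 + delta ** 2)

def mq_fit(coords, values, nodes, delta=100.0):
    """Fit multiquadric coefficients by least squares (nodes may be a subset of coords)."""
    K = mq_kernel(coords, nodes, delta)
    coef, *_ = np.linalg.lstsq(K, values, rcond=None)
    return coef

def mq_predict(new_coords, nodes, coef, delta=100.0):
    """Evaluate the fitted multiquadric surface at new points."""
    return mq_kernel(new_coords, nodes, delta) @ coef
```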

5.
A two-step minimization solution for fitting and prediction (collocation)   Cited 11 times (1 self-citation, 11 by others)
杨元喜  刘念 《测绘学报》2002,31(3):192-195
After reviewing the "combined minimization" solution of least-squares fitting and prediction (conventional collocation), the problems of the conventional approach are analysed. Considering that the signal of a random field is not necessarily purely random and may contain a trend component, a "two-step minimization" solution is proposed: the random field is separated into a trend part and a random part, the trend part is fitted with a functional model, and the random part is fitted with a covariance function. Two solutions of the two-step minimization approach are given. Computations show that the two-step minimization solution can partly improve the accuracy of fitting and prediction.
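A minimal sketch of the two-step idea (fit a deterministic trend first, then treat the residuals as the random part and predict them by collocation) is given below; the planar trend and the exponential covariance model are illustrative assumptions, not the solutions derived in the paper.

```python
import numpy as np

def two_step_collocation(coords, obs, new_coords, var=1.0, L=300.0, noise=0.01):
    """Step 1: fit a planar trend by ordinary least squares.
       Step 2: predict the residual signal at new points by simple collocation."""
    A = np.column_stack([np.ones(len(obs)), coords])          # trend design matrix
    x = np.linalg.lstsq(A, obs, rcond=None)[0]
    resid = obs - A @ x                                        # random part

    def cov(p, q):
        d = np.linalg.norm(p[:, None] - q[None, :], axis=-1)
        return var * np.exp(-d / L)                            # assumed covariance model

    C = cov(coords, coords) + noise * np.eye(len(obs))
    Cp = cov(new_coords, coords)
    signal = Cp @ np.linalg.solve(C, resid)

    A_new = np.column_stack([np.ones(len(new_coords)), new_coords])
    return A_new @ x + signal                                  # trend + predicted signal
```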

6.
Starting from the basic principle of least-squares collocation, and using a 1 km resolution geoid height model over a region of China as an example, covariance values of the geoid height in the test area were computed with the practical formula. A polynomial model and a Gaussian model were then fitted to the local covariance function of the geoid height, and predictions were computed at 18 check points in the test area. Comparison of the predicted values (Nfit) with the observed values (NGPSL) shows that, although the polynomial covariance model is slightly better than the Gaussian one, both fit the local geoid at the centimetre level, which demonstrates the effectiveness of collocation for refining a centimetre-level geoid.
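The workflow described above (compute empirical covariance values as a function of distance, then fit analytic covariance models) can be sketched as follows; the distance binning, the Gaussian model C(d) = C0 * exp(-(d/L)^2), and the initial parameter guesses are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def empirical_covariance(coords, values, bin_width=5000.0, n_bins=20):
    """Empirical covariance of a de-meaned field as a function of point separation."""
    v = values - values.mean()
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    prod = np.outer(v, v)
    centers, covs = [], []
    for k in range(n_bins):
        mask = (d >= k * bin_width) & (d < (k + 1) * bin_width)
        if mask.any():
            centers.append((k + 0.5) * bin_width)
            covs.append(prod[mask].mean())
    return np.array(centers), np.array(covs)

def gauss_cov(d, c0, L):
    """Gaussian covariance model to be fitted to the empirical values."""
    return c0 * np.exp(-(d / L) ** 2)

# Usage (coords/values assumed given):
# dist, cov = empirical_covariance(coords, values)
# (c0, L), _ = curve_fit(gauss_cov, dist, cov, p0=[cov[0], 2.0e4])
```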

7.
A GPS height-fitting model built from polynomials cannot fit the trend surface of height-anomaly variation well. To address this, a non-parametric compensation term is introduced into the conventional least-squares polynomial model and, with reference to the least-squares collocation model used in gravimetry, a semi-parametric fitting model of the height-anomaly trend surface is established. Using GPS data collected in a small area, both models were applied for fitting and prediction; the results show that height-anomaly fitting and prediction based on the semi-parametric model perform better.

8.
Four problems are studied. 1. A local covariance function of the disturbing potential is given, in which the constants K_0, a and b can be determined from the gravity-anomaly and height-anomaly data of the study area. 2. Point height anomalies are computed by least-squares prediction. 3. Point height anomalies are computed by least-squares collocation with parameters; test computations show that the results depend only weakly on the signal covariance function. 4. The height-anomaly difference between two points is determined from gravity anomalies and vertical-deflection data.

9.
A robust least-squares collocation algorithm for clock-offset prediction accounting for the stochastic characteristics of satellite clocks   Cited 2 times (2 self-citations, 0 by others)
To better reflect the characteristics of satellite clock offsets and improve their prediction accuracy, robust least-squares collocation is used to establish a clock-offset prediction model that simultaneously accounts for the physical characteristics of the onboard atomic clock and for the periodic and stochastic variations of the clock offset. First, a quadratic polynomial model with periodic terms is fitted to extract the trend and periodic components of the satellite clock offset. The remaining stochastic component, which may contain gross errors, is then modelled with robust least-squares collocation, whose covariance function is determined by comparing covariance-fitting methods and through experiments. Prediction experiments with IGS precise clock products show that, compared with the quadratic polynomial model and the grey model, the prediction accuracy improves by 0.457 ns and 0.948 ns and the prediction stability by 0.445 ns and 1.233 ns, respectively, demonstrating that the method predicts satellite clock offsets better and that the proposed way of determining the covariance function is effective.
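The trend-and-periodic extraction step described above amounts to a linear least-squares fit of a quadratic polynomial plus harmonic terms. The sketch below assumes two periods of roughly 12 h and 6 h, which are common choices in GNSS clock analysis but are not necessarily the values adopted in the paper.

```python
import numpy as np

def fit_clock_trend(t, clock, periods=(43200.0, 21600.0)):
    """Fit a quadratic polynomial plus sine/cosine terms to clock offsets.

    t       : epochs in seconds
    clock   : clock offsets (e.g., in nanoseconds)
    periods : periods of the harmonic terms in seconds (assumed values)
    """
    cols = [np.ones_like(t), t, t ** 2]
    for P in periods:
        w = 2.0 * np.pi / P
        cols += [np.sin(w * t), np.cos(w * t)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, clock, rcond=None)
    trend = A @ coef
    return coef, clock - trend        # coefficients and the residual stochastic part
```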

10.
Least-squares collocation for solving potential coefficients from satellite gravity gradient data   Cited 1 time (0 self-citations, 1 by others)
Satellite gravity gradiometry has been widely used in recovering the Earth's gravity field. Using the properties of the covariance function of the disturbing potential in space, the cross-covariance functions between satellite gravity gradient data and the gravitational potential coefficients are derived. With least-squares collocation, the expression for solving the potential coefficients directly from gravity gradient data is then derived, and its practicality is briefly analysed.

11.
Application of least-squares collocation to settlement monitoring of steel-structure buildings   Cited 1 time (0 self-citations, 1 by others)
Based on the property that least-squares collocation can predict parameters at unobserved points that are not directly among the observations, a new method for settlement monitoring and prediction of steel-structure buildings is presented and gives good results. The study shows that least-squares collocation is an effective analysis method for processing building deformation-monitoring data.

12.
Least-squares collocation with covariance-matching constraints   Cited 1 time (0 self-citations, 1 by others)
Most geostatistical methods for spatial random field (SRF) prediction using discrete data, including least-squares collocation (LSC) and the various forms of kriging, rely on the use of prior models describing the spatial correlation of the unknown field at hand over its domain. Based upon an optimal criterion of maximum local accuracy, LSC provides an unbiased field estimate that has the smallest mean squared prediction error, at every computation point, among all other linear prediction methods that use the same data. However, LSC field estimates do not reproduce the spatial variability which is implied by the adopted covariance (CV) functions of the corresponding unknown signals. This smoothing effect can be considered a critical drawback in the sense that the spatio-statistical structure of the unknown SRF (e.g., the disturbing potential in the case of gravity field modeling) is not preserved during its optimal estimation process. If the objective of estimating an SRF from its observed functionals requires spatial variability to be represented in a pragmatic way, then the results obtained through LSC may pose limitations for further inference and modeling in Earth-related physical processes, despite their local optimality in terms of minimum mean squared prediction error. The aim of this paper is to present an approach that enhances LSC-based field estimates by eliminating their inherent smoothing effect, while preserving most of their local prediction accuracy. Our methodology consists of correcting a posteriori the optimal result obtained from LSC in such a way that the new field estimate matches the spatial correlation structure implied by the signal CV function. Furthermore, an optimal criterion is imposed on the CV-matching field estimator that minimizes the loss in local prediction accuracy (in the mean squared sense) which occurs when we transform the LSC solution to fit the spatial correlation of the underlying SRF.

13.
Standard least-squares collocation (LSC) assumes 2D stationarity and 3D isotropy, and relies on a covariance function to account for spatial dependence in the observed data. However, the assumption that the spatial dependence is constant throughout the region of interest may sometimes be violated. Assuming a stationary covariance structure can result in over-smoothing of, e.g., the gravity field in mountains and under-smoothing in great plains. We introduce the kernel convolution method from spatial statistics for non-stationary covariance structures, and demonstrate its advantage for dealing with non-stationarity in geodetic data. We then compared stationary and non-stationary covariance functions in 2D LSC using the empirical example of gravity anomaly interpolation near the Darling Fault, Western Australia, where the field is anisotropic and non-stationary. The results with non-stationary covariance functions are better than standard LSC in terms of formal errors and cross-validation against data not used in the interpolation, demonstrating that the use of non-stationary covariance functions can improve upon standard (stationary) LSC.
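One common way to build a non-stationary covariance by kernel convolution uses Gaussian smoothing kernels whose shape varies over the region; for that choice a closed form exists (the Paciorek and Schervish form, shown below with a squared-exponential correlation). This is only an illustration of the general idea, not necessarily the kernel or parameterization used in the paper.

```python
import numpy as np

def nonstat_gauss_cov(x1, x2, S1, S2, sigma2=1.0):
    """Non-stationary covariance between two points with local kernel matrices S1, S2.

    Each point carries its own (possibly anisotropic) 2x2 kernel matrix, so the
    correlation length and orientation can vary over the region.
    """
    Sm = 0.5 * (S1 + S2)                                   # averaged kernel matrix
    dx = np.asarray(x1, float) - np.asarray(x2, float)
    q = dx @ np.linalg.solve(Sm, dx)                       # Mahalanobis-type distance
    norm = (np.linalg.det(S1) ** 0.25 * np.linalg.det(S2) ** 0.25
            / np.sqrt(np.linalg.det(Sm)))
    return sigma2 * norm * np.exp(-q)                      # squared-exponential correlation
```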

14.
The use of GPS for establishing height control in an area where levelling data are available can involve the so-called GPS/levelling technique. Modelling of the GPS/levelling geoid undulations has usually been carried out using polynomial surface fitting, least-squares collocation (LSC) and finite-element methods. Artificial neural networks (ANNs) have recently been used for many investigations, and have proven to be effective in solving complex problems represented by noisy and missing data. In this study, a feed-forward ANN structure, learning the characteristics of the training data through the back-propagation algorithm, is employed to model the local GPS/levelling geoid surface. The GPS/levelling geoid undulations for Istanbul, Turkey, were estimated from GPS and precise levelling measurements obtained during a field study in the period 1998–99. The results are compared to those produced by two well-known conventional methods, namely polynomial fitting and LSC, in terms of root mean square error (RMSE), which ranged from 3.97 to 5.73 cm. The results show that ANNs can produce results that are comparable to polynomial fitting and LSC. The main advantage of the ANN-based surfaces seems to be the low deviations from the GPS/levelling data surface, which is particularly important for distorted levelling networks.
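A minimal sketch of a feed-forward network trained by back-propagation for geoid-undulation surface fitting might look as follows; scikit-learn's MLPRegressor is used purely for illustration, and the network size, scaling, and training settings are assumptions rather than the configuration used in the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

def fit_geoid_ann(coords, undulations, hidden=(20, 20)):
    """Train a small feed-forward ANN mapping (x, y) -> GPS/levelling geoid undulation."""
    xs, ys = StandardScaler(), StandardScaler()
    X = xs.fit_transform(coords)
    y = ys.fit_transform(undulations.reshape(-1, 1)).ravel()
    net = MLPRegressor(hidden_layer_sizes=hidden, activation="tanh",
                       solver="lbfgs", max_iter=5000, random_state=0)
    net.fit(X, y)

    def predict(new_coords):
        z = net.predict(xs.transform(new_coords))
        return ys.inverse_transform(z.reshape(-1, 1)).ravel()

    return predict
```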

15.
The spectral method based on input-output system theory adopted in this paper is comparable in accuracy to least-squares collocation, yet is easily applied to anisotropic fields. Using this spectral method, satellite altimetry and marine gravity data are combined to solve for gravity-field quantities (geoid undulations and gravity anomalies); the error estimates show that the computation accuracy for the anisotropic field is better than that for the isotropic field.

16.
The merging of a gravimetric quasigeoid model with GPS-levelling data using second-generation wavelets is considered so as to provide better transformation of GPS ellipsoidal heights to normal heights. Since GPS-levelling data are irregular in the space domain and the classical wavelet transform relies on Fourier theory, which is unable to deal with irregular data sets without prior gridding, the classical wavelet transform is not directly applicable to this problem. Instead, second-generation wavelets and their associated lifting scheme, which do not require regularly spaced data, are used to combine gravimetric quasigeoid models and GPS-levelling data over Norway and Australia, and the results are cross-validated. Cross-validation means that GPS-levelling points not used in the merging are used to assess the results: one point is omitted from the merging and used to test the merged surface, and this is repeated for all points in the dataset. The wavelet-based results are also compared to those from least squares collocation (LSC) merging. This comparison shows that the second-generation wavelet method can be used instead of LSC with similar results, but the assumption of stationarity for LSC is not required in the wavelet method. Specifically, it is not necessary to (somewhat arbitrarily) remove trends from the data before applying the wavelet method, as is the case for LSC. It is also shown that the wavelet method is better at decreasing the maximum and minimum differences between the merged geoid and the cross-validating GPS-levelling data.

17.
The cross-validation technique is a popular method to assess and improve the quality of prediction by least squares collocation (LSC). We present a formula for direct estimation of the vector of cross-validation errors (CVEs) in LSC which is much faster than element-wise CVE computation. We show that a quadratic form of CVEs follows a Chi-squared distribution. Furthermore, an a posteriori noise variance factor is derived from the quadratic form of CVEs. In order to detect blunders in the observations, the estimated standardized CVE is proposed as the test statistic, which can be applied whether the noise variances are known or unknown. We use LSC together with the methods proposed in this research for interpolation of crustal subsidence on the northern coast of the Gulf of Mexico. The results show that after detection and removal of outliers, the root mean square (RMS) of the CVEs and the estimated noise standard deviation are reduced by about 51% and 59%, respectively. In addition, the RMS of the LSC prediction error at the data points and the RMS of the estimated observation noise are decreased by 39% and 67%, respectively. However, the RMS of the LSC prediction error on a regular grid of interpolation points covering the area is reduced by only about 4%, which is a consequence of the sparse distribution of data points in this case study. The influence of gross errors on the LSC prediction results is also investigated using lower cutoff CVEs. It is indicated that after elimination of outliers, the RMS of this type of error is also reduced by 19.5% for a 5 km radius of vicinity. We propose a method using standardized CVEs for classification of the dataset into three groups with presumably different noise variances. The noise variance components for each of the groups are estimated using the restricted maximum-likelihood method via the Fisher scoring technique. Finally, LSC assessment measures were computed for the estimated heterogeneous noise variance model and compared with those of the homogeneous model. The advantage of the proposed method is the reduction in estimated noise levels for the groups with fewer noisy data points.
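One widely used closed form expresses all leave-one-out errors through the inverse covariance matrix in a single computation, avoiding n separate solutions. The sketch below uses that form for simple (trend-free) collocation as an illustration of the idea; it is not necessarily the exact formula derived in the paper.

```python
import numpy as np

def loo_cv_errors(C, obs):
    """Leave-one-out cross-validation errors for simple collocation in one shot.

    C   : full (signal + noise) covariance matrix of the observations
    obs : de-trended observation vector
    The i-th CVE is obs_i minus the prediction of obs_i from all other points.
    """
    Ci = np.linalg.inv(C)
    return (Ci @ obs) / np.diag(Ci)
```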

18.
This work is an investigation of three methods for regional geoid computation: Stokes's formula, least-squares collocation (LSC), and spherical radial base functions (RBFs) using the spline kernel (SK). It is a first attempt to compare the three methods theoretically and numerically in a unified framework. While Stokes integration and LSC may be regarded as classic methods for regional geoid computation, RBFs may still be regarded as a modern approach. All methods are theoretically equal when applied globally, and we therefore expect them to give comparable results in regional applications. However, it has been shown by de Min (Bull Géod 69:223–232, 1995. doi: 10.1007/BF00806734) that the equivalence of Stokes's formula and LSC does not hold in regional applications without modifying the cross-covariance function. In order to make all methods comparable in regional applications, the corresponding modification has also been introduced into the SK. Ultimately, we present numerical examples comparing Stokes's formula, LSC, and SKs in a closed-loop environment using synthetic noise-free data, to verify their equivalence. All agree at the millimeter level.
