Similar Documents
20 similar documents found
1.
The Uniform Expression of Solutions of Ill-posed Problems in Surveying Adjustment and the Fitting Method by Selection of the Parameter Weights　　Total citations: 38 (self: 6, others: 32)
欧吉坤 《测绘学报》2004,33(4):283-288
Comparing several mathematical models commonly used in surveying adjustment, it is found that their solutions can be expressed in a unified form, all derivable from the Tikhonov regularization principle. Inspired by the idea of quasi-stable adjustment, the author proposes the fitting method by selection of parameter weights for solving ill-posed problems. The author emphasizes that solving an ill-posed problem requires a concrete analysis of the parameters of the specific problem in order to find a reasonable weight matrix or parameter constraint matrix; with the unified solution formula, results consistent with objective reality can then be obtained. Two examples of the new solution methods are presented at the end.
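As a concrete illustration of the unified solution form described above, the following sketch implements a Tikhonov-type regularized weighted least-squares estimator, x̂ = (AᵀPA + αR)⁻¹AᵀPl; the design matrix, weights, regularization matrix R, and α below are illustrative values, not taken from the paper.

```python
import numpy as np

def regularized_ls(A, P, l, R, alpha):
    """Tikhonov-type unified solution: x = (A'PA + alpha*R)^-1 A'P l.

    With alpha = 0 this reduces to ordinary weighted least squares;
    different choices of R (or of the weights in P) reproduce the
    ridge-type and constrained solutions that the unified form covers.
    """
    N = A.T @ P @ A + alpha * R
    return np.linalg.solve(N, A.T @ P @ l)

# Illustrative ill-conditioned example (all values are made up).
A = np.array([[1.0, 1.0], [1.0, 1.0001], [1.0, 0.9999]])
P = np.eye(3)                         # observation weight matrix
l = np.array([2.0, 2.0002, 1.9998])   # observations
R = np.eye(2)                         # parameter constraint/weight matrix
print(regularized_ls(A, P, l, R, alpha=1e-3))
```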

2.
In orbit determination for the Chang'E-1 satellite of China's lunar exploration program, ranging observations and VLBI delay observations must be processed jointly to determine the time series of the satellite's angular position, which raises the problem of weighting the different types of observations. Through simulations, this paper compares the accuracy of the angular position computed with ordinary least-squares adjustment and with Helmert variance component estimation under various conditions. Although observation data usually come with error estimates, these estimates do not necessarily reflect the actual observation accuracy. The simulations show that in such cases the Helmert method significantly improves the solution accuracy. Compared with least-squares adjustment, the Helmert method requires slightly more computation, but this is negligible on modern computing hardware.
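A minimal sketch of Helmert-type variance component estimation for several observation groups, using the simplified Förstner iteration σ̂ᵢ² = vᵢᵀPᵢvᵢ/rᵢ with redundancy rᵢ = nᵢ − tr(N⁻¹Nᵢ); the toy data stand in for the ranging and VLBI delay series and are not the paper's simulation setup.

```python
import numpy as np

def helmert_vce(groups, iters=30):
    """Simplified Helmert variance-component estimation.

    groups: list of (A_i, P_i, l_i) per observation type, with P_i the
    a-priori weight matrix. Iterates sigma_i^2 = v_i'P_i v_i / r_i,
    r_i = n_i - trace(N^-1 N_i), rescaling each group's contribution
    until the variance components stabilize.
    """
    sig2 = np.ones(len(groups))
    for _ in range(iters):
        Ni = [A.T @ P @ A / s2 for (A, P, _), s2 in zip(groups, sig2)]
        N = sum(Ni)
        b = sum(A.T @ P @ l / s2 for (A, P, l), s2 in zip(groups, sig2))
        x = np.linalg.solve(N, b)
        Ninv = np.linalg.inv(N)
        for i, (A, P, l) in enumerate(groups):
            v = A @ x - l
            r = len(l) - np.trace(Ninv @ Ni[i])
            sig2[i] = float(v @ P @ v) / r
    return x, sig2

# Toy example: two observation types of one parameter pair (made up).
rng = np.random.default_rng(0)
A1, A2 = rng.normal(size=(10, 2)), rng.normal(size=(8, 2))
xt = np.array([1.0, -2.0])
l1 = A1 @ xt + rng.normal(scale=0.01, size=10)  # precise "ranging"-like data
l2 = A2 @ xt + rng.normal(scale=0.10, size=8)   # noisier "VLBI"-like data
x, s2 = helmert_vce([(A1, np.eye(10), l1), (A2, np.eye(8), l2)])
print(x, s2)  # variance components should recover ~(0.01^2, 0.10^2)
```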

3.
4.
The findings of this paper are summarized as follows: (1) We propose a sign-constrained robust estimation method, which can tolerate 50% data contamination while achieving high, least-squares-comparable efficiency. Since the objective function is identical to that of least squares, the method may also be called sign-constrained robust least squares. An iterative version of the method has been implemented and shown to be capable of resisting more than 50% contamination. As a by-product, a robust estimate of the scale parameter can also be obtained. Unlike the least median of squares method and repeated medians, which use the least possible number of data to derive the solution, the sign-constrained robust least squares method attempts to employ the maximum possible number of good data to derive the robust solution, and thus is not affected by partial near multi-collinearity among part of the data or by some of the data being clustered together; (2) although M-estimates have been reported to have a breakdown point of 1/(t+1), we show that the weights of observations can readily deteriorate such results and bring the breakdown point of M-estimates of Huber's type to zero. The same zero breakdown point of the L1-norm method is also derived, again due to the weights of observations; (3) by assuming a prior distribution for the signs of outliers, we develop the concept of subjective breakdown point, which may be thought of as an extension of stochastic breakdown by Donoho and Huber but can be important in explaining real-life problems in Earth Sciences and image reconstruction; and finally, (4) we show that the least median of squares method can still break down with a single outlier, even if neither highly concentrated good data nor highly concentrated outliers exist. An erratum to this article is available.
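For context on point (2), the sketch below shows a standard Huber M-estimator computed by iteratively reweighted least squares; this is not the paper's sign-constrained method, only the weight-driven M-estimation mechanism whose breakdown behaviour the paper analyzes.

```python
import numpy as np

def huber_irls(A, l, k=1.345, iters=50):
    """Generic Huber M-estimator via iteratively reweighted least
    squares. NOTE: this is *not* the sign-constrained method of the
    paper; it is the standard weighted M-estimation whose dependence
    on observation weights the paper's breakdown analysis targets.
    """
    x = np.linalg.lstsq(A, l, rcond=None)[0]
    for _ in range(iters):
        v = l - A @ x
        s = 1.4826 * np.median(np.abs(v - np.median(v))) + 1e-12  # MAD scale
        u = np.abs(v) / s
        w = np.where(u <= k, 1.0, k / u)   # Huber weight function
        W = np.diag(w)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ l)
    return x

rng = np.random.default_rng(1)
A = np.column_stack([np.ones(20), np.arange(20.0)])
l = A @ np.array([0.5, 2.0]) + rng.normal(scale=0.1, size=20)
l[3] += 50.0  # a single gross outlier
print(huber_irls(A, l))  # close to (0.5, 2.0) despite the outlier
```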

5.
A new feature weighting method for band selection is presented, based on a pairwise separability criterion and analysis of matrix coefficients. By decorrelating each class with a principal component transformation, the criterion value of any band subset becomes the sum of the values of its individual bands in the transformed feature space, which reduces the computation needed to evaluate the criterion for every band combination. The corresponding matrix coefficients are then analyzed to assign weights to the original bands. Because feature weighting takes little account of spectral correlation, redundant bands are removed by keeping only those whose correlation coefficients fall below a preset threshold. Hyperspectral data classification experiments show the effectiveness of the new band selection method.
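A sketch of the redundancy-removal step only, under the assumption that separability-based weights are already available: bands are kept in descending weight order and dropped when their correlation with an already-kept band exceeds a preset threshold (the 0.9 threshold and the data are illustrative).

```python
import numpy as np

def remove_redundant_bands(X, weights, corr_thresh=0.9):
    """Greedy redundancy removal: keep bands in descending weight
    order, dropping any band whose absolute correlation with an
    already-kept band exceeds the threshold. The weights would come
    from the separability-based feature weighting; here they are
    simply given as input.
    """
    C = np.abs(np.corrcoef(X, rowvar=False))  # band-by-band correlation
    kept = []
    for b in np.argsort(weights)[::-1]:
        if all(C[b, k] < corr_thresh for k in kept):
            kept.append(b)
    return sorted(kept)

rng = np.random.default_rng(2)
base = rng.normal(size=(100, 3))
X = np.column_stack([base[:, 0],
                     base[:, 0] + 0.01 * rng.normal(size=100),  # ~duplicate band
                     base[:, 1], base[:, 2]])
print(remove_redundant_bands(X, weights=np.array([0.9, 0.8, 0.6, 0.5])))
# -> [0, 2, 3]: the near-duplicate band 1 is discarded
```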

6.
The least squares estimator is derived for a random stochastic process implied by one or two heterogeneous random stochastic processes on a sphere. The solution can be regarded as least squares collocation in the continuous case. When the method is applied in physical geodesy, the statistical expectation is usually replaced by the global average, and the method then gives the minimum mean square errors of the estimated quantities. The solutions can also be considered generalizations of the classical integral formulas of physical geodesy to heterogeneous data.
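For reference, the familiar finite-dimensional form of least squares collocation that the continuous derivation generalizes is ŝ = C_st(C_tt + D)⁻¹l; the Gaussian covariance model and the data below are illustrative assumptions.

```python
import numpy as np

def collocation_predict(C_st, C_tt, D, l):
    """Discrete least-squares collocation, s_hat = C_st (C_tt + D)^-1 l:
    predict the signal s at new points from observations l with signal
    covariance C_tt and noise covariance D. The paper treats the
    continuous analogue; this is the finite-dimensional special case.
    """
    return C_st @ np.linalg.solve(C_tt + D, l)

# Toy 1D example with a Gaussian covariance model (illustrative).
def cov(a, b, var=1.0, L=0.5):
    return var * np.exp(-((a[:, None] - b[None, :]) / L) ** 2)

x_obs = np.linspace(0.0, 1.0, 8)
x_new = np.array([0.25, 0.75])
rng = np.random.default_rng(7)
l = np.sin(2 * np.pi * x_obs) + rng.normal(scale=0.05, size=8)
print(collocation_predict(cov(x_new, x_obs), cov(x_obs, x_obs),
                          0.05**2 * np.eye(8), l))
```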

7.
The purpose of this paper is to suggest estimators for the parameters of spatial models that contain a spatially lagged dependent variable as well as spatially lagged independent variables, when the data set is incomplete. The specifications allow for nonstationarity, and the disturbance process of the model is specified non-parametrically. We consider various scenarios concerning the pattern of missing data points. One estimator we suggest is based on a smaller but complete subset of the sample; another is based on a larger but incomplete subset of the sample. We give large sample results for both of these cases.

8.
Support vectors, which usually compose a subset of the training set, determine the decision function of support vector machine (SVM) classification. Selecting a subset that includes the support vectors by reducing a large training set is a challenge. This paper examines how different linkage techniques in a clustering-based reduction method affect classification accuracy for semiarid vegetation mapping. The investigated linkage techniques are single, complete, weighted pair-group average, and unweighted pair-group average. Using a multiple-angle remote sensing data set, there is no loss of SVM accuracy when the original training set is reduced to 21%, 14%, 20%, and 20% for these four linkage techniques, respectively.
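A sketch of the clustering-based reduction idea using SciPy's hierarchical linkage, whose 'single', 'complete', 'weighted' (WPGMA), and 'average' (UPGMA) methods correspond to the four linkage techniques compared; choosing the sample nearest each cluster mean as the representative is our assumption, not necessarily the paper's rule.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def reduce_training_set(X, n_keep, method="average"):
    """Cluster-based training-set reduction: cut the dendrogram into
    n_keep clusters and keep one representative sample per cluster
    (here: the sample closest to the cluster mean).
    """
    Z = linkage(X, method=method)  # 'single'|'complete'|'weighted'|'average'
    labels = fcluster(Z, t=n_keep, criterion="maxclust")
    keep = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        centroid = X[idx].mean(axis=0)
        keep.append(idx[np.argmin(np.linalg.norm(X[idx] - centroid, axis=1))])
    return np.array(keep)

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))               # stand-in for per-class samples
subset = reduce_training_set(X, n_keep=40)  # keep ~20% of the original set
print(subset.shape)
```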

9.
叶亚琴  陈波  万波  周顺平 《测绘科学》2012,37(6):101-103
The fusion of multiple indicators during spatial entity matching is one of the key problems affecting matching performance. Taking area entities as an example, this paper proposes a case-library-based solution to this problem. First, the data feature factors that affect entity matching are extracted and a quantification method is determined; next, typical matching indicators are selected; then, the weight of each indicator is determined by building a case library of indicator weights; finally, the matching process is adjusted according to the weights and the data feature factors. The method gives the data a learning capability and achieves adaptive indicator weights. Experiments show that the method is feasible and improves the efficiency, accuracy, and degree of automation of spatial entity matching algorithms.

10.
New IGS Station and Satellite Clock Combination　　Total citations: 3 (self: 5, others: 3)
Following the principles set forth in Position Paper #3 at the 1998 Darmstadt Analysis Center (AC) Workshop on the new International GPS Service (IGS) International Terrestrial Reference Frame (ITRF) realization and discussions at the 1999 La Jolla AC workshop, a new clock combination program was developed. The program allows for the input of both SP3 and the new clock (RINEX) format (ftp://igsch.jpl.nasa.gov//igscb/data/format/rinex_clock.txt). The main motivation for this new development is the realization of the goals of the IGS/BIPM timing project. Besides this, there is genuine interest in station clocks and a need for a higher sampling rate of the IGS clocks (currently limited to 15 min by the SP3 format). The inclusion of station clocks should also allow for better alignment of the individual AC solutions and should enable the realization of a stable GPS time-scale. For each input AC clock solution the new clock combination solves and corrects for reference clock errors/instabilities as well as satellite/station biases, geocenter and station/satellite orbit errors. External station clock calibrations and/or constraints, such as those resulting from the IGS/BIPM timing pilot project, can be introduced via a subset of the fiducial timing station set, to facilitate a precise and consistent IGS UTC realization for both station and satellite combined clock solutions. Furthermore, the new clock combination process enforces strict conformity and consistency with current and future IGS standards. The new clock combination maintains orbit/clock consistency at the millimeter level, which is comparable to the best AC orbit/clock solutions. This is demonstrated by static GIPSY precise point positioning tests using GPS week 0995 data for stations in both Northern and Southern Hemispheres and by similar tests with the Bernese software using more recent data from GPS week 1081. © 2001 John Wiley & Sons, Inc.

11.
This paper describes techniques to compute and map dasymetric population densities and to areally interpolate census data using dasymetrically derived population weights. These techniques are demonstrated with 1980-2000 census data from the 13-county Atlanta metropolitan area. Land-use/land-cover data derived from remotely sensed satellite imagery were used to determine the areal extent of populated areas, which in turn served as the denominator for dasymetric population density computations at the census tract level. The dasymetric method accounts for the spatial distribution of population within administrative areas, yielding more precise population density estimates than the choroplethic method, while graphically representing the geographic distribution of populations. To areally interpolate census data from one set of census tract boundaries to another, where boundary shifts made temporal comparisons difficult, the percentage of populated area affected by boundary changes in each affected tract was used as an adjustment weight for census data at the tract level. This method of areal interpolation made it possible to represent three years of census data (1980, 1990, and 2000) in one set of common census tracts (1990). Accuracy assessment of the dasymetrically derived adjustment weights indicated a satisfactory level of accuracy. Dasymetrically derived areal interpolation weights can be applied to any type of geographic boundary re-aggregation, such as from census tracts to zip code tabulation areas, from census tracts to local school districts, from zip code areas to telephone exchange prefix areas, and for electoral redistricting.
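A minimal numeric sketch of the two operations described: dasymetric density (population divided by the populated area derived from land cover) and areal interpolation by populated-area shares; all figures are hypothetical.

```python
def dasymetric_density(population, populated_area_km2):
    """Dasymetric density: divide population by the *populated* area
    (from land-cover data) rather than the full tract area."""
    return population / populated_area_km2

def interpolate_population(pop_source, populated_area_shares):
    """Areal interpolation: redistribute a source tract's population
    to target zones in proportion to each zone's share of the tract's
    populated area (shares must sum to 1)."""
    return {zone: pop_source * share
            for zone, share in populated_area_shares.items()}

# Hypothetical tract: 10,000 people, 25 km^2 total, 8 km^2 populated.
print(dasymetric_density(10_000, 8.0))   # 1250 people/km^2, not 400
print(interpolate_population(10_000, {"tract_A": 0.7, "tract_B": 0.3}))
```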

12.
The two-temperature method (TTM) allows the separation of land-surface temperature and land-surface emissivity information from radiance measurements, and therefore the solution can be uniquely determined by the data. However, the inverse problem remains ill-posed, since the solution does not depend continuously on the data. Accordingly, we have used mathematical tools suited to the analysis of ill-posed problems in order to characterize the TTM, evaluate it, and optimize its estimates. On this last point, we show that it is necessary to constrain the problem, either by defining a region of physically admissible solutions and/or by using regularization methods, in order to obtain stable results. In addition, the results may be improved by using the TTM with systems of high temporal resolution, as well as by acquiring observations near the maximum and minimum of the diurnal temperature range.

13.
We combine the publicly available GRACE monthly gravity field time series to produce gravity fields with reduced systematic errors. We first compare the monthly gravity fields in the spatial domain in terms of signal and noise. Then, we combine the individual gravity fields with comparable signal content but diverse noise characteristics. We test five different weighting schemes: equal weights; non-iterative coefficient-wise, order-wise, or field-wise weights; and iterative field-wise weights obtained by variance component estimation (VCE). The combined solutions are evaluated in terms of signal and noise in the spectral and spatial domains. Compared to the individual contributions, they generally show lower noise. When the noise characteristics of the individual solutions differ significantly, the weighted means are less noisy than the arithmetic mean: the non-seasonal variability over the oceans is reduced by up to 7.7%, and the root mean square (RMS) of the residuals of mass change estimates within Antarctic drainage basins is reduced by 18.1% on average. The field-wise weighting schemes generally perform better than the order- or coefficient-wise schemes. Combining the full set of considered time series results in lower noise levels than combining a subset consisting of the official GRACE Science Data System gravity fields only: the RMS of coefficient-wise anomalies is smaller by up to 22.4% and the non-seasonal variability over the oceans by 25.4%. This study was performed in the frame of the European Gravity Service for Improved Emergency Management (EGSIEM; http://www.egsiem.eu) project. The gravity fields provided by the EGSIEM scientific combination service (ftp://ftp.aiub.unibe.ch/EGSIEM/) are combined based on the weights derived by VCE as described in this article.
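A simplified sketch of the iterative field-wise VCE weighting scheme: each input series is weighted by the inverse variance of its residuals about the current weighted mean. The real EGSIEM combination also includes coefficient- and order-wise schemes and formal-error handling omitted here.

```python
import numpy as np

def vce_combination(solutions, iters=20):
    """Iterative field-wise VCE sketch for combining solutions:
    weight each input by the inverse variance of its residuals about
    the current weighted mean. solutions: (n_solutions, n_coeffs).
    """
    S = np.asarray(solutions, float)
    w = np.ones(len(S))
    for _ in range(iters):
        mean = (w[:, None] * S).sum(0) / w.sum()
        var = ((S - mean) ** 2).mean(axis=1) + 1e-30
        w = 1.0 / var                 # field-wise variance components
    return mean, w / w.sum()

rng = np.random.default_rng(4)
truth = rng.normal(size=100)
inputs = [truth + rng.normal(scale=s, size=100) for s in (0.1, 0.1, 0.5)]
mean, w = vce_combination(inputs)
print(w)  # the noisier third field receives a much smaller weight
```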

14.
Application of Integer Programming to the Optimal Design of Observation Accuracy for Construction Control Networks　　Total citations: 2 (self: 1, others: 2)
岑敏仪 《测绘学报》1992,21(1):34-41
Optimal observation weights for a control network obtained by mathematical solution must be converted into an observation scheme before they can be implemented. How to make the converted scheme both free of subjective influence and optimal in design is the problem this paper seeks to solve. Taking into account the influence of errors in the original data of construction control networks, the paper presents a mathematical model and computation method for the optimal design of observation accuracy using integer programming, making the optimal design theory more complete. A numerical example at the end illustrates the role of this method in the optimal design of observation accuracy for control networks.
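The sketch below illustrates the underlying optimization in miniature: choose integer repetition counts per observation to minimize cost subject to a precision criterion. It uses exhaustive search rather than a true integer-programming solver, and the design matrix, costs, and precision limit are made up.

```python
import itertools
import numpy as np

def best_integer_scheme(A, p_unit, costs, q_limit, max_rep=4):
    """Brute-force stand-in for the paper's integer program: choose an
    integer number of repetitions n_i for each observation so that the
    precision criterion trace(Qxx) stays below a limit at minimum
    total cost. A real application would use an IP solver; exhaustive
    search keeps the sketch dependency-free.
    """
    best = None
    for n in itertools.product(range(1, max_rep + 1), repeat=len(costs)):
        N = (A.T * (np.array(n) * p_unit)) @ A   # N = sum n_i p_i a_i a_i'
        q = np.trace(np.linalg.inv(N))
        cost = float(np.dot(n, costs))
        if q <= q_limit and (best is None or cost < best[0]):
            best = (cost, n, q)
    return best

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # toy design matrix
print(best_integer_scheme(A, p_unit=np.ones(3),
                          costs=[1.0, 1.0, 2.0], q_limit=1.2))
# -> repeating the first observation twice meets the limit at least cost
```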

15.
Scatterplots are essential tools for data exploration. However, they scale poorly with data size, with overplotting and excessive delay being the main problems. Generalization methods in the attribute domain focus on visual manipulations but do not take into account the information redundancy inherent in most geographic data, and they may also alter the statistical properties of the data. Recent developments in spatial statistics, particularly the formulation of effective sample size and the fast approximation of the eigenvalues of a spatial weights matrix, make it possible to assess the information content of a georeferenced data-set, which can serve as the basis for resampling such data. Experiments with both simulated data and actual remotely sensed data show that an equivalent scatterplot, consisting of point clouds and fitted lines, can be produced from a small subset extracted from a parent georeferenced data-set through spatial resampling. The spatially simplified subset also maintains key statistical properties as well as the geographic coverage of the original data.
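A toy sketch of the resampling idea, using the common first-order approximation n* = n(1 − ρ)/(1 + ρ) for the effective sample size of positively autocorrelated data; the paper instead derives n* from the eigenvalues of a spatial weights matrix, so this formula is only a stand-in.

```python
import numpy as np

def effective_sample_size(n, rho):
    """First-order approximation n* = n(1-rho)/(1+rho) for positively
    autocorrelated data; a stand-in for the paper's eigenvalue-based
    effective sample size."""
    return int(n * (1 - rho) / (1 + rho))

def spatial_resample(values, rho, rng=None):
    """Draw a random subset of size n*, so that a scatterplot of the
    subset carries roughly the information content of the full set.
    (The paper resamples spatially; simple random sampling is used
    here for brevity.)"""
    rng = rng or np.random.default_rng()
    n_eff = effective_sample_size(len(values), rho)
    return rng.choice(values, size=n_eff, replace=False)

data = np.random.default_rng(8).normal(size=100_000)
print(len(spatial_resample(data, rho=0.8)))  # about 11,000 points retained
```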

16.
王乐洋  陈汉清 《测绘学报》2017,46(5):658-665
When least-squares collocation is used to process multibeam bathymetric data, a quadratic-surface model usually cannot accurately represent the overall trend of the seafloor topography, and when the observations contain gross errors or outliers, the covariance function given by conventional methods cannot accurately represent their statistical properties. To address these problems, this paper proposes an iterative robust least-squares collocation algorithm. The method first initializes the covariance function and the variance matrix of the observations, fits the trend term with a multiquadric function, then applies equivalent-weight robust estimation, and through iteration finally yields robust covariance function parameters and a robust collocation solution. The proposed and conventional methods were applied to measured multibeam bathymetric data. The results show that, compared with the conventional method, the proposed method better represents the overall trend of the seafloor topography and to some extent overcomes the influence of gross errors or outliers in the data. Compared with the conventional robust method, it identifies outliers in the bathymetric data more effectively, gives better predictions, and is robust.
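A sketch of the robust trend step only: a multiquadric surface is fitted to soundings with Huber-type equivalent weights computed iteratively. Re-estimation of the collocation covariance function, central to the full method, is omitted, and the kernel and weighting constants are illustrative.

```python
import numpy as np

def multiquadric_trend(xy, z, centers, c=1.0, k=1.5, iters=10):
    """Robust multiquadric trend fit with Huber-type equivalent
    weights. The full method would also re-estimate the collocation
    covariance function from the trend residuals; that step is
    omitted here.
    """
    d = np.linalg.norm(xy[:, None, :] - centers[None, :, :], axis=2)
    Q = np.sqrt(d ** 2 + c ** 2)          # multiquadric kernel matrix
    w = np.ones(len(z))
    for _ in range(iters):
        W = np.diag(w)
        coef = np.linalg.solve(Q.T @ W @ Q + 1e-8 * np.eye(Q.shape[1]),
                               Q.T @ W @ z)
        v = z - Q @ coef
        s = 1.4826 * np.median(np.abs(v)) + 1e-12
        u = np.abs(v) / s
        w = np.where(u <= k, 1.0, k / u)  # equivalent (Huber-type) weights
    return coef, w

rng = np.random.default_rng(5)
xy = rng.uniform(0, 10, size=(60, 2))
depth = -50 + 2 * np.sin(xy[:, 0]) + rng.normal(scale=0.2, size=60)
depth[7] += 30.0                          # simulated sounding blunder
coef, w = multiquadric_trend(xy, depth, centers=xy[::6])
print(w[7])  # the blunder is heavily down-weighted
```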

17.
胡小工  黄珹  廖新浩 《测绘学报》2001,30(2):101-107
A regional GPS network was processed with the GIPSY software developed by the Jet Propulsion Laboratory (JPL), comparing two solution strategies: fixing only the precise ephemerides, and fixing both the precise ephemerides and the satellite clock parameters. Statistical tests show that the residuals of the former still retain some unresolved signal, whereas the residuals of the latter are close to white Gaussian noise. Comparison of the results with ITRF96 and statistics of the repeatability show that the solution with the more reasonable residual distribution is superior, and that simple statistical tests on the residuals provide an important means of evaluating solutions.

18.
To address the rank deficiency and the ill-conditioned ambiguity resolution that arise in single-frequency, single-epoch carrier-phase differential (RTK) positioning, a new ambiguity decorrelation method is proposed. The method introduces pseudorange observations to assist the solution. First, weights are assigned to the pseudorange and carrier-phase observations by an empirical weighting scheme, and the float ambiguities and their covariance are obtained by weighted least squares. Then the variance-covariance matrix of the float ambiguities is sorted in descending order and ill-conditioned ambiguities are eliminated. Finally, the corrected float solution is used to iteratively search for the integer ambiguities. Experimental results show that the method achieves good ambiguity decorrelation and positioning performance.
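A sketch of the float-solution step: carrier-phase and pseudorange equations are stacked with empirical weights and solved by weighted least squares, yielding float ambiguities and their cofactor matrix for subsequent screening and integer search. The 100:1 weight ratio, geometry, and wavelength below are illustrative assumptions.

```python
import numpy as np

def float_ambiguities(A_phase, l_phase, A_code, l_code,
                      w_phase=100.0, w_code=1.0):
    """Weighted-LS float solution: stack carrier-phase and pseudorange
    equations, with phase weighted empirically far above code. Returns
    the float solution and its cofactor matrix, from which
    ill-conditioned ambiguities would be screened before the integer
    search.
    """
    A = np.vstack([A_phase, A_code])
    l = np.concatenate([l_phase, l_code])
    P = np.diag([w_phase] * len(l_phase) + [w_code] * len(l_code))
    N = A.T @ P @ A
    x = np.linalg.solve(N, A.T @ P @ l)
    Q = np.linalg.inv(N)   # variance-covariance of the float solution
    return x, Q

# Toy single-epoch setup: 2 position unknowns + 3 float ambiguities.
rng = np.random.default_rng(6)
G = rng.normal(size=(3, 2))                # geometry part (made up)
lam = 0.19                                 # L1 wavelength in metres
A_phase = np.hstack([G, lam * np.eye(3)])  # phase sees position + N
A_code = np.hstack([G, np.zeros((3, 3))])  # code sees position only
x_true = np.array([1.0, -0.5, 3.0, -2.0, 5.0])
l_phase = A_phase @ x_true + rng.normal(scale=0.003, size=3)
l_code = A_code @ x_true + rng.normal(scale=0.3, size=3)
x, Q = float_ambiguities(A_phase, l_phase, A_code, l_code)
print(x[2:])  # float ambiguities scatter around the integers (3, -2, 5)
```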

19.
Observation systems known as errors-in-variables (EIV) models, with model parameters estimated by total least squares (TLS), have been discussed for more than a century, though the terms EIV and TLS were coined much more recently. So far, it has only been shown that the inequality-constrained TLS (ICTLS) solution can be obtained by combinatorial methods, assuming that the weight matrices of the observations in the data vector and the data matrix are identity matrices. Although previous works test all combinations of active sets or solution schemes in a clear way, some aspects have received little or no attention, such as admissible weights, solution characteristics and numerical efficiency. The aim of this study was therefore to adjust the EIV model subject to linear inequality constraints. In particular, (1) this work deals with a symmetric positive-definite cofactor matrix that may otherwise be quite arbitrary, and also considers cross-correlations between the cofactor matrices of the random coefficient matrix and the random observation vector; (2) from a theoretical perspective, we present the first-order Karush–Kuhn–Tucker (KKT) necessary conditions and the second-order sufficient conditions of the inequality-constrained weighted TLS (ICWTLS) solution in analytical form; (3) from a numerical perspective, an active set method without combinatorial tests, as well as a method based on sequential quadratic programming (SQP), is established. In applications, the computational costs of the proposed algorithms are shown to be significantly lower than those of existing ICTLS methods, and the proposed methods can treat the ICWTLS problem with more general weight matrices. Finally, we study the ICWTLS solution in terms of non-convex weighted TLS contours from a geometrical perspective.
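A tiny inequality-constrained TLS instance solved with an SQP-type method (SciPy's SLSQP), in the spirit of the paper's second algorithm; it uses identity weights and a line-fit objective, whereas the paper handles general, cross-correlated weight matrices.

```python
import numpy as np
from scipy.optimize import minimize

def ictls_line(x, y, lower_slope=0.0):
    """Inequality-constrained TLS for a line y = a + b*x with identity
    weights: the TLS objective is the sum of squared orthogonal
    distances, sum((y - a - b x)^2) / (1 + b^2), minimized by an
    SQP-type method subject to b >= lower_slope.
    """
    def obj(p):
        a, b = p
        return np.sum((y - a - b * x) ** 2) / (1.0 + b ** 2)
    res = minimize(obj, x0=[0.0, 1.0], method="SLSQP",
                   constraints=[{"type": "ineq",
                                 "fun": lambda p: p[1] - lower_slope}])
    return res.x

rng = np.random.default_rng(9)
x = np.linspace(0, 5, 30) + rng.normal(scale=0.1, size=30)
y = 2.0 + 0.3 * x + rng.normal(scale=0.1, size=30)
print(ictls_line(x, y, lower_slope=0.5))  # constraint active: slope pinned at 0.5
```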

20.
Finite element method for solving geodetic boundary value problems　　Total citations: 1 (self: 1, others: 0)
The goal of this paper is to present a finite element scheme for solving the Earth potential problem in 3D domains above the Earth's surface. To that end we formulate the boundary-value problem (BVP) consisting of the Laplace equation outside the Earth accompanied by Neumann as well as Dirichlet boundary conditions (BC). The 3D computational domain is bounded below by a spherical approximation or a real triangulation of the Earth's surface, on which surface gravity disturbances are given. We introduce additional upper (spherical) and side (planar and conical) boundaries on which the Dirichlet BC is given. The solution of such an elliptic BVP is understood in the weak sense; it always exists, is unique, and can be found efficiently by the finite element method (FEM). We briefly present the derivation of FEM for this type of problem, including the main discretization ideas. The method leads to sparse symmetric linear systems whose solution gives the Earth's potential at every discrete node of the 3D computational domain. In this respect our method differs from other numerical approaches, e.g. the boundary element method (BEM), where the potential is sought on a hypersurface only. We apply and test FEM in various situations. First, we compare the FEM solution with the known exact solution in the case of a homogeneous sphere. Then we solve the geodetic BVP at continental scale using the DNSC08 data and compare the results with the EGM2008 geopotential model. Finally, we study the precision of our solution by a GPS/levelling test in Slovakia, using terrestrial gravimetric measurements as input data. All tests show qualitative and quantitative agreement with the given solutions.
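A 1D analogue of the scheme, showing only the FEM assembly and the mixed Neumann/Dirichlet boundary handling: u″ = 0 on (0, L) with u′(0) given (the "gravity disturbance" side) and u(L) fixed (the "upper boundary"). The paper's solver works on 3D domains above the Earth's surface; this miniature sketch is ours.

```python
import numpy as np

def fem_laplace_1d(n, g_neumann, u_dirichlet, length=1.0):
    """Solve u'' = 0 on (0, L) with u'(0) = g (Neumann) and
    u(L) = u_D (Dirichlet) using n linear finite elements.
    Shows the assembly and boundary-condition mechanics only.
    """
    h = length / n
    K = np.zeros((n + 1, n + 1))             # global stiffness matrix
    for e in range(n):                       # assemble element matrices
        K[e:e+2, e:e+2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
    f = np.zeros(n + 1)
    f[0] = -g_neumann                        # natural (Neumann) BC at x=0
    K[n, :], K[n, n], f[n] = 0.0, 1.0, u_dirichlet  # essential BC at x=L
    return np.linalg.solve(K, f)

u = fem_laplace_1d(n=10, g_neumann=2.0, u_dirichlet=5.0)
print(u[:3])  # exact solution is u(x) = 5 + 2*(x - 1): 3.0, 3.2, 3.4
```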
