Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
Total least squares (TLS) can solve the parameter estimation problem in the errors-in-variables (EIV) model; however, the estimated parameters are biased or even severely distorted when the observation vector and the coefficient matrix are contaminated by gross errors. Existing robust TLS (RTLS) methods for the EIV model are unsatisfactory: most studies construct the weight factor function directly from the raw residuals, so robustness with respect to the structure space is not considered. In this study, a robust weighted total least squares (RWTLS) algorithm for the partial EIV model is proposed, based on the Gauss-Newton method and the equivalent-weight principle of general robust estimation. The algorithm constructs the weight factor function from standardized residuals and uses the median method to obtain a robust estimator of the variance component; it therefore achieves good robustness in both the observation and structure spaces. To obtain the standardized residuals, the expression for the cofactor matrix of the WTLS residuals is derived using a linearly approximated cofactor propagation law. The iterative procedure and a precision assessment approach for RWTLS are presented. Finally, the robustness of the RWTLS method is verified in two experiments, involving line fitting and plane coordinate transformation. The results show that the RWTLS algorithm is more robust than both general robust estimation and a robust TLS algorithm constructed directly from raw residuals.
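A minimal sketch of the reweighting idea, assuming a plain line-fitting setup (not the paper's partial-EIV model): an orthogonal TLS fit is iteratively reweighted with Huber-type weights built from residuals standardized by a median-based (MAD) scale, so a gross error is downweighted instead of distorting the fit.

```python
import numpy as np

def robust_tls_line(x, y, iters=20):
    """Orthogonal (TLS) line fit a*x + b*y + c = 0 with IRLS downweighting.

    Illustrative sketch: weights come from residuals standardized by a
    median-based (MAD) scale, echoing the abstract's median method.
    """
    w = np.ones_like(x, dtype=float)
    for _ in range(iters):
        mx, my = np.average(x, weights=w), np.average(y, weights=w)
        X = np.column_stack([x - mx, y - my]) * np.sqrt(w)[:, None]
        # smallest right singular vector = normal of the best orthogonal fit
        n = np.linalg.svd(X, full_matrices=False)[2][-1]
        r = (x - mx) * n[0] + (y - my) * n[1]       # orthogonal residuals
        s = 1.4826 * np.median(np.abs(r)) + 1e-12   # robust scale (MAD)
        u = np.abs(r) / s                           # standardized residuals
        w = np.where(u <= 1.5, 1.0, 1.5 / u)        # Huber-type weights
    a, b = n
    return a, b, -(a * mx + b * my), w

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.05, 50)
y[0] += 5.0                                         # one gross error
a, b, c, w = robust_tls_line(x, y)
slope = -a / b                                      # recovered slope, near 2
```

The gross error at the first point receives a small weight, so the recovered slope stays close to the true value of 2.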

2.
A method for variance component estimation (VCE) in errors-in-variables (EIV) models is proposed, which leads to a novel rigorous total least-squares (TLS) approach. To achieve a realistic estimation of parameters, knowledge about the stochastic model, in addition to the functional model, is required. For an EIV model, existing TLS techniques either do not consider the stochastic model at all or assume approximate models such as those with only one variance component. In contrast, the proposed method admits an unknown structure for the stochastic model in the adjustment of an EIV model: it simultaneously predicts the stochastic model and estimates the unknown parameters of the functional model. Moreover, the method shows how an EIV model can support the Gauss-Helmert model in some cases. To make the VCE theory for EIV models more applicable, two simplified algorithms are also proposed. The proposed methods are applied to linear regression and datum transformation; in particular, a 3-D nonlinear, close-to-identity similarity transformation is performed. Two simulation studies and an experimental example give insight into the efficiency of the algorithms.

3.
4.
To address the ill-posedness in the joint inversion of vertical and horizontal displacements with the Mogi model, the virtual-observation method for total least squares (TLS) joint inversion of volcanic deformation is improved, and variance component estimation (VCE) is used to determine the regularization parameter of the ill-posed problem. Parameters carrying prior information are treated as observation equations and solved jointly with the observation equations for vertical and horizontal displacements. The solution formulas for the joint inversion of the three types of observation equations are derived, together with the expression for determining the regularization parameter by TLS-based variance component estimation, and the iterative workflow of the algorithm is given. Numerical experiments illustrate the application of the virtual-observation TLS joint-inversion method to deformation inversion with the volcanic Mogi model. The results show that the joint adjustment of the three data types, combined with VCE, can determine the weight-ratio factors and yield corrected pressure-source parameters, which is of practical reference value.

5.
Data assimilation is a sophisticated mathematical technique for combining observational data with model predictions to produce state and parameter estimates that most accurately approximate the current and future states of the true system. The technique is commonly used in atmospheric and oceanic modelling, combining empirical observations with model predictions to produce more accurate and well-calibrated forecasts. Here, we consider a novel application within a coastal environment and describe how the method can also be used to deliver improved estimates of uncertain morphodynamic model parameters. This is achieved using a technique known as state augmentation. Earlier applications of state augmentation have typically employed the 4D-Var, Kalman filter or ensemble Kalman filter assimilation schemes. Our new method is based on a computationally inexpensive 3D-Var scheme, where the specification of the error covariance matrices is crucial for success. A simple 1D model of bed-form propagation is used to demonstrate the method. The scheme is capable of recovering near-perfect parameter values and, therefore, improves the capability of our model to predict future bathymetry. Such positive results suggest the potential for application to more complex morphodynamic models.
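The state-augmentation step can be sketched for a toy linear case (names and numbers are illustrative, not the paper's morphodynamic model): the augmented state stacks a model state with an uncertain parameter, and the background cross-covariance is what lets a 3D-Var analysis update the parameter even though only the state is observed.

```python
import numpy as np

# Minimal 3D-Var analysis with state augmentation: z = [h, c], where h is
# an observed model state (e.g. bed height) and c an unobserved parameter.
z_b = np.array([1.0, 0.5])            # background: state guess, parameter guess
H = np.array([[1.0, 0.0]])            # observation operator: we observe h only
B = np.array([[0.2, 0.1],             # background covariance; the h-c cross
              [0.1, 0.2]])            # term is the crucial ingredient
R = np.array([[0.05]])                # observation error covariance
y = np.array([1.4])                   # observation of h

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # 3D-Var (Kalman-type) gain
z_a = z_b + K @ (y - H @ z_b)                  # analysis updates both h and c
```

Even though only `h` is observed, the analysis moves the parameter from 0.5 to 0.66 because of the cross-covariance term in `B`; with a zero cross term the parameter would stay untouched.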

6.
A new proof is presented of the desirable property of the weighted total least-squares (WTLS) approach of preserving the structure of the coefficient matrix in terms of its functionally independent elements. The WTLS considers the full covariance matrix of the observed quantities in the observation vector and in the coefficient matrix; possible correlations between entries of the observation vector and of the coefficient matrix are also considered. The WTLS approach is then equipped with constraints in order to produce the constrained structured TLS (CSTLS) solution. The proposed approach accounts for the correlation between the observation vector and the coefficient matrix of an Errors-In-Variables model, which is not considered in other recently proposed approaches. A rigid transformation problem is solved while preserving the structure and satisfying the constraints simultaneously.

7.
Proper incorporation of linear and quadratic constraints is critical in estimating parameters from a system of equations. These constraints may be used to avoid a trivial solution, to mitigate biases, to guarantee the stability of the estimation, to impose a certain “natural” structure on the system involved, and to incorporate prior knowledge about the system. The Total Least-Squares (TLS) approach as applied to the Errors-In-Variables (EIV) model is the proper method to treat problems where all the data are affected by random errors. A set of efficient algorithms has been developed previously to solve the TLS problem, and a few procedures have been proposed to treat TLS problems with linear constraints and TLS problems with a quadratic constraint. In this contribution, a new algorithm is presented to solve TLS problems with both linear and quadratic constraints. The new algorithm is developed using the Euler-Lagrange theorem while following an optimization process that minimizes a target function. Two numerical examples are employed to demonstrate the use of the new approach in a geodetic setting.
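For context, the unconstrained TLS baseline that such constrained algorithms extend can be written in a few lines via the SVD of the augmented matrix [A | b] (the standard Golub–Van Loan construction; this sketch does not implement the paper's Euler-Lagrange algorithm):

```python
import numpy as np

def tls(A, b):
    """Basic total least-squares solution of A x ~ b via the SVD of [A | b]:
    the solution is read off the right singular vector belonging to the
    smallest singular value."""
    n = A.shape[1]
    V = np.linalg.svd(np.column_stack([A, b]))[2].T   # right singular vectors
    return -V[:n, n] / V[n, n]                        # normalize last column

rng = np.random.default_rng(1)
A0 = rng.normal(size=(100, 2))
x_true = np.array([1.0, -2.0])
A = A0 + rng.normal(0, 0.01, A0.shape)   # errors in the coefficient matrix
b = A0 @ x_true + rng.normal(0, 0.01, 100)   # errors in the observations
x_hat = tls(A, b)                        # close to x_true
```

Unlike ordinary least squares, this treats perturbations of `A` and `b` symmetrically, which is the defining feature of the EIV model the abstract refers to.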

8.
Cartesian coordinate transformation between two erroneous coordinate systems is considered within the Errors-In-Variables (EIV) model. The adjustment of this model is usually called total least-squares (TLS). Many iterative algorithms for this adjustment are given in the geodetic literature. They give equivalent results for the same example and the same user-defined convergence tolerance; however, their convergence speed and stability are affected adversely if the coefficient matrix of the normal equations in the iterative solution is ill-conditioned. Well-known numerical remedies for this problem, such as regularization or shifting and scaling of the variables in the model, are not easily applied to the complicated equations of these algorithms. The EIV model for coordinate transformations can be treated as the nonlinear Gauss-Helmert (GH) model, and the (weighted) standard least-squares adjustment of the iteratively linearized GH model yields the (weighted) total least-squares solution; the above-mentioned numerical techniques are straightforward to use in this adjustment procedure. This contribution shows how properly diminished coordinate systems can be used in the iterative solution of this adjustment. Although the equations are mainly studied herein for 3D similarity transformation with differential rotations, they can be derived for other kinds of coordinate transformations, as shown in the study. The convergence properties of the algorithms based on the LS adjustment of the GH model are studied with numerical examples. These examples show that using diminished coordinates for both systems increases the numerical efficiency of the iterative TLS solution in geodetic datum transformation: the algorithm working with diminished coordinates converges much faster, with an error at least 10^-5 times that of the algorithm working with the original coordinates.
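A toy illustration of why reduced ("diminished") coordinates help, assuming a linearized 2-D similarity transform (not the paper's full Gauss-Helmert iteration): with a large coordinate offset the least-squares design matrix is severely ill-conditioned, and subtracting the centroid improves the condition number by orders of magnitude.

```python
import numpy as np

rng = np.random.default_rng(6)
xy = 5_000_000.0 + rng.normal(0.0, 100.0, (20, 2))  # raw coords, huge offset

def design(p):
    # Linear 2-D similarity model: x' = a*x - b*y + tx,  y' = b*x + a*y + ty.
    rows = []
    for px, py in p:
        rows.append([px, -py, 1.0, 0.0])
        rows.append([py,  px, 0.0, 1.0])
    return np.array(rows)

c_raw = np.linalg.cond(design(xy))                    # huge
c_red = np.linalg.cond(design(xy - xy.mean(axis=0)))  # modest
```

The centroid shift is trivially undone after adjustment by transforming the estimated translation back, which is why the trick costs nothing in accuracy.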

9.
Regularization is necessary for inversion of ill-posed geophysical problems. Appraisal of inverse models is essential for meaningful interpretation of these models. Because uncertainties are associated with regularization parameters, extra conditions are usually required to determine proper parameters for assessing inverse models. Commonly used techniques for assessment of a geophysical inverse model derived (generally iteratively) from a linear system are based on calculating the model resolution and the model covariance matrices. Because the model resolution and the model covariance matrices of the regularized solutions are controlled by the regularization parameter, direct assessment of inverse models using only the covariance matrix may provide incorrect results. To assess an inverted model, we use the concept of a trade-off between model resolution and covariance to find a proper regularization parameter with singular values calculated in the last iteration. We plot the singular values from large to small to form a singular value plot. A proper regularization parameter is normally the first singular value that approaches zero in the plot. With this regularization parameter, we obtain a trade-off solution between model resolution and model covariance in the vicinity of a regularized solution. The unit covariance matrix can then be used to calculate error bars of the inverse model at a resolution level determined by the regularization parameter. We demonstrate this approach with both synthetic and real surface-wave data.
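A rough sketch of the singular-value screening described above, on a synthetic matrix (illustrative, not the authors' surface-wave code): sort the singular values, take the first one that is negligibly small relative to the largest as the cutoff, and form the model resolution matrix for the retained subspace.

```python
import numpy as np

rng = np.random.default_rng(2)
# Forward operator with 7 well-resolved directions and 3 nearly null ones.
G = rng.normal(size=(30, 10)) @ np.diag([1.0] * 7 + [1e-6, 1e-7, 1e-8])

U, s, Vt = np.linalg.svd(G, full_matrices=False)   # s sorted large -> small
cut = int(np.argmax(s / s[0] < 1e-4))              # first "near-zero" value
R = Vt[:cut].T @ Vt[:cut]                          # resolution matrix V_k V_k^T
```

The trace of `R` equals the number of retained singular values, i.e. the effective number of resolved model parameters at this regularization level.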

10.
Nonuniqueness in geophysical inverse problems is naturally resolved by incorporating prior information about unknown models into observed data. In practical estimation procedures, the prior information must be quantitatively expressed. We represent the prior information in the same form as observational equations, nonlinear equations with random errors in general, and treat them as data. We may then define a posterior probability density function of model parameters for given observed data and prior data, and use the maximum likelihood criterion to solve the problem. Supposing Gaussian errors both in observed data and prior data, we obtain a simple algorithm for iterative search to find the maximum likelihood estimates. We also obtain an asymptotic expression of covariance for estimation errors, which gives a good approximation to the exact covariance when the estimated model is linearly close to a true model. We demonstrate that our approach is a general extension of various inverse methods dealing with Gaussian data. By way of example, we apply the new approach to a problem of inferring the final rupture state of the 1943 Tottori earthquake (M = 7.4) from coseismic geodetic data. The example shows that the use of sufficient prior information effectively suppresses both the nonuniqueness and the nonlinearity of the problem.
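For the linear-Gaussian special case, the "prior information as data" idea reduces to one stacked weighted least-squares problem (a sketch with made-up operators and standard deviations, not the paper's nonlinear algorithm):

```python
import numpy as np

rng = np.random.default_rng(3)
G = rng.normal(size=(20, 3))                   # observation operator
m_true = np.array([1.0, 2.0, 3.0])
d = G @ m_true + rng.normal(0, 0.1, 20)        # observed data
m_prior = np.array([0.9, 2.1, 2.9])            # prior information, treated as data
sd, sm = 0.1, 0.5                              # data / prior standard deviations

# Whiten and stack: prior rows sit under the observation rows, so the MAP
# (maximum-posterior) estimate is a single least-squares solve.
A = np.vstack([G / sd, np.eye(3) / sm])
y = np.concatenate([d / sd, m_prior / sm])
m_map = np.linalg.lstsq(A, y, rcond=None)[0]
cov_post = np.linalg.inv(A.T @ A)              # asymptotic error covariance
```

In the nonlinear case the same stacking is applied to the linearized equations at each step of the iterative search.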

11.
This paper is concerned with developing computational methods and approximations for maximum likelihood estimation and minimum mean square error smoothing of irregularly observed two-dimensional stationary spatial processes. The approximations are based on various Fourier expansions of the covariance function of the spatial process, expressed in terms of the inverse discrete Fourier transform of the spectral density function of the underlying spatial process. We assume that the underlying spatial process is governed by elliptic stochastic partial differential equations (SPDE's) driven by a Gaussian white noise process. SPDE's have often been used to model the underlying physical phenomenon, and elliptic SPDE's are generally associated with steady-state problems. A central problem in estimation of the underlying model parameters is to identify the covariance function of the process. The cumbersome exact analytical calculation of the covariance function by inverting the spectral density function of the process has commonly been used in the literature. The present work develops various Fourier approximations for the covariance function of the underlying process which are in easily computable form and allow easy application of Newton-type algorithms for maximum likelihood estimation of the model parameters. This work also develops an iterative search algorithm which combines the Gauss-Newton algorithm and a type of generalized expectation-maximization (EM) algorithm, namely the expectation-conditional maximization (ECM) algorithm, for maximum likelihood estimation of the parameters. We analyze the accuracy of the covariance function approximations for the spatial autoregressive-moving average (ARMA) models analyzed in Vecchia (1988) and illustrate the performance of our iterative search algorithm in obtaining the maximum likelihood estimates of the model parameters on simulated and actual data.
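The covariance-by-inverse-transform idea can be checked in one dimension (the paper treats 2-D processes; this toy uses a spectral density with a known closed-form covariance): a quadrature approximation of the inverse Fourier integral reproduces (pi/a)*exp(-a|x|) for f(w) = 1/(w^2 + a^2).

```python
import numpy as np

# Approximate C(x) = integral of f(w) * exp(i*w*x) dw by a Riemann sum over
# a truncated frequency grid, the discrete analogue of inverting the
# spectral density. For this f the exact answer is (pi/a) * exp(-a*|x|).
a = 1.0
W, M = 200.0, 80001                     # frequency cutoff and grid points
w = np.linspace(-W, W, M)
f = 1.0 / (w**2 + a**2)                 # spectral density
dw = w[1] - w[0]

x = np.array([0.0, 0.5, 1.0, 2.0])
C = np.array([(f * np.cos(w * xi)).sum() * dw for xi in x])   # even integrand
C_exact = (np.pi / a) * np.exp(-a * np.abs(x))
```

The truncation error is governed by the spectral tail beyond the cutoff `W`, which is the kind of accuracy question the paper analyzes for its 2-D approximations.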

12.
The paper presents a computationally efficient algorithm to integrate a probabilistic, non-Gaussian parameter estimation approach for nonlinear finite element models with the performance-based earthquake engineering (PBEE) framework for accurate performance evaluations of instrumented civil infrastructures. The algorithm first utilizes a minimum variance framework to fuse predictions from a numerical model of a civil infrastructure with its measured behavior during a past earthquake, updating the parameters of the numerical model, which is then used for performance prediction during future earthquakes. A nonproduct quadrature rule, based on the conjugate unscented transformation, forms an enabling tool to drive the computationally efficient model prediction, model-data fusion, and performance evaluation. The algorithm is illustrated and validated on the Meloland Road overpass, a heavily instrumented highway bridge in El Centro, CA, which experienced three moderate earthquake events in the past. The benefits of integrating measurement data into the PBEE framework are highlighted by comparing the damage fragilities and annual damage probabilities of the bridge estimated using the presented algorithm with those estimated using the conventional PBEE approach.

13.
Summary  Studies of error propagation in geodetic networks of an absolute type have already been carried out by several authors using various mathematical techniques. The geodetic elasticity theory relies on a continuation of the actual, discrete network. The traditional observation and normal equation matrices are substituted by partial differential equations with corresponding boundary conditions. The continuous approach reflects only the global error behaviour, as opposed to the discrete case, and produces only asymptotic results. An advantage of the method is that we may directly profit from existing mathematical knowledge. The fundamental solution of the partial differential equations acts as a formal covariance function and yields the best linear unbiased estimates for estimable functions of the adjustment parameters. Levelling networks and networks with distance and azimuth measurements are studied in this framework. Invited paper presented at the International Symposium on Optimisation of Design and Computation of Control Networks, Sopron, July 4–8, 1977.

14.
Research on an iteratively optimized network shortest-path ray-tracing method
The network shortest-path ray-tracing algorithm represents the seismic wave propagation path by line segments connecting preset grid nodes; when the grid nodes are sparse, the resulting ray path zigzags and the computed travel time deviates substantially from the true travel time. Building on the network shortest-path algorithm, this paper proposes a ray-tracing algorithm that combines an iterative method with the network shortest path: the ray path obtained by the shortest-path algorithm is refined by the iterative method, and the iteration itself is modified, thereby overcoming the shortcoming of shortest-path ray tracing and greatly improving the accuracy of the computed minimum travel time and ray path.
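The shortest-path stage can be sketched with Dijkstra's algorithm over an 8-connected grid in a uniform medium (illustrative; grid size and velocity are made up): the graph path zigzags, so the network travel time slightly overestimates the straight-ray time, which is exactly the bias the iterative refinement is meant to remove.

```python
import heapq
import numpy as np

def grid_traveltime(n, v, src, dst):
    """Dijkstra travel time on an n x n grid of nodes with unit spacing,
    8-connected neighbors, and constant velocity v."""
    h = np.sqrt(2.0)
    nbrs = [(-1, -1, h), (-1, 0, 1.0), (-1, 1, h), (0, -1, 1.0),
            (0, 1, 1.0), (1, -1, h), (1, 0, 1.0), (1, 1, h)]
    dist = {src: 0.0}
    pq = [(0.0, src)]
    done = set()
    while pq:
        t, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        if u == dst:
            return t
        for di, dj, step in nbrs:
            i, j = u[0] + di, u[1] + dj
            if 0 <= i < n and 0 <= j < n:
                nt = t + step / v
                if nt < dist.get((i, j), np.inf):
                    dist[(i, j)] = nt
                    heapq.heappush(pq, (nt, (i, j)))
    return np.inf

t_net = grid_traveltime(11, 2.0, (0, 0), (10, 4))   # zigzag network path
t_true = np.sqrt(10**2 + 4**2) / 2.0                # straight ray, same medium
```

Here `t_net` equals (4*sqrt(2) + 6)/2, strictly larger than `t_true`; denser grids or added node connections shrink the gap, while the paper's iterative step removes it by locally straightening the path.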

15.
Radar estimates of rainfall are being increasingly applied to flood forecasting applications. Errors are inherent both in the process of estimating rainfall from radar and in the modelling of the rainfall–runoff transformation. The study aims at building a framework for the assessment of uncertainty that is consistent with the limitations of the model and data available and that allows a direct quantitative comparison between model predictions obtained by using radar and raingauge rainfall inputs. The study uses radar data from a mountainous region in northern Italy where complex topography amplifies radar errors due to radar beam occlusion and the variability of precipitation with height. These errors, together with other error sources, are adjusted by applying a radar rainfall estimation algorithm. Radar rainfall estimates, adjusted and not, are used as an input to TOPMODEL for flood simulation over the Posina catchment (116 km²). Hydrological model parameter uncertainty is explicitly accounted for by use of the GLUE (Generalized Likelihood Uncertainty Estimation) methodology. Statistics are proposed to evaluate both the wideness of the uncertainty limits and the percentage of observations which fall within the uncertainty bounds. Results show the critical importance of proper adjustment of radar estimates and of the use of radar estimates as close to the ground as possible. Uncertainties affecting runoff predictions from adjusted radar data are close to those obtained by using a dense raingauge network, at least for the lowest radar observations available. Copyright © 2004 John Wiley & Sons, Ltd.
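The two bound statistics mentioned above can be sketched on synthetic data (equal likelihood weights for brevity, so this is only the scoring step, not the full GLUE methodology):

```python
import numpy as np

rng = np.random.default_rng(7)
nobs, nsets = 100, 500
signal = np.sin(np.linspace(0.0, 6.0, nobs))
obs = signal + rng.normal(0, 0.10, nobs)                 # "observed" series
# Simulations from 500 behavioural parameter sets (toy stand-in for TOPMODEL runs).
sims = signal[None, :] + rng.normal(0, 0.15, (nsets, nobs))

lo = np.quantile(sims, 0.05, axis=0)                     # lower prediction bound
hi = np.quantile(sims, 0.95, axis=0)                     # upper prediction bound
width = (hi - lo).mean()                                 # wideness of the limits
coverage = ((obs >= lo) & (obs <= hi)).mean()            # fraction of obs inside
```

In the full method each simulation would be weighted by its likelihood before taking the quantiles; a good uncertainty description keeps `width` small while `coverage` stays near the nominal 90%.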

16.
A fast ray-tracing method for wave propagation in three-dimensional sites
Based on Snell's law and Fermat's principle, this paper studies two-point ray tracing for arbitrary three-dimensional interfaces. Starting from the fact that a single ray keeps the same ray parameter, a first-order approximation formula is derived for computing the reflection (refraction) points at arbitrary interfaces. Combined with an iterative technique, a computational scheme for iterative ray tracing under three-dimensional site conditions is given, and three-dimensional ray-tracing simulations are carried out. The computations show that the method is very fast and that its accuracy can be made to meet requirements as needed.

17.
To advance the practical use of three-dimensional magnetotelluric (MT) inversion, this paper implements 3-D MT inversion with topography based on the L-BFGS algorithm. The Tikhonov-regularized objective function for 3-D MT inversion is first derived, together with the approximate expression and computation of the inverse Hessian matrix. A unified covariance-matrix expression is then designed that keeps the air resistivity fixed while preserving the model smoothness constraint, which solves the inversion-with-topography problem. The inversion adopts a regularization-factor cooling scheme and a step-length search strategy based on the Wolfe conditions, improving stability. The developed algorithm is applied to synthetic data from several geoelectric models with topography (a single anomaly beneath peak topography, and a checkerboard model under peak-valley topography) and compared with an existing 3-D MT inversion code (ModEM), verifying its correctness and reliability. Finally, the algorithm is applied to measured MT data from a mountainous area in South China, yielding the 3-D electrical structure of the area: a resistive basement, a continuous conductive unconformity and relatively conductive media in the middle, and a resistive cover at shallow depth, further demonstrating the practicality of the algorithm.
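The L-BFGS-driven Tikhonov minimization can be sketched on a generic linear toy problem (using SciPy's L-BFGS-B implementation; this is not the authors' 3-D magnetotelluric code, which additionally handles topography, fixed air cells, and cooling of the regularization factor):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
G = rng.normal(size=(40, 15))                  # toy linear forward operator
m_true = np.sin(np.linspace(0.0, np.pi, 15))   # smooth "model"
d = G @ m_true + rng.normal(0, 0.01, 40)       # noisy data
L = np.diff(np.eye(15), axis=0)                # first-difference smoothing operator
lam = 0.1                                      # regularization factor

def phi(m):
    # Tikhonov objective: data misfit plus roughness penalty.
    r, s = G @ m - d, L @ m
    return r @ r + lam * (s @ s)

def grad(m):
    # Analytic gradient; L-BFGS needs only phi and grad, never the Hessian.
    return 2.0 * G.T @ (G @ m - d) + 2.0 * lam * L.T @ (L @ m)

res = minimize(phi, np.zeros(15), jac=grad, method="L-BFGS-B")
m_hat = res.x
```

L-BFGS builds its inverse-Hessian approximation from recent gradient pairs, which is what makes it attractive for large 3-D inversions where the true Hessian is far too big to form.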

18.
Dual-energy computed tomography (DECT) plays an important role in advanced imaging applications owing to its material-decomposition capability. Image-domain decomposition applies a linear matrix inversion directly to the CT images, but the decomposed material images are severely degraded by noise and artifacts. Although various regularization methods have been proposed to address this problem, they still face two challenges: tedious parameter tuning and loss of image detail caused by over-smoothing. To this end, this paper proposes an iterative-residual-network-based …

19.
An important stage in two-dimensional magnetotelluric modelling is the calculation of the Earth's response functions for an assumed conductivity model and the calculation of the associated Jacobian relating those response functions to the model parameters. The efficiency of the calculation of the Jacobian will affect the efficiency of the inversion modelling. Rodi (1976) produced all the Jacobian elements by inverting a single matrix and using an approximate first-order algorithm. Since only one inverse matrix required calculation, the procedure sped up the inversion. An iterative scheme to improve the approximation to the Jacobian information is presented in this paper. While this scheme takes a little longer than Rodi's algorithm, it enables a more accurate determination of the Jacobian information. It is found that the Jacobian elements can be produced in 10% of the time required to calculate an inverse matrix or to calculate a 2D starting model. A modification of the algorithm can further be used to improve the accuracy of the original inverse matrix calculated in a 2D finite difference program and hence the solution this program produces. The convergence of the iteration scheme is found to be related both to the originally calculated inverse matrix and to the change in the newly formed matrix arising from perturbation of the model parameter. A ridge regression inverse algorithm is used in conjunction with the iterative scheme for forward modelling described in this paper to produce a 2D conductivity section from field data.

20.
The paper presents a novel approach to the setup of a Kalman filter by using an automatic calibration framework for estimation of the covariance matrices. The calibration consists of two sequential steps: (1) automatic calibration of a set of covariance parameters to optimize the performance of the system and (2) adjustment of the model and observation variances to provide an uncertainty analysis relying on the data instead of ad hoc covariance values. The method is applied to a twin-test experiment with a groundwater model and a colored-noise Kalman filter. The filter is implemented in an ensemble framework. It is demonstrated that lattice sampling is preferable to the usual Monte Carlo simulation because its ability to preserve the theoretical mean reduces the size of the ensemble needed. The resulting Kalman filter proves to be efficient in correcting dynamic error and bias over the whole domain studied. The uncertainty analysis provides a reliable estimate of the error in the neighborhood of assimilation points, but the simplicity of the covariance models leads to underestimation of the errors far from assimilation points.
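A minimal ensemble Kalman filter analysis step shows the kind of update such a filter performs (illustrative numbers; the paper's filter additionally uses colored noise and calibrated covariances):

```python
import numpy as np

rng = np.random.default_rng(5)
Ne, nx = 200, 3
truth = np.array([1.0, 2.0, 3.0])
ens = truth[:, None] + rng.normal(0, 1.0, (nx, Ne))  # forecast ensemble
H = np.array([[1.0, 0.0, 0.0]])                      # observe first component
R = np.array([[0.01]])                               # observation error variance
y = H @ truth                                        # (noise-free obs for the demo)

A = ens - ens.mean(axis=1, keepdims=True)
Pf = A @ A.T / (Ne - 1)                              # ensemble forecast covariance
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)       # Kalman gain
yp = y[:, None] + rng.normal(0, np.sqrt(R[0, 0]), (1, Ne))  # perturbed obs
ens_a = ens + K @ (yp - H @ ens)                     # analysis ensemble
```

The analysis mean of the observed component is pulled to the observation and its ensemble spread collapses toward the observation error level; the paper's contribution is to calibrate the covariance parameters behind `Pf` and `R` automatically instead of choosing them ad hoc.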
