20 similar documents found; search time 156 ms
1.
2.
Edge enhancement of gravity anomalies and gravity gradient tensors based on improved small sub-domain filtering (in English)   Total citations: 1 (self: 0, other: 1)
To enhance the visual delineation of geological-body boundaries in images derived from gravity and magnetic data and to improve the accuracy of geological interpretation, an improved small sub-domain filtering method is applied to enhance gravity anomaly and gravity gradient tensor data. Based on the principle of sub-domain averaging within a sliding window, the method's ability to identify geological-body boundaries is examined for potential-field data contaminated with Gaussian white noise, for different window sizes, and for bodies whose boundaries extend in different directions. Model tests show that enhancing gravity gradient tensor data with the improved small sub-domain filter yields boundary shapes with less distortion and less sensitivity to the filter window size, to noise, and to the boundary orientation of the geological body; for deep sources, enlarging the filter window recovers the source boundaries well. A case study combining the gravity anomaly of the Hulin Basin, Heilongjiang Province, with computed gravity gradient tensors shows that the improved small sub-domain filter identifies the horizontal positions of faults better than the traditional small sub-domain filter.
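The sliding-window sub-domain averaging at the heart of this method can be illustrated with a minimal 1-D sketch (the function names and the 1-D simplification are mine, not the paper's): within each window, the sub-domain with the smallest variance supplies the output value, which suppresses noise without blurring edges.

```python
def subdomain_filter(signal, half=2):
    """Small-subdomain filter (1-D sketch): at each sample, compare the
    left and right sub-domains of the window and keep the mean of the
    one with the smaller variance, so step edges are not smeared."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    out = list(signal)
    for i in range(half, len(signal) - half):
        left = signal[i - half:i + 1]    # sub-domain ending at sample i
        right = signal[i:i + half + 1]   # sub-domain starting at sample i
        out[i] = mean(min(left, right, key=var))
    return out

# A step edge stays sharp: each side averages only its own sub-domain.
step = [0.0] * 6 + [1.0] * 6
print(subdomain_filter(step))
```

A 2-D implementation would compare four or eight rectangular sub-domains around each grid node in the same way.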
3.
There are two main classes of theories and methods for studying crustal deformation: geophysical-model methods and geometric methods. Because of datum errors, observation errors, and model errors, the two often give different results. To make rational use of both geometric and physical information, to control the influence of geometric observation errors and physical model errors on the estimated deformation parameters, and to balance the contributions of the two types of information, this paper proposes an adaptive-filtering method for integrated estimation of deformation parameters. Robust equivalent weights control the influence of outlying geometric observations; an adaptive factor balances the contributions of geometric observations and geophysical-model information to the estimated deformation parameters; and high-precision IGS station velocities define the datum for local deformation. Computations with a real GPS monitoring network show that this hybrid estimation strategy makes full use of repeated local geometric observations to mitigate systematic deformation errors introduced by the geophysical model and improves the accuracy of the estimated deformation parameters.
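A hedged scalar sketch of the hybrid estimation idea (the Huber-type weight function and all parameter values are illustrative assumptions, not the paper's formulas): a robust equivalent weight derived from the observation-model discrepancy serves as the adaptive factor that down-weights the geophysical model when it conflicts with the geometric observation.

```python
def equivalent_weight(residual, sigma, k=1.5):
    """Huber-type equivalent weight: 1 inside the k-sigma band,
    shrinking as |residual| grows, to damp gross errors."""
    t = abs(residual) / sigma
    return 1.0 if t <= k else k / t

def adaptive_fuse(obs, model, sigma_obs, sigma_model, k=1.5):
    """1-D sketch of the hybrid estimate: weighted mean of a geometric
    observation and a geophysical-model prediction, where the adaptive
    factor alpha down-weights the model when the two disagree."""
    s = (sigma_obs ** 2 + sigma_model ** 2) ** 0.5
    alpha = equivalent_weight(obs - model, s, k)   # adaptive factor in (0, 1]
    p_obs = 1.0 / sigma_obs ** 2                   # weight of observation
    p_mod = alpha / sigma_model ** 2               # damped weight of model
    return (p_obs * obs + p_mod * model) / (p_obs + p_mod)

# consistent data: behaves like an ordinary weighted mean
print(adaptive_fuse(1.0, 1.0, 0.1, 0.1))           # -> 1.0
# model carries a systematic error: estimate leans toward the observation
print(round(adaptive_fuse(1.0, 2.0, 0.1, 0.1), 3))
```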
4.
This paper establishes a data-processing model and algorithm for deriving the vertical gravity gradient from FG5 absolute gravimeter measurements. Stacking and averaging the fitting residuals of the drop distance over many free-fall experiments reveals clear colored noise in the drop-distance observations. By modeling this colored noise and selecting reliable drop segments according to the remaining residuals, the vertical gravity gradient at the station is solved. The proposed procedure is applied to FG5-214 observations at two stations; taking vertical gravity gradients measured with relative gravimeters as reference values, the results improve markedly over a solution that ignores the colored noise and selects drop segments empirically.
5.
By combining the global gravity field model EGM2008, airborne gravity disturbance data, and residual terrain model (RTM) data, the post-processed gravity disturbances along flight lines collected by the GT-2A airborne gravimetry system over the Maowusu survey area are processed with spectral-domain (2-D FFT) and space-domain (Stokes numerical integration) algorithms to construct the full tensor of airborne gravity gradient disturbances for the area. (1) Continuation tests on the residual airborne gravity disturbances show that, after downward continuation to the geoid and upward continuation back to flight altitude, the standard deviation of the differences from the original data is 1.0078 mGal; after shrinking the computation area to allow for edge effects, it decreases to 0.1269 mGal. (2) Gradient disturbances computed from the residual gravity disturbances (at the original flight altitude and after downward continuation) by the two schemes differ over the study area with a maximum standard deviation of 6.4798 E (Γ_yz component) and a minimum of 2.6968 E (Γ_xy component); after shrinking the computation area the maximum difference is 1.8307 E (Γ_zz) and the minimum 0.7223 E (Γ_yz). The approach provides a reference for the future independent construction of an airborne gravity gradiometry calibration field in China.
6.
Progress in Geophysics (《地球物理学进展》), 2017, No. 4
Multiscale analysis with radial basis functions can decompose the Earth's gravity field and extract detailed geophysical information. The decomposition is usually computed by discrete integration, but the signal reconstructed after decomposition does not fully match the original gravity field signal: signal leakage occurs during the decomposition. To address this, building on least squares and variance component estimation, this paper proposes a new algorithm (the direct method) that solves the basis-function coefficients directly at each scale, effectively reducing the signal leakage. Using DTU13 gravity anomaly data over the South China Sea, both the discrete-integration method and the direct method are applied to a multiscale decomposition of the gravity anomalies. The results show that the direct method reduces the leakage error at the five scales by about 39% to 79% relative to discrete integration; its total leakage error of ±1.12 mGal is clearly smaller than the ±4.04 mGal of the discrete-integration method, so the direct method performs better.
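The "direct method" idea — solving the basis-function coefficients at each scale by least squares rather than by discrete quadrature — can be sketched for Gaussian radial basis functions in one dimension (all names and the toy data are illustrative; the paper works with spherical radial basis functions and variance component estimation):

```python
from math import exp

def gauss_rbf(x, c, s=1.0):
    """Gaussian radial basis function centred at c with width s."""
    return exp(-((x - c) / s) ** 2)

def solve(A, b):
    """Dense linear solve by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c2 in range(col, n + 1):
                M[r][c2] -= f * M[col][c2]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c2] * x[c2] for c2 in range(r + 1, n))) / M[r][r]
    return x

def fit_rbf_direct(xs, ys, centres, s=1.0):
    """'Direct method' in the spirit of the abstract: obtain the basis
    coefficients from the least-squares normal equations G'G a = G'y,
    so the reconstruction reproduces the signal without quadrature leakage."""
    G = [[gauss_rbf(x, c, s) for c in centres] for x in xs]
    n = len(centres)
    AtA = [[sum(g[a] * g[b] for g in G) for b in range(n)] for a in range(n)]
    Atb = [sum(G[i][a] * ys[i] for i in range(len(xs))) for a in range(n)]
    return solve(AtA, Atb)

# a synthetic field built from known coefficients is recovered exactly
centres = [0.0, 1.0, 2.0]
true_a = [1.0, -2.0, 0.5]
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [sum(a * gauss_rbf(x, c) for a, c in zip(true_a, centres)) for x in xs]
a_hat = fit_rbf_direct(xs, ys, centres)
print([round(a, 6) for a in a_hat])      # -> [1.0, -2.0, 0.5]
```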
7.
8.
The observables of low-low satellite-to-satellite tracking are the inter-satellite range or range rate between two low-orbit satellites; the inter-satellite acceleration is derived from the range rate by numerical differentiation. Using inter-satellite acceleration as the observable avoids solving the variational equations of satellite motion and simplifies the construction of the observation equations, but numerical differentiation amplifies the observation noise and thus degrades the accuracy of the recovered gravity potential. To quantify the accuracy of the inter-satellite acceleration mode, this paper analyzes, and verifies by simulation, the accuracy of numerical differentiation formulas for computing inter-satellite accelerations, derives observation equations of general form based on inter-satellite acceleration, and simulates gravity potential models from such data. The results show that the solution accuracy of the inter-satellite acceleration mode is clearly lower than that of the inter-satellite velocity (range-rate) mode.
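The noise amplification caused by numerically differentiating the range rate can be checked with a short simulation (the sampling interval and noise level are illustrative, not the paper's values): central differencing of white noise with standard deviation sigma yields acceleration noise of roughly sigma / (sqrt(2) * dt).

```python
import random

def central_diff(v, dt):
    """Central-difference numerical differentiation of a sampled series."""
    return [(v[i + 1] - v[i - 1]) / (2 * dt) for i in range(1, len(v) - 1)]

def std(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

random.seed(1)
dt, sigma = 5.0, 1e-7          # 5 s sampling, 1e-7 m/s range-rate noise (illustrative)
noise = [random.gauss(0.0, sigma) for _ in range(20000)]
amplified = std(central_diff(noise, dt))

# white noise of std sigma maps to std sigma/(sqrt(2)*dt) after central differencing
print(amplified, sigma / (2 ** 0.5 * dt))
```

With dt much smaller than 1 s the amplification factor 1/(sqrt(2)*dt) grows accordingly, which is the mechanism degrading the acceleration-mode solution in the abstract.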
9.
Science China: Earth Sciences (《中国科学:地球科学》), 2015, No. 2
Aliasing errors from time-variable signal models are currently the main limitation on the accuracy of time-variable gravity field recovery. This paper presents three satellite formations suitable for gravity missions that include observations in different directions — GRACE-type, Pendulum-type, and n-s-Cartwheel-type — designs two schemes, and investigates by simulation the feasibility of using satellite formations to remove the aliasing error of ocean-tide models. The results show that when model aliasing errors are ignored, the n-s-Cartwheel formation provides the best conditions for gravity field recovery, improving the solution accuracy by up to 43% compared with the GRACE-type formation. When ocean-tide model aliasing becomes the dominant error source, recovering the gravity field from formation data by the dynamic method cannot remove the aliasing or improve the accuracy: the Cartwheel-type formation, which includes radial observations, is more sensitive to high-degree variations of the gravity field, so the recovered field contains more high-frequency ocean-tide model error and the error grows sharply.
10.
An iterative method for determining Earth gravity field models by combining different types of gravity measurements   Total citations: 1 (self: 0, other: 1)
Different gravity measurements carry information about the Earth's gravity field in different spectral bands, so recovering a higher-accuracy gravity field model requires combining different types of gravity data. Taking surface gravity anomalies Δg as an example, the basic formulas of an iterative method for inverting Earth gravity field models from combined gravity data are derived and concrete implementation steps are given. Global gravity anomaly Δg data and disturbing potential T data are then used, via the iterative method, to further refine a gravity field model computed from satellite gravity gradiometry (SGG) data. The results show that the initial SGG model and the model refined with global Δg data reach cumulative geoid errors of 1.128 cm and 0.048 cm and cumulative gravity anomaly errors of 0.416 mGal and 0.018 mGal, respectively; after further refinement with global disturbing potential T data, the cumulative geoid error reaches 0.043 cm and the cumulative gravity anomaly error 0.016 mGal.
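The iterative refinement scheme — mapping the current model to a data type, forming residuals, and feeding them back into the coefficients — has the flavor of a Landweber iteration on a linear system. A generic sketch (the tiny matrix stands in for the real spherical-harmonic observation equations; all values are illustrative):

```python
def landweber(A, y, x0, omega, iters):
    """Landweber-type iteration: repeatedly map the current model to the
    data space, form residuals, and feed them back to update the model,
    x <- x + omega * A' (y - A x).  This mirrors the abstract's scheme of
    refining an SGG-based model with a second gravity data type."""
    x = list(x0)
    m, n = len(A), len(A[0])
    for _ in range(iters):
        r = [y[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        for j in range(n):
            x[j] += omega * sum(A[i][j] * r[i] for i in range(m))
    return x

A = [[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # stand-in observation equations
y = [2.0, 3.0, 4.0]                       # consistent with x_true = [1, 3]
x = landweber(A, y, [0.0, 0.0], omega=0.15, iters=200)
print([round(v, 3) for v in x])           # -> [1.0, 3.0]
```

Convergence requires omega < 2 / lambda_max(A'A); here that bound is about 0.38, so omega = 0.15 is safe.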
11.
Evaluating the utility of the ensemble transform Kalman filter for adaptive sampling when updating a hydrodynamic model   Total citations: 1 (self: 0, other: 1)
This paper compares two Monte Carlo sequential data assimilation methods based on the Kalman filter, for estimating the effect of measurements on simulations of state error variance made by a one-dimensional hydrodynamic model. The first method used an ensemble Kalman filter (EnKF) to update state estimates, which were then used as initial conditions for further simulations. The second method used an ensemble transform Kalman filter (ETKF) to quickly estimate the effect of measurement error covariance on forecast error covariance without the need to re-run the simulation model. The ETKF gave an unbiased estimate of EnKF analysed error variance, although differences in the treatment of measurement errors meant the results were not identical. Estimates of forecast error variance could also be made, but their accuracy deteriorated as the time from measurements increased due in part to model non-linearity and the decreasing signal variance. The motivation behind the study was to assess the ability of the ETKF to target possible measurements, as part of an adaptive sampling framework, before they are assimilated by an EnKF-based forecasting model on the River Crouch, Essex, UK. The ETKF was found to be a useful tool for quickly estimating the error covariance expected after assimilating measurements into the hydrodynamic model. It, thus, provided a means of quantifying the ‘usefulness’ (in terms of error variance) of possible sampling schemes.
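A minimal scalar analysis step of the kind compared here can be sketched as follows (a perturbed-observation EnKF for a directly observed scalar state; all numbers are illustrative). The ETKF variant would instead transform the ensemble deterministically to achieve the same analysis covariance without perturbing the observations.

```python
import random

def enkf_update(ensemble, obs, obs_var, rng):
    """Stochastic EnKF analysis step for a scalar state with a direct
    observation: Kalman gain from the ensemble variance, with the
    observation perturbed independently for each member."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var_f = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = var_f / (var_f + obs_var)
    return [x + gain * (obs + rng.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

rng = random.Random(0)
prior = [rng.gauss(10.0, 2.0) for _ in range(500)]   # forecast ensemble
post = enkf_update(prior, 12.0, 1.0, rng)

# the analysis variance shrinks toward (1 - K) * P, which an ETKF
# would estimate deterministically without re-running the model
print(round(var(prior), 2), round(var(post), 2))
```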
12.
Gary Barnes, Geophysical Prospecting, 2014, 62(3): 646-657
When anomalous gravity gradient signals provide a large signal-to-noise ratio, airborne and marine surveys can be considered with wide line spacing. In these cases, spatial resolution and sampling requirements become the limiting factors for specifying the line spacing, rather than anomaly detectability. This situation is analysed by generating known signals from a geological model and then sub-sampling them using a simulated airborne gravity gradient survey with a line spacing much wider than the characteristic anomaly size. The data are processed using an equivalent source inversion, which is used subsequently to predict and grid the field in-between the survey lines by means of forward calculations. Spatial and spectral error analysis is used to quantify the accuracy and resolution of the processed data and the advantages of acquiring multiple gravity gradient components are demonstrated. With measurements of the full tensor along survey lines spaced at 4 × 4 km, it is shown that the vertical gravity gradient can be reconstructed accurately over a bandwidth of 2 km with spatial root-mean-square errors less than 30%. A real airborne full-tensor gravity gradient survey is presented to confirm the synthetic analysis in a practical situation.
13.
The inertial navigation errors of a submarine accumulate over time; aided navigation with gravity anomaly data can correct this drift. Using a 2′×2′ gravity anomaly database as the base information, simulations were run for a selected region with a Kalman filtering algorithm, applying a new treatment of the gain matrix and the innovation sequence in the Kalman filter. The results show that in regions where the gravity anomaly varies strongly, gravity anomalies can be used to aid submarine navigation.
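A heavily simplified 1-D sketch of gravity-anomaly-aided drift estimation (a linear anomaly map, a scalar drift state, and all parameter values are my illustrative assumptions, not the paper's 2′×2′ database or filter design):

```python
import random

def gravity_aided_kf(positions_ins, gravity_meas, anomaly_map, grad, q, r):
    """Scalar sketch: estimate the INS position drift x from gravity-anomaly
    innovations.  Locally linearized measurement model:
        g_meas ~ anomaly_map(p_ins) - grad * x + noise,
    so the method only works where the anomaly gradient is significant."""
    x, p = 0.0, 1.0                      # drift estimate and its variance
    for p_ins, g in zip(positions_ins, gravity_meas):
        p += q                           # drift uncertainty grows between fixes
        h = -grad                        # measurement Jacobian w.r.t. drift
        innov = g - (anomaly_map(p_ins) + h * x)
        s = h * h * p + r                # innovation variance
        k = p * h / s                    # Kalman gain
        x += k * innov
        p *= 1 - k * h
    return x

rng = random.Random(7)
def anomaly_map(p):
    return 5.0 * p                       # steep linear anomaly, mGal/km (illustrative)
drift = 0.5                              # true INS position error (km)
pos = [0.1 * i for i in range(50)]       # INS-indicated positions
meas = [anomaly_map(p - drift) + rng.gauss(0.0, 0.1) for p in pos]
est = gravity_aided_kf(pos, meas, anomaly_map, 5.0, q=1e-4, r=0.01)
print(round(est, 2))                     # close to the true drift of 0.5
```

In a flat-gravity region the Jacobian vanishes and the innovations carry no position information, which matches the abstract's conclusion that aiding works where the anomaly varies strongly.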
14.
The paper presents a novel approach to the setup of a Kalman filter by using an automatic calibration framework for estimation of the covariance matrices. The calibration consists of two sequential steps: (1) Automatic calibration of a set of covariance parameters to optimize the performance of the system and (2) adjustment of the model and observation variance to provide an uncertainty analysis relying on the data instead of ad-hoc covariance values. The method is applied to a twin-test experiment with a groundwater model and a colored noise Kalman filter. The filter is implemented in an ensemble framework. It is demonstrated that lattice sampling is preferable to the usual Monte Carlo simulation because its ability to preserve the theoretical mean reduces the size of the ensemble needed. The resulting Kalman filter proves to be efficient in correcting dynamic error and bias over the whole domain studied. The uncertainty analysis provides a reliable estimate of the error in the neighborhood of assimilation points but the simplicity of the covariance models leads to underestimation of the errors far from assimilation points.
15.
The Bayesian probabilistic approach is proposed to estimate the process noise and measurement noise parameters for a Kalman filter. With state vectors and covariance matrices estimated by the Kalman filter, the likelihood of the measurements can be constructed as a function of the process noise and measurement noise parameters. By maximizing the likelihood function with respect to these noise parameters, the optimal values can be obtained. Furthermore, the Bayesian probabilistic approach allows the associated uncertainty to be quantified. Examples using a single-degree-of-freedom system and a ten-story building illustrate the proposed method. The effect on the performance of the Kalman filter due to the selection of the process noise and measurement noise parameters was demonstrated. The optimal values of the noise parameters were found to be close to the actual values in the sense that the actual parameters were in the region with significant probability density. Through these examples, the Bayesian approach was shown to have the capability to provide accurate estimates of the noise parameters of the Kalman filter, and hence for state estimation.
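The core of the approach — evaluating the innovation likelihood of a Kalman filter as a function of the noise parameters and maximizing it — can be sketched for a scalar random-walk model (a grid search stands in for a proper optimizer; all values are illustrative):

```python
import math
import random

def run_kf(zs, q, r):
    """Scalar random-walk Kalman filter.  Returns the innovation
    log-likelihood -- the quantity maximized over the noise parameters."""
    x, p, ll = 0.0, 1.0, 0.0
    for z in zs:
        p += q                           # predict: random-walk state
        s = p + r                        # innovation variance
        v = z - x                        # innovation
        ll += -0.5 * (math.log(2 * math.pi * s) + v * v / s)
        k = p / s
        x += k * v                       # update
        p *= 1 - k
    return ll

# simulate a random walk with process std 0.2 (q = 0.04) and measurement std 1
rng = random.Random(3)
truth, zs = 0.0, []
for _ in range(2000):
    truth += rng.gauss(0.0, 0.2)
    zs.append(truth + rng.gauss(0.0, 1.0))

# maximize the likelihood over a grid of candidate process-noise variances
grid = [0.01 * i for i in range(1, 21)]
best_q = max(grid, key=lambda q: run_kf(zs, q, 1.0))
print(best_q)                            # should land near the true value 0.04
```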
16.
A Least-squares Window Curves Method for Interpretation of Magnetic Anomalies Caused by Dipping Dikes   Total citations: 1 (self: 0, other: 1)
E. M. Abdelrahman, E. R. Abo-Ezz, K. S. Soliman, T. M. El-Araby, K. S. Essa, Pure and Applied Geophysics, 2007, 164(5): 1027-1044
We have developed a least-squares method to determine simultaneously the depth and the width of a buried thick dipping dike from residualized magnetic data using filters of successive window lengths. The method involves using a relationship between the depth and the half-width of the source and a combination of windowed observations. The relationship represents a family of curves (window curves). For a fixed window length, the depth is determined for each half-width value by solving one nonlinear equation of the form f(z) = 0 using the least-squares method. The computed depths are plotted against the width values representing a continuous curve. The solution for the depth and the width of the buried dike is read at the common intersection of the window curves. The method involves using a dike model convolved with the same moving average filter as applied to the observed data. As a result, this method can be applied to residuals as well as to measured magnetic data. Procedures are also formulated to estimate the amplitude coefficient and the index parameter. The method is applied to theoretical data with and without random errors. The validity of the method is tested on airborne magnetic data from Canada and on a vertical component magnetic anomaly from Turkey. In all cases examined, the model parameters obtained are in good agreement with the actual ones and with those given in the published literature.
17.
18.
Transformation between gravimetric and GPS/levelling-derived geoids using additional gravity information   Total citations: 1 (self: 0, other: 1)
The transformation from the gravimetric to the GPS/levelling-derived geoid using additional gravity information for the covariance function of geoid height differences has been investigated in a test area in south-western Canada. A “corrector surface” model, which accounts for datum inconsistencies, long-wavelength geoid errors, vertical network distortions and GPS errors, has been constructed using least-squares collocation. The local covariance function of geoid height differences is usually obtained from residual values between the GPS/levelling and gravimetric geoid heights after the elimination of all known systematic distortions. If additional gravity data (in the form of gravity anomalies) are available, the covariance function of geoid height differences can be determined by the following steps: (1) transforming the GPS/levelling-derived geoid heights into gravity anomalies; (2) forming differences between the anomalies computed in step 1 and the given gravity anomalies; (3) determining the parameters of the local covariance function of the gravity anomaly differences; (4) constructing an analytical covariance model for the geoid height differences from the covariance function of the gravity anomaly differences using the parameters derived in step 3. The advantage of the proposed method stems from the great number of gravity data used to derive the empirical covariance function. A comparison with the least-squares adjustment shows that the standard deviation of the residuals of the predicted geoid height differences with respect to the control-point values decreases by 2.4 cm.
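Once the analytical covariance model of step 4 is available, the prediction itself is ordinary least-squares collocation, s_hat = C_sy (C_yy + D)^-1 y. A minimal 1-D sketch with a Gaussian covariance model (the model form and all parameter values are illustrative assumptions, not the paper's fitted covariance):

```python
import math

def cov(d, c0=4.0, L=30.0):
    """Analytical covariance model for geoid-height differences:
    C(d) = C0 * exp(-(d/L)^2)  (Gaussian; parameters illustrative)."""
    return c0 * math.exp(-(d / L) ** 2)

def solve(A, b):
    """Dense linear solve by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c2 in range(col, n + 1):
                M[r][c2] -= f * M[col][c2]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c2] * x[c2] for c2 in range(r + 1, n))) / M[r][r]
    return x

def lsc_predict(pts, vals, noise_var, new_pt):
    """Least-squares collocation prediction:
    s_hat = C_sy (C_yy + D)^-1 y, with D a diagonal noise matrix."""
    n = len(pts)
    Cyy = [[cov(abs(pts[i] - pts[j])) + (noise_var if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    w = solve(Cyy, vals)
    return sum(cov(abs(new_pt - pts[i])) * w[i] for i in range(n))

# interpolate a geoid-height difference midway between two control points
pred = lsc_predict([-10.0, 10.0], [1.0, 1.0], 0.01, 0.0)
print(round(pred, 3))
```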
19.
20.
An effective method based on dynamic sampling for data assimilation in a global wave model   Total citations: 3 (self: 2, other: 1)
The ensemble Kalman filter (EnKF) performs well because the covariance of the background error varies in time: it provides a dynamic estimate of the background error and represents its statistical character reasonably. However, the model ensemble in the EnKF incurs a high computational cost. In this study, two methods, referred to as the static and dynamic sampling methods, are proposed to retain good performance while reducing the computational cost. The ensemble adjustment Kalman filter (EAKF) is used in a global surface wave model to examine the performance of the EnKF. The 24-h interval differences of the simulated significant wave height (SWH) within one year are used to compose the static samples of ensemble errors, and these errors are used to construct the ensemble states at each time observations are available. The same method of updating the model states in the EAKF is then applied to the ensemble states constructed by the static sampling method. The dynamic sampling method constructs the ensemble states in a similar way, but the period of simulated SWH used changes with time; here, the 7 days before and after the observation time are used. To examine the performance of the three schemes (EAKF, static sampling, and dynamic sampling), observations from the satellite Jason-2 in 2014 are assimilated into a global wave model, and observations from the satellite Saral are used for validation. The results indicate that the EAKF performs best, while the static sampling method is relatively worst. The dynamic sampling method improves the assimilation effect dramatically compared with the static sampling method, and its overall performance is close to that of the EAKF. At low latitudes, the dynamic sampling method has a slight advantage over the EAKF. In the dynamic and static sampling methods, only one wave model run is required, so their computational cost is reduced sharply. Given the performance of these three methods, the dynamic sampling method can be treated as an effective alternative to the EnKF, reducing the computational cost while providing good data assimilation performance.