Similar articles
20 similar articles found (search time: 579 ms)
1.
The LLL algorithm, introduced by Lenstra et al. (Math Ann 261:515–534, 1982), plays a key role in many fields of applied mathematics. In particular, it is used as an effective numerical tool for preconditioning the integer least-squares problems arising in high-precision geodetic positioning and Global Navigation Satellite Systems (GNSS). In 1992, Teunissen developed a method for solving these nearest-lattice point (NLP) problems. This method is referred to as Lambda (for Least-squares AMBiguity Decorrelation Adjustment). The preconditioning stage of Lambda corresponds to its decorrelation algorithm. From an epistemological point of view, the latter was devised through an innovative statistical approach completely independent of the LLL algorithm. Recent papers pointed out some similarities between the LLL algorithm and the Lambda-decorrelation algorithm. We try to clarify this point in the paper. We first introduce a parameter measuring the orthogonality defect of the integer basis in which the NLP problem is solved: the LLL-reduced basis of the LLL algorithm, or the $\Lambda$-basis of the Lambda method. With regard to this problem, the potential qualities of these bases can then be compared. The $\Lambda$-basis is built by working at the level of the variance-covariance matrix of the float solution, while the LLL-reduced basis is built by working at the level of its inverse. As a general rule, the orthogonality defect of the $\Lambda$-basis is greater than that of the corresponding LLL-reduced basis; these bases are however very close to one another. To specify this tight relationship, we present a method that provides the dual LLL-reduced basis of a given $\Lambda$-basis. As a consequence of this basic link, all the recent developments made on the LLL algorithm can be applied to the Lambda-decorrelation algorithm.
This point is illustrated in a concrete manner: we present a parallel $\Lambda$-type decorrelation algorithm derived from the parallel LLL algorithm of Luo and Qiao (Proceedings of the fourth international C* conference on computer science and software engineering. ACM Int Conf P Series. ACM Press, pp 93–101, 2012).
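The orthogonality-defect parameter the paper builds on has a standard closed form, delta(B) = (product of column norms of B) / |det B|: it equals 1 exactly for an orthogonal basis and grows as the basis skews. A minimal numpy sketch (the function name is illustrative, not from the paper):

```python
import numpy as np

def orthogonality_defect(B):
    """delta(B) = (product of column norms) / |det B|; equals 1.0
    exactly when the basis vectors are mutually orthogonal."""
    B = np.asarray(B, dtype=float)
    return float(np.prod(np.linalg.norm(B, axis=0)) / abs(np.linalg.det(B)))

print(orthogonality_defect(np.eye(3)))                   # 1.0
print(orthogonality_defect([[1.0, 1.0], [0.0, 0.001]]))  # ~1000: badly skewed basis
```

Either the $\Lambda$-basis or the LLL-reduced basis can be passed in as the columns of B, which is how the two preconditioners can be compared on the same footing.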

2.
We can map zenith wet delays onto precipitable water with a conversion factor, but in order to calculate the exact conversion factor, we must precisely calculate its key variable $T_\mathrm{m}$. Yao et al. (J Geod 86:1125–1135, 2012. doi:10.1007/s00190-012-0568-1) established the first-generation global $T_\mathrm{m}$ model (GTm-I) with ground-based radiosonde data, but due to the lack of radiosonde data at sea, the model appears to be abnormal in some areas. Given that sea surface temperature varies less than that on land, and that the GPT model and the Bevis $T_\mathrm{m}$–$T_\mathrm{s}$ relationship are accurate enough to describe the surface temperature and $T_\mathrm{m}$, this paper capitalizes on the GPT model and the Bevis $T_\mathrm{m}$–$T_\mathrm{s}$ relationship to provide simulated $T_\mathrm{m}$ at sea, as compensation for the lack of data. Combined with the $T_\mathrm{m}$ from radiosonde data, we recalculated the GTm model coefficients. The results show that this method not only improves the accuracy of the GTm model significantly at sea but also improves that on land, making the GTm model more stable and practically applicable.
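For illustration, the conversion chain described above (surface temperature to $T_\mathrm{m}$ via the Bevis relation, then $T_\mathrm{m}$ to the dimensionless factor mapping zenith wet delay onto precipitable water) can be sketched as follows. The refractivity constants are the commonly used Bevis et al. (1992) values; the whole block is an assumed textbook formulation, not the GTm model itself:

```python
# Assumed constants (Bevis et al. 1992 values, converted to SI units):
RHO_W = 1000.0   # density of liquid water, kg m^-3
R_V   = 461.5    # specific gas constant of water vapour, J kg^-1 K^-1
K2P   = 0.221    # k2' refractivity constant, K Pa^-1   (= 22.1 K hPa^-1)
K3    = 3739.0   # k3 refractivity constant, K^2 Pa^-1  (= 3.739e5 K^2 hPa^-1)

def tm_from_ts(ts_kelvin):
    """Bevis Tm-Ts relationship: Tm ~ 70.2 + 0.72 * Ts."""
    return 70.2 + 0.72 * ts_kelvin

def conversion_factor(tm):
    """Dimensionless factor Pi such that PW = Pi * ZWD."""
    return 1.0e6 / (RHO_W * R_V * (K3 / tm + K2P))

tm = tm_from_ts(288.15)              # 15 C surface temperature
pw = conversion_factor(tm) * 0.200   # a 20 cm zenith wet delay
print(conversion_factor(tm), pw)     # Pi ~ 0.16, PW ~ 0.032 m
```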

3.
For science applications of the gravity recovery and climate experiment (GRACE) monthly solutions, the GRACE estimates of \(C_{20}\) (or \(J_{2}\)) are typically replaced by the value determined from satellite laser ranging (SLR) due to an unexpectedly strong, clearly non-geophysical, variation at a period of \(\sim \)160 days. This signal has sometimes been referred to as a tide-like variation since the period is close to the perturbation period on the GRACE orbits due to the spherical harmonic coefficient pair \(C_{22}/S_{22}\) of S2 ocean tide. Errors in the S2 tide model used in GRACE data processing could produce a significant perturbation to the GRACE orbits, but it cannot contribute to the \(\sim \)160-day signal appearing in \(C_{20}\). Since the dominant contribution to the GRACE estimate of \(C_{20}\) is from the global positioning system tracking data, a time series of 138 monthly solutions up to degree and order 10 (\(10\times 10\)) were derived along with estimates of ocean tide parameters up to degree 6 for eight major tides. The results show that the \(\sim \)160-day signal remains in the \(C_{20}\) time series. Consequently, the anomalous signal in GRACE \(C_{20}\) cannot be attributed to aliasing from the errors in the S2 tide. A preliminary analysis of the cross-track forces acting on GRACE and the cross-track component of the accelerometer data suggests that a temperature-dependent systematic error in the accelerometer data could be a cause. Because a wide variety of science applications relies on the replacement values for \(C_{20}\), it is essential that the SLR estimates are as reliable as possible. An ongoing concern has been the influence of higher degree even zonal terms on the SLR estimates of \(C_{20}\), since only \(C_{20}\) and \(C_{40}\) are currently estimated. 
To investigate whether a better separation between \(C_{20}\) and the higher-degree terms could be achieved, several combinations of additional SLR satellites were investigated. In addition, a series of monthly gravity field solutions (\(60\times 60\)) were estimated from a combination of GRACE and SLR data. The results indicate that the combination of GRACE and SLR data might benefit the resonant orders in the GRACE-derived gravity fields, but it appears to degrade the recovery of the \(C_{20}\) variations. In fact, the results suggest that the poorer recovery of \(C_{40}\) by GRACE, where the annual variation is significantly underestimated, may be affecting the estimates of \(C_{20}\). Consequently, it appears appropriate to continue using the SLR-based estimates of \(C_{20}\), and possibly also \(C_{40}\), to augment the existing GRACE mission.

4.
The consistent estimation of terrestrial reference frames (TRF), celestial reference frames (CRF) and Earth orientation parameters (EOP) is still an open subject and offers a large field of investigations. Until now, source positions resulting from Very Long Baseline Interferometry (VLBI) observations are not routinely combined on the level of normal equations in the same way as is common practice for station coordinates and EOPs. The combination of source positions based on VLBI observations is now integrated in the IVS combination process. We present the studies carried out to evaluate the benefit of the combination compared to individual solutions. On the level of source time series, improved statistics regarding weighted root mean square have been found for the combination in comparison with the individual contributions. In total, 67 stations and 907 sources (including 291 ICRF2 defining sources) are included in the consistently generated CRF and TRF covering 30 years of VLBI contributions. The rotation angles \(A_1\), \(A_2\) and \(A_3\) relative to ICRF2 are −12.7, 51.7 and 1.8 \(\upmu\)as, the drifts \(D_\alpha\) and \(D_\delta\) are −67.2 and 19.1 \(\upmu\)as/rad and the bias \(B_\delta\) is 26.1 \(\upmu\)as. The comparison of the TRF solution with the IVS routinely combined quarterly TRF solution shows no significant impact on the TRF when the CRF is estimated consistently with the TRF. The root mean square value of the post-fit station coordinate residuals is 0.9 cm.

5.
We show that the current levels of accuracy being achieved for the precise orbit determination (POD) of low-Earth orbiters demonstrate the need for the self-consistent treatment of tidal variations in the geocenter. Our study uses as an example the POD of the OSTM/Jason-2 satellite altimeter mission based upon Global Positioning System (GPS) tracking data. Current GPS-based POD solutions are demonstrating root-mean-square (RMS) radial orbit accuracy and precision of \({<}1\) cm and 1 mm, respectively. Meanwhile, we show that the RMS of three-dimensional tidal geocenter variations is \({<}6\) mm, but can be as large as 15 mm, with the largest component along the Earth's spin axis. Our results demonstrate that GPS-based POD of Earth orbiters is best performed using GPS satellite orbit positions that are defined in a reference frame whose origin is at the center of mass of the entire Earth system, including the ocean tides. Errors of \({<}4\) mm (3D RMS) and \({<}2\) mm (radial RMS) are introduced into the GPS-based POD solutions for OSTM/Jason-2 when tidal geocenter variations are not treated consistently. Nevertheless, this inconsistent treatment is measurable in the OSTM/Jason-2 POD solutions and manifests through degraded post-fit tracking data residuals, orbit precision, and relative orbit accuracy. For the latter metric, sea surface height crossover variance is higher by \(6~\hbox {mm}^{2}\) when tidal geocenter variations are treated inconsistently.

6.
Well-credited and widely used ionospheric models, such as the International Reference Ionosphere (IRI) or NeQuick, describe the variation of the electron density with height by means of a piecewise profile tied to the F2-peak parameters: the electron density, $N_m\mathrm{F2}$, and the height, $h_m\mathrm{F2}$. Accurate values of these parameters are crucial for retrieving reliable electron density estimations from those models. When direct measurements of these parameters are not available, the models compute them using the so-called ITU-R database, which was established in the early 1960s. This paper presents a technique aimed at routinely updating the ITU-R database using radio occultation electron density profiles derived from GPS measurements gathered from low Earth orbit satellites. Before being used, these radio occultation profiles are validated by fitting an electron density model to them. A re-weighted least-squares algorithm is used for down-weighting unreliable measurements (occasionally, entire profiles) and for retrieving $N_m\mathrm{F2}$ and $h_m\mathrm{F2}$ values—together with their error estimates—from the profiles. These values are used to monthly update the database, which consists of two sets of ITU-R-like coefficients that could easily be implemented in the IRI or NeQuick models. The technique was tested with radio occultation electron density profiles that are delivered to the community by the COSMIC/FORMOSAT-3 mission team. Tests were performed for solstice and equinox seasons in high and low solar activity conditions. The global mean error of the resulting maps—estimated by the least-squares technique—is between $0.5\times 10^{10}$ and $3.6\times 10^{10}$ el m$^{-3}$ for the F2-peak electron density (equivalent to 7% of the value of the estimated parameter) and from 2.0 to 5.6 km for the height ($\sim$2%).

7.
We examine the relationship between source position stability and astrophysical properties of the radio-loud quasars making up the International Celestial Reference Frame (ICRF2). Understanding this relationship is important for improving quasar selection and analysis strategies, and therefore reference frame stability. We construct flux density time series, known as light curves, for 95 of the most frequently observed ICRF2 quasars at both the 2.3 and 8.4 GHz geodetic very long baseline interferometry (VLBI) observing bands. Because the appearance of new quasar components corresponds to an increase in quasar flux density, these light curves alert us to potential changes in source structure before they appear in VLBI images. We test how source position stability depends on three astrophysical parameters: (1) flux density variability at X band; (2) time lag between flares in S and X bands; (3) spectral index root-mean-square (rms), defined as the variability in the ratio between S and X band flux densities. We find that the time lag between S and X band light curves provides a good indicator of position stability: sources with time lags <0.06 years are significantly more stable (>20% improvement in weighted rms) than sources with larger time lags. A similar improvement is obtained by observing sources with low (<0.12) spectral index variability. On the other hand, there is no strong dependence of source position stability on flux density variability in a single frequency band. These findings can be understood by interpreting the time lag between S and X band light curves as a measure of the size of the source structure. Monitoring of source flux density at multiple frequencies therefore appears to provide a useful probe of quasar structure on scales important to geodesy. The observed astrometric position of the brightest quasar component (the core) is known to depend on observing frequency.
We show how multi-frequency flux density monitoring may allow the frequency dependence of the relative core positions along the jet to be elucidated. Knowledge of the position–frequency relation has important implications for current and future geodetic VLBI programs, as well as for the alignment between the radio and optical celestial reference frames.
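As an illustration of the time-lag statistic used above, the lag between two light curves can be estimated from the peak of their cross-correlation. This toy sketch assumes evenly sampled synthetic series; real S/X light curves are unevenly sampled and would need interpolation first:

```python
import numpy as np

def lag_days(a, b, dt_days):
    """Lag (in days) by which series a trails series b, from the peak of
    the full cross-correlation of the mean-removed series."""
    a = a - a.mean()
    b = b - b.mean()
    cc = np.correlate(a, b, mode="full")
    return (np.argmax(cc) - (len(b) - 1)) * dt_days

t = np.arange(0.0, 400.0, 1.0)                    # daily sampling
x_band = np.exp(-0.5 * ((t - 200.0) / 10.0) ** 2) # one Gaussian flare at day 200
s_band = np.interp(t - 22.0, t, x_band)           # same flare arriving 22 days later
print(lag_days(s_band, x_band, 1.0))              # 22.0
```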

8.
Error analysis of the NGS' surface gravity database
Are the National Geodetic Survey's surface gravity data sufficient for supporting the computation of a 1 cm-accurate geoid? This paper attempts to answer this question by deriving a few measures of accuracy for these data and estimating their effects on the US geoid. We use a data set which comprises ${\sim}1.4$ million gravity observations collected in 1,489 surveys. Comparisons to GRACE-derived gravity and geoid are made to estimate the long-wavelength errors. Crossover analysis and $K$-nearest neighbor predictions are used for estimating local gravity biases and high-frequency gravity errors, and the corresponding geoid biases and high-frequency geoid errors are evaluated. Results indicate that 244 of all 1,489 surface gravity surveys have significant biases ${>}2$ mGal, with geoid implications that reach 20 cm. Some of the biased surveys are large enough in horizontal extent to be reliably corrected by satellite-derived gravity models, but many others are not. In addition, the results suggest that the data are contaminated by high-frequency errors with an RMS of ${\sim}2.2$ mGal. This causes high-frequency geoid errors of a few centimeters in and to the west of the Rocky Mountains and in the Appalachians, and a few millimeters or less everywhere else. Finally, long-wavelength (${>}3^{\circ}$) surface gravity errors on the sub-mGal level but with large horizontal extent are found. All of the south and southeast of the USA is biased by +0.3 to +0.8 mGal and the Rocky Mountains by $-0.1$ to $-0.3$ mGal. These small but extensive gravity errors lead to long-wavelength geoid errors that reach 60 cm in the interior of the USA.
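The $K$-nearest-neighbor prediction step can be sketched as a leave-one-out test: predict each gravity value from its $K$ nearest neighbors and inspect the residual RMS, which inflates with the high-frequency error level. A toy numpy version on synthetic data (not the paper's implementation, which handles ${\sim}1.4$ million points):

```python
import numpy as np

def knn_residual_rms(xy, g, k=5):
    """Leave-one-out K-nearest-neighbour prediction: each value is
    predicted as the mean of its k nearest neighbours; returns residual RMS."""
    res = np.empty(len(g))
    for i in range(len(g)):
        d = np.hypot(xy[:, 0] - xy[i, 0], xy[:, 1] - xy[i, 1])
        d[i] = np.inf                        # leave the point itself out
        res[i] = g[i] - g[np.argsort(d)[:k]].mean()
    return float(np.sqrt(np.mean(res ** 2)))

rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 100.0, size=(500, 2))
smooth = 50.0 + 0.1 * xy[:, 0]               # smooth regional field, mGal
noisy = smooth + rng.normal(0.0, 2.2, 500)   # add 2.2 mGal high-frequency error
print(knn_residual_rms(xy, noisy))           # ~2.4: the noise level, slightly inflated
```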

9.
In this paper two new schemes for resolution enhancement (RE) of satellite images are proposed based on the Nonsubsampled Contourlet Transform (NSCT). The first is based on interpolation of the band-pass images obtained by applying the NSCT to the input low-resolution image. Similar to Demirel and Anbarjafari (IEEE Trans Geosci Remote Sens 49(6):1997–2004, 2011), as an intermediate step, the difference between the approximation band and the input low-resolution image is added to all the band-pass directional subbands to obtain a sharper image. This method is simple and computationally efficient but lacks sharp recovery of the edges due to the interpolation of band-pass images. To overcome this, a second method is proposed to obtain the difference layer, in which a dictionary is built from patches extracted from high-resolution training image subbands. Similar patches from the dictionary are then clustered together. This method gives a much sharper image than the first. Subjective and objective analysis of the proposed methods reveals their superiority over conventional and other state-of-the-art RE methods.

10.
M-estimation with probabilistic models of geodetic observations
The paper concerns \(M\)-estimation with probabilistic models of geodetic observations, called \(M_{\mathcal{P}}\) estimation. Special attention is paid to \(M_{\mathcal{P}}\) estimation that includes asymmetry and excess kurtosis, which are the basic anomalies of empirical distributions of errors of geodetic or astrometric observations (in comparison to the Gaussian errors). It is assumed that the influence function of \(M_{\mathcal{P}}\) estimation is equal to the differential equation that defines the system of Pearson distributions. The central moments \(\mu_{k},\ k=2,3,4\), are the parameters of that system and thus they are also the parameters of the chosen influence function. The \(M_{\mathcal{P}}\) estimation that includes the Pearson type IV and VII distributions (\(M_{\mathrm{PD(l)}}\) method) is analyzed in great detail from a theoretical point of view as well as by applying numerical tests. The chosen distributions are leptokurtic with asymmetry, which matches the general characteristics of empirical distributions. Considering \(M\)-estimation with probabilistic models, the Gram–Charlier series are also applied to approximate the models in question (\(M_{\mathrm{G-C}}\) method). The paper shows that \(M_{\mathcal{P}}\) estimation with the application of probabilistic models belongs to the class of robust estimations; the \(M_{\mathrm{PD(l)}}\) method is especially effective in that case. It is suggested that even in the absence of significant anomalies the method in question should be regarded as robust against gross errors, with its robustness controlled by the pseudo-kurtosis.
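The \(M_{\mathcal{P}}\) influence function above comes from the Pearson system; as a generic sketch of the robust M-estimation class it belongs to, here is a plain Huber-type iteratively reweighted least-squares fit (an illustrative stand-in, not the authors' method):

```python
import numpy as np

def huber_irls(A, y, c=1.5, iters=50):
    """Iteratively reweighted least squares with Huber weights:
    w = 1 for |v| <= c*sigma, w = c*sigma/|v| otherwise (sigma from the MAD)."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(iters):
        v = y - A @ x
        s = 1.4826 * np.median(np.abs(v)) + 1e-12    # robust scale estimate
        w = np.ones_like(v)
        big = np.abs(v) > c * s
        w[big] = c * s / np.abs(v[big])              # down-weight large residuals
        x = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y))
    return x

t = np.arange(10.0)
A = np.column_stack([np.ones_like(t), t])
y = 2.0 + 0.5 * t
y[3] += 100.0                   # one gross error
print(huber_irls(A, y))         # ~[2.0, 0.5]: the outlier is down-weighted
```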

11.
Gravimetric quantities are commonly represented in terms of high degree surface or solid spherical harmonics. After EGM2008, such expansions routinely extend to spherical harmonic degree 2190, which makes the computation of gravimetric quantities at a large number of arbitrarily scattered points in space using harmonic synthesis, a very computationally demanding process. We present here the development of an algorithm and its associated software for the efficient and precise evaluation of gravimetric quantities, represented in high degree solid spherical harmonics, at arbitrarily scattered points in the space exterior to the surface of the Earth. The new algorithm is based on representation of the quantities of interest in solid ellipsoidal harmonics and application of the tensor product trigonometric needlets. A FORTRAN implementation of this algorithm has been developed and extensively tested. The capabilities of the code are demonstrated using as examples the disturbing potential T, height anomaly \(\zeta \), gravity anomaly \(\Delta g\), gravity disturbance \(\delta g\), north–south deflection of the vertical \(\xi \), east–west deflection of the vertical \(\eta \), and the second radial derivative \(T_{rr}\) of the disturbing potential. After a pre-computational step that takes between 1 and 2 h per quantity, the current version of the software is capable of computing on a standard PC each of these quantities in the range from the surface of the Earth up to 544 km above that surface at speeds between 20,000 and 40,000 point evaluations per second, depending on the gravimetric quantity being evaluated, while the relative error does not exceed \(10^{-6}\) and the memory (RAM) use is 9.3 GB.

12.
The present paper deals with least-squares adjustment where the design matrix (A) is rank-deficient. The adjusted parameters \(\hat x\) as well as their variance-covariance matrix \(\Sigma_{\hat x}\) can be obtained as in the "standard" adjustment where A has full column rank, supplemented with constraints, \(C\hat x = w\), where C is the constraint matrix and w is sometimes called the "constant vector". In this analysis only the inner adjustment constraints are considered, where C has full row rank equal to the rank deficiency of A, and \(AC^{T} = 0\). Perhaps the most important outcome points to three kinds of results:
  1. The general least-squares solution, where both \(\hat x\) and \(\Sigma_{\hat x}\) are indeterminate, corresponds to w = an arbitrary random vector.
  2. The minimum-trace (least-squares) solution, where \(\hat x\) is indeterminate but \(\Sigma_{\hat x}\) is determined (and trace \(\Sigma_{\hat x}\) is minimized), corresponds to w = an arbitrary constant vector.
  3. The minimum-norm (least-squares) solution, where both \(\hat x\) and \(\Sigma_{\hat x}\) are determined (and norm \(\hat x\) and trace \(\Sigma_{\hat x}\) are minimized), corresponds to w ≡ 0.
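Case 3, the minimum-norm least-squares solution, is exactly what the Moore–Penrose pseudoinverse returns, which makes it easy to check numerically (a small numpy sketch with an illustrative rank-deficient design matrix):

```python
import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])     # rank 2: third column = first + second
y = np.array([1.0, 2.0, 3.0])      # consistent right-hand side

x_min = np.linalg.pinv(A) @ y      # minimum-norm least-squares solution
x_alt = x_min + np.array([1.0, 1.0, -1.0])   # add a null-space vector of A

print(A @ x_min - y, A @ x_alt - y)                  # both residuals ~0
print(np.linalg.norm(x_min), np.linalg.norm(x_alt))  # x_min has the smaller norm
```

Any solution differs from `x_min` by a null-space vector, so all fit the data equally well; `x_min` is the one of minimum norm.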

13.
WEI Zheng, YANG Bisheng, LI Qingquan. Journal of Remote Sensing, 2012, 16(2): 286–296
Taking vehicle-borne laser scanning point clouds as the study object, this paper proposes a coarse-to-fine method for rapidly extracting the 3D positional boundaries of buildings from the point cloud. First, weights are assigned to the laser points by analysing the spatial distribution of the points within each grid cell (planimetric distance, height difference, point density, etc.), and a feature image of the vehicle-borne point cloud is generated by inverse distance weighted (IDW) interpolation. Then, coarse boundaries of building objects are extracted from the feature image by thresholding, contour extraction and contour tracking. Finally, the building points inside the coarse boundaries are segmented into planes, building facade features are extracted and a facade triangulated irregular network (TIN) is constructed, from which the precise 3D positional boundaries of the buildings are extracted automatically under prior knowledge of the building framework.
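The IDW interpolation used to build the feature image can be sketched in a few lines (plain power-2 weights; the actual method additionally weights points by their spatial-distribution features):

```python
import numpy as np

def idw(px, py, pz, gx, gy, power=2.0, eps=1e-12):
    """Inverse-distance-weighted value at grid node (gx, gy)
    from scattered points (px, py) with values pz."""
    d = np.hypot(px - gx, py - gy)
    if np.any(d < eps):                  # grid node coincides with a data point
        return pz[np.argmin(d)]
    w = 1.0 / d ** power
    return float(np.sum(w * pz) / np.sum(w))

px = np.array([0.0, 1.0, 0.0, 1.0])
py = np.array([0.0, 0.0, 1.0, 1.0])
pz = np.array([10.0, 20.0, 30.0, 40.0])
print(idw(px, py, pz, 0.5, 0.5))         # 25.0 (equidistant from all four points)
print(idw(px, py, pz, 0.0, 0.0))         # 10.0 (exactly on a data point)
```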

14.
We present results from a new vertical deflection (VD) traverse observed in Perth, Western Australia, the first of its kind in the Southern Hemisphere. A digital astrogeodetic QDaedalus instrument was deployed to measure VDs with \(\sim\)0.2\(''\) precision at 39 benchmarks with \(\sim\)1 km spacing. For the conversion of VDs to quasigeoid height differences, the method of astronomical–topographical levelling was applied, based on topographical information from the Shuttle Radar Topography Mission. The astronomical quasigeoid heights are in 20–30 mm (RMS) agreement with three independent gravimetric quasigeoid models, and the astrogeodetic VDs agree to 0.2–0.3\(''\) (north–south) and 0.6–0.9\(''\) (east–west) RMS. Tilt-like biases of \(\sim\)1 mm over \(\sim\)1 km are present for all quasigeoid models within \(\sim\)20 km of the coastline, suggesting inconsistencies in the coastal-zone gravity data. The VD campaign in Perth was designed as a low-cost effort, possibly allowing replication in other Southern Hemisphere regions (e.g., Asia, Africa, South America and Antarctica), where VD data are particularly scarce.

15.
The study areas, the Tikovil and Payppara sub-watersheds of the Meenachil river, cover 158.9 and 111.9 km², respectively. These watersheds are part of the Western Ghats, an ecologically sensitive region. The drainage network of the sub-watersheds was delineated from SOI topographical maps at 1:50,000 scale using Arc GIS software. Stream orders were assigned using the method of Strahler (1964). The drainage network shows that the terrain exhibits a dendritic to sub-dendritic drainage pattern. Stream order ranges from the fifth to the sixth order. Drainage density varies between 1.69 and 2.62 km/km². The drainage textures of the basins are 2.3 km⁻¹ and 6.98 km⁻¹, categorized as coarse to very fine. Stream frequency is low in the case of the Payappara sub-watershed (1.78 km⁻²). The Payappara sub-watershed has the higher constant of channel maintenance (0.59), indicating fewer structural disturbances and lower runoff. The form factor varies between 0.42 and 0.55, suggesting a more elongated shape for the Payappara sub-watershed and a more circular shape for the Tikovil sub-watershed. The mean bifurcation ratio (3.5) indicates that both sub-watersheds are within the natural stream system. The study thus shows that GIS techniques are a competent tool for morphometric analysis.
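The indices quoted above follow standard morphometric definitions: drainage density D = total stream length / basin area, stream frequency F = stream count / area, form factor Rf = area / (basin length)², and bifurcation ratio Rb = Nu / Nu+1. A sketch with illustrative numbers (only the 111.9 km² area is from the study; the lengths and counts are made up):

```python
def drainage_density(total_stream_len_km, area_km2):
    return total_stream_len_km / area_km2          # km / km^2

def stream_frequency(n_streams, area_km2):
    return n_streams / area_km2                    # km^-2

def form_factor(area_km2, basin_length_km):
    return area_km2 / basin_length_km ** 2         # dimensionless

def bifurcation_ratio(n_order_u, n_order_u1):
    return n_order_u / n_order_u1                  # Nu / N(u+1)

# Illustrative basin: 111.9 km^2 area, hypothetical lengths and stream counts.
print(drainage_density(220.0, 111.9))   # ~1.97 km/km^2
print(form_factor(111.9, 16.3))         # ~0.42
print(bifurcation_ratio(70, 20))        # 3.5
```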

16.
Building outlines, as key elements of 3D building reconstruction, are essential in building smart and digital cities. Addressing problems in the processing chain of extracting building outlines from airborne LiDAR point clouds (point-cloud filtering, roof-plane extraction, outline extraction and accuracy assessment), this paper proposes a building outline extraction method that combines an improved region-growing algorithm, the 3D Hough transform and the α-shape algorithm. After denoising the airborne LiDAR point cloud, the method first filters ground points with the improved region-growing algorithm and removes low objects by a height threshold based on the normalised height of object points above the ground; it then extracts building planes from the remaining building and tall-tree points with the 3D Hough transform; finally, it extracts the building outline information with the α-shape algorithm. The method was applied to point clouds of an urban area scanned with a RIEGL VQ-1560i airborne LiDAR system, and the extracted outlines were assessed with evaluation metrics including matching degree, shape similarity and positional accuracy. The results show that the combined method can extract building outlines accurately and is stable and generally applicable for large-area building outline extraction.

17.

Background

LiDAR remote sensing is a rapidly evolving technology for quantifying a variety of forest attributes, including aboveground carbon (AGC). Pulse density influences the acquisition cost of LiDAR, and grid cell size influences AGC prediction using plot-based methods; however, little work has evaluated the effects of LiDAR pulse density and cell size for predicting and mapping AGC in fast-growing Eucalyptus forest plantations. The aim of this study was to evaluate the effect of LiDAR pulse density and grid cell size on AGC prediction accuracy at plot and stand levels using airborne LiDAR and field data. We used the Random Forest (RF) machine learning algorithm to model AGC using LiDAR-derived metrics from LiDAR collections of 5 and 10 pulses m⁻² (RF5 and RF10) and grid cell sizes of 5, 10, 15 and 20 m.

Results

The results show that a LiDAR pulse density of 5 pulses m⁻² provides metrics with similar prediction accuracy for AGC as a dataset with 10 pulses m⁻² in these fast-growing plantations. Relative root mean square errors (RMSEs) for RF5 and RF10 were 6.14 and 6.01%, respectively. Equivalence tests showed that the predicted AGC from the training and validation models was equivalent to the observed AGC measurements. Grid cell sizes for mapping ranging from 5 to 20 m also did not significantly affect the prediction accuracy of AGC at stand level in this system.

Conclusion

LiDAR measurements can be used to predict and map AGC across variable-age Eucalyptus plantations with adequate levels of precision and accuracy using 5 pulses m⁻² and a grid cell size of 5 m. The promising results for AGC modeling in this study allow greater confidence in comparing AGC estimates with varying LiDAR sampling densities for Eucalyptus plantations and assist in decision-making towards more cost-effective and efficient forest inventories.
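The relative RMSE reported in the results (6.14% and 6.01%) is conventionally the RMSE divided by the mean observed value; a small numpy sketch of the metric with made-up AGC values (the study's predictions themselves came from Random Forest models):

```python
import numpy as np

def relative_rmse_percent(observed, predicted):
    """100 * RMSE / mean(observed)."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return 100.0 * rmse / observed.mean()

agc_obs = np.array([50.0, 60.0, 55.0, 70.0, 65.0])   # Mg/ha, synthetic
agc_pred = np.array([52.0, 57.0, 56.0, 68.0, 66.0])
print(relative_rmse_percent(agc_obs, agc_pred))      # ~3.25
```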

18.
We analyze the high-resolution dilatation data for the October 2013 \(M_w\) 6.2 Ruisui, Taiwan, earthquake, which occurred at a distance of 15–20 km from a Sacks–Evertson dilatometer network. Based on well-constrained source parameters (\(\hbox {strike}=217^\circ\), \(\hbox {dip}=48^\circ\), \(\hbox {rake}=49^\circ\)), we propose a simple rupture model that explains the permanent static deformation and the dynamic vibrations at short period (\(\sim\)3.5–4.5 s) for most of the four sites with discrepancies of less than 20%. This study represents a first attempt at simultaneously modeling the dynamic and static crustal strain using dilatation data. The results illustrate the potential for strain recordings of high-frequency seismic waves in the near field of an earthquake to add constraints on the properties of seismic sources.

19.
Fast error analysis of continuous GNSS observations with missing data
One of the most widely used methods for the time-series analysis of continuous Global Navigation Satellite System (GNSS) observations is Maximum Likelihood Estimation (MLE), which in most implementations requires $\mathcal{O}(n^3)$ operations for $n$ observations. Previous research by the authors has shown that this amount of operations can be reduced to $\mathcal{O}(n^2)$ for observations without missing data. In the current research we present a reformulation of the equations that preserves this low amount of operations, even in the common situation of having some missing data. Our reformulation assumes that the noise is stationary to ensure a Toeplitz covariance matrix. However, most GNSS time series exhibit power-law noise, which is weakly non-stationary. To overcome this problem, we present a Toeplitz covariance matrix that provides an approximation for power-law noise that is accurate for most GNSS time series. Numerical results are given for a set of synthetic data and a set of International GNSS Service (IGS) stations, demonstrating a reduction in computation time of a factor of 10–100 compared to the standard MLE method, depending on the length of the time series and the amount of missing data.
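The Toeplitz structure is what enables the speed-up: a stationary covariance is fully described by its first column, and Levinson-type solvers exploit this in $\mathcal{O}(n^2)$. A sketch with scipy, illustrating the idea rather than the authors' implementation:

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

n = 500
c = 0.9 ** np.arange(n)   # first column of a stationary (Toeplitz) covariance
y = np.random.default_rng(1).standard_normal(n)

x_fast = solve_toeplitz(c, y)             # Levinson recursion, O(n^2)
x_full = np.linalg.solve(toeplitz(c), y)  # dense factorisation, O(n^3)
print(np.allclose(x_fast, x_full))        # True
```

Solving systems of this form is the dominant cost inside each MLE log-likelihood evaluation, which is where the reported factor of 10–100 originates.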

20.
Proper understanding of how the Earth's mass distributions and redistributions influence the Earth's gravity field-related functionals is crucial for numerous applications in geodesy, geophysics and related geosciences. Calculations of the gravitational curvatures (GC) have been proposed in geodesy in recent years. In view of future satellite missions, sixth-order developments of the gradients are becoming requisite. In this paper, a set of 3D integral GC formulas of a tesseroid mass body is provided via spherical integral kernels in the spatial domain. Based on the Taylor series expansion approach, numerical expressions of the 3D GC formulas are provided up to sixth order. Moreover, numerical experiments demonstrate the correctness of the 3D Taylor series approach for the GC formulas up to sixth order. Analogous to other gravitational effects (e.g., gravitational potential, gravity vector, gravity gradient tensor), it is found numerically that there exist a very-near-area problem and a polar singularity problem in the GC east–east–radial, north–north–radial and radial–radial–radial components in the spatial domain; compared to the other gravitational effects, the relative approximation errors of the GC components are larger due to the influence not only of the geocentric distance but also of the latitude. This study shows that the magnitude of each term of the nonzero GC functionals on a 15\(^{\prime}\) \(\times\) 15\(^{\prime}\) grid at GOCE satellite height can reach about 10\(^{-16}\) m\(^{-1}\) s\(^{2}\) for zero order, 10\(^{-24}\) or 10\(^{-23}\) m\(^{-1}\) s\(^{2}\) for second order, 10\(^{-29}\) m\(^{-1}\) s\(^{2}\) for fourth order and 10\(^{-35}\) or 10\(^{-34}\) m\(^{-1}\) s\(^{2}\) for sixth order, respectively.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号