Found 20 similar documents; search time: 236 ms
1.
Mathematical models are sketched for the following problems: estimating the statistical parameters of precise levelling, adjusting the primary levelling networks, and estimating vertical crustal movements. Results obtained in evaluating primary relevellings in the G.D.R. are reported.
2.
Georges Blaha 《Journal of Geodesy》1982,56(4):281-299
The present paper deals with the least-squares adjustment where the design matrix A is rank-deficient. The adjusted parameters \(\hat x\) as well as their variance-covariance matrix \(\sum _{\hat x}\) can be obtained as in the "standard" adjustment where A has full column rank, supplemented with constraints \(C\hat x = w\), where C is the constraint matrix and w is sometimes called the "constant vector". In this analysis only the inner adjustment constraints are considered, where C has full row rank equal to the rank deficiency of A, and \(AC^T = 0\). Perhaps the most important outcome points to three kinds of results:
- A general least-squares solution, where both \(\hat x\) and \(\sum _{\hat x}\) are indeterminate, corresponds to w = arbitrary random vector.
- The minimum-trace (least-squares) solution, where \(\hat x\) is indeterminate but \(\sum _{\hat x}\) is determined (and trace \(\sum _{\hat x}\) = minimum), corresponds to w = arbitrary constant vector.
- The minimum-norm (least-squares) solution, where both \(\hat x\) and \(\sum _{\hat x}\) are determined (and norm \(\hat x\) = minimum, trace \(\sum _{\hat x}\) = minimum), corresponds to w ≡ 0.
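The three cases above differ only in the constant vector w entering the bordered normal equations. A minimal numerical sketch of the minimum-norm case (w = 0), using a hypothetical three-point levelling loop with a datum defect of one; the observation values are illustrative only:

```python
import numpy as np

# Levelling-style design matrix with a datum (rank) defect of 1:
# observations are height differences h_j - h_i among three points.
A = np.array([[-1.0, 1.0, 0.0],
              [ 0.0, -1.0, 1.0],
              [ 1.0, 0.0, -1.0]])
l = np.array([1.2, 0.4, -1.5])          # observed height differences

# Inner constraint: C has full row rank equal to the defect, and A @ C.T = 0.
C = np.array([[1.0, 1.0, 1.0]])
assert np.allclose(A @ C.T, 0.0)

# Bordered normal equations with w = 0 (minimum-norm case).
N = A.T @ A
K = np.block([[N, C.T],
              [C, np.zeros((1, 1))]])
rhs = np.concatenate([A.T @ l, [0.0]])
x_hat = np.linalg.solve(K, rhs)[:3]

# The minimum-norm least-squares solution coincides with the
# pseudoinverse solution.
assert np.allclose(x_hat, np.linalg.pinv(A) @ l)
```

The constraint C picks, among all least-squares solutions (which differ by the null-space vector), the one orthogonal to the null space, i.e. the heights summing to zero.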
3.
Warren G. Heller 《Journal of Geodesy》1981,55(4):354-369
The state of current and proposed moving-base gravity gradiometer instruments is briefly reviewed. The review perspective is directed toward their deployment as a source of additional gravimetric data during inertial surveys. In such gradiometer-aided surveys, the additional gravity gradient information could be used to:
- Improve surveyed gravity vector accuracy
- Extend the interval between zero velocity update stops
- Accomplish varying combinations of the above.
4.
Johannes Ihde 《Journal of Geodesy》1981,55(2):99-110
The investigations refer to the compartment method using mean terrestrial free-air anomalies only. Three main error influences of remote areas (distance from the fixed point > 9°) on height anomalies and deflections of the vertical are considered:
- The prediction errors of mean terrestrial free-air anomalies have the greatest influence, amounting to about ±0″.2 in each component of the deflections of the vertical and to ±3 m for height anomalies.
- The error of the compartment method, which originates from converting the integral formulas of Stokes and Vening-Meinesz into summation formulas, can be neglected if the anomalies for points and gravity profiles are compiled into 5°×5° mean values.
- The influences of the mean gravimetric correction terms of Arnold (estimated for important mountains of the Earth by means of an approximate formula) on height anomalies may amount to 1–2 m, and on deflections of the vertical to 0″.05–0″.1; they therefore have to be taken into account for exact calculations.
5.
Yehuda Bock 《Journal of Geodesy》1983,57(1-4):294-311
The estimation of crustal deformations from repeated baseline measurements is a singular problem in the absence of prior information. One often-applied solution is a free adjustment, in which the singular normal matrix is augmented with a set of inner constraints. These constraints impose no net translation or rotation on the estimated deformations X, which may not be physically meaningful for a particular problem. The introduction of an available geophysical model, from which an expected deformation vector \(\bar X\) and its covariance matrix \(\sum _{\bar X}\) can be computed, will direct X to a physically more meaningful solution. Three possible estimators are investigated for estimating deformations from a combination of baseline measurements and geophysical models.
6.
J. J. Levallois 《Journal of Geodesy》1983,57(1-4):312-331
The French astronomer Jean Picard (1620–1682) was certainly one of the leading scientists of his time. A friend of Huygens, Hevelius and Oldenburg, master of Römer, and an indefatigable traveller, he played a very important part in the development of positional astronomy and geodesy. He was the first to have the idea of comparing length units to a reproducible physical quantity, namely the length of the one-second pendulum at Paris, and he measured that length. He conceived the first cross-wire telescopes and adapted them to geodetic and astronomical instruments of his own design, used throughout a century until 1780. He obtained the first really reliable value of the Earth's radius in his famous measurement of the meridional arc Paris–Amiens, the original cell of the French triangulations. The following article is devoted to a recomputation and evaluation of the accuracy of that work, as compared with later operations, and independently concludes that this achievement gave the necessary impulse to the development of geodesy in France and probably abroad.
7.
R R Navalgund V Jayaraman A S Kiran Kumar Tara Sharma Kurien Mathews K K Mohanty V K Dadhwal M B Potdar T P Singh R Ghosh V Tamilarasan T T Medhavy 《Journal of the Indian Society of Remote Sensing》1996,24(4):207-237
Although data available from various earth observation systems have been routinely used in many resource applications, gaps remain, and the data needs of applications at different levels of detail have not been met. There is a growing demand for data at higher repetitivity, at higher spatial resolution, and in more and narrower spectral bands. Some of the thrust areas of application, particularly in the Indian context, are:
- Management of natural resources to ensure a sustainable increase in agricultural production,
- Study of the state of the environment, its monitoring, and assessment of the impact of various development actions on the environment,
- Updating and generation of large-scale topographical maps,
- Exploration/exploitation of marine and mineral resources, and
- Operational meteorology and the study of various land and oceanic processes to understand/predict global climate changes.
The corresponding observation requirements include:
- Moderate spatial resolution (150–300 m), high repetitivity (2 days), a minimum set of spectral bands (VIS, NIR, MIR, TIR), full coverage.
- Moderate to high spatial resolution (20–40 m), high repetitivity (4–6 days), spectral bands (VIS, NIR, MIR, TIR), full coverage.
- High spatial resolution (5–10 m) multispectral data with provision for selecting specific narrow bands (VIS, NIR, MIR), and viewing from different angles.
- Synthetic aperture radar operating in at least two frequencies (C, X, Ku), two incidence angles/polarizations, moderate to high spatial resolution (20–40 m), high repetitivity (4–6 days).
- Very high spatial resolution (1–2 m) data in a panchromatic band to provide terrain details at cadastral level (1:10,000).
- Stereo capability (1–2 m height resolution) to help planning/execution of development plans.
- A moderate-resolution sensor operating in VIS, NIR and MIR on a geostationary platform for observations at different sun angles, necessary for the development of canopy reflectance inversion models.
- Diurnal (at least two, i.e. pre-dawn and noon) temperature measurements of the earth's surface.
- An ocean colour monitor with daily coverage.
- A multi-frequency microwave radiometer, scatterometer, altimeter, atmospheric sounder, etc.
8.
M. G. Sideris 《Journal of Geodesy》1996,70(8):470-479
Spectral methods have been a standard tool in physical geodesy applications over the past decade. Typically, they have been used for the efficient evaluation of convolution integrals, utilizing homogeneous, noise-free gridded data. This paper answers the following three questions:
- Can data errors be propagated into the results?
- Can heterogeneous data be used?
- Is error propagation possible with heterogeneous data?
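As a concrete illustration of the setting behind the first question, a convolution integral evaluated spectrally, together with first-order error propagation through the same kernel, can be sketched as follows; the kernel and noise levels are synthetic stand-ins, not data from the paper:

```python
import numpy as np

# Spectral (FFT) evaluation of a discrete convolution, as used for
# geodetic integrals on gridded data, plus error propagation for
# uncorrelated data noise.
rng = np.random.default_rng(1)
g = rng.normal(size=64)                  # gridded data
sigma2 = np.full(64, 0.25)               # data variances (uncorrelated)
k = np.exp(-0.5 * (np.arange(64) - 32.0) ** 2 / 9.0)   # smoothing kernel

n = 128                                  # zero-padding avoids circular wrap
G, K = np.fft.rfft(g, n), np.fft.rfft(k, n)
conv_fft = np.fft.irfft(G * K, n)[:127]  # linear convolution, length 64+64-1

# Direct convolution for comparison.
conv_dir = np.convolve(g, k)
assert np.allclose(conv_fft, conv_dir)

# Error propagation: for y = k * g with uncorrelated errors,
# var(y) = (k**2) * var(g), itself a convolution and hence also FFT-able.
var_y = np.convolve(k ** 2, sigma2)
```

Because the variance propagation is again a convolution, the same FFT machinery that evaluates the integral can propagate gridded data errors into the results.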
9.
Geological studies of the area around Katta, in the southern part of the Ratnagiri District of Maharashtra, were carried out with the help of visual remote sensing techniques using LANDSAT imagery on 1:250,000 scale and aerial photographs on 1:60,000 scale. The major stratigraphic units represented in the area under study are the Archean Complex, Kaladgi Supergroup, Deccan Trap, Laterite and Alluvium. The Kaladgis unconformably overlie the Archean metasediments and at places also exhibit faulted contacts with the latter. The major part of the area is covered by thick evergreen vegetation. Interpretation, followed by field and laboratory work, revealed the following:
- The different lithologic units could be delineated on the aerial photographs.
- Different lineaments marked on the imagery were found to be due either to faults or fracture zones. Some of the older faults appear to have been rejuvenated after the formation of the laterites.
- Some of the lithologic horizons can be identified on the Landsat imagery by virtue of their spatial signatures.
10.
One of the most widely used methods for the time-series analysis of continuous Global Navigation Satellite System (GNSS) observations is Maximum Likelihood Estimation (MLE), which in most implementations requires $\mathcal{O}(n^3)$ operations for $n$ observations. Previous research by the authors has shown that this amount of operations can be reduced to $\mathcal{O}(n^2)$ for observations without missing data. In the current research we present a reformulation of the equations that preserves this low number of operations, even in the common situation of having some missing data. Our reformulation assumes that the noise is stationary, to ensure a Toeplitz covariance matrix. However, most GNSS time series exhibit power-law noise, which is weakly non-stationary. To overcome this problem, we present a Toeplitz covariance matrix that provides an approximation of power-law noise that is accurate for most GNSS time series. Numerical results are given for a set of synthetic data and a set of International GNSS Service (IGS) stations, demonstrating a reduction in computation time by a factor of 10–100 compared to the standard MLE method, depending on the length of the time series and the amount of missing data.
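The computational gain from a Toeplitz covariance matrix can be sketched with SciPy's Levinson-based solver; the autocovariance below is an illustrative power-law-plus-white stand-in, not the approximation derived in the paper:

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

# With stationary noise the covariance matrix is Toeplitz, so systems
# C x = b solve via the Levinson recursion in O(n^2) instead of a full
# O(n^3) factorization.
n = 200
lags = np.arange(n)
acov = 1.0 / (1.0 + lags) ** 0.8         # slowly decaying, power-law-like
acov[0] += 0.5                           # add a white-noise variance

b = np.sin(np.linspace(0.0, 3.0, n))
x_fast = solve_toeplitz(acov, b)                 # O(n^2), first column only
x_full = np.linalg.solve(toeplitz(acov), b)      # O(n^3) dense reference
assert np.allclose(x_fast, x_full)
```

Only the first column of the covariance matrix is stored and used, which is what makes both the memory and the operation count drop by an order of n.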
11.
Observable quantities in satellite gradiometry
Martin Vermeer 《Journal of Geodesy》1990,64(4):347-361
Deriving the observables for satellite gravity gradiometry, several workers have identified the invariants under spatial rotation of the gravitation gradient tensor, in order to obtain quantities insensitive to the precise (unrecoverable) attitude of the satellite. Extending this work we show:
- Considering that an approximate (not precise) attitude recovery for these three-axes-stabilised satellites is to be expected, one can identify three independent invariants instead of two.
- Besides studying gradient tensor invariants for one observation time, one should also study (as with GPS observables) first and second differences between successive tensor component values in time. Bias and trend patterns in the measured tensor components caused by satellite rotation uncertainty, and by attitude uncertainty in some cross components, are shown to cancel. Information thus obtained is exclusively high-frequency, however.
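The rotation invariance underlying these observables is easy to verify numerically. A sketch with a random symmetric, trace-free tensor (the free-space structure of a gravity gradient tensor), not actual gradiometer data:

```python
import numpy as np

# The three principal invariants of the gradient tensor are unchanged
# by any rotation of the instrument frame.
rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3))
T = M + M.T
T -= np.trace(T) / 3.0 * np.eye(3)       # symmetric, trace-free (Laplace)

def invariants(t):
    i1 = np.trace(t)                      # vanishes in free space
    i2 = 0.5 * (np.trace(t) ** 2 - np.trace(t @ t))
    i3 = np.linalg.det(t)
    return np.array([i1, i2, i3])

# Apply a random rotation (orthogonal matrix from a QR factorization).
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
T_rot = Q @ T @ Q.T
assert np.allclose(invariants(T), invariants(T_rot))
```

Since the first invariant is identically zero in free space, only two invariants carry information per epoch unless approximate attitude knowledge is exploited, which is the point the first bullet makes.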
12.
Sridevi Jade Malay Mukul V. K. Gaur Kireet Kumar T. S. Shrungeshwar G. S. Satyal Rakesh Kumar Dumka Saigeetha Jagannathan M. B. Ananda P. Dileep Kumar Souvik Banerjee 《Journal of Geodesy》2014,88(6):539-557
We present new insights on the time-averaged surface velocities, convergence and extension rates along arc-normal transects in the Kumaon, Garhwal and Kashmir–Himachal regions of the Indian Himalaya from 13 years of high-precision Global Positioning System (GPS) time series (1995–2008), derived from GPS data at 14 permanent and 42 campaign stations between 29.5–35°N and 76–81°E. The GPS surface horizontal velocities vary significantly from the Higher to the Lesser Himalaya and are of the order of 30 to 48 mm/year NE in the ITRF 2005 reference frame, and 17 to 2 mm/year SW in an India-fixed reference frame, indicating that this region is accommodating less than 2 cm/year of the India–Eurasia plate motion (~4 cm/year). The total arc-normal shortening varies between ~10–14 mm/year along the different transects of the northwest Himalayan wedge, between the Indo-Tsangpo suture to the north and the Indo-Gangetic foreland to the south, indicating high strain accumulation in the Himalayan wedge. This convergence is accommodated differentially along the arc-normal transects: ~5–10 mm/year in the Lesser Himalaya and 3–4 mm/year in the Higher Himalaya south of the South Tibetan Detachment. Most of the convergence in the Lesser Himalaya of Garhwal and Kumaon is accommodated just south of the Main Central Thrust fault trace, indicating high strain accumulation in this region, which is consistent with its high seismic activity. In addition, for the first time an arc-normal extension of ~6 mm/year has been observed in the Tethyan Himalaya of Kumaon. Inverse modeling of GPS-derived surface deformation rates in the Garhwal and Kumaon Himalaya using a single dislocation indicates that the Main Himalayan Thrust is locked from the surface to a depth of ~15–20 km over a width of 110 km, with an associated slip rate of ~16–18 mm/year.
These results indicate that the arc-normal rates in the Northwest Himalaya have a complex deformation pattern involving both convergence and extension, and rigorous seismo-tectonic models in the Himalaya are necessary to account for this pattern. In addition, the results also give an estimate of the co-seismic and post-seismic motion associated with the 1999 Chamoli earthquake, which is modeled to derive the slip and geometry of the rupture plane.
13.
Combining consecutive short arcs into long arcs for precise and efficient GPS Orbit Determination
G. Beutler E. Brockmann U. Hugentobler L. Mervart M. Rothacher R. Weber 《Journal of Geodesy》1996,70(5):287-299
The final products of the CODE Analysis Center (Center for Orbit Determination in Europe) of the International GPS Service for Geodynamics (IGS) stem from overlapping 3-day arcs. Until 31 December 1994 these long arcs were computed from scratch, i.e. by processing three days of observations of about 40 stations (by mid-1995 about 60 stations were used) of the IGS Global Network in our parameter estimation program GPSEST. Because one-day arcs have to be produced first (for the purpose of error detection etc.), the actual procedure was rather time-consuming. In the present article we develop the mathematical tools necessary to form long arcs based on the normal equation systems of consecutive short arcs (one-day solutions in the case of CODE). The procedure in its simplest version is as follows:
- Each short arc is described by six initial conditions and a number of dynamical orbit parameters (e.g. radiation pressure parameters). The resulting long arc in turn shall be based on n consecutive short arcs and described by six initial conditions and the same number of dynamical parameters as in the short arcs.
- By requiring position and velocity to be continuous at the boundaries of the short arcs, we obtain a long arc which is actually defined by one set of initial conditions and n sets of dynamical parameters (if n short arcs are combined).
- By requiring the dynamical parameters to be identical in consecutive short arcs, the resulting long arc is characterized by exactly the same number of orbit parameters as each of the short arcs.
- This procedure is not yet optimized, because formally all n sets of orbit parameters have to be set up and solved for in the long-arc solution (although they are not independent). In order to allow for an optimized solution we derive all the relations necessary to eliminate the unnecessary parameters in the combination. Each long arc is then characterized by the actual number of independent orbit parameters. The resulting procedure is very efficient.
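The parameter elimination behind the optimized combination is, at its core, a Schur-complement reduction of the normal equations. A sketch on a small synthetic system (not CODE's actual NEQs):

```python
import numpy as np

# Eliminate a block of (arc-specific) parameters from a normal-equation
# system before combining it with other arcs, via the Schur complement.
rng = np.random.default_rng(2)
A = rng.normal(size=(30, 6))
N = A.T @ A                              # normal matrix
b = A.T @ rng.normal(size=30)            # right-hand side

k = 4                                    # keep first 4, eliminate last 2
N11, N12 = N[:k, :k], N[:k, k:]
N21, N22 = N[k:, :k], N[k:, k:]
b1, b2 = b[:k], b[k:]

# Reduced system for the retained parameters only.
N_red = N11 - N12 @ np.linalg.solve(N22, N21)
b_red = b1 - N12 @ np.linalg.solve(N22, b2)
x_red = np.linalg.solve(N_red, b_red)

# Must match the retained part of the full solution.
assert np.allclose(x_red, np.linalg.solve(N, b)[:k])
```

Reduced systems of this form can be stacked arc by arc, so the long-arc solution never has to carry all n sets of dependent orbit parameters explicitly.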
14.
Although total least squares (TLS) is more rigorous than the weighted least squares (LS) method to estimate the parameters in an errors-in-variables (EIV) model, it is computationally much more complicated than the weighted LS method. For some EIV problems, the TLS and weighted LS methods have been shown to produce practically negligible differences in the estimated parameters. To understand under what conditions we can safely use the usual weighted LS method, we systematically investigate the effects of the random errors of the design matrix on weighted LS adjustment. We derive the effects of EIV on the estimated quantities of geodetic interest, in particular, the model parameters, the variance–covariance matrix of the estimated parameters and the variance of unit weight. By simplifying our bias formulae, we can readily show that the corresponding statistical results obtained by Hodges and Moore (Appl Stat 21:185–195, 1972) and Davies and Hutton (Biometrika 62:383–391, 1975) are actually the special cases of our study. The theoretical analysis of bias has shown that the effect of random matrix on adjustment depends on the design matrix itself, the variance–covariance matrix of its elements and the model parameters. Using the derived formulae of bias, we can remove the effect of the random matrix from the weighted LS estimate and accordingly obtain the bias-corrected weighted LS estimate for the EIV model. We derive the bias of the weighted LS estimate of the variance of unit weight. The random errors of the design matrix can significantly affect the weighted LS estimate of the variance of unit weight. The theoretical analysis successfully explains all the anomalously large estimates of the variance of unit weight reported in the geodetic literature. We propose bias-corrected estimates for the variance of unit weight. 
Finally, we analyze two examples of coordinate transformation and climate change, which show that the bias-corrected weighted LS method can perform numerically as well as the weighted TLS method.
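For intuition about how close the two estimators can be, classical unweighted TLS (via the SVD of the augmented matrix [A | b], the textbook algorithm rather than the weighted variants studied here) can be compared with ordinary LS on simulated errors-in-variables data:

```python
import numpy as np

# Classical TLS: the solution comes from the right singular vector
# belonging to the smallest singular value of [A | b]; it treats errors
# in A and b symmetrically, unlike ordinary LS.
rng = np.random.default_rng(3)
x_true = np.array([2.0, -1.0])
A0 = rng.normal(size=(100, 2))
b0 = A0 @ x_true

A = A0 + 0.05 * rng.normal(size=A0.shape)   # noisy design matrix (EIV)
b = b0 + 0.05 * rng.normal(size=100)

_, _, Vt = np.linalg.svd(np.column_stack([A, b]))
v = Vt[-1]
x_tls = -v[:2] / v[2]

x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
# With small design-matrix noise both estimates land close to the truth,
# consistent with cases where LS and TLS practically coincide.
```

The difference between the two estimates shrinks with the variance of the design-matrix errors, which is exactly the regime in which the bias-corrected weighted LS estimate is attractive.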
15.
Deformations of radio telescopes used in geodetic and astrometric very long baseline interferometry (VLBI) observations belong to the class of systematic error sources which require correction in data analysis. In this paper we present a model for all path length variations in the geometrical optics of radio telescopes which are due to gravitational deformation. The Effelsberg 100 m radio telescope of the Max Planck Institute for Radio Astronomy, Bonn, Germany, has been surveyed by various terrestrial methods; thus, all information needed to model the path length variations is available. Additionally, a ray tracing program has been developed which uses the parameters of the measured deformations as input to produce an independent check of the theoretical model. In this program, as in the theoretical model, the illumination function plays an important role because it serves as the weighting function for the individual path lengths depending on the distance from the optical axis. For the Effelsberg telescope, the biggest contribution to the total path length variations is the bending of the main beam located along the elevation axis, which partly carries the weight of the paraboloid at its vertex. The difference in total path length is almost −100 mm when comparing observations at 90° and at 0° elevation angle. The impact of the path length corrections is validated in a global VLBI analysis. The application of the correction model leads to a change in the vertical position of +120 mm. This is more than the maximum path length, but the effect can be explained by the shape of the correction function.
16.
Graph theory is useful for analyzing time-dependent model parameters estimated from interferometric synthetic aperture radar (InSAR) data in the temporal domain. Plotting acquisition dates (epochs) as vertices and pair-wise interferometric combinations as edges defines an incidence graph. The edge-vertex incidence matrix and the normalized edge Laplacian matrix are factors in the covariance matrix for the pair-wise data. Using empirical measures of residual scatter in the pair-wise observations, we estimate the relative variance at each epoch by inverting the covariance of the pair-wise data. We evaluate the rank deficiency of the corresponding least-squares problem via the edge-vertex incidence matrix. We implement our method in a MATLAB software package called GraphTreeTA, available on GitHub (https://github.com/feigl/gipht). We apply temporal adjustment to the data set described in Lu et al. (Geophys Res Solid Earth 110, 2005) at Okmok volcano, Alaska, which erupted most recently in 1997 and 2008. The data set contains 44 differential volumetric changes and uncertainties estimated from interferograms between 1997 and 2004. Estimates show that approximately half of the magma volume lost during the 1997 eruption was recovered by the summer of 2003. Between June 2002 and September 2003, the estimated rate of volumetric increase is (6.2 ± 0.6) × 10^6 m³/year. Our preferred model provides a reasonable fit that is compatible with viscoelastic relaxation in the five years following the 1997 eruption. Although we demonstrate the approach using volumetric rates of change, our formulation in terms of incidence graphs applies to any quantity derived from pair-wise differences, such as range change, range gradient, or atmospheric delay.
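The temporal adjustment of pair-wise data via an edge-vertex incidence matrix can be sketched on a toy graph; the epochs and edge list below are hypothetical, not the Okmok data set:

```python
import numpy as np

# Each interferogram observes the difference between two epochs; rows of
# the incidence matrix B carry -1 and +1 at the two epochs of each pair.
truth = np.array([0.0, 1.0, 1.8, 2.1])           # values at 4 epochs
edges = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]  # interferometric pairs

B = np.zeros((len(edges), len(truth)))            # edge-vertex incidence
for r, (i, j) in enumerate(edges):
    B[r, i], B[r, j] = -1.0, 1.0

d = B @ truth                                     # pair-wise observations

# The incidence matrix of a connected graph has a rank deficiency of 1
# (the absolute datum is unobservable): fix epoch 0 to zero and solve
# for the remaining epochs.
x = np.linalg.lstsq(B[:, 1:], d, rcond=None)[0]
assert np.allclose(x, truth[1:])
```

The rank deficiency shows up directly as the all-ones null vector of B, which is why one epoch (or some equivalent constraint) must anchor the solution.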
17.
John Langbein 《Journal of Geodesy》2017,91(8):985-994
Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, any estimate of the error of these parameters will likely be too low, and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements of estimating the parameters of the selected noise model when there are numerous observations. Here, I address the second problem, computational efficiency, using maximum likelihood estimation (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time-domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013, doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices > 1.0, since it requires the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
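A filter-based view of power-law noise can be illustrated with the standard fractional-differencing recursion (a textbook construction, not necessarily the exact filter of this paper):

```python
import numpy as np

# Power-law noise 1/f^alpha is obtained by filtering white noise with the
# fractional-differencing weights h_k = h_{k-1} (k - 1 + alpha/2) / k.
def powerlaw_filter(alpha, n):
    d = alpha / 2.0
    h = np.empty(n)
    h[0] = 1.0
    for k in range(1, n):
        h[k] = h[k - 1] * (k - 1 + d) / k
    return h

# Sanity check: alpha = 2 reduces to a random walk, all weights equal 1,
# so the filtered series is just the cumulative sum of the white noise.
assert np.allclose(powerlaw_filter(2.0, 10), np.ones(10))

rng = np.random.default_rng(4)
w = rng.normal(size=256)                 # white driving noise
h = powerlaw_filter(1.0, 256)            # flicker noise (alpha = 1)
flicker = np.convolve(h, w)[:256]
```

Because such filters are linear, two noise processes can be combined by adding their filtered series, which keeps the implied covariance structure simple compared with combining amplitudes in quadrature.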
18.
Yi Bin Yao Bao Zhang Shun Qiang Yue Chao Qian Xu Wen Fei Peng 《Journal of Geodesy》2013,87(5):439-448
Zenith wet delays can be mapped onto precipitable water with a conversion factor, but calculating the exact conversion factor requires a precise value of its key variable, the weighted mean temperature T_m. Yao et al. (J Geod 86:1125–1135, 2012, doi:10.1007/s00190-012-0568-1) established the first generation of the global T_m model (GTm-I) with ground-based radiosonde data, but owing to the lack of radiosonde data at sea, the model appears abnormal in some areas. Given that sea surface temperature varies less than temperature on land, and that the GPT model and the Bevis T_m–T_s relationship are accurate enough to describe the surface temperature and T_m, this paper capitalizes on the GPT model and the Bevis T_m–T_s relationship to provide simulated T_m at sea, as compensation for the lack of data. Combined with the T_m from radiosonde data, we recalculated the GTm model coefficients. The results show that this method not only improves the accuracy of the GTm model significantly at sea but also improves it on land, making the GTm model more stable and practically applicable.
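The conversion chain from surface temperature to the precipitable-water factor can be sketched with the widely quoted Bevis relation and standard refractivity constants (literature values, not the recalculated GTm coefficients of this paper):

```python
# Zenith wet delay -> precipitable water: PW = Pi * ZWD, where Pi depends
# on the weighted mean temperature T_m of the atmospheric water vapour.
RHO_W = 1000.0      # density of liquid water, kg/m^3
R_V = 461.5         # specific gas constant of water vapour, J/(kg K)
K2P = 0.221         # refractivity constant k2', K/Pa
K3 = 3.739e3        # refractivity constant k3, K^2/Pa

def t_m_bevis(t_s):
    """Bevis T_m-T_s relation: mean temperature from surface temperature (K)."""
    return 70.2 + 0.72 * t_s

def pw_factor(t_m):
    """Dimensionless conversion factor Pi with PW = Pi * ZWD."""
    return 1.0e6 / (RHO_W * R_V * (K3 / t_m + K2P))

t_m = t_m_bevis(288.15)          # about 278 K for a 15 C surface
pi_factor = pw_factor(t_m)       # typically near 0.15
```

A 1% error in T_m maps into roughly a 1% error in the conversion factor, which is why an accurate global T_m model matters for GNSS meteorology.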
19.
Well-credited and widely used ionospheric models, such as the International Reference Ionosphere (IRI) or NeQuick, describe the variation of the electron density with height by means of a piecewise profile tied to the F2-peak parameters: the electron density, NmF2, and the height, hmF2. Accurate values of these parameters are crucial for retrieving reliable electron density estimations from those models. When direct measurements of these parameters are not available, the models compute them using the so-called ITU-R database, which was established in the early 1960s. This paper presents a technique aimed at routinely updating the ITU-R database using radio occultation electron density profiles derived from GPS measurements gathered by low Earth orbit satellites. Before being used, these radio occultation profiles are validated by fitting an electron density model to them. A re-weighted least-squares algorithm is used to down-weight unreliable measurements (occasionally, entire profiles) and to retrieve NmF2 and hmF2 values, together with their error estimates, from the profiles. These values are used to update the database monthly; the database consists of two sets of ITU-R-like coefficients that could easily be implemented in the IRI or NeQuick models. The technique was tested with radio occultation electron density profiles delivered to the community by the COSMIC/FORMOSAT-3 mission team. Tests were performed for solstice and equinox seasons in high and low solar activity conditions. The global mean error of the resulting maps, estimated by the least-squares technique, is between 0.5 × 10^10 and 3.6 × 10^10 el/m³ for the F2-peak electron density (equivalent to 7% of the value of the estimated parameter) and from 2.0 to 5.6 km for the height (~2%).
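The re-weighted least-squares down-weighting can be sketched with a generic Huber-type IRLS loop on a synthetic linear model (the weight function and tuning constant are textbook choices, not necessarily the paper's):

```python
import numpy as np

# Iteratively re-weighted least squares: fit, compute residuals, shrink
# the weight of observations with large residuals, refit. Outliers stand
# in for unreliable profile measurements.
rng = np.random.default_rng(5)
A = np.column_stack([np.ones(80), np.linspace(0.0, 1.0, 80)])
x_true = np.array([1.0, 2.0])
y = A @ x_true + 0.01 * rng.normal(size=80)
y[::10] += 5.0                           # gross outliers

x = np.linalg.lstsq(A, y, rcond=None)[0]  # ordinary LS start
for _ in range(20):
    r = y - A @ x
    s = 1.4826 * np.median(np.abs(r))     # robust scale (MAD)
    w = np.minimum(1.0, 1.345 * s / np.maximum(np.abs(r), 1e-12))
    sw = np.sqrt(w)                        # weighted LS via row scaling
    x = np.linalg.lstsq(A * sw[:, None], sw * y, rcond=None)[0]
```

After a few iterations the outliers carry near-zero weight, so the fit is driven by the consistent majority of the data, which is the behaviour wanted when occasional whole profiles are unreliable.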
20.
A. Lannes 《Journal of Geodesy》2013,87(4):323-335
The LLL algorithm, introduced by Lenstra et al. (Math Ann 261:515–534, 1982), plays a key role in many fields of applied mathematics. In particular, it is used as an effective numerical tool for preconditioning the integer least-squares problems arising in high-precision geodetic positioning and Global Navigation Satellite Systems (GNSS). In 1992, Teunissen developed a method for solving these nearest-lattice-point (NLP) problems. This method is referred to as Lambda (for Least-squares AMBiguity Decorrelation Adjustment). The preconditioning stage of Lambda corresponds to its decorrelation algorithm. From an epistemological point of view, the latter was devised through an innovative statistical approach completely independent of the LLL algorithm. Recent papers pointed out some similarities between the LLL algorithm and the Lambda decorrelation algorithm. We try to clarify this point in the paper. We first introduce a parameter measuring the orthogonality defect of the integer basis in which the NLP problem is solved: the LLL-reduced basis of the LLL algorithm, or the Λ-basis of the Lambda method. With regard to this problem, the potential qualities of these bases can then be compared. The Λ-basis is built by working at the level of the variance-covariance matrix of the float solution, while the LLL-reduced basis is built by working at the level of its inverse. As a general rule, the orthogonality defect of the Λ-basis is greater than that of the corresponding LLL-reduced basis; these bases are however very close to one another. To specify this tight relationship, we present a method that provides the dual LLL-reduced basis of a given Λ-basis. As a consequence of this basic link, all the recent developments made on the LLL algorithm can be applied to the Lambda decorrelation algorithm.
This point is illustrated in a concrete manner: we present a parallel Λ-type decorrelation algorithm derived from the parallel LLL algorithm of Luo and Qiao (Proceedings of the fourth international C* conference on computer science and software engineering, ACM Int Conf P Series, ACM Press, pp 93–101, 2012).
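The orthogonality-defect measure and the elementary size-reduction step shared by LLL and Lambda can be sketched in two dimensions; the basis below is a toy example, not a GNSS ambiguity covariance:

```python
import numpy as np

# Orthogonality defect: product of the column norms divided by |det|;
# it equals 1 exactly when the basis is orthogonal.
def orth_defect(B):
    return np.prod(np.linalg.norm(B, axis=0)) / abs(np.linalg.det(B))

assert np.isclose(orth_defect(np.eye(2)), 1.0)

B = np.array([[1.0, 49.0],
              [0.0, 1.0]])               # highly non-orthogonal basis
# One integer size-reduction (Gauss) step, b2 <- b2 - round(mu) * b1,
# the 2-D building block of both LLL reduction and Lambda decorrelation.
mu = (B[:, 1] @ B[:, 0]) / (B[:, 0] @ B[:, 0])
B_red = B.copy()
B_red[:, 1] -= round(mu) * B[:, 0]
assert orth_defect(B_red) < orth_defect(B)
```

The integer rounding keeps the lattice unchanged (the transformation is unimodular) while the orthogonality defect drops, which is exactly what makes the subsequent nearest-lattice-point search cheap.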