Similar Articles
20 similar articles found (search time: 15 ms)
1.
A technique is presented for the development of a high-precision and high-resolution mean sea surface model utilising radar altimetric sea surface heights extracted from the geodetic phase of the European Space Agency (ESA) ERS-1 mission. The methodology uses a cubic-spline fit of dual ERS-1 and TOPEX crossovers for the minimisation of radial orbit error. Fourier domain processing techniques are used for spectral optimal interpolation of the mean sea surface in order to reduce residual errors within the initial model. The EGM96 gravity field and sea surface topography models are used as reference fields as part of the determination of spectral components required for the optimal interpolation algorithm. A comparison between the final model and 10 cycles of TOPEX sea surface heights shows differences of between 12.3 and 13.8 cm root mean square (RMS). An un-optimally interpolated surface comparison with TOPEX data gave differences of between 15.7 and 16.2 cm RMS. The methodology results in an approximately 10-cm improvement in accuracy. Further improvement will be attained with the inclusion of stacked altimetry from both current and future missions. Received: 22 December 1999 / Accepted: 6 November 2000

2.
Using a local maximum filter, individual trees were extracted from a 1 m spatial resolution IKONOS image and represented as points. The spatial pattern of individual trees was determined to represent forest age (a surrogate for forest structure). Point attributes, based on the spatial pattern of trees, were generated via nearest neighbour statistics and used as the basis for aggregating points into forest structure units. The forest structure units allowed for the mapping of a forested area into one of three age categories: young (1–20 years), intermediate (21–120 years), and mature (>120 years). This research indicates a new approach to image processing, where objects generated from the processing of image data (rather than pixels or spectral values) are subjected to spatial statistical analysis to estimate an attribute relating to an aspect of forest structure. Received: 22 April 2002 / Accepted: 23 November 2002
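The local-maximum filtering step above can be illustrated with a minimal sketch (my own toy version, not the paper's implementation): a strict 3×3 moving-window maximum marks pixels brighter than all eight neighbours as candidate tree tops.

```python
import numpy as np

def local_maxima(img):
    """Return a boolean mask of strict 3x3 local maxima (border excluded)."""
    img = np.asarray(img, dtype=float)
    mask = np.zeros_like(img, dtype=bool)
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            window = img[i - 1:i + 2, j - 1:j + 2]
            # strict maximum: centre exceeds every one of its 8 neighbours
            if img[i, j] > np.max(np.delete(window.ravel(), 4)):
                mask[i, j] = True
    return mask

# toy canopy brightness raster with two isolated bright crowns
canopy = np.array([
    [1, 1, 1, 1, 1],
    [1, 5, 1, 1, 1],
    [1, 1, 1, 6, 1],
    [1, 1, 1, 1, 1],
])
tops = local_maxima(canopy)
```

A production detector would additionally smooth the image and use a crown-size-dependent window, which this sketch omits.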

3.
This paper presents and demonstrates a general approach to solving spatial dynamic models in continuous space and continuous time that characterize the behaviour of intertemporally and interspatially optimizing agents and estimating from discrete data the parameters of such models. The approach involves the use of a projection method to solve the models and a quasi-Newton algorithm to update quasi-FIML parameter estimates. Received: 26 July 2000 / Accepted: 31 January 2001

4.
Modeling line or surface phenomena digitally involves two tasks: discretization of the phenomenon, which yields a finite set of data, and subsequent interpolation, which reconstructs the continuum. Many mathematical techniques exist for the latter task, and most methods require a number of parameters to be specified. The shape of digital line or surface models between the data points (that is, the local shape) and the information derived from these models both depend on the selected method and, possibly, on the specification of parameters. The reconstruction of the continuum thus introduces uncertainty. This paper examines the sources and effects of this type of uncertainty. For this purpose, the modeling of lines and surfaces is separated into an abstraction, an implementation, and measurement. The individual factors affecting uncertainty of local shape in each step are identified and discussed. The paper concludes that local shape uncertainty, unlike positional uncertainty of given data, cannot be numerically assessed. Instead, measures of plausibility have to be used to denote the quality of digital models of lines and surfaces. Finally, the concept and potential problems of future empirical investigations are discussed.
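A small illustration of the local-shape uncertainty discussed above (my own example, not from the paper): the same four data points reconstructed by linear interpolation and by a single global cubic disagree between the samples, even though both honour the data exactly.

```python
import numpy as np

# four samples of some line phenomenon
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 0.0, 1.0])

def linear(xq):
    """Piecewise-linear reconstruction of the continuum."""
    return np.interp(xq, x, y)

# global cubic polynomial through the same 4 points
cubic_coeffs = np.polyfit(x, y, 3)

def cubic(xq):
    return np.polyval(cubic_coeffs, xq)

# midway between the first two samples the two models disagree,
# even though both pass through every data point
gap = abs(linear(0.5) - cubic(0.5))
```

Both reconstructions are "correct" with respect to the data; the disagreement between them is exactly the local-shape uncertainty the paper argues cannot be resolved from the data alone.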

5.
Research on a 3-D Volume-Element Topological Data Model for Geological Modeling
Based on a discussion of the basic characteristics of geological objects and the basic requirements of computer-based 3-D geological modeling, an object-oriented 3-D volume-element topological data model is proposed. In this data model, geological objects are abstracted, using object-oriented methods, into points, lines, surfaces, and volumes; the volume class is further divided into four sub-classes: composite volumes, complex volumes, simple volumes, and volume elements. Twelve topological relationships and the corresponding data structures are designed for all object classes.

6.
High-precision registration of LiDAR point clouds is key to achieving overall data integrity and guaranteeing topological reconstruction of the 3-D surfaces of spatial objects. This paper proposes a LiDAR point-cloud registration model based on Plücker lines. Plücker lines are used to represent corresponding lines between the point cloud to be registered and the reference point cloud; from the geometric-topological condition that corresponding Plücker lines must coincide, Plücker-line collinearity condition equations are established, and least-squares adjustment is then used to determine the relative pose parameters between the two point clouds. The results show that the Plücker-line collinearity registration model imposes strong geometric constraints and achieves high registration accuracy.
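A hedged sketch of the line representation used above (my own code; the paper's collinearity equations and adjustment are not reproduced): the Plücker coordinates of a 3-D line through points p and q are (d, m), with direction d = q − p and moment m = p × q; two descriptions of the same line yield proportional coordinates, which is the coincidence condition the registration model exploits.

```python
import numpy as np

def plucker(p, q):
    """Plücker coordinates (direction, moment) of the line through p and q."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    d = q - p            # direction vector
    m = np.cross(p, q)   # moment about the origin
    return d, m

d1, m1 = plucker([0, 0, 0], [1, 1, 0])
d2, m2 = plucker([2, 2, 0], [3, 3, 0])   # same line, different sample points

# Plücker constraint: direction is always orthogonal to moment
ok1 = abs(np.dot(d1, m1)) < 1e-12
# same line -> directions parallel (and moments proportional)
parallel = np.allclose(np.cross(d1, d2), 0.0)
```

In the paper's setting, the residuals of the coincidence condition between corresponding lines drive the least-squares estimation of the pose parameters.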

7.
Geoprivacy protection is a significant concern when sharing data. To support sustainable land management by leveraging existing agricultural data, research is needed to identify how the polygon nature of static field parcels can be obfuscated to allow data sharing among individuals and organizations. In this study, five adaptive polygon-based obfuscation methods, PN*Rand, PDonut-k, PDensity-k, PAHilb, and PDonut_AHilb, were developed and applied to the Irish Nutrient Management Planning Online (NMP Online) agricultural dataset. The polygon-based obfuscation methods introduced in this study were designed with consideration of the properties of spatial polygon objects, including the spatial coordinates, shape and size of the polygon, topology, and spatial relationships between adjacent polygons, that can be used to identify real-world objects. These methods were developed to guarantee that there is no false identification and no non-unique obfuscation, which is important for static polygon objects in terms of accuracy and privacy protection. Qualitative approaches were developed to identify the optimal values of the inner and outer radii of the donut shape based on k-anonymity satisfaction and subsequently obtain the optimal value of k-anonymity. Several evaluation methods were implemented to compare the methods' performance. Density-based methods, particularly PDonut_AHilb, provide the best trade-off between field parcel confidentiality and spatial pattern preservation and should be considered by researchers and practitioners obfuscating polygon data.
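For intuition, a simplified, hypothetical donut-style displacement (the names r_inner/r_outer and the area-uniform sampling are my assumptions, not the paper's PDonut-k definition): each location is moved by a random vector whose length lies between an inner and an outer radius, so the released position is never the true one but stays within a known band.

```python
import math
import random

def donut_displace(x, y, r_inner, r_outer, rng):
    """Displace point (x, y) to a random location inside the annulus."""
    theta = rng.uniform(0.0, 2.0 * math.pi)
    # sample the radius so that density is uniform per unit area of the annulus
    r = math.sqrt(rng.uniform(r_inner ** 2, r_outer ** 2))
    return x + r * math.cos(theta), y + r * math.sin(theta)

rng = random.Random(42)
nx, ny = donut_displace(0.0, 0.0, 50.0, 100.0, rng)
dist = math.hypot(nx, ny)   # guaranteed to lie in [50, 100]
```

The paper's polygon variants additionally have to preserve parcel shape, topology, and adjacency, which a pure point displacement like this does not address.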

8.
Terrestrial Laser Scanning (TLS) is increasingly being used to collect mm-resolution surface data from a broad range of environments. When scanning complex surfaces, interactions between the surface topography, laser footprint and scanner precision can introduce errors into the point cloud. Quantification of these errors is, however, limited by the availability of independent measurement techniques. This research presents simulated TLS as a new approach to error quantification. Two sets of experiments are presented. The first set demonstrates that simulated TLS is able to reproduce real TLS data from a plane and a pebble. The second set uses simulated TLS to assess a methodology developed for the collection and processing of field TLS data. Simulated TLS data are collected from surfaces up to ~1 m² created from regular arrays of uniform spheres (sphere diameters of 10 to 100 mm) and irregular arrays of mixed spheres (median sphere diameters of 16 to 94 mm). These data were analysed to (i) assess the effectiveness of the processing methodology at removing erroneous points; (ii) quantify the magnitude of errors in a digital surface model (DSM) interpolated from the processed point cloud; and (iii) investigate the extent to which the interpolated DSMs retained the geometric properties of the original surfaces. The processing methodology was found to be effective, especially on data from coarser surfaces, with the retained points typically having an inter-quartile range (IQR) of point errors of ~2 mm. DSM errors varied as a function of sphere size and packing, with DSM errors having an IQR of ~2 mm for the regular surfaces and ~4 mm for the irregular surfaces. Finally, whilst in the finer surfaces point and DSM errors were a substantial proportion of the sphere diameters, geometrical analysis indicated that the DSMs still reproduced properties of the original surface such as semivariance and some percentiles of the surface elevation distribution.

9.
The phenomenon known as 'terrain' is a continuous surface. However, when a digital terrain representation is based on a regular raster (i.e. a DEM) the digital surface is commonly not continuous. This is the case for the derivation of variables such as slope, aspect, and curvature values as performed in today's Geographic Information Systems (GIS). Often, there is no surface specified at all, as, for instance, when flow lines or watersheds are constructed. The discrepancy between the phenomenon to be modelled and its digital representation causes the terrain analysis results to be less accurate than they could be. Furthermore, if more than one type of terrain information is derived the results are likely to be based on different specifications of the seemingly same terrain surface. The combined application of the derivation results will likely introduce inconsistencies. This paper suggests founding the specification of digital terrain representations on a careful analysis of the properties of the phenomenon. The paper details the reasons for, and advantages of, continuous surface representations and emphasises the importance of a comprehensive documentation of the conceptual models underlying digital terrain representations. A review of suitable interpolation approaches for the specification of terrain surfaces is given. The paper discusses how the resulting digital surfaces are analysed and how measurement uncertainty may be accounted for.
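As a concrete example of the kind of discretised derivative the paper discusses (generic GIS practice, not this paper's own method): slope computed from a raster DEM via central differences, which implicitly assumes a surface specification that common GIS workflows never state.

```python
import numpy as np

def slope_deg(dem, cellsize):
    """Slope in degrees from central differences (border cells excluded)."""
    dem = np.asarray(dem, float)
    # central differences in x and y over the interior cells
    dzdx = (dem[1:-1, 2:] - dem[1:-1, :-2]) / (2.0 * cellsize)
    dzdy = (dem[2:, 1:-1] - dem[:-2, 1:-1]) / (2.0 * cellsize)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

# inclined plane rising 1 m per cell in x: slope should be atan(1) = 45 degrees
dem = np.tile(np.arange(5.0), (5, 1))
s = slope_deg(dem, cellsize=1.0)
```

Different difference stencils (or an explicit interpolated surface) would give different slope values from the same DEM, which is precisely the inconsistency the paper warns about.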

10.
A new methodology for spatial interpolation of elevation data with variable density is proposed. The method is based on two-step interpolation and data processing to minimize interpolation artifacts caused by variable data density. In the first step, the parameterization of the spline interpolation method is focused on areas with sparse data that need smoother interpolation. Then, the resulting surface in these areas is randomly sampled to densify the original data set. In the second step, the parameterization of the interpolation method is focused on areas with a required high level of detail. The final resulting surface combines the properties of surfaces optimized for different data densities and levels of detail.
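A schematic 1-D rendering of the two-step idea (my simplification; the paper works with spline surfaces): a smooth first-pass fit is sampled in the sparse region to densify the data before a second, detail-focused interpolation is run on the combined set.

```python
import numpy as np

# densely sampled area plus a sparsely sampled span
x_dense = np.linspace(0.0, 1.0, 11)
x_sparse = np.array([2.0, 5.0])
x = np.concatenate([x_dense, x_sparse])
y = np.sin(x)

# step 1: smooth fit over the sparse span (a low-order polynomial here
# stands in for a smoothly parameterized spline)
coeffs = np.polyfit(x, y, 3)
x_fill = np.linspace(1.2, 4.8, 7)        # random sampling replaced by a grid
y_fill = np.polyval(coeffs, x_fill)

# step 2: detail-focused interpolation on the densified data set
x2 = np.concatenate([x, x_fill])
y2 = np.concatenate([y, y_fill])
order = np.argsort(x2)
x2, y2 = x2[order], y2[order]
y_at_3 = np.interp(3.0, x2, y2)          # query inside the formerly sparse span
```

The point of the construction is that the second interpolation never sees a large data gap, so its detail-oriented parameterization cannot produce the oscillation artifacts a single-pass fit would.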

11.
Detecting and Analyzing Mobility Hotspots using Surface Networks
Capabilities for collecting and storing data on mobile objects have increased dramatically over the past few decades. A persistent difficulty is summarizing large collections of mobile objects. This article develops methods for extracting and analyzing hotspots or locations with relatively high levels of mobility activity. We use kernel density estimation (KDE) to convert a large collection of mobile objects into a smooth, continuous surface. We then develop a topological algorithm to extract critical geometric features of the surface; these include critical points (peaks, pits and passes) and critical lines (ridgelines and course-lines). We connect the peaks and corresponding ridgelines to produce a surface network that summarizes the topological structure of the surface. We apply graph theoretic indices to analytically characterize the surface and its changes over time. To illustrate our approach, we apply the techniques to taxi cab data collected in Shanghai, China. We find increases in the complexity of the hotspot spatial distribution during normal activity hours in the late morning, afternoon and evening and a spike in the connectivity of the hotspot spatial distribution in the morning as taxis concentrate on servicing travel to work. These results match with scientific and anecdotal knowledge about human activity patterns in the study area.
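The KDE step can be sketched as follows (an isotropic Gaussian kernel on a regular grid is my stand-in for the paper's implementation); the peaks of the resulting surface are the hotspot candidates that the surface network then connects.

```python
import numpy as np

def kde_grid(points, grid_x, grid_y, bandwidth):
    """Unnormalised Gaussian KDE evaluated on a regular grid."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    density = np.zeros_like(gx)
    for px, py in points:
        d2 = (gx - px) ** 2 + (gy - py) ** 2
        density += np.exp(-d2 / (2.0 * bandwidth ** 2))
    return density

# two event clusters: a dense one near (2, 2) and a single event at (7, 7)
pts = [(2.0, 2.0), (2.1, 2.0), (7.0, 7.0)]
xs = ys = np.linspace(0.0, 9.0, 10)
surf = kde_grid(pts, xs, ys, bandwidth=0.8)

# the highest cell of the surface should sit at the two-event cluster
peak = np.unravel_index(np.argmax(surf), surf.shape)
```

The paper goes further: it extracts all critical points and lines of such a surface and studies the resulting network with graph-theoretic indices, which this sketch does not attempt.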

12.
13.
Spaceborne systems such as the Geoscience Laser Altimeter System (GLAS) onboard the Ice, Cloud and land Elevation Satellite (ICESat) collect an unparalleled waveform data set over terrestrial targets, which helps in evaluating global elevation data. In this study we compared a Digital Elevation Surface (DES) generated from Cartosat-1 point data with a DES generated by merging the Cartosat-1 data with ICESat data. The interpolated surfaces were evaluated against differential global positioning system (DGPS) points collected from the study area. The study showed that the DES generated from Cartosat-1 data alone had lower elevation accuracy when compared with the DGPS data, while merging Cartosat-1 point-height data with ICESat/GLAS data resulted in better accuracy; on the practical side, therefore, ICESat/GLAS combined with Cartosat-1 height data can produce a better DES than the Cartosat-1 stereo data alone. The DES was generated using geostatistical interpolation methods, among which the global polynomial method proved better for generating the surface than the other interpolation techniques studied in this work. For the co-kriging method, accuracy decreased compared to the kriging interpolation, owing to the complexity of the parameters used for interpolation. On the theoretical side, no single interpolation technique can easily be declared better than the others, because the outcome depends on the data type, the parameters, and the method of interpolation; more intensive and more focused experiments are therefore needed.

14.
Accurate absolute GPS positioning through satellite clock error estimation
An algorithm for very accurate absolute positioning through Global Positioning System (GPS) satellite clock estimation has been developed. Using International GPS Service (IGS) precise orbits and measurements, GPS clock errors were estimated at 30-s intervals. Compared to values determined by the Jet Propulsion Laboratory, the agreement was at the level of about 0.1 ns (3 cm). The clock error estimates were then applied to an absolute positioning algorithm in both static and kinematic modes. For the static case, an IGS station was selected and the coordinates were estimated every 30 s. The estimated absolute position coordinates and the known values had a mean difference of up to 18 cm with standard deviation less than 2 cm. For the kinematic case, data obtained every second from a GPS buoy were tested and the result from the absolute positioning was compared to a differential GPS (DGPS) solution. The mean differences between the coordinates estimated by the two methods are less than 40 cm and the standard deviations are less than 25 cm. It was verified that this poorer standard deviation on 1-s position results is due to the clock error interpolation from 30-s estimates with Selective Availability (SA). After SA was turned off, higher-rate clock error estimates (such as 1 s) could be obtained by a simple interpolation with negligible corruption. Therefore, the proposed absolute positioning technique can be used to within a few centimeters' precision at any rate by estimating 30-s satellite clock errors and interpolating them. Received: 16 May 2000 / Accepted: 23 October 2000
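A toy illustration (synthetic numbers, not IGS products) of the final step described above: linearly interpolating 30-s satellite clock-error estimates to a 1-s rate, which is benign once the clock drifts smoothly (as it does after SA was turned off).

```python
import numpy as np

t30 = np.arange(0, 121, 30)                            # estimate epochs [s]
clk30 = 1.0e-9 * np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # clock error [s]

t1 = np.arange(0, 121, 1)                              # desired 1-s epochs
clk1 = np.interp(t1, t30, clk30)                       # interpolated clock errors

# a purely linear clock drift is recovered exactly by linear interpolation
max_err = np.max(np.abs(clk1 - 1.0e-9 * t1 / 30.0))
```

Under SA the clock was deliberately dithered between epochs, which is why the same interpolation degraded the 1-s kinematic results in the paper's experiment.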

15.
We present an alternative mathematical technique to contemporary spherical harmonics for approximating the geopotential, based on triangulated spherical spline functions, which are smooth piecewise spherical harmonic polynomials over spherical triangulations. The new method is capable of multi-spatial-resolution modeling and could thus enhance spatial resolutions for regional gravity field inversion using data from space gravimetry missions such as CHAMP, GRACE or GOCE. First, we propose to use the minimal energy spherical spline interpolation to find a good approximation of the geopotential at the orbital altitude of the satellite. Then we explain how to solve Laplace's equation on the Earth's exterior to compute a spherical spline to approximate the geopotential at the Earth's surface. We propose a domain decomposition technique, which can compute an approximation of the minimal energy spherical spline interpolation at the orbital altitude, and a multiple star technique to compute the spherical spline approximation by the collocation method. We prove that the spherical spline constructed by means of the domain decomposition technique converges to the minimal energy spline interpolation. We also prove that the modeled spline geopotential is continuous from the satellite altitude down to the Earth's surface. We have implemented the two computational algorithms and applied them in a numerical experiment using simulated CHAMP geopotential observations computed at satellite altitude (450 km), assuming EGM96 (n_max = 90) is the truth model. We then validate our approach by comparing the computed geopotential values using the resulting spherical spline model down to the Earth's surface with the truth EGM96 values over several study regions. Our numerical evidence demonstrates that the algorithms produce a viable alternative for regional gravity field solution, potentially exploiting the full accuracy of data from space gravimetry missions.
The major advantage of our method is that it allows us to compute the geopotential over the regions of interest as well as enhancing the spatial resolution commensurate with the characteristics of satellite coverage, which could not be done using a global spherical harmonic representation. The results in this paper are based on research supported by the National Science Foundation under grant no. 0327577.

16.
To address continuous map generalization of spatial data, this paper proposes a morphing method for areal features based on matching the endpoints of skeleton lines; by interpolating between two key-scale representations, map data at any intermediate scale can be derived dynamically in real time. First, constrained Delaunay triangulation is applied to the two representations of an areal feature at the larger and smaller scales, and their respective skeleton-line features are extracted. Then, an optimal-subsequence bijection technique is used to match the skeleton endpoints, yielding corresponding feature-point sequences on the polygon boundaries. Finally, piecewise conventional linear interpolation is performed on the partitioned boundaries to obtain multi-scale representations of the areal feature between the initial and final scales. Experimental results show that the algorithm fully accounts for the bend structure of spatial data and produces a good morphing effect for areal features with smooth boundaries; it can be used for continuous map generalization and multi-scale representation of spatial data.

17.
A Lens-Tracking Method for Extracting Linear Features from Colour Images
刘新贵  孙群  张鹏  黄雅娟 《测绘科学》2004,29(3):65-66,72
Scan vectorization of topographic maps is a major way of acquiring vector data. Previous methods usually binarize the colour scanned image, or separate it into layers, before tracing features, which easily causes feature breaks and incomplete data. To overcome the effects of image thinning and layer separation, this paper proposes, for semi-automatic tracing of linear features, an algorithm based on background region thinning followed by matched tracing, called the lens-tracking method, which traces line work directly on 24-bit scanned images. The algorithm was tested with extensive data capture on topographic maps at multiple scales and was successfully applied in digitization production organized by the authors. The results show that it can greatly improve tracing speed and accuracy; after post-processing, the traced data basically meet production requirements and can play a positive role in actual production.

18.
This paper presents a methodology to incorporate both hyperspectral properties and spatial coordinates of pixels in maximum likelihood classification. Indicator kriging of ground data is used to estimate, for each pixel, the prior probabilities of occurrence of classes, which are then combined with spectral-based probabilities within a Bayesian framework. In the case study (mapping of in-stream habitats), accounting for spatial coordinates increases the overall producer's accuracy from 85.8% to 93.8%, while the Kappa statistic rises from 0.74 to 0.88. Best results are obtained using only indicator kriging-based probabilities, with an overall accuracy of 97.2%. Significant improvements are observed for environmentally important units, such as pools (Kappa: 0.17 to 0.74) and eddy drop zones (Kappa: 0.65 to 0.87). The lack of benefit of using hyperspectral information in the present study can be explained by the dense network of ground observations and the high spatial continuity of the field classification, which might be spurious. Received: 12 April 2001 / Accepted: 7 September 2001
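The combination step can be sketched as a per-pixel Bayes rule (my formulation of the standard rule; the paper's exact weighting may differ): the posterior class probability is proportional to the kriging-derived prior times the spectral likelihood.

```python
import numpy as np

def combine(prior, likelihood):
    """Elementwise Bayes combination, renormalised over the classes."""
    post = np.asarray(prior, float) * np.asarray(likelihood, float)
    return post / post.sum()

# per-pixel probabilities for three hypothetical habitat classes
prior = [0.7, 0.2, 0.1]        # from indicator kriging of ground data
likelihood = [0.2, 0.5, 0.3]   # from the spectral classifier

posterior = combine(prior, likelihood)
```

With a strong spatial prior, the posterior can keep the prior's most probable class even when the spectral likelihood favours another, which mirrors the paper's finding that the kriging-based probabilities dominated the result.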

19.
The spline interpolation technique is applied to estimate locally the radial component of a planetary gravity field from residual acceleration data along a special direction, the direction of observation from the Earth. After the presentation of the theoretical framework, the method is tested on synthetic and real data in the case of Venus. It is shown that this spline technique can be used successfully to build local models of radial gravity fields at the planet surface. Received: 13 March 1997 / Accepted: 17 November 1998

20.
We propose a methodology for local gravity field modelling from gravity data using spherical radial basis functions. The methodology comprises two steps: in step 1, gravity data (gravity anomalies and/or gravity disturbances) are used to estimate the disturbing potential using least-squares techniques. The latter is represented as a linear combination of spherical radial basis functions (SRBFs). A data-adaptive strategy is used to select the optimal number, location, and depths of the SRBFs using generalized cross validation. Variance component estimation is used to determine the optimal regularization parameter and to properly weight the different data sets. In the second step, the gravimetric height anomalies are combined with observed differences between global positioning system (GPS) ellipsoidal heights and normal heights. The data combination is written as the solution of a Cauchy boundary-value problem for the Laplace equation. This allows removal of the non-uniqueness of the problem of local gravity field modelling from terrestrial gravity data. At the same time, existing systematic distortions in the gravimetric and geometric height anomalies are also absorbed into the combination. The approach is used to compute a height reference surface for the Netherlands. The solution is compared with NLGEO2004, the official Dutch height reference surface, which has been computed using the same data but a Stokes-based approach with kernel modification and a geometric six-parameter “corrector surface” to fit the gravimetric solution to the GPS-levelling points. A direct comparison of both height reference surfaces shows an RMS difference of 0.6 cm; the maximum difference is 2.1 cm. A test at independent GPS-levelling control points, confirms that our solution is in no way inferior to NLGEO2004.
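A minimal planar analogue of the SRBF least-squares step (Gaussian radial basis functions and a 1-D toy signal are my stand-ins for the spherical basis and the gravity data): the signal is written as a linear combination of basis functions centred at chosen nodes, and the coefficients are solved for in a least-squares sense.

```python
import numpy as np

def rbf_design(x_obs, centres, width):
    """Design matrix of Gaussian radial basis functions at the given centres."""
    return np.exp(-((x_obs[:, None] - centres[None, :]) ** 2)
                  / (2.0 * width ** 2))

x_obs = np.linspace(0.0, 10.0, 40)
signal = np.sin(x_obs)                   # toy "disturbing potential" samples

centres = np.linspace(0.0, 10.0, 12)     # node locations (fixed here; the
A = rbf_design(x_obs, centres, width=1.0)  # paper selects them adaptively)
coeffs, *_ = np.linalg.lstsq(A, signal, rcond=None)

recon = A @ coeffs
rms = np.sqrt(np.mean((recon - signal) ** 2))
```

The paper's additional machinery, generalized cross validation for node selection, variance component estimation for weighting, and regularization, addresses exactly the choices this sketch hard-codes.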


Copyright©北京勤云科技发展有限公司  京ICP备09084417号