Similar Literature
20 similar documents found
1.
This paper reports the results of an in-depth study which investigated two algorithms for line simplification and caricatural generalization (namely, those developed by Douglas and Peucker, and Visvalingam, respectively) in the context of a wider program of research on scale-free mapping. The use of large-scale data for man-designed objects, such as roads, has led to a better understanding of the properties of these algorithms and of their value within the spectrum of scale-free mapping. The Douglas-Peucker algorithm is better at minimal simplification. The large-scale data for roads makes it apparent that Visvalingam's technique is not only capable of removing entire scale-related features, but that it does so in a manner which preserves the shape of retained features. This technique offers some prospects for the construction of scale-free databases since it offers some scope for achieving balanced generalizations of an entire map, consisting of several complex lines. The results also suggest that it may be easier to formulate concepts and strategies for automatic segmentation of in-line features using large-scale road data and Visvalingam's algorithm. In addition, the abstraction of center lines may be facilitated by the inclusion of additional filtering rules with Visvalingam's algorithm.
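For readers unfamiliar with the first of the two algorithms compared above, the following minimal Java sketch illustrates the recursive Douglas-Peucker procedure: the vertex farthest from the anchor-floater chord is retained whenever its offset exceeds a tolerance. The names (`simplify`, `perpendicularDistance`) are illustrative only and are not taken from either paper.

```java
import java.util.ArrayList;
import java.util.List;

/** Minimal Douglas-Peucker line simplification sketch (illustrative only). */
public class DouglasPeucker {

    /** A simple 2-D vertex. */
    public record Point(double x, double y) {}

    /** Returns the subset of vertices kept for the given tolerance. */
    public static List<Point> simplify(List<Point> line, double tolerance) {
        List<Point> result = new ArrayList<>();
        result.add(line.get(0));
        simplifySegment(line, 0, line.size() - 1, tolerance, result);
        result.add(line.get(line.size() - 1));
        return result;
    }

    // Recursively keeps the vertex farthest from the chord (first..last)
    // whenever its perpendicular offset exceeds the tolerance.
    private static void simplifySegment(List<Point> line, int first, int last,
                                        double tolerance, List<Point> result) {
        double maxDist = 0.0;
        int index = -1;
        for (int i = first + 1; i < last; i++) {
            double d = perpendicularDistance(line.get(i), line.get(first), line.get(last));
            if (d > maxDist) {
                maxDist = d;
                index = i;
            }
        }
        if (index >= 0 && maxDist > tolerance) {
            simplifySegment(line, first, index, tolerance, result);
            result.add(line.get(index));
            simplifySegment(line, index, last, tolerance, result);
        }
    }

    /** Perpendicular distance from p to the infinite line through a and b. */
    private static double perpendicularDistance(Point p, Point a, Point b) {
        double dx = b.x() - a.x();
        double dy = b.y() - a.y();
        double length = Math.hypot(dx, dy);
        if (length == 0.0) {
            return Math.hypot(p.x() - a.x(), p.y() - a.y());
        }
        return Math.abs(dy * (p.x() - a.x()) - dx * (p.y() - a.y())) / length;
    }
}
```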

2.
Visvalingam's algorithm was designed for caricatural line generalization. A distinction must be made between the algorithm and its operational definition, which includes the metric used to drive it. When the algorithm was first introduced, it was demonstrated using the concept of the effective area of triangles. It was noted that alternative metrics could be used and that the metrics could be weighted, for example to take account of shape.

Ordnance Survey (Great Britain) and others are using Visvalingam's algorithm for generalizing coastlines and other natural features, with complex parameter-driven functions to weight the original metric. This paper shows how free software and data were used to scrutinize the implications of one of Matthew Bloch's simple and transparent weighting functions. The results look promising when compared with manually produced mid- and small-scale maps, and they encourage further research focussed on weighting functions and related topics, such as self-intersection of lines and model-based generalization. The paper discusses why weights were used in some projects. It comments on their range of applicability and reiterates the original guidance provided for the use of weights. It also demonstrates how weights can undermine the algorithm's capacity to draw caricatures with very few points. The paper provides sufficient background and links to the authors' test data and to open source software for the benefit of others wishing to undertake research in line generalization using Visvalingam's algorithm.
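The metric referred to above is the "effective area" of the triangle each interior vertex forms with its two neighbours; the vertex with the smallest (possibly weighted) value is eliminated first. The sketch below shows the unweighted metric together with a purely hypothetical shape weight; it is not Bloch's weighting function, which is not reproduced here.

```java
/**
 * Illustrative sketch of the metric driving Visvalingam's algorithm: each interior
 * vertex is scored by the "effective area" of the triangle it forms with its two
 * neighbours, and the lowest-scoring vertex is eliminated first. The weighting
 * function below is a hypothetical placeholder, not Bloch's function.
 */
public class EffectiveArea {

    /** Unweighted effective area of the triangle (a, b, c), where b is the candidate vertex. */
    public static double effectiveArea(double ax, double ay,
                                       double bx, double by,
                                       double cx, double cy) {
        return 0.5 * Math.abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay));
    }

    /**
     * A hypothetical shape weight: the triangle's area is scaled up when the candidate
     * vertex sits far off the chord of its neighbours, so spiky detail survives longer.
     * The functional form is an assumption made for illustration only.
     */
    public static double weightedArea(double ax, double ay,
                                      double bx, double by,
                                      double cx, double cy) {
        double area = effectiveArea(ax, ay, bx, by, cx, cy);
        double base = Math.hypot(cx - ax, cy - ay);             // length of the chord a-c
        double height = base == 0.0 ? 0.0 : 2.0 * area / base;  // offset of b from the chord
        double spikiness = base == 0.0 ? 0.0 : height / base;   // crude shape descriptor
        return area * (1.0 + spikiness);                        // assumed weighting form
    }
}
```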

3.
A new method of cartographic line simplification is presented. Regular hexagonal tessellations are used to sample lines for simplification, where hexagon width, reflecting sampling fidelity, is varied in proportion to target scale and drawing resolution. Tesserae constitute loci at which new sets of vertices are defined by vertex clustering quantization, and these vertices are used to compose simplified lines retaining only visually resolvable detail at target scale. Hexagon scaling is informed by the Nyquist–Shannon sampling theorem. The hexagonal quantization algorithm is also compared to an implementation of the Li–Openshaw raster-vector algorithm, which undertakes a similar process using square raster cells. Lines produced by either algorithm using like tessera widths are compared for fidelity to the original line in two ways: Hausdorff distances to the original lines are statistically analyzed, and simplified lines are presented against input lines for visual inspection. Results show that hexagonal quantization offers advantages over square tessellations for vertex clustering line simplification in that simplified lines are significantly less displaced from input lines. Visual inspection suggests lines produced by hexagonal quantization retain informative geographical shapes for greater differences in scale than do those produced by quantization in square cells. This study yields a scale-specific cartographic line simplification algorithm, following Li and Openshaw's natural principle, which is readily applicable to cartographic linework. Open-source Java code implementing the hexagonal quantization algorithm is available online.  相似文献   
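A hedged sketch of the general idea of vertex clustering on a hexagonal tessellation is given below: each input vertex is assigned to the hexagon containing it, and consecutive vertices falling in the same hexagon are collapsed to that hexagon's centre. This is an illustration only, not the authors' published code (which, for example, may place the new vertex at a cluster centroid rather than the cell centre); the hexagon indexing uses standard axial coordinates for pointy-top hexagons.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Hedged sketch of vertex clustering quantization on a regular hexagonal tessellation.
 * Consecutive vertices that fall in the same hexagon are collapsed to the hexagon centre.
 * Illustration only; not the authors' implementation.
 */
public class HexQuantize {

    public record Point(double x, double y) {}

    /** Collapses runs of vertices that share a hexagon of the given width (flat-to-flat). */
    public static List<Point> quantize(List<Point> line, double hexWidth) {
        double size = hexWidth / Math.sqrt(3.0);   // centre-to-corner radius of a pointy-top hexagon
        List<Point> out = new ArrayList<>();
        long prevQ = Long.MIN_VALUE, prevR = Long.MIN_VALUE;
        for (Point p : line) {
            // Fractional axial coordinates of the vertex.
            double q = (Math.sqrt(3.0) / 3.0 * p.x() - p.y() / 3.0) / size;
            double r = (2.0 / 3.0 * p.y()) / size;
            long[] cell = roundAxial(q, r);
            if (cell[0] != prevQ || cell[1] != prevR) {        // a new hexagon has been reached
                double cx = size * (Math.sqrt(3.0) * cell[0] + Math.sqrt(3.0) / 2.0 * cell[1]);
                double cy = size * (1.5 * cell[1]);
                out.add(new Point(cx, cy));                    // emit the hexagon centre
                prevQ = cell[0];
                prevR = cell[1];
            }
        }
        return out;
    }

    // Rounds fractional axial coordinates to the nearest hexagon (cube rounding).
    private static long[] roundAxial(double q, double r) {
        double x = q, z = r, y = -x - z;
        long rx = Math.round(x), ry = Math.round(y), rz = Math.round(z);
        double dx = Math.abs(rx - x), dy = Math.abs(ry - y), dz = Math.abs(rz - z);
        if (dx > dy && dx > dz)  rx = -ry - rz;
        else if (dy > dz)        ry = -rx - rz;
        else                     rz = -rx - ry;
        return new long[] { rx, rz };
    }
}
```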

4.
5.
This paper is concerned with using linear features in aerial triangulation. Without loss of generality, the focus is on straight lines with the attempt to treat tie lines in the same fashion as tie points. The parameters of tie lines appear in the block adjustment like the tie points do. This requires a unique representation of lines in object space. We propose a four-parameter representation that also offers a meaningful stochastic interpretation of the line parameters. The proposed line representation lends itself to a parameterized form, allowing use of the collinearity model for expressing orientation and tie line parameters as a function of points measured on image lines. The paper describes in detail the derivation of the extended collinearity model and discusses the advantages of this new approach compared to the standard coplanarity model that is used in line photogrammetry. The intention of the paper is to make a contribution to feature-based aerial triangulation on the algorithmic level.
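As background, the standard point-based collinearity condition that the extended model builds on can be written as follows (assumed notation; the paper's four-parameter line representation and its substitution into these equations are not reproduced here):

```latex
% Standard collinearity equations (assumed notation). In the paper's approach the
% object point X on a tie line is itself expressed through the line parameters,
% so the same equations relate image measurements on a line to those parameters.
\begin{align}
x &= x_0 - f\,\frac{U}{W}, &
y &= y_0 - f\,\frac{V}{W}, &
\begin{pmatrix} U \\ V \\ W \end{pmatrix} &= R\,\bigl(\mathbf{X} - \mathbf{X}_C\bigr),
\end{align}
```

where (x_0, y_0, f) are the interior orientation parameters, R the rotation matrix from object to image space, X the object point and X_C the perspective centre.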

6.
A straight-line matching algorithm for aerial images constrained by corresponding points and elevation planes
Addressing the difficulties of straight-line matching and the effectiveness of matching constraints, this paper proposes a straight-line matching algorithm for aerial images constrained by corresponding points and elevation planes. Building on edge-point matching results and extracted straight lines, the algorithm first uses corresponding points in the neighbourhood of a line to determine candidate lines and the elevation of the line's projection plane, and then determines corresponding lines by combining object-space and image-space similarity constraints. Next, "one-to-many" matches are consolidated according to the line index, and multiple lines in the result are merged to obtain "one-to-one" corresponding lines. Finally, an "image space - object space - image space" mapping scheme is used to determine the corresponding endpoints of corresponding lines. Line-matching experiments on aerial images with typical texture characteristics show that the proposed algorithm obtains reliable matching results.

7.
To address continuous map generalization of spatial data, this paper proposes a morphing method for areal features based on matching skeleton-line endpoints, which derives map data at any intermediate scale in real time and dynamically by interpolating between two key representations. First, the two representations of an areal feature at the larger and smaller scales are each triangulated with a constrained Delaunay triangulation and their skeleton-line features are extracted. Then, an optimal-subsequence bijection optimization technique is used to match the skeleton endpoints, yielding corresponding sequences of feature points on the polygon boundaries. Finally, piecewise linear interpolation is carried out on the partitioned boundaries to obtain multi-scale representations of the areal feature between the start and end scales. Experimental results show that the algorithm fully accounts for the bend structure of spatial data, produces good morphing results for areal features with smooth boundaries, and can be used for continuous map generalization and multi-scale representation of spatial data.
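A minimal sketch of the final interpolation step follows, assuming the two boundary representations have already been matched and resampled into vertex sequences of equal length (the skeleton-endpoint matching itself is not shown); the names used are illustrative.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Minimal sketch of scale interpolation between two matched boundary vertex sequences:
 * the i-th vertex of the large-scale boundary corresponds to the i-th vertex of the
 * small-scale boundary. Endpoint matching is assumed to have been done beforehand.
 */
public class ScaleInterpolation {

    public record Point(double x, double y) {}

    /** Linear blend: t = 0 returns the large-scale shape, t = 1 the small-scale shape. */
    public static List<Point> interpolate(List<Point> largeScale, List<Point> smallScale, double t) {
        if (largeScale.size() != smallScale.size()) {
            throw new IllegalArgumentException("vertex sequences must be matched to equal length");
        }
        List<Point> result = new ArrayList<>(largeScale.size());
        for (int i = 0; i < largeScale.size(); i++) {
            Point a = largeScale.get(i);
            Point b = smallScale.get(i);
            result.add(new Point((1.0 - t) * a.x() + t * b.x(),
                                 (1.0 - t) * a.y() + t * b.y()));
        }
        return result;
    }
}
```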

8.
The Cartographic Journal, 2013, 50(4): 321-328
Abstract

Map generalisation is an abstraction process that seeks to transform the representation of cartographic objects from the original version into a coarser one. The characteristics of cartographic objects and the arrangement of map features have to be observed and preserved in a generalisation process. A method is developed for typifying drainages while preserving their structural characteristics, i.e. presenting the drainages with a reduced number of rivers under the constraint of preserving the original structure in terms of the type and distribution of the rivers. We apply Töpfer's radical law to calculate the number of rivers to be retained on the generalised map. The drainages share the retained rivers in proportion to the number of their tributaries. In each of the drainages, the shared number is divided among the rivers based on the dendritic decomposition of the drainage. We implement and test the method in a Java environment. Results from case studies show that the method effectively preserves the original structures of the drainages on the generalised maps.
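Töpfer's radical law relates the number of features retained to the ratio of the scale denominators. A minimal sketch of its basic form (omitting Töpfer's optional exaggeration and symbol constants) is shown below.

```java
/**
 * Basic form of Töpfer's radical law: n_f = n_a * sqrt(M_a / M_f), where n_a is the
 * number of features on the source map and M_a, M_f are the scale denominators of the
 * source and derived maps. The optional exaggeration and symbol constants are omitted.
 */
public class RadicalLaw {

    /** Number of features to retain when generalising from 1:sourceDenominator to 1:targetDenominator. */
    public static long featuresToRetain(long sourceFeatures, double sourceDenominator, double targetDenominator) {
        return Math.round(sourceFeatures * Math.sqrt(sourceDenominator / targetDenominator));
    }

    public static void main(String[] args) {
        // Example: 120 rivers at 1:25,000 generalised to 1:100,000 keeps about 60 rivers.
        System.out.println(featuresToRetain(120, 25_000, 100_000));
    }
}
```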

9.
王竞雪 (Wang Jingxue), 朱庆 (Zhu Qing), 王伟玺 (Wang Weixi). 《测绘学报》(Acta Geodaetica et Cartographica Sinica), 2017, 46(11): 1850-1858
To address the lack of consideration of relationships between neighbouring line features in single-line matching, and the weak reliability of single-line descriptors where texture is discontinuous, this paper proposes a reliable line-feature matching algorithm for stereo images that takes topological relationships into account. The algorithm first groups the lines extracted from the reference and search images according to basic topological relationships such as the distance and angle between lines. The resulting line groups are then used as matching primitives and, making full use of the topological relationships within each group, are matched by applying, in turn, the epipolar constraint, a homography constraint, a quadrant constraint, and a grey-level correlation constraint over irregular triangular regions. Finally, each pair of corresponding line groups is split into two pairs of corresponding single lines, and the split results are consolidated, fitted, and checked in post-processing to obtain "one-to-one" corresponding lines. Parameter analyses and line-matching experiments on aerial and close-range images with typical texture characteristics show that the proposed algorithm obtains reliable line-matching results.

10.
ABSTRACT

Line integral convolution is a technique originally developed for visualizing vector fields, such as wind or water directions, that places densely packed lines following the direction of movement. Geisthövel and Hurni adapted line integral convolution to terrain generalization in 2018. Their method successfully removes details and retains sharp mountain ridges; it is particularly suited for creating generalized shaded relief. This paper extends line integral convolution generalization with a series of enhancements to reduce spurious artifacts, accentuate mountain ridges, control the level of detail in mountain slopes, and preserve sharp transitions to flat areas. The enhanced line integral convolution generalization effectively removes excessive terrain details without changing the position of terrain features. Sharp mountain ridgelines are accentuated, and transitions to flat waterbodies and valley bottoms are preserved. Shaded relief imagery derived from generalized elevation models is visually pleasing and resembles manually produced shaded relief.

11.
《测量评论》(Empire Survey Review), 2013, 45(84): 268-274
Abstract

In the E.S.R., viii, 59, 191–194 (January 1946), J.H. Cole gives a very simple formula for finding the length of long lines on the spheroid (normal section arcs), given the coordinates of the end points. In the course of the computation the approximate azimuth of one end of the line is found, the error over a 500-mile line being of the order of 3″ or 4″. If the formula is amended so that the azimuth at the other end of the line is used in computing the length of the arc, the error is then less than 0″·1 over such a distance. An extra term is now given which makes this azimuth virtually correct over any distance. Numerical tests show that Cole's formula for length and the new one for azimuth are very accurate and convenient in all azimuths and latitudes.

12.
Abstract

Professional aerial photography missions are generally outside the reach of most students and faculty involved with teaching and research. Oblique aerial photography using handheld cameras for image acquisition from a light high-wing aircraft offers an excellent learning experience for students in a first course in remote sensing and provides a useful research tool for graduate students and faculty engaged in environmental investigations. This paper is essentially a guide, covering all aspects of hand-held camera aerial imaging and the subsequent processing needed to produce low obliques, stereograms, anaglyphs, and flight line mosaics. Scales, ground coverage distances, and stereogram and mosaic timing intervals are included along with a section on the calculations used to produce these numbers. A list of additional resources concludes the paper.
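The kinds of calculations the guide refers to (photo scale, ground coverage, and the time interval between exposures) follow from standard photogrammetric relations for near-vertical photography. The sketch below uses assumed parameter names and a nominal 60% forward overlap; the paper's own tables and any oblique-geometry adjustments are not reproduced here.

```java
/**
 * Standard relations for near-vertical photography (assumed names, illustrative only).
 */
public class FlightPlanning {

    /** Scale denominator of a vertical photo: H / f. Units of H and f must match. */
    public static double scaleDenominator(double flyingHeightAboveGround, double focalLength) {
        return flyingHeightAboveGround / focalLength;
    }

    /** Ground distance covered by one image dimension (sensor size in the same units as f). */
    public static double groundCoverage(double sensorDimension, double flyingHeightAboveGround, double focalLength) {
        return sensorDimension * flyingHeightAboveGround / focalLength;
    }

    /** Seconds between exposures for a given forward overlap (e.g. 0.6) and ground speed in m/s. */
    public static double exposureInterval(double groundCoverageAlongTrack, double forwardOverlap, double groundSpeed) {
        double base = groundCoverageAlongTrack * (1.0 - forwardOverlap); // air base between exposures
        return base / groundSpeed;
    }

    public static void main(String[] args) {
        // Example: 24 mm sensor, 50 mm lens, 1000 m above ground, 50 m/s ground speed, 60% overlap.
        double coverage = groundCoverage(0.024, 1000.0, 0.050);                     // 480 m on the ground
        System.out.println("scale 1:" + Math.round(scaleDenominator(1000.0, 0.050))); // 1:20000
        System.out.println("interval " + exposureInterval(coverage, 0.6, 50.0) + " s"); // 3.84 s
    }
}
```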

13.
《测量评论》(Empire Survey Review), 2013, 45(10): 226-238
Abstract

The Stereographic Projection, owing to the ease and accuracy with which it can be drawn on a small scale, offers natural attractiveness for the treatment of spherical geometry upon a plane surface. It would therefore be rash for a present-day writer to claim as novel what may well be an infringement of patent rights morally belonging to Hipparchus, who possibly knew most of what is worth knowing about the matter 2,000 years ago. However, since a fairly extensive delving into writings upon the subject has not brought to light anything quite on the lines here put forward, it may be worth while to systematize in this paper some processes which the present writer has found practically useful for some time past.

14.
Increased use of digital imagery has facilitated the opportunity to use features, in addition to points, in photogrammetric applications. Straight lines are often present in object space, and prior research has focused on incorporating straight-line constraints into bundle adjustment for frame imagery. In the research reported in this paper, object-space straight lines are used in a bundle adjustment with self-calibration. The perspective projection of straight lines in the object space produces straight lines in the image space in the absence of distortions. Any deviations from straightness in the image space are attributed to various distortion sources, such as radial and decentric lens distortions. Before incorporating straight lines into a bundle adjustment with self-calibration, the representation and perspective transformation of straight lines between image space and object space should be addressed. In this investigation, images of straight lines are represented as a sequence of points along the image line. Also, two points along the object-space straight line are used to represent that line. The perspective relationship between image- and object-space lines is incorporated in a mathematical constraint. The underlying principle in this constraint is that the vector from the perspective centre to an image point on a straight-line feature lies on the plane defined by the perspective centre and the two object points defining the straight line. This constraint has been embedded in a software application for bundle adjustment with self-calibration that can incorporate point as well as straight-line features. Experiments with simulated and real data have proved the feasibility and the efficiency of the algorithm proposed.
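The constraint described above can be written compactly with assumed symbols: let C be the perspective centre, X1 and X2 the two object points defining the line, and v the image ray through a measured point on the image line rotated into object space. Coplanarity then requires the scalar triple product to vanish (one such equation per measured point):

```latex
% Coplanarity constraint for a measured image point on a straight-line feature
% (assumed notation and rotation convention).
\begin{equation}
\Bigl[\bigl(\mathbf{X}_1 - \mathbf{C}\bigr) \times \bigl(\mathbf{X}_2 - \mathbf{C}\bigr)\Bigr] \cdot \mathbf{v} = 0,
\qquad
\mathbf{v} = R^{\mathsf T}\begin{pmatrix} x - x_0 \\ y - y_0 \\ -f \end{pmatrix}
\end{equation}
```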

15.
A line simplification algorithm that considers spatial relationship constraints
Line feature simplification has long been one of the hot and difficult topics in cartographic representation and generalization. However, classic simplification algorithms mostly process individual line features and lack consideration of the overall spatial relationships between a line feature and its surrounding lines; moreover, they suffer from problems such as stiff results (the D-P algorithm) and the loss of local extreme points, in particular intersection anomalies where curvature is large (the L-O algorithm). This paper therefore proposes a line simplification algorithm that considers spatial relationship constraints, establishing a line-feature global simplification method (LGSM) and five classes of evaluation indicators including vector displacement and areal displacement. Experiments with real data for three classes of line features (contours, rivers, and roads) fully demonstrate the advantages of the algorithm: its results conform to the radical-law (square-root) model, reduce curve complexity and, while keeping the global spatial relationships unchanged, not only better preserve the overall shape characteristics of the curves but are also smooth, visually pleasing, and highly accurate.

16.
G. T. M. 《测量评论》(Empire Survey Review), 2013, 45(19): 289-299
Abstract

Introductory Remarks.—A line of constant bearing was known as a Rhumb line. Later Snel invented the name Loxodrome for the same line. The drawing of this line on a curvilinear graticule was naturally difficult and attempts at graphical working in the chart-house were not very successful. Consequently, according to Germain, in 1318 Petrus Vesconte de Janua devised the Plate Carree projection ("Plane" Chart). This had a rectilinear graticule and parallel meridians, and distances on the meridians were made true. The projection gave a rectilinear rhumb line; but the bearing of this rhumb line was in general far from true and the representation of the earth's surface was greatly distorted in high latitudes. For the former reason it offered no real solution of the problem of the navigator, who required a chart on which any straight line would be a line not alone of constant bearing but also of true bearing; the first condition necessarily postulated a chart with rectilinear meridians, since a meridian is itself a rhumb line, and for the same reason it postulated rectilinear parallels. It follows, therefore, that the meridians also must be parallel inter se, like the parallels of latitude. The remaining desideratum—that for a true bearing—was attained in 1569 by Gerhard Kramer, usually known by his Latin name of Mercator, in early life a pupil of Gemma Frisius of Louvain, who was the first to teach triangulation as a means for surveying a country. Let us consider, then, that a chart is required to show a straight line as a rhumb line of true bearing and let us consider the Mercator projection from this point of view.
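The property sought (a straight line that is a rhumb line of true bearing) is obtained by the Mercator projection; in the standard spherical form (not taken from the article itself) the equations are:

```latex
% Spherical Mercator projection (standard form). Along a rhumb line of bearing \alpha
% measured from north, dx/dy = \tan\alpha in these coordinates, so the track is straight.
\begin{align}
x &= R\,\lambda, &
y &= R\,\ln\tan\!\Bigl(\frac{\pi}{4} + \frac{\varphi}{2}\Bigr),
\end{align}
```

where λ is longitude, φ latitude, and R the radius of the sphere.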

17.
Abstract

When the source data for the digital elevation model (DEM) are not known and no additional information or features, such as terrain skeleton lines, are available, a triangular regular network (TRN) is constructed by simple subdivision using one or two diagonals uniformly. Such a model gives inaccurate directions for interpolation because of the inaccurate diagonals used in triangulation and thereby results in inaccurate contours representing artificial terrain features. In this study, a new method is developed that uses slope information computed at DEM points to determine accurate diagonals in the subdivision process, which is beneficial not only along the skeleton lines of a terrain but also across the whole DEM. Consequently, it is shown that the proposed method is able to build a high-fidelity TRN from a DEM without any additional information or features.
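The abstract does not give the exact selection rule, but a hedged sketch of the general idea, choosing per grid cell the diagonal across which the elevation varies less and which therefore lies closer to the local surface, might look like this; the criterion below is an illustrative assumption, not the paper's formula.

```java
/**
 * Hedged sketch of slope-aware diagonal selection when triangulating a regular DEM grid.
 * For each 2x2 cell, the diagonal with the smaller elevation difference between its two
 * endpoints is chosen. This criterion is an illustrative assumption only.
 */
public class DiagonalSelection {

    /** Returns true if the cell with upper-left corner (row, col) should be split along the NW-SE diagonal. */
    public static boolean useNwSeDiagonal(double[][] dem, int row, int col) {
        double nwSeDiff = Math.abs(dem[row][col] - dem[row + 1][col + 1]);   // z(NW) - z(SE)
        double neSwDiff = Math.abs(dem[row][col + 1] - dem[row + 1][col]);   // z(NE) - z(SW)
        return nwSeDiff <= neSwDiff;   // prefer the "flatter" diagonal
    }
}
```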

18.
ABSTRACT

The mapping of spatiotemporal point features plays an important role in geovisualization. However, such mapping suffers from low efficiency due to computational redundancy when similar symbols are used to visualize spatiotemporal point features. This paper presents a similarity-based approach to predict and avoid computational redundancy, which improves mapping efficiency. First, to identify computational redundancy, the similarity of point symbols is measured based on commonalities in symbol graphics and symbol drawing operations. Second, a similarity-enhanced method is proposed to comprehensively predict and avoid computational redundancies when mapping spatiotemporal point features. This approach was tested using two real-world spatiotemporal datasets. The results suggest that the proposed approach offers relatively large performance improvements.

19.
Abstract

This paper presents a new model for handling positional uncertainty in the process of line simplification. It considers that positional uncertainty in a simplified line is caused by (a) positional uncertainty in the initial line propagated through the process and (b) the deviation of the simplified line from the initial line. In order to describe the uncertainty in the simplified line, the maximum distance is defined as a measure. This measure is further adopted to determine parameters for a line simplification algorithm. Therefore, this model makes a step forward in the implementation of an uncertainty indicator for line simplification. Compared with existing models, the proposed uncertainty model is more comprehensive in its assessment of uncertainty for line simplification.
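With assumed notation (the paper's exact formulation is not reproduced), the two contributions can be sketched as follows: if L denotes the initial line, L' the simplified line, and ε_L the propagated positional uncertainty of L, then the deviation term is the maximum distance from L to L', and a combined band is bounded by their sum:

```latex
% Hedged sketch of a maximum-distance uncertainty measure (assumed notation).
\begin{align}
d_{\max}(L, L') &= \max_{p \in L}\; \min_{q \in L'} \lVert p - q \rVert, &
\varepsilon_{L'} &\le \varepsilon_{L} + d_{\max}(L, L'),
\end{align}
```

i.e. the true feature is taken to lie within a band around the simplified line whose width combines the propagated input uncertainty with the simplification deviation.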

20.
Abstract

Measurement of areas is necessary in many studies based on maps. Where precise measuring equipment is not available, simpler methods must be used. The information so produced can be improved in value if the degree of accuracy of the measuring technique can be stated. This paper demonstrates a 'model' by which results of the measuring technique can be judged against calculations for the same units, which are exact to one thousandth of a square centimetre.
