18 similar documents found (search time: 530 ms)
1.
Improvement and Evaluation of the Li-Openshaw Algorithm (cited 4 times in total: 0 self-citations, 4 by others)
The Li-Openshaw algorithm is an adaptive line-feature generalization algorithm based on the objective natural principle of generalization, and it yields reasonably faithful results. Building on an analysis of the algorithm's characteristics and following the principles and goals of line simplification, two improvements are made: (1) bends are identified from the relationship between points and straight lines so that all local-maximum points are found and the overall shape of the curve is preserved; (2) when the SVO (smallest visible object) circle intersects the curve more than once, the first approximate intersection is located by following the vertex order along the line, and the original data point closest to the midpoint between the circle center and that intersection is chosen as the retained point. Evaluation indicators such as simplification time, displacement standard deviation, and positional error are then given, and an assessment method for curve shape structure based on fractal theory is proposed to compare the two algorithms. Experiments show that, compared with the original algorithm, the improved Li-Openshaw algorithm better preserves the overall shape of the curve during line simplification, achieves higher positional accuracy, and improves simplification efficiency.
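The SVO selection loop that the improvement modifies can be illustrated with a minimal sketch of the basic Li-Openshaw natural principle. This is an illustrative assumption, not the improved variant above: the hypothetical `li_openshaw` helper simply keeps the first vertex falling outside the SVO circle rather than applying the midpoint rule.

```python
import math

def li_openshaw(points, svo_diameter):
    """Minimal sketch of the Li-Openshaw natural principle: all vertices
    falling inside a circle of SVO diameter around the current anchor are
    collapsed, and the first vertex outside the circle becomes the next
    anchor.  (Representative-point rule is a simplifying assumption.)"""
    if len(points) < 3:
        return list(points)
    result = [points[0]]
    anchor = points[0]
    i = 1
    while i < len(points):
        # skip every vertex still inside the SVO circle around the anchor
        while i < len(points) and math.dist(anchor, points[i]) < svo_diameter:
            i += 1
        if i < len(points):
            # first vertex outside the circle is retained as the new anchor
            anchor = points[i]
            result.append(anchor)
            i += 1
    if result[-1] != points[-1]:
        result.append(points[-1])  # always keep the endpoint
    return result
```

With an SVO diameter matched to the target scale, vertices closer together than the smallest visible object are merged, which is the "objective natural law" the abstract refers to.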
2.
A Propagated-Error Model for Line-Feature Simplification Algorithms (cited 1 time in total: 0 self-citations, 1 by others)
Based on an analysis of how simplification algorithms affect the spatial accuracy of a line feature and its spatial relations with neighboring geographic features, a research approach to error propagation in line-simplification algorithms is proposed, a propagated-error model is established, and the model is visualized with error ellipses. Finally, the propagated errors of different algorithms are analyzed and evaluated for different line features.
3.
4.
5.
6.
Geometric Accuracy Assessment of Line-Feature Simplification Algorithms (cited 2 times in total: 0 self-citations, 2 by others)
The effects of simplification algorithms on line-feature accuracy fall into two categories: geometric accuracy and attribute accuracy. Focusing on the changes in geometric characteristics and vertex positions that a curve undergoes during simplification, this paper assesses the geometric accuracy of line-simplification algorithms, proposes geometric indicators such as line sinuosity and positional error, runs assessment experiments on several typical simplification algorithms, and draws relatively objective conclusions.
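Two of the indicator families named above, sinuosity and positional error, are easy to state concretely. The sketch below is illustrative only: the function names and the exact error definition (mean vertex-to-segment distance) are assumptions, not the paper's formulas.

```python
import math

def sinuosity(points):
    """Curve length divided by the straight-line distance between the
    endpoints -- one simple way to quantify the tortuosity of a line."""
    length = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    chord = math.dist(points[0], points[-1])
    return length / chord if chord > 0 else float("inf")

def mean_displacement(original, simplified):
    """Mean distance from each original vertex to the nearest segment of
    the simplified line -- a crude positional-error indicator."""
    def point_seg_dist(p, a, b):
        (ax, ay), (bx, by), (px, py) = a, b, p
        dx, dy = bx - ax, by - ay
        seg2 = dx * dx + dy * dy
        if seg2 == 0:
            return math.dist(p, a)
        # clamp the projection parameter to stay on the segment
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg2))
        return math.dist(p, (ax + t * dx, ay + t * dy))
    return sum(
        min(point_seg_dist(p, a, b) for a, b in zip(simplified, simplified[1:]))
        for p in original
    ) / len(original)
```

Comparing the sinuosity of a line before and after simplification, together with the mean displacement, gives a first-order picture of how much geometric character an algorithm destroys.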
7.
Performance Evaluation of Line-Feature Simplification Algorithms Based on Hierarchical Information Content (cited 1 time in total: 0 self-citations, 1 by others)
Simplification algorithms are a basic class of map-generalization algorithms, and evaluating their performance is central to optimizing and selecting among them. Existing evaluation indicators mostly consider positional displacement before and after simplification and thus struggle to evaluate performance objectively. Taking line features as the example and considering the principles of line simplification comprehensively, this paper proposes a performance-evaluation method based on hierarchical information content, viewed from the perspective of information transmission. The information carried by a line feature is first described at three levels (element, neighborhood, and whole), and corresponding information-content measures are developed. The performance of a simplification algorithm is then evaluated by how well the information at each level is preserved after simplification (its information-transmission capability): the element-level transmission ratio evaluates the preservation of key points, the neighborhood-level ratio the preservation of bends, and the whole-level ratio the preservation of overall shape. Finally, taking a river network as an example, four classic simplification algorithms are evaluated with the hierarchical information-content indicators; the analysis verifies the reasonableness of the indicators, and a comparison with classic indicators further demonstrates their advantages.
8.
Sample data relating the Douglas-Peucker threshold to specific quality attributes of simplified line features are obtained by iteration; curve fitting then yields the functional relationship between the threshold and the line's length and vertex count. The curvature of the threshold-vertex-count function is analyzed over a given interval, and the threshold at the point of maximum curvature is taken as the optimal threshold for the simplification algorithm. The influence of threshold selection on simplification results is thus revealed both qualitatively and quantitatively, and an optimized method for determining the threshold is proposed. The approach is suited to analyzing threshold effects and determining the optimal threshold when simplifying massive line-feature data with the Douglas-Peucker algorithm.
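For reference, the Douglas-Peucker algorithm whose threshold is being optimized can be sketched in its classic recursive form. This is the textbook algorithm, not the threshold-selection method itself; iterating it over a range of `tol` values would produce the threshold-vs-vertex-count samples the abstract describes.

```python
import math

def douglas_peucker(points, tol):
    """Classic recursive Douglas-Peucker: if the vertex farthest from the
    anchor-floater chord exceeds tol, keep it and recurse on both halves;
    otherwise collapse the run to its two endpoints."""
    def perp_dist(p, a, b):
        if a == b:
            return math.dist(p, a)
        (ax, ay), (bx, by), (px, py) = a, b, p
        # perpendicular distance via the cross-product area formula
        return abs((bx - ax) * (ay - py) - (ax - px) * (by - ay)) / math.dist(a, b)

    if len(points) < 3:
        return list(points)
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, idx = d, i
    if dmax <= tol:
        return [points[0], points[-1]]
    left = douglas_peucker(points[: idx + 1], tol)
    right = douglas_peucker(points[idx:], tol)
    return left[:-1] + right  # drop the duplicated split vertex
```

Plotting `len(douglas_peucker(line, tol))` against `tol` yields the monotonically decreasing threshold-vertex-count curve whose maximum-curvature point the paper uses as the optimal threshold.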
9.
Line-feature generalization is an important part of map generalization, and simplification is its most prominent operation. Based on a review of domestic and international literature on line simplification, a line-simplification algorithm that preserves the bend characteristics of a curve is proposed. The method follows the idea of preserving visually salient bends; during bend selection and rejection, the constraints are quantified, so that the changes before and after bend simplification are described fairly accurately. Experiments show that the simplification achieves good results.
10.
A Curve Simplification Method Using Oblique Bend Division (cited 1 time in total: 1 self-citation, 0 by others)
Line simplification has long been an important topic in automated map generalization. Existing algorithms, when preserving bend shapes and dividing monotonic arcs, consider only one side of the line, among other shortcomings; to address this, a new oblique bend-division and simplification method for line features is proposed. The method divides the line into arcs with oblique cuts, taking the bend shapes on both sides of the line into account; each monotonic arc is identified as U-shaped or V-shaped, and as large or small, and the cases are handled differently. After each monotonic arc is simplified, the line is divided into arcs again and each monotonic arc is simplified once more, and so on, making the algorithm a dynamic simplification process. Examples show that the algorithm is very effective at preserving feature points, U-shaped and V-shaped arcs, large bends, and overall shape, with a very high simplification rate, which demonstrates its soundness and advantages.
11.
A curve-simplification method constrained by geographic characteristics is proposed. Based on the curve's shape, bends are initially divided with a constrained Delaunay triangulation model; a bend-detection method identifies basic and compound bends, and a bend-tracing method obtains the hierarchy and adjacency relations among bends, fully structuring the curve's shape. Knowledge of the other geographic features contained within the curve's effective spatial neighborhood is then obtained and assigned to individual bends according to the curve's shape. Rules for retaining or discarding bends and a complete procedure for bend deletion are designed. Examples show that the algorithm is effective both in preserving the overall shape of the line feature and in maintaining consistency with geographic characteristics.
12.
To retain terrain-feature information when simplifying a group of contour lines, a progressive contour-group simplification method is studied. First, terrain feature points and feature lines are extracted from the contour data; the extracted feature points and lines are quantified as constraints and used as control variables. Then, following the idea of progressive graphic simplification, the control variables drive the progressive retention or removal of non-feature points, feature points, feature lines, and their associated bends, achieving dynamic simplification of the contour group. Experiments show that the method effectively preserves landform characteristics while making the simplification process more intelligent.
13.
A Coastline Generalization Method Using Bend Skeleton Lines as the Simplification Indicator (cited 1 time in total: 0 self-citations, 1 by others)
To address the shortcomings of using bend height and bend depth as simplification indicators in coastline generalization, a generalization method using bend skeleton lines as the indicator is proposed. Building on bend identification based on monotonic curve segments, bend skeleton lines are extracted by constructing a triangulation within each bend. Coastline generalization experiments following the "expand land, shrink sea" principle verify the method's effectiveness and feasibility in preserving coastline shape characteristics.
14.
15.
1 Introduction
Map generalization is one of the classical cartographic problems. All maps are generalized representations of the reality. Generalization is necessary to improve the display quality of small-scale maps, allow analysis with different grades of detail, and reduce data storage re…
16.
The contour line is one of the basic elements of a topographic map. Existing contour line simplification methods are generally applied to maps without topological errors. However, contour lines acquired from a digital elevation model (DEM) may contain topological errors before simplification. Targeted at contour lines with topological errors, a progressive simplification method based on the two-level Bellman–Ford algorithm is proposed in this study. Simplified contour lines and elevation error bands were extracted from the DEM. The contour lines of the elevation error bands were initially simplified with the Bellman–Ford (BF) algorithm. The contour lines were then segmented using the vertices of the initial simplification result and connected curves with the same bending direction were merged into a new curve. Subsequently, various directed graphs of the merged curves were constructed and a second simplification was made using the BF algorithm. Finally, the simplification result was selected based on the similarity between several simplification results and adjacent contour lines. The experimental results indicate that the main shapes of the contour groups can be maintained with this method and original topological errors are resolved.
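The repeated shortest-path searches in this method rely on the standard Bellman–Ford algorithm, which can be sketched as follows. This is a generic textbook version; the paper's graph construction and edge weights (costs of candidate shortcut segments within the elevation error band) are not reproduced here.

```python
def bellman_ford(n, edges, src):
    """Textbook single-source Bellman-Ford shortest paths.
    n: number of nodes, edges: iterable of (u, v, weight) tuples,
    src: source node index.  Returns (distances, predecessors).
    In the contour-simplification setting the nodes would be candidate
    vertices and the weights costs for the shortcuts between them."""
    INF = float("inf")
    dist = [INF] * n
    pred = [None] * n
    dist[src] = 0.0
    for _ in range(n - 1):          # at most n-1 relaxation rounds
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                pred[v] = u
                changed = True
        if not changed:             # early exit once distances stabilize
            break
    return dist, pred
```

Following the `pred` chain back from the target vertex recovers the minimum-cost polyline, i.e. the simplified curve chosen by one BF pass.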
17.
Paulo Raposo, Cartography and Geographic Information Science, 2013, 40(5): 427-443
A new method of cartographic line simplification is presented. Regular hexagonal tessellations are used to sample lines for simplification, where hexagon width, reflecting sampling fidelity, is varied in proportion to target scale and drawing resolution. Tesserae constitute loci at which new sets of vertices are defined by vertex clustering quantization, and these vertices are used to compose simplified lines retaining only visually resolvable detail at target scale. Hexagon scaling is informed by the Nyquist–Shannon sampling theorem. The hexagonal quantization algorithm is also compared to an implementation of the Li–Openshaw raster-vector algorithm, which undertakes a similar process using square raster cells. Lines produced by either algorithm using like tessera widths are compared for fidelity to the original line in two ways: Hausdorff distances to the original lines are statistically analyzed, and simplified lines are presented against input lines for visual inspection. Results show that hexagonal quantization offers advantages over square tessellations for vertex clustering line simplification in that simplified lines are significantly less displaced from input lines. Visual inspection suggests lines produced by hexagonal quantization retain informative geographical shapes for greater differences in scale than do those produced by quantization in square cells. This study yields a scale-specific cartographic line simplification algorithm, following Li and Openshaw's natural principle, which is readily applicable to cartographic linework. Open-source Java code implementing the hexagonal quantization algorithm is available online.
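The hexagonal vertex-clustering idea can be sketched as follows. This is an assumption-laden illustration, not Raposo's implementation: it uses axial hexagon coordinates with cube rounding and collapses consecutive same-cell vertices to their centroid, whereas the paper's sampling and Nyquist-based scaling rules are more involved.

```python
import math

def hex_cell(p, width):
    """Map a point to the axial (q, r) coordinates of the pointy-top
    hexagon of the given width that contains it, via cube rounding."""
    x, y = p
    size = width / math.sqrt(3)  # pointy-top: width = sqrt(3) * size
    q = (math.sqrt(3) / 3 * x - y / 3) / size
    r = (2 / 3 * y) / size
    s = -q - r                   # cube coordinates satisfy q + r + s = 0
    rq, rr, rs = round(q), round(r), round(s)
    dq, dr, ds = abs(rq - q), abs(rr - r), abs(rs - s)
    if dq > dr and dq > ds:      # fix the component with the largest error
        rq = -rr - rs
    elif dr > ds:
        rr = -rq - rs
    return (rq, rr)

def hex_simplify(points, width):
    """Vertex-clustering simplification on a hexagonal tessellation:
    consecutive vertices falling in the same hexagon collapse to their
    centroid.  A sketch of the idea, not the paper's exact method."""
    result, run, cell = [], [], None
    for p in points:
        c = hex_cell(p, width)
        if c != cell and run:
            result.append((sum(x for x, _ in run) / len(run),
                           sum(y for _, y in run) / len(run)))
            run = []
        run.append(p)
        cell = c
    if run:
        result.append((sum(x for x, _ in run) / len(run),
                       sum(y for _, y in run) / len(run)))
    return result
```

Choosing `width` from the target scale and drawing resolution, per the Nyquist–Shannon reasoning in the abstract, ensures that only detail resolvable at the target scale survives quantization.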
18.
The main purpose of the research is to achieve a fully automated approach for supplying multi-resolution databases with linear objects at each scale. Moreover, the proposed solutions maintain the repeatability and accuracy of output data wherever possible according to the input scale. These properties are achieved by keeping the minimal object dimensions as well as appropriate data pre-processing, based on the classification of source points. The classification distinguishes three classes of points: constant (unchangeable), temporary, and inherited. These classes build a structure of cartographic control points. Based on these solutions, the authors proposed an algorithm for linear object simplification based on minimal object dimensions and cartographic control points. It was also confirmed that simplification between constant points does not cause geometry discrepancies in relation to the global simplification of the whole line.