Similar Documents
20 similar documents found (search time: 15 ms)
1.
Digital elevation model (DEM) source data are subject to both horizontal and vertical errors owing to improper instrument operation, physical limitations of sensors, and bad weather conditions. These errors can degrade DEM-based applications that require low levels of positional error. Although classical smoothing interpolation methods can handle vertical errors, they tend to omit horizontal errors. Based on the statistical concept of the total least squares method, a total error-based multiquadric (MQ-T) method is proposed in this paper to reduce the effects of both horizontal and vertical errors in the context of DEM construction. By nature, the classical multiquadric (MQ) method is a vertical error regression procedure, whereas MQ-T is an orthogonal error regression model. Two examples, a numerical test and a real-world example, are employed in a comparative performance analysis of MQ-T for surface modeling of DEMs. The numerical test indicates that MQ-T performs better than the classical MQ in terms of root mean square error. The real-world example of DEM construction with sample points derived from a total station instrument demonstrates that, regardless of sample interval and DEM resolution, MQ-T is more accurate than classical interpolation methods, including inverse distance weighting, ordinary kriging, and the Australian National University DEM. MQ-T can therefore be considered an alternative interpolator for surface modeling with sample points subject to both horizontal and vertical errors.
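For readers unfamiliar with multiquadric interpolation, the classical (vertical-error) MQ surface described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' MQ-T implementation; the shape parameter `c` and the toy sample points are arbitrary assumptions:

```python
import numpy as np

def mq_interpolate(xy, z, xy_new, c=1.0):
    """Classical multiquadric (MQ) interpolation.

    Solves A w = z with A_ij = sqrt(||x_i - x_j||^2 + c^2),
    then evaluates the fitted surface at the query points xy_new.
    """
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    w = np.linalg.solve(np.sqrt(d**2 + c**2), z)
    d_new = np.linalg.norm(xy_new[:, None, :] - xy[None, :, :], axis=-1)
    return np.sqrt(d_new**2 + c**2) @ w

# toy sample points (invented for illustration)
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
z = 2.0 * pts[:, 0] + 3.0 * pts[:, 1]

# an exact interpolant: re-evaluating at the sample points reproduces z
print(np.allclose(mq_interpolate(pts, z, pts), z))  # True
```

MQ-T differs from this sketch in that its residuals are measured orthogonally to the surface rather than vertically, in the spirit of total least squares.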

2.
To reduce the influence of horizontal and elevation errors in sample points on the accuracy of digital elevation model (DEM) construction, a total least squares multiquadric algorithm (MQ-T) was developed, inspired by the total least squares method and using the high-accuracy multiquadric function (MQ) as the basis function. The accuracy of the model was verified by a numerical experiment and a real-world case study. In the numerical experiment, a Gaussian synthetic surface was taken as the study object, sample data contaminated by different error components were generated, surfaces were constructed with MQ-T, and the results were compared with the classical MQ. The results show that when sample points are affected only by the elevation error component, the accuracy of MQ-T is comparable to that of MQ; when the sample data are affected by the horizontal error component, the root mean square error of MQ-T is smaller than that of MQ. In the case study, sample data collected with a total station were used to construct a DEM of the survey area with MQ-T, and the results were compared with classical interpolation algorithms, namely inverse distance weighting (IDW), Kriging, and the Australian National University DEM (ANUDEM) interpolation software. The accuracy analysis shows that the accuracy of all interpolation algorithms decreases as the sample density decreases, but regardless of sampling density, MQ-T is consistently more accurate than the classical interpolators. Hillshade analysis indicates that MQ-T exhibits some peak flattening compared with Kriging.

3.
Web Map Tile Services (WMTS) are widely used in many fields to quickly and efficiently visualize geospatial data for public use. To ensure that a WMTS can fulfill users' expectations and requirements, the performance of the service must be measured to track latencies and bottlenecks that may degrade the overall quality of service (QoS). Traditional synthetic workloads used to evaluate WMTS applications are usually generated from repeated static URLs, randomized requests, or an access-log replay. These three methods do not take request characteristics and user behavior into consideration, and access logs are not available for systems still under development; thus, the evaluation outcomes obtained by these methods cannot represent the real performance of online WMTS applications. In this article a new workload model named HELP (Hotspot/think-timE/Length/Path) is proposed to measure the performance of a prototype WMTS. The model describes how users browse a WMTS map and statistically characterizes complete map-navigation behaviors. The HELP model is implemented in HP LoadRunner and used to generate a synthetic workload against the target WMTS. Experimental results illustrate that the performance representation of the HELP workload is more accurate than that of the other two models, and show how a bottleneck in the target system was identified. Additional statistical analysis of request logs and "hotspot" visualizations further validates the proposed HELP workload.

4.
The availability of accurate rainfall data with high spatial resolution, especially in vast watersheds with a low density of ground measurements, is critical for the planning and management of water resources and can improve the quality of hydrological model predictions. In this study, we used two classical methods, optimal interpolation and the successive correction method (SCM), for merging ground measurements and satellite rainfall estimates. The Cressman and Barnes schemes were used in the SCM to define the error covariance matrices. The correction of bias in the satellite rainfall data was assessed using four different algorithms: (1) mean bias correction, (2) a regression equation, (3) distribution transformation, and (4) spatial transformation. The satellite rainfall data, provided by the Tropical Rainfall Measuring Mission over the Brazilian Amazon Rainforest, were collected from January 1999 to December 2010. The performance of the two data-merging techniques is compared qualitatively, by visual inspection, and quantitatively, by statistical analysis. The computed statistical indices show that the SCM with the Cressman scheme provides slightly better results.
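Of the four bias-correction algorithms listed, the first is the simplest to sketch. A hypothetical NumPy illustration (the rainfall values are invented and this is not the study's code):

```python
import numpy as np

def mean_bias_correction(sat, gauge):
    """Scale satellite rainfall estimates so their mean matches
    the mean of collocated gauge measurements."""
    return sat * (gauge.mean() / sat.mean())

sat = np.array([10.0, 20.0, 30.0])    # satellite estimates (mm), invented
gauge = np.array([12.0, 18.0, 36.0])  # gauge measurements (mm), invented
corrected = mean_bias_correction(sat, gauge)
print(corrected)  # [11. 22. 33.] -- mean now equals the gauge mean (22 mm)
```

The other three algorithms replace this single multiplicative factor with a fitted regression, a distribution (quantile) mapping, or a spatially varying correction field.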

5.
A Review of Remote Sensing Estimation of Evapotranspiration over Heterogeneous Land Surfaces   (Cited: 2; self-citations: 0, other: 2)
This paper reviews the remote sensing methods commonly used to estimate land surface evapotranspiration, including surface energy balance models, Penman-Monteith-type models, temperature-vegetation index feature space methods, Priestley-Taylor-type models, and other approaches. These methods, however, face serious scale effects, one fundamental cause of which is land surface heterogeneity. After analyzing the influence of heterogeneous underlying surfaces on the remote sensing retrieval of water and heat fluxes, three scale-error correction methods are introduced: area weighting, correction-factor compensation, and temperature downscaling. The validation of water and heat fluxes over heterogeneous surfaces is also summarized from the perspective of ground observation experiments. Finally, the paper discusses challenges facing the future development of remote sensing evapotranspiration models with better spatiotemporal representativeness over heterogeneous surfaces.

6.
This paper aims to answer whether the generalization obtained with data-driven modelling can be used to gauge the plausibility of a physically based (PB) model's prediction. Two statistical models, namely Weight of Evidence (WofE) and Logistic Regression (LR), and a PB model using the infinite slope assumption were evaluated and compared with respect to their ability to predict areas susceptible to shallow landslides at the 1:10,000 urban scale. Threshold-dependent performance metrics showed that the three methods produced statistically comparable results in terms of success and prediction rates. However, by the Area Under the receiver operating characteristic Curve (AUC), the statistical models are more accurate (88.7 and 84.6% for LR and WofE, respectively) than the PB model (only 69.8%). Nevertheless, in such data-sparse situations the usual validation approaches, i.e. comparing observed with predicted data, are insufficient; formal uncertainty analysis (UA) is a means of evaluating the validity and reliability of the model. We therefore refitted the PB model using a stochastic modification of the infinite slope stability model input scheme based on the Monte Carlo (MC) method backed by sensitivity analysis (SA). For the statistical models, we used an informal Student t-test to estimate the certainty of the predicted probability (PP) at each location. Both modelling outputs independently show high validity; whereas the level of confidence in the LR and WofE models remained the same after performance re-evaluation, the accuracy of the PB model improved (AUC = 72%). This result is reasonable and provides further validation of the PB model. In urban slope analysis, where PB diagnostics are necessary, statistical and PB modelling may thus play equally supportive roles in landslide hazard assessment.

7.
With the gradual shift from 2D maps to 3D virtual environments, various visual artifacts are generated when 2D map symbols are overlaid on 3D terrain models. This work proposes a novel screen-based method for rendering 2D vector lines in real time with better-than-one-pixel accuracy on the screen. First, screen pixels are inversely projected onto the 3D terrain surface, and then onto the 2D vector plane. Next, these pixels are classified into three categories according to how they intersect the 2D lines. A multiple-sampling process is then applied to the pixels that intersect the 2D lines in order to eliminate visual artifacts such as intermittence and aliasing (at pixel scale). Finally, a suitable point-in-polygon judgment is implemented to color each sample point quickly. The algorithm is realized in a heterogeneously parallel model so that its performance is improved to an acceptable level.
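The "point-in-polygon judgment" mentioned above is a standard computational-geometry routine. A minimal even-odd ray-casting sketch in Python (the abstract does not specify which variant the authors actually use):

```python
def point_in_polygon(pt, poly):
    """Even-odd ray-casting test: cast a horizontal ray to the right of
    pt and count how many polygon edges it crosses; odd means inside."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                       # edge spans the ray's y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:                            # crossing lies to the right
                inside = not inside
    return inside

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(point_in_polygon((0.5, 0.5), square))  # True
print(point_in_polygon((1.5, 0.5), square))  # False
```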

8.
This paper proposes robust methods for local planar surface fitting in 3D laser scanning data. A search of the literature revealed that many authors use Least Squares (LS) and Principal Component Analysis (PCA) for point cloud processing without any treatment of outliers. LS and PCA are known to be sensitive to outliers and can give inconsistent and misleading estimates. RANdom SAmple Consensus (RANSAC) is one of the best-known robust methods for model fitting when noise and/or outliers are present. We concentrate on the recently introduced Deterministic Minimum Covariance Determinant estimator and robust PCA, and propose two variants of statistically robust algorithms for fitting planar surfaces to 3D laser scanning point cloud data. The performance of the proposed robust methods is demonstrated by qualitative and quantitative analysis on several synthetic and mobile laser scanning 3D data sets for different applications. Using simulated data and comparisons with LS, PCA, RANSAC, variants of RANSAC, and other robust statistical methods, we demonstrate that the new algorithms are significantly more efficient and faster, and produce more accurate fits and robust local statistics (e.g. surface normals) necessary for many point cloud processing tasks. Consider one example data set consisting of 100 points with 20% outliers representing a plane. The proposed methods, called DetRD-PCA and DetRPCA, produce bias angles (the angle between the planes fitted with and without outliers) of 0.20° and 0.24°, respectively, whereas LS, PCA and RANSAC produce worse bias angles of 52.49°, 39.55° and 0.79°, respectively. In terms of speed, DetRD-PCA takes 0.033 s on average to fit a plane, approximately 6.5, 25.4 and 25.8 times faster than RANSAC and two other robust statistical methods, respectively.
The estimated robust surface normals and curvatures from the new methods have been used for plane fitting, sharp feature preservation, and segmentation in 3D point clouds obtained from laser scanners. The results are significantly better and more efficiently computed than those obtained by existing methods.
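The baseline PCA plane fit and the bias-angle metric used in the comparison above can be sketched as follows. This is the ordinary (non-robust) PCA fit, not the DetRD-PCA or DetRPCA estimators; the random plane data are invented:

```python
import numpy as np

def pca_plane_normal(points):
    """Plane fit by PCA: the normal is the direction of least variance,
    i.e. the right singular vector with the smallest singular value."""
    centered = points - points.mean(axis=0)
    return np.linalg.svd(centered, full_matrices=False)[2][-1]

def bias_angle(n1, n2):
    """Angle in degrees between two plane normals (sign-insensitive)."""
    c = abs(np.dot(n1, n2)) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return np.degrees(np.arccos(np.clip(c, 0.0, 1.0)))

# 100 invented points scattered tightly about the plane z = 0
rng = np.random.default_rng(42)
xy = rng.uniform(-1.0, 1.0, size=(100, 2))
z = 0.01 * rng.normal(size=100)
points = np.column_stack([xy, z])

n_fit = pca_plane_normal(points)
print(bias_angle(n_fit, np.array([0.0, 0.0, 1.0])))  # close to 0 degrees
```

Injecting 20% gross outliers into `points` would tilt this PCA normal severely (the 39.55° figure quoted above), which is what the robust estimators are designed to prevent.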

9.
王猛  田丰 《地理空间信息》2011,(4):40-41,44
Taking the Qianyingzi coal mine area in the plains of eastern China as a case study, and using measured scattered data, the spatial interpolation results of the ANUDEM method and traditional interpolation methods commonly used in the GIS field were analyzed by cross-validation statistics and visual inspection. The analysis shows that in plain areas, DEMs generated by the TIN method have relatively high accuracy, while DEMs generated by the ANUDEM method accurately reflect the hydrological geomorphology.

10.
This paper evaluates the accuracy of triangulated irregular networks (TINs) and lattices in ARC/INFO. The results of an empirical comparison of the two models over two selected topographic sites are presented. Both vector and raster data were used to build the models. Three pairs of models were constructed based on 1,600, 4,000, and 9,000 sample points for the study area of the State Botanical Garden of Athens, Georgia, and 400, 800, and 1,600 sample points for the study area of Lake Lucerne of Wisconsin. The two models were assessed based on the same number of input sample points. Overall, TINs performed better than lattices. The quality of lattices decreased more dramatically than that of TINs when the number of sample points used for the construction of the models decreased. With an increase in the number of sample points used, the difference in performance between the two models decreased. The results of the evaluation directly depend on the comparison criteria and modeling algorithms. The evaluation is slightly sensitive to test indices used and the distribution of test points. The spatial pattern of residuals on spot heights was quite different from that on randomly selected test points. Users should choose models carefully based on the purpose of their application, the accuracy required, and the computer resources that are available.

11.
Understanding the impacts of land cover pattern on the heat island effect is essential for sustainable urban development. Conventional model-fitting methods have limited ability to produce accurate estimates of the land cover-temperature association because they lack procedures to address two important issues: spatial dependence between proximal spatial units and high correlations among predictor variables. In this study, we develop an effective framework called spatially filtered ridge regression (SFRR) to estimate the variations in the quantity and distribution of land surface temperature (LST) in response to various land cover patterns. The SFRR effectively integrates spatial autoregressive models and ridge regression, and achieves reliable parameter estimates with substantially reduced mean square errors. We show this by comparing the performance of the SFRR to other widely adopted models using Monte Carlo simulation followed by an empirical study over central Phoenix. Results highlight the great potential of the SFRR for producing accurate statistical estimates, a positive step toward informed and unbiased decision-making across a wide variety of disciplines. (Code and data to reproduce the results in the case study are available at: https://github.com/cfan13/SFRRTGIS.git.)
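The ridge component of the SFRR reduces to a closed-form estimate. A minimal sketch, under the assumptions that every coefficient is penalized and that the spatial-filtering step (which adds eigenvector-based spatial filters as extra predictors) is omitted; the toy design matrix is invented:

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge estimate: beta = (X'X + alpha*I)^(-1) X'y.
    Shrinking the coefficients stabilizes the estimates when predictors
    (e.g. land cover fractions) are highly correlated."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(p), X.T @ y)

# sanity check: with alpha = 0 on well-conditioned data, ridge equals OLS
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
beta_true = np.array([2.0, 3.0])
y = X @ beta_true
print(ridge_fit(X, y, alpha=0.0))  # [2. 3.]
```

Larger `alpha` trades a little bias for a large reduction in variance, which is the mechanism behind the reduced mean square errors reported above.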

12.
The aim of this study was to determine how well the landslide susceptibility parameters obtained by data-dependent statistical models matched the parameters used in the literature. To achieve this goal, 20 different environmental parameters were mapped in a well-studied landslide-prone area, the Asarsuyu catchment in northwest Turkey. A total of 4,400 seed cells were generated from 47 different landslides and merged with the attributes of the 20 environmental causative variables into a database. To run a series of logistic regression models, different random landslide-free sample sets were produced and combined with the seed cells. Different susceptibility maps were created with an average success rate of nearly 80%. The coherence among the models showed spatial correlations greater than 90%. The models converged notably in parameter selection, in that the same nine of the 20 parameters were chosen by the different logistic regression models. Among these nine parameters, lithology, geological structure (distance/density), landcover-landuse, and slope angle were common to both the regression models and the literature. The accuracy of the logistic models was assessed by absolute methods. All models were field-checked against the landslides triggered by the 12 November 1999 Kaynaşlı earthquake (Ms = 7.2).
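The logistic-regression step used above can be sketched with plain batch gradient descent. A hypothetical one-predictor example (the real models used nine environmental parameters and thousands of seed cells; the data here are invented):

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, iters=2000):
    """Logistic regression by batch gradient descent.
    X: (n, p) design matrix with a leading column of 1s (intercept);
    y: (n,) binary labels (1 = landslide seed cell, 0 = landslide-free)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted susceptibility
        w -= lr * X.T @ (p - y) / len(y)   # gradient of the mean log-loss
    return w

# toy causative variable (e.g. standardized slope angle), invented
slope = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
y = (slope > 0).astype(float)
X = np.column_stack([np.ones_like(slope), slope])

w = fit_logistic(X, y)
suscept = 1.0 / (1.0 + np.exp(-X @ w))     # susceptibility in [0, 1]
print((suscept > 0.5).astype(float))       # [0. 0. 0. 1. 1. 1.]
```

In practice each cell's fitted probability is mapped back to its location to produce the susceptibility map, and success rates are computed by overlaying the map on the seed cells.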

13.
Taking vehicle-borne laser point cloud data as the research object, a new method for road surface detection and reconstruction is proposed. The method extracts the trajectory of the scanning vehicle from the index file of the GPS locator or from the rotation angles of the laser scanner, and uses the extracted trajectory to detect the corresponding curbs. Anomalies in the detected curbs are then smoothed accordingly. Once accurate curb lines are obtained, a 3D model of the road surface can be derived, completing the road surface reconstruction. The output 3D road model is in OBJ format and can satisfy the demand for accurate road surfaces in related fields, such as 3D city modeling.

14.
Land cover monitoring using digital Earth data requires robust classification methods that allow the accurate mapping of complex land cover categories. This paper discusses the crucial issues related to the application of different up-to-date machine learning classifiers: classification trees (CT), artificial neural networks (ANN), support vector machines (SVM) and random forest (RF). The statistical significance of the differences between the performance of these algorithms, as well as their sensitivity to data-set size reduction and noise, was also analysed. Landsat-5 Thematic Mapper data captured in European spring and summer were used together with auxiliary variables derived from a digital terrain model to classify 14 different land cover categories in southern Spain. Overall, statistically similar accuracies of over 91% were obtained for ANN, SVM and RF. However, the findings of this study show differences in the accuracy of the classifiers, with RF being the most accurate classifier and requiring only very simple parameterization. SVM, followed by RF, was the classifier most robust to noise and data reduction. Significant differences in their performance were reached only beyond thresholds of 20% (noise, SVM) and 25% (noise, RF), and 80% (reduction, SVM) and 50% (reduction, RF), respectively.

15.
This paper compares and contrasts alternative methods for the construction of discontinuous population surface models based on the census and remotely sensed data from Northern Ireland. Two main methods of population distribution are employed: (1) a method based on redistribution from enumeration district (ED) and postcode centroids, and (2) a method based on dasymetric redistribution of ED population counts to suitable land cover zones from classified remotely sensed imagery. Refinements have been made to the centroid redistribution algorithm to accommodate an empirical measure of dispersion, and to allow redistribution in an anisotropic form. These refinements are evaluated against each other and the dasymetric method. The results suggest that all of the methods perform best in urban areas, and that while the refinements may improve the statistical performance of the models, this is at the expense of reduced spatial detail. In general, the techniques are highly sensitive to the spatial and population resolution of the input data.

16.
Introduction and Determination of Boundary Virtual Boreholes in 3D Modeling of Complex Geological Bodies   (Cited: 1; self-citations: 1, other: 0)
王润怀  李永树 《测绘学报》2007,36(4):468-475
In 3D modeling of complex geological bodies in mines, a lack of data makes it difficult for models to capture the spatial boundary characteristics of geological bodies, directly affecting model accuracy and practical value. Introducing boundary virtual boreholes to increase the density of data points along the boundaries of complex geological bodies can effectively constrain the boundaries of the 3D bodies and alleviate this problem to some extent. Taking strata disrupted by faults as an example, methods for determining boundary virtual boreholes under four typical spatial distribution combinations of strata and faults are studied, and corresponding solutions are proposed. Boundary virtual boreholes are consistent with ordinary boreholes in data storage, data structure, and representation, so they can be processed in a similar way, which speeds up 3D geological modeling and improves modeling accuracy.

17.
Few studies have compared algorithms for mapping surface slope and aspect from digital elevation models. Those studies that have compared these algorithms treat slope and aspect angles independently. The evaluation and comparison of surface orientation algorithms may also be conducted by treating slope and aspect as characteristics of a bi-directional vector normal to the surface. Such a comparison is more appropriate for selecting an accurate surface orientation algorithm for applications that use bi-directional measurements, such as modeling solar radiation or removing the topographic effect from remotely sensed imagery. This study empirically compared the slope angle and bi-directional surface angle estimated from five slope/aspect algorithms using a synthetic terrain surface and an actual terrain surface. The most accurate algorithm is consistently that which uses only the four nearest neighboring elevations in the grid.
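The four-nearest-neighbour estimator favoured above is the familiar central-difference scheme. A minimal NumPy sketch; the grid orientation (row 0 at the northern edge) and the aspect convention (downslope azimuth, clockwise from north) are my assumptions:

```python
import numpy as np

def slope_aspect_4nn(dem, cell):
    """Slope and aspect from the four nearest neighbours only
    (central differences; the one-cell boundary is dropped).
    Assumes row 0 is the northern edge and columns increase eastward."""
    dzdx = (dem[1:-1, 2:] - dem[1:-1, :-2]) / (2.0 * cell)   # west -> east
    dzdy = (dem[:-2, 1:-1] - dem[2:, 1:-1]) / (2.0 * cell)   # south -> north
    slope = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
    aspect = np.degrees(np.arctan2(-dzdx, -dzdy)) % 360.0    # downslope azimuth
    return slope, aspect

# a plane dipping 45 degrees toward the east: z = -x
x = np.arange(5.0)
dem = np.broadcast_to(-x, (5, 5)).copy()
slope, aspect = slope_aspect_4nn(dem, cell=1.0)
print(slope[0, 0], aspect[0, 0])  # slope ~ 45 deg, aspect ~ 90 deg (east)
```

Algorithms that use all eight neighbours (e.g. Horn's method) weight the diagonals as well; the study's result says the simpler four-neighbour stencil is consistently the more accurate of the five compared.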

18.
Point-based and object-based building extraction were conducted on airborne LiDAR data for a sample area of Buffalo, New York. First, the earth-surface points were filtered from the entire laser scan data set using a new filtering algorithm that combines TIN slope modelling and statistical analysis. The off-ground points corresponding to buildings in the study area were then extracted using both point cluster analysis and object-oriented classification. The accuracy of both approaches was tested against digitised ground truth. The point-based method achieved a correctness of 88.74%, completeness of 92.67% and quality of 83.50%; the object-based building extraction achieved a correctness of 87.21%, completeness of 60.14% and quality of 55.26%. Reconstruction of 3D building models based on the extracted building points was then performed. This study contributes scientific and technological knowledge for researchers developing more effective methods of converting LiDAR surveys into a 3D GIS database.
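The correctness/completeness/quality figures quoted above follow the standard counting of true positives (TP), false positives (FP) and false negatives (FN) against the ground truth. A minimal sketch with invented counts:

```python
def extraction_quality(tp, fp, fn):
    """Correctness (precision), completeness (recall) and quality,
    as used in building-extraction accuracy assessment."""
    correctness = tp / (tp + fp)
    completeness = tp / (tp + fn)
    quality = tp / (tp + fp + fn)
    return correctness, completeness, quality

# counts invented for illustration only
print(extraction_quality(tp=800, fp=100, fn=200))
```

Note that quality penalizes both error types at once, which is why it is the lowest of the three figures for each method above.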

19.
Fully and partially polarimetric SAR data in combination with textural features have been used extensively for terrain classification. However, another type of visual feature has so far been neglected in polarimetric SAR classification: color. It is common practice to visualize polarimetric SAR data by color-coding methods, and it is therefore possible to extract powerful color features from such pseudo-color images to gather additional crucial information for improved terrain classification. In this paper, we investigate the application of several individual visual features over different pseudo-color images, along with traditional SAR and texture features, in a novel supervised classification of dual- and single-polarized SAR data. We then focus on evaluating the effects of the applied pseudo-coloring methods on classification performance. An extensive set of experiments shows that individual visual features, or their combination with traditional SAR features, introduce a new level of discrimination and provide a noteworthy improvement in classification accuracy for land use and land cover classification of dual- and single-pol image data.

20.
The form of visual representation affects both the way in which the visual representation is processed and the effectiveness of this processing. Different forms of visual representation may require different cognitive strategies to solve a particular task; at the same time, the different representations vary in the extent to which they correspond with an individual's preferred cognitive style. The present study employed a Navon-type task to learn about the occurrence of global/local bias. The research was based on close interdisciplinary cooperation between psychology and cartography. Several different types of tasks were designed involving avalanche hazard maps with intrinsic/extrinsic visual representations, each employing different graphic variables representing the level of avalanche hazard and avalanche-hazard uncertainty. The research sample consisted of two groups of participants, each of which was provided with a different form of visual representation of identical geographical data, such that the representations could be regarded as 'informationally equivalent'. The first phase of the research consisted of two correlation studies, the first involving subjects with a high degree of map literacy (students of cartography; intrinsic method: N = 35; extrinsic method: N = 37). The second study was performed after the results of the first study were analyzed.
The second group of participants consisted of subjects with a low expected degree of map literacy (students of psychology; intrinsic method: N = 35; extrinsic method: N = 27). The first study revealed a statistically significant moderate correlation between the students' response times in extrinsic visualization tasks and their response times in a global subtest (r = 0.384, p < 0.05); likewise, a statistically significant moderate correlation was found between the students' response times in intrinsic visualization tasks and their response times in the local subtest (r = 0.387, p < 0.05). At the same time, no correlation was found between the students' performance in the local subtest and their performance in extrinsic visualization tasks, or between their scores in the global subtest and their performance in intrinsic visualization tasks. The second correlation study did not confirm the results of the first (intrinsic visualization/'small figures test': r = 0.221; extrinsic visualization/'large figures test': r = 0.135). The first, statistical phase of the research was followed by a comparative eye-tracking study, whose aim was to provide more detailed insight into the cognitive strategies employed when solving map-related tasks; more specifically, the eye-tracking study was expected to detect possible differences between the cognitive patterns employed when solving extrinsic as opposed to intrinsic visualization tasks. The results of an exploratory eye-tracking data analysis support the hypothesis that different strategies of visual information processing are used in reaction to different types of visualization.

Copyright©北京勤云科技发展有限公司  京ICP备09084417号