Similar Literature
20 similar records found
1.
Kernel density estimation (KDE) is a classic approach for spatial point pattern analysis. In many applications, KDE with spatially adaptive bandwidths (adaptive KDE) is preferred over KDE with an invariant bandwidth (fixed KDE). However, bandwidth determination for adaptive KDE is extremely computationally intensive, particularly for point pattern analysis tasks of large problem sizes. This computational challenge impedes the application of adaptive KDE to large point data sets, which are common in this big data era. This article presents a graphics processing unit (GPU)-accelerated adaptive KDE algorithm for efficient spatial point pattern analysis on spatial big data. First, optimizations were designed to reduce the algorithmic complexity of the bandwidth determination algorithm for adaptive KDE. The massively parallel computing resources on the GPU were then exploited to further speed up the optimized algorithm. Experimental results demonstrated that the proposed optimizations improved performance by a factor of tens. Compared to the sequential algorithm and an Open Multiprocessing (OpenMP)-based algorithm leveraging multiple central processing unit cores, the GPU-enabled algorithm accelerated point pattern analysis tasks by factors of hundreds and tens, respectively. Additionally, the GPU-accelerated adaptive KDE algorithm scales reasonably well as data set size increases. Given this significant acceleration, point pattern analysis with adaptive KDE on large point data sets can be performed efficiently, and analyses that are computationally prohibitive with the sequential algorithm can be conducted routinely with the GPU-accelerated one. The GPU-accelerated adaptive KDE approach thus contributes to the geospatial computational toolbox that facilitates geographic knowledge discovery from spatial big data.
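The bandwidth determination step at the heart of adaptive KDE can be sketched compactly: a fixed-bandwidth pilot estimate is computed first, and each point's bandwidth is then scaled by its pilot density. The sketch below is a minimal serial illustration of that idea, not the paper's GPU algorithm; the Abramson-style α = 0.5 scaling and all function names are assumptions:

```python
import math

def gaussian2(dx, dy, h):
    """Isotropic 2D Gaussian kernel with bandwidth h."""
    return math.exp(-(dx * dx + dy * dy) / (2 * h * h)) / (2 * math.pi * h * h)

def pilot_density(points, h):
    """Fixed-bandwidth KDE evaluated at each data point (the O(n^2) step)."""
    n = len(points)
    return [sum(gaussian2(x - u, y - v, h) for u, v in points) / n
            for x, y in points]

def adaptive_bandwidths(points, h, alpha=0.5):
    """Abramson-style adaptive bandwidths: h_i = h * (g / f_i)^alpha,
    where g is the geometric mean of the pilot densities."""
    f = pilot_density(points, h)
    g = math.exp(sum(math.log(fi) for fi in f) / len(f))
    return [h * (g / fi) ** alpha for fi in f]

def adaptive_kde(points, bandwidths, x, y):
    """Adaptive KDE estimate at (x, y): each point uses its own bandwidth."""
    n = len(points)
    return sum(gaussian2(x - u, y - v, hi)
               for (u, v), hi in zip(points, bandwidths)) / n
```

The quadratic pilot-density loop is exactly the part that the article's algorithmic optimizations and GPU parallelization target: points in sparse regions (like the isolated point below) receive larger bandwidths.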

2.
Fine-scale population distribution data at the building level play an essential role in numerous fields, for example urban planning and disaster prevention. The rapid development of remote sensing (RS) and geographic information systems (GIS) in recent decades has benefited numerous population distribution mapping studies. However, most of these studies focused on global population and environmental change; few considered fine-scale population mapping at the local scale, largely because of a lack of reliable data and models. With the boom in geospatial big data, Internet-collected volunteered geographic information (VGI) can now be used to solve this problem. This article establishes a novel framework to map urban population distributions at the building scale by integrating multisource geospatial big data. First, Baidu points of interest (POIs) and real-time Tencent user densities (RTUD) are analyzed with a random forest algorithm to downscale the street-level population distribution to the grid level. Then, we design an effective iterative building-population gravity model to map population distributions at the building level. Meanwhile, we introduce a densely inhabited index (DII), generated by the proposed gravity model, which can be used to estimate the degree of residential crowding. Compared with official community-level census data and the results of previous population mapping methods, our method exhibits the best accuracy (Pearson R = .8615, RMSE = 663.3250, p < .0001). The produced fine-scale population map offers a more thorough understanding of inner-city population distributions and can thus help policy makers optimize the allocation of resources.
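The flavour of the grid-to-building allocation step can be illustrated with a deliberately simplified, single-pass stand-in for the article's iterative building-population gravity model; the proportional-to-floor-area rule and all names here are assumptions:

```python
def allocate_to_buildings(grid_pop, buildings):
    """Distribute each grid cell's population over its buildings in
    proportion to building floor area (simplified allocation rule;
    the article iterates a gravity model instead)."""
    out = {}
    for cell, pop in grid_pop.items():
        areas = buildings[cell]          # {building_id: floor area}
        total = sum(areas.values())
        for bid, area in areas.items():
            out[bid] = pop * area / total
    return out
```

The allocation conserves each cell's population total by construction, which is the invariant any building-level downscaling scheme must keep.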

3.
This article presents an algorithm for decentralized (in-network) data mining of the movement pattern flock among mobile geosensor nodes. The algorithm, DDIG (Deferred Decentralized Information Grazing), allows roaming sensor nodes to 'graze' more information over time than they could access through their spatially limited perception range alone. The algorithm requires an intrinsic temporal deferral for pattern mining, as sensor nodes must be able to collect, memorize, exchange, and integrate their own and their neighbors' most recent movement histories before reasoning about patterns. A first set of experiments with trajectories of simulated agents showed that the algorithm's accuracy increases with growing deferral. A second set of experiments with trajectories of actual tracked livestock reveals some shortcomings of the conceptual flocking model underlying DDIG in the context of a smart farming application. Finally, the experiments underline the general conclusion that decentralization in spatial computing can result in imperfect, yet useful knowledge.

4.
5.
Development impact fees (DIF) are used to provide the public infrastructure services needed to adequately serve new developments. The Korean Ministry of Land, Transport, and Maritime Affairs introduced DIF zoning in 2008 and, like its US counterpart, it requires Korean localities to designate specific districts called 'DIF zones' based on the local population growth rate. This study examines genetic algorithms as a method for DIF zoning-related geodemographic modelling, using the Korean National Geographic Information Systems as property-level ancillary data. A borough of Hwasung City is taken as the case area, since the city internally collected population data by ward-level enumeration areas for DIF zoning in 2008. A gridded population map is built from this source enumeration-area population data to select the training dataset. The functional form of the genetic algorithm model is formulated with a hierarchical weighting system in which categorical weights of variable groups and individual weights of subordinate variables are sought jointly. The model is run with a carefully pretested set of reproductive plan parameters, and the results are compared with conventional regression models. The genetic algorithm solutions are found to be quite comparable to those obtained by the regression methods; it therefore seems worth adopting the two approaches simultaneously, in a complementary manner, to exploit the unique advantages of each, whether in facilitating the analysis process or in obtaining more promising outputs for DIF zoning as well as other geodemographic applications.

6.
A fine-scale population distribution grid better reflects the distribution of residents and plays an important role in investigating urban systems. Recent years have witnessed a growing trend of applying nighttime light data to population estimation at micro levels. However, using nighttime light data alone may cause overestimation, owing to excessively high light radiance in specific types of areas such as commercial zones and transportation hubs. To deal with this issue, this study used taxi trajectory data, which delineate people's movements, and explored the utility of integrating nighttime light and taxi trajectory data to estimate population in Shanghai at a spatial resolution of 500 m. First, an initial population distribution grid was generated from the NPP-VIIRS nighttime light data. Then, a calibration grid was created with taxi trajectory data, whereby the initial population grid was optimized. The accuracy of the resultant population grid was assessed by comparing it with refined survey data. The result indicates that the final population distribution grid performed better than the initial one, which reflects the effectiveness of the proposed calibration process.
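The two-step logic (a light-based initial grid, then taxi-based calibration under a preserved population total) can be sketched as follows; the specific damping rule and all names are assumptions, not the study's exact calibration model:

```python
def dasymetric_population(light, total_pop):
    """Initial grid: allocate the known total population to cells in
    proportion to nighttime light radiance."""
    s = sum(light)
    return [total_pop * v / s for v in light]

def calibrate(pop_grid, taxi, total_pop):
    """Damp cells whose light share greatly exceeds their taxi-activity
    share (e.g. bright but sparsely inhabited commercial zones), then
    rescale so the grid still sums to the known total population."""
    t = sum(taxi)
    adjusted = [p * (x / t) for p, x in zip(pop_grid, taxi)]
    s = sum(adjusted)
    return [total_pop * a / s for a in adjusted]
```

In the toy run below, the brightest cell has very little taxi activity, so its population estimate is pulled down, which is precisely the overestimation problem the calibration targets.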

7.
An improved algorithm for vector-to-raster data conversion   (Cited by 13: 0 self-citations, 13 by others)
The development of geographic information systems is inseparable from the optimization of spatial data structures, and efficient conversion between raster and vector data is one of the key technologies of GIS. Because raster data are highly suited to overlay operations in spatial analysis, vector data usually need to be converted to raster form. This article analyzes and compares the two basic data structures of GIS and, building on previous vector-to-raster conversion methods, proposes an improved polyline-boundary (data-string) tracing method based on the principle of the boundary algebra polygon fill algorithm, combined with the positive/negative method used in map plotting. The algorithm is simple in principle, requires no complex distance comparison operations, runs fast, and guarantees fill accuracy through a simple angle test.
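The boundary algebra fill principle that the improved algorithm builds on can be illustrated with a minimal grid rasterizer: each non-horizontal edge deposits a signed mark where it crosses a row of cell centres, and a horizontal prefix sum then recovers the polygon interior with no point-in-polygon distance tests. This is an illustrative sketch, not the paper's improved tracing method; all names are assumptions:

```python
import math

def rasterize_polygon(vertices, width, height):
    """Boundary-algebra polygon fill on a unit grid. Cell (r, c) has its
    centre at (c + 0.5, r + 0.5); a cell is inside where the accumulated
    signed crossing count (winding) is nonzero."""
    marks = [[0] * (width + 1) for _ in range(height)]
    n = len(vertices)
    for i in range(n):
        (x0, y0), (x1, y1) = vertices[i], vertices[(i + 1) % n]
        if y0 == y1:            # horizontal edges never cross a row of centres
            continue
        sign = 1 if y1 > y0 else -1
        ylo, yhi = (y0, y1) if y0 < y1 else (y1, y0)
        for r in range(height):
            yc = r + 0.5
            if ylo <= yc < yhi:
                xc = x0 + (x1 - x0) * (yc - y0) / (y1 - y0)
                # the mark affects every cell whose centre lies right of xc
                c = max(0, min(width, math.ceil(xc - 0.5)))
                marks[r][c] += sign
    grid = []
    for r in range(height):
        acc, row = 0, []
        for c in range(width):
            acc += marks[r][c]
            row.append(1 if acc != 0 else 0)
        grid.append(row)
    return grid
```

Each edge is visited once and each row is swept once, which is what gives boundary algebra fill its speed advantage over per-cell containment tests.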

8.
Geographic data themes modelled as planar partitions are found in many GIS applications (e.g. topographic data, land cover, zoning plans, etc.). When generalizing this kind of 2D map, this specific nature has to be respected and generalization operations should be carefully designed. This paper presents a design and implementation of an algorithm to perform a split operation of faces (polygonal areas).

The result of the split operation has to fit in with the topological data structure supporting variable-scale data. The algorithm, termed SPLITAREA, obtains the skeleton of a face using a constrained Delaunay triangulation. The new split operator is especially relevant in urban areas with many infrastructural objects such as roads. The contribution of this work is twofold: (1) the quality of the split operation is formally assessed by comparing the results on actual test data sets with a goal/metric we defined beforehand for the ‘balanced’ split and (2) the algorithm allows a weighted split, where different neighbours have different weights due to different compatibility. With the weighted split, the special case of unmovable boundaries is also explicitly addressed.

The developed split algorithm can also be used outside the generalization context. For example, to make two cross-border data sets fit together, the algorithm could be applied to split slivers.


9.
10.
With the ubiquity of advanced web technologies and location-aware handheld devices, citizens, regardless of their knowledge or expertise, are able to produce spatial information. This phenomenon is known as volunteered geographic information (VGI). During the past decade, VGI has been used as a data source supporting a wide range of services, such as environmental monitoring, event reporting, human movement analysis, and disaster management. However, these volunteer-contributed data come with varying quality. The reasons are that the data are produced by heterogeneous contributors, using various technologies and tools, with different levels of detail and precision, serving heterogeneous purposes, and without gatekeepers. Crowdsourcing, social, and geographic approaches have been proposed to develop appropriate methods for assessing the quality measures and indicators of VGI. In this article, we review various quality measures and indicators for selected types of VGI, together with existing quality assessment methods. As an outcome, the article presents a classification of VGI types against the methods currently used to assess their quality. Through these findings, we introduce data mining as an additional approach for quality handling in VGI.

11.
The overall aim of the Questronic project has been to focus upon techniques in data recording, data preparation and processing for computer input. The development and subsequent commercialization of these specific routines has come to fruition with the production of the Ferranti MRT 100. This initial product of the Questronic project has major ramifications for behavioral-survey investigations.

12.
The analysis of local spatial autocorrelation for spatial attributes has been an important concern in geographical inquiry. In this paper, we propose a concept and algorithm of k-order neighbours based on Delaunay triangulated irregular networks, and redefine Getis and Ord's (1992) local spatial autocorrelation statistic as Gi(k), with weight coefficient wij(k) based on k-order neighbours, for the study of local patterns in spatial attributes. To test the validity of these statistics, an experiment is performed using spatial data on the elderly population in Ichikawa City, Chiba Prefecture, Japan. Monte Carlo simulation reveals how the k-order-neighbour weight coefficients and a distance parameter differ in measuring the spatial proximity of districts located in the city centre and near the city limits.
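The k-order neighbourhood and a binary-weighted Gi(k) can be sketched directly on an adjacency graph, such as the one induced by a Delaunay triangulated irregular network. The simplified Gi form below, normalizing by the attribute total excluding i, follows the shape of Getis and Ord's 1992 statistic with binary weights; the function names are assumptions:

```python
from collections import deque

def k_order_neighbours(adj, i, k):
    """All nodes within graph distance k of node i (excluding i itself),
    found by breadth-first search on the adjacency structure."""
    seen, frontier, out = {i}, deque([(i, 0)]), set()
    while frontier:
        node, d = frontier.popleft()
        if d == k:
            continue
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                out.add(nb)
                frontier.append((nb, d + 1))
    return out

def gi_k(adj, values, i, k):
    """Getis-Ord-style Gi with binary k-order weights: the share of the
    attribute total (excluding i) found within order k of i."""
    nbrs = k_order_neighbours(adj, i, k)
    denom = sum(v for j, v in values.items() if j != i)
    return sum(values[j] for j in nbrs) / denom
```

On a four-node path graph with unit attribute values, node 0 captures one third of the total at order 1 and two thirds at order 2, showing how the statistic grows with neighbourhood order.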

13.
DAI Qin, LIU Jianbo. 《地理研究》 (Geographical Research), 2009, 28(4): 1136-1145
As a new type of intelligent optimization algorithm, the ant colony algorithm has been applied successfully in many fields, but applying ant colony optimization to remote sensing data processing is an emerging research topic. The ant colony rule-mining algorithm classifies by mining classification rules and can handle multi-feature data. This paper therefore applies the ant colony rule-mining algorithm to the classification of multi-feature remote sensing data, using Landsat TM and Envisat ASAR data over the Beijing area as experimental data for multi-feature classification experiments. Comparing the results with the maximum likelihood classifier and the C4.5 method shows that: (1) the ant colony rule-mining algorithm is a parameter-free, intelligent classification method with good robustness; (2) it can mine relatively simple classification rules; and (3) it can make full use of multi-source remote sensing data. By fully exploiting multi-feature data for land cover classification, it can improve classification efficiency.

14.
Clustering of temporal event processes   (Cited by 1: 0 self-citations, 1 by others)
A temporal point process is a sequence of points, each representing the occurrence time of an event. Each temporal point process is related to the behavior of an entity. As a result, clustering of temporal point processes can help differentiate between entities, thereby revealing patterns of behaviors. This study proposes a hierarchical cluster method for clustering temporal point processes based on the discrete Fréchet (DF) distance. The DF cluster method is divided into four steps: (1) constructing a DF similarity matrix between temporal point processes; (2) constructing a complete linkage hierarchical tree based on the DF similarity matrix; (3) clustering the point processes with a threshold determined by locating the local maxima on the curve of the pseudo-F statistic (an index which measures the separability between clusters and the compactness in clusters); and (4) identifying inner patterns for each cluster formed by a series of dense intervals, each of which contains at least one event of all processes of the cluster. The contributions of the article are: (1) the proposed DF cluster method can cluster temporal point processes into different groups and (2) more importantly, it can identify the inner pattern of each cluster. Two synthetic data sets were created to illustrate the DF distance between temporal point process clusters (the first data set) and validate the proposed DF cluster method (the second data set), respectively. An experiment and a comparison with a method based on dynamic time warping show that DF cluster successfully identifies the preconfigured patterns in the second synthetic data set. The cluster method was then applied to a population migration history data set for the Northern Plains of the United States, revealing some interesting population migration patterns.
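Step (1) of the method rests on the discrete Fréchet distance, which is a standard dynamic program. A sketch for one-dimensional event-time sequences, using |p_i - q_j| as the ground distance (the function names are assumptions):

```python
def discrete_frechet(p, q):
    """Discrete Fréchet distance between two event-time sequences:
    the smallest possible maximum pairwise gap over all monotone
    couplings of the two sequences."""
    n, m = len(p), len(q)
    ca = [[-1.0] * m for _ in range(n)]  # memo table

    def c(i, j):
        if ca[i][j] >= 0:
            return ca[i][j]
        d = abs(p[i] - q[j])
        if i == 0 and j == 0:
            ca[i][j] = d
        elif i == 0:
            ca[i][j] = max(c(0, j - 1), d)
        elif j == 0:
            ca[i][j] = max(c(i - 1, 0), d)
        else:
            ca[i][j] = max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
        return ca[i][j]

    return c(n - 1, m - 1)
```

Pairwise evaluation of this distance over all processes yields the DF similarity matrix that the complete-linkage tree in step (2) is built from.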

15.
16.
There has been a resurgence of interest in time geography studies due to emerging spatiotemporal big data in urban environments. However, the rapid increase in the volume, diversity, and intensity of spatiotemporal data poses a significant challenge with respect to the representation and computation of time geographic entities and relations in road networks. To address this challenge, a spatiotemporal data model is proposed in this article. The proposed spatiotemporal data model is based on a compressed linear reference (CLR) technique to transform network time geographic entities in three-dimensional (3D) (x, y, t) space to two-dimensional (2D) CLR space. Using the proposed spatiotemporal data model, network time geographic entities can be stored and managed in classical spatial databases. Efficient spatial operations and index structures can be directly utilized to implement spatiotemporal operations and queries for network time geographic entities in CLR space. To validate the proposed spatiotemporal data model, a prototype system is developed using existing 2D GIS techniques. A case study is performed using large-scale datasets of space-time paths and prisms. The case study indicates that the proposed spatiotemporal data model is effective and efficient for storing, managing, and querying large-scale datasets of network time geographic entities.
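The core of the CLR idea, mapping a network-time point from 3D (x, y, t) space to 2D (measure, t) space, can be sketched with a simple cumulative-measure table over the network's edges. Real CLR also compresses the measure space, which this sketch omits; all names are assumptions:

```python
def build_clr(edge_lengths):
    """Cumulative measure at the start of each edge, obtained by
    concatenating all edges onto one linear axis (no compression here)."""
    cum, total = {}, 0.0
    for eid, length in edge_lengths.items():
        cum[eid] = total
        total += length
    return cum

def to_clr(cum, eid, offset, t):
    """Map a network-time point (edge id, offset along edge, time) from
    3D (x, y, t) space to a 2D CLR point (linear measure, time)."""
    return (cum[eid] + offset, t)
```

Once entities live in this 2D space, a space-time path becomes an ordinary polyline and a network-time prism an ordinary polygon, so classical 2D spatial indexes and operators apply directly, which is the model's main payoff.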

17.
Several algorithms have been proposed to generate a polygonal 'footprint' that characterizes the shape of a set of points in the plane. One widely used type of footprint is the χ-shape. Based on the Delaunay triangulation (DT), χ-shapes are guaranteed to be simple (Jordan) polygons. This paper presents for the first time an incremental χ-shape algorithm capable of processing point data streams. Our incremental χ-shape algorithm allows both insertion and deletion operations, and can handle streaming individual points and multiple point sets. The experimental results demonstrate that the incremental algorithm is significantly more efficient than the existing batch χ-shape algorithm for processing a wide variety of point data streams.

18.
Based on a box-counting fractal dimension algorithm (BCFD) and a unique data processing procedure, this paper computes planar fractal dimensions of 20 large US cities along with their surrounding urbanized areas. The results show that the planar urban fractal dimension (D) lies in the range 1 < D < 2, with D for the largest city, New York City, and the smallest city, Omaha, being 1.7014 and 1.2778 respectively. The estimated urban fractal dimensions are then regressed against the total urbanized area, Log(C), and total urban population, Log(POP), with log-linear functions. In general, the linear functions produce good fits for Log(C) vs. D and Log(POP) vs. D in terms of R2 values. The observation that cities may have virtually the same D or Log(C) value but quite disparate population sizes indicates that D itself says little about the specific orientation and configuration of an urban form and is not a good measure of urban population density. This paper also explores the fractal dimension and fractal growth of Baltimore, MD over the 200-year span from 1792 to 1992. The results show that Baltimore's D also satisfies 1 < D < 2, with D = 1.0157 in 1822 and D = 1.7221 in 1992. D = 0.6641 for Baltimore in 1792 is an exception, due mainly to its relatively small urban image with respect to pixel size. While D always increases with Log(C) over the years, it is not always positively correlated with urban population, Log(POP).
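Box counting itself is compact: count the boxes occupied by the urban pixels at several box sizes, then fit the slope of log N(s) against log(1/s). A sketch on a binary pixel set (function names are assumptions, not the paper's BCFD implementation):

```python
import math

def box_count(pixels, size):
    """Number of size x size boxes containing at least one occupied pixel."""
    return len({(x // size, y // size) for x, y in pixels})

def box_counting_dimension(pixels, sizes):
    """Least-squares slope of log N(s) against log (1/s) over the
    given box sizes: the box-counting fractal dimension estimate."""
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(pixels, s)) for s in sizes]
    n = len(sizes)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

A completely filled square yields D = 2, the upper bound of the 1 < D < 2 range reported for real urban forms; a fragmented urban footprint falls below it.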

19.
Methods of spatiotemporal data analysis in geography   (Cited by 13: 4 self-citations, 9 by others)
With the multi-year accumulation of geospatial observation data, the strengthening of environmental, social, and health data monitoring capabilities, and the development of geographic information systems and computer networks, spatiotemporal data sets are being generated in large numbers and the practice of spatiotemporal data analysis is growing rapidly. This paper reviews and synthesizes this practice, summarizing eight main classes of spatiotemporal analysis methods: (1) spatiotemporal data visualization, aimed at visually inspiring hypotheses and guiding model selection; (2) time series analysis of spatial statistical indices, reflecting how spatial patterns change over time; (3) spatiotemporal change indices, composite statistics of spatiotemporal change; (4) spatiotemporal pattern and anomaly detection, revealing the invariant and changing components of spatiotemporal processes; (5) spatiotemporal interpolation, for obtaining values at unsampled locations; (6) spatiotemporal regression, establishing statistical relationships between response and explanatory variables; (7) spatiotemporal process modelling, building mechanistic mathematical models of spatiotemporal processes; and (8) spatiotemporal evolution trees, reconstructing spatiotemporal evolution paths from spatial data. By briefly describing the basic principles, inputs and outputs, applicability conditions, and software implementations of these methods, the paper provides tools and methodological means for spatiotemporal data analysis.

20.
Automatic power line extraction from airborne LiDAR point cloud data   (Cited by 1: 0 self-citations, 1 by others)
This paper designs and implements an algorithm that automatically extracts power lines from airborne laser-scanned 3D point cloud data. By combining filtering based on the classification of local elevation-distribution histogram patterns, line-feature extraction in Hough feature space with priority given to the globally dominant direction, mathematical derivation of suspension point positions, and local piecewise polynomial fitting, the algorithm effectively solves the key problems in power line extraction: automatic separation of power line points from pylon points, extraction of the planar positions of power lines, extraction of suspension points, and power line fitting. The practicality of the algorithm is verified with real project data.
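The Hough-space line extraction step can be sketched for a plan-view projection of candidate power-line returns: each point votes for every (θ, ρ) line through it, and the best-voted accumulator cell gives the dominant line. This is a generic Hough transform sketch, not the paper's direction-prioritized variant; all names are assumptions:

```python
import math

def hough_dominant_line(points, n_theta=180, rho_step=1.0):
    """Vote over the (theta, rho) parameter space for 2D points and
    return the parameters of the most-voted line, where a line is
    rho = x * cos(theta) + y * sin(theta)."""
    acc = {}
    for x, y in points:
        for it in range(n_theta):
            theta = math.pi * it / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            key = (it, round(rho / rho_step))
            acc[key] = acc.get(key, 0) + 1
    (it, irho), _ = max(acc.items(), key=lambda kv: kv[1])
    return math.pi * it / n_theta, irho * rho_step
```

For points scattered along a horizontal conductor (y ≈ 0), the accumulator peaks at θ = π/2, ρ = 0; in the paper's setting, giving priority to such a globally dominant direction helps separate parallel conductors from clutter.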
