Similar Documents
20 similar documents found (search time: 15 ms)
1.
2.
The topic of geoprivacy is increasingly relevant as larger quantities of personal location data are collected and shared. The results of scientific inquiries are often spatially suppressed to protect confidentiality, limiting possible benefits of public distribution. Obfuscation techniques for point data hold the potential to enable the public release of more accurate location data without compromising personal identities. This paper examines the application of four spatial obfuscation methods for household survey data. Household privacy is evaluated by a nearest neighbor analysis, and spatial distribution is measured by a cross-k function and cluster analysis. A new obfuscation technique, Voronoi masking, is demonstrated to be distinctively equipped to balance protection of household privacy with preservation of spatial distribution.
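As an editorial illustration (not code from the paper), Voronoi masking displaces each point to the nearest vertex of the Voronoi diagram generated by the point set itself. A minimal sketch of that idea follows; it computes Voronoi vertices by brute force as circumcenters of Delaunay triangles, which is only practical for small illustrative data sets, and the function names and coordinates are assumptions:

```python
import math

def _circumcenter(a, b, c):
    """Circumcenter of triangle abc, or None if the points are collinear."""
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

def voronoi_vertices(points):
    """Voronoi vertices = circumcenters of Delaunay triangles.
    Brute force O(n^4): a triangle is Delaunay if no other point
    lies strictly inside its circumcircle."""
    verts = []
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                c = _circumcenter(points[i], points[j], points[k])
                if c is None:
                    continue
                r = math.dist(c, points[i])
                if all(math.dist(c, p) >= r - 1e-9
                       for m, p in enumerate(points) if m not in (i, j, k)):
                    verts.append(c)
    return verts

def voronoi_mask(points):
    """Displace every point to its nearest Voronoi vertex."""
    verts = voronoi_vertices(points)
    return [min(verts, key=lambda v: math.dist(v, p)) for p in points]

# Illustrative household locations, not survey data from the paper
households = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (5.0, 5.0)]
masked = voronoi_mask(households)
```

Because masked points snap to cell vertices shared by several households, nearest-neighbor re-identification is frustrated while the overall point pattern stays anchored to the original geometry.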

3.
Transferring data from one geographic information system to another commonly requires a sequence of interfaces, software packages that convert data from one format to another. Construction is described of a model interface that uses a relational database management system and compiler-building tools that work from a machine-readable definition of a data format. Parallels are drawn with conversion and translation problems in other areas from which software tools might be obtained for automation of interface construction. Three interfacing strategies are examined. In particular, the advantages and limitations are discussed for the strategy of using a standard interchange format. The principal conclusion is that achieving a widely-accepted method of defining geographic data formats should be an important objective of efforts at standardization.

4.
Research on the Integration of a Comprehensive Small-Watershed Management Information System   (cited 14 times; 2 self-citations, 14 citations by others)
This paper describes a small-watershed management and decision-support information system aimed at comprehensive small-watershed management and scientific decision-making. It integrates a soil-erosion model, a productivity model and a cost-benefit model with GIS, taking the land unit as the basic operational unit. The system provides data management, query, updating, processing, model analysis and output functions. The land unit is the basic unit for integrating the analysis models with GIS: model parameters are extracted and passed, and model computation, display and analysis are carried out, at the land-unit level. After comparing possible integration approaches, dynamic link libraries and their extensions were chosen to achieve tight integration of the soil-erosion, productivity and cost-benefit models and the planning module with GIS under a unified graphical user interface, serving the modern management of soil and water conservation and comprehensive small-watershed management.

5.
With recent advances in remote sensing, location-based services and other related technologies, the production of geospatial information has increased exponentially in recent decades. To facilitate discovery and efficient access to such information, spatial data infrastructures were promoted and standardized, since metadata are essential for describing data and services. Standardization bodies such as the International Organization for Standardization have defined well-known metadata models such as ISO 19115. However, current metadata assets exhibit heterogeneous quality levels because they are created by different producers with different perspectives. To address quality-related concerns, several initiatives have attempted to define a common framework and test the suitability of metadata through automatic controls. Nevertheless, these controls focus on interoperability by testing the format of metadata and a set of controlled elements. In this paper, we propose a methodology for testing the quality of metadata by considering aspects other than interoperability. The proposal adapts ISO 19157 to the metadata case and has been applied to a corpus of the Spanish Spatial Data Infrastructure. The results demonstrate that our quality check helps determine different types of errors for all metadata elements and can be almost completely automated to enhance the significance of metadata.
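To make the distinction from pure format validation concrete, a metadata quality check in the spirit of ISO 19157 quality elements (completeness, temporal validity, domain consistency) might look like the sketch below. The field names, required-element list and thresholds are illustrative assumptions, not the paper's actual rule set:

```python
from datetime import datetime

# Hypothetical quality controls: completeness (required elements present),
# temporal validity (parseable ISO date), domain consistency (bbox in range).
REQUIRED = ("title", "abstract", "date", "west", "east", "south", "north")

def check_metadata(record):
    """Return a list of (quality_element, message) error tuples."""
    errors = []
    for field in REQUIRED:
        if not str(record.get(field, "")).strip():
            errors.append(("completeness", f"missing element: {field}"))
    try:
        datetime.strptime(str(record.get("date", "")), "%Y-%m-%d")
    except ValueError:
        errors.append(("temporal validity", "date is not ISO YYYY-MM-DD"))
    try:
        w, e = float(record["west"]), float(record["east"])
        s, n = float(record["south"]), float(record["north"])
        if not (-180 <= w <= e <= 180 and -90 <= s <= n <= 90):
            errors.append(("domain consistency", "bounding box out of range"))
    except (KeyError, ValueError):
        errors.append(("domain consistency", "bounding box not numeric"))
    return errors

good = {"title": "Roads", "abstract": "Road network", "date": "2015-06-01",
        "west": -9.5, "east": 4.3, "south": 35.9, "north": 43.8}
bad = {"title": "Roads", "date": "June 2015", "west": 200, "east": 4,
       "south": 35.9, "north": 43.8}
```

A schema validator would accept both records; only the content-level checks flag the second one, which is the gap the paper's methodology targets.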

6.
There has been a resurgence of interest in time geography studies due to emerging spatiotemporal big data in urban environments. However, the rapid increase in the volume, diversity, and intensity of spatiotemporal data poses a significant challenge with respect to the representation and computation of time geographic entities and relations in road networks. To address this challenge, a spatiotemporal data model is proposed in this article. The proposed spatiotemporal data model is based on a compressed linear reference (CLR) technique to transform network time geographic entities in three-dimensional (3D) (x, y, t) space to two-dimensional (2D) CLR space. Using the proposed spatiotemporal data model, network time geographic entities can be stored and managed in classical spatial databases. Efficient spatial operations and index structures can be directly utilized to implement spatiotemporal operations and queries for network time geographic entities in CLR space. To validate the proposed spatiotemporal data model, a prototype system is developed using existing 2D GIS techniques. A case study is performed using large-scale datasets of space-time paths and prisms. The case study indicates that the proposed spatiotemporal data model is effective and efficient for storing, managing, and querying large-scale datasets of network time geographic entities.
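The core of the linear-referencing idea can be sketched in a few lines: every network edge is assigned a base offset on one shared measure axis, so a network position (edge, fraction along edge) at time t collapses from 3D (x, y, t) to a 2D (measure, t) coordinate, and a space-time path becomes an ordinary 2D polyline. The edge identifiers and lengths below are invented for illustration, and real CLR additionally compresses the measure axis, which this sketch omits:

```python
# edge_id -> length; a network position is (edge_id, fraction along edge)
edges = {"e1": 3.0, "e2": 4.0, "e3": 2.5}

def build_offsets(edges):
    """Assign each edge a base offset on the shared linear measure axis."""
    offsets, base = {}, 0.0
    for eid, length in edges.items():  # dicts preserve insertion order
        offsets[eid] = base
        base += length
    return offsets

OFFSETS = build_offsets(edges)

def to_clr(edge_id, fraction, t):
    """Map a network point at time t to a 2D (measure, t) coordinate."""
    return (OFFSETS[edge_id] + fraction * edges[edge_id], t)

def clr_path(samples):
    """A space-time path becomes an ordinary polyline in CLR space."""
    return [to_clr(e, f, t) for e, f, t in samples]

path = clr_path([("e1", 0.0, 0), ("e1", 1.0, 60), ("e2", 0.5, 120)])
```

Once entities live in (measure, t) space, standard 2D spatial indexes and range queries apply to them directly, which is the article's point about reusing classical spatial databases.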

7.
In recent years, social media have emerged as a potential resource to improve the management of crisis situations such as disasters triggered by natural hazards. Although there is a growing research body concerned with the analysis of the usage of social media during disasters, most previous work has concentrated on using social media as a stand-alone information source, whereas its combination with other information sources holds a still underexplored potential. This article presents an approach to enhance the identification of relevant messages from social media that relies upon the relations between georeferenced social media messages as Volunteered Geographic Information and geographic features of flood phenomena as derived from authoritative data (sensor data, hydrological data and digital elevation models). We apply this approach to examine the micro-blogging text messages of the Twitter platform (tweets) produced during the River Elbe Flood of June 2013 in Germany. This is performed by means of a statistical analysis aimed at identifying general spatial patterns in the occurrence of flood-related tweets that may be associated with proximity to and severity of flood events. The results show that messages near to severely flooded areas (within about 10 km) have a much higher probability of being related to floods. In this manner, we conclude that the geographic approach proposed here provides a reliable quantitative indicator of the usefulness of messages from social media by leveraging the existing knowledge about natural hazards such as floods, thus being valuable for disaster management in both crisis response and preventive monitoring.
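The kind of proximity statistic described above can be sketched as follows: for each georeferenced message, compute the great-circle distance to the nearest flooded cell, bin messages by distance band, and compare the share of flood-related messages per band. The coordinates, labels and band edges below are illustrative assumptions, not the study's data:

```python
import math

def haversine_km(p, q):
    """Great-circle distance in km between (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def flood_share_by_band(tweets, flood_cells, bands=(10.0, 50.0, float("inf"))):
    """Share of flood-related tweets per distance band from the nearest
    flooded cell. tweets: [((lat, lon), is_flood_related), ...]."""
    counts = {b: [0, 0] for b in bands}  # band -> [flood-related, total]
    for loc, related in tweets:
        d = min(haversine_km(loc, c) for c in flood_cells)
        band = next(b for b in bands if d <= b)
        counts[band][1] += 1
        counts[band][0] += int(related)
    return {b: (rel / tot if tot else 0.0) for b, (rel, tot) in counts.items()}

flood_cells = [(51.86, 12.64)]  # one illustrative flooded cell near the Elbe
tweets = [((51.87, 12.65), True), ((51.88, 12.60), True),
          ((51.90, 12.70), False), ((53.55, 10.00), False),
          ((52.52, 13.40), False), ((51.86, 12.64), True)]
shares = flood_share_by_band(tweets, flood_cells)
```

A markedly higher flood-related share in the nearest band is exactly the spatial pattern the article reports for the sub-10 km zone.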

8.
Landslide negative samples play an important role in statistical landslide susceptibility mapping, as they restrain statistical models from overestimating landslide susceptibility. The credibility of negative samples collected by current sampling methods is, however, unknown: during sampling, points where landslides may occur in the future can easily be mis-selected as negative samples, and such false negatives degrade the quality of the negative-sample set and of the training set, and in turn the accuracy of the statistical model. Starting from the geographic principle that "the more similar the geographic environment, the more similar the geographic features", this paper assumes that points whose geographic environment is similar to that of the positive samples are likely future landslide sites, whereas the more dissimilar a point's environment is to the positives, the more likely it is a true negative. On this assumption, a method for measuring the credibility of negative samples based on geographic-environment similarity is proposed. The method is applied to negative-sample credibility mapping for the Youfanggou basin in the landslide-prone Longnan mountainous area, and validated against the initiation zones of recorded landslides in the basin. The results show that the mean negative-sample credibility of all raster cells within the landslide initiation zones is 0.26, and that more than 95% of those cells have a credibility below 0.5, indicating that the proposed credibility measure is reasonable.
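The environment-similarity idea can be sketched as: credibility of a candidate negative = 1 minus its maximum similarity to any positive (landslide) sample over normalized environmental covariates. The Gaussian similarity kernel, the covariates (slope, relief, distance to river) and all numbers below are illustrative assumptions, not the paper's actual measure:

```python
import math

def _normalize(samples):
    """Min-max normalize each feature column to [0, 1]."""
    cols = list(zip(*samples))
    lo = [min(c) for c in cols]
    rng = [max(c) - l or 1.0 for c, l in zip(cols, lo)]
    return [[(v - l) / r for v, l, r in zip(s, lo, rng)] for s in samples]

def negative_credibility(candidates, positives, gamma=2.0):
    """Credibility of each candidate as a negative sample:
    1 - max Gaussian similarity to any positive (landslide) sample."""
    norm = _normalize(positives + candidates)
    pos, cand = norm[:len(positives)], norm[len(positives):]
    creds = []
    for c in cand:
        sim = max(math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(c, p)))
                  for p in pos)
        creds.append(1.0 - sim)
    return creds

# Illustrative covariates per point: (slope deg, relief m, river distance m)
positives = [(32.0, 180.0, 50.0), (35.0, 200.0, 80.0)]
candidates = [(33.0, 190.0, 60.0),   # environment similar to landslides
              (3.0, 20.0, 900.0)]    # flat terrain, far from rivers
cred = negative_credibility(candidates, positives)
```

A candidate whose environment mirrors the landslide sites scores near 0 (untrustworthy as a negative), while a flat, distant candidate scores near 1, matching the paper's intuition.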

9.
This paper presents an axiomatic formalization of a theory of top-level relations between three categories of entities: individuals, universals, and collections. We deal with a variety of relations between entities in these categories, including the sub-universal relation among universals and the parthood relation among individuals, as well as cross-categorial relations such as instantiation and membership. We show that an adequate understanding of the formal properties of such relations – in particular their behavior with respect to time – is critical for geographic information processing.

The axiomatic theory is developed using Isabelle, a computational system for implementing logical formalisms. All proofs are computer verified and the computational representation of the theory is available online.

10.
1 Introduction. Geographic Information System (GIS) has developed to such a degree that it seems to be a kind of panacea whenever geographic problems are discussed, whether in papers, in classes, in reports or in research plans. At present it is as though geography were not geography without mentioning GIS. It reminds us of quantitative geography, which once brought a new look to geographic research from the 1960s to the 1980s. During that period, quantitative geography enjoyed the same good fortune that GIS does today, as it was frequently applied in discussing the geographic problems concerned…

11.
12.
The ever‐increasing number of spatial data sets accessible through spatial data clearinghouses continues to make geographic information retrieval and spatial data discovery major challenges. Such challenges have been addressed in the discipline of Information Retrieval through ranking of data according to inferred degrees of relevance. Spatial data, however, present an additional challenge as they are characteristically made up of geometry, attribute and, optionally, temporal components. As these components are mutually independent, this paper suggests that they be ranked independently of one another. Representing the results of three independent rankings, however, requires an alternative to the textual ranked lists currently used: visualisation of relevance in a three‐dimensional visualisation environment. To illustrate the possible application of such an approach, a prototype browser is presented.
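Component-wise ranking can be sketched as three separate relevance scores (query-box coverage for geometry, keyword overlap for attributes, interval overlap for time) that each produce their own ordering instead of one fused list. The scoring functions and the sample records are illustrative assumptions, not the paper's actual measures:

```python
def overlap_1d(a0, a1, b0, b1):
    """Length of the overlap of two intervals."""
    return max(0.0, min(a1, b1) - max(a0, b0))

def geometry_score(bbox, q_bbox):
    """Fraction of the query box covered by the data set's box."""
    w = overlap_1d(bbox[0], bbox[2], q_bbox[0], q_bbox[2])
    h = overlap_1d(bbox[1], bbox[3], q_bbox[1], q_bbox[3])
    return (w * h) / ((q_bbox[2] - q_bbox[0]) * (q_bbox[3] - q_bbox[1]))

def attribute_score(keywords, q_terms):
    return len(set(keywords) & set(q_terms)) / len(q_terms)

def temporal_score(span, q_span):
    return overlap_1d(*span, *q_span) / (q_span[1] - q_span[0])

def rank_components(datasets, q_bbox, q_terms, q_span):
    """Rank data sets independently per component; three lists, best first."""
    def ranked(score):
        return [d["name"] for d in sorted(datasets, key=score, reverse=True)]
    return (ranked(lambda d: geometry_score(d["bbox"], q_bbox)),
            ranked(lambda d: attribute_score(d["keywords"], q_terms)),
            ranked(lambda d: temporal_score(d["span"], q_span)))

datasets = [
    {"name": "roads", "bbox": (0, 0, 10, 10),
     "keywords": ["roads", "transport"], "span": (2000, 2005)},
    {"name": "rivers", "bbox": (5, 5, 6, 6),
     "keywords": ["hydrology"], "span": (2010, 2020)},
]
geo, attr, temp = rank_components(datasets, (4, 4, 8, 8),
                                  ["hydrology"], (2012, 2016))
```

Because the three orderings can disagree, as they do here, a single merged list hides information; plotting each data set at its (geometry, attribute, temporal) score triple is the 3D visualisation the paper argues for.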

13.
Environmental simulation models need automated geographic data reduction methods to optimize the use of high-resolution data in complex environmental models. Advanced map generalization methods have been developed for multiscale geographic data representation. In the case of map generalization, positional, geometric and topological constraints are focused on to improve map legibility and communication of geographic semantics. In the context of environmental modelling, in addition to the spatial criteria, domain criteria and constraints also need to be considered. Currently, due to the absence of domain-specific generalization methods, modellers resort to ad hoc methods of manual digitization or use cartographic methods available in off-the-shelf software. Such manual methods are not feasible solutions when large data sets are to be processed, thus limiting modellers to the single-scale representations. Automated map generalization methods can rarely be used with confidence because simplified data sets may violate domain semantics and may also result in suboptimal model performance. For best modelling results, it is necessary to prioritize domain criteria and constraints during data generalization. Modellers should also be able to automate the generalization techniques and explore the trade-off between model efficiency and model simulation quality for alternative versions of input geographic data at different geographic scales. Based on our long-term research with experts in the analytic element method of groundwater modelling, we developed the multicriteria generalization (MCG) framework as a constraint-based approach to automated geographic data reduction. The MCG framework is based on the spatial multicriteria decision-making paradigm since multiscale data modelling is too complex to be fully automated and should be driven by modellers at each stage. 
Apart from a detailed discussion of the theoretical aspects of the MCG framework, we discuss two groundwater data modelling experiments that demonstrate how MCG is not just a framework for automated data reduction, but an approach for systematically exploring model performance at multiple geographic scales. Experimental results clearly indicate the benefits of MCG-based data reduction and encourage us to continue expanding the scope of and implement MCG for multiple application domains.

14.
Research on a Spatialization Method for Population Data Based on Residential-Space Attributes   (cited 1 time; 0 self-citations, 1 citation by others)
Dong Nan, Yang Xiaohuan, Cai Hongyan. 《地理科学进展》 (Progress in Geography), 2016, 35(11): 1317-1328
Fine-scale population distribution is a current focus, and difficulty, of population geography, with wide applications in disaster assessment, resource allocation and smart-city construction. The residential-building scale is an important part of fine-scale study, and the spatialization of population data at this scale is attracting growing academic attention. This paper takes residential-space attributes — the area of residential building patches, the share of building footprint within a patch, the number of building storeys and the shared-area ratio — as indicators of population quantity, and the footprint patches of residential buildings as indicators of population location. Using subdistrict boundaries and subdistrict-level resident population counts as control units, a linear model was built to obtain vector population-distribution data at the residential-building scale for six subdistricts of Xuanzhou District, Xuancheng City, in 2015, depicting the detailed spatial distribution of the urban population. The results show: (1) With residential-space attributes as indicators, the derived population data are accurate and credible: the mean absolute relative error of the estimated population over the 29 communities (villages) is below 7%, and 25 of them have an absolute relative error below 10%; of the 1102 residential patches, more than 74% have estimates within the reasonable range, while the slightly underestimated (-10%, 0) and slightly overestimated (0, 10%) ranges together account for more than 9% of patches. (2) Building volume, jointly represented by patch area and storey number, is the key factor affecting population distribution at the building scale; the within-patch building-footprint share further improves model accuracy; the shared-area ratio "lowers the high and raises the low", but its ability to pull estimates into the reasonable range is weak.

15.
Kernel density estimation (KDE) is a classic approach for spatial point pattern analysis. In many applications, KDE with spatially adaptive bandwidths (adaptive KDE) is preferred over KDE with an invariant bandwidth (fixed KDE). However, bandwidths determination for adaptive KDE is extremely computationally intensive, particularly for point pattern analysis tasks of large problem sizes. This computational challenge impedes the application of adaptive KDE to analyze large point data sets, which are common in this big data era. This article presents a graphics processing units (GPUs)-accelerated adaptive KDE algorithm for efficient spatial point pattern analysis on spatial big data. First, optimizations were designed to reduce the algorithmic complexity of the bandwidth determination algorithm for adaptive KDE. The massively parallel computing resources on GPU were then exploited to further speed up the optimized algorithm. Experimental results demonstrated that the proposed optimizations effectively improved the performance by a factor of tens. Compared to the sequential algorithm and an Open Multiprocessing (OpenMP)-based algorithm leveraging multiple central processing unit cores for adaptive KDE, the GPU-enabled algorithm accelerated point pattern analysis tasks by a factor of hundreds and tens, respectively. Additionally, the GPU-accelerated adaptive KDE algorithm scales reasonably well while increasing the size of data sets. Given the significant acceleration brought by the GPU-enabled adaptive KDE algorithm, point pattern analysis with the adaptive KDE approach on large point data sets can be performed efficiently. Point pattern analysis on spatial big data, computationally prohibitive with the sequential algorithm, can be conducted routinely with the GPU-accelerated algorithm. The GPU-accelerated adaptive KDE approach contributes to the geospatial computational toolbox that facilitates geographic knowledge discovery from spatial big data.
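For readers unfamiliar with why adaptive bandwidth determination is the expensive step, the classic two-stage scheme (a pilot fixed-bandwidth KDE, then Abramson-style local bandwidths inversely proportional to the square root of the pilot density) can be sketched in 1D as below. This is a generic textbook formulation, not the article's GPU algorithm, and the sample data are invented:

```python
import math

def gauss(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def fixed_kde(x, data, h):
    """Fixed-bandwidth Gaussian KDE at x."""
    return sum(gauss((x - xi) / h) for xi in data) / (len(data) * h)

def adaptive_bandwidths(data, h0):
    """Abramson-style local bandwidths: h_i = h0 * sqrt(g / f_pilot(x_i)),
    where g is the geometric mean of the pilot density at the data points.
    Note the pilot pass alone is O(n^2) - the cost the article attacks."""
    pilot = [fixed_kde(xi, data, h0) for xi in data]
    g = math.exp(sum(math.log(p) for p in pilot) / len(pilot))
    return [h0 * math.sqrt(g / p) for p in pilot]

def adaptive_kde(x, data, h0):
    """Adaptive KDE: each data point contributes with its own bandwidth."""
    hs = adaptive_bandwidths(data, h0)
    return sum(gauss((x - xi) / hi) / hi
               for xi, hi in zip(data, hs)) / len(data)

# Dense cluster near 0, sparse points far out: the sparse points should
# receive the larger (smoother) bandwidths.
data = [0.0, 0.1, 0.2, 0.15, 0.05, 5.0, 9.0]
hs = adaptive_bandwidths(data, h0=0.5)
```

Every density evaluation touches every data point, so both the pilot stage and the final estimate parallelize naturally across points, which is what makes the method a good fit for GPUs.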

16.
We present a reactive data structure, that is, a spatial data structure with detail levels. The two properties, spatial organization and detail levels, are the basis for a geographic information system (GIS) with a multi-scale database. A reactive data structure is a novel type of data structure catering to multiple detail levels with rapid responses to spatial queries. It is presented here as a modification of the binary space partitioning tree that includes the levels of detail. This tree is one of the few spatial data structures that does not organize space in a rectangular manner. A prototype system has been implemented. An important result of this implementation is that it shows that binary space partitioning trees of real maps have O(n) storage space complexity, in contrast to the theoretical worst case O(n²), with n the number of line segments in the map.
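A plain binary space partitioning tree over 2D line segments (without the detail levels the paper adds) can be sketched as follows: each node's segment defines a splitting line, remaining segments are classified front or back of it, and segments crossing the line are split at the intersection. The class and function names are illustrative, and the naive first-segment splitter choice is an assumption:

```python
def side(p, a, b):
    """+1 if p is left of the directed line a->b, -1 if right, 0 if on it."""
    cross = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    return (cross > 1e-9) - (cross < -1e-9)

def split(seg, a, b):
    """Cut a crossing segment at the line a->b: (front_part, back_part)."""
    p, q = seg
    dx, dy = b[0] - a[0], b[1] - a[1]
    num = dx * (p[1] - a[1]) - dy * (p[0] - a[0])
    den = dy * (q[0] - p[0]) - dx * (q[1] - p[1])
    t = num / den  # den != 0 because the endpoints straddle the line
    m = (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))
    return ((p, m), (m, q)) if side(p, a, b) > 0 else ((m, q), (p, m))

class BSPNode:
    def __init__(self, segments):
        self.segment, rest = segments[0], segments[1:]
        a, b = self.segment
        front, back = [], []
        for s in rest:
            sp, sq = side(s[0], a, b), side(s[1], a, b)
            if sp >= 0 and sq >= 0:
                front.append(s)
            elif sp <= 0 and sq <= 0:
                back.append(s)
            else:                     # segment crosses the splitting line
                f, k = split(s, a, b)
                front.append(f); back.append(k)
        self.front = BSPNode(front) if front else None
        self.back = BSPNode(back) if back else None

    def count(self):
        return (1 + (self.front.count() if self.front else 0)
                  + (self.back.count() if self.back else 0))

segs = [((0, 0), (4, 0)), ((1, 1), (3, 1)), ((2, -1), (2, -3))]
tree = BSPNode(segs)
```

It is these splits that can blow storage up to the O(n²) worst case; the paper's empirical finding is that real map data stays near O(n).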

17.
Summary. This paper describes a new method of smoothing noisy data, such as palaeomagnetic directions, in which the optimum degree of smoothing is determined objectively from the internal evidence of the data alone. As well as providing a best-fitting smooth curve, the method indicates, by means of confidence limits, which oscillations or fluctuations in the fitted curve are real. The procedure, which is illustrated by an analysis of palaeomagnetic declination directions from Lake Windermere, has potential applications throughout the Earth Sciences. It may be used in any investigation requiring the estimation of a smooth function from noisy data, provided certain basic assumptions are reasonably satisfied.
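The idea of choosing the degree of smoothing "from the internal evidence of the data alone" is commonly realized by cross-validation. The sketch below selects the bandwidth of a Nadaraya-Watson kernel smoother by leave-one-out prediction error; it illustrates the data-driven selection principle only, and is an assumption in place of the paper's actual spline-based procedure and confidence limits:

```python
import math, random

def nw_smooth(x0, xs, ys, h, skip=None):
    """Nadaraya-Watson estimate at x0, optionally leaving index `skip` out."""
    num = den = 0.0
    for i, (x, y) in enumerate(zip(xs, ys)):
        if i == skip:
            continue
        w = math.exp(-0.5 * ((x0 - x) / h) ** 2)
        num += w * y
        den += w
    return num / den

def loo_error(xs, ys, h):
    """Mean squared leave-one-out prediction error for bandwidth h."""
    return sum((ys[i] - nw_smooth(xs[i], xs, ys, h, skip=i)) ** 2
               for i in range(len(xs))) / len(xs)

def best_bandwidth(xs, ys, candidates):
    """The objectively chosen degree of smoothing: argmin of LOO error."""
    return min(candidates, key=lambda h: loo_error(xs, ys, h))

# Noisy samples of a smooth signal (illustrative synthetic data)
random.seed(2)
xs = [i / 10 for i in range(60)]
ys = [math.sin(x) + random.gauss(0, 0.15) for x in xs]
candidates = [0.05, 0.2, 0.5, 1.0, 3.0]
h_star = best_bandwidth(xs, ys, candidates)
fitted = [nw_smooth(x, xs, ys, h_star) for x in xs]
```

Too little smoothing chases the noise and too much flattens real oscillations; the cross-validated optimum sits between the two without any tuning by the analyst.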

18.
Pattern analysis techniques currently common within geography tend to focus either on characterizing patterns of spatial and/or temporal recurrence of a single event type (e.g., incidence of flu cases) or on comparing sequences of a limited number of event types where relationships between events are already represented in the data (e.g., movement patterns). The availability of large amounts of multivariate spatiotemporal data, however, requires new methods for pattern analysis. Here, we present a technique for finding associations among many different event types where the associations among these varying event types are not explicitly represented in the data or known in advance. This pattern discovery method, known as T-pattern analysis, was first developed within the field of psychology for the purpose of finding patterns in personal interactions. We have adapted and extended the T-pattern method to take the unique characteristics of geographic data into account and implemented it within a geovisualization toolkit for an integrated computational-geovisual environment we call STempo. To demonstrate how T-pattern analysis can be employed in geographic research for discovering patterns in complex spatiotemporal data, we describe a case study featuring events from news reports about Yemen during the Arab Spring of 2011–2012. Using supplementary data from the Global Database of Events, Language, and Tone, we briefly summarize and reference a separate validation study, then evaluate the scalability of the T-pattern approach. We conclude with ideas for further extensions of the T-pattern technique to increase its utility for spatiotemporal analysis.

19.
A Method for Identifying Urban Built-Up Area Boundaries Based on Electronic-Map Points of Interest   (cited 27 times; 4 self-citations, 23 citations by others)
Xu Zening, Gao Xiaolu. 《地理学报》 (Acta Geographica Sinica), 2016, 71(6): 928-939
The boundary of the urban built-up area is fundamental information for understanding and studying cities, and a precondition for implementing the spatial layout of urban functions and enforcing boundary control. However, previous methods that derive the urban extent from night-time light intensity, land cover or building coverage are constrained by data precision and scale, explain urban socio-economic activity poorly, and are therefore of limited use. Points of interest (POI) from electronic maps, one of the basic data sources for urban spatial analysis, directly and effectively reflect the agglomeration of various urban elements. Based on the association of POI with urban spatial structure and the spatial distribution of urban elements, this paper proposes a new method that identifies built-up area boundaries from the POI density distribution. To this end, a Densi-Graph analysis method was developed to analyse the trend of POI density contours; on this basis, threshold identification in the urban-rural transition zone is analysed theoretically, and the growth rules of POI density contours are discussed for monocentric cities and for dual-centre "fish-eye" and "mother-child" structures, demonstrating the applicability of Densi-Graph. Compared with previous methods for identifying built-up area boundaries, the underlying data are more direct and credible and the results more objective. Applying the method, the paper empirically delineates the built-up area boundaries of all Chinese cities at prefecture level and above, and explores the density threshold and its relationship with urban population size and the region in which a city is located.

20.
One common problem with geographic data is that, for a specific geographic event, only occurrence information is available; information about the absence of the event is not. We refer to these specific types of geospatial data as geographic one-class data (GOCD). Predicting from GOCD the potential spatial distribution over which a particular geographic event may occur is difficult because traditional binary classification methods, which require both positive and negative training samples, cannot be used. The objective of this research is to define GOCD and propose novel approaches for modelling potential spatial distributions of geographic events using GOCD. We investigate the effectiveness of one-class support vector machine (OCSVM), maximum entropy (MAXENT) and the newly proposed positive and unlabelled learning (PUL) algorithm for solving GOCD problems using a case study: species distribution modelling from synthetic data. Our experimental results indicate that generally OCSVM, MAXENT and PUL are effective in modelling the GOCD. Each method has advantages and disadvantages, but PUL seems to be the most promising method.
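The essence of one-class learning — fitting a decision boundary from occurrences alone, with no absence samples — can be shown with a deliberately minimal stand-in for OCSVM: a centroid-distance classifier whose threshold is a quantile of the training distances. The class, the quantile rule and the coordinates are illustrative assumptions, not any of the paper's three methods:

```python
import math

class CentroidOneClass:
    """Minimal one-class classifier: a point is predicted 'present' if it
    lies within the q-quantile of training distances from the centroid of
    the occurrences. A toy stand-in for OCSVM, illustrating only that the
    model is trained on positive (occurrence) data alone."""
    def __init__(self, q=0.95):
        self.q = q

    def fit(self, points):
        n = len(points[0])
        self.centroid = tuple(sum(p[i] for p in points) / len(points)
                              for i in range(n))
        dists = sorted(math.dist(p, self.centroid) for p in points)
        self.radius = dists[min(len(dists) - 1, int(self.q * len(dists)))]
        return self

    def predict(self, point):
        return math.dist(point, self.centroid) <= self.radius

# Occurrence-only training data (illustrative environmental coordinates)
occurrences = [(1.0, 1.1), (0.9, 1.0), (1.2, 0.95), (1.05, 1.2), (0.95, 0.9)]
model = CentroidOneClass().fit(occurrences)
```

Real OCSVM replaces the spherical boundary with a kernelized one, and PU learning additionally exploits unlabelled points, but the training signal in all three cases is the same: occurrences only.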


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号