Similar Literature (20 results)
1.
The analysis of social media content for the extraction of geospatial information and event-related knowledge has recently received substantial attention. In this article we present an approach that leverages the complementary nature of social multimedia content by utilizing heterogeneous sources of social media feeds to assess the impact area of a natural disaster. More specifically, we introduce a novel social multimedia triangulation process that uses both Twitter and Flickr content in an integrated two-step process: Twitter content is used to identify toponym references associated with a disaster; this information is then used to provide approximate orientation for the associated Flickr imagery, allowing us to delineate the impact area as the overlap of multiple view footprints. In this approach, we practically crowdsource approximate orientations from Twitter content and use this information to orient Flickr imagery accordingly and identify the impact area through viewshed analysis and viewpoint integration. This approach enables us to avoid computationally intensive image analysis tasks associated with traditional image orientation, while allowing us to triangulate numerous images by having them pointed towards the crowdsourced toponym location. The article presents our approach and demonstrates its performance using a real-world wildfire event as a representative application case study.
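The footprint-overlap idea above can be illustrated with a toy model: each Flickr photo contributes a view sector oriented from its viewpoint toward the Twitter-derived toponym, and the impact area is the set of grid cells covered by several sectors. This is a minimal sketch, not the authors' viewshed implementation — the `fov_deg` and `max_range` parameters and the flat planar geometry are simplifying assumptions:

```python
import math

def sector_covers(viewpoint, target, cell, fov_deg=60.0, max_range=5.0):
    """True if `cell` falls inside the view sector anchored at `viewpoint`
    and oriented toward `target` (the crowdsourced toponym location)."""
    vx, vy = viewpoint
    tx, ty = target
    cx, cy = cell
    if math.hypot(cx - vx, cy - vy) > max_range:
        return False
    bearing = math.atan2(ty - vy, tx - vx)  # orientation crowdsourced from Twitter
    angle = math.atan2(cy - vy, cx - vx)    # direction from viewpoint to cell
    diff = abs((angle - bearing + math.pi) % (2 * math.pi) - math.pi)
    return diff <= math.radians(fov_deg / 2)

def impact_area(viewpoints, target, grid, min_views=2):
    """Cells covered by at least `min_views` overlapping view footprints."""
    return [c for c in grid
            if sum(sector_covers(v, target, c) for v in viewpoints) >= min_views]
```

With two viewpoints on opposite sides of a toponym, only cells near the toponym fall inside both sectors, which is the triangulation effect the abstract describes.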

2.
ABSTRACT

Researchers are continually finding new applications of satellite images because of the growing number of high-resolution images with wide spatial coverage. However, the cost of these images is sometimes high, and their temporal resolution is relatively coarse. Crowdsourcing is an increasingly common source of data that takes advantage of local stakeholder knowledge and that provides a higher frequency of data. The complementarity of these two data sources suggests there is great potential for mutually beneficial integration. Unfortunately, there are still important gaps in satellite image analysis by means of crowdsourcing in areas such as land cover classification and emergency management. In this paper, we summarize recent efforts, and discuss the challenges and prospects of satellite image analysis for geospatial applications using crowdsourcing. Crowdsourcing can be used to improve satellite image analysis, and satellite images can be used to organize crowdsourced efforts for collaborative mapping.

3.
Reviews

4.
Crowdsourcing has become a popular means to acquire data about the Earth and its environment inexpensively, but the data-sets obtained are typically imperfect and of unknown quality. Two common imperfections in crowdsourced data are contributions from cheats or spammers and missing cases. The effect of these two imperfections on a method to evaluate the accuracy of crowdsourced data via a latent class model was explored. Using simulated and real data-sets, it was shown that the method is able to derive useful information on the accuracy of crowdsourced data even when the degree of imperfection was very high. The practical potential of this ability to obtain accuracy information within the geospatial sciences and the realm of Digital Earth applications was indicated with reference to an evaluation of building damage maps produced by multiple bodies after the 2010 earthquake in Haiti. Critically, the method allowed data-sets to be ranked in approximately the correct order of accuracy, and this could help ensure that the most appropriate data-sets are used.

5.
To address the scarcity of geospatial data and volunteers in underdeveloped countries (or regions) abroad, and to improve the motivation and effectiveness of a limited pool of volunteers, this paper proposes a multi-factor task recommendation method for crowdsourced geospatial data. First, a grid is used to partition the study area into tasks. Then a triangular kernel function is introduced to compute each user's spatial preference, which is combined with a temporal forgetting rate to obtain a spatio-temporal preference; semantic preference is computed using TF-IDF and cosine similarity, and the spatio-temporal and semantic preferences are fused into an initial interest-based recommendation list. Finally, a latent factor model predicts each user's reputation (ability) for annotating each task, and the initial recommendation list is re-ranked by reputation. To validate the method, Islamabad, the capital of Pakistan, which has a certain existing data foundation, was chosen as the test area, and task-area recommendation experiments were conducted using user and crowdsourcing data collected from the OpenStreetMap platform, randomly split into training and test sets at a ratio of 8:2. The experimental results show that the method improves both the acceptance rate of recommended tasks and the effectiveness with which users complete them.
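The semantic-preference and forgetting-rate terms named in the abstract can be sketched with two small building blocks: cosine similarity over sparse tag-weight vectors (e.g. TF-IDF weights over OSM tags) and an exponential time decay. The paper does not publish its formulas, so this is only an illustrative sketch; the `half_life` parameter is an assumption, not from the paper:

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two sparse term-weight vectors (dicts),
    e.g. TF-IDF weights over the tags a user has previously annotated."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def time_decay(weight, days_since, half_life=30.0):
    """Exponential forgetting rate: older annotations count for less
    when estimating a user's current spatio-temporal preference."""
    return weight * 0.5 ** (days_since / half_life)
```

A fused score would then combine a decayed spatio-temporal term with the semantic similarity before the reputation-based re-ranking step.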

6.
Abstract

The geospatial sciences face grand information technology (IT) challenges in the twenty-first century: data intensity, computing intensity, concurrent-access intensity and spatiotemporal intensity. These challenges require a computing infrastructure that can: (1) better support discovery, access and utilization of data and data processing, relieving scientists and engineers of IT tasks so they can focus on scientific discoveries; (2) provide real-time IT resources to enable real-time applications, such as emergency response; (3) deal with access spikes; and (4) provide more reliable and scalable service for massive numbers of concurrent users to advance public knowledge. The emergence of cloud computing provides a potential solution: an elastic, on-demand computing platform that integrates observation systems, parameter-extraction algorithms, phenomena simulations, analytical visualization and decision support, and that captures social impact and user feedback – the essential elements of the geospatial sciences. We discuss the utilization of cloud computing to support the intensities of the geospatial sciences by reporting our investigations on how cloud computing could enable the geospatial sciences and how spatiotemporal principles, the kernel of the geospatial sciences, could be utilized to ensure the benefits of cloud computing. Four research examples are presented to analyze how to: (1) search, access and utilize geospatial data; (2) configure computing infrastructure to enable the computability of intensive simulation models; (3) disseminate and utilize research results for massive numbers of concurrent users; and (4) adopt spatiotemporal principles to support spatiotemporally intensive applications. The paper concludes with a discussion of opportunities and challenges for spatial cloud computing (SCC).

7.
ABSTRACT

Understanding the characteristics of tourist movement is essential for tourist behavior studies, since these characteristics underpin how the tourism industry selects strategies ranging from attraction planning to commercial product development. However, conventional tourism research methods are neither scalable nor cost-efficient enough to discover underlying movement patterns in massive datasets. With advances in information and communication technology, social media platforms provide big data sets generated by millions of people from different countries, all of which can be harvested cost-efficiently. This paper introduces a graph-based method to detect tourist movement patterns from Twitter data. First, collected tweets with geo-tags are cleaned to filter out those not published by tourists. Second, a DBSCAN-based clustering method is adapted to construct tourist graphs consisting of tourist-attraction vertices and movement edges. Third, network analytical methods (e.g. betweenness centrality, the Markov clustering algorithm) are applied to detect tourist movement patterns, including popular attractions, centric attractions, and popular tour routes. New York City in the United States is selected to demonstrate the utility of the proposed methodology. The detected tourist movement patterns assist business and government activities whose mission is tour product planning, transportation, and the development of shopping and accommodation centers.
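The graph-construction step above reduces, at its core, to counting attraction visits and consecutive-visit transitions per tourist. The minimal sketch below illustrates only that counting idea under an assumed trajectory representation (lists of attraction names per tourist); the paper's DBSCAN clustering and Markov clustering stages are omitted:

```python
from collections import Counter

def movement_patterns(trajectories):
    """Build attraction-visit and transition counts from per-tourist visit
    sequences; frequent transitions suggest popular tour routes, and
    high-degree vertices suggest popular or centric attractions."""
    visits, edges = Counter(), Counter()
    for seq in trajectories:
        visits.update(seq)
        edges.update(zip(seq, seq[1:]))  # consecutive visits -> directed edge
    return visits, edges
```

The resulting edge counter is exactly the weighted adjacency structure that centrality and clustering algorithms would then be run on.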

8.
It is sometimes easy to forget that massive crowdsourced data products such as Wikipedia and OpenStreetMap (OSM) are the sum of individual human efforts stemming from a variety of personal and institutional interests. We present a geovisual analytics tool called Crowd Lens for OpenStreetMap designed to help professional users of OSM make sense of the characteristics of the "crowd" that constructed OSM in specific places. The tool uses small multiple maps to visualize each contributor's piece of the crowdsourced whole, and links OSM features with the free-form commit messages supplied by their contributors. Crowd Lens allows sorting and filtering contributors by characteristics such as number of contributions, most common language used, and OSM attribute tags applied. We describe the development and evaluation of Crowd Lens, showing how a multiple-stage user-centered design process (including testing by geospatial technology professionals) helped shape the tool's interface and capabilities. We also present a case study using Crowd Lens to examine cities in six continents. Our findings should assist institutions deliberating OSM's fitness for use for different applications. Crowd Lens is also potentially informative for researchers studying Internet participation divides and ways that crowdsourced products can be better comprehended with visual analytics methods.

9.
ABSTRACT

There is a critical need to develop a means for fast, task-driven discovery of geospatial data found in geoportals. Existing geoportals, however, only provide metadata-based means for discovery, with little support for task-driven discovery, especially when considering spatial-temporal awareness. To address this gap, this paper presents a Case-Based Reasoning-supported Geospatial Data Discovery (CBR-GDD) method and implementation that accesses geospatial data by tasks. The advantage of the CBR-GDD approach is that it builds an analogue reasoning process that provides an internal mechanism bridging tasks and geospatial data with spatial-temporal awareness, thus providing solutions based on past tasks. The CBR-GDD approach includes a set of algorithms that were successfully implemented via three components as an extension of geoportals: an ontology-enhanced knowledge base, a similarity assessment model, and case retrieval nets. A set of experiments and case studies validates the CBR-GDD approach and application, and demonstrates its efficiency.

10.
Self-driving tours attract many people through their inherent autonomy, flexibility, choice, and diversity, and the quality of route design directly affects travelers' satisfaction. By collecting and organizing information on the spatial distribution of Henan Province's premier tourism resources, scenic-area details, and the road network, and drawing on tourist spatial behavior models from tourism studies, graph theory and the traveling salesman problem (TSP) from mathematics, and geographic information system (GIS) theory, this study designs optimal self-guided tour routes for Henan Province. Representative scenic areas of Henan serve as the spatial data source, the ArcGIS system is used to visualize spatial distribution, transportation, and other attribute information, and graph theory and the TSP provide the mathematical basis for route design, yielding a route-design model for self-driving tours. The method is simple and scientific, and offers a practical reference for self-driving travelers planning their routes.
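The mathematical core named above is the traveling salesman problem; a common entry point is the greedy nearest-neighbour heuristic sketched below. This is only an illustrative simplification — the study works on the actual Henan road network in a GIS, whereas this sketch uses straight-line distances over assumed coordinates:

```python
import math

def nearest_neighbour_route(start, attractions, coords):
    """Greedy TSP heuristic: from the current scenic area, always drive to
    the closest unvisited one. `coords` maps attraction name -> (x, y)."""
    route, current = [start], start
    todo = set(attractions) - {start}
    while todo:
        nxt = min(todo, key=lambda a: math.dist(coords[current], coords[a]))
        route.append(nxt)
        todo.discard(nxt)
        current = nxt
    return route
```

A real route designer would replace `math.dist` with shortest-path road distances and refine the greedy tour with an improvement step such as 2-opt.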

11.
Today, many real-time geospatial applications (e.g. navigation and location-based services) involve data- and/or compute-intensive geoprocessing tasks where performance is of great importance. Cloud computing, a promising platform with a large pool of storage and computing resources, could be a practical solution for hosting vast amounts of data and for real-time processing. In this article, we explored the feasibility of using Google App Engine (GAE), the cloud computing technology by Google, for a module in navigation services called Integrated GNSS (iGNSS) QoS prediction. The objective of this module is to predict the quality of iGNSS positioning solutions for prospective routes in advance. iGNSS QoS prediction involves the real-time computation of large Triangulated Irregular Networks (TINs) generated from LiDAR data. We experimented with GAE by storing a large TIN and running the two geoprocessing operations (proximity and bounding box) required for iGNSS QoS prediction. The experimental results revealed that while cloud computing can potentially be used for the development and deployment of data- and/or compute-intensive geospatial applications, current cloud platforms require improvements and special tools to handle real-time geoprocessing, such as iGNSS QoS prediction, efficiently. The article also provides a set of general guidelines for future development of real-time geoprocessing in clouds.

12.
13.
Abstract

This paper introduces a new concept, distributed geospatial information processing (DGIP), which refers to the processing of geospatial information residing on geographically dispersed computers connected through computer networks, and discusses the contribution of DGIP to Digital Earth (DE). DGIP plays a critical role in integrating widely distributed geospatial resources to support the DE envisioned to utilise a wide variety of information. This paper addresses this role from three different aspects: 1) sharing Earth data, information, and services through geospatial interoperability supported by standardisation of contents and interfaces; 2) sharing computing and software resources through a GeoCyberinfrastructure supported by DGIP middleware; and 3) sharing knowledge within and across domains through ontology and semantic searches. Observing the long-term process of research and development towards an operational DE, we discuss the practical contributions we expect DGIP to make to the DE.

14.
The Cartographic Journal, 2013, 50(3): 197-201
Abstract

This paper argues for the importance of retaining a map library presence on UK university campuses at a time when many are under threat of closure, and access to geospatial data is increasingly moving to web-based services. It is suggested that the need for local expertise is undiminished and map curators need to redefine themselves as geoinformation specialists, preserving their paper map collections, but also meeting some of the challenges of GIS, and contributing to national developments in the construction of distributed geolibraries and the provision of metadata, especially with regard to local data sets.

15.
With the continuous growth of massive distributed geospatial data and the increasing complexity of geospatial processing services, asynchronous geospatial information services have become a research focus in the geospatial information field. Most of the standards issued by the Open GIS Consortium (OGC) for geospatial information service interoperability are built on synchronous protocols, which suit relatively simple, non-real-time computing environments but cannot meet complex, dynamic asynchronous processing needs. This paper proposes an implementation method for asynchronous web processing services, centered on the Web Processing Service (WPS) and the Web Notification Service (WNS), that extends standard WPS requests to support asynchronous invocation.

16.
ABSTRACT

Light detection and ranging (LiDAR) data are essential for Earth and ecological sciences, environmental applications, and natural-disaster response. While collecting LiDAR data over large areas is quite feasible, the subsequent processing steps typically involve large computational demands. Efficiently storing, managing, and processing LiDAR data are prerequisite steps for enabling these LiDAR-based applications. However, handling LiDAR data poses grand geoprocessing challenges due to data and computational intensity. To tackle such challenges, we developed a general-purpose scalable framework coupled with a sophisticated data decomposition and parallelization strategy to efficiently handle 'big' LiDAR data collections. The contributions of this research are (1) a tile-based spatial index to manage big LiDAR data in the scalable and fault-tolerant Hadoop distributed file system, (2) two spatial decomposition techniques to enable efficient parallelization of different types of LiDAR processing tasks, and (3) the coupling of existing LiDAR processing tools with Hadoop, so that a variety of LiDAR data processing tasks can be conducted in parallel in a highly scalable distributed computing environment through an online geoprocessing application. A proof-of-concept prototype is presented to demonstrate the feasibility, performance, and scalability of the proposed framework.
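The tile-based spatial index in contribution (1) can be sketched as a mapping from point coordinates to fixed-size tile keys, so that each tile becomes an independent unit of storage and parallel work. In the paper the tiles live in HDFS; the in-memory dict and the tile size below are assumptions made only for illustration:

```python
from collections import defaultdict

def tile_key(x, y, tile_size=100.0):
    """Key of the fixed-size square tile containing point (x, y)."""
    return (int(x // tile_size), int(y // tile_size))

def build_tile_index(points, tile_size=100.0):
    """Group LiDAR points (x, y, z) by tile so each tile can be stored
    and processed independently (e.g. one file per tile in HDFS)."""
    index = defaultdict(list)
    for x, y, z in points:
        index[tile_key(x, y, tile_size)].append((x, y, z))
    return index
```

With such a key scheme, a distributed job can dispatch each tile (plus neighboring tiles, for operations that need overlap) to a separate worker.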

17.
Web Mapping APIs (WMAs), such as the Google Maps API, are widely used by researchers across different fields to develop geospatial Web applications. Among the maps and map functionalities provided through WMAs, routing and directions are prominent and commonly available. Given that each WMA uses a different map database and a different set of assumptions, the routes they generate for the same pairs of origin and destination addresses differ. Considering the current void in the literature on WMAs and the routes they generate, in this paper select common WMAs are compared and analyzed based on their routing techniques. The results of these comparisons will benefit researchers by helping them better understand the behavior of WMAs in producing routes, which in turn can be used for selecting suitable WMAs for research projects or for developing applications (such as navigation and location-based services). The process by which routes are evaluated can also serve as a guideline to help researchers explore the behavior of WMAs in generating routes.

18.
Abstract

The emergence of Cloud Computing technologies brings a new information infrastructure to users. Providing geoprocessing functions on Cloud Computing platforms can bring scalable, on-demand, and cost-effective geoprocessing services to geospatial users. This paper provides a comparative analysis of geoprocessing in two Cloud Computing platforms – Microsoft Windows Azure and Google App Engine. The analysis compares differences in data storage, architecture model, and development environment based on the experience of developing geoprocessing services on the two platforms; emphasizes the importance of virtualization; recommends applications of hybrid geoprocessing Clouds; and suggests an interoperable solution for geoprocessing Cloud services. The comparison allows one to selectively utilize Cloud Computing platforms or a hybrid Cloud pattern, once it is understood that the current development of geoprocessing Cloud services is restricted to specific Cloud Computing platforms with certain kinds of technologies. A performance evaluation is also conducted over geoprocessing services deployed in public Cloud platforms. The tested services are developed using geoprocessing algorithms from different vendors, GeoSurf and the Java Topology Suite. The evaluation results provide a valuable reference for providing elastic and cost-effective geoprocessing Cloud services.

19.
Cloud computing can now support high-performance geospatial services in fields such as digital cities and e-commerce. Hadoop, an open-source software framework supported by the Apache Foundation, can be used to build a cloud cluster for storing and processing high-performance geospatial data. The Open Geospatial Consortium (OGC) Web 3D Service (W3DS) is a good standard for serving three-dimensional geospatial data, and a standard cloud computing environment makes for a better demonstration of it. This paper therefore studies the experimental behavior of the OGC W3DS service in a cloud computing environment, using the Apache Hadoop framework as the basis for a three-dimensional geospatial information service demonstration. The experimental results provide a valuable reference for serving high-performance three-dimensional geospatial information.

20.
Disaster response operations require fast and coordinated actions based on real-time disaster situation information. Although crowdsourced geospatial data applications have been demonstrated to be valuable tools for gathering real-time disaster situation information, they only provide limited utility for disaster response coordination because of the lack of semantic compatibility and interoperability. To help overcome the semantic incompatibility and heterogeneity problems, we use Geospatial Semantic Web (GSW) technologies. We then combine GSW technologies with Web Feature Service requests to access multiple servers. However, a GSW-based geographic information system often has poor performance due to the complex geometric computations required. The objective of this research is to explore how to use optimization techniques to improve the performance of an interoperable geographic situation-awareness system (IGSAS) based on GSW technologies for disaster response. We conducted experiments to evaluate various client-side optimization techniques for improving the performance of an IGSAS prototype for flood disaster response in New Haven, Connecticut. Our experimental results show that the developed prototype can greatly reduce the runtime costs of geospatial semantic queries through on-the-fly spatial indexing, tile-based rendering, efficient algorithms for spatial join, and caching, especially for those spatial-join geospatial queries that involve a large number of spatial features and heavy geometric computation.
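The spatial-indexing and spatial-join optimizations mentioned in the last abstract share one idea: run a cheap bounding-box test to discard feature pairs before the expensive exact geometric computation. A minimal filter-and-refine sketch (geometries represented as point lists is an assumed simplification, not the prototype's actual data model):

```python
def bbox(geom):
    """Axis-aligned bounding box (minx, miny, maxx, maxy) of a point list."""
    xs = [p[0] for p in geom]
    ys = [p[1] for p in geom]
    return (min(xs), min(ys), max(xs), max(ys))

def bboxes_intersect(a, b):
    """Two boxes overlap iff each starts before the other ends on both axes."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def candidate_pairs(left, right):
    """Filter step of a filter-and-refine spatial join: only pairs whose
    boxes intersect need the expensive exact geometric test afterwards."""
    lb = [bbox(g) for g in left]
    rb = [bbox(g) for g in right]
    return [(i, j) for i, a in enumerate(lb)
            for j, b in enumerate(rb) if bboxes_intersect(a, b)]
```

A production system would replace the nested loop with a spatial index (grid or R-tree) so the filter step itself scales beyond small feature sets.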
