Similar Documents
A total of 20 similar documents were found.
1.
2.
The geospatial sensor web is set to revolutionise real-time geospatial applications by making up-to-date spatially and temporally referenced data relating to real-world phenomena ubiquitously available. The uptake of sensor web technologies is largely being driven by the recent introduction of the OpenGIS Sensor Web Enablement framework, a standardisation initiative that defines a set of web service interfaces and encodings to task and query geospatial sensors in near real time. However, live geospatial sensors can produce vast quantities of data over a short time period, which presents a large, fluctuating and ongoing processing load that is difficult to provision with adequate computational resources. Grid computing appears to offer a promising solution to this problem, but its usage thus far has primarily been restricted to processing static rather than real-time data sets. A new approach is presented in this work whereby geospatial data streams are processed on grid computing resources. This is achieved by submitting ongoing processing jobs to the grid that continually poll sensor data repositories using the relevant OpenGIS standards. To evaluate this approach, a road-traffic monitoring application was developed to process streams of GPS observations from a fleet of vehicles. Specifically, a Bayesian map-matching algorithm matches each GPS observation to a link on the road network. The results show that over 90% of observations were matched correctly and that the adopted approach achieves timely results for a linear-time geoprocessing operation performed every 60 seconds. However, testing in a production grid environment highlighted some scalability and efficiency problems: Open Geospatial Consortium (OGC) data services were found to present an I/O bottleneck, and the adopted job submission method proved inefficient. Consequently, a number of recommendations are made regarding the grid job-scheduling mechanism, shortcomings in the OGC Web Processing Service specification, and I/O bottlenecks in OGC data services.
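As an illustration of the map-matching step described above, the sketch below scores each GPS fix against candidate road links using a Gaussian distance likelihood and a "stay on the same link" transition prior. It is a minimal stand-in, not the paper's algorithm: the links, noise parameter, and prior are hypothetical, and the OGC polling machinery is omitted.

    import math

    # Hypothetical road links as straight segments: (link_id, (x1, y1), (x2, y2)).
    LINKS = [("A", (0, 0), (100, 0)), ("B", (0, 0), (0, 100)), ("C", (100, 0), (100, 100))]
    GPS_SIGMA = 10.0   # assumed GPS noise (metres)
    STAY_PRIOR = 0.8   # assumed prior probability of remaining on the previous link

    def dist_to_segment(p, a, b):
        """Perpendicular distance from point p to segment a-b."""
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        t = 0.0 if dx == dy == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
        return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

    def match(obs, prev_link=None):
        """Return the link maximising likelihood(distance) * transition prior."""
        best, best_score = None, -1.0
        for link_id, a, b in LINKS:
            d = dist_to_segment(obs, a, b)
            likelihood = math.exp(-0.5 * (d / GPS_SIGMA) ** 2)   # Gaussian noise model
            prior = STAY_PRIOR if link_id == prev_link else (1 - STAY_PRIOR) / (len(LINKS) - 1)
            score = likelihood * prior
            if score > best_score:
                best, best_score = link_id, score
        return best

    prev = None
    for fix in [(5, 2), (40, 3), (98, 10)]:   # a toy GPS trace
        prev = match(fix, prev)
        print(fix, "->", prev)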

3.
Since the introduction of the Xerox PARC Map Viewer, there has been rapid growth in the number of Web GIS (Geographical Information System) applications for public use in different contexts. These applications instruct, advise and provide tools for spatial analysis, and the people who use them come to depend on these systems. Many of these users are non-experts who have no GIS expertise and a limited understanding of spatial data handling. These inherent characteristics of non-expert interaction introduce risk and uncertainty, which are further increased by the complexity of Web GIS interfaces. These issues of uncertainty, risk perception and dependence are all trust-related aspects. Online trust has been repeatedly identified as a major concept for online information systems, and its value is recognised because it influences the intention to use and the acceptance of online systems and the overall user experience. However, there is very limited understanding of exactly how trust is constructed when people, especially non-experts, interact with Web GIS. To improve knowledge in this domain, this article explores the theoretical foundations of how trust can be investigated in this context. Trust studies (mainly from the e-commerce domain) suggest that a trust-oriented interface design may improve the trustworthiness of online systems, and similar attention can be given to Web GIS interfaces. Such studies are reviewed and their applicability is considered in the Web GIS context, taking its special characteristics into consideration. A case study is used to discuss how some features may potentially influence the trustworthiness of Web GIS applications. The article concludes by suggesting future research directions for the implementation of a holistic approach, which is necessary to investigate trust in this context.

4.
While the business intelligence sector, involving data warehouses and online analytical processing (OLAP) technologies, is experiencing strong growth in the IT marketplace, relatively little attention has been devoted to the problem of utilizing such tools in conjunction with GIS. This study contributes to the development of this research area by examining the issues involved in the design and implementation of an integrated data warehouse and GIS system that delivers analytical OLAP and mapping results in real time across the Web. The case study chosen utilizes individual records from the US 1880 population census, which have recently been made available by the North Atlantic Population Project. Although historical datasets of this kind present a number of challenges for data warehousing, the results indicate that the integrated approach adopted offers a much more flexible and powerful analytical methodology for this kind of large social science dataset than has hitherto been available.

5.
Research and Development of a Provincial-Level Resource and Environment Spatial Information Service System for Anhui Province
Building a well-structured, conveniently served provincial-level resource and environment spatial information network, and thereby enabling online sharing of and services for resource and environment spatial information, has become an urgent need. The Anhui resource and environment spatial information network is the first to provide fast access to resource and environment spatial information on the provincial government network; it offers the largest data volume, complete coverage, and diverse data types, and was developed with the domestic GIS software SuperMap, giving stable, secure, and reliable operation with good extensibility. This paper systematically introduces the network's system architecture, database construction, data updating, and main functions.

6.
Spatial joins are join operations that involve spatial data types and operators. Spatial access methods are often used to speed up the computation of spatial joins. This paper addresses the issue of benchmarking spatial join operations. For this purpose, we first present a WWW-based benchmark generator to produce sets of rectangles. Using a Web browser, experimenters can specify the number of rectangles in a sample, as well as the statistical distributions of their sizes, shapes, and locations. Second, using the generator and a well-defined set of statistical models we define several tests to compare the performance of three spatial join algorithms: nested loop, scan-and-index, and synchronized tree traversal. We also added two real-life data sets from the Sequoia 2000 storage benchmark. Our results show that the relative performance of the different techniques mainly depends on the selectivity factor of the join predicate. All of the statistical models and algorithms are available on the Web, which allows for easy verification and modification of our experiments.
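A toy version of the two ingredients the benchmark combines: a rectangle generator with configurable distributions (uniform here, as a stand-in for the generator's statistical models) and a nested-loop join that reports intersecting pairs. The selectivity factor it prints is the quantity the paper identifies as the main performance driver.

    import random

    def gen_rects(n, max_size=0.1, seed=None):
        """Generate n axis-aligned rectangles (xmin, ymin, xmax, ymax) in the unit
        square, with uniform locations and sizes standing in for the benchmark
        generator's configurable distributions."""
        rng = random.Random(seed)
        rects = []
        for _ in range(n):
            w, h = rng.uniform(0, max_size), rng.uniform(0, max_size)
            x, y = rng.uniform(0, 1 - w), rng.uniform(0, 1 - h)
            rects.append((x, y, x + w, y + h))
        return rects

    def intersects(r, s):
        return r[0] <= s[2] and s[0] <= r[2] and r[1] <= s[3] and s[1] <= r[3]

    def nested_loop_join(R, S):
        """O(|R|*|S|) spatial join: report all intersecting pairs."""
        return [(i, j) for i, r in enumerate(R) for j, s in enumerate(S) if intersects(r, s)]

    R, S = gen_rects(500, seed=1), gen_rects(500, seed=2)
    pairs = nested_loop_join(R, S)
    print("selectivity factor:", len(pairs) / (len(R) * len(S)))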

7.
This paper analyses the current state and trends of GIS research and application in land and resources surveys in China and abroad, proposes an approach to the national land and resources survey based on multi-level GIS integration, and reviews the GIS methods and applications that deserve attention during the survey. It also points out that the priorities of China's national land and resources survey lie in research on miniaturised embedded GIS, 3D/4D GIS, and network (Web) GIS.

8.
This article introduces the significance of Mobile GIS development and analyses its advantages and constraints. Taking the popular Web Service technology as the platform and considering the limited hardware resources of mobile handsets, it proposes a Mobile GIS solution that uses a trimmed-down SOAP protocol for data transfer, a useful step in Mobile GIS research and practice.
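A minimal sketch of the trimmed-SOAP idea: build the smallest envelope a SOAP 1.1 endpoint will accept and POST it with the standard library. The service URL, operation name, and parameters are hypothetical placeholders, not the paper's actual protocol.

    import urllib.request

    def soap_envelope(operation, params, ns="http://example.org/mobilegis"):
        """Build a minimal SOAP 1.1 envelope; operation name, namespace and
        parameters are hypothetical stand-ins for the paper's trimmed protocol."""
        body = "".join(f"<{k}>{v}</{k}>" for k, v in params.items())
        return (
            '<?xml version="1.0" encoding="utf-8"?>'
            '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
            f'<soap:Body><{operation} xmlns="{ns}">{body}</{operation}></soap:Body>'
            "</soap:Envelope>"
        )

    def call(url, operation, params):
        req = urllib.request.Request(
            url,
            data=soap_envelope(operation, params).encode("utf-8"),
            headers={"Content-Type": "text/xml; charset=utf-8", "SOAPAction": operation},
        )
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.read()

    # Example (hypothetical service): fetch map features around a point.
    # call("http://example.org/gis", "GetFeatures", {"lon": 117.28, "lat": 31.86, "radius": 500})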

9.
A Web Service-Based Spatial Data Sharing Platform
This paper analyses the problems of traditional GIS software, namely difficult data sharing and integration and complex interoperation between applications, and proposes a simple prototype and implementation techniques for a Web Service-based data sharing platform. It describes how to build an open, integrated data storage model on open specifications and protocols and, on that basis, proposes a Web Service-based data node clustering and interoperation scheme for distributed environments, together with a unified upper-layer application development model, including object-oriented component support for building upper-layer applications. The paper focuses on the indexing and querying of spatial data once it has been serialised as text, proposing a two-stage query scheme that combines metadata with GML, and it offers a preliminary discussion of semantic sharing of spatial data on the platform.
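A minimal sketch of a two-stage query of this kind, under assumed data: stage one prunes candidate datasets with lightweight metadata (keyword and bounding box), and stage two evaluates the precise spatial predicate against the GML geometry itself. The catalogue entries and the GML document are toy stand-ins.

    import xml.etree.ElementTree as ET

    GML_NS = "http://www.opengis.net/gml"

    # Stage 1: prune candidate datasets using lightweight metadata records
    # (hypothetical catalogue entries: name, keyword, bounding box).
    CATALOG = [
        {"name": "roads", "keyword": "transport", "bbox": (0, 0, 10, 10)},
        {"name": "rivers", "keyword": "hydro", "bbox": (20, 20, 30, 30)},
    ]

    def bbox_overlaps(a, b):
        return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

    def stage1(keyword, bbox):
        return [m for m in CATALOG if m["keyword"] == keyword and bbox_overlaps(m["bbox"], bbox)]

    # Stage 2: within the surviving datasets, test the precise predicate against
    # the GML geometry (a toy one-point-per-feature document here).
    GML_DOC = f"""<features xmlns:gml="{GML_NS}">
      <feature id="f1"><gml:Point><gml:pos>2 3</gml:pos></gml:Point></feature>
      <feature id="f2"><gml:Point><gml:pos>8 9</gml:pos></gml:Point></feature>
    </features>"""

    def stage2(gml_text, bbox):
        root = ET.fromstring(gml_text)
        hits = []
        for feat in root.findall("feature"):
            x, y = map(float, feat.find(f".//{{{GML_NS}}}pos").text.split())
            if bbox[0] <= x <= bbox[2] and bbox[1] <= y <= bbox[3]:
                hits.append(feat.get("id"))
        return hits

    query_bbox = (0, 0, 5, 5)
    for meta in stage1("transport", query_bbox):
        print(meta["name"], "->", stage2(GML_DOC, query_bbox))   # roads -> ['f1']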

10.
WebGIS is currently a hot topic in GIS research. This paper first discusses architectures for implementing WebGIS and then analyses the basic architectural patterns of WebGIS, namely the two-tier and three-tier structures. On this basis, a WebGIS architecture based on the Java platform is designed. Practice shows that this architecture can deliver remote GIS services; a Java-based WebGIS speeds up responses to user requests, turns the client into a powerful, intelligent one, reduces the server's processing load, and helps balance load.

11.
WebGIS (also known as web-based GIS and Internet GIS) denotes a type of Geographic Information System (GIS), whose client is implemented in a Web browser. WebGISs have been developed and used extensively in real-world applications. However, when such a complex web-based system involves the dissemination of large volumes of data and/or massive user interactions, its performance can become an issue. In this paper, we first identify several major potential performance problems with WebGIS. Then, we discuss several possible techniques to improve the performance. These techniques include the use of pyramids and hash indices on the server side to handle large images. To resolve server-side conflicts originating from concurrent massive access and user interactions, we suggest clustering and multithreading techniques. Multithreading is also used to break down the long sequential, layer-based data access to concurrent data access on the client side. Caching is suggested as a means to enhance concurrent data access for the same datasets on both the server and the client sides. The technique of client-side dynamic data requests is used to improve data transmission. Compressed binary representation is implemented on both sides to reduce transmission volume. We also compare the performance of a prototype WebGIS with and without these techniques.
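A compact illustration of the server-side caching idea, assuming an LRU eviction policy (the paper does not prescribe one): tiles are keyed by pyramid address (zoom, row, col), rendered on a miss, and evicted least-recently-used.

    from collections import OrderedDict

    class LRUTileCache:
        """A small LRU cache for rendered tiles, keyed by (zoom, row, col); a
        stand-in for the server-side caching the paper evaluates."""
        def __init__(self, capacity=1000):
            self.capacity, self.store = capacity, OrderedDict()
            self.hits = self.misses = 0

        def get(self, key, render):
            if key in self.store:
                self.store.move_to_end(key)        # mark as recently used
                self.hits += 1
                return self.store[key]
            self.misses += 1
            tile = render(key)                     # cache miss: render the tile
            self.store[key] = tile
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)     # evict least recently used
            return tile

    def render_tile(key):
        zoom, row, col = key
        return f"tile@z{zoom}/{row}/{col}"         # placeholder for real rendering

    # In a pyramid, zoom level z covers the map with 2^z x 2^z tiles, so a view
    # is served by fetching only the handful of tiles that intersect it.
    cache = LRUTileCache(capacity=4)
    for key in [(3, 1, 2), (3, 1, 3), (3, 1, 2), (3, 2, 2), (3, 1, 2)]:
        cache.get(key, render_tile)
    print("hit rate:", cache.hits / (cache.hits + cache.misses))   # 0.4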

12.
OpenStreetMap (OSM) is an online public access database that allows for the collaborative collection of local geographic information. We employ this mapping technology to discuss a new social theory of poverty that moves away from income poverty to an economy that directly produces individuals' basic needs. Focusing on urban farming in Philadelphia as an example, we use OSM to support the argument that money, land, labor, and capital do not limit food production in the city. OSM is a type of "commons" that allows community members to depict features of interest to them that might otherwise be underrepresented in official or commercially produced maps such as Google Maps. Using the concept of facilitated volunteered geographic information (VGI) we developed an open framework for combining residents' local knowledge of food resources with expert guidance in data input. We believe this helps overcome problems with ad hoc data submission efforts to which collaborative online projects are susceptible. The program for "tagging" food resources in OSM was deployed in a public "map-a-thon" event we organized in Philadelphia, bringing together technical experts and food enthusiasts. To share the results, we present the Philly Fresh Food Map as an interactive online Web map that can be used and updated by the public.

13.
The increasing research interest in global climate change and the rise of public awareness have generated a significant demand for new tools to support effective visualization of big climate data in a cyber environment, such that anyone from any location with an Internet connection and a web browser can easily view and comprehend the data. In response to this demand, this paper introduces a new web-based platform for visualizing multidimensional, time-varying climate data on a virtual globe. The web-based platform is built upon the virtual globe system Cesium, which is open-source, highly extendable and capable of being easily integrated into a web environment. The emerging WebGL technique is adapted to support interactive rendering of 3D graphics with hardware graphics acceleration. To address the challenges of transmitting and visualizing voluminous, complex climate data over the Internet in support of real-time visualization, we develop a stream encoding and transmission strategy based on video-compression techniques. This strategy allows dynamic provision of scientific data at different precisions to balance the needs of scientific analysis against visualization cost. Approaches to represent, encode and decode the processed data are also introduced in detail to show the operational workflow. Finally, we conduct several experiments to demonstrate the performance of the proposed strategy under different network conditions. A prototype, PolarGlobe, has been developed to visualize climate data in the Arctic regions from multiple angles.
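The precision-for-size trade-off behind such video-style encoding can be sketched as simple quantization: each frame of the float field is scaled to n-bit integers plus per-frame (min, max) metadata for decoding. This is an assumption-laden simplification, not the PolarGlobe codec.

    import numpy as np

    def encode_frame(field, bits=8):
        """Quantize a 2-D float field to unsigned integers plus (min, max)
        metadata, mimicking the precision levels of a video-style encoding."""
        lo, hi = float(field.min()), float(field.max())
        levels = (1 << bits) - 1
        scaled = np.round((field - lo) / ((hi - lo) or 1.0) * levels)
        return scaled.astype(np.uint8 if bits <= 8 else np.uint16), lo, hi

    def decode_frame(q, lo, hi, bits=8):
        levels = (1 << bits) - 1
        return q.astype(np.float64) / levels * (hi - lo) + lo

    temp = 230 + 60 * np.random.rand(180, 360)          # synthetic temperature grid (K)
    q, lo, hi = encode_frame(temp)
    restored = decode_frame(q, lo, hi)
    print("bytes:", q.nbytes, "vs", temp.nbytes)        # 8x smaller at 8 bits
    print("max abs error:", np.abs(restored - temp).max())  # bounded by the quantization step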

14.
The increasing popularity of web map services has motivated the development of more scalable services in the spatial data infrastructures. Tiled map services have emerged as a scalable alternative to traditional map services. Instead of rendering map images on the fly, a collection of pre-generated image tiles can be served very fast from a server-side cache. However, during the start-up of the service, the cache is initially empty and users experience a poor quality of service. Tile prefetching attempts to improve hit rates by proactively fetching map images without waiting for client requests.

While most popular prefetching policies in traditional web caching consider only the previous access history to make predictions, significant improvements could be achieved in web mapping by taking into account the background geographic information.

This work proposes a regressive model to predict which areas are likely to be requested in the future based on spatial cross-correlation between an unconstrained catalog of geographic features and a record of past cache requests. Tiles that are anticipated to be most frequently requested can be pre-generated and cached for faster retrieval. Trace-driven simulations with several million cache requests from two different nation-wide public web map services in Spain demonstrate that accurate predictions and performance gains can be obtained with the proposed model.
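A minimal sketch of such a regressive model, with synthetic data standing in for the feature catalog and cache log: per-tile counts of geographic features are regressed against past request counts by ordinary least squares, and the tiles with the highest predicted demand are selected for pre-generation.

    import numpy as np

    rng = np.random.default_rng(0)
    n_tiles = 400

    # Hypothetical per-tile covariates from a geographic catalog:
    # counts of roads, buildings and POIs falling inside each tile.
    X = rng.poisson(lam=[5, 20, 3], size=(n_tiles, 3)).astype(float)

    # Past cache log: request counts correlated with the covariates plus noise.
    true_w = np.array([2.0, 0.5, 4.0])
    y = X @ true_w + rng.normal(0, 2, n_tiles)

    # Fit the regression model (with intercept) by ordinary least squares.
    A = np.column_stack([np.ones(n_tiles), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    # Prefetch: pre-render the tiles with the highest predicted demand.
    predicted = A @ coef
    prefetch = np.argsort(predicted)[::-1][:20]
    print("tiles to warm the cache with:", prefetch[:5], "...")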

15.
Geospatial tile popularity reflects the general characteristics of user preferences in tile access. However, tile access exhibits both long-term popularity features (characterized as stable) and short-term popularity features (characterized as explosive). These specific features of tile popularity are an important theoretical basis for improving the accuracy of caching and prefetching. This article considers both long-term and short-term popularity features of tile access and presents a Markov prefetching model for a cluster-based caching system based on a Zipf distribution. First, it describes the navigation path and the transition probability path for tile access based on the global features of tile access, in order to estimate tile transition probabilities from an access pattern that satisfies Zipf's law. Then, based on temporal and spatial local changes in tile access patterns, the basic Markov model is used to prefetch the tiles with the highest follow-up probability for the current hot tiles, and these tiles are labeled as the set of prefetched objects. Finally, based on the access probability of the prefetched tiles, they are evenly distributed across a cluster-based caching system. This method takes into account both global and local space–time changes in tile access patterns; it not only keeps the set of cached objects relatively stable but also adapts to changes in the access distribution. Experimental results reveal that this method has a higher prefetch hit rate and a shorter average response time for tile requests and thus can improve the efficiency and stability of cluster-based caching systems.
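A minimal sketch of the basic Markov step, with a toy access log in place of real cluster traces: a first-order transition table is estimated from consecutive tile requests, and the most probable successors of the current hot tile become the prefetch set.

    from collections import Counter, defaultdict

    def build_transitions(access_log):
        """First-order Markov model: counts of tile j following tile i in the log."""
        trans = defaultdict(Counter)
        for prev, nxt in zip(access_log, access_log[1:]):
            trans[prev][nxt] += 1
        return trans

    def prefetch(trans, current, k=2):
        """Return the k most probable follow-up tiles for the current hot tile."""
        return [t for t, _ in trans[current].most_common(k)]

    # A toy access log of tile keys (zoom/row/col strings); real logs would be
    # the cluster's request traces, with popularity following Zipf's law.
    log = ["7/1/1", "7/1/2", "7/1/1", "7/1/2", "7/2/2", "7/1/2", "7/2/2", "7/2/3"]
    trans = build_transitions(log)
    print(prefetch(trans, "7/1/2"))   # ['7/2/2', '7/1/1']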

16.
Geocoding is an uncertain process that associates an address or a place name with geographic coordinates. Traditionally, geocoding is performed locally on a stand-alone computer, with the geocoding tools usually bundled in GIS software packages. The use of such tools requires skillful operators who know about the issues of geocoding, that is, reference databases and complicated geocoding interpolation techniques. These days, with the advancement of Internet and Web services technologies, online geocoding provides its functionality to Internet users with ease; thus, they are often unaware of such issues. With an increasing number of online geocoding services, which differ in their reference databases, their geocoding algorithms, and their strategies for dealing with inputs and outputs, it is crucial for service requestors to assess the quality of the geocoded results of each service before choosing one for their applications. This is primarily because any errors associated with the geocoded addresses will be propagated to subsequent decisions, activities, modeling, and analysis. This article examines the quality of five online geocoding services: Geocoder.us, Google, MapPoint, MapQuest, and Yahoo!. The quality of each geocoding service is evaluated with three metrics: match rate, positional accuracy, and similarity. A set of addresses from the US Environmental Protection Agency (EPA) database was used as a baseline. The results were statistically analyzed with respect to different location characteristics. The outcome of this study reveals the differences among the online geocoding services in the quality of their geocoding results, and it can be used as a general guideline for selecting a suitable service that matches an application's needs.
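Two of the three metrics can be sketched directly: match rate as the fraction of addresses the service resolves at all, and positional accuracy as the great-circle distance to the baseline coordinate. The coordinates below are toy values, not EPA baseline data.

    import math
    import statistics

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in metres between two WGS84 points."""
        R = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * R * math.asin(math.sqrt(a))

    # (geocoded result or None, baseline coordinate) per address; toy values.
    results = [
        ((38.8977, -77.0365), (38.8976, -77.0366)),
        (None, (40.7128, -74.0060)),               # service failed to match
        ((34.0525, -118.2440), (34.0522, -118.2437)),
    ]

    matched = [(g, b) for g, b in results if g is not None]
    match_rate = len(matched) / len(results)
    errors = [haversine_m(g[0], g[1], b[0], b[1]) for g, b in matched]
    print(f"match rate: {match_rate:.0%}")
    print(f"median positional error: {statistics.median(errors):.1f} m")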

17.
Spatial data infrastructure (SDI) actors have great expectations for the second-generation SDI currently under development. However, SDIs have many implementation problems at different levels that are delaying the development of the SDI framework. The aims of this article are to identify these difficulties, in the literature and based on our own experience, in order to determine how mature and useful the current SDI phenomena are. We can then determine whether a general reconceptualization is necessary or rather a set of technical improvements and good practices needs to be developed before the second-generation SDI is completed. This study is based on the following aspects: metadata about data and services, data models, data download, data and processing services, data portrayal and symbolization, and mass market aspects. This work aims to find an equilibrium between user-focused geoportals and web service interconnection (the user side vs. the server side). These deep reflections are motivated by a use case in the healthcare area in which we employed the Catalan regional SDI. The use case shows that even one of the best regional SDI implementations can fail to provide the required information and processes even when the required data exist. Several previous studies recognize the value of applying Web 2.0 and user participation approaches but few of these studies provide a real implementation. Another objective of this work is to show that it is easy to complement the classical, international standard-based SDI with a participative Web 2.0 approach. To do so, we present a mash-up portal built on top of the Catalan SDI catalogues.

18.
Current data sharing in the Internet environment is supported using metadata at the file level. This approach has three fundamental shortcomings. First, sharing data from different sources with different semantics, data models, and acquisition methods usually requires data conversion and/or integration such as data conflation, which can be tedious and error-prone. Second, data updated at one source cannot be automatically propagated to other related data or applications. Finally, data sharing at the file level makes it difficult to provide feature-level data for searching, accessing, and exchanging in real time over the Internet. This paper addresses these three issues by proposing a standards-based framework for sharing geospatial data in the transportation application domain. The proposed framework uses a standard data model, the geospatial data model proposed by the Geospatial One-Stop initiative, to harmonize the semantics and data models without the use of data integration methods. It uses the Geography Markup Language (GML) for geospatial data encoding and feature relationships, which provides a basis for propagating updates from one source to other related sources and applications, and for searching and extracting data at the feature level. The framework uses the Web Feature Service (WFS) to search, access and extract data at the feature level from distributed sources. Finally, the Scalable Vector Graphics (SVG) standard is used for data display in the Web browser. Two transportation network datasets are used in a prototype case study to implement the proposed framework. The prototype allows the user to access and extract data at the feature level on the Web from distributed sources without downloading the full data file. It shows that the proposed standards-based feature-level data-sharing system is capable of sharing data without data conflation and of accessing and exchanging data in real time at the feature level. The prototype also shows that changes in one database can be automatically reflected or propagated in a related database without data downloading.
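Feature-level access of this kind can be exercised with a plain WFS 1.1.0 GetFeature request; the key-value parameters below are defined by the WFS specification, while the endpoint and type name are hypothetical.

    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    def get_features(endpoint, type_name, bbox):
        """Issue a standard WFS 1.1.0 GetFeature request for one bounding box and
        count the returned gml:featureMember elements."""
        params = {
            "service": "WFS",
            "version": "1.1.0",
            "request": "GetFeature",
            "typeName": type_name,
            "bbox": ",".join(map(str, bbox)),
        }
        url = endpoint + "?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url, timeout=30) as resp:
            tree = ET.parse(resp)
        gml = "{http://www.opengis.net/gml}"
        return tree.getroot().findall(f".//{gml}featureMember")

    # Hypothetical endpoint and layer name:
    # members = get_features("http://example.org/wfs", "topp:roads",
    #                        (-77.1, 38.8, -76.9, 39.0))
    # print(len(members), "features fetched at the feature level, no file download")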

19.
Volunteered Geographic Information, social media, and data from Information and Communication Technology are emerging sources of big data that contribute to the development and understanding of the spatiotemporal distribution of human population. However, these crowd-sourced or crowd-harvested data sources are inherently anonymous and lack the socioeconomic and demographic attributes needed to examine and explain human mobility and spatiotemporal patterns. In this paper, we investigate an Internet-based demographic data source, personal microdata databases publicly accessible on the World Wide Web (hereafter web demographics), as a potential source of aspatial and spatiotemporal information regarding the landscape of human dynamics. The objectives of this paper are twofold: (1) to develop an analytical framework that identifies mobile population from web demographics as individual-level residential history data, and (2) to explore the geographic and demographic patterns of their migration. Using web demographics of Vietnamese-Americans in Texas collected in 2010 as a case study, this paper (1) addresses entity resolution and identifies the mobile population through the application of a Cost-Sensitive Alternative Decision Tree (CS-ADT) algorithm, (2) investigates migration pathways and clusters, including both short- and long-distance patterns, and (3) analyzes the demographic characteristics of the mobile population and their functional relationship with travel distance. By linking records to physical space at the individual level, this methodology aims to enhance the understanding of human movement at multiple spatial scales.
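The entity-resolution step can be caricatured as follows (this is not the paper's CS-ADT classifier, just an illustration of cost sensitivity): string-similarity features are combined into a match score, and the link/no-link decision weighs a missed match more heavily than a false link, which lowers the effective threshold.

    from difflib import SequenceMatcher

    def sim(a, b):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def match_score(rec1, rec2):
        """Crude pairwise features: name and address similarity, averaged."""
        return 0.5 * sim(rec1["name"], rec2["name"]) + 0.5 * sim(rec1["addr"], rec2["addr"])

    def decide(score, p_match_prior=0.5, cost_fn=5.0, cost_fp=1.0):
        """Cost-sensitive rule: treating the score as a rough match probability,
        link the records when the expected cost of 'no link' exceeds that of
        'link'. With false negatives 5x as costly, the threshold drops well
        below 0.5."""
        p = score * p_match_prior / (score * p_match_prior + (1 - score) * (1 - p_match_prior))
        return p * cost_fn > (1 - p) * cost_fp

    r1 = {"name": "Nguyen, Thanh V.", "addr": "123 Main St, Houston TX"}
    r2 = {"name": "Thanh Nguyen", "addr": "123 Main Street, Houston, TX"}
    s = match_score(r1, r2)
    print(f"score={s:.2f}, same person? {decide(s)}")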

20.
OGC Web Service (OWS) schemas are characterised by complex element structures, large scale and distribution, inconsistent element naming, and multiple coexisting versions. Applying conventional matching approaches can therefore yield not only poor quality but also poor performance. In this article, OWS schema file decomposition, fragment presentation, fragment identification, fragment element matching, and the combination of match results are developed based on an extended FRAG-BASE (fragment-based) schema-matching method. Matching experiments on different versions of the Web Feature Service (WFS) and Web Coverage Service (WCS) schemas show that the extended FRAG-BASE matching achieves an average recall above 80%, an average precision of 90%, and an average overall measure of 85%, while matching efficiency increases by 50% compared with the COMA and CONTEXT matchers. Multi-version WFS retrieval under the Antarctic Spatial Data Infrastructure (AntSDI) data service environment demonstrates the feasibility and superiority of the extended FRAG-BASE method.
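The core fragment-based intuition, sketched on flat element-name lists rather than real XSD trees: decompose each schema into fragments, discard fragment pairs that fail a cheap similarity filter, and run element-level matching only inside the survivors. The thresholds and fragment size are arbitrary assumptions, not FRAG-BASE parameters.

    from difflib import SequenceMatcher

    def name_sim(a, b):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def fragment(elements, size=3):
        """Decompose a flat element list into fixed-size fragments (a crude
        stand-in for decomposing a large OWS schema tree into subtrees)."""
        return [elements[i:i + size] for i in range(0, len(elements), size)]

    def match_schemas(src, dst, frag_threshold=0.3, elem_threshold=0.7):
        """Match fragments first, then only compare elements inside fragment
        pairs whose best cross-name similarity clears a cheap filter: this is
        what keeps large-schema matching tractable."""
        matches = []
        for fs in fragment(src):
            for fd in fragment(dst):
                best = max(name_sim(a, b) for a in fs for b in fd)
                if best < frag_threshold:
                    continue                       # prune the whole fragment pair
                for a in fs:
                    for b in fd:
                        if name_sim(a, b) >= elem_threshold:
                            matches.append((a, b))
        return matches

    wfs100 = ["typeName", "maxFeatures", "featureId", "filter", "propertyName", "srsName"]
    wfs110 = ["typeName", "maxFeatures", "featureid", "Filter", "PropertyName", "srsname"]
    print(match_schemas(wfs100, wfs110))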
