Similar Documents
20 similar documents found (search time: 31 ms)
1.
2.
3.
In a Web service-based distributed environment, individual services must be chained together dynamically to solve complex real-world problems. Semantic Web Services have shown promise for the automatic chaining of Web services. This paper addresses semi-automatic geospatial service chaining through Semantic Web Services-based process planning. Process planning includes three phases: process modeling, process model instantiation, and workflow execution. Ontologies and Artificial Intelligence (AI) planning methods are employed in process planning to help a user dynamically create an executable workflow for earth science applications. The approach was implemented in a common data and service environment enabled by interoperable standards from the OGC and W3C. A case study of the chaining process for wildfire prediction illustrates the applicability of this approach.
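To make the planning idea concrete, here is a toy forward-chaining planner in Python that assembles a service chain from declared inputs and outputs. The service names and data types are invented for illustration and only echo the wildfire case study; the paper's method works over ontologies and richer AI planning.

```python
def plan_chain(services, available, goal):
    """Tiny forward-chaining planner: pick services whose inputs are
    satisfied until the goal data type is producible. `services` maps
    name -> (set of input types, output type). A toy stand-in for the
    ontology-driven planning the paper describes."""
    chain, have = [], set(available)
    while goal not in have:
        step = next((name for name, (ins, out) in services.items()
                     if ins <= have and out not in have), None)
        if step is None:
            raise ValueError("no chain found")
        chain.append(step)
        have.add(services[step][1])
    return chain

# Hypothetical wildfire-prediction chain, echoing the paper's case study.
services = {
    "ReprojectDEM":   ({"dem_raw"}, "dem"),
    "ComputeSlope":   ({"dem"}, "slope"),
    "FuelClassifier": ({"landcover"}, "fuel"),
    "FireRiskModel":  ({"slope", "fuel", "weather"}, "fire_risk"),
}
print(plan_chain(services, {"dem_raw", "landcover", "weather"}, "fire_risk"))
# ['ReprojectDEM', 'ComputeSlope', 'FuelClassifier', 'FireRiskModel']
```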

4.
We present LOST-Tree, a new spatio-temporal structure to manage sensor data loading and caching in a sensor web browser. In the same way that the World Wide Web needs a web browser to load and display web pages, the World-Wide Sensor Web needs a sensor web browser to access distributed and heterogeneous sensor networks. However, most existing sensor web browsers are just mashups of sensor locations and base maps that do not consider the scalability issues of transmitting large amounts of sensor readings over the Internet. While caching is an effective solution for alleviating the latency and bandwidth problems, a method for efficiently loading sensor data from sensor web servers is currently missing. Therefore, we present LOST-Tree as a sensor data loading component that also manages the client-side cache on a sensor web browser. By applying LOST-Tree, redundant transmissions are avoided, enabling efficient loading with cached sensor data. We demonstrate that LOST-Tree is lightweight and scalable in terms of sensor data volume. We implemented LOST-Tree in the GeoCENS sensor web browser for evaluation with a real sensor web dataset.
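A minimal sketch of the loading-and-caching idea: a client keeps already-fetched spatio-temporal cells and issues a network request only on a cache miss. The cell-key scheme and the `fetch_from_server` callback are hypothetical simplifications, not the actual LOST-Tree structure.

```python
import math

class SensorDataCache:
    """Toy client-side cache keyed by (x_tile, y_tile, hour) cells.

    A stand-in for the paper's LOST-Tree: it avoids redundant
    transmissions by fetching only cells not seen before.
    """

    def __init__(self, tile_deg=0.1):
        self.tile_deg = tile_deg        # spatial tile size in degrees
        self.cache = {}                 # cell key -> list of readings

    def _key(self, lon, lat, t_hours):
        return (math.floor(lon / self.tile_deg),
                math.floor(lat / self.tile_deg),
                int(t_hours))

    def query(self, lon, lat, t_hours, fetch_from_server):
        key = self._key(lon, lat, t_hours)
        if key not in self.cache:              # cache miss: one network call
            self.cache[key] = fetch_from_server(key)
        return self.cache[key]                 # cache hit: no transmission
```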

5.
Current data sharing in the Internet environment is supported using metadata at the file level. This approach has three fundamental shortcomings. First, sharing data from different sources with different semantics, data models, and acquisition methods usually requires data conversion and/or integration such as data conflation, which can be tedious and error-prone. Second, data updated at one source cannot be automatically propagated to other related data or applications. Finally, data sharing at the file level makes it difficult to provide feature-level data for searching, accessing, and exchanging in real time over the Internet. This paper addresses these three issues by proposing a standards-based framework for sharing geospatial data in the transportation application domain. The proposed framework uses a standard data model, the geospatial data model proposed by the Geospatial One-Stop initiative, to harmonize the semantics and data models without the use of data integration methods. It uses Geography Markup Language (GML) for geospatial data coding and feature relationships, which provides a basis for propagating data updates from one source to other related sources and applications, and for searching and extracting data at the feature level. The framework uses the Web Feature Service (WFS) to search, access, and extract data at the feature level from distributed sources. Finally, the Scalable Vector Graphics (SVG) standard is used for data display in the Web browser. Two transportation network datasets are used in the prototype case study to implement the proposed framework. The prototype allows the user to access and extract data at the feature level on the Web from distributed sources without downloading the full data file. It shows that the proposed standards-based feature-level data-sharing system is capable of sharing data without conflation and of accessing and exchanging data at the feature level in real time. The prototype also shows that changes in one database can be automatically reflected or propagated in another related database without data downloading.
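Feature-level access of the kind described is what a WFS GetFeature request provides; a sketch using only the standard library follows. The endpoint URL and typeName are placeholders; the KVP parameters follow the OGC WFS convention.

```python
from urllib.parse import urlencode
from urllib.request import urlopen

# Hypothetical WFS endpoint and feature type; the GetFeature key-value
# parameters follow the OGC WFS specification.
base_url = "http://example.org/wfs"
params = {
    "service": "WFS",
    "version": "1.0.0",
    "request": "GetFeature",
    "typeName": "roads",                    # one feature type, not a whole file
    "bbox": "-77.12,38.80,-76.90,39.00",    # fetch only features in this extent
}

with urlopen(base_url + "?" + urlencode(params)) as resp:
    gml = resp.read().decode("utf-8")       # GML-encoded features
print(gml[:200])
```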

6.
Map mashups, as a common way of presenting geospatial information on the Web, are generally created by spatially overlaying thematic information on top of various base maps. This simple overlay approach often produces geometric deficiencies due to geometric uncertainties in the data. The issue is particularly apparent in a multi-scale context, because the thematic data seldom have a level of detail synchronised with the base map. In this study, we propose, develop, implement and evaluate a relative positioning approach based on shared geometries and relative coordinates to synchronise geometric representations for map mashups across several scales. To realise the relative positioning between datasets, we adopt a Linked Data-based technical framework in which the data are organised according to ontologies designed on the GeoSPARQL vocabulary. A prototype system is developed to demonstrate the feasibility and usability of the relative positioning approach. The results show that the approach synchronises and integrates the geometries of the thematic data and the base map effectively, and that the thematic data are automatically tailored for multi-scale visualisation. The proposed framework can be used as a new way of modelling geospatial data on the Web, with merits in terms of both data visualisation and querying.
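As a rough illustration of the Linked Data side, the sketch below builds the kind of GeoSPARQL query such a framework could answer. The ex: property and the WKT polygon are hypothetical; the geo: and geof: terms are standard GeoSPARQL vocabulary.

```python
# A GeoSPARQL query of the kind the proposed framework could answer:
# find thematic features whose geometry lies within a base-map polygon.
query = """
PREFIX geo:  <http://www.opengis.net/ont/geosparql#>
PREFIX geof: <http://www.opengis.net/def/function/geosparql/>
PREFIX ex:   <http://example.org/schema#>

SELECT ?feature ?wkt WHERE {
  ?feature ex:theme "land_use" ;          # hypothetical thematic property
           geo:hasGeometry ?g .
  ?g geo:asWKT ?wkt .
  FILTER(geof:sfWithin(?wkt,
    "POLYGON((13.0 55.5, 13.2 55.5, 13.2 55.7, 13.0 55.7, 13.0 55.5))"^^geo:wktLiteral))
}
"""
print(query)   # submit to any GeoSPARQL-capable endpoint
```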

7.
8.
The Journal of Geography, 2012, 111(6): 217-225

This article situates geospatial technologies as a constructivist tool in the K-12 classroom and examines student experiences with real-time authentic geospatial data provided through a hybrid adventure learning environment. Qualitative data from seven student focus groups demonstrate the effectiveness of using real-time authentic data, peer collaboration, and geospatial technologies in learning geography. We conclude with recommendations about geospatial technology curricula, geospatial lesson design, and providing preservice teachers with geographic technological pedagogical content knowledge, and we encourage further research into the impact, affordances, and pedagogical implications of geospatial technologies and data in the K-12 classroom.

9.
The use of standards in the geospatial domain, such as those defined by the Open Geospatial Consortium (OGC), for exchanging data has brought a great deal of interoperability upon which systems can be built in a reliable way. Unfortunately, these standards are becoming increasingly complex, making their implementation an arduous task. Appropriate software metrics can be very useful for quantifying different properties of the standards that may ultimately suggest solutions to problems related to their complexity. In this article we present an attempt to measure the complexity of the schemas associated with the OGC implementation specifications. We use a comprehensive set of metrics to provide a multidimensional view of this complexity. These metrics can be used to evaluate the impact of design decisions, study the evolution of schemas, and so on. We also present and evaluate different solutions that could be applied to overcome some of the problems associated with the complexity of the schemas.
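Two of the simplest metric families, declaration counts and nesting depth, can be computed straight from an XSD file; a standard-library sketch follows. These are illustrative simplifications, not the article's full metric suite.

```python
import xml.etree.ElementTree as ET

XSD_NS = "{http://www.w3.org/2001/XMLSchema}"

def schema_metrics(xsd_path):
    """Count declared elements/types and measure maximum nesting depth.

    Simplified stand-ins for the multidimensional complexity metrics
    the article discusses.
    """
    root = ET.parse(xsd_path).getroot()
    n_elements = len(root.findall(f".//{XSD_NS}element"))
    n_types = (len(root.findall(f".//{XSD_NS}complexType"))
               + len(root.findall(f".//{XSD_NS}simpleType")))

    def depth(node):
        return 1 + max((depth(c) for c in node), default=0)

    return {"elements": n_elements, "types": n_types, "max_depth": depth(root)}

# e.g. schema_metrics("wfs.xsd") on a downloaded OGC schema file
```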

10.
Discrete global grid systems (DGGSs) are considered promising structures for global geospatial information representation. Square and triangular DGGSs have had the advantage over hexagonal ones in geospatial data processing over the past few decades: despite a significant body of research supporting hexagonal grids as the superior alternative, their application has been hindered partly by the lack of a hierarchy. This study presents an original perspective that combines two types of aperture-4 hexagonal discrete grid systems into a hierarchy. Each cell of the hierarchy is assigned a unique code using a linear quadtree, constructing the hexagonal quaternary balanced structure (HQBS). The mathematical system of HQBS addressing is described, and vector operations on codes, including addition, subtraction, multiplication, and division, are defined. Essential spatial operations for HQBS cell retrieval, transformation between HQBS codes and other coordinate systems, and arrangement of HQBS cells on spherical surfaces were studied and implemented. The accuracy and efficiency of the algorithms were validated through experiments. The results indicate that the average efficiency of cell retrieval using the HQBS is higher than that of other schemes.

11.
Different versions of the Web Coverage Service (WCS) schemas of the Open Geospatial Consortium (OGC) exhibit semantic conflicts. When applying the extended FRAG-BASE schema-matching approach (a schema-matching method based on COMA++, including an improved schema decomposition algorithm and a schema-fragment identification algorithm, which enable COMA++ to support OGC Web Service schema matching), the average recall of WCS schema matching is only 72%, average precision is only 82%, and the average overall measure is only 57%. To improve the quality of multi-version WCS retrieval, we propose a schema-matching method that measures node semantic similarity (NSS). The proposed method is based on WordNet, conjunctive normal form, and a vector space model. A hybrid algorithm based on label meanings and annotations is designed to calculate the similarity between label concepts. We translate the semantic relationships between nodes into a propositional formula and verify the validity of this formula to confirm the semantic relationships. The algorithm first computes the label and node concepts, then calculates the conceptual relationship between the labels, and finally computes the conceptual relationship between nodes. We then use the NSS method in experiments on different versions of WCS. Results show that the average recall of WCS schema matching is greater than 83%, average precision reaches 92%, and the average overall measure reaches 67%.
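The label-similarity step can be approximated with WordNet path similarity between label tokens, as in the sketch below using NLTK's WordNet interface. The best-pair score is a simplification of the paper's hybrid, CNF-validated method.

```python
from nltk.corpus import wordnet as wn   # requires: nltk.download('wordnet')

def label_similarity(label_a, label_b):
    """Best WordNet path similarity over all synset pairs of two labels.

    A simplified stand-in for the paper's label-concept matcher; the
    real method also uses annotations and a propositional validity check.
    """
    best = 0.0
    for sa in wn.synsets(label_a):
        for sb in wn.synsets(label_b):
            sim = sa.path_similarity(sb)
            if sim is not None and sim > best:
                best = sim
    return best

print(label_similarity("coverage", "grid"))   # value depends on word senses
```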

12.
The aim of this article is to provide a basis in evidence for (or against) the much-quoted assertion that 80% of all information is geospatially referenced. For this purpose, two approaches are presented that are intended to capture the portion of geospatially referenced information in user-generated content: a network approach and a cognitive approach. In the network approach, the German Wikipedia is used as a research corpus. It is considered a network with the articles being nodes and the links being edges. The Network Degree of Geospatial Reference (NDGR) is introduced as an indicator to measure the network approach. We define NDGR as the shortest path between any Wikipedia article and the closest article within the network that is labeled with coordinates in its headline. An analysis of the German Wikipedia employing this approach shows that 78% of all articles have a coordinate themselves or are directly linked to at least one article that has geospatial coordinates. The cognitive approach is manifested by the categories of geospatial reference (CGR): direct, indirect, and non-geospatial reference. These are categories that may be distinguished and applied by humans. An empirical study with 380 participants was conducted. The results of both approaches are synthesized with the aim of (1) examining correlations between NDGR and the human conceptualization of geospatial reference and (2) separating geospatial from non-geospatial information. From the results of this synthesis, it can be concluded that 56-59% of the articles within Wikipedia can be considered directly or indirectly geospatially referenced. The article thus describes a method to check the validity of the '80%-assertion' for information corpora that can be modeled using graphs (e.g., the World Wide Web, the Semantic Web, and Wikipedia). For the corpus investigated here (Wikipedia), the '80%-assertion' cannot be confirmed, but would need to be reformulated as a '60%-assertion'.
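NDGR as defined above amounts to a multi-source breadth-first search from the coordinate-labeled articles over reversed links; a minimal sketch follows, with a made-up toy graph.

```python
from collections import deque

def ndgr(links, georeferenced):
    """Shortest link distance from every article to the nearest
    coordinate-labeled article (multi-source BFS). Distance 0 means
    the article itself carries coordinates."""
    # If a -> b and b is georeferenced, then a is one step away,
    # so traverse the reversed link graph.
    rev = {}
    for a, outs in links.items():
        for b in outs:
            rev.setdefault(b, []).append(a)
    dist = {a: 0 for a in georeferenced}
    queue = deque(georeferenced)
    while queue:
        node = queue.popleft()
        for prev in rev.get(node, []):
            if prev not in dist:
                dist[prev] = dist[node] + 1
                queue.append(prev)
    return dist

links = {"A": ["B"], "B": ["C"], "C": [], "D": ["A"]}   # toy link graph
print(ndgr(links, georeferenced={"C"}))  # {'C': 0, 'B': 1, 'A': 2, 'D': 3}
```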

13.
Monitoring and predicting traffic conditions are of utmost importance for reacting to emergency events in time and for computing the real-time shortest travel-time path. Mobile sensors, such as GPS devices and smartphones, are useful for monitoring urban traffic due to their large coverage area and ease of deployment. Many researchers have employed such sensed data to model and predict traffic conditions. To do so, we first have to address the problem of associating GPS trajectories with the road network in a robust manner. Existing methods rely on point-by-point matching to map individual GPS points to a road segment. However, GPS data is imprecise due to noise in GPS signals: coordinates can be off by several meters, so direct mapping of individual points is error prone. Acknowledging that every GPS point is potentially noisy, we propose a radically different approach to overcome inaccuracy in GPS data. Instead of a point-by-point approach, our proposed method considers the set of relevant GPS points in a trajectory that can be mapped together to a road segment. This clustering approach gives us a macroscopic view of the GPS trajectories even under very noisy conditions. Our method clusters points based on the direction of movement as a spatial-linear cluster, ranks the possible route segments in the graph for each group, and searches for the best combination of segments as the overall path for the given set of GPS points. Through extensive experiments on both synthetic and real datasets, we demonstrate that, even with highly noisy GPS measurements, our proposed algorithm outperforms state-of-the-art methods in terms of both accuracy and computational cost.
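The first step, grouping points by direction of movement, can be sketched as splitting a trajectory whenever the heading turns by more than a threshold. The 30-degree threshold and the flat-plane heading formula are illustrative choices, not the paper's parameters.

```python
import math

def split_by_heading(points, max_turn_deg=30.0):
    """Split a GPS trajectory into direction-consistent groups.

    points: list of (lon, lat) in order of travel. A new group starts
    whenever the heading changes by more than max_turn_deg, mimicking
    the paper's spatial-linear clustering by direction of movement.
    """
    def heading(p, q):
        return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))

    groups, current = [], [points[0]]
    prev_h = None
    for p, q in zip(points, points[1:]):
        h = heading(p, q)
        if prev_h is not None and abs((h - prev_h + 180) % 360 - 180) > max_turn_deg:
            groups.append(current)
            current = [p]
        current.append(q)
        prev_h = h
    groups.append(current)
    return groups

track = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]    # east, then north
print([len(g) for g in split_by_heading(track)])    # [3, 3], corner point shared
```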

14.
As an important spatiotemporal simulation approach and an effective tool for developing and examining spatial optimization strategies (e.g., land allocation and planning), geospatial cellular automata (CA) models often require multiple data layers and consist of complicated algorithms in order to deal with the complex dynamic processes of interest and the intricate relationships and interactions between the processes and their driving factors. Moreover, massive amounts of data may be used in CA simulations as high-resolution geospatial and non-spatial data become widely available. Thus, geospatial CA models can be both computationally intensive and data intensive, demanding long computing times and vast memory space. Based on a hybrid parallelism that combines distributed-memory processes with global-memory threads, we developed a parallel geospatial CA model for urban growth simulation over a heterogeneous computer architecture composed of multiple central processing units (CPUs) and graphics processing units (GPUs). Experiments with datasets of California showed that the overall computing time for a 50-year simulation dropped from 13,647 seconds on a single CPU to 32 seconds using 64 GPU/CPU nodes. We conclude that the hybrid parallelism of geospatial CA over emerging heterogeneous computer architectures provides scalable solutions for complex simulations and optimizations with massive amounts of data that were previously infeasible, sometimes impossible, with individual computing approaches.
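The per-cell update that such a model parallelizes can be shown with a vectorized NumPy step in which a cell urbanizes once enough neighbors are urban. The 8-neighbor rule and threshold are invented for illustration; the paper's model adds many driving factors plus an MPI/GPU decomposition.

```python
import numpy as np

def urban_growth_step(urban, threshold=3):
    """One synchronous CA step: a cell becomes urban if at least
    `threshold` of its 8 neighbors are urban. Vectorized with NumPy;
    in the paper, this per-cell rule is what gets distributed across
    CPU processes and GPU threads. (np.roll wraps at the edges, which
    is fine for a toy grid.)"""
    n = sum(np.roll(np.roll(urban, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    return urban | (n >= threshold)

rng = np.random.default_rng(0)
grid = rng.random((256, 256)) < 0.1          # 10% initial urban cells
for _ in range(50):                          # a 50-step toy simulation
    grid = urban_growth_step(grid)
print(int(grid.sum()), "urban cells after 50 steps")
```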

15.
Liu Yan, Gu Chunyan. 地理研究 (Geographical Research), 2012, 31(1): 187-194
Aeronautical information publications are the aeronautical geographic data required by, or produced from, air traffic activities. Their informatized management faces demands such as the centralized management, unified maintenance, and distributed use of multi-source heterogeneous data. As an open spatial data model standard, GML provides a feature-coding method and a data-exchange specification for sharing and exchanging aeronautical geographic data. To meet the demand for standardized aeronautical geographic data in aeronautical information system construction, this paper studies the characteristics of aeronautical geographic data, analyzes the mapping between those data and the GML model, and, taking the construction of a basic aeronautical information database in a route management system as an example, designs a GML-based aeronautical geographic data model and describes the data processing workflow, as a useful step toward a standardized aeronautical information data warehouse.
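What GML-based feature coding looks like can be sketched with the standard library. The aero: Waypoint feature type, its properties, and its namespace URI are hypothetical, while the gml: geometry elements follow the GML specification.

```python
import xml.etree.ElementTree as ET

GML = "http://www.opengis.net/gml"
AERO = "http://example.org/aero"            # hypothetical application schema
ET.register_namespace("gml", GML)
ET.register_namespace("aero", AERO)

# A hypothetical waypoint feature encoded as a GML point, illustrating
# the feature-coding role GML plays in the aeronautical data model.
wp = ET.Element(f"{{{AERO}}}Waypoint")
ET.SubElement(wp, f"{{{AERO}}}identifier").text = "WPT001"
geom_prop = ET.SubElement(wp, f"{{{AERO}}}position")   # geometry-valued property
point = ET.SubElement(geom_prop, f"{{{GML}}}Point")
ET.SubElement(point, f"{{{GML}}}pos").text = "39.90 116.40"  # lat lon

print(ET.tostring(wp, encoding="unicode"))
```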

16.
The polar regions are ideal places for studying upper-atmosphere physical phenomena and solar-terrestrial relationships, and ionospheric TEC obtained from GPS offers high precision, all-weather operation, and wide coverage, so using GPS to study the polar ionosphere is of great significance. The polar ionospheric information monitoring and publishing system designed in this paper comprises three parts: data transfer, data processing, and data publishing. Data transfer covers transmission over a satellite network and format conversion. Data processing uses the dual-frequency GPS data returned each day to solve for single-station VTEC, resolves the estimation of hardware delays, and obtains the VTEC above the station by quadratic modeling; the results show that the method captures the magnitude and variation pattern of ionospheric TEC well. Data publishing provides a real-time query service through the Chinese polar expedition management information system.
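The single-station VTEC computation rests on the standard geometry-free combination of the two GPS frequencies plus a single-layer mapping function; a sketch under those textbook formulas follows, assuming hardware delays have already been removed (estimating them is the system's actual contribution).

```python
import math

F1, F2 = 1575.42e6, 1227.60e6       # GPS L1/L2 carrier frequencies (Hz)
RE, H_ION = 6371e3, 350e3           # Earth radius, ionospheric shell height (m)

def slant_tec(p1, p2):
    """Slant TEC (in TECU) from dual-frequency pseudoranges p1, p2 (m),
    via the geometry-free combination; assumes inter-frequency hardware
    delays have already been calibrated out."""
    stec = (p2 - p1) * (F1**2 * F2**2) / (40.3 * (F1**2 - F2**2))
    return stec / 1e16              # electrons/m^2 -> TEC units

def vertical_tec(stec, zenith_deg):
    """Map slant to vertical TEC with a single-layer model at H_ION."""
    z = math.radians(zenith_deg)
    zp = math.asin(RE / (RE + H_ION) * math.sin(z))  # zenith angle at pierce point
    return stec * math.cos(zp)

stec = slant_tec(p1=22_000_000.0, p2=22_000_003.2)   # 3.2 m delay difference
print(round(vertical_tec(stec, zenith_deg=60.0), 1), "TECU")
```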

17.
This paper presents a new technique for information fusion. Unlike most previous work on information fusion, this paper explores the use of instance-level (extensional) information within the fusion process. It proposes an algorithm that can automatically infer the schema-level structure necessary for information fusion from instance-level information. The approach is illustrated using the example of geospatial land-cover data. The method is then extended to operate under uncertainty, such as in cases where the data are inaccurate or imprecise. The paper describes the implementation of the fusion method within a software prototype. Finally, it discusses several key topics for future research, including applications of this work to spatial data mining and the Semantic Web.

18.
Since 2005, Chinese expedition members have used dual-frequency GPS to carry out high-precision glacier movement observations once a year on the Austre Lovénbreen and Pedersenbreen glaciers near the Arctic Yellow River Station, obtaining precise positions and velocities of the stakes monitored on the glacier surfaces. In April 2009, the team collected dense GPS point data on the two glaciers to survey their surface topography. After analyzing the feasibility of using single-frequency kinematic GPS single-point positioning data for glacier surface topographic surveying, surface topographic data for the two glaciers were obtained through adjustment computation, from which glacier-surface DEMs and contour lines were generated and surface topographic maps were produced. Compared against high-precision control points, the elevation error of the glacier-surface DEM is 0.78 m, within the range of the glaciers' seasonal elevation fluctuation and annual ablation. As the SMART-V1 GPS unit is an important accessory of the pulseEKKO ground-penetrating radar widely used in glaciological research, these conclusions offer a reference for glacier surveying with similar instruments and guidance for processing high-density kinematic GPS single-point positioning data for glacier surface topographic surveying.
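At its core, turning the adjusted GPS points into a surface DEM is scattered-data interpolation onto a regular grid; a SciPy sketch follows, with synthetic points standing in for the survey data. Contour lines for the topographic map can then be traced from the gridded DEM.

```python
import numpy as np
from scipy.interpolate import griddata

# Synthetic stand-ins for adjusted GPS points: (easting, northing) in m,
# elevation in m.
rng = np.random.default_rng(1)
xy = rng.uniform(0, 1000, size=(500, 2))
z = 100 + 0.05 * xy[:, 0] + 5 * np.sin(xy[:, 1] / 150)  # toy glacier surface

# Interpolate the scattered points onto a 10 m grid to form the DEM.
gx, gy = np.mgrid[0:1000:10, 0:1000:10]
dem = griddata(xy, z, (gx, gy), method="linear")        # NaN outside the hull

print(dem.shape, np.nanmin(dem).round(1), np.nanmax(dem).round(1))
```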

19.
Over recent years, massive geospatial information has been produced at a prodigious rate and is usually geographically distributed across the Internet. Grid computing, as a recent development in the landscape of distributed computing, is deemed a good solution for distributed geospatial data management and manipulation. Grid computing technology can thus be applied to integrate various distributed resources into a 'super-computer' that enables efficient distributed geospatial query processing. To realize this vision, an effective mechanism for building the distributed geospatial query workflow in the Grid environment needs to be elaborately designed. The workflow-building technology aims to automatically transform a global geospatial query into an equivalent distributed query process in the Grid. Toward this goal, detailed steps and algorithms for building the distributed geospatial query workflow in the Grid environment are discussed in this article. Moreover, we develop corresponding software tools that enable Grid-based geospatial queries to be run against multiple data resources. Experimental results demonstrate that the proposed methodology is feasible and correct.

20.
Several algorithms have been proposed to generate a polygonal 'footprint' to characterize the shape of a set of points in the plane. One widely used type of footprint is the χ-shape. Based on the Delaunay triangulation (DT), χ-shapes are guaranteed to be simple (Jordan) polygons. This paper presents for the first time an incremental χ-shape algorithm, capable of processing point data streams. Our incremental χ-shape algorithm allows both insertion and deletion operations, and can handle streaming individual points and multiple point sets. Experimental results demonstrate that the incremental algorithm is significantly more efficient than the existing batch χ-shape algorithm for processing a wide variety of point data streams.
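For context, the batch χ-shape construction that the incremental algorithm speeds up can be sketched as: triangulate, then keep removing the longest boundary edge above a length threshold while removal preserves a simple boundary. The regularity check below (opposite vertex not already on the boundary) only loosely follows the published batch algorithm.

```python
import numpy as np
from scipy.spatial import Delaunay

def chi_shape(points, length_threshold):
    """Batch chi-shape sketch: strip long Delaunay boundary edges.

    Removes the boundary triangle behind the longest boundary edge while
    that edge exceeds `length_threshold` and removal keeps the footprint
    a simple polygon. Returns the remaining triangles as index triples.
    """
    pts = np.asarray(points, dtype=float)
    triangles = {tuple(sorted(s)) for s in Delaunay(pts).simplices}

    def edge_len(e):
        return float(np.linalg.norm(pts[e[0]] - pts[e[1]]))

    def boundary_edges():
        count = {}
        for t in triangles:
            for e in ((t[0], t[1]), (t[0], t[2]), (t[1], t[2])):
                count[e] = count.get(e, 0) + 1
        return [e for e, c in count.items() if c == 1]   # unshared = boundary

    changed = True
    while changed:
        changed = False
        bnd = boundary_edges()
        on_boundary = {v for e in bnd for v in e}
        for e in sorted(bnd, key=edge_len, reverse=True):
            if edge_len(e) <= length_threshold:
                break
            owner = next(t for t in triangles if set(e) <= set(t))
            opposite = (set(owner) - set(e)).pop()
            if opposite not in on_boundary:              # regularity check
                triangles.remove(owner)
                changed = True
                break
    return triangles

rng = np.random.default_rng(2)
pts = rng.random((60, 2))
print(len(chi_shape(pts, length_threshold=0.2)), "triangles retained")
```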
