Similar Articles
 Found 20 similar articles (search time: 156 ms)
1.
Cloud computing has been considered the next-generation computing platform, with the potential to address the data and computing challenges in the geosciences. However, only a limited number of geoscientists have adopted this platform for their scientific research, mainly due to two barriers: 1) selecting an appropriate cloud platform for a specific application can be challenging, as many different cloud services are available, and 2) existing general-purpose cloud platforms are not designed to support geoscience applications, algorithms and models. To tackle these barriers, this research designs a hybrid cloud computing (HCC) platform that utilizes and integrates computing resources across different organizations to build a unified geospatial cloud computing platform. The platform can manage different types of underlying cloud infrastructure (e.g., private or public clouds) and enables geoscientists to test and leverage cloud capabilities through a web interface. Additionally, it provides geospatial cloud services, such as workflow as a service, on top of the common cloud services (e.g., infrastructure as a service) provided by general cloud platforms. Geoscientists can therefore easily create a model workflow on the fly by recruiting the models needed for a geospatial application or task. An HCC prototype is developed, and a dust storm simulation is used to demonstrate the capability and feasibility of such a platform in facilitating the geosciences by leveraging cross-organization computing and model resources.

2.
Topographic maps and aerial photographs are particularly useful when geoscientists face fieldwork tasks such as selecting paths for observation, establishing sampling schemes, or defining field regions. These types of images are crucial in bedrock geologic mapping, a cognitively complex field-based problem-solving task. Geologic mapping requires the geologist to correctly identify rock types and three-dimensional bedrock structures from often partial or poor-quality outcrop data while navigating through unfamiliar terrain. This paper compares the walked routes of novice to expert geologists working in the field (n = 66) with the results of a route planning and navigation survey of a similar population of geologists (n = 77). Results clearly show that geologists with previous mapping experience make quick and decisive determinations about field areas from available imagery and maps, regardless of whether or not they are physically present in the field area. Recognition of geologic features enabled experts to form and verbalize a specific plan for travel through a landscape based on those features. Novices were less likely to develop specific travel route plans and less likely to identify critical landscape cues from aerial photographs.

3.
This article presents a spatiotemporal model for scheduling applications that is driven by the events and activities individuals plan and manage every day. The framework is presented using an ontological approach in which ontologies at different levels of generalization, e.g. domain, application, and task ontologies, are linked together through participation and inheritance relationships. S_Events are entered into a schedule as a new S_Entry, or modifications can be made to existing entries, including reschedule, postpone, change location, and delete, as schedules vary over time. These schedule updates are formalized as changes to the planned start and end times and the planned locations of S_Entries, expressed using SWRL, a Semantic Web rule language. SWRL is also used for reasoning about schedule changes and the space-time conflicts that can occur. The sequence of entries in a schedule gives rise to S_trajectories representing the locations that individuals plan to visit in order to carry out their schedule, adding a further spatial element to the framework. A prototype Geoscheduler application maps S_Entries against a timeline, offering a spatiotemporal visualization of scheduled activities that shows how a schedule evolves over space-time and affects an individual's spatiotemporal accessibility.
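The space-time conflict reasoning described above can be illustrated with a minimal Python sketch. The `SEntry` fields and the interval-overlap rule are simplifying assumptions for illustration; the article's SWRL rules also reason over travel time between planned locations:

```python
from dataclasses import dataclass

@dataclass
class SEntry:
    name: str
    start: float   # planned start time
    end: float     # planned end time
    location: str

def conflicts(a: SEntry, b: SEntry) -> bool:
    """A space-time conflict in its simplest form: the two planned
    time intervals overlap (half-open, so back-to-back entries are fine)."""
    return a.start < b.end and b.start < a.end

e1 = SEntry("seminar", 9, 10, "campus")
e2 = SEntry("dentist", 9.5, 10.5, "clinic")
# e1 and e2 conflict; an entry starting exactly at 10 would not.
```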

4.
Virtual globes have been developed to showcase different types of data, combining a digital elevation model with basemaps of high-resolution satellite imagery. They have thus become a standard for sharing spatial data and information, although they lack toolboxes dedicated to formatting large geoscientific datasets. From this perspective, we developed Geolokit: a free and lightweight software application that allows geoscientists – and every scientist working with spatial data – to import their data (e.g., sample collections, structural geology, cross-sections, field pictures, georeferenced maps), and to handle and transcribe them to Keyhole Markup Language (KML) files. The KML files are then automatically opened in the Google Earth virtual globe, where the spatial data can be accessed and shared. Geolokit comes with a large number of dedicated tools that can process and display: (i) multi-point data, (ii) scattered-data interpolations, (iii) structural geology features in 2D and 3D, (iv) rose diagrams, stereonets and dip-plunge polar histograms, (v) cross-sections and oriented rasters, (vi) georeferenced field pictures, and (vii) georeferenced maps and projected gridding. Together with Geolokit, Google Earth therefore becomes not only a powerful georeferenced data viewer but also a stand-alone work platform. The toolbox (available online at http://www.geolokit.org) is written in Python, a high-level, cross-platform programming language, and is accessible through a graphical user interface designed to run in parallel with Google Earth, through a workflow that requires no additional third-party software. Geolokit's features are demonstrated using typical datasets from two case studies illustrating its applicability at multiple scales of investigation: a petro-structural investigation of the Ile d'Yeu orthogneissic unit (Western France) and data collection from the Mariana oceanic subduction zone (Western Pacific).
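The data-to-KML transcription step that Geolokit automates can be illustrated with a stdlib-only Python sketch. The field names and sample records below are hypothetical, and Geolokit's actual converters are far richer; this only shows the shape of the KML a sample collection might become:

```python
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"

def samples_to_kml(samples):
    """Transcribe (name, lon, lat) sample records into a KML document string."""
    ET.register_namespace("", KML_NS)
    kml = ET.Element(f"{{{KML_NS}}}kml")
    doc = ET.SubElement(kml, f"{{{KML_NS}}}Document")
    for name, lon, lat in samples:
        pm = ET.SubElement(doc, f"{{{KML_NS}}}Placemark")
        ET.SubElement(pm, f"{{{KML_NS}}}name").text = name
        point = ET.SubElement(pm, f"{{{KML_NS}}}Point")
        # KML coordinate order is longitude,latitude[,altitude]
        ET.SubElement(point, f"{{{KML_NS}}}coordinates").text = f"{lon},{lat}"
    return ET.tostring(kml, encoding="unicode")

# Hypothetical sample collection (name, longitude, latitude)
kml_text = samples_to_kml([("YEU-01", -2.33, 46.71), ("YEU-02", -2.35, 46.70)])
```

A file written from `kml_text` opens directly in Google Earth as a set of placemarks.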

5.
Geospatially Enabled Scientific Workflows offer a promising toolset to help researchers in the earth observation domain with many aspects of the scientific process. One such aspect is access to distributed earth observation data and computing resources. Earth observation research often utilizes large datasets requiring extensive CPU and memory resources in their processing. These resource-intensive processes can be chained; the sequence of processes (and their provenance) makes up a scientific workflow. Despite the exponential growth in the capacity of desktop computers, their resources are often insufficient for the scientific workflow processing tasks at hand. By integrating distributed computing capabilities into a geospatially enabled scientific workflow environment, it is possible to provide researchers with a mechanism to overcome the limitations of the desktop computer. Most of the effort on extending scientific workflows with distributed computing capabilities has focused on the web services approach, as exemplified by the OGC's Web Processing Service and by GRID computing. The approach to leveraging distributed computing resources described in this article instead uses remote objects via RPyC and the dynamic properties of the Python programming language. The Vistrails environment has been extended to allow for geospatial processing through the EO4Vistrails package (http://code.google.com/p/eo4vistrails/). To allow these geospatial processes to be executed seamlessly on distributed resources such as cloud computing nodes, the Vistrails environment has been extended with both multi-tasking and distributed processing capabilities. The multi-tasking capabilities are required to allow Vistrails to run side-by-side processes, a capability it does not currently have. The distributed processing capabilities are achieved through the use of remote objects and mobile code via RPyC.

6.
Describes the configuration of a combined VC++ and AutoCAD Map 3D 2008 development environment and the project property settings, and explains in detail the workflow and code implementation of a checking program for map-collected data.

7.
Recent technological advances in geospatial data gathering have created massive data sets with better spatial and temporal resolution than ever before. These large spatiotemporal data sets have motivated a challenge for Geoinformatics: how to model changes and design good-quality software. Many existing spatiotemporal data models represent how objects and fields evolve over time. However, to properly capture changes, it is also necessary to describe events. As a contribution to this research, this article presents an algebra for spatiotemporal data. Algebras give formal specifications at a high level of abstraction, independently of programming languages, which helps to develop reliable and expressive applications. Our algebra specifies three data types as generic abstractions built on real-world observations: time series, trajectory and coverage. Based on these abstractions, it defines object and event types. The proposed data types and functions can model and capture changes in a large range of applications, including location-based services, environmental monitoring, public health, and natural disasters.
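Two of the three abstractions can be sketched as plain data types. The field layouts and the `location_at` operation below are illustrative assumptions, not the article's formal algebra, but they convey the flavor of observation-based generic types:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class TimeSeries:
    """Variation of a measured value at a fixed location."""
    location: Tuple[float, float]                 # (lon, lat)
    observations: List[Tuple[float, float]]       # (time, value)

@dataclass
class Trajectory:
    """Variation of a moving object's location over time."""
    object_id: str
    fixes: List[Tuple[float, float, float]]       # (time, lon, lat)

    def location_at(self, t: float) -> Optional[Tuple[float, float]]:
        """Most recent known location at time t (step interpolation)."""
        known = [(ft, lon, lat) for ft, lon, lat in self.fixes if ft <= t]
        if not known:
            return None
        _, lon, lat = max(known)   # tuple order: latest fix wins
        return (lon, lat)

traj = Trajectory("bus-12", [(0, 10.0, 20.0), (5, 10.5, 20.2), (9, 11.0, 20.5)])
```

Event types in the algebra would then be defined over changes observed in such values and trajectories.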

8.
A method based on workflow technology and Open Geospatial Consortium (OGC) specifications is proposed to establish a universal workflow conceptual model in a network environment. In this paper, a soil fertility evaluation conceptual model was developed by analyzing a GIS-based fertility evaluation method and extracting its dynamic variables; verifying the model's feasibility then amounts to instantiating the conceptual model. The proposed conceptual model achieves the following: all data acquisition and processing functions are packaged into OGC-compliant service models; these service models are organized into an ordered processing chain on the workflow platform using Petri nets; and the fertility evaluation is realized on the workflow platform by invoking the processing chain. Results showed that processing functions and data can be shared in the network environment and that the network workflow model can be realized with workflow technology. The successful implementation of the fertility evaluation demonstrates the feasibility of the network-based universal workflow conceptual model, and the flexibility of the modeling method is further shown by reconstructing the workflow model.
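The idea of an ordered processing chain of packaged services can be sketched in a few lines of Python. The step names and toy computations below are hypothetical stand-ins; the actual system chains OGC-compliant web services organized by Petri nets:

```python
def run_chain(steps, data):
    """Execute an ordered processing chain, feeding each step's output
    to the next -- a minimal stand-in for the service chain above."""
    for name, step in steps:
        data = step(data)
    return data

# Hypothetical fertility-evaluation chain (names and logic illustrative only)
chain = [
    ("load_soil_indicators", lambda d: d + ["pH", "N", "P", "K"]),
    ("interpolate_surfaces", lambda d: {k: 1.0 for k in d}),
    ("weighted_overlay",     lambda d: sum(d.values()) / len(d)),
]
score = run_chain(chain, [])
```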

9.
The geospatial sciences face grand information technology (IT) challenges in the twenty-first century: data intensity, computing intensity, concurrent-access intensity and spatiotemporal intensity. These challenges require a computing infrastructure that can: (1) better support discovery, access and utilization of data and data processing, relieving scientists and engineers of IT tasks so they can focus on scientific discoveries; (2) provide real-time IT resources to enable real-time applications, such as emergency response; (3) deal with access spikes; and (4) provide more reliable and scalable service for massive numbers of concurrent users to advance public knowledge. The emergence of cloud computing provides a potential solution: an elastic, on-demand computing platform to integrate observation systems, parameter-extracting algorithms, phenomena simulations, analytical visualization and decision support, and to provide social impact and user feedback – the essential elements of the geospatial sciences. We discuss the utilization of cloud computing to support these intensities by reporting on our investigations of how cloud computing could enable the geospatial sciences and how spatiotemporal principles, the kernel of the geospatial sciences, could be utilized to ensure the benefits of cloud computing. Four research examples analyze how to: (1) search, access and utilize geospatial data; (2) configure computing infrastructure to enable the computability of intensive simulation models; (3) disseminate and utilize research results for massive numbers of concurrent users; and (4) adopt spatiotemporal principles to support spatiotemporally intensive applications. The paper concludes with a discussion of opportunities and challenges for spatial cloud computing (SCC).

10.
The implementation of social network applications on mobile platforms has significantly elevated mobile social networking activity. Mobile social networking offers a channel for recording an individual's spatiotemporal behaviors when the location-detecting capabilities of devices are enabled. It also facilitates the study of time geography at the individual level, which has previously suffered from a scarcity of georeferenced movement data. In this paper, we report on the use of georeferenced tweets to display and analyze the spatiotemporal patterns of daily user trajectories. For georeferenced tweets with both location information (longitude and latitude values) and a recorded creation time, we apply a space-time cube approach for visualization. Compared with traditional methodologies for time geography studies, such as the travel-diary approach, analytics using social media data present challenges broadly associated with Big Data, including high velocity, large volume, and heterogeneity. For this study, a batch processing system was developed to extract spatiotemporal information from each tweet and then create trajectories for each individual mobile Twitter user. Using social media data in time geographic research offers study-area flexibility, continuous observation, and non-involvement with contributors. For example, during every 30-minute cycle, we collected tweets created by about 50,000 Twitter users living in a geographic region stretching from New York City to Washington, DC. Each tweet indicates the exact location of its creator at the moment of posting; the linked tweets thus show a Twitter user's movement trajectory in space and time. This study explores using data-intensive computing to process Twitter data into spatiotemporal information that can recreate the space-time trajectories of its creators.
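The core of such a batch step, linking each user's georeferenced tweets into a time-ordered trajectory, can be sketched as follows. The dictionary keys (`user`, `created_at`, `lon`, `lat`) are hypothetical field names, not the Twitter API's schema:

```python
from collections import defaultdict

def build_trajectories(tweets):
    """Group georeferenced tweets by user and sort each group by creation
    time, yielding one space-time trajectory per user."""
    by_user = defaultdict(list)
    for tw in tweets:
        by_user[tw["user"]].append((tw["created_at"], tw["lon"], tw["lat"]))
    # Sorting the (time, lon, lat) tuples orders each trajectory by time.
    return {user: sorted(points) for user, points in by_user.items()}

tweets = [
    {"user": "a", "created_at": 2, "lon": -74.0, "lat": 40.7},
    {"user": "b", "created_at": 1, "lon": -77.0, "lat": 38.9},
    {"user": "a", "created_at": 1, "lon": -74.1, "lat": 40.6},
]
trajs = build_trajectories(tweets)
```

At scale, the same grouping would run over each 30-minute batch of collected tweets.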

11.
SensePlace3 (SP3) is a geovisual analytics framework and web application that supports overview + detail analysis of social media, focusing on extracting meaningful information from the Twitterverse. SP3 leverages social media related to crisis events. It differs from most existing systems by enabling an analyst to obtain place-relevant information from tweets that have implicit as well as explicit geography. Specifically, SP3 is able not just to utilize the explicit geography of geolocated tweets but also to analyze implicit geography by recognizing and geolocating references both in tweet text, which indicates locations tweeted about, and in Twitter profiles, which indicates locations affiliated with users. Key features of SP3 reported here include flexible search and filtering capabilities to support information foraging; an ingest, processing, and indexing pipeline that provides near real-time access to big streaming data; and a novel strategy for implementing a web-based multi-view visual interface with dynamic linking of entities across views. The SP3 system architecture was designed to support crisis management applications, but its design flexibility makes it easily adaptable to other domains. We also report on a user study that provided input to the SP3 interface design and suggests next steps for effective spatiotemporal analytics using social media sources.

12.
Maps are explicitly positioned within the realms of power, representation, and epistemology; this article sets out to explore how these ideas are manifest in the academic Geographic Information Science (GIScience) literature. We analyze 10 years of literature (2005–2014) from top tier GIScience journals specific to the geoweb and geographic crowdsourcing. We then broaden our search to include three additional journals outside the technical GIScience journals and contrast them to the initial findings. We use this comparison to discuss the apparent technical and social divide present within the literature. Our findings demonstrate little explicit engagement with topics of social justice, marginalization, and empowerment within our subset of almost 1200 GIScience papers. The social, environmental, and political nature of participation, mapmaking, and maps necessitates greater reflection on the creation, design, and implementation of the geoweb and geographic crowdsourcing. We argue that the merging of the technical and social has already occurred in practice, and for GIScience to remain relevant for contributors and users of crowdsourced maps, researchers and practitioners must heed two decades of calls for substantial and critical engagement with the geoweb and crowdsourcing as social, environmental, and political processes.

13.
Dot maps have become a popular way to visualize discrete geographic data. Yet, beyond showing how the data are spatially distributed, dot maps are often visually cluttered in terms of consistency, overlap, and representativeness. Existing clutter reduction techniques like jittering, refinement, distortion, and aggregation also address this issue, but do so by arbitrarily displacing dots from their exact location, removing dots from the map, changing the spatial reference of the map, or reducing its level of detail, respectively. We present BinSq, a novel visualization technique to compare variations in dot density patterns without visual clutter. Based on a careful synthesis of existing clutter reduction techniques, BinSq reduces the wide variety of dot density variations on the map to a representative subset of density intervals that are more distinguishable. The subset is derived from a nested binning operation that introduces order and regularity to the map. Thereafter, a dot prioritization operation improves the representativeness of the map by equalizing visible data values to correspond with the actual data. In this paper, we describe the algorithmic implementation of BinSq, explore its parametric design space, and discuss its capabilities in comparison to six existing clutter reduction techniques for dot maps.
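The core idea of reducing many raw density values to a few representative intervals can be sketched with a simple grid binning. This is not the BinSq algorithm itself (its nested binning and dot prioritization are more involved); the cell size and class thresholds below are illustrative assumptions:

```python
import bisect

def quantize_densities(dots, cell=1.0, breaks=(2, 5, 20)):
    """Count dots per grid cell, then map each raw count onto a small
    ordered set of density classes: class i means the count has reached
    at least i of the thresholds in `breaks`."""
    counts = {}
    for x, y in dots:
        key = (int(x // cell), int(y // cell))
        counts[key] = counts.get(key, 0) + 1
    return {key: bisect.bisect_right(breaks, n) for key, n in counts.items()}

# Two dots share cell (0, 0); one dot falls in cell (2, 2).
classes = quantize_densities([(0.2, 0.3), (0.7, 0.1), (2.5, 2.5)])
```

Cells with similar counts collapse into the same class, which is what makes the resulting density intervals distinguishable on the map.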

14.
First, multi-year snail survey records, geographic map sheets of various kinds, and environmental data for Hubei Province were compiled; the map data were then imported into a GeoDatabase using ArcGIS. With the province's village-level vector map as the base map and the corresponding international codes as the join field, the Joins and Relates tool was used to link the compiled snail survey and environmental data tables, building an integrated snail-infestation database. The database unifies data from different sources, is intuitive and easy to visualize, provides government and researchers with a dynamic, quantitative decision-support tool for analyzing snail infestation, and lays the foundation for building an early-warning model of snail dispersal.

15.
Crowdsourcing functions of the living city from Twitter and Foursquare data
Urban functions are closely related to people's spatiotemporal activity patterns, transportation needs, and a city's business distribution and development trends. Studies investigating urban functions have used different data sources, such as remotely sensed imagery, observation, photography, and cognitive maps. However, these data sources usually suffer from low spatial, temporal, and thematic resolution. This article investigates human activities to understand urban functions through crowdsourced social media data. In this study, we mined Twitter and Foursquare data to extract and analyze six types of human activities. The spatiotemporal analysis revealed hotspots of different activity intensities at different temporal resolutions. We also applied the classification model in a real-time system to extract information on various urban functions. This study demonstrates the significance and usefulness of social sensing in analyzing urban functions. By combining different platforms of social media data and analyzing people's geo-tagged city experience, this article contributes to leveraging voluntary local knowledge to better depict human dynamics, discover spatiotemporal city characteristics, and convey information about cities.

16.
Extracting bamboo forest information for China from remote sensing using a decision tree combined with spectral unmixing
Bamboo forests are a distinctive and important forest resource of subtropical China, yet existing methods cannot rapidly and accurately extract the spatiotemporal distribution of bamboo at the national scale. To address this problem, this study uses MODIS NDVI and reflectance products from 2003, 2008 and 2014, together with province-level Landsat classification data, to propose a national bamboo-forest extraction method based on a decision tree combined with spectral unmixing. First, forest-land distribution for China is obtained by maximum-likelihood classification; next, a decision-tree model built on the forest-land layer extracts the national bamboo distribution; finally, linear least-squares spectral unmixing yields a national bamboo abundance map, from which bamboo area is calculated. The results show that: (1) the producer's and user's accuracies of the forest-land maps for all three periods exceed 90%, with a mean Kappa coefficient of 0.93, laying the foundation for bamboo extraction; (2) the decision-tree model built with the C5.0 algorithm extracts the spatiotemporal distribution of bamboo well, with classification accuracies around 80% for all three periods; (3) after spectral unmixing, the estimated provincial bamboo areas correlate strongly with forest-inventory areas, with R² of 0.98, 0.97 and 0.95 and RMSE ranging from 39,200 to 95,800 ha, indicating that the estimated national bamboo area agrees well with reality. The proposed method, combining a C5.0 decision tree with spectral unmixing of MODIS data, achieves accurate extraction of the national spatiotemporal distribution of bamboo forests and provides technical means and data support for dynamic monitoring and management of national bamboo resources.
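The linear least-squares unmixing step has a closed form in the simplest two-endmember case, shown in the sketch below. The reflectance values are invented for illustration; the study works with MODIS bands and more elaborate endmember selection:

```python
def unmix_two_endmembers(pixel, em_a, em_b):
    """Least-squares abundance f of endmember A for one pixel, assuming a
    two-endmember linear mixing model r = f*A + (1-f)*B per band, with the
    sum-to-one constraint built in; f is clipped to [0, 1]."""
    num = sum((p - b) * (a - b) for p, a, b in zip(pixel, em_a, em_b))
    den = sum((a - b) ** 2 for a, b in zip(em_a, em_b))
    return min(1.0, max(0.0, num / den))

# Hypothetical 3-band reflectances: bamboo vs. other-forest endmembers.
# This pixel lies exactly midway between the two spectra.
f_bamboo = unmix_two_endmembers([0.25, 0.35, 0.45],
                                [0.20, 0.30, 0.40],
                                [0.30, 0.40, 0.50])
```

Summing per-pixel abundances times pixel area over the abundance map gives the area estimate compared against inventory data above.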

17.
A Robust Set-Inversion via Interval Analysis method in a bounded-error framework is used to compute three-dimensional location zones in real time, at a given confidence level. This approach differs significantly from the usual Gaussian error model paradigm, since the satellite positions and the pseudorange measurements are represented by intervals encompassing the true value with a particular level of confidence. The method computes a location zone recursively, using contractions and bisections of an arbitrarily large initial location box. Such an approach can also handle an arbitrary number of erroneous measurements using a q-relaxed solver and allows the integration of geographic and cartographic information such as digital elevation models or three-dimensional maps. With enough data redundancy, inconsistent measurements can be detected and even rejected. The integrity risk of the location zone comes only from the measurement bound settings, since the solver itself is guaranteed. A method for setting these bounds for a particular location-zone confidence level is proposed. An experimental validation using real L1 code measurements and a digital elevation model is also reported to illustrate the performance of the method on real data.
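The contract-and-bisect loop at the heart of set inversion can be shown in one dimension. This toy sketch (not the paper's 3-D, q-relaxed solver) classifies subintervals whose image under a function lies inside a measurement interval, bisecting undecided intervals down to a width tolerance:

```python
def sivia_1d(f_range, y_lo, y_hi, x_lo, x_hi, eps=0.01):
    """Toy 1-D Set Inversion via Interval Analysis.  `f_range(a, b)` must
    return guaranteed bounds of f over [a, b]; here the caller supplies one
    (exact for a monotone f).  Returns subintervals of [x_lo, x_hi] whose
    image is entirely inside [y_lo, y_hi]."""
    inside, stack = [], [(x_lo, x_hi)]
    while stack:
        a, b = stack.pop()
        lo, hi = f_range(a, b)
        if lo >= y_lo and hi <= y_hi:
            inside.append((a, b))            # image entirely inside [y]
        elif hi < y_lo or lo > y_hi:
            pass                             # image entirely outside: discard
        elif b - a > eps:
            m = (a + b) / 2                  # undecided: bisect and recurse
            stack += [(a, m), (m, b)]
    return inside

# f(x) = x^2 is monotone on [0, 4], so its exact range on [a, b] is [a^2, b^2];
# inverting y in [1, 4] should recover x in [1, 2].
zones = sivia_1d(lambda a, b: (a * a, b * b), 1.0, 4.0, 0.0, 4.0)
```

Undecided boundary slivers narrower than `eps` are simply dropped, which is what makes the returned zones an inner approximation of the solution set.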

18.
General principles underlying the study of spatial inequality are outlined from a Soviet perspective, followed by more specific coverage of guidelines for its portrayal in cartographic form. Questions addressed in the development of a center-periphery model for isarithmic mapping of socioeconomic differences in Hungary include the selection of appropriate indices and the sampling of data points for mapping, as well as methods for data normalization and comparison. Examples of both aggregate and more narrowly focused maps of living conditions are included. Translated from: Vestnik Moskovskogo Universiteta, geografiya, 1985, No. 4, pp. 68-74.

19.
The reliability of habitat maps that have been generated using Geographic Information Systems (GIS) and image processing of remotely sensed data can be overestimated. Habitat suitability and spatially explicit population viability models are often based on these products without explicit knowledge of the effects of these mapping errors on model results. While research has considered errors in population modeling assumptions, there is no standardized method for measuring the effects of inaccuracies resulting from errors in landscape classification. Using landscape‐scale maps of existing vegetation developed for the USDA Forest Service in southern California from Landsat Thematic Mapper satellite data and GIS modeling, we performed a sensitivity analysis to estimate how mapping errors in vegetation type, forest canopy cover, and tree crown size might affect delineation of suitable habitat for the California spotted owl (Strix occidentalis occidentalis). The resulting simulated uncertainty maps showed an increase in the estimated area of suitable habitat types. Further analysis measuring the fragmentation of the additional patches showed that they were too small to be useful as habitat areas.

20.
Existing research on spatiotemporal databases has concentrated on models and languages and lacks actual system implementations. To address this problem, a method for implementing a spatiotemporal database system using Informix DataBlade technology is presented, covering the underlying spatiotemporal data model, the implementation process of the spatiotemporal DataBlade, and experimental results.
