Similar Documents
20 similar documents found (search time: 547 ms)
1.
《The Cartographic journal》2013,50(4):274-285
Abstract

A structure recognition technique is presented that can be employed for contextual building and built-up area generalisation in medium-scale topographic maps. Owing to various spatial configurations, a contextual mechanism is necessary to achieve acceptable results in cartographic generalisation. Spatial structures are usually implicit in data, and advanced analysis and processing methods are required to detect them. This technique is based on auxiliary geometric data structures and spatial analysis methods. A case study is performed with a topographic data set, using an interface developed in an object-oriented geographic information system (O-O GIS). The proposed approach was found to assist and improve automation.

2.
ABSTRACT

Data on land use and land cover (LULC) are a vital input for policy-relevant research, such as modelling of the human population, socioeconomic activities, transportation, environment, and their interactions. In Europe, CORINE Land Cover has been the only data set covering the entire continent consistently, but with rather limited spatial detail. Other data sets have provided much better detail, but either have covered only a fraction of Europe (e.g. Urban Atlas) or have been thematically restricted (e.g. Copernicus High Resolution Layers). In this study, we processed and combined diverse LULC data to create a harmonised, ready-to-use map covering 41 countries. By doing so, we increased the spatial detail (from 25 to one hectare) and the thematic detail (by seven additional LULC classes) compared to the CORINE Land Cover. Importantly, we decomposed the class ‘Industrial and commercial units’ into ‘Production facilities’, ‘Commercial/service facilities’ and ‘Public facilities’ using machine learning to exploit a large database of points of interest. The overall accuracy of this thematic breakdown was 74%, despite the confusion between the production and commercial land uses, often attributable to noisy training data or mixed land uses. Lessons learnt from this exercise are discussed, and further research direction is proposed.

3.
ABSTRACT

To evaluate progress towards achieving the Sustainable Development Goals (SDGs), a global indicator framework was developed by the UN Inter-Agency and Expert Group on Sustainable Development Goal Indicators. In this paper, we propose an improved methodology and a set of workflows for calculating SDG indicators. The main improvements consist of using moderate- and high-spatial-resolution satellite data and state-of-the-art deep learning methodology for land cover classification and for assessing land productivity. Within the European Network for Observing our Changing Planet (ERA-PLANET), three SDG indicators are calculated. In this research, harmonized Landsat and Sentinel-2 data are analyzed and used for land productivity analysis and yield assessment, while Landsat 8, Sentinel-2 and Sentinel-1 time series are utilized for crop mapping. We calculate the following SDG indicators for the whole territory of Ukraine: 15.1.1 – ‘Forest area as a proportion of total land area’; 15.3.1 – ‘Proportion of land that is degraded over total land area’; and 2.4.1 – ‘Proportion of agricultural area under productive and sustainable agriculture’. Workflows for calculating these indicators were implemented in a Virtual Laboratory Platform. We conclude that newly available high-resolution remote sensing products can significantly improve our capacity to assess several SDG indicators through dedicated workflows.
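Indicator 15.1.1 reduces to a simple ratio over a classified land-cover raster. A minimal sketch in Python, assuming a hypothetical class coding and a toy grid (neither is from the paper):

```python
import numpy as np

# Hypothetical class codes for a classified land-cover raster.
FOREST, CROPLAND, URBAN, WATER = 1, 2, 3, 4

def sdg_15_1_1(land_cover: np.ndarray) -> float:
    """SDG indicator 15.1.1: forest area as a proportion of total land area.
    Water pixels are excluded from the land total."""
    land = land_cover != WATER
    forest = land_cover == FOREST
    return forest.sum() / land.sum()

# Toy 4x4 classification: 6 forest pixels, 2 water pixels, 14 land pixels.
grid = np.array([[1, 1, 2, 3],
                 [1, 1, 2, 4],
                 [1, 1, 3, 4],
                 [2, 2, 3, 3]])
print(round(sdg_15_1_1(grid), 3))  # 6 forest / 14 land pixels
```

In practice the classification itself would come from the deep-learning land-cover map, and the ratio would be computed per administrative unit rather than per array.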

4.
Abstract

Several innovative ‘participatory sensing’ initiatives are under way in East Africa. They can be seen as local manifestations of the global notion of Digital Earth. The initiatives aim to amplify the voice of ordinary citizens, improve citizens' capacity to directly influence public service delivery and hold local government accountable. The popularity of these innovations is, among other things, a local reaction to the partial failure of the millennium development goals (MDGs) to deliver accurate statistics on public services in Africa. Empowered citizens, with access to standard mobile phones, can ‘sense’ via text messages and report failures in the delivery of local government services. The public disclosure of these reports on the web and other mass media may pressure local authorities to take remedial action. In this paper, we outline the potential and research challenges of a ‘participatory sensing’ platform, which we call a ‘human sensor web.’ Digital Africa's first priority could be to harness continent-wide and national data as well as local information resources, collected by citizens, in order to monitor, measure and forecast MDGs.

5.
Abstract

Grid computing is deemed a good solution for the Digital Earth infrastructure. Various geographically dispersed geospatial resources can be connected and merged into a ‘supercomputer’ using grid-computing technology. On the other hand, geosensor networks offer a new perspective for collecting physical data dynamically and modeling a real-time virtual world. Integrating geosensor networks and grid computing in a geosensor grid can be compared to equipping the geospatial information grid with ‘eyes’ and ‘ears.’ Thus, real-time information from the physical world can be processed, correlated, and modeled to enable complex and advanced geospatial analyses on a geosensor grid with high-performance computation capability. Several issues and challenges need to be overcome before the geosensor grid becomes a reality. In this paper, we propose an integrated framework, comprising the geosensor network layer, the grid layer and the application layer, to address these design issues. Key technologies of the geosensor grid framework are discussed, and a geosensor grid testbed is set up to illustrate the proposed framework and improve our geosensor grid design.

6.
《The Cartographic journal》2013,50(4):321-328
Abstract

Map generalisation is an abstraction process that seeks to transform the representation of cartographic objects from the original version into a coarser one. The characteristics of cartographic objects and the arrangement of map features have to be observed and preserved in a generalisation process. A method is developed for typifying drainages while preserving their structural characteristics, i.e. presenting the drainages with a reduced number of rivers under the constraint of preserving the original structure in terms of the type and distribution of the rivers. We apply Töpfer's radical law to calculate the number of rivers to be retained on the generalised map. The drainages share the number of retained rivers in proportion to the number of their tributaries. In each drainage, the shared number is divided among the rivers based on the dendritic decomposition of the drainage. We implemented and tested the method in a Java environment. Results from case studies show that the method effectively preserves the original structures of the drainages on the generalised maps.
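Töpfer's radical law and the proportional-sharing step described above can be sketched as follows (Python rather than the paper's Java; the scales and tributary counts are hypothetical examples):

```python
import math

def topfer_retained(n_source: int, scale_source: float, scale_target: float) -> int:
    """Töpfer's radical law: number of features to retain when
    generalising from scale 1:scale_source to 1:scale_target."""
    return round(n_source * math.sqrt(scale_source / scale_target))

def share_by_tributaries(total_retained: int, tributaries: list[int]) -> list[int]:
    """Distribute the retained river count among drainages in proportion
    to their tributary counts (largest-remainder rounding)."""
    total = sum(tributaries)
    raw = [total_retained * t / total for t in tributaries]
    counts = [int(r) for r in raw]
    # hand out remaining units to the largest fractional parts
    leftovers = sorted(range(len(raw)), key=lambda i: raw[i] - counts[i], reverse=True)
    for i in leftovers[: total_retained - sum(counts)]:
        counts[i] += 1
    return counts

# 120 rivers at 1:25,000 generalised to 1:100,000 -> 60 retained,
# shared among three drainages with 30, 20 and 10 tributaries.
n = topfer_retained(120, 25_000, 100_000)
print(n, share_by_tributaries(n, [30, 20, 10]))
```

The per-drainage division by dendritic decomposition is the paper's own contribution and is not reproduced here.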

7.
《The Cartographic journal》2013,50(4):315-322
Abstract

In the area of volunteered geographical information (VGI), the issue of spatial data quality is a clear challenge. The data that are contributed to VGI projects do not comply with standard spatial data quality assurance procedures, and the contributors operate without central coordination and strict data collection frameworks. However, similar to the area of open source software development, it is suggested that the data hold an intrinsic quality assurance measure through the analysis of the number of contributors who have worked on a given spatial unit. The assumption that as the number of contributors increases so does the quality is known as ‘Linus’ Law’ within the open source community. This paper describes three studies that were carried out to evaluate this hypothesis for VGI using the OpenStreetMap dataset, showing that this rule indeed applies in the case of positional accuracy.
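The ‘Linus’ Law’ hypothesis can be checked by grouping positional errors by the number of contributors per spatial unit; the observations below are invented for illustration and are not data from the OpenStreetMap study:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical sample: (contributors who edited a grid cell, positional error in metres)
observations = [(1, 8.2), (1, 7.5), (2, 6.1), (2, 5.9),
                (3, 4.8), (4, 4.1), (5, 3.9), (6, 3.7)]

by_contributors = defaultdict(list)
for n, err in observations:
    by_contributors[n].append(err)

# Mean positional error per contributor count; under 'Linus' Law'
# the means should fall as the contributor count rises.
means = {n: mean(errs) for n, errs in sorted(by_contributors.items())}
print(means)
```

With real data one would also test the trend statistically rather than eyeballing the means.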

8.
ABSTRACT

The human–cyber–physical world produces a considerable volume of multi-modal spatio-temporal data, thus leading to information overload. Visual variables are used to transform information into visual forms that are perceived by the powerful human vision system. However, previous studies of visual variables focused on methods of ‘drawing information’ without considering ‘intelligence’ derived from balancing ‘importance’ and ‘unimportance’. This paper proposes semantic visual variables to support an augmented geovisualization that aims to avoid exposing users to unnecessary information by highlighting goal-oriented content over redundant details. In this work, we first give definitions of several concepts and then design a semiotic model for depicting the mechanisms of augmented geovisualization. We also provide an in-depth discussion of semantic visual variables based on a hierarchical organization of the original visual variables, and we analyse the critical factors that affect the choice of visualization forms and visual variables. Finally, a typical application is used to illustrate the relevance of this study.

9.
ABSTRACT

Bertin’s first book, Semiology of Graphics, was published in 1967. His second book, Graphics and Graphic Information Processing, was subsequently published in 1977. The word “processing” in the title of the second book is interesting because in those days there were no personal computers with an interactive display system. But in Bertin’s laboratory there were many kinds of tool kits – essentially manual tools for developing thematic maps and for data analysis. Bertin’s methods were concerned with making thematic maps and visualizing data. Maps, and more generally graphics, were represented by sets of cartographic symbols. Thus, they are abstractions that demand both theoretical and technical literacy to represent and understand. If the representation is systematic, a sort of tool kit might be necessary, because the representation demands consistency based on the theory. Otherwise a cartographer faces the risk of an unstable and unintelligible representation. In this paper, we discuss the distinction between tool kits intended for an automated system and those intended for a process-assisting system. The latter might be useful and necessary for developing a graphic way of thinking. This investigation refers to Bertin’s books, materials conserved at the National Archives in Paris, and other related software developed later.

Abbreviation: EHESS: École des Hautes Études en Sciences Sociales, which succeeded the École Pratique des Hautes Études in 1975

10.
Abstract

Many of the traditional data visualization techniques, which proved to be supportive for exploratory analysis of datasets of moderate sizes, fail to fulfil their function when applied to large datasets. There are two approaches to coping with large amounts of data: data selection, when only a portion of data is displayed, and data aggregation, i.e. grouping data items and considering the groups instead of the original data. None of these approaches alone suits the needs of exploratory data analysis, which requires consideration of data on all levels: overall (considering a dataset as a whole), intermediate (viewing and comparing collective characteristics of arbitrary data subsets, or classes), and elementary (accessing individual data items). Therefore, it is necessary to combine these approaches, i.e. build a tool showing the whole set and arbitrarily defined subsets (object classes) in an aggregated way and superimposing this with a representation of arbitrarily selected individual data items.

We have achieved such a combination of approaches by modifying the parallel coordinate plot technique. These modifications are described and analysed in the paper.

11.
ABSTRACT

Since Al Gore articulated the vision for Digital Earth in 1998, a wide range of research in this field has been published in journals. However, little attention has been paid to bibliometric analysis of the literature on Digital Earth. This study uses a bibliometric methodology to study publications related to Digital Earth in the Science Citation Index and Social Science Citation Index databases (via the Web of Science online services) during the period from 1998 to 2015. In this paper, we developed a novel keyword set for ‘Digital Earth’. Using this keyword set, 11,061 scientific articles from 23 subject categories were retrieved. Based on the retrieved articles, we analyzed the spatiotemporal characteristics of publication outputs, the subject categories and the major journals. Then, authors’ performance, affiliations, cooperation, and funding institutes were evaluated. Finally, keywords were examined. Through keyword clustering, research hotspots in the field of Digital Earth were detected. The results coincide well with the position of Digital Earth research in the context of big data.

12.
ABSTRACT

This paper aims to present results and considerations regarding the use of remote sensing big data for large-scale archaeological and Cultural Heritage management applications. For this purpose, the Earth Engine platform developed by Google was exploited. Earth Engine provides a robust and expandable cloud platform where several freely distributed remote sensing big data collections, such as Landsat, can be accessed, analysed and visualized. Two different applications are presented. The first is based on the evaluation of multi-temporal Landsat series datasets for the detection of buried Neolithic tells (‘magoules’) in the area of Thessaly, Greece, using linear orthogonal equations. The second exploits European-scale multi-temporal DMSP-OLS Night-time Lights Time Series to visualize the impact of urban sprawl in the vicinity of UNESCO World Heritage sites and monuments. Both applications highlight the considerable opportunities that big data can offer to the fields of archaeology and Cultural Heritage, while also demonstrating the great challenges that still need to be overcome to make the exploitation of big data manageable and fruitful for future applications.

13.
Abstract

One of the main challenges cartographers face is to transmit the information appearing in maps in a way that is simple to understand for everyone. The most frequent strategy for this is to display visual information hierarchically, that is, by exaggerating the size or width of specific elements so that they appear more important than others. Nevertheless, recent research has shown that when reading maps, people also pay attention to configurational information, i.e. information about the relationships among the elements appearing in a map, to retrieve its hierarchical structure. This is the topic of this paper: it investigates the role of metric and configurational information in enabling people to retrieve hierarchical information from maps. A set of exercises was designed and carried out in which people were asked to identify ‘the main street’ of different specially designed layouts, whose paths were sometimes widened to make them appear more important. The main findings show that people retrieved hierarchical information by paying attention to a combination of metric and configurational factors.

14.
Abstract

Cartography in general, and building solid landscape models in particular, requires an interdisciplinary set of skills in order to be done well. Traditional handcrafted construction methods provide quality results, but are extremely labour-intensive and therefore costly. Modern methods using digital terrain models (DTMs) and computer numerical control (CNC) milling are fast and accurate, but the finished models are visually less than optimal. Solutions are proposed using DTMs and CNC milling to create landscape models in which the initial shaping is done mechanically and the fine details are carved by hand. This ‘balanced approach’ to landscape modelling combines the time- and cost-advantages of modern digital technology with the quality of traditional handcrafted techniques resulting in highly accurate landscape models which still retain the artistic ‘feel’ of the human touch.

15.
Abstract

The Shuttle Radar Topography Mission DEM (SRTM-GL1), the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global DEM (GDEM-V2), the recently released Advanced Land Observing Satellite (‘DAICHI’) DEM (AW3D30) and the Indian National Cartosat-1 DEM v3 (CartoDEM-V3.1) provide free topographic data at a 30-m resolution for the Indian peninsula. In this study, the vertical accuracy of each of these DEMs is evaluated against high-accuracy dual-frequency GNSS measurements of millimetre-level accuracy. An extensive field investigation was carried out using a stratified random fast-static DGPS survey to collect 117 high-accuracy ground control points in a predominantly agricultural catchment. Further, the effects of land cover, slope and low-lying coastal zones on DEM vertical accuracy were also analysed and presented.
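Vertical accuracy against GNSS control points is typically summarised by the mean error (bias) and the RMSE; a minimal sketch, with made-up elevations rather than the study's 117 control points:

```python
import math

def vertical_accuracy(dem_heights, gnss_heights):
    """Mean error (bias) and RMSE of DEM elevations against GNSS control points."""
    diffs = [d - g for d, g in zip(dem_heights, gnss_heights)]
    me = sum(diffs) / len(diffs)
    rmse = math.sqrt(sum(e * e for e in diffs) / len(diffs))
    return me, rmse

# Hypothetical elevations at five control points (metres)
dem  = [101.2, 98.7, 103.4, 99.9, 100.5]
gnss = [100.0, 99.0, 102.0, 100.0, 100.0]
me, rmse = vertical_accuracy(dem, gnss)
print(f"ME={me:.2f} m, RMSE={rmse:.2f} m")
```

Stratifying the same computation by land-cover class or slope band gives the per-stratum breakdown the study reports.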

16.
Line generalisation by repeated elimination of points
Abstract

This paper presents a new approach to line generalisation which uses the concept of ‘effective area’ for progressive simplification of a line by point elimination. Two coastlines are used to compare its performance with that of the widely used Douglas–Peucker algorithm. The results from the area-based algorithm compare favourably with manual generalisation of the same lines. It is capable of achieving both imperceptible minimal simplifications and caricatural generalisations. By careful selection of cut-off values, it is possible to use the same algorithm for scale-dependent and scale-independent generalisations. More importantly, it offers scope for modelling cartographic lines as consisting of features within features so that their geometric manipulation may be modified by application- and/or user-defined rules and weights. The paper examines the merits and limitations of the algorithm and the opportunities it offers for further research and progress in the field of line generalisation.
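The point-elimination idea can be sketched in a few lines of Python. Here the ‘effective area’ of a point is the area of the triangle it forms with its two neighbours; this naive sketch recomputes all areas after every deletion (an efficient implementation would use a priority queue and update only the neighbours), and the sample line is hypothetical:

```python
def effective_area(p, q, r):
    """Area of the triangle formed by point q and its neighbours p and r."""
    return abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1])) / 2

def simplify(points, min_area):
    """Repeatedly eliminate the interior point with the smallest effective
    area until every remaining interior point exceeds min_area."""
    pts = list(points)
    while len(pts) > 2:
        areas = [effective_area(pts[i - 1], pts[i], pts[i + 1])
                 for i in range(1, len(pts) - 1)]
        smallest = min(range(len(areas)), key=areas.__getitem__)
        if areas[smallest] >= min_area:
            break
        del pts[smallest + 1]  # +1: areas[i] belongs to interior point pts[i+1]
    return pts

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7)]
print(simplify(line, min_area=1.0))
```

Raising `min_area` (the cut-off value) yields progressively coarser, caricatural generalisations from the same ranking.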

17.
ABSTRACT

Light detection and ranging (LiDAR) data are essential for scientific discoveries in Earth and ecological sciences, environmental applications, and responding to natural disasters. While collecting LiDAR data over large areas is quite feasible, the subsequent processing steps typically involve large computational demands. Efficiently storing, managing, and processing LiDAR data are the prerequisite steps for enabling these LiDAR-based applications. However, handling LiDAR data poses major geoprocessing challenges due to data and computational intensity. To tackle these challenges, we developed a general-purpose scalable framework coupled with a sophisticated data decomposition and parallelization strategy to efficiently handle ‘big’ LiDAR data collections. The contributions of this research are (1) a tile-based spatial index to manage big LiDAR data in the scalable and fault-tolerant Hadoop distributed file system, (2) two spatial decomposition techniques to enable efficient parallelization of different types of LiDAR processing tasks, and (3) the coupling of existing LiDAR processing tools with Hadoop, so that a variety of LiDAR data processing tasks can be conducted in parallel in a highly scalable distributed computing environment through an online geoprocessing application. A proof-of-concept prototype is presented to demonstrate the feasibility, performance, and scalability of the proposed framework.
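The tile-based indexing idea in contribution (1) can be illustrated with a simple in-memory sketch; the tile size and points are hypothetical, and the actual framework persists tiles in HDFS rather than in a dictionary:

```python
from collections import defaultdict

def tile_key(x: float, y: float, tile_size: float) -> tuple[int, int]:
    """Map a point to the (column, row) of the fixed-size tile containing it."""
    return int(x // tile_size), int(y // tile_size)

def build_tile_index(points, tile_size):
    """Group (x, y, z) points by tile; each tile can then be stored as one
    file-system block and processed independently in parallel."""
    index = defaultdict(list)
    for x, y, z in points:
        index[tile_key(x, y, tile_size)].append((x, y, z))
    return index

# Three hypothetical LiDAR returns, 10 m tiles.
pts = [(12.0, 7.5, 101.2), (13.4, 8.1, 99.8), (27.9, 3.2, 105.0)]
index = build_tile_index(pts, tile_size=10.0)
print(sorted(index))  # tiles (1, 0) and (2, 0)
```

Because every point maps to exactly one tile key, tiles form natural units for both storage locality and task-level parallelism.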

18.
Abstract

While graphic variables in 2D maps have been extensively investigated, 4D cartography is still a largely unexplored field. In this paper, we investigate the usefulness of 4D maps (three spatial dimensions plus time) for cartographically illustrating spatio-temporal environmental phenomena. The presented approach focuses mostly on explorative research rather than on the enhancement and extension of existing methods and principles. The user study described in the paper shows that 4D cartography is not a well-explored research area and that many experienced map users try to apply their knowledge from 2D maps to 4D dynamic visualisations. Thus, in order to foster the discussion within the community, we formulate several basic research questions for the area of 4D cartography, ranging from methods for representing time in 4D visualisations and understanding the temporal context to finding generic methods for optimized temporal generalisation and a consistent definition of graphical variables for 3D and 4D.

19.
《The Cartographic journal》2013,50(3):249-267
Abstract

Analysis of eye movements has been used for decades as a method for assessing the performance of visual stimuli. Until recently, this has mainly been applied to static and non-cartographic stimuli, but due to technological developments and the reduced cost of equipment, interactive and cartographic applications are now feasible. A recently suggested analysis method applies Hägerstrand’s Space-Time-Cube (STC) to eye movement data. However, in an interactive three-dimensional STC, identifying and exploring key behaviours can be difficult. In order to ameliorate these difficulties, we propose a variation of the STC method which uses two-dimensional projections of the STC onto the XT and YT planes. These two-dimensional projections are found to facilitate rapid identification of significant patterns in the data set. A prototype implementing this and other dynamic methods has been developed, and is presented with examples illustrating the benefits of working with two-dimensional projections of the STC.
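Projecting the STC onto the XT and YT planes is simply a coordinate selection from each (x, y, t) gaze sample; a minimal sketch with hypothetical samples (screen coordinates in pixels, time in seconds):

```python
# Each gaze sample is (x, y, t); the STC plots them as a 3D trajectory.
samples = [(100, 200, 0.0), (110, 205, 0.5), (150, 230, 1.0)]

# Projecting the cube onto the XT and YT planes yields two 2D views in
# which temporal patterns (dwells, revisits) are easier to spot.
xt = [(t, x) for x, y, t in samples]
yt = [(t, y) for x, y, t in samples]
print(xt)
print(yt)
```

In the prototype these two projections would be rendered as linked 2D plots alongside the interactive 3D cube.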

20.
ABSTRACT

Turning Earth observation (EO) data consistently and systematically into valuable global information layers is an ongoing challenge for the EO community. Recently, the term ‘big Earth data’ emerged to describe massive EO datasets that confront analysts and their traditional workflows with a range of challenges. We argue that the altered circumstances must be actively addressed by an evolution of EO to revolutionise its application in various domains. The disruptive element is that analysts and end-users increasingly rely on Web-based workflows. In this contribution we study selected systems and portals, put them in the context of these challenges and opportunities, and highlight selected shortcomings and possible future developments that we consider relevant for the imminent uptake of big Earth data.
