Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
Using user-group surveys and fuzzy evaluation methods, this paper comprehensively assesses and cross-compares the current visual quality of Internet map search engine cases, and performs multivariate visual analysis of fine-grained rating differences by category and user group. The overall rating is "good", with foreign services slightly outperforming domestic ones. The indicator weights show that color configuration, interface composition, and road representation are the primary determinants of the visual quality of Internet maps. Finally, based on user-centered and cross-disciplinary collaboration principles, the paper proposes...

2.
Using geographic information systems to link administrative databases with demographic, social, and environmental data allows researchers to use spatial approaches to explore relationships between exposures and health. Traditionally, spatial analysis in public health has focused on the county, ZIP code, or tract level because of limitations to geocoding at highly resolved scales. Using 2005 birth and death data from North Carolina, we examine our ability to geocode population-level datasets at three spatial resolutions – ZIP code, street, and parcel. We achieve high geocoding rates at all three resolutions, with statewide street geocoding rates of 88.0% for births and 93.2% for deaths. We observe differences in geocoding rates across demographics and health outcomes, with lower geocoding rates in disadvantaged populations and the most dramatic differences occurring across the urban-rural spectrum. Our results suggest that highly resolved spatial data architectures for population-level datasets are viable through geocoding individual street addresses. We recommend routinely geocoding administrative datasets to the highest spatial resolution feasible, allowing public health researchers to choose the spatial resolution used in analysis based on an understanding of the spatial dimensions of the health outcomes and exposures being investigated. Such research, however, must acknowledge how disparate geocoding success across subpopulations may affect findings.

3.
This article describes a methodology for allocating demographic microdata to small enumeration areas such as census tracts, in the presence of underlying ambiguities. Maximum Entropy methods impute population weights that are constrained to match a set of census tract-level summary statistics. Once allocated, the household characteristics are summarized to revise estimates of tract-level demographic summary statistics, and to derive measures of ambiguity. The revised summary statistics are compared with original tract summaries within a context of expected variation. Allocation ambiguity is quantified for each household as a function of the distribution of imputed sample weights over all census tracts, and by computed metrics of confusion and variety of allocation to any census tract. The process reported here allows differentiation of households with regard to inherent ambiguity in the allocation decision. Ambiguity assessment represents an important component that has been neglected in spatial allocation work to date but can be seen as important additional knowledge for demographers and users of small area estimates. For the majority of tested variables, the revised tract-level summaries correlate highly with original tract summary statistics. In addition to assessments for individual households, it is also possible to compute average allocation ambiguity for individual tracts, and to associate this with demographic characteristics not utilized in the allocation process.

4.
The need for better Web search tools is receiving increasing attention. About 20% of the queries currently submitted to search engines include geographic references. Thus, it is particularly important to work with the semantics of such queries, both by understanding the terminology and by recognizing geographic references in natural language text. In this paper, we explore the use of natural language expressions, which we call positioning expressions, to perform geographic searches on the Web, without resorting to geocoded data or gazetteers. Such positioning expressions denote the location of a subject of interest with respect to a landmark. Our approach leads to a query expansion technique that can be explored by virtually any keyword-based search engine. Results obtained in our experiments show an expressive improvement over the traditional keyword-based search and a potential path for tackling many kinds of common geographic queries.
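To illustrate the query-expansion idea this abstract describes, here is a minimal sketch: a positioning expression ("subject near landmark") is turned into several exact-phrase keyword queries. The preposition list and phrasing rules are hypothetical stand-ins, not the paper's actual expansion rules.

```python
# Hypothetical preposition variants used to expand one positioning expression.
PREPOSITIONS = ["near", "close to", "next to", "beside", "in front of"]

def expand_query(subject: str, landmark: str) -> list[str]:
    """Generate exact-phrase query variants from a subject and a landmark."""
    return [f'"{subject} {prep} {landmark}"' for prep in PREPOSITIONS]

variants = expand_query("hotels", "the Eiffel Tower")
for v in variants:
    print(v)   # e.g. "hotels near the Eiffel Tower"
```

Each variant can be submitted to an ordinary keyword-based search engine, which is what makes the technique engine-agnostic.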

5.
Current search engines in most geospatial data portals tend to induce users to focus on a single data-characteristic dimension (e.g. popularity or release date). This approach largely fails to take account of users' multidimensional preferences for geospatial data, and hence may likely result in a less than optimal user experience in discovering the most applicable dataset. This study reports a machine learning framework to address the ranking challenge, the fundamental obstacle in geospatial data discovery, by (1) identifying a number of ranking features of geospatial data to represent users' multidimensional preferences by considering semantics, user behavior, spatial similarity, and static dataset metadata attributes; (2) applying a machine learning method to automatically learn a ranking function; and (3) proposing a system architecture to combine existing search-oriented open source software, semantic knowledge base, ranking feature extraction, and machine learning algorithm. Results show that the machine learning approach outperforms other methods, in terms of both precision at K and normalized discounted cumulative gain. As an early attempt at utilizing machine learning to improve search ranking in the geospatial domain, we expect this work to set an example for further research and open the door towards intelligent geospatial data discovery.
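Normalized discounted cumulative gain (NDCG), one of the two evaluation metrics named above, is straightforward to compute. The sketch below uses the standard definition with made-up relevance grades; the paper's exact gain formulation may differ.

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k results."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    """DCG of the actual ranking divided by the DCG of the ideal ranking."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Graded relevance (0-3) of the datasets a ranker returned, in rank order.
ranking = [3, 2, 3, 0, 1]
print(round(ndcg_at_k(ranking, 5), 3))   # → 0.972
```

A perfectly ordered ranking scores 1.0; penalties grow the further a relevant dataset is pushed down the list.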

6.
Integrating Arc Hydro Features with a Schematic Network
A framework for integrating GIS features with processing engines to simulate hydrologic behavior is presented. The framework is designed for compatibility with the ArcGIS ModelBuilder environment, and utilizes the data structure provided by the SchemaLink and SchemaNode feature classes from the ArcGIS Hydro data model. SchemaLink and SchemaNode form the links and nodes, respectively, in a schematic network representing the connectivity between hydrologic features pertinent to the movement of surface water in the landscape. A specific processing engine is associated with a given schematic feature, depending on the type of feature the schematic feature represents. Processing engines allow features to behave as individual hydrologic processors in the landscape. The framework allows two types of processes for each feature, a Receive process and a Pass process. Schematic network features operate with four types of values: received values, incremental values, total values, and passed values. The framework assumes that the schematic network is dendritic, and that no backwater effects occur between schematic features. A case study is presented for simulating bacterial loading in Galveston Bay in Texas from point and nonpoint sources. A second case study is presented for simulating rainfall-runoff response and channel routing for the Llano River in Texas.
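The receive/pass bookkeeping described above can be sketched in a few lines for a dendritic (tree-like) network with no backwater effects: each node's total value is its own incremental value plus everything received from upstream, and that total is what it passes downstream. The node names, loads, and pass-through rule here are illustrative, not taken from the framework itself.

```python
# Toy dendritic network: each child drains to exactly one downstream parent.
network = {"A": "C", "B": "C", "C": "outlet"}
# Local (incremental) load generated at each node, e.g. bacterial loading.
incremental = {"A": 5.0, "B": 3.0, "C": 2.0}

def totals(network, incremental):
    """Total value at each node = received upstream totals + incremental value."""
    total = dict(incremental)
    # Visit nodes in topological order so each node passes before its parent receives.
    for node in ["A", "B", "C"]:
        parent = network[node]
        if parent in total:
            total[parent] += total[node]   # Pass process: send total downstream
    return total

print(totals(network, incremental))   # C accumulates A + B + its own load
```

The dendritic assumption is what makes a single topologically ordered sweep sufficient; a looped or backwater-affected network would need iterative routing instead.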

7.
We introduce a new method for visualizing and analyzing information landscapes of ideas and events posted on public web pages through customized web-search engines and keywords. This research integrates GIScience and web-search engines to track and analyze public web pages and their web contents with associated spatial relationships. Web pages searched by clusters of keywords were mapped with real-world coordinates (by geolocating their Internet Protocol addresses). The resulting maps represent web information landscapes consisting of hundreds of populated web pages searched by selected keywords. By creating a Spatial Web Automatic Reasoning and Mapping System prototype, researchers can visualize the spread of web pages associated with specific keywords, concepts, ideas, or news over time and space. These maps may reveal important spatial relationships and spatial context associated with selected keywords. This approach may provide a new research direction for geographers to study the diffusion of human thought and ideas. A better understanding of the spatial and temporal dynamics of the 'collective thinking of human beings' over the Internet may help us understand various innovation diffusion processes, human behaviors, and social movements around the world.

8.
To address the data-source and technical problems that enterprises and individuals face when building online spatial information services, this study investigates map-API-based Web map services and their application patterns. It first analyzes the requirements and characteristics of Web map services for enterprises and individuals, identifying the problems in current Web map application development and possible solutions; it then summarizes the characteristics, functions, and working principles of open map APIs and proposes a technical framework for map-API-based Web map services; finally, it demonstrates the approach with a campus map service application built on an open API. The results show that map APIs can solve the data-source problem of web map services and reduce development costs, indicating good application prospects.

9.
GNSS ambiguity resolution is a key issue in high-precision relative geodetic positioning and navigation applications. It is a problem of integer programming plus integer quality evaluation. Different integer search estimation methods have been proposed for the integer solution of ambiguity resolution. A slow rate of convergence is the main obstacle for existing methods when tens of ambiguities are involved. Herein, integer search estimation for GNSS ambiguity resolution based on lattice theory is proposed. It is mathematically shown that the closest lattice point problem is the same as the integer least-squares (ILS) estimation problem and that lattice reduction speeds up the searching process. We have implemented three integer search strategies: Agrell, Eriksson, Vardy, Zeger (AEVZ), modification of Schnorr-Euchner enumeration (M-SE) and modification of Viterbo-Boutros enumeration (M-VB). The methods have been numerically implemented in several simulated examples under different scenarios and over 100 independent runs. The decorrelation process (or unimodular transformations) has been first used to transform the original ILS problem to a new one in all simulations. We have then applied different search algorithms to the transformed ILS problem. The numerical simulations have shown that AEVZ, M-SE, and M-VB are about 320, 120 and 50 times faster than LAMBDA, respectively, for a search space of dimension 40. This number could change to about 350, 160 and 60 for dimension 45. The AEVZ is shown to be faster than MLAMBDA by a factor of 5. Similar conclusions could be made using the application of the proposed algorithms to the real GPS data.
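The ILS problem at the heart of this abstract is easy to state in miniature. The brute-force 2D search below is only a toy illustration (not the AEVZ, LAMBDA, or enumeration algorithms themselves, all of which avoid exhaustive search): it shows that when the float ambiguities are strongly correlated, simple component-wise rounding can differ from the true ILS minimizer. All numbers are made up for illustration.

```python
def ils_brute_force(float_sol, W, radius=3):
    """Search integer vectors near float_sol, minimizing the weighted squared norm
    (z - float_sol)^T W (z - float_sol) for a 2x2 weight matrix W."""
    fa, fb = float_sol
    best, best_cost = None, float("inf")
    for a in range(round(fa) - radius, round(fa) + radius + 1):
        for b in range(round(fb) - radius, round(fb) + radius + 1):
            da, db = a - fa, b - fb
            cost = W[0][0]*da*da + 2*W[0][1]*da*db + W[1][1]*db*db
            if cost < best_cost:
                best, best_cost = (a, b), cost
    return best

float_sol = (1.45, 2.40)
W = [[10.0, 9.5], [9.5, 10.0]]   # strongly correlated ambiguities
print(ils_brute_force(float_sol, W))   # ILS minimizer, not naive rounding (1, 2)
```

The decorrelation (unimodular transformation) step mentioned in the abstract reshapes W so that rounding-like searches converge quickly; the exhaustive loop above is exactly the cost that grows unmanageable in tens of dimensions.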

10.
This research proposes a method for capturing "relatedness between geographical entities" based on the co-occurrences of their names on web pages. The basic assumption is that a higher count of co-occurrences of two geographical places implies a stronger relatedness between them. The spatial structure of China at the provincial level is explored from the co-occurrences of two provincial units in one document, extracted by a web information retrieval engine. Analysis of the co-occurrences and topological distances between all pairs of provinces indicates that: (1) spatially close provinces generally have similar co-occurrence patterns; (2) the frequency of co-occurrences exhibits a power law distance decay effect with an exponent of 0.2; and (3) the co-occurrence matrix can be used to capture the similarity/linkage between neighboring provinces and fed into a regionalization method to examine the spatial organization of China. The proposed method provides a promising approach to extracting valuable geographical information from massive web pages.
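A power-law distance decay exponent such as the 0.2 reported above is typically estimated by ordinary least squares on log-transformed values. The sketch below recovers the exponent from synthetic, noise-free co-occurrence counts generated with beta = 0.2; the data are illustrative, not the paper's.

```python
import math

distances = [1, 2, 3, 4, 5]                      # topological distances
counts = [1000 * d ** -0.2 for d in distances]   # synthetic counts, f(d) ~ C * d^(-beta)

# OLS fit of log(count) = log(C) - beta * log(d); the slope estimates -beta.
xs = [math.log(d) for d in distances]
ys = [math.log(c) for c in counts]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
beta = -slope
print(round(beta, 3))   # recovers 0.2 on this noise-free synthetic series
```

With real co-occurrence counts the fit is noisy, so the exponent comes with a confidence interval rather than an exact value.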

11.
ABSTRACT

Massive social media data produced from microblog platforms provide a new data source for studying human dynamics at an unprecedented scale. Meanwhile, population bias in geotagged Twitter users is widely recognized. Understanding the demographic and socioeconomic biases of Twitter users is critical for making reliable inferences on the attitudes and behaviors of the population. However, the existing global models cannot capture the regional variations of the demographic and socioeconomic biases. To bridge the gap, we modeled the relationships between different demographic/socioeconomic factors and geotagged Twitter users for the whole contiguous United States, aiming to understand how the demographic and socioeconomic factors relate to the number of Twitter users at the county level. To effectively identify the local Twitter users for each county of the United States, we integrate three commonly used methods and develop a query approach in a high-performance computing environment. The results demonstrate that we can not only identify how the demographic and socioeconomic factors relate to the number of Twitter users, but can also measure and map how the influence of these factors varies across counties.

12.
The Open Service Network for Marine Environmental Data (NETMAR) project uses semantic web technologies in its pilot system, which aims to allow users to search, download and integrate satellite, in situ and model data from open ocean and coastal areas. The semantic web is an extension of the fundamental ideas of the World Wide Web, building a web of data through annotation of metadata and data with hyperlinked resources. Within the framework of the NETMAR project, an interconnected semantic web resource was developed to aid in data and web service discovery and to validate Open Geospatial Consortium Web Processing Service orchestration. A second semantic resource was developed to support interoperability of coastal web atlases across jurisdictional boundaries. This paper outlines the approach taken to producing the resource registry used within the NETMAR project and demonstrates the use of these semantic resources to support user interactions with systems. Such interconnected semantic resources increase the ability to share and disseminate data through the facilitation of interoperability between data providers. The formal representation of geospatial knowledge to advance geospatial interoperability is a growing research area. Tools and methods such as those outlined in this paper have the potential to support these efforts.

13.
The volume of publicly available geospatial data on the web is rapidly increasing due to advances in server-based technologies and the ease at which data can now be created. However, challenges remain with connecting individuals searching for geospatial data with servers and websites where such data exist. The objective of this paper is to present a publicly available Geospatial Search Engine (GSE) that utilizes a web crawler built on top of the Google search engine in order to search the web for geospatial data. The crawler seeding mechanism combines search terms entered by users with predefined keywords that identify geospatial data services. A procedure runs daily to update map server layers and metadata, and to eliminate servers that go offline. The GSE supports Web Map Services, ArcGIS services, and websites that have geospatial data for download. We applied the GSE to search for all available geospatial services under these formats and provide search results including the spatial distribution of all obtained services. While enhancements to our GSE and to web crawler technology in general lie ahead, our work represents an important step toward realizing the potential of a publicly accessible tool for discovering the global availability of geospatial data.
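The crawler seeding mechanism described above reduces to combining user terms with service-identifying keywords. A minimal sketch, with a keyword list that is purely illustrative (the GSE's actual keyword set is not given in the abstract):

```python
# Hypothetical keywords that tend to appear on pages exposing geospatial services.
SERVICE_KEYWORDS = [
    "WMS GetCapabilities",
    "ArcGIS rest services",
    "shapefile download",
]

def seed_queries(user_terms: str) -> list[str]:
    """Combine the user's search terms with each service-identifying keyword."""
    return [f"{user_terms} {kw}" for kw in SERVICE_KEYWORDS]

print(seed_queries("land cover"))
```

Each seeded query is then handed to the underlying search engine, and the result URLs become crawl targets to probe for live map services.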

14.
The future information needs of stakeholders for hydrogeological and hydro-climate data management and assessment in New Zealand may be met by a publicly accessible, Open Geospatial Consortium (OGC) standards-compliant web services framework that aims to provide integrated use of groundwater information and environmental observation data in general. The stages of the framework development described in this article are search and discovery as well as data collection and access with (meta)data services, which are developed in a community process. The concept and prototype implementation of OGC-compliant web services for groundwater and hydro-climate data include demonstration data services that present multiple distributed datasets of environmental observations. The results also iterate over the stakeholder community process and the refined profile of OGC services for environmental observation data sharing within the New Zealand Spatial Data Infrastructure (SDI) landscape, including datasets from the National Groundwater Monitoring Program and the New Zealand Climate Database along with datasets from affiliated regional councils at regional and sub-regional scales. With the definition of the New Zealand observation data profile we show that current state-of-the-art standards do not necessarily need to be improved, but that the community has to agree upon how to use these standards in an iterative process.

15.
This study shows how aerial photographs can be of value in a population census. The census and the enumeration district maps were used initially to obtain population data and the housing stock was derived from the aerial photographs. From these the population densities were determined of a number of sample enumeration districts containing a single type of house. Another set of enumeration districts was selected and the housing stock again derived from the aerial photographs. By considering the type and quantity of housing stock and the population density of each housing type, the population figures were estimated for each enumeration district. The values of these population estimates were then compared with the values recorded in the census. The overall population estimate had an error of only 2%, but the estimates for some of the individual enumeration districts showed greater errors. These errors are assessed and analysed and some suggestions are made to improve the methodology used in this study.
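The estimation step the abstract describes amounts to simple per-type arithmetic: multiply the photo-derived housing stock of each house type by the sample-derived occupancy density and sum. The figures below are illustrative, not from the study.

```python
# Persons per dwelling by housing type, derived from sample enumeration districts.
densities = {"detached": 3.1, "terraced": 4.0}
# Dwellings counted from the aerial photographs for one enumeration district.
housing_stock = {"detached": 120, "terraced": 200}

# Estimated population = sum over types of (dwellings x persons per dwelling).
estimate = sum(housing_stock[t] * densities[t] for t in housing_stock)
print(round(estimate))   # estimated population for this district
```

The 2% overall error quoted above arises because per-type densities average out across many districts, while an individual district can deviate substantially from the sample-based density.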

16.
The ever-increasing population in cities intensifies environmental pollution, which increases the number of asthmatic patients. Other factors that may influence the prevalence of asthma are atmospheric parameters, physiographic elements and personal characteristics. These parameters can be incorporated into a model to monitor and predict the health conditions of asthmatic patients in various contexts. Such a model is the base for any asthma early warning system. This article introduces a novel ubiquitous health system to monitor asthmatic patients. Ubiquitous systems can be effective in monitoring asthmatic patients through the use of intelligent frameworks. They can provide powerful reasoning and prediction engines for analyzing various situations. Our proposed model encapsulates several tools for preprocessing, reasoning and prediction of asthma conditions. In the preprocessing phase, outliers in the atmospheric datasets were detected and missing sensor data were estimated using a Kalman filter, while in the reasoning phase, the required information was inferred from the raw data using rule-based inference techniques. The asthmatic conditions of patients were predicted accurately by a Graph-Based Support Vector Machine in a Context Space (GBSVMCS), which functions anywhere, anytime and with any status. GBSVMCS is an improved version of the common Support Vector Machine algorithm with the addition of unlabeled data and graph-based rules in a context space. Based on the stored value for a patient's condition and his/her location/time, asthmatic patients can be monitored and appropriate alerts will be given. Our proposed model was assessed in Region 3 of Tehran, Iran for monitoring three different types of asthma: allergic, occupational and seasonal asthma. The input data to our system included air pollution data, the patients' personal information, patients' locations, weather data and geographical information for 270 different situations. Our results showed that 90% of the system's predictions were correct. The proposed model also improved the estimation accuracy by 15% in comparison to conventional methods.
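Kalman-filter imputation of missing sensor readings, as used in the preprocessing phase above, can be sketched with a scalar random-walk model: when a reading is missing, the prediction alone carries forward as the estimate. The state model and noise variances here are illustrative assumptions, not the study's configuration.

```python
def kalman_impute(observations, q=0.01, r=0.25):
    """Filter a series where None marks a missing reading; return the estimates.
    Assumes a random-walk state with process variance q and measurement variance r,
    and that the first observation is present (it initializes the state)."""
    x, p = observations[0], 1.0      # initial state estimate and its variance
    estimates = []
    for z in observations:
        p = p + q                    # predict: state variance grows each step
        if z is not None:            # update only when a reading is available
            k = p / (p + r)          # Kalman gain
            x = x + k * (z - x)
            p = (1 - k) * p
        estimates.append(x)          # a gap is filled by the prediction alone
    return estimates

readings = [10.0, 10.2, None, None, 10.8, 11.0]   # e.g. hourly pollutant readings
print([round(v, 2) for v in kalman_impute(readings)])
```

Under a random-walk model the filled values simply hold the last filtered estimate while the uncertainty grows; a model with a trend term would instead extrapolate through the gap.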

17.
This article reports on a study performed to understand the geographic and linguistic coverage of web resources, focusing on the example of tourism-related themes in Switzerland. Search engine queries of web documents were used to gather counts for phrases in four different languages. The study focused on selected populated places and tourist attractions in Switzerland from three gazetteer datasets: topographic gazetteer data from the Swiss national mapping agency (SwissTopo); POI data from a commercial data provider (Tele Atlas); and user-generated geographic content (geonames.org). The web counts illustrate the geographic extent and trends of web coverage of tourism for different languages. Results show that coverage for local languages, i.e. German, French and Italian, is more strongly related to the region of the spoken language. Correlations of the web counts with typical tourism indicators, e.g. population and number of hotel nights rented per year, are also computed and compared.

18.
This study adopts a near real-time space-time cube approach to portray a dynamic urban air pollution scenario across space and time. Originating from time geography, space-time cubes provide an approach to integrate spatial and temporal air pollution information into a 3D space. The base of the cube represents the variation of air pollution in a 2D geographical space while the height represents time. This way, the changes of pollution over time can be described by the different component layers of the cube from the base up. The diurnal ambient ozone (O3) pollution in Houston, Texas is modeled in this study using the space-time air pollution cube. Two methods, land use regression (LUR) modeling and spatial interpolation, were applied to build the hourly component layers for the air pollution cube. It was found that the LUR modeling performed better than the spatial interpolation in predicting air pollution level. With the availability of real-time air pollution data, this approach can be extended to produce a real-time air pollution cube for more accurate air pollution measurement across space and time, which can provide important support to studies in epidemiology, health geography, and environmental regulation.
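One common spatial-interpolation choice for building such hourly layers is inverse distance weighting (IDW); the abstract does not name the study's exact interpolator, so the method and the station values below are illustrative only.

```python
import math

# (x, y) station coordinates and their ozone readings (ppb), made up for illustration.
stations = [((0.0, 0.0), 30.0), ((4.0, 0.0), 50.0), ((0.0, 3.0), 40.0)]

def idw(point, stations, power=2):
    """Inverse-distance-weighted estimate of the pollutant level at a point."""
    num = den = 0.0
    for (x, y), value in stations:
        d = math.hypot(point[0] - x, point[1] - y)
        if d == 0:
            return value            # query point coincides with a station
        w = d ** -power             # nearer stations get larger weights
        num += w * value
        den += w
    return num / den

print(round(idw((1.0, 1.0), stations), 1))   # → 35.0
```

Repeating this estimate over a grid for each hour produces the stacked component layers of the space-time cube; LUR would instead regress the readings on land-use covariates before predicting the grid.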

19.
Online representations of places are becoming pivotal in informing our understanding of urban life. Content production on online platforms is grounded in the geography of their users and their digital infrastructure. These constraints shape place representation, that is, the amount, quality, and type of digital information available in a geographic area. In this article we study the place representation of user-generated content (UGC) in Los Angeles County, relating the spatial distribution of the data to its geo-demographic context. Adopting a comparative and multi-platform approach, this quantitative analysis investigates the spatial relationship between four diverse UGC datasets and their context at the census tract level (about 685,000 geo-located tweets, 9,700 Wikipedia pages, 4 million OpenStreetMap objects, and 180,000 Foursquare venues). The context includes the ethnicity, age, income, education, and deprivation of residents, as well as public infrastructure. An exploratory spatial analysis and regression-based models indicate that the four UGC platforms possess distinct geographies of place representation. To a moderate extent, the presence of Twitter, OpenStreetMap, and Foursquare data is influenced by population density, ethnicity, education, and income. However, each platform responds to different socio-economic factors and clusters emerge in disparate hotspots. Unexpectedly, Twitter data tend to be located in denser, more deprived areas, and the geography of Wikipedia appears peculiar and harder to explain. These trends are compared with previous findings for the area of Greater London.

20.
ABSTRACT

Big Earth Data has experienced a considerable increase in volume in recent years due to improved sensing technologies and improvements in numerical weather-prediction models. The traditional geospatial data analysis workflow hinders the use of large volumes of geospatial data due to limited disc space and computing capacity. Geospatial web service technologies bring new opportunities to access large volumes of Big Earth Data via the Internet and to process them at server-side. Four practical examples are presented from the marine, climate, planetary and earth observation science communities to show how the standard Web Coverage Service interface and its processing extension can be integrated into the traditional geospatial data workflow. Web service technologies offer a time- and cost-effective way to access multi-dimensional data in a user-tailored format and allow for rapid application development or time-series extraction. Data transport is minimised and enhanced processing capabilities are offered. More research is required to investigate web service implementations in an operational mode, and large data centres have to become more progressive towards the adoption of geo-data standard interfaces. At the same time, data users have to become aware of the advantages of web services and be trained how to benefit from them most.
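A Web Coverage Service request of the kind described above can subset a large coverage in space and time at the server, so only the slice of interest travels over the network. The sketch below builds a WCS 2.0 KVP GetCoverage URL; the endpoint and coverage identifier are hypothetical, while the parameter names follow the WCS 2.0 key-value-pair binding.

```python
from urllib.parse import urlencode

params = {
    "service": "WCS",
    "version": "2.0.1",
    "request": "GetCoverage",
    "coverageId": "sea_surface_temperature",   # hypothetical coverage id
    "format": "application/netcdf",
}
# WCS 2.0 KVP subsetting uses a repeated 'subset' key, so append those directly.
subsets = ["Lat(40,60)", "Long(-10,10)", 'time("2015-01-01T00:00:00Z")']
url = ("https://example.org/wcs?" + urlencode(params)
       + "".join(f"&subset={s}" for s in subsets))
print(url)
```

Trimming three dimensions server-side like this is what keeps data transport minimal: the client receives a small, user-tailored NetCDF slice instead of the full multi-terabyte coverage.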


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)