Similar Literature
20 similar documents found (search time: 31 ms)
1.
The National Elevation, Hydrography and Land Cover datasets of the United States have been synthesized into a geospatial dataset called NHDPlus, which is referenced to a spheroidal Earth, provides geospatial data layers for topography on 30 m rasters, and has vector coverages for catchments and river reaches. In this article, we examine the integration of NHDPlus with the Noah-distributed model. In order to retain compatibility with atmospheric models, Noah-distributed uses surface domain fields referenced to a spherical rather than a spheroidal Earth in its computation of vertical land surface/atmosphere water and energy budgets (at coarse resolution) as well as horizontal cell-to-cell water routing across the land surface and through the shallow subsurface (at fine resolution). Two data-centric issues affecting the linkage between Noah-distributed and NHDPlus are examined: (1) the shape of the Earth; and (2) the linking of the gridded landscape with a vector representation of the stream and river network. At mid-latitudes, the errors due to projections between spherical and spheroidal representations of the Earth are significant. A catchment-based "pour point" technique is developed to link the raster and vector data and provide lateral inflow from the landscape to a one-dimensional river model. We conclude that, when Noah-distributed is run uncoupled from an atmospheric model, it is advantageous to implement it at the native spatial scale of the digital elevation data and the spheroidal Earth of the NHDPlus dataset, rather than transforming the NHDPlus dataset to fit the coarser resolution and spherical Earth shape of the Noah-distributed model.
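The magnitude of this mid-latitude discrepancy is easy to illustrate. The following sketch (a back-of-the-envelope check, not the article's method) compares the WGS84 geodetic latitude of a point with the geocentric latitude a spherical-Earth model assigns to the same point; the constants are standard WGS84 values:

```python
import math

E2 = 0.00669437999014     # WGS84 first eccentricity squared
MEAN_RADIUS_KM = 6371.0   # mean Earth radius for the spherical model

def geocentric_latitude(geodetic_lat_deg: float) -> float:
    """Latitude a spherical-Earth model assigns to a point whose
    WGS84 geodetic latitude is geodetic_lat_deg."""
    phi = math.radians(geodetic_lat_deg)
    return math.degrees(math.atan((1.0 - E2) * math.tan(phi)))

for lat in (30.0, 45.0, 60.0):
    delta_deg = lat - geocentric_latitude(lat)
    ground_km = math.radians(delta_deg) * MEAN_RADIUS_KM
    print(f"lat {lat:5.1f}°: offset {delta_deg:.4f}° ≈ {ground_km:.1f} km")
```

At 45° the offset is roughly 0.19°, on the order of 20 km on the ground, which dwarfs a 30 m raster cell and explains why the choice of Earth shape matters for cell-to-cell routing.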

2.
In sharp contrast with the global trend in population growth, certain developed countries are expected to experience rapid national population declines. Considering future land use scenarios that include depopulation is necessary to evaluate changes in ecosystem services that affect human well-being and to facilitate comprehensive strategies for balancing rural and urban development. In this study, we applied a population-projection-assimilated predictive land use modeling (PPAP-LM) approach, in which a spatially explicit population projection was incorporated as a predictor in a land use model. To analyze the effects of future population distributions on land use, we developed models for five land use types and generated projections for two scenarios (centralization and decentralization) under a shrinking population in Japan during 2015-2050. Our results suggested that population centralization promotes the compaction of built-up areas and the expansion of forest and wastelands, while population decentralization contributes to the maintenance of a mixture of forest and cultivated land.
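The abstract does not specify the model form, but the core PPAP-LM idea, feeding a spatially explicit population projection into a land use model as a predictor, can be sketched generically. Everything below (the feature names, the logistic form, the synthetic data) is an illustrative assumption, not the paper's specification:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical predictors per grid cell: projected population density,
# terrain slope, and distance to the nearest existing built-up cell.
pop_2050 = rng.lognormal(mean=3.0, sigma=1.0, size=n)
slope = rng.uniform(0, 30, size=n)
dist_builtup = rng.exponential(scale=2.0, size=n)

# Synthetic target: does the cell stay (or become) built-up?
logit = 0.02 * pop_2050 - 0.1 * slope - 0.5 * dist_builtup
stays_builtup = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([pop_2050, slope, dist_builtup])
model = LogisticRegression(max_iter=1000).fit(X, stays_builtup)
print("coefficients (pop, slope, dist):", model.coef_.round(3))
```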

3.
In this work we investigate the effectiveness of different types of visibility models for use within location-based services. This article outlines the methodology and results of our experiments, which were designed to understand the accuracy and effects of model choices for mobile visibility querying. Harnessing a novel mobile media consumption and authoring application called Zapp, we extensively examine the accuracy of various digital surface representations used by a line-of-sight visibility algorithm, statistically assessing randomly sampled viewing sites across the 1 km² study area in relation to points of interest (POI) across the University of Nottingham campus. Testing was carried out on three different surface models derived from 0.5 m LiDAR data by visiting physical sites on each surface model, with 14 random point-of-interest masks viewed from between 10 and 16 different locations each, totalling 190 data points. Each site was ground-truthed by determining whether a given POI could be seen by the user and could also be identified by the mobile device. Our experiments in a semi-urban area show that the choice of surface model has important implications for mobile applications that utilize visibility in geospatial query operations.
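The article's line-of-sight routine itself is not given, but a minimal raster LOS test of the kind such a query runs looks like the sketch below (assumptions: a uniform-resolution DSM, observer and target given in cell coordinates, straight-line sampling with no Earth curvature or refraction):

```python
import numpy as np

def line_of_sight(dsm, obs_rc, tgt_rc, obs_h=1.6, tgt_h=0.0, samples=200):
    """Return True if the target cell is visible from the observer cell.

    dsm    : 2D array of surface elevations (e.g. from 0.5 m LiDAR)
    obs_rc : (row, col) of observer; obs_h = eye height above the surface
    tgt_rc : (row, col) of target;   tgt_h = target height above the surface
    """
    r0, c0 = obs_rc
    r1, c1 = tgt_rc
    z0 = dsm[r0, c0] + obs_h
    z1 = dsm[r1, c1] + tgt_h
    ts = np.linspace(0.0, 1.0, samples)[1:-1]     # skip the endpoints
    rows = np.rint(r0 + ts * (r1 - r0)).astype(int)
    cols = np.rint(c0 + ts * (c1 - c0)).astype(int)
    sight_z = z0 + ts * (z1 - z0)                 # sightline elevation
    return bool(np.all(dsm[rows, cols] < sight_z))

dsm = np.zeros((100, 100)); dsm[50, 40:60] = 20.0   # a 20 m wall
print(line_of_sight(dsm, (10, 50), (90, 50)))        # False: wall blocks view
```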

4.
If sites, cities, and landscapes are captured at different points in time using technology such as LiDAR, large collections of 3D point clouds result. Their efficient storage, processing, analysis, and presentation constitute a challenging task because of limited computation, memory, and time resources. In this work, we present an approach to detect changes in massive 3D point clouds based on an out-of-core spatial data structure that is designed to store data acquired at different points in time and to efficiently attribute 3D points with distance information. Based on this data structure, we present and evaluate different processing schemes optimized for performing the calculation on the CPU and GPU. In addition, we present a point-based rendering technique adapted for attributed 3D point clouds, to enable effective out-of-core real-time visualization of the computation results. Our approach enables conclusions to be drawn about temporal changes in large highly accurate 3D geodata sets of a captured area at reasonable preprocessing and rendering times. We evaluate our approach with two data sets from different points in time for the urban area of a city, describe its characteristics, and report on applications.
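The paper's out-of-core structure and GPU schemes are not reproduced here, but the per-point distance attribution at its core can be sketched in-memory with a k-d tree: for each point of the later epoch, store the distance to its nearest neighbor in the earlier one (a simplified change proxy under the stated assumptions):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
epoch_a = rng.uniform(0, 100, size=(100_000, 3))             # scan at t0
epoch_b = epoch_a + rng.normal(0, 0.05, size=epoch_a.shape)  # t1, sensor noise
epoch_b[:500, 2] += 5.0                                      # a "new building"

tree = cKDTree(epoch_a)
dist, _ = tree.query(epoch_b, k=1)   # nearest-neighbor distance per point

changed = dist > 1.0                 # hypothetical change threshold (meters)
print(f"{changed.sum()} of {len(epoch_b)} points flagged as changed")
```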

5.
Emergency services personnel face risks and uncertainty as they respond to natural and anthropogenic events. Their primary goal is to minimize the loss of life and property, especially in neighborhoods with high population densities, where response time is of great importance. In recent years, mobile phones have become a primary communication device during emergencies. Their portability and the ease with which they store and disseminate information have made cell phones an effective tool for first responders and one of the most viable means of communication with the population. Using cellular location data during evacuation planning and response also provides increased awareness to emergency personnel. This article introduces a multi-objective, multi-criteria approach to determining optimum evacuation routes in an urban setting. The first objective is to calculate evacuation routes for individual cell phone locations, minimizing the time it would take for a sample population to evacuate to designated safe zones based on both distance and congestion criteria. The second objective is to maximize coverage of individual cell phone locations, using the criteria of underlying geographic features, distance, and congestion. In summary, this article presents a network-based methodology that provides additional analytic support to emergency services personnel for evacuation planning.
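As a sketch of the first objective, routing a phone location to a safe zone under combined distance and congestion criteria, the snippet below applies a BPR-style congestion weight on a toy networkx graph; the weight formula, its parameters, and the network are illustrative assumptions rather than the article's exact criteria:

```python
import networkx as nx

G = nx.Graph()
# (u, v, length_m, flow, capacity) -- toy street segments
edges = [("A", "B", 500, 1800, 1000), ("B", "SAFE", 400, 1800, 1000),
         ("A", "C", 700, 100, 1000), ("C", "SAFE", 600, 150, 1000)]
for u, v, length, flow, cap in edges:
    # BPR-style travel cost: grows steeply as flow approaches capacity.
    cost = length * (1 + 0.15 * (flow / cap) ** 4)
    G.add_edge(u, v, cost=cost)

route = nx.shortest_path(G, "A", "SAFE", weight="cost")
print(route)   # the oversaturated direct corridor loses to the longer,
               # uncongested route via C: ['A', 'C', 'SAFE']
```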

6.
In this article we present a heuristic map simplification algorithm based on a novel topology-inferred graph model. Compared with existing algorithms, which focus either on geometry simplification or on topological consistency alone, our algorithm simplifies a map composed of polylines and constraint points while maintaining the topological relationships in the map, maximizing the number of removed points, and minimizing error distance efficiently. Unlike traditional geometry simplification algorithms such as Douglas and Peucker's, which add points incrementally, we remove points sequentially based on a priority determined by heuristic functions. In the first stage, we build a graph to model the topology of points in the map, from which we determine whether a point is removable. As map generalization is needed in different applications with different requirements, we present two heuristic functions to determine the priority of point removal for two different purposes: to save storage space and to reduce computation time. The time complexity of our algorithm is low enough for it to be considered for real-time applications. Experiments on real maps were conducted, and the results indicate that our algorithm produces high-quality results: one heuristic function removes more points, saving storage space, while the other improves time performance significantly.
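A compact illustration of removal-by-priority (as opposed to Douglas-Peucker's incremental insertion) is the heap-driven scheme below. The priority used here, triangle-area error in the style of Visvalingam, is a stand-in for the paper's two heuristic functions, and the topology test is reduced to a set of constraint points that must survive:

```python
import heapq

def tri_area(a, b, c):
    return abs((b[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(b[1]-a[1])) / 2

def simplify(points, keep, max_error):
    """Remove interior polyline points in order of increasing error.

    points    : list of (x, y) vertices
    keep      : set of indices that must survive (constraint points)
    max_error : stop once the cheapest removal would exceed this area
    """
    n = len(points)
    prev = list(range(-1, n - 1))   # doubly linked list over indices
    nxt = list(range(1, n + 1))
    alive = [True] * n
    heap = []
    for i in range(1, n - 1):
        if i not in keep:
            heapq.heappush(heap, (tri_area(points[i-1], points[i], points[i+1]), i))
    while heap:
        err, i = heapq.heappop(heap)
        if not alive[i]:
            continue
        p, q = prev[i], nxt[i]
        cur = tri_area(points[p], points[i], points[q])
        if cur != err:              # stale entry: re-queue with fresh error
            heapq.heappush(heap, (cur, i))
            continue
        if cur > max_error:
            break
        alive[i] = False            # unlink i and refresh its neighbors
        nxt[p], prev[q] = q, p
        for j in (p, q):
            if 0 < j < n - 1 and alive[j] and j not in keep:
                heapq.heappush(heap, (tri_area(points[prev[j]], points[j], points[nxt[j]]), j))
    return [pt for i, pt in enumerate(points) if alive[i]]

line = [(0, 0), (1, 0.1), (2, 0), (3, 2), (4, 0), (5, 0.05), (6, 0)]
print(simplify(line, keep={3}, max_error=0.5))
# low-error wiggle points are dropped; constraint point index 3 survives
```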

7.
8.
9.
10.
The Shuttle Radar Topography Mission (SRTM), the first relatively high spatial resolution near-global digital elevation dataset, possesses great utility for a wide array of environmental applications worldwide. This article concerns the accuracy of SRTM in low-relief areas with heterogeneous vegetation cover. Three questions were addressed about low-relief SRTM topographic representation: to what extent are errors spatially autocorrelated, and how should this influence sample design? Is spatial resolution or production method more important for explaining elevation differences? How dominant is the association of vegetation cover with SRTM elevation error? Two low-relief sites in Louisiana, USA, were analyzed to determine the nature and impact of SRTM error in such areas. Light detection and ranging (LiDAR) data were employed as reference, and SRTM elevations were contrasted with the US National Elevation Dataset (NED). Spatial autocorrelation of errors persisted for hundreds of meters in low-relief topography; production method was more critical than spatial resolution; and elevation error due to vegetation canopy effects could actually dominate the SRTM representation of the landscape. Indeed, low-lying, forested, riparian areas may be represented as substantially higher than surrounding agricultural areas, leading to an inverted terrain model.
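The persistence of spatially autocorrelated error can be summarized with an empirical autocorrelogram of the SRTM-minus-LiDAR difference raster. In the sketch below a synthetic error field stands in for the Louisiana data; the error surface is correlated with shifted copies of itself at increasing lags:

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(2)
# Stand-in error raster: a smooth (spatially autocorrelated) field plus
# white noise, mimicking vegetation-driven SRTM error over low relief.
smooth = fftconvolve(rng.normal(size=(600, 600)),
                     np.ones((15, 15)) / 225.0, mode="same")
error = smooth + 0.05 * rng.normal(size=smooth.shape)

for lag in (1, 5, 20, 50, 100):            # lag in cells (e.g. ~30 m each)
    r = np.corrcoef(error[:, :-lag].ravel(), error[:, lag:].ravel())[0, 1]
    print(f"lag {lag:3d} cells: r = {r:+.2f}")
```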

11.
Human 3D perception is more pronounced over irregular land surfaces than over flat ones, and quantifying this perception would be very useful in many applications. This article presents the first approach to determining the visible volume, which we call the 3D-viewshed, at each and every point of a DEM (Digital Elevation Model). Most previous visibility algorithms in GIS (Geographic Information Systems) are based on the concept of a 2D-viewshed, which determines the number of points that can be seen from an observer in a DEM. Extending such a 2D-viewshed to 3D space, and then to all DEM points, is computationally expensive, since the viewshed computation per se is costly. In this work, we propose the first approach to computing a new visibility metric that quantifies the visible volume from every point of a DEM. In particular, we developed an efficient algorithm with high data and calculation reuse. This article presents the first total-3D-viewshed maps, together with validation results and a comparative analysis. Using our highly scalable parallel algorithm, computing the total-3D-viewshed of a DEM with 4 million points on a Xeon Processor E5-2698 takes only 1.3 minutes.
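For reference, the 2D-viewshed kernel that the total-3D-viewshed generalizes can be written as a radial sweep that tracks the maximum elevation angle seen so far along each ray; the article's volume metric and its data-reuse scheme are not reproduced in this simplified sketch:

```python
import numpy as np

def viewshed(dem, obs_rc, obs_h=2.0, n_rays=360, cell=10.0):
    """Boolean visibility mask from one observer via a radial sweep."""
    rows, cols = dem.shape
    r0, c0 = obs_rc
    z0 = dem[r0, c0] + obs_h
    visible = np.zeros_like(dem, dtype=bool)
    visible[r0, c0] = True
    max_range = int(np.hypot(rows, cols))
    for theta in np.linspace(0, 2 * np.pi, n_rays, endpoint=False):
        max_tan = -np.inf                      # steepest angle seen so far
        for d in range(1, max_range):
            r = int(round(r0 + d * np.sin(theta)))
            c = int(round(c0 + d * np.cos(theta)))
            if not (0 <= r < rows and 0 <= c < cols):
                break
            tan_angle = (dem[r, c] - z0) / (d * cell)  # tan of elevation angle
            if tan_angle >= max_tan:
                visible[r, c] = True
                max_tan = tan_angle
    return visible

dem = np.random.default_rng(3).uniform(0, 50, (200, 200))
print(viewshed(dem, (100, 100)).sum(), "cells visible")
```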

12.
With the rapid growth and popularity of mobile devices and location-aware technologies, online social networks such as Twitter have become an important data source for scientists conducting geo-social network research. Non-personal accounts, spam users, and junk tweets, however, pose severe problems for the extraction of meaningful information and the validation of any research findings on tweets or Twitter users. Therefore, the detection of such users is a critical and fundamental step for Twitter-related geographic research. In this study, we develop a methodological framework to: (1) extract user characteristics based on geographic, graph-based, and content-based features of tweets; (2) construct a training dataset by manually inspecting and labeling a large sample of Twitter users; and (3) derive reliable rules and knowledge for detecting non-personal users with supervised classification methods. The extracted geographic characteristics of a user include maximum speed, mean speed, the number of different counties that the user has visited, and others. Content-based characteristics include the number of tweets per month, the percentage of tweets with URLs or hashtags, and the percentage of tweets with emotions, detected with sentiment analysis. The extracted rules are theoretically interesting and practically useful. Specifically, the results show that geographic features, such as average speed and frequency of county changes, can serve as important indicators of non-personal users. Among non-spatial characteristics, the percentage of tweets with a high human factor index, the percentage of tweets with URLs, and the percentage of tweets with mentioned/replied users are the top three features for detecting non-personal users.
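Given a feature table of the kind described (speeds, county changes, URL and hashtag percentages), the rule-extraction step can be sketched with a shallow decision tree whose paths read directly as detection rules. The features, thresholds, and labels below are synthetic placeholders, not the study's data:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(4)
n = 2000
# Synthetic stand-ins for the paper's features: ~20% of "users" move
# implausibly fast (bot-like), and those also post more URLs.
mean_speed_kmh = np.where(rng.random(n) < 0.2,
                          rng.uniform(100, 900, n), rng.uniform(0, 60, n))
pct_url = np.where(mean_speed_kmh > 100,
                   rng.uniform(0.5, 1.0, n), rng.uniform(0.0, 0.4, n))
non_personal = (mean_speed_kmh > 100).astype(int)   # toy labeling

X = np.column_stack([mean_speed_kmh, pct_url])
clf = DecisionTreeClassifier(max_depth=2).fit(X, non_personal)
print(export_text(clf, feature_names=["mean_speed_kmh", "pct_url"]))
```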

13.
Data about points of interest (POI) have been widely used in studying urban land use types and for sensing human behavior. However, it is difficult to quantify the correct mix of, or the spatial relations among, different POI types indicative of specific urban functions. In this research, we develop a statistical framework to help discover semantically meaningful topics and functional regions based on the co-occurrence patterns of POI types. The framework applies the latent Dirichlet allocation (LDA) topic modeling technique and incorporates user check-in activities on location-based social networks. Using a large corpus of about 100,000 Foursquare venues and user check-in behavior in the 10 most populated urban areas of the US, we demonstrate the effectiveness of our proposed methodology by identifying distinctive types of latent topics and, further, by extracting urban functional regions using K-means clustering and Delaunay-triangulation spatially constrained clustering. We show that a region can support multiple functions, but with different probabilities, while the same type of functional region can span multiple geographically non-adjacent locations. Since each region can be modeled as a vector of multinomial topic distributions, regions with similar thematic topic signatures can be identified. Compared with remote sensing images, which mainly uncover the physical landscape of urban environments, our popularity-based POI topic modeling approach can be seen as a complementary social-sensing view of urban space based on human activities.
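The modeling step can be sketched with scikit-learn's LDA implementation: each region becomes a "document" whose "words" are POI types repeated in proportion to check-ins. The regions and counts below are toy placeholders for the Foursquare corpus:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Each "document" is one region; its "words" are POI types, repeated in
# proportion to check-ins (toy stand-in for the Foursquare data).
regions = [
    "bar bar nightclub restaurant restaurant pizza",
    "office office coffee coffee bank subway",
    "park trail playground coffee",
    "bar nightclub pizza restaurant",
    "office bank subway coffee office",
]
vec = CountVectorizer()
X = vec.fit_transform(regions)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

vocab = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = comp.argsort()[-3:][::-1]
    print(f"topic {k}:", [vocab[i] for i in top])
print(lda.transform(X).round(2))   # per-region topic mixtures
```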

14.
Big Data, Linked Data, Smart Dust, Digital Earth, and e-Science are just some of the names for research trends that have surfaced in recent years. While all of them address different visions and needs, they share a common theme: how do we manage massive amounts of heterogeneous data and derive knowledge out of them instead of drowning in information, and how do we make our findings reproducible and reusable by others? In a network of knowledge, topics span scientific disciplines, and the idea of domain ontologies as common agreements seems like an illusion. In this work, we argue that these trends require a radical paradigm shift in ontology engineering: away from a small number of authoritative, global ontologies developed top-down, toward a high number of local ontologies that are driven by application needs and developed bottom-up out of observation data. Just as the early Web was replaced by a social Web in which volunteers produce data instead of purely consuming it, the next generation of knowledge infrastructures has to enable users to become knowledge engineers themselves. Surprisingly, existing ontology engineering frameworks are not well suited to this new perspective. Hence, we propose an observation-driven ontology engineering framework, show how its layers can be realized using specific methodologies, and relate the framework to existing work on geo-ontologies.

15.
The growth of the Web has resulted in the Web-based sharing of distributed geospatial data and computational resources. The Geospatial Processing Web (GeoPW) described here is a set of services that provide a wide array of geo-processing utilities over the Web and make geo-processing functionality easily accessible to users. High-performance remote sensing image processing is an important component of the GeoPW, and its design and implementation remain an actively pursued research topic. Researchers have proposed various parallel strategies for single image processing algorithms, based on computer-science approaches to parallel processing. This article proposes a multi-granularity parallel model for various remote sensing image processing algorithms. The model has four hierarchical interfaces, or sub-models, labeled Region-of-Interest-oriented (ROI-oriented), Decompose/Merge, Hierarchical Task Chain, and Dynamic Task. The interfaces, definitions, parallel task scheduling, and fault-tolerance mechanisms are described in detail. Based on the model and methods, we propose an open-source online platform named OpenRS-Cloud. A number of parallel algorithms were uniformly and efficiently developed, confirming the validity of the multi-granularity parallel model for unified remote sensing image processing web services.
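Of the four sub-models, Decompose/Merge is the simplest to sketch: split an image into tiles, process the tiles in parallel, then merge the results. The code below is a generic illustration using Python's multiprocessing, not the OpenRS-Cloud implementation:

```python
import numpy as np
from multiprocessing import Pool

def process_tile(tile):
    """Per-tile kernel (here: a simple NDVI-like band ratio)."""
    red, nir = tile[..., 0], tile[..., 1]
    return (nir - red) / (nir + red + 1e-9)

def decompose(image, tile_rows=256):
    """Decompose step: split the image into horizontal strips."""
    return [image[r:r + tile_rows] for r in range(0, image.shape[0], tile_rows)]

if __name__ == "__main__":
    image = np.random.default_rng(5).uniform(0, 1, (1024, 1024, 2))
    with Pool(4) as pool:
        results = pool.map(process_tile, decompose(image))
    merged = np.vstack(results)          # merge step
    print(merged.shape)                  # (1024, 1024)
```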

16.
Location-based social media (LBSM) data have been widely utilized to supplement traditional survey methods in modeling human activity patterns. However, the reliability of these data in deriving human movement has not been sufficiently studied. This research evaluates how data collection duration and sample size affect the reliability of LBSM data in activity modeling, based on two indicators: radius of gyration (ROG) and entropy. We use a linear regression model with logarithmic transformation to approximate how the magnitude of each indicator changes with data collection durations from 1 to 12 months. The results indicate that both ROG and entropy increase as the amount of data increases; however, the rate of increase slows and eventually approaches zero. We also approximated the limit values and verified that with 12 months of data, both indicators reach more than roughly 95% of their limit values in all three cities. A clustering analysis further demonstrated that there are outlier users who exhibit distinct patterns. This case study focuses on three Chinese cities (Beijing, Shanghai, and Guangzhou) and provides a useful reference for exploring the balance between data effectiveness and an appropriate sample size for LBSM data.
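Both indicators have standard definitions: the radius of gyration is the root-mean-square distance of a user's visits from their centroid, and entropy is the Shannon entropy of the user's distribution over visited places. A per-user computation, plus a log-linear fit of the article's regression form over synthetic monthly values, can be sketched as follows:

```python
import numpy as np

def radius_of_gyration(points):
    """ROG = sqrt(mean squared distance of visits to their centroid)."""
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)
    return np.sqrt(((pts - center) ** 2).sum(axis=1).mean())

def location_entropy(location_ids):
    """Shannon entropy of the user's distribution over visited places."""
    _, counts = np.unique(location_ids, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(6)
visits_xy = rng.normal(0, 3.0, size=(500, 2))   # toy check-in coords (km)
visit_place = rng.choice(20, size=500)          # toy visited-place IDs

print(f"ROG     = {radius_of_gyration(visits_xy):.2f} km")
print(f"entropy = {location_entropy(visit_place):.2f} bits")

# Log-linear growth fit, as in the article: indicator ~ a + b*ln(months).
months = np.arange(1, 13)
rog_series = 4.0 * (1 - np.exp(-months / 3))    # synthetic saturating curve
b, a = np.polyfit(np.log(months), rog_series, 1)
print(f"fit: ROG ≈ {a:.2f} + {b:.2f}·ln(months)")
```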

17.
This article reports on the initial development of a generic framework for integrating Geographic Information Systems (GIS) with Massively Multiplayer Online Gaming (MMOG) technology to support the integrated modeling of human-environment resource management and decision making. We review Web 2.0 concepts, online maps, and games as key technologies for realizing a participatory construction of spatial simulation and decision-making practices. Through a design-based research approach we develop a prototype framework, "GeoGame", that allows users to play board-game-style simulations on top of an online map. Through several iterations we demonstrate the implementation of a range of design artifacts, including real-time multi-user editing of online maps, web services, a game lobby, user-modifiable rules and scenario building, chat, discussion, and market transactions. Based on observational, analytical, experimental, and functional evaluations of design artifacts, as well as a literature review, we argue that an MMOG GeoGame framework offers a viable approach to addressing the complex dynamics of human-environment systems, which require the simultaneous reconciliation of top-down and bottom-up decision making with stakeholders as an integral part of the modeling environment. Further research will offer additional insight into the development of social-environmental models using stakeholder input and the use of such models to explore properties of complex dynamic systems.

18.
Using geographic information systems to link administrative databases with demographic, social, and environmental data allows researchers to use spatial approaches to explore relationships between exposures and health. Traditionally, spatial analysis in public health has focused on the county, ZIP code, or tract level because of limitations in geocoding at highly resolved scales. Using 2005 birth and death data from North Carolina, we examine our ability to geocode population-level datasets at three spatial resolutions: ZIP code, street, and parcel. We achieve high geocoding rates at all three resolutions, with statewide street geocoding rates of 88.0% for births and 93.2% for deaths. We observe differences in geocoding rates across demographics and health outcomes, with lower geocoding rates in disadvantaged populations and the most dramatic differences occurring across the urban-rural spectrum. Our results suggest that highly resolved spatial data architectures for population-level datasets are viable through geocoding of individual street addresses. We recommend routinely geocoding administrative datasets to the highest spatial resolution feasible, allowing public health researchers to choose the spatial resolution used in analysis based on an understanding of the spatial dimensions of the health outcomes and exposures being investigated. Such research, however, must acknowledge how disparate geocoding success across subpopulations may affect findings.

19.
Ecosystem services and land use are, in essence, mutually influencing and mutually constraining. Quantifying county-level changes in land use and ecosystem service value provides a scientific basis for the rational pricing of, and effective compensation for, ecological resources, and for promoting coordinated development between natural ecosystems and socio-economic systems. Based on remote sensing imagery and supported by GIS and RS techniques, this paper analyzes changes in the ecosystem service value of Fujin City from 1990 to 2010. The results show that the total ecosystem service value of land use in Fujin City declined continuously over 1990-2010, from roughly 29.20 billion yuan in 1990 to 16.72 billion yuan in 2010; ecosystem service value is closely tied to land use change, shifting whenever land use types change; and population and economic growth are the main driving factors of ecosystem service value in the study area.
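Figures like these follow the standard ecosystem service valuation bookkeeping, ESV = Σ_k A_k × VC_k, summing each land use type's area times a per-area value coefficient. The sketch below shows the arithmetic with made-up areas and coefficients, not Fujin City's actual data:

```python
# Hypothetical areas (ha) and value coefficients (yuan/ha/yr) per land use type.
AREAS_1990 = {"cultivated": 400_000, "forest": 120_000,
              "wetland": 150_000, "water": 60_000}
AREAS_2010 = {"cultivated": 520_000, "forest": 100_000,
              "wetland": 60_000, "water": 50_000}
VALUE_COEF = {"cultivated": 6_100, "forest": 19_300,
              "wetland": 55_500, "water": 40_700}

def esv(areas):
    """Total ecosystem service value: sum of area x value coefficient."""
    return sum(areas[k] * VALUE_COEF[k] for k in areas)

v1990, v2010 = esv(AREAS_1990), esv(AREAS_2010)
print(f"ESV 1990: {v1990/1e9:.2f} billion yuan")
print(f"ESV 2010: {v2010/1e9:.2f} billion yuan")
print(f"change  : {(v2010 - v1990) / v1990:+.1%}")   # wetland loss dominates
```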

20.
Density-based clustering algorithms such as DBSCAN have been widely used for spatial knowledge discovery, as they offer several key advantages over other clustering algorithms: they can discover clusters with arbitrary shapes, are robust to noise, and do not require prior knowledge (or estimation) of the number of clusters. The idea of using a scan circle centered at each point with a search radius Eps to find at least MinPts points as a criterion for deriving local density is easily understandable and sufficient for exploring isotropic spatial point patterns. However, there are many cases that cannot be adequately captured this way, particularly those involving linear features or shapes with continuously changing density, such as a spiral. In such cases, DBSCAN tends either to create an increasing number of small clusters or to add noise points into large clusters. Therefore, in this article, we propose a novel anisotropic density-based clustering algorithm (ADCN). To motivate our work, we introduce synthetic and real-world cases that cannot be handled sufficiently by DBSCAN (or OPTICS). We then present our clustering algorithm and test it with a wide range of cases. We demonstrate that our algorithm performs as well as DBSCAN in cases that do not benefit explicitly from an anisotropic perspective, and that it outperforms DBSCAN in cases that do. Finally, we show that our approach has the same time complexity as DBSCAN and OPTICS, namely O(n log n) when using a spatial index and O(n²) otherwise. We provide an implementation and test the runtime over multiple cases.
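The core change relative to DBSCAN, replacing the circular Eps-query with an anisotropic one, can be illustrated by orienting an ellipse along the local point distribution (here via an SVD/PCA of each point's k nearest neighbors). This follows the general idea rather than reproducing the published ADCN algorithm:

```python
import numpy as np
from scipy.spatial import cKDTree

def elliptical_neighbors(pts, i, tree, eps, k=10, ratio=3.0):
    """Indices of points inside an ellipse centered at pts[i].

    The ellipse's long axis follows the first principal component of the
    k nearest neighbors, with semi-axes (eps*ratio, eps) -- an assumed
    anisotropic stand-in for DBSCAN's circular Eps-neighborhood.
    """
    _, knn = tree.query(pts[i], k=k)
    local = pts[knn] - pts[knn].mean(axis=0)
    _, _, vt = np.linalg.svd(local, full_matrices=False)
    major, minor = vt[0], vt[1]            # local principal directions
    cand = tree.query_ball_point(pts[i], eps * ratio)   # coarse prefilter
    d = pts[cand] - pts[i]
    u = d @ major / (eps * ratio)          # ellipse-normalized coordinates
    v = d @ minor / eps
    inside = (u ** 2 + v ** 2) <= 1.0
    return [c for c, ok in zip(cand, inside) if ok]

pts = np.random.default_rng(7).uniform(0, 10, (500, 2))
tree = cKDTree(pts)
print(len(elliptical_neighbors(pts, 0, tree, eps=0.3)))
```

A full ADCN-style clustering would then run the usual DBSCAN core-point expansion, but with this elliptical query in place of the scan circle.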
