Similar literature
 20 similar records found (search time: 31 ms)
1.
This article demonstrates how the generalisation of topographic surfaces has been formalised by means of graph theory and how this formalised approach has been integrated into an ISO standard employed within nanotechnology. By applying concepts from higher-dimensional calculus and topology, it is shown that Morse functions are the mappings ideally suited for the formal characterisation of topographic surfaces. Based on this result, a data structure termed the weighted surface network is defined, which may be applied to both the characterisation and the generalisation of the topological structure of a topographic surface. Thereafter, the focus is placed on specific issues of the standard ISO 25178-2; within this standard, change trees, a data structure similar to weighted surface networks, are applied to portray the topological information of topographic surfaces. Furthermore, an approach termed Wolf pruning is used to simplify the change tree, this pruning method being equivalent to the graph-theoretic contractions by which weighted surface networks can be simplified. Finally, some practical applications of the standard ISO 25178-2 within nanotechnology are discussed.

2.
Our research is concerned with the automated generalisation of topographic vector databases in order to produce maps. This article presents a new agent-based generalisation model called CartACom (Cartographic generalisation with Communicating Agents), dedicated to areas of low density where rubber-sheeting techniques are nevertheless not sufficient because some eliminations or aggregations are needed. In CartACom, the objects of the initial database are modelled as agents: autonomous entities that choose and apply generalisation algorithms to themselves in order to satisfy their constraints as far as possible. The CartACom model focuses on modelling and treating relational constraints, defined as constraints that concern a relation between two objects. In order to detect and assess their relational constraints, CartACom agents are able to perceive their spatial surroundings. Moreover, to make good generalisation decisions that satisfy their relational constraints, they are able to communicate with their neighbours using predefined dialogue protocols. Finally, a hook to another agent-based generalisation model, AGENT, is provided, so that CartACom agents can handle not only their relational constraints but also their internal constraints. The CartACom model has been applied to the generalisation of low-density, heterogeneous areas such as rural areas, where space is not hierarchically organised. Examples of results obtained on real data show that it is well adapted to this application.

3.
Regionalization divides a large set of spatial objects into a number of spatially contiguous regions while optimizing an objective function, normally a homogeneity (or heterogeneity) measure of the derived regions. This research proposes and evaluates a family of six hierarchical regionalization methods. The six methods are based on three agglomerative clustering approaches, namely single linkage (SLK), average linkage (ALK), and complete linkage (CLK), each of which is constrained with spatial contiguity in two different ways (first-order constraining and full-order constraining). Both the Full-Order-CLK and the Full-Order-ALK methods significantly outperform existing methods across four quality evaluations: total heterogeneity, region size balance, internal variation, and preservation of the data distribution. Moreover, the proposed algorithms are efficient and can find a solution in O(n² log n) time. With such data scalability, it is possible for the first time to effectively regionalize large data sets with 10 000 or more spatial objects. A detailed comparison and evaluation of the six methods is carried out with the 2004 US presidential election data.
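The core of these methods, agglomerative merging restricted to spatially adjacent clusters, can be sketched as follows. This is a minimal illustration of first-order-constrained complete linkage on scalar attribute values, not the paper's actual implementation; `constrained_clustering` and its arguments are hypothetical names.

```python
def constrained_clustering(values, adjacency, n_regions):
    """Agglomerative complete-linkage (CLK) clustering under a first-order
    spatial contiguity constraint: only adjacent clusters may merge."""
    clusters = {i: {i} for i in range(len(values))}
    neighbours = {i: set(adjacency[i]) for i in range(len(values))}

    def clk_distance(a, b):
        # Complete linkage: largest pairwise dissimilarity between clusters
        return max(abs(values[i] - values[j])
                   for i in clusters[a] for j in clusters[b])

    while len(clusters) > n_regions:
        # Find the closest pair of *contiguous* clusters
        _, a, b = min((clk_distance(a, b), a, b)
                      for a in clusters for b in neighbours[a] if a < b)
        clusters[a] |= clusters.pop(b)
        merged_nb = (neighbours[a] | neighbours.pop(b)) - {a, b}
        neighbours[a] = merged_nb
        for c in merged_nb:           # redirect b's neighbours to a
            neighbours[c].discard(b)
            neighbours[c].add(a)
    return sorted(sorted(c) for c in clusters.values())
```

On a chain of four units with values [1, 1.1, 5, 5.1], asking for two regions merges the two similar pairs while never joining non-adjacent units.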

4.
To address problems common to existing spatial data partitioning methods, which generally ignore the influence of the sizes of the spatial objects themselves and of the spatial relations between neighbouring objects, a spatial data partitioning method based on hierarchical decomposition of the Hilbert space-filling curve is proposed. The method uses the Hilbert curve to preserve proximity between spatial data after partitioning, and hierarchically decomposes only a small number of subgrids to avoid densely partitioning the entire spatial extent, thus reducing the time spent computing and sorting the Hilbert codes of spatial objects. By computing the average data volume per partition and the sizes of the spatial objects within each subgrid, suitable decomposition parameters are determined so that the data volume is balanced across partitions. Experiments show that the method improves the efficiency of spatial data partitioning while preserving both the proximity of spatial data after partitioning and the balance of data volume among partitions.
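The basic building blocks, Hilbert encoding of grid cells and cutting the resulting order into balanced partitions, can be sketched as below. This uses the classic bitwise Hilbert-index algorithm and a simple equal-volume cut, not the paper's hierarchical subgrid decomposition; `partition` and the `(cx, cy, size)` object tuples are illustrative assumptions.

```python
def hilbert_index(order, x, y):
    """Hilbert distance of cell (x, y) in a 2**order x 2**order grid
    (classic bitwise algorithm)."""
    d = 0
    s = (1 << order) >> 1
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:               # rotate the quadrant
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        s >>= 1
    return d

def partition(objects, order, n_parts):
    """Sort objects (cx, cy, size) by the Hilbert code of their centre cell,
    then cut the sequence into n_parts chunks of roughly equal total size."""
    ranked = sorted(objects, key=lambda o: hilbert_index(order, o[0], o[1]))
    total = sum(o[2] for o in ranked)
    target = total / n_parts
    parts, cur, acc = [], [], 0.0
    for o in ranked:
        cur.append(o)
        acc += o[2]
        if acc >= target and len(parts) < n_parts - 1:
            parts.append(cur)
            cur, acc = [], 0.0
    parts.append(cur)
    return parts
```

Because consecutive Hilbert codes are spatially adjacent cells, each chunk of the sorted sequence stays spatially compact.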

5.
One feature discovered in the study of complex networks is community structure, in which vertices are gathered into groups such that more edges exist within groups than between them. Many approaches have been developed for identifying communities; these approaches essentially segment networks based on topological structure or the attribute similarity of vertices, while few consider the spatial character of the networks. Many complex networks are spatially constrained, with their vertices and edges embedded in space. In geographical space, nearer objects are more related than distant ones, so the relations among vertices are defined not only by the links connecting them but also by the distance between them. In this article, we propose a geo-distance-based method for detecting communities in spatially constrained networks, identifying communities that are both highly topologically connected and spatially clustered. The algorithm is based on the fast modularity maximisation (CNM) algorithm. First, we modify the modularity to a geo-modularity Qgeo by introducing an edge weight that is the inverse of the geographic distance raised to the power n. Then, we propose a spatial clustering coefficient as a measure of the clustering of the network, used to determine the power value n of the distance. The algorithm is tested with the China air transport network and BrightKite social network datasets. The segmentation of the China air transport network is similar to the seven economic regions of China. The segmentation of the BrightKite social network shows the regionality of social groups and identifies dynamic social groups that reflect users' location changes. The algorithm is useful for exploring the interaction and clustering properties of geographical phenomena and for providing timely location-based services to groups of people.
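The two ingredients named here, an inverse-distance-power edge weight and modularity computed over those weights, can be sketched as follows. This is a minimal weighted-modularity evaluation under the standard Newman formula, not the paper's full CNM-based Qgeo optimiser; the function names are illustrative.

```python
def geo_weight(distance, n=1.0):
    """Edge weight as the inverse of geographic distance to the power n."""
    return 1.0 / distance ** n

def geo_modularity(nodes, edges, community):
    """Weighted modularity Q = (1/2m) * sum_ij [w_ij - s_i*s_j/(2m)] * delta(c_i, c_j)
    for an undirected graph; edges listed once per pair as (u, v, w)."""
    strength = {v: 0.0 for v in nodes}
    two_m = 0.0
    for u, v, w in edges:
        strength[u] += w
        strength[v] += w
        two_m += 2.0 * w
    # Actual intra-community weight, counted over ordered pairs
    intra = sum(2.0 * w for u, v, w in edges if community[u] == community[v])
    # Null-model expectation over all ordered same-community pairs
    expected = sum(strength[u] * strength[v] / two_m
                   for u in nodes for v in nodes
                   if community[u] == community[v])
    return (intra - expected) / two_m
```

With weights set by `geo_weight`, a long (distant) edge contributes little, so partitions that cut distant links score higher: the mechanism by which Qgeo favours spatially clustered communities.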

6.
In recent years, the evolution and improvement of LiDAR (Light Detection and Ranging) hardware has increased the quality and quantity of the gathered data, making its storage, processing, and management particularly challenging. In this work we present a novel multi-resolution, out-of-core technique for web-based visualization, implemented through a non-redundant data point organization method that we call Hierarchically Layered Tiles (HLT), and a tree-like structure called the Tile Grid Partitioning Tree (TGPT). The design of these elements focuses on attaining very low levels of memory consumption, disk storage usage, and network traffic on both the client and server side, while delivering high-performance interactive visualization of massive LiDAR point clouds (up to 28 billion points) in multiplatform environments (mobile devices or desktop computers). HLT and TGPT were incorporated and tested in ViLMA (Visualization for LiDAR data using a Multi-resolution Approach), our own web-based visualization software specially designed to work with massive LiDAR point clouds.

7.
There has been a resurgence of interest in time geography studies due to emerging spatiotemporal big data in urban environments. However, the rapid increase in the volume, diversity, and intensity of spatiotemporal data poses a significant challenge with respect to the representation and computation of time geographic entities and relations in road networks. To address this challenge, a spatiotemporal data model is proposed in this article. The proposed spatiotemporal data model is based on a compressed linear reference (CLR) technique to transform network time geographic entities in three-dimensional (3D) (x, y, t) space to two-dimensional (2D) CLR space. Using the proposed spatiotemporal data model, network time geographic entities can be stored and managed in classical spatial databases. Efficient spatial operations and index structures can be directly utilized to implement spatiotemporal operations and queries for network time geographic entities in CLR space. To validate the proposed spatiotemporal data model, a prototype system is developed using existing 2D GIS techniques. A case study is performed using large-scale datasets of space-time paths and prisms. The case study indicates that the proposed spatiotemporal data model is effective and efficient for storing, managing, and querying large-scale datasets of network time geographic entities.
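The essential idea, flattening a network position into a single linear coordinate so a space-time point becomes a 2D point, can be sketched as follows. This is a plain linear-referencing illustration, not the paper's compressed scheme; `build_clr`, `to_clr`, and the edge-length dictionary are assumed names.

```python
def build_clr(edge_lengths):
    """Assign each network edge a contiguous interval on one linear axis,
    in the (insertion) order the edges are given."""
    offsets, cursor = {}, 0.0
    for eid, length in edge_lengths.items():
        offsets[eid] = cursor
        cursor += length
    return offsets

def to_clr(offsets, edge_id, pos_on_edge, t):
    """Map a network space-time point (edge, offset-on-edge, time) from
    3D (x, y, t) network space to a 2D (linear position, t) CLR point."""
    return (offsets[edge_id] + pos_on_edge, t)
```

Once paths and prisms are polylines and polygons in this 2D plane, ordinary spatial indexes (e.g. R-trees) and intersection queries apply to them directly, which is the model's main payoff.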

8.
Most of the literature to date proposes approximations to the determinant of a positive definite n × n spatial covariance matrix (the Jacobian term) for Gaussian spatial autoregressive models that fail to support the analysis of massive georeferenced data sets. This paper briefly surveys this literature, recalls and refines much simpler Jacobian approximations, presents selected eigenvalue estimation techniques, summarizes validation results (for estimated eigenvalues, Jacobian approximations, and estimation of a spatial autocorrelation parameter), and illustrates the estimation of the spatial autocorrelation parameter in a spatial autoregressive model specification for cases as large as n = 37,214,101. The principal contribution of this paper is to the implementation of spatial autoregressive model specifications for georeferenced data sets of any size. Its specific additions to the literature include (1) new, more efficient estimation algorithms; (2) an approximation of the Jacobian term for remotely sensed data forming incomplete rectangular regions; (3) issues of inference; and (4) timing results.

9.
Regionalization is a classification procedure applied to spatial objects with an areal representation, which groups them into homogeneous contiguous regions. This paper presents an efficient method for regionalization. The first step creates a connectivity graph that captures the neighbourhood relationship between the spatial objects. The cost of each edge in the graph is inversely proportional to the similarity between the regions it joins. We summarize the neighbourhood structure by a minimum spanning tree (MST), which is a connected tree with no circuits. We partition the MST by successive removal of edges that link dissimilar regions. The result is the division of the spatial objects into connected regions that have maximum internal homogeneity. Since the MST partitioning problem is NP‐hard, we propose a heuristic to speed up the tree partitioning significantly.  Our results show that our proposed method combines performance and quality, and it is a good alternative to other regionalization methods found in the literature.
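The MST-partitioning pipeline described above can be sketched in a few lines: build the tree over the connectivity graph, then delete the costliest (most dissimilar) tree edges to leave k connected regions. This greedy deletion stands in for the paper's heuristic; the function names are illustrative.

```python
def _find(parent, a):
    # Union-find with path halving
    while parent[a] != a:
        parent[a] = parent[parent[a]]
        a = parent[a]
    return a

def kruskal_mst(n, edges):
    """edges: list of (cost, u, v); returns the MST edge list for n nodes."""
    parent = list(range(n))
    mst = []
    for cost, u, v in sorted(edges):
        ru, rv = _find(parent, u), _find(parent, v)
        if ru != rv:
            parent[ru] = rv
            mst.append((cost, u, v))
    return mst

def partition_by_mst(n, edges, k):
    """Split n spatial objects into k connected regions by deleting the
    k-1 costliest MST edges; returns a region label per object."""
    kept = sorted(kruskal_mst(n, edges))[: n - 1 - (k - 1)]
    parent = list(range(n))
    for _, u, v in kept:
        parent[_find(parent, u)] = _find(parent, v)
    return [_find(parent, i) for i in range(n)]
```

Deleting a tree edge always splits exactly one region in two, so contiguity is preserved by construction.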

10.
Binary predictor patterns of geological features are integrated based on a probabilistic approach known as weights of evidence modeling to predict gold potential. In weights of evidence modeling, the natural logarithm of the posterior odds of a mineral occurrence in a unit cell is obtained by adding a weight, W+ or W−, for the presence or absence of a binary predictor pattern, to the natural logarithm of the prior odds. The weights are calculated as natural-log ratios of conditional probabilities. The contrast, C = W+ − W−, provides a measure of the spatial association between the occurrences and a binary predictor pattern. Adding the weights of the input binary predictor patterns results in an integrated map of posterior probabilities representing gold potential. Combining the input binary predictor patterns assumes that they are conditionally independent of one another with respect to the occurrences.
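The weight calculation can be sketched directly from these definitions. A minimal version, with hypothetical cell counts as inputs (the function and parameter names are not from the paper):

```python
import math

def weights_of_evidence(n_pd, n_p, n_d, n_total):
    """Weights for one binary predictor pattern.

    n_pd    : unit cells where pattern AND deposit are both present
    n_p     : cells where the pattern is present
    n_d     : cells containing a deposit
    n_total : total number of unit cells
    """
    p_pattern_given_d = n_pd / n_d
    p_pattern_given_not_d = (n_p - n_pd) / (n_total - n_d)
    # W+ : log ratio of conditional probabilities of pattern presence
    w_plus = math.log(p_pattern_given_d / p_pattern_given_not_d)
    # W- : same ratio for pattern absence
    w_minus = math.log((1.0 - p_pattern_given_d) /
                       (1.0 - p_pattern_given_not_d))
    contrast = w_plus - w_minus          # C = W+ - W-
    return w_plus, w_minus, contrast

def posterior_logit(prior_logit, weights):
    """Add W+ (pattern present) or W- (pattern absent) per evidence layer
    to the log prior odds; assumes conditional independence."""
    return prior_logit + sum(weights)
```

A pattern that covers 10% of the area but half of the deposits yields a strongly positive W+ and a negative W−, i.e. a large contrast.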

11.
In the context of OpenStreetMap (OSM), spatial data quality, in particular completeness, is an essential aspect of its fitness for use in specific applications, such as planning tasks. To mitigate the effect of completeness errors in OSM, this study proposes a methodological framework for predicting, by means of OSM, urban areas in Europe that are currently unmapped or only partially mapped. For this purpose, a machine learning approach consisting of artificial neural networks and genetic algorithms is applied. Given existing OSM data, the model estimates missing urban areas with an overall squared correlation coefficient (R²) of 0.589. Interregional comparisons of European regions confirm spatial heterogeneity in model performance, with R² ranging from 0.129 up to 0.789. These results show that the delineation of urban areas by means of the presented methodology depends strongly on location.

12.
Abstract

We present the notion of a natural tree as an efficient method for storing spatial information for quick access. A natural tree is a representation of spatial adjacency, organised to allow efficient addition of new data, access to existing data, or deletions. The nodes of a natural tree are compound elements obtained by a particular Delaunay triangulation algorithm. Improvements to that algorithm allow both the construction of the triangulation and subsequent access to neighbourhood information to be O(N log N). Applications include geographical information systems, contouring, and dynamical systems reconstruction.

13.
Anthropogenic, ecological, and land‐surface processes interact in landscapes at multiple spatial and temporal scales to create characteristic patterns. The relationships between temporally and spatially varying processes and patterns are poorly understood because of the lack of spatiotemporal observations of real landscapes over significant stretches of time. We report a new method for observing joint spatiotemporal landscape variation over large areas by analyzing multitemporal Landsat data. We calculate the spatiotemporal variation of the Normalized Difference Vegetation Index (NDVI) in the area covered by one Landsat scene footprint in north central Florida, over spatial windows of 10⁴–10⁸ m² and time steps of two to sixteen years. The correlations, slopes, and intercepts of spatial versus temporal regressions in the real landscape all differ significantly from results obtained using a null model of a randomized landscape. Spatial variances calculated within windows of 10⁵–10⁷ m² had the strongest relationships with temporal variances (regressions with both larger and smaller windows had lower coefficients of determination), and the relationships were stronger with longer time steps. Slopes and y‐intercepts increased with window size and decreased with increased time step. The spatial and temporal scales at which NDVI signals are most strongly related may be the characteristic scales of the processes that most strongly determine landscape patterns. For example, the important time and space windows correspond with areas and timing of fires and tree plantation harvests. Observations of landscape dynamics will be most effective if conducted at the characteristic scales of the processes, and our approach may provide a tool for determining those scales.

14.
Population structure and spatial distribution pattern of Picea crassifolia in the Dayekou watershed of the Qilian Mountains
Picea crassifolia (Qinghai spruce) is one of the constructive species of the subalpine mountain forest vegetation of the Qilian Mountains. Using the contiguous-grid-quadrat method, survey data for all individuals within a 100 m × 100 m plot were obtained, and the population structure and spatial distribution pattern of Picea crassifolia were analysed using population dynamics, frequency distributions of DBH, tree height, and crown width classes, and six aggregation intensity indices. The results show the following. (1) Population dynamics analysis indicates an expanding population. (2) The DBH-class frequency distribution follows an inverse-J shape; the change of individual numbers with DBH class fits the logarithmic equation y = −219.32 ln(x) + 482.67 (R² = 0.9638, P < 0.01), and a DBH differentiation index of 0.48 indicates marked differentiation among individuals. The height-class frequency distribution is intermittent; the relation between height class and number of individuals is well described by the quadratic equation y = 0.795x² − 31.23x + 285.1 (R² = 0.603, P < 0.01), and a height differentiation index of 0.55 likewise indicates marked differentiation. DBH and height are related by the logarithmic equation y = 5.912 ln(x) − 4.2493 (R² = 0.603, P < 0.01). The relation between crown-width class and number of individuals is well fitted by the cubic equation y = 5.3176x³ − 91.759x² + 408.88x − 173.87 (R² = 0.8355, P < 0.01), and a crown-width differentiation index of 0.53 again indicates marked differentiation. Overall, seedlings of Picea crassifolia are abundant and natural regeneration capacity is strong; the population currently appears mature and stable. (3) Spatially, the population shows a patchy, aggregated distribution, with a diffusion coefficient of 1.162, clumping index of 2.285, patchiness index of 0.162, mean crowding index of 85.802, negative binomial parameter of 1.002, and Cassie index of 0.026. Distribution patterns differ among developmental stages: class-I and class-II seedlings are aggregated, whereas saplings, middle-sized trees, and large trees are uniformly distributed; aggregation decreases with increasing age class, i.e. the pattern shifts from aggregated to uniform, showing a clear diffusion trend. These results provide a theoretical basis for the management of Picea crassifolia.
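Several of the aggregation indices cited above are simple functions of the mean and variance of quadrat counts. A sketch of three standard ones (the abstract uses six; this is an illustration, and the function name is not from the paper):

```python
def aggregation_indices(counts):
    """Dispersion statistics from per-quadrat individual counts:
    diffusion coefficient (variance/mean ratio), Lloyd's mean crowding,
    and the patchiness index."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)  # sample variance
    diffusion = var / mean                 # > 1 aggregated, < 1 uniform
    crowding = mean + var / mean - 1.0     # Lloyd's mean crowding m*
    patchiness = crowding / mean           # m*/m, > 1 indicates aggregation
    return diffusion, crowding, patchiness
```

Counts concentrated in a few quadrats inflate the variance relative to the mean and push all three indices toward the aggregated side.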

15.
With the increase in the number of applications using digital vector maps and the development of surveying techniques, a large volume of GIS (geographic information system) vector maps with high accuracy and precision is being produced. However, to achieve effective transmission while preserving their high positional quality, these large amounts of vector map data need to be compressed. This paper presents a compression method based on a bin space-partitioning data structure, which preserves the high accuracy and exact precision of spatial data. To achieve this, the proposed method first divides a map into rectangular local regions and classifies the bits of each object in the local regions into three types of bins, defined as the category bin (CB), direction bin (DB), and accuracy bin (AB). It then encodes objects progressively, using properties of the classified bins such as adjacency and orientation, to obtain the optimum compression ratio. Experimental results verify that our method can encode vector map data to less than 20% of the original size at 1-cm accuracy and to less than 9% at 1-m accuracy. In addition, its compression efficiency is greater than that of previous methods, while its complexity is low enough for near real-time applications.

16.
As increasingly large-scale and higher-resolution terrain data have become available, for example from airborne and space-borne sensors, the volume of these datasets reveals scalability problems with existing GIS algorithms. To address this problem, a serial algorithm was developed to generate viewsheds on large grid-based digital elevation models (DEMs). We first divide the whole DEM into rectangular blocks in the row and column directions (called block partitioning), then process these blocks along four axes followed by four sectors sequentially. When processing a particular block, we adopt the 'reference plane' algorithm to calculate the visibility of target points in the block, and adjust the calculation sequence according to the spatial relationship between the block and the viewpoint, since the viewpoint is not always inside the DEM. By adopting the reference-plane algorithm and using block partitioning to segment and load the DEM dynamically, viewsheds can be generated efficiently in PC-based environments. Experiments showed that each divided block should be loaded whole into main memory during partitioning, and that the suggested approach retains the accuracy of the reference-plane algorithm with near-linear computational complexity.
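The underlying visibility test can be illustrated on a single one-dimensional terrain profile: a cell is visible from the viewpoint when its elevation angle exceeds every angle encountered before it. This simple angle sweep is only a stand-in for the reference-plane algorithm, and the function name is hypothetical.

```python
def visible_along_profile(elev, viewer_height=0.0):
    """Which cells of a 1-D elevation profile are visible from cell 0?
    Cell i is visible when its slope from the eye exceeds the maximum
    slope of all nearer cells (unit cell spacing assumed)."""
    eye = elev[0] + viewer_height
    visible = [True]                 # the viewpoint sees its own cell
    max_slope = float("-inf")
    for i in range(1, len(elev)):
        slope = (elev[i] - eye) / i  # distance from viewpoint = i cells
        visible.append(slope > max_slope)
        max_slope = max(max_slope, slope)
    return visible
```

A full 2-D viewshed repeats this idea along rays (or, in the reference-plane method, propagates interpolated visibility heights block by block), which is why the block processing order relative to the viewpoint matters.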

17.
ABSTRACT

Regionalization attempts to group units into a few subsets that partition the entire area. The results represent the underlying spatial structure and facilitate decision-making. The massive amounts of trajectories produced in urban space provide a new opportunity for regionalization based on human mobility. This paper proposes and applies a novel regionalization method that clusters similar areal units and visualizes the spatial structure by feeding all trajectories in an area into a word embedding model. In this model, the nodes in a trajectory are regarded as words in a sentence, and the nodes can then be clustered in the feature space. The result depicts the underlying socio-economic structure at multiple spatial scales. To our knowledge, this is the first regionalization method to derive regions from trajectories with natural language processing technology. A case study of mobile phone trajectory data in Beijing is used to validate the method, and we then evaluate its performance by predicting the next location of an individual's trajectory. The case study indicates that the method is fast, flexible, and scalable to large trajectory datasets and, moreover, represents the structure of trajectories more effectively.

18.
Effects of spatial autocorrelation (SAC), or spatial structure, have often been neglected in conventional models of pedogeomorphological processes. Based on soil, vegetation, and topographic data collected in a coastal dunefield in western Korea, this research developed three soil moisture–landscape models, each incorporating SAC at fine, broad, and multiple scales, respectively, into a non-spatial ordinary least squares (OLS) model. All of these spatially explicit models showed better performance than the OLS model, as consistently indicated by R², Akaike's information criterion, and Moran's I. In particular, the best model proved to be the one using spatial eigenvector mapping, a technique that accounts for spatial structure at multiple scales simultaneously. After including SAC, predictor variables with greater inherent spatial structure underwent a larger reduction in predictive power than those with less structure. This finding implies that environmental variables that pedogeomorphologists have considered important in conventional regression modeling may have reduced predictive power in reality when they possess a significant amount of SAC. This research demonstrates that accounting for spatial structure not only helps to avoid violating statistical assumptions, but also allows a better understanding of the dynamic soil hydrological processes occurring at different spatial scales.

19.
Sediment cores from Lakes Punta Laguna, Chichancanab, and Petén Itzá on the Yucatan Peninsula were used to (1) investigate "within-horizon" stable isotope variability (δ¹⁸O and δ¹³C) measured on multiple, single ostracod valves and gastropod shells, (2) determine the optimum number of individuals required to infer low-frequency climate changes, and (3) evaluate the potential for using intra-sample δ¹⁸O variability in ostracod and gastropod shells as a proxy measure for high-frequency climate variability. Calculated optimum sample numbers ("n") for δ¹⁸O and δ¹³C in the ostracod Cytheridella ilosvayi and the gastropod Pyrgophorus coronatus vary appreciably throughout the cores in all three lakes. Variability and optimum "n" values were, in most cases, larger for C. ilosvayi than for P. coronatus for δ¹⁸O measurements, whereas there was no significant difference for δ¹³C measurements. This finding may be explained by differences in the ecology and life history of the two taxa as well as contrasting modes of calcification. Individual δ¹⁸O measurements on C. ilosvayi in sediments from Lake Punta Laguna show that samples from core depths that have high mean δ¹⁸O values, indicative of low effective moisture, display lower variability, whereas samples with low mean δ¹⁸O values, reflecting times of higher effective moisture, display higher variability. Relatively dry periods were thus consistently dry, whereas relatively wet periods had both wet and dry years. This interpretation of data from the cores applies to two important periods of the late Holocene, the Maya Terminal Classic period and the Little Ice Age. δ¹⁸O variability during the ancient Maya Terminal Classic Period (ca. 910–990 AD) indicates not only the driest mean conditions in the last 3,000 years, but consistently dry climate. Variability of δ¹³C measurements in single stratigraphic layers displayed no relationship with climate conditions inferred from δ¹⁸O measurements.

20.
Abstract

Multiresolution data structures provide a means of retrieving geographical features from a database at levels of detail which are adaptable to different scales of representation. A database design is presented which integrates multi-scale storage of point, linear and polygonal features, based on the line generalization tree, with a multi-scale surface model based on the Delaunay pyramid. The constituent vertices of topologically-structured geographical features are thus distributed between the triangulated levels of a Delaunay pyramid in which triangle edges are constrained to follow those features at differing degrees of generalization. Efficient locational access is achieved by imposing a spatial index on each level of the pyramid.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号