Similar Documents
20 similar documents retrieved (search time: 46 ms)
1.
ABSTRACT

An Australian standard for Spatial Data Transfer, modelled on the draft American standard, is likely to be introduced in 1993. The spatial data model is not accommodated by existing DBMSs such as relational systems. Using this model as an example, the suitability of object-oriented database systems for geographical databases is demonstrated. Some initial performance figures obtained with the ONTOS system on an example database are given.

2.
The purpose of object matching in conflation is to identify corresponding objects in different data sets that represent the same real-world entity. This article presents an improved linear object matching approach, named the optimization and iterative logistic regression matching (OILRM) method, which combines the optimization model and logistic regression model to obtain a better matching result by detecting incorrect matches and missed matches that are included in the result obtained from the optimization (Opt) method for object matching in conflation. The implementation of the proposed OILRM method was demonstrated in a comprehensive case study of Shanghai, China. The experimental results showed the following. (1) The Opt method can determine most of the optimal one-to-one matching pairs under the condition of minimizing the total distance of all matching pairs without setting empirical thresholds. However, the matching accuracy and recall need to be further improved. (2) The proposed OILRM method can detect incorrect matches and missed matches and resolve the issues of one-to-many and many-to-many matching relationships with a higher matching recall. (3) In the case where the source data sets become more complicated, the matching accuracy and recall based on the proposed OILRM method are much better than those based on the Opt method.
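As a rough illustration of the two-stage idea described above, the sketch below first pairs objects by minimising total distance (a brute-force stand-in for the Opt step) and then screens the resulting pairs (a fixed distance cutoff standing in for the logistic-regression filter). All coordinates, the cutoff value, and the function names are hypothetical, not taken from the paper.

```python
from itertools import permutations
import math

def match_by_min_total_distance(src, tgt):
    """Brute-force stand-in for the Opt step: pick the one-to-one
    assignment of source to target objects minimising total distance.
    (A real system would use e.g. the Hungarian algorithm.)"""
    best, best_cost = None, math.inf
    for perm in permutations(range(len(tgt)), len(src)):
        cost = sum(math.dist(src[i], tgt[j]) for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = list(enumerate(perm)), cost
    return best, best_cost

def screen_matches(pairs, src, tgt, cutoff=1.0):
    """Stand-in for the logistic-regression screening: drop pairs whose
    distance makes a true match unlikely (here a fixed cutoff)."""
    return [(i, j) for i, j in pairs if math.dist(src[i], tgt[j]) <= cutoff]

src = [(0.0, 0.0), (5.0, 5.0)]
tgt = [(0.1, 0.1), (5.2, 4.9), (20.0, 20.0)]
pairs, cost = match_by_min_total_distance(src, tgt)
kept = screen_matches(pairs, src, tgt)
```

The screening step is what lets the method reject spurious pairs that a pure minimum-total-distance assignment would keep.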

3.
Abstract

Progress in technical database management systems offers alternative strategies for the design and implementation of databases for geographical information systems. Desirable extensions in the user data types and database management are reviewed. A prototype geographical database tool-kit, SIRO-DBMS, which provides some spatial data types and spatial access methods as external attachments to a kernel relational database management system, is described. An ability to fragment a large set of entities into several relations while retaining the ability to search the full set as a logical unit is provided. Implementation of the geometric data types is based on mapping the types of data into a set of attributes of the atomic types supported by the kernel and specifying the relational designs for the set of atomic attributes.
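A minimal sketch of the mapping idea, assuming SQLite as the kernel relational DBMS: a geometric type (here a polygon) is flattened into rows of atomic attributes, so the kernel can query geometry with ordinary relational operations. The table and column names are hypothetical, not SIRO-DBMS's actual schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# One row per vertex: a polygon mapped onto atomic REAL attributes.
cur.execute("""CREATE TABLE polygon_vertex (
    poly_id INTEGER, seq INTEGER, x REAL, y REAL,
    PRIMARY KEY (poly_id, seq))""")
triangle = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
cur.executemany("INSERT INTO polygon_vertex VALUES (1, ?, ?, ?)",
                [(i, x, y) for i, (x, y) in enumerate(triangle)])
# The kernel DBMS can now answer crude spatial questions relationally,
# e.g. a bounding-box computation for polygon 1:
bbox = cur.execute("""SELECT MIN(x), MIN(y), MAX(x), MAX(y)
                      FROM polygon_vertex WHERE poly_id = 1""").fetchone()
```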

4.
Abstract

The characteristics of a soil information system based on the fuzzy relational database model as defined by Zemankova-Leech and Kandel are presented. The proposed system maintains all the advantages of the more conventional relational implementations but enhances them in two ways: (1) the system can cope with incomplete or even imprecise data; and (2) the users are allowed to express their subjective view of the stored data. The retrieval and processing of data approximately resemble the way that humans think and reason. The INGRES relational database management system was used for the implementation of the system.
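The core mechanism of a fuzzy relational query can be sketched briefly: instead of a hard yes/no predicate, a linguistic term is modelled as a membership function, and a selection returns tuples with membership grades. The term "thick topsoil", the ramp thresholds, and the sample soils below are all invented for illustration.

```python
def thick_topsoil(depth_cm):
    # Hypothetical membership function for the linguistic term
    # 'thick topsoil': 0 below 20 cm, 1 above 50 cm, linear between.
    if depth_cm <= 20:
        return 0.0
    if depth_cm >= 50:
        return 1.0
    return (depth_cm - 20) / 30

soils = [("A", 15), ("B", 35), ("C", 60)]
# A fuzzy selection attaches a membership grade to each tuple, so an
# imprecise query like 'thick topsoil' is answerable with degrees.
result = [(name, round(thick_topsoil(d), 2)) for name, d in soils]
```

Users can express their subjective view of the data simply by supplying their own membership functions for the same term.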

5.
Abstract

Research on time and data models for Geographical Information Systems (GIS) has focused mainly on the representation of temporal geographical entities and the implementation of temporal databases. Many temporal GIS database structures have been proposed, but most provide only principles rather than concrete design recipes. Owing to the large quantities of geographical information that must be manipulated and the resulting slow response times, few implementations exist. This paper presents a relational method of storing and retrieving spatial and temporal topologies. Two-level state topologies are proposed: a state topology for a set of geographical entities and a state topology for a single geographical entity.

From a temporal perspective, these two-level state topologies may also be viewed as two-level time topologies: a time topology for all geographical entities in a GIS database and a time topology for a single geographical entity. Based on these state and time topologies, a detailed storage approach for historical geographical information is provided.
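One way to picture the two levels relationally is sketched below: one table records database-wide states (the time topology for all entities), another records per-entity version chains (the time topology for a single entity). This schema is an invented illustration of the general idea, not the paper's actual design.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Level 1: states of the whole database (time topology for all entities).
cur.execute("""CREATE TABLE db_state (
    state_id INTEGER PRIMARY KEY,
    valid_from TEXT, valid_to TEXT)""")
# Level 2: version chain per entity (time topology for one entity).
cur.execute("""CREATE TABLE entity_version (
    entity_id TEXT, version INTEGER,
    state_id INTEGER REFERENCES db_state(state_id),
    geometry TEXT,
    PRIMARY KEY (entity_id, version))""")
cur.execute("INSERT INTO db_state VALUES (1, '1990-01-01', '1995-01-01')")
cur.execute("INSERT INTO db_state VALUES (2, '1995-01-01', NULL)")
cur.execute("INSERT INTO entity_version VALUES ('parcel-7', 1, 1, 'POLYGON A')")
cur.execute("INSERT INTO entity_version VALUES ('parcel-7', 2, 2, 'POLYGON B')")
# Historical retrieval for one entity: walk its version chain.
history = cur.execute(
    "SELECT version, state_id FROM entity_version "
    "WHERE entity_id = 'parcel-7' ORDER BY version").fetchall()
```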

6.
This article analyses the state of the art of conflation processes applied to geospatial databases (GDBs) from heterogeneous sources. The term conflation describes the procedure for integrating such different data, and conflation methods play an important role in systems for updating GDBs, deriving new cartographic products, densifying digital elevation models, automatic feature extraction, and so on. In this article we define each conflation process in detail, along with its evaluation measures and main application problems, and present a classification of all conflation processes. Finally, we introduce a bibliography that the reader may find useful for further exploring the field; it serves as a starting point and directs the reader to characteristic research in this area.

7.
Abstract

In this paper we address the problem of computing visibility information on digital terrain models in parallel. We propose a parallel algorithm for computing the visible region of an observation point located on the terrain. The algorithm is based on a sequential triangle-sorting visibility approach proposed by De Floriani et al. (1989). Static and dynamic parallelization strategies, both in terms of partitioning criteria and scheduling policies, are discussed. The different parallelization strategies are implemented on an MIMD multicomputer and evaluated through experimental results.
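The kernel of any such visibility computation can be sketched on a single terrain profile: a sample is visible from the observer if its line-of-sight slope is at least the maximum slope seen so far. This is a simplified 1D stand-in for one ray of the triangle-based approach, with an invented elevation profile; parallelization would partition independent rays across workers.

```python
def visible_from_first(elev):
    """Mark which samples of a terrain profile are visible from the
    first sample, tracking the maximum line-of-sight slope so far."""
    visible = [True]           # the observer's own cell
    max_slope = float("-inf")
    for i in range(1, len(elev)):
        slope = (elev[i] - elev[0]) / i   # slope from observer to sample i
        visible.append(slope >= max_slope)
        max_slope = max(max_slope, slope)
    return visible

profile = [10, 12, 11, 15, 9, 16]   # hypothetical elevations
vis = visible_from_first(profile)
```

Here the peak at index 1 (elevation 12) shadows every later sample, including the higher ones, because their slopes from the observer never exceed its slope of 2.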

8.
ABSTRACT

Big data have shifted spatial optimization from a purely computation-intensive problem to a data-intensive challenge. This is especially the case for spatiotemporal (ST) land use/land cover change (LUCC) research. In addition to greater variety, for example from sensing platforms, big data offer datasets at higher spatial and temporal resolutions; these new offerings require new methods to optimize data handling and analysis. We propose a LUCC-based geospatial cyberinfrastructure (GCI) that optimizes big data handling and analysis, in this case with raster data. The GCI provides three levels of optimization. First, we employ spatial optimization with graph-based image segmentation. Second, we propose the ST Atom Model to temporally optimize the image segments for LUCC. Finally, these two domain-level ST optimizations are supported by computational optimization for big data analysis. The evaluation is conducted using DMTI (DMTI Spatial Inc.) Satellite StreetView imagery datasets acquired for the Greater Montreal area, Canada in 2006, 2009, and 2012 (534 GB, 60 cm spatial resolution, RGB imagery). Our LUCC-based GCI builds an optimization bridge among LUCC, ST modelling, and big data.

9.
ABSTRACT

The efficiency of public investments and services has been of interest to geographic researchers for several decades. While in the private sector inefficiency often leads to higher prices, loss of competitiveness, and loss of business, in the public sector inefficiency in service provision does not necessarily lead to immediate changes. Analyzing a particular service is often difficult, as appropriate data may be hard to obtain and hidden in detailed budgets. In this paper, we develop an integrative approach that uses cyber search, Geographic Information Systems (GIS), and spatial optimization to estimate the spatial efficiency of fire protection services in Los Angeles (LA) County. We develop a cyber-search process to identify current deployment patterns of fire stations across the major urban region of LA County and compare the results of our search to existing databases. Using spatial optimization, we estimate the level of deployment needed to meet desired coverage levels based upon an ideal fire station pattern, and then compare this ideal level of deployment to the existing system as a means of estimating spatial efficiency. GIS is used throughout the paper to simulate demand locations, conduct location-based spatial analysis, visualize fire station data, and map model simulation results. Finally, we show that the existing system in LA County has considerable room for improvement. The methodology presented in this paper is both novel and groundbreaking, and the automated assessments are readily transferable to other counties and jurisdictions.
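The spatial-optimization step can be pictured with a classic covering formulation: choose station sites so every demand point is within the response-time standard. The sketch below uses a greedy set-cover heuristic as a stand-in for the exact model in the paper; the candidate sites and coverage sets are hypothetical.

```python
def greedy_cover(demand, stations):
    """Greedy set-cover heuristic: repeatedly open the candidate station
    that covers the most still-uncovered demand points."""
    uncovered = set(demand)
    opened = []
    while uncovered:
        best = max(stations, key=lambda s: len(stations[s] & uncovered))
        if not stations[best] & uncovered:
            break  # remaining demand cannot be covered by any candidate
        opened.append(best)
        uncovered -= stations[best]
    return opened

# Hypothetical candidate sites mapped to the demand points each can
# reach within the response-time standard:
stations = {"s1": {1, 2, 3}, "s2": {3, 4}, "s3": {4, 5, 6}}
opened = greedy_cover([1, 2, 3, 4, 5, 6], stations)
```

Comparing the size of such an ideal deployment against the number of existing stations gives one crude notion of spatial efficiency.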

10.
There has been a resurgence of interest in time geography studies due to emerging spatiotemporal big data in urban environments. However, the rapid increase in the volume, diversity, and intensity of spatiotemporal data poses a significant challenge with respect to the representation and computation of time geographic entities and relations in road networks. To address this challenge, a spatiotemporal data model is proposed in this article. The proposed spatiotemporal data model is based on a compressed linear reference (CLR) technique to transform network time geographic entities in three-dimensional (3D) (x, y, t) space to two-dimensional (2D) CLR space. Using the proposed spatiotemporal data model, network time geographic entities can be stored and managed in classical spatial databases. Efficient spatial operations and index structures can be directly utilized to implement spatiotemporal operations and queries for network time geographic entities in CLR space. To validate the proposed spatiotemporal data model, a prototype system is developed using existing 2D GIS techniques. A case study is performed using large-scale datasets of space-time paths and prisms. The case study indicates that the proposed spatiotemporal data model is effective and efficient for storing, managing, and querying large-scale datasets of network time geographic entities.
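The basic linear-referencing transform behind such a model can be sketched as follows: project a network point onto its route polyline and replace (x, y) with the distance along the route, so a 3D (x, y, t) point becomes a 2D (m, t) point. This is a generic linear-referencing sketch with an invented route, not the paper's CLR compression itself.

```python
import math

def linear_reference(route, pt):
    """Project a point onto a polyline route and return its distance
    along the route (the linear measure m)."""
    best_m, best_d = 0.0, math.inf
    run = 0.0  # cumulative length of the segments already passed
    for (x1, y1), (x2, y2) in zip(route, route[1:]):
        seg = math.dist((x1, y1), (x2, y2))
        dx, dy = x2 - x1, y2 - y1
        # Parameter of the perpendicular foot, clamped to the segment.
        t = max(0.0, min(1.0, ((pt[0]-x1)*dx + (pt[1]-y1)*dy) / (seg*seg)))
        px, py = x1 + t*dx, y1 + t*dy
        d = math.dist(pt, (px, py))
        if d < best_d:
            best_d, best_m = d, run + t*seg
        run += seg
    return best_m

route = [(0, 0), (10, 0), (10, 10)]   # hypothetical road centreline
m = linear_reference(route, (10, 3))  # a GPS fix 3 units up the second leg
point_2d = (m, 1625000000)            # (linear measure, unix timestamp)
```

Once points live in (m, t) space, ordinary 2D spatial indexes can answer space-time queries such as path-prism intersection.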

11.
This article discusses the challenges of doing fieldwork in an antagonistic context. Such a context can emerge when a non-Muslim researcher conducts fieldwork in a Muslim country that experiences humanitarian intervention and reconstruction efforts after natural disasters or the end of conflict. This particular setting can create a conflation of Islamic and Western (liberal) values while a political settlement is about to be consolidated. The case discussed in this article is located in the province of Aceh in Indonesia, where the political settlement of a conflict that lasted more than 25 years converged with a massive influx of foreign aid for disaster mitigation vis-à-vis the desire to apply Islamic law (shari'a). The combined effects of reconstruction efforts and political and armed conflict forged a problematic co-presence of Western and non-Western values, which affected the relations between the (Western) researcher and the (non-Western) researched by creating tension or even hostility between the two. The article argues that methodological dilemmas stemming from such a setting require a relational approach drawing on empathy, sameness and the personal, thereby taking emotions into account when conducting fieldwork. For this particular case I suggest an approach based on teamwork crossing cultures and gender among the research team members. To deal with constraints in such a setting, the article proposes to contextualize any potential difference between researcher and researched, and to explore various relational elements drawing on psychoanalytical approaches and/or cross-cultural positioning through and in teamwork.

12.

In recent years, researchers have devoted significant effort to developing soft computing models (SCMs), especially for problems in the mining industry. Many SCMs have been proposed and applied in practical engineering to predict blast-induced ground vibration (BIGV) in mines with high accuracy and reliability, and these models have contributed significantly to mitigating the adverse effects of blasting operations. Although many SCMs with promising results have been introduced, researchers continue to pursue novel SCMs with improved accuracy, aiming to prevent the damage that blasting operations cause to the surrounding environment. This study therefore proposes a novel SCM based on a robust meta-heuristic algorithm, the Hunger Games Search (HGS), combined with an artificial neural network (ANN), abbreviated as the HGS–ANN model, for predicting BIGV. Three benchmark models based on three other meta-heuristic algorithms (particle swarm optimization (PSO), firefly algorithm (FFA), and grasshopper optimization algorithm (GOA)) combined with an ANN, named PSO–ANN, FFA–ANN, and GOA–ANN, were also examined to provide a comprehensive evaluation of the HGS–ANN model. A dataset of 252 blasting operations was collected to evaluate BIGV through the mentioned models. The data were preprocessed and normalized before being split into training and validation sets. In the training phase, the HGS algorithm was fine-tuned to optimize the weights of the ANN model. Based on the statistical criteria, the HGS–ANN model showed the best performance, with an MAE of 1.153, RMSE of 1.761, R2 of 0.922, and MAPE of 0.156, followed by the GOA–ANN, FFA–ANN, and PSO–ANN models with lower performances (MAE = 1.186, 1.528, 1.505; RMSE = 1.772, 2.085, 2.153; R2 = 0.921, 0.899, 0.893; MAPE = 0.231, 0.215, 0.225, respectively).
Based on this outstanding performance, the HGS–ANN model should be applied broadly across open-pit mines to predict BIGV, with the aim of optimizing blast patterns and reducing environmental effects.
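The propose-and-keep loop shared by metaheuristic-plus-ANN pipelines of this kind can be sketched minimally. Plain random search stands in for HGS (real HGS/PSO/FFA/GOA add population dynamics on top), a toy linear predictor stands in for the ANN, and the data records below are invented, not from the paper's 252 blasts.

```python
import random

def mae(w, data):
    # Mean absolute error of a toy linear predictor standing in for the ANN.
    return sum(abs(w[0]*x1 + w[1]*x2 + w[2] - y)
               for x1, x2, y in data) / len(data)

def metaheuristic_fit(data, iters=2000, seed=0):
    """Propose-and-keep loop: sample candidate weight vectors and keep
    the best by MAE. Starts from the zero model, so the returned error
    can only be at least as good as the trivial baseline."""
    rng = random.Random(seed)
    best_w = [0.0, 0.0, 0.0]
    best_e = mae(best_w, data)
    for _ in range(iters):
        w = [rng.uniform(-2.0, 2.0) for _ in range(3)]
        e = mae(w, data)
        if e < best_e:
            best_w, best_e = w, e
    return best_w, best_e

# Hypothetical (charge weight, distance, observed vibration) records:
data = [(1, 2, 1.0), (2, 1, 2.5), (3, 3, 2.0), (4, 2, 4.0)]
w, err = metaheuristic_fit(data)
```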


13.
With huge volumes of trajectories being collected and stored in databases, more and more researchers are trying to discover outlying trajectories from trajectory databases. In this article, we propose a novel framework called relative distance-based trajectory outlier detection (RTOD). In RTOD, we first employ relative distances to measure the dissimilarity between trajectory segments, and then formally define outlying trajectories based on these distance measures. To improve time performance, we propose an optimization method that employs an R-tree and a local feature correlation matrix to eliminate unrelated trajectory segments. Finally, we conduct extensive experiments to evaluate the advantages of the proposed approach. The experimental results show that our approach is more efficient and effective at identifying outlying trajectories than existing algorithms. In particular, we analyze the effect of each parameter theoretically.
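The distance-based outlier definition can be sketched concretely: compute a dissimilarity between segments and flag those far from even their nearest neighbours. The endpoint-averaging dissimilarity, the k and threshold values, and the toy segments below are invented stand-ins for the paper's relative-distance measure.

```python
import math

def segment_distance(a, b):
    """Toy dissimilarity between two segments: mean distance between
    corresponding endpoints (stand-in for the relative-distance measure)."""
    return (math.dist(a[0], b[0]) + math.dist(a[1], b[1])) / 2

def outlying_segments(segments, k=2, threshold=5.0):
    """Flag segments whose mean distance to their k nearest other
    segments exceeds a threshold."""
    flagged = []
    for i, s in enumerate(segments):
        dists = sorted(segment_distance(s, t)
                       for j, t in enumerate(segments) if j != i)
        if sum(dists[:k]) / k > threshold:
            flagged.append(i)
    return flagged

segs = [((0, 0), (1, 1)), ((0.2, 0), (1.2, 1)),
        ((0.1, 0.1), (1.1, 1.1)), ((30, 30), (40, 40))]
outliers = outlying_segments(segs)
```

An R-tree pruning step, as in the paper, would avoid computing distances for segments whose bounding boxes are clearly too far apart.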

14.
An inconsistency measure can be used to compare the quality of different data sets and to quantify the cost of data cleaning. In traditional relational databases, inconsistency is defined in terms of constraints that use comparison operators between attributes. Inconsistency measures for traditional databases cannot be applied to spatial data sets because spatial objects are complex and the constraints are typically defined using spatial relations. This paper proposes an inconsistency measure to evaluate how dirty a spatial data set is with respect to a set of integrity constraints that define the topological relations that should hold between objects in the data set. The paper starts by reviewing different approaches to quantifying the degree of inconsistency and showing that they are not suitable for the problem. Then, the inconsistency measure of a data set is defined in terms of the degree to which each spatial object in the data set violates topological constraints, where the possible representations of spatial objects are points, curves, and surfaces. Finally, an experimental evaluation demonstrates the applicability of the proposed inconsistency measure and compares it with previously existing approaches.
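A toy version of such a measure can be sketched with one topological constraint: "all parcels must be pairwise disjoint." The sketch below scores a data set by the fraction of object pairs violating that constraint on axis-aligned bounding boxes; the paper instead grades violation degrees per object and per geometry type, so this is only the counting skeleton.

```python
def disjoint(a, b):
    # Boxes (xmin, ymin, xmax, ymax) are disjoint if they fail to
    # overlap in x or in y.
    return a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1]

def inconsistency(boxes):
    """Fraction of object pairs violating the pairwise-disjointness
    constraint: 0.0 is fully consistent, 1.0 maximally dirty."""
    pairs = [(i, j) for i in range(len(boxes))
             for j in range(i + 1, len(boxes))]
    violations = sum(not disjoint(boxes[i], boxes[j]) for i, j in pairs)
    return violations / len(pairs)

# Hypothetical parcels: the first two overlap, the third stands alone.
parcels = [(0, 0, 2, 2), (1, 1, 3, 3), (5, 5, 6, 6)]
score = inconsistency(parcels)
```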

15.
Landscape metrics have been widely used to characterize geographical patterns, which are important for many geographical and ecological analyses. Cellular automata (CA) are attractive for simulating settlement development, landscape evolution, urban dynamics, and land-use changes. Although various methods have been developed to calibrate CA, landscape metrics have not been explicitly used to ensure that the simulated pattern best fits the actual one. This article presents a pattern-calibrated method based on a number of landscape metrics for implementing CA by using genetic algorithms (GAs). A pattern-calibrated GA–CA is proposed by incorporating the percentage of landscape (PLAND), largest patch index (LPI), and landscape division (D) into the fitness function of the GA. A sensitivity analysis allows users to explore various combinations of weights and examine their effects. The comparison between Logistic-CA, Cell-calibrated GA–CA, and Pattern-calibrated GA–CA indicates that the last method yields the best results for calibrating CA, according to both the training and validation data. For example, Logistic-CA has an average simulation error of 27.7%, but Pattern-calibrated GA–CA (the proposed method) reduces this error to only 7.2% on the 2003 training data set. Validation is further carried out using new validation data from 2008 and additional landscape metrics (e.g., landscape shape index, edge density, and aggregation index) that were not incorporated when calibrating the CA models. The comparison shows that the pattern-calibrated CA performs better than the other two conventional models.
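To make the fitness idea concrete, the sketch below computes PLAND on a raster grid and scores a simulated map against an observed one by summed PLAND differences. The unweighted fitness, the class codes, and the tiny grids are invented; the paper's fitness also weights LPI and landscape division D.

```python
def pland(grid, cls):
    """Percentage of landscape (PLAND): share of cells in a class."""
    cells = [c for row in grid for c in row]
    return 100.0 * cells.count(cls) / len(cells)

def fitness(simulated, observed, classes):
    """Hypothetical pattern fitness: negative sum of absolute PLAND
    differences between the simulated and observed maps (higher is
    better, 0 means the class proportions match exactly)."""
    return -sum(abs(pland(simulated, c) - pland(observed, c))
                for c in classes)

# Hypothetical 3x3 land-use maps (0 = vacant, 1 = urban, 2 = water):
obs = [[1, 1, 0], [1, 0, 0], [2, 2, 0]]
sim = [[1, 1, 0], [1, 1, 0], [2, 0, 0]]
f = fitness(sim, obs, classes=[0, 1, 2])
```

A GA would evolve CA parameters to maximise such a fitness, steering the simulation toward the observed pattern rather than toward per-cell agreement alone.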

16.
17.
ABSTRACT

The increasing popularity of Location-Based Social Networks (LBSNs) and the semantic enrichment of mobility data in several contexts in recent years have led to the generation of large volumes of trajectory data. In contrast to GPS-based trajectories, LBSN and context-aware trajectories are more complex data, having several semantic textual dimensions besides space and time, which may reveal interesting mobility patterns. For instance, people may visit different places or perform different activities depending on the weather conditions. These new semantically rich data, known as multiple-aspect trajectories, pose new challenges in trajectory classification, which is the problem we address in this paper. Existing methods for trajectory classification cannot deal with the complexity of heterogeneous data dimensions or the sequential aspect that characterizes movement. In this paper we propose MARC, an approach based on attribute embedding and Recurrent Neural Networks (RNNs) for classifying multiple-aspect trajectories, which tackles all trajectory properties: space, time, semantics, and sequence. We highlight that MARC exhibits good performance especially when trajectories are described by several textual/categorical attributes. Experiments performed over four publicly available datasets considering the Trajectory-User Linking (TUL) problem show that MARC outperformed all competitors with respect to accuracy, precision, recall, and F1-score.

18.
Reconstruction of 3D trees from incomplete point clouds is a challenging issue due to their large variety and natural geometric complexity. In this paper, we develop a novel method to effectively model trees from a single laser scan. First, coarse tree skeletons are extracted by utilizing the L1-median skeleton to compute the dominant direction of each point and the local point density of the point cloud. Then we propose a data completion scheme that guides the compensation for missing data: an iterative optimization process based on the dominant direction of each point and the local point density. Finally, we present an L1-minimum spanning tree (MST) algorithm to refine tree skeletons from the optimized point cloud, which integrates the advantages of both the L1-median skeleton and MST algorithms. The proposed method has been validated on various point clouds captured from single laser scans. The experimental results demonstrate the effectiveness and robustness of our method in coping with complex branching structures and occlusions.
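The MST refinement step rests on a standard construction that can be sketched directly: run Prim's algorithm on the complete Euclidean graph over skeleton points, so nearby points chain into branches. The 2D points below are an invented miniature of a trunk; the paper's L1-MST additionally blends in the L1-median criterion.

```python
import math

def mst_edges(points):
    """Prim's algorithm on the complete Euclidean graph: grow the tree
    by repeatedly adding the cheapest edge to an unreached point."""
    in_tree = {0}
    edges = []
    while len(in_tree) < len(points):
        i, j = min(((i, j) for i in in_tree
                    for j in range(len(points)) if j not in in_tree),
                   key=lambda e: math.dist(points[e[0]], points[e[1]]))
        edges.append((i, j))
        in_tree.add(j)
    return edges

# Hypothetical skeleton points: a short trunk with one side branch.
pts = [(0, 0), (0, 1), (0, 2), (1, 2.2)]
edges = mst_edges(pts)
```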

19.
ABSTRACT

Missing data are a common problem in the analysis of geospatial information. Existing methods introduce spatiotemporal dependencies to reduce imputation errors yet ignore ease of use in practice. Classical interpolation models are easy to build and apply; however, their imputation accuracy is limited by their inability to capture the spatiotemporal characteristics of geospatial data. Consequently, a lightweight ensemble model was constructed by modelling the spatiotemporal dependencies in a classical interpolation model. Temporally, average correlation coefficients were introduced into a simple exponential smoothing model to automatically select the time window, ensuring that the sample data had the strongest correlation to the missing data. Spatially, Gaussian equivalent and correlation distances were introduced in an inverse distance-weighting model to assign weights to each spatial neighbour and sufficiently reflect changes in the spatiotemporal pattern. Finally, the temporal and spatial estimates of the missing values were aggregated into the final results with an extreme learning machine. Compared to existing models, the proposed model achieves higher imputation accuracy, lowering the mean absolute error by 10.93% to 52.48% on the road network dataset and by 23.35% to 72.18% on the air quality station dataset, and exhibits robust performance under spatiotemporal mutations.
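The two base estimators and their combination can be sketched in a few lines. Exponential smoothing and plain inverse-distance weighting are the classical cores; the fixed 50/50 blend below is an invented stand-in for the extreme learning machine aggregation, and the series, neighbours, and alpha are toy values.

```python
def exp_smooth(series, alpha=0.5):
    """Simple exponential smoothing over the temporal neighbours of the
    gap (the paper picks the window via average correlation)."""
    s = series[0]
    for x in series[1:]:
        s = alpha * x + (1 - alpha) * s
    return s

def idw(neighbours, power=2):
    """Inverse-distance weighting over (distance, value) spatial
    neighbours (the paper swaps raw distance for a Gaussian-equivalent
    correlation distance)."""
    num = sum(v / d**power for d, v in neighbours)
    den = sum(1 / d**power for d, v in neighbours)
    return num / den

temporal = exp_smooth([10.0, 12.0, 11.0])       # recent readings at the gap
spatial = idw([(1.0, 11.5), (2.0, 14.0)])       # nearby stations
imputed = 0.5 * temporal + 0.5 * spatial        # stand-in for the ELM blend
```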

20.
This article attempts a double reflection: a methodological interrogation of myself and an autointerrogation of my methodology. Following Ernst Bloch, I structure this reflection around the idea of traces, which are brief, narrative, aphoristic speculations on a particular theme. In this article, I (re)produce my own narrative traces, engaging with and representing several moments of strangeness in my methodological praxis as they are recorded in field notes from prior fieldwork with urban secession movements in black and white communities of Atlanta. Building from Bloch’s hermeneutic, I treat these moments as traces to be pursued, rather than simple social artifacts of the relational, intersubjective activity of research. Finally, I demonstrate how a geographer might develop that which crystallizes in the interpretation of the trace (i.e., through the intentional reconsideration of the uncanny and recurrent moments of everyday experience) toward the methodological worlding of philosophy as a vibrant, reflexive, human praxis. Key Words: Bloch, interpretation, method, postqualitative analysis, praxis.
