Similar Articles
 20 similar articles found (search time: 62 ms)
1.
Background and purpose: Terrorism is a real and present danger. The build-up to an attack includes planning, travel, and reconnaissance, which necessarily require the offender to move through their environment. Whilst research has examined patterns of terrorist attack locations, with a few exceptions (e.g. Rossmo & Harries, 2011), it has not examined the spatial behavior of the terrorists themselves. In this paper, we investigate whether the spatial mobility patterns of terrorists resemble those of criminals (and the wider population) and whether these change in the run-up to their attacks. Method: Using mobile phone data records for the ringleaders of four different UK-based terrorist plots in the months leading up to their attacks, we examine the frequency with which terrorists visit different locations, how far they travel from key anchor points such as their home, the distance between sequential cell-site hits, and how their range of movement varies as the planned time of attack approaches. Conclusions: Like the wider population (and criminals), the sample of terrorists examined exhibited predictable patterns of spatial behavior. Most movements were close to their home location or safe house, and they visited a relatively small number of locations most of the time. Disaggregating these patterns over time provided mixed evidence regarding the way in which their spatial activity changed as the time of the planned attack approached. The findings are interpreted in terms of how they inform criminological understanding of the spatial behavior of terrorists, and the implications for law enforcement.

2.
ABSTRACT

The analysis of geographically referenced data, specifically point data, is predicated on the accurate geocoding of those data. Geocoding refers to the process in which geographically referenced data (addresses, for example) are placed on a map. This process may lead to issues with positional accuracy or the inability to geocode an address. In this paper, we conduct an international investigation into the impact of the (in)ability to geocode an address on the resulting spatial pattern. We use a variety of point data sets of crime events (varying numbers of events and types of crime), a variety of areal units of analysis (varying the number and size of areal units), from a variety of countries (varying underlying administrative systems), and a locally-based spatial point pattern test to find the geocoding match rates needed to maintain the spatial patterns of the original data when addresses are missing at random. We find that the level of geocoding success required depends on the number of points and the number of areal units under analysis, but is generally lower than reported in previous research. This finding is consistent across different national contexts.
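For illustration, the sensitivity of an aggregated spatial pattern to incomplete geocoding can be sketched as below. This is a simplified stand-in for the locally-based spatial point pattern test used in the paper: it assumes synthetic uniformly distributed events and uses the correlation of grid-cell counts as the similarity measure.

```python
import numpy as np

def cell_counts(points, n_cells, extent=1.0):
    """Aggregate 2-D points into an n_cells x n_cells grid of counts."""
    idx = np.clip((points / extent * n_cells).astype(int), 0, n_cells - 1)
    counts = np.zeros((n_cells, n_cells))
    for x, y in idx:
        counts[x, y] += 1
    return counts

def pattern_similarity(points, match_rate, n_cells=10, seed=0):
    """Correlation between the full and a randomly thinned cell-count pattern."""
    rng = np.random.default_rng(seed)
    keep = rng.random(len(points)) < match_rate   # addresses missing at random
    full = cell_counts(points, n_cells).ravel()
    thin = cell_counts(points[keep], n_cells).ravel()
    return np.corrcoef(full, thin)[0, 1]

rng = np.random.default_rng(42)
pts = rng.random((5000, 2))            # synthetic "geocoded" events
r_high = pattern_similarity(pts, 0.9)  # 90% geocoding match rate
r_low = pattern_similarity(pts, 0.2)   # 20% geocoding match rate
```

Higher match rates preserve the aggregate pattern more faithfully, which is the quantity the paper's test formalizes.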

3.
The increase of extreme meteorological phenomena, along with continuous population growth, has led to a rising number of flooding disasters. Therefore, there is an urgent need to develop better risk reduction strategies, among which is increased social resilience. Experiencing a disaster is recognized as a factor that positively influences overall community resilience, with particular effects on social resilience; it appears to be more influential than school education. It also has many negative effects, though. Previous studies underline that citizens do not distinguish between different types of experiences. Thus, we investigated whether a simulated experience of a flood can improve social resilience, without being hampered by negative repercussions. The study was executed in five municipalities in three Italian regions involved in the European project LIFE PRIMES, which planned simulation activities for each studied area. Data, collected through the administration of anonymous questionnaires before and after a flood drill, were processed by applying a multicriteria decision analysis tool (PROMETHEE). Results show that the drill significantly augmented perceived social resilience in the smaller studied communities but not in the larger ones, a fact that should be further investigated. Key Words: multicriteria decision analysis, simulated flood experience, social resilience to disasters.

4.
In recent years, the evolution and improvement of LiDAR (Light Detection and Ranging) hardware has increased the quality and quantity of the gathered data, making the storage, processing and management thereof particularly challenging. In this work we present a novel, multi-resolution, out-of-core technique, used for web-based visualization and implemented through a non-redundant, data point organization method, which we call Hierarchically Layered Tiles (HLT), and a tree-like structure called Tile Grid Partitioning Tree (TGPT). The design of these elements is mainly focused on attaining very low levels of memory consumption, disk storage usage and network traffic on both, client and server-side, while delivering high-performance interactive visualization of massive LiDAR point clouds (up to 28 billion points) on multiplatform environments (mobile devices or desktop computers). HLT and TGPT were incorporated and tested in ViLMA (Visualization for LiDAR data using a Multi-resolution Approach), our own web-based visualization software specially designed to work with massive LiDAR point clouds.

5.
Abstract

Results of a simulation study of map-image rectification accuracy are reported. Sample size, spatial distribution pattern and measurement errors in a set of ground control points, and the computational algorithm employed to derive the estimate of the parameters of a least-squares bivariate map-image transformation function, are varied in order to assess the sensitivity of the procedure. Standard errors and confidence limits are derived for each of 72 cases, and it is shown that the effects of all four factors are significant. Standard errors fall rapidly as sample size increases, and rise as the control point pattern becomes more linear. Measurement error is shown to have a significant effect on both accuracy and precision. The Gram-Schmidt orthogonal polynomial algorithm performs consistently better than the Gauss-Jordan matrix inversion procedure in all circumstances.
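The core estimation step, fitting a least-squares map-to-image transformation from ground control points, can be sketched as follows. This shows only a first-order (affine) case with synthetic control points and noise; the study itself compares algorithms such as Gram-Schmidt orthogonal polynomials against direct matrix inversion, which this sketch does not reproduce.

```python
import numpy as np

def fit_affine(map_xy, image_xy):
    """Least-squares map->image affine transform from ground control points.
    Design matrix rows are [1, x, y]; both image axes solved at once."""
    A = np.column_stack([np.ones(len(map_xy)), map_xy])
    coef, *_ = np.linalg.lstsq(A, image_xy, rcond=None)
    return coef                       # shape (3, 2)

def apply_affine(coef, map_xy):
    A = np.column_stack([np.ones(len(map_xy)), map_xy])
    return A @ coef

rng = np.random.default_rng(0)
gcp_map = rng.random((20, 2)) * 100                       # synthetic control points
true = np.array([[5.0, -2.0], [1.2, 0.3], [-0.4, 1.1]])   # known transform
gcp_img = apply_affine(true, gcp_map) + rng.normal(0, 0.01, (20, 2))  # measurement error

coef = fit_affine(gcp_map, gcp_img)
rmse = np.sqrt(np.mean((apply_affine(coef, gcp_map) - gcp_img) ** 2))
```

With small measurement error and well-spread control points the parameters are recovered closely; degrading either (fewer points, collinear pattern, larger noise) inflates the standard errors, which is the sensitivity the study quantifies.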

6.
Assessing spatial autocorrelation (SA) of statistical estimates such as means is a common practice in spatial analysis and statistics. Popular SA statistics implicitly assume that the reliability of the estimates is irrelevant, and users of these statistics likewise ignore it. Using empirical and simulated data, we demonstrate that current SA statistics tend to overestimate SA when errors of the estimates are not considered. We argue that when assessing SA of estimates with error, one is essentially comparing distributions in terms of their means and standard errors. Using the concept of the Bhattacharyya coefficient, we propose the spatial Bhattacharyya coefficient (SBC) and suggest that it be used to evaluate the SA of estimates together with their errors. A permutation test is proposed to evaluate its significance. We conclude that the SBC reflects the magnitude of SA more accurately and robustly than traditional SA measures by incorporating the errors of estimates in the evaluation. Key Words: American Community Survey, Geary ratio, Moran’s I, permutation test, spatial Bhattacharyya coefficient.
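The building block of the SBC, the Bhattacharyya coefficient between two normal distributions (each representing an estimate's mean plus its standard error), can be computed as below. This is only the pairwise coefficient, not the full spatial statistic or its permutation test.

```python
import numpy as np

def bhattacharyya_coefficient(mu1, sd1, mu2, sd2):
    """BC between two univariate normals: 1 for identical distributions,
    approaching 0 as they become disjoint."""
    v1, v2 = sd1 ** 2, sd2 ** 2
    # Bhattacharyya distance for Gaussians, then BC = exp(-distance).
    d = 0.25 * (mu1 - mu2) ** 2 / (v1 + v2) \
        + 0.5 * np.log((v1 + v2) / (2 * sd1 * sd2))
    return np.exp(-d)

bc_same = bhattacharyya_coefficient(10.0, 2.0, 10.0, 2.0)  # identical estimates
bc_far = bhattacharyya_coefficient(10.0, 2.0, 20.0, 2.0)   # well-separated estimates
```

Comparing full distributions rather than point estimates is what lets the SBC discount apparent autocorrelation that is attributable to estimation error.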

7.
ABSTRACT

Point cloud classification, which provides meaningful semantic labels to the points in a point cloud, is essential for generating three-dimensional (3D) models. Its automation, however, remains challenging due to varying point densities and irregular point distributions. Adapting existing deep-learning approaches for two-dimensional (2D) image classification to point cloud classification is inefficient and results in the loss of information valuable for point cloud classification. In this article, a new approach that classifies point clouds directly in 3D is proposed. The approach uses multi-scale features generated by deep learning. It comprises three steps: (1) extract single-scale deep features using a 3D convolutional neural network (CNN); (2) subsample the input point cloud at multiple scales, with the point cloud at each scale being an input to the 3D CNN, and combine deep features at multiple scales to form multi-scale and hierarchical features; and (3) retrieve the probabilities that each point belongs to the intended semantic category using a softmax regression classifier. The proposed approach was tested against two publicly available point cloud datasets to demonstrate its performance and compared to the results produced by other existing approaches. The experiments achieved 96.89% overall accuracy on the Oakland dataset and 91.89% overall accuracy on the Europe dataset, which are the highest among the considered methods.
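As a small illustration of step (3), a softmax over per-point class scores yields the per-class probabilities. The scores below are hypothetical, standing in for the multi-scale deep features the paper's 3D CNN would produce.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax over per-point class scores (step 3 of the pipeline).
    Subtracting the row max keeps the exponentials numerically stable."""
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical class scores for two points over three semantic categories.
scores = np.array([[2.0, 0.5, 0.1],    # point likely category 0
                   [0.1, 0.2, 3.0]])   # point likely category 2
probs = softmax(scores)
labels = probs.argmax(axis=1)
```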

8.
ABSTRACT

The stochastic perturbation component of an urban cellular automata (CA) model is difficult to fine-tune: when a single stochastic variable is used it does not take the constraints of known factors into account, and when the Monte Carlo method is used the simulation results can vary considerably, reducing the accuracy of the simulated results. Therefore, in this paper, we optimize the stochastic component of an urban CA model by using a maximum entropy model to differentially control the intensity of the stochastic perturbation in the spatial domain. We use the kappa coefficient, figure of merit, and landscape metrics to evaluate the accuracy of the simulated results. The experimental results obtained for Wuhan, China, prove the effectiveness of the optimization. They show that, after the optimization, the kappa coefficient and figure of merit of the simulated results are significantly improved when using the stochastic variable, and slightly improved when using the Monte Carlo method. The landscape metrics for the simulated results and actual data are much closer when using the stochastic variable, and slightly closer when using the Monte Carlo method, but the difference between the simulated results is narrowed, reflecting the fact that the results are more reliable.
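The two accuracy measures named above, the kappa coefficient and the figure of merit, can be sketched on toy categorical maps as follows. This is a minimal illustration of the metrics only, not the paper's evaluation pipeline.

```python
import numpy as np

def kappa(obs, sim):
    """Cohen's kappa between two equally sized categorical maps."""
    obs, sim = np.asarray(obs).ravel(), np.asarray(sim).ravel()
    cats = np.union1d(obs, sim)
    po = np.mean(obs == sim)                                  # observed agreement
    pe = sum(np.mean(obs == c) * np.mean(sim == c) for c in cats)  # chance agreement
    return (po - pe) / (1 - pe)

def figure_of_merit(obs_change, sim_change):
    """FoM = hits / (misses + hits + false alarms) on boolean change maps."""
    obs_change, sim_change = np.asarray(obs_change), np.asarray(sim_change)
    hits = np.sum(obs_change & sim_change)
    misses = np.sum(obs_change & ~sim_change)
    false_alarms = np.sum(~obs_change & sim_change)
    return hits / (misses + hits + false_alarms)

obs = np.array([0, 0, 1, 1, 1, 0, 1, 0])   # observed urban (1) / non-urban (0)
sim = np.array([0, 0, 1, 1, 0, 0, 1, 1])   # simulated map
k = kappa(obs, sim)
fom = figure_of_merit(obs == 1, sim == 1)
```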

9.
Abstract

In this article we demonstrate that substantial gains in time can be made when using point sampling rather than contour line digitising for generation of Digital Elevation Models (DEMs). A simple sampling scheme, based on regularly distributed points, was used supplemented with points near break-lines in the terrain. An evaluation of surfaces created with three different interpolation methods at three different resolutions shows that the statistical distribution was better when using points as opposed to contours, and that the accuracy was comparable despite the much smaller amount of input data.
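A minimal sketch of interpolating a surface from regularly distributed sample points, using inverse-distance weighting as one of many possible interpolators. The planar terrain and grid sampling here are invented for illustration, not the paper's data or its three interpolation methods.

```python
import numpy as np

def idw(sample_xy, sample_z, query_xy, power=2.0, eps=1e-12):
    """Inverse-distance-weighted prediction at each query point."""
    d = np.linalg.norm(query_xy[:, None, :] - sample_xy[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)
    return (w * sample_z).sum(axis=1) / w.sum(axis=1)

# Synthetic terrain: a planar surface sampled on a coarse regular grid.
gx, gy = np.meshgrid(np.linspace(0, 1, 6), np.linspace(0, 1, 6))
samples = np.column_stack([gx.ravel(), gy.ravel()])
z = 100 + 50 * samples[:, 0] + 20 * samples[:, 1]

queries = np.array([[0.5, 0.5], [0.1, 0.9]])
pred = idw(samples, z, queries)
true = 100 + 50 * queries[:, 0] + 20 * queries[:, 1]
```

Even this sparse regular sample reconstructs the smooth surface closely, consistent with the article's finding that point sampling can match contour digitising with far less input data.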

10.
Abstract

Mapping by sampling and prediction of local and regional values of two-dimensional surfaces is a frequent, complex task in geographical information systems. This article describes a method for the approximation of two-dimensional surfaces by optimizing sample size, arrangement and prediction accuracy simultaneously. First, a grid of an ancillary data set is approximated by a quadtree to determine a predefined number of homogeneous mapping units. This approximation is optimal in the sense of minimizing Kullback-divergence between the quadtree and the grid of ancillary data. Then, samples are taken from each mapping unit. The performance of this sampling has been tested against other sampling strategies (regular and random) and found to be superior in reconstructing the grid using three interpolation techniques (inverse squared Euclidean distance, kriging, and Thiessen-polygonization). Finally, the discrepancy between the ancillary grid and the surface to be mapped is modelled by different levels and spatial structures of noise. Conceptually this method is advantageous in cases when sampling strata cannot be well defined a priori and the spatial structure of the phenomenon to be mapped is not known, but ancillary information (e.g., remotely-sensed data), corresponding to its spatial pattern, is available.
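The first stage, approximating an ancillary grid by a quadtree of homogeneous mapping units, can be sketched as below. For simplicity this version splits blocks on a variance threshold; the paper instead chooses the subdivision that minimizes Kullback divergence for a predefined number of units.

```python
import numpy as np

def quadtree_leaves(grid, x0, y0, size, tol):
    """Recursively split a square block until it is homogeneous (variance
    below tol) or a single cell; return leaf blocks as (x0, y0, size)."""
    block = grid[y0:y0 + size, x0:x0 + size]
    if size == 1 or block.var() <= tol:
        return [(x0, y0, size)]
    h = size // 2
    leaves = []
    for dx, dy in [(0, 0), (h, 0), (0, h), (h, h)]:
        leaves += quadtree_leaves(grid, x0 + dx, y0 + dy, h, tol)
    return leaves

# Ancillary grid: homogeneous left half, heterogeneous (noisy) right half.
rng = np.random.default_rng(1)
g = np.zeros((8, 8))
g[:, 4:] = rng.random((8, 4))
leaves = quadtree_leaves(g, 0, 0, 8, tol=1e-6)
```

Homogeneous regions stay as large mapping units (and need few samples), while heterogeneous regions split into many small units, which is the adaptive behaviour the method exploits.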

11.
Abstract

The current research focuses upon the development of a methodology for undertaking real-time spatial analysis in a supercomputing environment, specifically using massively parallel SIMD computers. Several approaches that can be used to explore the parallelization characteristics of spatial problems are introduced. Within the focus of a methodology directed toward spatial data parallelism, strategies based on both location-based data decomposition and object-based data decomposition are proposed and a programming logic for spatial operations at local, neighborhood and global levels is also recommended. An empirical study of real-time traffic flow analysis shows the utility of the suggested approach for a complex, spatial analysis situation. The empirical example demonstrates that the proposed methodology, especially when combined with appropriate programming strategies, is preferable in situations where critical, real-time, spatial analysis computations are required. The implementation of this example in a parallel environment also raises some interesting questions with respect to the theoretical basis underlying the analysis of large networks.
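Location-based data decomposition can be illustrated in miniature: partition a grid into spatial tiles and apply a local-level spatial operation to each tile independently. Threads stand in here for SIMD processing elements; the tile layout and operation are invented for illustration.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def local_mean_filter(block):
    """A local-level spatial operation applied independently per tile."""
    return block.mean()

grid = np.arange(64, dtype=float).reshape(8, 8)

# Location-based decomposition: four 4x4 tiles covering the 8x8 grid.
tiles = [grid[r:r + 4, c:c + 4] for r in (0, 4) for c in (0, 4)]

# Each tile is processed by a separate worker; no tile depends on another,
# so the local-level operation parallelizes without communication.
with ThreadPoolExecutor() as ex:
    means = list(ex.map(local_mean_filter, tiles))
```

Neighborhood-level operations would additionally need halo exchange between adjacent tiles, and global-level operations a reduction step, which is where the recommended programming logic distinguishes the three levels.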

12.
ABSTRACT

The aim of the study on which the article is based was to identify groups of communities with similar resilience profiles, using Norwegian municipalities as a case. The authors used a set of socioeconomic and environmental indicators as measures of municipalities’ resilience and performed a cluster analysis to divide the municipalities into groups with similar multivariate resilience signatures. The results revealed six groups of municipalities that, apart from their unique combinations of indicator scores, featured certain spatial patterns, such as an “urban cluster” with urbanized municipalities and a “suburban cluster” with municipalities concentrated around major cities. The authors conclude that municipalities in each of the groups shared aspects that made them either more or less resilient to natural hazards, which could make them potential targets for shared interventions. Additionally, the authors conclude that clustering can be used to identify municipalities with similar resilience features and that could benefit from networking and sharing operational planning as a way to improve their respective communities' resilience to natural hazards.
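The clustering step can be sketched with a plain k-means on synthetic indicator profiles. The indicator values, cluster count, and initialisation below are invented for illustration; the study's own indicator set and clustering method may differ.

```python
import numpy as np

def kmeans(X, k, iters=100):
    """Plain k-means returning labels and centroids. Initial centroids are
    rows spread across the data order (deterministic for this sketch)."""
    centroids = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Two synthetic "resilience signatures": low vs high indicator scores.
rng = np.random.default_rng(7)
low = rng.normal(0.0, 0.1, (20, 3))    # 20 municipalities, 3 indicators
high = rng.normal(1.0, 0.1, (20, 3))
X = np.vstack([low, high])
labels, centroids = kmeans(X, k=2)
```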

13.
The accuracy of spatial interpolation of precipitation data is determined by the actual spatial variability of the precipitation, the interpolation method, and the distribution of observation stations, whose selection is particularly important. In this paper, three spatial sampling programs, namely spatial random sampling, spatial stratified sampling, and spatial sandwich sampling, are used to analyze data from meteorological stations in northwestern China. We compared the accuracy of ordinary kriging interpolation on the basis of the sampling results. The error values of the regional annual precipitation interpolation based on spatial sandwich sampling, including ME (0.1513), RMSE (95.91), ASE (101.84), MSE (−0.0036), and RMSSE (1.0397), were optimal under the premise of abundant prior knowledge. The result of spatial stratified sampling was poorer, and that of spatial random sampling worse still. Spatial sandwich sampling was the best sampling method, minimizing the error of regional precipitation estimation; it achieved higher accuracy than the other two methods and has a wider scope of application.
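The error measures reported above (ME, RMSE, ASE, MSE, RMSSE) are standard kriging cross-validation diagnostics and can be computed as follows, given predictions and their kriging standard errors. The numbers below are invented for illustration, not the paper's results.

```python
import numpy as np

def cross_validation_metrics(observed, predicted, pred_se):
    """Kriging cross-validation diagnostics.
    pred_se holds the kriging standard error of each prediction."""
    err = predicted - observed
    std_err = err / pred_se
    return {
        "ME": err.mean(),                         # bias, ideally near 0
        "RMSE": np.sqrt((err ** 2).mean()),       # overall accuracy
        "ASE": pred_se.mean(),                    # should be close to RMSE
        "MSE": std_err.mean(),                    # standardized bias, near 0
        "RMSSE": np.sqrt((std_err ** 2).mean()),  # near 1 if SEs well calibrated
    }

obs = np.array([10.0, 12.0, 9.0, 11.0])
pred = np.array([10.5, 11.5, 9.5, 10.5])
se = np.full(4, 0.5)
m = cross_validation_metrics(obs, pred, se)
```

An RMSSE near 1 with ASE close to RMSE, as in the paper's sandwich-sampling result, indicates the model's standard errors describe the actual prediction errors well.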

14.
The topic of geoprivacy is increasingly relevant as larger quantities of personal location data are collected and shared. The results of scientific inquiries are often spatially suppressed to protect confidentiality, limiting possible benefits of public distribution. Obfuscation techniques for point data hold the potential to enable the public release of more accurate location data without compromising personal identities. This paper examines the application of four spatial obfuscation methods for household survey data. Household privacy is evaluated by a nearest neighbor analysis, and spatial distribution is measured by a cross-k function and cluster analysis. A new obfuscation technique, Voronoi masking, is demonstrated to be distinctively equipped to balance between protecting both household privacy and spatial distribution.
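A baseline obfuscation method and a privacy proxy can be sketched as below. Note this shows simple random-perturbation masking and a displacement check, not the paper's Voronoi masking, which instead snaps points toward Voronoi polygon boundaries.

```python
import numpy as np

def perturbation_mask(points, radius, seed=0):
    """Displace each point by a random direction and distance up to `radius`
    (random-perturbation masking; distances uniform over the disc)."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, len(points))
    r = radius * np.sqrt(rng.random(len(points)))
    return points + np.column_stack([r * np.cos(theta), r * np.sin(theta)])

rng = np.random.default_rng(5)
homes = rng.random((50, 2))                    # synthetic household locations
masked = perturbation_mask(homes, radius=0.05)

# Privacy proxy: how far has each masked point moved from the true household?
displacement = np.linalg.norm(masked - homes, axis=1)
```

Larger displacements mean better privacy but a more distorted spatial distribution; balancing the two is exactly the trade-off on which the paper compares the four methods.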

15.
The factors determining the suitability of limestone for industrial use and its commercial value are the amounts of calcium oxide (CaO) and impurities. From 244 sample points in 18 drillhole sites in a limestone mine, southwestern Japan, data on four impurity elements, SiO2, Fe2O3, MnO, and P2O5 were collected. It generally is difficult to estimate spatial distributions of these contents, because most of the limestone bodies in Japan are located in the accretionary complex lithologies of Paleozoic and Mesozoic age. Because the spatial correlations of content data are not clearly shown by variogram analysis, a feedforward neural network was applied to estimate the content distributions. The network structure consists of three layers: input, middle, and output. The input layer has 17 neurons and the output layer four. Three neurons in the input layer correspond with x, y, z coordinates of a sample point and the others are rock types such as crystalline and conglomeratic limestones, and fossil types related to the geologic age of the limestone. Four neurons in the output layer correspond to the amounts of SiO2, Fe2O3, MnO, and P2O5. Numbers of neurons in the middle layer and training data differ with each estimation point to avoid the overfitting of the network. We could detect several important characteristics of the three-dimensional content distributions through the network such as a continuity of low content zones of SiO2 along a Lower Permian fossil zone trending NE-SW, and low-quality zones located in depths shallower than 50 m. The capability of the neural network-based method compared with the geostatistical method is demonstrated from the viewpoints of estimation errors and spatial characteristics of multivariate data. To evaluate the uncertainty of estimates, a method that draws several outputs by changing coordinates slightly from the target point and inputting them to the same trained network is proposed. Uncertainty differs with impurity elements, and is not based on just the spatial arrangement of data points.

16.

Accurately mapping a region’s ground water quality depends upon the spatial sampling strategies employed, including where and how often field data are collected. This study compares the relative values of three field sampling strategies for mapping a known migrating plume of volcanic ground water in Sierra Valley, California. The first strategy sampled wells once each year during 1957, 1972, and 1980 (n=63, 45, and 57, respectively) and portrayed spatial–temporal changes in ground water quality more clearly on maps than did two alternative sampling strategies. One of these alternatives, Strategy 2, sampled one well per township per year during 1957, 1972, and 1980 (n=11) and did not detect the migrating plume, despite being a recommended strategy. The other alternative, Strategy 3, frequently sampled in time a small, fixed group of indicator wells (n=13) every four years for the same period, again producing maps with little correlation to the original pattern detected by Strategy 1.

17.
Abstract

Error and uncertainty in spatial databases have gained considerable attention in recent years. The concern is that, as in other computer applications and, indeed, all analyses, poor quality input data will yield even worse output. Various methods for analysis of uncertainty have been developed, but none has been shown to be directly applicable to an actual geographical information system application in the area of natural resources. In spatial data on natural resources in general, and in soils data in particular, a major cause of error is the inclusion of unmapped units within areas delineated on the map as uniform. In this paper, two alternative algorithms for simulating inclusions in categorical natural resource maps are detailed. Their usefulness is shown by a simplified Monte Carlo testing to evaluate the accuracy of agricultural land valuation using land use and the soil information. Using two test areas it is possible to show that errors of as much as 6 per cent may result in the process of land valuation, with simulated valuations both above and below the actual values. Thus, although an actual monetary cost of the error term is estimated here, it is not found to be large.
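The Monte Carlo testing described above can be sketched as follows: randomly convert a fraction of mapped cells to other (unmapped inclusion) classes and observe the spread of total valuations. The per-class values, inclusion rate, and resulting error magnitudes here are invented for illustration and do not reproduce the paper's 6 per cent figure.

```python
import numpy as np

def simulate_valuation(values_per_class, class_map, inclusion_rate,
                       n_runs=500, seed=0):
    """Monte Carlo land valuation: in each run, flip a random fraction of
    cells to a random class (simulating unmapped inclusions) and total
    the per-cell values."""
    rng = np.random.default_rng(seed)
    n_classes = len(values_per_class)
    totals = np.empty(n_runs)
    for i in range(n_runs):
        m = class_map.copy()
        flip = rng.random(m.shape) < inclusion_rate
        m[flip] = rng.integers(0, n_classes, flip.sum())
        totals[i] = values_per_class[m].sum()
    return totals

values = np.array([100.0, 250.0, 400.0])   # value per cell by soil class
true_map = np.zeros((50, 50), dtype=int)   # area mapped as uniform class 0
true_total = values[true_map].sum()
totals = simulate_valuation(values, true_map, inclusion_rate=0.1)
rel_err = (totals - true_total) / true_total
```

The distribution of `rel_err` across runs is the simulated valuation error attributable to inclusions.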

18.
ABSTRACT

Recently developed urban air quality sensor networks are used to monitor air pollutant concentrations at a fine spatial and temporal resolution. The measurements are, however, limited to point support. To obtain areal coverage in space and time, interpolation is required. A spatio-temporal regression kriging approach was applied to predict nitrogen dioxide (NO2) concentrations at unobserved space-time locations in the city of Eindhoven, the Netherlands. Prediction maps were created at 25 m spatial resolution and hourly temporal resolution. In regression kriging, the trend is modelled separately from autocorrelation in the residuals. The trend part of the model, consisting of a set of spatial and temporal covariates, was able to explain 49.2% of the spatio-temporal variability in NO2 concentrations in Eindhoven in November 2016. Spatio-temporal autocorrelation in the residuals was modelled by fitting a sum-metric spatio-temporal variogram model, adding smoothness to the prediction maps. The accuracy of the predictions was assessed using leave-one-out cross-validation, resulting in a Root Mean Square Error of 9.91 μg m⁻³, a Mean Error of −0.03 μg m⁻³ and a Mean Absolute Error of 7.29 μg m⁻³. The method allows for easy prediction and visualization of air pollutant concentrations and can be extended to a near real-time procedure.
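The leave-one-out cross-validation used to assess prediction accuracy can be sketched as below, with a simple inverse-distance predictor standing in for the paper's spatio-temporal regression kriging, and synthetic sensor data in place of the Eindhoven network.

```python
import numpy as np

def idw_predict(train_xy, train_z, query, power=2.0):
    """Inverse-distance-weighted prediction at a single query location."""
    d = np.linalg.norm(train_xy - query, axis=1)
    w = 1.0 / (d ** power + 1e-12)
    return (w * train_z).sum() / w.sum()

def loocv_metrics(xy, z):
    """Leave-one-out cross-validation: hold out each station in turn,
    predict it from the rest, and summarize the errors."""
    errs = []
    for i in range(len(z)):
        mask = np.arange(len(z)) != i
        errs.append(idw_predict(xy[mask], z[mask], xy[i]) - z[i])
    errs = np.array(errs)
    return {"RMSE": np.sqrt((errs ** 2).mean()),
            "ME": errs.mean(),
            "MAE": np.abs(errs).mean()}

rng = np.random.default_rng(2)
xy = rng.random((40, 2)) * 1000                       # sensor locations (m)
no2 = 25 + 0.01 * xy[:, 0] + rng.normal(0, 1, 40)     # synthetic NO2 field
m = loocv_metrics(xy, no2)
```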

19.
Spatio-temporal differentiation and influencing factors of star-rated hotel resilience in Chinese tourist cities   Total citations: 1; self-citations: 0; citations by others: 1
王庆伟  梅林  姜洪强  姚前  石勇  付占辉 《地理科学》2022,42(8):1483-1491
The COVID-19 pandemic severely damaged the global tourism and hotel industries, but the capacity of these industries to cope with and adapt to the disturbance differs between cities; that is, their resilience differs. Taking star-rated hotels in 41 Chinese tourist cities as the research object, this study constructs a resilience assessment model based on cumulative loss and applies SARIMA, random forest, and other methods to explore the spatio-temporal differentiation and influencing factors of star-rated hotel resilience in 2020 under the disturbance of COVID-19. The findings are as follows: (1) As China's epidemic prevention and control gradually improved, the resilience of star-rated hotels in tourist cities strengthened continuously, but its evolution differed between cities. (2) The resilience of star-rated hotels showed clear spatial differentiation, with transport-corridor effects and geographic-proximity effects. (3) The main factors influencing resilience were, in order: the growth rate of the average room occupancy rate, the ratio of catering revenue to room revenue, per capita park green space, the growth rate of domestic tourism revenue, the proportion of days with good air quality, and the per capita disposable income of urban residents; their effects on resilience are nonlinear and complex. The results provide a useful reference for strengthening the resilience of star-rated hotels in tourist cities under the disturbance of COVID-19.

20.
Abstract

We present the notion of a natural tree as an efficient method for storing spatial information for quick access. A natural tree is a representation of spatial adjacency, organised to allow efficient addition of new data, access to existing data, or deletions. The nodes of a natural tree are compound elements obtained by a particular Delaunay triangulation algorithm. Improvements to that algorithm allow both the construction of the triangulation and subsequent access to neighbourhood information to be O(N log N). Applications include geographical information systems, contouring, and dynamical systems reconstruction.
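The natural tree is the paper's own structure and has no off-the-shelf implementation, but the neighbourhood information it organises can be queried from a standard Delaunay triangulation, e.g. via scipy, as sketched below on synthetic points.

```python
import numpy as np
from scipy.spatial import Delaunay

def neighbours(tri, k):
    """Indices of the Delaunay (natural) neighbours of input point k."""
    indptr, indices = tri.vertex_neighbor_vertices
    return set(indices[indptr[k]:indptr[k + 1]])

rng = np.random.default_rng(9)
pts = rng.random((50, 2))
tri = Delaunay(pts)          # adjacency the natural tree would organise
nbrs0 = neighbours(tri, 0)   # points sharing a Delaunay edge with point 0
```

The paper's contribution is organising exactly this adjacency so that insertions, deletions, and neighbourhood queries all stay efficient as the point set changes.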
