Cellular automata (CA) models can simulate complex urban systems through simple rules and have become important tools for studying the spatio-temporal evolution of urban land use. However, the multiple large-volume data layers, massive geospatial processing, and complicated automatic-calibration algorithms in urban CA models demand a high level of computational capability. Unfortunately, the limited performance of sequential computation on a single computing unit (i.e. a central processing unit (CPU) or a graphics processing unit (GPU)) and the high cost of parallel design and programming make it difficult to build a high-performance urban CA model. Owing to its computational power and scalability, the vectorization paradigm has become increasingly important and has received wide attention for this kind of computational problem. This paper presents a high-performance CA model that uses vectorization and parallel computing for the computation- and data-intensive geospatial processing in urban simulation. To transform the original algorithm into a vectorized one, we define the neighborhood set of the cell space and redesign the operations for neighborhood computation, transition-probability calculation, and cell-state transition. The experiments undertaken in this study demonstrate that the vectorized algorithm greatly reduces computation time, especially in a vector programming language environment, and that the algorithm can be parallelized as the data volume increases. For a simulation at 5-m resolution with a 3 × 3 neighborhood, the execution time decreased from 38,220.43 s to 803.36 s with the vectorized algorithm and was further shortened to 476.54 s by dividing the domain across four computing units.
The experiments also indicated that the computational efficiency of the vectorized algorithm is closely related to the neighborhood size and configuration, as well as the shape of the research domain. We conclude that combining vectorization with parallel computing provides scalable solutions that significantly improve the applicability of urban CA.
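The article does not include code, but the vectorized neighborhood computation it describes can be sketched in NumPy. Everything below is an illustrative assumption rather than the paper's implementation: a binary developed/undeveloped grid, the function names `neighborhood_counts` and `transition`, a wrap-around boundary via `np.roll`, and a deterministic threshold rule standing in for the paper's transition-probability calculation.

```python
import numpy as np

def neighborhood_counts(grid):
    """Sum the 8 Moore neighbours of every cell at once using whole-array
    shifts (np.roll) instead of a per-cell double loop.  Note: np.roll
    wraps at the edges (torus topology), a common simplification; the
    paper's model would instead handle the boundary of the study area."""
    counts = np.zeros_like(grid)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue  # exclude the cell itself
            counts += np.roll(np.roll(grid, di, axis=0), dj, axis=1)
    return counts

def transition(grid, counts, threshold):
    """Vectorized state transition: every undeveloped cell (0) with at
    least `threshold` developed neighbours becomes developed (1).
    A deterministic stand-in for the probabilistic rule in the paper."""
    return np.where((grid == 0) & (counts >= threshold), 1, grid)
```

Replacing the per-cell double loop with eight whole-array shifts is the essence of the vectorization the abstract describes: the work moves from interpreted iteration to optimized array kernels, which is what makes the reported speed-ups plausible.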
Is economic development compatible with mitigation? On the one hand, development should promote effective climate policy by enhancing states’ capacities for mitigation. On the other hand, economic growth creates more demand for production, thereby inhibiting emissions reduction. These arguments are often reconciled in the environmental Kuznets curve (EKC) thesis: development initially increases emissions in poor economies but begins to lower emissions after a country has attained a certain level of development. The aim of this article is to determine empirically whether the EKC hypothesis is plausible in light of emissions trends over the birth and implementation of the Kyoto Protocol. Drawing on data from the World Bank World Development Indicators and the World Resources Institute Climate Data Explorer, it conducts a large-N investigation of the emissions behaviour of 120 countries from 1990 to 2012. While several quantitative studies have found that economic factors influence emissions activity, this article goes beyond existing research by employing a more sophisticated – multilevel – research design to determine whether economic development: (a) remains a significant driver once country-level clustering is accounted for and (b) has different effects on different countries. The results indicate that, even after we account for country-level clustering and hold constant the other main putative drivers of emissions activity, economic development tends to inhibit emissions reduction. They also provide strong evidence that emissions trends resemble the EKC, with development significantly constraining emissions reduction in the South and promoting it in the North.
Policy relevance
This article contributes to the understanding of the (changing) role of economic development in shaping emissions activity.
It demonstrates the need for a contextualized, country-specific approach to evaluating the effectiveness of economic development in promoting emissions reduction, and uncovers new evidence in support of the EKC hypothesis.
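The inverted-U shape behind the EKC thesis is commonly operationalized as a quadratic-in-log regression of emissions on income. As a hedged illustration (the function name `ekc_turning_point` and the coefficients are hypothetical, not estimated from the article's data), the income level at which emissions peak follows directly from such a specification:

```python
import math

def ekc_turning_point(b1, b2):
    """Income level at which emissions peak under a quadratic-in-log EKC
    specification:  emissions = b0 + b1*ln(gdp) + b2*ln(gdp)**2,
    with b1 > 0 and b2 < 0.  Setting the derivative with respect to
    ln(gdp) to zero gives  ln(gdp*) = -b1 / (2*b2)."""
    return math.exp(-b1 / (2 * b2))

# Hypothetical coefficients, for illustration only: with b1 = 2.0 and
# b2 = -0.1 the peak falls at exp(10), roughly $22,000 per capita.
peak_income = ekc_turning_point(2.0, -0.1)
```

Countries below the turning point (the global South in the article's results) see emissions rise with development; countries above it (the North) see development promote reduction.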
This study explores the implications of shifting the narrative of climate policy evaluation from one of costs/benefits or economic growth to a message of improving social welfare. Focusing on the costs of mitigation and the associated impacts on gross domestic product (GDP) may translate into a widespread concern that a climate agreement will be very costly. This article considers the well-known Human Development Index (HDI) as an alternative criterion for judging the welfare effects of climate policy. We estimate the maximum possible annual average increase in HDI welfare per tonne of CO2 within the carbon budget associated with limiting warming to 2°C over the period 2015–2050. Emission pathways are determined by a policy that allows the HDI of poor countries, and their emissions, to increase under a business-as-usual development path, while countries with a high HDI value (>0.8) must restrain their emissions to ensure that the global temperature rise does not exceed 2°C. For comparison, the multi-regional RICE model is used to assess GDP growth under the same climate change policy goals.
Policy relevance
This is the first study to shift the narrative of climate policy evaluation from one of GDP growth to a message of improving social welfare, as captured by the HDI. This could make it easier for political leaders and climate negotiators to publicly commit themselves to ambitious carbon emission reduction goals, such as limiting global warming to 2°C, as in the (non-binding) agreement made at COP 21 in Paris in 2015. We find that if impacts are framed in terms of growth in HDI per tonne of CO2 emitted per capita instead of in GDP, the HDI of poor countries and their emissions are allowed to increase under a business-as-usual development path, whereas countries with a high HDI (>0.8) must control emissions so that the global temperature rise remains within 2°C. Importantly, a climate agreement is more attractive for rich countries under the HDI frame than under the GDP frame. This is good news, as these countries have to make the major contribution to emissions reductions.
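To make the proposed criterion concrete, here is a minimal sketch of "HDI welfare gain per tonne of CO2". The function `hdi_gain_per_tonne` and the input numbers are illustrative assumptions; the article's exact operationalization of the metric may differ:

```python
def hdi_gain_per_tonne(hdi_start, hdi_end, years, annual_emissions_t_pc):
    """Average annual HDI gain divided by annual per-capita CO2 emissions
    (tonnes).  An illustrative form of the welfare-per-emissions
    criterion described in the abstract, not the paper's exact formula."""
    annual_gain = (hdi_end - hdi_start) / years
    return annual_gain / annual_emissions_t_pc

# Hypothetical numbers: a country moving from HDI 0.55 to 0.70 over
# 2015-2050 while emitting 1.5 t CO2 per capita per year.
ratio = hdi_gain_per_tonne(0.55, 0.70, 35, 1.5)
```

Under such a frame, a low-emitting developing country with fast HDI growth scores well even as its absolute emissions rise, which is the reallocation of mitigation effort the abstract describes.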
Space-time prisms envelop all spatio-temporal locations that a moving object may have visited between two of its known spatio-temporal locations, given a bound on its travel speed. In this context, the known locations are often the result of observations or measurements, and they are called ‘anchor points’. The classic space-time prism, in isotropic two-dimensional space as well as in transportation networks, assumes that the measurements of these anchor points are exact. Although in many applications we can assume that time is measured fairly precisely, this assumption is unrealistic for the spatial components of measured locations (consider Global Positioning System (GPS) errors, for instance). In this paper, we extend the classical prism from anchor points to circular ‘anchor regions’ that capture the uncertainty, or error, in their measurement. We define the notion of a space-time prism with uncertain anchor points, called an ‘uncertain prism’ for short. We study the geometry of uncertain prisms in an arbitrary metric space to make the concept as widely applicable as possible. We also focus on the rims of uncertain space-time prisms, which demarcate the area that a moving object can have visited between two anchor regions (given some local speed limitations).
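The classic prism and the uncertain variant can both be sketched as membership tests. The functions below are illustrative assumptions, not the paper's formal definitions; in particular, measuring distance to the nearest point of each anchor disc is one natural relaxation for circular anchor regions:

```python
import math

def in_prism(p, t, a1, t1, a2, t2, vmax):
    """Classic space-time prism membership in isotropic 2-D space with
    exact anchors: (p, t) is reachable iff the object can get from
    anchor a1 (observed at time t1) to p by time t, and from p to
    anchor a2 by time t2, travelling no faster than vmax."""
    return t1 + math.dist(p, a1) / vmax <= t <= t2 - math.dist(p, a2) / vmax

def in_uncertain_prism(p, t, c1, r1, t1, c2, r2, t2, vmax):
    """Sketch of an 'uncertain prism' with circular anchor regions
    (centre c, radius r): use the distance to the nearest point of each
    disc, so the region as a whole is treated as a possible anchor."""
    d1 = max(0.0, math.dist(p, c1) - r1)
    d2 = max(0.0, math.dist(p, c2) - r2)
    return t1 + d1 / vmax <= t <= t2 - d2 / vmax
```

For example, with anchors (0, 0) at t = 0 and (10, 0) at t = 10 and vmax = 2, the point (5, 0) at t = 2 lies outside the classic prism (it needs 2.5 time units to be reached) but inside an uncertain prism whose anchor discs have radius 1, because the discs shorten the required travel distances.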
Geographic visualization tools with coordinated and multiple views (CMV) typically provide sets of visualization methods. Such a configuration lets users investigate data in various visual contexts; however, it can be confusing owing to the multiplicity of visual components and interactive functions. We addressed this challenge and conducted an empirical study of how a CMV tool, consisting of a map, a parallel coordinate plot (PCP), and a table, is used to acquire information. We combined a task-based approach with eye-tracking and usability metrics, since these methods provide comprehensive insights into users’ behaviour. Our empirical study revealed that users appreciate the freedom to choose visualization components. The individuals worked with all the available visualization methods, and they often used more than one method when executing tasks. Different views were used in different ways by different individuals, but in similarly effective ways. Even the PCP, which is often claimed to be problematic, was found to be a handy way of exploring data when accompanied by interactive functions.