Limiting global warming to ‘well below’ 2°C above pre-industrial levels and pursuing efforts to limit the temperature increase even further to 1.5°C is an integral part of the 2015 Paris Agreement. To achieve these aims, cumulative global carbon emissions after 2016 should not exceed 940–390 Gt of CO2 (for the 2°C target) and 167 to −48 Gt of CO2 (for the 1.5°C target) by the end of the century. This paper analyses the EU’s cumulative carbon emissions in different models and scenarios (global models, EU-focused models and national carbon mitigation scenarios). Due to the deeper reductions in energy use and in the carbon intensity of the end-use sectors in the national scenarios, we identify an additional mitigation potential of 26–37 Gt of cumulative CO2 emissions up to 2050 compared with what is currently included in global or EU scenarios. These additional reductions could both reduce the need for carbon dioxide removal and bring cumulative emissions in global and EU scenarios in line with a fairness-based domestic EU budget for a 2°C target, while still remaining well above the budget for 1.5°C. A toy numerical sketch of this cumulative-budget comparison is given after the key policy insights below.

Key policy insights:
Models used for policy advice, such as global integrated assessment models or EU models, fail to consider some of the mitigation potential available at the sectoral level.
Global and EU models assume significant levels of CO2 emission reductions from carbon capture and storage, not only to reach the 1.5°C target but also to reach the 2°C target.
Global and EU model scenarios are not compatible with a fair domestic EU share in the global carbon budget either for 2°C or for 1.5°C.
Integrating additional sectoral mitigation potential from detailed national models can help bring down cumulative emissions in global and EU models to a level comparable to a fairness-based domestic EU share compatible with the 2°C target, but not the 1.5°C aspiration.
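As a rough illustration of how cumulative emissions are compared against such budgets, the following Python sketch integrates two annual emission pathways. All pathway numbers are invented for illustration and are not taken from the scenarios analysed in the paper.

```python
import numpy as np

# Hypothetical annual EU CO2 pathways in Gt/yr (illustrative only, not the
# paper's scenario data): a model-based pathway, and a national-scenario
# pathway with deeper end-use reductions that turns net-negative by 2050.
years = np.arange(2017, 2051)
model_path = np.linspace(3.5, 0.5, len(years))
national_path = np.linspace(3.5, -1.3, len(years))

# Cumulative emissions are the sum of the annual values (Gt/yr x 1 yr).
cum_model = model_path.sum()
cum_national = national_path.sum()
print(f"cumulative (model):    {cum_model:.0f} Gt CO2")
print(f"cumulative (national): {cum_national:.0f} Gt CO2")
print(f"additional mitigation potential: {cum_model - cum_national:.0f} Gt CO2")
```

With these invented pathways the difference comes out near the 26–37 Gt range reported in the abstract, but the figure depends entirely on the assumed pathways.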
At the beginning of the twenty-first century, geodetic astronomy underwent a technological change with the development of
Digital Zenith Camera Systems (DZCS). Such instruments provide vertical deflection data at an angular accuracy level of 0.1″
and better. Recently, DZCS have been employed for the collection of dense sets of astrogeodetic vertical deflection data in
several test areas in Germany with high-resolution digital terrain model (DTM) data (10–50 m resolution) available. These
considerable advancements motivate a new analysis of the method of astronomical-topographic levelling, which uses DTM data
for the interpolation between the astrogeodetic stations. We present and analyse a least-squares collocation technique that
uses DTM data for the accurate interpolation of vertical deflection data. The combination of both data sets allows a precise
determination of the gravity field along profiles, even in regions with a rugged topography. The accuracy of the method is
studied with particular attention to the density of astrogeodetic stations. The error propagation rule of astronomical levelling
is empirically derived. It accounts for the signal omission that increases with the station spacing. In a test area located
in the German Alps, the method was successfully applied to the determination of a quasigeoid profile of 23 km length. For
a station spacing from a few hundred metres to about 2 km, the accuracy of the quasigeoid was found to be about 1–2 mm, which corresponds
to a relative accuracy of about 0.05–0.1 ppm. Application examples are given, such as the local and regional validation of
gravity field models computed from gravimetric data and the economical determination of the gravity field in regions with sparse geodetic coverage.
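The interpolation step can be sketched compactly. The following Python snippet shows least-squares collocation along a profile with an illustrative Gaussian covariance function; in astronomical-topographic levelling the DTM-implied deflection signal would be removed before this step and restored afterwards. The covariance model, parameter values and function names are assumptions for illustration, not those of the paper.

```python
import numpy as np

def lsc_interpolate(x_obs, y_obs, x_new, signal_var=1.0, corr_len=2000.0,
                    noise_var=0.01):
    """Least-squares collocation along a 1-D profile.

    x_obs: station positions (m); y_obs: observed vertical deflections
    (arcsec, after topographic reduction); x_new: prediction positions.
    Uses a Gaussian covariance C(d) = signal_var * exp(-(d/corr_len)^2).
    """
    def cov(a, b):
        d = a[:, None] - b[None, :]
        return signal_var * np.exp(-(d / corr_len) ** 2)

    # Signal-plus-noise covariance of the observations, cross-covariance
    # between prediction and observation points, then the LSC estimate.
    C_obs = cov(x_obs, x_obs) + noise_var * np.eye(len(x_obs))
    C_cross = cov(x_new, x_obs)
    return C_cross @ np.linalg.solve(C_obs, y_obs)

# Toy profile: stations roughly 1 km apart (values invented for illustration)
x_obs = np.array([0.0, 900.0, 2100.0, 3000.0])
y_obs = np.array([1.2, 1.5, 0.9, 1.1])
print(lsc_interpolate(x_obs, y_obs, np.linspace(0.0, 3000.0, 7)))
```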
Finding the shortest path through open spaces is a well-known challenge for pedestrian routing engines. A common solution is routing along the open space boundary, which in most cases yields an unnecessarily long route. A possible alternative is to create a subgraph within the open space. This paper assesses this approach and investigates its implications for routing engines. A number of algorithms (Grid, Spider-Grid, Visibility, Delaunay, Voronoi, Skeleton) have been evaluated by four different criteria: (i) number of additionally created graph edges, (ii) additional graph creation time, (iii) route computation time, (iv) routing quality. We show that each algorithm has advantages and disadvantages depending on the use case. We identify the Visibility algorithm with a reduced number of edges in the subgraph, and the Spider-Grid algorithm with a large grid size, as good compromises in many scenarios.
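As a concrete example of the subgraph idea, the sketch below builds a simple visibility subgraph inside an open-space polygon: every pair of polygon vertices and entry points is connected if the straight segment between them stays within the space. It assumes the shapely package and invented coordinates, and it omits the edge-reduction heuristics and grid variants evaluated in the paper.

```python
from itertools import combinations
from shapely.geometry import LineString, Polygon

def visibility_subgraph(space: Polygon, entry_points):
    """Visibility subgraph of an open space.

    Nodes are the polygon's vertices plus the entry/exit points; an edge
    connects two nodes iff the straight segment between them lies inside
    the space. Returns node -> list of (neighbour, segment length)."""
    nodes = list(space.exterior.coords[:-1]) + [tuple(p) for p in entry_points]
    graph = {n: [] for n in nodes}
    for p, q in combinations(nodes, 2):
        seg = LineString([p, q])
        # covers() (unlike contains()) also accepts segments that run
        # along the boundary, which are valid walking paths
        if space.covers(seg):
            graph[p].append((q, seg.length))
            graph[q].append((p, seg.length))
    return graph

# Toy example: L-shaped plaza with two entrances (hypothetical coordinates)
plaza = Polygon([(0, 0), (4, 0), (4, 2), (2, 2), (2, 4), (0, 4)])
graph = visibility_subgraph(plaza, [(1, 0), (0, 3)])
print(len(graph), "nodes")
```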
The understanding of alpine groundwater dynamics and its interactions with surface stream water is crucial for water resources research and management in mountain regions. To characterize local spring and stream water systems, water samples were collected regularly at 8 springs and 5 stream gauges, together with bulk precipitation samples at 4 sites, between January 2012 and January 2016 in the Berchtesgaden Alps for stable water isotope analysis. The sampled hydro-systems show very different dynamics in their stable isotope signatures. To quantify those differences, we analyzed the stable isotope time series and calculated mean transit times (MTT) and young water fractions (YWF) of the sampled systems. Based on the data analysis, two groups of spring systems could be identified: one group with relatively short MTT (and high YWF) and another group with long MTT (and low YWF). The MTTs and YWFs of the sampled streams fell between those of the two groups. The reaction of the sampled spring and stream systems to precipitation input was studied by lag time analysis. The average lag times revealed the influence of snow and ice melt on the hydrology of the study region. It was not possible to determine the recharge elevation of the spring and stream systems because of the lack of an altitude effect in the precipitation data. For two catchments, the influence of the spring water's stable isotopic composition on streamflow was demonstrated, highlighting the importance of spring water for the river network in the study area.
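The abstract does not spell out how MTT and YWF were computed; one widely used approach (following Kirchner, 2016) fits seasonal sine curves to the δ18O series of precipitation and discharge and compares their amplitudes. The sketch below follows that approach under an assumed exponential transit time distribution; function names and parameters are illustrative, not the paper's actual workflow.

```python
import numpy as np
from scipy.optimize import curve_fit

OMEGA = 2 * np.pi / 365.25  # angular frequency of the seasonal cycle (1/day)

def seasonal_amplitude(t_days, d18o):
    """Fit d18O(t) = a*cos(wt) + b*sin(wt) + c, return amplitude sqrt(a^2+b^2)."""
    def model(t, a, b, c):
        return a * np.cos(OMEGA * t) + b * np.sin(OMEGA * t) + c
    (a, b, _), _ = curve_fit(model, t_days, d18o)
    return float(np.hypot(a, b))

def ywf_and_mtt(t_precip, d18o_precip, t_stream, d18o_stream):
    """Young water fraction ~ amplitude ratio (Kirchner 2016); MTT in days
    from the amplitude damping of an exponential transit time distribution."""
    A_p = seasonal_amplitude(t_precip, d18o_precip)
    A_s = seasonal_amplitude(t_stream, d18o_stream)
    ywf = A_s / A_p
    mtt = np.sqrt((A_p / A_s) ** 2 - 1) / OMEGA
    return ywf, mtt
```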