Carbon Budgets

It started as a nice simple idea: there is a finite amount of carbon that humanity can burn before the planet warms above 2C. This idea was based on AR5 Earth System Models (ESMs) ‘showing’ that the relationship between global temperatures and cumulative emissions was linear. At last the IPCC had something easy for world leaders to understand! This was all neatly summarised in Figure SPM-10, shown below. The Paris Accord is essentially derived from this one figure.

The problem though is that it wasn’t really true. After all, how could something so complex end up being a simple linear relationship? The difficulties arise once you try to work back from emissions data to CO2 concentrations. One only needs to look at Millar et al.’s earlier paper to see that there were underlying problems:

A second important feature of ESMs is the increase in airborne fraction (the fraction of emitted CO2 that remains in the atmosphere after a specified period) over time in scenarios involving substantial emissions or warming (Friedlingstein et al., 2006). An emergent feature of the CMIP5 full complexity ESMs appears to be that this increase in airborne fraction approximately cancels the logarithmic relationship between CO2 concentrations and radiative forcing, yielding an approximately linear relationship between cumulative CO2 emissions and CO2-induced warming.

However, their paper then found that AR5 carbon-cycle models could not actually reproduce the Mauna Loa CO2 concentrations from the historic emissions data. In short, they needed a lower and stable airborne fraction to reproduce historic CO2 levels. I found exactly the same problem when applying the Bern model (see Carbon Circular Logic).
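To make the airborne-fraction arithmetic concrete, here is a minimal sketch (my own illustration, not the Millar et al. analysis or the Bern model code). It converts an observed rise in CO2 concentration into retained carbon at roughly 2.13 GtC per ppm and divides by cumulative emissions; the numbers are round illustrative values.

```python
# Minimal sketch of the airborne-fraction check (illustrative round numbers,
# not the Millar et al. analysis). 1 ppm of atmospheric CO2 ~ 2.13 GtC.
PPM_TO_GTC = 2.13

def airborne_fraction(co2_start_ppm, co2_end_ppm, cumulative_emissions_gtc):
    """Fraction of emitted carbon that remained in the atmosphere."""
    retained_gtc = (co2_end_ppm - co2_start_ppm) * PPM_TO_GTC
    return retained_gtc / cumulative_emissions_gtc

# Example: pre-industrial ~280 ppm rising to ~400 ppm after roughly 600 GtC
# of cumulative emissions gives an airborne fraction near 0.43.
print(airborne_fraction(280.0, 400.0, 600.0))   # ~0.426
```

A model that instead assumes a higher, slowly increasing airborne fraction will overshoot the measured concentrations for the same emissions, which is the core of the problem described above.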

Their new 1.5C carbon budget paper then found that models driven by 2015 “cumulative emissions” were 0.3C warmer than reality (1.3C as opposed to 0.93C). Naturally this was then reported across the world’s media as evidence that climate models were running too hot and that global warming had been exaggerated. The climate science community reacted in horror, with a string of denials from the authors’ peers:

which all culminated in a climb-down by the authors: Authors respond to misinterpretations of their 1.5C carbon budget paper.

The basic underlying problem is that climate models are CO2-concentration based and not emissions based. ESMs are required to derive atmospheric CO2 concentrations from carbon emissions, and Millar et al. showed conclusively that they fail to do this. That is why the models appear warmer than reality for a given carbon budget: they run too hot against carbon emissions, not against CO2 concentrations.

The real result of the Millar et al. paper is that CMIP5 Earth System Models (ESMs) are not handling the carbon cycle correctly. For a given emissions scenario, ESMs predict too high an atmospheric concentration in any given year. This is because the models produce a higher airborne fraction than reality, and they assume it is slowly increasing (it isn’t).

The temperatures shown in the Millar et al. paper are all plotted against cumulative emissions and NOT against CO2 concentrations. It is perfectly true that the models are 0.3C warmer than measurements for the same cumulative emissions. If instead you compare models and temperature measurements against atmospheric CO2, the climate models appear to be doing fine, albeit still somewhat on the warm side.

This then explains why the authors find that the remaining carbon budget to reach 1.5C is now much larger: we need more emissions than AR5 predicted to bring CO2 levels up to where the models reach 1.5C. As a direct result, the remaining budget has risen from ~50 GtC to ~200 GtC, an increase of a factor of 4!


Calculating global temperature anomalies using three new methods.

Abstract

Deriving global temperature anomalies involves the surface averaging of normalized ocean and station temperature data that are inhomogeneously distributed in both space and time. Different groups have adopted different averaging schemes to deal with this problem. For example, GISS use approximately 8000 equal-area cells and interpolate to near-neighbour stations. Berkeley Earth (Robert Rohde, 2013) fit a temperature distribution to a 1-degree grid, while Hadcrut4 (Osborn & Jones, 2014) use regular binning onto a 5-degree grid. Cowtan and Way (Cowtan & Way, 2014) then attempt to correct Hadcrut4 for spatial bias by kriging results into sparse coverage regions, guided by satellite data. In this paper we look at alternative methods based on averaging over the 3D spheroidal surface of the earth. It is shown that this approach alone removes the spatial bias, thereby avoiding direct interpolation. A spherical triangulation method is described which additionally avoids binning completely by using each data point individually. Longer-term 3D averaging is investigated using an equal-area icosahedral spherical binning. New monthly and annual temperature series are presented for each new method, based on a) merging CRUTEM4 with HADSST3 (HADCRUT4.5), and b) merging GHCN V3C with HadSST3.

Introduction

Hadcrut4 (Osborn & Jones, 2014) is one of the most widely used global temperature records and was originally adopted by the IPCC as their official measure of climate change. It is based only on direct measurements and avoids any extrapolation into un-sampled regions. Hadcrut4 is based on two main sources: CRUTEM4 (Jones P.D., 2012), the land-based weather station data collected and processed by the Climatic Research Unit (CRU) at the University of East Anglia, and the sea surface temperature data HADSST3 (Kennedy J.J., 2011) processed by the Hadley Centre. An independent set of station data is also maintained by NOAA NCDC (Lawrimore, 2011). The latest set, V3C, has been similarly processed and combined with HADSST3 to compare results.

The algorithm used to calculate the Hadcrut4.5 global average temperature anomaly for each month uses a 5×5° grid in latitude and longitude. The baseline climatology used to derive the anomalies for each station is the set of 12 monthly average temperatures measured between 1961 and 1990. All station anomalies within the same 5×5° bin are then averaged together, as are SST values. The monthly global average is then the area-weighted (cos(lat)) average over all occupied bins (out of 2592 in total). The yearly average is simply the 12-month average.
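For readers who want to see the mechanics, here is a minimal sketch in Python (my own re-implementation of the scheme just described, not the Hadley Centre code), assuming arrays lon, lat of station/SST positions in degrees and anom of their monthly anomalies:

```python
import numpy as np

def hadcrut_style_average(lon, lat, anom):
    """Sketch of 5x5-degree binning: anomalies falling in the same bin are
    averaged, then occupied bins are combined with cos(lat) area weights."""
    nlat, nlon = 36, 72                            # 36 x 72 = 2592 bins
    sums = np.zeros((nlat, nlon))
    counts = np.zeros((nlat, nlon))
    i = np.clip(((lat + 90) // 5).astype(int), 0, nlat - 1)
    j = np.clip(((lon + 180) // 5).astype(int), 0, nlon - 1)
    np.add.at(sums, (i, j), anom)                  # accumulate bin sums
    np.add.at(counts, (i, j), 1)
    occupied = counts > 0
    bin_mean = np.where(occupied, sums / np.maximum(counts, 1), 0.0)
    # cos(latitude) weight at each bin centre
    lat_centres = np.radians(-87.5 + 5 * np.arange(nlat))
    w = np.cos(lat_centres)[:, None] * np.ones((1, nlon))
    return (bin_mean * w)[occupied].sum() / w[occupied].sum()
```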

In this study we look at alternative methods of calculating the global average, based only on measured data without any interpolation. Two of the methods use the 3D spherical surface of the earth rather than the usual 2D latitude-longitude projection. Direct triangulation methods have the advantage that they avoid binning altogether and use each individual station’s data. Finally we compare the monthly and annual averages from each method with those of Hadcrut4.5, and also with the interpolated (kriging) results of Cowtan & Way (Cowtan & Way, 2014).

Method 1. 2D Triangulation of measurements in (Lat,Lon)

The basic idea is to form a triangular mesh of all monthly measurements and then treat each triangle separately to avoid any binning.

IDL (Harris) is used to form a Delaunay triangulation in (lat,lon) of all locations with measurements recorded in a given month. The grid itself varies from one month to the next because the set of reporting stations changes. The algorithm used to calculate the global average is as follows:

  • Each triangle contains one measurement at each vertex. We use Heron’s formula to calculate the area of the triangle.
  • Calculate the centroid position and assign this an anomaly equal to the average of all 3 vertex values. This centroid value is then used as the average anomaly for the whole triangular area.
  • Use a spatial weighting for each triangle in the integration equal to cos(lat) × area, where lat is the latitude of the centroid of the triangle.

The global average is then

\bar{T} = \frac{\sum_i{T_i w_i}}{\sum_i{w_i}}, where w_i is the spatial weighting and T_i the centroid anomaly of triangle i.
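The post uses IDL for the triangulation; the following is an equivalent sketch in Python with scipy, under the same assumptions (inputs lon, lat in degrees and anom holding the monthly anomalies):

```python
import numpy as np
from scipy.spatial import Delaunay

def global_average_2d(lon, lat, anom):
    """Method 1 sketch: Delaunay triangulation in (lon, lat); each triangle
    is weighted by cos(lat_centroid) * area and carries the vertex-average
    anomaly at its centroid."""
    pts = np.column_stack([lon, lat])
    tri = Delaunay(pts)
    total_w = total_tw = 0.0
    for simplex in tri.simplices:
        p = pts[simplex]                      # 3 x 2 vertex coordinates
        # Heron's formula for the triangle area in (lon, lat) space
        a = np.linalg.norm(p[0] - p[1])
        b = np.linalg.norm(p[1] - p[2])
        c = np.linalg.norm(p[2] - p[0])
        s = 0.5 * (a + b + c)
        area = np.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))
        t_centroid = anom[simplex].mean()     # centroid anomaly = vertex mean
        w = np.cos(np.radians(p[:, 1].mean())) * area
        total_w += w
        total_tw += w * t_centroid
    return total_tw / total_w
```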

An example of such a triangulation grid is shown in Figure 1.

Figure 1: A 2D (lat,lon) triangulation mesh using all V3C station data and HadSST3 data for January 2015, showing intense triangulation in North America and Europe. The triangle edges obscure some of the colouring.

Method 2.  3D Spherical Triangulation

The second method extends the triangulation to 3 dimensions, which has the advantage of providing full coverage of the polar regions. Spherical triangulation over the earth’s surface gives an elegant spatial integration of irregular temperature data: it treats each measurement equally and covers the surface with a triangular mesh of station and SST nodes. Unlike the linear 2D triangulation described above, it spans the polar regions because it works on a 3D model. The triangulation returns the 3D Cartesian coordinates of each triangle vertex, and the temperature of each triangular area is set to the average of its three vertices. The global average is the area-weighted mean of all triangles. For the annual average we take the 12-month average of the global monthly averages, because the grid changes from one month to the next. We investigate how to solve this in the third method.
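A minimal sketch of this method, again in Python rather than IDL: it uses the standard fact that the convex hull of points on a unit sphere is their spherical Delaunay triangulation, and approximates each spherical triangle’s area by its flat-triangle area (adequate for a dense mesh):

```python
import numpy as np
from scipy.spatial import ConvexHull

def global_average_sphere(lon, lat, anom):
    """Method 2 sketch: spherical triangulation via the convex hull of
    the measurement positions mapped to unit vectors."""
    phi, lam = np.radians(lat), np.radians(lon)
    xyz = np.column_stack([np.cos(phi) * np.cos(lam),
                           np.cos(phi) * np.sin(lam),
                           np.sin(phi)])
    hull = ConvexHull(xyz)
    total_w = total_tw = 0.0
    for simplex in hull.simplices:
        v0, v1, v2 = xyz[simplex]
        # flat-triangle area as a proxy for the spherical area
        area = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0))
        t = anom[simplex].mean()          # triangle anomaly = vertex average
        total_w += area
        total_tw += area * t
    return total_tw / total_w
```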

Figure 2: Spherical triangulation of CRUTEM4 & HadSST3 (Hadcrut4.5) temperature data for February and March 2017.

Spherical triangulation essentially also solves the coverage-bias problem for Hadcrut4, without any need for interpolation or external satellite data. Figure 3 compares the spherical triangulation results with the Cowtan & Way data (Cowtan & Way, 2014). The agreement between the two is remarkable. This means that Hadcrut4 can already describe full global coverage, if it is treated in 3 dimensions instead of 2.

Figure 3: Comparison of recent yearly anomalies showing excellent agreement between spherical triangulation and Cowtan & Way.

Method 3. Icosahedral Grids

The last method addresses the problem of defining unbiased fixed spherical grids in 3D. All the major temperature series use latitude-longitude gridding to average spatial temperatures. Problems arise near the poles, because the bin size shrinks as cos(lat), and the pole itself is a singularity.

Spherical triangulation of monthly global temperatures is the most direct way to present the data, but it has one drawback: you cannot form annual or decadal averages directly on the triangular mesh, because the mesh changes from one month to the next. You can only average the monthly global averages together. To make such local averages you really need a fixed grid. Does that mean we have to give up and return to fixed 2D (lat,lon) grids such as those used by CRU, Berkeley, NOAA or NASA? The answer is no, because there is a neat method of defining fixed 3D grids that maintain spherical symmetry, based on subdividing an icosahedron (Ning Wang, 2011).

Figure 4: Subdividing an icosahedron to form a spherical triangular mesh.

We start with an icosahedron, a 20-sided solid whose faces are all equilateral triangles. Each triangle is divided into 4 equal triangles by connecting the midpoints of its edges, and the new points are projected outwards from the centre onto the surface of the unit sphere. The process is repeated n times to form a symmetric spherical mesh, as shown in Figure 4. It can be shown that a grid formed from an icosahedron in this way is the most accurate triangular representation of a sphere (ref).
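A sketch of the subdivision (the vertex and face tables are the standard icosahedron construction, not taken from the paper): each level multiplies the triangle count by 4, so level 4 gives 20 × 4⁴ = 5120 triangles, as used below.

```python
import numpy as np

def icosphere(n):
    """Subdivide an icosahedron n times, projecting new vertices onto the
    unit sphere; returns (vertices, triangle index array)."""
    g = (1 + np.sqrt(5)) / 2                      # golden ratio
    verts = [(-1, g, 0), (1, g, 0), (-1, -g, 0), (1, -g, 0),
             (0, -1, g), (0, 1, g), (0, -1, -g), (0, 1, -g),
             (g, 0, -1), (g, 0, 1), (-g, 0, -1), (-g, 0, 1)]
    verts = [np.array(v) / np.linalg.norm(v) for v in verts]
    faces = [(0,11,5),(0,5,1),(0,1,7),(0,7,10),(0,10,11),
             (1,5,9),(5,11,4),(11,10,2),(10,7,6),(7,1,8),
             (3,9,4),(3,4,2),(3,2,6),(3,6,8),(3,8,9),
             (4,9,5),(2,4,11),(6,2,10),(8,6,7),(9,8,1)]
    for _ in range(n):
        midpoint = {}
        def mid(i, j):
            key = (min(i, j), max(i, j))
            if key not in midpoint:
                m = verts[i] + verts[j]
                verts.append(m / np.linalg.norm(m))   # project onto sphere
                midpoint[key] = len(verts) - 1
            return midpoint[key]
        new_faces = []
        for a, b, c in faces:
            ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
            new_faces += [(a,ab,ca), (b,bc,ab), (c,ca,bc), (ab,bc,ca)]
        faces = new_faces
    return np.array(verts), np.array(faces)

verts, faces = icosphere(4)
assert len(faces) == 5120        # level-4 grid, matching the text
```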

A level-4 icosahedron with 5120 triangles provides the best match to the actual distribution of Hadcrut4 global temperature data. An example of such a mesh, centred on the North Pole, is shown in Figure 5.

Figure 5: Annual average (GHCN V3 & HADSST3) temperature anomalies for 2016 over the pole. Note the equal-area triangles covering the Arctic.

The global anomaly is calculated by binning the data onto the grid in exactly the same way as for the 5×5 degree grid. A station lies within a given triangle if the triangle’s edges encircle it exactly once (a winding number of one). All stations within a given triangle and within a given time interval are then averaged. Such averages can be formed monthly, annually or decadally on the fixed grid. The global average is simply the numerical average over all bins, with no need for area weighting because the bins all have the same area. Such grids allow annual and decadal averaging of temperature anomalies. Table 1 shows six successive decadal temperature averages calculated in this way using GHCNV3 and HADSST3, centered on Africa, while Table 2 shows the same decadal trends centered on the Arctic.
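As a sketch of the point-assignment step: instead of an explicit winding-number computation, an equivalent same-sign test on scalar triple products can be used for consistently oriented spherical triangles (my substitution for illustration, not necessarily the original code):

```python
import numpy as np

def triangle_of(point, verts, faces):
    """Assign a unit vector `point` to its spherical triangle. A point is
    inside a triangle when it lies on the same side of all three great-circle
    edges, i.e. the three scalar triple products share a sign. First match
    wins for points exactly on an edge."""
    for k, (a, b, c) in enumerate(faces):
        s1 = np.dot(point, np.cross(verts[a], verts[b]))
        s2 = np.dot(point, np.cross(verts[b], verts[c]))
        s3 = np.dot(point, np.cross(verts[c], verts[a]))
        if (s1 >= 0) == (s2 >= 0) == (s3 >= 0):
            return k
    return -1
```

The per-triangle bin averages then combine into the global mean with a simple unweighted average, since every triangle has the same area.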

 

Table 1: Decadal (10-year) averages of temperature anomalies for GHCNV3+HADSST3 on the icosahedral grid, normalized to a 1961-1990 baseline. Panels (Western Hemisphere): 1951-1960 (50s), 1961-1970 (60s), 1971-1980 (70s), 1981-1990 (80s), 1991-2000 (90s), 2001-2010 (2000s).

Table 2: The same decadal averages for Arctic regions (50s through 2000s).

The results show a clear decadal warming trend after 1980, especially in the Arctic. Spatial results can be viewed in 3D at http://clivebest.com/webgl/earth2016.html

Results

CRUTEM4 station data and HADSST3 SST data have been processed using the three methods described above, and the resulting global averages compared to the Hadcrut4.5 data. Of the 7830 stations, some 6857 have sufficient information to calculate anomalies and position them. A very small random perturbation has been added to the position of each SST data point in order to triangulate them; otherwise they would lie on exactly parallel latitude lines and the triangulation algorithm would fail. Example triangulations are shown in Figures 1 and 2.
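The perturbation step might look like this minimal helper (the 0.01-degree scale is my assumption; the text only says “very small”):

```python
import numpy as np

def jitter(coords_deg, scale=0.01, seed=0):
    """Add a tiny random perturbation (in degrees) to break the exact
    collinearity of gridded SST positions before triangulation."""
    rng = np.random.default_rng(seed)
    return coords_deg + rng.uniform(-scale, scale, np.shape(coords_deg))
```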

A comparison of the annual values calculated by all 3 methods to the quoted Hadcrut4.5 values is shown in Figure 6.

Figure 6: Comparison with Hadcrut4.5. Triangulation is Method 1, Spherical is Method 2 and Icosahedron is Method 3. Deltas are the difference (method − H4).

If instead of CRUTEM4 we use the NOAA GHCN V3C station data we get almost the same result. There are 300 more stations in V3C and, although there is a large overlap, the homogenisation procedures are different. Figure 7 shows the comparison between the two, plotted on the 1961-1990 baseline.

Figure 7: Comparison between GHCN V3C based annual averages and CRUTEM4 based annual averages. Both station samples are combined with HADSST3.

In general the agreement between all three averaging methods is good. However, there is a systematic difference after ~2000 between the Hadcrut4.5 binning and the triangulation methods, due to the different treatment of the polar regions. The spherical triangulation gives slightly higher net annual values than Hadcrut4.5 (up to 0.05C) after 2000.

The monthly comparison is shown in Figure 8, excluding the icosahedral binning. The agreement is good but the same systematic effects are observed for the same reasons.

The most interesting result is that spherical triangulation reproduces the Cowtan and Way result almost exactly. This shows that triangulation alone resolves the Hadcrut4 coverage biases in an internally consistent manner, without the need for interpolation or bin averaging.

Figure 8: Comparison of monthly anomalies. Black is spherical triangulation, red is (lat,lon) triangulation and blue is Hadcrut4.5. Deltas show the same effect as the annual data.

Another advantage of triangulation is that land-ocean borders are respected in a consistent way. There is always a problem when averaging across a grid cell containing both land and ocean data; ideally the average should be weighted by the fraction of land/ocean within the cell. Triangulation avoids this problem because all measurements contribute equally irrespective of location.

Conclusions

We propose that for monthly averages the spherical triangulation method is the most accurate, because it does the best job of covering the poles. This conclusion is supported by the observation that it reproduces Cowtan & Way’s result without the need for satellite data or kriging. The drawback of triangulation is that the grid is forever changing with time, making spatial averaging difficult. For this reason icosahedral grids are proposed as optimum for annual or decadal averaging, although the differences relative to latitude-longitude gridding are small. The traditional Hadcrut4.5 5×5 degree (lat,lon) binning is the most direct method, but it slightly underestimates recent trends, which are concentrated in the Arctic. This is also partly because (lat,lon) cells can never span the poles.

All relevant software can be found at http://clivebest.com/data/t-study

Acknowledgement

Useful discussion and advice from Dr. Nick Stokes https://moyhu.blogspot.co.uk/

References

Cowtan, K., & Way, R. G. (2014). Coverage bias in the HadCRUT4 temperature series and its impact on recent temperature trends. Q.J.R. Meteorol. Soc. , 140, 1935–1944.

Jones P.D., L. D. (2012). Hemispheric and large-scale land surface air temperature variations: an extensive revision and an update to 2010. Journal of Geophysical Research , 117.

Kennedy J.J., R. N. (2011). Reassessing biases and other uncertainties in sea-surface temperature observations since 1850, parts 1 & 2. J. Geophys. Res., 116.

Lawrimore, J. H. (2011). An overview of the Global Historical Climatology Network monthly mean temperature data set, version 3. J. Geophys. Res. , 116.

Ning Wang, J.-L. L. (2011). Geometric properties of the icosahedral-hexagonal grid on the two-sphere. SIAM J. Sci. Comput., 33, 2536-2559.

Osborn, T., & Jones, P. (2014). The CRUTEM4 land-surface air temperature data set: construction, previous versions and dissemination via Google Earth. Earth System Science Data , 6, 61-68.

Robert Rohde, R. A. (2013). Berkeley Earth Temperature Averaging Process. Geoinfor Geostat .

Note: I wrote this work up as a paper for possible publication. However journal charges have become so ridiculously expensive for any individual that I will simply post it here instead.

 


Global absolute temperature

Is there such a thing as a global absolute temperature of the earth’s surface? The temperature at any point on the earth’s surface is forever changing, hour to hour, night to day and with the seasons. A global average temperature Tgl is theoretically the area average of all local temperatures integrated over the earth’s surface. I claim that Tgl can really only be defined at one instant in time (at most one day). In 1993 I was asked to process about twenty 9-track magnetic tapes containing an archive of ECMWF daily forecast data in GRIB format and then write the results to an optical jukebox. Having finally succeeded, I decided to calculate the ECMWF global average temperatures. These were published 23 years ago in GRL. Today such archives of weather forecast data are called reanalyses.

From: ‘Observation of a monthly variation in global surface temperature data’, C. Best, GRL, Nov 1994.

As the earth rotates, different fractions of land and ocean are illuminated by the Sun, and the earth’s albedo changes. The highest heating of the earth’s surface is probably at midday over the Pacific Ocean. The absorbed heat is then dispersed through the atmosphere (weather) and by ocean currents. Over land the albedo has been changing over thousands of years due to human activity. Deforestation, agriculture, drainage and urbanisation alter local albedo and weather, while recently man has also increased CO2 levels. The absolute temperature of the earth’s surface is therefore always changing. Can long-term trends be measured directly?

The average daily temperature (Tav) is calculated from the maximum (Tmax) and minimum (Tmin) recorded temperatures at each station: Tav = (Tmax + Tmin)/2. Likewise, the long-term temperature series used for climate studies calculate monthly averages in the same way, where Tmax and Tmin are now the extreme temperatures for the month. These monthly values also vary from year to year due to fluctuations in the weather. The average seasonal variation at a station, calculated over a given 30-year period, defines its ‘normal’ values; the 30-year period is called the baseline.

Now suppose you simply calculate the global average temperature for one month or one year. This is fairly easy to do by performing an area-weighted average of Tav over the earth’s surface and then averaging over the year. Here is the result for land-based stations in CRUTEM3.

Globally averaged temperatures based on CRUTEM3 Station Data

There are obviously some problems here. For example, the temperature appears to jump up in 1950, but this is simply because a lot of new stations were suddenly added that year. This demonstrates that there is always a spatial bias due to where measurements happen to be available, and this bias gets worse the further back in time you go. Before 1860, weather stations were mostly confined to Europe and the US, while ocean temperature data were confined to a few shipping lanes.

Stations with data back before 1860

So we can’t really measure absolute global temperatures directly much before the satellite era, roughly 1980.

The answer to this problem is to use temperature anomalies instead. Anomalies for a given station are defined relative to its monthly ‘normal’ temperatures over a 30-year period: CRU use 1961-1990, GISS use 1951-1980 and NCDC use the whole 20th century. The temperature ‘anomaly’ for each month is then Tav − Tnorm. Any sampling bias has not really disappeared but has instead been mostly subtracted out. There is still the underlying assumption that all stations react in synchrony to warming (or cooling), as do their near neighbours. In addition it assumes that areas of the world with zero coverage behave similarly to those with good coverage. It seems that Guy Callendar was the first person to use temperature anomalies for this purpose, back in 1938.
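As a sketch of the anomaly calculation for a single station (my own illustration; the array layout is an assumption):

```python
import numpy as np

def monthly_anomalies(tav, years, base=(1961, 1990)):
    """tav is an (n_years, 12) array of monthly mean temperatures for one
    station. The normals are the 12 monthly averages over the baseline
    period; the anomaly for each month is then Tav - Tnorm."""
    in_base = (years >= base[0]) & (years <= base[1])
    tnorm = np.nanmean(tav[in_base], axis=0)   # 12 'normal' values
    return tav - tnorm                          # broadcast across all years
```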

So the conclusion is that you can’t measure a global temperature directly, and even if you could it would be changing on a daily and even hourly basis. The only thing you can measure is a global average temperature ‘anomaly’. Spatial biases are reduced but not fully eliminated, and there remains an overall assumption of regional homogeneity. So when you hear that global temperatures have risen by 1C, it really means that the global average anomaly has risen by 1C. For any given month the temperature where you live could even be colder than ‘normal’: Europe, for example, was colder than normal in October 2015, despite 2015 itself being the ‘warmest’ anomaly year on record.

H4 Temperature anomalies for October 2015 using new temperature colour scale.

This post was prompted by Gavin Schmidt: Observations, Reanalyses and the Elusive Absolute Global Mean Temperature
