Calculating global temperature anomalies using three new methods.

Abstract

Deriving global temperature anomalies involves the surface averaging of normalized ocean and station temperature data that are inhomogeneously distributed in both space and time. Different groups have adopted different averaging schemes to deal with this problem. For example, GISS use approximately 8000 equal-area cells and interpolate to near-neighbour stations. Berkeley Earth (Robert Rohde, 2013) fit a temperature distribution to a 1-degree grid, while Hadcrut4 (Osborn & Jones, 2014) use regular binning onto a 5-degree grid. Cowtan and Way (Cowtan & Way, 2014) then attempt to correct Hadcrut4 for spatial bias by kriging results into sparsely covered regions, guided by satellite data. In this paper we look at alternative methods, based on averaging over the 3D spherical surface of the earth. It is shown that this approach alone removes the spatial bias, thereby avoiding direct interpolation. A spherical triangulation method is described which has the additional benefit of avoiding binning completely, since it uses each data point individually. Longer-term 3D averaging is investigated using an equal-area icosahedral spherical binning. New monthly and annual temperature series are presented for each new method, based on a) merging CRUTEM4 with HadSST3 (Hadcrut4.5), and b) merging GHCN V3C with HadSST3.

Introduction

Hadcrut4 (Osborn & Jones, 2014) is one of the most widely used global temperature records and was originally adopted by the IPCC as its official measure of climate change. It is based only on direct measurements and avoids any extrapolation into un-sampled regions. Hadcrut4 combines two main sources. CRUTEM4 (Jones P.D., 2012) covers land-based weather station data collected and processed by the Climatic Research Unit (CRU) at the University of East Anglia, and the second source is the sea surface temperature dataset HadSST3 (Kennedy J.J., 2011) processed by the Hadley Centre. An independent set of station data is also maintained by NOAA NCDC (Lawrimore, 2011). Its latest release, V3C, has been processed in the same way and combined with HadSST3 to compare results.

The algorithm used to calculate the Hadcrut4.5 global average temperature anomaly for each month uses a 5×5° grid in latitude and longitude. The baseline climatology used to derive the anomalies for each station is the set of 12 monthly average temperatures measured between 1961 and 1990. All station anomalies falling within the same 5×5° bin are then averaged together, as are SST values. The monthly global average is the area-weighted average over all occupied bins (out of 2592 in total), with each bin weighted by cos(latitude). The yearly average is simply the average of the 12 monthly values.
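As a concrete illustration, the occupied-bin average reduces to a masked, cos(latitude)-weighted mean. The following Python sketch (my own, with hypothetical array names — not the Hadley Centre code) shows the idea:

```python
import numpy as np

def grid_average(anom_grid, lats):
    """Global mean anomaly from a 5x5 degree grid.
    anom_grid: (36, 72) array of bin-mean anomalies, NaN where a bin
    has no data; lats: (36,) bin-centre latitudes (-87.5 ... 87.5)."""
    w = np.cos(np.radians(lats))[:, None] * np.ones_like(anom_grid)
    occupied = ~np.isnan(anom_grid)          # only occupied bins count
    return np.sum(anom_grid[occupied] * w[occupied]) / np.sum(w[occupied])
```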

In this study we look at alternative methods to calculate the global average, based only on measured data without any interpolation. Two of the methods use the 3D spherical surface of the earth rather than the usual 2D latitude-longitude projection. Direct triangulation has the advantage that it avoids binning altogether and uses each station's data individually. Finally, we compare the monthly and annual averages from each method with those of Hadcrut4.5, and also with the interpolated (kriging) results of Cowtan & Way (Cowtan & Way, 2014).

Method 1. 2D Triangulation of measurements in (Lat,Lon)

The basic idea is to form a triangular mesh of all monthly measurements and then treat each triangle separately to avoid any binning.

IDL (Harris) is used to form a Delaunay triangulation in (lat,lon) of all locations with measurements recorded in a given month. The grid itself varies from one month to the next because the set of reporting stations changes. The algorithm used to calculate the global average is as follows:

  • Each triangle contains one measurement at each vertex. We use Heron’s formula to calculate the area of the triangle.
  • Calculate the centroid position and assign this an anomaly equal to the average of all 3 vertex values. This centroid value is then used as the average anomaly for the whole triangular area.
  • Use a spatial weighting for each triangle in the integration equal to cos(lat) × area, where lat is the latitude of the triangle's centroid.

The global average is then

\bar{T} = \frac{\sum_i T_i \, w_i}{\sum_i w_i}, where T_i is the centroid anomaly and w_i is the spatial weighting of triangle i.
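For readers who prefer code to prose, here is a minimal Python sketch of Method 1, using scipy's Delaunay routine in place of the IDL one actually used; the function and variable names are mine:

```python
import numpy as np
from scipy.spatial import Delaunay

def global_mean_2d(lon, lat, anomaly):
    """Method 1 sketch: Delaunay triangulation in (lon, lat),
    Heron's formula for triangle areas, cos(latitude) weighting."""
    pts = np.column_stack([lon, lat])
    tri = Delaunay(pts)            # triangle vertex indices, (ntri, 3)
    v = pts[tri.simplices]         # vertex coordinates, (ntri, 3, 2)
    # Heron's formula from the three side lengths of each triangle
    a = np.linalg.norm(v[:, 0] - v[:, 1], axis=1)
    b = np.linalg.norm(v[:, 1] - v[:, 2], axis=1)
    c = np.linalg.norm(v[:, 2] - v[:, 0], axis=1)
    s = 0.5 * (a + b + c)
    area = np.sqrt(np.maximum(s * (s - a) * (s - b) * (s - c), 0.0))
    # centroid anomaly = mean of the three vertex anomalies
    t_cen = anomaly[tri.simplices].mean(axis=1)
    lat_cen = v[:, :, 1].mean(axis=1)
    w = area * np.cos(np.radians(lat_cen))
    return np.sum(t_cen * w) / np.sum(w)
```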

An example of such a triangulation grid is shown in Figure 1.

Figure 1: A 2D (lat,lon) triangulation mesh using all V3C station data and HadSST3 data for January 2015, showing intense triangulation in North America and Europe. The triangle edges obscure some of the colouring.

Method 2. 3D Spherical Triangulation

The second method extends the triangulation to three dimensions, which has the advantage of providing full coverage of the polar regions. It is an elegant method for spatial integration of irregular temperature data: the earth's surface is covered with a triangular mesh whose nodes are the station and SST measurements, each treated equally. Unlike the linear 2D triangulation described above, spherical triangulation spans the polar regions because it works on a 3D model of the earth. The triangulation returns the 3D Cartesian coordinates of each triangle vertex, and the temperature of each triangular area is set to the average of its three vertex values. The global average is the area-weighted mean over all triangles. The annual average is the mean of the 12 monthly global averages, because the grid changes from one month to the next; the third method investigates how to avoid this.
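One convenient way to construct such a mesh is to exploit the fact that, for points lying on a sphere, the convex hull of their 3D Cartesian coordinates is exactly the spherical Delaunay triangulation. A sketch under that assumption (my own construction, not the IDL code used for the figures; triangle areas come from the Van Oosterom–Strackee solid-angle formula):

```python
import numpy as np
from scipy.spatial import ConvexHull

def lonlat_to_xyz(lon, lat):
    """Unit-sphere Cartesian coordinates from degrees."""
    phi, lam = np.radians(lat), np.radians(lon)
    return np.column_stack([np.cos(phi) * np.cos(lam),
                            np.cos(phi) * np.sin(lam),
                            np.sin(phi)])

def global_mean_sphere(lon, lat, anomaly):
    """Method 2 sketch: the convex hull of points on the unit sphere
    is the spherical Delaunay triangulation; each triangle is weighted
    by its solid angle (its true spherical area)."""
    xyz = lonlat_to_xyz(lon, lat)
    hull = ConvexHull(xyz)
    a, b, c = (xyz[hull.simplices[:, i]] for i in range(3))
    num = np.abs(np.einsum('ij,ij->i', a, np.cross(b, c)))
    den = (1 + np.einsum('ij,ij->i', a, b)
             + np.einsum('ij,ij->i', b, c)
             + np.einsum('ij,ij->i', c, a))
    w = 2.0 * np.arctan2(num, den)          # spherical triangle area
    t_cen = anomaly[hull.simplices].mean(axis=1)
    return np.sum(t_cen * w) / np.sum(w)
```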

Figure 2: Spherical triangulation of CRUTEM4 & HadSST3 (Hadcrut4.5) temperature data for February and March 2017.

Spherical triangulation essentially solves the coverage-bias problem for Hadcrut4 without any need for interpolation or external satellite data. Figure 3 shows a comparison of the spherical triangulation results with the Cowtan & Way data (Cowtan & Way, 2014). The agreement between the two is remarkable. This means that Hadcrut4 can already deliver full global coverage if it is treated in three dimensions instead of two.

Figure 3. Comparison of recent yearly anomalies showing excellent agreement between Spherical Triangulation and Cowtan & Way.

Method 3. Icosahedral Grids

The last method addresses the problem of defining unbiased fixed spherical grids in 3D. All the major temperature series use latitude-longitude gridding to average spatial temperatures. Problems arise near the poles, because bin areas shrink as cos(lat) and the pole itself is a singularity.

Spherical triangulation of monthly global temperatures is the most direct way to present temperature data, but it has one drawback: you cannot form annual or decadal averages directly on the triangular mesh, because the mesh changes from one month to the next. You can only average the monthly global averages together. To make such local averages you really need a fixed grid. So does that mean we have to give up and return to fixed 2D (lat,lon) grids such as those used by CRU, Berkeley, NOAA or NASA? The answer is no, because there is a neat method of defining fixed 3D grids that maintain spherical symmetry. It is based on subdividing an icosahedron (Ning Wang, 2011).

Figure 4: Subdividing an icosahedron to form a spherical triangular mesh.

We start with a 20-sided icosahedron, a 3D solid all of whose faces are equilateral triangles. Each triangle is divided into 4 equal triangles by connecting the edge midpoints, and the new points are then projected outwards from the centre to lie on the surface of a unit sphere. The process is repeated n times to form a symmetric spherical mesh, as shown in Figure 4. It can be shown that such a grid formed from an icosahedron is the most accurate representation of a sphere (ref).
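A sketch of the subdivision step in Python (the base icosahedron vertex and face lists are the standard "icosphere" construction, not taken from this paper):

```python
import numpy as np

PHI = (1 + 5 ** 0.5) / 2     # golden ratio defines the 12 vertices
ICO_VERTS = [v / np.linalg.norm(v) for v in np.array(
    [(-1, PHI, 0), (1, PHI, 0), (-1, -PHI, 0), (1, -PHI, 0),
     (0, -1, PHI), (0, 1, PHI), (0, -1, -PHI), (0, 1, -PHI),
     (PHI, 0, -1), (PHI, 0, 1), (-PHI, 0, -1), (-PHI, 0, 1)], float)]
ICO_FACES = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
             (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
             (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
             (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]

def subdivide(verts, faces):
    """One refinement step: split each triangle into 4 via edge
    midpoints, projecting the new points onto the unit sphere."""
    verts, cache = list(verts), {}
    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in cache:
            m = (verts[i] + verts[j]) / 2.0
            verts.append(m / np.linalg.norm(m))
            cache[key] = len(verts) - 1
        return cache[key]
    out = []
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return verts, out

verts, faces = ICO_VERTS, ICO_FACES
for _ in range(4):           # level 4: 20 * 4**4 = 5120 triangles
    verts, faces = subdivide(verts, faces)
```

Each pass quadruples the face count, so n passes give 20·4^n triangles; four passes give the 5120-triangle level-4 grid used below.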

A level-4 icosahedron with 5120 triangles provides the best match to the actual distribution of Hadcrut4 global temperature data. An example of such a mesh, centered on the North Pole, is shown in Figure 5.

Figure 5: Annual average (GHCN V3 & HadSST3) temperature anomalies for 2016 over the North Pole. Note the equal-area triangles covering the Arctic.

The global anomaly is calculated by binning the data onto the grid in exactly the same way as for the 5×5 degree grid. A station is assigned to a given triangle using the winding-number test: the triangle's edges wind exactly once around any point lying inside it (an equivalent same-side test is sketched below). All stations within a given triangle and within a given time interval are then averaged. Such averages can be formed monthly, annually or per decade on the fixed grid. The global average is simply the numerical average over all bins, without any need for area weighting because the bins are all of essentially the same area. Such grids therefore allow annual and decadal averaging of temperature anomalies. Table 1 shows six successive decadal temperature averages calculated in this way using GHCNV3 and HadSST3, centered on the Western Hemisphere, while Table 2 shows the same decadal trends centered on the Arctic.
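For unit vectors on the sphere, the winding-number test is equivalent to checking that the point lies on the inner side of the three great-circle planes through the triangle's edges. A minimal sketch, assuming consistently (anticlockwise) ordered vertices:

```python
import numpy as np

def in_spherical_triangle(p, a, b, c):
    """p, a, b, c are unit 3-vectors. p lies inside the spherical
    triangle (a, b, c) if it is on the same side of the three
    great-circle planes through the edges; assumes the vertices are
    ordered anticlockwise when seen from outside the sphere."""
    return (np.dot(np.cross(a, b), p) >= 0 and
            np.dot(np.cross(b, c), p) >= 0 and
            np.dot(np.cross(c, a), p) >= 0)
```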

 

Table 1: Decadal averages of temperature anomalies for GHCNV3 + HadSST3 on an icosahedral grid, normalized to a 1961-1990 baseline. The six panels, centered on the Western Hemisphere, show successive decades: 1951-1960 (50s), 1961-1970 (60s), 1971-1980 (70s), 1981-1990 (80s), 1991-2000 (90s) and 2001-2010 (2000s).
Table 2: The same decadal averages (1950s through 2000s) centered on the Arctic.

The results show a clear decadal warming trend after 1980, especially in the Arctic. The spatial results can be viewed in 3D at http://clivebest.com/webgl/earth2016.html

Results

CRUTEM4 station data and HadSST3 SST data have been processed using the three methods described above, and the resulting global averages compared to the Hadcrut4.5 data. Of the 7830 stations, some 6857 have sufficient information to calculate anomalies and to fix their positions. A very small random perturbation has been added to the position of each SST data point in order to triangulate them; otherwise they would lie on exactly parallel lines of latitude and the triangulation algorithm fails. Example triangulations are shown in Figures 1 and 2.
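The perturbation need only break the exact collinearity of the gridded SST positions. Something like the following suffices (the 0.01° scale is my assumption; the size actually used is not stated):

```python
import numpy as np

def jitter(coords_deg, scale=0.01, seed=42):
    """Add a tiny uniform offset (assumed 0.01 degrees) so SST grid
    points no longer lie on exactly parallel latitude lines, which
    would make the Delaunay triangulation fail."""
    rng = np.random.default_rng(seed)
    return coords_deg + rng.uniform(-scale, scale, np.shape(coords_deg))
```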

A comparison of the annual values calculated by all 3 methods to the quoted Hadcrut4.5 values is shown in Figure 6.

Figure 6: Annual anomalies compared with Hadcrut4.5. "Triangulation" is Method 1, "Spherical" is Method 2 and "Icosahedron" is Method 3. The deltas are the differences (method − Hadcrut4.5).

If instead of CRUTEM4 we use the NOAA GHCN V3C station data we get almost the same result. There are about 300 more stations in V3C and, although there is a large overlap, the homogenisation procedures are different. Figure 7 shows the comparison between the two, plotted on the 1961-1990 baseline.

Figure 7: Comparison between GHCN V3C based and CRUTEM4 based annual averages. Both station samples are combined with HadSST3.

In general the agreement between all three averaging methods is good. However, there is a systematic difference after ~2000 between the Hadcrut4.5 binning and the triangulation methods, due to the different treatment of the polar regions. The spherical triangulation gives slightly higher net annual values than Hadcrut4.5 (by up to 0.05C) after 2000.

The monthly comparison is shown in Figure 8, excluding the icosahedral binning. The agreement is good but the same systematic effects are observed for the same reasons.

The most interesting result is that spherical triangulation reproduces the Cowtan and Way result almost exactly. This shows that triangulation alone resolves the Hadcrut4 coverage biases in an internally consistent manner, without any need for interpolation or bin averaging.

Figure 8: Comparison of monthly anomalies. Black is spherical triangulation, red is (lat,lon) triangulation and blue is Hadcrut4.5. The deltas show the same effect as the annual data.

Another advantage of triangulation is that land-ocean borders are respected in a consistent way. There is always a problem when averaging across a grid cell containing both land and ocean data: ideally the average should be weighted by the fraction of land and ocean within the cell. Triangulation avoids this problem because all measurements contribute equally, irrespective of location.

Conclusions

We propose that for monthly averages the spherical triangulation method is the most accurate, because it does the best job of covering the poles. This conclusion is supported by the observation that it reproduces Cowtan & Way's result without the need for satellite data or kriging. The drawback of triangulation is that the grid changes continuously with time, making local spatial averaging difficult. For this reason icosahedral grids are proposed as the optimum for annual or decadal averaging, although the differences relative to latitude-longitude gridding are small. The traditional Hadcrut4.5 5×5 degree (lat,lon) binning is the most direct method, but it slightly underestimates recent trends, which are concentrated in the Arctic. This is also partly because (lat,lon) cells can never span the poles.

All relevant software can be found at http://clivebest.com/data/t-study

Acknowledgement

Useful discussion and advice from Dr. Nick Stokes (https://moyhu.blogspot.co.uk/).

References

Cowtan, K., & Way, R. G. (2014). Coverage bias in the HadCRUT4 temperature series and its impact on recent temperature trends. Q. J. R. Meteorol. Soc., 140, 1935-1944.

Jones, P. D., et al. (2012). Hemispheric and large-scale land-surface air temperature variations: an extensive revision and an update to 2010. J. Geophys. Res., 117.

Kennedy, J. J., et al. (2011). Reassessing biases and other uncertainties in sea-surface temperature observations since 1850, Parts 1 & 2. J. Geophys. Res., 116.

Lawrimore, J. H., et al. (2011). An overview of the Global Historical Climatology Network monthly mean temperature data set, version 3. J. Geophys. Res., 116.

Wang, N., & Lee, J.-L. (2011). Geometric properties of the icosahedral-hexagonal grid on the two-sphere. SIAM J. Sci. Comput., 33, 2536-2559.

Osborn, T., & Jones, P. (2014). The CRUTEM4 land-surface air temperature data set: construction, previous versions and dissemination via Google Earth. Earth System Science Data, 6, 61-68.

Rohde, R., et al. (2013). Berkeley Earth temperature averaging process. Geoinformatics & Geostatistics: An Overview.

Note: I wrote this work up as a paper for possible publication. However, journal charges have become so ridiculously expensive for any individual that I will simply post it here instead.

 


Global absolute temperature

Is there such a thing as a global absolute temperature of the earth's surface? The temperature at any point on the earth's surface is forever changing: hour to hour, night to day, and with the seasons. A global average temperature Tgl is theoretically the area average of all local temperatures integrated over the earth's surface. I claim that Tgl can really only be defined at one instant in time (at most over one day). In 1993 I was asked to process about twenty 9-track magnetic tapes containing an archive of ECMWF daily forecast data in GRIB format and then write the results to an optical jukebox. Having finally succeeded, I decided to calculate the ECMWF global average temperatures. These were published 23 years ago in GRL. Today such archives of weather forecast data are called reanalyses.

From: 'Observation of a monthly variation in global surface temperature data', C. Best, GRL, Nov 1994.

As the earth rotates, different fractions of land and ocean are illuminated by the Sun and the earth's albedo changes. The highest heating of the earth's surface is probably at midday over the Pacific Ocean. The absorbed heat is then dispersed through the atmosphere (weather) and by ocean currents. Over land the albedo has been changing for thousands of years due to human activity. Deforestation, agriculture, drainage and urbanisation alter local albedo and weather, while recently man has also increased CO2 levels. Therefore the absolute temperature of the earth's surface is always changing. Can long-term trends be measured directly?

The average daily temperature (Tav) is calculated from the maximum (Tmax) and minimum (Tmin) temperatures recorded at each station: Tav = (Tmax + Tmin)/2. Likewise, the long-term temperature series used for climate studies calculate monthly averages in the same way, where Tmax and Tmin are now the extreme temperatures for a given month. These monthly values vary from year to year due to fluctuations in the weather. The average seasonal variation at a station, calculated over a given 30-year period, defines its 'normal' values; the 30-year period is called the baseline.

Now suppose you simply calculate the global average temperature for one month or one year. This is fairly easy to do by performing an area-weighted average of Tav over the earth's surface and then averaging over the year. Here is the result for land-based stations in CRUTEM3.

Globally averaged temperatures based on CRUTEM3 Station Data

There are obviously some problems here. For example, the temperature appears to jump up in 1950, but this is simply because a lot of new stations were suddenly added that year. This demonstrates that there is always a spatial bias arising from where measurements happen to be available, and this bias gets worse the further back in time you go. Before 1860, weather stations were mostly confined to Europe and the US, while ocean temperature data were confined to a few shipping lanes.

Stations with data back before 1860

So we can't really measure global absolute temperatures directly much before the satellite era (~1980).

The answer to this problem is to use temperature anomalies instead. Anomalies for a given station are defined relative to its monthly 'normal' temperatures over a 30-year baseline: CRU use 1961-1990, GISS use 1951-1980 and NCDC use the whole 20th century. The temperature 'anomaly' for each month is then Tav − Tnorm. Any sampling bias has not really disappeared, but has instead been mostly subtracted out. There is still the underlying assumption that all stations react in synchrony to warming (or cooling), as do their near neighbours. In addition, it assumes that areas of the world with zero coverage behave like those with good coverage. It seems that Guy Callendar was the first person to use temperature anomalies for this purpose, back in 1938.
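In code the anomaly calculation for one station is only a few lines. A sketch with hypothetical array names:

```python
import numpy as np

def station_anomalies(tmax, tmin, years, base=(1961, 1990)):
    """tmax, tmin: (n_years, 12) arrays of monthly extremes for one
    station. Tav = (Tmax + Tmin)/2, and the anomaly is Tav minus the
    baseline monthly 'normal' for each calendar month."""
    tav = (tmax + tmin) / 2.0
    in_base = (years >= base[0]) & (years <= base[1])
    tnorm = np.nanmean(tav[in_base], axis=0)   # 12 monthly normals
    return tav - tnorm                         # anomaly = Tav - Tnorm
```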

So the conclusion is that you can't measure a global temperature directly, and even if you could it would be changing on a daily, even hourly, basis. The only thing you can measure is a global average temperature 'anomaly'. Spatial biases are reduced but not fully eliminated, and there remains an overall assumption of regional homogeneity. So when you hear that global temperatures have risen by 1C, it really means that the global average anomaly has risen by 1C. For any given month, the temperature where you live could even be colder than 'normal': Europe, for example, was colder than normal in October 2015, despite 2015 itself being the 'warmest' anomaly year on record.

H4 temperature anomalies for October 2015, using a new temperature colour scale.

This post was prompted by Gavin Schmidt: Observations, Reanalyses and the Elusive Absolute Global Mean Temperature.


“Unprecedented” Rainfall?

A recent Met Office press release, which was taken up by the BBC and most national newspapers, claimed the following:

“New innovative research has found that for England and Wales there is a 1 in 3 chance of a new monthly rainfall record in at least one region each winter (Oct-Mar).”

The paper, published in Nature Communications, is called "High risk of unprecedented UK rainfall in the current climate" and is based on model simulations rather than on real measurements of rainfall in regions of the UK. Can this really be true? The result is based on an ensemble of model runs at ~400 ppm CO2, in which regional winter precipitation is generated stochastically, based on an assumed dependence of temperature and humidity on CO2 levels.

To judge these results we first need to understand exactly what "1 in 3" really means. Secondly, if such an increased risk has really occurred, should it not already be evident in the data?

So in this post we look at the measured monthly rainfall data. If the above claim is true then we should observe a skewed distribution of monthly record rainfalls towards recent times.

There are 5 regions in England, and the monthly rainfall averages all start in 1873. This means there are 144 completed years to consider up to July 2017.

In a stationary climate, the probability that year n sets a new rainfall record goes as 1/n: 100% in year 1, 50% in year 2, 33% in year 3, and so on. Therefore the probability of a new record for one month and one region in the 145th year (2017) is 1/145, or 0.7%.

The paper defines 'winter' as 6 months (Oct, Nov, Dec, Jan, Feb, Mar). So the random probability of a record in any month and any region in 2017 is about 21%, or roughly 1 in 5 (6 × 5 × 0.7%). (Richard Allan says the study is actually based on just 4 regions; if so, the random probability is 17%, but this does not affect the argument.) A quick check of this arithmetic is sketched below.
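The calculation, together with the "exact" version that treats the 30 month-region series as independent (an assumption), is trivial to verify:

```python
n_years = 145                 # 1873-2017 inclusive
n_series = 6 * 5              # 6 winter months x 5 regions
p_one = 1 / n_years           # one month, one region: ~0.7%
p_linear = n_series * p_one   # ~0.21 -- the "1 in 5" figure
p_exact = 1 - (1 - p_one) ** n_series   # ~0.19, if series are independent
print(f"{p_one:.3f} {p_linear:.3f} {p_exact:.3f}")
```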

The claim is that climate change has increased the probability of a new record in any year to 1 in 3; in other words, that the chance has increased by roughly 50%. If that is true then it should be possible to check for it in the data itself, because the probability must have been increasing steadily with CO2, or the claim is not supported. It cannot suddenly step up in 2017.

So let's look at the Met Office's own data for regional rainfall to see if there is any evidence of an increase in records with time. We take 1960 (based on Hadcrut4) as a marker for the onset of CO2-induced warming. The plots below show the rainfall distributions for each of the 6 months in the 5 regions of England. Monthly record rainfalls are highlighted by circles.

1) Monthly rainfall data for South East England. Record rainfalls for each month are highlighted. Units are mm of rain (not inches!)

There is one record occurring after 1960 for South East England (Storm Jan 2014).

2) South West region. Units are mm of rain (not inches!)

Again we have one record after 1960 corresponding to the storms of 2013/14.

3) Central England. Units are mm of rain (not inches!)

For Central England there is only one record (February 1983) after 1960.

4) North West England. Units are mm of rain (not inches!)

For North West England there are 4 records after 1960, with December 2013 coincident with the 2013/14 winter storms.

5) North East England. Units are mm of rain (not inches!)

There are no records at all after 1960 for North East England.

There are 30 available rainfall records (6 months × 5 regions). 22 of these records occur before 1960 and 8 of them occur after 1960.

In a stationary climate the all-time maximum of each series is equally likely to fall in any of the 144 years, so on a purely random basis one would expect 60% (18 records) to occur in the 87 years before 1960 and 40% (12 records) to occur after 1960. There is therefore no 'real life' evidence to support the hypothesis of an increased risk of 'unprecedented' rainfall.
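A two-line check of that expected split:

```python
n_records = 30                          # 6 months x 5 regions
years_total = 144                       # 1873-2016 completed years
years_before_1960 = 1960 - 1873         # 87
frac = years_before_1960 / years_total  # ~0.60
print(n_records * frac, n_records * (1 - frac))   # ~18 vs ~12 records
```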

Even AR5 states: "The complexity of land surface and atmospheric processes limits confidence in regional projections of precipitation change, especially over land…"

Perhaps a case of cognitive dissonance?

 
