Global absolute temperature

Is there such a thing as a global absolute temperature of the earth’s surface? The temperature at any point on the earth’s surface is forever changing: hour to hour, night to day, and with the seasons. A global average temperature Tgl is theoretically the area average of all local temperatures integrated over the earth’s surface. I claim that Tgl can really only be defined at one instant in time (at most one day). In 1993 I was asked to process about twenty 9-track magnetic tapes containing an archive of ECMWF daily forecast data in GRIB format and then write the results to an optical jukebox. Having finally succeeded, I decided to calculate the ECMWF global average temperatures. These were published 23 years ago in GRL. Today such archives of weather forecast data are called reanalyses.

from: ‘Observation of a monthly variation in global surface temperature data’, C. Best, GRL, Nov 1994.

As the earth rotates, different fractions of land and ocean are illuminated by the Sun, and the earth’s albedo changes. The highest heating of the earth’s surface probably occurs at midday over the Pacific Ocean. The absorbed heat is then dispersed through the atmosphere (weather) and by ocean currents. Over land the albedo has been changing for thousands of years due to human activity: deforestation, agriculture, drainage and urbanisation all alter local albedo and weather, while more recently man has also increased CO2 levels. Therefore the absolute temperature of the earth’s surface is always changing. Can long term trends be measured directly?

The average daily temperature (Tav) is calculated from the maximum (Tmax) and minimum (Tmin) recorded temperatures at each station: Tav = (Tmax + Tmin)/2. Likewise the long term temperature series used for climate studies calculate monthly averages in the same way, where Tmax and Tmin are now the extreme temperatures for any given month. These monthly values also vary from year to year due to fluctuations in weather. The average seasonal variation at one station is calculated over some given 30-year period, and the resulting values are called the ‘normal’ values. The 30-year period is called the baseline.
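As a minimal sketch of these definitions (assuming a pandas DataFrame named daily holding one station’s records, with a DatetimeIndex and hypothetical columns tmax and tmin):

```python
import pandas as pd

# Daily average from the recorded extremes: Tav = (Tmax + Tmin) / 2.
daily["tav"] = (daily["tmax"] + daily["tmin"]) / 2

# Monthly averages for each year of the series.
monthly = daily["tav"].resample("MS").mean()

# 'Normal' values: the average for each calendar month over a fixed
# 30-year baseline, here 1961-1990 (the CRU convention).
baseline = monthly.loc["1961":"1990"]
normals = baseline.groupby(baseline.index.month).mean()
```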

Now suppose you simply calculate the global average temperature for one month or for one year. This is fairly easy to do by performing an area weighted average of Tav over the earth’s surface and then averaging over one year.
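A minimal sketch of that area weighting, assuming the monthly Tav values have been gridded onto a regular latitude-longitude grid (the function and array names are hypothetical). Grid cells shrink towards the poles, so each latitude band is weighted by cos(latitude):

```python
import numpy as np

def global_mean(field, lats):
    """Area-weighted global mean of a 2-D field[lat, lon] on a regular
    lat-lon grid. Cells with no data are NaN and are excluded."""
    weights = np.cos(np.radians(lats))[:, None] * np.ones_like(field)
    weights[np.isnan(field)] = 0.0
    return np.nansum(field * weights) / weights.sum()
```

Here is the result for land based stations in CRUTEM3.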

Globally averaged temperatures based on CRUTEM3 Station Data

There are obviously some problems here. For example the temperature appears to jump up in 1950, but this is simply because a lot of new stations were suddenly added that year. This demonstrates that there is always a spatial bias due to where measurements happen to be available, and this bias gets worse the further back in time you go. Before 1860 weather stations were mostly confined to Europe and the US, while ocean temperature data was confined to a few shipping lanes.

Stations with data back before 1860

So we can’t really measure global temperatures directly much before the satellite era, i.e. before ~1980.

The answer to this problem is to use temperature anomalies instead. Anomalies for a given station are defined relative to the monthly ‘normal’ temperatures over a 30-year period. CRU use 1961-1990, GISS use 1951-1980 and NCDC use the whole 20th century. The temperature ‘anomaly’ for each month is then Tav − Tnorm. Any sampling bias has not really disappeared but has instead been mostly subtracted. There is still the underlying assumption that all stations react in synchrony to warming (or cooling), as do their near neighbours. In addition it assumes that areas of the world with zero coverage behave similarly to those areas with good coverage. It seems that Guy Callendar was the first person to use temperature anomalies for this purpose, back in 1938.
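Continuing the earlier sketch, the anomaly calculation is then a one-liner (using the hypothetical monthly series and normals from above):

```python
# Tav - Tnorm: subtract each calendar month's 30-year 'normal'.
anomaly = monthly - normals.reindex(monthly.index.month).values
```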

So the conclusion is that you can’t measure a global temperature directly, and even if you could it would be changing on a daily and even hourly basis. The only thing you can measure is a global average temperature ‘anomaly’. Spatial biases are reduced but not fully eliminated, and there remains an overall assumption of regional homogeneity. So when you hear that global temperatures have risen by 1C, it really means that the global average anomaly has risen by 1C. For any given month the temperature where you live could even be colder than ‘normal’. For example, Europe was colder than normal in October 2015, despite 2015 itself being the ‘warmest’ anomaly year on record.

Hadcrut4 temperature anomalies for October 2015, using the new temperature colour scale.

This post was prompted by Gavin Schmidt: Observations, Reanalyses and the Elusive Absolute Global Mean Temperature.


“Unprecedented” Rainfall?

A recent Met Office press release, which was taken up by the BBC and most national newspapers, claimed the following:

“New innovative research has found that for England and Wales there is a 1 in 3 chance of a new monthly rainfall record in at least one region each winter (Oct-Mar).”

The paper in Nature is called “High risk of unprecedented UK rainfall in the current climate” and is based on model simulations, rather than real measurements of rainfall in regions of the UK. Can this really be true? The result is based on an ensemble of model runs at ~400 ppm CO2, where regional winter precipitation is generated stochastically, based on an assumed dependence of temperature and humidity on CO2 levels.

To judge these results we first need to understand exactly what 1 in 3 really means. Secondly, if such an increased risk really has occurred, should it not already be evident in the data?

So in this post we look at the measured monthly rainfall data. If the above claim is true then we should observe a skewed distribution of monthly record rainfalls towards recent times.

There are 5 regions in England and the monthly rainfall averages all start in 1873. This means there are 144 completed years to consider up to July 2017.

The probability that year n sets a new rainfall record in a stationary series goes as 1/n: in year 1 the probability is 100%, in year 2 it is 50%, in year 3 it is 33%, and so on. Therefore the probability of a new record for one month and one region in year 145 (2017) is 1/145, or 0.7%.

The paper defines ‘winter’ to be 6 months (Oct, Nov, Dec, Jan, Feb, Mar). So the random probability of a record in some month and some region in 2017 is 21%, or roughly 1 in 5 (6 × 5 × 0.7%). (Richard Allan says the study is actually based on just 4 regions; if so, the random probability is 17%, but this does not affect the argument.)
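A quick check of this arithmetic (a sketch only, assuming all 30 region-month series are independent and rainfall is stationary):

```python
n_years = 145        # 1873-2017 inclusive
n_series = 6 * 5     # 6 winter months x 5 regions

# In a stationary series the chance that year n sets a new record is 1/n.
p_one = 1 / n_years                     # ~0.007, i.e. 0.7%

# Linear approximation used above: simply add up the 30 chances.
p_linear = n_series * p_one             # ~0.21, i.e. roughly 1 in 5

# Exact value for independent series: 1 - P(no record anywhere).
p_exact = 1 - (1 - p_one) ** n_series   # ~0.19, slightly lower

print(f"linear: {p_linear:.1%}, exact: {p_exact:.1%}")
```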

The claim is that climate change has increased the probability of a new record in any year to 1 in 3; in other words, that the chance has increased by roughly 50%. Now if that is true then it should be possible to check for it in the data itself, because the probability must have been increasing steadily with CO2 for their claim to hold. It can’t suddenly have stepped up in 2017.

So let’s look at the Met Office’s own data for regional rainfall to see if there is any evidence of an increase in records with time. We take 1960 (based on Hadcrut4) as a marker for the onset of CO2-induced warming. The plots below show the rainfall distributions for each of the 6 winter months in the 5 regions of England. Monthly record rainfalls are highlighted by the circles.

1) Monthly rainfall data for South East England. Record rainfalls for each month are highlighted. Units are mm of rain (not inches!)

There is one record after 1960 for South East England (the storm of January 2014).

2) South West England. Units are mm of rain (not inches!)

Again we have one record after 1960 corresponding to the storms of 2013/14.

3) Central England. Units are mm of rain (not inches!)

For Central England there is only one record (February 1983) after 1960.

4) North West England. Units are mm of rain (not inches!)

For North West England there are 4 records after 1960, with December 2013 coinciding with the 2013/14 winter storms.

5) North East England. Units are mm of rain (not inches!)

There are no records at all after 1960 for North East England.

There are 30 available rainfall records (6 months × 5 regions). 22 of these records occur before 1960 and 8 occur after 1960.

On a purely random basis one would expect about 60% (18 records) to occur before 1960 and 40% (12 records) to occur after 1960, since for a stationary series the record is equally likely to fall in any year. Therefore there is no ‘real life’ evidence to support the hypothesis of any increased risk of ‘unprecedented’ rainfall.
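That expected split follows from the same 1/n argument, as this sketch shows:

```python
years_total = 144           # complete years 1873-2016
years_after = 2016 - 1960   # 56 of those years fall after 1960
n_records = 30              # 6 months x 5 regions

# The record in a stationary series is equally likely to fall in any
# year, so the expected count in each period scales with its length.
expected_after = n_records * years_after / years_total   # ~11.7, i.e. ~12
expected_before = n_records - expected_after             # ~18.3, i.e. ~18

print(f"expected: {expected_before:.0f} before / {expected_after:.0f} after; observed: 22 / 8")
```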

Even AR5 states: “The complexity  of land surface and atmospheric processes limits confidence in regional projections  of  precipitation change, especially over land…”

Perhaps a case of cognitive dissonance?

 


How to normalise temperature anomalies

Update 2/7/17: I used the wrong GISS data (land only) in the comparison as pointed out by @cce! Agreement now is reasonably good for all temperature series.

This is my crib sheet for comparing temperature anomalies across ground and satellite data. Temperature anomalies are usually (but not always) defined relative to a 30-year baseline ‘climatology’. This simply means that a 30-year average temperature for each month is calculated at each location. The anomaly is then the difference of the mean temperature from that monthly ‘normal’. First, here is a table showing which baseline each group uses.

Group Baseline
NASA GISS 1951 – 1980
Berkeley (BEST) 1951 – 1980
Hadcrut4.5 1961 – 1990
NOAA 1971 – 2000
UAH 1979 – 2010
RSS 1979 – 1998

This means that you can only compare temperature anomalies once they have all been normalised to the same baseline. To do that you must first calculate each series’ average anomaly over the new baseline period and then subtract it. These averages are the normalisation ‘offsets’. The UAH baseline (1979 – 2010) is the only one for which all the datasets have overlapping values. These are the offsets you need to plot all the series together.

Group Offset for 1979 – 2010 baseline
NASA GISS 0.4314 (0.5421 with the land-only data)
Berkeley (BEST) 0.3635
Hadcrut4.5 0.2907
NOAA 0.1838
H4-ST 0.315
RSS 0.0925

UAH itself clearly needs no offset, and H4-ST is my own spherical triangulation of the Hadcrut4.5 stations. The offsets above should be subtracted from all anomaly values in each series.
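A minimal sketch of this renormalisation, assuming each dataset is a pandas Series of monthly anomalies indexed by date (series names hypothetical):

```python
import pandas as pd

def rebaseline(anoms: pd.Series, start: str, end: str):
    """Shift a series of monthly anomalies onto a new baseline period.
    The offset is the mean anomaly over that period; after subtracting
    it the series averages to zero over the new baseline."""
    offset = anoms.loc[start:end].mean()
    return anoms - offset, offset

# e.g. put Hadcrut4.5 onto the UAH 1979-2010 baseline:
# h4_rebased, h4_offset = rebaseline(hadcrut45, "1979", "2010")  # offset ~0.29
```

Here is an animation of the results.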

Comparison of main temperature series normalised to UAH time period.

 

With the wrong (land-only) GISS data, the agreement across groups was good except for GISS: the ‘warming’ it showed was far greater than any other temperature index, renormalisation did not change the slope, and it appeared to be an outlier. As the update above notes, agreement is reasonably good once the correct GISS series is used.

Shown below is the table of offsets needed to normalise all the series to the Hadcrut4.5 baseline (1961-1990). The satellite offsets have been deduced by simply taking the negative of the average Hadcrut4.5 anomaly over each of their respective baseline periods; see the sketch after the table.

Group Offset for 1961 – 1990 baseline
NASA GISS 0.101 (0.1293 with the land-only data)
Berkeley (BEST) 0.0643
NOAA -0.1216
RSS -0.1982
UAH -0.2907
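
A sketch of that deduction: the satellite records only begin in 1979, so they have no data within 1961-1990 and the offset must be estimated from Hadcrut4.5 itself (again with hypothetical series names):

```python
# UAH anomalies average to zero over 1979-2010 by construction. To put
# them on the 1961-1990 baseline, use the negative of the mean
# Hadcrut4.5 anomaly over 1979-2010 as the offset, then subtract it.
uah_offset = -hadcrut45.loc["1979":"2010"].mean()   # ~ -0.29, as in the table
uah_rebased = uah - uah_offset
```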

Here is the comparison for all the series plotted on the Hadcrut4.5 baseline. GISS does show the largest overall warming trend, but it is not an outlier.

Temperature anomalies for each series normalised to 1961-1990

You can check my values and derive new ones using this spreadsheet. I cannot understand why there is not already some agreed IPCC renormalisation; however, I am pretty sure it would be the same as mine.

Note: This post was prompted after a twitter exchange with Victor Venema. 😉
