Whatever happened to the Hiatus in Global Warming?

The surprise news from the IPCC 5th Assessment Report (AR5), published in 2013, was that the earth had barely warmed since 1998. This was the so-called Hiatus in global warming, lasting some 15 years. The Physical Science Basis of Working Group 1 is an excellent summary of the Physics and is still relevant today.

Evolution of “Global Temperatures” after AR5 (2013)

The official temperature series used then by the IPCC for the AR5 report was based on the CRU (Climatic Research Unit, University of East Anglia) station data combined with the UK Met Office Hadley Centre’s SST ocean temperatures (HadSST2). However, all the other temperature series (NOAA, NASA, Hadley/CRU) basically agreed with this conclusion. The AR5 report stated: “The observed GMST has shown a much smaller increasing linear trend over the past 15 years than over the past 30 to 60 years, depending on the observation data set. The GMST trend over 1998-2012 is estimated to be around one third to one half of the trend over 1951-2012”.

There followed an open meeting at the Royal Society to discuss these results, which I attended (detailed notes are here). The Hiatus became a major discussion topic, especially since all the AR5 climate models were predicting far greater warming than that observed. The official IPCC position at the time was that such pauses in warming were not unexpected and were probably due to natural oscillations like the AMO (Atlantic Multidecadal Oscillation). Warming, though, was expected to restart soon.

Fig 1: Comparison of CMIP5 models and observed temperature trends, showing strong disagreement.

It thus became critically important for climate science to confirm that increasing CO2 levels do indeed cause significant warming in the temperature data. Here is the story of how that objective was finally achieved within just a couple of years of AR5, after which the hiatus was quietly forgotten.

Evolution of HadCRUT.

The CRU land data are based on hundreds of meteorological station records collated by CRU. The climatology for each station is defined as the 12 monthly average temperatures calculated over the 30-year period 1961-1990. These are referred to as climate “normals” and act as a baseline. The monthly differences in temperature from these normals are called “anomalies”, and the global average of these anomalies is the reported monthly and annual global temperature (anomaly). So it is not the actual temperatures which count, but rather the change in temperature relative to the normal period. The sea surface temperature series HadSST is maintained by the Hadley Centre (Met Office). Modern measurements are taken by floating buoys and by satellites. Older measurements were derived from buckets and engine intake temperatures and have far sparser coverage. Systematic differences between old and new methods are estimated (bucket depth etc.) and corrected for. Knowledge of earlier SSTs is therefore much poorer than that of modern measurements.

Each month new temperature data are accumulated. The global temperature anomaly for HadCRUT4 in 2013 was calculated as follows. The monthly temperature anomaly for a weather station is simply the difference between the measured value and its “normal” value, the 30-year average temperature for that month (1961-1990). Station data were gridded into 5×5 degree lat-lon bins. Bins are weighted by cos(latitude) because the earth is a sphere and grid areas diminish as cos(latitude). Until 2013 the global value was simply (NH + SH)/2, but this was later changed to (2×NH + SH)/3 because there is roughly half as much land area in the SH as there is in the NH. The original software to do this was a PERL script which looped over all station files (I still have it). This change alone explains some of the apparent temperature increase post-1998 in going from CRUTEM3 to CRUTEM4. However, this is not the full story, because in addition over 600 new stations at high northern latitudes were added in CRUTEM4, while 175 stations in South America showing cooling were dropped. The net effect of all this was to boost the warming trend by ~30% compared to CRUTEM3.
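The gridding and averaging steps above can be sketched as follows. This is a toy illustration, not the actual CRU PERL script; the real processing handles missing bins, coverage uncertainty and the land/ocean blend:

```python
import math
from collections import defaultdict

def global_anomaly(stations):
    """stations: list of (lat, lon, anomaly) tuples.
    Grid into 5x5 degree bins, average within each bin, weight bins by
    cos(latitude), then combine hemispheres as (2*NH + SH)/3."""
    bins = defaultdict(list)
    for lat, lon, anom in stations:
        bins[(int(lat // 5), int(lon // 5))].append(anom)

    def hemisphere_mean(north):
        wsum, w = 0.0, 0.0
        for (ilat, _), anoms in bins.items():
            centre_lat = ilat * 5 + 2.5          # bin centre latitude
            if (centre_lat >= 0) != north:
                continue
            weight = math.cos(math.radians(centre_lat))
            wsum += weight * sum(anoms) / len(anoms)
            w += weight
        return wsum / w if w else 0.0

    nh, sh = hemisphere_mean(True), hemisphere_mean(False)
    return (2 * nh + sh) / 3

# Toy example: two NH stations and one SH station (values invented)
print(global_anomaly([(52.0, 0.1, 0.6), (40.0, -75.0, 0.4), (-30.0, 150.0, 0.2)]))
```

Note how the (2×NH + SH)/3 weighting pulls the global figure towards the Northern Hemisphere: any change that preferentially warms NH bins (such as adding Arctic stations) is amplified relative to the old (NH + SH)/2 average.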

Jones(2012) wrote “The inclusion of much additional data from the Arctic (particularly the Russian Arctic) has led to estimates for the Northern Hemisphere (NH) being warmer by about 0.1°C for the years since 2001.”

CRUTEM4.3 then continued this warming trend further by adding yet more Arctic stations, and began using data homogenisation (similar to GHCN V3), as discussed below.

Sea surface temperature data are maintained by the Met Office; the version in use in 2013 was called HadSST2 (the current version is HadSST4). These sea surface temperatures were mainly ship-based measurements, either by bucket sampling or from engine inlets. Floating buoys were deployed later in the 20th century. Consequently knowledge of SST is rather poor in the 19th and early 20th century and much better once buoy and satellite data were used. Corrections are therefore applied to the earlier data, much of which is inspired guesswork.

HadCRUT4 is the global temperature average calculated by combining all the land measurements with the SST measurements, using the same basic algorithm as for CRUTEM4. The global average was then simply the cos(latitude)-weighted average over the land and ocean 5 degree lat-lon bins.

This, then, was the situation after AR5: the Hiatus in global warming was a clear effect, confirmed by the other groups and by the AR5 report itself. Yet just a few months later the Hiatus slowly began to disappear! So how did that happen? The simple answer is that the underlying temperature measurement data were changed by applying two new “corrections”. The first applied an algorithm called “pairwise homogenisation” to all station temperatures. The second is called “kriging” of the temperature data, which assumes that one can interpolate temperature anomalies into polar regions without any recorded measurement data. We look at each below.

Pairwise Homogenisation.

“Automated homogenisation algorithms are based on the pairwise comparison of monthly temperature series.” The pairwise homogenisation algorithm effectively always enhances any region-wide warming trend. The algorithm first looks for shifts between neighbouring weather stations’ “anomalies”. There can indeed be cases where a station has been relocated to a slightly different altitude, which can cause a step-function shift in average temperatures. However, the algorithm is applied generally between all station pairs, even ones sometimes thousands of miles apart. This will produce a systematically enhanced warming effect, especially when applied to the 30-year normalisation period itself (e.g. 1961-1990).
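A heavily simplified sketch of the core idea: form the difference series between two stations’ anomalies and look for a persistent step shift. The operational pairwise algorithm used for GHCN is far more elaborate, with significance testing and many neighbour comparisons; the threshold and data below are invented:

```python
def difference_series(a, b):
    """Difference between two equal-length station anomaly series.
    A persistent shift in this difference suggests a break at one station."""
    return [x - y for x, y in zip(a, b)]

def detect_break(diff, min_shift=0.5):
    """Return the index with the largest mean shift between the segments
    before and after it, if that shift exceeds min_shift; else None."""
    best_idx, best_shift = None, 0.0
    for i in range(2, len(diff) - 2):
        before = sum(diff[:i]) / i
        after = sum(diff[i:]) / (len(diff) - i)
        shift = abs(after - before)
        if shift > best_shift:
            best_idx, best_shift = i, shift
    return best_idx if best_shift >= min_shift else None

# Station A has a +1.0 C step at index 5 (e.g. a relocation); B is stable.
a = [0.0] * 5 + [1.0] * 5
b = [0.0] * 10
print(detect_break(difference_series(a, b)))  # 5
```

The weakness discussed in the text is visible even in this sketch: the method cannot tell *which* station broke, nor whether a detected shift is an artefact or a genuine local climate difference, so the attribution of the adjustment is a separate (and error-prone) inference.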

Global temperature comparison for uncorrected (4U)  and corrected data (4) – GHCN.

We see above that the net effect of “homogenisation” is to increase the apparent warming since ~2000 by about 0.15C. How sure are we that these automated algorithmic corrections are even correct? A recent paper has looked in detail at the effects of the pairwise algorithm on GHCN-V4, and the results are surprising. The authors downloaded all daily versions of GHCN-V4 over a period of 10 years, providing a consistency check over time of the corrections that were applied. They studied European stations and found that on average 100 different pairwise corrections were applied during that decade, while only 3% of these corrections corresponded to documented metadata events, e.g. station relocations. Only 18% of the corrections applied corresponded to documented station moves within a 12-month window. The majority of “corrections” were intermittent and undocumented.

Another consideration is that comparing the temperature of one station with its near neighbours should occasionally identify stations reading too hot, and reduce the recorded temperatures accordingly. Yet this never seems to happen: the adjustment always seems to be towards a warmer trend than that in the raw measurement data.

Here is just one example from Australia, where a warming trend has been imposed on top of a clear station shift.

Click to view an animation of the corrections applied from stations hundreds of miles away.

Kriging (Cowtan and Way)

Cowtan & Way introduced a version of HadCRUT4 shortly after AR5 which used a kriging technique to extrapolate values into those parts of the world with no direct measurements, in particular the Arctic. The end result is always to increase the overall warming rate. All groups now regularly use this type of technique to extend coverage into the polar regions.
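Kriging proper fits a spatial covariance model (a variogram) and solves for statistically optimal weights. As a rough stand-in to show the idea of infilling an unobserved cell from distant neighbours, the sketch below uses a Gaussian-kernel weighted average of great-circle distance; the length scale, coordinates and values are all invented for illustration:

```python
import math

def infill(target, observations, length_scale=1500.0):
    """Estimate the anomaly at `target` (lat, lon) from observed
    (lat, lon, anomaly) triples, weighted by a Gaussian kernel of
    great-circle distance in km. A crude stand-in for kriging."""
    def distance_km(p, q):
        # Haversine great-circle distance between two (lat, lon) points
        lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371.0 * math.asin(math.sqrt(h))

    wsum = w = 0.0
    for lat, lon, anom in observations:
        weight = math.exp(-(distance_km(target, (lat, lon)) / length_scale) ** 2)
        wsum += weight * anom
        w += weight
    return wsum / w

# Infill an unobserved Arctic cell at 85N from stations at 70-75N (toy values)
obs = [(70.0, 20.0, 1.2), (72.0, 60.0, 1.0), (75.0, 100.0, 1.4)]
print(infill((85.0, 40.0), obs))
```

The sketch makes the structural point plain: the infilled Arctic value is by construction a blend of the warmest available sub-Arctic anomalies, so if the added high-latitude stations warm faster than average, the extrapolated polar cells will too.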

Sea Surface Temperature Data

Sea surface temperatures have been measured since 1850 using very different methods, ranging from bucket temperatures and engine inlet temperatures through to buoys and satellite data. This is complex, since temperature varies with depth and other factors, while land temperatures are measured 2m above the surface. Complicated methods to correct for such instrumentation changes have been developed. The latest HadSST4 data, which also incorporates satellite corrections to recent buoy data, has added a significant further warming trend. See here for details.
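The kind of instrument-bias adjustment described here can be illustrated with a toy offset correction. The offsets below are invented for the sketch; the real HadSST adjustments are estimated from overlap periods between methods, carry uncertainties, and vary over time:

```python
# Illustrative bias offsets (degrees C) by measurement method.
# These numbers are made up for the sketch, not published HadSST4 values.
BIAS = {
    "bucket": -0.3,        # uninsulated buckets cool by evaporation
    "engine_intake": 0.1,  # engine rooms warm the sampled water
    "buoy": 0.0,           # buoys treated as the reference here
}

def correct(record):
    """record: (method, raw_sst). Subtract the estimated method bias so
    all measurements sit on a common, buoy-relative scale."""
    method, raw = record
    return raw - BIAS[method]

readings = [("bucket", 15.0), ("engine_intake", 15.5), ("buoy", 15.3)]
print([correct(r) for r in readings])  # bucket raised, engine intake lowered
```

The catch, as the text notes, is that the early-period offsets cannot be checked directly against buoys, so the size (and even sign) of the correction in the 19th and early 20th century rests on modelling assumptions rather than overlapping measurements.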

The overall result of both these updates has been to increase the apparent recent warming. This can be seen by comparing the uncorrected global temperature data with the corrected data each calculated in exactly the same way.

Effect of upgrading SST from HadSST3 to HadSST4 is large

So finally the Hiatus has disappeared !

This also continues a general trend ever since AR5 of unifying all the temperature series so that they agree with each other, confirming a higher rate of warming (GHCN, Berkeley Earth, NASA, HadCRUT5). This trend is based on 1) homogenisation of station measurements, 2) infilling of data to cover areas without measurements (kriging), and 3) blending of SST to 2m temperatures.

The HadCRUT5 dataset now also uses infilling by default, to extend coverage to those 5 degree bins without any data. Shown below is a “modern” comparison between all the major temperature series, as produced by Zeke Hausfather for Carbon Brief.

Basically, the consensus on quantitative global warming was reached simply because everyone now uses the same methodology (homogenisation, kriging, etc.) and the same underlying data to calculate it! There are no independent datasets. That said, our knowledge of temperatures before 1950, let alone pre-industrial temperatures, is still far more uncertain than is made apparent here. As a consequence the vertical scale shown above carries an uncertainty of probably over 0.1C, simply because we don’t really know what pre-industrial temperatures were.

One of the most important checks on the validity of scientific results is to have independent groups analysing the raw temperature data using different methodologies. This process seems to have stopped as climate change politics has grown. Despite all the hype, it is still often almost impossible to see any actual warming effect at an individual station. Dublin is a good example.

Temperature trace for Dublin Airport. In red are the recorded temperatures and in green are the temperature anomalies relative to 1961-1990.

Science advances when independent experimental results confirm a theoretical prediction. It rarely works the other way round!

About Clive Best

PhD in High Energy Physics. Worked at CERN, Rutherford Lab, JET, JRC, OSVision.
This entry was posted in Climate Change, climate science, CRU, Hadley, NASA, UK Met Office.

7 Responses to Whatever happened to the Hiatus in Global Warming?

  1. Mattew says:

    Hi Clive,

    thanks for all this info. superb. i would like to ask just some honest questions.

    Why do you think everyone uses the same methodology (homogenisation, kriging, etc.) and the same underlying data to calculate it?

    Why don’t we have institutions, public or private, that apply different methodologies to the raw data?

    Do you have any early thoughts about the last paper of John Christy and Roy Spencer? Looks promising.

    https://www.drroyspencer.com/2023/11/a-new-global-urban-heat-island-dataset-global-grids-of-the-urban-heat-island-effect-on-air-temperature-1800-2023/

    Cheers!

  2. Clive Best says:

    Thanks for the comments and for highlighting the new John Christy and Roy Spencer paper. It looks interesting. The effect is stronger in the corrected GHCN station data, probably because all trends are magnified by the pairwise corrections. The homogenisation exercise will also tend to amplify a pure urban heating effect.

    I’ll check out the details later.

  3. François Riverin says:

    Hi Mr Clive Best,
    I am a layperson, but deeply interested in the climate change debate. Over all, do you consider these data reassessments good or bad for climatic science? Thank you.

  4. Clive Best says:

    Some of the corrections may be reasonable, but the automated pairwise correction has essentially made all the temperature series (Hadley, NASA, Berkeley, NOAA etc.) the same. That means we have lost the independent safeguard check – so it is bad for science.

    Furthermore everything is based on the global average temperature anomaly which the data says has risen by about 1.2C. This is not an emergency and the proposed solution (cut emissions to zero) is currently impossible. Most of the green lobby reject Nuclear Power which is the only effective zero carbon energy source. So there is a total disconnect between the problem (currently minor) and the proposed solution (green ideology).

  5. climanrecon says:

    One thing to note is that the comparisons of the various global temperature datasets always seem to be global, i.e. including the sea surface. This gives substantial dilution of any differences in the land temperatures, given that everybody uses more or less the same sea surface temperatures.

    I believe that current homogenisation procedures for land data are very error prone, but that the errors tend to cancel out in regional averages, but there is no reason for exact cancellation. These errors are hiding in plain sight. Papers from both Canada and Australia show enormous differences in warming rates of homogenised data between nearby stations, to an ex-engineer they establish that errors are being made, but to the authors of the papers they are nothing to write home about.

  6. Chris says:

    Mr Best, Why are sea temperatures not taken only at the equator and land temperatures not taken only at the poles?

    • Clive Best says:

      Of course there are huge differences between actual temperature measurements, and the so-called “anomalies”. Climate Science can only deal with anomalies so they would argue that the anomaly measured at the equator should be the same as that measured at the poles. This assumes Global Warming is indeed “Global”.

      Of course this probably isn’t true.
