Global Temperatures for 2023

December saw a rise in the global average temperature anomaly to 1.27C, resulting in a final annual value of 1.13C for 2023. This is an increase of just under 0.3C on 2022, making 2023 the warmest year so far (as widely reported). My calculation is based on spherical triangulation and uses GHCN corrected land data and the latest HadSST4 sea surface temperature data. It is the same basic data as used by all the groups (Berkeley, Hadley/CRU, NASA etc.).

Here is the data for the full year 2023:

Annual global temperature anomalies updated for 2023

The monthly data shows just how changeable 2023 really was.

Monthly Temperature Anomalies updated for December 2023

Finally, here is an animation showing the temperature distribution on the earth’s surface during December 2023.


Temperatures during December 2023.

Note the continuing El Nino and the very warm temperatures in the USA and Canada, while Antarctica remains cooler than normal.


Whatever happened to the Hiatus in Global Warming?

The surprise news from the IPCC 5th Assessment Report, published in 2013, was that the earth had not warmed at all since 1998. This was the so-called Hiatus in global warming, lasting some 15 years. The Physical Science Basis report of Working Group 1 is an excellent summary of the physics and is still relevant today.

Evolution of “Global Temperatures” after AR5 (2013)

The official temperature series used then by the IPCC for the AR5 report was based on the CRU (Climatic Research Unit, University of East Anglia) station data combined with the UK Met Office Hadley Centre’s SST ocean temperatures, HadSST2. However, all the other temperature series (NOAA, NASA, Hadley/CRU) basically agreed with this conclusion. The AR5 report stated: “The observed GMST has shown a much smaller increasing linear trend over the past 15 years than over the past 30 to 60 years, depending on the observation data set. The GMST trend over 1998-2012 is estimated to be around one third to one half of the trend over 1951-2012”.

There followed an open Royal Society meeting to discuss these results, which I attended (detailed notes are here). The Hiatus became a major discussion topic, especially since all the AR5 climate models were predicting far greater warming than that observed. The official IPCC position at the time was that such pauses in warming were not unexpected and were probably due to natural oscillations like the AMO (Atlantic Multidecadal Oscillation). Warming, though, was expected to restart soon.

Fig 1: Comparison of CMIP5 models and observed temperature trends, showing strong disagreement.

It thus became critically important for climate science to confirm that increasing CO2 levels do indeed cause significant warming in the temperature data. Here is the story of how that objective was finally achieved within just a couple of years of AR5, after which the hiatus was quietly forgotten.

Evolution of HadCRUT.

The CRU land data is based on hundreds of meteorological station records collated by CRU. The climatology for each station is defined as the 12 monthly average temperatures calculated over the 30-year period 1961-1990. These are referred to as climate “normals” and act as a baseline. The monthly differences in temperature from these normals are called “anomalies”, and the global average of these anomalies is the reported monthly and annual global temperature (anomaly). So it is not the actual temperatures that count, but the change in temperature relative to the normal period. The sea surface temperature dataset HadSST is maintained by the Hadley Centre (Met Office). Modern measurements are taken by floating buoys and by satellites. Older measurements are derived from buckets and engine intake temperatures and have far sparser coverage. Systematic differences between old and new methods are estimated (bucket depth etc.) and corrected for. Knowledge of earlier SST is therefore much poorer than of modern SST.
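
As a concrete illustration, here is a minimal sketch of the anomaly calculation for a single station. The file name and column layout are hypothetical, and the real CRU processing adds quality control and missing-data handling:

```python
import pandas as pd

# Hypothetical station file with columns: year, month, temp (degrees C)
station = pd.read_csv("station.csv")

# Climatology ("normals"): the 12 monthly mean temperatures over 1961-1990
baseline = station[(station.year >= 1961) & (station.year <= 1990)]
normals = baseline.groupby("month")["temp"].mean()

# Anomaly = observed monthly temperature minus that month's normal
station["anomaly"] = station["temp"] - station["month"].map(normals)

# The annual station anomaly is the mean of the 12 monthly anomalies
annual = station.groupby("year")["anomaly"].mean()
```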

Each month new temperature data are accumulated. The global temperature anomaly for HadCRUT4 in 2013 was calculated as follows. The monthly temperature anomaly for a weather station is simply the difference between the measured value and its “normal” value, the 30-year average temperature for that month (1961-1990). Station data were gridded into 5°×5° latitude-longitude bins. Bins are weighted by cos(latitude) because the earth is a sphere and bin areas shrink as cos(latitude). Until 2013 the global value was simply (NH + SH)/2, but this was later changed to (2*NH + SH)/3 because there is roughly half as much land area in the SH as in the NH. The original software to do this was a PERL script which looped over all station files (I still have it). This change alone explains some of the apparent temperature increase post 1998 in going from CRUTEM3 to CRUTEM4. However this is not the full story, because in addition over 600 new stations at high northern latitudes were added in CRUTEM4, while 175 stations in South America showing cooling were dropped. The net effect of all this was to boost the warming trend by ~30% compared to CRUTEM3.
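
To make the gridding step concrete, here is a minimal sketch of that global average. The function and variable names are mine, and the real CRUTEM code additionally enforces minimum station counts per bin and handles missing months:

```python
import numpy as np

def global_mean(anoms, lats, lons):
    """Area-weighted global mean of station anomalies on a 5x5 degree grid.
    anoms, lats, lons: NumPy arrays of station anomalies and coordinates."""
    sums, counts = np.zeros((36, 72)), np.zeros((36, 72))
    iy = np.clip(((lats + 90) // 5).astype(int), 0, 35)   # latitude bin index
    ix = np.clip(((lons + 180) // 5).astype(int), 0, 71)  # longitude bin index
    np.add.at(sums, (iy, ix), anoms)
    np.add.at(counts, (iy, ix), 1)
    grid = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)

    # cos(latitude) weights at bin centres: bin area shrinks towards the poles
    w = np.cos(np.deg2rad(np.arange(-87.5, 90, 5)))[:, None] * np.ones((1, 72))
    valid = ~np.isnan(grid)

    def hemi(rows):  # weighted mean over the occupied bins of one hemisphere
        g = np.where(valid, grid, 0.0)[rows]
        return (g * w[rows]).sum() / w[rows][valid[rows]].sum()

    nh, sh = hemi(slice(18, 36)), hemi(slice(0, 18))
    return (2 * nh + sh) / 3  # the post-CRUTEM3 hemispheric weighting
```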

Jones (2012) wrote: “The inclusion of much additional data from the Arctic (particularly the Russian Arctic) has led to estimates for the Northern Hemisphere (NH) being warmer by about 0.1°C for the years since 2001.”

CRUTEM4.3 then continued this warming trend further by adding yet more Arctic stations, and began using data homogenisation (similar to GHCN V3), as discussed below.

Sea surface temperature data is maintained by the Met Office and in 2013 was called HadSST2 (the current version is HadSST4). These sea surface temperatures were mainly ship-based measurements, either by bucket sampling or from engine inlets. Floating buoys were deployed later in the 20th century. Consequently knowledge of SST is rather poor for the 19th and early 20th centuries and much better once buoy and satellite data were used. Corrections are therefore applied to the earlier data, much of which is inspired guesswork.

HadCRUT4 is the global temperature average calculated by combining all the land measurements with the SST measurements, using the same basic algorithm as for CRUTEM4. The global average was then simply the cos(latitude)-weighted average over land and ocean 5-degree lat-lon bins.

This then was the situation after AR5: the Hiatus in global warming was a clear effect, confirmed by the other groups and in the AR5 report itself. Yet just a few months later the Hiatus slowly began to disappear! So how did that happen? The simple answer is that the underlying temperature measurement data were changed by applying two new “corrections”. The first applied an algorithm called “pairwise homogenisation” to all station temperatures. The second is called “kriging” of the temperature data, which assumes that one can interpolate temperature anomalies into polar regions without any recorded measurement data. We look at each below.

Pairwise Homogenisation.

“Automated homogenisation algorithms are based on the pairwise comparison of monthly temperature series.” In practice, the pairwise homogenisation algorithm effectively always enhances any region-wide warming trend. The algorithm first looks for shifts between neighbouring weather stations’ anomalies. There can indeed be cases where a station has been relocated to a slightly different altitude, which causes a step-function shift in average temperatures. However, the algorithm is applied generally between all station pairs, even stations sometimes thousands of miles apart. This produces a systematically enhanced warming effect, especially when applied to the 30-year normalisation period itself (e.g. 1961-1990).
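
To illustrate the kind of test involved, here is a toy sketch that scans the difference series of two stations for the strongest mean shift. This is a simplification of my own, not NOAA’s actual pairwise algorithm, which uses formal changepoint statistics over many station pairs:

```python
import numpy as np

def find_step(anoms_a, anoms_b, min_seg=24):
    """Find the most likely step change in the difference between two
    neighbouring stations' monthly anomaly series (toy illustration)."""
    diff = np.asarray(anoms_a) - np.asarray(anoms_b)
    n = len(diff)
    best_t, best_score = None, 0.0
    for t in range(min_seg, n - min_seg):
        # Shift in mean between the segments before and after month t,
        # scaled by the overall scatter of the difference series
        shift = diff[t:].mean() - diff[:t].mean()
        score = abs(shift) / (diff.std() + 1e-9)
        if score > best_score:
            best_t, best_score = t, score
    return best_t, best_score  # candidate breakpoint and its strength
```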

Global temperature comparison for uncorrected (4U) and corrected (4) GHCN data.

We see above that the net effect of “homogenisation” is to increase the apparent warming since ~2000 by about 0.15C. How sure are we that these automated algorithmic corrections are even correct? A recent paper has looked in detail at the effects of the pairwise algorithm on GHCN-V4, and the results are surprising. The authors downloaded all daily versions of GHCN-V4 over a period of 10 years, providing a consistency check over time of the corrections applied. They studied European stations and found that on average 100 different pairwise corrections were applied during that decade, while only 3% of these corrections corresponded to documented metadata events, e.g. station relocations. Only 18% of applied corrections corresponded to documented station moves within a 12-month window. The majority of “corrections” were intermittent and undocumented.

Another consideration is that comparing a station with its near neighbours should occasionally identify stations reading too hot and reduce the recorded temperatures accordingly. Yet this never seems to happen: the adjusted data always trend warmer than the raw measurement data.

Here is just one example from Australia, where a warming trend has been imposed on top of a clear station shift.

Click to view an animation of the corrections applied from stations hundreds of miles away.

Kriging (Cowtan and Way)

Cowtan & Way introduced a version of HadCRUT4 after AR5 which used a kriging technique to extrapolate values into those parts of the world with no direct measurements, in particular the Arctic. The end result is always to increase the overall warming rate. All groups now regularly use this type of technique to extend coverage into the polar regions.
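
Full kriging fits a spatial covariance model (a variogram) to the data. As a rough sketch of the infilling idea only, here is a distance-weighted interpolation into an empty grid cell, with a Gaussian weight standing in for the fitted covariance; this is not Cowtan & Way’s actual code:

```python
import numpy as np

def infill(cell_lat, cell_lon, obs_lats, obs_lons, obs_anoms, scale_km=800.0):
    """Estimate the anomaly of an unobserved cell from surrounding observed
    cells, weighted by great-circle distance (simplified kriging-like infill)."""
    R = 6371.0  # mean Earth radius in km
    lat1, lon1 = np.deg2rad(cell_lat), np.deg2rad(cell_lon)
    lat2, lon2 = np.deg2rad(np.asarray(obs_lats)), np.deg2rad(np.asarray(obs_lons))
    # Haversine great-circle distance to every observed cell
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    d = 2 * R * np.arcsin(np.sqrt(a))
    w = np.exp(-(d / scale_km) ** 2)  # nearby cells dominate the estimate
    return np.sum(w * np.asarray(obs_anoms)) / np.sum(w)
```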

Sea Surface Temperature Data

Sea surface temperatures have been measured since 1850 using very different methods, ranging from bucket temperatures and engine inlet temperatures through to buoys and satellite data. The problem is complex, since ocean temperature varies with depth and other factors, while land temperatures are measured 2m above the surface. Complicated methods to correct for such instrumentation changes have been developed. The latest HadSST4 data, which also incorporates satellite corrections to recent buoy data, has added a significant further warming trend. See here for details.

The overall result of both these updates has been to increase the apparent recent warming. This can be seen by comparing the uncorrected global temperature data with the corrected data, each calculated in exactly the same way.

Effect of upgrading SST from HadSST3 to HadSST4 is large

So finally the Hiatus has disappeared!

This also continues a general trend, ever since AR5, of unifying all the temperature series so that they agree with each other, confirming a higher rate of warming (GHCN, Berkeley Earth, NASA, HadCRUT5). This trend is based on 1) homogenisation of station measurements, 2) infilling of areas without measurements (kriging), and 3) blending of SST with 2m land temperatures.

The HadCRUT5 dataset now also uses infilling by default to extend coverage to those 5-degree bins without any data. Shown below is a “modern” comparison between all the major temperature series, as produced by Zeke Hausfather for Carbon Brief.

Basically, the consensus on quantitative global warming was reached simply because everyone now uses the same methodology (homogenisation, kriging, etc.) and the same underlying data to calculate it! There are no independent datasets. That said, our knowledge of temperatures before 1950, let alone of pre-industrial temperatures, is still far more uncertain than is made apparent here. As a consequence the vertical scale shown above carries an uncertainty of probably over 0.1C, simply because we don’t really know what pre-industrial temperatures were.

One of the most important checks on the validity of scientific results is to have independent groups analysing the raw temperature data using different methodologies. This process seems to have stopped as climate change politics has grown. Despite all the hype, it is still often almost impossible to see any actual warming effect at an individual station. Dublin is a good example.

Temperature trace for Dublin Airport. In red are the recorded temperatures and in green are the temperature anomalies relative to 1961-1990.

Science advances when independent experimental results confirm a theoretical prediction. It rarely works the other way round!


Richard Betts/Met Office Warming Indicator

The Met Office is proposing to use a 10-year average temperature indicator to define when, and if, warming of 1.5C since the pre-industrial period will be exceeded. The pre-industrial period is defined as 1850-1900, which is different from the 1961-1990 baseline used for calculating station anomalies (chosen because it maximises the available station count). The offset for the pre-industrial period is consequently based on the anomaly values before 1900.

A comparison of the latest HadCRUT5 with my spherical triangulation results shows almost perfect agreement between the two methods. This results in a joint pre-industrial baseline (1850-1900) of -0.4C relative to the 1961-1990 normals. Thus the Paris agreement to limit total warming to 1.5C equates to limiting the measured anomalies to less than 1.5C - 0.4C = 1.1C.

It is clearly evident that the year-to-year variability is large, for example due to El Nino and other transient effects, which makes estimating the effect of CO2 alone difficult. However, there is another way to calculate global temperatures which allows for far longer integration periods, avoiding this problem. It is based on icosahedral 3D binning, as I described here. This method allows the integration period in each cell to be extended up to a decade. These are the results I get using the latest complete decade (the 2010s) of measurements.
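
As a rough sketch of the binning idea, the snippet below assigns each measurement to the nearest vertex of an icosahedron and averages a decade of data per cell. For brevity it uses only the 12 base vertices; the real grid subdivides the faces into many more equal-sized cells:

```python
import numpy as np

# The 12 icosahedron vertices: cyclic permutations of (0, +/-1, +/-phi)
phi = (1 + 5 ** 0.5) / 2
verts = np.array([p for a, b in [(1, phi), (-1, phi), (1, -phi), (-1, -phi)]
                  for p in [(0, a, b), (a, b, 0), (b, 0, a)]], dtype=float)
verts /= np.linalg.norm(verts, axis=1, keepdims=True)  # project onto unit sphere

def decadal_cell_means(lats, lons, anoms):
    """Average a decade of anomalies within each (coarse) icosahedral cell."""
    lat, lon = np.deg2rad(np.asarray(lats)), np.deg2rad(np.asarray(lons))
    xyz = np.stack([np.cos(lat) * np.cos(lon),
                    np.cos(lat) * np.sin(lon),
                    np.sin(lat)], axis=-1)
    cell = np.argmax(xyz @ verts.T, axis=1)  # nearest vertex = largest dot product
    anoms = np.asarray(anoms)
    return np.array([anoms[cell == k].mean() if np.any(cell == k) else np.nan
                     for k in range(12)])
```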

Views of the 3D equal size Icosahedral bins for 2011-2020

Temperature anomalies calculated on a regular icosahedron. Click here to view in 3D.

Now we compare the decadal temperature values with the normal annual values calculated by spherical triangulation.

Annual temperature trends updated for October 2023 compared to the underlying decadal trends.

This result demonstrates that the decadal temperatures describe the underlying temperature trends in the data very well, unaffected by annual variability. As a consequence, we can simply extrapolate the observed linear decadal trend forward in time to determine when the Paris agreement limit of 1.5 degrees will likely be exceeded.
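
The extrapolation itself is elementary. Here is a sketch with placeholder decadal values (invented illustrative numbers, not my actual results):

```python
import numpy as np

# Placeholder decadal mean anomalies (deg C relative to 1961-1990)
mid_years = np.array([1975, 1985, 1995, 2005, 2015])  # decade mid-points
anoms = np.array([0.05, 0.20, 0.40, 0.58, 0.78])      # illustrative values only

# Fit a straight line and solve for the year when the anomaly reaches 1.1C,
# i.e. 1.5C above the 1850-1900 pre-industrial baseline of -0.4C
slope, intercept = np.polyfit(mid_years, anoms, 1)
year = (1.1 - intercept) / slope
print(f"1.5C above pre-industrial exceeded around {year:.0f}")
```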

The observed stable decadal trend shows that the Paris Agreement to limit warming to 1.5C since the preindustrial level will very likely be exceeded in 2032

The conclusion is that a net warming of 1.5C since pre-industrial times, exceeding the Paris Agreement limit, will most likely occur in 2032. This figure of 1.5C, if I remember correctly, was in any case a compromise resolution agreed at the end of the conference.
