2019 Annual Temperature

The December 2019 temperature was up 0.12C from November, at 1.01C relative to 1961-1990. This completes the annual global average temperature for 2019, making it the second warmest year at 0.86C, just 0.015C cooler than 2016. All these values are calculated using GHCN-V4 and HadSST3 with spherical triangulation.

Here is the temperature distribution for December.

Notice in particular the warmer than average temperatures across Australia, parts of the US and Northern Europe plus the ocean hot spot west of New Zealand.

Here are the final trend results for 2019

Recent Monthly temperature trends


and the annual trends, showing 2019 slightly cooler than 2016.

Annual average temperature anomalies

So 2019 ended with a warm December and Australia suffering terrible bush fires. I will be there in 3 weeks' time and was planning to first visit the Blue Mountains.


The baseline problem

A baseline is simply a period of successive years whose average temperature is subtracted from a time series to produce temperature “anomalies”.
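As a minimal sketch of this definition (hypothetical monthly data, not a real station record), the anomaly calculation might look like:

```python
import numpy as np

# Hypothetical monthly temperatures: rows = years 1950-2019, cols = 12 months
years = np.arange(1950, 2020)
rng = np.random.default_rng(0)
temps = 10 + 8 * np.sin(np.arange(12) * np.pi / 6) + rng.normal(0, 0.5, (70, 12))

# The baseline is the average of each calendar month over 1961-1990,
# i.e. the seasonal cycle for that period
in_baseline = (years >= 1961) & (years <= 1990)
baseline = temps[in_baseline].mean(axis=0)        # 12 monthly normals

# Anomalies: subtract the matching monthly normal from every year
anomalies = temps - baseline

# By construction the anomalies average to zero over the baseline period
print(abs(anomalies[in_baseline].mean()) < 1e-9)  # → True
```

That final zero mean is purely a consequence of the definition, a point that matters for everything that follows.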

One normally associates anomalies with weather station data, where the baseline is the set of 12 monthly seasonal averages for each station. What is less well known is that climate model projections also need a baseline, and that the end result depends on both the choice and the length of that baseline period. Subtle changes in this choice can transform the apparent agreement between models and data. We have already met this effect once before when reviewing Zeke's wonder plot. The question one might ask is: why would climate models need any normalising at all?

The underlying reason is simple: climate models do not conserve energy at the observed surface temperature. They cannot balance the energy in from the sun with the energy out from IR radiation except by adjusting the mean surface temperature. This problem was beautifully explained by Ed Hawkins in a 2015 talk and later on his blog.

Hawkins slide 1. CMIP5 models all give wildly different values of the global average surface temperature in order to balance energy. Hence the need for baselines.

CMIP5 models all predict different average surface temperatures. The model projections that we see in various IPCC reports and in the press have all been normalised to some arbitrary common baseline, but they are not normalised in the same way as the measured temperature data. Instead, each model is artificially shifted so that it averages to zero during the chosen baseline period. As a direct result, all models now agree with each other that the temperature anomaly is zero within the selected baseline.

Model projections so adjusted can then be compared to the data, once that too has been shifted to match the same baseline. This is an arbitrary shift without any scientific basis, yet the agreement between models and data now depends simply on that choice of baseline. Models can basically be tuned to fit the temperature data by selecting an optimum timebase. This is the dirty secret behind climate science.
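The shifting procedure itself is easy to sketch. The following toy example uses invented numbers, not any real CMIP5 output, but it shows why agreement inside the baseline is guaranteed by construction:

```python
import numpy as np

years = np.arange(1900, 2020)
rng = np.random.default_rng(1)
trend = 0.008 * (years - 1900)

# Five hypothetical "models" with similar trends but very different
# absolute surface temperatures, plus a hypothetical "observed" series
models = np.array([14 + offset + trend + rng.normal(0, 0.1, years.size)
                   for offset in (-1.0, -0.3, 0.0, 0.6, 1.4)])
obs = 13.8 + trend + rng.normal(0, 0.1, years.size)

def rebaseline(series, start, end):
    """Shift each series so it averages to zero over [start, end]."""
    mask = (years >= start) & (years <= end)
    return series - series[..., mask].mean(axis=-1, keepdims=True)

# Whatever baseline is chosen, every shifted model (and the shifted data)
# has a zero-mean anomaly inside it -- agreement there is automatic.
for start, end in [(1961, 1990), (2000, 2019)]:
    m, o = rebaseline(models, start, end), rebaseline(obs, start, end)
    mask = (years >= start) & (years <= end)
    print(start, end, np.allclose(m[:, mask].mean(axis=1), 0),
          np.isclose(o[mask].mean(), 0))
```

Each model's absolute offset (here up to 2.4C between models) simply disappears inside the chosen window, which is the point being made above.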

Fig 2: Simply changing the baseline improves model agreement.

Figure 2 from Ed Hawkins demonstrates this effect perfectly. Models can appear to be in almost perfect agreement with the data if the baseline spans recent years. Agreement gets much worse when you select a more standard baseline like 1961-1990. This is also why Zeke got such good agreement for models, as shown below. He chose a baseline which spans the full time interval for all the displayed data!

Zeke’s wonder plot.

Using his baseline the models and the data had to agree, because by definition both agree that the average temperature anomaly is zero, and indeed it is!

There is another trick which can be used to fine-tune agreement: varying the length of the baseline period. This changes the relative spread of the model results because of short-timespan variations between models. Figure 3 demonstrates the effect of varying the baseline timespan. The animation is from Ed Hawkins' blog and shows how short timebases can radically change the dispersion in models.

Effects of different time spans on normalisation.
Animation – Ed Hawkins

Choosing a shorter or longer baseline period affects the spread and ordering of individual model projections. This is because the baseline captures just one snapshot of model variability, freezing it in based on that one time interval.
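This length effect can be illustrated with a toy ensemble (synthetic series with a fixed random seed, not real CMIP output). A short baseline mean is noisier than a long one, so the offset frozen into each model varies more from model to model:

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1950, 2020)

# 20 toy models: identical underlying trend, independent interannual noise
models = np.array([0.01 * (years - 1950) + rng.normal(0, 0.15, years.size)
                   for _ in range(20)])

def final_year_spread(ensemble, start, end):
    """Ensemble spread in the final year (2019) after shifting each
    model to a zero mean over the baseline [start, end]."""
    mask = (years >= start) & (years <= end)
    shifted = ensemble - ensemble[:, mask].mean(axis=1, keepdims=True)
    return shifted[:, -1].std()

# The 5-year baseline freezes in a noisier snapshot per model, so the
# resulting dispersion typically widens relative to a 30-year baseline
print("5-year baseline :", final_year_spread(models, 1981, 1985))
print("30-year baseline:", final_year_spread(models, 1961, 1990))
```

With an n-year baseline the frozen-in offset carries an extra variance of roughly the interannual noise variance divided by n, which is why short timebases scramble the ordering and dispersion of the projections.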

In general, a series of measured temperature anomalies can be moved to a different baseline by a linear shift of all points. Here, for example, is GHCN V4 calculated on different timebases.

Global Land temperature anomalies calculated relative to 5 different baselines. The numbers in brackets are the number of stations contributing for each baseline period.
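The linear shift itself is trivial. As a hypothetical example (synthetic anomalies, not the GHCN V4 series shown above), moving a series from a 1961-1990 baseline to 1981-2010 looks like:

```python
import numpy as np

# Hypothetical annual anomalies already expressed relative to 1961-1990
years = np.arange(1880, 2020)
anoms_6190 = (0.008 * (years - 1950)
              + np.random.default_rng(3).normal(0, 0.1, years.size))

# Moving to a 1981-2010 baseline means subtracting one constant:
# the mean of the series over the new period
new_mask = (years >= 1981) & (years <= 2010)
shift = anoms_6190[new_mask].mean()
anoms_8110 = anoms_6190 - shift

# Every point moves by the same amount, so year-to-year changes
# and trends are untouched
print("constant shift:", round(shift, 3))
print("trends identical:", np.allclose(np.diff(anoms_6190),
                                       np.diff(anoms_8110)))
```

For the data this is harmless: only the zero line moves. The problem described next arises because models, unlike the data, are each shifted independently.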

It is the model projections which change dramatically when using different baselines. One should always bear this in mind whenever an ensemble of models seems to match the data perfectly. They are simply constrained to do so! That is also why, at any given time, future projections fan out so dramatically 30 years into the future.

Don’t worry though, because in 30 years’ time all the models will yet again agree with the measured temperature anomalies!


30 years of IPCC assessment reports – How well have they done?

Over the last 30 years there have been 5 IPCC assessment reports on climate change. I decided to compare the warming predictions made at the time of each publication to the subsequently measured temperatures. The objective is twofold. Firstly, how well did the predictions fare with time? Secondly, have we learned anything new about climate change in the last 30 years?

1. FAR

The First Assessment Report (FAR) was published in 1990. It was probably the first public alarm raised that the world was warming as a consequence of CO2 emissions. As a spinoff there was also a very good book written by the FAR chairman John Houghton, “Global Warming – The Complete Briefing”. Here is the comparison of the “model” to the resultant temperatures.

The first prediction of global warming in 1990. The data are my own 3D version of HadCRUT4.6

The trend in temperatures is confirmed, but by the end of 2019 the degree of warming was less than the FAR 1990 forecast. In reality emissions have indeed followed the Business as Usual trajectory, but warming turned out to be about 0.3C less than forecast.

2. SAR

The Second Assessment Report (SAR) dates from 1995. Here is the SAR comparison between my 3D HadCRUT4 results, shown in red, and the UK Met Office model.

My 3D results for HadCRUT4.6 are shown in red. The results are the same as Cowtan & Way.

We see that the model which reduces the effect of CO2 by including aerosols agrees reasonably well with today's data. However, since aerosols have actually fallen significantly since 1995, this result is not as good as it looks. I would call this a moderate success for SAR, but only when assuming CO2 forcing alone without feedbacks. Next we look at the Third Assessment Report (TAR).

3. TAR

The Third Assessment Report was published in 2001. The TAR model predictions were actually lower than those of both FAR and SAR, perhaps reflecting a real drop-off in the measured temperatures. HadCRUT3 had by then shown that a definite pause (hiatus) in global warming was underway following the super El Niño of 1998. Since 2001 many more stations have been added (and some removed), so that, as we will see, the hiatus has today essentially disappeared. Here though is the comparison of the TAR ‘projections’ with the current temperature data as of 2019. The temperature data are again my own 3D version of the HadCRUT4.6 measurements.

The blue points are HadCRUT4.6 calculated using Spherical Triangulation.

The agreement between models and data looks much better. However, note that the absolute temperature change is only relative to that of 1990 (the FAR publication date). Even then, my impression is that we are most likely following their blue B2 ensemble curve.

4. AR4

The Fourth Assessment Report was published in 2007 and included a shorter-term prediction diagram compared to the HadCRUT data as it was available then. The updated figure below shows the updated comparison as of Jan 2020. The black squares are the latest HadCRUT4.6 data as downloaded from the Hadley site. The small black circles are the old H4 data points from the original report. They differ from the current H4 results because CRU have in the meantime updated all their station data. In this respect, note how the 1998 temperature has dropped a little while the 2005 value increased, sufficiently that the hiatus starts to disappear.

AR4 model comparison to HadCRUT4.6 updated for 2019

The final agreement is not too bad but the data nevertheless still lie below the model means for all reasonable SRES scenarios.

5. AR5

This finally brings us up to the Fifth Assessment Report comparison. When AR5 was published in 2013 the hiatus in warming was still clearly visible in all temperature series, and as a consequence all models were running hot by comparison. Since then, however, many more stations have been added in the Arctic, and the large El Niño of 2016 has now apparently erased the hiatus. Despite all this, how do the models now compare with the modern temperature data as of January 2020?

Here is the up to date comparison of Figure 11.25a to the data.

IPCC AR5 Figure 11.25a updated for 2019. The green trend is HadCRUT4.6; the cyan trend is the 3D version. The data are skimming along the bottom of the sensitivity range of all 45 CMIP5 models.

It is clear that the warming trend lies at the lower end of the CMIP5 ensemble. Only models with lower sensitivity can adequately describe the temperature data.

All five comparisons across a 30-year period of assessment reports say the same thing. There is an obvious warming trend in global surface temperatures, consistent with being caused by the anthropogenic increase in atmospheric CO2 levels. However, this trend lies at the lower end of all model projections that have been made into the future. It is very easy for models to describe the known historic temperature record, simply because they have been tuned to do exactly that. The real test of climate models is whether they can predict future warming, and all the evidence of the last 5 assessment reports shows that most of them fail to do that.

It is normal practice in science that theoretical predictions which fail experimental tests are rejected, or at the very least modified. Climate science is different. The news from the latest modelling ensemble, CMIP6, is that the new generation of ESMs is even more sensitive to CO2 than the 7-year-old CMIP5 models. CMIP6 produces even stronger warming trends, in stark contrast to the actual observations! Where is the scientific accountability? Has climate science perhaps simply merged with climate activism?

Hold on to your hats!

