HadCRUT4.6 has released its annual temperature result for 2019: 0.74C. They grid the station data (CRUTEM4) with SST data (HadSST3) in 5×5 degree bins and perform an area-weighted global average. CRUTEM4 has 7680 stations contributing to this average. Although GHCN-V4 has more stations (17280), it is not clear to me that its coverage is really much better. CRUTEM4's coverage is similar to that of V3 but with some additional stations.
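The area-weighted grid average can be sketched in a few lines of NumPy. This is an illustrative version, not the HadCRUT4 code itself: it assumes the anomalies arrive as a 36×72 array of 5-degree cells with NaN marking unoccupied cells, and weights each occupied cell by the cosine of its central latitude (proportional to cell area on the sphere).

```python
import numpy as np

def grid_average(anomaly):
    """Area-weighted global mean of a 36x72 (5-degree) anomaly grid.

    `anomaly` is a hypothetical 36x72 array with np.nan in unoccupied
    cells; empty cells are simply excluded from the average, which is
    what makes sparse polar coverage matter.
    """
    lats = np.arange(-87.5, 90, 5)            # 5-degree cell centres
    weights = np.cos(np.radians(lats))        # cell area ~ cos(latitude)
    w2d = np.broadcast_to(weights[:, None], anomaly.shape)
    mask = ~np.isnan(anomaly)                 # occupied cells only
    return np.nansum(anomaly * w2d) / w2d[mask].sum()
```

Because empty cells drop out of both numerator and denominator, regions with no stations (notably the poles) contribute nothing to the result.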
I downloaded the CRUTEM4 station data and calculated the global average with HadSST3 using my 3D averaging method (spherical triangulation). This is exactly the same data as that used by HadCRUT4. Both results are compared below, together with those using GHCN-V4 and HadSST3.
Comparison of methodology and station data used to calculate annual global temperatures.
The largest difference is between spatial integration techniques. Exactly the same data produces a difference of 0.07C in the 2019 result. The reason is simply that spherical triangulation makes an implicit interpolation over both poles, whereas the traditional 5×5 lat/lon grid averages over just the occupied cells. This difference varies from year to year depending on how much more high-latitude stations warm compared to those at lower latitudes. So in 2004 and 2015 there was little difference between the two methods, as compared to say 2016.
Here is the 3D grid used to calculate the anomaly for December 2019.
The spherical triangulation grid for December 2019.
This shows how the triangulation connects all station locations, covering the whole of the earth's surface, and as a result interpolates the temperature from the 3 vertices across each triangular area. The coloured triangles show the temperature anomalies relative to 1961-1990.
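The triangulation-and-average step can be sketched with SciPy. A minimal version, under stated assumptions: stations are projected onto the unit sphere, the convex hull of points on a sphere yields a Delaunay-style triangulation covering the whole surface (so the poles are implicitly interpolated), each triangle is assigned the mean of its 3 vertex anomalies, and triangles are weighted by their flat-facet area as an approximation to spherical area. This is my reading of the method, not the author's actual code.

```python
import numpy as np
from scipy.spatial import ConvexHull

def to_xyz(lat, lon):
    """Convert latitude/longitude in degrees to unit-sphere Cartesian."""
    phi, lam = np.radians(lat), np.radians(lon)
    return np.column_stack([np.cos(phi) * np.cos(lam),
                            np.cos(phi) * np.sin(lam),
                            np.sin(phi)])

def triangulated_average(lat, lon, anom):
    """Area-weighted average over a spherical triangulation of stations.

    The convex hull of points on the unit sphere triangulates the
    entire sphere, so every region (including the poles) falls inside
    some triangle and is interpolated from its 3 vertices.
    """
    xyz = to_xyz(lat, lon)
    tri = ConvexHull(xyz).simplices            # (ntri, 3) vertex indices
    a, b, c = xyz[tri[:, 0]], xyz[tri[:, 1]], xyz[tri[:, 2]]
    # flat-triangle area as an approximation to the spherical area
    areas = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)
    tri_mean = anom[tri].mean(axis=1)          # mean of 3 vertex anomalies
    return (areas * tri_mean).sum() / areas.sum()
```

Unlike the gridded average, the triangle areas always sum to the full sphere, so warm Arctic stations pull the global mean up rather than being confined to a few occupied cells.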
This procedure also gives very similar results to those of Cowtan & Way who instead use a kriging technique to interpolate into polar regions.
The December 2019 temperature was up by 0.12C from November, at 1.01C relative to 1961-1990. This completes the annual global average temperature for 2019, making it the second warmest year at 0.86C, just 0.015C cooler than 2016. All these values are calculated using GHCN-V4 and HadSST3 with spherical triangulation.
Here is the temperature distribution for December.
Notice in particular the warmer than average temperatures across Australia, parts of the US and Northern Europe plus the ocean hot spot west of New Zealand.
Here are the final trend results for 2019,
Recent Monthly temperature trends
and the Annual trends showing 2019 slightly cooler than 2016.
Annual average temperature anomalies
So 2019 ended with a warm December and Australia suffering terrible bush fires. I will be there in 3 weeks' time and was planning to visit the Blue Mountains first.
A baseline is simply a period of successive years whose average temperature is subtracted from a time series to produce temperature “anomalies”.
One normally associates anomalies with weather station data, where the baseline is the set of 12 monthly seasonal averages for each station. What is less well known is that climate model projections also need a baseline, and that the end result depends on both the choice and the length of that baseline period. Subtle changes in this choice can transform the apparent agreement between models and data. We have already met this effect once before when reviewing Zeke's wonder plot. The question one might ask is: why would climate models need any normalising at all?
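The station-data case is worth pinning down before looking at models. A minimal sketch, with hypothetical array inputs: for each calendar month, the mean over the baseline years (1961-1990 here) is the seasonal normal, and the anomaly is the departure from that normal.

```python
import numpy as np

def monthly_anomalies(years, months, temps, base=(1961, 1990)):
    """Convert a station's monthly temperatures to anomalies.

    The baseline is the mean for each calendar month over `base`
    (default 1961-1990), subtracted from every value of that month.
    Illustrative sketch with hypothetical parallel arrays.
    """
    years, months, temps = map(np.asarray, (years, months, temps))
    in_base = (years >= base[0]) & (years <= base[1])
    anoms = np.empty(temps.shape, dtype=float)
    for m in range(1, 13):
        clim = temps[in_base & (months == m)].mean()   # seasonal normal
        anoms[months == m] = temps[months == m] - clim
    return anoms
```

Subtracting a per-month normal removes the seasonal cycle, which is why anomalies from stations at very different latitudes can be averaged together at all.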
The underlying reason is simple: climate models do not conserve energy at the observed surface temperature. They cannot balance the energy in from the sun with the energy out from IR radiation except by adjusting the mean surface temperature. This problem was beautifully explained by Ed Hawkins in a 2015 talk and later on his blog.
CMIP5 models all predict different average surface temperatures. The model projections that we see in various IPCC reports and in the press have all been normalised to some arbitrary common baseline, but they are not normalised in the same way as the measured temperature data. Instead they are each artificially shifted so that every model averages to zero during the chosen baseline period. As a direct result, all models can now agree with each other that the temperature anomaly is zero within this selected baseline.
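The shift itself is just per-model mean subtraction. A sketch, assuming the ensemble arrives as an (nmodels, nyears) array of absolute temperatures:

```python
import numpy as np

def normalise_models(years, models, base=(1961, 1990)):
    """Shift each model run so it averages to zero over the baseline.

    `models` is a hypothetical (nmodels, nyears) array of absolute
    temperatures; each row gets its own constant offset. This is how
    runs with very different absolute temperatures are forced to agree
    within the baseline window.
    """
    years = np.asarray(years)
    in_base = (years >= base[0]) & (years <= base[1])
    offsets = models[:, in_base].mean(axis=1, keepdims=True)
    return models - offsets
```

Note that each row gets a different offset, unlike the single constant shift applied to the observational series, so the ensemble's internal spread within the baseline window is collapsed by construction.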
Model projections so adjusted can now be compared to the data, once that too has been shifted to the same baseline. This is an arbitrary shift without any scientific basis, yet the agreement between models and data now depends simply on that choice of baseline. Models can essentially be tuned to fit the temperature data by selecting an optimum timebase. This is the dirty secret behind climate science.
Figure 2 from Ed Hawkins demonstrates this effect perfectly. Models can appear to agree almost perfectly with the data if the baseline spans recent years. Agreement gets much worse when you select a more standard baseline like 1961-1990. This is also why Zeke got such good agreement for models, as shown below. He chose a baseline which spans the full time interval of all the displayed data!
Using his baseline, the models and the data had to agree, because by definition both agree that the average temperature anomaly over that period is zero, and indeed it is!
There is another trick which can be used to fine-tune agreement: varying the length of the baseline period. This changes the relative spread of the models because of short-timespan variations between them. Figure 3 demonstrates the effect of varying the baseline timespan. This animation is from Ed Hawkins' blog and shows how short timebases can radically change the dispersion of models.
Choosing a shorter or longer baseline period affects the spread and ordering of individual model projections. This is because the baseline captures just one snapshot of model variability, freezing it in based on a single time interval.
In general, a series of measured temperature anomalies can be moved to a different baseline by a linear shift of all points. Here, for example, is GHCN-V4 calculated on different timebases.
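That linear shift is a one-liner: subtract the series' own mean over the new baseline period. A sketch with hypothetical annual-series inputs:

```python
import numpy as np

def rebaseline(years, anoms, new_base=(1981, 2010)):
    """Shift a series of annual anomalies onto a new baseline period.

    Subtracting the series' own mean over the new period moves every
    point by the same constant, so year-to-year differences and trends
    are completely unchanged; only the zero line moves.
    """
    years, anoms = np.asarray(years), np.asarray(anoms)
    in_base = (years >= new_base[0]) & (years <= new_base[1])
    return anoms - anoms[in_base].mean()
```

Because the shift is a single constant for the whole series, rebaselining observational data is harmless; the trouble described above arises only when each member of a model ensemble gets its own independent shift.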
It is the model projections which change dramatically when different baselines are used. One should always bear this in mind whenever an ensemble of models seems to match the data perfectly. They are simply constrained to do so! That is also why, at any given time, future projections fan out so dramatically 30 years into the future.
Don’t worry though because in 30 years time all the models will yet again agree with those measured temperature anomalies !