Ed Hawkins has an interesting new paper which discusses why models use temperature anomalies and how the choice of normalisation period can affect the results. The main reason why anomalies are needed in the first place is that the models have a large spread in absolute Global Surface Temperature (GST) values. The paper makes this clear.
It may be a surprise to some readers that an accurate simulation of global mean temperature is not necessarily an essential pre-requisite for accurate global temperature projections.
Calculated GST values can differ by up to 3C, which is nearly 4 times larger than all observed global warming since 1850. Ed argues that this does not matter since the trends are all rather similar, so by calculating anomalies such offsets are simply subtracted out. The use of anomalies is common practice for sparse weather station data, but that argument does not really apply to models, which are typically run on a ~1 degree grid covering the whole surface. This all sounds a bit fishy, so I decided to look in more detail at where such large temperature differences occur, and whether this is not simply papering over other systematic problems with the models. I selected GISS-E2-R, which has a low climate sensitivity, and GFDL-CM3, which has a high sensitivity. The plot below shows the net temperature difference by 2100 between the ‘hot’ running GFDL-CM3 and the ‘cooler’ GISS-E2-R. The former has a temperature increase of 5.6C by 2100 and the latter of 3C.
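For anyone wanting to reproduce this kind of map, here is a minimal sketch of the calculation, assuming monthly CMIP5 ‘tas’ (near-surface air temperature) files for each model under RCP8.5. The file names are hypothetical placeholders, and a careful comparison would use conservative regridding rather than the quick linear interpolation shown here.

```python
# A minimal sketch (not necessarily the exact workflow behind the plot) of the
# 2100 difference map, GFDL-CM3 minus GISS-E2-R, from CMIP5 monthly 'tas' files.
# File names are hypothetical placeholders.
import xarray as xr

gfdl = xr.open_dataset("tas_Amon_GFDL-CM3_rcp85_r1i1p1_209001-210012.nc")["tas"]
giss = xr.open_dataset("tas_Amon_GISS-E2-R_rcp85_r1i1p1_209001-210012.nc")["tas"]

# Average the last decade of the century for each model
gfdl_2100 = gfdl.sel(time=slice("2090", "2100")).mean("time")
giss_2100 = giss.sel(time=slice("2090", "2100")).mean("time")

# The two models use different native grids, so interpolate one onto the other
# (quick linear regrid; a careful comparison would use conservative remapping)
giss_on_gfdl = giss_2100.interp_like(gfdl_2100)

diff = gfdl_2100 - giss_on_gfdl   # net temperature difference by 2100
diff.plot()                       # map showing where the models diverge most
```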
The largest differences are in the Arctic, Himalayas, Andes and central Africa. This pattern of differences remains consistent throughout the decadal projections since 1861, and by 2100 the Arctic discrepancy of over 5C has spread across Russia and into Europe and Canada.
We can also compare how the Arctic sea ice cover evolves in both models at five time steps: 1861, 2015, 2035, 2061 and 2099. This is shown below.
Summer ice essentially disappears about 40 years earlier in GFDL-CM3 than in GISS-E2-R.
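The sea ice comparison can be sketched in the same way. The snippet below (again with hypothetical file names, and ocean grid coordinate names that vary between models) estimates each model’s September Arctic sea ice extent from the CMIP5 sea ice concentration field ‘sic’ using the usual 15% threshold; the year in which the September extent falls towards zero is where summer ice effectively disappears.

```python
# Sketch: September Arctic sea-ice extent per year from CMIP5 'sic' (percent)
# plus the ocean grid-cell areas 'areacello' (normally a separate fx file).
# File names are hypothetical; coordinate names vary between models.
import xarray as xr

def september_extent(sic_file, area_file):
    sic = xr.open_dataset(sic_file)["sic"]             # monthly sea-ice concentration, %
    area = xr.open_dataset(area_file)["areacello"]     # ocean grid-cell area, m^2
    sept = sic.where(sic["time.month"] == 9, drop=True)  # September fields only
    arctic = sept.where(sept["lat"] > 0)                # keep Northern Hemisphere cells
    icy = arctic > 15.0                                 # standard 15% extent threshold
    return (icy * area).sum(dim=area.dims) / 1e12       # extent in million km^2 per year

gfdl_ice = september_extent("sic_OImon_GFDL-CM3_rcp85_r1i1p1.nc",
                            "areacello_fx_GFDL-CM3_rcp85_r0i0p0.nc")
giss_ice = september_extent("sic_OImon_GISS-E2-R_rcp85_r1i1p1.nc",
                            "areacello_fx_GISS-E2-R_rcp85_r0i0p0.nc")
```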
There seems to be a gentleman’s agreement among modelling groups not to criticise other members of the CMIP5 ensemble. I understand that a huge amount of effort goes into developing each GCM, but they simply can’t all be correct. Physics has always been based on developing theoretical models to describe nature. These models make predictions which can then be tested by experiment. If the results of these experiments disagree with the predictions, then either the model is updated to explain the new data, or else it is discarded. Why should climate science be different?
Surely we are already at the stage where we can distinguish between models based on measurements. Why is this not being done?
So let’s just compare global temperature anomalies directly.
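A bare-bones version of that comparison is sketched below, assuming concatenated historical + RCP8.5 ‘tas’ files for each model (hypothetical names): take the area-weighted global mean, form annual means, and subtract a common baseline (1961–1990 here) so that the ~3C absolute offset between the models drops out. Observed anomalies (e.g. HadCRUT4) referenced to the same baseline can then be overlaid directly.

```python
# Sketch of the direct comparison of global temperature anomalies.
# Assumes 'tas' on a regular lat-lon grid; file names are hypothetical.
import numpy as np
import xarray as xr

def global_anomaly(path, baseline=(1961, 1990)):
    tas = xr.open_dataset(path)["tas"]
    weights = np.cos(np.deg2rad(tas.lat))                   # area weighting by latitude
    gmst = tas.weighted(weights).mean(dim=["lat", "lon"])   # global mean surface air temp
    annual = gmst.groupby("time.year").mean()               # annual means
    base = annual.sel(year=slice(*baseline)).mean()         # 1961-1990 reference value
    return annual - base                                    # anomaly relative to baseline

gfdl_anom = global_anomaly("tas_Amon_GFDL-CM3_historical-rcp85_r1i1p1.nc")
giss_anom = global_anomaly("tas_Amon_GISS-E2-R_historical-rcp85_r1i1p1.nc")
# Overlay observed annual anomalies (e.g. HadCRUT4 on the same baseline) to compare.
```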
It would appear that the data is already pointing one way. The data certainly favours the low sensitivity model GISS-E2-R and essentially excludes the high sensitivity GFDL-CM3. RCP8.5 represents more than a quadrupling of CO2 forcing by 2100, equivalent to roughly 2.2 effective doublings of CO2, so the warming by 2100 is about 2.2 × TCR. TCR therefore looks likely to be ~1.4C, and a value of 2.5C is ruled out.
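As a rough sanity check on that arithmetic (a back-of-the-envelope sketch, not the paper’s calculation), the warming each model shows by 2100 can simply be divided by the ~2.2 effective CO2 doublings implied by the RCP8.5 forcing:

```python
# Back-of-the-envelope TCR estimate: assume warming scales roughly linearly
# with forcing, so dT(2100) ~ TCR * (effective CO2 doublings under RCP8.5).
doublings = 2.2    # effective CO2 doublings by 2100 under RCP8.5 (figure used in the text)

dT_giss = 3.0      # C, GISS-E2-R warming by 2100 (from the text above)
dT_gfdl = 5.6      # C, GFDL-CM3 warming by 2100 (from the text above)

print(f"Implied TCR, GISS-E2-R: {dT_giss / doublings:.1f} C")   # ~1.4 C
print(f"Implied TCR, GFDL-CM3:  {dT_gfdl / doublings:.1f} C")   # ~2.5 C
```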