I posted the following comment on RealClimate today concerning the AR5 SPM statement “It is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together”. I also got a good response from Gavin Schmidt, which I appreciate. Here it is:
I can’t help feeling that there is a certain amount of circular argument going on here.
“Over-estimates of model sensitivity would be accounted for in the methodology (via a scaling factor of less than one), and indeed, a small over-estimate (by about 10%) is already factored in.”
It seems clear that each model is tuned to match past temperature trends through individual adjustments to external forcings, feedbacks and internal variability. The results from these tuned models are then re-presented (via Figure 2 above) as strong evidence that nearly all observed warming is anthropogenic, as predicted. How could it be anything else?
[Response: Your premise is not true, and so your conclusions do not follow. Despite endless repetition of these claims, models are *not* tuned on the trends over the 20th Century. They just aren’t. And in this calculation it wouldn’t even be relevant in any case, because the fingerprinting is done without reference to the scale of the response – just the pattern. That the scaling is close to 1 for the models is actually an independent validation of the model sensitivity. – gavin]
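To make sure I understood Gavin’s point about pattern-only fingerprinting, here is a toy sketch of the idea (entirely my own illustration with synthetic data, not the actual detection-and-attribution code): the observations are regressed onto the model’s response *pattern*, and the amplitude comes out of the fit as a scaling factor, so a scaling near 1 would indeed be an independent check on the model’s sensitivity.

```python
# Illustrative sketch only: a toy pattern-based fingerprinting regression
# with synthetic data, not the IPCC attribution methodology.
import numpy as np

rng = np.random.default_rng(0)

n = 100                               # number of grid cells / time points
pattern = rng.normal(size=n)          # model-predicted response PATTERN (shape only)
true_scale = 0.9                      # "real world" amplitude of that pattern
noise = 0.5 * rng.normal(size=n)      # internal variability
obs = true_scale * pattern + noise    # synthetic "observations"

# Ordinary least squares for the scaling factor beta in: obs = beta * pattern + residual.
# Only the pattern enters the fit; its amplitude is estimated from the
# observations themselves, so beta ~ 1 is an independent amplitude check.
beta = np.dot(pattern, obs) / np.dot(pattern, pattern)
print(f"estimated scaling factor: {beta:.2f}")
```

The real calculations use total least squares and noise estimates from control runs, but the logic is the same.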
Despite this, we then read in chapter 9 that:
“Almost all CMIP5 historical simulations do not reproduce the observed recent warming hiatus. There is medium confidence that the GMST trend difference between models and observations during 1998–2012 is to a substantial degree caused by internal variability, with possible contributions from forcing error and some CMIP5 models overestimating the response to increasing greenhouse-gas forcing.”
The AR5 explanation for the hiatus as given in chapter 9 is basically that about half of the pause is natural – a small reduction in TSI and more aerosols from volcanoes, while the other half is unknown – including perhaps oversensitivity in models.
[Response: When people say ‘half/half’ it is usually a sign that the analysis has not yet been fully worked out (which is this case here). – gavin]
Then on page 9.5 we read “There is very high confidence that the primary factor contributing to the spread in equilibrium climate sensitivity continues to be the cloud feedback.”
[Response: This is a separate issue and the statement is completely true. – gavin]
How much of this inter-model variability has actually been hidden under the ANT term?
[Response: The cloud feedback variation (and the consequent variation in TCR) goes into the analysis by producing different scalings for the different model responses. The mean scaling is about 0.9 (though there is presumably a spread), and that feeds directly into the uncertainty in the attribution. If all the models had the same sensitivity, the errors would be less. – gavin]
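As a back-of-envelope check on Gavin’s last point, here is how a spread of per-model scaling factors (mean about 0.9, as he describes) would translate into a spread in attributable warming. All the numbers below are invented for illustration; they are not CMIP5 values.

```python
# Hypothetical numbers only: per-model ANT warming trends (degC over
# 1951-2010) and fingerprint scaling factors, invented for illustration.
import numpy as np

model_ant_trend = np.array([0.75, 0.65, 0.80, 0.70, 0.60])  # raw ANT warming per model
scalings        = np.array([0.85, 1.00, 0.80, 0.95, 0.90])  # fingerprint scaling per model

# Scaled attributable warming per model; the spread across models is one
# contribution to the attribution uncertainty.
attributable = scalings * model_ant_trend
print(f"mean attributable warming: {attributable.mean():.2f} degC")
print(f"spread (std):              {attributable.std():.2f} degC")
```

If all the models had the same sensitivity, the scalings (and hence this spread) would collapse, which is exactly the point being made.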
So I went back to chapter 10 of AR5 to try and understand what exactly model hindcasts are saying. Gavin insists that models are NOT tuned to match past temperature data. My conclusion is that he is right and the models are not themselves tuned. Instead the external forcings are tuned! Evidence for this can be seen in Figure 10.1 as shown below.
The key plot for me is d), which shows the total forcing for each model instance in the ensemble. They are ALL different. So each model has different values for GHG forcing and for natural forcings. How exactly are these determined? Do they all use the same basic data – CO2 levels, volcanic aerosols, natural forcing? If so, why are those models that grossly overshoot or undershoot the temperature data not simply rejected? Most other branches of physics end up with a standard model which simply works until eventually disproved.
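To illustrate the kind of spread panel d) shows, here is a toy calculation (all W/m² values invented, and the model names are placeholders): even if every model shares the same well-mixed GHG forcing, a model-specific aerosol forcing is enough to make the *total* effective forcing diverge across the ensemble.

```python
# Toy illustration with invented W/m^2 values: a shared GHG forcing plus a
# model-specific aerosol forcing yields different total effective forcings.
ghg_forcing = 2.3                      # common well-mixed GHG forcing (hypothetical)
aerosol_forcing = {                    # model-specific values (hypothetical)
    "model_A": -0.4,
    "model_B": -0.9,
    "model_C": -1.3,
}
natural_forcing = 0.1                  # solar + volcanic, shared here for simplicity

for name, aer in aerosol_forcing.items():
    total = ghg_forcing + aer + natural_forcing
    print(f"{name}: total effective forcing = {total:.1f} W/m^2")
```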
So I am confused – unless the real intention is to increase the error bands on CMIP5 ensemble projections to cover uncertain natural variability into the future.
Update 13/10: I still haven’t been banned. I am only getting a small amount of abuse, and I am still getting reasonable answers…
Hank Roberts says:
For Clive Best: You’ve been fed a line and swallowed it.
Remember the “trick” stories?
“Tuning” stories are often the same sort of deceit.
You fell for it.
“Tuning” means fitting the physics, not matching the past.
Read past the title of this post:
Thanks for that clarification, Hank.
You say “Tuning” means fitting the physics, not matching the past. I agree that this is an honest procedure if normalization is done just once and then fixed in time. I also accept that such models reproduce the observed warming from 1950 to 2000 well.
Maybe I am just being thick here, but can you please explain to me why these normalized CMIP5 models end up with such different external forcings, as shown for example in Fig 10.1 d) of AR5?
[Response: These are the effective forcings, not the climate responses. And the variations in the effective forcings are mainly a function of the aerosol modules, the base climatology and how the indirect effects are parameterised. This diagnostic is driven ultimately by the aerosol emission inventories, but the distribution of aerosols and their effective forcing is a complicated function of the elements I listed above. They are different in different models because the aerosol models, base climatology (including winds and rainfall rates) and interactions with clouds are differently parameterised. – gavin]
Does this not reflect variations in the climate sensitivity of the underlying physics between models?
[Response: No. This is not the temperature response, though it does reflect differences in some aspects of the underlying physics (though not in any trivial way). – gavin]
If so, how do we make progress to determine the optimum model? Is it even possible to have one standard climate model?
[Response: We don’t. And there isn’t. There is inherent uncertainty in modelling the system with finite computational capacity and imperfect theoretical understanding, and we need to sample that. The CMIP ensemble is not a perfect design for doing so, but it does a reasonable job. Making predictions should be a function of those model simulations but also our ability to correct for biases, adjust for incompleteness and weight for skill where we can. Model variation will grow in the future as we sample more of that real uncertainty, but with better ‘out-of-sample’ tests and deeper understanding, predictions (and projections) may well get better. – gavin]
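Gavin’s mention of “weighting for skill” got me thinking about what that might mean in practice. Here is a minimal sketch of one simple scheme (my own illustration with invented numbers, not any actual CMIP methodology): weight each model’s projection by the inverse of its squared hindcast error against observations.

```python
# Toy skill-weighting sketch with invented numbers: models that hindcast
# the observed trend more closely get more weight in the combined projection.
obs_trend   = 0.12                # observed trend, degC/decade (hypothetical)
hindcasts   = [0.10, 0.16, 0.21]  # each model's hindcast trend (hypothetical)
projections = [1.8, 2.4, 3.1]     # each model's future warming, degC (hypothetical)

eps = 1e-3                        # regulariser to avoid division by zero
weights = [1.0 / ((h - obs_trend) ** 2 + eps) for h in hindcasts]
wsum = sum(weights)
weighted = sum(w * p for w, p in zip(weights, projections)) / wsum
print(f"skill-weighted projection: {weighted:.2f} degC")
```

Even this crude scheme pulls the combined projection toward the models that best match the past, which is presumably part of what “weight for skill where we can” is getting at.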