RealClimate: AR5 attribution statement

I posted the following comment on RealScience today concerning the AR5 SPM statement “It is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together”. I also got a good response from Gavin Schmidt, which I appreciate. Here it is:

I can’t help feeling that there is a certain amount of circular argument going on here.
“Over-estimates of model sensitivity would be accounted for in the methodology (via a scaling factor of less than one), and indeed, a small over-estimate (by about 10%) is already factored in.”

It seems clear that each model is tuned to match past temperature trends through individual adjustments to external forcings, feedbacks and internal variability. Then the results from these tuned models are re-presented (via Figure 2 above) as giving strong evidence that nearly all observed warming is anthropogenic, as predicted. How could it be anything else?

[Response: Your premise is not true, and so your conclusions do not follow. Despite endless repetition of these claims, models are *not* tuned on the trends over the 20th Century. They just aren’t. And in this calculation it wouldn’t even be relevant in any case, because the fingerprinting is done without reference to the scale of the response – just the pattern. That the scaling is close to 1 for the models is actually an independent validation of the model sensitivity. – gavin]

Despite this, we then read in chapter 9 that:

“Almost all CMIP5 historical simulations do not reproduce the observed recent warming hiatus. There is medium confidence that the GMST trend difference between models and observations during 1998–2012 is to a substantial degree caused by internal variability, with possible contributions from forcing error and some CMIP5 models overestimating the response to increasing greenhouse-gas forcing.”

The AR5 explanation for the hiatus as given in chapter 9 is basically that about half of the pause is natural – a small reduction in TSI and more aerosols from volcanoes, while the other half is unknown – including perhaps oversensitivity in models.

[Response: When people say ‘half/half’ it is usually a sign that the analysis has not yet been fully worked out (which is this case here). – gavin]

Then on page 9.5 we read “There is very high confidence that the primary factor contributing to the spread in equilibrium climate sensitivity continues to be the cloud feedback.”

[Response: This is a separate issue and the statement is completely true. – gavin]

How much of this inter-model variability has actually been hidden under the ANT term?

[Response: The cloud feedback variation (and the consequent variation in TCR) goes into the analysis by producing different scalings for the different model responses. The mean scaling is about 0.9 (though there is presumably a spread), and that feeds directly into the uncertainty in the attribution. If all the models had the same sensitivity, the errors would be less. – gavin]
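
To check that I understand the pattern-versus-amplitude point, here is a toy regression sketch. All numbers are invented, and this is nothing like the full space-time optimal fingerprinting with noise covariances actually used in AR5: the idea is only that the fitted scaling factor absorbs any amplitude error in the model response, so detection rests on the pattern, while the scaling itself, multiplied by the model's own sensitivity, is what tests the sensitivity.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1951, 2011)

# Invented "true" anthropogenic warming plus internal variability -> pseudo-observations
true_ant = 0.010 * (years - years[0])              # 0.59 C over 1951-2010
obs = true_ant + rng.normal(0.0, 0.08, years.size)

# A model whose response has the right shape but is 25% too strong
model_ant = 1.25 * 0.010 * (years - years[0])

# Least-squares regression of obs onto the model pattern: obs ~ beta * model_ant
beta = model_ant @ obs / (model_ant @ model_ant)
print(f"scaling factor beta ~ {beta:.2f}")          # ~ 0.8, i.e. 1/1.25

# Attributed warming = beta * model-simulated warming, so the model's amplitude
# error cancels; only the pattern matters for the attribution itself.
print(f"attributed 1951-2010 warming ~ {beta * model_ant[-1]:.2f} C")

# The scaling also rescales the model's sensitivity: with a hypothetical model
# TCR of 2.0 C, beta ~ 0.8 would imply an observationally constrained ~1.6 C.
print(f"implied TCR ~ {beta * 2.0:.2f} C")
```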

So I went back to chapter 10 of AR5 to try to understand what exactly the model hindcasts are saying. Gavin insists that models are NOT tuned to match past temperature data. My conclusion is that he is right and the models are not themselves tuned. Instead, the external forcings are tuned! Evidence for this can be seen in Figure 10.1, shown below.

Fig 10.1 from AR5

The key plot for me is d), which shows the total forcing for each model instance in the ensemble. They are ALL different. So each model has different values for GHG forcing and for natural forcings. How exactly are these determined? Do they all use the same basic data – CO2 levels, volcanic aerosols, natural forcing? If so, why are those models that grossly overshoot or undershoot the temperature data not simply rejected? Most other branches of physics end up with a standard model which simply works until eventually disproved.

So I am confused – unless the real intention is to increase the error bands on CMIP5 ensemble projections to cover uncertain natural variability into the future.
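
For what it is worth, one way the very different total forcings in Fig 10.1 d) could still be consistent with similar temperature hindcasts is the compensation noted by Kiehl (2007) (see the references in the comments below): higher-sensitivity models tend to carry a stronger negative aerosol forcing. A minimal sketch, assuming only a zero-dimensional ΔT ≈ λΔF relation and invented numbers:

```python
# Toy energy-balance sketch of the Kiehl (2007) compensation: models with
# different sensitivity can hindcast similar 20th-century warming if their
# net (GHG + aerosol) effective forcing differs accordingly.  Numbers invented.

ghg_forcing = 2.6        # W/m^2, roughly common to all models
warming = 0.7            # 20th-century warming to be matched, C

models = {               # hypothetical sensitivity parameter lambda, C per (W/m^2)
    "lower-sensitivity model": 0.5,
    "higher-sensitivity model": 0.9,
}

for name, lam in models.items():
    net_forcing = warming / lam                  # forcing needed to match the warming
    aerosol_forcing = net_forcing - ghg_forcing  # implied (negative) aerosol forcing
    print(f"{name}: net forcing {net_forcing:.2f} W/m^2, "
          f"implied aerosol forcing {aerosol_forcing:.2f} W/m^2")

# The more sensitive model needs less net forcing, i.e. a stronger negative
# aerosol term, which is one reason panel d) can show different total forcings
# per model while the temperature hindcasts all look similar.
```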

Update 13/10: I still haven’t been banned, have received only a small amount of abuse, and am still getting reasonable answers…

Hank Roberts says:

For Clive Best: You’ve been fed a line and swallowed it.

Remember the “trick” stories?

“Tuning” stories are often the same sort of deceit.
You fell for it.

“Tuning” means fitting the physics, not matching the past.

Read past the title of this post:

Thanks for that clarification, Hank.

You say “Tuning” means fitting the physics, not matching the past. I agree that this is an honest procedure if normalization is done just once and then fixed in time. I also accept that such models indeed reproduce the observed warming from 1950 to 2000 well.

Maybe I am just being thick here – but please can you explain to me why these normalized CMIP5 models end up with such different external forcings, as shown for example in Fig 10.1 d) of AR5?

[Response: These are the effective forcings, not the climate responses. And the variations in the effective forcings are mainly a function of the aerosol modules, the base climatology and how the indirect effects are parameterised. This diagnostic is driven ultimately by the aerosol emission inventories, but the distribution of aerosols and their effective forcing is a complicated function of the elements I listed above. They are different in different models because the aerosol models, base climatology (including winds and rainfall rates) and interactions with clouds are differently parameterised. – gavin]

Does this not reflect variations in climate sensitivity of the underlying physics between models?

[Response: No. This is not the temperature response, though it does reflect differences in some aspects of the underlying physics (though not in any trivial way). – gavin]

If so, how do we make progress towards determining the optimum model? Is it even possible to have one standard climate model?

[Response: We don’t. And there isn’t. There is inherent uncertainty in modelling the system with finite computational capacity and imperfect theoretical understanding and we need to sample that. The CMIP ensemble is not a perfect design for doing so, but it does a reasonable job. Making predictions should be a function of those models simulations but also our ability to correct for biases, adjust for incompleteness and weight for skill where we can. Model variation will grow in the future as we sample more of that real uncertainty, but with better ‘out-of-sample’ tests and deeper understanding, predictions (and projections) may well get better. – gavin]

About Clive Best

PhD in High Energy Physics. Worked at CERN, Rutherford Lab, JET, JRC, OSVision.

9 Responses to RealClimate: AR5 attribution statement

  1. Euan Mearns says:

    Hi Clive,

    “It is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together”

    This in fact lies close to my long held position. I would say “perhaps as much as half”. But for holding this view I have been vilified by some as a “Flat Earther”. My current view actually fits into the bottom end of the IPCC spectrum. All the IPCC needs now is a leader like Blair to seamlessly slide into the shoes of the sceptic camp and claim victory for “the science”.

  2. Peter Tillman says:

    Good on you, mate, for bearding Gavin in his den, and keeping your cool in the process. Do keep this post updated, if you will, and let us know the outcome of the discussion.

    Best regards,
    Peter D. Tillman
    Professional geologist, amateur climatologist

  3. oneuniverse says:

    Hi Clive, thank you.

    In case you haven’t come across them, these two papers comment on this topic of the different forcings used for different models:

    Kiehl, J. T. (2007), Twentieth century climate model response and climate sensitivity, Geophys. Res. Lett., 34, L22710, doi:10.1029/2007GL031383.

    Knutti, R. (2008), Why are climate models reproducing the observed global surface warming so well?, Geophys. Res. Lett., 35, L18704, doi:10.1029/2008GL034932.

  4. Ken Gregory says:

    Typo alert: The first line says “I posted the following comment on RealScience today”, but the link goes to the RealClimate website.

  5. Your first comment
    “Can you explain why in figure 2(10.5) the error bar on ANT is so small? Naively I would expect this to be the sum of GHG and OA. This would then work out to be an error on ANT of sqrt(2*0.36) = 0.8C. This is also not explained in chapter 10.”
    seems to be a good one, and got a rather waffly reply from Gavin about ‘degeneracies in the footprint’.
    There’s another figure, 8.16, that seems to support your comment and does not fit Gavin’s answer. It shows a much wider pdf for ‘total anthro’ forcing than GHGs.

    • Clive Best says:

      Yes – Gavin tends to reject such observations as simply being dumb!

      He said this is all explained in 10.3.1.1.3, second paragraph. It is quoted below. Unfortunately, it makes me think the errors should be much larger.

      The results of multiple regression analyses of observed temperature changes onto the simulated responses to greenhouse gas, other anthropogenic, and natural forcings are shown in Figure 10.4 (Ribes and Terray, 2013; Gillett et al., 2013; Jones et al., 2013). The results, based on HadCRUT4 and a multi-model average, show robustly detected responses to greenhouse gas in the observational record whether data from 1861–2010 or only from 1951–2010 are analysed (Figure 10.4b). The advantage of analysing the longer period is that more information on observed and modelled changes is included, while a disadvantage is that it is difficult to validate climate models’ estimates of internal variability over such a long period. Individual model results exhibit considerable spread among scaling factors, with estimates of warming attributable to each forcing sensitive to the model used for the analysis (Figure 10.4; Ribes and Terray, 2013; Gillett et al., 2013; Jones et al., 2013), the period over which the analysis is applied (Figure 10.4; Gillett et al., 2013; Jones et al., 2013), and the EOF truncation or degree of spatial filtering (Ribes and Terray, 2013; Jones et al., 2013). In some cases the greenhouse gas response is not detectable in regressions using individual models (Figure 10.4; Ribes and Terray, 2013; Gillett et al., 2013; Jones et al., 2013), or a residual test is failed (Ribes and Terray, 2013; Gillett et al., 2013; Jones et al., 2013), indicating a poor fit between the simulated response and observed changes. Such cases are probably due largely to errors in the spatio-temporal pattern of responses to forcings simulated in individual models (Ribes and Terray, 2013), although observational error and internal variability errors could also play a role. Nonetheless, analyses in which responses are averaged across multiple models generally show much less sensitivity to period and EOF truncation (Gillett et al., 2013; Jones et al., 2013), and more consistent residuals (Gillett et al., 2013), which may be because model response errors are smaller in a multi-model mean.

      I cannot understand how this explains why the fingerprint ‘ANT’ error is very small whereas those for ‘GHG’ and ‘AER’ are very large!
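
      My own reading of the ‘degeneracy’ point, as a toy regression (all numbers invented, nothing from AR5): when the GHG and other-anthropogenic response patterns are nearly proportional to each other, the regression can trade one against the other, so the individual scaling factors are poorly constrained while their combined (ANT) contribution is tightly constrained.

      ```python
      import numpy as np

      rng = np.random.default_rng(1)
      t = np.linspace(0.0, 1.0, 60)                  # 60 "years", normalised

      # Two nearly collinear response patterns (both essentially ramps)
      x_ghg = t                                      # greenhouse-gas warming pattern
      x_oa = -0.5 * t + 0.02 * np.sin(6 * t)         # other-anthropogenic cooling pattern

      ghg, oa, ant = [], [], []
      for _ in range(2000):
          obs = 0.9 * (x_ghg + x_oa) + rng.normal(0.0, 0.05, t.size)
          X = np.column_stack([x_ghg, x_oa])
          b, *_ = np.linalg.lstsq(X, obs, rcond=None)
          ghg.append(b[0] * x_ghg[-1])               # attributed GHG warming
          oa.append(b[1] * x_oa[-1])                 # attributed OA contribution (negative)
          ant.append(ghg[-1] + oa[-1])               # their sum = ANT

      for name, v in (("GHG", ghg), ("OA", oa), ("ANT", ant)):
          v = np.asarray(v)
          print(f"{name}: {v.mean():+.2f} +/- {v.std():.2f}")
      # GHG and OA individually come out with large, anti-correlated error bars,
      # but their sum (ANT) is tightly constrained - which is how a small ANT error
      # can coexist with large GHG and OA errors.
      ```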

  6. Kasuha says:

    I’d say this is largely wordplay. We may say that models are “evolved” to match observations, if “tuning” is such a bad word.
    How is a model made? You take the laws of physics. You take observations. You do a lot of analysis and regressions. You make a few educated guesses. You feed the computer with all this and then you run the model. Then quality control comes and says yes or no: yes if the model behaves, no if it doesn’t. Repeat for a sufficient number of cycles and you have a great example of natural selection in practice. And when does the model behave? When it matches observations, of course.

    Models are no better than their inputs – physics implementation, observation analysis, regressions, educated guesses. And quality control. If simulating oceanic cycles wasn’t a priority in quality control, the models didn’t have to simulate them. If following the temperature trend was a priority in quality control, the models evolved to follow it.

  7. Christian says:

    The models do not show the rapid warming around 1910 and 1940. There are factors missing between the weak modelled 0.1 K/30a and the observed 0.4 K/30a. Maybe the insolation “tuning” is too weak in all these models.
