The Logical Fallacy behind ‘Representative Concentration Pathways’

Representative Concentration Pathways (RCPs) used in AR5 are future projections of greenhouse gas emissions according to different mitigation scenarios. So, for example, there is a 'business as usual' high-emission scenario (RCP8.5) and a mid-range mitigation scenario (RCP4.5). CMIP5 model projections are then based on runs with each scenario.

Here is the fallacy. The number 8.5 refers to the resultant climate forcing in W/m2 at the top of the atmosphere by 2100. Yet to know what this forcing will be in 2100 you have to use a climate model! This is a circular argument, as it predetermines the answer before any 'testing' of the models even starts. So, for example, RCP8.5 gives a warming of

(with no feedback) \Delta{T} = \frac{\Delta{F}}{3.2} = 2.65^{\circ}C

(with feedback) \Delta{T} = \frac{\Delta{F}}{3.2(1-f)} = 4.4^{\circ}C for f = 0.4
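
As a sanity check, here is a minimal Python sketch of the same arithmetic. The Planck response of 3.2 W/m2 per degree C and the feedback factor f come from the formulas above; everything else is illustrative.

```python
# Equilibrium warming from a top-of-atmosphere forcing (W/m^2),
# using the Planck response of 3.2 W/m^2 per degree C from the text.

def warming(delta_f, f=0.0, planck=3.2):
    """Warming in C for forcing delta_f, with net feedback fraction f."""
    return delta_f / (planck * (1.0 - f))

print(warming(8.5))         # no feedback:  ~2.66 C
print(warming(8.5, f=0.4))  # with f = 0.4: ~4.43 C
```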

How can you test the models with an emission scenario based precisely on those same model results? These 'scenarios' must include highly uncertain assumptions about carbon cycle feedbacks and cloud feedbacks that are built into the very models you are supposed to be testing.

The SRES scenarios used in CMIP3 predicted atmospheric CO2 concentrations, NOT the final forcing. Why was this changed?

Reading Moss et al. one sees that RCPs were a deliberate choice to define an end goal for radiative forcing rather than for CO2 levels, apparently to speed up impact assessments by the sociologists. In fact these are just guesses for the radiative forcing in 2100, and there are no emission scenarios behind them. Instead the modelers can choose any plausible emission 'path' they like that reaches the required forcing. Each emission path is just one of many alternative pathways. This is like saying: we know that the world is going to fry by 2100 if the sun's output were to increase by 8.5 W/m2, so let's simply use that figure for the enhanced GHE by 2100 and call it our 'business as usual' scenario RCP8.5. That way we can convince the world to take drastic mitigation action now, by scaring everyone about just how bad things will be.

It is interesting also to look again at Fig 10 in the AR5 Summary for Policymakers. This figure had, and continues to have, a huge political impact.

Figure 1: Overlaid on Fig 10 from the SPM are Hadcrut4 data points, shown in cyan, where CO2 is taken from Mauna Loa data. Gtons of anthropogenic CO2 are calculated relative to 1850 and scaled up by a factor of 2.3 because only 43% of anthropogenic emissions remain in the atmosphere. The blue curve is a logarithmic fit to the Hadcrut4 data, because CO2 forcing is known to depend on the logarithm of CO2 concentration and is certainly not linear. This is derived in Radiative forcing of CO2.

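For readers who want to reproduce the scaling in the caption, here is a minimal sketch. The conversion of 1 ppm of atmospheric CO2 to roughly 7.8 GtCO2 and the 280 ppm baseline for 1850 are my assumptions, not figures from the post; the factor of 2.3 is just 1/0.43.

```python
import numpy as np

PPM_TO_GTCO2 = 7.8        # assumed: ~7.8 GtCO2 per ppm of atmospheric CO2
AIRBORNE_FRACTION = 0.43  # fraction of emissions staying airborne (caption)
PPM_1850 = 280.0          # assumed pre-industrial baseline

def cumulative_emissions_gtco2(ppm):
    """Cumulative anthropogenic GtCO2 implied by the ppm rise since 1850
    (the x2.3 scaling in the caption is the 1/0.43 factor here)."""
    return (ppm - PPM_1850) * PPM_TO_GTCO2 / AIRBORNE_FRACTION

def fit_log_curve(ppm, temps):
    """One-parameter least-squares fit of dT = a*ln(C/C0),
    the functional form of the blue curve."""
    x = np.log(np.asarray(ppm) / PPM_1850)
    return np.dot(x, np.asarray(temps)) / np.dot(x, x)

print(cumulative_emissions_gtco2(400.0))  # ~2180 GtCO2 emitted since 1850
```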

Firstly, note that RCP8.5 seems to imply that the carbon content of the atmosphere will triple by 2100. That means that in the next 85 years we will emit twice as much CO2 as has been emitted since the industrial revolution. Current CO2 levels are 400 ppm, an increase of 120 ppm. So by 2100 the graph seems to imply that levels would be 640 ppm. Yet that is only a total increase in CO2 forcing of 4.4 W/m2. To reach a forcing of 8.5 W/m2 the atmosphere must reach a CO2 concentration of 1380 ppm.
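
These numbers can be checked with the standard logarithmic forcing formula \Delta{F} = 5.35 \ln(C/C_0), the form derived in the linked Radiative forcing of CO2 post. A minimal sketch, taking the pre-industrial baseline as 280 ppm (consistent with the 120 ppm rise to 400 ppm quoted above):

```python
import math

# CO2 radiative forcing relative to pre-industrial: dF = 5.35 * ln(C/C0).

C0 = 280.0  # pre-industrial CO2 (ppm)

def co2_forcing(c_ppm):
    """Forcing in W/m^2 for a CO2 concentration of c_ppm."""
    return 5.35 * math.log(c_ppm / C0)

def ppm_for_forcing(df):
    """CO2 concentration needed to produce forcing df (inverse of above)."""
    return C0 * math.exp(df / 5.35)

print(co2_forcing(640.0))    # ~4.4 W/m^2
print(ppm_for_forcing(8.5))  # ~1370 ppm, consistent with the ~1380 ppm above
```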

What is going on? There is a subtle assumption behind Fig 10, namely that natural CO2 sinks will saturate, so that instead of just half of the emitted CO2 remaining in the atmosphere, eventually all of it will remain. There is no real evidence whatsoever that this saturation is occurring.

Could we actually reach 1380 ppm even if all emitted CO2 remained in the atmosphere?

date    50% retention (ppm)    100% retention (ppm)
now     400                    520
2100    640                    1000
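
A minimal sketch reproducing this table: the 120 ppm rise so far at ~50% retention implies ~240 ppm-equivalent emitted since 1850, and, per the post, twice that again is emitted by 2100.

```python
# Atmospheric CO2 (ppm) under 50% vs 100% retention of emissions.

PPM_1850 = 280.0
EMITTED_SO_FAR = 240.0                # ppm-equivalent emitted since 1850
EMITTED_BY_2100 = 2 * EMITTED_SO_FAR  # twice the historical total, per the post

for label, emitted in [("now", EMITTED_SO_FAR),
                       ("2100", EMITTED_SO_FAR + EMITTED_BY_2100)]:
    print(f"{label}: {PPM_1850 + 0.5*emitted:.0f} ppm (50%), "
          f"{PPM_1850 + emitted:.0f} ppm (100%)")

# now:  400 ppm (50%), 520 ppm (100%)
# 2100: 640 ppm (50%), 1000 ppm (100%)
```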

We still can't reach 1380 ppm even if all emissions remained forever in the atmosphere. Can positive feedbacks save RCP8.5 by boosting the external forcing? Perhaps, but my problem with that is the incestuous relationship between the assumptions made in the models and the scenarios. You need the high-sensitivity models to justify the target forcing of 8.5 W/m2, so is it any surprise that those same models then predict 4.5C of warming? The same argument applies to the emission limits needed to meet the 2C target to be discussed in Paris.

Arguments that other anthropogenic forcings such as methane or SO4 aerosols make up the difference seem rather dubious, since CH4 has an atmospheric lifetime of only about a decade, and SO4 aerosols are washed out within weeks.

Fig 10 is highly misleading for policy makers (IMHO)!

 


6 Responses to The Logical Fallacy behind ‘Representative Concentration Pathways’

  1. Hans Erren says:

    Hi Clive,

    Peter Dietze published a fierce critique of the Bern sink model as early as 2001 (!). I also made a business-as-usual forecast based on A1B emissions, a transient climate sensitivity of 1.3C and a non-saturating sink, and lo and behold, the warming scare disappears altogether.

    https://klimaathype.wordpress.com/2014/09/24/a-cool-realistic-business-as-usual-scenario/

    Peter Dietze on John Daly's website:
    http://www.john-daly.com/dietze/cmodcalc.htm

  2. Ron Graf says:

    Hi Clive,

    You are right on the money. If one uses models to draw conclusions, one must not rely on assumptions that are based on the model output. The IPCC models were created because the only way to validate our understanding of a system we cannot duplicate (the Earth) was to build simplified models and see if they have predictive power. After Marotzke and Forster (2015) it seems to me that impatience has led many to validate models and pathways based on ensembles of models, figuring that they represent the universe of possible outcomes. People are led to believe now, for example, that the models are so sophisticated that they can reproduce the ENSO effect as an emergent property. But when you read the fine print, the randomness of the emergence is created by starting the model program at a randomly chosen initial state: basically creating a horse-race finish by randomly placing the horses on a carousel and calling yourself a handicapper. Here is a basic guide to how CMIP5 is used: Taylor, Stouffer and Meehl (2012) http://envsci.rutgers.edu/~toine379/extremeprecip/papers/taylor_et_al_2012.pdf

    Is the fix in? I am not saying that, but the whole field needs a stronger review process, including adversaries at the table (no more buddy peer reviews). It is alarming to me that there is a move among politicians, as we saw in recent weeks, to weed out and silence adversarial voices. To those reading: I am no merchant of doubt; I do it for free. We all need to get more involved in this. We are talking about never-before-seen government control over every industry and every individual who travels, buys stuff or makes stuff.

  3. Frank says:

    Clive: In addition to CO2 sinks saturating, I believe that some of the old emissions scenarios assumed that we would eventually stop burning so much coal, or start scrubbing it better. This decrease in aerosol forcing would produce quite a bit of warming, even if GHG levels eventually reach a plateau. This is particularly true for the models with high climate sensitivity, which can match the historic warming record only with a high sensitivity to aerosols.

  4. Ron Graf says:

    Here is one of those scientists who is deciding the aerosol recipe for the IPCC model HadGEM3-RA, Dr. Karsten Haustein, who claims: "As shown in many papers, you should expect a positive temperature trend even in the absence of any external forcing as Earth's energy budget tries to rebalance after strong volcanic activity…" Here. My question is why M&F, Karsten and other modelers presume volcanic activity has a clearly defined fingerprint, as we see in the models, while CO2's forcing has a mysterious residence time, maybe decades. How can one forcing have a more defined effect than another competing forcing in the same brew? In fact, I would think volcanoes are the logical natural proxy for understanding sensitivity. Has anyone closely studied volcanic forcing alone against GMST?

  5. Pier says:

    Dear Clive, I wanted to know whether you are aware of the site http://www.meteoclima.com, where someone claiming to be a scientist repeatedly quotes your posts and even claims to have collaborated on the writing of your book. Is that right?

  6. Paul Milenkovic says:

    There are "Keeling curves" for both total CO2 and C13 concentration:
    http://www.esrl.noaa.gov/gmd/outreach/isotopes/c13tellsus.html

    I see that the annual swings on the C13 curve are much larger in proportion to the 15-year trend in the graph than those on the total CO2 curve. Normalizing the C13 curve to a range of 18, these swings are nearly the same strength as the annual swings on the total CO2 curve. The 15-year trend relative to the total CO2 curve, however, is over 13 times weaker, giving 7.3 percent of the C13-tagged human-emitted CO2 (all of it, fossil fuel and burnt plant matter from forest clearing and wood burning alike, is C13-deficient) appearing in the atmosphere. In the July 14, 2011 post, Clive Best gave the figure as 6 percent.

    Is anyone else noticing this?
