The new AR5 iconic graph is Figure 10 in the Summary for Policy Makers. Myles Allen appeared on the BBC with 10 lumps of coal on a table to explain how we had already burned 5 of them, leaving just 5 left to burn if we want to avoid a catastrophe. It is a simple, powerful message understandable by policy makers – but is it actually correct?

Figure 1: Overlaid on Figure 10 from the SPM are HadCRUT4 data points, shown in cyan, where CO2 is taken from Mauna Loa data. Gtons of anthropogenic CO2 are calculated relative to 1850 and scaled up by a factor of 2.3 because 43% of anthropogenic emissions remain in the atmosphere. The blue curve is a logarithmic fit to the HadCRUT4 data. This is because CO2 forcing is known to depend on the logarithm of CO2 concentration and is certainly not linear. This is derived in Radiative forcing of CO2.

Figure 2: A clearer version of the comparison with Figure 10 is this one, taken from the slides of the cabinet presentation made by Prof. Walport. The overlay with the data is good, showing consistency of carbon counting between my data and AR5. I left out many of the 19th-century points due to poor knowledge of CO2 levels. Overlaid at the point where a doubling of atmospheric CO2 since 1850 occurs (560 ppm) are the “extremely confident” estimates from AR5 of ECS and TCR.
When I saw the graph from Fig. 10, I thought there must be a mistake, because it showed that all RCP emission scenarios simulated by CMIP5 models result in a simple linear dependence on anthropogenic CO2. This cannot be correct, because it is well known that CO2 radiative forcing increases logarithmically with concentration – not linearly. So I decided to investigate.
The novel feature of the SPM presentation is that the x-axis is not time but instead cumulative anthropogenic carbon emissions. Different emission scenarios result in different lengths along essentially the same trajectory. I therefore took the HadCRUT4 annual temperature anomalies, scaled to 1860-1880, and smoothed CO2 concentrations from Mauna Loa, in order to map the temperature data onto Gtons of increase of atmospheric CO2 since 1850. It is also well known that just 43% of anthropogenic emissions remain airborne in the atmosphere each year, so the actual carbon emissions by man are a factor of 2.3 higher than those inferred from the CO2 data.
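In code, this mapping from measured concentration to cumulative emissions looks something like the sketch below. The ~7.8 Gt CO2 per ppm conversion factor and the 285 ppm baseline for 1850 are assumed round numbers of mine; the 43% airborne fraction (and hence the factor ~2.3) is as above.

```python
# Rough sketch of the ppm -> cumulative anthropogenic CO2 conversion.
# Assumed round numbers: ~7.8 Gt CO2 per ppm of concentration and a
# 285 ppm baseline for 1850; 43% airborne fraction as in the text.

GT_PER_PPM = 7.8          # Gt CO2 added to the air per 1 ppm rise
AIRBORNE_FRACTION = 0.43  # share of emitted CO2 that stays airborne
C_1850 = 285.0            # assumed pre-industrial baseline (ppm)

def cumulative_emissions_gt(c_ppm):
    """Total anthropogenic CO2 emitted (Gt) implied by a measured
    concentration, scaling the airborne increase up by 1/0.43 ~ 2.3."""
    airborne_gt = (c_ppm - C_1850) * GT_PER_PPM
    return airborne_gt / AIRBORNE_FRACTION

print(round(cumulative_emissions_gt(400.0)))  # e.g. for ~400 ppm
```

This is exactly the scaling used for the cyan points in Figure 1.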
The resultant temperature anomalies are plotted in cyan in Figure 1 and purple in Figure 2. Previously I have shown that a good fit to the last 163 years of temperature data can be made with a logarithmic CO2 term plus natural variability. A logarithmic temperature dependence for TCR is to be expected because CO2 forcing increases logarithmically and temperature is a response to forcing. Therefore I made a new fit to the data as a function of Gtons of CO2. This fit is shown in blue, and I maintain that it is a more realistic extrapolation into the future than linear projections. Even with the most pessimistic emission scenario, RCP8.5, temperatures remain at most ~2°C in 2100.
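The fit itself is straightforward to reproduce. Here is a minimal sketch; the (E, T) pairs are made-up illustrative values, not the actual HadCRUT4 series.

```python
# Least-squares fit of temperature anomaly against the logarithm of
# cumulative emissions, T = a*ln(E) + b. The (E, T) pairs below are
# illustrative stand-ins, not the real HadCRUT4 data.
import numpy as np

E = np.array([500.0, 1000.0, 1500.0, 2000.0])  # cumulative Gt CO2
T = np.array([0.20, 0.50, 0.68, 0.80])         # anomaly (degC)

a, b = np.polyfit(np.log(E), T, 1)  # slope and intercept in ln(E)

def t_log(e_gt):
    """Extrapolate the logarithmic curve to a future emissions total."""
    return a * np.log(e_gt) + b
```

The defining property of such a curve is that each doubling of cumulative emissions adds the same temperature increment, which is why it flattens relative to the SPM's straight lines.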
The real intention of this plot seems to have been more political than scientific. It draws the viewer onto the scary red linear line and does not show any of the large uncertainty in the models highlighted elsewhere in WG1. Compare it, for example, with AR5 Figure 1.4, which shows huge uncertainties. It is also incompatible with its own statements regarding climate sensitivity, especially TCR:
“ECS is likely between 1.5°C to 4.5°C (medium confidence) and extremely unlikely less than 1.0°C”
“This assessment concludes with high confidence that the transient climate response (TCR) is likely in the range 1°C to 2.5°C”
The TCR limits are shown overlaid on Figure 1. Both TCR and ECS limits are shown in figure 2.
In my opinion the graph is not scientifically correct because a) it hides model uncertainties and b) it portrays a linear dependence on carbon emissions whereas it should logically be logarithmic.
Update: Frank points out correctly that my fit to the HadCRUT4 data really only applies to the transient climate response (TCR). It is not clear whether the IPCC projections are for the equilibrium temperature after 2100 or the transient temperatures at 2100. If the former, then my curve would rise by around 30%, but it would still be logarithmic.
Clive, the other thing wrong with Figure 10 is that it happily assumes that global fossil fuel reserves are infinite (note that resource = what may be in the ground; reserves = that portion of the resource that can be economically recovered). While the whole concept of Peak Oil has taken one on the chin recently, it remains the case that we are producing / have produced much of the easy-to-get-at reserves of fossil fuel, and upward pressure on price will force substitution, most likely with nuclear. One IPCC view of the fossil fuel world is a simple extrapolation of past demand growth forever into the future. It seems that one scenario sees 750 Gt of C emitted by 2100 and another sees over 2000 Gt in the same time frame. Plotting this the way it is plotted gives the impression of aligned models, whereas in fact, if plotted against time as you suggest, hugely divergent models would be self-evident.
I am sure you are right. The price of fuel will eventually determine total emissions, and much of the developed world will probably have to switch to nuclear – even Germany. Imposing stringent carbon taxes in the meantime, because the government thinks the price of fossil fuels is too low, is simply self-destructive unless the whole world does so together. The market will always win.
First, congratulations on an excellent post, and happy holidays.
Second, Euan is right, but there’s also a subtle issue we tend to ignore: roughly 1/3 of remaining oil resources are extra heavy. These resources tend to be developed with very high reserves-to-production ratios, and they take years of advance planning to develop. This means the production rate is constrained by factors insensitive to price. I realize it’s theoretically possible to toss money at a schedule, but some project timing issues simply require time. This means that as we hit peak crude oil – and it looks like we have already peaked – we rely on extra heavy resources, which can’t be developed as fast as we may wish.
I doodled this effect on a simple spreadsheet and observed an interesting outcome nobody seems to mention: we may see fossil fuel production drop to about 40% of peak emissions, and thereafter, if production declines extremely slowly, we will see CO2 concentration peak as well. This applies as long as the carbon sinks function as they do now.
Thus I conclude decarbonization isn’t really needed. What we need is geoengineering kept on standby and, yes, nuclear power.
Clive: The graph also shows a much bigger temperature rise for 2001-2010 than in the previous three decades. This is clearly incorrect!
If 50% of carbon dioxide emissions are currently absorbed by sinks and the other 50% accumulates (2 ppm per year), it is obvious that a 50% reduction in emissions would stabilize the atmospheric level of CO2. However, the ocean isn’t in equilibrium with surface temperatures, so surface temperatures will continue to rise for several decades after atmospheric CO2 stabilizes (the difference between TCR and ECS). The IPCC’s output – but not your curve following the log of the change in CO2 – contains this “committed warming”.
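The arithmetic behind that first sentence is simple enough to spell out. This is only a back-of-envelope sketch, and it assumes the sinks keep removing a fixed amount equal to half of today's emissions; the numbers are illustrative.

```python
# Back-of-envelope version of the stabilization argument, assuming the
# sinks keep absorbing a fixed amount equal to half of today's
# emissions. Units are ppm-equivalent per year; numbers illustrative.

emissions = 4.0                # today's emissions, ppm-equivalent/yr
sink_uptake = 0.5 * emissions  # sinks currently absorb half

growth_today = emissions - sink_uptake         # ~2 ppm/yr accumulation
growth_halved = 0.5 * emissions - sink_uptake  # zero: CO2 stabilizes

print(growth_today, growth_halved)
```

Of course, if the sinks respond to concentration or temperature rather than removing a fixed amount, the conclusion changes, which is exactly the point about committed warming and sink behaviour made below.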
Furthermore, the IPCC’s models probably include some saturation of sinks for carbon dioxide and a greater accumulation/emission ratio in the future.
The IPCC suppresses uncertainty by using models whose range of TCRs is not as wide as the uncertainty in TCR the IPCC acknowledges from all sources. Read Section 10.1 of AR4 WG1 for a discussion of the IPCC’s models (“an ensemble of opportunity”) and why they know they should not be drawing statistical inferences from a limited set of models.
Clive,
Frank is right – your calculation relies on an assumption of a constant airborne fraction, but the Earth System Models suggest an increase in the airborne fraction, largely due to (a) a saturation of CO2 fertilization of photosynthesis at higher CO2, and (b) an increase in soil respiration at higher temperatures.
Funnily enough, a common sceptical argument is that in the palaeo record temperature changes sometimes lead CO2, and this is sometimes used to argue that CO2 therefore cannot cause warming. While that latter part is of course incorrect logic (and I know you agree that CO2 is a GHG), it is true that while a CO2 rise causes warming, warming can also lead to a CO2 rise. The influence works in both directions – it’s a feedback.
Although it’s obviously hard to test this in the models on centennial timescales, it has been shown to be correct on interannual timescales – see this paper by my colleague Chris Jones, which shows that (in the observations) the annual rise in CO2 is faster in an El Niño year (when the world is generally a bit warmer, as in 1997-1998) and slower in a La Niña. The Met Office Hadley Centre model does a reasonable job of reproducing the relationship on this timescale.
The exception to the ENSO/CO2-rise relationship is when there is also a volcano, as in the case of Mt Pinatubo in 1992. For similar reasons, the temporary cooling led to a slowdown in the rate of CO2 rise.
So, the next time you hear the “temperature leads to increased CO2” argument, remember that this is actually true (as well as CO2 leading to increased temperature). This is why we expect an increase in the airborne fraction, and hence a given amount of cumulative emissions leads to a faster rise in atmospheric CO2 than you’d otherwise expect – resulting in your method underestimating the emissions:warming relationship.
Richard,
Thanks very much for the explanations.
OK – so if I understand correctly, your argument is that the airborne fraction could increase with concentration, and therefore temperature increases faster than logarithmically with anthropogenic CO2 emissions because actual CO2 levels rise even faster.
a) Is there any evidence on whether CO2 fertilization of photosynthesis does saturate? Why then do gardeners keep greenhouses at CO2 levels of around 1000 ppm? I would imagine that it is very easy to do experiments to test this.
b) I assume you mean soil respiration of CO2. Again, that should be fairly easy to measure. I would expect higher CO2 levels over soil in central Africa than, say, in Norway. One could simply cover an area of soil with plastic at night and measure it. Has this been done?
So CO2 is itself a feedback! This sounds a bit like a bootstrap argument to me. You could use the same argument about H2O: H2O is a greenhouse gas, and warmer temperatures lead to more evaporation, which increases the GHE, leading to yet more evaporation. Yet somehow, over the last 4 billion years, the oceans didn’t boil away in a runaway greenhouse effect from either CO2 or H2O.
On the physics side we also have Henry’s law, whereby the equilibrium amount of dissolved CO2 increases with partial pressure: if levels double, so does the dissolved amount, which counteracts any increased outgassing with ocean temperature. Again, this should be easy to measure. My hunch is that there is a stability point where partial pressure wins out and levels stabilize.
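The tug-of-war between the two effects can be sketched numerically. The van 't Hoff temperature correction and the constants below are approximate textbook values for CO2 in water, used purely for illustration.

```python
# Henry's law with a van 't Hoff temperature correction: dissolved CO2
# rises in proportion to partial pressure, while the solubility
# constant falls as the water warms. Approximate textbook constants.
import math

K_H_298 = 0.034     # mol/(L*atm), CO2 solubility in water at 298.15 K
VANT_HOFF = 2400.0  # K, temperature-dependence coefficient for CO2

def dissolved_co2(p_atm, t_kelvin):
    """Equilibrium dissolved CO2 (mol/L) at partial pressure p_atm."""
    k_h = K_H_298 * math.exp(VANT_HOFF * (1.0 / t_kelvin - 1.0 / 298.15))
    return k_h * p_atm

# Doubling the partial pressure doubles the uptake at fixed temperature,
# while warming the water reduces the uptake at fixed partial pressure.
```

Whether rising partial pressure or rising temperature wins in the real ocean is exactly the question at issue.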
So the overall scenario is: Man digs up some buried carbon and burns it. This increases CO2 levels, thereby warming the planet, which causes more outgassing of CO2 from the oceans (plus H2O), which warms the planet even more. This triggers outgassing of CO2 from the soil, leading to more warming, and so on. Pretty soon we are all toast.
Well – I don’t think I completely accept that!
The evidence is to the contrary: the airborne fraction should decrease. Bio uptake will increase, especially on land.
An analogy is business: some profit is invested in growth, new products and capital. Plants become more water-efficient and able to devote more resources to reproduction and soil. More water is available for other plants, and plants are able to give carbohydrates to symbiotic bacteria and fungi, which fix nitrogen and make phosphorus bio-available too. Soil moisture increases with CO2 and warming. (i.e., it’s not a simple matter of partial pressure and existing nitrogen and phosphorus conditions.)
This happens to a lesser extent for crops because they are not nitrogen- or phosphorus-constrained: N and P added to soil inhibit bacteria and fungi growth.
More evaporation means more condensation. A small increase in water-cycle efficiency offsets a lot of the GHE.
The CO2 rise during El Niño has more to do with circulation and precipitation changes than with temperature.
Clearing skies in the Pacific (impacted by Pinatubo) were also a factor (warming the ocean with SW radiation). IR doesn’t warm the ocean quite as much as SW.
@Frank: Yes, I spotted that as well. Instead of a warming hiatus there seems to be an acceleration of warming in the IPCC version from 2000 to 2010. These are model results of course, and the models fail to reproduce the pause in warming! The real data points are shown in cyan.
Yes, I agree with you. But they label the projection lines with dates – 2100, for example. My curve is correct for the temperature recorded at that date. If emissions post-2100 were zero, then we would have to wait another 60-100 years for temperatures to stabilize. They would then increase by about 30%. When I looked into all this before, I got a value for ECS of ~2.5°C.
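For the record, the arithmetic is just the following; both the ~2°C transient figure and the ~30% uplift are my own estimates from the fit, not IPCC numbers, and the result lands close to the ~2.5°C quoted.

```python
# The ~30% equilibrium adjustment spelled out. Both the 2 degC
# transient figure and the 30% uplift are estimates from the text.

tcr_at_2100 = 2.0         # transient warming from the log fit (degC)
equilibrium_uplift = 1.3  # assume ~30% further warming after stabilization

ecs_estimate = tcr_at_2100 * equilibrium_uplift
print(round(ecs_estimate, 1))
```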
The current version online (http://www.climatechange2013.org/images/uploads/WGIAR5-SPM_Approved27Sep2013.pdf) doesn’t have the dots that would let one distinguish between the truly historical – observations – and the pseudo-historical – output from models supplied with ESTIMATES of historical levels of GHGs, aerosols and solar radiation. Model results over the historical period do not result in a line, but the multi-model mean might. Any responsible presentation of model output would show the mean and standard deviation. (Maybe by averaging 3652 daily mean global surface temperatures over a decade from dozens of historical model runs one can get discrete data points with a standard error of the mean about the size of each point.)
Furthermore, they appear to be playing games with the “y-intercept” for the 1%-per-year increase in CO2 line. At zero accumulation (pre-industrial), the 1% curve should have a temperature anomaly of zero, instead of 0.2 degC. However, zero on the y-axis is the average temperature for 1861-1880. If there had been any anthropogenic warming in the model output for 1861-1880, the “1% per year” curve should have had a negative intercept! It looks like the height of the 1% curve was adjusted vertically until there was a good fit for 1880, 1990 or 2000. Actually, the y-intercept is determined by the amounts of other GHGs, aerosols and solar radiation held constant during TCR model runs. For example, they could have used the values of other GHGs, aerosols and solar radiation observed in 1990, or they could have used estimates for pre-industrial levels. By changing the fixed amount of these other forcing agents, they can change the y-intercept to any value they want.
One thing that the IPCC doesn’t emphasize is that a significant amount of warming in various scenarios comes from future reductions in emissions of aerosols. In AR4, increased aerosols negated 25-75% of the forcing from increased CO2. The assumptions about changes in aerosols in each RCP, and the uncertainty about the amount of radiative forcing they cause, have a big impact on the relationship between cumulative emissions and temperature.
Frank,
If the IPCC wanted to be consistent, then they should have added a fire-hose of spaghetti CMIP5 projections, as they do when comparing projections on a yearly time x-axis. In this case they want a sharp, clear message to lobby politicians, so they give a single line for each emission scenario, which presumably is supposed to be the CMIP5 model mean. This is why I say this plot is not scientifically honest.
I think what they have also done is plot only the decadal averages for the historic points, which also hides the recent pause. What is really strange about this plot is that the last 4 billion years of global temperatures, ranging from the hot Carboniferous to snowball Earth, all sit at x=0, because the x-axis is anthropogenic CO2! I guess this would be a vertical line running from -18 to +10.
Regarding the y-axis intercept: of course what they have actually hidden is the reductions in temperature in 1940-1970 and earlier in 1880-1910. You are right that the intercept should really have been below zero. You can see that, for instance, in the HadCRUT data below.

You are also right about aerosols, which in other chapters are given as the main reason why temperatures have risen less than the models expected. Perhaps the real reason is that the CO2-plus-feedback forcings are simply too high in the models.
Whatever the case I am convinced that the projection on this graph should be logarithmic.
Clive: In the past, the IPCC hasn’t made the uncertainties and assumptions in the relationship between projected emissions and “projected accumulation” (the actual level of CO2 in the atmosphere) very clear. Many researchers use ISAM or BERN-CC to convert emissions to accumulation (ppm). I’ve seen a 2D plot somewhere that shows how well one of these models agrees with two types of historical data.
http://www.ipcc-data.org/observ/ddc_co2.html
Increasing aerosols (and the lag between forcing and temperature change) are the reasons given for why there is less warming than expected from GHGs alone. (Total GHG forcing is already equivalent to 2×CO2.) Projections of decreasing aerosols are an unpublicized reason why future projected warming will be enhanced.
Page 20 of the IPCC SPM says,
This computer-generated result is not because the models fail to account for the saturation of the longwave CO2 absorption bands. Line-by-line codes show the forcing response to increasing CO2 to be logarithmic. The fourth figure in this document shows a near-perfect fit of the CO2 forcing to the log of CO2 concentration;
http://www.friendsofscience.org/index.php?id=533
Computer models correctly capture this effect.
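The logarithmic dependence referred to here is usually written as the Myhre et al. (1998) simplified expression, with the constant 5.35 fitted to line-by-line radiative transfer calculations:

```python
# Standard simplified expression for CO2 radiative forcing:
# dF = 5.35 * ln(C/C0) W/m^2 (Myhre et al. 1998), the constant being
# fitted to line-by-line radiative transfer codes.
import math

def co2_forcing(c_ppm, c0_ppm=278.0):
    """Radiative forcing (W/m^2) of CO2 relative to baseline c0_ppm."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Every doubling adds the same increment, ~3.7 W/m^2
print(round(co2_forcing(556.0, 278.0), 2))
```

It is this constant increment per doubling that makes a logarithmic, rather than linear, temperature response the natural expectation.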
The problem is that the climate models assume an exponentially declining fraction of emitted CO2 is absorbed by sinks, which is contradicted by the data, which show an increasing fraction of emitted CO2 being absorbed by sinks. The models forecast that this declining “sink efficiency” will add 50 to 100 ppm of CO2 by 2100. The data show the “sink efficiency” increasing at over 0.9%/decade, as shown by this graph;

from;
http://www.friendsofscience.org/assets/documents/FOS%20Essay/Climate_Change_Science.html#Models_fail
Contrary to the models, the oceans and land biomass are not becoming saturated with CO2.
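The quantity at issue here, the airborne fraction, is easy to compute from the two published series. A minimal sketch follows; the annual values below are illustrative placeholders, not the real Mauna Loa and emissions records, and the 7.8 Gt per ppm conversion is an assumed round number.

```python
# Airborne fraction = (annual ppm rise, converted to Gt CO2) / annual
# emissions; "sink efficiency" is its complement. The two series below
# are illustrative placeholders, not the real records.

GT_PER_PPM = 7.8  # approximate Gt CO2 per ppm of concentration

ppm_rise = [1.5, 1.8, 2.0]      # annual concentration increases (ppm)
emissions = [25.0, 30.0, 36.0]  # annual emissions (Gt CO2)

airborne = [r * GT_PER_PPM / e for r, e in zip(ppm_rise, emissions)]
sink_fraction = [1.0 - a for a in airborne]
```

A trend fitted to `sink_fraction` over the real records is what the 0.9%/decade claim refers to.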
Clive, much as I applaud your personal explorations (I do likewise), the “scientific community” exists to check each other (unfortunately often viciously so); and also unfortunately, specialists almost always do a lousy job of explaining themselves, but that doesn’t make them wrong. In this case, I suggest that a logarithmic fit to temperature (as a function of anything) is incorrect. Besides feedbacks messing up simple models, another biggie is that radiative forcing is logarithmic in C/C0, but (by Stefan-Boltzmann) radiative forcing relates to T^4, so your approximation should be the 4th root of a logarithm against C, or, if you fit differentials instead, the 3rd root of a logarithm against C; or, inverted, fit a logarithm to C^3 or C^4. That’ll push your fit upwards, big-time. (And BTW, if the IPCC should show uncertainties, so should you, for such long extrapolations.)