The Logical Fallacy behind ‘Representative Concentration Pathways’

Representative Concentration Pathways (RCPs) used in AR5 are future projections of greenhouse gas emissions under different mitigation scenarios. So, for example, there is a ‘business as usual’ high-emission scenario (RCP8.5) and a mid-range mitigation scenario (RCP4.5). CMIP5 model projections are then based on runs with each scenario.

Here is the fallacy. The number 8.5 refers to the resultant climate forcing in W/m2 at the top of the atmosphere by 2100. Yet to know what this forcing will be in 2100 you have to use a climate model! This is a circular argument, as it predetermines the answer before any ‘testing’ of the models even starts. So, for example, RCP8.5 gives a warming of

(with no feedback) \Delta{T} = \frac{\Delta{F}}{3.2} = 2.65C

(with feedback) \Delta{T} = \frac{\Delta{F}}{3.2(1-f)} = 4.4C  for f=0.4
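For anyone who wants to check the arithmetic, here is a minimal Python sketch (mine, not from the IPCC reports) of the two numbers above, assuming the standard no-feedback Planck response of about 3.2 W/m2 per degree:

```python
# Minimal sketch of the warming implied by an 8.5 W/m2 forcing, assuming a
# Planck (no-feedback) response of 3.2 W/m2 per degree C.
delta_F = 8.5        # RCP8.5 forcing at 2100 (W/m2)
planck = 3.2         # no-feedback response (W/m2 per C)
f = 0.4              # assumed net positive feedback fraction

dT_no_feedback = delta_F / planck              # ~2.65 C
dT_with_feedback = dT_no_feedback / (1.0 - f)  # ~4.4 C

print(f"no feedback:   {dT_no_feedback:.2f} C")
print(f"with feedback: {dT_with_feedback:.2f} C")
```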

How can you test the models with an emission scenario based precisely on those same model results? These ‘scenarios’ must include highly uncertain assumptions about carbon-cycle feedbacks and cloud feedbacks that are built into the very models you are supposed to be testing.

The SRES scenarios used in CMIP3 predicted atmospheric CO2 concentration, NOT the final forcing. Why was this changed?

Reading Moss et al. one sees that RCPs were a deliberate choice to define an end goal for radiative forcing rather than CO2 levels, apparently to speed up impact assessments by the sociologists. In fact these are just guesses for radiative forcing in 2100, and there is no single emission scenario behind them. Instead the modelers can choose any plausible emission ‘path’ they like that reaches the required forcing. Each emission path is just one of many alternative pathways. This is like saying: we know that the world is going to fry by 2100 if the sun’s output were to increase by 8.5 W/m2. Let’s simply use that figure for the enhanced GHE by 2100 and call that our ‘business as usual’ scenario RCP8.5. That way we can convince the world to take drastic mitigation action now, by scaring everyone about just how bad things will be.

It is also interesting to look again at Fig 10 in the AR5 Summary for Policymakers. This figure has had, and continues to have, a huge political impact.

Figure 1: Overlaid on Figure 10 from the SPM are Hadcrut4 data points, shown in cyan, where CO2 is taken from Mauna Loa data. Gtons of anthropogenic CO2 are calculated relative to 1850 and scaled up by a factor 2.3, because 43% of anthropogenic emissions remain in the atmosphere. The blue curve is a logarithmic fit to the Hadcrut4 data, because CO2 forcing is known to depend on the logarithm of CO2 concentration and is certainly not linear. This is derived in Radiative forcing of CO2.

Firstly, note that RCP8.5 seems to imply that the carbon content of the atmosphere will triple by 2100. That means that in the next 85 years we would emit twice as much CO2 as has been emitted so far since the industrial revolution. Current CO2 levels are 400 ppm – an increase of 120 ppm. So by 2100 the graph seems to imply that levels would be 640 ppm. Yet that is only a total increase in CO2 forcing of 4.4 W/m2. To reach a forcing of 8.5 W/m2 the atmosphere must reach a CO2 concentration of about 1380 ppm.
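These numbers can be checked with the simplified logarithmic forcing expression \Delta{F} = 5.35\ln(C/C_0) (the same dependence derived in Radiative forcing of CO2). A short Python sketch of mine, assuming a 280 ppm pre-industrial baseline:

```python
import numpy as np

C0 = 280.0  # assumed pre-industrial CO2 concentration (ppm)

def forcing(C):
    """CO2 forcing (W/m2) relative to pre-industrial, using dF = 5.35*ln(C/C0)."""
    return 5.35 * np.log(C / C0)

def concentration(dF):
    """Invert the formula: CO2 concentration (ppm) needed to give a forcing dF (W/m2)."""
    return C0 * np.exp(dF / 5.35)

print(forcing(640.0))       # ~4.4 W/m2 for 640 ppm
print(concentration(8.5))   # ~1370 ppm needed to reach 8.5 W/m2
```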

What is going on? There is a subtle assumption behind Fig 10, namely that natural CO2 sinks will saturate, and that instead of just half of emitted CO2 remaining in the atmosphere eventually all of it will remain. There is no real evidence whatsoever that this saturation is occurring.

Could we actually reach 1380 ppm even if all emitted CO2 remained in the atmosphere?

Date     CO2 (ppm), 50% retention     CO2 (ppm), 100% retention
Now      400                          520
2100     640                          1000
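The table can be reproduced in a couple of lines, assuming (as above) that the rise above 280 ppm simply doubles if all of the emitted CO2 stays airborne instead of roughly half of it:

```python
C0 = 280.0            # pre-industrial CO2 (ppm)
now_50 = 400.0        # measured today, with ~50% of emissions retained
by2100_50 = 640.0     # implied by the graph for 2100 at 50% retention

# doubling the ppm rise approximates 100% retention of the same emissions
now_100 = C0 + 2 * (now_50 - C0)          # ~520 ppm
by2100_100 = C0 + 2 * (by2100_50 - C0)    # ~1000 ppm
print(now_100, by2100_100)
```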

We still can’t reach 1380 ppm even if all emissions remained forever in the atmosphere. Can positive feedbacks save RCP8.5 by boosting the external forcing? Perhaps, but my problem with that is the incestuous relationship between the assumptions made in the models and the scenarios. You need the high-sensitivity models to justify the target forcing of 8.5 W/m2, so is it any surprise that those same models then predict 4.5C of warming? The same argument applies to the emission limits proposed to meet the 2C target to be discussed in Paris.

Arguments that other anthropogenic greenhouse gases like methane, or SO4 aerosols, make up the difference seem rather dubious, since CH4 and SO4 are relatively short-lived in the atmosphere, as are other aerosols.

Fig 10 is highly misleading for policymakers (IMHO)!

 


Marotzke & Forster Revisited

Marotzke & Forster (2015) found that 60-year trends in global surface temperatures are dominated by underlying climate physics. However, the data show that climate models overestimate such 60-year trends after 1940.

Comparison of 60y trends in observations and models (see text for details).

The recent paper in Nature by Jochem Marotzke & Piers Forster, ‘Forcing, feedback and internal variability in global temperature trends’, has gained much attention because it makes the claim that climate models are just fine and do not overestimate warming, despite the observed 17-year hiatus since 1998. They attempt to show this by demonstrating that 15y trends in the Hadcrut4 data can be expected in CMIP5 models through quasi-random internal variability, whereas 60y trends are deterministic (anthropogenic). They identify the ‘deterministic’ and ‘internal variability’ components in the models through a multi-regression analysis with their known forcings as input.

\Delta{T} = \frac{\Delta{F}}{(\alpha + \kappa)} + \epsilon

where \Delta{F} is the forcing, \alpha is the climate feedback parameter, \kappa is the ocean heat uptake efficiency, and \epsilon is random internal variability.
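As a rough illustration (my own sketch, not the M&F code), the spread this decomposition produces across an ensemble can be mimicked by drawing \alpha and \kappa from plausible ranges – \alpha spanning the 0.6–1.8 W m^-2 C^-1 range quoted later from the paper, \kappa an assumed range – and adding quasi-random noise for \epsilon:

```python
import numpy as np

rng = np.random.default_rng(0)
n_models = 42

delta_F = 1.0                                  # illustrative forcing change (W/m2)
alpha = rng.uniform(0.6, 1.8, n_models)        # climate feedback parameter (W m^-2 C^-1)
kappa = rng.uniform(0.4, 1.2, n_models)        # assumed ocean heat uptake efficiencies (W m^-2 C^-1)
epsilon = rng.normal(0.0, 0.1, n_models)       # quasi-random internal variability (C)

delta_T = delta_F / (alpha + kappa) + epsilon  # deterministic response plus noise, per 'model'
print(delta_T.min(), delta_T.max())            # spread of responses across the ensemble
```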

This procedure was criticised by Nic Lewis and generated an endless discussion on Climate Audit and Climate Lab Book about whether it made statistical sense. However, for the most part I think this is irrelevant, as it concerns an analysis of differences between models and not of the observational data.

Firstly, the assumption that all internal variability is quasi-random is likely wrong. In fact there is clear evidence of a 60y oscillation in the GMST data, probably related to the AMO/PDO – see realclimate. In this sense all the models are likely wrong, because they fail to include this non-random variation. Secondly, as I will show below, the observed 15y trends in Hadcrut4 are themselves not quasi-random. Thirdly, I demonstrate that the observed 60y trends after 1945 are poorly described by the models, and that by 1954 essentially all of the models predict higher trends than those observed. This means that the ‘deterministic’ component of all CMIP5 models does indeed overestimate the GMST response to increasing greenhouse gas concentrations.

Evidence of regular climate oscillations

Hadcrut4 anomaly data compared to a fit with a 60y oscillation and an underlying logarithmic anthropogenic term.

Figure 1 shows that the surface data can be well described by a formula (described here) that includes both a net CO2 forcing term and a 60y oscillation, as follows:

\Delta{T}(t) = -0.3 + 2.5\ln{\frac{CO2(t)}{290.0}} + 0.14\sin(0.105(t-1860)) - 0.003\sin(0.57(t-1867)) - 0.02\sin(0.68(t-1879))
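For completeness, here is how the fit can be evaluated numerically (a sketch of mine; the CO2(t) curve below is just a placeholder – the fit itself uses Mauna Loa and ice-core concentrations):

```python
import numpy as np

def co2_ppm(t):
    # placeholder CO2 concentration curve (ppm); NOT the real data used for the fit
    return 290.0 + 120.0 * np.exp((t - 2015.0) / 60.0)

def dT_fit(t):
    """Anomaly fit: logarithmic CO2 term plus a dominant ~60y oscillation (period 2*pi/0.105)."""
    return (-0.3
            + 2.5 * np.log(co2_ppm(t) / 290.0)
            + 0.14 * np.sin(0.105 * (t - 1860))
            - 0.003 * np.sin(0.57 * (t - 1867))
            - 0.02 * np.sin(0.68 * (t - 1879)))

years = np.arange(1860, 2015)
anomaly_fit = dT_fit(years)
```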

The physical justification for such a 0.2C oscillation is the observed PDO/AMO, which just like ENSO can affect global surface temperatures, but over a longer period. No models currently include any such regular natural oscillations. Instead, the albedo effect of aerosols and volcanoes has been tuned to agree with past GMST and follow its undulations. Many others have noted this oscillation in GMST, and even Michael Mann is now proposing that a downturn in the PDO/AMO is responsible for the hiatus.

15y and 60y trends in observations and models

I have repeated the analysis described in M&F. I use linear regression fits over periods of 15y and 60y to the Hadcrut4 data and also to the fitted equation described above. In addition I have downloaded 42 CMIP5 model simulations of monthly surface temperature data from 1860 to 2014, calculated the monthly anomalies and then averaged them over each year. Then for each CMIP5 simulation I calculated the 15y and 60y trends for increasing start year, as described in M&F.
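The trend calculation itself is straightforward; a sketch of the approach (mine, with the data arrays assumed to be annual Hadcrut4 or CMIP5 anomalies) is:

```python
import numpy as np

def rolling_trends(years, anomalies, window):
    """Linear trend (C per decade) for every 'window'-year period, keyed by start year."""
    starts, trends = [], []
    for i in range(len(years) - window + 1):
        slope = np.polyfit(years[i:i + window], anomalies[i:i + window], 1)[0]  # C per year
        starts.append(years[i])
        trends.append(10.0 * slope)   # convert to C per decade
    return np.array(starts), np.array(trends)

# usage, given annual arrays 'years' and 'anomaly':
# s15, t15 = rolling_trends(years, anomaly, 15)
# s60, t60 = rolling_trends(years, anomaly, 60)
```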

Figure 2 shows the calculated 15y trends in the Hadcrut4 dataset compared to trends from the fit. For comparison we first show Fig 2a, taken from M&F, below.

Fig 2a: 15y trends from M&F compared to model regressions. Error bars for random internal variation are ±0.26C, which dominate the ‘deterministic’ (AGW) error spread between models of ±0.11C.

M&F’s regression analysis then goes on to show that the deterministic effects in the CMIP5 models should dominate for the longer 60y trends. In particular the error on the 60y trends across models is ±0.081C, which is 30% lower than the random variation. Therefore the acid test of the models comes when comparing 60y model trends to the observations, because now the statistical variation is much smaller. My results are shown below.

a) 15y trends derived from Hadcrut4 data and the fit described above. Note how the trends are not random but follow a regular variation in phase with the fit.
b) 60y trends in Hadcrut4 data (black circles) compared with the fit (blue line) and an ensemble of CMIP5 model calculations. The red curve is the average of all CMIP5 models.

This analysis shows two effects which were unreported by M&F. Firstly, the 15y variation in trends of the observed data is not random but shows a periodic shape, which is also reproduced by the fit. This is characteristic of an underlying natural climate oscillation. The quasi-random natural variation in the CMIP5 models shown in Fig 2a above encompasses the overall magnitude of the variation but not its structure.

Secondly, the 60y trends also show a much smaller but still residual structure, reflecting the underlying oscillation shown in blue. The spread in the 42 models is of course due to their different effective radiative forcings and feedbacks. The fact that before 1920 all the model trends can track the observed trends is partly due to parametric tuning of aerosols to agree with hindcast temperatures. After 1925 the observed trend begins to fall beneath the average of CMIP5, so that by 1947 the observations lie below all 42 model trends in the CMIP5 ensemble. This increase in model trends above the observed 60y trend cannot now be explained by natural variation, since M&F argue that the deterministic component must dominate. The models must be too sensitive to net greenhouse forcing. However M&F dismiss this fact simply because they can’t determine what component within the models causes the trend. In fact the conclusion of the paper is based on analysing model data and not the observational data. It is bizarre. They conclude their paper as follows:

There is scientific, political and public debate regarding the question of whether the GMST difference between simulations and observations during the hiatus period might be a sign of an equilibrium model response to a given radiative forcing that is systematically too strong, or, equivalently, of a simulated climate feedback α that is systematically too small (equation (2)). By contrast, we find no substantive physical or statistical connection between simulated climate feedback and simulated GMST trends over the hiatus or any other period, for either 15- or 62-year trends (Figs 2 and 3 and Extended Data Fig. 4). The role of simulated climate feedback in explaining the difference between simulations and observations is hence minor or even negligible. By implication, the comparison of simulated and observed GMST trends does not permit inference about which magnitude of simulated climate feedback – ranging from 0.6 to 1.8 W m^-2 °C^-1 in the CMIP5 ensemble – better fits the observations. Because observed GMST trends do not allow us to distinguish between simulated climate feedbacks that vary by a factor of three, the claim that climate models systematically overestimate the GMST response to radiative forcing from increasing greenhouse gas concentrations seems to be unfounded.

It almost seems as if they have reached the conclusion that they intended to reach all along – namely that the models are fit for purpose and the hiatus is a statistical fluke, not unexpected in 15y trend data. This way they can save the conclusions of AR5, but only by ignoring the evidence that the observational data support the AMO/PDO oscillation and moderate global warming.

Physics has always been based on developing theoretical models to describe nature. These models make predictions which can then be tested by experiment. If the results of these experiments disagree with the predictions then the model can either be updated to explain the new data or else discarded. What one can’t do is discard the experimental data because the models can’t distinguish why they disagree with the data.

My conclusion is that the 60y trend data show strong evidence that CMIP5 models do indeed overestimate global warming from increased greenhouse gases. The discrepancy between climate projections and observations will only get worse as the hiatus continues, probably for another 10 years. The current 60y trend (per decade) is in fact only slightly larger than that in 1900. Once the oscillation reverses around 2030 warming will resume, but climate sensitivity is still much less than most models predict.


IPCC Scientist’s dilemma

The headlines used by most politicians and green pressure groups are based on the IPCC attribution of the human impact on the climate. Climate change policy and political soundbites can usually be traced back to the ‘attribution statements’ contained in each of the successive assessment reports. The political pressure on scientists to forever increase their “certainty” about man-made global warming is intense. The stumbling block is that the pause in warming since 1998 is getting harder to explain away and is beginning to undermine this process. The more scientists try to explain the pause, the more difficulties they find themselves getting into. The latest to make this mistake is Michael Mann, who can now ‘explain’ the pause as being due to a natural cooling trend of the AMO/PDO since 2000, thereby masking the underlying anthropogenic warming.

Mann’s identification of a natural oscillation component in global temperature data. NMO is a net combination of AMO and PDO. Note the amplitude of the oscillation is 0.4C.

Mann is quite right that the PDO/AMO may well be the cause of the hiatus, but by accepting this possibility he unfortunately drives a coach and horses through the AR5 attribution analysis described in chapter 10. This is because the probability analysis used there depends on natural variability being precisely zero since 1951.

First let’s look at the ever-growing IPCC certainty about AGW since the first assessment in 1990:

  • 1990 AR1: ‘The unequivocal detection of the enhanced greenhouse effect from observations is not likely for a decade or more. ‘
  • 1995 AR2: “The balance of evidence suggests a discernible human influence on global climate”
  • 2001 AR3: “most of the observed warming over the last 50 years is likely to have been due to the increase in greenhouse gas concentrations.”
  • 2007 AR4: “Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations”
  • 2013 AR5: “It is extremely likely that human activities caused more than half of the observed increase in GMST from 1951 to 2010. The best estimate of the human-induced contribution to warming is similar to the observed warming over this period.”

So 23 years of intense research has supposedly increased scientific consensus from an agnostic position (AR1), to a discernible signal (AR2), through likely (66%-90%) in AR3, to very likely (90-99%) in AR4, and finally to extremely likely (>95% certain) in AR5. This ratcheting up of scientific certainty, egged on by various pressure groups, has underpinned the increasing political pressure on worldwide governments to abandon fossil fuels. Taking a step backwards now is unthinkable.

However, is this increased confidence actually justified in view of the fact that there has been essentially no surface warming at all since 1998? The AR5 attribution analysis is all based on Figure 10.5, shown below, and the seemingly small error bar on the anthropogenic component ANT. Despite the much higher uncertainty in the two individual anthropogenic components, GHG and aerosols, the ‘fingerprinting’ analysis can supposedly isolate ANT with a high degree of certainty. This fingerprinting is extremely obscure and is not at all well explained in the text of chapter 10. Even Science of Doom is not convinced by their arguments. However, let’s be generous and assume that they are correct and the error on ANT can indeed be shown to be that small. The real problem they now have is that the probability that ANT and Observed agree depends on the assumption that Internal Variability is 0.0 ± 0.1C – but we just saw that this is now increasingly unlikely.

Fig 10.5 from AR5. ANT is the net anthropogenic forcing. I do not understand how the ANT errors get smaller after adding GHG and OA together!

Invoking the AMO/PDO to explain the pause in global warming essentially means that Internal Variability can no longer be assumed to be zero. It must instead have contributed a significant signal during the 1951-2010 study period. Let’s look at how large that signal could be. To do this we simply fit the HADCRUT4 data including a 60y AMO/PDO term.
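A minimal version of such a fit (a sketch of mine, assuming annual arrays years, co2 and anomaly are already loaded; the 60y period is imposed rather than fitted) could be done with scipy:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t_co2, a, b, amp, phase):
    """Logarithmic CO2 term plus a single 60y oscillation of amplitude 'amp'."""
    t, co2 = t_co2
    return a + b * np.log(co2 / 290.0) + amp * np.sin(2.0 * np.pi * (t - phase) / 60.0)

# usage, once years, co2 and anomaly (annual HADCRUT4 values) are defined:
# params, cov = curve_fit(model, (years, co2), anomaly, p0=[-0.3, 2.5, 0.2, 1870.0])
# a, b, amp, phase = params   # 'amp' estimates the natural AMO/PDO contribution (~0.2C here)
```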

 

Figure 1. A fit to HADCRUT4 temperature anomaly data.

The ‘observed’ global warming component is then isolated by measuring the warming from one peak of the oscillation to the next, thereby removing the internal variability effect. Now the observed signal from 1951-2010 is reduced to 0.4 ± 0.1C. Including the NMO oscillation makes it clear that there remains a net natural component to the warming during the 1951-2010 study period of 0.2 ± 0.1C. The AR5 attribution analysis is therefore wrong and should be corrected accordingly. We do this by assigning 0.2C of the original ‘observed’ warming to Internal Variability. The new Fig 10.5 now looks as follows.

Updated Fig 10.5. This shows the attribution of warming from 1950 to 2010, where the natural internal variation is isolated by taking 1940 to 2010 as the correct measure of the observed forced warming, because 1940 and 2010 are both peaks of the natural variation.

We see immediately that the GHG component is now far too large, and furthermore that the ‘fingerprint’ ANT component lies 2 sigma beyond the ‘observed’ global warming component. In order to explain the pause in warming, Mann has had to accept the existence of a significant natural internal component. This then overturns the chapter 10 attribution analysis.

I am convinced that there is indeed a six-decade-long oscillation in the temperature data, which the IPCC studiously avoided discussing in AR5. I think the main reason was to avoid weakening the headline attribution statement, mainly for political reasons. This position is now becoming untenable. In fact chapter 9, which evaluates model performance, essentially shows that there is only a 5% chance that the models can explain the hiatus without including an internal multidecadal oscillation. In IPCC parlance this translates to: it is “extremely unlikely” that the AR5 models alone can explain the hiatus in global warming (95% probability).

 

 
