IPCC Scientist’s dilemma

The headlines used by most politicians and green pressure groups are based on the IPCC attribution of the human impact on the climate. Climate change policy and political soundbites can usually be traced back to the ‘Attribution statements’ contained in each of the periodic assessment reports. The political pressure on scientists to forever increase their “certainty” about man-made global warming is intense. The stumbling block is that the pause in warming since 1998 is getting harder to explain away and is beginning to undermine this process. The more scientists try to explain the pause, the more difficulties they find themselves getting into. The latest to make this mistake is Michael Mann, who can now ‘explain’ the pause as being due to a natural cooling trend of the AMO/PDO since 2000, thereby masking underlying anthropogenic warming.

Mann’s identification of a natural oscillation component in global temperature data. NMO is a net combination of AMO and PDO. Note the amplitude of the oscillation is 0.4C

Mann is quite right that the PDO/AMO may well be the cause of the hiatus, but by accepting this possibility he unfortunately drives a coach and horses through the AR5 attribution analysis described in chapter 10. This is because the probability analysis used there depends on natural variability being precisely zero since 1951.

First, let’s look at the ever-growing IPCC certainty about AGW since the first assessment in 1990:

  • 1990 AR1: “The unequivocal detection of the enhanced greenhouse effect from observations is not likely for a decade or more.”
  • 1995 AR2: “The balance of evidence suggests a discernible human influence on global climate.”
  • 2001 AR3: “most of the observed warming over the last 50 years is likely to have been due to the increase in greenhouse gas concentrations.”
  • 2007 AR4: “Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations”
  • 2013 AR5: “It is extremely likely that human activities caused more than half of the observed increase in GMST from 1951 to 2010. The best estimate of the human-induced contribution to warming is similar to the observed warming over this period.”

So 23 years of intense research has supposedly moved the scientific consensus from an agnostic position (AR1), through a discernible signal (AR2) and likely (66%–90%) in AR3, to very likely (90%–99%) in AR4, and finally to extremely likely (>95% certain) in AR5. This ratcheting up of scientific certainty, egged on by various pressure groups, has underpinned the increasing political pressure on governments worldwide to abandon fossil fuels. Taking a step backwards now is unthinkable.

However, is this increased confidence actually justified given that there has been essentially no surface warming at all since 1998? The AR5 attribution analysis is all based on Figure 10.5, shown below, and the seemingly small error bar on the anthropogenic component ANT. Despite the much larger uncertainty on the two individual anthropogenic components, GHG and aerosols, the ‘fingerprinting’ analysis can supposedly isolate ANT with a high degree of certainty. This fingerprinting is extremely obscure and is not at all well explained in the text of chapter 10. Even Science of Doom is not convinced by their arguments. However, let’s be generous and assume that they are correct and that the error on ANT can indeed be shown to be that small. The real problem they now have is that the probability that ANT and Observed agree depends on the assumption that Internal Variability is 0.0 ± 0.1 C – but, as we have just seen, that assumption is now looking increasingly unlikely.

Fig 10.5 from AR5. ANT is the net anthropogenic forcing. I do not understand how the ANT errors get smaller after adding GHG and OA together!

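As an aside, one way a combined estimate can end up with a smaller error bar than either of its components is if the two component estimates are strongly anti-correlated, which regression-based estimates of the GHG and aerosol contributions can easily be, since their patterns partly offset each other. The following is only a minimal numerical sketch of that effect, with purely illustrative numbers, not a reconstruction of the AR5 calculation:

    import numpy as np

    # Hypothetical 1-sigma errors on the individual components (deg C)
    sigma_ghg, sigma_oa = 0.4, 0.35
    rho = -0.9          # assumed anti-correlation between the two estimates

    # Variance of the sum GHG + OA, allowing for the covariance term
    cov = rho * sigma_ghg * sigma_oa
    var_ant = sigma_ghg**2 + sigma_oa**2 + 2.0 * cov
    print("sigma(ANT) = %.2f C" % np.sqrt(var_ant))   # ~0.17 C, smaller than either input

Whether that is what chapter 10 actually does is, as noted above, hard to tell from the text.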

Invoking the AMO/PDO to explain the pause in global warming essentially means that Internal Variability can no longer be assumed to be zero. It must instead have a significant signal during the 1951-2010 study period. Let’s look at how large that signal could be. To do this we simply fit the HADCRUT4 data including a 60-year AMO/PDO term; a sketch of such a fit is given below the figure.

 

Figure 1. A Fit to HADCRUT4 temperature anomaly data

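Here is a minimal sketch of the kind of fit shown in Figure 1. The file name and column layout, the quadratic form of the secular term and the fixed 60-year period are assumptions made purely for illustration, not the exact recipe used for the figure:

    import numpy as np
    from scipy.optimize import curve_fit

    # Columns assumed: year, annual temperature anomaly (deg C)
    year, anom = np.loadtxt("hadcrut4_annual.txt", usecols=(0, 1), unpack=True)

    def model(t, a, b, c, amp, t_peak):
        # smooth secular (quasi-anthropogenic) term + 60-year oscillation peaking at t_peak
        return (a + b * (t - 1850) + c * (t - 1850)**2
                + amp * np.cos(2 * np.pi * (t - t_peak) / 60.0))

    p0 = [-0.4, 0.002, 1e-5, 0.1, 1940.0]        # rough starting values
    popt, pcov = curve_fit(model, year, anom, p0=p0)

    osc = lambda t: popt[3] * np.cos(2 * np.pi * (t - popt[4]) / 60.0)
    print("oscillation amplitude        = %.2f C" % popt[3])
    print("oscillation change 1951-2010 = %.2f C" % (osc(2010) - osc(1951)))

The difference between the oscillation term evaluated in 1951 and in 2010 is the quantity that gets reassigned to Internal Variability below.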

The ‘observed’ global warming component is then isolated by measuring the warming from one peak of the oscillation to the next peak, thereby removing the internal variability effect. The observed signal from 1951-2010 is then reduced to 0.4 ± 0.1 C. Including the NMO oscillation makes it clear that there remains a net natural component to the warming during the 1951-2010 study period of 0.2 ± 0.1 C. The AR5 attribution analysis is therefore wrong and should be corrected accordingly. We do this by assigning 0.2 C of the original ‘observed’ warming to Internal Variability. The new Fig 10.5 now looks as follows.

Updated Fig 10.5. This shows the attribution of warming from 1950 to 2010, where the natural internal variation is estimated by taking the 1940-2010 change as the correct measure of the observed (forced) warming, because 1940 and 2010 are both peaks of the natural variation.

We see immediately that the GHG component is now far too large and, furthermore, that the ‘fingerprint’ ANT component lies 2 sigma beyond the ‘observed’ global warming component. In order to explain the pause in warming, Mann has had to accept the existence of a significant natural internal component. This then overturns the chapter 10 attribution analysis.
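The 2-sigma figure can be checked with some back-of-envelope arithmetic. The numbers below are approximate readings of Fig 10.5 and of the text above (with ANT taken to roughly equal the original observed warming, as AR5 itself states), so treat them as illustrative rather than exact:

    obs_orig = 0.6      # approximate observed warming 1951-2010 (deg C)
    nat_osc  = 0.2      # warming assigned above to the AMO/PDO (deg C)
    obs_adj  = obs_orig - nat_osc     # = 0.4 C, the peak-to-peak estimate
    ant      = obs_orig               # AR5: ANT best estimate "similar to the observed warming"
    sigma    = 0.1                    # approximate error bar on each component (deg C)

    # Discrepancy expressed in units of the ~0.1 C error bar
    print("ANT - adjusted observed = %.1f sigma" % ((ant - obs_adj) / sigma))   # -> 2.0

Combining the two 0.1 C error bars in quadrature would give roughly 1.4 sigma instead, but the discrepancy remains.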

I am convinced that there is indeed a six-decade-long oscillation in the temperature data, which the IPCC studiously avoided discussing in AR5. I think the main reason was to avoid weakening the headline attribution statement, mainly for political reasons. This position is now becoming untenable. In fact chapter 9, which evaluates model performance, essentially shows that there is only a 5% chance the models can explain the hiatus without including an internal multidecadal oscillation. In IPCC parlance this translates to: it is “extremely unlikely” that AR5 models alone can explain the hiatus in global warming (less than a 5% chance that they can).

 

 


20 Responses to IPCC Scientist’s dilemma

  1. omanuel says:

    The AGW fable exposed totalitarian rule of science under the United Nations.

    Michael Crichton tried to warn us of this in his AGW novel, “State of Fear.”

    Research on “Solar energy” revealed evidence Stalin’s troops captured:

    1. Japan’s atomic bomb plant at Konan, Korea in AUG 1945;

    2. The crew of an American B-29 bomber in AUG 1945; and

    3. Held them for negotiations in SEPT 1945; leading to

    4. Formation of the UN in OCT 1945; followed by

    5. Seventy years (1945-2015 = 70 yrs) of lies disguised as consensus science to enslave and isolate humanity from reality (God).

    See: “Solar energy” https://dl.dropboxusercontent.com/u/10640850/Solar_Energy_For_Review.pdf

  2. Back in my 2009 submission to the House of Commons Climategate inquiry I said that the 20th century signal was indistinguishable from noise. That is to say it was no bigger or different from normal natural variation. That was and remains my expert judgement having looked at similar signals and it is even more true today than it was back then.

    You’ve got to be either a complete idiot with no idea of statistics, noise & signals, delusional or a fraud to suggest something like >99% certain.

    History will not judge these academics at all well.

  3. Bob Peckham says:

    On this issue, I wonder if you saw the BBC TV programme “Climate Change By Numbers” which was shown on Monday 2nd March at 9pm? If you did not see it, it is still available for viewing on the BBC iPlayer for 28 days. If I understood it, the programme tells us that recent climate change can be summarised in three numbers: 0.85 degC (warming since the 1880s), 95% (confidence that at least half of this warming is due to AGW) and 1 trillion tonnes (carbon burned). Of course they were trying to simplify things for the average viewer, but I found it surprising that they could discuss climate change, which has been going on for millions of years, without mentioning anything that happened before 1880! (The excuse I suppose being that we only have good measurements since then.)
    How do the above three numbers square up with yours, Clive?

    • Clive Best says:

      Hi Bob,
      I saw the programme yesterday on catch-up. Here is a comment I left on Bishop Hill:

      Two of the three numbers selected by the programme were perhaps the most controversial parts of AR5.

      0.85C: This covers the standard temperature anomaly analysis showing a rise of 0.85C since 1880 and is uncontroversial. However, the recent pause in warming is controversial and was dismissed as a statistical fluke which may also be biased because of undersampling in the Arctic. Incredibly, the evaluation of climate models in chapter 9 of WG1 shows that the chance of a pause happening by accident is only 5%. In IPCC parlance this translates to:
      “It is very unlikely that CMIP5 models can explain the hiatus in warming observed since 1998”.

      95%: This refers to the statement in the SPM: “It is extremely likely that human activities caused more than half of the observed increase in GMST from 1951 to 2010. The best estimate of the human-induced contribution to warming is similar to the observed warming over this period.” However, this is controversial, as there is no actual description of the statistical analysis which led to this conclusion in chapter 10 of AR5. This is perhaps the most opaque chapter in the report. On what basis were the authors able to increase their confidence in the models from 90% to 95% relative to AR4, in view of the fact that there has been no warming since AR4? Surely the confidence should, if anything, be less than in AR4. The political pressure on scientists to forever increase their “certainty” about man-made global warming is intense. They clearly succumbed to that pressure. Chapter 10 studiously avoided any discussion of the pause. Attempts now to explain the pause as due to natural variability – see for example Michael Mann’s recent article on RealClimate – dig a bigger hole for them. See my take on this above.

      1 trillion tons: This refers to the maximum amount of carbon we can burn to avoid exceeding 2C of warming. It is based on the iconic AR5 graph – Figure 10 in the Summary for Policymakers. Myles Allen appeared on the BBC with 10 lumps of coal on a table to explain how we had already burned 5 of them, leaving just 5 left to burn if we want to avoid a catastrophe. It is a simple, powerful message understandable by policy makers – but is it actually correct?
      Everyone knows that CO2 forcing goes as the logarithm of concentration, ΔS = 5.3 ln(C/C0), so how can the temperature depend linearly on CO2 content? The answer of course is that it can’t, since ΔT = λΔS. So how come Myles Allen and co. managed to get a linear dependence? The answer is that they make a huge assumption that the carbon cycle will saturate in the future. Currently about half of our CO2 emissions are absorbed by the natural world. There is an assumption in so-called Earth System models that this will stop in the near future, until practically all our emissions remain in the atmosphere. There is as yet no evidence whatsoever that this is happening. If saturation doesn’t occur then we have over 2 trillion tons to go before reaching a 2C limit. See: http://clivebest.com/blog/?p=5300
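      A rough sketch of that arithmetic is below. The airborne fraction of 0.5, the conversion of 2.13 GtC per ppm of CO2 and the sensitivity of 1.5 C per doubling (the figure mentioned elsewhere in this thread) are illustrative assumptions, not numbers taken from AR5:

        import numpy as np

        C0 = 280.0                            # pre-industrial CO2 (ppm)
        lam = 1.5 / (5.3 * np.log(2.0))       # sensitivity implied by 1.5 C per doubling, K per W/m^2
        GTC_PER_PPM = 2.13                    # approximate conversion, GtC per ppm of CO2

        def warming(cum_emissions_GtC, airborne_fraction):
            # concentration after burning the given cumulative carbon
            C = C0 + airborne_fraction * cum_emissions_GtC / GTC_PER_PPM
            dS = 5.3 * np.log(C / C0)         # CO2 forcing, W/m^2
            return lam * dS                   # warming, deg C

        for E in (1000.0, 2000.0):            # cumulative emissions, GtC
            print("E = %4.0f GtC: dT = %.1f C (half absorbed), dT = %.1f C (nothing absorbed)"
                  % (E, warming(E, 0.5), warming(E, 1.0)))

      With half of the emissions still being absorbed it takes roughly 2000 GtC to reach 2 C; with a saturated sink it takes only about 1000 GtC, which is essentially the linear ‘trillion tonne’ picture.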

  4. Les Johnson says:

    Do you need to clean up the last chart (updated 10.5), and get rid of the y-axis gradations?

  5. Pingback: Último descubrimiento calentólogo: La Pausa no existe, pero por fin hemos “comprendido” lo que la causa | PlazaMoyua.com

  6. JPC Lindstrom says:

    If the natural oscillation components (in the oceans) were to be combined with the supposed decrease in cloud cover, couldn’t that, in total, actually explain all the observed warming?

    • Clive Best says:

      I think the two are related. The downturn in the oscillation results in a rise in cloud cover. There is an underlying AGW but its effect is modest: about 1.5 C warming for a doubling of CO2.

      • HaroldW says:

        While I have reservations about the results of fitting to a rather limited dataset, 1.5 K is pretty much midway between the transient sensitivity of Otto et al. (1.3 K/doubling) and CMIP5 models (TCR mean&median 1.8 K).

  7. Pingback: Week in review | Climate Etc.

  8. Pingback: Week in review | Enjeux énergies et environnement

  9. JK says:

    Doesn’t your updated figure 10.5 double count the correction? As I understand figure 10.5, the first black bar is supposed to be a representation of the observational record. The various bars underneath this are supposed to add up to the observed total.

    I would represent your theory that 0.2 degrees C of warming since 1951 is due to a natural oscillation by adding the red bar to internal variability, as you have done.

    But why then subtract that from the observed warming? How could it not be observed? Warming due to internal variability is just as real as a forced response. I don’t understand why you would subtract it. In this post you don’t seem to be contesting the IPCC’s account of the observational record – just the explanation for it.

    An alternative might be to subtract the 0.2 degrees from the observed record as an attempt to isolate the forced component of change. But in that case I would change the labels on the chart. The ‘observed warming’ would have to be relabeled ‘estimated forced warming’. The reason is that you would have subtracted away your best estimate of internal variability. The ‘internal variability’ would have to be relabeled ‘estimated residual internal variability’. This would rightly be zero, as you have subtracted away all of your best estimate.

    In summary I can make sense of plotting the red bar as internal variability or of subtracting the value from the observed. I cannot make sense of doing both at once.

    • clivebest says:

      You’re probably right, I was trying to be too clever.

      The ‘Observed’ shown is the net residual after subtracting the 0.2C from the original ‘Observed’. This is confusing. In reality the point I really want to make is:

      Observed warming = Observed ANT + 0.2

      Observed ANT = Observed – 0.2

      Model ANT = 1.5 × Observed ANT, i.e. the models are overestimating the anthropogenic component by about 50%.

      The IPCC chose the period 1950 to 2010 because it maximises the apparent warming trend. If instead you choose the period 1940 to 2010 you get 0.2C less warming!

  10. R Graf says:

    Clive, IPCC big-wigs Jochem Marotzke and Piers Forster (M&F) published a paper in Nature in Jan 2015 that clears the IPCC models, CMIP5, as good-to-go. They compared 15-year and 62-year intervals from 1900 to 2012 and concluded that the short-term periods were dominated by unforced variability and the longer-term periods by GHG forcing. These conclusions, although motivated by the wish to keep the pause explained as unforced variability, then went much further into two very controversial and, I believe, unfounded claims: “For either trend length, spread in simulated climate feedback leaves no traceable imprint on GMST trends or, consequently, on the difference between simulations and observations. The claim that climate models systematically overestimate the response to radiative forcing from increasing greenhouse gas concentrations therefore seems to be unfounded.”

    There was an 800-plus comment post on Climateaudit.org about this and there were many points of attack but no consensus of one provable mathematical fatal flaw (yet). Of course, M&F2015 refused to supply their data but they did respond on Climate-lab-book here: http://www.climate-lab-book.ac.uk/2015/marotzke-forster-response/

    M&F used the model simulations archive to diagnose the GHG forcing F, adjusted for radiative feedbacks, from the simulated temperature behaviour, and used the following equation in a regression against the observed temperature record: ΔT = ΔF/(α + κ),
    where α is the feedback parameter (effects from cloud and lapse-rate changes, etc.) and κ the ocean heat uptake efficiency.
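    To make the idea concrete, here is a toy version of that kind of across-model regression, using made-up numbers rather than M&F’s data or their exact method:

      import numpy as np

      rng = np.random.default_rng(0)
      n_models = 36
      dF    = rng.normal(1.0, 0.2, n_models)   # forcing change over the interval, W/m^2
      alpha = rng.normal(1.1, 0.3, n_models)   # feedback parameter, W/m^2/K
      kappa = rng.normal(0.7, 0.2, n_models)   # ocean heat uptake efficiency, W/m^2/K

      # energy-balance predictor dT = dF / (alpha + kappa), plus internal-variability noise
      predictor = dF / (alpha + kappa)
      trend = predictor + rng.normal(0.0, 0.1, n_models)

      # ordinary least squares: trend ~ b0 + b1 * predictor
      A = np.column_stack([np.ones(n_models), predictor])
      coef, _, _, _ = np.linalg.lstsq(A, trend, rcond=None)
      r2 = 1.0 - np.sum((trend - A @ coef)**2) / np.sum((trend - trend.mean())**2)
      print("slope = %.2f, R^2 = %.2f" % (coef[1], r2))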

    Question: Is one able to make any conclusions about validation by simply comparing the models’ fitness to the temperature data from which they were derived? Is that not circular, no matter how innovative a method you have?

    The purpose of M&F15 was to provide evidence that the models are bias-free. The critics apparently have been saying that models are just thrown-together combinations of forcings and feedbacks tuned to approximate the recorded GMST from 1850-1998, which, as Clive pointed out, is becoming more and more apparent. Considering that the lowest theoretical CO2 forcing was in 1850 and the highest is at present, the IPCC needs to explain how the last 16-year trend is flat. Okay, M&F want to validate the models despite this, but how? The modelers had access to the temp record, so the fact that their simulations generally track the observed record and converge near the end of the 20th century is no validation, especially since each of the 56 model pairs has a different recipe for getting there. And, considering that what makes validation scientific is the absence of opacity or places in which bias of tuning and trickery can lurk, proof is found only in the power to predict. Thus the only thing one can do is wait, and so far it doesn’t look promising. But that did not stop the Max Planck Institute (Marotzke’s employer) from issuing a worldwide press blast that the models are validated with M&F’s “innovative method.”

    Clive, do you have any thoughts? Here is M&F’s paper: http://www.indiaenvironmentportal.org.in/files/file/global%20temperature%20trends.pdf

    Although M&F do not mention it in their paper, almost all their data came from Forster’s 2013 paper deriving variables for 36 models using an “innovative” reverse diagnostic of their temperature simulation trends. Another question: Are there hazards in using an equation in reverse if there are thermodynamically irreversible components to the equation, due to it describing a steady state rather than an equilibrium?

  11. PlanetaryPhysicsGroup says:

     

    What is the sensitivity for each 1% of the most prolific “greenhouse gas” (namely water vapor) in Earth’s atmosphere?

    To help any of you answer the question, here are some facts:

    Fact 1: Water vapor absorbs a significant amount of incident solar radiation, as shown here: https://en.wikipedia.org/wiki/Air_mass_(solar_energy). The atmosphere absorbs about 20% of incident solar radiation and that absorption is not by nitrogen, oxygen or argon. (Carbon dioxide also absorbs incident photons in the 2.1 micron range which each have about 5 times the energy of 10 micron photons coming up from the surface. On Venus over 97% of the energy from incident solar radiation is retained in carbon dioxide molecules.)

    Fact 2: The concentration of water vapor varies between about 1% and 4%. (The concentration of carbon dioxide above Mauna Loa is 0.04% and, as this graph shows, temperatures there have not increased since 1959.)

    Fact 3: The IPCC claims that water vapor does nearly all of “33 degrees of warming” of Earth’s surface. It must do most of it because it dominates CO2 in concentration and also in the number of frequency bands in which it absorbs and radiates. But in fact water vapor lowers the “lapse rate” so that the temperature profile rotates downwards at the surface end, making the surface cooler. (In fact, as per my paper, there is no 33 degrees of warming being done by any back radiation because it is gravity which props up the surface end of the temperature profile.)

    When you have answered the question, work out how much hotter the IPCC conjecture implies a region with 4% water vapor would be than a similar region with 1% water vapor at a similar altitude and latitude. Then look up the study in the Appendix of my paper and see what real world data tells us about how water vapor cools rather than warms. And if you don’t believe my study, then spend half a day doing your own.

    Finally, note that it is quite clear in the energy diagram here and the text I wrote beneath it that they have certainly added 324W/m^2 of back radiation to 168W/m^2 of solar radiation in order to use this in Stefan Boltzmann calculations to determine the temperature of the surface. Obviously they worked out by difference what the back radiation figure had to be and made it 66% greater than the 195W/m^2 of upward radiation from the atmosphere to space. They need not have bothered, because their whole paradigm is wrong, because they ignored the fact that the Second Law of Thermodynamics tells us that gravity forms the temperature and density gradients – which represent the state of thermodynamic equilibrium.

  12. Brian H says:

    There’s a logical principle here: if NatVar can arbitrarily and unpredictably overwhelm the postulated AGW effect once, then it can do so any time. It is always, therefore, in control.
