Logic of Gavin

Gavin Schmidt (@climateofgavin) has been defending the detection and attribution chapter in AR5 against criticisms made by Judith Curry and others. His technique is to dismiss such criticism either because it covers a slightly different time scale or because the authors fail to understand the IPCC ‘fingerprinting’ studies. He avoids addressing the growing evidence of a 60-year natural oceanic heat cycle, or the likelihood that its effect on warming between 1950 and 2005 is non-zero. This is despite the fact that a downturn in this cycle can also explain the 16-year pause in warming since 1998. The IPCC fingerprinting studies described in chapter 10 of AR5 seem rather opaque, even a bit like ‘black magic’, because at their core the attribution always falls back on ‘expert’ assessments whose quantitative reasoning is not documented. In particular, why do the expert assessments find that ‘natural variation’ in the temperature data is essentially zero? And why is the observed warming from 1940 to 2000 only 0.42C, compared with the 0.6C from 1951 to 2005 on which the attribution studies are based?

Key examples of Gavin’s logic in response to any counter-argument are as follows:

– First example: in reply to Judith Curry’s post on 50-50 attribution.

Watch the pea under the thimble here. The IPCC statements were from a relatively long period (i.e. 1950 to 2005/2010). Judith jumps to assessing shorter trends (i.e. from 1980) and shorter periods obviously have the potential to have a higher component of internal variability. The whole point about looking at longer periods is that internal oscillations have a smaller contribution. Since she is arguing that the AMO/PDO have potentially multi-decadal periods, then she should be supportive of using multi-decadal periods (i.e. 50, 60 years or more) for the attribution.

Well, I am not sure she was actually saying that. She was using a hypothetical short time period just to make her point clearer, and Gavin jumped on that to dismiss the rest of her points. Her main argument was indeed about the 1951-2010 period.

Here, however, Gavin appears to be clutching at straws to avoid the conclusion of Tung and Zhou that the AMO oscillation is real.

What is the evidence that all 60-80yr variability is natural? Variations in forcings (in particularly aerosols, and maybe solar) can easily project onto this timescale and so any separation of forced vs. internal variability is really difficult based on statistical arguments alone (see also Mann et al, 2014). Indeed, it is the attribution exercise that helps you conclude what the magnitude of any internal oscillations might be. Note that if we were only looking at the global mean temperature, there would be quite a lot of wiggle room for different contributions. Looking deeper into different variables and spatial patterns is what allows for a more precise result.

The assessment that internal variability is zero is just based on “expert assessment”. So if you or I can see a clear 60-year oscillation in the temperature data, we are simply deluded because we are not “experts”. Only experts can interpret the ‘fingerprint’ of global warming.

– Second example: does the following not seem more like downright fiddling?

Judith’s argument misstates how forcing fingerprints from GCMs are used in attribution studies. Notably, they are scaled to get the best fit to the observations (along with the other terms). If the models all had sensitivities of either 1ºC or 6ºC, the attribution to anthropogenic changes would be the same as long as the pattern of change was robust. What would change would be the scaling – less than one would imply a better fit with a lower sensitivity (or smaller forcing), and vice versa (see figure 10.4).

So the ‘experts’ can scale the anthropogenic forcings in the models up or down so as to exactly match the data. And because the models use stochastic internal variability while excluding multi-decadal variation, the ‘expert opinion’ of the modellers will be that natural variation averages to zero. In other words, when you scale zero by any factor whatsoever, you still get zero!
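To make that point concrete, here is a trivial numerical sketch (all the numbers are invented for illustration): if the ‘natural’ fingerprint handed to the fit is ensemble-averaged stochastic noise that sums to zero over the attribution period, no scaling factor can ever turn it into a real contribution.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1951, 2011)

# Hypothetical "natural variability" fingerprint: ensemble-averaged stochastic
# noise, which by construction averages to zero over the attribution period
nat_fingerprint = rng.normal(scale=0.02, size=years.size)
nat_fingerprint -= nat_fingerprint.mean()  # ensemble averaging removes the mean

for scale in (0.5, 5.0, 50.0):
    net = scale * nat_fingerprint.mean()
    print(f"scaling factor {scale:5.1f}: net natural contribution = {net:.3f} C")
# Whatever the scaling factor, the net natural contribution is identically zero
```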

– Third example: now let’s look at Fig 10.5 from AR5.

Fig 10.5 from AR5. ANT is the net anthropogenic forcing. I do not understand how the ANT errors get smaller after adding GHG and OA together!

Both Paul Matthews and I queried in Oct 2013 how it was possible for the ANT forcing to have such small error bars when those of its component parts were much larger: GHG has an error of 0.4C and OA (aerosols) an error of 0.35C, yet ANT has an error of just 0.1C. The normal way to combine independent errors is to add them in quadrature (an RMS sum), which would have given an ANT error of about 0.53C.
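As a back-of-envelope check, here is a minimal Python sketch, assuming the two component errors are independent (which, as we shall see, is precisely the assumption Gavin disputes):

```python
import math

# Combine independent errors in quadrature (the RMS sum)
sigma_ghg = 0.40  # assessed error on the GHG contribution (deg C)
sigma_oa = 0.35   # assessed error on the aerosol (OA) contribution (deg C)

sigma_ant = math.sqrt(sigma_ghg**2 + sigma_oa**2)
print(f"Quadrature-combined ANT error: {sigma_ant:.2f} C")  # ~0.53 C, not 0.1 C
```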

This was Gavin’s response.

gavin says: 14 Oct 2013 at 1:01 PM
“Just for completeness, and to preempt any confusion, this post from Paul Matthews, is a typical example of what happens when people are so convinced by their prior beliefs that they stop paying attention to what is actually being done. Specifically, Matthews is confusing the estimates of radiative forcing since 1750 with a completely separate calculation of the best fits to the response for 1951-2010. Even if the time periods were commensurate, it still wouldn’t be correct because (as explained above), the attribution statements are based on fingerprint matching of the anthropogenic pattern in toto, not the somewhat overlapping patterns for GHGs and aerosols independently.

Here is a simply example of how it works. Let’s say that models predict that the response to greenhouse gases is A+b and to aerosols is -(A+c). The “A” part is a common response to both, while the ‘b’ and ‘c’ components are smaller in magnitude and reflect the specific details of the physics and response. The aerosol pattern is negative (i.e. cooling). The total response is expected to be roughly X*(A+b)-Y*(A+c) (i.e. some factor X for the GHGs and some factor Y for the aerosols). This is equivalent to (X-Y)*A + some smaller terms. Thus if the real world pattern is d*A + e (again with ‘e’ being smaller in magnitude), an attribution study will conclude that (X-Y) ~= d.

Now since ‘b’ and ‘c’ and ‘e’ are relatively small, the errors in determining X and Y independently are higher. This is completely different to the situation where you try and determine X and Y from the bottom up without going via the fingerprints (A+b or A+c) or observations (A+d) at all.”
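Gavin’s algebra is easy to reproduce with a toy regression. The sketch below is not the actual AR5 machinery; it simply builds synthetic data with the structure he describes: two fingerprints sharing a common pattern A with small details b and c, plus invented ‘true’ scalings X = 1.4 and Y = 0.6. It demonstrates that the net amplitude X - Y is recovered with much smaller error bars than X or Y individually.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 600                        # hypothetical number of spatio-temporal points

A = rng.normal(size=n)         # common large-scale response pattern
b = 0.1 * rng.normal(size=n)   # small GHG-specific detail
c = 0.1 * rng.normal(size=n)   # small aerosol-specific detail

ghg_fp = A + b                 # model GHG fingerprint
aer_fp = -(A + c)              # model aerosol fingerprint (cooling)

# Invented "observations": X*(A+b) - Y*(A+c) plus noise, with X=1.4, Y=0.6
obs = 1.4 * ghg_fp + 0.6 * aer_fp + 0.3 * rng.normal(size=n)

# Ordinary least-squares fit for the scaling factors [X, Y]
M = np.column_stack([ghg_fp, aer_fp])
coef, ss_res, *_ = np.linalg.lstsq(M, obs, rcond=None)
cov = np.linalg.inv(M.T @ M) * ss_res[0] / (n - 2)  # parameter covariance

X, Y = coef
sd_X, sd_Y = np.sqrt(cov[0, 0]), np.sqrt(cov[1, 1])
sd_net = np.sqrt(cov[0, 0] + cov[1, 1] - 2 * cov[0, 1])  # error on (X - Y)

print(f"X     = {X:.2f} +/- {sd_X:.2f}")        # loosely constrained alone
print(f"Y     = {Y:.2f} +/- {sd_Y:.2f}")        # loosely constrained alone
print(f"X - Y = {X - Y:.2f} +/- {sd_net:.2f}")  # net term: much tighter
```

On this toy setup the individual scalings come back several times more uncertain than their difference, which is the effect Gavin describes. Whether the real fingerprints are this degenerate is, of course, another question.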

Gavin’s reply is remarkable (apart from the time-scale rebuff). In it he confirms that anthropogenic indeed means GHG + aerosols, but claims that the two are directly proportional with opposite sign. So apart from some minor terms, Aerosols ≈ -const*GHG. That really is the same thing as saying

ANT = GHG – (fudge factor)*GHG

ANT = (1 – fudge factor)*GHG = all ‘observed’ warming, eliminating the need for any natural component at all.

The fudge factor is the year-to-year tuning needed to make CMIP5 hindcasts agree with the temperature data.

Now look at what Chapter 7 of AR5 actually has to say about aerosols:

‘Aerosols dominate the uncertainty in the total anthropogenic radiative forcing. A complete understanding of past and future climate change requires a thorough assessment of aerosol-cloud-radiation interactions.’

Or consider what a real aerosol expert has to say:

‘The IPCC is effectively saying that the cooling influence from aerosols is slightly weaker than previously estimated and that our understanding has improved.

Aerosol radiative forcing estimates.
The other values refer to different ways of calculating the impact and it is these numbers that inform the overall value in the report. The satellite based value refers to studies where satellite measurements of aerosol properties are used in conjunction with climate models; they are not wholly measurement based. In terms of studies using climate models on their own, the IPCC used a subset of climate models for their radiative forcing assessment, choosing those that had a “more complete and consistent treatment of aerosol-cloud interactions”.

The satellite based central value of -0.85 W/m2 is less negative than the central value from the climate models, which means the models indicate more cooling than the satellite based estimate. Compared to the subset of climate models that the IPCC used for their radiative forcing judgement, there is little overlap between their ranges also.

If we examine more details from the climate models, we see that there are large differences between models in terms of what types of aerosol it considers to be important. For example some models say that dust is a major contributor to the global aerosol burden, while others disagree. These are important details as climate models can sometimes broadly agree in terms of the radiative forcing estimate they provide but for very different reasons. Black carbon is another species that can contribute to varying degrees in different models, which is important as it warms the atmosphere; how a model represents black carbon is going to have a strong influence on the reported cooling. Nitrate is a potentially important species that often isn’t even included in climate models.

The current state of understanding of aerosols suggests that they’ve exerted a cooling influence on our climate, which has offset some of the warming expected from the increase in greenhouse gases in our atmosphere. Improving this understanding will be crucial for assessing both past and future climate change.’

So these experts emphasize the uncertainties in the net forcing from aerosols and clouds. There is little evidence that anthropogenic aerosol cooling scales in proportion to CO2 emissions. Furthermore, the models still appear to exaggerate the cooling effect of aerosols.

Finally, here is a classic Gavin put-down.

I have tried to follow the proposed logic of Judith’s points here, but unfortunately each one of these arguments is either based on a misunderstanding, an unfamiliarity with what is actually being done or is a red herring associated with shorter-term variability. If Judith is interested in why her arguments are not convincing to others, perhaps this can give her some clues.

This simply dismisses any evidence of shorter-term variation that is visibly present in the data, because it goes against the ‘expert fingerprint assessment’. Nor is the AR5 attribution time-scale anywhere near long enough for natural variation to average to zero, as can be seen in the figure below.

Figure 1. A fit to HADCRUT4 temperature anomaly data.

It looks very much like a 60-year oscillation to me – but then again, I am no ‘expert’!
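For anyone who wants to check this for themselves, a fit of this kind takes only a few lines. The sketch below is one minimal version (the data file name, its two-column layout, and the exact functional form behind Figure 1 are all assumptions here): a linear trend plus a sinusoid with a fixed 60-year period, fitted to annual HADCRUT4 anomalies.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumes annual HADCRUT4 global anomalies saved as two columns: year, anomaly
year, anom = np.loadtxt("hadcrut4_annual.txt", usecols=(0, 1), unpack=True)

def trend_plus_cycle(t, a, b, amp, phase):
    """Linear trend plus an oscillation with a fixed 60-year period."""
    return a + b * (t - 1850.0) + amp * np.sin(2.0 * np.pi * (t - 1850.0) / 60.0 + phase)

popt, pcov = curve_fit(trend_plus_cycle, year, anom, p0=[-0.5, 0.005, 0.1, 0.0])
a, b, amp, phase = popt
print(f"underlying trend: {b:.4f} C/yr; 60-year amplitude: {abs(amp):.2f} C")
```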

The quotes above are taken from these posts:

  1. Natural versus Anthropogenic
  2. IPCC attribution statements redux: A response to Judith Curry
  3. The 50 50 argument
  4. The IPCC AR5 Attribution Statement



7 Responses to Logic of Gavin

  1. omanuel says:

    Logic? What logic?

    Gavin cannot understand the causes of climate change because technological improvements in solar measurements after WWII were more than offset by dogmatic belief in the Standard Solar Model of Hydrogen-filled stars [1].

    Fred Hoyle explained this change on pages 153-154 of his autobiography [2] when mainstream opinions on the internal composition of ordinary stars were abruptly changed from

    a.) Mostly iron (Fe) in 1945 to
    b.) Mostly hydrogen (H) in 1946

    References:

    1. “Super-fluidity in the solar interior: Implications for solar eruptions and climate”, Journal of Fusion Energy 21, 193-198 (2002): http://www.springerlink.com/content/r2352635vv166363/ or http://www.omatumr.com/abstracts2003/jfe-superfluidity.pdf

    2. Fred Hoyle, Home Is Where the Wind Blows [University Science Books, 1994, 441 pages]. Pages 153-154 explain how the Standard Solar Model came into existence in 1946 with no discussion or debate!

  2. tallbloke says:

    So since NAT is effectively zero in the IPCC’s estimation, it appears Gav believes all climate change is anthropogenically caused.

    But wait, there’s a sleight of hand here. If Solar forcing was positive 1950-2003 but is now strongly negative, then the forcing is zero 1950-2014, but the effect of the downturn isn’t yet seen because it’s ‘still in the pipeline’ due to the thermal inertia of the system.

    If temperatures go south in the next few years, it will also be proof that the Sun was responsible for most of the 1975-2005 warming, aided and abetted by the positive phase of the AMO/PDO.

    • clivebest says:

      NAT is zero only because the models assume it is zero. It is as simple as that.
      AGW is real but over-hyped. Egg will eventually emerge from the deep ocean to meet ‘scared scientists’ faces.

  3. Alastair B. McDonald says:

    “… this post from Paul Matthews, is a typical example of what happens when people are so convinced by their prior beliefs that they stop paying attention to what is actually being done.”

    Isn’t this exactly the problem that Gavin et al. have? They are so convinced that current climate science is correct that they are blind to where it fails. In your very first post you pointed out that if all the radiation is absorbed in the boundary layer, then how can increasing CO2 cause more warming? Their explanation, that the effective emission level moves to a higher and colder altitude in the troposphere and so shifts the lapse rate, is nonsense. The outgoing radiation too is saturated, so any increase in CO2 will not cause an imbalance at the top of the atmosphere.

  4. pdtillman says:

    @Alastair B. McDonald,
    Gavin Schmidt is often “a typical example of what happens when people are so convinced by their prior beliefs that they stop paying attention to what is actually being done.”

    Ayup. Blinded by science! At least by Confirmation Bias.

    I don’t see the previous bias of the GISS (under former chair/activist James Hansen) as likely to change.

    • clivebest says:

      There would appear to be a certain amount of intellectual arrogance on show in dismissing any criticism. I find the ‘detection and attribution’ analysis to be opaque, relying on ‘expert opinion’ rather than any sort of statistical ‘fingerprinting’.

      What is actually being done is to rely on the prior beliefs of these ‘experts’ and their models. The continuing hiatus is proving ever more difficult for them.

  5. Andyj says:

    I’ve read just about all of Judith Curry’s post and the comments, starting with the subject matter… So much text in an attempt to obfuscate the IPCC’s abject failure to understand reality.

    The AGW commenters. What fun! They must know you know they are ducking and diving.

    Clive, Tallbloke
    Solar and HADCRUT4 make a good picture here. I will leave it to you why it tails up at the end. I wouldn’t put it past those “adjustments”.
    http://www.woodfortrees.org/plot/sidc-ssn/mean:12/from:1850/plot/hadcrut4gl/mean:48/offset:1/scale:100
