The paper “Spread in model climate sensitivity traced to atmospheric convective mixing” by Sherwood, Bony & Dufresne (Nature, 2 January 2014) has generated a lot of media hype claiming that global warming by 2100 will be > 4C. It has also recently been highlighted on realclimate.
Professor Sherwood, of the University of New South Wales in Australia, told The Guardian: “This study breaks new ground twice: first by identifying what is controlling the cloud changes and second by strongly discounting the lowest estimates of future global warming in favour of the higher and more damaging estimates.
“4C would likely be catastrophic rather than simply dangerous,” Sherwood said. “For example, it would make life difficult, if not impossible, in much of the tropics, and would guarantee the eventual melting of the Greenland ice sheet and some of the Antarctic ice sheet” – causing sea levels to rise by many metres as a result.
The basis of this paper is the authors’ interpretation that the lower-troposphere circulation described by high-sensitivity models agrees better with observations than that described by low-sensitivity models. But just how robust is this claim, and what does it really mean for future global warming? I have been struggling my way through the details of this paper for the last two days. My conclusion is that all the media hype and the claims made by Prof. Sherwood simply don’t stand up to serious scrutiny. There are far too many inconsistencies in the analysis to justify any such bold statements as the one above. The evidence the paper provides from direct observation is poor and selective, and it says little about climate sensitivity, its evolution, or even that much about clouds.
The authors appear to me to have assumed at the outset what they consequently set out to prove. They assumed that increasing low-tropospheric mixing as the earth warms will lead to fewer low clouds despite higher evaporation rates from the ocean. Fewer low-level clouds mean net positive cloud feedback in climate models, despite an implied increase in medium and convective clouds. The paper then analyses 42 climate models with differing sensitivities (ECS) to demonstrate that those with high sensitivity indeed have more low-tropospheric mixing in the tropics than those with low sensitivity. To show this they define two indices to “measure” this mixing effect.
- S = (ΔR(700–850)/100 − ΔT(700–850)/9)/2, which they interpret as a drying term. ΔR is the difference in relative humidity (%) and ΔT the difference in temperature (K) between 700 mb and 850 mb. S is high if relative humidity falls off with height faster than the lapse-rate temperature fall would imply. The pressure–height conversion is 700 mb ≈ 3000 m and 850 mb ≈ 1500 m (top of the boundary layer).
- D = ⟨(w2 − w1)H(w2 − w1)H(−w1)⟩ / ⟨−w2 H(−w2)⟩, where w1 is the vertical pressure velocity below the boundary layer, w2 is that above it, H is the Heaviside step function, and ⟨⟩ denotes an average (negative w means ascending air).

The authors describe this as the ratio of ascending air mixing below the boundary layer to mixing above it, or, as they call it, large-scale lower-tropospheric mixing. The larger D is, the more mixing below the boundary layer occurs.
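To make the two definitions concrete, here is a minimal numerical sketch of how S, D and their combination might be computed. All field names and values below are illustrative assumptions, not data or code from the paper:

```python
import numpy as np

# --- S: small-scale (drying) index ---------------------------------
# Illustrative single-point values, not taken from the paper.
R700, R850 = 30.0, 70.0    # relative humidity (%) at 700 mb and 850 mb
T700, T850 = 280.0, 290.0  # temperature (K) at 700 mb and 850 mb

dR = R700 - R850           # fall-off in relative humidity with height
dT = T700 - T850           # fall-off in temperature with height
S = (dR / 100.0 - dT / 9.0) / 2.0

# --- D: large-scale mixing index -----------------------------------
def H(x):
    """Heaviside step function: 1 where x > 0, else 0."""
    return (x > 0).astype(float)

# Vertical pressure velocities (Pa/s) on an illustrative grid;
# negative omega means ascending air.
rng = np.random.default_rng(0)
w1 = rng.normal(0.0, 0.05, size=(90, 180))  # below the boundary layer
w2 = rng.normal(0.0, 0.05, size=(90, 180))  # above the boundary layer

# Numerator: ascent below the boundary layer (H(-w1)) that weakens
# above it (w2 > w1); denominator: mean ascent above the boundary layer.
num = np.mean((w2 - w1) * H(w2 - w1) * H(-w1))
den = np.mean(-w2 * H(-w2))
D = num / den

# The paper's combined index is simply the sum of the two.
LTMI = S + D
```

With these illustrative numbers S works out to about 0.36; real values would of course come from the re-analysis fields over whatever region, month and year the authors actually selected.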
They then demonstrate that high-sensitivity models have different values of S and D than low-sensitivity models, and assume that this is due to differences in low-cloud feedback, although they don’t actually prove that. They also don’t describe how they derived these model indices. Finally, they combine both indices into one LTMI (lower tropospheric mixing index) by simply adding the two together. To compare these indices with data they need to use precisely the same criteria, and this is where the real problems start. The data they use are ERAI and MERRA re-analyses of satellite data, themselves partly dependent on models. These are the inconsistencies I have so far identified:
- They compare only one month (September). Why choose September when there are clear seasonal variations of water content in the atmosphere? Are the model indices also restricted to September? This is not explained, and it essentially rules out anyone reproducing their result.
- When averaging D, why did they choose to use observational values only from the eastern Pacific and Atlantic? Why exclude the Indian Ocean and western Pacific (or, for that matter, the Indonesia region used to evaluate S)? Is this an indication that they are hiding problems, say with monsoons, that would change their result?
- They also don’t specify which year the data are taken from. Are model results for the same year and the same geographic area as the observations? Surely high-sensitivity model values of their indices must, by definition, evolve quickly with time. So what fixed time are we talking about?
- The S indices are derived from a completely different geographic area to the D indices! The S indices are selected just from a region around Indonesia and some coastal areas. Why these particular areas? Is it perhaps because they know that the circulation is very high there? Should not the models then also be restricted to identical times and these narrow regions? Otherwise the comparison makes no sense. MERRA data are global and available monthly. How can you then combine S and D into a single LTMI if they apply to completely separate geographic regions?
- Why is the error on S shown as essentially zero for MERRA/ERAI? Is that perhaps just a simple mistake?
- How do we know that the differences between model sensitivities are due only to low clouds, rather than, say, to differences in their treatment of water vapor feedback or lapse rate feedback?
- As the paper is written, it is impossible to reproduce their results from the MERRA data given the information provided. Which year was used? What are the geographic selections? How was the averaging done? etc.
- Why were none of the above points picked up during Nature peer review?
The conclusion of the paper is based on the following assumptions:
- High-sensitivity models predict fewer low clouds in the tropics with increasing CO2, resulting in positive feedback.
- The underlying reasons for this can be correlated with some arbitrary indices of moist circulation calculated at specific locations for one instant in time.
- Observations made in these zones for these specific indices are more similar to high-sensitivity models than to low-sensitivity models (at least for September).
Do other zones and times show similar correlations? Even if the observed profile were shown to be similar, does it logically follow that the time evolution would follow the high-sensitivity models anyway? Where is the evidence for that? The paper does not really demonstrate that climate sensitivity is high. To do that you would need to demonstrate a time evolution over the tropics consistent with high-sensitivity models. We should not accept the conclusions of the paper because, as it stands, the evidence is far too flimsy.
The test of a good paper is whether it provides sufficient information to allow someone to independently validate the results. This paper doesn’t, and it seems to me on balance to be more propaganda than hard science.
We are already 14% into this century and the increase in the average global atmospheric temperature is 0%.
The full paper is now available at the link below (it might be very slow):
http://www.see.ed.ac.uk/~shs/Climate%20change/Climate%20model%20results/Sherwood%203c%20from%20mixing.pdf
I don’t see how one could come to different conclusions than Clive has done here.
Clive, every now and then I read a paper and don’t understand it at all. Sherwood was one of those. I’ve been told on a number of occasions that I don’t understand climate science – and that is probably true. But it’s not really science, is it? The Bishop is now comparing Nature to The Sunday Sport.
Sherwood makes sweeping statements potentially affecting the lives of billions of people. Their results are based on a small area around Indonesia and half of the tropical oceans – but only for September. It makes no sense to me when precipitation anyway changes dramatically with the seasons – see this video. Note also how they exclude the monsoons over the Indian Ocean from their “analysis”!
Clive, thanks for this analysis, very useful. Shocking how a paper with so many flaws passed peer review. Or maybe not, Nature became a joke when it went with Steig 09 on the cover.
Euan, it was the Sunday Sport, not the Sunday Post! But an easy mistake to make; both are largely fiction dressed up as journalism, and while the Sport also features soft porn, the Sunday Post prefers stories about large onions. Well it used to, and hopefully it still does.
Clive: Congratulations on following Sherwood’s logic, or lack thereof, through to the bitter end. I gave up after the second read-through.
I don’t know whether you’ve seen the Mann and Schmidt writeup on Sherwood at RealClimate, but I get the impression that even they are somewhat less than overwhelmed by the results.
http://www.realclimate.org/index.php/archives/2014/01/a-bit-more-sensitive/#more-16609
Yes Sherwood makes at least 3 leaps in assumption to reach the conclusion he set out to prove in the first place. I actually made this comment on realclimate:
The reply from Paul S is only half true because they talk mostly about how they treated the models. There is very little detail about how they actually got the “observational” points on which the conclusion depends.
My main objection to Sherwood is his use of a trumped-up tropospheric mixing index of dubious significance to evaluate climate sensitivity. Climate sensitivity is defined not by tropospheric mixing but by the increase in global surface air temperature resulting from a doubling of atmospheric CO2, and had Sherwood compared the model results with this metric he would have found that the models that show high climate sensitivity are the ones that depart farthest from surface air temperature observations. (I say farthest because the models with low climate sensitivity show too much warming as well.)
Exactly. He has dreamed up two indices which are supposed to represent lower-tropospheric mixing. He then assumes that more mixing will lead to FEWER clouds, although he doesn’t even investigate clouds. Only one of these indices, D, shows any real difference between low- and high-sensitivity models. D is then only calculated where he thinks there will be a difference: the eastern Pacific and Atlantic tropics. He excludes the very area where he calculates the other index, S. It is opaque how he calculates the observed values.
It is a conjuring trick to pull a hot rabbit out of a hat.
The biggest “stupidity” in this paper is its conclusion that the “Models” are not showing high enough climate sensitivity, and thus high enough temperature rises by 2100; that would mean they diverge even more from reality than they currently do.
Which appears to be the big problem with most “Climate Scientists”, they have no grasp on reality.
The methods, assumptions, statistics and data they use are all totally meaningless if they don’t match the reality of the real world, which everyone else can see with their own lying eyes.