IPCC Scientist’s dilemma

The headlines used by most politicians and green pressure groups are based on the IPCC’s attribution of the human impact on the climate. Climate change policy and political soundbites can usually be traced back to the ‘attribution statements’ contained in each successive assessment report. The political pressure on scientists to forever increase their “certainty” about man-made global warming is intense. The stumbling block is that the pause in warming since 1998 is getting harder to explain away, and it is beginning to undermine this process. The more scientists try to explain the pause, the more difficulties they find themselves getting into. The latest to make this mistake is Michael Mann, who can now ‘explain’ the pause as a natural cooling trend of the AMO/PDO since 2000 masking the underlying anthropogenic warming.

Mann’s identification of a natural oscillation component in global temperature data. The NMO is a net combination of the AMO and PDO. Note that the amplitude of the oscillation is 0.4C.


Mann is quite right that the AMO/PDO may well be the cause of the hiatus, but by accepting this possibility he unfortunately drives a coach and horses through the AR5 attribution analysis described in Chapter 10, because the probability analysis used there depends on natural variability being precisely zero since 1951.

First let’s look at the ever-growing IPCC certainty about AGW since the first assessment in 1990:

  • 1990 AR1: ‘The unequivocal detection of the enhanced greenhouse effect from observations is not likely for a decade or more.’
  • 1995 AR2: “The balance of evidence suggests a discernible human influence on global climate.”
  • 2001 AR3: “most of the observed warming over the last 50 years is likely to have been due to the increase in greenhouse gas concentrations.”
  • 2007 AR4: “Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations”
  • 2013 AR5: “It is extremely likely that human activities caused more than half of the observed increase in GMST from 1951 to 2010. The best estimate of the human-induced contribution to warming is similar to the observed warming over this period.”

So 23 years of intense research has supposedly increased scientific consensus from an agnostic position (AR1), through a discernible signal (AR2) and likely (66–90%) in AR3, to very likely (>90%) in AR4, and finally to extremely likely (>95%) in AR5. This ratcheting up in scientific certainty, egged on by various pressure groups, has underpinned the increasing political pressure on governments worldwide to abandon fossil fuels. Taking a step backwards now is unthinkable.

However, is this increased confidence actually justified, given that there has been essentially no surface warming at all since 1998? The AR5 attribution analysis is all based on Figure 10.5, shown below, and the seemingly small error bar on the anthropogenic component ANT. Despite the much higher uncertainty on the two individual anthropogenic components, GHG and aerosols, the ‘fingerprinting’ analysis can supposedly isolate ANT with a high degree of certainty. This fingerprinting is extremely obscure and is not at all well explained in the text of Chapter 10. Even Science of Doom is not convinced by the arguments. However, let’s be generous and assume that they are correct and that the error on ANT can indeed be shown to be that small. The real problem is that the probability that ANT and Observed agree depends on the assumption that Internal Variability is 0.0 ± 0.1C – but we have just seen that this is now increasingly unlikely.

Fig 10.5 from AR5. ANT is the net anthropogenic forcing. I do not understand how the ANT errors get smaller after adding GHG and OA together!


Invoking the AMO/PDO to explain the pause in global warming essentially means that Internal Variability can no longer be assumed to be zero. It must instead have contributed a significant signal during the 1951–2010 study period. Let’s look at how large that signal could be. To do this we simply fit the HADCRUT4 data including a 60-year AMO/PDO term.


Figure 1. A fit to HADCRUT4 temperature anomaly data.


The ‘observed’ global warming component is then isolated by measuring the warming from one peak of the oscillation to the next, thereby removing the internal variability effect. The observed signal from 1951–2010 is then reduced to 0.4 ± 0.1C. Including the NMO oscillation makes it clear that there remains a net natural component to the warming during the 1951–2010 study period of 0.2 ± 0.1C. The AR5 attribution analysis is therefore wrong and should be corrected accordingly. We do this by assigning 0.2C of the original ‘observed’ warming to Internal Variability. The new Fig 10.5 then looks as follows.
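The fit described above can be sketched as follows. This is not the author’s actual code: the series below is a synthetic stand-in for HADCRUT4, and the 60-year period is imposed rather than fitted.

```python
# Illustrative sketch only: fit a linear "secular" trend plus a fixed-period
# 60-year oscillation (an NMO-like term) to a temperature anomaly series.
# The data are synthetic; real HADCRUT4 annual anomalies would be substituted.
import numpy as np
from scipy.optimize import curve_fit

years = np.arange(1880, 2015).astype(float)
rng = np.random.default_rng(0)
temp = (0.007 * (years - 1880)
        + 0.2 * np.sin(2 * np.pi * (years - 1910) / 60.0)
        + rng.normal(0.0, 0.05, years.size))

def model(t, a, b, amp, t0):
    # linear trend plus a 60-year oscillation of amplitude `amp`
    return a + b * t + amp * np.sin(2 * np.pi * (t - t0) / 60.0)

popt, _ = curve_fit(model, years, temp, p0=[0.0, 0.005, 0.2, 1910.0])
a, b, amp, t0 = popt

# Peak-to-peak logic: evaluating only the linear part between two peaks of
# the oscillation removes the internal-variability contribution entirely.
secular_1951_2010 = b * (2010 - 1951)
print(f"oscillation amplitude ~ {abs(amp):.2f} C")
print(f"trend-only warming 1951-2010 ~ {secular_1951_2010:.2f} C")
```

Applied to the real series, this same peak-to-peak separation is what yields the ~0.2C natural component quoted above.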

Updated Fig 10.5. The attribution of warming from 1950 to 2010, where the 1940–2010 interval is taken as the correct measure of the observed warming because 1940 and 2010 are both peaks of the natural variation.


We see immediately that the GHG component is now far too large, and furthermore that the ‘fingerprint’ ANT component lies 2 sigma beyond the ‘observed’ global warming component. In order to explain the pause in warming, Mann has had to accept the existence of a significant natural internal component. This then overturns the Chapter 10 attribution analysis.

I am convinced that there is indeed a six-decade-long oscillation in the temperature data, which the IPCC studiously avoided discussing in AR5. I think the main reason was to avoid weakening the headline attribution statement, largely for political reasons. This position is now becoming untenable. In fact Chapter 9, which evaluates model performance, essentially shows that there is only a 5% chance the models can explain the hiatus without including an internal multidecadal oscillation. In IPCC parlance this translates to: it is “extremely unlikely” that AR5 models alone can explain the hiatus in global warming (95% probability).

Posted in AGW, Climate Change, climate science, Institutions, IPCC, Science | Tagged | 3 Comments

Central England Temperature Anomalies

The Met Office reports that 2014 was the warmest year in the 354-year series of temperature measurements in central England. Ed Hawkins also has a post on this.

So is it true and what does it really mean?

CET annual mean temperatures from 1990 to 2014. The red line is a long-term trend fit described below.

Well, 2014 does indeed scrape through above 2006 as the warmest year, but the quoted measurement error is 0.1C. So statistically it would be more correct to say that it is about 60% probable that 2014 broke the record. However, in this post I want to understand the full time series better and identify a long-term warming trend in CET.
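The 60% figure follows from simple Gaussian error propagation on the difference of two measured annual means. A minimal sketch, assuming independent 0.1C errors (as quoted above) and an illustrative 0.03C gap between 2014 and 2006 – the actual CET gap is not quoted here:

```python
# Sketch: probability that 2014 was truly warmer than 2006 given measurement
# error. sigma = 0.1 C per annual mean; the 0.03 C gap is an assumed
# illustrative value, not the official CET difference.
from math import sqrt, erf

def prob_warmer(gap, sigma=0.1):
    # The difference of two independent measurements has std sigma*sqrt(2);
    # return P(true difference > 0) given the measured gap.
    z = gap / (sigma * sqrt(2.0))
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

p = prob_warmer(0.03)
print(f"P(2014 beat 2006) ~ {p:.0%}")
```

A gap of a few hundredths of a degree against a 0.1C error is why the record claim is only a little better than a coin toss.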

This gives a rather different narrative from the simplistic one of CO2-induced warming of the UK climate.

The full 354-year series of CET. A clear warming trend is evident over the entire period, with little evidence of any acceleration after the industrial revolution. The red curve shows a linear fit to the data, giving a net warming of 0.03C/decade.

The data shows that there has been a slow but continuous warming trend from 1660 until the present time, beginning well before the industrial revolution. Furthermore, there is no obvious evidence of any CO2-induced acceleration in warming as emissions increased post-1950.

So let’s do something a little different and calculate temperature anomalies relative to that long-term trend instead of relative to 1961–1990. The result of this procedure is shown below.

Temperature anomaly relative to the linear long term trend in CET

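The anomaly calculation is just a least-squares detrend over the whole record. A minimal sketch, with synthetic data standing in for the CET series (the real data is available from the Met Office Hadley Centre):

```python
# Sketch: compute anomalies against the full-series linear trend rather than
# a 1961-1990 baseline. `cet` is synthetic; substitute the real CET series.
import numpy as np

years = np.arange(1659, 2015).astype(float)
rng = np.random.default_rng(1)
cet = 9.0 + 0.003 * (years - 1659) + rng.normal(0.0, 0.6, years.size)

# least-squares linear trend over the whole 354-year record
slope, intercept = np.polyfit(years, cet, 1)
trend = intercept + slope * years
anomaly = cet - trend  # anomaly relative to the long-term trend

print(f"trend ~ {slope * 10:.3f} C/decade, mean anomaly ~ {anomaly.mean():.3f} C")
```

By construction the anomalies average to zero over the whole record, so “warm” and “cold” years are judged against the slow recovery trend rather than against a fixed 30-year baseline.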

Relative to the 354-year long-term trend there is no real evidence of any recent anthropogenic warming. Now let’s simply put a spline through the anomaly data to see if there are any shorter-timescale trends.

Spline fit to CET anomaly data

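The spline step can be sketched with scipy’s smoothing spline. The anomaly series here is synthetic, and the smoothing factor `s` is an arbitrary choice made by eye:

```python
# Sketch: smoothing spline through an anomaly series to expose any
# multi-decadal structure. Data are synthetic stand-ins for CET anomalies.
import numpy as np
from scipy.interpolate import UnivariateSpline

years = np.arange(1659, 2015).astype(float)
rng = np.random.default_rng(2)
anomaly = (0.3 * np.sin(2 * np.pi * (years - 1700) / 120.0)
           + rng.normal(0.0, 0.5, years.size))

# larger `s` -> fewer knots -> smoother curve
spline = UnivariateSpline(years, anomaly, s=0.25 * years.size)
smooth = spline(years)
print(f"raw std {anomaly.std():.2f} C, smoothed std {smooth.std():.2f} C")
```

The smoothed curve has much less variance than the raw anomalies, which is exactly what makes any decadal-scale upturns and downturns visible.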

There is indeed an apparent upturn after 1970, but nothing remarkably different from that seen in the 1700s. It is then followed by a downturn back to normal.

What could be the cause of this evident slow long-term warming? Most likely it is a recovery from the ‘Little Ice Age’.

During the so-called ‘Little Ice Age’ the Thames was regularly frozen and ice fairs were held in winter. During the Great Frost of 1683–84, the worst frost recorded in England, the Thames was completely frozen for two months, with the ice reaching a thickness of 11 inches (28 cm) in London.

The last ice fair was held in 1841, because the climate was already growing milder well before CO2 levels were of any concern. Once this natural warming trend is included, the lack of any more rapid acceleration post-1950 becomes evident.

These anomalies follow a normal distribution about a mean of zero. Based on this definition of anomalies against the long-term trend, the next plot shows the ranked warmest years in the full series.

Ranked warmest years based on temperature anomalies relative to the long-term trend.

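The ranking itself is trivial once the trend-relative anomalies are in hand. The anomaly values below are purely hypothetical placeholders, not the actual CET numbers:

```python
# Sketch: rank years by anomaly relative to the long-term trend.
# These anomaly values are hypothetical, for illustration only.
anomalies = {2014: 0.92, 2006: 0.88, 1733: 0.85, 1949: 0.80,
             1990: 0.78, 1834: 0.75, 1921: 0.71, 1959: 0.68}

ranked = sorted(anomalies.items(), key=lambda kv: kv[1], reverse=True)
for rank, (year, anom) in enumerate(ranked, start=1):
    print(f"{rank}. {year}: +{anom:.2f} C")
```

The interesting feature of the real ranking is how many pre-industrial years sit near the top once the long-term trend has been removed.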

2014 is certainly way up there as an exceptionally warm year, but once natural climate trends are taken into account it is not exceptional. There is therefore little hard evidence of an anthropogenic warming signal in CET.

I am convinced that enhanced CO2 levels must change the energy balance within the atmosphere, so I would expect to see a first-order warming effect. However, the complexities of the rest of the earth’s climate system are immense, and it remains full of surprises.

Posted in AGW, Climate Change, climate science, GCM, Institutions, IPCC, Meteorology, Science, UK Met Office | Tagged , , | 17 Comments

AR5 Attribution Studies

Just how reliable is the IPCC AR5 advice to policy makers?

The recent IPCC report stated that climate scientists are 95-100% certain that the observed temperature rise since 1850 is anthropogenic. The headline attribution statement in Chapter 10 was

“It is extremely likely that human activities caused more than half of the observed increase in GMST from 1951 to 2010. The best estimate of the human-induced contribution to warming is similar to the observed warming over this period.”

World political leaders are basing policy on the validity of this statement, which rests entirely on comparing CMIP5 models to global surface temperature data. These ‘fingerprinting’ studies are described in Chapter 10, which I find all but impossible to comprehend. The underlying assumption in AR5 is that natural climate variability has played essentially no role in warming since 1950. However, is this actually true?

Figure 1 shows a comparison between the ensemble of CMIP5 models and observations.

Figure 1: CMIP5 model ensemble compared to observations (HADCRUT4). Is the pause in warming since 1998 natural?

Agreement until 2000 would appear to be reasonably good. However we then read in Chapter 9 Box 9.1 that in reality:

Model tuning aims to match observed climate system behaviour and so is connected to judgements as to what constitutes a skilful representation of the Earth’s climate. For instance, maintaining the global mean top of the atmosphere (TOA) energy balance in a simulation of pre-industrial climate is essential to prevent the climate system from drifting to an unrealistic state. The models used in this report almost universally contain adjustments to parameters in their treatment of clouds to fulfil this important constraint of the climate system.

So the models are tuned to describe past observations. Furthermore, periods of cooling are explained by volcanoes, which are simulated up until 2005 by so-called EMICs – Earth System Models of Intermediate Complexity. Please don’t ask me exactly what EMICs are, but there is also no doubt in my mind that these too are tuned, so that aerosols can match the global temperature response to volcanic eruptions such as Pinatubo!

Science of Doom has written a detailed analysis of the AR5 attribution studies, and even he is not convinced. He writes:

Chapter 10 of the IPCC report fails to highlight the important assumptions in the attribution studies. Chapter 9 of the IPCC report has a section on centennial/millennial natural variability with a “high confidence” conclusion that comes with little evidence and appears to be based on a cursory comparison of the spectral results of the last 1,000 years proxy results with the CMIP5 modeling studies.

He proposes an alternative summary for Chapter 10 of AR5:

It is extremely likely [95–100%] that human activities caused more than half of the observed increase in GMST from 1951 to 2010, but this assessment is subject to considerable uncertainties.

The pause in warming since 1998 is clearly evident in Figure 1 above. This pause undermines the statement that natural variability is unimportant. Whichever way you look at it, the lack of warming over the last 18 years must be natural, whatever the final explanation may be. So how well do the models actually perform at simulating such natural variability? This from Knutson et al. (again thanks to SoD):

The model control runs exhibit long-term drifts. The magnitudes of these drifts tend to be larger in the CMIP3 control runs than in the CMIP5 control runs, although there are exceptions. We assume that these drifts are due to the models not being in equilibrium with the control run forcing, and we remove the drifts by a linear trend analysis (depicted by the orange straight lines in Fig. 1). In some CMIP3 cases, the drift initially proceeds at one rate, but then the trend becomes smaller for the remainder of the run. We approximate the drift in these cases by two separate linear trend segments, which are identified in the figure by the short vertical orange line segments. These long-term drift trends are removed to produce the drift corrected series.

In other words, any ‘natural’ trends generated by models in the temperature data are assumed to be an artifact and simply removed.
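The drift correction Knutson et al. describe amounts to a linear detrend of each control run. A minimal sketch, using a synthetic control run in place of real CMIP output:

```python
# Sketch of the drift correction: fit a straight line to a (synthetic)
# control run and subtract it. Note that any real slow natural trend in the
# run would be removed by exactly the same operation.
import numpy as np

years = np.arange(500).astype(float)
rng = np.random.default_rng(3)
# control run with a 0.02 C/century drift plus internal noise
control = 0.0002 * years + rng.normal(0.0, 0.1, years.size)

slope, intercept = np.polyfit(years, control, 1)
drift_corrected = control - (intercept + slope * years)
print(f"removed drift ~ {slope * 100:.3f} C/century")
```

The detrended series has zero mean trend by construction, which is precisely the point made above: the correction cannot distinguish a spurious model drift from a genuine multi-century natural trend.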

Knutson et al. Figure 1:


Five of the 24 CMIP3 models, identified by “(-)” in Fig. 1, were not used, or practically not used, beyond Fig. 1 in our analysis. For instance, the IAP_fgoals1.0.g model has a strong discontinuity near year 200 of the control run. We judge this as likely an artifact due to some problem with the model simulation, and we therefore chose to exclude this model from further analysis

This certainly does not sound to me as if the CMIP3/CMIP5 models are correctly simulating natural variability. Many models have drifts which are assumed to be anomalous and are corrected away to zero. Any resulting model structure is then discarded as an artifact. However, the observed cooling from 1940–1970 and the pause in warming post-1998 are certainly not artifacts, so why reject those models that produce something similar? The drifts and structure of these models are of the same order as the observed warming. Do models correctly simulate the actual climate, or do they just simulate the anthropogenic component? The AR5 attribution study depends critically on their being able to simulate both anthropogenic forcing and natural variability. This is clearly evident from Fig 10.5 in Chapter 10.

Fig 10.5 from AR5. ANT is the net anthropogenic forcing. I do not understand how the ANT errors get smaller after adding GHG and OA together!


Now suppose there actually is a natural 60-year oceanic heat oscillation present – the AMOC as described by Chen and Tung. This would necessarily imply that there has been a natural warming component of about 0.2C since 1950. This is also clearly visible in a simple fit to the Hadcrut4 data including such a term.

A fit to HADCRUT4 temperature anomaly data including a 60-year oscillation term.


In this case the Internal Variability term in Figure 10.5 turns out to be non-zero, and the result now looks rather different. The models now run way too warm, as shown below.

Updated Fig 10.5. The attribution of warming from 1950 to 2010, where the 1940–2010 interval is taken as the correct measure of the observed warming because 1940 and 2010 are both peaks of the natural variation.


The anthropogenic term ANT is now no longer in agreement with the observations, and the attribution statement should be modified to account for the non-zero natural component. As a consequence it becomes clear that the CMIP5 models are running about 50% too hot. The attribution statement should therefore now probably read instead:

“It is very likely that human activities caused about half of the observed increase in GMST from 1951 to 2010. The best estimate of the human-induced contribution to warming is 55% of the observed warming over this period.”
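The arithmetic behind the “50% too hot” claim can be laid out explicitly. This uses the post’s round numbers as assumed inputs: roughly 0.6C of observed warming over 1951–2010, of which 0.2C is assigned to the natural oscillation, against a model ANT estimate of about 0.6C.

```python
# Back-of-envelope attribution arithmetic. All three inputs are the post's
# assumed round numbers, not official AR5 values.
observed = 0.6   # C, total observed warming 1951-2010
natural = 0.2    # C, internal variability (AMO/PDO) component
model_ant = 0.6  # C, CMIP5 'fingerprint' anthropogenic estimate

true_ant = observed - natural
excess = model_ant / true_ant - 1.0
print(f"corrected ANT ~ {true_ant:.1f} C; models run {excess:.0%} too hot")
```

With these inputs the corrected anthropogenic contribution is 0.4C, so a model ANT of 0.6C overshoots by half.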

This could also explain discrepancies in the spatial distribution of warming. In general, models and data agree fairly well that most warming occurs over land and in Arctic regions. However, van Oldenborgh et al. show that there is significant disagreement, mostly over the oceans: the models overestimate warming in the northern Pacific and underestimate warming in the southern tropical oceans. Figures 4 and 5 show simple CMIP5 model comparisons for the two hemispheres.


Figure 4: Comparison of CMIP5 models for Northern Hemisphere temperature anomalies with Hadcrut4.


Figure 5: Comparison of CMIP5 models for Southern Hemisphere temperature anomalies with Hadcrut4.

The southern hemisphere is dominated by oceans and warms more slowly. Overall, the spatial agreement at grid level is fairly good except for the last 15 years. However, the models overestimate warming in about one quarter of the world and underestimate it in about one third.

In Box 9.2, AR5 WG1 admits that “hiatus periods of 10–15 years can arise as a manifestation of internal decadal climate variability, which sometimes enhances and sometimes counteracts the long-term forcing trend”, followed by a long explanation as to why this does not affect the ever-increasing confidence in climate models.

I am not convinced – nor, I suspect, are some of the authors of AR5. One can only imagine the intense pressure placed on them to increase the certainty of human warming, thereby maintaining the various green/governmental interests and funding lobbies for another four years.

I wonder who will eventually jump first?

Posted in AGW, Climate Change, climate science, IPCC, Science | Tagged , , | 10 Comments