A comparison of CMIP5 Climate Models with HadCRUT4.6

Overview: Figure 1 shows a comparison of the latest HadCRUT4.6 temperatures with CMIP5 models for the Representative Concentration Pathways (RCPs). The temperature data lie significantly below all RCPs, which themselves only diverge after ~2025.

Fig 1. Model comparisons to data 1950-2050. Spaghetti are individual annual model results for each RCP. Solid curves are model ensemble annual averages.

Modern climate models originate from the General Circulation Models (GCMs) used for weather forecasting. These simulate the 3D hydrodynamic flow of the atmosphere and ocean on an earth that rotates daily on its tilted axis while orbiting the sun annually. The meridional flow of energy from the tropics to the poles generates convective cells, prevailing winds, ocean currents and weather systems. Energy must balance at the top of the atmosphere between incoming solar energy and outgoing infrared energy. This balance depends on changes in solar heating, water vapour, clouds, CO2, ozone etc., and it determines the surface temperature.

Weather forecasting models use live data assimilation to fix the state of the atmosphere in time and then extrapolate forward one or more days, up to a maximum of a week or so. Climate models, however, run autonomously from some initial state, stepping far into the future on the assumption that they correctly simulate a changing climate driven by CO2 levels, incident solar energy, aerosols, volcanoes etc. These models predict past and future surface temperatures, regional climates, rainfall, ice cover and so on. So how well are they doing?

Fig 2. Global Surface temperatures from 12 different CMIP5 models run with RCP8.5

The disagreement on the global average surface temperature is huge – a spread of 4C. This implies that there must still be a problem in achieving overall energy balance at the TOA. Wikipedia tells us that the average temperature should be about 288K, or 15C. Despite this discrepancy in reproducing the absolute surface temperature, the model warming trends for RCP8.5 are similar.

Likewise weather station measurements of temperature have changed with time and place, so they too do not yield a consistent absolute temperature average. The ‘solution’ to this problem is to use temperature ‘anomalies’ instead, relative to some fixed normal monthly period (the baseline). I always use the same baseline as CRU: 1961-1990. Global warming is then measured by the change in such global average temperature anomalies. The implicit assumption is that nearby weather station and/or ocean measurements warm or cool coherently, so that changes in temperature relative to the baseline can all be spatially averaged together. The usual example is two nearby stations at different altitudes: they have different temperatures but produce similar ‘anomalies’. The same procedure is applied to the model results to produce model temperature anomalies. So how do they compare to the data?
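The anomaly calculation can be sketched in a few lines. This is only a minimal illustration with invented station data (the real CRU processing involves quality control, gridding and much else), but it shows why two nearby stations at different altitudes yield near-identical anomalies:

```python
# Compute monthly temperature anomalies relative to a 1961-1990 baseline.
# All station data below are synthetic, purely for illustration.

def monthly_baseline(series, start=1961, end=1990):
    """Average temperature for each calendar month over the baseline years."""
    sums, counts = [0.0] * 12, [0] * 12
    for (year, month), temp in series.items():
        if start <= year <= end:
            sums[month - 1] += temp
            counts[month - 1] += 1
    return [s / c for s, c in zip(sums, counts)]

def anomalies(series, baseline):
    """Subtract the monthly 'normal' from each measurement."""
    return {(y, m): t - baseline[m - 1] for (y, m), t in series.items()}

# Two hypothetical nearby stations at different altitudes: absolute
# temperatures differ by 3C throughout, yet the anomalies agree.
valley  = {(y, m): 10.0 + 8 * (m in (6, 7, 8)) + 0.01 * (y - 1961)
           for y in range(1961, 1995) for m in range(1, 13)}
hilltop = {k: v - 3.0 for k, v in valley.items()}

a1 = anomalies(valley,  monthly_baseline(valley))
a2 = anomalies(hilltop, monthly_baseline(hilltop))
print(round(a1[(1994, 1)], 3), round(a2[(1994, 1)], 3))
```

Both stations report the same January 1994 anomaly (~0.185C here) despite their 3C difference in absolute temperature, which is exactly what makes spatial averaging of anomalies defensible.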

Figure 3 shows the result for HadCRUT4.6 compared to the CMIP5 model ensembles run with CO2 forcing levels from RCP8.5, RCP4.5 and RCP2.6, where the anomalies use the same 30y normalisation period.

Fig 3. HadCRUT4.6 compared to 41 models run with 3 widely different RCP forcing.

Note how all models now converge to the zero baseline (1961-1990), eliminating differences in absolute temperatures. This apparently allows models to be compared directly to measured temperature anomalies, although each uses anomalies for a different reason: the data because of poor spatial coverage, the models because of poor agreement in absolute temperatures. The various dips seen in Fig 3. before 2000 are due to historic volcanic eruptions whose cooling effect has been included in the models.

Fig 4. Model comparisons to data 1950-2050

Figure 4 shows a close-up of the detail from 1950-2050. There is a large spread in model trends even within each RCP ensemble. The data fall below the bulk of model runs after 2005, except briefly during the recent El Niño peak in 2016.

Figure 1 shows that the data are now lower than the mean of every RCP; furthermore, we won’t be able to distinguish between RCPs until after ~2030.

Method: I have downloaded and processed all CMIP5 runs from KNMI Climate Explorer for each RCP. I then calculated annual averages relative to the 1961-1990 baseline and combined them all into a single CSV file. These can each be downloaded using, for example, this URL: RCP85

To retrieve any of the others just change ’85’ to ’60’ or ’45’ or ’26’ in the URL.
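The reduction from the downloaded runs to the curves in Fig 1 – annual means per run, then an ensemble mean per RCP – can be sketched as follows. The data layout here is hypothetical (KNMI files come in their own formats), so treat this as a shape of the calculation rather than a parser:

```python
# Sketch: reduce several model runs (monthly values) to annual means,
# then form the per-year ensemble average, as plotted as the solid
# curves in Fig 1. Toy data stand in for the real KNMI downloads.

def annual_means(monthly):
    """monthly: dict {(year, month): value} -> dict {year: annual mean}."""
    by_year = {}
    for (year, _), v in monthly.items():
        by_year.setdefault(year, []).append(v)
    return {y: sum(vs) / len(vs) for y, vs in by_year.items()}

def ensemble_mean(runs):
    """runs: list of {year: value} dicts -> {year: mean across runs}."""
    years = set().union(*runs)
    return {y: sum(r[y] for r in runs if y in r) /
               sum(1 for r in runs if y in r) for y in years}

# Three toy model runs with different linear warming rates.
runs = [annual_means({(y, m): 0.01 * k * (y - 1961)
                      for y in range(1961, 1971) for m in range(1, 13)})
        for k in (1, 2, 3)]
ens = ensemble_mean(runs)
print(round(ens[1970], 3))
```

The individual `runs` correspond to the spaghetti; `ens` is the solid ensemble curve.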


Global Temperatures – the big picture

Suppose you wanted to measure whether the total number of ants on earth has been increasing. The number of ants at any given place depends on location and on season. Let’s assume that today there are 10,000 botanists at fixed locations across the world diligently measuring the number of ants passing through each square metre. The daily average population at each location can then be estimated as the sum of the maximum daytime population and the minimum nighttime population, divided by two. Unfortunately, 100y ago there were only 30 such botanists at work, and they used pen and paper to record the data. How can we possibly hope to determine whether the global ant population has been increasing since then?

The only way to do that is to assume that changes in ant population are the same everywhere because they are a global phenomenon – for example, dependent on oxygen levels. Our botanists sample this change at random fixed places. Then, as far as possible, we should remove any spatial biases inherent in this ever-changing historical sampling coverage. We can only do this by normalising the population time series at each location relative to its ‘average’ value within, say, a standardised 30 year (seasonal) period. Subtracting this normal value yields the ant population differentials (anomalies). Next we form a ‘spatial’ average of all such disparate ant anomalies (essentially differentials) for each year in the series. What we can then deduce are annual global ant population ‘anomalies’, but in doing so we have essentially given up hope of ever knowing the total number of ants alive on earth at any given time.

Measuring global temperatures is rather analogous, because they too are based on the same assumptions: namely that a) temperatures change coherently over vast areas, and b) these changes are well reflected by stochastic sampling over the earth’s surface. The global temperature anomaly is a spatial average over all measurements of localised monthly temperature differentials relative to their average over a fixed baseline period.
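The spatial average itself has to account for the geometry of the sphere: grid cells nearer the poles cover less area, so a common approach is to weight each cell by the cosine of its latitude. A minimal sketch, with invented anomaly values:

```python
import math

# Area-weighted spatial average of gridded anomalies. Each cell is
# weighted by cos(latitude), since cells shrink towards the poles.
# The anomaly values below are invented for illustration.

def global_mean(cells):
    """cells: list of (lat_degrees, anomaly). Returns weighted mean."""
    weights = [(math.cos(math.radians(lat)), a) for lat, a in cells]
    return sum(w * a for w, a in weights) / sum(w for w, _ in weights)

# A mildly warming tropics and a strongly warming Arctic cell: the naive
# unweighted average overstates the small polar cell's contribution.
cells = [(0.0, 0.2), (45.0, 0.4), (80.0, 1.5)]
naive = sum(a for _, a in cells) / len(cells)
weighted = global_mean(cells)
print(round(naive, 3), round(weighted, 3))
```

Here the unweighted mean is 0.7 while the area-weighted mean is ~0.4, showing why the choice of spatial averaging scheme matters for the headline global figure.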

Figure 1 shows the decadal smoothed results from GHCN V3/HadSST3. The big picture shows there are four phases:

  1. 1880-1910 Falling or flat
  2. 1910-1945 Increasing
  3. 1945-1975 Falling or flat
  4. 1975-2015 Increasing

Figure 1: a) Global temperature anomalies. Data are 10 year rolling averages. b) Differences between GHCN anomaly trends.

The individual station anomalies for Tav, Tmin and Tmax have each been computed using their respective seasonal averages from 1961-1990. Note how all the series are zeroed together over the fixed normalisation period. This is an artefact of the choice of baseline period. We can also observe the following apparently odd effects in Figure 1b.

  1. Tav ‘warms’ faster than both Tmin and Tmax after 1980.
  2. Tmin warms faster than Tmax after 1970. There is other evidence that nights warm faster than days.
  3. Oceanic temperatures were warmer than global temperatures before 1910 and again between 1930 and 1972, but have since lagged behind land temperatures. This appears to be a cyclic phenomenon.

The zeroing effect in the differences is again an artefact of using temperature anomalies. However, if one looks at these trends dispassionately, one must conclude that there is an underlying natural oceanic cycle of amplitude ~0.3C and period ~90y which drove global temperatures until ~1930. Since then a slow but increasing CO2-induced warming effect has emerged, which has now distorted this natural cycle. This has resulted in an underlying global warming of about 0.8C.

 


Denied

I have just finished reading the book ‘Denied’ by Richard Black, the director of the Energy and Climate Intelligence Unit (ECIU) and ex-BBC environment editor. It is well written and easy to read. He claims to demolish the climate “contrarian” and climate “denier” positions on climate change and renewable energy. As far as I can work out, contrarians are lukewarmers who accept the science but argue that any future warming will be far less damaging than the consequences of overreacting to it right now, while ‘deniers’ argue that CO2 can have no effect on the climate at all. His main targets in the book are the usual suspects: the GWPF, Nigel Lawson, Matt Ridley, Christopher Booker, James Delingpole, Judith Curry and anyone else who dissents from mainstream climate and green energy orthodoxy. Richard is also upset with the BBC for interviewing Nigel Lawson on the Today programme, and with various newspapers for giving Booker, Delingpole or David Rose any column space. In this vein I too should be placed in the contrarian camp, particularly concerning renewable energy policy.

Richard does, however, acknowledge that Climategate exposed a certain amount of skulduggery within the climate science community, and because Climategate also coincided with a 15y pause in global warming, the position of sceptics back in 2014 was actually very strong and put climate scientists onto the back foot. The chapter in AR5 discussing this warming hiatus reflects this defensiveness. However, Richard argues in his book that by the end of 2018 subsequent events had demolished all those contrarian or “denier” arguments, and that these opinions should therefore now be denied any undue influence in determining future climate policy. His main arguments are:

  1. Warming has continued as expected and the pause never actually happened. Temperature data are now compatible with model ‘projections’.
  2. Prices of new off-shore wind capacity and of solar panels have dropped so dramatically that they are competitive with, or cheaper than, nuclear and gas. So objecting to green energy is illogical.
  3. The new SR15 report has highlighted the need for immediate action to close down fossil fuels as soon as possible and invest in ‘clean’ energy. Shale gas is an illusion.

Let’s look at each claim in turn.

  1. The Pause is no more.

Here is the original plot from AR5 showing a comparison of four major temperature datasets versus CMIP5 model projections.

Fig 1. AR5 Comparison of global temperature anomalies with CMIP5 models

In 2012 all the major temperature sets (HadCRUT, GISS, NCDC) showed no subsequent year warmer than the El Niño year 1998. Furthermore the trend was dropping below model predictions. Since then a huge effort has been made to add new weather stations in Arctic regions, where warming is fastest, and to improve the spatial coverage averaging.

Fig 2. How the pause ‘disappeared’ .

HadCRUT4.6 has about 2000 more stations than HadCRUT3 had in 2012, but it also dropped some stations in S. America (they were cooling). Temperature homogenisation on land and ocean has also had a net warming effect, although quite why remains a bit of a mystery. The method of spatial averaging also has an important net effect on the global temperature. Cowtan & Way used kriging to infill empty regions, whereas I use a 3D spherical triangulation to provide natural global coverage. The results are almost identical.
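Why infilling matters at all can be shown with a toy calculation. This is neither Cowtan & Way's kriging nor a spherical triangulation – just a nearest-neighbour fill on a one-dimensional strip of latitude bands with invented anomalies and an unsampled polar band – but it captures the mechanism by which filling empty Arctic cells raises the global mean:

```python
# Toy illustration of infilling. A strip of latitude bands runs from
# equator to pole; the polar band (warming fastest) has no data.
# Anomaly values are invented for illustration only.

def infill(values):
    """Replace None with the nearest non-empty value equatorward."""
    filled, last = [], None
    for v in values:
        last = v if v is not None else last
        filled.append(last)
    return filled

# Anomalies by band, equator -> pole; the pole is unsampled (None).
bands = [0.2, 0.3, 0.5, 0.8, None]

sampled = [v for v in bands if v is not None]
mean_ignoring = sum(sampled) / len(sampled)          # skip empty cells
mean_infilled = sum(infill(bands)) / len(bands)      # fill, then average
print(round(mean_ignoring, 3), round(mean_infilled, 3))
```

Simply omitting the empty cell implicitly assigns it the global average, whereas infilling assigns it something like its warm neighbours' value, so the infilled mean comes out higher – the same direction of effect as adding Arctic coverage to HadCRUT.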

What is interesting though is that the flat trend prior to 2014 has now disappeared in HadCRUT4.6, which uses the same averaging procedure throughout. So this must be due to ongoing data corrections and to all the new stations added subsequently. Each new monthly release shows that earlier values of global temperatures also get updated: data homogenisation is a continuing process, revising past measurements as well as new ones. Note however that temperatures have been falling for the last two years following the 2016 El Niño peak. If 2019 continues this cooling trend then the pause or hiatus in warming could well return.

2. Falling prices of renewable energy. 

Recent auctions for new off-shore wind farm capacity (“Contracts for Difference”) have reached as low as £57.50/MWh for 15 years, apparently undercutting both new gas and the Hinkley C price of £92/MWh for 35 years. So if the price of new wind and solar generation is so cheap, why don’t we just buy more of it? The problem of course is that we are not comparing like with like. Although Hinkley is very expensive (subsequent nuclear stations should strike a much lower price), it is still cheaper than the London Array, which receives £150/MWh; but the main difference is that nuclear is predictable. Nuclear provides constant base-load power, which in the future can be used for charging EVs overnight. Wind energy is fickle and may or may not be available to meet peak demand, and this inherent unreliability will not change in the future. Other advantages of nuclear over wind are

  • that its environmental footprint is tiny
  • that it lasts 3 times as long as wind farms (60 years as opposed to 20 years)

Solar energy makes little sense in Britain because it produces no power in winter. Annual peak demand occurs around 5pm on winter evenings, when solar output is zero. So if energy security is your goal then solar power is useless no matter how far its price falls. The only thing going for it is that it can displace fossil generation during summer months, thereby reducing CO2 emissions if that is the goal, but this then increases the overall energy cost.

3. Urgent action to avoid climate disaster.

How realistic is it to expect the world to cut carbon emissions in half by 2030 and eliminate them by 2050 to meet the 1.5C target? If we get it wrong by acting too soon then we don’t get a second chance, because we destroy our economies in the attempt. David MacKay said we need a plan that works for the UK. Similarly each country needs a plan that works for its particular environment. To get to zero carbon we also have to electrify both transport and heating, which means that electricity peak demand will increase to a minimum of around 90GW. The problem then is that this increased demand must be reliably met on cold winter evenings, or people will die. Does Richard Black imagine that by expanding wind power alone one could achieve this goal, or that somehow battery storage might cover such wind lulls? Last night is a good example of the problems we already face after installing well over 20GW of wind capacity and over 30GW of solar capacity.

Power generation by source for 2/3 Jan 2019. The UK still regularly depends on coal when green energy lets us down each winter, despite all the hype. Coal was fired up to provide ~6GW because wind+solar combined could only manage ~2GW.

Renewables simply cannot provide energy security. Nor can some magic smart grid or energy storage system cover the several-day-long wind lulls affecting Northern Europe each winter. So given that we must provide power 24/7 to maintain modern life, the only realistic low carbon solution is nuclear power. Roughly 30 identical Hinkley-sized nuclear plants would do the job nicely. I doubt whether Richard Black or his ECIU would agree with that. I wonder who might eventually be called an energy denier?

The book is a good read though 😉
