Untangling UK Wind Power Production

There are currently 6044 operational wind turbines in the UK with a total capacity of 12.133 GW. Do we know how much electrical power they generate? The answer is not simple. These 6044 turbines are installed across more than 700 sites, some of which are very large while others consist of just a single turbine. There are three ways to connect them to the Grid.

1. Direct transmission line to the Grid. This is suitable only for large wind farms, especially off-shore ones. The output of such wind farms is metered through the ‘balancing mechanism’, from which Gridwatch and this site get their live updates. A full list of these directly connected wind farms is given below.

Wind Farm Capacity (MW)
Arecleoch 120
Clyde 462
Carraig Gheal 46
Crystal Rig 138
Greater Gabbard Offshore 501
Griffin Wind Farm 204
Gwynt y Mor Offshore 592
Hadyard Hill 130
Humber Offshore 220
Harestanes 126
London Array 720
Lincs Offshore 540
Ormonde 160
Sheringham Shoal Offshore 315
Thanet Offshore 300
West of Duddon Sands 382
Whitelee 511
Walney Offshore 369
Westermost Rough 205

Total Capacity = 7.1 GW

I had thought that these were all the ‘metered’ wind farms included in the Balancing Mechanism (BM) reports. However, I later discovered that those in category 2 are also metered, because they receive constraint payments.

2. Secondly, there are wind farms that are registered with the Balancing Mechanism but are ‘BM embedded’ in the local distribution network. These medium to large wind farms are still visible to the Balancing Mechanism and their output is metered. We know this because they receive constraint payments to disconnect when there is too much wind. This is the list of such wind farms:

Baillie Wind Farm 52.5
BETHW-1 29.75
Braes of Doune 74
Berry Burn Wind Farm 66.7
Beinn An Tuirc 2 43.7
Burbo Offshore Wind 90
CLDRW-1 37
Clachan Flats 15
Dalswinton Windfarm 30
Gordonstown 13
Goole Fields Wind Farm 34.476
Glens of Foudland 26.7
Gunfleet Sands Demo 11.7
Great Yarmouth Power Limited 405
Hill of Towie 48
Minsca Wind Farm 36.8
E_RHEI-1 52
Tullo Extension 25

Total capacity = 1.1 GW

Therefore the total metered capacity of wind farms within the BM system is simply the sum of category 1 and category 2.

Total Metered Capacity = 8.2 GW

The real-time output from category 1 and 2  wind farms is shown below:

3. Finally, there are about 600 smaller wind farms, ranging from 1 up to 40 turbines, that are connected to the regional Distribution Network Operator (DNO) and are paid ‘Feed-In Tariffs’ (FITs). These smaller wind farms are not part of the balancing mechanism and are therefore not metered centrally. Their net effect on the National Grid is to reduce demand slightly via the local distribution network. They must have on-site equipment to convert the generated output to grid-synchronised 3-phase AC and connect to the local DNO. They may also use generated energy locally and still receive a payment because it is ‘green’. The exported electricity is metered locally and receives the FIT as shown below.

Feed-in-tariff

The estimated total FIT capacity of these 600 farms is 3.8 GW (the difference between the 12 GW total and the 8.2 GW metered total).

Unfortunately the output from wind farms in category 3 is never made public, so it is impossible to know the real-time power from these wind farms. What I originally set out to discover was what percentage of total wind power is measured by the Balancing Mechanism. It has been a headache to get all this information together, but I think we can now estimate the total output from all UK wind farms. To do this we simply assume that the load factor for the feed-in wind farms is the same as that for the metered farms (which may be optimistic, as the largest farms are off-shore). Under this assumption the correction factor to be applied to the BM reports values is 12/8.2 = 1.46, so the actual wind power from all UK wind farms is roughly 1.46 times the BM-reported figure.
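The correction can be sketched in a few lines of Python. The capacity figures are the estimates quoted above, and the equal-load-factor assumption is the one stated in the text:

```python
# Scale factor between BM-metered wind output and estimated total UK output,
# assuming FIT wind farms have the same load factor as the metered ones.
TOTAL_CAPACITY_GW = 12.0    # all UK wind capacity (approx., from UKWED)
METERED_CAPACITY_GW = 8.2   # category 1 + category 2 (BM-metered)

CORRECTION_FACTOR = TOTAL_CAPACITY_GW / METERED_CAPACITY_GW  # ~1.46

def estimated_total_wind_gw(bm_reported_gw):
    """Scale a BM-reported wind output up to an estimated all-UK figure."""
    return bm_reported_gw * CORRECTION_FACTOR

print(round(CORRECTION_FACTOR, 2))  # 1.46
```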

In future I will therefore increase the BM-reported values for wind power output by this factor, to better reflect the actual situation.

Sources:

  1. UK Wind Energy Database (UKWED)
  2. Elexon
  3. Variable Pitch
  4. Renewable Energy Foundation
Posted in Energy, renewables, Science, wind farms | Tagged | 12 Comments

The Logical Fallacy behind ‘Representative Concentration Pathways’

Representative Concentration Pathways (RCPs) used in AR5 are future projections of greenhouse gas emissions under different mitigation scenarios. So, for example, there is a ‘business as usual’ high-emission scenario (RCP8.5) and a mid-range mitigation scenario (RCP4.5). CMIP5 model projections are then based on runs with each scenario.

Here is the fallacy. The number 8.5 refers to the resultant climate forcing in W/m2 at the top of the atmosphere by 2100. Yet to know what this forcing will be in 2100 you have to use a climate model! This is a circular argument, as it predetermines the answer before any ‘testing’ of the models even starts. So, for example, RCP8.5 gives a warming of

(with no feedback) \Delta{T} = \frac{\Delta{F}}{3.2} = 2.65C

(with feedback) \Delta{T} = \frac{\Delta{F}}{3.2(1-f)} = 4.4C  for f=0.4
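As a quick check on the arithmetic, a minimal sketch assuming, as above, the standard no-feedback Planck response of 3.2 W/m2 per degree C:

```python
# Warming implied by the RCP8.5 forcing, with and without a net feedback f.
DELTA_F = 8.5   # assumed top-of-atmosphere forcing in 2100 (W/m2)
PLANCK = 3.2    # no-feedback response (W/m2 per C)

dT_no_feedback = DELTA_F / PLANCK                # ~2.65 C
f = 0.4                                          # assumed net feedback fraction
dT_with_feedback = DELTA_F / (PLANCK * (1 - f))  # ~4.4 C

print(round(dT_no_feedback, 2), round(dT_with_feedback, 1))  # 2.66 4.4
```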

How can you test the models with an emission scenario based precisely on those same model results? These ‘scenarios’ must include highly uncertain assumptions about carbon cycle feedbacks and cloud forcing feedbacks that are built into the very models you are supposed to be testing.

The SRES scenarios used in CMIP3 prescribed the atmospheric CO2 concentration, NOT the final forcing. Why was this changed?

Reading Moss et al. one sees that RCPs were a deliberate choice to define an end goal for radiative forcing rather than CO2 levels, apparently to speed up impact assessments by the sociologists. In fact these are just guesses for the radiative forcing in 2100, and there is no unique emission scenario behind them. Instead the modellers can choose any plausible emission ‘path’ they like that reaches the required forcing; each path is just one of many alternatives. This is like saying: we know that the world is going to fry by 2100 if the sun’s output were to increase by 8.5 W/m2, so let’s simply use that figure for the enhanced GHE by 2100 and call it our ‘business as usual’ scenario RCP8.5. That way we can convince the world to take drastic mitigation action now, by scaring everyone about just how bad things will be.

It is also interesting to look again at Fig 10 in the AR5 Summary for Policymakers. This figure had, and continues to have, a huge political impact.

Figure 1: Overlaid on figure 10 from the SPM are Hadcrut4 data points, shown in cyan, where CO2 is taken from Mauna Loa data. Gtons of anthropogenic CO2 are calculated relative to 1850 and scaled up by a factor 2.3, because 43% of anthropogenic emissions remain in the atmosphere. The blue curve is a logarithmic fit to the Hadcrut4 data, because CO2 forcing is known to depend on the logarithm of CO2 concentration and is certainly not linear. This is derived in ‘Radiative forcing of CO2’.

Firstly, note that RCP8.5 seems to imply that the carbon content of the atmosphere will triple by 2100. That means that in the next 85 years we would emit twice as much CO2 as has been emitted since the industrial revolution. Current CO2 levels are 400 ppm – an increase of 120 ppm. So by 2100 the graph seems to imply that levels would be 640 ppm. Yet that is only a total increase in CO2 forcing of 4.4 W/m2. To reach a forcing of 8.5 W/m2 the atmosphere must reach a CO2 concentration of about 1380 ppm.
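These figures follow from the standard simplified CO2 forcing formula, ΔF = 5.35 ln(C/C0) – the same logarithmic dependence used for the blue curve in Figure 1. A quick check, assuming a pre-industrial C0 of 280 ppm:

```python
import math

C0 = 280.0  # assumed pre-industrial CO2 concentration (ppm)

def co2_forcing(ppm):
    """Forcing (W/m2) from the simplified formula dF = 5.35 ln(C/C0)."""
    return 5.35 * math.log(ppm / C0)

def co2_concentration(dF):
    """Concentration (ppm) needed to produce a forcing dF (W/m2)."""
    return C0 * math.exp(dF / 5.35)

print(round(co2_forcing(640), 1))      # 4.4 W/m2
print(round(co2_concentration(8.5)))   # ~1370 ppm
```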

What is going on? There is a subtle assumption behind Fig 10, namely that natural CO2 sinks will saturate, so that instead of just half of emitted CO2 remaining in the atmosphere, eventually all of it will remain. There is no real evidence whatsoever that this saturation is occurring.

Could we actually reach 1380 ppm even if all emitted CO2  remained in the atmosphere?

date   50% retention (ppm)   100% retention (ppm)
now    400                   520
2100   640                   1000
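The table follows from simple bookkeeping: at 50% retention the observed rise above the pre-industrial 280 ppm is half of what was emitted, so at 100% retention the rise would double. A one-function sketch:

```python
PREINDUSTRIAL_PPM = 280

def full_retention(observed_ppm):
    """Concentration if all emitted CO2 (currently ~50% retained) had stayed
    in the atmosphere: double the observed rise above pre-industrial."""
    return PREINDUSTRIAL_PPM + 2 * (observed_ppm - PREINDUSTRIAL_PPM)

print(full_retention(400))   # now  -> 520 ppm
print(full_retention(640))   # 2100 -> 1000 ppm
```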

We still can’t reach 1380 ppm even if all emissions remained in the atmosphere for ever. Can positive feedbacks save RCP8.5 by boosting the external forcing? Perhaps, but my problem with that is the incestuous relation between the assumptions made in the models and the scenarios. You need the high-sensitivity models to justify the target forcing of 8.5 W/m2, so is it any surprise that those same models then predict 4.5C of warming? The same argument applies to the emissions limits needed to meet the 2C target to be discussed in Paris.

Arguments that other anthropogenic forcings such as methane or sulphate aerosols make up the difference seem rather dubious, since CH4 has an atmospheric lifetime of only about a decade and aerosols of just days to weeks.

Fig 10 is highly misleading for policy makers (IMHO)!


Posted in AGW, Climate Change, climate science, IPCC, Science | Tagged , , | 5 Comments

Marotzke & Forster Revisited

Marotzke & Forster (2015) found that 60-year trends in global surface temperatures are dominated by the underlying climate physics. However, the data show that climate models overestimate such 60-year trends after 1940.

Comparison of 60y trends in observations and models (see text for details).

The recent paper in Nature by Jochem Marotzke & Piers Forster, ‘Forcing, feedback and internal variability in global temperature trends’, has gained much attention because it makes the claim that climate models are just fine and do not overestimate warming, despite the observed 17-year hiatus since 1998. They attempt to show this by demonstrating that the 15y trends in the Hadcrut4 data can be expected in CMIP5 models through quasi-random internal variability, whereas the 60y trends are deterministic (anthropogenic). They identify the ‘deterministic’ and ‘internal variability’ components in the models through a multi-regression analysis with their known forcings as input.

\Delta{T} = \frac{\Delta{F}}{(\alpha + \kappa)} + \epsilon

where \Delta{F} is the forcing, \alpha is the climate feedback parameter, \kappa is the ocean heat uptake coefficient and \epsilon is random internal variation.
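The form of this decomposition can be sketched with synthetic data. This is only an illustration of the regression step with a made-up forcing ramp, not M&F’s actual forcing series:

```python
import numpy as np

# Toy version of the M&F regression: estimate the coefficient 1/(alpha+kappa)
# by ordinary least squares; the residual plays the role of epsilon.
rng = np.random.default_rng(0)
dF = 0.03 * np.arange(100)          # synthetic forcing ramp (W/m2)
true_coeff = 0.5                    # i.e. alpha + kappa = 2 W/m2 per C
dT = true_coeff * dF + rng.normal(0, 0.1, dF.size)  # add 'internal variability'

A = np.vstack([dF, np.ones_like(dF)]).T   # design matrix: slope + intercept
(coeff, intercept), *_ = np.linalg.lstsq(A, dT, rcond=None)
epsilon = dT - (coeff * dF + intercept)   # recovered internal variability

print(round(coeff, 1))                    # recovers ~0.5
```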

This procedure was criticised by Nic Lewis and generated an endless discussion on Climate Audit and Climate-Lab about whether it made statistical sense. However, for the most part I think this is irrelevant, as it is an analysis of differences between models and not of observational data.

Firstly, the assumption that all internal variability is quasi-random is likely wrong. In fact there is clear evidence of a 60y oscillation in the GMST data, probably related to the AMO/PDO – see realclimate. In this sense all models are likely wrong, because they fail to include this non-random variation. Secondly, as I will show below, the observed 15y trends in Hadcrut4 are themselves not quasi-random. Thirdly, I demonstrate that the observed 60y trends after 1945 are poorly described by the models, and that by 1954 essentially all of the models predict higher trends than those observed. This means that the ‘deterministic’ component of the CMIP5 models does indeed overestimate the GMST response to increasing greenhouse gas concentrations.

Evidence of regular climate oscillations

Hadcrut4 anomaly data compared to a fit with a 60y oscillation and an underlying logarithmic anthropogenic term.

Figure 1 shows that the surface data can be well described by a formula (described here) that includes both a net CO2 forcing term and a 60y oscillation as follows:

\Delta{T}(t) = -0.3 + 2.5\ln{\frac{CO2(t)}{290.0}} + 0.14\sin(0.105(t-1860)) - 0.003\sin(0.57(t-1867)) - 0.02\sin(0.68(t-1879))

The physical justification for such a ~0.2C oscillation is the observed PDO/AMO, which, just like ENSO, can affect global surface temperatures but over a longer period. No models currently include any such regular natural oscillations. Instead, the albedo effects of aerosols and volcanoes have been tuned to agree with the past GMST record and follow its undulations. Many others have noted this oscillation in GMST, and even Michael Mann is now proposing that a downturn in the PDO/AMO is responsible for the hiatus.
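For reference, the fit can be written directly as a function. This is a sketch; in practice CO2(t) would be taken from Mauna Loa or ice-core data:

```python
import math

def dT_fit(year, co2_ppm):
    """Fitted anomaly: log(CO2) anthropogenic term plus a ~60y oscillation
    (the two small extra sine terms are minor corrections from the fit)."""
    return (-0.3 + 2.5 * math.log(co2_ppm / 290.0)
            + 0.14 * math.sin(0.105 * (year - 1860))
            - 0.003 * math.sin(0.57 * (year - 1867))
            - 0.02 * math.sin(0.68 * (year - 1879)))

# e.g. the anomaly predicted for 2014 at ~398 ppm
print(round(dT_fit(2014, 398), 2))
```

Note that the leading sine term has period 2π/0.105 ≈ 60 years, which is the oscillation discussed above.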

15y and 60y trends in observations and models

I have repeated the analysis described in M&F. I use linear regression fits over periods of 15y and 60y to the Hadcrut4 data, and also to the fitted equation described above. In addition I have downloaded 42 CMIP5 model simulations of monthly surface temperature data from 1860 to 2014, calculated the monthly anomalies and then averaged them over each year. Then for each CMIP5 simulation I calculated the 15y and 60y trends for increasing start year, as described in M&F.
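The trend calculation itself is straightforward; a sketch of how the 15y (or 60y) trend versus start year can be computed from annual anomalies with numpy:

```python
import numpy as np

def window_trends(anomalies, first_year, window):
    """Linear-regression trend (C/decade) for every full window of years,
    returned as (start_year, trend) pairs - the quantity plotted by M&F."""
    years = np.arange(first_year, first_year + len(anomalies))
    out = []
    for i in range(len(anomalies) - window + 1):
        slope = np.polyfit(years[i:i + window], anomalies[i:i + window], 1)[0]
        out.append((int(years[i]), 10.0 * slope))  # per-year slope -> per decade
    return out

# sanity check: a pure 0.01 C/yr ramp has a 0.1 C/decade trend in every window
ramp = 0.01 * np.arange(155, dtype=float)
print(window_trends(ramp, 1860, 15)[0])
```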

Figure 2 shows the calculated 15y trends in the H4 dataset compared to trends from the fit. For comparison we first show Fig 2a, taken from M&F, below.

Fig 2a: 15y trends from M&F compared to model regressions. Error bars for random internal variation are ±0.26C, which dominate the ‘deterministic’ (AGW) error spread between models of ±0.11C.

M&F’s regression analysis then goes on to show that the deterministic effects in the CMIP5 models should dominate for the longer 60y trends. In particular, the error on the 60y trends across models is ±0.081C, which is 30% lower than the random variation. Therefore the acid test of the models comes when comparing 60y model trends to the observations, because now the statistical variation is much smaller. These are my results below.

a) 15y trends derived from Hadcrut4 data and the fit described above. Note how the trends are not random but follow a regular variation in phase with the fit.
b) 60y trends in Hadcrut4 data (black circles) compared with the fit (blue line) and an ensemble of CMIP5 model calculations. The red curve is the average of all CMIP5 models.

This analysis shows two effects which were not reported by M&F. Firstly, the 15y variation in trends of the observed data is not random but shows a periodic shape, which is also reproduced by the fit. This is characteristic of an underlying natural climate oscillation. The quasi-random natural variation in the CMIP5 models, as shown in Fig 2a above, encompasses the overall magnitude of the variation but not its structure.

Secondly, the 60y trends also show a much smaller but still residual structure reflecting the underlying oscillation shown in blue. The spread in the 42 models is of course due to their different effective radiative forcings and feedbacks. The fact that before 1920 all model trends can track the observed trends is partly due to parametric tuning of aerosols to agree with hindcast temperatures. After 1925 the observed trend begins to fall beneath the average of CMIP5, so that by 1947 the observations lie below all 42 model trends in the CMIP5 ensemble. This increase in model trends above the observed 60y trend cannot now be explained by natural variation, since M&F argue that the deterministic component must dominate. The models must be too sensitive to net greenhouse forcing. However, M&F dismiss this fact simply because they can’t determine which component within the models causes the trend. In fact the conclusion of the paper is based on analysing model data and not the observational data. It is bizarre. They conclude their paper as follows:

There is scientific, political and public debate regarding the question of whether the GMST difference between simulations and observations during the hiatus period might be a sign of an equilibrium model response to a given radiative forcing that is systematically too strong, or, equivalently, of a simulated climate feedback α that is systematically too small (equation (2)). By contrast, we find no substantive physical or statistical connection between simulated climate feedback and simulated GMST trends over the hiatus or any other period, for either 15- or 62-year trends (Figs 2 and 3 and Extended Data Fig. 4). The role of simulated climate feedback in explaining the difference between simulations and observations is hence minor or even negligible. By implication, the comparison of simulated and observed GMST trends does not permit inference about which magnitude of simulated climate feedback—ranging from 0.6 to 1.8 W m−2 °C−1 in the CMIP5 ensemble—better fits the observations. Because observed GMST trends do not allow us to distinguish between simulated climate feedbacks that vary by a factor of three, the claim that climate models systematically overestimate the GMST response to radiative forcing from increasing greenhouse gas concentrations seems to be unfounded.

It almost seems like they have reached the conclusion that they intended to reach all along – namely that the models are fit for purpose and the hiatus is a statistical fluke, not unexpected in 15y trend data. This way they can save the conclusions of AR5, but only by ignoring the evidence that the observational data support an AMO/PDO oscillation and moderate global warming.

Physics has always been based on developing theoretical models to describe nature. These models make predictions which can then be tested by experiment. If the results of these experiments disagree with the predictions, then either the model can be updated to explain the new data or else discarded. What one can’t do is discard the experimental data because the models can’t distinguish why they disagree with it.

My conclusion is that the 60y trend data show strong evidence that CMIP5 models do indeed overestimate global warming from increased greenhouse gases. The discrepancy between climate projections and observations will only get worse as the hiatus continues, probably for another 10 years. The current 60y trend is in fact only slightly larger than that in 1900. Once the oscillation reverses around 2030, warming will resume, but climate sensitivity is still much less than most models predict.

Posted in AGW, Climate Change, climate science, Science | Tagged , , , | 28 Comments