## Lapse rate feedback

In the tropics the tropopause is much higher than in temperate regions. The reason for this is that warm ocean surfaces cool by evaporation, moving latent heat upwards, where it warms the upper atmosphere through condensation; the lapse rate moves towards the moist adiabatic lapse rate. The tropopause is where the lapse rate peters out, because radiation from greenhouse gases to space dominates over convection. The tropopause is much higher in the tropics because there is so much water vapour in the atmosphere that radiation can only dominate at heights where the H2O wavelengths are no longer opaque. Despite this, the temperature at these heights is no lower than in temperate zones and the radiative loss is significant. This shows how the tropospheric lapse rate and height are defined by greenhouse gas concentrations. If you add more water vapour to the atmosphere, the troposphere increases in height. This is a negative feedback, because more H2O does not lead to enhanced surface warming; on the contrary, surface temperatures are stabilised by evaporation.

What then happens if you add more CO2? Essentially the same negative feedback must occur. The effective height for CO2 radiative cooling to space rises to an initially colder level, leading to a slight increase in forcing. This slight forcing at the surface must also drive more evaporation from the ocean, reducing the lapse rate and thereby increasing the temperature at the effective height for CO2 radiation to space. The result is a negative feedback counteracting the original CO2 forcing.

This same effect happens every day in the tropics: as solar radiation increases around midday, so too does evaporation. This is a strong negative feedback keeping ocean temperatures below ~30°C. Exactly the same effect must occur for CO2.

I have been away from the UK for 5 weeks in Australia and Vietnam. Even Darwin ocean temperatures stay below 30°C while the air approaches 100% relative humidity and 38°C around midday. It then rains most days, cooling the surface. Extreme temperatures only occur in deserts, where there are no evaporation sources. Otherwise the oceans stabilise temperatures on Earth.

Scary 5.5 meter crocodile on the Adelaide river just outside Darwin. Keeping cool for the last 50 million years!


## UK summer temperatures since 1933

Have summer temperatures in the UK increased since the 1930s? If so, is CO2 to blame? It turns out that cloud cover mostly determines UK summer temperatures, while CO2 plays only a minor role. 1975 and 1976 were the warmest summers since 1933 across the UK simply because there was more sunshine and less cloud. There is barely any evidence of a long-term warming trend in the data. If CO2 forcing is included in the calculation, the Transient Climate Response (TCR) to a doubling of CO2 works out at a modest 1.4°C. See below the modelled results without AGW.

Average of maximum temperatures (Tmax) across 22 weather stations all over the UK. Tcalc is the calculated temperature based only on cloud forcing, ignoring all CO2 forcing. See the description of the model given in the text.

In June I received an email from Euan Mearns concerning UK temperature data and their interpretation. The historic data from 22 stations are available from the Met Office: http://www.metoffice.gov.uk/climate/uk/stationdata/ He had noticed a striking correlation between monthly hours of sunshine and maximum temperatures after about 1956. He interpreted hours of sunshine as a proxy for cloud cover. His hypothesis was that average monthly cloud cover might determine the maximum recorded temperature. He asked me to help develop a simple radiative forcing model to explain the data, and we started a collaboration. This began a nearly 3-month study of cloud forcing compared to CO2 forcing. The previous two posts describe the annual average temperatures compared to a combined cloud & CO2 forcing model. The results show a modest CO2 effect and a strong cloud effect on UK annual temperatures. One criticism of the cloud forcing is that clouds behave differently with season: in summer cloud cover is more likely to have a net cooling effect, whereas in winter clouds may have a net warming effect as daylight hours reduce. For this reason we looked at the summer months only – June, July & August – to compare with the model on a yearly basis. This is more complex because for each station we need to calculate summer daylight hours and the incident insolation, as these vary with latitude.

To proceed I used the NASA climatology to obtain the clear sky solar radiative flux S0 for each latitude, longitude and month. The number of daylight hours for each station (dayhrs) was calculated and then used to derive the average cloud cover from the sunshine hours (sunhrs) each month:

CC = (dayhrs-sunhrs)/dayhrs

For each station we then calculated the forcing due to the measured values of CO2 and cloud, averaged over the 3 months JJA for each year, using the model described previously. The calculation was started in 1930 and done independently for each station. Tcalc was normalised to each station's 1930 temperature, and then for each successive year Tcalc(y) was simply set to Tcalc(y-1) + DT. The average value of Tcalc over all stations was then compared to the average Tmax over all measured values.
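As a sketch, the per-station recursion might look like the following (variable names and the toy inputs are hypothetical; scaling the CO2 term so that a doubling yields TCR degrees is our reading of the model, not code from the post):

```python
import math

PLANCK = 3.5  # Planck response, Watts/m2 per °C

def summer_tcalc(t1930, years, S0, dayhrs, sunhrs, co2, NCF=0.69, TCR=1.4):
    """Per-station summer temperature recursion, normalised to 1930.
    sunhrs and co2 are dicts keyed by year; dayhrs is the (fixed)
    total summer daylight hours for this station's latitude."""
    def seff(y):
        cc = (dayhrs - sunhrs[y]) / dayhrs     # fractional cloud cover
        return (1 - cc) * S0 + NCF * cc * S0   # effective solar forcing
    T = {years[0]: t1930}
    for prev, y in zip(years, years[1:]):
        dT = (seff(y) - seff(prev)) / PLANCK   # cloud term
        # CO2 term scaled so that a doubling of CO2 gives TCR degrees
        # (an assumption of this sketch, consistent with the TCR definition)
        dT += TCR * math.log(co2[y] / co2[prev]) / math.log(2)
        T[y] = T[prev] + dT
    return T
```

With constant sunshine and constant CO2 the series stays flat, so all the structure in Tcalc comes from the two year-on-year difference terms.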

Variations in cloud cover explain all the yearly variations in summer temperatures. Including a CO2 term gives a reasonable fit based on R2, using NCF = 0.69 (Net Cloud Forcing) and TCR = 1.4°C – see Figure 1.

Fig 1: Comparison of calculated temperatures to measured temperatures for 22 stations across UK.

Fig 3: Correlation plot of Tcalc versus Tmeas. y = x is a perfect fit.

The fit is remarkably good. The Clean Air Act was introduced in 1956, which helped reduce smoke across the UK, and the correlation of the annual temperatures was poor before 1956. However the summer months were free of such problems, because there were no open fires in towns, so the correlation between sunshine hours (cloudiness) and temperatures is clear throughout the time series.

Is there any strong evidence for AGW in this data? To check this I set TCR = 0 and got the following result using cloud forcing only and NCF = 0.69.

Fig 2: Comparison to cloud forcing only, by setting TCR = 0 – i.e. ignoring CO2 forcing.

The fit is also very good, which shows just how unimportant AGW has been for UK summers so far. The most important factor governing climate in the UK is changes in cloud cover – or seasonal weather. We kind of knew that already! Another interesting observation is that good summers tend to come in runs of 2 or 3 successive years.

So as far as UK summer temperatures are concerned, global warming is pretty much a non-issue thus far; yearly variations are still much greater. So if we have a warm summer in 2014 no doubt the BBC will tell us global warming is to blame – but it won't really be true. It will be because we have had less cloud than normal. 1976 was the hottest summer experienced in the UK since 1933. I got married in 1976 in Wales and it actually rained that afternoon – for the first time in over 2 months!

References:

1. Myhre, G., E. J. Highwood, K. P. Shine and F. Stordal, New estimates of radiative forcing due to well mixed greenhouse gases, Geophys. Res. Lett. 25, 2715–2718, 1998.

## UK Temperatures since 1933 – part 2

### This is a joint post with Euan Mearns

In this post we present evidence that suggests 88% of temperature variance and one-third of net warming observed in the UK since 1956 can be explained by cyclical change in UK cloud cover. The post is co-authored by Euan Mearns and builds on an earlier post that described the UK Met Office climate station data from 1933 to present (links given below).

A copy of a manuscript submitted to and rejected by Nature can be downloaded here. This post is also based on a seminar given at The University of Aberdeen on 12th November that can be downloaded here (4.1MB).

Background

The objective of this study is to explain an observed cyclical relationship between sunshine hours and temperature from 23 UK Met Office weather stations (Figures 1 and 2) [1]. The relationship (R2 = 0.8 on 5y means) is observed in data from 1956 to 2012. The pre-1956 data are believed to be affected by air pollution, as previously described on Energy Matters and clivebest.com.

Figure 1 Tmax and sunshine hours averaged for 23 UK weather stations. The UK Met Office report monthly data. The first stage of data management was to compute annual means. The above chart shows a 5 year running mean through the annual data.

Figure 2 Data from Figure 1 cross plotting Tmax and sunshine hours, 1956-2012.

We recognised that the temperature trend could in part be controlled by dCloud and in part by dCO2, and wanted to determine the relative importance of these two forcing variables. Other variables, such as dCH4, are of secondary importance and have not been included in our analysis.

Line-by-line radiative transfer codes calculate the forcing of CO2 in the atmosphere. CO2 absorbs infrared (IR) photons from the surface in tight bands of quantum excitations of the vibrational and rotational states of the molecule; on Earth the 15 micron band is dominant. The central region is saturated at current CO2 levels, so the enhanced greenhouse effect is mainly due to increases in the side lines. The net effect is that CO2 forcing increases logarithmically with concentration. This dependence has been parameterised by Myhre et al. (1998) [2] as:

S = 5.3 ln(C/C0) watts/m2

where C is the new level of CO2 relative to a start value C0. Climate Sensitivity is defined as the temperature increase following a doubling of CO2 levels in the atmosphere. The change in forcing is:

5.3 ln(2) = 3.67 watts/m2

so applying the value of the Planck response (3.5 Watts/m2/°C) we get a CO2 climate sensitivity of 1.05°C. Global circulation models (GCMs) include multiple feedback effects from H2O, clouds and aerosols, resulting in larger values of (equilibrium) climate sensitivity ranging from 1.5°C to 4.5°C (AR5) [3].
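The arithmetic above can be checked in a couple of lines:

```python
import math

dF = 5.3 * math.log(2)      # forcing from a doubling of CO2, Watts/m2
print(round(dF, 2))         # 3.67
print(round(dF / 3.5, 2))   # 1.05 °C, the no-feedback climate sensitivity
```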

Our CO2 forcing model applied to the UK is simply:

CS x 5.3 ln(C/C0)

where CS represents a “feedback” factor to be determined by the data.

We use the annually averaged Mauna Loa measurements of CO2 [4] and assume these values apply to the UK. Then the annual change in temperature due to Anthropogenic Global Warming (AGW) from year y-1 to year y is given by:

DT = 5.3 ln(CO2(y)/CO2(y-1)) / 3.5
and
Tcalc(y) = Tcalc(y-1) + DT
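A minimal sketch of this recursion (the `co2` list of annual means is hypothetical):

```python
import math

def co2_tcalc(t0, co2):
    """CO2-only recursion: Tcalc(y) = Tcalc(y-1) + 5.3 ln(C(y)/C(y-1)) / 3.5,
    where co2 is a list of annual mean concentrations (ppm)."""
    T = [t0]
    for prev, cur in zip(co2, co2[1:]):
        T.append(T[-1] + 5.3 * math.log(cur / prev) / 3.5)
    return T
```

Because the logarithms telescope, the end point depends only on the ratio of final to initial CO2: T[-1] - T[0] = 5.3 ln(C_end/C_start)/3.5.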

For non-physicists, the graphic picture of the CO2 forcing model (Figure 3) may help visualise how it works.

Figure 3 The CO2 radiative forcing model outputs. The model is initiated by setting Tcalc = Tmax in 1956. Model outputs are plotted for transient climate response (TCR) = 1, 2, 3 and 4°C. The contribution of CO2 with high TCR in the range 2 to 4°C can explain some of the warming trend but little of the structure of the temperature record.

The sunshine–surface temperature-forcing model

Clouds have two forcing effects on climate. First, they reflect incoming solar radiation back to space, providing an effective cooling term. Secondly, they absorb IR radiation from the surface while emitting less IR radiation from cloud tops, thereby increasing the greenhouse effect (GHE). The interplay between these two effects is complex and depends on latitude and cloud height. Recent CERES satellite measurements have determined that globally the net cloud radiative effect is negative (-21 W/m2) [9] – a net cooling of the Earth. UK climate is dominated by low cloud, which increases the net cooling effect. We define the Net Cloud Forcing (NCF) factor in the UK to be the ratio of solar forcing for cloudy skies to that for clear skies. Then for a given station with average solar radiation S0 (taken from NASA climatology) [5] and fractional cloud cover CC (where hours of cloud are defined as daylight hours without sunshine) we find for year y:

CC(y) = (4383-sunshine(y))/4383

the effective solar forcing

Seff(y) = (1-CC(y)).S0 + NCF.CC(y).S0

Thus an increase in the radiative forcing for a given UK station due to decreasing cloud cover will change the surface temperature so as to balance the change through the so-called Planck response. The Planck response (4σTeff³) is about 3.5 Watts/m2/°C: the increase in outgoing IR for a 1°C rise in surface temperature. So the change in average temperature DT between one year and the next is given by:

DT(y) = (Seff(y) – Seff(y-1))/3.5

The model therefore predicts the average temperature Tcalc based only on CC (cloud cover) and NCF (net cloud forcing factor).

Tcalc(y) = Tcalc(y-1) +DT(y)

For each station we normalise the Tcalc(1956) to the actual average temperature Tmax(1956) and then calculate all future temperatures based only on CC (sunshine hours). The only variable in the model is NCF. Finally, all stations are averaged together to compare the model with the actual temperature record.
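A minimal sketch of the cloud-only recursion for one station, under the definitions above (`S0` and the sunshine series are hypothetical toy inputs):

```python
def cloud_tcalc(t1956, sunshine, S0, NCF):
    """Cloud-only recursion for one station: annual sunshine hours ->
    cloud cover -> effective solar forcing -> temperature change via
    the 3.5 W/m2/°C Planck response. 4383 is the annual daylight total."""
    def seff(sun):
        cc = (4383.0 - sun) / 4383.0            # fractional cloud cover
        return (1.0 - cc) * S0 + NCF * cc * S0  # effective solar forcing
    T = [t1956]
    for prev, cur in zip(sunshine, sunshine[1:]):
        T.append(T[-1] + (seff(cur) - seff(prev)) / 3.5)
    return T
```

Note that NCF = 1 makes the effective forcing independent of cloud cover, giving the flat line described in Figure 4, while NCF = 0 gives the largest-amplitude response.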

For those who don’t quite follow the physics the graphic output shown in Figure 4 should help visualise how the model works.

Figure 4 Output from the sunshine–surface temperature-forcing model for net cloud forcing (NCF) factors of 0.3, 0.4, 0.5 and 0.6. The model is initialised by setting Tcalc = Tmax in 1956. All subsequent years are calculated using only dSunshine (i.e. dCloud). By way of reference, NASA report a mean cloud transmissibility of 0.4 for the latitude of interest [5]. NCF values >0.4 in our model incorporate a component of the greenhouse warming effect of clouds. NCF = 1 (total opacity of cloud, all radiation reflected) would be represented by a flat line on this chart; NCF = 0 (total transmissibility of cloud, all radiation reaches the surface) would be represented by a high-amplitude curve.

From Figure 4 it can be seen that none of the NCF values provide a perfect fit of model to measured data. NCF=0.6 fits the front end but not the back end of the time temperature series. NCF=0.3 fits the back end but not the front end of the time temperature series. It was apparent to us that an NCF value close to 0.6 could provide a good fit if temperatures were lifted at the back end by increasing CO2. The next stage, therefore, was to combine the CO2 radiative forcing and sunshine surface temperature forcing models.

Optimised combined model output

The optimised combined model output should satisfy the following criteria:

Gradient of Tmax v Tcalc = 1
Intercept = 0
R2 = 1
Sum of residuals = 0

The model is optimised with NCF = 0.54 and TCR = 1.28°C as shown in Figures 5, 6 and 7. This provides:

Intercept = +0.01
R2 = 0.85
Sum of residuals = -0.71°C
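The optimisation can be sketched as a plain grid search over (NCF, TCR) (entirely hypothetical helpers and data; the post's actual procedure also scores the gradient and intercept of Tmax versus Tcalc):

```python
def fit_stats(tmax, tcalc):
    """R2 and sum of residuals between observed and modelled series."""
    n = len(tmax)
    mean = sum(tmax) / n
    ss_tot = sum((t - mean) ** 2 for t in tmax)
    ss_res = sum((o - c) ** 2 for o, c in zip(tmax, tcalc))
    return 1.0 - ss_res / ss_tot, sum(o - c for o, c in zip(tmax, tcalc))

def grid_search(tmax, model):
    """model(NCF, TCR) -> Tcalc series; return (R2, NCF, TCR) with best R2."""
    best = None
    for ncf in [i / 100.0 for i in range(30, 80)]:        # NCF 0.30..0.79
        for tcr in [i / 100.0 for i in range(50, 300, 2)]:  # TCR 0.50..2.98
            r2, _ = fit_stats(tmax, model(ncf, tcr))
            if best is None or r2 > best[0]:
                best = (r2, ncf, tcr)
    return best
```

Each grid point reruns the full Tcalc recursion, which is cheap at this resolution (a few thousand evaluations).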

Figure 5 Comparison of model (Tcalc) with observed (Tmax) data. The model is initialised by setting Tcalc=Tmax in 1956. Thereafter Tcalc is determined by variations in sunshine hours and CO2 alone.

Figure 6 Cross plot of the model versus actual data plotted in Figure 5.

Figure 7 Residuals calculated by subtracting Tcalc from Tmax. Not only is the sum of residuals for the optimised model close to zero but they are also evenly distributed along the time series.

Model example using TCR = 3

In order to illustrate a different output, let's assume that there was "unequivocal evidence" that TCR = 3°C. How would our combined model cope? Setting TCR = 3°C, we have adjusted NCF to produce the best possible fit, as illustrated in Figures 8, 9 and 10. The optimised parameters are as follows:

Intercept = +0.15
R2 = 0.84
Sum of residuals = -11.1°C

Notably, it is possible to get a good fit on three out of four of our criteria, but a quick examination of Figures 8 and 10 shows that the fit is visibly poorer than the optimised model. The extent to which this precludes a TCR as high as 3°C is for the reader to decide.

Figure 8 Setting TCR=3°C, the model is optimised with NCF=0.72. This provides reasonable m, c and R2 (Figure 6) but a clearly poor fit as evidenced by sum of residuals = -11.1°C (Figure 10).

Figure 9 Cross plot of the model versus actual data plotted in Figure 8.

Figure 10 Residuals calculated by subtracting Tcalc from Tmax. With TCR set to 3°C, the Tcalc model produces temperatures that are consistently too high, producing heavily biased negative residuals along the time–temperature series.

If one accepts that cyclical changes in sunshine / cloud contribute to the net warming of the UK since 1956, then this must reduce the contribution to warming from CO2. Hence it becomes impossible to produce a good fit of model to observations while lending CO2 a larger role than the model can accommodate.

Relative contributions to the optimised model

Setting the combined model parameters so that there is zero effect from CO2 and zero transmissibility of cloud to incoming radiation, we discovered that the output was not a flat line (Figure 11). The reason is that the data inputs from 23 weather stations are discontinuous (Figure 12), and this imparts some structure to the averaged data stack (Figure 11). Taking this into account, the percentage contributions of dCO2, dCloud and dArtifacts add up to 100% along our time series as shown in Figure 11.

Figure 11 The relative contributions to the optimised model from dCO2, dCloud and data artifacts. It can be seen that along the time series CO2 makes the greatest contribution followed by cloud followed by artifacts.

Figure 12 The opening and closing of weather stations imparts some structure to the Tmax and sunshine data that needs to be taken into account in this and all other interpretations of such data series.

Integrating the modulus of the curves for the optimised model shown in Figure 11 along the time series and calculating the percentage contribution to the temperature record (gross dT) provides the following result:

dCO2 – 5%
dCloud – 88.5%
dArtifact – 6.5%

However, looking at the overall final contribution of each component between 1956 and 2012 (net dT; Figure 11) produces this result:

dCO2 – 49%
dCloud – 32%
dArtifact – 19%

In other words, variance in cloud cover accounts for nearly all of the structure in the UK temperature record, but somewhat less than half of the total temperature rise since 1956.

Discussion

The data and conclusions presented here apply only to the UK, a small island group off the West coast of Europe that currently occupies the northern end of the temperate climatic belt in a western maritime climatic setting. The polar jet stream is typically overhead and has a profound impact upon the weather regime in the UK. The NCF value of 0.54 derived from our optimised model will apply only to the UK. Other geographic locations should yield different values since they will occupy different latitudes and have different mean cloud geometries – that will fluctuate with time.

However, other localities on the Earth's surface may be expected to display cyclical change in cloud cover that impacts surface temperature evolution. Perhaps some localities show a negative correlation between sunshine and temperature, in which case the net globally averaged effect may converge on zero. But our analysis of global cloud cover and temperature evolution, currently under review, suggests this is not the case [6]. Global cloud cover has fluctuated over the past 40 years and has imparted structure to the temperature record in a manner similar to that described here for the UK.

Global circulation models (GCM) that do not take into account cyclical change in cloud cover have little chance of producing accurate results. Since the controls on dCloud are currently not understood there is a low chance that GCMs can accurately forecast future changes in cloud cover and as a consequence of this they cannot forecast future climate change on Earth.

Professor Dave Rutledge from Caltech reviewed an early version of the manuscript sent to Nature and pointed out that the optimised TCR from our model (1.28°C) was identical to the value reported by Otto et al. (2013) [7]. The Otto et al. work was based on a review of GCMs used in IPCC reports and applies globally. In the UK, we need to call upon increasing CO2 to produce a transient response resulting in higher temperatures to explain the observed temperature record.

Conclusions and consequences

• UK sunshine records suggest that cloud cover fluctuates in a cyclical manner. This imparts structure to the UK temperature record (confidence = very high)
• A combined CO2 radiative forcing and sunshine – surface temperature forcing model is optimised with NCF = 0.54 and TCR = 1.28°C (confidence = medium; uncertainty unquantified)
• Our empirically constrained value for TCR = 1.28°C is identical to the value of 1.3°C reported by Otto et al [7]
• Our model aggregates dT over a 56 year period and provides a good fit of calculated versus observed temperature based on dCloud and dCO2 alone.
• The consequences of the above are quite profound, especially when combined with the findings of Otto et al. It removes the urgency but does not remove the long-term need to deal with CO2 emissions.
• Global cloud cover as recorded by the International Satellite Cloud Climatology (ISCCP) [8] program also shows cyclical change that helps explain the global temperature record.
• The cause of temporal changes in cloud cover remains unknown.

References

[1] Met Office: Historic station data (2013). <http://www.metoffice.gov.uk/climate/uk/stationdata/>
[2] Myhre, G., Highwood, E. J., Shine, K. P. & Stordal, F. New estimates of radiative forcing due to well mixed greenhouse gases. Geophysical Research Letters 25, 2715–2718 (1998).
[3] IPCC AR5 Summary for Policymakers (2013).
[4] Keeling, C. D. et al. Atmospheric carbon dioxide variations at Mauna Loa Observatory, Hawaii. Tellus 28, 538–551 (1976).
[5] Kusterer, J. M. NASA Langley Atmospheric Science Data Center (Distributed Active Archive Center) (2008). <https://eosweb.larc.nasa.gov/index.html>
[6] Best, C. H. & Mearns, E. W. Effect of cloud radiative forcing on climate between 1983 and 2008 (under review).
[7] Otto, A. et al. Energy budget constraints on climate response. Nature Geoscience 6, 415–416 (2013).
[8] The International Satellite Cloud Climatology Project (ISCCP) <http://isccp.giss.nasa.gov/>
[9] Allan, R. P. Combining satellite data and models to estimate cloud radiative effects at the surface and in the atmosphere. Meteorological Applications 18, 324–333 (2011).

## UK temperatures since 1933 – Part 1.

This is a repost written by Euan Mearns and is an introduction to the work we subsequently did this summer concerning cloud and CO2 radiative effects on UK temperatures. Two more posts will follow describing the radiative model in more detail.

Summary

• Terrestrial sunshine records provide an inverse proxy for cloud cover. Sunshine at the surface means a cloud-free line of sight between the point on the surface and the Sun.
• We present concordant sunshine and temperature records for 23 UK Met Office weather stations. Data is available for a handful of stations from 1908 but it is only from 1933 that there are a sufficient number of stations to provide representative cover of the UK.
• Data from 1933 to 1956 is believed to be affected by air pollution from burning coal for home heat and power generation, therefore our main analysis focusses on the time interval 1956 to 2012.
• Both temperature (Tmax) and sunshine hours show cyclic variation, both showing a tendency to rise in the period 1980 to 2000 in keeping with global warming that has been documented in many studies.
• In the UK there is a high degree of covariance between sunshine and Tmax, sunny years tend to be warmer. The correlation coefficient (R2) between sunshine hours and Tmax is 0.8 whilst R2 for CO2 and Tmax is 0.66 (calculated on 5 year means). A significant portion of warming observed in the UK may be attributed to temporal variations in sunshine and cloud cover.
• This post presents a summary of the raw data in 14 charts. Next week we will present a combined net cloud forcing and radiative forcing model with the aim of quantifying the relative contributions of dCloud and dCO2.

Figure 1 Maximum daily temperature (Tmax, red, LH scale) and minimum daily temperature (Tmin, blue, RH scale) from the Leuchars weather station. The red and blue lines are annual averages. The black lines are centred 5y moving averages. Note high degree of co-variation between Tmax and Tmin. Also note how temperatures drifted higher during the 1990s and 2000s but recently are drifting down again, in keeping with the global temperature trend.

Figure 2 Average annual hours of sunshine at Leuchars (blue columns) with a 5y centred moving average in red. Note how the 1990s and early 2000s were clearly sunnier than the preceding decades and how more recently the amount of sunshine seems once again to be in decline and this broadly mirrors the temperature evolution (Figure 1).

Preamble

For more times than I care to recall I have sat down to write a book on energy and climate change. On each occasion my endeavour foundered early on through being distracted by detail. And so it was earlier this year. I had written down my lifelong recollections of climate change in Scotland – cold snowy winters in the 1970s, getting sunburned working in the fruit fields of Perthshire in the 1980s, frost free winters in the 2000s, cold snowy winters today – and I wanted to check my recollections against data – a fatal mistake. I stumbled upon the UK Met Office climate station database, a wonderful resource, and downloaded data from Leuchars, Braemar and Nairn, the three stations closest to where I grew up in Kirriemuir and where I now live in Aberdeen.

Some of this “raw data” from Leuchars is shown in Figures 1 and 2 and from looking at a few charts like these I observed cyclical changes in temperature with time that seemed to be matched by cyclical changes in the amount of sunshine. Warm years were sunnier than colder years. This led me to compile data from 23 UK stations (Figure 3) from which a clear picture of co-variance between sunshine and temperature emerged.

I wanted to quantify the relationship between sunshine and temperature and contacted physicist Dr Clive Best, who seemed pre-eminently qualified to help. This led to a 3-month collaboration and two papers, one on UK and the other on global variations in cloud cover and their impact on temperature trends. The UK paper was rejected twice by Nature and by one other journal, so we have decided to hang the establishment and publish this work on our blogs. The global paper is still out for review. This is the first of three posts on UK climate records, starting with a simple description of the database. If there are any editors or academics out there who want to see this published in the peer-reviewed literature then please get in touch (read the Blog Rules).

Database

All Met Office stations record maximum daily temperature (Tmax), minimum daily temperature (Tmin), rainfall and the number of frost free days. A subset of stations also records sunshine hours, and it was stations with lengthy sunshine records that formed the basis for station selection. The data are reported as monthly means. The records are not 100% complete (I'd estimate >99% complete); where data is missing it has been patched with data from the preceding year. If there was no preceding year, the succeeding year was used.
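The patching rule might be sketched like this (a hypothetical helper; `series` maps (year, month) to a value, with None marking gaps):

```python
def patch(series, years, months=range(1, 13)):
    """Fill each missing monthly value from the same month of the
    preceding year, falling back to the succeeding year if needed."""
    out = dict(series)
    for y in years:
        for m in months:
            if out.get((y, m)) is None:
                if out.get((y - 1, m)) is not None:
                    out[(y, m)] = out[(y - 1, m)]    # preceding year
                else:
                    out[(y, m)] = out.get((y + 1, m))  # succeeding year
    return out
```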

The selected sites are shown in Figure 3 and the distribution of records in Figure 4.

Figure 3 Met Office climate stations used in this study.

Figure 4 The time distribution of records. A handful of Met Office stations have sunshine records from 1908 but this small number fails to provide statistically representative cover of the UK. It is only from 1933 that a large enough number of stations were reporting both sunshine and temperature records to provide representative geographic cover. Hence all data presentations and analysis are based on the 1933 to 2010 time interval. Since we use 5y centred means in our analysis, data from 1931 to 2012 is captured. Over this period the number of operating stations varies, with a peak in the 1980s.

Variance in Sunshine and Tmax

Looking at 100 years of records from 23 stations represents a huge amount of data that presents challenges in how best to display it. Figures 5 and 6 show 5y running averages of Tmax and sunshine for all 23 stations. The most northerly station, Lerwick on the Shetland Islands, is shown in bold blue and one of the most southerly stations, Southampton is shown in bold red. The key observations:

• There is a large variation in temperatures from N to S produced by 10° of latitude separation. Lerwick is, on average, about 5°C colder than Southampton (Figure 5).
• There is also a large N–S range in sunshine received. Note that over a year, every point on the globe should receive the same number of potential sunshine hours, given an unobstructed horizon: (365.25 × 24)/2 = 4383 hours per year. The variance in sunshine hours therefore reflects N–S trends in cloud cover. Southampton receives about 600 more hours of sunshine each year than Lerwick. Eastbourne is anomalously sunny (Figure 6).
• There is a high degree of cyclic co-variance in Tmax across the country. Note how spikes and troughs in Lerwick match spikes and troughs in Southampton (Figure 5).
• The N-S variance in sunshine / cloud cover is more chaotic, and lacks the strong co-variation seen in the Tmax data (Figure 6).

Figure 5 Tmax, 5y running averages for 23 UK stations.

Figure 6 Sunshine, 5y running averages for 23 UK stations.

Figure 7 The mean Tmax and sunshine from all 23 stations, 1y average

Averaging the data for all 23 stations shows a degree of co-variance between temperature and sunshine, although there are instances of negative correlation where spikes down in Tmax are matched by spikes up in sunshine (Figure 7). This may reflect annual variations in sunshine distribution; for example, some years may have sunny summers while others have sunny winters. It is also evident that temperatures were higher in the 1930s and 1940s, lower in the 1950s to 1980s and higher again in the 1990s and 2000s, and this decadal structure in Tmax is also reflected in sunshine / cloud cover. If this is not obvious, further smoothing of the data using a 5y mean shows clearly that cyclic change in Tmax is mirrored by cyclic change in sunshine hours (Figure 8).

Figure 8 The data shown in Figure 7 smoothed further by applying a 5 year running average.

The degree of correlation between sunshine and temperature is quite striking though imperfect. At the beginning of the time series it is evidently lacking altogether and this is surprising since co-variance in sunshine and temperature is intuitively expected. To explain this we call on the introduction of clean air legislation in the UK in 1956. Prior to this date, coal was burned in open hearths throughout UK cities and power stations were also located in cities, for example the iconic Battersea Power Station in central London (inset photograph). Burning all this coal produced dense and lethal smogs, and we suggest that this pre-1956 pollution has perturbed the expected correlation between sunshine and temperature. Looking at seasonal data we see that the link between temperature and sunshine holds good for the summer months, pre-1956, when burning coal was at a minimum. This will be the subject of the third post in this series.

Tmax and Tmin

The radiative and CO2 forcing models that we will present next week will consider only Tmax. That is because when considering the impact of sunshine and cloud cover on the temperature record it is daily Tmax that is most relevant. However, it transpires that there is a very high degree of co-variance between Tmax and Tmin (Figures 9 and 10), hence, conclusions drawn for Tmax may equally apply to Tmin and daily average temperatures.

Figure 9 Tmax, left hand scale, and Tmin, right hand scale. In the UK Tmin is typically 6.5°C cooler than Tmax.

Figure 10 Cross plot of data shown in Figure 9 showing an exceptional degree of correlation between Tmax and Tmin. By and large night time temperatures have a memory of the day before.

Figure 11 Tmax minus Tmin

Figure 11 shows the difference between Tmax and Tmin over time. The trend is perceptibly down by about 0.2°C over a 70 year period and it seems possible this may be due to increased radiative heating at night.

Tmax – correlations with sunshine and CO2

Figure 12 Comparison of Tmax variance in the UK with CO2 smoothed from Mauna Loa data.

Figure 12 shows the correlation between CO2 and Tmax (compare with Figure 8) and highlights a key problem with all models that seek to explain troposphere warming by CO2 alone ± other natural forcings such as volcanoes and variance in solar insolation. CO2 is periodically discordant with cooling trends (e.g. 1933 to 1963) and with cyclic ups and downs in the temperature record. In contrast, cyclical change in sunshine / cloud cover can explain the cyclical variance in Tmax (Figure 8).

Figure 13 Overall, CO2 and Tmax show reasonable correlation. But there are 5 periods of marked negative correlation where temperature is falling as CO2 is rising.

Figure 14 Cross plot of Tmax against sunshine hours using the data from Figure 8, showing a better correlation than CO2.

There is a correlation between Tmax and CO2 with R2=0.66 (Figure 13). But the correlation between Tmax and sunshine is stronger with R2=0.80 (Figure 14).
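These R² values can be reproduced with an ordinary least-squares fit. Below is a minimal sketch using synthetic stand-in series (the real station data are not reproduced here, so the numbers are illustrative only):

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a least-squares line y = a*x + b."""
    a, b = np.polyfit(x, y, 1)
    residuals = y - (a * x + b)
    ss_res = np.sum(residuals**2)
    ss_tot = np.sum((y - y.mean())**2)
    return 1.0 - ss_res / ss_tot

# Synthetic stand-ins for the real annual series (illustration only)
rng = np.random.default_rng(0)
sunshine = rng.normal(1350, 60, 80)                 # annual sunshine hours
tmax = 0.004 * sunshine + rng.normal(0, 0.15, 80)   # Tmax anomaly, deg C

print(round(r_squared(sunshine, tmax), 2))
```

The same function applied to the Tmax/CO2 pair would give the weaker R² quoted above.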

This is as far as I (EM) was able to take the empirical analysis, but I recognised that a physicist should be able to calculate from these data the component of Tmax variation attributable to sunshine and cloud cover and that attributable (if any) to CO2. At this point Dr Clive Best offered his assistance, which led to 3 months of fruitful collaboration. In a post next week we will present the results of combined net cloud forcing and radiative forcing models. The analysis does show that a significant portion of warming in the UK may be attributable to a decline in cloud cover, and any global climate model that does not take variance in natural cloud forcing into account will overestimate the role of CO2.

Posted in AGW, Climate Change, climate science, UK Met Office

## How robust is Figure 10 in AR5-SPM?

The new AR5 iconic graph is Figure 10 in the Summary for Policy Makers. Myles Allen appeared on the BBC with 10 lumps of coal on a table to explain how we had already burned 5 of them, leaving just 5 left to burn if we want to avoid a catastrophe. It is a simple, powerful message understandable by policy makers – but is it actually correct?

Figure 1: Overlaid on Figure 10 from the SPM are Hadcrut4 data points shown in cyan, where CO2 is taken from Mauna Loa data. Gtons of anthropogenic CO2 are calculated relative to 1850 and scaled up by a factor of 2.3 because only 43% of anthropogenic emissions remain in the atmosphere. The blue curve is a logarithmic fit to the Hadcrut4 data. This is because CO2 forcing is known to depend on the logarithm of CO2 concentration and is certainly not linear. This is derived in Radiative forcing of CO2.

Figure 2: A clearer version of the comparison with Figure 10 is this one taken from the slides of the cabinet presentation made by Prof. Walport. The overlay with the data is good showing consistency of carbon counting between my data and AR5. I left out many of the 19th century points due to poor knowledge of CO2 levels. Overlaid at the point where doubling of CO2 in the atmosphere since 1850 occurs (560ppm) are the “extremely confident” estimates from AR5 of ECS and TCR.

When I saw the graph from Figure 10 I thought there must be a mistake, because it showed that all RCP emission scenarios simulated by CMIP5 models result in a simple linear dependence on anthropogenic CO2. This cannot be correct because it is well known that CO2 radiative forcing increases logarithmically with concentration – not linearly. So I decided to investigate.

The novel feature of the SPM presentation is that the x-axis is not time but instead cumulative anthropogenic carbon emissions. Different emission scenarios result in different lengths along essentially the same trajectory. I therefore took the HADCRUT4 annual temperature anomalies scaled to 1860–1880 and smoothed CO2 concentrations from Mauna Loa in order to map the temperature data onto Gtons of increase of atmospheric CO2 since 1850. It is also well known that just 43% of anthropogenic emissions remain airborne in the atmosphere annually, so the actual carbon emissions by man are a factor of 2.3 higher than those inferred from CO2 data.
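The mapping from concentration to cumulative emissions amounts to a couple of multiplications. A sketch, where the ~7.8 Gt CO2 per ppm conversion and the 285 ppm baseline are my assumed round numbers (the 43% airborne fraction is the figure used above):

```python
# Sketch of the ppm -> cumulative-emissions mapping described above.
# Assumed constants: 1 ppm of atmospheric CO2 ~ 7.8 Gt CO2, and an
# airborne fraction of 43% (so emissions = atmospheric increase / 0.43,
# i.e. the factor ~2.3 in the text).
GT_PER_PPM = 7.8           # Gt CO2 per ppm of atmospheric concentration
AIRBORNE_FRACTION = 0.43   # share of emissions remaining in the atmosphere
PREINDUSTRIAL_PPM = 285.0  # assumed 1850 baseline

def cumulative_emissions_gt(ppm):
    """Anthropogenic Gt CO2 emitted since 1850, inferred from concentration."""
    atmospheric_increase = (ppm - PREINDUSTRIAL_PPM) * GT_PER_PPM
    return atmospheric_increase / AIRBORNE_FRACTION

print(round(cumulative_emissions_gt(395.0)))  # e.g. a Mauna Loa value ~2013
```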

The resultant temperature anomalies are plotted in cyan in Figure 1 and purple in Figure 2. Previously I have shown that a good fit to the last 163 years of temperature data can be made with a logarithmic CO2 dependence plus natural variability. A logarithmic temperature dependence for TCR is to be expected because CO2 forcing increases logarithmically and temperature is a response to forcing. Therefore I made a new fit to the data as a function of Gtons of CO2. This fit is shown in blue, and I maintain that it is a more realistic extrapolation into the future than linear projections. Even with the most pessimistic emission scenario, RCP8.5, the temperature rise remains at most ~2°C in 2100.
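A logarithmic fit of this kind is easy to set up. The sketch below assumes the simple form T = (TCR/ln 2)·ln(C/C0) and uses synthetic data, so it only demonstrates that the fitting procedure recovers the slope, not the actual Hadcrut4 result:

```python
import numpy as np

# Illustrative fit of T = (TCR / ln 2) * ln(C / C0), under the assumption
# (as in the post) that temperature responds to the logarithm of CO2.
C0 = 285.0                            # assumed pre-industrial ppm
conc = np.linspace(315.0, 400.0, 50)  # Mauna Loa-era concentrations, ppm
tcr_true = 1.7                        # synthetic "true" TCR, deg C
anom = tcr_true / np.log(2) * np.log(conc / C0)

# A least-squares fit of anomaly against ln(C/C0) recovers the slope,
# and hence TCR = slope * ln 2.
slope, intercept = np.polyfit(np.log(conc / C0), anom, 1)
tcr_fit = slope * np.log(2)
print(round(tcr_fit, 2))
```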

The real intention of this plot seems to have been more political than scientific. It draws the viewer's eye to the scary red linear line and does not show any of the large model uncertainty highlighted elsewhere in WG1. Compare it, for example, with AR5 Figure 1.4 showing huge uncertainties. It is also incompatible with the report's own statements regarding climate sensitivity, especially TCR:

“ECS is likely between 1.5°C to 4.5°C (medium confidence) and extremely unlikely less than 1.0°C”

“This assessment concludes with high confidence that the transient climate response (TCR) is likely in the range 1°C to 2.5°C”

The TCR limits are shown overlaid on Figure 1. Both TCR and ECS limits are shown in figure 2.

In my opinion the graph is not scientifically correct because a) it hides model uncertainties and b) it portrays a linear dependence on carbon emissions whereas it should logically be logarithmic.

Update: Frank points out correctly that my fit to the Hadcrut4 data really only applies to the transient climate response (TCR). It is not clear whether the IPCC projections are for equilibrium temperature after 2100 or the transient temperatures at 2100. If the former, then my curve would rise by around 30%, but it would still be logarithmic.

Posted in AGW, Climate Change, climate science, Science

I posted the following comment on RealClimate today concerning the AR5 SPM statement “It is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together”. I also got a good response from Gavin Schmidt, which I appreciate. Here it is:

I can’t help feeling that there is a certain amount of circular argument going on here.
“Over-estimates of model sensitivity would be accounted for in the methodology (via a scaling factor of less than one), and indeed, a small over-estimate (by about 10%) is already factored in.”

It seems clear that each model is tuned to match past temperature trends through individual adjustments to external forcings, feedbacks and internal variability. Then the results from these tuned models are re-presented (via Figure 2 above) as giving strong evidence that nearly all observed warming is anthropogenic as predicted. How could it be anything else?

[Response: Your premise is not true, and so your conclusions do not follow. Despite endless repetition of these claims, models are *not* tuned on the trends over the 20th Century. They just aren't. And in this calculation it wouldn't even be relevant in any case, because the fingerprinting is done without reference to the scale of the response - just the pattern. That the scaling is close to 1 for the models is actually an independent validation of the model sensitivity. - gavin]

Despite this we then read in chapter 9 that:

“Almost all CMIP5 historical simulations do not reproduce the observed recent warming hiatus. There is medium confidence that the GMST trend difference between models and observations during 1998–2012 is to a substantial degree caused by internal variability, with possible contributions from forcing error and some CMIP5 models overestimating the response to increasing greenhouse-gas forcing.”

The AR5 explanation for the hiatus as given in chapter 9 is basically that about half of the pause is natural – a small reduction in TSI and more aerosols from volcanoes, while the other half is unknown – including perhaps oversensitivity in models.

[Response: When people say 'half/half' it is usually a sign that the analysis has not yet been fully worked out (which is this case here). - gavin]

Then on page 9.5 we read “There is very high confidence that the primary factor contributing to the spread in equilibrium climate sensitivity continues to be the cloud feedback.”

[Response: This is a separate issue and the statement is completely true. - gavin]

How much of this inter-model variability has actually all been hidden under the ANT term ?

[Response: The cloud feedback variation (and the consequent variation in TCR) goes into the analysis by producing different scalings for the different model responses. The mean scaling is about 0.9 (though there is presumably a spread), and that feeds directly into the uncertainty in the attribution. If all the models had the same sensitivity, the errors would be less. - gavin]

So I went back to chapter 10 of AR5 to try to understand what exactly the model hindcasts are saying. Gavin insists that the models are NOT tuned to match past temperature data. My conclusion is that he is right and the models themselves are not tuned. Instead the external forcings are tuned! Evidence for this can be seen in Figure 10.1, shown below.

Fig 10.1 from AR5

The key plot for me is d), which shows the total forcing for each model instance in the ensemble. They are ALL different. So each model has different values for GHG forcing and for natural forcings. How exactly are these determined? Do they all use the same basic data – CO2 levels, volcanic aerosols, natural forcing? If so, why are those models that grossly overshoot or undershoot the temperature data not simply rejected? Most other branches of physics end up with a standard model which simply works until eventually disproved.

So I am confused – unless the real intention is to increase the error bands on CMIP5 ensemble projections to cover uncertain natural variability into the future.

Update 13/10: I still haven’t been banned; I am only getting a small amount of abuse, and am still getting reasonable answers…

Hank Roberts says:

For Clive Best: You’ve been fed a line and swallowed it.

Remember the “trick” stories?

“Tuning” stories are often the same sort of deceit.
You fell for it.

“Tuning” means fitting the physics, not matching the past.

Read past the title of this post:

Thanks for that clarification Hank,

You say “Tuning” means fitting the physics, not matching the past. I agree that this is an honest procedure if normalization is done just once and then fixed in time. I also accept that such models indeed reproduce well the observed warming 1950 – 2000.

Maybe I am just being thick here – but please can you explain to me then why these normalized CMIP5 models end up with such different external forcings as shown for example in Fig 10.1 d) in AR5 ?

[Response: These are the effective forcings, not the climate responses. And the variations in the effective forcings are a function of mainly of the aerosol modules, the base climatology and how the indirect effects are paramterised. This diagnostic is driven ultimately by the aerosol emission inventories, but the distribution of aerosols and their effective forcing is a complicated function of the elements I listed above. They are different in different models because the aerosol models, base climatology (including winds and rainfall rates) and interactions with clouds are differently parameterised. - gavin]

Does this not reflect variations in climate sensitivity of the underlying physics between models ?

[Response: No. This is not the temperature response, though it does reflect differences in some aspects of the underlying physics (though not in any trivial way). - gavin]

If so how do we make progress to determine the optimum model ? Is it even possible to have one standard climate model ?

[Response: We don't. And there isn't. There is inherent uncertainty in modelling the system with finite computational capacity and imperfect theoretical understanding and we need to sample that. The CMIP ensemble is not a perfect design for doing so, but it does a reasonable job. Making predictions should be a function of those models simulations but also our ability to correct for biases, adjust for incompleteness and weight for skill where we can. Model variation will grow in the future as we sample more of that real uncertainty, but with better 'out-of-sample' tests and deeper understanding, predictions (and projections) may well get better. - gavin]

Posted in AGW, Climate Change, climate science, GCM, Science

## Day 2: Royal Society Meeting (ICCP-AR5)

The talks on the second day were very interesting and were mainly about the status of the science, future challenges and uncertainties. The Earth’s climate is just inherently complex. These notes are also partly for my own benefit!

1. Clouds : David Randall

Clouds are hard to model. They scatter and emit radiation. They also transport energy, moisture and momentum over large distances. Cloud models go back to the 1960s but have remained very much at the macro scale, and this is one of the main problems because cloud formation depends more on micro-physics. Cloud seeding comes from natural and anthropogenic aerosols, and models lack resolution at these scales.

Globally the net cooling effect of clouds is about -20 W/m2. The water content of clouds is 100 times smaller than the water vapour content of the atmosphere, yet their effect on climate is huge. Large convective rain clouds tend on average to be radiation neutral, while low clouds are strongly cooling and can be caused by sinking air from convective areas. Cirrus clouds have a net warming effect. CMIP5 models have an average positive cloud feedback of ~0.6 W/m2/°C. There are two arguments for a net positive feedback.

1. Anvil hypothesis: Tropical anvil convective clouds flatten off at a fixed temperature of 200K and so emit the same IR independent of surface temperature. Therefore as surface temperature rises, IR radiation doesn’t – a net positive feedback.
2. Low clouds diminish with warmer temperatures (low confidence).

It is entirely possible that cloud feedback is in fact negative and this is the largest current uncertainty of GCMs.
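To see why the sign of the cloud feedback matters so much, here is a back-of-envelope amplification calculation. It assumes the standard linear feedback framework and a Planck (no-feedback) response of ~3.2 W/m2/°C; these are textbook values, not numbers from the talk:

```python
# A positive cloud feedback f reduces the net feedback parameter and
# amplifies the no-feedback warming by lambda0 / (lambda0 - f).
LAMBDA_PLANCK = 3.2   # assumed Planck (no-feedback) response, W/m2/degC

def amplification(cloud_feedback):
    """Warming amplification factor relative to the no-feedback case."""
    return LAMBDA_PLANCK / (LAMBDA_PLANCK - cloud_feedback)

print(round(amplification(0.6), 2))    # the CMIP5 mean positive feedback
print(round(amplification(-0.6), 2))   # the negative-feedback possibility
```

Flipping the sign of a 0.6 W/m2/°C cloud feedback changes the warming by roughly a factor of 1.5, which is why this is the largest single uncertainty.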

2. Aerosols: Olivier Boucher

Aerosols have 3 main effects:

1. They scatter incoming solar radiation cooling the earth.
2. They (e.g. black carbon) absorb both incoming solar radiation and surface IR radiation
3. They help seed cloud formation – a net cooling effect.

Energy imbalance $Q = F -\lambda\Delta{T}$  where $\lambda$ is the climate feedback parameter.
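As a quick worked example of this energy-balance relation (with illustrative numbers: F = 3.7 W/m2 for doubled CO2 and an assumed net feedback parameter of 1.2 W/m2/°C):

```python
# The relation above, Q = F - lambda*dT, as a one-line check.
# F and lam below are illustrative assumptions, not values from the talk.

def imbalance(F, lam, dT):
    """Top-of-atmosphere energy imbalance Q (W/m2)."""
    return F - lam * dT

F, lam = 3.7, 1.2
dT_equilibrium = F / lam   # Q = 0 defines the equilibrium warming
print(round(dT_equilibrium, 2))
```

At equilibrium Q vanishes, so the eventual warming is just F divided by the net feedback parameter.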

Models trade off aerosols against climate sensitivity to match observed temperatures. Aerosols are essentially the tuning parameter that matches GCM hindcasts to previous surface temperatures.

Globally 20–40% of aerosol optical depth is of anthropogenic origin. Somewhere between 1/4 and 2/3 of cloud condensation is on nuclei concentrations of anthropogenic origin. The amount of black carbon emitted in Asia is underestimated but probably overestimated elsewhere. One question that occurred to me was whether the early 19th century temperature measurements in Europe were suppressed by the wide-scale burning of coal in both households and industry. Is some of the warming after 1956 not actually due to the clean air acts across Europe? Weather station data were concentrated in Europe and N. America before the early 20th century. All CMIP5 models use the same historic aerosol trends. Just how well are these trends really known?

There is no correlation of cloud cover with cosmic rays. The statement on the hiatus was that 50% of the pause can be explained by natural variation (1/3 volcanic, 2/3 solar). The other 50% is of unknown origin!

Aerosol-cloud interaction is estimated as a negative feedback of ~ -0.45 W/m2/°C. The total effective radiative forcing due to aerosols is ~ -0.9 W/m2.

3. Carbon + Geochemical Cycles: Corinne le Quere

CO2 is 40% above pre-industrial levels. An extra 180 Gtons of carbon has been added to the atmosphere. Sources of atmospheric CO2 emissions, which are currently running at 10 Gtons/year:

1. Deforestation = 2.9 Gtons/year; regrowth = 1.3 Gtons/year
2. Fossil fuel emissions are 2/3 of the rise, or 6.6 Gtons/year
3. Land use is 1/3 of the rise, or 3.3 Gtons/year

4. Weather extremes: Dennis Hartmann

The AR5 assessment plays down the risk of climate catastrophes. Statements about increases in extremes are based on:

• warming is small, ~0.6°C since 1950
• this corresponds to a ~4% change in saturated vapour pressure (C-C equation)
• making statements about changes in extremes is very difficult, but studies (based on the HADEX2 and HADGHCND datasets) show that warm nights have increased by 4.5 ± 0.9% and warm days by 3 ± 1.8%
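The ~4% figure quoted above follows directly from the Clausius-Clapeyron equation. A quick check, assuming textbook constants and a mean surface temperature of 288 K:

```python
import math

# Back-of-envelope check of "~4% per 0.6 degC" using Clausius-Clapeyron:
# d(ln es)/dT ~ L / (Rv * T^2), integrated between T0 and T0 + dT.
L_VAP = 2.5e6   # latent heat of vaporisation, J/kg
R_V = 461.5     # gas constant for water vapour, J/(kg K)
T0 = 288.0      # assumed mean surface temperature, K

def saturation_pressure_ratio(dT):
    """es(T0 + dT) / es(T0) from the integrated Clausius-Clapeyron equation."""
    return math.exp(L_VAP / R_V * (1.0 / T0 - 1.0 / (T0 + dT)))

change_pct = (saturation_pressure_ratio(0.6) - 1.0) * 100.0
print(round(change_pct, 1))
```

This is the familiar ~7% per degree of warming, applied to a 0.6 °C rise.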

Weak statements can be made about precipitation. In general wet areas get wetter and dry areas get drier. There is little evidence that droughts have got worse. There is also low confidence that flooding has got worse.

• Tropical Storms: No significant change
• Intensity of storms: No significant change.

5. Model confidence : Peter Stott

Very much a party line talk, this one. Humans are the “dominant” cause of warming since the mid-20th century. It is extremely likely (95% confidence) that more than HALF of the observed warming is anthropogenic. See table 10.1 for the justification of this statement. A summary of forcing causes is shown in Figure 10.5 below.

Fig 10.5 from AR5. ANT is the net anthropogenic forcing. I do not understand how the ANT errors get smaller after adding GHG and OA together !

Presumably then up to HALF of the observed warming could also be natural. He accepts that solar output and volcanic aerosols are forcing agents which can explain half the hiatus, but he dismisses natural internal variability (PDO, ENSO, AMO etc.) as being only of order ±0.1 W/m2.

Kevin Trenberth then asks the question: given the observed 15 year hiatus in global warming, how can natural forcings be just zero ±0.1? Why is ENSO not included in assessments?

Peter Stott gave a hand waving non-answer to this point. Has Trenberth become a skeptic ?

6. Circulation: Ted Shepherd

His point is that all GCMs really deal with is local energy balance. They do not really handle circulation at all and exhibit severe biases. The importance of dynamic circulation is the horizontal transport of moisture, energy and momentum. Understanding the dynamic circulation aspect of climate remains a severe limitation.

• North Atlantic Oscillation is natural, and so is other internal variability.
• Hadley cell has been widening but it narrows during el Nino.
• WCRP grand challenge on Clouds, Circulation and Climate Sensitivity.
• CMIP5 are not true ensembles and probability distribution functions do not apply

7. Paleoclimate models: Gavin Schmidt

CMIP5 models have been applied to understanding climates of the past. This means using the known orbital parameters, CO2, ice cover and sea level that were prevalent then.

2 examples:

• Tier 2: Mid-Holocene (6,000 years ago) and LGM (21,000 years ago)
• Tier 3: Last Millennium

He claims that overall they work well but there are model dependent divergences in Sahel, SW US etc.

However, he admits that none of the models are able to reproduce the onset of an Ice Age or predict the next Ice Age! They are static representations rather than dynamic.

8. Politics & Propaganda: John Ashton

Posted in AGW, Climate Change, climate science, Science

## “Next Steps in Climate Science” – Royal Society 2-3 Oct

The Royal Society is located just off Pall Mall in a glorious Nash terrace in one of London’s most exclusive areas. As we discuss the end of the world “as we know it” inside, Bentleys and Porsches glide effortlessly past the window outside. The latest IPCC assessment AR5 implies that cuts in emissions of 80% are needed to keep temperatures below 2C. However, several of the scientists had actually just arrived on transatlantic flights, and most are habitual globe trotters! Bob Ward and John Ashton were more interested in their battle of ideas, which they feel must be won in the public arena against their policy enemies. These enemies appear to be the old economic order, energy companies, bankers, reactionary politicians, right wing pressure groups and especially climate skeptics. They seem to be striving for some green revolution to overturn the “outdated” growth economics of the past and return to some low energy “sustainable” nirvana. To the bustling world outside, this message has about the same impact as a speaker at Hyde Park Corner screaming “repent now for the end of the world is nigh”! Even worse, their message comes over to the public more like “repent now and save the world 100 years after you are dead”!

Most scientists accept that we must move away from an over-reliance on fossil fuels. To get the message across to the public, however, you need to give a positive vision of the future, not a negative one. It’s no good preaching that we should abandon cars and quit taking overseas holidays, while simultaneously lobbying politicians to increase taxes to impose exactly that end. Instead we need a realistic plan of how to achieve a low carbon society that does not destroy living standards. Basically this means a plan to generate 50% more power than we do today from non-carbon sources, as efficiently as possible. For the UK this would mean a future generating capacity of at least 100 GW. I think there are only two solutions that could possibly achieve this.

1. Nuclear Power
2. Carbon Capture

Wind power can only ever provide a maximum average of ~10% of demand, because of its extremely low energy density. So we should be making a realistic plan and a vision of how a low cost, secure electric future could be achieved. Present this to the public in a positive way and then get on with implementing it. All this can be achieved by 2050 without bankrupting the economy, provided engineers and not activists are allowed to make the decisions.
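The ~10% figure can be sanity-checked with round numbers. All the inputs below are my assumptions (turbine count, a 2.5 MW average rating, a 28% capacity factor, ~38 GW average demand), and note this says nothing about intermittency, which is the real problem with wind:

```python
# Rough sketch of the "~10% of demand" claim, with assumed round numbers.
N_TURBINES = 5000
RATING_MW = 2.5          # assumed average rating per turbine
CAPACITY_FACTOR = 0.28   # assumed long-run average output / rated output
AVG_DEMAND_GW = 38.0     # assumed average UK demand

avg_output_gw = N_TURBINES * RATING_MW * CAPACITY_FACTOR / 1000.0
share = avg_output_gw / AVG_DEMAND_GW
print(f"{avg_output_gw:.1f} GW, {share:.0%} of average demand")
```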

The overall meeting was very well organised and the science sessions were fascinating also because they highlighted the areas of uncertainties that still exist simply because the Earth’s climate is so complex. It is only the politics which has become too polarized.

The first day: the overall review of the SPM by Thomas Stocker was, I suspect, a talk he had given to policy makers. There is a clear underlying IPCC-branded message that they feel obliged to portray to politicians. There was much talk about how to control the communication with the public. He diminished the importance of the recent hiatus, stating that it would need to last for 3 decades before it meant anything at all. Yet the evidence for AGW comes from just the 3 decades 1970–1999!

The hiatus in global warming itself was addressed by Jochem Marotzke in a very low key talk. The IPCC confirms that there has indeed been a pause in warming, one that is independent of cherry picking the start date. Are models able to reproduce the hiatus? Should they even be expected to do so, he asked? All the models essentially fail to explain the hiatus despite including natural variation; 111 of 114 simulations predict too-high trends.

Comparison of CMIP5 models and observed temperature trends.

In IPCC parlance this translates to : It is “extremely unlikely” that AR5 models can explain the hiatus in global warming (95% probability).

There has been a lot of discussion on Climate Audit about why AR5 dropped Fig 1.4 in the leaked draft which showed FAR,SAR, TAR and AR4 predictions significantly above the temperature measurements. Instead we got a washed out blended plot which seemed to include the hiatus data post 1998. Below is the real graph we should be studying showing direct comparison with CMIP5  model runs. The thick red curve is the multi-model mean which lies above the measurements.

In summary, the observed recent warming hiatus, defined as the reduction in GMST trend during 1998–2012 as compared to the trend during 1951–2012, is attributable in roughly equal measure to a cooling contribution from internal variability and a reduced trend in external forcing (expert judgment, medium confidence). The forcing trend reduction is primarily due to a negative forcing trend from both volcanic eruptions and the downward phase of the solar cycle. However, there is low confidence in quantifying the role of forcing trend in causing the hiatus, because of uncertainty in the magnitude of the volcanic forcing trend and low confidence in the aerosol forcing trend.

A detailed study of temperature data, he claimed, shows that the hiatus is concentrated in the winter months of the northern hemisphere. Gavin Schmidt proposed that soot emissions and aerosols from China may be to blame!

The consensus partial explanation for the hiatus is that it is half natural – solar cycle, aerosols & volcanoes – and half unknown. So in other words there is the basic assumption that the models are right and that if only nature didn’t occasionally screw things up for them the predictions would be just fine. But anyway for them this is irrelevant, as the pause will soon stop and warming will continue advancing according to model predictions. I suspect – and some floor comments noted – that the hiatus could well last another 10 years, partly due to the PDO.

Cryosphere:

The Arctic is losing ice. This year’s recovery was within the year-to-year variations but the trend is still downwards. It is expected that the Arctic will be ice free by 2050–2060.

Antarctica is losing some ice mass yet is actually gaining in sea ice extent. Any overall reduction is small.

Sea level rise will be ~60 cm by 2100. This was not enough for some of the audience, who wanted to use scarier figures measured in metres. Sea defence engineers and flood protection experts need best estimates of worst case values. Such worst case estimates would realistically be 90 cm by the end of this century. Others preferred to continue to use the AR4 figures of over 1–2 m. What are the causes of the rise in sea level? 50% is due to the thermal expansion of the oceans. Of the rest, glacier melting was twice as large as the contribution from the Arctic and Antarctic combined!

Day 2: Clouds (David Randall) – a good, informative talk… more in the next post.

Posted in AGW, Climate Change, climate science, Science

## Why wind threatens energy security

The IEA defines energy security as: “Energy security refers to the uninterrupted availability of energy sources at an affordable price.”

Energy security then requires that at every instant generating capacity meets demand; otherwise we have blackouts. Moving from fossil fuels to wind power threatens energy security because it increases the likelihood of a shortfall in supply. Such an event occurred last December (2012) at 5pm, when, with peak demand at 56 GW, all of the UK’s wind turbines delivered essentially nothing (<0.1 GW). Doubling the number of turbines to 10,000 would have made no difference whatsoever. Since then several coal stations have closed and energy security has diminished.

Detail of power demand and generation by different fuels. On December 12th at 5pm wind power dropped to 70 MW, or ~0.1% of demand. Gas and coal reserve capacity compensated. The risk of blackouts increases as fossil reserve capacity falls and wind increases.

The recent Channel 4 program “blackout” shows how quickly  society collapses during any extended blackout. DECC beware !

An over-reliance on wind power will increase the likelihood of blackouts. The only way out would be to store huge amounts of energy cost-effectively, and this is highly unlikely. There are just not enough mountains in the UK for pumped hydro storage.
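A rough sizing exercise shows why. Assuming a day-long lull removes ~20 GW of wind output, and taking ~9 GWh as the storage of a Dinorwig-class pumped-hydro station (both my round numbers):

```python
# How much pumped storage would riding out a day-long wind lull take?
SHORTFALL_GW = 20.0   # assumed missing wind output during the lull
DURATION_H = 24.0     # assumed length of the lull
DINORWIG_GWH = 9.0    # approximate storage of a Dinorwig-class station

energy_needed_gwh = SHORTFALL_GW * DURATION_H
stations_needed = energy_needed_gwh / DINORWIG_GWH
print(round(stations_needed))
```

Several dozen Dinorwigs, each needing its own mountain – which is the point about UK geography.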

Posted in Energy, renewables, Science, Technology, wind farms

## Peak power delivery to the UK

These graphs monitor the long-term contribution of different fuels to the daily peak energy needs of the UK. The monitoring started in mid August 2013. Huge subsidies are being paid by consumers to wind power companies and landowners, while simultaneously government policy penalizes coal through a carbon floor tax, pushing up energy costs. Despite all this, the lights would simply go out tomorrow without coal. All ~5000 UK wind turbines produced less energy in August 2013 than just the imported French nuclear power – but at 4 times the cost!

Notes:

1. Pump is hydro storage delivered during the day. This is greater than direct UK hydro power.
2. France is power supplied to the UK, up to a maximum of 2 GW, paid for at market rates. It is all nuclear generated.
3. Dutch is power supplied from Holland, up to a maximum of 1 GW.
4. Biomass and Solar power are negligible.

To place an updating sidebar on your site, simply add the following code:

<a href="http://clivebest.com/rgraph/fuel-summary.html"><img src="http://clivebest.com/rgraph/sum.png"></a>

Posted in nuclear, renewables, Technology, wind farms