“Even without man-made climate change we would expect the beginning of a new ice age no earlier than in 50,000 years from now”

The paper also claims that the Earth narrowly avoided the inception of a new ice age just before the Industrial Revolution, because CO2 levels were 40 ppm above a hypothetical inception threshold of 240 ppm. Humans were responsible for this (luckily, this time) through early deforestation and land-use change.

The current leading hypothesis is that ice ages initiate when summer insolation falls low enough that it fails to melt back the previous winter's snow. Ice then slowly accumulates, increasing albedo as the northern ice sheets grow. Two orbital effects drive this, and they interact with each other. Changes in the obliquity of the Earth's axis modulate summer insolation at both poles. Precession of the seasons, due to the precession of the Earth's axis, changes the timing of the summer solstice relative to perihelion. Because the Earth's orbit is elliptical, the two effects combined modulate the distance R to the Sun during the summer months. The strength of this precession 'forcing' is amplified at high eccentricity.
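Since insolation scales as 1/R², the contrast between a summer solstice falling at perihelion and one falling at aphelion grows rapidly with eccentricity. A minimal sketch of that amplification (the eccentricity values are illustrative, not taken from the text):

```python
import math

def solstice_insolation_ratio(e):
    """Ratio of top-of-atmosphere insolation when the summer solstice
    falls at perihelion (R = a(1-e)) versus at aphelion (R = a(1+e)).
    Insolation scales as 1/R^2, so the ratio is ((1+e)/(1-e))^2."""
    return ((1 + e) / (1 - e)) ** 2

# Present-day eccentricity versus a high-eccentricity epoch (illustrative)
for e in (0.0167, 0.05):
    print(f"e = {e}: perihelion/aphelion insolation ratio = "
          f"{solstice_insolation_ratio(e):.3f}")
```

At today's eccentricity the precession term moves summer insolation by only ~7%, but at e ≈ 0.05 the swing exceeds 20%, which is why precession forcing matters most at high eccentricity.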

Until about 800,000 years ago, ice ages followed the 41,000-year obliquity cycle. Low obliquity reduced summer insolation at the poles and glaciers expanded; high obliquity reversed this, initiating an interglacial. No-one knows for sure why this changed, but it is usually assumed that the ice sheets became too large for obliquity alone to melt them back. They then also needed the help of the precession term acting on the expanded northern ice sheets. However, this does not explain why these insolation minima only work once ice sheets have reached some critical size. One attractive explanation is CO2 starvation. CO2 levels in the atmosphere naturally increase with the onset of an interglacial as a result of warmer temperatures and an enhanced life cycle. These increases in CO2 act as a small positive feedback on temperatures.

Glaciations begin at high values of CO2, which in general then fall with reducing temperatures and reduced biosphere activity. However, during the Eemian, CO2 levels remained above 260 ppm for some 35,000 years into the last ice age. If CO2 plays any role in ice ages, it is a supporting role as a feedback.

So when would the next ice age naturally begin, had humans not burned any fossil fuels? The Anglian interglacial, some 400,000 years ago, had orbital eccentricity similar to that during the Holocene. The preceding glaciation was also very severe, like that preceding the Holocene.

The Anglian interglacial lasted about 25,000 years, roughly twice the average. Cooling initiated on a reducing obliquity coinciding with a northern summer insolation minimum. The Holocene interglacial has the hemispheric phasing inverted, but obliquity still follows almost the same pattern. The minimum which Ganopolski refers to as a close call for pre-industrial inception is really nothing of the sort, since obliquity was still too high. I believe cooling would naturally begin another glaciation within 10,000 years from now, as we approach minimum obliquity, and at the latest within 15,000 years. So will anthropogenic global warming really delay the onset of the next ice age for 100,000 years, as the authors argue?

Let’s assume that in the worst case we manage to double atmospheric CO2 levels before curbing carbon emissions (perhaps we have magic fusion reactors by then). Then, quoting an acknowledged expert in ocean carbon chemistry, David Archer:

“Dissolution into ocean water sequesters 70–80% of the CO2 release on a time scale of several hundred years. Chemical neutralization of CO2 by reaction with CaCO3 on the sea floor accounts for another 9–15% decrease in the atmospheric concentration on a time scale of 5.5–6.8 kyr. Reaction with CaCO3 on land accounts for another 3–8%, with a time scale of 8.2 kyr. The final equilibrium with CaCO3 leaves 7.5–8% of the CO2 release remaining in the atmosphere. The carbonate chemistry of the oceans in contact with CaCO3 will act to buffer atmospheric CO2 at this higher concentration until the entire fossil fuel CO2 release is consumed by weathering of basic igneous rocks on a time scale of 200 kyr.”

So after 15,000 years we end up with CO2 levels of 280 + 0.08 × 280 ≈ 302 ppm. The remaining anthropogenic CO2 forcing works out at only ~0.4 W/m2, whereas the drop in summer insolation over the Arctic between now and the next obliquity minimum is ~22 W/m2. That is over 50 times larger!
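This arithmetic can be checked with the widely used logarithmic approximation for CO2 forcing, ΔS = 5.35 ln(C/C0) W/m2 (a standard parameterisation, assumed here rather than quoted from the text):

```python
import math

def co2_forcing(c, c0=280.0):
    """Radiative forcing (W/m2) from a CO2 change, using the widely
    used logarithmic approximation dS = 5.35 * ln(C/C0)."""
    return 5.35 * math.log(c / c0)

residual = 280 + 0.08 * 280          # ~8% of the doubling remains airborne
print(f"Residual CO2: {residual:.0f} ppm")
print(f"Residual forcing: {co2_forcing(residual):.2f} W/m2")
print(f"Insolation drop / residual forcing: {22 / co2_forcing(residual):.0f}x")
```

The residual forcing comes out at ~0.41 W/m2, so the 22 W/m2 insolation drop is indeed more than 50 times larger.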

The glaciations which followed both the Eemian and the Anglian all began with CO2 levels well over 280 ppm. There is no reason to suppose that the Holocene will be any different, assuming that CO2 levels peak below 600 ppm this century.

Perhaps in 10,000 years time we will have learned to control the earth’s climate to the advantage of all life through managed CO2 emissions.

There again perhaps not.

**Analysis Method**

Equilibrium Climate Sensitivity (ECS) is defined as the increase in global temperature following a doubling of CO2, once the climate system has stabilised. Models can calculate ECS by running a step function in CO2 concentration, from say 280 ppm to 560 ppm, and then plotting how temperature responds with time. Each model gives a different value for ECS, and the spread in values is presented in AR5 as an estimate of the uncertainty. I am going to use one of the simplest models, GISS Model II, to investigate this lag in climate stabilisation, which is mainly caused by the thermal inertia of the oceans in response to increased forcing.

After roughly 100 years the climate reaches a new stable state, showing that GISS Model II gives a value for ECS of 4.4C. The red curve is a fit to the temperature response curve, which can be written in terms of temperature anomalies in the general form

ΔT(t) = ΔT_eq (1 − e^(−t/τ))

where ΔT_eq is the equilibrium temperature change and τ is the ocean response timescale.

In reality CO2 levels in the atmosphere have been growing slowly over the last 200 years by annual increments, as recorded since 1958 by the Mauna Loa data. The direct radiative forcing from increased CO2 has been calculated by radiative transfer codes. My derivation of this formula is described here. A more precise parameterisation of that forcing is the well-known formula

ΔS = 5.35 ln(C/C0) W/m2

where C0 is the initial CO2 concentration and C is the increased value. A doubling of CO2 alone gives a forcing of ~3.7 W/m2, which at equilibrium is balanced by a surface temperature rise of ~1.1C by applying Stefan-Boltzmann's law.

Ignoring feedbacks and using Stefan-Boltzmann alone therefore results in an ECS of ~1.1C. Higher values of ECS are due to net positive climate feedbacks, mainly from increased H2O. CMIP5 models give a large spread in predicted ECS values due to the different ways H2O and cloud feedbacks are handled. Can we measure ECS directly from the data?
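A back-of-envelope version of this no-feedback estimate: differentiating Stefan-Boltzmann (F = σT⁴) gives ΔT ≈ T·ΔF/(4F). The surface temperature of 288 K and absorbed flux of 240 W/m2 below are illustrative round numbers, not values from the text:

```python
def no_feedback_response(dF, T=288.0, F=240.0):
    """Planck-only warming from differentiating F = sigma*T^4:
    dT/T = dF/(4F), hence dT = T*dF/(4F)."""
    return T * dF / (4.0 * F)

print(f"No-feedback warming for 3.7 W/m2: {no_feedback_response(3.7):.2f} C")
```

With these numbers the doubling forcing of 3.7 W/m2 yields roughly 1.1C before any feedbacks, matching the figure quoted above.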

The problem with measuring ECS from the temperature data is that net forcing is increasing every year so we can never wait long enough for the climate to reach an equilibrium state. Given these constraints I adopt a different approach.

We treat the temperature record at any time as the response to the sum of previous discrete annual pulses of forcing. Each pulse causes a time dependent temperature response as shown in Figure 1. The resultant annual temperature for year n is then the integral of all previous responses up to that year.

Each pulse response is tracked through time and integrated to yield the overall instantaneous temperature at year N:

T(N) = Σ_{n=1}^{N} λ ΔS_n (1 − e^(−(N−n)/τ)) – Equation 1.

where ΔS_n is the forcing increment applied in year n, λ is the equilibrium temperature response per unit forcing, and τ is the ocean response timescale as before.
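The original analysis used a Perl script; a Python sketch of the same pulse integration looks like the following. The constant forcing series, λ and τ are placeholder values for illustration, not the AR5 data used in the analysis (λ ≈ ECS/3.7 with ECS = 2.5C):

```python
import math

def integrated_response(forcing_increments, lam, tau):
    """Sum the relaxed responses of all previous annual forcing pulses:
    T(N) = sum_n lam * dS_n * (1 - exp(-(N - n)/tau))."""
    temps = []
    for N in range(len(forcing_increments)):
        T = sum(lam * dS * (1.0 - math.exp(-(N - n) / tau))
                for n, dS in enumerate(forcing_increments[:N + 1]))
        temps.append(T)
    return temps

# Placeholder example: constant 0.03 W/m2/yr forcing increments for 50 years,
# lam = 2.5/3.7 ~ 0.675 C per W/m2, tau = 15 yr ocean response time
temps = integrated_response([0.03] * 50, lam=0.675, tau=15.0)
print(f"Temperature anomaly after 50 years: {temps[-1]:.2f} C")
```

Each annual pulse relaxes towards its equilibrium value, so the current anomaly always lags the anomaly implied by the accumulated forcing, which is the crux of the method.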

This procedure can then be repeated for various possible values of ECS and compared directly to the temperature data. Rather than using the CO2 forcing directly, we use the 'Total Anthropogenic' AR5 forcing data as shown below, which turns out to be almost the same thing.

The equilibrium temperature response to an incremental forcing ΔS is scaled by a feedback parameter f, which is calculated from each possible value of ECS, where 3.7 W/m2 is the direct forcing due to a doubling of CO2. This allows us to calculate the feedback parameter f corresponding to a particular value of ECS, and then to use f to calculate the impulse forcing response. The resultant values of f are as follows.

| ECS (C) | f |
|---------|------|
| 1.5 | 1.03 |
| 2.0 | 1.65 |
| 2.5 | 2.02 |
| 3.0 | 2.26 |
| 3.5 | 2.44 |
| 4.0 | 2.57 |

A Perl script was written to integrate past temperature responses forward into a predicted annual temperature for various values of ECS by applying Equation 1. The results are compared to the annual HadCRUT4.6 values.

It is instructive to look in more detail at the recent data as it then becomes obvious that high values and very low values of ECS are ruled out.

The best fit to the observed temperature distribution using this method is ECS = 2.5C. High values above 3.0C and very low values below 2.0C are ruled out. So my best estimate is

ECS = 2.5 ± 0.5C (95% probability)

The error is based on post-2000 temperature values. ECS = 2.0 falls within the data point errors (0.05C) for just 12% of points, while ECS = 3.0 falls within them for 24%, compared with 84% for ECS = 2.5. Both ECS = 2.0 and ECS = 3.0 are about 2 sigma from the mean shown in black. By 2017, ECS = 2.0 lies 0.15C below the mean and ECS = 3.0 lies 0.26C above it. Therefore I estimate a 95% probability that ECS lies within this range.

If climate sensitivity is 2.5C, then global temperatures can never rise more than 2.5C above pre-industrial levels so long as CO2 levels are kept below 560 ppm. This is a far more achievable goal than many activists are calling for, since it requires only gradual reductions in CO2 emissions by 2100. This gives us time to develop realistic alternatives, which I am convinced must have a strong nuclear base.

**References:**

Forster, P., et al. (2013), Evaluating adjusted forcing and model spread for historical and future scenarios in the CMIP5 generation of climate models, J. Geophys. Res., 118, 1139–1150.

A noticeable La Nina type signal is evident in November.

| Method | Anomaly |
|--------|---------|
| Classical (5×5° lat, lon grid) | 0.57C |
| Icosahedral (2562 node, 3D triangular grid) | 0.66C |
| Spherical Triangulation (3D) | 0.72C |

The 3D methods tend to give higher values for recent years and accentuate the peaks and troughs. The reason for this is the way polar regions are handled. Spherical triangulation essentially interpolates nearby stations right across polar regions by forming large triangular areas. Icosahedral binning however treats polar regions identically to any other region on the earth’s surface.

I discovered earlier that the Spherical Triangulation method gives almost exactly the same result as Cowtan and Way, who use kriging to extrapolate into polar regions. Both methods essentially interpolate temperatures into poorly covered regions. Icosahedral binning avoids any interpolation, while treating polar regions in an unbiased way. It should really be compared to the official HadCRUT4.6 version, which is based on a regular 5° lat, lon grid.

It is often argued that HadCRUT4 has a coverage bias in the Arctic because its fixed lat, lon bins shrink in area with latitude (as cos(lat)). However, all the other groups who attempt to correct for this interpolate results, or smoothed fits, into regions without any measurements. Rightly or wrongly, HadCRUT4 remains the only purely measurement-based result. I argue that the methodology would be improved by moving to icosahedral 3D binning rather than sticking to 2D binning. This would still retain the advantage of being a measurement-only result.
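The latitude bias shows up directly in the bin areas. A short sketch of the standard cos(lat) area weighting for a regular 5° grid:

```python
import math

def band_weight(lat_deg):
    """Relative area of a fixed lat, lon grid cell, proportional to
    cos(latitude) evaluated at the cell centre."""
    return math.cos(math.radians(lat_deg))

# A 5x5 deg cell centred at 87.5N is ~23x smaller than one at the equator
print(f"Cell at 2.5N : {band_weight(2.5):.3f}")
print(f"Cell at 87.5N: {band_weight(87.5):.3f}")
print(f"Ratio: {band_weight(2.5) / band_weight(87.5):.1f}")
```

This is why unweighted polar cells can bias a 2D-grid average, and why equal-area 3D binning sidesteps the problem entirely.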


What is interesting is that at each grid node the six neighbouring triangles join together to form nearly equal-area hexagons. In this way you can also view the grid as a hexagonal tessellation of the sphere. Scattered over the surface of the sphere are temperature measurements, each of which lies within a single triangle and within three neighbouring hexagons.

The averaging method I am currently using takes each node on the grid and averages together all station temperature anomalies that are contained within the hexagon centred on that particular node. This results in an average temperature for each hexagon centred on each grid node. But of course these hexagons overlap with each other so that stations lying within one triangle contribute to the 3 neighbouring hexagons of which they form part. As a result there is a near neighbour averaging taking place at the same time. The example in Figure 2 shows a colour blending of 3 overlapping hexagons with primary blue, yellow and red shading. The overlapping triangles are colour mixed.
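The averaging step above can be sketched in a few lines of Python. This is a simplified illustration, not the author's code: it approximates "the three hexagons containing a station" by the station's three nearest grid nodes, and the node and station coordinates are toy values:

```python
import math

def unit_vector(lat, lon):
    """Convert lat/lon in degrees to a 3D unit vector on the sphere."""
    la, lo = math.radians(lat), math.radians(lon)
    return (math.cos(la) * math.cos(lo), math.cos(la) * math.sin(lo), math.sin(la))

def hexagon_averages(nodes, stations):
    """For each grid node, average the anomalies of all stations that count
    it among their three nearest nodes - i.e. each station contributes to
    the three overlapping hexagons of which its triangle forms part."""
    sums = [0.0] * len(nodes)
    counts = [0] * len(nodes)
    for (lat, lon, anomaly) in stations:
        v = unit_vector(lat, lon)
        # rank nodes by angular closeness (largest dot product), take top 3
        nearest = sorted(range(len(nodes)),
                         key=lambda i: -sum(a * b for a, b in zip(v, nodes[i])))[:3]
        for i in nearest:
            sums[i] += anomaly
            counts[i] += 1
    return [s / c if c else None for s, c in zip(sums, counts)]

# Toy example: four nodes, two stations (illustrative coordinates only)
nodes = [unit_vector(0, 0), unit_vector(0, 10), unit_vector(10, 5),
         unit_vector(-60, 120)]
stations = [(3, 4, 0.5), (2, 6, 0.7)]
print(hexagon_averages(nodes, stations))
```

Because every station feeds the three surrounding nodes, neighbouring hexagon averages share stations, producing exactly the near-neighbour smoothing described above.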

This effect can also be seen in the rendering of the October 2017 results. Note however that IDL reflects the hexagonal structure differently in the shading. It doesn’t blend each overlapping triangle but rather divides the shading contributions. White triangular areas signify no data coverage contribution.

Can we do better than this simply by increasing the grid density? Each additional level increases the number of nodes by a factor of 4. So a level 5 grid contains 10242 node points, also adding a factor 4 increase in computation time. The results from level 4 and level 5 grids are compared in Figure 4.
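The node counts follow from each subdivision splitting every triangle into four; for an icosahedron-based grid the standard closed form is 10·4^level + 2 nodes, which reproduces the 2562 figure quoted above:

```python
def icosahedral_nodes(level):
    """Grid nodes after `level` subdivisions of an icosahedron: faces grow
    as 20*4^level, edges as 30*4^level, and Euler's formula V = E - F + 2
    gives 10*4^level + 2 vertices."""
    return 10 * 4 ** level + 2

print([icosahedral_nodes(n) for n in range(6)])
# -> [12, 42, 162, 642, 2562, 10242]
```

So a level 4 grid has 2562 nodes and a level 5 grid has 10242, each step quadrupling both the node count and the computation time.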

The smaller grid cells work fine in the US, where station density is very high, but elsewhere many empty grid cells start to appear. In addition, there is a limiting factor on the ocean temperature data which also causes empty cells: HadSST3 has already been pre-binned onto a 5×5° lat, lon grid. Mainly for this reason I have decided to limit the grid to level 4.

Figure 5 compares three different integration methods to calculate monthly temperature averages: 1) Spherical Triangulation, 2) Icosahedral binning, 3) Regular 5° Lat, Lon binning. In general the results match up very well. The main difference is how the polar regions are handled.

Spherical Triangulation covers the poles by connecting large-area triangles, while icosahedral grids avoid any latitude binning bias. However, it is encouraging that three very different methods of averaging global temperature data give very similar results.


The Pacific releases vast amounts of acquired heat to the atmosphere roughly every 18 years as water sloshes back from Indonesia. If anything 1997 looks the stronger of the two.

Here is an animation of the period January 1997 to October 2017

It shows just how small warming effects are on a local level compared to natural variation. I may get round to updating it to 2017.

The global V3C/HadSST3 average is 0.69C, which is up from the HadCRUT4.6 value for September (0.54C). The CRUTEM4 station data for October are not yet available. However, any discrepancy will likely be rather small, as demonstrated below.

The annual average for 2017 based on the first 10 months is 0.74C which would make 2017 cooler than 2016 and roughly the same temperature as 2015.

I’ll calculate the HadCRUT4.6 equivalent value when the station data become available.

**Update:** As requested – Here is the temperature distribution viewed from the Southern Indian Ocean.

The temperature distribution in the Northern Hemisphere for September is shown below. All temperatures are relative to a 30 year norm from 1961-1990.

and for the Southern Hemisphere

which shows a La Nina type pattern.

The brine is pumped up into huge evaporation ponds on the western edge of the Salar. The residue is processed into lithium carbonate for export, mainly to China. 30% of all phone, camera and electric car batteries originate from here. Lithium is the lightest metal and is easily ionised, which is what drives the anode/cathode voltage on charge and discharge. Optimisation of electrolytes, cathode and anode materials has led to the modern lithium-ion batteries used in cell phones and electric cars.

Recently Britain and France pledged to ban the sale of all petrol and diesel cars after 2040. This implies that, if the world follows their lead, somehow billions of new electric vehicles will have to be produced to replace them. There are two main problems with this idea. Firstly, is there enough lithium in the world to produce so many batteries? A recent estimate from Stanford University says this is not currently feasible.

The second problem is how to generate the estimated 30% more electrical energy needed to charge all these batteries. Generally speaking, cars are needed during the day, so they really have to be charged overnight. Only nuclear energy can really fit the bill, because wind and solar would themselves need battery storage to overcome intermittency, which would defeat the object.

There is also an environmental problem with lithium mining in the Salar. Mining companies have an agreement with the local indigenous people, who 'own' the Salar, just to 'extract' water. However, if too much brine is pumped to the surface and evaporated, the Salar itself could become unstable, also threatening the flamingo reserve.
