## Oxygen – provider of life.

Why does the air contain oxygen, and where did it all originally come from? A new book, ‘OXYGEN’ by Donald Canfield, helps explain. No animal life, from insects to fish to humans, could have evolved or survived before oxygen first appeared in the atmosphere and dissolved in the oceans. Oxygen also protects animal life from damaging solar UV radiation by forming ozone in the stratosphere. The build-up of oxygen in the atmosphere eventually resulted in the Cambrian explosion of plant and animal life. This remarkable breakthrough was preceded by the evolution of cyanobacteria, which finally cracked photosynthesis – the complex process, powered by solar photons, that uses H2O and CO2 to produce carbon compounds for cells while expelling O2 as a by-product. The carbon-fixing enzyme Rubisco is ultimately responsible for all food and fossil fuels on Earth and for the evolution of all multi-cellular organisms.

The complex biochemistry of photosynthesis evolved in cyanobacteria from two earlier “anoxygenic” processes in iron- or sulfur-rich environments. The cyanobacteria revolution was important because the only ingredients now needed were water, air and sunlight, so they rapidly spread all around the world, and especially across the oceans. Suddenly oxygen was being generated by photosynthesis, and its imprint is clearly seen in the geological record beginning with the Great Oxidation about 2.3 billion years ago. However, the long-term build-up of oxygen in the atmosphere is a very subtle effect that is still not fully understood. Essentially 99.5% of the emitted oxygen is subsequently reabsorbed, but its relationship to CO2 levels is particularly interesting. Some striking facts about photosynthesis that need to be explained are the following.

1. Current levels of photosynthesis on Earth would deplete all CO2 in the atmosphere in just 9 years.
2. Photosynthesis in the oceans would deplete all available phosphorus needed by aquatic plants and algae in just 86 years.

Most of the CO2 absorbed by plants is soon returned to the atmosphere when they die or are eaten by animals; only a tiny amount of carbon is buried in sediments. Even including this recycling effect, CO2 depletion of the atmosphere still takes a mere 13,000 years while phosphorus depletion takes only 29,000 years. So what are we doing wrong?
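As a rough sanity check on these timescales, here is a back-of-envelope residence-time calculation. The reservoir and flux numbers below are illustrative round values, not Canfield's figures.

```python
# Rough residence-time estimates for atmospheric carbon. All numbers are
# illustrative round values (assumed), chosen only to show the arithmetic.

ATM_CARBON_GT = 750.0         # carbon held in the atmosphere as CO2 (GtC, assumed)
GROSS_PHOTOSYNTHESIS = 120.0  # global gross carbon uptake (GtC/yr, assumed)
BURIAL_FRACTION = 0.0005      # tiny fraction of fixed carbon escaping recycling (assumed)

# If nothing were recycled, photosynthesis would empty the atmosphere in:
no_recycling_years = ATM_CARBON_GT / GROSS_PHOTOSYNTHESIS

# With recycling, only the buried fraction is a permanent sink:
with_recycling_years = ATM_CARBON_GT / (GROSS_PHOTOSYNTHESIS * BURIAL_FRACTION)

print(f"without recycling: ~{no_recycling_years:.0f} years")
print(f"with recycling:    ~{with_recycling_years:,.0f} years")
```

With these assumed round numbers the two timescales come out at a few years and roughly ten thousand years, the same orders of magnitude as the figures quoted above.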

The incredible story is that these trapped sediments are not lost from the environment forever, because plate tectonics recycles material over very long timescales. Subduction, mountain building and sea-level change continuously re-expose the raw materials for life through weathering. Plate tectonics is essential to recycle the raw materials for life on Earth!

CO2 re-enters the atmosphere from the mantle through out-gassing from volcanoes and through deep ocean vents near mid-ocean ridges. CO2 is removed from the atmosphere by weathering, thanks to the abundance of water on the Earth. Such weathering does not happen, for example, on Venus. This ‘natural’ carbon cycle essentially controls the temperature on Earth because weathering by liquid water is a temperature-dependent phenomenon.

The total oxygen content of the atmosphere is equivalent to the total carbon buried in the sediments, which results in the current 21% oxygen level. The total CO2 content of the atmosphere is instead fine-tuned to the temperature of the Earth. This is roughly how it seems to work.

Diagram of carbon balance of CO2 in atmosphere and plate tectonics on earth

CO2 enters the atmosphere from volcanoes and under-sea vents driven by plate tectonics, which is itself the result of convection powered by the Earth’s hot interior. CO2 exits the atmosphere through weathering of rocks, driven by the climate of the Earth and the oceans. If the atmosphere gets too hot, the rate of weathering increases and CO2 is removed from the atmosphere. If temperatures get too cold, weathering slows down, causing a build-up of CO2 in the atmosphere. This eventually increases temperatures back to their optimum level, which coincides with liquid oceans. CO2 acts as a negative feedback to keep the climate stable and the oceans cool and liquid. This may be the primary explanation for the faint sun paradox, although the water cycle of evaporation and cloud formation must also play a stabilizing role.
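The feedback loop just described can be sketched as a toy model. Every coefficient below is invented purely for illustration; only the sign of the feedback (warm → faster weathering → less CO2 → cool) is taken from the argument above.

```python
import math

# Toy sketch of the weathering thermostat. All coefficients are assumed,
# illustrative values; the point is the negative feedback, not the numbers.

VOLCANIC_IN = 0.01                 # CO2 outgassing, ppm/yr (assumed constant)

def weathering_out(temp_k):
    """Assumed weathering law: CO2 removal speeds up sharply when warm."""
    return 0.01 * (temp_k / 288.0) ** 4        # ppm/yr

def temperature(co2_ppm):
    """Toy radiative response: warmer with more CO2 (assumed log strength)."""
    return 288.0 + 3.0 * math.log(co2_ppm / 300.0)

co2 = 600.0                        # start with double the equilibrium CO2
for _ in range(10_000):            # 1000-year steps -> 10 million years
    co2 += (VOLCANIC_IN - weathering_out(temperature(co2))) * 1000.0

print(f"settles near T = {temperature(co2):.1f} K, CO2 = {co2:.0f} ppm")
```

Wherever it starts, the model relaxes back to the equilibrium where weathering exactly balances outgassing, which is the essence of the thermostat.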

Once photosynthesis and multi-cellular life evolved, this balance of CO2 levels must have shifted, coincident with the rise in O2. Firstly, it now became possible to bury large quantities of biotic carbon underground, including fossil fuels. Secondly, there was enhanced formation of sedimentary rocks through the compression of dead sea creatures. Thirdly, the large amount of oxygen in the atmosphere now led to the formation of ozone in the upper atmosphere, which is also a strong greenhouse gas. The explosion in plant and animal life, coincident with a huge rise in oxygen content, occurred during the Cambrian period about 540 million years ago. Since then the optimum temperature set by the CO2 climate control has shifted, because the carbon cycle now includes life.

Current interglacial climate conditions on Earth have an average surface temperature of ~288K with a stable CO2 concentration of around 300 ppm, which is nearly 1000 times less than the oxygen content of the atmosphere. Can we understand how such a very low value of CO2 could arise from natural-bio temperature control?

About a year ago I wrote a simple model of radiative transfer from the surface to space covering the dominant CO2 15 micron band. I found that for a surface temperature of 288K the maximum radiative loss of heat in the atmosphere occurs at exactly 300 ppm. I cannot believe this is a coincidence. CO2 levels are fine-tuned such that the atmosphere cools at its maximum rate, in accordance with the second law of thermodynamics.

Fig 3: Change in atmospheric OLR with CO2 concentration for a fixed surface temperature of 288K.

What is this result really saying? The thermostat that controls the temperature of the Earth is a mix of geochemistry and life. The Earth lies within the habitable zone of our sun and has maintained liquid oceans for 4.5 billion years while the sun brightened. Initially the natural carbon balance between the weathering sinks, due to both water and CO2, was matched with volcanic emissions from the Earth’s crust so as to maintain the oceans, thanks to the Earth’s internal energy. The evolution of photosynthesis then caused an explosion of life across the surface of the Earth, releasing huge amounts of free oxygen while bio-carbon was drawn out of the atmosphere and buried in the ground. Life today needs carbon, water, oxygen and some essential minerals like phosphorus. If we assume a constant rate of CO2 venting from volcanoes and under-sea vents, then we can deduce the following. Too little CO2 in the atmosphere and cooler temperatures cause both photosynthesis and the weathering of rocks to slow down, thereby allowing CO2 levels to increase and the climate to warm. Too much CO2 and consequent higher temperatures lead to an increase in the long-term weathering of rocks, with a consequent fall in CO2 levels causing the planet to cool. Underpinning everything is plate tectonics, because carbon and the raw materials for life must eventually be recycled to maintain the balance. This balance will continue to stabilize the climate so long as the Earth’s internal energy continues to drive plate tectonics. Life is therefore probably safe for another billion years.


## Does the moon affect the Jet Stream ?

Evidence is presented showing that lunar tides do have a significant influence on the strength and position of the polar Jet Stream. The Jet Stream drives northern latitude weather patterns especially during winter months.

I was intrigued by a proposal made by an Italian meteorologist, Roberto Madrigali, that varying tidal forces during the lunar cycle change the position of meanders (Rossby waves) in the polar Jet Stream. He has also written a book on the subject. This winter saw stormy but mild weather in the UK with exceptionally cold weather over North America; both were likely caused by large distortions in the Jet Stream. Does the moon’s ever-changing tidal force affect Rossby gravity waves in the polar Jet Stream? The hypothesis, put simply, is that atmospheric tides acting just below the stratosphere affect the flow of Jet Streams. Increased tidal forces pull the Jet Stream to lower latitudes, thereby inducing the mixing of polar air with tropical air. The result of such forcing is an increase in waves and oscillations in the Jet Stream and lower pressure differences.

Jet Stream animation

One indicator of how contorted the Jet Stream becomes is the measurement of the difference in pressure between the Icelandic Low and the Azores High. There are two indices used to do this–one called the Arctic Oscillation (AO), which treats the flow over the entire Northern Hemisphere, and another called the North Atlantic Oscillation (NAO), which covers just the North Atlantic. The two are closely related. When these indices are strongly negative, the pressure difference between the Icelandic Low and the Azores High is low. This results in a weaker Jet Stream with large, meandering loops, allowing cold air to spill far  from the Arctic into the mid-latitudes.

Schematic of Arctic Oscillation

Severe UK winters such as those in 2009/2010 and 2010/2011 coincided with strongly negative values of the AO/NAO, whereas the mild but stormy winter of 2013/14 coincided with strongly positive values. The Jet Stream’s influence on Europe is stronger during the winter. This is also the time when solar heating of the atmosphere is diminished over the Arctic, so any possible lunar tidal effect will be enhanced.

To investigate further the hypothesis of a lunar influence on the Jet Stream I downloaded the AO and NAO data from NOAA and calculated the net tides for each day from 2000 until 2014 using the JPL ephemeris. I based these calculations on the formulae derived in Understanding Tides. Figure 1 shows the AO and NAO during this interval plotted together with the net solar/lunar tidal acceleration.

Arctic Oscillation (AO) and North Atlantic Oscillation (NAO) indices. Shown in blue is the net lunar-solar tidal acceleration from 2001 until 2014.

In general the AO and NAO agree with each other, so from here on we concentrate on the AO data. The period of oscillation of both is indeed similar to the monthly change in lunar tides, but no obvious cause and effect stands out at this level. We now look in more detail at two years of data: 2003-2004.

AO compared to net tides and the lunar tractional component at 45°, 2002 until 2005. The lower green curve is the tractional component of the tides (see below).

Detailed look at the last 2 years of AO data

Intriguingly, the oscillation period of the Jet Stream is also around one month. However, there is only a small 3% anti-correlation between net tidal forces and the AO. This gives just a hint of a tendency for the AO to go negative when tidal forces are large. The largest apparent “atmospheric tides” are those due to solar diurnal expansion, which are not gravitational tides. During northern winters such solar heating is diminished over the Arctic, so winters are the period when lunar gravitational tides could be expected to play more of a role in the positioning and strength of the Jet Stream, if such an effect exists at all.
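For reference, a correlation this weak is quantified by the Pearson coefficient. The series below are synthetic stand-ins (not the NOAA data) with a deliberately weak coupling built in, and the 1/√N noise floor shows why a 3% value can only be called a hint.

```python
import numpy as np

# Synthetic stand-ins: a ~monthly tidal cycle and a mostly-noise "AO" series
# with a deliberately weak (3%) coupling. NOT the real NOAA or tidal data.
rng = np.random.default_rng(42)
days = np.arange(365)

tide = np.sin(2 * np.pi * days / 29.5)
ao = -0.03 * tide + rng.standard_normal(days.size)

r = np.corrcoef(tide, ao)[0, 1]          # Pearson correlation coefficient
noise_floor = 1 / np.sqrt(days.size)     # ~sampling error on r for pure noise

print(f"r = {r:+.3f}, noise floor ~ {noise_floor:.3f}")
```

With only a year of daily data the sampling error on r is about 0.05, comparable to the 3% signal itself, so longer records or a physical mechanism are needed before the anti-correlation can be taken seriously.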

I therefore decided to look more closely into the latitude dependence of the lunar tidal forces and in particular at the horizontal component of the tide or traction. It is this component which can move water and air long distances during the tidal cycle. Its strength varies during the lunar month and during the 18.6 year precession cycle. I calculated the tractional force/unit mass or acceleration for 60N and 30N which covers the range in latitudes typical of the Jet Stream.
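As an order-of-magnitude check on these tractional accelerations, the standard first-order tidal expansion gives a horizontal component of (3/2)(GMR/d³)·sin(2z), where z is the Moon's zenith angle at the observer. The constants below are textbook values.

```python
import math

# Order-of-magnitude estimate of the lunar tractional (horizontal) tidal
# acceleration, using the standard first-order tidal expansion.

G = 6.674e-11        # gravitational constant (m^3 kg^-1 s^-2)
M_MOON = 7.342e22    # lunar mass (kg)
R_EARTH = 6.371e6    # Earth radius (m)
D_MOON = 3.844e8     # mean Earth-Moon distance (m)

def traction(zenith_deg):
    """Horizontal tidal acceleration (m/s^2) at lunar zenith angle z."""
    return (1.5 * G * M_MOON * R_EARTH / D_MOON**3
            * math.sin(math.radians(2 * zenith_deg)))

peak = traction(45.0)   # the horizontal component peaks at z = 45 degrees
print(f"peak traction ~ {peak:.2e} m/s^2 (~{peak / 9.81:.1e} g)")
```

The peak value is of order 10⁻⁷ of surface gravity, which is why any influence on the atmosphere has to act through sustained, repeated forcing rather than brute strength.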

AO compared to the tractional component of tides for latitudes 60°N and 30°N. Note how the range of forces varies with the changing moon declination after the maximum in 2007.

What is particularly interesting here is the dependence of the tidal force on the 18.6-year precession cycle of the lunar orbit. The major standstill of the moon was around 2006-2007, when the declination angle reached a maximum of 28.6°. Since then it has been declining and is currently around its minimum value of 19.5°. A high declination angle actually reduces the tractional force, most evident in the 30°N value shown in green above. There have been several papers reporting a link between the 18.6-year cycle and droughts across the US and central Asia. If the moon really is affecting the Jet Stream then this could be the explanation.

Winters 2010 and 2014

Now we look in detail at two winter periods, starting with the harsh UK winter of 2009/2010, which corresponded to low values of the AO. Is there any evidence for a lunar tidal effect on the Jet Stream in winter?

Winter 2010. Clear causal relationship between tidal traction 60N and the AO.

I find this result remarkable. It shows clear evidence of a relationship between the strength of the lunar tidal force and the positioning and strength of the Jet Stream. There is an underlying anti-correlation showing a reduction in the AO with tractional force at 60N, even matching some of the details. There must also be a stochastic component, so the agreement cannot be expected to be exact. Finally we look at last winter, 2013/14, which saw the Jet Stream meander further south, bringing storms to the UK. Since the meanders of the Jet Stream reached down to lower latitudes, I calculated the tractional forces for 45N. The results are shown below.

Winter 2013/14 AO compared to tidal tractional acceleration at 45°N

Again there is clear evidence that high-latitude lunar tides modulated the Jet Stream. The severe storms in the UK caused coastal flooding because they coincided with high spring tides. I now suspect that those high spring tides may also have been a major cause of these very storms, by perturbing the Jet Stream!


## The atmosphere

The earth’s atmosphere keeps the planet warmer than it would otherwise be. This is demonstrated by the much colder surface temperature of the moon, which lies at the same distance from the sun but has no atmosphere. This warming effect is called the greenhouse effect, but few people really understand what it actually means or how it works. There is so much contradictory information on the internet that I wanted to try to explain in words, rather than equations, how I see the greenhouse effect working on earth.

Our atmosphere consists of roughly 78% nitrogen, 21% oxygen, 0.04% CO2 and ~0.25% H2O (water vapour). H2O is concentrated in the lower atmosphere and varies on a daily basis, whereas the other components are nearly constant throughout the atmosphere; they are said to be “well mixed”. CO2 has been increasing due to human activity, mainly since the industrial revolution. So far CO2 levels have risen from about 300 ppm to 400 ppm since 1750, and may reach 600 ppm by 2100.

Density structure of the atmosphere
The earth’s gravity keeps the atmosphere bound to the planet. Despite this there is still a tiny loss of molecules to space in the upper reaches of the atmosphere; for this reason all the light hydrogen molecules originally in the atmosphere have already escaped to space. Gravity creates an exponential pressure gradient, with the highest pressure at sea level: 15 lb/sq inch (1013 hPa). As you move upwards, for example when driving up a mountain, the air pressure decreases and your ears begin to pop. This change in “hydrostatic pressure” and the exponential distribution of pressure with height are easy to derive.

Pressure changes with height as dP/dz = -ρg, where ρ is density and g is gravity. This simply says that the change in pressure (force per unit area) on moving up by a height dz equals the weight per unit area of the slab of air between z and z+dz. Applying the ideal gas law, ρ = PM/RT, where M is the molar mass of air and R the gas constant, we get:

$P = P_0 e^{-\frac{Mgz}{RT}}$ where T is the ‘temperature’ of the atmosphere
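Putting round numbers into the exponential pressure law P(z) = P₀·exp(−Mgz/RT), with M the molar mass of air and an assumed representative mean temperature, gives a useful feel for the scale height:

```python
import math

# Numerical sketch of the barometric formula with round-number values.

P0 = 1013.0    # sea-level pressure (hPa)
M = 0.029      # molar mass of air (kg/mol)
g = 9.81       # gravitational acceleration (m/s^2)
R = 8.314      # gas constant (J/(mol K))
T = 255.0      # representative mean atmospheric temperature (K, assumed)

def pressure(z_m):
    """Pressure (hPa) at altitude z for an isothermal atmosphere."""
    return P0 * math.exp(-M * g * z_m / (R * T))

scale_height = R * T / (M * g)    # the e-folding height of the atmosphere
print(f"scale height ~ {scale_height / 1000:.1f} km")
print(f"P at 5.5 km  ~ {pressure(5500):.0f} hPa")
```

The scale height comes out around 7.5 km, and the pressure at about 5.5 km is roughly half its sea-level value, matching everyday experience of mountain altitudes.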

Temperature structure of the atmosphere
When a gas expands it cools. This is because temperature is just the average kinetic energy of the molecules in a volume, so the gas loses internal energy by expanding into a larger volume. We know this intuitively because compressing air to pump up a bicycle tyre warms the inner tube. So as air rises in the atmosphere to lower pressure it cools, and likewise as it descends to higher pressure it warms up. Gravity maintains the pressure difference with height in the atmosphere, but it is the motion of air masses (convection) which sets up a temperature gradient so that it is warmer near the surface and cooler higher up. As we shall see, this does not happen by itself: you need the energy of the sun to drive that convection and you also need greenhouse gases. Simple thermodynamics shows that there is a perfect balance between convection and temperature gradient called the adiabatic lapse rate. When the atmosphere is stable the air is still and the temperature falls off exactly at the adiabatic lapse rate. When the sun rises in the morning and heats the surface, it induces a steeper temperature gradient, thereby inducing convection. The sun’s energy drives a global heat engine moving heat from hot to cold according to the 2nd law of thermodynamics.

Figure 1: Temperature profile of the atmosphere

Remember that the atmosphere is stable at the adiabatic lapse rate. The dry adiabatic lapse rate (DALR) is -g/Cp, where Cp is the specific heat of air at constant pressure. Adiabatic means that no external heat enters the gas, so the work done by a rising parcel of air as it expands comes from its internal energy. Imagine a mass of atmosphere m that rises up a height dH against gravity: the work done is mg dH, which equals the loss in internal energy (heat), -mCp dT, so dT/dH = -g/Cp. When the atmosphere is exactly at the lapse rate, air can move up or down without external work. During the day the surface warms up fast, increasing the lapse rate above this value and thereby inducing convection. If the environmental lapse rate is below the DALR then air will fall; otherwise it rises until the DALR is restored. When evaporation is included we get weather such as storms, thunderstorms and precipitation.
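The arithmetic of this energy balance is one line: the dry adiabatic lapse rate is just g divided by the specific heat of air.

```python
# Dry adiabatic lapse rate: a parcel rising by dz converts internal
# heat Cp*dT into work g*dz, so |dT/dz| = g/Cp.

g = 9.81      # gravitational acceleration (m/s^2)
Cp = 1005.0   # specific heat of dry air at constant pressure (J/(kg K))

dalr = g / Cp * 1000.0    # convert K/m to K/km
print(f"dry adiabatic lapse rate ~ {dalr:.1f} K/km")
```

This gives the familiar figure of nearly 10 K per km; moist air, releasing latent heat as it condenses, cools more slowly.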

Effect of CO2
The earth’s surface radiates heat into the atmosphere as IR, much like a black body. CO2 absorbs photons of very specific wavelengths through quantum excitation of vibrational and rotational lines. These are concentrated in the 15 micron band (see Fig 2).

Figure 2: The fine structure of transition lines that make up the 15 micron CO2 absorption band.

This energy is quickly thermalised with the surrounding air, including other CO2 molecules. These re-emit IR photons, losing vibrational energy, at a rate set by the local thermalized temperature. As the temperature falls with increasing altitude, so too does the emission from CO2 molecules. Multiple such absorption and re-emission steps “transfer” radiative energy in the CO2-sensitive bands up through the atmosphere until the density falls sufficiently for it to escape to space: eventually there are so few CO2 molecules left higher up that the probability of absorption of a CO2 IR photon essentially falls to zero. During this process radiative energy is absorbed by the atmosphere, tending to increase the temperature gradient because the density is highest near the surface. As this temperature gradient steepens away from the stable DALR, convection of air is induced to restore balance. IR radiation is the “energy source” that drives the convection heat engine maintaining the earth’s lapse rate.

Figure 3: Taken from Richard Lindzen. Pure radiative equilibrium would be the temperature gradient if the atmosphere relied only on radiation to cool the surface (without any convection); the surface temperature would then be >20C warmer than today! Thermodynamics drives the lapse rate towards the moist adiabatic lapse rate through convection and the latent heat of evaporation.

The greenhouse effect works as follows.

• Solar energy during the day warms the surface.
• The surface radiates IR upwards into the atmosphere, increasing the lapse rate through absorption of photons by CO2 (and water) at different levels via radiative transfer.
• This absorption then drives convection and evaporation (latent heat) to restore the lapse rate back towards adiabatic stability.
• Most IR photons escape to space from higher, and therefore colder, altitudes.

The lapse rate and convection stop at the tropopause. Above that lies the stratosphere, where temperatures first remain constant and then increase, because ozone absorbs sunlight and warms the upper atmosphere. What determines the height of the tropopause? It is the height where the net radiation loss to space exceeds the radiation absorbed from all lower levels. The greenhouse effect stops there and the atmosphere cools by radiation alone. Total outgoing radiation must balance incoming solar radiation, so the average emission temperature is the “effective temperature” of about 225K. To look in more detail at how CO2 affects surface temperatures, and how any increases will lead to global warming, we need to study the wavelength dependence of the height at which CO2 radiates to space. This defines the effective emission height for CO2 in the atmosphere. The following is one way to calculate this height distribution.

The density of air with altitude is determined by barometric pressure. For a well-mixed gas, the density of CO2 with height is determined by the overall concentration in ppm. Therefore at any altitude we know the number of CO2 molecules/m^3. The cross-section for the absorption of IR photons of wavelength λ is given by the HITRAN database. We can therefore calculate the altitude at which a fraction 0.5 of incident photons arriving from the TOA have been absorbed. Since absorption + transmission = 1, this height is exactly the effective emission height for radiative-transfer photons of wavelength λ transmitted upwards from all lower levels in the atmosphere and the surface. The only criterion that determines the emission of IR photons of wavelength λ by CO2 molecules in all the optically thick layers below is the local thermalized temperature (Kirchhoff’s law), with the emission intensity given by the Boltzmann distribution.

Now imagine a downward flux of IR photons originating from space, and assume a US standard atmosphere. Then for each wavelength, using HITRAN, we can calculate the height at which more than half of the incident photons are absorbed by CO2 molecules in the atmosphere. This is the same as the effective emission height for upwelling photons from lower atmospheric levels in local thermodynamic equilibrium. This is what you get:

Fig 3: The CO2 emission height profile for 300ppm and for 600ppm smoothed with a resolution of 20 lines.
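A toy version of this half-absorption calculation, for a single wavelength, can be written down with an exponential atmosphere. The cross-section below is a made-up placeholder standing in for the real HITRAN line data.

```python
import math

# Toy half-absorption (= effective emission) height for one wavelength.
# SIGMA is an assumed, illustrative cross-section; a real calculation
# would take sigma(lambda) line by line from HITRAN.

N0 = 2.55e25     # air number density at sea level (molecules/m^3)
H = 8000.0       # atmospheric scale height (m)
SIGMA = 7e-26    # absorption cross-section (m^2) -- assumed placeholder

def emission_height(sigma, ppm):
    """Height (m) at which the optical depth measured down from space
    reaches ln(2), i.e. half the incident photons have been absorbed."""
    # CO2 column above height z in an exponential atmosphere:
    #   N(z) = N0 * ppm * H * exp(-z/H)
    # so we solve sigma * N(z) = ln(2) for z.
    tau_surface = sigma * N0 * ppm * H
    if tau_surface <= math.log(2.0):
        return 0.0   # optically thin: the surface itself radiates to space
    return H * math.log(tau_surface / math.log(2.0))

z300 = emission_height(SIGMA, 300e-6)
z600 = emission_height(SIGMA, 600e-6)
print(f"emission height at 300 ppm ~ {z300 / 1000:.1f} km")
print(f"emission height at 600 ppm ~ {z600 / 1000:.1f} km")
```

In this toy model doubling the concentration lifts the emission height by exactly H·ln(2), into colder air that radiates less, which is the heart of the CO2 greenhouse mechanism.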

Then for each wavelength we can also calculate the emitted radiance, and the result agrees almost perfectly with Nimbus spectra.

Fig 4: Calculated IR spectra for 300ppm and 600ppm using Planck spectra. Also shown are the curves for 289K and 220K

Furthermore, by varying the concentration of CO2 you can calculate the change in OLR at the TOA for each wavelength. Integrating over wavelength gives the net CO2 forcing as a function of concentration. We can thereby also “derive” the formula RF = 5.3 ln(C/C0), which had previously been a mystery to me (actually I got 6.0!). You can also calculate the net temperature effects of different CO2 concentrations on earth, ignoring all other feedbacks. The result is shown below.

Fig 5: Dependence of radiative forcing on CO2 concentration in the atmosphere. The red curve is a logarithmic fit.
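Evaluating the logarithmic fit quoted above, using the text's coefficient of 5.3:

```python
import math

# The logarithmic forcing law RF = k * ln(C/C0), with the coefficient
# 5.3 W/m^2 quoted in the text (the post's own fit gave ~6.0).

def radiative_forcing(c_ppm, c0_ppm=300.0, k=5.3):
    """Radiative forcing (W/m^2) relative to a baseline concentration."""
    return k * math.log(c_ppm / c0_ppm)

print(f"doubling CO2 (300 -> 600 ppm): {radiative_forcing(600):.2f} W/m^2")
print(f"300 -> 400 ppm:               {radiative_forcing(400):.2f} W/m^2")
```

The logarithm is why each successive increment of CO2 adds less forcing than the one before: the central lines are saturated, and only the band edges respond.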

Summary
The greenhouse effect depends on there being a lapse rate. This is stabilized at the (moist) adiabatic lapse rate through convection and evaporation. Convection is driven by solar heating of the surface and by IR radiation absorption in the atmosphere. The lapse rate stops at the tropopause, because this is where the radiative losses to space exceed the radiative absorption from below. The height of the tropopause effectively determines the surface temperature, because radiative energy balance fixes the tropopause temperature at about 225K. CO2 affects just a narrow band of wavelengths centered on 15 microns. Increasing CO2 levels affect mostly the side lines, leading to a weak logarithmic dependence of forcing, because the central lines are already saturated all the way up to the stratosphere. This CO2 effect essentially raises the tropopause height somewhat, leading to slightly higher surface temperatures at equilibrium. This is often called radiative forcing at the “top of the atmosphere”, but really the rise in the tropopause causes an energy imbalance until surface IR increases at the higher temperature to rebalance outgoing IR with incoming solar radiation.

12/6 update: corrected according to comment of Eli below.
13/6 update: corrected atmospheric pressure at sea level – see Ro below.


## The straw that broke the camel’s back

One of the great mysteries of climate science is the origin of the 100,000-year glacial cycles which have occurred on Earth over the last million years. These are interspersed with regular “short” interglacial periods coincident with a maximum in the eccentricity of the earth’s orbit. What exactly triggers the collapse of the northern ice sheets to initiate an interglacial, followed by a slow return to another 90,000-year-long ice age? To put this into human terms: Homo sapiens evolved during the last-but-one ice age 200,000 years ago and then migrated from Africa during the last ice age. All of human civilization has occurred within the current warm interglacial period. We seem to flourish in the warm spells, then migrate and adapt to survive the cold spells.

5 million years of benthic foram δ18O data. The blue curve is a fit to Milankovitch harmonic data – see here

Some 800,000 years ago there was a switch from a regular 41,000-year cycle of glaciations to the current 100,000-year cycle. This seems to have happened because, after a 3-million-year gradual cooling trend, obliquity alone was insufficient to end an ice age. A threshold had been crossed beyond which the maximum in summer insolation could no longer melt back the ever-larger ice sheets. Something else was now needed to tip the balance so that increased insolation could trigger an albedo meltdown of the ice sheets. The sawtooth-shaped collapse of the ice sheets ending the last glacial cycle is evidence that insolation changes now needed some external trigger to end an ice age. Are tidal forces enhanced by maximum eccentricity the trigger that is needed?

Changes in summer insolation in the Arctic  vary by nearly 100W/m2 with changing orbital variations of the earth. The last peak on the right is the LGM, but note that there are two other similar peaks within the last ice age which failed to end the glacial period.

Summer insolation for different northern latitudes. Note how the total average insolation is much greater at the north pole than at the equator. However the albedo of ice is ~0.8, so just 20% is absorbed. Less ice cover decreases the net albedo, leading to positive feedback and ice sheet collapse.

The seminal work on long-term solar system dynamics has been done over decades by the French group at the Observatoire de Paris (Laskar et al.), on which the above curves are based. The 41,000-year cycle of glaciations over 3 million years can be understood as the net increase in insolation reaching both polar regions, as seen in the following calculation.

Comparison of N-S pole maximum insolation with precession, and the difference in total insolation between the two poles. Incredibly, the integrated total energy absorbed at the N-Pole follows exactly the 41,000-year cycle, as does the maximum N-S asymmetry. This was the cause of the glacial cycles prior to 800,000 years ago. Why then did glaciations switch to a 100,000-year cycle?

This shows a regular net increase in polar insolation with the 41,000-year obliquity cycle AND a North-South asymmetry offset by the seasonal 23,000-year precession. The 100,000-year eccentricity cycle merely modulates the precession term, and would have no effect at all if the earth’s orbit were circular. The most recent maximum in eccentricity was small (0.02) compared to previous maxima, because there is also a 400,000-year cycle modulating the eccentricity maxima, which can reach as high as 0.058. An interesting observation is that those ice ages which end with a small maximum eccentricity, like the last one, tend to have a much sharper sawtooth shape. It seems to be harder to tip them over the threshold needed for a runaway meltback with decreasing albedo. So what is that extra push needed to trigger an interglacial?

The Moon’s orbit around the Earth is elliptical, with a substantial average eccentricity of 0.055 (large compared to other Solar System bodies). This value varies both during the lunar month and over a 206-day cycle due to the varying tidal force of the Sun’s gravitational field, which increases the eccentricity when the orbit’s major axis is aligned with the Sun-Earth vector and the Moon is full or new. The combined effects of orbital eccentricity and the Sun’s tides result in a substantial difference in the apparent size and brightness of the Moon at perigee and apogee. Extreme values of the perigee and apogee distance occur when perigee or apogee passage happens close to a new or full Moon, especially in the months near to Earth’s perihelion, as this is when the Sun’s tidal effects are strongest. The current minimum distance of approach of the moon is 364,095 km. During the last 5000 years the distance of the Moon’s perigee has varied from 356,355 to 370,399 km.

During the last glacial maximum (LGM) the eccentricity of the earth’s orbit round the sun was 0.02, compared to the current value of 0.017. This reduces the distance of closest approach of the earth to the sun (perihelion) by about 1%, or roughly a million km. As a result the maximum solar tide acting on the earth at perihelion was about 3% stronger than now, because tides vary as 1/R^3. This in turn causes the moon’s orbit to have a larger variation in eccentricity, due to the stronger variation in the solar tides. We can estimate how large this effect is as follows.

Variations in eccentricity of the moon’s orbit around the Earth-Moon barycentre.

The moon’s eccentricity varies with distance from the sun. This is due to changes in solar tidal effects as the earth-moon system orbits the sun. Every 31 days the moon reaches its minimum distance to the sun, and every 206 days the semi-major axis of the moon’s orbit around the earth aligns with the sun-earth line so that the earth, moon and sun are aligned. During the LGM, with solar tides increased at perihelion by ~3%, the eccentricity of the moon’s orbit must also have increased.

At the LGM the earth-moon system’s eccentricity around the sun increased to 0.02, so the distance of the earth to the sun at perihelion was reduced by about 1%. This leads to about a 3% increase in the perihelion solar tide on earth. These tides also act on the moon’s orbit, increasing the maximum lunar eccentricity and reducing its shortest distance of approach to the earth by about 7%. Combining the perihelion lunar tide with the perihelion solar tide, we can estimate how much larger perihelion spring tides would have been on earth 20,000 years ago. The total increase in “perihelion” spring tidal forces was potentially 24% greater than today (21% lunar and 3% solar).
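The inverse-cube scaling behind these percentages can be checked in one line: a 1% reduction in distance strengthens a tide by (1/0.99)³ − 1, roughly 3%.

```python
# Inverse-cube scaling of tidal force with distance: tides vary as 1/R^3,
# so a small fractional change in distance is amplified roughly threefold.

def tide_change(distance_ratio):
    """Fractional change in tidal force when distance scales by the ratio."""
    return distance_ratio ** -3 - 1.0

print(f"1% closer: {tide_change(0.99) * 100:.1f}% stronger tide")
print(f"7% closer: {tide_change(0.93) * 100:.1f}% stronger tide")
```

The same relation applied to a ~7% reduction in lunar perigee distance gives the ~21% lunar figure quoted above.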

So what ?

Summer insolation in the polar regions at the LGM was at a maximum due to an optimum alignment of obliquity and precession. However such maxima had already occurred twice before during the last glaciation without melting back the ice sheets. This third time sea level had fallen to ~100m lower than today in the Arctic Ocean, and (as pointed out by Chiefio) scouring of the sea bed shows that the ice touched bottom at depths which are now up to 1000m in places.

Bathymetry of the Arctic Ocean courtesy of NOAA. The 100m contour shows sea level 20,000 years ago and the 1000m contour shows the maximum depth reached by sea ice.

The Bering Strait was closed and just a narrow deep channel of ocean remained connecting the Arctic to the North Atlantic. The Arctic was essentially thermally isolated, with a minimal AMOC. Strong tides would have increased the regular flows of water in and out of the North Atlantic. In addition the rise and fall of the tides acted to lift and shift the thick sea ice off the sea bed. Both of these processes helped the strong summer insolation to melt back the ice through physical movement.

Perhaps more importantly, the large ice sheets over North America and Europe were themselves subject to structural stresses caused by the regular tidal forces. We have also seen previously how maximum tractional forces in the Arctic coincide with the summer months every 50 years or so. Perigean spring tides occur when the earth, moon and sun all align while the moon is at perigee. At such times we get the largest tides on earth. These too vary over longer periods of 1823 years, as discussed in the paper by Keeling and Whorf.

Evidence of long term 1823-year variations in lunar tides. Keeling and Whorf.

So what are the conditions needed to end a glaciation?

• A maximum of insolation at northern latitudes in summer months.
• A maximum tidal force acting on northern ice sheets
• Maximum tidal streams to lift and shift Arctic  sea ice and mix with the North Atlantic

The proposal is that stronger tides are needed to tip the balance so that insolation can begin a collapse of the ice sheets, accelerated through albedo feedback and rising sea levels. Are tides the last straw that breaks the camel’s back?

Note: This post has  been influenced  by Chiefio’s new post: Arctic Flushing and Interglacial Melt Pulses which is well worth reading.

Note 2: I am still looking for accurate calculations of ‘Milankovitch’ cycles of the lunar orbit. My estimates are based on extrapolating the last 10,000 years of ephemeris data.


## Live Interface monitoring Wind Power output

I wanted a very simple live grid monitor similar to Gridwatch but less complex. There is also a standalone version, plus an updating graph of the last 24 hours, and a 30 day running monitor. Update 8/5/14: I have now added together Hydro, Pumped Storage and Biomass to make a new ‘renewable’ category: ‘Hydro/Bio’.

Posted in coal, Energy, nuclear, renewables, Science, Technology, wind farms | 30 Comments

## Performance of UK wind power during winter 2013/2014

Headline: The UK fleet of >5000 wind turbines provided 6.6% of peak electrical power requirements this winter.

Winter 2013/14 has been characterized by strong westerly gales. The UK has invested heavily in new wind farms, resulting in a total fleet of over 5000 turbines with 10GW of installed capacity. Such windy conditions favor higher load factors, and wind also takes priority over other fuels on the grid. So how has our fleet of wind farms fared this winter?

Fuel source meeting peak demand averaged from 21 August 2013 until 15 April 2014

I have been monitoring the national grid statistics for electricity generation every hour since late September 2013. Demand for electricity must be met by supply in real time, so what really matters is the daily peak load on the grid. If supply cannot meet peak demand then blackouts will occur. For each day over the last 7 months I have identified the time of peak electricity demand and which fuels were used on the grid to meet it. On average wind provided 6.6% of peak demand.
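The bookkeeping behind this can be sketched as follows; the hourly sample tuples are illustrative stand-ins, not the actual grid data feed format:

```python
# Hourly grid samples: (day, hour, total_demand_GW, wind_GW) - illustrative values
samples = [
    ("2014-01-01", 12, 45.0, 3.0), ("2014-01-01", 17, 52.0, 2.6),
    ("2014-01-02", 12, 44.0, 0.5), ("2014-01-02", 17, 50.0, 0.1),
]

# Keep, for each day, the (demand, wind) pair at the hour of peak demand
peak = {}
for day, hour, demand, wind in samples:
    if day not in peak or demand > peak[day][0]:
        peak[day] = (demand, wind)

# Wind's share of peak demand, per day and averaged over all days
shares = {day: wind / demand for day, (demand, wind) in peak.items()}
avg_share = sum(shares.values()) / len(shares)
```

The same loop over seven months of real hourly records yields the 6.6% average quoted above.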

However, when we look in detail at the daily fluctuations in wind we see a different story to that usually reported by the industry and the media. It is true that wind power records have been broken this winter during extended windy periods, and wind power has occasionally reached up to 14% of UK total demand. What is never reported, though, is that on other days there has been essentially zero wind power available at peak demand. This fact remains the real show stopper for wind power if we aim to maintain a modern technological society.

Graph of percentage contribution from wind power to the peak power demands of the UK. Maximum power output of 14% represents 7GW, a load factor of 70%. Minimum power output was 0.2GW – a load factor of 2%.

This is the unsolved problem with wind power. Without any energy storage solution, wind cannot displace any fossil fuel plants, because they are still needed to balance the grid. Wind energy produced at night is paid for by consumers at double the peak time rates and then effectively thrown away. This is the real scandal. Nuclear, on the other hand, has the inverse problem: it runs continuously, night and day, and so cannot be used to balance wind. That is why all motorways in Belgium have street lighting all night – it costs nothing, since their nuclear energy is essentially free at night. The difference with wind is that nuclear is 100% reliable, so we can plan to do something with that power. This makes nuclear a far more important energy source.

Now we look in more detail at the demand and supply this winter. The next graph covers the period from September to March. The fuels used to generate peak electricity are plotted for each day. Wind has priority in that all available wind power is fed to the grid ahead of other fuels apart from nuclear. The fuel labelled “imported and pumped” is the sum of hydro power and energy imported from France and Holland via the interconnection links. This winter the traffic has been one way: from them to us. France relies on nuclear, while Holland is building more coal power stations.

Summary of peak daily power requirements and energy sources for the UK from September 2013 until April 2014

The seven day oscillation is of course the variation between weekdays and weekends. Now we look in more detail at the high demand period of about 50GW during December and January, with a noticeable dip over the Christmas break. The next plot is a daily additive stacked bar graph.

It is interesting to note how the time of peak demand shifts to lunchtime on Christmas day, whereas normally peak demand occurs around 5pm. Presumably this is simply because everyone is indoors cooking Christmas lunch.

Detailed breakdown of Fuel sources for the Christmas period. During this period the UK experienced some severe winter storms and flooding.

There is no doubt that wind energy makes a noticeable contribution to the grid – but one has to ask at what cost and what benefit? Wind has increased roughly 2% compared to last winter, but on the other hand we are now spending £6 billion per year in subsidies. The subsidies can only increase with future expansion, as in Germany where subsidies cost each household an average of an extra €300 per year. Doubling the number of turbines in the UK would increase the average yield to 12% of peak load, but would still do nothing to improve energy security. That is because there would still sometimes be < 1% of wind energy available at 5pm on the coldest day of the year, so not a single conventional plant could close without increasing the risk of blackouts.

Independent of energy security, there are also serious environmental objections to a massive extension of wind farms in and around our small island. They are destroying ancient peat and moorlands. They are ruining some of our best countryside and seascape views. Meanwhile the renewables lobby push for ever more wind farms, partly because they are risk free investments underwritten at our expense. The only low carbon strategy that makes any sense today is a rapid expansion of nuclear power, certainly not wind power. Economies of scale say we should instead be building 5 identical Hinkley Point C type stations in parallel, thereby bringing 15GW of new reliable capacity on line by 2020. Furthermore we could then recharge everyone’s electric car overnight and light all our motorways for free! DECC chief scientist David MacKay has suggested instead using future electric car batteries to power the grid when there is no wind. Imagine waking up for work some mornings to find that your car battery is dead after a becalmed night! I somehow expect government cars would be exempted!

Relying on wind energy is like choosing to sail round the world rather than take the QE2. Sailing is great fun, but the journey time is out of your control: you can be becalmed for days on end or be hit by a storm at 3am when you need to sleep. I am amazed that seemingly intelligent people can be so deluded about “green” energy. It really is as much a scam to defraud tax payers as it is a remedy for climate change!

Posted in Energy, nuclear, renewables, Science, Technology, wind farms | Tagged | 23 Comments

## The strange case of TCR and ECS

In this post we consider the strange coincidence that the net forcing used by CMIP5 models is essentially the same as CO2 forcing alone. This allows us to derive a value of TCR (Transient Climate Response) from observational data alone. Measuring ECS (Equilibrium Climate Sensitivity), however, requires modeling information. We use the average CMIP5 forcing and a model derived “hysteresis” function to determine ECS from temperature data. The resulting energy imbalance calculated using these values of ECS and TCR is found to agree with that derived by other methods.

The net climate forcing is mainly due to changes in anthropogenic GHGs and in aerosols. Something like 20-40% of aerosols are of anthropogenic origin. Aerosols have 3 main effects:

1. They scatter incoming solar radiation, cooling the earth.
2. They (e.g. black carbon) absorb both incoming solar radiation and surface IR radiation.
3. They help seed cloud formation – a net cooling effect.

The energy imbalance is $Q = F -\lambda\Delta{T}$ where $\lambda$ is the climate feedback parameter. Models trade off aerosols against climate sensitivity to match observed temperatures; aerosols are essentially the tuning parameter used to match GCM hindcasts to past surface temperatures. AR5 admits “low confidence” in the aerosol–cloud interaction, and the estimated uncertainties allow that the net effect of aerosols could even be zero. However, if aerosol forcing is reduced then model sensitivities become far too high. I argued in the previous post that climate sensitivity should be defined as the measured temperature change for a measured doubling of CO2. Instead the IPCC defines it as the simulated change in temperature due to CO2 forcing alone, excluding other GHGs and aerosols. The amazing fact, however, is that it doesn’t matter: in order for the models to agree with observed temperature rises since 1850 there is a near perfect cancellation between the other GHGs and aerosols! Figure 1 shows the net CMIP5 forcings compared to a CO2-only forcing.

Fig1: Comparison of a pure CO2 GHG forcing and the CMIP5 averaged forcings used to hindcast past temperature dependency since 1850.

So it doesn’t really matter whether we use GCM models to derive a value for TCR or simply fit the temperature data instead. Let’s do the latter and derive a value for TCR using

$\Delta{T} = \lambda\Delta{S}$ and
$\Delta{S} = 5.3\ln(CO_2/290)$ where 290 ppm is the CO2 concentration in 1850,
so $\Delta{T} = \lambda \times 5.3\ln(CO_2/290)$

For CO2 I take the Mauna Loa data smoothly interpolated back to a value of 290 ppm in 1850. We then fit the temperature data to the ln(CO2/290) term. The result is shown below.

A fit of the temperature anomaly data to lambda*5.3ln(CO2/290)

This gives a climate response value $\lambda$ = 0.47 °C/(W/m2), therefore

TCR = $\lambda$*3.7 = 1.7C
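The fit itself is a one-parameter least squares. A minimal numpy sketch, using a synthetic CO2 curve and noise-free anomalies as stand-ins for the Mauna Loa and HADCRUT4 series:

```python
import numpy as np

# Synthetic stand-ins for the Mauna Loa CO2 (ppm) and HADCRUT4 anomaly series
years = np.arange(1850, 2014)
co2 = 290.0 * np.exp(0.0017 * (years - 1850))   # illustrative growth curve
forcing = 5.3 * np.log(co2 / 290.0)             # DS = 5.3 ln(CO2/290), W/m2
temps = 0.47 * forcing                          # fake anomalies built with lambda = 0.47

# One-parameter least squares through the origin: lambda = sum(x*T) / sum(x^2)
lam = np.sum(forcing * temps) / np.sum(forcing ** 2)
tcr = lam * 3.7                                 # 3.7 W/m2 = forcing for doubled CO2
```

With the real anomaly data the recovered $\lambda$ is of course noisier, but the estimator is the same.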

A fit to the temperature data which includes a 60 year natural oscillation, possibly linked to the AMO (see a recent post by Bob Tisdale), gives a slightly different result.

Fig 3: Overall fit to 164 years of global temperature data (HADCRUT4)

Part of the rapid warming from 1970 to 2000 can then be seen as potentially due to the upturn in this oscillation. The CO2 component now has a lower climate response, $\lambda$ = 0.41 °C/(W/m2), and

TCR = $\lambda$*3.7 = 1.5C

Figure 1 shows that the effective average forcing from all CMIP5 models has essentially been the same as that from CO2 forcing alone. This coincidence means we can derive TCR as defined by the IPCC directly from observations, and that remains true even if the ratio of aerosols to other GHGs were to change in the future. These two analyses together give a value:

TCR = 1.6 ± 0.2 C   where the error is an estimate of the spread in fits.

Equilibrium Climate Sensitivity (ECS)

“ECS is defined as the change in global mean temperature, T2x, that results when the climate system, or a climate model, attains a new equilibrium with the forcing change F2x resulting from a doubling of the atmospheric CO2 concentration.” It is the temperature reached after the earth has restored energy balance following a doubling of CO2. The observed global temperatures since 1850 are instantaneous measurements taken while the earth is still warming. The delay arises because the oceans have a huge thermal capacity. One way to estimate ECS is to “measure” the change in heat content of the oceans $\Delta{Q}$. Then

$ECS = \frac{F_{2x}\Delta{T}}{\Delta{F}-\Delta{Q}}$

However there is another way to do it: by “measuring” instead the response of the earth to a sudden increase in forcing. I used an old GISS model to measure that thermal inertia from a model run which instantaneously doubles CO2. The temperature response is shown in figure 4, where the red curve is a fit I made to a $(1-e^{\frac{-t}{\tau}})$ term.

Fig 4: Temperature response curve from a pulse doubling of CO2 in 1958, and the fit described in the text

$T(t) = T_0 + \Delta{T_0}(1-e^\frac{-t}{15})$

where $\Delta{T}_{0}$ is the equilibrium temperature response to a change in forcing $\Delta{S}$.

This provides a method to derive ECS from the temperature data once the net forcing is known.

To calculate the CO2 forcing  I take a yearly increment of

$\Delta{S} = 5.3 \ln (\frac{C}{C_0})$  ,     where $C_0$ and $C$ are the concentrations before and after the yearly pulse. All values are calculated from seasonally averaged Mauna Loa data smoothed back to an initial concentration of 280ppm in 1750.

Each pulse is tracked through time and integrated into the overall transient temperature change using:

$\Delta{T}(t) = \sum_{k=1}^N \Delta{T}_{0,k}(1 - e^{\frac{-(t-t_k)}{15}})$   where pulse k occurs at time $t_k$.

$\Delta{T}_{0}$ was calculated for different assumed values of ECS. The results are compared to the observed HADCRUT4 anomaly measurements in Figure 4. The publication of the AR5 report allows us to update the CMIP5 forcings up to 2013.
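A sketch of this pulse integration in Python, assuming τ = 15 years from the GISS fit and treating the annual forcing increments as inputs:

```python
import numpy as np

TAU = 15.0   # ocean response time constant (years), from the GISS model fit
F2X = 3.7    # forcing for a doubling of CO2, W/m2

def transient_temperature(annual_forcing_steps, ecs):
    """Sum the exponential response of every annual forcing pulse."""
    n = len(annual_forcing_steps)
    t = np.arange(n)
    dT = np.zeros(n)
    for k, dS in enumerate(annual_forcing_steps):
        dT0 = ecs * dS / F2X                  # equilibrium response to this pulse
        age = t[k:] - k                       # years since the pulse occurred
        dT[k:] += dT0 * (1.0 - np.exp(-age / TAU))
    return dT

# Sanity check: a single 3.7 W/m2 pulse relaxes towards the assumed ECS
pulses = [3.7] + [0.0] * 99
response = transient_temperature(pulses, ecs=2.3)
```

Feeding in the yearly increments of the CMIP5 net forcing instead of this single test pulse reproduces the ECS curves compared with HADCRUT4 below.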

The forcing data were extended from 2005 to 2013 using the AR5 net forcing (black curve). The final net forcing in 2013 is 2.2 W/m2. The code that calculates the temperature for different values of ECS is available here. Figure 4 shows the temperature response calculated from the model using the AR5 forcing for different values of ECS.

Comparison of H4 to ECS values ranging from 1.5-4C. The thick black line is the 5 year running average of the anomaly data.

Now looking in more detail at recent temperatures where the cumulative effect of past forcing is strongest, we see how unusual the current hiatus in warming appears.

Detailed comparison of ECS curves with H4 temperature anomalies since 1960. The thick black line is an FFT smoothing of the temperature anomaly data with a 5 year filter.

Values of ECS > 3 or ECS < 2 are ruled out by the data. The most likely value of ECS consistent with the recent data is slightly less than 2.5C. The longer the hiatus continues, the lower the estimate for ECS.

The overall result from this analysis is ECS = 2.3 ± 0.5 C. The error is actually asymmetric: more like +0.5 and −0.3.

Let’s check whether all this is consistent with the value of TCR measured earlier, and isolate the energy imbalance $\Delta{Q}$:

$\frac{ECS}{TCR} = \frac{\Delta{F}}{\Delta{F}-\Delta{Q}}$

$\Delta{Q} = \Delta{F}(1 - \frac{TCR}{ECS})$

= 0.7 ± 0.5 W/m2

This is consistent with other values for $\Delta{Q}$.
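The arithmetic, with the fitted values TCR = 1.6C, ECS = 2.3C and the 2013 net forcing of 2.2 W/m2:

```python
# Energy imbalance implied by the fitted sensitivities and the 2013 net forcing
TCR, ECS, dF = 1.6, 2.3, 2.2      # degC, degC, W/m2
dQ = dF * (1.0 - TCR / ECS)       # ~0.67 W/m2, quoted as 0.7 above
```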

In summary, we have shown that there has been a remarkable approximate agreement between pure CO2 forcing and net CMIP5 forcing. This has allowed us to fit the Hadcrut4 temperature anomaly data to derive a value of TCR = 1.6 ± 0.2C. The equilibrium climate sensitivity has been measured using a model derived ocean temperature response to forcing of the form $\Delta{T}(t) = \sum_{k=1}^N \Delta{T}_{0,k}(1 - e^{\frac{-(t-t_k)}{15}})$. By integrating each annual pulse of the CMIP5 model forcings, we have compared different values of ECS to the Hadcrut4 anomaly data. This hysteresis effect becomes stronger over time, so the current hiatus in warming strongly discriminates between different values of ECS. Values greater than 3C are ruled out, as are values < 2C. The best estimate for ECS based on this method is 2.3 ± 0.5C. The measured values of TCR and ECS correspond to a total net forcing of 2.2 W/m2 with an energy imbalance of 0.7 ± 0.5 W/m2.

Posted in AGW, Climate Change, climate science, Science | 23 Comments

## Is TCR a measurable quantity?

Until yesterday I had assumed that climate sensitivity was the measured temperature rise after a doubling of CO2 in the atmosphere from 280ppm in the pre-industrial era to 560ppm as a result of human activity.

$\Delta{T_{cr}} = \lambda\Delta{S}$  where  $\Delta{S} = S_{560} - S_{280}$   and    $\lambda$ is the climate response

The discussion on Ed Hawkins’ blog concerning Lewis & Crok’s criticism of the AR5 climate sensitivity analysis has highlighted that the official IPCC definition of TCR is purely model based and not a directly observable quantity. Piers Forster writes

“Lewis & Crok perform their own evaluation of climate sensitivity, placing more weight on studies using “observational data” than estimates of climate sensitivity based on climate model analysis”. “Here we illustrate the effect of the data quality issues and assumptions made in these “observational” approaches and demonstrate that these methods do not necessarily produce more robust estimates of climate sensitivity.”

The IPCC definition of TCR is: “Transient climate response (TCR) is defined as the average temperature response over a twenty-year period centered at CO2 doubling in a transient simulation with CO2 increasing at 1% per year [Randel et al. 2007]”.

This is a model derived value calculated by drip feeding CO2 into the atmosphere while keeping all other variables constant. It would only equal the temperature rise since pre-industrial times when CO2 levels reach 560 ppm if nothing else changes. Lewis & Crok try to untangle TCR from the observed temperatures by unfolding model derived forcings (CO2, N2O, aerosols etc.) and find that TCR lies in the range 1-2C with a most likely value of 1.35C. They are then criticised for doing this because they “rely” too much on observations!

For a similar reason I am also concerned about the reasoning behind why the RCP8.5 emissions scenario results in a forcing of 8.5 W/m2 by 2100. This “business as usual” emission scenario results in CO2 concentrations reaching about 900 ppm by 2100. I assume that models have been used to calculate that this scenario results in a final net forcing of 8.5 W/m2. However those same models must implicitly have climate sensitivity built in to derive that forcing. The emission scenario covers not only CO2 but also the other anthropogenic GHGs (methane, N2O, CFCs).

In other words RCP8.5 has a built in feedback which can be calculated as follows.

1) feedback explicit $\Delta{T}_{0} = (\Delta{S_0} + F\Delta{T}_{0})G_{0}$

2) no feedback $\Delta{T}_{0} = (\Delta{S})G_{0}$

In case 1 $\Delta{S_0} = 5.3 \ln (\frac{C}{C_0}) = 6.19$
In case 2 $\Delta{S} =8.5$

Since the temperature rise must be the same in both cases
$\frac{\Delta{S}}{\Delta{S_0}} = \frac{3.75}{3.75-F} = \frac{8.5}{6.19}$

Therefore F = 1.02 W/m2/deg.C
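The same calculation in a few lines of Python:

```python
import math

# CO2-only forcing at 900 ppm vs the scenario's stated 8.5 W/m2 net forcing
dS0 = 5.3 * math.log(900.0 / 280.0)   # = 6.19 W/m2
# Rearranging dS/dS0 = 3.75/(3.75 - F) for the implicit feedback F
F = 3.75 * (1.0 - dS0 / 8.5)          # = 1.02 W/m2 per degC
```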

Or a built in “anthropogenic” booster to the above definition of TCR of about 50%. In fact if you look at the AR5 model forcings you can see that the other GHGs are currently calculated to add ~0.9 W/m2 to the 1.8 W/m2 from increased CO2. However this introduces a model dependent “anthropogenic” feedback.

Radiative Forcings from AR5

The root of the problem lies in the entanglement of models and  observations in the definition of TCR.  In my opinion it would be much simpler and cleaner to define TCR as a purely measurable quantity rather than one solely based on model simulations.

“Transient climate response (TCR) is defined as the measured average temperature response over a twenty-year period centered on the observed CO2 doubling.”

This definition, TCR(E), can be measured by experiment. It is simply the average temperature rise when CO2 levels reach 560ppm, and it can essentially be measured today – see A Fit to Global Temperature Data. This definition removes the non-CO2 anthropogenic effects (CH4, N2O, CFCs etc.) and avoids getting trapped by the model centric view. These effects are essentially anthropogenic feedbacks, in a sense similar to climate feedbacks such as increased H2O.

In all other branches of physics models make predictions and experiments then test the models. Why should climate science be different?

The emission scenarios are derived from socio-economic modeling of current energy sources. Nearly all of our energy still comes from burning fossil fuels, while farming and transport depend on oil. This is also reflected in the signature of CH4 and N2O emissions associated with CO2 emissions. Therefore the CO2 level is still a good measure of total anthropogenic effects.

Let’s suppose that in the next 50 years there is a breakthrough in zero-carbon energy – say thorium or fusion reactors. CO2 emissions would begin to fall and so too would the balance between other GH gases and CO2. As far as I can see none of this is reflected in the RCP scenarios.

Ed Hawkins says that climate science is an observational science – like astronomy. That seems to be about right. However in astronomy observations are the drivers of progress – for example the discovery of pulsars, dark matter and dark energy. In climate science the modelers are in control and observations play second fiddle. A prime example of this is the definition of TCR.

Posted in AGW, Climate Change, climate science, Science | Tagged , , | 4 Comments

## Do clouds control climate?

Clouds have a net average cooling effect on the earth’s climate. Climate models assume that changes in cloud cover are a feedback response to CO2 warming. Is this assumption valid? Following a study with Euan Mearns showing a strong correlation of UK temperatures with clouds, we looked at the global effects of clouds by developing a combined cloud and CO2 forcing model to study how variations in both cloud cover [8] and CO2 [14] affect global temperature anomalies between 1983 and 2008. The model described below gives a good fit to HADCRUT4 data with a Transient Climate Response (TCR) = 1.6±0.3°C. The 17-year hiatus in warming can then be explained as resulting from a stabilization in global cloud cover since 1998. An excel spreadsheet implementing the model as described below can be downloaded from http://clivebest.com/GCC

Best fit(Acalc)  to data(H4)  using  TCR=1.4C

A basic uncertainty for climate science is in understanding the net effect of clouds on the radiative balance of the earth [3]. Clouds regulate solar heating by increasing the planet’s albedo while simultaneously absorbing infrared (IR) radiation from the surface. This interplay between the albedo and greenhouse effects of clouds is complex and varies with latitude and season. The net radiative forcing from cloudy regions is $S_0(1-\alpha) - F$, where F is the outgoing IR and $\alpha$ is the cloud albedo. On a global scale the Earth Radiation Budget Experiment (ERBE) measurements have shown a net cooling of around -13 watts/m2, which is four times that expected from a doubling of CO2 alone [3]. However, more recent measurements from the Clouds and the Earth’s Radiant Energy System (CERES) [5] show that the net average cooling effect of clouds is larger (-21 W/m2) (Figure 1b). It is often assumed that changes to cloud cover are a feedback to CO2 forcing rather than an independent phenomenon: a change in climate induces cloud changes, which then feed back into the initial climate change. This effect is built into most climate models, which result in a mean cloud feedback of between 0-2 W/m2/°C [6]. Radiative forcing from increasing CO2 levels is rather well understood [7], but its direct impact on cloud cover is unclear. Feedbacks cannot be too large compared to the Planck response, as otherwise $\Delta{T} = \frac{\Delta{T_0}}{(1-f)}$ becomes unstable as f approaches 1.

Global cloud cover variations, measured by a number of satellites under the guidance of the International Satellite Cloud Climatology Project (ISCCP), are subject to uncertainty linked to data acquisition methods [10] and viewing biases [11]. However, we have found previously [12] that using sunshine hours at the surface as an inverse proxy for cloud cover confirms the ISCCP results over the UK. In private correspondence, NASA have also assured us that data acquisition and corrections are now reliable and that the ISCCP data are therefore robust.

Figure 1a: ISCCP global averaged monthly cloud cover from July 1983 to Dec 2008, overlaid in blue with the Hadcrut4 monthly anomaly data. The fall in cloud cover coincides with a rapid rise in temperatures from 1983-1999; thereafter both the temperature and cloud trends have flattened. CO2 forcing from 1998 to 2008 increased by a further ~0.3 W/m2, which is evidence that changes in clouds are not a direct feedback to CO2 forcing.

Fig 1b: CERES measured data on global cloud forcing. Reflected short wave radiation reduces surface heating by ~44 watts/m2, which is offset by cloud absorption of outgoing IR, thereby increasing the greenhouse effect from clouds by ~26 watts/m2. This results in a net global cooling effect from cloudy skies of -22 watts/m2. This figure is used to define NCF = 0.91. [5]

Figure 1a shows the ISCCP global averaged cloud cover [8] compared to Hadcrut4 global temperature anomalies [4]. Until 1998 cloud cover decreased in line with increasing CO2 levels, which could be taken to support the existence of a CO2 feedback. However, since 1998 both cloud cover and temperatures have remained flat while CO2 forcing has continued to rise. This is evidence that cloud cover does not depend simply on CO2 forcing alone and may itself be a major natural driver of climate change. There is no direct evidence that cloud cover varies in response to CO2, and the ISCCP data discussed here are cyclic in nature, which cannot be explained by unidirectional CO2 forcing. We have therefore developed a model that treats cloud and CO2 forcing independently. Mearns and Best [10] have reported evidence that changes in cloud cover can explain approximately 40% of UK surface temperature changes since 1956, especially during the summer months (June, July, August) [10]. We now apply essentially the same model on a global scale, using ISCCP data [9] downloaded from the US National Oceanic and Atmospheric Administration (NOAA) web site [ref] (since the NASA web site has been disabled [ref]), referenced to measured surface temperature data from CRU-Hadley (HADCRUT4) [8]. ISCCP cloud data exist beyond 2008 but are not yet in the public domain.

Cloud forcing model. We define the net cloud-forcing factor (NCF) as the resultant balance between the albedo and greenhouse (GH) effects of clouds. In effect NCF describes the ratio of the combined forcing (cloud transmissibility and GH effect) of cloudy skies relative to that for clear skies, so (1-NCF) is the net cooling factor of clouds with respect to clear skies. The radiative energy balance is then given by

$(1-CC)\times{S_0} + CC\times{NCF}\times{S_0} = \epsilon \sigma\ T^4$

where S0 for clear skies is taken as the global average 240 W/m2 [13]. For each month, m, the incoming net insolation is therefore

$S(m) = (1-CC(m))S_0 + CC(m) \times NCF \times S_0$

The calculated temperature change for month m is then given by

$\Delta{T(m)} = \frac{(S(m)-S(m-1))}{3.5}$

where 3.5 W m-2 °C-1 is the Planck response dS/dT at 288K, i.e. the increase in IR radiation for a 1°C rise in surface temperature. We initialize the model by normalising the first data point Tcalc(July 1983) = Thcrut(July 1983) and then calculate all subsequent monthly temperatures based only on the measured changes in ISCCP cloud cover (CC).

$T_{calc}(m) = T_0(m-1) + \Delta{T(m)}$

We fix NCF = 0.91 as measured by CERES for the global net cloud forcing.
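A minimal sketch of this cloud-only part of the model, with NCF and S0 as given above (the cloud cover series here is illustrative, not the ISCCP data):

```python
import numpy as np

S0, NCF = 240.0, 0.91    # clear-sky insolation (W/m2) and CERES cloud factor
PLANCK = 3.5             # Planck response, W/m2 per degC at 288 K

def cloud_model(cloud_cover, t_start):
    """Temperature series driven only by changes in monthly cloud cover."""
    S = (1.0 - cloud_cover) * S0 + cloud_cover * NCF * S0   # net insolation
    T = np.empty_like(S)
    T[0] = t_start                       # normalise to the first anomaly value
    for m in range(1, len(S)):
        T[m] = T[m - 1] + (S[m] - S[m - 1]) / PLANCK
    return T

# Falling cloud cover warms the model: each 0.01 drop adds ~0.06 degC
cc = np.array([0.66, 0.65, 0.64])
T = cloud_model(cc, t_start=0.0)
```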

The CO2 radiative forcing model: The change in CO2 forcing for month (m) is calculated using the formula [7]

$\Delta{S}(m) = CS \times 5.3\ln(\frac{CO_2(m)}{CO_2(m-1)})$

where $\Delta{S}$ is the monthly change in radiative forcing, CO2(m) is the concentration of CO2 in the atmosphere for month m, and CS is a factor representing climate sensitivity. CO2 values are the measured monthly Mauna Loa data [14]. The model with CS=1.0 then corresponds to an equilibrium climate sensitivity (ECS) of 1.1°C. However, when the model is applied to contemporaneous temperature data, CS corresponds instead to the transient climate response (TCR). Climate models with net positive feedbacks yield larger ECS values of between 2 and 5°C [15]. The model value of CS is determined empirically from the data.

We apply the model to calculate global temperature anomalies from the variance in global cloud cover after normalising the start point (July 1983) to the measured global average temperature, and then compare the model output to the actual temperature variance recorded by HadCRUT4. Our criterion for goodness of fit between the model and HadCRUT4 is the χ² per degree of freedom (χ²/df), taking a measurement error of 0.1°C for the monthly anomalies. The χ² results found by varying CS with NCF fixed at 0.91 are shown in Figure 2b. A minimum in χ² is found for CS = 1.45, corresponding to TCR = 1.6°C. The error on CS is determined by how much variation is needed to shift χ²/df by 1σ.

Fig 2 a) Results of the model calculations (Tcalc) for the best fit value of CS = 1.45 (TCR = 1.6 °C) compared to monthly Hadcrut4 anomaly data.
b) Variation of χ² per degree of freedom between the predicted and measured anomalies, calculated for different CS values taking NCF=0.91 as measured by CERES.

Clouds and CO2 alone cannot explain all the variations in monthly global temperatures. Explosive volcanic eruptions and ENSO also have transient effects on global temperatures, so it is no surprise that the minimum χ²/df > 1.0. However the main trend is well reproduced by the model, as shown in Figure 2a which compares the best-fit value of CS to the measured data. The general warming trend until 1998 can mostly be explained by the fall in cloud cover during that period, while the flattening of temperature since 1998 coincides with a leveling off in global cloud cover. To explain the observed warming over the full period by a CO2 dependent term alone with clear skies (NCF=1) would require TCR = 2.2 °C, giving an approximately linear increase of 0.7 °C over this time period. Examining the yearly change in cloud forcing shows that it increased from 225.5 W/m2 in 1984 to 226.2 W/m2 in 1999, an increase in forcing of ~0.7 W/m2. CO2 forcing with TCR = 1.6 °C increased by 0.54 W/m2 over the same time period. This result suggests that more than half of the rapid warming observed in the 1980s and 1990s can be explained by a decrease in cloud cover. Since 1999 net cloud forcing has remained approximately constant (-0.2 W/m2), while CO2 forcing has increased by a further 0.58 W/m2.

Fig 3: Results of the summer cloud analysis for a) the Northern Hemisphere, with model TCR = 1.0 °C and NCF = 0.9, and b) the Southern Hemisphere, with model TCR = 1.65 °C and NCF = 0.91. The HadCRUT4 data and the model output for each year are averaged over June, July and August in case a) and over December, January and February in case b); in the latter case the year is assigned to that of December. There is a clear difference in the dependence on CS between the two hemispheres: changes in cloud cover have a greater impact in the northern hemisphere than in the southern hemisphere, which affects the best-fit CS values for each hemisphere.

There are marked seasonal variations in cloud cover for each hemisphere, particularly in the southern hemisphere. To isolate differences between the long-term effects of clouds in each hemisphere, we made summer averages of temperature and cloud cover (June, July, August (JJA) for the Northern Hemisphere (NH) and December, January, February (DJF) for the Southern Hemisphere (SH)) and then compared the model with the hemispheric HadCRUT4 anomaly data. For the average summer hemispheric insolation we take S0 = 312 W/m2, which is 240 W/m2 corrected for the angle of the sun during the summer months. The results of this analysis are shown in Figure 3. There is a clear difference between the two hemispheres. The response to cloud forcing in the northern hemisphere is stronger with fixed NCF = 0.91, leading to a lower χ²-fitted value for CS (TCR = 1.0 ± 0.3 °C). The southern hemisphere shows a smaller cloud forcing response, with correspondingly larger values for CS (TCR = 1.65 ± 0.3 °C). This difference is most likely due to the dominance of oceans in the southern hemisphere. By studying each hemisphere separately, eliminating seasonal effects as far as possible, the global result is confirmed: over half the warming observed between 1983 and 1999 is due to a reduction in cloud cover, mainly affecting the northern hemisphere. The apparent slowdown in warming since 1999 coincides with a stabilisation of global cloud cover. In an analysis of cloud and temperature variance in the UK, Mearns and Best [12] reach a similar conclusion: approximately 50% of net warming since 1956 is due to a net reduction in cloud cover. However, in that study NCF was estimated empirically to be 0.54, significantly lower than the CERES value of 0.91 used here. A lower NCF means that clouds have a larger effect; the difference between the global and UK results may reflect latitude and the fact that the UK data are land based only.
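The JJA/DJF averaging convention described above (with each DJF season labelled by the year of its December) can be sketched as follows; the monthly values here are toy numbers, not the real HadCRUT4 anomalies.

```python
def summer_average(monthly, hemisphere):
    """Average monthly anomalies over the summer season: JJA for 'NH',
    DJF for 'SH'.  `monthly` maps (year, month) -> anomaly.  A DJF season
    is assigned to the year of its December, as in the analysis above."""
    seasons = {}
    for (year, month), value in monthly.items():
        if hemisphere == "NH" and month in (6, 7, 8):
            seasons.setdefault(year, []).append(value)
        elif hemisphere == "SH" and month in (12, 1, 2):
            season_year = year if month == 12 else year - 1  # Jan/Feb join previous Dec
            seasons.setdefault(season_year, []).append(value)
    # Keep only complete three-month seasons.
    return {y: sum(v) / len(v) for y, v in seasons.items() if len(v) == 3}

# Toy data: flat anomalies of 0.0 through 1984 and 0.1 through 1985.
monthly = {(y, m): 0.1 * i for i, y in enumerate((1984, 1985)) for m in range(1, 13)}
nh = summer_average(monthly, "NH")   # {1984: 0.0, 1985: 0.1}
sh = summer_average(monthly, "SH")   # ~{1984: 0.0667}: Dec 1984 + Jan/Feb 1985
```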

In conclusion, natural cyclic change in global cloud cover has a greater impact on global average temperatures than CO2, and there is little evidence of a direct feedback relationship between clouds and CO2. Based on satellite measurements of cloud cover (ISCCP), net cloud forcing (CERES) and CO2 levels (Keeling), we developed a model for predicting global temperatures. This gives a best-fit value of TCR = 1.4 ± 0.3 °C. Summer cloud forcing has a larger effect in the northern hemisphere, giving a lower TCR = 1.0 ± 0.3 °C. Natural phenomena must influence clouds, although the details remain unclear; the CLOUD experiment has given hints that increased fluxes of cosmic rays may enhance cloud seeding [19]. The gradual reduction in net cloud cover explains over 50% of the global warming observed during the 80s and 90s, and the hiatus in warming since 1998 coincides with a stabilisation of cloud forcing.

References

1. Randall, D. A. Cloud Feedbacks. Frontiers in the Science of Climate Modeling (2006).

2. Randall, D. A. & Wood, R. A. Climate Models and Their Evaluation. (Cambridge Univ. Press: Cambridge [u.a.], 2007).

3. V. Ramanathan, R. D. Cess, E. F. Harrison, P. Minnis, B. R. Barkstrom, E. Ahmad and D. Hartmann, Cloud-Radiative Forcing and Climate: Results from the Earth Radiation Budget Experiment, Science 243, 57 (1989).

4. Jones, P. D., D. H. Lister, T. J. Osborn, C. Harpham, M. Salmon, and C. P. Morice (2012), Hemispheric and large-scale land surface air temperature variations: An extensive revision and an update to 2010, J. Geophys. Res., 117, D05127,

5. Richard P. Allan, Combining satellite data and models to estimate cloud radiative effects at the surface and in the atmosphere, RMetS Meteorol. Appl. 18: 324–333, 2011

6. Bony, S. et al. How Well Do We Understand and Evaluate Climate Change Feedback Processes? Journal of Climate 19, 3445–3482 (2006).

7. Myhre, G., E. J. Highwood, K. P. Shine and F. Stordal, New estimates of radiative forcing due to well mixed greenhouse gases, Geophys. Res. Lett. 25, 2715–2718 (1998).

8. Monthly averaged ISCCP cloud data, derived from file MnCldAmt.nc (2009). http://www.ncdc.noaa.gov/thredds/catalog/isccp/catalog.html

9. Morice, C. P., J. J. Kennedy, N. A. Rayner, and P. D. Jones (2012), Quantifying uncertainties in global and regional temperature change using an ensemble of observational estimates: The HadCRUT4 dataset, J. Geophys. Res., 117,

10. Rossow, W. B. & Schiffer, R. A. Advances in Understanding Clouds from ISCCP. Bulletin of the American Meteorological Society 80, 2261–2287 (1999).

11. Evan, A. T., A. K. Heidinger and D. J. Vimont (2007), Arguments against a physical long-term trend in global ISCCP cloud amounts, Geophys. Res. Lett. 34, L04701.

13. K. E. Trenberth, J. T. Fasullo and J. Kiehl, Earth's Global Energy Budget, Bulletin of the American Meteorological Society (2009).

14. Keeling, C. D. et al. Atmospheric carbon dioxide variations at Mauna Loa Observatory, Hawaii. Tellus 28, 538–551 (1976).

15. Solomon, S. et al. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. (Cambridge Univ. Press: Cambridge [u.a.], 2007).

16. J. Kirkby et al., Role of sulphuric acid, ammonia and galactic cosmic rays in atmospheric aerosol nucleation, Nature 476, 429–433 (2011).

Note: I am posting this only now because, after a long review process, the paper was finally rejected. I am beginning to despair of any outsider ever getting anything published in a climate science journal!


## Tidal effects in polar regions

The tidal force acting on the oceans generates tidal currents through its tractional component, the component parallel to the surface, because horizontally there is no net gravitational force acting to counteract motion. This tractional force reaches a maximum at around 45 degrees from the central tidal bulge. Chiefio has a nice article describing how the declination of the moon during spring tides can affect the induced currents in the polar regions.
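For reference, the horizontal (tractional) tidal acceleration at angular distance θ from the sub-lunar point follows the standard textbook result a = (3/2)(G·M·R/d³)·sin 2θ, which indeed peaks at θ = 45°. A quick sketch using nominal mean values for the constants (this is general tidal theory, not taken from the article mentioned above):

```python
import math

G = 6.674e-11        # gravitational constant (m^3 kg^-1 s^-2)
M_MOON = 7.342e22    # lunar mass (kg)
R_EARTH = 6.371e6    # Earth radius (m)
D_MOON = 3.844e8     # mean Earth-Moon distance (m)

def tractional_accel(theta_deg):
    """Horizontal tidal acceleration at angular distance theta (degrees)
    from the sub-lunar point: (3/2) * G * M * R / d^3 * sin(2 * theta)."""
    amplitude = 1.5 * G * M_MOON * R_EARTH / D_MOON**3
    return amplitude * math.sin(math.radians(2.0 * theta_deg))

# Zero under the bulge and at 90 degrees; maximum (~8e-7 m/s^2) at 45 degrees.
print(f"traction at 45 deg: {tractional_accel(45.0):.2e} m/s^2")
```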

Fig 1: Diagram showing how the tractional tidal component at the poles depends strongly on the lunar declination angle.

I have been using the JPL ephemeris and the French INPOP10 ephemeris to investigate how this has varied over the last 2000 years. I calculate the maximum net annual tide based on the positions of the sun and moon, the maximum tractional component at 65°N (the Arctic Circle), and what I call the 'melting index'. The melting index is meant to represent an extra ice-melt influence of tidal motion acting together with summer insolation: it is zero during the arctic winter and proportional to the average daily insolation times the tidal acceleration in summer.

Fig 2: Recent daily tidal components. The lunar tide is shown in blue, the solar tide in red. Green shows the tractional acceleration at 65°N.

The tractional component at 65°N typically varies between zero and half the net tidal force. The daily variations in the tides show a rich structure of spring and neap tides, enhanced by the perihelion of the earth's orbit and the perigee of the lunar orbit. But are there long-term changes in these cycles affecting climate?

Next I looked at the long-term dependence by calculating, for each year, the maximum of each tidal component. Figure 3 shows the annual maximum tides and their tractional component at 65°N. There is a regular ~8.8-year oscillation in the tractional force, following the apsidal precession of the lunar orbit.

Fig 3: Maximum annual tides and their tractional force exerted at 65°N.

However, if we instead find the maxima of the tractional component at 65°N independently of the maximum annual tide, the result is far more stable. The following plot shows the maximum tide and the maximum tractional acceleration calculated independently for each year. Superimposed in red is the 'melt index', defined as the tractional acceleration weighted by its offset (in days) from the midsummer maximum insolation (June 21):

MI = 1.0 − ABS(date − June 21)/(June 21 − April 1)

MI = 0 if date < April 1 or date > August 31
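A minimal implementation of this melt index. Note the denominator is written here as the positive span June 21 − April 1 = 81 days, which appears to be the intent of the formula above, and the window is taken as April 1 to August 31:

```python
from datetime import date

def melt_index(d):
    """Melt index: 1.0 at maximum insolation (June 21), falling linearly
    with the offset in days from June 21, and zero outside April-August."""
    if d.month < 4 or d.month > 8:
        return 0.0
    june21 = date(d.year, 6, 21)
    span = (june21 - date(d.year, 4, 1)).days    # 81 days
    mi = 1.0 - abs((d - june21).days) / span
    return max(mi, 0.0)

print(melt_index(date(2000, 6, 21)), melt_index(date(2000, 1, 15)))  # prints "1.0 0.0"
```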

Fig 4: The black curve shows the annual maximum tide; the blue curve shows, independently, the maximum tractional tide at 65°N. Both are essentially constant with time. However the melt index, shown in red, follows a regular ~40-year cycle marking the coincidence of maximum tides with high summer insolation.

The maximum traction at the poles is thus also nearly constant, but there is a definite cyclical effect in the timing of the maxima within the seasons, as picked up by the melt index. If we assume that large tides combined with high insolation enhance the summer melting of arctic ice, then this effect follows a roughly 40-year cycle.

Fig 5: Zoom in of last 500 years

This graph zooms in on the period 1600–2100. If the tides influence the summer melt then we would expect to see a roughly 40-year regular variation in ice extent.

Conclusions. Using either the JPL ephemeris or INPOP10, I find no evidence for long-term variation in the maximum annual tides on earth over the 2000-year period. The maximum tides do show an 8.8-year variation in their tractional forces at the Arctic Circle (65°N); however, if the maximum tractional force is instead calculated independently, it too shows no long-term variation. The only variation that may affect polar climates is in the timing of the tractional force with respect to the seasons: the occurrence of maximum tides in the arctic coincident with maximum solar insolation does follow a ~40-year cycle. The tidal currents thereby induced may affect the summer melting of sea ice.

Next I will look at far longer timescales, examining the influence of Milankovitch cycles on the earth's orbital eccentricity, which directly affects tides.
