Tides and Storms

Energy flows from the tropics to the poles. The tropics absorb most of the sun’s radiation, driving a flow of heat towards the polar regions where radiative cooling dominates. In the northern winter the Hadley cell moves heat to ~30N via huge convective currents, transporting latent heat from the tropics northwards through a band of powerful thunderstorms. The reverse happens in the northern summer, when the tropical Hadley cell moves south towards the now-winter southern hemisphere and its circulation reverses. David Randall explains this well in his book ‘Atmosphere, Clouds and Climate’ (from which the first three figures below are taken).

[Figure: the Hadley circulation, from Randall’s ‘Atmosphere, Clouds and Climate’]

The rising branch of tropical thunderstorms (the ITCZ) is located about 10 degrees from the equator in whichever hemisphere is in summer. The release of latent heat warms the troposphere to the moist adiabatic lapse rate. The warm air aloft moves meridionally and cools by radiating heat, so becoming more dense. It then descends and is deflected eastward by the Coriolis force, which increases rapidly with latitude. A counter-rotating Ferrel cell between 30 and 60 degrees, driven by the mechanical energy of storms, balances the mass flow. The Jet Stream is caused by the Coriolis force acting on the descending Hadley circulation, accelerated by zonal wind effects, and it is concentrated at about 12 km altitude by lapse-rate temperature gradients. The absolute angular momentum M per unit mass, for the earth rotating with angular velocity \Omega, at latitude \phi with zonal wind u, is:

M = (\Omega a \cos{\phi} + u)\,a\cos{\phi}

u = 0 at the equator, whereas u = 11 m/s at 30N.
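As a quick numerical illustration (my own sketch, not from the original post, using standard values for \Omega and the earth’s radius a), here is M evaluated at the two latitudes quoted above:

```python
# Absolute angular momentum per unit mass: M = (Omega*a*cos(phi) + u)*a*cos(phi)
import math

OMEGA = 7.292e-5   # Earth's angular velocity, rad/s
A     = 6.371e6    # Earth's mean radius, m

def ang_momentum(lat_deg, u):
    """M for air at latitude lat_deg moving with zonal wind u (m/s)."""
    cphi = math.cos(math.radians(lat_deg))
    return (OMEGA * A * cphi + u) * A * cphi

print(f"M at the equator (u = 0):  {ang_momentum(0.0, 0.0):.3e} m^2/s")
print(f"M at 30N (u = 11 m/s):     {ang_momentum(30.0, 11.0):.3e} m^2/s")
```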

The Jet Stream strengthens as the polar night begins. This is closely related to winter storms, because the temperature gradient between the tropics and the winter pole increases. The resulting “thermal wind” in mid-latitudes is driven by horizontal temperature gradients, which strengthen markedly in winter, while in the vertical the atmosphere remains in hydrostatic balance:

\frac{\partial P}{\partial z} = -\frac{Pg}{RT}

The wind therefore changes rapidly with height wherever the surface temperature changes rapidly in the horizontal direction, and the Jet Stream becomes stronger.
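A minimal sketch of the thermal-wind relation makes the numbers concrete (my own illustration; the temperature gradient, mean temperature and jet height are assumed values, not taken from the post):

```python
# Thermal wind relation for the zonal wind: du/dz = -(g / (f*T)) * dT/dy
import math

g     = 9.81                                      # m/s^2
OMEGA = 7.292e-5                                  # rad/s
f     = 2 * OMEGA * math.sin(math.radians(45.0))  # Coriolis parameter at 45N

T     = 250.0           # layer-mean temperature, K (assumed)
dT_dy = -10.0 / 1.0e6   # assumed winter gradient: 10 K drop per 1000 km poleward

du_dz = -(g / (f * T)) * dT_dy   # vertical shear of the zonal wind, 1/s
u_jet = du_dz * 12.0e3           # shear integrated up to ~12 km (jet level)

print(f"shear: {du_dz * 1000:.1f} (m/s) per km; jet speed ~ {u_jet:.0f} m/s")
```

With a winter-strength gradient of 10 K per 1000 km this gives a jet of roughly 45 m/s at 12 km, which is the right order of magnitude.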

[Figure: meridional temperature gradient]


If T changes rapidly in the horizontal, then the thermal wind changes strongly with height; such a state is termed ‘baroclinic’. At 30N in winter the Jet Stream winds start to increase rapidly with height, because below the Jet Stream the surface temperature gradient steepens sharply towards the ‘night-time’ pole. The steeper the temperature gradient, the stronger the Jet Stream becomes.

[Figure: zonal wind]

Given a strong Jet Stream and a large temperature gradient, conditions are now critical for the formation of big storms. A strong poleward decrease in temperature causes ‘baroclinic instability’. Such storms are technically called ‘baroclinic waves’. They are composed of cold fronts, where polar air moves south under warm tropical air, and warm fronts, where warm tropical air simultaneously rides north above cold polar air, all steered by Coriolis forces. Each front causes a rapid change in temperature on the ground. These storms are then accelerated by the release of gravitational potential energy, as dense cold air sinks and feeds kinetic energy to the storm, causing strong winds. When conditions are right for baroclinic instability, any small external disturbance can be amplified and trigger the formation of a storm. There is clear evidence that strong spring tides are one important trigger of such storms: tides provide an asymmetric disturbance which acts over large northern zones through large tractional tidal forces on the atmosphere. An example of this was the storm of 4-6 January last winter.


Formation of the storm which hit the UK on 5-6 January 2014, as a strong wave of tractional tides sweeps through the area. The storm seems to have been triggered by a strong kink in the Jet Stream dragging warm air up in the north-west Atlantic. By 4 January this second intense storm is forming fast off Newfoundland. The tidal forces are very strong and the Jet Stream starts to kink. The previous low-pressure system seems to be consumed by the next one as it grows rapidly.

In a growing perturbation, cold air slides under warm tropical air. The descending cold air causes a cold front, while warm air on the tropical side moves upwards, causing a warm front. Warm air rises and cold air descends, transporting thermal energy upwards; the center of mass of the total air column descends, releasing gravitational potential energy as kinetic energy.
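An order-of-magnitude sketch shows how much wind this can drive (my own illustration; the 100 m drop in the center of mass is an assumed value):

```python
# Energy released per kg of air when the column's center of mass drops,
# and the wind speed if all of it were converted to kinetic energy.
import math

g       = 9.81    # m/s^2
delta_h = 100.0   # assumed drop of the column's center of mass, m

energy = g * delta_h               # J released per kg of air
v_max  = math.sqrt(2.0 * energy)   # upper bound on the resulting wind speed

print(f"{energy:.0f} J/kg released -> winds of up to ~{v_max:.0f} m/s")
```

A drop of only 100 m already corresponds to winds of order 45 m/s, comparable to a severe winter storm.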

Hadley cells do not penetrate into middle latitudes. Likewise, baroclinic waves do not penetrate into the tropics. The energy transported out of the tropics by Hadley cells is ‘handed over’ to baroclinic storms, which finish the job by moving the heat onwards to the poles. The position of the Jet Stream defines the boundary between tropical air and polar air. Storms form on the northern side of the Jet Stream at the western edge of the Atlantic, and their paths are determined by the kinks (Rossby waves) in the Jet Stream. As the earth rotates, an ever-changing gravitational field of tractional forces sweeps across the Atlantic Ocean from west to east about every 12 hours. The effect is similar to a bar magnet sweeping across a sheet of paper covered in iron filings. These two videos show how strongly tractional tidal forces can vary during the lunar month and with yearly changes in lunar declination. The first is updated two-hourly for August this year, and the second shows the direct tide at a fixed position during 2006, when the moon was at its 18.6-year maximum declination.


The southward-swirling tidal force acting on the Jet Stream is about 10 metric tons per km, and this gravitational force varies strongly in strength and position during the lunar month.
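As a rough order-of-magnitude check (my own sketch, not the author’s calculation; the air-column mass is an assumed value), the peak horizontal ‘tractive’ lunar tidal acceleration at the surface follows from standard constants:

```python
# Peak horizontal (tractive) lunar tidal acceleration at the Earth's surface:
# a_h = (3/2) * G * M_moon * R / d^3, reached 45 degrees from the sub-lunar point.
G      = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M_MOON = 7.342e22    # lunar mass, kg
R      = 6.371e6     # Earth radius, m
D      = 3.844e8     # mean Earth-Moon distance, m

a_h = 1.5 * G * M_MOON * R / D**3
print(f"tractive acceleration: {a_h:.1e} m/s^2 (~{a_h / 9.81:.1e} g)")

# Horizontal pull on the air above 1 km^2, assuming ~1e4 kg/m^2 of atmosphere
col_mass = 1.0e4 * 1.0e6                    # kg
force_t  = a_h * col_mass / 9.81 / 1000.0   # tonne-force
print(f"pull on a 1 km^2 air column: ~{force_t:.1f} tonne-force")
```

The acceleration is tiny (~10^-7 g), but summed over an air column it amounts to a steady horizontal pull of order a tonne-force per square kilometre. There are two possible effects of tides: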

  1. They can distort the Jet Stream, causing kinks which change the path of mid-latitude storms.
  2. They can seed winter storms by disturbing baroclinic instability at times of maximum tidal force.

Therefore, for lunar tides to affect mid-latitude weather we need the following conditions:

  • A sharp horizontal temperature gradient, leading to a strong Jet Stream and baroclinic instability.
  • Increasing spring tides with strong tractional forces at high latitudes.
  • Anticyclonic flow triggered by these forces, leading to cold/warm fronts and storms moving eastwards.
  • Storms guided across the Atlantic by the Jet Stream.
  • A Jet Stream that is itself distorted by the changing tidal tweaks.

Jet stream flow 3-6 January 2014

Their strength and position are enhanced by growing instability within the changing atmospheric tides over the following days. This tractional tidal force field also affects the Rossby waves in the Jet Stream.

Last winter the UK experienced a series of severe storms. The Jet Stream was locked into a pattern whereby the eastern US was extremely cold while warmer air resided over the western Atlantic. Storms were spun off the boundary between the very cold air over Newfoundland and the warmer tropical air in the western Atlantic. Each major storm coincided with maximum tides, bringing coastal flooding to the UK as large waves coincided with high tides.

So there is some direct evidence that atmospheric tides can trigger storms. Nearly all of last winter’s storms coincided with maximum spring tides, which can hardly be just a coincidence. It would be relatively easy to include tidal forces in GCM forecast models; the moon is just one more external force acting on the circulation of the atmosphere. I strongly suspect that medium-range weather forecasts would improve significantly if the dynamics of tides were properly taken into account.
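Schematically, the change amounts to one extra term in the horizontal momentum tendency (a purely illustrative sketch; the function and variable names are hypothetical and not from any real GCM):

```python
# Hypothetical sketch: add a lunar tidal acceleration to the wind update
# alongside the usual dynamical tendencies at each model time step.
def step_wind(u, v, dudt_dyn, dvdt_dyn, tide_ax, tide_ay, dt):
    """Advance the horizontal wind one explicit step, including a tidal term."""
    u_new = u + (dudt_dyn + tide_ax) * dt
    v_new = v + (dvdt_dyn + tide_ay) * dt
    return u_new, v_new

# Example: a 10 m/s zonal wind nudged by a ~1e-6 m/s^2 tractive acceleration
print(step_wind(10.0, 0.0, 0.0, 0.0, 8.2e-7, -8.2e-7, 600.0))
```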


Logic of Gavin

Gavin Schmidt (@climateofgavin) has been defending the detection and attribution chapter of AR5 against criticisms made by Judith Curry and others. His technique is to dismiss such criticism either because it covers a slightly different time scale, or because the authors fail to understand the IPCC ‘fingerprinting’ studies. He avoids addressing the growing evidence of a 60-year natural oceanic heat cycle, or the likelihood that its effect on warming between 1950 and 2005 was non-zero, despite the fact that a downturn in this cycle can also explain the 16-year pause in warming since 1998. The IPCC fingerprinting studies described in chapter 10 of AR5 seem rather opaque, even a bit like ‘black magic’, because at their core the attribution always falls back on ‘expert’ assessments whose quantitative reasoning is not documented. In particular, why are expert assessments of the ‘natural variation’ in the temperature data found to be essentially zero? And why is the observed warming from 1940 to 2000 only 0.42C, compared to the 0.6C from 1951 to 2005 on which the attribution studies are based?

Key examples of Gavin’s logic to any counter-argument are as follows:

– First example: in reply to Judith Curry’s post ‘50-50 attribution’.

Watch the pea under the thimble here. The IPCC statements were from a relatively long period (i.e. 1950 to 2005/2010). Judith jumps to assessing shorter trends (i.e. from 1980) and shorter periods obviously have the potential to have a higher component of internal variability. The whole point about looking at longer periods is that internal oscillations have a smaller contribution. Since she is arguing that the AMO/PDO have potentially multi-decadal periods, then she should be supportive of using multi-decadal periods (i.e. 50, 60 years or more) for the attribution.

Well, I am not sure she was actually saying that. She used a hypothetical short time period just to make her point clearer, and Gavin jumped on that to dismiss the rest of her points. Her main argument was indeed about the 1951-2010 period.

Here, however, Gavin appears to be clutching at straws to avoid the conclusion of Tung and Zhou that the AMO oscillation is real.

What is the evidence that all 60-80yr variability is natural? Variations in forcings (in particularly aerosols, and maybe solar) can easily project onto this timescale and so any separation of forced vs. internal variability is really difficult based on statistical arguments alone (see also Mann et al, 2014). Indeed, it is the attribution exercise that helps you conclude what the magnitude of any internal oscillations might be. Note that if we were only looking at the global mean temperature, there would be quite a lot of wiggle room for different contributions. Looking deeper into different variables and spatial patterns is what allows for a more precise result.

The assessment that internal variability is zero is just based on “expert assessment”. So if you or I can see a clear 60 year oscillation in the temperature data, we are simply deluded because we are not “experts”. Only experts can interpret the ‘fingerprint’ of global warming.

– Second example: does the following not seem more like downright fiddling?

Judith’s argument misstates how forcing fingerprints from GCMs are used in attribution studies. Notably, they are scaled to get the best fit to the observations (along with the other terms). If the models all had sensitivities of either 1ºC or 6ºC, the attribution to anthropogenic changes would be the same as long as the pattern of change was robust. What would change would be the scaling – less than one would imply a better fit with a lower sensitivity (or smaller forcing), and vice versa (see figure 10.4).

So the ‘experts’ can scale the anthropogenic forcings in the models up or down so as to exactly match the data. And because the models use stochastic internal variation while excluding multi-decadal variation, the ‘expert opinion’ of the modellers will be that natural variation averages to zero. In other words, when you scale zero by any factor whatsoever you still get zero!

– Third example: now let’s look at Fig 10.5.



Fig 10.5 from AR5. ANT is the net anthropogenic forcing. I do not understand how the ANT errors get smaller after adding GHG and OA together!

Both Paul Matthews and I queried in October 2013 how it was possible for the ANT forcing to have smaller error bars than its component parts: GHG has an error of 0.4C and OA (aerosols) has an error of 0.35C, while ANT has an error of just 0.1C. The normal way to combine independent errors is to sum them in quadrature, which would give an ANT error of about 0.5C.
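The arithmetic is a one-liner (using the error bars quoted above):

```python
# Combining the quoted component errors in quadrature
ghg_err = 0.40   # C, error on the GHG contribution
oa_err  = 0.35   # C, error on the OA (aerosol) contribution

ant_err = (ghg_err**2 + oa_err**2) ** 0.5
print(f"expected ANT error: {ant_err:.2f} C")   # ~0.53 C, versus 0.1 C in Fig 10.5
```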

This was Gavin’s response.

gavin says: 14 Oct 2013 at 1:01 PM
“Just for completeness, and to preempt any confusion, this post from Paul Matthews, is a typical example of what happens when people are so convinced by their prior beliefs that they stop paying attention to what is actually being done. Specifically, Matthews is confusing the estimates of radiative forcing since 1750 with a completely separate calculation of the best fits to the response for 1951-2010. Even if the time periods were commensurate, it still wouldn’t be correct because (as explained above), the attribution statements are based on fingerprint matching of the anthropogenic pattern in toto, not the somewhat overlapping patterns for GHGs and aerosols independently. Here is a simply example of how it works. Let’s say that models predict that the response to greenhouse gases is A+b and to aerosols is -(A+c). The “A” part is a common response to both, while the ‘b’ and ‘c’ components are smaller in magnitude and reflect the specific details of the physics and response. The aerosol pattern is negative (i.e. cooling). The total response is expected to be roughly X*(A+b)-Y*(A+c) (i.e. some factor X for the GHGs and some factor Y for the aerosols). This is equivalent to (X-Y)*A + some smaller terms. Thus if the real world pattern is d*A + e (again with ‘e’ being smaller in magnitude), an attribution study will conclude that (X-Y) ~= d. Now since ‘b’ and ‘c’ and ‘e’ are relatively small, the errors in determining X and Y independently are higher. This is completely different to the situation where you try and determine X and Y from the bottom up without going via the fingerprints (A+b or A+c) or observations (A+d) at all. 
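To see how the scheme Gavin describes behaves numerically, here is a toy version of it (all patterns, scalings and noise levels are invented purely to illustrate the algebra). Two fingerprints sharing a large common pattern A are regressed against synthetic observations; the individual scaling factors X and Y come out with large uncertainties, while their difference, the net anthropogenic scaling, is tightly constrained:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500                       # invented number of "spatial" points
A = rng.normal(0.0, 1.0, n)   # large pattern common to both forcings
b = rng.normal(0.0, 0.1, n)   # small GHG-specific detail
c = rng.normal(0.0, 0.1, n)   # small aerosol-specific detail

ghg = A + b      # GHG fingerprint
aer = -(A + c)   # aerosol fingerprint (cooling, hence the sign)

# Synthetic "observations" with true scalings X = 1.4, Y = 0.6, plus noise
obs = 1.4 * ghg + 0.6 * aer + rng.normal(0.0, 0.3, n)

Gm   = np.column_stack([ghg, aer])
coef = np.linalg.lstsq(Gm, obs, rcond=None)[0]
X, Y = coef

# Standard least-squares parameter covariance
resid  = obs - Gm @ coef
sigma2 = resid @ resid / (n - 2)
cov    = sigma2 * np.linalg.inv(Gm.T @ Gm)

err_X, err_Y = np.sqrt(np.diag(cov))
err_net = np.sqrt(cov[0, 0] + cov[1, 1] - 2.0 * cov[0, 1])

print(f"X = {X:.2f} +/- {err_X:.2f}, Y = {Y:.2f} +/- {err_Y:.2f}")
print(f"net anthropogenic scaling X - Y = {X - Y:.2f} +/- {err_net:.2f}")
```

Because the two fingerprints are nearly proportional (both dominated by A), the fit constrains X - Y far better than X or Y separately, which is exactly the behaviour Gavin invokes.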

Gavin’s reply is a remarkable statement (quite apart from the time-scale rebuff). In the argument above he claims that anthropogenic indeed means GHG + aerosols, but that the two are proportional with opposite sign: apart from some minor terms, Aerosols = -const × GHG. That really is the same thing as saying

ANT = GHG – fudge factor × GHG

ANT = (1 – fudge factor) × GHG = all ‘observed’ warming, eliminating the need for any natural component at all.

The fudge factor is the year-to-year tuning needed to make CMIP5 hindcasts agree with the temperature data.

Now look at what Chapter 7 of AR5 actually has to say about aerosols:

‘Aerosols dominate the uncertainty in the total anthropogenic radiative forcing. A complete understanding of past and future climate change requires a thorough assessment of aerosol-cloud-radiation interactions.’

Or at what a real aerosol expert has to say:

‘The IPCC is effectively saying that the cooling influence from aerosols is slightly weaker than previously estimated and that our understanding has improved.

[Figure: aerosol radiative forcing estimates]
The other values refer to different ways of calculating the impact and it is these numbers that inform the overall value in the report. The satellite based value refers to studies where satellite measurements of aerosol properties are used in conjunction with climate models; they are not wholly measurement based. In terms of studies using climate models on their own, the IPCC used a subset of climate models for their radiative forcing assessment, choosing those that had a “more complete and consistent treatment of aerosol-cloud interactions”.

The satellite based central value of -0.85 W/m2 is less negative than the central value from the climate models, which means the models indicate more cooling than the satellite based estimate. Compared to the subset of climate models that the IPCC used for their radiative forcing judgement, there is little overlap between their ranges also.

If we examine more details from the climate models, we see that there are large differences between models in terms of what types of aerosol it considers to be important. For example some models say that dust is a major contributor to the global aerosol burden, while others disagree. These are important details as climate models can sometimes broadly agree in terms of the radiative forcing estimate they provide but for very different reasons. Black carbon is another species that can contribute to varying degrees in different models, which is important as it warms the atmosphere; how a model represents black carbon is going to have a strong influence on the reported cooling. Nitrate is a potentially important species that often isn’t even included in climate models.

The current state of understanding of aerosols suggests that they’ve exerted a cooling influence on our climate, which has offset some of the warming expected from the increase in greenhouse gases in our atmosphere. Improving this understanding will be crucial for assessing both past and future climate change.

So these experts emphasize the uncertainties in the net forcing from aerosols and clouds. There is little evidence that anthropogenic cooling scales in proportion to CO2 emissions. Furthermore, the models still appear to exaggerate the cooling effects of aerosols.

Finally, here is a classic Gavin put-down.

I have tried to follow the proposed logic of Judith’s points here, but unfortunately each one of these arguments is either based on a misunderstanding, an unfamiliarity with what is actually being done or is a red herring associated with shorter-term variability. If Judith is interested in why her arguments are not convincing to others, perhaps this can give her some clues.

This simply dismisses any evidence of the short-term variation that is visibly present in the data, because it goes against the ‘expert fingerprint assessment’. Nor is the AR5 attribution time-scale anywhere near long enough for natural variation to average to zero, as can be seen in the figure below.


Figure 1. A fit to HADCRUT4 temperature anomaly data.

It looks very much like a 60-year oscillation to me – but then again, I am no ‘expert’!
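For anyone who wants to check, a minimal version of such a fit takes only a few lines (a sketch of one simple approach, not the exact fit behind Figure 1; the file name and two-column year/anomaly format are assumed):

```python
# Fit annual HADCRUT4 anomalies with a linear trend plus a 60-year oscillation
import numpy as np
from scipy.optimize import curve_fit

def model(t, a, b, amp, t0):
    """Linear trend plus a sinusoid with a fixed 60-year period."""
    return a + b * t + amp * np.sin(2.0 * np.pi * (t - t0) / 60.0)

year, anom = np.loadtxt("hadcrut4_annual.txt", unpack=True)  # assumed local file
p, cov = curve_fit(model, year, anom, p0=[-0.3, 0.005, 0.1, 1940.0])

print(f"trend: {p[1] * 100:.2f} C/century, 60-yr amplitude: {p[2]:.2f} C")
```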

The quotes above are taken from these posts:

  1. Natural versus Anthropogenic
  2. IPCC attribution statements redux: A response to Judith Curry
  3. The 50 50 argument
  4. The IPCC AR5 Attribution Statement



IPCC AR5 Attribution Statement is wrong

The headline statement on anthropogenic warming in the Summary for Policy Makers, taken from chapter 10 of WG1, reads as follows.

It is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together. The best estimate of the human-induced contribution to warming is similar to the observed warming over this period.

This statement is false, because 1951-2010 includes some natural warming from the 60-year AMO heat cycle reported by Tung and Zhou. Fig 10.5 in AR5 shows the natural internal component as 0.0±0.05C, which is wrong because the natural temperature cycle does not average to zero over this period. This is discussed in Natural versus Anthropogenic. My corrected Fig 10.5 is shown below.


The updated Fig 10.5, showing the attribution of warming from 1950 to 2010 where the natural internal variation has been measured by comparing 1940 with 2010. This gives the net observed warming, because both 1940 and 2010 are peaks of the natural variation, which subtracts out when taking the difference. For details see Natural versus Anthropogenic.

The residual natural warming present in the 1950-2010 period is 0.2±0.05C, while the observed anthropogenic component, measured between 1940 and 2010, is 0.45±0.05C. Both 1940 and 2010 are peaks of the AMO signal, which subtracts out when taking the difference. This is why the ‘observed’ global warming trend should be measured from 1940 to 2010 and not from 1950 to 2010.

The CMIP5 values for both GHG and the net anthropogenic signal are now about 50% higher than the observed warming. The correct attribution for the observed warming from 1951 to 2010 is therefore 75% anthropogenic and 25% natural variability. The CMIP5 sensitivity to CO2 is very likely about 50% too strong and overestimates the observed warming once the natural variation is properly included.

A more impartial version of the attribution statement would be:

It is very likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together, while about one third was due to natural variation. The best estimate of the human-induced contribution to warming is about 50% greater than the observed warming over this period. 

The IPCC will live to regret their selective bias in analysing only the period 1950-2010.
