# Tides and Storms

Energy flows from the tropics to the poles. The tropics absorb most of the sun’s radiation, driving heat flow to the polar regions where radiative cooling dominates. In the northern winter the Hadley cell moves heat to about 30N via huge convective currents: a band of powerful thunderstorms transports latent heat from the tropics northwards. The circulation reverses in the northern summer, when the tropical Hadley cell shifts southwards towards the now-winter southern hemisphere. David Randall explains this well in his book ‘Atmosphere, Clouds and Climate’ (from which the first 3 figures below are taken).

The rising branch of tropical thunderstorms (the ITCZ) is located about 10 degrees from the equator in whichever is the summer hemisphere. The release of latent heat warms the troposphere to the moist adiabatic lapse rate. The warm air aloft moves meridionally and cools by radiating heat, so becoming denser. It then descends and twists eastward because the Coriolis force increases rapidly with latitude. A counter-rotating Ferrel cell between 30 and 60 degrees, driven by the mechanical energy of storms, balances the mass flow. The jet stream is caused by the Coriolis force acting on the descending Hadley circulation, accelerated by zonal wind effects. It is concentrated at about 12 km altitude due to lapse-rate temperature gradients. The angular momentum M of air on an earth of radius a rotating with angular velocity $\Omega$, at latitude $\phi$ and with zonal wind u, is:

$M = (\Omega a \cos{\phi} + u)a\cos{\phi}$

u = 0 at the equator whereas u = 11 m/s at 30N
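The formula can be checked numerically. A minimal sketch (standard values for $\Omega$ and the earth’s radius are assumed): a parcel starting from rest at the equator and conserving M exactly would arrive at 30N with u of roughly 134 m/s — far stronger than real jets, because friction and eddies prevent exact conservation.

```python
import math

OMEGA = 7.292e-5   # earth's rotation rate, rad/s
A = 6.371e6        # earth's mean radius, m

def conserved_u(lat_deg):
    """Zonal wind at latitude lat_deg for a parcel that starts at rest on
    the equator and conserves M = (Omega*a*cos(phi) + u) * a*cos(phi)."""
    phi = math.radians(lat_deg)
    return OMEGA * A * math.sin(phi) ** 2 / math.cos(phi)

print(round(conserved_u(30.0), 1))  # about 134 m/s
```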

The jet stream strengthens as the polar night begins. This is closely related to winter storms, as the temperature gradient between the tropics and the winter pole increases. This “thermal wind” in mid-latitudes is caused by horizontal temperature gradients, which strengthen sharply in winter. Vertically the atmosphere remains close to hydrostatic balance:

$\frac{\partial P}{\partial z} = -\frac{Pg}{RT}$

Wind changes rapidly with height where the surface temperature changes rapidly in the horizontal direction, and the jet stream becomes stronger.
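For a layer of constant temperature the hydrostatic equation integrates to $P = P_0 e^{-z/H}$ with scale height $H = RT/g$. A quick sketch (the 250 K mean temperature is an illustrative assumption) shows why the jet sits near 12 km, where the pressure has fallen to roughly 200 hPa:

```python
import math

R_DRY = 287.0   # specific gas constant for dry air, J/(kg K)
G = 9.81        # gravitational acceleration, m/s^2

def pressure_hpa(z_m, p0_hpa=1000.0, temp_k=250.0):
    """Isothermal solution of dP/dz = -P g / (R T): P = P0 * exp(-z/H)."""
    scale_height = R_DRY * temp_k / G   # about 7.3 km at 250 K
    return p0_hpa * math.exp(-z_m / scale_height)

print(round(pressure_hpa(12_000.0)))  # roughly 194 hPa
```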

If T changes rapidly in the horizontal then the thermal wind increases strongly with height. This state is termed ‘baroclinic’. At 30N in winter the jet stream winds start to increase massively with height, because below the jet stream the surface temperature gradient steepens rapidly towards the ‘night-time’ pole. So the steeper the temperature gradient, the stronger the jet stream becomes.
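This link between horizontal temperature gradients and wind shear is the thermal-wind relation, $\partial u/\partial z \approx -(g/fT)\,\partial T/\partial y$. A sketch (the 5 K per 1000 km gradient is an illustrative assumption, not a quoted value) shows a modest winter gradient building a jet of tens of m/s over the depth of the troposphere:

```python
import math

OMEGA = 7.292e-5   # earth's rotation rate, rad/s
G = 9.81           # gravitational acceleration, m/s^2

def thermal_wind_shear(lat_deg, dT_dy, temp_k=250.0):
    """Vertical shear of zonal wind, du/dz = -(g / (f T)) * dT/dy,
    where f = 2*Omega*sin(lat) is the Coriolis parameter."""
    f = 2.0 * OMEGA * math.sin(math.radians(lat_deg))
    return -(G / (f * temp_k)) * dT_dy

# temperature falling 5 K per 1000 km towards the pole, at 30N
shear = thermal_wind_shear(30.0, -5e-6)
print(round(shear * 12_000.0, 1))  # wind gained over 12 km of depth, m/s
```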

Given a strong jet stream and a large temperature gradient, conditions are now critical for the formation of big storms. A strong poleward decrease in temperature causes ‘baroclinic instability’. Such storms are technically called ‘baroclinic waves’ and are composed of cold fronts, as polar air moves south under warm tropical air, and warm fronts, as warm tropical air simultaneously rises north above cold polar air, all driven by Coriolis forces. Each front causes a rapid change in temperature at the ground. These storms are then accelerated by the release of gravitational potential energy: dense cold air sinks, feeding kinetic energy to the storm and causing strong winds. When conditions are right for baroclinic instability, any small external disturbance will be amplified and can trigger the formation of a storm. There is clear evidence that strong spring tides are one important trigger of such storms. Tides provide an asymmetric disturbance which acts over large northern zones through changing tractional tidal forces on the atmosphere. An example of this was the storm of Jan 4-6 last winter.

Formation of the storm which hit the UK on 5-6 Jan 2014. A strong wave of tractional tides sweeps through the area. The new storm seems to have been triggered by a strong kink in the jet stream dragging warm air up in the north-west Atlantic. By 4 January this second intense storm was forming fast off Newfoundland. The tidal forces are very strong and the jet stream starts to kink. The previous low-pressure system seems to be consumed by the next as it grows fast.

In a growing perturbation cold air slides under warm tropical air. The descending cold air forms a cold front, while warm air on the tropical side moves upwards, forming a warm front. Warm air rises and cold air descends, transporting thermal energy upwards; the centre of mass of the total air column descends, releasing gravitational energy as kinetic energy. This forms a low-pressure vortex at the centre.

Hadley cells do not penetrate into middle latitudes. Likewise, baroclinic waves do not penetrate into the tropics. The energy transported from the tropics by Hadley cells is ‘handed over’ to these baroclinic storms to finish the job by moving the heat onwards to the poles. The jet stream position defines this boundary between tropical air and polar air. Storms form on the northern side of the jet stream at the eastern edge of the Atlantic. The path of these storms is determined by the kinks (Rossby waves) in the jet stream. As the earth rotates, an ever-changing gravitational field of tractional forces sweeps across the Atlantic ocean from west to east about every 12 hours. The effect is similar to a bar magnet sweeping across a sheet of paper covered in iron filings. These two videos show how strongly tractional tidal forces can vary, both during the lunar month and yearly with gradual changes in lunar declination. The first is updated 2-hourly for August this year, and the second shows the direct tide for a fixed position during 2006, when the moon was at its 18.6-year maximum declination.

The southward-swirling tidal force acting on the jet stream amounts to about 10 metric tons per km. This gravitational force varies strongly in strength and position during the lunar month. There are two possible effects of tides.

1. They can distort the jet stream, causing kinks which change the path of mid-latitude storms.
2. They can seed winter storms by disturbing baroclinic instability at maximum tidal forces.

Therefore, for lunar tides to affect mid-latitude weather we need the following conditions:
• A sharp horizontal temperature gradient leads to a strong jet stream and baroclinic instability.
• Increasing spring tides with strong tractional forces at high latitudes can trigger anticyclonic flow, leading to cold/warm fronts and storms moving eastwards.
• These storms are guided across the Atlantic by the jet stream.
• The jet stream itself is distorted by the same changing tidal ‘tweaks’.
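The ~10 metric tons per km figure is quoted without derivation, so here is a hedged order-of-magnitude sketch (standard lunar constants assumed; the result depends entirely on how much air the force is taken to act on). The maximum horizontal (‘tractional’) lunar tidal acceleration, which occurs 45 degrees from the sublunar point, is $(3/2)\,G M_m r/d^3$:

```python
G_CONST = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M_MOON = 7.342e22      # lunar mass, kg
EARTH_R = 6.371e6      # earth's mean radius, m
MOON_D = 3.844e8       # mean earth-moon distance, m
COL_MASS = 1.03e4      # mass of the air column above 1 m^2, kg (~1013 hPa / g)

# maximum horizontal (tractional) tidal acceleration, found 45 deg
# from the sublunar point: (3/2) * G * M * r / d^3
a_tract = 1.5 * G_CONST * M_MOON * EARTH_R / MOON_D**3

# weight-equivalent force on the air above one square kilometre
tonnes_per_km2 = a_tract * COL_MASS * 1e6 / 9.81 / 1000.0
print(f"{a_tract:.1e} m/s^2, {tonnes_per_km2:.2f} tonne-force per km^2")
```

This comes out near $8\times10^{-7}$ m/s², or under a tonne-force per km² of atmosphere — tiny compared with pressure-gradient forces, which is consistent with the framing of tides as a trigger of an already unstable state rather than a driver.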

Jet stream flow 3-6 January 2014

The strength and position of the storms are enhanced by growing instability and changing atmospheric tides over the following days. The same tractional tidal force field also modifies Rossby waves in the jet stream.

Last winter the UK experienced a series of severe storms. The Jet Stream was locked into a pattern whereby the eastern US was extremely cold while warmer air resided over the western Atlantic.  Storms were spun off the boundary between very cold air over Newfoundland and warmer tropical air moving up over the western Atlantic. Each major storm coincided with a maximum tide bringing coastal flooding to the UK as large waves were also enhanced by storm surges.

So there is direct evidence that atmospheric tides can trigger storms. Nearly all of last winter’s storms coincided with maximum spring tides. This cannot just be a coincidence. It would be relatively easy to include such tidal forces in GCM weather-forecast models. The moon is just one more external force acting on the circulation of the atmosphere. I strongly suspect that medium-range weather forecasts would improve significantly if the dynamics of tides were properly taken into account.


### 3 Responses to Tides and Storms

1. DrO says:

Dear Mr Best

I am not sure why/how you imagine that any of this would be “easy to incorporate in GCM’s”. In fact, GCM’s are, almost surely, intractable for reasons I explain at the end.

Before that, even if we were dealing with a deterministic system, adding support in GCM’s for the type of phenomenon discussed here would be a substantial problem. Just one of the big difficulties is that weather phenomena of this sort would require much higher mesh resolutions in the numerical solvers of the Partial Differential Equations (PDE’s). A few of the many issues following from this include:

1) Even the best current GCM’s tend to have mesh sizes (in the horizontal) of 100 km or more. To represent weather phenomena of this sort, the resolution (in the horizontal) may need to be reduced to tens of km, or less. This would immediately create big problems, such as:

a) The computational expense skyrockets. In 3-D, for example, a doubling of mesh resolution would (approximately) increase the computational cost by a factor of 8. As you can see, increasing the resolution by a factor of 10 could easily increase the computational expense by factors in the 100’s or 1,000’s (e.g. you would need the equivalent of 1,000 super computers for each one used at the lower resolutions).
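The scaling in (a) can be sketched directly. The factor of 8 counts the three spatial dimensions alone; for an explicit, CFL-limited time-stepping scheme (an assumption here, since schemes vary) the time step must shrink along with the mesh, adding one more factor of the refinement ratio:

```python
def refinement_cost(refine, spatial_dims=3, cfl_limited=True):
    """Relative computational cost of refining every mesh direction by
    `refine`; a CFL-limited scheme also needs `refine` times more steps."""
    cost = refine ** spatial_dims
    if cfl_limited:
        cost *= refine
    return cost

print(refinement_cost(2, cfl_limited=False))  # 8: the spatial factor quoted above
print(refinement_cost(2))                     # 16 once the time step shrinks too
print(refinement_cost(10))                    # 10000 for a tenfold refinement
```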

b) All GCM’s use (many and huge) assumptions to reduce the complexity of the problem to something a super computer can handle. One of the types of simplifications is to treat the core pressure PDE’s in a form that is referred to as “elliptical”. To handle weather phenomena of the sort discussed here, almost surely the models would need to be extended to support parabolic/hyperbolic character in the PDE’s. Those are very much more difficult problems to solve.

c) Many GCM’s rely on a special method for solving the PDE’s in the horizontal, commonly referred to as “Spectral Methods” (SM’s), very similar to Finite Element Methods (FEM’s). The key advantage of SM’s is that they are “efficient” on low resolution grids (i.e. large mesh elements). Put differently, they are good’ish when the spatial discretisation is a few large blocks.

To support these types of weather phenomenon, the mesh resolution must necessarily increase, which necessarily means that the discretisation will have many small blocks.

A key weakness of SM’s is that numerical errors tend to be worst/propagate etc near the boundaries of the mesh blocks. That is, simply by increasing mesh resolution one introduces not only much more computational effort, but with SM’s, much more numerical error.

2) Similar issues exist in the vertical, though many models tend to use Finite Difference (FD) in the vertical, so the “boundary error” problem is not quite as bad as with SM’s, but the resolution may require discretising into 100’s of meters, not several km’s, thus massively increasing the computational expense.

… the list goes on.

Beyond this, all you need is just one volcano (or solar something, etc.) to make a complete dog’s breakfast of the modelling.

Of course, none of that is really important in the big scheme of things, since the entire climate modelling problem is almost surely intractable (at least in the sense of how, for example, the UN/IPCC would like to believe their models to be “meaningful”).

Your own discussion hit the nail on the head in the part where you state that small local changes can have large effects. The climate is NOT deterministic. We don’t actually know how aperiodic/stochastic it may be, since nobody has actually done the analysis (nor can they).

Nevertheless, it seems to be a near certainty that the climate is aperiodic … as soon as you have that, deterministic PDE’s are meaningless (e.g. pretty much all of the GCM’s in current use).

Even worse, and the entire point of non-linear dynamics (the topic which covers aperiodic systems, chaos, fractals, etc) is that if small changes have a big effect, then those problems are often necessarily intractable, even if you had managed to obtain the “correct” PDE’s, since there would be always small perturbations and errors introduced by inaccurate initial/boundary conditions, inaccurate parameterisation, numerically induced inaccuracies and so on, and so on … which then grow/explode … and thus the model answer and the correct answer diverge massively.

Thus, even if you managed to obtain the “correct” equations (and had millions more super-computers), you could still not forecast in any practical or meaningful sense (see the Appendices to Note 1 here (http://www.thebajors.com/climategames.htm)).

• Clive Best says:

DrO,

Everything you write is more or less correct. However, the best we can do is simply to base atmospheric circulation on Navier-Stokes and some basic conservation equations.

I know little of the details of GCM models, but I do have experience of working on Monte Carlo calculations to solve PDEs in particle physics and on MHD codes in fusion. Yes, it is a basic requirement to have a stable physical model to describe the world. Often that is an idealization, and for sure weather shows chaotic properties. Weather forecasts begin to diverge after only 5-6 days based just on small changes in initial conditions.

There probably is a fundamental limit to determinism in meteorology which no amount of computer power can ever solve. The timescale limit for accurate forecasts is perhaps 1 week.

However the moon is perfectly predictable. The tides can be calculated accurately years in advance. If indeed the Jet stream and winter storms respond to changing atmospheric tides, then adding their effect should be easy.

• DrO says:

I think you may have missed the point entirely. I am not sure if your answer implies a misunderstanding of “stability”.

While there are many dynamics that we can model/predict reliably, that in itself is neither a necessary nor a sufficient condition that we can do so for all dynamics. Indeed, it is a natural consequence of the structure of the universe that many types of dynamics defy predictability, as a kind of raison d’etre (this is written into the fabric of the space-time continuum).

Let’s put aside for the moment issues relating to computational methods (e.g. Monte Carlo, Spectral, FD, etc), as those are purely technical/mechanical issues (though non-trivial in nature).

Instead, focus on the “model(s)”. Notice that applying a numerical method (e.g. Monte Carlo) to solve a model is NOT the model; it is just the “solving” of the model.

As such, there are distinctions between the properties of models, and the properties of numerical methods used to solve the models.

Why do models need to be “solved”? The most common method for deriving models is via “conservation” equations (e.g. energy- or mass-balances, etc), and if the model is dynamic, then this almost always leads to Differential Equations (DE’s), or equivalently in some cases, Integral Equations (IE’s). DE’s and IE’s need to be solved to get the “answer”. This is much like someone telling you that the “model/dynamic” of your investment process is 5% growth; you then have to “solve” that model to determine what your bank balance will be at the end of some period. The solution step is trivially simple here, but if you got the model wrong, it makes no difference how good your solution step is.

Let’s suppose a model is derived to represent various heat and mass flows via a system of (Partial) Differential Equations (PDE’s). Some of these will look a little like Navier-Stokes, but not all; I provide an additional comment re Navier-Stokes at the bottom (it can’t actually be solved, see the PS).

What do we know about the stability of such a model?

To begin with, we should know something about the stability of the actual dynamic being modelled.

Roughly speaking, dynamical systems in the universe can be categorised into two types: Fundamentally Stable Phenomena (FSP), and (what a surprise) Fundamentally Unstable Phenomena (FUP).

FSP’s include many matters, for example, modelling the trajectory of a football or artillery shell. We can model/solve those problems “as close as you like”. The destination of the football varies smoothly and predictably with small changes in the “throw”, etc. It is a “stable” problem (ignoring the case of a big puff of wind, etc.).

However, there are dynamical systems that are “naturally unstable”.

For example, the financial markets, the climate, certain types of planetary problems (e.g. the “many body problem”), are all examples of FUP’s. Notice that FUP’s need not be giant systems, a lava lamp is also an FUP. The simplest machine that I can think of that is an FUP is what is called a Hele-Shaw cell. I can send you a pedestrian version of some notes I have on the simplest Hele-Shaw cell, if you request it.

I trust we can agree that, say, the lava lamp is “unstable”. Indeed, it is actually “stably unstable”. This property is called aperiodicity (e.g. not quite periodic).

By far the simplest demonstrations of FUP’s is via the “kneading dough” thought experiment, as detailed in the Appendix of Note 1 here (http://www.thebajors.com/climategames.htm). I urge you to work through that thought experiment.

Another simple illustration of an FUP process is the Logistic equation, which is sometimes used to model populations and so-called “predator-prey” dynamics. The Appendix of Note 1 at (http://www.thebajors.com/climategames.htm) includes some discussion of the Logistic equation, and there is a spreadsheet provided there also. I urge you to play with that spreadsheet and observe the impact of small changes in the initial conditions, and small changes in its parameter (lambda).

That experiment will easily demonstrate both (natural) instability due to changes in the initial conditions, and (natural) instability due to changes in parameterisation.
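For readers without the spreadsheet, the experiment is easy to reproduce in a few lines (a sketch; lambda = 4 puts the map in its chaotic regime):

```python
def logistic(x0, lam, n):
    """Iterate the logistic map x -> lam * x * (1 - x) n times."""
    x = x0
    for _ in range(n):
        x = lam * x * (1 - x)
    return x

# Two starts differing by one part in a billion: in the chaotic regime
# the separation roughly doubles each iteration, so it reaches order
# one within a few dozen steps.
for n in (10, 30, 60):
    gap = abs(logistic(0.3, 4.0, n) - logistic(0.3 + 1e-9, 4.0, n))
    print(n, gap)
```

Reducing lambda below about 3.57 makes the same code settle onto a fixed point or periodic cycle, which is exactly the sensitivity to parameterisation described above.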

The point for the moment is this: If the dynamic is naturally unstable, then any reasonable model must somehow capture/reflect that instability. I trust we agree that modelling an unstable system with a “stable” model is a failure (e.g. somewhat like modelling a clock with a quadratic equation is a failure).

The kneading dough thought experiment, lava lamps, Hele-Shaw cell, or the Logistic equation are each deterministic (i.e. there is no statistical/stochastic element, as such). As such, one might be lulled into a false sense of security, thinking they could be dealt with in a manner similar to the football trajectory problem … big mistake.

There is NO (practical) method for predicting the outcomes of any of those FUP/deterministic systems. I use the word “practical” here, since there is generally at least a tiny forecasting horizon over which some (tolerable) forecast might exist. Though by tiny I mean so tiny as to be of no practical value. For example, a “perfect” climate forecast for a 1-day horizon is of little value for a 100-year forecast, if by day 2 of the forecast it has fundamentally departed from reality, etc.

FUP’s have additional properties. For example, the cost of improving the forecast even a little increases exponentially. Consider that typically weather forecasts are (sort of) OK’ish for 1 or 2 days (7 day forecasts are typically rubbish). At the moment, Trillions are spent on weather monitoring and forecasting. If you wanted to improve the weather forecast to, say 3-days, (assuming it was actually possible), the costs would be 10’s or 100’s times greater compared to the cost of the 2 day, etc.

In short, FUP’s have exponentially increasing departures in the forecast/model compared to reality.

This defiance of “forecastibility” is written into the fabric of the cosmos, and it makes no difference how many math geeks or supercomputers you throw at the problem, for a vast range of settings/issues.

For complex systems such as the climate, where there are many “moving parts”, many of which are FUP’s, the interaction of the FUP’s completely blows up any hope of (sensible) modelling. Just take volcanoes for one. There is no way to predict volcanoes (and notice that one must be able to predict not only the date of the eruption, but also the scale and composition, etc. … an impossibility of impossibilities). Thus, even if you could somehow overcome the impossibilities in the other FUP components, this one alone completely makes a dog’s breakfast of your forecast.

Even if the climate had just one FUP, what do we know about it? For example, a casual inspection of temperature histories appears aperiodic. That is, there are sort of patterns that sort of repeat, but not quite. Has anyone actually performed any tests to determine if those data histories imply aperiodicity, and if so, any parameterisation of those aperiodicities? For example, has anyone bothered to measure, say, the fractal dimension of, well, anything, in connection with the climate? I am not aware of any such tests, and I would imagine those types of tests would be near impossible anyway due to various technical requirements. That is, it hasn’t been done, nor can it be done, and so no model is possible, since you cannot calibrate the instability.

In short, the natural state of the planet is unstable. Thus, any modelling of a naturally unstable system must be via models that support the same type of instability(s). Moreover, the models’ instabilities must be correctly parameterised to the observed instabilities. This is generally impossible, but even worse here, since almost surely that parameterisation is non-stationary (i.e. different types/modes of instability as time and events vary).

The long and the short of it is this: It is NOT possible to model/predict the planet’s climate with any degree of reliability, and certainly not with the degree of reliability required for the sort’s of decisions the IPCC/UN et al wish to make.

Yes, we can always make predictions of some sort, but to what end? I am pretty certain that forecasting the planet’s temp 100 years out will be bounded by +/-100C … so what? What sort of forecast would be appropriate to insist that the entire planet undertake the greatest upheaval in socio-economic structure in recorded history? The IPCC insists that a 2C change is already catastrophic, so the forecast reliability must surely be much better than 2C. Since volcanoes on their own can and do cause 3 – 5 C changes, immediately that one FUP completely destroys the IPCC end-game.

That the moon, tides, etc are predictable is a complete red-herring, or a misunderstanding of what stability means. Stable physical models are only meaningful for stable physical systems. Try predicting a lava lamp … a very much “simpler” problem compared to the climate, but no less impossible to predict.

Alternatively, why not just roll dice, throw a dart, or …?

Finally, I have written a sequence of notes regarding the modelling of financial markets, where pretty much the same types of problems exist. The notes are fairly pedestrian, but they do have some equations and require at least a little math-awareness. They demonstrate the differences between deterministic, stochastic, periodic, and aperiodic issues in (I think) a straightforward manner. If you would like a copy, let me know via a reply to this post … though it would be a requirement that you would not be permitted to distribute those notes in any way.

PS. Re Navier-Stokes

In fact, we (humans) cannot solve Navier-Stokes in its full splendour. There are many cases we can solve, when many simplifying assumptions are applied. If the simplified versions are still sufficiently close to reality, then OK. In many cases, the simplifying steps allow the solution, but to a now different problem compared to the target dynamic.

Indeed, we still don’t know basic mathematical properties of Navier-Stokes, and that is why you can pick up millions of dollars in prizes and awards if you could show certain basic truths about Navier-Stokes, such as its “existence and smoothness” properties.