The Met Office tell us that September was the driest since records began 104 years ago. Last summer was ‘the hottest ever recorded’ in Australia. These extreme records hit the headlines, implying that global warming is to blame. However, just how likely is it that one extreme weather record or another will be broken through pure chance? Barometer in today’s Spectator shows how to do the calculation, and the results are surprising. I have simply extended the same argument to include Australia and the US.
In the following we consider three countries and their regions – the UK, the US and Australia. The regions for the UK are England, Northern Ireland, Wales and Scotland. Similarly, Australia has 6 states and the US has 50 states. That gives a total of 63 different regions if we also include each whole country itself.
Let’s take 4 records that can be broken: hottest, coldest, wettest and driest. During a single year each record can be set monthly, seasonally or annually, which gives 12 + 4 + 1 = 17 different time periods. Therefore for the UK there are a total of 5x4x17=340 records that can be set during the current year, 2014. For Australia there are 7x4x17=476 records and for the US there are a staggering 51x4x17=3468 records.
Now let’s assume that all weather records go back 104 years. If each year is equally likely to be the most extreme, the probability of a single record being broken in any one year is simply 1/104 ≈ 0.0096, so the probability that a given record will stand during the current year is 0.9904.
What is the probability that any of the UK weather records will be broken during the current year? That is

1 − (1 − 1/104)^340 = 0.963

There is a 96.3% chance that at least one Met Office record will be broken this year in the UK!
For Australia the probability is 1 − (1 − 1/104)^476 = 0.990, a 99.0% chance that a record will be broken, and for the US the probability is 1 − (1 − 1/104)^3468 ≈ 1, or essentially a 100% chance that a record will be broken!
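For anyone who wants to check the arithmetic, here is a minimal sketch of the whole calculation in Python; the region counts and the 1/104 assumption are exactly those used above:

```python
# Chance that at least one record falls this year, assuming every record
# series is 104 years long and each year is equally likely to hold the
# extreme, so a given record is broken with probability p = 1/104.
years = 104
p = 1 / years

record_types = 4         # hottest, coldest, wettest, driest
periods = 12 + 4 + 1     # 12 months + 4 seasons + the whole year = 17
regions = {"UK": 4 + 1, "Australia": 6 + 1, "US": 50 + 1}  # regions plus the whole country

for country, n_regions in regions.items():
    n = n_regions * record_types * periods
    p_any = 1 - (1 - p) ** n   # P(at least one of n independent records falls)
    print(f"{country}: {n} records, P(at least one broken) = {p_any:.4f}")
```

Running it reproduces the figures above: about 0.963 for the UK, 0.990 for Australia and, to better than one part in a million, 1 for the US.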
So in 2014 we can be absolutely certain that at least one state in the US will have its hottest/coldest/driest/wettest month/season/year ever recorded! Such records are totally meaningless.
The U.S. has 50 states. You wrote 52 and used 53 in the equation.
I see the extra 1 was the country itself, but it should still be 51 instead of 53.
Oops – you are correct. I will correct it! However it still makes no difference to the probability being 100% to within one part in a million.
Taken in isolation (as often seen in press releases / headlines) such records are of little or no use for understanding climate change. But looking at the distribution of such records across time and space can contribute useful information on how fast and in what ways the climate is changing. I would not say that such records are “totally meaningless” (although the reporting of them may be).
This type of probability analysis is of little meaning for time-series data, as explained below. It is even more meaningless when considered in a spatially distributed context, again see below. Unfortunately, much of science and mathematics cannot be (sensibly) shoehorned into a sound bite or 140 characters. Or, put differently:
“For every complicated problem there is always a simple solution, but usually it’s wrong”
However, before that, the bigger question of tying “extreme events” to Climate Change (CC), and especially to anthropogenic CC, is probably even less meaningful, purely due to the “scientific distance from the facts”. If there is CC of the “IPCC sort”, then it is due to an energy imbalance (i.e. more heat arrives than leaves the planet). Extreme weather events cannot be tied easily to energy imbalances. For example, if there is now more energy near the surface of the planet than previously, should there be more hurricanes or fewer? There is no easy answer, since a few highly energetic hurricanes may count for less than many low-energy hurricanes. And since it is not possible to measure the energies involved in such events, no direct connection to (IPCC-style) arguments is possible.
Unfortunately, extreme events are much easier to glamorise and take out of context, and so they will continue to be a routine trick used by climate fanatics to press their ideological agenda on the masses. This is also in large measure due to humans having “short memories”, and being more impressed by flashing lights and loud noises than by facts.
A few of the many possible side notes:
1) In 2007, the IPCC et al predicted that the Arctic would be ice free by 2013. For those who live in North America, 2013 was the year of the “polar vortex”, and one of the coldest years in a long time. Quite a few boats/ships were “caught out” summering in the Arctic, and found themselves frozen in place.
It is sad that the IPCC et al won’t accept or announce when their predictions are contradicted by facts, as is so often the case (there are many spectacular examples).
2) Although the trend in Arctic sea ice is downward, the trend in Antarctic sea ice is (astonishingly) upward, setting new records in each of the past several years.
I don’t see those “extreme” events appearing anywhere in the mainstream fanaticism.
3) There are many other factors left out of the “stories”. For example, there are more forest fires now partly because decades of careful forestry management kept fires to a much lower level than the natural one. Thus, as the US Forest Service etc now admit, the forests have a huge build-up of fuel that would not have accumulated under natural fires.
Of course, with populations doubling over the past 50 or so years, and the wealth of the planet increasing, more and more people are living in places that are “in harm’s way”.
Some of the “omissions” would be comical if they were not such distortions. For example, some of the farmers in California fighting droughts had to do so because “environmentalists” had diverted much of the water to “save” what they claimed was an endangered species of fish. Put differently, the “extreme” event faced in the “drought” is indeed “anthropogenic”.
As for the (over) simplified probability analysis:
Pure sample-probability analysis is not meaningful when there are dynamics or other factors at work. If you must use this type of analysis, at the very least examine “differences”. For example, take the purely “number of permutations” style analysis provided in the article (comparable to examining the probability of rolling a particular number with dice) and compare the number of extremes that actually occur with the number that “model” predicts.
Notice also how those “differences” vary when altering the sampling frequency (e.g. using daily vs. weekly data).
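As a concrete illustration of that baseline check, here is a minimal sketch in Python assuming, purely hypothetically, i.i.d. data: simulate many series, count how many new records each one actually sets, and compare with the textbook i.i.d. prediction that n exchangeable observations contain on average 1 + 1/2 + … + 1/n records:

```python
import random

def count_records(series):
    """Count how many times a new maximum is set (the first value counts)."""
    best = float("-inf")
    n = 0
    for x in series:
        if x > best:
            best, n = x, n + 1
    return n

random.seed(1)
n_years, trials = 104, 10_000
simulated = sum(count_records([random.gauss(0, 1) for _ in range(n_years)])
                for _ in range(trials)) / trials
predicted = sum(1 / k for k in range(1, n_years + 1))   # harmonic number H_104
print(f"records per 104-year series: simulated {simulated:.2f}, "
      f"i.i.d. prediction {predicted:.2f}")
```

Any systematic excess of real-world records over this kind of baseline is the “difference” worth examining; repeating the exercise at different sampling frequencies shows how sensitive the comparison is.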
However, doing this “correctly” requires many additional considerations. For example, it is clear that there is an uptrend in temperature (not just over the last 150 years, but in fact since the end of the last ice age). This means that even a “difference” analysis would require “de-trending” the data … and at this point you are, well, screwed. There is no meaningful way to de-trend temperature histories. For instance, assuming a linear trend for simplicity, which end points do you take to establish the trend? Choosing different end points results in different trends, so the entire analysis “implodes” (see Note 1 here http://www.thebajors.com/climategames.htm).
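The end-point sensitivity is easy to demonstrate. A minimal sketch with a purely hypothetical series (a slow warming trend plus a multidecadal wiggle plus noise) fits a least-squares line over different windows; the fitted “trend” moves substantially with the choice of end points:

```python
import math
import random

def ols_slope(ys):
    """Ordinary least-squares slope of ys against 0, 1, ..., n-1."""
    n = len(ys)
    mx = (n - 1) / 2
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in enumerate(ys))
    var = sum((x - mx) ** 2 for x in range(n))
    return cov / var

random.seed(2)
# Hypothetical data: 0.5 C/century trend + 60-year oscillation + weather noise
series = [0.005 * t + 0.2 * math.sin(2 * math.pi * t / 60) + random.gauss(0, 0.15)
          for t in range(150)]

for start, end in [(0, 150), (30, 150), (0, 120), (50, 110)]:
    slope = ols_slope(series[start:end])
    print(f"years {start}-{end}: fitted trend {100 * slope:+.2f} C/century")
```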
Also, when dealing with “proper” temperature and other weather data, the statistical properties are rather different from those of some “global average”. In many cities the typical daily temperature range can easily be 10C over a 24-hour period, whereas IPCC-style analysis sees variations of less than 1C over 100 years in many cases. That is, extreme events are (probably) more likely in high-frequency local/raw data, purely because it is a fundamentally different “type” of data.
It is crucial to determine the statistical distributions … but this turns out to be at least partially, if not fully, meaningless, since the distributions are non-stationary. Indeed, these processes are almost aperiodic (at least in part), and so pure statistical/probability analysis is meaningless anyway.
Even if we could get past all that, we would then need to consider the measures in some holistic manner. For example, how are extreme temperature highs related to extreme temperature lows? How are those extremes related to humidity/rain etc. extremes?
Even if we could get past all that, there is then the fatal complication of spatial issues. Almost any locale can exhibit extreme events due to purely local events/forces, or indeed “flukes”. As such, there is a near certainty of being able to report “some extreme somewhere” almost every day.
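A back-of-envelope sketch of that near certainty, assuming a purely hypothetical network of 100 stations, each with 104 years of daily data and a high and a low record for every calendar day:

```python
stations = 100        # hypothetical number of reporting locations
days = 365
records_per_day = 2   # a daily high and a daily low
history = 104         # years of prior data per station

# Under exchangeability each station-day record is broken with p = 1/104,
# so the expected number of new daily records per year is:
expected = stations * days * records_per_day / history
print(f"expected new daily records per year: {expected:.0f}")
print(f"that is about {expected / 365:.1f} 'extremes somewhere' every single day")
```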
Since the IPCC et al consider any extreme as proof of their position, cherry picking allows them to “report” that the “sky is falling” pretty much every day.
It is of some interest that the IPCC et al have in recent years also started to “claim” other types of extremes as “proof” of AGW, such as increases in violence/wars, increases in the death rate of the elderly, etc. It seems that the more the real data contradict the IPCC et al, the weirder their claims become.
2014 hasn’t broken just one of its 340 records, though. So far, with three months left to play, it’s broken twenty-seven:
UK wettest winter
UK driest September
England wettest winter
England wettest January
Wales wettest winter
Scotland wettest winter
UK most rainy days January
UK most rainy days February
UK most rainy days winter
UK fewest rainy days September
England most rainy days January
England most rainy days winter
Wales most rainy days February
Wales most rainy days winter
Scotland most rainy days winter
NI most rainy days February
NI most rainy days winter
Scotland warmest spring
UK highest average spring low
Scotland highest spring average low
Scotland highest May average low
Scotland highest June average low
NI highest May average low
UK fewest frosty days winter
England fewest frosty days winter
Wales fewest frosty days winter
Scotland fewest frosty days spring
http://www.metoffice.gov.uk/climate/uk/summaries/datasets
And the chance of that is less than one in a million
Hang on a minute! Now you (and the Met Office) have exploded the number of possible records that can be set by at least a factor of 2. You have added:
min temp
max temp
average min temp
average max temp
no. of frosty days
no. of rainy days (not total rainfall)
There are two records for each (highest and lowest), so now we have at least 1088 possible records to be set. Therefore you would now expect about 10 records to be set each year by random chance. So if we really have 27 set in 2014, that would indeed appear to be unusual – I admit.
However the probability of that happening by chance is still about 5% – not 1 in a million!
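To see where the naive calculation lands, here is a minimal sketch taking the 1088 figure at face value and, crucially, treating all the records as independent. On that assumption 27 records in a year is still far rarer than 5%, so the ~5% figure only makes sense once the correlations between the records (discussed further down the thread) are taken into account:

```python
import math

n_records = 1088          # possible records per year, as estimated above
years = 104
lam = n_records / years   # expected records per year under independence, ~10.5

def poisson_sf(k, lam):
    """P(X >= k) for X ~ Poisson(lam)."""
    term = math.exp(-lam)  # P(X = 0)
    cdf = 0.0
    for i in range(k):
        cdf += term
        term *= lam / (i + 1)
    return 1.0 - cdf

print(f"expected records per year: {lam:.1f}")
print(f"P(27 or more in one year): {poisson_sf(27, lam):.1e}")
```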
Clive,
Useful analysis, thanks.
Here is a statement of the bleedin’ obvious –
Surely it’s only unusual if the records you choose to analyse are chosen after the event, not before. So, if you know that frosts are performing “well” this year, then retrospectively including them only demonstrates one’s skill at choosing (a bit like the “hand picked” apples in the Stella Artois cider ad).
If, on the other hand, you declare your metrics beforehand, or use “all” metrics retrospectively (how does one define “all” metrics, exactly?), you can build a case.
Basically, I don’t buy Tom’s argument on its face.
>However the probability of that happening by chance is still about 5% – not 1 in a million!
Care to show your working?
The first problem is that your list of records does not seem to be quite correct.
e.g. most days of frost = 2010, least days of frost = 2002.
Several of the records are simple correlations – so wettest winter = (wettest Jan = wettest Feb = most rainy days etc). They are all related to the same thing – last winter’s stormy weather.
So I make it that only about 10 records are ‘genuine’ records and that is about normal.
Expectation would be 10 ± 4 records per year. A 3 sigma outlier would have about a 5% probability, or ~22 records set in one year.
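Putting rough numbers on that, a minimal sketch of the spread implied by a Poisson model with ~10 independent records a year (the exact tail probability depends on the distributional assumptions, so the 3 sigma / 5% shorthand should be read loosely):

```python
import math

lam = 1088 / 104        # ~10.5 expected records per year
sigma = math.sqrt(lam)  # Poisson standard deviation, ~3.2

print(f"expectation: {lam:.1f} +/- {sigma:.1f} records per year")
print(f"22 records sits {(22 - lam) / sigma:.1f} sigma above expectation")
```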
Clive. I looked at your stats and thought, no way, they have to be wrong. But I checked them and found they were in fact correct. A good piece of work that deserves wider publicity.
“Now lets assume that all weather records go back 104 years. The probability of a single record being broken in any year is simply 0.0096.”
why?
It is nothing to do with chance whatsoever. September 2014 was guaranteed to be very warm because of the strength of the solar signal through the month. It was forecast many months earlier, from the planetary ordering of solar activity, as was the heat wave from around 22 July 2014.