ACORN2 is the latest BOM version of the Australian station data. You can plot the average temperature anomaly data for December using their trend tracker. Shown below the graph is the average temperature for the 1961-1990 baseline period: 27.2C. Ed Hawkins took the anomaly data and simply added on this average temperature to produce this plot.

I have analysed ACORN1 data before, so I downloaded the latest ACORN2 version and expected to be able to use the same software to check the result. However, the format has changed dramatically. ACORN1 had the daily Tmax and Tmin values in a single file per station covering the full measurement period. These have now been split into separate Tmax and Tmin files, and the metadata is no longer directly available (I found it in their Javascript!). Other changes which caused me difficulties, handled in the sketch after this list, were:
- Different timescales for Tmax and Tmin
- Missing values are now simply left blank whereas in ACORN1 they were set to -9999.9
- Sometimes (e.g. Merredin) up to 17 consecutive days of data are simply missing.
- The start dates of the two files are quite often different, which means, for example, that we may have a Tmin value but a missing Tmax.
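Something like the following pandas sketch handles these quirks; the file names and column labels here are just placeholders, since the real ACORN2 CSV headers may differ:

```python
import pandas as pd

# Placeholder file names/column labels for one ACORN2 station's daily CSVs;
# adjust to match the files actually distributed by the BOM.
tmax = pd.read_csv("tmax.066062.daily.csv", parse_dates=["date"])
tmin = pd.read_csv("tmin.066062.daily.csv", parse_dates=["date"])

# Blank cells are read in as NaN, so no -9999.9 sentinel handling is needed.
# An outer merge on date keeps days where only one of Tmax/Tmin exists
# (the two files often start on different dates).
daily = pd.merge(
    tmax.rename(columns={"temperature": "tmax"})[["date", "tmax"]],
    tmin.rename(columns={"temperature": "tmin"})[["date", "tmin"]],
    on="date", how="outer",
).sort_values("date")

# Daily mean only where both values are present; runs of missing days
# (e.g. Merredin) stay as NaN and simply drop out of later averages.
daily["tmean"] = (daily["tmax"] + daily["tmin"]) / 2
```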
Having resolved all these problems, I then calculated the December anomalies and the area-averaged temperatures for Australia. I got good agreement for the temperature anomalies.

However, I got a wildly different value for the average December temperature: 23.8C as the area-averaged temperature for Australia over 1961-1990, instead of their 27.2C.
So obviously I must be wrong, or am I? I take the daily average temperature to be (Tmax+Tmin)/2 and then average this value over each of the 12 months between 1961 and 1990, for each of the 112 stations. My values then agree almost perfectly with various travel/tourist website averages for expected monthly temperatures. For example, my monthly average values for Sydney are:
22.965, 23.093, 21.857, 19.548, 16.531, 14.016, 13.068, 14.176, 16.347, 18.671, 20.297, 22.175
Compared with https://www.holiday-weather.com/sydney/averages/
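In code terms the per-station normals come out roughly like this (a sketch, reusing the merged daily table from the snippet above):

```python
# Per-station 1961-1990 monthly normals from the daily means.
base = daily[(daily["date"].dt.year >= 1961) & (daily["date"].dt.year <= 1990)]

# Average every available daily mean falling in each calendar month over the
# whole 30-year window; repeated for each of the 112 stations.
normals = base.groupby(base["date"].dt.month)["tmean"].mean()
print(normals.round(3).tolist())   # 12 values, Jan..Dec
```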
My area-averaged value for Australia uses a triangulation method, so I thought that maybe I had screwed that up. To check, I recalculated everything using the CRU 5×5 grid method and got essentially the same result. So I think the difference is instead the following.
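The grid cross-check amounts to binning the stations into 5×5 degree cells, averaging within each occupied cell, and then weighting each cell by the cosine of its central latitude. A rough sketch (not my actual code):

```python
import numpy as np

def grid_average(lats, lons, values, cell=5.0):
    # lats, lons, values: 1-D arrays of station latitude, longitude and
    # temperature (or anomaly) for one month.
    lat_idx = np.floor((np.asarray(lats) + 90.0) / cell).astype(int)
    lon_idx = np.floor((np.asarray(lons) + 180.0) / cell).astype(int)

    # Collect the stations falling into each occupied grid cell.
    cells = {}
    for i, j, v in zip(lat_idx, lon_idx, np.asarray(values)):
        cells.setdefault((i, j), []).append(v)

    # Area-weighted mean over the occupied cells.
    num = den = 0.0
    for (i, j), vals in cells.items():
        lat_centre = -90.0 + (i + 0.5) * cell
        w = np.cos(np.radians(lat_centre))   # cell area shrinks towards the poles
        num += w * np.mean(vals)
        den += w
    return num / den
```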
Everyone (GHCN, Berkeley, CRU, etc.) uses monthly average temperatures to calculate anomalies, whereas I am using the daily values. So my 1961-1990 averages are based on the 30-year average of the daily temperatures for each station during any particular month. These are then spatially integrated to give the normal climatology. So why are they not the same as ACORN or GHCN?
My suspicion is that ACORN, GHCN, Berkeley, etc. use the monthly values over the 30-year period. So the minimum temperature is the lowest average (monthly) temperature and NOT the lowest minimum (night-time) temperature for a given month. This could explain why they get higher values.


Finally, here are my trends for December. The increased temperatures in December are mostly because the minimum (night-time) temperatures have been rising much faster. There is not much change in the average maximum temperatures up until 2018.

P.S. I have no time to check all this as I will soon be on a plane to Australia!
Thanks. An interesting analysis, Clive.
I think Ed Hawkins is showing mean maximum temperatures. If you look at the BoM Max Anomaly plot, it says, top right,
Average (1961-90) 28.6 °C
If you switch to mean anomaly, it says
Average (1961-90) 21.8 °C
I think he added the av max figure.
I don’t think using the triangulation method (or any other simple integration method) for absolute temperatures is wise. They are just too inhomogeneous.
Nick,
BOM have just updated their average value. It previously gave 27.2C but now says 21.8C. So I was right and I think they noticed their mistake!
You're right that triangulation is not a good method for land-only averages. However, it doesn't make a lot of difference to the end result if you use a regular grid or an equal-area grid instead.
Thanks to Clive for this well done head post.
Btw, the monthly 1961-1990 averages I compute from Sydney's GHCN daily station
ASN00066062 -33.8607 151.2050 39.0 SYDNEY (OBSERVATORY HILL) 94768
is as follows, in °C:
22.58
22.74
21.59
19.20
16.14
13.62
12.76
13.81
15.92
18.16
19.81
21.80
which essentially agrees with mine.
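For anyone who wants to reproduce this, here is a rough sketch of extracting such normals from the station's GHCN-Daily file; it pools every complete day in each calendar month and is not necessarily identical to my own code:

```python
# Sketch: 1961-1990 monthly (Tmax+Tmin)/2 normals from a GHCN-Daily .dly file.
# Fixed-width layout: ID(11) YEAR(4) MONTH(2) ELEMENT(4), then 31 x (VALUE(5) + 3 flags);
# TMAX/TMIN values are in tenths of a degree C, -9999 means missing.
from collections import defaultdict

def monthly_normals(path, start=1961, end=1990):
    days = defaultdict(dict)   # (year, month, day) -> {"TMAX": value, "TMIN": value}
    with open(path) as f:
        for line in f:
            year, month, elem = int(line[11:15]), int(line[15:17]), line[17:21]
            if elem not in ("TMAX", "TMIN") or not (start <= year <= end):
                continue
            for d in range(31):
                v = int(line[21 + 8 * d: 26 + 8 * d])
                if v != -9999:
                    days[(year, month, d + 1)][elem] = v / 10.0

    sums, counts = [0.0] * 12, [0] * 12
    for (year, month, day), vals in days.items():
        if "TMAX" in vals and "TMIN" in vals:      # only days with both values
            sums[month - 1] += (vals["TMAX"] + vals["TMIN"]) / 2.0
            counts[month - 1] += 1
    return [round(s / n, 2) for s, n in zip(sums, counts)]

print(monthly_normals("ASN00066062.dly"))
```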
So my concern is really about the definition of how to calculate the average max, min, and mean temperatures for any given month. Should you (a) first form monthly average Tmin and Tmax values for each station and then, as a second step, average these monthly averages over a reference 30-year period? Or should you (b) average Tmax, Tmin and the mean (Tmax+Tmin)/2 over all those days that fall within any particular month across the reference 30-year period? In the first case the sum is divided by 30 (or by the number of years for which data are available). In the second case the sum is divided by the total number of days of available data falling within the given month.
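In pandas terms, and reusing the daily table from the head post, the two options look roughly like this (a sketch only):

```python
# Restrict to the 1961-1990 reference period.
base = daily[(daily["date"].dt.year >= 1961) & (daily["date"].dt.year <= 1990)]

# Method (a): form a monthly mean for each individual year first,
# then average those ~30 monthly means (sum divided by number of years).
by_year_month = base.groupby([base["date"].dt.year, base["date"].dt.month])["tmean"].mean()
method_a = by_year_month.groupby(level=1).mean()

# Method (b): pool every available day of the month across the 30 years
# (sum divided by the total number of available days).
method_b = base.groupby(base["date"].dt.month)["tmean"].mean()
```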
Apparently these two methods give radically different answers.
I favour method 2.
The anomalies though are the same for both methods.
Clive
I hope I don’t misunderstand you.
Does method 2 not mean that you (a) compute a global baseline wrt a reference period, then (b) subtract the global monthly averages from each station’s monthly average and (c) average these differences within grid cells?
I compute a baseline for each station separately, provided it has the necessary data, and then the monthly anomalies, which are averaged in the cells.
A global baseline wrt the common reference period is computed but never used internally; it is no more than a hint for interested readers.
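Roughly, in code form (a sketch only; the table and column names are invented, not my actual implementation):

```python
# `monthly` is assumed to hold one row per station, year and month,
# with columns: station, lat, lon, year, month, temp.
ref = monthly[(monthly["year"] >= 1961) & (monthly["year"] <= 1990)]

# Baseline for each station and calendar month (stations lacking sufficient
# reference-period data would be filtered out beforehand).
baseline = (ref.groupby(["station", "month"])["temp"].mean()
               .rename("baseline").reset_index())

# Monthly anomaly = station value minus that station's own baseline.
monthly = monthly.merge(baseline, on=["station", "month"], how="left")
monthly["anom"] = monthly["temp"] - monthly["baseline"]

# Average the station anomalies falling in each 5x5 degree cell.
monthly["cell_lat"] = (monthly["lat"] // 5).astype(int)
monthly["cell_lon"] = (monthly["lon"] // 5).astype(int)
cell_anoms = monthly.groupby(["cell_lat", "cell_lon", "year", "month"])["anom"].mean()
```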