Australian Bureau of Meteorology (BOM) hourly temperature data

I have always wondered exactly how those nice homogenised monthly temperature averages for some 7000 weather stations within CRUTEM and GHCN were actually produced. This is the first part of my investigation.

A vast amount of raw temperature data has now become available from the Australian Bureau of Meteorology. In this first post we look at the hourly temperature measurements from 112 weather stations from 1950 to 2015. The data was provided by Ron Graf, who had acquired it from BOM with the original intention of studying climate change effects on diurnal temperatures, contrasting inland sites with coastal sites. The distribution of available stations is shown below.

There are 8 measurements per day at larger stations, but at many other stations this is only the case after 1986, when measurements were automated. Before 1986 these remote stations had just 2 manual readings per day, taken at 9am and 3pm. The volume of data is still huge, with around 250,000 readings for large stations like Sydney. I have processed all the station data as follows.

  1. Daily averages were calculated over all hourly readings per day, and the minimum and maximum temperatures were recorded. In some early cases only two readings per day are available, typically at 9am and 3pm, thereby biasing up minimum temperatures. An example of this can be seen in Leonora. After around 1988 this effect disappears. This change in the timing of the daily temperature measurements can also bias the mean temperature, because it restricts the range of temperatures over which the average is calculated.

Figure 2: Diurnal temperature range for the inland station at Leonora, showing a bias before 1988 caused by only 2 temperature measurements/day, at 9am and 3pm. After 1988 there are 8 measurements/day, one every 3 hours. Note, however, an apparent decrease in range after 2010.

2. Monthly averages for each station were calculated by averaging together all available daily averages in that month. The monthly maximum and minimum temperatures were taken as the highest and lowest recorded daily extremes. Most stations have some missing measurements, and in the worst cases a few stations are missing several years of data. To handle this problem, all missing daily, monthly and yearly values are assigned a NaN (‘not a number’) value. As such they are excluded from all calculations of averages and temperature extremes.

3. Yearly averages were then calculated from the daily averages in exactly the same way as for the monthly averages. Likewise, the yearly maximum and minimum temperatures correspond to the absolute daily extreme values for each year. The date and time of each extreme was also recorded. An example for Cairns is shown below, followed by a code sketch of steps 1-3.

Average monthly temperatures for Cairns, overlaid with the annual average temperature and the yearly minimum and maximum temperatures for each year
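For concreteness, here is a minimal Python/pandas sketch of steps 1-3. It is a sketch rather than my actual processing code, and the `time`/`temp` column names and CSV layout are illustrative assumptions, not BOM's actual file format. Pandas skips NaN values by default, which implements the "exclude missing values" rule described above.

```python
import pandas as pd

# One station's hourly readings. The 'time'/'temp' column names and the
# CSV layout are illustrative only, not BOM's actual file format.
df = pd.read_csv("station.csv", parse_dates=["time"]).set_index("time")

# Step 1: daily mean, min and max over whatever readings exist that day.
# pandas skips NaN by default, so missing readings are simply excluded.
daily = df["temp"].resample("D").agg(["mean", "min", "max"])

# Step 2: monthly mean of the daily means; the monthly extremes are the
# highest and lowest daily extremes recorded within the month.
monthly = pd.DataFrame({
    "mean": daily["mean"].resample("MS").mean(),
    "min":  daily["min"].resample("MS").min(),
    "max":  daily["max"].resample("MS").max(),
})

# Step 3: yearly values built from the daily series in the same way,
# plus the date on which each year's absolute extreme occurred.
yearly = pd.DataFrame({
    "mean": daily["mean"].resample("YS").mean(),
    "min":  daily["min"].resample("YS").min(),
    "max":  daily["max"].resample("YS").max(),
})
yearly["date_of_min"] = daily["min"].resample("YS").apply(
    lambda s: s.idxmin() if s.notna().any() else pd.NaT)
yearly["date_of_max"] = daily["max"].resample("YS").apply(
    lambda s: s.idxmax() if s.notna().any() else pd.NaT)
```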

Temperature Anomalies
In order to calculate temperature anomalies at each station, I first had to calculate the 12 monthly averages over the 30-year period 1961-1990. These are the so-called ‘normal’ climate expectation values for that station for each month of the year. Temperature anomalies for any given month in the series are then just the ± deviations from this normal value. For Sydney Airport we get the following result, which does indeed appear to show a small warming of about 1C between 1955 and 2015.

Figure 5: Monthly (blue) and Annual (red) temperature anomalies for Sydney Airport
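For reference, the anomaly calculation reduces to a few lines of pandas, continuing from the `monthly` table in the sketch above:

```python
# 12 monthly 'normals': the 1961-1990 climatological mean for each
# calendar month at this station.
m = monthly["mean"]
base = m.loc["1961":"1990"]
normals = base.groupby(base.index.month).mean()

# Anomaly = observed monthly mean minus that calendar month's normal.
anoms = m - normals.reindex(m.index.month).to_numpy()

# Annual anomaly = mean of the available monthly anomalies in each year.
annual = anoms.resample("YS").mean()
```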

However, most of the other stations show no effect at all or even a cooling trend – for example Boulia Airport.

Figure 6. Temperature anomalies for Boulia Airport.

Furthermore, you would be hard pressed to detect any real difference in diurnal temperature range between the 1950s and the 2010s, even in Sydney. Anomalies pick out and accentuate tiny differences, but they are also very good at showing up any biases within the data.

Comparison of diurnal temperatures for Sydney between the 1950s and 2010s. The black trace is the diurnal range, which is relatively small because Sydney is maritime.

Let’s go ahead anyway and calculate the annual averaged anomaly over the whole of Australia. To do this I use a 2-d triangulation in (Lat, Lon) to form an area-weighted average for each month and year; a code sketch of the weighting follows the plot. Here is the result.

Averaged Annual anomaly
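Schematically the weighting works like this. This is a simplified sketch that triangulates the station positions with scipy and treats (lon, lat) as flat plane coordinates, ignoring the cos(latitude) distortion; the actual weighting may differ in detail.

```python
import numpy as np
from scipy.spatial import Delaunay

def area_weighted_mean(lon, lat, anom):
    """Area-weighted mean anomaly over scattered stations, using a
    Delaunay triangulation of the station positions. Treats (lon, lat)
    as plane coordinates, so triangle areas are only approximate."""
    pts = np.column_stack([lon, lat])
    anom = np.asarray(anom, dtype=float)
    tri = Delaunay(pts)
    total = wsum = 0.0
    for simplex in tri.simplices:       # indices of each triangle's 3 stations
        a, b, c = pts[simplex]
        area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                         - (c[0] - a[0]) * (b[1] - a[1]))
        vals = anom[simplex]
        if np.isnan(vals).any():        # skip triangles with a missing station
            continue
        total += area * vals.mean()     # triangle value = average of vertices
        wsum += area
    return total / wsum if wsum else np.nan
```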

There is no warming! At first sight this result disagrees with BOM’s result based on the ACORN-SAT data. So what is wrong? Firstly, it demonstrates just how sensitive anomalies are to any small underlying biases. To compare this trend with standard results, I took the GHCN V3 data, extracted all the Australian stations, and performed exactly the same analysis on them as I did for the hourly data. This is how they compare.

Comparison of annual anomalies for Australia calculated from GHCN V3 and from the hourly data, using exactly the same software

The detailed yearly structure is identical, but the trend is totally different. So what am I doing wrong? One obvious bias is the change in the number of hourly readings per day around 1986, and as a result I had convinced myself that we couldn’t use the hourly data for climate change studies. However, while I was writing this post I had a brainwave. Since we know that the bias is caused by an increase in measurement times after 1986, why not simply remove it by restricting the analysis to just the two original readings per day – 9am and 3pm? This means discarding about 70% of all the hourly measurement data, but it solves the problem. So here is the new result.

The green points are the BOM anomalies derived using just the 9am and 3pm measurement points.

The hourly-derived annual anomalies now almost exactly agree with the Australian GHCN V3 values. So the mystery is solved. The cooling trend after 1986 was simply due to the extra sampling times added following the automation of data recording. Basically, it was the minimum daily temperatures that fell once measurements at 3am were introduced, and as a consequence the monthly anomalies after 1986 were biased downwards.
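In code, the fix is a one-line filter on the hourly readings before the daily averaging, assuming the `df` from the first sketch with timestamps in local standard time:

```python
# Keep only the two reading times that exist throughout the record,
# then rebuild the daily averages from those alone.
fixed = df[df.index.hour.isin([9, 15])]          # 9am and 3pm only
daily_fixed = fixed["temp"].resample("D").mean()
```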

So what exactly does homogenisation do? What it is supposed to do is detect shifts in temperature data caused either by a physical move of the weather station to a different location or by a change in instrumentation. Once detected, the relevant temperatures are supposed to be corrected by shifting them up or down by a fixed amount to compensate. The other ‘black magic’ correction is intended to detect any systematic ‘errors’ in trends relative to those of nearby stations and then ‘correct’ for them. This second process is far from clear, especially in a place like Australia where the next nearest station can be located up to 1000km away.
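To make the first kind of correction concrete, here is a toy single-breakpoint adjustment. It is emphatically not GHCN’s actual pairwise homogenisation algorithm (which detects breaks statistically against neighbouring stations); it only illustrates the ‘shift by a fixed amount’ step, and it should be applied to anomalies so that the seasonal cycle does not masquerade as a jump.

```python
import numpy as np

def correct_step(anoms, break_idx, window=120):
    """Toy correction for a single step change: estimate the step as the
    difference in mean level over `window` months either side of the
    break, then shift the earlier segment to remove it."""
    anoms = np.asarray(anoms, dtype=float)
    before = np.nanmean(anoms[max(0, break_idx - window):break_idx])
    after = np.nanmean(anoms[break_idx:break_idx + window])
    adjusted = anoms.copy()
    adjusted[:break_idx] += after - before   # move the pre-break segment
    return adjusted
```

Let’s look at a couple of examples.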


1. Winton

Winton anomalies as calculated by the 9am/3pm method, compared to NASA-GISS homogenised data.


Winton clearly has a kink in temperatures around 1976, and perhaps another around 2000. So how is it possible that GHCN converts these into a linear rise in temperature of about 1.5C over the same period? There are at least three levels to this puzzle: A) the raw temperature measurements; B) a first level of homogenisation (adjustment) by the local Met Office, which passes the monthly averages on to GHCN and CRUTEM4; C) a second level of regional homogenisation by GHCN or by CRU to derive their station normals and temperature anomalies.

3. Amberley

This is an interesting case, which was first described by Dr Marohasy in 2014 [1] and resulted in a Twitter exchange with Gavin Schmidt. This is how NASA turns a fairly flat temperature distribution with at least one small systematic shift into a warming pattern.

Amberley temperature anomalies (9am & 3pm) compared to NASA-GISS


The trend is flat until 1990, with perhaps a small net rise of < 1C at most. The correction algorithm appears to have annulled at least one shift. The homogenisation algorithm is supposedly based on comparing trends with those at neighbouring stations, but in Australia those distances can be huge.

There are stations which do show a warming trend in the 8-readings-per-day data, but these tend to be mainly large cities on the coast – for example Sydney, Melbourne, Perth and Adelaide.

Are these large cities perhaps responsible for homogenising warming trends into other stations, or does the algorithm itself have a built-in warming assumption? Let’s look at the most isolated station in Australia – Alice Springs. Here is the result from the raw BOM temperature data (9am & 3pm only).

Average temperatures and anomalies for Alice Springs


There is a systematic kink between 1974 and 1983, but otherwise the trend shows an overall increase of < 1C. The nearest stations are about 1000km away, so it is difficult to believe that these could be capable of ‘increasing’ the trend. Yet this is exactly what GHCN seems to do.

Note how the kink has simply been blanked out. It is hard to believe that any objective homogenisation procedure should always end up increasing temperature trends. Why does the inverse never happen? Is there some built-in assumption that a warming trend is the norm?


The available hourly temperature measurements from the 112 stations have been processed into daily, monthly and yearly temperature averages, maxima and minima. Temperature normals for each station have been derived using a 1961-1990 baseline. Many stations show a bias due to a step change in the number and timing of temperature measurements over the period of record. Before ~1986, measurements were carried out manually and remote stations had just 2 measurements per day, at 9am and 3pm. Thereafter there were 8 automatic measurements per day. This bias affects trends in the diurnal range and in the daily Tav. The large stations were unaffected because they had always had ~8 measurements per day.

The average temperature anomaly for Australia calculated from the mean of all daily measurements shows no warming trend. However, if you use only the continuous 9am and 3pm readings for all stations, then the result is very similar to that of GHCN and to the official BOM trend on their website. We will look at the ACORN-SAT data in the next post.

[1] Jennifer Marohasy (2014), on the Amberley temperature record.



12 Responses to Australian Bureau of Meteorology (BOM) hourly temperature data

  1. Lance Wallace says:

    Why is your “range” mostly negative?

    • Clive Best says:

      Sorry – I should have explained that I took Min-Max so as to have a negative “range” only for convenience in plotting. So really it is -range!

  2. Nick Stokes says:

    ” To handle this problem all missing daily, monthly and yearly values are assigned a NAN value, which means ‘not a value’. As such they are excluded from all calculations of averages and temperature extremes.”
    No, this is the wrong thing to do for averages. I talk about that here and in linked posts. The problem is that there can be a big bias if the missing data is, say, predominantly winter or summer. If July is missing, you’ll get an artificially warm year.

    The right thing to do is infilling: replace the missing data by its expected value. That doesn’t compensate for the fact that it is missing, but it avoids the bias. The actual effect of simply omitting a month (NaN) is to replace it with the annual average (for that year). The expected value (normal) for the month is much better.

    I think this error is probably responsible for a lot of the discrepancy. It probably explains why Sydney (none missing) warms while Boulia (probably lots missing) cools.

    • Clive Best says:


      The usual case is missing days. So in this case I think it is OK to calculate the monthly average over only those days that have measurements. It doesn’t make sense to infill missing days.

      Your argument applies to missing months when making the annual average. Here I think you are correct that any missing months will skew the annual average because of seasonal variations. I would prefer to extend the months on either side to make an average. However, if more than, say, 3 months are missing then it is best to mark the year as missing rather than invent a value which looks half reasonable.

  3. Nick Stokes says:

    ”3. Amberley”
    I did an analysis of Amberley here. You can find enough stations within 100 km. The break in 1980 is very marked. I show there how you can pin it down to August 1980, 1.4C, and if you make that adjustment, Amberley matches its neighbours very well.

    Here are the unadjusted temperatures in the surrounding decade, with trends:

  4. Nick Stokes says:

    “It is hard to believe that any objective homogenisation procedure should always end up increasing temperature trends . Why does the inverse never happen?”
    It does happen. Here (from here) is a map of Australia showing GHCNM stations that have reported since 2000. Stations where the GHCN adjustment (shown by GISS) made a positive difference to the lifetime trend are shown in pink; negative in cyan; zero (no adjustment) in yellow.

    • Ron Graf says:

      Nick, I hope you agree that the most needed adjustment is to account for non-climate warming at the sensor: UHI and micro-climate. Therefore it makes no sense that most adjustments warm, on top of a bias that is already warming. What is happening, IMO, is that UHI and micro-climate are gradual processes that don’t show up in the plots. But when a station is finally corrected for years of growing non-climate influence the change is abrupt – a move to the city outskirts or to an airport. This gets adjusted with a warming factor to keep the plot from breaking. And then the process of non-climate warming builds up all over again, getting recorded as even more historical warming.

      • Nick Stokes says:

        There may be some such effect. But you can’t ignore sudden changes that have a clear effect that can be corrected. Something has to be done. You can’t just say, well, it might be balanced by a slower warming effect. You don’t even know the sign of that possible effect.

        It isn’t even true that “most adjustments warm”. I showed the effect on the global average here. If you look at global trends to present since the mid-sixties or later, the adjustments have a cooling effect. They warm (slightly) trends to present starting from before that time.

        • Ron Graf says:

          Nick says: “Something has to be done.”

          Thus the Hippocratic oath – do no harm.

          The reason for the drop must be identified and validated before it is adjusted for. If we know the landscape was changing for the warmer, then I am thinking the drop in temperature is proof of that and quantifies the growth of non-climate influence in the record – not an error to be erased away.

  5. Excellent research, Clive, on the error of following only the thermal statistics! Weather station data should be checked against the dynamics of the atmosphere, because every variation, up or down, must be confirmed by the corresponding air mass (jet stream, JS) present above the station. Every thermal variation is caused by a particular persistence of the JS. Detecting a decrease or increase in temperature without observing the dynamics of the atmosphere is purely statistical research which ignores the real cause of the variation, with the negative consequence of tracking errors. As Clive rightly says, thermal measurements have changed over time, from two manual measurements a day to automatic digital measurements several times a day. Errors in reading or in data exchange can produce serious statistical errors. The weather station is not just statistics, but a tool for reading the temperature of the air masses above it. Every thermal variation of the planet is directly connected to the JS, so we must know the causes that have changed the thermal measurements in order to avoid statistical detection errors. Reading errors, non-standard weather stations, and data added later can distort the study of natural climate change, favouring the wrong idea of an anthropic global warming.

  6. A C Osborn says:

    I do not understand how taking the Temperature at 9 am does NOT capture the 3 am low temperature.
    They are not taking the current temperature at 9 am, they are taking the Minimum achieved during the previous night.
    Are they not using Max/Min Thermometers?

    You also need to be very careful using the Max & Min temperatures shown by the automatic stations, as they record and use “spikes” due to not averaging over a time period, and they have been shown to have cut-offs of minimum temperatures.
    See Jennifer’s posts on these subjects.

    • Clive Best says:

      I think they have both measurements at 9am. The Min-Max will record the highest temperature of yesterday and the coldest temperature of today (this morning, at around 3am). In addition, the instantaneous temperature at 9am is recorded. If you request the hourly temperatures then you get the instantaneous measurements, including as a minimum 9am and 3pm. After 1986 you get 3am, 6am, 9am, 12 noon, 3pm, 6pm, 9pm and 12 midnight.

      The Min-Max recording is that used by ACORN-SAT, GHCN and CRUTEM. In this case the daily average temp is (Tmax+Tmin)/2 and the monthly average is the average of the daily values in the month.
