In this post I summarise the results of calculating temperature anomalies using weather station data from GHCN versions 1, 3 (uncorrected) and 3 (corrected), and compare these results against three of the main climate groups. Updated 23/5: a small correction to station offsets (see comment).
- Calculating global land temperature anomalies from GHCN V3C using the standard algorithm adopted by CRU reproduces all the results of CRUTEM4, NCDC and Berkeley Earth.
- Exactly the same calculation when applied to the uncorrected GHCN V3U (raw) results in ~0.3C less warming since the 19th century. This same result is confirmed when applied to the earlier GHCN V1 data. The conclusion is that weather station correction and data homogenisation have increased apparent global warming on land by ~30%.
- All the independent climate groups essentially use the same core set of station data and adopt variations of the same methodology. Their results differ only very slightly once correction for normalisation period is applied. The results are not independent confirmations of global warming. Instead they are basically checking each other's software.
- An alternative methodology based on individual station offsets to a regional norm uses all available station data. It also results in a slightly lower temperature trend, even when applied to the corrected data, and agrees instead with HadCRUT4.
To investigate temperature anomaly values, I have calculated temperature anomalies for the three station datasets: GHCN V1, V3U (uncorrected) and V3C (corrected), in two different ways. Firstly I use the orthodox method based on subtracting seasonal monthly normals, and secondly I use regional station offsets as described in the previous post. The results are then compared with the published data from three of the main ‘independent’ research groups: CRUTEM4, NCDC and BEST.
I start by using the orthodox method based on station anomalies derived from monthly normals calculated over a fixed 30 year period, and then compare with my offset method described earlier. The figure below shows the GHCN V3C (corrected) global temperature anomaly result compared to CRUTEM4. They are essentially the same. This demonstrates that I am calculating them correctly.
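The per-station step of the orthodox method can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not CRU's actual code; the function name and array layout are my own:

```python
import numpy as np

def monthly_anomalies(temps, years, base=(1961, 1990)):
    """Convert one station's monthly temperatures into anomalies.

    temps : 2-D array, shape (n_years, 12), NaN for missing months
    years : 1-D array of calendar years matching the rows of temps
    base  : inclusive normalisation period
    """
    years = np.asarray(years)
    in_base = (years >= base[0]) & (years <= base[1])
    # One 'normal' per calendar month, averaged over the base period
    normals = np.nanmean(temps[in_base], axis=0)   # shape (12,)
    # Anomaly = measured temperature minus the monthly normal
    return temps - normals                         # broadcasts over years
```

By construction the anomalies average to zero over the base period, which is why the seasonal cycle drops out.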
I selected only stations recording at least 10 years within the 30 year normalisation period. This reduces the total station count to 5855. This is summarised for V1 and V3 below.
CRUTEM4 uses 5549 stations, the vast majority of which are identical to GHCN. This is because CRU originally provided their station data to GHCN V1.
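The station-selection rule above can be expressed as a simple filter. This is a hypothetical sketch of the criterion, not code from any of the groups:

```python
def enough_coverage(station_years, base=(1961, 1990), min_years=10):
    """Keep a station only if it reports in at least `min_years`
    distinct calendar years inside the normalisation period."""
    n = sum(1 for y in set(station_years) if base[0] <= y <= base[1])
    return n >= min_years
```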
Comparison of Uncorrected data with Corrected data.
Now we look at differences between the uncorrected station data V3U and the corrected homogenised data as provided in V3C. The next plot compares the uncorrected V3U anomalies calculated in exactly the same way as CRUTEM4. This is the equivalent plot to that shown above.
There is a clear systematic difference between the V3U uncorrected and CRUTEM4. The effect of correcting the historic temperature measurements has been to increase the net observed global warming signal. We can look at this in more detail by comparing all V1 and V3 anomalies. Exactly the same processing software is used for all three.
V1 and V3U give practically the same result, but V3C shows stronger warming. The conclusion is that the data correction and homogenisation procedure has increased the apparent "global warming" by 0.3C from 1850 to the present. I am not saying the corrections are wrong, and there may be very good reasons why they have been applied, but the end result has been to increase warming.
Effects of changing normalisation period.
To investigate whether there is any change in trends if we select an earlier 30 year normalisation period, I recalculated the V3C anomalies based on a 1941-1960 period and compared them to the standard result.
The trend is the same and the only effect is to shift the ‘anomalies’ up by about 0.2C. This is simple to understand as the zero reference line is set 30 years earlier on a rising trend. The absolute values of temperature anomalies have no meaning since they are always relative to some reference time period. The good news though is that the arbitrarily selected time period does not affect the trend.
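This re-baselining effect is easy to demonstrate on synthetic data. The sketch below (my own illustration, using an artificial linear trend) shows that changing the normalisation period shifts the whole series by a constant but leaves the trend untouched:

```python
import numpy as np

years = np.arange(1900, 2001)
temps = 0.005 * (years - 1900)     # synthetic 0.5 C/century linear trend

def rebaseline(t, yrs, base):
    """Express a series as anomalies relative to a base-period mean."""
    mask = (yrs >= base[0]) & (yrs <= base[1])
    return t - t[mask].mean()

a_6190 = rebaseline(temps, years, (1961, 1990))
a_4160 = rebaseline(temps, years, (1941, 1960))

# The two series differ only by a constant vertical offset
offset = a_4160 - a_6190
```

Because the reference mean is taken 20 years earlier on a rising trend, the 1941-1960 anomalies sit uniformly higher, exactly as seen in the plot.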
Comparison to other Datasets
I now compare the V3C global anomalies both to NCDC and to Berkeley Earth (BEST) global time series. NCDC base their anomalies on normals calculated over the 20th century. They also exclude data before 1880. BEST use the period 1951-1980 to define their anomalies and, as far as I understand, apply their own data homogenisation and linear interpolation procedures. To derive the BEST yearly anomalies I simply averaged their published monthly series. I use the V3C results normalised to 1941-1960 to be consistent with BEST/NCDC. The figure below compares all three datasets.
They are essentially identical. The conclusions then are simple:
- They all use the same corrected station data.
- They all use the same anomaly definitions.
- Different interpolation algorithms make practically no difference to the end result.
- BEST and NCDC are not really independent checks on global surface temperature anomalies.
Systematic differences in the definition of anomalies.
Does an average annual global land surface temperature actually exist? If so, is it increasing? Such a global surface temperature cannot be measured directly because the number and distribution of stations is always changing with time. However we can measure the changes in temperature for each location once the far larger seasonal variations have been subtracted. The orthodox method to do this is to define monthly temperature anomalies on a station by station basis relative to a ‘normal’ 30 year period 1961-1990. These normal values are the average temperatures for each month within the selected period. The anomaly for each month is then the measured temperature minus the monthly ‘normal’. These are all gridded together on a 5×5 degree base, area-averaged weighted by cos(lat) to form first a global monthly average and then an annual average. This procedure assumes that all stations react in the same way to an external (CO2) forcing and that all locations behave in synchrony. All the climate groups (NASA GISS, NCDC, CRUTEM, BEST) basically follow the same procedure, and since they all use GHCN V3C stations they essentially all get the same result.
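The final area-averaging step described above can be sketched as follows. This is a minimal illustration with my own function name and grid layout, not any group's production code:

```python
import numpy as np

def global_mean(grid_anom, lat_edges=np.arange(-90, 91, 5)):
    """Area-weighted global mean of a 5x5 degree anomaly grid.

    grid_anom : array (36, 72) of grid-cell anomalies, NaN where no data
    Each latitude band is weighted by cos(latitude) at the band centre,
    since a 5x5 degree cell shrinks in area towards the poles.
    """
    lat_centres = np.deg2rad(lat_edges[:-1] + 2.5)        # 36 band centres
    w = np.cos(lat_centres)[:, None] * np.ones((1, 72))   # per-cell weights
    valid = ~np.isnan(grid_anom)
    # Only occupied cells contribute to the weighted average
    return np.nansum(grid_anom * w) / w[valid].sum()
```

Repeating this for every month and then averaging the twelve monthly values gives the annual series.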
However this orthodox method necessarily causes stations with poor coverage within the reference period to be dropped. Efforts to avoid this rely on interpolation, or on deriving values from near neighbours in the same grid cell. BEST uses such interpolation techniques. This then generates fake data which might affect the final results slightly. NCDC use a longer time base for normals, covering the 20th century, but this is itself weighted towards the 1960s because that is where the station population peaks.
Is the orthodox method correct? Is it the only way to eliminate sampling biases? Is there groupthink at work? For this reason I have tried to define another grid-based anomaly using local station offsets, as described in the previous post. I first compare the results of using the two methods on GHCN V1 data.
There is very little difference between the offset anomalies and the standard anomalies. The only exception is before 1900, where the offset values are slightly warmer. This is exactly where many stations are rejected. The offset method uses all available stations, whereas the orthodox method drops many stations in this period if they fail to continue into 1961-1990. The main difference is that anomalies are calculated per grid cell using offsets rather than per station. Next we compare the two methods applied to V3C against CRUTEM4, which uses the orthodox method.
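My understanding of the per-cell offset idea can be sketched as below. This is a simplified, hypothetical version (a single pass rather than any iterative fit; names are mine): each station is shifted by its mean difference from the cell average over the years it actually reports, so every available year can contribute:

```python
import numpy as np

def cell_anomaly(station_series):
    """Offset-method anomaly series for one grid cell (sketch).

    station_series : 2-D array (n_stations, n_years), NaN for gaps.
    """
    cell_mean = np.nanmean(station_series, axis=0)            # per-year cell mean
    # A station's offset is its average departure from the cell mean
    offsets = np.nanmean(station_series - cell_mean, axis=1)  # one per station
    aligned = station_series - offsets[:, None]               # remove station bias
    # Cell series relative to its own long-term mean
    return np.nanmean(aligned, axis=0) - np.nanmean(aligned)
```

Because no fixed 30-year window is involved, a station that stops reporting in 1950 still contributes to the early part of the series.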
There is now a stronger difference in trends. The net warming from the 19th century is reduced to ~1.0C with the offset method. This value is closer to SST measurements such as HadSST3. The offset method uses all 7280 stations whereas the traditional one uses only 5855 stations. Remarkably though, it agrees very well with HadCRUT4, which includes SST, as shown in the next figure.
Is it possible that the standard anomaly method artificially increases warming trends? Why should land surface air temperatures (30% of the surface) increase much faster than those over the dominant 70% ocean? Such a discrepancy in trends could not possibly continue indefinitely, or the temperature difference between land and ocean would forever grow. Maybe it is just a coincidence that the 'offset' method almost exactly matches HadCRUT4 rather than CRUTEM4. Perhaps it gives biased results. Otherwise the orthodox anomaly calculation itself may be biased and exaggerate land warming trends compared to ocean temperatures.