A Study of Global Land Temperature Anomalies

In this post I summarise the results of calculating temperature anomalies using weather station data from GHCN versions 1, 3 (uncorrected) and 3 (corrected). I compare these results against those of three of the main climate groups.

Overview

  1. Calculating global land temperature anomalies from GHCN V3C using the standard algorithm adopted by CRU reproduces the results of CRUTEM4, NCDC and Berkeley Earth.
  2. Exactly the same calculation applied to the uncorrected GHCN V3U (raw) data yields ~0.3C less warming since the 19th century. The same result is obtained with the earlier GHCN V1 data. The conclusion is that weather station correction and data homogenisation have increased apparent global warming on land by ~30%.
  3. All the independent climate groups essentially use the same core set of station data and adopt variations of the same methodology. Their results differ only very slightly once a correction for the normalisation period is applied. The results are therefore not independent confirmations of global warming; the groups are basically cross-checking each other's software.
  4. An alternative methodology based on individual station offsets relative to a regional norm uses all available station data. It also yields a lower temperature trend, even when applied to the corrected data, and agrees instead with HadCRUT4.

Station Anomalies

To investigate temperature anomaly values, I have calculated temperature anomalies for the three station datasets, GHCN V1, V3U (uncorrected) and V3C (corrected), in two different ways. First I use the orthodox method based on subtracting seasonal monthly normals; second I use regional station offsets as described in the previous post. The results are then compared with the published data from three of the main 'independent' research groups: CRUTEM4, NCDC and BEST.

I start with the orthodox method, based on station anomalies derived from monthly normals calculated over a fixed 30-year period; the comparison with my offset method follows later. The figure below shows the GHCN V3C (corrected) global temperature anomaly compared to CRUTEM4. They are essentially the same, which demonstrates that my calculation reproduces the standard result correctly.

Comparison of V3 corrected global temperature anomalies calculated over the same 30-year period (1961-1990) as CRUTEM4

I selected only stations with at least 10 years of data within the 30-year period; a minimal code sketch of this cut follows the table below. The selection reduces the total station count to 5855. This is summarised for V1 and V3 below.

Dataset    Total stations    After normalisation cut
V1         6039              2618
V3         7280              5855
CRUTEM4    ?                 5549

CRUTEM4 uses 5549 stations, the vast majority of which are identical to those in GHCN. This is because CRU originally provided their station data to GHCN V1.
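As a minimal sketch, the selection cut might look like the following, assuming the station records sit in a pandas DataFrame with one row per station-month; the column names (station_id, year, month, temp) are my own and not taken from any group's actual code.

```python
import pandas as pd

def select_stations(df: pd.DataFrame, start: int = 1961, end: int = 1990,
                    min_years: int = 10) -> pd.Index:
    """Return ids of stations with at least `min_years` years of data
    inside the normalisation window (1961-1990 by default)."""
    window = df[(df["year"] >= start) & (df["year"] <= end)]
    # Count the distinct years in which each station reported anything
    years = window.groupby("station_id")["year"].nunique()
    return years[years >= min_years].index
```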

Comparison of uncorrected data with corrected data

Now we look at the differences between the uncorrected station data (V3U) and the corrected, homogenised data provided in V3C. The next plot shows the uncorrected V3U anomalies calculated in exactly the same way as CRUTEM4; it is the equivalent of the plot shown above.

V3U (raw uncorrected) anomalies calculated exactly the same way as CRUTEM4

There is a clear systematic difference between the uncorrected V3U and CRUTEM4. The effect of correcting the historic temperature measurements has been to increase the net observed global warming signal. We can look at this in more detail by comparing the V1 and V3 anomalies directly. Exactly the same processing software is used for all three datasets.

Comparison of V1 (green), V3U (uncorrected) and V3C (corrected). V1 and V3U essentially agree with each other, whereas V3C has cooled the past relative to the present, increasing the temperature anomaly trend. The curves are smoothed moving averages.

V1 and V3U give practically the same result, but V3C shows stronger warming. The conclusion is that the data correction and homogenisation procedure has increased the apparent "global warming" by 0.3C from 1850 to the present. I am not saying the corrections are wrong, and there may be very good reasons why they were applied, but the end result has been to increase the warming.

Effects of changing the normalisation period

To investigate whether the trend changes if an earlier normalisation period is selected, I recalculated the V3C anomalies using a 1941-1960 baseline and compared them to the standard result.

Comparison of the 1941-1960 normalisation period (green) with the standard 1961-1990 (blue)

The trend is the same; the only effect is to shift the 'anomalies' up by about 0.2C. This is simple to understand, as the zero reference is set two decades earlier on a rising trend. The absolute values of temperature anomalies have no meaning, since they are always relative to some reference time period. The good news, though, is that the arbitrarily selected time period does not affect the trend.
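To make the step explicit, write \bar{T}_P for a station's mean temperature over a reference period P (my notation). The anomaly relative to 1941-1960 is then

A'_t = T_t - \bar{T}_{1941-1960} = (T_t - \bar{T}_{1961-1990}) + (\bar{T}_{1961-1990} - \bar{T}_{1941-1960}) = A_t + c

The constant c (about 0.2C here) shifts every point equally, so slopes and trends are untouched.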

Comparison to other Datasets

I now compare the V3C global anomalies to both the NCDC and the Berkeley Earth (BEST) global time series. NCDC base their anomalies on normals calculated over the 20th century, and they exclude data before 1880. BEST use the period 1951-1980 to define their anomalies and, as far as I understand, apply their own data homogenisation and linear interpolation procedures. To derive the BEST yearly anomalies I simply averaged their published monthly series. I use the V3C results normalised to 1941-1960 to be more consistent with BEST/NCDC. The figure below compares all three datasets.
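That monthly-to-annual step is trivial; as a minimal sketch, assuming the published BEST series has been read into a pandas Series with a DatetimeIndex (the variable names are mine):

```python
import pandas as pd

def annual_means(monthly: pd.Series) -> pd.Series:
    """Collapse a monthly anomaly series into calendar-year means."""
    return monthly.groupby(monthly.index.year).mean()

# e.g. best_yearly = annual_means(best_monthly)
```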

An almost exact match between NCDC, BEST and the calculated V3C (1941-1960)

They are essentially identical. The conclusions then are simple:

  • They all use the same corrected station data.
  • They all use the same anomaly definitions.
  • Different interpolation algorithms make practically no difference to the end result.
  • BEST and NCDC are not really independent checks on global surface temperature anomalies.

Systematic differences in the definition of anomalies

Does an average annual global land surface temperature actually exist? If so, is it increasing? Such a global surface temperature cannot be measured directly, because the number and distribution of stations is always changing with time. However, we can measure the changes in temperature at each location once the far larger seasonal variations have been subtracted.

The orthodox method is to define monthly temperature anomalies on a station-by-station basis relative to a 'normal' 30-year period, 1961-1990. These normal values are the average temperatures for each month within the selected period, and the anomaly for each month is the measured temperature minus the monthly 'normal'. The anomalies are then gridded together on a 5×5 degree base, area-averaged with cos(lat) weighting to form first a global monthly average and then an annual average. This procedure assumes that all stations react in the same way to an external (CO2) forcing and that all locations behave in synchrony. All the climate groups (NASA GISS, NCDC, CRUTEM, BEST) basically follow the same procedure, and since they all use GHCN V3C stations they essentially all get the same result.
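As a rough sketch of that common pipeline, and only a sketch — the DataFrame layout and column names are my own assumptions, not any group's actual code:

```python
import numpy as np
import pandas as pd

def orthodox_global_anomaly(df: pd.DataFrame) -> pd.Series:
    """df holds one row per station-month with (assumed) columns:
    station_id, lat, lon, year, month, temp."""
    # 1. Station normals: mean temperature per calendar month over 1961-1990
    ref = df[(df["year"] >= 1961) & (df["year"] <= 1990)]
    normals = (ref.groupby(["station_id", "month"])["temp"]
                  .mean().rename("normal").reset_index())

    # 2. Station anomalies; stations lacking a normal drop out of the inner merge
    d = df.merge(normals, on=["station_id", "month"], how="inner")
    d["anom"] = d["temp"] - d["normal"]

    # 3. Average station anomalies within 5x5 degree grid cells
    d["cell_lat"] = np.floor(d["lat"] / 5.0) * 5.0 + 2.5  # cell-centre latitude
    d["cell_lon"] = np.floor(d["lon"] / 5.0) * 5.0 + 2.5
    cells = (d.groupby(["year", "month", "cell_lat", "cell_lon"])["anom"]
               .mean().reset_index())

    # 4. cos(lat) area-weighted global average for each month
    cells["w"] = np.cos(np.radians(cells["cell_lat"]))
    cells["wa"] = cells["anom"] * cells["w"]
    g = cells.groupby(["year", "month"])[["wa", "w"]].sum()
    monthly = g["wa"] / g["w"]

    # 5. Annual global anomaly = mean of the twelve monthly values
    return monthly.groupby(level="year").mean()
```

The groups differ in details such as infilling, homogenisation and grid resolution, but this is the shared skeleton.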

However, this orthodox method necessarily causes stations with poor coverage within the reference period to be dropped. Efforts to avoid this rely on interpolation, or derive values from near neighbours in the same grid cell; BEST uses such interpolation techniques. This then generates fake data, which might affect the final results slightly. NCDC use a longer time base for normals, covering the 20th century, but this is itself weighted towards the 1960s because that is where the station population peaks.

Is the orthodox method correct? Is it the only way to eliminate sampling biases? Is there groupthink at work? To test this I have defined an alternative grid-based anomaly using local station offsets, as described in the previous post. I first compare the results of the two methods on the GHCN V1 data.

Comparison of the offset method with the standard (Jones) anomaly method for GHCN V1

There is very little difference between the offset anomalies and the standard anomalies. The only exception is before 1900, where the offset values are slightly warmer; this is exactly where many stations are rejected. The offset method uses all available stations, whereas the orthodox method drops many stations in this period if their records fail to continue into 1961-1990. The other difference is that anomalies are calculated per grid cell using offsets, rather than per station. Next we compare the two methods applied to V3C against CRUTEM4, which uses the orthodox method.

Comparison of different methods of calculating temperature anomalies. The offset method uses each station's seasonal mean offset from the grid average.

There is now a stronger difference in trends. The net warming since the 19th century is reduced to ~0.8C with the offset method. This value is more similar to SST measurements such as HADSST3. The offset method uses all 7280 stations, whereas the traditional one uses only 5855. Remarkably though, it agrees very well with HadCRUT4, which includes SST, as shown in the next figure.

Comparison of the V3C offset anomalies with HadCRUT4

Is it possible that the standard anomaly method artificially increases warming trends? Why should land surface air temperatures (30% of the Earth's surface) increase much faster than the dominant (70%) ocean temperatures? Such a discrepancy in trends could not possibly continue indefinitely, or the temperature difference between land and ocean would grow forever. Maybe it is just a coincidence that the 'offset' method almost exactly matches HadCRUT4 rather than CRUTEM4; perhaps it gives biased results. Otherwise the orthodox anomaly calculation itself may be biased, exaggerating land warming trends compared to ocean temperatures.


Untangling Global Temperatures

There are systematic problems in defining global "temperatures". This post looks further into the problem by comparing uncorrected GHCN weather station data with CRUTEM using different methodologies. This is an ongoing study.

Comparisons of uncorrected GHCN V3 with CRUTEM4, and of uncorrected GHCN V1 with a contemporary CRUTEM (Jones 88).

I have calculated new temperature anomaly data from all stations in the GHCN V1 and GHCN V3 raw datasets. I applied the linear offset correction, for each station and for each month, to avoid the sampling bias discussed in the previous post; the correction was originally proposed by Tamino. This avoids biases from both the differing seasonal responses and the differing altitudes of individual stations within a single grid cell. The basic problem facing all analyses of global temperature data is that we only have an evolving set of stations and measurements with time. For this reason temperature anomalies for each station are traditionally used rather than raw temperatures. An anomaly is the difference between the measured temperature for one month and a 'normal' monthly value, usually pre-calculated over a fixed 30-year period. However, if we calculate such anomalies at the station level, then we must discard all stations with zero or few values within the fixed period. I wanted to avoid this and use ALL available data. GHCN V3 contains 7280 stations, whereas CRUTEM4 has 5549 stations, which include 628 new ones from the Arctic.

To make progress, I assume that there exists a true average temperature for each month in a grid cell, T_r . We can estimate T_r by subtracting the net offsets of those stations present each month. We then calculate normals based on the monthly T_r values and use these to derive grid anomalies. This new method also avoids discarding stations which do not fall within the 'normal' 30-year window, since their offsets are determined from the cells and months in which they do appear. I also want to avoid linear interpolation of station data to months and years where they have no data.

Each station is characterised by a monthly offset \mu_i from the grid (regional) average. This offset is assumed constant in time because it is due, for example, to the station's altitude. We first calculate all the monthly average temperatures T_{av} . Then for each station we derive the offset, for each month, from the average of the particular grid cell in which it appears. This is our first estimate for \mu_i ; there are 12 such offsets, one for each calendar month:

\mu_i = \frac{1}{N_t}\sum_{time}(T_i - T_{av})

We can then iterate, using the new offsets to derive improved grid averages and, from them, a new set of offsets. In practice the second iteration changes the end result only very slightly. The estimate of the true grid average temperature for a given month, T_r , is then

T_r = \frac{1}{N_{stations}} \sum_{s}(T_s - \mu_s)

So we average the temperatures per calendar month in each grid cell across the full time range to get 'normal' temperatures. (A fixed 30-year range for the normals can also be selected, but it makes little difference.) We then calculate one anomaly per month and per grid cell. These anomalies are first area-averaged and then annually averaged to get the yearly global temperature anomalies. The results are shown below for V3 compared to CRUTEM4, and for V1 compared to a contemporary version of CRUTEM (Jones 1988).
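Before the plots, here is a minimal numerical sketch of the scheme for a single grid cell and a single calendar month (the array layout and names are mine; the real calculation loops over all cells and all 12 months):

```python
import numpy as np

def offset_cell_anomaly(temps: np.ndarray, n_iter: int = 2):
    """temps: shape (n_years, n_stations) for ONE cell and ONE calendar
    month, NaN where a station has no reading that year.  Returns the
    cell anomaly series and the station offsets mu."""
    mu = np.zeros(temps.shape[1])
    for _ in range(n_iter):
        # Grid average each year from whichever stations report, after
        # removing the current offset estimates: T_r = mean_s(T_s - mu_s)
        T_r = np.nanmean(temps - mu, axis=1)
        # Re-estimate the offsets: mu_i = (1/N_t) * sum_t (T_i - T_r)
        mu = np.nanmean(temps - T_r[:, None], axis=0)
    # 'Normal' = long-term mean of the reconstructed series
    return T_r - np.nanmean(T_r), mu
```

The resulting cell anomalies are then area-averaged with cos(lat) weighting and annually averaged, exactly as in the orthodox method.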

GHCN V3 offset anomalies compared with CRUTEM4

There is fairly close agreement, except before 1920, where V3 is warmer, and after 1998, where V3 is cooler. Now look at V1 compared to the version of CRUTEM published by Phil Jones in 1988.

GHCN V1 offset anomalies compared with CRUTEM (Jones 1988)

In general there is good agreement between CRUTEM and V1, except before 1900. In both cases the net warming since the late 19th century is about 0.2C less than that observed by CRUTEM.

Now I can already hear objections from Nick Stokes to my methodology: he will likely argue that one should average station 'anomalies' in each cell, not first average 'temperatures'. Another objection could be that my normals cover a longer time period than the standard 1961-1990. Let's look at these in turn.

In principle I could first derive anomalies for each station by using the offsets and fixing the monthly T_m at one particular year, such as 1975. The station anomaly would then be (T_{mes} + offset) - T_m . I can look into this if I find time, but I don't believe it can make much difference.

My main objective was to use all the (raw) station data, independent of time span, to answer one question. The problem with a fixed 30-year normalisation period is that a significant number of stations have no data with which to define the 'normals'. One could imagine using the offsets to define station normals based on the grid average temperature, but this generates fake data. There is also a philosophical reason why this may be wrong.

Suppose that, for some reason, only winters have gradually warmed. By defining seasonal normals within a recent fixed time span, you risk skewing the winter months and artificially reducing the natural seasonal variation.

My conclusion thus far is that there are small but significant differences in temperature trends, depending on the definition of temperature anomalies and on data correction/homogenisation. The overall warming trend varies by about 0.2C depending on how it is defined.

 


Sampling biases in global temperature anomalies

Nick Stokes points out some fundamental problems with determining trends in surface temperatures, caused by the changing distribution of stations within a grid cell over time. Consider a 5×5 degree grid cell which contains a two-level plateau above a flat plain at sea level, as shown below. Temperature falls by about 6.5C per 1000 m of altitude, so the real temperatures at the different locations will be as shown. The correct average surface temperature for that grid cell would therefore be something like (3×20 + 2×14 + 7)/6, or about 16C. What you actually measure will depend on where your stations are located.

The two-level plateau grid cell, with locations at 20C, 14C and 7C

Since the number of stations and their locations are constantly changing with time, there is little hope of measuring any underlying trend of average temperature in that cell. You might even argue that an average surface temperature, in this context, is a meaningless concept.

The mainstream answer to this problem is to use temperature anomalies instead. To do this we must define a monthly 'normal' temperature for each station over a 30-year period, e.g. 1961-1990. In a second step we subtract these 'normals' from the measured temperatures to get \Delta T , the 'anomaly' for that month. We then average those values over the grid cell to get its average anomaly for that month relative to 1961-1990. Finally we can average over all months and all grid cells to derive the global annual temperature anomaly. The sampling bias has not really disappeared, but it has been partly subtracted out. There is still an assumption that all stations within a cell react in synchrony, warming (or cooling) uniformly. The procedure also introduces a new problem for those stations with insufficient data within the selected 30-year period, and this can invalidate some of the most valuable older stations. Are there other ways to approach this problem?
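A toy simulation illustrates the point. Six hypothetical stations follow the plateau example above, all warming at the same slow rate; when the coldest station drops out, the raw cell average jumps by nearly 2C while the anomaly average barely notices. All the numbers here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
years = 60
signal = 0.01 * np.arange(years)         # common 0.01 C/yr warming trend

# Three stations on the plain (~20C), two on the lower plateau (~14C),
# one on the upper plateau (~7C)
base = np.array([20.0, 20.0, 20.0, 14.0, 14.0, 7.0])
temps = base + signal[:, None] + rng.normal(0.0, 0.2, (years, 6))

# The coldest (highest) station stops reporting after year 30
temps[30:, 5] = np.nan

raw = np.nanmean(temps, axis=1)          # raw cell-average temperature

# Anomaly approach: subtract each station's own normal (first 30 years)
normals = np.nanmean(temps[:30], axis=0)
anom = np.nanmean(temps - normals, axis=1)

print(raw[30] - raw[29])    # spurious step of about +1.8C in the raw average
print(anom[30] - anom[29])  # ~0.01C: just the real trend plus noise
```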

For the GHCN V1 and GHCN V3 (uncorrected) datasets I wanted to use all stations, so I took a naive approach: I simply used monthly normals defined per grid cell, rather than per station, over the entire period, as sketched below.
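A minimal sketch of this naive per-cell normalisation, with the column names again my own assumptions:

```python
import pandas as pd

def cell_anomalies(df: pd.DataFrame) -> pd.DataFrame:
    """Normals per (grid cell, calendar month) over the whole record,
    rather than per station.  df columns (assumed): cell_lat, cell_lon,
    year, month, temp."""
    key = ["cell_lat", "cell_lon", "month"]
    out = df.copy()
    out["anom"] = out["temp"] - out.groupby(key)["temp"].transform("mean")
    return out
```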


Annual anomalies calculated with per-grid-cell monthly normals. After 1900 the agreement of GHCN V3 with CRUTEM4 is good. The original GHCN V1 data is shifted warmer than GHCN V3 by up to 0.1C; this difference is real.

A novel approach to this problem was first proposed by Tamino and then refined by RomanM and Nick Stokes. I will try to simplify their ideas here without too much linear algebra; corrections are welcome.

Each station is characterised by a fixed offset \mu_i from the grid average. This offset remains constant in time because it is due, for example, to the station's altitude. We can estimate \mu_i by first calculating all the monthly average temperatures T_{av} for the particular grid cell in which the station appears. Then, by definition, for any of the monthly averages

T_{av} = \frac{1}{N_{stations}} \sum_{i}(T_i - \mu_i)

so now, in a second step, by averaging over all the 'offsets' for a given station we can estimate \mu_i :

\mu_i = \frac{1}{N_t}\sum_{time}(T_i - T_{av})

Having found the set of all station 'offsets' in the database, we can calculate temperature anomalies using all the stations available in any month. I still think the anomalies have to be normalised to some standard period, but at least the bias due to a changing set of stations will be reduced, especially in the important early years.

P.S. I will try this out when time permits.

 
