## The Global Warming Hiatus

Global surface temperatures have remained essentially static since 1998, a record El Niño year. The hiatus in land surface warming is real, unexpected, and puzzling. Recent anomaly data are shown below.

Comparison of CRUTEM4, CRUTEM3 and GHCN V3C. Error bars are those quoted for CRUTEM3.

Until 2010, CRUTEM3 was the IPCC reference land temperature dataset and was used for the IPCC 4th Assessment Report in 2007. It is still updated, and shows 1998 as the warmest year with no warming trend since then. GHCN V3C is in agreement with that conclusion. CRUTEM4 was released in 2010; the main difference from CRUTEM3 was the addition of 628 stations near the Arctic, where warming has been strongest. GISS also added a significant number of new Arctic stations. This sampling effect alone has moved the land temperature anomaly to slightly warmer values post-2000.

The blue dots show the locations of the 176 stations that were removed in the transition from CRUTEM3 to CRUTEM4. The red dots show the locations of the 628 stations that were added.

CRUTEM4 now shows 2010 as the hottest year and a small warming trend of 0.05C/decade since 1998. In addition, some recent corrections appear to have been made to the station data, enhancing this trend since I first made the comparison in 2010, as shown below.

Fig 1: Detailed comparison of temperature anomaly results from CRUTEM4 and CRUTEM3 as of 2010

Cowtan and Way (2013) also claimed that adding yet more Arctic coverage would confirm some continued warming. As I understand it, they essentially fill all empty Arctic grid cells with values interpolated from satellite data. But do they also do the same in the Antarctic or the Sahara? Since then this claim has essentially evaporated, as shown below in their updated version V2. The figure below shows their latest result based on CRUTEM4 and compares it to the other main temperature series.

Other main temperature indexes compared to V3C and Cowtan and Way Version 2

The growing evidence of an unexpected hiatus in surface warming, now lasting 16 years, clearly caused concern in the climate science community. AR5 skirted round the issue by essentially deciding that such 15-year pauses will happen occasionally due to natural variability. Of course, a cynic might argue that natural variability has been tuned in the models for exactly that purpose. Phenomena like ENSO are not fully understood and cannot be predicted.

Fig 2: Comparison of CMIP5 models and observed temperature trends.

However, even with natural variability included, only 5% of CMIP5 model ensemble members can reproduce such a pause. Furthermore, the statistics behind the all-important AR5 attribution statement depend on natural variability being essentially random. If it is not, then a proportion of the observed warming since 1950 would be due to natural cycles such as the AMO/PDO. In that case model estimates of anthropogenic warming would be too high, and the attribution statement would be wrong.

Already most of the CMIP5 model predictions for future warming are on the high side when compared to the existing combined land/ocean surface temperatures (HadCRUT4).

CMIP5 model ensemble compared to observations (HadCRUT4)

A new proposal to explain the hiatus is that much of the excess heat from the TOA radiation imbalance has instead been stored in the deep ocean and will reappear later to warm the surface. However, this proposal implies that there are indeed natural ocean-driven climate cycles. If so, how much of the rapid warming from 1970 to 2000 was really due to an upturn in these cycles? The next few years will be interesting.

29/5: updated to use C&W V2 anomalies.


## A Study of Global Land Temperature Anomalies

In this post I summarise the results of calculating temperature anomalies using weather station data from GHCN versions 1, 3 (uncorrected) and 3 (corrected). I compare these results against three of the main climate groups. Updated 23/5: a small correction to station offsets (see comment)

### Overview

1. Calculating global land temperature anomalies from GHCN V3C using the standard algorithm adopted by CRU reproduces all the results of CRUTEM4, NCDC and Berkeley Earth.
2. Exactly the same calculation, applied to the uncorrected GHCN V3U (raw) data, results in ~0.3C less warming since the 19th century. The same result is obtained with the earlier GHCN V1 data. The conclusion is that weather station correction and data homogenisation have increased apparent global warming on land by ~30%.
3. All the independent climate groups essentially use the same core set of station data and adopt variations of the same methodology. Their results differ only very slightly once a correction for the normalisation period is applied. The results are not independent confirmations of global warming; the groups are basically checking each other's software.
4. An alternative methodology, based on individual station offsets from a regional norm, uses all available station data. It results in a slightly lower temperature trend even when applied to the corrected data, agreeing instead with HadCRUT4.

### Station Anomalies

I have calculated temperature anomalies for the three station datasets: GHCN V1, V3U (uncorrected) and V3C (corrected), in two different ways. First I use the orthodox method based on subtracting seasonal monthly normals; second, I use regional station offsets as described in the previous post. The results are then compared with the published data from three of the main 'independent' research groups: CRUTEM4, NCDC and BEST.

I start with the orthodox method, based on station anomalies derived from monthly normals calculated over a fixed 30-year period; later I compare it to my offset method described earlier. The figure below shows the GHCN V3C (corrected) global temperature anomaly compared to CRUTEM4. They are essentially the same, which demonstrates that I am calculating the anomalies correctly.

Comparison of V3C (corrected) global temperature anomalies calculated over the same 30-year period (1961-1990) as CRUTEM4

I selected only stations recording at least 10 years of the 30-year normalisation period. This reduces the total station count to 5855. This is summarised for V1 and V3 below.

| Dataset | Total stations | After normalisation cut |
|---------|----------------|-------------------------|
| V1      | 6039           | 2618                    |
| V3      | 7280           | 5855                    |
| CRUTEM4 | ?              | 5549                    |

CRUTEM4 uses 5549 stations, the vast majority of which are identical to GHCN. This is because CRU originally provided their station data to GHCN V1.

### Comparison of Uncorrected data with Corrected data.

Now we look at the differences between the uncorrected station data (V3U) and the corrected, homogenised data provided in V3C. The next plot shows the uncorrected V3U anomalies, calculated in exactly the same way as CRUTEM4; it is the equivalent of the plot shown above.

V3U (raw, uncorrected) anomalies calculated in exactly the same way as CRUTEM4

There is a clear systematic difference between the uncorrected V3U anomalies and CRUTEM4. The effect of correcting the historic temperature measurements has been to increase the net observed global warming signal. We can look at this in more detail by comparing all V1 and V3 anomalies; exactly the same processing software is used for all three datasets.

Comparison of V1 (green), V3U (uncorrected) and V3C (corrected). V1 and V3U essentially agree with each other whereas V3C has an increased temperature anomaly trend particularly in the past. The curves are smoothed moving averages

V1 and V3U give practically the same result, but V3C shows stronger warming. The conclusion is that the data correction and homogenisation procedure has increased the apparent "global warming" by 0.3C from 1850 to the present. I am not saying the corrections are wrong; there may be very good reasons why they were applied, but the end result has been to increase warming.

### Effects of changing normalisation period.

To investigate whether there is any change in trends if we select an earlier 30-year normalisation period, I recalculated the V3C anomalies based on a 1941-1960 period and compared them to the standard result.

Comparison of normalisation period 1941-1960 (green) with the standard 1961-1990 (blue)

The trend is the same and the only effect is to shift the ‘anomalies’ up by about 0.2C. This is simple to understand as the zero reference line is set 30 years earlier on a rising trend. The absolute values of temperature anomalies have no meaning since they are always relative to some reference time period.  The good news though is that the arbitrarily selected time period does not affect the trend.
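The invariance of the trend under a change of baseline is easy to verify numerically. The following sketch uses a purely synthetic linear series, invented for illustration (it is not real station data):

```python
import numpy as np

# Quick check, on a made-up rising series, that the choice of 30-year
# normalisation period shifts anomalies by a constant without touching
# the trend.
years = np.arange(1900, 2001)
temps = 14.0 + 0.007 * (years - 1900)      # steady 0.07C/decade warming

def anomalies(series, years, base_start, base_end):
    """Anomalies relative to the mean over [base_start, base_end]."""
    base = (years >= base_start) & (years <= base_end)
    return series - series[base].mean()

a_1941 = anomalies(temps, years, 1941, 1960)
a_1961 = anomalies(temps, years, 1961, 1990)

shift = a_1941 - a_1961                    # constant everywhere
trend_1941 = np.polyfit(years, a_1941, 1)[0]
trend_1961 = np.polyfit(years, a_1961, 1)[0]
```

With the slope chosen here the baseline change shifts every anomaly by exactly 0.175C, comparable to the ~0.2C shift seen above, while the fitted trend is identical for both baselines.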

### Comparison to other Datasets

I now compare the V3C global anomalies both to the NCDC and to the Berkeley Earth (BEST) global time series. NCDC base their anomalies on normals calculated over the 20th century; they also exclude data before 1880. BEST use the period 1951-1980 to define their anomalies and, as far as I understand, apply their own data homogenisation and linear interpolation procedures. To derive the BEST yearly anomalies I simply averaged their published monthly series. I use the V3C results normalised to 1941-1960 to be consistent with BEST/NCDC. The figure below compares all three datasets.

An almost exact match between NCDC, BEST and the calculated V3C (1941-1960)

They are essentially identical. The conclusions then are simple:

• They all use the same corrected station data.
• They all use the same anomaly definitions.
• Different interpolation algorithms make practically no difference to the end result.
• BEST and NCDC are not really independent checks on global surface temperature anomalies.

### Systematic differences in the definition of anomalies.

Does an average annual global land surface temperature actually exist? If so, is it increasing? Such a global surface temperature cannot be measured directly because the number and distribution of stations is always changing with time. However, we can measure the changes in temperature at each location once the far larger seasonal variations have been subtracted. The orthodox method is to define monthly temperature anomalies on a station-by-station basis relative to a 'normal' 30-year period, 1961-1990. The normals are the average temperatures for each month within the selected period, and the anomaly for each month is the measured temperature minus the monthly 'normal'. The anomalies are then gridded on a 5×5 degree base and area-averaged with cos(latitude) weighting to form first a global monthly average and then an annual average. This procedure assumes that all stations react in the same way to an external (CO2) forcing and that all locations behave in synchrony. All the climate groups (NASA GISS, NCDC, CRUTEM, BEST) basically follow the same procedure, and since they all use GHCN V3C stations they essentially all get the same result.
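As a rough illustration, the orthodox procedure can be sketched as follows. All station coordinates, counts and temperatures here are invented for the example; a real calculation would read the GHCN station files instead:

```python
import numpy as np

# Minimal sketch of the orthodox anomaly procedure on synthetic data.
# Each station carries a temps array of shape (n_years, 12) in deg C,
# covering 1951-2010.
rng = np.random.default_rng(0)
n_years, y0 = 60, 1951
stations = [(lat, lon, 25.0 - 0.4 * abs(lat) + rng.normal(0, 0.5, (n_years, 12)))
            for lat in (-42.5, 2.5, 47.5, 62.5) for lon in (-97.5, 12.5)]

def station_anomaly(temps):
    """Anomaly relative to the station's own 1961-1990 monthly normals."""
    i0, i1 = 1961 - y0, 1991 - y0
    normals = temps[i0:i1].mean(axis=0)    # 12 monthly 'normal' values
    return temps - normals

# Grid the station anomalies on a 5x5 degree base.
cells = {}
for lat, lon, temps in stations:
    key = (int(np.floor(lat / 5)), int(np.floor(lon / 5)))
    cells.setdefault(key, []).append(station_anomaly(temps))

# Area-average the cell means with cos(latitude) weights, then average
# the 12 months to get one global anomaly per year.
num, den = np.zeros((n_years, 12)), 0.0
for (ilat, _), anoms in cells.items():
    w = np.cos(np.radians(ilat * 5 + 2.5))  # weight by cell-centre latitude
    num += w * np.mean(anoms, axis=0)       # mean over stations in the cell
    den += w
global_annual = (num / den).mean(axis=1)
```

By construction the global anomaly averages to zero over the 1961-1990 baseline, since every station's own normals are subtracted before gridding.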

However, this orthodox method necessarily drops stations with poor coverage within the reference period. Efforts to avoid this rely on interpolation or on deriving values from near neighbours in the same grid cell; BEST uses such interpolation techniques. This generates fake data which might affect the final results slightly. NCDC use a longer time base for normals, covering the 20th century, but this is itself weighted towards the 1960s because that is where the station population peaks.
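The station-population weighting can be seen with a tiny synthetic example: a century-long 'normal' computed from a station that only reported in the 1950s-1970s is effectively a 1960s normal. All numbers below are invented:

```python
import numpy as np

# A 'normal' over 1901-2000 from a station with data only in 1951-1975
# is weighted by where observations exist, not by the nominal period.
years = np.arange(1901, 2001)
temps = 0.01 * (years - 1901)              # steadily warming station
present = (years >= 1951) & (years <= 1975)  # sparse early/late record

normal_ideal = temps.mean()                # century mean -> mid-1950 value
normal_actual = temps[present].mean()      # corresponds to 1963, not 1950
```

The realised normal (0.62) sits at the 1963 value rather than the century midpoint (0.495), so anomalies computed against it are shifted accordingly.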

Is the orthodox method correct? Is it the only way to eliminate sampling biases? Is there groupthink at work? To test this, I have defined another, grid-based anomaly using local station offsets as described in the previous post. I first compare the results of the two methods on GHCN V1 data.

There is very little difference between the offset anomalies and the standard anomalies. The only exception is before 1900, where the offset values are slightly warmer. This is exactly where many stations are rejected: the offset method uses all available stations, whereas the orthodox method drops many stations in this period if they fail to continue into 1961-1990. The main difference is that anomalies are calculated per grid cell using offsets rather than per station. Next we compare the two methods applied to V3C against CRUTEM4, which uses the orthodox method.

Comparison of different methods of calculating temperature anomalies. The offset method uses the seasonal mean offset of each station from the grid average. UPDATE: corrected 23/5

There is now a stronger difference in trends. The net warming from the 19th century is reduced to ~0.8-1.0C with the offset method. This value is more similar to SST measurements such as HadSST3. The offset method uses all 7280 stations whereas the traditional one uses only 5855. Remarkably, though, it agrees very well with HadCRUT4, which includes SST, as shown in the next figure.

Is it possible that the standard anomaly method artificially increases warming trends? Why should land surface air temperatures (30% of the globe) increase much faster than ocean air temperatures (the dominant 70%)? Such a discrepancy in trends could not continue indefinitely, or the temperature difference between land and ocean would grow forever. Maybe it is just a coincidence that the 'offset' method almost exactly matches HadCRUT4 rather than CRUTEM4; perhaps it gives biased results. Otherwise the orthodox anomaly calculation itself may be biased, exaggerating land warming trends compared to ocean temperatures.


## Untangling Global Temperatures

There are systematic problems in defining global "temperatures". This post looks further into them by comparing uncorrected GHCN weather station data with CRUTEM using different methodologies. This is an ongoing study.

Comparisons of uncorrected GHCN V3 with CRUTEM4 and uncorrected GHCN V1 with a contemporary CRUTEM (Jones 88) .

I have calculated new temperature anomaly data from all stations in the GHCN V1 and GHCN V3 raw data. I have applied a linear offset correction for each station and each month, as originally proposed by Tamino and discussed in the previous post, to avoid sampling bias. This avoids biases both from differing seasonal responses and from differing altitudes of individual stations within a single grid cell. The basic problem facing all analyses of global temperature data is that we only have an evolving set of stations and measurements over time. For this reason temperature anomalies for each station are traditionally used rather than raw temperatures. An anomaly is the difference between the measured temperature for one month and a 'normal' monthly value, usually pre-calculated over a fixed 30-year period. However, if we calculate such anomalies at the station level, then we must discard all stations with zero or few values within the fixed period. I wanted to avoid this and use ALL available data. GHCN V3 contains 7280 stations whereas CRUTEM4 has 5549 stations, which include 628 new ones from the Arctic.

To make progress, I assume that there exists a true average temperature for each month in a grid cell, $T_r$. We can estimate $T_r$ by subtracting the net offsets of those stations present each month. Then we calculate normals based on the monthly $T_r$ values and use these to derive grid anomalies. This new method also avoids discarding stations which do not fall within the 'normal' 30-year window, by correcting their offsets for the cells in which they do appear. I also want to avoid linear interpolation of station data to months and years where there are no measurements.

Each station is characterised by a monthly offset $\mu_i$ from the grid (regional) average. This is assumed constant in time because it is due, for example, to the station's altitude. We first calculate all the monthly average temperatures $T_{av}$. Then for each station we derive its offset, for each month, from the average of the particular grid cell in which it appears. This is our first estimate for $\mu_i$; there are 12 such offsets, one for each calendar month.

$\mu_i =\frac{1}{N_t}\sum_{time}{T_i -T_{av}}$

You can then iterate, using the new offsets to recompute the grid averages and derive an improved set of offsets. In practice the second iteration changes the end result only by a very small amount. Then, to estimate the true grid average temperature for a given month, $T_r$, we use

$T_r = \frac{1}{N_{grid}} \sum_{s}(T_s - \mu_s)$

So we average the seasonal temperatures per month in each grid cell across the full time range to get ‘normal’ temperatures. You can also select a fixed 30 year range for the normals but it makes little difference. We then calculate  one anomaly per month and per grid cell. These anomalies are first area averaged and then annually averaged to get yearly global temperature anomalies. The results are shown below for V3 compared to CRUTEM4 and V1 compared to a contemporary version of CRUTEM (Jones 1988)
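A minimal sketch of this offset scheme for a single grid cell, on synthetic data with deliberately missing values (all numbers are invented; the variable names follow the equations above):

```python
import numpy as np

# Offset method for one grid cell. Rows are months over the full record,
# columns are stations; NaN marks missing measurements.
rng = np.random.default_rng(1)
n_months, n_stations = 1200, 6             # 100 years of monthly data
true_mu = rng.normal(0, 3, n_stations)     # e.g. altitude differences
trend = 0.0005 * np.arange(n_months)       # slow underlying warming
T = trend[:, None] + true_mu[None, :] + rng.normal(0, 0.3, (n_months, n_stations))
gaps = rng.random(T.shape) < 0.3           # 30% missing values
gaps[:, 0] = False                         # keep one complete station
T[gaps] = np.nan

month = np.arange(n_months) % 12

def offsets(T, T_grid):
    """mu_i: time-mean of (T_i - T_grid), one value per calendar month."""
    mu = np.empty((12, T.shape[1]))
    for m in range(12):
        rows = month == m
        mu[m] = np.nanmean(T[rows] - T_grid[rows][:, None], axis=0)
    return mu

T_grid = np.nanmean(T, axis=1)             # first guess: plain cell average
for _ in range(2):                         # the second iteration changes little
    mu = offsets(T, T_grid)
    T_grid = np.nanmean(T - mu[month, :], axis=1)   # estimate of T_r

# 'Normals' per calendar month over the full record, then one anomaly
# per month for the cell.
normals = np.array([T_grid[month == m].mean() for m in range(12)])
anomaly = T_grid - normals[month]
```

Despite 30% missing data and no fixed normalisation window, the cell anomaly recovers the underlying trend, since the station offsets absorb the composition changes.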

There is fairly close agreement, except before 1920 where V3 is warmer and post-1998 where V3 is cooler. Now look at V1 compared to the contemporary CRUTEM series published by Phil Jones in 1988.

In general there is good agreement between CRUTEM and V1 except before 1900. In both cases the net warming since the late 19th century is about 0.2C less than that observed by CRUTEM.

Now I can already hear objections from Nick Stokes to my methodology: he will likely argue that you should average station 'anomalies' in each cell and not 'temperatures'. Another objection could be that my normals are calculated over a longer time period instead of the standard 1961-1990. Let's look at these in turn.

In principle I could first derive anomalies for each station by using the offsets and fixing the monthly Tm at one particular year, such as 1975. The station anomaly would then be (Tmes + offset) – Tm. I can look into this if I find time, but I don't believe it can make much difference.

My main objective was to use all (raw) station data, independent of time span, to answer one question. The problem with a fixed 30-year normalisation period is that a significant number of stations have no data with which to define such 'normals'. One could imagine using the offsets to define station normals based on the grid average temperature, but this generates fake data. There is another, philosophical reason why this may be wrong.

Suppose that for some reason only winters have been gradually warming. By defining seasonal normals within a recent fixed time span, you risk skewing the winter anomalies by reducing the natural seasonal variation.

My conclusion thus far is that there are small but significant differences in temperature trends depending on the definition of temperature anomalies and on data correction/homogenisation. The overall warming trend varies by about 0.2C depending on how it is defined.
