A Study of Global Land Temperature Anomalies

In this post I summarise the results of calculating temperature anomalies using weather station data from GHCN versions 1, 3 (uncorrected) and 3 (corrected). I compare these results against those of three of the main climate groups. Updated 23/5: a small correction to station offsets (see comment).

Overview

  1. Calculating global land temperature anomalies from GHCN V3C using the standard algorithm adopted by CRU reproduces the results of CRUTEM4, NCDC and Berkeley Earth.
  2. Exactly the same calculation applied to the uncorrected GHCN V3U (raw) data results in ~0.3C less warming since the 19th century. The same result is confirmed when the calculation is applied to the earlier GHCN V1 data. The conclusion is that weather station correction and data homogenisation have increased apparent global warming on land by ~30%.
  3. All the independent climate groups essentially use the same core set of station data and adopt variations of the same methodology. Their results differ only very slightly once a correction for the normalisation period is applied. The results are not independent confirmations of global warming; instead they are basically checking each other's software.
  4. An alternative methodology based on individual station offsets to a regional norm uses all available station data. It also results in a slightly lower temperature trend, even when applied to the corrected data, and agrees instead with Hadcrut4.

Station Anomalies

To investigate temperature anomaly values, I have calculated temperature anomalies for the three station datasets: GHCN V1, V3U (uncorrected) and V3C (corrected), in two different ways. Firstly I use the orthodox method based on subtracting seasonal monthly normals, and secondly I use regional station offsets as described in the previous post. The results are then compared with the published data from three of the main 'independent' research groups: CRUTEM4, NCDC and BEST.

I start by using the orthodox method based on station anomalies derived from monthly normals calculated over a fixed 30 year period, and then compare it with the offset method described in the previous post. The figure below shows the GHCN V3C (corrected) global temperature anomaly result compared to CRUTEM4. They are essentially the same, which demonstrates that I am calculating them correctly.

Comparison of V3C (corrected) global temperature anomalies calculated over the same 30 year period (1961-1990) as CRUTEM4

I selected only stations recording at least 10 years of the 30 year period. This reduces the total station count to 5855. The station counts for V1 and V3 are summarised below.

Dataset     Total stations   After normalisation
V1          6039             2618
V3          7280             5855
CRUTEM4     ?                5549

CRUTEM4 uses 5549 stations, the vast majority of which are identical to GHCN. This is because CRU originally provided their station data to GHCN V1.
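For concreteness, here is a minimal sketch (Python/pandas, not the actual code used for these plots) of the orthodox station-anomaly step just described: the monthly 'normals' are the 1961-1990 monthly means per station, the anomaly is the measurement minus the normal, and stations with too little coverage in the base period are dropped. Column names are hypothetical, and the minimum-coverage test is applied per calendar month here, which may differ in detail from the selection actually used.

```python
import pandas as pd

def station_anomalies(obs, base=(1961, 1990), min_years=10):
    """Orthodox method: anomaly = monthly temperature minus the station's
    1961-1990 monthly normal.
    obs columns (hypothetical): station, year, month, temp (degrees C)."""
    in_base = obs[(obs.year >= base[0]) & (obs.year <= base[1])]
    # Monthly normals per station; require at least `min_years` values so
    # that poorly covered stations are dropped, as described above.
    normals = (in_base.groupby(['station', 'month'])['temp']
                      .agg(['mean', 'count'])
                      .query('count >= @min_years')['mean']
                      .rename('normal'))
    out = obs.join(normals, on=['station', 'month'], how='inner')
    out['anomaly'] = out['temp'] - out['normal']
    return out
```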

Comparison of Uncorrected data with Corrected data.

Now we look at the differences between the uncorrected station data (V3U) and the corrected, homogenised data (V3C). The next plot shows the uncorrected V3U anomalies, calculated in exactly the same way, compared with CRUTEM4. It is the equivalent of the plot shown above.

V3U (raw uncorrected) anomalies calculated exactly the same way as CRUTEM4

There is a clear systematic difference between the uncorrected V3U and CRUTEM4. The effect of correcting the historic temperature measurements has been to increase the net observed global warming signal. We can look at this in more detail by comparing all V1 and V3 anomalies. Exactly the same processing software is used for all three.

Comparison of V1 (green), V3U (uncorrected) and V3C (corrected). V1 and V3U essentially agree with each other, whereas V3C has an increased temperature anomaly trend, particularly in the past. The curves are smoothed moving averages.

V1 and V3U give practically the same result, but V3C shows stronger warming. The conclusion is that the data correction and homogenisation procedure has increased the apparent "global warming" by 0.3C from 1850 to the present. I am not saying the corrections are wrong, and there may be very good reasons why they were applied, but the end result has been to increase warming.

Effects of changing normalisation period.

To investigate whether the trend changes if we select an earlier normalisation period, I recalculated the V3C anomalies using a 1941-1960 base period and compared them to the standard result.

Compare normalisation period 1941-1960 (green) with standard 1961-1990 (blue)

The trend is the same and the only effect is to shift the 'anomalies' up by about 0.2C. This is simple to understand: the zero reference line is set earlier on a rising trend. The absolute values of temperature anomalies have no meaning since they are always relative to some reference time period. The good news though is that the arbitrarily selected time period does not affect the trend.
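In code terms the re-baselining is just a constant shift of the whole series, which is why the trend cannot change. A one-function sketch (the series name is hypothetical):

```python
import pandas as pd

def rebaseline(annual, start, end):
    """Shift an annual anomaly series (pandas Series indexed by year) so its
    mean over [start, end] is zero. Only the zero line moves; year-to-year
    differences, and hence the trend, are unchanged."""
    return annual - annual.loc[start:end].mean()

# e.g. rebaseline(v3c_annual, 1941, 1960) moves the zero reference earlier,
# lifting the whole curve by ~0.2C on a rising trend (v3c_annual is hypothetical).
```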

Comparison to other Datasets

I now compare the V3C global anomalies to both the NCDC and the Berkeley Earth (BEST) global time series. NCDC base their anomalies on normals calculated over the 20th century; they also exclude data before 1880. BEST use the period 1951-1980 to define their anomalies and, as far as I understand, apply their own data homogenisation and linear interpolation procedures. To derive the BEST yearly anomalies I simply averaged their published monthly series. I use the V3C results normalised to 1941-1960 to be consistent with BEST/NCDC. The figure below compares all three datasets.

An almost  exact match between NCDC, BEST and the calculated V3C (1941-1960)

They are essentially identical. The conclusions then are simple:

  • They all use the same corrected station data.
  • They all use the same anomaly definitions.
  • Different interpolation algorithms make practically no difference to the end result.
  • BEST and NCDC are not really independent checks on global surface temperature anomalies.

 Systematic differences in the definition of anomalies.

Does an average annual global land surface temperature actually exist? If so, is it increasing? Such a global surface temperature cannot be measured directly because the number and distribution of stations is always changing with time. However we can measure the changes in temperature at each location once the far larger seasonal variations have been subtracted. The orthodox way to do this is to define monthly temperature anomalies on a station by station basis relative to a 'normal' 30 year period, 1961-1990. The normals are the average temperatures for each month within the selected period, and the anomaly for each month is the measured temperature minus the monthly 'normal'. These anomalies are then gridded onto a 5×5 degree grid and area-averaged with cos(lat) weighting to form first a global monthly average and then an annual average. This procedure assumes that all stations react in the same way to an external (CO2) forcing and that all locations behave in synchrony. All the climate groups (NASA GISS, NCDC, CRUTEM, BEST) follow basically the same procedure, and since they all use GHCN V3C stations they essentially all get the same result.
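As a sketch of the gridding and area-weighting step just described (again Python/pandas with hypothetical column names; `anoms` would be the per-station monthly anomaly table from the earlier sketch):

```python
import numpy as np
import pandas as pd

def global_annual_anomaly(anoms):
    """Grid monthly station anomalies onto 5x5 degree cells, average the
    stations within each cell, area-weight the cells by cos(latitude),
    then average the monthly global values into an annual value.
    anoms columns (hypothetical): lat, lon, year, month, anomaly."""
    a = anoms.copy()
    a['glat'] = np.floor(a['lat'] / 5) * 5 + 2.5      # cell-centre latitude
    a['glon'] = np.floor(a['lon'] / 5) * 5 + 2.5      # cell-centre longitude
    cells = (a.groupby(['year', 'month', 'glat', 'glon'])['anomaly']
               .mean().reset_index())
    cells['w'] = np.cos(np.radians(cells['glat']))    # area weight per cell
    monthly = (cells.groupby(['year', 'month'])
                    .apply(lambda g: np.average(g['anomaly'], weights=g['w'])))
    return monthly.groupby(level='year').mean()       # global annual mean
```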

However this orthodox method necessarily causes stations with poor coverage within the reference period to be dropped. Efforts to avoid this rely on interpolation or on deriving values from near neighbours in the same grid cell; BEST uses such interpolation techniques. This generates fake data which might affect the final results slightly. NCDC use a longer baseline for their normals, covering the 20th century, but this is itself weighted towards the 1960s because that is where the station population peaks.

Is the orthodox method correct? Is it the only way to eliminate sampling biases? Is there group thinking happening? For this reason I have tried to define an alternative grid-based anomaly using local station offsets, as described in the previous post. I first compare the results of the two methods on GHCN V1 data.
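For concreteness, here is a rough sketch of the offset idea (Python/pandas, hypothetical column names). This is only a summary of the method described in the previous post, not necessarily the exact algorithm: each station is shifted by its seasonal (monthly) mean offset from its 5×5 grid cell, the shifted values are averaged within the cell, and the cell anomaly is taken relative to the cell's own long-term monthly mean, so no station has to be discarded for missing the 1961-1990 window.

```python
import numpy as np
import pandas as pd

def offset_cell_anomalies(obs):
    """Sketch of the 'offset' method: remove each station's seasonal mean
    offset from its grid cell before averaging, and form anomalies per cell
    rather than per station, so that every station can be used.
    obs columns (hypothetical): station, lat, lon, year, month, temp."""
    o = obs.copy()
    o['glat'] = np.floor(o['lat'] / 5) * 5 + 2.5
    o['glon'] = np.floor(o['lon'] / 5) * 5 + 2.5
    # Long-term monthly means of each cell and of each station (whole record,
    # not a fixed 30-year window, so no station is dropped).
    cell_norm = o.groupby(['glat', 'glon', 'month'])['temp'].transform('mean')
    stn_norm = o.groupby(['station', 'month'])['temp'].transform('mean')
    # Shift each station onto its cell's seasonal level.
    o['adjusted'] = o['temp'] - (stn_norm - cell_norm)
    cells = (o.groupby(['year', 'month', 'glat', 'glon'])['adjusted']
               .mean().rename('cell_temp').reset_index())
    norms = (o.groupby(['glat', 'glon', 'month'])['temp'].mean()
               .rename('cell_norm').reset_index())
    cells = cells.merge(norms, on=['glat', 'glon', 'month'])
    cells['anomaly'] = cells['cell_temp'] - cells['cell_norm']
    return cells
```

The resulting cell anomalies can then be area-averaged exactly as in the orthodox sketch above.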

Comparison of the offset method with the standard (Jones) normals method applied to GHCN V1

There is very little difference between the offset anomalies and the standard anomalies. The only exception is before 1900, where the offset values are slightly warmer. This is exactly where many stations are rejected: the offset method uses all available stations, whereas the orthodox method drops many stations in this period if they fail to continue into 1961-1990. The main difference is that anomalies are calculated per grid cell using offsets rather than per station. Next we compare the two methods applied to V3C against CRUTEM4, which uses the orthodox method.

Compare different methods of calculating temperature anomalies. The offset method uses the seasonal mean offset of each station from the grid average. UPDATE: corrected 23/5

There is now a stronger difference in trends. The net warming since the 19th century is reduced to ~1.0C with the offset method (it was ~0.8C before the 23/5 correction). This value is closer to SST measurements such as HADSST3. The offset method uses all 7280 stations whereas the traditional one uses only 5855 stations. Remarkably, though, it agrees very well with Hadcrut4, which includes SST, as shown in the next figure.

Comparison of the V3C offset-method anomalies with Hadcrut4

Is it possible that the standard anomaly method artificially increases warming trends? Why should land surface air temperatures (30% of the surface) increase much faster than the dominant (70%) ocean air temperatures? Such a discrepancy in trends could not possibly continue indefinitely or the temperature difference between land and ocean would grow forever. Maybe it is just a coincidence that the 'offset' method almost exactly matches Hadcrut4 rather than CRUTEM4, and perhaps it gives biased results. Otherwise the orthodox anomaly calculation itself may be biased and exaggerate land warming trends compared to ocean temperatures.

 


28 Responses to A Study of Global Land Temperature Anomalies

  1. Keith Brown says:

    Though not conclusive this is an interesting analysis. Whilst the mathematical / statistical methods applied in climate models are no doubt valid in general, the devil is in the detail, and the possibility of a widely adopted orthodoxy generating a small but significant bias cannot be ruled out imo – though I am not so up to speed as to be able to test them out for myself. It would be good to get feedback on your work from the climate scientists, and I mean constructive and illuminating feedback, not dismissal. Is there any prospect of this?

  2. Nick Stokes says:

    “A Study of Global Temperature Anomalies”
    No, you are studying land only anomalies. This is quite different to the usually quoted land/ocean index. This is mis-stated often throughout.

    “Exactly the same calculation when applied to the uncorrected GHCN V3U (raw) results in ~0.3C less warming since the 19th century. “
    Lawrimore et al did a similar calculation (Table 4). Their number is about 0.23C:
    1901–2010: 0.70°C/century (raw) vs 0.91°C/century (adjusted)

    I did a detailed breakdown here. A bit less than Lawrimore for all land. I get a significant downtrend since about 1970. Your difference there may be due to the culling of stations to meet the basis period requirement. I don’t do that.

    “NCDC base their anomalies on normals calculated over the 20th century. “
    I may be wrong, but I believe NCDC use 1961-90 at grid level, then switch to 20th Cen after averaging.
    “BEST use the period 1951-1980 to define their anomalies”
    I believe BEST use a somewhat similar procedure to mine, least squares fitting. Later normalised to 1951-80.

    “This procedure assumes that all stations react in the same way to an external (CO2) forcing and that all locations behave in synchrony.”
    No, it’s just averaging. No assumptions about forcing.

    “Is it possible that the standard anomaly method artificially increases warming trends?”
    I think it is much more likely that yours reduces them. This tendency has been known for a long time. I’ve recently analysed this here in detail. You can make use of all station data. You just have to apply an iteration, of which your method is the first step, if used for station normals. Using grid cell normals has no merit at all.
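    For illustration, a minimal sketch of one such iteration (one possible reading, not the exact procedure; column names are hypothetical): alternate between estimating the monthly series from offset-corrected stations and re-estimating each station's offset against that series, until both converge.

    ```python
    def iterative_offsets(obs, n_iter=20):
        """Fit temp ~ monthly_series + station_offset by alternating updates,
        so that all station data can be used without a fixed base period.
        obs is a pandas DataFrame with (hypothetical) columns:
        station, year, month, temp."""
        o = obs.copy()
        o['offset'] = 0.0
        for _ in range(n_iter):
            # Monthly series given the current station offsets.
            series = (o['temp'] - o['offset']).groupby([o['year'], o['month']]).transform('mean')
            # Station offsets given the current monthly series.
            o['offset'] = (o['temp'] - series).groupby(o['station']).transform('mean')
        monthly = (o['temp'] - o['offset']).groupby([o['year'], o['month']]).mean()
        return monthly, o[['station', 'offset']].drop_duplicates()
    ```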

    It is certainly true that the adjustments applied increase trends over the century (but reducing them in the later part). But it’s your method that gets the trend wrong, not theirs.

    • Clive Best says:

      You’re right about land only temperatures. I will change the title.

    • Clive Best says:

      Nick,

      “This procedure assumes that all stations react in the same way to an external (CO2) forcing and that all locations behave in synchrony.”
      No, it’s just averaging. No assumptions about forcing.

      You are being a little bit pedantic. Yes, it's averaging. But the implicit assumption is that the average represents the change in temperature for that grid cell since ~1975. The annual global average is assumed to represent the net change in temperature over all land surfaces. This change in temperature is presented as the response to enhanced CO2 forcing. That is all I am trying to say.

      I think it is much more likely that yours reduces them…… Using grid cell normals has no merit at all……

      All methods have some in-built bias and maybe mine has more than theirs – we will see. At least it is different. I don’t understand why 5 or more research groups essentially repeat the same work on the same data and get the same results. Do we really need that much duplication?

      Also how do you answer the following conundrum?

      CRU/BEST/NCDC data show that land temperatures are rising at a faster rate than oceans (HADSST3). They have risen by about 1.4C since 1880 whereas oceans have risen only by half that amount. If this trend continues then land temperatures will reach 4C before oceans rise by 2C. Do you think that is reasonable?

      • Land at 4C and Oceans at 2C are not unreasonable. Look at these model outputs from the MET office for a 4C mean global warming, for example:

        I can’t speak for the other groups, but I would hardly call Berkeley duplication. It uses a (mostly) different dataset, different homogenization, different spatial interpolation, and different station record combination approach than any other group. It doesn’t even technically use anomalies. In fact, I can’t think of a single thing that it does that overlaps with NCDC, apart from using GHCN-M as part of the dataset.

        GISS does its own weird spatial interpolation that's different from standard gridding. They also apply specific UHI adjustments that other groups don't use, based on satellite-detected nightlight brightness.

        CRUTEM doesn’t use the same adjustments as NCDC and GISS. In fact, they do very little adjustment themselves, mostly relying on adjustments made by national MET offices (while NCDC/GISS ignore these national MET office adjustments in favor of their own automated homogenization approach). This is one of the reasons CRUTEM4 tends to be more similar to GHCN v3 raw than GHCN v3 adjusted.

      • Nick Stokes says:

        Clive,
        “All methods have some in-built bias and maybe mine has more than theirs – we will see. At least it is different.”
        There is a simple test which I covered in some detail in my latest post. Take a world with no spatial T variation, uniform progression in time. Two stations which measure this accurately, but have missing values – simplest to put that at opposite ends of the time range. Does your estimate return the uniform progression? Common anomaly and least squares both do. I think your answer will have reduced trend.

        “Do we really need that much duplication?”
        So which should be discontinued?

        “Do you think that is reasonable?”
        Yes. GHG’s increase downwelling IR which falls on land and sea alike. The sea is a heat sink; land not so much. So the heat has to be transported from land to sea, raising the land temperature in the process.

  3. You are incorrect in a few of your assertions regarding Berkeley Earth.

    1) Berkeley Earth does not use homogenized data from NCDC (your V3C). Rather, it uses raw data (from multiple sources) and applies its own homogenization technique that is independent of the one used by NCDC.

    2) Berkeley does not use the same underlying set of stations. Rather, it uses >30k stations (vs. the 7k or so in GHCN Monthly). This even allows us to reconstruct the global temperature record using -no- stations contained in GHCN Monthly:

    Also, I believe your “orthodox method” calcs are still a tad buggy. There should be very little difference between either GHCN v3 raw or adjusted and CRUTEM4 in recent years using a 1961-1990 baseline period:

  4. Clive Best says:

    Also, I believe your “orthodox method” calcs are still a tad buggy. There should be very little difference between either GHCN v3 raw or adjusted and CRUTEM4 in recent years using a 1961-1990 baseline period:

    Yet I find this graph of the effect of adjustments between V3 raw and corrected on your website

    which supports exactly what I find?

    • You do realize that the “US” in USHCN stands for United States, right? While many of my compatriots may think otherwise, the U.S. is not the world (and homogenization has much larger impacts in the U.S. than globally).

  5. Ron Graf says:

    Why should the 30% land surface air temperatures increase much faster than the dominant 70% ocean air temperatures? Such a discrepancy in trends could not possibly continue indefinitely or the temperature difference between land and ocean would forever grow.

    Clive, I would say you have a strong argument here notwithstanding the understandable establishment skepticism. Group-think, I believe, is the Orwellian term you are looking for when asking: "Is there group thinking happening?"

    Certainly, if a change in forcing was occurring it would affect land before sea, just as it should affect the TOA before the surface. Then the sea would close the gap, especially if the rate of change in forcing diminished. This is how one would expect models to behave.

    Frankly, I am surprised there is not much more interest in your results among believers in high AGW and high ECS, for the reasons I gave above, except that I see a problem. If the land surface temp is more coupled with the ocean surface temp then there would be a closer correlation in anomaly trend and there would be a closer fingerprint match from effects of ocean current oscillation, the AMO/PMO. On the other hand, if there is less coupling one would expect to see your less oscillating trend followed by a sharp rise as the CO2 Keeling curve rose after 1950 to the present. Clive, if your process is the more accurate one, then ECS/TCR is at the low end, but the CO2 forcing fingerprint is clearer. Do you agree?

    Lastly, I notice your first chart has V3C and CRUTEM4 in sync except for what I would believe should be the most easily reproduced segment, the last 15 years.
    Any explanations?

  6. Clive Best says:

    Ron,

    You make a good point about AMO/PDO. If these are responsible for the 60y oscillation observed in Hadcrut4, then the fact that the same oscillation occurs in the land data means there is strong coupling. Clearly land and ocean temperatures are strongly coupled, as evidenced by ENSO. Land temperatures react fast to changes in forcing (night/day) and albedo (cloud/snow) due to their low heat capacity. Ocean temperature changes are slow, but oceans obviously stabilise land temperatures.

    Yes, since the trend is smaller, ECS/TCR would also be smaller. I am not claiming I have got it right and everyone else has got it wrong. Nick may be right that grid scale averaging affects trends. I have allowed for that by subtracting station offsets.

    Nick glosses over the fact that there are many grid cells with just one station, for example islands and remote places such as Antarctica, Africa and South America. This problem increases in magnitude in the 19th century. It is only Europe and N. America that have several stations per grid cell. This is where the trend discrepancy is greatest.

    When CRU moved from CRUTEM3 to CRUTEM4 I looked at where the changes were. They added hundreds of stations at high latitudes near the arctic and removed others in the southern hemisphere. The net effect was to change the trend. 2005 became hotter than 2008. This is an example of sampling bias.

    • Nick Stokes says:

      Clive,
      “I have allowed for that by subtracting station offsets”

      That’s a first step. But not enough. I suggest the following test. Take one of your datasets, and do the same averaging as you did for temp, with the same station/months reporting, but substituting date (maybe in Century units) for temperature numbers. So all stations have exactly the same trend (1 year/year). That’s what your averaging process should return for the global. But I bet it doesn’t.

  7. Robert Way says:

    I believe it is fairly well known that land-adjustments increase the warming trend whereas ocean adjustments decrease it leading to a total decrease in the trend from raw to adjusted calculated for a global land+ocean index.

    • Clive Best says:

      I doubt whether it is well known at all that corrections to ocean temperatures decreased warming trends. Are you saying that corrections made to ‘bucket’ measurements during the pre-satellite era decreased trends ?

      Here is another conundrum for someone to explain. As an exercise a few years ago I calculated the average global land 'temperature'. This is of course anathema to Nick and the absolute value should not be taken seriously. However I did it two different ways.

      1. Simply calculate the global weighted average on a 5×5 grid
      2. Average the global weighted radiative temperature (T^4) on a 5×5 grid. Then take the 4th root.

      The result was this :

      The southern hemisphere is dominated by oceans and shows no difference between the two, whereas in the northern hemisphere the difference is significant.

      I think this shows how rapidly land surfaces react to any changes in solar/CO2 forcing.
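      For concreteness, a minimal sketch of the two averages in points 1 and 2 above (hypothetical inputs: absolute cell temperatures in kelvin and the cell latitudes):

      ```python
      import numpy as np

      def two_global_means(T, lat):
          """(1) area-weighted arithmetic mean temperature, and
          (2) 'radiative' mean: area-average T**4, then take the 4th root.
          T   : absolute cell temperatures in kelvin (1-D array)
          lat : latitude of each cell centre in degrees (same length)."""
          w = np.cos(np.radians(lat))                 # area weight per 5x5 cell
          arithmetic = np.average(T, weights=w)
          radiative = np.average(T**4, weights=w) ** 0.25
          # By convexity the radiative mean is always >= the arithmetic mean, and
          # the gap grows with the spread of temperatures, which is larger over
          # northern-hemisphere land than over the ocean-dominated south.
          return arithmetic, radiative
      ```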

      • Nick Stokes says:

        Yes, anathema. However, it might be quite interesting to calculate average of 4T^3*anomaly.

        I suggested above a test for any method to see if it could recover an imposed spatially uniform linear T increase. I did my own version here. However, radiation to space is mostly not determined by surface temperature.

        • A C Osborn says:

          “However, radiation to space is mostly not determined by surface temperature.”
          That is a very interesting statement.
          Have you got something to back it up as I thought the “Energy Budget” depended upon it?

          • Nick Stokes says:

            It is determined principally by the temperature at the point of final emission, which is near TOA for most thermal IR (except the atmospheric window).

            With AGW you get higher surface temperatures, with no steady state increase in OLR (which still has to balance incoming solar).

          • Clive Best says:

            Of course it is. The atmosphere is mostly transparent to solar radiation which warms the surface. 90% of heat loss is from the surface. CO2 and H2O just act like an IR fog

          • Nick Stokes says:

            “CO2 and H2O just act like an IR fog”
            And it is the fog that does most of the emission to space. That’s why the apparent IR temperature seen from space is about 255K. That’s not surface temperature.

          • Clive Best says:

            Yes but it is a moot point. The surface warms the atmosphere which radiates to space mainly from H2O molecules. The main heat source for the atmosphere is the surface. CO2 radiation contributes ~ 4C of greenhouse warming while H2O and clouds provide the rest.

        • Clive Best says:

          ΔT = ΔS / (4 ε σ T^3)

          Yes that is better.

  8. Ron Graf says:

    Does anyone know if the marine climates of the western USA and the UK have been analysed for trend coupling to ocean temperature trends? And have the marine climate trends then been compared to increasingly non-marine land temperature trends to diagnose that trend? Because from that one should be able to extrapolate the pure land temperature trends, effectively providing a method for direct observation of ECS by removing the ocean uptake lag from the trends. Would not a study of the continental 15-30-year temperature trends, corrected for the marine effect, give a statistically significant approximation of ECS?

    • Ron Graf says:

      To clarify, I am proposing that one should be able to derive ECS from the record because there are multiple mathematical relationships in the data. The first is the temperature anomaly described in this post. The second is the rate of fading of the marine imprint as the air temperature adjusts to land surface effects. The marine effect, of course, warms the air half the year and cools the air the other half. But this supposes the TOA imbalance nets out on average to zero. If there is a net average positive imbalance through the year, the land effect should be positive and in sync with the TOA. Even the little heat sink of the land could be eliminated by extrapolation since the heat capacity of land vs ocean is well known.

      Therefore a marine effect study of the data should not only yield a baseline but also a time trend in TOA imbalance. This in turn could be plotted against the Keeling curve to calculate ECS.

      Further, a marine effect study can be more accurate than global temp since it can be independently verified across multiple regions.

  9. Robert Way says:

    “I doubt whether it is well known at all that corrections to ocean temperatures decreased warming trends. Are you saying that corrections made to ‘bucket’ measurements during the pre-satellite era decreased trends ?”

    Yes

    See here for a recent example of this:
    http://variable-variability.blogspot.ca/2015/02/homogenization-adjustments-reduce-global-warming.html

  10. Clive Best says:

    I found a small error in my "offset" method. I had used the average temperatures per grid cell from the uncorrected V3 data by mistake to calculate the station offsets. Repeating it with the V3 corrected averages improves agreement. It is really only the 19th century where there is a small discrepancy.
