A new Global Temperature Calculation

Monthly and annual temperature anomalies are calculated by a new triangulation method applied to all V3C station data and HadSST3 ocean data. The normalisation period is 1961-1990. All data and code are available (see below).

In recent posts I have been combining GHCN V3C station data with Hadley HadSST3 ocean data. The station data are located at specific points, while HadSST3 data are recorded sporadically within a regular 5×5 degree grid. In addition, the set of active station locations and active SST cells varies from month to month. This is particularly true for the early years of the temperature series.

I have been using IDL to make a Delaunay triangulation in (lat,lon) of all active measurements globally for each month, and then to investigate different interpolations of this onto regular grids. However, following a proposal by Nick Stokes, I realised that this is unnecessary because the triangulation itself can produce an almost perfect global average. I believe this is probably the best spatial integration possible because it uses all available data and gives a direct result. There is no need for any interpolation or grid cell averaging. The last two posts showed how such interpolation can introduce biases; this is avoided by triangulation-based spatial averaging. Here is the algorithm I used.

  • Each triangle contains one measurement at each vertex. Use Heron’s formula, $A = \sqrt{s(s-a)(s-b)(s-c)}$ where $a, b, c$ are the side lengths and $s = (a+b+c)/2$, to calculate the area of the triangle.
  • Calculate the centroid position and assign it an anomaly equal to the average of the 3 vertex values. This centroid value is then used as the average anomaly for the whole triangular area.
  • Use a spatial weighting for each triangle in the integration equal to $wt_i = \cos(lat_i) \times area_i$, where $lat_i$ is the latitude of the centroid of triangle $i$.
  • Calculate the global average $\bar{T} = \frac{\sum_i T_i\, wt_i}{\sum_i wt_i}$, where $T_i$ is the centroid anomaly of triangle $i$. A code sketch of these steps follows below.
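
As an illustration, here is a minimal Python sketch of the algorithm (my own implementation is in IDL, linked at the end of this post). The arrays `lons`, `lats` and `anoms` are placeholders for one month of combined station/SST anomalies.

```python
# Minimal sketch of the triangulation average, assuming lons, lats, anoms
# are 1-D numpy arrays of one month's active measurements.
import numpy as np
from scipy.spatial import Delaunay

def triangulation_average(lons, lats, anoms):
    pts = np.column_stack([lons, lats])
    tri = Delaunay(pts)                      # triangulation in (lon, lat)

    total, weight = 0.0, 0.0
    for simplex in tri.simplices:
        p = pts[simplex]                     # the 3 vertices of this triangle
        # Heron's formula for the triangle area
        a = np.linalg.norm(p[0] - p[1])
        b = np.linalg.norm(p[1] - p[2])
        c = np.linalg.norm(p[2] - p[0])
        s = 0.5 * (a + b + c)
        area = np.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))

        centroid_lat = p[:, 1].mean()        # centroid latitude
        t_centroid = anoms[simplex].mean()   # centroid anomaly = vertex average

        wt = np.cos(np.radians(centroid_lat)) * area
        total += t_centroid * wt
        weight += wt
    return total / weight                    # area-weighted global average
```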

Centroid of any triangle

Using this method every possible measurement is used and no extrapolation outside triangular boundaries is needed. Each month is normalised to the surface area covered only by the active triangles. This also means that for areas of dense station data, like the US and Europe, regional temperature variation can be studied at full resolution. There is no need to average nearby temperature measurements. I think triangulation is a near perfect solution for regional as well as global temperature studies. It is something I will look at next.

To see how all this works in practice, we first look at the triangulation results for January 1880, with rather sparse measurements.

Colours of triangles represent centroid temperatures. They are washed out because I use a transparency of 50% and I am using my normal ‘conservative’ colour scheme.

In comparison, here is January 2015 with dense data in the US and Europe. I am plotting each individual triangle, showing just how dense the measurements are in the US. For regional studies it is best to suppress the triangle borders and show just the colour coding.

January 2015, showing intense triangulation in North America and Europe. The triangle edges also obscure some of the colouring, but the plot shows how well the system copes with complex patterns.

For all anomalies I use the same normalisation period as Hadcrut4 (1961-1990). To compare the results to other series, I renormalise those also to 1961-1990; the offsets used for each series are shown in the plots. A detailed comparison of recent years is then shown below. The new result lies neatly between those temperature series which use infilling and Hadcrut4, which doesn't.

Detailed comparison for recent years with the major global temperature series.

A comparison of the monthly values shows a remarkably close agreement with Berkeley Earth.

In conclusion, a new method for calculating spatial averages of randomly dispersed temperature data has been developed, based on triangulation. The original idea for this method is due to Nick Stokes. The method also has potential benefits for detailed regional studies.

Annual temperature series are available here
Monthly temperature series are available here
IDL code to generate all results is available here


Tweaking Global Temperature Data

The “kriging” biases described in the previous post can be mostly avoided by using ‘spherical’ triangulation. This treats the earth as a sphere and triangulates all measurement points on the earth’s surface; in this case vertex angles no longer add up to 180 degrees. The data are then re-gridded onto a regular 2 degree grid covering all latitudes and longitudes using an inverse distance weighting, and the spatial average over all latitudes and longitudes is calculated. The data used are all 7300 stations from GHCN V3C combined with HadSST3 ocean temperature data. A rough code sketch of this procedure is shown below, followed by a comparison of this new method with all the other data.
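
The following Python sketch is a simplification (my own code is IDL, linked at the end of this post): the inverse distance weighting below just uses the k nearest measurements, whereas IDL’s GRIDDATA interpolates within the spherical triangulation itself. `lons`, `lats` and `anoms` are again placeholder arrays for one month of measurements.

```python
# Hedged sketch: spherical triangulation via the convex hull of points on the
# unit sphere, then a simple inverse-distance-weighted (IDW) regridding onto
# a regular 2 degree grid.
import numpy as np
from scipy.spatial import ConvexHull, cKDTree

def to_xyz(lons, lats):
    """Project (lon, lat) in degrees onto the unit sphere."""
    lam, phi = np.radians(lons), np.radians(lats)
    return np.column_stack([np.cos(phi) * np.cos(lam),
                            np.cos(phi) * np.sin(lam),
                            np.sin(phi)])

xyz = to_xyz(lons, lats)
# For points on a sphere the convex hull facets are the spherical Delaunay
# triangles, so vertex angles no longer sum to 180 degrees.
triangles = ConvexHull(xyz).simplices

# IDW regridding: each 2 degree cell is filled from its k nearest
# measurements (k=4 is an illustrative choice, not my IDL setting).
glon, glat = np.meshgrid(np.arange(-179.0, 181.0, 2.0),
                         np.arange(-89.0, 91.0, 2.0))
dist, idx = cKDTree(xyz).query(to_xyz(glon.ravel(), glat.ravel()), k=4)
w = 1.0 / np.maximum(dist, 1e-9)          # inverse (chord) distance weights
grid = ((w * anoms[idx]).sum(axis=1) / w.sum(axis=1)).reshape(glat.shape)

# Finally, the cos(latitude) area-weighted global mean of the regridded field
coslat = np.cos(np.radians(glat))
global_mean = (grid * coslat).sum() / coslat.sum()
```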

Comparison of annual global temperature anomalies normalised to 1961-1990. The new interpolated data, labelled CBEST, essentially agree with the other extrapolated (kriged) data. Notable differences are that CBEST enhances the net El Niño warming peaks in 1998 and 2015. The 2 degree and 5 degree gridding results now lie nearly on top of each other.

Of particular interest is the extrapolation of data near the poles, which is where most warming has been observed. The problem, though, is that there are very few measurements in that region, especially before 1940. Figure 2 shows triangulation grids for both poles in 1880 and 2016.

Triangulation of surface temperature data at both poles for 1880 and 2016. Notice how poor coverage remains in South America, Africa and Australia even in 2016.

In 1880 there are no measurements further south than 70S or further north than 70N, and triangles cover huge distances. Therefore artefacts are likely introduced by kriging into these regions. The 2016 triangulation shows much better coverage, but there are still no data north of 75N. Antarctica fares better because measurements now exist at research stations, including one at the South Pole.

Here is the full 167 year comparison.

Global Temperature data comparison 1880-2016

My conclusions are that before 1940 it is best not to interpolate into unmeasured areas of the world because the coverage is so low; the Hadcrut4 methodology is preferable. Recent warming is enhanced by interpolation because those empty (lat,lon) cells are filled in under the influence of ‘warm’ nearby neighbours. Likewise, natural warming cycles such as El Niño also get enhanced.

Data are available here
IDL code that generates this data is here.

Next I will use the triangulation of the measurement locations itself to calculate the global average, avoiding any interpolation.


Kriging biases in global temperature data

I have made a new calculation of global temperatures using the 7300 NOAA/NCDC V3C station data combined with HadSST3 ocean temperatures. For the ocean data I use only those cell locations where measurements exist for a given month. I then make a (lat,lon) triangulation of all combined station/ocean locations for that month to form a global irregular grid structure. Then I use the IDL irregular gridding routine GRIDDATA to interpolate this triangulation onto a regular grid, and thereby calculate global monthly and annual anomaly averages normalised to 1961-1990. Anomalies for each V3C station are calculated independently, relative to that station's monthly averages over the 30 year period. The end result of this procedure is essentially a full global integration of irregularly interspersed measurements for each month. The annual average shown is then simply the 12 month average.
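
For illustration, here is a minimal Python sketch of the station anomaly step (my actual processing is done in IDL). It assumes a hypothetical pandas DataFrame `temps` with columns `year`, `month` and `temp` for a single station.

```python
# Sketch of the per-station anomaly calculation relative to 1961-1990.
import pandas as pd

def monthly_anomalies(temps: pd.DataFrame) -> pd.DataFrame:
    # 12 monthly normals from the 1961-1990 baseline period
    base = temps[(temps.year >= 1961) & (temps.year <= 1990)]
    normals = base.groupby('month')['temp'].mean()
    out = temps.copy()
    # anomaly = measured temperature minus that month's 1961-1990 normal
    out['anom'] = out.temp - out.month.map(normals)
    return out

# The annual anomaly is then simply the 12 month average:
# annual = monthly_anomalies(temps).groupby('year')['anom'].mean()
```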

How does this compare to other ‘kriging’ methods, which supposedly remove the coverage bias of Hadcrut4? What I discovered is that the end result depends critically on what grid spacing you interpolate onto. If you choose a fine grid spacing, such as the 1 degree used by Berkeley Earth, then you get an enhanced warming trend over recent years. If however you choose the same grid size as Hadcrut4 (5 degrees), then you get a reduced trend. This implies that a systematic error is introduced by the methodology. Here is the comparison.

Comparison of my values (CBEST) for 2 different grid spacings with Berkeley Earth (BEST), Cowtan & Way and Hadcrut4. BEST has been scaled up by 0.02C to compensate for its 1951-1970 baseline, and their June value is used for the 12 monthly average (as recommended).

The 2 degree results are very similar to Berkeley Earth but give a slightly larger warming trend. However, using the same 5 degree target grid size as Hadcrut4 gives a much reduced warming trend. Cowtan and Way use the Hadcrut4 station data rather than V3C, and their result lies somewhere in the middle. Here is a detailed comparison of results for one month – September 2016.

2-degree target grid (CBEST)

The 2 degree resolution extends the expanse of each warm zonal area.

5 degree target grid (CBEST)

The 5 degree resolution is in line with that of HadSST3 and Hadcrut4.

Cowtan & Way Version 2. The trend over Antarctica looks significantly different.

This is Cowtan and Way version 2 which reconstructs ocean and land separately and then blends them during the time period shown.

Original Hadcrut4 results without interpolation. White equates to missing data.

Does kriging actually improve the accuracy of global temperatures? While it is probably correct that Hadcrut4 has a ‘coverage bias’ over polar regions, what is even clearer is that interpolation to remedy this can itself introduce a systematic warming bias dependent on method and target grid size. The other temperature series all use data infilling based on ‘kriging’ type techniques.
