HadCRUT 4.6 temperatures compared

CRUTEM station data for October was released a few days ago, so I can now make a direct comparison of the different surface averaging methodologies. The calculated October temperature anomalies for the three spatial averages, as described previously, are:

Classical (5×5° lat, lon grid): 0.57°C
Icosahedral (2562-node 3D triangular grid): 0.66°C
Spherical triangulation (3D): 0.72°C

Figure 1: Comparison of the 3 averaging methods. The bottom graph shows the differences between the 3D methods and the HadCRUT4.6 classic 5×5° latitude, longitude binning.

The 3D methods tend to give higher values for recent years and accentuate the peaks and troughs. The reason is the way the polar regions are handled: spherical triangulation essentially interpolates nearby stations right across the polar regions by forming large triangular areas, whereas icosahedral binning treats the polar regions identically to any other region on the earth’s surface.

Figure 2: Comparison of polar processing by spherical triangulation (left) and icosahedral binning (right).

Figure 3. The same comparison over Antarctica. Note how there is no interpolation into areas without measurements.

I discovered earlier that the spherical triangulation method gives almost exactly the same result as Cowtan and Way, who use kriging to extrapolate into the polar regions. Both methods essentially interpolate temperatures into poorly covered regions. Icosahedral binning avoids any interpolation while still treating polar regions in an unbiased way, so it should really be compared to the official HadCRUT4.6 version, which is based on a regular 5° lat, lon grid.

Figure 4: HadCRUT4.6 gridded results for October 2017

It is often argued that HadCRUT4 has a coverage bias in the Arctic because it uses fixed-size bins, whose areas converge with latitude (as cos(lat)). However, all other groups who attempt to correct for this interpolate results, or smoothed fits, into regions without any measurements. Rightly or wrongly, HadCRUT4 remains the only purely measurement-based result. I argue that the methodology would be improved by moving to icosahedral 3D binning rather than sticking to 2D binning. This would still retain the advantage of being a measurement-only result.
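For reference, the cos(lat) weighting is the standard correction for the converging bin areas. Here is a minimal sketch of an area-weighted 5×5° grid average; the 36×72 array layout and the NaN convention for empty cells are my assumptions, not the actual HadCRUT code:

```python
import numpy as np

def grid_average(anomalies):
    """Area-weighted global mean of a 36x72 (5-degree) anomaly grid.

    `anomalies` is a 36x72 array (lat x lon) with NaN marking empty
    cells; each cell is weighted by its area, i.e. cos(latitude).
    """
    lat_centres = np.deg2rad(np.arange(-87.5, 90.0, 5.0))   # 36 band centres
    weights = np.cos(lat_centres)[:, np.newaxis] * np.ones((36, 72))
    mask = ~np.isnan(anomalies)
    return np.sum(anomalies[mask] * weights[mask]) / np.sum(weights[mask])
```

Empty cells simply drop out of both sums, which is exactly why sparse polar coverage leaves those regions under-represented rather than interpolated.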




Icosahedral Binning

The earth is roughly a sphere, so an unbiased method to bin surface temperature data is to treat the surface itself in 3D. Ideally each bin should cover exactly the same surface area, so that the global average is simply the average over all populated bins. I have been using icosahedral grids to try to achieve this. An icosahedron is a regular 20-faced polyhedron whose faces are all equilateral triangles. By subdividing each triangle into 4 new ones and then projecting the new vertices onto a unit sphere, we generate a detailed triangular spherical grid. Each subdivision, or level, increases the grid size by a factor of 4. The diagram below shows the resultant grid up to level 3. I am currently using a level 4 grid, which contains 2562 vertices, to analyse temperature data.
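The subdivision described above can be sketched in code. This is a generic "icosphere" construction (not the author's actual code): each pass splits every triangle into 4, with shared edge midpoints deduplicated and pushed out to the unit sphere:

```python
import numpy as np

def icosphere(level):
    """Unit-sphere triangular grid built by subdividing an icosahedron.

    Each level splits every triangle into 4 and projects the new edge
    midpoints onto the unit sphere. Returns (vertices, faces).
    """
    t = (1.0 + 5 ** 0.5) / 2.0
    verts = [(-1, t, 0), (1, t, 0), (-1, -t, 0), (1, -t, 0),
             (0, -1, t), (0, 1, t), (0, -1, -t), (0, 1, -t),
             (t, 0, -1), (t, 0, 1), (-t, 0, -1), (-t, 0, 1)]
    verts = [tuple(np.array(v) / np.linalg.norm(v)) for v in verts]
    faces = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
             (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
             (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
             (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]
    for _ in range(level):
        cache = {}  # shared edge midpoints must be created only once
        def midpoint(i, j):
            key = (min(i, j), max(i, j))
            if key not in cache:
                m = np.array(verts[i]) + np.array(verts[j])
                verts.append(tuple(m / np.linalg.norm(m)))
                cache[key] = len(verts) - 1
            return cache[key]
        new_faces = []
        for a, b, c in faces:
            ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
            new_faces += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
        faces = new_faces
    return np.array(verts), faces
```

The vertex count follows 10·4^level + 2 (from Euler's formula with 20·4^level faces and 30·4^level edges): 12, 42, 162, 642, 2562, 10242, ... which matches the 2562 nodes quoted for level 4.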

Figure 1: Generating a triangular mesh on a spherical surface by subdividing an Icosahedron

What is interesting is that each grid vertex is shared by 6 neighbouring triangles, which together form a roughly equal-area hexagon (the 12 original icosahedron vertices have only 5 neighbours and form pentagons). In this way you can also view the grid as a hexagonal tessellation of the sphere. Scattered over the surface of the sphere are temperature measurements, each of which lies within a single triangle and therefore within the three overlapping hexagons centred on that triangle’s vertices.

Figure 2: Averaging algorithm based on overlapping hexagons

The averaging method I am currently using takes each node on the grid and averages together all station temperature anomalies contained within the hexagon centred on that node. This results in an average temperature for each hexagon. Of course these hexagons overlap with each other, so stations lying within one triangle contribute to the 3 neighbouring hexagons of which they form part; as a result a near-neighbour averaging takes place at the same time. The example in Figure 2 shows a colour blending of 3 overlapping hexagons with primary blue, yellow and red shading. The overlapping triangles are colour mixed.
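The algorithm can be sketched as follows. This is my own simplification, not the author's code: instead of locating the containing triangle, each station is assigned to its three nearest grid nodes, which coincides with the triangle's vertices for points well inside a triangle; because the hexagons are (near) equal-area, the global mean is then a plain average over populated nodes:

```python
import numpy as np

def hexagon_average(nodes, stations, anomalies):
    """Global mean from station anomalies binned into overlapping hexagons.

    nodes: (N,3) unit vectors for the icosahedral grid vertices.
    stations: (M,3) unit vectors for station positions.
    Each station contributes to the 3 hexagons centred on its 3
    nearest nodes (a stand-in for the triangle-vertex lookup).
    """
    sums = np.zeros(len(nodes))
    counts = np.zeros(len(nodes))
    for pos, anom in zip(stations, anomalies):
        nearest3 = np.argsort(nodes @ pos)[-3:]  # largest dot product = closest
        sums[nearest3] += anom
        counts[nearest3] += 1
    populated = counts > 0
    node_means = sums[populated] / counts[populated]
    # equal-area bins: the global average is just the mean over populated bins
    return node_means.mean()
```

Note that each anomaly is counted three times, once per overlapping hexagon, which is precisely the near-neighbour smoothing described above.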

This effect can also be seen in the rendering of the October 2017 results. Note, however, that IDL reflects the hexagonal structure differently in the shading: it doesn’t blend each overlapping triangle but rather divides the shading contributions. White triangular areas signify no data coverage.

Figure 3. October 2017 temperature distribution on a level 4 icosahedral grid with 2562 node points. White signifies missing data.

Can we do better than this simply by increasing the grid density? Each additional level increases the number of nodes by a factor of 4, so a level 5 grid contains 10242 node points, at the cost of roughly a factor of 4 more computation time. The results from level 4 and level 5 grids are compared in Figure 4.

Figure 4: Comparison of results from a level 4 grid and a level 5 grid

The finer grid works fine in the US, where station density is very high, but elsewhere many empty grid cells start to appear. There is also a limiting factor on the ocean temperature data causing empty grid cells: HADSST3 has already been pre-binned onto 5×5° lat, lon bins. Mainly for this reason I have decided to limit the grid size to level 4.

Figure 5 compares three different integration methods for calculating monthly temperature averages: 1) spherical triangulation, 2) icosahedral binning, 3) regular 5° lat, lon binning. In general the results match up very well. The main difference is how the polar regions are handled.

Figure 5: Close agreement between Icosahedral and Spherical Triangulation of V3C data. HadCRUT4.6 peaks are slightly smaller and Spherical slightly larger

Spherical triangulation covers the poles by connecting large-area triangles, while icosahedral grids avoid any latitude binning bias. However, it is encouraging that three very different methods of averaging global temperature data give very similar results.




El Niño 1998 & 2016

Here are the two El Niño maxima of the last 20 years, both of which caused peaks in global temperatures the following year (1998 and 2016).

The Pacific releases vast amounts of accumulated heat to the atmosphere roughly every 18 years as warm water sloshes back from Indonesia. If anything, 1997 looks the stronger of the two.

Here is an animation of the period January 1997 to October 2017.
