Shown below are my new global land temperature anomalies, calculated directly from the daily temperature measurements of over 35300 weather stations (NCDC-Daily) and compared to Berkeley Earth. The NCDC-Daily archive extends back to 1762, although coverage then is mostly restricted to central Europe. The new methodology is described below.
GHCN-Daily contains the raw measurement data from 106283 weather stations. However, only ~35300 of these stations contain temperature data; the rest are mainly precipitation-only stations. Each temperature station records daily values of the maximum temperature (TMAX) and the minimum temperature (TMIN) over a 24-hour period. Matters are complicated by each station having a different time coverage span, often with gaps within the data.
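For concreteness, here is a minimal Python sketch of parsing one record of the GHCN-Daily `.dly` fixed-width format (station ID, year, month, element code, then 31 daily values each followed by three flag characters; TMAX/TMIN values are stored in tenths of a degree C, with -9999 marking a missing day). The function name is my own illustration, not the code used for the analysis.

```python
def parse_dly_line(line):
    """Parse one fixed-width GHCN-Daily record into (station, year, month, element, values)."""
    station = line[0:11]           # 11-character station identifier
    year = int(line[11:15])
    month = int(line[15:17])
    element = line[17:21]          # e.g. "TMAX", "TMIN", "PRCP"
    values = []
    for day in range(31):
        start = 21 + day * 8       # each day: 5-char value + 3 flag characters
        raw = int(line[start:start + 5])
        values.append(None if raw == -9999 else raw / 10.0)  # tenths of deg C -> deg C
    return station, year, month, element, values
```

Days beyond the end of a month are also stored as -9999, so they fall out naturally as missing values.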
To derive a global temperature estimate when spatial distributions are continuously changing, you must calculate monthly temperature ‘anomalies’, ideally for each station relative to a common baseline. For this I always use the CRU standard 30-year period 1961-1990. The normals are simply the 30-year temperature averages for each station and for each month, and the temperature anomalies are the deviations from these averages. For GHCN-Daily you actually need to calculate two sets of normals, one for TMAX and one for TMIN. The daily average temperature TAV is then simply (TMAX+TMIN)/2.
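The normals-and-anomalies step can be sketched as follows. The dict-based `series` representation (mapping (year, month) to a monthly mean in deg C) and the function names are illustrative assumptions, not the actual code; the same functions would be applied once to TMAX and once to TMIN.

```python
import numpy as np

def monthly_normals(series, base=(1961, 1990)):
    """Twelve monthly normals: the average over the baseline period for each calendar month."""
    normals = {}
    for month in range(1, 13):
        vals = [v for (y, m), v in series.items()
                if base[0] <= y <= base[1] and m == month]
        normals[month] = float(np.mean(vals)) if vals else None
    return normals

def anomalies(series, normals):
    """Deviation of each monthly value from that calendar month's normal."""
    return {(y, m): v - normals[m] for (y, m), v in series.items()
            if normals[m] is not None}
```

A station with no data inside 1961-1990 yields `None` normals here, which is exactly the case handled later via neighbouring stations.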
It turns out that 22645 stations have sufficient coverage within the 30-year normalisation period to calculate individual station temperature anomalies. The remaining 12655 stations must be treated separately, by comparing them to nearby stations whose time coverage overlaps theirs.
Of course, in the end what we really want to calculate is the global temperature ‘anomaly’ on land. This involves a spatial integration of all temperature anomalies over the Earth’s land surface. It turns out that the optimum equal-area binning over the spherical surface of the Earth is to use so-called ‘Icosahedral’ grids, as described previously. The first attempt I made (see last post) had coverage bias problems because the normalisation in early years used bins that were too large. As a result I have now increased the number of bins by a factor of 4 to 10242. This means that each bin now covers an area of about 50000 square km, or roughly 220km by 220km. This reduction in scale is important because I must also calculate the average monthly temperature within each bin, by averaging together all station normals within the standard 1961-1990 period. These bin normals provide the reference for the 12655 stations without 1961-1990 coverage: their anomalies are derived relative to the normals of near neighbours which do have such coverage.
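Geolocating a station to a bin reduces to finding the nearest grid vertex on the unit sphere. Here is a sketch, assuming the array `bin_centres` of 10242 bin-centre unit vectors has already been generated (generating the icosahedral grid itself is not shown):

```python
import numpy as np

def latlon_to_xyz(lat, lon):
    """Unit vector on the sphere for a (lat, lon) position in degrees."""
    phi, lam = np.radians(lat), np.radians(lon)
    return np.array([np.cos(phi) * np.cos(lam),
                     np.cos(phi) * np.sin(lam),
                     np.sin(phi)])

def geolocate(lat, lon, bin_centres):
    """Index of the nearest bin centre: the one with the largest dot product."""
    return int(np.argmax(bin_centres @ latlon_to_xyz(lat, lon)))
```

For 35300 stations and 10242 bins this brute-force search is cheap; a KD-tree on the 3D vectors would be the obvious optimisation for larger problems.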
The full algorithm used is as follows:
- Generate a level 5 Icosahedral grid with 10242 bins. Loop over all stations and geolocate each station based on Lat, Lon to a bin index number.
- Loop over the 30 year period 1961-1990 and calculate all individual station normals and the average bin normals for each month.
- Process all stations over their full time coverage. Use station normals where available, or the bin normals where not, to calculate time series of TMAX and TMIN anomalies. Derive the average temperature anomaly for each month and for each occupied bin from 1762 to 2018. This gives the spatial distribution of the temperature anomaly over time.
- Integrate all bins to form a global average Land temperature anomaly for each month and an annual average global temperature anomaly.
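The final integration step is simple precisely because the bins are equal-area: the monthly global land anomaly is just the unweighted mean over occupied bins, with no latitude weighting needed. A sketch, with illustrative data structures:

```python
import numpy as np

def global_anomaly(bin_anom):
    """Global land anomaly for one month: unweighted mean over occupied equal-area bins.
    `bin_anom` maps bin index -> anomaly (deg C); unoccupied bins are simply absent."""
    return float(np.mean(list(bin_anom.values())))

def annual_anomaly(monthly):
    """Annual global anomaly: average of the twelve monthly global anomalies."""
    return float(np.mean(monthly))
```

Note that treating unoccupied bins as absent implicitly assumes they behave like the average of the occupied ones, which is why the coverage bias in early, sparsely sampled years mattered.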
The monthly results look like this:
It is interesting that minimum temperature anomalies have risen faster than maximum anomalies since 1950, yet the reverse was true in the 19th century. That implies that “Global Warming” has mostly occurred at night. The new Berkeley Daily temperature anomalies show exactly the same effect.
- These new Annual temperature anomalies can be downloaded here
- Monthly temperature anomalies can be downloaded here