A sceptic’s guide to global temperatures

Climate change may well turn out to be a benign problem rather than the severe problem or “emergency” it is claimed to be. This will eventually depend on just how much the earth’s climate warms due to our transient but relatively large increase in atmospheric CO2 levels. That is why it is so important to measure the earth’s average temperature rise since 1850 accurately and impartially. It turns out that such a measurement is neither straightforward, independent nor easy. For some climate scientists there sometimes appears to be a slight temptation to exaggerate recent warming, perhaps because their careers and status improve the higher temperatures rise. They are human like the rest of us. Similarly the green energy lobby welcomes each scarier temperature increase as a way to push ever more funding for their unproven solutions, never really explaining how these could possibly work better than a rapid expansion of nuclear energy instead. Despite over 30 years of strident warnings and the fairly successful efforts of G7 countries to actually reduce emissions, CO2 levels in the atmosphere are still stubbornly accelerating upwards. This is because, at the same time, the developing world has striven to raise the wellbeing and living standards of its large populations through the use of ever more coal and oil, exactly as we did. This is our current dilemma. Should they somehow be stopped from burning fossil fuels, or perhaps be compensated financially to ‘transition’ to so-called renewable energy instead? All this again depends on the speed of climate change, which simply translates to the slope of the temperature record.

The good news is that once global emissions start falling, as they inevitably will, the climate will stabilise rather quickly. Yet it may still take a thousand years or more for the earth to fully return to a supposedly “normal” pre-industrial climate. But is that actually what we want? This idyllic normal climate also includes regular ice ages, the last of which reduced the human population outside Africa to just a few thousand individuals struggling to survive. The earth’s climate has in any case cooled by ~4C over the last 5 million years, mainly due to land uplift caused by plate tectonics, which initiated violent natural swings in climate. Human civilisation only developed within the current Holocene interglacial (the last 10,000 years). It is therefore in all our interests for the Holocene to continue for as long as possible. Once we get through this temporary climate “transition” we will soon realise that stabilising the climate through somewhat enhanced CO2 levels is a far better outcome for humanity than returning to a pre-industrial climate. A climate 2C colder is far worse than one 2C warmer.

So how accurately do we really know what the earth’s temperature is? The answer would appear to be not very well at all, since it has kept changing since the last IPCC report. The most notable change since 2012 is a dramatic increase in what experts now say the global temperature is, compared to what those same experts said it was 10 years ago. The hiatus reported in the last IPCC report (AR5) has now completely vanished. How is this possible? The Paris agreement proposed limiting temperature rises to 1.5C above pre-industrial times, but if you believe the most recent temperature results this limit was already breached in 2016, and it will surely be exceeded during another major El Nino event. In this report I hope to explain how a combination of new temperature data, further adjustments to those data, and changes in the methods used for calculating global averages can explain why temperatures ended up ~0.25C warmer than was originally reported in 2012. Somehow each new version of any temperature series essentially rewrites climate history. Global warming always turns out to be far worse than we feared it was a year previously. This ratcheting up of alarm is continuous.

A 7-year evolution in global temperatures. Each new version of a temperature series rewrites climate history.

Recent temperatures now apparently show ~0.25C more warming simply due to a continuous process of adding, merging and adjusting multiple short temperature records. But how is that possible? To answer that question we first need to understand what the term “global temperature” really means.

Temperatures have traditionally been measured by national weather stations around the world since the early 19th century. These used mercury thermometers inside a Stevenson screen, designed to shield them from direct sunlight while allowing ambient air to mix inside. For this reason the slats of the Stevenson screen are painted white to reflect sunlight. The aim is to measure the ambient air temperature at about 1.5-2m above the ground and not any localised effects. Weather stations should therefore be sited away from buildings and any other artificial sources of heat.

Weather stations were always read manually by an operator until about the early 1990s. Large stations would be read 8 or more times per day, while smaller ones were read just twice a day, usually at 9am and 4pm. In addition to real-time thermometers, nearly all stations also contained a max/min thermometer. This is a linked dual-bulb glass thermometer in which the mercury on one side can only move up and not down, and vice versa on the other side.

simple min/max thermometer

These thermometers record the maximum and minimum temperatures reached between each reading, whereupon they are reset by shaking the mercury back down/up again. The max/min thermometers were read and reset once a day. The time of observation (TOBS) matters because if they were read at 9am then the minimum temperature would be for today (early morning) but the maximum temperature would be for yesterday, e.g. 3pm. If instead they were read at say 6pm then both maximum and minimum would be for today. The readings were written in log books, which have since been digitised. Each station therefore typically produces 3 values per day: Tmax, Tmin and Tav. Tav is defined as (Tmax+Tmin)/2, but this is really only an approximation to the average daily temperature, because the shape of the diurnal variation changes every day. A better estimate would be to integrate hourly temperature values over a 24 hour period, but since these don’t exist for the early data it is common practice to simply use Tav = (Tmax+Tmin)/2 instead. Note that Tav increases if just Tmax increases or just Tmin increases. Corrections for TOBS have become notorious, especially for the US stations, because the corrections are time dependent as operating practices evolved. The TOBS corrections alone produce the US warming, although on paper these corrections seem correct. However I discovered that it is probably just as good to use the twice-per-day measurements at fixed times to calculate temperature anomalies.
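As a minimal sketch of this bookkeeping (Python/pandas, with illustrative column names rather than any archive’s actual schema), daily Tmax/Tmin pairs can be turned into monthly Tav anomalies against 1961-1990 normals like this:

```python
# Minimal sketch: daily Tmax/Tmin -> monthly Tav anomalies vs 1961-1990 normals.
# Column names ("date", "tmax", "tmin") are illustrative; "date" must be datetime64.
import pandas as pd

def monthly_tav_anomalies(daily: pd.DataFrame, base=("1961", "1990")) -> pd.Series:
    df = daily.copy()
    df["tav"] = (df["tmax"] + df["tmin"]) / 2.0                    # the standard approximation
    monthly = df.set_index("date")["tav"].resample("MS").mean()    # monthly mean of daily Tav
    base_period = monthly[base[0]:base[1]]
    normals = base_period.groupby(base_period.index.month).mean()  # 12 monthly normals
    return monthly - normals.reindex(monthly.index.month).to_numpy()
```

Any fixed pair of daily observation times could be substituted for Tmax/Tmin in the same way, which is the point of the remark above.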

Automated weather stations began to be introduced in the 1980s and had essentially replaced all manual weather stations by the late 1990s. These log temperatures automatically every hour, which eventually removed any TOBS effects.

Spatial Coverage

The historical record shows a large increase over time in the spatial coverage of weather stations globally. In the 18th century weather stations existed mainly in Europe, the US and a few colonial outposts. Even today large parts of Africa and South America still have no weather station coverage.

Weather stations before 1900

An archive of daily temperatures and rainfall going back to the 18th century has been collected at NCDC (GHCN-Daily) and is updated regularly. The data have been subjected to quality control but should otherwise be as close to the originally measured values as possible. Monthly temperature averages predate GHCN-Daily and are processed and published as GHCN-Monthly and CRUTEM. Berkeley Earth used a collection of archives to maximise station coverage, but today you can achieve almost the same by using GHCN-Daily. Recently NCDC released version 4 of GHCN-Monthly, which is 75% monthly averages of GHCN-Daily and 25% national archives. There are 24,000 stations in total, of which 17,000 have data within the standard 30-year period 1961-1990. V4 has been subject to quality control and, in the adjusted version, has homogenisation applied.
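For anyone wanting to work directly with the daily archive, a minimal parser sketch is shown below. It assumes NOAA’s documented fixed-width “.dly” layout (station id, year, month, element, then 31 slots of value plus three flags, values in tenths of a degree C with -9999 for missing); treat it as a sketch rather than a drop-in reader.

```python
# Sketch: read TMAX/TMIN from one GHCN-Daily ".dly" file into a long table.
# Assumes the documented fixed-width layout; values in tenths of deg C, -9999 = missing.
import pandas as pd

def read_dly(path: str) -> pd.DataFrame:
    rows = []
    with open(path) as f:
        for line in f:
            year, month, elem = int(line[11:15]), int(line[15:17]), line[17:21]
            if elem not in ("TMAX", "TMIN"):
                continue
            for day in range(31):                       # 31 slots, 8 characters each
                raw = int(line[21 + 8 * day: 26 + 8 * day])
                if raw == -9999:
                    continue
                try:
                    date = pd.Timestamp(year, month, day + 1)
                except ValueError:                      # slots for non-existent dates
                    continue
                rows.append((date, elem, raw / 10.0))
    return pd.DataFrame(rows, columns=["date", "element", "tempC"])
```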

Sea Surface Temperatures

The ocean temperature data are perhaps even more complicated, as ‘sea surface temperatures’ were measured almost exclusively by ships until the 1980s. Ships move, and they measured temperatures in three different ways: a) wooden buckets, b) canvas buckets and c) engine room intake (ERI) temperatures. There are problems with all three. The first two read too cool, each by a different amount, because evaporation removes latent heat, while the third reads too warm. In addition they sample temperatures at different depths, which also clearly depend on swell conditions. Anyone who swims in the Mediterranean soon realises that their toes feel ~5C colder than their shoulders, so there is no single absolute SST. As a result various ad hoc corrections are applied depending on which type of measurement was made. In the early years ocean temperature data follow the trade routes and sample only very small areas of ocean. After the 1980s drifting and fixed buoys were deployed, improving coverage and accuracy. An archive of historic records is kept at ICOADS, and there are two processed SST temperature series: ERSST4 on a 2-degree grid and HadSST3 on a 5-degree grid. Interestingly, SST is the only dataset that could publish absolute temperatures rather than temperature anomalies, because all measurements are at the same altitude (zero). However, to be compatible with the land values they all produce anomalies, which depend critically on the normalisation period (1951-1980 or 1961-1990), which itself spans the uncertain bucket/ERI era. Since about two thirds of the earth’s surface is ocean, SST values tend to dominate the global average. They are also the most uncertain during the 19th and early 20th centuries because they rely on systematic bias corrections. Of course the temptation is to increase recent temperature trends through these corrections. The new HadSST4 is an example.

The new HadSST4 has reassigned ship-based measurements made from before WWII right up to the early 1990s. Bias adjustments depend on the fraction of measurements made using wooden buckets, canvas buckets or engine room intakes (ERI), which are partly defined by the metadata in ICOADS based on ship logs. The assignment of each measurement to a bucket type or ERI is sometimes uncertain. HadSST4 now uses instead the diurnal temperature dependence of the measurements (time of day) to identify which measurement type was used by each ship. The overall bias adjustment to SST will change if this procedure changes the fraction of data falling into each category, since each adjustment is different.

They claim that 75% of measurements could be classified in this way, and that buckets were still in use on US ships into the early 1990s. Since then measurements have been based on drifting buoys and Argo floats, so these recent measurements are unaffected. However that doesn’t help, because the crucial 1961-1990 normalisation period certainly is affected, and HadSST4 only publishes temperature anomalies, not absolute temperatures. The net effect of the new assignments is to lower the zero line (the normals) from which anomalies are calculated, and as a result all recent anomalies have indeed increased in ‘temperature’. Hey presto, the oceans have now warmed by an extra 0.1C.
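The mechanism is easy to demonstrate with a toy example (this is not HadSST4’s actual method, and the bias values below are invented placeholders): whatever shifts the mean correction inside the base period shifts every anomaly by the opposite amount.

```python
# Toy mechanism only: re-classifying base-period measurements changes the mean bias
# correction, which moves the 'normal' and hence every anomaly. Bias values are invented.
import numpy as np

BIAS = {"bucket": +0.3, "eri": -0.1, "buoy": 0.0}        # hypothetical corrections (deg C)

def corrected(temps, types):
    return np.array([t + BIAS[k] for t, k in zip(temps, types)])

base_raw   = np.array([18.0, 18.2, 17.9, 18.1])          # pretend 1961-1990 ship data
recent_raw = np.array([18.6, 18.7])                      # pretend buoy-era data

for label, types in [("old", ["eri"] * 4),
                     ("new", ["bucket", "eri", "bucket", "bucket"])]:
    normal = corrected(base_raw, types).mean()
    anomalies = corrected(recent_raw, ["buoy"] * 2) - normal
    print(label, round(normal, 2), anomalies.round(2))
# Recent anomalies move one-for-one (and oppositely) with any shift in the normal.
```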

Raw data.
It is almost impossible to get the raw daily measurements from individual stations as they were originally recorded. GHCN provide a version of the monthly data which they call unadjusted, but which in reality consists of quality-controlled monthly averages from stations, some of which have already been merged with nearby earlier stations. Thanks to Ron Graff I was able to access the original hourly and daily data from Australian stations. In a location like Darwin there are 3 or 4 stations in different places covering different time periods. These then get merged into one station record to cover the whole period. GHCN V4 and GHCN-Daily contain all 4 stations, but the Airport record is still a merge of all 3, leading to double counting. There are several other similar examples. For Australia I was able to compare the average anomaly from the raw data (hourly and daily) to the adjusted ACORN data. This showed that warming was dominated by minimum temperatures, whereas maximum temperatures were more or less stable. In other words, warming occurs at night.

First we compare the raw data with the adjusted data for ACORN-SAT.

Raw data compared to adjusted data for Australia (ACORN SAT)

The raw data don’t show any warming after 1980, whereas the homogenised data clearly do. Even the homogenised data show an interesting effect: it is mostly minimum temperatures that are rising, and this causes the average temperature to increase. This essentially means that warming occurs mostly at night.

Average annual maximum and minimum temperatures across Australia

Another way to view the same thing is to use the annual extreme temperature range (hottest minus coldest), area-averaged across all ACORN stations. The range of extremes is actually shrinking as the average temperature anomaly rises.

Annual extreme temperature range (max-min) compared to temperature anomalies for ACORN-SAT stations.

It is the adjustments made to the data through ‘homogenisation’ which always increase warming. So what does homogenisation really mean, and is it justified?

Homogenisation.
Homogenisation is the name given to the process of adjusting the underlying measurement data. The basic assumption is that nearby stations all behave in the same way: they are assumed to warm in synchrony, so any outlier that does not must be due to data problems at that station. Over long periods of time stations get moved or have instrumental changes, which affect the measurements they make through break points (step shifts). Another common occurrence is when 2 or more nearby stations are merged to cover longer timescales. An older station may have closed in 1970 but overlap with a more recent nearby station, and when combined they can yield a continuous 150-year record. Perhaps they are in the same town but differ slightly in altitude, and the temptation is then to combine them by merging the overlap period. The standard homogenisation procedure is to look for breakpoints and for differences in trend with near neighbours, so-called pairwise homogenisation. Sudden kinks are taken as evidence of station moves, which then get corrected by shifting earlier data up or down by a constant offset. However homogenisation also produces more subtle effects. The pairwise process has a tendency to align station trends so that they all follow the same positive slope. I have seen this clearly in the Australian data.
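Before turning to those examples, here is a minimal sketch of the underlying idea (not NOAA’s actual pairwise algorithm): compare a candidate station against a neighbour-based reference, flag the largest mean shift in the difference series as a breakpoint, and offset the earlier segment accordingly.

```python
# Minimal sketch of the idea behind pairwise homogenisation (not the operational PHA):
# find the largest step in (candidate - median of neighbours) and offset the earlier data.
import numpy as np

def detect_break(candidate, neighbours):
    """Return (index, offset) of the largest mean shift in candidate - median(neighbours)."""
    diff = candidate - np.median(neighbours, axis=0)
    best_i, best_step = None, 0.0
    for i in range(12, len(diff) - 12):               # ignore very short end segments
        step = diff[i:].mean() - diff[:i].mean()
        if abs(step) > abs(best_step):
            best_i, best_step = i, step
    return best_i, best_step

def homogenise(candidate, neighbours, threshold=0.5):
    i, step = detect_break(candidate, neighbours)
    adjusted = candidate.copy()
    if i is not None and abs(step) > threshold:       # shift the earlier segment
        adjusted[:i] += step
    return adjusted
```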

1. Launceston, Tasmania. The ACORN time series is actually a combination of 3 nearby sites in the city: a) Launceston Pumping Station from 1910-1946, b) Launceston Airport (original site) from 1939-2009 and c) Launceston Airport (current site) from 2004 onwards. The joining of the 3 segments at first sight looks fine, but closer inspection shows that the maximum and minimum temperatures in the central section are being differentially shifted so as to produce a linearly rising temperature trend where none was apparent before. There are no obvious kinks in the raw data to justify this.

2. Alice Springs is a merge of the Post Office station (1910-1953) and the Airport from 1953 onwards. Note how the animation shows an increase in minimum temperatures at the airport resulting in a linearisation of the trend, neither of which has any direct connection with the merge.

 

3. Dubbo shows how a trend can be generated from a simple merge of two stations, the Post Office and the airport. This should involve a straightforward constant shift between the two locations, but again the shape of the early data is changed, producing a small artificial warming trend. This is most likely generated by the pairwise homogenisation.

Notice how the shift in the early PO data is time dependent, generating a warming trend where none previously existed.

So why do we use Temperature “Anomalies” anyway?
If there were perfect coverage of the earth by weather stations then we could measure the average temperature of the surface and track changes with time. Instead there is an evolving set of incomplete station measurements, both in space and time, and this causes biases. Consider a 5°x5° cell containing a two-level plateau above a flat plain at sea level. Temperature falls by about 6.5C per 1000m of altitude, so the real temperatures at the different levels would be as shown in the diagram. The correct average surface temperature for that grid cell would therefore be roughly (3*20+2*14+7)/6, or about 16C. What is actually measured depends on where the sampled stations are located. Since the number of stations and their locations are constantly changing with time, there is little hope of measuring any underlying trend in temperature this way. You might even argue that an average surface temperature, in this context, is a meaningless concept.
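The arithmetic is worth spelling out (temperatures and areas taken from the example above; the station placements are of course hypothetical):

```python
# Worked version of the plateau example: the 'measured' cell average depends entirely
# on where the stations happen to sit. Values follow the text: 20C plain (3 parts),
# 14C lower plateau (2 parts), 7C upper plateau (1 part).
levels = {"plain": (20.0, 3), "lower": (14.0, 2), "upper": (7.0, 1)}

true_avg = sum(t * a for t, a in levels.values()) / sum(a for _, a in levels.values())
print(round(true_avg, 1))               # ~15.8C: the correct area-weighted average

print((20.0 + 20.0 + 14.0) / 3)         # three stations, none on the upper plateau: 18.0C
print((14.0 + 7.0) / 2)                 # two stations, both on the plateau: 10.5C
```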

The mainstream answer to this problem is to use temperature anomalies instead. Anomalies are typically defined relative to 12 monthly ‘normal’ temperatures calculated over a 30-year period for each station. CRU use 1961-1990, GISS use 1951-1980 and NCDC use the full 20th century. Then, in a second step, these ‘normals’ are subtracted from the measured temperatures to give DT, the station ‘anomaly’, for any given month. These anomalies are averaged within each grid cell and then combined in a weighted average to derive a global temperature anomaly. Any sampling bias has not really disappeared, but it has been mostly subtracted out. There remains the assumption that all stations within a cell react in synchrony to warming (or cooling). This procedure also introduces a new problem for stations without sufficient coverage in the 30-year period, potentially invalidating some of the most valuable older stations. There are methods to avoid this, based on minimising the squares of station offsets by least-squares fitting. The end result, however, changes little whichever method is used. Far more important are the coverage biases involved in spatial averaging: how do you combine SST and land data to derive a single global temperature (anomaly)?
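A minimal sketch of that two-step procedure for one grid cell might look like this (numpy; station arrays are assumed to be year-by-month matrices of monthly Tav):

```python
# Sketch of the standard anomaly method for one grid cell: subtract each station's
# own 1961-1990 monthly normals, then average the anomalies across stations.
import numpy as np

def station_anomalies(tav, years, base=(1961, 1990)):
    """tav: (n_years, 12) monthly means for one station; years: (n_years,)."""
    in_base = (years >= base[0]) & (years <= base[1])
    normals = np.nanmean(tav[in_base], axis=0)       # 12 monthly normals for this station
    return tav - normals                             # anomalies relative to its own normals

def cell_anomaly(stations, years):
    """Average anomaly of all stations falling inside one grid cell."""
    anoms = [station_anomalies(s, years) for s in stations]
    return np.nanmean(np.stack(anoms), axis=0)       # (n_years, 12) cell anomaly
```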

Global averages
Deriving a global average temperature anomaly involves surface-averaging the combined ocean and station anomalies. Their distribution in space and time varies enormously before about 1950, and different groups adopt different averaging schemes. HadCRUT4 and NCDC adopt a simple (lat, lon) binning, taking the average of temperature anomalies within each bin. A similar process has already been performed on the ocean data (HadSST3/ERSST4). They then form a weighted average over all occupied bins, weighted by cos(lat) to compensate for the change in bin area with latitude. GISS instead use approximately 8000 equal-area cells for binning. Each station within 1200km of a cell can contribute, with a weight that decreases with distance from the cell centre, falling to zero at 1200km. This populates empty cells with data extrapolated from nearby stations, so essentially they interpolate into empty regions. The area-weighted average is then done in the same way as HadCRUT4. Berkeley Earth (Robert Rohde, 2013) use kriging from the start to derive a temperature distribution projected onto a 1-degree grid. Cowtan and Way (Cowtan & Way, 2014) attempt to correct HadCRUT4 for empty bins by kriging the HadCRUT4 (lat, lon) results into empty bins, in particular extending coverage over Arctic regions. A sketch of the simple cos(lat) weighted average is given below, and you can see the effect kriging has on empty cells in the figures that follow, where I krige the HadCRUT4 results for February 2016 and then the far sparser result for January 1884.
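The sketch assumes a 36x72 (5-degree) monthly anomaly grid with NaN marking empty cells:

```python
# Sketch of a cos(latitude) area-weighted global mean over a 5-degree grid of
# anomalies; only occupied cells carry weight.
import numpy as np

def global_mean(grid):
    lats = np.arange(-87.5, 90.0, 5.0)                 # 36 cell-centre latitudes
    w = np.cos(np.radians(lats))[:, None] * np.ones_like(grid)
    w[np.isnan(grid)] = 0.0                            # empty bins get zero weight
    return np.nansum(grid * w) / w.sum()
```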

Hadcrut4.5 measured anomaly data (no kriging)

Kriged result for February 2016

The infilling of the Arctic depends on a few isolated nearby data points. As a result these kriging techniques mostly affect only recent data: before 1950 the data are so sparse that kriging into empty areas changes very little. This introduces a bias, in that the most recent temperatures are enhanced by interpolating sparse nearby data across the Arctic regions where warming is strongest, while the earlier data lack any Arctic coverage for kriging to work with, so the early Arctic remains effectively unsampled and the comparison is tilted warm.

Raw H4 data Jan 1884

kriged values Jan 1884

 

Finally there are the 3D techniques which Nick Stokes and I have been developing independently, and which I think are the most natural way to integrate temperatures over the earth’s surface. Weather station and sea surface temperature measurements are treated as point locations (lat, lon) on the surface of a sphere. A triangulation between these points then generates a 3D mesh of triangles, each of whose areas can be calculated. The temperature of each triangle is taken as the average of its 3 vertices, and the global average is the sum of the area-weighted triangle temperatures divided by the surface area of the earth.
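A minimal sketch of the calculation (the general idea, not my production code) uses the fact that the convex hull of points lying on a sphere is exactly their triangulation:

```python
# Sketch of a spherical-triangulation global average: triangulate (lat, lon) points on
# the unit sphere via the convex hull, then area-weight each triangle's mean vertex
# anomaly. Triangle areas use the spherical-excess (solid angle) formula.
import numpy as np
from scipy.spatial import ConvexHull

def to_xyz(lat, lon):
    lat, lon = np.radians(lat), np.radians(lon)
    return np.column_stack([np.cos(lat) * np.cos(lon),
                            np.cos(lat) * np.sin(lon),
                            np.sin(lat)])

def triangle_area(a, b, c):
    """Solid angle of the spherical triangle with unit-vector vertices a, b, c."""
    num = abs(np.dot(a, np.cross(b, c)))
    den = 1.0 + np.dot(a, b) + np.dot(b, c) + np.dot(c, a)
    return 2.0 * np.arctan2(num, den)

def global_average(lat, lon, anomaly):
    pts = to_xyz(np.asarray(lat), np.asarray(lon))
    hull = ConvexHull(pts)                     # hull of points on a sphere = triangulation
    total, weighted = 0.0, 0.0
    for i, j, k in hull.simplices:
        area = triangle_area(pts[i], pts[j], pts[k])
        weighted += area * (anomaly[i] + anomaly[j] + anomaly[k]) / 3.0
        total += area
    return weighted / total                    # total approaches 4*pi for good coverage
```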

Figure 2 Spherical triangulation of CRUTEM4 & HadSST3 (Hadcrut4.5) temperature data for February and March 2017.

The way you calculate the global average changes the result, but only in recent years. Interpolating into unmeasured regions like the Arctic increases the weighting of warmer areas. There is a simple reason why the Arctic appears to warm faster than other regions: it starts off much colder. The extra energy needed to raise the temperature from -50C to -49C is much less than that needed to go from, say, +24C to +25C. This follows from the Stefan-Boltzmann law, S = \sigma T^4 , so for the same 1C rise the required change in forcing scales as \frac{\Delta S_1}{\Delta S_2} = \frac{T_1^3}{T_2^3} \approx 0.42 . In other words only about 40% as much extra radiative forcing (CO2 increase) is needed at the North Pole as in the tropics to produce a 1C rise in temperature. So clearly if you focus on increasing coverage of the Arctic rather than, say, Africa, you will boost the global average. However this only works if you have stations in the Arctic, and for some reason the Antarctic shows much less warming.
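The quoted ratio is easy to check (temperatures taken from the text, converted to kelvin):

```python
# Check of the Stefan-Boltzmann ratio quoted above: for the same 1C rise, the required
# forcing change scales as T^3. Compare -50C (Arctic) with +24C (tropics).
T_arctic, T_tropics = 223.15, 297.15            # kelvin
print(round((T_arctic / T_tropics) ** 3, 2))    # -> 0.42
```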

CMIP5 Model comparisons

A fundamental problem with all models is that they cannot agree on what the absolute global temperature should be to balance energy. As a result they too rely on temperature anomalies to make their “projections”. To do this they simply normalise to past measured temperatures and tune volcanic and aerosol forcing. Only the future projections can eventually be tested. The uncertainty in their projections is still about 1C.

CMIP5 global surface temperatures taken from the paper. The coloured curves are meteorological reanalyses and represent the best ‘observed’ global values.

By comparing anomalies to past data they can adjust and subtract their ‘normals’ and then tune volcanic and aerosol forcing so as to match past ‘measured’ temperature anomalies. They then project these values into the future. There was clearly a problem for climate science in AR5, because all models normalised in this way around 2005 were running much hotter than the data.

AR5 Comparison of global temperature anomalies with CMIP5 models

So were all the models wrong?

SST blending 

Once again Kevin Cowtan (who is a chemist) came to the (partial) rescue of climate science. He noticed that the ocean data measure sea surface temperature, whereas the models were predicting 2m air temperatures. Theoretically there is a slight difference, because the latent heat of evaporation cools the air above the sea surface. He ‘blended’ the model variable ‘tos’ (sea surface temperature) over oceans with ‘tas’ (air temperature 2m above the surface) over land. This slightly reduced the discrepancy between the models and the data, as shown below. Blending is the difference between the red and blue trends.

The blue curve is the average of 6 CMIP5 model results for the global temperature anomaly. The red curve is the blended result corresponding to Hadcrut4.
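A minimal sketch of the blending idea is shown below. The variable names ‘tas’ and ‘tos’ are the ones named above; ‘sftlf’ (the CMIP land-area fraction, in percent) is my assumption for the land mask.

```python
# Sketch of tas/tos blending: air temperature over land, SST over ocean, mixed by the
# land fraction. 'sftlf' (percent land) is assumed as the mask; tos is NaN over land.
import numpy as np

def blend(tas, tos, sftlf):
    landfrac = sftlf / 100.0                         # 1 over land, 0 over open ocean
    tos_filled = np.where(np.isnan(tos), tas, tos)   # fall back to tas where tos is undefined
    return landfrac * tas + (1.0 - landfrac) * tos_filled
```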

Of course ocean measurements don’t really capture SST either. Buckets, ERI and buoy measurements all sample different depths below the surface; only satellites can measure a true skin SST. Furthermore some areas of ocean have average swell heights well over 2m, making any small differences in measurement depth, or in height above the surface, almost irrelevant.

Warming occurs mostly at night

Normally only the average temperature (Tav) anomaly is presented in the media, and the rise in Tav is what we call global warming. However I decided to analyse both components of Tav, namely the Tmax and Tmin anomalies, in exactly the same way. Here are the results for global temperatures (including oceans).

Comparison of temperature anomalies normalised to 1961-1990 for a) Tmax, b) Tmin and c) Tav. The difference Tmax-Tmin is shown by the blue curve plotted on the right-hand Y-scale.

A reduction in Tmax-Tmin of about 0.1C is observed since 1950. Minimum temperatures over land essentially always occur at night, so this means that nights have been warming faster than days since 1950. The effect over land is of course much larger than 0.1C, because nearly 70% of the earth’s surface is ocean, for which there is only a single monthly average ‘anomaly’ with no max/min split, diluting the land signal. Nights over land areas have on average warmed ~0.3C more than daytime temperatures. So if we assume that average land temperatures have risen by ~1C since 1900, maximum temperatures have really risen by only 0.85C while minimum temperatures have risen by 1.15C. A similar effect may also be apparent in equatorial regions, where the night/day and winter/summer temperature differences are much smaller than at high latitudes.

Meridional Warming

If you take a gridded temperature series and integrate it over longitude and over the seasons you get a meridional temperature profile. A minimal sketch of that calculation is shown below, followed by the results for HadCRUT4 from 1900 to 2016.
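The sketch assumes a gridded anomaly field shaped (12 months, 36 latitudes, 72 longitudes) with NaN for gaps:

```python
# Sketch of a meridional profile: average the gridded anomalies over longitude and over
# the 12 months of a year, leaving one value per 5-degree latitude band.
import numpy as np

def meridional_profile(field):
    """field: (12, 36, 72) monthly anomaly grid -> (36,) latitude profile."""
    return np.nanmean(field, axis=(0, 2))
```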

Figure 1. All 117 meridional temperature anomaly profiles from 1900 to 2016, coloured by the annual global anomaly: blue below -0.2C, grey between -0.2C and 0.2C, yellow between 0.2C and 0.4C, red above 0.4C. Traces are 80% transparent so that all can be seen.

This shows that warming is amplified at high latitudes, as expected, although with surprisingly little evidence of any enhanced change in Antarctica. Another way to view this is to see how little the average temperature profiles have changed versus latitude. Seen this way the differences are relatively small: what seems like a large global increase of 2C is small on this absolute scale. Arctic ice will not disappear while average annual temperatures remain below -15C.

Figure 2. Average temperature profiles from 1900 to 2016 calculated relative to a standard profile. Colour scheme is the same as Figure 1.

Over the long history of the earth the climate has passed through extreme states: a ‘Hothouse’ during the Jurassic and a ‘Coldhouse’ during Snowball Earth and the current ice age, which began in the Pliocene. These are the meridional profiles estimated for such extreme climates.

Credits: Christopher R. Scotese, Paleomap Project 2015.

The current global average annual temperature is about 14C, rising towards 15C. So we are currently in an Icehouse climate, but 15,000 years ago the earth was in a Severe Icehouse.

A wetter world

A warmer world is likely to be a wetter world. GHCN-Daily contains raw precipitation measurements from about 100,000 weather stations, some dating back to 1780. The rainfall data can be processed in exactly the same way as the temperature data, to derive rainfall anomalies relative to a 1961-1990 average. So I set my computer (an iMac i7) to calculate this, but it took a week of CPU time! Here are the results compared to temperature anomalies.

Top graph: the change in the global annual average daily rainfall compared to the 30-year ‘normal’ value from 1961-1990 (rainfall anomaly). Bottom graph: comparison with the global average temperature anomaly (CRUTEM4 in blue, my own GHCN-Daily result in green).

There has indeed been about a 1mm increase in average daily rainfall since about 1980. This is actually beneficial for arid areas. A slight increase in land rainfall makes sense because of increased evaporation from the oceans. So dire warnings of famine and drought are likely false, and we can probably adapt rather well to a warmer but wetter world.

Summary

There is no doubt that human activity has led to a ~40% increase in atmospheric CO2, and that this has so far warmed the planet by roughly 0.8C. This could rise to a doubling of CO2 levels with a temperature rise of 2C by 2100. By then we will have stabilised emissions at some lower level, leading to temperatures peaking at ~3C of warming in the 22nd century, followed by a slow stabilisation of the climate at perhaps 1C above the natural level. This is not a climate disaster. It will not lead to mass extinctions, the flooding of coastal cities or vanishing coral islands. Humans have for millennia dramatically changed the earth’s environment through deforestation, hunting of large animals, use of fire, farming and pollution. Modern life has also introduced an overuse of plastics, much of which ends up in the oceans. Much of the world has no recycling, so this plastic ends up in rivers and is transported to the sea, endangering marine life. We know how to stop plastic pollution. We know how to improve the environment, but what can we possibly do about climate change? Climate change is a problem in the same way that original sin is a problem. Modern society cannot function without energy. We abandoned renewable energy in the 19th century for good reason; can we really turn the clock back? I doubt it, because today’s renewables aren’t even what they claim to be: they can’t renew themselves without fossil fuels, let alone power transport and heating. Nuclear energy is a solution, as France proved in the 1980s, but we can’t even agree on that anytime soon. Meanwhile we pretend to go green and buy electric cars dependent on lithium mining in Chile.

Climate change is a ‘problem’, but not as serious a one as it is made out to be. It is more of a distraction from accepting that we ourselves are the real problem; climate change is just a symptom of that. We have been so successful over the last 10,000 years because we learned how to exploit the environment for our benefit. Now we have to learn how to maintain that environment, on which our future and the rest of nature depend. If we can do that, the climate will simply look after itself.


July temperature up 0.1C

The globally averaged surface temperature for July 2019 was 0.75C, using my spherical triangulation method merging GHCN V3 with HadSST3. This is an increase of 0.13C since June.

Monthly temperatures updated for July

This is a large monthly rise but still in line with past monthly fluctuations. For the moment I am sticking with GHCN-V3/V3C. The annual temperature after 7 months is shown below; the 2019 value averaged over those 7 months is 0.76C.

Here is the spatial dependence for July.

Northern Hemisphere

Southern Hemisphere

Monthly variations are still quite large, so one should not read too much into a single month’s values. However it looks like 2019 will end up as the second or third warmest year.

Note: If I use V4C instead then all past temperatures increase significantly. That is why I am hesitant to switch to V4 until I understand why. I suspect it is due to a huge increase in recent stations without long-term histories, so that the normals are less affected than the recent anomalies.

V4 annual results compared to V3, both combined with HadSST3.


HadSST4 and knock-on effects

The new HadSST4 has reassigned ship-based measurements made from before WWII right up to the early 1990s. Bias adjustments depend on the fraction of measurements made using wooden buckets, canvas buckets or engine room intakes (ERI), which are partly defined by the metadata in ICOADS based on ship logs. The assignment of each measurement to a bucket type or ERI is sometimes uncertain. HadSST4 now uses instead the diurnal temperature dependence of the measurements (time of day) to identify which measurement type was used by each ship. The overall bias adjustment to SST will change if this procedure changes the fraction of data falling into each category, since each adjustment is different.

They claim that 75% of measurements could be classified in this way, and that buckets were still in use on US ships into the early 1990s. Since then measurements have been based on drifting buoys and Argo floats, so these recent measurements are unaffected. However that doesn’t help, because the crucial 1961-1990 normalisation period certainly is affected, and HadSST4 only publishes temperature anomalies, not absolute temperatures. The net effect of the new assignments is to lower the zero line (the normals) from which anomalies are calculated, and as a result all recent anomalies have indeed increased in ‘temperature’. Hey presto, the oceans have now warmed by an extra 0.1C.

We saw in the last post how moving from V3 to V4C boosted warming when global temperatures are calculated in 3D using HadSST3. So what happens if we now use HadSST4 instead of HadSST3?

V4C/HadSST4 calculated with spherical triangulation covering the poles. Cowtan & Way would give a similar result.

Recent temperatures get a boost, increasing the apparent 2016 temperature by 0.25C compared to the latest HadCRUT4. The record 2016 anomaly now stands at 1.05C, or 1.45C above the pre-industrial era. Temperature anomalies are wonderful things: changes to the past can affect the future. So expect to see alarmist headlines in the press once HadSST4 gets integrated into Berkeley Earth or into a future HadCRUT5.

V4/HadSST4 also rather gazumps the Paris Agreement, since the 1.5C target seems to have been almost reached already in 2016.
