# Global absolute temperature

Is there such a thing as a global absolute temperature of the earth’s surface? The temperature at any point on the earth’s surface is forever changing: hour to hour, night to day, and with the seasons. A global average temperature Tgl is theoretically the area average of all local temperatures integrated over the earth’s surface. I claim that Tgl can really only be defined at one instant in time (at most one day). In 1993 I was asked to process about twenty 9-track magnetic tapes containing an archive of ECMWF daily forecast data in GRIB format and then write the results to an optical jukebox. Having finally succeeded, I decided to calculate the ECMWF global average temperatures. These were published 23 years ago in GRL. Today such archives of weather forecast data are called reanalyses.

from: ‘Observation of a monthly variation in global surface temperature data’, C. Best, GRL, Nov 1994.

As the earth rotates, different fractions of land and ocean are illuminated by the Sun, and the earth’s albedo changes. The highest heating of the earth’s surface is probably at midday over the Pacific Ocean. The absorbed heat is then dispersed through the atmosphere (weather) and by ocean currents. Over land the albedo has been changing for thousands of years due to human activity: deforestation, agriculture, drainage and urbanisation alter local albedo and weather, while more recently man has also increased CO2 levels. Therefore the absolute temperature of the earth’s surface is always changing. Can long-term trends be measured directly?

The average daily temperature (Tav) is calculated from the maximum (Tmax) and minimum (Tmin) temperatures recorded at each station: Tav = (Tmax + Tmin)/2. Likewise, the long-term temperature series used for climate studies calculate monthly averages in the same way, where Tmax and Tmin are now the extreme temperatures for a given month. These monthly values also vary from year to year due to fluctuations in weather. The average seasonal variation at one station, calculated over some given 30-year period, gives the ‘normal’ values. The 30-year period is called the baseline.
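These definitions can be written out explicitly. Here is a minimal sketch (not the actual CRU code) with invented station values:

```python
# Sketch of the daily-average and 30-year 'normal' definitions above.
# All numbers are invented for illustration.

def daily_average(tmax, tmin):
    """Tav = (Tmax + Tmin) / 2, the convention used in station records."""
    return (tmax + tmin) / 2.0

def monthly_normals(tav_by_year, baseline_years):
    """Average each calendar month's Tav over the baseline years.

    tav_by_year: dict mapping year -> list of 12 monthly Tav values.
    Returns the 12 'normal' temperatures for that station.
    """
    return [sum(tav_by_year[y][m] for y in baseline_years) / len(baseline_years)
            for m in range(12)]

print(daily_average(14.0, 6.0))  # 10.0
baseline = {1961: [5.0] * 12, 1962: [7.0] * 12}  # toy two-year 'baseline'
print(monthly_normals(baseline, [1961, 1962])[0])  # 6.0
```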

Now suppose you simply calculate the global average temperature for one month or for one year. This is fairly easy to do by performing an area-weighted average of Tav over the earth’s surface and then averaging over one year. Here is the result for land-based stations in CRUTEM3.

Globally averaged temperatures based on CRUTEM3 Station Data
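The area weighting amounts to weighting each station or latitude band by the cosine of its latitude, which is proportional to the band’s area on a sphere. A minimal sketch with invented values (`area_weighted_mean` is a hypothetical helper, not CRUTEM code):

```python
import math

def area_weighted_mean(lats, temps):
    """Weight each latitude band by cos(latitude), proportional to the
    area of a band of fixed angular width on a sphere."""
    weights = [math.cos(math.radians(lat)) for lat in lats]
    return sum(w * t for w, t in zip(weights, temps)) / sum(weights)

# Toy example: 30C at the equator, 0C at 60N and 60S.
# cos(60 deg) = 0.5, so the polar bands get half the equator's weight.
print(round(area_weighted_mean([-60.0, 0.0, 60.0], [0.0, 30.0, 0.0]), 2))  # 15.0
```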

There are obviously some problems here. For example, the temperature appears to jump up in 1950, but this is simply because a lot of new stations were suddenly added that year. This demonstrates that there is always a spatial bias due to where measurements are available, and this bias gets worse the further back in time you go. Before 1860 weather stations were mostly confined to Europe and the US, while ocean temperature data was confined to a few shipping lanes.

Stations with data back before 1860

So we can’t really measure global temperatures much before the satellite era, i.e. before ~1980.

The answer to this problem is to use temperature anomalies instead. Anomalies for a given station are defined relative to its monthly ‘normal’ temperatures over a 30-year baseline. CRU use 1961-1990, GISS use 1951-1980 and NCDC use the whole 20th century. The temperature ‘anomaly’ for each month is then Tav-Tnorm. Any sampling bias has not really disappeared but has instead been mostly subtracted. There is still the underlying assumption that all stations react in synchrony to warming (or cooling), as do their near neighbours. In addition, it assumes that areas of the world with zero coverage behave similarly to those areas with good coverage. It seems that Guy Callendar was the first person to use temperature anomalies for this purpose, back in 1938.
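The anomaly step itself is just a subtraction of the station’s baseline normals. A toy sketch with invented numbers:

```python
def anomalies(monthly_tav, normals):
    """Monthly anomaly = Tav - Tnorm for each calendar month."""
    return [t - n for t, n in zip(monthly_tav, normals)]

# Invented Jan-Mar 'normals' for one station, and one year's values:
normals_1961_1990 = [3.0, 4.0, 7.0]
this_year = [4.5, 3.0, 8.0]
print(anomalies(this_year, normals_1961_1990))  # [1.5, -1.0, 1.0]
```

A station can thus show a negative anomaly in one month even while the global average anomaly is rising, which is the point made below about October 2015.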

So the conclusion is that you can’t measure a global temperature directly, and even if you could, it would be changing on a daily and even hourly basis. The only thing you can measure is a global average temperature ‘anomaly’. Spatial biases are reduced but not fully eliminated, and there remains an overall assumption of regional homogeneity. So when you hear that global temperatures have risen by 1C, it really means that the global average anomaly has risen by 1C. For any given month the temperature where you live could even be colder than ‘normal’. For example, Europe was colder than normal in October 2015, despite 2015 itself being the ‘warmest’ anomaly year on record.

Temperature anomalies for October 2015 using the new temperature colour scale.

This post was prompted by Gavin Schmidt : Observations, Reanalyses and the Elusive Absolute Global Mean Temperature

This entry was posted in AGW, Climate Change, climate science, NASA, UK Met Office. Bookmark the permalink.

### 17 Responses to Global absolute temperature

1. Lance Wallace says:

Typo:
The average daily temperature Tav is calculated for each weather station as (Tmax-Tmin)/2 .

2. Cytokinin says:

Your article has prodded me once again. All this temperature analysis and smoothing by using anomalies does not sit well with me. Surface temperature measurements taken twice a day to make an average seems like a ridiculous way to measure the energy of a system. The system we are talking about stretches from the Van Allen belts to the bottom of the oceans and also has input from geothermal sources. For a stable system like a bucket of water a temperature measurement will give a good approximation to the heat in that system, but the atmosphere is dynamic and non-homogeneous. Around the outside of my house the temperature varies by as much as 2 degrees. The temperature can be very stable for hours, but can rapidly change if a cloud passes over, or a cloudy sky clears. The mean temperature therefore varies considerably from the median temperature. This is, I think, important for the same reason that mean income can be considerably different from median income. Why do we use the mean? I suspect that it is meaningless.

Is there data that gives a better indication of the heat in the system?

• Clive Best says:

When we claim that the earth is warming, all we are really saying is that the surface of the earth is warming. Much of the upper atmosphere could well be cooling. As you say, each day is different and the temperature depends on whether it’s cloudy or sunny. So the monthly average temperature at one place, calculated using the maximum and minimum recorded values, is a crude approximation to the mean temperature. However, it is the definition that is being used.

There are three ways the average temperature for say December could increase.

1) The minimum temperature alone increases: if a place becomes cloudier, if it is affected by UHI, or if the CO2 greenhouse effect is inhibiting radiative cooling.

2) The maximum temperature increases. I doubt that CO2 is likely to have any significant effect on maximum temperatures.

3) Both rise together. This would be a clearer signal that the climate is really changing.

A better indication of heat in the system would be IR flux from earth measured from space. So far no satellite data has unambiguously measured an energy imbalance.

• Cytokinin says:

Thanks Clive,
I certainly agree that only an energy audit from satellite data has any chance of determining what is happening, but since lags in the system are so long, even that will have difficulty pinpointing causes. Living in Scotland gives me a good feel for weather and climate, since most of what we get is the result of oceanic activity. 2/3 of the planet is ocean, so probably 3/4 or more of weather results from what happens in the seas, and what happens elsewhere is indirectly derived from this. Ice melt in the Arctic sinks to the bottom of the ocean and is transported south, welling up around the Caribbean years later. This then pushes warm water across the Atlantic, giving Scotland a climate that is warmer and wetter than might be assumed for the latitude.

• Cytokinin says:

Somehow that got long-winded, and the post button was obscured. Here’s the rest.

When we measure temperature around the world, what we are really measuring is energy that was in the system years earlier. I’m sure that I read recently that the lag in the system is about 60 years. If that is the case then the warming experienced now is a reflection of the system in the sixties. Was there more solar penetration sixty years ago? If the major component of climate has such a long lag, how do you dissociate this unknown from the much smaller component resulting from forcing? Even with sophisticated maths, I don’t think it is possible to calculate this since there are major holes in the data. Rather than concentrating on CO2, it would seem sensible to put energy into fully understanding the biggest driver of global climate.

• Clive Best says:

I ran an old climate model, GISS Model II, on my computer. This shows the response following a doubling of CO2 in 1958. The surface temperature response time is indeed about 60 years. The model has a very crude ocean model, but I think the result is probably about right. However, its climate sensitivity (4.4C) is about twice the currently expected value (2.3C).

CO2 has been rising slowly for the last 100 years, so you really need to integrate each annual increment over a 60-year period. When I did that, and used the latest value for the solar constant, I got these results.

So I don’t think it is correct that the temperatures we see today reflect the ocean temperatures of 60 years ago. Rather it reflects an integral of the responses from all previous years which will taper off once CO2 levels remain constant.
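The integration described above can be sketched as a superposition of exponential responses. This is a guess at the form of the calculation, assuming a single 60-year e-folding time and a 2.3C sensitivity per doubling; the actual model run will differ:

```python
import math

def temperature_response(increments, tau=60.0, sensitivity=2.3):
    """Superpose exponential responses: each year's forcing increment
    (expressed as a fraction of a CO2 doubling) relaxes toward its
    equilibrium warming with an e-folding time tau, so
    dT(t) = sum over s <= t of sensitivity * dF_s * (1 - exp(-(t-s)/tau))."""
    out = []
    for t in range(len(increments)):
        dT = sum(sensitivity * increments[s] * (1.0 - math.exp(-(t - s) / tau))
                 for s in range(t + 1))
        out.append(dT)
    return out

# A single step of one full doubling in year 0: after 60 years the
# response has reached (1 - 1/e), about 63% of equilibrium warming.
resp = temperature_response([1.0] + [0.0] * 100)
print(round(resp[60], 2))  # 2.3 * (1 - 1/e) = 1.45
```

With a gradually rising forcing, today’s temperature is the tapering sum of all past increments, which is the point made above.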

3. wrong.

First off. No one who produces a global average is measuring the global average.

They are all estimating the expected value of the temperature at locations where there is
no measurement.

Of course since this process involves an integration people call it an ‘average’.

I’ve done this analogy before perhaps it will help.

You have a pool in your back yard. You put one thermometer in the shallow end; it says 73F. You put another in the deep end; it measures 75F.

Now I give you a challenge. I am going to randomly select 1000 different locations to place
a third thermometer.

I want you to predict the value I will observe.

You will be judged on the size of the error of your prediction.

That prediction represents what folks are doing when they publish an ‘average’. It is the expected value of the temperature at a randomly selected position.

Now let’s do another experiment.

I have 20,000 stations around the world reporting today.

I take 5000 of them at random. I use those 5000 to create a predictive field based on altitude and latitude.

I then use those 5000 to predict the temperature of the other 15000.

Guess what?

And then I use the 20,000 to predict the values of the field (a continuous surface) at all the unmeasured locations.

Guess what?

Then I integrate that field. I get an expected value for all the unobserved locations.

This is called ‘an average’.
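The two experiments can be mimicked with synthetic data. Everything below (station counts, the linear latitude/altitude model, the noise level) is invented for illustration and is not any real product’s method:

```python
# Fit "observed" stations with a linear model in |latitude| and altitude,
# predict the held-out stations, then the fitted field can be averaged.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
lat = rng.uniform(-90.0, 90.0, n)   # station latitudes
alt = rng.uniform(0.0, 3000.0, n)   # station altitudes (m)

# A made-up "true" field: warm equator, ~6.5 C/km lapse rate, 1 C noise.
temp = 30.0 - 0.4 * np.abs(lat) - 6.5 * alt / 1000.0 + rng.normal(0.0, 1.0, n)

train, held = slice(0, 500), slice(500, None)

# Least-squares fit T ~ a + b*|lat| + c*alt on the "observed" stations.
X = np.column_stack([np.ones(n), np.abs(lat), alt])
coef, *_ = np.linalg.lstsq(X[train], temp[train], rcond=None)

# Predict the held-out stations; the error sits near the 1 C noise floor.
pred = X[held] @ coef
rmse = float(np.sqrt(np.mean((pred - temp[held]) ** 2)))
print(round(rmse, 2))  # close to the 1 C noise level

# "Integrating the field" then means evaluating the fitted model
# everywhere and averaging: the expected value at unobserved locations.
```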

• Clive Best says:

Stephen,

Yes, that is all very true. But you are always working with temperature anomalies DT, not with absolute temperatures T.

If I select 5000 stations which are all above 2000m in altitude to do the integration then I will get an average temperature that is 13C lower than that from 5000 selected at sea level.

So we work with anomalies and integrate DT over the earth’s surface. This assumes that all stations react the same to CO2. Now let’s consider whether they really do all react the same to CO2.

Air pressure at 4000m is about 60% of that at sea level, so there is 40% less CO2 above such a station. Clearly stations at different heights react differently to increasing CO2.
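The 60% figure can be checked with the simple isothermal barometric formula, taking a round-number 8 km scale height as an assumption:

```python
import math

def pressure_fraction(height_m, scale_height_m=8000.0):
    """Isothermal barometric formula P/P0 = exp(-h/H). H ~ 8 km is an
    assumed round-number scale height for illustration."""
    return math.exp(-height_m / scale_height_m)

print(round(pressure_fraction(4000.0), 2))  # 0.61, i.e. ~60% of sea level
```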

Before 1900 the integration is at best rudimentary, as there were very few stations in the southern hemisphere, Africa etc., and ocean temperatures depended on standard shipping lanes. So we end up interpolating one measurement over 10^6 km^2.

I am not criticising the data or the analysis. It is the best we can do given the constraints.

• kakatoa says:

Steve,

Your pool example reminded me of how much I hated wasting money on keeping our pool clean; I refused to heat the pool for the dogs to swim around. Bear didn’t mind the temperature gradients that developed overnight when I didn’t run the pool filter. I’d suggest running the pool pump for a while before trying to measure the average temperature of the water in the pool. Otherwise you are likely just wasting your time trying to improve the accuracy metric, and may be adding more error into the metric. Gavin’s discussion of significant digits seems important to consider right about now.

4. Clive Best says:

There is actually one way to unambiguously measure the earth’s temperature. You place a bolometer somewhere far away in space and measure the total IR radiation emitted by the earth to space. Then you can derive the effective black-body temperature of the earth, Teff, from the Stefan-Boltzmann law.

This temperature should vary during the day (with the rotation of the earth), since the albedo will change, and also during the year as the sun moves from the northern hemisphere to the southern hemisphere. So we would need a yearly average.

If we plot this yearly average over time then we might see unambiguous evidence of an enhanced greenhouse effect. Strangely enough this evidence would be a fall in Teff because a jump in CO2 would initially inhibit IR radiation in the 15 micron band. Teff would then relax back to its initial value (~252K) once CO2 stabilises.
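The Teff derivation can be checked numerically from the Stefan-Boltzmann law; the 240 W/m2 flux below is the common textbook value for Earth’s mean outgoing IR, not a figure from the post, and the exact Teff depends on the flux assumed:

```python
# Teff from the Stefan-Boltzmann law F = sigma * Teff^4.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def t_eff(flux_w_m2):
    """Effective black-body temperature for a given outgoing flux."""
    return (flux_w_m2 / SIGMA) ** 0.25

print(round(t_eff(240.0)))  # 255 K
```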

• Nick Stokes says:

“If we plot this yearly average over time then we might see unambiguous evidence of an enhanced greenhouse effect.”
No, because total IR emitted has to match sunlight absorbed. GHE doesn’t change the steady state flux emitted; it just impedes it so a higher temperature differential is required. The heat still has to leave.

The IR temperature averages over emission points, which at some frequencies are near the surface and at others in the high troposphere.

• Clive Best says:

In equilibrium you are right.

What I really meant was that after a sudden increase in CO2 the net IR flux should decrease (Teff falls) and then decay back to its equilibrium value as the surface warms to balance incoming solar energy.

In other words you measure the transient energy imbalance following increases in GHE and the response function.

It would be a good test of models.

5. Cytokinin says:

Do you know if much work has been done on this in the lab? I would have thought that it was possible to get a really good grasp of individual components in the system by controlling each variable. I keep trying to find such data, but if it’s there it is very shy.

• Clive Best says:

I thought initially that CERES was measuring the energy imbalance at the TOA. However, it turns out that they fiddle the result so as to agree with models (~0.6 W/m2). CERES does not have sufficient resolution in the measured IR flux to see any actual imbalance.

I asked the question on RealClimate and got this interesting response from Gavin Schmidt.

Clive Best says (comment 16, 13 Aug 2017 at 4:10 AM):
Suppose one were to measure the IR emission of Earth from say Neptune, to derive the Earth’s effective Black Body Temperature – Teff.

1) Would Teff vary during one (earth) year ?
2) Would Teff have changed at all since 1850 ?

[Response: You don’t have to go to Neptune, you can just degrade the data from DSCOVR (there is a paper on this either out or in press). You’ll see a diurnal cycle (as a function of the continental configuration) and an annual cycle as a function of the imbalance in hemispheric seasonality. As for long term trends, that depends on the energy imbalance. At equilibrium at 2xCO2, Teff could be smaller or larger depending on the SW cloud feedbacks. Assuming that they are small for the time being, you would initially get a *decrease* in Teff as CO2 levels rose (blocking some IR), and then a gradual increase as the surface equilibriated. Given that we are still in the transient phase, I think you’d see a small decrease that would persist. This can be calculated from GCM output from historical runs though, so if I get time I’ll make a plot. – gavin]

So maybe the absolute BB temperature will soon be measured.

6. Brendan King says:

Hi Clive,

Thanks for your posts, I’m getting a lot out of the interesting information. Adding to the temperature measurement difficulties, along with limited continuous long-term records, the change from mercury to digital thermometers appears to have created a substantial warming bias, as mentioned here: https://notrickszone.com/2015/01/13/weather-instrumentation-debacle-analysis-shows-0-9c-of-germanys-warming-may-be-due-to-transition-to-electronic-measurement/
A comparison of climatological observing windows and their impact on detecting daily temperature extrema https://link.springer.com/article/10.1007/s00704-017-2068-y

Regarding your model of temperature increase due to CO2 using an ECS of 2.3C, it seems to still be overcooking things (after the previous drop from 4.4C!). There are a number of papers indicating an ECS of around 1C or lower. This question is vital in the whole warming discussion. The models are all running too hot due to using these high ECS values, and by weighting the CO2 effect too high they can’t even approximate the direction of temperature change in the Holocene, as per the “Holocene conundrum”.

“A recent temperature reconstruction of global annual temperature shows Early Holocene warmth followed by a cooling trend through the Middle to Late Holocene [Marcott SA, et al., 2013, Science 339(6124):1198–1201]. This global cooling is puzzling because it is opposite from the expected and simulated global warming trend due to the retreating ice sheets and rising atmospheric greenhouse gases. Our critical reexamination of this contradiction between the reconstructed cooling and the simulated warming points to potentially significant biases in both the seasonality of the proxy reconstruction” https://www.pnas.org/content/pnas/111/34/E3501.full.pdf

These are nice brief versions having a look at ECS values