How does temperature depend on CO2?

Robert Rohde has produced a very nice animation of global temperatures as a function of CO2 levels in the atmosphere. Of course it is designed for public relations purposes, to show that increasing CO2 causes warming. He even uses absolute temperatures, which are not directly measured. Here is my version of how temperature anomalies depend on CO2.

Fig 1. Land temperatures (GHCN) and global temperatures (HadCRUT4) plotted as a function of CO2 levels. GHCN-Daily agrees with Berkeley Earth land temperatures. Normalised to a 1961-1990 baseline.

After a rather uncertain temperature rise from pre-industrial (280ppm) temperatures, there is a long period with no net warming between CO2 levels of 300 and 340 ppm, corresponding to the period ~1939 to ~1980. Warming afterwards continued as expected, but then began tailing off towards a logarithmic dependency on CO2.

Many people will glibly inform you that the CO2 greenhouse effect produces a logarithmic radiative forcing, and state that this can easily be derived from simple physics. However, few can really explain why it should be logarithmic, and it turns out that there is no simple proof that it should be. The often quoted formula for radiative forcing:

S = 5.35 \times \ln{\frac{C}{C_0}}

can be traced back to a 1998 paper in GRL (Myhre et al).
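The Myhre et al fit is easy to evaluate directly. A minimal sketch (the function name is mine, and 280ppm is taken as the pre-industrial baseline):

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Myhre et al. (1998) logarithmic fit for CO2 radiative forcing, in W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Forcing for a doubling of CO2 gives the canonical ~3.7 W/m^2
print(round(co2_forcing(560.0), 2))  # → 3.71
```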

This formula is in reality a fit to some rather complex line-by-line radiative transfer calculations over hundreds of vibrational excitation states of CO2 molecules for absorption and re-emission of infrared radiation. I have previously described my own calculation of this radiative transfer and how you can fit a logarithmic dependency to it. The physical reason why increasing CO2 apparently produces a logarithmic forcing is that the central lines rapidly become saturated way up into the stratosphere, the strongest of which can then even cause cooling of the surface. Overall net warming is mostly due to strengthening of the weaker peripheral excitation levels of the 15 micron band.

Fig 2: Calculated IR spectra for 300ppm and 600ppm using Planck spectra. Also shown are the curves for 289K and 220K, the latter roughly corresponding to the stratosphere. The central peak is cooling the planet because it lies high up in the stratosphere, where temperature increases with height.

The net effect produces an apparent ‘logarithmic’ dependency, which I also calculated, and which is very similar to that of Myhre et al. Notice also how 3/4 of the “greenhouse” effect from CO2 kicks in between zero and 400ppm.

Figure 1: Logarithmic dependence of radiative forcing on CO2 concentration up to 1000 ppm

The effect of increasing CO2 is to raise the effective emission height for 15micron IR radiation photons. The atmosphere thins out with height according to barometric pressure, and eventually the air is so thin that IR photons escape directly to space, thereby releasing energy from the atmosphere. Some IR frequencies can escape directly to space from the surface (the IR window). Others escape from cloud tops or high altitude water vapour and ozone.
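The barometric thinning can be sketched with the standard isothermal pressure profile. A small illustration, where the ~8.5 km scale height and the function names are my own assumptions for this sketch:

```python
import math

P0 = 1013.25   # mean surface pressure, hPa
H = 8.5        # approximate atmospheric scale height, km (illustrative value)

def pressure(h_km):
    """Barometric pressure at height h under the isothermal approximation."""
    return P0 * math.exp(-h_km / H)

def height_for_fraction(frac):
    """Height at which pressure falls to a given fraction of surface pressure."""
    return -H * math.log(frac)

print(round(pressure(10), 1))              # pressure near the tropopause, hPa
print(round(height_for_fraction(0.1), 1))  # → 19.6 km: 90% of the air lies below
```

So by ~20 km only a tenth of the atmosphere remains above, which is why the strongest CO2 lines emit from the stratosphere.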

The loss of energy from the top of the atmosphere drives convection and evaporation which is the primary heat loss from the surface. This process also drives the temperature lapse rate in the troposphere without which there could be no greenhouse effect. The overall energy balance between incoming solar insolation and the radiative losses to space determines the height of the tropopause and the earth’s average temperature. A small sudden increase in CO2 will slightly reduce the outgoing radiative loss to space, thereby creating an energy imbalance. This small energy imbalance is called “radiative forcing”. The surface will consequently warm slightly to compensate, thereby restoring the earth’s energy balance.

This effect can be estimated from the Stefan-Boltzmann law.

S = \sigma \epsilon T^4

\Delta S = 4 \sigma \epsilon T^3 \Delta T

If you assume T is constant (the answer increases by 1% for 1C if you don't) then

\Delta T = \frac{\Delta S}{4 \sigma \epsilon T^3}

so with T = 288K and \epsilon \approx 0.6 and an effective insolation area of the earth of \pi \times R^2 this then gives

\Delta T \approx 1.6 \times \ln{\frac{C}{C_0}} (^\circ C)

A steeper slope would be expected with net positive feedbacks.
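The arithmetic behind the ≈1.6 coefficient can be checked directly, using the values quoted above (ε ≈ 0.6 and T = 288K are the assumptions from the text):

```python
sigma = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
eps = 0.6          # effective emissivity assumed in the text
T = 288.0          # mean surface temperature, K

# dS/dT = 4 * sigma * eps * T^3, in W m^-2 per K
dS_dT = 4 * sigma * eps * T**3

# Combine with the Myhre forcing S = 5.35 ln(C/C0) to get the
# no-feedback temperature response coefficient
coeff = 5.35 / dS_dT
print(round(coeff, 2))  # → 1.65, i.e. the ~1.6 quoted above
```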

Figure 2 shows HadCRUT4.6 and my version of GHCNV3/HadSST3 plotted versus CO2 and compared to a logarithmic temperature dependence.

HadCRUT4.6 and 3D-GHCNV3/HadSST3 plotted versus CO2. The orange and purple curves show logarithmic temperature dependencies.

There is still a discrepancy in trends before CO2 reaches ~340ppm, but thereafter temperatures follow a logarithmic increase with a scale factor of about 2.5. This implies a climate sensitivity (TCR) of about 1.7C.
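The step from the empirical scale factor to TCR is just multiplication by ln 2, since TCR is the warming at the point of CO2 doubling:

```python
import math

scale = 2.5                 # empirical fit: dT = 2.5 * ln(C/C0)
tcr = scale * math.log(2)   # warming when C/C0 = 2
print(round(tcr, 2))        # → 1.73, i.e. "about 1.7C"
```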


2018 Temperature Comparisons

The US Government shutdown delayed the release of the December GHCN station data. This, perhaps surprisingly, also delayed the Met Office/CRU results. So just how independent are they from each other? Here is a comparison of the annual results from Berkeley Earth, GISTEMP, my own 3D-GHCN, 3D-H4 and HadCRUT4.6, all on the same baseline of 1961-1990.

Compare temperature series in 2018

You can see that they all have the same shape but that they begin to diverge after 2004. Why? GHCN V3 has 7280 valid stations and CRU has 7688, the vast majority of which use the same data, hence the delay in also releasing CRUTEM. The ocean surface data are also interdependent between HadSST3 and ERSST3, with only marginal differences. So what causes these apparent changes in results and trends?

The basic difference is simply the way the surface weighted average is made.

  • HadCRUT only averages temperatures over locations where there are measurements, based on a 5×5 (lat, lon) grid. Each cell is weighted by cos(lat) in the area-weighted average. This method has remained constant since 1990. You can argue that this is the only impartial choice since it avoids any interpolation.
  • GISTEMP results in the steepest temperature rise because it assumes that every station is representative of temperature change within a 1200km radius. So the addition of new ‘rapidly’ warming stations in regions like the Arctic has a far larger effect over adjacent regions when forming the average. My gut feeling is that this method, originated by James Hansen, has a warming bias.
  • Berkeley Earth uses a least squares fitting technique based on the assumption that temperature is a smooth function of position over the earth’s surface. So they also extrapolate into areas without data. Cowtan & Way use a kriging technique on the raw HadCRUT4 data, essentially doing the same thing.
  • I use spherical triangulation of all station and ocean data over the surface of the sphere. Each triangle has one measurement at each vertex, and all the earth’s surface is covered. Nick Stokes uses a similar technique for TempLS.
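The HadCRUT-style cos(lat) cell weighting in the first bullet can be sketched as follows. This is my own minimal illustration, assuming a 5×5 degree anomaly grid with NaN marking empty cells:

```python
import numpy as np

def gridded_mean(anom):
    """Area-weighted global mean of a 36x72 (5x5 degree) anomaly grid.
    Cells with no data are NaN and are simply excluded, with no
    interpolation, as in the HadCRUT approach."""
    lats = np.arange(-87.5, 90, 5)                      # cell-centre latitudes
    w = np.cos(np.radians(lats))[:, None] * np.ones((36, 72))
    mask = ~np.isnan(anom)
    return np.nansum(anom * w) / w[mask].sum()

# Toy example: a uniform 0.5C anomaly with half the cells missing
grid = np.full((36, 72), 0.5)
grid[:, ::2] = np.nan
print(round(gridded_mean(grid), 6))  # → 0.5
```

Note how the cos(lat) weights shrink polar cells, so sparse Arctic coverage contributes little to the HadCRUT average, which is exactly why the infilling methods above run warmer.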

Here is a comparison between the monthly temperature anomalies of HadCRUT4 and GHCN V3 when calculated exactly the same way using spherical triangulation. Only about 5% of the stations differ, but there remains a small difference in data corrections (homogenisation). HadCRUT4.6 uses 7688 stations and GHCN has 7280.

Almost exact agreement between HadCRUT4.6 and GHCN V3 when calculated exactly the same way.

The new GHCN V4 provisional data has collected 27315 stations, of which 17372 have at least 10 years of data between 1961 and 1990, so eventually V4 should double the number of stations, although without dramatically increasing the geographic coverage. We can expect a bit of extra warming though, since Arctic latitudes gain yet more data. Here is a preview.

GHCN V4 December temperatures over the Northern Hemisphere from over 17000 stations+HadSST3

With so many new stations, mainly in the northern regions, global temperatures will apparently ‘warm again’ once the main groups adopt it.

Comparison of GHCN V3 (blue) and V4 (red) calculated in exactly the same way. Deltas are shown against the right hand y-axis.

The December temperature rises by another ~0.05C above that of GHCN V3.

In conclusion, all global temperature indices agree with each other in general, but that is mainly because they all use the same core station and ocean data. The main differences are due to how they spatially average the available data. HadCRUT4.6 is the most conservative because it only averages over (lat, lon) cells where there is real data. All the others extrapolate into regions without data. I think spherical triangulation is the most honest because it works on the surface of a sphere, weighting each measurement equally.


Zeke’s Wonder Plot

Zeke Hausfather, who works for Carbon Brief and Berkeley Earth, has produced a plot which shows almost perfect agreement between CMIP5 model projections and global temperature data. It is based on RCP4.5 models and a baseline of 1981-2010. First here is his original plot.

I have reproduced his plot and essentially agree that it is correct. However, I also found some interesting quirks. First, here is my version of his plot, where I have added the CMIP5 mean to compare with the new blended TOS/TAS mean. I have also included the latest HadCRUT4.6 annual values in purple.

Original plot with RCP4.5 model ensemble members overlaid and the unblended model mean shown in red. HadCRUT4.6 annual values have been added in purple.

The apples-to-apples comparison (model SSTs blended with model land 2m temperatures) reduces the model mean by about 0.06C. Zeke has also smoothed the temperature data by using a 12-month running average. This has the effect of exaggerating peak values compared to using annual averages. To see this, simply compare HadCRUT4 (annual) in purple with his Hadley/UEA series.
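The smoothing effect is easy to demonstrate: a warm spell straddling a calendar-year boundary gets split between two annual averages, while a 12-month running window can centre on it. A synthetic sketch (the pulse shape is invented purely for illustration):

```python
import numpy as np

# Three years of synthetic monthly anomalies with a warm pulse
# centred on the December/January boundary of years 2 and 3
months = np.arange(36)
anom = np.exp(-((months - 23.5) ** 2) / 18.0)

annual = anom.reshape(3, 12).mean(axis=1)                    # calendar-year means
running = np.convolve(anom, np.ones(12) / 12, mode="valid")  # 12-month running means

print(round(annual.max(), 3), round(running.max(), 3))
# The best-aligned running window peaks well above either calendar-year average.
```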

So now what happens if you change RCP?

Here is the result for RCP2.6, which has less forcing than RCP4.5.

The same plot but now overlaid with the RCP2.6 model ensemble and mean.

The model spread and the mean have increased slightly, so the model mean and grey shading should also rise slightly.

Next, does the normalisation (baseline) period affect the result?

Effect of changing the normalisation period. Cowtan & Way uses kriging to interpolate HadCRUT4.6 coverage into the Arctic and elsewhere.

Yes it does. Shown above is the result for a 1961-1990 normalisation. First, look how the two lowest model projections now drop further down, while the data now lie below both the blended (thick black) and the original CMIP5 average (thin black). The HadCRUT4 2016 value is now below the blended mean.

This improved model agreement has nothing to do with the data itself but is instead due to a reduction in the warming predicted by the models. So what exactly is meant by ‘blending’?

Measurements of global average temperature anomalies use weather stations on land and sea surface temperatures (SST) over the oceans. The land measurements are “surface air temperatures” (SAT), defined as the temperature 2m above ground level. The CMIP5 simulations, however, use SAT everywhere. The blended model projections use simulated SAT over land and TOS (temperature at surface) over the oceans. This reduces all model predictions slightly, thereby marginally improving agreement with the data. See also Climate-lab-book.
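The per-cell blending logic can be sketched as follows. This is a simplified illustration of the idea, not Cowtan's actual code; the function name, mask handling and toy numbers are my own:

```python
import numpy as np

def blend(sat, tos, land_frac, ice_frac):
    """Blend per-cell anomalies: use 2m air temperature (SAT) over land and
    sea ice, and sea surface temperature (TOS) over open ocean.
    A simplified version of the masked blending described in the text."""
    sat_weight = np.clip(land_frac + (1 - land_frac) * ice_frac, 0, 1)
    return sat_weight * sat + (1 - sat_weight) * tos

# Toy cell: 30% land, no ice; modelled SAT anomaly runs warmer than TOS
blended = blend(np.array([1.0]), np.array([0.8]),
                np.array([0.3]), np.array([0.0]))
print(blended[0])  # lies between TOS and SAT, closer to TOS
```

Because modelled SAT anomalies over the ocean are generally larger than TOS anomalies, replacing SAT with TOS over open water is what pulls the blended model mean down slightly.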

The detailed blending calculations were done by Kevin Cowtan, using a land mask and an ice mask to define where TOS and SAT should be used in forming the global average. I downloaded his Python scripts and checked the algorithms, and they look good to me. His results are based on the RCP8.5 ensemble. These are the results I get using his Python code.

RCP8.5 ensemble. The original projections are in blue and the blended ones in red. The ensemble mean is reduced by up to 0.07C. Data shown are Cowtan & Way.

Agreement between the data (Cowtan & Way) and the models has definitely improved, but the models are still running warmer from 1998 to 2014.

Here finally is my 1950-2050 overview, where the blended RCP4.5 result has been added.

The solid blue curve is the CMIP5 RCP4.5 ensemble average after blending. The dashed curve is the original.

Again the models mostly lie above the data after 1999.

This post is intended to demonstrate just how careful you must be when interpreting plots that seemingly demonstrate either full agreement of climate models with data, or else total disagreement.

In summary, Zeke Hausfather, writing for Carbon Brief, was able to show an almost perfect agreement between data and models through 1) a clever choice of baseline, 2) a choice of RCP for the blended models, and 3) the use of a 12-month running average. His plot is 100% correct. However, exactly the same data plotted with a different baseline, and using annual values (exactly like those in the models) instead of 12-month running averages, shows instead that the models still lie consistently above the data. I know which one I think best represents reality.
