# Is TCR a measurable quantity?

Until yesterday I had assumed that climate sensitivity was the measured temperature rise after a doubling of CO2 in the atmosphere from 280ppm in the pre-industrial era to 560ppm as a result of human activity.

$\Delta{T_{cr}} = \lambda\Delta{S}$  where  $\Delta{S} = S_{560} - S_{280}$  and  $\lambda$ is the climate response (in deg.C per W/m2)
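For orientation, the forcing for a doubling can be evaluated from the logarithmic formula used later in this post ($\Delta{S} = 5.3\ln(C/C_0)$). The value of $\lambda$ below is purely an illustrative assumption, not a measured number:

```python
import math

# Radiative forcing from a CO2 change, using the logarithmic
# approximation used later in this post: dS = 5.3*ln(C/C0) in W/m2
def co2_forcing(c_ppm, c0_ppm=280.0):
    return 5.3 * math.log(c_ppm / c0_ppm)

dS_2x = co2_forcing(560.0)      # doubling from 280 ppm to 560 ppm
print(round(dS_2x, 2))          # ~3.67 W/m2

lam = 0.5                       # hypothetical climate response, deg.C per W/m2
print(round(lam * dS_2x, 2))    # ~1.84 deg.C for that assumed lambda
```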

The discussion on Ed Hawkins' blog concerning Lewis & Crok's criticism of the AR5 climate sensitivity analysis has highlighted that the IPCC's official definition of TCR is purely model based and not a directly observable quantity. Piers Forster writes:

“Lewis & Crok perform their own evaluation of climate sensitivity, placing more weight on studies using “observational data” than estimates of climate sensitivity based on climate model analysis”. “Here we illustrate the effect of the data quality issues and assumptions made in these “observational” approaches and demonstrate that these methods do not necessarily produce more robust estimates of climate sensitivity.”

The IPCC definition of TCR is: “Transient climate response (TCR) is defined as the average temperature response over a twenty-year period centered at CO2 doubling in a transient simulation with CO2 increasing at 1% per year [Randall et al. 2007]”.
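Under that definition the 1% per year growth rate fixes the timescale: CO2 doubles after about 70 years, so TCR is the mean warming over model years roughly 60 to 80. A quick check of the arithmetic:

```python
import math

# At 1% per year compound growth, the concentration doubles when
# 1.01**n = 2, i.e. after n = ln(2)/ln(1.01) years
years_to_double = math.log(2.0) / math.log(1.01)
print(round(years_to_double, 1))   # ~69.7 years
```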

This is a model derived value, calculated by drip-feeding CO2 into the atmosphere while keeping all other variables constant. It would only equal the temperature rise since pre-industrial times when CO2 levels reach 560 ppm if nothing else changes. Lewis & Crok instead try to untangle TCR from the observed temperatures by unfolding model derived forcings (CO2, N2O, aerosols etc.) and find that TCR lies in the range 1-2C with a most likely value of 1.35C. They are then criticised for doing this because they “rely” too much on observations!
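This observational "unfolding" amounts to a simple scaling: the observed warming multiplied by the ratio of the doubling forcing to the net forcing change over the observation period. The numbers below are illustrative assumptions, not Lewis & Crok's actual inputs:

```python
import math

# Energy-budget style TCR estimate: TCR ~ F_2x * dT_obs / dF_obs
F_2x   = 5.3 * math.log(2.0)   # forcing for doubled CO2, ~3.67 W/m2
dT_obs = 0.75                  # assumed observed warming, deg.C
dF_obs = 2.0                   # assumed (model derived) net forcing change, W/m2

TCR = F_2x * dT_obs / dF_obs
print(round(TCR, 2))           # ~1.38 deg.C with these assumed inputs
```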

For a similar reason I am also concerned about the reasoning behind why the RCP8.5 emissions scenario results in a forcing of 8.5 W/m2 by 2100. This “business as usual” emission scenario results in CO2 concentrations reaching about 900 ppm by 2100. I assume that models have been used to calculate that this scenario results in a final net forcing of 8.5 W/m2. However those same models must implicitly have a built in climate sensitivity in order to derive that forcing. The emission scenario covers not only CO2 but also other anthropogenic GHGs (methane, N2O, CFCs).

In other words RCP8.5 has a built in feedback, which can be calculated as follows.

1) feedback explicit: $\Delta{T}_{0} = (\Delta{S_0} + F\Delta{T}_{0})G_{0}$

2) no feedback: $\Delta{T}_{0} = \Delta{S}\,G_{0}$

where $G_0 = 1/3.75$ is the no-feedback gain in deg.C per W/m2.

In case 1, $\Delta{S_0} = 5.3 \ln(\frac{C}{C_0}) = 6.19$ W/m2 (with C = 900 ppm and $C_0$ = 280 ppm)
In case 2, $\Delta{S} = 8.5$ W/m2

Since the temperature rise must be the same in both cases, equating 1) and 2) gives $\Delta{S} = \Delta{S_0}/(1 - FG_0)$, so
$\frac{\Delta{S}}{\Delta{S_0}} = \frac{3.75}{3.75-F} = \frac{8.5}{6.19}$

Therefore F = 1.02 W/m2/deg.C

Or a built in “anthropogenic” booster to the above definition of TCR of about 50%. In fact if you look at the AR5 model forcings you can see that the other GHGs are currently calculated to add ~0.9 W/m2 to the 1.8 W/m2 from increased CO2. However this now introduces a model dependent “anthropogenic” feedback.
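The arithmetic above can be checked directly; a short numerical sketch:

```python
import math

# CO2-only forcing for 900 ppm relative to pre-industrial 280 ppm
dS0 = 5.3 * math.log(900.0 / 280.0)
dS  = 8.5                       # total RCP8.5 forcing by 2100, W/m2

# Solving dS/dS0 = 3.75/(3.75 - F) for the implied feedback F
F = 3.75 * (1.0 - dS0 / dS)
print(round(dS0, 2))            # ~6.19 W/m2
print(round(F, 2))              # ~1.02 W/m2/deg.C
```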

The root of the problem lies in the entanglement of models and  observations in the definition of TCR.  In my opinion it would be much simpler and cleaner to define TCR as a purely measurable quantity rather than one solely based on model simulations.

“Transient climate response (TCR) is defined as the measured average temperature response over a twenty-year period centered on the observed CO2 doubling.”

This definition, TCR(E), can be measured by experiment. It is simply the average temperature rise when CO2 levels reach 560 ppm. It can also essentially be measured today – see A Fit to Global Temperature Data. This definition removes the non-CO2 anthropogenic effects (CH4, N2O, CFCs etc.) and avoids getting trapped by the model centric view. These effects are essentially anthropogenic feedbacks, in a sense similar to climate feedbacks – e.g. increased H2O.
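One way to operationalise TCR(E) is to fit the observed warming to $a\ln(C/280)$ and read off the rise at 560 ppm, as in the fit linked above. The data points below are purely illustrative placeholders, not the actual temperature record:

```python
import math

# Hypothetical (CO2 ppm, temperature anomaly deg.C) points -- placeholders only
obs = [(340.0, 0.15), (370.0, 0.45), (400.0, 0.65)]

# Least-squares fit through the origin for dT = a * ln(C/280)
xs = [math.log(c / 280.0) for c, _ in obs]
ys = [t for _, t in obs]
a = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

TCR_E = a * math.log(2.0)   # projected average rise at 560 ppm
print(round(TCR_E, 2))      # ~1.1 deg.C for these made-up points
```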

In all other branches of physics models make predictions and experiments then test the models. Why should climate science be different?

The emission scenarios are based on socio-economic modeling of current energy sources. Nearly all of our energy still comes from burning fossil fuels, while farming and transport depend on oil. This is also reflected in the signature of CH4 and N2O emissions associated with CO2 emissions. Therefore the CO2 level is still a good measure of anthropogenic effects.

Let’s suppose that in the next 50 years there is a breakthrough in zero-carbon energy – say thorium or fusion reactors. CO2 emissions would begin to fall and so too would the balance between other GH gases and CO2. As far as I can see none of this is reflected in the RCP scenarios.

Ed Hawkins says that climate science is an observational science – like astronomy. That seems to be about right. However in astronomy observations are the drivers of progress – for example the discovery of pulsars, dark matter and dark energy. In climate science the modelers are in control and observations play second fiddle. A prime example of this is the definition of TCR.


### 4 Responses to Is TCR a measurable quantity?

1. A C Osborn says:

They can’t do real science using observations because they can’t fudge the results enough to make it “Catastrophic”.
They can’t use the current (much modified) temperature record and CO2 increase because they can’t tell how much of the temperature increase is CO2 and how much is LIA rebound or natural swings.
In other words because they have not been doing good science in the past what they have got to work with is just not good enough.
So they “estimate” how much of the temperature increase is CO2 based and use that and then call it science.
They are like a bunch of Quack doctors trying to do brain surgery.

• Clive Best says:

Indeed. The fudge factor turns out to be aerosols. This can be adjusted to offset the over-sensitivity to GH gases built into nearly all GCMs. The built in H2O and cloud feedbacks are way too high, so in order to keep model “hindcasts” agreeing with measured temperature data the excess warming can be offset by aerosols. This way the IPCC can continue to claim that TCR is 2-4C.

2. Roger Andrews says:

Clive: You ask, is climate sensitivity a measurable quantity?

Sure it is. Here’s a robust estimate of 2.2C for the RCP8.5 scenario, based on the CMIP5 surface air temperature (tas) model simulations:

http://oi57.tinypic.com/25alwl0.jpg

The question is whether this number means anything. It will, but only if CO2 is the ONLY variable that affects temperature. (The R^2 value for temperature vs CO2 in this case is 0.997, leaving no room for anything else).

Repeating the exercise for CMIP5 ocean surface temperatures (tos) gives an equally good best fit (R^2 = 0.996), but at CS = 1.5C, 0.7C lower than the surface air temperature CS. This difference highlights the need to evaluate ocean and air temperatures separately, not average them together as HadCRUT4 does.

3.  Doug Cotton  says:

Following on from my comments on the Loschmidt thread, this now explains precisely why there is no (warming) sensitivity to carbon dioxide, because the greenhouse radiative forcing conjecture fails to explain reality. Planetary temperatures are determined primarily by the gravito-thermal effect, not by radiative forcing.

We start with the second law of thermodynamics which states that “the entropy of an isolated system never decreases, because isolated systems always evolve toward thermodynamic equilibrium – the state of maximum entropy.”

To think of entropy as a “degree of randomness” can often lead to misunderstandings. The main concept is that isolated systems progress towards thermodynamic equilibrium, that being the state wherein entropy is maximised within the constraints of the system of course.

It now makes sense when we note that entropy can be described as “a measure of progressing towards thermodynamic equilibrium.” This way we have no ambiguity or conflict between the Second Law and the concept of entropy.

Note also that thermodynamic equilibrium is a state which has “no unbalanced potentials (or driving forces) within the system”. It also embraces thermal equilibrium, and people should note that thermal equilibrium does not imply isothermal conditions, just no transfer of kinetic energy across a boundary.

But if we start with isothermal rows of molecules (say, 68nm apart in altitude) we see the top row losing kinetic energy and the bottom row gaining kinetic energy as molecules go up and down across the boundary between the rows. This is because the sum of kinetic energy and gravitational potential energy is kept constant as they move, and, when they collide, the total kinetic energy of the two molecules is shared equally between them. (See my four molecule thought experiment in a comment above.)

So the isothermal state was not in thermal equilibrium, and thus not the state of thermodynamic equilibrium. The Second Law implies that will change. It only stops changing when, in the absence of inter-molecular radiation, the difference in gravitational potential energy between the rows equals the difference in kinetic energy. Then, and only then, do we have thermodynamic equilibrium. Not surprisingly, entropy is homogeneous and so there are no unbalanced energy potentials that would lead to further increases in entropy.

Yes, you must include gravitational potential energy in all this, because thermodynamic equilibrium also includes mechanical equilibrium, and that means no net movements of more molecules going downwards than upwards. Obviously the external force of gravity does affect the kinetic energy distribution, leading to the gravito-thermal gradient which I consider thus proven by induction from the four molecule thought experiment.

Furthermore, it is confirmed by data from throughout the Solar System.