There has been much media interest in “climate sensitivity”, following a recent paper (Otto et al.) which reported lower values than previous IPCC estimates. Climate sensitivity essentially measures how the Earth’s climate reacts to a sudden kick (volcano, meteor, CO2). The new lower result is mainly due to the stalling of observed global temperatures since 1998 despite rising CO2 levels. One measure of climate sensitivity is the Transient Climate Response (TCR): the immediate temperature rise once CO2 levels reach twice their pre-industrial value (560 ppm). The second measure, Equilibrium Climate Sensitivity (ECS), is the eventual temperature rise once full energy balance is restored, perhaps decades later. The difference between them is due to the time lag caused by heat inertia in the oceans. In this post I focus on ECS and simply assume that GCM models are a correct description of climate. I then use HADCRUT4 temperature data to try to pin down ECS. Unlike the Otto et al. paper I avoid using OHC data and instead assume an e-folding ocean heat capacity delay of 15 years (also based on models) to reach equilibrium. The result shows that ECS due to the CO2 GHG effect is unlikely to be more than 3 deg.C. Recent temperatures instead imply a lower estimate of ECS ~ 2C.

The dashed curve is the transient CO2 temperature dependence assuming a feedback factor equivalent to an ECS = 2.0. The red and blue curves are predictions for different ECS values using CMIP5 model forcings up to 2005. Calculated model temperatures assume an ocean relaxation time of 15 years (see text). The data points are the latest HADCRUT4 values.
Background
In the end the whole climate debate boils down to just one thing – climate sensitivity. So what is it, and how can we measure it? Climate sensitivity is the temperature response to an increment in forcing.
In the case of no “feedbacks”, the response is set by the Stefan-Boltzmann stabilization term:

ΔT = λΔS, where λ = 1/(4σT^3) ≈ 0.3 K per W/m2

with T = 288K. The climate reacts to any increase or decrease in “forcing” by radiating a little more or a little less energy from the surface. Temperature changes are slight because radiation varies as the 4th power of T.
Confusingly, however, the term “climate sensitivity” is usually defined as the change in temperature after a doubling of CO2. For the no-feedback case this gives a “climate sensitivity” of ~ 1.1C. This is no big deal, because even if mankind continued to burn all available fossil fuels so that CO2 levels reached 1000 ppm, global temperatures would only increase by at most 2C, which some argue might actually be beneficial to life on Earth.
The global warming scare, then, is all about “positive feedbacks” to CO2 forcing. These positive feedbacks must not be too large, however, because otherwise temperatures would explode – just like the howl you get when a microphone gets too close to its speaker. Suppose the net feedback gain is f, so that a temperature rise ΔT gets enhanced to (1+f)ΔT; but the added fΔT is itself amplified, contributing f(fΔT) = f^2 ΔT, and so on. Summing the series gives a total response of ΔT/(1−f), so the climate system would soon run out of control as the feedback gain approaches 1.
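As a minimal sketch (not from the post), the feedback series above can be summed directly: each round of amplification adds another power of f, and the closed form ΔT/(1−f) diverges as f approaches 1.

```python
# Sketch of the feedback series in the text: a no-feedback rise dT0 is
# amplified to dT0*(1 + f + f^2 + ...) = dT0/(1 - f), blowing up as f -> 1.
def amplified_response(dT0, f, terms=None):
    """Feedback-amplified temperature rise; closed form when terms is None."""
    if terms is None:
        if f >= 1:
            raise ValueError("feedback gain >= 1: runaway response")
        return dT0 / (1.0 - f)
    return dT0 * sum(f ** k for k in range(terms))

dT0 = 1.1  # no-feedback response to doubled CO2 (deg C, from the post)
for f in (0.0, 0.3, 0.5, 0.8, 0.9):
    print(f"f = {f:.1f}: total dT = {amplified_response(dT0, f):.2f} C")
```

Truncating the series at a finite number of terms converges to the same closed-form value whenever f < 1.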
Figure 1 shows the temperature response to a doubling of CO2 for different feedback factors f.

Fig 1: Positive feedbacks can soon blow up “climate sensitivity”. Negative feedbacks are always stable.
So if the feedback gain approaches 1, the climate simply runs away with itself. This is the problem that global warming alarmists face: they need to demonstrate that “runaway” warming might occur unless civilization “de-carbonizes” and/or abandons growth. Feedbacks must be positive enough that warming becomes scary, but not too strongly positive, because otherwise the climate would have run away eons ago and we wouldn’t be here. The Earth’s climate has seen huge swings in CO2 levels in the past, and has survived meteor impacts, supernovae and a 30% increase in solar output. During all this time (4 billion years) liquid oceans have existed on Earth and life has prospered, which implies that feedbacks must be small or even negative. Figure 1 also shows the range of estimates for climate sensitivity from the IPCC 2007 AR4 report, ranging from 2C to 5C.
A recent paper (Otto et al.) has measured climate sensitivity based on Hadcrut4 temperature data, OHC data (Levitus et al.) and model forcing data. They used the following relationships for “equilibrium climate sensitivity” (ECS) and “transient climate response” (TCR):

TCR = F_2x ΔT / ΔF        ECS = F_2x ΔT / (ΔF − ΔQ)

TCR is based on the immediate temperature response to a change in forcing ΔF, ignoring any temperature inertia of the oceans, whereas ECS is the final temperature response once the oceans have stabilized. This equilibrium value is calculated by combining the transient heat uptake by the oceans, ΔQ, with the surface temperature change. F_2x is the forcing due to a doubling of CO2 (~ 3.7 W/m2). When they include the flat period 2000-2012 they find the most likely value of ECS to be 2C – significantly lower than the 2007 IPCC estimates. The main reason for the updated estimates is the inclusion of temperature data up to 2009 and then 2012.
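The Otto et al. energy-budget relations, TCR = F_2x ΔT/ΔF and ECS = F_2x ΔT/(ΔF − ΔQ), can be sketched numerically. The input numbers below are illustrative placeholders of roughly the right magnitude, not the paper’s actual data:

```python
F2X = 3.7  # W/m^2, forcing for a doubling of CO2 (value used in the post)

def tcr(dT, dF):
    """Transient climate response: ocean heat uptake ignored."""
    return F2X * dT / dF

def ecs(dT, dF, dQ):
    """Equilibrium climate sensitivity: ocean heat uptake dQ subtracted."""
    return F2X * dT / (dF - dQ)

# Hypothetical inputs (NOT Otto et al.'s numbers): K, W/m^2, W/m^2
dT, dF, dQ = 0.75, 1.95, 0.55
print(f"TCR ~ {tcr(dT, dF):.2f} C, ECS ~ {ecs(dT, dF, dQ):.2f} C")
```

Subtracting ΔQ always makes ECS larger than TCR, which is why the equilibrium estimate exceeds the transient one.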
We can do the same analysis without OHC data if we assume a climate inertia time delay as described here. For this work I use the average forcings derived from the CMIP5 generation of climate models (Forster et al.), which have been digitized by Willis Eschenbach – see his post here. Note that these model forcings have been tuned to fit the hindcast of temperature data since 1850. I find this backward retrofitting fairly suspect because it assumes the models describe nature. The variable forcings include known volcanic eruptions, man-made aerosols and “natural variability”. We can see just how quickly these forcings change in Figure 2, where the CMIP5 spectrum is compared to a pure CO2 forcing calculated as ΔS = 5.3 ln(C/C0). The CO2 values are taken from seasonally averaged Mauna Loa data extrapolated back to 1750.

Fig 2: Comparison of a pure CO2 GHG forcing and the CMIP5 averaged forcings used to hindcast the past temperature dependence since 1850.
It is clear that the model forcings move up and down rapidly in order to match the measured temperature anomalies from Hadcrut4. They include not just man-made and natural aerosols, but adjustments to follow “natural variations” as well. Despite this, let’s now use these CMIP5 forcings together with the Hadcrut4 data to measure the resultant climate sensitivity.
The transient and equilibrium responses to a forcing change ΔF are

ΔT_trans = λΔF (1 − e^(−t/τ))

where ΔT_trans is the transient temperature response, and

ΔT_eq = λΔF

is the equilibrium temperature response. Taking Stefan-Boltzmann, S = εσT^4, to derive the IR energy balance gives

λ_0 = 1/(4εσT^3)

or, in terms of feedbacks,

λ = λ_0/(1 − f)

and, for the equilibrium climate sensitivity for a doubling of CO2,

ECS = λ F_2x ≈ 3.7 λ
Figure 3 shows the temperature response calculated from the model forcings for given values of ECS.
Up to 2005 the CMIP5 forcings give ECS = 3 deg.C. Extending the temperature data to 2012 reduces the apparent value to 2.5C, although the model forcing data are not available post-2005. While the models are driven by CO2 forcing and climate feedbacks, it is clear that they also have a natural forcing component that is adjusted so as to agree with the temperature data. How well does the CO2 forcing alone do in matching the Hadcrut4 data?
To calculate the CO2 forcing I take a yearly increment of

ΔS = 5.3 ln(C/C0)

where C0 and C are the concentrations before and after the yearly pulse. All values are calculated from seasonally averaged Mauna Loa data smoothed back to an initial concentration of 280 ppm in 1750.
Each pulse is tracked through time and integrated into the overall transient temperature change using

ΔT(t) = λΔS (1 − e^(−t/τ))

where τ = 15 years and λ was calculated based on an assumed ECS of 2.0C. The results are compared to the observed HADCRUT4 anomaly measurements in Figure 4.
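A minimal sketch of this pulse-integration scheme, assuming (as in the post) yearly increments ΔS = 5.3 ln(C/C0), an ECS of 2.0C and a 15-year e-folding time. The smooth 0.5%/yr CO2 growth path is a hypothetical stand-in for the Mauna Loa record:

```python
import math

F2X, ECS, TAU = 3.7, 2.0, 15.0  # W/m^2, deg C, years (the post's assumptions)
LAM = ECS / F2X                 # equilibrium response, deg C per W/m^2

def forcing_increments(co2):
    """Yearly forcing pulses dS = 5.3*ln(C/C0) between consecutive years."""
    return [5.3 * math.log(c1 / c0) for c0, c1 in zip(co2, co2[1:])]

def transient_T(co2):
    """Each pulse i relaxes as LAM*dS[i]*(1 - exp(-(t - i)/TAU)); sum them."""
    dS = forcing_increments(co2)
    return [sum(LAM * dS[i] * (1.0 - math.exp(-(t - i) / TAU))
                for i in range(t + 1)) for t in range(len(dS))]

# Hypothetical CO2 path: 0.5%/yr growth from 280 ppm, roughly doubling by year 139.
co2 = [280.0 * 1.005 ** y for y in range(140)]
T = transient_T(co2)
print(f"final CO2 ~ {co2[-1]:.0f} ppm, transient dT ~ {T[-1]:.2f} C")
```

The transient value stays below the equilibrium 2.0C because the most recent pulses have not yet relaxed – the same lag that separates TCR from ECS.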

Fig 4: The dashed curve is the transient CO2 temperature dependence assuming a feedback factor equivalent to an ECS = 2.0.
Willis Eschenbach fitted the model forcing/temperature response of the CMIP5 data with an e-folding time of just 2.9 years, and then made a compelling argument that the measured data fail to reproduce the volcanic cooling the models predict. This argument is weaker, however, if the ocean heat capacity results in an e-folding time of ~ 15 years, as shown in Figure 4, because the longer relaxation washes out the volcanic signals.
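A rough check on this washing-out effect, using the simple relaxation dT/dt = (λF − T)/τ and a hypothetical Pinatubo-scale forcing dip of −3 W/m2 lasting one year (with λ equivalent to ECS = 2.0C):

```python
import math

LAM = 2.0 / 3.7  # deg C per W/m^2, equivalent to an ECS of 2.0 C

def peak_cooling(pulse_F, tau):
    """Peak dT from a 1-year forcing pulse under dT/dt = (LAM*F - dT)/tau.
    The response grows toward LAM*pulse_F and peaks when the pulse ends."""
    return LAM * pulse_F * (1.0 - math.exp(-1.0 / tau))

for tau in (2.9, 15.0):  # Eschenbach's fitted value vs. the post's ocean delay
    print(f"tau = {tau:4.1f} y: peak cooling {peak_cooling(-3.0, tau):+.2f} C")
```

With τ = 15 years the peak response to the same volcanic pulse is only about a fifth of the τ = 2.9 case, so the eruptions largely disappear into the noise.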
A pure CO2 signal with an ECS of ~ 2.0 deg.C matches the overall trend of recent temperature data. Superimposed on this trend is an apparent regular 60-year oscillation, which has been noted, identified and fitted by many others previously. These results for a lower ECS than reported in AR4 are in line with those of Otto et al. The analysis assumes a slow climate inertia with an e-folding time of 15 years. If, as I suspect, temperatures remain flat for another 5-10 years, then ECS will reduce further and we can worry about something else.
References
Otto, A. et al. (2013): Energy budget constraints on climate response. Nature Geoscience,doi:10.1038/ngeo1836
Forster, P. M., T. Andrews, P. Good, J. M. Gregory, L. S. Jackson, and M. Zelinka (2013): Evaluating adjusted forcing and model spread for historical and future scenarios in the CMIP5 generation of climate models, J. Geophys. Res. Atmos., 118, doi:10.1002/jgrd.50174
Christopher Colose, An analysis of Radiative Equilibrium, Forcings and Feedbacks
“When they include the the flat period 200-2012”. The period should be 2000-2012.
The calculation of climate sensitivity assumes only the forcings included in climate models and does not include any significant natural causes of climate change that could affect the warming trends. You assumed that all of the temperature rise since pre-industrial times was caused by CO2, partly offset by the cooling effects of rising aerosols. The CMIP5 aerosol forcing changes from -0.3 W/m2 in 1955 to -1.0 W/m2 in 1975, but this is just a fudge; there are no data to support it. The increase in black carbon soot may have caused as much positive forcing as sulphate aerosols have caused negative forcing. Even the sign of the net aerosol forcing is uncertain.
Correlations of cosmic rays with climate show that the solar forcing changes are much greater than the total solar irradiance (TSI) changes, as now admitted in Chapter 7 of the SOD. An analysis of OHC changes over the solar cycles shows there is a sevenfold amplification of the warming effect from TSI.
http://www.friendsofscience.org/index.php?id=425
The ΔT in your equation for ECS should be just the portion of the temperature change that was caused by CO2. The greenhouse forcing is not just CO2; if you assume all GHG forcing is CO2 you will overestimate the climate sensitivity to doubled CO2. I suggest you assume the solar forcing is seven times that of TSI, subtract that from the CMIP5 forcing numbers, and recalculate the ECS.
Airborne aerosols have been measured by satellites since 1980. Climate Explorer shows the TOMS aerosol index has declined from 1.43 in 1992 to 0.15 in 2001. The lack of warming since 1998 can’t be blamed on aerosols.
Only changes in greenhouse gases can cause a change in the greenhouse effect, defined as the temperature difference between the average surface temperature and the effective radiating temperature of the earth, which is determined from the outgoing longwave radiation. Natural causes of climate change do not change the greenhouse effect. An analysis of the change in the greenhouse effect shows the climate sensitivity is about 0.4 C per double CO2.
http://www.friendsofscience.org/index.php?id=533
The first paragraph says, “In this post I focus on ECS and simply assume that GCM models are a correct description of climate.” The lack of warming over the last 16 years proved that the GCM models are NOT a correct description of climate. I suggest you add a comment to the post that this determination of climate sensitivity assumes that all 20th-century warming was due to CO2 and denies natural climate change. The effects from volcanoes are too short to significantly affect climate trends.
I agree. Climate models are fudging aerosols in order to better fit past temperature data. The resultant “forcings” vary all over the place.
Solar induced variations in cosmic ray intensity could also be a major driver in climate through cloud seeding. Hopefully the CERN experiment will report new results soon.
However, there is a huge bandwagon of political capital behind the CAGW narrative, and we know there is a basic physics case for some enhanced greenhouse effect from increasing CO2. The objective was to demonstrate that even assuming the worst case, the data now point to a maximum temperature rise of ~2C, which is fairly benign. It is time to call a halt to damaging measures to “de-carbonise” too rapidly.
I suspect you are right and that as temperatures fail to rise over the next 10 years some rationality will be restored.
In a few recent conversations with press contacts it has become clear that there is confusion regarding climate sensitivity. More specifically, the confusion is about equilibrium climate sensitivity vs. transient climate response. I will try to set the record straight here.
I’m unable to follow this as I got stuck right at the beginning.
1. How do you get lambda = 0.3? Using T = 288 K and sigma = 5.67E-8 in your formula for the “stabilization term” lambda = 1/(4*sigma*T^3), I get lambda = 0.185. Have I miscalculated something?
2. Next you say “So for the no feedback case this results in a “climate sensitivity” ~ 1.1C.” What’s with your “so”? If this is supposed to be something to do with CO2, what property of CO2 are you assuming here, and what numbers are you using to justify your claim of 1.1?
Sorry, you are right. There is a factor ε (the emissivity) missing, which reduces the Boltzmann term. For a doubling of CO2 we get about 3.5 W/m2, offset by an increase in IR from the surface if the temperature rises by 1C, assuming no feedbacks.
Emissivity of what? If of the Earth’s surface: supposedly only 6% of Earth’s OLR to space originates at the surface, so even if you used an emissivity of 0 this would only increase lambda by 1/(1 − 0.06), giving lambda = 0.197.
If emissivity of clouds and greenhouse gases, how do you define each of those?
It seems to me that the formula dT = lambda dS is far from simple as soon as you get into the physics supposedly justifying it.
Your 3.5 W/m2 figure is close to the IPCC’s conventional choice of 3.7 W/m2. However that 3.7 number is truly cloaked in mystery, and I seriously doubt that any derivation of it is based on sound physics. The HITRAN tables are not the problem here, they’re fine, the problem is with the details of how that information is applied.
The emissivity is the total average value over the Earth as seen from outer space. When you average over oceans and land, and especially over the 68% coverage of clouds, the average figure is more like 0.6. This then gives a ballpark figure for lambda of 0.3.
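Plugging those numbers in – a sketch of the ballpark calculation, taking ε = 0.6 as the assumed effective emissivity:

```python
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W/m^2/K^4
T, EPS = 288.0, 0.6  # mean surface temperature (K), effective emissivity

lam = 1.0 / (4.0 * EPS * SIGMA * T ** 3)  # stabilization term, K per W/m^2
no_feedback_ecs = lam * 3.7               # no-feedback response to doubled CO2
print(f"lambda ~ {lam:.2f} K/(W/m^2), no-feedback sensitivity ~ {no_feedback_ecs:.1f} C")
```

This recovers both the lambda ≈ 0.3 ballpark figure and the ~1.1C no-feedback sensitivity quoted in the post.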
The magic 3.7 W/m2 “forcing” for a doubling of CO2 comes from the formula ΔS = 5.3 ln(C/C0), an approximation derived from radiative transfer models. I have tried to derive it here: https://clivebest.com/blog/?p=4697
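Setting C = 2·C0 in that formula gives the doubling forcing directly (a one-line check):

```python
import math

# dS = 5.3 * ln(C/C0) with C = 2*C0 reduces to 5.3 * ln(2)
dS_doubling = 5.3 * math.log(2.0)
print(f"forcing for doubled CO2 ~ {dS_doubling:.2f} W/m^2")
```

The result comes out just under the conventional 3.7 W/m2, so the coefficient 5.3 is a slightly low rounding of the usual value.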
Thanks, Clive, that all makes things much clearer.
Which species of CO2 did you take into account? All the ones present in the atmosphere? Thanks to the logarithmic law, even trace species can have a big impact on sensitivity.
I don’t see where you took latitude into account. Presumably lower latitudes lose much more heat to space than higher, but on the other hand some low-latitude heat is transferred to high-, that is, the high latitudes contribute to the cooling of the low, besides radiation to space. Those effects shouldn’t need a full-blown GCM for a first approximation.