# The strange case of TCR and ECS

In this post we consider the strange coincidence that the net forcing used by CMIP5 models is essentially the same as CO2 forcing alone. This allows us to derive a value of TCR (Transient Climate Response) from observational data alone. Measuring ECS (Equilibrium Climate Sensitivity), however, requires modeling information. We use the average CMIP5 forcing and a model-derived “hysteresis” function to determine ECS from temperature data. The resulting energy imbalance calculated using these values of ECS and TCR is found to be the same as that derived by other methods.

The net climate forcing is mainly due to changes in anthropogenic GHGs and in aerosols. Something like 20-40% of aerosols are of anthropogenic origin. Aerosols have 3 main effects:

1. They scatter incoming solar radiation, cooling the earth.
2. They (e.g. black carbon) absorb both incoming solar radiation and surface IR radiation – net warming effect.
3. They help seed cloud formation – net cooling effect.

The energy imbalance is $Q = F -\lambda\Delta{T}$, where $\lambda$ is the climate feedback parameter. Models trade off aerosols against climate sensitivity to match observed temperatures: aerosols are essentially the tuning parameter that matches GCM hindcasts to past surface temperatures. AR5 admits “low confidence” in the aerosol-cloud interaction, and the estimated uncertainties allow the net effect of aerosols to be close to zero. However, if aerosol forcing is reduced then model sensitivities would be far too high. I argued in the previous post that climate sensitivity should be defined as the measured temperature change for a measured doubling of CO2. Instead the IPCC has defined it as the simulated change in temperature due to CO2 forcing alone, excluding other GHGs and aerosols. The remarkable fact is that it doesn’t matter: for the models to agree with observed temperature rises since 1850 there is a near perfect cancellation between the other GHGs and aerosols. Figure 1 shows the net CMIP5 forcings compared to a CO2-only forcing.

Fig1: Comparison of a pure CO2 GHG forcing and the CMIP5 averaged forcings used to hindcast past temperature dependency since 1850.

So it doesn’t really matter whether we use GCMs to derive a value for TCR or simply fit the temperature data instead. Let’s do that and derive a value for TCR using

$\Delta{T} = \lambda\Delta{S}$  and
$\Delta{S} = 5.3\ln(CO2/290)$, where 290 ppm is the CO2 concentration in 1850,
so $\Delta{T} = 5.3\,\lambda\ln(CO2/290)$

For CO2 I take the Mauna Loa data, smoothly interpolated back to a value of 290 ppm in 1850. We then fit the temperature data to a ln(CO2/290) term. The result is shown below.

Fig 2: A fit of the temperature anomaly data to $\lambda \times 5.3\ln(CO2/290)$

This gives a climate response value $\lambda$ = 0.47 degC per W/m2, therefore

TCR = $\lambda \times 3.7$ = 1.7C
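The fit described above can be sketched in a few lines of Python. This is illustrative only, not the script used in the post: the CO2 and temperature series below are synthetic stand-ins for the Mauna Loa and HadCRUT4 data, generated with an assumed $\lambda$ of 0.47 so that the recovery of the parameter can be checked.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1850, 2014)
# Synthetic CO2 series: ~290 ppm in 1850 rising to ~400 ppm in 2013.
co2 = 290.0 + 110.0 * ((years - 1850) / 163.0) ** 2
# Synthetic "observed" anomalies built with an assumed lambda = 0.47 plus noise.
true_lam = 0.47
temp = true_lam * 5.3 * np.log(co2 / 290.0) + rng.normal(0.0, 0.1, years.size)

# One-parameter least-squares fit of dT = lambda * x, x = 5.3 ln(CO2/290):
# lambda = sum(x * T) / sum(x * x)
x = 5.3 * np.log(co2 / 290.0)
lam_fit = np.sum(x * temp) / np.sum(x * x)
tcr = lam_fit * 3.7   # TCR for a CO2 doubling, taking F_2x ~ 3.7 W/m2
print(f"lambda = {lam_fit:.2f} degC/(W/m2), TCR = {tcr:.2f} C")
```

With the real data one would simply substitute the observed series for `co2` and `temp`; the fit itself is the same one-parameter regression.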

A fit to the temperature data which includes a 60-year natural oscillation, possibly linked to the AMO (see the recent post by Bob Tisdale), gives a slightly different result.

Fig 3: Overall fit to 164 years of global temperature data (HADCRUT4)

Part of the rapid warming from 1970 to 2000 can be seen as potentially due to the upturn in this oscillation. The CO2 component now has a lower climate response, $\lambda$ = 0.41 degC per W/m2, and

TCR = $\lambda \times 3.7$ = 1.5C

Figure 1 shows that the effective average forcing from all CMIP5 models has essentially been the same as that from CO2 forcing alone. This means we can derive TCR, as defined by the IPCC, through this coincidence, and the result holds even if the ratio of aerosols to other GHGs were to change in the future, since the derivation depends only on past forcing. These two analyses essentially measure a value of

TCR = 1.6 ± 0.2 C, where the error is an estimate of the spread in fits.

Equilibrium Climate Sensitivity (ECS)

“ECS is defined as the change in global mean temperature, T2x, that results when the climate system, or a climate model, attains a new equilibrium with the forcing change F2x resulting from a doubling of the atmospheric CO2 concentration.” It is the temperature reached after the earth has restored energy balance following a doubling of CO2. The observed global temperatures since 1850 are instantaneous measurements taken while the earth is still warming. The delay arises because the oceans have a huge thermal capacity. One way to estimate ECS is to “measure” the change in ocean heat content $\Delta{Q}$. Then

$ECS = \frac{F_{2x}\Delta{T}}{\Delta{F}-\Delta{Q}}$

However there is another way: “measure” instead the response of the earth to a sudden increase in forcing. I used an old GISS model to measure that inertia from a run which instantaneously doubles CO2. The temperature response is shown in Figure 4, where the red curve is a fit I made to a $(1-e^{\frac{-t}{\tau}})$ term.

Fig 4: Temperature response curve from a pulse doubling of CO2 in 1958, with the fit described in the text

$T(t) = T_0 + \Delta{T_0}(1-e^\frac{-t}{15})$

where $\Delta{T}_{0}$ is the equilibrium temperature response to a change in forcing $\Delta{S}$ and the time constant is 15 years. This provides a method to derive ECS from the temperature data once the net forcing is known.
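As a minimal sketch (not the author's code), the fitted step response above can be written as a small function, with $\tau$ = 15 years taken from the GISS doubling run:

```python
import numpy as np

def step_response(t, dT0, tau=15.0):
    """Warming realised t years after a forcing step with equilibrium response dT0."""
    return dT0 * (1.0 - np.exp(-t / tau))

# Fraction of the equilibrium warming realised 0, 15, 45 and 150 years after a pulse.
t = np.array([0.0, 15.0, 45.0, 150.0])
frac = step_response(t, dT0=1.0)
```

After one time constant (15 years) roughly 63% of the equilibrium response has been realised, which is why recent forcing contributes much less to current temperatures than forcing from decades ago.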

To calculate the CO2 forcing I take a yearly increment of

$\Delta{S} = 5.3 \ln(\frac{C}{C_0})$, where $C_0$ and $C$ are the concentrations before and after the yearly pulse. All values are calculated from seasonally averaged Mauna Loa data smoothed back to an initial concentration of 280 ppm in 1750.

Each pulse is tracked through time and integrated into the overall transient temperature change using:

$\Delta{T}(t) = \sum_{k=1}^N \Delta{T}_{0,k}\,(1 - e^{\frac{-(t-t_k)}{15}})$, where $t_k$ is the year of pulse $k$

$\Delta{T}_{0}$ was calculated for different values of ECS. The results are compared to the observed HadCRUT4 anomaly measurements below. The publication of the AR5 report allows us to update the CMIP5 forcings up to 2013.
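The pulse-integration scheme can be sketched as follows. Again this is illustrative rather than the code linked in the post, and the CO2 series is a synthetic stand-in for the smoothed Mauna Loa data; each yearly increment $\Delta S_k = 5.3\ln(C_k/C_{k-1})$ is scaled to its equilibrium response $(ECS/3.7)\,\Delta S_k$ and relaxed in with a 15-year time constant:

```python
import numpy as np

def transient_temperature(co2, ecs, tau=15.0, f2x=3.7):
    """Integrate the delayed response of every yearly CO2 forcing pulse."""
    dS = 5.3 * np.log(co2[1:] / co2[:-1])   # yearly forcing increments dS_k
    dT0 = (ecs / f2x) * dS                  # equilibrium response of each pulse
    n = co2.size
    t = np.arange(n, dtype=float)
    dT = np.zeros(n)
    # Each pulse at year t_k contributes dT0_k * (1 - exp(-(t - t_k)/tau)) for t >= t_k.
    for tk, d in zip(t[1:], dT0):
        lag = t - tk
        dT += np.where(lag >= 0.0, d * (1.0 - np.exp(-lag / tau)), 0.0)
    return dT

years = np.arange(1850, 2014)
co2 = 290.0 + 110.0 * ((years - 1850) / 163.0) ** 2   # synthetic stand-in series
dT = transient_temperature(co2, ecs=2.3)
```

The transient temperature at the end of the series necessarily falls short of the full equilibrium response $(ECS/3.7) \times 5.3\ln(C_{2013}/C_{1850})$, because the most recent pulses have not yet relaxed in; this shortfall is exactly the "hysteresis" being exploited to separate ECS from TCR.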

The forcing data were extended from 2005 so as to agree with the AR5 values – the black curve. The final net forcing in 2013 is 2.2 W/m2. The code that calculates the temperature for different values of ECS is available here. The figure below shows the temperature response calculated from the model using the AR5 forcing for different values of ECS.

Comparison of HadCRUT4 to ECS values ranging from 1.5-4C. The thick black line is the 5-year running average of the anomaly data.

Looking in more detail at recent temperatures, where the cumulative effect of past forcing is strongest, we can see how unusual the current hiatus in warming appears.

Detailed comparison of the ECS curves with HadCRUT4 temperature anomalies since 1960. The thick black line is an FFT smoothing of the temperature anomaly data with a 5-year filter.

Values of ECS > 3 or ECS < 2 are ruled out by the data. The most likely value of ECS consistent with the recent data is slightly less than 2.5C. The longer the hiatus continues, the lower the estimate for ECS becomes.

The overall result from this analysis is ECS = 2.3 ± 0.5C. The error is asymmetric: more like +0.5 and -0.3.

Let’s see whether all this is consistent with the value of TCR that we measured before, and isolate the energy imbalance $\Delta{Q}$:

$\frac{ECS}{TCR} = \frac{\Delta{F}}{\Delta{F}-\Delta{Q}}$

$\Delta{Q} = \Delta{F}(1 - \frac{TCR}{ECS})$

= 0.7 ± 0.5 W/m2

This is consistent with other values for $\Delta{Q}$.
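The arithmetic is easy to verify with the values derived above ($\Delta F$ = 2.2 W/m2, TCR = 1.6C, ECS = 2.3C):

```python
# Energy imbalance from dQ = dF * (1 - TCR/ECS), using the central values in the post.
TCR, ECS, dF = 1.6, 2.3, 2.2
dQ = dF * (1.0 - TCR / ECS)
print(f"dQ = {dQ:.1f} W/m2")   # ~0.7 W/m2
```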

In summary, we have shown a remarkable approximate agreement between pure CO2 forcing and the net CMIP5 forcing. This has allowed us to fit the HadCRUT4 temperature anomaly data to derive a value of TCR = 1.6 ± 0.2C. The equilibrium climate sensitivity has been measured using a model-derived ocean temperature response to forcing of the form $\Delta{T}(t) = \sum_{k=1}^N \Delta{T}_{0,k}(1 - e^{\frac{-(t-t_k)}{15}})$. By integrating each annual pulse of the CMIP5 model forcings, we have compared different values of ECS to the HadCRUT4 anomaly data. This hysteresis effect becomes stronger over time, so the current hiatus in warming strongly discriminates between different values of ECS: values greater than 3C are ruled out, as are values less than 2C. The best estimate for ECS from this method is 2.3 ± 0.5C. These measured values of TCR and ECS correspond to a total net forcing of 2.2 W/m2 with an energy imbalance of 0.7 ± 0.5 W/m2.


### 23 Responses to The strange case of TCR and ECS

1. Roger Andrews says:

Clive:

As you note, the IPCC defines transient climate response as “the average temperature response over a twenty-year period centered at CO2 doubling in a transient simulation with CO2 increasing at 1% per year”.

But the average temperature response of what? Land surface air temperatures? Lower troposphere temperatures? Ocean surface temperatures? They’re all different. And why an average over 20 years? Why not five years, or thirty? And why a CO2 increase of 1% per year? Would we expect a different TCR at 0.5% per year, or 2% per year?

The IPCC defines equilibrium climate sensitivity as “the change in global mean temperature that results when the climate system, or a climate model, attains a new equilibrium with the forcing change, resulting from a doubling of the atmospheric CO2 concentration.” Equilibrium is supposedly reached after all the surplus CO2-generated heat stored in the oceans has been released back to the atmosphere, which takes centuries according to the IPCC. But because downwelling long wave radiation penetrates only a few microns into the sea surface there are questions as to whether CO2 adds any significant amount of heat to the ocean. As far as I’m aware the main line of evidence that it does is that the climate models say it does.

And once again, which particular version of “global mean temperature” are we talking about?

The whole concept of TCR and ECS is hokey. As shown in the graph I posted on your earlier TCR thread (link below for reference) the CMIP5 RCP8.5 surface air temperature simulations can be fitted almost exactly over a period of 200 years, which is quite long enough for prediction purposes, using a constant climate sensitivity of 2.2C. No TCR or ECS necessary. One size fits all.

http://oi57.tinypic.com/25alwl0.jpg

One size fits all for the ocean surface temperature simulations too, except that the size is different (1.5C vs. 2.2C)

The there’s the question of HadCRUT4, which you (and most others) use to define “global mean temperature”. I don’t want to belabor the point, but HadCRUT4 is an area-weighted average of surface air temperatures over land and SSTs over the oceans, and to get an apples-to-apples comparison you have to area-weight modeled surface air temperatures in land areas and modeled ocean surface temperatures in ocean areas in the same way as HadCRUT4 weights them (which, incidentally, is what the IPCC does in the AR5 graphs). The correct way to do it is to compare modeled land surface air temperatures separately with CRUTEM4 and modeled ocean surface temperatures separately with HadSST3, the two series that are averaged together to construct HadCRUT4, and here’s what we get when we do this.

http://tallbloke.files.wordpress.com/2013/02/image2.png?w=614&h=368

http://tallbloke.files.wordpress.com/2013/02/fig3.png?w=614&h=368

Modeled surface air temperatures don’t show the current warming plateau but otherwise the match with CRUTEM4 isn’t too bad. But the models still can’t replicate HadSST3 ocean surface temperatures despite the massive “corrections” that have been applied to the raw SST data to construct this series (the match with unadjusted ICOADS SST is far worse) If this is the closest the models can come to hindcasting observed temperatures in the oceans, where ninety-nine-point-something percent of the heat in the atmosphere and the oceans is stored, then we’re unlikely to get any meaningful results when we use them to estimate climate sensitivity.

2. Clive Best says:

Roger,

I assume they choose a 20 year period so as to average over “natural variations”. A more cynical explanation would be that they are covering their backsides. Presumably the response is measured by HadCRUT4 – the official global average temperature anomaly, which is supposed to cover land and ocean. I also guess that TCR only has any meaning if CO2 is increased slowly – so they take 1% arbitrarily.

In reality TCR must be mainly caused by past warming and not current CO2 levels. A CO2 pulse released in 1850 must have by now finished its contribution to current temperatures whereas this year’s increase has yet to change surface temperatures much at all.

This gets even worse for untangling ECS. When I do that by integrating yearly pulses of CO2 and then comparing different values of ECS to H4 data then I get a value for ECS = 2.3. This agrees with your value. This low value is greatly influenced by the fact that there has been no warming for 15 years. This pause is very hard to understand if all that stored heat energy in the oceans has still to come out – where is it ?

Your approach of comparing CRUTEM4 and HadSST3 separately is interesting. Since oceans are 70% of the earth’s surface they dominate OLR compared to land. However the land warms faster. It would be interesting to compare changes in OLR to see if land alone could balance the total increase in CO2 forcing. An increased GHE depends on an energy imbalance.

cheers.

• Roger Andrews says:

Hi Clive:

First re your comment: “In reality TCR must be mainly caused by past warming and not current CO2 levels. A CO2 pulse released in 1850 must have by now finished its contribution to current temperatures whereas this year’s increase has yet to change surface temperatures much at all.”

I don’t think past warming comes into it. The radiative forcing impacts of CO2 are instantaneous. Any heat that CO2 prevents from escaping to space is immediately present somewhere in the Earth’s climate system. It doesn’t hang around in some other form and then re-emerge as heat at some later time, or at least not so far as I know.

The impacts are delayed only if CO2-induced heat gets stored in the oceans and slowly released back to the atmosphere after CO2 levels have dropped or stabilized, which is what ECS is all about. I think the evidence that this happens is so weak that it’s not worth consideration, but it sure bumps up the IPCC’s temperature predictions.

Residence time also isn’t a consideration. It doesn’t matter how long the CO2 has been in the atmosphere. The concentration at the time is what counts. I’m pretty sure that the RCP scenarios, which estimate CO2 concentrations from emissions scenarios, also take residence time into account.

Bottom line? If we ignore the speculative impacts of delayed heat releases from the ocean – and the only way of quantifying them is with simulations obtained from climate models that don’t fit observations – there is only one value of climate sensitivity, call it what you will, and we can measure it over any period we like using the delta T and delta CO2 values over this period and watts/sq m = 5.35 ln(C2/C1).

Assuming, that is, that the delta T was caused by CO2.

Second on your question: “This pause is very hard to understand if all that stored heat energy in the oceans has still to come out – where is it?”

Well, it’s gone. It entered the oceans during the rapid increase in solar activity between 1910 and 1950 and left them during the sequence of El Niño/La Niña events between 1976 and 1998, generating a stair-step pattern in the SST, SAT and TLT records that accounts for all of the warming over that period. However, nothing happened during the 2009/2010 El Niño, indicating that the 1997/98 El Niño removed the last of the stored heat. This is why there’s been no warming since then.

Finally on HadCRUT4. It combines air temperature readings taken about 5 feet above the land surface with SST readings taken anywhere between a few inches and fifty feet below the ocean surface. If we assume a 30/70 split then the average HadCRUT4 reading was taken maybe a foot or two below the ocean surface in a medium consisting of 70% sea water and 30% air bubbles.

• Clive Best says:

Roger,

Yes, sorry for the sloppy language. Some portion of the CO2 forcing enters the ocean as heat, which only slowly increases temperatures. However this also affects TCR. Otherwise I agree that there is only one climate sensitivity: the observed response to CO2, assuming all natural variations are averaged out. ECS is a model-only concept which depends on delayed warming through ocean heat, and which evidently the models got wrong since 2000.

Yes, the residence time of CO2 is irrelevant. However, suppose that a step increase in CO2 of 10 ppm occurred in 1900, inducing a certain heat uptake by the ocean, and then CO2 levels remained constant. That heat would by now have warmed the surface to restore energy balance. That is what I am saying. The fact that annual CO2 emissions have increased to reach a total of 400 ppm just means that the energy imbalance has slightly increased. However, it has increased far less than if the full 120 ppm of CO2 had been dumped in one go 10 years ago.

3. A C Osborn says:

How can you possibly do anything with the garbage that is supposed to be the Temperature Record, it has been so mangled over the last 15 years that it bears no resemblance to the original temperature records prior to 2000?
Next how can you do anything with any change in temperature when you do not know how much contribution is made by Solar, Cloud and all the other Dynamics in the system?
Having done the “Cloud” analysis that you did how can you still contemplate trying to calculate the TCR and ECS?
Does CO2 now control Clouds as well?

• Roger Andrews says:

“How can you possibly do anything with the garbage that is supposed to be the Temperature Record, it has been so mangled over the last 15 years that it bears no resemblance to the original temperature records prior to 2000?”

Couldn’t have put it better myself

Except that the mangling actually began over thirty years ago in 1982, when Folland, Parker & Kates applied the first “bias corrections” to the raw ICOADS SST record. The latest incarnation of these “corrections” is to be found in HadSST3, and here’s what they look like (black line):

http://oi62.tinypic.com/2hyaza0.jpg

Rather large, huh?

And are there any large biases in the raw ICOADS data? Probably not.

Yet ~70% of HadCRUT4, the world’s “official” global surface temperature series, is contributed by HadSST3, and everyone accepts HadCRUT4 as gospel.

4. Clive Best says:

Fair point. The “corrections” applied to temperature data have not been properly accounted for. It is clear that urbanisation and land use change over the last 100 years must have had an effect as well. However, I have checked that the area averaging of the “anomalies” does result in HadCRUT4. The IPCC assume solar is negligible and clouds are a feedback to AGW. If, as I suspect, clouds are an independent forcing on climate related to solar activity, then this explains both the enhanced warming from 1970 to 2000 and the current pause. Underlying all this is a real anthropogenic CO2 effect which is likely < 2C by the end of this century even without curbing emissions.

The rest is politics !

5. pdtillman says:

http://tinypic.com/view.php?pic=2hyaza0&s=8#.UyzJVV50Ey4

That’s a striking image. Do you have a published source?

It does look like one should start with the raw data, as the “correction” factors are murky at best, and multiple (potential) conflicts of interest + confirmation bias problems are all over the place. What a mess.

Cheers — Pete Tillman
Professional geologist, amateur climatologist

6. Doug Cotton says:

As I have said, there is no “forcing” at all by any so-called greenhouse gas. It is the gravito-thermal effect which sets up temperature gradients, which then determine surface temperatures and even temperatures below any planetary surface.

In my 10 minute video I talk about the need for greater understanding of physics – not just memorising of equations and laws. This emphasis has helped my university students ever since the 1960′s, and of course I have engaged in additional post graduate level study, especially in thermodynamics and the physics of radiative heat transfer, which is also seriously misunderstood, but explained in my paper “Radiated Energy and the Second Law of Thermodynamics.”

The Second Law “holds everything together” in that it applies to all forms of energy transfer, not just heat transfer. There can be only one state of thermodynamic equilibrium. There are not other processes that then determine density and pressure gradients, because these also have to be involved in the mechanical equilibrium which is an integral part of thermodynamic equilibrium. (Thermal equilibrium is far less all embracing. There are separate pages on Wikipedia which you could read to understand the differences.)

Now, what is happening in all planetary tropospheres is that they evolve spontaneously towards thermodynamic equilibrium. Of course they never quite get there due to weather conditions, but the important issue is that they evolve towards it with entropy increasing, never decreasing.

As tropospheres evolve towards thermodynamic equilibrium, gravity is setting up not only a density gradient, but also a thermal gradient. After all, there are just molecules up there and all they “know” is that gravity is pulling them downwards. But they have sufficient kinetic energy to “bounce” back upwards, even continuing to transfer kinetic energy when they are near the top of the troposphere. While ever they have any temperature (above 0K) they have kinetic energy.

As gravity is forming the density and temperature gradients, pressure gradients follow as a corollary, because pressure is proportional to the product of density and temperature, where temperature is proportional to the mean kinetic energy per molecule.

Pressure is caused by molecules striking a boundary (or surface) and so it doubles when density doubles and it also doubles when absolute (K) temperature doubles. But the important thing to remember is that pressure is the result of changes in density and temperature – it is not the cause of temperature changes, because it does not supply energy. High pressure at the base of a troposphere is not what is maintaining the high temperature – it’s the other way around. Nor does extra density necessarily increase the mean kinetic energy per molecule upon which temperature depends.

7. Diogenes says:

Clive

I would like to invite you to enter the debate on Bishop Hill on whether sensitivity is meaningful. Your choice of course….

http://www.bishop-hill.net/discussion/post/2314648?currentPage=2

8. Ken Gregory says:

I argued in the previous post that climate sensitivity should be defined as the measured temperature change for a measured doubling of CO2.

This is completely wrong. This falsely assumes that all climate change is due to greenhouse gas emissions, and that there are no natural climate forcings. The lack of warming over the last 16 years proves that natural climate forcing is very strong as it nullified the AGW effect. There are two types of climate forcings; shortwave forcings and longwave forcings.

Increases of anthropogenic greenhouse gases are longwave forcings. They increase the temperature difference between the top-of-atmosphere effective radiation temperature Te and the surface temperature Ts. The temperature difference, Ts – Te, is called the greenhouse effect, and usually given as 33 C. This forcing is well known.

Natural climate change causes a shortwave forcing and is unknown. But natural forcing is very significant because it caused most of the warming from 1910 to 1945 and much of the cooling from 1945 to 1975. Natural climate change totally counteracted the AGW forcings over the last 16 years, resulting in no significant air temperature change. Longer term, the well documented Holocene Climate Optimum, Roman Warm Period, Dark Age Cold Period, Medieval Warm Period and Little Ice Age all resulted from shortwave natural forcings. These forcings, resulting from some combination of solar-induced cloud changes, TSI and natural ocean oscillations, do not directly change the greenhouse effect (Ts – Te) except via a water vapor feedback. That is, a natural change in albedo or TSI changes Ts and Te equally.

The transient climate response (TCR) is defined as TCR = F2xCO2 x (dT/dF), where the dT must be the change in temperature resulting from the dF. The only dF that can be calculated is the anthropogenic greenhouse gas forcing (dFghg), so the dT must be ONLY that portion of the measured temperature change resulting from the change in greenhouse gas forcing.

However, since greenhouse gas forcing is the only forcing that changes the greenhouse effect Ts – Te without feedbacks, we can calculate the TCR by using the ratio (d(Ts – Te)/dFghg) instead of (dT/dF), during a period when there was no net feedback change.

The 13.3 years of CERES data fortunately fall within the period of no net temperature change (from HadCRUT), therefore there was no net feedback change during this period. [Feedback is defined as a change in a climate parameter resulting from a temperature change, so no temperature change means no net feedback.] The shortwave forcing, which is equal and opposite to the GHG forcing, caused no feedback response and did not affect the Ts – Te difference.

The CERES outgoing longwave radiation data determine Te, and HadCRUT4 can be used for Ts. The CERES data show that TCR = 0.74 +/- 0.54 C using HadCRUT4 data at 95% confidence, as shown:
http://www.friendsofscience.org/index.php?id=739

This is only 46% of the 1.6 C that you calculated. You assumed there was no natural climate forcing, and I make no assumption about natural climate forcing.

While the IPCC defines TCR as the response when CO2 increases at 1% per year, the response at the actual CO2 increase rate is not much different. But if you want to adjust the TCR definition to be more useful, use the actual measure CO2 increase rate.

• Clive Best says:

Ken,

I completely agree with you. I was just trying to extricate us from the trap set by the IPCC in their definition of TCR – please also read the previous post.

The IPCC definition of TCR as it stands is an unmeasurable quantity. Natural variation acts over periods of decades whether it is due to PDO/AMO, due to solar variability or both. It is likely that natural variability is currently in a negative phase resulting in the hiatus in warming.

What we need to do is place experiment back in the prime role like any normal branch of physics and NOT the inverse. TCR should therefore be defined as the measured response to CO2 which includes both anthropogenic and natural effects. Models should then be developed whose goal is to explain the measurements and then make predictions which can then be tested by experiment.

I have been told by Ed Hawkins that climate science is an observational science like astronomy. This week we have experimental evidence for a theoretical prediction of gravity waves caused by inflation after the big bang.

That is the way science should work !

• blueice2hotsea says:

Ken -

FYI – there’s a dead link in your CS document. The Mauna Loa CO2 data file (co2_mm_mlo.txt) is here.

9. Doug Cotton says:

You can’t calculate sensitivity to backradiation because the truth is that the Radiative Greenhouse is smashed by radiation itself:

There is no two-way radiation involved when a black metal disc just under the surface of water is receiving solar radiation from the Sun. Its temperature is raised by the hotter Sun. Its temperature is not raised by back radiation from a colder atmosphere, because that would violate the Second Law of Thermodynamics.

Back radiation does not melt frost in the shade of a tree, but the Sun would if you cut down the tree. But the IPCC and NASA claim that the intensity of back radiation is greater than that of solar radiation reaching the surface.

Every one-way transition of radiation is a completed, independent process which must (on its own) obey the Second Law. To claim that there is some net reverse process (such as the black disc warming the water which then evaporates and, days later, releases energy when it rains, is absurd. How can the first process of one-way radiation “know” that will happen in the future? What does happen is that the back radiation is pseudo scattered with each photon resonating and only ever temporarily raising electron energy (between quantum energy states) in the first molecule it strikes. That electron energy is not thermal energy which takes the form of kinetic energy mostly in the far heavier neutrons and protons. In other words, the energy never gets from the electrons to the nucleus.

So here’s how to get energy from back radiation:

Build a model toy train. Place a black disc under water in the tender (coal car) and, at night, the back radiation will warm the black disc (being still as intense as solar radiation in the day) and the water will boil and thus be able to be used to drive a miniature steam engine that makes the train go around, and around, and around .. the track.

You could make a fortune patenting this process scaled up to light up a city at night. /sarc

But, until you do, I’ll rest my case.

10. Doug Cotton says:

This week the March temperature data appeared here for example. As I predicted in August 2011, this year (2014) should see the rate of cooling increase a little, but there will be about half a degree of warming between about 2029 and 2059. The expected 500 years of long term cooling will probably start before the year 2100.

So why are we in the middle of a 30 year period of slight net cooling? Because natural cycles control climate – not mankind.

Standard physics tells us why carbon dioxide has no warming effect and water vapour has a significant cooling effect, because it reduces the thermal gradient and thus lowers the supporting temperature at the base of the troposphere.

The Ranque-Hilsch vortex tube confirms what physics tells us, namely that the force of gravity produces a state wherein the maximum entropy (at thermodynamic equilibrium) has both a density gradient and a temperature gradient, because of the effect of gravity acting on molecules when they are in free path motion between collisions.

Hence the whole greenhouse conjecture starts out from an assumption that the Second Law of Thermodynamics can be ignored, and so (they think) isothermal conditions would apply if you removed all the “pollutants” like water gas, droplets and vapour, carbon dioxide and its colleagues from the atmosphere.

• Clive Best says:

Your assumption is that gravity alone generates a lapse rate on planets. However, another condition is needed: it is only because GHGs can radiate heat away from the top of the atmosphere that the lapse rate can be maintained. It needs a heat engine to maintain the heat flow – solar heating at the surface, convective/latent heat flow upwards, and radiation out to space. The atmosphere is not in thermodynamic equilibrium. If you removed all GHGs from the atmosphere, the lapse rate would slowly collapse, with the tropopause moving to lower altitudes.

11. nuclear_is_good says:

http://clivebest.com/blog/?p=5694

and now you realize that aerosols are a major forcing (even if your tone on that matter is still infantile at best).

• Clive Best says:

Aerosols have 3 main effects:

1)They scatter incoming solar radiation cooling the earth.
2)They (e.g. black carbon) absorb both incoming solar radiation and surface IR radiation – net warming
3)They help seed clouds formation – net cooling effect.

Overall aerosols have a net cooling effect

Models trade off aerosols against Climate Sensitivity to match observed temperatures. Aerosols are essentially the tuning parameter that match GCMs in hindcasts to previous surface temperatures.

Ocean heat capacity is why ECS .ne. TCR

12. Doug Cotton says:

The same forcing as CO2 forcing? Like zero you mean? Doesn’t anyone think?

The greenhouse conjecture adds back radiation to solar radiation and uses the total to “explain” the surface temperature. But of course this is wrong. The original NASA net energy diagram showed only about 165W/m^2 entering the surface, but that gave a far too cold temperature in Stefan-Boltzmann (S-B) calculations, so it had to be nearly trebled with back radiation.

The problem is, no one should be using S-B and be expecting to get the right answer.

All that S-B calculations can be used for is the mean temperature of the whole Earth-plus-atmosphere system, and it does give about the right value when you deduct about 30% of incident solar radiation due to reflection, but retain about 20% that is absorbed by the atmosphere itself.

Now, the big problem with all this is that 70% of the real surface is a thin layer of transparent water, let’s say 1cm deep. If you were to use S-B calculations to determine the temperature of that layer, bear in mind that over 99% of incident solar radiation that is not reflected passes right through it. So you should only use 1% or less of the 165W/m^2 of solar radiation and thus get ridiculously low values. Back radiation doesn’t have a hope, because it does not even penetrate a hair’s width into that first 1cm of water, and if all its energy were converted to kinetic (thermal) energy in that hair’s breadth, it sure would be hot and evaporate rather quickly without warming anything else. The fact that it doesn’t, confirms what I wrote in my paper over two years ago about resonant (or pseudo) scattering.

So you see, you cannot explain Earth’s surface temperature with radiation calculations, for the simple reason that, like the Venus surface, it receives a significant amount of energy by non-radiative processes as is explained in the Amazon book “Why it’s not carbon dioxide after all” by yours truly.

• Clive Best says:

Actually the two things are related. The CO2 greenhouse effect depends on there being a lapse rate. The atmosphere is opaque to IR within the 15 micron band right up to the high troposphere, and at the main central quantum line it is opaque even into the stratosphere. Eventually the fall-off in density with height caused by gravity thins out the CO2 molecules enough that 15 micron photons can escape to space. This cools the top of the atmosphere, which together with H2O cooling maintains convection. The CO2 effect is actually quite small compared to H2O, because H2O covers more wavelengths and there is some direct cooling from the surface through the IR window. However, the net result is that CO2 in today’s atmosphere causes only about a 4C rise in surface temperatures compared with an atmosphere devoid of CO2. It is not that much, but it is still significant.

13. charles says:

IPCC versus Hottel et al.

IPCC’s radiative forcing: what is really true? Is there anybody who can present a scientific derivation, or a reference where this figure is properly calculated rather than copied or simply stated from assumptions?

http://www.john-daly.com/artifact.htm by Dr. Heinz Hug