Long-term temperature trends are best visualised on decadal timescales, because averaging over a decade smooths out short-term local effects caused, for example, by ENSO and other ocean oscillations. My method to achieve this in a consistent way uses an icosahedral grid where each equal-area cell is averaged over 10 years (see here). I have now updated my previous results to also cover the 2011-2020 decade. These results are shown below. If you click on each header, such as 2011-2020 West-Hemisphere, then you access the 3D icosahedral gridded temperature data directly. Simply click and spin the globe (thanks to Nick Stokes!)

The colour scale shows temperature differences relative to a baseline of 1961-1990.

The warming trend since 1970 appears to be linear as shown below.

Ten-year average temperatures based on spherical triangulation of GHCN4/HadSST4

Overall the world has warmed by about 0.8C starting around 1965. However, CO2 levels have if anything been rising faster than the temperature response.

Mauna Loa CO2 measurements.

The temperature response appears to be linear to an exponential increase in CO2 levels. Therefore there is no evidence of any positive climate feedback to CO2. Instead it is far more likely to be a negative feedback.

We need to eventually stop burning fossil fuels and find realistic alternatives, but there is no imminent “climate emergency” or “climate breakdown”.

*Thanks to Nick Stokes for the WebGL visualisation*


## About Clive Best

PhD High Energy Physics
Worked at CERN, Rutherford Lab, JET, JRC, OSVision

careful Clive, you'll be upsetting some powerful people…

In the middle of the Sahara there appears to be a hotspot surrounded by two cold spots. Does this make sense or is it just a data artefact?

I think it is probably an artefact. There are few reliable stations in central Africa and those that we have can miss years. I suspect those red spots are probably a single station. I only average over the years that there are data in each cell.

Good

Agree with the eventual elimination of FF, but not now. Do you have any comments on the van Wijngaarden & Happer paper that shows GH gases are saturated or close to saturation?

I looked at it. They use HITRAN to perform line-by-line integration over atmospheric greenhouse gases. Of course they agree that increasing CO2 warms the climate, but they calculate that extra methane has a negligible effect. I think they are probably right, as it is already saturated and at the end of the spectrum. Far too much fuss is made over increasing methane emissions, whereas CO2 is the key issue.

Hi Clive,

Why do you think far too much fuss is made over methane? 80x the GHG effect, and we seem to be undercounting it by at least 70%. Plus if you capture it you save money or fuel you would have lost. Prioritizing methane seems like a really good idea, imho, to avoid further warming that will trigger more positive feedbacks.

What am I missing? BR

Maybe this video will help to explain it: “Methane – the Irrelevant GreenHouse Gas” by Thomas Sheahen, from September 2022:

Great !

“The temperature response appears to be linear to an exponential increase in CO2 levels. ”

This is not surprising.

If the temperature increase was directly proportional to the increase in CO2, then the exponential increase in CO2 would produce an exponential temperature increase.

Remember the forcing equation.

ΔF = 5.35 ln(C/C0)

The increase in temperature with increasing CO2 is logarithmic. The rate of increase in temperature is less than the rate of increase in CO2.

Doubling CO2 from 280ppm to 560ppm produces a forcing of

5.35 ln(560/280) = 3.7 W/m^2

The next 280 ppm produces 5.35 ln(840/560) = 2.17 W/m^2.

In temperature terms the first 280ppm we add will produce about 3C warming. The second 280ppm will increase the temperature rise by another 1.75C to 4.75C.

This is better than 6C but still high enough to cause considerable damage.

Put the exponential increase in CO2 concentration (increasing slope) together with the logarithmic increase in forcing (decreasing slope) and you get the observed linear increase.
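The diminishing returns described above are easy to check numerically. A minimal sketch (mine, using the Myhre coefficient 5.35 quoted in the comment):

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Myhre et al. forcing approximation: dF = 5.35 * ln(C/C0), in W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Each successive 280 ppm increment adds less forcing (logarithmic response)
first_doubling = co2_forcing(560)                      # 280 -> 560 ppm
next_increment = co2_forcing(840) - co2_forcing(560)   # 560 -> 840 ppm
print(round(first_doubling, 2))   # ~3.71 W/m^2
print(round(next_increment, 2))   # ~2.17 W/m^2
```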

It would be nice to be optimistic, but the difference between exponential warming and linear warming is not going to be large enough to keep us out of trouble.

Yes I agree. The basic CO2 greenhouse effect leads to the warming shown below.

We must find alternative energy sources way sooner than a doubling of CO2 occurs, but it shows there is no “emergency” or “breakdown”.

@Clive Best I'm sorry, I'm confused. Could I ask some questions? I apologize, I think I know just enough to be a pain, I'm sorry. But:

1. How does the graph show there is no emergency or breakdown? Isn’t that a whole new set of inferences from temperature to effects on people and crops etc? Or is it safe to assume that warming less than say 1.5C will not create emergencies for significant numbers of people, flora, fauna.

2. Could you add uncertainties to the curve? Will uncertainties likely be large enough to change your confidence in no emergencies? (I realize I'm pushing you a little here, but I am a little concerned by the certainty of your statement.) So many variables and FBs…

3. Aren’t there some “tipping points”, if that is correct term even below 2C increase, that will amplify warming, or is that unlikely.

4. Is it appropriate to show estimated total CO2e, i.e. include estimated methane and other GHGs here? Would that indicate less of an emergency, since warming is linear right now? Or are they significant, since they cover different absorption bands.

Please pardon my ignorance and my pushiness. I mean no disrespect, but the no-emergency confidence seems so high, and sometimes I wonder if we lived in Bangladesh or Pakistan, would we be more alarmed than living in a richer country, higher above MSL? I know I'm being pushy again…

But your massive conclusion threw me, as did another of your comments, but I guess I expected more, um, detailed and caveated conclusions given the complexity and uncertainty.

How likely is it there will be NO emergencies, no breakdowns?

A third pushy note sorry: Since we in US produce 4x-5x what everyone else on earth does, shouldn’t we convert sooner. We are the driver of this.

Eg, https://earth.nullschool.net Mode Chem CO2sc 40N 73.5W – there's Exxon. Plus the midwest… Day after day, the worst CO2sc levels on earth. We better convert. Moore's law is pushing PV and battery prices down; wind is lagging.

I’m sick of mercury, and oil wars and MTBE, and lies and political distortion, and over time, fracking is going to pollute a lot of water, IMHO. Tick tick tick..

I’m sick of giving money and blood for Exxon, Saudis (e.g, 9/11), and Vlad the Impaler. Oh that refinery it went offline again? It always does that during a crisis like Putin’s war. Bad refinery, bad.

Thank you.

Sorry for the late reply/approval.

Basically one has to prove a positive (a climate emergency) rather than a negative (weather is normal). The term climate breakdown has no scientific meaning.

It is true that average temperatures seem to have risen by up to 1C since ~1950. Before that the uncertainty in global temperatures is larger than the difference since there is no consistent measurement of temperature.

There is an ecological crisis due to the growth in human population and its encroachment on the natural world, but that is a different narrative. Ideally humanity must learn to live in balance with the natural world by stabilizing population. Climate change is a symptom not a driver – hence the idiotic policy of renewable energy to support a population of up to 20 billion people.

Thanks for your reply. My post was not as focused as it ought to have been. My apologies.

I guess I don't know of any policy to support 20 billion people. And aren't the massive human GHG emissions overwhelmingly caused, per capita(!), by my country, the USA, in particular, and by wealth (sort of a power-law correlation, pardon my lack of precision), better than correlated with population? Words fail me, but basically isn't the US, per capita, polluting 4x to 5x as much?

Also, I still think uncertainties are needed. You had pointed out their importance earlier…

Thank you

The uncertainty on the recent decadal temperatures is very small ~0.05C

There is no climate “emergency” or breakdown. We have at least another 20 years to solve the energy problem.

There are no accepted “tipping” points. Yes summer ice cover in the arctic will slowly reduce but it won’t suddenly disappear. If tipping points really existed then life would not have survived for millions of years.

In my opinion only CO2 matters because about half our emissions remain in the atmosphere long term. Methane is a transient gas with a short lifetime.

We can blame the industrial revolution or whatever, but quality of life and life expectancy have never been higher. We need solutions which don't overturn quality of life.

Exponential growth in carbon dioxide is expected since world population and GDP are increasing exponentially with similar rates of growth. Conspicuously, atmospheric pressure decays exponentially to accommodate the vertical flux of CO2, matching the CO2 growth rate, and keeping the optical path length of IR linear. So, of course the temperature response is linear to the exponential increase in CO2 levels.

BTW, here is how the forcing equation was derived… http://www.globalwarmingequation.info/eqn%20derivation.pdf

Here is how I got there (although I got 6.6 rather than 5.35). I am surprised if you can derive it empirically.

https://clivebest.com/blog/?p=4697

Clive… excellent presentation!

Thank you for this simple and sobering analysis! What I think is even more intriguing is the fact that the increase has not accelerated over the last 2 decades despite several additional factors that should boost the increase on top of the rise in CO2:

1) A reduction in cloud cover

2) (partly related) A reduction in cooling aerosols, as highlighted in a recent paper from well-known scientists (J. Quaas, et al, "Robust evidence for reversal in the aerosol effective climate forcing trend", 2022. http://dx.doi.org/10.5194/acp-2022-295)

3) A record high rise in methane concentrations ("Methane concentrations in the atmosphere raced past 1,900 parts per billion last year, nearly triple pre-industrial levels, according to data released in January by the US National Oceanic and Atmospheric Administration (NOAA)." https://www.nature.com/articles/d41586-022-00312-2)

In addition, since 2012 we also observe a slowing of the mass loss from Greenland and of the decline in Arctic sea ice cover.

I think the only explanation must be a climate sensitivity that is in the lower range, maybe just 1-1.5 C.

Best regards T. Klemsdal, Oslo

Clive,

It looks good. But are the links that go with the global views correct? They seem to show plots of different decades, which don’t look like the snapshots shown.

You’re right of course !

Thanks – I'll fix it

Over 90% of the heat has been sequestered in the oceans, therefore it is not surprising that air temperatures are rising slower than CO2 concentrations.

“The temperature response appears to be linear to an exponential increase in CO2 levels. Therefore there is no evidence of any positive climate feedback to CO2. Instead it is far more likely to be a negative feedback.”

How can you be so sure when you are not measuring deep ocean temperatures? Why could there not be buffering going on. Buffering would hide the problem until it became obvious.

Nothing to see, move along.

The idea that heat is building up in the deep ocean somehow to reappear in the future violates the 2nd law of thermodynamics.

> The idea that heat is building up in the deep ocean somehow to reappear in the future violates the 2nd law of thermodynamics.

Is anyone claiming that this is what’s happening, though?

The deep oceans are typically the coldest layer of the ocean, and they can certainly act as a thermal buffer without violating any law of thermodynamics.

It's not that the heat that was transferred there would later "reappear" elsewhere; it's just that, after warming for a while, they would cease to act as much of a thermal buffer.

Even ignoring the ocean, Clive's feedback claim doesn't make sense to me. Planck feedback gives ~1.1 C warming for doubled CO2. From 1970 to 2020, CO2 forcing increased by ~28% of a doubling, and if I scale following IPCC AR6 total forcing then we went up by about 40% of a doubling's worth.

Planck feedback alone would give ~0.4 C, and we got 0.8 C. That seems like positive net feedback. If you're just doing rough calcs, then temperatures alone suggest about double the Planck response, and the recently accelerated heat uptake suggests Earth will warm by more than double the Planck response.
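The back-of-envelope arithmetic in this comment can be sketched as follows; the 1.1 C Planck figure and 40% scaling are the commenter's, and the ratio interpretation is mine:

```python
scaled_fraction = 0.40   # commenter's AR6-scaled fraction of a CO2 doubling, 1970-2020
planck = 1.1             # C of warming per doubling with no feedbacks (Planck response)
observed = 0.8           # C of observed warming since ~1970 (from the post)

expected = planck * scaled_fraction   # Planck-only expectation, ~0.44 C
ratio = observed / expected           # ~1.8: roughly double the Planck response
print(round(expected, 2), round(ratio, 1))
```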

There are different definitions but by the standard one, I think Clive’s data show substantial positive feedback.

I’d be interested in hearing Clive’s estimate of ECS and its 5–95 % range.

My simplest estimate for ECS (with no feedbacks) based on a Planck response to my calculation of radiative forcing is 1.4C. The largest unknown is the response of water vapour and clouds to such a 1.4C warming. Man's impact on landscape and vegetation has also been huge, independent of CO2.

My guess for the 5-95% ECS range would be 1.2 -> 3C with a most likely value of 2.4C. However we will never reach equilibrium, because the next Milankovitch cooling cycle will begin before we reach "equilibrium" in 50,000 years time.

One other thought is to recognize that the increase in CO2 is not "that exponential". In fact over any one hundred year period, it looks linear.

Sigh … that’s a self-limiting positive feedback. Increase CO2 => increases T => more outgassing of CO2 (+H2O) => increases T. Strongly catalytic with respect to H2O since CO2 is non-condensing. (H2O also has a positive feedback but condenses out at higher concentrations)

It’s self-limiting because the GHG effect saturates with increasing CO2 concentrations. Look at Venus

Except that the timescales are all wrong.

“However, based on Antarctic ice core data, changes in CO2 follow changes in temperatures by about 600 to 1000 years”

From what I understand that’s essentially the uncertainty or resolution in the data. In any case, there should be no discernible lead or lag in either situation.

> The temperature response appears to be linear to an exponential increase in CO2 levels. Therefore there is no evidence of any positive climate feedback to CO2. Instead it is far more likely to be a negative feedback.

I’m pretty sure you’re using a different definition of “feedback” from what climate scientists are talking about.

If we are using standard engineering terms, and we're simply talking about the response of surface temperature to forcing, we know via the Stefan-Boltzmann equation that there is *overall* a net negative feedback. Meaning: if we increase the amount of thermal energy a planet's surface is receiving, then the temperature will increase, and the outgoing radiation will increase. Iterate the math a few times, and the system will stabilize at some new temperature. Because the system stabilizes, that's a negative feedback – the outgoing radiation serves to counteract the increase in temperature, and a ramp-up of temperature leads to an even greater ramp-up of the rate of heat loss (and similarly for a decrease in temperature leading to an even greater decrease in the rate of heat loss).

A positive feedback would lead to the opposite result: an increase in temperature would then lead to a *decrease* in the rate of heat loss, which would then lead to a greater temperature increase, and then heat would be retained even more easily, and so on, until temperature goes to infinity. (Or, similarly, if temperature decreases, until it drops below 0K). That’s clearly not what happens. Physics doesn’t allow temperature to go to +/- infinity. And so it’s already widely known that there is, overall, a net negative feedback in climate systems; any increase or decrease in temperature will be restrained, not amplifying itself to infinity. The actual physical law governing this is the Stefan-Boltzmann law. And this law is what produces the logarithmic relationship between a climate forcing and the temperature response.
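To illustrate the stabilizing Stefan-Boltzmann term described above, here is a toy zero-dimensional energy balance (my own sketch; the numbers are illustrative surface-level values, not a climate sensitivity estimate):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def equilibrate(absorbed_flux, t_kelvin=288.0, steps=500):
    """Relax toward balance: warming raises emission sigma*T^4 until it
    matches the absorbed flux, so the iteration stabilizes (negative feedback)."""
    for _ in range(steps):
        imbalance = absorbed_flux - SIGMA * t_kelvin**4
        t_kelvin += 0.01 * imbalance   # small relaxation step
    return t_kelvin

baseline = SIGMA * 288.0**4            # ~390 W/m^2 emitted at 288 K
t_new = equilibrate(baseline + 3.7)    # step the flux up by one doubling's forcing
print(round(t_new - 288.0, 2))         # converges to ~0.68 K, not infinity
```

The point is only that the iteration settles at a finite new temperature rather than running away, exactly as the paragraph above argues.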

When climate scientists are talking about "positive feedback" or "negative feedback", they're talking about something else; they're talking about the feedback if we *ignore* the Stefan-Boltzmann feedback. They're talking about *the rest* of the potential feedbacks in the system: whether these amplify the initial forcing or not.

An example would be water vapor: water vapor is a potent greenhouse gas, and colder air holds more water vapor, so if something else pushes the climate to cool, then there’s less water vapor in the air, which means less heat retained still, and temperatures drop further. Which means: the final temperature response to the initial forcing will be greater than you’d expect if you weren’t considering water vapor. This is just an example; there are both positive and negative feedbacks.

Not counting any feedbacks, the amount of warming we’d expect from a doubling of CO2 would be about 1.1C. Note that this is still a logarithmic response! I.e., if CO2 doubles, temperature goes up by +X amount. That logarithmic response holds even when we have *zero* feedback.

The positive feedback that climate scientists generally expect is (roughly) +3C for a doubling of CO2. Again, note the logarithmic response!

So… saying “hey, there’s a logarithmic response, so the feedbacks are negative” is definitely, definitely wrong. It’s implicitly using the engineer’s definition of feedbacks, rather than the climate science definition of feedback. When climate scientists talk about positive feedbacks, they, too, are still expecting a logarithmic relationship between CO2 and temperatures! Pointing out that relationship does *not* imply that feedbacks are negative. It is the *amount* of equilibrium temperature change for a doubling of CO2 that shows whether the feedbacks are positive or not. Greater than +1.1C/2xCO2 is a positive feedback; lower than +1.1C/2xCO2 is a negative feedback.

And yes, currently we’re seeing more than +1.1C for a doubling of CO2. The observed temperature feedback is positive.

PS – if you’re going to check that for yourself, make sure you draw the distinction between the equilibrium climate response to a forcing, versus the transient climate response. The final temperature response lags the forcing, as it takes a while for a planet to heat up.

I completely agree with what you wrote. Sorry if I implied otherwise. My point was rather that so far the observed warming can be explained only as a direct consequence of CO2 forcing. Therefore TCR looks likely to be around 1.4C.

The other effects, like increased humidity and clouds and melting ice caps etc, may also affect the final value of ECS. I am just saying that so far in 2022 it seems to just follow CO2 forcing alone.

> My point was rather that so far the observed warming can be explained only as a direct consequence of CO2 forcing.

I don’t see how you get that from the observed data. Plus it’s not consistent with a TCR of 1.4C/doubling; not when TCR is *lower* than ECS and a non-feedback ECS is only 1.1C. If the TCR is 1.4C/doubling, then there are certainly feedbacks involved.

It’s all pretty well screwed up until climate scientists understand the natural climate variations. For example AMO shows a non-trending multidecadal variation, which tracks the UAH time-series closely since 1979.

Every one of these climate dipoles can be modeled based on a known forcing and an ocean-basin-wide standing-wave solution.

This is all straightforward to do if climate scientists were the least bit curious.

> This is all straightforward to do if climate scientists were the least bit curious.

I don’t think curiosity is the issue. How many free parameters does your model have?

It’s easy to create a phenomenological model that fits some data, but that doesn’t mean that the resulting model has any predictive power. The model is created by the fit to the data, so you can’t say that the good fit is a sign of a good model. Any decent scientist in a physical science field is going to be very skeptical of such a model unless you can identify a physical mechanism that would provide explanatory power. (So: what would be the physical mechanism here? Physically, how does this connection work? Explain it in terms of natural, physical laws, and show the math.)

The other approach would be to get enough data to show that the model is robust, but that’s going to take much, much more data. That’s how fields like medicine get around the issue without demonstrating the mechanism of action, and they still run into a big problem of reproducibility.

Curve-fit models are just.. not good science. It’s not just that the work might fall apart under the test of time; it’s bad enough that it wouldn’t even be publishable. That’s why nobody’s “curious” about this kind of approach.

That is just the AMO. There are eight other major oscillations, all in various phases at any given time. Hard to imagine that just one of them would impact the global warming trend over the last hundred years. Reminds me of kids bouncing up and down in the pool… lots of sloshing, but the mean water level remains the same.

It's a tidal model based on Laplace's Tidal Equations, which are routinely used in GCMs. Conventional tidal analysis often uses dozens of lunisolar parameters, but that's OK since I simply reuse the same amplitudes and phases. It works for all the climate indices: ENSO, AMO, IOD, PDO. QBO is beyond obvious.

Good try. Everything in applied science is a curve fit of some kind. Please give me an example of some physical model that doesn't involve a curve (or n-dimensional manifold) fit that ordinarily is used to match to empirical observations. Perhaps something in biology, or perhaps a binary experiment, but even that is a point fit, a line of zero dimension.

> It's a tidal model based on Laplace's Tidal Equations, which are routinely used in GCMs.

It’s not *just* a tidal model, yes? Physically, in your model, how do the tides control these oceanic oscillations?

> Good try. Everything in applied science is a curve fit of some kind.

A curve fit model is fine if you can get enough data and the relationship between cause and effect in that data is clear. For instance, the Ideal Gas Law was tested rigorously, thousands upon thousands of times, with each test typically generating hundreds of data points. It was tested for different gases, different temperatures, pressures, volumes, containers, so on and so forth. That model was tested over and over and over and over and over and over again, and the results were robust for all ideal gases – and we have a very good definition of what constitutes an “ideal gas” and what’s a non-ideal gas, based in thermodynamic principles.

So, yah, sure, such a model was initially a “curve fit”, but it’s been tested robustly enough that we know when it holds and when it doesn’t hold. It’s reproducible.

On *top* of that, later on we came to understand the underlying physics well enough that we could then explain this model in terms of that more-basic-physics. That’s what I mean when I talk about underlying physical mechanisms. Generally, scientific models are more robust when we can understand them in terms of simpler, better-controlled, better-tested underlying physics, as an emergent property of those underlying physics.

So: if your model uses curve-fitting, is it well-tested? Or does it present a testable physical mechanism?

What’s your training data set for your model, and what’s your validation data set? How does the size of the two data sets compare? And how do the number of data points compare to the number of free parameters?

These would help show that your model is robust.

Where is this so-called Laplacian tidal oscillation index? Your graphs contain no information about it. How can you compare IT to other oscillation time series if you don't have a time series of IT? Are you confusing Laplace tidal equations with rotary tides?

Windchaser, you essentially proved my point that earth scientists show little curiosity. You immediately produced a negative knee-jerk reaction to my physical model, which incidentally is described in Chapter 12 of Mathematical Geoenergy, published in 2018 (Wiley/AGU).

Tidal action on the ocean’s thermocline is different than on the surface. The reduced effective gravity along the thermocline interface is such that lunisolar forces will cause massive subsurface waves. This is the behavior that my fluid dynamics solution to Laplace’s tidal equations (LTE) models. There are over 140 years of monthly readings of various climate indices such as NINO34, SOI, AMO, etc that are testable. The tidal forcing data used is exactly the same as that applied to producing models of the earth’s length-of-day (LOD) changes — that of course is a solid body, while a fluid such as within an ocean basin produces a sloshing response that must obey the LTE differential equations and the observed standing-wave modes of the basin.

The complaint by Ted that I didn’t show everything in a blog comment is entertaining. I am reminded of the time that Linus Torvalds wrote a quick note in the comp.os news group about some source code he was working on, and then — voila — curiosity took over.

Your reasoning is difficult to follow.

Look at your own curves: between 1965 and 2022, +100 ppm of CO2 and +0.8°C in temperature. This can only be due to the effect of CO2 alone… whose effect, without feedback, you say would be 1.4°C for a doubling (i.e. +280 ppm of CO2).

On the other hand, it is better to compare the radiative forcing of CO2 with respect to the temperature, knowing that this forcing alone represents only about 80% of the total forcing, except for periods with a large volcanic eruption.

Regarding the link between temperature and CO2 level over 800,000 years, read the latest study from 2013 published in Science:

Synchronous Change of Atmospheric CO2 and Antarctic Temperature During the Last Deglacial Warming

To read on https://www.science.org/doi/10.1126/science.1226368

« Abstract :

Understanding the role of atmospheric CO2 during past climate changes requires clear knowledge of how it varies in time relative to temperature. Antarctic ice cores preserve highly resolved records of atmospheric CO2 and Antarctic temperature for the past 800,000 years. Here we propose a revised relative age scale for the concentration of atmospheric CO2 and Antarctic temperature for the last deglacial warming, using data from five Antarctic ice cores. We infer the phasing between CO2 concentration and Antarctic temperature at four times when their trends change abruptly. We find no significant asynchrony between them, indicating that Antarctic temperature did not begin to rise hundreds of years before the concentration of atmospheric CO2, as has been suggested by earlier studies. »

Yes temperatures have risen by ~0.8C for a CO2 increase of ~120 ppm. However there is a subtle effect that inhibits further warming as CO2 increases further. The strongest central CO2 emission lines are already way up in the stratosphere where temperatures increase with height. So more CO2 actually begins to have a compensating cooling effect as well. This gives rise to an overall logarithmic warming effect with further increases in CO2

https://clivebest.com/blog/?p=4597

Therefore warming caused by a further 120ppm will be “logarithmically” reduced. The result is that a doubling of CO2 results in about 1.4C instead of over 2C if the effect were linear.
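The 1.4C figure can be reproduced from Clive's forcing coefficient of 6.6 (linked above) together with a Planck-type response of about 0.3 C per W/m^2; the 0.3 value is my assumed round number, consistent with the sensitivity slopes quoted in a later comment:

```python
import math

forcing_per_doubling = 6.6 * math.log(2)   # Clive's coefficient: ~4.6 W/m^2 per doubling
planck_response = 0.30                     # C per W/m^2, no feedbacks (assumed round value)

warming = forcing_per_doubling * planck_response
print(round(warming, 1))   # ~1.4 C per doubling of CO2
```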

Feedbacks are a separate issue !

This is a good approximation, but do we really know what global temperatures were in 1940 and 1950 to anything close to the needed accuracy? Why did it cool from 1945 to 1965? What caused the Little Ice Age or the Medieval Warm Period if it wasn't CO2? Do we know these ocean currents are on a 10 year cycle? Could it be 100 or more years?

Yes, it is well known that the evolution of temperature is a logarithmic function of the CO2 level. Until proven otherwise, in the period of measurements that we know, this evolution is linear with respect to the evolution of the corresponding radiative forcing. What about later?

In the case of the IPCC, with the choice of a Planck climate sensitivity (1.2°C per doubling of the CO2 level, without other forcings) and the Myhre forcing equation, or in your hypothesis, with a sensitivity of 1.4°C, the effect is the same:

A change from 280 to 560 ppm, or from 400 to 800 ppm, or from 560 to 1120 ppm gives + 1.2°C for the IPCC or + 1.4°C in your hypothesis.

Starting from the same absolute temperature for 280 ppm CO2, the equations for the IPCC are:

T = 0.3236126 × 5.35 × ln(ppm CO2/280) + 288.1 as a function of forcing

T = 1.7313 × ln(ppm CO2) + 278.34 as a function of the CO2 level

Correlation coefficient with the previous equation = 1, even with 3000 ppm of CO2…

In your hypothesis, calculated starting from your curve on the useful range, from 250 to 1000 ppm the equations are:

T = 2.0196 × ln(ppm CO2) + 276.72 as a function of the CO2 level

Or T = 0.3060037 × 6.6 × ln(ppm CO2/100) + 286.024 as a function of forcing

Correlation coefficient with above equation = 1, even with 3000 ppm CO2….

In both cases, the temperature increase is indeed linear with respect to the value of the forcing and logarithmic with respect to the CO2 level. At 1000 ppm of CO2 (let's do everything to never get there), your estimate is 0.37°C higher than the IPCC calculation.
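The 0.37°C gap at 1000 ppm can be checked directly from the two fits above (a quick sketch of mine, using the commenter's coefficients verbatim):

```python
import math

def t_ipcc(ppm):
    """Fit based on the IPCC/Myhre forcing: 0.3236126 * 5.35 * ln(C/280) + 288.1."""
    return 0.3236126 * 5.35 * math.log(ppm / 280) + 288.1

def t_clive(ppm):
    """Fit based on Clive's forcing: 0.3060037 * 6.6 * ln(C/100) + 286.024."""
    return 0.3060037 * 6.6 * math.log(ppm / 100) + 286.024

# Both fits are logarithmic in CO2, so they differ only in slope and offset
gap_at_1000 = t_clive(1000) - t_ipcc(1000)
print(round(gap_at_1000, 2))   # ~0.37 C
```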

These sensitivities do not reflect reality since they do not take into account the many other complementary forcings, nor the feedbacks, nor the influences of ENSO, AMO, AO, etc.

For a doubling of the CO2 level (280 to 560 ppm), I recalculated the temperature evolution, starting from your estimates of monthly anomalies and the monthly CO2 levels at Mauna Loa since 1958, via linear or logarithmic regressions.

Calculation basis 1 = temperature anomaly directly related to ln(ppm CO2):

deltaT = 3.825 × (ln(560) - ln(280)), i.e. +2.6513°C

Calculation basis 2 = temperature anomaly linked to CO2 forcing calculated according to the IPCC formula, i.e. deltaFgiec = 5.35 × ln(ppm CO2/280):

deltaT = 0.7149 × (deltaFgiec(560) - deltaFgiec(280)), i.e. +2.6510°C

Calculation basis 3 = temperature anomaly linked to CO2 forcing calculated according to your hypothesis, i.e. deltaFclive = 6.6 × ln(ppm CO2/100):

deltaT = 0.5793 × (deltaFclive(560) - deltaFclive(280)), i.e. +2.6515°C
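Because each predictor is proportional to ln(CO2), the calculation bases above must agree; a quick sketch (mine) reproducing bases 1 and 3 from the commenter's coefficients:

```python
import math

ln_ratio = math.log(560) - math.log(280)   # = ln(2) for a doubling

# Basis 1: anomaly regressed directly on ln(ppm CO2)
dt1 = 3.825 * ln_ratio

# Basis 3: anomaly regressed on Clive's forcing, deltaFclive = 6.6 * ln(ppm/100)
dt3 = 0.5793 * (6.6 * math.log(560 / 100) - 6.6 * math.log(280 / 100))

print(round(dt1, 4), round(dt3, 4))   # both ~2.65 C per doubling
```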

Linear regressions have a slightly higher correlation than logarithmic ones, but with monthly anomaly data, the signal is very noisy.

I redid the same calculation starting from 1880, but the estimates of temperature anomalies or CO2 levels are less reliable. The values are a bit stronger, +2.67 to +2.68.

These indices are neither the TCR nor the ECS, which are calculated differently.

Yes all very reasonable.

TCR is the immediate climate response for a doubling of CO2. So I think it probably is ~1.4C

ECS is the response after waiting a few hundred years following a doubling of CO2. This can’t be measured. I also doubt whether the climate system ever is in equilibrium.

I don't think it is safe to just average temperatures.

We need to consider the actual energy involved. Dry air takes far less energy to shift in temperature, so a better understanding is one of system energy.

This reduces the impact of polar temperature swings, as the air there is often low in water content.