This is a guest post by “Frank”, based on his comment on Stabilising Climate. He argues that CO2 levels will stabilise at ~520 ppm, which agrees with an estimate given by Ferdinand Engelbeen, quoted below:

My rough estimate is that with the current emissions twice the current sink rate at ~110 ppmv above steady state, one need ~220 ppmv above steady state to get rid of the full ~4.3 ppmv/year human emissions. That is a level of 510 ppmv, far above the 440 ppmv of Clive… The observed e-fold decay rate of the extra CO2 in the atmosphere is around 51 years, surprisingly linear over the past 57 years. No reduction in sink rate to see

**Now here is Frank’s argument**

Man is currently emitting enough CO2 to increase atmospheric CO2 by 4 ppm/yr, but we only observe an increase in atmospheric CO2 of 2 ppm/yr. Presumably that means that 400 ppm of CO2 is enough to drive 2 ppm/yr from the atmosphere. My intuition says that if we cut emissions in half, sinks would continue to take up 2 ppm/yr (until sinks begin to “saturate”) and atmospheric CO2 would stabilize at the current level. On the other hand, you say that a 1% increase in CO2 to 404 ppm is enough to double the rate at which CO2 disappears into sinks from 2 to 4 ppm/yr. Since this is a blog, I simply talk about emissions and uptake in units of ppm/yr rather than Gtons/year – it is much more intuitive and the math is far simpler.

I think maybe this is the correct mathematical formulation:

dC_a/dt = -k_un*C_a + k_en1*C_s1 + k_en2*C_s2 + … + E_a

where:

dC_a/dt = rate of change in atmospheric CO2 (C_a, in ppm).

k_un = rate constant for Uptake of CO2 by all Natural sinks.

k_en1 = rate constant for Emission of CO2 from Sink #1 by Natural processes.

C_s1 = concentration of CO2 in Sink #1. Could be DIC in ocean.

k_en2 = rate constant for Emission of CO2 from Sink #2 by Natural processes.

C_s2 = conc. of CO2 in Sink #2. Could be organic material in soil in CO2 equiv. Or plant matter growing above ground. Some of this is emitted as CH4 and then oxidized. … Other sinks and rate constants.

E_a = rate of Emission of CO2 by Anthropogenic mechanisms.

During the 10 millennia before the Industrial Revolution, dC_a/dt and E_a were both near zero and a steady state relationship existed:

k_un*[280 ppm] = k_en1*C_s1 + k_en2*C_s2 + …

If we assume that the size of the sinks is much larger than the amount of CO2 they have taken up so far, and that their rate constants are independent of global warming, then the second and third terms on the right hand side haven’t changed. For the largest sink, the deep ocean, we know that the MOC takes about a millennium to overturn the ocean. So the deep ocean sink isn’t going to saturate in the near future. The mixed layer of the ocean is rapidly mixed by wind, so the CO2 content of the mixed layer is always in equilibrium with the atmosphere, and that equilibrium is only slightly temperature sensitive. It doesn’t saturate either. The land sinks are a little trickier, but let’s assume their rate constants and capacity haven’t yet changed appreciably. So we can substitute the pre-industrial steady state for the natural emission terms:

dC_a/dt = -k_un*C_a + k_un*[280 ppm] + E_a = -k_un*(C_a - 280) + E_a

Today: C_a = 400 ppm, E_a = 4 ppm/yr, and dC_a/dt = 2 ppm/yr.

Substituting:

2 = -k_un*(400 - 280) + 4, so k_un = 2/120 = 1/60 per year.

To double-check for consistency, go back to 1960 when (IIRC) C_a ≈ 315 ppm, E_a ≈ 1.3 ppm/yr and dC_a/dt ≈ 0.7 ppm/yr:

-(1/60)*(315 - 280) + 1.3 ≈ 0.7 ppm/yr

I really should look up the 1960 values and not trust my memory. However, no sign of saturation here either.

So what happens if we continue to emit a constant (not growing) 4 ppm/yr, and nothing else changes:

dC_a/dt = -(1/60)*(C_a - 280) + 4

Therefore C_a needs to rise to 520 ppm (280 + 240) for dC_a/dt to be zero and atmospheric CO2 to stabilize while we are emitting the equivalent of 4 ppm/yr.

And my intuitive answer that we need to cut back to 2 ppm/yr to stabilize near 400 ppm (for as long as the sinks don’t saturate) agrees with this mathematics.
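Frank’s balance is easy to check numerically. Below is a minimal sketch (Python; the uptake constant k_un = 2/120 = 1/60 per year follows from today’s 400 ppm, 4 ppm/yr emissions and 2 ppm/yr observed rise; everything else is illustrative):

```python
# Forward-Euler integration of the one-box balance:
#   dC_a/dt = -k_un*(C_a - 280) + E_a
# k_un = 1/60 per year follows from today's numbers:
#   2 ppm/yr rise = -k_un*(400 - 280) + 4 ppm/yr emissions.

def integrate(C0, E, k_un=1.0/60.0, years=1000, dt=1.0):
    """Integrate atmospheric CO2 (ppm) under constant emissions E (ppm/yr)."""
    C = C0
    for _ in range(int(years / dt)):
        C += dt * (-k_un * (C - 280.0) + E)
    return C

print(round(integrate(400.0, 4.0), 1))  # 520.0 (stabilises at 280 + 4*60)
print(round(integrate(400.0, 2.0), 1))  # 400.0 (halved emissions hold today's level)
```

With constant 4 ppm/yr emissions the concentration relaxes toward 280 + 4/k_un = 520 ppm with a 60-year e-fold time; halving emissions to 2 ppm/yr holds it at 400 ppm, which is exactly the intuitive answer above.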

Are sinks saturating?

Reservoirs of carbon (in GtC) in the ocean (blue labels), in biomass in the sea and on land (tan and green labels), in the atmosphere (light blue label) and in anthropogenic emissions. Fluxes of carbon between reservoirs are depicted by the arrows; the flux numbers are in GtC per year. (From: IPCC)

We’ve emitted 240 ppm of CO2: 120 ppm is in the atmosphere, 60 ppm-equivalents is in the ocean, and 60 ppm-equivalents on land. This Figure uses units of GtC; at 2.13 GtC per ppm, 400 ppm ≈ 850 GtC. The land biomass reservoir is about 2000 ppm-eq and the deep ocean reservoir is about 20,000 ppm-eq. The increased CO2 stored in these reservoirs since 1750 is trivial compared with their size, so there isn’t an obvious reason why emission from the reservoirs should have increased already or will increase in the future.

In my main equation above, I should include a term for the increase in photosynthesis (primary productivity) with rising CO2. The incorporation of CO2 into organic material is the rate limiting step, so it could also have increased by a factor of 400/280 – at least in areas where water and other nutrients (N, P, K, and micronutrients) are not limiting.

**CB comment:** No-one is suggesting that it is a good idea to keep CO2 emissions at current levels for centuries to come, but stabilising emissions now is a more realistic goal than reducing them to zero, and limits atmospheric CO2 levels. This gives more time to develop a far more rational future energy and transport policy.


The basic problem with this whole argument is that you’re essentially assuming that the ocean is infinite and that it can instantaneously mix the CO2 throughout the entire ocean column (so that there is no net change to the partial pressure in the ocean). It isn’t, and it can’t.

ATTP: My equation doesn’t assume that the ocean is infinite, but my calculations assumed that it will take a long time for the finite size of the ocean to become important. The k_un rate constant covers all uptake processes. Let’s consider the ocean component k_un2. The amount taken up in 1960 was k_un2*[315 ppm] and the amount taken up today is k_un2*[400 ppm]. k_un2 takes into account the volume of water downwelling and the solubility of CO2 at the temperature of downwelling.

If we think of the MOC as a convective conveyor belt, then for about the next millennium, upwelling water will release CO2 at a rate proportional to k_en2*[280 ppm], because the upwelling water hasn’t mixed with the downwelling water that is richer in CO2. After that, upwelling water will have downwelled at a time when CO2 had risen above 280 ppm. At that point, the deep ocean sink will have begun to “saturate”. However, the equation I proposed has a term for the CO2 concentration in sink2 and therefore takes the finite size of the ocean into account.

On the other hand, my calculation assumes CO2 in the upwelling part of sink2 has not changed. If this assumption were not correct, k_un derived from 1960 data would differ from k_un derived from current data. As I discussed before, I assume the mixed layer is in rapid equilibrium with the atmosphere, and that downwelling and upwelling occur from and into the bottom of the mixed layer (not the atmosphere).

Scientists have used man-made CFC11 as a tracer of downwelling water since about 1950 (when CFC11 emission became important). Like CO2, its solubility is temperature-dependent. As best I can tell, it is currently only found in significant quantity where deep water is forming in polar regions or has sunk into the polar deep ocean.

http://journals.ametsoc.org/doi/full/10.1175/JCLI3758.1

Molecular diffusion is too slow to transport CO2 (or heat) into the deep ocean, so CO2 transport must be by convection. When fluids flow past each other or past the bottom of the ocean in the MOC (or other currents), there will be turbulence and therefore turbulent diffusion or “eddy diffusion”. So both the “conveyor belt” and “diffusion” models appear to be over-simplified models of what happens in the ocean. I’m confused every time I see transport of heat or CO2 in the ocean discussed in terms of a global diffusion parameter, which I think is the approach in the Bern model.

Lots of assumptions are being made here, without testing their validity, or the sensitivity of the conclusions to them. How can this be described as a “realistic” stabilisation scenario when several of the assumptions are obviously invalid (e.g. that the mixed layer is in equilibrium with the atmosphere – if that were true, there would be no uptake; ATTP mentions others)?

If this is considered a “realistic” scenario, does that mean that you retract your previous analysis (and its republication at WUWT)?

“No-one is suggesting that it is a good idea to keep CO2 emissions at current levels for centuries to come, but stabilising emissions now is a more realistic goal than reducing them to zero, and limits atmospheric CO2 levels. This gives more time to develop a far more rational future energy and transport policy.”

It doesn’t help to pretend that we can stabilize atmospheric CO2 in the near future without very substantial cuts in emissions, on the basis of over-simplistic analyses, the shortcomings of which have already been pointed out to you, and in contradiction of the work of the scientists who have studied this problem for decades.

I integrated the Bern model using Nick Stokes’ parameters 500 years into the future (2620), assuming emissions fixed at 2013 levels. The airborne fraction falls from 0.55 to 0.2, levelling off toward the a0 component, which represents the infinite-lifetime fraction.

Atmospheric CO2 also levels off, rising to ~1200 ppm after 500 years of constant emissions. If instead emissions are reduced to half of 2013 levels and then held constant, CO2 rises to 750 ppm.
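For comparison, here is a sketch of the same constant-emissions experiment using a Bern-style impulse-response fit. The a_i/τ_i values below are the ones tabulated in IPCC AR4 (Table 2.14 footnote), not necessarily the parameters Clive used via Nick Stokes, so the absolute numbers differ from his; the qualitative point is that the airborne fraction decays toward the a0 floor:

```python
import math

# Bern-style impulse response: IRF(t) = a0 + sum_i a_i * exp(-t/tau_i)
# Parameters from the IPCC AR4 Table 2.14 footnote; illustrative here,
# not a reproduction of Clive's (Nick Stokes') fit.
a0 = 0.217
a = [0.259, 0.338, 0.186]
tau = [172.9, 18.51, 1.186]      # e-fold times in years

def airborne_fraction(t, E=4.0):
    """Fraction of cumulative constant emissions E (ppm/yr) still airborne at year t.

    Analytic convolution of a constant emission rate with the IRF:
    excess(t) = E*[a0*t + sum_i a_i*tau_i*(1 - exp(-t/tau_i))]."""
    excess = E * a0 * t          # the "permanent" component grows linearly
    for ai, ti in zip(a, tau):
        excess += E * ai * ti * (1.0 - math.exp(-t / ti))
    return excess / (E * t)

for t in (50, 200, 500):
    print(t, round(airborne_fraction(t), 2))
```

Because a0 > 0, the excess concentration under constant emissions never truly levels off: it ends up growing linearly at a0·E per year, so “levels off” is only approximate on any finite horizon.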

“Atmospheric CO2 also levels off, rising to ~1200 ppm after 500 years of constant emissions.”

Guess we’d better not do that then!

This is (to say the least) rather different to the results from your model and that of “Frank”. Unlike those models, however, the Bern model (or more accurately the impulse-response-of-the-Bern-cc-model model) doesn’t make basic assumptions that are obviously incorrect.

Of course it makes assumptions. All models do.

The carbon cycle is immensely complicated – see the quote from p. 20 of The Global Carbon Cycle by David Archer.

“Of course it makes assumptions. All models do.”

You ought to read posts a bit more carefully. I didn’t say the Bern model doesn’t make assumptions, I said that it “doesn’t make basic assumptions that are obviously incorrect” [emphasis mine]. Unlike your model, and unlike Frank’s.

“The conclusion we come to is that the natural carbon cycle is a wild card, as large an uncertainty as that of our own CO2 emissions”

Yes, and it could be that the outcome is worse than is implied by the Bern model. Assuming that the uncertainty suits your argument is cherry picking; the rational approach is to look at the expected outcome, taking into account what we know about the physics (e.g. the Bern model), and take into consideration the uncertainty in both directions.

Frank’s model is constructed from the point of view of a chemist describing first-order reactions: the rate is proportional to the concentration of the reactant. This is correct for CO2_atm –> biomass, as long as RuBisCO (the enzyme that uses CO2) is the rate-limiting step in the growth of biomass. It should be correct for transport from the mixed layer into the deep ocean by the downwelling branch of the MOC. Transport of CO2 by molecular diffusion is far too slow: transport must be by convection, with turbulent mixing at the boundaries of the current. I cited a paper on uptake of CFC11 by the ocean above.

The Figure above showing all of the transport and interconversions of CO2 as arrows would normally be converted by a person of my background (chemistry, biochem) into the rate equation I presented – though I didn’t include all of the processes. I’d love to see the Bern model derived starting from such a Figure. The Bern model SEEMS to involve what a chemist would call zeroth-order reactions and a curve-fitting exercise using purely arbitrary rate constants and reservoir sizes adjusted to match historical data. Given enough parameters, one can fit almost any data set. Perhaps I’m wrong about the Bern model, but I am reasonably familiar with rate equations.

I should also point out that the uncertainty regarding our emissions is not actually very large, so I think you are rather overplaying the Archer quote.

No, you are misquoting him. The uncertainty is related to glacial cycles.

I think you should probably include the full quote, which is really referring to carbon cycle feedbacks.

Yes and CO2 acts as a feedback during glacial cycles.

Please stop trying to just score points.

It is clear that ATTP is not just scoring points, because in context it is clear that the quote is suggesting the uncertainty is towards things being worse than implied by the Bern model.

It has nothing to do with the Bern model, which is not mentioned once in the entire book!

I wasn’t trying to score points (maybe you should stop being so defensive?). The point of that quote is mainly that the carbon cycle could stop being a negative feedback.

Clive,

You’re the one who keeps bringing up the Bern model. As has been pointed out on numerous occasions, the Bern model is not intended as a complete model of the carbon cycle. It doesn’t include the slow carbon sinks (or volcanic outgassing), it doesn’t include that the residual will depend on the magnitude of the cumulative emissions, and it doesn’t include that there are other possible carbon cycle feedbacks.

Indeed, Clive brought up the Bern model in his comment here, and he also introduced the Archer quote here in a response to one of my subsequent comments about the Bern model, in which case one wonders why Clive brought it up if

“It has nothing to do with the Bern model which is not mentioned once in the entire book!”

How can *I* be misquoting him, given that I just cut-and-pasted the quote from *your* comment? Was I in error in trusting you to have quoted him accurately?

The Bern model allows one to accurately fit the faster rate constants and reservoir sizes so that they produce agreement with historical data. Parameters for the longer-term processes are derived from small discrepancies between the historical data and model output that omits those slower processes. As such, the slower parameters are probably highly uncertain. Extrapolating millennia of behavior from such parameters doesn’t make much sense to me.

The equations I wrote will be reasonable in the future because they rely on the actual amount of CO2 (or altered form of CO2) present at any time in the future. And they can be adapted to future change: if the rate of deepwater formation slows, the rate constant for that process can shrink. If downwelling water is warmer, it will hold less CO2, and the rate constant can reflect that. The Bern model (impulse-response) implies that the fate of emitted CO2 is pre-ordained when CO2 is emitted. That seems crazy to me, possibly because I don’t understand the origins of the Bern model. I’d only apply my equations as long as I thought that the key rate constants weren’t likely to have changed, say another half century.

A CO2 reservoir doesn’t saturate because it can’t “hold” any more CO2 or CO2-equivalent. It saturates because the rate of emission of CO2 from the sink becomes equal to the uptake by the sink. And the rate of emission depends on the concentration of CO2 (or equivalent) in the sink. The more biomass in the soil, the faster micro-organisms will convert it back to CO2.

FWIW: If we have a first order reversible reaction:

A <=> B or CO2_atm <=> CO2_sink

dA/dt = -kA + k’B and

dB/dt = -k’B + kA

where k and k’ are the forward and reverse rate constants. If the sum of A + B = C (constant), then at equilibrium:

A_eq = C[k’/(k+k’)]

B_eq = C[k/(k+k’)]

Now we add a small amount of A, which I will call a.

A(t=0) = C[k’/(k+k’)] + a

B(t=0) = C[k/(k+k’)]

and the new equilibrium will be

A_eq’ = (C+a)[k’/(k+k’)]

B_eq’ = (C+a)[k/(k+k’)]

How fast do we approach the new equilibrium? Below is derived from

http://staff.um.edu.mt/jgri1/teaching/che2372/notes/07/02/equil_relax.html#p1

http://staff.um.edu.mt/jgri1/teaching/che2372/notes/07/02/equil_relax.html#p2

dA/dt = -kA + k’B = -kA + k'(C + a – A) = k'(C + a) – (k + k’)A

The new equilibrium A_eq’ = {k’/(k+k’)}(C+a) makes the right hand side zero, so this can be written:

dA/dt = -(k + k’)(A – A_eq’)

Now we separate variables and integrate the left hand side (dz/z type) from A(0) = A_eq + a to A(t), and the right hand side from t = 0 to t = t:

ln{ [A(t) – A_eq’] / [A(0) – A_eq’] } = -(k+k’)t

The initial displacement from the new equilibrium is

A(0) – A_eq’ = A_eq + a – A_eq – {k’/(k+k’)}a = {k/(k+k’)}a

so

A(t) = A_eq’ + {k/(k+k’)}a*exp[-(k+k’)t]

Or perhaps more clearly:

A(t) = A_eq + {k’/(k+k’)}a + {k/(k+k’)}a*exp[-(k+k’)t]

And miraculously, the limits appear to check out:

If a = 0, A(t) = A_eq (the equilibrium isn’t changed if we add zero A)

For t = 0, A(0) = A_eq + a (the perturbed starting amount, OK)

For t = infinity, A(t) = A_eq + {k’/(k+k’)}a (correct limit)

Rate: dA/dt = -{k/(k+k’)}a*(k+k’)*exp[-(k+k’)t] = -a*k*exp[-(k+k’)t]

The new equilibrium is approached at a rate proportional to a and to exp[-(k+k’)t], where the e-fold time is 1/(k+k’), inversely proportional to the sum of the rate constants.
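As a sanity check, the closed form can be compared against direct numerical integration of the two coupled rate equations. (Starting from A(0) = A_eq + a, the displacement from the new equilibrium is A(0) – A_eq’ = {k/(k+k’)}a, which sets the pre-exponential factor.) The rate constants below are purely illustrative:

```python
import math

# Two-box first-order exchange, integrated numerically and compared
# with the closed-form relaxation. Rate constants are illustrative.
k, kp = 0.02, 0.01           # forward (A->B) and reverse (B->A) rate constants
C, a = 300.0, 10.0           # conserved total before perturbation, and added amount
Aeq = C * kp / (k + kp)      # old equilibrium amount of A (= 100)

A, B = Aeq + a, C - Aeq      # perturb A by a; total is now C + a
dt, t_end = 0.001, 200.0
for _ in range(int(t_end / dt)):
    dA = (-k * A + kp * B) * dt   # small-step Euler integration
    A += dA
    B -= dA

# Closed form: A(t) = Aeq + a*k'/(k+k') + a*k/(k+k')*exp(-(k+k')*t)
A_exact = Aeq + a * kp / (k + kp) + a * k / (k + kp) * math.exp(-(k + kp) * t_end)
print(round(A, 3), round(A_exact, 3))   # the two agree
```

The numerical and analytic values agree to well within the Euler step error, confirming both the e-fold time 1/(k+k’) and the {k/(k+k’)}a amplitude of the decaying term.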

This much looks like the first term of the Bern model. If I were to add terms for other sinks and could do the calculus and algebra, would I get more terms of the Bern model?

Or better still, if I take the derivative of the Bern equation, do we have a solution for the coupled rate equations with additional sinks B_2, B_3, …? That would show that my model is, in part, equivalent to the Bern model.

dA/dt = -kA + k’B and other terms

dB/dt = -k’B + kA and other terms

“It is difficult to get a man to understand something, when his salary depends upon his not understanding it!”

Upton Sinclair

Clive: I may have learned enough about this subject to make some useful comments. A model for the flux of carbon dioxide through all of the compartments in the environment (such as in the Figure I linked) is often called an Earth System Model or ESM. When the kinetics of this flux are described by a series of coupled linear time-invariant equations, mathematics with which I was previously unfamiliar allows one to transform them into a mathematically equivalent system: the sum of a series of exponential functions. This mathematics can be found in Wikipedia under the names impulse-response functions (such as the Bern model), Laplace transforms, Green’s functions, etc. Assuming that I understand correctly, readers familiar with this mathematics can look at the simple approach to equilibrium I laboriously solved above and immediately write a simple exponential that can be fit to data. That exponential won’t be based on the real rate constants k and k’; it will have a single time constant and pre-exponential factor to be fit to data. And they can do so for a many-compartment system, not just the simple two-compartment system I solved above.

The other interesting thing about these systems of equations is that fitting the output from a single pulse of CO2 (an impulse) affords one access to all of the parameters (time constants and weighting factors that define the response) that characterize the mathematically equivalent system. Parameters obtained from fitting the response to the release of a single pulse of CO2 are applicable to a series of annual pulses (though this violates my intuition).
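That superposition property can be verified directly with a toy linear one-box system (illustrative removal constant, not any published model): integrating the ODE against an arbitrary emissions series gives exactly the same answer as convolving that series with the single-pulse response.

```python
# A linear one-box system: the response to any emissions series equals the
# superposition (convolution) of single-pulse responses. k is illustrative.
k = 1.0 / 60.0
E = [2.0 + 0.02 * t for t in range(150)] + [5.0] * 150  # arbitrary emissions (ppm/yr)

# (1) Direct year-by-year integration: C_n = (1-k)*C_{n-1} + E_n
C_direct, C = [], 0.0
for e in E:
    C = (1.0 - k) * C + e
    C_direct.append(C)

# (2) Convolution with the discrete unit-pulse response (1-k)^t:
# each year's pulse decays independently, and the totals are summed.
C_conv = [sum(E[m] * (1.0 - k) ** (n - m) for m in range(n + 1))
          for n in range(len(E))]

print(max(abs(x - y) for x, y in zip(C_direct, C_conv)))  # essentially zero
```

The two agree to floating-point precision because the recurrence C_n = (1-k)C_{n-1} + E_n expands exactly into the convolution sum. This is why fitting a single pulse response suffices for arbitrary emission histories – as long as the system really is linear.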

In the real world (historically and in the future), in AOGCMs trying to calculate the carbon flux in every grid cell, and in ESMs, the responses are not perfectly linear and time-invariant. Nevertheless, you can abstract an impulse-response model that imitates the behavior of the system. The historical record of CO2 allows one to fit the faster responses reasonably well, but the parameters for the slower responses come from ESMs that project the future. With an impulse-response model for an ESM or AOGCM, one can calculate CO2 fluxes under a variety of scenarios without having to run the whole model.

So the Bern model appears to be a model of an ESM, and the ESM correctly hindcasts the historical record of CO2 accumulation in the atmosphere. For more see:

Atmos. Chem. Phys. Discuss., doi:10.5194/acp-2016-405, 2016

http://www.atmos-chem-phys-discuss.net/acp-2016-405/acp-2016-405.pdf

Future emissions of CO2 over the remainder of the century are uncertain and a strong function of future climate policy (Van Vuuren et al., 2011). Future climate changes, and their associated impacts, will largely be determined by future cumulative carbon dioxide emissions (Matthews et al., 2009; Allen et al., 2009; Meinshausen et al., 2009), but linking specific CO2 emission scenarios to future transient climate change requires a model of the interacting climate-carbon-cycle system. Comprehensive Earth System Models (ESMs) directly capture the physical processes that govern the coupled evolution of atmospheric carbon concentrations and the associated climate response (Friedlingstein et al., 2006). However, such models are typically highly computationally intensive and can therefore only be run for a few representative future emission scenarios (Taylor et al., 2012). For analysis of arbitrary emissions scenarios, as required for the integrated assessment of climate policy and calculation of the social cost of carbon, a computationally efficient representation of the Earth system is required (Marten, 2011).

Simplified representations of the coupled climate-carbon-cycle system take many forms (Hof et al., 2012). A key test for simplified ESMs is whether they correctly capture the physics of the co-evolution of atmospheric CO2 concentrations and global mean temperature under idealised settings. Following a CO2 pulse emission of 100GtC under present-day climate conditions, ESMs (and Earth System Models of Intermediate Complexity – EMICs) display a rapid draw-down of CO2 with the concentration anomaly reduced by approximately 40% from peak after 20 years and by 60% after 100 years, followed by a much slower decay of concentrations leaving approximately 25% of peak concentration anomaly remaining after 1000 years (Joos et al., 2013). The effect of this longevity of fossil carbon in the atmosphere, combined with the gradual “recalcitrant” thermal adjustment of the climate system (Held et al., 2010), is to induce a global mean surface temperature (GMST) response to a pulse emission of CO2 of a rapid warming over approximately a decade to a plateau value of GMST anomaly (Joos et al., 2013). Warming does not noticeably decrease from this value over the following several hundred years, indicating that, short of artificial CO2 removal (CDR) or other geoengineering methods, CO2-induced warming is essentially permanent on human-relevant timescales.

A second important feature of more complex climate-carbon-cycle models is the increase in airborne fraction (the percentage of emitted CO2 that remains in the atmosphere after a period of time) over time in scenarios involving substantial levels of emissions or warming (Millar et al., 2016). An emergent feature of the CMIP5 full-complexity ESMs appears to be that this increase approximately cancels the logarithmic relationship between CO2 concentrations and radiative forcing, yielding an approximately linear relationship between cumulative CO2 emissions and CO2-induced warming (Matthews et al., 2009; Gillett et al., 2013).

http://onlinelibrary.wiley.com/doi/10.1046/j.1365-2486.1999.00235.x/abstract

Impulse response functions of terrestrial carbon cycle models: method and application

MV Thompson, JT Randerson – Global Change Biology, 1999

To provide a common currency for model comparison, validation and manipulation, we suggest and describe the use of impulse response functions, a concept well-developed in other fields, but only partially developed for use in terrestrial carbon cycle modelling. In this paper, we describe the derivation of impulse response functions, and then examine (i) the dynamics of a simple five-box biosphere carbon model; (ii) the dynamics of the CASA biosphere model, a spatially explicit NPP and soil carbon biogeochemistry model; and (iii) various diagnostics of the two models, including the latitudinal distribution of mean age, mean residence time and turnover time. We also (i) deconvolve the past history of terrestrial NPP from an estimate of past carbon sequestration using a derived impulse response function to test the performance of impulse response functions during periods of historical climate change; (ii) convolve impulse response functions from both the simple five-box model and the CASA model against a historical record of atmospheric δ13C to estimate the size of the terrestrial 13C isotopic disequilibrium; and (iii) convolve the same impulse response functions against a historical record of atmospheric 14C to estimate the 14C content and isotopic disequilibrium of the terrestrial biosphere at the 1° × 1° scale. Given their utility in model comparison, and the fact that they facilitate a number of numerical calculations that are difficult to perform with the complex carbon turnover models from which they are derived, we strongly urge the inclusion of impulse response functions as a diagnostic of the perturbation response of terrestrial carbon cycle models.

Joos: Atmos. Chem. Phys., 13, 2793–2825, 2013 Used in AR5.

http://www.atmos-chem-phys.net/13/2793/2013/acp-13-2793-2013.pdf

CO2 is, unlike most other agents, not destroyed by chemical reactions in the atmosphere or deposited on the earth surface, but redistributed within the major carbon reservoirs atmosphere, ocean, land biosphere involving multiple timescales for exchange among and for overturning within these reservoirs. A substantial fraction of the initial perturbation by the emission pulse remains in the atmosphere and the ocean for millennia. This fraction is only removed by ocean-sediment interactions and interactions with the weathering and burial cycle of carbon involving timescales from many millennia to hundred thousand years (Archer et al., 2009).

The continuum of timescales involved in the redistribution of CO2 can be approximated in practice by a few timescales only. It is usually sufficient to consider three to four terms in the sum in Eq. (5). Then the coefficients a_CO2,i and τ_CO2,i have no direct process-based meaning, but are fitting parameters chosen to represent a given model-based IRF_CO2. The IRF of a model is normally computed by calculating the response to a pulse-like perturbation. In our case, the IRF for atmospheric CO2 is computed within the suite of carbon cycle-climate models by monitoring the simulated decrease of an initial atmospheric CO2 perturbation due to a pulse-like CO2 release into the model atmosphere. Similarly, IRFs for surface temperature, ocean heat uptake, sea level rise or any other variable of interest are obtained by monitoring its simulated evolution after the initial perturbation.

The IRFs or Green’s functions computed in this study are also useful to characterize the carbon cycle-climate models. The theoretical justification is that IRFs represent a complete characterization of the response of a linear system to an external perturbation. For CO2, the value of the IRF at any particular time is the fraction of the initially added carbon which is still found in the atmosphere. In a linear approximation, the change in atmospheric CO2 inventory at time t can be represented as the sum of earlier anthropogenic emissions, e, at time t′ multiplied by the fraction still remaining airborne after time t − t′, IRF_CO2(t − t′):

CO2(t) = CO2(t0) + ∫_t0^t e(t′) · IRF_CO2(t − t′) dt′

where CO2(t0) is the atmospheric CO2 inventory when the system was in (approximate) steady state. The IRF is thus a first-order approximation of how excess anthropogenic carbon is removed from the atmosphere by a particular model.

Non-linearities in the carbon cycle-climate system, however, limit the accuracy of the above equation substantially. The IRF_CO2 is not an invariant function, but depends on the magnitude of the carbon emissions (Maier-Reimer and Hasselmann, 1987). Non-linearities arise from the non-linearity of the carbonate chemistry in the ocean, from changes in ocean circulation with global warming that affect the surface-to-deep transport of excess anthropogenic CO2, as well as from other effects such as non-linear dependencies of terrestrial productivity or soil overturning rates on climate and atmospheric CO2. It has been shown that the atmospheric response, as simulated with a comprehensive model, is better approximated using oceanic and terrestrial impulse response functions that include major non-linearities of the carbon cycle (Joos et al., 1996; Meyer et al., 1999).

Frank,

You are becoming an expert!

I too have been reading similar arguments, e.g. from Joos:

Are complex models correct? Just because they are complex does not mean they are correct. There are many underlying assumptions made in ESMs which may or may not be accurate. Non-linearity causes any errors to explode, so they simply have to be ignored.

Hence Fig 10 in AR5. How convenient! If this is true, then we need a simple physical reason why this is the case today but not, say, 5 million years ago.

The first test of any model must be that it can describe the past. If you look at the following post then you will find that the Bern model has gone through at least two iterations and neither is capable of reproducing the Mauna Loa CO2 data.

Clive: Your Bern model vs. Mauna Loa comparison is somewhat confusing. If I understand correctly (and perhaps this is where I’m going wrong), even in 1960 cumulative emissions were twice the atmospheric accumulation. In other words, the airborne (“free”) fraction was 50%. So if Mauna Loa says 315 ppm in 1960, then cumulative emissions should be 350 ppm: 280 (pre-industrial) + 35 (accumulated) = 315 (observed), plus 35 ppm (taken up), giving 350 ppm (cumulative emissions). In 2015, the free fraction allegedly is still about 50%: 280 + 120 = 400 ppm (observed), plus 120 (taken up), giving 520 ppm (cumulative emissions). When observed atmospheric CO2 had increased by 100 ppm (about 2005), cumulative emissions “should have been” about 480 ppm.
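That bookkeeping can be written out as a trivial sketch (the 50% airborne “free” fraction and the 280 ppm pre-industrial level are the figures used in the comment above):

```python
# Bookkeeping check: with a fixed 50% airborne ("free") fraction, the
# cumulative-emission level is pre-industrial plus twice the observed rise.
def cumulative_level(observed_ppm, preindustrial=280.0, free_fraction=0.5):
    """Cumulative emissions expressed as a ppm level, as in the comment above."""
    return preindustrial + (observed_ppm - preindustrial) / free_fraction

print(cumulative_level(315.0))  # 1960: 350.0 ppm
print(cumulative_level(400.0))  # 2015: 520.0 ppm
print(cumulative_level(380.0))  # ~2005 (+100 ppm observed): 480.0 ppm
```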

If the blue curve needs to be higher, then the performance of the Bern models doesn’t appear to be as horrible as it does here. The non-blue curves are all about halfway between cumulative emissions (100% free fraction) and 280 ppm (0% free fraction).

I think I have calculated it correctly. I use 2.13 GtC as being equivalent to 1 ppm of CO2 in the atmosphere. I add up all the yearly global emissions data from CDIAC from 1750 to 1960, assuming it all stays in the atmosphere. I then get the blue curve. I agree it looks strange – almost as if the level in 1750 was really higher than 280 ppm.

Now I’m trying to rationalize the existence of saturation and a larger free fraction using the IPCC Figure showing the size and fluxes to and from various sinks.

The half-life of radioactive 14C from atmospheric bomb testing is about 5 years, and this drawing shows 20% of atmospheric CO2 (150 GtC) being taken up and exchanged with larger reservoirs every year. Obviously, the model was designed to produce this result.

If there were no respiration of continental biomass, photosynthesis could deplete the atmosphere of CO2 at an initial rate that would complete the job in 12 years, but which would slow as CO2 is depleted. If there were no photosynthesis, respiration would deplete land biomass at an initial rate that would complete the job in 73 years, but which would slow as biomass is depleted. For the continental biomass sink to saturate, more CO2 needs to be flowing into continental biomass than out of it. At the current imbalance (2.5 GtC/yr?), it would take nearly two millennia to double the amount of biomass. If a steady state existed pre-industrially and respiration hasn’t increased, then primary productivity has only increased by about 2% (including human agriculture) even as CO2 has increased almost 50%.

Assuming the surface ocean (mixed layer) is in equilibrium with the atmosphere, I can understand some net outgassing of CO2 (2 ppm) due to warming. However, I can’t see why a deep ocean produced by downwelling when CO2 was 280 ppm should be outgassing 10% more CO2 through upwelling than it takes up by downwelling of water in equilibrium with 400 ppm.

I think I’ll quit here. There is too much I don’t understand to expect to see why the free fraction should decrease.