30 years of IPCC assessment reports – How well have they done?

Over the last 30 years there have been five IPCC assessment reports on climate change. I decided to compare the warming predictions made at the time of each report's publication to the temperatures subsequently measured. The objective is twofold. Firstly, how well did the predictions fare over time? Secondly, have we learned anything new about climate change in the last 30 years?

1. FAR

The First Assessment Report (FAR) was published in 1990. It was probably the first public alarm raised that the world was warming as a consequence of CO2 emissions. As a spin-off, John Houghton, chairman of the FAR scientific working group, also wrote a very good book – “Global Warming – The Complete Briefing”. Here is the comparison of the “model” to the temperatures that followed.

The first prediction of global warming in 1990. The data are my own 3D version of HadCRUT4.6

The trend in temperatures is confirmed, but by the end of 2019 the degree of warming was less than the FAR 1990 forecast. Emissions have indeed followed the Business as Usual trajectory, but the warming turned out to be about 0.3C less than forecast.

2. SAR

The second assessment report (SAR) dates from 1995. Here is the SAR comparison between my 3D HadCRUT4 results (shown in red) and the UK Met Office model.

My 3D results for HadCRUT4.6 are shown in red. The results are the same as Cowtan & Way.

We see that the model which reduces the effect of CO2 by including aerosols agrees reasonably well with today's data. However, since aerosol levels have actually fallen significantly since 1995, this result is not as good as it looks. I would call it a moderate success for SAR, but only when assuming CO2 forcing alone without feedbacks. Next we look at the third assessment report (TAR).

3. TAR

The third assessment report was published in 2001. The TAR model predictions were actually lower than those of both FAR and SAR, perhaps reflecting a real drop-off in the measured temperatures. HadCRUT3 had by then shown that a definite pause (hiatus) in global warming was underway following the super El Niño of 1998. Since 2001 many more stations have been added (and some removed), so that, as we will see, the hiatus has today essentially disappeared. Here, though, is the comparison of the TAR ‘projections’ to the current temperature data as of 2019. The temperature data are again my own 3D version of the HadCRUT4.6 measurements.

The blue points are HadCRUT4.6 calculated using Spherical Triangulation.
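For readers curious how a global mean is formed from scattered measurements: the conventional gridded approach weights each cell by the cosine of its latitude (cells shrink towards the poles), and spherical triangulation generalises the same area-weighting idea to irregularly spaced stations. Here is a minimal sketch of the gridded weighting, using made-up illustrative anomalies, not real HadCRUT values:

```python
import math

def global_mean(cells):
    """Area-weighted global mean of (lat_deg, anomaly) grid cells.
    On a regular lat-lon grid, every cell in a latitude band has the
    same area, proportional to cos(latitude of the cell centre)."""
    wsum = tsum = 0.0
    for lat, t in cells:
        w = math.cos(math.radians(lat))
        wsum += w
        tsum += w * t
    return tsum / wsum

# Illustrative: strong Arctic warming contributes little to the global
# mean because high-latitude cells cover little area.
cells = [(82.5, 3.0), (42.5, 0.8), (2.5, 0.5), (-42.5, 0.6), (-82.5, 1.5)]
print(round(global_mean(cells), 3))
```

The triangulation version replaces the cosine weights with the areas of spherical triangles around each station, which matters most where coverage is sparse (the Arctic).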

The agreement between models and data looks much better. Note, however, that the temperature change is measured relative to 1990 (the FAR publication date). Even so, my impression is that we are most likely following their blue B2 ensemble curve.

4. AR4

The fourth assessment report was published in 2007 and included a shorter-term prediction diagram compared with the HadCRUT data available at the time. The figure below shows the updated comparison as of January 2020. The black squares are the latest HadCRUT4.6 data as downloaded from the Hadley site. The small black circles are the old H4 data points (as available at the time) from the original report. They differ from the current H4 results because CRU have in the meantime updated all their station data. In this respect, note how the 1998 temperature has dropped a little while the 2005 value has increased, enough that the hiatus starts to disappear.

AR4 model comparison to HadCRUT4.6 updated for 2019

The final agreement is not too bad but the data nevertheless still lie below the model means for all reasonable SRES scenarios.

5. AR5

This finally brings us up to the Fifth Assessment Report comparison. When AR5 was published in 2013 the hiatus in warming was still clearly visible in all temperature series, and as a consequence all models were running hot by comparison. Since then, however, many more stations have been added in the Arctic, and the large El Niño of 2016 has now apparently erased the hiatus. Despite all this, how do the models compare with the modern temperature data as of January 2020?

Here is the up-to-date comparison of Figure 11.25a to the data.

IPCC AR5 Figure 11.25a updated for 2019. The green trend is HadCRUT4.6; the cyan trend is the 3D version. The data are skimming along the bottom of the sensitivity range of all 45 CMIP5 models.

It is clear that the warming trend lies at the lower end of the CMIP5 ensemble. Only models with lower sensitivity can adequately describe the temperature data.

All five comparisons across 30 years of assessment reports say the same thing. There is an obvious warming trend in global surface temperatures, consistent with being caused by the anthropogenic increase in atmospheric CO2 levels. However, this trend lies at the lower end of all model projections made into the future. It is very easy for models to describe the known historic temperature record, simply because they have been tuned to do exactly that. The real test of climate models is whether they can predict future warming, and the evidence of the last five assessment reports shows that most of them fail to do so.
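As a rough illustration of the kind of sensitivity the data themselves imply, here is a back-of-envelope transient response estimate using the standard logarithmic CO2 forcing formula, F = 5.35 ln(C/C0), and round illustrative numbers (~1C of observed warming, CO2 rising from ~280 to ~410 ppm). This is a sketch that ignores aerosols and other forcings, not a formal attribution:

```python
import math

def tcr_estimate(dT, c0, c1, f2x=3.7):
    """Rough transient climate response (warming per CO2 doubling):
    observed warming dT (degC) scaled by the ratio of the doubling
    forcing f2x (W/m^2) to the CO2 forcing realised between
    concentrations c0 and c1 (ppm), using F = 5.35*ln(c1/c0)."""
    df = 5.35 * math.log(c1 / c0)
    return dT * f2x / df

# Illustrative round numbers, not a fit to any dataset
print(round(tcr_estimate(1.0, 280, 410), 2))
```

Such simple estimates are indicative only, but they show how directly an observed trend constrains the response per doubling.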

It is normal practice in science that theoretical predictions which fail experimental tests are rejected, or at the very least modified. Climate science is different. The news from the latest modelling ensemble, CMIP6, is that the new generation of ESMs is even more sensitive to CO2 than the seven-year-old CMIP5 models. CMIP6 produces even stronger warming trends, in stark contrast to the actual observations! Where is the scientific accountability? Has not climate science perhaps simply merged with climate activism?

Hold on to your hats!


This entry was posted in AGW, Climate Change, climate science, CRU, Hadley, IPCC. Bookmark the permalink.

23 Responses to 30 years of IPCC assessment reports – How well have they done?

  1. Hugo says:

    Hi, I love your hard work.

    Maybe a silly question:
    Are there also assessments of the measuring points and the way the data is collected?

    • Clive Best says:

      Thanks,

      The raw measurements are recorded accurately, but the instrumentation changes subtly with time, and sometimes sites change location slightly. As a result, a somewhat opaque “homogenisation” method is applied which assumes that nearby stations, perhaps 1,000 km apart, follow the same trend.

      This could all be fine, but the details of station selection (or rejection) remain somewhat obscure. Until everything is transparent there will always remain a doubt that the final result may have been manipulated slightly. I am pretty sure there still remains a latent bias in the homogenisation assumptions.
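The neighbour-comparison idea behind homogenisation can be sketched in a few lines: build a reference from nearby stations, measure how the candidate's offset from that reference changes across a known break point, and shift the later segment to remove the step. This is a hypothetical, drastically simplified version – real pairwise homogenisation algorithms also have to detect the break points themselves:

```python
from statistics import median

def homogenise(candidate, neighbours, break_idx):
    """Adjust the segment of `candidate` after a known break so that its
    offset from the neighbour median matches the pre-break offset.
    candidate: list of annual values; neighbours: list of such lists."""
    ref = [median(vals) for vals in zip(*neighbours)]
    diff = [c - r for c, r in zip(candidate, ref)]
    shift = median(diff[:break_idx]) - median(diff[break_idx:])
    return candidate[:break_idx] + [c + shift for c in candidate[break_idx:]]

# A station relocation introduces a spurious -0.5 step after year 3;
# the neighbours keep warming smoothly.
cand = [10.0, 10.1, 10.2, 9.8, 9.9, 10.0]
neigh = [[10.0, 10.1, 10.2, 10.3, 10.4, 10.5],
         [9.9, 10.0, 10.1, 10.2, 10.3, 10.4]]
print(homogenise(cand, neigh, 3))
```

The doubt raised above is about exactly this step: the adjustment inherits whatever trend the chosen neighbours share.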

      • Hugo says:

        Thank you for the reply.

        About the collection points, is there more relevant data available?
        Type, make, date of manufacture, method of calibration, location (woods, shade, on top of a building, on the runway of an airport), way of collecting, measurement interval, urbanisation grade, etc.

        I read something about lots of collection points being discarded because of their location. Too far away!
        And lots of collection points are now in urban areas, where before they stood in the middle of nowhere.

    • Alex says:

      I’d be curious to see the average earth temperature vs the total surface area of (desert + urban areas + agricultural land).

  2. chapprg1 says:

    Clive, excellent review as usual;
    The model outputs being published obviously still include the figment of positive atmospheric loop feedback. I need not school feedback system engineers on this point as it has been done on the internet many times by eminently qualified feedback system engineers such as refinery design consultants.

    Atmospheric greenhouse blankets add no new energy to the system so far as atmospheric feedback is concerned. Blankets do not add energy to a system but merely slow the escape of power from the source, thus requiring the system to warm in order to keep the power flowing in equal to the power flowing out of the closed system.

    The Arctic ice melt feedback now finally being ‘mentioned’ will of course provide a positive feedback term from additional solar power capture. It would be interesting to see some quantitative estimates of this. Keep in mind that the newly exposed ocean is subject to a Cos^2(lat) x Cos(lat) solar intensity geometric term. Cos^3(77 deg) = ~1% of the maximum possible at the equator, and it rapidly approaches 0% as the melting ice reaches the pole (all annual averages). Be d*** glad it is going warmer, since if it were going in the cooling direction, toward say 30 deg, the term would be increasing rapidly toward 65%, as it does in the ice ages, and quite probably plays a strong hand in their 90,000-year duration.
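Taking the Cos^3(lat) geometric term above at face value, the quoted percentages are easy to check:

```python
import math

def cos_cubed(lat_deg):
    """The Cos^3(latitude) geometric factor quoted in the comment,
    as a fraction of the equatorial maximum."""
    return math.cos(math.radians(lat_deg)) ** 3

print(round(cos_cubed(77) * 100, 1))  # percent of maximum at 77 deg
print(round(cos_cubed(30) * 100, 1))  # percent of maximum at 30 deg
```

This reproduces the ~1% figure at 77 degrees and ~65% at 30 degrees; whether Cos^3 is the right geometric factor for annually averaged ocean absorption is a separate question.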

    Thank you for your continuing attention to this AGW egregious attack on common sense and science.
    Ron Chappell, arationofreason

    • Clive Best says:

      Thanks Ron,

      What doesn’t change in the short term is the insolation at the poles. The North Pole spends 6 months in darkness so the melting ice feedback is zero. There will always be winter sea ice in the Arctic. However multi-annual ice cover will continue to reduce.

  3. Tony says:

    “egregious attack on common sense and science”

    Egregious: outstandingly bad; shocking. Gee, you have rather high standards Ron.

    The AR5 graph above shows the instrument temperature about 0.1C below the model average. Will you be saying the same thing if we have a couple of warm years and it goes 0.1 above the average?

  4. Pingback: 30 years of IPCC assessment reports – How well have they done? | Clive Best – Red Candle Blog

  5. Gerard says:

    Is it true that the original version of CMIP6 included a range of solar forcings which, when used in existing models, meant that the only way the models would work was if the role of CO2 was set to zero? Modellers then asked that these solar forcings be removed from the data so they could go back to business as usual.
    https://youtu.be/NYoOcaqCzxo

  6. Scott Trail says:

    Clive,

    Fantastic article. It is obvious there are those pushing an agenda, using the most dire climate model predictions for political gain, despite the data. I’m glad you have objectively analyzed the data and pointed that out.

    I see discussions of feedback loops in the comments. How do the current climate models account for clouds, ocean currents, snow cover, and other mechanisms that drive our climate?

    Scott

  7. Mick Wilson says:

    Hi Clive, and long time, no see.

    I, too, have sought to take my own analytical journey into the projections. My initial question was a simple one: in how many stations with “sufficiently long-term data” could Mann’s hockey stick be detected? If Mann’s aggregated case was compelling, then logically there ought to be many individual cases contributing to it, so I thought to seek them out.

    My second question harkens back further, to my university studies in numerical methods and propagation of errors. Since the first visit we had from CRU to UNEP/GRID back in about 1992, I have been very uncomfortable with the lack of confidence statements about projection results. I am not impressed that parameter sweeps yield envelopes of results. I wish to see how the imprecision of physical quantities embedded in model inputs impacts numerically on projected physical states – how does model X propagate an initial uncertainty of 0.n% over 2,000,000 hourly iterations? On 5-degree cells? On 1-degree cells?

    I get the impression you have related concerns, hence my signing up to your site.

    • Clive Best says:

      Hi Mick,

      Yes, it is a long time. If you remember, it used to be called Global Change and included environmental degradation via remote sensing. The greenhouse effect is really not so hard to understand, but modelling the global climate is impossible to understand, because it includes many ad hoc assumptions and parameterisations of very complex things like clouds, carbon cycles, and farming! Even then the models don't do so well, because they all diverge into the future – hence the term “projections”. The models all diverge because they all use different fudge factors. You are right about error propagation: any small errors rapidly diverge these projections – hence models are tuned so as to at least describe the past. One way they do that is by adjusting aerosols. Aerosols cool the planet by reflecting sunlight, so if the models need damping to reproduce post-1980 temperatures then the aerosol knob is turned up.

      Whenever models are compared to data they rely on comparing temperature anomalies, and the relative agreement depends on the baseline period to which the anomalies are referenced. Check out The baseline problem.
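A quick way to see the baseline effect: re-baselining shifts an entire anomaly series up or down by a constant without changing its trend, so the apparent model-data offset depends on the chosen reference period. A toy illustration with invented numbers:

```python
def anomalies(series, years, base_start, base_end):
    """Express a temperature series as anomalies relative to the mean
    over the baseline period [base_start, base_end] inclusive."""
    base = [t for y, t in zip(years, series) if base_start <= y <= base_end]
    ref = sum(base) / len(base)
    return [t - ref for t in series]

years = list(range(1961, 1971))
temps = [14.0, 14.1, 14.0, 14.2, 14.1, 14.3, 14.2, 14.4, 14.3, 14.5]
a1 = anomalies(temps, years, 1961, 1965)  # early baseline
a2 = anomalies(temps, years, 1966, 1970)  # late baseline
# Same shape, but a constant offset between the two versions:
print(round(a1[0] - a2[0], 3), round(a1[-1] - a2[-1], 3))
```

Because a warming series has a higher mean in a later baseline period, choosing a later baseline pushes every anomaly down by the same constant – which can make a model look closer to, or further from, the data.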

      So where are we? Well, the earth is clearly warming and so far has warmed by about 1C on average. Warming is concentrated mainly at higher latitudes and on land. My gut feeling is that the earth will probably warm another degree by 2100, but this is not really a big problem, since the last interglacial was probably slightly warmer. So we have 50 years to find an alternative source of energy. I am unconvinced that renewable energy can ever work, partly because it is not really renewable: you need steel, heavy machinery, shipping, silicon (molten sand), lithium, cobalt, etc. So it has to be nuclear energy – eventually nuclear fusion.

      cheers!

  8. Dan Pangburn says:

    Average global water vapor has been accurately measured by NASA/RSS using satellite-based instrumentation for more than three decades. It has been increasing with a trend of 1.47% per decade. According to simple calculations using data from Hitran, increasing WV has been 10 times more effective at increasing average global ground-level temperature than increasing CO2. Increased cooling from increased CO2 in the stratosphere counters the small warming from increased CO2 at ground level. Because the amount that WV can increase is limited, global warming is limited and there is no emergency. https://watervaporandwarming.blogspot.com

  9. Harry tenBrink says:

    Dan
    You even pop up here with your WV, without a reference.

  10. Harry tenBrink says:

    Dan
    You are completely wrong about the stratosphere cooling the surface, as was shown to you at ResearchGate.

    • Dan Pangburn says:

      You continue to fail to understand what I said. I can explain it to you but I cannot understand it for you.

      • Given a heat of vaporization of H2O of ~0.4 eV, a global warming of ~1C over 3 decades will raise the partial pressure of water vapor by ~1.5% per decade as well.

        Conventional GHG theory says that CO2 acts as a catalyst, boosting the H2O vapor pressure, which via an Arrhenius-rate positive feedback further enhances the GHG warming until it reaches an equilibrium set-point.

        OTOH, you are saying that the higher H2O is just a spontaneous byproduct of a change in temperature, with no cause associated with that change.

        Nice job of circular reasoning leading you back to where you started.
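The ~0.4 eV figure can be pushed through the Clausius-Clapeyron relation, d(ln p)/dT = L/(kT^2): at T ≈ 288 K this gives roughly 5.6% more saturation vapour pressure per kelvin, so ~1C of warming spread over three decades corresponds to roughly 1.5-2% per decade. A back-of-envelope check, not a climatological calculation:

```python
def cc_fraction_per_K(L_eV=0.4, T=288.0):
    """Clausius-Clapeyron: fractional increase in saturation vapour
    pressure per kelvin, d(ln p)/dT = L / (k T^2), with the latent heat
    L per molecule in eV and Boltzmann's constant in eV/K."""
    k = 8.617e-5  # Boltzmann constant, eV per kelvin
    return L_eV / (k * T * T)

per_K = cc_fraction_per_K()        # fractional increase per kelvin
per_decade = per_K * (1.0 / 3.0)   # ~1 C warming spread over 3 decades
print(round(per_K * 100, 1), round(per_decade * 100, 1))
```

This only says the observed ~1.5%/decade is about the size Clausius-Clapeyron would predict from the warming itself; it does not settle which is cause and which is effect.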

      • 1.5% per decade is about what GHG theory would predict. Keep talking in circles Dan.

        • Dan Pangburn says:

          The difference is small. Apparently you missed it. Or perhaps you did not even look.
          The observation that WV leads is corroborated by simple calculations using data from Hitran.

          • Ridiculous — compare the arguments

            The rest of the world: Increase in warming by addition of atmospheric CO2 provides a catalyst for increases in water vapor leading to further warming

            Dan Pangburn: Farmers are irrigating their crops with more water, leading to warming

          • Dan Pangburn says:

            PP,
            Apparently you are too stubborn to look or you would know better. Land area currently under irrigation is more than 4 times the area of France. That land was previously dry or they would not go to the expense of irrigation. Sources of the increased WV are calculated using published data. Average global WV has been accurately measured using satellite instrumentation by NASA/RSS since Jan 1988. The analysis is included in https://watervaporandwarming.blogspot.com

            The difference in slope of WV increase trends between measured and calculated from HadCRUT4 temperature increase is about 27 % (Fig 7). That is, WV has increased about 27% faster than it would if by temperature increase alone, and this comparison is conservative because other things besides WV have contributed to warming. Any exceedance demonstrates that another source of WV is leading.

            The Hitran based assessment showing that WV increase has contributed about 10 times more to temperature increase than CO2 increase is corroborative.

  11. brink1948 says:

    Dan
    Hitran is ONLY a radiative transmission code. You know that, don’t you?
    How then do you extract temperature from it?
    A second issue is that WV varies strongly in space and time. You never replied to our questions on ResearchGate about how this high variability is accounted for in your trend, given a quoted precision of one part in ten thousand.
    Harry ten Brink
