Whatever happened to the Global Warming Hiatus?

The last IPCC assessment (AR5, 2013) showed a clear pause in global warming lasting 16 years, from 1998 to 2012 – the notorious hiatus. As a direct consequence, AR5 estimates of climate sensitivity were reduced, and CMIP5 models appeared to clearly overestimate warming trends. Following the first release of HadCRUT4 in 2012, the ‘headline’ was that 2005 and 2010 were now marginally warmer than 1998. This was the first dent in the hiatus. Since then each new version of HadCRUT4 has shown further incremental warming, such that by 2019 the hiatus has completely vanished. Anyone mentioning it today is likely to be ridiculed by the climate science community. So how did this reversal happen within just seven years? I decided to find out exactly why the post-1998 temperature record changed so dramatically in such a short period of time.

In what follows I always use the same algorithm as CRU for the station data and then blend the result with the Hadley SST data. I have checked that I can exactly reproduce the latest HadCRUT4.6 results based on the current 7820 stations from CRU merged with HadSST3. Back in 2012 I downloaded the original CRUTEM3 station data from CRU, and I have also downloaded the latest CRUTEM4 station data.
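As a sketch of what this land/SST blending involves: anomalies from both sources are combined cell by cell on a latitude-longitude grid and then area-weighted by the cosine of latitude. The grid shape, the simple land-fraction weighting, and all names below are my own illustration, not CRU's actual algorithm.

```python
import numpy as np

def blend_land_sst(land_grid, sst_grid, land_frac):
    """Blend gridded land-air and SST anomalies into a global mean.

    land_grid, sst_grid : 2-D arrays (36 lat x 72 lon, 5-degree cells)
    of monthly anomalies, NaN where a cell has no data.
    land_frac : fractional land coverage per cell (0..1).
    Illustrative only; HadCRUT's real blending differs in detail.
    """
    # Cell-by-cell blend: use whichever source exists; where both exist,
    # weight by land fraction.
    blended = np.where(np.isnan(land_grid), sst_grid,
              np.where(np.isnan(sst_grid), land_grid,
                       land_frac * land_grid + (1 - land_frac) * sst_grid))
    # Area weight = cos(latitude) of each 5-degree cell centre
    lats = np.deg2rad(np.arange(-87.5, 90.0, 5.0))
    w = np.cos(lats)[:, None] * np.ones_like(blended)
    mask = ~np.isnan(blended)
    return np.nansum(blended * w * mask) / np.sum(w * mask)
```

The monthly global means produced this way are then averaged into annual anomalies before plotting.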

Figure 1 compares the latest HadCRUT4.6 results with the last version of HadCRUT3.

Figure 1. Comparison of HadCRUT3 and the latest HadCRUT4.6. Notice how all trends pivot around the 1998 El Nino peak.

I had assumed that the reason for the apparent trend change was that CRUTEM4 had added many new weather stations in the Arctic (while removing some in South America), and that additionally the SST data had been updated (HadSST2 moved to HadSST3). However, as I show below, my assumption simply isn’t true.

To investigate, I recalculated a ‘modern’ version of HadCRUT3 using only the original 4100 stations (those used by CRUTEM3) taken from the CRUTEM4 station data. The list of these stations is defined here. I then merged these with both the older HadSST2 and with HadSST3 to derive annual global temperature anomalies. Figure 2 shows the result. I get almost exactly the same values as the full 7820 stations in HadCRUT4. It certainly does not reproduce HadCRUT3!
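The subsetting step itself is simple once the station files have been parsed into per-station series. A minimal sketch, assuming station records keyed by ID (the names and data layout are illustrative, not CRU's file format):

```python
def select_stations(all_stations, crutem3_ids):
    """Keep only stations whose IDs appear in the original CRUTEM3 list.

    all_stations : dict mapping station id -> monthly anomaly series
                   (e.g. the full CRUTEM4 set of ~7820 stations).
    crutem3_ids  : iterable of the ~4100 original CRUTEM3 station ids.
    """
    keep = set(crutem3_ids)
    return {sid: series for sid, series in all_stations.items() if sid in keep}
```

The selected subset is then gridded and blended with the SST data exactly as before, so any difference in the result comes from the station values themselves.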

Figure 2. The black curve is based on “modern” CRUTEM3 stations combined with HadSST3, and the yellow curve is “modern” CRUTEM3 stations with HadSST2.

This result provides two conclusions.

  1. Modern CRUTEM3 stations give a different result from the original CRUTEM3 stations.
  2. SST data are not responsible for the difference between HadCRUT4 and HadCRUT3.

To confirm point 1, I used exactly the same code to regenerate the HadCRUT3 temperature series using the original CRUTEM3 station data, as opposed to the ‘modern’ values based on CRUTEM4.

Figure 3: Comparison of HadCRUT3 with my calculation using the original CRUTEM3 station data.

The original CRUTEM3 station data are those I downloaded in 2012, combined here with HadSST2 data. Now we see that the agreement with the HadCRUT3 annual temperatures is very good, and indeed reproduces the hiatus.

So the conclusion is very simple. The monthly temperature values in over 4000 CRUTEM3 stations have all been continuously changed, and it is these changes alone that have transformed the 16-year-long hiatus in global warming into a rising temperature trend. Furthermore, all these updates have only affected temperatures AFTER 1998! Temperatures before 1998 have hardly changed at all, which is the second requirement needed to eliminate the hiatus.
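One direct way to check a claim like this is to diff the archived 2012 station series against the current ones, station by station, and record when each first diverges. A sketch of that comparison (the array shapes and the 0.01 °C tolerance are my own assumptions):

```python
import numpy as np

def first_divergence_year(old_series, new_series, years, tol=0.01):
    """Return the first year in which any monthly value differs between
    the archived and current versions of a station record by more than
    `tol` degC, or None if the two versions agree throughout.

    old_series, new_series : arrays of shape (n_years, 12)
    years : sequence of the n_years calendar years
    """
    diff = np.abs(np.asarray(new_series, float) - np.asarray(old_series, float))
    changed = np.where(np.nanmax(diff, axis=1) > tol)[0]
    return years[changed[0]] if changed.size else None
```

Running this over every common station and histogramming the divergence years would show directly whether the changes cluster after 1998.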

P.S. I am sure there are excellent arguments as to why pair-wise ‘homogenisation’ is wonderful but why then does it only affect data after 1998?


About Clive Best

PhD in High Energy Physics. Worked at CERN, Rutherford Lab, JET, JRC, OSVision.
This entry was posted in AGW, climate science, IPCC, UK Met Office. Bookmark the permalink.

92 Responses to Whatever happened to the Global Warming Hiatus?

  1. Thank you, a very interesting post.

  2. paulski0 says:

    The lowering of the lower likely bound on sensitivity had nothing directly to do with “the hiatus”. It was primarily a response to the weakening of the anthropogenic aerosol forcing best estimate.

    HadCRUT4 was released in the first few months of 2012 and was the primary surface temperature data source used in IPCC AR5. The AR5 reported 1998-2012 trend was 0.05°C/decade and the current 1998-2012 trend in HadCRUT4 is 0.055°C/decade.

    Note that CRU did not introduce any major new homogenisation scheme of their own into CRUTEM4 compared with CRUTEM3. In fact the opposite – they removed some of the homogenisation they had previously applied because the National Meteorological Services which they use as their sources provided their own homogenised series. The CRUTEM4 write-up notes that there are now (as of 2012) only 219 station series with any CRU-applied adjustments, out of about 5000 series, and there doesn’t seem to be any change in homogenisation practice for those 219. However, they do note some errors of application regarding previous homogenisation assessments which were corrected in CRUTEM4 e.g.:

    These comparisons showed that the adjustments for stations in the Southern Hemisphere (SH) outside of Africa reported by Jones et al. [1986] had not actually been applied to the station data used in CRUTEM3.

    • Clive Best says:

      You’re right. The first version of HadCRUT4 was released shortly before AR5. This first version showed a mild increase in temperatures such that 2005 and 2010 were slightly above 1998.

      Box TS.3: “For example in HadCRUT4 the trend is 0.04C per decade over 1998-2012”. Had they instead used HadCRUT3, the trend would have been negative.

      I suspect most of the homogenisation responsibility lies with GHCN. They are continuously refining it. Perhaps CRU just use their results. I am not sure.

      • paulski0 says:

        Odd, it says 0.05C per decade in the main report and the SPM, but never mind. The current HadCRUT4 1998-2012 trend is 0.055C per decade, so it really hasn’t changed much since 2013 for that period. I think the increased number of stations in recent years caused that small uplift.
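        For reference, the trends being compared here are ordinary least-squares slopes over the annual anomalies, scaled to degrees per decade. A minimal version (my own sketch, not the IPCC's code):

```python
import numpy as np

def decadal_trend(years, anomalies):
    """OLS linear trend of annual anomalies, in degC per decade."""
    slope, _intercept = np.polyfit(years, anomalies, 1)
    return 10.0 * slope

# Synthetic check: a series rising at exactly 0.0055 degC/yr over 1998-2012
yrs = np.arange(1998, 2013)
vals = 0.0055 * (yrs - 1998) + 0.4
# decadal_trend(yrs, vals) gives 0.055 degC/decade
```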

        It looks to me like the additional data from stations which dropped out at various points post-1990 plus improved coverage from entirely new stations explains everything here.

        I don’t think they do use GHCN adjusted data, but I also don’t think the net effect of adjustments in GHCN or Berkeley is noticeable over 1998-2012 anyway. So I can’t see any basis for blaming homogenisation. The Berkeley Land-only trend over 1998-2012 from Raw data is slightly greater than the adjusted trend, and quite a bit greater than CRUTEM4.

      • Steven Mosher says:

        “P.S. I am sure there are excellent arguments as to why pair-wise ‘homogenisation’ is wonderful but why then does it only affect data after 1998 ?”

        CRU don’t use pairwise.
        CRU get data from the NWSs.
        Before, they had a mix of NWS data and GHCN data.
        But then Climategate…

  3. paulski0 says:

    Another thing to note, looking at figure 1 in the CRUTEM4 paper, is that there was a very large NH station dropoff after 1990 in the CRUTEM3 database, which was substantially improved for CRUTEM4. So while you may have the same stations in your CRUTEM3 and CRUTEM4 datasets, a high fraction of the CRUTEM3 stations won’t have any data after 1990.

    • Clive Best says:

      When CRUTEM4 first came out in 2012 I studied the changes in stations. Red shows the new stations and blue shows those dropped.

      Initially the resulting change in temperature was rather small.

      However this warming trend has grown with time. The original post is here.

  4. CMay says:

    Clive, I witnessed the change in the RSS data between the beginning of the month and the end of the month. Now the RSS and H4 data parallel one another, whereas RSS used to parallel UAH.

    Is this just more shenanigans and deceit?

    It has been a while for me. It is good to be back.

    • Clive Best says:

      I must admit that I don’t yet understand what changes were made to RSS V4.


      The method used to make adjustments for drifting satellite measurement time was changed.

      This also brings them into better agreement with GISS.

      • CMay says:

        On the RSS data, you may remember when Monckton used the RSS data to show the pause. It took me a while to find it but this may account for the changes.

        “Mears also wrote on his website he was getting a lot of questions from those worried “denialists” were using his satellite data to cast doubt on global warming, which has some skeptics worried he’s looking for an excuse to find warming.

        “I suspect Carl Mears grew tired of global warming ‘denialists’ using the RSS satellite data to demonstrate an 18-year ‘pause,’” Spencer said. “So, now that problem is solved.”

        I don’t analyze GISS simply because I recall how they adjusted the dust bowl year colder so they could show more warming.

        We are plagued by progressives with a climate change agenda. Ethics and morality are not constraints for them. The ends justify the means.

        • cce says:

          When RSS ran cooler than UAH, Spencer told skeptics if they wanted the lowest possible trend, they should use RSS, which of course they did. When UAH 6 came out, it had a trend comparable to RSS 3.3, which S&C had considered biased low just a few years before. Much applause. But when RSS 4 came out and had a larger trend, suddenly they were the villains again. I say “again” because RSS had the highest trend for a long time, but eventually sunk below UAH. That was when it became the star of Monckton’s “no warming since X” charts.

        • Olof R says:

          I think Mears et al realised that something was wrong with the RSS v3.3 diurnal drift correction of AMSU satellites (1998 and on), since their dataset was biased low compared to UAH 5.6 which only used “reference” AMSU satellites with little or no drift, that didn’t need correction.

          RSS validated their v4 AMSU correction using the experimental series REF_SAT and MIN_DRIFT, ie satellites with no or little drift.

          Spencer and Christy actually “unvalidated” their new v6 AMSU drift correction, since it disagreed with their own AMSU reference series (v5.6). They published their method paper in an obscure Korean journal, which maybe explains why this obvious flaw passed review unnoticed.

  5. Pingback: Update on Warming Hiatus | Science Matters

  6. Pingback: Whatever happened to the Global Warming Hiatus? – Climate Collections

  7. Another Ian says:


    In case you haven’t seen what Chiefio has been doing with GHCN data here’s a pointer

    “GHCN v3.3 vs v4 Baseline End 2015”


    More posts on that set of comparisons there

    • Clive Best says:

      No I hadn’t seen that, but I recently calculated the temperature anomalies for V4C and V4U (‘uncorrected’) and compared them to V3 using my 3D algorithm.

      There is a noticeable post-1998 boost in warming in V4 as well. I think Chiefio is seeing that the underlying data have also been changed. That sounds to me like they may have introduced a new ‘homogenisation’ algorithm. I can’t believe it is due simply to having more stations.

  8. Nick Stokes says:

    “I am sure there are excellent arguments as to why pair-wise ‘homogenisation’ is wonderful”
    Yes. But hardly relevant here, since AFAIK CRUTEM don’t use it. In fact I’m surprised by the results here, as I would not expect the existing station data to have changed very much, unless there is a revised homogenisation at source. I think it would be useful to compare the two CRUTEM datasets directly to identify and analyse the changes.

    • paulski0 says:


      See the NH station count in Figure 1 of the CRUTEM4 report. There was a huge decline in CRUTEM3 after 1990 which was largely plugged for CRUTEM4. Clive may be trying to compare like-for-like with CRUTEM3 and CRUTEM4 stations, but the fact is that most of the stations he’s using don’t have any data in the CRUTEM3 database during the period of interest, whereas they do in CRUTEM4.

      A similar dropoff occurs in the Southern Hemisphere from the mid-2000s in CRUTEM3, but not CRUTEM4. That’s also about when the SH divergence between CRUTEM3 and CRUTEM4 begins (early CRUTEM4, the current version shows earlier divergence, likely due to the addition of more stations).

  9. The previous versions of HADCRUT4 are here


    and you can see the steady upward creep. For example, in the versions 4.1 – 4.6 the 2013 figure goes up like this:


  10. It looks as though GISS are doing the same thing. See this graph comparing 2017 and 2019 versions.


    • Nick Stokes says:

      Very unwise to rely on a Heller graph. You may notice that it claims to be supported by a spreadsheet, but the spreadsheet finishes in 2017. The GISS history page is here. There is no perceptible difference between 2017 and 2019. I have a GISS file from 11/2015. The annual averages, compared with latest, are:
      Year   11/2015   2019
      2014    0.74     0.73
      2013    0.65     0.65
      2012    0.63     0.62
      2011    0.60     0.59

      • No, you are looking at the wrong data set Nick. Heller clearly says he’s looking at the land data. Yours look more like land+ocean.

        • Nick Stokes says:

          I’m looking at the standard GISS dataset that is widely known and publicised. This is a standard Heller trick where he pulls some obscure data and presents it as if it were the main set that everyone knows – which has not changed. GISS does not have a land index. It does have a “Met stations only” index, which is a continuation of the paper of Hansen and Lebedeff, 1987, where lacking SST data they sought to use met stations to represent the whole globe. No-one ever cites that except for some gotcha purpose.

          • You are talking complete nonsense.

            1. It is not “obscure data”, it’s clearly posted on the GISS site.

            2. They do have a land index. They call it “Annual Mean Temperature Change over Land and over Ocean” on their website.

            3. You claim you are posting the latest numbers from the standard GISS dataset, but you are not. Their latest number for 2013 is 0.68, not 0.65 as you write. Have Gavin and his friends fiddled the numbers upwards again since you wrote your comment?

          • Nick Stokes says:

            “They do have a land index”
            They do not have a land index. They maintain and post two indices. One is described on their front page:
            Combined Land-Surface Air and Sea-Surface Water Temperature Anomalies (Land-Ocean Temperature Index, LOTI) file GLB.Ts+dSST.txt

            The other is the Met stations index. This is mentioned only on their history page, where they say
            “For historical reasons we also maintain a calculation of the anomalies that would result if one only used the meteorological station data. This estimate is not affected by issues in ocean data processing, but because the land is warming faster than the ocean, it has a larger trend than the land-ocean index that is now our standard product.” The file is GLB.Ts.txt

            It is that latter index that is used in the spreadsheet that Heller provided, but up to 2017 only. His numbers are based on the historical data provided here. It is described as global there.

            GISS does not provide a monthly land index, but they do graph an annual land and ocean breakdown of the LOTI index. There are no historical values provided for that.

            What Heller has done is to graph the Ts index for 2001 and 2017, per the spreadsheet, with the Land only values for 2019. They are quite different things. A giveaway is that his 2019 plot reaches a maximum of about 1.46C for 2016, which is the Land value. The Ts value is 1.26 for 2016.

            There is a correct graph, corresponding to Heller’s, of the historic Ts on the History page.

            Heller gets away with this stuff because he knows that his readership just doesn’t care what he puts in his graphs.

          • Nick Stokes says:

            ps I dug into the gory details. The graph work in his spreadsheet is not in the displayed “graphs” sheet, which only goes to 2017, but in the “Copy of Graphs” sheet, which goes to 2019. Column F is headed 2017 version, but it is in fact the GISS Ts data, as indicated in the sheet labelled 2017_07_v3_GLB.Ts. Column G is labelled 2019 version, but is in fact the Land annual averages, as shown in the page you linked, or his sheet labelled (wrongly) “graph” (at the end). They are different things. But Heller never explains his graphs, and his readers don’t care.

          • They do have a land index. I gave you the link. It’s described as “Land Surface Air Temperature” on their website! You’ve written a string of falsehoods here.

          • Nick Stokes says:

            ” It’s described as “Land Surface Air Temperature” on their website! You’ve written a string of falsehoods here.”

            It is not an index. It is a graph of annual data. It is on a page titled “Analysis Graphs and Plots”. Nowhere does GISS describe it as an index.

            What is an absolute falsehood is your citation of Heller’s dishonest graph. It takes 2017 data from one actual (but obscure) global index, Ts, as at 2017 (and at 2001), and data from this land temperature graph for 2019, and claims that the difference indicates GISS data tampering. And you echo that, and don’t seem to care what it is based on.

            However, I was unfair in saying that none of Heller’s readership cares. I see that someone has noted that the 2019 data is not Ts data. That comment has been there for 24 hours, but no-one has responded.

  11. Benjamine Dover says:

    All those charts are making my head hurt. Can someone just tell us what the global temperature should always be?

    I would like to compare it to the global temperature reading recorded from all the calibrated temperature sensors in the 1500’s.

    • Andy Espersen says:

      That is one way of looking at it. Another is to ask the simple question: does it matter whether the pause is over or not? Global temperatures can be expected to rise another degree or so over the next couple of hundred years anyway, if this Dansgaard-Oeschger cycle runs true to average length (and maybe even a bit higher with rising CO2 levels!). Don’t panic! Don’t flee to Antarctica just yet!

  12. vancenm22 says:

    AGW/CC has become an industry for power and graft, as well as a political cult, that’s too big to fail. I’m looking forward to seeing how these grifters transition back to peddling “anthropogenic” cooling (as they did in the Seventies). The 400,000-year Antarctic ice core temperature record clearly shows that we’re at the bitter end of the latest interglacial period, and the latest solar minimum looks to be a combination of several solar cycles that may well lead to vastly reduced output, all of which our climate “scientists” completely ignore because it is inconvenient.

    • Windchaser says:

      “all of which our climate “scientists” completely ignore due to their inconvenience.”

      Eh? Read the scientific literature. They ignore neither of those. They’ve looked at each of them, extensively and in detail. I can point you to multiple papers examining how much CO2 we’d have to have in the atmosphere in order to avoid slipping back into the next glacial period.

      Scientists didn’t ignore these questions. They simply found that, when you crunch the numbers, the warming effects of CO2 outweigh the cooling effects of this orbital cycle or of a period with cooler solar irradiance.

      If you have calculations of your own that show that these will dominate over the warming effects of CO2, by all means, please present them. But… it kinda sounds like you’re not keeping up-to-date with what the science already is, much less doing calculations of your own.

  13. CMay says:


    Even though there is no future in it, I decided to analyze the H3 data that I had from 2013, which predates the big El Nino of 2016.

    I used the solution that I already had for the latest H4 data just to see if that would produce acceptable results.

    I am pleased. The solution is made up of 107 cycles.

    There are a few things worth noting in this figure. Dr. Curry and Dr. Lindzen have mentioned the warming from 1910 to 1945 and the warming from the 1970s until around 2000 as being roughly the same. This figure explains it. The 209-year cycle was split by the 67-year cycle. BTW, it also gave us Leonard Nimoy and “In Search of The Coming Ice Age.”

    The next figure highlights the fact that the slope of the pause line is negative for these data. The El Nino of 2016 changed all this.

    The next figure gives you the whole story until 2100.

    BTW, I have other data that show something quite similar. I did an analysis of the PSMSL tide measurements and it shows that a downward trend in sea level rise will happen shortly. It looks like the 209-year cycle will be responsible.

    How do the climate models compare with the measured data? Quite frankly they all suck except for low values of ECS. The internet seems to keep championing the IPCC worst-case scenario even though it is terrible. How can you create an urgent need for political action with a low value of ECS?

    I hope this helps somebody. I have put a lot of effort into it. What I have for the H4 data looks like what you see here.

    • Windchaser says:

      As somebody who’s done quite a bit of Fourier analysis myself: how do you know you’re not just curve-fitting?

      • CMay says:

        I too have long experience with FFTs. I can remember the first desktop analyzer, the Nicolet 444.

        We used it on rotating equipment, with the goal of reducing the equipment’s vibration. I have 35 years of experience in this. I can’t recall a single time when a frequency identified with a good S/N ratio was false. This is not curve fitting.

        • Windchaser says:

          This is not a system where the output (climate) is expected to be caused mainly by periodic forcings. That may’ve been the case for the output (vibrations) in your work, caused by your periodic input (rotations within the equipment). But climate is a very physically-different system, dominated by non-periodic forcings and stochastic events.

          While I admire your long experience with the subject, this still warrants a statistical analysis. Going with your gut is not enough; it’s not scientific.

          The beautiful – and dangerous – thing about Fourier analysis is that you can fit any smooth function with a sum of harmonics. That doesn’t imply that the harmonics have explanatory power.
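          That trap is easy to demonstrate: fit a large bank of harmonics to a random walk, which has no genuine cycles, and the in-sample fit looks superb while the extrapolation fails. A small illustration (every choice here is arbitrary, purely for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200.0)
y = np.cumsum(rng.normal(size=200))      # random walk: no true periodicity

n_train = 150
k = np.arange(1, 40)                     # 39 sine/cosine pairs + intercept
phase = 2 * np.pi * np.outer(t, k) / n_train
X = np.column_stack([np.ones_like(t), np.sin(phase), np.cos(phase)])

# Least-squares fit of 79 free parameters to the first 150 points
coef, *_ = np.linalg.lstsq(X[:n_train], y[:n_train], rcond=None)
fit = X @ coef

rmse_in = np.sqrt(np.mean((fit[:n_train] - y[:n_train]) ** 2))
rmse_out = np.sqrt(np.mean((fit[n_train:] - y[n_train:]) ** 2))
# The harmonic "model" repeats with period 150, so it tracks the training
# window closely but says nothing useful about what comes next.
```

          With that many free parameters the training error is tiny, yet the same curve extrapolated past the window typically misses by far more, which is exactly the curve-fitting risk being described.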

          • CMay says:

            I am going to respectfully disagree with you. Reading your first paragraph, some of what you say is conjecture. You used the word “expected”. That must be proven too.

            Perhaps, the best way to answer this is that I read Dr. Curry’s posts. She continues to maintain that climate models must account for natural variability. In a manner that is what I think I have done.

            I don’t think CO2 can be justified as the control knob, and the performance of the climate models is bad. Would you like our nuclear reactors to be designed with an equivalent program?

            If such models were used to get us to the moon I think we would have many more dead astronauts.

            I just don’t see how the climate models, in their current state, can be used to make policy decisions. Doesn’t anybody stand back, say they aren’t working, and then look for reasons why?

            I am done with Paul when he accuses me of coming up with 107 arbitrary cycles.

            On the cycles, the Eddy and DeVries cycle are in there. I can’t identify all of them and don’t feel the need to.

          • CMay said:

            “I am done with Paul when he accuses me of coming up with 107 arbitrary cycles.”

            But that’s exactly what you said you did. Quoting, you stated “The solution is made up of 107 cycles.”

            All I asked was that you name these cycles, because unless these are identified, they have been arbitrarily chosen and spaced to match the time-series.

          • Windchaser says:

            You can disagree or not, but unless you show that these cycles are statistically significant, or unless you can show that they have a physical cause, this work doesn’t pass scientific muster. You would (rightfully) not be able to get it published in any decent paper, and you would (rightfully) get criticism at any scientific conference.

            Why “rightfully”? Because most of us have tried fitting curves to data before, and we know how easy it is to get misleading results. Many of us have been suckered before.
            Heck, numerically, the reason Fourier transforms are so good for getting results is because they converge exponentially. Quite literally, they are mathematically optimal for fitting to a smooth function. There’s a proof for it.

            If you want to account for natural variability, you would do far more to demonstrate the physical mechanisms behind natural variability, or the actual physical modes.
            Take ENSO, for example. We understand physically how it actually works; how the tropical winds push warm water across the Pacific to build up a pool of warm water during La Nina and neutral conditions, which then contributes to a reversal of winds, a suppression of cold-water upwelling in the eastern pacific, and a redistribution of warm waters during the El Nino conditions.

            Science advances through greater understanding of the physics. While rough correlations often help point in the direction of where a real connection lies, just as often, the correlations turn out to be ephemeral.

            In the physical sciences, if you don’t have a plausible, testable physical mechanism to back up your statistical “finding”, you have nothing. And until you test that hypothesized mechanism, your conclusions are correctly seen as weak.

          • CMay says:

            Your reply was pretty nasty.

            I am going to look past it. I think it may come down to the fact that you have never had to work a real world problem.

            As I said, I worked on rotating equipment and the FFT never lied to me. It identified where I had a vibration issue to solve. Here is where you are missing it: the FFT identified the problem, but it did not tell me what was physically causing it. That was my job. I had to identify the physical basis, but that would have been impossible without the FFT.

            Was the issue rotationally ordered? Was it electrically related? Was it non-ordered, which might indicate a resonance? Without the FFT I was dead in the water.

            I did come up with a physical explanation for every one of them, and that was only enabled by the FFT. It is a vital tool. You can come up with the physical explanation later. BTW, I had to do a lot of research to find physical explanations for each and every one of them. That also made it possible to come up with the appropriate remedial action.

            I realize the climate is far more complex but maybe there is a chance you could use the FFT results to derive your physical explanation too.

            I already mentioned earlier that the Eddy cycle and the DeVries cycle are in there. I have another long cycle at 350 years which matches solar activity. So don’t tell me there is nothing physical in there. I also have the standard 11 and 22 year cycles.

            Earlier today I posted some NOAA graphs on Nino region 3.0.

            The March graph shows a predicted El Nino of 2.0 in July, and by the June graph it is gone.

            If NOAA was a subscription weather service I think you might ask for your money to be returned. The GCMs are crap. They may be physical models but their projections suck.

            I’ll give you mine and before we get there the physical model output will look more like mine. On a monthly basis the fit hardly changes.

            The model consists of monthly data till 1990, weekly data from 1990 until 2014, and daily thereafter.

            Almost every design improvement I implemented in my equipment came from test data. The models weren’t up to it. Sounds familiar.

          • CMay said:

            “I already mentioned earlier that the Eddy cycle and the DeVries cycle are in there. I have another long cycle at 350 years which matches solar activity. So don’t tell me there is nothing physical in there. I also have the standard 11 and 22 year cycles.”

            OK, that’s 5 out of 107. How about the other 102 cycles?

    • Rachael Ford says:

      Can you explain in layman’s terms why SL should fall soon, and when, and at what rate, please? I have a physicist friend very interested in how SL projections are exaggerated and data faked.

      • CMay says:


        I would be glad to oblige. I like the PSMSL data. A portion of the record is prior to the end of the LIA around 1850, and it shows declining sea level. Below is my cyclic analysis of that record:

        This is a surprisingly good fit to the measured data. The correlation coefficient is quite high.

        Projecting forward a bit I get this:

        I was surprised by this but then where it bends over is what I am showing for H3 or H4 projections. I admit that I am not willing to bet the farm on this.

        I will have to check further but I think I have similar behavior shown in the C & W data.

        I hope you find this useful.

        • barry says:

          Would you please compare your (quadratic?) fit with 2 linear regressions broken at 1862?

          Because I think the linear fit beginning in 1862 is going to have a much better coefficient than the curve you have there. And we would not expect to see an upward trend until CO2 gets going if we were going with that theory.

          Also, you have simply invented the data for the dropping curve in the latter period, so you have done some pretty intense curve-fitting there – including fabricating data to make it look good!

          But what results do you get comparatively regarding correlation? Can we have some numbers? I trust you will report them accurately.

  14. “I am pleased. The solution is made up of 107 cycles.”

    You should be concerned about over-fitting. A Fourier series won’t represent global temperature variability accurately because the volcanic disturbances add delta spikes (with tail responses) that break up natural cycles. The delta spikes contain a wide range of Fourier components so these require many factors to be accurately fit, but will cause problems outside of the training interval.

    Or you can try fitting to ENSO, which is immune to volcanic disturbances, but is the biggest contributor to the global temperature variation.
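    The broadband nature of a spike is easy to verify numerically: the discrete Fourier transform of an isolated impulse carries equal power in every frequency bin, so a cycle-based fit has to recruit many components just to reproduce one eruption. For example:

```python
import numpy as np

n = 256
spike = np.zeros(n)
spike[100] = 1.0                       # an isolated "volcanic" impulse
power = np.abs(np.fft.rfft(spike)) ** 2
# Every one of the n//2 + 1 frequency bins carries the same unit power:
# an impulse is maximally broadband, the opposite of a few clean cycles.
```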

    • CMay says:

      My answer to you is not crisp. What I furnish below is in response to the volcano issue from some years ago:


      I sent this to you yesterday.

      Salvatore Del Prete says:

      July 8, 2014 at 10:39 am

      I have many other studies which show this to be fact which is one of the parts of my solar/climate connections.

      Quite right. Seismic activity is NOT independent of solar activity:

      NASA:Volcanic eruptions and solar activity
      The historical record of large volcanic eruptions from 1500 to 1980, as contained in two recent catalogs, is subjected to detailed time series analysis. Two weak, but probably statistically significant, periodicities of ~11 and ~80 years are detected. Both cycles appear to correlate with well-known cycles of solar activity; the phasing is such that the frequency of volcanic eruptions increases (decreases) slightly around the times of solar minimum (maximum). The weak quasi-biennial solar cycle is not obviously seen in the eruption data, nor are the two slow lunar tidal cycles of 8.85 and 18.6 years. Time series analysis of the volcanogenic acidities in a deep ice core from Greenland, covering the years 553-1972, reveals several very long periods ranging from ~80 to ~350 years and are similar to the very slow solar cycles previously detected in auroral and carbon 14 records. Solar flares are believed to cause changes in atmospheric circulation patterns that abruptly alter the earth’s spin. The resulting jolt probably triggers small earthquakes which may temporarily relieve some of the stress in volcanic magma chambers, thereby weakening, postponing, or even aborting imminent large eruptions. In addition, decreased atmospheric precipitation around the years of solar maximum may cause a relative deficit of phreatomagmatic eruptions at those times.

      I think of a volcano as being a singular event, and based upon my discussion with Dr. Whitlow a singular event would be treated as noise in the FFT analysis.

      Perhaps, if the Jovian planets can influence solar activity then they could also influence plate movements here on Earth.

      • “My answer to you is not crisp. “

        It’s soggy. I fit to ENSO and can use three fixed lunar tidal factors and the annual cycle to account for the behavior by applying Laplace’s Tidal Equations along the equator.

        This is versus your 107 arbitrarily-chosen cycles.

        • CMay says:

          The cycles are certainly not arbitrary. How do you go about picking 107 arbitrary cycles? I don’t want to try.

          The 107 cycles come from Dr. David Evans' Optimal Fourier Transform.

          BTW, I also analyze the four Nino regions. I have been documenting the NOAA projections for some time. Based on what I saw in March, they were predicting an El Nino by the end of the year.

          Here it is in June.

          If NOAA were a subscription weather service, I think you would want your money back. The GCMs are not working.

          • “The cycles are certainly not arbitrary.”

            They are arbitrary in the sense that they likely do not derive from any other known physical forcing. For example, no one does a conventional tidal analysis by using periods outside those specified as known tidal factors.

            So if your 107 cycles are not arbitrary, can you please label them or describe where they originated from — and “Dr. David Evans Optimal Fourier Transform” is not a valid response.
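The distinction being drawn here, fitting amplitudes and phases at fixed, physically motivated periods rather than letting the frequencies themselves float, can be illustrated with a short sketch. The periods below are illustrative stand-ins only (the two slow lunar tidal cycles quoted in the NASA abstract above, plus the annual cycle), not a real tidal constituent set:

```python
import numpy as np

# Illustrative fixed periods in years: the two slow lunar tidal cycles
# mentioned in the NASA abstract above, plus the annual cycle. A real
# tidal analysis would use the full set of known astronomical constituents.
KNOWN_PERIODS = [18.6, 8.85, 1.0]

def fit_known_cycles(t, y, periods=KNOWN_PERIODS):
    """Least-squares fit of sine/cosine pairs at FIXED periods plus a mean.

    Only amplitudes and phases are free parameters; the frequencies are
    pinned to known physical cycles rather than tuned to the data."""
    cols = [np.ones_like(t)]
    for p in periods:
        w = 2.0 * np.pi / p
        cols.extend([np.sin(w * t), np.cos(w * t)])
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, X @ coef

# Sanity check on synthetic data built from one of the known periods
t = np.linspace(0.0, 60.0, 600)
y = 0.5 * np.sin(2.0 * np.pi * t / 18.6)
coef, fitted = fit_known_cycles(t, y)
```

With only a handful of fixed frequencies, every fitted amplitude can be checked against a named physical cycle; with 107 freely chosen periods there is no such constraint, which is the objection being raised.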

  15. MarkR says:

    Hi Clive,

    This is interesting. What does a plot of the differences look like going back before 1990 if you use the same set of CRUTEM3 and CRUTEM4 stations?

    Have you looked at the gridded differences through time to see if there are particular regions that’re affected?

    Also, how do you define a “hiatus”?

    • Clive Best says:

      Yes I am starting to look into that. Hiatus just means pause – so a zero or negative temperature trend from 1998 to 2014.

      • MarkR says:

        I’ll be reading what you find. Looking forward particularly to seeing the longer term differences all the way back for the CRUTEM period.

        That’s a rather open definition of “pause” or “hiatus” – do 1978–1987, 1988–1997 and 2015–now also count?
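The working definition given above, a zero-or-negative trend over a chosen window, is easy to make operational, and doing so shows why the choice of window matters. A minimal sketch on a synthetic series (hypothetical numbers, not HadCRUT):

```python
import numpy as np

def trend_per_decade(years, anoms):
    """OLS slope of annual anomalies, in degrees C per decade."""
    return np.polyfit(years, anoms, 1)[0] * 10.0

def is_pause(years, anoms, start, end):
    """True if the trend over the inclusive window [start, end] is <= 0."""
    years = np.asarray(years, dtype=float)
    anoms = np.asarray(anoms, dtype=float)
    m = (years >= start) & (years <= end)
    return trend_per_decade(years[m], anoms[m]) <= 0.0

# Synthetic series: 0.15 C/decade warming to 1997, slight cooling after
years = np.arange(1979, 2015)
anoms = np.where(years < 1998,
                 0.015 * (years - 1979),
                 0.015 * (1997 - 1979) - 0.001 * (years - 1998))
```

On this toy series the 1998-2014 window qualifies as a pause while 1979-1997 does not; with real data the verdict shifts with both the window and the dataset version, which is exactly what is at issue in the post.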

  16. climanrecon says:

    Could it be that the changes in output in the 21st century are related to the fact that many station records end in this century, with 2007 and 2011 being very common end dates, at least according to the station header files from CRU? A similar issue can be seen in GHCNM, usually described as containing thousands of station records, when in fact most of those stations are no longer reporting data to GHCNM (though they are still operational), to the point where I would say that GHCNM is no longer fit for purpose.

    • Clive Best says:

      A basic problem is that you cannot get hold of the raw temperature measurement data used by H4. You only have their monthly averages and these keep changing.

      • Nick Stokes says:

        “A basic problem is that you cannot get hold of the raw temperature measurement data used by H4”
        The station monthly averages are here (22 Mb zip file). I’ve checked a few station averages present in both this and the 2011 version, and have seen very little difference.

    • Nick Stokes says:

      “in fact most of those stations are no longer reporting data to GHCNM”

      In May 2019, GHCN V4 has so far 8783 stations reporting. You can see a detailed sphere map here.

      • climanrecon says:

        Last time I looked at V4 (unadjusted), in Feb 2018, it had only 101 stations for the whole of Australia, woefully inadequate for doing homogenisation, which would require around 1000 stations, just for Australia. The rainfall data in V2 is even worse, a mere 29 stations for Australia.

        • Nick Stokes says:

          “woefully inadequate for doing homogenisation”
          That is an unsupported assertion. In fact 135 Australian stations reported in May 2019. That is perfectly adequate for estimating the average temperature.

          • climanrecon says:

            Yes, 135 properly homogenised stations would be adequate to represent the temperature history of Australia, but 135 unadjusted records are useless. ACORN-SAT uses thousands of stations for its homogenisation.

  17. johnmclean7 says:

    Nice work Clive.

    The 1998 point of overlap is very interesting. A plot of the differences might have been useful because it looks to me like the differences in annual values from 1990 to 1997 are almost identical and the differences after 1998 go through periods of a few years of similar differences and then switch.

    You might like to look at regional temperatures – hemisphere, quadrant, continent, latitude bands, longitude bands – to see how the HadCRUT3 and HadCRUT4 temperature patterns differ.

    Have you tried plotting coverage, preferably by hemisphere or even smaller regions, over the period of your graphs? Temperatures over land are much more variable than sea surface temperatures. Things get complex because 50% of the Earth’s surface is between 30N and 30S, and that’s where the direct influence of ENSO events is felt; the mid-latitude effects are more indirect or circuitous. And don’t forget that more than 75% of the tropics is ocean.

    It might even be worth identifying changes in station data up to 2014. I can’t immediately see how this would make HadCRUT3GL sometimes agree with HadCRUT4GL and sometimes not, but you have to remember that the plotted values are the net results of a lot of data processing, so the answer could lie in the individual components.

    John McLean
    (author of the HadCRUT4 audit published Oct. 2018)

  18. Dan Pangburn says:

    Most assessments do not correctly account for the WV increase.

    In the period 1988-2002 water vapor molecules increased more than 5 times as fast as CO2 molecules and about twice as fast as calculated from the average global temperature increase. Since 1900 WV molecules increased approximately 3.6 times as fast as CO2 molecules.

    The increased water vapor is countering the planet cooling due to the quiet sun and might prevent another Little Ice Age which the quiet sun portends.

    There are at least 7 compelling observations that CO2 has little or no effect on average global temperature. http://diyclimateanalysis.blogspot.com

    • DP said:

      “In the period 1988-2002 water vapor molecules increased more than 5 times as fast as CO2 molecules and about twice as fast as calculated from the average global temperature increase. Since 1900 WV molecules increased approximately 3.6 times as fast as CO2 molecules.”

      So here’s a guy trying to reconcile an expected increase in specific humidity due to AGW with cherry-picked gibberish.

      • Dan Pangburn says:

        Apparently you lack knowledge about water vapor. The increase in WV that I referred to has been measured as Total Precipitable Water (TPW) since 1988, not ‘expected’. It stopped increasing in about 2002-2005 except for the 2015-2016 el Nino and subsequent minor ones which are still playing out. The numerical data is measured by satellite and reported monthly by NASA/RSS. This is graphed as Figure 3 in http://diyclimateanalysis.blogspot.com along with a rational extrapolation to 1900. The numerical data is accessible through this: http://www.remss.com/measurements/atmospheric-water-vapor/tpw-1-deg-product

        Some of the calculations which show that the WV increase is about twice what results from the increased vapor pressure of warmer liquid water are shown in Section 8 of http://globalclimatedrivers2.blogspot.com This link also shows the likely source of the increased WV above that resulting from increased average global temperature.

        • A D-K diagnosis. The issue is that as AGW becomes more apparent, other causal effects will start to manifest themselves. For example, specific humidity will increase as average temperature increases. Unfortunately, can’t do anything about these D-K types that reverse the causality and try to argue the opposite. Your idea that humidity is spontaneously increasing and dragging the temperature along with it doesn’t pass the smell test.

          • Dan Pangburn says:

            Apparently you missed this “…WV increase is about twice what results from the increased vapor pressure of warmer liquid water…”. If you would take the blinders off and look at the links that I provided you might know better.

            Not only do you not grasp what I show, you are apparently too locked into your thinking to recognize what is wrong with your unsourced graph. It obviously is not GLOBAL water vapor. Globally, WV cannot change that quickly. Examples of GLOBAL WV (TPW) are shown here https://wattsupwiththat.com/2018/06/09/does-global-warming-increase-total-atmospheric-water-vapor-tpw/ . The slope of the trend of your graph is about 0.75% per decade, while the three global sources are about 1.63, 1.54, and 1.4% per decade since 1988; average about 1.5% per decade.

            If you weren’t too stubborn and/or arrogant to look at the other link you might have become aware of where the extra water vapor came from.

            It appears from your activities that you lack the necessary engineering/science skill to truly understand this stuff with the result that wrt climate you are ignorant and unaware of your ignorance. I identify some of the necessary skills in Section 1 of http://globalclimatedrivers2.blogspot.com Look there if you are curious enough to find out what you are missing.

          • You’re looking at a trend with roughly a single degree of freedom, trying to figure out what the humidity increase should be considering the range of climate conditions around the world. Essentially, what you’re doing is pointless because of your decision to concentrate on a confounding factor.

          • Dan Pangburn says:

            With statements such as this “…trying to figure out what the humidity increase should be…” you reveal your lack of knowledge. There is nothing wrong with that, nobody knows everything. But you fall behind when you put blinders on, close the door and refuse to look at anything that you don’t already know.

            Global average water vapor is MEASURED by satellite and reported monthly by NASA/RSS. I gave you links to the data and a graph of it. I also gave you a link to an article where two other sources corroborate the % increase.

            An analysis (Section 9 in http://globalclimatedrivers2.blogspot.com ) of the sources of the increased WV reveals that it is about 86% from crop irrigation. The increase in WV correlates with increase in irrigation which had substantial increased growth rate 1960-2005.

            The WV level appears to have leveled off in about 2002 until the 2015-2016 el Nino. (WV is a ghg which causes warming but its increase is also an effect of warming as mandated by its vapor pressure vs temperature characteristic and demonstrated by its increase during an el Nino).

          • DP said:
            “An analysis (Section 9 in http://globalclimatedrivers2.blogspot.com ) of the sources of the increased WV reveals that it is about 86% from crop irrigation. “

            Delusional grasping at straws. Why so much desperation?

          • Dan Pangburn says:

            Why so much stubbornness? Do the analysis yourself or identify what’s wrong with mine. All the data references are there.

          • You are delusional if you think you can reverse the causality.

  19. climanrecon says:

    HadCRUT4 obviously differs from HadCRUT3, but how close is either of them to the right answer? Maybe that is an unfair question, you could argue that the approach is “broad-brush” and cannot be expected to get exactly the right answer. One reason to doubt HadCRUT4 as the right answer in recent years is that an awful lot of the station records end in 2012, as can be seen by viewing the index file in the latest station data zip:


    Wordpad with landscape page set-up works well to view the index file.

    Take Brazil as an example: a few of its stations continue to 2016, but around 100 of them end in 2012. Thus, before 2012 the currently reporting stations make a negligible contribution to the Brazil average, but after 2012 they suddenly make a dominant contribution. There has to be a resulting distortion, negligible in the global average on its own, but the same thing happens in many countries.

    Note that the Freedom of Information decision against CRU came in 2011; did many of the suppliers of the data object to their increasingly valuable information being given away for free?
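The distortion described above, a sudden change in which stations contribute to the average, is easy to demonstrate with two made-up stations (toy numbers, not CRU data):

```python
import numpy as np

# Two hypothetical stations with identical 0.02 C/yr trends but different
# climates; the cooler one stops reporting after 2012, as many records
# in the CRU station files do.
years = np.arange(2008, 2017)
warm = 25.0 + 0.02 * (years - 2008)        # reports throughout
cool = 10.0 + 0.02 * (years - 2008)        # record ends in 2012

def naive_average(year):
    """Average the raw temperatures of whichever stations report that year."""
    i = int(np.where(years == year)[0][0])
    vals = [warm[i]] + ([cool[i]] if year <= 2012 else [])
    return float(np.mean(vals))

# The step from 2012 to 2013 is about 7.5 C, caused entirely by the
# change in station mix, not by any real warming.
jump = naive_average(2013) - naive_average(2012)
```

Anomaly methods with a common baseline are designed to suppress exactly this artefact, but residual distortion can survive when coverage and weighting change, which is why the gridded differences through time are worth examining.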

  20. The latest adjustment to disappear the hiatus is in sea surface temperatures, a new version called HadSST4. Needless to say, the new version has more warming than the previous one, HadSST3. In fact the raw data shows no warming over the pause period.


    So when this is incorporated into HadCRUT (HadCRUT5?) there will be yet another upward tweak in that.

    • Nick Stokes says:

      It seems HADSST4 is finally implementing the adjustment required by results in John Kennedy’s paper of 2011 – “Karlization”. Over the last 30 years, drifter buoys gave a big expansion of SST coverage, and made up an increasing fraction of the global average. But as Kennedy and others proved, with a huge number of coincident ship/buoy readings, the buoys returned temperatures on average 0.12°C lower, reducing the trend. This spurious effect was largely responsible for the “Pause”. In these cases, with a clear artefact, there is no choice but to adjust for it. HAD should not have delayed so long.

      On another topic, I see that there is no substantive answer to the data failure Genava and I were pointing out at Tony Heller’s blog. I have set it out more completely here.

      • Clive Best says:

        How do we know the drifter buoys are “correct”? Surely they can introduce a warming bias after say 2000 compared to older bucket data, especially if the normalisation period is unchanged.

        • I may be wrong (Nick can probably clarify) but I don’t think it’s that the drifter buoys are regarded as “correct”. As I understand it, there is a difference between the ship measurements and the buoy measurements. When you combine them, you should correct for this difference, or else you’ll introduce a bias. It doesn’t matter if you shift the ship measurements to the buoy measurements, or vice versa. However, since buoys are starting to dominate, it probably makes more sense to shift the ships to the buoys, than the buoys to the ships.

          • Clive Best says:

            The problem with that argument is that there were no drifter buoys before ~1990 so they do not affect the normalisation period or the early SST data. So in this case it would appear to boost only recent data.

            I just went swimming here in Italy. It’s a hot day and if I float vertically I estimate my shoulders are at least 5C warmer than my feet. Which is correct?

          • So in this case it would appear to boost only recent data.

            No, as I understand it, you would apply the correction across the whole time series (assuming you adjust the ships to the buoys).

          • Nick Stokes says:

            “It doesn’t matter if you shift the ship measurements to the buoy measurements, or vice versa. However, since buoys are starting to dominate, it probably makes more sense to shift the ships to the buoys, than the buoys to the ships.”
            Yes, it doesn’t matter, when taking anomalies. Arithmetically, the result is the same. The usual convention with adjustments is to adjust relative to present, as best convenient, since recent numbers are the most visited.

          • Clive Best says:

            Any buoy correction must be dependent on location and on season. Furthermore since bucket corrections have already been applied previously, then these too really need to be redone using exactly the same methodology. Is that the case?

          • Nick,
            Thanks. I should have made clear that I was referring to anomalies.

          • Nick Stokes says:

            “Any buoy correction must be dependent on location and on season.”
            I can’t see why. It is basically an instrument calibration. On some tens of thousands of occasions, it happened that a ship reading and a buoy reading were taken in near proximity. Buoy readings were on average 0.12C lower. That might be expected to vary with absolute temperature, but not otherwise with season. In fact, Kennedy’s paper did separate by oceans, but they were reasonably consistent.

            “these too really need to be redone”
            Why? This is one merit of adjusting relative to present. Old bucket ship readings were already adjusted relative to recent ship. Buoys can be adjusted relative to recent (ship) without disturbing that.
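The two points made in this exchange, that the offset is estimated from coincident ship/buoy pairs and that it does not matter which series is shifted once anomalies are taken, can both be sketched in a few lines (all numbers hypothetical apart from the 0.12 C figure quoted above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate the calibration offset from simulated coincident readings:
# buoys read on average 0.12 C cooler than nearby ships.
n = 5000
ship_pair = 20.0 + rng.normal(0.0, 0.3, n)
buoy_pair = ship_pair - 0.12 + rng.normal(0.0, 0.3, n)
offset = float(np.mean(ship_pair - buoy_pair))   # recovers roughly 0.12

def anomalies(series, baseline):
    """Anomalies relative to the mean over the baseline index slice."""
    return series - series[baseline].mean()

# Toy merged record: three ship years followed by three buoy years
ships = np.array([15.00, 15.10, 15.20])
buoys = np.array([15.10, 15.20, 15.30])   # already running 0.12 C cool

ships_to_buoys = np.concatenate([ships - offset, buoys])
buoys_to_ships = np.concatenate([ships, buoys + offset])

# The two conventions differ only by a constant shift, so their
# anomalies agree exactly.
baseline = slice(0, 3)
agree = np.allclose(anomalies(ships_to_buoys, baseline),
                    anomalies(buoys_to_ships, baseline))
```

The absolute level of the merged series does depend on which convention is used, but the anomaly record, which is what HadSST publishes, does not.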

  21. bindidon says:

    I would never think of defending Heller aka Goddard.

    I still remember too well his cheeky misinterpretations of the differences between GHCN unadjusted and adjusted data.

    But I do not understand Nick when he claims that GISS has no land data (or rather, had until GHCN V3 was abandoned in favor of V4).

    Here are the data I used for years:

    What is this strange difference between ‘land-only’ and ‘meteorological’ data?

    Of all the main time series that I compared to my (quite amateur) evaluation of GHCN daily, GISS ‘land-only’ was the best.

    The others (CRUTEM4, NOAA land, BEST land) had far higher trends for 1979-2018 (GISS: 0.22 °C/decade, NOAA: 0.29).

    As always, you only complain about the disappearance of the things you need.

    That Roy Spencer and Carl Mears retire their outdated time series, and that GHCN V3 makes room for V4, is only too understandable. But that GISTEMP makes its time series disappear is simply unacceptable.

    • Nick Stokes says:

      “What is this strange difference between ‘land-only’ and ‘meteorological’ data?”
      In an important sense, GISS does not “have” data at all, land or otherwise. They use GHCN etc data to calculate regional averages which they publish. In this case, they use (probably the same set of) land data to calculate estimates of two different things:
      1. Average over Land, as plotted in the graph linked by Paul Matthews.
      2. Average over globe, as indicated in the GLB.Ts index that you link

      I explain the difference in detail in a post here. Functionally, the difference is in weighting. For GLB.Ts, stations around the oceans, especially islands, are upweighted to represent large areas of ocean.

      Here is a plot of GISS Land (graph), versions from 2017 and 2019:

      And here is a plot of GISS GLB.Ts, also versions from 2017 and 2019:

      They are different, because they represent different regions. But Heller plotted the 2017 version of Ts and the 2019 version of Land and claimed the difference represented “tampering”.

      My comments at Heller’s post from nearly two weeks ago are here. No response, or retraction, to be seen.
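The weighting difference described in this comment can be made concrete with a toy example (cell layout and anomaly values entirely hypothetical):

```python
import numpy as np

# A toy globe of five equal-area cells: one continental cell with a
# station, one island cell with a station, and three ocean cells with
# no data at all.
land_anom, island_anom = 1.0, 0.4   # hypothetical station anomalies

# "Land" index: each station represents only its own cell
land_index = float(np.mean([land_anom, island_anom]))

# GLB.Ts-style index: the island station is upweighted to stand in
# for the three dataless ocean cells as well as its own
weights = np.array([1.0, 4.0])
glbts_index = float(np.average([land_anom, island_anom],
                               weights=weights))
```

Same station data, two different targets (average over land versus average over the globe), hence two legitimately different series; comparing one version of Ts against another version of Land conflates the two.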

Leave a Reply